
WilliamCCHuang / GraphLIME

License: MIT
This is a PyTorch implementation of GraphLIME.

Projects that are alternatives of or similar to GraphLIME

RioGNN
Reinforced Neighborhood Selection Guided Multi-Relational Graph Neural Networks
Stars: ✭ 46 (+15%)
Mutual labels:  graph-algorithms, graph-neural-networks
Literatures-on-GNN-Acceleration
A reading list for deep graph learning acceleration.
Stars: ✭ 50 (+25%)
Mutual labels:  graph-algorithms, graph-neural-networks
awesome-graph-explainability-papers
Papers about explainability of GNNs
Stars: ✭ 153 (+282.5%)
Mutual labels:  explainable-ai, graph-neural-networks
ASAP
AAAI 2020 - ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations
Stars: ✭ 83 (+107.5%)
Mutual labels:  graph-algorithms, graph-neural-networks
Graph-Algorithms
Everything you need to know about graph theory to ace a technical interview 🔥
Stars: ✭ 87 (+117.5%)
Mutual labels:  graph-algorithms
TeamReference
Team reference for Competitive Programming. Algorithms implementations very used in the ACM-ICPC contests. Latex template to build your own team reference.
Stars: ✭ 29 (-27.5%)
Mutual labels:  graph-algorithms
ShapleyExplanationNetworks
Implementation of the paper "Shapley Explanation Networks"
Stars: ✭ 62 (+55%)
Mutual labels:  explainable-ai
swap
A Solver for the Wavelength Assignment Problem (RWA) in WDM networks
Stars: ✭ 27 (-32.5%)
Mutual labels:  graph-algorithms
Meta-GDN AnomalyDetection
Implementation of TheWebConf 2021 -- Few-shot Network Anomaly Detection via Cross-network Meta-learning
Stars: ✭ 22 (-45%)
Mutual labels:  graph-neural-networks
zennit
Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.
Stars: ✭ 57 (+42.5%)
Mutual labels:  explainable-ai
Belief-Propagation
Overview and implementation of Belief Propagation and Loopy Belief Propagation algorithms: sum-product, max-product, max-sum
Stars: ✭ 85 (+112.5%)
Mutual labels:  graph-algorithms
eeg-gcnn
Resources for the paper titled "EEG-GCNN: Augmenting Electroencephalogram-based Neurological Disease Diagnosis using a Domain-guided Graph Convolutional Neural Network". Accepted for publication (with an oral spotlight!) at ML4H Workshop, NeurIPS 2020.
Stars: ✭ 50 (+25%)
Mutual labels:  graph-neural-networks
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Stars: ✭ 47 (+17.5%)
Mutual labels:  explainable-ai
gnn-lspe
Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
Stars: ✭ 165 (+312.5%)
Mutual labels:  graph-neural-networks
mage
MAGE - Memgraph Advanced Graph Extensions 🔮
Stars: ✭ 89 (+122.5%)
Mutual labels:  graph-algorithms
cognipy
In-memory Graph Database and Knowledge Graph with Natural Language Interface, compatible with Pandas
Stars: ✭ 31 (-22.5%)
Mutual labels:  graph-algorithms
KGPool
[ACL 2021] KGPool: Dynamic Knowledge Graph Context Selection for Relation Extraction
Stars: ✭ 33 (-17.5%)
Mutual labels:  graph-neural-networks
graphtrans
Representing Long-Range Context for Graph Neural Networks with Global Attention
Stars: ✭ 45 (+12.5%)
Mutual labels:  graph-neural-networks
position-rank
PositionRank: An Unsupervised Approach to Keyphrase Extraction from Scholarly Documents
Stars: ✭ 89 (+122.5%)
Mutual labels:  graph-algorithms
GeometricFlux.jl
Geometric Deep Learning for Flux
Stars: ✭ 288 (+620%)
Mutual labels:  graph-neural-networks

GraphLIME

GraphLIME is a model-agnostic, local, and nonlinear explanation method for GNNs in node classification tasks. It uses the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, a nonlinear interpretable model. More details can be found in the paper.
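
As a rough sketch of the idea (not the code this repo ships): HSIC Lasso builds one centered, normalized kernel matrix per feature, then solves a nonnegative Lasso against a kernel over the model's outputs. The coefficients β are the feature importances. All names, kernel choices, and normalization details below are illustrative, assuming NumPy and scikit-learn.

```python
import numpy as np
from sklearn.linear_model import Lasso

def gaussian_kernel(x, sigma=1.0):
    # pairwise Gaussian kernel matrix for a single 1-D feature column
    d = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d / (2 * sigma ** 2))

def hsic_lasso(X, y_kernel, rho=0.1):
    # X: (n, d) features of the sampled neighborhood
    # y_kernel: (n, n) kernel over the model's outputs
    n, d = X.shape
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    L = H @ y_kernel @ H
    L = L / (np.linalg.norm(L) + 1e-10)
    # one centered, normalized kernel per feature
    K = np.stack([H @ gaussian_kernel(X[:, j]) @ H for j in range(d)])
    K = K / (np.linalg.norm(K, axis=(1, 2), keepdims=True) + 1e-10)
    # vectorize the kernels and solve a nonnegative Lasso for beta
    K_flat = K.reshape(d, -1).T                    # (n*n, d)
    L_flat = L.reshape(-1)
    lasso = Lasso(alpha=rho, fit_intercept=False, positive=True)
    lasso.fit(K_flat, L_flat)
    return lasso.coef_                             # one importance per feature
```

Because the Lasso is constrained to nonnegative coefficients, features with zero weight are treated as irrelevant to the prediction.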

This repo implements GraphLIME using the awesome GNN library PyTorch Geometric. So far, it reproduces the results of filtering out useless features, i.e., Figure 3 in the paper.

Notice

Please note that this is not the official implementation.

Installation

Just use pip to install.

> pip install graphlime

Usage

This implementation is easy to use. All you need to do is confirm that your model outputs log probabilities (for example, via F.log_softmax()), instantiate a GraphLIME object, and explain a specific node by calling its explain_node() method.

from graphlime import GraphLIME

data = ...   # a `torch_geometric.data.Data` object
model = ...  # any GNN model that outputs log probabilities
node_idx = 0  # index of the node to be explained

# `hop` controls the size of the sampled neighborhood;
# `rho` is the regularization strength of the HSIC Lasso
explainer = GraphLIME(model, hop=2, rho=0.1)
coefs = explainer.explain_node(node_idx, data.x, data.edge_index)

coefs are the coefficients of the features. They correspond to the 𝜷 in the paper. The larger a value is, the more important the corresponding feature is.
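
To inspect the explanation, you can rank the features by their coefficients. A minimal sketch, using a made-up coefs array in place of the real output of explain_node():

```python
import numpy as np

coefs = np.array([0.0, 0.42, 0.0, 0.13, 0.05])  # e.g. output of explain_node()
top_k = 3  # number of features to inspect
order = np.argsort(coefs)[::-1][:top_k]
for rank, j in enumerate(order, start=1):
    print(f"#{rank}: feature {j} (beta = {coefs[j]:.4f})")
```

Features with a coefficient of zero were discarded by the Lasso and can be read as irrelevant to this node's prediction.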

Tutorial

Example and details can be found in tutorial.ipynb.

Reproduce

All scripts of different experiments are in the scripts folder. You can reproduce the results by running the following command:

> sh scripts/noise_features_cora.sh

Results

Filtering Useless Features

We add 10 random noise features to the original features and then train a GNN model. We use several explanation methods to find out which features are important to the model, and plot the distribution of the noise features each method selects. Fewer selected noise features are better; that is, a peak around the origin indicates a good explanation method.
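
The noise-feature setup can be sketched as follows; the helper name is hypothetical (the actual experiment code lives in the scripts folder), but the idea is just appending random columns to the node feature matrix:

```python
import torch

def add_noise_features(x, num_noise=10, seed=0):
    # append `num_noise` standard-normal columns to the (num_nodes, num_features)
    # node feature matrix; a seeded generator keeps runs reproducible
    g = torch.Generator().manual_seed(seed)
    noise = torch.randn(x.size(0), num_noise, generator=g)
    return torch.cat([x, noise], dim=1)

# e.g. data.x = add_noise_features(data.x) before training the GNN
```

Since the noise columns carry no signal, a faithful explanation method should almost never rank them among the important features.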

Requirements

  • python >= 3.6
  • torch >= 1.6.0
  • torch-geometric 1.6.0
  • numpy 1.17.2
  • scikit-learn 0.21.3
  • seaborn 0.10.1
  • matplotlib 2.2.4

Changelog

  • 2021-02-13: added GNNExplainer to the noise-features experiment
  • 2021-02-05: added a tutorial file
  • 2021-01-10: modified the file structure & published to PyPI as version 1.2.0
  • 2020-11-26: fixed a bug
  • 2020-11-09: fixed a bug
  • 2020-10-08: modified the file structure for publishing to PyPI
  • 2020-05-26: first upload