
Wuyxin / ReFine

Licence: MIT License
Official code of "Towards Multi-Grained Explainability for Graph Neural Networks" (NeurIPS 2021)

Programming Languages

Jupyter Notebook
11667 projects
Python
139335 projects - #7 most used programming language
Shell
77523 projects

Projects that are alternatives of or similar to ReFine

stagin
STAGIN: Spatio-Temporal Attention Graph Isomorphism Network
Stars: ✭ 34 (-15%)
Mutual labels:  graph-neural-network, neurips2021
GNNs-in-Network-Neuroscience
A review of papers proposing novel GNN methods with application to brain connectivity published in 2017-2020.
Stars: ✭ 92 (+130%)
Mutual labels:  graph-neural-network
visdial-gnn
PyTorch code for Reasoning Visual Dialogs with Structural and Partial Observations
Stars: ✭ 39 (-2.5%)
Mutual labels:  graph-neural-network
PDN
The official PyTorch implementation of "Pathfinder Discovery Networks for Neural Message Passing" (WebConf '21)
Stars: ✭ 44 (+10%)
Mutual labels:  graph-neural-network
Social-Knowledge-Graph-Papers
A paper list of research about social knowledge graph
Stars: ✭ 27 (-32.5%)
Mutual labels:  graph-neural-network
Hyper-SAGNN
hypergraph representation learning, graph neural network
Stars: ✭ 53 (+32.5%)
Mutual labels:  graph-neural-network
SuperGAT
[ICLR 2021] How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision
Stars: ✭ 122 (+205%)
Mutual labels:  graph-neural-network
egnn-pytorch
Implementation of E(n)-Equivariant Graph Neural Networks, in Pytorch
Stars: ✭ 249 (+522.5%)
Mutual labels:  graph-neural-network
MixGCF
MixGCF: An Improved Training Method for Graph Neural Network-based Recommender Systems, KDD2021
Stars: ✭ 73 (+82.5%)
Mutual labels:  graph-neural-network
Graph Neural Net
Graph Convolutional Networks, Graph Attention Networks, Gated Graph Neural Net, Mixhop
Stars: ✭ 27 (-32.5%)
Mutual labels:  graph-neural-network
Awesome-Federated-Learning-on-Graph-and-GNN-papers
Federated learning on graph, especially on graph neural networks (GNNs), knowledge graph, and private GNN.
Stars: ✭ 206 (+415%)
Mutual labels:  graph-neural-network
Revisiting-Contrastive-SSL
Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
Stars: ✭ 81 (+102.5%)
Mutual labels:  neurips2021
GraphDeeSmartContract
Smart contract vulnerability detection using graph neural network (DR-GCN).
Stars: ✭ 84 (+110%)
Mutual labels:  graph-neural-network
GP-GNN
Code and dataset of ACL2019 Paper: Graph Neural Networks with Generated Parameters for Relation Extraction.
Stars: ✭ 52 (+30%)
Mutual labels:  graph-neural-network
GNN-Recommendation
Graduation project: research on heterogeneous graph representation learning and recommendation algorithms based on graph neural networks
Stars: ✭ 52 (+30%)
Mutual labels:  graph-neural-network
chemicalx
A PyTorch and TorchDrug based deep learning library for drug pair scoring.
Stars: ✭ 176 (+340%)
Mutual labels:  graph-neural-network
KERN
Code for Knowledge-Embedded Routing Network for Scene Graph Generation (CVPR 2019)
Stars: ✭ 99 (+147.5%)
Mutual labels:  graph-neural-network
Knowledge Graph based Intent Network
Learning Intents behind Interactions with Knowledge Graph for Recommendation, WWW2021
Stars: ✭ 116 (+190%)
Mutual labels:  graph-neural-network
DIG
A library for graph deep learning research
Stars: ✭ 1,078 (+2595%)
Mutual labels:  graph-neural-network
RIB
Reducing Information Bottleneck for Weakly Supervised Semantic Segmentation (NeurIPS 2021)
Stars: ✭ 40 (+0%)
Mutual labels:  neurips2021

ReFine: Multi-Grained Explainability for GNNs

This is the official code for Towards Multi-Grained Explainability for Graph Neural Networks (NeurIPS 2021). In addition, we provide highly modularized explainers for graph classification tasks, some of which are adapted from the image domain. Below is a summary (a minimal usage sketch follows the table):

Explainer | Paper
ReFine | Towards Multi-Grained Explainability for Graph Neural Networks
SA | Explainability Techniques for Graph Convolutional Networks
Grad-CAM | Explainability Methods for Graph Convolutional Neural Networks
DeepLIFT | Learning Important Features Through Propagating Activation Differences
Integrated Gradients | Axiomatic Attribution for Deep Networks
GNNExplainer | GNNExplainer: Generating Explanations for Graph Neural Networks
PGExplainer | Parameterized Explainer for Graph Neural Network
PGM-Explainer | PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks
Screener | Causal Screening to Interpret Graph Neural Networks
CXPlain | CXPlain: Causal Explanations for Model Interpretation under Uncertainty
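
All explainers in the table are used through the same two steps shown later in this README: construct the explainer with a device and the path to a trained GNN, then call explain_graph on a test graph. A minimal sketch along those lines, assuming the ba3 dataset and the bundled checkpoint in param/gnns (the exact keyword arguments differ per explainer):

import torch
from torch_geometric.data import DataLoader
from explainers import *                 # e.g. GNNExplainer, Screener, ...
from utils.dataset import get_datasets

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
_, _, test_dataset = get_datasets(name='ba3')
g = next(iter(DataLoader(test_dataset, batch_size=1)))      # a single test graph

gnn_explainer = GNNExplainer(device, 'param/gnns/ba3_net.pt')
gnn_explainer.explain_graph(g, epochs=100, lr=1e-2)         # importance scores land in .last_result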

Installation

Requirements

  • CPU or NVIDIA GPU, Linux, Python 3.7
  • PyTorch >= 1.5.0, plus the packages below

  1. PyTorch Geometric (Official Download). A quick environment check is sketched at the end of this section.
# We use TORCH version 1.6.0
CUDA=cu102
TORCH=1.6.0 
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html 
pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
pip install torch-geometric==1.7.0
  2. Visual Genome (optional). Google Drive Download. This is used for preprocessing the VG-5 dataset and for visualizing the generated explanations. Download it manually into the same directory as data. (The dataset can also be accessed through an API, but we found that slow.) The other datasets run without this download.

  3. Other packages

# logging, pathlib, argparse, and json are part of Python 3.7's standard library
pip install tqdm matplotlib pgmpy==0.1.11
# For visualization (optional) 
conda install -c conda-forge rdkit
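
To confirm the environment matches the versions assumed above, a quick check from a Python shell:

import torch
import torch_geometric

print("torch:", torch.__version__)                        # 1.6.0 in the setup above
print("CUDA build:", torch.version.cuda)                  # 10.2 (cu102) in the setup above
print("CUDA available:", torch.cuda.is_available())
print("torch_geometric:", torch_geometric.__version__)    # 1.7.0 in the setup above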

Datasets

  1. The processed raw data for BA-3motif is available in the data/ folder.
  2. The MNIST and Mutagenicity datasets will be downloaded automatically when training the models.
  3. We select and label 4443 graphs from https://visualgenome.org/ to construct the VG-5 dataset. The graphs are labeled with five classes: stadium, street, farm, surfing, forest. Each graph contains object regions as nodes, while edges indicate the relationships between the object nodes. Download the dataset from Google Drive and arrange the directory as follows (a quick layout check is sketched at the end of this section):
data
 |--- BA3
 |--- VG
       |--- raw

Please also cite Visual Genome (bibtex) if you use this dataset.
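
Before training, a quick sanity check that the layout above is in place (BA3 ships with the repository; VG/raw comes from the Google Drive download):

from pathlib import Path

for sub in ("BA3", "VG/raw"):
    path = Path("data") / sub
    print(f"{path}: {'found' if path.exists() else 'missing'}")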

Train GNNs

We provide the trained GNNs in param/gnns for reproducing the results in our paper. To retrain the GNNs, run

cd gnns/
bash run.sh

The trained GNNs will be saved in param/gnns.
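
To see which checkpoints are available, a small sketch listing param/gnns (the {name}_net.pt files are the gnn_path values used in the explanation steps below):

from pathlib import Path

# Checkpoints shipped with the repo or produced by run.sh
for ckpt in sorted(Path("param/gnns").glob("*.pt")):
    print(ckpt.name)   # e.g. ba3_net.pt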

Explaining the Predictions

  1. For global training of PGExplainer and ReFine, run
cd train/
bash run.sh
  2. Load the datasets
from utils.dataset import get_datasets
from torch_geometric.data import DataLoader

name = 'ba3'
train_dataset, val_dataset, test_dataset = get_datasets(name=name)
test_loader = DataLoader(test_dataset, batch_size=1)
  3. Instantiate the explainer
import torch
from explainers import *

device = torch.device("cuda")
gnn_path = f'param/gnns/{name}_net.pt'

refine = torch.load(f'param/refine/{name}.pt') # load pretrained
refine.remap_device(device)
  4. Explain
for g in test_loader:
    refine.explain_graph(g, fine_tune=True,
                         ratio=0.4, lr=1e-4, epoch=20)

For baseline explainers, e.g.,

gnn_explainer = GNNExplainer(device, gnn_path)
gnn_explainer.explain_graph(g, epochs=100, lr=1e-2)

screener = Screener(device, gnn_path)
screener.explain_graph(g)
  5. Evaluation & Visualization

Evaluation and visualization are universal across explainers: after explaining a single graph, the pair (graph, edge_imp: np.ndarray) is saved as explainer.last_result by default, and is then evaluated or visualized. The same calls therefore apply to the baseline explainers (a sketch follows the snippet below).

ratios = [0.1 * i for i in range(1, 11)]
acc_auc = refine.evaluate_acc(ratios).mean()
recall = refine.evaluate_recall(topk=5)
refine.visualize(vis_ratio=0.3) # visualize the explanation
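
Because the evaluation API is shared, the same calls work on the baseline explainers once they have explained a graph, e.g. reusing the gnn_explainer instance from the previous step:

ratios = [0.1 * i for i in range(1, 11)]
print(gnn_explainer.evaluate_acc(ratios).mean())
print(gnn_explainer.evaluate_recall(topk=5))
gnn_explainer.visualize(vis_ratio=0.3)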

To evaluate ReFine-FT and ReFine on the test datasets, run

python evaluate.py --dataset ba3

The results will be written to results/ba3_results.json, where ReFine-FT.ACC-AUC (ReFine-FT.Recall@5) and ReFine.ACC-AUC (ReFine.Recall@5) report the performance of ReFine-FT and ReFine, respectively.
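
A small sketch for reading those numbers back out of the file; the exact JSON keys are an assumption based on the metric names above:

import json

with open("results/ba3_results.json") as f:
    results = json.load(f)

# Assumed keys, mirroring the metric names reported above
for key in ("ReFine-FT.ACC-AUC", "ReFine-FT.Recall@5", "ReFine.ACC-AUC", "ReFine.Recall@5"):
    print(key, results.get(key))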

Citation

Please cite our paper if you find the repository useful.

@inproceedings{wx2021refine,
  title={Towards Multi-Grained Explainability for Graph Neural Networks},
  author={Wang, Xiang and Wu, Ying-Xin and Zhang, An and He, Xiangnan and Chua, Tat-Seng},
  booktitle={Proceedings of the 35th Conference on Neural Information Processing Systems},
  year={2021} 
}