dongkwan-kim / SuperGAT

Licence: other
[ICLR 2021] How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision

Programming Languages

  • python
  • shell

Projects that are alternatives to or similar to SuperGAT

AC-VRNN
PyTorch code for CVIU paper "AC-VRNN: Attentive Conditional-VRNN for Multi-Future Trajectory Prediction"
Stars: ✭ 21 (-82.79%)
Mutual labels:  graph-neural-networks
Entity-Graph-VLN
Code of the NeurIPS 2021 paper: Language and Visual Entity Relationship Graph for Agent Navigation
Stars: ✭ 34 (-72.13%)
Mutual labels:  graph-neural-networks
BGCN
A Tensorflow implementation of "Bayesian Graph Convolutional Neural Networks" (AAAI 2019).
Stars: ✭ 129 (+5.74%)
Mutual labels:  graph-neural-networks
EgoCNN
Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019)
Stars: ✭ 16 (-86.89%)
Mutual labels:  graph-neural-networks
SubGNN
Subgraph Neural Networks (NeurIPS 2020)
Stars: ✭ 136 (+11.48%)
Mutual labels:  graph-neural-networks
pyg autoscale
Implementation of "GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings" in PyTorch
Stars: ✭ 136 (+11.48%)
Mutual labels:  graph-neural-networks
DCGCN
Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning (authors' MXNet implementation for the TACL19 paper)
Stars: ✭ 73 (-40.16%)
Mutual labels:  graph-neural-networks
GNN-Recommender-Systems
An index of recommendation algorithms that are based on Graph Neural Networks.
Stars: ✭ 505 (+313.93%)
Mutual labels:  graph-neural-networks
ASAP
AAAI 2020 - ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations
Stars: ✭ 83 (-31.97%)
Mutual labels:  graph-neural-networks
mdgrad
PyTorch differentiable molecular dynamics
Stars: ✭ 127 (+4.1%)
Mutual labels:  graph-neural-networks
Social-Recommendation
Summary of social recommendation papers and codes
Stars: ✭ 143 (+17.21%)
Mutual labels:  graph-neural-networks
MixGCF
MixGCF: An Improved Training Method for Graph Neural Network-based Recommender Systems, KDD2021
Stars: ✭ 73 (-40.16%)
Mutual labels:  graph-neural-network
SiGAT
source code for signed graph attention networks (ICANN2019) & SDGNN (AAAI2021)
Stars: ✭ 37 (-69.67%)
Mutual labels:  graph-neural-networks
visual-compatibility
Context-Aware Visual Compatibility Prediction (https://arxiv.org/abs/1902.03646)
Stars: ✭ 92 (-24.59%)
Mutual labels:  graph-neural-networks
awesome-graph-self-supervised-learning
Awesome Graph Self-Supervised Learning
Stars: ✭ 805 (+559.84%)
Mutual labels:  graph-neural-networks
SimP-GCN
Implementation of the WSDM 2021 paper "Node Similarity Preserving Graph Convolutional Networks"
Stars: ✭ 43 (-64.75%)
Mutual labels:  graph-neural-networks
Introduction-to-Deep-Learning-and-Neural-Networks-Course
Code snippets and solutions for the Introduction to Deep Learning and Neural Networks Course hosted in educative.io
Stars: ✭ 33 (-72.95%)
Mutual labels:  graph-neural-networks
Graph-Embeddding
Reimplementation of graph embedding methods in PyTorch.
Stars: ✭ 113 (-7.38%)
Mutual labels:  graph-neural-networks
GAug
AAAI'21: Data Augmentation for Graph Neural Networks
Stars: ✭ 139 (+13.93%)
Mutual labels:  graph-neural-networks
egnn-pytorch
Implementation of E(n)-Equivariant Graph Neural Networks, in PyTorch
Stars: ✭ 249 (+104.1%)
Mutual labels:  graph-neural-network

SuperGAT

Official implementation of Self-supervised Graph Attention Networks (SuperGAT). This model is presented in the paper How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision, published at the International Conference on Learning Representations (ICLR), 2021.

Open Source & Maintenance

  • The documented SuperGATConv layer with an example has been merged into PyTorch Geometric's main branch (a minimal usage sketch is shown after this list).
  • The RandomPartitionGraph dataset is now available in PyTorch Geometric.
  • This repository is based on torch==1.4.0+cu100 and torch-geometric==1.4.3, which are somewhat outdated at this point (Feb 2021). If you are using a recent PyTorch/CUDA/PyG stack, we recommend using PyG's implementation. If you want to run the code in this repository, please follow the Installation section below.
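
Below is a minimal sketch of how the merged SuperGATConv layer can be used with a recent PyTorch Geometric release. It loosely follows PyG's documented example; the hyperparameters shown are illustrative, not the exact settings used in this repository.

import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import SuperGATConv

dataset = Planetoid(root="/tmp/Cora", name="Cora")
data = dataset[0]

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Two SuperGAT layers with the MX (mixed) attention type.
        self.conv1 = SuperGATConv(dataset.num_features, 8, heads=8,
                                  dropout=0.6, attention_type='MX',
                                  edge_sample_ratio=0.8, is_undirected=True)
        self.conv2 = SuperGATConv(8 * 8, dataset.num_classes, heads=8,
                                  concat=False, dropout=0.6, attention_type='MX',
                                  edge_sample_ratio=0.8, is_undirected=True)

    def forward(self, x, edge_index):
        x = F.dropout(x, p=0.6, training=self.training)
        x = F.elu(self.conv1(x, edge_index))
        # Self-supervised (link-prediction) attention loss of the first layer.
        att_loss = self.conv1.get_attention_loss()
        x = F.dropout(x, p=0.6, training=self.training)
        x = self.conv2(x, edge_index)
        att_loss += self.conv2.get_attention_loss()
        return F.log_softmax(x, dim=-1), att_loss

model = Net()
out, att_loss = model(data.x, data.edge_index)
# The attention loss is added to the task loss with a weighting coefficient.
loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask]) + 4.0 * att_loss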

BibTeX

@inproceedings{
    kim2021how,
    title={How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision},
    author={Dongkwan Kim and Alice Oh},
    booktitle={International Conference on Learning Representations},
    year={2021},
    url={https://openreview.net/forum?id=Wi5KUNlqWty}
}

Installation

# In SuperGAT/
bash install.sh ${CUDA, default is cu100}
  • If you have any trouble installing PyTorch Geometric, please install PyG's dependencies manually.
  • The code is tested with Python 3.7.6 and the nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04 image.
  • PyG's FAQ might be helpful.

Basics

  • The main train/test code is in SuperGAT/main.py.
  • If you want to see the SuperGAT layer written in PyTorch Geometric's MessagePassing grammar, refer to SuperGAT/layer.py (a generic sketch of this pattern follows this list).
  • If you want to see hyperparameter settings, refer to SuperGAT/args.yaml and SuperGAT/arguments.py.
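
As a rough guide to reading SuperGAT/layer.py, the following is a generic sketch of PyTorch Geometric's MessagePassing pattern: messages are computed per edge in message() and then aggregated per target node. It is a toy attention-style layer for illustration only; ToyAttentionConv and all of its internals are hypothetical and are not the repository's actual implementation.

import torch
import torch.nn.functional as F
from torch_geometric.nn import MessagePassing
from torch_geometric.utils import softmax

class ToyAttentionConv(MessagePassing):  # hypothetical illustration, not SuperGAT/layer.py
    def __init__(self, in_channels, out_channels):
        super().__init__(aggr='add')  # sum messages over each node's incoming edges
        self.lin = torch.nn.Linear(in_channels, out_channels, bias=False)
        self.att = torch.nn.Parameter(torch.randn(2 * out_channels))

    def forward(self, x, edge_index):
        x = self.lin(x)
        # propagate() calls message() for every edge, then aggregates per target node.
        return self.propagate(edge_index, x=x)

    def message(self, x_i, x_j, index, size_i):
        # x_i / x_j are the target / source node features of each edge.
        # Compute attention logits, normalize over each target's neighborhood,
        # and weight the source messages accordingly.
        alpha = F.leaky_relu((torch.cat([x_i, x_j], dim=-1) * self.att).sum(dim=-1))
        alpha = softmax(alpha, index, num_nodes=size_i)
        return x_j * alpha.unsqueeze(-1)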

Run

python3 SuperGAT/main.py \
    --dataset-class Planetoid \
    --dataset-name Cora \
    --custom-key EV13NSO8-ES
 
...

## RESULTS SUMMARY ##
best_test_perf: 0.853 +- 0.003
best_test_perf_at_best_val: 0.851 +- 0.004
best_val_perf: 0.825 +- 0.003
test_perf_at_best_val: 0.849 +- 0.004
## RESULTS DETAILS ##
best_test_perf: [0.851, 0.853, 0.857, 0.852, 0.858, 0.852, 0.847]
best_test_perf_at_best_val: [0.851, 0.849, 0.855, 0.852, 0.858, 0.848, 0.844]
best_val_perf: [0.82, 0.824, 0.83, 0.826, 0.828, 0.824, 0.822]
test_perf_at_best_val: [0.851, 0.844, 0.853, 0.849, 0.857, 0.848, 0.844]
Time for runs (s): 173.85422565042973

The default setting is 7 runs with different random seeds. If you want to change this number, change num_total_runs in the main block of SuperGAT/main.py.

For ogbn-arxiv, use SuperGAT/main_ogb.py.

GPU Setting

There are three arguments for GPU settings (--num-gpus-total, --num-gpus-to-use, --gpu-deny-list). The default values reflect the author's machine, so we recommend modifying them in SuperGAT/args.yaml or via the command line.

  • --num-gpus-total (default 4): The total number of GPUs in your machine.
  • --num-gpus-to-use (default 1): The number of GPUs you want to use.
  • --gpu-deny-list (default: [1, 2, 3]): The IDs of the GPUs you do not want to use.

If you have four GPUs and want to use only the first one (cuda:0):

python3 SuperGAT/main.py \
    --dataset-class Planetoid \
    --dataset-name Cora \
    --custom-key EV13NSO8-ES \
    --num-gpus-total 4 \
    --gpu-deny-list 1 2 3

Model (--model-name)

Type          Model name
GCN           GCN
GraphSAGE     SAGE
GAT           GAT
SuperGATGO    GAT
SuperGATDP    GAT
SuperGATSD    GAT
SuperGATMX    GAT

Dataset (--dataset-class, --dataset-name)

Dataset class             Dataset name
Planetoid                 Cora
Planetoid                 CiteSeer
Planetoid                 PubMed
PPI                       PPI
WikiCS                    WikiCS
WebKB4Univ                WebKB4Univ
MyAmazon                  Photo
MyAmazon                  Computers
PygNodePropPredDataset    ogbn-arxiv
MyCoauthor                CS
MyCoauthor                Physics
MyCitationFull            Cora_ML
MyCitationFull            CoraFull
MyCitationFull            DBLP
Crocodile                 Crocodile
Chameleon                 Chameleon
Flickr                    Flickr

Custom Key (--custom-key)

Type          Custom key (General)    Custom key (for PubMed)    Custom key (for ogbn-arxiv)
SuperGATGO    EV1O8-ES                EV1-500-ES                 -
SuperGATDP    EV2O8-ES                EV2-500-ES                 -
SuperGATSD    EV3O8-ES                EV3-500-ES                 EV3-ES
SuperGATMX    EV13NSO8-ES             EV13NSO8-500-ES            EV13NS-ES
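
For example, to run SuperGATMX on PubMed, combine the dataset and custom-key tables above; a command along these lines should work:

python3 SuperGAT/main.py \
    --dataset-class Planetoid \
    --dataset-name PubMed \
    --custom-key EV13NSO8-500-ES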

Other Hyperparameters

See SuperGAT/args.yaml or run $ python3 SuperGAT/main.py --help.
