ChandlerBang / Pro-GNN

Licence: other
Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"

Programming Languages

python
shell

Projects that are alternatives of or similar to Pro-GNN

TIGER
Python toolbox to evaluate graph vulnerability and robustness (CIKM 2021)
Stars: ✭ 103 (-49.01%)
Mutual labels:  defense, graph-mining, adversarial-attacks
SimP-GCN
Implementation of the WSDM 2021 paper "Node Similarity Preserving Graph Convolutional Networks"
Stars: ✭ 43 (-78.71%)
Mutual labels:  graph-mining, adversarial-attacks, graph-neural-networks
walklets
A lightweight implementation of Walklets from "Don't Walk, Skip! Online Learning of Multi-scale Network Embeddings" (ASONAM 2017).
Stars: ✭ 94 (-53.47%)
Mutual labels:  graph-mining, graph-neural-networks
well-classified-examples-are-underestimated
Code for the AAAI 2022 publication "Well-classified Examples are Underestimated in Classification with Deep Neural Networks"
Stars: ✭ 21 (-89.6%)
Mutual labels:  adversarial-attacks, graph-neural-networks
grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of Graph Machine Learning.
Stars: ✭ 70 (-65.35%)
Mutual labels:  adversarial-attacks, graph-neural-networks
awesome-graph-explainability-papers
Papers about explainability of GNNs
Stars: ✭ 153 (-24.26%)
Mutual labels:  graph-mining, graph-neural-networks
SelfTask-GNN
Implementation of the paper "Self-supervised Learning on Graphs: Deep Insights and New Directions"
Stars: ✭ 78 (-61.39%)
Mutual labels:  graph-mining, graph-neural-networks
DiGCN
Implementation of DiGCN (NeurIPS 2020)
Stars: ✭ 25 (-87.62%)
Mutual labels:  semi-supervised-learning, graph-neural-networks
3DInfomax
Making self-supervised learning work on molecules by using their 3D geometry to pre-train GNNs. Implemented in DGL and Pytorch Geometric.
Stars: ✭ 107 (-47.03%)
Mutual labels:  graph-neural-networks
pywsl
Python codes for weakly-supervised learning
Stars: ✭ 118 (-41.58%)
Mutual labels:  semi-supervised-learning
EAD Attack
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Stars: ✭ 34 (-83.17%)
Mutual labels:  defense
ST-PlusPlus
[CVPR 2022] ST++: Make Self-training Work Better for Semi-supervised Semantic Segmentation
Stars: ✭ 168 (-16.83%)
Mutual labels:  semi-supervised-learning
satellite-placement
Group satellites into constellations such that their average observation coverage is maximized
Stars: ✭ 20 (-90.1%)
Mutual labels:  defense
disentangled graph collaborative filtering
Disentangled Graph Collaborative Filtering (SIGIR 2020)
Stars: ✭ 118 (-41.58%)
Mutual labels:  graph-neural-networks
GalaXC
GalaXC: Graph Neural Networks with Labelwise Attention for Extreme Classification
Stars: ✭ 28 (-86.14%)
Mutual labels:  graph-neural-networks
graph-neural-networks-for-drug-discovery
odr.chalmers.se/handle/20.500.12380/256629?locale=en
Stars: ✭ 78 (-61.39%)
Mutual labels:  graph-neural-networks
MCS2018 Solution
No description or website provided.
Stars: ✭ 16 (-92.08%)
Mutual labels:  adversarial-attacks
GNNLens2
Visualization tool for Graph Neural Networks
Stars: ✭ 155 (-23.27%)
Mutual labels:  graph-neural-networks
realistic-ssl-evaluation-pytorch
Reimplementation of "Realistic Evaluation of Deep Semi-Supervised Learning Algorithms"
Stars: ✭ 79 (-60.89%)
Mutual labels:  semi-supervised-learning
demo-routenet
Demo of RouteNet in ACM SIGCOMM'19
Stars: ✭ 79 (-60.89%)
Mutual labels:  graph-neural-networks

Pro-GNN

A PyTorch implementation of "Graph Structure Learning for Robust Graph Neural Networks" (KDD 2020). [paper] [slides]

The code is based on our PyTorch adversarial attack-and-defense repository, DeepRobust (https://github.com/DSE-MSU/DeepRobust).

Abstract

Graph Neural Networks (GNNs) are powerful tools for representation learning on graphs. However, recent studies show that GNNs are vulnerable to carefully crafted perturbations, called adversarial attacks, which can easily fool GNNs into making wrong predictions for downstream tasks. This vulnerability has raised increasing concerns about applying GNNs in safety-critical applications, so developing robust algorithms to defend against adversarial attacks is of great significance. A natural idea for defending against adversarial attacks is to clean the perturbed graph. Real-world graphs share some intrinsic properties: for example, many are low-rank and sparse, and the features of two adjacent nodes tend to be similar. In fact, we find that adversarial attacks are likely to violate these properties. Therefore, in this paper, we exploit these properties to defend against adversarial attacks on graphs. In particular, we propose a general framework, Pro-GNN, which jointly learns a clean graph structure and a robust graph neural network model from the perturbed graph, guided by these properties. Extensive experiments on real-world graphs demonstrate that the proposed framework achieves significantly better performance than state-of-the-art defense methods, even when the graph is heavily perturbed.
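
To make the idea concrete, the sketch below paraphrases this joint objective in code. It is an illustrative sketch only, not the authors' implementation: the three regularizers mirror the properties named above (sparsity, low rank, feature smoothness), and the function name and the coefficients alpha, beta, and lam are placeholder assumptions rather than the paper's tuned values.

import torch

def prognn_style_loss(S, A, X, gnn_task_loss, alpha=5e-4, beta=1.5, lam=1.0):
    # S: learned adjacency (n x n), A: observed (perturbed) adjacency (n x n),
    # X: node features (n x d), gnn_task_loss: loss of a GNN trained on S.
    recon = torch.norm(S - A, p='fro') ** 2          # stay close to the observed graph
    sparsity = S.abs().sum()                         # l1 norm encourages a sparse structure
    low_rank = torch.norm(S, p='nuc')                # nuclear norm encourages low rank
    laplacian = torch.diag(S.sum(1)) - S             # (unnormalized) graph Laplacian of S
    smoothness = torch.trace(X.t() @ laplacian @ X)  # adjacent nodes keep similar features
    return gnn_task_loss + recon + alpha * sparsity + beta * low_rank + lam * smoothness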

Requirements

See https://github.com/DSE-MSU/DeepRobust/blob/master/requirements.txt:

matplotlib==3.1.1
numpy==1.17.1
torch==1.2.0
scipy==1.3.1
torchvision==0.4.0
texttable==1.6.2
networkx==2.4
numba==0.48.0
Pillow==7.0.0
scikit_learn==0.22.1
skimage==0.0
tensorboardX==2.0
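
If you want to install these pinned versions directly (assuming you have saved the list above as requirements.txt), you can run

pip install -r requirements.txt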

Installation

To run the code, first you need to install DeepRobust:

pip install deeprobust

Or you can clone it and install it from source:

git clone https://github.com/DSE-MSU/DeepRobust.git
cd DeepRobust
python setup.py install
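
To sanity-check the installation, you can try loading one of the datasets that DeepRobust downloads on demand (a minimal check; the root path below is just an arbitrary cache directory):

from deeprobust.graph.data import Dataset

data = Dataset(root='/tmp/', name='cora')  # downloads Cora on first use
adj, features, labels = data.adj, data.features, data.labels
print(adj.shape, features.shape, labels.shape)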

Run the code

After installation, you can clone this repository and run the training script:

git clone https://github.com/ChandlerBang/Pro-GNN.git
cd Pro-GNN
python train.py --dataset polblogs --attack meta --ptb_rate 0.15 --epoch 1000

Reproduce the results

All the hyper-parameter settings are included in the scripts folder. Note that the same hyper-parameters are used under different perturbation rates for the same dataset.

To reproduce the performance reported in the paper, you can run the bash files in the scripts folder, for example:

sh scripts/meta/cora_meta.sh

To test performance under different attack severities, you can change the ptb_rate in those bash files; see the example below.
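
For example, keeping every other flag from the command above and only raising the perturbation rate:

python train.py --dataset polblogs --attack meta --ptb_rate 0.25 --epoch 1000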

Generate attacks by yourself

With the help of DeepRobust, you can run the following command to generate a meta attack (Metattack):

python generate_attack.py --dataset cora --ptb_rate 0.05 --seed 15
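
If you prefer to script the attack yourself, the sketch below follows DeepRobust's Metattack workflow (a sketch based on the library's documented API; the surrogate settings and the 5% edge-perturbation budget are illustrative choices, not the paper's exact setup):

import numpy as np
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN
from deeprobust.graph.global_attack import Metattack

data = Dataset(root='/tmp/', name='cora')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
idx_unlabeled = np.union1d(idx_val, idx_test)

# Train a simple surrogate model for the attacker to exploit.
surrogate = GCN(nfeat=features.shape[1], nhid=16, nclass=labels.max().item() + 1,
                with_relu=False, device='cpu')
surrogate.fit(features, adj, labels, idx_train)

# Flip roughly 5% of the edges with Metattack.
attacker = Metattack(model=surrogate, nnodes=adj.shape[0],
                     feature_shape=features.shape, device='cpu')
n_perturbations = int(0.05 * (adj.sum() // 2))
attacker.attack(features, adj, labels, idx_train, idx_unlabeled,
                n_perturbations, ll_constraint=False)
modified_adj = attacker.modified_adj  # the perturbed adjacency matrix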

Cite

For more information, please take a look at the paper or the detailed code in DeepRobust.

If you find this repo to be useful, please cite our paper. Thank you.

@inproceedings{jin2020graph,
  title={Graph Structure Learning for Robust Graph Neural Networks},
  author={Jin, Wei and Ma, Yao and Liu, Xiaorui and Tang, Xianfeng and Wang, Suhang and Tang, Jiliang},
  booktitle={26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2020},
  pages={66--74},
  year={2020},
  organization={Association for Computing Machinery}
}