THUDM / grb

License: MIT
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of Graph Machine Learning.

Programming Languages

  • Python: 139,335 projects (#7 most used programming language)
  • Shell: 77,523 projects
  • TeX: 3,793 projects
  • Jupyter Notebook: 11,667 projects

Projects that are alternatives of or similar to grb

well-classified-examples-are-underestimated
Code for the AAAI 2022 publication "Well-classified Examples are Underestimated in Classification with Deep Neural Networks"
Stars: ✭ 21 (-70%)
Mutual labels:  adversarial-attacks, graph-neural-networks
Pro-GNN
Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"
Stars: ✭ 202 (+188.57%)
Mutual labels:  adversarial-attacks, graph-neural-networks
SimP-GCN
Implementation of the WSDM 2021 paper "Node Similarity Preserving Graph Convolutional Networks"
Stars: ✭ 43 (-38.57%)
Mutual labels:  adversarial-attacks, graph-neural-networks
robust-ood-detection
Robust Out-of-distribution Detection in Neural Networks
Stars: ✭ 55 (-21.43%)
Mutual labels:  adversarial-attacks
advrank
Adversarial Ranking Attack and Defense, ECCV, 2020.
Stars: ✭ 19 (-72.86%)
Mutual labels:  adversarial-attacks
Adversarial Robustness Toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Stars: ✭ 2,638 (+3668.57%)
Mutual labels:  adversarial-attacks
how attentive are gats
Code for the paper "How Attentive are Graph Attention Networks?" (ICLR'2022)
Stars: ✭ 200 (+185.71%)
Mutual labels:  graph-neural-networks
perceptual-advex
Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models".
Stars: ✭ 44 (-37.14%)
Mutual labels:  adversarial-attacks
disentangled graph collaborative filtering
Disentangled Graph Collaborative Filtering, SIGIR 2020
Stars: ✭ 118 (+68.57%)
Mutual labels:  graph-neural-networks
Nlpaug
Data augmentation for NLP
Stars: ✭ 2,761 (+3844.29%)
Mutual labels:  adversarial-attacks
T3
[EMNLP 2020] "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack" by Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, Bo Li
Stars: ✭ 25 (-64.29%)
Mutual labels:  adversarial-attacks
nn robustness analysis
Python tools for analyzing the robustness properties of neural networks (NNs) from MIT ACL
Stars: ✭ 36 (-48.57%)
Mutual labels:  adversarial-attacks
zero-shot-indoor-localization-release
The official code and datasets for "Zero-Shot Multi-View Indoor Localization via Graph Location Networks" (ACMMM 2020)
Stars: ✭ 44 (-37.14%)
Mutual labels:  graph-neural-networks
3DInfomax
Making self-supervised learning work on molecules by using their 3D geometry to pre-train GNNs. Implemented in DGL and Pytorch Geometric.
Stars: ✭ 107 (+52.86%)
Mutual labels:  graph-neural-networks
graph-neural-networks-for-drug-discovery
odr.chalmers.se/handle/20.500.12380/256629?locale=en
Stars: ✭ 78 (+11.43%)
Mutual labels:  graph-neural-networks
square-attack
Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
Stars: ✭ 89 (+27.14%)
Mutual labels:  adversarial-attacks
DiagnoseRE
Source code and dataset for the CCKS2021 paper "On Robustness and Bias Analysis of BERT-based Relation Extraction"
Stars: ✭ 23 (-67.14%)
Mutual labels:  adversarial-attacks
domain-shift-robustness
Code for the paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets", ICCV 2019
Stars: ✭ 22 (-68.57%)
Mutual labels:  adversarial-attacks
Foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Stars: ✭ 2,108 (+2911.43%)
Mutual labels:  adversarial-attacks
demo-routenet
Demo of RouteNet in ACM SIGCOMM'19
Stars: ✭ 79 (+12.86%)
Mutual labels:  graph-neural-networks

GRB


Homepage | Paper | Datasets | Leaderboard | Documentation

Graph Robustness Benchmark (GRB) provides a scalable, unified, modular, and reproducible evaluation of the adversarial robustness of graph machine learning models. GRB offers elaborated datasets, a unified evaluation pipeline, a modular coding framework, and reproducible leaderboards, which together facilitate the development of graph adversarial learning, summarize existing progress, and generate insights into future research.

Updates

  • [08/11/2021] The final version of our paper is now available on arXiv; there is also a presentation video giving a brief introduction to GRB.
  • [11/10/2021] GRB has been accepted by the NeurIPS 2021 Datasets and Benchmarks Track! Find our paper on OpenReview.
  • [26/09/2021] Added support for the graph classification task! See the tutorials in examples/.
  • [16/09/2021] Added a paper list of state-of-the-art research on adversarial robustness in graph machine learning (kept updated).
  • [27/08/2021] Added support for modification attacks, with 7 implementations and tutorials.
  • [17/08/2021] Added an AutoML function based on optuna for training models.
  • [14/08/2021] Added Jupyter Notebook tutorials in examples/.

Get Started

Installation

Install grb via pip (current version v0.1.0):

pip install grb

Install grb via git (for the newest version):

git clone git@github.com:THUDM/grb.git
cd grb
pip install -e .
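
To verify the installation, a quick check such as the following can be used (assuming the package exposes a __version__ attribute, which is common but not guaranteed):

import grb
print(grb.__version__)  # expect something like '0.1.0' for the pip release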

Preparation

GRB provides all necessary components to ensure the reproducibility of evaluation results. Get the datasets from the link or download them by running the following script:

cd ./scripts
sh download_dataset.sh

Get the attack results (adversarial adjacency matrices and features) from the link or download them by running the following script:

sh download_attack_results.sh

Get the saved models (model weights) from the link or download them by running the following script:

sh download_saved_models.sh

Usage of GRB Modules

Training a GML model

An example of training a Graph Convolutional Network (GCN) on the grb-cora dataset.

import torch  # pytorch backend
from grb.dataset import Dataset
from grb.model.torch import GCN
from grb.trainer.trainer import Trainer

# Load data
dataset = Dataset(name='grb-cora', mode='easy',
                  feat_norm='arctan')
# Build model
model = GCN(in_features=dataset.num_features,
            out_features=dataset.num_classes,
            hidden_features=[64, 64])
# Training
adam = torch.optim.Adam(model.parameters(), lr=0.01)
trainer = Trainer(dataset=dataset, optimizer=adam,
                  loss=torch.nn.functional.nll_loss)
trainer.train(model=model, n_epoch=200, dropout=0.5,
              train_mode='inductive')
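
Once trained, the weights can be saved and restored with standard PyTorch utilities. This is a plain-PyTorch sketch with an illustrative path, not GRB's own saving helper:

import os
import torch

# Save the trained weights (illustrative path, not a GRB convention)
os.makedirs("./saved_models", exist_ok=True)
torch.save(model.state_dict(), "./saved_models/gcn_grb-cora.pt")

# Restore them later into a freshly built model of the same architecture
model.load_state_dict(torch.load("./saved_models/gcn_grb-cora.pt"))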

Adversarial attack

An example of applying the Topological Defective Graph Injection Attack (TDGIA) to the trained GCN model.

from grb.attack.injection.tdgia import TDGIA

# Attack configuration
tdgia = TDGIA(lr=0.01, 
              n_epoch=10,
              n_inject_max=20, 
              n_edge_max=20,
              feat_lim_min=-0.9, 
              feat_lim_max=0.9,
              sequential_step=0.2)
# Apply attack
rst = tdgia.attack(model=model,
                   adj=dataset.adj,
                   features=dataset.features,
                   target_mask=dataset.test_mask)
# Get modified adj and features
adj_attack, features_attack = rst
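
To gauge the attack's effect, the modified graph can be fed back to the trained model. The following is a plain-PyTorch sketch, not GRB's built-in evaluator: the forward signature model(features, adj), the dataset.labels attribute, and the need to append the injected nodes' features are all assumptions that may differ from the actual API (see grb.utils for the official helpers).

import torch

# Injection attacks return features only for the injected nodes (assumption),
# so append them to the original node features.
features_combined = torch.cat([dataset.features, features_attack], dim=0)

model.eval()
with torch.no_grad():
    # Assumed forward signature; adj_attack may first need conversion to a
    # torch sparse tensor (see grb.utils for preprocessing helpers).
    logits = model(features_combined, adj_attack)
    preds = logits[dataset.test_mask].argmax(dim=1)
    acc = (preds == dataset.labels[dataset.test_mask]).float().mean().item()

print(f"Test accuracy under TDGIA: {acc:.4f}")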

GRB Evaluation

Evaluation scenarios (Injection attack as an example)


GRB provides unified evaluation scenarios for fair comparisons between attacks and defenses. The example scenario is Black-box, Evasion, Inductive, Injection.

  • Black-box: Neither the attacker nor the defender has any knowledge of the methods the other applies.
  • Evasion: Models are trained on trusted data (e.g., from authenticated users), which is untouched by the attackers but may contain natural noise; thus, attacks can only happen during the inference phase.
  • Inductive: Models are used to classify unseen data (e.g., new users), i.e., validation and test data are unseen during training, which requires models to generalize to out-of-distribution data.
  • Injection: The attackers can only inject new nodes, not modify the target nodes directly, since it is usually hard to hack into users' accounts and modify their profiles; it is much easier to create fake accounts and connect them to existing users.

GRB Leaderboards

GRB maintains leaderboards that permit a fair comparison across various attacks and defenses. To ensure reproducibility, we provide all necessary information, including datasets, attack results, saved models, etc. Besides, all results on the leaderboards can be easily reproduced by running the following script (e.g., the leaderboard for the grb-cora dataset: https://cogdl.ai/grb/leaderboard/cora, compatible with v0.1.0):

sh run_leaderboard_pipeline.sh -d grb-cora -g 0 -s ./leaderboard -n 0
Usage: run_leaderboard_pipeline.sh [-d <string>] [-g <int>] [-s <string>] [-n <int>]
Pipeline for reproducing leaderboard on the chosen dataset.
    -h      Display help message.
    -d      Choose a dataset.
    -s      Set a directory to save leaderboard files.
    -n      Choose the number of an attack from 0 to 9.
    -g      Choose a GPU device. -1 for CPU.

Submission

We welcome researchers to submit new methods, including attacks, defenses, or new GML models, to enrich the GRB leaderboards. For future submissions, one should follow the GRB Evaluation Rules and respect reproducibility.

Please submit your methods via the Google Form GRB submission. Our team will verify the results within a week.

Requirements

  • scipy==1.5.2
  • numpy==1.19.1
  • torch==1.8.0
  • networkx==2.5
  • pandas~=1.2.3
  • cogdl~=0.3.0.post1
  • scikit-learn~=0.24.1

Citing GRB

If you find GRB useful for your research, please cite our paper:

@article{zheng2021grb,
  title={Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning},
  author={Zheng, Qinkai and Zou, Xu and Dong, Yuxiao and Cen, Yukuo and Yin, Da and Xu, Jiarong and Yang, Yang and Tang, Jie},
  journal={Neural Information Processing Systems Track on Datasets and Benchmarks 2021},
  year={2021}
}

Contact

If you encounter any problems, please contact us via email: [email protected]. We also welcome researchers to join our Google Group for further discussion on the adversarial robustness of graph machine learning.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].