
sisaman / LPGNN

License: MIT
Locally Private Graph Neural Networks (ACM CCS 2021)

Programming Languages

Jupyter Notebook, Python

Projects that are alternatives of or similar to LPGNN

Spektral
Graph Neural Networks with Keras and TensorFlow 2.
Stars: ✭ 1,946 (+6386.67%)
Mutual labels:  graph-neural-networks, graph-deep-learning
graphchem
Graph-based machine learning for chemical property prediction
Stars: ✭ 21 (-30%)
Mutual labels:  graph-neural-networks, pytorch-geometric
pyg autoscale
Implementation of "GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings" in PyTorch
Stars: ✭ 136 (+353.33%)
Mutual labels:  graph-neural-networks, pytorch-geometric
gnn-lspe
Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
Stars: ✭ 165 (+450%)
Mutual labels:  graph-neural-networks, graph-deep-learning
3DInfomax
Making self-supervised learning work on molecules by using their 3D geometry to pre-train GNNs. Implemented in DGL and PyTorch Geometric.
Stars: ✭ 107 (+256.67%)
Mutual labels:  graph-neural-networks, pytorch-geometric
federated pca
Federated Principal Component Analysis Revisited!
Stars: ✭ 30 (+0%)
Mutual labels:  differential-privacy
GAug
AAAI'21: Data Augmentation for Graph Neural Networks
Stars: ✭ 139 (+363.33%)
Mutual labels:  graph-neural-networks
ASAP
AAAI 2020 - ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations
Stars: ✭ 83 (+176.67%)
Mutual labels:  graph-neural-networks
robust-gcn
Implementation of the paper "Certifiable Robustness and Robust Training for Graph Convolutional Networks".
Stars: ✭ 35 (+16.67%)
Mutual labels:  graph-neural-networks
differential-privacy-bayesian-optimization
This repo contains the underlying code for all the experiments from the paper: "Automatic Discovery of Privacy-Utility Pareto Fronts"
Stars: ✭ 22 (-26.67%)
Mutual labels:  differential-privacy
deepsphere-weather
A spherical CNN for weather forecasting
Stars: ✭ 44 (+46.67%)
Mutual labels:  graph-neural-networks
BGCN
A TensorFlow implementation of "Bayesian Graph Convolutional Neural Networks" (AAAI 2019).
Stars: ✭ 129 (+330%)
Mutual labels:  graph-neural-networks
Introduction-to-Deep-Learning-and-Neural-Networks-Course
Code snippets and solutions for the Introduction to Deep Learning and Neural Networks Course hosted in educative.io
Stars: ✭ 33 (+10%)
Mutual labels:  graph-neural-networks
GNN-Recommender-Systems
An index of recommendation algorithms that are based on Graph Neural Networks.
Stars: ✭ 505 (+1583.33%)
Mutual labels:  graph-neural-networks
Entity-Graph-VLN
Code of the NeurIPS 2021 paper: Language and Visual Entity Relationship Graph for Agent Navigation
Stars: ✭ 34 (+13.33%)
Mutual labels:  graph-neural-networks
graphml-tutorials
Tutorials for Machine Learning on Graphs
Stars: ✭ 125 (+316.67%)
Mutual labels:  graph-neural-networks
SubGNN
Subgraph Neural Networks (NeurIPS 2020)
Stars: ✭ 136 (+353.33%)
Mutual labels:  graph-neural-networks
mdgrad
PyTorch differentiable molecular dynamics
Stars: ✭ 127 (+323.33%)
Mutual labels:  graph-neural-networks
SuperGAT
[ICLR 2021] How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision
Stars: ✭ 122 (+306.67%)
Mutual labels:  graph-neural-networks
SiGAT
Source code for Signed Graph Attention Networks (ICANN 2019) & SDGNN (AAAI 2021)
Stars: ✭ 37 (+23.33%)
Mutual labels:  graph-neural-networks

Locally Private Graph Neural Networks

This repository is the official implementation of the paper:
Locally Private Graph Neural Networks (ACM CCS '21)

Proceedings version: https://dl.acm.org/doi/abs/10.1145/3460120.3484565
Video presentation: https://www.youtube.com/watch?v=1LdC5G_p-0g

Abstract

Graph Neural Networks (GNNs) have demonstrated superior performance in learning node representations for various graph inference tasks. However, learning over graph data can raise privacy concerns when nodes represent people or human-related variables that involve sensitive or personal information. In this paper, we study the problem of node data privacy, where graph nodes (e.g., social network users) have potentially sensitive data that is kept private, but could be beneficial to a central server for training a GNN over the graph. To address this problem, we propose a privacy-preserving, architecture-agnostic GNN learning framework with formal privacy guarantees based on Local Differential Privacy (LDP). Specifically, we develop a locally private mechanism to perturb and compress node features, which the server can efficiently collect to approximate the GNN's neighborhood aggregation step. Furthermore, to improve the accuracy of the estimation, we prepend to the GNN a denoising layer, called KProp, which is based on the multi-hop aggregation of node features. Finally, we propose a robust algorithm for learning with privatized noisy labels, where we again benefit from KProp's denoising capability to increase the accuracy of label inference for node classification. Extensive experiments conducted over real-world datasets demonstrate that our method can maintain a satisfying level of accuracy with low privacy loss.
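To make the feature perturbation step concrete, below is a minimal sketch of a one-bit LDP mechanism in the spirit of the paper's 1bm option (the multi-bit mechanism, mbm, generalizes it by sampling a subset of feature dimensions). This is an illustration of the idea, not the repository's exact implementation; it assumes features have already been scaled to [0, 1].

import math
import torch

def one_bit_perturb(x, eps):
    # x: node features scaled to [0, 1]; eps: privacy budget per feature.
    # Each entry is released as a single biased bit:
    #   P(bit = 1) = 1/(e^eps + 1) + x * (e^eps - 1)/(e^eps + 1),
    # which satisfies eps-LDP for every feature value in [0, 1].
    e = math.exp(eps)
    p = 1.0 / (e + 1.0) + x * (e - 1.0) / (e + 1.0)
    return torch.bernoulli(p)

def one_bit_estimate(bits, eps):
    # Server-side debiasing: E[x_hat] = x, so the estimate is unbiased
    # (but high-variance for small eps).
    e = math.exp(eps)
    return (bits * (e + 1.0) - 1.0) / (e - 1.0)

Because the estimator is unbiased, averaging it over a node's neighbors during GNN aggregation shrinks the error, which is exactly what the KProp layer exploits.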

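KProp itself amounts to K rounds of parameter-free neighborhood aggregation applied before the backbone GNN. A minimal dense sketch, assuming mean aggregation (the repository uses sparse message passing, and its normalization may differ):

import torch

def kprop(x, adj, k):
    # x: (N, d) noisy node features; adj: (N, N) dense adjacency matrix.
    # Row-normalize the adjacency so each step averages over neighbors,
    # then propagate k times. Repeated averaging cancels much of the
    # zero-mean LDP noise before the features reach the GNN.
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    p = adj / deg
    for _ in range(k):
        x = p @ x
    return x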

Requirements

This code is implemented in Python 3.9 and builds on PyTorch and PyTorch Geometric; see the repository for the full list of required packages.

Note: For the DGL-based implementation, switch to the DGL branch.

Usage

Replicating the paper's results

To replicate our experiments and reproduce the paper's results, follow these steps:

  1. Run python experiments.py -n LPGNN create --LPGNN --baselines
  2. Run python experiments.py -n LPGNN exec --all
    All the datasets will be downloaded automatically into the datasets folder, and the results will be stored in the results directory.
  3. Go through the results.ipynb notebook to visualize the results.

Training individual models

If you want to individually train and evaluate the models on any of the datasets mentioned in the paper, run the following command:

python main.py [OPTIONS...]

dataset arguments:
  -d              <string>       name of the dataset (choices: cora, pubmed, facebook, lastfm) (default: cora)
  --data-dir      <path>         directory to store the dataset (default: ./datasets)
  --data-range    <float pair>   min and max feature value (default: (0, 1))
  --val-ratio     <float>        fraction of nodes used for validation (default: 0.25)
  --test-ratio    <float>        fraction of nodes used for test (default: 0.25)

data transformation arguments:
  -f              <string>       feature transformation method (choices: raw, rnd, one, ohd) (default: raw)
  -m              <string>       feature perturbation mechanism (choices: mbm, 1bm, lpm, agm) (default: mbm)
  -ex             <float>        privacy budget for feature perturbation (default: inf)
  -ey             <float>        privacy budget for label perturbation (default: inf)

model arguments:
  --model         <string>       backbone GNN model (choices: gcn, sage, gat) (default: sage)
  --hidden-dim    <integer>      dimension of the hidden layers (default: 16)
  --dropout       <float>        dropout rate (between zero and one) (default: 0.0)
  -kx             <integer>      KProp step parameter for features (default: 0)
  -ky             <integer>      KProp step parameter for labels (default: 0)
  --forward       <boolean>      applies forward loss correction (default: True)

trainer arguments:
  --optimizer     <string>       optimization algorithm (choices: sgd, adam) (default: adam)
  --max-epochs    <integer>      maximum number of training epochs (default: 500)
  --learning-rate <float>        learning rate (default: 0.01)
  --weight-decay  <float>        weight decay (L2 penalty) (default: 0.0)
  --patience      <integer>      early-stopping patience window size (default: 0)
  --device        <string>       desired device for training (choices: cuda, cpu) (default: cuda)

experiment arguments:
  -s              <integer>      initial random seed (default: None)
  -r              <integer>      number of times the experiment is repeated (default: 1)
  -o              <path>         directory to store the results (default: ./output)
  --log           <boolean>      enable wandb logging (default: False)
  --log-mode      <string>       wandb logging mode (choices: individual, collective) (default: individual)
  --project-name  <string>       wandb project name (default: LPGNN)
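For example, the following command (with illustrative hyperparameter values) trains a GraphSAGE model on Cora with feature and label privacy budgets of 1, using 16-step KProp on features and 8-step KProp on labels, and repeats the experiment 10 times:

python main.py -d cora -m mbm -ex 1 -ey 1 -kx 16 -ky 8 --model sage -r 10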

The test result for each run will be saved as a CSV file in the directory specified by the -o option (default: ./output).
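Regarding the --forward option: forward loss correction accounts for label privatization by pushing the model's predicted class distribution through the label-noise transition matrix before computing the loss. A minimal sketch, assuming labels were privatized with randomized response over C classes (the repository's correction may differ in detail):

import math
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_y, eps_y, num_classes):
    # Randomized response keeps the true label with probability
    # e^eps / (e^eps + C - 1) and flips it to each other class with
    # probability 1 / (e^eps + C - 1).
    e = math.exp(eps_y)
    off = 1.0 / (e + num_classes - 1)
    T = torch.full((num_classes, num_classes), off)
    T.fill_diagonal_(e * off)
    # Push the clean prediction through the noise model, then score it
    # against the observed noisy labels.
    noisy_probs = F.softmax(logits, dim=1) @ T
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_y)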

Citation

If you find this code useful, please cite the following paper:

@inproceedings{sajadmanesh2021locally,
   author = {Sajadmanesh, Sina and Gatica-Perez, Daniel},
   title = {Locally Private Graph Neural Networks},
   year = {2021},
   publisher = {Association for Computing Machinery},
   doi = {10.1145/3460120.3484565},
   booktitle = {Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security},
   pages = {2130--2145},
   series = {CCS '21}
}