
MixHop and N-GCN

A PyTorch implementation of "MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing" (ICML 2019) and "A Higher-Order Graph Convolutional Layer" (NeurIPS 2018).

Abstract

Recent methods generalize convolutional layers from Euclidean domains to graph-structured data by approximating the eigenbasis of the graph Laplacian. The computationally-efficient and broadly-used Graph ConvNet of Kipf & Welling over-simplifies the approximation, effectively rendering graph convolution as a neighborhood-averaging operator. This simplification restricts the model from learning delta operators, the very premise of the graph Laplacian. In this work, we propose a new Graph Convolutional layer which mixes multiple powers of the adjacency matrix, allowing it to learn delta operators. Our layer exhibits the same memory footprint and computational complexity as a GCN. We illustrate the strength of our proposed layer on both synthetic graph datasets, and on several real-world citation graphs, setting a new state-of-the-art on Pubmed.
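
To make the idea of mixing adjacency powers concrete, below is a minimal PyTorch sketch of such a layer, not the repository's actual implementation: it applies a (dense) normalized adjacency matrix `a_hat` zero or more times per branch, transforms each branch with its own linear map, and concatenates the results. The class name, the dense matrix product, and the default powers are illustrative assumptions.

import torch

class MixHopStyleLayer(torch.nn.Module):
    """Illustrative sketch: mix several powers of the normalized adjacency matrix."""
    def __init__(self, in_dim, out_dim, powers=(0, 1, 2)):
        super().__init__()
        self.powers = powers
        self.linears = torch.nn.ModuleList(
            torch.nn.Linear(in_dim, out_dim) for _ in powers
        )

    def forward(self, a_hat, x):
        # For each power j, compute a_hat^j @ x @ W_j, then concatenate feature-wise.
        outputs = []
        for power, linear in zip(self.powers, self.linears):
            h = x
            for _ in range(power):
                h = a_hat @ h
            outputs.append(linear(h))
        return torch.cat(outputs, dim=1)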

This repository provides a PyTorch implementation of MixHop and N-GCN as described in the papers:

MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing. Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Hrayr Harutyunyan, Nazanin Alipourfard, Kristina Lerman, Greg Ver Steeg, and Aram Galstyan. ICML, 2019. [Paper]

A Higher-Order Graph Convolutional Layer. Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, and Hrayr Harutyunyan. NeurIPS, 2018. [Paper]

The original TensorFlow implementation of MixHop is available [here].

Requirements

The codebase is implemented in Python 3.5.2. The package versions used for development are listed below.

networkx          2.4
tqdm              4.28.1
numpy             1.15.4
pandas            0.23.4
texttable         1.5.0
scipy             1.1.0
argparse          1.1.0
torch             1.1.0
torch-sparse      0.3.0

Datasets

The code takes the **edge list** of the graph in a csv file. Every row indicates an edge between two nodes separated by a comma. The first row is a header. Nodes should be indexed starting with 0. A sample graph for `Cora` is included in the `input/` directory. In addition to the edge list, there is a JSON file with the sparse features and a csv with the target variable.
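
For illustration, an edge list in this format could be read with pandas and networkx; a minimal sketch, assuming the default `input/cora_edges.csv` path:

import networkx as nx
import pandas as pd

# read_csv consumes the header row; build an undirected graph from the pairs.
edges = pd.read_csv("input/cora_edges.csv").values.tolist()
graph = nx.from_edgelist(edges)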

The **feature matrix** is a sparse binary one, stored as a JSON. Nodes are the keys of the JSON and feature indices are the values: for each node, the ids of its nonzero feature columns are stored as elements of a list. The feature matrix is structured as:

{ 0: [0, 1, 38, 1968, 2000, 52727],
  1: [10000, 20, 3],
  2: [],
  ...
  n: [2018, 10000]}
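
A file in this shape could be converted to a scipy sparse matrix roughly as follows; a sketch that assumes the keys cover node ids 0..n (arriving as strings, as JSON keys do) and that the number of feature columns is one more than the largest feature id seen:

import json

import numpy as np
from scipy.sparse import coo_matrix

with open("input/cora_features.json") as source:
    features = json.load(source)

# One row per node, one column per feature id; all entries are binary.
rows = [int(node) for node, feats in features.items() for _ in feats]
cols = [feat for feats in features.values() for feat in feats]
shape = (len(features), max(cols) + 1)
matrix = coo_matrix((np.ones(len(cols)), (rows, cols)), shape=shape)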

The **target vector** is a csv with two columns and headers; the first contains the node identifiers, the second the targets. This csv is sorted by node identifiers, and the target column contains the class memberships indexed from zero.

NODE ID   Target
0         3
1         1
2         0
3         1
...       ...
n         3
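
Read back into Python, the target column could be extracted with pandas; a sketch assuming the default `input/cora_target.csv` path and the column name above:

import pandas as pd

# Rows are sorted by node id, so the extracted column is already node-aligned.
targets = pd.read_csv("input/cora_target.csv")["Target"].values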

Options

Training an N-GCN/MixHop model is handled by the `src/main.py` script which provides the following command line arguments.

Input and output options

  --edge-path       STR    Edge list csv.         Default is `input/cora_edges.csv`.
  --features-path   STR    Features json.         Default is `input/cora_features.json`.
  --target-path     STR    Target classes csv.    Default is `input/cora_target.csv`.

Model options

  --model             STR     Model variant.                 Default is `mixhop`.               
  --seed              INT     Random seed.                   Default is 42.
  --epochs            INT     Number of training epochs.     Default is 2000.
  --early-stopping    INT     Early stopping rounds.         Default is 10.
  --training-size     INT     Training set size.             Default is 1500.
  --validation-size   INT     Validation set size.           Default is 500.
  --learning-rate     FLOAT   Adam learning rate.            Default is 0.01.
  --dropout           FLOAT   Dropout rate value.            Default is 0.5.
  --lambd             FLOAT   Regularization coefficient.    Default is 0.0005.
  --layers-1          LST     Layer sizes (upstream).        Default is [200, 200, 200]. 
  --layers-2          LST     Layer sizes (bottom).          Default is [200, 200, 200].
  --cut-off           FLOAT   Norm cut-off for pruning.      Default is 0.1.
  --budget            INT     Architecture neuron budget.    Default is 60.
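
For reference, flags like these are typically declared with argparse; below is a minimal sketch covering a few of the options above, with defaults taken from the table — illustrative, not the repository's actual parser:

import argparse

parser = argparse.ArgumentParser(description="Run a MixHop/N-GCN model.")
parser.add_argument("--model", type=str, default="mixhop")
parser.add_argument("--epochs", type=int, default=2000)
parser.add_argument("--learning-rate", type=float, default=0.01)
parser.add_argument("--dropout", type=float, default=0.5)
parser.add_argument("--layers-1", nargs="+", type=int, default=[200, 200, 200])
args = parser.parse_args()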

Examples

The following commands learn a neural network and score on the test set. Training a model on the default dataset:

$ python src/main.py

Training a MixHop model for 100 epochs:

$ python src/main.py --epochs 100

Increasing the learning rate and the dropout:

$ python src/main.py --learning-rate 0.1 --dropout 0.9

Training a model with diffusion order 2:

$ python src/main.py --layers-1 64 64 --layers-2 64 64

Training an N-GCN model:

$ python src/main.py --model ngcn

License

GPL-3.0