GraphNAS

License: Apache-2.0
This directory contains code necessary to run the GraphNAS algorithm.

Programming Languages

Python

Projects that are alternatives of or similar to GraphNAS

Awesome Federated Learning
Federated Learning Library: https://fedml.ai
Stars: ✭ 624 (+500%)
Mutual labels:  neural-architecture-search
Efficientnas
Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search https://arxiv.org/abs/1807.06906
Stars: ✭ 44 (-57.69%)
Mutual labels:  neural-architecture-search
Autodl Projects
Automated deep learning algorithms implemented in PyTorch.
Stars: ✭ 1,187 (+1041.35%)
Mutual labels:  neural-architecture-search
Paddleslim
PaddleSlim is an open-source library for deep model compression and architecture search.
Stars: ✭ 677 (+550.96%)
Mutual labels:  neural-architecture-search
Morph Net
Fast & Simple Resource-Constrained Learning of Deep Network Structure
Stars: ✭ 937 (+800.96%)
Mutual labels:  neural-architecture-search
Nsganetv2
[ECCV2020] NSGANetV2: Evolutionary Multi-Objective Surrogate-Assisted Neural Architecture Search
Stars: ✭ 52 (-50%)
Mutual labels:  neural-architecture-search
Fasterseg
[ICLR 2020] "FasterSeg: Searching for Faster Real-time Semantic Segmentation" by Wuyang Chen, Xinyu Gong, Xianming Liu, Qian Zhang, Yuan Li, Zhangyang Wang
Stars: ✭ 438 (+321.15%)
Mutual labels:  neural-architecture-search
Nni
An open-source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyperparameter tuning.
Stars: ✭ 10,698 (+10186.54%)
Mutual labels:  neural-architecture-search
Neural Architecture Search With Rl
Minimal TensorFlow implementation of the paper "Neural Architecture Search with Reinforcement Learning" presented at ICLR 2017
Stars: ✭ 37 (-64.42%)
Mutual labels:  neural-architecture-search
Tenas
[ICLR 2021] "Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective" by Wuyang Chen, Xinyu Gong, Zhangyang Wang
Stars: ✭ 63 (-39.42%)
Mutual labels:  neural-architecture-search
Awesome Automl And Lightweight Models
A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hyperparameter Optimization, 5.) Automated Feature Engineering.
Stars: ✭ 691 (+564.42%)
Mutual labels:  neural-architecture-search
Devol
Genetic neural architecture search with Keras
Stars: ✭ 925 (+789.42%)
Mutual labels:  neural-architecture-search
Awesome Architecture Search
A curated list of awesome architecture search resources
Stars: ✭ 1,078 (+936.54%)
Mutual labels:  neural-architecture-search
Randwirenn
Implementation of: "Exploring Randomly Wired Neural Networks for Image Recognition"
Stars: ✭ 675 (+549.04%)
Mutual labels:  neural-architecture-search
Hydra
Multi-Task Learning Framework on PyTorch. State-of-the-art methods are implemented to effectively train models on multiple tasks.
Stars: ✭ 87 (-16.35%)
Mutual labels:  neural-architecture-search
Hpbandster
A distributed Hyperband implementation on steroids
Stars: ✭ 456 (+338.46%)
Mutual labels:  neural-architecture-search
Autokeras
AutoML library for deep learning
Stars: ✭ 8,269 (+7850.96%)
Mutual labels:  neural-architecture-search
Pnasnet.tf
TensorFlow implementation of PNASNet-5 on ImageNet
Stars: ✭ 102 (-1.92%)
Mutual labels:  neural-architecture-search
Robnets
[CVPR 2020] When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks
Stars: ✭ 95 (-8.65%)
Mutual labels:  neural-architecture-search
Mtlnas
[CVPR 2020] MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning
Stars: ✭ 58 (-44.23%)
Mutual labels:  neural-architecture-search

GraphNAS

Overview

Graph Neural Architecture Search (GraphNAS for short) automatically designs graph neural architectures with reinforcement learning. This directory contains the code necessary to run GraphNAS. Specifically, GraphNAS first uses a recurrent network to generate variable-length strings that describe the architectures of graph neural networks, and then trains the recurrent network with a policy-gradient algorithm to maximize the expected accuracy of the generated architectures on a validation set. An illustration of GraphNAS is shown below:

A simple illustration of GraphNAS

A recurrent network (the controller RNN) generates descriptions of graph neural architectures (the child-model GNNs). Once an architecture m is generated by the controller, GraphNAS trains m on a given graph G and tests it on a validation set D. The validation result R_D(m) is taken as the reward for the recurrent network.
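
This is the standard policy-gradient (REINFORCE) loop. A minimal sketch of one search step follows; the controller, build_gnn, and train_and_eval helpers are hypothetical stand-ins for this repository's actual classes, not its API:

import torch

# Hypothetical stand-ins for the repository's components:
#   controller     - RNN that samples an architecture description and
#                    returns the log-probabilities of each sampled decision
#   build_gnn      - builds a child GNN from the sampled description
#   train_and_eval - trains the child on graph G, returns validation accuracy
def search_step(controller, optimizer, graph, baseline, decay=0.95):
    actions, log_probs = controller.sample()            # describe architecture m
    reward = train_and_eval(build_gnn(actions), graph)  # R_D(m)

    # An exponential moving-average baseline reduces gradient variance.
    baseline = decay * baseline + (1 - decay) * reward

    # REINFORCE: minimize -(R - b) * log pi(m) to maximize expected reward.
    loss = -(reward - baseline) * log_probs.sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return baseline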



To improve the search efficiency of GraphNAS, we restrict the search space from entire architectures to single layers, and build the final architecture as a concatenation of the best layers found. An example of GraphNAS constructing a single GNN layer (right-hand side) is shown below:

A simple illustration of GraphNAS

In the above example, the layer has two input states O_1 and O_2, two intermediate states O_3 and O_4, and an output state O_5. The controller on the left-hand side samples O_2 from {O_1, O_2, O_3} as the input of O_4, and then samples "gat" as the operator for processing O_2. The output state O_5 = relu(O_3 + O_4) collects information from O_3 and O_4; for O_5, the controller assigns the readout operator "add" and the activation operator "relu". As a result, this layer can be described by the operator list [0, gcn, 1, gat, add, relu].
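
To make the encoding concrete, a small decoder for such an operator list might look as follows. The grouping into (input index, operator) pairs plus a trailing readout and activation mirrors the example above; this is an illustrative sketch, not a verified file format of this repository:

def decode_layer(actions):
    # Decode a layer description such as [0, 'gcn', 1, 'gat', 'add', 'relu']:
    # the leading pairs are (input-state index, message-passing operator);
    # the last two entries are the readout and activation of the output state.
    *pairs, readout, activation = actions
    edges = [(pairs[i], pairs[i + 1]) for i in range(0, len(pairs), 2)]
    return {"edges": edges, "readout": readout, "activation": activation}

print(decode_layer([0, "gcn", 1, "gat", "add", "relu"]))
# {'edges': [(0, 'gcn'), (1, 'gat')], 'readout': 'add', 'activation': 'relu'}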

Requirements

Recent versions of PyTorch, numpy, scipy, sklearn, dgl, torch_geometric, and networkx are required. Ensure that PyTorch 1.1.0 and CUDA 9.0 are installed, then run:

pip install torch==1.1.0 -f https://download.pytorch.org/whl/cu90/torch_stable.html
pip install -r requirements.txt
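
To confirm that the expected PyTorch/CUDA combination is active before starting a search, a quick check such as the following can help:

import torch

print(torch.__version__)          # expect '1.1.0' for the pinned install
print(torch.version.cuda)         # expect a 9.0.x build
print(torch.cuda.is_available())  # True if a usable GPU is visible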

If you want to run GraphNAS in Docker, run:

docker build -t graphnas -f DockerFile . 
docker run -it -v $(pwd):/GraphNAS graphnas python -m eval_scripts.semi.eval_designed_gnn

Running the code

Architecture evaluation

To evaluate our best architecture designed in the semi-supervised experiments by training it from scratch, run

python -m eval_scripts.semi.eval_designed_gnn

To evaluate our best architecture designed in the supervised experiments by training it from scratch, run

python -m eval_scripts.sup.eval_designed_gnn

Results

Semi-supervised node classification accuracy (%)

Model Cora Citeseer Pubmed
GCN 81.5+/-0.4 70.9+/-0.5 79.0+/-0.4
SGC 81.0+/-0.0 71.9+/-0.1 78.9+/-0.0
GAT 83.0+/-0.7 72.5+/-0.7 79.0+/-0.3
LGCN 83.3+/-0.5 73.0+/-0.6 79.5+/-0.2
DGCN 82.0+/-0.2 72.2+/-0.3 78.6+/-0.1
ARMA 82.8+/-0.6 72.3+/-1.1 78.8+/-0.3
APPNP 83.3+/-0.6 71.8+/-0.4 80.2+/-0.2
simple-NAS 81.4+/-0.6 71.7+/-0.6 79.5+/-0.5
GraphNAS 84.3+/-0.4 73.7+/-0.2 80.6+/-0.2

Supervised node classification accuracy (%)

Model Cora Citeseer Pubmed
GCN 90.2+/-0.0 80.0+/-0.3 87.8+/-0.2
SGC 88.8+/-0.0 80.6+/-0.0 86.5+/-0.1
GAT 89.5+/-0.3 78.6+/-0.3 86.5+/-0.6
LGCN 88.7+/-0.5 79.2+/-0.4 OOM
DGCN 88.4+/-0.2 78.0+/-0.2 88.0+/-0.9
ARMA 89.8+/-0.1 79.9+/-0.6 88.1+/-0.2
APPNP 90.4+/-0.2 79.2+/-0.4 87.4+/-0.3
random-NAS 90.0+/-0.3 81.1+/-0.3 90.7+/-0.6
simple-NAS 90.1+/-0.3 79.6+/-0.5 88.5+/-0.2
GraphNAS 90.6+/-0.3 81.3+/-0.4 91.3+/-0.3

The architectures designed by GraphNAS in the supervised experiments are shown below:

Architectures designed by GraphNAS in supervised experiments

The architecture G-Cora designed by GraphNAS on Cora is [0, gat6, 0, gcn, 0, gcn, 2, arma, tanh, concat]; the architecture G-Citeseer designed on Citeseer is [0, identity, 0, gat6, linear, concat]; and the architecture G-Pubmed designed on Pubmed is [1, gat8, 0, arma, tanh, concat].

Searching for new architectures

To design an entire graph neural architecture based on the search space described in Section 3.2 of the paper, please run:

python -m graphnas.main --dataset Citeseer

To design an entire graph neural architecture based on the search space described in Section 3.4 of the paper, please run:

python -m graphnas.main --dataset Citeseer --supervised True --search_mode micro

Be aware that different runs may end up in different local minima.
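
If repeatable runs matter, fixing the random seeds before the search starts narrows this variance somewhat. The snippet below is a generic PyTorch seeding sketch, not a command-line option that the GraphNAS entry point is known to expose:

import random

import numpy as np
import torch

def set_seed(seed=123):
    # Seed every RNG that the controller and child-model training may touch.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)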

Acknowledgements

This repo builds on DGL and PyG (PyTorch Geometric).
