romulus0914 / NASBench-PyTorch

License: Apache-2.0
A PyTorch implementation of NASBench


Projects that are alternatives to, or similar to, NASBench-PyTorch

Nas Benchmark
"NAS evaluation is frustratingly hard", ICLR2020
Stars: ✭ 126 (+193.02%)
Mutual labels:  neural-architecture-search
Aw nas
aw_nas: A Modularized and Extensible NAS Framework
Stars: ✭ 152 (+253.49%)
Mutual labels:  neural-architecture-search
Awesome Nas Papers
Awesome Neural Architecture Search Papers
Stars: ✭ 213 (+395.35%)
Mutual labels:  neural-architecture-search
Awesome Autodl
A curated list of automated deep learning (including neural architecture search and hyper-parameter optimization) resources.
Stars: ✭ 1,819 (+4130.23%)
Mutual labels:  neural-architecture-search
Deep architect legacy
DeepArchitect: Automatically Designing and Training Deep Architectures
Stars: ✭ 144 (+234.88%)
Mutual labels:  neural-architecture-search
Lycoris
A lightweight and easy-to-use deep learning framework with neural architecture search.
Stars: ✭ 180 (+318.6%)
Mutual labels:  neural-architecture-search
Amla
AutoML frAmework for Neural Networks
Stars: ✭ 119 (+176.74%)
Mutual labels:  neural-architecture-search
awesome-transformer-search
A curated list of awesome resources combining Transformers with Neural Architecture Search
Stars: ✭ 194 (+351.16%)
Mutual labels:  neural-architecture-search
Dna
Block-wisely Supervised Neural Architecture Search with Knowledge Distillation (CVPR 2020)
Stars: ✭ 147 (+241.86%)
Mutual labels:  neural-architecture-search
Atomnas
Code for ICLR 2020 paper 'AtomNAS: Fine-Grained End-to-End Neural Architecture Search'
Stars: ✭ 197 (+358.14%)
Mutual labels:  neural-architecture-search
Single Path One Shot Nas Mxnet
Single Path One-Shot NAS MXNet implementation with full training and searching pipeline. Support both Block and Channel Selection. Searched models better than the original paper are provided.
Stars: ✭ 136 (+216.28%)
Mutual labels:  neural-architecture-search
Scarlet Nas
Bridging the gap Between Stability and Scalability in Neural Architecture Search
Stars: ✭ 140 (+225.58%)
Mutual labels:  neural-architecture-search
Naszilla
Naszilla is a Python library for neural architecture search (NAS)
Stars: ✭ 181 (+320.93%)
Mutual labels:  neural-architecture-search
Nas Segm Pytorch
Code for Fast Neural Architecture Search of Compact Semantic Segmentation Models via Auxiliary Cells, CVPR '19
Stars: ✭ 126 (+193.02%)
Mutual labels:  neural-architecture-search
Enas Pytorch
PyTorch implementation of "Efficient Neural Architecture Search via Parameters Sharing"
Stars: ✭ 2,506 (+5727.91%)
Mutual labels:  neural-architecture-search
Nasbot
Neural Architecture Search with Bayesian Optimisation and Optimal Transport
Stars: ✭ 120 (+179.07%)
Mutual labels:  neural-architecture-search
Nsga Net
NSGA-Net, a Neural Architecture Search Algorithm
Stars: ✭ 171 (+297.67%)
Mutual labels:  neural-architecture-search
Auto-Compression
Automatic DNN compression tool with various model compression and neural architecture search techniques
Stars: ✭ 19 (-55.81%)
Mutual labels:  neural-architecture-search
AutoSpeech
[InterSpeech 2020] "AutoSpeech: Neural Architecture Search for Speaker Recognition" by Shaojin Ding*, Tianlong Chen*, Xinyu Gong, Weiwei Zha, Zhangyang Wang
Stars: ✭ 195 (+353.49%)
Mutual labels:  neural-architecture-search
Hyperactive
A hyperparameter optimization and data collection toolbox for convenient and fast prototyping of machine-learning models.
Stars: ✭ 182 (+323.26%)
Mutual labels:  neural-architecture-search

NASBench-PyTorch

NASBench-PyTorch is a PyTorch implementation of the NAS-Bench-101 search space, including the training of the networks**. The original implementation is written in TensorFlow, and this project contains some files from the original repository (in the directory nasbench_pytorch/model/).

Important: if you want to reproduce the original results, please refer to the Reproducibility section.

Overview

A PyTorch implementation of the training pipeline for the NAS-Bench-101 dataset (NAS-Bench-101: Towards Reproducible Neural Architecture Search). The dataset contains 423,624 unique neural networks, exhaustively generated and evaluated from a fixed graph-based search space.
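
For orientation, each network in this search space is a cell: a small directed acyclic graph with at most 7 nodes and 9 edges, encoded as an upper-triangular adjacency matrix together with a list of per-node operations. The sketch below writes out one such specification by hand; the particular matrix and operation choices are only an illustration, but the (adjacency, ops) pair has the same form as the module_adjacency and module_operations values returned by the NAS-Bench-101 API in the Usage section below.

import numpy as np

# One illustrative 7-node cell; entry (i, j) = 1 means an edge from node i to node j.
# The matrix is upper triangular (a DAG) and uses at most 9 edges, as in NAS-Bench-101.
adjacency = np.array([[0, 1, 1, 1, 0, 1, 0],   # input node
                      [0, 0, 0, 0, 0, 0, 1],
                      [0, 0, 0, 1, 0, 0, 0],
                      [0, 0, 0, 0, 1, 0, 0],
                      [0, 0, 0, 0, 0, 0, 1],
                      [0, 0, 0, 0, 0, 0, 1],
                      [0, 0, 0, 0, 0, 0, 0]])  # output node

# One operation per node; the first and last are always 'input' and 'output'.
ops = ['input', 'conv3x3-bn-relu', 'conv1x1-bn-relu', 'conv3x3-bn-relu',
       'maxpool3x3', 'conv3x3-bn-relu', 'output']

These two objects can then be passed to the Network class exactly as shown in the "Train a network by hash" section below.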

Usage

You need to have PyTorch installed.

You can install the package by running pip install nasbench_pytorch. Alternatively, you can install it from the source code:

  1. Clone this repo
git clone https://github.com/romulus0914/NASBench-PyTorch
cd NASBench-PyTorch
  2. Install the project
pip install -e .

The file main.py contains an example of training a network. To see the available parameters, run:

python main.py --help

Train a network by hash

To train a network whose architecture is queried from NAS-Bench-101 using its unique hash, install the original nasbench repository. Follow the instructions in its README; note that you need to install TensorFlow. If you need TensorFlow 2.x, install this fork of the repository instead.

Then, you can get the PyTorch architecture of a network like this:

from nasbench_pytorch.model import Network as NBNetwork
from nasbench import api


nasbench_path = '$path_to_downloaded_nasbench'
nb = api.NASBench(nasbench_path)

net_hash = '$some_hash'  # you can get hashes using nasbench.hash_iterator()
m = nb.get_metrics_from_hash(net_hash)
ops = m[0]['module_operations']
adjacency = m[0]['module_adjacency']

net = NBNetwork((adjacency, ops))

Then, you can train it just like the example network in main.py.
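
For reference, a bare-bones training loop could look like the sketch below. It is only an outline: it uses torchvision's CIFAR-10 loader and plain SGD with illustrative hyperparameters rather than the repository's own data pipeline and settings in main.py, and it assumes the network's forward pass returns class logits.

import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = net.to(device)  # the NBNetwork instance created above

# Illustrative CIFAR-10 data loader; main.py has its own preprocessing pipeline.
transform = transforms.Compose([transforms.ToTensor()])
train_set = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.025, momentum=0.9, weight_decay=1e-4)  # illustrative values

for epoch in range(10):  # illustrative epoch count
    net.train()
    for inputs, targets in train_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        outputs = net(inputs)  # assumes forward() returns logits
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()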

Architecture

Example architecture (picture from the original repository)

Reproducibility

The code should closely match the TensorFlow version (including the hyperparameters), but there are some differences:

  • The RMSProp implementation differs between TensorFlow and PyTorch

    • For more information, refer to here and here.
    • Optionally, you can install pytorch-image-models, where a TensorFlow-like RMSProp is implemented
      • pip install timm
    • Then, pass --optimizer rmsprop_tf to main.py to use it (see the example command after this list)
  • You can turn gradient clipping off by setting --grad_clip_off True

  • The original training was done on TPUs; this code supports only GPU and CPU training

  • Input data augmentation methods are the same, but due to randomness they are not applied in the same manner

    • Cause: Batches and images cannot be shuffled as in the original TPU training, and the augmentation seed is also different
  • Results may still differ due to TensorFlow/PyTorch implementation differences
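
For example, to use the TensorFlow-like RMSProp and to turn gradient clipping off in a single run (both flags are the ones described in the list above; run python main.py --help for everything else main.py accepts):

python main.py --optimizer rmsprop_tf --grad_clip_off True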

Refer to this issue for more information and for comparison with API results.

Disclaimer

Modified from NASBench: A Neural Architecture Search Dataset and Benchmark. The files graph_util.py and model_spec.py are directly copied from the original repo. The original license can be found here.

**Please note that this repo is only used to train one possible architecture in the search space, not to generate all possible graphs and train them.
