
meijieru / Atomnas

Licence: other
Code for ICLR 2020 paper 'AtomNAS: Fine-Grained End-to-End Neural Architecture Search'

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Atomnas

Randwirenn
Pytorch Implementation of: "Exploring Randomly Wired Neural Networks for Image Recognition"
Stars: ✭ 270 (+37.06%)
Mutual labels:  imagenet, neural-architecture-search
TF-NAS
TF-NAS: Rethinking Three Search Freedoms of Latency-Constrained Differentiable Neural Architecture Search (ECCV2020)
Stars: ✭ 66 (-66.5%)
Mutual labels:  imagenet, neural-architecture-search
regnet.pytorch
PyTorch-style and human-readable RegNet with a spectrum of pre-trained models
Stars: ✭ 50 (-74.62%)
Mutual labels:  imagenet, neural-architecture-search
Pnasnet.pytorch
PyTorch implementation of PNASNet-5 on ImageNet
Stars: ✭ 309 (+56.85%)
Mutual labels:  imagenet, neural-architecture-search
Randwirenn
Implementation of: "Exploring Randomly Wired Neural Networks for Image Recognition"
Stars: ✭ 675 (+242.64%)
Mutual labels:  imagenet, neural-architecture-search
Pnasnet.tf
TensorFlow implementation of PNASNet-5 on ImageNet
Stars: ✭ 102 (-48.22%)
Mutual labels:  imagenet, neural-architecture-search
Nni
An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
Stars: ✭ 10,698 (+5330.46%)
Mutual labels:  neural-architecture-search, distributed
Petridishnn
Code for the neural architecture search methods contained in the paper Efficient Forward Neural Architecture Search
Stars: ✭ 112 (-43.15%)
Mutual labels:  imagenet, neural-architecture-search
Lycoris
A lightweight and easy-to-use deep learning framework with neural architecture search.
Stars: ✭ 180 (-8.63%)
Mutual labels:  neural-architecture-search
Arewedistributedyet
Website + Community effort to unlock the peer-to-peer web at arewedistributedyet.com ⚡🌐🔑
Stars: ✭ 189 (-4.06%)
Mutual labels:  distributed
Imgclsmob
Sandbox for training deep learning networks
Stars: ✭ 2,405 (+1120.81%)
Mutual labels:  imagenet
Naszilla
Naszilla is a Python library for neural architecture search (NAS)
Stars: ✭ 181 (-8.12%)
Mutual labels:  neural-architecture-search
Zi5book
Crawls all Kindle e-books from book.zi5.me, organized by author and book title; each book is provided in both mobi and epub formats, and the full-site crawl runs in a distributed fashion.
Stars: ✭ 191 (-3.05%)
Mutual labels:  distributed
Torchdistill
PyTorch-based modular, configuration-driven framework for knowledge distillation. 🏆18 methods including SOTA are implemented so far. 🎁 Trained models, training logs and configurations are available to ensure reproducibility.
Stars: ✭ 177 (-10.15%)
Mutual labels:  imagenet
Tfmesos
Tensorflow in Docker on Mesos #tfmesos #tensorflow #mesos
Stars: ✭ 194 (-1.52%)
Mutual labels:  distributed
Bigben
BigBen - a generic, multi-tenant, time-based event scheduler and cron scheduling framework
Stars: ✭ 174 (-11.68%)
Mutual labels:  distributed
Spoon
🥄 A package for building specific Proxy Pool for different Sites.
Stars: ✭ 173 (-12.18%)
Mutual labels:  distributed
Lingvo
Lingvo
Stars: ✭ 2,361 (+1098.48%)
Mutual labels:  distributed
Herddb
A JVM-embeddable Distributed Database
Stars: ✭ 192 (-2.54%)
Mutual labels:  distributed
Js Spark
Realtime calculation distributed system. AKA distributed lodash
Stars: ✭ 187 (-5.08%)
Mutual labels:  distributed

AtomNAS: Fine-Grained End-to-End Neural Architecture Search [PDF]

Updates

  • [Mar 2020] A clean MobileNet-series implementation is provided.
  • [Feb 2020] Simplified the validation process and released the pretrained models. Note: this change is not compatible with the previous version.

Overview

This is the codebase (including search) for the ICLR 2020 paper AtomNAS: Fine-Grained End-to-End Neural Architecture Search.

Setup

Distributed Training

Set the following ENV variables:

$DATA_ROOT: Path to data root
$METIS_WORKER_0_HOST: IP address of worker 0
$METIS_WORKER_0_PORT: Port used for initializing distributed environment
$METIS_TASK_INDEX: Index of task
$ARNOLD_WORKER_NUM: Number of workers
$ARNOLD_WORKER_GPU: Number of GPUs (NOTE: must exactly match the number of GPUs made visible via `CUDA_VISIBLE_DEVICES`)
$ARNOLD_OUTPUT: Output directory
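
For example (a minimal sketch, assuming a hypothetical 2-node setup with 8 GPUs per node; the IP address, port, and paths are placeholders, not values from this repo):

    export DATA_ROOT=/path/to/data                 # placeholder path
    export METIS_WORKER_0_HOST=10.0.0.1            # IP address of worker 0
    export METIS_WORKER_0_PORT=29500               # a free port on worker 0
    export METIS_TASK_INDEX=0                      # 0 on worker 0, 1 on worker 1, ...
    export ARNOLD_WORKER_NUM=2                     # total number of workers
    export ARNOLD_WORKER_GPU=8                     # GPUs per worker
    export ARNOLD_OUTPUT=/path/to/output           # placeholder path
    export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7    # must expose exactly $ARNOLD_WORKER_GPU GPUs
    bash scripts/run.sh apps/slimming/shrink/atomnas_a.yml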

Non-Distributed Training (Not Recommended)

Set the following ENV variables:

$DATA_ROOT: Path to data root
$ARNOLD_WORKER_GPU: Number of GPUs (NOTE: must exactly match the number of GPUs made visible via `CUDA_VISIBLE_DEVICES`)
$ARNOLD_OUTPUT: Output directory
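
For example (a minimal sketch with placeholder paths, assuming a single machine with 4 GPUs):

    export DATA_ROOT=/path/to/data                 # placeholder path
    export ARNOLD_WORKER_GPU=4                     # GPUs on this machine
    export ARNOLD_OUTPUT=/path/to/output           # placeholder path
    export CUDA_VISIBLE_DEVICES=0,1,2,3            # must expose exactly $ARNOLD_WORKER_GPU GPUs
    bash scripts/run.sh apps/slimming/shrink/atomnas_a.yml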

Reproduce AtomNAS results

For Table 1

  • AtomNAS-A: bash scripts/run.sh apps/slimming/shrink/atomnas_a.yml
  • AtomNAS-B: bash scripts/run.sh apps/slimming/shrink/atomnas_b.yml
  • AtomNAS-C: bash scripts/run.sh apps/slimming/shrink/atomnas_c.yml

If everything is set up correctly, you should obtain results similar to those reported in the paper.

Pretrained models can be downloaded from OneDrive.

Testing

For AtomNAS:

FILE=$(realpath {{log_dir_path}}) checkpoint=ckpt ATOMNAS_VAL=True bash scripts/run.sh apps/eval/eval_shrink.yml

For AtomNAS+:

TRAIN_CONFIG=$(realpath {{train_config_path}}) ATOMNAS_VAL=True bash scripts/run.sh apps/eval/eval_se.yml --pretrained {{ckpt_path}}

Related Info

  1. Requirements

    • See requirements.txt
  2. Environment

    • The code is developed with Python 3 and requires NVIDIA GPUs. It was developed and tested on 4 servers with a total of 32 NVIDIA V100 GPUs; other platforms and GPU models are not fully tested.
  3. Dataset

    • Prepare the ImageNet data following the PyTorch ImageNet example.
    • Optional: generate an LMDB dataset with utils/lmdb_dataset.py. If you skip this step, override dataset:imagenet1k_lmdb in the YAML config with dataset:imagenet1k.
    • The directory structure of $DATA_ROOT should look like this:
      ${DATA_ROOT}
      ├── imagenet
      └── imagenet_lmdb
      
  4. Miscellaneous

    • The codebase is a general, PyTorch-based ImageNet training framework driven by YAML configs, with several extensions under the apps dir (a hypothetical config illustrating these features follows this list).
      • YAML config with additional features
        • ${ENV} substitution in YAML configs.
        • _include for hierarchical configs.
        • _default key for overwriting defaults.
        • xxx.yyy.zzz for partial overwriting.
      • --{{opt}} {{new_val}} for command-line overwriting.
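
As an illustration of these config features, a hypothetical YAML file might look like the following (the file names and keys below are invented for illustration and are not taken from the repo):

    # my_experiment.yml -- hypothetical example only
    _include: apps/base.yml            # hierarchical include of another config
    _default:
      dataset: imagenet1k_lmdb         # default value, open to overwriting
    data_root: ${DATA_ROOT}            # environment variable substitution
    optimizer.momentum: 0.9            # xxx.yyy.zzz style partial overwriting

A single option can likewise be overridden from the command line, e.g. by appending --dataset imagenet1k to the scripts/run.sh invocation.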

Acknowledgment

This repo is based on slimmable_networks and benefits from the following projects.

Thanks to the contributors of these repos!

Citation

If you find this work or code helpful in your research, please cite:

@inproceedings{
    mei2020atomnas,
    title={Atom{NAS}: Fine-Grained End-to-End Neural Architecture Search},
    author={Jieru Mei and Yingwei Li and Xiaochen Lian and Xiaojie Jin and Linjie Yang and Alan Yuille and Jianchao Yang},
    booktitle={International Conference on Learning Representations},
    year={2020},
    url={https://openreview.net/forum?id=BylQSxHFwr}
}