seungwonpark / Randwirenn

Implementation of: "Exploring Randomly Wired Neural Networks for Image Recognition"

Projects that are alternatives of or similar to Randwirenn

Atomnas
Code for ICLR 2020 paper 'AtomNAS: Fine-Grained End-to-End Neural Architecture Search'
Stars: ✭ 197 (-70.81%)
Mutual labels:  imagenet, neural-architecture-search
Pnasnet.tf
TensorFlow implementation of PNASNet-5 on ImageNet
Stars: ✭ 102 (-84.89%)
Mutual labels:  imagenet, neural-architecture-search
Randwirenn
Pytorch Implementation of: "Exploring Randomly Wired Neural Networks for Image Recognition"
Stars: ✭ 270 (-60%)
Mutual labels:  imagenet, neural-architecture-search
Petridishnn
Code for the neural architecture search methods contained in the paper Efficient Forward Neural Architecture Search
Stars: ✭ 112 (-83.41%)
Mutual labels:  imagenet, neural-architecture-search
TF-NAS
TF-NAS: Rethinking Three Search Freedoms of Latency-Constrained Differentiable Neural Architecture Search (ECCV2020)
Stars: ✭ 66 (-90.22%)
Mutual labels:  imagenet, neural-architecture-search
regnet.pytorch
PyTorch-style and human-readable RegNet with a spectrum of pre-trained models
Stars: ✭ 50 (-92.59%)
Mutual labels:  imagenet, neural-architecture-search
Pnasnet.pytorch
PyTorch implementation of PNASNet-5 on ImageNet
Stars: ✭ 309 (-54.22%)
Mutual labels:  imagenet, neural-architecture-search
Class Balanced Loss
Class-Balanced Loss Based on Effective Number of Samples. CVPR 2019
Stars: ✭ 433 (-35.85%)
Mutual labels:  imagenet
Mmclassification
OpenMMLab Image Classification Toolbox and Benchmark
Stars: ✭ 532 (-21.19%)
Mutual labels:  imagenet
Computer Vision
Programming Assignments and Lectures for Stanford's CS231n: Convolutional Neural Networks for Visual Recognition
Stars: ✭ 408 (-39.56%)
Mutual labels:  imagenet
Espnetv2
A lightweight, power-efficient, and general-purpose convolutional neural network
Stars: ✭ 377 (-44.15%)
Mutual labels:  imagenet
Fasterseg
[ICLR 2020] "FasterSeg: Searching for Faster Real-time Semantic Segmentation" by Wuyang Chen, Xinyu Gong, Xianming Liu, Qian Zhang, Yuan Li, Zhangyang Wang
Stars: ✭ 438 (-35.11%)
Mutual labels:  neural-architecture-search
Lemniscate.pytorch
Unsupervised Feature Learning via Non-parametric Instance Discrimination
Stars: ✭ 532 (-21.19%)
Mutual labels:  imagenet
Neural Backed Decision Trees
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
Stars: ✭ 411 (-39.11%)
Mutual labels:  imagenet
Pytorch Mobilenet V3
MobileNetV3 in pytorch and ImageNet pretrained models
Stars: ✭ 616 (-8.74%)
Mutual labels:  imagenet
Autogan
[ICCV 2019] "AutoGAN: Neural Architecture Search for Generative Adversarial Networks" by Xinyu Gong, Shiyu Chang, Yifan Jiang and Zhangyang Wang
Stars: ✭ 388 (-42.52%)
Mutual labels:  neural-architecture-search
Awesome Federated Learning
Federated Learning Library: https://fedml.ai
Stars: ✭ 624 (-7.56%)
Mutual labels:  neural-architecture-search
Label Studio
Label Studio is a multi-type data labeling and annotation tool with standardized output format
Stars: ✭ 7,264 (+976.15%)
Mutual labels:  imagenet
Ml5 Library
Friendly machine learning for the web! 🤖
Stars: ✭ 5,280 (+682.22%)
Mutual labels:  imagenet
Tensorflow object tracking video
Object tracking in TensorFlow (localization, detection, classification), developed to participate in the ImageNet VID competition
Stars: ✭ 491 (-27.26%)
Mutual labels:  imagenet

RandWireNN

Unofficial PyTorch Implementation of: Exploring Randomly Wired Neural Networks for Image Recognition.

Results

Validation results on the ImageNet (ILSVRC2012) dataset:

| Model | Top-1 accuracy (%), paper | Top-1 accuracy (%), here |
|-------|---------------------------|--------------------------|
| RandWire-WS(4, 0.75), C=78 | 74.7 | 69.2 |
  • (2019.06.26) 69.2%: 250 epochs with the SGD optimizer, lr 0.1, momentum 0.9, weight decay 5e-5, and a cosine annealing lr schedule (no label smoothing applied), as sketched below
  • (2019.04.14) 62.6%: 396k steps with the SGD optimizer, lr 0.1, momentum 0.9, weight decay 5e-5, lr decayed by about 0.1x at 300k steps
  • (2019.04.12) 62.6%: 416k steps with the AdaBound optimizer, initial lr 0.001 (decayed by about 0.1x at 300k steps), final lr 0.1, no weight decay
  • (2019.04) JiaminRen's implementation reached accuracy nearly matching the paper's, using a training strategy identical to the paper's.
  • (2019.04.10) 63.0%: 450k steps with the Adam optimizer, initial lr 0.001, lr decayed by about 0.1x every 150k steps
  • (2019.04.07) 56.8%: training took about 16 hours on an AWS p3.2xlarge (NVIDIA V100). 120k steps were done in total, using the Adam optimizer with lr=0.001 and batch_size=128, with no learning rate decay.
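
For reference, the best recipe above (SGD with lr 0.1, momentum 0.9, weight decay 5e-5, and cosine annealing over 250 epochs) corresponds to a standard PyTorch setup along the lines of the following sketch. This is a minimal, assumed sketch, not the repository's trainer code; model and train_loader are placeholders.

import torch

# Minimal sketch of the 69.2% recipe above. `model` and `train_loader`
# are placeholders; the repository's trainer.py may differ in details.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-5)
# Cosine annealing from lr=0.1 toward 0 over 250 epochs.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=250)
criterion = torch.nn.CrossEntropyLoss()  # no label smoothing, as noted above

for epoch in range(250):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # advance the schedule once per epoch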

Dependencies

This code was tested on Python 3.6 with PyTorch 1.0.1. Other required packages can be installed with:

pip install -r requirements.txt

Generate random DAG

cd model/graphs
python er.py -p 0.2 -o er-02.txt # Erdős-Rényi
python ba.py -m 7 -o ba-7.txt # Barabási-Albert
python ws.py -k 4 -p 0.75 -o ws-4-075.txt # Watts-Strogatz
# number of nodes can be set with the -n option

Each of the commands above writes a txt file of the form:

(number of nodes)
(number of edges)
(edges, one per line)
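
As a rough illustration (an assumption based on the paper, not a copy of the repository's scripts), a generator like ws.py can sample a Watts-Strogatz graph with networkx, direct each edge from the lower-numbered node to the higher-numbered one so that the result is a DAG, and write the file in the format above:

import networkx as nx

# Hypothetical sketch of a WS(4, 0.75) generator emitting the format above;
# the repository's ws.py may differ (node count, seeding, output naming).
n, k, p = 32, 4, 0.75
g = nx.connected_watts_strogatz_graph(n, k, p)

# Direct each undirected edge from the lower node id to the higher one;
# orienting edges along a fixed node ordering guarantees acyclicity.
edges = sorted((min(u, v), max(u, v)) for u, v in g.edges())

with open('ws-4-075.txt', 'w') as f:
    f.write('%d\n' % n)              # number of nodes
    f.write('%d\n' % len(edges))     # number of edges
    for u, v in edges:
        f.write('%d %d\n' % (u, v))  # one edge per line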

Train RandWireNN

  1. Download the ImageNet dataset. The train/val folders should each contain 1,000 directories, one per category, each holding that category's images; the expected layout is sketched after this list. For arranging the validation image files, this script can be useful: https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh

  2. Edit config.yaml

    cd config
    cp default.yaml config.yaml
    vim config.yaml # specify data directory, graph txt files
    
  3. Train

    Note: validation performed during training does not use the entire test set, since that would take too much time (the reduced validation pass takes about 3 minutes).

    python trainer.py -c [config yaml] -m [name]
    
  4. View tensorboardX

    tensorboard --logdir ./logs
    
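The directory layout in step 1 is the standard ImageFolder layout. As a sketch (assuming a torchvision-style loading pipeline, which the repository may or may not use), the data would be read like this:

import torchvision.datasets as datasets
import torchvision.transforms as transforms

# Assumed layout, one directory per WordNet ID:
#   <data_dir>/train/n01440764/*.JPEG
#   <data_dir>/val/n01440764/*.JPEG
# `<data_dir>` is a placeholder; point it at your ImageNet root.
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder('<data_dir>/train', transform=transform)
val_set = datasets.ImageFolder('<data_dir>/val', transform=transform)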

Validation

Run full validation:

python validation.py -c [config path] -p [checkpoint path]

This will show the accuracy and average test loss of the trained model.
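
A minimal sketch of what such a full validation pass computes (top-1 accuracy and mean test loss); model and val_loader are placeholders, not names from the repository, and validation.py's actual reporting may differ:

import torch

# Hedged sketch of a full validation pass; `model` and `val_loader`
# are placeholders rather than names taken from the repository.
model.eval()
criterion = torch.nn.CrossEntropyLoss()
correct, total, loss_sum = 0, 0, 0.0
with torch.no_grad():
    for images, labels in val_loader:
        outputs = model(images)
        loss_sum += criterion(outputs, labels).item() * labels.size(0)
        correct += (outputs.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
print('top-1 accuracy: %.2f%%' % (100.0 * correct / total))
print('average test loss: %.4f' % (loss_sum / total))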

Author

Seungwon Park / @seungwonpark

License

Apache License 2.0
