MoGA: Searching Beyond MobileNetV3


Projects that are alternatives of or similar to Moga

Pytorch Imagenet Cifar Coco Voc Training
Training examples and results for the ImageNet (ILSVRC2012), CIFAR-100, COCO 2017, and VOC 2007+2012 datasets. Image classification and object detection. Includes ResNet/EfficientNet/VovNet/DarkNet/RegNet/RetinaNet/FCOS/CenterNet/YOLOv3.
Stars: ✭ 130 (-40.91%)
Mutual labels:  imagenet
Models Comparison.pytorch
Code for the paper "Benchmark Analysis of Representative Deep Neural Network Architectures"
Stars: ✭ 148 (-32.73%)
Mutual labels:  imagenet
Pytorch Cpp
PyTorch C++ inference with LibTorch
Stars: ✭ 194 (-11.82%)
Mutual labels:  imagenet
Aognet
Code for the CVPR 2019 paper "Learning Deep Compositional Grammatical Architectures for Visual Recognition"
Stars: ✭ 132 (-40%)
Mutual labels:  imagenet
Shufflenet V2 Tensorflow
A lightweight convolutional neural network
Stars: ✭ 145 (-34.09%)
Mutual labels:  imagenet
Iresnet
Improved Residual Networks (https://arxiv.org/pdf/2004.04989.pdf)
Stars: ✭ 163 (-25.91%)
Mutual labels:  imagenet
Imagenet
This implements training of popular model architectures, such as AlexNet, ResNet, and VGG, on the ImageNet dataset (currently supports AlexNet, VGG, ResNet, SqueezeNet, and DenseNet)
Stars: ✭ 126 (-42.73%)
Mutual labels:  imagenet
Labelimg
🖍️ LabelImg is a graphical image annotation tool for labeling object bounding boxes in images
Stars: ✭ 16,088 (+7212.73%)
Mutual labels:  imagenet
Alexnet
An implementation of AlexNet in C / convolutional neural network / machine learning / computer vision
Stars: ✭ 147 (-33.18%)
Mutual labels:  imagenet
Torchdistill
PyTorch-based modular, configuration-driven framework for knowledge distillation. 🏆 18 methods, including SOTA, are implemented so far. 🎁 Trained models, training logs, and configurations are available to ensure reproducibility.
Stars: ✭ 177 (-19.55%)
Mutual labels:  imagenet
Mobilenetworks
Keras implementation of Mobile Networks
Stars: ✭ 132 (-40%)
Mutual labels:  imagenet
Efficientnet
Implementation of the EfficientNet model for Keras and TensorFlow Keras.
Stars: ✭ 1,920 (+772.73%)
Mutual labels:  imagenet
Senet Caffe
A Caffe Re-Implementation of SENet
Stars: ✭ 169 (-23.18%)
Mutual labels:  imagenet
Vision4j Collection
Collection of computer vision models, ready to be included in a JVM project
Stars: ✭ 132 (-40%)
Mutual labels:  imagenet
Atomnas
Code for ICLR 2020 paper 'AtomNAS: Fine-Grained End-to-End Neural Architecture Search'
Stars: ✭ 197 (-10.45%)
Mutual labels:  imagenet
Regnet
Pytorch implementation of network design paradigm described in the paper "Designing Network Design Spaces"
Stars: ✭ 129 (-41.36%)
Mutual labels:  imagenet
Imagenet
TensorFlow implementation of AlexNet and its training and testing on ImageNet ILSVRC 2012 dataset
Stars: ✭ 155 (-29.55%)
Mutual labels:  imagenet
Mini Imagenet Tools
Tools for generating mini-ImageNet dataset and processing batches
Stars: ✭ 209 (-5%)
Mutual labels:  imagenet
Sequential Imagenet Dataloader
A plug-in replacement for DataLoader to load ImageNet disk-sequentially in PyTorch.
Stars: ✭ 198 (-10%)
Mutual labels:  imagenet
Imgclsmob
Sandbox for training deep learning networks
Stars: ✭ 2,405 (+993.18%)
Mutual labels:  imagenet

MoGA: Searching Beyond MobileNetV3

We propose the first Mobile GPU-Aware (MoGA) neural architecture search, precisely tailored for real-world applications. The ultimate objective in devising a mobile network is to achieve better performance while maximizing the utilization of bounded resources. While urging higher capability and restraining time consumption, we unconventionally encourage increasing the number of parameters for higher representational power. These three forces are not reconcilable, so we alleviate the tension with weighted evolution techniques. The searched networks at mobile scale outperform MobileNetV3 under similar latency constraints: MoGA-A achieves 75.9% top-1 accuracy on ImageNet, and MoGA-B reaches 75.5% while costing only 0.5 ms more on mobile GPU than MobileNetV3, which scores 75.2%. MoGA-C best attests to GPU awareness, reaching 75.3% while running slower on CPU but faster on GPU.
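The search thus juggles three objectives: higher accuracy and more parameters are encouraged, while mobile-GPU latency is constrained. Purely for intuition, and not as the paper's actual objective (the functional form, weights, and numbers below are illustrative assumptions), a scalarized fitness for ranking candidate architectures during evolution might look like:

def fitness(accuracy, latency_ms, params_m,
            lat_budget_ms=12.0, w_lat=0.02, w_par=0.002):
    # Hypothetical scalarization, not the paper's formula:
    # reward top-1 accuracy and parameter count (representational power),
    # penalize mobile-GPU latency that exceeds a target budget.
    over_budget = max(0.0, latency_ms - lat_budget_ms)
    return accuracy - w_lat * over_budget + w_par * params_m

# Higher fitness wins; two made-up candidates:
print(fitness(accuracy=0.757, latency_ms=12.4, params_m=5.3))
print(fitness(accuracy=0.752, latency_ms=11.8, params_m=4.9))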

MoGA Architectures

Requirements

Benchmarks on ImageNet
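For quick reference, the ImageNet top-1 accuracies quoted in the abstract above:

Model          Top-1 Acc.
MoGA-A         75.9%
MoGA-B         75.5%
MoGA-C         75.3%
MobileNetV3    75.2%

(Latency and parameter figures are reported in the paper.)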

ImageNet Dataset

We use the standard ImageNet 2012 dataset; the only difference is that we reorganized the validation set into one folder per class.
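If your copy of the validation set is still one flat directory of 50,000 images, a small script along these lines can regroup it. This is only a sketch: val_labels.txt is a hypothetical mapping file with one "<filename> <wnid>" pair per line, which you would derive from the ILSVRC2012 devkit ground truth.

import os
import shutil

val_dir = "ILSVRC2012/val"    # flat directory of validation images
mapping = "val_labels.txt"    # hypothetical: "<filename> <wnid>" per line

with open(mapping) as f:
    for line in f:
        filename, wnid = line.split()
        class_dir = os.path.join(val_dir, wnid)
        os.makedirs(class_dir, exist_ok=True)
        # Move each image into a subdirectory named after its class.
        shutil.move(os.path.join(val_dir, filename),
                    os.path.join(class_dir, filename))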

Evaluation

To evaluate a pretrained model, run:

python3 verify.py --model [MoGA_A|MoGA_B|MoGA_C] --device [cuda|cpu] --val-dataset-root [path/to/ILSVRC2012] --pretrained-path [path/to/pretrained_model]
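verify.py is the repository's own entry point. As a hedged sketch of what such an evaluation amounts to (using a torchvision MobileNetV3 as a stand-in, since building the MoGA models and loading the --pretrained-path checkpoint is repository-specific), a top-1 measurement over the class-organized validation folder could look like:

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchvision.models import mobilenet_v3_large

# Standard single-crop ImageNet evaluation preprocessing.
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder expects the one-folder-per-class layout described above.
val_set = datasets.ImageFolder("ILSVRC2012/val", transform)
loader = DataLoader(val_set, batch_size=128, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"
# Stand-in model; with this repository you would instead build
# MoGA_A/B/C and load the weights given by --pretrained-path.
model = mobilenet_v3_large(weights="IMAGENET1K_V1").eval().to(device)

correct = total = 0
with torch.no_grad():
    for images, targets in loader:
        logits = model(images.to(device))
        correct += (logits.argmax(1).cpu() == targets).sum().item()
        total += targets.size(0)
print(f"top-1 accuracy: {correct / total:.2%}")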

Citation

This repository accompanies the paper below; citations are welcome!

@article{chu2019moga,
    title={MoGA: Searching Beyond MobileNetV3},
    author={Chu, Xiangxiang and Zhang, Bo and Xu, Ruijun},
    journal={ICASSP},
    url={https://arxiv.org/pdf/1908.01314.pdf},
    year={2020}
}