
d-li14 / Mobilenetv2.pytorch

License: Apache-2.0
72.8% MobileNetV2 1.0 model on ImageNet and a spectrum of pre-trained MobileNetV2 models

Programming Languages

Python

Projects that are alternatives of or similar to Mobilenetv2.pytorch

Hbonet
[ICCV 2019] Harmonious Bottleneck on Two Orthogonal Dimensions
Stars: ✭ 94 (-74.53%)
Mutual labels:  imagenet, pretrained-models, mobilenetv2
Iresnet
Improved Residual Networks (https://arxiv.org/pdf/2004.04989.pdf)
Stars: ✭ 163 (-55.83%)
Mutual labels:  deep-neural-networks, imagenet
Models Comparison.pytorch
Code for the paper Benchmark Analysis of Representative Deep Neural Network Architectures
Stars: ✭ 148 (-59.89%)
Mutual labels:  deep-neural-networks, imagenet
ghostnet.pytorch
73.6% GhostNet 1.0x pre-trained model on ImageNet
Stars: ✭ 90 (-75.61%)
Mutual labels:  imagenet, pretrained-models
Constrained attention filter
(ECCV 2020) Tensorflow implementation of A Generic Visualization Approach for Convolutional Neural Networks
Stars: ✭ 36 (-90.24%)
Mutual labels:  deep-neural-networks, imagenet
Aognet
Code for CVPR 2019 paper: "Learning Deep Compositional Grammatical Architectures for Visual Recognition"
Stars: ✭ 132 (-64.23%)
Mutual labels:  deep-neural-networks, imagenet
Pyconv
Pyramidal Convolution: Rethinking Convolutional Neural Networks for Visual Recognition (https://arxiv.org/pdf/2006.11538.pdf)
Stars: ✭ 231 (-37.4%)
Mutual labels:  deep-neural-networks, imagenet
regnet.pytorch
PyTorch-style and human-readable RegNet with a spectrum of pre-trained models
Stars: ✭ 50 (-86.45%)
Mutual labels:  imagenet, pretrained-models
pigallery
PiGallery: AI-powered Self-hosted Secure Multi-user Image Gallery and Detailed Image analysis using Machine Learning, EXIF Parsing and Geo Tagging
Stars: ✭ 35 (-90.51%)
Mutual labels:  imagenet, pretrained-models
super-gradients
Easily train or fine-tune SOTA computer vision models with one open source training library
Stars: ✭ 429 (+16.26%)
Mutual labels:  imagenet, pretrained-models
Imgclsmob
Sandbox for training deep learning networks
Stars: ✭ 2,405 (+551.76%)
Mutual labels:  imagenet, pretrained-models
Segmentation models.pytorch
Segmentation models with pretrained backbones. PyTorch.
Stars: ✭ 4,584 (+1142.28%)
Mutual labels:  imagenet, pretrained-models
Efficientnet
Implementation of EfficientNet model. Keras and TensorFlow Keras.
Stars: ✭ 1,920 (+420.33%)
Mutual labels:  imagenet, pretrained-models
Glasses
High-quality Neural Networks for Computer Vision 😎
Stars: ✭ 138 (-62.6%)
Mutual labels:  deep-neural-networks, pretrained-models
Ghostnet
CV backbones including GhostNet, TinyNet and TNT, developed by Huawei Noah's Ark Lab.
Stars: ✭ 1,744 (+372.63%)
Mutual labels:  imagenet, pretrained-models
Octconv.pytorch
PyTorch implementation of Octave Convolution with pre-trained Oct-ResNet and Oct-MobileNet models
Stars: ✭ 229 (-37.94%)
Mutual labels:  deep-neural-networks, imagenet
Classification models
Classification models trained on ImageNet. Keras.
Stars: ✭ 938 (+154.2%)
Mutual labels:  imagenet, pretrained-models
Mobilenet Caffe
Caffe Implementation of Google's MobileNets (v1 and v2)
Stars: ✭ 1,217 (+229.81%)
Mutual labels:  imagenet, mobilenetv2
SAN
[ECCV 2020] Scale Adaptive Network: Learning to Learn Parameterized Classification Networks for Scalable Input Images
Stars: ✭ 41 (-88.89%)
Mutual labels:  imagenet, mobilenetv2
Mobilenetv3.pytorch
74.3% MobileNetV3-Large and 67.2% MobileNetV3-Small model on ImageNet
Stars: ✭ 283 (-23.31%)
Mutual labels:  imagenet, pretrained-models

PyTorch Implementation of MobileNet V2

+ Release of the next generation of MobileNets in my repo *mobilenetv3.pytorch*
+ Release of an advanced design of MobileNetV2 in my repo *HBONet* [ICCV 2019]
+ Release of better pre-trained models. See below for details.

Reproduction of the MobileNetV2 architecture, as described in MobileNetV2: Inverted Residuals and Linear Bottlenecks by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov and Liang-Chieh Chen, on the ILSVRC2012 benchmark with the PyTorch framework.

This implementation provides an example procedure for training and validating any prevalent deep neural network architecture, with modular data processing, training, logging and visualization integrated.

Requirements

Dependencies

  • PyTorch 1.0+
  • NVIDIA-DALI (in development, not recommended)

Dataset

Download the ImageNet dataset and move validation images to labeled subfolders. To do this, you can use the following script: https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
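
As a quick sanity check after running the script, the relabeled validation split should load directly with torchvision's ImageFolder. The sketch below uses a placeholder local path, and the expected counts assume the standard ILSVRC2012 layout.

import torchvision.datasets as datasets
import torchvision.transforms as transforms

# Placeholder path to the relabeled validation split
valdir = '/path/to/ILSVRC2012/val'

# Each of the 1000 class subfolders becomes one label in ImageFolder
val_dataset = datasets.ImageFolder(
    valdir,
    transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ]))

print(len(val_dataset))          # expected: 50000 validation images
print(len(val_dataset.classes))  # expected: 1000 class folders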

Pretrained models

The pretrained MobileNetV2 1.0 achieves 72.834% top-1 and 91.060% top-5 accuracy on the ImageNet validation set, which is higher than the numbers reported in the original paper and the official TensorFlow implementation.

MobileNetV2 with a spectrum of width multipliers

Architecture       # Parameters   MFLOPs   Top-1 / Top-5 Accuracy (%)
MobileNetV2 1.0    3.504M         300.79   72.192 / 90.534
MobileNetV2 0.75   2.636M         209.08   69.952 / 88.986
MobileNetV2 0.5    1.968M          97.14   64.592 / 85.392
MobileNetV2 0.35   1.677M          59.29   60.092 / 82.172
MobileNetV2 0.25   1.519M          37.21   52.352 / 75.932
MobileNetV2 0.1    1.356M          12.92   34.896 / 56.564
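
Each width multiplier corresponds to its own pre-trained checkpoint. A minimal sketch of instantiating a slimmer variant is shown below; width_mult is assumed to be the keyword accepted by models/imagenet/mobilenetv2.py, mirroring the --width-mult flag of imagenet.py.

from models.imagenet import mobilenetv2

# width_mult mirrors the --width-mult command-line flag (assumed keyword name)
net_small = mobilenetv2(width_mult=0.5)

# The parameter count should roughly match the 1.968M reported in the table above
print(sum(p.numel() for p in net_small.parameters()))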

MobileNetV2 1.0 with a spectrum of input resolutions

Architecture          # Parameters   MFLOPs   Top-1 / Top-5 Accuracy (%)
MobileNetV2 224x224   3.504M         300.79   72.192 / 90.534
MobileNetV2 192x192   3.504M         221.33   71.076 / 89.760
MobileNetV2 160x160   3.504M         154.10   69.504 / 88.848
MobileNetV2 128x128   3.504M          99.09   66.740 / 86.952
MobileNetV2 96x96     3.504M          56.31   62.696 / 84.046

Taking MobileNetV2 1.0 as an example, pretrained models can be easily imported with the following lines and then fine-tuned for other vision tasks or deployed on resource-aware platforms.

import torch

from models.imagenet import mobilenetv2

# Instantiate MobileNetV2 1.0 and load the released ImageNet weights
net = mobilenetv2()
net.load_state_dict(torch.load('pretrained/mobilenetv2-c5e733a8.pth'))
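
For fine-tuning, the ImageNet head can then be swapped for a task-specific one. The sketch below assumes the model exposes its final linear layer as an attribute named classifier; check models/imagenet/mobilenetv2.py for the exact attribute name in your checkout, and treat the 10-class head as a placeholder.

import torch
import torch.nn as nn

from models.imagenet import mobilenetv2

net = mobilenetv2()
net.load_state_dict(torch.load('pretrained/mobilenetv2-c5e733a8.pth'))

# Replace the 1000-way ImageNet head with a new task-specific classifier.
# 'classifier' is an assumed attribute name; adjust if the model differs.
net.classifier = nn.Linear(net.classifier.in_features, 10)

# Standard ImageNet-sized forward pass
net.eval()
with torch.no_grad():
    logits = net(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 10])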

Usage

Training

Configuration to reproduce our strong results efficiently, consuming around 2 days on 4x Titan XP GPUs with non-distributed DataParallel and the PyTorch dataloader:

  • batch size 256
  • epoch 150
  • learning rate 0.05
  • LR decay strategy cosine
  • weight decay 0.00004

The newly released model achieves even higher accuracy, using a larger batch size (1024) on 8 GPUs, a higher initial learning rate (0.4) and longer training (250 epochs). In addition, a dropout layer with a dropout rate of 0.2 is inserted before the final FC layer, no weight decay is imposed on biases or BN layers, and the learning rate ramps up from 0.1 to 0.4 over the first five training epochs; a sketch of these two tweaks is given below.
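
The warmup schedule and the bias/BN weight-decay exclusion are not transcribed from the repository's imagenet.py; the following is only a hedged sketch of how the described recipe could be wired up with standard PyTorch.

import math
import torch

def warmup_cosine_lr(epoch, total_epochs=250, base_lr=0.4, warmup_epochs=5, warmup_start=0.1):
    # Linear ramp from 0.1 to 0.4 over the first 5 epochs, then cosine decay to 0
    if epoch < warmup_epochs:
        return warmup_start + (base_lr - warmup_start) * epoch / warmup_epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

def param_groups_no_bn_decay(model, weight_decay=4e-5):
    # Exclude biases and BatchNorm affine parameters from weight decay
    decay, no_decay = [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        (no_decay if p.dim() == 1 or name.endswith('.bias') else decay).append(p)
    return [{'params': decay, 'weight_decay': weight_decay},
            {'params': no_decay, 'weight_decay': 0.0}]

# Usage with SGD: update the learning rate once per epoch
# optimizer = torch.optim.SGD(param_groups_no_bn_decay(net), lr=0.1, momentum=0.9)
# for g in optimizer.param_groups:
#     g['lr'] = warmup_cosine_lr(epoch)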

python imagenet.py \
    -a mobilenetv2 \
    -d <path-to-ILSVRC2012-data> \
    --epochs 150 \
    --lr-decay cos \
    --lr 0.05 \
    --wd 4e-5 \
    -c <path-to-save-checkpoints> \
    --width-mult <width-multiplier> \
    --input-size <input-resolution> \
    -j <num-workers>

Test

python imagenet.py \
    -a mobilenetv2 \
    -d <path-to-ILSVRC2012-data> \
    --weight <pretrained-pth-file> \
    --width-mult <width-multiplier> \
    --input-size <input-resolution> \
    -e

Citations

If you use this model, please cite the MobileNetV2 paper with the following BibTeX entry.

@InProceedings{Sandler_2018_CVPR,
  author = {Sandler, Mark and Howard, Andrew and Zhu, Menglong and Zhmoginov, Andrey and Chen, Liang-Chieh},
  title = {MobileNetV2: Inverted Residuals and Linear Bottlenecks},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2018}
}

If you find this implementation helpful in your research, please also consider citing:

@InProceedings{Li_2019_ICCV,
  author = {Li, Duo and Zhou, Aojun and Yao, Anbang},
  title = {HBONet: Harmonious Bottleneck on Two Orthogonal Dimensions},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month = {Oct},
  year = {2019}
}

License

This repository is licensed under the Apache License 2.0.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].