
d-li14 / Mobilenetv3.pytorch

License: MIT
74.3% MobileNetV3-Large and 67.2% MobileNetV3-Small model on ImageNet

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Mobilenetv3.pytorch

Cnn Models
ImageNet pre-trained models with batch normalization for the Caffe framework
Stars: ✭ 355 (+25.44%)
Mutual labels:  imagenet, pretrained-models
Hbonet
[ICCV 2019] Harmonious Bottleneck on Two Orthogonal Dimensions
Stars: ✭ 94 (-66.78%)
Mutual labels:  imagenet, pretrained-models
Mobilenetv2.pytorch
72.8% MobileNetV2 1.0 model on ImageNet and a spectrum of pre-trained MobileNetV2 models
Stars: ✭ 369 (+30.39%)
Mutual labels:  imagenet, pretrained-models
Neural Backed Decision Trees
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
Stars: ✭ 411 (+45.23%)
Mutual labels:  imagenet, pretrained-models
regnet.pytorch
PyTorch-style and human-readable RegNet with a spectrum of pre-trained models
Stars: ✭ 50 (-82.33%)
Mutual labels:  imagenet, pretrained-models
Segmentation models.pytorch
Segmentation models with pretrained backbones. PyTorch.
Stars: ✭ 4,584 (+1519.79%)
Mutual labels:  imagenet, pretrained-models
Classification models
Classification models trained on ImageNet. Keras.
Stars: ✭ 938 (+231.45%)
Mutual labels:  imagenet, pretrained-models
Efficientnet Pytorch
A PyTorch implementation of EfficientNet and EfficientNetV2 (coming soon!)
Stars: ✭ 6,685 (+2262.19%)
Mutual labels:  imagenet, pretrained-models
Imgclsmob
Sandbox for training deep learning networks
Stars: ✭ 2,405 (+749.82%)
Mutual labels:  imagenet, pretrained-models
Efficientnet
Implementation of EfficientNet model. Keras and TensorFlow Keras.
Stars: ✭ 1,920 (+578.45%)
Mutual labels:  imagenet, pretrained-models
Ghostnet
CV backbones including GhostNet, TinyNet and TNT, developed by Huawei Noah's Ark Lab.
Stars: ✭ 1,744 (+516.25%)
Mutual labels:  imagenet, pretrained-models
super-gradients
Easily train or fine-tune SOTA computer vision models with one open source training library
Stars: ✭ 429 (+51.59%)
Mutual labels:  imagenet, pretrained-models
ghostnet.pytorch
73.6% GhostNet 1.0x pre-trained model on ImageNet
Stars: ✭ 90 (-68.2%)
Mutual labels:  imagenet, pretrained-models
pigallery
PiGallery: AI-powered Self-hosted Secure Multi-user Image Gallery and Detailed Image analysis using Machine Learning, EXIF Parsing and Geo Tagging
Stars: ✭ 35 (-87.63%)
Mutual labels:  imagenet, pretrained-models
rl-trained-agents
A collection of pre-trained RL agents using Stable Baselines3
Stars: ✭ 47 (-83.39%)
Mutual labels:  pretrained-models
HugsVision
HugsVision is an easy-to-use Hugging Face wrapper for state-of-the-art computer vision
Stars: ✭ 154 (-45.58%)
Mutual labels:  pretrained-models
Swin-Transformer
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
Stars: ✭ 8,046 (+2743.11%)
Mutual labels:  imagenet
pptod
Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System (ACL 2022)
Stars: ✭ 77 (-72.79%)
Mutual labels:  pretrained-models
Bert Squad
SQuAD Question Answering Using BERT, PyTorch
Stars: ✭ 256 (-9.54%)
Mutual labels:  pretrained-models
syntaxdot
Neural syntax annotator, supporting sequence labeling, lemmatization, and dependency parsing.
Stars: ✭ 32 (-88.69%)
Mutual labels:  pretrained-models

PyTorch Implementation of MobileNet V3

Reproduction of the MobileNetV3 architecture as described in Searching for MobileNetV3 by Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, and Hartwig Adam, evaluated on the ILSVRC2012 benchmark and implemented in the PyTorch framework.

Requirements

Dataset

Download the ImageNet dataset and move validation images to labeled subfolders. To do this, you can use the following script: https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
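
Once the validation images are in labeled subfolders, the dataset can be consumed with torchvision's standard ImageFolder pipeline. Below is a minimal sketch; the root path and the usual 224x224 ImageNet preprocessing are assumptions, not prescribed by this repo:

import torch
from torchvision import datasets, transforms

# Standard ImageNet channel statistics
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

val_dataset = datasets.ImageFolder(
    'imagenet/val',  # each class in its own subfolder, as created by valprep.sh
    transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        normalize,
    ]))

val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=256,
                                         shuffle=False, num_workers=8)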

Training recipe

  • batch size 1024
  • 150 epochs
  • learning rate 0.4 (ramps up from 0.1 to 0.4 over the first 5 epochs)
  • cosine LR decay
  • weight decay 0.00004
  • dropout rate 0.2 (0.1 for the Small 0.75 variant)
  • no weight decay on biases and BN parameters
  • label smoothing 0.1 (Large variants only); see the sketch below
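
A rough PyTorch sketch of this recipe follows. The helper function and its name are illustrative, not part of this repo, and the cosine schedule is stepped once per epoch for brevity:

import math
import torch
import torch.nn as nn
from mobilenetv3 import mobilenetv3_large

def param_groups_no_bn_bias_decay(model, weight_decay=0.00004):
    # Apply weight decay only to conv/linear weights; biases and BN
    # parameters (all 1-D tensors) get no weight decay, per the recipe.
    decay, no_decay = [], []
    for p in model.parameters():
        (no_decay if p.ndim == 1 else decay).append(p)
    return [{'params': decay, 'weight_decay': weight_decay},
            {'params': no_decay, 'weight_decay': 0.0}]

model = mobilenetv3_large()
optimizer = torch.optim.SGD(param_groups_no_bn_bias_decay(model),
                            lr=0.4, momentum=0.9)

def lr_at(epoch, epochs=150, warmup=5, base_lr=0.4, warmup_from=0.1):
    if epoch < warmup:  # linear ramp from 0.1 to 0.4 over the first 5 epochs
        return warmup_from + (base_lr - warmup_from) * epoch / warmup
    t = (epoch - warmup) / (epochs - warmup)  # cosine decay afterwards
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))

# Label smoothing 0.1 (Large variants only; requires PyTorch >= 1.10)
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

for epoch in range(150):
    for g in optimizer.param_groups:
        g['lr'] = lr_at(epoch)
    # ... training loop over batches of size 1024 goes here ...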

Models

Architecture             # Parameters   MFLOPs   Top-1 / Top-5 Accuracy (%)
MobileNetV3-Large 1.0    5.483M         216.60   74.280 / 91.928
MobileNetV3-Large 0.75   3.994M         154.57   72.842 / 90.846
MobileNetV3-Small 1.0    2.543M          56.52   67.214 / 87.304
MobileNetV3-Small 0.75   2.042M          43.40   64.876 / 85.498

The pre-trained models can be loaded as follows:

import torch
from mobilenetv3 import mobilenetv3_large, mobilenetv3_small

net_large = mobilenetv3_large()
net_small = mobilenetv3_small()

net_large.load_state_dict(torch.load('pretrained/mobilenetv3-large-1cd25616.pth'))
net_small.load_state_dict(torch.load('pretrained/mobilenetv3-small-55df8e1f.pth'))
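
As a quick sanity check, the loaded model can be run on a dummy input; the shapes and eval-mode handling below are illustrative, not prescribed by the repo:

net_large.eval()  # disable dropout and put BN in inference mode
with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)   # dummy 224x224 RGB batch
    logits = net_large(x)             # shape (1, 1000): ImageNet class logits
    top5 = logits.topk(5, dim=1).indices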

Citation

@InProceedings{Howard_2019_ICCV,
author = {Howard, Andrew and Sandler, Mark and Chu, Grace and Chen, Liang-Chieh and Chen, Bo and Tan, Mingxing and Wang, Weijun and Zhu, Yukun and Pang, Ruoming and Vasudevan, Vijay and Le, Quoc V. and Adam, Hartwig},
title = {Searching for MobileNetV3},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}

If you find this implementation helpful in your research, please also consider citing:

@InProceedings{Li_2019_ICCV,
author = {Li, Duo and Zhou, Aojun and Yao, Anbang},
title = {HBONet: Harmonious Bottleneck on Two Orthogonal Dimensions},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}