leaderj1001 / MobileNetV3-Pytorch
License: MIT
Implementing the Searching for MobileNetV3 paper using PyTorch
Stars: ✭ 243
Programming Languages
python
Projects that are alternatives of or similar to Mobilenetv3 Pytorch
Shufflenet V2 Tensorflow
A lightweight convolutional neural network
Stars: ✭ 145 (-40.33%)
Mutual labels: imagenet
Triplet Attention
Official PyTorch Implementation for "Rotate to Attend: Convolutional Triplet Attention Module." [WACV 2021]
Stars: ✭ 222 (-8.64%)
Mutual labels: imagenet
Models Comparison.pytorch
Code for the paper Benchmark Analysis of Representative Deep Neural Network Architectures
Stars: ✭ 148 (-39.09%)
Mutual labels: imagenet
Imgclsmob
Sandbox for training deep learning networks
Stars: ✭ 2,405 (+889.71%)
Mutual labels: imagenet
Sequential Imagenet Dataloader
A plug-in replacement for DataLoader to load ImageNet disk-sequentially in PyTorch.
Stars: ✭ 198 (-18.52%)
Mutual labels: imagenet
Pyconv
Pyramidal Convolution: Rethinking Convolutional Neural Networks for Visual Recognition (https://arxiv.org/pdf/2006.11538.pdf)
Stars: ✭ 231 (-4.94%)
Mutual labels: imagenet
Torchdistill
PyTorch-based modular, configuration-driven framework for knowledge distillation. 🏆18 methods including SOTA are implemented so far. 🎁 Trained models, training logs and configurations are available for ensuring reproducibility.
Stars: ✭ 177 (-27.16%)
Mutual labels: imagenet
Imagenet
TensorFlow implementation of AlexNet and its training and testing on ImageNet ILSVRC 2012 dataset
Stars: ✭ 155 (-36.21%)
Mutual labels: imagenet
Labelimg
🖍️ LabelImg is a graphical image annotation tool for labeling object bounding boxes in images
Stars: ✭ 16,088 (+6520.58%)
Mutual labels: imagenet
Alexnet
implement AlexNet with C / convolutional neural network / machine learning / computer vision
Stars: ✭ 147 (-39.51%)
Mutual labels: imagenet
Octconv.pytorch
PyTorch implementation of Octave Convolution with pre-trained Oct-ResNet and Oct-MobileNet models
Stars: ✭ 229 (-5.76%)
Mutual labels: imagenet
Efficientnet
Implementation of EfficientNet model. Keras and TensorFlow Keras.
Stars: ✭ 1,920 (+690.12%)
Mutual labels: imagenet
Atomnas
Code for ICLR 2020 paper 'AtomNAS: Fine-Grained End-to-End Neural Architecture Search'
Stars: ✭ 197 (-18.93%)
Mutual labels: imagenet
Pyramidnet Pytorch
A PyTorch implementation for PyramidNets (Deep Pyramidal Residual Networks, https://arxiv.org/abs/1610.02915)
Stars: ✭ 234 (-3.7%)
Mutual labels: imagenet
Fusenet
Deep fusion project of deeply-fused nets, and the study on the connection to ensembling
Stars: ✭ 230 (-5.35%)
Mutual labels: imagenet
Mini Imagenet Tools
Tools for generating mini-ImageNet dataset and processing batches
Stars: ✭ 209 (-13.99%)
Mutual labels: imagenet
Implementing the Searching for MobileNetV3 paper using PyTorch
- The current model is an early version; it will be generalized as soon as possible.
Paper
- Searching for MobileNetV3 paper
- Authors: Andrew Howard (Google Research), Mark Sandler (Google Research), Grace Chu (Google Research), Liang-Chieh Chen (Google Research), Bo Chen (Google Research), Mingxing Tan (Google Brain), Weijun Wang (Google Research), Yukun Zhu (Google Research), Ruoming Pang (Google Brain), Vijay Vasudevan (Google Brain), Quoc V. Le (Google Brain), Hartwig Adam (Google Research)
Todo
- Experiments on the ImageNet dataset.
- Code refactoring
MobileNetV3 Block
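The paper's core building block is an inverted residual: a 1x1 expansion convolution, a depthwise convolution, an optional squeeze-and-excite stage, and a 1x1 projection, with h-swish activation in the deeper layers. Below is a minimal PyTorch sketch of one such block; the class and argument names are illustrative and may differ from the repository's model.py.

import torch.nn as nn
import torch.nn.functional as F

class HSwish(nn.Module):
    # h-swish(x) = x * ReLU6(x + 3) / 6, as defined in the paper
    def forward(self, x):
        return x * F.relu6(x + 3.0) / 6.0

class SqueezeExcite(nn.Module):
    # Channel attention: squeeze (global pool), excite (two 1x1 convs), rescale.
    def __init__(self, channels, reduction=4):
        super(SqueezeExcite, self).__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Conv2d(channels, channels // reduction, 1)
        self.fc2 = nn.Conv2d(channels // reduction, channels, 1)

    def forward(self, x):
        s = self.pool(x)
        s = F.relu(self.fc1(s))
        s = F.relu6(self.fc2(s) + 3.0) / 6.0  # hard sigmoid
        return x * s

class MobileNetV3Block(nn.Module):
    # expand (1x1) -> depthwise (3x3 or 5x5) -> optional SE -> project (1x1)
    def __init__(self, in_ch, exp_ch, out_ch, kernel, stride, use_se, use_hs):
        super(MobileNetV3Block, self).__init__()
        act = HSwish() if use_hs else nn.ReLU(inplace=True)
        self.use_res = stride == 1 and in_ch == out_ch  # residual only when shapes match
        layers = [
            nn.Conv2d(in_ch, exp_ch, 1, bias=False), nn.BatchNorm2d(exp_ch), act,
            nn.Conv2d(exp_ch, exp_ch, kernel, stride, kernel // 2,
                      groups=exp_ch, bias=False), nn.BatchNorm2d(exp_ch), act,
        ]
        if use_se:
            layers.append(SqueezeExcite(exp_ch))
        layers += [nn.Conv2d(exp_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch)]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_res else out

For example, MobileNetV3Block(16, 64, 24, kernel=3, stride=2, use_se=False, use_hs=False) corresponds to one of the LARGE configuration rows in the paper.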
Experiments
- For CIFAR-100, images were resized to (224, 224); see the transform sketch after the table below.
Datasets | Model | acc1 | acc5 | Epoch | Parameters
---|---|---|---|---|---
CIFAR-100 | MobileNetV3 (LARGE) | 70.44% | 91.34% | 80 | 3.99M
CIFAR-100 | MobileNetV3 (SMALL) | 67.04% | 89.41% | 55 | 1.7M
IMAGENET | MobileNetV3 (LARGE) | WORK IN PROGRESS | - | - | 5.15M
IMAGENET | MobileNetV3 (SMALL) | WORK IN PROGRESS | - | - | 2.94M
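A minimal sketch of the resize pipeline described above, assuming standard torchvision transforms (the repository's actual preprocessing and augmentation may differ):

import torchvision.transforms as transforms
from torchvision.datasets import CIFAR100

# Resize 32x32 CIFAR-100 images to the 224x224 input size used in the
# experiments above, then convert them to tensors.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = CIFAR100(root="./data", train=True, download=True, transform=transform)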
Usage
Train
python main.py
- To change hyper-parameters, check "python main.py --help". An example command follows the list below.
Options:
- --dataset-mode (str) - which dataset to use (example: CIFAR10, CIFAR100), (default: CIFAR100).
- --epochs (int) - number of epochs, (default: 100).
- --batch-size (int) - batch size, (default: 128).
- --learning-rate (float) - learning rate, (default: 1e-1).
- --dropout (float) - dropout rate, (default: 0.3).
- --model-mode (str) - which network to use (example: LARGE, SMALL), (default: LARGE).
- --load-pretrained (bool) - load a saved checkpoint, (default: False).
- --evaluate (bool) - used when testing, (default: False).
- --multiplier (float) - width multiplier for scaling channel counts, (default: 1.0).
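For example, to train the SMALL model on CIFAR-100 for 80 epochs (flag values are illustrative):

python main.py --dataset-mode CIFAR100 --model-mode SMALL --epochs 80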
Test
python main.py --evaluate True
- Put the saved model file in the checkpoint folder and the saved graph file in the saved_graph folder, then run "python main.py --evaluate True".
- To change hyper-parameters, check "python test.py --help".
Options: identical to the training options listed above.
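For example, to evaluate a saved LARGE model on CIFAR-100 (flag values are illustrative):

python main.py --evaluate True --model-mode LARGE --dataset-mode CIFAR100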
Number of Parameters
import torch
from model import MobileNetV3

def get_model_parameters(model):
    # Count parameters by multiplying out the shape of every parameter tensor.
    total_parameters = 0
    for layer in list(model.parameters()):
        layer_parameter = 1
        for l in list(layer.size()):
            layer_parameter *= l
        total_parameters += layer_parameter
    return total_parameters

tmp = torch.randn((128, 3, 224, 224))  # example input batch (N, C, H, W)
model = MobileNetV3(model_mode="LARGE", multiplier=1.0)
print("Number of model parameters: ", get_model_parameters(model))
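Equivalently, PyTorch's built-in Tensor.numel gives the same count in one line:

print("Number of model parameters: ", sum(p.numel() for p in model.parameters()))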
Requirements
- torch==1.0.1