
mit-han-lab / Amc Models

Licence: mit
[ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices


AMC Compressed Models

This repo contains some of the compressed models from the paper AMC: AutoML for Model Compression and Acceleration on Mobile Devices (ECCV'18).

Reference

If you find the models useful, please cite our paper:

@inproceedings{he2018amc,
  title={AMC: AutoML for Model Compression and Acceleration on Mobile Devices},
  author={He, Yihui and Lin, Ji and Liu, Zhijian and Wang, Hanrui and Li, Li-Jia and Han, Song},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={784--800},
  year={2018}
}

Download the Pretrained Models

First, download the pretrained models from here and put them in ./checkpoints.

Models

Compressed MobileNets

We provide PyTorch models of MobileNetV1 compressed to 50% of the original FLOPs and to 50% of the original inference time, as well as MobileNetV2 compressed to 70% of the original FLOPs. The comparison with the vanilla models is as follows:

| Models | Top1 Acc (%) | Top5 Acc (%) | Latency (ms) | MACs (M) |
|---|---|---|---|---|
| MobileNetV1 | 70.9 | 89.5 | 123 | 569 |
| MobileNetV1-width*0.75 | 68.4 | 88.2 | 72.5 | 325 |
| MobileNetV1-50%FLOPs | 70.5 | 89.3 | 68.9 | 285 |
| MobileNetV1-50%Time | 70.2 | 89.4 | 63.2 | 272 |
| MobileNetV2-width*0.75 | 69.8 | 89.6 | - | 300 |
| MobileNetV2-70%FLOPs | 70.9 | 89.9 | - | 210 |

To test the model, run:

python eval_mobilenet_torch.py --profile={mobilenet_0.5flops, mobilenet_0.5time, mobilenetv2_0.7flops}
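
The MACs column above counts multiply-accumulate operations per forward pass. As a rough illustration of where MobileNet's savings come from (a sketch of the standard counting formulas, not the repo's own profiler), the MACs of a depthwise-separable block versus a plain convolution can be computed like this:

```python
def conv_macs(h, w, cin, cout, k=3):
    """MACs of a standard k x k convolution on an h x w feature map."""
    return h * w * cin * cout * k * k

def dw_separable_macs(h, w, cin, cout, k=3):
    """MACs of a depthwise-separable block: k x k depthwise + 1 x 1 pointwise."""
    depthwise = h * w * cin * k * k
    pointwise = h * w * cin * cout
    return depthwise + pointwise

# Example layer shape (hypothetical, for illustration): 14 x 14 map, 512 channels.
ratio = dw_separable_macs(14, 14, 512, 512) / conv_macs(14, 14, 512, 512)
print(f"{ratio:.4f}")  # 0.1131, i.e. 1/cout + 1/k^2
```

Channel pruning, as AMC does, then shrinks `cin`/`cout` per layer on top of this, which is why the pruned models above drop from 569M to 285M or 272M MACs.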

Converted TensorFlow Models

We converted the 50% FLOPs and 50% time compressed MobileNetV1 models to TensorFlow. We offer both the normal checkpoint format and the TF-Lite format; the TF-Lite format is what we used to measure speed on mobile.

To replicate the PyTorch results, we wrote a new preprocessing function and adapted some hyper-parameters from the original TF MobileNetV1. To verify the performance, run the following script:

python eval_mobilenet_tf.py --profile={0.5flops, 0.5time}
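
For reference, "PyTorch-style" ImageNet preprocessing usually means scaling pixels to [0, 1], center-cropping, and normalizing with torchvision's channel statistics. A minimal NumPy sketch of that convention (the repo's actual function may differ, e.g. in how it resizes before cropping):

```python
import numpy as np

# ImageNet channel statistics used by torchvision (assumed here to match
# the repo's preprocessing; the authoritative function ships with the repo).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img, crop=224):
    """Center-crop an HxWx3 uint8 image and normalize it PyTorch-style."""
    h, w, _ = img.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    patch = img[top:top + crop, left:left + crop]
    x = patch.astype(np.float32) / 255.0  # scale to [0, 1]
    return (x - MEAN) / STD               # channel-wise normalization
```

Getting this convention right matters: the stock TF MobileNet preprocessing scales to [-1, 1] instead, and evaluating a PyTorch-trained model with it would hurt accuracy.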

The results are:

| Models | Top1 Acc (%) | Top5 Acc (%) |
|---|---|---|
| 50% FLOPs | 70.424 | 89.28 |
| 50% Time | 70.214 | 89.244 |

Timing Logs

Here we provide timing logs measured on a Google Pixel 1 using TensorFlow Lite, in the ./logs directory. We benchmarked the original MobileNetV1 (mobilenet), MobileNetV1 with a 0.75 width multiplier (0.75mobilenet), the 50% FLOPs pruned MobileNetV1 (0.5flops), and the 50% time pruned MobileNetV1 (0.5time). Each model is benchmarked for 200 iterations with an extra 100 iterations for warm-up, repeated for 3 runs.

AMC

You can also find our PyTorch implementation of AMC here.

Contact

To contact the authors:

Ji Lin, [email protected]

Song Han, [email protected]
