
SKKU-ESLAB / Auto-Compression

License: MIT
Automatic DNN compression tool with various model compression and neural architecture search techniques

Programming Languages

C
C++
Python
Assembly
Jupyter Notebook
Shell

Projects that are alternatives to or similar to Auto-Compression

Awesome AutoML and Lightweight Models
A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hyperparameter Optimization, 5.) Automated Feature Engineering.
Stars: ✭ 691 (+3536.84%)
Mutual labels:  model-compression, neural-architecture-search
PaddleSlim
PaddleSlim is an open-source library for deep model compression and architecture search.
Stars: ✭ 677 (+3463.16%)
Mutual labels:  model-compression, neural-architecture-search
ESNAC
Learnable Embedding Space for Efficient Neural Architecture Compression
Stars: ✭ 27 (+42.11%)
Mutual labels:  model-compression, neural-architecture-search
NNI
An open-source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyperparameter tuning.
Stars: ✭ 10,698 (+56205.26%)
Mutual labels:  model-compression, neural-architecture-search
MobileID
Deep Face Model Compression
Stars: ✭ 187 (+884.21%)
Mutual labels:  model-compression
YOLOv3
YOLOv3 in PyTorch
Stars: ✭ 142 (+647.37%)
Mutual labels:  model-compression
Condensa
Programmable Neural Network Compression
Stars: ✭ 129 (+578.95%)
Mutual labels:  model-compression
MicroExpNet
MicroExpNet: An Extremely Small and Fast Model For Expression Recognition From Frontal Face Images
Stars: ✭ 121 (+536.84%)
Mutual labels:  model-compression
DS-Net
(CVPR 2021, Oral) Dynamic Slimmable Network
Stars: ✭ 204 (+973.68%)
Mutual labels:  model-compression
Awesome AI Infrastructures
Infrastructures™ for Machine Learning Training/Inference in Production.
Stars: ✭ 223 (+1073.68%)
Mutual labels:  model-compression
Awesome ML Model Compression
Awesome machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (+773.68%)
Mutual labels:  model-compression
LD-Net
Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling
Stars: ✭ 148 (+678.95%)
Mutual labels:  model-compression
JFastText
Java interface for fastText
Stars: ✭ 193 (+915.79%)
Mutual labels:  model-compression
Collaborative Distillation
PyTorch code for our CVPR'20 paper "Collaborative Distillation for Ultra-Resolution Universal Style Transfer"
Stars: ✭ 138 (+626.32%)
Mutual labels:  model-compression
PocketFlow
An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
Stars: ✭ 2,672 (+13963.16%)
Mutual labels:  model-compression
Pretrained Language Model
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
Stars: ✭ 2,033 (+10600%)
Mutual labels:  model-compression
Keras compressor
Model Compression CLI Tool for Keras.
Stars: ✭ 160 (+742.11%)
Mutual labels:  model-compression
BERT-of-Theseus
⛵️The official PyTorch implementation for "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020).
Stars: ✭ 209 (+1000%)
Mutual labels:  model-compression
Pruning
Code for "Co-Evolutionary Compression for Unpaired Image Translation" (ICCV 2019) and "SCOP: Scientific Control for Reliable Neural Network Pruning" (NeurIPS 2020).
Stars: ✭ 159 (+736.84%)
Mutual labels:  model-compression
PyTorch Weights Pruning
PyTorch Implementation of Weights Pruning
Stars: ✭ 158 (+731.58%)
Mutual labels:  model-compression

DNAS-Compression

Model compression techniques with differentiable neural architecture search.

Currently, pruning and quantization are supported.

  • Pruning: channel-level / group-level
    • The target sparsity and the group size are configurable
  • Quantization: uniform quantization
    • The bitwidth is configurable
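
For concreteness, the sketch below shows the two operations in plain PyTorch, written from the descriptions above. The function names and signatures are illustrative assumptions, not the repo's actual API.

    import torch

    def uniform_quantize(w, bitwidth):
        # Symmetric uniform quantization: snap weights to a grid with
        # 2^bitwidth levels, using one scale for the whole tensor.
        qmax = 2 ** (bitwidth - 1) - 1          # e.g. 127 for 8 bits
        scale = w.abs().max().clamp(min=1e-8) / qmax
        return torch.round(w / scale).clamp(-qmax, qmax) * scale

    def channel_prune(w, sparsity):
        # Channel-level pruning for a conv weight of shape (O, I, kH, kW):
        # zero the output channels with the smallest L1 norms. Group-level
        # pruning applies the same idea to fixed-size groups of weights
        # instead of whole channels.
        norms = w.abs().sum(dim=(1, 2, 3))      # one L1 norm per output channel
        n_prune = int(sparsity * w.size(0))
        if n_prune == 0:
            return w
        threshold = norms.sort().values[n_prune]
        mask = (norms >= threshold).float().view(-1, 1, 1, 1)
        return w * mask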

References

This project is built on a reproduced version of FBNet.

Usage

  • Requirements:
    • pytorch 1.6.0
    • tensorboardX 1.8
    • scipy 1.4.0
    • numpy 1.19.4
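
    Assuming these map to the usual PyPI package names (PyTorch is published as torch), the environment can be set up with something like:

    pip3 install torch==1.6.0 tensorboardX==1.8 scipy==1.4.0 numpy==1.19.4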
  1. Choose which type of compression to run

    • Pruning (channel / group)
    • Quantization
    • Both pruning and quantization
  2. Edit the hyperparameters in supernet_functions/config_for_supernet.py

    • Usual hyperparameters
      • batch size
      • learning rate
      • epochs
    • Special hyperparameters (pay attention to these!)
      • alpha and beta, which weight the FLOPs term of the loss (see the sketch after the quick-start command below)
      • w_share_in_train
      • thetas_lr
      • train_thetas_from_the_epoch
  3. Run supernet_main_file.py

    Quick start command:

    python3 supernet_main_file.py --train_or_sample train
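
For orientation, here is a hedged sketch of how alpha and beta typically enter an FBNet-style objective: the cross-entropy loss is scaled by a differentiable FLOPs penalty, L = CE * alpha * log(FLOPs)^beta, and the architecture parameters (the thetas trained with thetas_lr) weight the candidate ops via Gumbel-softmax. The class and function names here are assumptions based on the FBNet paper; the exact formula in supernet_functions may differ.

    import torch
    import torch.nn.functional as F

    class SupernetLossSketch(torch.nn.Module):
        # alpha and beta trade accuracy against compute, as in the config above.
        def __init__(self, alpha=0.2, beta=0.6):
            super().__init__()
            self.alpha, self.beta = alpha, beta

        def forward(self, logits, targets, expected_flops):
            ce = F.cross_entropy(logits, targets)
            return ce * self.alpha * torch.log(expected_flops) ** self.beta

    # Expected FLOPs of one searchable layer: a soft mixture of each candidate
    # op's cost, weighted by Gumbel-softmax samples of the thetas.
    def expected_layer_flops(thetas, candidate_flops, temperature=5.0):
        weights = F.gumbel_softmax(thetas, tau=temperature)
        return (weights * candidate_flops).sum()

w_share_in_train and train_thetas_from_the_epoch presumably control whether operator weights are shared during warm-up and at which epoch theta updates begin; check config_for_supernet.py for the authoritative semantics.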
