
VITA-Group / ATMC

License: MIT
[NeurIPS'2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”

Programming Languages

python

Projects that are alternatives of or similar to ATMC

Awesome Ai Infrastructures
Infrastructures™ for Machine Learning Training/Inference in Production.
Stars: ✭ 223 (+443.9%)
Mutual labels:  pruning, quantization, model-compression
Kd lib
A Pytorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization.
Stars: ✭ 173 (+321.95%)
Mutual labels:  pruning, quantization, model-compression
Micronet
micronet, a model compression and deploy lib. Compression: (1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa / "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) / ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fuse for quantization. Deploy: TensorRT, fp32/fp16/int8 (PTQ calibration), op-adapt (upsample), dynamic_shape
Stars: ✭ 1,232 (+2904.88%)
Mutual labels:  pruning, quantization, model-compression
Model Optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
Stars: ✭ 992 (+2319.51%)
Mutual labels:  pruning, quantization, model-compression
Paddleslim
PaddleSlim is an open-source library for deep model compression and architecture search.
Stars: ✭ 677 (+1551.22%)
Mutual labels:  pruning, quantization, model-compression
torch-model-compression
An automated toolset for analyzing and modifying the structure of PyTorch models, including a model compression algorithm library based on automatic model structure analysis.
Stars: ✭ 126 (+207.32%)
Mutual labels:  pruning, quantization, model-compression
Awesome Ml Model Compression
Awesome machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (+304.88%)
Mutual labels:  pruning, quantization, model-compression
Ntagger
reference pytorch code for named entity tagging
Stars: ✭ 58 (+41.46%)
Mutual labels:  pruning, quantization
Awesome Edge Machine Learning
A curated list of awesome edge machine learning resources, including research papers, inference engines, challenges, books, meetups and others.
Stars: ✭ 139 (+239.02%)
Mutual labels:  pruning, quantization
sparsezoo
Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
Stars: ✭ 264 (+543.9%)
Mutual labels:  pruning, quantization
Awesome Emdl
Embedded and mobile deep learning research resources
Stars: ✭ 554 (+1251.22%)
Mutual labels:  pruning, quantization
Model compression
PyTorch Model Compression
Stars: ✭ 150 (+265.85%)
Mutual labels:  pruning, quantization
BitPack
BitPack is a practical tool to efficiently save ultra-low precision/mixed-precision quantized models.
Stars: ✭ 36 (-12.2%)
Mutual labels:  quantization, model-compression
Awesome Pruning
A curated list of neural network pruning resources.
Stars: ✭ 1,017 (+2380.49%)
Mutual labels:  pruning, model-compression
Nncf
PyTorch*-based Neural Network Compression Framework for enhanced OpenVINO™ inference
Stars: ✭ 218 (+431.71%)
Mutual labels:  pruning, quantization
DS-Net
(CVPR 2021, Oral) Dynamic Slimmable Network
Stars: ✭ 204 (+397.56%)
Mutual labels:  pruning, model-compression
Distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
Stars: ✭ 3,760 (+9070.73%)
Mutual labels:  pruning, quantization
Aimet
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Stars: ✭ 453 (+1004.88%)
Mutual labels:  pruning, quantization
Torch Pruning
A pytorch pruning toolkit for structured neural network pruning and layer dependency maintaining.
Stars: ✭ 193 (+370.73%)
Mutual labels:  pruning, model-compression
ZAQ-code
CVPR 2021 : Zero-shot Adversarial Quantization (ZAQ)
Stars: ✭ 59 (+43.9%)
Mutual labels:  quantization, model-compression

Model Compression with Adversarial Robustness: A Unified Optimization Framework (ATMC)

Authors: Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu

*: Equal Contribution

Overview

This repo contains an example implementation of the ATMC robust learning framework from the NeurIPS 2019 paper Model Compression with Adversarial Robustness: A Unified Optimization Framework.

We propose a novel Adversarially Trained Model Compression (ATMC) framework, which casts compression as a unified constrained optimization problem in which existing compression means (pruning, factorization, quantization) are all integrated into the constraints. An extensive set of experiments demonstrates that ATMC obtains a remarkably more favorable trade-off among model size, accuracy, and robustness than currently available alternatives across various settings.

(Figure: ATMC experiment results at various compression ratios.)
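
At a high level (simplified notation, not the paper's exact formulation), the objective can be sketched as an adversarial min-max training loss minimized subject to compression constraints on the parameters, where \(\mathcal{C}_{\mathrm{prune}}\) and \(\mathcal{C}_{\mathrm{quant}}\) stand in for the sparsity/factorization and quantization constraint sets:

\begin{equation}
  \min_{\theta}\;
    \mathbb{E}_{(x,y)\sim\mathcal{D}}
    \Big[ \max_{\|\delta\|_\infty \le \epsilon}
          \mathcal{L}\big(f(x+\delta;\theta),\, y\big) \Big]
  \quad \text{s.t.} \quad
  \theta \in \mathcal{C}_{\mathrm{prune}} \cap \mathcal{C}_{\mathrm{quant}}
\end{equation}

Roughly speaking, the training script below alternates adversarial-loss updates with projections onto these constraint sets (the l0proj and quantization options shown in the examples).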

Requirements

All experiments were run on a Linux machine with an Intel i7-6700K CPU, 64 GB of memory, and two GTX 1080 graphics cards. To reproduce the results reported in the paper, some experiment settings (such as batch size) may need to be tuned for your own setup.

The software environment is based on PyTorch (>= 1.0.0).

Experiment Example

The following example shows how to set up an experiment on the CIFAR-10 dataset.

First of all, we need to obtain a dense model that the subsequent stages build on:

python cifar/train_proj_admm_quant.py \
--raw_train \
--epochs 200 \
--lr 0.05 \
--decreasing_lr 80,120,150 \
--gpu 0 \
--savedir log/resnet/pretrain \
--data_root [cifar/data/dir] \
--attack_algo pgd \
--attack_eps 4 \
--defend_algo pgd \
--defend_eps 4 \
--defend_iter 7 \
--save_model_name cifar10_res_pgd_raw.pth \
--quantize_bits 32 \
--prune_ratio 1.0
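
For reference, --defend_algo pgd with --defend_eps 4 and --defend_iter 7 corresponds to PGD adversarial training. Below is a minimal, hedged sketch of the inner loop (the model, optimizer, and the 4/255 perturbation scale are assumptions for illustration; the script's actual implementation may differ):

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=4 / 255, alpha=1 / 255, iters=7):
    """Minimal L_inf PGD sketch (no random start); eps/alpha assume inputs in [0, 1]."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss along the gradient sign, then project back into the
        # eps-ball around the clean input and clamp to the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv

# Adversarial training step (sketch): train on the perturbed batch.
#   x_adv = pgd_attack(model, images, labels)
#   loss = F.cross_entropy(model(x_adv), labels)
#   optimizer.zero_grad(); loss.backward(); optimizer.step()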

The dense model will be stored in log/resnet/pretrain. The second stage, which prunes this dense model, is then run with:

python cifar/train_proj_admm_quant.py \
--epochs 200 \
--lr 0.01 \
--decreasing_lr 30,60,90,120 \
--gpu 0 \
--savedir log/resnet/l0 \
--loaddir log/resnet/pretrain \
--model_name cifar10_res_pgd_raw.pth \
--data_root [cifar/data/dir] \
--quantize_bits 32 \
--attack_algo pgd \
--attack_eps 4 \
--defend_algo pgd \
--defend_eps 4 \
--defend_iter 7 \
--save_model_name cifar10_resnet_pgd_4_l0proj_0.005.pth \
--prune_algo l0proj \
--prune_ratio 0.005
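
The --prune_algo l0proj option with --prune_ratio 0.005 corresponds to projecting the weights onto an L0 constraint, i.e. keeping only the largest-magnitude 0.5% of weights. A hedged sketch of such a magnitude-based projection (the helper name and the global-versus-per-layer choice are assumptions, not the repo's exact code):

import torch

def l0_project(params, ratio=0.005):
    """Keep the top `ratio` fraction of weights by magnitude (globally), zero the rest."""
    flat = torch.cat([p.detach().abs().flatten() for p in params])
    k = max(1, int(ratio * flat.numel()))
    # Magnitude of the k-th largest weight across all layers acts as the threshold.
    threshold = torch.topk(flat, k, largest=True).values.min()
    with torch.no_grad():
        for p in params:
            p.mul_((p.abs() >= threshold).to(p.dtype))

# Example (hypothetical): l0_project([p for p in model.parameters() if p.dim() > 1])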

After this step, we obtain a sparse model at a 0.005 compression ratio. We then run the ATMC process on top of this pre-trained model:

python cifar/train_proj_admm_quant.py \
--epochs 200 \
--lr 0.005 \
--decreasing_lr 60,80,120 \
--gpu 0 \
--savedir log/resnet/atmc \
--loaddir log/resnet/l0 \
--save_model_name cifar10_resnet_pgd_4_atmc_0.005_32bit.pth \
--data_root [cifar/data/dir] \
--attack_algo pgd \
--attack_eps 4 \
--defend_algo pgd \
--defend_eps 4 \
--defend_iter 7 \
--prune_algo l0proj \
--abc_special \
--prune_ratio 0.005 \
--quantize_bits 32 \
--model_name sparse_cifar10_resnet_pgd_4_l0proj_0.005.pth
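To sanity-check how sparse the resulting checkpoint actually is, a quick hedged sketch (the checkpoint path, any prefix the script may add to --save_model_name, and the state-dict layout are assumptions):

import torch

# Hypothetical path; adjust to whatever file name the script actually wrote
# (it may add a prefix such as "sparse_" to --save_model_name).
ckpt = torch.load("log/resnet/atmc/cifar10_resnet_pgd_4_atmc_0.005_32bit.pth",
                  map_location="cpu")
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt.state_dict()

total = sum(v.numel() for v in state.values() if torch.is_tensor(v))
nonzero = sum(int(v.ne(0).sum()) for v in state.values() if torch.is_tensor(v))
print(f"nonzero fraction: {nonzero / total:.4f}")  # should be close to --prune_ratio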

To apply the unified pruning and quantization strategy, run:

python cifar/train_proj_admm_quant.py \
--epochs 150 \
--lr 0.005 \
--decreasing_lr 60,80,120 \
--gpu 0 \
--savedir log/resnet/atmc8bit \
--loaddir log/resnet/atmc \
--save_model_name cifar10_pgd_4_atmc_0.005_8bit.pth \
--data_root [cifar/data/dir] \
--attack_algo pgd \
--attack_eps 4 \
--abc_special \
--abc_initialize \
--defend_algo pgd \
--defend_eps 4 \
--defend_iter 7 \
--quantize_algo kmeans_nnz_fixed_0_center \
--quant_interval 10 \
--prune_algo l0proj \
--prune_ratio 0.005 \
--quantize_bits 8 \
--model_name sparse_cifar10_pgd_4_atmc_0.005_32bit.pth
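
The --quantize_algo kmeans_nnz_fixed_0_center name suggests weight-sharing quantization in which the nonzero weights are clustered into a small set of shared values (at most 2^bits, with zero kept as a fixed center). A rough sketch of that idea, assuming plain 1-D k-means rather than the repo's exact procedure:

import torch

def kmeans_quantize_nnz(weight, bits=8, iters=10):
    """Share at most 2**bits - 1 values among the nonzero weights; zeros stay zero.
    Plain 1-D k-means -- an illustrative assumption, not the repo's exact algorithm."""
    w = weight.detach().flatten().clone()
    nz_idx = (w != 0).nonzero(as_tuple=True)[0]
    nnz = w[nz_idx]
    if nnz.numel() == 0:
        return weight
    k = min(2 ** bits - 1, nnz.numel())
    # Initialize cluster centers evenly between the smallest and largest nonzero weight.
    centers = torch.linspace(float(nnz.min()), float(nnz.max()), k)
    for _ in range(iters):
        assign = (nnz.unsqueeze(1) - centers.unsqueeze(0)).abs().argmin(dim=1)
        for j in range(k):
            members = nnz[assign == j]
            if members.numel() > 0:
                centers[j] = members.mean()
    # Snap each nonzero weight to its nearest shared center; zeros remain untouched.
    assign = (nnz.unsqueeze(1) - centers.unsqueeze(0)).abs().argmin(dim=1)
    w[nz_idx] = centers[assign]
    return w.view_as(weight)

# Example (hypothetical): conv.weight.data = kmeans_quantize_nnz(conv.weight, bits=8)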

If you find this repo useful, please cite:

@InProceedings{gui2019ATMC,
  title     = {Model Compression with Adversarial Robustness: A Unified Optimization Framework},
  author    = {Gui, Shupeng and Wang, Haotao and Yang, Haichuan and Yu, Chen and Wang, Zhangyang and Liu, Ji},
  booktitle = {Proceedings of the 33rd Conference on Neural Information Processing Systems},
  year      = {2019},
}

Reference Implementation

Thanks to the reference repo pytorch-playground.

Dependencies

  • PyTorch (>= 1.0.0)