
jshilong / FisherPruning

Licence: other
Group Fisher Pruning for Practical Network Compression (ICML 2021)

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives to or similar to FisherPruning

Distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
Stars: ✭ 3,760 (+2860.63%)
Mutual labels:  pruning, network-compression
Dynamic Model Pruning with Feedback
Implement of Dynamic Model Pruning with Feedback with pytorch
Stars: ✭ 25 (-80.31%)
Mutual labels:  pruning
Mobile Yolov5 Pruning Distillation
Pruning and distillation for mobilev2-yolov5s, with support for ncnn and TensorRT deployment. Ultra-light but better performance!
Stars: ✭ 192 (+51.18%)
Mutual labels:  pruning
PyTorch-Deep-Compression
A PyTorch implementation of the iterative pruning method described in Han et al. (2015)
Stars: ✭ 39 (-69.29%)
Mutual labels:  pruning
Neuralnetworks.thought Experiments
Observations and notes to understand the workings of neural network models and other thought experiments using Tensorflow
Stars: ✭ 199 (+56.69%)
Mutual labels:  pruning
deep-compression
Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626
Stars: ✭ 156 (+22.83%)
Mutual labels:  pruning
Awesome Ml Model Compression
Awesome machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (+30.71%)
Mutual labels:  pruning
neural-compressor
Intel® Neural Compressor (formerly known as Intel® Low Precision Optimization Tool), aiming to provide unified APIs for network compression technologies, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks to pursue optimal inference performance.
Stars: ✭ 666 (+424.41%)
Mutual labels:  pruning
sparsezoo
Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
Stars: ✭ 264 (+107.87%)
Mutual labels:  pruning
DS-Net
(CVPR 2021, Oral) Dynamic Slimmable Network
Stars: ✭ 204 (+60.63%)
Mutual labels:  pruning
Selecsls Pytorch
Reference ImageNet implementation of SelecSLS CNN architecture proposed in the SIGGRAPH 2020 paper "XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera". The repository also includes code for pruning the model based on implicit sparsity emerging from adaptive gradient descent methods, as detailed in the CVPR 2019 paper "On implicit filter level sparsity in Convolutional Neural Networks".
Stars: ✭ 251 (+97.64%)
Mutual labels:  pruning
Nncf
PyTorch*-based Neural Network Compression Framework for enhanced OpenVINO™ inference
Stars: ✭ 218 (+71.65%)
Mutual labels:  pruning
GAN-LTH
[ICLR 2021] "GANs Can Play Lottery Too" by Xuxi Chen, Zhenyu Zhang, Yongduo Sui, Tianlong Chen
Stars: ✭ 24 (-81.1%)
Mutual labels:  pruning
Torch Pruning
A pytorch pruning toolkit for structured neural network pruning and layer dependency maintaining.
Stars: ✭ 193 (+51.97%)
Mutual labels:  pruning
torch-model-compression
An automated toolkit for analyzing and modifying the structure of PyTorch models, including a model compression algorithm library that analyzes model structures automatically.
Stars: ✭ 126 (-0.79%)
Mutual labels:  pruning
Kd lib
A Pytorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization.
Stars: ✭ 173 (+36.22%)
Mutual labels:  pruning
Skimcaffe
Caffe for Sparse Convolutional Neural Network
Stars: ✭ 230 (+81.1%)
Mutual labels:  pruning
prunnable-layers-pytorch
Prunable nn layers for pytorch.
Stars: ✭ 47 (-62.99%)
Mutual labels:  pruning
fasterai1
FasterAI: A repository for making smaller and faster models with the FastAI library.
Stars: ✭ 34 (-73.23%)
Mutual labels:  pruning
pytorch-network-slimming
A package to make Network Slimming a little easier
Stars: ✭ 40 (-68.5%)
Mutual labels:  pruning

Group Fisher Pruning for Practical Network Compression (ICML 2021)

By Liyang Liu*, Shilong Zhang*, Zhanghui Kuang, Jing-Hao Xue, Aojun Zhou, Xinjiang Wang, Yimin Chen, Wenming Yang, Qingmin Liao, Wayne Zhang

Updates

  • All one-stage detection models have been released (21/6/2021)

NOTES

All detection models have been released. The classification models will be released later, because we want to refactor all our code into a Hook so that it becomes a more general tool for all tasks in OpenMMLab.
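
For context, this is roughly what a custom Hook skeleton looks like in mmcv; the class name PruningHook below is purely illustrative and not the repo's actual code:

from mmcv.runner import HOOKS, Hook

# Hypothetical skeleton; mmcv calls these methods at fixed points of the
# training loop, which is what makes a Hook a task-agnostic place to put
# pruning logic.
@HOOKS.register_module()
class PruningHook(Hook):
    def after_train_iter(self, runner):
        pass  # e.g. accumulate channel importance here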

We will continue to improve this method and apply it to other tasks, such as segmentation and pose estimation.

The layer grouping algorithm is implemented on top of PyTorch's autograd mechanism; a minimal sketch of the underlying graph traversal follows the list below. If you are not familiar with this feature and can read Chinese, these materials may be helpful to you:

  1. Autograd in PyTorch

  2. Hook of MMCV
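
As a minimal illustration of the kind of traversal involved (not the repo's actual code), the following walks the autograd graph backwards from an output tensor and lists the grad_fn nodes it encounters; a residual add, for example, shows up as a node that couples a convolution's output channels to its input:

import torch
import torch.nn as nn

# Walk the autograd graph backwards from a tensor and collect the names
# of the grad_fn nodes encountered (illustrative helper, not repo code).
def collect_grad_fns(tensor):
    seen, stack, names = set(), [tensor.grad_fn], []
    while stack:
        fn = stack.pop()
        if fn is None or fn in seen:
            continue
        seen.add(fn)
        names.append(type(fn).__name__)
        stack.extend(next_fn for next_fn, _ in fn.next_functions)
    return names

x = torch.randn(1, 3, 8, 8, requires_grad=True)
conv = nn.Conv2d(3, 3, 3, padding=1)
out = conv(x) + x  # the residual add couples conv's output channels to x's
print(collect_grad_fns(out))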

Introduction

1. Competitive with state-of-the-art methods.

2. Applicable to various complicated structures and various tasks.

3. Boosts inference speed on GPU under the same FLOPs.
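
For intuition, here is a hedged, minimal sketch of the core idea: accumulate the squared gradient of the loss with respect to a per-channel mask (an estimate of the Fisher information) and prune the channels whose removal is predicted to increase the loss least. All names below are illustrative; the repo's actual implementation differs in details such as channel grouping and FLOPs normalization.

import torch
import torch.nn as nn

# Illustrative per-channel mask inserted after a conv layer.
class ChannelMask(nn.Module):
    def __init__(self, num_channels):
        super().__init__()
        self.mask = nn.Parameter(torch.ones(num_channels))

    def forward(self, x):
        return x * self.mask.view(1, -1, 1, 1)

# Accumulate fisher[c] += (dL/dm_c)^2 over the data; channels with the
# smallest accumulated value are the cheapest to prune. Assumes the masks
# are registered as submodules of `model`, so zero_grad() clears them.
def accumulate_fisher(model, masks, loader, criterion):
    fisher = [torch.zeros_like(m.mask) for m in masks]
    for x, y in loader:
        model.zero_grad()
        criterion(model(x), y).backward()
        for f, m in zip(fisher, masks):
            f += m.mask.grad.detach() ** 2
    return fisher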

Get Started

1. Create a basic environment with PyTorch 1.3.0 and mmcv-full

Because the autograd interface changes frequently, we only guarantee that the code works with pytorch==1.3.0.

  1. Create the environment
conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab
  2. Install PyTorch 1.3.0 and the corresponding torchvision.
conda install pytorch=1.3.0 cudatoolkit=10.0 torchvision=0.2.2 -c pytorch
  3. Build mmcv-full from source with PyTorch 1.3.0 and CUDA 10.0.

Please use gcc-5.4 and nvcc 10.0

 git clone https://github.com/open-mmlab/mmcv.git
 cd mmcv
 MMCV_WITH_OPS=1 pip install -e .

2. Install the corresponding codebase in OpenMMLab.

e.g. MMDetection

pip install mmdet==2.13.0

3. Prune the model.

e.g. Detection

cd detection

Set load_from in xxxx_pruning.py to the path of the baseline model, as sketched below.
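
A sketch of the relevant line (the path is a placeholder you must fill in):

# in configs/retina/retina_pruning.py
load_from = 'path/to/baseline_retinanet.pth'  # placeholder path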

# for slurm train
sh tools/slurm_train.sh PARTITION_NAME JOB_NAME configs/retina/retina_pruning.py work_dir
# for slurm test
sh tools/slurm_test.sh PARTITION_NAME JOB_NAME configs/retina/retina_pruning.py PATH_CKPT --eval bbox
# for torch.dist
# sh tools/dist_train.sh configs/retina/retina_pruning.py 8

4. Finetune the model.

e.g. Detection

cd detection

Set deploy_from in the custom_hooks section of xxxx_finetune.py to the path of the pruned model, as sketched below.
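
A sketch of the relevant part of the config; the hook type name below is an assumption for illustration, so check the actual finetune config in the repo:

# in configs/retina/retina_finetune.py
custom_hooks = [
    dict(
        type='FisherPruningHook',               # assumed name; may differ
        deploy_from='path/to/pruned_model.pth',  # placeholder path
    )
]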

# for slurm train
sh tools/slurm_train.sh PARTITION_NAME JOB_NAME configs/retina/retina_finetune.py work_dir
# for slurm test
sh tools/slurm_test.sh PARTITION_NAME JOB_NAME configs/retina/retina_finetune.py PATH_CKPT --eval bbox
# for torch.dist
# sh tools/dist_train.sh configs/retina/retina_finetune.py 8

Models

Detection

Method      Backbone   Baseline (mAP)   Finetuned (mAP)   Download
RetinaNet   R-50-FPN   36.5             36.5              Baseline / Pruned / Finetuned
ATSS*       R-50-FPN   38.1             37.9              Baseline / Pruned / Finetuned
PAA*        R-50-FPN   39.0             39.4              Baseline / Pruned / Finetuned
FSAF        R-50-FPN   37.4             37.4              Baseline / Pruned / Finetuned

* indicates a model trained without Group Normalization in the heads.

Classification

Coming soon.

Please cite our paper in your publications if it helps your research.

@InProceedings{liu2021group,
  title     = {Group Fisher Pruning for Practical Network Compression},
  author    = {Liu, Liyang and Zhang, Shilong and Kuang, Zhanghui and Zhou, Aojun and Xue, Jing-Hao and Wang, Xinjiang and Chen, Yimin and Yang, Wenming and Liao, Qingmin and Zhang, Wayne},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  year      = {2021},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
}