
he-y / Awesome Pruning

A curated list of neural network pruning resources.

Projects that are alternatives to, or similar to, Awesome Pruning

Kd lib
A Pytorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization.
Stars: ✭ 173 (-82.99%)
Mutual labels:  model-compression, pruning
Model Optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
Stars: ✭ 992 (-2.46%)
Mutual labels:  model-compression, pruning
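As a rough orientation for this entry, the sketch below shows how the toolkit's Keras magnitude-pruning wrapper is typically applied. The model architecture, target sparsity, and step counts are arbitrary placeholder values for illustration, not recommendations from the project.

```python
# Hedged sketch of magnitude pruning with the TensorFlow Model Optimization
# Toolkit; the model, sparsity schedule and step counts are placeholders.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

# Gradually raise sparsity from 0% to 80% over the first 1000 training steps.
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.8, begin_step=0, end_step=1000)
pruned = tfmot.sparsity.keras.prune_low_magnitude(model, pruning_schedule=schedule)

pruned.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# Training requires the pruning callback, e.g.:
# pruned.fit(x_train, y_train, epochs=2,
#            callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Strip the pruning wrappers before exporting the sparse model.
final_model = tfmot.sparsity.keras.strip_pruning(pruned)
```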
Torch Pruning
A pytorch pruning toolkit for structured neural network pruning and layer dependency maintaining.
Stars: ✭ 193 (-81.02%)
Mutual labels:  model-compression, pruning
Awesome Ai Infrastructures
Infrastructures™ for Machine Learning Training/Inference in Production.
Stars: ✭ 223 (-78.07%)
Mutual labels:  model-compression, pruning
Regularization-Pruning
[ICLR'21] PyTorch code for our paper "Neural Pruning via Growing Regularization"
Stars: ✭ 44 (-95.67%)
Mutual labels:  pruning, model-compression
Awesome Ml Model Compression
Awesome machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (-83.68%)
Mutual labels:  model-compression, pruning
Filter Pruning Geometric Median
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral)
Stars: ✭ 338 (-66.76%)
Mutual labels:  model-compression, pruning
Micronet
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), high-bit (>2b: DoReFa, integer-arithmetic-only inference) and low-bit (≤2b) ternary and binary weights (TWN/BNN/XNOR-Net), plus 8-bit post-training quantization (PTQ, TensorRT); (2) pruning: normal, regular, and group-convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.
Stars: ✭ 1,232 (+21.14%)
Mutual labels:  model-compression, pruning
ATMC
[NeurIPS'2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”
Stars: ✭ 41 (-95.97%)
Mutual labels:  pruning, model-compression
torch-model-compression
An automated model-structure analysis and modification toolset for PyTorch models, including a model compression algorithm library that automatically analyzes model structure.
Stars: ✭ 126 (-87.61%)
Mutual labels:  pruning, model-compression
DS-Net
(CVPR 2021, Oral) Dynamic Slimmable Network
Stars: ✭ 204 (-79.94%)
Mutual labels:  pruning, model-compression
Soft Filter Pruning
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
Stars: ✭ 291 (-71.39%)
Mutual labels:  model-compression, pruning
SViTE
[NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
Stars: ✭ 50 (-95.08%)
Mutual labels:  pruning, model-compression
Paddleslim
PaddleSlim is an open-source library for deep model compression and architecture search.
Stars: ✭ 677 (-33.43%)
Mutual labels:  model-compression, pruning
Knowledge Distillation Papers
knowledge distillation papers
Stars: ✭ 422 (-58.51%)
Mutual labels:  model-compression
Pytorch Pruning
PyTorch Implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference
Stars: ✭ 740 (-27.24%)
Mutual labels:  pruning
Data Efficient Model Compression
Data Efficient Model Compression
Stars: ✭ 380 (-62.64%)
Mutual labels:  model-compression
Distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
Stars: ✭ 3,760 (+269.71%)
Mutual labels:  pruning
Channel Pruning
Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17)
Stars: ✭ 979 (-3.74%)
Mutual labels:  model-compression
Awesome Automl And Lightweight Models
A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hyperparameter Optimization, 5.) Automated Feature Engineering.
Stars: ✭ 691 (-32.06%)
Mutual labels:  model-compression

Awesome Pruning

A curated list of neural network pruning and related resources. Inspired by awesome-deep-vision, awesome-adversarial-machine-learning, awesome-deep-learning-papers and Awesome-NAS.

Please feel free to submit a pull request or open an issue to add papers.

Table of Contents: Type of Pruning, 2020, 2019, 2018, 2017, 2016, 2015, Related Repo

Type of Pruning

F: Filter pruning
W: Weight pruning
Other: other types of pruning
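The distinction matters when reading the tables below, so here is a minimal sketch of both types using PyTorch's built-in torch.nn.utils.prune utilities. It only illustrates the taxonomy, not the method of any paper in this list; the layer shape and sparsity ratios are arbitrary.

```python
# Minimal sketch of the two pruning types in the legend above, using
# torch.nn.utils.prune. Layer shapes and sparsity ratios are arbitrary.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv_w = nn.Conv2d(16, 32, kernel_size=3)  # example layer for weight (W) pruning
conv_f = nn.Conv2d(16, 32, kernel_size=3)  # example layer for filter (F) pruning

# W (weight pruning): zero out 50% of individual weights with the smallest
# L1 magnitude, regardless of where they sit in the tensor (unstructured).
prune.l1_unstructured(conv_w, name="weight", amount=0.5)

# F (filter pruning): zero out 25% of whole filters (output channels, dim=0)
# with the smallest L2 norm, so the layer can later be physically shrunk.
prune.ln_structured(conv_f, name="weight", amount=0.25, n=2, dim=0)

# PyTorch keeps a binary mask plus the original weights; make both permanent.
prune.remove(conv_w, "weight")
prune.remove(conv_f, "weight")

print("W: zeroed weights =", int((conv_w.weight == 0).sum()), "/", conv_w.weight.numel())
print("F: zeroed filters =", int((conv_f.weight.abs().sum(dim=(1, 2, 3)) == 0).sum()),
      "/", conv_f.out_channels)
```

In general, unstructured weight (W) pruning mainly reduces model size and needs sparse kernels to yield speedups, while structured filter (F) pruning removes entire channels and so can accelerate inference on standard dense hardware.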

2020

Title Venue Type Code
HYDRA: Pruning Adversarially Robust Neural Networks NeurIPS W PyTorch(Author)
Logarithmic Pruning is All You Need NeurIPS W -
Directional Pruning of Deep Neural Networks NeurIPS W -
Movement Pruning: Adaptive Sparsity by Fine-Tuning NeurIPS W PyTorch(Author)
Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot NeurIPS W PyTorch(Author)
Neuron Merging: Compensating for Pruned Neurons NeurIPS F PyTorch(Author)
Neuron-level Structured Pruning using Polarization Regularizer NeurIPS F PyTorch(Author)
SCOP: Scientific Control for Reliable Neural Network Pruning NeurIPS F -
Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning NeurIPS F -
The Generalization-Stability Tradeoff In Neural Network Pruning NeurIPS F PyTorch(Author)
Pruning Filter in Filter NeurIPS Other PyTorch(Author)
Position-based Scaled Gradient for Model Quantization and Pruning NeurIPS Other PyTorch(Author)
Bayesian Bits: Unifying Quantization and Pruning NeurIPS Other -
Pruning neural networks without any data by iteratively conserving synaptic flow NeurIPS Other PyTorch(Author)
EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning ECCV (Oral) F PyTorch(Author)
DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation ECCV F -
DHP: Differentiable Meta Pruning via HyperNetworks ECCV F PyTorch(Author)
Meta-Learning with Network Pruning ECCV W -
Accelerating CNN Training by Pruning Activation Gradients ECCV W -
DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search ECCV Other -
Differentiable Joint Pruning and Quantization for Hardware Efficiency ECCV Other -
Channel Pruning via Automatic Structure Search IJCAI F PyTorch(Author)
Adversarial Neural Pruning with Latent Vulnerability Suppression ICML W -
Proving the Lottery Ticket Hypothesis: Pruning is All You Need ICML W -
Soft Threshold Weight Reparameterization for Learnable Sparsity ICML WF PyTorch(Author)
Network Pruning by Greedy Subnetwork Selection ICML F -
Operation-Aware Soft Channel Pruning using Differentiable Masks ICML F -
DropNet: Reducing Neural Network Complexity via Iterative Pruning ICML F -
Towards Efficient Model Compression via Learned Global Ranking CVPR (Oral) F PyTorch(Author)
HRank: Filter Pruning using High-Rank Feature Map CVPR (Oral) F PyTorch(Author)
Neural Network Pruning with Residual-Connections and Limited-Data CVPR (Oral) F -
Multi-Dimensional Pruning: A Unified Framework for Model Compression CVPR (Oral) WF -
DMCP: Differentiable Markov Channel Pruning for Neural Networks CVPR (Oral) F TensorFlow(Author)
Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression CVPR F PyTorch(Author)
Few Sample Knowledge Distillation for Efficient Network Compression CVPR F -
Discrete Model Compression With Resource Constraint for Deep Neural Networks CVPR F -
Structured Compression by Weight Encryption for Unstructured Pruning and Quantization CVPR W -
Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration CVPR F -
APQ: Joint Search for Network Architecture, Pruning and Quantization Policy CVPR F -
Comparing Rewinding and Fine-tuning in Neural Network Pruning ICLR (Oral) WF TensorFlow(Author)
A Signal Propagation Perspective for Pruning Neural Networks at Initialization ICLR (Spotlight) W -
ProxSGD: Training Structured Neural Networks under Regularization and Constraints ICLR W TF+PT(Author)
One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation ICLR W -
Lookahead: A Far-sighted Alternative of Magnitude-based Pruning ICLR W PyTorch(Author)
Dynamic Model Pruning with Feedback ICLR WF -
Provable Filter Pruning for Efficient Neural Networks ICLR F -
Data-Independent Neural Pruning via Coresets ICLR W -
AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates AAAI F -
DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks AAAI Other -
Pruning from Scratch AAAI Other -

2019

Title Venue Type Code
Network Pruning via Transformable Architecture Search NeurIPS F PyTorch(Author)
Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks NeurIPS F PyTorch(Author)
Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask NeurIPS W TensorFlow(Author)
One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers NeurIPS W -
Global Sparse Momentum SGD for Pruning Very Deep Neural Networks NeurIPS W PyTorch(Author)
AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters NeurIPS W -
Model Compression with Adversarial Robustness: A Unified Optimization Framework NeurIPS Other PyTorch(Author)
MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning ICCV F PyTorch(Author)
Accelerate CNN via Recursive Bayesian Pruning ICCV F -
Adversarial Robustness vs Model Compression, or Both? ICCV W PyTorch(Author)
Learning Filter Basis for Convolutional Neural Network Compression ICCV Other -
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration CVPR (Oral) F PyTorch(Author)
Towards Optimal Structured CNN Pruning via Generative Adversarial Learning CVPR F PyTorch(Author)
Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure CVPR F PyTorch(Author)
On Implicit Filter Level Sparsity in Convolutional Neural Networks, Extension1, Extension2 CVPR F PyTorch(Author)
Structured Pruning of Neural Networks with Budget-Aware Regularization CVPR F -
Importance Estimation for Neural Network Pruning CVPR F PyTorch(Author)
OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks CVPR F -
Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search CVPR Other TensorFlow(Author)
Variational Convolutional Neural Network Pruning CVPR - -
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks ICLR (Best) W TensorFlow(Author)
Rethinking the Value of Network Pruning ICLR F PyTorch(Author)
Dynamic Channel Pruning: Feature Boosting and Suppression ICLR F TensorFlow(Author)
SNIP: Single-shot Network Pruning based on Connection Sensitivity ICLR W TensorFlow(Author)
Dynamic Sparse Graph for Efficient Deep Learning ICLR F CUDA(3rd)
Collaborative Channel Pruning for Deep Networks ICML F -
Approximated Oracle Filter Pruning for Destructive CNN Width Optimization ICML F GitHub(Author)
EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis ICML W PyTorch(Author)
COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning IJCAI F TensorFlow(Author)

2018

Title Venue Type Code
Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers ICLR F TensorFlow(Author), PyTorch(3rd)
To prune, or not to prune: exploring the efficacy of pruning for model compression ICLR W -
Discrimination-aware Channel Pruning for Deep Neural Networks NeurIPS F TensorFlow(Author)
Frequency-Domain Dynamic Pruning for Convolutional Neural Networks NeurIPS W -
Learning Sparse Neural Networks via Sensitivity-Driven Regularization NeurIPS WF -
AMC: AutoML for Model Compression and Acceleration on Mobile Devices ECCV F TensorFlow(3rd)
Data-Driven Sparse Structure Selection for Deep Neural Networks ECCV F MXNet(Author)
Coreset-Based Neural Network Compression ECCV F PyTorch(Author)
Constraint-Aware Deep Neural Network Compression ECCV W SkimCaffe(Author)
A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers ECCV W Caffe(Author)
PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning CVPR F PyTorch(Author)
NISP: Pruning Networks using Neuron Importance Score Propagation CVPR F -
CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization CVPR W -
“Learning-Compression” Algorithms for Neural Net Pruning CVPR W -
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks IJCAI F PyTorch(Author)
Accelerating Convolutional Networks via Global & Dynamic Filter Pruning IJCAI F -

2017

Title Venue Type Code
Pruning Filters for Efficient ConvNets ICLR F PyTorch(3rd)
Pruning Convolutional Neural Networks for Resource Efficient Inference ICLR F TensorFlow(3rd)
Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee NeurIPS W TensorFlow(Author)
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon NeurIPS W PyTorch(Author)
Runtime Neural Pruning NeurIPS F -
Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning CVPR F -
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression ICCV F Caffe(Author), PyTorch(3rd)
Channel pruning for accelerating very deep neural networks ICCV F Caffe(Author)
Learning Efficient Convolutional Networks Through Network Slimming ICCV F PyTorch(Author)

2016

Title Venue Type Code
Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding ICLR (Best) W Caffe(Author)
Dynamic Network Surgery for Efficient DNNs NeurIPS W Caffe(Author)

2015

Title Venue Type Code
Learning both Weights and Connections for Efficient Neural Networks NeurIPS W PyTorch(3rd)

Related Repo

Awesome-model-compression-and-acceleration

EfficientDNNs

Embedded-Neural-Network

awesome-AutoML-and-Lightweight-Models

Model-Compression-Papers

knowledge-distillation-papers

Network-Speed-and-Compression
