
jack-willturner / batchnorm-pruning

Licence: other
Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers https://arxiv.org/abs/1802.00124

Programming Languages

python
shell

Projects that are alternatives of or similar to batchnorm-pruning

deep-compression
Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626
Stars: ✭ 156 (+136.36%)
Mutual labels:  pruning
fasterai1
FasterAI: A repository for making smaller and faster models with the FastAI library.
Stars: ✭ 34 (-48.48%)
Mutual labels:  pruning
SGDLibrary
MATLAB/Octave library for stochastic optimization algorithms: Version 1.0.20
Stars: ✭ 165 (+150%)
Mutual labels:  sgd
torchprune
A research library for pytorch-based neural network pruning, compression, and more.
Stars: ✭ 133 (+101.52%)
Mutual labels:  pruning
pytorch-network-slimming
A package to make Network Slimming a little easier
Stars: ✭ 40 (-39.39%)
Mutual labels:  pruning
FactorizationMachine
Implementation of a factorization machine, with support for classification.
Stars: ✭ 19 (-71.21%)
Mutual labels:  sgd
vgg16 batchnorm
VGG16 architecture with BatchNorm
Stars: ✭ 14 (-78.79%)
Mutual labels:  batchnorm
NaiveNASflux.jl
Your local Flux surgeon
Stars: ✭ 20 (-69.7%)
Mutual labels:  pruning
neural-compressor
Intel® Neural Compressor (formerly known as Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression technologies such as low-precision quantization, sparsity, pruning, and knowledge distillation across different deep learning frameworks, in pursuit of optimal inference performance.
Stars: ✭ 666 (+909.09%)
Mutual labels:  pruning
Pruning filters for efficient convnets
PyTorch implementation of "Pruning Filters For Efficient ConvNets"
Stars: ✭ 96 (+45.45%)
Mutual labels:  pruning
sparsezoo
Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
Stars: ✭ 264 (+300%)
Mutual labels:  pruning
torch-model-compression
A toolset for automated structural analysis and modification of PyTorch models, including a library of model-compression algorithms that automatically analyse model structure.
Stars: ✭ 126 (+90.91%)
Mutual labels:  pruning
ATMC
[NeurIPS'2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”
Stars: ✭ 41 (-37.88%)
Mutual labels:  pruning
GAN-LTH
[ICLR 2021] "GANs Can Play Lottery Too" by Xuxi Chen, Zhenyu Zhang, Yongduo Sui, Tianlong Chen
Stars: ✭ 24 (-63.64%)
Mutual labels:  pruning
bert-squeeze
🛠️ Tools for Transformers compression using PyTorch Lightning ⚡
Stars: ✭ 56 (-15.15%)
Mutual labels:  pruning
prunnable-layers-pytorch
Prunable nn layers for pytorch.
Stars: ✭ 47 (-28.79%)
Mutual labels:  pruning
FisherPruning
Group Fisher Pruning for Practical Network Compression(ICML2021)
Stars: ✭ 127 (+92.42%)
Mutual labels:  pruning
Generalizing-Lottery-Tickets
This repository contains code to replicate the experiments given in NeurIPS 2019 paper "One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers"
Stars: ✭ 48 (-27.27%)
Mutual labels:  pruning
jetbrains-utility
Remove/Backup – settings & cli for macOS (OS X) – DataGrip, AppCode, CLion, Gogland, IntelliJ, PhpStorm, PyCharm, Rider, RubyMine, WebStorm
Stars: ✭ 62 (-6.06%)
Mutual labels:  pruning
numpy-neuralnet-exercise
Implementation of key concepts of neuralnetwork via numpy
Stars: ✭ 49 (-25.76%)
Mutual labels:  sgd

Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers

A TensorFlow implementation from the original author is available here.

A PyTorch implementation of this paper.

To do list:

  • Extend to MobileNet and VGG
  • Fix MAC op calculation for strided convolution (see the sketch after this list)
  • Include training scheme from paper
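
For reference, the standard multiply-accumulate count for a convolution, with the output spatial size derived from stride and padding, looks like the sketch below. This is not the repo's counting code, just the usual formula:

import math

def conv2d_macs(in_ch, out_ch, kernel_size, in_h, in_w, stride=1, padding=0):
    """MAC count for a single Conv2d layer; strided convolutions are
    handled by computing the output spatial size explicitly."""
    out_h = math.floor((in_h + 2 * padding - kernel_size) / stride) + 1
    out_w = math.floor((in_w + 2 * padding - kernel_size) / stride) + 1
    return in_ch * out_ch * kernel_size * kernel_size * out_h * out_w

# Example: a 3x3, stride-2 convolution from 64 to 128 channels on a 32x32 input
print(conv2d_macs(64, 128, 3, 32, 32, stride=2, padding=1))  # 18874368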

Usage

I haven't included any code for transfer learning or using pretrained models, so everything here must be done from scratch. You will have to rewrite your models to use my extended version of batch normalization: any occurrences of nn.BatchNorm2d should be replaced with bn.BatchNorm2dEx. I have included a few examples in the models folder. Note that in the forward pass you need to provide the weight from the preceding convolution to the batchnorm (e.g. out = self.bn1(self.conv1(x), self.conv1.weight)).
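
As an illustration, a minimal model rewritten in this style might look like the sketch below. The bn import path and the BatchNorm2dEx constructor signature are assumptions here; check the examples in the models folder for the actual API:

import torch
import torch.nn as nn

import bn  # the repo's extended batch-norm module (import path assumed)


class TinyNet(nn.Module):
    """Toy model with nn.BatchNorm2d swapped for bn.BatchNorm2dEx."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False)
        # Extended batch norm; assumed to take the channel count like nn.BatchNorm2d
        self.bn1 = bn.BatchNorm2dEx(16)
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):
        # Pass the preceding convolution's weight alongside its output,
        # as described above.
        out = torch.relu(self.bn1(self.conv1(x), self.conv1.weight))
        out = out.mean(dim=(2, 3))  # global average pooling
        return self.fc(out)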

I will add command line support for hyperparameters soon, but for now they will have to be altered in the main script itself. Currently the default is set to train ResNet-18; this can easily be swapped out for another model.

python main.py

Results on CIFAR-10

Model                | Size | MAC ops | Inf. time | Accuracy
ResNet-18            |      |         |           |
ResNet-18-Compressed |      |         |           |
VGG-16               |      |         |           |
VGG-16-Compressed    |      |         |           |
MobileNet            |      |         |           |
MobileNet-Compressed |      |         |           |

Citing

Now accepted to ICLR 2018; I will update the BibTeX soon:

@article{ye2018rethinking,
  title={Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers},
  author={Ye, Jianbo and Lu, Xin and Lin, Zhe and Wang, James Z},
  journal={arXiv preprint arXiv:1802.00124},
  year={2018}
}