lucaslie / torchprune

License: MIT
A research library for PyTorch-based neural network pruning, compression, and more.

Programming Languages

Shell
Python
Dockerfile

Projects that are alternatives of or similar to torchprune

AGD
[ICML2020] "AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks" by Yonggan Fu, Wuyang Chen, Haotao Wang, Haoran Li, Yingyan Lin, Zhangyang Wang
Stars: ✭ 98 (-26.32%)
Mutual labels:  compression, neural-architecture-search
prunnable-layers-pytorch
Prunable nn layers for pytorch.
Stars: ✭ 47 (-64.66%)
Mutual labels:  compression, pruning
tthresh
C++ compressor for multidimensional grid data using the Tucker decomposition
Stars: ✭ 35 (-73.68%)
Mutual labels:  compression, tensor-decomposition
Regularization-Pruning
[ICLR'21] PyTorch code for our paper "Neural Pruning via Growing Regularization"
Stars: ✭ 44 (-66.92%)
Mutual labels:  pruning, filter-pruning
Nncf
PyTorch*-based Neural Network Compression Framework for enhanced OpenVINO™ inference
Stars: ✭ 218 (+63.91%)
Mutual labels:  compression, pruning
fasterai1
FasterAI: A repository for making smaller and faster models with the FastAI library.
Stars: ✭ 34 (-74.44%)
Mutual labels:  compression, pruning
Aimet
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Stars: ✭ 453 (+240.6%)
Mutual labels:  compression, pruning
Paddleslim
PaddleSlim is an open-source library for deep model compression and architecture search.
Stars: ✭ 677 (+409.02%)
Mutual labels:  pruning, neural-architecture-search
Hrank
Pytorch implementation of our CVPR 2020 (Oral) -- HRank: Filter Pruning using High-Rank Feature Map
Stars: ✭ 164 (+23.31%)
Mutual labels:  compression, pruning
Model Compression And Acceleration Progress
Repository to track the progress in model compression and acceleration
Stars: ✭ 63 (-52.63%)
Mutual labels:  compression, pruning
kesi
Knowledge distillation from Ensembles of Iterative pruning (BMVC 2020)
Stars: ✭ 23 (-82.71%)
Mutual labels:  weight-pruning, filter-pruning
NTFk.jl
Unsupervised Machine Learning: Nonnegative Tensor Factorization + k-means clustering
Stars: ✭ 36 (-72.93%)
Mutual labels:  sparsity, tensor-decomposition
SSD-Pruning-and-quantization
Pruning and quantization for SSD. Model compression.
Stars: ✭ 19 (-85.71%)
Mutual labels:  compression, pruning
Model Optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
Stars: ✭ 992 (+645.86%)
Mutual labels:  compression, pruning
neural-compressor
Intel® Neural Compressor (formerly known as Intel® Low Precision Optimization Tool), targeting to provide unified APIs for network compression technologies, such as low precision quantization, sparsity, pruning, knowledge distillation, across different deep learning frameworks to pursue optimal inference performance.
Stars: ✭ 666 (+400.75%)
Mutual labels:  sparsity, pruning
deep-compression
Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626
Stars: ✭ 156 (+17.29%)
Mutual labels:  sparsity, pruning
dedupsqlfs
Deduplicating filesystem via Python3, FUSE and SQLite
Stars: ✭ 24 (-81.95%)
Mutual labels:  compression
pyowl
Ordered Weighted L1 regularization for classification and regression in Python
Stars: ✭ 52 (-60.9%)
Mutual labels:  sparsity
Hypernets
A General Automated Machine Learning framework to simplify the development of End-to-end AutoML toolkits in specific domains.
Stars: ✭ 221 (+66.17%)
Mutual labels:  neural-architecture-search
blz4
Example of LZ4 compression with optimal parsing using BriefLZ algorithms
Stars: ✭ 24 (-81.95%)
Mutual labels:  compression

torchprune

Main contributors of this code base: Lucas Liebenwein, Cenk Baykal.

Please check individual paper folders for authors of each paper.

Papers

This repository contains code to reproduce the results from the following papers:

Paper Venue Title & Link
Node NeurIPS 2021 Sparse Flows: Pruning Continuous-depth Models
ALDS NeurIPS 2021 Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition
Lost MLSys 2021 Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy
PFP ICLR 2020 Provable Filter Pruning for Efficient Neural Networks
SiPP SIAM 2022 SiPPing Neural Networks: Sensitivity-informed Provable Pruning of Neural Networks

Packages

In addition, the repo contains two stand-alone Python packages that can be used for any desired pruning experiment:

Packages Location Description
torchprune ./src/torchprune This package can be used to run any of the implemented pruning algorithms. It also contains utilities for pre-defined networks (or your own network) and for standard datasets (a minimal pruning sketch follows below the table).
experiment ./src/experiment This package can be used to run pruning experiments and compare multiple pruning methods for different prune ratios. Each experiment is configured using a .yaml configuration file.
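
As a rough, self-contained illustration of what a single pruning step does, the sketch below uses PyTorch's built-in torch.nn.utils.prune utilities rather than the torchprune API (whose actual entry points are documented in src/torchprune/README.md); it performs plain unstructured magnitude pruning on a toy network:

import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy network standing in for any model you might want to prune.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Zero out the 80% smallest-magnitude weights of each linear layer (unstructured pruning).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

# Report the resulting global sparsity (biases are left untouched).
total = sum(p.numel() for p in model.parameters())
zeros = sum(int((p == 0).sum()) for p in model.parameters())
print(f"global sparsity: {zeros / total:.2%}")

The pruning methods implemented in torchprune replace this simple magnitude criterion with the filter-pruning, sensitivity-based weight-pruning, and low-rank decomposition schemes from the papers above.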

Paper Reproducibility

The code for each paper is implemented in the respective packages. In addition, each paper has a separate folder that contains additional information about the paper as well as scripts and parameter configurations to reproduce the exact results from the paper (a hypothetical example invocation follows the table below).

Paper Location
Node paper/node
ALDS paper/alds
Lost paper/lost
PFP paper/pfp
SiPP paper/sipp
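
The exact scripts and configuration files differ per paper and are described in each folder's README. As a purely hypothetical example (the experiment.main entry point and the config path below are assumptions, not verified commands), launching a reproduction run could look like:

python -m experiment.main paper/alds/param/cifar/resnet20.yaml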

Setup

We provide three ways to install the codebase:

  1. GitHub repo + full conda environment
  2. Installation via pip
  3. Docker image

1. GitHub Repo

Clone the GitHub repo:

git clone git@github.com:lucaslie/torchprune.git
# (or your favorite way to pull a repo)

We recommend installing the packages in a separate conda environment. To create and activate a new conda environment, run:

conda create -n prune python=3.8 pip
conda activate prune

To install all required dependencies and both packages, run:

pip install -r misc/requirements.txt

Note that this will also install pre-commit hooks for clean commits :-)

2. Pip Installation

To install each package separately with minimal dependencies (without manually cloning the repo), run the following commands:

# "torchprune" package
pip install git+https://github.com/lucaslie/torchprune/#subdirectory=src/torchprune

# "experiment" package
pip install git+https://github.com/lucaslie/torchprune/#subdirectory=src/experiment

Note that the experiment package does not automatically install the torchprune package.

3. Docker Image

You can simply pull the Docker image from Docker Hub:

docker pull liebenwein/torchprune

You can run it interactively with

docker run -it liebenwein/torchprune bash

For your reference you can find the Dockerfile here.
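
If you need GPU access inside the container, Docker's standard --gpus flag can be added (this assumes the NVIDIA Container Toolkit is installed on the host; it is not a torchprune-specific option):

docker run -it --gpus all liebenwein/torchprune bash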

More Information and Usage

Check out the following READMEs in the sub-directories to find out more about using the codebase.

READMEs More Information
src/torchprune/README.md more details on how to prune neural networks, how to use and set up the datasets, how to implement custom pruning methods, and how to add your own datasets and networks.
src/experiment/README.md more details on how to configure and run your own experiments, and how to reproduce the results.
paper/node/README.md more information on the Node paper.
paper/alds/README.md more information on the ALDS paper.
paper/lost/README.md more information on the Lost paper.
paper/pfp/README.md more information on the PFP paper.
paper/sipp/README.md more information on the SiPP paper.

Citations

Please cite the respective papers when using our work.

Sparse Flows: Pruning Continuous-Depth Models

@article{liebenwein2021sparse,
  title={Sparse flows: Pruning continuous-depth models},
  author={Liebenwein, Lucas and Hasani, Ramin and Amini, Alexander and Rus, Daniela},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  pages={22628--22642},
  year={2021}
}

Towards Determining the Optimal Layer-wise Decomposition

@inproceedings{liebenwein2021alds,
 author = {Lucas Liebenwein and Alaa Maalouf and Dan Feldman and Daniela Rus},
 booktitle = {Advances in Neural Information Processing Systems},
 title = {Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition},
 url = {https://arxiv.org/abs/2107.11442},
 volume = {34},
 year = {2021}
}

Lost In Pruning

@article{liebenwein2021lost,
  title={Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy},
  author={Liebenwein, Lucas and Baykal, Cenk and Carter, Brandon and Gifford, David and Rus, Daniela},
  journal={Proceedings of Machine Learning and Systems},
  volume={3},
  year={2021}
}

Provable Filter Pruning

@inproceedings{liebenwein2020provable,
  title={Provable Filter Pruning for Efficient Neural Networks},
  author={Lucas Liebenwein and Cenk Baykal and Harry Lang and Dan Feldman and Daniela Rus},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://openreview.net/forum?id=BJxkOlSYDH}
}

SiPPing Neural Networks (Weight Pruning)

@article{baykal2022sensitivity,
  title={Sensitivity-informed provable pruning of neural networks},
  author={Baykal, Cenk and Liebenwein, Lucas and Gilitschenski, Igor and Feldman, Dan and Rus, Daniela},
  journal={SIAM Journal on Mathematics of Data Science},
  volume={4},
  number={1},
  pages={26--45},
  year={2022},
  publisher={SIAM}
}