
NVlabs / Condensa

License: Apache-2.0
Programmable Neural Network Compression

Programming Languages

Python

Projects that are alternatives to or similar to Condensa

Knowledge Distillation Pytorch
A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility
Stars: ✭ 986 (+664.34%)
Mutual labels:  deep-neural-networks, model-compression
Channel Pruning
Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17)
Stars: ✭ 979 (+658.91%)
Mutual labels:  deep-neural-networks, model-compression
Model Compression Papers
Papers for deep neural network compression and acceleration
Stars: ✭ 296 (+129.46%)
Mutual labels:  deep-neural-networks, model-compression
Microexpnet
MicroExpNet: An Extremely Small and Fast Model For Expression Recognition From Frontal Face Images
Stars: ✭ 121 (-6.2%)
Mutual labels:  deep-neural-networks, model-compression
Monet
MONeT framework for reducing memory consumption of DNN training
Stars: ✭ 126 (-2.33%)
Mutual labels:  deep-neural-networks
Perceptualsimilarity
LPIPS metric. pip install lpips
Stars: ✭ 2,037 (+1479.07%)
Mutual labels:  deep-neural-networks
Trainer Mac
Trains a model, then generates a complete Xcode project that uses it - no code necessary
Stars: ✭ 122 (-5.43%)
Mutual labels:  deep-neural-networks
Nlp Pretrained Model
A collection of pre-trained natural language processing models.
Stars: ✭ 122 (-5.43%)
Mutual labels:  deep-neural-networks
Pretrained Language Model
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
Stars: ✭ 2,033 (+1475.97%)
Mutual labels:  model-compression
Pytorch convlstm
Convolutional LSTM implementation in PyTorch
Stars: ✭ 126 (-2.33%)
Mutual labels:  deep-neural-networks
Echo
Python package containing all custom layers used in Neural Networks (Compatible with PyTorch, TensorFlow and MegEngine)
Stars: ✭ 126 (-2.33%)
Mutual labels:  deep-neural-networks
Pointwise
Code for Pointwise Convolutional Neural Networks, CVPR 2018
Stars: ✭ 123 (-4.65%)
Mutual labels:  deep-neural-networks
Deep Learning For Time Series Forecasting
This repository is designed to teach you, step-by-step, how to develop deep learning methods for time series forecasting with concrete and executable examples in Python.
Stars: ✭ 125 (-3.1%)
Mutual labels:  deep-neural-networks
Imagecluster
Cluster images based on image content using a pre-trained deep neural network, optional time distance scaling and hierarchical clustering.
Stars: ✭ 122 (-5.43%)
Mutual labels:  deep-neural-networks
Phantoscope
Open Source, Cloud Native, RESTful Search Engine Powered by Neural Networks
Stars: ✭ 127 (-1.55%)
Mutual labels:  deep-neural-networks
Lenet 5
PyTorch implementation of LeNet-5 with live visualization
Stars: ✭ 122 (-5.43%)
Mutual labels:  deep-neural-networks
100 Days Of Nlp
Stars: ✭ 125 (-3.1%)
Mutual labels:  deep-neural-networks
Simple Neural Network
Creating a simple neural network in Python with one input layer (3 inputs) and one output neuron.
Stars: ✭ 126 (-2.33%)
Mutual labels:  deep-neural-networks
Keras Kaldi
Keras Interface for Kaldi ASR
Stars: ✭ 124 (-3.88%)
Mutual labels:  deep-neural-networks
Chinese Speech To Text
Chinese Speech To Text Using Wavenet
Stars: ✭ 124 (-3.88%)
Mutual labels:  deep-neural-networks

A Programming System for Neural Network Compression

Condensa is a framework for programmable model compression in Python. It comes with a set of built-in compression operators which may be used to compose complex compression schemes targeting specific combinations of DNN architecture, hardware platform, and optimization objective. To recover any accuracy lost during compression, Condensa uses a constrained optimization formulation of model compression and employs an Augmented Lagrangian-based algorithm as the optimizer.
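The constrained optimization view mentioned above can be sketched as follows. The notation here is illustrative and simplified (the exact formulation is given in the Condensa paper): $w$ denotes the original model weights, $\theta$ a compressed representation, $D$ a decompression map, and $L$ the task loss.

```latex
% Model compression as constrained optimization (illustrative sketch):
\min_{w,\,\theta} \; L(w) \quad \text{subject to} \quad w = D(\theta)

% Augmented Lagrangian with multipliers \lambda and penalty \mu > 0,
% minimized alternately over w and \theta while \lambda and \mu are updated:
\mathcal{L}_A(w, \theta, \lambda) \;=\; L(w)
  \;+\; \lambda^\top \big(w - D(\theta)\big)
  \;+\; \frac{\mu}{2}\, \big\lVert w - D(\theta) \big\rVert_2^2
```

Intuitively, the penalty term pulls the trained weights toward something exactly representable in compressed form, while the loss term preserves accuracy.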

Status: Condensa is under active development; bug reports, pull requests, and other feedback are all highly appreciated. See the Contributing section below for more details on how to contribute.

Supported Operators and Schemes

Condensa provides the following set of pre-built compression schemes:

The schemes above are built using one or more compression operators, which may be combined in various ways to define your own custom schemes.

Please refer to the documentation for a detailed description of available operators and schemes.
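To illustrate the idea of composing operators into schemes, here is a minimal, self-contained Python sketch. The names (prune_magnitude, quantize_grid, compose) and their behavior are hypothetical illustrations of the concept, not Condensa's actual API:

```python
# Illustrative sketch only: operators are functions on weights, and a
# "scheme" is a composition of operators. Not Condensa's real API.

def prune_magnitude(weights, density):
    """Zero out the smallest-magnitude weights, keeping a `density` fraction."""
    k = max(1, int(len(weights) * density))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def quantize_grid(weights, step=0.5):
    """Snap each weight to a uniform grid with the given step size."""
    return [round(w / step) * step for w in weights]

def compose(*operators):
    """Build a scheme that applies each operator in sequence."""
    def scheme(weights):
        for op in operators:
            weights = op(weights)
        return weights
    return scheme

# A custom scheme: prune to 50% density, then quantize the survivors.
scheme = compose(lambda w: prune_magnitude(w, 0.5), quantize_grid)
compressed = scheme([0.9, -0.1, 0.45, -1.2])  # → [1.0, 0.0, 0.0, -1.0]
```

The point of the composition pattern is that new schemes need no new machinery: any ordered combination of operators is itself a scheme.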

Prerequisites

Condensa requires:

  • A working Linux installation (we use Ubuntu 18.04)
  • NVIDIA drivers and CUDA 10+ for GPU support
  • Python 3.5 or newer
  • PyTorch 1.0 or newer

Installation

The most straightforward way of installing Condensa is via pip:

pip install condensa

Installation from Source

Retrieve the latest source code from the Condensa repository:

git clone https://github.com/NVlabs/condensa.git

Navigate to the source code directory and run the following:

pip install -e .

Test out the Installation

To check the installation, run the unit test suite:

bash run_all_tests.sh -v

Getting Started

The AlexNet Notebook contains a simple step-by-step walkthrough of compressing a pre-trained model using Condensa. Check out the examples folder for additional, more complex examples of using Condensa (note: some examples require the torchvision package to be installed).

Documentation

Documentation is available here. Please also check out the Condensa paper for a detailed description of Condensa's motivation, features, and performance results.

Contributing

We appreciate all contributions, including bug fixes, new features, documentation improvements, and additional tutorials. You can initiate contributions via GitHub pull requests. When making code contributions, please follow the PEP 8 Python coding standard and provide unit tests for new features. Finally, make sure to sign off your commits using the -s flag or by adding Signed-off-by: Name <Email> to the commit message.

Citing Condensa

If you use Condensa for research, please consider citing the following paper:

@article{condensa2020,
  title={A Programmable Approach to Neural Network Compression}, 
  author={V. {Joseph} and G. L. {Gopalakrishnan} and S. {Muralidharan} and M. {Garland} and A. {Garg}},
  journal={IEEE Micro}, 
  year={2020},
  volume={40},
  number={5},
  pages={17-25},
  doi={10.1109/MM.2020.3012391}
}

Disclaimer

Condensa is a research prototype and not an official NVIDIA product. Many features are still experimental and yet to be properly documented.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].