
bayesgroup / Variational Dropout Sparsifies Dnn

License: GPL-3.0
Sparse Variational Dropout, ICML 2017

Projects that are alternatives to or similar to Variational Dropout Sparsifies Dnn

Nn compression
Stars: ✭ 193 (-30.58%)
Mutual labels:  jupyter-notebook, compression
Digital video introduction
A hands-on introduction to video technology: image, video, codec (av1, vp9, h265) and more (ffmpeg encoding).
Stars: ✭ 12,184 (+4282.73%)
Mutual labels:  jupyter-notebook, compression
Demo Chinese Text Binary Classification With Bert
Stars: ✭ 276 (-0.72%)
Mutual labels:  jupyter-notebook
Generative models tutorial with demo
Generative Models Tutorial with Demo: Bayesian Classifier Sampling, Variational Auto Encoder (VAE), Generative Adversarial Networks (GANs), Popular GAN Architectures, Auto-Regressive Models, Important Generative Model Papers, Courses, etc.
Stars: ✭ 276 (-0.72%)
Mutual labels:  jupyter-notebook
Bert Gen
Stars: ✭ 277 (-0.36%)
Mutual labels:  jupyter-notebook
Data analysis
Hands-on exercises in web scraping and data analysis
Stars: ✭ 275 (-1.08%)
Mutual labels:  jupyter-notebook
Pyopenpose
Python bindings for the Openpose library
Stars: ✭ 277 (-0.36%)
Mutual labels:  jupyter-notebook
Toon Me
A deep learning project to toonify portrait images
Stars: ✭ 276 (-0.72%)
Mutual labels:  jupyter-notebook
Style transfer
Style Transfer as Optimal Transport
Stars: ✭ 278 (+0%)
Mutual labels:  jupyter-notebook
Pgcn
Graph Convolutional Networks for Temporal Action Localization (ICCV2019)
Stars: ✭ 276 (-0.72%)
Mutual labels:  jupyter-notebook
Adaptnlp
An easy-to-use Natural Language Processing library and framework for predicting, training, fine-tuning, and serving up state-of-the-art NLP models.
Stars: ✭ 278 (+0%)
Mutual labels:  jupyter-notebook
Oneclickrun
Another colab notebook!
Stars: ✭ 277 (-0.36%)
Mutual labels:  jupyter-notebook
Quietnet
Simple chat program that communicates using inaudible sounds
Stars: ✭ 2,924 (+951.8%)
Mutual labels:  jupyter-notebook
Unrolled gan
Unrolled Generative Adversarial Networks
Stars: ✭ 277 (-0.36%)
Mutual labels:  jupyter-notebook
Iccv19 Gluoncv
Tutorial Materials for ICCV19
Stars: ✭ 277 (-0.36%)
Mutual labels:  jupyter-notebook
Latest News Classifier
Master in Data Science Final Project
Stars: ✭ 276 (-0.72%)
Mutual labels:  jupyter-notebook
Transformer
Implementation of Transformer model (originally from Attention is All You Need) applied to Time Series.
Stars: ✭ 273 (-1.8%)
Mutual labels:  jupyter-notebook
Machine Learning
my machine-learning tutorial
Stars: ✭ 276 (-0.72%)
Mutual labels:  jupyter-notebook
Scipy2018 Geospatial Data
Stars: ✭ 277 (-0.36%)
Mutual labels:  jupyter-notebook
Cryptocurrency Analysis Python
Open-Source Tutorial For Analyzing and Visualizing Cryptocurrency Data
Stars: ✭ 278 (+0%)
Mutual labels:  jupyter-notebook

Variational Dropout Sparsifies Deep Neural Networks

TensorFlow implementation

Google AI Research has released State of Sparsity in Deep Neural Networks, a nice large-scale study of sparsification methods. The code includes an implementation of Sparse Variational Dropout in TensorFlow.

Play around with SparseVD (PyTorch)

You can play with the compression of a small neural network using the following IPython notebook on Colab, which is also available as an assignment on Colab from the DeepBayes Summer School. The code is not highly tuned, but it is simple.
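
If you want to experiment outside the notebook, the layer below is a minimal PyTorch sketch of a Sparse VD fully-connected layer with the local reparameterization trick. The class and all names are our own illustration, not code from this repo or the notebook.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseVDLinear(nn.Module):
    """Fully-connected layer with per-weight variational dropout (sketch)."""

    def __init__(self, in_features, out_features, threshold=3.0):
        super().__init__()
        self.weight = nn.Parameter(0.02 * torch.randn(out_features, in_features))
        self.log_sigma2 = nn.Parameter(torch.full((out_features, in_features), -10.0))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.threshold = threshold  # prune weights with log alpha above this value

    @property
    def log_alpha(self):
        # alpha = sigma^2 / theta^2, clipped for numerical stability
        return (self.log_sigma2 - 2.0 * torch.log(self.weight.abs() + 1e-8)).clamp(-10.0, 10.0)

    def forward(self, x):
        if self.training:
            # local reparameterization trick: sample noisy pre-activations
            mean = F.linear(x, self.weight, self.bias)
            var = F.linear(x * x, self.log_sigma2.exp()) + 1e-8
            return mean + var.sqrt() * torch.randn_like(mean)
        # at test time, keep only the weights below the log-alpha threshold
        mask = (self.log_alpha < self.threshold).float()
        return F.linear(x, self.weight * mask, self.bias)

    def kl(self):
        # approximation of the KL term from the paper (see the formulas below)
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        la = self.log_alpha
        neg_kl = k1 * torch.sigmoid(k2 + k3 * la) - 0.5 * F.softplus(-la) - k1
        return -neg_kl.sum()

The training loss is the usual data term plus the sum of kl() over all such layers; following the paper, the weight of the KL term is annealed from 0 to 1 over the first epochs (KL warm-up).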

This repo contains the code for our ICML 2017 paper, Variational Dropout Sparsifies Deep Neural Networks (talk, slides, poster, blog post). We showed that Variational Dropout leads to extremely sparse solutions in both fully-connected and convolutional layers. Sparse VD reduced the number of parameters up to 280 times on LeNet architectures and up to 68 times on VGG-like networks with a negligible decrease in accuracy. This effect is similar to the Automatic Relevance Determination effect in empirical Bayes. However, in Sparse VD the prior distribution remains fixed, so there is no additional risk of overfitting.
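
For reference, the key quantities from the paper: each weight gets an individual dropout rate through a Gaussian posterior, and the KL term against the log-uniform prior is approximated with fitted constants. In the paper's notation:

w_{ij} \sim \mathcal{N}\!\left(\theta_{ij},\; \alpha_{ij}\theta_{ij}^{2}\right),
\qquad \sigma_{ij}^{2} = \alpha_{ij}\theta_{ij}^{2}

-D_{\mathrm{KL}} \approx k_1\,\sigma(k_2 + k_3\log\alpha_{ij})
 - \tfrac{1}{2}\log\!\left(1 + \alpha_{ij}^{-1}\right) - k_1,
\qquad k_1 = 0.63576,\ k_2 = 1.87320,\ k_3 = 1.48695

Weights with log alpha_ij > 3 (an effective binary dropout rate above roughly 0.95) are zeroed out at test time.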

We visualize the weights of the Sparse VD LeNet-5-Caffe network and show several filters of the first convolutional layer together with a slice of the fully-connected layer :)
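
A rough way to reproduce such pictures with matplotlib, assuming a trained network built from SparseVD layers like the sketch above (plot_masked_weights and model.fc1 are hypothetical names of our own):

import matplotlib.pyplot as plt

def plot_masked_weights(layer, shape, n=8):
    # display the effective (thresholded) weights of a SparseVD layer
    mask = (layer.log_alpha < layer.threshold).float()
    w = (layer.weight * mask).detach().cpu().numpy()
    vmax = abs(w).max()
    fig, axes = plt.subplots(1, n, figsize=(2 * n, 2))
    for i, ax in enumerate(axes):
        ax.imshow(w[i].reshape(shape), cmap="RdBu", vmin=-vmax, vmax=vmax)
        ax.axis("off")
    plt.show()

# e.g. the first 8 rows of an MNIST input layer, viewed as 28x28 images:
# plot_masked_weights(model.fc1, shape=(28, 28))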

ICML 2017 Oral Presentation by Dmitry Molchanov

MNIST Experiments

The table compares different sparsity-inducing techniques (Pruning (Han et al., 2015b;a), DNS (Guo et al., 2016), SWS (Ullrich et al., 2017)) on LeNet architectures. Error is the test error (%), sparsity is the percentage of zeroed weights per layer, and compression is the ratio of total to remaining weights. Our method provides the highest level of sparsity at comparable accuracy.

Network         Method            Error (%)   Sparsity per Layer (%)   Compression
LeNet-300-100   Original          1.64        -                        1
                Pruning           1.59        92.0 − 91.0 − 74.0       12
                DNS               1.99        98.2 − 98.2 − 94.5       56
                SWS               1.94        -                        23
                SparseVD (ours)   1.92        98.9 − 97.2 − 62.0       68
LeNet-5         Original          0.80        -                        1
                Pruning           0.77        34 − 88 − 92.0 − 81      12
                DNS               0.91        86 − 97 − 99.3 − 96      111
                SWS               0.97        -                        200
                SparseVD (ours)   0.75        67 − 98 − 99.8 − 95      280
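
As a rough sanity check, the compression column follows from the per-layer sparsities and the layer sizes. For LeNet-300-100 (weight shapes 784x300, 300x100, 100x10; biases ignored):

# back-of-the-envelope compression ratio |W| / |W != 0| for LeNet-300-100
sizes = [784 * 300, 300 * 100, 100 * 10]   # weights per layer
sparsity = [0.989, 0.972, 0.620]           # fraction of zeroed weights per layer
kept = sum(n * (1 - s) for n, s in zip(sizes, sparsity))
print(round(sum(sizes) / kept))            # ~70, close to the reported 68x

The small mismatch with the reported 68x is expected, since the paper's exact bookkeeping (e.g. biases) differs from this rough count.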

CIFAR Experiments

The plot shows the accuracy and sparsity level for VGG-like architectures of different sizes; the number of neurons and filters scales with k. Dense networks were trained with binary dropout, and Sparse VD networks were trained with Sparse Variational Dropout on all layers. The overall sparsity level achieved by our method is shown as a dashed line. The accuracy drop is negligible in most cases, and the sparsity level is high, especially for larger networks.

Environment setup

# system packages and an isolated virtualenv
sudo apt install virtualenv python-pip python-dev
virtualenv venv --system-site-packages
source venv/bin/activate

# Python dependencies; Theano is pinned to the 0.9.0 release
pip install numpy tabulate 'ipython[all]' scikit-learn matplotlib seaborn
pip install --upgrade https://github.com/Theano/Theano/archive/rel-0.9.0.zip
pip install --upgrade https://github.com/Lasagne/Lasagne/archive/master.zip

Launch experiments

source ~/venv/bin/activate
cd variational-dropout-sparsifies-dnn
THEANO_FLAGS='floatX=float32,device=gpu0,lib.cnmem=1' ipython ./experiments/<experiment>.py
  • If you run into a cuDNN problem, please see this issue.
  • This repo appears to use more up-to-date libraries (Python 3.5 and Theano 1.0.0).

Further extensions

These two papers heavily rely on the Sparse Variational Dropout technique and extend it to other applications:

Citation

If you find this code useful, please cite our paper:

@InProceedings{molchanov2017variational,
  title = {Variational Dropout Sparsifies Deep Neural Networks},
  author = {Dmitry Molchanov and Arsenii Ashukha and Dmitry Vetrov},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  year = {2017}
}