liuzhuang13 / Slimming

License: MIT
Learning Efficient Convolutional Networks through Network Slimming, ICCV 2017.

Language: Lua

Network Slimming

This repository contains the code for the following paper:

Learning Efficient Convolutional Networks through Network Slimming (ICCV 2017).

Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, Changshui Zhang.

The code is based on fb.resnet.torch.

We have now released another [PyTorch implementation] which supports ResNet and DenseNet, based on Qiang Wang's PyTorch implementation listed below.

Other implementations: [PyTorch] by Qiang Wang, [Chainer] by Daiki Sanno, [PyTorch hand detection using YOLOv3] by Lam1360, [PyTorch object detection using YOLOv3] by talebolano.

Citation:

@inproceedings{Liu2017learning,
	title = {Learning Efficient Convolutional Networks through Network Slimming},
	author = {Liu, Zhuang and Li, Jianguo and Shen, Zhiqiang and Huang, Gao and Yan, Shoumeng and Zhang, Changshui},
	booktitle = {ICCV},
	year = {2017}
}

Introduction

Network Slimming is a neural network training scheme that simultaneously reduces model size, run-time memory footprint, and computing operations, while introducing no accuracy loss and minimal overhead to the training process. The resulting models require no special libraries or hardware for efficient inference.

Approach

Figure 1: The channel pruning process.

We associate a scaling factor (reused from the batch normalization layers) with each channel in the convolutional layers. Sparsity regularization is imposed on these scaling factors during training to automatically identify unimportant channels. Channels with small scaling-factor values (shown in orange) are pruned (left side). After pruning, we obtain compact models (right side), which are then fine-tuned to achieve accuracy comparable to (or even higher than) that of the normally trained full network.
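The effect of the sparsity term can be sketched numerically. The following is a hypothetical NumPy illustration (not the repo's Lua code): the penalty lambda * |gamma| on each scaling factor contributes lambda * sign(gamma) to its gradient, steadily pushing the factors of unimportant channels toward zero.

```python
import numpy as np

# Hypothetical sketch of L1 sparsity on per-channel scaling factors.
# The lambda and learning-rate values are illustrative, not the paper's.
rng = np.random.default_rng(0)
gamma = rng.normal(0.0, 0.5, size=16)   # scaling factors for 16 channels
gamma0 = gamma.copy()
lam, lr = 0.05, 0.1                     # sparsity weight (the -S flag) and step size

for _ in range(200):
    task_grad = np.zeros_like(gamma)    # pretend the task loss is locally flat here
    # subgradient step: task gradient plus lambda * sign(gamma)
    gamma -= lr * (task_grad + lam * np.sign(gamma))
    # once a factor crosses zero, clamp it there instead of oscillating
    gamma[np.abs(gamma) < lr * lam] = 0.0

# Channels whose factor was driven to zero are pruning candidates.
prunable = np.flatnonzero(gamma == 0.0)
```

In the real training run the task gradient keeps the factors of useful channels away from zero, so only the genuinely unimportant channels end up prunable.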


Figure 2: Flow-chart of the network slimming procedure. The dotted line is for the multi-pass version of the procedure.
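The procedure in Figure 2, including the dotted multi-pass loop, can be sketched as follows. Every function here is a placeholder stand-in, not the repo's API; a "model" is reduced to its channel count purely for illustration.

```python
# Hedged sketch of the network slimming pipeline: train with sparsity,
# prune, rebuild a compact net, fine-tune, and (multi-pass) feed the
# compact model back in as the new initial network.

def train_with_sparsity(channels):
    return channels                 # training changes weights, not width

def prune_and_convert(channels, percent):
    # pruning removes `percent` of the channels; conversion rebuilds the net
    return max(1, round(channels * (1 - percent)))

def fine_tune(channels):
    return channels                 # fine-tuning recovers accuracy

def slimming(channels, percent, passes=1):
    for _ in range(passes):         # passes > 1 is the dotted-line loop
        channels = train_with_sparsity(channels)
        channels = fine_tune(prune_and_convert(channels, percent))
    return channels

print(slimming(512, percent=0.7, passes=2))  # 512 -> 154 -> 46 channels
```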

Example Usage

This repo holds the example code for VGGNet on the CIFAR-10 dataset.

  1. Prepare the directories to save the results

     mkdir vgg_cifar10/
     mkdir vgg_cifar10/pruned
     mkdir vgg_cifar10/converted
     mkdir vgg_cifar10/fine_tune

  2. Train the VGG network with channel-level sparsity; -S is the lambda in the paper, which controls the strength of the sparsity regularization

     th main.lua -netType vgg -save vgg_cifar10/ -S 0.0001

  3. Identify a certain percentage of relatively unimportant channels and set their scaling factors to 0

     th prune/prune.lua -percent 0.7 -model vgg_cifar10/model_160.t7 -save vgg_cifar10/pruned/model_160_0.7.t7

  4. Rebuild a genuinely compact network and copy over the weights from the pruned model of the previous step

     th convert/vgg.lua -model vgg_cifar10/pruned/model_160_0.7.t7 -save vgg_cifar10/converted/model_160_0.7.t7

  5. Fine-tune the compact network

     th main_fine_tune.lua -retrain vgg_cifar10/converted/model_160_0.7.t7 -save vgg_cifar10/fine_tune/
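The pruning step above picks a global threshold from the requested percentage. A hypothetical NumPy sketch of that idea (function and variable names are illustrative, not the repo's):

```python
import numpy as np

# Sketch of percentage-based channel selection: gather the scaling
# factors from all layers, take the `percent` quantile of their absolute
# values as one global threshold, and zero every factor below it.

def mask_channels(gammas, percent):
    """Return gammas with (roughly) the smallest `percent` fraction set to 0."""
    flat = np.abs(np.concatenate(gammas))
    threshold = np.quantile(flat, percent)   # one global cut across layers
    return [np.where(np.abs(g) < threshold, 0.0, g) for g in gammas]

layer1 = np.array([0.9, 0.05, 0.4, 0.01])
layer2 = np.array([0.02, 0.7, 0.03, 0.6])
pruned = mask_channels([layer1, layer2], percent=0.5)
# With percent=0.5, the four smallest factors (0.05, 0.01, 0.02, 0.03) are zeroed.
```

Because the threshold is global rather than per-layer, layers whose channels are mostly unimportant can be pruned more aggressively than others.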

Note

The original paper contains an error in the VGG results on ImageNet (Table 2); please refer to this [issue]. The corrected result is presented in Table 4 of this paper.

Contact

liuzhuangthu at gmail.com
