
dongyp13 / Stochastic-Quantization

Licence: other
Training Low-bits DNNs with Stochastic Quantization

Programming Languages

Jupyter Notebook
11667 projects
C++
36643 projects - #6 most used programming language
Python
139335 projects - #7 most used programming language
Cuda
1817 projects
CMake
9771 projects
Protocol Buffer
295 projects

Projects that are alternatives of or similar to Stochastic-Quantization

ppq
PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool.
Stars: ✭ 281 (+301.43%)
Mutual labels:  caffe, quantization
Pinto model zoo
A repository that shares tuning results of trained models generated by TensorFlow / Keras. Post-training quantization (Weight Quantization, Integer Quantization, Full Integer Quantization, Float16 Quantization), Quantization-aware training. TensorFlow Lite. OpenVINO. CoreML. TensorFlow.js. TF-TRT. MediaPipe. ONNX. [.tflite,.h5,.pb,saved_model,tfjs,tftrt,mlmodel,.xml/.bin, .onnx]
Stars: ✭ 634 (+805.71%)
Mutual labels:  caffe, quantization
Php Opencv Examples
Tutorial for computer vision and machine learning in PHP 7/8 by opencv (installation + examples + documentation)
Stars: ✭ 333 (+375.71%)
Mutual labels:  caffe, imagenet
image-classification
A collection of SOTA Image Classification Models in PyTorch
Stars: ✭ 70 (+0%)
Mutual labels:  imagenet, quantization
Caffe Model
Caffe models (including classification, detection and segmentation) and deploy files for famous networks
Stars: ✭ 1,258 (+1697.14%)
Mutual labels:  caffe, imagenet
Imgclsmob
Sandbox for training deep learning networks
Stars: ✭ 2,405 (+3335.71%)
Mutual labels:  imagenet, cifar
Cnn Models
ImageNet pre-trained models with batch normalization for the Caffe framework
Stars: ✭ 355 (+407.14%)
Mutual labels:  caffe, imagenet
Open-Set-Recognition
Open Set Recognition
Stars: ✭ 49 (-30%)
Mutual labels:  imagenet, cifar
Mobilenet Caffe
Caffe Implementation of Google's MobileNets (v1 and v2)
Stars: ✭ 1,217 (+1638.57%)
Mutual labels:  caffe, imagenet
Jacinto Ai Devkit
Training & Quantization of embedded friendly Deep Learning / Machine Learning / Computer Vision models
Stars: ✭ 49 (-30%)
Mutual labels:  caffe, quantization
Densenet Caffe
DenseNet Caffe Models, converted from https://github.com/liuzhuang13/DenseNet
Stars: ✭ 350 (+400%)
Mutual labels:  caffe, imagenet
Senet Caffe
A Caffe Re-Implementation of SENet
Stars: ✭ 169 (+141.43%)
Mutual labels:  caffe, imagenet
Caffenet Benchmark
Evaluation of the CNN design choices performance on ImageNet-2012.
Stars: ✭ 700 (+900%)
Mutual labels:  caffe, imagenet
Resnet Imagenet Caffe
train resnet on imagenet from scratch with caffe
Stars: ✭ 105 (+50%)
Mutual labels:  caffe, imagenet
Xception-caffe
Xception implemented with caffe
Stars: ✭ 45 (-35.71%)
Mutual labels:  caffe, imagenet
ImageModels
ImageNet model implemented using the Keras Functional API
Stars: ✭ 63 (-10%)
Mutual labels:  imagenet
onnx2caffe
pytorch to caffe by onnx
Stars: ✭ 341 (+387.14%)
Mutual labels:  caffe
colorchecker-detection
Multiple ColorChecker Detection. This code implements a multiple colorChecker detection method, as described in the paper Fast and Robust Multiple ColorChecker Detection.
Stars: ✭ 51 (-27.14%)
Mutual labels:  caffe
img classification deep learning
No description or website provided.
Stars: ✭ 19 (-72.86%)
Mutual labels:  imagenet
SharpPeleeNet
ImageNet pre-trained SharpPeleeNet can be used in real-time Semantic Segmentation/Objects Detection
Stars: ✭ 13 (-81.43%)
Mutual labels:  imagenet

Stochastic-Quantization

Introduction

This repository contains the code for training and testing Stochastic Quantization as described in the paper "Learning Accurate Low-bit Deep Neural Networks with Stochastic Quantization" (BMVC 2017, Oral).

We implement our code on top of the Caffe framework. It can be used to train BWN (Binary Weight Networks), TWN (Ternary Weight Networks), SQ-BWN, and SQ-TWN.

Usage

Build Caffe

Please follow the standard installation of Caffe.

cd caffe/
make
cd ..
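
If make fails out of the box, you most likely need to create a Makefile.config first. Assuming the bundled Caffe follows the standard upstream layout, a typical sequence is:

cd caffe/
cp Makefile.config.example Makefile.config   # adjust CUDA, cuDNN and BLAS paths here
make all -j8
make pycaffe                                 # optional: only if you need the Python interface
cd ..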

Training and Testing

CIFAR

For CIFAR-10 (and CIFAR-100), we provide two network architectures, VGG-9 and ResNet-56 (see the paper for details). For example, use the following commands to train ResNet-56:

  • FWN
./CIFAR/ResNet-56/FWN/train.sh
  • BWN
./CIFAR/ResNet-56/BWN/train.sh
  • TWN
./CIFAR/ResNet-56/TWN/train.sh
  • SQ-BWN
./CIFAR/ResNet-56/SQ-BWN/train.sh
  • SQ-TWN
./CIFAR/ResNet-56/SQ-TWN/train.sh
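
Each train.sh is a thin wrapper around the caffe binary built above. A typical underlying invocation looks roughly like the following (the solver path is illustrative; the actual paths are set inside each script):

./caffe/build/tools/caffe train --solver=CIFAR/ResNet-56/SQ-TWN/solver.prototxt --gpu=0

Learning rates, snapshot intervals, and other training hyper-parameters are defined in the corresponding solver prototxt.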

ImageNet

For ImageNet, we provide AlexNet-BN and ResNet-18 network architectures. For example, use the following commands to train ResNet-18:

  • FWN
./ImageNet/ResNet-18/FWN/train.sh
  • BWN
./ImageNet/ResNet-18/BWN/train.sh
  • TWN
./ImageNet/ResNet-18/TWN/train.sh
  • SQ-BWN
./ImageNet/ResNet-18/SQ-BWN/train.sh
  • SQ-TWN
./ImageNet/ResNet-18/SQ-TWN/train.sh
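
To evaluate a trained snapshot with the stock caffe test tool, a hypothetical command is shown below; the model and weights paths are placeholders for the files produced by your own run:

./caffe/build/tools/caffe test --model=ImageNet/ResNet-18/SQ-TWN/train_val.prototxt --weights=your_snapshot.caffemodel --gpu=0 --iterations=1000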

Implementation

Layers

We add BinaryConvolution, BinaryInnerProduct, TernaryConvolution, and TernaryInnerProduct layers to train binary and ternary networks. Utility functions for low-bit DNNs are collected in lowbit-functions.
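
To make the role of these layers concrete, here is a rough NumPy sketch of the stochastic (ternary) quantization idea from the paper. The function names and the probability function are illustrative only; the actual implementation lives in the CUDA code under lowbit-functions and may differ in detail.

import numpy as np

def ternarize(w):
    # Ternarize one filter with the usual threshold/scale heuristic.
    delta = 0.7 * np.mean(np.abs(w))
    mask = np.abs(w) > delta
    alpha = np.mean(np.abs(w[mask])) if mask.any() else 0.0
    return alpha * np.sign(w) * mask

def stochastic_quantize(W, ratio=0.5, rng=np.random):
    # Quantize only a `ratio` fraction of the filters, chosen by a roulette
    # whose probabilities favour filters with small relative quantization error.
    Q = np.array([ternarize(w) for w in W])
    err = np.array([np.sum(np.abs(w - q)) / (np.sum(np.abs(w)) + 1e-12)
                    for w, q in zip(W, Q)])
    prob = 1.0 / (err + 1e-12)      # smaller error -> more likely to be quantized
    prob /= prob.sum()
    n_q = int(round(ratio * len(W)))
    idx = rng.choice(len(W), size=n_q, replace=False, p=prob)
    out = W.copy()
    out[idx] = Q[idx]               # the remaining filters keep full-precision weights
    return out

W = np.random.randn(8, 27)          # e.g. 8 filters of a 3x3x3 convolution
print(stochastic_quantize(W, ratio=0.5).round(3))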

Params

We add two parameters to convolution_param and inner_product_param: sq and ratio. sq specifies whether to use stochastic quantization (defaults to false), and ratio is the SQ ratio (defaults to 100).
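
For example, a hypothetical convolution layer that stochastically quantizes half of its filters could be configured roughly as follows (the layer name, type, and field values are illustrative; check the prototxt files shipped with this repository for the exact usage):

layer {
  name: "conv1"
  type: "TernaryConvolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 64
    kernel_size: 3
    stride: 1
    pad: 1
    sq: true    # enable stochastic quantization for this layer
    ratio: 50   # SQ ratio
  }
}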

Note

Our code currently runs correctly only on GPU; a CPU version has yet to be implemented.

Have fun deploying your own low-bit DNNs!
