microsoft / LQ-Nets

License: MIT
LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives to or similar to LQ-Nets

TF2
An Open Source Deep Learning Inference Engine Based on FPGA
Stars: ✭ 113 (-42.05%)
Mutual labels:  cnn, quantization, dnn
Model Quantization
Collections of model quantization algorithms
Stars: ✭ 118 (-39.49%)
Mutual labels:  cnn, quantization, compression
RMDL
RMDL: Random Multimodel Deep Learning for Classification
Stars: ✭ 375 (+92.31%)
Mutual labels:  cnn, dnn
AIMET
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Stars: ✭ 453 (+132.31%)
Mutual labels:  quantization, compression
Dialectid e2e
End to End Dialect Identification using Convolutional Neural Network
Stars: ✭ 40 (-79.49%)
Mutual labels:  cnn, dnn
Caffe-HRT
Heterogeneous Run Time version of Caffe. It adds heterogeneous capabilities to Caffe, using a heterogeneous computing infrastructure framework to speed up deep learning on Arm-based heterogeneous embedded platforms, while retaining all the features of the original Caffe architecture so that users can deploy their applications seamlessly.
Stars: ✭ 271 (+38.97%)
Mutual labels:  cnn, dnn
Caffe-Mobile
Optimized (for size and speed) Caffe lib for iOS and Android with an out-of-the-box demo app.
Stars: ✭ 316 (+62.05%)
Mutual labels:  cnn, dnn
Model Optimization
A toolkit to optimize Keras and TensorFlow ML models for deployment, including quantization and pruning.
Stars: ✭ 992 (+408.72%)
Mutual labels:  quantization, compression
Sai
SDK for TEE AI Stick (includes model training script, inference library, examples)
Stars: ✭ 28 (-85.64%)
Mutual labels:  cnn, quantization
DeepZip
NN-based lossless compression
Stars: ✭ 69 (-64.62%)
Mutual labels:  cnn, compression
SSD-Pruning-and-quantization
Pruning and quantization for SSD. Model compression.
Stars: ✭ 19 (-90.26%)
Mutual labels:  compression, quantization
ZeroQ
[CVPR'20] ZeroQ: A Novel Zero Shot Quantization Framework
Stars: ✭ 150 (-23.08%)
Mutual labels:  quantization, compression
DNNAC
All about acceleration and compression of Deep Neural Networks
Stars: ✭ 29 (-85.13%)
Mutual labels:  compression, quantization
Numpy neural network
A neural network implemented from scratch using only numpy, including the derivation of the backpropagation formulas; fully connected, convolutional, pooling and Flatten layers built in numpy; plus image classification and network fine-tuning examples, continuously updated...
Stars: ✭ 339 (+73.85%)
Mutual labels:  cnn, dnn
NNCF
PyTorch*-based Neural Network Compression Framework for enhanced OpenVINO™ inference
Stars: ✭ 218 (+11.79%)
Mutual labels:  quantization, compression
Jacinto-AI-DevKit
Training & Quantization of embedded-friendly Deep Learning / Machine Learning / Computer Vision models
Stars: ✭ 49 (-74.87%)
Mutual labels:  cnn, quantization
Awesome Speech Recognition Speech Synthesis Papers
Automatic Speech Recognition (ASR), Speaker Verification, Speech Synthesis, Text-to-Speech (TTS), Language Modelling, Singing Voice Synthesis (SVS), Voice Conversion (VC)
Stars: ✭ 2,085 (+969.23%)
Mutual labels:  cnn, dnn
Keraspp
3-Minute Deep Learning with Keras, by the Coding Chef (코딩셰프의 3분 딥러닝, 케라스맛)
Stars: ✭ 178 (-8.72%)
Mutual labels:  cnn, dnn
Anime4K
A High-Quality Real Time Upscaler for Anime Video
Stars: ✭ 14,083 (+7122.05%)
Mutual labels:  cnn
DID-MDN
Density-aware Single Image De-raining using a Multi-stream Dense Network (CVPR 2018)
Stars: ✭ 192 (-1.54%)
Mutual labels:  cnn

LQ-Nets

By Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye and Gang Hua.

Microsoft Research Asia (MSRA).

Introduction

This repository contains the training code for LQ-Nets, introduced in our ECCV 2018 paper:

D. Zhang*, J. Yang*, D. Ye* and G. Hua. LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks. ECCV 2018. (*: equal contribution) [PDF]
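
The key idea of LQ-Nets is that the quantizers themselves are learned jointly with the network: for a K-bit quantizer, the quantization levels are the inner products between a learnable floating-point basis vector and the 2^K binary codes in {-1, +1}^K. The numpy sketch below illustrates just this forward mapping; the function name lq_quantize and the example basis are illustrative only, and the repository's actual implementation is in TensorFlow.

import itertools
import numpy as np

def lq_quantize(x, basis):
    # Quantization levels are v^T b for every binary code b in {-1, +1}^K,
    # where v is the learned K-dimensional basis described in the paper.
    x = np.asarray(x, dtype=np.float64)
    K = len(basis)
    codes = np.array(list(itertools.product((-1.0, 1.0), repeat=K)))
    levels = codes @ np.asarray(basis, dtype=np.float64)  # shape: (2**K,)
    # Snap each input value to its nearest quantization level.
    nearest = np.abs(x[..., None] - levels).argmin(axis=-1)
    return levels[nearest]

# Example: a 2-bit quantizer with an arbitrary (not learned) basis.
x = np.random.randn(3, 3)
print(lq_quantize(x, basis=[0.7, 0.3]))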

Dependencies

  • Python 2.7 or 3.3+
  • Python bindings for OpenCV
  • TensorFlow >= 1.3.0
  • TensorPack
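
On a typical setup these dependencies can be installed with pip roughly as follows (a sketch only: exact package versions depend on your environment, and TensorFlow 1.x wheels are only published for older Python releases):

pip install opencv-python
pip install "tensorflow>=1.3.0"
pip install tensorpack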

Usage

Download the ImageNet dataset and decompress it into a directory structure like:

dir/
  train/
    n01440764/
      n01440764_10026.JPEG
      ...
    ...
  val/
    ILSVRC2012_val_00000001.JPEG
    ...
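
Before launching a long training run it may help to sanity-check this layout; the snippet below is a hypothetical helper, not part of this repository:

import os

def check_imagenet_layout(root):
    # Verify that the expected train/ and val/ directories exist.
    for split in ("train", "val"):
        path = os.path.join(root, split)
        if not os.path.isdir(path):
            raise RuntimeError("missing directory: " + path)
        print("%s -> %d entries" % (split, len(os.listdir(path))))

check_imagenet_layout("/PATH/TO/IMAGENET")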

To train a quantized "pre-activation" ResNet-18, simply run

python imagenet.py --gpu 0,1,2,3 --data /PATH/TO/IMAGENET --mode preact --depth 18 --qw 1 --qa 2 --logdir_id w1a2 

After training, the resulting model will be stored in ./train_log/w1a2.

For more options, run python imagenet.py -h.
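
The same flags cover the other configurations reported below; for example, the 2/2-bit ResNet-34 would presumably be trained with

python imagenet.py --gpu 0,1,2,3 --data /PATH/TO/IMAGENET --mode preact --depth 34 --qw 2 --qa 2 --logdir_id w2a2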

Results

ImageNet Experiments

Quantizing both weights and activations

Model               Bit-width (W/A)   Top-1 (%)   Top-5 (%)
ResNet-18           1/2               62.6        84.3
ResNet-18           2/2               64.9        85.9
ResNet-18           3/3               68.2        87.9
ResNet-18           4/4               69.3        88.8
ResNet-34           1/2               66.6        86.9
ResNet-34           2/2               69.8        89.1
ResNet-34           3/3               71.9        90.2
ResNet-50           1/2               68.7        88.4
ResNet-50           2/2               71.5        90.3
ResNet-50           3/3               74.2        91.6
ResNet-50           4/4               75.1        92.4
AlexNet             1/2               55.7        78.8
AlexNet             2/2               57.4        80.1
DenseNet-121        2/2               69.6        89.1
VGG-Variant         1/2               67.1        87.6
VGG-Variant         2/2               68.8        88.6
GoogLeNet-Variant   1/2               65.6        86.4
GoogLeNet-Variant   2/2               68.2        88.1

Quantizing weights only

Model       Bit-width (W/A)   Top-1 (%)   Top-5 (%)
ResNet-18   2/32              68.0        88.0
ResNet-18   3/32              69.3        88.8
ResNet-18   4/32              70.0        89.1
ResNet-50   2/32              75.1        92.3
ResNet-50   4/32              76.4        93.1
AlexNet     2/32              60.5        82.7

More results can be found in the paper.

Citation

If you use our code or models in your research, please cite our paper with

@inproceedings{ZhangYangYeECCV2018,
    author = {Zhang, Dongqing and Yang, Jiaolong and Ye, Dongqiangzi and Hua, Gang},
    title = {LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year = {2018}
}