
DwangoMediaVillage / Keras_compressor

Model Compression CLI Tool for Keras.

Programming Languages

Python

Projects that are alternatives to, or similar to, Keras_compressor

Awesome Knowledge Distillation
Awesome Knowledge-Distillation: knowledge distillation papers (2014-2021), organized by category.
Stars: ✭ 1,031 (+544.38%)
Mutual labels:  model-compression
Tf2
An Open Source Deep Learning Inference Engine Based on FPGA
Stars: ✭ 113 (-29.37%)
Mutual labels:  model-compression
Yolov3
YOLOv3 implemented in PyTorch
Stars: ✭ 142 (-11.25%)
Mutual labels:  model-compression
Aquvitae
The Easiest Knowledge Distillation Library for Lightweight Deep Learning
Stars: ✭ 71 (-55.62%)
Mutual labels:  model-compression
Ghostnet
CV backbones including GhostNet, TinyNet and TNT, developed by Huawei Noah's Ark Lab.
Stars: ✭ 1,744 (+990%)
Mutual labels:  model-compression
Microexpnet
MicroExpNet: An Extremely Small and Fast Model For Expression Recognition From Frontal Face Images
Stars: ✭ 121 (-24.37%)
Mutual labels:  model-compression
Compress
Compressing Representations for Self-Supervised Learning
Stars: ✭ 43 (-73.12%)
Mutual labels:  model-compression
Pytorch Weights pruning
PyTorch Implementation of Weights Pruning
Stars: ✭ 158 (-1.25%)
Mutual labels:  model-compression
Hawq
Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM.
Stars: ✭ 108 (-32.5%)
Mutual labels:  model-compression
Collaborative Distillation
PyTorch code for our CVPR'20 paper "Collaborative Distillation for Ultra-Resolution Universal Style Transfer"
Stars: ✭ 138 (-13.75%)
Mutual labels:  model-compression
Micronet
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), with high-bit (>2b) methods (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b)/ternary/binary methods (TWN/BNN/XNOR-Net), plus post-training quantization (PTQ, 8-bit, TensorRT); (2) pruning: normal, regular, and group-convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusing for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.
Stars: ✭ 1,232 (+670%)
Mutual labels:  model-compression
Nni
An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
Stars: ✭ 10,698 (+6586.25%)
Mutual labels:  model-compression
Pretrained Language Model
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
Stars: ✭ 2,033 (+1170.63%)
Mutual labels:  model-compression
Keras model compression
Model compression in Keras based on Geoffrey Hinton's knowledge distillation method, applied to MNIST: 16x compression at over 95% accuracy. An implementation of "Distilling the Knowledge in a Neural Network" (Hinton et al.).
Stars: ✭ 59 (-63.12%)
Mutual labels:  model-compression
Ld Net
Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling
Stars: ✭ 148 (-7.5%)
Mutual labels:  model-compression
Awesome Pruning
A curated list of neural network pruning resources.
Stars: ✭ 1,017 (+535.63%)
Mutual labels:  model-compression
Awesome Model Compression
papers about model compression
Stars: ✭ 119 (-25.62%)
Mutual labels:  model-compression
Pruning
Code for "Co-Evolutionary Compression for Unpaired Image Translation" (ICCV 2019) and "SCOP: Scientific Control for Reliable Neural Network Pruning" (NeurIPS 2020).
Stars: ✭ 159 (-0.62%)
Mutual labels:  model-compression
Amc Models
[ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices
Stars: ✭ 154 (-3.75%)
Mutual labels:  model-compression
Condensa
Programmable Neural Network Compression
Stars: ✭ 129 (-19.37%)
Mutual labels:  model-compression

keras_compressor

Model compression CLI tool for Keras.

How to use it

Requirements

  • Python 3.5, 3.6
  • Keras
    • We tested on Keras 2.0.3 (TensorFlow backend)

Install

$ git clone ${this repository}
$ cd ./keras_compressor
$ pip install .

Compress

Simple example:

$ keras-compressor.py model.h5 compressed.h5

With the acceptable-error parameter:

$ keras-compressor.py --error 0.001 model.h5 compressed.h5
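
The CLI is a thin wrapper around a Python API, so you can also compress a model inside your own script. A minimal sketch, assuming the `compress` entry point lives in `keras_compressor.compressor` and takes an `acceptable_error` keyword mirroring `--error` (check the installed source, e.g. `example/mnist/compress.py`, for the exact signature):

from keras.models import load_model
from keras_compressor.compressor import compress  # assumed import path

model = load_model('model.h5')
# acceptable_error plays the role of the CLI's --error flag (assumed keyword)
model = compress(model, acceptable_error=0.1)
model.save('compressed.h5')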

Help

$ keras-compressor.py --help
Using TensorFlow backend.
usage: keras-compressor.py [-h] [--error 0.1]
                           [--log-level {CRITICAL,ERROR,WARNING,INFO,DEBUG}]
                           model.h5 compressed.h5

compress keras model

positional arguments:
  model.h5              target model, whose loss is specified by
                        `model.compile()`.
  compressed.h5         compressed model path

optional arguments:
  -h, --help            show this help message and exit
  --error 0.1           layer-wise acceptable error. If this value is larger,
                        compressed model will be less accurate and achieve
                        better compression rate. Default: 0.1
  --log-level {CRITICAL,ERROR,WARNING,INFO,DEBUG}
                        log level. Default: INFO

How it compresses

  • low-rank approximation
    • with SVD (for matrices, e.g. Dense weights)
    • with Tucker decomposition (for tensors, e.g. Conv2D kernels)
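
To see how the acceptable error drives the factorization, here is a minimal NumPy sketch of the SVD case: the weight matrix of a Dense layer is split into two low-rank factors, keeping the smallest rank whose relative reconstruction error stays within the threshold. The rank-selection rule below is illustrative and not necessarily the exact criterion keras_compressor uses:

import numpy as np

def factorize_dense(W, acceptable_error=0.1):
    # Truncated SVD: W (in x out) ~= U_k @ V_k, with U_k (in x k), V_k (k x out).
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    total = np.linalg.norm(s)  # Frobenius norm of W
    for k in range(1, len(s) + 1):
        # The discarded singular values bound the relative reconstruction error.
        if np.linalg.norm(s[k:]) / total <= acceptable_error:
            break
    return U[:, :k] * s[:k], Vt[:k, :]

W = np.random.randn(512, 256)
U_k, V_k = factorize_dense(W)
# A Dense layer then becomes two stacked Dense layers: the parameter count
# drops from in*out to k*(in + out) whenever k is small enough.
print(W.shape, '->', U_k.shape, '+', V_k.shape)
print('relative error:', np.linalg.norm(W - U_k @ V_k) / np.linalg.norm(W))

The Tucker case is the tensor analogue: a Conv2D kernel is decomposed into a small core tensor plus factor matrices along the input- and output-channel modes.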

Examples

In the example directory, you will find compression of VGG-like models trained on the MNIST and CIFAR10 datasets.

$ cd ./keras_compressor/example/mnist/

$ python train.py
-> outputs non-compressed model `model_raw.h5`

$ python compress.py
-> outputs compressed model `model_compressed.h5` from `model_raw.h5`

$ python finetune.py
-> outputs finetuned and compressed model `model_finetuned.h5` from `model_compressed.h5`

$ python evaluate.py model_raw.h5
$ python evaluate.py model_compressed.h5
$ python evaluate.py model_finetuned.h5
-> outputs the test accuracy and the number of model parameters
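
For reference, the evaluate step amounts to something like the following sketch; the actual `evaluate.py` in the repository is authoritative, and loading a compressed model may additionally require the package's custom factorized layers to be passed via `custom_objects`:

import sys

from keras.datasets import mnist
from keras.models import load_model
from keras.utils import to_categorical

model = load_model(sys.argv[1])  # e.g. model_raw.h5

# Assumes a channels-last MNIST model with 28x28x1 inputs.
(_, _), (X_test, y_test) = mnist.load_data()
X_test = X_test.reshape(-1, 28, 28, 1).astype('float32') / 255
y_test = to_categorical(y_test, 10)

loss, acc = model.evaluate(X_test, y_test, verbose=0)
print('test accuracy:', acc)
print('parameters:', model.count_params())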