tensorflow / Model Optimization

License: Apache-2.0
A toolkit for Keras and TensorFlow that optimizes ML models for deployment, including quantization and pruning.

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives to or similar to Model Optimization

Awesome Ml Model Compression
Awesome machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (-83.27%)
Mutual labels:  quantization, model-compression, pruning
Aimet
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Stars: ✭ 453 (-54.33%)
Mutual labels:  quantization, pruning, compression
SSD-Pruning-and-quantization
Pruning and quantization for SSD. Model compression.
Stars: ✭ 19 (-98.08%)
Mutual labels:  compression, pruning, quantization
ATMC
[NeurIPS'2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”
Stars: ✭ 41 (-95.87%)
Mutual labels:  pruning, quantization, model-compression
Awesome Ai Infrastructures
Infrastructures™ for Machine Learning Training/Inference in Production.
Stars: ✭ 223 (-77.52%)
Mutual labels:  quantization, model-compression, pruning
Kd lib
A PyTorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization.
Stars: ✭ 173 (-82.56%)
Mutual labels:  quantization, model-compression, pruning
Micronet
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), both high-bit (>2b) methods (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) ternary and binary methods (TWN/BNN/XNOR-Net), plus 8-bit post-training quantization (PTQ) via TensorRT; (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT with fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), and dynamic shapes.
Stars: ✭ 1,232 (+24.19%)
Mutual labels:  quantization, model-compression, pruning
Nncf
PyTorch*-based Neural Network Compression Framework for enhanced OpenVINO™ inference
Stars: ✭ 218 (-78.02%)
Mutual labels:  quantization, pruning, compression
torch-model-compression
An automated toolset for analyzing and modifying the structure of PyTorch models, including a model-compression algorithm library that analyzes model structure automatically.
Stars: ✭ 126 (-87.3%)
Mutual labels:  pruning, quantization, model-compression
Paddleslim
PaddleSlim is an open-source library for deep model compression and architecture search.
Stars: ✭ 677 (-31.75%)
Mutual labels:  quantization, model-compression, pruning
libcaesium
The Caesium compression library written in Rust
Stars: ✭ 58 (-94.15%)
Mutual labels:  compression, optimization
image-optimizer
Smart image optimization
Stars: ✭ 15 (-98.49%)
Mutual labels:  compression, optimization
SViTE
[NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
Stars: ✭ 50 (-94.96%)
Mutual labels:  pruning, model-compression
Imager
Automated image compression for efficiently distributing images on the web.
Stars: ✭ 266 (-73.19%)
Mutual labels:  optimization, compression
sparsify
Easy-to-use UI for automatically sparsifying neural networks and creating sparsification recipes for better inference performance and a smaller footprint
Stars: ✭ 138 (-86.09%)
Mutual labels:  pruning, quantization
nuxt-prune-html
🔌⚡ Nuxt module to prune HTML before sending it to the browser (it removes elements matching CSS selector(s)); useful for boosting performance by serving different, slimmed-down HTML to bots/audits, with the scripts used for dynamic rendering removed
Stars: ✭ 69 (-93.04%)
Mutual labels:  optimization, pruning
Regularization-Pruning
[ICLR'21] PyTorch code for our paper "Neural Pruning via Growing Regularization"
Stars: ✭ 44 (-95.56%)
Mutual labels:  pruning, model-compression
Soft Filter Pruning
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
Stars: ✭ 291 (-70.67%)
Mutual labels:  model-compression, pruning
Filter Pruning Geometric Median
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral)
Stars: ✭ 338 (-65.93%)
Mutual labels:  model-compression, pruning
Distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
Stars: ✭ 3,760 (+279.03%)
Mutual labels:  quantization, pruning

TensorFlow Model Optimization Toolkit

The TensorFlow Model Optimization Toolkit is a suite of tools that users, both novice and advanced, can use to optimize machine learning models for deployment and execution.

Supported techniques include quantization and pruning for sparse weights, with APIs built specifically for Keras.
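
As a minimal sketch of how the two techniques are applied, the snippet below wraps a toy Keras model for quantization-aware training and for magnitude-based pruning. The model, data, and schedule values are illustrative placeholders, not anything prescribed by this repo.

```python
# A minimal sketch; the model, data, and schedule values are placeholders.
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy model and random data, purely for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
x = np.random.rand(64, 10).astype("float32")
y = np.random.rand(64, 1).astype("float32")

# Quantization-aware training: the wrapped model emulates low-precision
# arithmetic during training so the weights adapt to quantization error.
q_aware = tfmot.quantization.keras.quantize_model(model)
q_aware.compile(optimizer="adam", loss="mse")
q_aware.fit(x, y, epochs=1, verbose=0)

# Magnitude-based pruning: ramp sparsity from 0% to 50% of the weights.
pruned = tfmot.sparsity.keras.prune_low_magnitude(
    tf.keras.models.clone_model(model),
    pruning_schedule=tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.0, final_sparsity=0.5,
        begin_step=0, end_step=100),
)
pruned.compile(optimizer="adam", loss="mse")
pruned.fit(x, y, epochs=1, verbose=0,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
```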

For an overview of this project and its individual tools, the optimization gains, and our roadmap, refer to tensorflow.org/model_optimization. The website also provides various tutorials and API docs.

The toolkit provides stable Python APIs.

Installation

For installation instructions, see tensorflow.org/model_optimization/guide/install.
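
As a quick smoke test after following the guide (which installs the `tensorflow-model-optimization` pip package), the top-level import should resolve cleanly:

```python
# Smoke test: assumes the package was installed per the install guide
# (typically `pip install tensorflow-model-optimization`).
import tensorflow_model_optimization as tfmot

print(tfmot.__version__)
```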

Contribution guidelines

If you want to contribute to TensorFlow Model Optimization, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

We use GitHub issues for tracking requests and bugs.

Maintainers

Subpackage Maintainers
tfmot.clustering Arm ML Tooling
tfmot.quantization TensorFlow Model Optimization
tfmot.sparsity TensorFlow Model Optimization
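
Each subpackage surfaces as a namespace under the top-level import. The sketch below shows one entry point from the clustering subpackage on a toy model; the layer sizes and clustering parameters are illustrative only.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# The maintained subpackages map to namespaces on tfmot:
#   tfmot.clustering    - weight clustering
#   tfmot.quantization  - quantization-aware training
#   tfmot.sparsity      - magnitude-based pruning

# Illustrative example: cluster a toy model's weights into 16 clusters
# with linearly initialized centroids.
model = tf.keras.Sequential([tf.keras.layers.Dense(8, input_shape=(4,))])
clustered = tfmot.clustering.keras.cluster_weights(
    model,
    number_of_clusters=16,
    cluster_centroids_init=tfmot.clustering.keras.CentroidInitialization.LINEAR,
)
clustered.summary()
```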

Community

As part of TensorFlow, we're committed to fostering an open and welcoming environment.

  • TensorFlow Blog: Stay up to date on content from the TensorFlow team and best articles from the community.