
PaddlePaddle / Paddleslim

License: Apache-2.0
PaddleSlim is an open-source library for deep model compression and architecture search.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Paddleslim

Awesome Automl And Lightweight Models
A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hyperparameter Optimization, 5.) Automated Feature Engineering.
Stars: ✭ 691 (+2.07%)
Mutual labels:  nas, neural-architecture-search, quantization, model-compression
ATMC
[NeurIPS'2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”
Stars: ✭ 41 (-93.94%)
Mutual labels:  pruning, quantization, model-compression
Micronet
micronet, a model compression and deploy lib. compression: 1、quantization: quantization-aware-training(QAT), High-Bit(>2b)(DoReFa/Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference)、Low-Bit(≤2b)/Ternary and Binary(TWN/BNN/XNOR-Net); post-training-quantization(PTQ), 8-bit(tensorrt); 2、 pruning: normal、regular and group convolutional channel pruning; 3、 group convolution structure; 4、batch-normalization fuse for quantization. deploy: tensorrt, fp32/fp16/int8(ptq-calibration)、op-adapt(upsample)、dynamic_shape
Stars: ✭ 1,232 (+81.98%)
Mutual labels:  quantization, model-compression, pruning
Model Optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
Stars: ✭ 992 (+46.53%)
Mutual labels:  quantization, model-compression, pruning
Awesome Ml Model Compression
Awesome machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (-75.48%)
Mutual labels:  quantization, model-compression, pruning
Kd lib
A Pytorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization.
Stars: ✭ 173 (-74.45%)
Mutual labels:  quantization, model-compression, pruning
Awesome Ai Infrastructures
Infrastructures™ for Machine Learning Training/Inference in Production.
Stars: ✭ 223 (-67.06%)
Mutual labels:  quantization, model-compression, pruning
Nni
An open source AutoML toolkit to automate the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
Stars: ✭ 10,698 (+1480.21%)
Mutual labels:  nas, neural-architecture-search, model-compression
torch-model-compression
An automated toolkit for analyzing and modifying the structure of PyTorch models, including a model-compression algorithm library built on automatic model-structure analysis.
Stars: ✭ 126 (-81.39%)
Mutual labels:  pruning, quantization, model-compression
bert-squeeze
🛠️ Tools for Transformers compression using PyTorch Lightning ⚡
Stars: ✭ 56 (-91.73%)
Mutual labels:  pruning, quantization
BossNAS
(ICCV 2021) BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search
Stars: ✭ 125 (-81.54%)
Mutual labels:  nas, neural-architecture-search
mmrazor
OpenMMLab Model Compression Toolbox and Benchmark.
Stars: ✭ 644 (-4.87%)
Mutual labels:  pruning, nas
ESNAC
Learnable Embedding Space for Efficient Neural Architecture Compression
Stars: ✭ 27 (-96.01%)
Mutual labels:  model-compression, neural-architecture-search
TF-NAS
TF-NAS: Rethinking Three Search Freedoms of Latency-Constrained Differentiable Neural Architecture Search (ECCV2020)
Stars: ✭ 66 (-90.25%)
Mutual labels:  nas, neural-architecture-search
deep-learning-roadmap
my own deep learning mastery roadmap
Stars: ✭ 40 (-94.09%)
Mutual labels:  nas, neural-architecture-search
SSD-Pruning-and-quantization
Pruning and quantization for SSD. Model compression.
Stars: ✭ 19 (-97.19%)
Mutual labels:  pruning, quantization
nas-encodings
Encodings for neural architecture search
Stars: ✭ 29 (-95.72%)
Mutual labels:  nas, neural-architecture-search
Neural-Architecture-Search
This repo is about NAS
Stars: ✭ 26 (-96.16%)
Mutual labels:  nas, neural-architecture-search
Regularization-Pruning
[ICLR'21] PyTorch code for our paper "Neural Pruning via Growing Regularization"
Stars: ✭ 44 (-93.5%)
Mutual labels:  pruning, model-compression
SViTE
[NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
Stars: ✭ 50 (-92.61%)
Mutual labels:  pruning, model-compression

PaddleSlim


Introduction

PaddleSlim is a toolkit focused on deep learning model compression. It provides compression strategies such as pruning, quantization, distillation, and neural architecture search, helping users quickly shrink their models.

Version Alignment

PaddleSlim   PaddlePaddle   PaddleLite   Notes
1.0.1        <= 1.7         2.7          Supports static graph
1.1.1        1.8            2.7          Supports static graph
1.2.0        2.0 Beta/RC    2.8          Supports static graph
2.0.0        2.0            2.8          Supports dynamic and static graph

Installation

Install the latest version:

pip install paddleslim -i https://pypi.tuna.tsinghua.edu.cn/simple

Install a specific version:

pip install paddleslim==2.0.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
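
A minimal sanity check after installation (the PaddlePaddle version printed should match the alignment table above):

import paddle       # PaddlePaddle itself is installed separately
import paddleslim   # succeeds only if the install above worked

print(paddle.__version__)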

Recent Updates

  • 2021.2.5: Released v2.0.0, adding dynamic graph support and OFA (once-for-all) compression, and improving the pruning features.
  • 2020.9.16: Released v1.2.0, adding PACT quantization-aware training and DML (mutual distillation), fixing several pruning bugs, strengthening pruning support for depthwise_conv2d, and improving the usability and flexibility of the pruning and quantization APIs.

For more details, see the release notes.

Feature Overview

PaddleSlim supports the following features and also lets users customize quantization, pruning, and other strategies.

  • Quantization
  • Pruning
  • NAS (Neural Architecture Search)
  • Distillation

Notes:

  • In the detailed feature table, * marks features supported only for static graphs and ** marks features supported only for dynamic graphs.
  • Sensitivity-based pruning determines the pruning ratio of each convolutional layer from a per-layer sensitivity analysis; it must be used together with another pruning method (see the sketch after these notes).
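
The idea behind sensitivity-based pruning can be sketched in a few lines of plain Python. This is an illustrative sketch, not PaddleSlim's API: evaluate and prune_conv_layer are hypothetical helpers standing in for a validation pass and a single-layer filter-pruning step.

import copy

def select_pruning_ratios(model, conv_layer_names, evaluate, prune_conv_layer,
                          trial_ratios=(0.1, 0.2, 0.3, 0.4, 0.5),
                          max_acc_drop=0.01):
    """For each conv layer, pick the largest pruning ratio whose accuracy
    drop stays below max_acc_drop. evaluate(model) -> accuracy and
    prune_conv_layer(model, name, ratio) are hypothetical helpers."""
    baseline = evaluate(model)                    # accuracy of the unpruned model
    chosen = {}
    for name in conv_layer_names:
        best = 0.0
        for ratio in trial_ratios:
            trial = copy.deepcopy(model)          # prune a throwaway copy
            prune_conv_layer(trial, name, ratio)  # prune only this layer
            drop = baseline - evaluate(trial)     # sensitivity at this ratio
            if drop <= max_acc_drop:
                best = max(best, ratio)
        chosen[name] = best                       # largest tolerable ratio
    return chosen

The per-layer ratios chosen this way are then handed to a concrete pruning algorithm, which is why sensitivity analysis is always combined with another pruning method.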

Results

PaddleSlim has compressed models for typical computer vision and natural language processing tasks and measured the resulting speedups on NVIDIA GPUs, ARM chips, and other devices. Some results are shown below; detailed recipes are given in the CV and NLP model compression sections that follow.


Table 1: Compression and speedup results for selected models

Notes:

  • YOLOv3: 3.55x speedup on a Snapdragon 855 (SD855) mobile device.
  • PP-OCR: model size reduced from 8.9 MB to 2.9 MB, with a 1.27x speedup on SD855.
  • BERT: parameters reduced from 110M to 80M with improved accuracy, and a 1.47x FP16 speedup on a Tesla T4 GPU.

Documentation and Tutorials

Quick Start

The quick-start tutorials are simple examples that run quickly on the CIFAR10 dataset. If you are a user of Paddle's official model suites, use the CV Model Compression or NLP Model Compression tutorials below instead; the sketch below gives a flavor of the dynamic-graph API.
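
For a rough flavor of the dynamic-graph quick start, the sketch below prunes filters of a small vision model. The class L1NormFilterPruner, the prune_vars call, and the parameter name "conv2d_0.w_0" are assumptions recalled from the PaddleSlim 2.x documentation, not verified signatures; consult the API documentation for the exact names.

import paddle
from paddle.vision.models import mobilenet_v1
from paddleslim.dygraph import L1NormFilterPruner  # assumed 2.x dygraph import path

net = mobilenet_v1(pretrained=False)

# The input shape is used to trace the network and estimate FLOPs.
pruner = L1NormFilterPruner(net, [1, 3, 224, 224])

# Prune 30% of the output filters of one conv layer (parameter name is illustrative).
pruner.prune_vars({"conv2d_0.w_0": 0.3}, axis=0)

# The pruned network is then fine-tuned with an ordinary Paddle training loop.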

Advanced Tutorials

The advanced tutorials describe every step of each workflow in detail and help you transfer the corresponding method to your own models.

Inference Deployment

CV Model Compression

These tutorials compress models from Paddle's official model suites. If you are not a model-suite user, the quick-start and advanced tutorials above are recommended instead.

NLP Model Compression

API Documentation

FAQ

License

This project is released under the Apache 2.0 license.

Contributing

Contributions to PaddleSlim are very welcome, and your feedback is greatly appreciated.

Join the PaddleSlim Technical Discussion Group

Follow the WeChat official account "AIDigest" and leave the note "压缩" (compression); a PaddlePaddle team member will add you to the WeChat discussion group.
