
Top 61 model-compression open source projects

Pocketflow
An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
Bert Of Theseus
⛵️The official PyTorch implementation for "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020).
Torch Pruning
A PyTorch pruning toolkit for structured neural network pruning that maintains layer dependencies.
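For orientation, here is a minimal sketch of what structured channel pruning involves in plain PyTorch. The names and layer sizes are illustrative, not Torch-Pruning's actual API; the toolkit automates exactly this kind of edit, including the cross-layer bookkeeping.

```python
# A minimal sketch of structured (channel) pruning in plain PyTorch.
# Illustrative only -- not Torch-Pruning's API.
import torch
import torch.nn as nn

conv1 = nn.Conv2d(3, 16, 3, padding=1)
conv2 = nn.Conv2d(16, 32, 3, padding=1)

keep = 8  # hypothetical: keep the 8 strongest output channels of conv1
# Rank conv1's output channels by the L1 norm of their filters.
scores = conv1.weight.detach().abs().sum(dim=(1, 2, 3))
idx = torch.argsort(scores, descending=True)[:keep]

# Slice conv1's filters/bias, and conv2's matching *input* channels --
# this cross-layer bookkeeping is the "layer dependency" being maintained.
pruned1 = nn.Conv2d(3, keep, 3, padding=1)
pruned1.weight.data = conv1.weight.data[idx].clone()
pruned1.bias.data = conv1.bias.data[idx].clone()

pruned2 = nn.Conv2d(keep, 32, 3, padding=1)
pruned2.weight.data = conv2.weight.data[:, idx].clone()
pruned2.bias.data = conv2.bias.data.clone()

x = torch.randn(1, 3, 8, 8)
out = pruned2(torch.relu(pruned1(x)))  # shapes stay consistent end to end
```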
KD-Lib
A PyTorch knowledge distillation library for benchmarking and extending work in knowledge distillation, pruning, and quantization.
Awesome Ml Model Compression
Awesome machine learning model compression research papers, tools, and learning material.
Pruning
Code for "Co-Evolutionary Compression for Unpaired Image Translation" (ICCV 2019) and "SCOP: Scientific Control for Reliable Neural Network Pruning" (NeurIPS 2020).
Pytorch Weights pruning
A PyTorch implementation of weight pruning.
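Unstructured (per-weight) magnitude pruning is also available out of the box in PyTorch's built-in `torch.nn.utils.prune` module; the sketch below illustrates the technique itself rather than this repository's code.

```python
# Unstructured (per-weight) magnitude pruning using PyTorch's built-in
# utilities -- illustrative of the technique, not this repo's own code.
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(128, 64)
# Zero out the 30% of weights with the smallest absolute value.
prune.l1_unstructured(layer, name="weight", amount=0.3)
print(float((layer.weight == 0).float().mean()))  # ~0.3 sparsity
# Make the pruning permanent (folds the mask into the weight tensor).
prune.remove(layer, "weight")
```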
Amc Models
[ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices
Ld Net
Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling
Collaborative Distillation
PyTorch code for our CVPR'20 paper "Collaborative Distillation for Ultra-Resolution Universal Style Transfer"
Condensa
Programmable Neural Network Compression
Pretrained Language Model
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
Microexpnet
MicroExpNet: An Extremely Small and Fast Model For Expression Recognition From Frontal Face Images
Awesome Model Compression
Papers about model compression.
Tf2
An Open Source Deep Learning Inference Engine Based on FPGA
Hawq
Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM.
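The primitive such libraries build on is uniform quantization of a tensor; here is a plain-PyTorch sketch of symmetric int8 quantize/dequantize. This is not HAWQ's API (HAWQ's contribution is Hessian-aware bit-width selection on top of this primitive, which the sketch does not show).

```python
# Symmetric uniform int8 quantization of a tensor -- the basic primitive
# behind low-precision inference. Plain-PyTorch sketch, not HAWQ's API.
import torch

def quantize_int8(x: torch.Tensor):
    scale = x.abs().max() / 127.0          # map [-max, max] onto [-127, 127]
    q = torch.clamp((x / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor):
    return q.to(torch.float32) * scale

w = torch.randn(256, 256)
q, s = quantize_int8(w)
err = (dequantize(q, s) - w).abs().max()
print(f"max abs rounding error: {err:.5f}")  # bounded by ~scale/2
```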
Micronet
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), both high-bit (>2b: DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b: ternary/binary via TWN, BNN, XNOR-Net), plus 8-bit post-training quantization (PTQ, TensorRT); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group convolution structures; (4) batch-normalization fusion for quantization. Deployment: TensorRT with fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), and dynamic shapes.
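The "batch-normalization fusion" step mentioned above is a standard pre-quantization transform: the BN's affine parameters and running statistics are folded into the preceding convolution's weights and bias. A generic sketch of that formula, not micronet's own code:

```python
# Folding a BatchNorm2d into the preceding Conv2d -- the standard "BN fuse"
# step done before quantization. Generic sketch, not micronet's code.
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding, bias=True)
    std = torch.sqrt(bn.running_var + bn.eps)
    # W' = W * gamma / std   (applied per output channel)
    fused.weight.data = conv.weight.data * (bn.weight.data / std).reshape(-1, 1, 1, 1)
    b = conv.bias.data if conv.bias is not None else torch.zeros(conv.out_channels)
    # b' = (b - mean) * gamma / std + beta
    fused.bias.data = (b - bn.running_mean) * bn.weight.data / std + bn.bias.data
    return fused

conv, bn = nn.Conv2d(3, 8, 3, bias=False), nn.BatchNorm2d(8)
bn.eval()  # use running statistics, as at inference time
x = torch.randn(1, 3, 16, 16)
assert torch.allclose(fuse_conv_bn(conv, bn)(x), bn(conv(x)), atol=1e-5)
```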
Aquvitae
The Easiest Knowledge Distillation Library for Lightweight Deep Learning
Keras model compression
Model compression in Keras based on Geoffrey Hinton's logit regression (knowledge distillation) method, applied to MNIST: 16x compression at over 95% accuracy. An implementation of "Distilling the Knowledge in a Neural Network" (Hinton et al.).
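The core of Hinton-style distillation is a loss that makes the student match the teacher's temperature-softened logits alongside the usual hard-label loss. A minimal sketch in PyTorch for consistency with the other examples here (the repository itself is Keras); the T and alpha values are illustrative:

```python
# Hinton-style knowledge distillation loss: soft targets at temperature T
# plus the standard cross-entropy on hard labels. Values are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps soft-target gradients on the same scale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(32, 10, requires_grad=True)
teacher_logits = torch.randn(32, 10)
labels = torch.randint(0, 10, (32,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```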
Awesome Knowledge Distillation
Awesome Knowledge-Distillation: a categorized collection of knowledge distillation papers (2014-2021).
Awesome Pruning
A curated list of neural network pruning resources.
Compress
Compressing Representations for Self-Supervised Learning
Model Optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
Knowledge Distillation Pytorch
A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility
Channel Pruning
Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17)
Bipointnet
This project is the official implementation of our ICLR 2021 paper "BiPointNet: Binary Neural Network for Point Clouds".
Awesome Automl And Lightweight Models
A list of high-quality, up-to-date AutoML works and lightweight models, including: 1) neural architecture search; 2) lightweight structures; 3) model compression, quantization, and acceleration; 4) hyperparameter optimization; 5) automated feature engineering.
Paddleslim
PaddleSlim is an open-source library for deep model compression and architecture search.
Lightctr
A lightweight and scalable framework that combines mainstream click-through-rate (CTR) prediction algorithms on a computational DAG with a parameter-server architecture and Ring-AllReduce collective communication.
Knowledge Distillation Zoo
PyTorch implementation of various knowledge distillation (KD) methods.
Ghostnet.pytorch
[CVPR2020] GhostNet: More Features from Cheap Operations
Data Efficient Model Compression
Data Efficient Model Compression
Filter Pruning Geometric Median
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral)
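FPGM's criterion prunes the filters closest to the geometric median of a layer's filters (the most redundant ones) rather than the smallest-norm ones. A simplified sketch of that idea, scoring each filter by its total distance to all others; this is an illustration of the criterion, not the authors' code:

```python
# FPGM, simplified: filters near the geometric median of the layer are the
# most replaceable, so they are pruned. Illustration, not the authors' code.
import torch

def fpgm_prune_indices(weight: torch.Tensor, n_prune: int) -> torch.Tensor:
    # weight: [out_channels, in_channels, k, k] -> one row vector per filter
    flat = weight.detach().reshape(weight.shape[0], -1)
    dist = torch.cdist(flat, flat)            # pairwise Euclidean distances
    scores = dist.sum(dim=1)                  # low score = near the median
    return torch.argsort(scores)[:n_prune]    # filters to remove

w = torch.randn(16, 8, 3, 3)
print(fpgm_prune_indices(w, n_prune=4))  # the 4 most redundant filters
```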
Amc
[ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices
Model Compression Papers
Papers for deep neural network compression and acceleration
Soft Filter Pruning
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
SViTE
[NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
Regularization-Pruning
[ICLR'21] PyTorch code for our paper "Neural Pruning via Growing Regularization"
ESNAC
Learnable Embedding Space for Efficient Neural Architecture Compression
ATMC
[NeurIPS'2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”
Structured-Bayesian-Pruning-pytorch
PyTorch implementation of Structured Bayesian Pruning.
torch-model-compression
A toolset for automated model-structure analysis and modification of PyTorch models, including a library of model compression algorithms that analyze model structure automatically.
BitPack
BitPack is a practical tool to efficiently save ultra-low precision/mixed-precision quantized models.
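The idea behind saving ultra-low-precision models compactly is bit-packing: storing several sub-byte values per byte. Below is a NumPy sketch of the principle for unsigned 4-bit values, two per byte; it illustrates the technique only and is not BitPack's actual format or API.

```python
# Bit-packing two unsigned 4-bit weights per byte, halving storage vs. int8.
# A sketch of the principle -- not BitPack's actual format or API.
import numpy as np

def pack4(vals: np.ndarray) -> np.ndarray:
    vals = vals.astype(np.uint8)              # each value must fit in 4 bits
    if len(vals) % 2:                         # pad to an even count
        vals = np.append(vals, 0)
    return (vals[0::2] << 4) | vals[1::2]     # high nibble | low nibble

def unpack4(packed: np.ndarray, n: int) -> np.ndarray:
    out = np.empty(len(packed) * 2, dtype=np.uint8)
    out[0::2] = packed >> 4
    out[1::2] = packed & 0x0F
    return out[:n]

q = np.array([3, 15, 7, 0, 9], dtype=np.uint8)   # 4-bit quantized weights
packed = pack4(q)                                # 3 bytes instead of 5
assert np.array_equal(unpack4(packed, len(q)), q)
```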
ZAQ-code
CVPR 2021 : Zero-shot Adversarial Quantization (ZAQ)
Auto-Compression
Automatic DNN compression tool with various model compression and neural architecture search techniques