Model Optimization: A toolkit for optimizing ML models built with Keras and TensorFlow for deployment, including quantization and pruning.
Stars: ✭ 992 (+413.99%)
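A minimal sketch of what magnitude pruning with this toolkit looks like, assuming a small illustrative tf.keras classifier (the model and hyperparameters here are made up for the example):

```python
# Magnitude-pruning sketch with the TensorFlow Model Optimization toolkit.
# The toy model and the sparsity schedule values are illustrative assumptions.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.5,   # ramp up to 50% sparsity
    begin_step=0, end_step=1000)
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
    model, pruning_schedule=schedule)
pruned_model.compile(optimizer="adam",
                     loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])
# Training must include this callback so the pruning masks are updated:
callbacks = [tfmot.sparsity.keras.UpdatePruningStep()]
# pruned_model.fit(x_train, y_train, epochs=2, callbacks=callbacks)
```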
DS-Net: Dynamic Slimmable Network (CVPR 2021, Oral)
Stars: ✭ 204 (+5.7%)
Filter Pruning Geometric Median: Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral)
Stars: ✭ 338 (+75.13%)
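The geometric-median criterion marks the filters closest to the geometric median of their layer as redundant, approximating "closest" by each filter's summed distance to all the others. A simplified NumPy sketch of that ranking (an illustration, not the repository's code):

```python
# Simplified FPGM ranking: filters with the smallest summed distance to all
# other filters in the layer lie near the geometric median and are pruned first.
import numpy as np

def fpgm_prune_indices(weight, num_prune):
    """weight: conv weight of shape (out_channels, in_channels, kH, kW)."""
    filters = weight.reshape(weight.shape[0], -1)              # one row per filter
    # Pairwise Euclidean distances between all filters.
    dists = np.linalg.norm(filters[:, None, :] - filters[None, :, :], axis=-1)
    redundancy = dists.sum(axis=1)                             # distance to the rest
    return np.argsort(redundancy)[:num_prune]                  # closest to the median

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 8, 3, 3))                             # toy conv layer
print(fpgm_prune_indices(w, num_prune=4))
```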
Micronet: A model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) at high bit-widths (>2b, e.g. DoReFa and "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low bit-widths (≤2b, ternary and binary: TWN/BNN/XNOR-Net), plus 8-bit post-training quantization (PTQ, via TensorRT); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structures; (4) batch-normalization fusing for quantization. Deployment: TensorRT with FP32/FP16/INT8 (PTQ calibration), op adaptation (upsample), and dynamic shapes.
Stars: ✭ 1,232 (+538.34%)
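The quantization-aware-training methods listed above all rest on fake quantization with a straight-through estimator (STE): quantize in the forward pass, pretend the quantizer is the identity in the backward pass. A generic PyTorch sketch of that building block (not Micronet's API):

```python
# Generic QAT building block: fake-quantize in the forward pass, pass gradients
# straight through in the backward pass. Illustrative only; Micronet's own
# quantizers (DoReFa, TWN, BNN, XNOR-Net) are more elaborate.
import torch

class FakeQuantSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None   # STE: quantizer treated as identity

x = torch.randn(4, requires_grad=True)
y = FakeQuantSTE.apply(x, 4).sum()
y.backward()
print(x.grad)   # all ones: gradients flowed straight through the quantizer
```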
Awesome Pruning: A curated list of neural network pruning resources.
Stars: ✭ 1,017 (+426.94%)
KD-Lib: A PyTorch library for benchmarking and extending work in knowledge distillation, pruning, and quantization.
Stars: ✭ 173 (-10.36%)
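The baseline objective such libraries benchmark is Hinton's distillation loss: a temperature-softened KL term between teacher and student logits mixed with the usual hard-label cross-entropy. A minimal generic PyTorch sketch (not KD-Lib's API):

```python
# Classic knowledge-distillation loss: soften both logit sets with temperature
# T, penalize their KL divergence, and mix with hard-label cross-entropy.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)   # T^2 keeps gradient scale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

s = torch.randn(8, 10, requires_grad=True)   # student logits (toy)
t = torch.randn(8, 10)                       # teacher logits (toy)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))
```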
Awesome ML Model Compression: Machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (-13.99%)
PaddleSlim: An open-source library for deep model compression and architecture search.
Stars: ✭ 677 (+250.78%)
SViTE: [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang.
Stars: ✭ 50 (-74.09%)
ATMC: [NeurIPS'19] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, "Model Compression with Adversarial Robustness: A Unified Optimization Framework".
Stars: ✭ 41 (-78.76%)
Regularization-Pruning: [ICLR'21] PyTorch code for the paper "Neural Pruning via Growing Regularization".
Stars: ✭ 44 (-77.2%)
Soft Filter Pruning: Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks.
Stars: ✭ 291 (+50.78%)
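The "soft" part is that pruned filters stay trainable: at the end of each epoch the lowest-norm filters are zeroed, but they keep receiving gradient updates and may recover. A simplified PyTorch sketch of one pruning step (an illustration of the idea, not the paper's code):

```python
# Simplified soft-filter-pruning step: zero the filters with the smallest L2
# norm in each conv layer, but leave them trainable so they can recover later.
import torch
import torch.nn as nn

@torch.no_grad()
def soft_prune(model, prune_ratio=0.3):
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            norms = m.weight.flatten(1).norm(p=2, dim=1)   # one norm per filter
            k = int(prune_ratio * norms.numel())
            if k > 0:
                idx = norms.argsort()[:k]                  # weakest filters
                m.weight[idx] = 0.0                        # zeroed, not removed

net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
soft_prune(net, 0.3)   # call at the end of each training epoch
```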
GraSP: Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" (https://openreview.net/pdf?id=SkgsACVKPH).
Stars: ✭ 58 (-69.95%)
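GraSP scores weights by their effect on gradient flow at initialization, which needs a Hessian-gradient product; the standard trick is a double backward pass. A condensed, simplified sketch of that computation (sign and selection conventions are the paper's; consult the repository for the full method):

```python
# Condensed GraSP-style score: gradient flow is g^T g, and the effect of
# removing a weight involves the Hessian-gradient product Hg, obtained here
# with a double backward pass. Simplified illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 10))
x, y = torch.randn(16, 20), torch.randint(0, 10, (16,))
weights = [p for p in model.parameters() if p.dim() > 1]

loss = nn.functional.cross_entropy(model(x), y)
grads = torch.autograd.grad(loss, weights, create_graph=True)
# d/dw of (g . stop_grad(g)) yields the Hessian-gradient product Hg.
g_dot = sum((g * g.detach()).sum() for g in grads)
Hg = torch.autograd.grad(g_dot, weights)
scores = [-(w * h) for w, h in zip(weights, Hg)]   # per-weight score
print([s.shape for s in scores])
```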
Condensa: Programmable neural network compression.
Stars: ✭ 129 (-33.16%)
Delve: A training and layer-saturation monitor for PyTorch and Keras models.
Stars: ✭ 49 (-74.61%)
Pruning: Code for "Co-Evolutionary Compression for Unpaired Image Translation" (ICCV 2019) and "SCOP: Scientific Control for Reliable Neural Network Pruning" (NeurIPS 2020).
Stars: ✭ 159 (-17.62%)
MicroExpNet: An extremely small and fast model for expression recognition from frontal face images.
Stars: ✭ 121 (-37.31%)
Compress: Compressing Representations for Self-Supervised Learning.
Stars: ✭ 43 (-77.72%)
Channel Pruning: Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17).
Stars: ✭ 979 (+407.25%)
CEN: [NeurIPS 2020] Code release for the paper "Deep Multimodal Fusion by Channel Exchanging" (in PyTorch).
Stars: ✭ 112 (-41.97%)
BiPointNet: Official implementation of the ICLR 2021 paper "BiPointNet: Binary Neural Network for Point Clouds".
Stars: ✭ 27 (-86.01%)
PyTorch Pruning: PyTorch implementation of "Pruning Convolutional Neural Networks for Resource Efficient Inference" (arXiv:1611.06440).
Stars: ✭ 740 (+283.42%)
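The paper's criterion ranks feature maps by a first-order Taylor expansion of the loss change from removing the map, i.e. the average of |activation × gradient|. A minimal PyTorch sketch of computing that score (illustrative, not the repository's implementation; the loss here is a stand-in):

```python
# Taylor-expansion criterion from arXiv:1611.06440: a feature map's importance
# is |activation * gradient| averaged over its elements. A hook is how you
# would grab the activation from inside a larger pretrained network.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, 3, padding=1)
acts = {}
conv.register_forward_hook(lambda m, inp, out: acts.update(out=out))

x = torch.randn(4, 3, 16, 16)
out = conv(x)
loss = out.pow(2).mean()             # stand-in for a real task loss
a = acts["out"]
a.retain_grad()                      # keep the intermediate activation's grad
loss.backward()
# Per-channel importance: mean |a * dL/da| over batch and spatial dims.
importance = (a * a.grad).abs().mean(dim=(0, 2, 3))
print(importance)                    # one Taylor score per feature map
```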
JFastText: Java interface for fastText.
Stars: ✭ 193 (+0%)
AMC Models: [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices.
Stars: ✭ 154 (-20.21%)
Filter Grafting: Filter Grafting for Deep Neural Networks (CVPR 2020).
Stars: ✭ 110 (-43.01%)
Awesome AutoML and Lightweight Models: A list of high-quality, up-to-date AutoML works and lightweight models, covering (1) neural architecture search, (2) lightweight structures, (3) model compression, quantization, and acceleration, (4) hyperparameter optimization, and (5) automated feature engineering.
Stars: ✭ 691 (+258.03%)
Keras Model Compression: Model compression based on Geoffrey Hinton's logit distillation method in Keras, applied to MNIST: 16x compression at over 95% accuracy. An implementation of "Distilling the Knowledge in a Neural Network" (Geoffrey Hinton et al.).
Stars: ✭ 59 (-69.43%)
Collaborative Distillation: PyTorch code for the CVPR'20 paper "Collaborative Distillation for Ultra-Resolution Universal Style Transfer".
Stars: ✭ 138 (-28.5%)
NTagger: Reference PyTorch code for named entity tagging.
Stars: ✭ 58 (-69.95%)
Pretrained Language Model: Pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab.
Stars: ✭ 2,033 (+953.37%)
Knowledge Distillation PyTorch: A flexible PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments.
Stars: ✭ 986 (+410.88%)
TF-Keras Surgeon: Pruning and other network surgery for trained tf.keras models.
Stars: ✭ 25 (-87.05%)
TF2: An open-source deep learning inference engine based on FPGA.
Stars: ✭ 113 (-41.45%)
Awesome EMDL: Embedded and mobile deep learning research resources.
Stars: ✭ 554 (+187.05%)
GhostNet: CV backbones including GhostNet, TinyNet, and TNT, developed by Huawei Noah's Ark Lab.
Stars: ✭ 1,744 (+803.63%)
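The Ghost module underlying these backbones produces a few "intrinsic" feature maps with an ordinary convolution and derives the remaining "ghost" maps with a cheap depthwise convolution. A compact PyTorch sketch of that structure (simplified from the paper, not the repository's code):

```python
# Compact Ghost module sketch (GhostNet): a 1x1 conv produces intrinsic feature
# maps; a cheap depthwise conv derives "ghost" maps; both are concatenated.
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        intrinsic = out_ch // ratio                # the rest come cheaply
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, intrinsic, 1, bias=False),
            nn.BatchNorm2d(intrinsic), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(intrinsic, out_ch - intrinsic, dw_kernel,
                      padding=dw_kernel // 2, groups=intrinsic, bias=False),
            nn.BatchNorm2d(out_ch - intrinsic), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

print(GhostModule(16, 32)(torch.randn(2, 16, 8, 8)).shape)  # (2, 32, 8, 8)
```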
AIMET: A library that provides advanced quantization and compression techniques for trained neural network models.
Stars: ✭ 453 (+134.72%)
LightCTR: A lightweight and scalable framework that combines mainstream click-through-rate (CTR) prediction algorithms with a computational DAG, the Parameter Server paradigm, and Ring-AllReduce collective communication.
Stars: ✭ 644 (+233.68%)
HAWQ: A quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM.
Stars: ✭ 108 (-44.04%)
GhostNet.pytorch: [CVPR 2020] GhostNet: More Features from Cheap Operations.
Stars: ✭ 440 (+127.98%)
NNI: An open-source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyperparameter tuning.
Stars: ✭ 10,698 (+5443.01%)
HRank: PyTorch implementation of the CVPR 2020 (Oral) paper "HRank: Filter Pruning using High-Rank Feature Map".
Stars: ✭ 164 (-15.03%)
LD-Net: Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling.
Stars: ✭ 148 (-23.32%)
NeuronBlocks: NLP DNN toolkit - building your NLP DNN models like playing Lego.
Stars: ✭ 1,356 (+602.59%)
Distiller: Neural Network Distiller by Intel AI Lab, a Python package for neural network compression research. https://intellabs.github.io/distiller
Stars: ✭ 3,760 (+1848.19%)
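Distiller drives techniques like these through YAML compression schedules; for orientation, the simplest of them, one-shot magnitude pruning, can also be sketched with PyTorch's built-in torch.nn.utils.prune (shown here instead of Distiller's own API):

```python
# One-shot magnitude pruning with PyTorch's built-in utilities, as a baseline
# for what Distiller's schedules automate. Not Distiller's API.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)  # drop smallest 50%
        prune.remove(module, "weight")   # bake the mask into the weight tensor

zeros = sum((m.weight == 0).sum().item()
            for m in model.modules() if isinstance(m, nn.Linear))
print("zeroed weights:", zeros)
```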
Keras Surgeon: Pruning and other network surgery for trained Keras models.
Stars: ✭ 339 (+75.65%)
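Its central operation, delete_channels, removes chosen filters from a trained layer and rebuilds the downstream graph. A hedged usage sketch based on the documented API (exact imports vary by version; the tf.keras fork listed above may be needed for tf.keras models):

```python
# Sketch of Keras Surgeon's documented delete_channels operation: remove
# selected filters from a trained layer and let the library rebuild the model.
# Based on the project's documented API; details may differ across versions.
from tensorflow import keras
from kerassurgeon.operations import delete_channels

model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    keras.layers.Flatten(),
    keras.layers.Dense(10),
])
# Remove filters 0 and 3 from the conv layer; downstream shapes are fixed up.
smaller = delete_channels(model, model.layers[0], channels=[0, 3])
print(smaller.layers[0].filters)   # 14 filters should remain
```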
YOLOv3: YOLOv3 implemented in PyTorch.
Stars: ✭ 142 (-26.42%)
Adventures in TensorFlow Lite: Notebooks showing how to use TensorFlow Lite to quantize deep neural networks.
Stars: ✭ 79 (-59.07%)
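The basic post-training quantization flow the notebooks cover reduces to the TFLite converter's default optimization flag; a minimal sketch with a toy model:

```python
# Minimal post-training quantization with the TFLite converter: the DEFAULT
# optimization flag quantizes weights at conversion time. Toy model for illustration.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
tflite_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```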
AMC: [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices.
Stars: ✭ 298 (+54.4%)