ZAQ-code: CVPR 2021, Zero-shot Adversarial Quantization (ZAQ)
Stars: ✭ 59 (-97.1%)
Mutual labels: quantization, model-compression
Tf2: An Open-Source Deep Learning Inference Engine Based on FPGA
Stars: ✭ 113 (-94.44%)
Mutual labels: model-compression, quantization
BitPack: A practical tool to efficiently save ultra-low-precision/mixed-precision quantized models.
Stars: ✭ 36 (-98.23%)
Mutual labels: quantization, model-compression
Awesome ML Model Compression: Awesome machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (-91.83%)
Mutual labels: model-compression, quantization
micronet: A model compression and deployment library. Compression: 1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b)/ternary and binary (TWN/BNN/XNOR-Net), plus post-training quantization (PTQ), 8-bit (TensorRT); 2) pruning: normal, regular, and group convolutional channel pruning; 3) group convolution structure; 4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.
Stars: ✭ 1,232 (-39.4%)
Mutual labels: model-compression, quantization
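Several of the libraries in this list implement 8-bit post-training quantization (PTQ). As a rough, framework-agnostic sketch of the idea (the `quantize_uint8` helper below is hypothetical, not any listed library's actual API), uniform affine PTQ maps floats to 8-bit integers through a scale and zero point:

```python
import numpy as np

def quantize_uint8(x):
    """Uniform affine post-training quantization of a float tensor to 8 bits.

    Illustrative sketch only; real toolkits calibrate scale/zero-point
    from activation statistics rather than raw min/max of one tensor.
    """
    rng = float(x.max() - x.min())
    scale = rng / 255.0 if rng > 0 else 1.0
    zero_point = np.round(-x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map 8-bit codes back to approximate float values."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
q, s, z = quantize_uint8(x)
x_hat = dequantize(q, s, z)
# reconstruction error is bounded by one quantization step (the scale)
```

The round-trip error per element stays within one quantization step, which is why 8-bit PTQ usually preserves accuracy with no retraining.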
Awesome AI Infrastructures: Infrastructures™ for Machine Learning Training/Inference in Production.
Stars: ✭ 223 (-89.03%)
Mutual labels: model-compression, quantization
neural-compressor: Intel® Neural Compressor (formerly Intel® Low Precision Optimization Tool), which aims to provide unified APIs for network compression technologies, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks in pursuit of optimal inference performance.
Stars: ✭ 666 (-67.24%)
Mutual labels: quantization, knowledge-distillation
Kd lib: A PyTorch knowledge distillation library for benchmarking and extending works in the domains of knowledge distillation, pruning, and quantization.
Stars: ✭ 173 (-91.49%)
Mutual labels: model-compression, quantization
HAWQ: A quantization library for PyTorch that supports low-precision and mixed-precision quantization, with hardware implementation through TVM.
Stars: ✭ 108 (-94.69%)
Mutual labels: model-compression, quantization
Efficient-Computing
Stars: ✭ 474 (-76.68%)
Mutual labels: knowledge-distillation, model-compression
torch-model-compression: An automated model-structure analysis and modification toolset for PyTorch models, including a model-compression algorithm library built on automatic structure analysis.
Stars: ✭ 126 (-93.8%)
Mutual labels: quantization, model-compression
Awesome AutoML and Lightweight Models: A list of high-quality (newest) AutoML works and lightweight models, including 1) neural architecture search, 2) lightweight structures, 3) model compression, quantization, and acceleration, 4) hyperparameter optimization, and 5) automated feature engineering.
Stars: ✭ 691 (-66.01%)
Mutual labels: model-compression, quantization
ATMC: [NeurIPS 2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, "Model Compression with Adversarial Robustness: A Unified Optimization Framework"
Stars: ✭ 41 (-97.98%)
Mutual labels: quantization, model-compression
PaddleSlim: An open-source library for deep model compression and architecture search.
Stars: ✭ 677 (-66.7%)
Mutual labels: model-compression, quantization
Model Optimization: A toolkit for optimizing ML models for deployment with Keras and TensorFlow, including quantization and pruning.
Stars: ✭ 992 (-51.21%)
Mutual labels: model-compression, quantization
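Pruning, which several toolkits here pair with quantization, is simplest in its magnitude-based form: remove the weights with the smallest absolute values. A minimal NumPy sketch of the idea (the `magnitude_prune` helper is illustrative, not any toolkit's API):

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude weights until roughly the requested
    fraction is removed. Generic illustration of magnitude pruning."""
    k = int(np.ceil(sparsity * w.size))
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

w = np.array([0.05, -0.9, 0.3, -0.01, 0.7, 0.2])
pruned = magnitude_prune(w, sparsity=0.5)
# the three largest-magnitude weights (-0.9, 0.3, 0.7) survive
```

Production toolkits typically prune gradually over training and fine-tune afterwards, rather than pruning once as shown here.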
DSQ: PyTorch implementation of "Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks"
Stars: ✭ 70 (-96.56%)
Mutual labels: quantization
NNI: An open-source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyperparameter tuning.
Stars: ✭ 10,698 (+426.22%)
Mutual labels: model-compression
Keras model compression: Model compression based on Geoffrey Hinton's logit-regression (knowledge distillation) method in Keras, applied to MNIST: 16x compression with accuracy above 0.95. An implementation of "Distilling the Knowledge in a Neural Network" (Geoffrey Hinton et al.).
Stars: ✭ 59 (-97.1%)
Mutual labels: model-compression
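The distillation method this repo implements trains a small student on the teacher's temperature-softened output distribution. A minimal NumPy sketch of that loss (the logit values and temperature below are illustrative, not taken from the repo):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = z / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between the teacher's and student's softened outputs,
    the soft-target term from 'Distilling the Knowledge in a Neural Network'."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -np.sum(p_teacher * np.log(p_student + 1e-12))

teacher = np.array([10.0, 2.0, 1.0])   # big model's logits (hypothetical)
student = np.array([8.0, 3.0, 1.5])    # small model's logits (hypothetical)
loss = distillation_loss(student, teacher)
```

In practice this soft-target term is combined with the ordinary cross-entropy on true labels, weighted by T² as the paper recommends.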
ntagger: Reference PyTorch code for named entity tagging.
Stars: ✭ 58 (-97.15%)
Mutual labels: quantization
Model Quantization: A collection of model quantization algorithms.
Stars: ✭ 118 (-94.2%)
Mutual labels: quantization