ATMC: [NeurIPS 2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”
Stars: ✭ 41 (+13.89%)
Mutual labels: quantization, model-compression
Model Optimization: A toolkit to optimize ML models for deployment with Keras and TensorFlow, including quantization and pruning.
Stars: ✭ 992 (+2655.56%)
Mutual labels: quantization, model-compression
Pretrained Language Model: Pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab.
Stars: ✭ 2,033 (+5547.22%)
Mutual labels: quantization, model-compression
torch-model-compression: An automated toolset for analyzing and modifying the structure of PyTorch models, including a model-compression algorithm library built on automatic model-structure analysis.
Stars: ✭ 126 (+250%)
Mutual labels: quantization, model-compression
Awesome ML Model Compression: Awesome machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (+361.11%)
Mutual labels: quantization, model-compression
PaddleSlim: An open-source library for deep model compression and architecture search.
Stars: ✭ 677 (+1780.56%)
Mutual labels: quantization, model-compression
Awesome AutoML and Lightweight Models: A list of high-quality (newest) AutoML works and lightweight models, including 1) neural architecture search, 2) lightweight structures, 3) model compression, quantization, and acceleration, 4) hyperparameter optimization, and 5) automated feature engineering.
Stars: ✭ 691 (+1819.44%)
Mutual labels: quantization, model-compression
micronet: A model compression and deployment library. Compression covers: 1) quantization: quantization-aware training (QAT) for high-bit (>2b) schemes (DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) ternary/binary schemes (TWN/BNN/XNOR-Net), plus post-training quantization (PTQ) at 8-bit via TensorRT; 2) pruning: normal, regular, and group-convolution channel pruning; 3) group convolution structure; 4) batch-normalization fusion for quantization. Deployment targets TensorRT with fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), and dynamic shapes.
Stars: ✭ 1,232 (+3322.22%)
Mutual labels: quantization, model-compression
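Quantization-aware training, the core technique behind libraries such as micronet, simulates low-precision arithmetic in the forward pass while keeping full-precision values underneath. A minimal illustrative sketch of symmetric fake quantization (an assumption-laden toy, not any listed library's actual API):

```python
def fake_quantize(x, num_bits=8):
    """Simulate symmetric fixed-point quantization of a list of floats.

    Values stay in float, but are snapped to the nearest representable
    grid point, mimicking what a QAT forward pass sees at inference.
    """
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8-bit
    scale = max(abs(v) for v in x) / qmax or 1.0  # avoid div-by-zero on all-zero input
    # Round onto the integer grid, then map back to float ("dequantize").
    return [round(v / scale) * scale for v in x]

weights = [0.52, -1.27, 0.003, 0.9]
print(fake_quantize(weights))   # small values collapse to the nearest grid point
```

With 8 bits the grid has 255 levels, so large weights survive almost unchanged while tiny ones round to zero; real QAT frameworks additionally learn through this rounding with a straight-through estimator.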
TF2: An open-source deep learning inference engine based on FPGA.
Stars: ✭ 113 (+213.89%)
Mutual labels: quantization, model-compression
HAWQ: A quantization library for PyTorch supporting low-precision and mixed-precision quantization, with hardware deployment through TVM.
Stars: ✭ 108 (+200%)
Mutual labels: quantization, model-compression
Awesome AI Infrastructures: Infrastructures™ for machine learning training/inference in production.
Stars: ✭ 223 (+519.44%)
Mutual labels: quantization, model-compression
KD-Lib: A PyTorch knowledge distillation library for benchmarking and extending works in knowledge distillation, pruning, and quantization.
Stars: ✭ 173 (+380.56%)
Mutual labels: quantization, model-compression
ZAQ-code: [CVPR 2021] Zero-shot Adversarial Quantization (ZAQ)
Stars: ✭ 59 (+63.89%)
Mutual labels: quantization, model-compression
memory: Memory game 🎴
Stars: ✭ 24 (-33.33%)
Mutual labels: memory
doc: Get usage and health data about your Node.js process.
Stars: ✭ 17 (-52.78%)
Mutual labels: memory
slimarray: SlimArray compresses uint32 values into just a few bits each by fitting a polynomial to the overall trend of the array and storing only the residuals.
Stars: ✭ 39 (+8.33%)
Mutual labels: memory
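SlimArray's trend-plus-residuals idea can be sketched briefly: fit a line (a degree-1 polynomial) to the values, then store only the small deviations from it, which need far fewer bits than a full uint32. This is a hypothetical illustration in Python, not SlimArray's actual Go implementation:

```python
def compress(values):
    """Fit a least-squares line to `values` and keep only integer residuals.

    For a sorted or near-linear array the residuals span a tiny range,
    so each one fits in a handful of bits instead of 32.
    """
    n = len(values)
    xs = range(n)
    mx, my = sum(xs) / n, sum(values) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, values))
         / sum((x - mx) ** 2 for x in xs))      # slope of y = a*x + b
    b = my - a * mx
    residuals = [v - round(a * i + b) for i, v in enumerate(values)]
    bits = max((max(residuals) - min(residuals)).bit_length(), 1)
    return (a, b), residuals, bits              # bits = width per stored residual

def decompress(trend, residuals):
    """Rebuild the exact original values from the line and residuals."""
    a, b = trend
    return [round(a * i + b) + r for i, r in enumerate(residuals)]
```

Reconstruction is lossless because each residual is defined against the rounded trend prediction; the real library additionally segments the array so each span gets its own polynomial.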
fastT5: ⚡ Boost the inference speed of T5 models by 5x and reduce model size by 3x.
Stars: ✭ 421 (+1069.44%)
Mutual labels: quantization
kvs: Lightweight key-value storage library for browser, Node.js, and in-memory use.
Stars: ✭ 126 (+250%)
Mutual labels: memory
nodejs: Node.js in-process collectors for Instana.
Stars: ✭ 66 (+83.33%)
Mutual labels: memory
libmem: Advanced game hacking library for C/C++, Rust, and Python (Windows/Linux/FreeBSD); process/memory hacking, hooking/detouring, cross-platform (x86/x64/ARM/ARM64), DLL/SO injection, internal/external.
Stars: ✭ 336 (+833.33%)
Mutual labels: memory