Compress: Compressing Representations for Self-Supervised Learning
Stars: ✭ 43 (-72.78%)
Mutual labels: model-compression
Ghostnet: CV backbones including GhostNet, TinyNet and TNT, developed by Huawei Noah's Ark Lab.
Stars: ✭ 1,744 (+1003.8%)
Mutual labels: model-compression
Condensa: Programmable Neural Network Compression
Stars: ✭ 129 (-18.35%)
Mutual labels: model-compression
Neuronblocks: NLP DNN Toolkit - Building Your NLP DNN Models Like Playing Lego
Stars: ✭ 1,356 (+758.23%)
Mutual labels: model-compression
Tf2: An Open Source Deep Learning Inference Engine Based on FPGA
Stars: ✭ 113 (-28.48%)
Mutual labels: model-compression
Knowledge Distillation Pytorch: A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility
Stars: ✭ 986 (+524.05%)
Mutual labels: model-compression
Ld Net: Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling
Stars: ✭ 148 (-6.33%)
Mutual labels: model-compression
Nni: An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
Stars: ✭ 10,698 (+6670.89%)
Mutual labels: model-compression
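To give a feel for the hyper-parameter tuning these AutoML toolkits automate, here is a minimal random-search sketch in plain Python. This is an illustration of the general idea, not NNI's actual API; the objective and search space are made up.

```python
import random

def random_search(objective, space, n_trials=20, seed=0):
    """Try n_trials random configurations; keep the one with the lowest score."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(choices) for name, choices in space.items()}
        score = objective(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy objective standing in for a validation loss (purely illustrative).
space = {"lr": [0.1, 0.01, 0.001], "width": [32, 64, 128]}
objective = lambda cfg: abs(cfg["lr"] - 0.01) + abs(cfg["width"] - 64) / 64
best_cfg, best_score = random_search(objective, space)
```

Real toolkits replace the random sampler with smarter strategies (Bayesian optimization, evolution, early stopping), but the trial loop has the same shape.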
Pretrained Language Model: Pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab.
Stars: ✭ 2,033 (+1186.71%)
Mutual labels: model-compression
Keras model compression: Model compression in Keras based on Geoffrey Hinton's logit-matching (knowledge distillation) method, applied to MNIST: 16x compression while keeping accuracy above 0.95. An implementation of "Distilling the Knowledge in a Neural Network" (Geoffrey Hinton et al.)
Stars: ✭ 59 (-62.66%)
Mutual labels: model-compression
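The distillation loss from "Distilling the Knowledge in a Neural Network" can be sketched in a few lines of numpy. This is a hedged illustration of the temperature-scaled soft-target loss, not this repo's code; the logits are made up.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T gives softer probabilities."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy of student soft predictions against teacher soft targets.

    Scaled by T^2, as in Hinton et al., so gradient magnitudes stay
    comparable across temperatures."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -float(np.sum(p_teacher * np.log(p_student + 1e-12))) * T * T

teacher = [8.0, 2.0, -1.0]  # confident teacher logits (illustrative)
student = [6.0, 3.0, 0.0]   # student logits being trained toward them
loss = distillation_loss(student, teacher)
```

In practice this term is combined with the ordinary cross-entropy on hard labels, weighted by a mixing coefficient.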
Micronet: micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), high-bit (>2b: DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b: ternary and binary, i.e. TWN/BNN/XNOR-Net), plus post-training quantization (PTQ), 8-bit (TensorRT); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), and dynamic shape.
Stars: ✭ 1,232 (+679.75%)
Mutual labels: model-compression
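The core trick behind quantization-aware training is inserting a fake-quantize round trip (quantize to integers, then dequantize) into the forward pass so the network learns to tolerate quantization error. A minimal numpy sketch of uniform affine quantization, illustrative only and not Micronet's implementation:

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Quantize x to num_bits unsigned integers, then dequantize back.

    The returned array is still float, but only takes values the integer
    grid can represent, which is what QAT simulates during training."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x = np.asarray(x, dtype=float)
    scale = (x.max() - x.min()) / (qmax - qmin) or 1.0  # avoid zero scale
    zero_point = round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale

w = np.array([-1.0, -0.4, 0.0, 0.3, 1.0])
w_q8 = fake_quantize(w, num_bits=8)  # nearly lossless at 8 bits
w_q2 = fake_quantize(w, num_bits=2)  # visibly coarser at 2 bits
```

Low-bit schemes (ternary, binary) replace this uniform grid with two or three levels and typically need QAT rather than post-training calibration to stay accurate.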
Awesome Pruning: A curated list of neural network pruning resources.
Stars: ✭ 1,017 (+543.67%)
Mutual labels: model-compression
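A common baseline in the pruning literature collected by lists like this one is global magnitude pruning: zero out the smallest-magnitude weights until a target sparsity is reached. A minimal numpy sketch, illustrative only:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude entries so that roughly `sparsity`
    fraction of the weights become zero."""
    w = np.asarray(weights, dtype=float).copy()
    k = int(round(sparsity * w.size))
    if k > 0:
        # k-th smallest absolute value becomes the pruning threshold
        threshold = np.sort(np.abs(w), axis=None)[k - 1]
        w[np.abs(w) <= threshold] = 0.0
    return w

w = np.array([[0.9, -0.05, 0.3],
              [-0.7, 0.02, -0.1]])
w_pruned = magnitude_prune(w, sparsity=0.5)
```

Structured variants (channel or filter pruning) apply the same idea to whole groups of weights so the pruned network actually runs faster on standard hardware.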
Collaborative Distillation: PyTorch code for our CVPR'20 paper "Collaborative Distillation for Ultra-Resolution Universal Style Transfer"
Stars: ✭ 138 (-12.66%)
Mutual labels: model-compression
Model Optimization: A toolkit to optimize ML models for deployment with Keras and TensorFlow, including quantization and pruning.
Stars: ✭ 992 (+527.85%)
Mutual labels: model-compression
Hawq: Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM.
Stars: ✭ 108 (-31.65%)
Mutual labels: model-compression
Amc Models: [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices
Stars: ✭ 154 (-2.53%)
Mutual labels: model-compression
Yolov3: YOLOv3 implemented in PyTorch
Stars: ✭ 142 (-10.13%)
Mutual labels: model-compression
Microexpnet: MicroExpNet, An Extremely Small and Fast Model For Expression Recognition From Frontal Face Images
Stars: ✭ 121 (-23.42%)
Mutual labels: model-compression