Awesome AutoML And Lightweight Models: A curated list of high-quality, up-to-date AutoML works and lightweight models, covering 1) neural architecture search, 2) lightweight structures, 3) model compression, quantization, and acceleration, 4) hyperparameter optimization, and 5) automated feature engineering.
Stars: ✭ 691 (+2.07%)
Mutual labels: nas, neural-architecture-search, quantization, model-compression
ATMC: [NeurIPS'2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, "Model Compression with Adversarial Robustness: A Unified Optimization Framework"
Stars: ✭ 41 (-93.94%)
Mutual labels: pruning, quantization, model-compression
Micronet: micronet, a model compression and deployment library. Compression: 1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); 2) pruning: normal, regular, and group convolutional channel pruning; 3) group convolution structure; 4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.
Stars: ✭ 1,232 (+81.98%)
Mutual labels: quantization, model-compression, pruning
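As a rough illustration of the post-training quantization step that libraries like micronet automate, here is a minimal symmetric per-tensor int8 sketch in NumPy. The function names and structure are my own, not micronet's API; real PTQ additionally calibrates activation ranges on sample data.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization (illustrative sketch).

    Maps the largest weight magnitude to 127; assumes x is not all zeros.
    """
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from int8 values and the scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 64)).astype(np.float32)

q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# Rounding error of symmetric quantization is bounded by half a scale step.
err = np.abs(weights - recovered).max()
```

Per-channel scales and activation calibration (as in TensorRT's int8 path) reduce the error further, but the round-and-rescale core is the same.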
Model Optimization: A toolkit for optimizing ML models built with Keras and TensorFlow for deployment, including quantization and pruning.
Stars: ✭ 992 (+46.53%)
Mutual labels: quantization, model-compression, pruning
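The pruning side of such toolkits is commonly magnitude-based: zero out the smallest weights until a target sparsity is reached. A minimal NumPy sketch of that idea (my own illustration, not the toolkit's API, which wraps this in training-time schedules):

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly a
    `sparsity` fraction of the tensor becomes zero (one-shot, illustrative)."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > threshold
    return w * mask

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 128))
pruned = magnitude_prune(w, 0.8)
achieved_sparsity = (pruned == 0).mean()
```

In practice the mask is applied gradually during fine-tuning (e.g. a polynomial sparsity schedule) so the remaining weights can compensate; the one-shot version above only shows the selection rule.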
Awesome ML Model Compression: Awesome machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (-75.48%)
Mutual labels: quantization, model-compression, pruning
KD-Lib: A PyTorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization.
Stars: ✭ 173 (-74.45%)
Mutual labels: quantization, model-compression, pruning
Awesome AI Infrastructures: Infrastructures™ for Machine Learning Training/Inference in Production.
Stars: ✭ 223 (-67.06%)
Mutual labels: quantization, model-compression, pruning
NNI: An open-source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyperparameter tuning.
Stars: ✭ 10,698 (+1480.21%)
Mutual labels: nas, neural-architecture-search, model-compression
torch-model-compression: An automated model-structure analysis and modification toolset for PyTorch models, including a model-compression algorithm library that analyzes model structures automatically.
Stars: ✭ 126 (-81.39%)
Mutual labels: pruning, quantization, model-compression
bert-squeeze: 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡
Stars: ✭ 56 (-91.73%)
Mutual labels: pruning, quantization
BossNAS: (ICCV 2021) BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search
Stars: ✭ 125 (-81.54%)
Mutual labels: nas, neural-architecture-search
mmrazor: OpenMMLab Model Compression Toolbox and Benchmark.
Stars: ✭ 644 (-4.87%)
Mutual labels: pruning, nas
ESNAC: Learnable Embedding Space for Efficient Neural Architecture Compression
Stars: ✭ 27 (-96.01%)
Mutual labels: model-compression, neural-architecture-search
TF-NAS: Rethinking Three Search Freedoms of Latency-Constrained Differentiable Neural Architecture Search (ECCV 2020)
Stars: ✭ 66 (-90.25%)
Mutual labels: nas, neural-architecture-search
deep-learning-roadmap: My own deep learning mastery roadmap.
Stars: ✭ 40 (-94.09%)
Mutual labels: nas, neural-architecture-search
nas-encodings: Encodings for neural architecture search.
Stars: ✭ 29 (-95.72%)
Mutual labels: nas, neural-architecture-search
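To make "encodings for neural architecture search" concrete: a common scheme represents a cell as a DAG, flattening its upper-triangular adjacency matrix together with a one-hot operation choice per node into one feature vector. The sketch below is a generic illustration of that idea, not the specific encodings studied in the repository; the operation set is invented for the example.

```python
import numpy as np

# Hypothetical candidate operations for the example.
OPS = ["conv3x3", "conv1x1", "maxpool3x3"]

def encode_cell(adjacency: np.ndarray, ops: list) -> np.ndarray:
    """Flatten a cell's DAG (upper-triangular adjacency) plus per-node
    one-hot operation choices into a single feature vector."""
    n = adjacency.shape[0]
    edges = adjacency[np.triu_indices(n, k=1)]  # edges above the diagonal
    onehot = np.zeros((len(ops), len(OPS)))
    for i, op in enumerate(ops):
        onehot[i, OPS.index(op)] = 1.0
    return np.concatenate([edges, onehot.ravel()])

# A 3-node fully connected cell: node 0 -> {1, 2}, node 1 -> {2}.
adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [0, 0, 0]])
vec = encode_cell(adj, ["conv3x3", "maxpool3x3", "conv1x1"])
```

Such fixed-length vectors let standard predictors (Gaussian processes, gradient-boosted trees, MLPs) rank architectures without evaluating them fully.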
Regularization-Pruning: [ICLR'21] PyTorch code for our paper "Neural Pruning via Growing Regularization"
Stars: ✭ 44 (-93.5%)
Mutual labels: pruning, model-compression
SViTE: [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
Stars: ✭ 50 (-92.61%)
Mutual labels: pruning, model-compression