PaddleSlim: an open-source library for deep model compression and architecture search.
Stars: ✭ 677 (+307.83%)
micronet: a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) with high-bit (>2-bit) methods (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2-bit) ternary/binary methods (TWN, BNN, XNOR-Net), plus post-training quantization (PTQ) to 8-bit (TensorRT); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT with FP32/FP16/INT8 (PTQ calibration), op adaptation (upsample), and dynamic shapes.
Stars: ✭ 1,232 (+642.17%)
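The calibrate-then-convert PTQ flow that micronet targets can be illustrated with stock PyTorch eager-mode quantization; this is a minimal conceptual sketch, not micronet's own API, and the toy model and random calibration data are placeholders.

```python
# Post-training static quantization (calibrate, then convert) with stock PyTorch
# eager-mode APIs; a generic illustration of the PTQ flow, not micronet's own API.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # float -> int8 at the input
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.fc = nn.Linear(8 * 30 * 30, 10)
        self.dequant = torch.quantization.DeQuantStub()  # int8 -> float at the output

    def forward(self, x):
        x = self.relu(self.conv(self.quant(x)))
        return self.dequant(self.fc(torch.flatten(x, 1)))

model = TinyNet().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")  # int8 config for x86
prepared = torch.quantization.prepare(model)                       # insert observers

with torch.no_grad():                                               # calibration pass
    for _ in range(8):
        prepared(torch.randn(4, 3, 32, 32))

quantized = torch.quantization.convert(prepared)                    # fold to int8 ops
print(quantized(torch.randn(1, 3, 32, 32)).shape)
```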
Model Optimization: a toolkit to optimize ML models for deployment with Keras and TensorFlow, including quantization and pruning.
Stars: ✭ 992 (+497.59%)
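As a small usage sketch of the toolkit's magnitude pruning (the toy model, sparsity target, and schedule below are arbitrary choices, and the tensorflow and tensorflow_model_optimization packages are assumed to be installed):

```python
# Magnitude pruning of a Keras model with the TensorFlow Model Optimization toolkit.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(10),
])

pruned = tfmot.sparsity.keras.prune_low_magnitude(
    model,
    pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0),
)
pruned.compile(optimizer="adam",
               loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

x = tf.random.normal((256, 20))
y = tf.random.uniform((256,), maxval=10, dtype=tf.int32)
pruned.fit(x, y, epochs=1,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])  # required callback

final = tfmot.sparsity.keras.strip_pruning(pruned)  # remove pruning wrappers for export
```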
KD-Lib: a PyTorch knowledge distillation library for benchmarking and extending work in knowledge distillation, pruning, and quantization.
Stars: ✭ 173 (+4.22%)
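The core objective most distillation work builds on is the Hinton-style soft-target loss; the sketch below is a generic PyTorch implementation of that loss, not KD-Lib's own API, with the temperature and weighting chosen arbitrarily.

```python
# Generic knowledge-distillation loss: KL divergence between temperature-softened
# teacher and student logits, blended with the ordinary cross-entropy on hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                          # scale to keep gradient magnitudes comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels).item())
```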
ATMC: [NeurIPS 2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”.
Stars: ✭ 41 (-75.3%)
Awesome Edge Machine Learning: a curated list of awesome edge machine learning resources, including research papers, inference engines, challenges, books, meetups, and others.
Stars: ✭ 139 (-16.27%)
neural-compressor: Intel® Neural Compressor (formerly the Intel® Low Precision Optimization Tool), which aims to provide unified APIs for network compression techniques such as low-precision quantization, sparsity, pruning, and knowledge distillation across different deep learning frameworks, in pursuit of optimal inference performance.
Stars: ✭ 666 (+301.2%)
Awesome AutoML and Lightweight Models: a list of high-quality (newest) AutoML works and lightweight models, covering (1) neural architecture search; (2) lightweight structures; (3) model compression, quantization, and acceleration; (4) hyperparameter optimization; (5) automated feature engineering.
Stars: ✭ 691 (+316.27%)
Torch-Pruning: a PyTorch pruning toolkit for structured neural network pruning with layer dependency maintenance.
Stars: ✭ 193 (+16.27%)
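For context on what structured (channel-level) pruning looks like, the sketch below uses PyTorch's built-in torch.nn.utils.prune rather than Torch-Pruning's own dependency-graph API; unlike Torch-Pruning, this baseline only masks weights and does not propagate the channel removal to dependent layers.

```python
# Structured channel pruning with stock torch.nn.utils.prune: zero out the 50% of
# output channels of a conv layer with the smallest L2 norm. This masks weights in
# place; it does not shrink tensors or fix up downstream layers the way a
# dependency-aware tool does.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(16, 32, kernel_size=3)
prune.ln_structured(conv, name="weight", amount=0.5, n=2, dim=0)  # dim=0: output channels

# Count channels whose entire filter was masked to zero.
channel_norms = conv.weight.detach().flatten(1).norm(p=2, dim=1)
print((channel_norms == 0).sum().item(), "of", conv.out_channels, "channels zeroed")
```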
DS-Net: Dynamic Slimmable Network (CVPR 2021, Oral).
Stars: ✭ 204 (+22.89%)
BitPack: a practical tool to efficiently save ultra-low-precision/mixed-precision quantized models.
Stars: ✭ 36 (-78.31%)
bert-squeeze: 🛠️ tools for Transformers compression using PyTorch Lightning ⚡.
Stars: ✭ 56 (-66.27%)
ZAQ-code: Zero-shot Adversarial Quantization (ZAQ), CVPR 2021.
Stars: ✭ 59 (-64.46%)
Brevitas: quantization-aware training in PyTorch.
Stars: ✭ 343 (+106.63%)
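A minimal quantization-aware-training sketch, assuming Brevitas exposes QuantConv2d, QuantReLU, and QuantLinear in brevitas.nn with weight_bit_width/bit_width keyword arguments (check the project's docs for the exact signatures in your version; the bit widths below are arbitrary):

```python
# Quantization-aware layers with Brevitas (API assumed from its documentation).
# The layers behave like their torch.nn counterparts but fake-quantize weights and
# activations during training.
import torch
import torch.nn as nn
from brevitas.nn import QuantConv2d, QuantReLU, QuantLinear

model = nn.Sequential(
    QuantConv2d(3, 16, kernel_size=3, weight_bit_width=4),
    QuantReLU(bit_width=4),
    nn.Flatten(),
    QuantLinear(16 * 30 * 30, 10, bias=True, weight_bit_width=4),
)
out = model(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 10])
```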
Soft Filter Pruning: Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks.
Stars: ✭ 291 (+75.3%)
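The idea of soft filter pruning, where the lowest-norm filters are zeroed after each training epoch but remain in the network and can recover, can be sketched in a few lines of PyTorch; this is a conceptual illustration of the pruning step, not the authors' code, and the prune ratio is an arbitrary choice.

```python
# Conceptual soft-filter-pruning step: after each epoch, zero the filters with the
# smallest L2 norm in every conv layer; they stay in the graph and may regrow later.
import torch
import torch.nn as nn

@torch.no_grad()
def soft_prune_conv_filters(model: nn.Module, prune_ratio: float = 0.3) -> None:
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            norms = module.weight.flatten(1).norm(p=2, dim=1)      # one norm per filter
            k = int(prune_ratio * norms.numel())
            if k > 0:
                idx = torch.topk(norms, k, largest=False).indices  # weakest filters
                module.weight[idx] = 0.0                           # soft: zeroed, not removed

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
soft_prune_conv_filters(model, prune_ratio=0.3)
print([int((m.weight.flatten(1).norm(dim=1) == 0).sum())
       for m in model if isinstance(m, nn.Conv2d)])
```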
NNCF: PyTorch*-based Neural Network Compression Framework for enhanced OpenVINO™ inference.
Stars: ✭ 218 (+31.33%)
Filter Pruning Geometric Median: Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral).
Stars: ✭ 338 (+103.61%)
Regularization-Pruning: [ICLR'21] PyTorch code for the paper "Neural Pruning via Growing Regularization".
Stars: ✭ 44 (-73.49%)
SViTE: [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang.
Stars: ✭ 50 (-69.88%)
sparsify: an easy-to-use UI for automatically sparsifying neural networks and creating sparsification recipes for better inference performance and a smaller footprint.
Stars: ✭ 138 (-16.87%)
Distiller: Neural Network Distiller by Intel AI Lab, a Python package for neural network compression research. https://intellabs.github.io/distiller
Stars: ✭ 3,760 (+2165.06%)
AIMET: a library that provides advanced quantization and compression techniques for trained neural network models.
Stars: ✭ 453 (+172.89%)
Awesome Pruning: a curated list of neural network pruning resources.
Stars: ✭ 1,017 (+512.65%)
ntagger: reference PyTorch code for named entity tagging.
Stars: ✭ 58 (-65.06%)
GraSP: code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" (https://openreview.net/pdf?id=SkgsACVKPH).
Stars: ✭ 58 (-65.06%)
TF2: an open-source FPGA-based deep learning inference engine.
Stars: ✭ 113 (-31.93%)
sparsezoo: a neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes.
Stars: ✭ 264 (+59.04%)
Awesome EMDL: embedded and mobile deep learning research resources.
Stars: ✭ 554 (+233.73%)
HAWQ: a quantization library for PyTorch that supports low-precision and mixed-precision quantization, with hardware implementation through TVM.
Stars: ✭ 108 (-34.94%)
Pretrained Language Model: pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab.
Stars: ✭ 2,033 (+1124.7%)
LD-Net: Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling.
Stars: ✭ 148 (-10.84%)
Capsule Net PyTorch: [NO MAINTENANCE INTENDED] a PyTorch implementation of the CapsNet architecture from the NIPS 2017 paper "Dynamic Routing Between Capsules".
Stars: ✭ 158 (-4.82%)
Merlin.jl: deep learning for Julia.
Stars: ✭ 147 (-11.45%)
Autograd.jl: a Julia port of the Python autograd package.
Stars: ✭ 147 (-11.45%)
INQ PyTorch: a PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights".
Stars: ✭ 147 (-11.45%)
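INQ constrains weights to powers of two (or zero) incrementally; the snippet below is a rough conceptual sketch of one quantization step, snapping a fraction of the largest-magnitude weights of a tensor to the nearest power of two, not the repository's actual implementation.

```python
# Conceptual INQ-style step: pick the largest-magnitude weights and snap them to the
# nearest power of two (keeping sign); the remaining weights would stay in float and
# be retrained before the next increment.
import torch

def quantize_to_pow2(w: torch.Tensor) -> torch.Tensor:
    exp = torch.round(torch.log2(w.abs().clamp(min=1e-12)))
    return torch.sign(w) * torch.pow(2.0, exp)

def inq_step(weight: torch.Tensor, fraction: float = 0.5) -> torch.Tensor:
    flat = weight.flatten()
    k = int(fraction * flat.numel())
    idx = torch.topk(flat.abs(), k).indices          # largest-magnitude weights first
    out = flat.clone()
    out[idx] = quantize_to_pow2(flat[idx])
    return out.view_as(weight)

w = torch.randn(4, 4)
print(inq_step(w, fraction=0.5))
```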
100daysofmlcode: my journey to learn and grow in the domain of machine learning and artificial intelligence by taking the #100DaysofMLCode challenge.
Stars: ✭ 146 (-12.05%)
FRVSR: Frame-Recurrent Video Super-Resolution (official repository).
Stars: ✭ 157 (-5.42%)
Uncertainty Metrics: an easy-to-use interface for measuring uncertainty and robustness.
Stars: ✭ 145 (-12.65%)
HRank: PyTorch implementation of the CVPR 2020 (Oral) paper "HRank: Filter Pruning using High-Rank Feature Map".
Stars: ✭ 164 (-1.2%)
GAT: Graph Attention Networks (https://arxiv.org/abs/1710.10903).
Stars: ✭ 2,229 (+1242.77%)
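The attention mechanism from the GAT paper can be summarized in two equations, where W is the shared weight matrix, a the attention vector, and N_i the neighbourhood of node i:

```latex
% GAT attention coefficients and node update (Velickovic et al., 2018)
\alpha_{ij} = \frac{\exp\!\big(\mathrm{LeakyReLU}\big(\mathbf{a}^{\top}[\mathbf{W}\vec{h}_i \,\Vert\, \mathbf{W}\vec{h}_j]\big)\big)}
                   {\sum_{k \in \mathcal{N}_i} \exp\!\big(\mathrm{LeakyReLU}\big(\mathbf{a}^{\top}[\mathbf{W}\vec{h}_i \,\Vert\, \mathbf{W}\vec{h}_k]\big)\big)},
\qquad
\vec{h}'_i = \sigma\!\Big(\sum_{j \in \mathcal{N}_i} \alpha_{ij}\,\mathbf{W}\vec{h}_j\Big).
```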
Nettack: implementation of the paper "Adversarial Attacks on Neural Networks for Graph Data".
Stars: ✭ 156 (-6.02%)
LiviaNET: code for LiviaNET, a 3D fully convolutional neural network employed in the work "3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study".
Stars: ✭ 143 (-13.86%)
CNN Quantization: quantization of convolutional neural networks.
Stars: ✭ 141 (-15.06%)
Ensemble PyTorch: a unified ensemble framework for PyTorch that improves the performance and robustness of your deep learning models.
Stars: ✭ 153 (-7.83%)
YOLOv3: YOLOv3 implemented in PyTorch.
Stars: ✭ 142 (-14.46%)
EnhanceNet-Code: EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis (official repository).
Stars: ✭ 142 (-14.46%)
Emotion Recognition Using Speech: building and training a speech emotion recognizer that predicts human emotions using Python, scikit-learn, and Keras.
Stars: ✭ 159 (-4.22%)
emlearn: a machine learning inference engine for microcontrollers and embedded devices.
Stars: ✭ 154 (-7.23%)
Lacmus: a cross-platform application that uses computer vision and neural networks to help find people lost in the forest.
Stars: ✭ 142 (-14.46%)
Algorithms: a collection of common algorithms and data structures implemented in Java, C++, and Python.
Stars: ✭ 142 (-14.46%)
AMC Models: [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices.
Stars: ✭ 154 (-7.23%)
GLCIC-PyTorch: a high-quality PyTorch implementation of "Globally and Locally Consistent Image Completion".
Stars: ✭ 141 (-15.06%)
CTranslate2: a fast inference engine for OpenNMT models.
Stars: ✭ 140 (-15.66%)
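A minimal usage sketch, assuming the ctranslate2 Python package is installed and a model has already been converted to its format with the project's converter tools; the model directory and input tokens below are placeholders.

```python
# Translating a pre-tokenized sentence with CTranslate2 (model path and tokens are
# placeholders; a real setup converts an OpenNMT model first and tokenizes with the
# matching SentencePiece/BPE vocabulary).
import ctranslate2

translator = ctranslate2.Translator("ende_ctranslate2/", device="cpu")
results = translator.translate_batch([["▁Hello", "▁world", "!"]])
print(results[0].hypotheses[0])  # best hypothesis as a list of target tokens
```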