Soft Filter Pruning: Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
Stars: ✭ 291 (+42.65%)
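The core idea is simple enough to sketch: at the end of each training epoch, the filters with the smallest L2 norms are zeroed rather than removed, so they keep receiving gradient updates and can recover later. A minimal PyTorch sketch of that idea (not the repository's code; `prune_ratio` is an illustrative parameter):

```python
import torch

def soft_prune_filters(conv: torch.nn.Conv2d, prune_ratio: float = 0.3):
    """Zero the filters with the smallest L2 norm without removing them."""
    with torch.no_grad():
        norms = conv.weight.view(conv.out_channels, -1).norm(p=2, dim=1)
        n_prune = int(conv.out_channels * prune_ratio)
        if n_prune == 0:
            return
        idx = torch.argsort(norms)[:n_prune]  # weakest filters this epoch
        conv.weight[idx] = 0.0                # soft: still trained next epoch
        if conv.bias is not None:
            conv.bias[idx] = 0.0

# Schematically, once per epoch after the optimizer steps:
# for m in model.modules():
#     if isinstance(m, torch.nn.Conv2d):
#         soft_prune_filters(m, prune_ratio=0.3)
```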
Filter Pruning Geometric Median: Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral)
Stars: ✭ 338 (+65.69%)
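FPGM replaces the small-norm criterion with a redundancy criterion: filters closest to the geometric median of a layer's filters are the most replaceable, so they are the ones pruned. A hedged sketch of the paper's practical variant, which approximates the median via summed pairwise distances (`fpgm_select` is an illustrative name):

```python
import torch

def fpgm_select(conv: torch.nn.Conv2d, n_prune: int) -> torch.Tensor:
    """Indices of the filters nearest the geometric median of all filters."""
    w = conv.weight.detach().view(conv.out_channels, -1)
    dist = torch.cdist(w, w, p=2)            # pairwise filter distances
    scores = dist.sum(dim=1)                 # total distance to all others
    return torch.argsort(scores)[:n_prune]   # smallest = most redundant
```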
ATMC: [NeurIPS 2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”
Stars: ✭ 41 (-79.9%)
Awesome Pruning: A curated list of neural network pruning resources.
Stars: ✭ 1,017 (+398.53%)
Regularization-Pruning: [ICLR'21] PyTorch code for the paper "Neural Pruning via Growing Regularization"
Stars: ✭ 44 (-78.43%)
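The method's key move is a penalty that grows over training: filters selected for removal receive an L2 regularization whose coefficient is gradually increased, driving them toward zero before they are physically pruned. A minimal sketch of that idea, assuming `prune_idx` holds the selected filter indices (illustrative names, not the repository's API):

```python
import torch

def growing_reg_penalty(conv: torch.nn.Conv2d, prune_idx, lam: float):
    """Extra L2 penalty on the filters scheduled for removal."""
    return lam * conv.weight[prune_idx].pow(2).sum()

# Schematic training step: lam grows toward lam_max over time, so the
# penalized filters decay smoothly before hard removal.
# lam = min(lam + delta, lam_max)
# loss = task_loss + sum(growing_reg_penalty(c, idx[c], lam) for c in convs)
```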
Kd lib: A PyTorch knowledge distillation library for benchmarking and extending work in knowledge distillation, pruning, and quantization.
Stars: ✭ 173 (-15.2%)
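For orientation, the classic distillation objective that such libraries implement blends a soft-target KL term at temperature T with the usual hard-label loss. A generic PyTorch sketch (not KD-Lib's own API):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.9):
    """Hinton-style KD: softened KL from the teacher plus hard cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)   # rescale so gradients match the hard term's magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```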
Torch Pruning: A PyTorch toolkit for structured neural network pruning that maintains layer dependencies.
Stars: ✭ 193 (-5.39%)
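Its central abstraction is a dependency graph: removing output channels from one convolution must also update the following batch norm, downstream convolutions, and any residual connections. A sketch of the workflow following the toolkit's earlier (v0.x) README; recent releases expose a different interface, so treat the calls as indicative:

```python
import torch
import torchvision
import torch_pruning as tp

model = torchvision.models.resnet18(pretrained=True)

# Trace the model once to record which layers depend on each other.
DG = tp.DependencyGraph()
DG.build_dependency(model, example_inputs=torch.randn(1, 3, 224, 224))

# Pruning three output channels of conv1 triggers coupled fixes (bn1, the
# next conv's input channels, ...); the plan bundles all of them together.
plan = DG.get_pruning_plan(model.conv1, tp.prune_conv, idxs=[0, 2, 6])
plan.exec()
```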
Ghostnet: CV backbones including GhostNet, TinyNet and TNT, developed by Huawei Noah's Ark Lab.
Stars: ✭ 1,744 (+754.9%)
Awesome Ml Model Compression: A curated collection of machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (-18.63%)
SViTE: [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
Stars: ✭ 50 (-75.49%)
Micronet: A model compression and deployment library. Compression includes (1) quantization: quantization-aware training (QAT) at high bit-widths (>2b; DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low bit-widths (≤2b; ternary and binary, i.e. TWN/BNN/XNOR-Net), plus 8-bit post-training quantization (PTQ, TensorRT); (2) pruning: normal, regular, and group-convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment targets TensorRT: fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), and dynamic shapes.
Stars: ✭ 1,232 (+503.92%)
Paddleslim: PaddleSlim is an open-source library for deep model compression and architecture search.
Stars: ✭ 677 (+231.86%)
Model Optimization: A toolkit for optimizing ML models for deployment with Keras and TensorFlow, including quantization and pruning.
Stars: ✭ 992 (+386.27%)
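The pruning API wraps a Keras model (or individual layers) so that a sparsity schedule zeroes weights during training. A small sketch using the toolkit's documented `prune_low_magnitude` wrapper and `PolynomialDecay` schedule (layer sizes are arbitrary):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.8,
    begin_step=0, end_step=1000)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])
pruned = tfmot.sparsity.keras.prune_low_magnitude(
    model, pruning_schedule=schedule)
pruned.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# The UpdatePruningStep callback advances the schedule each training step:
# pruned.fit(x, y, callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
```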
Adventures In Tensorflow Lite: This repository contains notebooks that show how to use TensorFlow Lite for quantizing deep neural networks.
Stars: ✭ 79 (-61.27%)
TextPruner: A PyTorch-based model pruning toolkit for pre-trained language models.
Stars: ✭ 94 (-53.92%)
Nncf: A PyTorch*-based Neural Network Compression Framework for enhanced OpenVINO™ inference.
Stars: ✭ 218 (+6.86%)
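NNCF drives compression from a JSON-style config: you wrap a model once and then train it as usual while the framework inserts sparsity or quantization operations. A sketch assuming NNCF's documented `NNCFConfig`/`create_compressed_model` entry points; the config field names follow the docs but should be checked against the current schema:

```python
import torchvision
from nncf import NNCFConfig
from nncf.torch import create_compressed_model

model = torchvision.models.resnet18()
config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},
    "compression": {"algorithm": "magnitude_sparsity"},
})
# Returns a controller (schedules, statistics, export) plus the wrapped model.
compression_ctrl, compressed_model = create_compressed_model(model, config)
# ...train compressed_model normally; compression_ctrl.statistics()
# reports the achieved sparsity level.
```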
Grasp: Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" (https://openreview.net/pdf?id=SkgsACVKPH)
Stars: ✭ 58 (-71.57%)
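GraSP scores weights at initialization by how much their removal would change gradient flow, using a Hessian-gradient product computed with double backprop. A hedged sketch of that score (see the paper for the exact sign convention used when ranking):

```python
import torch

def grasp_scores(model, loss):
    """Per-weight GraSP score -theta * (H g) via a Hessian-vector product.

    `loss` is the task loss on a batch at initialization. The method keeps
    the weights whose removal would most damage gradient flow.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # g^T stop_grad(g): differentiating this w.r.t. theta yields H g.
    gnorm = sum((g * g.detach()).sum() for g in grads)
    hg = torch.autograd.grad(gnorm, params)
    return [-(p.detach() * h) for p, h in zip(params, hg)]
```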
sparsify: An easy-to-use UI for automatically sparsifying neural networks and creating sparsification recipes for better inference performance and a smaller footprint.
Stars: ✭ 138 (-32.35%)
Delve: PyTorch and Keras model training and layer saturation monitor.
Stars: ✭ 49 (-75.98%)
jp-ocr-prunned-cnn: An attempt at feature-map pruning on a CNN trained for Japanese OCR.
Stars: ✭ 15 (-92.65%)
Neuralnetworks.thought Experiments: Observations and notes for understanding the workings of neural network models, along with other thought experiments, using TensorFlow.
Stars: ✭ 199 (-2.45%)
Lottery Ticket Hypothesis In Pytorch: A PyTorch implementation of "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" by Jonathan Frankle and Michael Carbin that can easily be adapted to any model/dataset.
Stars: ✭ 162 (-20.59%)
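The procedure behind the paper is iterative magnitude pruning with rewinding: train, mask the smallest surviving weights, reset the survivors to their original initialization, and repeat. A compact sketch using PyTorch's built-in `torch.nn.utils.prune` (illustrative, not this repository's code):

```python
import torch
import torch.nn.utils.prune as prune

def lottery_ticket_round(model, init_state, train_fn, amount=0.2):
    """One round: train, prune 20% per layer by magnitude, rewind weights.

    Capture init_state = copy.deepcopy(model.state_dict()) before any
    training. The pruning masks persist across rewinding because prune
    reparameterizes weight = weight_orig * weight_mask.
    """
    train_fn(model)
    for module in model.modules():
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            prune.l1_unstructured(module, name="weight", amount=amount)
    with torch.no_grad():  # rewind surviving weights to initialization
        for name, param in model.named_parameters():
            key = name.replace("_orig", "")
            if key in init_state:
                param.copy_(init_state[key])
```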
mmrazor: OpenMMLab Model Compression Toolbox and Benchmark.
Stars: ✭ 644 (+215.69%)
batchnorm-pruning: Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers (https://arxiv.org/abs/1802.00124)
Stars: ✭ 66 (-67.65%)
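The paper enforces sparsity on batch-norm scale parameters (gamma) during training via ISTA-style updates; channels whose gamma ends up near zero can then be removed together with the matching filters of adjacent convolutions. A one-function sketch of the post-training selection step:

```python
import torch

def bn_channel_mask(bn: torch.nn.BatchNorm2d, threshold: float = 1e-3):
    """True for channels to keep: |gamma| above a small threshold.

    Channels failing the test were driven to zero by the sparsity-inducing
    updates and contribute (almost) nothing to the layer's output.
    """
    return bn.weight.detach().abs() > threshold
```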
SIGIR2021 Conure: One Person, One Model, One World: Learning Continual User Representation without Forgetting
Stars: ✭ 23 (-88.73%)
Tf Keras Surgeon: Pruning and other network surgery for trained TF.Keras models.
Stars: ✭ 25 (-87.75%)
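The surgeon pattern (inherited from the upstream keras-surgeon project this repo adapts) queues structural edits and rebuilds the model in one pass. A sketch assuming the fork keeps the upstream `Surgeon` API; the import path and layer name here are assumptions:

```python
from tensorflow.keras.applications import MobileNet
from tfkerassurgeon import Surgeon  # assumed import path for this fork

model = MobileNet()
layer = model.get_layer("conv_dw_8")  # hypothetical target layer

surgeon = Surgeon(model)
surgeon.add_job("delete_channels", layer, channels=[0, 4, 9])
pruned_model = surgeon.operate()  # returns a rebuilt, smaller model
```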
bert-squeeze: 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡
Stars: ✭ 56 (-72.55%)
RMNet: The RM operation can equivalently convert a ResNet into a VGG-style network, which is better suited to pruning, and can help RepVGG perform better at large depths.
Stars: ✭ 129 (-36.76%)
Ntagger: Reference PyTorch code for named entity tagging.
Stars: ✭ 58 (-71.57%)
nuxt-prune-html: 🔌⚡ A Nuxt module that prunes HTML before sending it to the browser by removing elements matching CSS selector(s). Useful for boosting performance by serving different HTML to bots/audits, e.g. removing all scripts when using dynamic rendering.
Stars: ✭ 69 (-66.18%)
Hrank: PyTorch implementation of the CVPR 2020 Oral paper "HRank: Filter Pruning using High-Rank Feature Map"
Stars: ✭ 164 (-19.61%)
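HRank ranks filters by the average matrix rank of the feature maps they produce, which the paper observes is nearly constant across input batches; low-rank maps carry less information, so their filters are pruned. A compact, hook-based sketch (illustrative, not the authors' code):

```python
import torch

@torch.no_grad()
def hrank_scores(model, layer, batches):
    """Average feature-map rank per filter of `layer` over a few batches."""
    feats = []
    handle = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    for x in batches:
        model(x)
    handle.remove()
    maps = torch.cat(feats, dim=0)                  # (N, C, H, W)
    ranks = torch.linalg.matrix_rank(maps).float()  # rank of each HxW map
    return ranks.mean(dim=0)                        # importance per filter
```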
Skimcaffe: Caffe for sparse convolutional neural networks.
Stars: ✭ 230 (+12.75%)
Awesome Edge Machine Learning: A curated list of awesome edge machine learning resources, including research papers, inference engines, challenges, books, meetups and others.
Stars: ✭ 139 (-31.86%)
FisherPruning: Group Fisher Pruning for Practical Network Compression (ICML 2021)
Stars: ✭ 127 (-37.75%)
fasterai1: FasterAI, a repository for making models smaller and faster with the FastAI library.
Stars: ✭ 34 (-83.33%)
Generalizing-Lottery-Tickets: Code to replicate the experiments in the NeurIPS 2019 paper "One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers"
Stars: ✭ 48 (-76.47%)
jetbrains-utility: Remove/backup settings & CLI for macOS (OS X): DataGrip, AppCode, CLion, Gogland, IntelliJ, PhpStorm, PyCharm, Rider, RubyMine, WebStorm.
Stars: ✭ 62 (-69.61%)
Pytorch Pruning: PyTorch implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference
Stars: ✭ 740 (+262.75%)
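That paper ranks feature maps with a first-order Taylor expansion of the loss: the cost of removing a map is approximated by |activation x gradient|, averaged and then L2-normalized per layer. A minimal sketch of the criterion, assuming activations and gradients captured via hooks:

```python
import torch

def taylor_importance(activation: torch.Tensor, grad: torch.Tensor):
    """First-order Taylor criterion from 1611.06440 (sketch).

    activation, grad: (N, C, H, W) tensors for one layer, captured with
    forward/backward hooks. Returns one importance score per channel.
    """
    scores = (activation * grad).mean(dim=(2, 3)).abs().mean(dim=0)
    return scores / (scores.norm() + 1e-8)  # layer-wise L2 normalization
```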
neural-compressor: Intel® Neural Compressor (formerly Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression technologies, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks, in pursuit of optimal inference performance.
Stars: ✭ 666 (+226.47%)
Awesome Emdl: Embedded and mobile deep learning research resources.
Stars: ✭ 554 (+171.57%)
Aimet: AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Stars: ✭ 453 (+122.06%)
Cen: [NeurIPS 2020] Code release for the paper "Deep Multimodal Fusion by Channel Exchanging" (in PyTorch)
Stars: ✭ 112 (-45.1%)
Selecsls Pytorch: Reference ImageNet implementation of the SelecSLS CNN architecture proposed in the SIGGRAPH 2020 paper "XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera". The repository also includes code for pruning the model based on the implicit sparsity emerging from adaptive gradient descent methods, as detailed in the CVPR 2019 paper "On implicit filter level sparsity in Convolutional Neural Networks".
Stars: ✭ 251 (+23.04%)
Distiller: Neural Network Distiller by Intel AI Lab, a Python package for neural network compression research (https://intellabs.github.io/distiller).
Stars: ✭ 3,760 (+1743.14%)
sparsezoo: Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes.
Stars: ✭ 264 (+29.41%)
Filter Grafting: Filter Grafting for Deep Neural Networks (CVPR 2020)
Stars: ✭ 110 (-46.08%)