Awesome Ml Model Compression: Awesome machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (+151.52%)
Adventures In Tensorflow Lite: This repository contains notebooks that show how to use TensorFlow Lite for quantizing deep neural networks.
Stars: ✭ 79 (+19.7%)
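The entry above centers on quantization with TensorFlow Lite. As a rough, framework-free sketch of the underlying idea (not code from that repository), here is affine int8 quantization of a float tensor in NumPy, with the scale/zero-point pair needed to dequantize:

```python
import numpy as np

def quantize_int8(w):
    """Affine (asymmetric) int8 quantization of a float tensor.

    Returns quantized values plus (scale, zero_point) such that
    w is approximately scale * (q - zero_point).
    """
    w_min, w_max = float(w.min()), float(w.max())
    # Ensure zero is exactly representable (important for padding, ReLU, etc.).
    w_min, w_max = min(w_min, 0.0), max(w_max, 0.0)
    scale = (w_max - w_min) / 255.0 or 1.0
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)  # reconstruction error is at most one scale step
```

TF Lite's post-training quantization applies this kind of mapping per tensor (or per channel) and additionally calibrates activation ranges from sample data.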
PyTorch-Deep-Compression: A PyTorch implementation of the iterative pruning method described in Han et al. (2015).
Stars: ✭ 39 (-40.91%)
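The Han et al. (2015) method prunes the smallest-magnitude weights, retrains, and repeats. A minimal NumPy sketch of one such magnitude-pruning step (an illustration of the technique, not this repository's code):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights.

    Returns the pruned weights and the binary mask, which is kept fixed
    while the surviving weights are fine-tuned (Han et al. style).
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

# Iterative schedule: prune a little, retrain, prune more.
w = np.random.randn(4, 4).astype(np.float32)
for target in (0.25, 0.5, 0.75):
    w, mask = magnitude_prune(w, target)
    # ...fine-tune the surviving weights here, keeping `mask` fixed...
```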
Neuralnetworks.thought Experiments: Observations and notes for understanding the workings of neural network models, and other thought experiments, using TensorFlow.
Stars: ✭ 199 (+201.52%)
Tf Keras Surgeon: Pruning and other network surgery for trained tf.keras models.
Stars: ✭ 25 (-62.12%)
deep-compression: Learning both Weights and Connections for Efficient Neural Networks. https://arxiv.org/abs/1506.02626
Stars: ✭ 156 (+136.36%)
Awesome Edge Machine Learning: A curated list of awesome edge machine learning resources, including research papers, inference engines, challenges, books, meetups, and more.
Stars: ✭ 139 (+110.61%)
fasterai1: FasterAI, a repository for making smaller and faster models with the fastai library.
Stars: ✭ 34 (-48.48%)
Delve: PyTorch and Keras model training and layer-saturation monitor.
Stars: ✭ 49 (-25.76%)
Awd Lstm Lm: LSTM and QRNN language model toolkit for PyTorch.
Stars: ✭ 1,834 (+2678.79%)
Aimet: AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Stars: ✭ 453 (+586.36%)
torchprune: A research library for PyTorch-based neural network pruning, compression, and more.
Stars: ✭ 133 (+101.52%)
FactorizationMachine: An implementation of factorization machines with support for classification.
Stars: ✭ 19 (-71.21%)
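A second-order factorization machine scores an example as a bias, a linear term, and all pairwise feature interactions through factor vectors. A small NumPy sketch using the standard O(kn) identity for the pairwise term (an illustration, not this repository's code):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine score.

    Uses the O(kn) identity:
    sum_{i<j} <v_i, v_j> x_i x_j
      = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ].
    """
    linear = w0 + x @ w
    s = V.T @ x                                   # shape (k,): per-factor weighted sums
    pairwise = 0.5 * np.sum(s ** 2 - (V.T ** 2) @ (x ** 2))
    return linear + pairwise

rng = np.random.default_rng(0)
n, k = 6, 3
x = rng.random(n)
score = fm_predict(x, 0.1, rng.random(n), rng.random((n, k)))
# For classification, pass the score through a sigmoid.
```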
Lottery Ticket Hypothesis In Pytorch: A PyTorch implementation of the paper "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" by Jonathan Frankle and Michael Carbin, easily adapted to any model/dataset.
Stars: ✭ 162 (+145.45%)
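The lottery-ticket procedure trains, prunes by magnitude, then rewinds the surviving weights to their original initialization before retraining. A minimal NumPy sketch of one round (an illustration of the idea, not this repository's code):

```python
import numpy as np

def lottery_ticket_round(w_init, w_trained, sparsity):
    """One round of the lottery-ticket procedure: prune the smallest
    trained weights, then rewind the survivors to their initial values."""
    flat = np.abs(w_trained).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k - 1)[k - 1] if k > 0 else -np.inf
    mask = np.abs(w_trained) > threshold
    return w_init * mask, mask   # the "winning ticket": init values under the mask

rng = np.random.default_rng(0)
w_init = rng.standard_normal((8, 8))
w_trained = w_init + 0.1 * rng.standard_normal((8, 8))  # stand-in for real training
ticket, mask = lottery_ticket_round(w_init, w_trained, 0.5)
# `ticket` is then retrained with `mask` held fixed.
```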
Filter Grafting: Filter Grafting for Deep Neural Networks (CVPR 2020).
Stars: ✭ 110 (+66.67%)
SGDLibrary: A MATLAB/Octave library for stochastic optimization algorithms (version 1.0.20).
Stars: ✭ 165 (+150%)
Grasp: Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow". https://openreview.net/pdf?id=SkgsACVKPH
Stars: ✭ 58 (-12.12%)
AutoOpt: Automatic and simultaneous adjustment of learning rate and momentum for stochastic gradient descent.
Stars: ✭ 44 (-33.33%)
Model Optimization: A toolkit for optimizing ML models for deployment with Keras and TensorFlow, including quantization and pruning.
Stars: ✭ 992 (+1403.03%)
Paddleslim: PaddleSlim is an open-source library for deep model compression and architecture search.
Stars: ✭ 677 (+925.76%)
Skimcaffe: Caffe for sparse convolutional neural networks.
Stars: ✭ 230 (+248.48%)
Distiller: Neural Network Distiller by Intel AI Lab, a Python package for neural network compression research. https://intellabs.github.io/distiller
Stars: ✭ 3,760 (+5596.97%)
sparsezoo: A neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes.
Stars: ✭ 264 (+300%)
Nncf: PyTorch*-based Neural Network Compression Framework for enhanced OpenVINO™ inference.
Stars: ✭ 218 (+230.3%)
ATMC: [NeurIPS 2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, "Model Compression with Adversarial Robustness: A Unified Optimization Framework".
Stars: ✭ 41 (-37.88%)
Torch Pruning: A PyTorch toolkit for structured neural network pruning that maintains layer dependencies.
Stars: ✭ 193 (+192.42%)
GAN-LTH: [ICLR 2021] "GANs Can Play Lottery Too" by Xuxi Chen, Zhenyu Zhang, Yongduo Sui, Tianlong Chen.
Stars: ✭ 24 (-63.64%)
Kd lib: A PyTorch knowledge distillation library for benchmarking and extending work in knowledge distillation, pruning, and quantization.
Stars: ✭ 173 (+162.12%)
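The core of Hinton-style knowledge distillation is a loss that blends cross-entropy on hard labels with a KL term between temperature-softened teacher and student outputs. A self-contained NumPy sketch (an illustration of the technique, not this library's API):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with KL divergence between
    temperature-softened teacher and student distributions.
    The soft term is scaled by T^2, as in Hinton et al."""
    p_t = softmax(teacher_logits / T)
    log_p_s = np.log(softmax(student_logits / T))
    soft = np.mean(np.sum(p_t * (np.log(p_t) - log_p_s), axis=-1)) * T * T
    hard = -np.mean(np.log(softmax(student_logits)[np.arange(len(labels)), labels]))
    return alpha * soft + (1 - alpha) * hard

s = np.array([[2.0, 0.5, -1.0]])   # student logits
t = np.array([[3.0, 1.0, -2.0]])   # teacher logits
loss = distillation_loss(s, t, labels=np.array([0]))
```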
bert-squeeze: 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡.
Stars: ✭ 56 (-15.15%)
Hrank: PyTorch implementation of the CVPR 2020 (Oral) paper "HRank: Filter Pruning using High-Rank Feature Map".
Stars: ✭ 164 (+148.48%)
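HRank's key observation is that the average matrix rank of a filter's feature maps is a stable importance signal: filters whose maps have low rank carry less information and can be pruned first. A rough NumPy sketch of the scoring step (an illustration of the criterion, not this repository's code):

```python
import numpy as np

def hrank_scores(feature_maps):
    """HRank-style filter importance: average matrix rank of each
    channel's feature map over a batch. Lower score => prune earlier.

    feature_maps: array of shape (batch, channels, H, W).
    """
    b, c, h, w = feature_maps.shape
    ranks = np.empty((b, c))
    for i in range(b):
        for j in range(c):
            ranks[i, j] = np.linalg.matrix_rank(feature_maps[i, j])
    return ranks.mean(axis=0)        # one score per filter/channel

fmaps = np.random.randn(2, 3, 8, 8)
fmaps[:, 0] = np.ones((8, 8))        # a rank-1 map: low score, pruned first
scores = hrank_scores(fmaps)
prune_order = np.argsort(scores)     # lowest-rank filters come first
```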
FisherPruning: Group Fisher Pruning for Practical Network Compression (ICML 2021).
Stars: ✭ 127 (+92.42%)
Cen: [NeurIPS 2020] Code release for the paper "Deep Multimodal Fusion by Channel Exchanging" (in PyTorch).
Stars: ✭ 112 (+69.7%)
LinkOS-Android-Samples: Java-based sample code for developing on Android. The demos in this repository are stored on separate branches; to navigate to a demo, click Branches.
Stars: ✭ 52 (-21.21%)
Micronet: micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) with high-bit (>2b) schemes (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b)/ternary and binary schemes (TWN/BNN/XNOR-Net), plus post-training quantization (PTQ) at 8 bits (TensorRT); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), and dynamic shape.
Stars: ✭ 1,232 (+1766.67%)
theedhum-nandrum: A sentiment classifier for mixed-language (and mixed-script) reviews in Tamil, Malayalam, and English.
Stars: ✭ 16 (-75.76%)
Ntagger: Reference PyTorch code for named entity tagging.
Stars: ✭ 58 (-12.12%)
neural-compressor: Intel® Neural Compressor (formerly Intel® Low Precision Optimization Tool) provides unified APIs for network compression techniques, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks in pursuit of optimal inference performance.
Stars: ✭ 666 (+909.09%)
Awesome Pruning: A curated list of neural network pruning resources.
Stars: ✭ 1,017 (+1440.91%)
DS-Net: (CVPR 2021, Oral) Dynamic Slimmable Network.
Stars: ✭ 204 (+209.09%)
Pytorch Pruning: PyTorch implementation of "Pruning Convolutional Neural Networks for Resource Efficient Inference" (arXiv:1611.06440).
Stars: ✭ 740 (+1021.21%)
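The paper behind that entry (arXiv:1611.06440) ranks feature maps by a first-order Taylor expansion of the loss change from removing them: the magnitude of activation times gradient, normalized per layer. A small NumPy sketch of the criterion (an illustration, not this repository's code):

```python
import numpy as np

def taylor_criterion(activations, gradients):
    """First-order Taylor pruning criterion: score each channel by
    |mean over batch and spatial positions of activation * gradient|,
    then L2-normalize the scores across the layer's channels.

    activations, gradients: arrays of shape (batch, channels, H, W).
    """
    scores = np.abs((activations * gradients).mean(axis=(0, 2, 3)))
    return scores / (np.linalg.norm(scores) + 1e-8)

acts = np.random.randn(4, 8, 5, 5)    # activations from a forward pass
grads = np.random.randn(4, 8, 5, 5)   # gradients w.r.t. those activations
scores = taylor_criterion(acts, grads)
# Channels with the lowest scores are the first candidates for removal.
```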
DiFacto2 ffm: Distributed field-aware factorization machines based on a parameter server.
Stars: ✭ 11 (-83.33%)
Awesome Emdl: Embedded and mobile deep learning research resources.
Stars: ✭ 554 (+739.39%)
TransE: A Python implementation of the TransE method, explaining TransE's vector updates under SGD.
Stars: ✭ 31 (-53.03%)
Generalizing-Lottery-Tickets: Code to replicate the experiments in the NeurIPS 2019 paper "One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers".
Stars: ✭ 48 (-27.27%)
jetbrains-utility: Remove/backup settings and a CLI for macOS (OS X): DataGrip, AppCode, CLion, Gogland, IntelliJ, PhpStorm, PyCharm, Rider, RubyMine, WebStorm.
Stars: ✭ 62 (-6.06%)
Selecsls Pytorch: Reference ImageNet implementation of the SelecSLS CNN architecture proposed in the SIGGRAPH 2020 paper "XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera". The repository also includes code for pruning the model based on implicit sparsity emerging from adaptive gradient descent methods, as detailed in the CVPR 2019 paper "On Implicit Filter Level Sparsity in Convolutional Neural Networks".
Stars: ✭ 251 (+280.3%)