Distiller: Neural Network Distiller by Intel AI Lab, a Python package for neural network compression research. https://intellabs.github.io/distiller
Stars: ✭ 3,760 (+8445.45%)
torchprune: A research library for PyTorch-based neural network pruning, compression, and more.
Stars: ✭ 133 (+202.27%)
Torch Pruning: A PyTorch pruning toolkit for structured neural network pruning and layer dependency maintenance.
Stars: ✭ 193 (+338.64%)
DS-Net: (CVPR 2021, Oral) Dynamic Slimmable Network
Stars: ✭ 204 (+363.64%)
Soft Filter Pruning: Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
Stars: ✭ 291 (+561.36%)
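The idea behind soft filter pruning can be sketched in a few lines of plain Python (an illustrative toy, not the repository's API): after each epoch, the lowest-L2-norm filters are zeroed but stay in the model, so they keep receiving gradient updates and may recover later.

```python
import math

def soft_prune(filters, prune_ratio):
    """Zero the filters with the smallest L2 norm. This is "soft"
    pruning: zeroed filters remain in the model and may be updated
    (and thus revived) in the next training epoch."""
    norms = [math.sqrt(sum(w * w for w in f)) for f in filters]
    n_prune = int(len(filters) * prune_ratio)
    # indices of the n_prune smallest-norm filters
    pruned = sorted(range(len(filters)), key=lambda i: norms[i])[:n_prune]
    return [[0.0] * len(f) if i in pruned else f
            for i, f in enumerate(filters)]

# Toy example: four 2-weight "filters"; the two smallest-norm ones get zeroed.
filters = [[0.1, 0.2], [1.0, 2.0], [0.05, 0.0], [3.0, 1.0]]
pruned = soft_prune(filters, 0.5)
```

In the actual method, this selection is repeated at the end of every training epoch, with the pruned set re-chosen each time.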
PaddleSlim: An open-source library for deep model compression and architecture search.
Stars: ✭ 677 (+1438.64%)
Awesome Pruning: A curated list of neural network pruning resources.
Stars: ✭ 1,017 (+2211.36%)
ATMC: [NeurIPS 2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, "Model Compression with Adversarial Robustness: A Unified Optimization Framework"
Stars: ✭ 41 (-6.82%)
Micronet: micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), high-bit (>2b: DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.
Stars: ✭ 1,232 (+2700%)
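As a concrete illustration of the 8-bit post-training quantization such tools implement, here is a minimal symmetric-quantization sketch in plain Python (the scale value is a made-up example, not the output of a real calibration run):

```python
def quantize_int8(x, scale):
    """Map a float to a signed 8-bit integer: round(x / scale),
    clamped to the int8 range [-128, 127]."""
    q = round(x / scale)
    return max(-128, min(127, q))

def dequantize(q, scale):
    """Recover an approximate float from the quantized integer."""
    return q * scale

# In real PTQ the scale is chosen from calibration data,
# e.g. scale = max(|x|) / 127; here it is just a toy value.
scale = 0.1
q = quantize_int8(1.234, scale)   # 12
x = dequantize(q, scale)          # ~1.2, so quantization error ~0.034
```

The round-trip error (here about 0.034) is bounded by half the scale, which is why calibrating the scale to the observed activation range matters.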
Filter Pruning Geometric Median: Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019, Oral)
Stars: ✭ 338 (+668.18%)
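The FPGM criterion treats a filter as redundant when it lies near the geometric median of its layer's filters, since the remaining filters can represent it; a cheap proxy is the filter with the smallest total distance to all others. A toy sketch in plain Python (illustrative only, not the paper's code):

```python
import math

def dist(a, b):
    """Euclidean distance between two flattened filters."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_replaceable(filters):
    """Index of the filter with the smallest total distance to all
    others, i.e. the one closest to the geometric median and hence
    the most redundant under the FPGM criterion."""
    totals = [sum(dist(f, g) for g in filters) for f in filters]
    return min(range(len(filters)), key=lambda i: totals[i])

# [1.0, 1.0] sits in the middle of the cluster, so it is selected.
filters = [[0.0, 0.0], [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]]
idx = most_replaceable(filters)
```

Note the contrast with norm-based criteria: FPGM can prune a large-norm filter if it is well covered by its neighbors.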
Awesome ML Model Compression: Awesome machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (+277.27%)
Model Optimization: A toolkit to optimize ML models for deployment with Keras and TensorFlow, including quantization and pruning.
Stars: ✭ 992 (+2154.55%)
SViTE: [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
Stars: ✭ 50 (+13.64%)
Kd lib: A PyTorch knowledge distillation library for benchmarking and extending works in the domains of knowledge distillation, pruning, and quantization.
Stars: ✭ 173 (+293.18%)
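The core distillation loss such libraries benchmark is the cross-entropy between temperature-softened teacher and student distributions (the classic Hinton-style formulation). A dependency-free sketch, illustrative rather than the library's actual API:

```python
import math

def softmax(logits, T=1.0):
    """Softmax with temperature T; higher T gives softer targets."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the teacher's softened distribution and
    the student's, scaled by T^2 to keep gradient magnitudes stable."""
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q)) * T * T

loss = kd_loss([2.0, 0.5, 0.1], [2.1, 0.4, 0.2])
```

In practice this term is combined with the ordinary cross-entropy against hard labels, weighted by a mixing coefficient.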
GAN-LTH: [ICLR 2021] "GANs Can Play Lottery Too" by Xuxi Chen, Zhenyu Zhang, Yongduo Sui, Tianlong Chen
Stars: ✭ 24 (-45.45%)
Deep-Learning-Specialization-Coursera: Deep Learning Specialization course on Coursera. The course covers neural networks, deep learning, hyperparameter tuning, regularization, optimization, data processing, convolutional NNs, and sequence models.
Stars: ✭ 75 (+70.45%)
pyowl: Ordered Weighted L1 regularization for classification and regression in Python
Stars: ✭ 52 (+18.18%)
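The OWL penalty sorts the absolute coefficients in decreasing order and takes their inner product with a non-increasing weight vector; particular weight choices recover the lasso and OSCAR as special cases. A minimal sketch (not pyowl's API):

```python
def owl_penalty(w, lam):
    """Ordered weighted L1 penalty: sort |w| in decreasing order and
    dot it with the non-increasing weight vector lam, so the largest
    coefficients receive the heaviest penalties."""
    a = sorted((abs(x) for x in w), reverse=True)
    return sum(li * ai for li, ai in zip(lam, a))

# With lam = [2, 1, 0] the largest coefficient is penalized twice as
# hard as the second, and the smallest not at all.
p = owl_penalty([0.5, -3.0, 1.0], [2.0, 1.0, 0.0])  # 2*3 + 1*1 + 0*0.5 = 7.0
```

Setting all lam entries equal gives plain L1; a strictly decreasing lam encourages correlated features to receive equal (clustered) coefficients.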
SReT: Official PyTorch implementation of the ECCV 2022 paper "Sliced Recursive Transformer"
Stars: ✭ 51 (+15.91%)
Statistical-Learning-using-R: A statistical learning application covering various machine learning algorithms and their implementation in R, with in-depth interpretation. Documents and reports on the techniques mentioned below can be found on the author's RPubs profile.
Stars: ✭ 27 (-38.64%)
awesome-efficient-gnn: Code and resources on scalable and efficient Graph Neural Networks
Stars: ✭ 498 (+1031.82%)
FisherPruning: Group Fisher Pruning for Practical Network Compression (ICML 2021)
Stars: ✭ 127 (+188.64%)
PenaltyFunctions.jl: Julia package of regularization functions for machine learning
Stars: ✭ 25 (-43.18%)
Auto-Compression: Automatic DNN compression tool with various model compression and neural architecture search techniques
Stars: ✭ 19 (-56.82%)
deep-compression: Learning both Weights and Connections for Efficient Neural Networks. https://arxiv.org/abs/1506.02626
Stars: ✭ 156 (+254.55%)
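The pruning stage of that train-prune-retrain pipeline is simple magnitude thresholding: zero the smallest-magnitude weights, keep a binary mask, and retrain the survivors. A toy sketch in plain Python (illustrative, not the repository's code):

```python
def magnitude_prune(weights, sparsity):
    """Zero the fraction `sparsity` of weights with the smallest
    magnitude; return (pruned_weights, mask). The mask is reapplied
    during retraining so pruned weights stay at zero."""
    k = int(len(weights) * sparsity)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    dropped = set(order[:k])
    mask = [0 if i in dropped else 1 for i in range(len(weights))]
    return [w * m for w, m in zip(weights, mask)], mask

w = [0.3, -0.02, 0.8, 0.01, -0.5]
pruned, mask = magnitude_prune(w, 0.4)  # drops the two tiniest weights
```

Iterating prune-and-retrain a few times, rather than pruning once, is what lets the original paper reach high sparsity without accuracy loss.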
bert-squeeze: 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡
Stars: ✭ 56 (+27.27%)
pyunfold: Iterative unfolding for Python
Stars: ✭ 23 (-47.73%)
manifold mixup: TensorFlow implementation of the Manifold Mixup machine learning research paper
Stars: ✭ 24 (-45.45%)
sparsebn: Software for learning sparse Bayesian networks
Stars: ✭ 41 (-6.82%)
batchnorm-pruning: Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers. https://arxiv.org/abs/1802.00124
Stars: ✭ 66 (+50%)
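A common form of this family of methods ranks channels by the magnitude of their batch-norm scale γ: a channel whose γ has been driven to near zero contributes almost nothing downstream and can be removed structurally. A toy selection sketch (illustrative only; the paper itself drives γ to zero with an ISTA-based procedure rather than using a fixed threshold):

```python
def select_channels(gammas, threshold):
    """Keep the channels whose batch-norm scale |gamma| exceeds the
    threshold; near-zero-gamma channels are pruned away, shrinking
    both this layer and the next layer's input."""
    return [i for i, g in enumerate(gammas) if abs(g) > threshold]

# Channels 1 and 3 have been regularized to ~0 and get dropped.
kept = select_channels([0.9, 0.001, 0.4, -0.002, 0.7], 0.01)
```

Because γ multiplies the whole normalized channel, pruning on γ removes entire filters, yielding a genuinely smaller dense network rather than scattered zeros.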
hyperstar: "Hyperstar: Negative Sampling Improves Hypernymy Extraction Based on Projection Learning."
Stars: ✭ 24 (-45.45%)
ZAQ-code: Zero-shot Adversarial Quantization (ZAQ), CVPR 2021
Stars: ✭ 59 (+34.09%)
PyTorch-Deep-Compression: A PyTorch implementation of the iterative pruning method described in Han et al. (2015)
Stars: ✭ 39 (-11.36%)
fasterai1: FasterAI, a repository for making smaller and faster models with the FastAI library.
Stars: ✭ 34 (-22.73%)
group sparsity: Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression (CVPR 2020)
Stars: ✭ 45 (+2.27%)
SSE-PT: Code and datasets for the RecSys'20 paper "SSE-PT: Sequential Recommendation Via Personalized Transformer" and the NeurIPS'19 paper "Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers"
Stars: ✭ 103 (+134.09%)
Generalizing-Lottery-Tickets: Code to replicate the experiments in the NeurIPS 2019 paper "One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers"
Stars: ✭ 48 (+9.09%)
ESNAC: Learnable Embedding Space for Efficient Neural Architecture Compression
Stars: ✭ 27 (-38.64%)
neural-compressor: Intel® Neural Compressor (formerly Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression techniques such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks, to pursue optimal inference performance.
Stars: ✭ 666 (+1413.64%)
deep-learning-notes: 🧠👨‍💻 Deep Learning Specialization • Lecture Notes • Lab Assignments
Stars: ✭ 20 (-54.55%)
SAN: [ECCV 2020] Scale Adaptive Network: Learning to Learn Parameterized Classification Networks for Scalable Input Images
Stars: ✭ 41 (-6.82%)
Deeplearning: Python code for Deep Learning (the "flower book"): mathematical derivations, analysis of principles, and source-level code implementations.
Stars: ✭ 4,020 (+9036.36%)
AMP-Regularizer: Code for the paper "Regularizing Neural Networks via Adversarial Model Perturbation", CVPR 2021
Stars: ✭ 26 (-40.91%)
FSCNMF: An implementation of "Fusing Structure and Content via Non-negative Matrix Factorization for Embedding Information Networks".
Stars: ✭ 16 (-63.64%)
tulip: Scalable input gradient regularization
Stars: ✭ 19 (-56.82%)
FastPose: PyTorch real-time multi-person keypoint estimation
Stars: ✭ 36 (-18.18%)
mixup: speechpro.com/
Stars: ✭ 23 (-47.73%)
L0Learn: Efficient Algorithms for L0 Regularized Learning
Stars: ✭ 74 (+68.18%)
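L0-regularized learning constrains a model to at most k nonzero coefficients; a classic algorithmic building block is iterative hard thresholding (IHT): take a gradient step, then keep only the k largest-magnitude entries. A single-step sketch in plain Python (illustrative; L0Learn itself uses coordinate descent with local combinatorial search):

```python
def iht_step(w, grad, lr, k):
    """One iterative-hard-thresholding step for L0-constrained
    optimization: gradient descent update, then project onto the set
    of k-sparse vectors by zeroing all but the k largest entries."""
    w = [wi - lr * gi for wi, gi in zip(w, grad)]
    order = sorted(range(len(w)), key=lambda i: abs(w[i]), reverse=True)
    keep = set(order[:k])
    return [wi if i in keep else 0.0 for i, wi in enumerate(w)]

# With k=1 and a zero gradient, only the largest coefficient survives.
w = iht_step([0.5, 0.2, -0.1], [0.0, 0.0, 0.0], 0.1, 1)
```

Unlike the L1 (lasso) relaxation, the hard-thresholding projection enforces exact sparsity at every iterate, at the cost of a nonconvex problem.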
traj-pred-irl: Official implementation of "Regularizing neural networks for future trajectory prediction via IRL framework"
Stars: ✭ 23 (-47.73%)