
SforAiDl / KD_Lib

License: MIT
A PyTorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization.

Programming Languages

python

Projects that are alternatives of or similar to KD_Lib

Micronet
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) with high-bit (>2b) methods (DoReFa; Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference) and low-bit (≤2b)/ternary/binary methods (TWN/BNN/XNOR-Net), plus post-training quantization (PTQ, 8-bit via TensorRT); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.
Stars: ✭ 1,232 (+612.14%)
Mutual labels:  quantization, model-compression, pruning
Paddleslim
PaddleSlim is an open-source library for deep model compression and architecture search.
Stars: ✭ 677 (+291.33%)
Mutual labels:  quantization, model-compression, pruning
ATMC
[NeurIPS'2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”
Stars: ✭ 41 (-76.3%)
Mutual labels:  pruning, quantization, model-compression
Model Optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
Stars: ✭ 992 (+473.41%)
Mutual labels:  quantization, model-compression, pruning
Awesome Ai Infrastructures
Infrastructures™ for Machine Learning Training/Inference in Production.
Stars: ✭ 223 (+28.9%)
Mutual labels:  quantization, model-compression, pruning
torch-model-compression
A toolkit for automated analysis and modification of PyTorch model structures, including a model compression algorithm library built on automatic model structure analysis.
Stars: ✭ 126 (-27.17%)
Mutual labels:  pruning, quantization, model-compression
Awesome Ml Model Compression
Awesome machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (-4.05%)
Mutual labels:  quantization, model-compression, pruning
Awesome Emdl
Embedded and mobile deep learning research resources
Stars: ✭ 554 (+220.23%)
Mutual labels:  quantization, pruning
Distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
Stars: ✭ 3,760 (+2073.41%)
Mutual labels:  quantization, pruning
Awesome Automl And Lightweight Models
A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hyperparameter Optimization, 5.) Automated Feature Engineering.
Stars: ✭ 691 (+299.42%)
Mutual labels:  quantization, model-compression
Ntagger
reference pytorch code for named entity tagging
Stars: ✭ 58 (-66.47%)
Mutual labels:  quantization, pruning
Aimet
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Stars: ✭ 453 (+161.85%)
Mutual labels:  quantization, pruning
Genrl
A PyTorch reinforcement learning library for generalizable and reproducible algorithm implementations with an aim to improve accessibility in RL
Stars: ✭ 356 (+105.78%)
Mutual labels:  data-science, benchmarking
Filter Pruning Geometric Median
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral)
Stars: ✭ 338 (+95.38%)
Mutual labels:  model-compression, pruning
Awesome Pruning
A curated list of neural network pruning resources.
Stars: ✭ 1,017 (+487.86%)
Mutual labels:  model-compression, pruning
Openml R
R package to interface with OpenML
Stars: ✭ 81 (-53.18%)
Mutual labels:  data-science, benchmarking
Tf2
An Open Source Deep Learning Inference Engine Based on FPGA
Stars: ✭ 113 (-34.68%)
Mutual labels:  quantization, model-compression
Hawq
Quantization library for PyTorch. Support low-precision and mixed-precision quantization, with hardware implementation through TVM.
Stars: ✭ 108 (-37.57%)
Mutual labels:  quantization, model-compression
Pretrained Language Model
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
Stars: ✭ 2,033 (+1075.14%)
Mutual labels:  model-compression, quantization
sparsify
Easy-to-use UI for automatically sparsifying neural networks and creating sparsification recipes for better inference performance and a smaller footprint
Stars: ✭ 138 (-20.23%)
Mutual labels:  pruning, quantization

======
KD_Lib
======

.. image:: https://travis-ci.com/SforAiDl/KD_Lib.svg?branch=master
   :target: https://travis-ci.com/SforAiDl/KD_Lib

.. image:: https://readthedocs.org/projects/kd-lib/badge/?version=latest
   :target: https://kd-lib.readthedocs.io/en/latest/?badge=latest
   :alt: Documentation Status

A PyTorch library to easily facilitate knowledge distillation for custom deep learning models

Installation
============

Stable release
--------------

KD_Lib is compatible with Python 3.6 or later and also depends on PyTorch. The easiest way to install KD_Lib is with pip, Python's preferred package installer.

.. code-block:: console

$ pip install KD-Lib

KD_Lib is an active project and routinely publishes new releases. To upgrade KD_Lib to the latest version, use pip as follows:

.. code-block:: console

$ pip install -U KD-Lib

Build from source
-----------------

If you intend to install the latest unreleased version of the library (i.e., from source), you can simply do:

.. code-block:: console

$ git clone https://github.com/SforAiDl/KD_Lib.git
$ cd KD_Lib
$ python setup.py install
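
If you plan to work on the source itself, an editable install should also work; this relies on pip's standard ``-e`` flag rather than anything specific to KD_Lib:

.. code-block:: console

$ pip install -e .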

Usage
=====

To implement the most basic version of knowledge distillation from `Distilling the Knowledge in a Neural Network <https://arxiv.org/abs/1503.02531>`_ and plot losses:

.. code-block:: python

import torch
import torch.optim as optim
from torchvision import datasets, transforms
from KD_Lib import VanillaKD

# This part is where you define your datasets, dataloaders, models and optimizers

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST(
        "mnist_data",
        train=True,
        download=True,
        transform=transforms.Compose(
            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
        ),
    ),
    batch_size=32,
    shuffle=True,
)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST(
        "mnist_data",
        train=False,
        transform=transforms.Compose(
            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
        ),
    ),
    batch_size=32,
    shuffle=True,
)

teacher_model = <your model>
student_model = <your model>

teacher_optimizer = optim.SGD(teacher_model.parameters(), 0.01)
student_optimizer = optim.SGD(student_model.parameters(), 0.01)

# Now, this is where KD_Lib comes into the picture

distiller = VanillaKD(teacher_model, student_model, train_loader, test_loader, 
                      teacher_optimizer, student_optimizer)  
distiller.train_teacher(epochs=5, plot_losses=True, save_model=True)    # Train the teacher network
distiller.train_student(epochs=5, plot_losses=True, save_model=True)    # Train the student network
distiller.evaluate(teacher=False)                                       # Evaluate the student network
distiller.get_parameters()                                              # A utility function to get the number of parameters in the teacher and the student network 
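
The ``<your model>`` placeholders above can be any PyTorch modules that produce class logits. As a minimal, purely illustrative sketch (the architectures below are assumptions, not models shipped with KD_Lib), simple fully-connected networks for MNIST could be defined as:

.. code-block:: python

import torch.nn as nn

# Illustrative stand-ins for <your model>; any nn.Module that maps a
# batch of MNIST images to 10-way logits will do.
teacher_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 1200),
    nn.ReLU(),
    nn.Linear(1200, 10),
)
student_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)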

To train a cohort of student models (two in the example below) in an online fashion using the framework in `Deep Mutual Learning <https://arxiv.org/abs/1706.00384>`_ and log training details to TensorBoard:

.. code-block:: python

import torch
import torch.optim as optim
from torchvision import datasets, transforms
from KD_Lib import DML
from KD_Lib import ResNet18, ResNet50                                   # To use models packaged in KD_Lib

# This part is where you define your datasets, dataloaders, models and optimizers

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST(
        "mnist_data",
        train=True,
        download=True,
        transform=transforms.Compose(
            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
        ),
    ),
    batch_size=32,
    shuffle=True,
)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST(
        "mnist_data",
        train=False,
        transform=transforms.Compose(
            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
        ),
    ),
    batch_size=32,
    shuffle=True,
)

student_params = [4, 4, 4, 4, 4]
student_model_1 = ResNet50(student_params, 1, 10)
student_model_2 = ResNet18(student_params, 1, 10)

student_cohort = [student_model_1, student_model_2]

student_optimizer_1 = optim.SGD(student_model_1.parameters(), 0.01)
student_optimizer_2 = optim.SGD(student_model_2.parameters(), 0.01)

student_optimizers = [student_optimizer_1, student_optimizer_2]

# Now, this is where KD_Lib comes into the picture 

distiller = DML(student_cohort, train_loader, test_loader, student_optimizers)

distiller.train_students(epochs=5, log=True, logdir="./Logs")
distiller.evaluate()
distiller.get_parameters()
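
With ``log=True`` as above, training details are written to the ``./Logs`` directory; assuming TensorBoard is installed, the resulting curves can be inspected with its standard command-line interface:

.. code-block:: console

$ tensorboard --logdir ./Logs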

Currently implemented works
===========================

Some benchmark results can be found in the `logs <./logs.rst>`_ file.

.. list-table::
   :header-rows: 1

   * - Paper
     - Link
     - Repository (KD_Lib/)
   * - Distilling the Knowledge in a Neural Network
     - https://arxiv.org/abs/1503.02531
     - KD/vision/vanilla
   * - Improved Knowledge Distillation via Teacher Assistant
     - https://arxiv.org/abs/1902.03393
     - KD/vision/TAKD
   * - Relational Knowledge Distillation
     - https://arxiv.org/abs/1904.05068
     - KD/vision/RKD
   * - Distilling Knowledge from Noisy Teachers
     - https://arxiv.org/abs/1610.09650
     - KD/vision/noisy
   * - Paying More Attention To The Attention
     - https://arxiv.org/abs/1612.03928
     - KD/vision/attention
   * - Revisit Knowledge Distillation: a Teacher-free Framework
     - https://arxiv.org/abs/1909.11723
     - KD/vision/teacher_free
   * - Mean Teachers are Better Role Models
     - https://arxiv.org/abs/1703.01780
     - KD/vision/mean_teacher
   * - Knowledge Distillation via Route Constrained Optimization
     - https://arxiv.org/abs/1904.09149
     - KD/vision/RCO
   * - Born Again Neural Networks
     - https://arxiv.org/abs/1805.04770
     - KD/vision/BANN
   * - Preparing Lessons: Improve Knowledge Distillation with Better Supervision
     - https://arxiv.org/abs/1911.07471
     - KD/vision/KA
   * - Improving Generalization Robustness with Noisy Collaboration in Knowledge Distillation
     - https://arxiv.org/abs/1910.05057
     - KD/vision/noisy
   * - Distilling Task-Specific Knowledge from BERT into Simple Neural Networks
     - https://arxiv.org/abs/1903.12136
     - KD/text/BERT2LSTM
   * - Deep Mutual Learning
     - https://arxiv.org/abs/1706.00384
     - KD/vision/DML
   * - The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
     - https://arxiv.org/abs/1803.03635
     - Pruning/lottery_tickets
   * - Regularizing Class-wise Predictions via Self-knowledge Distillation
     - https://arxiv.org/abs/2003.13964
     - KD/vision/CSDK

Please cite our pre-print if you find KD_Lib useful in any way :)

.. code-block:: bibtex

@misc{shah2020kdlib,
  title={KD-Lib: A PyTorch library for Knowledge Distillation, Pruning and Quantization}, 
  author={Het Shah and Avishree Khare and Neelay Shah and Khizir Siddiqui},
  year={2020},
  eprint={2011.14691},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}