intel / neural-compressor

License: Apache-2.0
Intel® Neural Compressor (formerly known as Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression technologies, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks, in pursuit of optimal inference performance.


Projects that are alternatives of or similar to neural-compressor

torch-model-compression
An automated toolset for analyzing and modifying the structure of PyTorch models, including a model compression algorithm library that automatically analyzes model structures.
Stars: ✭ 126 (-81.08%)
Mutual labels:  pruning, quantization, quantization-aware-training
Ntagger
Reference PyTorch code for named entity tagging
Stars: ✭ 58 (-91.29%)
Mutual labels:  pruning, quantization
Model Optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
Stars: ✭ 992 (+48.95%)
Mutual labels:  pruning, quantization
sparsezoo
Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
Stars: ✭ 264 (-60.36%)
Mutual labels:  pruning, quantization
Aimet
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Stars: ✭ 453 (-31.98%)
Mutual labels:  pruning, quantization
Awesome Emdl
Embedded and mobile deep learning research resources
Stars: ✭ 554 (-16.82%)
Mutual labels:  pruning, quantization
Awesome Edge Machine Learning
A curated list of awesome edge machine learning resources, including research papers, inference engines, challenges, books, meetups and others.
Stars: ✭ 139 (-79.13%)
Mutual labels:  pruning, quantization
SSD-Pruning-and-quantization
Pruning and quantization for SSD. Model compression.
Stars: ✭ 19 (-97.15%)
Mutual labels:  pruning, quantization
Awesome Ml Model Compression
Awesome machine learning model compression research papers, tools, and learning material.
Stars: ✭ 166 (-75.08%)
Mutual labels:  pruning, quantization
Kd lib
A PyTorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization.
Stars: ✭ 173 (-74.02%)
Mutual labels:  pruning, quantization
Nncf
PyTorch*-based Neural Network Compression Framework for enhanced OpenVINO™ inference
Stars: ✭ 218 (-67.27%)
Mutual labels:  pruning, quantization
deep-compression
Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626
Stars: ✭ 156 (-76.58%)
Mutual labels:  sparsity, pruning
Distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
Stars: ✭ 3,760 (+464.56%)
Mutual labels:  pruning, quantization
Paddleslim
PaddleSlim is an open-source library for deep model compression and architecture search.
Stars: ✭ 677 (+1.65%)
Mutual labels:  pruning, quantization
sparsify
Easy-to-use UI for automatically sparsifying neural networks and creating sparsification recipes for better inference performance and a smaller footprint
Stars: ✭ 138 (-79.28%)
Mutual labels:  pruning, quantization
Micronet
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa; Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference) and low-bit (≤2b)/ternary and binary (TWN/BNN/XNOR-Net), plus post-training quantization (PTQ), 8-bit (TensorRT); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.
Stars: ✭ 1,232 (+84.98%)
Mutual labels:  pruning, quantization
bert-squeeze
🛠️ Tools for Transformers compression using PyTorch Lightning ⚡
Stars: ✭ 56 (-91.59%)
Mutual labels:  pruning, quantization
mmrazor
OpenMMLab Model Compression Toolbox and Benchmark.
Stars: ✭ 644 (-3.3%)
Mutual labels:  pruning, knowledge-distillation
Model compression
PyTorch Model Compression
Stars: ✭ 150 (-77.48%)
Mutual labels:  pruning, quantization
Awesome Ai Infrastructures
Infrastructures™ for Machine Learning Training/Inference in Production.
Stars: ✭ 223 (-66.52%)
Mutual labels:  pruning, quantization

Intel® Neural Compressor

An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, ONNX Runtime, and MXNet)

Intel® Neural Compressor, formerly known as Intel® Low Precision Optimization Tool, is an open-source Python library that runs on Intel CPUs and GPUs and delivers unified interfaces across multiple deep learning frameworks for popular network compression technologies such as quantization, pruning, and knowledge distillation. The tool supports automatic, accuracy-driven tuning strategies to help users quickly find the best quantized model. It also implements several weight-pruning algorithms that generate a pruned model meeting a predefined sparsity goal, and supports knowledge distillation to transfer knowledge from a teacher model to a student model. Intel® Neural Compressor is a critical AI software component of the Intel® oneAPI AI Analytics Toolkit.
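The pruning flow can be sketched in the same experimental-API style as the Getting Started example below. This is a minimal, hypothetical illustration rather than the library's definitive recipe: the conf.yaml file, model, and train_func are placeholders, and exact attribute names vary across releases.

# A hypothetical pruning sketch in the experimental-API style.
# 'conf.yaml' would define the target sparsity and pruning schedule;
# model and train_func are user-supplied placeholders.
from neural_compressor.experimental import Pruning
prune = Pruning('conf.yaml')
prune.model = model            # a trained framework model, e.g. a torch.nn.Module
prune.train_func = train_func  # training loop that drives the pruning callbacks
pruned_model = prune.fit()     # returns the pruned model at the target sparsity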

Visit the Intel® Neural Compressor online document website at: https://intel.github.io/neural-compressor.

Installation

Prerequisites

Python version: 3.7, 3.8, 3.9, 3.10

Install on Linux

  • Release binary install
    # install stable basic version from pip
    pip install neural-compressor
    # Or install stable full version from pip (including GUI)
    pip install neural-compressor-full
  • Nightly binary install
    git clone https://github.com/intel/neural-compressor.git
    cd neural-compressor
    pip install -r requirements.txt
    # install nightly basic version from pip
    pip install -i https://test.pypi.org/simple/ neural-compressor
    # Or install nightly full version from pip (including GUI)
    pip install -i https://test.pypi.org/simple/ neural-compressor-full

More installation methods can be found in the Installation Guide. Please check out our FAQ for more details.
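After installing, a quick smoke test (assuming the package exposes a __version__ attribute, as recent releases do) confirms the library is importable:

# Verify the installation and report the installed version
python -c "import neural_compressor; print(neural_compressor.__version__)"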

Getting Started

Quantization with Python API

# A TensorFlow Example
# Shell: install TensorFlow and download an FP32 model
pip install tensorflow
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_6/mobilenet_v1_1.0_224_frozen.pb

# Python: quantize the model using a dummy calibration dataset
import tensorflow as tf
from neural_compressor.experimental import Quantization, common
quantizer = Quantization()
quantizer.model = './mobilenet_v1_1.0_224_frozen.pb'
dataset = quantizer.dataset('dummy', shape=(1, 224, 224, 3))
quantizer.calib_dataloader = common.DataLoader(dataset)
quantizer.fit()
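The example above uses only a dummy calibration dataset; to make the tuning accuracy-driven, the quantizer can also be given an evaluation callback, with real calibration and validation data replacing the dummy dataset. A minimal sketch, assuming the experimental Quantization object accepts an eval_func attribute (as contemporaneous releases do) and with evaluate_top1 as a hypothetical user-supplied helper:

# Accuracy-driven tuning sketch; evaluate_top1 is a placeholder.
from neural_compressor.experimental import Quantization, common

def evaluate_top1(model):
    # Placeholder: run the candidate model on a labeled validation
    # set and return top-1 accuracy as a float in [0, 1].
    return 1.0

quantizer = Quantization()
quantizer.model = './mobilenet_v1_1.0_224_frozen.pb'
dataset = quantizer.dataset('dummy', shape=(1, 224, 224, 3))
quantizer.calib_dataloader = common.DataLoader(dataset)
quantizer.eval_func = evaluate_top1  # tuner keeps only models meeting the accuracy criteria
quantized_model = quantizer.fit()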

Quantization with JupyterLab Extension

Search for jupyter-lab-neural-compressor in the Extension Manager in JupyterLab and install with one click:

Quantization with GUI

# An ONNX Example
pip install onnx==1.12.0 onnxruntime==1.12.1 onnxruntime-extensions
# Prepare fp32 model
wget https://github.com/onnx/models/raw/main/vision/classification/resnet/model/resnet50-v1-12.onnx
# Start GUI
inc_bench

Architecture

System Requirements

Validated Hardware Environment

Intel® Neural Compressor supports CPUs based on Intel 64 architecture or compatible processors:

  • Intel Xeon Scalable processor (formerly Skylake, Cascade Lake, Cooper Lake, and Ice Lake)
  • Future Intel Xeon Scalable processor (code name Sapphire Rapids)

Intel® Neural Compressor supports GPUs built on Intel's Xe architecture.

Intel® Neural Compressor quantized ONNX models support multiple hardware vendors through ONNX Runtime:

  • Intel CPU, AMD/ARM CPU, and NVIDIA GPU. Please refer to the validated model list.

Validated Software Environment

  • OS version: CentOS 8.4, Ubuntu 20.04
  • Python version: 3.7, 3.8, 3.9, 3.10
Framework                        Version
TensorFlow                       2.10.0, 2.9.1, 2.8.2
Intel TensorFlow                 2.10.0, 2.9.1, 2.8.0
PyTorch                          1.12.1+cpu, 1.11.0+cpu, 1.10.0+cpu
Intel® Extension for PyTorch*    1.12.0, 1.11.0, 1.10.0
ONNX Runtime                     1.12.1, 1.11.0, 1.10.0
MXNet                            1.8.0, 1.7.0, 1.6.0

Note: Set the environment variable TF_ENABLE_ONEDNN_OPTS=1 to enable oneDNN optimizations if you are using TensorFlow v2.6 to v2.8. oneDNN is the default for TensorFlow v2.9.
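For example, on Linux the flag can be set in the current shell before launching TensorFlow:

# Needed only for TensorFlow v2.6 to v2.8; v2.9 enables oneDNN by default
export TF_ENABLE_ONEDNN_OPTS=1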

Validated Models

Intel® Neural Compressor has validated 420+ examples for quantization, with a geometric-mean performance speedup of 2.2x and up to 4.2x on VNNI, while minimizing accuracy loss. Over 30 pruning and knowledge distillation samples are also available. More details on the validated models are available here.

Documentation

Overview
  • Architecture
  • Examples
  • GUI
  • APIs
  • Intel oneAPI AI Analytics Toolkit
  • AI and Analytics Samples

Basic API
  • Transform
  • Dataset
  • Metric
  • Objective

Deep Dive
  • Quantization
  • Pruning (Sparsity)
  • Knowledge Distillation
  • Mixed Precision
  • Orchestration
  • Benchmarking
  • Distributed Training
  • Model Conversion
  • TensorBoard
  • Distillation for Quantization
  • Neural Coder

Advanced Topics
  • Adaptor
  • Strategy
  • Reference Example

Selected Publications/Events

View our full publication list.

Additional Content

Hiring

We are actively hiring. Send your resume to [email protected] if you are interested in model compression techniques.
