Awesome Model Quantization

This repo collects papers, docs, and code about model quantization for anyone who wants to do research on the topic. We are continuously improving the project. Pull requests for works (papers, repositories) that are missing from the repo are welcome.

Table of Contents

  • Survey_of_BNN
  • Papers
  • Codes_and_Docs
  • Our_Team

Survey_of_BNN

Our survey paper Binary Neural Networks: A Survey, published in Pattern Recognition, comprehensively reviews recent progress on binary neural networks. For details, please refer to:

Binary Neural Networks: A Survey [Paper] [Blog]

Haotong Qin, Ruihao Gong, Xianglong Liu*, Xiao Bai, Jingkuan Song, and Nicu Sebe.

Bibtex
@article{Qin:pr20_bnn_survey,
	title = "Binary neural networks: A survey",
	author = "Haotong Qin and Ruihao Gong and Xianglong Liu and Xiao Bai and Jingkuan Song and Nicu Sebe",
	journal = "Pattern Recognition",
	volume = "105",
	pages = "107281",
	year = "2020"
}
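For readers new to the area, the central mechanism behind most binarization papers listed below can be sketched in a few lines: weights are binarized with the sign function in the forward pass, while the backward pass uses a straight-through estimator (STE) so that gradients still reach the latent real-valued weights. The following PyTorch snippet is a minimal sketch of that idea (our own illustration, not code from the survey):

import torch

class BinarizeSTE(torch.autograd.Function):
    # Sign binarization with a straight-through estimator (STE).
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)  # forward: map weights to {-1, +1} (sign(0) = 0 is ignored in this sketch)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # STE: pass gradients through unchanged, clipped to the region |w| <= 1
        return grad_out * (w.abs() <= 1).float()

w = torch.randn(4, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()  # gradients flow back to the latent real-valued weights
print(w.grad)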

[Overview figure of the survey]

Papers

Keywords: low-bit: Low-bit Quantization | binarization: Binarization | hardware: Hardware Deployment | nlp: Based on Natural Language Processing Models | other: Other Related Methods

Statistics: 🔥 highly cited | ⭐️ code is available and stars > 50
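To make the low-bit keyword concrete: many of the works below start from uniform affine quantization, which maps a real-valued tensor to k-bit integers via a scale and zero point and then dequantizes for computation. The sketch below is a generic illustration of that baseline scheme, not the method of any particular paper:

import torch

def uniform_quantize(x, k=8):
    # Quantize x to k-bit integers with a per-tensor scale/zero point, then dequantize.
    qmin, qmax = 0, 2 ** k - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = torch.round(-x.min() / scale)
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale  # low-bit approximation of x

x = torch.randn(1000)
for k in (2, 4, 8):
    print(k, (x - uniform_quantize(x, k)).abs().max())  # error shrinks as k grows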


2021

  • [CVPR] Diversifying Sample Generation for Accurate Data-Free Quantization. [low-bit]
  • [ICLR] BiPointNet: Binary Neural Network for Point Clouds. [binarization] [torch]
  • [ICLR] Reducing the Computational Cost of Deep Generative Models with Binary Neural Networks. [binarization]
  • [ICLR] High-Capacity Expert Binary Networks. [binarization]
  • [ICLR] Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network. [binarization]
  • [ICLR] BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction. [low-bit]
  • [ICLR] Neural gradients are near-lognormal: improved quantized and sparse training. [low-bit]
  • [ICLR] Training with Quantization Noise for Extreme Model Compression. [low-bit]
  • [ICLR] Incremental few-shot learning via vector quantization in deep embedded space. [low-bit]
  • [ICLR] Degree-Quant: Quantization-Aware Training for Graph Neural Networks. [low-bit]
  • [ICLR] BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization. [low-bit]
  • [ICLR] Simple Augmentation Goes a Long Way: ADRL for DNN Quantization. [low-bit]
  • [ICLR] Sparse Quantized Spectral Clustering. [low-bit]
  • [ICLR] WrapNet: Neural Net Inference with Ultra-Low-Resolution Arithmetic. [low-bit]
  • [AAAI] Distribution Adaptive INT8 Quantization for Training CNNs. [low-bit]
  • [AAAI] Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks. [low-bit]
  • [AAAI] Optimizing Information Theory Based Bitwise Bottlenecks for Efficient Mixed-Precision Activation Quantization. [low-bit]
  • [AAAI] OPQ: Compressing Deep Neural Networks with One-shot Pruning-Quantization. [low-bit]
  • [AAAI] Scalable Verification of Quantized Neural Networks. [low-bit]
  • [AAAI] Uncertainty Quantification in CNN through the Bootstrap of Convex Neural Networks. [low-bit]
  • [AAAI] FracBits: Mixed Precision Quantization via Fractional Bit-Widths. [low-bit]
  • [AAAI] Post-training Quantization with Multiple Points: Mixed Precision without Mixed Precision. [low-bit]
  • [AAAI] Vector Quantized Bayesian Neural Network Inference for Data Streams. [low-bit]
  • [AAAI] TRQ: Ternary Neural Networks with Residual Quantization. [low-bit]
  • [AAAI] Memory and Computation-Efficient Kernel SVM via Binary Embedding and Ternary Coefficients. [binarization]
  • [AAAI] Compressing Deep Convolutional Neural Networks by Stacking Low-Dimensional Binary Convolution Filters. [binarization]
  • [AAAI] Training Binary Neural Network without Batch Normalization for Image Super-Resolution. [binarization]
  • [AAAI] SA-BNN: State-Aware Binary Neural Network. [binarization]

2020

  • [ACL] End to End Binarized Neural Networks for Text Classification. [binarization]
  • [AAAI] [72🔥] Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT. [low-bit] [nlp]
  • [AAAI] Sparsity-Inducing Binarized Neural Networks. [binarization]
  • [AAAI] Towards Accurate Low Bit-Width Quantization with Multiple Phase Adaptations.
  • [COOL CHIPS] A Novel In-DRAM Accelerator Architecture for Binary Neural Network. [hardware]
  • [CoRR] Training Binary Neural Networks using the Bayesian Learning Rule. [binarization]
  • [CVPR] [47🔥] GhostNet: More Features from Cheap Operations. [low-bit] [tensorflow & torch] [1.2k⭐️]
  • [CVPR] Forward and Backward Information Retention for Accurate Binary Neural Networks. [binarization] [torch] [105⭐️]
  • [CVPR] APQ: Joint Search for Network Architecture, Pruning and Quantization Policy. [low-bit] [torch] [76⭐️]
  • [CVPR] Rotation Consistent Margin Loss for Efficient Low-Bit Face Recognition. [low-bit]
  • [CVPR] BiDet: An Efficient Binarized Object Detector. [binarization] [torch] [112⭐️]
  • [CVPR] Fixed-Point Back-Propagation Training. [video] [low-bit]
  • [CVPR] Low-Bit Quantization Needs Good Distribution. [low-bit]
  • [DATE] BNNsplit: Binarized Neural Networks for embedded distributed FPGA-based computing systems. [binarization]
  • [DATE] PhoneBit: Efficient GPU-Accelerated Binary Neural Network Inference Engine for Mobile Phones. [binarization] [hardware]
  • [DATE] OrthrusPE: Runtime Reconfigurable Processing Elements for Binary Neural Networks. [binarization]
  • [ECCV] Learning Architectures for Binary Networks. [binarization] [torch]
  • [ECCV] PROFIT: A Novel Training Method for sub-4-bit MobileNet Models. [low-bit]
  • [ECCV] ProxyBNN: Learning Binarized Neural Networks via Proxy Matrices. [binarization]
  • [ECCV] ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions. [binarization] [torch] [108⭐️]
  • [EMNLP] TernaryBERT: Distillation-aware Ultra-low Bit BERT. [low-bit] [nlp]
  • [EMNLP] Fully Quantized Transformer for Machine Translation. [low-bit] [nlp]
  • [ICET] An Energy-Efficient Bagged Binary Neural Network Accelerator. [binarization] [hardware]
  • [ICASSP] Balanced Binary Neural Networks with Gated Residual. [binarization]
  • [ICML] Training Binary Neural Networks through Learning with Noisy Supervision. [binarization]
  • [ICLR] DMS: Differentiable Dimension Search for Binary Neural Networks. [binarization]
  • [ICLR] [19🔥] Training Binary Neural Networks with Real-to-Binary Convolutions. [binarization] [code is coming] [re-implement]
  • [ICLR] BinaryDuo: Reducing Gradient Mismatch in Binary Activation Network by Coupling Binary Activations. [binarization] [torch]
  • [IJCV] Binarized Neural Architecture Search for Efficient Object Recognition. [binarization]
  • [IJCAI] CP-NAS: Child-Parent Neural Architecture Search for Binary Neural Networks. [binarization]
  • [IJCAI] Towards Fully 8-bit Integer Inference for the Transformer Model. [low-bit] [nlp]
  • [IJCAI] Soft Threshold Ternary Networks. [low-bit]
  • [IJCAI] Overflow Aware Quantization: Accelerating Neural Network Inference by Low-bit Multiply-Accumulate Operations. [low-bit]
  • [IJCAI] Direct Quantization for Training Highly Accurate Low Bit-width Deep Neural Networks. [low-bit]
  • [IJCAI] Fully Nested Neural Network for Adaptive Compression and Quantization. [low-bit]
  • [ISCAS] MuBiNN: Multi-Level Binarized Recurrent Neural Network for EEG Signal Classification. [binarization]
  • [ISQED] BNN Pruning: Pruning Binary Neural Network Guided by Weight Flipping Frequency. [binarization] [torch]
  • [MICRO] GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference. [low-bit] [nlp]
  • [MLST] Compressing deep neural networks on FPGAs to binary and ternary precision with HLS4ML. [hardware] [binarization] [low-bit]
  • [NeurIPS] Rotated Binary Neural Network. [binarization] [torch]
  • [NeurIPS] Searching for Low-Bit Weights in Quantized Neural Networks. [low-bit] [torch]
  • [NeurIPS] Universally Quantized Neural Compression. [low-bit]
  • [NeurIPS] Efficient Exact Verification of Binarized Neural Networks. [binarization] [torch]
  • [NeurIPS] Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks. [binarization] [code]
  • [NeurIPS] HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks. [low-bit]
  • [NeurIPS] Bayesian Bits: Unifying Quantization and Pruning. [low-bit]
  • [NeurIPS] Robust Quantization: One Model to Rule Them All. [low-bit]
  • [NeurIPS] Closing the Dequantization Gap: PixelCNN as a Single-Layer Flow. [low-bit] [torch]
  • [NeurIPS] Adaptive Gradient Quantization for Data-Parallel SGD. [low-bit] [torch]
  • [NeurIPS] FleXOR: Trainable Fractional Quantization. [low-bit]
  • [NeurIPS] Position-based Scaled Gradient for Model Quantization and Pruning. [low-bit] [torch]
  • [NN] Training high-performance and large-scale deep neural networks with full 8-bit integers. [low-bit]
  • [Neurocomputing] Eye localization based on weight binarization cascade convolution neural network. [binarization]
  • [PR] [23🔥] Binary neural networks: A survey. [binarization]
  • [PR Letters] Controlling information capacity of binary neural network. [binarization]
  • [SysML] Riptide: Fast End-to-End Binarized Neural Networks. [low-bit] [tensorflow] [129⭐️]
  • [TVLSI] Phoenix: A Low-Precision Floating-Point Quantization Oriented Architecture for Convolutional Neural Networks. [low-bit]
  • [WACV] MoBiNet: A Mobile Binary Network for Image Classification. [binarization]
  • [IEEE Access] An Energy-Efficient and High Throughput in-Memory Computing Bit-Cell With Excellent Robustness Under Process Variations for Binary Neural Network. [binarization] [hardware]
  • [IEEE Trans. Magn] SIMBA: A Skyrmionic In-Memory Binary Neural Network Accelerator. [binarization]
  • [IEEE TCS.II] A Resource-Efficient Inference Accelerator for Binary Convolutional Neural Networks. [hardware]
  • [IEEE TCS.I] IMAC: In-Memory Multi-Bit Multiplication and ACcumulation in 6T SRAM Array. [low-bit]
  • [IEEE Trans. Electron Devices] Design of High Robustness BNN Inference Accelerator Based on Binary Memristors. [binarization] [hardware]
  • [arxiv] Training with Quantization Noise for Extreme Model Compression. [low-bit] [torch]
  • [arxiv] Binarized Graph Neural Network. [binarization]
  • [arxiv] How Does Batch Normalization Help Binary Training? [binarization]
  • [arxiv] Distillation Guided Residual Learning for Binary Convolutional Neural Networks. [binarization]
  • [arxiv] Accelerating Binarized Neural Networks via Bit-Tensor-Cores in Turing GPUs. [binarization] [code]
  • [arxiv] MeliusNet: Can Binary Neural Networks Achieve MobileNet-level Accuracy? [binarization] [code] [192⭐️]
  • [arxiv] RPR: Random Partition Relaxation for Training Binary and Ternary Weight Neural Networks. [binarization] [low-bit]
  • [paper] Towards Lossless Binary Convolutional Neural Networks Using Piecewise Approximation. [binarization]
  • [arxiv] Understanding Learning Dynamics of Binary Neural Networks via Information Bottleneck. [binarization]
  • [arxiv] BinaryBERT: Pushing the Limit of BERT Quantization. [binarization] [nlp]

2019

  • [AAAI] Efficient Quantization for Neural Networks with Binary Weights and Low Bitwidth Activations. [low-bit] [binarization]
  • [AAAI] [31🔥] Projection Convolutional Neural Networks for 1-bit CNNs via Discrete Back Propagation. [binarization]
  • [APCCAS] Using Neuroevolved Binary Neural Networks to solve reinforcement learning environments. [binarization] [code]
  • [BMVC] [32🔥] XNOR-Net++: Improved Binary Neural Networks. [binarization]
  • [BMVC] Accurate and Compact Convolutional Neural Networks with Trained Binarization. [binarization]
  • [CoRR] RBCN: Rectified Binary Convolutional Networks for Enhancing the Performance of 1-bit DCNNs. [binarization]
  • [CoRR] TentacleNet: A Pseudo-Ensemble Template for Accurate Binary Convolutional Neural Networks. [binarization]
  • [CoRR] Improved training of binary networks for human pose estimation and image recognition. [binarization]
  • [CoRR] Binarized Neural Architecture Search. [binarization]
  • [CoRR] Matrix and tensor decompositions for training binary neural networks. [binarization]
  • [CoRR] Back to Simplicity: How to Train Accurate BNNs from Scratch? [binarization] [code] [193⭐️]
  • [CVPR] [53🔥] Structured Binary Neural Networks for Accurate Image Classification and Semantic Segmentation. [binarization]
  • [CVPR] SeerNet: Predicting Convolutional Neural Network Feature-Map Sparsity Through Low-Bit Quantization. [low-bit]
  • [CVPR] [218🔥] HAQ: Hardware-Aware Automated Quantization with Mixed Precision. [low-bit] [hardware] [torch] [233⭐️]
  • [CVPR] [48🔥] Quantization Networks. [binarization] [torch] [82⭐️]
  • [CVPR] Fully Quantized Network for Object Detection. [low-bit]
  • [CVPR] Learning Channel-Wise Interactions for Binary Convolutional Neural Networks. [binarization]
  • [CVPR] [31🔥] Circulant Binary Convolutional Networks: Enhancing the Performance of 1-bit DCNNs with Circulant Back Propagation. [binarization]
  • [CVPR] [36🔥] Regularizing Activation Distribution for Training Binarized Deep Networks. [binarization]
  • [CVPR] A Main/Subsidiary Network Framework for Simplifying Binary Neural Network. [binarization]
  • [CVPR] Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit? [binarization]
  • [FPGA] Towards Fast and Energy-Efficient Binarized Neural Network Inference on FPGA. [binarization] [hardware]
  • [GLSVLSI] Binarized Depthwise Separable Neural Network for Object Tracking in FPGA. [binarization] [hardware]
  • [ICCV] [55🔥] Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks. [low-bit]
  • [ICCV] Bayesian optimized 1-bit cnns. [binarization]
  • [ICCV] Searching for Accurate Binary Neural Architectures. [binarization]
  • [ICML] Efficient 8-Bit Quantization of Transformer Neural Machine Language Translation Model. [low-bit] [nlp]
  • [ICLR] [37🔥] ProxQuant: Quantized Neural Networks via Proximal Operators. [binarization] [low-bit] [torch]
  • [ICLR] An Empirical study of Binary Neural Networks' Optimisation. [binarization]
  • [ICIP] Training Accurate Binary Neural Networks from Scratch. [binarization] [code] [192⭐️]
  • [ICUS] Balanced Circulant Binary Convolutional Networks. [binarization]
  • [IJCAI] Binarized Neural Networks for Resource-Efficient Hashing with Minimizing Quantization Loss. [binarization]
  • [IJCAI] Binarized Collaborative Filtering with Distilling Graph Convolutional Networks. [binarization]
  • [ISOCC] Dual Path Binary Neural Network. [binarization]
  • [IEEE J. Emerg. Sel. Topics Circuits Syst.] Hyperdrive: A Multi-Chip Systolically Scalable Binary-Weight CNN Inference Engine. [hardware]
  • [IEEE JETC] [128🔥] Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices. [hardware]
  • [IEEE J. Solid-State Circuits] An Energy-Efficient Reconfigurable Processor for Binary-and Ternary-Weight Neural Networks With Flexible Data Bit Width. [binarization] [low-bit]
  • [MDPI Electronics] A Review of Binarized Neural Networks. [binarization]
  • [NeurIPS] MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization. [low-bit] [torch]
  • [NeurIPS] Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization. [binarization] [tensorflow]
  • [NeurIPS] [43🔥] Regularized Binary Network Training. [binarization]
  • [NeurIPS] [44🔥] Q8BERT: Quantized 8Bit BERT. [low-bit] [nlp]
  • [NeurIPS] Fully Quantized Transformer for Improved Translation. [low-bit] [nlp]
  • [RoEduNet] PXNOR: Perturbative Binary Neural Network. [binarization] [code]
  • [SiPS] Knowledge distillation for optimization of quantized deep neural networks. [low-bit]
  • [TMM] [45🔥] Deep Binary Reconstruction for Cross-Modal Hashing. [binarization]
  • [TMM] Compact Hash Code Learning With Binary Deep Neural Network. [binarization]
  • [IEEE TCS.I] Xcel-RAM: Accelerating Binary Neural Networks in High-Throughput SRAM Compute Arrays. [hardware]
  • [IEEE TCS.I] Recursive Binary Neural Network Training Model for Efficient Usage of On-Chip Memory. [binarization]
  • [VLSI-SoC] A Product Engine for Energy-Efficient Execution of Binary Neural Networks Using Resistive Memories. [binarization] [hardware]
  • [paper] [43🔥] BNN+: Improved Binary Network Training. [binarization]
  • [arxiv] Self-Binarizing Networks. [binarization]
  • [arxiv] Towards Unified INT8 Training for Convolutional Neural Network. [low-bit]
  • [arxiv] daBNN: A Super Fast Inference Framework for Binary Neural Networks on ARM devices. [binarization] [hardware] [code]
  • [arxiv] QKD: Quantization-aware Knowledge Distillation. [low-bit]
  • [arxiv] [59🔥] Mixed Precision Quantization of ConvNets via Differentiable Neural Architecture Search. [low-bit]

2018

  • [AAAI] From Hashing to CNNs: Training BinaryWeight Networks via Hashing. [binarization]
  • [AAAI] [136🔥] Extremely Low Bit Neural Network: Squeeze the Last Bit Out with ADMM. [low-bit] [homepage]
  • [CAAI] Fast object detection based on binary deep convolution neural networks. [binarization]
  • [CoRR] LightNN: Filling the Gap between Conventional Deep Neural Networks and Binarized Networks. [binarization]
  • [CoRR] BinaryRelax: A Relaxation Approach For Training Deep Neural Networks With Quantized Weights. [binarization]
  • [CVPR] [63🔥] Two-Step Quantization for Low-bit Neural Networks. [low-bit]
  • [CVPR] Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations. [low-bit]
  • [CVPR] [97🔥] Towards Effective Low-bitwidth Convolutional Neural Networks. [low-bit]
  • [CVPR] Modulated convolutional networks. [binarization]
  • [CVPR] [67🔥] SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks. [low-bit] [code]
  • [CVPR] [630🔥] Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. [low-bit]
  • [ECCV] Training Binary Weight Networks via Semi-Binary Decomposition. [binarization]
  • [ECCV] [47🔥] TBN: Convolutional Neural Network with Ternary Inputs and Binary Weights. [binarization] [low-bit] [torch]
  • [ECCV] [202🔥] LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks. [low-bit] [tensorflow] [188⭐️]
  • [ECCV] [145🔥] Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm. [binarization] [torch] [120⭐️]
  • [FCCM] ReBNet: Residual Binarized Neural Network. [binarization] [tensorflow]
  • [FPL] FBNA: A Fully Binarized Neural Network Accelerator. [hardware]
  • [ICLR] [65🔥] Loss-aware Weight Quantization of Deep Networks. [low-bit] [code]
  • [ICLR] [230🔥] Model compression via distillation and quantization. [low-bit] [torch] [284⭐️]
  • [ICLR] [201🔥] PACT: Parameterized Clipping Activation for Quantized Neural Networks. [low-bit]
  • [ICLR] [168🔥] WRPN: Wide Reduced-Precision Networks. [low-bit]
  • [ICLR] [141🔥] Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy. [low-bit]
  • [IJCAI] Deterministic Binary Filters for Convolutional Neural Networks. [binarization]
  • [IJCAI] Planning in Factored State and Action Spaces with Learned Binarized Neural Network Transition Models. [binarization]
  • [IJCNN] Analysis and Implementation of Simple Dynamic Binary Neural Networks. [binarization]
  • [IPDPS] BitFlow: Exploiting Vector Parallelism for Binary Neural Networks on CPU. [binarization]
  • [IEEE J. Solid-State Circuits] [66🔥] BRein Memory: A Single-Chip Binary/Ternary Reconfigurable in-Memory Deep Neural Network Accelerator Achieving 1.4 TOPS at 0.6 W. [hardware] [low-bit] [binarization]
  • [NCA] [88🔥] A survey of FPGA-based accelerators for convolutional neural networks. [hardware]
  • [NeurIPS] [150🔥] Training Deep Neural Networks with 8-bit Floating Point Numbers. [low-bit]
  • [NeurIPS] [91🔥] Scalable methods for 8-bit training of neural networks. [low-bit] [torch]
  • [MM] BitStream: Efficient Computing Architecture for Real-Time Low-Power Inference of Binary Neural Networks on CPUs. [binarization]
  • [Res Math Sci] Blended coarse gradient descent for full quantization of deep neural networks. [low-bit] [binarization]
  • [TCAD] XNOR Neural Engine: A Hardware Accelerator IP for 21.6-fJ/op Binary Neural Network Inference. [hardware]
  • [TRETS] [50🔥] FINN-R: An End-to-End Deep-Learning Framework for Fast Exploration of Quantized Neural Networks. [low-bit]
  • [TVLSI] An Energy-Efficient Architecture for Binary Weight Convolutional Neural Networks. [binarization]
  • [arxiv] Training Competitive Binary Neural Networks from Scratch. [binarization] [code] [192⭐️]
  • [arxiv] Joint Neural Architecture Search and Quantization. [low-bit] [torch]

2017

  • [CoRR] BMXNet: An Open-Source Binary Neural Network Implementation Based on MXNet. [binarization] [code]
  • [CVPR] [251🔥] Deep Learning with Low Precision by Half-wave Gaussian Quantization. [low-bit] [code] [118⭐️]
  • [CVPR] [156🔥] Local Binary Convolutional Neural Networks. [binarization] [torch] [94⭐️]
  • [FPGA] [463🔥] FINN: A Framework for Fast, Scalable Binarized Neural Network Inference. [hardware] [binarization]
  • [ICCV] [130🔥] Binarized Convolutional Landmark Localizers for Human Pose Estimation and Face Alignment with Limited Resources. [binarization] [homepage] [torch] [207⭐️]
  • [ICCV] [55🔥] Performance Guaranteed Network Acceleration via High-Order Residual Quantization. [low-bit]
  • [ICLR] [554🔥] Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights. [low-bit] [torch] [144⭐️]
  • [ICLR] [119🔥] Loss-aware Binarization of Deep Networks. [binarization] [code]
  • [ICLR] [222🔥] Soft Weight-Sharing for Neural Network Compression. [other]
  • [ICLR] [637🔥] Trained Ternary Quantization. [low-bit] [torch] [90⭐️]
  • [InterSpeech] Binary Deep Neural Networks for Speech Recognition. [binarization]
  • [IPDPSW] On-Chip Memory Based Binarized Convolutional Deep Neural Network Applying Batch Normalization Free Technique on an FPGA. [hardware]
  • [JETC] A GPU-Outperforming FPGA Accelerator Architecture for Binary Convolutional Neural Networks. [hardware] [binarization]
  • [NeurIPS] [293🔥] Towards Accurate Binary Convolutional Neural Network. [binarization] [tensorflow]
  • [Neurocomputing] [126🔥] FP-BNN: Binarized neural network on FPGA. [hardware]
  • [MWSCAS] Deep learning binary neural network on an FPGA. [hardware] [binarization]
  • [arxiv] [71🔥] Ternary Neural Networks with Fine-Grained Quantization. [low-bit]
  • [arxiv] ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks. [low-bit] [code] [53⭐️]

2016

  • [CoRR] [1k🔥] DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients. [low-bit] [code] [5.8k⭐️]
  • [ECCV] [2.7k🔥] XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. [binarization] [torch] [787⭐️]
  • [NeurIPS] [572🔥] Ternary weight networks. [low-bit] [code] [61⭐️]
  • [NeurIPS] [1.7k🔥] Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. [binarization] [torch] [239⭐️]

2015

  • [ICML] [191🔥] Bitwise Neural Networks. [binarization]
  • [NeurIPS] [1.8k🔥] BinaryConnect: Training Deep Neural Networks with binary weights during propagations. [binarization] [code] [330⭐️]

Codes_and_Docs

  • [code] [doc] ZF-Net: An Open Source FPGA CNN Library.

  • [doc] Accelerating CNN inference on FPGAs: A Survey.

  • [code] Different quantization methods implemented in PyTorch.

  • [Chinese] Quantization Methods.

  • [Chinese] Run BNN in FPGA.

  • [Chinese] An Overview of Deep Compression Approaches.

  • [Chinese] Embedded Deep Learning: Binarization of Neural Networks (3) - FPGA Implementation.
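As a quick taste of what off-the-shelf tooling in this space looks like, PyTorch's built-in dynamic quantization converts Linear layers to int8 after training; this is only one convenient baseline among the many methods covered by the resources above:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Post-training dynamic quantization: weights are stored as int8,
# activations are quantized on the fly at inference time.
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
print(qmodel(x).shape)  # same interface as the float model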

Our_Team

Our team is part of the DIG group of the State Key Laboratory of Software Development Environment (SKLSDE), supervised by Prof. Xianglong Liu. The main research goal of our team is compressing and accelerating models for a variety of deployment scenarios.

Members

Haotong Qin

  • Haotong Qin is a Ph.D. student in the State Key Laboratory of Software Development Environment (SKLSDE) and the Shen Yuan Honors College at Beihang University, supervised by Prof. Wei Li and Prof. Xianglong Liu. He obtained a B.Eng. degree in computer science and engineering from Beihang University, and interned at MSRA and Tencent WXG. He is interested in hardware-friendly deep learning and neural network quantization. His research goal is to enable state-of-the-art neural network models to be deployed on resource-limited hardware, covering compression and acceleration for multiple architectures as well as flexible and efficient deployment on multiple hardware platforms.

Xiangguo Zhang

  • Xiangguo Zhang is a graduate student in the School of Computer Science at Beihang University, under the guidance of Prof. Xianglong Liu. He received a bachelor's degree from Shandong University in 2019 and entered Beihang University the same year. Currently, he is interested in computer vision and post-training quantization.

Yifu Ding

  • Yifu Ding is a senior student in the School of Computer Science and Engineering at Beihang University. She is in the State Key Laboratory of Software Development Environment (SKLSDE), under the supervision of Prof. Xianglong Liu. Currently, she is interested in computer vision and model quantization. She believes that highly compressed neural network models can be deployed on resource-constrained devices, and that among the various compression methods, quantization is a promising one.

Xiuying Wei

  • Xiuying Wei is a first-year graduate student at Beihang University under the supervision of Prof. Xianglong Liu. She received a bachelor's degree from Shandong University in 2020. Currently, she is interested in model quantization. She believes that quantization can make models faster and more robust, which would allow deep learning systems to run on low-power devices and open up more opportunities in the future.

Qinghua Yan

  • I am a senior student in the Sino-French Engineer School at Beihang University. I recently started research on model compression in the State Key Laboratory of Software Development Environment (SKLSDE), under the supervision of Prof. Xianglong Liu. I have great enthusiasm for deep learning and model quantization, and I really enjoy working with my talented teammates.

Alumnus

Ruihao Gong

  • Ruihao Gong is currently a senior researcher at SenseTime. Before that, he studied at Beihang University under the supervision of Prof. Xianglong Liu. Starting in 2017, he worked on building computer vision systems and on model quantization as an intern at SenseTime Research, where he enjoyed working with talented researchers and grew a lot with the help of Fengwei Yu, Wei Wu, Jing Shao, and Junjie Yan. Early in the internship, he independently took responsibility for developing the intelligent video analysis system SenseVideo. Later, he began researching model quantization, which can speed up inference and even training of neural networks on edge devices. He is now devoted to further improving the accuracy of extremely low-bit models and the automatic deployment of quantized models.

Publications
