
htqin / Bipointnet

License: MIT
This project is the official implementation of our accepted ICLR 2021 paper BiPointNet: Binary Neural Network for Point Clouds.

Programming Languages

Python

Projects that are alternatives of or similar to Bipointnet

ESNAC
Learnable Embedding Space for Efficient Neural Architecture Compression
Stars: ✭ 27 (+0%)
Mutual labels:  model-compression
Soft Filter Pruning
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
Stars: ✭ 291 (+977.78%)
Mutual labels:  model-compression
Ghostnet.pytorch
[CVPR2020] GhostNet: More Features from Cheap Operations
Stars: ✭ 440 (+1529.63%)
Mutual labels:  model-compression
allie
🤖 A machine learning framework for audio, text, image, video, or .CSV files (50+ featurizers and 15+ model trainers).
Stars: ✭ 93 (+244.44%)
Mutual labels:  model-compression
A- Guide -to Data Sciecne from mathematics
A blueprint for data science, from mathematics to algorithms. It is not yet complete.
Stars: ✭ 25 (-7.41%)
Mutual labels:  model-compression
Amc
[ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices
Stars: ✭ 298 (+1003.7%)
Mutual labels:  model-compression
FastPose
PyTorch real-time multi-person keypoint estimation
Stars: ✭ 36 (+33.33%)
Mutual labels:  model-compression
Paddleslim
PaddleSlim is an open-source library for deep model compression and architecture search.
Stars: ✭ 677 (+2407.41%)
Mutual labels:  model-compression
SViTE
[NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
Stars: ✭ 50 (+85.19%)
Mutual labels:  model-compression
Knowledge Distillation Papers
knowledge distillation papers
Stars: ✭ 422 (+1462.96%)
Mutual labels:  model-compression
Yolov5-distillation-train-inference
YOLOv5 distillation training | YOLOv5 knowledge-distillation training; supports training on your own data
Stars: ✭ 84 (+211.11%)
Mutual labels:  model-compression
DLCV2018SPRING
Deep Learning for Computer Vision (CommE 5052) in NTU
Stars: ✭ 38 (+40.74%)
Mutual labels:  model-compression
Filter Pruning Geometric Median
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral)
Stars: ✭ 338 (+1151.85%)
Mutual labels:  model-compression
Regularization-Pruning
[ICLR'21] PyTorch code for our paper "Neural Pruning via Growing Regularization"
Stars: ✭ 44 (+62.96%)
Mutual labels:  model-compression
Knowledge Distillation Zoo
Pytorch implementation of various Knowledge Distillation (KD) methods.
Stars: ✭ 514 (+1803.7%)
Mutual labels:  model-compression
ATMC
[NeurIPS'2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”
Stars: ✭ 41 (+51.85%)
Mutual labels:  model-compression
Model Compression Papers
Papers for deep neural network compression and acceleration
Stars: ✭ 296 (+996.3%)
Mutual labels:  model-compression
Awesome Automl And Lightweight Models
A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hyperparameter Optimization, 5.) Automated Feature Engineering.
Stars: ✭ 691 (+2459.26%)
Mutual labels:  model-compression
Lightctr
A lightweight and scalable framework that combines mainstream click-through-rate (CTR) prediction algorithms on a computational DAG with the Parameter Server philosophy and Ring-AllReduce collective communication.
Stars: ✭ 644 (+2285.19%)
Mutual labels:  model-compression
Data Efficient Model Compression
Data Efficient Model Compression
Stars: ✭ 380 (+1307.41%)
Mutual labels:  model-compression

BiPointNet: Binary Neural Network for Point Clouds

Created by Haotong Qin, Zhongang Cai, Mingyuan Zhang, Yifu Ding, Haiyu Zhao, Shuai Yi, Xianglong Liu, and Hao Su from Beihang University, SenseTime, and UCSD.

(Figure: prediction example)

Introduction

This project is the official implementation of our accepted ICLR 2021 paper BiPointNet: Binary Neural Network for Point Clouds [PDF]. To alleviate the resource constraints of real-time point cloud applications running on edge devices, we present BiPointNet, the first model binarization approach for efficient deep learning on point clouds. We first discover that the immense performance drop of binarized models for point clouds stems mainly from two challenges: aggregation-induced feature homogenization, which degrades information entropy, and scale distortion, which hinders optimization and invalidates scale-sensitive structures. With theoretical justifications and in-depth analysis, our BiPointNet introduces Entropy-Maximizing Aggregation (EMA) to modulate the distribution before aggregation for maximum information entropy, and Layer-wise Scale Recovery (LSR) to efficiently restore feature representation capacity. Extensive experiments show that BiPointNet outperforms existing binarization methods by convincing margins, reaching a level even comparable with its full-precision counterpart. We highlight that our techniques are generic, guaranteeing significant improvements on various fundamental tasks and mainstream backbones, e.g., BiPointNet gives an impressive 14.7x speedup and 18.9x storage saving on real-world resource-constrained devices.
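The two techniques can be pictured with a minimal PyTorch sketch. This is an illustration under our own assumptions rather than the official implementation: a sign-binarized linear layer whose output is rescaled by a single learnable layer-wise factor (the LSR idea), and max-pooling applied after a learnable shift of the pre-pooling distribution (the EMA idea). All class and parameter names below are made up for illustration.

# Minimal sketch (not the official implementation) of the two ideas above:
# a sign-binarized linear layer with a learnable layer-wise scale (LSR-style)
# and a learnable shift applied before max-pooling (EMA-style).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator for gradients."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Standard STE: pass gradients only where |x| <= 1.
        return grad_output * (x.abs() <= 1).float()


class BiLinearLSR(nn.Module):
    """Binary linear layer with one learnable scale per layer (LSR-style)."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.scale = nn.Parameter(torch.ones(1))  # layer-wise scale recovery

    def forward(self, x):
        bin_w = BinarizeSTE.apply(self.weight)
        bin_x = BinarizeSTE.apply(x)
        return F.linear(bin_x, bin_w) * self.scale


class ShiftedMaxPool(nn.Module):
    """Max-pool over the point dimension after a learnable per-channel shift,
    mimicking the entropy-maximizing modulation before aggregation (EMA-style)."""

    def __init__(self, channels):
        super().__init__()
        self.shift = nn.Parameter(torch.zeros(channels))

    def forward(self, x):  # x: (batch, num_points, channels)
        return (x + self.shift).max(dim=1).values

In a PointNet-style pipeline these modules would stand in for the per-point MLP layers and the global max-pooling, respectively; for the exact formulations, refer to the paper and the code in this repository.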

Installation

# create new conda environment
conda create -n pyg python=3.7 -y
conda activate pyg

# install pytorch
conda install pytorch==1.5.0 torchvision cudatoolkit=10.1 -c pytorch -y

# install pytorch-geometric
export CUDA=cu101
pip install torch-scatter==latest+${CUDA} -f https://pytorch-geometric.com/whl/torch-1.5.0.html
pip install torch-sparse==latest+${CUDA} -f https://pytorch-geometric.com/whl/torch-1.5.0.html
pip install torch-cluster==latest+${CUDA} -f https://pytorch-geometric.com/whl/torch-1.5.0.html
pip install torch-spline-conv==latest+${CUDA} -f https://pytorch-geometric.com/whl/torch-1.5.0.html
pip install torch-geometric

# install other dependencies
pip install pyyaml
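
After installation, a quick import check (a small snippet of our own, not part of the repository) confirms that the pinned builds load and that CUDA is visible:

# Optional sanity check: verify the environment created above.
import torch
import torch_geometric

print("torch:", torch.__version__)                 # expected: 1.5.0
print("torch_geometric:", torch_geometric.__version__)
print("CUDA available:", torch.cuda.is_available())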

Training

export PYTHONPATH=$(pwd):$PYTHONPATH
conda activate pyg
python scripts/main.py ${CONFIG} ${PYTHON_ARGS}
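
Here ${CONFIG} is the path to a YAML configuration file and ${PYTHON_ARGS} holds any additional command-line arguments. The configuration schema is not documented in this section, so the snippet below only illustrates how such a file can be read with the pyyaml dependency installed above; it is not taken from the repository.

# Illustration only: load a training config with pyyaml.
# The path is a placeholder; see the configs shipped with the repository
# for the actual schema.
import yaml

with open("path/to/config.yaml") as f:  # the file passed as ${CONFIG}
    cfg = yaml.safe_load(f)

print(cfg)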

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{Qin:iclr21,
  author    = {Haotong Qin and Zhongang Cai and Mingyuan Zhang and Yifu Ding and Haiyu Zhao and Shuai Yi and Xianglong Liu and Hao Su},
  title     = {BiPointNet: Binary Neural Network for Point Clouds},
  booktitle = {ICLR},
  year      = {2021}
}