
SmallMunich / Nutonomy_pointpillars

License: MIT
Convert pointpillars Pytorch Model To ONNX for TensorRT Inference


Projects that are alternatives to or similar to Nutonomy pointpillars

Onnx R
R Interface to Open Neural Network Exchange (ONNX)
Stars: ✭ 31 (-74.17%)
Mutual labels:  onnx
Onnx Chainer
Add-on package for ONNX format support in Chainer
Stars: ✭ 83 (-30.83%)
Mutual labels:  onnx
Onnx2keras
Convert ONNX model graph to Keras model format.
Stars: ✭ 103 (-14.17%)
Mutual labels:  onnx
Onnx tflite yolov3
A Conversion tool to convert YOLO v3 Darknet weights to TF Lite model (YOLO v3 PyTorch > ONNX > TensorFlow > TF Lite), and to TensorRT (YOLO v3 Pytorch > ONNX > TensorRT).
Stars: ✭ 52 (-56.67%)
Mutual labels:  onnx
Onnxruntime Projects
Code for some onnxruntime projects
Stars: ✭ 78 (-35%)
Mutual labels:  onnx
Ngraph
nGraph has moved to OpenVINO
Stars: ✭ 1,322 (+1001.67%)
Mutual labels:  onnx
Yolov3
YOLOv3 in PyTorch > ONNX > CoreML > TFLite
Stars: ✭ 8,159 (+6699.17%)
Mutual labels:  onnx
3ddfa v2
The official PyTorch implementation of Towards Fast, Accurate and Stable 3D Dense Face Alignment, ECCV 2020.
Stars: ✭ 1,961 (+1534.17%)
Mutual labels:  onnx
Micronet
micronet, a model compression and deploy lib. compression: 1、quantization: quantization-aware-training(QAT), High-Bit(>2b)(DoReFa/Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference)、Low-Bit(≤2b)/Ternary and Binary(TWN/BNN/XNOR-Net); post-training-quantization(PTQ), 8-bit(tensorrt); 2、 pruning: normal、regular and group convolutional channel pruning; 3、 group convolution structure; 4、batch-normalization fuse for quantization. deploy: tensorrt, fp32/fp16/int8(ptq-calibration)、op-adapt(upsample)、dynamic_shape
Stars: ✭ 1,232 (+926.67%)
Mutual labels:  onnx
Keras Oneclassanomalydetection
[5 FPS - 150 FPS] Learning Deep Features for One-Class Classification (AnomalyDetection). Corresponds RaspberryPi3. Convert to Tensorflow, ONNX, Caffe, PyTorch. Implementation by Python + OpenVINO/Tensorflow Lite.
Stars: ✭ 102 (-15%)
Mutual labels:  onnx
Pytorch Onnx Tensorrt
A set of tool which would make your life easier with Tensorrt and Onnxruntime. This Repo is designed for YoloV3
Stars: ✭ 66 (-45%)
Mutual labels:  onnx
Gluon2pytorch
Gluon to PyTorch deep neural network model converter
Stars: ✭ 70 (-41.67%)
Mutual labels:  onnx
Onnx Tensorrt
ONNX-TensorRT: TensorRT backend for ONNX
Stars: ✭ 1,285 (+970.83%)
Mutual labels:  onnx
Advbox
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle、PyTorch、Caffe2、MxNet、Keras、TensorFlow and Advbox can benchmark the robustness of machine learning models. Advbox give a command line tool to generate adversarial examples with Zero-Coding.
Stars: ✭ 1,055 (+779.17%)
Mutual labels:  onnx
Yolov5 Rt Stack
Yet another yolov5, with its runtime stack for libtorch, onnx, tvm and specialized accelerators. You like torchvision's retinanet? You like yolov5? You love yolort!
Stars: ✭ 107 (-10.83%)
Mutual labels:  onnx
Mlnet Workshop
ML.NET Workshop to predict car sales prices
Stars: ✭ 29 (-75.83%)
Mutual labels:  onnx
Gen Efficientnet Pytorch
Pretrained EfficientNet, EfficientNet-Lite, MixNet, MobileNetV3 / V2, MNASNet A1 and B1, FBNet, Single-Path NAS
Stars: ✭ 1,275 (+962.5%)
Mutual labels:  onnx
Onnx
Open standard for machine learning interoperability
Stars: ✭ 11,829 (+9757.5%)
Mutual labels:  onnx
Yolov5
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
Stars: ✭ 19,914 (+16495%)
Mutual labels:  onnx
Mivisionx
MIVisionX toolkit is a set of comprehensive computer vision and machine intelligence libraries, utilities, and applications bundled into a single toolkit. AMD MIVisionX also delivers a highly optimized open-source implementation of the Khronos OpenVX™ and OpenVX™ Extensions.
Stars: ✭ 100 (-16.67%)
Mutual labels:  onnx

Convert the PointPillars PyTorch Model to ONNX, and Use TensorRT to Load the ONNX IR for Fast Inference

Welcome to PointPillars (this section originates from the nuTonomy/second.pytorch README).

This repo demonstrates how to reproduce the results from PointPillars: Fast Encoders for Object Detection from Point Clouds (to be published at CVPR 2019) on the KITTI dataset by making the minimum required changes from the preexisting open source codebase SECOND.

This part of the code also draws on the open-source repository k0suke-murakami/train_point_pillars (https://github.com/k0suke-murakami/train_point_pillars).

This is not an official nuTonomy codebase, but it can be used to match the published PointPillars results.

WARNING: This code is not being actively maintained. This code can be used to reproduce the results in the first version of the paper, https://arxiv.org/abs/1812.05784v1. For an actively maintained repository that can also reproduce PointPillars results on nuScenes, we recommend using SECOND. We are not the owners of the repository, but we have worked with the author and endorse his code.

Example Results

Getting Started

This is a fork of SECOND for KITTI object detection and the relevant subset of the original README is reproduced here.

Docker Environments

If you do not want to spend time setting up the PointPillars environment, pull my prebuilt Docker image:

docker pull smallmunich/suke_pointpillars:v1 

Attention: after launching this Docker environment, run this command:

conda activate pointpillars 

Then you can run the training, evaluation, or ONNX model generation commands.

Install

1. Clone code

git clone https://github.com/SmallMunich/nutonomy_pointpillars.git

2. Install Python packages

It is recommended to use the Anaconda package manager.

First, use Anaconda to configure as many packages as possible.

conda create -n pointpillars python=3.6 anaconda
source activate pointpillars
conda install shapely pybind11 protobuf scikit-image numba pillow
conda install pytorch torchvision -c pytorch
conda install google-sparsehash -c bioconda

Then use pip for the packages missing from Anaconda.

pip install --upgrade pip
pip install fire tensorboardX

Finally, install SparseConvNet. This is not required for PointPillars, but the general SECOND codebase expects it to be correctly configured. However, I suggest installing spconv instead of SparseConvNet.

git clone git@github.com:facebookresearch/SparseConvNet.git
cd SparseConvNet/
bash build.sh
# NOTE: if bash build.sh fails, try bash develop.sh instead

Additionally, you may need to install Boost geometry:

sudo apt-get install libboost-all-dev

3. Setup cuda for numba

Add the following environment variables for Numba to your ~/.bashrc:

export NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so
export NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so
export NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice
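
To verify that Numba picks up CUDA after sourcing ~/.bashrc, here is a quick check (assuming the Conda environment above is active):

# Quick sanity check that Numba can see the GPU.
from numba import cuda

cuda.detect()  # prints the CUDA devices Numba found; your GPU should be listed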

4. PYTHONPATH

Add nutonomy_pointpillars/ to your PYTHONPATH.

export PYTHONPATH=$PYTHONPATH:/your_root_path/nutonomy_pointpillars/

Prepare dataset

1. Dataset preparation

Download KITTI dataset and create some directories first:

└── KITTI_DATASET_ROOT
       ├── training    <-- 7481 train data
       |   ├── image_2 <-- for visualization
       |   ├── calib
       |   ├── label_2
       |   ├── velodyne
       |   └── velodyne_reduced <-- empty directory
       └── testing     <-- 7518 test data
           ├── image_2 <-- for visualization
           ├── calib
           ├── velodyne
           └── velodyne_reduced <-- empty directory

Note: PointPillar's protos use KITTI_DATASET_ROOT=/data/sets/kitti_second/.
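
If you prefer to script the skeleton instead of creating it by hand, here is a minimal sketch (the root path mirrors the note above; adjust it to your setup):

import os

# Create the KITTI directory skeleton shown above; velodyne_reduced starts empty.
root = "/data/sets/kitti_second"
splits = {
    "training": ["image_2", "calib", "label_2", "velodyne", "velodyne_reduced"],
    "testing": ["image_2", "calib", "velodyne", "velodyne_reduced"],
}
for split, subdirs in splits.items():
    for sub in subdirs:
        os.makedirs(os.path.join(root, split, sub), exist_ok=True)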

2. Create kitti infos:

python create_data.py create_kitti_info_file --data_path=KITTI_DATASET_ROOT

3. Create reduced point cloud:

python create_data.py create_reduced_point_cloud --data_path=KITTI_DATASET_ROOT

4. Create groundtruth-database infos:

python create_data.py create_groundtruth_database --data_path=KITTI_DATASET_ROOT
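
To sanity-check the generated info files, a minimal sketch (the path placeholder matches the config snippet below; the pickle's exact structure is a SECOND implementation detail):

import pickle

# kitti_infos_train.pkl is produced by the create_kitti_info_file step above.
with open("/path/to/kitti_infos_train.pkl", "rb") as f:
    infos = pickle.load(f)
print(len(infos))  # one entry per frame in the train split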

5. Modify config file

The config file needs to be edited to point to the above datasets:

train_input_reader: {
  ...
  database_sampler {
    database_info_path: "/path/to/kitti_dbinfos_train.pkl"
    ...
  }
  kitti_info_path: "/path/to/kitti_infos_train.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
...
eval_input_reader: {
  ...
  kitti_info_path: "/path/to/kitti_infos_val.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}

Train

cd ~/second.pytorch/second
python ./pytorch/train.py train --config_path=./configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
  • If you want to train a new model, make sure "/path/to/model_dir" doesn't exist.
  • If "/path/to/model_dir" does exist, training will be resumed from the last checkpoint.
  • Training only supports a single GPU.
  • Training uses a batch size of 2, which should fit in memory on most standard GPUs.
  • On a single 1080Ti, training xyres_16 requires approximately 20 hours for 160 epochs.

Evaluate

cd ~/second.pytorch/second/
python pytorch/train.py evaluate --config_path=configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
  • Detection results will be saved in model_dir/eval_results/step_xxx.
  • By default, results are stored as a result.pkl file. To save them in the official KITTI label format, use --pickle_result=False. A sketch for inspecting the pickle follows.
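
A minimal sketch for loading the pickled results (paths are placeholders as above; the structure of the pickle is a SECOND implementation detail):

import pickle

# Load the detection results written by the evaluate command.
with open("/path/to/model_dir/eval_results/step_xxx/result.pkl", "rb") as f:
    detections = pickle.load(f)
print(type(detections), len(detections))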

ONNX IR Generation

To convert the PointPillars PyTorch model to ONNX IR, you need to modify the code as follows:

The file to edit is second/pytorch/models/voxelnet.py:

        voxel_features = self.voxel_feature_extractor(pillar_x, pillar_y, pillar_z, pillar_i,
                                                      num_points, x_sub_shaped, y_sub_shaped, mask)

        ###################################################################################
        # return voxel_features ### onnx voxel_features export
        # middle_feature_extractor for trim shape
        voxel_features = voxel_features.squeeze()
        voxel_features = voxel_features.permute(1, 0)

UNCOMMENT this line to export the voxel features: return voxel_features

Then you can run the IR conversion command:

cd ~/second.pytorch/second/
python pytorch/train.py onnx_model_generate --config_path=configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
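
For reference, PyTorch-to-ONNX conversion is built on torch.onnx.export. Below is a self-contained toy sketch of the mechanics (the module and shapes are illustrative stand-ins, not the real PFE/RPN networks):

import torch
import torch.nn as nn

# Toy stand-in module; onnx_model_generate traces the real networks instead.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(4, 8, kernel_size=1)

    def forward(self, x):
        return self.conv(x)

model = TinyNet().eval()
dummy_input = torch.randn(1, 4, 100, 32)  # illustrative shape only
torch.onnx.export(model, dummy_input, "toy.onnx", verbose=False)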

Compare the ONNX Model's Predictions with the Original PyTorch Model

  • To check the converted pfe.onnx and rpn.onnx models, refer to the script check_onnx_valid.py.

  • You can then compare the ONNX results with the original PyTorch model's predictions as follows (a minimal comparison sketch appears after this list):

  • The pfe.onnx and rpn.onnx prediction files are located in second/pytorch/onnx_predict_outputs; inspect them carefully:

    eval_voxel_features.txt 
    eval_voxel_features_onnx.txt 
    eval_rpn_features.txt 
    eval_rpn_onnx_features.txt 
  • Comparison of pfe.onnx with the original PFE layer: Example Results

  • Comparison of rpn.onnx with the original RPN layer: Example Results
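
A minimal comparison sketch using onnxruntime (onnxruntime is not listed in the install steps above, so treat it as an extra dependency; input names, shapes, and the float32 dtype are assumptions — inspect sess.get_inputs() for the real signature):

import numpy as np
import onnxruntime as ort

# Run pfe.onnx on random inputs and keep the first output.
sess = ort.InferenceSession("pfe.onnx")
feed = {}
for inp in sess.get_inputs():
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # 1 for dynamic dims
    feed[inp.name] = np.random.randn(*shape).astype(np.float32)
onnx_out = sess.run(None, feed)[0]

# torch_out would be the original PyTorch model's output on the same inputs:
# print(np.max(np.abs(onnx_out - torch_out)))  # expect ~1e-5 or smaller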

Compare ONNX with TensorRT for Fast Inference

  • First, you need the onnx_tensorrt environment:
      docker pull smallmunich/onnx_tensorrt:latest
  • To run pfe.onnx and rpn.onnx with TensorRT inference, refer to the script tensorrt_onnx_infer.py.

  • You can then compare the ONNX results with the TensorRT inference outputs as follows (a minimal sketch appears after this list):

  • The pfe.onnx and rpn.onnx prediction files are located in second/pytorch/onnx_predict_outputs; inspect them carefully:

    pfe_rpn_onnx_outputs.txt 
    pfe_tensorrt_outputs.txt 
    rpn_onnx_outputs.txt 
    rpn_tensorrt_outputs.txt 
  • Comparison of pfe.onnx with the TensorRT PFE layer: Example Results

  • Comparison of rpn.onnx with the TensorRT RPN layer: Example Results
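
A minimal sketch with the onnx-tensorrt Python backend, which the onnx_tensorrt image above is assumed to provide (the input shape is illustrative, not the real pfe.onnx signature):

import numpy as np
import onnx
import onnx_tensorrt.backend as backend

# Build a TensorRT engine from the ONNX graph and run a single inference.
model = onnx.load("pfe.onnx")
engine = backend.prepare(model, device="CUDA:0")
data = np.random.randn(1, 1, 12000, 100).astype(np.float32)
trt_out = engine.run(data)[0]
print(trt_out.shape)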

Blog Address
