Gorilla-Lab-SCUT / frustum-convnet

License: MIT
The PyTorch Implementation of F-ConvNet for 3D Object Detection

Programming Languages

Python, C++, MATLAB

Projects that are alternatives of or similar to frustum-convnet

BtcDet
Behind the Curtain: Learning Occluded Shapes for 3D Object Detection
Stars: ✭ 104 (-54.39%)
Mutual labels:  point-cloud, 3d-object-detection
M3DETR
Code base for M3DeTR: Multi-representation, Multi-scale, Mutual-relation 3D Object Detection with Transformers
Stars: ✭ 47 (-79.39%)
Mutual labels:  point-cloud, 3d-object-detection
ViP
A New 3D Detector. Code Will be made public.
Stars: ✭ 29 (-87.28%)
Mutual labels:  point-cloud, 3d-object-detection
FLAT
[ICCV2021 Oral] Fooling LiDAR by Attacking GPS Trajectory
Stars: ✭ 52 (-77.19%)
Mutual labels:  point-cloud, 3d-object-detection
efficient online learning
Efficient Online Transfer Learning for 3D Object Detection in Autonomous Driving
Stars: ✭ 20 (-91.23%)
Mutual labels:  point-cloud, 3d-object-detection
Open3D-PointNet2-Semantic3D
Semantic3D segmentation with Open3D and PointNet++
Stars: ✭ 422 (+85.09%)
Mutual labels:  point-cloud
softpool
SoftPoolNet: Shape Descriptor for Point Cloud Completion and Classification - ECCV 2020 oral
Stars: ✭ 62 (-72.81%)
Mutual labels:  point-cloud
LiDAR fog sim
LiDAR fog simulation
Stars: ✭ 101 (-55.7%)
Mutual labels:  point-cloud
imvoxelnet
[WACV2022] ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection
Stars: ✭ 179 (-21.49%)
Mutual labels:  3d-object-detection
point based clothing
Official PyTorch code for the paper: "Point-Based Modeling of Human Clothing" (ICCV 2021)
Stars: ✭ 57 (-75%)
Mutual labels:  point-cloud
pcc geo cnn v2
Improved Deep Point Cloud Geometry Compression
Stars: ✭ 55 (-75.88%)
Mutual labels:  point-cloud
zed-ros2-wrapper
ROS 2 wrapper beta for the ZED SDK
Stars: ✭ 61 (-73.25%)
Mutual labels:  point-cloud
pcl-edge-detection
Edge-detection application with PointCloud Library
Stars: ✭ 32 (-85.96%)
Mutual labels:  point-cloud
labelCloud
A lightweight tool for labeling 3D bounding boxes in point clouds.
Stars: ✭ 264 (+15.79%)
Mutual labels:  3d-object-detection
ECCV-2020-point-cloud-analysis
ECCV 2020 papers focusing on point cloud analysis
Stars: ✭ 22 (-90.35%)
Mutual labels:  point-cloud
PyKinect2-PyQtGraph-PointClouds
Creating real-time dynamic Point Clouds using PyQtGraph, Kinect 2 and the python library PyKinect2.
Stars: ✭ 42 (-81.58%)
Mutual labels:  point-cloud
From-Voxel-to-Point
"From Voxel to Point: IoU-guided 3D Object Detection for Point Cloud with Voxel-to-Point Decoder" and "Anchor-free 3D Single Stage Detector with Mask-Guided Attention for Point Cloud" in ACM MM 2021.
Stars: ✭ 29 (-87.28%)
Mutual labels:  3d-object-detection
persee-depth-image-server
Stream openni2 depth images over the network
Stars: ✭ 21 (-90.79%)
Mutual labels:  point-cloud
CVPR-2020-point-cloud-analysis
CVPR 2020 papers focusing on point cloud analysis
Stars: ✭ 48 (-78.95%)
Mutual labels:  point-cloud
fastDesp-corrProp
Fast Descriptors and Correspondence Propagation for Robust Global Point Cloud Registration
Stars: ✭ 16 (-92.98%)
Mutual labels:  point-cloud

Frustum ConvNet: Sliding Frustums to Aggregate Local Point-Wise Features for Amodal 3D Object Detection

This repository contains the code for our IROS 2019 paper [arXiv], [IEEE Xplore].

Citation

If you find this work useful in your research, please consider citing:

@inproceedings{wang2019frustum,
    title={Frustum ConvNet: Sliding Frustums to Aggregate Local Point-Wise Features for Amodal 3D Object Detection},
    author={Wang, Zhixin and Jia, Kui},
    booktitle={2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    pages={1742--1749},
    year={2019},
    organization={IEEE}
}

Installation

Requirements

  • PyTorch 1.0+
  • Python 3.6+

We tested our code on Ubuntu 16.04 with CUDA 9.0, cuDNN 7.0, Python 3.7.2, and PyTorch 1.0.
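
If you are unsure which versions are active in your environment, a quick check using standard PyTorch APIs (not part of this repository) is:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.backends.cudnn.version())"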

Clone the repository and install dependencies

git clone https://github.com/zhixinwang/frustum-convnet.git

You may need to install extra packages, such as pybind11, opencv, yaml, and (optionally) tensorflow.

If you want to use TensorBoard to visualize the training status, install tensorflow (the CPU version is enough). Otherwise, set USE_TFBOARD: False in cfgs/*.yaml.
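
For reference, here is a minimal sketch of how such a flag typically gates TensorBoard logging so that tensorflow is only needed when the flag is on. Everything below except the USE_TFBOARD key is an illustrative assumption, not the repository's actual code:

# Sketch only: Cfg stands in for the parsed YAML config.
class Cfg:
    USE_TFBOARD = False  # mirrors 'USE_TFBOARD: False' in cfgs/*.yaml
    OUTPUT_DIR = 'output/car_train'

cfg = Cfg()
writer = None
if cfg.USE_TFBOARD:
    from tensorboardX import SummaryWriter  # hypothetical choice of writer
    writer = SummaryWriter(log_dir=cfg.OUTPUT_DIR)

# Inside the training loop, log only when the writer exists.
if writer is not None:
    writer.add_scalar('train/loss', 0.0, 0)  # placeholder loss value and step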

Compile extension

cd ops
bash clean.sh
bash make.sh

Download data

Download the KITTI 3D object detection dataset from here and organize it as follows.

data/kitti
├── testing
│   ├── calib
│   ├── image_2
│   └── velodyne
└── training
    ├── calib
    ├── image_2
    ├── label_2
    └── velodyne
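
Before training, you can sanity-check the layout with a short Python snippet; this sketch uses only the directory names shown above (the root path is an assumption):

import os

root = 'data/kitti'  # assumed dataset root, matching the tree above
expected = {
    'training': ['calib', 'image_2', 'label_2', 'velodyne'],
    'testing': ['calib', 'image_2', 'velodyne'],
}
for split, subdirs in expected.items():
    for sub in subdirs:
        path = os.path.join(root, split, sub)
        assert os.path.isdir(path), 'missing directory: %s' % path
print('KITTI directory layout looks correct.')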

Training and evaluation

First stage

Run the following command to prepare the pickle files for car training. We use the 2D detection results from F-PointNets. The pickle files will be saved in kitti/data/pickle_data.

python kitti/prepare_data.py --car_only --gen_train --gen_val --gen_val_rgb_detection
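
To spot-check the output, you can load one of the generated files; the sketch below assumes only the kitti/data/pickle_data directory named above, since the exact filenames and contents depend on the preparation script:

import glob
import pickle

# List the generated files and peek at the first one (sketch only).
files = sorted(glob.glob('kitti/data/pickle_data/*'))
print('\n'.join(files))
if files:
    with open(files[0], 'rb') as f:
        data = pickle.load(f)
    print(type(data))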

Run the following commands to train and evaluate the final model. You can use export CUDA_VISIBLE_DEVICES=? to specify which GPU to use, and you can modify the value after OUTPUT_DIR to choose a directory for saving the logs, model files, and evaluation results. All config settings are defined in configs/config.py.

python train/train_net_det.py --cfg cfgs/det_sample.yaml OUTPUT_DIR output/car_train
python train/test_net_det.py --cfg cfgs/det_sample.yaml OUTPUT_DIR output/car_train TEST.WEIGHTS output/car_train/model_0050.pth
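
For example, to run the first-stage training on GPU 0, combining the export mentioned above with the same command:

export CUDA_VISIBLE_DEVICES=0
python train/train_net_det.py --cfg cfgs/det_sample.yaml OUTPUT_DIR output/car_train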

We also provide a shell script, so you can run bash scripts/car_train.sh instead.

Refinement stage

Run the following command to prepare the pickle files for car training. We use the predictions from the first stage. If you did not use the default directory in the first stage, change the corresponding directories here and here before running the following commands. The pickle files will be saved in kitti/data/pickle_data_refine.

python kitti/prepare_data_refine.py --car_only --gen_train --gen_val_det --gen_val_rgb_detection

Run the following commands to train and evaluate the final model.

python train/train_net_det.py --cfg cfgs/refine_car.yaml OUTPUT_DIR output/car_train_refine
python train/test_net_det.py --cfg cfgs/refine_car.yaml OUTPUT_DIR output/car_train_refine TEST.WEIGHTS output/car_train_refine/model_0050.pth

We also provide a shell script, so you can run bash scripts/car_train_refine.sh instead.

All commands in one script file

You can simply run bash scripts/car_all.sh to execute all the above commands.

Pretrained models

We provide pretrained models for the car category, which you can download from here. After extracting the files under the root directory, you can run bash scripts/eval_pretrained_models.sh to evaluate them. The performance on the validation set is as follows (the three numbers in each row correspond to the KITTI easy, moderate, and hard difficulty levels):

# first stage
Car AP@0.70, 0.70, 0.70:
bbox AP:98.33, 90.40, 88.24
bev  AP:90.32, 88.02, 79.41
3d   AP:87.76, 77.41, 68.79

# refinement stage
Car AP@0.70, 0.70, 0.70:
bbox AP:98.43, 90.39, 88.15
bev  AP:90.42, 88.99, 86.88
3d   AP:89.31, 79.08, 77.17

SUNRGBD dataset

Please follow the instructions here.

Note

Since we updated our code from PyTorch 0.3.1 to PyTorch 1.0 and the code uses many random sampling operations, the results may not exactly match those reported in our paper, but the difference should be within ±0.5%. If you cannot reproduce similar results, please contact me. I am still working on making the results stable.
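
If you want more repeatable runs, the standard way to pin this sampling down is to fix the random seeds up front. The snippet below is a generic PyTorch/NumPy sketch, not something the repository necessarily does; full determinism also depends on CUDA kernels and data-loader workers.

import random

import numpy as np
import torch

# Fix the common sources of randomness before building the model and loaders.
seed = 0  # any fixed value
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)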

Our code supports multi-GPU training, but training is already fast for small datasets such as KITTI and SUN-RGBD: all steps finish within one day on a single GPU.

Acknowledgements

Part of the code was adapted from F-PointNets.

License

Our code is released under the MIT license.
