
jialeli1 / From-Voxel-to-Point

License: Apache-2.0
"From Voxel to Point: IoU-guided 3D Object Detection for Point Cloud with Voxel-to-Point Decoder" and "Anchor-free 3D Single Stage Detector with Mask-Guided Attention for Point Cloud" in ACM MM 2021.

Programming Languages

  • Python
  • C++
  • CUDA

Projects that are alternatives of or similar to From-Voxel-to-Point

Py3dtiles
⚠️ Project migrated to: https://gitlab.com/Oslandia/py3dtiles ⚠️
Stars: ✭ 152 (+424.14%)
Mutual labels:  pointcloud
SSVIO
Graduation Project: A point cloud semantic segmentation and VIO based 3D reconstruction method using RGB-D and IMU
Stars: ✭ 25 (-13.79%)
Mutual labels:  pointcloud
Pointcloud-to-Images
An algorithm for projecting three-dimensional laser point cloud data into serialized two-dimensional images.
Stars: ✭ 54 (+86.21%)
Mutual labels:  pointcloud
Pyicp Slam
Full-python LiDAR SLAM using ICP and Scan Context
Stars: ✭ 155 (+434.48%)
Mutual labels:  pointcloud
Mcl 3dl
A ROS node to perform a probabilistic 3-D/6-DOF localization system for mobile robots with 3-D LIDAR(s). It implements pointcloud based Monte Carlo localization that uses a reference pointcloud as a map.
Stars: ✭ 221 (+662.07%)
Mutual labels:  pointcloud
annotate
Create 3D labelled bounding boxes in RViz
Stars: ✭ 104 (+258.62%)
Mutual labels:  pointcloud
Pytorch semantic segmentation
Implement some models of RGB/RGBD semantic segmentation in PyTorch, easy to run. Such as FCN, RefineNet, PSPNet, RDFNet, 3DGNN, PointNet, DeepLab V3, DeepLab V3 plus, DenseASPP, FastFCN
Stars: ✭ 137 (+372.41%)
Mutual labels:  pointcloud
nnDetection
nnDetection is a self-configuring framework for 3D (volumetric) medical object detection which can be applied to new data sets without manual intervention. It includes guides for 12 data sets that were used to develop and evaluate the performance of the proposed method.
Stars: ✭ 355 (+1124.14%)
Mutual labels:  3d-object-detection
Deepglobalregistration
[CVPR 2020 Oral] A differentiable framework for 3D registration
Stars: ✭ 222 (+665.52%)
Mutual labels:  pointcloud
roofn3d
Roof Classification, Segmentation, and Damage Completion using 3D Point Clouds
Stars: ✭ 35 (+20.69%)
Mutual labels:  pointcloud
3d Bat
3D Bounding Box Annotation Tool (3D-BAT) Point cloud and Image Labeling
Stars: ✭ 179 (+517.24%)
Mutual labels:  pointcloud
Pcat open source
PointCloud Annotation Tools, support to label object bound box, ground, lane and kerb
Stars: ✭ 209 (+620.69%)
Mutual labels:  pointcloud
lodToolkit
Level-of-details toolkit (LTK). Convert osgb lod tree to 3mx tree. Convert pointcloud in ply/las/laz/xyz to 3mx/osgb tree.
Stars: ✭ 81 (+179.31%)
Mutual labels:  pointcloud
Mvstudio
An integrated SfM (Structure from Motion) and MVS (Multi-View Stereo) solution.
Stars: ✭ 154 (+431.03%)
Mutual labels:  pointcloud
lt-mapper
A Modular Framework for LiDAR-based Lifelong Mapping
Stars: ✭ 301 (+937.93%)
Mutual labels:  pointcloud
D3feat
[TensorFlow] Implementation of CVPR'20 oral paper - D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features https://arxiv.org/abs/2003.03164
Stars: ✭ 143 (+393.1%)
Mutual labels:  pointcloud
global l0
Global L0 algorithm for regularity-constrained plane fitting
Stars: ✭ 45 (+55.17%)
Mutual labels:  pointcloud
imvoxelnet
[WACV2022] ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection
Stars: ✭ 179 (+517.24%)
Mutual labels:  3d-object-detection
lopocs
Migrated to: https://gitlab.com/Oslandia/lopocs
Stars: ✭ 78 (+168.97%)
Mutual labels:  pointcloud
3D object recognition
Recognize and localize an object in a 3D point cloud scene using a VFH-SVMs based method and a 3D-CNNs method
Stars: ✭ 91 (+213.79%)
Mutual labels:  pointcloud

FromVoxelToPoint & MGAF-3DSSD

This repository reproduces "From Voxel to Point: IoU-guided 3D Object Detection for Point Cloud with Voxel-to-Point Decoder" (FromVoxelToPoint) and "Anchor-free 3D Single Stage Detector with Mask-Guided Attention for Point Cloud" (MGAF-3DSSD), both published in ACM MM 2021.

The code is mainly based on OpenPCDet.

Introduction

We provide the code and training configurations of FromVoxelToPoint and MGAF-3DSSD on the KITTI and Waymo datasets. Checkpoints will not be released.

Requirements

The code is tested in the following environment:

  • Ubuntu 20.04.1 LTS
  • Python 3.6
  • PyTorch 1.7.1+cu110
  • CUDA 11.0
  • OpenPCDet v0.3.0 (you can easily port the relevant code to the latest OpenPCDet if you want)

Note that we use a modified spconv to avoid requiring sudo permissions during installation; it can be installed easily via setup.py.
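
As an optional sanity check before installation, you can confirm that the installed PyTorch build matches the expected CUDA version (in the environment above this should report 1.7.1+cu110, 11.0, and True on a machine with a visible GPU):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"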

Installation

a. Clone this repository.

git clone https://github.com/jialeli1/From-Voxel-to-Point.git
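
Then change into the repository root (the directory name follows from the clone URL above), since the remaining steps are run from there:

cd From-Voxel-to-Point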

b. Install the required Python libraries as follows:

pip install -r requirements.txt 

c. Compile the CUDA operators by running the following commands:

  • CUDA ops in OpenPCDet and the modified spconv:
# Build the OpenPCDet CUDA ops and the bundled modified spconv (run from the repository root).
python setup.py develop
# Build the DeformableConvolutionV2 (DCNv2) CUDA ops.
cd pcdet/ops/DeformableConvolutionV2PyTorch
sh make.sh

Dataset Preparation

We provide model configurations on KITTI and Waymo. Please follow OpenPCDet to prepare the datasets. You can also use "ln -s" to link an existing dataset here for a quick start.
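
For example, a KITTI setup via symlinks might look like the following sketch, run from the repository root; the /path/to/... sources are placeholders, and the target layout should match the OpenPCDet dataset preparation guide:

# Hypothetical source paths; link the raw KITTI splits into OpenPCDet's expected data/kitti layout.
mkdir -p data/kitti
ln -s /path/to/kitti/training data/kitti/training
ln -s /path/to/kitti/testing data/kitti/testing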

Training

Details are in the papers. If you use a different number of GPUs for training, you need to adjust the training epochs accordingly to attain decent performance.

You can run training and evaluation commands following OpenPCDet. We also provide some examples on KITTI as follows.

KITTI

  • models
# MGAF-3DSSD: an RTX 3090 GPU (24 GB) can hold 4 KITTI point clouds for training.
tools/cfgs/kitti_models/MGAF-3DSSD/mgaf-3dssd.yaml
tools/cfgs/kitti_models/MGAF-3DSSD/mgaf-3dssd_3classes.yaml

# FromVoxelToPoint: an RTX 3090 GPU (24 GB) can hold 3 KITTI point clouds for training; reproduction requires a large amount of GPU memory.
tools/cfgs/kitti_models/FV2P/fv2p.yaml
tools/cfgs/kitti_models/FV2P/fv2p_3classes.yaml
  • training on KITTI
cd tools

CUDA_VISIBLE_DEVICES=6,7 bash scripts/dist_train.sh 2 --cfg_file ./cfgs/kitti_models/MGAF-3DSSD/mgaf-3dssd.yaml

CUDA_VISIBLE_DEVICES=4,5,6,7 bash scripts/dist_train.sh 4 --cfg_file ./cfgs/kitti_models/FV2P/kitti_fv2p.yaml
  • evaluation on KITTI
cd tools

CUDA_VISIBLE_DEVICES=7 python test.py --cfg_file ./cfgs/kitti_models/MGAF-3DSSD/mgaf-3dssd.yaml --eval_all
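
To evaluate a single checkpoint instead of sweeping all of them, the standard OpenPCDet test interface can be used; the checkpoint path below is a placeholder:

CUDA_VISIBLE_DEVICES=7 python test.py --cfg_file ./cfgs/kitti_models/MGAF-3DSSD/mgaf-3dssd.yaml --ckpt /path/to/checkpoint_epoch_80.pth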

Waymo

  • models
# MGAF-3DSSD: 
tools/cfgs/waymo_models/MGAF-3DSSD/waymo_mgaf-3dssd_e36.yaml

# FromVoxelToPoint: 
tools/cfgs/waymo_models/FV2P/waymo_fv2p_e30.yaml
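
Training and evaluation on Waymo follow the same pattern as the KITTI commands above. For instance, a distributed training run might look like this sketch (the GPU ids and GPU count are placeholders):

cd tools

CUDA_VISIBLE_DEVICES=0,1,2,3 bash scripts/dist_train.sh 4 --cfg_file ./cfgs/waymo_models/MGAF-3DSSD/waymo_mgaf-3dssd_e36.yaml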

Citation

If you find this project useful in your research, please consider citing:

@inproceedings{fv2p_mm21,
  author    = {Jiale Li and
               Hang Dai and
               Ling Shao and
               Yong Ding},
  title     = {From Voxel to Point: IoU-guided 3D Object Detection for Point Cloud
               with Voxel-to-Point Decoder},
  booktitle = {{MM} '21: {ACM} Multimedia Conference},
  pages     = {4622--4631},
  year      = {2021},
}

@inproceedings{mgaf_mm21,
  author    = {Jiale Li and
               Hang Dai and
               Ling Shao and
               Yong Ding},
  title     = {Anchor-free 3D Single Stage Detector with Mask-Guided Attention for
               Point Cloud},
  booktitle = {{MM} '21: {ACM} Multimedia Conference},
  pages     = {553--562},
  year      = {2021},
}
