
hlesmqh / WS3D

License: MIT
Official version of 'Weakly Supervised 3D object detection from Lidar Point Cloud' (ECCV2020)

Programming Languages

python
139335 projects - #7 most used programming language
Cuda
1817 projects
C++
36643 projects - #6 most used programming language

Projects that are alternatives of or similar to WS3D

3d cnn tensorflow
KITTI data processing and 3D CNN for Vehicle Detection
Stars: ✭ 266 (+155.77%)
Mutual labels:  point-cloud, lidar, vehicle-detection
Extrinsic lidar camera calibration
This is a package for extrinsic calibration between a 3D LiDAR and a camera, described in paper: Improvements to Target-Based 3D LiDAR to Camera Calibration. This package is used for Cassie Blue's 3D LiDAR semantic mapping and automation.
Stars: ✭ 149 (+43.27%)
Mutual labels:  point-cloud, lidar
Lidar camera calibration
Light-weight camera LiDAR calibration package for ROS using OpenCV and PCL (PnP + LM optimization)
Stars: ✭ 133 (+27.88%)
Mutual labels:  point-cloud, lidar
Liblas
C++ library and programs for reading and writing ASPRS LAS format with LiDAR data
Stars: ✭ 211 (+102.88%)
Mutual labels:  point-cloud, lidar
Laser Camera Calibration Toolbox
A Laser-Camera Calibration Toolbox extending from that at http://www.cs.cmu.edu/~ranjith/lcct.html
Stars: ✭ 99 (-4.81%)
Mutual labels:  point-cloud, lidar
Awesome Robotic Tooling
Tooling for professional robotic development in C++ and Python with a touch of ROS, autonomous driving and aerospace.
Stars: ✭ 1,876 (+1703.85%)
Mutual labels:  point-cloud, lidar
Displaz
A hackable lidar viewer
Stars: ✭ 177 (+70.19%)
Mutual labels:  point-cloud, lidar
Depth clustering
🚕 Fast and robust clustering of point clouds generated with a Velodyne sensor.
Stars: ✭ 657 (+531.73%)
Mutual labels:  point-cloud, lidar
Spvnas
[ECCV 2020] Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution
Stars: ✭ 239 (+129.81%)
Mutual labels:  point-cloud, lidar
MCIS wsss
Code for ECCV 2020 paper (oral): Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation
Stars: ✭ 151 (+45.19%)
Mutual labels:  weakly-supervised-learning, eccv2020
Displaz.jl
Julia bindings for the displaz lidar viewer
Stars: ✭ 16 (-84.62%)
Mutual labels:  point-cloud, lidar
Weakly Supervised 3d Object Detection
Weakly Supervised 3D Object Detection from Point Clouds (VS3D), ACM MM 2020
Stars: ✭ 61 (-41.35%)
Mutual labels:  point-cloud, lidar
Hdl graph slam
3D LIDAR-based Graph SLAM
Stars: ✭ 945 (+808.65%)
Mutual labels:  point-cloud, lidar
Openpcdet
OpenPCDet Toolbox for LiDAR-based 3D Object Detection.
Stars: ✭ 2,199 (+2014.42%)
Mutual labels:  point-cloud, 3d-detection
Lidar camera calibration
ROS package to find a rigid-body transformation between a LiDAR and a camera for "LiDAR-Camera Calibration using 3D-3D Point correspondences"
Stars: ✭ 734 (+605.77%)
Mutual labels:  point-cloud, lidar
Vision3d
Research platform for 3D object detection in PyTorch.
Stars: ✭ 177 (+70.19%)
Mutual labels:  point-cloud, lidar
Point2Mesh
Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance (ECCV2020)
Stars: ✭ 61 (-41.35%)
Mutual labels:  point-cloud, eccv2020
Interactive slam
Interactive Map Correction for 3D Graph SLAM
Stars: ✭ 372 (+257.69%)
Mutual labels:  point-cloud, lidar
Superpoint graph
Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs
Stars: ✭ 533 (+412.5%)
Mutual labels:  point-cloud, lidar
Pclpy
Python bindings for the Point Cloud Library (PCL)
Stars: ✭ 212 (+103.85%)
Mutual labels:  point-cloud, lidar

WS3D

Weakly Supervised 3D object detection from Lidar Point Cloud

This is the official repo of 'Weakly Supervised 3D object detection from Lidar Point Cloud' (ECCV2020).

Authors: Qinghao Meng, Wenguan Wang, Tianfei Zhou, Jianbing Shen, Luc Van Gool, and Dengxin Dai

(Introduction figure)

Introduction:

This work proposes a weakly supervised approach to 3D object detection that requires only a small set of weakly annotated scenes, associated with a few precisely labeled object instances. This is achieved by a two-stage architecture design. Using only 500 weakly annotated scenes and 534 precisely labeled vehicle instances, our method achieves 85–95% of the performance of current top-leading, fully supervised detectors (which require 3,712 exhaustively and precisely annotated scenes with 15,654 instances) on the KITTI 3D object detection leaderboard. More importantly, our trained model can be applied as a 3D object annotator, generating annotations that can be used to train 3D object detectors to over 94% of their original performance (obtained with manually labeled data). These designs make our approach highly practical and open new opportunities for learning 3D object detection with a reduced annotation burden.

For more details of WS3D, please refer to our paper or project page. The implementation is based on the open-source PointRCNN codebase.

ToDo list

  • Installation
  • Dataset preparation
  • BEV annotator instruction
  • BEV center-click annotation
  • Stage-1 Training
  • Partly labeled objects list
  • Stage-2 data preparation
  • Stage-2 Training
  • 3D Annotation tool instruction
  • Pretrained model

Installation:

Requirements:

All code is tested in the following environment:

Linux (tested on Ubuntu 16.04), Python 3.6, PyTorch 1.1.0

Install WS3D

a. Clone the WS3D repository.

git clone --recursive https://github.com/hlesmqh/WS3D.git

b. Install the dependent python libraries.

pip install -r requirement.txt 

c. Build and install the pointnet2_lib, iou3d, roipool3d libraries by executing the following command:

sh build_and_install.sh

Dataset Preparation

Please download the official KITTI 3D object detection dataset and organize the downloaded files as follows:

Kitti
├── ImageSets
├── object
│   ├── training
│   │   ├── calib & velodyne & label_2 & image_2
│   ├── testing
│   │   ├── calib & velodyne & image_2

Change the files /tools/train_*.py as follows:

DATA_PATH = os.path.join('/your/path/Kitti/object')
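
For reference, the snippet below is a minimal sketch (not part of the repo) that verifies the expected KITTI layout before you edit DATA_PATH; the root path is a placeholder to replace with your own.

import os

# Hypothetical helper: check that the KITTI folders expected by the training scripts exist.
DATA_PATH = '/your/path/Kitti/object'  # placeholder

def check_kitti_layout(root):
    expected = {
        'training': ['calib', 'velodyne', 'label_2', 'image_2'],
        'testing': ['calib', 'velodyne', 'image_2'],
    }
    for split, subdirs in expected.items():
        for sub in subdirs:
            path = os.path.join(root, split, sub)
            if not os.path.isdir(path):
                print('Missing directory:', path)

check_kitti_layout(DATA_PATH)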

BEV Annotator Instruction

Our BEV center-click annotator is located in /Pointcloud_Annotation/. To run the annotator, execute:

python ./Pointcloud_Annotation/annotation.py 

Note that a Qt interface must be available on your machine.

BEV center-click annotation

Our BEV click annotations can be downloaded from here or from BaiduDisk.

Stage-1 Training

python ./tools/train_rpn.py --noise_kind='label_noise' --weakly_num=500
  • The noise_kind argument is the directory of the BEV center-click annotation files; they are stored in the official KITTI label format, but only the (x, z) information is valid (see the parsing sketch after this list).
  • The weakly_num argument is the number of click-annotated scenes; in our implementation we choose the first 500 non-empty scenes of the KITTI training split, which is already randomly shuffled in the official release.
  • The other training parameters can be found in tools/cfgs/weaklyRPN and in the arguments of /tools/train_rpn.py.
  • Our BEV annotator and BEV center-click annotations will be available soon, but you can also set noise_kind='label_2' to use the accurate (x, z) information from the original KITTI labels.
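
As an illustration of the annotation format (a sketch, not the repo's actual loader), the snippet below reads one KITTI-format label file produced by center-click annotation and keeps only the (x, z) BEV center of each object; the file path in the usage example is hypothetical.

# Minimal sketch: parse a KITTI-format label file, keeping only (x, z) per object.
def load_click_centers(label_path):
    centers = []
    with open(label_path, 'r') as f:
        for line in f:
            fields = line.strip().split(' ')
            if len(fields) < 15:
                continue
            cls_name = fields[0]
            x, y, z = map(float, fields[11:14])  # location (x, y, z) in camera coordinates
            centers.append((cls_name, x, z))     # only (x, z) is meaningful for click labels
    return centers

# Example (hypothetical path): centers = load_click_centers('label_noise/000000.txt')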

Stage-2 Data Preparation

Please select your trained stage-1 model and generate your stage-2 training set following the guidance below. In /tools/generate_box_dataset.py, change:

ckpt_file = '/path/to/your/ckpt.pth'
save_dir =  '/path/to/save/this/small/trainingset/'

The program generates a file of proposals produced by your stage-1 model and saves them together with the nearby ground-truth boxes.
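
To make the idea of "nearby ground-truth boxes" concrete, here is an illustrative sketch only (generate_box_dataset.py is the authoritative implementation); the 1 m BEV distance threshold is an assumption for illustration.

import numpy as np

# Sketch: pair each stage-1 proposal with a ground-truth box whose BEV center lies nearby.
def match_proposals_to_gt(proposal_centers, gt_centers, max_dist=1.0):
    # proposal_centers: (N, 2) array of BEV (x, z) centers; gt_centers: (M, 2) array
    pairs = []
    if len(gt_centers) == 0:
        return pairs
    for i, p in enumerate(proposal_centers):
        d = np.linalg.norm(gt_centers - p, axis=1)  # BEV distance to every ground truth
        j = int(np.argmin(d))
        if d[j] < max_dist:                         # keep only proposals with a nearby GT
            pairs.append((i, j))
    return pairs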

Partly labeled objects list

This list is obtained by randomly selecting ground-truth boxes that have at least one proposal nearby. For convenience, we provide a script that helps you select training instances from the stage-2 training set. The list used by our best model can be reproduced by generating the stage-2 training set with our pretrained stage-1 model. When you train your stage-2 model, the program selects the instances from your saved set; the selection is kept fixed for subsequent trainings.
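
The sketch below illustrates one way such a fixed selection could be cached so that later trainings reuse it; the helper, the file name, and the default count (534, matching the paper's setting) are assumptions, not the repo's actual script.

import os
import random

# Sketch: randomly pick ground-truth instance IDs that have at least one nearby proposal,
# and cache the choice so that repeated trainings reuse the same list.
def select_labeled_instances(candidate_ids, num_instances=534, list_file='selected_instances.txt'):
    if os.path.exists(list_file):                    # reuse a previously generated list
        with open(list_file) as f:
            return [line.strip() for line in f]
    chosen = random.sample(candidate_ids, min(num_instances, len(candidate_ids)))
    with open(list_file, 'w') as f:
        f.write('\n'.join(chosen))
    return chosen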

Stage-2 Training

You need to change the training set path in self.boxes_dir = os.path.join(self.imageset_dir, 'boxes_410fl030500_Car') and then run:

python ./tools/train_cascade1.py --weakly_num=500

Pretrained Model

You can download the pretrained model (Car) of WS3D from here: Stage-1 and Stage-2. It is trained on the train split (3,712 samples) and evaluated on the val split (3,769 samples) and the test split (7,518 samples). The performance on the validation set is as follows:

Car AP@0.70, 0.70, 0.70:
bbox AP:90.38, 89.15, 88.59
bev  AP:88.95, 85.83, 85.03
3d   AP:85.04, 75.94, 74.38
aos  AP:90.25, 88.78, 88.11

3D Annotation tool instruction

Run python ./Pointcloud_Annotation/annotation.py and you will see the interface below.

(Annotation interface screenshot)

First, click the object in the camera-view image at the top. The program selects the point whose projection onto this view is nearest to your mouse click and shows a zoomed-in BEV map at the lower left. If you are not satisfied with this region, you can click the camera view again to get a better BEV region. Then click the BEV center of the object on this zoomed-in BEV map; your click location will be saved to the file configured at f = open('label_w/label.txt', 'a+'). After labeling all objects, click the global BEV map at the lower right to open the next scene. Note that the program automatically resumes from your last labeled image.
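
For intuition, the snippet below sketches the "nearest projected point" step described above; the shipped annotation tool is the authoritative implementation. P2 is the 3x4 KITTI camera projection matrix, and points_rect is assumed to already be in rectified camera coordinates.

import numpy as np

# Sketch: project points into the image and return the index of the point
# whose projection is closest to the mouse click (u, v).
def nearest_point_to_click(points_rect, P2, click_uv):
    pts_h = np.hstack([points_rect, np.ones((points_rect.shape[0], 1))])  # (N, 4) homogeneous
    proj = pts_h @ P2.T                                                   # (N, 3)
    uv = proj[:, :2] / proj[:, 2:3]                                       # pixel coordinates
    d = np.linalg.norm(uv - np.asarray(click_uv), axis=1)
    return int(np.argmin(d))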

Citation:

Please consider citing this paper if it helps your research:

@inproceedings{meng2020ws3d,
    title={Weakly Supervised 3D Object Detection from Lidar Point Cloud},
    author={Meng, Qinghao and Wang, Wenguan and Zhou, Tianfei and Shen, Jianbing and Van Gool, Luc and Dai, Dengxin},
    booktitle={ECCV},
    year={2020}
}