
xingyul / Flownet3d

License: MIT
FlowNet3D: Learning Scene Flow in 3D Point Clouds (CVPR 2019)

Programming Languages

Python

Projects that are alternatives of or similar to Flownet3d

3D PointCloud
Papers and Datasets about Point Cloud.
Stars: ✭ 179 (-28.11%)
Mutual labels:  point-cloud
SampleNet
Differentiable Point Cloud Sampling (CVPR 2020 Oral)
Stars: ✭ 212 (-14.86%)
Mutual labels:  point-cloud
Cylinder3D
Rank 1st in the leaderboard of SemanticKITTI semantic segmentation (both single-scan and multi-scan) (Nov. 2020) (CVPR2021 Oral)
Stars: ✭ 221 (-11.24%)
Mutual labels:  point-cloud
ORB-SLAM2 with Semantic Label
ORB-SLAM2 with semantic labels
Stars: ✭ 186 (-25.3%)
Mutual labels:  point-cloud
Graph-CNN in 3D Point Cloud Classification
Code for "A Graph-CNN for 3D Point Cloud Classification" (ICASSP 2018)
Stars: ✭ 206 (-17.27%)
Mutual labels:  point-cloud
CGAL
The public CGAL repository, see the README below
Stars: ✭ 2,825 (+1034.54%)
Mutual labels:  point-cloud
3D-BAT
3D Bounding Box Annotation Tool (3D-BAT) Point cloud and Image Labeling
Stars: ✭ 179 (-28.11%)
Mutual labels:  point-cloud
PCN
Code for PCN: Point Completion Network in 3DV'18 (Oral)
Stars: ✭ 238 (-4.42%)
Mutual labels:  point-cloud
libLAS
C++ library and programs for reading and writing ASPRS LAS format with LiDAR data
Stars: ✭ 211 (-15.26%)
Mutual labels:  point-cloud
PointNetVLAD
PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition, CVPR 2018
Stars: ✭ 224 (-10.04%)
Mutual labels:  point-cloud
MSN Point Cloud Completion
Morphing and Sampling Network for Dense Point Cloud Completion (AAAI2020)
Stars: ✭ 196 (-21.29%)
Mutual labels:  point-cloud
Frustum ConvNet
The PyTorch Implementation of F-ConvNet for 3D Object Detection
Stars: ✭ 203 (-18.47%)
Mutual labels:  point-cloud
KITTI Dataset
Visualising LiDAR data from the KITTI dataset.
Stars: ✭ 217 (-12.85%)
Mutual labels:  point-cloud
3DGNN PyTorch
3D Graph Neural Networks for RGBD Semantic Segmentation
Stars: ✭ 187 (-24.9%)
Mutual labels:  point-cloud
Cupoch
Robotics with GPU computing
Stars: ✭ 225 (-9.64%)
Mutual labels:  point-cloud
Cloud Annotation Tool
L-CAS 3D Point Cloud Annotation Tool
Stars: ✭ 182 (-26.91%)
Mutual labels:  point-cloud
pclpy
Python bindings for the Point Cloud Library (PCL)
Stars: ✭ 212 (-14.86%)
Mutual labels:  point-cloud
SPVNAS
[ECCV 2020] Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution
Stars: ✭ 239 (-4.02%)
Mutual labels:  point-cloud
ASIS
Associatively Segmenting Instances and Semantics in Point Clouds, CVPR 2019
Stars: ✭ 228 (-8.43%)
Mutual labels:  point-cloud
Point Cloud Annotation Tool
Stars: ✭ 224 (-10.04%)
Mutual labels:  point-cloud

FlowNet3D: Learning Scene Flow in 3D Point Clouds

Created by Xingyu Liu, Charles R. Qi and Leonidas J. Guibas from Stanford University and Facebook AI Research (FAIR).

Citation

If you find our work useful in your research, please cite:

    @inproceedings{liu:2019:flownet3d,
      title={FlowNet3D: Learning Scene Flow in 3D Point Clouds},
      author={Liu, Xingyu and Qi, Charles R and Guibas, Leonidas J},
      booktitle={CVPR},
      year={2019}
    }

Abstract

Many applications in robotics and human-computer interaction can benefit from understanding 3D motion of points in a dynamic environment, widely noted as scene flow. While most previous methods focus on stereo and RGB-D images as input, few try to estimate scene flow directly from point clouds. In this work, we propose a novel deep neural network named FlowNet3D that learns scene flow from point clouds in an end-to-end fashion. Our network simultaneously learns deep hierarchical features of point clouds and flow embeddings that represent point motions, supported by two newly proposed learning layers for point sets. We evaluate the network on both challenging synthetic data from FlyingThings3D and real Lidar scans from KITTI. Trained on synthetic data only, our network successfully generalizes to real scans, outperforming various baselines and showing competitive results to the prior art. We also demonstrate two applications of our scene flow output (scan registration and motion segmentation) to show its potential wide use cases.

Installation

Install TensorFlow. The code is tested under TF 1.9.0 (GPU version), g++ 5.4.0, CUDA 9.0 and Python 3.5 on Ubuntu 16.04. A few additional Python libraries are needed for data processing and visualization, such as cv2 (OpenCV). Access to GPUs is highly recommended.
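
A quick way to confirm that your environment matches the tested configuration is to check the TensorFlow version and GPU visibility from Python. This is a minimal sanity check, not part of the original repository:

    import tensorflow as tf

    # Tested configuration is TF 1.9.0 with GPU support (CUDA 9.0).
    print(tf.__version__)              # expect 1.9.0
    print(tf.test.is_gpu_available())  # True if a CUDA-capable GPU is visible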

Compile Customized TF Operators

The custom TF operators are included under tf_ops. You need to compile them first by running make in each ops subfolder (check the Makefile there). If necessary, update arch in the Makefiles to match the CUDA compute capability of your GPU.
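
After a successful build, each ops subfolder should contain a compiled shared library that TensorFlow can load at runtime. As a quick sanity check (a sketch only; the exact subfolder and .so name below are assumptions that depend on your build output):

    import tensorflow as tf

    # Load a compiled custom op library; adjust the path to match the .so
    # file actually produced by make in your ops subfolder.
    sampling_module = tf.load_op_library('tf_ops/sampling/tf_sampling_so.so')
    print(sampling_module)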

Usage

FlyingThings3D Data Preparation

The data preprocessing scripts are included in data_preprocessing. To process the raw data, first download the FlyingThings3D dataset; the files flyingthings3d__disparity.tar.bz2, flyingthings3d__disparity_change.tar.bz2, flyingthings3d__optical_flow.tar.bz2 and flyingthings3d__frames_finalpass.tar are needed. Then extract them into /path/to/flyingthings3d so that the directory looks like

/path/to/flyingthings3d
  disparity/
  disparity_change/
  optical_flow/
  frames_finalpass/

Then cd into the data_preprocessing directory and execute the following command to generate .npz files of processed data:

python proc_dataset_gen_point_pairs_color.py --input_dir /path/to/flyingthings3d --output_dir data_processed_maxcut_35_20k_2k_8192

The processed data is also provided here for download (total size ~11GB).
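
Each processed sample is a standard NumPy .npz archive, so its contents can be inspected directly. A minimal sketch, where the filename is a placeholder for any file produced by the preprocessing script:

    import numpy as np

    # Open one processed sample; replace the placeholder filename with a real one.
    data = np.load('data_processed_maxcut_35_20k_2k_8192/example.npz')
    for key in data.files:
        # Print each stored array's name and shape.
        print(key, data[key].shape)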

Training and Evaluation

To train the model, simply execute the shell script command_train.sh. Hyperparameters such as the batch size and learning rate can be adjusted by editing the flags in the script. The model used for training is model_concat_upsa.py.

sh command_train.sh

To evaluate the model, simply execute the shell script command_evaluate_flyingthings.sh.

sh command_evaluate_flyingthings.sh

A pre-trained model is provided here for download.

KITTI Experiment

We release the processed KITTI scene flow dataset here for download (total size ~266MB). It was produced by converting the 2D optical flow annotations into 3D scene flow and removing the ground points; we processed the first 150 data points from the KITTI scene flow dataset. Each data point is stored as a .npz file whose dictionary has three keys: pos1, pos2 and gt, representing the first frame of the point cloud, the second frame of the point cloud, and the ground-truth scene flow vectors for the points in the first frame.
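
For example, one sample can be loaded and a trivial zero-flow baseline scored with NumPy. The keys match the description above; only the filename is a placeholder:

    import numpy as np

    # Load one processed KITTI sample (placeholder filename).
    sample = np.load('kitti_rm_ground/000000.npz')
    pos1, pos2, gt = sample['pos1'], sample['pos2'], sample['gt']
    print(pos1.shape, pos2.shape, gt.shape)  # (N, 3) arrays

    # End-point error (EPE) of predicting zero motion for every point,
    # i.e. the mean norm of the ground-truth flow vectors.
    epe_zero = np.linalg.norm(gt, axis=1).mean()
    print('zero-flow EPE: %.4f' % epe_zero)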

To evaluate the FlyingThings3D-trained model on KITTI without fine-tuning, first download the processed KITTI data and extract it into the kitti_rm_ground/ directory. Then execute the shell script command_evaluate_kitti.sh.

sh command_evaluate_kitti.sh

Note that the model used for evaluation is model_concat_upsa_eval_kitti.py, not the model used for training.

License

Our code is released under the MIT License (see the LICENSE file for details).
