fangchangma / Self Supervised Depth Completion

License: MIT
ICRA 2019 "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera"

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Self Supervised Depth Completion

rlas
R package to read and write las and laz files used to store LiDAR data
Stars: ✭ 23 (-94.85%)
Mutual labels:  lidar
Sparse Depth Completion
Predict dense depth maps from sparse and noisy LiDAR frames guided by RGB images. (Ranked 1st place on KITTI)
Stars: ✭ 272 (-39.15%)
Mutual labels:  lidar
Dynamic robot localization
Point cloud registration pipeline for robot localization and 3D perception
Stars: ✭ 339 (-24.16%)
Mutual labels:  lidar
Pandora SDK
Development kit for Pandora
Stars: ✭ 14 (-96.87%)
Mutual labels:  lidar
3dfier
The open-source tool for creating 3D models
Stars: ✭ 260 (-41.83%)
Mutual labels:  lidar
Overlapnet
OverlapNet - Loop Closing for 3D LiDAR-based SLAM (chen2020rss)
Stars: ✭ 299 (-33.11%)
Mutual labels:  lidar
UrbanLoco
UrbanLoco: A Full Sensor Suite Dataset for Mapping and Localization in Urban Scenes
Stars: ✭ 147 (-67.11%)
Mutual labels:  lidar
Tracking With Extended Kalman Filter
Object (e.g. pedestrian, vehicle) tracking with an Extended Kalman Filter (EKF), fusing data from both lidar and radar sensors.
Stars: ✭ 393 (-12.08%)
Mutual labels:  lidar
3d cnn tensorflow
KITTI data processing and 3D CNN for Vehicle Detection
Stars: ✭ 266 (-40.49%)
Mutual labels:  lidar
Sc Lego Loam
LiDAR SLAM: Scan Context + LeGO-LOAM
Stars: ✭ 332 (-25.73%)
Mutual labels:  lidar
lidar transfer
Code for Langer et al. "Domain Transfer for Semantic Segmentation of LiDAR Data using Deep Neural Networks", IROS, 2020.
Stars: ✭ 54 (-87.92%)
Mutual labels:  lidar
Veloview
VeloView performs real-time visualization and easy processing of live captured 3D LiDAR data from Velodyne sensors (Alpha Prime™, Puck™, Ultra Puck™, Puck Hi-Res™, Alpha Puck™, Puck LITE™, HDL-32, HDL-64E). Runs on Windows, Linux, and macOS.
Stars: ✭ 253 (-43.4%)
Mutual labels:  lidar
Lidr
R package for airborne LiDAR data manipulation and visualisation for forestry applications
Stars: ✭ 310 (-30.65%)
Mutual labels:  lidar
tloam
T-LOAM: Truncated Least Squares Lidar-only Odometry and Mapping in Real-Time
Stars: ✭ 164 (-63.31%)
Mutual labels:  lidar
Interactive slam
Interactive Map Correction for 3D Graph SLAM
Stars: ✭ 372 (-16.78%)
Mutual labels:  lidar
Awesome-3D-Object-Detection-for-Autonomous-Driving
Papers on 3D Object Detection for Autonomous Driving
Stars: ✭ 52 (-88.37%)
Mutual labels:  lidar
Open3d Ml
An extension of Open3D to address 3D Machine Learning tasks
Stars: ✭ 284 (-36.47%)
Mutual labels:  lidar
Semantic suma
SuMa++: Efficient LiDAR-based Semantic SLAM (Chen et al., IROS 2019)
Stars: ✭ 431 (-3.58%)
Mutual labels:  lidar
Pptk
The Point Processing Toolkit (pptk) is a Python package for visualizing and processing 2-d/3-d point clouds.
Stars: ✭ 383 (-14.32%)
Mutual labels:  lidar
Hdl localization
Real-time 3D localization using a (Velodyne) 3D LIDAR
Stars: ✭ 332 (-25.73%)
Mutual labels:  lidar

self-supervised-depth-completion

This repo is the PyTorch implementation of our ICRA'19 paper on "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera", developed by Fangchang Ma, Guilherme Venturelli Cavalheiro, and Sertac Karaman at MIT. A video demonstration is available on YouTube.

Our network is trained with the KITTI dataset alone, without pretraining on Cityscapes or other similar driving datasets (either synthetic or real). The use of additional data is likely to further improve the accuracy.

Please create a new issue for code-related questions.

Contents

  1. Dependency
  2. Data
  3. Trained Models
  4. Commands
  5. Citation

Dependency

This code was tested with Python 3 and PyTorch 1.0 on Ubuntu 16.04.

pip install numpy matplotlib Pillow
pip install torch torchvision # pytorch

# self-supervised training additionally requires OpenCV with the contrib modules
pip install opencv-contrib-python==3.4.2.16
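
As an optional sanity check, the short script below verifies that the tested versions are importable. The specific reason for the contrib requirement is our assumption (the xfeatures2d feature-matching module, which a plain opencv-python install does not ship).

import torch
import cv2

print("PyTorch:", torch.__version__)  # tested with 1.0
print("OpenCV:", cv2.__version__)     # tested with 3.4.2 (contrib build)

# Assumption: the contrib build is needed for cv2.xfeatures2d;
# this line raises AttributeError on a non-contrib install.
sift = cv2.xfeatures2d.SIFT_create()
print("SIFT available:", sift is not None)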

Data

  • Download the KITTI Depth Dataset from the KITTI website. Use the following scripts to download and extract the corresponding RGB images from the raw dataset.
./download/rgb_train_downloader.sh
./download/rgb_val_downloader.sh

The downloaded RGB files will be stored in the ../data/data_rgb folder. The overall code, data, and results directories are structured as follows (updated on Oct 1, 2019):

.
├── self-supervised-depth-completion
├── data
|   ├── data_depth_annotated
|   |   ├── train
|   |   ├── val
|   ├── data_depth_velodyne
|   |   ├── train
|   |   ├── val
|   ├── depth_selection
|   |   ├── test_depth_completion_anonymous
|   |   ├── test_depth_prediction_anonymous
|   |   ├── val_selection_cropped
|   └── data_rgb
|   |   ├── train
|   |   ├── val
├── results
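
The depth maps in data_depth_velodyne and data_depth_annotated follow the standard KITTI depth format: 16-bit PNGs in which depth in meters equals the pixel value divided by 256, and a value of 0 marks pixels without a LiDAR return. A minimal loader sketch (the example path is hypothetical; substitute a real drive and frame):

import numpy as np
from PIL import Image

def read_kitti_depth(path):
    """Load a KITTI depth PNG: 16-bit, depth in meters = pixel value / 256."""
    depth_png = np.asarray(Image.open(path), dtype=np.uint16)
    valid = depth_png > 0  # 0 marks pixels with no LiDAR return
    depth = depth_png.astype(np.float32) / 256.0
    return depth, valid

# Hypothetical example path inside the tree above:
# depth, valid = read_kitti_depth(
#     "../data/data_depth_velodyne/train/<drive>/proj_depth/velodyne_raw/image_02/<frame>.png")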

Trained Models

Download our trained models at http://datasets.lids.mit.edu/self-supervised-depth-completion to a folder of your choice.
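
To inspect a downloaded checkpoint before passing it to --resume or --evaluate, a minimal sketch follows. The filename is hypothetical, and the dictionary keys inside the released checkpoints are an assumption, so print them to see what the archive actually contains.

import torch

# Hypothetical filename; map_location="cpu" allows inspection without a GPU.
checkpoint = torch.load("model_best.pth.tar", map_location="cpu")
print(checkpoint.keys() if isinstance(checkpoint, dict) else type(checkpoint))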

Commands

A complete list of training options is available with

python main.py -h

For instance,

# train with the KITTI semi-dense annotations, rgbd input, and batch size 1
python main.py --train-mode dense -b 1 --input rgbd

# train with the self-supervised framework, not using ground truth
python main.py --train-mode sparse+photo 

# resume previous training
python main.py --resume [checkpoint-path] 

# test the trained model on the val_selection_cropped data
python main.py --evaluate [checkpoint-path] --val select

Citation

If you use our code or method in your work, please cite the following:

@inproceedings{ma2018self,
	title={Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera},
	author={Ma, Fangchang and Cavalheiro, Guilherme Venturelli and Karaman, Sertac},
	booktitle={ICRA},
	year={2019}
}
@inproceedings{Ma2017SparseToDense,
	title={Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image},
	author={Ma, Fangchang and Karaman, Sertac},
	booktitle={ICRA},
	year={2018}
}