sibozhang / Depth-Guided-Inpainting

License: other
Code for ECCV 2020 "DVI: Depth Guided Video Inpainting for Autonomous Driving"



Depth-Guided-Inpainting

This is the code for "DVI: Depth Guided Video Inpainting for Autonomous Driving", ECCV 2020. Project Page

Video Inpainting:

Introduction

To get clear street-view and photo-realistic simulation in autonomous driving, we present an automatic video inpainting algorithm that can remove traffic agents from videos and synthesize missing regions with the guidance of depth/point cloud. By building a dense 3D map from stitched point clouds, frames within a video are geometrically correlated via this common 3D map. In order to fill a target inpainting area in a frame, it is straightforward to transform pixels from other frames into the current one with correct occlusion. Furthermore, we are able to fuse multiple videos through 3D point cloud registration, making it possible to inpaint a target video with multiple source videos.
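The pixel transfer described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation: `project_points` and `fill_target_region` are hypothetical helper names, a pinhole camera with known intrinsics K and pose (R, t) is assumed, and occlusion is handled with a simple per-pixel z-buffer so that nearer map points win.

```python
import numpy as np

def project_points(points, K, R, t):
    """Project Nx3 world points into a camera with intrinsics K and pose (R, t).
    Returns Nx2 pixel coordinates and the N depths (z in the camera frame)."""
    cam = points @ R.T + t            # world -> camera frame
    z = cam[:, 2]
    uv = (cam @ K.T)[:, :2] / z[:, None]  # perspective projection
    return uv, z

def fill_target_region(mask, points, colors, K, R, t):
    """Fill the masked pixels of a target frame by splatting colored 3D map
    points; the z-buffer keeps the nearest point per pixel (correct occlusion)."""
    h, w = mask.shape
    out = np.zeros((h, w, 3))
    zbuf = np.full((h, w), np.inf)
    uv, z = project_points(points, K, R, t)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi, ci in zip(u[ok], v[ok], z[ok], colors[ok]):
        if mask[vi, ui] and zi < zbuf[vi, ui]:  # only fill inside the inpainting mask
            zbuf[vi, ui] = zi
            out[vi, ui] = ci
    return out
```

In the real pipeline the colored points come from the stitched dense map, and the candidate colors come from other frames of the same (or a fused) video.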

Data preparation

The inpainting dataset consists of synchronized labeled images and LiDAR-scanned point clouds, captured by a HESAI Pandora All-in-One Sensing Kit. It was collected under various lighting conditions and traffic densities in Beijing, China.

Please download the full data at Apolloscape or use the links below. This is the first video inpainting dataset with depth. The synced lidar and image data can also be used for 3D perception and other tasks.

Sample data: sample_mask_and_image.zip sample_data.zip sample_lidar_bg.zip

Full data:

mask_and_image_0.zip data_0.zip lidar_bg_0.zip

mask_and_image_1.zip data_1.zip lidar_bg_1.zip

mask_and_image_2.zip data_2.zip lidar_bg_2.zip

mask_and_image_3.zip data_3.zip lidar_bg_3.zip

Data Structure

The inpainting dataset is organized as follows:

  1. xxx-yyy_mask.zip: xxx.aaa.jpg is the original image; xxx.aaa.png is the labeled mask of cars.

  2. xxx-yyy.zip: includes ds_map.ply, global_poses.txt, rel_poses.txt, and xxx.aaa_optR.xml. ds_map.ply is the dense map built from lidar frames.

  3. lidar_bg.zip: lidar background point clouds in PLY format.
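The image/mask naming convention above (same stem, .jpg for the frame and .png for its mask) can be consumed with a small pairing helper. This is a sketch, not the repo's own loader; `pair_frames` is a hypothetical name and the files are assumed to sit in one directory listing:

```python
import os

def pair_frames(names):
    """Pair each original image (.jpg) with the car mask (.png) sharing its
    stem, as in xxx-yyy_mask.zip. Images without a mask are skipped."""
    jpgs = {os.path.splitext(n)[0]: n for n in names if n.endswith(".jpg")}
    pngs = {os.path.splitext(n)[0]: n for n in names if n.endswith(".png")}
    return [(jpgs[s], pngs[s]) for s in sorted(jpgs) if s in pngs]
```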

Directory Structure

catkin_ws
├── build
├── devel
├── src   
code
├── libDAI-0.3.0
├── opengm
data
├── pandora_liang
    ├── set2
          ├── 1534313590-1534313597
          ├── 1534313590-1534313597_mask
          ├── 1534313590-1534313597_results
          ├── lidar_bg
          ├── ...

Set up

  1. Install ROS Kinetic at http://wiki.ros.org/ROS/Installation

  2. Install opengm

    download OpenGM 2.3.5 at http://hciweb2.iwr.uni-heidelberg.de/opengm/index.php?l0=library

    or

    https://github.com/opengm/opengm for version 2.0.2

  3. Build opengm with MRF:

    cd code/opengm
    mkdir build
    cd build 
    cmake -DWITH_MRF=ON ..
    make
    sudo make install
    
  4. Make catkin:

    cd catkin_ws
    catkin_make
    source devel/setup.bash
    

Evaluation

cd catkin_ws 
rosrun loam_velodyne videoInpaintingTexSynthFusion 1534313590 1534313597 1534313594 ../data/pandora_liang/set2

Baseline result

| Method | MAE | RMSE | PSNR | SSIM |
| ------ | ----- | ----- | ------ | ----- |
| DVI | 6.135 | 9.633 | 21.631 | 0.895 |
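The reported image-quality metrics can be reproduced roughly as follows (MAE, RMSE, and PSNR on 8-bit images; SSIM is omitted here because it is usually taken from a library such as skimage.metrics.structural_similarity rather than reimplemented). These are standard definitions, not code from this repo:

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images."""
    return np.abs(a.astype(float) - b.astype(float)).mean()

def rmse(a, b):
    """Root-mean-square error between two images."""
    return np.sqrt(((a.astype(float) - b.astype(float)) ** 2).mean())

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB, assuming 8-bit images by default."""
    m = ((a.astype(float) - b.astype(float)) ** 2).mean()
    return np.inf if m == 0 else 10 * np.log10(max_val ** 2 / m)
```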

How to label your own data and build 3D map

  1. Label Mask image of inpainting area

    python label_mask.py
    

    Keys: F = forward, S = undo, D = backward

  2. Build 3D map from lidar frames

    rosrun loam_velodyne loamMapper 1534313591 1534313599 /disk1/data/pandora_liang/set2
    

    This produces global_poses.txt and rel_poses.txt. Use MeshLab to visualize ds_map.ply.
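ds_map.ply follows the standard PLY format, so its header tells you the point count and per-vertex fields before you open it in MeshLab or your own code. A minimal ASCII-header parser (hypothetical helper, not part of the repo; assumes a single `element vertex` block, which is typical for LOAM-style maps):

```python
def parse_ply_header(text):
    """Parse a PLY header and return (format, vertex_count, property_names)."""
    fmt, count, props = None, 0, []
    in_vertex = False
    for line in text.splitlines():
        tok = line.split()
        if not tok:
            continue
        if tok[0] == "format":
            fmt = tok[1]                       # e.g. 'ascii' or 'binary_little_endian'
        elif tok[0] == "element":
            in_vertex = tok[1] == "vertex"     # only collect vertex properties
            if in_vertex:
                count = int(tok[2])
        elif tok[0] == "property" and in_vertex:
            props.append(tok[-1])              # property name is the last token
        elif tok[0] == "end_header":
            break
    return fmt, count, props
```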

Citation

Please cite our paper in your publications.

DVI: Depth Guided Video Inpainting for Autonomous Driving.

Miao Liao, Feixiang Lu, Dingfu Zhou, Sibo Zhang, Wei Li, Ruigang Yang. ECCV 2020. PDF, Webpage, Inpainting Dataset, Result Video, Presentation Video

@inproceedings{liao2020dvi,
  title={DVI: Depth Guided Video Inpainting for Autonomous Driving},
  author={Liao, Miao and Lu, Feixiang and Zhou, Dingfu and Zhang, Sibo and Li, Wei and Yang, Ruigang},
  booktitle={European Conference on Computer Vision},
  pages={1--17},
  year={2020},
  organization={Springer}
}

ECCV 2020 Presentation Video

Depth Guided Video Inpainting for Autonomous Driving

Result Video

Depth Guided Video Inpainting for Autonomous Driving

Q & A

Get MRF-LIB working within opengm2:

~/code/opengm/build$ cmake -DWITH_MRF=ON ..  #turn on MRF option within opengm cmake
~/code/opengm/src/external/patches/MRF$ ./patchMRF-v2.1.sh

In patchMRF-v2.1.sh, change TRWS_URL to:

TRWS_URL=https://download.microsoft.com/download/6/E/D/6ED0E6CF-C06E-4D4E-9F70-C5932795CC12/