DeepMatchVO

Implementation of ICRA 2019 paper: Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation

@inproceedings{shen2019icra,  
  title={Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation},
  author={Shen, Tianwei and Luo, Zixin and Zhou, Lei and Deng, Hanyu and Zhang, Runze and Fang, Tian and Quan, Long},  
  booktitle={International Conference on Robotics and Automation},  
  year={2019},  
  organization={IEEE}  
}

Update (Sep 26, 2019):

We published a follow-up paper on this topic, whose updated loss terms have a positive influence on depth estimation performance. See Self-Supervised Learning of Depth and Motion Under Photometric Inconsistency for details.

@inproceedings{shen2019iccvw,  
  title={Self-Supervised Learning of Depth and Motion Under Photometric Inconsistency},
  author={Shen, Tianwei and Zhou, Lei and Luo, Zixin and Yao, Yao and Li, Shiwei and Zhang, Jiahui and Fang, Tian and Quan, Long},  
  booktitle={International Conference on Computer Vision (ICCV) Workshops},  
  year={2019},  
  organization={IEEE}  
}

Environment

This codebase is tested on Ubuntu 16.04 with TensorFlow 1.7 and CUDA 9.0.
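
A quick sanity check of the environment can be done from Python (a minimal sketch; it only assumes a working TensorFlow install):

# Check the TensorFlow version and whether a CUDA-capable GPU is visible.
# The expected setup here is TensorFlow 1.7 with CUDA 9.0.
import tensorflow as tf

print('TensorFlow version:', tf.__version__)
print('GPU available:', tf.test.is_gpu_available())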

Demo

Download Pre-trained Models

Download the models presented in the paper, and then unzip them into the ckpt folder under the root.
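
If you prefer to script the extraction, here is a minimal sketch; the archive name pretrained_models.zip is only a placeholder for whatever file you downloaded:

# Unzip the downloaded model archive into the ckpt folder under the repo root.
# 'pretrained_models.zip' is a placeholder; substitute the actual file name.
import os
import zipfile

if not os.path.exists('ckpt'):
    os.makedirs('ckpt')
with zipfile.ZipFile('pretrained_models.zip') as zf:
    zf.extractall('ckpt')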

Run a Simple Script

After downloading the model, you can run a simple demo to make sure the setup is correct.

python demo.py

The output is shown below

Generate Train and Test Data

Given that you have already downloaded the KITTI odometry and raw datasets, the provided Python script data/prepare_train_data.py can generate the training data with SIFT feature matches. However, the feature and match files follow our internal format, which is not publicly available at this point. Instead, we suggest first generating the concatenated image triplets with

# for odometry dataset
python data/prepare_train_data.py --dataset_dir=$kitti_raw_odom --dataset_name=kitti_odom --dump_root=$kitti_odom_match3 --seq_length=3 --img_width=416 --img_height=128 --num_threads=8

where $kitti_raw_odom is the input odometry dataset and $kitti_odom_match3 is the output directory for the training files. Some example input paths (on my machine) are shown in command.sh.

Then download our pre-computed camera/match files from the link, and replace the corresponding generated camera files in $kitti_odom_match3 with the ones you have downloaded. They contain all the camera intrinsics and the sampled matching information: in each file of an image triplet, the first line is the camera intrinsics, and the next 200 (2x100) lines are the matching coordinates for the two image pairs (target image with the left source image, and target image with the right source image).
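
To illustrate the layout described above, here is a small reader sketch; the comma-separated formatting and the number of values per match line are assumptions, so check the downloaded files for the exact layout:

# Read one camera/match file of an image triplet: line 1 holds the camera
# intrinsics, the next 200 (2 x 100) lines hold match coordinates for the two
# image pairs (target/left source, then target/right source). The
# comma-separated layout is an assumption; adjust the parsing if the files differ.
import numpy as np

def read_cam_match_file(path, match_num=100):
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    intrinsics = np.array([float(v) for v in lines[0].split(',')])
    coords = np.array([[float(v) for v in line.split(',')]
                       for line in lines[1:1 + 2 * match_num]])
    left_matches = coords[:match_num]    # target image <-> left source image
    right_matches = coords[match_num:]   # target image <-> right source image
    return intrinsics, left_matches, right_matches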

Train

Training, e.g. on the KITTI odometry dataset, is done with

# Train on KITTI odometry dataset
match_num=100
python train.py --dataset_dir=$kitti_odom_match3 --checkpoint_dir=$checkpoint_dir --img_width=416 --img_height=128 --batch_size=4 --seq_length 3 \
    --max_steps 300000 --save_freq 2000 --learning_rate 0.001 --num_scales 1 --init_ckpt_file $checkpoint_dir'model-'$model_idx --continue_train=True --match_num $match_num

We suggest training from a pre-trained model, such as the ones we have provided in models. Also note: do not use the model trained on the KITTI odometry dataset (for pose evaluation) for depth evaluation, nor the model trained on the KITTI Eigen split for pose evaluation. Otherwise, you will get better but biased (train-on-test) results, because the test samples in one dataset overlap with the training samples in the other.

Test

To evaluate the depth and pose estimation performance reported in the paper, use

# Testing depth model
r=250000
depth_ckpt_file=$root_folder$checkpoint_dir'model-'$r
depth_pred_file='output/model-'$r'.npy' 
python test_kitti_depth.py --dataset_dir $kitti_raw_dir --output_dir $output_folder --ckpt_file $depth_ckpt_file #--show
python kitti_eval/eval_depth.py --kitti_dir=$kitti_raw_dir --pred_file $depth_pred_file #--show True --use_interp_depth True

You can also use the --show option to visualize the depth maps.
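
To inspect the saved predictions directly, here is a small sketch; it assumes the .npy file holds an array of per-image depth maps, so adjust it if the stored format differs:

# Load the saved depth predictions and visualize the first one.
# Assumes 'output/model-250000.npy' stores an array of per-image depth maps.
import numpy as np
import matplotlib.pyplot as plt

preds = np.load('output/model-250000.npy')
print('prediction array shape:', preds.shape)

plt.imshow(preds[0], cmap='plasma')
plt.colorbar()
plt.show()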

# Testing pose model
sl=3
r=258000
pose_ckpt_file=$root_folder$checkpoint_dir'model-'$r
for seq_num in 09 10
do 
    rm -rf $output_folder/$seq_num/
    echo 'seq '$seq_num
    python test_kitti_pose.py --test_seq $seq_num --dataset_dir $kitti_raw_odom --output_dir $output_folder'/'$seq_num'/' --ckpt_file $pose_ckpt_file --seq_length $sl --concat_img_dir $kitti_odom_match3
    python kitti_eval/eval_pose.py --gtruth_dir=$root_folder'kitti_eval/pose_data/ground_truth/seq'$sl'/'$seq_num/  --pred_dir=$output_folder'/'$seq_num'/'
done

This outputs the same results as reported in the paper:

Seq   ATE mean   ATE std
09    0.0089     0.0054
10    0.0084     0.0071
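
For reference, here is a rough sketch of how a snippet-wise ATE of this kind is commonly computed (express the snippet relative to its first frame, fit a single scale factor, then take the RMSE of the translation error); kitti_eval/eval_pose.py is the authoritative implementation:

# Illustrative snippet-wise ATE: scale-align the predicted camera positions of a
# short snippet to the ground truth and report the RMSE of the translation error.
# This is only a sketch; see kitti_eval/eval_pose.py for the actual evaluation.
import numpy as np

def snippet_ate(pred_xyz, gt_xyz):
    # Express both snippets relative to their first frame.
    pred = pred_xyz - pred_xyz[0]
    gt = gt_xyz - gt_xyz[0]
    # Least-squares scale that maps the prediction onto the ground truth.
    scale = np.sum(gt * pred) / max(np.sum(pred * pred), 1e-12)
    error = gt - scale * pred
    return np.sqrt(np.mean(np.sum(error ** 2, axis=1)))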

Contact

Feel free to contact me (Tianwei) if you have any questions, either by email or by opening an issue.

Acknowledgements

We appreciate the great works/repos in this direction, such as SfMLearner and GeoNet, as well as the evaluation tool evo for KITTI full-sequence evaluation.
