jytime / Deep-SfM-Revisited

License: MIT
No description or website provided.

Programming Languages

Python
139335 projects - #7 most used programming language
Cuda
1817 projects
C++
36643 projects - #6 most used programming language

Projects that are alternatives of or similar to Deep-SfM-Revisited

Meshroom
3D Reconstruction Software
Stars: ✭ 7,254 (+5750%)
Mutual labels:  structure-from-motion
Alicevision
Photogrammetric Computer Vision Framework
Stars: ✭ 2,029 (+1536.29%)
Mutual labels:  structure-from-motion
DenseDescriptorLearning-Pytorch
Official Repo for the paper "Extremely Dense Point Correspondences using a Learned Feature Descriptor" (CVPR 2020)
Stars: ✭ 66 (-46.77%)
Mutual labels:  structure-from-motion
Hierarchical Localization
Visual localization made easy with hloc
Stars: ✭ 997 (+704.03%)
Mutual labels:  structure-from-motion
Kapture
kapture is a file format as well as a set of tools for manipulating datasets, and in particular Visual Localization and Structure from Motion data.
Stars: ✭ 128 (+3.23%)
Mutual labels:  structure-from-motion
Ceres Solver
A large scale non-linear optimization library
Stars: ✭ 2,180 (+1658.06%)
Mutual labels:  structure-from-motion
Theiasfm
An open source library for multiview geometry and structure from motion
Stars: ✭ 647 (+421.77%)
Mutual labels:  structure-from-motion
how-to-sfm
A self-reliant tutorial on Structure-from-Motion
Stars: ✭ 112 (-9.68%)
Mutual labels:  structure-from-motion
Monocularsfm
Monocular Structure from Motion
Stars: ✭ 128 (+3.23%)
Mutual labels:  structure-from-motion
EGSfM
The old implementation of GraphSfM based on openMVG.
Stars: ✭ 87 (-29.84%)
Mutual labels:  structure-from-motion
Py3drec
3D modeling from uncalibrated images
Stars: ✭ 65 (-47.58%)
Mutual labels:  structure-from-motion
Uav Mapper
UAV-Mapper is a lightweight UAV Image Processing System, Visual SFM reconstruction or Aerial Triangulation, Fast Ortho-Mosaic, Plannar Mosaic, Fast Digital Surface Map (DSM) and 3d reconstruction for UAVs.
Stars: ✭ 106 (-14.52%)
Mutual labels:  structure-from-motion
Deeptam
DeepTAM: Deep Tracking and Mapping https://lmb.informatik.uni-freiburg.de/people/zhouh/deeptam/
Stars: ✭ 198 (+59.68%)
Mutual labels:  structure-from-motion
Awesome Learning Mvs
A list of awesome learning-based multi-view stereo papers
Stars: ✭ 27 (-78.23%)
Mutual labels:  structure-from-motion
sfm-disambiguation-colmap
Making Structure-from-Motion (COLMAP) more robust to symmetries and duplicated structures
Stars: ✭ 189 (+52.42%)
Mutual labels:  structure-from-motion
Boofcv
Fast computer vision library for SFM, calibration, fiducials, tracking, image processing, and more.
Stars: ✭ 706 (+469.35%)
Mutual labels:  structure-from-motion
Mvstudio
An integrated SfM (Structure from Motion) and MVS (Multi-View Stereo) solution.
Stars: ✭ 154 (+24.19%)
Mutual labels:  structure-from-motion
pybot
Research tools for autonomous systems in Python
Stars: ✭ 60 (-51.61%)
Mutual labels:  structure-from-motion
cv-arxiv-daily
🎓Automatically Update CV Papers Daily using Github Actions (Update Every 12th hours)
Stars: ✭ 216 (+74.19%)
Mutual labels:  structure-from-motion
simple-sfm
A readable implementation of structure-from-motion
Stars: ✭ 19 (-84.68%)
Mutual labels:  structure-from-motion

Deep Two-View Structure-from-Motion Revisited

This repository provides the code for our CVPR 2021 paper Deep Two-View Structure-from-Motion Revisited.

We plan to reorganize the code around May 2022. Please feel free to open an issue if any part is unclear.

We provide functions for training, validation, and visualization.

Requirements

Python 3.6.x
PyTorch >= 1.6.0
CUDA >= 10.1

The remaining dependencies can be installed with

pip install -r requirements.txt

PyTorch versions from 1.1.0 to 1.6.0 should also work, but they disable mixed-precision training, and we have not tested them.
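
For reference, mixed-precision training relies on the torch.cuda.amp module introduced in PyTorch 1.6. The sketch below shows the general pattern with a placeholder model and loss; it is not the repository's actual training loop.

import torch

model = torch.nn.Linear(10, 1).cuda()          # placeholder network, not the SfM model
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
scaler = torch.cuda.amp.GradScaler()           # unavailable before PyTorch 1.6

for step in range(10):
    inputs = torch.randn(32, 10, device="cuda")
    targets = torch.randn(32, 1, device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():            # forward pass in mixed precision
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
    scaler.scale(loss).backward()              # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()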

To use the RANSAC five-point algorithm, you also need to

cd RANSAC_FiveP

python setup.py install --user

The CUDA extension will be installed as 'essential_matrix'. It has been tested on Ubuntu with CUDA 10.1.
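
As an illustration of what the five-point RANSAC step computes, here is a minimal CPU sketch using OpenCV rather than the 'essential_matrix' CUDA extension (whose interface is not reproduced here). The correspondences and intrinsics below are placeholders.

import cv2
import numpy as np

# Placeholder matched pixel coordinates in the two views and camera intrinsics.
pts1 = np.random.rand(100, 2).astype(np.float64) * 640
pts2 = pts1 + np.random.rand(100, 2)
K = np.array([[718.856, 0.0, 607.19],
              [0.0, 718.856, 185.22],
              [0.0, 0.0, 1.0]])

# Five-point algorithm inside a RANSAC loop, then a cheirality check to recover R, t.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)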

Models

Pretrained models are provided here.

KITTI Depth

To reproduce our results, first download the KITTI raw data and the official depth maps (14 GB). Unzip the official depth maps (train and val) into a single folder and point the flag cfg.GT_DEPTH_DIR in kitti.yml to that folder. Also download the split files provided by us and unzip them into the root of the KITTI raw data.
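
As a quick sanity check before training, you can verify that the flag points at the unzipped depth maps. The snippet below assumes kitti.yml is plain YAML with a top-level GT_DEPTH_DIR key; the actual config structure may differ.

import os
import yaml

with open("cfgs/kitti.yml") as f:
    cfg = yaml.safe_load(f)

gt_depth_dir = cfg["GT_DEPTH_DIR"]   # folder holding the unzipped official depth maps
assert os.path.isdir(gt_depth_dir), gt_depth_dir
print("GT_DEPTH_DIR:", gt_depth_dir, "->", len(os.listdir(gt_depth_dir)), "entries")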

For training,

python main.py -b 32 --lr 0.0005 --nlabel 128 --fix_flownet \
--data PATH/TO/YOUR/KITTI/DATASET --cfg cfgs/kitti.yml \
--pretrained-depth depth_init.pth.tar --pretrained-flow flow_init.pth.tar

For evaluation,

python main.py -v -b 1 -p 1 --nlabel 128 \
--data PATH/TO/YOUR/KITTI/DATASET --cfg cfgs/kitti.yml \
--pretrained kitti.pth.tar

The default evaluation split is Eigen, where abs_rel should be around 0.053 and rmse close to 2.22 when loading the official ground-truth depth.

If you would like to use the Eigen SfM split, please set cfg.EIGEN_SFM = True and cfg.KITTI_697 = False.
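
For reference, abs_rel and rmse are the standard KITTI depth metrics, computed over valid ground-truth pixels roughly as in the sketch below; the repository's evaluation script applies its own depth capping and cropping, which is not reproduced here.

import numpy as np

def depth_metrics(gt, pred, min_depth=1e-3, max_depth=80.0):
    # abs_rel and rmse over pixels with valid ground truth (simplified sketch).
    valid = (gt > min_depth) & (gt < max_depth)
    gt, pred = gt[valid], np.clip(pred[valid], min_depth, max_depth)
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    return abs_rel, rmse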

KITTI Pose

For a fair comparison, we use the KITTI odometry evaluation toolbox provided here. Please generate poses per sequence and evaluate them accordingly.
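
The KITTI odometry toolbox expects one text file per sequence, each line holding the top 3 × 4 block of a camera-to-world pose flattened to 12 numbers. A sketch of writing such a file is below; the function name and output file are illustrative, and the repository may provide its own export code.

import numpy as np

def save_kitti_poses(poses, path):
    # Write 4x4 pose matrices in the KITTI odometry format (12 values per line).
    with open(path, "w") as f:
        for T in poses:
            f.write(" ".join(f"{v:.6e}" for v in np.asarray(T)[:3, :].reshape(-1)) + "\n")

# Example: a single identity pose for a hypothetical sequence file.
save_kitti_poses([np.eye(4)], "09.txt")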

Acknowledgment:

We thank Shihao Jiang and Dylan Campbell for sharing their implementation of the GPU-accelerated RANSAC five-point algorithm. We really appreciate the valuable feedback from our area chairs and reviewers. We would also like to thank Charles Loop for helpful discussions and Ke Chen for providing field test images from NVIDIA AV cars.

BibTeX:

@article{wang2021deep,
  title={Deep Two-View Structure-from-Motion Revisited},
  author={Wang, Jianyuan and Zhong, Yiran and Dai, Yuchao and Birchfield, Stan and Zhang, Kaihao and Smolyanskiy, Nikolai and Li, Hongdong},
  journal={CVPR},
  year={2021}
}