
alexklwong / adareg-monodispnet

Licence: other
Repository for Bilateral Cyclic Constraint and Adaptive Regularization for Unsupervised Monocular Depth Prediction (CVPR2019)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to adareg-monodispnet

learning-topology-synthetic-data
Tensorflow implementation of Learning Topology from Synthetic Data for Unsupervised Depth Completion (RAL 2021 & ICRA 2021)
Stars: ✭ 22 (+0%)
Mutual labels:  unsupervised-learning, 3d-reconstruction, 3d-vision, self-supervised-learning
Sfmlearner
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
Stars: ✭ 1,661 (+7450%)
Mutual labels:  unsupervised-learning, depth-prediction, self-supervised-learning
mmselfsup
OpenMMLab Self-Supervised Learning Toolbox and Benchmark
Stars: ✭ 2,315 (+10422.73%)
Mutual labels:  unsupervised-learning, self-supervised-learning
PaiConvMesh
Official repository for the paper "Learning Local Neighboring Structure for Robust 3D Shape Representation"
Stars: ✭ 19 (-13.64%)
Mutual labels:  3d-reconstruction, 3d-vision
DiverseDepth
The code and data of DiverseDepth
Stars: ✭ 150 (+581.82%)
Mutual labels:  depth-prediction, single-image-depth-prediction
PIC
Parametric Instance Classification for Unsupervised Visual Feature Learning, NeurIPS 2020
Stars: ✭ 41 (+86.36%)
Mutual labels:  unsupervised-learning, self-supervised-learning
ViCC
[WACV'22] Code repository for the paper "Self-supervised Video Representation Learning with Cross-Stream Prototypical Contrasting", https://arxiv.org/abs/2106.10137.
Stars: ✭ 33 (+50%)
Mutual labels:  unsupervised-learning, self-supervised-learning
dynamic plane convolutional onet
[WACV 2021] Dynamic Plane Convolutional Occupancy Networks
Stars: ✭ 25 (+13.64%)
Mutual labels:  3d-reconstruction, 3d-vision
Revisiting-Contrastive-SSL
Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
Stars: ✭ 81 (+268.18%)
Mutual labels:  unsupervised-learning, self-supervised-learning
awesome-contrastive-self-supervised-learning
A comprehensive list of awesome contrastive self-supervised learning papers.
Stars: ✭ 748 (+3300%)
Mutual labels:  unsupervised-learning, self-supervised-learning
SimCLR-in-TensorFlow-2
(Minimally) implements SimCLR (https://arxiv.org/abs/2002.05709) in TensorFlow 2.
Stars: ✭ 75 (+240.91%)
Mutual labels:  unsupervised-learning, self-supervised-learning
object nerf
Code for "Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering", ICCV 2021
Stars: ✭ 135 (+513.64%)
Mutual labels:  3d-reconstruction, 3d-vision
PiCIE
PiCIE: Unsupervised Semantic Segmentation using Invariance and Equivariance in clustering (CVPR2021)
Stars: ✭ 102 (+363.64%)
Mutual labels:  unsupervised-learning, self-supervised-learning
void-dataset
Visual Odometry with Inertial and Depth (VOID) dataset
Stars: ✭ 74 (+236.36%)
Mutual labels:  3d-reconstruction, 3d-vision
awesome-graph-self-supervised-learning
Awesome Graph Self-Supervised Learning
Stars: ✭ 805 (+3559.09%)
Mutual labels:  unsupervised-learning, self-supervised-learning
VQ-APC
Vector Quantized Autoregressive Predictive Coding (VQ-APC)
Stars: ✭ 34 (+54.55%)
Mutual labels:  unsupervised-learning, self-supervised-learning
MVSNet pl
MVSNet: Depth Inference for Unstructured Multi-view Stereo using pytorch-lightning
Stars: ✭ 49 (+122.73%)
Mutual labels:  3d-reconstruction, depth-prediction
CLSA
Official implementation for "Contrastive Learning with Stronger Augmentations"
Stars: ✭ 48 (+118.18%)
Mutual labels:  unsupervised-learning, self-supervised-learning
naru
Neural Relation Understanding: neural cardinality estimators for tabular data
Stars: ✭ 76 (+245.45%)
Mutual labels:  unsupervised-learning, self-supervised-learning
FisheyeDistanceNet
FisheyeDistanceNet
Stars: ✭ 33 (+50%)
Mutual labels:  depth-prediction, self-supervised-learning

Bilateral Cyclic Constraint and Adaptive Regularization for Unsupervised Monocular Depth Prediction

Author: Alex Wong [email protected]

If you use this code, please cite the following paper:

A. Wong, B. W. Hong and S. Soatto. Bilateral Cyclic Constraint and Adaptive Regularization for Unsupervised Monocular Depth Prediction.
https://arxiv.org/abs/1903.07309

@article{wong2018bilateral,
  title={Bilateral Cyclic Constraint and Adaptive Regularization for Unsupervised Monocular Depth Prediction},
  author={Wong, Alex and Hong, Byung-Woo and Soatto, Stefano},
  journal={arXiv preprint arXiv:1903.07309},
  year={2019}
}

Getting Started

The following guide assumes that you are located in the root directory of this repository
and that you have TensorFlow 1.0+ installed.

Create a symbolic link to your dataset directory

ln -s /path/to/data/directory/containing/kitti/root/folder data

where /path/to/data/directory/containing/kitti/root/folder contains your raw KITTI dataset and the KITTI 2015 stereo benchmark:

/path/to/data/directory/containing/kitti/root/folder/kitti_raw_data
/path/to/data/directory/containing/kitti/root/folder/kitti_stereo_flow
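To sanity-check the link before running the setup scripts, here is a minimal sketch (assuming only the two folder names listed above):

import os

# Expected KITTI folders under the symlinked data/ directory (names from above)
for name in ["kitti_raw_data", "kitti_stereo_flow"]:
    path = os.path.join("data", name)
    print("{:30s} {}".format(path, "found" if os.path.isdir(path) else "MISSING"))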

Run the KITTI data setup scripts to generate text files containing the KITTI training and validation filepaths:

python setup/prep_kitti_eigen_split_data.py
python setup/prep_kitti_kitti_split_data.py
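The scripts write plain-text lists of image paths (e.g. training/eigen_trn_im0.txt) that the training and evaluation commands below consume. A minimal sketch for spot-checking one of these lists, assuming one path per line:

import os

# Read a generated filepath list and report entries that do not exist on disk
with open("training/eigen_trn_im0.txt", "r") as f:
    paths = [line.strip() for line in f if line.strip()]
missing = [p for p in paths if not os.path.isfile(p)]
print("{} paths listed, {} missing".format(len(paths), len(missing)))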

Training the Monocular Disparity Network

For training on KITTI Eigen Split:

python src/train_monodispnet.py \
--trn_im0_path training/eigen_trn_im0.txt \
--trn_im1_path training/eigen_trn_im1.txt \
--learning_rates 1.8e-4,2.0e-4,1.0e-4,5.0e-5 \
--learning_bounds 0.01,0.90,0.95 \
--max_disparity 0.33 \
--w_ph 0.15 \
--w_st 0.85 \
--w_sm 0.10 \
--w_bc 1.05 \
--n_checkpoint 5000 \
--checkpoint_path checkpoints/eigen_model
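The w_* flags weight the individual loss terms; judging from the flag names and the paper, these correspond to photometric, structural, smoothness, and bilateral cyclic consistency terms. A hedged sketch of how such weights are typically combined (not copied from the repository's code):

# Hypothetical combination of weighted loss terms; term names are inferred
# from the flag names above, not from the actual implementation.
def total_loss(loss_ph, loss_st, loss_sm, loss_bc,
               w_ph=0.15, w_st=0.85, w_sm=0.10, w_bc=1.05):
    return (w_ph * loss_ph + w_st * loss_st +
            w_sm * loss_sm + w_bc * loss_bc)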

For training on KITTI 2015 Split:

python src/train_monodispnet.py \
--trn_im0_path training/kitti_trn_im0.txt \
--trn_im1_path training/kitti_trn_im1.txt \
--learning_rates 1.8e-4,2.0e-4,1.0e-4,5.0e-5 \
--learning_bounds 0.01,0.90,0.95 \
--max_disparity 0.33 \
--w_ph 0.15 \
--w_st 0.85 \
--w_sm 0.10 \
--w_bc 1.05 \
--n_checkpoint 5000 \
--checkpoint_path checkpoints/kitti_model
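For both training commands, --learning_rates and --learning_bounds appear to define a piecewise-constant schedule. A hedged sketch, assuming the bounds are fractions of the total number of training steps at which the rate switches to the next entry:

# Hypothetical interpretation of --learning_rates / --learning_bounds;
# check src/train_monodispnet.py for the exact behavior.
def learning_rate(step, n_total_steps,
                  rates=(1.8e-4, 2.0e-4, 1.0e-4, 5.0e-5),
                  bounds=(0.01, 0.90, 0.95)):
    progress = float(step) / n_total_steps
    for rate, bound in zip(rates, bounds):
        if progress < bound:
            return rate
    return rates[-1]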

Evaluation on KITTI Eigen Split and KITTI 2015 Split Benchmark

Run the following scripts to generate outputs and evaluate your model:

Generating output for KITTI Eigen Split

python src/run_monodispnet.py \
--im0_path testing/eigen_tst_im0.txt \
--restore_path checkpoints/eigen_model/model.ckpt-000000 \
--output_path checkpoints/eigen_model/outputs \
--max_disparity 0.33
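The predicted disparities end up in disparities.npy under the output directory (see the --npy_path flag in the evaluation command below). A hedged sketch for loading them and converting to metric depth, assuming disparities are normalized by image width and using approximate KITTI intrinsics (focal length ~721 px, baseline ~0.54 m); consult the repository's evaluation code for the exact conventions:

import numpy as np

# Load predicted disparities and convert to depth (assumed conventions)
disparities = np.load("checkpoints/eigen_model/outputs/disparities.npy")
focal, baseline = 721.0, 0.54  # approximate KITTI values, assumption
width = disparities.shape[-1]
depth = (focal * baseline) / np.maximum(disparities * width, 1e-8)
print(disparities.shape, depth.min(), depth.max())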

Evaluating KITTI Eigen Split

python src/evaluate_kitti.py \
--npy_path checkpoints/eigen_model/outputs/disparities.npy \
--ims_path testing/eigen_tst_im0.txt \
--gts_path testing/eigen_tst_gtd.txt \
--split eigen \
--max_depth 80
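The evaluation reports the standard Eigen-split depth metrics. For reference, a hedged sketch of the usual definitions (abs rel, sq rel, RMSE, log RMSE, and the delta accuracy thresholds); the repository's evaluate_kitti.py may differ in details such as cropping and depth capping:

import numpy as np

# Standard monocular depth metrics over valid ground-truth pixels
def compute_metrics(gt, pred):
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean(((gt - pred) ** 2) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3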

Generating output for KITTI 2015 Split

python src/run_monodispnet.py \
--im0_path testing/kitti_tst_im0.txt \
--restore_path checkpoints/kitti_model/model.ckpt-000000 \
--output_path checkpoints/kitti_model/outputs \
--max_disparity 0.33

Evaluating KITTI 2015 Split

python src/evaluate_kitti.py \
--npy_path checkpoints/kitti_model/outputs/disparities.npy \
--ims_path testing/kitti_tst_im0.txt \
--gts_path testing/kitti_tst_gtd.txt \
--split kitti

Downloading Pre-trained Models

To download the pre-trained models for the Eigen and KITTI 2015 splits, along with their output disparities, please visit:

https://tinyurl.com/y2adhhb3
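Once downloaded, a checkpoint can be passed directly to --restore_path. A minimal TensorFlow 1.x sketch for inspecting a checkpoint before use (the checkpoint prefix below is only an example):

import tensorflow as tf

# List the variables stored in a downloaded checkpoint
reader = tf.train.NewCheckpointReader("checkpoints/eigen_model/model.ckpt-000000")
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)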