
xxlong0 / CNMNet

Licence: other
ECCV 2020

Programming Languages

python

Projects that are alternatives of or similar to CNMNet

Depth-Guided-Inpainting
Code for ECCV 2020 "DVI: Depth Guided Video Inpainting for Autonomous Driving"
Stars: ✭ 50 (+28.21%)
Mutual labels:  depth
Three-Filters-to-Normal
Three-Filters-to-Normal: An Accurate and Ultrafast Surface Normal Estimator (RAL+ICRA'21)
Stars: ✭ 41 (+5.13%)
Mutual labels:  depth
depth-from-defocus-and-correspondence
⛄Depth from Combining Defocus and Correspondence Using Light-Field Cameras.
Stars: ✭ 17 (-56.41%)
Mutual labels:  depth
DDLCtVN
Doki Doki Literature Club, the Normal Visual Novel!
Stars: ✭ 70 (+79.49%)
Mutual labels:  normal
Structured-Light-Laser-Stripe-Reconstruction
Reconstructs a 3D stripe on the area of an object on which a laser falls as seen by the camera
Stars: ✭ 35 (-10.26%)
Mutual labels:  depth
bcp-47-normalize
Normalize, canonicalize, and format BCP 47 tags
Stars: ✭ 16 (-58.97%)
Mutual labels:  normal
MSG-Net
Depth Map Super-Resolution by Deep Multi-Scale Guidance, ECCV 2016
Stars: ✭ 76 (+94.87%)
Mutual labels:  depth
Awesome-Mainframes
Awesome list of mainframe related resources & projects
Stars: ✭ 31 (-20.51%)
Mutual labels:  mvs
MVSNet pl
MVSNet: Depth Inference for Unstructured Multi-view Stereo using pytorch-lightning
Stars: ✭ 49 (+25.64%)
Mutual labels:  multi-view-stereo
diskusage
FANTASTIC SPEED utility to find out top largest folders/files on the disk.
Stars: ✭ 64 (+64.1%)
Mutual labels:  depth
Fall-Detection-Dataset
FUKinect-Fall dataset was created using Kinect V1. The dataset includes walking, bending, sitting, squatting, lying and falling actions performed by 21 subjects between 19-72 years of age.
Stars: ✭ 16 (-58.97%)
Mutual labels:  depth
FastMVSNet
[CVPR'20] Fast-MVSNet: Sparse-to-Dense Multi-View Stereo With Learned Propagation and Gauss-Newton Refinement
Stars: ✭ 193 (+394.87%)
Mutual labels:  multi-view-stereo
Kinect Dataset Builder
No description or website provided.
Stars: ✭ 16 (-58.97%)
Mutual labels:  depth
ArmorPlus
ArmorPlus is a mod based on exploration, killing, building, getting geared up, fight the bosses and explore the depths of your worlds.
Stars: ✭ 25 (-35.9%)
Mutual labels:  depth
G2LTex
Code for CVPR 2018 paper --- Texture Mapping for 3D Reconstruction with RGB-D Sensor
Stars: ✭ 104 (+166.67%)
Mutual labels:  depth
BlindAid
Capstone Project: Assist the blind in moving around safely by warning them of impending obstacles using depth sensing, computer vision, and tactile glove feedback.
Stars: ✭ 14 (-64.1%)
Mutual labels:  depth
GeoSup
Code for Geo-Supervised Visual Depth Prediction
Stars: ✭ 27 (-30.77%)
Mutual labels:  depth
M4Depth
Official implementation of the network presented in the paper "M4Depth: A motion-based approach for monocular depth estimation on video sequences"
Stars: ✭ 62 (+58.97%)
Mutual labels:  depth
one-click-tangle
"One Click Tangle" intends to make lives easier to IOTA adopters by providing pre-configured scripts and recipes that allow to deploy IOTA Networks and Nodes "in one click".
Stars: ✭ 43 (+10.26%)
Mutual labels:  depth
FisheyeDistanceNet
FisheyeDistanceNet
Stars: ✭ 33 (-15.38%)
Mutual labels:  depth

Occlusion-Aware Depth Estimation with Adaptive Normal Constraints

Xiaoxiao Long, Lingjie Liu, Christian Theobalt, Wenping Wang. ECCV 2020


Introduction

We present a new learning-based method for multi-frame depth estimation from a color video, a fundamental problem in scene understanding, robot navigation, and handheld 3D reconstruction. While recent learning-based methods estimate depth with high accuracy, 3D point clouds exported from their depth maps often fail to preserve important geometric features (e.g., corners, edges, planes) of man-made scenes. Widely used pixel-wise depth errors do not specifically penalize inconsistency on these features. The inaccuracies are particularly severe when successive depth reconstructions are accumulated while scanning a full environment containing man-made objects with such features. Our depth estimation algorithm therefore introduces a Combined Normal Map (CNM) constraint, designed to better preserve high-curvature features and global planar regions. To further improve depth estimation accuracy, we introduce a new occlusion-aware strategy that aggregates initial depth predictions from multiple adjacent views into one final depth map and one occlusion probability map for the current reference view. Our method outperforms the state of the art in depth estimation accuracy, and it preserves the essential geometric features of man-made indoor scenes much better than other algorithms.
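The core idea of the CNM constraint can be illustrated with a minimal sketch (this is not the paper's code): pixels inside a segmented plane take the globally fitted plane normal, while all remaining pixels keep their locally estimated normal. The function name and signature below are hypothetical.

```python
import numpy as np

def combined_normal_map(local_normals, plane_mask, plane_normals):
    """Illustrative sketch of a Combined Normal Map (CNM).

    local_normals: (H, W, 3) unit normals estimated locally (e.g., from
                   depth gradients), used for high-curvature regions
    plane_mask:    (H, W) integer map, 0 = non-planar, k > 0 = plane id k
    plane_normals: dict mapping plane id -> (3,) globally fitted normal
    """
    cnm = local_normals.copy()
    for pid, normal in plane_normals.items():
        # Planar pixels adopt the (normalized) globally fitted plane normal.
        cnm[plane_mask == pid] = normal / np.linalg.norm(normal)
    return cnm
```

Supervising predicted normals against such a map penalizes deviations on planes globally while still constraining corners and edges locally.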

If you find this project useful for your research, please cite:

@article{long2020occlusion,
  title={Occlusion-Aware Depth Estimation with Adaptive Normal Constraints},
  author={Long, Xiaoxiao and Liu, Lingjie and Theobalt, Christian and Wang, Wenping},
  journal={ECCV},
  year={2020}
}

How to use

Environment

The environment requirements are listed as follows:

  • PyTorch>=1.2.0
  • CUDA 10.0
  • CUDNN 7

Preparation

  • Check out the source code

    git clone https://github.com/xxlong0/CNMNet.git && cd CNMNet

  • Install dependencies

  • Prepare training/testing datasets

    • ScanNet: due to the ScanNet license, please follow the ScanNet instructions and download the raw dataset.
    • 7scenes
    • plane segmentation of ScanNet: please follow the instructions and download the plane segmentation annotations.
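The "Install dependencies" step above lists no commands; the repository does not ship an explicit install recipe, so the following is only a sketch. Versions follow the Environment section, and the `sacred` package is an assumption inferred from the `train.py train with key=value` command style used below.

```shell
# Hypothetical install commands; exact package set is not specified by the repo.
pip install "torch>=1.2.0" torchvision numpy opencv-python sacred
```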

Training

  • Before starting training, you need to clean the plane segmentation annotations and perform data preprocessing.
# train the DepthNet without refinement
python train.py train with dataset.batch_size=5 dataset.root_dir=/path/dataset dataset.list_filepath=./scannet/train_plane_view3_scans0_999_interval2_error01.txt dataset.image_width=256 dataset.image_height=192 k_size=9

# train whole model with refinement (one reference image and two source images)
python train.py train_refine with dataset.batch_size=5 dataset.root_dir=/path/dataset dataset.list_filepath=./scannet/train_plane_view3_scans0_999_interval2_error01.txt dataset.image_width=256 dataset.image_height=192 k_size=9

Testing

  • Predict disparity and convert it to depth (more accurate for near objects, as reported in the paper): download the pretrained model
# evaluate the DepthNet without refinement
python eval.py eval with dataset.batch_size=5 dataset.root_dir=/path/dataset dataset.list_filepath=./scannet/train_plane_view3_scans0_999_interval2_error01.txt dataset.image_width=256 dataset.image_height=192 k_size=9

# evaluate whole model with refinement (one reference image and two source images)
python eval.py eval_refine with dataset.batch_size=5 dataset.root_dir=/path/dataset dataset.image_width=256 dataset.image_height=192 k_size=9
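The disparity-to-depth conversion mentioned above can be sketched as follows, assuming an inverse-depth parameterization (as in MVDepthNet, which this code builds on); this helper is illustrative, not the repository's implementation. Near objects have large disparity, so prediction errors there map to small depth errors, which is why this parameterization is more accurate for near objects.

```python
import numpy as np

def disparity_to_depth(disparity, eps=1e-8):
    """Convert an inverse-depth (disparity) map to a depth map.

    Assumes depth = 1 / disparity; eps guards against division by zero
    in invalid or infinitely far regions.
    """
    return 1.0 / np.maximum(disparity, eps)
```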

Acknowledgement

The code partly relies on code from MVDepthNet.
