
zju3dv / NeuralRecon

License: Apache-2.0
Code for "NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video", CVPR 2021 oral

Programming Languages

  • Python
  • Shell

Projects that are alternatives of or similar to NeuralRecon

PaiConvMesh
Official repository for the paper "Learning Local Neighboring Structure for Robust 3D Shape Representation"
Stars: ✭ 19 (-97.66%)
Mutual labels:  3d-reconstruction, 3d-vision
learning-topology-synthetic-data
Tensorflow implementation of Learning Topology from Synthetic Data for Unsupervised Depth Completion (RAL 2021 & ICRA 2021)
Stars: ✭ 22 (-97.29%)
Mutual labels:  3d-reconstruction, 3d-vision
object nerf
Code for "Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering", ICCV 2021
Stars: ✭ 135 (-83.37%)
Mutual labels:  3d-reconstruction, 3d-vision
dynamic plane convolutional onet
[WACV 2021] Dynamic Plane Convolutional Occupancy Networks
Stars: ✭ 25 (-96.92%)
Mutual labels:  3d-reconstruction, 3d-vision
void-dataset
Visual Odometry with Inertial and Depth (VOID) dataset
Stars: ✭ 74 (-90.89%)
Mutual labels:  3d-reconstruction, 3d-vision
adareg-monodispnet
Repository for Bilateral Cyclic Constraint and Adaptive Regularization for Unsupervised Monocular Depth Prediction (CVPR2019)
Stars: ✭ 22 (-97.29%)
Mutual labels:  3d-reconstruction, 3d-vision
cs231a
Stanford University CS231A: Computer Vision, From 3D Reconstruction to Recognition — homework answers
Stars: ✭ 27 (-96.67%)
Mutual labels:  3d-reconstruction
G2LTex
Code for CVPR 2018 paper --- Texture Mapping for 3D Reconstruction with RGB-D Sensor
Stars: ✭ 104 (-87.19%)
Mutual labels:  3d-reconstruction
Structured-Light-Depth-Acquisition
Matlab Implementation of a 3D Reconstruction algorithm
Stars: ✭ 48 (-94.09%)
Mutual labels:  3d-reconstruction
slam-python
SLAM - Simultaneous localization and mapping using OpenCV and NumPy.
Stars: ✭ 80 (-90.15%)
Mutual labels:  3d-reconstruction
label-fusion
Volumetric Fusion of Multiple Semantic Labels and Masks
Stars: ✭ 18 (-97.78%)
Mutual labels:  3d-vision
ResDepth
[ISPRS Journal of Photogrammetry and Remote Sensing, 2022] ResDepth: A Deep Residual Prior For 3D Reconstruction From High-resolution Satellite Images
Stars: ✭ 30 (-96.31%)
Mutual labels:  3d-reconstruction
lowshot-shapebias
Learning low-shot object classification with explicit shape bias learned from point clouds
Stars: ✭ 37 (-95.44%)
Mutual labels:  3d-vision
CV Learning
Projects related to computer vision and image processing.
Stars: ✭ 20 (-97.54%)
Mutual labels:  3d-vision
Computer-Vision
Cool Vision projects
Stars: ✭ 51 (-93.72%)
Mutual labels:  3d-reconstruction
Danesfield
Kitware's system for 3D building reconstruction for the IARPA CORE3D program
Stars: ✭ 100 (-87.68%)
Mutual labels:  3d-reconstruction
SimpleView
Official Code for ICML 2021 paper "Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline"
Stars: ✭ 95 (-88.3%)
Mutual labels:  3d-vision
handobjectconsist
[cvpr 20] Demo, training and evaluation code for joint hand-object pose estimation in sparsely annotated videos
Stars: ✭ 100 (-87.68%)
Mutual labels:  3d-reconstruction
EgoNet
Official project website for the CVPR 2021 paper "Exploring intermediate representation for monocular vehicle pose estimation"
Stars: ✭ 111 (-86.33%)
Mutual labels:  3d-vision
JetScan
JetScan : GPU accelerated portable RGB-D reconstruction system
Stars: ✭ 77 (-90.52%)
Mutual labels:  3d-reconstruction

NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video

Project Page | Paper


NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video
Jiaming Sun*, Yiming Xie*, Linghao Chen, Xiaowei Zhou, Hujun Bao
CVPR 2021 (Oral Presentation and Best Paper Candidate)

real-time video


TODO List and ETA

  • Code (with detailed comments) for training and inference, and the data preparation scripts (2021-5-2).
  • Pretrained models on ScanNet (2021-5-2).
  • Real-time reconstruction demo on custom ARKit data with instructions (2021-5-7).
  • Evaluation code and metrics (expected 2021-6-10).

How to Use

Installation

# Ubuntu 18.04 and above is recommended.
sudo apt install libsparsehash-dev  # you can try to install sparsehash with conda if you don't have sudo privileges.
conda env create -f environment.yaml
conda activate neucon
[FAQ on environment installation]
  • AttributeError: module 'torchsparse_backend' has no attribute 'hash_forward'

    • Clone torchsparse to a local directory. If you have done that, recompile and install torchsparse after removing the build folder.
  • No sudo privileges to install libsparsehash-dev

    • Install sparsehash in conda (included in environment.yaml) and run export CPLUS_INCLUDE_PATH=$CONDA_PREFIX/include before installing torchsparse.
  • For other problems, you can also refer to the FAQ in torchsparse.
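
Once the environment is created, a quick sanity check like the sketch below (not part of the repo) can confirm that PyTorch sees the GPU and that torchsparse imports cleanly:

# Environment sanity check (sketch): run inside the neucon conda env.
import torch
import torchsparse

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchsparse imported from:", torchsparse.__file__)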

Pretrained Model on ScanNet

Download the pretrained weights and put them under PROJECT_PATH/checkpoints/release. You can also use gdown to download them from the command line:

mkdir checkpoints && cd checkpoints
gdown --id 1zKuWqm9weHSm98SZKld1PbEddgLOQkQV
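
To verify the download, a minimal PyTorch sketch such as the following (not part of the repo; the checkpoint file name and its internal structure are assumptions) can try to deserialize whatever ended up under checkpoints/:

# Sanity-check sketch: load the downloaded checkpoint and list its keys.
import glob
import os
import torch

candidates = [p for p in glob.glob("checkpoints/**/*", recursive=True) if os.path.isfile(p)]
assert candidates, "no checkpoint file found under checkpoints/"

ckpt = torch.load(candidates[0], map_location="cpu")
print("loaded", candidates[0])
print("top-level keys:", list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))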

Real-time Demo on Custom Data with Camera Poses from ARKit

We provide a real-time demo of NeuralRecon running with self-captured ARKit data. Please refer to DEMO.md for details.

Data Preparation for ScanNet

Download and extract ScanNet by following the instructions provided at http://www.scan-net.org/.

[Expected directory structure of ScanNet]

You can obtain the train/val/test split information from here.

DATAROOT
└───scannet
│   └───scans
│   |   └───scene0000_00
│   |       └───color
│   |       │   │   0.jpg
│   |       │   │   1.jpg
│   |       │   │   ...
│   |       │   ...
│   └───scans_test
│   |   └───scene0707_00
│   |       └───color
│   |       │   │   0.jpg
│   |       │   │   1.jpg
│   |       │   │   ...
│   |       │   ...
|   └───scannetv2_test.txt
|   └───scannetv2_train.txt
|   └───scannetv2_val.txt
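
A small layout check (a sketch, not part of the repo) can confirm that every scene listed in the split files has a color folder as in the tree above; DATAROOT below is a placeholder for your ScanNet root:

# Verify the expected ScanNet layout (assumes train/val scenes under scans/
# and test scenes under scans_test/, as shown above).
import os

DATAROOT = "/path/to/DATAROOT"  # placeholder
splits = {
    "scannetv2_train.txt": "scans",
    "scannetv2_val.txt": "scans",
    "scannetv2_test.txt": "scans_test",
}

for split_file, subdir in splits.items():
    with open(os.path.join(DATAROOT, "scannet", split_file)) as f:
        scenes = [line.strip() for line in f if line.strip()]
    for scene in scenes:
        color_dir = os.path.join(DATAROOT, "scannet", subdir, scene, "color")
        if not os.path.isdir(color_dir) or not os.listdir(color_dir):
            print("missing or empty:", color_dir)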

Next, run the data preparation script, which parses the raw data into the processed pickle format. This script also generates the ground-truth TSDFs using TSDF Fusion.

[Data preparation script]
# Change PATH_TO_SCANNET and OUTPUT_PATH accordingly.
# For the training/val split:
python tools/tsdf_fusion/generate_gt.py --data_path PATH_TO_SCANNET --save_name all_tsdf_9 --window_size 9
# For the test split
python tools/tsdf_fusion/generate_gt.py --test --data_path PATH_TO_SCANNET --save_name all_tsdf_9 --window_size 9
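
For intuition about what the ground-truth generation does, the following is a conceptual NumPy sketch of TSDF fusion, not the code in tools/tsdf_fusion: each depth frame is projected into a voxel grid, and a truncated signed distance is accumulated with a running weight.

# Conceptual TSDF-fusion sketch (NumPy only); the repo's implementation differs.
import numpy as np

def integrate_frame(tsdf, weight, voxel_coords, depth, K, cam_pose, trunc=0.12):
    """Fuse one depth frame into a TSDF volume.

    tsdf, weight  -- (N,) running TSDF values and weights for N voxels
    voxel_coords  -- (N, 3) voxel centers in world coordinates (meters)
    depth         -- (H, W) depth map in meters, 0 where invalid
    K             -- (3, 3) camera intrinsics
    cam_pose      -- (4, 4) camera-to-world transform
    """
    H, W = depth.shape
    # Transform voxel centers into the camera frame.
    world_to_cam = np.linalg.inv(cam_pose)
    pts_h = np.concatenate([voxel_coords, np.ones((len(voxel_coords), 1))], axis=1)
    pts_cam = (world_to_cam @ pts_h.T).T[:, :3]
    z = pts_cam[:, 2]

    # Project into the image and look up the observed depth.
    uv = (K @ pts_cam.T).T
    u = np.round(uv[:, 0] / np.maximum(z, 1e-8)).astype(np.int64)
    v = np.round(uv[:, 1] / np.maximum(z, 1e-8)).astype(np.int64)
    valid = (z > 1e-8) & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0

    # Truncated signed distance along the viewing ray.
    sdf = d - z
    valid &= sdf > -trunc
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)

    # Weighted running-average update of the volume.
    w_new = weight[valid] + 1.0
    tsdf[valid] = (tsdf[valid] * weight[valid] + tsdf_new[valid]) / w_new
    weight[valid] = w_new
    return tsdf, weight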

Inference on ScanNet test-set

python main.py --cfg ./config/test.yaml

The reconstructed meshes will be saved to PROJECT_PATH/results.
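
To inspect a result programmatically, a short sketch with trimesh can load a reconstructed mesh; the assumption here is that the meshes are written as .ply files somewhere under results/ (the exact file names depend on the scene):

# Load and summarize a reconstructed mesh (sketch; adjust the glob pattern).
import glob
import trimesh

meshes = sorted(glob.glob("results/**/*.ply", recursive=True))
assert meshes, "no .ply meshes found under results/"

mesh = trimesh.load(meshes[0], force="mesh")
print(meshes[0], "-", len(mesh.vertices), "vertices,", len(mesh.faces), "faces")
mesh.show()  # opens an interactive viewer if a display is available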

Evaluation on ScanNet test-set

python tools/evaluation.py --model ./results/scene_scannet_release_fusion_eval_47 --n_proc 16

Note that evaluation.py uses pyrender to render depth maps from the predicted mesh for 2D evaluation. If you are using headless rendering, you must also set the environment variable PYOPENGL_PLATFORM=osmesa (see pyrender for more details).
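
For example, on a headless server the variable can be exported in the shell before running tools/evaluation.py, or (as a sketch) set from Python before pyrender is imported:

# PYOPENGL_PLATFORM must be set before pyrender/OpenGL is imported.
import os
os.environ["PYOPENGL_PLATFORM"] = "osmesa"

import pyrender  # imported only after the environment variable is set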

You can print the results of a previous evaluation run using

python tools/visualize_metrics.py --model ./results/scene_scannet_release_fusion_eval_47

Training on ScanNet

Start training by running ./train.sh. More information about training (e.g., GPU requirements and convergence time) will be added soon.

[train.sh]
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0,1
python -m torch.distributed.launch --nproc_per_node=2 main.py --cfg ./config/train.yaml

Training is separated into two phases, and the switch between phases is controlled manually for now (see the config sketch after the list):

  • Phase 1 (epochs 0-20): train on single fragments. MODEL.FUSION.FUSION_ON=False, MODEL.FUSION.FULL=False

  • Phase 2 (epochs 21-50): train with GRUFusion. MODEL.FUSION.FUSION_ON=True, MODEL.FUSION.FULL=True
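
As a sketch of what the manual switch amounts to, the two phases differ only in the two fusion flags named above. The snippet assumes a yacs-style config behind ./config/train.yaml, which is an assumption about the repo's internals; in practice the flags are edited in the .yaml file between phases.

# Conceptual two-phase switch (sketch), mirroring the flag names with yacs.
from yacs.config import CfgNode as CN

cfg = CN()
cfg.MODEL = CN()
cfg.MODEL.FUSION = CN()

# Phase 1 (epochs 0-20): train on single fragments, no cross-fragment fusion.
cfg.MODEL.FUSION.FUSION_ON = False
cfg.MODEL.FUSION.FULL = False

# Phase 2 (epochs 21-50): enable GRUFusion.
cfg.merge_from_list(["MODEL.FUSION.FUSION_ON", True, "MODEL.FUSION.FULL", True])
print(cfg)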

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@article{sun2021neucon,
  title={{NeuralRecon}: Real-Time Coherent {3D} Reconstruction from Monocular Video},
  author={Sun, Jiaming and Xie, Yiming and Chen, Linghao and Zhou, Xiaowei and Bao, Hujun},
  journal={CVPR},
  year={2021}
}

Acknowledgment

We would like to specially thank Reviewer 3 for the insightful and constructive comments. We would also like to thank Sida Peng, Siyu Zhang, and Qi Fang for proofreading. Some of the code in this repo is borrowed from MVSNet_pytorch; thanks, Xiaoyang!

Copyright

This work is affiliated with ZJU-SenseTime Joint Lab of 3D Vision, and its intellectual property belongs to SenseTime Group Ltd.

Copyright SenseTime. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.