
akar43 / Lsm

License: MIT
Code for Learnt Stereo Machines based on the NIPS 2017 paper

Programming Languages

python

Projects that are alternatives of or similar to Lsm

Flame pytorch
This is an implementation of the 3D FLAME model in PyTorch
Stars: ✭ 153 (-34.33%)
Mutual labels:  3d-reconstruction
Gan2shape
Code for GAN2Shape (ICLR2021 oral)
Stars: ✭ 183 (-21.46%)
Mutual labels:  3d-reconstruction
3dreconstruction
3D reconstruction, SfM with Python3
Stars: ✭ 213 (-8.58%)
Mutual labels:  3d-reconstruction
Oanet
Implementation of ICCV19 Paper "Learning Two-View Correspondences and Geometry Using Order-Aware Network"
Stars: ✭ 166 (-28.76%)
Mutual labels:  3d-reconstruction
Meshlab
The open source mesh processing system
Stars: ✭ 2,619 (+1024.03%)
Mutual labels:  3d-reconstruction
Msn Point Cloud Completion
Morphing and Sampling Network for Dense Point Cloud Completion (AAAI2020)
Stars: ✭ 196 (-15.88%)
Mutual labels:  3d-reconstruction
Prnet pytorch
Training & Inference Code of PRNet in PyTorch 1.1.0
Stars: ✭ 149 (-36.05%)
Mutual labels:  3d-reconstruction
Patchmatchstereo
PatchMatch with slanted support windows, a classic method with excellent results; the dense matching algorithm used by OpenMVS & COLMAP. Complete implementation, clean code, clear comments, and blog tutorials. Stars welcome!
Stars: ✭ 219 (-6.01%)
Mutual labels:  3d-reconstruction
Tailornet
Code for our CVPR 2020 (ORAL) paper - TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style.
Stars: ✭ 180 (-22.75%)
Mutual labels:  3d-reconstruction
Computervisiondatasets
Stars: ✭ 207 (-11.16%)
Mutual labels:  3d-reconstruction
Deodr
A differentiable 3D renderer with Pytorch, Tensorflow and Matlab interfaces
Stars: ✭ 167 (-28.33%)
Mutual labels:  3d-reconstruction
Opensfm
Open source Structure-from-Motion pipeline
Stars: ✭ 2,342 (+905.15%)
Mutual labels:  3d-reconstruction
Tf flame
TensorFlow framework for the FLAME 3D head model. The code demonstrates how to sample 3D heads from the model, fit the model to 2D or 3D keypoints, and generate textured head meshes from images.
Stars: ✭ 193 (-17.17%)
Mutual labels:  3d-reconstruction
Flame
FLaME: Fast Lightweight Mesh Estimation
Stars: ✭ 164 (-29.61%)
Mutual labels:  3d-reconstruction
Scancomplete
[CVPR'18] ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans
Stars: ✭ 213 (-8.58%)
Mutual labels:  3d-reconstruction
Sltk
An OpenCV-based structured light processing toolkit.
Stars: ✭ 151 (-35.19%)
Mutual labels:  3d-reconstruction
Pixel2meshplusplus
Pixel2Mesh++: Multi-View 3D Mesh Generation via Deformation. In ICCV2019.
Stars: ✭ 188 (-19.31%)
Mutual labels:  3d-reconstruction
Structured3d
[ECCV'20] Structured3D: A Large Photo-realistic Dataset for Structured 3D Modeling
Stars: ✭ 224 (-3.86%)
Mutual labels:  3d-reconstruction
Pix2vox
Implementation of "Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images" (Xie et al., ICCV 2019)
Stars: ✭ 216 (-7.3%)
Mutual labels:  3d-reconstruction
Horizonnet
Pytorch implementation of HorizonNet: Learning Room Layout with 1D Representation and Pano Stretch Data Augmentation.
Stars: ✭ 195 (-16.31%)
Mutual labels:  3d-reconstruction

Learnt Stereo Machines

This is a TensorFlow implementation of Learnt Stereo Machines as presented in the NIPS 2017 paper below. It supports training, validation and testing of Voxel LSMs and Depth LSMs on the ShapeNet dataset.

Learning a Multi-view Stereo Machine
Abhishek Kar, Christian Häne, Jitendra Malik
NIPS 2017
[blog] [paper] [arxiv]

[Figure: LSM]

Setup

Prerequisites

  • Python 2.7
  • Linux or OSX (Tested on Ubuntu 16.04)
  • NVIDIA GPU + CUDA + CuDNN (CPU mode and CUDA without CuDNN may work with minimal modification, but are untested)

Prepare data

The system requires rendered images, depth maps (for D-LSMs), intrinsic/extrinsic camera matrices and voxelizations of the 3D models for training. A version of these renders and cameras can be downloaded using the provided script prepare_data.sh.

WARNING: The full release is large (a 21G tar file) and will take some time to download and unpack. Use the prepare_data.sh script to download all the data required to train LSMs from scratch (a sketch of the invocation appears after the links below).

If you are interested in only the voxelizations of the models, we also make them available at the links below.

[ShapeNet voxels (32^3 and 64^3) (58M)] [ShapeNet renderings + voxels (21G)]

We also provide a small sample set for running the demos which can be downloaded from below or by using the download_sample.sh script.

[ShapeNet Sample (8M)]
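
For reference, here is a minimal sketch of fetching the data with the scripts mentioned above; prepare_data.sh and download_sample.sh are the provided scripts, and where they place the downloaded files is left to the scripts' defaults.

cd <project root>
# Full training release (~21G tar): renders, cameras, depth maps and voxelizations
bash prepare_data.sh
# Or, for the demos only, fetch the small sample set (~8M)
bash download_sample.sh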

Setup virtualenv

We recommend using virtualenv to run experiments without modifying your global Python installation.

cd <project root>
virtualenv env
source env/bin/activate
pip install -r requirements.txt
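
As an optional sanity check (not part of the original instructions), you can confirm that the pinned TensorFlow build imports cleanly and sees your GPU. This assumes the TensorFlow version installed from requirements.txt exposes tf.test.is_gpu_available().

# Verify the TensorFlow install inside the virtualenv
python -c "import tensorflow as tf; print(tf.__version__)"
# Check that TensorFlow can see a GPU (assumes tf.test.is_gpu_available exists in this version)
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"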

Optional setup

You might want to specify the GPU to use for experiments (on a multi-GPU machine) and suppress TF messages before running scripts. The project root also needs to be added to PYTHONPATH.

export CUDA_VISIBLE_DEVICES=<GPU to run on> #Specify GPU to run on
export PYTHONPATH=`pwd`:$PYTHONPATH #Add project root to PYTHONPATH
export TF_CPP_MIN_LOG_LEVEL=2 #Suppress extra messages from TF
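
If you use these settings for every run, one option (not described in the original instructions) is to append the three export lines above to env/bin/activate so they take effect whenever the virtualenv is activated.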

Quick Start

You can use the demo Jupyter notebooks demo_vlsm.ipynb and demo_dlsm.ipynb to get started with running pretrained LSMs on the sample data (the sample data can be downloaded from within the notebook). The notebooks allow interactive 3D visualization of the predicted voxel grids and point clouds!
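
A minimal way to launch the demos, assuming Jupyter is installed in the virtualenv (it is not listed explicitly above, so you may need to pip install jupyter first) and the environment variables from the setup section are set:

# From the project root, with the virtualenv active
jupyter notebook demo_vlsm.ipynb
# or
jupyter notebook demo_dlsm.ipynb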

Pretrained Models

We are releasing pretrained models for V-LSMs and D-LSMs trained on the ShapeNet dataset, which can be used to reproduce the numbers from the paper. Note that the numbers might differ slightly (higher for the code release) due to minor changes in the code after submission. The models are available with the TensorBoard run logs (1.7G) or without them (70M) from the links below. You can also use the get_models.sh script to download the models (see the sketch below).

[LSM v1 (with logs)] [LSM v1 (models only)]
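
A sketch of fetching the pretrained models with the script mentioned above; get_models.sh is provided by the repository, and where it unpacks the checkpoints is left to the script's defaults.

cd <project root>
# Download the released V-LSM / D-LSM checkpoints
bash get_models.sh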

Voxel LSM (V-LSM)

[Figure: V-LSM]

Training

Training a V-LSM on ShapeNet with default arguments. Model checkpoints and TensorBoard logs are written to a unique directory created within ./log by default; its path is displayed at the top of the output after training starts.

python voxels/train_vlsm.py --argsjs args/args_vlsm.json

Testing

Testing a Voxel LSM on ShapeNet.

LOG=<log directory used while training. e.g. ./log/2017-10-30_132841/train>
CHECKPOINT=<checkpoint to evaluate. e.g. mvnet-100000>

python voxels/test_vlsm.py --log $LOG --ckpt $CHECKPOINT --test_split_file data/splits.json

Validation

You can also run continuous validation while the model is training using the following command. This should add new fields to TensorBoard showing validation accuracy/error.

LOG=<log directory used while training. e.g. ./log/2017-10-30_132841/train>

python voxels/val_vlsm.py --log $LOG --val_split_file data/splits.json

Depth LSM (D-LSM)

The instructions for D-LSMs are very similar to those for V-LSMs. You can perform training, validation and testing using the following scripts, and visualize progress on TensorBoard.

[Figure: D-LSM]

Training

python depth/train_dlsm.py --argsjs args/args_dlsm.json

Validation

LOG=<log directory used while training. e.g. ./log/2017-10-30_132841/train>

python depth/val_dlsm.py --log $LOG --val_split_file data/splits.json

Testing

LOG=<log directory used while training. e.g. ./log/2017-10-30_132841/train>
CHECKPOINT=<checkpoint to evaluate. e.g. mvnet-100000>

python depth/test_dlsm.py --log $LOG --ckpt $CHECKPOINT --test_split_file data/splits.json

Viewing progress on Tensorboard

You can view training progress in TensorBoard using the logs written out during training.

LOG=<log directory used while training. e.g. ./log/2017-10-30_132841/train>
tensorboard --logdir $LOG
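
By default TensorBoard serves its dashboard at http://localhost:6006; open that address in a browser to follow the training curves and, if you run continuous validation, the validation accuracy/error fields mentioned above.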

Citation

If you use our code, please cite the following work.

@incollection{lsmKarHM2017,
  author    = {Abhishek Kar and Christian H\"ane and Jitendra Malik},
  title     = {Learning a Multi-View Stereo Machine},
  booktitle = {NIPS},
  year      = {2017},
}