
Gorilla-Lab-SCUT / LPDC-Net

License: MIT
CVPR 2021 paper "Learning Parallel Dense Correspondence from Spatio-Temporal Descriptors for Efficient and Robust 4D Reconstruction"

Programming Languages

Python
139335 projects - #7 most used programming language
C++
36643 projects - #6 most used programming language
C
50402 projects - #5 most used programming language
Cython
566 projects
Mako
254 projects
Shell
77523 projects

Projects that are alternatives to or similar to LPDC-Net

RfDNet
Implementation of CVPR'21: RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction
Stars: ✭ 150 (+455.56%)
Mutual labels:  3d-reconstruction, cvpr2021
SkeletonBridgeRecon
The code for CVPR2019 Oral paper "A Skeleton-bridged Deep Learning Approach for Generating Meshes of Complex Topologies from Single RGB Images"
Stars: ✭ 72 (+166.67%)
Mutual labels:  3d-reconstruction, shape-generation
CVPR2021-Papers-with-Code-Demo
A collection of the latest CVPR results, including papers, code, and demo videos. Recommendations are welcome!
Stars: ✭ 752 (+2685.19%)
Mutual labels:  cvpr2021
CRNN
Chemical Reaction Neural Network
Stars: ✭ 43 (+59.26%)
Mutual labels:  neural-ode
ShapeFormer
Official repository for the ShapeFormer Project
Stars: ✭ 97 (+259.26%)
Mutual labels:  3d-reconstruction
3d-recon
Implementation for paper "Learning Single-View 3D Reconstruction with Limited Pose Supervision".
Stars: ✭ 59 (+118.52%)
Mutual labels:  3d-reconstruction
Modaily-Aware-Audio-Visual-Video-Parsing
Code for CVPR 2021 paper Exploring Heterogeneous Clues for Weakly-Supervised Audio-Visual Video Parsing
Stars: ✭ 19 (-29.63%)
Mutual labels:  cvpr2021
HESIC
Official code of "Deep Homography for Efficient Stereo Image Compression" [CVPR 2021 Oral]
Stars: ✭ 42 (+55.56%)
Mutual labels:  cvpr2021
DiffEqSensitivity.jl
A component of the DiffEq ecosystem for enabling sensitivity analysis for scientific machine learning (SciML). Optimize-then-discretize, discretize-then-optimize, and more for ODEs, SDEs, DDEs, DAEs, etc.
Stars: ✭ 186 (+588.89%)
Mutual labels:  neural-ode
pyRANSAC-3D
A Python tool for fitting primitive 3D shapes to point clouds using the RANSAC algorithm
Stars: ✭ 253 (+837.04%)
Mutual labels:  3d-reconstruction
CVPR2021 PLOP
Official code of CVPR 2021's PLOP: Learning without Forgetting for Continual Semantic Segmentation
Stars: ✭ 102 (+277.78%)
Mutual labels:  cvpr2021
BLIP
Official Implementation of CVPR2021 paper: Continual Learning via Bit-Level Information Preserving
Stars: ✭ 33 (+22.22%)
Mutual labels:  cvpr2021
softpool
SoftPoolNet: Shape Descriptor for Point Cloud Completion and Classification - ECCV 2020 oral
Stars: ✭ 62 (+129.63%)
Mutual labels:  3d-reconstruction
Involution
PyTorch reimplementation of the paper "Involution: Inverting the Inherence of Convolution for Visual Recognition" (2D and 3D Involution) [CVPR 2021].
Stars: ✭ 98 (+262.96%)
Mutual labels:  cvpr2021
DCNet
Dense Relation Distillation with Context-aware Aggregation for Few-Shot Object Detection, CVPR 2021
Stars: ✭ 113 (+318.52%)
Mutual labels:  cvpr2021
semantic-guidance
Code for our CVPR-2021 paper on Combining Semantic Guidance and Deep Reinforcement Learning For Generating Human Level Paintings.
Stars: ✭ 19 (-29.63%)
Mutual labels:  cvpr2021
SkeletonMerger
Code repository for paper `Skeleton Merger: an Unsupervised Aligned Keypoint Detector`.
Stars: ✭ 49 (+81.48%)
Mutual labels:  cvpr2021
MetaBIN
[CVPR2021] Meta Batch-Instance Normalization for Generalizable Person Re-Identification
Stars: ✭ 58 (+114.81%)
Mutual labels:  cvpr2021
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Stars: ✭ 47 (+74.07%)
Mutual labels:  cvpr2021
efficient-annotation-cookbook
Official implementation of "Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets" (CVPR2021)
Stars: ✭ 54 (+100%)
Mutual labels:  cvpr2021

LPDC-Net

Homepage | Paper (PDF) | Video

This repository contains the code for the project LPDC-Net: Learning Parallel Dense Correspondence from Spatio-Temporal Descriptors for Efficient and Robust 4D Reconstruction.

Below you will find detailed usage instructions for training your own models and for using the pretrained models.

Installation

First, make sure that you have all dependencies in place. You can create and activate an Anaconda environment called lpdc using

conda env create -f environment.yaml
conda activate lpdc

Next, compile the extension modules. You can do this via

python setup.py build_ext --inplace
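
This step compiles the repository's Cython extension modules in place (Cython is listed among the project's languages). For reference, a minimal sketch of the kind of build script this command invokes; the source path im2mesh/utils/libmesh/*.pyx is a hypothetical example, not a confirmed location in this repository:

import numpy as np
from setuptools import setup
from Cython.Build import cythonize

# Hypothetical sketch: compile Cython sources so they can be imported
# as extension modules; the source path below is an assumed example.
setup(
    name='lpdc_extensions',
    ext_modules=cythonize('im2mesh/utils/libmesh/*.pyx'),
    include_dirs=[np.get_include()],  # NumPy headers for C-level array access
)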

Demo

You can test our code on the provided input point cloud sequences in the demo/ folder. To this end, simply run

python generate.py configs/demo.yaml

This script should create a folder out/demo/ where the output is stored.
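
If you want to inspect the generated sequence programmatically, the meshes can be loaded with a library such as trimesh. A minimal sketch, assuming the outputs are .off or .obj mesh files somewhere under out/demo/ (the exact layout may differ):

import glob
import trimesh

# The file patterns below are assumptions; adjust them to whatever
# generate.py actually wrote into out/demo/.
meshes = sorted(glob.glob('out/demo/**/*.off', recursive=True) +
                glob.glob('out/demo/**/*.obj', recursive=True))
for path in meshes:
    mesh = trimesh.load(path)
    print(path, 'vertices:', len(mesh.vertices), 'faces:', len(mesh.faces))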

Dataset

Point-based Data

To train a new model from scratch, you have to download the full dataset. You can download the pre-processed data (~42 GB) using

bash scripts/download_data.sh

The script will download the point-based data for the Dynamic FAUST (D-FAUST) dataset to the data/ folder.
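
To sanity-check the download, you can peek into a few of the pre-processed files. A minimal sketch, assuming the point-based data is stored as .npz archives under data/ (the file layout and array keys are assumptions):

import glob
import numpy as np

# Print the array keys and shapes of the first few files found.
for path in sorted(glob.glob('data/**/*.npz', recursive=True))[:3]:
    with np.load(path) as archive:
        print(path)
        for key in archive.files:
            print(' ', key, archive[key].shape)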

Mesh Data

Please follow the instructions on the D-FAUST homepage to download the "female and male registrations" as well as the "scripts to load / parse the data". Next, follow the instructions in their scripts/README.txt file to extract the .obj files of the sequences. Once completed, you should have a folder with the following structure:


your_dfaust_folder/
| 50002_chicken_wings/
    | 00000.obj
    | 00001.obj
    | ...
    | 00215.obj
| 50002_hips/
    | 00000.obj
    | ...
| ...
| 50027_shake_shoulders/
    | 00000.obj
    | ...
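
Before running the migration script below, you can quickly verify that the extraction produced the expected per-sequence folders. A small helper sketch (not part of the repository) that counts the .obj frames per sequence:

import os
import sys

# Usage: python check_dfaust.py path/to/your_dfaust_folder
root = sys.argv[1]
for seq in sorted(os.listdir(root)):
    seq_dir = os.path.join(root, seq)
    if os.path.isdir(seq_dir):
        n_frames = len([f for f in os.listdir(seq_dir) if f.endswith('.obj')])
        print(f'{seq}: {n_frames} frames')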


You can now run

bash scripts/migrate_dfaust.sh path/to/your_dfaust_folder

to copy the mesh data to the dataset folder. The argument has to be the folder to which you have extracted the mesh data (the your_dfaust_folder from the directory tree above).

Incomplete Point Cloud Sequence

You can now run

bash scripts/build_dataset_incomplete.sh

to create incomplete point cloud sequences for the experiment of 4D Shape Completion.
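
As an illustration of the general idea behind this pre-processing (not the repository's actual script), one common way to simulate an incomplete scan is to drop all points that fall inside a randomly placed ball:

import numpy as np

def make_incomplete(points, radius=0.25, rng=None):
    # Illustrative sketch only: remove points within a random ball to
    # mimic an incomplete observation of the shape.
    rng = np.random.default_rng() if rng is None else rng
    center = points[rng.integers(len(points))]           # random seed point
    keep = np.linalg.norm(points - center, axis=1) > radius
    return points[keep]

# Example on a dummy cloud of 2048 points in the unit cube.
cloud = np.random.default_rng(0).random((2048, 3))
print(make_incomplete(cloud).shape)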

Usage

When you have installed all dependencies and obtained the preprocessed data, you are ready to run our pre-trained models and train new models from scratch.

Generation

To start the normal mesh generation process using a trained model, use

python generate.py configs/CONFIG.yaml

where you replace CONFIG.yaml with the name of the configuration file you want to use.

The easiest way is to use a pretrained model. You can do this by using one of the following config files:

configs/noflow/lpdc_even_pretrained.yaml
configs/noflow/lpdc_uneven_pretrained.yaml
configs/noflow/lpdc_completion_pretrained.yaml

Our script will automatically download the model checkpoints and run the generation. You can find the outputs in the out/pointcloud folder.
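
If you want to generate with all three pretrained models in one go, a small convenience wrapper (not part of the repository) is enough:

import subprocess

# Run generation once per pretrained config listed above.
for config in [
    'configs/noflow/lpdc_even_pretrained.yaml',
    'configs/noflow/lpdc_uneven_pretrained.yaml',
    'configs/noflow/lpdc_completion_pretrained.yaml',
]:
    subprocess.run(['python', 'generate.py', config], check=True)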

Please note that the *_pretrained.yaml config files are only for generation, not for training new models: if you use them for training, the model will be trained from scratch, but during inference our code will still use the pretrained model.

Evaluation

You can evaluate the generated output of a model on the test set using

python eval.py configs/CONFIG.yaml

The evaluation results will be saved to pickle and csv files.
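
To look at the numbers afterwards, the saved files can be read back with pandas. A minimal sketch; the file name below is a placeholder, so check the output directory for the .csv that eval.py actually wrote:

import pandas as pd

# Placeholder file name; substitute the .csv produced by eval.py.
results = pd.read_csv('out/pointcloud/eval_results.csv')
print(results.mean(numeric_only=True))  # average each metric over the test set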

Training

Finally, to train a new network from scratch, run

python train.py configs/CONFIG.yaml

You can monitor the training process at http://localhost:6006 using TensorBoard:

cd OUTPUT_DIR
tensorboard --logdir ./logs --port 6006

where you replace OUTPUT_DIR with the respective output directory. For all available training options, please have a look at configs/default.yaml.

Acknowledgements

Most of the code is borrowed from Occupancy Flow. We thank Michael Niemeyer for his great work and repositories.

Citation

If you find our code or paper useful, please consider citing:

@inproceedings{tang2021learning,
  title={Learning Parallel Dense Correspondence from Spatio-Temporal Descriptors for Efficient and Robust 4D Reconstruction},
  author={Tang, Jiapeng and Xu, Dan and Jia, Kui and Zhang, Lei},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={6022--6031},
  year={2021}
}