
ChrisWu1997 / Multimodal-Shape-Completion

License: MIT
Code for our ECCV 2020 spotlight paper "Multimodal Shape Completion via Conditional Generative Adversarial Networks"

Programming Languages

  • Python
  • Shell

Projects that are alternatives of or similar to Multimodal-Shape-Completion

FastMassSpring
Interactive cloth simulator using the method described in the SIGGRAPH paper "Fast Simulation of Mass-Spring Systems" by Liu, T., Bargteil, A. W., O'Brien, J. F., & Kavan, L.
Stars: ✭ 170 (+132.88%)
Mutual labels:  computer-graphics
odak
🔬 Scientific computing library for optics 🔭, computer graphics 💻 and visual perception 👀
Stars: ✭ 99 (+35.62%)
Mutual labels:  computer-graphics
SymmetricRL
Repo for "On Learning Symmetric Locomotion"
Stars: ✭ 30 (-58.9%)
Mutual labels:  computer-graphics
Reversing
Code for "Reversing the cycle: self-supervised deep stereo through enhanced monocular distillation"
Stars: ✭ 43 (-41.1%)
Mutual labels:  eccv-2020
CycleGAN-gluon-mxnet
This repo attempts to reproduce "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks" (CycleGAN) with a Gluon reimplementation
Stars: ✭ 31 (-57.53%)
Mutual labels:  computer-graphics
photon mapping
Minimal but extensible header-only implementation of photon mapping in C++
Stars: ✭ 65 (-10.96%)
Mutual labels:  computer-graphics
GRNet
Implementation of "GRNet: Gridding Residual Network for Dense Point Cloud Completion" (Xie et al., ECCV 2020)
Stars: ✭ 239 (+227.4%)
Mutual labels:  eccv-2020
weekend-raytracer-zig
A Zig implementation of the "Ray Tracing in One Weekend" book
Stars: ✭ 74 (+1.37%)
Mutual labels:  computer-graphics
Exploring-SceneKit
👾 An app created to explore the features of the SceneKit iOS framework: asset loading (Model I/O), interaction, animation, classical rendering (Blinn-Phong), physically based rendering.
Stars: ✭ 52 (-28.77%)
Mutual labels:  computer-graphics
mesh-deform
🍭 Physically plausible interactive 3D mesh deformation based on as-rigid-as-possible constraints.
Stars: ✭ 67 (-8.22%)
Mutual labels:  computer-graphics
C-Raytracer
A CPU raytracer from scratch in C
Stars: ✭ 49 (-32.88%)
Mutual labels:  computer-graphics
GraphMemVOS
Video Object Segmentation with Episodic Graph Memory Networks (ECCV 2020 spotlight)
Stars: ✭ 92 (+26.03%)
Mutual labels:  eccv-2020
computer-vision-notebooks
👁️ An authorial set of fundamental Python recipes on Computer Vision and Digital Image Processing.
Stars: ✭ 89 (+21.92%)
Mutual labels:  computer-graphics
HOT
Hierarchical Optimization Time Integration (HOT) for efficient implicit timestepping of the material point method (MPM)
Stars: ✭ 83 (+13.7%)
Mutual labels:  computer-graphics
AdvPC
AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds (ECCV 2020)
Stars: ✭ 35 (-52.05%)
Mutual labels:  eccv-2020
TermGL
2D & 3D graphics engine in the terminal [C/C++]
Stars: ✭ 219 (+200%)
Mutual labels:  computer-graphics
chainer-pix2pix
Chainer implementation of "Image-to-Image Translation Using Conditional Adversarial Networks"
Stars: ✭ 40 (-45.21%)
Mutual labels:  computer-graphics
platonicgan
Escaping Plato's Cave: 3D Shape from Adversarial Rendering [ICCV 2019]
Stars: ✭ 40 (-45.21%)
Mutual labels:  computer-graphics
Volume-Unity-Plugin
A Unity3D plugin for rendering Volume assets
Stars: ✭ 68 (-6.85%)
Mutual labels:  computer-graphics
Ray-Tracer
Simple Ray Tracer
Stars: ✭ 18 (-75.34%)
Mutual labels:  computer-graphics

Multimodal-Shape-Completion

This repository provides a PyTorch implementation of our paper:

Multimodal Shape Completion via Conditional Generative Adversarial Networks

Rundi Wu*, Xuelin Chen*, Yixin Zhuang, Baoquan Chen (* equal contribution)

ECCV 2020 (spotlight)

Overview

Prerequisites

  • Linux
  • NVIDIA GPU + CUDA CuDNN
  • Python 3.6

Dependencies

  1. Install python package dependencies through pip:

    pip install -r requirements.txt
  2. Install the external dependency PyTorchEMD, which is required for the EMD loss.
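For intuition, the EMD between two equal-sized point sets is the cost of an optimal one-to-one matching between their points. A minimal illustrative sketch using SciPy's Hungarian solver — exact but slow, whereas PyTorchEMD provides a fast CUDA implementation suitable for training:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def emd(a: np.ndarray, b: np.ndarray) -> float:
    """Exact EMD between two (N, 3) point sets via optimal assignment."""
    # Pairwise Euclidean distances, shape (N, N).
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return cost[rows, cols].mean()

# Identical point sets match perfectly.
p = np.random.rand(64, 3)
print(emd(p, p))  # 0.0
```

Note this runs in O(N^3) time, which is why training uses the compiled CUDA kernel instead.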

Data

We use three datasets in our paper.

  1. 3D-EPN

    Please download the partial scan point cloud data from their website and extract it into the data folder. For the complete point cloud data, please download it from PKU disk and extract it into the data folder. Alternatively, you can follow this blender-render script to virtually scan ShapeNet objects yourself.

  2. PartNet

    Please first download the PartNet dataset and then run our script to merge the point cloud segmentation labels to the first level:

    cd data/partnet_process
    python pc_merge.py --src {your-path-to-PartNet-dataset}

    Also, remember to change the data_root path in the corresponding scripts when training/testing on the PartNet dataset.

  3. PartNet-Scan

    Please first download the PartNet dataset and then run our script to (1) create 4 partial meshes for each complete shape and (2) scan the complete shapes and partial meshes using Blender.

    cd data/partnet_process
    sh gen_partnet_scan.sh {your-path-to-PartNet-dataset} {your-path-to-save-scanned-results}

    Note that the scan process (Blender render) may take a very long time and the outputs require quite a lot of disk space. You can also increase the number of sub-processes (default 10) to speed it up.
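Conceptually, the label-merging step in pc_merge.py maps each point's fine-grained part label to its first-level ancestor in the PartNet hierarchy. A toy sketch with a hypothetical chair mapping — real part IDs come from PartNet's hierarchy files, not from this table:

```python
# Hypothetical fine-to-coarse mapping for a chair; PartNet's actual
# hierarchy files define the real part names and IDs.
FINE_TO_LEVEL1 = {
    "chair_back": "back", "back_surface": "back",
    "chair_seat": "seat", "seat_surface": "seat",
    "leg": "base", "leg_bar": "base",
}

def merge_labels(per_point_labels):
    """Map each point's fine part label to its first-level ancestor."""
    return [FINE_TO_LEVEL1[label] for label in per_point_labels]

print(merge_labels(["seat_surface", "leg", "chair_back"]))
# ['seat', 'base', 'back']
```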

Training

Training scripts can be found in the scripts folder; please see common.py for the specific definition of all command-line parameters. For example, to train on the 3DEPN chair category:

# 1. pre-train the PointNet AE
sh scripts/3depn/chair/train-3depn-chair-ae.sh

# 2. pre-train the PointNet VAE
sh scripts/3depn/chair/train-3depn-chair-vae.sh

# 3. train the conditional GAN for multimodal shape completion
sh scripts/3depn/chair/train-3depn-chair-gan.sh

Training logs and model weights will be saved in the proj_log folder by default.

Testing

Testing scripts can also be found in the scripts folder. For example, to test the model trained on the 3DEPN chair category:

# by default, run over all test examples and output 10 completion results for each
sh scripts/3depn/chair/test-3depn-chair-gan.sh

The completion results, along with the input partial shape, will be saved in proj_log/mpc-3depn-chair/gan/results by default.
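Conceptually, each of the 10 completions per test example comes from re-sampling the latent code fed to the conditional generator. A toy numpy sketch of that sampling loop — the generator below is a stand-in for illustration, not the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_generator(partial_code, z):
    # Stand-in for the trained cGAN generator: the completion depends on
    # both the encoded partial shape and the sampled latent code z.
    return partial_code + z

partial_code = rng.normal(size=128)
# Mirror the default of 10 completion results per test example.
completions = [toy_generator(partial_code, rng.normal(size=128))
               for _ in range(10)]
print(len(completions))  # 10
```

Different latent samples yield different completions, which is what makes the completion multimodal.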

Evaluation

Evaluation code can be found in the evaluation folder. To evaluate the completion diversity, fidelity, and quality:

cd evaluation/
# calculate Total Mutual Difference (TMD)
python total_mutual_diff.py --src {path-to-saved-testing-results}
# calculate Unidirectional Hausdorff Distance (UHD) and completeness
python completeness.py --src {path-to-saved-testing-results}
# calculate Minimal Matching Distance (MMD); this requires a TensorFlow environment
python mmd.py --src {path-to-saved-testing-results} --dataset {which-dataset} --class_name {which-category} -g 0

Note that the MMD calculation requires a compiled external library from its original repo.
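For reference, the core distances behind these metrics can be sketched in a few lines of numpy. These are simplified, illustrative definitions only; the evaluation scripts above are the authoritative implementations:

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between two (N, 3) point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def uhd(partial, complete):
    """Unidirectional Hausdorff distance from the partial input to a completion."""
    d = np.linalg.norm(partial[:, None, :] - complete[None, :, :], axis=-1)
    return d.min(axis=1).max()

def tmd(completions):
    """Diversity: sum over completions of the mean Chamfer distance to the others."""
    k = len(completions)
    return sum(
        np.mean([chamfer(completions[i], completions[j])
                 for j in range(k) if j != i])
        for i in range(k)
    )
```

A low UHD indicates the completion stays faithful to the observed partial input, while a high TMD indicates the set of completions is diverse.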

Pre-trained models

Pretrained models can be downloaded from PKU disk or Google Drive. Put the downloaded files into the proj_log folder; the testing scripts can then be run directly.

Cite

Please cite our work if you find it useful:

@InProceedings{wu_2020_ECCV,
author = {Wu, Rundi and Chen, Xuelin and Zhuang, Yixin and Chen, Baoquan},
title = {Multimodal Shape Completion via Conditional Generative Adversarial Networks},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {August},
year = {2020}
}