
yifita / deep_cage

License: MIT
code for "Neural Cages for Detail-Preserving 3D Deformations"

Programming Languages

python: 139,335 projects (#7 most used programming language)
shell: 77,523 projects

Projects that are alternatives to or similar to deep_cage

HiCMD
[CVPR2020] Hi-CMD: Hierarchical Cross-Modality Disentanglement for Visible-Infrared Person Re-Identification
Stars: ✭ 64 (-44.35%)
Mutual labels:  cvpr, cvpr2020
Meta-Fine-Tuning
[CVPR 2020 VL3] The repository for meta fine-tuning in cross-domain few-shot learning.
Stars: ✭ 29 (-74.78%)
Mutual labels:  cvpr, cvpr2020
attention-target-detection
[CVPR2020] "Detecting Attended Visual Targets in Video"
Stars: ✭ 105 (-8.7%)
Mutual labels:  cvpr, cvpr2020
CVPR-2020-point-cloud-analysis
CVPR 2020 papers focusing on point cloud analysis
Stars: ✭ 48 (-58.26%)
Mutual labels:  cvpr, cvpr2020
SCT
SCT: Set Constrained Temporal Transformer for Set Supervised Action Segmentation (CVPR2020) https://arxiv.org/abs/2003.14266
Stars: ✭ 35 (-69.57%)
Mutual labels:  cvpr, cvpr2020
Cvpr2021 Papers With Code
A collection of CVPR 2021 papers and open-source projects
Stars: ✭ 7,138 (+6106.96%)
Mutual labels:  cvpr, cvpr2020
pcv
Pixel Consensus Voting for Panoptic Segmentation (CVPR 2020)
Stars: ✭ 23 (-80%)
Mutual labels:  cvpr, cvpr2020
Vibe
Official implementation of CVPR2020 paper "VIBE: Video Inference for Human Body Pose and Shape Estimation"
Stars: ✭ 2,080 (+1708.7%)
Mutual labels:  cvpr, cvpr2020
LUVLi
[CVPR 2020] Re-hosting of the LUVLi Face Alignment codebase. Please download the codebase from the original MERL website by agreeing to all terms and conditions. By using this code, you agree to MERL's research-only licensing terms.
Stars: ✭ 24 (-79.13%)
Mutual labels:  cvpr, cvpr2020
pytorch-psetae
PyTorch implementation of the model presented in "Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention"
Stars: ✭ 117 (+1.74%)
Mutual labels:  cvpr, cvpr2020
ACGPN
"Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content",CVPR 2020. (Modified from original with fixes for inference)
Stars: ✭ 48 (-58.26%)
Mutual labels:  cvpr
Finished Senior LSC Python
Python implementation of LSC algorithm, (C) Zhengqin Li, Jiansheng Chen, 2014
Stars: ✭ 19 (-83.48%)
Mutual labels:  cvpr
learning2hash.github.io
Website for "A survey of learning to hash for Computer Vision" https://learning2hash.github.io
Stars: ✭ 14 (-87.83%)
Mutual labels:  cvpr
AODA
Official implementation of "Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis"(WACV 2022/CVPRW 2021)
Stars: ✭ 44 (-61.74%)
Mutual labels:  cvpr
Modaily-Aware-Audio-Visual-Video-Parsing
Code for CVPR 2021 paper Exploring Heterogeneous Clues for Weakly-Supervised Audio-Visual Video Parsing
Stars: ✭ 19 (-83.48%)
Mutual labels:  cvpr
DSGN
DSGN: Deep Stereo Geometry Network for 3D Object Detection (CVPR 2020)
Stars: ✭ 276 (+140%)
Mutual labels:  cvpr2020
MotionNet
CVPR 2020, "MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird's Eye View Maps"
Stars: ✭ 141 (+22.61%)
Mutual labels:  cvpr2020
LED2-Net
CVPR 2021 Oral paper "LED2-Net: Monocular 360˚ Layout Estimation via Differentiable Depth Rendering" official PyTorch implementation.
Stars: ✭ 79 (-31.3%)
Mutual labels:  cvpr
MetaBIN
[CVPR2021] Meta Batch-Instance Normalization for Generalizable Person Re-Identification
Stars: ✭ 58 (-49.57%)
Mutual labels:  cvpr
cool-papers-in-pytorch
Reimplementing cool papers in PyTorch...
Stars: ✭ 21 (-81.74%)
Mutual labels:  cvpr

Neural Cages for Detail-Preserving 3D Deformations

[project page][pdf][supplemental]

Installation

git clone --recursive https://github.com/yifita/deep_cage.git
# install dependency
cd pytorch_points
conda env create --name pytorch-all --file environment.yml
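# activate the environment created above (name taken from the --name flag) before running setup
conda activate pytorch-all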
python setup.py develop
# install pymesh2
# if this step fails, try to install pymesh from source as instructed here
# https://pymesh.readthedocs.io/en/latest/installation.html
# make sure that cmake 3.15+ is used
pip install pymesh/pymesh2-0.2.1-cp37-cp37m-linux_x86_64.whl
# install other dependencies
pip install -r requirements.txt
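
To quickly confirm that the environment works, you can try importing the main dependencies from Python. This is only a minimal sanity check; the module names below are assumed to be what the steps above install (the pymesh2 wheel provides the pymesh module).

# check_install.py: minimal import test (assumed module names; adjust if your setup differs)
import torch
import pymesh           # installed from the pymesh2 wheel
import pytorch_points   # installed via `python setup.py develop` above

print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
print("all imports OK")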

Trained model

Download trained models from https://igl.ethz.ch/projects/neural-cage/trained_models.zip.

Unzip it under trained_models. You should get several subfolders under trained_models, e.g. trained_models/chair_ablation_full.

Optional

Install Thea (https://github.com/sidch/Thea) to batch-render the outputs.

Demo

  • download the processed ShapeNet data
wget https://igl.ethz.ch/projects/neural-cage/processed_shapenetseg.zip
  • deform a source shape to a target shape

To test with your own chair models, please make sure that your data is axis-aligned in the same way as our provided examples (a quick way to compare the alignment is sketched after the example below).

# results will be saved in trained_models/chair_ablation_full/test
python cage_deformer_3d.py --dataset SHAPENET --full_net --bottleneck_size 256 --n_fold 2 --ckpt trained_models/chair_ablation_full/net_final.pth --target_model data/shapenet_target/**/*.obj  --source_model data/elaborated_chairs/throne_no_base.obj data/elaborated_chairs/Chaise_longue_noir_House_Doctor.ply --subdir fancy_chairs --phase test --is_poly

Example: input, target, and output (figure: chair-example)
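
If you are unsure whether your own chairs match the orientation of the provided examples, comparing axis-aligned bounding boxes against one of the shipped meshes is a cheap first check. The sketch below is illustrative only (it is not part of the repo) and assumes pymesh is installed as above; pass the path to your model as the first argument.

# check_alignment.py: compare your mesh's bounding box against a provided example (illustrative sketch)
import sys
import pymesh

def bbox_extent(path):
    vertices = pymesh.load_mesh(path).vertices
    return vertices.max(axis=0) - vertices.min(axis=0)

print("reference extent (x, y, z):", bbox_extent("data/elaborated_chairs/throne_no_base.obj"))
print("your model extent (x, y, z):", bbox_extent(sys.argv[1]))
# If the dominant axes do not match (e.g. the up axis differs), rotate your model before running cage_deformer_3d.py.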

  • deformation transfer
# download the SURREAL data from 3D-CODED
cd data && mkdir Surreal && cd Surreal
wget https://raw.githubusercontent.com/ThibaultGROUEIX/3D-CODED/master/data/download_dataset.sh
chmod a+x download_dataset.sh
./download_dataset.sh

# baseline: deform the original training source shape
python deformer_3d.py --dataset SURREAL --nepochs 2 --data_dir data/Surreal --batch_size 1 --num_point 6890 --bottleneck_size 1024 --template data/cage_tpose.ply --source_model data/surreal_template_tpose.ply  --ckpt trained_models/tpose_atlas_b1024/net_final.pth --phase test

# deformation transfer to a skeleton
python optimize_cage.py --dataset SURREAL --nepochs 3000 --data_dir data/Surreal --num_point 6890 --bottleneck_size 1024 --clap_weight 0.05 --template data/cage_tpose.ply --model data/fancy_humanoid/Skeleton/skeleton_tpose.obj --subdir skeleton --source_model data/surreal_template_tpose.ply --ckpt trained_models/tpose_atlas_b1024/net_final.pth --lr 0.005 --is_poly

# deformation transfer to a robot (using another model, trained on the rest pose instead of the T-pose)
python optimize_cage.py --ckpt trained_models/rpose_mlp/net_final.pth --nepochs 8000 --mlp --num_point 6890 --phase test --dataset SURREAL --data_dir data/Surreal --model data/fancy_humanoid/robot.obj --subdir robot --source_model data/surreal_template.ply --clap_weight 0.1 --lr 0.0005 --template data/surreal_template_v77.ply

Training

ShapeNet deformations

A binary file storing the preprocessed training data is provided at data/train_Chair_-1_2500.pkl; it consists of the chair models from the PartSegv0 subset of the ShapeNetCore.v1 dataset (a quick way to peek at this file is sketched after the command). The following command is what we ran to create the results in the paper.

python cage_deformer.py --data_cat Chair --dataset SHAPENET --data_dir {ROOT_POINT_DIR} \
  --batch_size 8 --nepochs 12 --num_point 1024 --bottleneck_size 256 --n_fold 2 --loss CD \
  --name shapenet_chairs --mvc_weight 0.1 --sym_weight 0.5 --p2f_weight 0.1 --snormal_weight 0.1 --full_net
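
If you want to look inside the provided training file before launching a run, the standard pickle module is enough to show its top-level structure. This is only a peek under stated assumptions: the exact contents of data/train_Chair_-1_2500.pkl are defined by the repo's dataset code, so whatever it stored is what gets printed.

# peek_training_data.py: print the top-level structure of the provided pickle (read-only inspection)
import pickle

with open("data/train_Chair_-1_2500.pkl", "rb") as f:
    data = pickle.load(f)

print(type(data))
if isinstance(data, dict):
    for key, value in data.items():
        print(key, type(value))
elif isinstance(data, (list, tuple)):
    print("length:", len(data), "first element:", type(data[0]))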

Data generation

You can also create your own data from shapenet.

  1. Download data from ShapeNet.org. Make sure that a synsetoffset2category.txt file is located in the root directory. If it isn't, you can copy data/processed_shapenetseg/synsetoffset2category.txt to the root directory.
  2. Sample points from ShapeNet using scripts/resample_shapenet.py (an illustrative sketch of the resampling idea follows at the end of this subsection):
python resample_shapenet.py {INPUT_DIR} {OUTPUT_DIR}
# example
python resample_shapenet.py /home/mnt/points/data/ShapeNet/ShapeNetCore.v2/04530566 /home/mnt/points/data/ShapeNet/ShapeNetCore.v2.5000p/

Alternatively, you can use the presampled point data provided by Thibault (https://github.com/ThibaultGROUEIX/AtlasNet/blob/master/dataset/download_shapenet_pointclouds.sh).
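
The actual resampling is done by scripts/resample_shapenet.py; purely to illustrate the underlying idea (area-weighted uniform sampling of points on a mesh surface), a minimal NumPy version could look like the sketch below. The file names and point count are placeholders, and this is not the repo's script.

# sample_surface.py: area-weighted uniform point sampling on a triangle mesh (illustrative sketch)
import numpy as np
import pymesh

def sample_surface(mesh, n_points=5000):
    v, f = mesh.vertices, mesh.faces
    tri = v[f]                                   # (F, 3, 3) triangle corners
    # pick faces with probability proportional to their area
    areas = 0.5 * np.linalg.norm(np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    idx = np.random.choice(len(f), size=n_points, p=areas / areas.sum())
    # uniform barycentric coordinates inside each chosen triangle
    u, w = np.random.rand(n_points, 1), np.random.rand(n_points, 1)
    flip = (u + w) > 1.0
    u[flip], w[flip] = 1.0 - u[flip], 1.0 - w[flip]
    t = tri[idx]
    return t[:, 0] + u * (t[:, 1] - t[:, 0]) + w * (t[:, 2] - t[:, 0])

points = sample_surface(pymesh.load_mesh("model.obj"))
np.savetxt("model_5000p.xyz", points)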

Humanoid deformations

Train a deformation-only model with a fixed source shape and cage. The command below uses the rest-pose source shape and a hand-created source cage. One can also use --template data/surreal_template_v77.ply, which is a cage created by edge collapsing. (Figure: surreal_deform_test.)

python deformer_3d.py --dataset SURREAL --nepochs 3 --data_dir data/Surreal --batch_size 4 --warmup_epoch 0.5 \
--num_point 2048 --bottleneck_size 512 --template data/cage_rpose.obj --source_model data/surreal_template.ply \
--mvc_weight 1.0 --loss MSE --mlp --name surreal_rpose

For deformation transfer, we use Thea to mark correspondences; an example of the landmarks is shown in the figure deformation_transfer_corres. Marking correspondences creates a landmark file such as data/surreal_template.picked, which is used by optimize_cage.py to adapt the source cage to a novel target shape.
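
optimize_cage.py consumes the .picked landmarks directly; purely as an illustration of what paired landmarks provide, the sketch below computes a rigid alignment (Kabsch) between two sets of corresponding 3D points. This is not the repo's cage-optimization method, and parsing the Thea .picked format is omitted; the inputs are assumed to be already-loaded (N, 3) arrays of matching landmark coordinates.

# kabsch.py: rigid alignment from paired landmarks (illustration only, not the repo's optimization)
import numpy as np

def kabsch(source_pts, target_pts):
    """Return rotation R and translation t such that R @ source + t approximates target."""
    src_centered = source_pts - source_pts.mean(axis=0)
    tgt_centered = target_pts - target_pts.mean(axis=0)
    u, _, vt = np.linalg.svd(src_centered.T @ tgt_centered)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    rotation = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    translation = target_pts.mean(axis=0) - rotation @ source_pts.mean(axis=0)
    return rotation, translation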

cite

@inproceedings{Yifan:NeuralCage:2020,
  author={Wang Yifan and Noam Aigerman and Vladimir G. Kim and Siddhartha Chaudhuri and Olga Sorkine-Hornung},
  title={Neural Cages for Detail-Preserving 3D Deformations},
  booktitle = {CVPR},
  year = {2020},
}