
qianlim / SCALE

License: other
Official implementation of "SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements", CVPR 2021 https://arxiv.org/abs/2104.07660

Programming Languages

Python
139,335 projects - #7 most used programming language
CUDA
1,817 projects

Projects that are alternatives to or similar to SCALE

CVPR2021-Papers-with-Code-Demo
Collects the latest CVPR results, including papers, code, and demo videos; recommendations are welcome!
Stars: ✭ 752 (+553.91%)
Mutual labels:  cvpr2021
semantic-guidance
Code for our CVPR-2021 paper on Combining Semantic Guidance and Deep Reinforcement Learning For Generating Human Level Paintings.
Stars: ✭ 19 (-83.48%)
Mutual labels:  cvpr2021
AODA
Official implementation of "Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis"(WACV 2022/CVPRW 2021)
Stars: ✭ 44 (-61.74%)
Mutual labels:  cvpr2021
MetaBIN
[CVPR2021] Meta Batch-Instance Normalization for Generalizable Person Re-Identification
Stars: ✭ 58 (-49.57%)
Mutual labels:  cvpr2021
Involution
PyTorch reimplementation of the paper "Involution: Inverting the Inherence of Convolution for Visual Recognition" (2D and 3D Involution) [CVPR 2021].
Stars: ✭ 98 (-14.78%)
Mutual labels:  cvpr2021
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Stars: ✭ 47 (-59.13%)
Mutual labels:  cvpr2021
HESIC
Official code of "Deep Homography for Efficient Stereo Image Compression" (CVPR 2021 oral)
Stars: ✭ 42 (-63.48%)
Mutual labels:  cvpr2021
SSIS
Single-Stage Instance Shadow Detection with Bidirectional Relation Learning (CVPR 2021 Oral)
Stars: ✭ 32 (-72.17%)
Mutual labels:  cvpr2021
CVPR2021 PLOP
Official code of CVPR 2021's PLOP: Learning without Forgetting for Continual Semantic Segmentation
Stars: ✭ 102 (-11.3%)
Mutual labels:  cvpr2021
Context-Aware-Consistency
Semi-supervised Semantic Segmentation with Directional Context-aware Consistency (CVPR 2021)
Stars: ✭ 121 (+5.22%)
Mutual labels:  cvpr2021
BLIP
Official Implementation of CVPR2021 paper: Continual Learning via Bit-Level Information Preserving
Stars: ✭ 33 (-71.3%)
Mutual labels:  cvpr2021
Modaily-Aware-Audio-Visual-Video-Parsing
Code for CVPR 2021 paper Exploring Heterogeneous Clues for Weakly-Supervised Audio-Visual Video Parsing
Stars: ✭ 19 (-83.48%)
Mutual labels:  cvpr2021
LPDC-Net
CVPR2021 paper "Learning Parallel Dense Correspondence from Spatio-Temporal Descriptors for Efficient and Robust 4D Reconstruction"
Stars: ✭ 27 (-76.52%)
Mutual labels:  cvpr2021
DCNet
Dense Relation Distillation with Context-aware Aggregation for Few-Shot Object Detection, CVPR 2021
Stars: ✭ 113 (-1.74%)
Mutual labels:  cvpr2021
RSTNet
RSTNet: Captioning with Adaptive Attention on Visual and Non-Visual Words (CVPR 2021)
Stars: ✭ 71 (-38.26%)
Mutual labels:  cvpr2021
SkeletonMerger
Code repository for paper `Skeleton Merger: an Unsupervised Aligned Keypoint Detector`.
Stars: ✭ 49 (-57.39%)
Mutual labels:  cvpr2021
efficient-annotation-cookbook
Official implementation of "Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets" (CVPR2021)
Stars: ✭ 54 (-53.04%)
Mutual labels:  cvpr2021
CoMoGAN
CoMoGAN: continuous model-guided image-to-image translation. CVPR 2021 oral.
Stars: ✭ 139 (+20.87%)
Mutual labels:  cvpr2021
MOON
Model-Contrastive Federated Learning (CVPR 2021)
Stars: ✭ 93 (-19.13%)
Mutual labels:  cvpr2021
MiVOS
[CVPR 2021] Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion. Semi-supervised VOS as well!
Stars: ✭ 302 (+162.61%)
Mutual labels:  cvpr2021

SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements (CVPR 2021)

Paper | Open In Colab

This repository contains the official PyTorch implementation of the CVPR 2021 paper:

SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements
Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang, and Michael J. Black
Full paper | Video | Project website | Poster

Installation

  • The code has been tested with Python 3.6 on both Ubuntu 18.04 + CUDA 10.0 and Ubuntu 20.04 + CUDA 11.1.

  • First, in the folder of this SCALE repository, run the following commands to create a new virtual environment and install dependencies:

    python3 -m venv $HOME/.virtualenvs/SCALE
    source $HOME/.virtualenvs/SCALE/bin/activate
    pip install -U pip setuptools
    pip install -r requirements.txt
    mkdir checkpoints
  • Install the Chamfer Distance package (MIT license, taken from this implementation). Note: compilation has been verified under CUDA 10.0 but may not be compatible with later CUDA versions. A quick sanity check for the installed package is sketched after this list.

    cd chamferdist
    python setup.py install
    cd ..
  • You are now good to go with the next steps! All the commands below are assumed to be run from the SCALE repository folder, within the virtual environment created above.
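To optionally verify that the Chamfer Distance extension compiled correctly, you can run a short script like the one below. This is only a sketch: it assumes a CUDA device is available and that the bundled package exposes a ChamferDistance module with the same interface as the upstream chamferdist implementation it was taken from; if your copy differs, adapt the call accordingly.

    # Minimal sanity check: the compiled chamferdist extension loads and runs on the GPU.
    import torch
    from chamferdist import ChamferDistance

    pc_a = torch.rand(1, 1024, 3).cuda()  # a batch with one random point cloud of 1024 points
    pc_b = torch.rand(1, 1024, 3).cuda()

    chamfer = ChamferDistance()
    out = chamfer(pc_a, pc_b)  # the return format depends on the package version
    print(out)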

Run SCALE

  • Download our pre-trained model weights and unzip them under the checkpoints/ folder, such that the checkpoint path is <SCALE repo folder>/checkpoints/SCALE_demo_00000_simuskirt/<checkpoint files>.

  • Download the packed demo data and unzip it under the data/ folder, such that the data file paths are <SCALE repo folder>/data/packed/00000_simuskirt/<train,test,val split>/<data npz files>.

  • With the data and pre-trained model ready, the following code will generate a sequence of .ply files of the teaser dancing animation in results/saved_samples/SCALE_demo_00000_simuskirt:

    python main.py --config configs/config_demo.yaml
  • To render images of the generated point sets, run the following command:

    python render/o3d_render_pcl.py --model_name SCALE_demo_00000_simuskirt

    The images (with both point-normal coloring and patch coloring) will be saved under results/rendered_imgs/SCALE_demo_00000_simuskirt.
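For a quick interactive look at a single generated frame, the output point clouds can also be opened directly with Open3D (which the bundled render script is built on), assuming it is installed in your environment. The file name below is a placeholder; point it at any .ply produced by the demo.

    # Preview one generated point cloud in an interactive Open3D window.
    # Replace the placeholder file name with an actual .ply produced by the demo.
    import open3d as o3d

    pcd = o3d.io.read_point_cloud(
        "results/saved_samples/SCALE_demo_00000_simuskirt/example_frame.ply")
    print(pcd)  # reports how many points were loaded
    o3d.visualization.draw_geometries([pcd])  # opens an interactive viewer window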

Train SCALE

Training demo with our data examples

  • Assuming the demo training data from the previous step is downloaded under data/packed/, run:

    python main.py --config configs/config_train_demo.yaml

    The training will start!

  • The code will also save the loss curves in the TensorBoard logs under tb_logs/<date>/SCALE_train_demo_00000_simuskirt.

  • Examples from the validation set are saved every 10 epochs (configurable) at results/saved_samples/SCALE_train_demo_00000_simuskirt/val.

  • Note: the training data provided above are only for demonstration purposes. Due to the very limited number of frames, they are unlikely to yield a satisfying model. Please refer to the README files in the data/ and lib_data/ folders for more information on how to process your customized data.
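Before adapting your own data, it can help to look at what the packed demo frames contain. Each frame is a standard .npz archive, so it can be inspected with NumPy alone; the sketch below simply lists the stored arrays and makes no assumption about the field names, which are defined by the packing scripts in lib_data/.

    # List the arrays stored in one packed demo frame.
    # The path follows the demo data layout described above; adjust it to your local copy.
    from pathlib import Path
    import numpy as np

    frame_files = sorted(Path("data/packed/00000_simuskirt/train").glob("*.npz"))
    with np.load(frame_files[0]) as frame:
        for key in frame.files:
            print(key, frame[key].shape, frame[key].dtype)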

Training with your own data

We provide example code in lib_data/ to assist you in adapting your own data to the format required by SCALE. Please refer to lib_data/README for more details.
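As a purely illustrative sketch of what the packing step produces, a per-frame .npz could be written as below. Every key and array here is a placeholder, not necessarily what SCALE expects; the authoritative field names and preprocessing steps are in lib_data/ and its README.

    # Illustrative only: write one frame of (placeholder) data to a packed .npz file.
    # The keys "points", "normals", and "pose" are placeholders; use the field names
    # required by the scripts in lib_data/ (see lib_data/README).
    from pathlib import Path
    import numpy as np

    points = np.random.rand(40000, 3).astype(np.float32)   # stand-in for clothed-body surface points
    normals = np.random.rand(40000, 3).astype(np.float32)  # stand-in for the corresponding normals
    pose = np.zeros(72, dtype=np.float32)                   # stand-in for SMPL pose parameters

    out_dir = Path("data/packed/my_subject/train")
    out_dir.mkdir(parents=True, exist_ok=True)
    np.savez(out_dir / "frame_00000.npz", points=points, normals=normals, pose=pose)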

News

  • [2021/10/29] We now provide the packed, SCALE-compatible CAPE data on the CAPE dataset website. Simply register as a user there to access the download links (at the bottom of the Download page).
  • [2021/06/24] Code online!

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions and any accompanying documentation before you download and/or use the SCALE code, including the scripts, animation demos and pre-trained models. By downloading and/or using the Model & Software (including downloading, cloning, installing, and any other use of this GitHub repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

The SMPL body related files (including assets/{smpl_faces.npy, template_mesh_uv.obj} and the UV masks under assets/uv_masks/) are subject to the license of the SMPL model. The provided demo data (including the body pose and the meshes of clothed human bodies) are subject to the license of the CAPE Dataset. The Chamfer Distance implementation is subject to its original license.

Related Research

SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks (CVPR 2021)
Shunsuke Saito, Jinlong Yang, Qianli Ma, Michael J. Black

Our implicit solution to pose-dependent shape modeling: cycle-consistent implicit skinning fields + locally pose-aware implicit function = a fully animatable avatar with implicit surface from raw scans without surface registration!

Learning to Dress 3D People in Generative Clothing (CVPR 2020)
Qianli Ma, Jinlong Yang, Anurag Ranjan, Sergi Pujades, Gerard Pons-Moll, Siyu Tang, Michael J. Black

CAPE --- a generative model and a large-scale dataset for 3D clothed human meshes in varied poses and garment types. We trained SCALE using the CAPE dataset; check it out!

Citations

@inproceedings{Ma:CVPR:2021,
  title = {{SCALE}: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements},
  author = {Ma, Qianli and Saito, Shunsuke and Yang, Jinlong and Tang, Siyu and Black, Michael J.},
  booktitle = {Proceedings IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2021},
  month_numeric = {6}
}