
SNARF: Differentiable Forward Skinning for Animating Non-rigid Neural Implicit Shapes

Paper | Supp | Video | Project Page | Blog (AITAVG)

Official code release for ICCV 2021 paper SNARF: Differentiable Forward Skinning for Animating Non-rigid Neural Implicit Shapes. We propose a novel forward skinning module to animate neural implicit shapes with good generalization to unseen poses.

If you find our code or paper useful, please cite as

@inproceedings{chen2021snarf,
  title={SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes},
  author={Chen, Xu and Zheng, Yufeng and Black, Michael J and Hilliges, Otmar and Geiger, Andreas},
  booktitle={International Conference on Computer Vision (ICCV)},
  year={2021}
}

Quick Start

Clone this repo:

git clone https://github.com/xuchen-ethz/snarf.git
cd snarf

Install environment:

conda env create -f environment.yml
conda activate snarf
python setup.py install

Download the SMPL models (version 1.0.0 for Python 2.7, with 10 shape PCs) and move them to the corresponding places:

mkdir lib/smpl/smpl_model/
mv /path/to/smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl lib/smpl/smpl_model/SMPL_FEMALE.pkl
mv /path/to/smpl/models/basicmodel_m_lbs_10_207_0_v1.0.0.pkl lib/smpl/smpl_model/SMPL_MALE.pkl

Download our pretrained models and test motion sequences:

sh ./download_data.sh

Run a quick demo for a clothed human:

python demo.py expname=cape subject=3375 demo.motion_path=data/aist_demo/seqs +experiments=cape

You can then find the video in outputs/cape/3375/demo.mp4 and the images in outputs/cape/3375/images/. To save the meshes as well, add demo.save_mesh=true to the command.
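
For example, to rerun the demo above and also export the per-frame meshes:

python demo.py expname=cape subject=3375 demo.motion_path=data/aist_demo/seqs +experiments=cape demo.save_mesh=true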

You can also try other subjects (see outputs/data/cape for available options) by setting subject=xx, and other motion sequences from AMASS by setting demo.motion_path=/path/to/amass_motion.npz.
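
For instance, with /path/to/amass_motion.npz standing in as a placeholder for an actual AMASS sequence file on your machine:

python demo.py expname=cape subject=3375 demo.motion_path=/path/to/amass_motion.npz +experiments=cape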

Some motion sequences have a high frame rate, so you may want to skip frames. To do this, add demo.every_n_frames=x to consider only every x-th frame of the motion sequence (e.g. demo.every_n_frames=10 for PosePrior sequences).
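
For example, combining this with the placeholder AMASS path above:

python demo.py expname=cape subject=3375 demo.motion_path=/path/to/amass_motion.npz +experiments=cape demo.every_n_frames=10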

By default, we use demo.fast_mode=true for fast mesh extraction. In this mode, we first extract the mesh in canonical space and then forward skin it to posed space. This bypasses root finding during inference and is therefore faster, but it does not truly deform a continuous field. To first deform the continuous field and then extract the mesh in deformed space, use demo.fast_mode=false instead.
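
For example, to run the clothed-human demo above with the slower but exact mode:

python demo.py expname=cape subject=3375 demo.motion_path=data/aist_demo/seqs +experiments=cape demo.fast_mode=false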

Training and Evaluation

Install Additional Dependencies

Install kaolin for fast occupancy queries on meshes:

git clone https://github.com/NVIDIAGameWorks/kaolin
cd kaolin
git checkout v0.9.0
python setup.py develop
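
As a quick, optional sanity check (not part of the original instructions), verify that kaolin imports and reports the expected version:

python -c "import kaolin; print(kaolin.__version__)"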

Minimally Clothed Human

Prepare Datasets

Download the AMASS dataset. We use the ''DFaust Synthetic'' and ''PosePrior'' subsets in SMPL-H format. Unzip the dataset into the data folder:

tar -xf DFaust67.tar.bz2 -C data
tar -xf MPILimits.tar.bz2 -C data

Preprocess dataset:

python preprocess/sample_points.py --output_folder data/DFaust_processed
python preprocess/sample_points.py --output_folder data/MPI_processed --skip 10 --poseprior

Training

Run the following command to train for a specified subject:

python train.py subject=50002

Training logs are available on wandb (registration required, free of charge). Training should take ~12h on a single 2080Ti.
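
If wandb is not yet set up on your machine, authenticate once with the standard wandb CLI (not specific to this repo) before training:

wandb login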

Evaluation

Run the following command to evaluate the method for a specified subject on within-distribution data (the DFaust test split):

python test.py subject=50002

and on out-of-distribution data (PosePrior):

python test.py subject=50002 datamodule=jointlim

Generate Animation

You can use the trained model to generate animation (same as in Quick Start):

python demo.py expname='dfaust' subject=50002 demo.motion_path='data/aist_demo/seqs'

Clothed Human

Training

Download the CAPE dataset and unzip it into the data folder.

Run the following command to train for a specified subject and clothing type:

python train.py datamodule=cape subject=3375 datamodule.clothing='blazerlong' +experiments=cape  
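
The clothing type must match one present in the CAPE data you downloaded; 'blazerlong' above and 'longlong' below are illustrative:

python train.py datamodule=cape subject=3375 datamodule.clothing='longlong' +experiments=cape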

Training logs are available on wandb. Training should take ~24h on a single 2080Ti.

Generate Animation

You can use the trained model to generate animation (same as in Quick Start):

python demo.py expname=cape subject=3375 demo.motion_path=data/aist_demo/seqs +experiments=cape

Acknowledgement

We use the pre-processing code in PTF and LEAP with some adaptations (./preprocess). The network and sampling parts of the code (lib/model/network.py and lib/model/sample.py) are implemented based on IGR and IDR. The mesh extraction code (lib/utils/meshing.py) is adapted from NASA. Our implementation of Broyden's method (lib/model/broyden.py) is based on DEQ. We sincerely thank these authors for their awesome work.
