
vchoutas / Expose

Licence: other
ExPose - EXpressive POse and Shape rEgression

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Expose

generative pose
Code for our ICCV 19 paper : Monocular 3D Human Pose Estimation by Generation and Ordinal Ranking
Stars: ✭ 63 (-75.2%)
Mutual labels:  human-pose-estimation
MEVA
Official implementation of ACCV 2020 paper "3D Human Motion Estimation via Motion Compression and Refinement" (Identical repo to https://github.com/KlabCMU/MEVA, will be kept in sync)
Stars: ✭ 93 (-63.39%)
Mutual labels:  human-pose-estimation
DeepVTB
🌌 OpenVTuber (a virtual idol sharing project): an application of real-time face and gaze analysis via deep neural networks.
Stars: ✭ 32 (-87.4%)
Mutual labels:  human-pose-estimation
posture recognition
Posture recognition based on common camera
Stars: ✭ 91 (-64.17%)
Mutual labels:  human-pose-estimation
MultiPerson-pose-estimation
This is the proposal network for MultiPerson Pose Estimation.
Stars: ✭ 15 (-94.09%)
Mutual labels:  human-pose-estimation
simple-HigherHRNet
Multi-person Human Pose Estimation with HigherHRNet in Pytorch
Stars: ✭ 122 (-51.97%)
Mutual labels:  human-pose-estimation
pose-estimation-3d-with-stereo-camera
This demo uses a deep neural network and two generic cameras to perform 3D pose estimation.
Stars: ✭ 40 (-84.25%)
Mutual labels:  human-pose-estimation
rmpe dataset server
Realtime Multi-Person Pose Estimation data server. Used as a training and validation data provider during the training process.
Stars: ✭ 14 (-94.49%)
Mutual labels:  human-pose-estimation
Keypoint Communities
[ICCV '21] In this repository you find the code to our paper "Keypoint Communities".
Stars: ✭ 255 (+0.39%)
Mutual labels:  human-pose-estimation
DenseNet-human-pose-estimation
Using DenseNet for human pose estimation based on TensorFlow.
Stars: ✭ 34 (-86.61%)
Mutual labels:  human-pose-estimation
OpenPoseDotNet
OpenPose wrapper written in C++ and C# for Windows
Stars: ✭ 55 (-78.35%)
Mutual labels:  human-pose-estimation
kapao
KAPAO is an efficient single-stage human pose estimation model that detects keypoints and poses as objects and fuses the detections to predict human poses.
Stars: ✭ 604 (+137.8%)
Mutual labels:  human-pose-estimation
PARE
Code for ICCV2021 paper PARE: Part Attention Regressor for 3D Human Body Estimation
Stars: ✭ 222 (-12.6%)
Mutual labels:  human-pose-estimation
Lite-HRNet
This is an official pytorch implementation of Lite-HRNet: A Lightweight High-Resolution Network.
Stars: ✭ 677 (+166.54%)
Mutual labels:  human-pose-estimation
openpifpaf
Official implementation of "OpenPifPaf: Composite Fields for Semantic Keypoint Detection and Spatio-Temporal Association" in PyTorch.
Stars: ✭ 900 (+254.33%)
Mutual labels:  human-pose-estimation
BlazePoseBarracuda
BlazePoseBarracuda is a human 2D/3D pose estimation neural network that runs the Mediapipe Pose (BlazePose) pipeline on the Unity Barracuda with GPU.
Stars: ✭ 131 (-48.43%)
Mutual labels:  human-pose-estimation
metro-pose3d
Metric-Scale Truncation-Robust Heatmaps for 3D Human Pose Estimation
Stars: ✭ 51 (-79.92%)
Mutual labels:  human-pose-estimation
BOA
Bilevel Online Adaptation for Human Mesh Reconstruction
Stars: ✭ 43 (-83.07%)
Mutual labels:  human-pose-estimation
BASH-Model
We developed a method animating a statistical 3D human model for biomechanical analysis to increase accessibility for non-experts, like patients, athletes, or designers.
Stars: ✭ 51 (-79.92%)
Mutual labels:  human-pose-estimation
ICON
ICON: Implicit Clothed humans Obtained from Normals (CVPR 2022)
Stars: ✭ 641 (+152.36%)
Mutual labels:  human-pose-estimation

ExPose: Monocular Expressive Body Regression through Body-Driven Attention


[Project Page] [Paper] [Supp. Mat.]

SMPL-X Examples

[Short Video] [Long Video]


License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use the ExPose data, model and software, (the "Data & Software"), including 3D meshes, images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Description

EXpressive POse and Shape rEgression (ExPose) is a method that estimates 3D body pose and shape, hand articulation and facial expression of a person from a single RGB image. For more details, please see our ECCV paper Monocular Expressive Body Regression through Body-Driven Attention. This repository contains:

  • A PyTorch demo to run ExPose on images.
  • An inference script for the supported datasets.

Installation

To install the necessary dependencies run the following command:

    pip install -r requirements.txt

The code has been tested with two configurations: a) with Python 3.7, CUDA 10.1, CuDNN 7.5 and PyTorch 1.5 on Ubuntu 18.04, and b) with Python 3.6, CUDA 10.2 and PyTorch 1.6 on Ubuntu 18.04.
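Before running anything, it can help to confirm that the interpreter and PyTorch build roughly match one of the tested configurations. The snippet below is a minimal, hypothetical sanity check (it is not part of the repository; adjust the minimum versions to your target setup):

```python
# Minimal environment check before running ExPose (illustrative helper,
# not part of the repository).
import sys

def check_python(min_version=(3, 6)):
    """True if the running interpreter meets the minimum tested version."""
    return sys.version_info[:2] >= min_version

if __name__ == "__main__":
    assert check_python(), "ExPose was tested with Python 3.6 and 3.7"
    try:
        import torch
        print("PyTorch:", torch.__version__,
              "| CUDA available:", torch.cuda.is_available())
    except ImportError:
        print("PyTorch missing; run `pip install -r requirements.txt` first")
```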

Preparing the data

First, you should head to the project website and create an account. If you want to stay informed, please opt in to email communication and we will reach out with any updates on the project. Once you have your account, log in and head to the download section to get the pre-trained ExPose model. Create a folder named data and extract the downloaded zip there. You should now have a folder with the following structure:

data
├── checkpoints
├── all_means.pkl
├── conf.yaml
├── shape_mean.npy
├── SMPLX_to_J14.pkl

For more information on the data, please read the data documentation. If you don't already have an account on the SMPL-X website, please register to be able to download the model. Afterward, extract the SMPL-X model zip inside the data folder you created above.

data
├── models
│   ├── smplx

You are now ready to run the demo and inference scripts.
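As a quick check before running the scripts, the expected layout can be verified programmatically. This is an illustrative helper (not part of the released code); the file list simply mirrors the structure documented above:

```python
# Check that the data/ folder matches the documented layout
# (hypothetical helper; not part of the ExPose codebase).
from pathlib import Path

EXPECTED_ENTRIES = [
    "checkpoints",
    "all_means.pkl",
    "conf.yaml",
    "shape_mean.npy",
    "SMPLX_to_J14.pkl",
    "models/smplx",
]

def missing_entries(data_dir="data", expected=EXPECTED_ENTRIES):
    """Return the documented entries that are absent under data_dir."""
    root = Path(data_dir)
    return [name for name in expected if not (root / name).exists()]
```

If `missing_entries()` returns a non-empty list, re-check the extraction steps above before running the demo.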

Demo

We provide a script to run ExPose directly on images. To get you started, we provide a sample folder, taken from pexels, which can be processed with the following command:

    python demo.py --image-folder samples \
    --exp-cfg data/conf.yaml \
    --show=False \
    --output-folder OUTPUT_FOLDER \
    --save-params [True/False] \
    --save-vis [True/False] \
    --save-mesh [True/False]

The script will use a Keypoint R-CNN from torchvision to detect people in the images and then produce a SMPL-X prediction for each using ExPose. You should see the following output for the sample image:

Sample HD Overlay

Inference

The inference script can be used to run inference on one of the supported datasets. For example, if you have a folder with images and OpenPose keypoints with the following structure:

folder
├── images
│   ├── img0001.jpg
│   └── img0002.jpg
├── keypoints
│   ├── img0001_keypoints.json
│   └── img0002_keypoints.json

Then you can use the following command to run ExPose for each person:

python inference.py --exp-cfg data/conf.yaml \
           --datasets openpose \
           --exp-opts datasets.body.batch_size B datasets.body.openpose.data_folder folder \
           --show=[True/False] \
           --output-folder OUTPUT_FOLDER \
           --save-params [True/False] \
           --save-vis [True/False] \
           --save-mesh [True/False]

You can select if you want to save the estimated parameters, meshes, and renderings by setting the corresponding flags.
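For reference, each `*_keypoints.json` file in the structure above follows OpenPose's output format: a `people` array where each detected person stores flattened `(x, y, confidence)` triplets. A minimal, illustrative reader (assuming the BODY_25 format, i.e. 25 body joints per person) might look like:

```python
# Minimal reader for OpenPose's JSON output (BODY_25 assumed: 25 joints,
# each stored as x, y, confidence). Illustrative helper, not part of the
# inference script.
import json

def load_pose_keypoints(path):
    """Return one list of [x, y, confidence] joints per detected person."""
    with open(path) as f:
        data = json.load(f)
    people = []
    for person in data.get("people", []):
        flat = person["pose_keypoints_2d"]
        people.append([flat[i:i + 3] for i in range(0, len(flat), 3)])
    return people
```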

Citation

If you find this Model & Software useful in your research we would kindly ask you to cite:

@inproceedings{ExPose:2020,
    title= {Monocular Expressive Body Regression through Body-Driven Attention},
    author= {Choutas, Vasileios and Pavlakos, Georgios and Bolkart, Timo and Tzionas, Dimitrios and Black, Michael J.},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year = {2020},
    url = {https://expose.is.tue.mpg.de}
}
@inproceedings{SMPL-X:2019,
    title = {Expressive Body Capture: 3D Hands, Face, and Body from a Single Image},
    author = {Pavlakos, Georgios and Choutas, Vasileios and Ghorbani, Nima and Bolkart, Timo and Osman, Ahmed A. A. and Tzionas, Dimitrios and Black, Michael J.},
    booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
    year = {2019}
}

Acknowledgments

We thank Haiwen Feng for the FLAME fits, Nikos Kolotouros, Muhammed Kocabas and Nikos Athanasiou for helpful discussions, Sai Kumar Dwivedi and Lea Müller for proofreading, and Mason Landry and Valerie Callaghan for video voiceovers.

Contact

The code of this repository was implemented by Vassilis Choutas.

For questions, please contact [email protected].

For commercial licensing (and all related questions for business applications), please contact [email protected].
