
JiangWenPL / Multiperson

Code repository for the paper: "Coherent Reconstruction of Multiple Humans from a Single Image" in CVPR'20

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Multiperson

HybrIK
Official code of "HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation", CVPR 2021
Stars: ✭ 395 (+86.32%)
Mutual labels:  cvpr, pose-estimation
awesome-visual-localization-papers
The relocalization task aims to estimate the 6-DoF pose of a novel (unseen) frame in the coordinate system given by the prior model of the world.
Stars: ✭ 60 (-71.7%)
Mutual labels:  cvpr, pose-estimation
Rnn For Human Activity Recognition Using 2d Pose Input
Activity Recognition from 2D pose using an LSTM RNN
Stars: ✭ 165 (-22.17%)
Mutual labels:  pose-estimation
Ml Auto Baseball Pitching Overlay
⚾🤖⚾ Automatic baseball pitching overlay in realtime
Stars: ✭ 200 (-5.66%)
Mutual labels:  pose-estimation
Face Yaw Roll Pitch From Pose Estimation Using Opencv
This work is used for pose estimation (yaw, pitch and roll) from face landmarks (left eye, right eye, nose, left mouth, right mouth and chin)
Stars: ✭ 183 (-13.68%)
Mutual labels:  pose-estimation
Ochumanapi
API for the dataset proposed in "Pose2Seg: Detection Free Human Instance Segmentation" @ CVPR2019.
Stars: ✭ 168 (-20.75%)
Mutual labels:  pose-estimation
Cvpr 2017 Abstracts Collection
Collection of CVPR 2017, including titles, links, authors, abstracts and my own comments
Stars: ✭ 186 (-12.26%)
Mutual labels:  cvpr
Lili Om
LiLi-OM is a tightly-coupled, keyframe-based LiDAR-inertial odometry and mapping system for both solid-state-LiDAR and conventional LiDARs.
Stars: ✭ 159 (-25%)
Mutual labels:  pose-estimation
Orn
Oriented Response Networks, in CVPR 2017
Stars: ✭ 207 (-2.36%)
Mutual labels:  cvpr
Pose2pose
This is a pix2pix demo that learns from pose and translates this into a human. A webcam-enabled application is also provided that translates your pose to the trained pose. Everybody dance now !
Stars: ✭ 182 (-14.15%)
Mutual labels:  pose-estimation
Mocapnet
We present MocapNET2, a real-time method that estimates the 3D human pose directly in the popular Bio Vision Hierarchy (BVH) format, given estimations of the 2D body joints originating from monocular color images. Our contributions include: (a) A novel and compact 2D pose NSRM representation. (b) A human body orientation classifier and an ensemble of orientation-tuned neural networks that regress the 3D human pose by also allowing for the decomposition of the body to an upper and lower kinematic hierarchy. This permits the recovery of the human pose even in the case of significant occlusions. (c) An efficient Inverse Kinematics solver that refines the neural-network-based solution providing 3D human pose estimations that are consistent with the limb sizes of a target person (if known). All the above yield a 33% accuracy improvement on the Human 3.6 Million (H3.6M) dataset compared to the baseline method (MocapNET) while maintaining real-time performance (70 fps in CPU-only execution).
Stars: ✭ 194 (-8.49%)
Mutual labels:  pose-estimation
Amass
Data preparation and loader for AMASS
Stars: ✭ 180 (-15.09%)
Mutual labels:  pose-estimation
Deeplabcut
Official implementation of DeepLabCut: Markerless pose estimation of user-defined features with deep learning for all animals incl. humans
Stars: ✭ 2,550 (+1102.83%)
Mutual labels:  pose-estimation
A2j
Code for paper "A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image". ICCV2019
Stars: ✭ 190 (-10.38%)
Mutual labels:  pose-estimation
Handpose
A python program to detect and classify hand pose using deep learning techniques
Stars: ✭ 168 (-20.75%)
Mutual labels:  pose-estimation
Human Pose Estimation Opencv
Perform Human Pose Estimation in OpenCV Using OpenPose MobileNet
Stars: ✭ 201 (-5.19%)
Mutual labels:  pose-estimation
Augmented reality
💎 "Marker-less Augmented Reality" with OpenCV and OpenGL.
Stars: ✭ 165 (-22.17%)
Mutual labels:  pose-estimation
Deepmatchvo
Implementation of ICRA 2019 paper: Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation
Stars: ✭ 178 (-16.04%)
Mutual labels:  pose-estimation
Hope
Source code of CVPR 2020 paper, "HOPE-Net: A Graph-based Model for Hand-Object Pose Estimation"
Stars: ✭ 184 (-13.21%)
Mutual labels:  pose-estimation
Hourglass Facekeypoints Detection
face keypoints detection based on stacked hourglass
Stars: ✭ 206 (-2.83%)
Mutual labels:  pose-estimation

Coherent Reconstruction of Multiple Humans from a Single Image

Code repository for the paper:
Coherent Reconstruction of Multiple Humans from a Single Image
Wen Jiang*, Nikos Kolotouros*, Georgios Pavlakos, Xiaowei Zhou, Kostas Daniilidis
CVPR 2020 [paper] [project page]

teaser

Contents

Our repository includes training/testing/demo code for our paper. Additionally, some parts of the code can be used in a standalone manner and might be useful on their own. More specifically:

Neural Mesh Renderer: Fast implementation of the original NMR.

SDF: CUDA implementation of the SDF computation and our SDF-based collision loss (a conceptual sketch of the collision penalty follows this list).

SMPLify 3D fitting: Extension of SMPLify that offers the functionality of fitting the SMPL model to 3D keypoints.
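
To give a flavor of how the SDF-based collision loss works, below is a minimal, self-contained sketch: it penalizes the vertices of one person that fall inside another person's volume, i.e. where the other person's signed distance is negative. An analytic sphere SDF stands in for the voxelized per-person SDFs computed by the CUDA module, and all function and variable names are illustrative rather than this repo's API.

import torch

def sphere_sdf(points, center, radius):
    # Signed distance to a sphere: negative inside, positive outside.
    # Stand-in for the voxelized per-person SDFs produced by the sdf module.
    return torch.norm(points - center, dim=-1) - radius

def collision_loss(vertices_a, sdf_b):
    # Penalize vertices of person A that lie inside person B's volume,
    # i.e. wherever B's signed distance is negative.
    phi = sdf_b(vertices_a)                     # (N,) signed distances
    return torch.clamp(-phi, min=0.0).pow(2).sum()

# Toy example: person B is approximated by a unit sphere at the origin;
# person A has one vertex inside it (penalized) and one outside (ignored).
verts_a = torch.tensor([[0.2, 0.0, 0.0],
                        [2.0, 0.0, 0.0]])
loss = collision_loss(verts_a, lambda p: sphere_sdf(p, torch.zeros(3), 1.0))
print(loss)  # positive, driven only by the first vertex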

Installation instructions

This codebase was adapted from an early version of mmdetection and mmcv. We highly recommend that users of this repo read the READMEs of mmcv and mmdetection before using this code.

To install the dependencies (neural_renderer, mmcv, mmdetection and sdf):

conda env create -f environment.yml
cd neural_renderer/
python3 setup.py install
cd ../mmcv
python3 setup.py install
cd ../mmdetection
./compile.sh
python setup.py develop
cd ../sdf
python3 setup.py install
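
After the installs finish, a quick import check confirms that the compiled packages are visible to Python. The module names below are assumptions inferred from the directories built above (mmdetection installs as mmdet; the sdf import name in particular may differ in your build):

# Sanity check after installation; module names are assumptions based on the
# directories compiled above.
import neural_renderer
import mmcv
import mmdet
import sdf

print("mmcv", mmcv.__version__, "| mmdet", mmdet.__version__)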

Fetch data

Download our model data and place it under mmdetection/data. This includes the model checkpoint and joint regressors. You also need to download the mean SMPL parameters from here, as well as the SMPL model itself; the neutral model is needed for training, evaluation and running the demo code. Please go to the corresponding project websites and register to get access to the downloads section. In case you need to convert the models to be compatible with Python 3, please follow the instructions here.
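
If you want to inspect the downloaded SMPL files directly rather than converting them, the usual trick for reading the original Python 2 pickles under Python 3 is to load them with a latin1 encoding. The sketch below uses a placeholder path and assumes chumpy is installed (the original files store chumpy arrays); it is not part of this repo's loading code.

import pickle

# Read an original (Python 2) SMPL .pkl under Python 3. The path is a
# placeholder; point it at your downloaded neutral model. Unpickling requires
# chumpy, since the original files store chumpy arrays.
with open('data/smpl/SMPL_NEUTRAL.pkl', 'rb') as f:
    smpl_data = pickle.load(f, encoding='latin1')

print(sorted(smpl_data.keys()))  # e.g. 'J_regressor', 'shapedirs', 'v_template', ...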

After finishing with the installation and downloading the necessary data, you can continue with running the demo/evaluation/training code.

Run demo code

We provide code to evaluate our pretrained model on a folder of images by running:

cd mmdetection
python3 tools/demo.py --config=configs/smpl/tune.py --image_folder=demo_images/ --output_folder=results/ --ckpt data/checkpoint.pt

Prepare datasets

Please refer to DATASETS.md for the preparation of the dataset files.

Run evaluation code

Besides the demo code, we also provide code to evaluate our models on the datasets we use for our quantitative evaluation. Before continuing, please make sure that you have followed the test set preparation steps.

You can use either our pretrained checkpoint or a model you trained yourself to evaluate on Panoptic, MuPoTS-3D, Human3.6M and PoseTrack.

Example usage:

cd mmdetection
python3 tools/full_eval.py configs/smpl/tune.py full_h36m --ckpt ./work_dirs/tune/latest.pth

Running the above command will compute the MPJPE and Reconstruction Error on the Human3.6M dataset (Protocol I). The full_h36m option can be replaced with other datasets or sequences, depending on the type of evaluation you want to perform:

  • haggling: haggling sequence of Panoptic
  • mafia: mafia sequence of Panoptic
  • ultimatum: ultimatum sequence of Panoptic
  • pizza: pizza sequence of Panoptic
  • mupots: MuPoTS-3D dataset
  • posetrack: PoseTrack dataset

Regarding the evaluation:

  • For Panoptic, the command will compute the MPJPE for each sequence.
  • For MuPoTS-3D, the command will save the results to work_dirs/tune/mupots.mat, which can be used as input to the official MuPoTS-3D test script.
  • For H36M, the command will compute the P1 and P2 errors on the test set (a sketch of these metrics follows below).
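
For reference, MPJPE (P1) is the mean Euclidean distance between predicted and ground-truth 3D joints, while the Reconstruction Error (P2, also called PA-MPJPE) is the same measure after rigidly aligning the prediction to the ground truth with Procrustes analysis. The NumPy sketch below illustrates both metrics; it is a standalone illustration, not the evaluation code in tools/full_eval.py.

import numpy as np

def mpjpe(pred, gt):
    # Mean per-joint position error for (J, 3) joint arrays.
    return np.linalg.norm(pred - gt, axis=-1).mean()

def procrustes_align(pred, gt):
    # Similarity (rotation + uniform scale + translation) alignment of pred to gt.
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    X, Y = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(X.T @ Y)
    D = np.eye(3)
    D[-1, -1] = np.sign(np.linalg.det(U @ Vt))  # avoid reflections
    R = U @ D @ Vt
    scale = (S * np.diag(D)).sum() / (X ** 2).sum()
    return scale * X @ R + mu_g

def reconstruction_error(pred, gt):
    # P2 / PA-MPJPE: MPJPE after Procrustes alignment.
    return mpjpe(procrustes_align(pred, gt), gt)

# Toy check: a rotated, scaled and shifted copy of the ground truth gives a
# non-zero MPJPE but a near-zero reconstruction error.
gt = np.random.randn(14, 3)
c, s = np.cos(0.5), np.sin(0.5)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
pred = 0.9 * gt @ Rz.T + np.array([0.1, -0.2, 0.05])
print(mpjpe(pred, gt), reconstruction_error(pred, gt))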

Run training code

Please make sure you have prepared all datasets before running our training script. Training our model takes three phases: pretrain -> baseline -> fine-tuning. We provide three configuration files under mmdetection/configs/smpl/. To train our model from scratch:

cd mmdetection
# Phase 1: pretraining
python3 tools/train.py configs/smpl/pretrain.py --create_dummy
while true
do
    python3 tools/train.py configs/smpl/pretrain.py
done
# Move to the next phase after training for 240k iterations

# Phase 2: baseline
python3 tools/train.py configs/smpl/baseline.py --load_pretrain ./work_dirs/pretrain/latest.pth
while true
do
    python3 tools/train.py configs/smpl/baseline.py
done
# Move to the next phase after training for 180k iterations

# Phase 3: fine-tuning
python3 tools/train.py configs/smpl/tune.py --load_pretrain ./work_dirs/baseline/latest.pth
while true
do
    python3 tools/train.py configs/smpl/tune.py
done
# Fine-tuning is done after 100k iterations of training

All the checkpoints, evaluation results and logs will be saved to ./mmdetection/work_dirs/pretrain, ./mmdetection/work_dirs/baseline and ./mmdetection/work_dirs/tune respectively. Our training program saves a checkpoint and restarts every 50 minutes. You can change the time_limit in the configuration files to something more convenient.
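
The configuration files under mmdetection/configs/smpl/ are mmdetection-style Python files, so adjusting the restart interval amounts to editing the time_limit value in the config you are training with. The line below is purely illustrative; the exact placement and units of the field in the real configs are assumptions, so check the config before changing it.

# Illustrative edit inside e.g. configs/smpl/tune.py; the field name comes from
# the text above, but its placement and units here are assumptions.
time_limit = 3 * 60 * 60  # allow ~3 hours between checkpoint-and-restart cycles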

Citing

If you find this code useful for your research, or use data generated by our method, please consider citing the following paper:

@Inproceedings{jiang2020mpshape,
  Title          = {Coherent Reconstruction of Multiple Humans from a Single Image},
  Author         = {Jiang, Wen and Kolotouros, Nikos and Pavlakos, Georgios and Zhou, Xiaowei and Daniilidis, Kostas},
  Booktitle      = {CVPR},
  Year           = {2020}
}

Acknowledgements

This code uses mmcv and mmdetection as its backbone. We gratefully appreciate the impact these libraries had on our work. If you use our code, please consider citing the original papers as well.
