
isarandi / metro-pose3d

License: MIT
Metric-Scale Truncation-Robust Heatmaps for 3D Human Pose Estimation

Programming Languages: Python, Shell

Update: An extended journal version (MeTRAbs) of this paper has now been accepted!

It is recommended to use the corresponding newer codebase, which has been updated to work with TensorFlow 2.

MeTRAbs repository at github.com/isarandi/metrabs


MeTRo 3D Human Pose Estimator

What is this?

TensorFlow code to train and evaluate the MeTRo method, proposed in our FG 2020 paper "Metric-Scale Truncation-Robust Heatmaps for 3D Human Pose Estimation" (Sárándi et al.).

What does it do?

It takes a single RGB image of a person as input and returns the 3D coordinates of predefined body joints relative to the pelvis. The coordinates are estimated directly in millimeters. The method always returns a complete pose, estimating joint positions even when they fall outside the image boundaries (truncation).
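
To make the output format concrete, here is a minimal NumPy sketch of what "pelvis-relative coordinates in millimeters" means. The 17-joint count, the pelvis index and the array contents are illustrative assumptions, not the actual API of this repository:

import numpy as np

# Illustrative sketch only: assume a predicted pose is an array of shape
# (num_joints, 3) with absolute-scale joint coordinates in millimeters.
PELVIS = 0                                   # assumed pelvis index; depends on the joint convention
coords_mm = np.random.randn(17, 3) * 300.0   # placeholder for a 17-joint prediction

# "Relative to the pelvis" means the pelvis is shifted to the origin and all
# other joints become offsets from it, still measured in millimeters.
pelvis_relative = coords_mm - coords_mm[PELVIS]

print(pelvis_relative[PELVIS])               # [0. 0. 0.] by construction
print(np.linalg.norm(pelvis_relative[9]))    # distance of some other joint from the pelvis, in mm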

How do I run it?

There is a small, self-contained script, inference.py, with minimal dependencies (just TensorFlow and NumPy), which can be used for inference with a pretrained, exported model. This makes it easy to drop into different application scenarios without carrying along the whole training codebase. The following exported models are currently available:

| 3D Training Data               | 2D Training Data | Architecture | Joint Convention | Model Download                           |
|--------------------------------|------------------|--------------|------------------|------------------------------------------|
| H36M                           | MPII             | ResNet-50    | H36M (17)        | stride_32, stride_16, stride_8, stride_4 |
| H36M, 3DHP, 3DPW, CMU-Panoptic | COCO, MPII       | ResNet-50    | COCO/CMU (19)    | stride_32, stride_16, stride_8, stride_4 |
| H36M, 3DHP, 3DPW, CMU-Panoptic | COCO, MPII       | ResNet-101   | COCO/CMU (19)    | stride_32, stride_16, stride_8, stride_4 |

Use the inference script as follows:

model=many_rn101_st16.pb  # one of the exported models listed in the table above
wget https://omnomnom.vision.rwth-aachen.de/data/metro-pose3d/$model  # download the exported model
./inference.py --model-path=$model  # run the standalone inference script

These .pb files contain both the full TensorFlow graph and the weights.
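
If you prefer to load such an exported model in your own code rather than through inference.py, a frozen graph of this kind can be loaded with the TF1-compatibility API roughly as sketched below. The tensor names 'input:0' and 'output:0' and the input resolution are placeholders for illustration, not the model's real interface; the actual operation names can be listed via graph.get_operations(), and inference.py handles all of this for you.

import numpy as np
import tensorflow as tf

# Rough sketch of loading a frozen-graph .pb (graph definition + weights in one file).
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('many_rn101_st16.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name='')

with tf.compat.v1.Session(graph=graph) as sess:
    image_batch = np.zeros((1, 256, 256, 3), dtype=np.float32)  # placeholder input batch
    # 'input:0' and 'output:0' are assumed tensor names, not necessarily the real ones.
    poses = sess.run('output:0', feed_dict={'input:0': image_batch})  # (1, num_joints, 3), in mm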

The main purpose of the H36M model is to obtain benchmark results comparable with prior work. If you simply need good 3D pose predictions for a downstream application, it is recommended to use the models trained on more data. These were also trained with upper-body crops to enhance truncation robustness, and they use the COCO/CMU-Panoptic joint convention, so some facial keypoints (eyes, ears, nose) are estimated as well.

How do I train it?

  1. See DEPENDENCIES.md for installing the dependencies.
  2. Then follow DATASETS.md to download and prepare the training and test data.
  3. Finally, see TRAINING.md for instructions on running experiments.

How do I cite it?

If you use this work, please cite it as:

@inproceedings{Sarandi20FG,
  title={Metric-Scale Truncation-Robust Heatmaps for 3{D} Human Pose Estimation},
  author={S\'ar\'andi, Istv\'an and Linder, Timm and Arras, Kai O. and Leibe, Bastian},
  booktitle={IEEE Int Conf Automatic Face and Gesture Recognition (FG)},
  year={2020},
}

Can I ask you something?

Sure, create an issue or send an email to István Sárándi ([email protected]).
