
zhangboshen / A2J

License: MIT
Code for paper "A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image". ICCV2019

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives to or similar to A2J

FastPose
PyTorch real-time multi-person keypoint estimation
Stars: ✭ 36 (-81.05%)
Mutual labels:  real-time, pose-estimation
Hope
Source code of CVPR 2020 paper, "HOPE-Net: A Graph-based Model for Hand-Object Pose Estimation"
Stars: ✭ 184 (-3.16%)
Mutual labels:  pose-estimation, real-time
Mocapnet
We present MocapNET2, a real-time method that estimates the 3D human pose directly in the popular Bio Vision Hierarchy (BVH) format, given estimations of the 2D body joints originating from monocular color images. Our contributions include: (a) A novel and compact 2D pose NSRM representation. (b) A human body orientation classifier and an ensemble of orientation-tuned neural networks that regress the 3D human pose by also allowing for the decomposition of the body to an upper and lower kinematic hierarchy. This permits the recovery of the human pose even in the case of significant occlusions. (c) An efficient Inverse Kinematics solver that refines the neural-network-based solution providing 3D human pose estimations that are consistent with the limb sizes of a target person (if known). All the above yield a 33% accuracy improvement on the Human 3.6 Million (H3.6M) dataset compared to the baseline method (MocapNET) while maintaining real-time performance (70 fps in CPU-only execution).
Stars: ✭ 194 (+2.11%)
Mutual labels:  pose-estimation, real-time
Poseflow
PoseFlow: Efficient Online Pose Tracking (BMVC'18)
Stars: ✭ 330 (+73.68%)
Mutual labels:  pose-estimation, real-time
Mobilepose Pytorch
Light-weight Single Person Pose Estimator
Stars: ✭ 427 (+124.74%)
Mutual labels:  pose-estimation, real-time
Keras Realtime Multi-Person Pose Estimation
Keras version of Realtime Multi-Person Pose Estimation project
Stars: ✭ 728 (+283.16%)
Mutual labels:  pose-estimation, real-time
Openpose
OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation
Stars: ✭ 22,892 (+11948.42%)
Mutual labels:  pose-estimation, real-time
Gccpm Look Into Person Cvpr19.pytorch
Fast and accurate single-person pose estimation, ranked 10th at CVPR'19 LIP challenge. Contains implementation of "Global Context for Convolutional Pose Machines" paper.
Stars: ✭ 137 (-27.89%)
Mutual labels:  pose-estimation, real-time
Rekord
A JavaScript REST ORM that is offline and real-time capable
Stars: ✭ 171 (-10%)
Mutual labels:  real-time
Kimera Vio Ros
ROS wrapper for Kimera-VIO
Stars: ✭ 182 (-4.21%)
Mutual labels:  real-time
Handpose
A Python program to detect and classify hand poses using deep learning techniques
Stars: ✭ 168 (-11.58%)
Mutual labels:  pose-estimation
Deeplabcut
Official implementation of DeepLabCut: Markerless pose estimation of user-defined features with deep learning for all animals incl. humans
Stars: ✭ 2,550 (+1242.11%)
Mutual labels:  pose-estimation
Pose2pose
This is a pix2pix demo that learns from pose and translates it into a human figure. A webcam-enabled application is also provided that translates your pose to the trained pose. Everybody dance now!
Stars: ✭ 182 (-4.21%)
Mutual labels:  pose-estimation
Ochumanapi
API for the dataset proposed in "Pose2Seg: Detection Free Human Instance Segmentation" @ CVPR2019.
Stars: ✭ 168 (-11.58%)
Mutual labels:  pose-estimation
Goaccess
GoAccess is a real-time web log analyzer and interactive viewer that runs in a terminal in *nix systems or through your browser.
Stars: ✭ 14,096 (+7318.95%)
Mutual labels:  real-time
Rnn For Human Activity Recognition Using 2d Pose Input
Activity Recognition from 2D pose using an LSTM RNN
Stars: ✭ 165 (-13.16%)
Mutual labels:  pose-estimation
Amass
Data preparation and loader for AMASS
Stars: ✭ 180 (-5.26%)
Mutual labels:  pose-estimation
Augmented reality
💎 "Marker-less Augmented Reality" with OpenCV and OpenGL.
Stars: ✭ 165 (-13.16%)
Mutual labels:  pose-estimation
Hidden Two Stream
Caffe implementation for "Hidden Two-Stream Convolutional Networks for Action Recognition"
Stars: ✭ 179 (-5.79%)
Mutual labels:  real-time
Deepmatchvo
Implementation of ICRA 2019 paper: Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation
Stars: ✭ 178 (-6.32%)
Mutual labels:  pose-estimation


A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image

Update (2020-6-16)

We have uploaded A2J's prediction results in pixel coordinates (i.e., UVD format) for the NYU and ICVL datasets: https://github.com/zhangboshen/A2J/tree/master/result_nyu_icvl. The evaluation code (https://github.com/xinghaochen/awesome-hand-pose-estimation/tree/master/evaluation) can be used for performance comparison among SoTA methods.
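
As a rough illustration, a comparison script could look like the sketch below; the file layout (one line per frame, with the joints flattened as u, v, d triples) and the filenames are assumptions for illustration, not the repository's documented format.

import numpy as np

def load_uvd(path, num_joints):
    # Assumed layout: one line per frame, num_joints * 3 values (u, v, d).
    data = np.loadtxt(path)
    return data.reshape(-1, num_joints, 3)

def mean_uv_error(pred, gt):
    # Mean Euclidean distance over frames and joints, in pixels.
    return np.linalg.norm(pred[..., :2] - gt[..., :2], axis=-1).mean()

pred = load_uvd('result_nyu_icvl/a2j_nyu_pred.txt', num_joints=14)  # hypothetical filename
gt = load_uvd('nyu_gt_uvd.txt', num_joints=14)                      # hypothetical filename
print('mean UV error: %.2f px' % mean_uv_error(pred, gt))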

Update (2020-3-23)

We released our training code here.

Introduction

This is the official implementation for the paper, "A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image", ICCV 2019.

In this paper, we propose a simple yet effective approach, termed A2J, for 3D hand and human body pose estimation from a single depth image. Extensive evaluations on five datasets demonstrate A2J's superiority.

Please refer to our paper for more details, https://arxiv.org/abs/1908.09999.
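
For intuition, the core anchor-to-joint idea can be sketched in a few lines: densely placed in-plane anchor points each regress per-joint offsets, and per-anchor responses are normalized into weights so that informative anchors dominate the aggregated estimate. The sketch below is a minimal, self-contained illustration of that weighted aggregation; all names and shapes are illustrative, not the repository's actual model code.

import numpy as np

def a2j_aggregate(anchors, offsets, responses):
    # anchors:   (A, 2) in-plane anchor coordinates
    # offsets:   (A, J, 2) per-anchor, per-joint regressed offsets
    # responses: (A, J) per-anchor informativeness logits
    # returns:   (J, 2) estimated in-plane joint positions
    weights = np.exp(responses - responses.max(axis=0, keepdims=True))
    weights = weights / weights.sum(axis=0, keepdims=True)  # softmax over anchors
    votes = anchors[:, None, :] + offsets                   # per-anchor joint votes
    return (weights[:, :, None] * votes).sum(axis=0)        # weighted aggregation

rng = np.random.default_rng(0)
A, J = 16, 14                                # 16 anchors, 14 joints (illustrative)
anchors = rng.uniform(0, 176, size=(A, 2))   # e.g. anchors on a 176x176 crop
offsets = rng.normal(0, 5, size=(A, J, 2))
responses = rng.normal(0, 1, size=(A, J))
print(a2j_aggregate(anchors, offsets, responses).shape)  # (14, 2)

A2J applies the same weighted-aggregation idea in a separate depth branch, so each joint also receives an aggregated depth estimate.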

(Figure: A2J pipeline)

If you find our work useful in your research or publication, please cite our work:

@inproceedings{A2J,
author = {Xiong, Fu and Zhang, Boshen and Xiao, Yang and Cao, Zhiguo and Yu, Taidong and Zhou, Joey Tianyi and Yuan, Junsong},
title = {A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
year = {2019}
}

Comparison with state-of-the-art methods

(Figures: comparison with state-of-the-art methods on hand and body pose benchmarks)

A2J achieves 2nd place in the HANDS2019 3D Hand Pose Estimation Challenge

Task 1: Depth-Based 3D Hand Pose Estimation

(Figure: Task 1 results)

Task 2: Depth-Based 3D Hand Pose Estimation while Interacting with Objects

(Figure: Task 2 results)

About our code

Dependencies

Our code is tested on Ubuntu 16.04 with an NVIDIA 1080Ti GPU. Both PyTorch 0.4.1 and PyTorch 1.2 work (PyTorch 1.0/1.1 should also work).
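
A quick environment sanity check, assuming PyTorch is already installed:

import torch

# The versions above (0.4.1, 1.0-1.2) are the ones reported to work.
print('PyTorch version:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('GPU:', torch.cuda.get_device_name(0))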

Code

First clone this repository:

git clone https://github.com/zhangboshen/A2J
  • src folder contains model definitions, anchors, and test files for the NYU, ICVL, HANDS2017, ITOP, and K2HPD datasets.
  • data folder contains center point, bounding box, mean/std, and GT keypoint files for the 5 datasets.

Next, you may download our pre-trained model files from:

The directory structure of this code should look like:

A2J
│   README.md
│   LICENSE.md  
│
└───src
│   │   ....py
└───data
│   │   hands2017
│   │   icvl
│   │   itop_side
│   │   itop_top
│   │   k2hpd
│   │   nyu
└───model
│   │   HANDS2017.pth
│   │   ICVL.pth
│   │   ITOP_side.pth
│   │   ITOP_top.pth
│   │   K2HPD.pth
│   │   NYU.pth

You may also have to download these datasets manually:

  • NYU Hand Pose Dataset [link]
  • ICVL Hand Pose Dataset [link]
  • HANDS2017 Hand Pose Dataset [link]
  • ITOP Body Pose Dataset [link]
  • K2HPD Body Pose Dataset [link]

After downloading these datasets, you can follow the code in the data folder (data_preprosess.py) to convert the ICVL, NYU, ITOP, and K2HPD images to .mat files.
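
If you want to spot-check a converted file before testing, a scipy-based inspection like the one below can help; the path and the variable names inside the .mat file are assumptions and may differ from what data_preprosess.py actually writes.

import scipy.io as sio

mat = sio.loadmat('data/nyu/nyu_images.mat')   # hypothetical output path
for key, value in mat.items():
    if not key.startswith('__'):               # skip loadmat's metadata entries
        print(key, getattr(value, 'shape', type(value)))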

Finally, simply run DATASET_NAME.py in the src folder to test our model. For example, you can reproduce our HANDS2017 results by running:

python hands2017.py

There are some optional configurations you can adjust in the DATASET_NAME.py files.

Thanks to Gyeongsik Moon et al. for their nice work providing precomputed center files (https://github.com/mks0601/V2V-PoseNet_RELEASE) for the NYU, ICVL, HANDS2017, and ITOP datasets. This is really helpful to our work!

Qualitative Results

NYU hand pose dataset:

(Figure: qualitative results on the NYU hand pose dataset)

ITOP body pose dataset:

(Figure: qualitative results on the ITOP body pose dataset)
