yuanyuanli85 / Fast_human_pose_estimation_pytorch

Licence: apache-2.0
PyTorch code for the CVPR 2019 paper "Fast Human Pose Estimation" https://arxiv.org/abs/1811.05419

Programming Languages

python

Projects that are alternatives to or similar to Fast_human_pose_estimation_pytorch

Indoor-SfMLearner
[ECCV'20] Patch-match and Plane-regularization for Unsupervised Indoor Depth Estimation
Stars: ✭ 115 (-60.07%)
Mutual labels:  pose-estimation
HyperFace-TensorFlow-implementation
HyperFace
Stars: ✭ 68 (-76.39%)
Mutual labels:  pose-estimation
Detectron.pytorch
A pytorch implementation of Detectron. Both training from scratch and inferring directly from pretrained Detectron weights are available.
Stars: ✭ 2,805 (+873.96%)
Mutual labels:  pose-estimation
openpifpaf
Official implementation of "OpenPifPaf: Composite Fields for Semantic Keypoint Detection and Spatio-Temporal Association" in PyTorch.
Stars: ✭ 900 (+212.5%)
Mutual labels:  pose-estimation
ONNX-Mobile-Human-Pose-3D
Python scripts for performing 3D human pose estimation using the Mobile Human Pose model in ONNX.
Stars: ✭ 69 (-76.04%)
Mutual labels:  pose-estimation
handobjectconsist
[cvpr 20] Demo, training and evaluation code for joint hand-object pose estimation in sparsely annotated videos
Stars: ✭ 100 (-65.28%)
Mutual labels:  pose-estimation
chainer-dense-fusion
Chainer implementation of Dense Fusion
Stars: ✭ 21 (-92.71%)
Mutual labels:  pose-estimation
Tf.fashionai
Full pipeline for the TianChi FashionAI clothes keypoints detection competition in TensorFlow
Stars: ✭ 280 (-2.78%)
Mutual labels:  pose-estimation
rmpe dataset server
Realtime Multi-Person Pose Estimation data server. Used as a training and validation data provider in training process.
Stars: ✭ 14 (-95.14%)
Mutual labels:  pose-estimation
Rmpe
RMPE: Regional Multi-person Pose Estimation, forked from Caffe. Research purpose only.
Stars: ✭ 258 (-10.42%)
Mutual labels:  pose-estimation
ViPNAS
The official repo for CVPR2021——ViPNAS: Efficient Video Pose Estimation via Neural Architecture Search.
Stars: ✭ 32 (-88.89%)
Mutual labels:  pose-estimation
StarMap
StarMap for Category-Agnostic Keypoint and Viewpoint Estimation
Stars: ✭ 97 (-66.32%)
Mutual labels:  pose-estimation
binPicking 3dseg
separate from 6dpose repo for deployment
Stars: ✭ 19 (-93.4%)
Mutual labels:  pose-estimation
EgoNet
Official project website for the CVPR 2021 paper "Exploring intermediate representation for monocular vehicle pose estimation"
Stars: ✭ 111 (-61.46%)
Mutual labels:  pose-estimation
Pose Residual Network Pytorch
Code for the Pose Residual Network introduced in 'MultiPoseNet: Fast Multi-Person Pose Estimation using Pose Residual Network' paper https://arxiv.org/abs/1807.04067
Stars: ✭ 277 (-3.82%)
Mutual labels:  pose-estimation
Barracuda-PoseNet
PoseNet Using Unity MLAgents Barracuda Engine
Stars: ✭ 31 (-89.24%)
Mutual labels:  pose-estimation
deep underwater localization
Source Code for "DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization", submitted to IROS 2020
Stars: ✭ 13 (-95.49%)
Mutual labels:  pose-estimation
Mspn
Multi-Stage Pose Network
Stars: ✭ 283 (-1.74%)
Mutual labels:  pose-estimation
Awesome Action Recognition
A curated list of action recognition and related area resources
Stars: ✭ 3,202 (+1011.81%)
Mutual labels:  pose-estimation
EgoPose
Official PyTorch Implementation of "Ego-Pose Estimation and Forecasting as Real-Time PD Control". ICCV 2019.
Stars: ✭ 65 (-77.43%)
Mutual labels:  pose-estimation

Fast Human Pose Estimation Pytorch

This is an unofficial implementation of the paper Fast Human Pose Estimation (Feng Zhang, Xiatian Zhu, Mao Ye). Most of the code comes from pytorch-pose, a PyTorch implementation of the stacked hourglass network. In this repo, we follow the Fast Pose Distillation (FPD) approach proposed in Fast Human Pose Estimation to improve the accuracy of a lightweight network. We first trained a deep teacher network (stacks=8, standard convolution, 88.33% PCKh on MPII) and used it to teach a student network (stacks=2, depthwise convolution, 84.69% PCKh on MPII). Our experiments show a 0.7% gain from knowledge distillation.
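
The mobile student models presumably swap the standard 3x3 convolutions in the hourglass blocks for depthwise separable convolutions (the student is described above as using "depthwise convolution"); together with using 2 stacks instead of 8, this is where the drop from 25.59M to 2.31M weights comes from. A minimal sketch of such a block, with an illustrative class name and layer layout rather than the repo's exact module:

    import torch
    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        """3x3 depthwise convolution followed by a 1x1 pointwise convolution,
        used in place of a standard 3x3 convolution to cut parameters and FLOPs."""
        def __init__(self, in_ch, out_ch):
            super(DepthwiseSeparableConv, self).__init__()
            # groups=in_ch gives each input channel its own 3x3 filter
            self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                                       groups=in_ch, bias=False)
            # the 1x1 convolution mixes channels back together
            self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))

    # a 128-channel feature map keeps its spatial size: (1, 128, 64, 64)
    x = torch.randn(1, 128, 64, 64)
    print(DepthwiseSeparableConv(128, 128)(x).shape)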

I benchmarked the lightweight student model hg_s2_b1_mobile_fpd at 43 FPS on an i7-8700K via OpenVINO. Details can be found in Fast_Stacked_Hourglass_Network_OpenVino.

Please also check the official implementation: fast-human-pose-estimation.pytorch.

Update (Feb 2019)

  • Uploaded a model trained with extra unlabeled images: hg_s2_b1_mobile_fpd_unlabeled shows a 0.28% extra gain from knowledge transferred from the teacher on unlabeled data.
  • The key idea is to insert unlabeled images into the MPII dataset. For unlabeled samples, the loss comes only from the difference between the teacher's and the student's predictions; for labeled samples, the loss is the sum of the teacher-vs-student and student-vs-ground-truth terms (see the sketch after this list).
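
A minimal sketch of that combined loss, assuming MSE heatmap losses and an equal weighting between the two terms (the actual weighting used in example/mpii_kd.py may differ):

    import torch
    import torch.nn.functional as F

    def fpd_loss(student_hm, teacher_hm, gt_hm=None, alpha=0.5):
        """Pose-distillation loss sketch.
        Unlabeled sample: only mimic the teacher's heatmaps.
        Labeled sample: weighted sum of the teacher-mimic term and the
        ground-truth term (alpha=0.5 is a plain sum up to a constant factor)."""
        kd_term = F.mse_loss(student_hm, teacher_hm)
        if gt_hm is None:                 # unlabeled sample
            return kd_term
        gt_term = F.mse_loss(student_hm, gt_hm)
        return alpha * kd_term + (1.0 - alpha) * gt_term

    # heatmaps: batch x 16 MPII joints x 64 x 64
    s, t, g = (torch.randn(4, 16, 64, 64) for _ in range(3))
    print(fpd_loss(s, t))      # unlabeled image: teacher-vs-student only
    print(fpd_loss(s, t, g))   # labeled image: both terms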

Results

hg_s8_b1: teacher model; hg_s2_b1_mobile: student model; hg_s2_b1_mobile_fpd: student model trained with FPD; hg_s2_b1_mobile_fpd_unlabeled: student model trained with FPD plus extra unlabeled samples. Head through Mean are per-joint PCKh scores (%) on the MPII validation set; a short sketch of how PCKh is computed follows the table.

Model | in_res | features | # of weights | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean | GFLOPs | Link
hg_s8_b1 | 256 | 128 | 25.59M | 96.59 | 95.35 | 89.38 | 84.15 | 88.70 | 83.98 | 79.59 | 88.33 | 28 | GoogleDrive
hg_s2_b1_mobile | 256 | 128 | 2.31M | 95.80 | 93.61 | 85.50 | 79.63 | 86.13 | 77.82 | 73.62 | 84.69 | 3.2 | GoogleDrive
hg_s2_b1_mobile_fpd | 256 | 128 | 2.31M | 95.67 | 94.07 | 86.31 | 79.68 | 86.00 | 79.67 | 75.51 | 85.41 | 3.2 | GoogleDrive
hg_s2_b1_mobile_fpd_unlabeled | 256 | 128 | 2.31M | 95.94 | 94.11 | 87.18 | 80.69 | 87.03 | 79.17 | 74.82 | 85.69 | 3.2 | GoogleDrive
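
As a rough illustration of the metric (tools/eval_PCKh.py is the reference implementation, not this snippet), a joint counts as correct when its predicted location falls within thr times the head-segment length of the ground truth; MPII conventionally uses thr = 0.5:

    import numpy as np

    def pckh(pred, gt, head_sizes, visible, thr=0.5):
        """PCKh sketch. Shapes are illustrative:
        pred, gt:    (N, K, 2) keypoint coordinates in pixels
        head_sizes:  (N,) head-segment length per image
        visible:     (N, K) boolean mask of annotated joints"""
        dist = np.linalg.norm(pred - gt, axis=2)        # (N, K) pixel errors
        norm_dist = dist / head_sizes[:, None]          # normalize by head size
        correct = (norm_dist <= thr) & visible
        return 100.0 * correct.sum() / float(visible.sum())

    # toy example: 2 images, 16 joints each
    gt = np.random.rand(2, 16, 2) * 256
    pred = gt + np.random.randn(2, 16, 2) * 5
    print(pckh(pred, gt, head_sizes=np.array([60.0, 55.0]),
               visible=np.ones((2, 16), dtype=bool)))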

Installation

  1. Create a virtualenv

    virtualenv -p /usr/bin/python2.7 pose_venv
    
  2. Clone the repository with submodule

    git clone --recursive https://github.com/yuanyuanli85/Fast_Human_Pose_Estimation_Pytorch.git
    
  3. Install all dependencies in virtualenv

    source pose_venv/bin/activate
    pip install -r requirements.txt
    
  4. Create a symbolic link to the images directory of the MPII dataset:

    ln -s PATH_TO_MPII_IMAGES_DIR data/mpii/images
    
  5. Disable cuDNN for the batch norm layer to work around a bug in PyTorch 0.4.0

    sed -i "1194s/torch\.backends\.cudnn\.enabled/False/g" ./pose_venv/lib/python2.7/site-packages/torch/nn/functional.py
    

Quick Demo

  • Download the pre-trained model (hg_s2_b1_mobile_fpd) and save it somewhere, e.g. checkpoint/mpii_hg_s2_b1_mobile_fpd/
  • Run demo on sample image
python tools/mpii_demo.py -a hg -s 2 -b 1 --mobile True --checkpoint checkpoint/mpii_hg_s2_b1_mobile_fpd/model_best.pth.tar --in_res 256 --device cuda 
  • You will see the detected keypoints drawn on the image on your screen (a minimal heatmap-decoding sketch follows this list)
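
Internally, the network outputs one heatmap per joint, and tools/mpii_demo.py decodes keypoints from them. A generic decoding sketch (the actual demo script may add sub-pixel refinement and rescaling back to the original image size):

    import torch

    def heatmaps_to_keypoints(heatmaps):
        """Return (x, y) heatmap coordinates of the peak of each joint heatmap.
        heatmaps: tensor of shape (num_joints, H, W), e.g. (16, 64, 64) for MPII."""
        num_joints, h, w = heatmaps.shape
        flat = heatmaps.view(num_joints, -1)
        idx = flat.argmax(dim=1)          # index of the peak in each flattened map
        ys = idx // w                     # row of the peak
        xs = idx % w                      # column of the peak
        return torch.stack([xs, ys], dim=1)

    hm = torch.rand(16, 64, 64)           # stand-in for the network output
    print(heatmaps_to_keypoints(hm))      # 16 rows of (x, y)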

Training teacher network

  • In our experiments, we used stacks=8 and input resolution=256 for the teacher network
python example/mpii.py -a hg --stacks 8 --blocks 1 --checkpoint checkpoint/hg_s8_b1/ 
  • Run evaluation to get the validation score.
python tools/mpii.py -a hg --stacks 8 --blocks 1 --checkpoint checkpoint/hg_s8_b1/preds_best.mat 

Training with Knowledge Distillation

  • Download the teacher model's checkpoint, or train it from scratch. In our experiments, we used hg_s8_b1 as the teacher.

  • Train the student network with knowledge distillation from the teacher

python example/mpii_kd.py -a hg --stacks 2 --blocks 1 --checkpoint checkpoint/hg_s2_b1_mobile/ --mobile True --teacher_stack 8 --teacher_checkpoint checkpoint/hg_s8_b1/model_best.pth.tar

Evaluation

Run evaluation to generate the .mat file of predictions

python example/mpii.py -a hg --stacks 2 --blocks 1 --checkpoint checkpoint/hg_s2_b1/ --resume checkpoint/hg_s2_b1/model_best.pth.tar -e
  • --resume points to the checkpoint you want to evaluate

Run tools/eval_PCKh.py to get the validation score.

Export PyTorch checkpoint to ONNX

python tools/mpii_export_to_onxx.py -a hg -s 2 -b 1 --num-classes 16 --mobile True --in_res 256 --checkpoint checkpoint/model_best.pth.tar --out_onnx checkpoint/model_best.onnx

Here:

  • --checkpoint is the checkpoint you want to export
  • --out_onnx is the path of the exported ONNX file (a quick onnxruntime sanity check is sketched below)
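
A quick sanity check of the exported file with onnxruntime; the 1x3x256x256 input layout is an assumption matching --in_res 256, and the output is assumed to be one heatmap tensor per stack:

    import numpy as np
    import onnxruntime as ort

    # load the exported model and feed it a dummy 256x256 RGB image
    sess = ort.InferenceSession("checkpoint/model_best.onnx")
    input_name = sess.get_inputs()[0].name
    dummy = np.random.rand(1, 3, 256, 256).astype(np.float32)
    outputs = sess.run(None, {input_name: dummy})
    print([o.shape for o in outputs])     # expect heatmaps such as (1, 16, 64, 64)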

Reference

  • Fast Human Pose Estimation, Feng Zhang, Xiatian Zhu, Mao Ye. CVPR 2019. https://arxiv.org/abs/1811.05419
