bearpaw / Pyranet

License: Apache-2.0
Code for "Learning Feature Pyramids for Human Pose Estimation" (ICCV 2017)

Programming Languages

Lua

Projects that are alternatives of or similar to Pyranet

Tensorflow realtime multi Person pose estimation
Multi-Person Pose Estimation project for Tensorflow 2.0 with a small and fast model based on MobilenetV3
Stars: ✭ 129 (-41.89%)
Mutual labels:  human-pose-estimation
Vnect
Real-time 3D human pose estimation, implemented by tensorflow
Stars: ✭ 157 (-29.28%)
Mutual labels:  human-pose-estimation
Multiposenet.pytorch
pytorch implementation of MultiPoseNet (ECCV 2018, Muhammed Kocabas et al.)
Stars: ✭ 191 (-13.96%)
Mutual labels:  human-pose-estimation
Vibe
Official implementation of CVPR2020 paper "VIBE: Video Inference for Human Body Pose and Shape Estimation"
Stars: ✭ 2,080 (+836.94%)
Mutual labels:  human-pose-estimation
Evoskeleton
Official project website for the CVPR 2020 paper (Oral Presentation) "Cascaded Deep Monocular 3D Human Pose Estimation With Evolutionary Training Data"
Stars: ✭ 154 (-30.63%)
Mutual labels:  human-pose-estimation
Keraspersonlab
Keras-tensorflow implementation of PersonLab (https://arxiv.org/abs/1803.08225)
Stars: ✭ 163 (-26.58%)
Mutual labels:  human-pose-estimation
Metrabs
This is a computer vision algorithm that takes a single RGB image as the input and estimates 3D human poses as the output.
Stars: ✭ 123 (-44.59%)
Mutual labels:  human-pose-estimation
Binary Human Pose Estimation
This code implements a demo of the Binarized Convolutional Landmark Localizers for Human Pose Estimation and Face Alignment with Limited Resources paper by Adrian Bulat and Georgios Tzimiropoulos.
Stars: ✭ 210 (-5.41%)
Mutual labels:  human-pose-estimation
V2v Posenet Pytorch
PyTorch implementation of V2V-PoseNet with IntegralPose/PoseFix loss
Stars: ✭ 156 (-29.73%)
Mutual labels:  human-pose-estimation
Posenet Pytorch
A PyTorch port of Google TensorFlow.js PoseNet (Real-time Human Pose Estimation)
Stars: ✭ 187 (-15.77%)
Mutual labels:  human-pose-estimation
Cdcl Human Part Segmentation
Repository for Paper: Cross-Domain Complementary Learning Using Pose for Multi-Person Part Segmentation (TCSVT20)
Stars: ✭ 143 (-35.59%)
Mutual labels:  human-pose-estimation
Pytorch Pose Estimation
PyTorch Implementation of Realtime Multi-Person Pose Estimation project.
Stars: ✭ 152 (-31.53%)
Mutual labels:  human-pose-estimation
Deepstream pose estimation
This is a sample DeepStream application to demonstrate a human pose estimation pipeline.
Stars: ✭ 168 (-24.32%)
Mutual labels:  human-pose-estimation
Gccpm Look Into Person Cvpr19.pytorch
Fast and accurate single-person pose estimation, ranked 10th at CVPR'19 LIP challenge. Contains implementation of "Global Context for Convolutional Pose Machines" paper.
Stars: ✭ 137 (-38.29%)
Mutual labels:  human-pose-estimation
Human Pose Estimation.pytorch
The project is an official implement of our ECCV2018 paper "Simple Baselines for Human Pose Estimation and Tracking(https://arxiv.org/abs/1804.06208)"
Stars: ✭ 2,485 (+1019.37%)
Mutual labels:  human-pose-estimation
Human Pose Estimation Papers
2D&3D human pose estimation
Stars: ✭ 126 (-43.24%)
Mutual labels:  human-pose-estimation
Awesome Human Motion
🏃‍♀️ A curated list about human motion capture, analysis and synthesis.
Stars: ✭ 161 (-27.48%)
Mutual labels:  human-pose-estimation
Cu Net
Code for "Quantized Densely Connected U-Nets for Efficient Landmark Localization" (ECCV 2018) and "CU-Net: Coupled U-Nets" (BMVC 2018 oral)
Stars: ✭ 218 (-1.8%)
Mutual labels:  human-pose-estimation
Pytorch realtime multi Person pose estimation
Pytorch version of Realtime Multi-Person Pose Estimation project
Stars: ✭ 205 (-7.66%)
Mutual labels:  human-pose-estimation
Imgclsmob
Sandbox for training deep learning networks
Stars: ✭ 2,405 (+983.33%)
Mutual labels:  human-pose-estimation

Learning Feature Pyramids for Human Pose Estimation

Training and testing code for the paper

Learning Feature Pyramids for Human Pose Estimation
Wei Yang, Shuang Li, Wanli Ouyang, Hongsheng Li, Xiaogang Wang
ICCV, 2017

This code is based on stacked hourglass networks and fb.resnet.torch. Thanks to the authors.

Install

  1. Install Torch.

  2. Install dependencies.

    luarocks install hdf5
    luarocks install matio
    luarocks install optnet
    
  3. (Optional) Install NCCL for better performance when training with multiple GPUs.

    git clone https://github.com/NVIDIA/nccl.git
    cd nccl
    make 
    make install
    luarocks install nccl
    

    Set LD_LIBRARY_PATH in ~/.bashrc if libnccl.so is not found at runtime.
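
    For example, assuming NCCL was installed under the default /usr/local prefix, you could add the following line to ~/.bashrc:

    # the /usr/local/lib path is an assumption; point it at the directory that contains libnccl.so
    export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH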

  4. Prepare dataset.
    Create a symbolic link to the images directory of the MPII dataset:

    ln -s PATH_TO_MPII_IMAGES_DIR data/mpii/images
    

    Create a symbolic link to the images directory of the LSP dataset (images are stored in PATH_TO_LSP_DIR/images):

    ln -s PATH_TO_LSP_DIR data/lsp/lsp_dataset
    

    Create a symbolic link to the images directory of the LSP extension dataset (images are stored in PATH_TO_LSPEXT_DIR/images):

    ln -s PATH_TO_LSPEXT_DIR data/lsp/lspet_dataset
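
    As an optional sanity check (not part of the original instructions), confirm that each symbolic link resolves to an existing directory:

    # optional: each link created above should point at a real dataset directory
    ls -l data/mpii/images data/lsp/lsp_dataset data/lsp/lspet_dataset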
    

Training and Testing

Quick Start

Testing from our pretrained model

Download our pretrained model from Google Drive to the ./pretrained folder, then test on the MPII validation set by running the following command:

qlua main.lua -batchSize 1 -nGPU 1 -nStack 8 -minusMean true -loadModel pretrained/model_250.t7 -testOnly true -debug true

Example

For multi-scale testing, run

qlua evalPyra.lua -batchSize 1 -nGPU 1 -nStack 8 -minusMean true -loadModel pretrained/model_250.t7 -testOnly true -debug true

Note:

  • If you do NOT want to visualize the results, set -debug false and use th instead of qlua (see the example after this list).
  • You may set the number of scales in evalPyra.lua (line 22). Use fewer scales or multiple GPUs if an "out of memory" error occurs.
  • Use -loadModel MODEL_PATH to load a specific model for testing or training.
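
For example, to run the single-scale test without visualization, combine the flags above with the pretrained model downloaded earlier:

th main.lua -batchSize 1 -nGPU 1 -nStack 8 -minusMean true -loadModel pretrained/model_250.t7 -testOnly true -debug false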

Train a two-stack hourglass model

Train an example two-stack hourglass model on the MPII dataset with the proposed Pyramid Residual Modules (PRMs):

sh ./experiments/mpii/hg-prm-stack2.sh 

Customize your own training and testing procedure

A sample script for training a stacked hourglass model on the MPII dataset (set nStack to control the number of stacks):

#!/usr/bin/env sh
expID=mpii/mpii_hg8   # snapshots and log files will be saved in checkpoints/$expID
dataset=mpii          # mpii | mpii-lsp | lsp
gpuID=0,1             # GPUs visible to the program
nGPU=2                # how many GPUs will be used to train the model
batchSize=16          
LR=6.7e-4
netType=hg-prm        # network architecture
nStack=2
nResidual=1
nThreads=4            # how many threads will be used to load data
minusMean=true
nClasses=16
nEpochs=200           
snapshot=10           # save a model snapshot every $snapshot epochs

OMP_NUM_THREADS=1 CUDA_VISIBLE_DEVICES=$gpuID th main.lua \
   -dataset $dataset \
   -expID $expID \
   -batchSize $batchSize \
   -nGPU $nGPU \
   -LR $LR \
   -momentum 0.0 \
   -weightDecay 0.0 \
   -netType $netType \
   -nStack $nStack \
   -nResidual $nResidual \
   -nThreads $nThreads \
   -minusMean $minusMean \
   -nClasses $nClasses \
   -nEpochs $nEpochs \
   -snapshot $snapshot \
   # -resume checkpoints/$expID  \  # uncomment this line to resume training
   # -testOnly true \               # uncomment this line to test on validation data
   # -testRelease true \            # uncomment this line to test on test data (MPII dataset)
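
To use a customized script like the one above, you could save it under experiments/ (the filename below is hypothetical) and launch it the same way as the provided two-stack script:

# save the sample above as experiments/mpii/my_hg8_experiment.sh (hypothetical name), then:
sh ./experiments/mpii/my_hg8_experiment.sh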

Evaluation

You may evaluate the PCKh score of your model on the MPII validation set. To get started, download our predictions pred_multiscale_250.h5 to ./pretrained from Google Drive, then run the MATLAB script evaluation/eval_PCKh.m. You should get the following results:

       Head   Shoulder  Elbow   Wrist   Hip    Knee   Ankle   Mean
name   97.41  96.16     91.10   86.88   90.05  86.00  83.89   90.27
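
If you prefer to launch MATLAB from the shell rather than the GUI, an invocation along these lines should work (assuming matlab is on your PATH; this exact command is not part of the original instructions):

# run the PCKh evaluation script headlessly, then quit MATLAB
matlab -nodisplay -nosplash -r "cd evaluation; eval_PCKh; exit"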

Citation

If you find this code useful in your research, please consider citing:

@inproceedings{yang2017pyramid,
    Title = {Learning Feature Pyramids for Human Pose Estimation},
    Author = {Yang, Wei and Li, Shuang and Ouyang, Wanli and Li, Hongsheng and Wang, Xiaogang},
    Booktitle = {IEEE International Conference on Computer Vision (ICCV)},
    Year = {2017}
}