
Multi-Person Pose Estimation Based on Gaussian Response Heatmaps

Code and pre-trained models for our paper.

This repo is Part A of our work: we associate the keypoints of individual poses using body parts.

Part B is in the Improved-Body-Parts repo on GitHub.

Introduction

A bottom-up approach to multi-person pose estimation. This part is built on the network backbones of CMU-Pose (i.e., OpenPose). A modified network is also trained and evaluated.

[Figures: method overview; focal L2 (FL2) loss]
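
Since the title centers on Gaussian response heatmaps, here is a minimal numpy sketch of how such ground-truth heatmaps are commonly rendered from keypoint coordinates. The function name, σ value, and stride handling are illustrative assumptions, not this repo's exact code.

import numpy as np

def gaussian_heatmap(height, width, keypoints, sigma=7.0, stride=8):
    """Render one Gaussian response map for a single keypoint type.

    keypoints: list of (x, y) pixel coordinates of this keypoint type
               for all people in the image.
    Returns a (height // stride, width // stride) float32 map in [0, 1].
    """
    h, w = height // stride, width // stride
    # Grid coordinates at the centers of the strided cells.
    xs = (np.arange(w) + 0.5) * stride
    ys = (np.arange(h) + 0.5) * stride
    grid_x, grid_y = np.meshgrid(xs, ys)
    heatmap = np.zeros((h, w), dtype=np.float32)
    for x, y in keypoints:
        g = np.exp(-((grid_x - x) ** 2 + (grid_y - y) ** 2) / (2.0 * sigma ** 2))
        # Where people overlap, keep the maximum response (common practice).
        heatmap = np.maximum(heatmap, g)
    return heatmap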

Contents

  1. Training
  2. Evaluation
  3. Demo

Task Lists

  • Add more descriptions and complete the project

Project Features

  • Implement the models in Keras with a TensorFlow backend
  • VGG as the backbone
  • No batch normalization layers
  • Support training on multiple GPUs, with training samples sliced among them
  • Fast data preparation and augmentation during training
  • Different learning rates at different layers (see the sketch below)
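
Keras has no built-in support for per-layer learning rates, so implementations in this family typically use a custom optimizer. Below is a minimal sketch of the idea for Keras 2.1.x, assuming a hypothetical lr_multipliers dict keyed by weight name; it is not this repo's actual optimizer, and Nesterov momentum and weight constraints are omitted for brevity.

import keras.backend as K
from keras.optimizers import SGD

class MultiSGD(SGD):
    """SGD whose effective learning rate is scaled per weight tensor."""

    def __init__(self, lr_multipliers=None, **kwargs):
        super(MultiSGD, self).__init__(**kwargs)
        self.lr_multipliers = lr_multipliers or {}  # weight name -> factor

    def get_updates(self, loss, params):
        grads = self.get_gradients(loss, params)
        self.updates = [K.update_add(self.iterations, 1)]
        lr = self.lr
        if self.initial_decay > 0:
            lr *= (1. / (1. + self.decay * K.cast(self.iterations,
                                                  K.dtype(self.decay))))
        moments = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        self.weights = [self.iterations] + moments
        for p, g, m in zip(params, grads, moments):
            mult = self.lr_multipliers.get(p.name, 1.0)  # default: base lr
            v = self.momentum * m - lr * mult * g        # momentum update
            self.updates.append(K.update(m, v))
            self.updates.append(K.update(p, p + v))
        return self.updates

For example, MultiSGD(lr=2e-5, momentum=0.9, lr_multipliers={'stage2_conv1/kernel:0': 4.0}) (a hypothetical weight name) would train the new stages faster than the pre-trained VGG layers.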

Prepare

  1. Install packages:

    Python 3.6, Keras 2.1.2, TensorFlow-GPU 1.3.0rc2, plus the other packages listed in requirements.txt. We haven't tested other platforms or other package versions.

  2. Download the MSCOCO dataset.

  3. Download the pre-trained models from Dropbox.

  4. Change the paths in the code according to your environment.
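
For example, in a fresh virtual environment on a CUDA-enabled machine (pins follow the versions above):

pip install keras==2.1.2 tensorflow-gpu==1.3.0rc2
pip install -r requirements.txt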

Run a Demo

python demo_image.py
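
The result tables below mention multi-scale plus flip testing; a simplified sketch of that scheme follows. The preprocessing, the scale list, the single-heatmap-output model, and the left/right channel swap are illustrative assumptions, not the repo's exact API.

import cv2
import numpy as np

def multi_scale_flip_heatmaps(model, img, scales=(0.5, 1.0, 1.5, 2.0),
                              flip_pairs=()):
    """Average heatmaps over several input scales and a horizontal flip."""
    h, w = img.shape[:2]
    acc = None
    for s in scales:
        inp = cv2.resize(img, None, fx=s, fy=s)
        x = inp[None, ...].astype(np.float32) / 256.0 - 0.5  # assumed scaling
        hm = model.predict(x)[0]
        # Flip the input, flip the heatmaps back, swap mirrored joints.
        hm_f = model.predict(x[:, :, ::-1, :])[0][:, ::-1, :]
        for a, b in flip_pairs:  # e.g., (left wrist, right wrist) channels
            hm_f[..., [a, b]] = hm_f[..., [b, a]]
        hm = cv2.resize((hm + hm_f) / 2.0, (w, h))  # back to input resolution
        acc = hm if acc is None else acc + hm
    return acc / len(scales)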

Evaluation Steps

For now, the corresponding code is pure Python without multiprocessing.

python testing/evaluation.py
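
Under the hood, COCO keypoint evaluation relies on pycocotools and produces the AP/AR tables shown below. A minimal sketch (file paths are assumptions):

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/person_keypoints_val2017.json')  # ground truth
coco_dt = coco_gt.loadRes('our_keypoint_results.json')       # our detections
ev = COCOeval(coco_gt, coco_dt, iouType='keypoints')
ev.evaluate()
ev.accumulate()
ev.summarize()  # prints the AP/AR lines shown in the tables below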

Results on the MSCOCO 2017 Dataset

Results on MSCOCO 2017 validation subset (model trained without val data, + focal L2 loss, default size 368, 4 scales + flip)

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.607
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.817
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.661
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.581
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.652
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.647
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.837
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.692
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.600
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.717

Update

We have tried new network structures.

Results of posenet/model3 on MSCOCO 2017 validation subset (model trained with val data, + focal L2 loss, default size 368, 4 scales + flip).

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.622
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.828
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.674
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.594
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.669
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.659
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.844
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.706
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.613
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.730

Results on MSCOCO 2017 test-dev subset (model trained with val data, + focal L2 loss, default size 368, 8 scales + flip)

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.599
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.825
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.647
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.580
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.634
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.642
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.848
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.686
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.598
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.705

According to our results, the performance of posenet/model3 in this repo is similar to CMU-Net (the cascaded CNN used in CMU-Pose), which suggests that merely merging feature maps with different receptive fields at low resolution (stride=8) does not help much (without offset regression). The limited capacity of the network is also a bottleneck of estimation accuracy.

FYI

New pre-trained IMHN models achieving over 0.69 AP on the MSCOCO test-dev dataset will be shared once the review is done.

News!

Recently, we were lucky to have time and machines to spare, so we revisited our previous work. We achieved more accurate results after adopting a more powerful network and higher-resolution heatmaps (stride=4). We have also tried enhanced models with body-part representations, different loss functions, and different training parameters.

Please also refer to our new repo: Improved-Body-Parts (highly recommended)

[Figure: improved results]

Results on MSCOCO 2017 test-dev subset (focal L2 loss, default size 512, 5 scales + flip)

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.685
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.867
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.749
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.664
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.719
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.728
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.892
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.782
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.688
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.784

Training Steps

Before training, prepare the training data with training/coco_masks_hdf5.py.
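
A rough sketch of the kind of packing such a script performs, writing images and the corresponding unlabeled-person masks into one HDF5 file (dataset names and layout here are assumptions):

import json
import cv2
import h5py
import numpy as np

samples = []  # (image_path, mask, meta) triples built from COCO annotations (assumed)

with h5py.File('coco_train_data.h5', 'w') as f:
    for i, (img_path, mask, meta) in enumerate(samples):
        img = cv2.imread(img_path)  # BGR uint8
        grp = f.create_group('datum_%08d' % i)
        grp.create_dataset('image', data=img, compression='gzip')
        grp.create_dataset('mask', data=mask.astype(np.uint8), compression='gzip')
        grp.attrs['meta'] = json.dumps(meta)  # keypoints, scale, etc.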

  • The training code is available:

python training/train_pose.py

Notice: change the sample slicing ratios between the GPUs in training/train_pose.py as you want.
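
Several result tables above mention the focal L2 loss from our paper, which down-weights pixels that are already well predicted, analogous to focal loss for classification. Below is a numpy sketch of the idea; the threshold and the constants alpha, beta, gamma are illustrative, and the exact form in the paper and trained models may differ.

import numpy as np

def focal_l2_loss(pred, gt, alpha=0.1, beta=0.02, gamma=2.0):
    """L2 loss scaled so that easy (well-predicted) pixels contribute less.

    pred, gt: heatmaps of the same shape with values in [0, 1].
    """
    # How "correct" each pixel already is: near the Gaussian peak the
    # prediction should be high; on the background it should be low.
    st = np.where(gt > 0.01, pred - alpha, 1.0 - pred - beta)
    factor = np.abs(1.0 - st) ** gamma  # small factor for easy pixels
    return np.mean(factor * (pred - gt) ** 2)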

Main Referenced Repositories

Citation

If this work helps your research, please cite the corresponding paper:

@inproceedings{li2020simple,
  title={Simple pose: Rethinking and improving a bottom-up approach for multi-person pose estimation},
  author={Li, Jia and Su, Wen and Wang, Zengfu},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={34},
  number={07},
  pages={11354--11361},
  year={2020}
}