
stefanopini / simple-HRNet

License: GPL-3.0
Multi-person Human Pose Estimation with HRNet in PyTorch

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to simple-HRNet

pose-estimation-3d-with-stereo-camera
This demo uses a deep neural network and two generic cameras to perform 3D pose estimation.
Stars: ✭ 40 (-86.62%)
Mutual labels:  human-pose-estimation, yolov3
Pytorch Yolo V3
A PyTorch implementation of the YOLO v3 object detection algorithm
Stars: ✭ 3,148 (+952.84%)
Mutual labels:  yolov3
YOLOv3-Cloud-Tutorial
Everything you need in order to get YOLOv3 up and running in the cloud. Learn to train your custom YOLOv3 object detector in the cloud for free!
Stars: ✭ 68 (-77.26%)
Mutual labels:  yolov3
3dmppe rootnet release
Official PyTorch implementation of "Camera Distance-aware Top-down Approach for 3D Multi-person Pose Estimation from a Single RGB Image", ICCV 2019
Stars: ✭ 276 (-7.69%)
Mutual labels:  human-pose-estimation
Mobilenetv2 Yolov3
yolov3 with mobilenetv2 and efficientnet
Stars: ✭ 258 (-13.71%)
Mutual labels:  yolov3
V2v Posenet release
Official Torch7 implementation of "V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map", CVPR 2018
Stars: ✭ 286 (-4.35%)
Mutual labels:  human-pose-estimation
computer-vision-dojo
This is a repository to learn and get more computer vision skills, make robotics projects integrating the computer vision as a perception tool and create a lot of awesome advanced controllers for the robots of the future.
Stars: ✭ 15 (-94.98%)
Mutual labels:  yolov3
Deep High Resolution Net.pytorch
The project is an official implementation of our CVPR2019 paper "Deep High-Resolution Representation Learning for Human Pose Estimation"
Stars: ✭ 3,521 (+1077.59%)
Mutual labels:  human-pose-estimation
Semgcn
The Pytorch implementation for "Semantic Graph Convolutional Networks for 3D Human Pose Regression" (CVPR 2019).
Stars: ✭ 290 (-3.01%)
Mutual labels:  human-pose-estimation
Yolov3 Tensorflow
Implement YOLOv3 with TensorFlow
Stars: ✭ 279 (-6.69%)
Mutual labels:  yolov3
Pose Residual Network Pytorch
Code for the Pose Residual Network introduced in 'MultiPoseNet: Fast Multi-Person Pose Estimation using Pose Residual Network' paper https://arxiv.org/abs/1807.04067
Stars: ✭ 277 (-7.36%)
Mutual labels:  human-pose-estimation
Mmdetection To Tensorrt
convert mmdetection model to tensorrt, support fp16, int8, batch input, dynamic shape etc.
Stars: ✭ 262 (-12.37%)
Mutual labels:  yolov3
Fast Human Pose Estimation.pytorch
Official pytorch Code for CVPR2019 paper "Fast Human Pose Estimation" https://arxiv.org/abs/1811.05419
Stars: ✭ 290 (-3.01%)
Mutual labels:  human-pose-estimation
Expose
ExPose - EXpressive POse and Shape rEgression
Stars: ✭ 254 (-15.05%)
Mutual labels:  human-pose-estimation
Fastmot
High-performance multiple object tracking based on YOLO, Deep SORT, and optical flow
Stars: ✭ 284 (-5.02%)
Mutual labels:  yolov3
PP-YOLO
PP-YOLO, an object detection model implemented in PaddlePaddle
Stars: ✭ 59 (-80.27%)
Mutual labels:  yolov3
Pytorch Yolov4
PyTorch ,ONNX and TensorRT implementation of YOLOv4
Stars: ✭ 3,690 (+1134.11%)
Mutual labels:  yolov3
Pytorch 0.4 Yolov3
Yet another implementation of PyTorch 0.4.1 and YOLOv3 on Python 3
Stars: ✭ 284 (-5.02%)
Mutual labels:  yolov3
Tensorrt
TensorRT-7 network library covering common object detection, keypoint detection, face detection, OCR, etc.; supports training on your own data
Stars: ✭ 294 (-1.67%)
Mutual labels:  yolov3
Posefix release
Official TensorFlow implementation of "PoseFix: Model-agnostic General Human Pose Refinement Network", CVPR 2019
Stars: ✭ 296 (-1%)
Mutual labels:  human-pose-estimation

Multi-person Human Pose Estimation with HRNet in PyTorch

This is an unofficial implementation of the paper Deep High-Resolution Representation Learning for Human Pose Estimation.
The code is a simplified version of the official code, with ease of use in mind.

The code is fully compatible with the official pre-trained weights, and the results are the same as those of the original implementation (with only slight differences on GPU due to CUDA). It supports both Windows and Linux.

This repository provides:

  • A simple HRNet implementation in PyTorch (>=1.0) - compatible with official weights (pose_hrnet_*).
  • A simple class (SimpleHRNet) that loads the HRNet network for human pose estimation, loads the pre-trained weights, and makes human pose predictions on a single image or a batch of images.
  • NEW Support for "SimpleBaselines" model based on ResNet - compatible with official weights (pose_resnet_*).
  • NEW Support for multi-GPU inference.
  • NEW Option to use YOLOv3-tiny (faster, but less accurate, person detection).
  • NEW Options for retrieving the YOLO bounding boxes and the HRNet heatmaps.
  • Multi-person support with YOLOv3 (enabled by default).
  • A reference code that runs a live demo reading frames from a webcam or a video file.
  • Relatively simple code for training and testing the HRNet network.
  • A specific script for training the network on the COCO dataset.

If you are interested in HigherHRNet, please look at simple-HigherHRNet

Examples

Class usage

import cv2
from SimpleHRNet import SimpleHRNet

# HRNet-W48 (c=48), 17 COCO joints, official pre-trained weights
model = SimpleHRNet(48, 17, "./weights/pose_hrnet_w48_384x288.pth")
image = cv2.imread("image.png", cv2.IMREAD_COLOR)

# Predict the joints for all the people detected in the image
joints = model.predict(image)

The most useful parameters of the `__init__` function are:

  • c - number of channels (HRNet: 32 or 48; PoseResNet: the ResNet size)
  • nof_joints - number of joints (COCO: 17, MPII: 16)
  • checkpoint_path - path of the (official) weights to be loaded
  • model_name - 'HRNet' or 'PoseResNet'
  • resolution - image resolution; it depends on the loaded weights
  • multiperson - enable multi-person prediction
  • return_heatmaps - the `predict` method also returns the heatmaps
  • return_bounding_boxes - the `predict` method also returns the bounding boxes (useful in conjunction with `multiperson`)
  • max_batch_size - maximum batch size used in HRNet inference
  • device - device ('cpu' or 'cuda')
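
The optional parameters can be passed as keyword arguments. The snippet below is a minimal sketch: the keyword names follow the list above, while the (384, 288) resolution value and the exact structure of the value returned by `predict` are assumptions to be checked against the class docstring.

import cv2
import torch
from SimpleHRNet import SimpleHRNet

# Multi-person HRNet-W48 that returns the person bounding boxes alongside the joints.
# The keyword names match the parameters listed above; the (384, 288) resolution
# is assumed from the weight file name.
model = SimpleHRNet(
    48, 17, "./weights/pose_hrnet_w48_384x288.pth",
    model_name="HRNet",
    resolution=(384, 288),
    multiperson=True,
    return_bounding_boxes=True,
    max_batch_size=16,
    device=torch.device("cuda"),
)

image = cv2.imread("image.png", cv2.IMREAD_COLOR)
# With return_bounding_boxes=True, predict also returns the bounding boxes
# (see the class docstring for the exact return structure).
output = model.predict(image)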

Running the live demo

From a connected camera:

python scripts/live-demo.py --camera_id 0

From a saved video:

python scripts/live-demo.py --filename video.mp4

For help:

python scripts/live-demo.py --help
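
Under the hood, the live demo reads frames and runs the SimpleHRNet class on each one. Below is a minimal sketch of that loop using only the class-usage API shown above; drawing the predicted joints is omitted.

import cv2
from SimpleHRNet import SimpleHRNet

model = SimpleHRNet(48, 17, "./weights/pose_hrnet_w48_384x288.pth")

cap = cv2.VideoCapture(0)  # camera id 0, as with --camera_id 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    joints = model.predict(frame)  # per-frame pose estimation
    # ... draw or store the predicted joints here ...
    cv2.imshow("live demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()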

Extracting keypoints:

From a saved video:

python scripts/extract-keypoints.py --filename video.mp4

For help:

python scripts/extract-keypoints.py --help

Running the training script

python scripts/train_coco.py

For help:

python scripts/train_coco.py --help

Installation instructions

  • Clone the repository
    git clone https://github.com/stefanopini/simple-HRNet.git

  • Install the required packages
    pip install -r requirements.txt

  • Download the official pre-trained weights from https://github.com/leoxiaobin/deep-high-resolution-net.pytorch
    Direct links (official Drive folder, official OneDrive folder).

    Remember to set the parameters of SimpleHRNet accordingly (in particular c, nof_joints, and resolution).

  • For multi-person support:

    • Get YOLOv3:
      • Clone YOLOv3 in the folder ./models/detectors and change the folder name from PyTorch-YOLOv3 to yolo
        OR
      • Update git submodules
        git submodule update --init --recursive
    • Install YOLOv3 required packages
      pip install -r requirements.txt (from folder ./models/detectors/yolo)
    • Download the pre-trained weights by running the script download_weights.sh from the weights folder
  • (Optional) Download the COCO dataset and save it in ./datasets/COCO

  • Your folders should look like:

    simple-HRNet
    ├── datasets                (datasets - for training only)
    │  └── COCO                 (COCO dataset)
    ├── losses                  (loss functions)
    ├── misc                    (misc)
    │  └── nms                  (CUDA nms module - for training only)
    ├── models                  (pytorch models)
    │  └── detectors            (people detectors)
    │    └── yolo               (PyTorch-YOLOv3 repository)
    │      ├── ...
    │      └── weights          (YOLOv3 weights)
    ├── scripts                 (scripts)
    ├── testing                 (testing code)
    ├── training                (training code)
    └── weights                 (HRNet weights)
    
  • If you want to run the training script on COCO (scripts/train_coco.py), you have to build the nms module first.
    Please note that a Linux machine with CUDA is currently required (a quick CUDA sanity check follows these instructions). Build it with either:

    • cd misc; make or
    • cd misc/nms; python setup_linux.py build_ext --inplace

    You may need to add the ./misc/nms directory to the PYTHONPATH variable:
    export PYTHONPATH="<path-to-simple-HRNet>/misc/nms:$PYTHONPATH"
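
A quick way to confirm that CUDA is visible to PyTorch before building the nms extension (just a sanity check; it does not replace the build steps above):

import torch

# The nms extension build requires a CUDA-enabled Linux machine.
print(torch.cuda.is_available())  # should print True
print(torch.version.cuda)         # CUDA version PyTorch was built with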
