
Daniil-Osokin / Lightweight Human Pose Estimation 3d Demo.pytorch

License: Apache-2.0
Real-time 3D multi-person pose estimation demo in PyTorch. OpenVINO backend can be used for fast inference on CPU.


Projects that are alternatives of or similar to Lightweight Human Pose Estimation 3d Demo.pytorch

Lightweight Human Pose Estimation.pytorch
Fast and accurate human pose estimation in PyTorch. Contains implementation of "Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose" paper.
Stars: ✭ 958 (+189.43%)
Mutual labels:  lightweight, human-pose-estimation, real-time
Gccpm Look Into Person Cvpr19.pytorch
Fast and accurate single-person pose estimation, ranked 10th at CVPR'19 LIP challenge. Contains implementation of "Global Context for Convolutional Pose Machines" paper.
Stars: ✭ 137 (-58.61%)
Mutual labels:  lightweight, human-pose-estimation, real-time
Openpose
OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation
Stars: ✭ 22,892 (+6816.01%)
Mutual labels:  human-pose-estimation, real-time
Trt pose
Real-time pose estimation accelerated with NVIDIA TensorRT
Stars: ✭ 525 (+58.61%)
Mutual labels:  human-pose-estimation, real-time
Deepstream pose estimation
This is a sample DeepStream application to demonstrate a human pose estimation pipeline.
Stars: ✭ 168 (-49.24%)
Mutual labels:  human-pose-estimation, real-time
Keras realtime multi Person pose estimation
Keras version of Realtime Multi-Person Pose Estimation project
Stars: ✭ 728 (+119.94%)
Mutual labels:  human-pose-estimation, real-time
Openpose unity plugin
OpenPose's Unity Plugin for Unity users
Stars: ✭ 446 (+34.74%)
Mutual labels:  human-pose-estimation, real-time
Shelfnet Human Pose Estimation
Fast and accurate Human Pose Estimation using ShelfNet with PyTorch
Stars: ✭ 95 (-71.3%)
Mutual labels:  human-pose-estimation, real-time
Openpose train
Training repository for OpenPose
Stars: ✭ 381 (+15.11%)
Mutual labels:  human-pose-estimation, real-time
Mobilepose Pytorch
Light-weight Single Person Pose Estimator
Stars: ✭ 427 (+29%)
Mutual labels:  lightweight, real-time
Pytorch realtime multi Person pose estimation
Pytorch version of Realtime Multi-Person Pose Estimation project
Stars: ✭ 205 (-38.07%)
Mutual labels:  human-pose-estimation, real-time
jeelizGlanceTracker
JavaScript/WebGL lib: detect if the user is looking at the screen or not from the webcam video feed. Lightweight and robust to all lighting conditions. Great for play/pause videos if the user is looking or not, or for person detection. Link to live demo.
Stars: ✭ 68 (-79.46%)
Mutual labels:  lightweight, real-time
FastPose
pytorch realtime multi person keypoint estimation
Stars: ✭ 36 (-89.12%)
Mutual labels:  real-time, human-pose-estimation
Lightweight Segmentation
Lightweight models for real-time semantic segmentation(include mobilenetv1-v3, shufflenetv1-v2, igcv3, efficientnet).
Stars: ✭ 261 (-21.15%)
Mutual labels:  lightweight, real-time
Bebop
An extremely simple, fast, efficient, cross-platform serialization format
Stars: ✭ 305 (-7.85%)
Mutual labels:  real-time
Sc Crud Sample
Sample real-time CRUD inventory tracking app built with SocketCluster
Stars: ✭ 323 (-2.42%)
Mutual labels:  real-time
Monoport
Volumetric Human Teleportation (SIGGRAPH 2020 Real-Time Live) Monocular Real-Time Volumetric Performance Capture(ECCV 2020)
Stars: ✭ 296 (-10.57%)
Mutual labels:  real-time
Trace4j
A lightweight, annotation-based Java flow tracing tool
Stars: ✭ 302 (-8.76%)
Mutual labels:  lightweight
Wekan
The Open Source kanban (built with Meteor). Keep variable/table/field names camelCase. For translations, only add Pull Request changes to wekan/i18n/en.i18n.json , other translations are done at https://transifex.com/wekan/wekan only.
Stars: ✭ 17,648 (+5231.72%)
Mutual labels:  real-time
Perspective
A data visualization and analytics component, especially well-suited for large and/or streaming datasets.
Stars: ✭ 3,989 (+1105.14%)
Mutual labels:  real-time

Real-time 3D Multi-person Pose Estimation Demo

This repository contains a 3D multi-person pose estimation demo in PyTorch. The Intel OpenVINO™ backend can be used for fast inference on CPU. This demo is based on the Lightweight OpenPose and Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB papers. It detects the 2D coordinates of up to 18 keypoint types: ears, eyes, nose, neck, shoulders, elbows, wrists, hips, knees, and ankles, as well as their 3D coordinates. It was trained on the MS COCO and CMU Panoptic datasets and achieves 100 mm MPJPE (mean per joint position error) on the CMU Panoptic subset. This repository significantly overlaps with https://github.com/opencv/open_model_zoo/, but contains only the code necessary for the 3D human pose estimation demo.
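The MPJPE metric cited above is simply the average Euclidean distance between predicted and ground-truth joints. A minimal pure-Python sketch (an illustration of the metric, not the project's evaluation code):

```python
def mpjpe(pred_joints, gt_joints):
    """Mean per joint position error: the average Euclidean distance
    between predicted and ground-truth 3D joint positions, in the same
    units as the inputs (e.g. millimeters)."""
    assert len(pred_joints) == len(gt_joints) and pred_joints
    total = 0.0
    for (px, py, pz), (gx, gy, gz) in zip(pred_joints, gt_joints):
        total += ((px - gx) ** 2 + (py - gy) ** 2 + (pz - gz) ** 2) ** 0.5
    return total / len(pred_joints)
```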

The major part of this work was done by Mariia Ageeva, when she was the 🔝🚀🔥 intern at Intel.


Requirements

  • Python 3.5 (or above)
  • CMake 3.10 (or above)
  • C++ Compiler (g++ or MSVC)
  • OpenCV 4.0 (or above)

  • [Optional] Intel OpenVINO™ for fast inference on CPU
  • [Optional] NVIDIA TensorRT for fast inference on Jetson

Prerequisites

  1. Install the requirements:
    pip install -r requirements.txt
  2. Build the pose_extractor module:
    python setup.py build_ext
  3. Add the build folder to PYTHONPATH:
    export PYTHONPATH=pose_extractor/build/:$PYTHONPATH
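The PYTHONPATH step can also be done from inside Python before importing the module; a minimal sketch, assuming `pose_extractor/build/` is the directory produced by the build step:

```python
import os
import sys

def add_build_dir(build_dir="pose_extractor/build/"):
    """Make the compiled pose_extractor module importable in the current
    process, equivalent to
    `export PYTHONPATH=pose_extractor/build/:$PYTHONPATH`."""
    build_dir = os.path.abspath(build_dir)
    if build_dir not in sys.path:
        # prepend so it takes precedence over other entries
        sys.path.insert(0, build_dir)
    return build_dir
```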

Pre-trained model

The pre-trained model is available on Google Drive.

Running

To run the demo, pass the path to the pre-trained checkpoint and a camera ID (or a path to a video file):

python demo.py --model human-pose-estimation-3d.pth --video 0

The camera can capture the scene from different viewpoints, so for correct scene visualization, please pass the camera extrinsics and focal length with the --extrinsics and --fx options respectively (a sample extrinsics format can be found in the data folder). If no camera parameters are provided, the demo uses default ones.
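The extrinsics and focal length parameterize a standard pinhole camera model. A minimal sketch of how such parameters relate a 3D point to image coordinates (an illustration of the model, not the demo's own code; the actual file format is shown in the data folder):

```python
def project_point(point_3d, R, t, fx, fy, cx, cy):
    """Project a 3D world point into the image using the camera
    extrinsics (rotation R, translation t) and intrinsics
    (focal lengths fx, fy, principal point cx, cy)."""
    # extrinsics: world -> camera coordinates, X_cam = R @ X_world + t
    x, y, z = (sum(R[row][i] * point_3d[i] for i in range(3)) + t[row]
               for row in range(3))
    # pinhole projection: camera -> pixel coordinates
    return fx * x / z + cx, fy * y / z + cy
```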

Inference with OpenVINO

To run with OpenVINO, it is necessary to convert the checkpoint to OpenVINO format:

  1. Set OpenVINO environment variables:
    source <OpenVINO_INSTALL_DIR>/bin/setupvars.sh
    
  2. Convert checkpoint to ONNX:
    python scripts/convert_to_onnx.py --checkpoint-path human-pose-estimation-3d.pth
    
  3. Convert to OpenVINO format:
    python <OpenVINO_INSTALL_DIR>/deployment_tools/model_optimizer/mo.py --input_model human-pose-estimation-3d.onnx --input=data --mean_values=data[128.0,128.0,128.0] --scale_values=data[255.0,255.0,255.0] --output=features,heatmaps,pafs
    
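The --mean_values and --scale_values flags fold the demo's input normalization into the converted model; per channel, the effect on a pixel value can be sketched as (a minimal illustration of what Model Optimizer bakes in):

```python
def normalize_pixel(value, mean=128.0, scale=255.0):
    # Model Optimizer applies (value - mean) / scale per channel,
    # matching --mean_values=data[128.0, ...] and
    # --scale_values=data[255.0, ...] in the command above
    return (value - mean) / scale
```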

To run the demo with OpenVINO inference, pass the --use-openvino option and specify the device to infer on:

python demo.py --model human-pose-estimation-3d.xml --device CPU --use-openvino --video 0

Inference with TensorRT

To run with TensorRT, it is necessary to install it properly. Please follow the official guide; the following steps worked for me:

  1. Install CUDA 11.1.
  2. Install cuDNN 8 (runtime library, then developer).
  3. Install nvidia-tensorrt:
    python -m pip install nvidia-pyindex
    pip install nvidia-tensorrt==7.2.1.6
    
  4. Install torch2trt.

Convert the checkpoint to TensorRT format:

python scripts/convert_to_trt.py --checkpoint-path human-pose-estimation-3d.pth

TensorRT does not support dynamic reshaping of the network input size. Make sure you set the proper network input height and width with the --height and --width options during conversion (otherwise there will be no detections). The default values work for typical 16:9 video (1280x720, 1920x1080). You can check the network input size with print(scaled_img.shape) in demo.py.
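Since the input height is fixed, the input width follows from the frame's aspect ratio. A sketch of that computation, where the network height of 256 and the rounding of the width to a multiple of 8 are both assumptions for illustration (check print(scaled_img.shape) in demo.py for the real values):

```python
import math

def scaled_input_width(frame_w, frame_h, net_h, multiple=8):
    """Width of the network input after scaling a frame to height net_h
    while preserving the aspect ratio, rounded up to a stride multiple
    (the stride value is an assumption here)."""
    return int(math.ceil(frame_w * net_h / frame_h / multiple)) * multiple
```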

To run the demo with TensorRT inference, pass the --use-tensorrt option:

python demo.py --model human-pose-estimation-3d-trt.pth --use-tensorrt --video 0

I have observed a ~10x network inference speedup on an RTX 2060 (compared with the default PyTorch 1.6.0+cu101 inference).
