
YuliangXiu / Poseflow

PoseFlow: Efficient Online Pose Tracking (BMVC'18)

Projects that are alternatives of or similar to Poseflow

Openpose
OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation
Stars: ✭ 22,892 (+6836.97%)
Mutual labels:  opencv, pose-estimation, real-time
Alphapose
Real-Time and Accurate Full-Body Multi-Person Pose Estimation&Tracking System
Stars: ✭ 5,697 (+1626.36%)
Mutual labels:  pose-estimation, tracking, realtime
Mobilepose Pytorch
Light-weight Single Person Pose Estimator
Stars: ✭ 427 (+29.39%)
Mutual labels:  pose-estimation, real-time, realtime
FastPose
pytorch realtime multi person keypoint estimation
Stars: ✭ 36 (-89.09%)
Mutual labels:  real-time, realtime, pose-estimation
FlipED
An LMS built specifically for Thailand's Education 4.0 system.
Stars: ✭ 24 (-92.73%)
Mutual labels:  real-time, realtime
transit
Massively real-time city transit streaming application
Stars: ✭ 20 (-93.94%)
Mutual labels:  real-time, realtime
ChatService
ChatService (SignalR).
Stars: ✭ 26 (-92.12%)
Mutual labels:  real-time, realtime
o1heap
Constant-complexity deterministic memory allocator (heap) for hard real-time high-integrity embedded systems
Stars: ✭ 119 (-63.94%)
Mutual labels:  real-time, realtime
jeelizPupillometry
Real-time pupillometry in the web browser using a 4K webcam video feed processed by this WebGL/Javascript library. 2 demo experiments are included.
Stars: ✭ 78 (-76.36%)
Mutual labels:  tracking, real-time
SiamFC-tf
A TensorFlow implementation of the SiamFC tracker; use it with your own camera and video, or integrate it into your own project (real-time object tracking with a wrapped API that can be integrated into your own projects)
Stars: ✭ 22 (-93.33%)
Mutual labels:  tracking, real-time
Siamfc Tensorflow
A TensorFlow implementation of the SiamFC tracker
Stars: ✭ 325 (-1.52%)
Mutual labels:  tracking, real-time
realtime-object-detection
Detects objects in images/streaming video
Stars: ✭ 16 (-95.15%)
Mutual labels:  real-time, realtime
accelerator-core-ios
Syntax sugar of OpenTok iOS SDK with Audio/Video communication including screen sharing
Stars: ✭ 30 (-90.91%)
Mutual labels:  real-time, realtime
signalr-client
SignalR client library built on top of @aspnet/signalr. This gives you more features and is easier to use.
Stars: ✭ 48 (-85.45%)
Mutual labels:  real-time, realtime
traffic
Massively real-time traffic streaming application
Stars: ✭ 25 (-92.42%)
Mutual labels:  real-time, realtime
rmpe dataset server
Realtime Multi-Person Pose Estimation data server. Used as a training and validation data provider in training process.
Stars: ✭ 14 (-95.76%)
Mutual labels:  realtime, pose-estimation
Realtime object detection
Plug and Play Real-Time Object Detection App with Tensorflow and OpenCV. No Bugs No Worries. Enjoy!
Stars: ✭ 260 (-21.21%)
Mutual labels:  opencv, real-time
instant-ngp
Instant neural graphics primitives: lightning fast NeRF and more
Stars: ✭ 1,863 (+464.55%)
Mutual labels:  real-time, realtime
Sc Crud Sample
Sample real-time CRUD inventory tracking app built with SocketCluster
Stars: ✭ 323 (-2.12%)
Mutual labels:  real-time, realtime
Android Ddp
[UNMAINTAINED] Meteor's Distributed Data Protocol (DDP) for clients on Android
Stars: ✭ 271 (-17.88%)
Mutual labels:  real-time, realtime

Pose Flow

Official implementation of Pose Flow: Efficient Online Pose Tracking.

Results on PoseTrack Challenge validation set:

  1. Task 2: Multi-Person Pose Estimation (mAP)

Method | Head mAP | Shoulder mAP | Elbow mAP | Wrist mAP | Hip mAP | Knee mAP | Ankle mAP | Total mAP
------ | -------- | ------------ | --------- | --------- | ------- | -------- | --------- | ---------
Detect-and-Track (FAIR) | 67.5 | 70.2 | 62.0 | 51.7 | 60.7 | 58.7 | 49.8 | 60.6
AlphaPose | 66.7 | 73.3 | 68.3 | 61.1 | 67.5 | 67.0 | 61.3 | 66.5
  2. Task 3: Pose Tracking (MOTA)

Method | Head MOTA | Shoulder MOTA | Elbow MOTA | Wrist MOTA | Hip MOTA | Knee MOTA | Ankle MOTA | Total MOTA | Total MOTP | Speed (FPS)
------ | --------- | ------------- | ---------- | ---------- | -------- | --------- | ---------- | ---------- | ---------- | -----------
Detect-and-Track (FAIR) | 61.7 | 65.5 | 57.3 | 45.7 | 54.3 | 53.1 | 45.7 | 55.2 | 61.5 | Unknown
PoseFlow (DeepMatch) | 59.8 | 67.0 | 59.8 | 51.6 | 60.0 | 58.4 | 50.5 | 58.3 | 67.8 | 8
PoseFlow (OrbMatch) | 59.0 | 66.8 | 60.0 | 51.8 | 59.4 | 58.4 | 50.3 | 58.0 | 62.2 | 24

Latest Features

  • Dec 2018: PoseFlow (General Version) released! Supports ANY DATASET and visualization of pose tracking results.
  • Oct 2018: Support for generating correspondence files with ORB (OpenCV): 3X faster, with no need to compile the DeepMatching library (a rough Python sketch of ORB matching follows below).
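For reference, here is a minimal sketch of how ORB-based correspondences between adjacent frames can be computed with plain OpenCV. It is illustrative only and not the project's actual matching.py implementation; the function name and parameters are assumptions.

# Illustrative only: ORB keypoint matching between two adjacent frames.
# Not PoseFlow's matching.py; function name and defaults are assumptions.
import cv2

def orb_correspondences(frame_a_path, frame_b_path, max_features=1000):
    img_a = cv2.imread(frame_a_path, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(frame_b_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=max_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Hamming distance suits ORB's binary descriptors; crossCheck keeps
    # only mutually-best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)

    # Return ((x1, y1), (x2, y2)) point pairs, best matches first.
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt)
            for m in sorted(matches, key=lambda m: m.distance)]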

Requirements

  • Python 2.7.13
  • OpenCV 3.4.2.16
  • OpenCV-contrib 3.4.2.16
  • tqdm 4.19.8
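The PyPI package names are not spelled out above; assuming the standard opencv-python and opencv-contrib-python wheels provide these versions, the pinned dependencies can be installed with:

pip install opencv-python==3.4.2.16 opencv-contrib-python==3.4.2.16 tqdm==4.19.8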

Installation

  1. Download PoseTrack Dataset from PoseTrack to AlphaPose/PoseFlow/posetrack_data/
  2. (Optional) Use DeepMatching to extract dense correspondences between adjacent frames in every video; please refer to DeepMatching Compile Error to compile DeepMatching correctly
pip install -r requirements.txt

cd deepmatching
make clean all
make
cd ..

For Any Datasets (General Version)

  1. Use AlphaPose to generate multi-person pose estimation results.
# pytorch version
python demo.py --indir ${image_dir} --outdir ${results_dir}

# torch version
./run.sh --indir ${image_dir} --outdir ${results_dir}
  2. Run pose tracking
# pytorch version
python tracker-general.py --imgdir ${image_dir} \
                          --in_json ${results_dir}/alphapose-results.json \
                          --out_json ${results_dir}/alphapose-results-forvis-tracked.json \
                          --visdir ${render_dir}

# torch version
python tracker-general.py --imgdir ${image_dir} \
                          --in_json ${results_dir}/POSE/alpha-pose-results-forvis.json \
                          --out_json ${results_dir}/POSE/alpha-pose-results-forvis-tracked.json \
                          --visdir ${render_dir}
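The tracked JSON can then be consumed directly for analysis or visualization. The schema assumed below (a dict mapping image names to lists of person entries carrying an "idx" track id and "keypoints") is an assumption about the "-forvis-tracked" output, not a documented contract; a minimal inspection script might look like:

# Hypothetical sketch: group tracked poses by person identity.
# The field names "idx" and "keypoints" are assumptions about the output schema.
import json
from collections import defaultdict

def tracks_by_identity(tracked_json_path):
    with open(tracked_json_path) as f:
        frames = json.load(f)  # assumed: {image_name: [person_entry, ...]}

    tracks = defaultdict(list)  # track id -> list of (frame, keypoints)
    for frame_name, people in frames.items():
        for person in people:
            tracks[person.get("idx")].append((frame_name, person.get("keypoints")))
    return tracks

if __name__ == "__main__":
    tracks = tracks_by_identity("alphapose-results-forvis-tracked.json")
    print("%d identities tracked" % len(tracks))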

For PoseTrack Dataset Evaluation (Paper Baseline)

  1. Use AlphaPose to generate multi-person pose estimation results on videos, with output format like alpha-pose-results-sample.json.
  2. Use DeepMatching or ORB to generate correspondence files.
# Generate correspondences by DeepMatching
# (More Robust but Slower)
python matching.py --orb=0 

or

# Generate correspondences by Orb
# (Faster but Less Robust)
python matching.py --orb=1
  3. Run pose tracking
python tracker-baseline.py --dataset=val/test  --orb=1/0
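For example, to track on the validation split using ORB correspondences (pass --orb=0 to use DeepMatching instead, or --dataset=test for the test split):

python tracker-baseline.py --dataset=val --orb=1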
  4. Evaluation

The original poseval repository has instructions on how to convert annotation files from MAT to JSON.

Evaluate pose tracking results on validation dataset:

git clone https://github.com/leonid-pishchulin/poseval.git --recursive
cd poseval/py && export PYTHONPATH=$PWD/../py-motmetrics:$PYTHONPATH
cd ../../
python poseval/py/evaluate.py --groundTruth=./posetrack_data/annotations/val \
                    --predictions=./${track_result_dir}/ \
                    --evalPoseTracking --evalPoseEstimation

Citation

Please cite this paper in your publications if it helps your research:

@inproceedings{xiu2018poseflow,
  author = {Xiu, Yuliang and Li, Jiefeng and Wang, Haoyu and Fang, Yinghong and Lu, Cewu},
  title = {{Pose Flow}: Efficient Online Pose Tracking},
  booktitle = {BMVC},
  year = {2018}
}