JiageWang / Openpose-based-GUI-for-Realtime-Pose-Estimate-and-Action-Recognition

License: other
A GUI based on the Python API of OpenPose on Windows, using CUDA 10 and cuDNN 7. It supports body, hand, and face keypoint estimation with data saving. Real-time gesture recognition is implemented with a two-layer neural network trained on skeleton data collected from the GUI.


GUI-for-Pose-Estimate-and-Action-Recognition


Introduction

  • This is a GUI program for pose estimation and action recognition based on OpenPose.
  • You can visualize key-points on an image or camera feed and save the key-point data (in .npy format) at the same time.
  • You can train a deep learning model for action (or gesture, or emotion) recognition on the data collected with this program.
  • My platform is Windows 10. I have compiled OpenPose with the Python API, and the compiled binary files are provided below, so you don't have to compile OpenPose from scratch.

Python Dependencies

  • numpy==1.14.2
  • PyQt5==5.11.3
  • opencv-python==4.1.0.25
  • torch==1.0.1 (only for gesture recognition)

Installation

  • Install CUDA 10 and cuDNN 7, or download them from my BaiduDisk (password: 4685).

  • Run models/getModels.bat to download the models, or get them from my BaiduDisk (password: rmkn), and put the models in the corresponding locations:

    (figure: model folder layout)

  • Download the 3rd-party DLLs from my BaiduDisk (password: 64sg) and unzip them into your 3rdparty folder.

Usage

  1. (save icon): save the current result
  2. (interval-save icon): save a result at every interval while the camera or a video is open
  3. (camera icon): open the camera
  4. (settings icon): show the settings view
  5. (file-tree icon): show the file-tree view

Setting

  1. First, select which kinds of key-points you want to visualize or collect by ticking the checkboxes (Body, Hand, Face).
  2. The threshold of each of the three models can be controlled by dragging the corresponding slider.
  3. Change the save interval.
  4. Reduce the net resolution to use less GPU memory, at the cost of accuracy.
  5. Gesture recognition can only be used when the Hand checkbox is on. My model is only a two-layer MLP, and its data was collected with a front camera and the left hand, so it may have many limitations. You can train your own model and replace it.
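The bundled classifier is described only as a two-layer MLP over hand key-points. A minimal sketch of such a network in NumPy (the hidden size and class count here are illustrative assumptions, not the project's actual values) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

class TwoLayerMLP:
    """Sketch of a two-layer MLP for gesture classification.

    Input: 21 hand key-points x (x, y, score) = 63 features.
    Hidden size (64) and class count (5) are assumptions.
    """

    def __init__(self, n_in=63, n_hidden=64, n_classes=5):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_classes))
        self.b2 = np.zeros(n_classes)

    def forward(self, x):
        h = np.maximum(0, x @ self.W1 + self.b1)       # ReLU hidden layer
        logits = h @ self.W2 + self.b2
        exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
        return exp / exp.sum(axis=-1, keepdims=True)   # softmax probabilities

model = TwoLayerMLP()
hand = rng.random((1, 63))       # one flattened hand-keypoint frame
probs = model.forward(hand)
print(probs.shape)               # (1, 5)
```

A trained replacement would only need to keep the same input layout (one flattened hand frame per sample).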

TODO

  • action recognition

  • emotion recognition

Data format

You will get an output folder like the following. The counter starts at 0 when the program begins and increases automatically as images are saved.

(figure: output folder layout)

import numpy as np

data_body = np.load('body/0001_body.npy')
data_hand = np.load('hand/0001_hand.npy')
data_face = np.load('face/0001_face.npy')
print(data_body.shape)
# (1, 25, 3)  : person_num x key_points_num x x_y_score
print(data_hand.shape)
# (2, 1, 21, 3)  : left_right x person_num x key_points_num x x_y_score
print(data_face.shape)
# (1, 70, 3)  : person_num x key_points_num x x_y_score
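As an illustration of how one saved hand frame could be turned into a classifier input, the key-points can be flattened and centered on the wrist (this normalization scheme is an assumption for the sketch, not necessarily what the project's training script does):

```python
import numpy as np

def hand_feature(hand_array, hand=0):
    """Flatten one hand of a saved (2, 1, 21, 3) array into a 63-dim vector.

    hand=0/1 selects one of the two hand slots; x/y coordinates are
    shifted to the wrist (key-point 0) so the feature is translation
    invariant, and the score column is kept as-is. Illustrative only.
    """
    pts = hand_array[hand, 0]             # (21, 3): x, y, score
    xy = pts[:, :2] - pts[0, :2]          # center on the wrist
    return np.concatenate([xy, pts[:, 2:]], axis=1).ravel()  # (63,)

# Example with a dummy frame in the saved format
dummy = np.random.rand(2, 1, 21, 3)
vec = hand_feature(dummy)
print(vec.shape)  # (63,)
```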

Train your own model

  1. Collect data and make a folder for every class:

    (figure: training-data folder layout)

  2. Run python train.py -p C:\Users\Administrator\Desktop\自建数据集\hand to train your model (replace the path with your own dataset path; 自建数据集 means "self-built dataset").
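The internals of train.py are not shown here, but loading such a class-per-folder layout of .npy files into training arrays could be sketched as follows (folder names, file names, and shapes are assumptions for the demo):

```python
import os
import tempfile
import numpy as np

def load_dataset(root):
    """Load a class-per-folder tree of .npy key-point files.

    Each subfolder of `root` is one class; every .npy file in it becomes
    one flattened sample. Returns (X, y, class_names).
    """
    X, y = [], []
    class_names = sorted(
        d for d in os.listdir(root) if os.path.isdir(os.path.join(root, d))
    )
    for label, name in enumerate(class_names):
        folder = os.path.join(root, name)
        for fname in sorted(os.listdir(folder)):
            if fname.endswith('.npy'):
                X.append(np.load(os.path.join(folder, fname)).ravel())
                y.append(label)
    return np.stack(X), np.array(y), class_names

# Demo on a temporary dummy dataset with two hypothetical classes
with tempfile.TemporaryDirectory() as root:
    for cls in ('fist', 'palm'):
        os.makedirs(os.path.join(root, cls))
        for i in range(3):
            np.save(os.path.join(root, cls, f'{i:04d}_hand.npy'),
                    np.random.rand(2, 1, 21, 3))
    X, y, names = load_dataset(root)
    print(X.shape, y.shape, names)  # (6, 126) (6,) ['fist', 'palm']
```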

References

OpenPose
