
Idein / Chainer Pose Proposal Net

Licence: other
Chainer implementation of Pose Proposal Networks

Programming language: Python


chainer-pose-proposal-net

  • This is an (unofficial) implementation of Pose Proposal Networks in Chainer, including training and prediction tools.

License

Copyright (c) 2018 Idein Inc. & Aisin Seiki Co., Ltd. All rights reserved.

This project is licensed under the terms of the license (see the LICENSE file in the repository).

Training

Prepare Dataset

MPII

  • If you train with the COCO dataset, you can skip this section.
  • Access the MPII Human Pose Dataset site and jump to the Download page. Then download and extract both Images (12.9 GB) and Annotations (12.5 MB) to ~/work/dataset/mpii_dataset, for example.

Create mpii.json

We need to decode mpii_human_pose_v1_u12_1.mat to generate mpii.json, which will be used for training and for evaluating on the MPII test dataset.

$ sudo docker run --rm -v $(pwd):/work -v path/to/dataset:/mpii_dataset -w /work idein/chainer:4.5.0 python3 convert_mpii_dataset.py /mpii_dataset/mpii_human_pose_v1_u12_2/mpii_human_pose_v1_u12_1.mat /mpii_dataset/mpii.json

It will generate mpii.json at path/to/dataset, where path/to/dataset is the root directory of the MPII dataset, for example ~/work/dataset/mpii_dataset. For those who hesitate to use Docker, you may edit config.ini as necessary.
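After the conversion finishes, it is worth sanity-checking the generated file before starting a long training run. The helper below is a hypothetical sketch: the exact key layout of mpii.json is an assumption here, so inspect your own file and adjust as needed.

```python
import json

def count_annotations(path):
    """Return the number of annotation entries in an mpii.json-style file.

    Assumes the file is either a list of entries or a dict mapping
    names to lists of entries -- an assumption, not the project's spec.
    """
    with open(path) as f:
        data = json.load(f)
    if isinstance(data, list):
        return len(data)
    return sum(len(v) for v in data.values())
```

A quick `count_annotations("mpii.json")` returning zero (or raising) is a sign the conversion step went wrong.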

COCO

  • If you train with the MPII dataset, you can skip this section.
  • Access the COCO dataset site and jump to Dataset -> Download. Then download and extract 2017 Train images [118K/18GB], 2017 Val images [5K/1GB] and 2017 Train/Val annotations [241MB] to ~/work/dataset/coco_dataset, for example.

Running Training Scripts

OK let's begin!

$ cat begin_train.sh
cat config.ini
docker run --rm \
-v $(pwd):/work \
-v ~/work/dataset/mpii_dataset:/mpii_dataset \
-v ~/work/dataset/coco_dataset:/coco_dataset \
--name ppn_idein \
-w /work \
idein/chainer:5.1.0 \
python3 train.py
$ sudo bash begin_train.sh
  • The optional argument --runtime=nvidia may be required in some environments.
  • It will train a model whose base network is MobileNetV2 on the MPII dataset located at path/to/dataset on the host machine.
  • If we would like to train with the COCO dataset, edit a part of config.ini as follows:

before

# parts of config.ini
[dataset]
type = mpii

after

# parts of config.ini
[dataset]
type = coco
  • We can choose a ResNet-based network, as the original paper adopts. Edit a part of config.ini as follows:

before

[model_param]
model_name = mv2

after

[model_param]
# you may also choose resnet34 or resnet50
model_name = resnet18
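Both edits above can also be applied programmatically with Python's configparser instead of editing config.ini by hand; this is a minimal sketch that only touches the `[dataset]` and `[model_param]` options shown above and leaves the rest of your config.ini untouched.

```python
import configparser

def use_coco_resnet(path="config.ini"):
    """Switch config.ini to the COCO dataset and a ResNet18 backbone."""
    config = configparser.ConfigParser()
    config.read(path)
    config["dataset"]["type"] = "coco"                 # was: mpii
    config["model_param"]["model_name"] = "resnet18"   # was: mv2
    with open(path, "w") as f:
        config.write(f)
```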

Prediction

  • Very easy, all we have to do is, for example:
$ sudo bash run_predict.sh ./trained
  • If you would like to configure parameters or hide the bounding boxes, edit a part of config.ini as follows:
[predict]
# If `False` is set, bounding boxes other than human instances are hidden.
visbbox = True
# detection_thresh
detection_thresh = 0.15
# ignore humans whose number of keypoints is less than min_num_keypoints
min_num_keypoints = 1
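Conceptually, the two thresholds above act as a post-processing filter on the network's detections. The sketch below illustrates that filtering logic only; the function name and data layout are hypothetical, not the project's actual prediction code.

```python
def filter_detections(detections, detection_thresh=0.15, min_num_keypoints=1):
    """Keep detections above the confidence threshold with enough keypoints.

    Each detection is assumed to be a dict with a confidence 'score'
    and a list of visible 'keypoints' (an illustrative layout).
    """
    return [
        d for d in detections
        if d["score"] >= detection_thresh
        and len(d["keypoints"]) >= min_num_keypoints
    ]
```

Raising detection_thresh trades recall for fewer false positives; raising min_num_keypoints suppresses heavily occluded people.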

Demo: Realtime Pose Estimation

We tested on an Ubuntu 16.04 machine with a GTX 1080 (Ti) GPU.

Build Docker Image for Demo

We will build OpenCV from source to visualize the result on GUI.

$ cd docker/gpu
$ cat build.sh
docker build -t ppn .
$ sudo bash build.sh

Here is a result of ResNet18 trained with COCO running on a laptop PC.

Run video.py

  • Connect a USB camera that OpenCV can recognize.

  • Run video.py

$ python video.py ./trained

or

$ sudo bash run_video.sh ./trained

High Performance Version

  • To use the Static Subgraph Optimizations feature to accelerate inference, install Chainer 5.y.z and CuPy 5.y.z, e.g. 5.0.0 or 5.1.0.
  • Prepare a high-performance USB camera that captures more than 60 FPS.
  • Run high_speed.py instead of video.py.
  • Do not fall off your chair in surprise :D.

Appendix

Pre-trained Model

  • Without training, you can try our software by downloading a pre-trained model from our release page.

Our Demo

Actcast

Citation

Please cite the paper in your publications if it helps your research:

@InProceedings{Sekii_2018_ECCV,
  author = {Sekii, Taiki},
  title = {Pose Proposal Networks},
  booktitle = {The European Conference on Computer Vision (ECCV)},
  month = {September},
  year = {2018}
}