
YoungXIAO13 / Posefromshape

License: MIT
(BMVC 2019) PyTorch implementation of the paper "Pose from Shape: Deep Pose Estimation for Arbitrary 3D Objects"

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Posefromshape

Adversarial Pose Pytorch
A PyTorch implementation of adversarial pose estimation for multi-person
Stars: ✭ 67 (-45.97%)
Mutual labels:  pose-estimation
Pytorch pose proposal networks
Pytorch implementation of pose proposal networks
Stars: ✭ 93 (-25%)
Mutual labels:  pose-estimation
Aruco tracker
Aruco Markers for pose estimation
Stars: ✭ 111 (-10.48%)
Mutual labels:  pose-estimation
Convolutional Pose Machines Pytorch
PyTorch version of Convolutional Pose Machines
Stars: ✭ 77 (-37.9%)
Mutual labels:  pose-estimation
Dataset utilities
NVIDIA Dataset Utilities (NVDU)
Stars: ✭ 90 (-27.42%)
Mutual labels:  pose-estimation
Fingerpose
Finger pose classifier for hand landmarks detected by TensorFlow.js handpose model
Stars: ✭ 102 (-17.74%)
Mutual labels:  pose-estimation
Margipose
Stars: ✭ 64 (-48.39%)
Mutual labels:  pose-estimation
Chainer Pose Proposal Net
Chainer implementation of Pose Proposal Networks
Stars: ✭ 119 (-4.03%)
Mutual labels:  pose-estimation
Awesome Computer Vision
Awesome Resources for Advanced Computer Vision Topics
Stars: ✭ 92 (-25.81%)
Mutual labels:  pose-estimation
Human Pose Estimation From Rgb
Human Pose Estimation from RGB Camera - The repo
Stars: ✭ 108 (-12.9%)
Mutual labels:  pose-estimation
Pose Estimation tutorials
Tools and tutorials of pose estimation and deep learning
Stars: ✭ 79 (-36.29%)
Mutual labels:  pose-estimation
Ios Openpose
OpenPose Example App
Stars: ✭ 85 (-31.45%)
Mutual labels:  pose-estimation
Nuitrack Sdk
Nuitrack™ is a 3D tracking middleware developed by 3DiVi Inc.
Stars: ✭ 103 (-16.94%)
Mutual labels:  pose-estimation
Stag
STag: A Stable Fiducial Marker System
Stars: ✭ 75 (-39.52%)
Mutual labels:  pose-estimation
Iros20 6d Pose Tracking
[IROS 2020] se(3)-TrackNet: Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains
Stars: ✭ 113 (-8.87%)
Mutual labels:  pose-estimation
Fight detection
Real time Fight Detection Based on 2D Pose Estimation and RNN Action Recognition
Stars: ✭ 65 (-47.58%)
Mutual labels:  pose-estimation
Eao Slam
[IROS 2020] EAO-SLAM: Monocular Semi-Dense Object SLAM Based on Ensemble Data Association
Stars: ✭ 95 (-23.39%)
Mutual labels:  pose-estimation
Tf2 Mobile 2d Single Pose Estimation
💃 Pose estimation for iOS and android using TensorFlow 2.0
Stars: ✭ 122 (-1.61%)
Mutual labels:  pose-estimation
Cpm
Convolutional Pose Machines in TensorFlow
Stars: ✭ 115 (-7.26%)
Mutual labels:  pose-estimation
Pose Interpreter Networks
Real-Time Object Pose Estimation with Pose Interpreter Networks (IROS 2018)
Stars: ✭ 104 (-16.13%)
Mutual labels:  pose-estimation

PoseFromShape

(BMVC 2019) PyTorch implementation of the paper "Pose from Shape: Deep Pose Estimation for Arbitrary 3D Objects" [PDF] [Project webpage]

[Teaser figure]

If our project is helpful for your research, please consider citing:

@INPROCEEDINGS{Xiao2019PoseFromShape,
    author    = {Yang Xiao and Xuchong Qiu and Pierre{-}Alain Langlois and Mathieu Aubry and Renaud Marlet},
    title     = {Pose from Shape: Deep Pose Estimation for Arbitrary {3D} Objects},
    booktitle = {British Machine Vision Conference (BMVC)},
    year      = {2019}}

Updates (July 2020)

The generated point clouds of Pascal3D and ObjectNet3D can be directly downloaded from our repo.

Please see ./data/Pascal3D/pointcloud and ./data/ObjectNet3D/pointcloud

Table of Contents

  • Installation
  • Data
  • Pre-trained Models
  • Training
  • Testing
  • Demo
  • Further Reading
  • License

Installation

Dependencies

The code can be used on a Linux system with the following dependencies: Python 3.6, PyTorch 1.0.1, Python-Blender 2.77, meshlabserver.

We recommend using a conda environment to install all dependencies and test the code.

## Download the repository
git clone 'https://github.com/YoungXIAO13/PoseFromShape'
cd PoseFromShape

## Create python env with relevant packages
conda create --name PoseFromShape --file auxiliary/spec-file.txt
source activate PoseFromShape
conda install -c conda-forge matplotlib

## Install blender as a python module
conda install auxiliary/python-blender-2.77-py36_0.tar.bz2
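
Once the environment is created, a quick way to confirm that the main dependencies are importable is the following sanity-check snippet (a minimal sketch, not part of the repository):

# Quick sanity check of the conda environment (run inside the PoseFromShape env).
import torch
import bpy  # provided by the python-blender package installed above

print('PyTorch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('Blender:', bpy.app.version_string)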

Data

To download and prepare the datasets for training and testing (Pascal3D, ObjectNet3D, ShapeNetCore, SUN397, Pix3D, LineMod):

cd data
bash prepare_data.sh

To generate point clouds from the .obj files for Pascal3D and ObjectNet3D, check the data folder.
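
The repository provides its own scripts for this step; purely as an illustration of what the preprocessing produces, here is a minimal sketch that samples a point cloud from an .obj file with trimesh (an extra dependency not listed above; the file names are placeholders):

import trimesh

# Load the CAD model and sample a fixed-size point cloud from its surface.
mesh = trimesh.load('model.obj', force='mesh')               # placeholder path
points, _ = trimesh.sample.sample_surface(mesh, count=2500)  # 2500 points, arbitrary choice

# Save the sampled points for later use by a point-cloud encoder.
trimesh.PointCloud(points).export('model_pointcloud.ply')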

Pre-trained Models

To download the pretrained models (Pascal3D, ObjectNet3D, ShapeNetCore):

cd model
bash download_models.sh

Training

To train on the ObjectNet3D dataset with real images and coarse alignment:

bash run/train_ObjectNet3D.sh

To train on the Pascal3D dataset with real images and coarse alignment:

bash run/train_Pascal3D.sh

To train on the ShapeNetCore dataset with synthetic images and precise alignment:

bash run/train_ShapeNetCore.sh
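
All three scripts train the same kind of model described in the paper: features extracted from the query image and from the 3D shape (multi-view renderings, or alternatively a point cloud) are fused to predict the object pose. The snippet below is only a conceptual sketch of that two-branch design, with made-up module names and layer sizes; the real networks are defined in the repository code, and the published model predicts each Euler angle through bin classification plus offset regression rather than the direct regression used here for brevity.

import torch
import torch.nn as nn

class PoseFromShapeSketch(nn.Module):
    # Conceptual two-branch pose estimator (hypothetical sizes, not the repo's real architecture).
    def __init__(self, img_feat=512, shape_feat=512):
        super().__init__()
        # Image branch: a CNN backbone producing a global feature vector.
        self.image_cnn = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.image_fc = nn.Linear(64, img_feat)
        # Shape branch: a shared CNN applied to every rendered view, pooled across views.
        self.view_cnn = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.view_fc = nn.Linear(64, shape_feat)
        # Pose head: predict azimuth / elevation / in-plane rotation from the fused features.
        self.pose_head = nn.Sequential(
            nn.Linear(img_feat + shape_feat, 256), nn.ReLU(), nn.Linear(256, 3))

    def forward(self, image, views):
        # image: (B, 3, H, W); views: (B, V, 3, H, W) renderings of the 3D model
        b, v = views.shape[:2]
        img_f = self.image_fc(self.image_cnn(image).view(b, -1))
        view_f = self.view_fc(self.view_cnn(views.view(b * v, *views.shape[2:])).view(b * v, -1))
        view_f = view_f.view(b, v, -1).max(dim=1)[0]   # max-pool over the V views
        return self.pose_head(torch.cat([img_f, view_f], dim=1))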

Testing

While the network was trained on real or synthetic images, all the testing was done on real images.

ObjectNet3D

bash run/test_ObjectNet3D.sh

You should obtain the results of Table 1 in the paper (* indicates testing on novel categories):

| Method    | Average | bed | bookcase | calculator | cellphone | computer | door | cabinet | guitar | iron | knife | microwave | pen | pot | rifle | shoe | slipper | stove | toilet | tub | wheelchair |
| StarMap   | 56 | 73 | 78 | 91 | 57 | 82 | -  | 84 | 73 | 3  | 18 | 94 | 13 | 56 | 4  | -  | 12 | 87 | 71 | 51 | 60 |
| StarMap*  | 42 | 37 | 69 | 19 | 52 | 73 | -  | 78 | 61 | 2  | 9  | 88 | 12 | 51 | 0  | -  | 11 | 82 | 41 | 49 | 14 |
| Ours(MV)  | 73 | 82 | 90 | 95 | 65 | 93 | 97 | 89 | 75 | 52 | 32 | 95 | 54 | 82 | 45 | 67 | 46 | 95 | 82 | 67 | 66 |
| Ours(MV)* | 62 | 65 | 90 | 88 | 65 | 84 | 93 | 84 | 67 | 2  | 29 | 94 | 47 | 79 | 15 | 54 | 32 | 89 | 61 | 68 | 39 |

Pascal3D+

To test on the Pascal3D dataset with real images:

bash run/test_Pascal3D.sh

You should obtain the results of Table 2 in the paper (* indicates a category-agnostic method):

| Method                   | Accuracy (%) | Median Error (°) |
| Keypoints and Viewpoints | 80.75        | 13.6             |
| Render for CNN           | 82.00        | 11.7             |
| Mousavian                | 81.03        | 11.1             |
| Grabner                  | 83.92        | 10.9             |
| Grabner*                 | 81.33        | 11.5             |
| StarMap*                 | 81.67        | 12.8             |
| Ours(MV)*                | 82.66        | 10.0             |
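
Accuracy and Median Error follow the usual viewpoint-estimation convention: the error of a prediction is the geodesic distance between the predicted and ground-truth rotations, Accuracy is the fraction of test images with error below 30 degrees (pi/6), and MedErr is the median error in degrees. A small sketch of how such numbers are typically computed (not code from the repository):

import numpy as np

def rotation_error_deg(R_gt, R_pred):
    # Geodesic distance between two 3x3 rotation matrices, in degrees.
    cos = (np.trace(R_gt.T @ R_pred) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def summarize(errors_deg):
    errors_deg = np.asarray(errors_deg)
    acc = np.mean(errors_deg < 30.0)   # Acc_{pi/6}: fraction of errors below 30 degrees
    med = np.median(errors_deg)        # MedErr, in degrees
    return acc, med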

Pix3D

bash run/test_Pix3D.sh

You should obtain the results in Table 3 in the paper (Accuracy / MedErr):

| Method    | Bed         | Chair       | Desk        |
| Georgakis | 50.8 / 28.6 | 31.2 / 57.3 | 34.9 / 51.6 |
| Ours(MV)  | 59.8 / 20.0 | 52.4 / 26.6 | 56.6 / 26.6 |

Demo

To test on another 3D model, first generate multiviews from its .obj file by running python ./data/render_utils.py with the correct path, and save the test images picturing this model in a folder.

Then run bash ./demo/inference.sh with the right model_path, image_path, render_path, and obj_path to obtain predictions and images rendered under the predicted pose.

Some examples of applying our model (trained on ObjectNet3D objects with keypoint annotations) to armadillo images are shown below:

[Input images 1-5 and the corresponding predictions rendered under the estimated poses]

Further Reading

  • A summary of datasets in object pose estimation can be found here
  • A list of papers and challenges in object pose estimation can be found here

License

MIT

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].