
nghorbani / Amass

License: other
Data preparation and loader for AMASS

Projects that are alternatives to, or similar to, Amass

Human body prior
VPoser: Variational Human Pose Prior
Stars: ✭ 244 (+35.56%)
Mutual labels:  jupyter-notebook, pose-estimation, motion
Chinesetrafficpolicepose
Detects Chinese traffic police commanding poses 检测中国交警指挥手势
Stars: ✭ 49 (-72.78%)
Mutual labels:  jupyter-notebook, pose-estimation, action-recognition
Pytorch Openpose
pytorch implementation of openpose including Hand and Body Pose Estimation.
Stars: ✭ 716 (+297.78%)
Mutual labels:  jupyter-notebook, pose-estimation
Keras realtime multi Person pose estimation
Keras version of Realtime Multi-Person Pose Estimation project
Stars: ✭ 728 (+304.44%)
Mutual labels:  jupyter-notebook, pose-estimation
Fight detection
Real time Fight Detection Based on 2D Pose Estimation and RNN Action Recognition
Stars: ✭ 65 (-63.89%)
Mutual labels:  pose-estimation, action-recognition
Mobilepose Pytorch
Light-weight Single Person Pose Estimator
Stars: ✭ 427 (+137.22%)
Mutual labels:  jupyter-notebook, pose-estimation
Gluon Cv
Gluon CV Toolkit
Stars: ✭ 5,001 (+2678.33%)
Mutual labels:  pose-estimation, action-recognition
Training toolbox caffe
Training Toolbox for Caffe
Stars: ✭ 51 (-71.67%)
Mutual labels:  jupyter-notebook, action-recognition
Openpose-based-GUI-for-Realtime-Pose-Estimate-and-Action-Recognition
GUI based on the Python API of OpenPose on Windows, using CUDA 10 and cuDNN 7. Supports body, hand, and face keypoint estimation and data saving. Real-time gesture recognition is realized through a two-layer neural network based on the skeleton collected from the GUI.
Stars: ✭ 69 (-61.67%)
Mutual labels:  action-recognition, pose-estimation
Cpm
Convolutional Pose Machines in TensorFlow
Stars: ✭ 115 (-36.11%)
Mutual labels:  jupyter-notebook, pose-estimation
Pose Interpreter Networks
Real-Time Object Pose Estimation with Pose Interpreter Networks (IROS 2018)
Stars: ✭ 104 (-42.22%)
Mutual labels:  jupyter-notebook, pose-estimation
Tensorflow realtime multi Person pose estimation
Multi-Person Pose Estimation project for Tensorflow 2.0 with a small and fast model based on MobilenetV3
Stars: ✭ 129 (-28.33%)
Mutual labels:  jupyter-notebook, pose-estimation
Action Recognition Visual Attention
Action recognition using soft attention based deep recurrent neural networks
Stars: ✭ 350 (+94.44%)
Mutual labels:  jupyter-notebook, action-recognition
Awesome Action Recognition
A curated list of action recognition and related area resources
Stars: ✭ 3,202 (+1678.89%)
Mutual labels:  pose-estimation, action-recognition
Video Classification
Tutorial for video classification/ action recognition using 3D CNN/ CNN+RNN on UCF101
Stars: ✭ 543 (+201.67%)
Mutual labels:  jupyter-notebook, action-recognition
ailia-models
The collection of pre-trained, state-of-the-art AI models for ailia SDK
Stars: ✭ 1,102 (+512.22%)
Mutual labels:  action-recognition, pose-estimation
Rnn For Human Activity Recognition Using 2d Pose Input
Activity Recognition from 2D pose using an LSTM RNN
Stars: ✭ 165 (-8.33%)
Mutual labels:  jupyter-notebook, pose-estimation
Epipolar Transformers
Epipolar Transformers (CVPR 2020)
Stars: ✭ 245 (+36.11%)
Mutual labels:  jupyter-notebook, pose-estimation
Pose Estimation tutorials
Tools and tutorials of pose estimation and deep learning
Stars: ✭ 79 (-56.11%)
Mutual labels:  jupyter-notebook, pose-estimation
Dd Net
A lightweight network for body/hand action recognition
Stars: ✭ 161 (-10.56%)
Mutual labels:  jupyter-notebook, action-recognition

AMASS: Archive of Motion Capture as Surface Shapes


AMASS is a large database of human motion unifying different optical marker-based motion capture datasets by representing them within a common framework and parameterization. AMASS is readily useful for animation, visualization, and generating training data for deep learning.

Here we provide tools and tutorials to use AMASS in your research projects. More specifically:

  • Following the data splits recommended by AMASS, we provide non-overlapping train/validation/test splits.
  • AMASS uses an extended version of SMPL+H with DMPLs. Here we show how to load different components and visualize a body model with AMASS data.
  • AMASS is also compatible with SMPL and SMPL-X body models. We show how to use the body data from AMASS to animate these models.
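Each AMASS sequence is distributed as a NumPy .npz archive whose fields include the per-frame SMPL+H pose parameters, shape coefficients, and global translation. The sketch below shows how such a file might be read and its pose vector sliced into root orientation, body pose, and hand pose; the field names and the 156-dimensional SMPL+H pose layout reflect the AMASS download format as we understand it, and the synthetic in-memory array stands in for a real downloaded sequence.

```python
# Hedged sketch of reading one AMASS motion sequence (.npz).
# A synthetic buffer stands in for a real downloaded file.
import io
import numpy as np

# Stand-in for a real AMASS file: 100 frames of SMPL+H parameters.
buf = io.BytesIO()
np.savez(
    buf,
    poses=np.zeros((100, 156)),      # per-frame axis-angle pose (SMPL+H)
    betas=np.zeros(16),              # body shape coefficients (16 betas)
    trans=np.zeros((100, 3)),        # per-frame global translation
    mocap_framerate=np.array(120.0), # capture frame rate in Hz
)
buf.seek(0)
bdata = np.load(buf)

# Slice the 156-D SMPL+H pose vector into its components.
poses = bdata["poses"]
root_orient = poses[:, :3]    # global root orientation
pose_body   = poses[:, 3:66]  # 21 body joints x 3 axis-angle params
pose_hand   = poses[:, 66:]   # left + right hand articulation
print(root_orient.shape, pose_body.shape, pose_hand.shape)
```

These arrays can then be passed to a body model (e.g. via human_body_prior's BodyModel) to pose the mesh frame by frame.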


Installation

Requirements

Install from this repository for the latest developments:

pip install git+https://github.com/nghorbani/configer
pip install git+https://github.com/nghorbani/human_body_prior
pip install git+https://github.com/nghorbani/amass

Body Models

AMASS fits a statistical body model to labeled marker-based optical motion capture data. In the original paper we use SMPL+H with an extended shape space (16 betas) and DMPLs. Please download each model and place it in the body_models folder of this repository after obtaining the code from GitHub.
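As a minimal sketch of what the tutorials expect, the code below lays out and then checks a hypothetical body_models directory; the smplh/ and dmpls/ subfolder names and the neutral/model.npz file name are assumptions based on the AMASS tutorials, so adjust them to match the archives you actually download.

```python
# Hedged sketch: expected body_models layout (paths are assumptions).
# Placeholder files simulate the downloaded SMPL+H and DMPL archives.
import os
import tempfile

root = tempfile.mkdtemp()
for rel in ("smplh/neutral/model.npz", "dmpls/neutral/model.npz"):
    path = os.path.join(root, "body_models", rel)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    open(path, "wb").close()  # empty stand-in for the real model file

# Paths a tutorial script would point its body model loader at.
bm_path = os.path.join(root, "body_models", "smplh", "neutral", "model.npz")
dmpl_path = os.path.join(root, "body_models", "dmpls", "neutral", "model.npz")
print(os.path.exists(bm_path), os.path.exists(dmpl_path))
```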

Tutorials

We release tools and multiple Jupyter notebooks to demonstrate how to use AMASS to animate the SMPL+H body model.

Furthermore, as promised in the supplementary material of the paper, we release code to produce synthetic mocap using DFaust registrations.

Please refer to tutorials for further details.

Citation

Please cite the following paper if you use this code directly or indirectly in your research/projects:

@inproceedings{AMASS:2019,
  title     = {{AMASS}: Archive of Motion Capture as Surface Shapes},
  author    = {Mahmood, Naureen and Ghorbani, Nima and Troje, Nikolaus F. and Pons-Moll, Gerard and Black, Michael J.},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {Oct},
  year      = {2019},
  url       = {https://amass.is.tue.mpg.de}
}

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions and any accompanying documentation before you download and/or use the AMASS dataset, and software, (the "Model & Software"). By downloading and/or using the Model & Software (including downloading, cloning, installing, and any other use of this GitHub repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Contact

The code in this repository is developed by Nima Ghorbani.

If you have any questions you can contact us at [email protected].

For commercial licensing, contact [email protected].
