Learning on Compressed Output (LoCO)

License: CC BY-NC 4.0

Accepted to CVPR 2020

This repo contains the code for our CVPR 2020 paper Compressed Volumetric Heatmaps for Multi-Person 3D Pose Estimation, together with instructions for training and testing our models on the JTA dataset. You can also find here the code for training the Volumetric Heatmap Autoencoder.

Some Results

(Animated examples: input frames alongside the corresponding predicted 3D poses.)

Quick Demo

  • run python demo.py --ex=1 (Python >= 3.6)
    • after a few seconds it will display some precomputed results; change the --ex value (1 to 3) to see different examples (a small driver script is sketched below)
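
To step through all three examples in one go, a tiny driver like the one below works; it simply shells out to demo.py and is not part of the repository:

```python
# Hypothetical convenience script (not in the repo): run all three
# precomputed demo examples back to back.
import subprocess
import sys

for ex in (1, 2, 3):
    # equivalent to typing `python demo.py --ex=<n>` in a shell
    subprocess.run([sys.executable, "demo.py", f"--ex={ex}"], check=True)
```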

Compile the CUDA Kernel

  • cd into the nms3d folder and run python setup.py install (Python >= 3.6). Make sure your CUDA directory is added to your environment variables (a build sketch follows).
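
If the build fails because nvcc cannot be found, exporting the CUDA paths before invoking the build usually fixes it. A minimal sketch, assuming a standard toolkit installation under /usr/local/cuda:

```python
# Sketch: build the nms3d extension with the CUDA toolkit on the path.
# The install prefix is an assumption; adjust it to your system.
import os
import subprocess
import sys

env = dict(os.environ)
env.setdefault("CUDA_HOME", "/usr/local/cuda")          # common default location
env["PATH"] = env["CUDA_HOME"] + "/bin:" + env["PATH"]  # so nvcc is found
subprocess.run([sys.executable, "setup.py", "install"],
               cwd="nms3d", env=env, check=True)
```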

Instructions

  • Download the JTA dataset into <your_jta_path>
  • Run python to_poses.py --out_dir_path='poses' --format='torch' (link) to generate the <your_jta_path>/poses directory
  • Run python to_imgs.py --out_dir_path='frames' --img_format='jpg' (link) to generate the <your_jta_path>/frames directory
  • Download our precomputed codes from here and unzip them into <your_jta_path>
  • Modify the conf/default.yaml configuration file, specifying the path to the JTA dataset directory (a scripted version is sketched after this list):
    • JTA_PATH: <your_jta_path>
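
The last step (plus a quick sanity check of the layout produced by the previous ones) can also be scripted. A minimal sketch, assuming PyYAML is installed and JTA_PATH is the only key you need to change; the /data/jta path is a placeholder:

```python
# Sketch (not part of the repo): verify the prepared JTA layout, then point
# conf/default.yaml at it. Requires PyYAML.
from pathlib import Path
import yaml

jta_path = Path("/data/jta")  # placeholder: use your actual <your_jta_path>
for sub in ("poses", "frames"):
    assert (jta_path / sub).is_dir(), f"missing '{sub}': rerun the matching step above"

cfg_file = Path("conf/default.yaml")
cfg = yaml.safe_load(cfg_file.read_text())
cfg["JTA_PATH"] = str(jta_path)               # the key shown in the list above
cfg_file.write_text(yaml.safe_dump(cfg))      # note: rewriting drops YAML comments
```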

Train

  • run python main.py default (Python >= 3.6); the default argument selects the conf/default.yaml configuration described above

Show Visual Results

  • run python show.py default (Python >= 3.6)
    • Note: at least one training epoch must be completed before results can be shown; to obtain results comparable to those reported in the paper, train for at least 100 epochs (a combined train-and-show sketch follows)
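
The training and visualization steps chain naturally; a minimal sketch that runs them in sequence with the default configuration (both commands come straight from the two sections above):

```python
# Sketch: train with the 'default' configuration, then show visual results.
# At least one epoch must finish before show.py has anything to display;
# ~100 epochs are suggested for results comparable to the paper.
import subprocess
import sys

subprocess.run([sys.executable, "main.py", "default"], check=True)  # train
subprocess.run([sys.executable, "show.py", "default"], check=True)  # visualize
```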

Show Paper Results

  • Download the pretrained weights and extract them into the project folder
  • Modify the conf/pretrained.yaml configuration file, specifying the path to the JTA dataset directory:
    • JTA_PATH: <your_jta_path>
  • run python show.py pretrained to show qualitative results (Python >= 3.6)
  • run python eval.py pretrained to reproduce the results reported in the paper (Python >= 3.6); both steps are sketched below
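
A minimal end-to-end sketch of this section, assuming the weights have been extracted, PyYAML is installed, and JTA_PATH is a plain string key in the config:

```python
# Sketch: check the pretrained configuration, then reproduce the paper's
# qualitative (show.py) and quantitative (eval.py) results.
import subprocess
import sys
from pathlib import Path
import yaml

cfg = yaml.safe_load(Path("conf/pretrained.yaml").read_text())
assert Path(cfg["JTA_PATH"]).is_dir(), "set JTA_PATH to your JTA directory first"
subprocess.run([sys.executable, "show.py", "pretrained"], check=True)  # qualitative
subprocess.run([sys.executable, "eval.py", "pretrained"], check=True)  # paper metrics
```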

Citation

We believe in open research and are happy if you find this code and data useful.
If you use it, please cite our work.

@inproceedings{fabbri2020compressed,
   title     = {Compressed Volumetric Heatmaps for Multi-Person 3D Pose Estimation},
   author    = {Fabbri, Matteo and Lanzi, Fabio and Calderara, Simone and Alletto, Stefano and Cucchiara, Rita},
   booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
   year      = {2020}
}

License

LoCO is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
