
zju3dv / Mvpose

Licence: apache-2.0
Code for "Fast and Robust Multi-Person 3D Pose Estimation from Multiple Views" in CVPR'19

Projects that are alternatives of or similar to Mvpose

Cvpr 2019 Paper Statistics
Statistics and Visualization of acceptance rate, main keyword of CVPR 2019 accepted papers for the main Computer Vision conference (CVPR)
Stars: ✭ 527 (+61.66%)
Mutual labels:  jupyter-notebook, cvpr
Fbrs interactive segmentation
[CVPR2020] f-BRS: Rethinking Backpropagating Refinement for Interactive Segmentation https://arxiv.org/abs/2001.10331
Stars: ✭ 366 (+12.27%)
Mutual labels:  jupyter-notebook, cvpr
Orn
Oriented Response Networks, in CVPR 2017
Stars: ✭ 207 (-36.5%)
Mutual labels:  jupyter-notebook, cvpr
Epipolarpose
Self-Supervised Learning of 3D Human Pose using Multi-view Geometry (CVPR2019)
Stars: ✭ 477 (+46.32%)
Mutual labels:  jupyter-notebook, cvpr
Pytorch Vdsr
VDSR (CVPR2016) pytorch implementation
Stars: ✭ 313 (-3.99%)
Mutual labels:  jupyter-notebook, cvpr
Observations
Stars: ✭ 325 (-0.31%)
Mutual labels:  jupyter-notebook
Homemade Machine Learning
🤖 Python examples of popular machine learning algorithms with interactive Jupyter demos and math being explained
Stars: ✭ 18,594 (+5603.68%)
Mutual labels:  jupyter-notebook
Jupyter Edu Book
Teaching and Learning with Jupyter
Stars: ✭ 325 (-0.31%)
Mutual labels:  jupyter-notebook
Pytorchneuralstyletransfer
Implementation of Neural Style Transfer in Pytorch
Stars: ✭ 327 (+0.31%)
Mutual labels:  jupyter-notebook
Baal
Using approximate bayesian posteriors in deep nets for active learning
Stars: ✭ 328 (+0.61%)
Mutual labels:  jupyter-notebook
Azurepublicdataset
Microsoft Azure Traces
Stars: ✭ 327 (+0.31%)
Mutual labels:  jupyter-notebook
Pylightgbm
Python binding for Microsoft LightGBM
Stars: ✭ 328 (+0.61%)
Mutual labels:  jupyter-notebook
Nlp fundamentals
📘 Contains a series of hands-on notebooks for learning the fundamentals of NLP
Stars: ✭ 328 (+0.61%)
Mutual labels:  jupyter-notebook
Mth594 machinelearning
The materials for the course MTH 594 Advanced data mining: theory and applications (Dmitry Efimov, American University of Sharjah)
Stars: ✭ 327 (+0.31%)
Mutual labels:  jupyter-notebook
Scipy Cookbook
Scipy Cookbook
Stars: ✭ 326 (+0%)
Mutual labels:  jupyter-notebook
Medium
Code related to blog posts on my Medium page
Stars: ✭ 329 (+0.92%)
Mutual labels:  jupyter-notebook
Owod
(CVPR 2021 Oral) Open World Object Detection
Stars: ✭ 274 (-15.95%)
Mutual labels:  cvpr
Tutorials
Jupyter notebook tutorials from QuantConnect website for Python, Finance and LEAN.
Stars: ✭ 323 (-0.92%)
Mutual labels:  jupyter-notebook
Machine Learning In 90 Days
Stars: ✭ 321 (-1.53%)
Mutual labels:  jupyter-notebook
Cc150
Cracking the Coding Interview (cc150)
Stars: ✭ 326 (+0%)
Mutual labels:  jupyter-notebook

Fast and Robust Multi-Person 3D Pose Estimation from Multiple Views

Fast and Robust Multi-Person 3D Pose Estimation from Multiple Views
Junting Dong, Wen Jiang, Qixing Huang, Hujun Bao, Xiaowei Zhou
CVPR 2019 Project Page

Any questions or discussions are welcome!

Installation

  • Set up the Python environment
pip install -r requirements.txt
  • Compile the backend/tf_cpn/lib and backend/light_head_rcnn/lib
cd mvpose/backend/tf_cpn/lib/
make
cd ./lib_kernel/lib_nms
bash compile.sh
cd mvpose/backend/light_head_rcnn/lib/
bash make.sh

Since these backends build on py-faster-rcnn, many people run into problems when compiling these components; searching the error messages online is usually helpful.

  • Compile the pictorial function for acceleration
cd mvpose/src/m_lib/
python setup.py build_ext --inplace
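If the build succeeds, a compiled extension should appear next to the sources. A minimal check, assuming the in-place build writes the extension into src/m_lib/ (the exact module name depends on setup.py):

# Minimal sanity check (paths assumed): list the extensions built in place in src/m_lib/.
import glob

built = glob.glob('./src/m_lib/*.so') + glob.glob('./src/m_lib/*.pyd')
print(built if built else 'No compiled extension found; re-run the build.')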

Prepare models and datasets

  • Prepare models: Please put the light-head-rcnn models into backend/light_head_rcnn/output/model_dump, the tf-cpn models into backend/tf_cpn/log/model_dump, and the CamStyle model we trained into backend/CamStyle/logs

  • Prepare the datasets: Download the Campus and Shelf datasets, then put them (e.g. CampusSeq1 and Shelf) under ./datasets/

  • Generate the camera parameters: Since each dataset provides its camera parameters in a different format, we show how to handle the Campus dataset as an example:

    • Add the following code to ./datasets/CampusSeq1/Calibration/producePmat.m
    % Collect the per-camera intrinsics K and extrinsics [R|t] into cells, then
    % save them (together with the projection matrices P) as .mat files.
    K = cell(1,3);
    K{1} = K1; K{2} = K2; K{3} = K3;
    m_RT = cell(1,3);
    m_RT{1} = RT1; m_RT{2} = RT2; m_RT{3} = RT3;
    save('intrinsic.mat','K');
    save('m_RT.mat', 'm_RT');
    save('P.mat', 'P');
    save('prjectionMat','P');
    
    • Generate camera_parameter.pickle
    python ./src/tools/mat2pickle.py /parameter/dir ./datasets/CampusSeq1
    

    Here, we also provide the camera_parameter.pickle files for Campus and Shelf. You can generate the .pickle file for your own datasets in the same way.
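    For reference, here is a rough sketch of what this conversion step does, assuming mat2pickle.py loads the three .mat files saved above and bundles them into a single pickle; the key names used below are assumptions and may differ from what the downstream code expects.

    # Rough sketch only; the real conversion is done by src/tools/mat2pickle.py,
    # and the key names used here ('K', 'RT', 'P') are assumptions.
    import os
    import pickle
    import scipy.io as sio

    def mats_to_pickle(parameter_dir, dataset_dir):
        K = sio.loadmat(os.path.join(parameter_dir, 'intrinsic.mat'))['K']      # 1x3 cell of 3x3 intrinsics
        RT = sio.loadmat(os.path.join(parameter_dir, 'm_RT.mat'))['m_RT']       # 1x3 cell of 3x4 [R|t] extrinsics
        P = sio.loadmat(os.path.join(parameter_dir, 'prjectionMat.mat'))['P']   # 1x3 cell of 3x4 projection matrices
        with open(os.path.join(dataset_dir, 'camera_parameter.pickle'), 'wb') as f:
            pickle.dump({'K': K, 'RT': RT, 'P': P}, f)

    mats_to_pickle('./datasets/CampusSeq1/Calibration', './datasets/CampusSeq1')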

Demo and evaluation

Run the demo

python ./src/m_utils/demo.py -d Campus
python ./src/m_utils/demo.py -d Shelf

If everything is configured correctly, you should see visualizations like the following.

[Visualization: matching]

Evaluate on the Campus/Shelf datasets

python ./src/m_utils/evaluate.py -d Campus
python ./src/m_utils/evaluate.py -d Shelf

Once the progress bar finishes, you will see a formatted table of the evaluation results, and a CSV file with the results will be saved to the ./result directory.
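To inspect the saved results programmatically, the CSV can be loaded with pandas; the file name below is hypothetical, so check the ./result directory for the name actually written.

# Hypothetical file name; look in ./result for the CSV that was actually saved.
import pandas as pd

results = pd.read_csv('./result/Campus_evaluation.csv')
print(results)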

Accelerate the evaluation

Since the 2D pose estimator (CPN) is a little slow, we can save the predicted 2D poses and heatmaps and then start with these saved files.

  1. Produce the files
python src/tools/preprocess.py -d Campus -dump_dir ./datasets/Campus_processed
python src/tools/preprocess.py -d Shelf -dump_dir ./datasets/Shelf_processed
  2. Evaluate with the saved 2D poses and heatmaps
python ./src/m_utils/evaluate.py -d Campus -dumped ./datasets/Campus_processed
python ./src/m_utils/evaluate.py -d Shelf -dumped ./datasets/Shelf_processed

Note: for convenience, we do not optimize the size of the dumped files. The Campus_processed directory is around 4.0 GB and Shelf_processed is around 234 GB, so please make sure your disk has 200+ GB of free space. Any pull request that addresses this issue is welcome.
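Before evaluating from the dumped files, it can be useful to confirm how much space they actually occupy; a small helper like the one below (directory names taken from the commands above) does the job.

# Report the size of the dumped directories in GB (names as used in the commands above).
import os

def dir_size_gb(path):
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / 1024 ** 3

for d in ('./datasets/Campus_processed', './datasets/Shelf_processed'):
    if os.path.isdir(d):
        print('%s: %.1f GB' % (d, dir_size_gb(d)))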

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@inproceedings{dong2019fast,
  title={Fast and Robust Multi-Person 3D Pose Estimation from Multiple Views},
  author={Dong, Junting and Jiang, Wen and Huang, Qixing and Bao, Hujun and Zhou, Xiaowei},
  booktitle={CVPR},
  year={2019}
}

Acknowledgements

This code uses Light-Head R-CNN, Cascaded Pyramid Network, and CamStyle as backbones. We gratefully appreciate the impact they had on our work. If you use our code, please consider citing the original papers as well.

Copyright

This work is affiliated with the ZJU-SenseTime Joint Lab of 3D Vision, and its intellectual property belongs to SenseTime Group Ltd.

Copyright SenseTime. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.