SeanChenxy / HandMesh

License: MIT

Hand Mesh Reconstruction

Introduction

This repo is the PyTorch implementation of hand mesh reconstruction described in CMR and MobRecon.

Update

  • 2022-4-28. Wrapped the old-version code in cmr, covering the CMR demo/training/evaluation and the MobRecon demo/evaluation. Added mobrecon to release MobRecon training.
  • 2021-12-7. Added the MobRecon demo.
  • 2021-6-10. Added the Human3.6M dataset.
  • 2021-5-20. Added the CMR-G model.

Features

  • SpiralNet++
  • Sub-pose aggregation
  • Adaptive 2D-1D registration for mesh-image alignment
  • DenseStack for 2D encoding
  • Feature lifting with MapReg and PVL
  • DSConv as an efficient mesh operator
  • Complement data will be available here
  • MobRecon training with consistency learning and complement data
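SpiralNet++-style spiral convolution (see spiralnet_plus, which this repo builds on) concatenates each vertex's features along a fixed-length, precomputed spiral ordering of its neighbours and applies a shared linear layer. A minimal numpy sketch with hypothetical spiral indices (the real indices are precomputed from the mesh topology):

```python
import numpy as np

def spiral_conv(x, spiral_idx, W, b):
    """x: (V, C_in) vertex features; spiral_idx: (V, L) spiral neighbour
    indices per vertex; W: (L * C_in, C_out) weights; b: (C_out,) bias."""
    V = x.shape[0]
    gathered = x[spiral_idx].reshape(V, -1)  # concat features along each spiral
    return gathered @ W + b                  # shared linear layer over spirals

rng = np.random.default_rng(0)
V, C_in, C_out, L = 6, 4, 8, 3
x = rng.normal(size=(V, C_in))
spiral_idx = rng.integers(0, V, size=(V, L))  # hypothetical spiral orderings
W = rng.normal(size=(L * C_in, C_out))
out = spiral_conv(x, spiral_idx, W, np.zeros(C_out))
print(out.shape)  # (6, 8)
```

This is only a sketch of the operator; the repo's actual SpiralConv is a PyTorch module with learned weights.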

Install

  • Environment
    conda create -n handmesh python=3.9
    conda activate handmesh
    
  • Please follow the official instructions to install PyTorch and torchvision. We use pytorch=1.11.0 with CUDA 11.3 and torchvision=0.12.0.
  • Requirements
    pip install -r requirements.txt
    
    If you have difficulty installing torch_sparse etc., please follow this link.
  • Install MPI-IS Mesh from the source
  • You should accept the MANO license. Download the MANO model from the official website, then run
    ln -s /path/to/mano_v1_2/MANO_RIGHT.pkl template/MANO_RIGHT.pkl
    
  • Download the files you need from Google drive or Baidu cloud.

Run a demo

  • Prepare pre-trained models as

    cmr/out/Human36M/cmr_g/checkpoints/cmr_pg_res18_human36m.pt
    cmr/out/FreiHAND/cmr_g/checkpoints/cmr_g_res18_moredata.pt
    cmr/out/FreiHAND/cmr_sg/checkpoints/cmr_sg_res18_freihand.pt
    cmr/out/FreiHAND/cmr_pg/checkpoints/cmr_pg_res18_freihand.pt  
    cmr/out/FreiHAND/mobrecon/checkpoints/mobrecon_densestack_dsconv.pt  
    
  • Run

    ./cmr/scripts/demo_cmr.sh
    ./cmr/scripts/demo_mobrecon.sh
    

    The prediction results will be saved in the output directory, e.g., out/FreiHAND/mobrecon/demo.

  • Explanation of the output

    • In a JPEG file (e.g., 000_plot.jpg), we show the silhouette, 2D pose, mesh projection, and camera-space mesh and pose.
    • For the camera-space information, a red rectangle indicates the camera position, i.e., the image plane. The unit is the metre.
    • If you run the demo, you will also obtain a PLY file (e.g., 000_mesh.ply).
      • This file is a 3D model of the hand.
      • You can open it with suitable software (e.g., Preview on macOS).
      • Rotate and zoom in to inspect more 3D detail.
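The camera-space mesh relates to the image plane through a standard pinhole projection. A minimal sketch with hypothetical intrinsics (the datasets provide the real K, e.g., FreiHAND's training_K.json):

```python
import numpy as np

def project(points_xyz, K):
    """Project camera-space 3D points (metres) to pixel coordinates."""
    uvw = points_xyz @ K.T           # (N, 3) homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # divide by depth

# Hypothetical intrinsics: 480x480 image, 500 px focal length.
K = np.array([[500.0,   0.0, 240.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A point on the optical axis at 0.5 m lands at the principal point.
print(project(np.array([[0.0, 0.0, 0.5]]), K))  # [[240. 240.]]
```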

Dataset

FreiHAND

  • Please download FreiHAND dataset from this link, and create a soft link in data, i.e., data/FreiHAND.
  • Download mesh GT file freihand_train_mesh.zip, and unzip it under data/FreiHAND/training

Human3.6M

  • The official data is currently not available. Please follow the I2L repo to download it.
  • Download silhouette GT file h36m_mask.zip, and unzip it under data/Human36M.

Real World Testset

  • Please download the dataset from this link, and create a soft link in data, i.e., data/Ge.

Complement data

  • See this file for complement data. Then, create a soft link in data, i.e., data/CompHand.

Data dir

${ROOT}  
|-- data  
|   |-- FreiHAND
|   |   |-- training
|   |   |   |-- rgb
|   |   |   |-- mask
|   |   |   |-- mesh
|   |   |-- evaluation
|   |   |   |-- rgb
|   |   |-- evaluation_K.json
|   |   |-- evaluation_scale.json
|   |   |-- training_K.json
|   |   |-- training_mano.json
|   |   |-- training_xyz.json
|   |-- Human3.6M
|   |   |-- images
|   |   |-- mask
|   |   |-- annotations
|   |   |-- J_regressor_h36m_correct.npy
|   |-- Ge
|   |   |-- images
|   |   |-- params.mat
|   |   |-- pose_gt.mat
|   |-- Compdata
|   |   |-- base_pose
|   |   |-- trans_pose_batch1
|   |   |-- trans_pose_batch2
|   |   |-- trans_pose_batch3
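A quick way to verify the layout above is to test for the expected sub-directories. A minimal sketch with a hypothetical missing_dirs helper (FreiHAND subset only, demonstrated on a scratch tree rather than a real checkout):

```python
import os
import tempfile

# Expected layout from the data-dir tree above (FreiHAND subset shown).
EXPECTED = [
    "data/FreiHAND/training/rgb",
    "data/FreiHAND/training/mask",
    "data/FreiHAND/training/mesh",
    "data/FreiHAND/evaluation/rgb",
]

def missing_dirs(root, expected=EXPECTED):
    """Return the expected sub-directories that are absent under root."""
    return [p for p in expected if not os.path.isdir(os.path.join(root, p))]

# Build all but the last directory in a scratch tree, then check.
root = tempfile.mkdtemp()
for p in EXPECTED[:-1]:
    os.makedirs(os.path.join(root, p))
print(missing_dirs(root))  # ['data/FreiHAND/evaluation/rgb']
```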

Evaluation

FreiHAND

./cmr/scripts/eval_cmr_freihand.sh
./cmr/scripts/eval_mobrecon_freihand.sh
  • A JSON file will be saved as out/FreiHAND/cmr_sg/cmr_sg.json. You can submit this file to the official server for evaluation.
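For reference, the FreiHAND server expects a zipped pred.json holding two lists: per-frame 21×3 joint positions and per-frame 778×3 mesh vertices. This format is assumed from the FreiHAND challenge code, so verify it against the official submission instructions. A packaging sketch with dummy predictions:

```python
import json
import os
import tempfile
import zipfile

def package_predictions(xyz_list, verts_list, out_dir):
    """Write pred.json ([xyz, verts]) and zip it for submission."""
    json_path = os.path.join(out_dir, "pred.json")
    with open(json_path, "w") as f:
        json.dump([xyz_list, verts_list], f)
    zip_path = os.path.join(out_dir, "pred.zip")
    with zipfile.ZipFile(zip_path, "w") as z:
        z.write(json_path, arcname="pred.json")
    return zip_path

# Dummy predictions for two frames: 21 joints and 778 vertices each.
xyz = [[[0.0, 0.0, 0.5]] * 21 for _ in range(2)]
verts = [[[0.0, 0.0, 0.5]] * 778 for _ in range(2)]
out = package_predictions(xyz, verts, tempfile.mkdtemp())
```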

Human3.6M

./cmr/scripts/eval_cmr_human36m.sh

Performance on PA-MPJPE (mm)

We reproduced the following results after code re-organization.

Model               | FreiHAND | Human3.6M (w/o COCO)
--------------------|----------|---------------------
CMR-G-ResNet18      | 7.6      | -
CMR-SG-ResNet18     | 7.5      | -
CMR-PG-ResNet18     | 7.5      | 50.0
MobRecon-DenseStack | 6.9      | -
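PA-MPJPE first aligns the prediction to the ground truth with the best similarity transform (Procrustes analysis) and then averages the per-joint Euclidean distance, so global rotation, translation, and scale do not count against the model. A minimal numpy sketch of the metric (not the repo's own evaluation code):

```python
import numpy as np

def pa_mpjpe(pred, gt):
    """PA-MPJPE in mm: mean per-joint error after Procrustes (similarity)
    alignment of pred (J, 3) onto gt (J, 3); inputs in metres."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g                  # centre both sets
    U, S, Vt = np.linalg.svd(g.T @ p)              # covariance between the sets
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflection
    R = U @ D @ Vt                                 # optimal rotation
    s = np.trace(np.diag(S) @ D) / (p ** 2).sum()  # optimal scale
    aligned = s * p @ R.T + mu_g
    return 1000.0 * np.linalg.norm(aligned - gt, axis=1).mean()

# A globally scaled and shifted copy scores ~0: the alignment removes it.
joints = np.random.default_rng(0).normal(size=(21, 3))
print(pa_mpjpe(2.0 * joints + 0.1, joints))  # ~0
```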

Training

./cmr/scripts/train_cmr_freihand.sh
./cmr/scripts/train_cmr_human36m.sh
./mobrecon/scripts/train_mobrecon.sh

An experiment log will be saved under cmr/out or mobrecon/out.

Reference

@inproceedings{bib:CMR,
  title={Camera-Space Hand Mesh Recovery via Semantic Aggregation and Adaptive 2D-1D Registration},
  author={Chen, Xingyu and Liu, Yufeng and Ma, Chongyang and Chang, Jianlong and Wang, Huayan and Chen, Tian and Guo, Xiaoyan and Wan, Pengfei and Zheng, Wen},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}
@inproceedings{bib:MobRecon,
  title={MobRecon: Mobile-Friendly Hand Mesh Reconstruction from Monocular Image},
  author={Chen, Xingyu and Liu, Yufeng and Dong, Yajiao and Zhang, Xiong and Ma, Chongyang and Xiong, Yanmin and Zhang, Yuan and Guo, Xiaoyan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

Acknowledgement

Our implementation of SpiralConv is based on spiralnet_plus.

We also thank hand-graph-cnn, I2L-MeshNet_RELEASE, detectron2, and smplpytorch (https://github.com/gulvarol/smplpytorch) for their inspiring implementations.
