tomguluson92 / Prnet_pytorch

Training & Inference Code of PRNet in PyTorch 1.1.0

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Prnet_pytorch

Flame Fitting
Example code for the FLAME 3D head model. The code demonstrates how to sample 3D heads from the model, fit the model to 3D keypoints and 3D scans.
Stars: ✭ 269 (+80.54%)
Mutual labels:  3d-reconstruction, face-alignment
Tf flame
Tensorflow framework for the FLAME 3D head model. The code demonstrates how to sample 3D heads from the model, fit the model to 2D or 3D keypoints, and how to generate textured head meshes from Images.
Stars: ✭ 193 (+29.53%)
Mutual labels:  3d-reconstruction, face-alignment
Stereo dense reconstruction
Dense 3D reconstruction from stereo (using LIBELAS)
Stars: ✭ 113 (-24.16%)
Mutual labels:  3d-reconstruction
Ad Census
AD-Census stereo matching algorithm, the work of the Chinese researcher Xing Mei et al. (respect!). Efficient, high-quality, and well suited to hardware acceleration; the algorithm used in the Intel RealSense D400 stereo module. A complete implementation with clean code and clear comments. Stars welcome!
Stars: ✭ 145 (-2.68%)
Mutual labels:  3d-reconstruction
Mdm
A TensorFlow implementation of the Mnemonic Descent Method.
Stars: ✭ 120 (-19.46%)
Mutual labels:  face-alignment
Tenginekit
TengineKit - Free, Fast, Easy, Real-Time Face Detection & Face Landmarks & Face Attributes & Hand Detection & Hand Landmarks & Body Detection & Body Landmarks & Iris Landmarks & Yolov5 SDK On Mobile.
Stars: ✭ 2,103 (+1311.41%)
Mutual labels:  face-alignment
Face Alignment Training
Training code for the networks described in "How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)" paper.
Stars: ✭ 127 (-14.77%)
Mutual labels:  face-alignment
3ddfa v2
The official PyTorch implementation of Towards Fast, Accurate and Stable 3D Dense Face Alignment, ECCV 2020.
Stars: ✭ 1,961 (+1216.11%)
Mutual labels:  face-alignment
Drc
Code release for "Multi-view Supervision for Single-view Reconstruction via Differentiable Ray Consistency" (CVPR 2017)
Stars: ✭ 147 (-1.34%)
Mutual labels:  3d-reconstruction
Meshroommaya
Photomodeling plugin for Maya
Stars: ✭ 118 (-20.81%)
Mutual labels:  3d-reconstruction
Alicevision
Photogrammetric Computer Vision Framework
Stars: ✭ 2,029 (+1261.74%)
Mutual labels:  3d-reconstruction
Reconstructiondataset
Set of images for doing 3d reconstruction
Stars: ✭ 117 (-21.48%)
Mutual labels:  3d-reconstruction
Cnncomplete
[CVPR'17] Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis
Stars: ✭ 117 (-21.48%)
Mutual labels:  3d-reconstruction
Deep Face Alignment
The MXNet Implementation of Stacked Hourglass and Stacked SAT for Robust 2D and 3D Face Alignment
Stars: ✭ 134 (-10.07%)
Mutual labels:  face-alignment
3d Recgan
🔥3D-RecGAN in Tensorflow (ICCV Workshops 2017)
Stars: ✭ 116 (-22.15%)
Mutual labels:  3d-reconstruction
Binary Face Alignment
Real time face alignment
Stars: ✭ 145 (-2.68%)
Mutual labels:  face-alignment
Mvcsnp
Code release for "Multi-view Consistency as Supervisory Signal for Learning Shape and Pose Prediction"
Stars: ✭ 113 (-24.16%)
Mutual labels:  3d-reconstruction
Hyperlandmark
Deep Learning Based Free Mobile Real-Time Face Landmark Detector. Contact:[email protected]
Stars: ✭ 1,528 (+925.5%)
Mutual labels:  face-alignment
Obman train
[cvpr19] Demo, training and evaluation code for generating dense hand+object reconstructions from single rgb images
Stars: ✭ 124 (-16.78%)
Mutual labels:  3d-reconstruction
Surfelwarp
SurfelWarp: Efficient Non-Volumetric Dynamic Reconstruction
Stars: ✭ 149 (+0%)
Mutual labels:  3d-reconstruction

PRNet PyTorch 1.1.0


This is an unofficial PyTorch implementation of PRNet, created because no complete code for generating the 300WLP training data and training on it was previously available.

  • Authors: Samuel Ko, mjanddy.

Update Log

@date: 2019.11.13

@notice: An important bug in loading the UV map has been fixed by mj: the original uv_map.jpg is flipped, so *.npy files are used instead to fix this problem. Thanks to mjanddy!

@date: 2019.11.14

@notice: The inference stage has been uploaded; a pretrained model is available at results/latest.pth. Thanks to mjanddy!


Notice

Because the default PIL.Image reader has been replaced by cv2.imread in the image reader, you need to make a small change to the tensorboard package at your_python_path/site-packages/torch/utils/tensorboard/summary.py.

Add tensor = tensor[:, :, ::-1] before image = Image.fromarray(tensor) in the function make_image(...).

...
def make_image(tensor, rescale=1, rois=None):
    """Convert an numpy representation image to Image protobuf"""
    from PIL import Image
    height, width, channel = tensor.shape
    scaled_height = int(height * rescale)
    scaled_width = int(width * rescale)

    tensor = tensor[:, :, ::-1]  # added line: cv2 reads images as BGR, flip to RGB before handing to PIL
    image = Image.fromarray(tensor)
    ...
...
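
If you would rather not patch the installed package, a minimal alternative sketch (assuming you log the images yourself through SummaryWriter, and that they come from cv2 and are therefore in BGR order) is to flip the channels before calling add_image:

# Sketch: convert a cv2 image (BGR, HWC, uint8) to RGB before logging,
# so the stock tensorboard summary.py needs no modification.
import cv2
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()                         # logs to ./runs by default
img_bgr = cv2.imread("TestImages/example.jpg")   # hypothetical example image
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
writer.add_image("input/example", img_rgb, global_step=0, dataformats="HWC")
writer.close()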

① Pre-Requirements

Before generating the UV position maps and training, the first step is to generate BFM.mat from the Basel Face Model. For convenience, the corresponding BFM.mat is provided here.

After downloading it, move BFM.mat to utils/.

The required Python packages are listed in requirements.txt.
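
As a quick sanity check (a small sketch, not part of the repository; the exact fields stored in BFM.mat are not documented here), you can confirm the file is in place and loads correctly:

# Sketch: verify utils/BFM.mat is present and readable.
import os
import scipy.io as sio

bfm_path = os.path.join("utils", "BFM.mat")
assert os.path.isfile(bfm_path), "BFM.mat not found - put it under utils/"
bfm = sio.loadmat(bfm_path)
print(sorted(bfm.keys()))  # inspect the stored fields; key names depend on the .mat file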

② Generate uv_pos_map

YadiraF/face3d provides scripts for generating uv_pos_map; here they are wrapped for batch processing.

You can use utils/generate_posmap_300WLP.py as:

python3 generate_posmap_300WLP.py --input_dir ./dataset/300WLP/IBUG/ --save_dir ./300WLP_IBUG/

The resulting 300WLP_IBUG dataset then has the proper structure for training PRNet:

- 300WLP_IBUG
  - 0/
    - IBUG_image_xxx.npy
    - original.jpg (original RGB)
    - uv_posmap.jpg (corresponding UV position map)
  - 1/
  - ...
  - 100/

As an alternative to downloading from 300WLP, processed original/uv_posmap pairs for IBUG are provided here.
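
For reference, a minimal PyTorch Dataset sketch over this folder layout could look like the following. The file names follow the tree above; anything beyond that (e.g. the [0, 1] image scaling) is an assumption and may differ from the repository's own loader.

# Sketch of a dataset over the 300WLP_IBUG layout shown above.
# Assumes each numbered folder holds original.jpg plus one *.npy UV position map.
import glob, os
import cv2
import numpy as np
import torch
from torch.utils.data import Dataset

class UVPosmapDataset(Dataset):
    def __init__(self, root):
        self.folders = sorted(
            d for d in glob.glob(os.path.join(root, "*")) if os.path.isdir(d)
        )

    def __len__(self):
        return len(self.folders)

    def __getitem__(self, idx):
        folder = self.folders[idx]
        img = cv2.imread(os.path.join(folder, "original.jpg"))         # BGR, HWC, uint8
        posmap = np.load(glob.glob(os.path.join(folder, "*.npy"))[0])  # UV position map, HWC
        img = torch.from_numpy(img).permute(2, 0, 1).float() / 255.0   # CHW, scaled to [0, 1] (assumption)
        posmap = torch.from_numpy(posmap).permute(2, 0, 1).float()
        return img, posmap

# Usage: dataset = UVPosmapDataset("./300WLP_IBUG")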

③ Training

After finishing the above two steps, you can train your own PRNet with:

python3 train.py --train_dir ./300WLP_IBUG

You can use tensorboard to visualize the intermediate output at localhost:6006:

tensorboard --logdir=absolute_path_of_prnet_runs/

Tensorboard example

The following image is used to judge how well PRNet generalizes to unseen data.

(Original, UV_MAP_gt, UV_MAP_predicted) Test Data
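
For context, PRNet is trained with a mean squared error on the UV position map that is weighted per pixel, so that the 68 landmarks and the eye/nose/mouth region count more than the rest of the face and the neck region is ignored (the paper describes a weight-mask ratio of roughly 16:4:3:0). A minimal sketch of such a loss, assuming a precomputed weight mask of the same spatial size as the position map (the actual mask file and weights used by train.py may differ), is:

# Sketch of the weighted MSE loss used to train PRNet on UV position maps.
# weight_mask is assumed to be an (H, W) or (1, H, W) tensor loaded from a
# mask asset (its file name and location are an assumption here).
import torch

def weighted_posmap_loss(pred, gt, weight_mask):
    """pred, gt: (N, 3, H, W) UV position maps; weight_mask: broadcastable per-pixel weights."""
    return torch.mean(((pred - gt) ** 2) * weight_mask)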

④ Inference

You can run PRNet inference with the following command. Details about the parameters can be found in inference.py.

python3 inference.py -i input_dir(default is TestImages) -o output_dir(default is TestImages/results) --model model_path(default is results/latest.pth) --gpu 0 (-1 denotes cpu)

Test Data
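
If you need to process several input folders, a small wrapper around the documented command line can be used (the folder names below are placeholders):

# Sketch: run inference.py over several input folders using the flags shown above.
import subprocess

for in_dir in ["TestImages", "MoreImages"]:  # placeholder folder names
    subprocess.run(
        ["python3", "inference.py",
         "-i", in_dir,
         "-o", f"{in_dir}/results",
         "--model", "results/latest.pth",
         "--gpu", "0"],
        check=True,
    )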


Citation

If you use this code, please consider citing:

@inProceedings{feng2018prn,
  title     = {Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network},
  author    = {Yao Feng and Fan Wu and Xiaohu Shao and Yanfeng Wang and Xi Zhou},
  booktitle = {ECCV},
  year      = {2018}
}