
Projects that are alternatives of or similar to Prnet

Deca
DECA: Detailed Expression Capture and Animation
Stars: ✭ 292 (-93.48%)
Mutual labels:  alignment, 3d, face, reconstruction
Extreme 3d faces
Extreme 3D Face Reconstruction: Looking Past Occlusions
Stars: ✭ 653 (-85.42%)
Mutual labels:  3d, face, reconstruction
Vrn
👨 Code for "Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression"
Stars: ✭ 4,391 (-1.96%)
Mutual labels:  3d, face, reconstruction
Tf flame
Tensorflow framework for the FLAME 3D head model. The code demonstrates how to sample 3D heads from the model, fit the model to 2D or 3D keypoints, and how to generate textured head meshes from Images.
Stars: ✭ 193 (-95.69%)
Mutual labels:  3d, face
Jeelizfacefilter
Javascript/WebGL lightweight face tracking library designed for augmented reality webcam filters. Features : multiple faces detection, rotation, mouth opening. Various integration examples are provided (Three.js, Babylon.js, FaceSwap, Canvas2D, CSS3D...).
Stars: ✭ 2,042 (-54.41%)
Mutual labels:  3d, face
Flame pytorch
This is a implementation of the 3D FLAME model in PyTorch
Stars: ✭ 153 (-96.58%)
Mutual labels:  3d, face
Facealignmentcompare
Empirical Study of Recent Face Alignment Methods
Stars: ✭ 15 (-99.67%)
Mutual labels:  alignment, face
Pulse
A pendant to warn you when you touch your face
Stars: ✭ 229 (-94.89%)
Mutual labels:  3d, face
Mtcnn
face detection and alignment with mtcnn
Stars: ✭ 66 (-98.53%)
Mutual labels:  alignment, face
Peppa-Facial-Landmark-PyTorch
Facial Landmark Detection based on PyTorch
Stars: ✭ 172 (-96.16%)
Mutual labels:  face, alignment
realtime-2D-to-3D-faces
Reconstructing real-time 3D faces from 2D images using deep learning.
Stars: ✭ 92 (-97.95%)
Mutual labels:  face, reconstruction
Facenet-Caffe
facenet recognition and retrieve by using hnswlib and flask, convert tensorflow model to caffe
Stars: ✭ 30 (-99.33%)
Mutual labels:  face, alignment
Synthesize3dviadepthorsil
[CVPR 2017] Generation and reconstruction of 3D shapes via modeling multi-view depth maps or silhouettes
Stars: ✭ 141 (-96.85%)
Mutual labels:  3d, reconstruction
Uav Mapper
UAV-Mapper is a lightweight UAV Image Processing System, Visual SFM reconstruction or Aerial Triangulation, Fast Ortho-Mosaic, Plannar Mosaic, Fast Digital Surface Map (DSM) and 3d reconstruction for UAVs.
Stars: ✭ 106 (-97.63%)
Mutual labels:  3d, reconstruction
3d Iwgan
A repository for the paper "Improved Adversarial Systems for 3D Object Generation and Reconstruction".
Stars: ✭ 166 (-96.29%)
Mutual labels:  3d, reconstruction
Pixel2mesh
Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images. In ECCV2018.
Stars: ✭ 997 (-77.74%)
Mutual labels:  3d, reconstruction
Volumetriccapture
A multi-sensor capture system for free viewpoint video.
Stars: ✭ 243 (-94.57%)
Mutual labels:  3d, reconstruction
Cilantro
A lean C++ library for working with point cloud data
Stars: ✭ 577 (-87.12%)
Mutual labels:  3d, reconstruction
3ddfa v2
The official PyTorch implementation of Towards Fast, Accurate and Stable 3D Dense Face Alignment, ECCV 2020.
Stars: ✭ 1,961 (-56.22%)
Mutual labels:  alignment, 3d
Mesh mesh align plus
Precisely align, move, and measure+match objects and mesh parts in your 3D scenes.
Stars: ✭ 350 (-92.19%)
Mutual labels:  alignment, 3d

Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network

This is an official python implementation of PRN.

PRN is a method to jointly regress dense alignment and 3D face shape in an end-to-end manner. More examples on Multi-PIE and 300VW can be seen on YouTube.

The main features are:

  • End-to-end: our method directly regresses the 3D facial structure and dense alignment from a single image, bypassing 3DMM fitting.

  • Multi-task: by regressing the position map, the 3D geometry is obtained along with its semantic meaning. Thus, we can effortlessly complete the tasks of dense alignment, monocular 3D face reconstruction, pose estimation, etc.

  • Faster than real time: the method runs at over 100 fps (on a GTX 1080) to regress a position map.

  • Robust: tested on facial images in unconstrained conditions; our method is robust to pose, illumination, and occlusion.
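The position map is what makes the joint tasks cheap: it is an ordinary 2D image whose three channels store the x, y, z coordinates of the face surface point assigned to each UV pixel. A minimal numpy sketch of this idea (random arrays stand in for the network output, and the landmark UV indices here are made up; the repo ships the real ones under Data/uv-data):

```python
import numpy as np

# A regressed position map: a 256x256 image whose 3 channels hold the
# (x, y, z) coordinates of the face point mapped to each UV pixel.
# Random data stands in for the network output here.
pos_map = np.random.rand(256, 256, 3).astype(np.float32)

# Hypothetical UV indices of the 68 landmarks (made up for this sketch;
# the real indices live in the repo's UV data files).
uv_kpt_ind = np.random.randint(0, 256, size=(2, 68))

# Dense geometry and sparse landmarks fall out of the same map:
all_vertices = pos_map.reshape(-1, 3)                  # dense 3D points
landmarks = pos_map[uv_kpt_ind[1], uv_kpt_ind[0], :]   # 68 x 3

print(all_vertices.shape, landmarks.shape)
```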

Applications

Basics (evaluated in the paper)

  • Face Alignment

Dense alignment of both visible and non-visible points (including the 68 key points), along with per-point visibility (1 for visible, 0 for non-visible).

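How such a visibility flag might be derived is not spelled out here; one rough sketch is to mark a vertex visible when its area-weighted normal faces the camera, ignoring self-occlusion. A toy mesh with that hypothetical criterion:

```python
import numpy as np

def vertex_normals(vertices, triangles):
    """Area-weighted vertex normals for a triangle mesh."""
    normals = np.zeros_like(vertices)
    tri = vertices[triangles]                         # (T, 3, 3)
    face_n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    for i in range(3):                                # accumulate per vertex
        np.add.at(normals, triangles[:, i], face_n)
    norms = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.maximum(norms, 1e-8)

# Toy tetrahedron standing in for the face mesh.
vertices = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
triangles = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

# Crude visibility: a vertex whose normal has a positive z-component
# (camera along +z) is marked 1, otherwise 0. Self-occlusion is ignored.
normals = vertex_normals(vertices, triangles)
visibility = (normals[:, 2] > 0).astype(np.uint8)
print(visibility)
```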

  • 3D Face Reconstruction

Get the 3D vertices and corresponding colours from a single image, and save the result as mesh data (.obj), which can be opened with MeshLab or Microsoft 3D Builder. Note that the texture of non-visible areas is distorted due to self-occlusion.

New:

  1. You can choose to output the mesh in its original pose (default) or in a front view (meaning all output meshes are aligned).
  2. The .obj file can now also be written with a texture map (of a specified texture size), and non-visible texture can be set to 0.

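A per-vertex-coloured .obj of the kind MeshLab reads can be written in a few lines. This is a generic sketch, not the repo's own writer:

```python
import numpy as np

def write_colored_obj(path, vertices, colors, triangles):
    """Write a per-vertex-colored mesh in the MeshLab-readable .obj
    extension: 'v x y z r g b' lines plus 1-indexed 'f' lines."""
    with open(path, 'w') as f:
        for (x, y, z), (r, g, b) in zip(vertices, colors):
            f.write('v %.6f %.6f %.6f %.6f %.6f %.6f\n' % (x, y, z, r, g, b))
        for a, b_, c in triangles + 1:   # .obj indices start at 1
            f.write('f %d %d %d\n' % (a, b_, c))

# Toy data standing in for PRN's reconstructed vertices and colours.
vertices = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
colors = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
triangles = np.array([[0, 1, 2]])
write_colored_obj('face.obj', vertices, colors, triangles)
```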

More (to be added)

  • 3D Pose Estimation

Rather than using only 68 key points to calculate the camera matrix (easily affected by expression and pose), we use all vertices (more than 40K) to calculate a more accurate pose.

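As a sketch of why dense correspondences help: the similarity transform between a set of reference vertices and the regressed ones has a closed-form least-squares solution (Umeyama/Kabsch), and averaging over 40K+ points makes it far less sensitive to any single noisy landmark. This is an illustrative implementation, not necessarily the repo's own pose estimator:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform with dst ≈ s * R @ src + t,
    solved in closed form via SVD (Umeyama/Kabsch)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    D = np.diag([1., 1., d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Toy check: recover a known rotation/scale/shift from noiseless points.
rng = np.random.default_rng(0)
src = rng.standard_normal((1000, 3))       # stands in for 40K+ vertices
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.],
                   [np.sin(angle),  np.cos(angle), 0.],
                   [0., 0., 1.]])
dst = 2.0 * src @ R_true.T + np.array([1., 2., 3.])
s, R, t = similarity_transform(src, dst)
print(round(s, 3))
```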

  • Depth image


  • Texture Editing

    • Data Augmentation/Selfie Editing

modify specific parts of the input face, for example the eyes:


    • Face Swapping

replace the texture with another, warp it to the original pose, and use Poisson editing to blend the images.

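Poisson (gradient-domain) editing keeps the pasted face's gradients while matching the target image at the seam. A tiny grayscale Jacobi-iteration sketch of the idea (production code would typically use something like OpenCV's seamlessClone instead):

```python
import numpy as np

def poisson_blend(src, dst, mask, iters=500):
    """Gradient-domain blending: solve a Poisson equation so the pasted
    region keeps src's gradients but dst's boundary values (Jacobi sweeps).
    Grayscale float images; mask is 1 inside the pasted region."""
    out = dst.astype(np.float64).copy()
    # Laplacian of the source acts as the guidance field.
    lap = (np.roll(src, 1, 0) + np.roll(src, -1, 0) +
           np.roll(src, 1, 1) + np.roll(src, -1, 1) - 4 * src)
    inside = mask > 0
    for _ in range(iters):
        neigh = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                 np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[inside] = (neigh[inside] - lap[inside]) / 4.0
    return out

# Toy example: paste a bright patch into a dark background seamlessly.
dst = np.zeros((32, 32))
src = np.zeros((32, 32))
src[12:20, 12:20] = 1.0
mask = np.zeros((32, 32))
mask[8:24, 8:24] = 1.0
blended = poisson_blend(src, dst, mask)
```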

Getting Started

Prerequisites

  • Python 2.7 (numpy, skimage, scipy)

  • TensorFlow >= 1.4

    Optional:

  • dlib (for face detection; you do not have to install it if you can provide bounding-box information)

  • opencv2 (for showing results)

A GPU is highly recommended. The run time is ~0.01 s with a GPU (GeForce GTX 1080) and ~0.2 s on a CPU (Intel(R) Xeon(R) E5-2640 v4 @ 2.40GHz).

Usage

  1. Clone the repository

     git clone https://github.com/YadiraF/PRNet
     cd PRNet

  2. Download the PRN trained model at BaiduDrive or GoogleDrive, and put it into Data/net-data

  3. Run the test code (on AFLW2000 images):

     python run_basics.py  # can run with only Python and TensorFlow

  4. Run with your own images:

     python demo.py -i <inputDir> -o <outputDir> --isDlib True

     Run python demo.py --help for more details.

  5. For the texture editing apps:

     python demo_texture.py -i image_path_1 -r image_path_2 -o output_path

     Run python demo_texture.py --help for more details.

Training

The core idea of the paper:

Use a position map to represent face geometry and alignment information, then learn this representation with an encoder-decoder network.

So the training steps are:

  1. Generate the position-map ground truth.

     An example of generating position maps for the 300W_LP dataset can be seen in generate_posmap_300WLP.

  2. Train an encoder-decoder network to learn the mapping from RGB image to position map.

     The weight mask can be found in the folder Data/uv-data.
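The loss behind step 2 is, in essence, an MSE over the position map scaled per UV pixel by the weight mask, so that landmark and inner-face pixels count more than, say, the neck. A numpy sketch with illustrative weights (not necessarily the paper's exact values):

```python
import numpy as np

# Random arrays stand in for the predicted and ground-truth position maps.
pred = np.random.rand(256, 256, 3)
gt = np.random.rand(256, 256, 3)

# Illustrative weight mask: 0 for ignored regions, larger values for
# regions the training should emphasize (e.g. landmarks, eyes, mouth).
weight_mask = np.random.choice([0., 3., 4., 16.], size=(256, 256))

# Weighted MSE over the position map.
loss = np.mean(((pred - gt) ** 2) * weight_mask[..., None])
print(loss)
```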

What you can customize:

  1. The UV space of the position map.

     You can change the parameterization method, or change the resolution of the UV space.

  2. The backbone of the encoder-decoder network.

     This demo uses residual blocks; VGG or MobileNet would also work.

  3. The weight mask.

     You can change the weights to emphasize whichever regions your project cares about most.

  4. The training data.

     If you have scanned 3D faces, it is better to train PRN on your own data. Before that, you may need to use ICP to align your face meshes.
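If the scans are not already in correspondence, point-to-point ICP alternates nearest-neighbour matching with a closed-form rigid update. A brute-force toy sketch (real meshes would want an accelerated nearest-neighbour search, e.g. a k-d tree):

```python
import numpy as np

def icp(src, dst, iters=20):
    """Toy point-to-point ICP: alternate brute-force nearest-neighbour
    matching with a closed-form rigid (Kabsch) alignment update."""
    src = src.copy()
    for _ in range(iters):
        # Nearest neighbour in dst for every src point (brute force).
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]
        # Closed-form rigid alignment of src onto its matches.
        mu_s, mu_m = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((matched - mu_m).T @ (src - mu_s))
        R = U @ np.diag([1., 1., np.sign(np.linalg.det(U @ Vt))]) @ Vt
        src = (src - mu_s) @ R.T + mu_m
    return src

rng = np.random.default_rng(1)
dst = rng.standard_normal((200, 3))
# src is dst slightly rotated and shifted; ICP should pull it back.
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
src = dst @ R.T + 0.05
aligned = icp(src, dst)
```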

FAQ

  1. How to speed it up?

    a. Network inference

    You can train a smaller network or use a smaller position map as input.

    b. Rendering

    You can refer to the C++ version.

    c. Other parts, such as face detection and writing the .obj file

    The best way is to rewrite them in C++.

  2. How to improve the precision?

    a. Geometry precision

    Due to the restriction of the training data, the faces reconstructed by this demo show little fine detail. You can train the network on your own detailed data, or do post-processing such as shape-from-shading to add detail.

    b. Texture precision

    An option to specify the texture size has been added. When the texture size is larger than the face region in the original image, rendering a new facial image with texture mapping produces less resampling error.
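The resampling error comes from interpolation: building the texture map means sampling the source image at fractional pixel positions, and when the texture has more texels than the face region has pixels, neighbouring texels interpolate between the same few source pixels. A generic bilinear-sampling sketch:

```python
import numpy as np

def bilinear_sample(image, x, y):
    """Sample a 2D image at float coordinates with bilinear interpolation."""
    x0 = np.clip(np.floor(x).astype(int), 0, image.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, image.shape[0] - 2)
    dx, dy = x - x0, y - y0
    return (image[y0, x0] * (1 - dx) * (1 - dy) +
            image[y0, x0 + 1] * dx * (1 - dy) +
            image[y0 + 1, x0] * (1 - dx) * dy +
            image[y0 + 1, x0 + 1] * dx * dy)

# Sampling between pixel centres averages the four neighbours, which is
# exactly where the resampling blur in an oversized texture comes from.
img = np.arange(16.0).reshape(4, 4)
val = bilinear_sample(img, np.array([1.5]), np.array([1.5]))
print(val)
```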

Changelog

  • 2018/7/19 added the training part; the resolution of the texture map can now be specified
  • 2018/5/10 added texture editing examples (for data augmentation and face swapping)
  • 2018/4/28 added visibility of vertices, .obj output with texture map, and depth image
  • 2018/4/26 output mesh can use the front view
  • 2018/3/28 added pose estimation
  • 2018/3/12 first release (3D reconstruction and dense alignment)

License

Code: under MIT license.

Trained model file: please see issue 28; thanks to Kyle McDonald for his answer.

Citation

If you use this code, please consider citing:

@inProceedings{feng2018prn,
  title     = {Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network},
  author    = {Yao Feng and Fan Wu and Xiaohu Shao and Yanfeng Wang and Xi Zhou},
  booktitle = {ECCV},
  year      = {2018}
}

Contacts

Please contact [email protected] or open an issue for any questions or suggestions.

Thanks! (●'◡'●)
