
anilbas / 3dmmasstn

Licence: Apache-2.0
MatConvNet implementation for incorporating a 3D Morphable Model (3DMM) into a Spatial Transformer Network (STN)

Programming Languages

matlab

Projects that are alternatives of or similar to 3dmmasstn

Bender
Easily craft fast Neural Networks on iOS! Use TensorFlow models. Metal under the hood.
Stars: ✭ 1,728 (+692.66%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Self Driving Car
Udacity Self-Driving Car Engineer Nanodegree projects.
Stars: ✭ 2,103 (+864.68%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Shainet
SHAInet - a pure Crystal machine learning library
Stars: ✭ 143 (-34.4%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Hyperdensenet
This repository contains the code of HyperDenseNet, a hyper-densely connected CNN to segment medical images in multi-modal image scenarios.
Stars: ✭ 124 (-43.12%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Vidaug
Effective Video Augmentation Techniques for Training Convolutional Neural Networks
Stars: ✭ 178 (-18.35%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Pytorch convlstm
convolutional lstm implementation in pytorch
Stars: ✭ 126 (-42.2%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Models Comparison.pytorch
Code for the paper Benchmark Analysis of Representative Deep Neural Network Architectures
Stars: ✭ 148 (-32.11%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Awslambdaface
Perform deep neural network based face detection and recognition in the cloud (via AWS lambda) with zero model configuration or tuning.
Stars: ✭ 98 (-55.05%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Iresnet
Improved Residual Networks (https://arxiv.org/pdf/2004.04989.pdf)
Stars: ✭ 163 (-25.23%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Tf Adnet Tracking
Deep Object Tracking Implementation in Tensorflow for 'Action-Decision Networks for Visual Tracking with Deep Reinforcement Learning(CVPR 2017)'
Stars: ✭ 162 (-25.69%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Lenet 5
PyTorch implementation of LeNet-5 with live visualization
Stars: ✭ 122 (-44.04%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Traffic Sign Detection
Traffic Sign Detection. Code for the paper entitled "Evaluation of deep neural networks for traffic sign detection systems".
Stars: ✭ 200 (-8.26%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Faceaging By Cyclegan
Stars: ✭ 105 (-51.83%)
Mutual labels:  deep-neural-networks, face
Deep Steganography
Hiding Images within other images using Deep Learning
Stars: ✭ 136 (-37.61%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Top Deep Learning
Top 200 deep learning Github repositories sorted by the number of stars.
Stars: ✭ 1,365 (+526.15%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Livianet
This repository contains the code of LiviaNET, a 3D fully convolutional neural network that was employed in our work: "3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study"
Stars: ✭ 143 (-34.4%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Pytorch Learners Tutorial
PyTorch tutorial for learners
Stars: ✭ 97 (-55.5%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Har Keras Cnn
Human Activity Recognition (HAR) with 1D Convolutional Neural Network in Python and Keras
Stars: ✭ 97 (-55.5%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Sign Language Interpreter Using Deep Learning
A sign language interpreter using live video feed from the camera.
Stars: ✭ 157 (-27.98%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Hdltex
HDLTex: Hierarchical Deep Learning for Text Classification
Stars: ✭ 191 (-12.39%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks

3D Morphable Models as Spatial Transformer Networks

Update: A simple gradient descent method has been added to show how the layers work. Please see demo.m.

This page shows how to use a 3D morphable model as a spatial transformer within a convolutional neural network (CNN). It is an extension of the original spatial transformer network in that we are able to interpret and normalise 3D pose changes and self-occlusions. The network (specifically, the localiser part of the network) learns to fit a 3D morphable model to a single 2D image without needing labelled examples of fitted models.

[Figure: a set of mean flattened images obtained by applying the 3DMM-STN to multiple images of the same person from the UMDFaces Dataset. Subjects, with the number of images used for averaging: Elon Musk (34), Christian Bale (51), Elisha Cuthbert (53), Clint Eastwood (62), Emma Watson (73), Chuck Palahniuk (48), Nelson Mandela (52), Kim Jong-un (60), Ben Affleck (66), Courteney Cox (127).]

The proposed architecture is based on a purely geometric approach in which only the shape component of a 3DMM is used to geometrically normalise an image. Our method can be trained in an unsupervised fashion, and thus does not depend on synthetic training data or the fitting results of an existing algorithm.

In contrast to all previous 3DMM fitting networks, the output of our 3DMM-STN is a 2D resampling of the original image that retains all of the high-frequency, discriminating detail in a face, rather than a model-based reconstruction, which captures only the gross, low-frequency aspects of appearance that a 3DMM can explain.

Citation

Please cite the following paper if you use this work in your research:

A. Bas, P. Huber, W.A.P. Smith, M. Awais and J. Kittler. "3D Morphable Models as Spatial Transformer Networks". In Proc. ICCV Workshop on Geometry Meets Deep Learning, pp. 904-912, 2017.

Usage & Training

We train our network using the MatConvNet library. Please refer to the MatConvNet installation page for instructions.

Before training, you first need to create the resampled expression model. This requires (1) the Basel Face Model, 01_MorphableModel.mat, and (2) the 3DDFA Expression Model, Model_Expression.mat. Set the paths accordingly and run the prepareExpressionBFM function in the prepareModel folder to build the resampled model.

Finally, run the dagnn_3dmmasstn.m script to start the training.
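In practice, the workflow might look like the sketch below. The relative MatConvNet path and the call signature of prepareExpressionBFM are assumptions; check the function header in the prepareModel folder before running.

```matlab
% Assumed workflow sketch; paths and call signatures are illustrative.
run matconvnet/matlab/vl_setupnn.m    % adjust to your MatConvNet checkout
addpath('prepareModel', 'util');

% Edit the model paths inside prepareExpressionBFM so that they point at
% 01_MorphableModel.mat (Basel Face Model) and Model_Expression.mat (3DDFA),
% then build the resampled expression model:
prepareExpressionBFM;

% Start training:
dagnn_3dmmasstn;
```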

[Figures: Overview of the 3DMM-STN; the grid generator network within a 3DMM-STN.]

Localiser Network

The localiser network is a CNN that takes an image as input and regresses the pose and shape parameters θ = (r, t, log s, α). For our localiser network, we use the pre-trained VGG-Faces architecture, delete the classification layer and add a new fully connected layer with 6 + D outputs. The pre-trained models can be downloaded from the MatConvNet model repository.
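A minimal sketch of this modification, assuming the SimpleNN-format vgg-face.mat from the model repository (the layer indices, initialisation and the value of D are illustrative, not the repository's actual code):

```matlab
% Hedged sketch: adapt VGG-Faces as the localiser network.
D = 10;                                      % number of shape parameters (example)
net = load('vgg-face.mat');                  % SimpleNN-format pre-trained model
net = vl_simplenn_tidy(net);
net.layers = net.layers(1:end-2);            % drop the softmax and old classifier
theta = 6 + D;                               % r (3) + t (2) + log s (1) + alpha (D)
net.layers{end+1} = struct( ...
    'type', 'conv', 'name', 'theta', ...
    'weights', {{0.01*randn(1,1,4096,theta,'single'), zeros(1,theta,'single')}}, ...
    'stride', 1, 'pad', 0);                  % new fully connected layer (1x1 conv)
```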

Grid Generator Network

Our grid generator combines a linear statistical model with a scaled orthographic projection. We apply a 3D transformation and projection to a 3D mesh that comes from the morphable model. The intensities sampled from the source image are then assigned to the corresponding points in a flattened 2D grid.
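Concretely, the generated sample points can be written as Y = s · P · R · (x̄ + Φα) + t, where x̄ is the mean shape, Φ is the principal component matrix and P is the orthographic projection along the z axis. A minimal MATLAB sketch with assumed variable names:

```matlab
% Sketch of the grid generator forward pass (variable names are assumptions).
% shapeMU: 3N x 1 mean shape, shapePC: 3N x D principal components,
% alpha: D x 1 shape parameters, R: 3 x 3 rotation, logS: scalar, t: 2 x 1.
function Y = gridGeneratorForward(alpha, R, logS, t, shapeMU, shapePC)
X  = reshape(shapeMU + shapePC * alpha, 3, []);  % linear shape model
Xr = R * X;                                      % 3D rotation
Yp = Xr(1:2, :);                                 % orthographic projection: drop z
Y  = exp(logS) * Yp + t;                         % scale and translate (implicit expansion)
end
```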

UV texture space embedding for Basel Face Model

The output of our 3DMM-STN is a resampled image in a flattened 2D texture space in which the images are in dense, pixel-wise correspondence. In other words, the output grid is a texture-space flattening of the 3DMM mesh. Specifically, we compute a Tutte embedding using conformal Laplacian weights, with the mesh boundary mapped to a square. To ensure a symmetric embedding, we map the facial symmetry line to the symmetry line of the square, flatten only one half of the mesh and obtain the flattening of the other half by reflection.

You can find the UV coordinates in the BFM_UV.mat file in the util folder.
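To take a quick look at the embedding, something like the following should work (the variable name inside BFM_UV.mat is an assumption; inspect the loaded struct if it differs):

```matlab
% Visualise the per-vertex UV coordinates (assumed field name 'UV').
S  = load(fullfile('util', 'BFM_UV.mat'));
uv = S.UV;                                  % N x 2 UV coordinates, one per vertex
plot(uv(:,1), uv(:,2), '.', 'MarkerSize', 1);
axis equal tight
title('Tutte embedding of the BFM mesh in UV space');
```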

[Figures: The output grid visualisation using the mean texture; the mean shape as a geometry image.]

Customised Layers

In this section, we summarise our customised layers and loss functions; a sketch of the axis-angle conversion follows the list. Please refer to the paper for more details.

  • 3D morphable model layer generates a shape X comprising N 3D vertices by adding a linear combination of the principal components, weighted by the shape parameters α, to the mean shape.
  • Axis-angle to rotation matrix layer converts an axis-angle representation of a rotation, r, into a rotation matrix R.
  • 3D rotation layer takes as input a rotation matrix R and N 3D points X, and applies the rotation.
  • Orthographic projection layer takes as input a set of N 3D points X' and outputs N 2D points Y by applying an orthographic projection along the z axis.
  • Scaling layer scales the 2D points Y by the scale s, which is obtained by exponentiating the estimated log scale log s.
  • Translation layer generates the 2D sample points by adding a 2D translation t to each of the scaled points.
  • Grid layer takes as input the 2×N sample points and produces a 2×H′W′ grid using the re-sampled 3DMM, which has N = H′W′ vertices, each with an associated UV coordinate. To see how the re-sampled model is computed over a uniform grid in UV space, please refer to the resampleModel function and the sampling section of the paper.
  • Bilinear sampler layer is exactly as in the original STN.
  • Visibility (self-occlusions) layer takes as input the rotation matrix R and the shape parameters α and outputs a binary occlusion mask M.
  • Masking layer combines the sampled image and the visibility map via pixel-wise products.
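As a concrete example, the axis-angle to rotation matrix layer implements Rodrigues' rotation formula; a minimal forward-pass sketch (the repository's layer additionally needs a backward pass):

```matlab
% Rodrigues' formula: axis-angle vector r -> 3 x 3 rotation matrix R.
function R = axisAngleToRotation(r)
theta = norm(r);
if theta < eps
    R = eye(3);                       % no rotation
    return;
end
k = r / theta;                        % unit rotation axis
K = [   0   -k(3)  k(2);
      k(3)    0   -k(1);
     -k(2)  k(1)    0  ];             % skew-symmetric cross-product matrix
R = eye(3) + sin(theta)*K + (1 - cos(theta))*(K*K);
end
```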

Geometric Loss Functions

  • Bilateral symmetry loss measures the asymmetry of the sampled face texture over visible pixels (a minimal sketch follows this list).
  • Siamese multi-view fitting loss penalises differences between multiple images of the same face in different poses.
  • Landmark loss minimises the Euclidean distance between observed and predicted 2D points.
  • Statistical prior loss regularises the shape parameters using the statistical shape prior (we scale the shape basis vectors such that the shape parameters follow a standard multivariate normal distribution).
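As an illustration, here is a hedged sketch of the bilateral symmetry loss on the flattened texture, assuming the embedding is symmetric about the vertical midline (T is the H×W×3 sampled texture, M the H×W visibility mask; not the repository's loss code):

```matlab
% Mean squared difference between the texture and its mirror image,
% restricted to pixels visible in both the original and the mirrored mask.
function loss = bilateralSymmetryLoss(T, M)
Tf = flip(T, 2);                          % horizontally mirrored texture
Mf = flip(M, 2);                          % mirrored visibility mask
V  = (M > 0) & (Mf > 0);                  % pixels visible on both sides
E  = (T - Tf).^2;                         % per-channel squared differences
loss = sum(E(:) .* repmat(V(:), 3, 1)) / max(sum(V(:)), 1);
end
```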
