wtjiang98 / Psgan

License: MIT
PyTorch code for "PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer" (CVPR 2020 Oral)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Psgan

MNIST-invert-color
Invert the color of MNIST images with PyTorch
Stars: ✭ 13 (-95.91%)
Mutual labels:  generative-adversarial-network, gan
Pytorch Srgan
A modern PyTorch implementation of SRGAN
Stars: ✭ 289 (-9.12%)
Mutual labels:  gan, generative-adversarial-network
TextBoxGAN
Generate text boxes from input words with a GAN.
Stars: ✭ 50 (-84.28%)
Mutual labels:  generative-adversarial-network, gan
Faceswap Gan
A denoising autoencoder + adversarial losses and attention mechanisms for face swapping.
Stars: ✭ 3,099 (+874.53%)
Mutual labels:  gan, generative-adversarial-network
DLSS
Deep Learning Super Sampling with Deep Convolutional Generative Adversarial Networks.
Stars: ✭ 88 (-72.33%)
Mutual labels:  generative-adversarial-network, gan
AvatarGAN
Generate Cartoon Images using Generative Adversarial Network
Stars: ✭ 24 (-92.45%)
Mutual labels:  generative-adversarial-network, gan
Deep Generative Prior
Code for deep generative prior (ECCV2020 oral)
Stars: ✭ 308 (-3.14%)
Mutual labels:  gan, generative-adversarial-network
GAN-auto-write
Generative Adversarial Network that learns to generate handwritten digits. (Learning Purposes)
Stars: ✭ 18 (-94.34%)
Mutual labels:  generative-adversarial-network, gan
ezgan
An extremely simple generative adversarial network, built with TensorFlow
Stars: ✭ 36 (-88.68%)
Mutual labels:  generative-adversarial-network, gan
keras-3dgan
Keras implementation of 3D Generative Adversarial Network.
Stars: ✭ 20 (-93.71%)
Mutual labels:  generative-adversarial-network, gan
Dcgan
The Simplest DCGAN Implementation
Stars: ✭ 286 (-10.06%)
Mutual labels:  gan, generative-adversarial-network
Few Shot Patch Based Training
The official implementation of our SIGGRAPH 2020 paper Interactive Video Stylization Using Few-Shot Patch-Based Training
Stars: ✭ 313 (-1.57%)
Mutual labels:  gan, generative-adversarial-network
Deep-Exemplar-based-Video-Colorization
The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".
Stars: ✭ 180 (-43.4%)
Mutual labels:  generative-adversarial-network, gan
ADL2019
Applied Deep Learning (2019 Spring) @ NTU
Stars: ✭ 20 (-93.71%)
Mutual labels:  generative-adversarial-network, gan
steam-stylegan2
Train a StyleGAN2 model on Colaboratory to generate Steam banners.
Stars: ✭ 30 (-90.57%)
Mutual labels:  generative-adversarial-network, gan
Alae
[CVPR2020] Adversarial Latent Autoencoders
Stars: ✭ 3,178 (+899.37%)
Mutual labels:  gan, generative-adversarial-network
StyleGANCpp
Unofficial implementation of StyleGAN's generator
Stars: ✭ 25 (-92.14%)
Mutual labels:  generative-adversarial-network, gan
AdvSegLoss
Official Pytorch implementation of Adversarial Segmentation Loss for Sketch Colorization [ICIP 2021]
Stars: ✭ 24 (-92.45%)
Mutual labels:  generative-adversarial-network, gan
DeepFlow
Pytorch implementation of "DeepFlow: History Matching in the Space of Deep Generative Models"
Stars: ✭ 24 (-92.45%)
Mutual labels:  generative-adversarial-network, gan
UEGAN
[TIP2020] Pytorch implementation of "Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network"
Stars: ✭ 68 (-78.62%)
Mutual labels:  generative-adversarial-network, gan

PSGAN

Code for our CVPR 2020 oral paper "PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer".

Contributed by Wentao Jiang, Si Liu, Chen Gao, Jie Cao, Ran He, Jiashi Feng, Shuicheng Yan.

This code was further modified by Zhaoyi Wan.

In addition to the original algorithm, we added high-resolution face support using Laplace transformation.
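The idea behind the high-resolution support can be sketched with a Laplacian pyramid: makeup is transferred on a downsampled face, and the stored high-frequency detail bands are added back afterwards. Below is a minimal NumPy illustration; the pyramid depth and the simple 2x average/nearest-neighbour resampling are our assumptions for brevity, not the repository's exact implementation.

```python
import numpy as np

def down(img):
    # 2x downsample by average pooling (assumed resampling scheme)
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):
    # 2x nearest-neighbour upsample
    return img.repeat(2, axis=0).repeat(2, axis=1)

def build_laplacian(img, levels=3):
    # Each level stores the detail lost by one down/up round trip.
    pyramid = []
    cur = img
    for _ in range(levels):
        low = down(cur)
        pyramid.append(cur - up(low))   # high-frequency detail band
        cur = low
    pyramid.append(cur)                 # low-resolution base image
    return pyramid

def reconstruct(pyramid):
    # Add the detail bands back, coarsest to finest.
    cur = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        cur = up(cur) + band
    return cur

face = np.random.rand(64, 64)
pyr = build_laplacian(face)
# The low-res base pyr[-1] is what a makeup network would process;
# the bands restore the high-resolution detail afterwards.
restored = reconstruct(pyr)
assert np.allclose(restored, face)
```

With this exact up/down pair the round trip is lossless, which is why detail survives even though the network only ever sees the low-resolution base.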

Checklist

  • [x] more results
  • [ ] video demos
  • [ ] partial makeup transfer example
  • [ ] interpolated makeup transfer example
  • [x] inference on GPU
  • [x] training code

Requirements

The code was tested on Ubuntu 16.04, with Python 3.6 and PyTorch 1.5.

For face parsing and landmark detection, we use dlib for its speed.

If you are using a GPU for inference, make sure your dlib build has GPU support.
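A quick way to check this is dlib's compile-time `DLIB_USE_CUDA` flag. The helper below (the function name is ours, not part of the repository) treats a missing dlib install as "no GPU support" so it runs anywhere:

```python
def dlib_has_cuda():
    """Return True only if dlib is installed and was compiled with CUDA."""
    try:
        import dlib
    except ImportError:
        return False  # dlib is not installed at all
    # DLIB_USE_CUDA is set at compile time by dlib's build system.
    return bool(getattr(dlib, "DLIB_USE_CUDA", False))

print(dlib_has_cuda())
```

If this prints `False` on a CUDA machine, dlib was likely installed from a CPU-only wheel and needs to be rebuilt from source with CUDA enabled.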

Our newly collected Makeup-Wild dataset

  1. Download the Makeup-Wild (MT-Wild) dataset here

Test

Run python3 demo.py for CPU inference, or python3 demo.py --device cuda for GPU inference.

Train

  1. Download training data from link_1 or link_2, and move it to a subdirectory named data. (For BaiduYun users, you can download the data here. Password: rtdd)

Your data directory should look like:

data
├── images
│   ├── makeup
│   └── non-makeup
├── landmarks
│   ├── makeup
│   └── non-makeup
├── makeup.txt
├── non-makeup.txt
├── segs
│   ├── makeup
│   └── non-makeup
  2. Run python3 train.py
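As a sanity check before training, you can verify that the directory tree above is in place. The helper below is a convenience of ours, not shipped with the repository; it lists whatever is missing:

```python
import os
import tempfile

# Expected entries, taken from the data tree shown above.
EXPECTED = [
    "images/makeup", "images/non-makeup",
    "landmarks/makeup", "landmarks/non-makeup",
    "segs/makeup", "segs/non-makeup",
    "makeup.txt", "non-makeup.txt",
]

def missing_entries(root):
    # Return the expected paths that do not exist under root.
    return [p for p in EXPECTED if not os.path.exists(os.path.join(root, p))]

# Demonstrate on a throwaway directory that mimics the layout.
with tempfile.TemporaryDirectory() as root:
    for sub in EXPECTED:
        if sub.endswith(".txt"):
            open(os.path.join(root, sub), "w").close()
        else:
            os.makedirs(os.path.join(root, sub))
    print(missing_entries(root))  # [] when the layout is complete
```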

Detailed configurations can be found and modified in configs/base.yaml; command-line overrides are also supported.
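Command-line overrides of a YAML config are typically implemented by merging dotted `KEY VALUE` pairs into the loaded config. A minimal sketch of that pattern (the keys shown are hypothetical, not the actual contents of configs/base.yaml):

```python
def apply_overrides(cfg, pairs):
    """Merge a flat list like ["TRAIN.LR", "0.0002"] into a nested config dict."""
    for key, value in zip(pairs[::2], pairs[1::2]):
        node = cfg
        *parents, leaf = key.split(".")
        for part in parents:
            node = node.setdefault(part, {})
        try:
            # Coerce the string to the existing value's type when one is known.
            value = type(node.get(leaf, value))(value)
        except (TypeError, ValueError):
            pass  # leave the override as a string
        node[leaf] = value
    return cfg

# Hypothetical config values, for illustration only.
cfg = {"TRAIN": {"LR": 0.001, "EPOCHS": 100}}
apply_overrides(cfg, ["TRAIN.LR", "0.0002"])
print(cfg["TRAIN"]["LR"])  # 0.0002
```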

*Note:* Although multi-GPU training is supported, due to limitations of PyTorch's data parallelism and GPU memory cost, the number of GPUs and the batch size should be equal.

More Results

MT-Dataset (frontal face images with neutral expression)

MWild-Dataset (images with different poses and expressions)

Video Makeup Transfer (by simply applying PSGAN on each frame)

Citation

Please consider citing this project in your publications if it helps your research. The following is a BibTeX reference. The BibTeX entry requires the url LaTeX package.

@InProceedings{Jiang_2020_CVPR,
  author = {Jiang, Wentao and Liu, Si and Gao, Chen and Cao, Jie and He, Ran and Feng, Jiashi and Yan, Shuicheng},
  title = {PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2020}
}

Acknowledgements

Parts of the code are built upon face-parsing.PyTorch and BeautyGAN.

You are encouraged to submit issues and contribute pull requests.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].