License: MIT


ELEGANT: Exchanging Latent Encodings with GAN for Transferring Multiple Face Attributes

Taihong Xiao, Jiapeng Hong and Jinwen Ma

Please cite our paper if you find it useful in your research.

@InProceedings{Xiao_2018_ECCV,
    author = {Xiao, Taihong and Hong, Jiapeng and Ma, Jinwen},
    title = {ELEGANT: Exchanging Latent Encodings with GAN for Transferring Multiple Face Attributes},
    booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
    pages = {172--187},
    month = {September},
    year = {2018}
}

Introduction

This repo is the PyTorch implementation of our paper. ELEGANT is a novel model for transferring multiple face attributes by exchanging latent encodings. The model framework is shown below.

Figure: The ELEGANT Model Framework
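The core idea of exchanging latent encodings can be sketched as follows. This is a simplified NumPy illustration of the exchange operation, not the actual model code: the encoder output is split into parts, one per attribute, and the parts for the chosen attribute are swapped between two images before decoding.

```python
import numpy as np

def split_latent(z, n_attrs):
    # Split a flat latent vector into equal parts, one per attribute.
    return np.split(z, n_attrs)

def exchange(zA, zB, attr_id, n_attrs):
    """Swap the attr_id-th part of two latent encodings (illustrative only)."""
    pa, pb = split_latent(zA, n_attrs), split_latent(zB, n_attrs)
    pa[attr_id], pb[attr_id] = pb[attr_id].copy(), pa[attr_id].copy()
    return np.concatenate(pa), np.concatenate(pb)

zA = np.zeros(8)          # image A: has neither attribute
zB = np.ones(8)           # image B: has both attributes
zA2, zB2 = exchange(zA, zB, attr_id=1, n_attrs=2)
print(zA2)                # second half of A's encoding now comes from B
```

In the real model, decoding the exchanged encodings produces the attribute-transferred images.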

If you want to train or test the model on your own images, please do the facial landmark alignment first. We preprocess the whole dataset using 5-point alignment here. However, you can use any other alignment algorithm, as long as it is consistent between the training and testing phases.
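For readers unfamiliar with landmark alignment, the following is a minimal sketch of 5-point similarity alignment. The actual preprocessing lives in preprocess.py; the landmark template and detected points below are made up for illustration.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale + rotation + translation)
    mapping src points onto dst points (Umeyama method)."""
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t[:, None]])   # 2x3 affine matrix

# Hypothetical 5-point template (eyes, nose, mouth corners) in a 256x256 crop.
template = np.array([[89, 110], [167, 110], [128, 152], [96, 198], [160, 198]], float)
detected = template * 1.5 + np.array([10.0, 20.0])   # pretend detector output
M = similarity_transform(detected, template)
aligned = (M[:, :2] @ detected.T).T + M[:, 2]        # lands on the template
```

Applying the resulting 2x3 matrix to the whole image (e.g. with an affine warp) produces the aligned crop.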

Requirements

Training on CelebA dataset

  1. Download the CelebA dataset and unzip it into the datasets directory. There are various source providers of the CelebA dataset. To check that the downloaded images have the correct size, run identify datasets/celebA/data/000001.jpg. The size should be 409 x 687 if you are using the same dataset. Also, make sure you have the following directory tree structure in your repo.
├── datasets
│   └── celebA
│       ├── data
│       ├── images.list
│       ├── list_attr_celeba.txt
│       └── list_landmarks_celeba.txt
  2. Run python preprocess.py. It takes only a few minutes to preprocess all images. A new directory datasets/celebA/align_5p will be created.

  3. Run python ELEGANT.py -m train -a Bangs Mustache -g 0 to train ELEGANT on the two attributes Bangs and Mustache simultaneously. You can experiment with other attributes as well; refer to list_attr_celeba.txt for all available attributes. If training ELEGANT with more than one GPU card, you can increase the batch size accordingly; it is the first number of nchw in dataset.py.

  4. Run tensorboard --logdir=./train_log/log --port=6006 to watch the training process. You can use tag matching to inspect one group of images. For example, if you type 0_04 in the image tag-matching box, a group of 10 images is displayed together: two original images, four residual images, and four generated images. In the notation 0_04, 0 indicates the first attribute and 04 indicates the 4th group.
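The batch-size note in the training step can be illustrated as follows. The variable name nchw comes from dataset.py, but the base values here are made up:

```python
# nchw = [batch, channels, height, width]; the first number is the batch size.
base_nchw = [16, 3, 256, 256]   # hypothetical single-GPU setting

def scale_batch(nchw, n_gpus):
    """Scale the batch dimension with the number of GPUs (illustrative)."""
    n, c, h, w = nchw
    return [n * n_gpus, c, h, w]

print(scale_batch(base_nchw, 2))  # -> [32, 3, 256, 256]
```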

Testing

We provide four testing modes. The parameters for testing are explained below.

  • -a: All attributes' names.
  • -r: Restore checkpoint.
  • -g: The GPU id(s) for testing.
    • Omit this parameter if you don't want to use a GPU for testing.
    • Specify at most one GPU during testing, because a single image cannot be split across multiple GPUs.
  • --swap: Swap an attribute between two images.
  • --linear: Linear interpolation by adding or removing a certain attribute.
  • --matrix: Matrix interpolation with respect to one or two attributes.
  • --swap_list: The attribute id(s) for testing.
    • For example, --swap_list 0 indicates the first attribute.
    • Two integers are required only for interpolation with respect to two attributes.
    • In all other cases, a single integer is required.
  • --input: Path of the input image you want to transfer.
  • --target: Path(s) of the target image(s) used for reference.
    • Only one target image is needed in the --swap and --linear mode.
    • Three target images are needed in the --matrix mode with respect to one attribute.
    • Two target images are required in the --matrix mode with respect to two attributes.
  • -s: The output size for interpolation.
    • One integer is needed in the --linear mode.
    • Two integers are required for the --matrix mode.
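The flags above could be parsed with an argparse setup along these lines. This is a hypothetical reconstruction for illustration, not the actual code in ELEGANT.py; the long option names and defaults are assumptions.

```python
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="ELEGANT testing options (sketch)")
    p.add_argument("-m", "--mode", choices=["train", "test"], required=True)
    p.add_argument("-a", "--attributes", nargs="+", required=True,
                   help="all attribute names, e.g. Bangs Mustache")
    p.add_argument("-r", "--restore", type=int, help="checkpoint step to restore")
    p.add_argument("-g", "--gpu", nargs="*", help="GPU id(s); omit to run on CPU")
    p.add_argument("--swap", action="store_true", help="swap one attribute")
    p.add_argument("--linear", action="store_true", help="linear interpolation")
    p.add_argument("--matrix", action="store_true", help="matrix interpolation")
    p.add_argument("--swap_list", nargs="+", type=int, default=[0],
                   help="attribute id(s) for testing")
    p.add_argument("--input", help="input image path")
    p.add_argument("--target", nargs="+", help="target image path(s)")
    p.add_argument("-s", "--size", nargs="+", type=int, help="interpolation size")
    return p

# Parse a sample command line matching the swap example below.
args = build_parser().parse_args(
    "-m test -a Bangs Mustache -r 34000 --swap --swap_list 1".split())
print(args.swap_list)  # -> [1]
```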

1. Swap Attribute

We can swap the Mustache attribute of two images. Here --swap_list 1 indicates that the second attribute should be swapped, and -r 34000 means restoring the trained model at step 34000. You can choose the best model by inspecting the quality of generated images in TensorBoard or in the directory train_log/img/.

python ELEGANT.py -m test -a Bangs Mustache -r 34000 --swap --swap_list 1 --input ./images/goodfellow_aligned.png --target ./images/bengio_aligned.png
Figure: Swap Mustache

2. Linear Interpolation

We can see the linear interpolation results of adding a mustache to Bengio by running the following command. -s 4 indicates the number of intermediate images.

python ELEGANT.py -m test -a Bangs Mustache -r 34000 --linear --swap_list 1 --input ./images/bengio_aligned.png --target ./images/goodfellow_aligned.png -s 4
Figure: Linear Interpolation on Mustache
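Conceptually, the linear mode blends between the input's and the target's encodings for one attribute and decodes each intermediate code. A toy NumPy sketch of the interpolation schedule (not the model's actual decoder):

```python
import numpy as np

def linear_interp(z_input, z_target, steps):
    """Return `steps` intermediate latent codes between input and target
    for one attribute's encoding (illustrative only)."""
    alphas = np.linspace(0.0, 1.0, steps + 2)[1:-1]  # exclude the two endpoints
    return [(1 - a) * z_input + a * z_target for a in alphas]

zA, zB = np.zeros(4), np.ones(4)   # toy attribute encodings
mids = linear_interp(zA, zB, steps=4)
print(len(mids))  # -> 4
```

With -s 4, the four intermediate images correspond to four evenly spaced blending weights between the two endpoint encodings.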

3. Matrix Interpolation with Respect to One Attribute

We can also add different kinds of bangs to a single person. Here, --swap_list 0 indicates that we are dealing with the first attribute, and three target images are provided for reference.

python ELEGANT.py -m test -a Bangs Mustache -r 34000 --matrix --swap_list 0 --input ./images/ng_aligned.png --target ./images/bengio_aligned.png ./images/goodfellow_aligned.png ./images/jian_sun_aligned.png -s 4 4
Figure: Matrix Interpolation on Different Bangs

4. Matrix Interpolation with Respect to Two Attributes

We can transfer two attributes simultaneously by running the following command.

python ELEGANT.py -m test -a Bangs Mustache -r 34000 --matrix --swap_list 0 1 --input ./images/lecun_aligned.png --target ./images/bengio_aligned.png ./images/goodfellow_aligned.png -s 4 4

The original image gradually acquires the first attribute, Bangs, in the vertical direction and the second attribute, Mustache, in the horizontal direction.

Figure: Matrix Interpolation on Bangs and Mustache
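The two-attribute grid can be thought of as bilinear blending over two attribute directions in latent space. A toy NumPy sketch of how such a grid of codes could be laid out (illustrative only, not the actual model code):

```python
import numpy as np

def matrix_interp(z, dz_first, dz_second, rows, cols):
    """Build a rows x cols grid of latent codes, adding the first attribute
    direction vertically and the second horizontally (illustrative only)."""
    grid = []
    for i in range(rows):
        row = []
        for j in range(cols):
            a = i / (rows - 1)   # strength of the first attribute (e.g. Bangs)
            b = j / (cols - 1)   # strength of the second (e.g. Mustache)
            row.append(z + a * dz_first + b * dz_second)
        grid.append(row)
    return grid

z = np.zeros(2)                              # toy input encoding
grid = matrix_interp(z, np.array([1.0, 0.0]),
                     np.array([0.0, 1.0]), 4, 4)
# grid[3][0]: full first attribute only; grid[0][3]: full second attribute only
```

Decoding each cell of such a grid yields the 4 x 4 image matrix, with the bottom-right corner carrying both attributes at full strength.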
