
ParthaEth / Gif

License: MIT
GIF is a photorealistic generative face model with explicit 3D geometric and photometric control.

Programming Languages

python

Projects that are alternatives of or similar to Gif

DLSS
Deep Learning Super Sampling with Deep Convolutional Generative Adversarial Networks.
Stars: ✭ 88 (-62.23%)
Mutual labels:  generative-adversarial-network, gan, gans
Gans In Action
Companion repository to GANs in Action: Deep learning with Generative Adversarial Networks
Stars: ✭ 748 (+221.03%)
Mutual labels:  gan, generative-adversarial-network, gans
Faceswap Gan
A denoising autoencoder + adversarial losses and attention mechanisms for face swapping.
Stars: ✭ 3,099 (+1230.04%)
Mutual labels:  gan, generative-adversarial-network, gans
Sdv
Synthetic Data Generation for tabular, relational and time series data.
Stars: ✭ 360 (+54.51%)
Mutual labels:  gan, generative-adversarial-network, gans
Tagan
An official PyTorch implementation of the paper "Text-Adaptive Generative Adversarial Networks: Manipulating Images with Natural Language", NeurIPS 2018
Stars: ✭ 97 (-58.37%)
Mutual labels:  gan, generative-adversarial-network, gans
Pytorch Cyclegan And Pix2pix
Image-to-Image Translation in PyTorch
Stars: ✭ 16,477 (+6971.67%)
Mutual labels:  gan, generative-adversarial-network, gans
Anycost Gan
[CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
Stars: ✭ 367 (+57.51%)
Mutual labels:  gan, generative-adversarial-network, gans
AvatarGAN
Generate Cartoon Images using Generative Adversarial Network
Stars: ✭ 24 (-89.7%)
Mutual labels:  generative-adversarial-network, gan, gans
Iseebetter
iSeeBetter: Spatio-Temporal Video Super Resolution using Recurrent-Generative Back-Projection Networks | Python3 | PyTorch | GANs | CNNs | ResNets | RNNs | Published in Springer Journal of Computational Visual Media, September 2020, Tsinghua University Press
Stars: ✭ 202 (-13.3%)
Mutual labels:  gan, generative-adversarial-network, gans
Doppelganger
[IMC 2020 (Best Paper Finalist)] Using GANs for Sharing Networked Time Series Data: Challenges, Initial Promise, and Open Questions
Stars: ✭ 97 (-58.37%)
Mutual labels:  gan, generative-adversarial-network, gans
Pacgan
[NeurIPS 2018] [JSAIT] PacGAN: The power of two samples in generative adversarial networks
Stars: ✭ 67 (-71.24%)
Mutual labels:  gan, generative-adversarial-network, gans
Generative adversarial networks 101
Keras implementations of Generative Adversarial Networks. GANs, DCGAN, CGAN, CCGAN, WGAN and LSGAN models with MNIST and CIFAR-10 datasets.
Stars: ✭ 138 (-40.77%)
Mutual labels:  gan, generative-adversarial-network, gans
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+4592.27%)
Mutual labels:  gan, generative-adversarial-network, gans
A Pytorch Tutorial To Super Resolution
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network | a PyTorch Tutorial to Super-Resolution
Stars: ✭ 157 (-32.62%)
Mutual labels:  gan, generative-adversarial-network, gans
Gan steerability
On the "steerability" of generative adversarial networks
Stars: ✭ 225 (-3.43%)
Mutual labels:  gan, generative-adversarial-network
Facegan
TF implementation of our ECCV 2018 paper: Semi-supervised Adversarial Learning to Generate Photorealistic Face Images of New Identities from 3D Morphable Model
Stars: ✭ 176 (-24.46%)
Mutual labels:  gan, generative-adversarial-network
Ranksrgan
ICCV 2019 (oral) RankSRGAN: Generative Adversarial Networks with Ranker for Image Super-Resolution. PyTorch implementation
Stars: ✭ 213 (-8.58%)
Mutual labels:  gan, generative-adversarial-network
Image generator
DCGAN image generator 🖼️.
Stars: ✭ 173 (-25.75%)
Mutual labels:  gan, generative-adversarial-network
Gan2shape
Code for GAN2Shape (ICLR2021 oral)
Stars: ✭ 183 (-21.46%)
Mutual labels:  gan, generative-adversarial-network
Cocosnet
Cross-domain Correspondence Learning for Exemplar-based Image Translation. (CVPR 2020 Oral)
Stars: ✭ 211 (-9.44%)
Mutual labels:  generative-adversarial-network, gans

GIF: Generative Interpretable Faces

This is the official implementation for the paper GIF: Generative Interpretable Faces. GIF is a photorealistic generative face model with explicit control over 3D geometry (parametrized like FLAME), appearance, and lighting.

  • Keywords: Generative Interpretable Faces, conditional generative models, 3D conditioning of GANs, explicit 3D control of photorealistic faces.

Important links

Watch a brief presentation

Watch a presentation

Citation

If you find our work useful in your project, please cite us as:

@inproceedings{GIF2020,
    title = {{GIF}: Generative Interpretable Faces},
    author = {Ghosh, Partha and Gupta, Pravir Singh and Uziel, Roy and Ranjan, Anurag and Black, Michael J. and Bolkart, Timo},
    booktitle = {International Conference on 3D Vision (3DV)},
    year = {2020},
    url = {http://gif.is.tue.mpg.de/}
}

Installation

  • python3 -m venv ~/.venv/gif
  • source ~/.venv/gif/bin/activate
  • pip install -r requirements.txt
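
Before moving on, a quick environment check can save time. The sketch below assumes PyTorch is among the pinned requirements (the training section later relies on CUDA-capable GPUs); that is an assumption about requirements.txt, not something stated above.

    # quick_env_check.py -- minimal sanity check for the GIF virtual environment.
    # Assumes PyTorch is installed via requirements.txt; adjust if your setup differs.
    import sys

    try:
        import torch
    except ImportError:
        sys.exit("PyTorch not found -- did you activate ~/.venv/gif and run pip install -r requirements.txt?")

    print(f"Python {sys.version.split()[0]}, torch {torch.__version__}")
    print(f"CUDA available: {torch.cuda.is_available()}, visible GPUs: {torch.cuda.device_count()}")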

First things first

Before running any program you will need to download a few resource files and create a suitable directory for the training artifacts to be stored.

  1. You can use this link to download the input files necessary to train GIF from scratch - http://files.is.tuebingen.mpg.de/gif/input_files.zip
  2. You can use this link to download checkpoints and samples generated from pre-trained GIF models and their ablated versions - http://files.is.tuebingen.mpg.de/gif/output_files.zip
  3. Now create a directory called GIF_resources and unzip the input zip, the checkpoint zip, or both in this directory
  4. When you train or fine-tune a model, the checkpoint and sample directories in the output directory will be populated. Remember that the model artifacts can easily grow to a few tens of terabytes
  5. The main resource directory should be named GIF_resources and it should have input_files and output_files as sub-directories
  6. Now you need to provide the path to this directory in the constants.py script, and change the subdirectory names there if you wish to rename them
  7. Edit the line resources_root = '/path/to/the/unzipped/location/of/GIF_resources' (a sketch of the edited file is given after this list)
  8. Modify any other paths as you need
  9. Download the FLAME 2020 model and the FLAME texture space from here - https://flame.is.tue.mpg.de/ (you need to sign up and agree to the license for access)
  10. Please make sure to download the 2020 version. After signing in you should be able to download FLAME 2020
  11. Download the FLAME_texture_data, unzip it, and place the texture_data_256.npy file in the flame resources directory.
  12. Please place the generic_model.pkl file in GIF_resources/input_files/flame_resource
  13. In this directory you will need to place the generic_model.pkl, head_template_mesh.obj, and FLAME_texture.npz files, in addition to the files already provided in the zip you downloaded from the link above. You can find these files on the official FLAME website; see the link in point 9.
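
For orientation, after editing, the top of constants.py might look roughly like the sketch below. Only resources_root is quoted from step 7, and the directory names come from steps 5 and 12; the other variable names are assumptions and the real script may organize its paths differently.

    # constants.py (sketch) -- point the code at your unzipped GIF_resources directory.
    # Only resources_root is quoted from the steps above; the derived variables below
    # are illustrative and may be named differently in the actual script.
    import os

    resources_root = '/path/to/the/unzipped/location/of/GIF_resources'

    input_files = os.path.join(resources_root, 'input_files')      # step 5
    output_root = os.path.join(resources_root, 'output_files')     # step 5
    flame_resource = os.path.join(input_files, 'flame_resource')   # generic_model.pkl, head_template_mesh.obj, FLAME_texture.npz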

Preparing training data

To train GIF you will need to prepare two LMDB datasets:

  1. An LMDB dataset containing FFHQ images at different scales
    1. To prepare this, cd prepare_lmdb
    2. Run python prepare_ffhq_multiscale_dataset.py --n_worker N_WORKER DATASET_PATH
    3. Here DATASET_PATH is the path to the directory that contains the FFHQ images
    4. Place the created LMDB file in the GIF_resources/input_files/FFHQ directory, alongside ffhq_fid_stats
  2. An LMDB dataset containing renderings of the FLAME model
    1. To run GIF you will need the rendered texture and normal images of the FLAME mesh for the FFHQ images. This is already provided as deca_rendered_with_public_texture.lmdb in the input_files zip. It is located in GIF_resources_to_upload/input_files/DECA_inferred
    2. To create this on your own, simply run python create_deca_rendered_lmdb.py (a quick sanity check for the resulting LMDB files is sketched after this list)
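
Either way, once an LMDB file exists it is worth a quick check that it opens and holds the expected number of entries. The sketch below uses the standard lmdb Python package and makes no assumptions about how the keys are laid out.

    # check_lmdb.py -- open an LMDB dataset read-only and report how many entries it holds.
    import sys
    import lmdb

    def count_entries(lmdb_path):
        # readahead=False keeps memory usage reasonable for very large databases
        env = lmdb.open(lmdb_path, readonly=True, lock=False, readahead=False)
        with env.begin() as txn:
            n = txn.stat()['entries']
        env.close()
        return n

    if __name__ == '__main__':
        print(count_entries(sys.argv[1]), 'entries')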

Training

To resume training from a checkpoint, run python train.py --run_id <runid> --ckpt /path/to/saved.mdl/file/<runid>/model_checkpoint_name.model

Note that you point to the .model file, not the .npz one.

To start training from scratch, run python train.py --run_id <runid>

Note that the training code will take all available GPUs in the system and perform data parallelization. You can set the visible GPUs by setting the CUDA_VISIBLE_DEVICES environment variable. Run CUDA_VISIBLE_DEVICES=0,1 python train.py --run_id <runid> to run on GPUs 0 and 1.
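
For readers unfamiliar with how "take all available GPUs and perform data parallelization" typically looks in PyTorch, here is a generic illustration; it is not the repository's actual training loop, just the usual nn.DataParallel pattern restricted by CUDA_VISIBLE_DEVICES.

    # Generic PyTorch data-parallelism pattern -- an illustration, not GIF's training code.
    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 3))  # placeholder network
    x = torch.randn(16, 128)                                                   # placeholder batch

    if torch.cuda.is_available():
        model = nn.DataParallel(model).cuda()  # replicates the model on every visible GPU
        x = x.cuda()

    out = model(x)  # the batch is split across the GPUs and the outputs are gathered back
    print(out.shape)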

To run random face generation, follow these steps:

  1. Clone this repo
  2. Download the pretrained model. Please note that you have to download the model with the correct run_id
  3. Activate your virtual environment
  4. cd plots
  5. python generate_random_samples.py
  6. Remember to uncomment the appropriate run_id

To generate Figure 3

  1. cd plots
  2. Run python role_of_different_parameters.py. It will generate batch_size directories in f'{cnst.output_root}sample/', named gen_images<batch_idx>. Each of these directories contains a column of images, as shown in Figure 3 in the paper.
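
If you then want to combine one of those per-batch directories into a single column image like the ones in Figure 3, a small Pillow script along the following lines will do; the *.png glob and the output file name are assumptions about what the script writes.

    # stack_column.py -- vertically stack the images found in one gen_images<batch_idx> directory.
    # The *.png glob and the output file name are assumptions; adapt them to the actual output.
    import sys
    from pathlib import Path
    from PIL import Image

    files = sorted(Path(sys.argv[1]).glob('*.png'))
    images = [Image.open(f) for f in files]
    column = Image.new('RGB', (max(im.width for im in images), sum(im.height for im in images)), 'white')
    y = 0
    for im in images:
        column.paste(im, (0, y))
        y += im.height
    column.save('figure3_column.png')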

Amazon Mechanical Turk (AMT) evaluations:

Disclaimer: This section may be outdated and/or have changed since the time of writing. It is neither intended to advertise nor to recommend any particular third-party product. This guide is included solely for quick reference and is provided without any liability.

  • You will need three accounts:
  1. MTurk - https://requester.mturk.com/
  2. MTurk sandbox - just for experiments (no real money involved) https://requestersandbox.mturk.com/
  3. AWS - for uploading the images https://aws.amazon.com/
  • Once that is done, you may follow these steps:
  1. Upload the images to S3 via the AWS website (into two different folders, e.g. model1 and model2)
  2. Make the files public. (You can verify this by clicking one file and trying to view the image using its link.)
  3. Create one project (we are not sure what the optimal settings are; people at MPI have experience with this)
  4. In “Design Layout”, insert the HTML code from mturk/mturk_layout.html or write your own layout
  5. Upload a CSV file that contains S3 (or any other public) links for the images that will be shown to the participants
  6. You can generate such links using the generate_csv.py or create_csv.py scripts (a minimal sketch is given after this list)
  7. Finally, follow an AMT tutorial to deploy the project and obtain the results
  8. You may use the plot_results.py or plot_histogram_results.py scripts to visualize the AMT results
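
For reference, the kind of CSV the layout consumes is just rows of public image URLs. The sketch below is in the spirit of generate_csv.py / create_csv.py, with a hypothetical bucket URL and column headers that must be adapted to your HTML layout.

    # make_amt_csv.py -- write a CSV of public S3 image URLs for the AMT layout.
    # The bucket URL, folder names, and column headers are hypothetical examples.
    import csv

    bucket_url = 'https://my-bucket.s3.amazonaws.com'       # hypothetical bucket
    image_names = [f'{i:05d}.png' for i in range(100)]      # hypothetical file names

    with open('amt_images.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['image_url_model1', 'image_url_model2'])  # must match the variables in your HTML layout
        for name in image_names:
            writer.writerow([f'{bucket_url}/model1/{name}', f'{bucket_url}/model2/{name}'])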

Running the naive vector conditioning model

  • Code to run vector conditioning will arrive soon on a different branch :-)

Acknowledgements

GIF uses DECA to get FLAME geometry, appearance, and lighting parameters for the FFHQ training data. We thank H. Feng for preparing the training data, Y. Feng and S. Sanyal for support with the rendering and projection pipeline, and C. Köhler, A. Chandrasekaran, M. Keller, M. Landry, C. Huang, A. Osman and D. Tzionas for fruitful discussions, advice and proofreading. We especially thank Taylor McConnell for voicing over our video. The work was partially supported by the International Max Planck Research School for Intelligent Systems (IMPRS-IS) and by Amazon Web Services.
