aelnouby / Text To Image Synthesis

License: GPL-3.0
Pytorch implementation of Generative Adversarial Text-to-Image Synthesis paper


Text-to-Image-Synthesis

Introduction

This is a PyTorch implementation of the Generative Adversarial Text-to-Image Synthesis paper [1]. We train a conditional generative adversarial network, conditioned on text descriptions, to generate images that correspond to those descriptions. The network architecture, which is based on DCGAN, is shown below (image from [1]).

Image credits [1]

Requirements

  • pytorch
  • visdom
  • h5py
  • PIL
  • numpy

This implementation currently only supports running on GPUs.

Implementation details

This implementation follows the Generative Adversarial Text-to-Image Synthesis paper [1], but puts additional emphasis on stabilizing training and preventing mode collapse by implementing:

  • Feature matching [2]
  • One sided label smoothing [2]
  • minibatch discrimination [2] (implemented but not used)
  • WGAN [3]
  • WGAN-GP [4] (implemented but not used)
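
Two of the techniques above, one-sided label smoothing and feature matching [2], can be sketched as loss terms in PyTorch. This is a minimal illustration of the ideas, not the repository's actual loss code; function names and the smoothing value of 0.9 are assumptions:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(real_logits, fake_logits, smooth=0.9):
    """BCE discriminator loss with one-sided label smoothing:
    real targets are softened to `smooth` (e.g. 0.9) instead of 1.0,
    while fake targets stay at 0.0."""
    real_targets = torch.full_like(real_logits, smooth)
    fake_targets = torch.zeros_like(fake_logits)
    real_loss = F.binary_cross_entropy_with_logits(real_logits, real_targets)
    fake_loss = F.binary_cross_entropy_with_logits(fake_logits, fake_targets)
    return real_loss + fake_loss

def feature_matching_loss(real_features, fake_features):
    """Feature matching: squared L2 distance between the mean
    discriminator features of the real and generated batches."""
    return torch.mean((real_features.mean(0) - fake_features.mean(0)) ** 2)
```

Smoothing only the real labels keeps the discriminator from becoming overconfident without rewarding the generator for producing samples the discriminator already rejects.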

Datasets

We used the Caltech-UCSD Birds 200 and Flowers datasets, converting each dataset (images and text embeddings) to HDF5 format.

We used the text embeddings provided by the paper's authors.

To use this code you can either:

HDF5 file taxonomy:

  • split (train | valid | test )
    • example_name
      • 'name'
      • 'img'
      • 'embeddings'
      • 'class'
      • 'txt'
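
Following the taxonomy above, a dataset file can be inspected with h5py. This is a sketch based on the layout described, not code from the repository; the exact field dtypes are assumptions:

```python
import h5py

def iter_examples(path, split="train"):
    """Yield (name, img, embeddings, class, txt) tuples from one split
    of an HDF5 file laid out as split / example_name / fields."""
    with h5py.File(path, "r") as f:
        for _, example in f[split].items():
            yield (
                example["name"][()],
                example["img"][()],
                example["embeddings"][()],
                example["class"][()],
                example["txt"][()],
            )
```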

Usage

Training

```
python runtime.py
```

Arguments:

  • type : GAN architecture to use (gan | wgan | vanilla_gan | vanilla_wgan). default = gan. "Vanilla" means unconditional (no text conditioning)
  • dataset: Dataset to use (birds | flowers). default = flowers
  • split : An integer indicating which split to use (0 : train | 1: valid | 2: test). default = 0
  • lr : The learning rate. default = 0.0002
  • diter : Only for WGAN, number of discriminator iterations per generator iteration. default = 5
  • vis_screen : The visdom env name for visualization. default = gan
  • save_path : Path for saving the models.
  • l1_coef : L1 loss coefficient in the generator loss function for gan and vanilla_gan. default = 50
  • l2_coef : Feature matching coefficient in the generator loss function for gan and vanilla_gan. default = 100
  • pre_trained_disc : Path to a pre-trained discriminator model used for initializing training.
  • pre_trained_gen : Path to a pre-trained generator model used for initializing training.
  • batch_size: Batch size. default= 64
  • num_workers: Number of dataloader workers used for fetching data. default = 8
  • epochs : Number of training epochs. default=200
  • cls : Boolean flag indicating whether to train with the GAN-CLS (matching-aware discriminator) algorithm. default = False
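
Putting the arguments together, training runs might look like the following. These invocations are illustrative; the exact flag syntax accepted by runtime.py (e.g. `--flag value` vs `--flag=value`) is an assumption:

```shell
# Conditional GAN on the flowers training split (defaults shown explicitly)
python runtime.py --type gan --dataset flowers --split 0 \
    --lr 0.0002 --batch_size 64 --epochs 200 --save_path ./checkpoints

# Conditional WGAN on birds, 5 discriminator iterations per generator step
python runtime.py --type wgan --dataset birds --diter 5 --save_path ./checkpoints
```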

Results

Generated Images

Text to image synthesis

Example captions paired with their generated images (images not reproduced here):

  • A blood colored pistil collects together with a group of long yellow stamens around the outside
  • The petals of the flower are narrow and extremely pointy, and consist of shades of yellow, blue
  • This pale peach flower has a double row of long thin petals with a large brown center and coarse loo
  • The flower is pink with petals that are soft, and separately arranged around the stamens that has pi
  • A one petal flower that is white with a cluster of yellow anther filaments in the center

References

[1] Generative Adversarial Text-to-Image Synthesis https://arxiv.org/abs/1605.05396

[2] Improved Techniques for Training GANs https://arxiv.org/abs/1606.03498

[3] Wasserstein GAN https://arxiv.org/abs/1701.07875

[4] Improved Training of Wasserstein GANs https://arxiv.org/pdf/1704.00028.pdf

Other Implementations

  1. https://github.com/reedscot/icml2016 (the authors' version)
  2. https://github.com/paarthneekhara/text-to-image (tensorflow)