
crisbodnar / text-to-image

Licence: other
Text to Image Synthesis using Generative Adversarial Networks

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to text-to-image

Awesome-Text-to-Image
A Survey on Text-to-Image Generation/Synthesis.
Stars: ✭ 251 (+248.61%)
Mutual labels:  text-to-image, image-synthesis
cDCGAN
PyTorch implementation of Conditional Deep Convolutional Generative Adversarial Networks (cDCGAN)
Stars: ✭ 49 (-31.94%)
Mutual labels:  conditional-gan
Seg2Eye
Official implementation of "Content-Consistent Generation of Realistic Eyes with Style", ICCVW 2019
Stars: ✭ 26 (-63.89%)
Mutual labels:  image-synthesis
feed forward vqgan clip
Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
Stars: ✭ 135 (+87.5%)
Mutual labels:  text-to-image
universum-contracts
text-to-image generation gems / libraries incl. moonbirds, cyberpunks, coolcats, shiba inu doge, nouns & more
Stars: ✭ 17 (-76.39%)
Mutual labels:  text-to-image
Data-Whisperer
An NLP text-to-visualization builder for Tableau.
Stars: ✭ 13 (-81.94%)
Mutual labels:  text-to-image
pix2pix
This project uses a conditional generative adversarial network (cGAN) named Pix2Pix for the image-to-image translation task.
Stars: ✭ 28 (-61.11%)
Mutual labels:  conditional-gan
text-to-image
Re-implementation of https://github.com/zsdonghao/text-to-image
Stars: ✭ 25 (-65.28%)
Mutual labels:  text-to-image
Conditional-SeqGAN-Tensorflow
Conditional Sequence Generative Adversarial Network trained with policy gradient, implemented in TensorFlow
Stars: ✭ 47 (-34.72%)
Mutual labels:  conditional-gan
gans-2.0
Generative Adversarial Networks in TensorFlow 2.0
Stars: ✭ 76 (+5.56%)
Mutual labels:  conditional-gan
Everybody-dance-now
Implementation of the paper "Everybody Dance Now" for a deep learning course project
Stars: ✭ 22 (-69.44%)
Mutual labels:  conditional-gan
gpuvmem
GPU Framework for Radio Astronomical Image Synthesis
Stars: ✭ 27 (-62.5%)
Mutual labels:  image-synthesis
coursera-gan-specialization
Programming assignments and quizzes from all courses within the GANs specialization offered by deeplearning.ai
Stars: ✭ 277 (+284.72%)
Mutual labels:  conditional-gan
VQGAN-CLIP
Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
Stars: ✭ 2,369 (+3190.28%)
Mutual labels:  text-to-image
CLIP-Guided-Diffusion
Just playing with getting CLIP Guided Diffusion running locally, rather than having to use colab.
Stars: ✭ 328 (+355.56%)
Mutual labels:  text-to-image
CoCosNet-v2
CoCosNet v2: Full-Resolution Correspondence Learning for Image Translation
Stars: ✭ 312 (+333.33%)
Mutual labels:  image-synthesis
idg
Document image generator
Stars: ✭ 40 (-44.44%)
Mutual labels:  text-to-image
Pix2Pix
Image to Image Translation using Conditional GANs (Pix2Pix) implemented using Tensorflow 2.0
Stars: ✭ 29 (-59.72%)
Mutual labels:  conditional-gan
SuperStyleNet
SuperStyleNet: Deep Image Synthesis with Superpixel Based Style Encoder (BMVC 2021)
Stars: ✭ 28 (-61.11%)
Mutual labels:  image-synthesis
external-internal-inpainting
[CVPR 2021] EII: Image Inpainting with External-Internal Learning and Monochromic Bottleneck
Stars: ✭ 95 (+31.94%)
Mutual labels:  image-synthesis

Text to Image Synthesis using Generative Adversarial Networks

This is the official code for Text to Image Synthesis Using Generative Adversarial Networks. Please be aware that the code is in an experimental stage and might require some small tweaks.

If you find my research useful, please cite it with the following BibTeX entry:

@article{Bodnar2018TextTI,
  title={Text to Image Synthesis Using Generative Adversarial Networks},
  author={Cristian Bodnar},
  journal={CoRR},
  year={2018},
  volume={abs/1805.00676}
}

Images generated by the Conditional Wasserstein GAN

As can be seen, the generated images do not suffer from mode collapse.

Sample from the flowers dataset
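
For context, the objective behind a text-conditioned Wasserstein GAN with gradient penalty looks roughly like the sketch below. This is an illustrative TensorFlow 1.x snippet, not the code of this repository; critic, text_embed and gp_weight are placeholder names.

import tensorflow as tf

def critic_loss(critic, real_images, fake_images, text_embed, gp_weight=10.0):
    # Critic scores for real and generated images, both conditioned on the text embedding.
    d_real = critic(real_images, text_embed)
    d_fake = critic(fake_images, text_embed)

    # Gradient penalty on random interpolations between real and fake samples (WGAN-GP).
    eps = tf.random_uniform([tf.shape(real_images)[0], 1, 1, 1], 0.0, 1.0)
    interp = eps * real_images + (1.0 - eps) * fake_images
    grads = tf.gradients(critic(interp, text_embed), [interp])[0]
    slopes = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
    gradient_penalty = tf.reduce_mean(tf.square(slopes - 1.0))

    # The critic maximises d_real - d_fake, so we minimise the negated estimate.
    return tf.reduce_mean(d_fake) - tf.reduce_mean(d_real) + gp_weight * gradient_penalty

def generator_loss(critic, fake_images, text_embed):
    # The generator tries to maximise the critic score of its conditional samples.
    return -tf.reduce_mean(critic(fake_images, text_embed))

Because the Wasserstein estimate provides useful gradients even when the real and generated distributions barely overlap, this family of objectives is generally less prone to mode collapse than the standard GAN loss, which matches the behaviour illustrated above.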

Illustration of Conditional Wasserstein Progressive Growing GAN on the flowers dataset:

Sample from the flowers dataset

Samples from the birds dataset

Sample from the birds dataset

How to download the dataset

  1. Set up your PYTHONPATH to point to the root directory of the project.
  2. Download the preprocessed flowers text descriptions and extract them into the /data directory.
  3. Download the images from Oxford102 and extract them into /data/flowers/jpg. You can alternatively run python preprocess/download_flowers_dataset.py from the root directory of the project; a minimal sketch of what this download amounts to is shown after this list.
  4. Run the python preprocess/preprocess_flowers.py script from the root directory of the project.
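
For illustration only, the manual download in step 3 roughly amounts to the following Python sketch. The URL and paths are assumptions based on the standard Oxford-102 distribution; the repository's own preprocess/download_flowers_dataset.py is the supported way to do this.

import os
import tarfile
import urllib.request

# Assumed Oxford-102 archive URL and layout; verify against
# preprocess/download_flowers_dataset.py before relying on it.
URL = "https://www.robots.ox.ac.uk/~vgg/data/flowers/102/102flowers.tgz"
DEST = "data/flowers"

os.makedirs(DEST, exist_ok=True)
archive = os.path.join(DEST, "102flowers.tgz")
if not os.path.exists(archive):
    urllib.request.urlretrieve(URL, archive)

# The archive unpacks into a jpg/ folder, yielding data/flowers/jpg/ as step 3 requires.
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(DEST)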

Requirements

  • python 3.6
  • tensorflow 1.4
  • scipy
  • numpy
  • pillow
  • easydict
  • imageio
  • pyyaml
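
Since the code targets the TensorFlow 1.x API, a quick (hypothetical) sanity check such as the following can save some confusion when a newer TensorFlow is installed:

import tensorflow as tf

# TF 2.x removed the 1.x graph-mode entry points (tf.placeholder, tf.Session) from the
# default namespace, so this project needs a 1.x installation, ideally 1.4.
assert tf.__version__.startswith("1."), (
    "text-to-image expects TensorFlow 1.4; found %s" % tf.__version__)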