
Yutong-Zhou-cv / Awesome-Text-to-Image

License: MIT
A Survey on Text-to-Image Generation/Synthesis.

Projects that are alternatives to or similar to Awesome-Text-to-Image

WGAN-GP-TensorFlow
TensorFlow implementations of Wasserstein GAN with Gradient Penalty (WGAN-GP), Least Squares GAN (LSGAN), and GANs with the hinge loss.
Stars: ✭ 42 (-83.27%)
Mutual labels:  generative-adversarial-network, image-generation, image-synthesis
universum-contracts
text-to-image generation gems / libraries incl. moonbirds, cyberpunks, coolcats, shiba inu doge, nouns & more
Stars: ✭ 17 (-93.23%)
Mutual labels:  image-generation, text-to-image
CoCosNet-v2
CoCosNet v2: Full-Resolution Correspondence Learning for Image Translation
Stars: ✭ 312 (+24.3%)
Mutual labels:  image-generation, image-synthesis
Semantic Pyramid for Image Generation
PyTorch reimplementation of the paper: "Semantic Pyramid for Image Generation" [CVPR 2020].
Stars: ✭ 45 (-82.07%)
Mutual labels:  generative-adversarial-network, image-generation
Pytorch Cyclegan And Pix2pix
Image-to-Image Translation in PyTorch
Stars: ✭ 16,477 (+6464.54%)
Mutual labels:  generative-adversarial-network, image-generation
Finegan
FineGAN: Unsupervised Hierarchical Disentanglement for Fine-grained Object Generation and Discovery
Stars: ✭ 240 (-4.38%)
Mutual labels:  generative-adversarial-network, image-generation
Anime2Sketch
A sketch extractor for anime/illustration.
Stars: ✭ 1,623 (+546.61%)
Mutual labels:  generative-adversarial-network, image-generation
Tsit
[ECCV 2020 Spotlight] A Simple and Versatile Framework for Image-to-Image Translation
Stars: ✭ 141 (-43.82%)
Mutual labels:  generative-adversarial-network, image-generation
SuperStyleNet
SuperStyleNet: Deep Image Synthesis with Superpixel Based Style Encoder (BMVC 2021)
Stars: ✭ 28 (-88.84%)
Mutual labels:  image-generation, image-synthesis
text-to-image
Text to Image Synthesis using Generative Adversarial Networks
Stars: ✭ 72 (-71.31%)
Mutual labels:  text-to-image, image-synthesis
keras-text-to-image
Translate text to image in Keras using GAN and Word2Vec as well as recurrent neural networks
Stars: ✭ 60 (-76.1%)
Mutual labels:  generative-adversarial-network, text-to-image
Conditional Gan
Tensorflow implementation for Conditional Convolutional Adversarial Networks.
Stars: ✭ 202 (-19.52%)
Mutual labels:  generative-adversarial-network, image-generation
Arbitrary Text To Image Papers
A collection of arbitrary text to image papers with code (constantly updating)
Stars: ✭ 196 (-21.91%)
Mutual labels:  generative-adversarial-network, image-generation
Data Augmentation Review
A list of useful data augmentation resources: uncommon techniques, libraries, links to GitHub repos, papers, and more.
Stars: ✭ 785 (+212.75%)
Mutual labels:  survey, generative-adversarial-network
Mmediting
OpenMMLab Image and Video Editing Toolbox
Stars: ✭ 2,618 (+943.03%)
Mutual labels:  generative-adversarial-network, image-generation
ru-dalle
Generate images from texts. In Russian
Stars: ✭ 1,606 (+539.84%)
Mutual labels:  image-generation, text-to-image
Unetgan
Official Implementation of the paper "A U-Net Based Discriminator for Generative Adversarial Networks" (CVPR 2020)
Stars: ✭ 139 (-44.62%)
Mutual labels:  generative-adversarial-network, image-generation
Focal Frequency Loss
Focal Frequency Loss for Generative Models
Stars: ✭ 141 (-43.82%)
Mutual labels:  generative-adversarial-network, image-generation
Sketch2Color-anime-translation
Given a simple anime line-art sketch, the model outputs a decent colored anime image using the Conditional Generative Adversarial Network (cGAN) concept.
Stars: ✭ 90 (-64.14%)
Mutual labels:  generative-adversarial-network, image-synthesis
ArtGAN
TensorFlow code for our ICIP-17 and arXiv-1708.09533 works: "ArtGAN: Artwork Synthesis with Conditional Categorical GAN" & "Learning a Generative Adversarial Network for High Resolution Artwork Synthesis"
Stars: ✭ 16 (-93.63%)
Mutual labels:  generative-adversarial-network, image-synthesis

Awesome Text📝-to-Image🌇


A collection of resources on the text-to-image synthesis task.

(Figure from: Hierarchical Text-Conditional Image Generation with CLIP Latents)

Content

1. Description

  • Over the last few decades, the fields of Computer Vision (CV) and Natural Language Processing (NLP) have seen several major technological breakthroughs in deep learning research. Recently, researchers have become increasingly interested in combining semantic and visual information across these traditionally independent fields. A number of studies have been conducted on text-to-image synthesis techniques, which translate an input textual description (keywords or sentences) into realistic images; a minimal sketch of the shared GAN recipe follows this section.

  • Papers, code, and datasets for the text-to-image task are available here.
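
The recipe shared by most GAN-based papers in this list goes back to Reed et al. (ICML 2016): encode the caption with a pretrained text encoder, concatenate the sentence embedding with a noise vector, and decode the result with a deconvolutional generator trained adversarially. A minimal PyTorch sketch; every layer size here is an illustrative assumption, not a value from any particular paper.

```python
import torch
import torch.nn as nn

class TextConditionalGenerator(nn.Module):
    """Sketch of a Reed-et-al.-style text-conditional generator."""
    def __init__(self, text_dim=1024, cond_dim=128, z_dim=100):
        super().__init__()
        # Compress the pretrained sentence embedding into a small conditioning code.
        self.project_text = nn.Sequential(nn.Linear(text_dim, cond_dim),
                                          nn.LeakyReLU(0.2))
        # Deconvolution stack: (noise ++ text code) -> 64x64 RGB image.
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim + cond_dim, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z, text_embedding):
        cond = self.project_text(text_embedding)           # (B, cond_dim)
        x = torch.cat([z, cond], dim=1)[:, :, None, None]  # (B, z+cond, 1, 1)
        return self.net(x)                                 # (B, 3, 64, 64)

# Usage: TextConditionalGenerator()(torch.randn(4, 100), torch.randn(4, 1024))
```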

🐌 Markdown Format:

2. Quantitative Evaluation Metrics «🎯Back To Top»
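
The metrics reported by nearly every paper below are the Inception Score (IS) and the Fréchet Inception Distance (FID), with R-precision common for measuring text-image alignment. FID compares Gaussian fits to Inception-v3 features of real and generated images: FID = ‖μ_r − μ_g‖² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}). A minimal sketch, assuming the 2048-dimensional pool features have already been extracted; in practice a maintained package such as pytorch-fid is preferable.

```python
import numpy as np
from scipy import linalg

def fid(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """real_feats, fake_feats: (N, 2048) Inception-v3 pool activations."""
    mu1, mu2 = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    sigma1 = np.cov(real_feats, rowvar=False)
    sigma2 = np.cov(fake_feats, rowvar=False)
    covmean = linalg.sqrtm(sigma1 @ sigma2).real  # matrix sqrt; drop tiny imaginary parts
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```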

3. Datasets «🎯Back To Top»

  • Caltech-UCSD Birds (CUB)

    Caltech-UCSD Birds-200-2011 (CUB-200-2011) is an extended version of the CUB-200 dataset, with roughly double the number of images per class and new part-location annotations. (A minimal loader sketch for these caption-paired datasets follows this datasets list.)

    • Detailed information (Images): ⇒ [Paper] [Website]
      • Number of different categories: 200 (Training: 150 categories. Testing: 50 categories.)
      • Number of bird images: 11,788
      • Annotations per image: 15 Part Locations, 312 Binary Attributes, 1 Bounding Box, Ground-truth Segmentation
    • Detailed information (Text Descriptions): ⇒ [Paper] [Website]
      • Descriptions per image: 10 Captions
  • Oxford-102 Flower

    Oxford-102 Flower is a dataset of 102 flower categories, chosen from flowers commonly occurring in the United Kingdom. The images have large scale, pose, and light variations.

    • Detailed information (Images): ⇒ [Paper] [Website]
      • Number of different categories: 102 (Training: 82 categories. Testing: 20 categories.)
      • Number of flower images: 8,189
    • Detailed information (Text Descriptions): ⇒ [Paper] [Download]
      • Descriptions per image: 10 Captions
  • MS-COCO

    COCO is a large-scale object detection, segmentation, and captioning dataset.

    • Detailed information (Images): ⇒ [Paper] [Website]
      • Number of different categories: 91
      • Number of images: 120k (Training: 80k. Testing: 40k.)
    • Detailed information (Text Descriptions): ⇒ [Paper] [Download]
      • Descriptions per image: 5 Captions
  • Multi-Modal-CelebA-HQ

    Multi-Modal-CelebA-HQ is a large-scale face image dataset for text-to-image generation, text-guided image manipulation, sketch-to-image generation, GANs for face generation and editing, image captioning, and VQA.

    • Detailed information (Images & Text Descriptions): ⇒ [Paper] [Website] [Download]
      • Number of images (from Celeba-HQ): 30,000 (Training: 24,000. Testing: 6,000.)
      • Descriptions per image: 10 Captions
    • Detailed information (Masks):
      • Number of masks (from Celeba-Mask-HQ): 30,000 (512 x 512)
    • Detailed information (Sketches):
      • Number of Sketches: 30,000 (512 x 512)
    • Detailed information (Image with transparent background):
      • Not fully uploaded
  • CelebA-Dialog

    CelebA-Dialog is a large-scale visual-language face dataset. It has two properties: (1) facial images are annotated with rich fine-grained labels, which classify each attribute into multiple degrees according to its semantic meaning; (2) each image is accompanied by captions describing its attributes and a sample user request.

    • Detailed information (Images & Text Descriptions): ⇒ [Paper] [Website] [Download]
      • Number of identities: 10,177
      • Number of images: 202,599
      • 5 fine-grained attributes annotations per image: Bangs, Eyeglasses, Beard, Smiling, and Age
  • FFHQ-Text

    FFHQ-Text is a small-scale face image dataset with large-scale facial attributes, designed for text-to-face generation & manipulation, text-guided facial image manipulation, and other vision-related tasks.

    • Detailed information (Images & Text Descriptions): ⇒ [Paper] [Website] [Download]
      • Number of images (from FFHQ): 760 (Training: 500. Testing: 260.)
      • Descriptions per image: 9 Captions
      • 13 multi-valued facial element groups from coarse to fine.
    • Detailed information (BBox): ⇒ [Website]
  • CelebAText-HQ

    CelebAText-HQ is a large-scale face image dataset with large-scale facial attributes, designed for text-to-face generation.

    • Detailed information (Images & Text Descriptions): ⇒ [Paper] [Website] [Download]
      • Number of images (from Celeba-HQ): 15,010 (Training: 13,710. Testing: 1,300.)
      • Descriptions per image: 10 Captions
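
All of the datasets above pair each image with several captions (10 for CUB, Oxford-102, Multi-Modal-CelebA-HQ, and CelebAText-HQ; 5 for MS-COCO; 9 for FFHQ-Text), and most methods sample one caption per image each training step. A minimal PyTorch Dataset sketch; the directory layout and file naming are illustrative assumptions, not the datasets' official distribution formats.

```python
import random
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class TextImageDataset(Dataset):
    """Assumed layout: images/<name>.jpg, with captions in text/<name>.txt
    (one caption per line)."""
    def __init__(self, root, transform=None):
        self.root = Path(root)
        self.images = sorted((self.root / "images").glob("*.jpg"))
        self.transform = transform

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img_path = self.images[idx]
        image = Image.open(img_path).convert("RGB")
        if self.transform:
            image = self.transform(image)
        captions = (self.root / "text" / f"{img_path.stem}.txt").read_text().splitlines()
        # Standard practice: sample one of the image's captions each epoch.
        return image, random.choice(captions)
```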

4. Projects

  • Aphantasia. [Github]
    • This is a text-to-image tool, part of the artwork of the same name. (Aphantasia is the inability to visualize mental images, the deprivation of visual dreams.)

  • Text2Art. [Try it now!] [Github] [Blog]
    • Text2Art is an AI-powered art generator based on VQGAN+CLIP that can generate all kinds of art, such as pixel art, drawings, and paintings, from just a text input.

  • Survey: Text-Based Image Synthesis [Blog (2021)]

5. Papers With Code

  • Survey «🎯Back To Top»

    • Text-to-Image Synthesis: A Comparative Study [v1(Digital Transformation Technology)] (2021.08)
    • A survey on generative adversarial network-based text-to-image synthesis [v1(Neurocomputing)] (2021.04)
    • Adversarial Text-to-Image Synthesis: A Review [v1(arXiv)] (2021.01) [v2(Neural Networks)] (2021.08)
    • A Survey and Taxonomy of Adversarial Neural Networks for Text-to-Image Synthesis [v1(arXiv)] (2019.10)
  • Text to Face👨🏻🧒👧🏼🧓🏽 «🎯Back To Top»

    • (CVPR 2022) StyleT2I: Toward Compositional and High-Fidelity Text-to-Image Synthesis, Zhiheng Li et al. [Paper] [Code]
    • (arXiv preprint 2022) StyleT2F: Generating Human Faces from Textual Description Using StyleGAN2, Mohamed Shawky Sabae et al. [Paper] [Code]
    • (arXiv preprint 2022) AnyFace: Free-style Text-to-Face Synthesis and Manipulation, Jianxin Sun et al. [Paper]
    • (IEEE Transactions on Network Science and Engineering) TextFace: Text-to-Style Mapping based Face Generation and Manipulation, Xianxu Hou et al. [Paper]
    • (FG 2021) Generative Adversarial Network for Text-to-Face Synthesis and Manipulation with Pretrained BERT Model, Yutong Zhou et al. [Paper]
    • (ACMMM 2021) Multi-caption Text-to-Face Synthesis: Dataset and Algorithm, Jianxin Sun et al. [Paper] [Code]
    • (ACMMM 2021) Generative Adversarial Network for Text-to-Face Synthesis and Manipulation, Yutong Zhou. [Paper]
    • (WACV 2021) Faces a la Carte: Text-to-Face Generation via Attribute Disentanglement, Tianren Wang et al. [Paper]
    • (arXiv preprint 2019) FTGAN: A Fully-trained Generative Adversarial Networks for Text to Face Generation, Xiang Chen et al. [Paper]
  • 2022 «🎯Back To Top»

    • (OpenAI) [DALL-E 2] Hierarchical Text-Conditional Image Generation with CLIP Latents, Aditya Ramesh et al. [Paper] [Blog] [Risks and Limitations] [Unofficial Code] (a classifier-free-guidance sampling sketch follows this year's list)
    • (arXiv preprint 2022) Recurrent Affine Transformation for Text-to-image Synthesis, Senmao Ye et al. [Paper] [Code]
    • (AAAI 2022) Interactive Image Generation with Natural-Language Feedback, Yufan Zhou et al. [Paper]
    • (IEEE Transactions on Neural Networks and Learning Systems) DR-GAN: Distribution Regularization for Text-to-Image Generation, Hongchen Tan et al. [Paper]
    • (Pattern Recognition Letters) Text-to-image synthesis with self-supervised learning, Yong Xuan Tan et al. [Paper]
    • (CVPR 2022) Vector Quantized Diffusion Model for Text-to-Image Synthesis, Shuyang Gu et al. [Paper] [Code]
    • (CVPR 2022) Autoregressive Image Generation using Residual Quantization, Doyup Lee et al. [Paper] [Code]
    • (CVPR 2022) Text-to-Image Synthesis based on Object-Guided Joint-Decoding Transformer, Fuxiang Wu et al. [Paper]
    • (CVPR 2022) LAFITE: Towards Language-Free Training for Text-to-Image Generation, Yufan Zhou et al. [Paper] [Code]
    • (CVPR 2022) DF-GAN: A Simple and Effective Baseline for Text-to-Image Synthesis, Ming Tao et al. [Paper] [Code]
    • (arXiv preprint 2022) DT2I: Dense Text-to-Image Generation from Region Descriptions, Stanislav Frolov et al. [Paper]
    • (arXiv preprint 2022) Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors, Oran Gafni et al. [Paper] [Code]
    • (IEEE Transactions on Network Science and Engineering) TextFace: Text-to-Style Mapping based Face Generation and Manipulation, Xianxu Hou et al. [Paper]
    • (arXiv preprint 2022) CLIP-GEN: Language-Free Training of a Text-to-Image Generator with CLIP, Zihao Wang et al. [Paper]
    • (arXiv preprint 2022) OptGAN: Optimizing and Interpreting the Latent Space of the Conditional Text-to-Image GANs, Zhenxing Zhang et al. [Paper]
    • (arXiv preprint 2022) DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generative Transformers, Jaemin Cho et al. [Paper] [Code]
    • (IEEE Transactions on Network Science and Engineering) Neural Architecture Search with a Lightweight Transformer for Text-to-Image Synthesis, Wei Li et al. [Paper]
    • (Neurocomputing 2022) DiverGAN: An Efficient and Effective Single-Stage Framework for Diverse Text-to-Image Generation, Zhenxing Zhang et al. [Paper]
    • (Knowledge-Based Systems) CJE-TIG: Zero-shot cross-lingual text-to-image generation by Corpora-based Joint Encoding, Han Zhang et al. [Paper]
    • (WACV 2022) StyleMC: Multi-Channel Based Fast Text-Guided Image Generation and Manipulation, Umut Kocasarı et al. [Paper] [Project]
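
Several diffusion-based systems above (DALL-E 2's decoder here; GLIDE in the 2021 list below) sample with classifier-free guidance: the denoiser is queried twice per step, once with the caption and once with a learned "null" caption, and the two noise predictions are extrapolated. A sketch of that single step; `model`, its signature, and the null-embedding handling are assumptions for illustration.

```python
import torch

@torch.no_grad()
def guided_eps(model, x_t, t, text_emb, null_emb, guidance_scale=3.0):
    """One classifier-free-guidance denoising step:
    eps = eps_uncond + s * (eps_cond - eps_uncond)."""
    eps_cond = model(x_t, t, text_emb)    # conditioned on the caption
    eps_uncond = model(x_t, t, null_emb)  # conditioned on the "empty" caption
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```
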
  • 2021 «🎯Back To Top»

    • (arXiv preprint 2021) Multimodal Conditional Image Synthesis with Product-of-Experts GANs, Xun Huang et al. [Paper] [Project]
      • Text-to-Image, Segmentation-to-Image, Text+Segmentation/Sketch/Image→Image, Sketch+Segmentation/Image→Image, Segmentation+Image→Image
    • (IEEE TCSVT) RiFeGAN2: Rich Feature Generation for Text-to-Image Synthesis from Constrained Prior Knowledge, Jun Cheng et al. [Paper]
    • (ICONIP 2021) TRGAN: Text to Image Generation Through Optimizing Initial Image, Liang Zhao et al. [Paper]
    • (arXiv preprint 2021) GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models, Alex Nichol et al. [Paper] [Code]
    • (NeurIPS 2021) Benchmark for Compositional Text-to-Image Synthesis, Dong Huk Park et al. [Paper]
    • (arXiv preprint 2021) FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimization, Xingchao Liu et al. [Paper] [Code] (a CLIP+GAN optimization sketch follows this year's list)
    • (arXiv preprint 2021) [💬Evaluation] TISE: A Toolbox for Text-to-Image Synthesis Evaluation, Tan M. Dinh et al. [Paper] [Project]
    • (ICONIP 2021) Self-Supervised Image-to-Text and Text-to-Image Synthesis, Anindya Sundar Das et al. [Paper]
    • (arXiv preprint 2021) NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion, Chenfei Wu et al. [Paper] [Code]
      • Multimodal pretrained model for multiple tasks🎄: Text-to-Image (T2I), Sketch-to-Image (S2I), Image Completion (I2I), Text-Guided Image Manipulation (TI2I), Text-to-Video (T2V), Video Prediction (V2V), Sketch-to-Video (S2V), Text-Guided Video Manipulation (TV2V). Figure from paper.

        (From: https://github.com/microsoft/NUWA [2021/11/30])

    • (arXiv preprint 2021) DiverGAN: An Efficient and Effective Single-Stage Framework for Diverse Text-to-Image Generation, Zhenxing Zhang et al. [Paper]
    • (Image and Vision Computing) Transformer models for enhancing AttnGAN based text to image generation, S. Naveen et al. [Paper]
    • (ACMMM 2021) R-GAN: Exploring Human-like Way for Reasonable Text-to-Image Synthesis via Generative Adversarial Networks, Yanyuan Qiao et al. [Paper]
    • (ACMMM 2021) Cycle-Consistent Inverse GAN for Text-to-Image Synthesis, Hao Wang et al. [Paper]
    • (ACMMM 2021) Unifying Multimodal Transformer for Bi-directional Image and Text Generation, Yupan Huang et al. [Paper] [Code]
    • (ACMMM 2021) A Picture is Worth a Thousand Words: A Unified System for Diverse Captions and Rich Images Generation, Yupan Huang et al. [Paper] [Code]
    • (ICCV 2021) Talk-to-Edit: Fine-Grained Facial Editing via Dialog, Yuming Jiang et al. [Paper] [Project] [Code]
    • (ICCV 2021) DAE-GAN: Dynamic Aspect-Aware GAN for Text-to-Image Synthesis, Shulan Ruan et al. [Paper] [Supp] [Code]
    • (ICIP 2021) Text To Image Synthesis With Erudite Generative Adversarial Networks, Zhiqiang Zhang et al. [Paper]
    • (PRCV 2021) MAGAN: Multi-attention Generative Adversarial Networks for Text-to-Image Generation, Xibin Jia et al. [Paper]
    • (AAAI 2021) TIME: Text and Image Mutual-Translation Adversarial Networks, Bingchen Liu et al. [Paper] [arXiv Paper]
    • (IJCNN 2021) Text to Image Synthesis based on Multi-Perspective Fusion, Zhiqiang Zhang et al. [Paper]
    • (arXiv preprint 2021) CRD-CGAN: Category-Consistent and Relativistic Constraints for Diverse Text-to-Image Generation, Tao Hu et al. [Paper]
    • (arXiv preprint 2021) Improving Text-to-Image Synthesis Using Contrastive Learning, Hui Ye et al. [Paper] [Code]
    • (arXiv preprint 2021) CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders, Kevin Frans et al. [Paper] [Code]
    • (ICASSP 2021) Drawgan: Text to Image Synthesis with Drawing Generative Adversarial Networks, Zhiqiang Zhang et al. [Paper]
    • (arXiv preprint 2021) Text to Image Generation with Semantic-Spatial Aware GAN, Kai Hu et al. [Paper] [Code]
    • (IJCNN 2021) DTGAN: Dual Attention Generative Adversarial Networks for Text-to-Image Generation, Zhenxing Zhang et al. [Paper]
    • (CVPR 2021) TediGAN: Text-Guided Diverse Image Generation and Manipulation, Weihao Xia et al. [Paper] [Extended Version] [Code] [Dataset] [Colab] [Video]
    • (CVPR 2021) Cross-Modal Contrastive Learning for Text-to-Image Generation, Han Zhang et al. [Paper] [Code]
    • (NeurIPS 2021) CogView: Mastering Text-to-Image Generation via Transformers, Ming Ding et al. [Paper] [Code] [Demo Website(Chinese)]
    • (IEEE Transactions on Multimedia 2021) Modality Disentangled Discriminator for Text-to-Image Synthesis, Fangxiang Feng et al. [Paper] [Code]
    • (arXiv preprint 2021) Zero-Shot Text-to-Image Generation, Aditya Ramesh et al. [Paper] [Code] [Blog] [Model Card] [Colab] [Code(Pytorch)]
    • (Pattern Recognition 2021) Unsupervised text-to-image synthesis, Yanlong Dong et al. [Paper]
    • (WACV 2021) Text-to-Image Generation Grounded by Fine-Grained User Attention, Jing Yu Koh et al. [Paper]
    • (IEEE TIP 2021) Multi-Sentence Auxiliary Adversarial Networks for Fine-Grained Text-to-Image Synthesis, Yanhua Yang et al. [Paper]
    • (IEEE Access 2021) DGattGAN: Cooperative Up-Sampling Based Dual Generator Attentional GAN on Text-to-Image Synthesis, Han Zhang et al. [Paper]
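
FuseDream and CLIPDraw above (and projects such as Text2Art in Section 4) share one training-free loop: freeze a pretrained generator and CLIP, then optimize the generator's input so that CLIP scores the output image as similar to the caption. A minimal sketch; `G` (any latent-to-image generator producing 224×224 outputs in [-1, 1]) is an assumption, while the CLIP calls follow the openai/CLIP package. CLIP's exact resize-and-normalize preprocessing is omitted for brevity.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

def clip_guided_latents(G, prompt, z_dim=128, steps=200, lr=0.05, device="cuda"):
    """Optimize a latent z so that CLIP(image=G(z)) matches CLIP(text=prompt)."""
    model, _ = clip.load("ViT-B/32", device=device)
    text = model.encode_text(clip.tokenize([prompt]).to(device)).detach()
    z = torch.randn(1, z_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        image = (G(z) + 1) / 2                       # map [-1, 1] -> [0, 1]
        feat = model.encode_image(image)             # CLIP image embedding
        loss = -torch.cosine_similarity(feat, text).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return z.detach()
```
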
  • 2020 «🎯Back To Top»

    • (WIREs Data Mining and Knowledge Discovery 2020) A survey and taxonomy of adversarial neural networks for text-to-image synthesis, Jorge Agnese et al. [Paper]
    • (TPAMI 2020) Semantic Object Accuracy for Generative Text-to-Image Synthesis, Tobias Hinz et al. [Paper] [Code]
    • (IEEE TIP 2020) KT-GAN: Knowledge-Transfer Generative Adversarial Network for Text-to-Image Synthesis, Hongchen Tan et al. [Paper]
    • (ACM Trans 2020) End-to-End Text-to-Image Synthesis with Spatial Constrains, Min Wang et al. [Paper]
    • (Neural Networks) Image manipulation with natural language using Two-sided Attentive Conditional Generative Adversarial Network, Dawei Zhu et al. [Paper]
    • (IEEE Access 2020) TiVGAN: Text to Image to Video Generation With Step-by-Step Evolutionary Generator, Doyeon Kim et al. [Paper]
    • (IEEE Access 2020) Dualattn-GAN: Text to Image Synthesis With Dual Attentional Generative Adversarial Network, Yali Cai et al. [Paper]
    • (ICCL 2020) VICTR: Visual Information Captured Text Representation for Text-to-Image Multimodal Tasks, Soyeon Caren Han et al. [Paper] [Code]
    • (ECCV 2020) CPGAN: Content-Parsing Generative Adversarial Networks for Text-to-Image Synthesis, Jiadong Liang et al. [Paper] [Code]
    • (CVPR 2020) RiFeGAN: Rich Feature Generation for Text-to-Image Synthesis From Prior Knowledge, Jun Cheng et al. [Paper]
    • (CVPR 2020) CookGAN: Causality based Text-to-Image Synthesis, Bin Zhu et al. [Paper]
    • (CVPR 2020 - Workshop) SegAttnGAN: Text to Image Generation with Segmentation Attention, Yuchuan Gou et al. [Paper]
    • (IVPR 2020) PerceptionGAN: Real-world Image Construction from Provided Text through Perceptual Understanding, Kanish Garg et al. [Paper]
    • (COLING 2020) Leveraging Visual Question Answering to Improve Text-to-Image Synthesis, Stanislav Frolov et al. [Paper]
    • (IRCDL 2020) Text-to-Image Synthesis Based on Machine Generated Captions, Marco Menardi et al. [Paper]
    • (arXiv preprint 2020) MPG: A Multi-ingredient Pizza Image Generator with Conditional StyleGANs, Fangda Han et al. [Paper]
  • 2019 «🎯Back To Top»

    • (IEEE TCSVT 2019) Bridge-GAN: Interpretable Representation Learning for Text-to-image Synthesis, Mingkuan Yuan et al. [Paper] [Code]
    • (AAAI 2019) Perceptual Pyramid Adversarial Networks for Text-to-Image Synthesis, Minfeng Zhu et al. [Paper]
    • (AAAI 2019) Adversarial Learning of Semantic Relevance in Text to Image Synthesis, Miriam Cha et al. [Paper]
    • (NeurIPS 2019) Learn, Imagine and Create: Text-to-Image Generation from Prior Knowledge, Tingting Qiao et al. [Paper] [Code]
    • (NeurIPS 2019) Controllable Text-to-Image Generation, Bowen Li et al. [Paper] [Code]
    • (CVPR 2019) DM-GAN: Dynamic Memory Generative Adversarial Networks for Text-to-Image Synthesis, Minfeng Zhu et al. [Paper] [Code]
    • (CVPR 2019) Object-driven Text-to-Image Synthesis via Adversarial Training, Wenbo Li et al. [Paper] [Code]
    • (CVPR 2019) MirrorGAN: Learning Text-to-image Generation by Redescription, Tingting Qiao et al. [Paper] [Code]
    • (CVPR 2019) Text2Scene: Generating Abstract Scenes from Textual Descriptions, Fuwen Tan et al. [Paper] [Code]
    • (CVPR 2019) Semantics Disentangling for Text-to-Image Generation, Guojun Yin et al. [Paper] [Website]
    • (CVPR 2019) Text Guided Person Image Synthesis, Xingran Zhou et al. [Paper]
    • (ICCV 2019) Semantics-Enhanced Adversarial Nets for Text-to-Image Synthesis, Hongchen Tan et al. [Paper]
    • (ICCV 2019) Dual Adversarial Inference for Text-to-Image Synthesis, Qicheng Lao et al. [Paper]
    • (ICCV 2019) Tell, Draw, and Repeat: Generating and Modifying Images Based on Continual Linguistic Instruction, Alaaeldin El-Nouby et al. [Paper] [Code]
    • (BMVC 2019) MS-GAN: Text to Image Synthesis with Attention-Modulated Generators and Similarity-aware Discriminators, Fengling Mao et al. [Paper]
    • (arXiv preprint 2019) GILT: Generating Images from Long Text, Ori Bar El et al. [Paper] [Code]
  • 2018 «🎯Back To Top»

    • (TPAMI 2018) StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks, Han Zhang et al. [Paper] [Code]
    • (BMVC 2018) MC-GAN: Multi-conditional Generative Adversarial Network for Image Synthesis, Hyojin Park et al. [Paper] [Code]
    • (CVPR 2018) AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks, Tao Xu et al. [Paper] [Code]
    • (CVPR 2018) Photographic Text-to-Image Synthesis with a Hierarchically-nested Adversarial Network, Zizhao Zhang et al. [Paper] [Code]
    • (CVPR 2018) Inferring Semantic Layout for Hierarchical Text-to-Image Synthesis, Seunghoon Hong et al. [Paper]
    • (CVPR 2018) Image Generation from Scene Graphs, Justin Johnson et al. [Paper] [Code]
    • (ICLR 2018 - Workshop) ChatPainter: Improving Text to Image Generation using Dialogue, Shikhar Sharma et al. [Paper]
    • (ACMMM 2018) Text-to-image Synthesis via Symmetrical Distillation Networks, Mingkuan Yuan et al. [Paper]
    • (WACV 2018) C4Synth: Cross-Caption Cycle-Consistent Text-to-Image Synthesis, K. J. Joseph et al. [Paper]
    • (arXiv preprint 2018) Text to Image Synthesis Using Generative Adversarial Networks, Cristian Bodnar. [Paper]
    • (arXiv preprint 2018) Text-to-image-to-text translation using cycle consistent adversarial networks, Satya Krishna Gorti et al. [Paper] [Code]
  • 2017 «🎯Back To Top»

    • (ICCV 2017) StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks, Han Zhang et al. [Paper] [Code]
    • (ICIP 2017) I2T2I: Learning Text to Image Synthesis with Textual Data Augmentation, Hao Dong et al. [Paper] [Code]
    • (MLSP 2017) Adversarial nets with perceptual losses for text-to-image synthesis, Miriam Cha et al. [Paper]
  • 2016 «🎯Back To Top»

    • (ICML 2016) Generative Adversarial Text to Image Synthesis, Scott Reed et al. [Paper] [Code]
    • (NeurIPS 2016) Learning What and Where to Draw, Scott Reed et al. [Paper] [Code]

6. Other Related Works

  • Multimodality «🎯Back To Top»

    • (CVPR 2022) High-Resolution Image Synthesis with Latent Diffusion Models, Robin Rombach et al. [Paper] [Code]
      • 📚Text-to-Image, Conditional Latent Diffusion, Super-Resolution, Inpainting (a simplified latent-diffusion training step follows this list)
    • (arXiv preprint 2022) Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework, Peng Wang et al. [Paper] [Code] [Hugging Face]
      • 📚Text-to-Image Generation, Image Captioning, Text Summarization, Self-Supervised Image Classification, [SOTA] Referring Expression Comprehension, Visual Entailment, Visual Question Answering
    • (NeurIPS 2021) M6-UFC: Unifying Multi-Modal Controls for Conditional Image Synthesis via Non-Autoregressive Generative Transformers, Zhu Zhang et al. [Paper]
      • 📚Text-to-Image, Sketch-to-Image, Style Transfer, Image Inpainting, Multi-Modal Control to Image
    • (arXiv preprint 2021) ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation, Han Zhang et al. [Paper]
      • A pre-trained 10-billion parameter model: ERNIE-ViLG.
      • A large-scale dataset of 145 million high-quality Chinese image-text pairs.
      • 📚Text-to-Image, Image Captioning, Generative Visual Question Answering
    • (arXiv preprint 2021) Multimodal Conditional Image Synthesis with Product-of-Experts GANs, Xun Huang et al. [Paper] [Project]
      • 📚Text-to-Image, Segmentation-to-Image, Text+Segmentation/Sketch/Image → Image, Sketch+Segmentation/Image → Image, Segmentation+Image → Image
    • (arXiv preprint 2021) L-Verse: Bidirectional Generation Between Image and Text, Taehoon Kim et al. [Paper] [Code]
      • 📚Text-To-Image, Image-To-Text, Image Reconstruction
    • (arXiv preprint 2021) [💬Semantic Diffusion Guidance] More Control for Free! Image Synthesis with Semantic Diffusion Guidance, Xihui Liu et al. [Paper] [Project]
      • 📚Text-To-Image, Image-To-Image, Text+Image → Image
    • (arXiv preprint 2021) NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion, Chenfei Wu et al. [Paper] [Code]
      • 📚Text-To-Image, Sketch-to-Image, Image Completion, Text-Guided Image Manipulation, Text-to-Video, Video Prediction, Sketch-to-Video, Text-Guided Video Manipulation
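
The latent diffusion entry above moves the diffusion process from pixel space into the latent space of a pretrained autoencoder, which is what makes high-resolution text-to-image synthesis affordable. A deliberately simplified training step under assumed components (`encoder`, a text-conditional `denoiser`) and an assumed cosine noise schedule; the real method adds a KL/VQ-regularized autoencoder and cross-attention conditioning.

```python
import torch
import torch.nn.functional as F

def ldm_training_step(encoder, denoiser, x, text_emb, T=1000):
    """One simplified latent-diffusion training step (noise-prediction loss)."""
    with torch.no_grad():
        z0 = encoder(x)                                  # pixels -> compact latent
    t = torch.randint(0, T, (z0.shape[0],), device=z0.device)
    noise = torch.randn_like(z0)
    alpha_bar = torch.cos(0.5 * torch.pi * t.float() / T) ** 2  # assumed schedule
    a = alpha_bar.view(-1, 1, 1, 1)
    z_t = a.sqrt() * z0 + (1 - a).sqrt() * noise         # forward diffusion in latent space
    pred = denoiser(z_t, t, text_emb)                    # text-conditional noise prediction
    return F.mse_loss(pred, noise)
```
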
  • Text+Image → Image «🎯Back To Top»

    • (arXiv preprint 2022) [💬Image & Video Editing] Text2LIVE: Text-Driven Layered Image and Video Editing, Omer Bar-Tal et al. [Paper] [Project]
    • (Machine Vision and Applications 2022) Paired-D++ GAN for image manipulation with text, Duc Minh Vo et al. [Paper]
    • (CVPR 2022) [💬Hairstyle Transfer] HairCLIP: Design Your Hair by Text and Reference Image, Tianyi Wei et al. [Paper] [Code]
    • (CVPR 2022) DiffusionCLIP: Text-Guided Diffusion Models for Robust Image Manipulation, Gwanghyun Kim et al. [Paper]
    • (CVPR 2022) ManiTrans: Entity-Level Text-Guided Image Manipulation via Token-wise Semantic Alignment and Generation, Jianan Wang et al. [Paper] [Project]
    • (CVPR 2022) Blended Diffusion for Text-driven Editing of Natural Images, Omri Avrahami et al. [Paper] [Code] [Project]
    • (CVPR 2022) Predict, Prevent, and Evaluate: Disentangled Text-Driven Image Manipulation Empowered by Pre-Trained Vision-Language Model, Zipeng Xu et al. [Paper] [Code]
    • (CVPR 2022) Towards Implicit Text-Guided 3D Shape Generation, Zhengzhe Liu et al. [Paper] [Code]
    • (arXiv preprint 2022) [💬Multi-person Image Generation] Pose Guided Multi-person Image Generation From Text, Soon Yau Cheong et al. [Paper]
    • (arXiv preprint 2022) [💬Image Style Transfer] StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Translation, Peter Schaldenbrand et al. [Paper] [Dataset] [Code] [Demo]
    • (arXiv preprint 2022) [💬Image Style Transfer] Name Your Style: An Arbitrary Artist-aware Image Style Transfer, Zhi-Song Liu et al. [Paper]
    • (arXiv preprint 2022) [💬3D Avatar Generation] Text and Image Guided 3D Avatar Generation and Manipulation, Zehranaz Canfes et al. [Paper] [Project]
    • (arXiv preprint 2022) [💬Image Inpainting] NÜWA-LIP: Language Guided Image Inpainting with Defect-free VQGAN, Minheng Ni et al. [Paper]
    • (arXiv preprint 2021) [💬Text+Image → Video] Make It Move: Controllable Image-to-Video Generation with Text Descriptions, Yaosi Hu et al. [Paper]
    • (arXiv preprint 2021) [💬NeRF] CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields, Can Wang et al. [Paper] [Code] [Project]
    • (arXiv preprint 2021) [💬NeRF] Zero-Shot Text-Guided Object Generation with Dream Fields, Ajay Jain et al. [Paper] [Project]
    • (arXiv preprint 2021) [💬Style Transfer] CLIPstyler: Image Style Transfer with a Single Text Condition, Gihyun Kwon et al. [Paper] [Code]
    • (NeurIPS 2021) Instance-Conditioned GAN, Arantxa Casanova et al. [Paper] [Code]
    • (ICCV 2021) Language-Guided Global Image Editing via Cross-Modal Cyclic Mechanism, Wentao Jiang et al. [Paper]
    • (ICCV 2021) Talk-to-Edit: Fine-Grained Facial Editing via Dialog, Yuming Jiang et al. [Paper] [Project] [Code]
    • (ICCVW 2021) CIGLI: Conditional Image Generation from Language & Image, Xiaopeng Lu et al. [Paper] [Code]
    • (arXiv preprint 2021) StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery, Or Patashnik et al. [Paper] [Code]
    • (arXiv preprint 2021) Paint by Word, David Bau et al. [Paper]
    • (arXiv preprint 2021) Zero-Shot Text-to-Image Generation, Aditya Ramesh et al. [Paper] [Code] [Blog] [Model Card] [Colab]
    • (NeurIPS 2020) Lightweight Generative Adversarial Networks for Text-Guided Image Manipulation, Bowen Li et al. [Paper]
    • (CVPR 2020) ManiGAN: Text-Guided Image Manipulation, Bowen Li et al. [Paper] [Code]
    • (ACMMM 2020) Text-Guided Neural Image Inpainting, Lisai Zhang et al. [Paper] [Code]
    • (ACMMM 2020) Describe What to Change: A Text-guided Unsupervised Image-to-Image Translation Approach, Yahui Liu et al. [Paper]
    • (NeurIPS 2018) Text-adaptive generative adversarial networks: Manipulating images with natural language, Seonghyeon Nam et al. [Paper] [Code]
  • Layout → Image «🎯Back To Top»

    • (CVPR 2022) Interactive Image Synthesis with Panoptic Layout Generation, Bo Wang et al. [Paper]
    • (CVPR 2021 AI for Content Creation Workshop) High-Resolution Complex Scene Synthesis with Transformers, Manuel Jahn et al. [Paper]
    • (CVPR 2021) Context-Aware Layout to Image Generation with Enhanced Object Appearance, Sen He et al. [Paper] [Code]
  • Label-set → Semantic maps «🎯Back To Top»

    • (ECCV 2020) Controllable image synthesis via SegVAE, Yen-Chi Cheng et al. [Paper] [Code]
  • Speech → Image «🎯Back To Top»

    • (IEEE/ACM Transactions on Audio, Speech and Language Processing 2021) Generating Images From Spoken Descriptions, Xinsheng Wang et al. [Paper] [Code] [Project]
    • (INTERSPEECH 2020) [Extended Version👆] S2IGAN: Speech-to-Image Generation via Adversarial Learning, Xinsheng Wang et al. [Paper]
    • (IEEE Journal of Selected Topics in Signal Processing 2020) Direct Speech-to-Image Translation, Jiguo Li et al. [Paper] [Code] [Project]
  • Text → Visual Retrieval «🎯Back To Top»

    • (CVPRW 2021) TIED: A Cycle Consistent Encoder-Decoder Model for Text-to-Image Retrieval, Clint Sebastian et al. [Paper]
    • (CVPR 2021) T2VLAD: Global-Local Sequence Alignment for Text-Video Retrieval, Xiaohan Wang et al. [Paper]
    • (CVPR 2021) Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers, Antoine Miech et al. [Paper]
    • (IEEE Access 2019) Query is GAN: Scene Retrieval With Attentional Text-to-Image Generative Adversarial Network, Rintaro Yanagi et al. [Paper]
  • Text → Video «🎯Back To Top»

    • (arXiv preprint 2022) Video Diffusion Models, Jonathan Ho et al. [Paper] [Project]
    • (arXiv preprint 2021) [Generation Task] Transcript to Video: Efficient Clip Sequencing from Texts, Yu Xiong et al. [Paper] [Project]
    • (arXiv preprint 2021) GODIVA: Generating Open-DomaIn Videos from nAtural Descriptions, Chenfei Wu et al. [Paper]
    • (arXiv preprint 2021) Text2Video: Text-driven Talking-head Video Synthesis with Phonetic Dictionary, Sibo Zhang et al. [Paper]
    • (IEEE Access 2020) TiVGAN: Text to Image to Video Generation With Step-by-Step Evolutionary Generator, Doyeon Kim et al. [Paper]
    • (IJCAI 2019) Conditional GAN with Discriminative Filter Generation for Text-to-Video Synthesis, Yogesh Balaji et al. [Paper]
    • (IJCAI 2019) IRC-GAN: Introspective Recurrent Convolutional GAN for Text-to-video Generation, Kangle Deng et al. [Paper]
    • (AAAI 2018) Video Generation From Text, Yitong Li et al. [Paper]
    • (ACMMM 2017) To create what you tell: Generating videos from captions, Yingwei Pan et al. [Paper]
