
asarigun / TransGAN

License: MIT License
This is a re-implementation of TransGAN: Two Pure Transformers Can Make One Strong GAN (CVPR 2021) in PyTorch.

Programming Languages

Jupyter Notebook
Python

Projects that are alternatives of or similar to TransGAN

gan-image-similarity
InfoGAN inspired neural network trained on zap50k images (using Tensorflow + tf-slim). Intermediate layers of the discriminator network are used to do image similarity.
Stars: ✭ 111 (+94.74%)
Mutual labels:  gan
ezgan
An extremely simple generative adversarial network, built with TensorFlow
Stars: ✭ 36 (-36.84%)
Mutual labels:  gan
dcgan anime avatars
Automatically generate anime avatars with DCGAN, based on Keras.
Stars: ✭ 37 (-35.09%)
Mutual labels:  gan
SLE-GAN
Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis
Stars: ✭ 53 (-7.02%)
Mutual labels:  gan
DeepFlow
Pytorch implementation of "DeepFlow: History Matching in the Space of Deep Generative Models"
Stars: ✭ 24 (-57.89%)
Mutual labels:  gan
cgan-face-generator
Face generator from sketches using cGAN (pix2pix) model
Stars: ✭ 52 (-8.77%)
Mutual labels:  gan
SlimGAN
Slimmable Generative Adversarial Networks (AAAI 2021)
Stars: ✭ 20 (-64.91%)
Mutual labels:  gan
lecam-gan
Regularizing Generative Adversarial Networks under Limited Data (CVPR 2021)
Stars: ✭ 127 (+122.81%)
Mutual labels:  gan
keras-3dgan
Keras implementation of 3D Generative Adversarial Network.
Stars: ✭ 20 (-64.91%)
Mutual labels:  gan
EigenGAN-Tensorflow
EigenGAN: Layer-Wise Eigen-Learning for GANs (ICCV 2021)
Stars: ✭ 294 (+415.79%)
Mutual labels:  gan
Introduction-to-GAN
Introduction to Generative Adversarial Networks
Stars: ✭ 21 (-63.16%)
Mutual labels:  gan
TextBoxGAN
Generate text boxes from input words with a GAN.
Stars: ✭ 50 (-12.28%)
Mutual labels:  gan
DLSS
Deep Learning Super Sampling with Deep Convolutional Generative Adversarial Networks.
Stars: ✭ 88 (+54.39%)
Mutual labels:  gan
ADL2019
Applied Deep Learning (2019 Spring) @ NTU
Stars: ✭ 20 (-64.91%)
Mutual labels:  gan
HistoGAN
Reference code for the paper HistoGAN: Controlling Colors of GAN-Generated and Real Images via Color Histograms (CVPR 2021).
Stars: ✭ 158 (+177.19%)
Mutual labels:  gan
MUST-GAN
Pytorch implementation of CVPR2021 paper "MUST-GAN: Multi-level Statistics Transfer for Self-driven Person Image Generation"
Stars: ✭ 39 (-31.58%)
Mutual labels:  gan
Unsupervised-Anomaly-Detection-with-Generative-Adversarial-Networks
Unsupervised Anomaly Detection with Generative Adversarial Networks on MIAS dataset
Stars: ✭ 95 (+66.67%)
Mutual labels:  gan
AsymmetricGAN
[ACCV 2018 Oral] Dual Generator Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
Stars: ✭ 42 (-26.32%)
Mutual labels:  gan
Tensorflow DCGAN
Study Friendly Implementation of DCGAN in Tensorflow
Stars: ✭ 22 (-61.4%)
Mutual labels:  gan
VSGAN
VapourSynth Single Image Super-Resolution Generative Adversarial Network (GAN)
Stars: ✭ 124 (+117.54%)
Mutual labels:  gan

TransGAN: Two Transformers Can Make One Strong GAN [YouTube Video]

Paper Authors: Yifan Jiang, Shiyu Chang, Zhangyang Wang

CVPR 2021

This is a re-implementation of TransGAN: Two Transformers Can Make One Strong GAN, and That Can Scale Up (CVPR 2021) in PyTorch.

TransGAN is a Generative Adversarial Network (GAN) built entirely without convolutions; it instead relies on Transformer architectures, which have become popular since the introduction of the Vision Transformer (ViT). This implementation uses the CIFAR-10 dataset.
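As a rough illustration of what a convolution-free generator can look like, the sketch below maps a latent vector to a small grid of pixel tokens using only linear layers and standard Transformer encoder blocks. It is not the exact TransGAN architecture (which progressively upsamples the token grid); every name and size here is an illustrative assumption.

import torch
import torch.nn as nn

# Sketch only: a convolution-free generator stage built from linear layers
# and Transformer encoder blocks. Sizes are illustrative assumptions.
class TransformerGeneratorSketch(nn.Module):
    def __init__(self, latent_dim=128, embed_dim=384, grid=8, depth=4, heads=4):
        super().__init__()
        self.grid = grid
        # Project the latent vector to an initial grid of tokens (no convolutions).
        self.to_tokens = nn.Linear(latent_dim, grid * grid * embed_dim)
        self.pos_emb = nn.Parameter(torch.zeros(1, grid * grid, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=heads,
                                           dim_feedforward=4 * embed_dim,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.to_rgb = nn.Linear(embed_dim, 3)  # map each token to an RGB pixel

    def forward(self, z):
        b = z.size(0)
        x = self.to_tokens(z).view(b, self.grid * self.grid, -1) + self.pos_emb
        x = self.blocks(x)                                   # (B, grid*grid, E)
        img = self.to_rgb(x).permute(0, 2, 1)                # (B, 3, grid*grid)
        return torch.tanh(img.reshape(b, 3, self.grid, self.grid))

z = torch.randn(4, 128)
print(TransformerGeneratorSketch()(z).shape)  # torch.Size([4, 3, 8, 8])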

Generated CIFAR-10 samples at epochs 0, 40, 100, and 200.

Related Work - Vision Transformers (ViT)

In this implementation, a Vision Transformer (ViT) block is used as the discriminator. For more information about ViT, see the original paper here.

Credit for the ViT illustration: @lucidrains
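Below is a minimal sketch of a ViT-style discriminator: the image is split into non-overlapping patches, each patch is linearly embedded (no convolutions), a class token is prepended, the sequence passes through Transformer encoder blocks, and the class token is mapped to a single real/fake logit. This is not the repository's exact module; patch size, width, and depth are illustrative assumptions, and PyTorch's stock nn.TransformerEncoder stands in for the custom blocks.

import torch
import torch.nn as nn

class ViTDiscriminatorSketch(nn.Module):
    def __init__(self, img_size=32, patch=4, embed_dim=384, depth=4, heads=4):
        super().__init__()
        self.patch = patch
        num_patches = (img_size // patch) ** 2
        self.embed = nn.Linear(3 * patch * patch, embed_dim)  # patch embedding, no conv
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_emb = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=heads,
                                           dim_feedforward=4 * embed_dim,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, 1)  # real/fake logit from the class token

    def forward(self, img):                  # img: (B, 3, H, W)
        b, c, _, _ = img.shape
        p = self.patch
        # Rearrange into flattened non-overlapping patches: (B, N, 3*p*p).
        patches = (img.unfold(2, p, p).unfold(3, p, p)
                      .permute(0, 2, 3, 1, 4, 5)
                      .reshape(b, -1, c * p * p))
        x = self.embed(patches)
        x = torch.cat([self.cls_token.expand(b, -1, -1), x], dim=1) + self.pos_emb
        x = self.blocks(x)
        return self.head(x[:, 0])            # (B, 1)

print(ViTDiscriminatorSketch()(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 1])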

Installation

Before running train.py, make sure you have the libraries listed in requirements.txt installed. Also, create a ./fid_stat folder and download the fid_stats_cifar10_train.npz file into it. To save your model during training, create a ./checkpoint folder using mkdir checkpoint.
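For context, fid_stats_cifar10_train.npz holds precomputed Inception statistics (a mean vector and covariance matrix) for the CIFAR-10 training set, which are used to report FID. The sketch below shows how such a file is commonly consumed; the 'mu'/'sigma' key names follow the original FID reference code and are an assumption about this particular file.

import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    # FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * sqrt(S1 @ S2))
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real  # discard tiny imaginary parts from the matrix square root
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

# Load the precomputed real-data statistics; the key names are an assumption.
stats = np.load("./fid_stat/fid_stats_cifar10_train.npz")
mu_real, sigma_real = stats["mu"], stats["sigma"]
# mu_fake / sigma_fake would be computed from Inception features of generated images:
# fid = frechet_distance(mu_real, sigma_real, mu_fake, sigma_fake)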

Training

python train.py

Pretrained Model

You can find the pretrained model here. You can download it using:

wget https://drive.google.com/file/d/134GJRMxXFEaZA0dF-aPpDS84YjjeXPdE/view

or

curl gdrive.sh | bash -s https://drive.google.com/file/d/134GJRMxXFEaZA0dF-aPpDS84YjjeXPdE/view
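Once downloaded, the checkpoint can be inspected and loaded with torch.load; the file name and state-dict keys below are hypothetical placeholders, so check the keys of the file you actually downloaded.

import torch

# Hypothetical file name; use the path of the checkpoint you downloaded.
checkpoint = torch.load("./checkpoint/transgan_cifar10.pth", map_location="cpu")
print(checkpoint.keys() if isinstance(checkpoint, dict) else type(checkpoint))
# A typical pattern would then be:
# generator.load_state_dict(checkpoint["generator_state_dict"])  # key is an assumption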

License

MIT

Citation

@article{jiang2021transgan,
  title={TransGAN: Two Transformers Can Make One Strong GAN},
  author={Jiang, Yifan and Chang, Shiyu and Wang, Zhangyang},
  journal={arXiv preprint arXiv:2102.07074},
  year={2021}
}
@article{dosovitskiy2020,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
  journal={arXiv preprint arXiv:2010.11929},
  year={2020}
}
@inproceedings{zhao2020diffaugment,
  title={Differentiable Augmentation for Data-Efficient GAN Training},
  author={Zhao, Shengyu and Liu, Zhijian and Lin, Ji and Zhu, Jun-Yan and Han, Song},
  booktitle={Conference on Neural Information Processing Systems (NeurIPS)},
  year={2020}
}