
hwalsuklee / Tensorflow Generative Model Collections

Licence: apache-2.0
Collection of generative models in Tensorflow

Programming Languages

python

Projects that are alternatives of or similar to Tensorflow Generative Model Collections

Pytorch Generative Model Collections
Collection of generative models in Pytorch version.
Stars: ✭ 2,296 (-39.34%)
Mutual labels:  gan, mnist, wgan, wgan-gp, infogan, ebgan, lsgan, began, cgan, dragan, acgan, fashion-mnist
Fun-with-MNIST
Playing with MNIST. Machine Learning. Generative Models.
Stars: ✭ 23 (-99.39%)
Mutual labels:  mnist, infogan, vae, ebgan, wgan, lsgan, began, wgan-gp, acgan
Generative-Model
Repository for implementation of generative models with Tensorflow 1.x
Stars: ✭ 66 (-98.26%)
Mutual labels:  infogan, vae, wgan, lsgan, cgan, wgan-gp, generative-models
generative deep learning
Generative Deep Learning Sessions led by Anugraha Sinha (Machine Learning Tokyo)
Stars: ✭ 24 (-99.37%)
Mutual labels:  generative-model, vae, generative-adversarial-networks, wgan, wgan-gp
Generative adversarial networks 101
Keras implementations of Generative Adversarial Networks. GANs, DCGAN, CGAN, CCGAN, WGAN and LSGAN models with MNIST and CIFAR-10 datasets.
Stars: ✭ 138 (-96.35%)
Mutual labels:  gan, mnist, generative-adversarial-networks, wgan
Gan Tutorial
Simple Implementation of many GAN models with PyTorch.
Stars: ✭ 227 (-94%)
Mutual labels:  gan, mnist, wgan, wgan-gp
Ganotebooks
wgan, wgan2(improved, gp), infogan, and dcgan implementation in lasagne, keras, pytorch
Stars: ✭ 1,446 (-61.8%)
Mutual labels:  wgan, wgan-gp, infogan, dragan
GANs-Keras
GANs Implementations in Keras
Stars: ✭ 24 (-99.37%)
Mutual labels:  gan, infogan, wgan, cgan
Deep Learning With Python
Example projects I completed to understand Deep Learning techniques with Tensorflow. Please note that I no longer maintain this repository.
Stars: ✭ 134 (-96.46%)
Mutual labels:  gan, vae, variational-autoencoder
Video prediction
Stochastic Adversarial Video Prediction
Stars: ✭ 247 (-93.47%)
Mutual labels:  gan, vae, variational-autoencoder
MNIST-invert-color
Invert the color of MNIST images with PyTorch
Stars: ✭ 13 (-99.66%)
Mutual labels:  gan, mnist, generative-adversarial-networks
Pytorch-Basic-GANs
Simple Pytorch implementations of most used Generative Adversarial Network (GAN) varieties.
Stars: ✭ 101 (-97.33%)
Mutual labels:  wgan, cgan, wgan-gp
cgan-face-generator
Face generator from sketches using cGAN (pix2pix) model
Stars: ✭ 52 (-98.63%)
Mutual labels:  gan, generative-model, cgan
srVAE
VAE with RealNVP prior and Super-Resolution VAE in PyTorch. Code release for https://arxiv.org/abs/2006.05218.
Stars: ✭ 56 (-98.52%)
Mutual labels:  generative-model, vae, variational-autoencoder
Tf Exercise Gan
Tensorflow implementation of different GANs and their comparisons
Stars: ✭ 110 (-97.09%)
Mutual labels:  gan, mnist, wgan
Psgan
Periodic Spatial Generative Adversarial Networks
Stars: ✭ 108 (-97.15%)
Mutual labels:  gan, generative-model, generative-adversarial-networks
playing with vae
Comparing FC VAE / FCN VAE / PCA / UMAP on MNIST / FMNIST
Stars: ✭ 53 (-98.6%)
Mutual labels:  mnist, variational-autoencoder, fashion-mnist
Improved-Wasserstein-GAN-application-on-MRI-images
Improved Wasserstein GAN (WGAN-GP) application on medical (MRI) images
Stars: ✭ 23 (-99.39%)
Mutual labels:  wgan, wgan-gp, improved-wgan
Unified Gan Tensorflow
A Tensorflow implementation of GAN, WGAN and WGAN with gradient penalty.
Stars: ✭ 93 (-97.54%)
Mutual labels:  gan, wgan, wgan-gp
Generative Models
Comparison of Generative Models in Tensorflow
Stars: ✭ 96 (-97.46%)
Mutual labels:  gan, mnist, vae

tensorflow-generative-model-collections

Tensorflow implementation of various GANs and VAEs.

Related Repositories

Pytorch version

A Pytorch version of this repository is available at https://github.com/znxlwm/pytorch-generative-model-collections

"Are GANs Created Equal? A Large-Scale Study" Paper

https://github.com/google/compare_gan is the code that was used in the paper.
It provides IS/FID scores and rich experimental results for all GAN variants.

Generative Adversarial Networks (GANs)

Lists

Name Paper Link Value Function
GAN Arxiv
LSGAN Arxiv
WGAN Arxiv
WGAN_GP Arxiv
DRAGAN Arxiv
CGAN Arxiv
infoGAN Arxiv
ACGAN Arxiv
EBGAN Arxiv
BEGAN Arxiv
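
The Paper Link column links to each paper on arXiv, and the Value Function column is rendered as an image in the original README; neither is reproduced in this text version. As a reminder of what the value-function column contains, the standard objectives of the first three entries are, in the notation of the original papers (this is a summary from the literature, not extracted from this repository's code):

GAN:   \min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

LSGAN: \min_D \; \tfrac{1}{2}\mathbb{E}_{x \sim p_{\mathrm{data}}}[(D(x) - 1)^2] + \tfrac{1}{2}\mathbb{E}_{z \sim p_z}[D(G(z))^2], \qquad \min_G \; \tfrac{1}{2}\mathbb{E}_{z \sim p_z}[(D(G(z)) - 1)^2]

WGAN:  \min_G \max_{\|D\|_L \le 1} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[D(x)] - \mathbb{E}_{z \sim p_z}[D(G(z))]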

Variants of GAN structure

Results for mnist

The network architectures of the generator and discriminator are exactly the same as in the infoGAN paper.
For a fair comparison of the core ideas in all GAN variants, the network architectures are kept identical across implementations, except for EBGAN and BEGAN. A small modification is made for EBGAN/BEGAN, since they adopt an auto-encoder structure for the discriminator, but the capacity of the discriminator is kept comparable.
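
For orientation, that shared architecture is roughly: a discriminator with two 4x4/stride-2 convolutions (64 and 128 channels) followed by a 1024-unit fully connected layer, and a generator that mirrors it with fully connected and transposed-convolution layers. The following is a minimal sketch using the TF 1.x tf.layers API; it is illustrative only and does not reproduce the repository's ops.py helpers.

# Sketch of the shared InfoGAN-style MNIST architecture (illustrative only;
# the repository builds these layers via its own ops.py, not tf.layers directly).
import tensorflow as tf  # assumes a recent TF 1.x

def discriminator(x, is_training, reuse=False):
    # x: [batch, 28, 28, 1]
    with tf.variable_scope("discriminator", reuse=reuse):
        net = tf.nn.leaky_relu(tf.layers.conv2d(x, 64, 4, strides=2, padding="same"))
        net = tf.layers.conv2d(net, 128, 4, strides=2, padding="same")
        net = tf.nn.leaky_relu(tf.layers.batch_normalization(net, training=is_training))
        net = tf.layers.flatten(net)
        net = tf.layers.dense(net, 1024)
        net = tf.nn.leaky_relu(tf.layers.batch_normalization(net, training=is_training))
        logit = tf.layers.dense(net, 1)          # one output node for the GAN variants
        return tf.nn.sigmoid(logit), logit, net

def generator(z, is_training, reuse=False):
    # z: [batch, z_dim]
    with tf.variable_scope("generator", reuse=reuse):
        net = tf.nn.relu(tf.layers.batch_normalization(
            tf.layers.dense(z, 1024), training=is_training))
        net = tf.nn.relu(tf.layers.batch_normalization(
            tf.layers.dense(net, 128 * 7 * 7), training=is_training))
        net = tf.reshape(net, [-1, 7, 7, 128])
        net = tf.nn.relu(tf.layers.batch_normalization(
            tf.layers.conv2d_transpose(net, 64, 4, strides=2, padding="same"),
            training=is_training))
        out = tf.layers.conv2d_transpose(net, 1, 4, strides=2, padding="same")
        return tf.nn.sigmoid(out)                # [batch, 28, 28, 1]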

The following results can be reproduced with the command:

python main.py --dataset mnist --gan_type <TYPE> --epoch 25 --batch_size 64

Random generation

All results are randomly sampled.

(Image grid in the original README: random samples from GAN, LSGAN, WGAN, WGAN_GP, DRAGAN, EBGAN and BEGAN at epochs 2, 10 and 25.)

Conditional generation

Each row has the same noise vector and each column has the same label condition.

(Image grid in the original README: conditional samples from CGAN, ACGAN and infoGAN at epochs 1, 10 and 25.)

InfoGAN: Manipulating two continuous codes

Results for fashion-mnist

The comments on the network architecture for mnist also apply here.
Fashion-mnist is a recently proposed dataset consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes (T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle boot).

The following results can be reproduced with the command:

python main.py --dataset fashion-mnist --gan_type <TYPE> --epoch 40 --batch_size 64

Random generation

All results are randomly sampled.

(Image grid in the original README: random samples from GAN, LSGAN, WGAN, WGAN_GP, DRAGAN, EBGAN and BEGAN at epochs 1, 20 and 40.)

Conditional generation

Each row has the same noise vector and each column has the same label condition.

(Image grid in the original README: conditional samples from CGAN, ACGAN and infoGAN at epochs 1, 20 and 40.)

Without hyper-parameter tuning from the mnist version, ACGAN/infoGAN do not work as well as CGAN.
ACGAN tends to fall into mode collapse.
infoGAN tends to ignore the noise vector, so style variation within the same class cannot be represented.

InfoGAN: Manipulating two continuous codes

Some results for celebA

(to be added)

Variational Auto-Encoders (VAEs)

Lists

Name Paper Link Loss Function
VAE Arxiv
CVAE Arxiv
DVAE Arxiv (to be added)
AAE Arxiv (to be added)
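
As with the GAN table, the Loss Function column is shown as images in the original README and is not reproduced here. For reference, the standard VAE objective minimized in this setting (the negative evidence lower bound) is

\mathcal{L}(\theta, \phi; x) = -\,\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] + D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right), \qquad p(z) = \mathcal{N}(0, I).

CVAE additionally conditions both the encoder and decoder on the label y, i.e. q_\phi(z \mid x, y) and p_\theta(x \mid z, y).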

Variants of VAE structure

Results for mnist

The network architectures of the decoder (generator) and encoder (discriminator) are exactly the same as in the infoGAN paper. Only the number of output nodes in the encoder differs (2 x z_dim for VAE, 1 for GAN).
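
The 2 x z_dim outputs are the parameters (mean and a positive spread term) of the Gaussian posterior q(z|x), which feed the reparameterization trick. A minimal TF 1.x sketch is given below; net and z_dim are assumed to come from the encoder, and the variable names are illustrative rather than the repository's.

# Why the encoder has 2 * z_dim outputs: mean and spread of q(z|x). Sketch only.
import tensorflow as tf  # `net` = encoder's last hidden layer, `z_dim` = latent size

gaussian_params = tf.layers.dense(net, 2 * z_dim)
mu = gaussian_params[:, :z_dim]
sigma = 1e-6 + tf.nn.softplus(gaussian_params[:, z_dim:])   # keep the std. dev. positive

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
# so the sampling step stays differentiable w.r.t. mu and sigma.
eps = tf.random_normal(tf.shape(mu), 0.0, 1.0, dtype=tf.float32)
z = mu + sigma * eps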

The following results can be reproduced with the command:

python main.py --dataset mnist --gan_type <TYPE> --epoch 25 --batch_size 64

Random generation

All results are randomly sampled.

(Image grid in the original README: random samples from VAE and GAN at epochs 1, 10 and 25.)

GAN results are also given to compare images generated by VAE and GAN. The main difference (VAE generates smooth but blurry images, whereas GAN generates sharp images with artifacts) is clearly observed in the results.

Conditional generation

Each row has the same noise vector and each column has the same label condition.

(Image grid in the original README: conditional samples from CVAE and CGAN at epochs 1, 10 and 25.)

CGAN results are also given to compare images generated by CVAE and CGAN.

Learned manifold

The following results can be reproduced with the command:

python main.py --dataset mnist --gan_type VAE --epoch 25 --batch_size 64 --dim_z 2

Note that the dimension of the noise vector z is 2.
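
With --dim_z 2, the learned manifold can be visualized by decoding a regular grid of latent points. A minimal numpy sketch follows; decode() is a stand-in for running the trained decoder over a batch of z values and is not a function provided by this repository.

# Visualize a 2-D latent manifold by decoding a regular grid of z values.
# `decode(z_batch)` is a placeholder for the trained VAE decoder;
# it is assumed to return images of shape [batch, 28, 28].
import numpy as np

n = 20                                   # grid resolution per axis
grid = np.linspace(-2.0, 2.0, n)         # range of each latent coordinate
z_batch = np.array([[zx, zy] for zy in grid for zx in grid], dtype=np.float32)

images = decode(z_batch)                 # -> [n * n, 28, 28]

# Tile the decoded digits into one (n*28) x (n*28) canvas.
canvas = images.reshape(n, n, 28, 28).transpose(0, 2, 1, 3).reshape(n * 28, n * 28)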

(Image grid in the original README: the VAE's learned manifold at epochs 1, 10 and 25.)

Results for fashion-mnist

The comments on the network architecture for mnist also apply here.

The following results can be reproduced with the command:

python main.py --dataset fashion-mnist --gan_type <TYPE> --epoch 40 --batch_size 64

Random generation

All results are randomly sampled.

(Image grid in the original README: random samples from VAE and GAN at epochs 1, 20 and 40.)

GAN results are also given to compare images generated by VAE and GAN.

Conditional generation

Each row has the same noise vector and each column has the same label condition.

(Image grid in the original README: conditional samples from CVAE and CGAN at epochs 1, 20 and 40.)

CGAN results are also given to compare images generated by CVAE and CGAN.

Learned manifold

The following results can be reproduced with the command:

python main.py --dataset fashion-mnist --gan_type VAE --epoch 25 --batch_size 64 --dim_z 2

Note that the dimension of the noise vector z is 2.

(Image grid in the original README: the VAE's learned manifold at epochs 1, 10 and 25.)

Results for celebA

(to be added)

Folder structure

The following shows the basic folder structure.

├── main.py # gateway
├── data
│   ├── mnist # mnist data (not included in this repo)
│   │   ├── t10k-images-idx3-ubyte.gz
│   │   ├── t10k-labels-idx1-ubyte.gz
│   │   ├── train-images-idx3-ubyte.gz
│   │   └── train-labels-idx1-ubyte.gz
│   └── fashion-mnist # fashion-mnist data (not included in this repo)
│       ├── t10k-images-idx3-ubyte.gz
│       ├── t10k-labels-idx1-ubyte.gz
│       ├── train-images-idx3-ubyte.gz
│       └── train-labels-idx1-ubyte.gz
├── GAN.py # vanilla GAN
├── ops.py # some operations on layers
├── utils.py # utils
├── logs # log files for tensorboard to be saved here
└── checkpoint # model files to be saved here
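
For reference, the gzipped files above are in the standard IDX format (a 16-byte header for image files, an 8-byte header for label files), so they can be read with a few lines of numpy. The following is a generic sketch, independent of the repository's own loading code.

# Minimal reader for the gzipped IDX files listed above (sketch only;
# the repository ships its own data-loading utilities).
import gzip
import numpy as np

def load_images(path):
    with gzip.open(path, "rb") as f:
        data = np.frombuffer(f.read(), dtype=np.uint8, offset=16)  # skip 16-byte header
    return data.reshape(-1, 28, 28).astype(np.float32) / 255.0     # scale to [0, 1]

def load_labels(path):
    with gzip.open(path, "rb") as f:
        return np.frombuffer(f.read(), dtype=np.uint8, offset=8)   # skip 8-byte header

train_x = load_images("data/mnist/train-images-idx3-ubyte.gz")
train_y = load_labels("data/mnist/train-labels-idx1-ubyte.gz")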

Acknowledgements

This implementation is based on this repository and has been tested with Tensorflow 1.0+ on Windows 10 and Ubuntu 14.04.
