
germain-hug / GANs-Keras

Licence: other
GANs Implementations in Keras

Programming Languages

python

Projects that are alternatives to or similar to GANs-Keras

Pytorch Generative Model Collections
Collection of generative models in PyTorch.
Stars: ✭ 2,296 (+9466.67%)
Mutual labels:  gan, infogan, wgan, cgan
Generative-Model
Repository for implementation of generative models with Tensorflow 1.x
Stars: ✭ 66 (+175%)
Mutual labels:  infogan, dcgan, wgan, cgan
Tensorflow Generative Model Collections
Collection of generative models in Tensorflow
Stars: ✭ 3,785 (+15670.83%)
Mutual labels:  gan, infogan, wgan, cgan
Gan
Resources and Implementations of Generative Adversarial Nets: GAN, DCGAN, WGAN, CGAN, InfoGAN
Stars: ✭ 2,127 (+8762.5%)
Mutual labels:  gan, infogan, dcgan
Tf.gans Comparison
Implementations of (theoretical) generative adversarial networks and comparison without cherry-picking
Stars: ✭ 477 (+1887.5%)
Mutual labels:  gan, dcgan, wgan
Ganotebooks
wgan, wgan2(improved, gp), infogan, and dcgan implementation in lasagne, keras, pytorch
Stars: ✭ 1,446 (+5925%)
Mutual labels:  infogan, dcgan, wgan
Gan Tutorial
Simple Implementation of many GAN models with PyTorch.
Stars: ✭ 227 (+845.83%)
Mutual labels:  gan, dcgan, wgan
Pytorch-Basic-GANs
Simple Pytorch implementations of most used Generative Adversarial Network (GAN) varieties.
Stars: ✭ 101 (+320.83%)
Mutual labels:  dcgan, wgan, cgan
Wasserstein Gan
Chainer implementation of Wasserstein GAN
Stars: ✭ 95 (+295.83%)
Mutual labels:  gan, dcgan, wgan
Awesome Gans
Awesome Generative Adversarial Networks with tensorflow
Stars: ✭ 585 (+2337.5%)
Mutual labels:  gan, dcgan, wgan
Tf Exercise Gan
Tensorflow implementation of different GANs and their comparisons
Stars: ✭ 110 (+358.33%)
Mutual labels:  gan, dcgan, wgan
Generative adversarial networks 101
Keras implementations of Generative Adversarial Networks. GANs, DCGAN, CGAN, CCGAN, WGAN and LSGAN models with MNIST and CIFAR-10 datasets.
Stars: ✭ 138 (+475%)
Mutual labels:  gan, dcgan, wgan
Pytorch Gan
A minimal implementation (less than 150 lines of code, with visualization) of DCGAN/WGAN in PyTorch with Jupyter notebooks
Stars: ✭ 150 (+525%)
Mutual labels:  gan, wgan
Tensorflow Mnist Gan Dcgan
Tensorflow implementation of Generative Adversarial Networks (GAN) and Deep Convolutional Generative Adversarial Networks (DCGAN) for the MNIST dataset.
Stars: ✭ 163 (+579.17%)
Mutual labels:  gan, dcgan
Face generator
DCGAN face generator 🧑.
Stars: ✭ 146 (+508.33%)
Mutual labels:  gan, dcgan
Image generator
DCGAN image generator 🖼️.
Stars: ✭ 173 (+620.83%)
Mutual labels:  gan, dcgan
Catdcgan
A DCGAN that generate Cat pictures 🐱‍💻
Stars: ✭ 177 (+637.5%)
Mutual labels:  gan, dcgan
GAN-Anime-Characters
Applied several Generative Adversarial Network (GAN) techniques such as DCGAN, WGAN, and StyleGAN to generate anime faces and handwritten digits.
Stars: ✭ 43 (+79.17%)
Mutual labels:  dcgan, wgan
Semantic image inpainting
Semantic Image Inpainting
Stars: ✭ 140 (+483.33%)
Mutual labels:  gan, dcgan
Anogan Tf
Unofficial Tensorflow Implementation of AnoGAN (Anomaly GAN)
Stars: ✭ 218 (+808.33%)
Mutual labels:  gan, dcgan

GANs Implementations in Keras

Keras implementation of:

  • Deep Convolutional GANs (DCGAN)
  • Wasserstein GANs (WGAN)
  • Conditional GANs (cGAN)
  • InfoGAN

The DCGAN code was inspired by Jeremy Howard's course.

Requirements:

You will need Keras 1.2.2 with a TensorFlow backend.
To install dependencies, run pip install -r requirements.txt
For an explanation of the command-line parameters:

python3 main.py -h
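
For reference, a minimal requirements.txt consistent with the versions above might look like the following (an illustrative guess, not the repository's actual file; h5py is assumed here only because the pretrained weights ship as .h5 files):

keras==1.2.2
tensorflow
numpy
h5py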

DCGAN

Deep Convolutional GANs (DCGANs) were one of the first modifications made to the original GAN architecture to mitigate mode collapse. These improvements, sketched in code after this list, include:

  • Replacing pooling with strided convolutions
  • Using Batch-Normalization in both G and D
  • Starting G with a single fully connected layer and ending D with a flattening layer; the rest should be fully convolutional
  • Using LeakyReLU activations in D and ReLU in G, except for the last layer of G, which should use tanh
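
Below is a minimal sketch of these guidelines in Keras 1.x (matching the pinned Keras 1.2.2). Layer counts and sizes are illustrative assumptions for MNIST-scale 28x28 images, not the repository's exact architecture; channels-last ("tf") dim ordering is assumed.

from keras.models import Sequential
from keras.layers import Dense, Reshape, Flatten, Activation
from keras.layers.convolutional import Convolution2D, UpSampling2D
from keras.layers.normalization import BatchNormalization
from keras.layers.advanced_activations import LeakyReLU

def build_generator(noise_dim=100):
    G = Sequential()
    G.add(Dense(128 * 7 * 7, input_dim=noise_dim))  # single FC layer to start G
    G.add(BatchNormalization())
    G.add(Activation('relu'))                       # ReLU activations in G
    G.add(Reshape((7, 7, 128)))                     # the rest is fully convolutional
    G.add(UpSampling2D(size=(2, 2)))
    G.add(Convolution2D(64, 5, 5, border_mode='same'))
    G.add(BatchNormalization())
    G.add(Activation('relu'))
    G.add(UpSampling2D(size=(2, 2)))
    G.add(Convolution2D(1, 5, 5, border_mode='same'))
    G.add(Activation('tanh'))                       # tanh on the last layer of G
    return G

def build_discriminator(img_shape=(28, 28, 1)):
    D = Sequential()
    D.add(Convolution2D(64, 5, 5, subsample=(2, 2),  # strided conv replaces pooling
                        border_mode='same', input_shape=img_shape))
    D.add(LeakyReLU(0.2))                            # LeakyReLU activations in D
    D.add(Convolution2D(128, 5, 5, subsample=(2, 2), border_mode='same'))
    D.add(BatchNormalization())
    D.add(LeakyReLU(0.2))
    D.add(Flatten())                                 # end D with a flattening layer
    D.add(Dense(1, activation='sigmoid'))
    return D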


python3 main.py --type DCGAN --no-train --model weights/DCGAN.h5 # Running pretrained model
python3 main.py --type DCGAN # Retraining

WGAN

Following up on the DCGAN architecture, the Wasserstein GAN aims at leveraging a different distance metric between distributions to train G and D. More specifically, WGANs use the Earth Mover's (EM) distance, which has the nice property of being continuous and differentiable almost everywhere for feed-forward networks. In practice, computing the exact EM distance is intractable, but we can approximate it by clipping the discriminator weights. This ensures that D learns a K-Lipschitz function, as required by the dual formulation used to estimate the EM distance. Additionally, as illustrated in the sketch after this list, we:

  • Remove the sigmoid activation from D, leaving its output range unconstrained
  • Use the RMSprop optimizer instead of Adam
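
A minimal sketch of these WGAN-specific changes, assuming the same Keras 1.x setup as above (names like wasserstein_loss and clip_weights are illustrative, not the repository's actual identifiers):

import numpy as np
import keras.backend as K
from keras.optimizers import RMSprop

def wasserstein_loss(y_true, y_pred):
    # With y_true = +1 for real and -1 for fake samples, minimizing this
    # loss drives D(real) - D(fake) up, i.e. the EM distance estimate
    return K.mean(y_true * y_pred)

# D now ends in a plain Dense(1) with no sigmoid, compiled with RMSprop:
# D.compile(loss=wasserstein_loss, optimizer=RMSprop(lr=5e-5))

def clip_weights(D, c=0.01):
    # Clip every discriminator weight to [-c, c] after each update,
    # keeping D (approximately) K-Lipschitz
    for layer in D.layers:
        layer.set_weights([np.clip(w, -c, c) for w in layer.get_weights()])
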
python3 main.py --type WGAN --no-train --model weights/WGAN.h5 # Running pretrained model
python3 main.py --type WGAN # Retraining

cGAN

Conditional GANs are a variant of classic GANs that allows conditioning both G and D on an auxiliary input y. We do so by simply feeding y through an additional input layer in both G and D; in practice, y first goes through an FC layer (see the sketch after this list). This gives us two input variables when generating new images:

  • The random noise vector z
  • The conditional label y
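
A minimal sketch of this conditioning mechanism in the Keras 1.x functional API (sizes and the FC trunk are illustrative assumptions, not the repository's exact code):

from keras.models import Model
from keras.layers import Input, Dense, merge

noise_dim, label_dim = 100, 10
z = Input(shape=(noise_dim,))           # random noise vector z
y = Input(shape=(label_dim,))           # conditional label y (e.g. one-hot digit)
h_y = Dense(64, activation='relu')(y)   # y first goes through an FC layer
h = merge([z, h_y], mode='concat')      # condition G by concatenation
h = Dense(128, activation='relu')(h)
out = Dense(28 * 28, activation='tanh')(h)  # placeholder for the conv stack above
G = Model(input=[z, y], output=out)     # D is conditioned on y the same way
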
python3 main.py --type CGAN --no-train --model weights/CGAN.h5 # Running pretrained model
python3 main.py --type CGAN # Retraining

InfoGAN

The motivation behind the InfoGAN architecture is to learn a lower-dimensional, "disentangled" representation of the images to be generated. To do so, we introduce a latent code c that is concatenated with the noise vector z. During training, we then want to maximize the mutual information between the latent code c and the generated image G(z, c). In practice, as sketched below the list, we:

  • Feed c into G through an additional input layer
  • Create an auxiliary head Q that shares some of its weights with D, and train it to maximize the mutual information I(c; G(z, c))
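
A minimal sketch of the shared D/Q trunk in the Keras 1.x functional API (layer sizes and the flattened image input are illustrative assumptions). Training Q with a cross-entropy loss on the code c used to generate each sample maximizes a variational lower bound on I(c; G(z, c)):

from keras.models import Model
from keras.layers import Input, Dense

code_dim = 10
img = Input(shape=(28 * 28,))               # flattened image, for brevity
trunk = Dense(256, activation='relu')(img)  # layers shared by D and Q
d_out = Dense(1, activation='sigmoid')(trunk)         # real/fake head
q_out = Dense(code_dim, activation='softmax')(trunk)  # latent-code head Q
D = Model(input=img, output=d_out)
Q = Model(input=img, output=q_out)          # Q reuses (shares) the trunk weights
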
python3 main.py --type InfoGAN --no-train --model weights/InfoGAN_D.h5 # Running pretrained model
python3 main.py --type InfoGAN # Retraining