clvrai / SSGAN-Tensorflow

License: MIT
A Tensorflow implementation of Semi-supervised Learning Generative Adversarial Networks (NIPS 2016: Improved Techniques for Training GANs).

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to, or similar to, SSGAN-Tensorflow

Triple Gan
See Triple-GAN-V2 in PyTorch: https://github.com/taufikxu/Triple-GAN
Stars: ✭ 203 (-59.07%)
Mutual labels:  gan, generative-adversarial-network, semi-supervised-learning
Acgan Pytorch
Pytorch implementation of Conditional Image Synthesis with Auxiliary Classifier GANs
Stars: ✭ 57 (-88.51%)
Mutual labels:  gan, generative-adversarial-network, semi-supervised-learning
Gans In Action
Companion repository to GANs in Action: Deep learning with Generative Adversarial Networks
Stars: ✭ 748 (+50.81%)
Mutual labels:  gan, generative-adversarial-network, semi-supervised-learning
catgan pytorch
Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks
Stars: ✭ 50 (-89.92%)
Mutual labels:  generative-adversarial-network, gan, semi-supervised-learning
Cool Fashion Papers
👔👗🕶️🎩 Cool resources about Fashion + AI! (papers, datasets, workshops, companies, ...) (constantly updating)
Stars: ✭ 464 (-6.45%)
Mutual labels:  gan, generative-adversarial-network
Textgan Pytorch
TextGAN is a PyTorch framework for text generation models based on Generative Adversarial Networks (GANs).
Stars: ✭ 479 (-3.43%)
Mutual labels:  gan, generative-adversarial-network
Wassersteingan.tensorflow
Tensorflow implementation of Wasserstein GAN - arxiv: https://arxiv.org/abs/1701.07875
Stars: ✭ 419 (-15.52%)
Mutual labels:  gan, generative-adversarial-network
Tensorflow Tutorial
Tensorflow tutorial from basic to advanced (莫烦Python's Chinese AI tutorials)
Stars: ✭ 4,122 (+731.05%)
Mutual labels:  gan, generative-adversarial-network
Few Shot Patch Based Training
The official implementation of our SIGGRAPH 2020 paper Interactive Video Stylization Using Few-Shot Patch-Based Training
Stars: ✭ 313 (-36.9%)
Mutual labels:  gan, generative-adversarial-network
Gan Playground
GAN Playground - Experiment with Generative Adversarial Nets in your browser. An introduction to GANs.
Stars: ✭ 336 (-32.26%)
Mutual labels:  gan, generative-adversarial-network
T2f
T2F: text to face generation using Deep Learning
Stars: ✭ 494 (-0.4%)
Mutual labels:  gan, generative-adversarial-network
Generative Compression
TensorFlow Implementation of Generative Adversarial Networks for Extreme Learned Image Compression
Stars: ✭ 428 (-13.71%)
Mutual labels:  gan, generative-adversarial-network
Seq2seq Chatbot For Keras
This repository contains a new generative chatbot model based on seq2seq modeling.
Stars: ✭ 322 (-35.08%)
Mutual labels:  gan, generative-adversarial-network
Deep Learning Resources
Deep learning resources from beginner to advanced; a collection of deep learning materials for everyone
Stars: ✭ 422 (-14.92%)
Mutual labels:  gan, generative-adversarial-network
Psgan
PyTorch code for "PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer" (CVPR 2020 Oral)
Stars: ✭ 318 (-35.89%)
Mutual labels:  gan, generative-adversarial-network
Sdv
Synthetic Data Generation for tabular, relational and time series data.
Stars: ✭ 360 (-27.42%)
Mutual labels:  gan, generative-adversarial-network
Igan
Interactive Image Generation via Generative Adversarial Networks
Stars: ✭ 3,845 (+675.2%)
Mutual labels:  gan, generative-adversarial-network
Anycost Gan
[CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
Stars: ✭ 367 (-26.01%)
Mutual labels:  gan, generative-adversarial-network
Pytorch Rl
This repository contains model-free deep reinforcement learning algorithms implemented in Pytorch
Stars: ✭ 394 (-20.56%)
Mutual labels:  gan, generative-adversarial-network
Generative Models
Annotated, understandable, and visually interpretable PyTorch implementations of: VAE, BIRVAE, NSGAN, MMGAN, WGAN, WGANGP, LSGAN, DRAGAN, BEGAN, RaGAN, InfoGAN, fGAN, FisherGAN
Stars: ✭ 438 (-11.69%)
Mutual labels:  gan, generative-adversarial-network

Semi-supervised learning GAN in Tensorflow

As part of the implementation series of Joseph Lim's group at USC, our motivation is to accelerate (or sometimes delay) research in the AI community by promoting open-source projects. To this end, we implement state-of-the-art research papers and publicly share them with concise reports. Please visit our group's GitHub site for other projects.

This project was implemented by Shao-Hua Sun, and the code was reviewed by Jiayuan Mao before publication.

Descriptions

This project is a Tensorflow implementation of Semi-supervised Learning Generative Adversarial Networks proposed in the paper Improved Techniques for Training GANs. The idea is to exploit the samples generated by the GAN generator to boost the performance of image classification by improving generalization.

In short, the main idea is to train a single network that plays two roles: a classifier performing the image classification task, and a discriminator trained to distinguish samples produced by the generator from real data. More specifically, the discriminator/classifier takes an image as input and classifies it into n+1 classes, where n is the number of classes in the classification task: real samples are classified into the first n classes, and generated samples are classified into the (n+1)-th class.

The loss of this multi-task learning framework can be decomposed into the supervised loss

$$\mathcal{L}_{\text{supervised}} = -\,\mathbb{E}_{x,\,y \sim p_{\text{data}}}\left[\log p_{\text{model}}(y \mid x,\; y < n+1)\right],$$

and the GAN loss of the discriminator

$$\mathcal{L}_{\text{GAN}} = -\,\mathbb{E}_{x \sim p_{\text{data}}}\left[\log\big(1 - p_{\text{model}}(y = n+1 \mid x)\big)\right] \;-\; \mathbb{E}_{x \sim G}\left[\log p_{\text{model}}(y = n+1 \mid x)\right].$$

During training, we jointly minimize the total loss obtained by simply summing the two losses.
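To make the (n+1)-class formulation concrete, below is a minimal Tensorflow sketch of both loss terms, assuming the discriminator/classifier outputs (n+1)-way logits. The function name, the epsilon constant, and the simple generator objective are illustrative assumptions, not the repository's actual code.

import tensorflow.compat.v1 as tf

def ssgan_losses(real_logits, fake_logits, labels, n):
    """Hypothetical sketch. real_logits/fake_logits: [batch, n+1]
    discriminator outputs; labels: one-hot [batch, n] for labeled real data."""
    # Supervised loss: cross-entropy over the first n classes, since
    # p_model(y | x, y < n+1) is a softmax over the real classes only.
    supervised_loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(
            labels=labels, logits=real_logits[:, :n]))

    # p_model(y = n+1 | x): probability assigned to the "generated" class.
    p_fake_given_real = tf.nn.softmax(real_logits)[:, n]
    p_fake_given_fake = tf.nn.softmax(fake_logits)[:, n]

    eps = 1e-8  # avoids log(0)
    gan_loss = (-tf.reduce_mean(tf.math.log(1.0 - p_fake_given_real + eps))
                - tf.reduce_mean(tf.math.log(p_fake_given_fake + eps)))

    d_loss = supervised_loss + gan_loss  # total loss, as described above

    # One simple generator objective: push generated samples toward
    # the first n (real) classes.
    g_loss = -tf.reduce_mean(tf.math.log(1.0 - p_fake_given_fake + eps))
    return d_loss, g_loss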

The implemented model is trained and tested on three publicly available datasets: MNIST, SVHN, and CIFAR-10.

Note that this implementation follows only the main idea of the original paper and differs considerably in implementation details such as the model architectures, hyperparameters, and the optimizer used. Also, some useful training tricks applied in this implementation are described at the end of this README.

*This code is still being developed and subject to change.

Prerequisites

  • Python 2.7 or 3.x
  • Tensorflow
  • NumPy and SciPy
  • h5py (datasets are stored as HDF5 files)

Usage

Download datasets with:

$ python download.py --dataset MNIST SVHN CIFAR10

Train models with downloaded datasets:

$ python trainer.py --dataset MNIST
$ python trainer.py --dataset SVHN
$ python trainer.py --dataset CIFAR10

Test models with saved checkpoints:

$ python evaler.py --dataset MNIST --checkpoint ckpt_dir
$ python evaler.py --dataset SVHN --checkpoint ckpt_dir
$ python evaler.py --dataset CIFAR10 --checkpoint ckpt_dir

Here, ckpt_dir points to a specific saved checkpoint and should look like: train_dir/default-MNIST_lr_0.0001_update_G5_D1-20170101-194957/model-1001

Train and test your own datasets:

  • Create a directory:
$ mkdir datasets/YOUR_DATASET
  • Store your data as an HDF5 file datasets/YOUR_DATASET/data.hy, where each data point contains (see the sketch after this list):
    • 'image': an array of shape [h, w, c], where c is the number of channels (1 for grayscale images, 3 for color images)
    • 'label': a one-hot vector
  • Maintain a list datasets/YOUR_DATASET/id.txt containing the ids of all data points
  • Modify trainer.py, including args, data_info, etc.
  • Finally, train and test models:
$ python trainer.py --dataset YOUR_DATASET
$ python evaler.py --dataset YOUR_DATASET
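
As a concrete example, the following sketch writes data.hy and id.txt in the format described above. The toy random data and the one-group-per-id layout are illustrative assumptions; adapt it to your own data and double-check against the loader in the repository.

import os
import h5py
import numpy as np

# Hypothetical toy data; replace with your own images and labels.
num_classes = 10
images = np.random.randint(0, 256, size=(100, 32, 32, 3), dtype=np.uint8)
labels = np.random.randint(0, num_classes, size=(100,))

os.makedirs('datasets/YOUR_DATASET', exist_ok=True)
with h5py.File('datasets/YOUR_DATASET/data.hy', 'w') as f, \
        open('datasets/YOUR_DATASET/id.txt', 'w') as id_file:
    for i, (image, y) in enumerate(zip(images, labels)):
        data_id = str(i)
        grp = f.create_group(data_id)  # one group per data point
        grp['image'] = image           # shape [h, w, c]
        one_hot = np.zeros(num_classes, dtype=np.float32)
        one_hot[y] = 1.0
        grp['label'] = one_hot         # one-hot label vector
        id_file.write(data_id + '\n')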

Results

MNIST

  • Generated samples (at epoch 100)
  • Generated samples from the first 40 epochs

SVHN

  • Generated samples (at epoch 100)
  • Generated samples from the first 160 epochs

CIFAR-10

  • Generated samples (at epoch 1000)
  • Generated samples from the first 200 epochs

Training details

MNIST

  • The supervised loss
  • The discriminator loss: D_loss_real, D_loss_fake, and the total D_loss
  • The generator loss: G_loss
  • Classification accuracy

SVHN

  • The supervised loss
  • The discriminator loss: D_loss_real, D_loss_fake, and the total D_loss
  • The generator loss: G_loss
  • Classification accuracy

CIFAR-10

  • The supervised loss
  • The discriminator loss: D_loss_real, D_loss_fake, and the total D_loss
  • The generator loss: G_loss
  • Classification accuracy

Training tricks

  • To keep the discriminator network from converging too quickly:
    • The generator network is updated more frequently.
    • A higher learning rate is used for training the generator.
  • One-sided label smoothing is applied to the positive labels.
  • Gradient clipping is applied to stabilize training.
  • A reconstruction loss with an annealed weight is applied as an auxiliary loss to help the generator escape its initial local minimum.
  • The Adam optimizer is used with a higher momentum.
  • Please refer to the code for more details; a rough sketch of the optimizer-side tricks follows this list.
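
Below is a rough Tensorflow 1.x-style sketch of the optimizer-side tricks: separate learning rates for the generator and discriminator, Adam momentum, gradient clipping, and one-sided label smoothing (shown for a generic sigmoid discriminator head rather than the (n+1)-class head above). The learning rates, clip norm, beta1, and smoothing value are assumed for illustration, not the repository's actual settings.

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

def build_train_ops(g_loss, d_loss, g_vars, d_vars,
                    g_lr=1e-3, d_lr=1e-4, clip_norm=5.0):
    """Hypothetical helper: a higher learning rate for the generator,
    Adam with a high momentum term, and gradient clipping."""
    g_opt = tf.train.AdamOptimizer(g_lr, beta1=0.9)
    d_opt = tf.train.AdamOptimizer(d_lr, beta1=0.9)

    def minimize_clipped(opt, loss, var_list):
        # Clip gradients by norm to stabilize training.
        gvs = opt.compute_gradients(loss, var_list=var_list)
        gvs = [(tf.clip_by_norm(g, clip_norm), v)
               for g, v in gvs if g is not None]
        return opt.apply_gradients(gvs)

    return (minimize_clipped(g_opt, g_loss, g_vars),
            minimize_clipped(d_opt, d_loss, d_vars))

def smoothed_real_loss(d_real_logits, smooth=0.9):
    # One-sided label smoothing: real targets become 0.9 instead of 1.0,
    # while fake targets stay at 0.
    return tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.fill(tf.shape(d_real_logits), smooth),
        logits=d_real_logits))

In the training loop, the generator would then take several steps per discriminator step (cf. the update_G5_D1 tag in the checkpoint directory name above).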

Related works

Acknowledgement

Part of the code is from an unpublished project with Jongwook Choi.
