
amirbar / Rnn.wgan

Code for training and evaluation of the model from "Language Generation with Recurrent Generative Adversarial Networks without Pre-training"


Projects that are alternatives of or similar to Rnn.wgan

Generative adversarial networks 101
Keras implementations of Generative Adversarial Networks. GANs, DCGAN, CGAN, CCGAN, WGAN and LSGAN models with MNIST and CIFAR-10 datasets.
Stars: ✭ 138 (-45.24%)
Mutual labels:  gan, gans, wgan
Tf.gans Comparison
Implementations of (theoretical) generative adversarial networks and comparison without cherry-picking
Stars: ✭ 477 (+89.29%)
Mutual labels:  gan, gans, wgan
Gan Tutorial
Simple Implementation of many GAN models with PyTorch.
Stars: ✭ 227 (-9.92%)
Mutual labels:  gan, wgan
Tagan
An official PyTorch implementation of the paper "Text-Adaptive Generative Adversarial Networks: Manipulating Images with Natural Language", NeurIPS 2018
Stars: ✭ 97 (-61.51%)
Mutual labels:  gan, gans
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+4238.49%)
Mutual labels:  gan, gans
Doppelganger
[IMC 2020 (Best Paper Finalist)] Using GANs for Sharing Networked Time Series Data: Challenges, Initial Promise, and Open Questions
Stars: ✭ 97 (-61.51%)
Mutual labels:  gan, gans
Pytorch Generative Adversarial Networks
A very simple generative adversarial network (GAN) in PyTorch
Stars: ✭ 1,345 (+433.73%)
Mutual labels:  gan, gans
Gdwct
Official PyTorch implementation of GDWCT (CVPR 2019, oral)
Stars: ✭ 122 (-51.59%)
Mutual labels:  gan, gans
Gans In Action
Companion repository to GANs in Action: Deep learning with Generative Adversarial Networks
Stars: ✭ 748 (+196.83%)
Mutual labels:  gan, gans
Pytorch Gan
A minimal implementation (less than 150 lines of code with visualization) of DCGAN/WGAN in PyTorch with Jupyter notebooks
Stars: ✭ 150 (-40.48%)
Mutual labels:  gan, wgan
Starnet
StarNet
Stars: ✭ 141 (-44.05%)
Mutual labels:  gan, gans
A Pytorch Tutorial To Super Resolution
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network | a PyTorch Tutorial to Super-Resolution
Stars: ✭ 157 (-37.7%)
Mutual labels:  gan, gans
Wasserstein Gan
Chainer implementation of Wasserstein GAN
Stars: ✭ 95 (-62.3%)
Mutual labels:  gan, wgan
Unified Gan Tensorflow
A Tensorflow implementation of GAN, WGAN and WGAN with gradient penalty.
Stars: ✭ 93 (-63.1%)
Mutual labels:  gan, wgan
Pytorch Cyclegan And Pix2pix
Image-to-Image Translation in PyTorch
Stars: ✭ 16,477 (+6438.49%)
Mutual labels:  gan, gans
Pacgan
[NeurIPS 2018] [JSAIT] PacGAN: The power of two samples in generative adversarial networks
Stars: ✭ 67 (-73.41%)
Mutual labels:  gan, gans
Tf Exercise Gan
Tensorflow implementation of different GANs and their comparisons
Stars: ✭ 110 (-56.35%)
Mutual labels:  gan, wgan
Textgan Pytorch
TextGAN is a PyTorch framework for Generative Adversarial Networks (GANs) based text generation models.
Stars: ✭ 479 (+90.08%)
Mutual labels:  gan, text-generation
Awesome Gans
Awesome Generative Adversarial Networks with tensorflow
Stars: ✭ 585 (+132.14%)
Mutual labels:  gan, wgan
Iseebetter
iSeeBetter: Spatio-Temporal Video Super Resolution using Recurrent-Generative Back-Projection Networks | Python3 | PyTorch | GANs | CNNs | ResNets | RNNs | Published in Springer Journal of Computational Visual Media, September 2020, Tsinghua University Press
Stars: ✭ 202 (-19.84%)
Mutual labels:  gan, gans

Language Generation with Recurrent Generative Adversarial Networks without Pre-training

Code for training and evaluation of the model from "Language Generation with Recurrent Generative Adversarial Networks without Pre-training".

A short summary of the paper is available here.

Sample outputs (32 chars)

" There has been to be a place w
On Friday , the stories in Kapac
From should be taken to make it 
He is conference for the first t
For a lost good talks to ever ti

Training

To start training the CL+VL+TH model, first download the dataset, available at http://www.statmt.org/lm-benchmark/, and extract it into the ./data directory.
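
For example, on a Unix-like system (the archive name below is inferred from the default DATA_DIR and may differ; check the benchmark page for the current file):

curl -O http://www.statmt.org/lm-benchmark/1-billion-word-language-modeling-benchmark-r13output.tar.gz
mkdir -p data && tar -xzf 1-billion-word-language-modeling-benchmark-r13output.tar.gz -C ./data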

Then use the following command:

python curriculum_training.py

The following packages are required:

  • Python 2.7
  • TensorFlow 1.1
  • Scipy
  • Matplotlib
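
On a Python 2.7 setup, the dependencies can usually be installed with pip, for example (availability of these exact versions may vary on modern systems):

pip install tensorflow==1.1.0 scipy matplotlib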

The following parameters can be configured:

LOGS_DIR: Path to save model checkpoints and samples during training (defaults to './logs/')
DATA_DIR: Path to load the data from (defaults to './data/1-billion-word-language-modeling-benchmark-r13output/')
CKPT_PATH: Path to checkpoint file when restoring a saved model
BATCH_SIZE: Size of batch (defaults to 64)
CRITIC_ITERS: Number of iterations for the discriminator (defaults to 10)
GEN_ITERS: Number of iterations for the generator (defaults to 50)
MAX_N_EXAMPLES: Number of samples to load from dataset (defaults to 10000000)
GENERATOR_MODEL: Name of generator model (currently only 'Generator_GRU_CL_VL_TH' is available)
DISCRIMINATOR_MODEL: Name of discriminator model (currently only 'Discriminator_GRU' is available)
PICKLE_PATH: Path to PKL directory to hold cached pickle files (defaults to './pkl')
ITERATIONS_PER_SEQ_LENGTH: Number of iterations to run for each sequence length in the curriculum training (defaults to 15000; see the sketch after this list)
NOISE_STDEV: Standard deviation for the noise vector (defaults to 10.0)
DISC_STATE_SIZE: Discriminator GRU state size (defaults to 512)
GEN_STATE_SIZE: Generator GRU state size (defaults to 512)
TRAIN_FROM_CKPT: Boolean, set to True to restore from checkpoint (defaults to False)
GEN_GRU_LAYERS: Number of GRU layers for the generator (defaults to 1)
DISC_GRU_LAYERS: Number of GRU layers for the discriminator (defaults to 1)
START_SEQ: Sequence length to start the curriculum learning with (defaults to 1)
END_SEQ: Sequence length to end the curriculum learning with (defaults to 32)
SAVE_CHECKPOINTS_EVERY: Number of steps between checkpoint saves (defaults to 25000)
LIMIT_BATCH: Boolean that indicates whether to limit the batch size (defaults to True)
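
Taken together, these parameters describe a curriculum that grows the sequence length from START_SEQ to END_SEQ, training for ITERATIONS_PER_SEQ_LENGTH iterations at each length. The sketch below is one plausible reading of how CRITIC_ITERS and GEN_ITERS fit in, not the repo's actual training loop; train_discriminator and train_generator are placeholder names:

# Hypothetical sketch of the curriculum schedule implied by the parameters above.
# train_discriminator / train_generator are placeholders, not functions from this repo.
for seq_length in range(START_SEQ, END_SEQ + 1):      # curriculum stages, e.g. 1..32
    for step in range(ITERATIONS_PER_SEQ_LENGTH):     # e.g. 15000 iterations per stage
        for _ in range(CRITIC_ITERS):                 # discriminator updates
            train_discriminator(seq_length)
        for _ in range(GEN_ITERS):                    # generator updates
            train_generator(seq_length)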

Parameters can be set either by changing their values in the config file or by passing them on the command line:

python curriculum_training.py --START_SEQ=1 --END_SEQ=32
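
Under the hood, TensorFlow 1.x programs typically expose such parameters through tf.app.flags, which is what makes the --NAME=value syntax work. A minimal sketch of that pattern, assuming the repo follows it (its actual config file may be organized differently):

import tensorflow as tf

flags = tf.app.flags
flags.DEFINE_integer('START_SEQ', 1, 'sequence length to start the curriculum with')
flags.DEFINE_integer('END_SEQ', 32, 'sequence length to end the curriculum with')
FLAGS = flags.FLAGS  # command-line arguments such as --END_SEQ=32 override the defaults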

Generating text

The generate.py script will generate BATCH_SIZE samples using a saved model. It should be run with the same parameters that were used to train the model (if they differ from the default values). For example:

python generate.py --CKPT_PATH=/path/to/checkpoint/seq-32/ckp --DISC_GRU_LAYERS=2 --GEN_GRU_LAYERS=2

(If your model has not reached stage 32 of the curriculum, change the '32' in the path above to the highest stage your model was trained on.)
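
For reference, restoring a checkpoint in TensorFlow 1.x follows the standard tf.train.Saver pattern; generate.py handles this internally, so the sketch below is illustrative only:

import tensorflow as tf

# Build the generator graph first, with the same GEN_GRU_LAYERS etc. used in training.
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, '/path/to/checkpoint/seq-32/ckp')  # same value as --CKPT_PATH
    # Running the generator's sampling op then yields BATCH_SIZE samples.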

Evaluating text

To evaluate samples using our %-IN-TEST-n metrics, use the following command, pointing it to a text file in which each row is a sample:

python evaluate.py --INPUT_SAMPLE=/path/to/samples.txt
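
The metric measures the fraction of n-grams in the generated samples that also occur in a held-out test set. A simplified, hypothetical sketch of the idea (evaluate.py implements the official version described in the paper):

def ngrams(tokens, n):
    # All contiguous n-grams of a token list, as tuples.
    return zip(*(tokens[i:] for i in range(n)))

def percent_in_test(samples, test_text, n=4):
    # Percentage of n-grams from the generated samples that appear in the test set.
    test_ngrams = set(ngrams(test_text.split(), n))
    sample_ngrams = [g for line in samples for g in ngrams(line.split(), n)]
    hits = sum(g in test_ngrams for g in sample_ngrams)
    return 100.0 * hits / max(len(sample_ngrams), 1)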

Reference

If you found this code useful, please cite the following paper:

@article{press2017language,
  title={Language Generation with Recurrent Generative Adversarial Networks without Pre-training},
  author={Press, Ofir and Bar, Amir and Bogin, Ben and Berant, Jonathan and Wolf, Lior},
  journal={arXiv preprint arXiv:1706.01399},
  year={2017}
}

Acknowledgments

This repository is based on the code published in Improved Training of Wasserstein GANs.
