
jwyang / LR-GAN.pytorch

PyTorch code for our ICLR 2017 paper "Layered-Recursive GAN for image generation"


Projects that are alternatives of or similar to LR-GAN.pytorch

Dragan
A stable algorithm for GAN training
Stars: ✭ 189 (+30.34%)
Mutual labels:  gan, unsupervised-learning
UEGAN
[TIP2020] Pytorch implementation of "Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network"
Stars: ✭ 68 (-53.1%)
Mutual labels:  gan, unsupervised-learning
Iseebetter
iSeeBetter: Spatio-Temporal Video Super Resolution using Recurrent-Generative Back-Projection Networks | Python3 | PyTorch | GANs | CNNs | ResNets | RNNs | Published in Springer Journal of Computational Visual Media, September 2020, Tsinghua University Press
Stars: ✭ 202 (+39.31%)
Mutual labels:  gan, unsupervised-learning
Distancegan
Pytorch implementation of "One-Sided Unsupervised Domain Mapping" NIPS 2017
Stars: ✭ 180 (+24.14%)
Mutual labels:  gan, unsupervised-learning
Context Encoder
[CVPR 2016] Unsupervised Feature Learning by Image Inpainting using GANs
Stars: ✭ 731 (+404.14%)
Mutual labels:  gan, unsupervised-learning
Gan Sandbox
Vanilla GAN implemented on top of keras/tensorflow enabling rapid experimentation & research. Branches correspond to implementations of stable GAN variations (i.e. ACGan, InfoGAN) and other promising variations of GANs like conditional and Wasserstein.
Stars: ✭ 210 (+44.83%)
Mutual labels:  gan, unsupervised-learning
catgan pytorch
Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks
Stars: ✭ 50 (-65.52%)
Mutual labels:  gan, unsupervised-learning
Enlightengan
[IEEE TIP'2021] "EnlightenGAN: Deep Light Enhancement without Paired Supervision" by Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, Zhangyang Wang
Stars: ✭ 434 (+199.31%)
Mutual labels:  gan, unsupervised-learning
All About The Gan
All About the GANs(Generative Adversarial Networks) - Summarized lists for GAN
Stars: ✭ 630 (+334.48%)
Mutual labels:  gan, unsupervised-learning
Hidt
Official repository for the paper "High-Resolution Daytime Translation Without Domain Labels" (CVPR2020, Oral)
Stars: ✭ 513 (+253.79%)
Mutual labels:  gan, unsupervised-learning
Hypergan
Composable GAN framework with api and user interface
Stars: ✭ 1,104 (+661.38%)
Mutual labels:  gan, unsupervised-learning
Discogan Pytorch
PyTorch implementation of "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks"
Stars: ✭ 961 (+562.76%)
Mutual labels:  gan, unsupervised-learning
Oneshottranslation
Pytorch implementation of "One-Shot Unsupervised Cross Domain Translation" NIPS 2018
Stars: ✭ 135 (-6.9%)
Mutual labels:  gan, unsupervised-learning
Human Video Generation
Human Video Generation Paper List
Stars: ✭ 139 (-4.14%)
Mutual labels:  gan
Starnet
StarNet
Stars: ✭ 141 (-2.76%)
Mutual labels:  gan
Nice Gan Pytorch
Official PyTorch implementation of NICE-GAN: Reusing Discriminators for Encoding: Towards Unsupervised Image-to-Image Translation
Stars: ✭ 140 (-3.45%)
Mutual labels:  gan
Complete Life Cycle Of A Data Science Project
Complete-Life-Cycle-of-a-Data-Science-Project
Stars: ✭ 140 (-3.45%)
Mutual labels:  unsupervised-learning
Spherical Text Embedding
[NeurIPS 2019] Spherical Text Embedding
Stars: ✭ 143 (-1.38%)
Mutual labels:  unsupervised-learning
Glcic Pytorch
A High-Quality PyTorch Implementation of "Globally and Locally Consistent Image Completion".
Stars: ✭ 141 (-2.76%)
Mutual labels:  gan
Isolation Forest
A Spark/Scala implementation of the isolation forest unsupervised outlier detection algorithm.
Stars: ✭ 139 (-4.14%)
Mutual labels:  unsupervised-learning

PyTorch code for Layered Recursive Generative Adversarial Networks

Introduction

This is the PyTorch implementation of our ICLR 2017 paper "LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation".

In our paper, we proposed LR-GAN to generate images layer by layer and recursively, motivated by the fact that images naturally have structure and context. As shown below, LR-GAN first generates a background image, and then generates foregrounds with appearance, pose, and shape. Afterwards, LR-GAN places each foreground at an appropriate location on the background.

In this way, LR-GAN significantly reduces the blending between background and foregrounds. Both the qualitative and quantitative comparisons indicate that LR-GAN generates better and sharper images than the baseline DCGAN model.
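
To make the layered, recursive composition concrete, below is a minimal PyTorch sketch of the compositing step. The tensor names, shapes, and blending formula are illustrative assumptions rather than the repository's actual API; the repository performs the warped paste with its CUDA spatial transformer with mask (STNM), whereas this sketch uses affine_grid and grid_sample from torch.nn.functional:

import torch
import torch.nn.functional as F

def compose(bg, fg, mask, theta):
    # bg:    (N, 3, H, W) canvas so far (the background at the first step)
    # fg:    (N, 3, H, W) foreground appearance
    # mask:  (N, 1, H, W) foreground shape mask in [0, 1]
    # theta: (N, 2, 3)    affine pose (scale + translation) of the foreground
    grid = F.affine_grid(theta, bg.size(), align_corners=False)
    fg_w = F.grid_sample(fg, grid, align_corners=False)
    m_w = F.grid_sample(mask, grid, align_corners=False)
    # Alpha-composite: the mask decides where the foreground overwrites the canvas.
    return m_w * fg_w + (1.0 - m_w) * bg

# ntimestep = 3 (one background + two foregrounds, as in MNIST-TWO below):
canvas = torch.rand(4, 3, 64, 64)                # stand-in for a generated background
theta = torch.tensor([[0.8, 0.0, 0.1],
                      [0.0, 0.8, 0.1]]).expand(4, 2, 3)
for _ in range(2):                               # two foreground timesteps
    fg, mask = torch.rand(4, 3, 64, 64), torch.rand(4, 1, 64, 64)
    canvas = compose(canvas, fg, mask, theta)

With ntimestep = 2 the loop body runs once, which is the setting used for most of the experiments below.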

Disclaimer

This is a reimplementation of LR-GAN in PyTorch, developed on the basis of PyTorch DCGAN. Our original code was implemented in Torch during the first author's internship. All the results presented in our paper were obtained with that Torch code, which cannot be released due to company restrictions. This project is an attempt to reproduce the results in our paper.

Citation

If you find this code useful, please cite the following paper:

@article{yang2017lr,
    title={LR-GAN: Layered recursive generative adversarial networks for image generation},
    author={Yang, Jianwei and Kannan, Anitha and Batra, Dhruv and Parikh, Devi},
    journal={ICLR},
    year={2017}
}

Dependencies

  1. PyTorch. Install PyTorch with the commands appropriate for your platform, and make sure you also install torchvision (a quick import check is shown after this list).

  2. Spatial transformer network with mask (STNM). Install STNM from this project. Since GPU devices and CUDA driver versions differ across machines, please build your own stnm.so on your machine.
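
A quick way to confirm the PyTorch and torchvision installs (a generic check, not specific to this repository):

import torch
import torchvision

# Basic sanity check: both packages import cleanly and CUDA is visible if expected.
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())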

Train LR-GAN

Preparation

Clone this project to your own machine and make sure PyTorch is installed successfully. Create a folder datasets to hold the training sets, a folder images to save the generation results, and a folder models to save the models (generators and discriminators):

$ mkdir datasets
$ mkdir images
$ mkdir models

Then you can train the LR-GAN model on the following datasets: 1) MNIST-ONE; 2) MNIST-TWO; 3) CUB-200; 4) CIFAR-10. Sample images are shown below:

In the datasets folder, create subfolders for all these datasets separately:

$ mkdir datasets/mnist-one
$ mkdir datasets/mnist-two
$ mkdir datasets/cub200
$ mkdir datasets/cifar10

Training

  1. MNIST-ONE. We first run experiments on MNIST-ONE, which can be downloaded from here. Unzip it into the datasets/mnist-one folder, and then run the following command:
$ python train.py \
      --dataset mnist-one \
      --dataroot datasets/mnist-one \
      --ntimestep 2 \
      --imageSize 32 \
      --maxobjscale 1.2 \
      --niter 50 \
      --session 1

Here, ntimestep specifies the number of recursive timesteps, e.g., 2 means one background layer and one foreground layer; imageSize is the size to which the training images are scaled; maxobjscale is the maximal object (foreground) scale: the larger the value, the smaller the object; session specifies the training session; and niter specifies the number of training epochs.
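
How maxobjscale enters the network is not spelled out here. One hypothetical reading, consistent with "the larger the value, the smaller the object", is that it bounds the fraction of the image a foreground may cover:

import torch

def bound_scale(raw_scale, maxobjscale=1.2):
    # Hypothetical: squash an unbounded prediction into (0, 1/maxobjscale],
    # so the foreground side can never exceed 1/maxobjscale of the image side.
    # The repository's exact formula may differ.
    return torch.sigmoid(raw_scale) / maxobjscale

print(bound_scale(torch.tensor([0.0, 3.0])))          # tensor([0.4167, 0.7938])
print(bound_scale(torch.tensor([0.0, 3.0]), 2.0))     # tensor([0.2500, 0.4763])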

Below are randomly generated results using the model trained for 50 epochs. From left to right, they are generated background images, foreground images, foreground masks, and final images.

  2. CUB200. We run on CUB200 at 64x64 resolution. Here is the processed dataset. Similarly, download and unzip it into datasets/cub200. Then run the following command:
$ python train.py \
      --dataset cub200 \
      --dataroot datasets/cub200 \
      --ntimestep 2 \
      --imageSize 64 \
      --ndf 128 \
      --ngf 128 \
      --maxobjscale 1.2 \
      --niter 200 \
      --session 1

Based on the above command, we obtained a model similar to the one in our paper. Below are randomly generated images:

The layout is similar to MNIST-ONE. As we can see, the generator produces bird-shaped masks, which make the final images sharper and cleaner.

  3. CIFAR-10. CIFAR-10 is downloaded automatically by the PyTorch dataloader (a standalone equivalent is sketched after the command). We use two timesteps for the generation. To train the model, run:
$ python train.py \
      --dataset cifar10 \
      --dataroot datasets/cifar10 \
      --ntimestep 2 \
      --imageSize 32 \
      --maxobjscale 1.2 \
      --niter 100 \
      --session 1
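
The automatic download presumably goes through torchvision's CIFAR10 dataset class (an assumption about train.py's internals); a standalone equivalent that fetches the data into the expected folder:

from torchvision import datasets

# Downloads CIFAR-10 into datasets/cifar10 on first use; later runs reuse it.
cifar = datasets.CIFAR10(root="datasets/cifar10", train=True, download=True)
print(len(cifar), "training images")  # 50000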

Here are some randomly sampled generation results:

From left to right, they are generated background images, foreground images, foreground masks, and final images. We can clearly see some horse-shaped, bird-shaped, and boat-shaped masks, and the final generated images are sharper.

  4. MNIST-TWO. The images are 64x64 and contain two digits. We train the model using the following command:
$ python train.py \
      --dataset mnist-two \
      --dataroot datasets/mnist-two \
      --ntimestep 3 \
      --imageSize 64 \
      --maxobjscale 2 \
      --niter 50 \
      --session 1

The layout is the same as the one in our paper.

  5. LFW. We train on 64x64 images, which can be downloaded from here. Unzip it to the folder datasets/lfw. We train the model using the following command:
$ python train.py \
      --dataset lfw \
      --dataroot datasets/lfw \
      --ntimestep 2 \
      --imageSize 64 \
      --maxobjscale 1.3 \
      --niter 100 \
      --session 1

Below are the generation results:

The leftmost 8x8 grid shows the real images, followed by generated backgrounds, foregrounds, masks, and final images.

Test LR-GAN

After training, the checkpoints will be saved to models. You can append two more options (netG and evaluate) to the command used for training. Using cifar10 as the example:

$ python train.py \
      --dataset cifar10 \
      --dataroot datasets/cifar10 \
      --ntimestep 2 \
      --imageSize 32 \
      --maxobjscale 1.2 \
      --niter 100 \
      --session 1 \
      --netG models/cifar10_netG_s_1_epoch_100.pth \
      --evaluate True

Then you can find the generation results for the session-1, epoch-100 model in the folder images.
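
The checkpoint path above appears to follow the pattern models/{dataset}_netG_s_{session}_epoch_{epoch}.pth. To peek inside such a file before evaluation (assuming it was written with torch.save and holds a plain state_dict, which is an assumption about the training script):

import torch

ckpt = torch.load("models/cifar10_netG_s_1_epoch_100.pth", map_location="cpu")

# If the file holds a state_dict (assumed), list its parameter tensors.
if isinstance(ckpt, dict):
    for name, value in ckpt.items():
        shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
        print(name, shape)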
