
nashory / Pggan Pytorch

License: MIT
🔥🔥 PyTorch implementation of "Progressive growing of GANs (PGGAN)" 🔥🔥

Programming Languages

python

Projects that are alternatives of or similar to Pggan Pytorch

Cartoongan Tensorflow
Generate your own cartoon-style images with CartoonGAN (CVPR 2018), powered by TensorFlow 2.0 Alpha.
Stars: ✭ 587 (-10.11%)
Mutual labels:  gan, generative-adversarial-network, tensorboard
Textgan Pytorch
TextGAN is a PyTorch framework for Generative Adversarial Networks (GANs) based text generation models.
Stars: ✭ 479 (-26.65%)
Mutual labels:  gan, generative-adversarial-network
Cool Fashion Papers
👔👗🕶️🎩 Cool resources about Fashion + AI! (papers, datasets, workshops, companies, ...) (constantly updating)
Stars: ✭ 464 (-28.94%)
Mutual labels:  gan, generative-adversarial-network
Pix2pixhd
Synthesizing and manipulating 2048x1024 images with conditional GANs
Stars: ✭ 5,553 (+750.38%)
Mutual labels:  gan, generative-adversarial-network
Generative Compression
TensorFlow Implementation of Generative Adversarial Networks for Extreme Learned Image Compression
Stars: ✭ 428 (-34.46%)
Mutual labels:  gan, generative-adversarial-network
Generative Models
Annotated, understandable, and visually interpretable PyTorch implementations of: VAE, BIRVAE, NSGAN, MMGAN, WGAN, WGANGP, LSGAN, DRAGAN, BEGAN, RaGAN, InfoGAN, fGAN, FisherGAN
Stars: ✭ 438 (-32.92%)
Mutual labels:  gan, generative-adversarial-network
All About The Gan
All About the GANs(Generative Adversarial Networks) - Summarized lists for GAN
Stars: ✭ 630 (-3.52%)
Mutual labels:  gan, generative-adversarial-network
Fast Srgan
A Fast Deep Learning Model to Upsample Low Resolution Videos to High Resolution at 30fps
Stars: ✭ 417 (-36.14%)
Mutual labels:  generative-adversarial-network, tensorboard
Seqgan
A simplified PyTorch implementation of "SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient." (Yu, Lantao, et al.)
Stars: ✭ 502 (-23.12%)
Mutual labels:  gan, generative-adversarial-network
Generative Adversarial Networks
Introduction to generative adversarial networks, with code to accompany the O'Reilly tutorial on GANs
Stars: ✭ 505 (-22.66%)
Mutual labels:  gan, generative-adversarial-network
Awesome Gans
Awesome Generative Adversarial Networks with tensorflow
Stars: ✭ 585 (-10.41%)
Mutual labels:  gan, generative-adversarial-network
Deep Learning Resources
Deep learning resources from beginner to advanced. Collection of deep learning materials for everyone
Stars: ✭ 422 (-35.38%)
Mutual labels:  gan, generative-adversarial-network
Wassersteingan.tensorflow
Tensorflow implementation of Wasserstein GAN - arxiv: https://arxiv.org/abs/1701.07875
Stars: ✭ 419 (-35.83%)
Mutual labels:  gan, generative-adversarial-network
Tensorflow Tutorial
Tensorflow tutorial from basic to hard, with Chinese-language AI tutorials by MorvanZhou (莫烦Python)
Stars: ✭ 4,122 (+531.24%)
Mutual labels:  gan, generative-adversarial-network
Ad examples
A collection of anomaly detection methods (iid/point-based, graph and time series) including active learning for anomaly detection/discovery, bayesian rule-mining, description for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Network.
Stars: ✭ 641 (-1.84%)
Mutual labels:  gan, generative-adversarial-network
T2f
T2F: text to face generation using Deep Learning
Stars: ✭ 494 (-24.35%)
Mutual labels:  gan, generative-adversarial-network
Simgan Captcha
Solve captcha without manually labeling a training set
Stars: ✭ 405 (-37.98%)
Mutual labels:  gan, generative-adversarial-network
Simgan
Implementation of Apple's Learning from Simulated and Unsupervised Images through Adversarial Training
Stars: ✭ 406 (-37.83%)
Mutual labels:  gan, generative-adversarial-network
Ssgan Tensorflow
A Tensorflow implementation of Semi-supervised Learning Generative Adversarial Networks (NIPS 2016: Improved Techniques for Training GANs).
Stars: ✭ 496 (-24.04%)
Mutual labels:  gan, generative-adversarial-network
Hidt
Official repository for the paper "High-Resolution Daytime Translation Without Domain Labels" (CVPR2020, Oral)
Stars: ✭ 513 (-21.44%)
Mutual labels:  gan, generative-adversarial-network

PyTorch Implementation of "Progressive Growing of GANs (PGGAN)"

PyTorch implementation of PROGRESSIVE GROWING OF GANS FOR IMPROVED QUALITY, STABILITY, AND VARIATION
YOUR CONTRIBUTION IS INVALUABLE FOR THIS PROJECT :)

image

What's different from official paper?

  • original: transition(G) --> transition(D) --> stabilize / this code: transition(G) --> stabilize --> transition(D) --> stabilize
  • No NIN layer is used. Unnecessary layers (such as low-resolution blocks) are automatically flushed out as the network grows.
  • Used torch.nn.utils.weight_norm for the to_rgb_layer of the generator (see the sketch after this list).
  • No need to prepare the CelebA dataset; just bring your own dataset :)
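For illustration, here is a minimal sketch of the weight-normalized to_rgb idea; the channel count and function name below are illustrative, not necessarily the exact ones used in this repository:

import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

def to_rgb_layer(in_channels):
    # 1x1 convolution mapping feature maps to a 3-channel RGB image,
    # wrapped with weight normalization
    return weight_norm(nn.Conv2d(in_channels, 3, kernel_size=1))

to_rgb = to_rgb_layer(512)                 # e.g. if the 4x4 block outputs 512 channels
fake = to_rgb(torch.randn(1, 512, 4, 4))
print(fake.shape)                          # torch.Size([1, 3, 4, 4])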

How to use?

[step 1.] Prepare dataset
The author of Progressive GAN released the CelebA-HQ dataset, which Nash is working on in the branch this repository was forked from. For this version, just make sure that all images are placed under the folder you declare in config.py (see the loading sketch after the diagram below). A word of warning: if you use multiple classes, they should be visually similar, so you don't end up with atrocities.

---------------------------------------------
The training data folder should look like:
<train_data_root>
                |--Your Folder
                        |--image 1
                        |--image 2
                        |--image 3 ...
---------------------------------------------
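For illustration only, this is how such a folder layout is typically consumed in PyTorch; the repository's own dataloader.py may differ in details, and the path and image size below are placeholders:

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_data_root = '/path/to/train_data_root'   # the path you declare in config.py
transform = transforms.Compose([
    transforms.Resize(4),                      # progressive training starts at 4x4
    transforms.CenterCrop(4),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder(train_data_root, transform=transform)  # each subfolder = one class
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=4)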

[step 2.] Prepare environment using virtualenv

  • You can easily set up the PyTorch (v0.3) and TensorFlow environment using virtualenv.
  • CAUTION: if you have trouble installing PyTorch, install it manually using pip. [PyTorch Install]
  • Take your time to install all dependencies of PyTorch, and also install TensorFlow.
$ virtualenv --python=python2.7 venv
$ . venv/bin/activate
$ pip install -r requirements.txt
$ conda install pytorch torchvision -c pytorch

[step 3.] Run training

  • Edit config.py to change parameters (don't forget to change the path to the training images).
  • Specify which GPU devices to use, and change the "n_gpu" option in config.py to enable multi-GPU training (see the sketch after the examples below).
  • Run and enjoy!
  (example)
  If using Single-GPU (device_id = 0):
  $ vim config.py   -->   change "n_gpu=1"
  $ CUDA_VISIBLE_DEVICES=0 python trainer.py
  
  If using Multi-GPUs (device id = 1,3,7):
  $ vim config.py   -->   change "n_gpu=3"
  $ CUDA_VISIBLE_DEVICES=1,3,7 python trainer.py
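For reference, this is a rough sketch of how an "n_gpu" option is commonly wired up for multi-GPU training; the actual trainer.py may differ. Note that CUDA_VISIBLE_DEVICES controls which physical GPUs the process can see, so inside the process the device ids are always 0 .. n_gpu-1:

import torch.nn as nn

class Config:
    n_gpu = 3                  # set in config.py to match CUDA_VISIBLE_DEVICES

def wrap_model(model, config):
    model = model.cuda()
    if config.n_gpu > 1:
        # replicate the model across all visible GPUs for data-parallel training
        model = nn.DataParallel(model, device_ids=list(range(config.n_gpu)))
    return model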

[step 4.] Display on TensorBoard (you can skip this part for now)

  • You can check the results on TensorBoard.

$ tensorboard --logdir repo/tensorboard --port 8888
Then open <host_ip>:8888 in your browser.
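As an example of what gets logged, here is a minimal scalar-logging sketch using tensorboardX (an assumption about the logging backend; the repository may use a different writer). Writing to repo/tensorboard matches the --logdir used above:

from tensorboardX import SummaryWriter

writer = SummaryWriter(log_dir='repo/tensorboard')
for step in range(100):
    g_loss, d_loss = 0.5, 0.7                          # placeholder loss values
    writer.add_scalar('loss/generator', g_loss, step)
    writer.add_scalar('loss/discriminator', d_loss, step)
writer.close()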

[step 5.] Generate fake images using linear interpolation

$ CUDA_VISIBLE_DEVICES=0 python generate_interpolated.py
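The idea behind the script is linear interpolation between two latent vectors. The sketch below shows that idea with a hypothetical generator G; the actual script's checkpoint loading and generator input shape may differ:

import torch

def interpolate_latents(G, z_dim=512, steps=8):
    z0 = torch.randn(1, z_dim)
    z1 = torch.randn(1, z_dim)
    images = []
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z0 + t * z1        # linear interpolation in latent space
        with torch.no_grad():
            images.append(G(z))
    return torch.cat(images, dim=0)        # a batch of (steps) fake images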

Experimental results

Results at higher resolutions (larger than 256x256) will be added soon.

Generated Images

image

Loss Curve

image

To-Do List (will be implemented soon)

  • [ ] Support WGAN-GP loss
  • [ ] Training resuming functionality.
  • [ ] Loading the CelebA-HQ dataset (for 512x512 and 1024x1024 training)

Compatibility

  • CUDA v8.0 (if you don't have it, don't worry)
  • Tesla P40 (you may need more than 12GB of GPU memory; if you have less, please adjust the batch_table in dataloader.py, as sketched below)
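For orientation, a resolution-to-batch-size table usually looks like the sketch below; the exact values in dataloader.py may differ from these, and you should lower them if you run out of GPU memory:

batch_table = {
    4:    64,     # batch size used while training at 4x4
    8:    64,
    16:   32,
    32:   16,
    64:   8,
    128:  4,
    256:  2,
    512:  1,
    1024: 1,
}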

Acknowledgement

Author

MinchulShin, @nashory

Contributors

DeMarcus Edwards, @Djmcflush
MakeDirtyCode, @MakeDirtyCode
Yuan Zhao, @yuanzhaoYZ
zhanpengpan, @szupzp
