
Pytorch-CycleGAN

A clean and readable PyTorch implementation of CycleGAN (https://arxiv.org/abs/1703.10593)

Prerequisites

The code is intended to work with Python 3.6.x; it has not been tested with earlier versions.

PyTorch & torchvision

Follow the instructions at pytorch.org for your setup.

Visdom

Used to plot loss graphs and display output images in a web browser view:

pip3 install visdom

Training

1. Set up the dataset

First, you will need to download and set up a dataset. The easiest way is to use one of the existing datasets from UC Berkeley's repository:

./download_dataset <dataset_name>

Valid <dataset_name> values are: apple2orange, summer2winter_yosemite, horse2zebra, monet2photo, cezanne2photo, ukiyoe2photo, vangogh2photo, maps, cityscapes, facades, iphone2dslr_flower, ae_photos

Alternatively, you can build your own dataset by setting up the following directory structure:

.
├── datasets
|   ├── <dataset_name>         # e.g. brucewayne2batman
|   |   ├── train              # Training images
|   |   |   ├── A              # Domain A images (e.g. Bruce Wayne)
|   |   |   └── B              # Domain B images (e.g. Batman)
|   |   └── test               # Testing images
|   |       ├── A              # Domain A images (e.g. Bruce Wayne)
|   |       └── B              # Domain B images (e.g. Batman)
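If you are scripting dataset preparation, the layout above can be created programmatically. The helper below is a hypothetical sketch (the function name and example dataset name are illustrative, not part of this repo):

```python
import os
import tempfile

# Hypothetical helper: create the directory layout the training script
# expects for a custom dataset (train/test splits, domains A and B).
def make_dataset_dirs(root, name):
    paths = []
    for split in ("train", "test"):
        for domain in ("A", "B"):
            p = os.path.join(root, "datasets", name, split, domain)
            os.makedirs(p, exist_ok=True)  # create intermediate dirs too
            paths.append(p)
    return paths

root = tempfile.mkdtemp()
paths = make_dataset_dirs(root, "brucewayne2batman")
print(len(paths))  # 4 leaf directories: train/A, train/B, test/A, test/B
```

You would then drop your domain A and domain B images into the corresponding folders.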

2. Train!

./train --dataroot datasets/<dataset_name>/ --cuda

This command will start a training session using the images under the dataroot/train directory, with the hyperparameters that gave the best results according to the CycleGAN authors. You are free to change those hyperparameters; see ./train --help for a description of each.
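To give a sense of what those hyperparameters control, here is a sketch of how the CycleGAN generator objective combines its terms. The weights follow the original paper (cycle-consistency weight lambda = 10, identity term weighted at 0.5 * lambda); this script's actual defaults may differ, so treat the numbers as an assumption:

```python
# Sketch of the CycleGAN generator objective: adversarial (GAN) loss,
# cycle-consistency loss, and identity loss, combined with the weights
# suggested in the original paper (assumed here, not read from this repo).
def generator_loss(gan, cycle, identity, lambda_cycle=10.0):
    return gan + lambda_cycle * cycle + 0.5 * lambda_cycle * identity

# Example: GAN loss 1.0, cycle loss 0.5, identity loss 0.2
print(generator_loss(1.0, 0.5, 0.2))  # 1.0 + 10*0.5 + 5*0.2 = 7.0
```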

The weights of both generators and both discriminators will be saved under the output directory.

If you don't have a GPU, remove the --cuda option (although I advise you to get one!).
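The --cuda flag is a simple command-line switch; a minimal argparse sketch of how such a flag behaves (illustrative only, not the script's actual argument parser):

```python
import argparse

# Minimal sketch: a boolean --cuda flag that defaults to False (CPU)
# and flips to True when passed, gating GPU usage in the real script.
parser = argparse.ArgumentParser()
parser.add_argument("--cuda", action="store_true", help="use GPU acceleration")

opt = parser.parse_args(["--cuda"])
print(opt.cuda)  # True when the flag is given, False otherwise
```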

You can also view the training progress, as well as live output images, by running python3 -m visdom.server in another terminal and opening http://localhost:8097/ in your favourite web browser. This displays the training loss progress as shown below (default params, horse2zebra dataset):

[Loss plots: generator loss, discriminator loss, generator GAN loss, generator identity loss, generator cycle loss]

Testing

./test --dataroot datasets/<dataset_name>/ --cuda

This command will take the images under the dataroot/test directory, run them through the generators, and save the output under the output/A and output/B directories. As with training, some parameters, such as the weights to load, can be tweaked; see ./test --help for more information.
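Since the generators translate each image to the opposite domain, test/A inputs end up under output/B and vice versa. The path mapping can be sketched as follows (the exact mapping is an assumption based on the description above, and the helper name is hypothetical):

```python
import os

# Illustrative sketch of the assumed test-time file flow:
# images from test/A are translated to domain B and saved under output/B,
# and images from test/B are saved under output/A.
def output_path(input_path, out_dir="output"):
    domain = os.path.basename(os.path.dirname(input_path))  # "A" or "B"
    target = "B" if domain == "A" else "A"                  # opposite domain
    return os.path.join(out_dir, target, os.path.basename(input_path))

print(output_path("datasets/horse2zebra/test/A/0001.jpg"))  # output/B/0001.jpg on POSIX
```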

Examples of the generated outputs (default params, horse2zebra dataset):

[Sample outputs: real horse / fake zebra, real zebra / fake horse]

License

This project is licensed under the GPL v3 License - see the LICENSE.md file for details

Acknowledgments

The code is essentially a cleaner, less obfuscated implementation of pytorch-CycleGAN-and-pix2pix. All credit goes to the authors of CycleGAN: Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros.
