
AAnoosheh / Combogan

License: BSD-2-Clause

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to ComboGAN

automatic-manga-colorization
Uses keras.js and cyclegan-keras to colorize manga automatically; all computation runs in the browser. A demo is available online.
Stars: ✭ 20 (-85.07%)
Mutual labels:  gan, image-manipulation, cyclegan
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+8058.96%)
Mutual labels:  gan, cyclegan, image-manipulation
Pytorch Cyclegan And Pix2pix
Image-to-Image Translation in PyTorch
Stars: ✭ 16,477 (+12196.27%)
Mutual labels:  gan, cyclegan, image-manipulation
Cyclegan Vc3
Voice conversion by CycleGAN (voice cloning / voice conversion): CycleGAN-VC3
Stars: ✭ 52 (-61.19%)
Mutual labels:  gan, cyclegan
Anime person translation
Mutual translation between human faces and anime faces
Stars: ✭ 35 (-73.88%)
Mutual labels:  gan, cyclegan
Cyclegan
PyTorch implementation of CycleGAN
Stars: ✭ 38 (-71.64%)
Mutual labels:  gan, cyclegan
Anycost Gan
[CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
Stars: ✭ 367 (+173.88%)
Mutual labels:  gan, image-manipulation
Pix2pix
Image-to-image translation with conditional adversarial nets
Stars: ✭ 8,765 (+6441.04%)
Mutual labels:  gan, image-manipulation
Cyclegan Tensorflow
An implementation of CycleGAN using TensorFlow
Stars: ✭ 1,096 (+717.91%)
Mutual labels:  gan, cyclegan
Gandissect
PyTorch-based tools for visualizing and understanding the neurons of a GAN. https://gandissect.csail.mit.edu/
Stars: ✭ 1,700 (+1168.66%)
Mutual labels:  gan, image-manipulation
Cyclegan tensorlayer
A re-implementation of CycleGAN in TensorLayer
Stars: ✭ 86 (-35.82%)
Mutual labels:  gan, cyclegan
Lggan
[CVPR 2020] Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation
Stars: ✭ 97 (-27.61%)
Mutual labels:  gan, image-manipulation
Image To Image Papers
🦓<->🦒 🌃<->🌆 A collection of image to image papers with code (constantly updating)
Stars: ✭ 949 (+608.21%)
Mutual labels:  gan, image-manipulation
Contrastive Unpaired Translation
Contrastive unpaired image-to-image translation; faster and lighter training than CycleGAN (ECCV 2020, in PyTorch)
Stars: ✭ 822 (+513.43%)
Mutual labels:  cyclegan, image-manipulation
Generate to adapt
Implementation of "Generate To Adapt: Aligning Domains using Generative Adversarial Networks"
Stars: ✭ 120 (-10.45%)
Mutual labels:  gan, domain-adaptation
Igan
Interactive Image Generation via Generative Adversarial Networks
Stars: ✭ 3,845 (+2769.4%)
Mutual labels:  gan, image-manipulation
Cyclegan Qp
Official PyTorch implementation of "Artist Style Transfer Via Quadratic Potential"
Stars: ✭ 59 (-55.97%)
Mutual labels:  gan, cyclegan
Lsd Seg
Learning from Synthetic Data: Addressing Domain Shift for Semantic Segmentation
Stars: ✭ 99 (-26.12%)
Mutual labels:  gan, domain-adaptation
Deep Generative Prior
Code for deep generative prior (ECCV2020 oral)
Stars: ✭ 308 (+129.85%)
Mutual labels:  gan, image-manipulation
Pycadl
Python package with source code from the course "Creative Applications of Deep Learning w/ TensorFlow"
Stars: ✭ 356 (+165.67%)
Mutual labels:  gan, cyclegan

ComboGAN

This is our ongoing PyTorch implementation of ComboGAN. The code was written by Asha Anoosheh and is built upon CycleGAN.

[ComboGAN Paper]

If you use this code for your research, please cite:

ComboGAN: Unrestrained Scalability for Image Domain Translation. Asha Anoosheh, Eirikur Agustsson, Radu Timofte, Luc Van Gool. In arXiv, 2017.





Prerequisites

  • Linux or macOS
  • Python 3
  • CPU or NVIDIA GPU + CUDA + cuDNN

Getting Started

Installation

  • Install PyTorch and dependencies from http://pytorch.org
  • Install torchvision from source:
git clone https://github.com/pytorch/vision
cd vision
python setup.py install
  • Install the Python libraries visdom and dominate:
pip install visdom
pip install dominate
  • Clone this repo:
git clone https://github.com/AAnoosheh/ComboGAN.git
cd ComboGAN

ComboGAN training

Our ready-made datasets can be downloaded using ./datasets/download_dataset.sh <dataset_name>.
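For example, assuming painters_14 is one of the names the script accepts (an assumption; check the script itself for the available dataset names):

./datasets/download_dataset.sh painters_14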

A pretrained model for the 14-painters dataset can be found HERE. Place it under ./checkpoints/ and test it using the instructions below, with the arguments --name paint14_pretrained --dataroot ./datasets/painters_14 --n_domains 14 --which_epoch 1150.
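Combining those arguments with the test command given below, the full invocation for the pretrained model would be:

python test.py --phase test --name paint14_pretrained --dataroot ./datasets/painters_14 --n_domains 14 --which_epoch 1150 --serial_test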

Example running scripts can be found in the scripts directory.

  • Train a model:
python train.py --name <experiment_name> --dataroot ./datasets/<your_dataset> --n_domains <N> --niter <num_epochs_constant_LR> --niter_decay <num_epochs_decaying_LR>

Checkpoints will be saved by default to ./checkpoints/<experiment_name>/
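As a concrete illustration, a training run on the painters_14 dataset might look like the following (the experiment name and epoch counts here are illustrative choices, not prescribed values):

python train.py --name paint14_example --dataroot ./datasets/painters_14 --n_domains 14 --niter 100 --niter_decay 100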

  • Fine-tuning/Resume training:
python train.py --continue_train --which_epoch <checkpoint_number_to_load> --name <experiment_name> --dataroot ./datasets/<your_dataset> --n_domains <N> --niter <num_epochs_constant_LR> --niter_decay <num_epochs_decaying_LR>
  • Test the model:
python test.py --phase test --name <experiment_name> --dataroot ./datasets/<your_dataset> --n_domains <N> --which_epoch <checkpoint_number_to_load> --serial_test

The test results will be saved to an HTML file at ./results/<experiment_name>/<epoch_number>/index.html.

Training/Testing Details

  • Flags: see options/train_options.py for training-specific flags; see options/test_options.py for test-specific flags; and see options/base_options.py for all common flags.
  • Dataset format: The desired data directory (provided by --dataroot) should contain subfolders of the form train*/ and test*/, which are loaded in alphabetical order. (Note that a folder named train10 would be loaded before train2, and all checkpoints and results would be ordered accordingly; see the sketch after this list.)
  • CPU/GPU (default --gpu_ids 0): set --gpu_ids -1 to use CPU mode; set --gpu_ids 0,1,2 for multi-GPU mode. You need a large batch size (e.g. --batchSize 32) to benefit from multiple GPUs.
  • Visualization: during training, the current results and loss plots can be viewed in two ways. First, if you set --display_id > 0, the results and loss plot will appear in a local web interface served by visdom. For this you need visdom installed and a server running via python -m visdom.server; the default server URL is http://localhost:8097, and display_id corresponds to the window ID displayed on the visdom server. The visdom display functionality is on by default; to avoid the overhead of communicating with visdom, set --display_id 0. Second, intermediate results are also saved to ./checkpoints/<experiment_name>/web/index.html; to disable this, set the --no_html flag.
  • Preprocessing: images can be resized and cropped in different ways using the --resize_or_crop option. The default option 'resize_and_crop' resizes the image to (opt.loadSize, opt.loadSize) and then takes a random crop of size (opt.fineSize, opt.fineSize). 'crop' skips the resizing step and only performs random cropping. 'scale_width' resizes the image to width opt.fineSize while keeping the aspect ratio. 'scale_width_and_crop' first resizes the image to width opt.loadSize and then takes a random crop of size (opt.fineSize, opt.fineSize). A sketch of the default mode appears after this list.
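To make the alphabetical-ordering note concrete: subfolder names are sorted lexicographically, not numerically, so zero-padding names (e.g. train02) keeps them in numeric order. A quick Python check:

sorted(["train1", "train2", "train10"])
# -> ['train1', 'train10', 'train2']   (train10 sorts before train2)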
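And a minimal sketch of what the default 'resize_and_crop' mode does, following the description above (this assumes PIL-style images and is an illustration, not the repository's exact code):

import random
from PIL import Image

def resize_and_crop(img, load_size, fine_size):
    # Resize to (load_size, load_size) ...
    img = img.resize((load_size, load_size), Image.BICUBIC)
    # ... then take a random (fine_size, fine_size) crop
    # (assumes load_size >= fine_size).
    x = random.randint(0, load_size - fine_size)
    y = random.randint(0, load_size - fine_size)
    return img.crop((x, y, x + fine_size, y + fine_size))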

NOTE: one should not expect ComboGAN to work on just any combination of input and output datasets (e.g. dogs <-> houses). We find it works better when the datasets share similar visual content; for example, landscape paintings <-> landscape photographs works much better than portrait paintings <-> landscape photographs.
