Xiaoming-Yu / Singlegan

License: MIT
SingleGAN: Image-to-Image Translation by a Single-Generator Network using Multiple Generative Adversarial Learning. ACCV 2018


Projects that are alternatives of or similar to Singlegan

CoMoGAN
CoMoGAN: continuous model-guided image-to-image translation. CVPR 2021 oral.
Stars: ✭ 139 (+82.89%)
Mutual labels:  image-translation
Selectiongan
[CVPR 2019 Oral] Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation
Stars: ✭ 366 (+381.58%)
Mutual labels:  image-translation
Image To Image Papers
🦓<->🦒 🌃<->🌆 A collection of image to image papers with code (constantly updating)
Stars: ✭ 949 (+1148.68%)
Mutual labels:  image-translation
Splice
Official Pytorch Implementation for "Splicing ViT Features for Semantic Appearance Transfer" presenting "Splice" (CVPR 2022)
Stars: ✭ 126 (+65.79%)
Mutual labels:  image-translation
Cyclegan Tensorflow 2
CycleGAN Tensorflow 2
Stars: ✭ 330 (+334.21%)
Mutual labels:  image-translation
Awesome Image Translation
A collection of awesome resources image-to-image translation.
Stars: ✭ 408 (+436.84%)
Mutual labels:  image-translation
pix2pix-tensorflow
A minimal tensorflow implementation of pix2pix (Image-to-Image Translation with Conditional Adversarial Nets - https://phillipi.github.io/pix2pix/).
Stars: ✭ 22 (-71.05%)
Mutual labels:  image-translation
Sparsely Grouped Gan
Code for paper "Sparsely Grouped Multi-task Generative Adversarial Networks for Facial Attribute Manipulation"
Stars: ✭ 68 (-10.53%)
Mutual labels:  image-translation
Attentiongan
AttentionGAN for Unpaired Image-to-Image Translation & Multi-Domain Image-to-Image Translation
Stars: ✭ 341 (+348.68%)
Mutual labels:  image-translation
Adversarialnetspapers
Awesome paper list with code about generative adversarial nets
Stars: ✭ 6,219 (+8082.89%)
Mutual labels:  image-translation
Pytorch-Image-Translation-GANs
Pytorch implementations of most popular image-translation GANs, including Pixel2Pixel, CycleGAN and StarGAN.
Stars: ✭ 106 (+39.47%)
Mutual labels:  image-translation
Munit Tensorflow
Simple Tensorflow implementation of "Multimodal Unsupervised Image-to-Image Translation" (ECCV 2018)
Stars: ✭ 292 (+284.21%)
Mutual labels:  image-translation
Cyclegan Tensorflow
Tensorflow implementation for learning an image-to-image translation without input-output pairs. https://arxiv.org/pdf/1703.10593.pdf
Stars: ✭ 676 (+789.47%)
Mutual labels:  image-translation
TriangleGAN
TriangleGAN, ACM MM 2019.
Stars: ✭ 28 (-63.16%)
Mutual labels:  image-translation
Cyclegan
PyTorch implementation of CycleGAN
Stars: ✭ 38 (-50%)
Mutual labels:  image-translation
IrwGAN
Official pytorch implementation of the IrwGAN for unaligned image-to-image translation
Stars: ✭ 33 (-56.58%)
Mutual labels:  image-translation
Sean
SEAN: Image Synthesis with Semantic Region-Adaptive Normalization (CVPR 2020, Oral)
Stars: ✭ 387 (+409.21%)
Mutual labels:  image-translation
Pytorch Pix2pix
Pytorch implementation of pix2pix for various datasets.
Stars: ✭ 74 (-2.63%)
Mutual labels:  image-translation
Img2imggan
Implementation of the paper : "Toward Multimodal Image-to-Image Translation"
Stars: ✭ 49 (-35.53%)
Mutual labels:  image-translation
Fewshot Face Translation Gan
Generative adversarial networks integrating modules from FUNIT and SPADE for face-swapping.
Stars: ✭ 705 (+827.63%)
Mutual labels:  image-translation

SingleGAN

Pytorch implementation of our paper: "SingleGAN: Image-to-Image Translation by a Single-Generator Network using Multiple Generative Adversarial Learning".

By leveraging multiple adversarial learning objectives, our model can perform multi-domain and multi-modal image translation with a single generator.
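As a rough illustration of this idea, the sketch below conditions one generator on a one-hot domain code and pairs it with one discriminator per target domain. All names, layer sizes, and shapes here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SingleGenerator(nn.Module):
    """Toy single generator conditioned on a domain code (illustrative only)."""
    def __init__(self, channels=3, num_domains=2, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + num_domains, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x, domain_code):
        # Broadcast the one-hot domain code over the spatial dims and concat.
        b, _, h, w = x.shape
        code = domain_code.view(b, -1, 1, 1).expand(b, domain_code.size(1), h, w)
        return self.net(torch.cat([x, code], dim=1))

# One shared generator, one discriminator per target domain.
G = SingleGenerator(num_domains=2)
discriminators = [nn.Sequential(nn.Conv2d(3, 1, 4, stride=2)) for _ in range(2)]

x = torch.randn(1, 3, 32, 32)
code = torch.tensor([[1.0, 0.0]])        # translate toward domain 0
fake = G(x, code)
adv_scores = [D(fake) for D in discriminators]  # one adversarial signal each
```

Each discriminator only judges realism in its own domain, so the single generator receives multiple adversarial signals at once.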

  • Base model:

  • Extended models:

Dependencies

  • Python 3.x
  • Pytorch 1.1.0 or later

You can install all the dependencies with:

pip install -r requirements.txt

Getting Started

Datasets

  • You can either download one of the default datasets (from pix2pix and CycleGAN) or unzip your own dataset into the datasets directory.
    • Download a default dataset (e.g. apple2orange):
     bash ./download_datasets.sh apple2orange
    
    • Please ensure that you have the following directory tree structure in your repository.
     ├── datasets
     │   └── apple2orange
     │       ├── trainA
     │       ├── testA
     │       ├── trainB
     │       ├── testB
     │        ...
    
    • Transient-Attributes dataset can be requested from here.
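When preparing your own data, a quick sanity check of the layout can save a failed training run. The helper below is not part of this repo; it is a small sketch that verifies the four subfolders shown above exist.

```python
from pathlib import Path

EXPECTED = ("trainA", "trainB", "testA", "testB")

def check_dataset(root, name):
    """Verify the unpaired-dataset layout shown above (hypothetical helper)."""
    base = Path(root) / name
    missing = [d for d in EXPECTED if not (base / d).is_dir()]
    if missing:
        raise FileNotFoundError(f"{base}: missing subfolders {missing}")
    return base

# e.g. check_dataset("datasets", "apple2orange")
```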

Training

  • Train a base model (e.g. apple2orange):

     bash ./scripts/train_base.sh apple2orange
    
  • To view training results and loss plots, run python -m visdom.server and open the URL http://localhost:8097 in your browser. More intermediate results can be found in the checkpoints directory.

Testing

  • Check the folder name in the checkpoints directory (e.g. apple2orange).
     ├── checkpoints
     │   └── base_apple2orange
     │       └── 2018_10_16_14_49_55
     │           └ ...
    
  • Run
     bash ./scripts/test_base.sh apple2orange 2018_10_16_14_49_55
    
  • The testing results will be saved in the checkpoints/base_apple2orange/2018_10_16_14_49_55/results directory.

In recent experiments, we found that spectral normalization (SN) helps stabilize training, so we have added SN to this implementation. You may need to update PyTorch to 0.4.1 or later to support SN, or use an older version of this code without SN.

Results

Unsupervised cross-domain translation:

Unsupervised one-to-many translation:

Unsupervised many-to-many translation:

Unsupervised multimodal translation:

Cat ↔ Dog:

Label ↔ Facade:

Edge ↔ Shoes:

Please note that this repository contains only the unsupervised version of SingleGAN. You can implement the supervised version by overriding the data loader and replacing the cycle-consistency loss with a reconstruction loss; see our paper for details.
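To make the difference between the two objectives concrete, here is a minimal PyTorch sketch. G_ab and G_ba stand in for the forward and backward generators; the function names are illustrative, not part of the repo.

```python
import torch
import torch.nn.functional as F

# Unsupervised (this repo): cycle consistency -- translate A -> B -> A and
# compare the round trip against the original input.
def cycle_loss(G_ab, G_ba, real_a):
    return F.l1_loss(G_ba(G_ab(real_a)), real_a)

# Supervised variant sketched in the paper: with paired data, replace the
# cycle term with a direct reconstruction loss against the ground truth.
def reconstruction_loss(G_ab, real_a, real_b):
    return F.l1_loss(G_ab(real_a), real_b)

# Tiny smoke test with identity "generators".
ident = lambda t: t
a = torch.zeros(2, 3, 4, 4)
b = torch.ones(2, 3, 4, 4)
```

With identity generators, the cycle loss on `a` is exactly zero, while reconstructing `b` from `a` gives a mean L1 error of 1.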

BibTeX

If this work is useful for your research, please consider citing:

@inproceedings{yu2018singlegan,
	title={SingleGAN: Image-to-Image Translation by a Single-Generator Network using Multiple Generative Adversarial Learning},
	author={Yu, Xiaoming and Cai, Xing and Ying, Zhenqiang and Li, Thomas and Li, Ge},
	booktitle={Asian Conference on Computer Vision},
	year={2018}
}

Acknowledgement

The code used in this research is inspired by BicycleGAN.

Contact

Feel free to reach out if you have any questions ([email protected]).
