
HideUnderBush / Ui2i_via_stylegan2

Licence: other
Unsupervised image-to-image translation method via pre-trained StyleGAN2 network

Programming Languages

python

Projects that are alternatives to or similar to Ui2i_via_stylegan2

Cyclegan Tensorflow 2
CycleGAN Tensorflow 2
Stars: ✭ 330 (+175%)
Mutual labels:  image-translation
Image To Image Papers
🦓<->🦒 🌃<->🌆 A collection of image to image papers with code (constantly updating)
Stars: ✭ 949 (+690.83%)
Mutual labels:  image-translation
Alice
NIPS 2017: ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching
Stars: ✭ 80 (-33.33%)
Mutual labels:  image-translation
Selectiongan
[CVPR 2019 Oral] Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation
Stars: ✭ 366 (+205%)
Mutual labels:  image-translation
Fewshot Face Translation Gan
Generative adversarial networks integrating modules from FUNIT and SPADE for face-swapping.
Stars: ✭ 705 (+487.5%)
Mutual labels:  image-translation
Img2imggan
Implementation of the paper : "Toward Multimodal Image-to-Image Translation"
Stars: ✭ 49 (-59.17%)
Mutual labels:  image-translation
Shadesketch
Implementation of "Learning to Shadow Hand-drawn Sketches" CVPR 2020 (Oral)
Stars: ✭ 263 (+119.17%)
Mutual labels:  image-translation
Cen
[NeurIPS 2020] Code release for paper "Deep Multimodal Fusion by Channel Exchanging" (In PyTorch)
Stars: ✭ 112 (-6.67%)
Mutual labels:  image-translation
Adversarialnetspapers
Awesome paper list with code about generative adversarial nets
Stars: ✭ 6,219 (+5082.5%)
Mutual labels:  image-translation
Singlegan
SingleGAN: Image-to-Image Translation by a Single-Generator Network using Multiple Generative Adversarial Learning. ACCV 2018
Stars: ✭ 76 (-36.67%)
Mutual labels:  image-translation
Sean
SEAN: Image Synthesis with Semantic Region-Adaptive Normalization (CVPR 2020, Oral)
Stars: ✭ 387 (+222.5%)
Mutual labels:  image-translation
Cyclegan Tensorflow
Tensorflow implementation for learning an image-to-image translation without input-output pairs. https://arxiv.org/pdf/1703.10593.pdf
Stars: ✭ 676 (+463.33%)
Mutual labels:  image-translation
Sparsely Grouped Gan
Code for paper "Sparsely Grouped Multi-task Generative Adversarial Networks for Facial Attribute Manipulation"
Stars: ✭ 68 (-43.33%)
Mutual labels:  image-translation
Attentiongan
AttentionGAN for Unpaired Image-to-Image Translation & Multi-Domain Image-to-Image Translation
Stars: ✭ 341 (+184.17%)
Mutual labels:  image-translation
Lggan
[CVPR 2020] Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation
Stars: ✭ 97 (-19.17%)
Mutual labels:  image-translation
Munit Tensorflow
Simple Tensorflow implementation of "Multimodal Unsupervised Image-to-Image Translation" (ECCV 2018)
Stars: ✭ 292 (+143.33%)
Mutual labels:  image-translation
Cyclegan
PyTorch implementation of CycleGAN
Stars: ✭ 38 (-68.33%)
Mutual labels:  image-translation
Gdwct
Official PyTorch implementation of GDWCT (CVPR 2019, oral)
Stars: ✭ 122 (+1.67%)
Mutual labels:  image-translation
Pixel2style2pixel
Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation"
Stars: ✭ 1,395 (+1062.5%)
Mutual labels:  image-translation
Pytorch Pix2pix
Pytorch implementation of pix2pix for various datasets.
Stars: ✭ 74 (-38.33%)
Mutual labels:  image-translation

License CC BY-NC-SA 4.0

We propose an unsupervised image-to-image translation method based on a pre-trained StyleGAN2 network.


Paper: Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2 Network

Prerequisites

  • PyTorch 1.3.1
  • CUDA 10.1
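
A quick way to verify the environment (a generic sanity check, not part of this repo):

import torch

print(torch.__version__)           # expect 1.3.1
print(torch.version.cuda)          # expect 10.1
print(torch.cuda.is_available())   # should print True on a CUDA-capable machine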

Step 1: Model Fine-tuning

To obtain the target model, first follow the data-preparation instructions from the StyleGAN2 PyTorch implementation (rosinality/stylegan2-pytorch):

python prepare_data.py --out LMDB_PATH --n_worker N_WORKER --size SIZE1,SIZE2,SIZE3,... DATASET_PATH
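
For example, to build a 256px LMDB from a folder of target-domain images (the paths here are illustrative):

python prepare_data.py --out data/target_lmdb --n_worker 8 --size 256 data/target_images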

Then fine-tune the model on data from the target domain:

python -m torch.distributed.launch --nproc_per_node=N_GPU --master_port=PORT train.py --batch BATCH_SIZE LMDB_PATH --ckpt your_base_model_path
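
A concrete single-GPU invocation might look like this (checkpoint and LMDB paths are placeholders):

python -m torch.distributed.launch --nproc_per_node=1 --master_port=12345 train.py --batch 4 data/target_lmdb --ckpt checkpoints/ffhq_256.pt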

Step 2: Closed-Form GAN space

Calculate the GAN space via the proposed algorithm; this produces a factor file:

python3 closed_form_factorization.py --ckpt your_model --out output_factor_path
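
Roughly, the closed-form factorization collects the generator's style-modulation weights and uses their singular vectors as semantic directions. A minimal sketch of that computation, assuming a rosinality-format checkpoint with a g_ema entry:

import torch

ckpt = torch.load("your_model", map_location="cpu")

# collect the style-modulation weight matrices of the generator
modulate = {
    k: v
    for k, v in ckpt["g_ema"].items()
    if "modulation" in k and "to_rgbs" not in k and "weight" in k
}
W = torch.cat(list(modulate.values()), 0)

# right-singular vectors of the stacked weights act as edit directions
eigvec = torch.svd(W).V
torch.save({"ckpt": "your_model", "eigvec": eigvec}, "output_factor_path")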

Step 3: Image inversion

Invert the image to a latent code using the StyleGAN2 model trained on its domain:

python3 project_factor.py --ckpt stylegan_model_path --fact factor_path IMAGE_FILE
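
project_factor.py follows the usual optimization-based GAN inversion recipe. A heavily simplified sketch, where g is an already-loaded StyleGAN2 generator and target is a preprocessed image tensor (both assumptions; the real script also adds perceptual losses and works in the factorized space):

import torch
import torch.nn.functional as F

latent = torch.randn(1, 512, requires_grad=True)
optimizer = torch.optim.Adam([latent], lr=0.1)

for step in range(1000):
    img, _ = g([latent])            # synthesize an image from the current code
    loss = F.mse_loss(img, target)  # pixel loss; real inversion adds LPIPS etc.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()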

Step 4: LS Image generation with multiple styles

We use the inverted code to generate images with multiple styles in the target domain:

python3 gen_multi_style.py --model base_model_path --model2 target_model_path --fact base_inverse.pt --fact_base factor_from_base_model -o output_path --swap_layer 3 --stylenum 10
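
For instance, translating an inverted face code into a cartoon domain with ten random styles (all file names are placeholders):

python3 gen_multi_style.py --model ffhq_256.pt --model2 cartoon_256.pt --fact base_inverse.pt --fact_base ffhq_factor.pt -o results --swap_layer 3 --stylenum 10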

In addition to multi-modal translation, the style of the output can be specified by a reference image. To achieve this, we also need to invert the reference image, since its latent code is then used as the style code during generation.

python3 gen_ref.py --model1 base_model_path --model2 target_model_path --fact base_inverse.pt --fac_ref reference_inverse.pt --fact_base1 factor_from_base_model --fact_base2 factor_from_target_model -o output_path
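
An illustrative invocation, after inverting both the source image and the reference (file names are placeholders):

python3 gen_ref.py --model1 ffhq_256.pt --model2 cartoon_256.pt --fact base_inverse.pt --fac_ref reference_inverse.pt --fact_base1 ffhq_factor.pt --fact_base2 cartoon_factor.pt -o results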

Pre-trained Base Model and Dataset

We use the StyleGAN2 face model trained on FFHQ at 256x256 (by @rosinality). The 1024x1024 model can be found in the official StyleGAN2 implementation; note that model conversion between TensorFlow and PyTorch is needed. Models fine-tuned from these base models can be used for I2I translation, though with FreezeFC they achieve better results.

Many thanks to Gwern for providing the Danbooru anime dataset, and to Doron Adler and Justin Pinkney for providing the cartoon dataset.

Some Results

(Result images: cartoon2face examples 1-4; face2portrait examples 1-2.)

The code is heavily borrowed from rosinality's StyleGAN2 PyTorch implementation and from Closed-Form Factorization; thanks for their great work and contribution!
