ZPdesu / Sean

Licence: other
SEAN: Image Synthesis with Semantic Region-Adaptive Normalization (CVPR 2020, Oral)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Sean

Lggan
[CVPR 2020] Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation
Stars: ✭ 97 (-74.94%)
Mutual labels:  gan, image-generation, image-translation
ADL2019
Applied Deep Learning (2019 Spring) @ NTU
Stars: ✭ 20 (-94.83%)
Mutual labels:  gan, image-generation
Pytorch-Image-Translation-GANs
Pytorch implementations of most popular image-translation GANs, including Pixel2Pixel, CycleGAN and StarGAN.
Stars: ✭ 106 (-72.61%)
Mutual labels:  gan, image-translation
AsymmetricGAN
[ACCV 2018 Oral] Dual Generator Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
Stars: ✭ 42 (-89.15%)
Mutual labels:  gan, image-generation
Attentiongan
AttentionGAN for Unpaired Image-to-Image Translation & Multi-Domain Image-to-Image Translation
Stars: ✭ 341 (-11.89%)
Mutual labels:  image-generation, image-translation
automatic-manga-colorization
Use keras.js and cyclegan-keras to colorize manga automatically. All computation in browser. Demo is online:
Stars: ✭ 20 (-94.83%)
Mutual labels:  gan, image-generation
lecam-gan
Regularizing Generative Adversarial Networks under Limited Data (CVPR 2021)
Stars: ✭ 127 (-67.18%)
Mutual labels:  gan, image-generation
IrwGAN
Official pytorch implementation of the IrwGAN for unaligned image-to-image translation
Stars: ✭ 33 (-91.47%)
Mutual labels:  gan, image-translation
Inpainting gmcnn
Image Inpainting via Generative Multi-column Convolutional Neural Networks, NeurIPS2018
Stars: ✭ 256 (-33.85%)
Mutual labels:  gan, image-generation
Anime Face Dataset
🖼 A collection of high-quality anime faces.
Stars: ✭ 272 (-29.72%)
Mutual labels:  gan, image-generation
Anycost Gan
[CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
Stars: ✭ 367 (-5.17%)
Mutual labels:  gan, image-generation
mSRGAN-A-GAN-for-single-image-super-resolution-on-high-content-screening-microscopy-images.
Generative Adversarial Network for single image super-resolution in high content screening microscopy images
Stars: ✭ 52 (-86.56%)
Mutual labels:  gan, image-generation
TriangleGAN
TriangleGAN, ACM MM 2019.
Stars: ✭ 28 (-92.76%)
Mutual labels:  image-generation, image-translation
Deep-Exemplar-based-Video-Colorization
The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".
Stars: ✭ 180 (-53.49%)
Mutual labels:  gan, image-generation
CoMoGAN
CoMoGAN: continuous model-guided image-to-image translation. CVPR 2021 oral.
Stars: ✭ 139 (-64.08%)
Mutual labels:  gan, image-translation
MNIST-invert-color
Invert the color of MNIST images with PyTorch
Stars: ✭ 13 (-96.64%)
Mutual labels:  gan, image-generation
Few Shot Patch Based Training
The official implementation of our SIGGRAPH 2020 paper Interactive Video Stylization Using Few-Shot Patch-Based Training
Stars: ✭ 313 (-19.12%)
Mutual labels:  gan, image-generation
Semantic Pyramid for Image Generation
PyTorch reimplementation of the paper: "Semantic Pyramid for Image Generation" [CVPR 2020].
Stars: ✭ 45 (-88.37%)
Mutual labels:  gan, image-generation
pix2pix-tensorflow
A minimal tensorflow implementation of pix2pix (Image-to-Image Translation with Conditional Adversarial Nets - https://phillipi.github.io/pix2pix/).
Stars: ✭ 22 (-94.32%)
Mutual labels:  gan, image-translation
Awesome-ICCV2021-Low-Level-Vision
A Collection of Papers and Codes for ICCV2021 Low Level Vision and Image Generation
Stars: ✭ 163 (-57.88%)
Mutual labels:  gan, image-generation

SEAN: Image Synthesis with Semantic Region-Adaptive Normalization (CVPR 2020 Oral)

Python 3.7 | PyTorch 1.2.0 | PyQt5 5.13.0

Figure: Face image editing controlled via style images and segmentation masks with SEAN

We propose semantic region-adaptive normalization (SEAN), a simple but effective building block for Generative Adversarial Networks conditioned on segmentation masks that describe the semantic regions in the desired output image. Using SEAN normalization, we can build a network architecture that can control the style of each semantic region individually, e.g., we can specify one style reference image per region. SEAN is better suited to encode, transfer, and synthesize style than the best previous method in terms of reconstruction quality, variability, and visual quality. We evaluate SEAN on multiple datasets and report better quantitative metrics (e.g. FID, PSNR) than the current state of the art. SEAN also pushes the frontier of interactive image editing. We can interactively edit images by changing segmentation masks or the style for any given region. We can also interpolate styles from two reference images per region.
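
To make the core idea concrete, below is a minimal, unofficial PyTorch sketch of region-adaptive modulation in the spirit of SEAN. The module name, the instance normalization, the shared affine predictors, and the 512-dimensional style codes are illustrative assumptions and do not mirror the layers in this repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionAdaptiveNorm(nn.Module):
    # Sketch only: one style code per semantic region is broadcast over the
    # segmentation mask into spatial gamma/beta maps that modulate the
    # normalized activations.
    def __init__(self, num_features, style_dim=512):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        self.to_gamma = nn.Linear(style_dim, num_features)
        self.to_beta = nn.Linear(style_dim, num_features)

    def forward(self, x, seg, styles):
        # x: (N, C, H, W) activations; seg: (N, K, H, W) one-hot region masks;
        # styles: (N, K, style_dim), one style code per semantic region.
        n, c, h, w = x.shape
        seg = F.interpolate(seg, size=(h, w), mode='nearest')
        gamma = torch.einsum('nkhw,nkc->nchw', seg, self.to_gamma(styles))
        beta = torch.einsum('nkhw,nkc->nchw', seg, self.to_beta(styles))
        return self.norm(x) * (1 + gamma) + beta

Because each region's style code enters only through its own mask, swapping a single region's code (e.g. the hair) changes the style of that region without touching the rest of the image.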

SEAN: Image Synthesis with Semantic Region-Adaptive Normalization
Peihao Zhu, Rameen Abdal, Yipeng Qin, Peter Wonka
Computer Vision and Pattern Recognition (CVPR) 2020, Oral

[Paper] [Project Page] [Demo]

Installation

Clone this repo.

git clone https://github.com/ZPdesu/SEAN.git
cd SEAN/

This code requires Python 3+, PyTorch, and PyQt5. Please install the dependencies with

pip install -r requirements.txt

This model requires a lot of memory and time to train. To speed up training, we recommend using 4 V100 GPUs.

Dataset Preparation

This code uses CelebA-HQ and CelebAMask-HQ dataset. The prepared dataset can be directly downloaded here. After unzipping, put the entire CelebA-HQ folder in the datasets folder. The complete directory should look like ./datasets/CelebA-HQ/train/ and ./datasets/CelebA-HQ/test/.
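
After extraction, the commands below assume a layout along these lines (the labels and images subfolders are the ones passed to --label_dir and --image_dir):

datasets/
  CelebA-HQ/
    train/
      images/
      labels/
    test/
      images/
      labels/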

Generating Images Using Pretrained Models

Once the dataset is prepared, the reconstruction results can be obtained using the pretrained models.

  1. Create ./checkpoints/ in the main folder and download the tar of the pretrained models from the Google Drive Folder. Save the tar in ./checkpoints/, then run

    cd checkpoints
    tar -xzvf CelebA-HQ_pretrained.tar.gz
    cd ../
    
  2. Generate the reconstruction results using the pretrained model.

    python test.py --name CelebA-HQ_pretrained --load_size 256 --crop_size 256 --dataset_mode custom --label_dir datasets/CelebA-HQ/test/labels --image_dir datasets/CelebA-HQ/test/images --label_nc 19 --no_instance --gpu_ids 0
    
  3. The reconstruction images are saved at ./results/CelebA-HQ_pretrained/ and the corresponding style codes are stored at ./styles_test/style_codes/.

  4. Pre-calculate the mean style codes for the UI mode. The mean style codes can be found at ./styles_test/mean_style_code/.

    python calculate_mean_style_code.py
    

Training New Models

To train a new model, you need to specify the option --dataset_mode custom, along with --label_dir [path_to_labels] --image_dir [path_to_images]. You also need to specify options such as --label_nc for the number of label classes in the dataset, and --no_instance to indicate that the dataset has no instance maps.

python train.py --name [experiment_name] --load_size 256 --crop_size 256 --dataset_mode custom --label_dir datasets/CelebA-HQ/train/labels --image_dir datasets/CelebA-HQ/train/images --label_nc 19 --no_instance --batchSize 32 --gpu_ids 0,1,2,3

If you only have a single GPU with limited memory, please use --batchSize 2 --gpu_ids 0.
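
For example, the multi-GPU training command above then becomes:

python train.py --name [experiment_name] --load_size 256 --crop_size 256 --dataset_mode custom --label_dir datasets/CelebA-HQ/train/labels --image_dir datasets/CelebA-HQ/train/images --label_nc 19 --no_instance --batchSize 2 --gpu_ids 0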

UI Introduction

We provide a convenient UI for interactive editing and further extensions. To run the UI mode, you need to:

  1. Run the step Generating Images Using Pretrained Models to save the style codes of the test images and the mean style codes, or directly download the style codes from here. (Note: if you use the downloaded style codes, you have to use the pretrained model.)

  2. Put the visualization images of the labels used for generation in ./imgs/colormaps/ and the style images in ./imgs/style_imgs_test/. Some example images are provided in these two folders. Note: the visualization images and the style images should be picked from ./datasets/CelebAMask-HQ/test/vis/ and ./datasets/CelebAMask-HQ/test/labels/, because only the style codes of the test images are saved in ./styles_test/style_codes/. If you want to use your own images, please prepare the images, labels, and label visualizations in ./datasets/CelebAMask-HQ/test/ in the same format, and calculate the corresponding style codes.

  3. Run the UI mode

    python run_UI.py --name CelebA-HQ_pretrained --load_size 256 --crop_size 256 --dataset_mode custom --label_dir datasets/CelebA-HQ/test/labels --image_dir datasets/CelebA-HQ/test/images --label_nc 19 --no_instance --gpu_ids 0
    
  4. How to use the UI: please check the detailed usage of the UI in our video.

Other Datasets

Will be released soon.

License

All rights reserved. Licensed under the CC BY-NC-SA 4.0 license (Attribution-NonCommercial-ShareAlike 4.0 International). The code is released for academic research use only.

Citation

If you use this code for your research, please cite our paper.

@InProceedings{Zhu_2020_CVPR,
author = {Zhu, Peihao and Abdal, Rameen and Qin, Yipeng and Wonka, Peter},
title = {SEAN: Image Synthesis With Semantic Region-Adaptive Normalization},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}

Acknowledgments

We thank Wamiq Reyaz Para for helpful comments. This code borrows heavily from SPADE. We thank Taesung Park for sharing his code. This work was supported by the KAUST Office of Sponsored Research (OSR) under Award No. OSR-CRG2018-3730.
