
eezkni / UEGAN

[TIP2020] Pytorch implementation of "Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network"


Projects that are alternatives of or similar to UEGAN

Hypergan
Composable GAN framework with api and user interface
Stars: ✭ 1,104 (+1523.53%)
Mutual labels:  generative-adversarial-network, gan, unsupervised-learning
Gandissect
Pytorch-based tools for visualizing and understanding the neurons of a GAN. https://gandissect.csail.mit.edu/
Stars: ✭ 1,700 (+2400%)
Mutual labels:  generative-adversarial-network, gan, image-manipulation
Pix2pix
Image-to-image translation with conditional adversarial nets
Stars: ✭ 8,765 (+12789.71%)
Mutual labels:  generative-adversarial-network, gan, image-manipulation
All About The Gan
All About the GANs(Generative Adversarial Networks) - Summarized lists for GAN
Stars: ✭ 630 (+826.47%)
Mutual labels:  generative-adversarial-network, gan, unsupervised-learning
Iseebetter
iSeeBetter: Spatio-Temporal Video Super Resolution using Recurrent-Generative Back-Projection Networks | Python3 | PyTorch | GANs | CNNs | ResNets | RNNs | Published in Springer Journal of Computational Visual Media, September 2020, Tsinghua University Press
Stars: ✭ 202 (+197.06%)
Mutual labels:  generative-adversarial-network, gan, unsupervised-learning
Context Encoder
[CVPR 2016] Unsupervised Feature Learning by Image Inpainting using GANs
Stars: ✭ 731 (+975%)
Mutual labels:  generative-adversarial-network, gan, unsupervised-learning
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+15977.94%)
Mutual labels:  generative-adversarial-network, gan, image-manipulation
Deep Generative Prior
Code for deep generative prior (ECCV2020 oral)
Stars: ✭ 308 (+352.94%)
Mutual labels:  generative-adversarial-network, gan, image-manipulation
Dragan
A stable algorithm for GAN training
Stars: ✭ 189 (+177.94%)
Mutual labels:  generative-adversarial-network, gan, unsupervised-learning
Tsit
[ECCV 2020 Spotlight] A Simple and Versatile Framework for Image-to-Image Translation
Stars: ✭ 141 (+107.35%)
Mutual labels:  generative-adversarial-network, gan, image-manipulation
Hidt
Official repository for the paper "High-Resolution Daytime Translation Without Domain Labels" (CVPR2020, Oral)
Stars: ✭ 513 (+654.41%)
Mutual labels:  generative-adversarial-network, gan, unsupervised-learning
Pytorch Cyclegan And Pix2pix
Image-to-Image Translation in PyTorch
Stars: ✭ 16,477 (+24130.88%)
Mutual labels:  generative-adversarial-network, gan, image-manipulation
Igan
Interactive Image Generation via Generative Adversarial Networks
Stars: ✭ 3,845 (+5554.41%)
Mutual labels:  generative-adversarial-network, gan, image-manipulation
Image To Image Papers
🦓<->🦒 🌃<->🌆 A collection of image to image papers with code (constantly updating)
Stars: ✭ 949 (+1295.59%)
Mutual labels:  generative-adversarial-network, gan, image-manipulation
Anycost Gan
[CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
Stars: ✭ 367 (+439.71%)
Mutual labels:  generative-adversarial-network, gan, image-manipulation
Lggan
[CVPR 2020] Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation
Stars: ✭ 97 (+42.65%)
Mutual labels:  generative-adversarial-network, gan, image-manipulation
Distancegan
Pytorch implementation of "One-Sided Unsupervised Domain Mapping" NIPS 2017
Stars: ✭ 180 (+164.71%)
Mutual labels:  gan, image-manipulation, unsupervised-learning
Faceswap Gan
A denoising autoencoder + adversarial losses and attention mechanisms for face swapping.
Stars: ✭ 3,099 (+4457.35%)
Mutual labels:  generative-adversarial-network, gan, image-manipulation
Focal Frequency Loss
Focal Frequency Loss for Generative Models
Stars: ✭ 141 (+107.35%)
Mutual labels:  generative-adversarial-network, gan, image-manipulation
Gan Sandbox
Vanilla GAN implemented on top of keras/tensorflow enabling rapid experimentation & research. Branches correspond to implementations of stable GAN variations (i.e. ACGan, InfoGAN) and other promising variations of GANs like conditional and Wasserstein.
Stars: ✭ 210 (+208.82%)
Mutual labels:  generative-adversarial-network, gan, unsupervised-learning

Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network

IEEE Transactions on Image Processing (T-IP)

Zhangkai Ni¹, Wenhan Yang¹, Shiqi Wang¹, Lin Ma², Sam Kwong¹

[Paper-arXiv] [Paper-official]

¹City University of Hong Kong, ²Meituan Group

This repository provides the PyTorch code for "Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network", published in IEEE Transactions on Image Processing (T-IP), vol. 29, pp. 9140-9151, September 2020.

Abstract

Improving the aesthetic quality of images is challenging and in great demand from the public. To address this problem, most existing algorithms rely on supervised learning to train an automatic photo enhancer on paired data consisting of low-quality photos and their expert-retouched versions. However, the style and characteristics of photos retouched by experts may not meet the needs or preferences of general users. In this paper, we present an unsupervised image enhancement generative adversarial network (UEGAN), which learns the corresponding image-to-image mapping from a set of images with the desired characteristics in an unsupervised manner, rather than from a large number of paired images. The proposed model is based on a single deep GAN that embeds modulation and attention mechanisms to capture richer global and local features. Based on the proposed model, we introduce two losses to handle unsupervised image enhancement: (1) a fidelity loss, defined as an ℓ2 regularization in the feature domain of a pre-trained VGG network, which keeps the content of the enhanced image consistent with the input image, and (2) a quality loss, formulated as a relativistic hinge adversarial loss, which endows the input image with the desired characteristics. Both quantitative and qualitative results show that the proposed model effectively improves the aesthetic quality of images.
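For readers who want to map the two losses above onto code, the following is a minimal PyTorch sketch, not the authors' exact implementation: the VGG layer used for the fidelity loss and the discriminator outputs d_real/d_fake are assumptions made here for illustration.

import torch
import torch.nn.functional as F
from torchvision import models

# Fixed, pre-trained VGG feature extractor (the specific layer is an assumption;
# the paper may use a different one).
vgg = models.vgg19(pretrained=True).features[:21].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def fidelity_loss(enhanced, source):
    # l2 distance between VGG features of the enhanced image and the input image.
    return F.mse_loss(vgg(enhanced), vgg(source))

def quality_loss_g(d_real, d_fake):
    # Relativistic hinge adversarial loss for the generator.
    return (F.relu(1.0 + (d_real - d_fake.mean())).mean()
            + F.relu(1.0 - (d_fake - d_real.mean())).mean())

def quality_loss_d(d_real, d_fake):
    # Relativistic hinge adversarial loss for the discriminator.
    return (F.relu(1.0 - (d_real - d_fake.mean())).mean()
            + F.relu(1.0 + (d_fake - d_real.mean())).mean())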

Requirements and Installation

We recommend the following dependencies.

  • Python 3.6
  • PyTorch 1.4.0
  • tqdm 4.43.0
  • munch 2.5.0
  • torchvision 0.5.0
git clone https://github.com/eezkni/UEGAN --recursive
cd UEGAN
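If you prefer to install the dependencies with pip, a minimal setup (assuming an existing Python 3.6 environment; adjust the torch/torchvision packages to your CUDA setup as needed) is:

pip install torch==1.4.0 torchvision==0.5.0 tqdm==4.43.0 munch==2.5.0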

Preparing Data for the MIT-Adobe FiveK Dataset

You can follow the instructions below to generate your own training images (a scripted alternative to the Lightroom export is sketched after the export steps). Or, you can directly download our exported images FiveK_dataset_nzk (~6GB).

Getting the MIT-Adobe FiveK Dataset

Generating the Low-quality Images

  • Import the FiveK dataset into Adobe Lightroom.
  • In the Collections list (bottom left), select collection Inputs/InputAsShotZeroed.
  • Export all images in the following settings:
    • Select all images in the filmstrip or grid view (select one and press Ctrl-A), right-click any of them, and choose Export > Export....
    • Export Location: Export to=Specific folder, Folder=Your folder for low-quality images.
    • File Settings: Image Format=PNG, Color Space=sRGB, Bit Depth=8 bit/component
    • Image Sizing: Resize to Fit=Short Edge, check Don't Enlarge, enter 512 pixels; Resolution does not matter and can be ignored.
    • Finally, click Export.

Generating the High-quality Images

  • Import the FiveK dataset into Adobe Lightroom.
  • In the Collections list (bottom left), select collection Experts/C.
  • Export all images in the following settings:
    • Select all images in the filmstrip or grid view (select one and press Ctrl-A), right-click any of them, and choose Export > Export....
    • Export Location: Export to=Specific folder, Folder=Your folder for high-quality images.
    • File Settings: Image Format=PNG, Color Space=sRGB, Bit Depth=8 bit/component
    • Image Sizing: Resize to Fit=Short Edge, check Don't Enlarge, enter 512 pixels; Resolution does not matter and can be ignored.
    • Finally, click Export.
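If you do not have Lightroom at hand, a rough scripted alternative to the export settings above (an assumption for convenience, not part of the official pipeline) is to resize every image so its short edge is 512 pixels, without enlarging, and save it as an 8-bit RGB PNG:

import os
from PIL import Image

def export_short_edge_png(src_dir, dst_dir, size=512):
    # Resize so the short edge is `size` pixels (never enlarging) and save as PNG.
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        w, h = img.size
        scale = size / min(w, h)
        if scale < 1.0:  # "Don't Enlarge": only downscale
            img = img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
        img.save(os.path.join(dst_dir, os.path.splitext(name)[0] + ".png"))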

Testing

After training your own model, or after downloading the pre-trained model for the MIT-Adobe FiveK dataset and placing it in ./results/UEGAN-FiveK/models/, test UEGAN on FiveK by running the test script below.

python main.py --mode test --version UEGAN-FiveK --pretrained_model 92 --is_test_nima True --is_test_psnr_ssim True

Training

Prepare the training, testing, and validation data. The folder structure should be:

data
└─── fiveK
	├─── train
	|	├─── exp
	|	|	├──── a1.png                  
	|	|	└──── ......
	|	└─── raw
	|		├──── b1.png                  
	|		└──── ......
	├─── val
	|	├─── label
	|	|	├──── c1.png                  
	|	|	└──── ......
	|	└─── raw
	|		├──── c1.png                  
	|		└──── ......
	└─── test
		├─── label
		| 	├──── d1.png                  
		| 	└──── ......
		└─── raw
			├──── d1.png                  
			└──── ......

raw/ contains low-quality images, exp/ contains unpaired high-quality images, and label/ contains the corresponding ground truth.
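If you want to create this skeleton programmatically before copying your images into it, a small sketch (assuming exactly the folder names shown in the tree above) is:

from pathlib import Path

root = Path("./data/fiveK")
splits = {"train": ["exp", "raw"], "val": ["label", "raw"], "test": ["label", "raw"]}
for split, subdirs in splits.items():
    for sub in subdirs:
        # Creates e.g. ./data/fiveK/train/exp, ./data/fiveK/val/label, ...
        (root / split / sub).mkdir(parents=True, exist_ok=True)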

To train UEGAN on FiveK, run the training script below.

python main.py --mode train --version UEGAN-FiveK --use_tensorboard True --is_test_nima True --is_test_psnr_ssim True

This script will create a folder named ./results in which the results are saved.

  • The PSNR results will be saved to ./results/psnr_val_results (including the PSNR for each validated epoch and the summary)
  • The SSIM results will be saved to ./results/ssim_val_results (including the SSIM for each validated epoch and the summary)
  • The NIMA results will be saved to ./results/nima_val_results (including the NIMA for each validated epoch and the summary)
  • The training logs will be saved to ./results/UEGAN-FiveK/logs
  • The models will be saved to ./results/UEGAN-FiveK/models
  • The intermediate results will be saved to ./results/UEGAN-FiveK/samples
  • The validation results will be saved to ./results/UEGAN-FiveK/validation
  • The test results will be saved to ./results/UEGAN-FiveK/test

To view training results and loss plots, run tensorboard --logdir=results/UEGAN-FiveK/logs and open the displayed URL (for example, http://nzk-ub:6007/).

The summary of the PSNR test results will be saved to ./results/psnr_val_results/PSNR_total_results_epoch_avgpsnr.csv; the best epoch is reported in its last line.
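If you prefer to read the summary programmatically, a small sketch (assuming the layout described above, i.e. the best epoch is reported in the final row; the actual column layout may differ) is:

import csv

with open("./results/psnr_val_results/PSNR_total_results_epoch_avgpsnr.csv") as f:
    rows = [row for row in csv.reader(f) if row]
# The last row holds the summary with the best epoch.
print(rows[-1])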

Citation

If this code/UEGAN is useful for your research, please cite our paper:

@article{ni2020towards,
  title={Towards unsupervised deep image enhancement with generative adversarial network},
  author={Ni, Zhangkai and Yang, Wenhan and Wang, Shiqi and Ma, Lin and Kwong, Sam},
  journal={IEEE Transactions on Image Processing},
  volume={29},
  pages={9140--9151},
  year={2020},
  publisher={IEEE}
}

Contact

Thanks for your attention! If you have any suggestions or questions, feel free to leave a message here or contact Dr. Zhangkai Ni ([email protected]).

License

MIT License
