
VITA-Group / Enlightengan

Licence: other
[IEEE TIP'2021] "EnlightenGAN: Deep Light Enhancement without Paired Supervision" by Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, Zhangyang Wang

Programming Languages

python

Projects that are alternatives of or similar to Enlightengan

Generative adversarial networks 101
Keras implementations of Generative Adversarial Networks. GANs, DCGAN, CGAN, CCGAN, WGAN and LSGAN models with MNIST and CIFAR-10 datasets.
Stars: ✭ 138 (-68.2%)
Mutual labels:  gan, generative-adversarial-networks
Dragan
A stable algorithm for GAN training
Stars: ✭ 189 (-56.45%)
Mutual labels:  gan, unsupervised-learning
Lr Gan.pytorch
Pytorch code for our ICLR 2017 paper "Layered-Recursive GAN for image generation"
Stars: ✭ 145 (-66.59%)
Mutual labels:  gan, unsupervised-learning
Psgan
Periodic Spatial Generative Adversarial Networks
Stars: ✭ 108 (-75.12%)
Mutual labels:  gan, generative-adversarial-networks
GAN-Project-2018
GAN in Tensorflow to be run via Linux command line
Stars: ✭ 21 (-95.16%)
Mutual labels:  gan, generative-adversarial-networks
Gdwct
Official PyTorch implementation of GDWCT (CVPR 2019, oral)
Stars: ✭ 122 (-71.89%)
Mutual labels:  gan, generative-adversarial-networks
Pose2pose
This is a pix2pix demo that learns from pose and translates it into a human figure. A webcam-enabled application is also provided that translates your pose to the trained pose. Everybody dance now!
Stars: ✭ 182 (-58.06%)
Mutual labels:  gan, generative-adversarial-networks
Pytorch Pix2pix
Pytorch implementation of pix2pix for various datasets.
Stars: ✭ 74 (-82.95%)
Mutual labels:  gan, generative-adversarial-networks
catgan pytorch
Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks
Stars: ✭ 50 (-88.48%)
Mutual labels:  gan, unsupervised-learning
Gan Sandbox
Vanilla GAN implemented on top of keras/tensorflow enabling rapid experimentation & research. Branches correspond to implementations of stable GAN variations (i.e. ACGan, InfoGAN) and other promising variations of GANs like conditional and Wasserstein.
Stars: ✭ 210 (-51.61%)
Mutual labels:  gan, unsupervised-learning
Tensorflow Generative Model Collections
Collection of generative models in Tensorflow
Stars: ✭ 3,785 (+772.12%)
Mutual labels:  gan, generative-adversarial-networks
MNIST-invert-color
Invert the color of MNIST images with PyTorch
Stars: ✭ 13 (-97%)
Mutual labels:  gan, generative-adversarial-networks
Tbgan
Project Page of 'Synthesizing Coupled 3D Face Modalities by Trunk-Branch Generative Adversarial Networks'
Stars: ✭ 105 (-75.81%)
Mutual labels:  gan, generative-adversarial-networks
Oneshottranslation
Pytorch implementation of "One-Shot Unsupervised Cross Domain Translation" NIPS 2018
Stars: ✭ 135 (-68.89%)
Mutual labels:  gan, unsupervised-learning
Dped
Software and pre-trained models for automatic photo quality enhancement using Deep Convolutional Networks
Stars: ✭ 1,315 (+203%)
Mutual labels:  gan, generative-adversarial-networks
Distancegan
Pytorch implementation of "One-Sided Unsupervised Domain Mapping" NIPS 2017
Stars: ✭ 180 (-58.53%)
Mutual labels:  gan, unsupervised-learning
Hypergan
Composable GAN framework with api and user interface
Stars: ✭ 1,104 (+154.38%)
Mutual labels:  gan, unsupervised-learning
Geneva
Code to train and evaluate the GeNeVA-GAN model for the GeNeVA task proposed in our ICCV 2019 paper "Tell, Draw, and Repeat: Generating and modifying images based on continual linguistic instruction"
Stars: ✭ 71 (-83.64%)
Mutual labels:  gan, generative-adversarial-networks
Iseebetter
iSeeBetter: Spatio-Temporal Video Super Resolution using Recurrent-Generative Back-Projection Networks | Python3 | PyTorch | GANs | CNNs | ResNets | RNNs | Published in Springer Journal of Computational Visual Media, September 2020, Tsinghua University Press
Stars: ✭ 202 (-53.46%)
Mutual labels:  gan, unsupervised-learning
mSRGAN-A-GAN-for-single-image-super-resolution-on-high-content-screening-microscopy-images.
Generative Adversarial Network for single image super-resolution in high content screening microscopy images
Stars: ✭ 52 (-88.02%)
Mutual labels:  gan, generative-adversarial-networks

EnlightenGAN

IEEE Transactions on Image Processing, 2021, EnlightenGAN: Deep Light Enhancement without Paired Supervision

Representative Results

[Representative results figure]

Overall Architecture

[Architecture figure]

Environment Preparation

Python 3.5

You will need at least three 1080 Ti GPUs, or you should reduce the batch size accordingly.
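The three-GPU requirement exists because the default batch size is tuned for that much memory. As a rough rule of thumb you can scale the batch size linearly with available GPUs; the base values below are illustrative assumptions, not the repository's actual defaults (check scripts/script.py for those):

```python
def scale_batch_size(base_batch=32, base_gpus=3, available_gpus=1):
    """Scale the batch size linearly with GPU count.

    base_batch and base_gpus are assumed values for illustration only;
    check scripts/script.py for the repository's real defaults.
    """
    return max(1, base_batch * available_gpus // base_gpus)
```

For example, with a single GPU this suggests roughly a third of the original batch size.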

pip install -r requirement.txt

mkdir model

Download the VGG pretrained model from [Google Drive 1], then put it into the directory model.
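The directory setup and weight check can also be scripted. The weight filename below is an assumption; keep whatever name the Google Drive download actually carries:

```python
from pathlib import Path


def prepare_model_dir(weight_name="vgg16.weight", base="model"):
    """Create the model directory and report whether the VGG weights are in place.

    weight_name is an assumed filename, not confirmed by the repository;
    use the name of the file you actually downloaded from Google Drive.
    """
    model_dir = Path(base)
    model_dir.mkdir(exist_ok=True)
    weight_path = model_dir / weight_name
    if not weight_path.exists():
        print(f"Missing VGG weights: place the downloaded file at {weight_path}")
    return weight_path.exists()
```

Running this before training gives an early, explicit failure instead of a mid-run crash when the perceptual-loss network cannot load.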

Training Process

Before starting the training process, launch visdom.server for visualization:

nohup python -m visdom.server -port=8097
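Since nohup detaches the server, a quick socket probe confirms it actually came up before you start training (8097 matches the port above; the helper name is just for illustration):

```python
import socket


def visdom_running(host="localhost", port=8097, timeout=1.0):
    """Return True if something is accepting connections on the visdom port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False, check nohup.out for the server's startup errors.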

Then run the following command:

python scripts/script.py --train

Testing Process

Download the pretrained model and put it into ./checkpoints/enlightening.

Create the directories ../test_dataset/testA and ../test_dataset/testB. Put your test images in ../test_dataset/testA (and keep at least one arbitrary image in ../test_dataset/testB so that the program can start).
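This directory layout can be set up programmatically; testB only needs a single image of any kind for the dataloader to start, so the optional placeholder argument (an assumption for illustration) can point at any image on disk:

```python
import shutil
from pathlib import Path


def prepare_test_dirs(root="../test_dataset", placeholder=None):
    """Create testA/testB and optionally seed testB with one placeholder image.

    placeholder is an optional path to any image file; it is only there so
    the dataloader finds something in testB and can start.
    """
    root = Path(root)
    for name in ("testA", "testB"):
        (root / name).mkdir(parents=True, exist_ok=True)
    if placeholder is not None:
        shutil.copy(placeholder, root / "testB")
    return root
```

The function is idempotent, so re-running it before each test pass is safe.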

Run

python scripts/script.py --predict

Dataset Preparation

Training data [Google Drive] (unpaired images collected from multiple datasets)

Testing data [Google Drive] (including LIME, MEF, NPE, VV, DICP)

A [BaiduYun] mirror is also available, thanks to @YHLelaine!

If you find this work useful, please cite:

@article{jiang2021enlightengan,
  title={EnlightenGAN: Deep Light Enhancement without Paired Supervision},
  author={Jiang, Yifan and Gong, Xinyu and Liu, Ding and Cheng, Yu and Fang, Chen and Shen, Xiaohui and Yang, Jianchao and Zhou, Pan and Wang, Zhangyang},
  journal={IEEE Transactions on Image Processing},
  volume={30},
  pages={2340--2349},
  year={2021},
  publisher={IEEE}
}