JunlinHan / CWR

Licence: other
Code and dataset for Single Underwater Image Restoration by Contrastive Learning, IGARSS 2021, oral.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to CWR

Gesturegan
[ACM MM 2018 Oral] GestureGAN for Hand Gesture-to-Gesture Translation in the Wild
Stars: ✭ 136 (+216.28%)
Mutual labels:  generative-adversarial-network, image-generation
Mmediting
OpenMMLab Image and Video Editing Toolbox
Stars: ✭ 2,618 (+5988.37%)
Mutual labels:  generative-adversarial-network, image-generation
Unetgan
Official Implementation of the paper "A U-Net Based Discriminator for Generative Adversarial Networks" (CVPR 2020)
Stars: ✭ 139 (+223.26%)
Mutual labels:  generative-adversarial-network, image-generation
Mlds2018spring
Machine Learning and having it Deep and Structured (MLDS) in 2018 spring
Stars: ✭ 124 (+188.37%)
Mutual labels:  generative-adversarial-network, image-generation
Finegan
FineGAN: Unsupervised Hierarchical Disentanglement for Fine-grained Object Generation and Discovery
Stars: ✭ 240 (+458.14%)
Mutual labels:  generative-adversarial-network, image-generation
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+25325.58%)
Mutual labels:  generative-adversarial-network, image-generation
Tsit
[ECCV 2020 Spotlight] A Simple and Versatile Framework for Image-to-Image Translation
Stars: ✭ 141 (+227.91%)
Mutual labels:  generative-adversarial-network, image-generation
Pix2pix
Image-to-image translation with conditional adversarial nets
Stars: ✭ 8,765 (+20283.72%)
Mutual labels:  generative-adversarial-network, image-generation
Pytorch Cyclegan And Pix2pix
Image-to-Image Translation in PyTorch
Stars: ✭ 16,477 (+38218.6%)
Mutual labels:  generative-adversarial-network, image-generation
Conditional Gan
Tensorflow implementation for Conditional Convolutional Adversarial Networks.
Stars: ✭ 202 (+369.77%)
Mutual labels:  generative-adversarial-network, image-generation
Lggan
[CVPR 2020] Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation
Stars: ✭ 97 (+125.58%)
Mutual labels:  generative-adversarial-network, image-generation
CResMD
(ECCV 2020) Interactive Multi-Dimension Modulation with Dynamic Controllable Residual Learning for Image Restoration
Stars: ✭ 92 (+113.95%)
Mutual labels:  image-restoration, low-level-vision
Ganspace
Discovering Interpretable GAN Controls [NeurIPS 2020]
Stars: ✭ 1,224 (+2746.51%)
Mutual labels:  generative-adversarial-network, image-generation
WGAN-GP-TensorFlow
TensorFlow implementations of Wasserstein GAN with Gradient Penalty (WGAN-GP), Least Squares GAN (LSGAN), GANs with the hinge loss.
Stars: ✭ 42 (-2.33%)
Mutual labels:  generative-adversarial-network, image-generation
Dcgan Tensorflow
A Tensorflow implementation of Deep Convolutional Generative Adversarial Networks trained on Fashion-MNIST, CIFAR-10, etc.
Stars: ✭ 70 (+62.79%)
Mutual labels:  generative-adversarial-network, image-generation
Focal Frequency Loss
Focal Frequency Loss for Generative Models
Stars: ✭ 141 (+227.91%)
Mutual labels:  generative-adversarial-network, image-generation
Multi Viewpoint Image Generation
Given an image and a target viewpoint, generate synthetic image in the target viewpoint
Stars: ✭ 23 (-46.51%)
Mutual labels:  generative-adversarial-network, image-generation
Bringing Old Photos Back To Life
Bringing Old Photo Back to Life (CVPR 2020 oral)
Stars: ✭ 9,525 (+22051.16%)
Mutual labels:  generative-adversarial-network, image-restoration
Arbitrary Text To Image Papers
A collection of arbitrary text to image papers with code (constantly updating)
Stars: ✭ 196 (+355.81%)
Mutual labels:  generative-adversarial-network, image-generation
Anime2Sketch
A sketch extractor for anime/illustration.
Stars: ✭ 1,623 (+3674.42%)
Mutual labels:  generative-adversarial-network, image-generation

arXiv (conference) | arXiv (journal)

Contrastive UnderWater Restoration (CWR)

New: Please check out our journal version at arXiv (journal).

We provide our PyTorch implementation of the paper "Single Underwater Image Restoration by Contrastive Learning". CWR is designed for underwater image restoration but is not limited to it: it performs style transfer for several low-level vision tasks (e.g., dehazing, underwater image enhancement, deraining) while keeping the image structure intact.
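CWR builds on the CUT-style contrastive framework (see Acknowledgments): patches of the restored output are pulled toward the patches of the raw input at the same locations and pushed away from patches at other locations. The snippet below is only a minimal sketch of such a patch-wise InfoNCE loss; the function name and tensor shapes are illustrative, and the actual CWR loss is implemented in this repo's model code.

# Minimal sketch of a patch-wise InfoNCE (PatchNCE-style) loss as used in
# CUT-like contrastive translation methods. Illustrative only.
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_out, feat_in, temperature=0.07):
    # feat_out, feat_in: (num_patches, dim) features of corresponding patches
    # from the restored output and the raw underwater input.
    feat_out = F.normalize(feat_out, dim=1)
    feat_in = F.normalize(feat_in, dim=1)
    logits = feat_out @ feat_in.t() / temperature  # similarity of every pair of patches
    # The positive for output patch i is input patch i; all others are negatives.
    targets = torch.arange(feat_out.size(0), device=feat_out.device)
    return F.cross_entropy(logits, targets)

# Example with random features:
loss = patch_nce_loss(torch.randn(256, 128), torch.randn(256, 128))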

CWR achieves state-of-the-art performance on underwater image restoration when trained on HICRD (Heron Island Coral Reef Dataset).

CWR and other unsupervised learning-based models work as follows:

Before restoration:

After restoration:

Datasets

Heron Island Coral Reef Dataset (HICRD) contains 6003 low-quality images, 3673 good-quality images, and 2000 restored images. The low-quality and restored images form the unpaired training set (trainA + trainB), while the paired training set contains good-quality images (trainA_paired) and their corresponding restored images (trainB_paired). The test set contains 300 good-quality images (testA) and 300 paired restored images (testB) as ground truth. All images have a resolution of 1842 x 980. The copyright belongs to CSIRO (Commonwealth Scientific and Industrial Research Organisation).
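To confirm that an unpacked copy of HICRD matches the splits described above, a quick check such as the following can be used. The split names come from this section; the root path assumes the default ./datasets/HICRD location used by the commands later in this README.

# Count the images in each HICRD split described above.
import os

root = "./datasets/HICRD"  # adjust if the dataset was unpacked elsewhere
splits = ["trainA", "trainB", "trainA_paired", "trainB_paired", "testA", "testB"]

for split in splits:
    path = os.path.join(root, split)
    if os.path.isdir(path):
        n = sum(f.lower().endswith((".png", ".jpg", ".jpeg")) for f in os.listdir(path))
        print(f"{split}: {n} images")
    else:
        print(f"{split}: missing ({path})")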

Download link: https://data.csiro.au/collections/collection/CIcsiro:49488

To download the dataset, you need to:

1: Click the download link.

2: Click Download, then select "Download all files via WebDAV" as the download method.

3: Enter your email address and click Request files.

4: A verification email will be sent to your address; click the link in it.

5: You should receive further instructions soon. Use the provided username and password to access and download the files (a scripted alternative is sketched below).
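If you prefer to script the WebDAV download, something along these lines should work once the credentials arrive. The URL, username, password, and output filename below are placeholders; substitute the values from the instruction email.

# Illustrative WebDAV download via HTTP basic auth (placeholders only).
import requests

url = "https://<webdav-server-from-email>/<path-to-file>"  # placeholder
user, password = "<user-from-email>", "<password-from-email>"  # placeholders

resp = requests.get(url, auth=(user, password), stream=True, timeout=60)
resp.raise_for_status()
with open("hicrd_part.zip", "wb") as f:  # hypothetical output name
    for chunk in resp.iter_content(chunk_size=1 << 20):
        f.write(chunk)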

HICRD covers 8 different sites, 6 of which come with measured water parameters (diffuse attenuation coefficient). The link contains both the paired and the unpaired HICRD. Metadata (water parameters, camera sensor response) is also provided. More details will be included in the journal version of this paper.
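For context, the diffuse attenuation coefficient is the K_d term of the commonly used simplified underwater image formation model shown below. This is a generic formulation for illustration, not necessarily the exact imaging model used to generate the HICRD reference images; see the journal version for the actual procedure.

I_c(x) = J_c(x) \, e^{-K_d^{(c)} d(x)} + B_c \left(1 - e^{-K_d^{(c)} d(x)}\right), \quad c \in \{R, G, B\}

Here J_c is the unattenuated scene radiance, d(x) the scene distance, K_d^{(c)} the per-channel diffuse attenuation coefficient, and B_c the background (veiling) light.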

Location of Heron Island:

General information for different sites:

Prerequisites

Python 3.6 or above.

For packages, see requirements.txt.

Getting started

  • Clone this repo:
git clone https://github.com/JunlinHan/CWR.git
  • Install PyTorch 1.6 or above and other dependencies (e.g., torchvision, visdom, dominate, gputil).

    For pip users, please type the command pip install -r requirements.txt.

    For Conda users, you can create a new Conda environment using conda env create -f environment.yml.
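After installation, an optional sanity check like the one below confirms that the dependencies mentioned above import cleanly and that a CUDA device is visible. It does not touch any CWR code.

# Optional environment check for the dependencies listed above.
import importlib
import torch

for pkg in ("torchvision", "visdom", "dominate", "GPUtil"):
    try:
        mod = importlib.import_module(pkg)
        print(f"{pkg}: {getattr(mod, '__version__', 'installed')}")
    except ImportError:
        print(f"{pkg}: MISSING")

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())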

CWR Training and Test

  • A one-image train/test example is provided.

  • To view training results and loss plots, run python -m visdom.server and click the URL http://localhost:8097.

  • Train the CWR model on the provided example HICRD data:

python train.py --dataroot ./datasets/HICRD --name HICRD_small

The checkpoints will be stored at ./checkpoints/HICRD_small/web.

  • Test the CWR model:
python test.py --dataroot ./datasets/HICRD --name HICRD_small --preprocess scale_width --load_size 1680

The test results will be saved to an HTML file here: ./results/HICRD_small/test_latest/index.html.

Pre-trained CWR model

We provide our pre-trained models:

Pre-trained CWR: https://drive.google.com/file/d/1-Ouzzup2jNdg1PoYaQIjd-tm9K3LvwVl/view?usp=sharing

Use the pre-trained model

1: Download the full HICRD dataset and replace ./datasets/HICRD with it (optional).

2: Download the pre-trained model, unzip it, and put it inside ./checkpoints (you may need to create the checkpoints folder yourself if you have not run the training code).

python test.py --dataroot ./datasets/HICRD --name HICRD_CWR --preprocess scale_width --load_size 1680

The test results will be saved to an HTML file here: ./results/HICRD_CWR/test_latest/index.html.

For the FID score, use pytorch-fid.

Compute the FID score:

python -m pytorch_fid ./results/HICRD_CWR/test_latest/images/fake_B ./results/HICRD_CWR/test_latest/images/real_B
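The same metric can also be computed from Python. The sketch below assumes a recent pytorch-fid release, where calculate_fid_given_paths is exposed in pytorch_fid.fid_score; its argument list has changed between versions, so fall back to the CLI command above if it does not match your installation.

# Scripted FID computation with pytorch-fid (argument names may differ by version).
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

paths = [
    "./results/HICRD_CWR/test_latest/images/fake_B",
    "./results/HICRD_CWR/test_latest/images/real_B",
]
device = "cuda" if torch.cuda.is_available() else "cpu"
fid = calculate_fid_given_paths(paths, batch_size=50, device=device, dims=2048)
print("FID:", fid)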

Citation

Conference: Single Underwater Image Restoration by Contrastive Learning
Junlin Han, Mehrdad Shoeiby, Tim Malthus, Elizabeth Botha, Janet Anstee, Saeed Anwar, Ran Wei, Lars Petersson, Mohammad Ali Armin
CSIRO and Australian National University
In IGARSS 2021

Journal: Underwater Image Restoration via Contrastive Learning and a Real-world Dataset
Junlin Han, Mehrdad Shoeiby, Tim Malthus, Elizabeth Botha, Janet Anstee, Saeed Anwar, Ran Wei, Mohammad Ali Armin, Hongdong Li, Lars Petersson
CSIRO and Australian National University
In submission

If you use our code/results/dataset, please consider citing our paper. Thanks in advance!

@inproceedings{han2021cwr,
  title={Single Underwater Image Restoration by Contrastive Learning},
  author={Junlin Han and Mehrdad Shoeiby and Tim Malthus and Elizabeth Botha and Janet Anstee and Saeed Anwar and Ran Wei and Lars Petersson and Mohammad Ali Armin},
  booktitle={IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
  year={2021}
}

@article{han2021underwater,
  title={Underwater Image Restoration via Contrastive Learning and a Real-world Dataset},
  author={Junlin Han and Mehrdad Shoeiby and Tim Malthus and Elizabeth Botha and Janet Anstee and Saeed Anwar and Ran Wei and Mohammad Ali Armin and Hongdong Li and Lars Petersson},
  journal={arXiv preprint arXiv:2106.10718},
  year={2021}
}

If you use something included in CUT, you may also cite CUT.

Contact

[email protected] or [email protected]

Acknowledgments

Our code is developed based on pytorch-CycleGAN-and-pix2pix and CUT. We thank the authors of CycleGAN and CUT for their awesome work. We also thank pytorch-fid for the FID computation, and the anonymous reviewers for their helpful feedback.
