saeed-anwar / UWCNN

Licence: other
Code and Datasets for "Underwater Scene Prior Inspired Deep Underwater Image and Video Enhancement", Pattern Recognition, 2019

Programming Languages

python

Projects that are alternatives of or similar to UWCNN

ICCV2021-Single-Image-Desnowing-HDCWNet
This paper is accepted by ICCV 2021.
Stars: ✭ 47 (-42.68%)
Mutual labels:  image-restoration, image-enhancement
Awesome-Underwater-Image-Enhancement
A collection of awesome underwater image enhancement methods.
Stars: ✭ 57 (-30.49%)
Mutual labels:  underwater, image-enhancement
Bringing Old Photos Back To Life
Bringing Old Photo Back to Life (CVPR 2020 oral)
Stars: ✭ 9,525 (+11515.85%)
Mutual labels:  image-restoration
Image-Contrast-Enhancement
C++ implementation of several image contrast enhancement techniques.
Stars: ✭ 139 (+69.51%)
Mutual labels:  image-enhancement
SRResCycGAN
Code repo for "Deep Cyclic Generative Adversarial Residual Convolutional Networks for Real Image Super-Resolution" (ECCVW AIM2020).
Stars: ✭ 47 (-42.68%)
Mutual labels:  image-restoration
bluerobotics.github.io
Blue Robotics product documentation site.
Stars: ✭ 34 (-58.54%)
Mutual labels:  underwater
deep-atrous-guided-filter
Deep Atrous Guided Filter for Image Restoration in Under Display Cameras (UDC Challenge, ECCV 2020).
Stars: ✭ 32 (-60.98%)
Mutual labels:  image-restoration
SwinIR
SwinIR: Image Restoration Using Swin Transformer (official repository)
Stars: ✭ 1,260 (+1436.59%)
Mutual labels:  image-restoration
StarEnhancer
[ICCV 2021 Oral] StarEnhancer: Learning Real-Time and Style-Aware Image Enhancement
Stars: ✭ 127 (+54.88%)
Mutual labels:  image-enhancement
ugan-pytorch
Color restoration of underwater images with UGAN, implemented with PyTorch.
Stars: ✭ 26 (-68.29%)
Mutual labels:  underwater-images
Awesome-low-level-vision-resources
A curated list of resources for Low-level Vision Tasks
Stars: ✭ 35 (-57.32%)
Mutual labels:  image-enhancement
underwater image fusion
Underwater image enhancement and fusion algorithm (MATLAB).
Stars: ✭ 35 (-57.32%)
Mutual labels:  underwater
CResMD
(ECCV 2020) Interactive Multi-Dimension Modulation with Dynamic Controllable Residual Learning for Image Restoration
Stars: ✭ 92 (+12.2%)
Mutual labels:  image-restoration
Reproducible Image Denoising State Of The Art
Collection of popular and reproducible image denoising works.
Stars: ✭ 1,776 (+2065.85%)
Mutual labels:  image-restoration
CURL
Code for the ICPR 2020 paper: "CURL: Neural Curve Layers for Image Enhancement"
Stars: ✭ 177 (+115.85%)
Mutual labels:  image-enhancement
traiNNer
traiNNer: Deep learning framework for image and video super-resolution, restoration and image-to-image translation, for training and testing.
Stars: ✭ 130 (+58.54%)
Mutual labels:  image-restoration
AI-Lossless-Zoomer
AI lossless image upscaling tool.
Stars: ✭ 940 (+1046.34%)
Mutual labels:  image-restoration
Mobile Image-Video Enhancement
Sensifai image and video enhancement module on mobiles
Stars: ✭ 39 (-52.44%)
Mutual labels:  image-enhancement
sparse-deconv-py
Official Python implementation of the 'Sparse deconvolution'-v0.3.0
Stars: ✭ 18 (-78.05%)
Mutual labels:  image-restoration
miplib
A Python software library with a variety of functions for (optical) microscopy image restoration, reconstruction and analysis.
Stars: ✭ 54 (-34.15%)
Mutual labels:  image-restoration

Underwater Scene Prior Inspired Deep Underwater Image and Video Enhancement

This repository is for Underwater Scene Prior Inspired Deep Underwater Image and Video Enhancement (UWCNN), introduced in the following paper:

Paper

Chongyi Li, Saeed Anwar, Fatih Porikli, "Underwater Scene Prior Inspired Deep Underwater Image and Video Enhancement", Pattern Recognition, 2019. [arxiv].

Contents

  1. Introduction
  2. Network
  3. Test
  4. Datasets
  5. Results
  6. Citation

Introduction

In underwater scenes, wavelength-dependent light absorption and scattering degrade the visibility of images and videos. The degraded underwater images and videos affect the accuracy of pattern recognition, visual understanding, and key feature extraction in underwater scenes. In this paper, we propose an underwater image enhancement convolutional neural network (CNN) model based on an underwater scene prior, called UWCNN. Instead of estimating the parameters of the underwater imaging model, the proposed UWCNN model directly reconstructs the clear latent underwater image, benefiting from an underwater scene prior that can be used to synthesize underwater image training data. Moreover, thanks to its light-weight network structure and effective training data, our UWCNN model can be easily extended to underwater videos for frame-by-frame enhancement. Specifically, combining an underwater imaging physical model with the optical properties of underwater scenes, we first synthesize underwater image degradation datasets that cover a diverse set of water types and degradation levels. Then, a light-weight CNN model is designed for enhancing each underwater scene type and is trained on the corresponding training data. Finally, this UWCNN model is directly extended to underwater video enhancement. Experiments on real-world and synthetic underwater images and videos demonstrate that our method generalizes well to different underwater scenes. The underwater types and corresponding values are given below.
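
For context, this scene-prior-based synthesis builds on the widely used simplified underwater image formation model; the exact formulation and the per-type attenuation values are given in Eqs. (1)-(2) and Table 1 of the paper, so the display below is only a schematic restatement, where J is the clear latent image, B the homogeneous background light, d(x) the scene depth, and Nrer(λ) the normalized residual energy ratio of channel λ:

    U_\lambda(x) = J_\lambda(x)\, T_\lambda(x) + B_\lambda \bigl(1 - T_\lambda(x)\bigr), \qquad
    T_\lambda(x) = \mathrm{Nrer}(\lambda)^{\, d(x)}, \qquad \lambda \in \{r, g, b\}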

Network

Test

Requirements

Python 3.5
TensorFlow 1.0.0
SciPy 1.1.0 (required)
Tested with TensorFlow 1.14.0 and Python 3.6.

Quick start

  1. The trained models are in 'checkpoint/coarse_230/'

  2. Choose the model you need. For example, to use the Type-1 model, put 'model_checkpoint_path: "coarse.model-type1"' on the first line of the checkpoint text file. If you use the existing checkpoint text unchanged, the model named on its last line (coarse.model-typeII) is loaded. A sketch of the edited file is shown after this list.

  3. Put the test images into 'test_real'

    Run the following command to test the algorithm:

     python main_test.py
    
  4. Find the results in 'test_real'
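
For reference, here is a minimal sketch of what the checkpoint text file in 'checkpoint/coarse_230/' could look like after selecting the Type-1 model. It follows the standard TensorFlow checkpoint-index format; the exact entries depend on the model files shipped with the repository:

    model_checkpoint_path: "coarse.model-type1"
    all_model_checkpoint_paths: "coarse.model-type1"
    all_model_checkpoint_paths: "coarse.model-typeII"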

Datasets

Synthesized

To synthesize the underwater image degradation datasets, we use the attenuation coefficients described in Table 1 for the different water types of the oceanic and coastal classes (i.e., I, IA, IB, II, and III for open ocean waters, and 1, 3, 5, 7, and 9 for coastal waters). Type-I is the clearest and Type-III is the most turbid open ocean water; similarly, for coastal waters, Type-1 is the clearest and Type-9 is the most turbid. We apply Eqs. (1) and (2) (please check the paper) to build ten types of underwater image datasets from the RGB-D NYU-v2 indoor dataset, which consists of 1449 images. To improve the quality of the datasets, we crop the original NYU-v2 images from 480x640 to 460x620. This dataset is for non-commercial use only. The size of each dataset is 1.2 GB.
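
As an illustration only, the sketch below shows how one degraded image could be synthesized from a clean RGB-D pair under the simplified formation model sketched in the Introduction. The attenuation and background-light values are placeholders, not the Table 1 coefficients, and the actual datasets follow Eqs. (1) and (2) of the paper:

    import numpy as np

    def synthesize_underwater(image, depth,
                              nrer=(0.85, 0.95, 0.97),        # placeholder per-channel Nrer values
                              background=(0.10, 0.50, 0.60)):  # placeholder background light B
        """Apply a simplified underwater formation model to a clean RGB-D pair.

        image: float array in [0, 1], shape (H, W, 3), the clear latent image J.
        depth: float array in metres, shape (H, W), the scene depth d(x).
        """
        nrer = np.asarray(nrer, dtype=np.float32)
        background = np.asarray(background, dtype=np.float32)
        # Transmission T_lambda(x) = Nrer(lambda) ** d(x), one map per channel.
        transmission = nrer[None, None, :] ** depth[:, :, None]
        # Degraded image U = J * T + B * (1 - T).
        degraded = image * transmission + background[None, None, :] * (1.0 - transmission)
        return np.clip(degraded, 0.0, 1.0)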

Type-I: [Baidu]

Type-IA: [Baidu]

Type-IB: [Baidu]

Type-II: [Baidu]

Type-III: [Baidu]

Type-1: [Baidu]

Type-3: [Baidu]

Type-5: [Baidu]

Type-7: [Baidu]

Type-9: [Baidu]

Results

Quantitative Results

The performance of state-of-the-art algorithms on widely used publicly available datasets in terms of PSNR (in dB), MSE and SSIM. The best results are highlighted in bold.
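
For readers who want to reproduce these metrics on their own outputs, a small sketch using scikit-image (not a dependency of this repository) is given below; inputs are assumed to be float images in [0, 1]:

    from skimage.metrics import (mean_squared_error,
                                 peak_signal_noise_ratio,
                                 structural_similarity)

    def evaluate(enhanced, reference):
        """Compute MSE, PSNR (in dB), and SSIM between an enhanced image and its ground truth."""
        mse = mean_squared_error(reference, enhanced)
        psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
        # channel_axis requires scikit-image >= 0.19; older versions use multichannel=True.
        ssim = structural_similarity(reference, enhanced, channel_axis=-1, data_range=1.0)
        return mse, psnr, ssim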

Synthetic Visual Results

Visual_Synthetic

(a) Raw underwater images. (b) Results of RED [21]. (c) Results of UDCP [22]. (d) Results of ODM [25]. (e) Results of UIBLA [26]. (f) Our results. (g) Ground truth. The types of underwater images in the first column, from top to bottom, are Type-1, Type-3, Type-5, Type-7, Type-9, Type-I, Type-II, and Type-III. Our method removes the light absorption effects and recovers the original colors without any artifacts.

Real Visual Results

Visual_Real

(a) Real-world underwater images. (b) Results of RED [21]. (c) Results of UDCP [22]. (d) Results of ODM [25]. (e) Results of UIBLA [26]. (f) Results of our UWCNN. (g) Results of our UWCNN+. Our method (i.e., UWCNN and UWCNN+) produces results without visual artifacts, color deviations, or over-saturation. It also unveils spatial motifs and details.

Video Visual Results

Visual_VideoFrames

(a) Raw underwater video (from top to bottom: frame 1, frame 2, frame 3, frame 4, frame 29, and frame 54 of this video). (b) Results of RED [21]. (c) Results of UDCP [22]. (d) Results of ODM [25]. (e) Results of UIBLA [26]. (f) Results of our UWCNN.

Citation

If you find the code helpful in your research or work, please cite the following papers.

@article{Anwar2019UWCNN,
  title = "Underwater scene prior inspired deep underwater image and video enhancement",
  journal = "Pattern Recognition",
  volume = "98",
  pages = "107038",
  year = "2020",
  issn = "0031-3203",
  doi = "https://doi.org/10.1016/j.patcog.2019.107038",
  url = "http://www.sciencedirect.com/science/article/pii/S0031320319303401",
  author = "Chongyi Li and Saeed Anwar and Fatih Porikli",
}

@article{anwar2019diving,
  title={Diving Deeper into Underwater Image Enhancement: A Survey},
  author={Anwar, Saeed and Li, Chongyi},
  journal={arXiv preprint arXiv:1907.07863},
  year={2019}
}