saeed-anwar / R2Net

Licence: other
Pytorch code for "Attention Based Real Image Restoration", IEEE Transactions on Neural Networks and Learning Systems, 2021

Projects that are alternatives of or similar to R2Net

Awesome-ICCV2021-Low-Level-Vision
A Collection of Papers and Codes for ICCV2021 Low Level Vision and Image Generation
Stars: ✭ 163 (+757.89%)
Mutual labels:  image-denoising, image-deraining
Uformer
[CVPR 2022] Official repository for the paper "Uformer: A General U-Shaped Transformer for Image Restoration".
Stars: ✭ 415 (+2084.21%)
Mutual labels:  image-denoising, image-deraining
Generative-Model
Repository for implementation of generative models with Tensorflow 1.x
Stars: ✭ 66 (+247.37%)
Mutual labels:  raindrop-removal
DudeNet
Designing and Training of A Dual CNN for Image Denoising (Knowledge-based Systems, 2021)
Stars: ✭ 45 (+136.84%)
Mutual labels:  image-denoising
Awesome-low-level-vision-resources
A curated list of resources for Low-level Vision Tasks
Stars: ✭ 35 (+84.21%)
Mutual labels:  image-denoising
Reproducible Image Denoising State Of The Art
Collection of popular and reproducible image denoising works.
Stars: ✭ 1,776 (+9247.37%)
Mutual labels:  image-denoising
SwinIR
SwinIR: Image Restoration Using Swin Transformer (official repository)
Stars: ✭ 1,260 (+6531.58%)
Mutual labels:  image-denoising
PRIDNet
Code for the paper "Pyramid Real Image Denoising Network"
Stars: ✭ 47 (+147.37%)
Mutual labels:  image-denoising
Image-Denoising-with-Deep-CNNs
Use deep Convolutional Neural Networks (CNNs) with PyTorch, including investigating DnCNN and U-net architectures
Stars: ✭ 54 (+184.21%)
Mutual labels:  image-denoising
sparselandtools
✨ A Python package for sparse representations and dictionary learning, including matching pursuit, K-SVD and applications.
Stars: ✭ 55 (+189.47%)
Mutual labels:  image-denoising
NLRN
Code for Non-Local Recurrent Network for Image Restoration (NeurIPS 2018)
Stars: ✭ 165 (+768.42%)
Mutual labels:  image-denoising
strollr2d icassp2017
Image Denoising Codes using STROLLR learning, the Matlab implementation of the paper in ICASSP2017
Stars: ✭ 22 (+15.79%)
Mutual labels:  image-denoising
IRCNN
IRCNN Image denoise
Stars: ✭ 31 (+63.16%)
Mutual labels:  image-denoise
ECNDNet
Enhanced CNN for image denoising (CAAI Transactions on Intelligence Technology, 2019)
Stars: ✭ 58 (+205.26%)
Mutual labels:  image-denoise
Restormer
[CVPR 2022--Oral] Restormer: Efficient Transformer for High-Resolution Image Restoration. SOTA for motion deblurring, image deraining, denoising (Gaussian/real data), and defocus deblurring.
Stars: ✭ 586 (+2984.21%)
Mutual labels:  image-deraining

Attention Based Real Image Restoration

This repository is for Attention Based Real Image Restoration (R2Net) introduced in the following paper

Saeed Anwar, Nick Barnes, and Lars Petersson, "Attention Based Real Image Restoration", IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2021

Contents

  1. Introduction
  2. Requirements
  3. Super-resolution
  4. Rain-Removal
  5. JPEG-Compression
  6. Real-Denoising
  7. Citation
  8. Acknowledgements

Introduction

Deep convolutional neural networks perform better on images containing spatially invariant degradations, also known as synthetic degradations; however, their performance is limited on real degraded photographs, which typically require multi-stage network modeling. To advance the practicability of restoration algorithms, this paper proposes a novel single-stage blind real image restoration network (R2Net) built from a modular architecture. We use a residual on the residual structure to ease low-frequency information flow and apply feature attention to exploit channel dependencies. Furthermore, the evaluation in terms of quantitative metrics and visual quality for four restoration tasks, i.e., denoising, super-resolution, raindrop removal, and JPEG compression, on 11 real degraded datasets against more than 30 state-of-the-art algorithms demonstrates the superiority of our R2Net. We also present a comparison on three synthetically generated degraded datasets to showcase our method's capability on synthetic denoising.

Requirements

  • PyTorch 0.4.0 or 0.4.1
  • Tested on Ubuntu 14.04/16.04
  • torchvision 0.2.1
  • python 3.6
  • CUDA 9.0
  • cuDNN 5.1
  • imageio
  • pillow
  • matplotlib
  • tqdm
  • scikit-image
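
The following is a minimal, illustrative setup sketch (not part of the original repository), assuming a conda-based environment; PyTorch 0.4.x is a legacy release, so the exact install command may need to be adapted to the CUDA 9.0 wheel for your platform.

# Illustrative environment setup (assumed, not from the original repository).
# PyTorch 0.4.x is legacy; consult pytorch.org for the wheel matching CUDA 9.0
# on your platform if the plain pip install below is no longer available.
conda create -n r2net python=3.6 -y
conda activate r2net
pip install torch==0.4.1 torchvision==0.2.1
pip install imageio pillow matplotlib tqdm scikit-image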

Super-resolution

The architecture for super-resolution.

SR Test

  1. Download the trained models and code of our paper from here. The total size for all models is 240MB.

  2. cd to '/R2NetSRTestCode/code', either run bash TestR2NET_2x.sh, bash TestR2NET_3x.sh, or bash TestR2NET_4x.sh, or run the following individual commands and find the results in the R2NET_SRResults directory.

    You can use the following script to test the algorithm.

# 2x
CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 2 --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_BIX2.pt --test_only --save_results --chop --save 'R2NET_Set5' --testpath ../LR/LRBI --testset Set5

CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 2 --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_BIX2.pt --test_only --save_results --chop --self_ensemble --save 'R2NETplus_Set5' --testpath ../LR/LRBI --testset Set5

# 3x
CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 3 --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_BIX3.pt --test_only --save_results --chop --save 'R2NET_Set14' --testpath ../LR/LRBI --testset Set14

CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 3 --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_BIX3.pt --test_only --save_results --chop --self_ensemble --save 'R2NETplus_Set14' --testpath ../LR/LRBI --testset Set14

# 4x

CUDA_VISIBLE_DEVICES=5 python main.py --data_test MyImage --scale 4 --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_BIX4.pt --test_only --save_results --chop --save 'R2NET_B100' --testpath ../LR/LRBI --testset BSD100

CUDA_VISIBLE_DEVICES=5 python main.py --data_test MyImage --scale 4 --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_BIX4.pt --test_only --save_results --chop --self_ensemble --save 'R2NETplus_B100' --testpath ../LR/LRBI --testset BSD100
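
For convenience, the individual commands above can also be wrapped in a small shell loop. The sketch below is illustrative only: it reuses the flags shown above and assumes the pretrained models follow the R2Net_BIX{scale}.pt naming and that Urban100 is laid out under the same --testpath as the other test sets.

# Illustrative sketch: loop over scales and test sets with the same flags as above.
for scale in 2 3 4; do
  for testset in Set5 Set14 BSD100 Urban100; do
    CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale $scale \
      --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_BIX${scale}.pt \
      --test_only --save_results --chop --save "R2NET_${testset}" \
      --testpath ../LR/LRBI --testset $testset
  done
done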

SR Results

All the results for SuperResolution R2Net can be downloaded from SET5 (2MB), SET5+ (2MB), SET14 (12.5MB), SET14+ (12MB), BSD100 (60MB), BSD100+ (60MB), Urban100 (315MB), and Urban100+ (308MB).

Visual Results

The visual comparisons for 4x super-resolution against several state-of-the-art algorithms on an image from the Urban100 dataset. Our R2Net results are the most accurate.

Quantitative Results

The performance of super-resolution algorithms on the Set5, Set14, BSD100, and Urban100 datasets for upscaling factors of 2, 3, and 4. The bold results are the best for single-image super-resolution.

Rain Removal

The architecture for Rain Removal and the subsequent restoration tasks. There are two modifications: a change in the position of the long skip connection and the removal of the upsampling layer.

RainRemoval Test

  1. The trained models and code for rain removal can be downloaded from here. The total size for all models is 121.5MB.

  2. cd to '/R2NetRainRemovalTestCode/code', either run bash TestScripts.sh or run the following individual commands and find the results in the R2NET_DeRainResults directory.

    You can use the following script to test the algorithm.

# test_a
CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --noise_g 1 --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_RainRemoval.pt --test_only --save_results --save 'R2NET_test_a' --testpath ../rainy --testset test_a

# test_b
CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --noise_g 1 --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_RainRemoval.pt --test_only --save_results --save 'R2NET_test_b' --testpath ../rainy --testset test_b
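
Since the two commands differ only in the test set name, they can also be run in one short loop; this is just an illustrative sketch reusing the flags above.

# Illustrative sketch: run both raindrop test sets with the same settings.
for testset in test_a test_b; do
  CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --noise_g 1 \
    --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_RainRemoval.pt \
    --test_only --save_results --save "R2NET_${testset}" \
    --testpath ../rainy --testset $testset
done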

RainRemoval Results

All the results for Rain Removal R2Net can be downloaded from here for both DeRain's test_a and test_b datasets.

Visual Results

The visual comparisons on rainy images. The first figure shows a plate affected by raindrops; our method is consistent in restoring the raindrop-affected areas. Similarly, in the second example of a rainy image, the cropped region shows a road sign affected by raindrops; our method recovers the distorted colors closer to the ground truth.

Quantitative Results

The average PSNR (dB)/SSIM of different methods on the raindrop dataset.

JPEG Compression

The architecture is the same for the rest of the restoration tasks.

JPEG Compression Test

  1. Download the trained models and code for JPEG compression of R2Net from Google Drive. The total size of all models is 43MB.

  2. cd to '/R2NetJPEGTestCode/code', either run bash TestScripts.sh or run the following individual commands and find the results in the R2Net_Results directory.

    You can use the following script to test the algorithm.

# Q10
CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --noise_g 10 --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_Q10.pt --test_only --save_results --save 'R2NET_JPEGQ10' --testpath ../noisy --testset LIVE1

# Q20
CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --noise_g 20 --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_Q20.pt --test_only --save_results --save 'R2NET_JPEGQ20' --testpath ../noisy --testset LIVE1

# Q30
CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --noise_g 30 --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_Q30.pt --test_only --save_results --save 'R2NET_JPEGQ30' --testpath ../noisy --testset LIVE1

# Q40
CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --noise_g 40 --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_Q40.pt --test_only --save_results --save 'R2NET_JPEGQ40' --testpath ../noisy --testset LIVE1
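
The four commands above differ only in the quality factor, so they can be scripted as below; this loop is an illustrative sketch that assumes the R2Net_Q{factor}.pt model naming used above.

# Illustrative sketch: loop over the JPEG quality factors with the same flags as above.
for q in 10 20 30 40; do
  CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --noise_g $q \
    --model R2NET --n_feats 64 --pre_train ../trained_models/R2Net_Q${q}.pt \
    --test_only --save_results --save "R2NET_JPEGQ${q}" \
    --testpath ../noisy --testset LIVE1
done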

JPEG Compression Results

If you don't want to re-run the models and wish to save some computation, all the results for JPEG compression R2Net can be downloaded from LIVE1 (51.5MB).

Visual Results

Sample images of Monarch and parrot with compression artifacts at a quality factor of 20. Our R2Net restores the texture correctly, specifically the lines, as shown in the zoomed version of the restored patch in the Monarch image. Moreover, R2Net accurately restores the texture on the face of the parrot in the second image.

Quantitative Results

Average PSNR/SSIM for JPEG image deblocking for quality factors of 10, 20, 30, and 40 on the LIVE1 dataset. The best results are in bold.


Real Denoising

The code and results for real image denoising can be found here.


Citation

If you find the code helpful in your research or work, please cite the following papers.

@article{Anwar2021R2NET,
  title={Attention Prior for Real Image Restoration},
  author={Saeed Anwar and Nick Barnes and Lars Petersson},
  journal={IEEE Transactions on Neural Networks and Learning Systems (TNNLS)},
  year={2021}
}

@article{anwar2019ridnet,
  title={Real Image Denoising with Feature Attention},
  author={Anwar, Saeed and Barnes, Nick},
  journal={IEEE International Conference on Computer Vision (ICCV-Oral)},
  year={2019}
}

@article{Anwar2020IERD,
  title={Identity Enhanced Image Denoising},
  author={Anwar, Saeed and Huynh, Cong P. and Porikli, Fatih},
  journal={IEEE Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year={2020}
}

Acknowledgements

This code is built on RIDNet (PyTorch).
