
saeed-anwar / Drln

License: MIT
Densely Residual Laplacian Super-resolution, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020

Programming Languages

Python

Projects that are alternatives to or similar to Drln

Master Thesis Bayesiancnn
Master Thesis on Bayesian Convolutional Neural Network using Variational Inference
Stars: ✭ 222 (+85%)
Mutual labels:  convolutional-neural-networks, super-resolution
Mirnet
Official repository for "Learning Enriched Features for Real Image Restoration and Enhancement" (ECCV 2020). SOTA results for image denoising, super-resolution, and image enhancement.
Stars: ✭ 247 (+105.83%)
Mutual labels:  attention-mechanism, super-resolution
Triplet Attention
Official PyTorch Implementation for "Rotate to Attend: Convolutional Triplet Attention Module." [WACV 2021]
Stars: ✭ 222 (+85%)
Mutual labels:  convolutional-neural-networks, attention-mechanism
Pytorch Question Answering
Important paper implementations for Question Answering using PyTorch
Stars: ✭ 154 (+28.33%)
Mutual labels:  convolutional-neural-networks, attention-mechanism
Textclassifier
Text classifier for Hierarchical Attention Networks for Document Classification
Stars: ✭ 985 (+720.83%)
Mutual labels:  convolutional-neural-networks, attention-mechanism
Anime4k
A High-Quality Real Time Upscaler for Anime Video
Stars: ✭ 14,083 (+11635.83%)
Mutual labels:  convolutional-neural-networks, super-resolution
PAM
[TPAMI 2020] Parallax Attention for Unsupervised Stereo Correspondence Learning
Stars: ✭ 62 (-48.33%)
Mutual labels:  attention-mechanism, super-resolution
Image Super Resolution
🔎 Super-scale your images and run experiments with Residual Dense and Adversarial Networks.
Stars: ✭ 3,293 (+2644.17%)
Mutual labels:  convolutional-neural-networks, super-resolution
Tensorflow Srgan
Tensorflow implementation of "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network" (Ledig et al. 2017)
Stars: ✭ 33 (-72.5%)
Mutual labels:  convolutional-neural-networks, super-resolution
Text classification
all kinds of text classification models and more with deep learning
Stars: ✭ 7,179 (+5882.5%)
Mutual labels:  convolutional-neural-networks, attention-mechanism
Attribute Aware Attention
[ACM MM 2018] Attribute-Aware Attention Model for Fine-grained Representation Learning
Stars: ✭ 143 (+19.17%)
Mutual labels:  convolutional-neural-networks, attention-mechanism
Seranet
Super Resolution of picture images using deep learning
Stars: ✭ 79 (-34.17%)
Mutual labels:  convolutional-neural-networks, super-resolution
Image Caption Generator
A neural network to generate captions for an image using CNN and RNN with BEAM Search.
Stars: ✭ 126 (+5%)
Mutual labels:  convolutional-neural-networks, attention-mechanism
Iseebetter
iSeeBetter: Spatio-Temporal Video Super Resolution using Recurrent-Generative Back-Projection Networks | Python3 | PyTorch | GANs | CNNs | ResNets | RNNs | Published in Springer Journal of Computational Visual Media, September 2020, Tsinghua University Press
Stars: ✭ 202 (+68.33%)
Mutual labels:  convolutional-neural-networks, super-resolution
Pan
[Params: Only 272K!!!] Efficient Image Super-Resolution Using Pixel Attention, in ECCV Workshop, 2020.
Stars: ✭ 151 (+25.83%)
Mutual labels:  attention-mechanism, super-resolution
Pytorch Srgan
A modern PyTorch implementation of SRGAN
Stars: ✭ 289 (+140.83%)
Mutual labels:  convolutional-neural-networks, super-resolution
Jsi Gan
Official repository of JSI-GAN (Accepted at AAAI 2020).
Stars: ✭ 42 (-65%)
Mutual labels:  convolutional-neural-networks, super-resolution
Idn Caffe
Caffe implementation of "Fast and Accurate Single Image Super-Resolution via Information Distillation Network" (CVPR 2018)
Stars: ✭ 104 (-13.33%)
Mutual labels:  convolutional-neural-networks, super-resolution
Brainforge
A Neural Networking library based on NumPy only
Stars: ✭ 114 (-5%)
Mutual labels:  convolutional-neural-networks
Deepway
This project is an aid to the blind. To date there has been no technological advancement in the way the blind navigate, so I have used deep learning, particularly convolutional neural networks, to help them navigate the streets.
Stars: ✭ 118 (-1.67%)
Mutual labels:  convolutional-neural-networks

Densely Residual Laplacian Super-resolution

This repository is for the Densely Residual Laplacian Network (DRLN) introduced in the following paper:

Saeed Anwar, Nick Barnes, "Densely Residual Laplacian Super-resolution", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020. arXiv version, and Supplementary Materials

The model is built in PyTorch 1.1.0 and tested in an Ubuntu 14.04/16.04 environment (Python 3.6, CUDA 9.0, cuDNN 5.1).

Our DRLN is also available in PyTorch 0.4.0 and 0.4.1. You can download this version from Google Drive or here.
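
To confirm which PyTorch, CUDA, and cuDNN versions your environment provides before choosing the matching model version, a quick check such as the following can be run (these are standard PyTorch attributes):

    import torch

    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    print("CUDA build version:", torch.version.cuda)
    print("cuDNN version:", torch.backends.cudnn.version())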

Contents

  1. Introduction
  2. Network
  3. Test
  4. Results
  5. Citation
  6. Acknowledgements

Introduction

Super-resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm, namely the Densely Residual Laplacian Network (DRLN). The proposed network employs a cascading residual-on-the-residual structure to allow the flow of low-frequency information, so the network can focus on learning mid- and high-level features. In addition, deep supervision is achieved via densely concatenated residual blocks, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features and to learn the inter- and intra-level dependencies between the feature maps. Finally, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets show that our DRLN performs favorably against state-of-the-art methods, both visually and in terms of accuracy.

Sample results on URBAN100 with Bicubic (BI) degradation for 4x on “img 074” and for 8x on “img 040”.

Network

The architecture of our proposed densely residual Laplacian attention network (DRLN) with densely residual Laplacian modules (DRLM).

Laplacian attention architecture.
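
As an unofficial, simplified sketch of the two components named in the captions above, the PyTorch code below builds a Laplacian attention gate from globally pooled features probed at several dilation rates, and a densely residual module that concatenates the outputs of successive residual blocks, compresses them with a 1x1 convolution, applies the attention gate, and adds the result back to the module input. The layer names, widths, dilation rates, and block counts here are illustrative assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class LaplacianAttention(nn.Module):
        """Sketch of Laplacian attention: gate channels using globally pooled
        statistics probed at multiple dilation rates (rates are assumptions)."""
        def __init__(self, channels=64, reduction=16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)
            mid = channels // reduction
            self.c3 = nn.Conv2d(channels, mid, 3, padding=3, dilation=3)
            self.c5 = nn.Conv2d(channels, mid, 3, padding=5, dilation=5)
            self.c7 = nn.Conv2d(channels, mid, 3, padding=7, dilation=7)
            self.fuse = nn.Conv2d(3 * mid, channels, 1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            s = self.pool(x)                                  # B x C x 1 x 1 statistics
            y = torch.cat([self.act(self.c3(s)),
                           self.act(self.c5(s)),
                           self.act(self.c7(s))], dim=1)
            return x * torch.sigmoid(self.fuse(y))            # per-channel gating

    class ResBlock(nn.Module):
        """Plain conv-ReLU-conv residual block used inside the module sketch."""
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )

        def forward(self, x):
            return x + self.body(x)

    class DenselyResidualModule(nn.Module):
        """Sketch of a DRLM-style block: densely concatenated residual blocks,
        1x1 compression, Laplacian attention, and a long skip connection."""
        def __init__(self, channels=64, n_blocks=3):
            super().__init__()
            self.blocks = nn.ModuleList([ResBlock(channels) for _ in range(n_blocks)])
            self.compress = nn.Conv2d(channels * (n_blocks + 1), channels, 1)
            self.attention = LaplacianAttention(channels)

        def forward(self, x):
            feats = [x]
            for block in self.blocks:
                feats.append(block(feats[-1]))
            y = self.compress(torch.cat(feats, dim=1))
            return x + self.attention(y)

    # Quick shape check: the module preserves the feature-map shape.
    out = DenselyResidualModule(64)(torch.randn(1, 64, 32, 32))
    assert out.shape == (1, 64, 32, 32)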

Test

Quick start

  1. Download the trained models for our paper and place them in '/TestCode/TrainedModels'.

    All the models (BIX2/3/4/8, BDX3) can be downloaded from Google Drive, Baidu, or here. The total size of all models is 737 MB.

  2. Change directory to '/TestCode/code' and run the following scripts.

    You can use the scripts in 'TestDRLN_All' to reproduce the results in our paper, or run individual scripts such as 'TestDRLN_2x.sh'.

    # No self-ensemble: DRLN
    # BI degradation model, x2, x3
    # x2
    CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 2 --model DRLN --n_feats 64 --pre_train ../TrainedModels/DRLN_BIX2/DRLN_BIX2.pt --test_only --save_results --chop --save 'DRLN_Set5' --testpath ../LR/LRBI --testset Set5
    
    CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 2 --model DRLN --n_feats 64 --pre_train ../TrainedModels/DRLN_BIX2/DRLN_BIX2.pt --test_only --save_results --chop --save 'DRLN_Set14' --testpath ../LR/LRBI --testset Set14
    # x3
    CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 3 --model DRLN --n_feats 64 --pre_train ../TrainedModels/DRLN_BIX3/DRLN_BIX3.pt --test_only --save_results --chop --save 'DRLN_Set5' --testpath ../LR/LRBI --testset Set5
    
    CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 3 --model DRLN --n_feats 64 --pre_train ../TrainedModels/DRLN_BIX3/DRLN_BIX3.pt --test_only --save_results --chop --save 'DRLN_Set14' --testpath ../LR/LRBI --testset Set14
    
    # x3 Blur-downscale (BD)
    CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 3 --model DRLN --n_feats 64 --pre_train ../TrainedModels/DRLN_BDX3/DRLN_BDX3.pt --test_only --save_results --chop --save 'DRLN_BD_Set5' --testpath ../LR/LRBD --testset Set5
    

Results

All the results for DRLN can be downloaded from Google Drive or here. The total size of the results is 2.41 GB.

Quantitative Results

The performance of state-of-the-art algorithms on five widely used, publicly available datasets (SET5, SET14, BSD100, URBAN100, MANGA109), in terms of PSNR (in dB) and SSIM. The best results are highlighted in red, while blue indicates the second-best super-resolution method.

Quantitative results for blur-downscale (BD) degradation at 3x. The best results are highlighted in red, while blue indicates the second best.

The plot shows the average PSNR as a function of the noise sigma. Our method consistently improves over dedicated noisy super-resolution methods and CNN-based methods for all noise levels.
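
For reference, the sketch below shows one common way to compute the PSNR reported in these tables on the luminance (Y) channel, with a border of 'scale' pixels cropped on each side, which is the usual convention for SR benchmarks; the exact evaluation code used for the paper may differ in details such as rounding and border handling.

    import numpy as np

    def rgb_to_y(img):
        # ITU-R BT.601 luminance for an RGB image with values in [0, 255] (H x W x 3).
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        return 16.0 + (65.738 * r + 129.057 * g + 25.064 * b) / 256.0

    def psnr_y(sr, hr, scale):
        # PSNR (dB) on the Y channel, with 'scale' border pixels cropped on each side.
        sr_y = rgb_to_y(sr.astype(np.float64))[scale:-scale, scale:-scale]
        hr_y = rgb_to_y(hr.astype(np.float64))[scale:-scale, scale:-scale]
        mse = np.mean((sr_y - hr_y) ** 2)
        return 10.0 * np.log10(255.0 ** 2 / mse)

    # Example with synthetic images, purely to exercise the function:
    hr = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
    sr = np.clip(hr + np.random.randint(-5, 6, hr.shape), 0, 255).astype(np.uint8)
    print("PSNR (Y): %.2f dB" % psnr_y(sr, hr, scale=4))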

Visual Results

Visual results with Bicubic (BI) degradation (4x) on "img 076" and "img_044" from URBAN100, as well as YumeiroCooking from MANGA109.

Comparisons on images with fine details for a high upsampling factor of 8x on URBAN100 and MANGA109. The best results are in bold.

Comparison on blur-downscale (BD) degraded images with sharp edges and textures, taken from the URBAN100 and SET14 datasets at scale 3x. Our method restores the sharpest edges and textures.

Noisy SR visual comparison on BSD100. For sigma = 10, textures on the birds are much better reconstructed and the noise is better removed by our method compared to IRCNN and RCAN.

Noisy visual comparison on "Llama". Textures on the fur and on the rocks in the background are much better reconstructed in our result compared to the conventional BM3D-SR and BM3D-SRNI.

Comparison on real-world images. In these cases, neither the downsampling blur kernels nor the ground-truth images are available.

For more information, please refer to our paper.

Citation

If you find the code helpful in your research or work, please cite the following papers.

@article{anwar2019drln,
    title={Densely Residual Laplacian Super-Resolution},
    author={Anwar, Saeed and Barnes, Nick},
    journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
    year={2020}
}

@article{anwar2020deepSR,
    author = {Anwar, Saeed and Khan, Salman and Barnes, Nick},
    title = {A Deep Journey into Super-Resolution: A Survey},
    year = {2020},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    volume = {53},
    number = {3},
    issn = {0360-0300},
    journal = {ACM Computing Surveys},
    month = may,
    articleno = {60},
    numpages = {34},
}

Acknowledgements

This code is built on RCAN (PyTorch) and EDSR (PyTorch). We thank the authors for sharing their codes.
