hjSim / NTIRE2019_deblur

Licence: other
A Deep Motion Deblurring Network based on Per-Pixel Adaptive Kernels with Residual Down-Up and Up-Down Modules: source code of the 3rd-place winner of the NTIRE 2019 Video Deblurring Challenge

Programming Languages

python

Projects that are alternatives of or similar to NTIRE2019_deblur

SIUN
Sharp Image Deblurring
Stars: ✭ 123 (+412.5%)
Mutual labels:  deblurring
DeFMO
[CVPR 2021] DeFMO: Deblurring and Shape Recovery of Fast Moving Objects
Stars: ✭ 144 (+500%)
Mutual labels:  deblurring
Deblurgan
Image Deblurring using Generative Adversarial Networks
Stars: ✭ 2,033 (+8370.83%)
Mutual labels:  deblurring
traiNNer
traiNNer: Deep learning framework for image and video super-resolution, restoration and image-to-image translation, for training and testing.
Stars: ✭ 130 (+441.67%)
Mutual labels:  deblurring
blur-kernel-space-exploring
Exploring Image Deblurring via Blur Kernel Space (CVPR'21)
Stars: ✭ 111 (+362.5%)
Mutual labels:  deblurring
RSCD
[CVPR2021] Towards Rolling Shutter Correction and Deblurring in Dynamic Scenes
Stars: ✭ 83 (+245.83%)
Mutual labels:  deblurring
CResMD
(ECCV 2020) Interactive Multi-Dimension Modulation with Dynamic Controllable Residual Learning for Image Restoration
Stars: ✭ 92 (+283.33%)
Mutual labels:  deblurring
DisguiseNet
Code for DisguiseNet : A Contrastive Approach for Disguised Face Verification in the Wild
Stars: ✭ 20 (-16.67%)
Mutual labels:  cvprw
HBPN
Hierarchical Back Projection Network (HBPN) for image super-resolution in CVPR2019.
Stars: ✭ 19 (-20.83%)
Mutual labels:  ntire2019

A Deep Motion Deblurring Network based on Per-Pixel Adaptive Kernels with Residual Down-Up and Up-Down Modules

Updates

  • Oct. 6th, 2020
    • We now also provide our model's output images on the benchmark datasets GOPRO and REDS (NTIRE 2019). Please refer to the section below.

The source code of the 3rd-place winner of the NTIRE 2019 Video Deblurring Challenge (CVPRW, 2019): "A Deep Motion Deblurring Network based on Per-Pixel Adaptive Kernels with Residual Down-Up and Up-Down Modules" by Hyeonjun Sim and Munchurl Kim. [pdf], [NTIRE2019]

example
Examples of deblurring results on the GOPRO dataset. (a) Input blurry image; (b) result of Tao et al. [2]; (c) result of our proposed network; (d) clean image.

Prerequisites

  • python 2.7
  • tensorflow (GPU version) >= 1.6 (the runtimes reported in the paper were measured on TF 1.6, but the code in this repo also runs on TF 1.13)

Testing with pretrained model

Update: we also provide our model's output images on the benchmark datasets REDS (NTIRE 2019) and GOPRO. The output images were generated by our model trained on the corresponding training set (the REDS and GOPRO training sets, respectively). Download links: NTIRE_test_output and GOPRO_test_output

We provide two test models, one per training dataset: REDS (NTIRE 2019 Video Deblurring Challenge, Track 1: Clean [pdf], [page]) and GOPRO ([pdf], [page]), with checkpoints in /checkpoints_NTIRE/ and /checkpoints_GOPRO/, respectively. Download links: /checkpoints_NTIRE/ and /checkpoints_GOPRO/

For the NTIRE REDS dataset, our model was trained on the 'blur' and 'sharp' pairs.
For the GOPRO dataset, our model was trained on the linear (not gamma-corrected) blur and sharp pairs, as other state-of-the-art methods did.
For example, to run the test model pretrained on the GOPRO dataset:
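The linear/gamma-corrected distinction above can be sketched as a simple power-law mapping. This is a minimal illustration, assuming the common gamma of 2.2 used to approximate the camera response; it is not the repo's own preprocessing code.

```python
import numpy as np

def to_linear(img, gamma=2.2):
    """Approximate inverse gamma correction: map a gamma-corrected
    image (float values in [0, 1]) back to linear intensities."""
    return np.clip(img, 0.0, 1.0) ** gamma

def to_gamma(img, gamma=2.2):
    """Forward gamma correction: linear intensities to display space."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)
```

Averaging sharp frames in linear space and then applying `to_gamma` is how gamma-corrected blur is typically synthesized; the linear blur skips that last step.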

python main.py --pretrained_dataset 'GOPRO' --test_dataset './Dataset/YOUR_TEST/' --working_directory './data/'

or, pretrained on the NTIRE dataset with additional geometric self-ensemble (which takes considerably more time):

python main.py --pretrained_dataset 'NTIRE' --test_dataset './Dataset/YOUR_TEST/' --working_directory './data/' --ensemble
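Geometric self-ensemble runs the network on the 8 flip/rotation variants of each input, inverts each transform on the output, and averages the results, which is why it costs roughly 8x the inference time. A rough sketch (the `model` callable here is a stand-in for the actual network, not this repo's API):

```python
import numpy as np

def geometric_self_ensemble(model, img):
    """Average model outputs over the 8 flip/rotation variants of the
    input. `model` is any function mapping an HxWxC array to an
    HxWxC array of the same shape."""
    outputs = []
    for k in range(4):                 # rotations by 0, 90, 180, 270 degrees
        for flip in (False, True):     # with and without horizontal flip
            t = np.rot90(img, k)
            if flip:
                t = t[:, ::-1]
            out = model(t)
            if flip:                   # undo the flip on the output
                out = out[:, ::-1]
            outputs.append(np.rot90(out, -k))  # undo the rotation
    return np.mean(outputs, axis=0)
```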

test_dataset is the location of the test input blur frames, which should follow this layout:

├──── Dataset/
   ├──── YOUR_TEST/
      ├──── blur/
        ├──── Video0/
           ├──── 0000.png
           ├──── 0001.png
           └──── ...
        ├──── Video1/
           ├──── 0000.png
           ├──── 0001.png
           └──── ...
        ├──── ...

The deblurred output frames will be written to working_directory as:

├──── data/
   ├──── test/
     ├──── Video0/
        ├──── 0000.png
        ├──── 0001.png
        └──── ...
     ├──── Video1/
        ├──── 0000.png
        ├──── 0001.png
        └──── ...
     ├──── ...

Evaluation

To calculate the PSNR between the deblurred output and the corresponding sharp frames:

python main.py --phase 'psnr'

Before that, the corresponding sharp frames should follow this layout:

├──── Dataset/
   ├──── YOUR_TEST/
      ├──── sharp/
        ├──── Video0/
           ├──── 0000.png
           ├──── 0001.png
           └──── ...
        ├──── Video1/
           ├──── 0000.png
           ├──── 0001.png
           └──── ...
        ├──── ...
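For reference, the metric computed here is standard PSNR between each output frame and its sharp counterpart. A minimal sketch, assuming 8-bit frames (the repo computes this internally via `--phase 'psnr'`):

```python
import numpy as np

def psnr(sharp, deblurred, max_val=255.0):
    """PSNR in dB between two frames of equal shape.
    `max_val` is the peak signal value (255 for 8-bit images)."""
    sharp = np.asarray(sharp, dtype=np.float64)
    deblurred = np.asarray(deblurred, dtype=np.float64)
    mse = np.mean((sharp - deblurred) ** 2)
    if mse == 0:
        return float('inf')          # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)
```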

Our NTIRE model yielded an average PSNR of 33.86 dB with self-ensemble and 33.38 dB without it over 300 validation frames.
For the GOPRO benchmark test dataset:

Method          PSNR (dB)  SSIM
Nah et al. [1]  28.62      0.9094
Tao et al. [2]  30.26      0.9342
Ours            31.34      0.9474

Citation

@inproceedings{sim2019deep,
  title={A Deep Motion Deblurring Network Based on Per-Pixel Adaptive Kernels With Residual Down-Up and Up-Down Modules},
  author={Sim, Hyeonjun and Kim, Munchurl},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops},
  year={2019}
}

Contact

Please send me an email: [email protected]

Reference

[1] Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In CVPR, 2017.
[2] Xin Tao, Hongyun Gao, Xiaoyong Shen, Jue Wang, and Jiaya Jia. Scale-recurrent network for deep image deblurring. In CVPR, 2018.
