hejingwen / AdaFM

CVPR2019 (oral) Modulating Image Restoration with Continual Levels via Adaptive Feature Modification Layers (AdaFM). PyTorch implementation

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Adafm

Latest Development Of Isr Vsr
Latest development of ISR/VSR. Papers and related resources, mainly state-of-the-art and novel works in ICCV, ECCV and CVPR about image super-resolution and video super-resolution.
Stars: ✭ 93 (-38.41%)
Mutual labels:  super-resolution
Deeply Recursive Cnn Tf
Test implementation of Deeply-Recursive Convolutional Network for Image Super-Resolution
Stars: ✭ 116 (-23.18%)
Mutual labels:  super-resolution
Enhancenet Code
EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis (official repository)
Stars: ✭ 142 (-5.96%)
Mutual labels:  super-resolution
Vsr Duf Reimplement
A re-implementation of the paper "Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation" (the VSR-DUF model), with both training and test code in TensorFlow.
Stars: ✭ 101 (-33.11%)
Mutual labels:  super-resolution
Edafa
Test Time Augmentation (TTA) wrapper for computer vision tasks: segmentation, classification, super-resolution, ... etc.
Stars: ✭ 107 (-29.14%)
Mutual labels:  super-resolution
Upscalerjs
Image upscaling in JavaScript. Increase image resolution up to 4x using TensorFlow.js.
Stars: ✭ 126 (-16.56%)
Mutual labels:  super-resolution
Awesome Computer Vision
Awesome Resources for Advanced Computer Vision Topics
Stars: ✭ 92 (-39.07%)
Mutual labels:  super-resolution
Waifu2x Extension
Image, GIF and Video enlarger/upscaler achieved with waifu2x and Anime4K. [NO LONGER UPDATED]
Stars: ✭ 149 (-1.32%)
Mutual labels:  super-resolution
Awesome Eccv2020 Low Level Vision
A Collection of Papers and Codes for ECCV2020 Low Level Vision or Image Reconstruction
Stars: ✭ 111 (-26.49%)
Mutual labels:  super-resolution
Rdn Tensorflow
A TensorFlow implementation of CVPR 2018 paper "Residual Dense Network for Image Super-Resolution".
Stars: ✭ 136 (-9.93%)
Mutual labels:  super-resolution
Idn Caffe
Caffe implementation of "Fast and Accurate Single Image Super-Resolution via Information Distillation Network" (CVPR 2018)
Stars: ✭ 104 (-31.13%)
Mutual labels:  super-resolution
Supper Resolution
Super-resolution (SR) is a method of creating images with higher resolution from a set of low resolution images.
Stars: ✭ 105 (-30.46%)
Mutual labels:  super-resolution
Awesome Gan For Medical Imaging
Awesome GAN for Medical Imaging
Stars: ✭ 1,814 (+1101.32%)
Mutual labels:  super-resolution
3d Gan Superresolution
3D super-resolution using Generative Adversarial Networks
Stars: ✭ 97 (-35.76%)
Mutual labels:  super-resolution
Keras Image Super Resolution
EDSR, RCAN, SRGAN, SRFEAT, ESRGAN
Stars: ✭ 143 (-5.3%)
Mutual labels:  super-resolution
Super Resolution Videos
Applying SRGAN technique implemented in https://github.com/zsdonghao/SRGAN on videos to super resolve them.
Stars: ✭ 91 (-39.74%)
Mutual labels:  super-resolution
Drln
Densely Residual Laplacian Super-resolution, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020
Stars: ✭ 120 (-20.53%)
Mutual labels:  super-resolution
Basicsr
Open Source Image and Video Restoration Toolbox for Super-resolution, Denoise, Deblurring, etc. Currently, it includes EDSR, RCAN, SRResNet, SRGAN, ESRGAN, EDVR, BasicVSR, SwinIR, ECBSR, etc. Also support StyleGAN2, DFDNet.
Stars: ✭ 2,708 (+1693.38%)
Mutual labels:  super-resolution
Awesome Cvpr2021 Cvpr2020 Low Level Vision
A Collection of Papers and Codes for CVPR2021/CVPR2020 Low Level Vision
Stars: ✭ 139 (-7.95%)
Mutual labels:  super-resolution
3pu
Patch-base progressive 3D Point Set Upsampling
Stars: ✭ 131 (-13.25%)
Mutual labels:  super-resolution

Modulating Image Restoration with Continual Levels via Adaptive Feature Modification Layers (paper, supplementary file)

By Jingwen He, Chao Dong, and Yu Qiao
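
The core building block is the AdaFM layer: a depthwise convolution (one small filter per channel) whose output is added back to its input.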

import torch.nn as nn

class AdaptiveFM(nn.Module):
    def __init__(self, in_channel, kernel_size):
        super(AdaptiveFM, self).__init__()
        # "same" padding keeps the spatial size of the feature map
        padding = (kernel_size - 1) // 2
        # groups=in_channel => depthwise convolution: one small filter per channel
        self.transformer = nn.Conv2d(in_channel, in_channel, kernel_size,
                                     padding=padding, groups=in_channel)

    def forward(self, x):
        # modulation term plus identity skip connection
        return self.transformer(x) + x
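
Since the depthwise output is added to the input, zero filter weights and bias reduce the layer to an identity mapping. A minimal, hypothetical usage example (channel count and shapes are illustrative):

import torch

adafm = AdaptiveFM(in_channel=64, kernel_size=3)
x = torch.randn(1, 64, 48, 48)                    # illustrative feature map
y = adafm(x)                                      # same shape as x

torch.nn.init.zeros_(adafm.transformer.weight)    # zeroed filter and bias ...
torch.nn.init.zeros_(adafm.transformer.bias)
assert torch.allclose(adafm(x), x)                # ... make the layer an identity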

BibTex

@InProceedings{He_2019_CVPR,
author = {He, Jingwen and Dong, Chao and Qiao, Yu},
title = {Modulating Image Restoration With Continual Levels via Adaptive Feature Modification Layers},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}

Dependencies

Pretrained models

We provide a pretrained model for AdaFM-Net (experiments/pretrained_models) that deals with denoising from σ15 to σ75. Please run the following commands directly:

cd codes
python interpolate.py -opt options/test/test.json

The results can be found in the newly created directory AdaFM/results. The noise level of the input image is σ45, and you should obtain a sequence of interpolated denoising results between the two levels.

Codes

The overall code framework consists of four parts: Config, Data, Model, and Network. We also provide some useful scripts. Please run all the following commands in the “codes” directory.

How to Test

basic model and AdaFM-Net

  1. Modify the configuration file options/test/test.json (please refer to options for instructions.)
  2. Run command:
python test.py -opt options/test/test.json

modulation testing

  1. Modify the configuration file options/test/test.json
  2. Run command:
python interpolate.py -opt options/test/test.json

or:

  1. Use scripts/net_interp.py to obtain the interpolated network.
  2. Modify the configuration file options/test/test.json and run command: python test.py -opt options/test/test.json
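
Conceptually, the interpolated network scales the learned AdaFM filters toward zero (their identity setting) with a coefficient between 0 and 1. A simplified, hypothetical sketch of that idea (checkpoint paths and key naming are assumptions; see scripts/net_interp.py for the actual script):

import torch

coef = 0.5                                   # 0.0 = start level, 1.0 = end level
state = torch.load('adafm_net.pth')          # hypothetical AdaFM-Net checkpoint
interp = {k: (v * coef if 'transformer' in k else v) for k, v in state.items()}
torch.save(interp, 'interp_05.pth')          # point test.json at this checkpoint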

How to Train

basic model

  1. Prepare datasets, usually the DIV2K dataset. More details are in codes/data.
  2. Modify the configuration file options/train/train_basic.json (please refer to options for instructions.)
  3. Run command:
python train.py -opt options/train/train_basic.json

AdaFM-Net

  1. Prepare datasets, usually the DIV2K dataset.
  2. Modify the configuration file options/train/train_adafm.json
  3. Run command:
python train.py -opt options/train/train_adafm.json
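
For context, the AdaFM-Net stage adapts a pretrained basic model to a new restoration level by optimizing only the AdaFM layers while the backbone stays fixed. A toy, hypothetical sketch of that idea (the real training is driven by train_adafm.json):

import torch
import torch.nn as nn

# Toy backbone with an AdaptiveFM layer inserted after a convolution
net = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1),
                    AdaptiveFM(64, kernel_size=1),
                    nn.Conv2d(64, 3, 3, padding=1))
for name, param in net.named_parameters():
    param.requires_grad = 'transformer' in name   # train only the AdaFM filters
optimizer = torch.optim.Adam([p for p in net.parameters() if p.requires_grad],
                             lr=1e-4)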

Acknowledgement

  • This code borrows heavily from BasicSR.