hieubkset / Keras Image Super Resolution

EDSR, RCAN, SRGAN, SRFEAT, ESRGAN

Single Image Super Resolution, EDSR, SRGAN, SRFeat, RCAN, ESRGAN and ERCA (ours) benchmark comparison

This is a Keras implementation of single image super-resolution algorithms: EDSR, SRGAN, SRFeat, RCAN, ESRGAN and ERCA (ours). This project aims to improve the performance of the baseline (SRFeat).

To run this project you need to set up the environment, download the dataset, run a script to preprocess the data, and then you can train and test the network models. The steps below walk through the whole process; I hope they are clear enough.

Prerequisites

I tested this project on a Core i7 machine with 64 GB of RAM and a Titan XP GPU. Because training takes several days, I recommend a sufficiently powerful CPU/GPU and at least 12 GB of RAM.

Environment

I recommend using virtualenv to create a virtual environment. You can install virtualenv (which is itself a pip package) via

pip install virtualenv

To create a virtual environment called .env with Python 3, run

virtualenv -p python3 .env

Activate the virtual environment:

source .env/bin/activate

Install dependencies:

pip install -r requirements.txt

Dataset

I use the DIV2K dataset (link), which consists of 800 HR training images and 100 HR validation images. To expand the volume of training data, I applied the same data augmentation method as SRFeat. The authors provide augmentation code; you can find it here.

The DIV2K dataset only contains high-resolution (HR) images and does not contain low-resolution (LR) images, so before running the code you have to generate the LR images first. You can do this with the MATLAB scripts (https://github.com/hieubkset/Keras-Image-Super-Resolution/tree/master/data_preprocess).

To generate the training LR images, there are two scripts:

  • aug_data_div2k.m: generates LR images using bicubic interpolation with a scale factor of 4.
  • aug_data_div2k_half.m: generates LR images using bicubic interpolation with a scale factor of 2.

If you run both scripts, you should see about 150 thousand images in each folder (GT and LR_bicubic).

To generate the testing LR images, use the script testset_bicubic_downsample.m.

These scripts search for all HR images in an HR folder and then write the generated LR images to an LR folder, so you need to edit the first lines of each script to point to your HR and LR folders.
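If MATLAB is not available, the same kind of bicubic downsampling can be sketched in Python with Pillow. This is only an illustrative equivalent, not the repo's own preprocessing, and `make_lr` is a hypothetical helper name:

```python
from PIL import Image

def make_lr(hr_img: Image.Image, scale: int = 4) -> Image.Image:
    """Downsample an HR image by `scale` using bicubic interpolation."""
    w, h = hr_img.size
    # Crop so the dimensions are divisible by the scale factor,
    # as SR pipelines generally expect.
    w, h = w - w % scale, h - h % scale
    hr_img = hr_img.crop((0, 0, w, h))
    return hr_img.resize((w // scale, h // scale), Image.BICUBIC)
```

For an exact match with the repo's data, prefer the MATLAB scripts above, since MATLAB's and Pillow's bicubic kernels differ slightly.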

Training

To pretrain a generator, run the following command

python pretrain.py --arc=[edsr, srgan, srfeat, rcan, esrgan, erca] --train=/path/to/training/data --train-ext=[.png, .jpg] --valid=/path/to/validation/data --valid-ext=[.png, .jpg] [--resume=/path/to/checkpoint --init_epoch=0 --cuda=1]

For example, to train an ERCA generator on the DIV2K dataset:

python pretrain.py --arc=erca --train=data/train/DIV2K --train-ext=.png --valid=data/test/Set5 --valid-ext=.png --cuda=1

Data folders should contain an HR folder and an LR folder, e.g. data/train/DIV2K/HR and data/train/DIV2K/LR.

To train a generator with a GAN, run the following command

python gantrain.py --arc=[edsr, srgan, srfeat, rcan, esrgan, erca] --train=/path/to/training/data --train-ext=[.png, .jpg] --g_init=/path/to/pretrain/model --cuda=1

For example:

python gantrain.py --arc=erca --train=data/train/DIV2K --train-ext=.png --g_init=exp/erca-06-24-21\:12/final_model.h5 --cuda=0

Please note that we only implement a GAN training algorithm that is the same as SRFeat's.
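For reference, the HR/LR folder layout described above can be paired into (LR, HR) file lists with a few lines of Python. `list_pairs` is an illustrative helper, not part of the repo:

```python
import os

def list_pairs(root: str, ext: str = ".png"):
    """Return (LR path, HR path) tuples for matching file names under
    root/HR and root/LR, e.g. root = 'data/train/DIV2K'."""
    hr_dir = os.path.join(root, "HR")
    lr_dir = os.path.join(root, "LR")
    names = sorted(f for f in os.listdir(hr_dir) if f.endswith(ext))
    return [(os.path.join(lr_dir, n), os.path.join(hr_dir, n)) for n in names]
```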

Generating Super-Resolution Images

To generate SR images from a trained model, run one of the following:

  • For one image
python demo.py --arc=[edsr, srgan, srfeat, rcan, esrgan, erca] --lr_path=/path/to/one/image --save_dir=/path/to/save --model_path=/path/to/model --cuda=0
  • For images in a folder
python demo.py --arc=[edsr, srgan, srfeat, rcan, esrgan, erca] --lr_dir=/path/to/folder --ext=[.png, .jpg] --save_dir=/path/to/save --model_path=/path/to/model --cuda=0
  • To generate SR images using our GAN-trained model, run the following command:
python demo.py --arc=gan --lr_path=/path/to/one/image --save_dir=/path/to/save --model_path=/path/to/model --cuda=0
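Under the hood, demo.py presumably loads the trained Keras generator from the .h5 file and runs it on a normalized LR image. A minimal sketch of that inference step, with illustrative names rather than the repo's actual API:

```python
import numpy as np

def super_resolve(model, lr_img):
    """lr_img: HxWx3 uint8 array; returns the model's SR output as uint8.
    `model` is any object with a Keras-style predict() method."""
    x = lr_img.astype(np.float32) / 255.0            # normalize to [0, 1]
    y = model.predict(x[np.newaxis, ...])[0]         # add, then drop, batch dim
    return np.clip(np.rint(y * 255.0), 0, 255).astype(np.uint8)
```

With the real project you would obtain `model` via something like Keras's `load_model` on the checkpoint passed as `--model_path`; the exact loading code may differ.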

Benchmark comparisons

| Model | PSNR Set5 | PSNR Set14 | PSNR BSDS100 | SSIM Set5 | SSIM Set14 | SSIM BSDS100 | Time per iteration (s) | Time per epoch |
|-------|-----------|------------|--------------|-----------|------------|--------------|------------------------|----------------|
| EDSR-10 | 32.01 | 28.56 | 27.54 | 0.8918 | 0.7819 | 0.7357 | 0.3962 | 1h 3min |
| SRGAN-10 | 31.75 | 28.39 | 27.44 | 0.8864 | 0.7761 | 0.7308 | 0.3133 | 50 min |
| ESRGAN-10 | 31.90 | 28.47 | 27.49 | 0.8898 | 0.7789 | 0.7340 | 0.5265 | 1h 24min |
| RCAN-10 | 32.12 | 28.65 | 27.60 | 0.8934 | 0.7840 | 0.7379 | 1.2986 | 3h 27min |
| SRFeat-10 | 31.45 | 28.17 | 27.39 | 0.8813 | 0.7699 | 0.7245 | 0.5705 | 1h 31min |
| Ours-10 | 32.14 | 28.60 | 27.58 | 0.8926 | 0.7823 | 0.7362 | 0.5333 | 1h 25min |
| SRFeat-20 | 31.74 | 28.34 | 27.39 | 0.8859 | 0.7748 | 0.7298 | | |
| Ours-20 | 32.21 | 28.66 | 27.60 | 0.8936 | 0.7836 | 0.7370 | | |

Model-10: after training 10 epochs. Model-20: after training 20 epochs.

We run everything with a batch size of 16 and about 9,600 iterations per epoch. Running times are reported on a Titan XP 16G GPU. We also found that training on a Titan X 16G GPU is much slower; for example, RCAN takes about 2.5 s per iteration.
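For reference, the PSNR numbers in the table follow the standard definition over 8-bit images. A minimal sketch of that formula (this is the textbook computation, not code taken from this repo, and actual evaluation details such as border cropping or the Y channel may differ):

```python
import numpy as np

def psnr(a, b, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```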

EDSR: in the paper, the authors report results for a model with 32 residual blocks and 256 features. The version here has 16 residual blocks and 128 filters.

Learning Curves

I hope these instructions are clear enough. If you have any problems, you can contact me through [email protected] or use the issue tab. If you are interested in this project, you are very welcome to contribute. Many thanks.
