
jiny2001 / Deeply Recursive Cnn Tf

Licence: apache-2.0
Test implementation of Deeply-Recursive Convolutional Network for Image Super-Resolution

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Deeply Recursive Cnn Tf

Tensorflow Espcn
TensorFlow implementation of the Efficient Sub-Pixel Convolutional Neural Network
Stars: ✭ 49 (-57.76%)
Mutual labels:  super-resolution
Pytoflow
The py version of toflow → https://github.com/anchen1011/toflow
Stars: ✭ 83 (-28.45%)
Mutual labels:  super-resolution
Idn Caffe
Caffe implementation of "Fast and Accurate Single Image Super-Resolution via Information Distillation Network" (CVPR 2018)
Stars: ✭ 104 (-10.34%)
Mutual labels:  super-resolution
Esrgan Tf2
ESRGAN (Enhanced Super-Resolution Generative Adversarial Networks, published in ECCV 2018) implemented in Tensorflow 2.0+. This is an unofficial implementation. With Colab.
Stars: ✭ 61 (-47.41%)
Mutual labels:  super-resolution
Seranet
Super Resolution of picture images using deep learning
Stars: ✭ 79 (-31.9%)
Mutual labels:  super-resolution
Super Resolution Videos
Applying SRGAN technique implemented in https://github.com/zsdonghao/SRGAN on videos to super resolve them.
Stars: ✭ 91 (-21.55%)
Mutual labels:  super-resolution
Rcan Tensorflow
Image Super-Resolution Using Very Deep Residual Channel Attention Networks Implementation in Tensorflow
Stars: ✭ 43 (-62.93%)
Mutual labels:  super-resolution
Edafa
Test Time Augmentation (TTA) wrapper for computer vision tasks: segmentation, classification, super-resolution, ... etc.
Stars: ✭ 107 (-7.76%)
Mutual labels:  super-resolution
Cfsrcnn
Coarse-to-Fine CNN for Image Super-Resolution (IEEE Transactions on Multimedia,2020)
Stars: ✭ 84 (-27.59%)
Mutual labels:  super-resolution
Vsr Duf Reimplement
It is a re-implementation of paper named "Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation" called VSR-DUF model. There are both training codes and test codes about VSR-DUF based tensorflow.
Stars: ✭ 101 (-12.93%)
Mutual labels:  super-resolution
Scn matlab
Matlab reimplementation of SCNSR
Stars: ✭ 70 (-39.66%)
Mutual labels:  super-resolution
Torch Srgan
torch implementation of srgan
Stars: ✭ 76 (-34.48%)
Mutual labels:  super-resolution
Latest Development Of Isr Vsr
Latest development of ISR/VSR. Papers and related resources, mainly state-of-the-art and novel works in ICCV, ECCV and CVPR about image super-resolution and video super-resolution.
Stars: ✭ 93 (-19.83%)
Mutual labels:  super-resolution
Videosuperresolution
A collection of state-of-the-art video or single-image super-resolution architectures, reimplemented in tensorflow.
Stars: ✭ 1,118 (+863.79%)
Mutual labels:  super-resolution
Natsr
Natural and Realistic Single Image Super-Resolution with Explicit Natural Manifold Discrimination (CVPR, 2019)
Stars: ✭ 105 (-9.48%)
Mutual labels:  super-resolution
Srrescgan
Code repo for "Deep Generative Adversarial Residual Convolutional Networks for Real-World Super-Resolution" (CVPRW NTIRE2020).
Stars: ✭ 44 (-62.07%)
Mutual labels:  super-resolution
Awesome Computer Vision
Awesome Resources for Advanced Computer Vision Topics
Stars: ✭ 92 (-20.69%)
Mutual labels:  super-resolution
Awesome Eccv2020 Low Level Vision
A Collection of Papers and Codes for ECCV2020 Low Level Vision or Image Reconstruction
Stars: ✭ 111 (-4.31%)
Mutual labels:  super-resolution
Supper Resolution
Super-resolution (SR) is a method of creating images with higher resolution from a set of low resolution images.
Stars: ✭ 105 (-9.48%)
Mutual labels:  super-resolution
3d Gan Superresolution
3D super-resolution using Generative Adversarial Networks
Stars: ✭ 97 (-16.38%)
Mutual labels:  super-resolution

deeply-recursive-cnn-tf

overview

This project is a test implementation of "Deeply-Recursive Convolutional Network for Image Super-Resolution" (CVPR 2016) using TensorFlow.

Paper: ["Deeply-Recursive Convolutional Network for Image Super-Resolution"](https://arxiv.org/abs/1511.04491) by Jiwon Kim, Jung Kwon Lee and Kyoung Mu Lee, Department of ECE, ASRI, Seoul National University, Korea

Training very deep CNNs is hard. However, this paper manages it with some tricks, such as sharing filter weights across recursions and supervising intermediate outputs to suppress divergence during training. Remarkably, the model in the paper contains 20 convolutional layers without any max-pooling layers.
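Those two tricks can be sketched in plain numpy (an illustrative toy, not the project's actual TensorFlow code; the sizes, the ReLU stand-in for a conv layer, and the uniform averaging of intermediate outputs are all assumptions for illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def shared_recursion(h0, w, depth):
    """Apply the SAME weight matrix at every recursion step (weight sharing),
    and keep each intermediate output (used for recursive supervision)."""
    intermediates = []
    h = h0
    for _ in range(depth):
        h = relu(w @ h)          # the same `w` is reused at every depth
        intermediates.append(h)
    return intermediates

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(8, 8))   # one shared weight stands in for a conv layer
h0 = rng.normal(size=(8,))

outs = shared_recursion(h0, w, depth=9)
# The final prediction combines the intermediate outputs; DRCN learns the
# combination weights, a uniform average is used here only for illustration.
prediction = np.mean(outs, axis=0)
print(len(outs), prediction.shape)
```

Because a single weight is reused at every depth, the parameter count stays constant no matter how deep the recursion goes, which is what makes the 20-layer-deep model trainable.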

🔴 I also build another SR model. It is faster and has better PSNR results. Please try this project also. https://github.com/jiny2001/dcscn-super-resolution 🔴

model structure

These figures are from the paper. Three different sub-networks cooperate to refine the image.

alt tag

alt tag

The model below is built by my code and visualized with TensorBoard.

alt tag alt tag
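The cooperation of the three sub-networks can be sketched as a toy forward pass (numpy matrices stand in for the convolutions; the shapes, depth, and skip connection placement here are illustrative assumptions, as the real DRCN operates on image patches with 3x3 convolutions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Illustrative stand-ins for the three cooperating sub-networks.
rng = np.random.default_rng(1)
w_embed = rng.normal(scale=0.1, size=(16, 4))   # embedding net: pixels -> features
w_infer = rng.normal(scale=0.1, size=(16, 16))  # inference net: ONE shared weight, recursed
w_recon = rng.normal(scale=0.1, size=(4, 16))   # reconstruction net: features -> residual

def drcn_forward(x, depth=5):
    h = relu(w_embed @ x)            # 1) embedding network
    for _ in range(depth):
        h = relu(w_infer @ h)        # 2) inference network (shared weights, recursed)
    return x + w_recon @ h           # 3) reconstruction network, plus a skip
                                     #    connection from the (interpolated) input

x = rng.normal(size=(4,))
y = drcn_forward(x)
print(y.shape)
```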

requirements

tensorflow, scipy, numpy and pillow

how to use

# train with default parameters and evaluate on Set5 after training (takes some hours with a moderate GPU)
python main.py

# train a simpler model (works reasonably even without a GPU)
python main.py --end_lr 1e-4 --feature_num 32 --inference_depth 5

# evaluate on Set14 only (after training has finished)
# [set5, set14, bsd100, urban100, all] are available. Please specify the same model parameters as used for training.
python main.py --dataset set14 --is_training False --feature_num 32 --inference_depth 5

# train for x4 scale images
python main.py --scale 4

# build an augmented (left-right and up-down flipped) training set in the ScSR2 folder
python augmentation.py

# train with the augmented training data (gives slightly better PSNR)
python main.py --training_set ScSR2

# train with your own training data (create a directory under "data" and put your data files into it)
python main.py --training_set your_data_directory_name
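The flip augmentation mentioned above can be sketched like this (a hypothetical helper for illustration; augmentation.py's actual behavior may differ in details, e.g. whether it also combines both flips):

```python
import numpy as np

def augment_flips(img):
    """Return the image plus its left-right and up-down flipped copies,
    mirroring the augmentation described for augmentation.py."""
    return [img, np.fliplr(img), np.flipud(img)]

img = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
aug = augment_flips(img)
print(len(aug), aug[1][0, 0], aug[2][0, 0])
```

Flips are a safe augmentation for super-resolution because they preserve the low-resolution/high-resolution correspondence exactly.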

Network graphs and weight / loss summaries are saved in the tf_log directory.

Trained weights are saved in the model directory.

result of my implementation

I used half the number of features (128) to make training faster for the results below. Please view the result image at its original (100%) size. (My results have slightly lower PSNR than the paper's.)

alt tag

| Dataset | Bicubic | SRCNN | SelfEx | My Result | DRCN |
| --- | --- | --- | --- | --- | --- |
| Set5 x2 | 33.66 | 36.66 | 36.49 | 37.31 | 37.63 |
| Set14 x2 | 30.24 | 32.42 | 32.22 | 32.85 | 33.04 |
| BSD100 x2 | 29.56 | 31.36 | 31.18 | 31.71 | 31.85 |
| Urban100 x2 | 26.88 | 29.50 | 29.54 | 30.01 | 30.75 |
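The figures in the table are PSNR values in dB, computed with the standard formula below (note the paper evaluates PSNR on the luminance channel of border-cropped images; this sketch ignores those details):

```python
import numpy as np

def psnr(reference, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    diff = reference.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

ref = np.full((4, 4), 120.0)
rec = ref + 5.0            # a constant error of 5 gives MSE = 25
print(round(psnr(ref, rec), 2))   # -> 34.15 dB
```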

I include learned weights for the default parameters (features: 96, inference depth: 9), trained on a larger dataset (yang91 + general100) with x4 augmentation.

You can output up-converted images for evaluation. Run the command below and check the [output] folder.

# evaluating on [set5, set14, bsd100, urban100, all] is available
python main.py --dataset set14 --is_training False
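For comparison with the Bicubic column of the results table, a bicubic baseline can be produced with Pillow (one of the listed requirements). This is an assumed usage sketch, not part of main.py:

```python
from PIL import Image
import numpy as np

def bicubic_upscale(img, scale=2):
    """Up-scale a PIL image by `scale` using bicubic interpolation,
    i.e. the baseline that SR models are compared against."""
    w, h = img.size
    return img.resize((w * scale, h * scale), Image.BICUBIC)

img = Image.fromarray(np.zeros((8, 8), dtype=np.uint8))
up = bicubic_upscale(img, scale=2)
print(up.size)   # -> (16, 16)
```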

apply to your own image

Put your image file under the project directory and then try the commands below. Note that if you trained with your own parameters, e.g. "python3 main.py --inference_depth 5 --feature_num 64", you should pass the same parameters to test.py.

python test.py --file your_image_filename

# try with your own trained model
python test.py --file your_image_filename --same_args_which_you_used_on_your_training blabla

datasets

Some popular dataset images are already set in data folder.

for training:

  • ScSR [ Yang et al. TIP 2010 ] ( J. Yang, J. Wright, T. S. Huang, and Y. Ma. Image super-resolution via sparse representation. TIP, 2010 )

for evaluation:

  • Set5, Set14, BSD100 and Urban100 (the benchmark sets listed in the evaluation options above)

disclaimer

Some details are not given in the paper, and my guesses may not be sufficient. My code's PSNR is about 0.5-1.0 dB lower than the paper's reported results.

acknowledgments

Many thanks to Assoc. Prof. Masayuki Tanaka at Tokyo Institute of Technology and Shigesumi Kuwashima at Viewplus inc.
