
leftthomas / SRGAN

License: MIT
A PyTorch implementation of SRGAN based on CVPR 2017 paper "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network"

Programming Languages

Python

Projects that are alternatives to or similar to SRGAN

Edsr Tensorflow
Tensorflow implementation of Enhanced Deep Residual Networks for Single Image Super-Resolution
Stars: ✭ 314 (-51.24%)
Mutual labels:  super-resolution
Waifu2x
Image Super-Resolution for Anime-Style Art
Stars: ✭ 22,741 (+3431.21%)
Mutual labels:  super-resolution
Srflow
Official SRFlow training code: Super-Resolution using Normalizing Flow in PyTorch
Stars: ✭ 537 (-16.61%)
Mutual labels:  super-resolution
Srmd
Learning a Single Convolutional Super-Resolution Network for Multiple Degradations (CVPR, 2018) (Matlab)
Stars: ✭ 333 (-48.29%)
Mutual labels:  super-resolution
Rdn
Torch code for our CVPR 2018 paper "Residual Dense Network for Image Super-Resolution" (Spotlight)
Stars: ✭ 412 (-36.02%)
Mutual labels:  super-resolution
Liif
Learning Continuous Image Representation with Local Implicit Image Function, in CVPR 2021 (Oral)
Stars: ✭ 458 (-28.88%)
Mutual labels:  super-resolution
Toflow
TOFlow: Video Enhancement with Task-Oriented Flow
Stars: ✭ 314 (-51.24%)
Mutual labels:  super-resolution
Wdsr ntire2018
Code of our winning entry to NTIRE super-resolution challenge, CVPR 2018
Stars: ✭ 570 (-11.49%)
Mutual labels:  super-resolution
Fast Srgan
A Fast Deep Learning Model to Upsample Low Resolution Videos to High Resolution at 30fps
Stars: ✭ 417 (-35.25%)
Mutual labels:  super-resolution
Usrnet
Deep Unfolding Network for Image Super-Resolution (CVPR, 2020) (PyTorch)
Stars: ✭ 493 (-23.45%)
Mutual labels:  super-resolution
Realsr
Real-World Super-Resolution via Kernel Estimation and Noise Injection
Stars: ✭ 367 (-43.01%)
Mutual labels:  super-resolution
Waifu2x Extension Gui
Video, Image and GIF upscale/enlarge(Super-Resolution) and Video frame interpolation. Achieved with Waifu2x, Real-ESRGAN, SRMD, RealSR, Anime4K, RIFE, CAIN, DAIN, and ACNet.
Stars: ✭ 5,463 (+748.29%)
Mutual labels:  super-resolution
Raisr
A Python implementation of RAISR
Stars: ✭ 480 (-25.47%)
Mutual labels:  super-resolution
Drn
Closed-loop Matters: Dual Regression Networks for Single Image Super-Resolution
Stars: ✭ 327 (-49.22%)
Mutual labels:  super-resolution
Ntire2017
Torch implementation of "Enhanced Deep Residual Networks for Single Image Super-Resolution"
Stars: ✭ 554 (-13.98%)
Mutual labels:  super-resolution
Pytorch Vdsr
VDSR (CVPR2016) pytorch implementation
Stars: ✭ 313 (-51.4%)
Mutual labels:  super-resolution
Dbpn Pytorch
The project is an official implement of our CVPR2018 paper "Deep Back-Projection Networks for Super-Resolution" (Winner of NTIRE2018 and PIRM2018)
Stars: ✭ 459 (-28.73%)
Mutual labels:  super-resolution
Dcscn Super Resolution
A tensorflow implementation of "Fast and Accurate Image Super Resolution by Deep CNN with Skip Connection and Network in Network", a deep learning based Single-Image Super-Resolution (SISR) model.
Stars: ✭ 611 (-5.12%)
Mutual labels:  super-resolution
Zooming Slow Mo Cvpr 2020
Fast and Accurate One-Stage Space-Time Video Super-Resolution (accepted in CVPR 2020)
Stars: ✭ 555 (-13.82%)
Mutual labels:  super-resolution
Srcnn Tensorflow
Image Super-Resolution Using Deep Convolutional Networks in Tensorflow https://arxiv.org/abs/1501.00092v3
Stars: ✭ 489 (-24.07%)
Mutual labels:  super-resolution

SRGAN

A PyTorch implementation of SRGAN based on CVPR 2017 paper Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network.
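
At its core, the paper trains the generator with a perceptual loss: a pixel-wise MSE term plus a VGG feature (content) loss and an adversarial term. Below is a minimal sketch of such a loss, with a layer choice and weights (0.001, 0.006) that are common in SRGAN implementations; the exact loss used by this repository may differ.

import torch
import torch.nn as nn
from torchvision.models import vgg16

class GeneratorLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # Frozen VGG-16 features serve as the perceptual feature extractor.
        vgg = vgg16(pretrained=True)
        self.features = nn.Sequential(*list(vgg.features)[:31]).eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.mse = nn.MSELoss()

    def forward(self, fake_out, sr, hr):
        # fake_out: discriminator score for the SR image, in [0, 1].
        adversarial_loss = torch.mean(1 - fake_out)
        perception_loss = self.mse(self.features(sr), self.features(hr))
        image_loss = self.mse(sr, hr)
        return image_loss + 0.001 * adversarial_loss + 0.006 * perception_loss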

Requirements

  • PyTorch
conda install pytorch torchvision -c pytorch
  • opencv
conda install opencv
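
After installing, a quick sanity check (a standalone sketch, not part of the repository) confirms that the dependencies import and that a CUDA GPU is visible:

import torch
import torchvision
import cv2

print('torch', torch.__version__, '| torchvision', torchvision.__version__)
print('opencv', cv2.__version__)
print('CUDA available:', torch.cuda.is_available())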

Datasets

Train & Val Datasets

The train and val datasets are sampled from VOC2012. The train dataset has 16,700 images and the val dataset has 425 images. Download the datasets from here (access code: 5tzp), and then extract them into the data directory.
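
Training pairs are typically built on the fly from these images: an HR patch of --crop_size pixels is cropped, then bicubic-downsampled by the upscale factor to produce the LR input. A minimal sketch of the idea follows; the function and transform choices here are illustrative, not the repository's exact API.

from PIL import Image
from torchvision import transforms

def make_pair(image_path, crop_size=88, upscale_factor=4):
    # Shrink the crop so it divides evenly by the upscale factor.
    crop_size = crop_size - (crop_size % upscale_factor)
    hr = transforms.RandomCrop(crop_size)(Image.open(image_path).convert('RGB'))
    lr = hr.resize((crop_size // upscale_factor,) * 2, Image.BICUBIC)
    to_tensor = transforms.ToTensor()
    return to_tensor(lr), to_tensor(hr)  # (LR input, HR target)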

Test Image Dataset

The test image datasets are sampled from:

  • Set 5 (Bevilacqua et al., BMVC 2012)
  • Set 14 (Zeyde et al., LNCS 2010)
  • BSD 100 (Martin et al., ICCV 2001)
  • Sun-Hays 80 (Sun and Hays, ICCP 2012)
  • Urban 100 (Huang et al., CVPR 2015)

Download the image dataset from here (access code: xwhy), and then extract it into the data directory.

Test Video Dataset

The test video dataset consists of three trailers. Download the video dataset from here (access code: zabi).

Usage

Train

python train.py

optional arguments:
--crop_size                   training images crop size [default value is 88]
--upscale_factor              super resolution upscale factor [default value is 4](choices:[2, 4, 8])
--num_epochs                  train epoch number [default value is 100]

The output validation super-resolution images are saved in the training_results directory.
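
Training alternates discriminator and generator updates as in a standard GAN. A minimal sketch of one step, assuming hypothetical netG/netD modules and a criterion like the GeneratorLoss sketched above; this mirrors the overall pattern rather than the repository's exact code.

import torch

def train_step(netG, netD, g_criterion, optG, optD, lr_img, hr_img):
    sr_img = netG(lr_img)

    # Discriminator update: push D(HR) toward 1 and D(SR) toward 0.
    optD.zero_grad()
    d_loss = 1 - netD(hr_img).mean() + netD(sr_img.detach()).mean()
    d_loss.backward()
    optD.step()

    # Generator update: perceptual loss using the refreshed discriminator.
    optG.zero_grad()
    g_loss = g_criterion(netD(sr_img).mean(), sr_img, hr_img)
    g_loss.backward()
    optG.step()
    return d_loss.item(), g_loss.item()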

Test Benchmark Datasets

python test_benchmark.py

optional arguments:
--upscale_factor              super resolution upscale factor [default value is 4]
--model_name                  generator model epoch name [default value is netG_epoch_4_100.pth]

The output super-resolution images are saved in the benchmark_results directory.
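
PSNR, as reported in the benchmark results below, follows directly from the MSE between the SR output and the ground truth; a minimal sketch (many implementations evaluate on the luminance channel only, a detail glossed over here):

import torch

def psnr(sr, hr, max_val=1.0):
    # Peak signal-to-noise ratio in dB between two [0, 1] image tensors.
    mse = torch.mean((sr - hr) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)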

Test Single Image

python test_image.py

optional arguments:
--upscale_factor              super resolution upscale factor [default value is 4]
--test_mode                   using GPU or CPU [default value is 'GPU'](choices:['GPU', 'CPU'])
--image_name                  test low resolution image name
--model_name                  generator model epoch name [default value is netG_epoch_4_100.pth]

The output super-resolution image is saved in the same directory.
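
A minimal single-image inference sketch, assuming a Generator class in the repository's model module; the constructor argument and checkpoint path are assumptions based on the default model name above.

import torch
from PIL import Image
from torchvision.transforms import ToTensor, ToPILImage

from model import Generator  # assumed location of the generator class

netG = Generator(4).eval()  # 4 = upscale factor (assumed signature)
netG.load_state_dict(torch.load('epochs/netG_epoch_4_100.pth', map_location='cpu'))

lr = ToTensor()(Image.open('input.jpg').convert('RGB')).unsqueeze(0)
with torch.no_grad():
    sr = netG(lr).squeeze(0).clamp(0, 1)
ToPILImage()(sr).save('out_srf_4_input.jpg')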

Test Single Video

python test_video.py

optional arguments:
--upscale_factor              super resolution upscale factor [default value is 4]
--video_name                  test low resolution video name
--model_name                  generator model epoch name [default value is netG_epoch_4_100.pth]

The output super-resolution video and the comparison video are saved in the same directory.
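
Video upscaling can be done frame by frame with OpenCV. A minimal sketch reusing the Generator from the image example; the codec and output handling are illustrative, not necessarily what test_video.py produces.

import cv2
import numpy as np
import torch

def upscale_video(netG, in_path, out_path, scale=4):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) * scale
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) * scale
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV frames are BGR; convert to RGB for the model and back.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        x = torch.from_numpy(rgb).permute(2, 0, 1).float().div(255).unsqueeze(0)
        with torch.no_grad():
            sr = netG(x).squeeze(0).clamp(0, 1)
        sr_np = (sr.permute(1, 2, 0).numpy() * 255).astype(np.uint8)
        out.write(cv2.cvtColor(sr_np, cv2.COLOR_RGB2BGR))
    cap.release()
    out.release()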

Benchmarks

Upscale Factor = 2

Each epoch with a batch size of 64 takes ~2 minutes 30 seconds on an NVIDIA GTX 1080 Ti GPU.

Image Results

The left is the bicubic interpolation image, the middle is the high-resolution image, and the right is the super-resolution image (output of the SRGAN).

  • BSD100_070 (PSNR: 32.4517; SSIM: 0.9191)

BSD100_070

  • Set14_005 (PSNR: 26.9171; SSIM: 0.9119)

Set14_005

  • Set14_013 (PSNR: 30.8040; SSIM: 0.9651)

Set14_013

  • Urban100_098 (PSNR: 24.3765; SSIM: 0.7855)

Urban100_098

Video Results

The left is the bicubic interpolation video and the right is the super-resolution video (output of the SRGAN).

Watch the video

Upscale Factor = 4

Each epoch with a batch size of 64 takes ~4 minutes 30 seconds on an NVIDIA GTX 1080 Ti GPU.

Image Results

The left is the bicubic interpolation image, the middle is the high-resolution image, and the right is the super-resolution image (output of the SRGAN).

  • BSD100_035 (PSNR: 32.3980; SSIM: 0.8512)

BSD100_035

  • Set14_011 (PSNR: 29.5944; SSIM: 0.9044)

Set14_011

  • Set14_014 (PSNR: 25.1299; SSIM: 0.7406)

Set14_014

  • Urban100_060 (PSNR: 20.7129; SSIM: 0.5263)

Urban100_060

Video Results

The left is the bicubic interpolation video and the right is the super-resolution video (output of the SRGAN).

Watch the video

Upscale Factor = 8

Each epoch with a batch size of 64 takes ~3 minutes 30 seconds on an NVIDIA GTX 1080 Ti GPU.

Image Results

The left is the bicubic interpolation image, the middle is the high-resolution image, and the right is the super-resolution image (output of the SRGAN).

  • SunHays80_027 (PSNR: 29.4941; SSIM: 0.8082)

SunHays80_027

  • SunHays80_035 (PSNR: 32.1546; SSIM: 0.8449)

SunHays80_035

  • SunHays80_043 (PSNR: 30.9716; SSIM: 0.8789)

SunHays80_043

  • SunHays80_078 (PSNR: 31.9351; SSIM: 0.8381)

SunHays80_078

Video Results

The left is the bicubic interpolation video and the right is the super-resolution video (output of the SRGAN).

Watch the video

The complete test results can be downloaded from here (access code: nkh9).
