# SRGAN-PyTorch
This repository contains an unofficial PyTorch implementation of SRGAN, as well as SRResNet, from the paper Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network, CVPR 2017.
We closely follow the network architecture, training strategy, and training set of the original SRGAN and SRResNet. We also implement the sub-pixel convolution layer from Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network, CVPR 2016. My collaborator also contributed to this repository.
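The sub-pixel convolution layer mentioned above can be sketched in PyTorch using `nn.PixelShuffle`: a convolution expands the channel dimension by `scale`², and the shuffle rearranges those channels into a `scale`× larger spatial grid. This is a minimal illustration, not the exact module from this repository.

```python
import torch
import torch.nn as nn

class SubPixelConv(nn.Module):
    """Upsampling block in the style of Shi et al. (ESPCN, CVPR16):
    conv to scale**2 x channels, then pixel shuffle, then PReLU."""
    def __init__(self, in_channels, scale=2):
        super().__init__()
        # The conv produces scale**2 feature maps per output channel;
        # PixelShuffle folds them into a (scale x scale) spatial block.
        self.conv = nn.Conv2d(in_channels, in_channels * scale ** 2,
                              kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        self.act = nn.PReLU()

    def forward(self, x):
        return self.act(self.shuffle(self.conv(x)))

x = torch.randn(1, 64, 24, 24)
y = SubPixelConv(64, scale=2)(x)
print(y.shape)  # torch.Size([1, 64, 48, 48])
```

The SRGAN generator stacks two such ×2 blocks to reach the ×4 upscaling factor.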
## License and Citation
All code and other materials (including but not limited to the tables) are provided for academic research purposes only and come without any warranty. Any commercial use requires our consent. If this work helps your research, or you use any part of the code, please acknowledge it appropriately by citing:
```bibtex
@InProceedings{ledigsrgan17,
  author    = {Christian Ledig and Lucas Theis and Ferenc Huszár and Jose Caballero and Andrew Cunningham and Alejandro Acosta and Andrew Aitken and Alykhan Tejani and Johannes Totz and Zehan Wang and Wenzhe Shi},
  title     = {Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages     = {4681--4690},
  year      = {2017}
}

@misc{SRGAN-pyTorch,
  author       = {Tak-Wai Hui and Wai-Ho Kwok},
  title        = {SRGAN-PyTorch: A PyTorch Implementation of Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network},
  year         = {2018},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/twhui/SRGAN-PyTorch}}
}
```
## Results of SRGAN in terms of PSNR and SSIM

| Dataset | Ours (PSNR / SSIM) | CVPR17 (PSNR / SSIM) |
|---|---|---|
| Set5 | 29.4490 / 0.8542 | 29.40 / 0.8472 |
| Set14 | 26.0677 / 0.7153 | 26.02 / 0.7397 |
| BSD100 | 24.8665 / 0.6594 | 25.16 / 0.6688 |
| Urban100 | 23.9434 / 0.7277 | - |
## Results of SRResNet in terms of PSNR and SSIM

| Dataset | Ours (PSNR / SSIM) | CVPR17 (PSNR / SSIM) |
|---|---|---|
| Set5 | 31.9678 / 0.9007 | 32.05 / 0.9019 |
| Set14 | 28.5809 / 0.7972 | 28.49 / 0.8184 |
| BSD100 | 27.5784 / 0.7538 | 27.58 / 0.7620 |
| Urban100 | 26.0479 / 0.7954 | - |
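PSNR and SSIM in the tables above follow the usual super-resolution evaluation protocol: computed on the luminance (Y) channel with a border of `scale` pixels shaved off. The sketch below uses the modern `skimage.metrics` API (this repository lists scikit-image as a dependency); the exact conversion and cropping used by its `test.py` are an assumption, not read from the code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(sr, hr, scale=4):
    """PSNR/SSIM on the Y channel with a scale-pixel border removed.
    Inputs are float RGB images in [0, 1]. Protocol details here are
    an illustrative assumption, not this repo's exact test code."""
    def to_y(img):
        # ITU-R BT.601 luma conversion, normalised back to [0, 1].
        return (16.0 + 65.481 * img[..., 0] + 128.553 * img[..., 1]
                + 24.966 * img[..., 2]) / 255.0
    sr_y = to_y(sr)[scale:-scale, scale:-scale]
    hr_y = to_y(hr)[scale:-scale, scale:-scale]
    psnr = peak_signal_noise_ratio(hr_y, sr_y, data_range=1.0)
    ssim = structural_similarity(hr_y, sr_y, data_range=1.0)
    return psnr, ssim
```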
## Dependencies
pytorch 0.3+, python 3.5, python-box, scikit-image, numpy
## Training set
We used a subset of the ImageNet dataset (ILSVRC2016_CLS-LOC.tar.gz) to train our models. The subset is listed in `/subset.txt`.
## Training
```shell
CUDA_VISIBLE_DEVICES=0 python ./train.py --option ./options/train/SRResNet/SRResNet_x4.json
CUDA_VISIBLE_DEVICES=0 python ./train.py --option ./options/train/SRGAN/SRGAN_x4.json
```
## Testing
```shell
CUDA_VISIBLE_DEVICES=0 python ./test.py --option ./options/test/SRResNet/SRResNet_x4.json
CUDA_VISIBLE_DEVICES=0 python ./test.py --option ./options/test/SRGAN/SRGAN_x4.json
```
The upsampled images will be generated in `/home/twhui/Projects/SRGAN/results/MODEL_NAME/test_images`. A text file containing the PSNR and SSIM results will be generated in `/home/twhui/Projects/SRGAN/results/MODEL_NAME/log`, where MODEL_NAME is SRResNet_x4 or SRGAN_x4.
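The training and testing scripts are driven by JSON option files under `./options/`. The repository's actual schema is defined by those files, so the fragment below is only a hypothetical illustration of what an x4 config might look like; every key name here is an assumption.

```json
{
  "name": "SRResNet_x4",
  "scale": 4,
  "train": {
    "batch_size": 16,
    "learning_rate": 1e-4
  }
}
```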
## Trained models
The trained models of SRResNet and SRGAN (16 residual blocks each) are available.
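The 16 residual blocks mentioned above follow the SRGAN paper's generator design: conv-BN-PReLU-conv-BN with an identity skip connection over 64 feature channels. A minimal sketch of one such block (an illustration of the paper's architecture, not this repository's exact module):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One SRResNet residual block (Ledig et al., CVPR17):
    conv-BN-PReLU-conv-BN plus an identity skip connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.PReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # Identity skip: spatial size and channel count are preserved.
        return x + self.body(x)

# The generator trunk stacks 16 of these blocks.
trunk = nn.Sequential(*[ResidualBlock(64) for _ in range(16)])
out = trunk(torch.randn(1, 64, 24, 24))
print(out.shape)  # torch.Size([1, 64, 24, 24])
```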