
thangvubk / Video Super Resolution

Video super-resolution implemented in PyTorch


Projects that are alternatives of or similar to Video Super Resolution

Deeply Recursive Cnn Tf
Test implementation of Deeply-Recursive Convolutional Network for Image Super-Resolution
Stars: ✭ 116 (-31.36%)
Mutual labels:  super-resolution
Awesome Cvpr2021 Cvpr2020 Low Level Vision
A Collection of Papers and Codes for CVPR2021/CVPR2020 Low Level Vision
Stars: ✭ 139 (-17.75%)
Mutual labels:  super-resolution
Mmediting
OpenMMLab Image and Video Editing Toolbox
Stars: ✭ 2,618 (+1449.11%)
Mutual labels:  super-resolution
Upscalerjs
Image Upscaling in Javascript. Increase image resolution up to 4x using Tensorflow.js.
Stars: ✭ 126 (-25.44%)
Mutual labels:  super-resolution
Enhancenet Code
EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis (official repository)
Stars: ✭ 142 (-15.98%)
Mutual labels:  super-resolution
Basicsr
Open Source Image and Video Restoration Toolbox for Super-resolution, Denoise, Deblurring, etc. Currently, it includes EDSR, RCAN, SRResNet, SRGAN, ESRGAN, EDVR, BasicVSR, SwinIR, ECBSR, etc. Also support StyleGAN2, DFDNet.
Stars: ✭ 2,708 (+1502.37%)
Mutual labels:  super-resolution
Edafa
Test Time Augmentation (TTA) wrapper for computer vision tasks: segmentation, classification, super-resolution, ... etc.
Stars: ✭ 107 (-36.69%)
Mutual labels:  super-resolution
Waifu2x
PyTorch on Super Resolution
Stars: ✭ 156 (-7.69%)
Mutual labels:  super-resolution
Keras Image Super Resolution
EDSR, RCAN, SRGAN, SRFEAT, ESRGAN
Stars: ✭ 143 (-15.38%)
Mutual labels:  super-resolution
Frvsr
Frame-Recurrent Video Super-Resolution (official repository)
Stars: ✭ 157 (-7.1%)
Mutual labels:  super-resolution
Awesome Gan For Medical Imaging
Awesome GAN for Medical Imaging
Stars: ✭ 1,814 (+973.37%)
Mutual labels:  super-resolution
Rdn Tensorflow
A TensorFlow implementation of CVPR 2018 paper "Residual Dense Network for Image Super-Resolution".
Stars: ✭ 136 (-19.53%)
Mutual labels:  super-resolution
Adafm
CVPR2019 (oral) Modulating Image Restoration with Continual Levels via Adaptive Feature Modification Layers (AdaFM). PyTorch implementation
Stars: ✭ 151 (-10.65%)
Mutual labels:  super-resolution
Drln
Densely Residual Laplacian Super-resolution, IEEE Pattern Analysis and Machine Intelligence (TPAMI), 2020
Stars: ✭ 120 (-28.99%)
Mutual labels:  super-resolution
A Pytorch Tutorial To Super Resolution
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network | a PyTorch Tutorial to Super-Resolution
Stars: ✭ 157 (-7.1%)
Mutual labels:  super-resolution
Awesome Eccv2020 Low Level Vision
A Collection of Papers and Codes for ECCV2020 Low Level Vision or Image Reconstruction
Stars: ✭ 111 (-34.32%)
Mutual labels:  super-resolution
Waifu2x Extension
Image, GIF and Video enlarger/upscaler achieved with waifu2x and Anime4K. [NO LONGER UPDATED]
Stars: ✭ 149 (-11.83%)
Mutual labels:  super-resolution
Dpir
Plug-and-Play Image Restoration with Deep Denoiser Prior (PyTorch)
Stars: ✭ 159 (-5.92%)
Mutual labels:  super-resolution
Tenet
Official Pytorch Implementation for Trinity of Pixel Enhancement: a Joint Solution for Demosaicing, Denoising and Super-Resolution
Stars: ✭ 157 (-7.1%)
Mutual labels:  super-resolution
Pan
[Params: Only 272K!!!] Efficient Image Super-Resolution Using Pixel Attention, in ECCV Workshop, 2020.
Stars: ✭ 151 (-10.65%)
Mutual labels:  super-resolution

Video Super Resolution, SRCNN, MFCNN, VDCN (ours) benchmark comparison

This is a PyTorch implementation of the video super-resolution algorithms SRCNN, MFCNN, and VDCN (ours). This project was built for one of my courses, with the goal of improving on the baselines (SRCNN, MFCNN).

To run this project you need to set up the environment, download the dataset, run the data-processing scripts, and then train and test the network models. I will walk you through each step, and I hope the instructions are clear enough :D.

Prerequisite

I tested this project on a Core i7 machine with 64 GB RAM and a Titan X GPU. Because it uses a large dataset, you should have a reasonably powerful CPU/GPU and about 16 to 24 GB of RAM.

Environment

  • PyTorch 1.0
  • tqdm
  • h5py
  • cv2 (OpenCV)
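
For a quick sanity check of the environment, the short sketch below (an illustration only, not one of the project scripts) simply verifies that the required packages import:

# Quick environment sanity check: all four dependencies should import cleanly.
import torch
import tqdm
import h5py
import cv2

print('PyTorch:', torch.__version__)        # the project targets PyTorch 1.0
print('OpenCV:', cv2.__version__)
print('h5py:', h5py.__version__)
print('CUDA available:', torch.cuda.is_available())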

Dataset

First, download the dataset from this link and put it in this project. FYI, the training set (IndMya) is made from India and Myanmar videos taken from the Hamonics website. The test sets include IndMya and vid4 (city, walk, foliage, and calendar). After the download completes, unzip it. You should then see the data under video-super-resolution/data/train/.

Process data

The data is processed with MATLAB scripts because MATLAB's interpolation implementation differs from Python's. To do that, open MATLAB and run:

$ cd matlab_scripts/
$ generate_train_video

While the script is running, you should see output like the following:

[screenshot: create_train]

After the script finishes, you should see something like:

[screenshot: create_train result]

As you can see, the result is a dataset of data and label arrays. The training dataset is stored at video-super-resolution/preprocessed_data/train/3x/dataset.h5.
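
If you want to double-check the generated file, a minimal h5py sketch like the one below will do (the key names data and label are an assumption based on the script output above):

# Minimal sketch to inspect the generated HDF5 file.
# The key names 'data' and 'label' are assumptions based on the script output.
import h5py

with h5py.File('preprocessed_data/train/3x/dataset.h5', 'r') as f:
    print(list(f.keys()))                 # expected: ['data', 'label'] (assumption)
    print('data shape:', f['data'].shape)
    print('label shape:', f['label'].shape)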

Do the same for the test set:

$ generate_test_video

NOTE: If you want to train and test the network with a different dataset or frame upscale factor, modify the dataset and scale variables in the generate_test_video and generate_train_video scripts (see the scripts for instructions).

Pretrained model

Method  Scale  Download
VRES    3      model

Execute the code

To train the network: python train.py --verbose

You should see something like:

[screenshot: train]

To test the network: python test.py

You should see something like:

[screenshot: test]

The experiment results will be saved in results/

NOTE: This is the simplest way to train and test the model; all settings take their default values. You can add options for training and testing. For example, to train the MFCNN model with an initial learning rate of 1e-2, 100 epochs, batch size 64, scale factor 3, and verbose output: python train.py -m MFCNN -l 1e-2 -n 100 -b 64 -s 3 --verbose. See python train.py --help and python test.py --help for detailed information.
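
For reference, the options above roughly correspond to an argparse interface like this sketch (a reconstruction for illustration; the long option names and defaults in the actual train.py may differ):

# Hedged sketch of the command-line interface described above.
# Long option names and defaults are assumptions; see train.py for the real parser.
import argparse

parser = argparse.ArgumentParser(description='Train a video super-resolution model')
parser.add_argument('-m', '--model', default='VRES', help='model: SRCNN, MFCNN, or VRES')
parser.add_argument('-l', '--learning-rate', type=float, default=1e-3, help='initial learning rate')
parser.add_argument('-n', '--num-epochs', type=int, default=100, help='number of training epochs')
parser.add_argument('-b', '--batch-size', type=int, default=64, help='mini-batch size')
parser.add_argument('-s', '--scale', type=int, default=3, help='frame upscale factor')
parser.add_argument('--verbose', action='store_true', help='print training progress')

if __name__ == '__main__':
    print(parser.parse_args())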

Benchmark comparisons

Our network architecture is similar to the figure below. It takes 5 consecutive low-resolution frames as input and produces the high-resolution center frame.

[figure: network architecture]
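
To make the idea concrete, here is a minimal PyTorch sketch of a multi-frame network of this kind. It is not the actual VRES model (see model.py for that); the depth, channel width, and the assumption that the frames are already bicubic-upscaled are all illustrative:

# Minimal sketch of the multi-frame idea: 5 consecutive (pre-upscaled) low-resolution
# frames go in, a residual for the high-resolution center frame comes out.
# NOT the actual VRES model; depth and channel widths are assumptions.
import torch
import torch.nn as nn

class MultiFrameSR(nn.Module):
    def __init__(self, num_frames=5, channels=64, depth=10):
        super().__init__()
        layers = [nn.Conv2d(num_frames, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, frames):
        # frames: (batch, num_frames, H, W) grayscale frames, bicubic-upscaled to HR size
        mid = frames.size(1) // 2
        center = frames[:, mid:mid + 1]
        return center + self.body(frames)     # residual learning on the center frame

x = torch.randn(2, 5, 96, 96)
print(MultiFrameSR()(x).shape)                # torch.Size([2, 1, 96, 96])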

Benchmark comparisons on the vid4 dataset

Quantitative results: [screenshot: quantity]

Qualitative results: [screenshot: quality]
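
The metrics are not spelled out here, but super-resolution benchmarks are conventionally reported as PSNR/SSIM; the sketch below shows the standard PSNR computation (whether the numbers above use exactly this definition is an assumption):

# Standard PSNR definition, the usual quantitative metric for super-resolution.
# Whether the results above were computed exactly this way is an assumption.
import numpy as np

def psnr(pred, target, max_val=255.0):
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

a = np.random.randint(0, 256, (96, 96), dtype=np.uint8)
b = np.clip(a + np.random.randint(-5, 6, a.shape), 0, 255).astype(np.uint8)
print(psnr(a, b))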

See our report (VDCN) for more comparisons.

Project explanation

  • train.py: where you start training the network
  • test.py: where you start testing the network
  • model.py: defines SRCNN, MFCNN, and our model with configurable network depth (default 20 layers). Note that our network is named VRES in the code.
  • SR_dataset.py: defines the dataset for each model
  • solver.py: encapsulates all the logic to train the network
  • pytorch_ssim.py: PyTorch implementation of the SSIM loss (with autograd), cloned from this repo (a usage sketch follows this list)
  • loss.py: loss functions for the models
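
To illustrate how an SSIM-based loss can be plugged into training, here is a minimal sketch (it assumes the SSIM module interface of the upstream pytorch_ssim repo; the actual loss.py may combine terms differently):

# Hedged sketch: combining pixel (MSE) and SSIM losses in one objective.
# The pytorch_ssim.SSIM interface is assumed from the upstream repo; loss.py may differ.
import torch
import torch.nn as nn
import pytorch_ssim

mse = nn.MSELoss()
ssim = pytorch_ssim.SSIM()               # differentiable SSIM module (assumption)

def sr_loss(pred, target, alpha=0.8):
    # SSIM is a similarity (higher is better), so minimize (1 - SSIM) alongside MSE.
    return alpha * mse(pred, target) + (1 - alpha) * (1 - ssim(pred, target))

pred = torch.rand(4, 1, 96, 96, requires_grad=True)
target = torch.rand(4, 1, 96, 96)
loss = sr_loss(pred, target)
loss.backward()
print(loss.item())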

TODO

Upload pretrained models

Building your own model

To create your own model you need to define a new network architecture and a new dataset class. See model.py and SR_dataset.py for the idea :D. A bare-bones skeleton is sketched below.
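
This skeleton is only illustrative (the class names are made up, and the HDF5 key names are the same assumption as above); mirror the conventions actually used in model.py and SR_dataset.py:

# Illustrative skeleton only; follow the conventions in model.py and SR_dataset.py.
import torch
import torch.nn as nn
from torch.utils.data import Dataset
import h5py

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)

class MyDataset(Dataset):
    # Assumes the HDF5 layout produced by the MATLAB scripts ('data'/'label' keys).
    def __init__(self, h5_path):
        with h5py.File(h5_path, 'r') as f:
            self.data = torch.from_numpy(f['data'][:]).float()
            self.label = torch.from_numpy(f['label'][:]).float()

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx], self.label[idx]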

I hope my instructions are clear enough for you. If you have any problems, you can contact me at [email protected] or use the issue tab. If you are interested in this project, you are very welcome to contribute. Many thanks.
