
andreas128 / Srflow

Licence: other
Official SRFlow training code: Super-Resolution using Normalizing Flow in PyTorch

Projects that are alternatives of or similar to Srflow

Highres Net
Pytorch implementation of HighRes-net, a neural network for multi-frame super-resolution, trained and tested on the European Space Agency’s Kelvin competition.
Stars: ✭ 207 (-61.45%)
Mutual labels:  jupyter-notebook, super-resolution
Enet Real Time Semantic Segmentation
ENet - A Neural Net Architecture for real time Semantic Segmentation
Stars: ✭ 238 (-55.68%)
Mutual labels:  jupyter-notebook, paper
Research Paper Notes
Notes and Summaries on ML-related Research Papers (with optional implementations)
Stars: ✭ 218 (-59.4%)
Mutual labels:  jupyter-notebook, paper
Super resolution with cnns and gans
Image Super-Resolution Using SRCNN, DRRN, SRGAN, CGAN in Pytorch
Stars: ✭ 176 (-67.23%)
Mutual labels:  jupyter-notebook, super-resolution
Faceswap Gan
A denoising autoencoder + adversarial losses and attention mechanisms for face swapping.
Stars: ✭ 3,099 (+477.09%)
Mutual labels:  jupyter-notebook, image-manipulation
Dragan
A stable algorithm for GAN training
Stars: ✭ 189 (-64.8%)
Mutual labels:  jupyter-notebook, paper
Zoom Learn Zoom
computational zoom from raw sensor data
Stars: ✭ 224 (-58.29%)
Mutual labels:  jupyter-notebook, super-resolution
Reproduce Stock Market Direction Random Forests
Reproduce research from paper "Predicting the direction of stock market prices using random forest"
Stars: ✭ 67 (-87.52%)
Mutual labels:  jupyter-notebook, paper
Awesome-ICCV2021-Low-Level-Vision
A Collection of Papers and Codes for ICCV2021 Low Level Vision and Image Generation
Stars: ✭ 163 (-69.65%)
Mutual labels:  image-manipulation, super-resolution
TMNet
The official PyTorch implementation of the CVPR paper "Temporal Modulation Network for Controllable Space-Time Video Super-Resolution".
Stars: ✭ 77 (-85.66%)
Mutual labels:  paper, super-resolution
Starnet
StarNet
Stars: ✭ 141 (-73.74%)
Mutual labels:  jupyter-notebook, image-manipulation
Action Recognition Visual Attention
Action recognition using soft attention based deep recurrent neural networks
Stars: ✭ 350 (-34.82%)
Mutual labels:  jupyter-notebook, paper
Yolo Powered robot vision
Stars: ✭ 133 (-75.23%)
Mutual labels:  jupyter-notebook, paper
Cvpr 2019 Paper Statistics
Statistics and visualization of the acceptance rates and main keywords of CVPR 2019 accepted papers for the main computer vision conference (CVPR)
Stars: ✭ 527 (-1.86%)
Mutual labels:  jupyter-notebook, paper
Nlp Tutorial
Natural Language Processing Tutorial for Deep Learning Researchers
Stars: ✭ 9,895 (+1742.64%)
Mutual labels:  jupyter-notebook, paper
Triplet Attention
Official PyTorch Implementation for "Rotate to Attend: Convolutional Triplet Attention Module." [WACV 2021]
Stars: ✭ 222 (-58.66%)
Mutual labels:  jupyter-notebook, paper
Deep Embedded Memory Networks
https://arxiv.org/abs/1707.00836
Stars: ✭ 19 (-96.46%)
Mutual labels:  jupyter-notebook, paper
Super Resolution
Tensorflow 2.x based implementation of EDSR, WDSR and SRGAN for single image super-resolution
Stars: ✭ 952 (+77.28%)
Mutual labels:  jupyter-notebook, super-resolution
libpillowfight
Small library containing various image processing algorithms (+ Python 3 bindings) that has almost no dependencies -- Moved to Gnome's Gitlab
Stars: ✭ 60 (-88.83%)
Mutual labels:  paper, image-manipulation
Pytorch Vdsr
VDSR (CVPR2016) pytorch implementation
Stars: ✭ 313 (-41.71%)
Mutual labels:  jupyter-notebook, super-resolution

SRFlow

Official SRFlow training code: Super-Resolution using Normalizing Flow in PyTorch

[Paper] ECCV 2020 Spotlight




Setup: Data, Environment, PyTorch Demo


git clone https://github.com/andreas128/SRFlow.git && cd SRFlow && ./setup.sh

This one-liner will:

  • Clone SRFlow
  • Set up a Python 3 virtual environment
  • Install the packages from requirements.txt
  • Download the pretrained models
  • Download the validation data
  • Run the Demo Jupyter Notebook

If you want to install it manually, read the setup.sh file; it contains the links to the data/models and the list of pip packages.



Demo: Try Normalizing Flow in PyTorch

./run_jupyter.sh

This notebook lets you:

  • Load the pretrained models.
  • Super-resolve images.
  • Measure PSNR/SSIM/LPIPS (see the PSNR sketch after this list).
  • Infer the Normalizing Flow latent space.
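
PSNR in particular is just a log-scaled mean squared error. The notebook uses its own helpers for PSNR/SSIM/LPIPS; as a minimal reference sketch, assuming two 8-bit RGB images of equal size given as NumPy arrays:

import numpy as np

def psnr(ref, sr, max_val=255.0):
    """Peak signal-to-noise ratio between a reference and a super-resolved image."""
    ref = ref.astype(np.float64)
    sr = sr.astype(np.float64)
    mse = np.mean((ref - sr) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)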



Testing: Apply the included pretrained models

source myenv/bin/activate                      # Use the env you created using setup.sh
cd code
CUDA_VISIBLE_DEVICES=-1 python test.py ./confs/SRFlow_DF2K_4X.yml      # Diverse Images 4X (Dataset Included)
CUDA_VISIBLE_DEVICES=-1 python test.py ./confs/SRFlow_DF2K_8X.yml      # Diverse Images 8X (Dataset Included)
CUDA_VISIBLE_DEVICES=-1 python test.py ./confs/SRFlow_CelebA_8X.yml    # Faces 8X

For testing, we apply SRFlow to the full images on CPU.
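
Setting CUDA_VISIBLE_DEVICES=-1 hides all GPUs from the process, so PyTorch falls back to the CPU. A quick way to check which device a run will use:

import torch

# With CUDA_VISIBLE_DEVICES=-1 no GPU is visible, so this resolves to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)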



Training: Reproduce or train on your data

The following commands train the Super-Resolution network using Normalizing Flow in PyTorch:

source myenv/bin/activate                      # Use the env you created using setup.sh
cd code
python train.py -opt ./confs/SRFlow_DF2K_4X.yml      # Diverse Images 4X (Dataset Included)
python train.py -opt ./confs/SRFlow_DF2K_8X.yml      # Diverse Images 8X (Dataset Included)
python train.py -opt ./confs/SRFlow_CelebA_8X.yml    # Faces 8X

  • To reduce GPU memory usage, reduce the batch size in the yml file.
  • The CelebA license does not allow us to host the dataset; a script will follow.



Dataset: How to train on your own data

The following command creates the pickle files that you can reference in the YAML config file:

cd code
python prepare_data.py /path/to/img_dir

The precomputed DF2K dataset is downloaded by setup.sh. You can reproduce it or prepare your own dataset.
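
The exact contents of those pickle files are defined by prepare_data.py. As a rough, hypothetical illustration of the idea only (packing an image folder into a single file so training does not read thousands of small images), not the repo's actual format:

import glob
import os
import pickle

import numpy as np
from PIL import Image

def pack_images(img_dir, out_path):
    """Toy example: store every PNG of a folder as a NumPy array in one pickle."""
    data = {}
    for path in sorted(glob.glob(os.path.join(img_dir, '*.png'))):
        data[os.path.basename(path)] = np.array(Image.open(path).convert('RGB'))
    with open(out_path, 'wb') as f:
        pickle.dump(data, f)

pack_images('/path/to/img_dir', 'images.pkl')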



Our paper explains

  • How to train Conditional Normalizing Flow
    We designed an architecture that achieves state-of-the-art super-resolution quality.
  • How to train Normalizing Flow on a single GPU
    We based our network on Glow, which uses up to 40 GPUs for training on image generation. SRFlow needs only a single GPU for training conditional image generation.
  • How to use Normalizing Flow for image manipulation
    How to exploit the latent space of the Normalizing Flow for controlled image manipulations.
  • See many Visual Results
    Compare GAN vs Normalizing Flow yourself. We've included many visual results in our [Paper].



GAN vs Normalizing Flow - Blog

  • Sampling: SRFlow outputs many different images for a single input.
  • Stable Training: SRFlow has far fewer hyperparameters than GAN approaches, and we did not encounter training stability issues.
  • Convergence: While GANs cannot converge, conditional Normalizing Flows converge monotonically and stably.
  • Higher Consistency: When downsampling the super-resolved image, one obtains almost exactly the low-resolution input (see the sketch below).
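
A minimal way to check this consistency yourself, assuming a super-resolved output and its low-resolution input are saved on disk (the file names below are placeholders):

import numpy as np
from PIL import Image

lr = Image.open('input_lr.png').convert('RGB')        # low-resolution input (placeholder path)
sr = Image.open('srflow_output.png').convert('RGB')   # super-resolved output (placeholder path)

# Downsample the SR output back to the LR resolution.
sr_down = sr.resize(lr.size, resample=Image.BICUBIC)

# Mean absolute difference in 8-bit intensities; a small value means high consistency.
diff = np.abs(np.asarray(sr_down, dtype=np.float64) - np.asarray(lr, dtype=np.float64))
print('mean absolute difference:', diff.mean())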

Get a quick introduction to Normalizing Flow in our [Blog].




Wanna help to improve the code?

If you found a bug or improved the code, please do the following:

  • Fork this repo.
  • Push the changes to your repo.
  • Create a pull request.



Paper

[Paper] ECCV 2020 Spotlight

@inproceedings{lugmayr2020srflow,
  title={SRFlow: Learning the Super-Resolution Space with Normalizing Flow},
  author={Lugmayr, Andreas and Danelljan, Martin and Van Gool, Luc and Timofte, Radu},
  booktitle={ECCV},
  year={2020}
}


