
yinboc / Liif

License: BSD-3-Clause
Learning Continuous Image Representation with Local Implicit Image Function, in CVPR 2021 (Oral)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Liif

Video2x
A lossless video/GIF/image upscaler achieved with waifu2x, Anime4K, SRMD and RealSR. Started in Hack the Valley 2, 2018.
Stars: ✭ 3,760 (+720.96%)
Mutual labels:  super-resolution
Pytorch Vdsr
VDSR (CVPR2016) pytorch implementation
Stars: ✭ 313 (-31.66%)
Mutual labels:  super-resolution
Waifu2x Extension Gui
Video, image and GIF upscale/enlarge (Super-Resolution) and video frame interpolation. Achieved with Waifu2x, Real-ESRGAN, SRMD, RealSR, Anime4K, RIFE, CAIN, DAIN, and ACNet.
Stars: ✭ 5,463 (+1092.79%)
Mutual labels:  super-resolution
Caffe Vdsr
A Caffe-based implementation of very deep convolution network for image super-resolution
Stars: ✭ 273 (-40.39%)
Mutual labels:  super-resolution
Sewar
All image quality metrics you need in one package.
Stars: ✭ 299 (-34.72%)
Mutual labels:  super-resolution
Drn
Closed-loop Matters: Dual Regression Networks for Single Image Super-Resolution
Stars: ✭ 327 (-28.6%)
Mutual labels:  super-resolution
GSOC
Repository for Google Summer of Code 2019 https://summerofcode.withgoogle.com/projects/#4662790671826944
Stars: ✭ 61 (-86.68%)
Mutual labels:  super-resolution
Waifu2x
Image Super-Resolution for Anime-Style Art
Stars: ✭ 22,741 (+4865.28%)
Mutual labels:  super-resolution
Toflow
TOFlow: Video Enhancement with Task-Oriented Flow
Stars: ✭ 314 (-31.44%)
Mutual labels:  super-resolution
Pixel Recursive Super Resolution
Tensorflow implementation of pixel-recursive-super-resolution (Google Brain paper: https://arxiv.org/abs/1702.00783)
Stars: ✭ 374 (-18.34%)
Mutual labels:  super-resolution
Tdan Vsr Cvpr 2020
TDAN: Temporally-Deformable Alignment Network for Video Super-Resolution, CVPR 2020
Stars: ✭ 277 (-39.52%)
Mutual labels:  super-resolution
Pytorch Srgan
A modern PyTorch implementation of SRGAN
Stars: ✭ 289 (-36.9%)
Mutual labels:  super-resolution
Srmd
Learning a Single Convolutional Super-Resolution Network for Multiple Degradations (CVPR, 2018) (Matlab)
Stars: ✭ 333 (-27.29%)
Mutual labels:  super-resolution
Singan
Official pytorch implementation of the paper: "SinGAN: Learning a Generative Model from a Single Natural Image"
Stars: ✭ 2,983 (+551.31%)
Mutual labels:  super-resolution
Rdn
Torch code for our CVPR 2018 paper "Residual Dense Network for Image Super-Resolution" (Spotlight)
Stars: ✭ 412 (-10.04%)
Mutual labels:  super-resolution
Mirnet
Official repository for "Learning Enriched Features for Real Image Restoration and Enhancement" (ECCV 2020). SOTA results for image denoising, super-resolution, and image enhancement.
Stars: ✭ 247 (-46.07%)
Mutual labels:  super-resolution
Edsr Tensorflow
Tensorflow implementation of Enhanced Deep Residual Networks for Single Image Super-Resolution
Stars: ✭ 314 (-31.44%)
Mutual labels:  super-resolution
Dbpn Pytorch
The project is an official implement of our CVPR2018 paper "Deep Back-Projection Networks for Super-Resolution" (Winner of NTIRE2018 and PIRM2018)
Stars: ✭ 459 (+0.22%)
Mutual labels:  super-resolution
Fast Srgan
A Fast Deep Learning Model to Upsample Low Resolution Videos to High Resolution at 30fps
Stars: ✭ 417 (-8.95%)
Mutual labels:  super-resolution
Realsr
Real-World Super-Resolution via Kernel Estimation and Noise Injection
Stars: ✭ 367 (-19.87%)
Mutual labels:  super-resolution

LIIF

This repository contains the official implementation for LIIF introduced in the following paper:

Learning Continuous Image Representation with Local Implicit Image Function
Yinbo Chen, Sifei Liu, Xiaolong Wang
CVPR 2021 (Oral)

The project page with video is at https://yinboc.github.io/liif/.

Citation

If you find our work useful in your research, please cite:

@article{chen2020learning,
  title={Learning Continuous Image Representation with Local Implicit Image Function},
  author={Chen, Yinbo and Liu, Sifei and Wang, Xiaolong},
  journal={arXiv preprint arXiv:2012.09161},
  year={2020}
}

Environment

  • Python 3
  • PyTorch 1.6.0
  • TensorboardX
  • yaml, numpy, tqdm, imageio
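
A minimal installation sketch, assuming pip; the yaml module is provided by the pyyaml package, and the exact PyTorch wheel should match your CUDA setup:

pip install torch==1.6.0 torchvision tensorboardX pyyaml numpy tqdm imageio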

Quick Start

  1. Download a DIV2K pre-trained model.
Model                File size   Download
EDSR-baseline-LIIF   18M         Dropbox | Google Drive
RDN-LIIF             256M        Dropbox | Google Drive
  2. Convert your image to the LIIF representation and render it at the given resolution (with GPU 0; [MODEL_PATH] denotes the .pth file):
python demo.py --input xxx.png --model [MODEL_PATH] --resolution [HEIGHT],[WIDTH] --output output.png --gpu 0
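
For example, to render a 1920×2560 output from an input image with the EDSR-baseline model (the filenames here are just placeholders):

python demo.py --input input.png --model edsr-baseline-liif.pth --resolution 1920,2560 --output output.png --gpu 0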

Reproducing Experiments

Data

Run mkdir load to create the folder that will hold the dataset folders.

  • DIV2K: mkdir and cd into load/div2k. Download HR images and bicubic validation LR images from the DIV2K website (i.e. Train_HR, Valid_HR, Valid_LR_X2, Valid_LR_X3, Valid_LR_X4) and unzip these files to get the image folders.

  • benchmark datasets: cd into load/. Download and tar -xf the benchmark datasets (provided by this repo) to get a load/benchmark folder with sub-folders Set5/, Set14/, B100/, Urban100/.

  • celebAHQ: mkdir load/celebAHQ and cp scripts/resize.py load/celebAHQ/, then cd into load/celebAHQ/. Download and unzip data1024x1024.zip from the Google Drive link (provided by this repo). Run python resize.py to get the image folders 256/, 128/, 64/, 32/, and download split.json. A rough command sketch of these steps is given after this list.
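
A minimal command sketch of the preparation above, assuming the archives have already been downloaded into the indicated folders (the benchmark archive name is an assumption; the downloads themselves are manual steps):

mkdir -p load/div2k load/celebAHQ
cd load/div2k && unzip '*.zip' && cd ../..        # DIV2K HR and LR zips downloaded here
cd load && tar -xf benchmark.tar.gz && cd ..      # benchmark archive (name assumed) from the linked repo
cp scripts/resize.py load/celebAHQ/
cd load/celebAHQ && unzip data1024x1024.zip && python resize.py && cd ../..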

Running the code

0. Preliminaries

  • For train_liif.py or test.py, use --gpu [GPU] to specify the GPUs (e.g. --gpu 0 or --gpu 0,1).

  • For train_liif.py, by default, the save folder is at save/_[CONFIG_NAME]. We can use --name to specify a name if needed.

  • For dataset args in the configs, cache: in_memory pre-loads the whole dataset into memory (may require a large amount of memory, e.g. ~40GB for DIV2K), cache: bin creates binary files (in a sibling folder) on the first run, and cache: none loads images directly from disk. Choose according to the available hardware before running the training scripts; an illustrative config fragment is shown after this list.
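
For illustration only, a hypothetical dataset-args fragment showing where the cache option lives; the surrounding key names and paths are assumptions and may differ from the actual config files:

train_dataset:
  dataset:
    name: image-folder                          # assumed loader name
    args:
      root_path: ./load/div2k/DIV2K_train_HR    # assumed dataset path
      cache: bin                                # one of: in_memory, bin, none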

1. DIV2K experiments

Train: python train_liif.py --config configs/train-div2k/train_edsr-baseline-liif.yaml (with the EDSR-baseline backbone; for RDN, replace edsr-baseline with rdn). We use 1 GPU for training EDSR-baseline-LIIF and 4 GPUs for RDN-LIIF.
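
For example, a training invocation for the RDN backbone might look like the following; the config filename is assumed from the substitution rule above:

python train_liif.py --config configs/train-div2k/train_rdn-liif.yaml --gpu 0,1,2,3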

Test: bash scripts/test-div2k.sh [MODEL_PATH] [GPU] for the DIV2K validation set, and bash scripts/test-benchmark.sh [MODEL_PATH] [GPU] for the benchmark datasets. [MODEL_PATH] is the path to a .pth file; we use epoch-last.pth in the corresponding save folder.
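
For example, assuming the default save folder produced by the EDSR-baseline config above (save/_[CONFIG_NAME]):

bash scripts/test-div2k.sh save/_train_edsr-baseline-liif/epoch-last.pth 0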

2. celebAHQ experiments

Train: python train_liif.py --config configs/train-celebAHQ/[CONFIG_NAME].yaml.

Test: python test.py --config configs/test/test-celebAHQ-32-256.yaml --model [MODEL_PATH] (or test-celebAHQ-64-128.yaml for the other task). We use epoch-best.pth in the corresponding save folder.
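
For example, with the save folder name left as a placeholder:

python test.py --config configs/test/test-celebAHQ-32-256.yaml --model save/_[CONFIG_NAME]/epoch-best.pth --gpu 0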
