sjmoran / DeepLPF

Code for CVPR 2020 paper "Deep Local Parametric Filters for Image Enhancement"

Programming Languages

python

Projects that are alternatives of or similar to DeepLPF

CURL
Code for the ICPR 2020 paper: "CURL: Neural Curve Layers for Image Enhancement"
Stars: ✭ 177 (+94.51%)
Mutual labels:  paper, cvpr
TMNet
The official PyTorch implementation of the CVPR paper "Temporal Modulation Network for Controllable Space-Time Video Super-Resolution".
Stars: ✭ 77 (-15.38%)
Mutual labels:  paper, cvpr
AIPaperCompleteDownload
Complete download for papers in various top conferences
Stars: ✭ 64 (-29.67%)
Mutual labels:  paper, cvpr
Pwc
Papers with code. Sorted by stars. Updated weekly.
Stars: ✭ 15,288 (+16700%)
Mutual labels:  paper, cvpr
Cvpr2021 Papers With Code
A collection of CVPR 2021 papers and open-source projects
Stars: ✭ 7,138 (+7743.96%)
Mutual labels:  paper, cvpr
Restoring-Extremely-Dark-Images-In-Real-Time
The project is the official implementation of our CVPR 2021 paper, "Restoring Extremely Dark Images in Real Time"
Stars: ✭ 79 (-13.19%)
Mutual labels:  paper, cvpr
cool-papers-in-pytorch
Reimplementing cool papers in PyTorch...
Stars: ✭ 21 (-76.92%)
Mutual labels:  paper, cvpr
Cv paperdaily
Computer vision paper notes
Stars: ✭ 555 (+509.89%)
Mutual labels:  paper, cvpr
GuidedLabelling
Exploiting Saliency for Object Segmentation from Image Level Labels, CVPR'17
Stars: ✭ 35 (-61.54%)
Mutual labels:  paper, cvpr
Awesome-Computer-Vision-Paper-List
This repository collects the papers accepted at top computer vision conferences, making it convenient to search for related papers.
Stars: ✭ 248 (+172.53%)
Mutual labels:  paper, cvpr
Cvpr 2019 Paper Statistics
Statistics and visualization of the acceptance rate and main keywords of papers accepted at CVPR 2019
Stars: ✭ 527 (+479.12%)
Mutual labels:  paper, cvpr
Papercrawler
Crawler used to crawl papers
Stars: ✭ 20 (-78.02%)
Mutual labels:  paper, cvpr
Minecraft Optimization
Minecraft server optimization guide
Stars: ✭ 77 (-15.38%)
Mutual labels:  paper
Bit Rnn
Quantize weights and activations in Recurrent Neural Networks.
Stars: ✭ 86 (-5.49%)
Mutual labels:  paper
Acm Icpc Resource
ACM-ICPC resources (in Chinese)
Stars: ✭ 76 (-16.48%)
Mutual labels:  paper
Colorhighlight
🎨 Lightweight Color Highlight colorizer for Sublime Text
Stars: ✭ 76 (-16.48%)
Mutual labels:  rgb
Snapaper
📰 Past-papers sharing platform based on Vue.js & GCE Guide | CAIE past-paper sharing and download platform
Stars: ✭ 90 (-1.1%)
Mutual labels:  paper
Neural Mmo
Code for the paper "Neural MMO: A Massively Multiagent Game Environment for Training and Evaluating Intelligent Agents"
Stars: ✭ 1,265 (+1290.11%)
Mutual labels:  paper
Dcm Net
This work is based on our paper "DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes", which appeared at the IEEE Conference On Computer Vision And Pattern Recognition (CVPR) 2020.
Stars: ✭ 75 (-17.58%)
Mutual labels:  cvpr
Nlp Tutorial
Natural Language Processing Tutorial for Deep Learning Researchers
Stars: ✭ 9,895 (+10773.63%)
Mutual labels:  paper

DeepLPF: Deep Local Parametric Filters for Image Enhancement (CVPR 2020)

Sean Moran, Pierre Marza, Steven McDonagh, Sarah Parisot, Greg Slabaugh

Huawei Noah's Ark Lab

Main repository for the CVPR 2020 paper DeepLPF: Deep Local Parametric Filters for Image Enhancement. Here you will find a link to the code, pre-trained models and information on the datasets. Please raise a GitHub issue if you need assistance or have any questions about the research.

[Paper]

[Poster]

[Video]

[Supplementary]

[Example results: Input / Label / Ours (DeepLPF) image comparisons]

Dependencies

requirements.txt contains the Python packages used by the code.
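
These can be installed with pip, for example (a typical setup, assuming a Python 3 environment; adjust the command to your system):

pip3 install -r requirements.txt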

How to train DeepLPF and use the model for inference

Training DeepLPF

Instructions:

To get this code working on your own system and data, you will need to edit the data loading functions as follows:

  1. In main.py, change the data directory paths to point to your own data directory
  2. In data.py (lines 248 and 256), change the folder names of the data input and output directories to match your own folder names (both edits are sketched below)
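
For illustration, the edits amount to something like the following; the variable and folder names below are placeholders, not the actual identifiers used in main.py and data.py:

# In main.py: point the dataset root at your own data directory (placeholder name)
training_data_dirpath = "/path/to/your/adobe5k_dpe/"

# In data.py, around lines 248 and 256: folder names for the original (input)
# and retouched (groundtruth) images (placeholder names)
input_img_dirname = "input"
output_img_dirname = "output"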

To train, run the command:

python3 main.py

Inference - Using Pre-trained Models for Prediction

The directory pretrained_models contains four DeepLPF models pre-trained on the Adobe5K_DPE dataset, each saved at a different epoch. The model with the highest test PSNR (23.94 dB) is the one from epoch 500:

  • deeplpf_validpsnr_23.31_validloss_0.033_testpsnr_23.94_testloss_0.031_epoch_499_model.pt

This model achieves a PSNR of 23.94 dB and an SSIM of 0.913 on the Adobe_DPE image dataset. To run inference with this model, follow these steps:

  1. Place the images you wish to enhance in a directory, e.g. ./adobe5k_dpe/deeplpf_example_test_input/. Make sure the directory path contains the word "input" somewhere.
  2. Place the corresponding groundtruth images in a directory, e.g. ./adobe5k_dpe/deeplpf_example_test_output/. Make sure the directory path contains the word "output" somewhere.
  3. Place the names of the images (one per line, without extensions) in a text file in the parent directory of the image directories, i.e. ./adobe5k_dpe/, e.g. ./adobe5k_dpe/images_inference.txt
  4. Run the following command; the results will appear in a timestamped directory alongside main.py:
python3 main.py --inference_img_dirpath=./adobe5k_dpe/ --checkpoint_filepath=./pretrained_models/deeplpf_validpsnr_23.31_validloss_0.033_testpsnr_23.94_testloss_0.031_epoch_499_model.pt
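
For reference, a directory layout matching steps 1-3 might look like the following (the image filenames and extensions are illustrative):

./adobe5k_dpe/
    images_inference.txt                  (image names, one per line, no extensions)
    deeplpf_example_test_input/           (path contains the word "input")
        a0001.png
        a0002.png
    deeplpf_example_test_output/          (path contains the word "output")
        a0001.png
        a0002.png

If you want to check the enhanced outputs against the reported PSNR/SSIM figures, a standard implementation such as scikit-image can be used. A minimal sketch, assuming a predicted image and its groundtruth are loaded as same-sized uint8 RGB arrays (the filenames are illustrative; scikit-image >= 0.19 is assumed for the channel_axis argument):

from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

pred = imread("enhanced.png")     # illustrative filename: a DeepLPF output image
gt = imread("groundtruth.png")    # illustrative filename: the corresponding target image

# PSNR and SSIM over 8-bit RGB images
psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")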

Bibtex

@InProceedings{Moran_2020_CVPR,
author = {Moran, Sean and Marza, Pierre and McDonagh, Steven and Parisot, Sarah and Slabaugh, Gregory},
title = {DeepLPF: Deep Local Parametric Filters for Image Enhancement},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}

Datasets

  • Adobe-DPE (5000 images, RGB, RGB pairs): this dataset can be downloaded here. After downloading this dataset you will need to use Lightroom to pre-process the images according to the procedure outlined in the DeepPhotoEnhancer (DPE) paper. Please see the issue here for instructions. Artist C retouching is used as the groundtruth/target. Note that the images must be extracted in sRGB space. Feel free to raise a GitHub issue if you need assistance with this (or indeed with the Adobe-UPE dataset below). You can also find the training, validation and testing dataset splits for Adobe-DPE in the following file. The splits can also be found in the adobe5k_dpe directory in this repository (note these are a best guess at what the original splits from the DPE authors might be). A small sanity-check script for the exported images is sketched after this list.

  • Adobe-UPE (5000 images, RGB, RGB pairs): this dataset can be downloaded here. As above, you will need to use Lightroom to pre-process the images according to the procedure outlined in the Underexposed Photo Enhancement Using Deep Illumination Estimation (DeepUPE) paper and detailed in the issue here. Artist C retouching is used as the groundtruth/target. You can find the test images for the Adobe-UPE dataset at this link.
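
After exporting the images from Lightroom for either dataset, a quick consistency check can save debugging time later. The sketch below assumes the split files are plain text lists of image names, one per line without extensions (as in images_inference.txt), and that the input and groundtruth images live in separate folders; the file and folder paths are placeholders:

import os

split_filepath = "./adobe5k_dpe/images_train.txt"   # placeholder split file name
image_dirpaths = [
    "./adobe5k_dpe/input/",                         # placeholder: original (input) images
    "./adobe5k_dpe/output/",                        # placeholder: retouched (groundtruth) images
]

with open(split_filepath) as f:
    names = [line.strip() for line in f if line.strip()]

for dirpath in image_dirpaths:
    # compare filename stems (extension-free names) against the split list
    stems = {os.path.splitext(fn)[0] for fn in os.listdir(dirpath)}
    missing = [name for name in names if name not in stems]
    if missing:
        print(f"{dirpath}: {len(missing)} split images missing, e.g. {missing[:5]}")
    else:
        print(f"{dirpath}: all {len(names)} split images present")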

License

BSD-3-Clause License

Contributions

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions or extensions to the core, please first open an issue and discuss the feature with us. Sending a PR without discussion might end up resulting in a rejected PR, because we might be taking the core in a different direction than you might be aware of.
