varun19299 / deep-atrous-guided-filter

Licence: other
Deep Atrous Guided Filter for Image Restoration in Under Display Cameras (UDC Challenge, ECCV 2020).

Programming Languages

  • Jupyter Notebook
  • Python

Projects that are alternatives of or similar to deep-atrous-guided-filter

SRResCycGAN
Code repo for "Deep Cyclic Generative Adversarial Residual Convolutional Networks for Real Image Super-Resolution" (ECCVW AIM2020).
Stars: ✭ 47 (+46.88%)
Mutual labels:  image-restoration, eccv2020
JSTASR-DesnowNet-ECCV-2020
This is the project page of our paper which has been published in ECCV 2020.
Stars: ✭ 17 (-46.87%)
Mutual labels:  image-restoration, eccv2020
Dehazing-PMHLD-Patch-Map-Based-Hybrid-Learning-DehazeNet-for-Single-Image-Haze-Removal-TIP-2020
This is the source code of PMHLD-Patch-Map-Based-Hybrid-Learning-DehazeNet-for-Single-Image-Haze-Removal which has been accepted by IEEE Transaction on Image Processing 2020.
Stars: ✭ 14 (-56.25%)
Mutual labels:  image-restoration, eccv2020
dough
Library containing a lot of useful utility classes for the everyday Java and Spigot/Paper developer.
Stars: ✭ 26 (-18.75%)
Mutual labels:  paper
Paper-clip
List of various interesting papers
Stars: ✭ 16 (-50%)
Mutual labels:  paper
3DObjectTracking
Official Code: A Sparse Gaussian Approach to Region-Based 6DoF Object Tracking
Stars: ✭ 375 (+1071.88%)
Mutual labels:  paper
Awesome-Human-Activity-Recognition
An up-to-date & curated list of Awesome IMU-based Human Activity Recognition(Ubiquitous Computing) papers, methods & resources. Please note that most of the collections of researches are mainly based on IMU data.
Stars: ✭ 72 (+125%)
Mutual labels:  paper
dmfont
Official PyTorch implementation of DM-Font (ECCV 2020)
Stars: ✭ 110 (+243.75%)
Mutual labels:  eccv2020
Glowkit
A fork of the Paper (Bukkit) API for use in Glowstone
Stars: ✭ 17 (-46.87%)
Mutual labels:  paper
Awesome-Image-Matting
📓 A curated list of deep learning image matting papers and codes
Stars: ✭ 281 (+778.13%)
Mutual labels:  paper
Advances-in-Label-Noise-Learning
A curated (most recent) list of resources for Learning with Noisy Labels
Stars: ✭ 360 (+1025%)
Mutual labels:  paper
PlantDoc-Dataset
Dataset used in "PlantDoc: A Dataset for Visual Plant Disease Detection" accepted in CODS-COMAD 2020
Stars: ✭ 114 (+256.25%)
Mutual labels:  paper
tpprl
Code and data for "Deep Reinforcement Learning of Marked Temporal Point Processes", NeurIPS 2018
Stars: ✭ 68 (+112.5%)
Mutual labels:  paper
AIPaperCompleteDownload
Complete download for papers in various top conferences
Stars: ✭ 64 (+100%)
Mutual labels:  paper
libpillowfight
Small library containing various image processing algorithms (+ Python 3 bindings) that has almost no dependencies -- Moved to Gnome's Gitlab
Stars: ✭ 60 (+87.5%)
Mutual labels:  paper
PQ-NET
code for our CVPR 2020 paper "PQ-NET: A Generative Part Seq2Seq Network for 3D Shapes"
Stars: ✭ 99 (+209.38%)
Mutual labels:  paper
PMMasterQuest
Take Paper Mario 64, buff old and new enemies to absurd levels, then rebalance Mario's overpowered strategies, and you've got one of the most difficult hacks of all time: Paper Mario Master Quest. The Discord:
Stars: ✭ 58 (+81.25%)
Mutual labels:  paper
research-contributions
Implementations of recent research prototypes/demonstrations using MONAI.
Stars: ✭ 564 (+1662.5%)
Mutual labels:  paper
KMRC-Papers
A list of recent papers regarding knowledge-based machine reading comprehension.
Stars: ✭ 40 (+25%)
Mutual labels:  paper
AI-Lossless-Zoomer
AI无损放大工具
Stars: ✭ 940 (+2837.5%)
Mutual labels:  image-restoration

Deep Atrous Guided Filter

Our submission to the Under Display Camera Challenge (UDC) at ECCV 2020. We placed 2nd and 5th on the POLED and TOLED tracks respectively!

Project Page | Paper | Open In Colab

Method Diagram

Official implementation of our ECCVW 2020 paper, "Deep Atrous Guided Filter for Image Restoration in Under Display Cameras", by Varun Sundar*, Sumanth Hegde*, Divya Kothandaraman and Kaushik Mitra, Indian Institute of Technology Madras. * denotes equal contribution.

Quick Colab Demo

If you want to experiment with Deep Atrous Guided Filter (DAGF), we recommend starting with the Colab notebook. It exposes the core aspects of our method while abstracting away minor details and helper functions.

It requires no prior setup, and contains a demo for both POLED and TOLED measurements.

If you're unfamiliar with Under Display Cameras, they are a new imaging system for smartphones, where the camera is mounted right under the display. This makes truly bezel-free displays possible, and opens up a bunch of other applications. You can read more here.

Get Started

If you would like to reproduce all our experiments presented in the paper, head over to the experiments branch. For a concise version with just our final models, you may continue here.

You'll need to install the following:

  • python 3.7+
  • pytorch 1.5+
  • Use pip install -r utils/requirements.txt for the remaining dependencies
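As a quick sanity check before training, you can verify that the environment meets the stated minimums. The helper below is a hypothetical sketch (not part of the repo) for comparing dotted version strings such as torch.__version__:

```python
import sys

def meets_minimum(version: str, minimum: tuple) -> bool:
    """Return True if a dotted version string (e.g. "1.5.0" or "1.7.1+cu101")
    is at least `minimum`, compared component-wise."""
    core = version.split("+")[0]  # drop local suffixes like "+cu101"
    parts = tuple(int(p) for p in core.split(".")[: len(minimum)])
    return parts >= minimum

# Python itself must be 3.7+
assert sys.version_info[:2] >= (3, 7)

# If torch is installed, "1.5.0" and newer satisfy the 1.5+ requirement:
assert meets_minimum("1.5.0", (1, 5))
assert not meets_minimum("1.4.0", (1, 5))
```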

Data

Dataset          Train Folder  Val Folder  Test Folder
POLED            POLED_train   POLED_val   POLED_test
TOLED            TOLED_train   TOLED_val   TOLED_test
Simulated POLED  Sim_train     Sim_val     NA
Simulated TOLED  Sim_train     Sim_val     NA

Download the required folder and place it under the data/ directory. The train and val splits contain both low-quality measurements (LQ folder) and high-quality ground truth (HQ folder). The test set currently contains only measurements.
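Since LQ and HQ images share filenames, a split can be enumerated into (measurement, ground-truth) pairs by matching names. This is a hypothetical helper for illustration; the repo's own dataloader handles this internally:

```python
from pathlib import Path

def list_pairs(split_dir):
    """Pair each low-quality measurement with its ground-truth image,
    matching by filename (illustrative sketch of the LQ/HQ layout)."""
    split_dir = Path(split_dir)
    lq_images = sorted((split_dir / "LQ").glob("*.png"))
    return [(lq, split_dir / "HQ" / lq.name) for lq in lq_images]
```

For example, list_pairs("data/POLED_train") would yield tuples like (LQ/101.png, HQ/101.png).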

We also provide our simulated dataset, created by training a shallow version of DAGF with the Contextual Bilateral (CoBi) loss. For simulation-specific details (procedure, etc.), take a look at the experiments branch.

Configs and Checkpoints

We use sacred to handle config parsing, with the following command-line invocation:

python train{val}.py with config_name {other flags} -p

Various configs available:

Model       Dataset                           Config Name       Checkpoints
DAGF        POLED                             ours_poled        ours-poled
DAGF-sim    Simulated POLED                   ours_poled_sim    ours-poled-sim
DAGF-PreTr  POLED (fine-tuned from DAGF-sim)  ours_poled_PreTr  ours-poled-PreTr
DAGF        TOLED                             ours_toled        ours-toled
DAGF-sim    Simulated TOLED                   ours_toled_sim    ours-toled-sim
DAGF-PreTr  TOLED (fine-tuned from DAGF-sim)  ours_toled_PreTr  ours-toled-PreTr

Download the required checkpoint folder and place it under ckpts/.

DAGF-sim networks are first trained on simulated data. To obtain this data, we trained a shallow version of our final model to transform clean images to Glass / POLED / TOLED measurements. You can find the checkpoints and code for these networks in our experiments branch.

Further, see config.py for the exhaustive set of config options. To add a config, create a new function in config.py and add it to `named_configs`.
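Conceptually, a named config layers its overrides on top of the base config. The sketch below illustrates that pattern (sacred does this internally; the keys shown are made up for the example):

```python
# Base config: defaults shared by every experiment (keys are illustrative).
base_config = {
    "batch_size": 4,
    "self_ensemble": False,
}

# Named configs: per-experiment overrides, keyed by config name.
named_configs = {
    "ours_poled": {"dataset": "POLED"},
    "ours_toled": {"dataset": "TOLED"},
}

def resolve(name):
    """Merge a named config on top of the base config."""
    cfg = dict(base_config)
    cfg.update(named_configs[name])
    return cfg

cfg = resolve("ours_poled")
assert cfg["dataset"] == "POLED" and cfg["batch_size"] == 4
```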

Directory Setup

Create the following symbolic links (assume path_to_root_folder/ is ~/udc_net):

  • Data folder: ln -s /data_dir/ ~/udc_net
  • Runs folder: ln -s /runs_dir/ ~/udc_net
  • Ckpts folder: ln -s /ckpt_dir/ ~/udc_net
  • Outputs folder: ln -s /output_dir/ ~/udc_net

High Level Organisation

Data folder: Each subfolder contains a data split.

|-- Poled_train
|   |-- HQ
|   |-- |-- 101.png
|   |-- |-- 102.png
|   |-- |-- 103.png
|   `-- LQ
|-- Poled_val
|   `-- LQ

Splits:

  • Poled_{train,val}: Poled acquired images, HQ (glass), LQ (Poled) pairs.
  • Toled_{train,val}: Toled acquired images, HQ (glass), LQ (Toled) pairs.
  • Sim_{train,val}: our simulated set.
  • DIV2K: source images displayed during monitor acquisition of the Poled and Toled sets; also used to train the simulation networks.

Outputs folder: Val, test dumps under various experiment names.

outputs
|-- ours-poled
|   |-- test_latest
|   `-- val_latest
|       |-- 99.png
|       |-- 9.png
|       `-- metrics.txt

Ckpts folder: Checkpoints under various experiment names. For model snapshots, we store every 64th epoch, and every 5 epochs prior to that. This is configurable in config.py.

ckpts
|-- ours-poled
|   `-- model_latest.pth

Runs folder: Tensorboard event files under various experiment names.

runs
|-- ours-poled
|   |-- events.out.tfevents.1592369530.genesis.26208.0

Train Script

Run as: python train.py with xyz_config {other flags}

For a multi-GPU version (we use PyTorch's DistributedDataParallel):

python -m torch.distributed.launch --nproc_per_node=3 --use_env train.py with xyz_config distdataparallel=True {other flags}

Val Script

Run as: python val.py with xyz_config {other flags}

Useful Flags:

  • self_ensemble: Use self-ensembling. Ops may be found in utils/self_ensembling.py.
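Self-ensembling averages the model's predictions over a set of invertible transforms of the input. The NumPy sketch below shows a common flip-based variant for intuition; it is an assumption-labeled illustration, and the repo's exact ops live in utils/self_ensembling.py:

```python
import numpy as np

def flip_self_ensemble(forward, x):
    """Average a model's predictions over horizontal/vertical flips,
    undoing each flip on the output. `forward` maps an HxWxC array
    to an HxWxC array. Illustrative only; see utils/self_ensembling.py
    for the actual ops used."""
    outputs = []
    for flip_v in (False, True):
        for flip_h in (False, True):
            t = x[::-1] if flip_v else x
            t = t[:, ::-1] if flip_h else t
            y = forward(t)
            # invert the flips on the prediction before averaging
            y = y[:, ::-1] if flip_h else y
            y = y[::-1] if flip_v else y
            outputs.append(y)
    return np.mean(outputs, axis=0)

# For an identity model, ensembling changes nothing:
x = np.random.rand(8, 8, 3)
assert np.allclose(flip_self_ensemble(lambda t: t, x), x)
```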

See config.py for the exhaustive set of arguments (under base_config).

Citation

If you find our work useful in your research, please cite:

@InProceedings{10.1007/978-3-030-68238-5_29,
author="Sundar, Varun
and Hegde, Sumanth
and Kothandaraman, Divya
and Mitra, Kaushik",
title="Deep Atrous Guided Filter for Image Restoration in Under Display Cameras",
booktitle="Computer Vision -- ECCV 2020 Workshops",
year="2020",
publisher="Springer International Publishing",
pages="379--397",
}

Contact

Feel free to mail us if you have any questions!
