
ayu-22 / BPPNet-Back-Projected-Pyramid-Network

Licence: other
This is the official GitHub repository for the ECCV 2020 Workshop paper "Single image dehazing for a variety of haze scenarios using back projected pyramid network".

Programming Languages

Jupyter Notebook

Projects that are alternatives of or similar to BPPNet-Back-Projected-Pyramid-Network

bmusegan
Code for “Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation”
Stars: ✭ 58 (+65.71%)
Mutual labels:  generative-adversarial-network
hyperstyle
Official Implementation for "HyperStyle: StyleGAN Inversion with HyperNetworks for Real Image Editing" (CVPR 2022) https://arxiv.org/abs/2111.15666
Stars: ✭ 874 (+2397.14%)
Mutual labels:  generative-adversarial-network
ganbert
Enhancing the BERT training with Semi-supervised Generative Adversarial Networks
Stars: ✭ 205 (+485.71%)
Mutual labels:  generative-adversarial-network
Img2Img-Translation-Networks
Tensorflow implementation of paper "unsupervised image to image translation networks"
Stars: ✭ 67 (+91.43%)
Mutual labels:  generative-adversarial-network
TF2-GAN
🐳 GAN implemented as Tensorflow 2.X
Stars: ✭ 61 (+74.29%)
Mutual labels:  generative-adversarial-network
coursera-gan-specialization
Programming assignments and quizzes from all courses within the GANs specialization offered by deeplearning.ai
Stars: ✭ 277 (+691.43%)
Mutual labels:  generative-adversarial-network
Pytorch models
PyTorch study
Stars: ✭ 14 (-60%)
Mutual labels:  generative-adversarial-network
DiscoGAN-TF
Tensorflow Implementation of DiscoGAN
Stars: ✭ 57 (+62.86%)
Mutual labels:  generative-adversarial-network
path planning GAN
Path Planning using Generative Adversarial Network (GAN)
Stars: ✭ 36 (+2.86%)
Mutual labels:  generative-adversarial-network
DeepSIM
Official PyTorch implementation of the paper: "DeepSIM: Image Shape Manipulation from a Single Augmented Training Sample" (ICCV 2021 Oral)
Stars: ✭ 389 (+1011.43%)
Mutual labels:  generative-adversarial-network
PESR
Official code (Pytorch) for paper Perception-Enhanced Single Image Super-Resolution via Relativistic Generative Networks
Stars: ✭ 28 (-20%)
Mutual labels:  generative-adversarial-network
Introduction-to-Deep-Learning-and-Neural-Networks-Course
Code snippets and solutions for the Introduction to Deep Learning and Neural Networks Course hosted in educative.io
Stars: ✭ 33 (-5.71%)
Mutual labels:  generative-adversarial-network
Sketch2Color-anime-translation
Given a simple anime line-art sketch the model outputs a decent colored anime image using Conditional-Generative Adversarial Networks (C-GANs) concept.
Stars: ✭ 90 (+157.14%)
Mutual labels:  generative-adversarial-network
CIPS-3D
3D-aware GANs based on NeRF (arXiv).
Stars: ✭ 562 (+1505.71%)
Mutual labels:  generative-adversarial-network
DCGAN-Pytorch
A Pytorch implementation of "Deep Convolutional Generative Adversarial Networks"
Stars: ✭ 23 (-34.29%)
Mutual labels:  generative-adversarial-network
Deep-Fakes
No description or website provided.
Stars: ✭ 88 (+151.43%)
Mutual labels:  generative-adversarial-network
ST-CGAN
Dataset and Code for our CVPR'18 paper ST-CGAN: "Stacked Conditional Generative Adversarial Networks for Jointly Learning Shadow Detection and Shadow Removal"
Stars: ✭ 64 (+82.86%)
Mutual labels:  generative-adversarial-network
simplegan
Tensorflow-based framework to ease training of generative models
Stars: ✭ 19 (-45.71%)
Mutual labels:  generative-adversarial-network
pytorch-GAN
My pytorch implementation for GAN
Stars: ✭ 12 (-65.71%)
Mutual labels:  generative-adversarial-network
gan deeplearning4j
Automatic feature engineering using Generative Adversarial Networks using Deeplearning4j and Apache Spark.
Stars: ✭ 19 (-45.71%)
Mutual labels:  generative-adversarial-network

BPPNet: Back Projected Pyramid Network

[Papers with Code benchmark badges]

Ayush Singh    Ajay Bhave    Dilip K Prasad

This is the official implementation of our ECCV 2020 Workshop paper "Single image dehazing for a variety of haze scenarios using back projected pyramid network". Paper Link

Image dehazing refers to procedures that attempt to remove haze from a hazy image and restore a sharper, clearer appearance to the degraded input. It has applications in various fields such as outdoor video surveillance, driver assistance systems, etc.
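For background (this model is standard in the dehazing literature, though it is not stated in this README), haze formation is commonly described by the atmospheric scattering model

I(x) = J(x) · t(x) + A · (1 − t(x)),   with   t(x) = e^(−β·d(x)),

where I is the observed hazy image, J is the haze-free scene radiance, A is the global atmospheric light, t is the transmission map, β is the scattering coefficient, and d is the scene depth. Single image dehazing amounts to recovering J from I alone.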

We have implemented the network in PyTorch.

Acknowledgement

Ayush Singh acknowledges internship funding support from Research Council Norway’s INTPART grant no. 309802.

Experimental Details

Dataset used

We trained and tested our model on 4 datasets: I-HAZE, O-HAZE, Dense Haze, and the NTIRE 2020 Non-Homogeneous Dehazing dataset.

Training Details

The initial learning rate for both the generator and the discriminator was 0.001. The learning rate was decreased manually as training proceeded, and the final learning rate for both the generator and the discriminator was 0.000001 (1e-6).
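A minimal sketch of how such a manually decayed schedule could be set up in PyTorch is shown below; the optimizer type (Adam), the decay epochs, and the placeholder networks are illustrative assumptions, not details taken from this repository:

```python
import torch

# Placeholder generator / discriminator modules standing in for the networks
# defined in this repository (illustrative only).
generator = torch.nn.Sequential(torch.nn.Conv2d(3, 3, kernel_size=3, padding=1))
discriminator = torch.nn.Sequential(torch.nn.Conv2d(3, 1, kernel_size=3, padding=1))

# Initial learning rate of 0.001 for both networks, as stated above.
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def set_lr(optimizer, lr):
    """Manually override the learning rate, mimicking the hand-tuned decay."""
    for group in optimizer.param_groups:
        group["lr"] = lr

# Manual decay down to the final value of 1e-6; the exact epochs at which the
# rate was dropped are not given in the README, so these are placeholders.
for epoch in range(200):
    if epoch == 80:
        set_lr(opt_g, 1e-4)
        set_lr(opt_d, 1e-4)
    if epoch == 150:
        set_lr(opt_g, 1e-6)
        set_lr(opt_d, 1e-6)
    # ... one epoch of adversarial training would go here ...
```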

Results

The results of our model were quite good. As the tables below show, our method clearly outperformed the state of the art with respect to both PSNR and SSIM. Our model performs dehazing in real time, with an average running time of 0.0311 s, i.e. 31.1 ms per image.

I-HAZE

Metric Input CVPR’09 TIP’15 ECCV’16 CVPR’16 ICCV’17 CVPRW’18 Our model
SSIM 0.7302 0.7516 0.6065 0.7545 0.6537 0.7323 0.8705 0.8994
PSNR 13.80 14.43 12.24 15.22 14.12 13.98 22.53 22.56

O-HAZE

Metric Input CVPR’09 TIP’15 ECCV’16 CVPR’16 ICCV’17 CVPRW’18 Our model
SSIM 0.5907 0.6532 0.5965 0.6495 0.5849 0.5385 0.7205 0.8919
PSNR 13.56 16.78 16.08 17.56 15.98 15.03 24.24 24.27

Dense Haze

Metric CVPR’09 Meng et al. Fattal et al. Cai et al. Ancuti et al. CVPR’16 ECCV’16 Morales et al. Our model
SSIM 0.398 0.352 0.326 0.374 0.306 0.358 0.369 0.569 0.613
PSNR 14.56 14.62 12.11 11.36 13.67 13.18 12.52 16.37 17.01

NTIRE 2020 Non-Homogeneous Dehazing

As the NTIRE 2020 Non-Homogeneous Dehazing dataset is very new and the ground truth for its validation and test data was not available, we could not compare against the other state-of-the-art methods on it. We randomly selected 5 images from the training set as test images and trained our model on the remaining 40 images. The PSNR and SSIM on this test data were 19.40 and 0.8726 respectively.
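A minimal sketch of how PSNR and SSIM can be computed for one dehazed/ground-truth pair with scikit-image (≥ 0.19) is given below; the file names are hypothetical and the repository may compute these metrics differently:

```python
import numpy as np
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical file names for one dehazed output and its ground-truth clear image.
dehazed = io.imread("dehazed_01.png").astype(np.float64) / 255.0
clear = io.imread("gt_01.png").astype(np.float64) / 255.0

# PSNR in dB; data_range=1.0 because the images were scaled to [0, 1].
psnr = peak_signal_noise_ratio(clear, dehazed, data_range=1.0)

# SSIM over RGB images; channel_axis=-1 marks the colour-channel dimension.
ssim = structural_similarity(clear, dehazed, data_range=1.0, channel_axis=-1)

print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```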

A few of our results

[Sample results: one indoor and two outdoor dehazing examples]
