
mahmoudnafifi / SIIE

Licence: other
Sensor-Independent Illumination Estimation for DNN Models (BMVC 2019)

Programming Languages

matlab

Projects that are alternatives to or similar to SIIE

C5
Reference code for the paper "Cross-Camera Convolutional Color Constancy" (ICCV 2021)
Stars: ✭ 75 (+226.09%)
Mutual labels:  whitebalance, white-balance, illuminant-estimation, color-constancy
DNNAC
All about acceleration and compression of Deep Neural Networks
Stars: ✭ 29 (+26.09%)
Mutual labels:  deep-neural-network
Deep-Learning-Mahjong---
Reinforcement learning (RL) implementation of imperfect information game Mahjong using markov decision processes to predict future game states
Stars: ✭ 45 (+95.65%)
Mutual labels:  deep-neural-network
Ensemble-of-Multi-Scale-CNN-for-Dermatoscopy-Classification
Fully supervised binary classification of skin lesions from dermatoscopic images using an ensemble of diverse CNN architectures (EfficientNet-B6, Inception-V3, SEResNeXt-101, SENet-154, DenseNet-169) with multi-scale input.
Stars: ✭ 25 (+8.7%)
Mutual labels:  color-constancy
dnn.cool
A framework for multi-task learning, where you may precondition tasks and compose them into bigger tasks. Conditional objectives and per-task evaluations and interpretations.
Stars: ✭ 44 (+91.3%)
Mutual labels:  deep-neural-network
midiGenerator
Generate midi file with deep neural network 🎶
Stars: ✭ 30 (+30.43%)
Mutual labels:  deep-neural-network
LSMI-dataset
Large Scale Multi-Illuminant (LSMI) Dataset for Developing White Balance Algorithm under Mixed Illumination
Stars: ✭ 34 (+47.83%)
Mutual labels:  whitebalance
HistoGAN
Reference code for the paper HistoGAN: Controlling Colors of GAN-Generated and Real Images via Color Histograms (CVPR 2021).
Stars: ✭ 158 (+586.96%)
Mutual labels:  color-histogram
py-image-search-engine
Python Image Search Engine with OpenCV
Stars: ✭ 37 (+60.87%)
Mutual labels:  color-histogram
Analytics Zoo
Distributed Tensorflow, Keras and PyTorch on Apache Spark/Flink & Ray
Stars: ✭ 2,448 (+10543.48%)
Mutual labels:  deep-neural-network
Deepvariant
DeepVariant is an analysis pipeline that uses a deep neural network to call genetic variants from next-generation DNA sequencing data.
Stars: ✭ 2,404 (+10352.17%)
Mutual labels:  deep-neural-network
Wer are we
Attempt at tracking states of the arts and recent results (bibliography) on speech recognition.
Stars: ✭ 1,684 (+7221.74%)
Mutual labels:  deep-neural-network
Nni
An open source AutoML toolkit for automate machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
Stars: ✭ 10,698 (+46413.04%)
Mutual labels:  deep-neural-network
Ciphey
⚡ Automatically decrypt encryptions without knowing the key or cipher, decode encodings, and crack hashes ⚡
Stars: ✭ 9,116 (+39534.78%)
Mutual labels:  deep-neural-network
Pwnagotchi
(⌐■_■) - Deep Reinforcement Learning instrumenting bettercap for WiFi pwning.
Stars: ✭ 4,678 (+20239.13%)
Mutual labels:  deep-neural-network
Tfjs
A WebGL accelerated JavaScript library for training and deploying ML models.
Stars: ✭ 15,834 (+68743.48%)
Mutual labels:  deep-neural-network
lego-art-remix
Powerful computer vision assisted Lego mosaic creator · Over 500,000 images created (so far!)
Stars: ✭ 148 (+543.48%)
Mutual labels:  deep-neural-network
11K-Hands
Two-stream CNN for gender classification and biometric identification using a dataset of 11K hand images.
Stars: ✭ 44 (+91.3%)
Mutual labels:  deep-neural-network
quasi-unsupervised-cc
Implementation of the method described in the paper "Quasi-unsupervised color constancy" - CVPR 2019
Stars: ✭ 35 (+52.17%)
Mutual labels:  color-constancy

Sensor-Independent Illumination Estimation for DNN Models

Mahmoud Afifi1 and Michael S. Brown1,2

1York University    2Samsung AI Center (SAIC) - Toronto

Project page


Prerequisite

  1. Matlab 2018b or higher (recommended)
  2. Deep Learning Toolbox for Matlab 2018b or higher

The original experiments were done using Matlab 2018b. In some later Matlab versions, models designed in Matlab 2018b do not work. For this reason, we provide another version of the models that should work, but which may give higher errors than the Matlab 2018b models.

UPDATE

We strongly recommend first trying the models for Matlab 2018b; if you face a compatibility error, then try the 'Matlab 2019b or higher' option.

Quick start

View Sensor-Independent Illuminant Estimation Using Deep Learning on File Exchange

Run install_.m, then run demo.m to test our trained models. In demo.m, select your Matlab version by changing the value of Matlab_ver. The supported versions are Matlab 2018b and Matlab 2019a or higher. We highly recommend setting Matlab_ver to 'Matlab 2018b' (even if you use a newer version). If you get a compatibility error, then try Matlab_ver = 'Matlab 2019b'.

You can change the model_name and image_name variables to choose among our trained models and to change the input image filename, respectively. You can test any of the trained models located in the models directory. Each model was trained using different camera sensors, as discussed in our paper. Each model is named after the validation set used during training (for example, the model trained_model_wo_CUBE+_CanonEOS550D.mat was trained using all linear raw-RGB images from the NUS and Gehler-Shi datasets, without including any examples from the Canon EOS 550D camera in the Cube/Cube+ datasets).
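As a sketch, a typical demo configuration might look like the following. The variable names Matlab_ver, model_name, and image_name come from demo.m; the exact option strings and file names accepted by the script should be checked against the shipped code:

```matlab
% In demo.m (sketch; verify option strings against the shipped script)
Matlab_ver = 'Matlab 2018b';   % try 'Matlab 2019b' only if a compatibility error occurs
model_name = 'trained_model_wo_CUBE+_CanonEOS550D';  % any model in the models directory
image_name = 'input.png';      % uint16 raw-RGB PNG after black/saturation normalization
```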

The input image file must contain the image's raw-RGB values after black/saturation-level normalization. This is very important, since all trained networks expect uint16 input images with the black/saturation-level normalization already applied.
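A minimal sketch of that normalization, assuming you know your camera's black and saturation levels (the level values below are illustrative, not from the paper):

```matlab
% Black/saturation-level normalization of a raw image, then saving as
% uint16 PNG as the trained networks expect (illustrative levels).
raw = double(imread('raw_image.png'));   % unnormalized raw-RGB image
black_level = 2048;                      % camera-specific; illustrative value
saturation  = 15600;                     % camera-specific; illustrative value
normalized = (raw - black_level) ./ (saturation - black_level);
normalized = max(min(normalized, 1), 0); % clip to [0, 1]
img = uint16(normalized * 65535);        % map to the full uint16 range
imwrite(img, 'input.png');
```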

FAQ

Can I use it to correct sRGB-rendered JPEG images?

No. Our method works with linear raw-RGB images, not camera-rendered images. To correct your sRGB-rendered images, see When Color Constancy Goes Wrong: Correcting Improperly White-Balanced Images, CVPR'19, for white balancing sRGB-rendered images (an online demo is provided).

Can I test images captured by camera sensors different than the camera sensors used for training (i.e., NUS, Gehler-Shi, and Cube/Cube+ datasets)?

Yes. Our work was proposed precisely to reduce the differences between camera sensor responses (this sensor dependence is why almost all learning-based illuminant estimation models cannot generalize well to new camera sensors; see On Finding Gray Pixels, CVPR'19, for interesting experiments that highlight this point). Our method, in contrast, is designed to work independently of the camera sensor. Read our paper for more details.

How to report results of the trained models using a new set of raw-RGB images?

First, make sure that all testing images are in the linear raw-RGB space and that the black-level/saturation normalization has been applied correctly. The input images should be stored as uint16 PNG files after the normalization. Then, you can use any of our trained models for testing. You can report the results of a single model, or the best, mean, and worst results obtained across all models. An example Matlab script for evaluating a set of testing images is provided in evaluate_images.m. You can also run all trained models and report the averaged illuminant vectors for evaluation (i.e., an ensemble model).
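The standard metric for this task is the recovery angular error between estimated and ground-truth illuminant vectors; a minimal sketch is below (evaluate_images.m implements the full evaluation; the illuminant values here are illustrative):

```matlab
% Angular error (in degrees) between estimated and ground-truth illuminants
est = [0.55; 0.75; 0.45];   % illustrative estimated illuminant (R, G, B)
gt  = [0.50; 0.80; 0.40];   % illustrative ground-truth illuminant
cos_angle = dot(est, gt) / (norm(est) * norm(gt));
ang_err = acosd(min(max(cos_angle, -1), 1));   % clamp for numerical safety
```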

Why does the demo show faint colors compared to what is shown in the paper?

In the given demo, we show raw-RGB images after white balancing them and scaling them up by a constant factor to aid visualization. In the paper, we used the full camera pipeline of A Software Platform for Manipulating the Camera Imaging Pipeline, ECCV'16, to render images to the sRGB space with our estimated illuminant vector.
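A sketch of that visualization step: each channel is divided by the estimated illuminant, then the result is brightened by a constant factor for display (no camera pipeline; the illuminant and scale factor below are illustrative):

```matlab
% White balance a linear raw-RGB image with an estimated illuminant,
% then scale by a constant for visualization only.
I = double(imread('input.png')) / 65535;   % normalized linear raw-RGB
ill = [0.55, 0.75, 0.45];                  % illustrative estimated illuminant
ill = ill ./ norm(ill);
wb = zeros(size(I));
for c = 1:3
    wb(:,:,c) = I(:,:,c) ./ ill(c);        % per-channel correction
end
wb = min(wb * 1.5, 1);                     % illustrative scale factor; clip to [0, 1]
imshow(wb);
```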

How to integrate the RGB-uv histogram block into my network?

For Matlab 2018b and 2019a, please check the examples given in RGBuvHistBlock/add_RGB_uv_hist.m. If you use the RGB-uv histogram block with sRGB-rendered images (e.g., JPEG images), you may need to tune the initialization of the scale and fall-off parameters for better results, as the current initialization was chosen for linear raw-RGB images. To tune these parameters, change the initialization of the scale parameter C in scaleLayer.m (line 39) and the fall-off factor sigma in ExponentialKernelLayer.m (line 43); both files are located in the RGBuvHistBlock directory. For debugging, use the predict function in histOutLayer.m. For Matlab 2019b or higher (recommended), check the RGBuvHistBlock.m code in the RGBuvHistBlock directory to tune the scale/fall-off parameters.
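For intuition, a simplified RGB-uv histogram computed outside the network is sketched below. This is not the learnable block itself (which applies the scale C and fall-off sigma mentioned above); the bin edges and normalization here are illustrative assumptions:

```matlab
% Simplified RGB-uv chroma histogram of a linear raw-RGB image
% (illustrative parameters; the in-network block learns C and sigma).
I = double(imread('input.png')) / 65535;
I = reshape(I, [], 3);
I = I(all(I > 0, 2), :);                 % drop zero pixels before taking logs
u = log(I(:,1) ./ I(:,2));               % u = log(R/G)
v = log(I(:,1) ./ I(:,3));               % v = log(R/B)
edges = linspace(-3, 3, 61);             % illustrative bin edges
H = histcounts2(u, v, edges, edges);
H = H / sum(H(:));                       % normalize to a distribution
```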

Project page

Publication

If you use this code, please cite our paper:

Mahmoud Afifi and Michael S. Brown, Sensor Independent Illumination Estimation for DNN Models, British Machine Vision Conference (BMVC), 2019.

@inproceedings{afifi2019SIIE,
  title={Sensor-Independent Illumination Estimation for DNN Models},
  author={Afifi, Mahmoud and Brown, Michael S},
  booktitle={British Machine Vision Conference (BMVC)},
  pages={},
  year={2019}
}

Commercial Use

This software is provided for research purposes only. A license must be obtained for any commercial application.
