mahmoudnafifi / C5

License: Apache-2.0 license
Reference code for the paper "Cross-Camera Convolutional Color Constancy" (ICCV 2021)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to C5

SIIE
Sensor-Independent Illumination Estimation for DNN Models (BMVC 2019)
Stars: ✭ 23 (-69.33%)
Mutual labels:  whitebalance, white-balance, illuminant-estimation, color-constancy
LSMI-dataset
Large Scale Multi-Illuminant (LSMI) Dataset for Developing White Balance Algorithm under Mixed Illumination
Stars: ✭ 34 (-54.67%)
Mutual labels:  whitebalance
Ensemble-of-Multi-Scale-CNN-for-Dermatoscopy-Classification
Fully supervised binary classification of skin lesions from dermatoscopic images using an ensemble of diverse CNN architectures (EfficientNet-B6, Inception-V3, SEResNeXt-101, SENet-154, DenseNet-169) with multi-scale input.
Stars: ✭ 25 (-66.67%)
Mutual labels:  color-constancy
multi-task-defocus-deblurring-dual-pixel-nimat
Reference github repository for the paper "Improving Single-Image Defocus Deblurring: How Dual-Pixel Images Help Through Multi-Task Learning". We propose a single-image deblurring network that incorporates the two sub-aperture views into a multitask framework. Specifically, we show that jointly learning to predict the two DP views from a single …
Stars: ✭ 29 (-61.33%)
Mutual labels:  computational-photography
hyperstyle
Official Implementation for "HyperStyle: StyleGAN Inversion with HyperNetworks for Real Image Editing" (CVPR 2022) https://arxiv.org/abs/2111.15666
Stars: ✭ 874 (+1065.33%)
Mutual labels:  hypernetworks
colorize-video
colorize video using publicly available neural-networks
Stars: ✭ 24 (-68%)
Mutual labels:  computational-photography
PlaneTR3D
[ICCV'21] PlaneTR: Structure-Guided Transformers for 3D Plane Recovery
Stars: ✭ 58 (-22.67%)
Mutual labels:  iccv2021
InstanceRefer
[ICCV 2021] InstanceRefer: Cooperative Holistic Understanding for Visual Grounding on Point Clouds through Instance Multi-level Contextual Referring
Stars: ✭ 64 (-14.67%)
Mutual labels:  iccv2021
LLVIP
LLVIP: A Visible-infrared Paired Dataset for Low-light Vision
Stars: ✭ 438 (+484%)
Mutual labels:  iccv2021
CURL
Code for the ICPR 2020 paper: "CURL: Neural Curve Layers for Image Enhancement"
Stars: ✭ 177 (+136%)
Mutual labels:  computational-photography
texturize
🤖🖌️ Generate photo-realistic textures based on source images. Remix, remake, mashup! Useful if you want to create variations on a theme or elaborate on an existing texture.
Stars: ✭ 495 (+560%)
Mutual labels:  computational-photography
hypnettorch
Package for working with hypernetworks in PyTorch.
Stars: ✭ 66 (-12%)
Mutual labels:  hypernetworks
G-SFDA
code for our ICCV 2021 paper 'Generalized Source-free Domain Adaptation'
Stars: ✭ 88 (+17.33%)
Mutual labels:  iccv2021
quasi-unsupervised-cc
Implementation of the method described in the paper "Quasi-unsupervised color constancy" - CVPR 2019
Stars: ✭ 35 (-53.33%)
Mutual labels:  color-constancy
MSRGCN
Official implementation of MSR-GCN (ICCV2021 paper)
Stars: ✭ 42 (-44%)
Mutual labels:  iccv2021
Awesome-ICCV2021-Low-Level-Vision
A Collection of Papers and Codes for ICCV2021 Low Level Vision and Image Generation
Stars: ✭ 163 (+117.33%)
Mutual labels:  iccv2021
recurrent-defocus-deblurring-synth-dual-pixel
Reference github repository for the paper "Learning to Reduce Defocus Blur by Realistically Modeling Dual-Pixel Data". We propose a procedure to generate realistic DP data synthetically. Our synthesis approach mimics the optical image formation found on DP sensors and can be applied to virtual scenes rendered with standard computer software. Lev…
Stars: ✭ 30 (-60%)
Mutual labels:  computational-photography
flow1d
[ICCV 2021 Oral] High-Resolution Optical Flow from 1D Attention and Correlation
Stars: ✭ 91 (+21.33%)
Mutual labels:  iccv2021
Deep-Matching-Prior
Official implementation of deep matching prior
Stars: ✭ 21 (-72%)
Mutual labels:  iccv2021
SnowflakeNet
(TPAMI 2022) Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer
Stars: ✭ 74 (-1.33%)
Mutual labels:  iccv2021

Cross-Camera Convolutional Color Constancy, ICCV 2021 (Oral)

Mahmoud Afifi (1,2), Jonathan T. Barron (2), Chloe LeGendre (2), Yun-Ta Tsai (2), and Francois Bleibel (2)

(1) York University   (2) Google Research

Paper | Poster | PPT | Video

[Figure: C5_teaser]

Reference code for the paper Cross-Camera Convolutional Color Constancy. Mahmoud Afifi, Jonathan T. Barron, Chloe LeGendre, Yun-Ta Tsai, and Francois Bleibel. In ICCV, 2021. If you use this code, please cite our paper:

@InProceedings{C5,
  title={Cross-Camera Convolutional Color Constancy},
  author={Afifi, Mahmoud and Barron, Jonathan T and LeGendre, Chloe and Tsai, Yun-Ta and Bleibel, Francois},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  year={2021}
}

[Figure: C5_figure]

Code

Prerequisite

  • PyTorch
  • opencv-python
  • tqdm
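
A minimal environment setup might look like the following; the package names are the standard PyPI names and no versions are pinned here, so treat this as a sketch rather than the repository's exact requirements:

pip install torch opencv-python tqdm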

Training

To train C5, training/validation data should have the following formatting:

- train_folder/
       | image1_sensorname_camera1.png
       | image1_sensorname_camera1_metadata.json
       | image2_sensorname_camera1.png
       | image2_sensorname_camera1_metadata.json
       ...
       | image1_sensorname_camera2.png
       | image1_sensorname_camera2_metadata.json
       ...

In src/ops.py, the function add_camera_name(dataset_dir) can be used to rename image filenames and corresponding ground-truth JSON files. Each JSON file should include a key named either illuminant_color_raw or gt_ill that has the ground-truth illuminant color of the corresponding image.
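
For illustration, a metadata file could be written as in the following sketch; the illuminant values are placeholders, and the filename simply follows the naming convention shown above:

import json

# Placeholder ground-truth illuminant color (R, G, B); replace with the
# actual measurement for the corresponding image.
metadata = {'gt_ill': [0.55, 0.75, 0.37]}

with open('train_folder/image1_sensorname_camera1_metadata.json', 'w') as f:
    json.dump(metadata, f)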

The training code is given in train.py. The following parameters are required to set the model configuration and training data information; an example command is shown after the list.

  • --data-num: the number of images used for each inference (additional images + the input query image); referred to in the main paper as m.
  • --input-size: number of histogram bins.
  • --learn-G: to use a G multiplier as explained in the paper.
  • --training-dir-in: training image directory.
  • --validation-dir-in: validation image directory; when this variable is None (default), the validation set will be taken from the training data based on the --validation-ratio.
  • --validation-ratio: when --validation-dir-in is None, this argument determines the validation set ratio of the image set in --training-dir-in directory.
  • --augmentation-dir: directory (or directories) of augmentation data (optional).
  • --model-name: name of the trained model.
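
For example, a training run with only the required arguments might look like the following (the directory path and model name are illustrative; --data-num 7 and --input-size 64 match the C5_m_7_h_64 configuration used in the testing examples below):

python train.py --training-dir-in ./train_folder --data-num 7 --input-size 64 --model-name C5_example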

The following parameters control training settings and hyperparameters (an example using a few of them follows the list):

  • --epochs: number of training epochs.
  • --batch-size: batch size.
  • --load-hist: to load histograms if pre-computed (recommended).
  • --optimizer: optimization algorithm for stochastic gradient descent; options are Adam or SGD.
  • --learning-rate: learning rate.
  • --l2reg: L2 regularization factor.
  • --load: to load C5 model from a .pth file; default is False
  • --model-location: when --load is True, this variable should point to the full path of the .pth model file.
  • --validation-frequency: validation frequency (in epochs).
  • --cross-validation: to use three-fold cross-validation. When this variable is True, --validation-dir-in and --validation-ratio are ignored, and three-fold cross-validation is applied to the data provided in --training-dir-in.
  • --gpu: GPU device ID.
  • --smoothness-factor-*: smoothness loss factor of the following model components: F (conv filter), B (bias), G (multiplier layer). For example, --smoothness-factor-F can be used to set the smoothness loss for the conv filter.
  • --increasing-batch-size: for increasing batch size during training.
  • --grad-clip-value: gradient clipping value; if it's set to 0 (default), no clipping is applied.
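
These options extend the same command; for instance (the values below are illustrative, not the settings used in the paper):

python train.py --training-dir-in ./train_folder --data-num 7 --input-size 64 --model-name C5_example --epochs 60 --batch-size 16 --learning-rate 5e-4 --load-hist True --gpu 0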

Testing

To test a pre-trained C5 model, testing data should have the following formatting:

- test_folder/
       | image1_sensorname_camera1.png
       | image1_sensorname_camera1_metadata.json
       | image2_sensorname_camera1.png
       | image2_sensorname_camera1_metadata.json
       ...
       | image1_sensorname_camera2.png
       | image1_sensorname_camera2_metadata.json
       ...

The testing code is given in test.py. The following parameters are required to set the model configuration and testing data information.

  • --model-name: name of the trained model.
  • --data-num: the number of images used for each inference (additional images + the input query image); referred to in the main paper as m.
  • --input-size: number of histogram bins.
  • --g-multiplier: to use a G multiplier as explained in the paper.
  • --testing-dir-in: testing image directory.
  • --batch-size: batch size
  • --load-hist: to load histogram if pre-computed (recommended).
  • --multiple_test: to apply multiple tests (ten, as mentioned in the paper) and save their results.
  • --white-balance: to save white-balanced testing images.
  • --cross-validation: to use three-fold cross-validation. When set to True, three pre-trained models are expected, each saved with the fold number as a postfix. The testing image filenames should be listed in .npy files located in the folds directory, named after the dataset; this name should match the folder name given in --testing-dir-in (a hypothetical sketch of creating such a file follows this list).
  • --gpu: GPU device ID.
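
When preparing the folds yourself, a per-fold .npy file can be created with NumPy as sketched below; the path and filename are hypothetical, and the exact naming convention should follow the files shipped in the repository's folds directory:

import numpy as np

# Hypothetical fold file: the list of testing image filenames for one fold.
testing_files = ['image1_sensorname_camera1.png', 'image2_sensorname_camera1.png']
np.save('folds/MyDataset_fold1.npy', np.array(testing_files))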

In the images directory, there are a few examples captured by the Mobile Sony IMX135 from the INTEL-TAU dataset. To white balance these raw images, as shown in the figure below, using a C5 model (trained on DSLR cameras from the NUS and Gehler-Shi datasets), use the following command:

python test.py --testing-dir-in ./images --white-balance True --model-name C5_m_7_h_64

[Figure: c5_examples]

To test with the gain multiplier, use the following command:

python test.py --testing-dir-in ./images --white-balance True --g-multiplier True --model-name C5_m_7_h_64_w_G

Note that at test time, C5 does not require any metadata. The testing code uses the JSON files only to load the ground-truth illuminant colors for comparison with our estimated values.
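
For reference, applying an estimated illuminant to a linear raw image follows the standard von Kries-style correction: each channel is divided by the corresponding illuminant component. The sketch below is a generic illustration of this step, not the repository's own implementation:

import numpy as np

def apply_white_balance(raw_img, illuminant):
    # raw_img: linear raw image (H x W x 3, float in [0, 1]);
    # illuminant: estimated illuminant color as a length-3 (R, G, B) vector.
    ill = np.asarray(illuminant, dtype=np.float32)
    ill = ill / ill[1]  # normalize so the green channel is left unchanged
    corrected = raw_img / ill[None, None, :]
    return np.clip(corrected, 0.0, 1.0)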

Data augmentation

The raw-to-raw augmentation functions are provided in src/aug_ops.py. Call the set_sampling_params function to set the sampling parameters (e.g., excluding certain cameras/datasets from the source set, determining the number of augmented images, etc.). Then, call the map_raw_images function to generate a new augmentation set with the chosen parameters; a usage sketch follows the argument list below. The function map_raw_images takes four arguments:

  • xyz_img_dir: directory of XYZ images; you can download the CIE XYZ images from here. All images were transformed to the CIE XYZ space after applying the black-level normalization and masking out the calibration object (i.e., the color rendition chart or SpyderCUBE).
  • target_cameras: a list of one or more of the following camera models: Canon EOS 550D, Canon EOS 5D, Canon EOS-1DS, Canon EOS-1Ds Mark III, Fujifilm X-M1, Nikon D40, Nikon D5200, Olympus E-PL6, Panasonic DMC-GX1, Samsung NX2000, Sony SLT-A57, or All.
  • output_dir: output directory to save the augmented images and their metadata files.
  • params: sampling parameters set by the set_sampling_params function.
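
A typical call sequence might look like the sketch below; the keyword arguments passed to set_sampling_params are assumptions made for illustration (only the four map_raw_images arguments are documented above), so check the signatures in src/aug_ops.py before running:

from src.aug_ops import set_sampling_params, map_raw_images

# Keyword names for set_sampling_params are assumed for illustration.
params = set_sampling_params(images_number=5000, excluded_camera_models=['Canon EOS 5D'])

map_raw_images(xyz_img_dir='./xyz_images',
               target_cameras=['Sony SLT-A57', 'Nikon D5200'],
               output_dir='./augmented_data',
               params=params)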