amnesiack / ICIP2018CDM

License: Apache-2.0
The ICIP2018 paper "Color Image Demosaicking using a 3-stage Convolutional Neural Network Structure"



Demo code for the paper "Color Image Demosaicking using a 3-stage Convolutional Neural Network Structure"

K. Cui, Z. Jin, E. Steinbach, "Color Image Demosaicking using a 3-stage Convolutional Neural Network Structure," IEEE International Conference on Image Processing (ICIP 2018), Athens, Greece, October 2018. DOI: 10.1109/ICIP.2018.8451020

Update

Added the TensorFlow implementation and a pretrained model.

  • Dependencies:

    • Python 3
    • TensorFlow 1.XX (1.10 or newer)
    • NumPy
    • Pillow
    • NVIDIA GPU + CUDA (if running in GPU mode)
  • Dataset:

    • You need to download the testing datasets before running the demo for the different tasks; we summarize the datasets here. Unzip them and put them into the data folder. If you use your own dataset, please follow the readme in the data folder to organize it.
  • Usage:

    • Run python main_py3_tfrecord.py to test on the Kodak dataset.
    • To test other datasets, add --test_set NAME, e.g., python main_py3_tfrecord.py --test_set McM.
    • Ensemble testing is also supported: run python main_py3_tfrecord.py --phase ensemble.
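Demosaicking results are usually reported as CPSNR (color PSNR, with the mean squared error pooled over all pixels and channels). Whether the demo script prints exactly this metric is not confirmed here; a minimal sketch for 8-bit images, with `cpsnr` as an illustrative helper name, could look like:

```python
import numpy as np

def cpsnr(ref, rec, peak=255.0):
    """Color PSNR: MSE pooled over all pixels and all three channels."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a reconstruction that is off by a constant 16 levels everywhere
ref = np.zeros((8, 8, 3), dtype=np.uint8)
rec = np.full((8, 8, 3), 16, dtype=np.uint8)
print(round(cpsnr(ref, rec), 2))  # 24.05
```

Pooling the MSE over channels (rather than averaging per-channel PSNRs) is the convention in the demosaicking literature.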

Original MatConvNet Implementation

  1. Download the MatConvNet toolbox from http://www.vlfeat.org/matconvnet/ and install it following the instructions on their website.
  2. Go to the folder ./MatConvnet_implementation.
  3. Copy the ./MatConvnet_implementation folder into ./Matconvnet-1.0-beta2X/examples/.
  4. Copy the customized layer functions vl_nnsplit.m and vl_nnsplit_new.m from ./customized_layers/ to ./Matconvnet-1.0-beta2X/matlab/, and copy Split.m and Split_new.m from ./customized_layers/ to ./Matconvnet-1.0-beta2X/matlab/+dagnn/.
  5. The script test_CDMNet.m is a demo for testing with the trained model stored in ./model/CNNCDM.mat.
  6. To train the network yourself: the training dataset is the Waterloo Exploration Database. Download it from https://ece.uwaterloo.ca/~k29ma/exploration/ and put all the images into ./pristine_images/. Then run mosaicked_image_generation to generate the bilinear initial CDM input of the network, and run train_CDMNet_MSE for training.
  7. Please read our paper for more details!
  8. Have fun!
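The mosaicked_image_generation step above produces the bilinear initial input of the network; the repo's MATLAB code is authoritative, but the idea can be sketched in NumPy. Here bayer_masks, conv2_same, and bilinear_init are illustrative names, and an RGGB CFA layout is assumed:

```python
import numpy as np

def bayer_masks(h, w):
    """Per-channel sampling masks for an RGGB Bayer CFA (assumed pattern)."""
    m = np.zeros((h, w, 3))
    m[0::2, 0::2, 0] = 1  # R at even rows, even cols
    m[0::2, 1::2, 1] = 1  # G at even rows, odd cols
    m[1::2, 0::2, 1] = 1  # G at odd rows, even cols
    m[1::2, 1::2, 2] = 1  # B at odd rows, odd cols
    return m

def conv2_same(x, k):
    """2-D correlation with zero padding, 'same' output size."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def bilinear_init(rgb):
    """Mosaic an RGB image with the CFA, then fill each channel's
    missing samples by normalized bilinear interpolation."""
    h, w, _ = rgb.shape
    masks = bayer_masks(h, w)
    mosaic = rgb.astype(np.float64) * masks
    kern = np.array([[1., 2., 1.],
                     [2., 4., 2.],
                     [1., 2., 1.]])  # separable bilinear (tent) kernel
    out = np.empty_like(mosaic)
    for c in range(3):
        num = conv2_same(mosaic[:, :, c], kern)
        den = conv2_same(masks[:, :, c], kern)
        # keep observed samples, interpolate only the missing ones
        out[:, :, c] = np.where(masks[:, :, c] > 0,
                                mosaic[:, :, c], num / den)
    return out
```

Normalizing by the convolved mask (den) means border pixels are averaged over the samples that actually fall inside the image, rather than being darkened by the zero padding.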

@INPROCEEDINGS{LMT2018-1279,
  author    = {Kai Cui and Zhi Jin and Eckehard Steinbach},
  title     = {Color Image Demosaicking using a 3-stage Convolutional Neural Network Structure},
  booktitle = {{IEEE} International Conference on Image Processing ({ICIP} 2018)},
  month     = {Oct},
  year      = {2018},
  address   = {Athens, Greece}
}

Maintainer:

@Kai Cui ([email protected])
Lehrstuhl fuer Medientechnik (LMT)
Technische Universitaet Muenchen (TUM)
Last modified 06.02.2021


License

This project is released under the Apache 2.0 license.
