
ytZhang99 / CF-Net

Licence: other
Official repository of "Deep Coupled Feedback Network for Joint Exposure Fusion and Image Super-Resolution"

Programming Languages

python
139335 projects - #7 most used programming language
matlab
3953 projects

Projects that are alternatives to or similar to CF-Net

DAN
This is an official implementation of Unfolding the Alternating Optimization for Blind Super Resolution
Stars: ✭ 196 (+256.36%)
Mutual labels:  super-resolution
EmiyaEngine
As long as it harbors the will to become the genuine article, a fake can be even more real than the real thing.
Stars: ✭ 27 (-50.91%)
Mutual labels:  super-resolution
ECBSR
Edge-oriented Convolution Block for Real-time Super Resolution on Mobile Devices, ACM Multimedia 2021
Stars: ✭ 216 (+292.73%)
Mutual labels:  super-resolution
picasso
A collection of tools for painting super-resolution images
Stars: ✭ 77 (+40%)
Mutual labels:  super-resolution
libsrcnn
Super-Resolution imaging with Convolutional Neural Network library for G++, Non-OpenCV model.
Stars: ✭ 14 (-74.55%)
Mutual labels:  super-resolution
SR Framework
A generic framework which implements some famous super-resolution models
Stars: ✭ 54 (-1.82%)
Mutual labels:  super-resolution
tf-perceptual-eusr
A TensorFlow-based image super-resolution model considering both quantitative and perceptual quality
Stars: ✭ 44 (-20%)
Mutual labels:  super-resolution
Super-Resolution-Meta-Attention-Networks
Open source single image super-resolution toolbox containing various functionality for training a diverse number of state-of-the-art super-resolution models. Also acts as the companion code for the IEEE signal processing letters paper titled 'Improving Super-Resolution Performance using Meta-Attention Layers’.
Stars: ✭ 17 (-69.09%)
Mutual labels:  super-resolution
FISR
Official repository of FISR (AAAI 2020).
Stars: ✭ 72 (+30.91%)
Mutual labels:  super-resolution
PNG-Upscale
AI Super-Resolution
Stars: ✭ 116 (+110.91%)
Mutual labels:  super-resolution
EGVSR
Efficient & Generic Video Super-Resolution
Stars: ✭ 774 (+1307.27%)
Mutual labels:  super-resolution
sparse-deconv-py
Official Python implementation of 'Sparse deconvolution', v0.3.0
Stars: ✭ 18 (-67.27%)
Mutual labels:  super-resolution
tf-bsrn-sr
Official implementation of block state-based recursive network (BSRN) for super-resolution in TensorFlow
Stars: ✭ 28 (-49.09%)
Mutual labels:  super-resolution
ImSwitch
ImSwitch is a software solution in Python that aims at generalizing microscope control by providing a solution for flexible control of multiple microscope modalities.
Stars: ✭ 43 (-21.82%)
Mutual labels:  super-resolution
Psychic-CCTV
A video analysis tool built completely in python.
Stars: ✭ 21 (-61.82%)
Mutual labels:  super-resolution
LFSSR-SAS-PyTorch
Repository for "Light Field Spatial Super-resolution Using Deep Efficient Spatial-Angular Separable Convolution" , TIP 2018
Stars: ✭ 22 (-60%)
Mutual labels:  super-resolution
TC-YoukuVSRE
Personal code for the Tianchi 2019 Alibaba Youku video enhancement and super-resolution challenge, using three models: EDVR, WDSR, and ESRGAN.
Stars: ✭ 41 (-25.45%)
Mutual labels:  super-resolution
Super resolution Survey
A survey of recent applications of deep learning to super-resolution tasks
Stars: ✭ 32 (-41.82%)
Mutual labels:  super-resolution
EDVR Keras
Keras implementation of EDVR: Video Restoration with Enhanced Deformable Convolutional Networks
Stars: ✭ 35 (-36.36%)
Mutual labels:  super-resolution
SRGAN-PyTorch
An Unofficial PyTorch Implementation for Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
Stars: ✭ 52 (-5.45%)
Mutual labels:  super-resolution

CF-Net : Deep Coupled Feedback Network for Joint Exposure Fusion and Image Super-Resolution

  • This is the official repository of the paper "Deep Coupled Feedback Network for Joint Exposure Fusion and Image Super-Resolution", published in IEEE Transactions on Image Processing, 2021. [Paper Link][PDF Link]
  • We have given a live-streamed presentation on the Extreme Mart platform; the PowerPoint slides can be downloaded from [PPT Link].

Framework of CF-Net (figure)

1. Environment

  • Python >= 3.5
  • PyTorch >= 0.4.1 is recommended
  • opencv-python
  • pytorch-msssim
  • tqdm
  • MATLAB
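
For reference, the Python dependencies above can typically be installed with pip, as in the rough sketch below; the exact PyTorch package depends on your CUDA setup (follow the official PyTorch instructions if in doubt), and MATLAB must be installed separately.

    pip install torch opencv-python pytorch-msssim tqdm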

2. Dataset

The training and testing data are from the [SICE dataset]. Alternatively, you can download the datasets from our [Google Drive Link].

3. Test

  1. Clone this repository:
    git clone https://github.com/ytZhang99/CF-Net.git
    
  2. Place the low-resolution over-exposed images and under-exposed images in dataset/test_data/lr_over and dataset/test_data/lr_under, respectively.
    dataset 
    └── test_data
        ├── lr_over
        └── lr_under
    
  3. Run one of the following commands for ×2 or ×4 super-resolution and exposure fusion:
    python main.py --test_only --scale 2 --model model_x2.pth
    python main.py --test_only --scale 4 --model model_x4.pth
    
  4. Finally, you can find the super-resolved and fused results in ./test_results.
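
Optionally, if you have ground-truth HR images, you can sanity-check an output against them using the dependencies listed in Section 1. This is a minimal sketch, not part of the repository; the file paths are placeholders, and both images must have the same size.

    # eval_example.py : hypothetical helper, not part of CF-Net
    import cv2
    import torch
    from pytorch_msssim import ssim

    gt = cv2.imread('dataset/val_data/gt/0001.png')   # ground-truth HR image (placeholder path)
    out = cv2.imread('test_results/0001.png')          # super-resolved, fused output (placeholder path)

    psnr = cv2.PSNR(gt, out)                           # PSNR in dB for 8-bit images

    # pytorch-msssim expects NCHW float tensors
    to_tensor = lambda im: torch.from_numpy(im).permute(2, 0, 1).unsqueeze(0).float()
    ssim_val = ssim(to_tensor(gt), to_tensor(out), data_range=255, size_average=True).item()

    print('PSNR: %.2f dB, SSIM: %.4f' % (psnr, ssim_val))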

4. Training

Preparing training and validation data

  1. Place the HR ground-truth, HR over-exposed, and HR under-exposed training images in the following directories, respectively. (Optional) Validation data can also be placed in dataset/val_data.
    dataset 
    ├── train_data
    |   ├── hr
    |   ├── hr_over
    |   └── hr_under
    └── val_data
        ├── gt
        ├── lr_over
        └── lr_under
    
  2. Open the Prepare_Data_HR_LR.m file and modify the following lines according to your training configuration (a Python sketch of this step is given after this list):
    Line 5 or 6 : scale = 2 or 4
    Line 9 : whether to use off-line data augmentation (default = True)
    [Line 12 <-> Line 17] or [Line 13 <-> Line 18] : produce [lr_over/lr_under] images from the [hr_over/hr_under] images
    
  3. After the above operations, dataset/train_data should be as follows:
    dataset
    └── train_data 
        ├── hr
        ├── hr_over
        ├── hr_under
        ├── lr_over
        └── lr_under
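
If you prefer not to use MATLAB, the following Python sketch does roughly the same job as the data-preparation step, assuming the LR images are obtained by bicubic downsampling of the HR images (the original MATLAB script may differ in interpolation and augmentation details; folder names follow the layout above):

    # prepare_data_example.py : hypothetical stand-in for Prepare_Data_HR_LR.m
    import os
    import cv2

    scale = 2  # set to 2 or 4, matching your training command
    pairs = [('dataset/train_data/hr_over',  'dataset/train_data/lr_over'),
             ('dataset/train_data/hr_under', 'dataset/train_data/lr_under')]

    for hr_dir, lr_dir in pairs:
        os.makedirs(lr_dir, exist_ok=True)
        for name in sorted(os.listdir(hr_dir)):
            hr = cv2.imread(os.path.join(hr_dir, name))
            h, w = hr.shape[:2]
            # crop so that height and width are divisible by the scale, then downsample
            hr = hr[:h - h % scale, :w - w % scale]
            lr = cv2.resize(hr, (hr.shape[1] // scale, hr.shape[0] // scale),
                            interpolation=cv2.INTER_CUBIC)
            cv2.imwrite(os.path.join(lr_dir, name), lr)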
    

Training

  1. Place the attached files dataset.py and train.py in the same directory as main.py.
  2. Run one of the following commands to train the network for scale = 2 or 4, according to the training data:
    python main.py --scale 2 --model my_model
    python main.py --scale 4 --model my_model
    
    If validation data is provided, run one of the following commands instead to obtain the best model best_ep.pth:
    python main.py --scale 2 --model my_model -v
    python main.py --scale 4 --model my_model -v
    
  3. The trained models are saved in the directory ./model/.
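
The trained model can then presumably be used for testing with the same interface as in Section 3, for example (assuming the best checkpoint was saved as best_ep.pth):

    python main.py --test_only --scale 2 --model best_ep.pth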

5. Citation

If you find our work useful in your research or publication, please cite our work:

@article{deng2021deep,
  title={Deep Coupled Feedback Network for Joint Exposure Fusion and Image Super-Resolution.},
  author={Deng, Xin and Zhang, Yutong and Xu, Mai and Gu, Shuhang and Duan, Yiping},
  journal={IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society},
  year={2021}
}

6. Contact

If you have any questions about our work or code, please email [email protected].
