diegovalsesia / deepsum

Licence: other
DeepSUM: Deep neural network for Super-resolution of Unregistered Multitemporal images (ESA PROBA-V challenge)

Programming Languages

Jupyter Notebook
Python

Projects that are alternatives to or similar to deepsum

s5p-tools
Python scripts to download and preprocess air pollution concentration data acquired by the Sentinel-5P mission
Stars: ✭ 49 (+25.64%)
Mutual labels:  remote-sensing, esa
RAMS
Official TensorFlow code for paper "Multi-Image Super Resolution of Remotely Sensed Images Using Residual Attention Deep Neural Networks".
Stars: ✭ 55 (+41.03%)
Mutual labels:  remote-sensing, super-resolution
gsky
Distributed Scalable Geospatial Data Server
Stars: ✭ 23 (-41.03%)
Mutual labels:  remote-sensing
SAR2SAR
SAR2SAR: a self-supervised despeckling algorithm for SAR images - Notebook implementation usable on Google Colaboratory
Stars: ✭ 23 (-41.03%)
Mutual labels:  remote-sensing
solar-panel-segmentation
A U-Net for solar panel identification and segmentation
Stars: ✭ 25 (-35.9%)
Mutual labels:  remote-sensing
ee extra
A ninja python package that unifies the Google Earth Engine ecosystem.
Stars: ✭ 42 (+7.69%)
Mutual labels:  remote-sensing
deepwatermap
A deep model that segments water in multispectral images
Stars: ✭ 81 (+107.69%)
Mutual labels:  remote-sensing
Kujaku
Slack app that unfurls esa.io URLs
Stars: ✭ 22 (-43.59%)
Mutual labels:  esa
Super resolution Survey
A survey of recent application of deep learning on super-resolution tasks
Stars: ✭ 32 (-17.95%)
Mutual labels:  super-resolution
sarbian
We’ve built a plug-and-play operating system (based on Debian Linux) with all the freely and openly available SAR processing software preinstalled. No knowledge of installation steps is needed: just download it and get started with SAR data processing. SARbian is free for use in research, education, or operational work.
Stars: ✭ 49 (+25.64%)
Mutual labels:  remote-sensing
Super-Resolution-Meta-Attention-Networks
Open source single image super-resolution toolbox containing various functionality for training a diverse number of state-of-the-art super-resolution models. Also acts as the companion code for the IEEE signal processing letters paper titled 'Improving Super-Resolution Performance using Meta-Attention Layers’.
Stars: ✭ 17 (-56.41%)
Mutual labels:  super-resolution
ChangeOS
ChangeOS: Building damage assessment via Deep Object-based Semantic Change Detection - (RSE 2021)
Stars: ✭ 33 (-15.38%)
Mutual labels:  remote-sensing
cate-desktop
Desktop GUI for the ESA CCI Toolbox (Cate)
Stars: ✭ 15 (-61.54%)
Mutual labels:  esa
dfc2020 baseline
Simple Baseline for the IEEE GRSS Data Fusion Contest 2020
Stars: ✭ 44 (+12.82%)
Mutual labels:  remote-sensing
Psychic-CCTV
A video analysis tool built completely in python.
Stars: ✭ 21 (-46.15%)
Mutual labels:  super-resolution
forestools
Tools for detecting deforestation and forest degradation
Stars: ✭ 22 (-43.59%)
Mutual labels:  remote-sensing
speckle2void
Speckle2Void: Deep Self-Supervised SAR Despeckling with Blind-Spot Convolutional Neural Networks
Stars: ✭ 31 (-20.51%)
Mutual labels:  remote-sensing
cars
CARS is a dedicated and open source 3D tool to produce Digital Surface Models from satellite imaging by photogrammetry.
Stars: ✭ 147 (+276.92%)
Mutual labels:  remote-sensing
explicit-semantic-analysis
Wikipedia-based Explicit Semantic Analysis, as described by Gabrilovich and Markovitch
Stars: ✭ 34 (-12.82%)
Mutual labels:  esa
CF-Net
Official repository of "Deep Coupled Feedback Network for Joint Exposure Fusion and Image Super-Resolution"
Stars: ✭ 55 (+41.03%)
Mutual labels:  super-resolution

DeepSUM: Deep neural network for Super-resolution of Unregistered Multitemporal images

DeepSUM is a novel Multi-Image Super-Resolution (MISR) deep neural network that exploits both spatial and temporal correlations to recover a single high-resolution image from multiple unregistered low-resolution images.
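To make the MISR setting concrete, here is a deliberately naive fusion baseline (not DeepSUM's method): replicate-upsample each low-resolution view, then average across views. It assumes the views are already registered, which real PROBA-V imagery is not; handling that misregistration is exactly what DeepSUM addresses. All names and shapes below are illustrative.

```python
import numpy as np

def naive_misr_baseline(lr_stack, scale=3):
    """Crude multi-image fusion baseline: upsample each low-resolution
    view by pixel replication, then average them across time."""
    up = np.stack([np.kron(img, np.ones((scale, scale))) for img in lr_stack])
    return up.mean(axis=0)

# Toy example: 9 LR views of one scene, PROBA-V-like 3x upscaling
lr = np.random.default_rng(0).random((9, 128, 128))
sr = naive_misr_baseline(lr)
print(sr.shape)  # (384, 384)
```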

This repository contains a Python/TensorFlow implementation of DeepSUM, trained and tested on the PROBA-V dataset provided by ESA’s Advanced Concepts Team in the context of the Kelvin competition.

DeepSUM is the winner of the PROBA-V SR challenge.

BibTeX reference:

@article{molini2019deepsum,
  title={DeepSUM: Deep neural network for Super-resolution of Unregistered Multitemporal images},
  author={Molini, Andrea Bordone and Valsesia, Diego and Fracastoro, Giulia and Magli, Enrico},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  volume={58},
  number={5},
  pages={3644--3656},
  year={2020},
  publisher={IEEE}
}

Setup to get started

Make sure you have Python 3 and all the required Python packages installed:

pip install -r requirements.txt

Load data from Kelvin Competition and create the training set and the validation set

  • Download the PROBA-V dataset from the Kelvin Competition and save it under ./dataset_creation/probav_data
  • Load the dataset from the directories and save it to pickles by running the Save_dataset_pickles.ipynb notebook
  • Run the Create_dataset.ipynb notebook to create the training and validation datasets for both the NIR and RED bands
  • To save RAM, we advise extracting the best 9 images based on the masks: run the Save_best9_from_dataset.ipynb notebook after Create_dataset.ipynb. Depending on the dataset you want to use (full or best 9), change the 'full' parameter in the config file.
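The best-9 extraction step above can be sketched roughly as follows; the function and variable names here are hypothetical, and the actual notebook may rank images differently.

```python
import numpy as np

def select_best_k(lr_images, masks, k=9):
    """Rank the low-resolution views of one imageset by clear-pixel
    coverage in their quality masks and keep the top k.

    lr_images: (T, H, W) array of LR acquisitions for one imageset
    masks:     (T, H, W) boolean array, True where the pixel is clear
    """
    coverage = masks.reshape(masks.shape[0], -1).mean(axis=1)  # clear fraction per view
    best = np.argsort(coverage)[::-1][:k]                      # indices of the k clearest views
    return lr_images[best], masks[best]

# Toy example: 12 random views, keep the 9 with the clearest masks
rng = np.random.default_rng(0)
imgs = rng.random((12, 128, 128))
msks = rng.random((12, 128, 128)) > 0.2
top_imgs, top_msks = select_best_k(imgs, msks, k=9)
print(top_imgs.shape)  # (9, 128, 128)
```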

Usage

Place your configuration in config_files/ before starting to train the model:

"lr": learning rate,
"batch_size": batch size,
"skip_step": validation frequency,
"dataset_path": directory with the training and validation sets created by Create_dataset.ipynb,
"n_chunks": number of pickles into which the training set is divided,
"channels": number of channels of the input images,
"T_in": number of images per scene,
"R": upscale factor,
"full": use the full dataset with all images, or only the best 9 per imageset,
"patch_size_HR": size of the input images,
"border": border size accounting for shifts in the loss and PSNR computation,
"spectral_band": NIR or RED,
"RegNet_pretrain_dir": directory with the RegNet pretraining checkpoint,
"SISRNet_pretrain_dir": directory with the SISRNet pretraining checkpoint
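Putting these fields together, a config file might look like the following. The values and paths are illustrative placeholders, not the settings used for the challenge:

```json
{
    "lr": 5e-06,
    "batch_size": 8,
    "skip_step": 5,
    "dataset_path": "./dataset_creation/",
    "n_chunks": 10,
    "channels": 1,
    "T_in": 9,
    "R": 3,
    "full": false,
    "patch_size_HR": 384,
    "border": 3,
    "spectral_band": "NIR",
    "RegNet_pretrain_dir": "./pretraining_checkpoints/RegNet/",
    "SISRNet_pretrain_dir": "./pretraining_checkpoints/SISRNet/"
}
```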

Run DeepSUM_train.ipynb to train a MISR model on the training dataset just generated. If a tensorboard_dir directory is found in checkpoints/, training will resume from the latest checkpoint; otherwise the RegNet and SISRNet weights are initialized from the checkpoints in the pretraining_checkpoints/ directory. These weights come from the pretraining procedure described in the DeepSUM paper.
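The checkpoint-resolution rule just described can be sketched as a small helper (illustrative only; the notebook's actual logic and names may differ):

```python
import os
import tempfile

def resolve_init(checkpoints_dir, tensorboard_dir, pretrain_dirs):
    """If a run directory already exists under checkpoints/, resume from it;
    otherwise fall back to the RegNet/SISRNet pretraining checkpoints."""
    run_dir = os.path.join(checkpoints_dir, tensorboard_dir)
    if os.path.isdir(run_dir):
        return "resume", run_dir            # continue from the latest checkpoint
    return "pretrain", pretrain_dirs        # init RegNet/SISRNet from pretraining

# Demo with a throwaway directory standing in for checkpoints/
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "run_nir"))
mode, _ = resolve_init(root, "run_nir", ["pretraining_checkpoints/"])
print(mode)  # resume
mode, _ = resolve_init(root, "run_red", ["pretraining_checkpoints/"])
print(mode)  # pretrain
```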

Challenge checkpoints

DeepSUM has been trained for both the NIR and RED bands. The checkpoints directory contains the final weights used to produce the super-resolved test images for the final ESA challenge submission:

DeepSUM_NIR_lr_5e-06_bsize_8

DeepSUM_NIRpretraining_RED_lr_5e-06_bsize_8

Validation

During training, only the best 9 images of each imageset are considered for the score. After training is complete, you can compute a final evaluation on the validation set that also exploits the other images available in each imageset. To do so, run Sliding_window_evaluation.ipynb.

Testing

  • Run the Create_testset.ipynb notebook under dataset_creation/ to create the dataset with the test LR images
  • To test the trained model on new LR images and obtain the corresponding super-resolved images, run DeepSUM_superresolve_testdata.ipynb.

Authors & Contacts

DeepSUM is based on work by team SuperPip from the Image Processing and Learning group of Politecnico di Torino: Andrea Bordone Molini (andrea.bordone AT polito.it), Diego Valsesia (diego.valsesia AT polito.it), Giulia Fracastoro (giulia.fracastoro AT polito.it), Enrico Magli (enrico.magli AT polito.it).

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].