
diegovalsesia / speckle2void

Licence: other
Speckle2Void: Deep Self-Supervised SAR Despeckling with Blind-Spot Convolutional Neural Networks

Programming Languages

Jupyter Notebook
11667 projects
python
139335 projects - #7 most used programming language
matlab
3953 projects

Projects that are alternatives of or similar to speckle2void

SAR2SAR
SAR2SAR: a self-supervised despeckling algorithm for SAR images - Notebook implementation usable on Google Colaboratory
Stars: ✭ 23 (-25.81%)
Mutual labels:  remote-sensing, synthetic-aperture-radar, denoising, despeckling
sarbian
We’ve built a plug-and-play operating system (based on Debian Linux) with all the freely and openly available SAR processing software. No knowledge of installation steps is needed; just download and get started with SAR data processing. SARbian is free for use in research, education or operational work.
Stars: ✭ 49 (+58.06%)
Mutual labels:  remote-sensing, sar, synthetic-aperture-radar
xarray-sentinel
Xarray backend to Copernicus Sentinel-1 satellite data products
Stars: ✭ 189 (+509.68%)
Mutual labels:  remote-sensing, sar, synthetic-aperture-radar
sarpy
A basic Python library to demonstrate reading, writing, display, and simple processing of complex SAR data using the NGA SICD standard.
Stars: ✭ 133 (+329.03%)
Mutual labels:  sar, synthetic-aperture-radar
sentinel-util
A CLI for downloading, processing, and making a mosaic from Sentinel-1, -2 and -3 data
Stars: ✭ 22 (-29.03%)
Mutual labels:  remote-sensing, sar
lightweight-temporal-attention-pytorch
A PyTorch implementation of the Light Temporal Attention Encoder (L-TAE) for satellite image time series classification.
Stars: ✭ 43 (+38.71%)
Mutual labels:  remote-sensing
wildfire-forecasting
Forecasting wildfire danger using deep learning.
Stars: ✭ 39 (+25.81%)
Mutual labels:  remote-sensing
awesome-speech-enhancement
A curated list of awesome Speech Enhancement papers, libraries, datasets, and other resources.
Stars: ✭ 48 (+54.84%)
Mutual labels:  denoising
RAT
RAT - Radar Tools: SAR image processing toolbox (discontinued)
Stars: ✭ 19 (-38.71%)
Mutual labels:  sar
awesome-spectral-indices
A ready-to-use curated list of Spectral Indices for Remote Sensing applications.
Stars: ✭ 357 (+1051.61%)
Mutual labels:  remote-sensing
whiteboxgui
An interactive GUI for WhiteboxTools in a Jupyter-based environment
Stars: ✭ 94 (+203.23%)
Mutual labels:  remote-sensing
Start maja
To process a Sentinel-2 time series with MAJA cloud detection and atmospheric correction processor
Stars: ✭ 47 (+51.61%)
Mutual labels:  remote-sensing
Python-for-Remote-Sensing
Python code for remote sensing applications will be uploaded here. I will try to teach everything I learn during my projects here.
Stars: ✭ 20 (-35.48%)
Mutual labels:  remote-sensing
sarjitsu
dockerized setup for visualizing System Activity Report (SAR) data.
Stars: ✭ 20 (-35.48%)
Mutual labels:  sar
deepriver
a deep-learning-based river centerline extraction model
Stars: ✭ 25 (-19.35%)
Mutual labels:  remote-sensing
satproc
🛰️ Python library and CLI tools for processing geospatial imagery for ML
Stars: ✭ 27 (-12.9%)
Mutual labels:  remote-sensing
ww tvol study
Process global-scale satellite and airborne elevation data into time series of glacier mass change: Hugonnet et al. (2021).
Stars: ✭ 26 (-16.13%)
Mutual labels:  remote-sensing
piradar
Radar using Red Pitaya for RF: using Raspberry Pi 3 for quad-core radar signal processing
Stars: ✭ 59 (+90.32%)
Mutual labels:  remote-sensing
eodag
Earth Observation Data Access Gateway
Stars: ✭ 183 (+490.32%)
Mutual labels:  remote-sensing
programming-for-gis-and-rs
Materials for the Intro to Programming for GIS and Remote Sensing Course that I teach at Saint Louis University. They include the updates I made for the spring 2020 and fall 2020 semesters.
Stars: ✭ 61 (+96.77%)
Mutual labels:  remote-sensing

Speckle2Void: Deep Self-Supervised SAR Despeckling with Blind-Spot Convolutional Neural Networks

Speckle2Void is a self-supervised Bayesian despeckling framework that enables direct training on real SAR images. It bypasses the problem of training a CNN on synthetically speckled optical images, thus avoiding any domain gap and enabling the network to learn features from real SAR images.

This repository contains the Python/TensorFlow implementation of Speckle2Void, trained and tested on the TerraSAR-X dataset provided by the ESA archive.
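
To make the blind-spot mechanism concrete, the following is a minimal, illustrative TensorFlow sketch of a shifted convolution, the building block of blind-spot networks. It is not the code of this repository: the function names, layer choices and sizes are placeholders.

import tensorflow as tf

def shifted_conv(x, filters, kernel_size=3, shift=1):
    # Pad `shift` rows of zeros at the top and crop the same number at the
    # bottom, so each output pixel only sees its own row and the rows above it.
    x = tf.pad(x, [[0, 0], [shift, 0], [0, 0], [0, 0]])
    x = tf.keras.layers.Conv2D(filters, kernel_size, padding="same",
                               activation="relu")(x)
    return x[:, :-shift, :, :]

# Applying a stack of shifted convolutions to four rotated copies of the input
# and merging the branches after a final extra shift (cf. the "shift_list"
# parameter in the Usage section) yields a receptive field that excludes the
# pixel being denoised, which is what makes self-supervised training possible.
x = tf.random.uniform([1, 64, 64, 1])                      # toy input
branches = [shifted_conv(tf.image.rot90(x, k), filters=32) for k in range(4)]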

BibTeX reference:

@ARTICLE{2020arXiv200702075B,
       author = {{Bordone Molini}, Andrea and {Valsesia}, Diego and {Fracastoro}, Giulia and
         {Magli}, Enrico},
        title = "{Speckle2Void: Deep Self-Supervised SAR Despeckling with Blind-Spot Convolutional Neural Networks}",
      journal = {arXiv e-prints},
     keywords = {Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition},
         year = 2020,
        month = jul,
          eid = {arXiv:2007.02075},
        pages = {arXiv:2007.02075},
archivePrefix = {arXiv},
       eprint = {2007.02075},
 primaryClass = {eess.IV}
}

Setup to get started

Make sure you have Python 3 and all the required Python packages installed:

pip install -r requirements.txt

Get TerraSAR-X data through the ESA EO products online search service and build the dataset:

  • Download the TerraSAR-X products from the ESA EO products online search service

  • Pre-process the dataset with the speckle decorrelator described in the paper Blind speckle decorrelation for SAR image despeckling. Blind-spot networks work properly only if the noise is spatially decorrelated, but the SAR imaging system correlates the speckle during acquisition, so a decorrelation step is needed before despeckling.

    • Convert the downloaded TerraSAR-X SLC products into .mat files containing complex SAR images; complex data are needed to run the speckle decorrelation procedure. For performance reasons, it is advisable to store in each .mat file a complex SAR image of at most 10000x10000 pixels.
    • The decorrelator.m script takes a .mat file as input, estimates the transfer function of the SAR acquisition and focusing system, inverts it and applies it to the complex SAR image. This procedure estimates the complex backscatter coefficients, i.e. the target scene before it goes through the acquisition chain.
      • Required input parameters:
        • input_file: the input .mat file.
        • output_file: the output .mat file containing the decorrelated complex SAR image.
        • cutoff frequencies f_x and f_y: the cutoff frequencies along each spatial frequency axis are either known from the technical specifications of the SAR system or estimated manually by inspecting the average periodograms. Run inspect_periodograms to visualize the periodograms of the original SAR data and choose the cutoff frequencies for both the range and azimuth directions.
        • frequency shifts m_x and m_y: the periodograms of the original SAR data are sometimes affected by a frequency shift that must be compensated before fitting. Run inspect_periodograms to visualize the periodograms of the original SAR data and manually choose the recovery frequency shifts for both the range and azimuth directions.
        • cf: real SAR images usually contain point targets, due to man-made features or edges. Such strong scatterers show a high level of reflectivity with no speckle noise and must generally be preserved: they are detected and replaced before estimating the complex backscatter coefficients, and placed back after despeckling. The threshold used to select the point targets is threshold = cf · median(intensity_SAR_image), where cf is a coefficient that depends on the dataset at hand. For TerraSAR-X products, point targets are identified as all intensity values above threshold = 50 · median(intensity_SAR_image). A Python sketch of these pre-processing steps is given after this list.
  • Place a set of 10000x10000 decorrelated complex SAR images in the training directory and one in the test directory. During training, 1000x1000 patches extracted from the 10000x10000 test image are used as test images.
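
As a rough orientation, the Python/NumPy sketch below illustrates the pre-processing pieces described above: saving a complex SAR crop as a .mat file, inspecting the average periodogram to choose the cutoff frequencies and shifts, and detecting point targets with threshold = cf · median(intensity). The actual procedure is implemented by the MATLAB scripts decorrelator.m and inspect_periodograms; the function names and default values here are placeholders.

import numpy as np
from scipy.io import savemat

def save_complex_mat(slc, output_file):
    # Store a complex SAR image (at most 10000x10000 for performance) in a .mat file.
    savemat(output_file, {"img": slc.astype(np.complex64)})

def average_periodogram(slc, patch=512):
    # Average the 2-D periodogram over non-overlapping patches; plotting the
    # result helps choose the cutoff frequencies f_x, f_y and the shifts m_x, m_y.
    h, w = slc.shape
    acc, count = np.zeros((patch, patch)), 0
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            spec = np.fft.fftshift(np.fft.fft2(slc[i:i + patch, j:j + patch]))
            acc += np.abs(spec) ** 2
            count += 1
    return acc / max(count, 1)

def point_target_mask(slc, cf=50.0):
    # Strong scatterers: intensity above cf * median(intensity); they are
    # replaced before decorrelation and restored after despeckling.
    intensity = np.abs(slc) ** 2
    return intensity > cf * np.median(intensity)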

Usage

In Speckle2Void-training.ipynb the Speckle2V object is instantiated with the following parameters (a hypothetical instantiation is sketched after the list):

"dir_train" : directory with training data.
"dir_test"  : directory with test data.
"file_checkpoint" : checkpoint for loading a specific model. If None, the latest checkpoint is loaded.
"batch_size"  : size of the mini-batch.
"patch_size"  : size of the training patches
"model_name" : starting name of the directory where to save the checkpoints.
"lr" : learning rate.
"steps_per_epoch" : steps for each epoch 
"k_penalty_tv" : coefficient to weigh the total variation term in the loss
"norm" : normalization
"clip" : intensity value to clip the SAR images
"shift_list" : list of the possible shifts to apply to the receptive fields at the end of the network. For example [3,1].
"prob" : list of the probabilities for choosing the possible shifts. For example [0.9,0.1], 0.9 will be the probability of using shift equal to 3 and 0.1 of using shift 1.
"L_noise" : parameter L of the noise distribution gamma(L,L) used to model the speckle

The SAR denoiser training starts by default from the latest checkpoint found in './checkpoint/model_name' or from a specified checkpoint.

Checkpoint

The s2v_checkpoint directory contains the model used to produce the results of the paper.

Testing

Download sample test images from here and place them in the test_examples directory. To test the trained model on the test examples and estimate their clean versions, run Speckle2Void-prediction.ipynb.

Authors & Contacts

Speckle2Void is based on work by the Image Processing and Learning group of Politecnico di Torino: Andrea Bordone Molini (andrea.bordone AT polito.it), Diego Valsesia (diego.valsesia AT polito.it), Giulia Fracastoro (giulia.fracastoro AT polito.it), Enrico Magli (enrico.magli AT polito.it).
