
EscVM / RAMS

License: Apache-2.0
Official TensorFlow code for the paper "Multi-Image Super Resolution of Remotely Sensed Images Using Residual Attention Deep Neural Networks".




~ Multi-Image Super-Resolution Task ~

RAMS logo

Are you a Deep Learning practitioner, but tired of dealing with Cats and Dogs datasets? Do you want to work on a real problem with a high impact on the research community, but find it tricky to get your hands on the final, preprocessed data? If that's the case, you are in the right place!

We created this repository for two primary reasons:

  • Give easy access to a unique dataset, introduced by ESA in 2019, to work on the very challenging task of multi-frame super-resolution. If you've never heard of this computer vision task, this survey could help you. In a few words, its aim is intuitive and straightforward: reconstruct a high-resolution image from a set of low-resolution frames. With a practical and easy-to-use Jupyter notebook, we give you the possibility to preprocess this dataset and dive directly into the design of your methodology. It's a very flexible pipeline in which every step can be easily omitted. All data are included with this repository, already split into training, validation, and testing sets. At the end of the process you will have three primary tensors: X with shape (B, 128, 128, T), y with shape (B, 384, 384, 1), and y_mask with shape (B, 384, 384, 1) containing the quality maps of the ground truths in y. Your task is straightforward: find a way to turn X into y. In other words, fuse the T 128x128 images of each scene B into a corresponding 384x384 image at triple the resolution. It's a very challenging and fascinating task with a significant impact on the work of many peer researchers, yet still relatively under-investigated.

  • Give access to our pre-trained solution (RAMS), which fsalv and I conceptualized and implemented in a joint effort. We tested and proved our ideas with a joint account, under the name of our robotics lab PIC4SeR, in the post-mortem challenge on the ESA website, achieving first position by a wide margin. What is beautiful about this dataset is that you can still prove your effort by submitting your results directly on the ESA website. Alternatively, you can use the validation set we provide inside this repository to compare your progress directly with the original winner of the ESA challenge (we have selected the same validation scenes used in their paper). Either way, you will have direct feedback on your efforts and can prove your skill with machine learning and deep learning projects!
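
The task described above can be made concrete with a naive baseline. This is a minimal sketch (not the repository's method): it fuses the T frames by simple averaging and upscales 3x by pixel repetition, just to show how the X and y tensor shapes relate; shapes B and T are example values, and learned approaches such as RAMS replace both steps.

```python
import numpy as np

# Example shapes matching the repository's description:
# B scenes, T low-resolution 128x128 frames, 384x384 ground truth.
B, T = 4, 9
X = np.random.rand(B, 128, 128, T).astype(np.float32)

def mean_fusion_baseline(x):
    """Naive baseline: average the T frames, then upscale 3x by
    nearest-neighbour pixel repetition. RAMS learns this fusion
    and upsampling instead."""
    fused = x.mean(axis=-1)                         # (B, 128, 128)
    up = fused.repeat(3, axis=1).repeat(3, axis=2)  # (B, 384, 384)
    return up[..., np.newaxis]                      # (B, 384, 384, 1)

y_pred = mean_fusion_baseline(X)
print(y_pred.shape)  # (4, 384, 384, 1)
```

Any methodology you design should map the same input tensor to the same output shape, which is what the validation score is computed against.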

N.B.: This repository has been inspired by the work of Francisco Dorr. He effectively adapted the WDSR solution to the multi-image super-resolution task, beating the record of the official Proba-V challenge winners for the first time. Check out his alternative solution and let it inspire you toward great new ideas as well.

1.0 Getting Started with the Installation

Python 3 and TensorFlow 2.x are required and should be installed on the host machine following the official guide.

  1. Clone this repository

    git clone https://github.com/EscVM/RAMS
  2. Install the required additional packages

    pip3 install -r requirements.txt
    

2.0 Playground Notebooks

This repository provides three different notebooks. The first can be used to pre-process the Proba-V dataset and experiment with your own solutions. The other two provide all the code necessary to train, modify, and test RAMS, the residual attention multi-image super-resolution network explained in this paper.

2.1 Pre-processing notebook

Use the pre-processing notebook to process the Proba-V original dataset and obtain the training/validation/testing arrays ready to be used.

N.B.: The testing set is the original ESA testing dataset, which ships without ground truths. The validation set is the portion of the dataset we used to evaluate our solution and all major multi-image super-resolution methodologies. However, you can still use the testing set and the post-mortem ESA website to evaluate your technique.
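
Since the testing set has no ground truths, local evaluation relies on the validation arrays and their quality maps. As a simplified illustration of how y_mask enters the score, here is a masked PSNR sketch; note that the official Proba-V metric (cPSNR) additionally corrects for brightness bias and sub-pixel shifts, which this sketch omits, and all names here are hypothetical.

```python
import numpy as np

def masked_psnr(sr, hr, mask, max_val=1.0):
    """PSNR computed only on pixels the quality map marks as reliable.
    Simplified: the official cPSNR also compensates brightness bias
    and small registration shifts before scoring."""
    mask = mask.astype(bool)
    mse = np.mean((sr[mask] - hr[mask]) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: a constant error of 0.1 over fully reliable pixels.
hr = np.zeros((384, 384, 1), dtype=np.float32)
sr = np.full_like(hr, 0.1)
mask = np.ones_like(hr)
print(round(masked_psnr(sr, hr, mask), 1))  # 20.0
```

Masking matters because Proba-V ground truths contain cloud-covered pixels that should not penalize a prediction.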

2.2 Training notebook

Use the training notebook to re-train the original or modified version of the RAMS architecture.

2.3 Testing notebook

Use the testing notebook to test the RAMS model, with pre-trained or re-trained weights, over the validation set. Moreover, you can use the final chapter of the notebook to make predictions on the original ESA testing set, create a zip file, and submit it to the post-mortem website. The following table gives you a reference with results achieved by the RAMS architecture and all major solutions in the literature over the validation set.
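
The packaging step can be sketched as follows. This is only an illustration of zipping one prediction per test scene: the scene names, the `.npy` format, and the helper name are placeholders, since the exact file naming and image format required by the ESA site are those used in the notebook's final chapter.

```python
import io
import zipfile
import numpy as np

def make_submission_zip(predictions, scene_names, path="submission.zip"):
    """Write one serialized prediction per scene into a zip archive.
    Placeholder format: real submissions must follow the ESA site's
    naming and image-format rules."""
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, pred in zip(scene_names, predictions):
            buf = io.BytesIO()
            np.save(buf, pred.astype(np.uint16))  # Proba-V data is 16-bit
            zf.writestr(f"{name}.npy", buf.getvalue())
    return path

preds = [np.zeros((384, 384), dtype=np.float32) for _ in range(2)]
make_submission_zip(preds, ["imgset_000", "imgset_001"])
print(sorted(zipfile.ZipFile("submission.zip").namelist()))
```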

Citation

If you enjoyed this repository and want to cite it, use this BibTeX:

@article{salvetti2020multi,
  title={Multi-Image Super Resolution of Remotely Sensed Images Using Residual Attention Deep Neural Networks},
  author={Salvetti, Francesco and Mazzia, Vittorio and Khaliq, Aleem and Chiaberge, Marcello},
  journal={Remote Sensing},
  volume={12},
  number={14},
  year={2020},
  publisher={Multidisciplinary Digital Publishing Institute}
}