A Deep Learning Approach to Ultrasound Image Recovery

Dimitris Perdios¹, Adrien Besson¹, Marcel Arditi¹, and Jean-Philippe Thiran¹,²

¹Signal Processing Laboratory (LTS5), Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland

²Department of Radiology, University Hospital Center (CHUV), Switzerland

Paper accepted at the IEEE International Ultrasonics Symposium (IUS 2017).

Based on the success of deep neural networks for image recovery, we propose a new paradigm for the compression and decompression of ultrasound (US) signals which relies on stacked denoising autoencoders. The first layer of the network is used to compress the signals and the remaining layers perform the reconstruction. We train the network on simulated US signals and evaluate its quality on images of the publicly available PICMUS dataset. We demonstrate that such a simple architecture outperforms state-of-the-art methods, based on the compressed sensing framework, both in terms of image quality and computational complexity.
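The architecture described above can be sketched as a plain forward pass: a single linear layer maps the raw signal to a lower-dimensional measurement vector, and the remaining layers map it back to the signal domain. The layer sizes, activation, and random weights below are purely illustrative assumptions and do not reproduce the trained networks shipped under networks/ius2017:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sda_forward(x, w_compress, hidden_weights, w_out):
    """Forward pass of a stacked-denoising-autoencoder-style network:
    the first (linear) layer compresses the raw signal, the remaining
    layers perform the reconstruction."""
    h = w_compress @ x          # compression: m < n measurements
    for w in hidden_weights:    # non-linear reconstruction layers
        h = relu(w @ h)
    return w_out @ h            # linear output layer, back to length n

rng = np.random.default_rng(0)
n, m = 1024, 256                # i.e. a measurement ratio m/n = 0.25
x = rng.standard_normal(n)      # stand-in for a raw US signal
w_c = rng.standard_normal((m, n)) / np.sqrt(n)
ws = [rng.standard_normal((m, m)) / np.sqrt(m) for _ in range(2)]
w_o = rng.standard_normal((n, m)) / np.sqrt(m)

x_hat = sda_forward(x, w_c, ws, w_o)
print(x_hat.shape)  # (1024,)
```

In this view, compression and decompression are learned jointly: training the whole stack end-to-end fixes both the measurement matrix (first layer) and the reconstruction (remaining layers).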

Paper updates:

Please note that the accepted preprint version (v1) has been updated. Each version can be found here.

  • v1: accepted preprint
  • v2: fix typos in the abbreviations used in Table II w.r.t. the text
  • v3: fix fig. 2(f) display dB range and corresponding interpretation

Installation

  1. Install Python 3.6 and optionally create a dedicated environment.

  2. Clone the repository.

    git clone https://github.com/dperdios/us-rawdata-sda
    cd us-rawdata-sda
  3. Install the Python dependencies from requirements.txt.

    • Note 1: by default, it will install the GPU version of TensorFlow. If you do not have a compatible GPU, please edit requirements.txt as follows:
      • Comment the line starting with tensorflow-gpu.
      • Uncomment the line starting with tensorflow.
    • Note 2: TensorFlow 1.3 has not been tested yet, hence requirements.txt will install 1.2.1.
      pip install --upgrade pip
      pip install -r requirements.txt
  4. Download the PICMUS dataset, which will be used as the test set.

    python3 setup.py

    This will download, under datasets/picmus17, the following acquisitions, provided by the PICMUS authors:

    • dataset_rf_in_vitro_type1_transmission_1_nbPW_1
    • dataset_rf_in_vitro_type2_transmission_1_nbPW_1
    • dataset_rf_in_vitro_type3_transmission_1_nbPW_1
    • dataset_rf_numerical_transmission_1_nbPW_1

    It will also download, under datasets/picmus16, two in vivo acquisitions provided by the PICMUS authors (from the first version of the challenge), namely:

    • carotid_cross_expe_dataset_rf
    • carotid_long_expe_dataset_rf
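A quick way to verify that setup.py fetched everything is to look for the acquisition names under datasets/. The helper below only assumes the directory layout described above; since the file extension is not stated here, it matches on the name stem instead of a specific suffix:

```python
from pathlib import Path

# Acquisition names listed above (picmus17 test set, picmus16 in vivo data).
PICMUS17 = [
    "dataset_rf_in_vitro_type1_transmission_1_nbPW_1",
    "dataset_rf_in_vitro_type2_transmission_1_nbPW_1",
    "dataset_rf_in_vitro_type3_transmission_1_nbPW_1",
    "dataset_rf_numerical_transmission_1_nbPW_1",
]
PICMUS16 = ["carotid_cross_expe_dataset_rf", "carotid_long_expe_dataset_rf"]

def missing_acquisitions(root="datasets"):
    """Return the acquisition names for which no file is found under `root`."""
    expected = [Path(root, "picmus17", name) for name in PICMUS17]
    expected += [Path(root, "picmus16", name) for name in PICMUS16]
    return [p.name for p in expected
            if not list(p.parent.glob(p.name + "*"))]

print(missing_acquisitions())  # [] once the download has completed
```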

Usage

Trained networks

To reproduce the results on the PICMUS dataset described above, using the trained networks provided under networks/ius2017, use the following command:

python3 ius2017_results.py

This will compute the beamformed image for all the trained networks on every PICMUS acquisition. The corresponding images will be saved as PDF under results/ius2017.

Note 1: This can take some time, since the PICMUS acquisitions for each measurement ratio will be beamformed using a non-optimized delay-and-sum beamformer.
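For readers unfamiliar with the beamforming step, a minimal (and equally non-optimized) delay-and-sum sketch for a single 0° plane-wave acquisition is shown below. The geometry, sampling rate, and nearest-sample interpolation are illustrative assumptions; the beamformer used in this repository is more elaborate:

```python
import numpy as np

def das_beamform_pw(rf, fs, c, elem_x, grid_x, grid_z):
    """Delay-and-sum beamforming of a single 0-degree plane-wave
    acquisition. `rf` has shape (n_elements, n_samples); `elem_x`,
    `grid_x`, `grid_z` are positions in meters."""
    n_elem, n_samp = rf.shape
    img = np.zeros((grid_z.size, grid_x.size))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            t_tx = z / c                                     # plane-wave transmit delay
            t_rx = np.sqrt(z**2 + (elem_x - x) ** 2) / c     # element-to-pixel delay
            idx = np.round((t_tx + t_rx) * fs).astype(int)   # nearest RF sample
            valid = idx < n_samp
            img[iz, ix] = rf[np.flatnonzero(valid), idx[valid]].sum()
    return img

rng = np.random.default_rng(0)
rf = rng.standard_normal((8, 2000))       # 8 elements, 2000 samples (dummy data)
elem_x = np.linspace(-0.01, 0.01, 8)      # element positions [m]
grid_x = np.linspace(-0.005, 0.005, 4)    # lateral image grid [m]
grid_z = np.linspace(0.001, 0.010, 5)     # axial image grid [m]
img = das_beamform_pw(rf, fs=20e6, c=1540.0,
                      elem_x=elem_x, grid_x=grid_x, grid_z=grid_z)
print(img.shape)  # (5, 4)
```

The double loop over pixels is what makes a naive implementation slow, which is why beamforming all acquisitions and measurement ratios takes a while.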

Note 2: The code used to generate the CS results cannot be provided for the moment. Sorry for the inconvenience.

Once the results have been computed, it is possible to launch the following command for a better visualization:

python3 ius2017_add_figure.py

This will save, for each PICMUS acquisition (see above), a PDF image under results/ius2017/*_metrics.pdf displaying the results in terms of PSNR for each method (SDA-CL, SDA-CNL, and CS) and measurement ratio.
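The exact PSNR normalization used by the script is not restated here; the snippet below is only the standard formulation (peak amplitude squared over mean squared error, in dB), as a sketch of how such a metric can be computed:

```python
import numpy as np

def psnr(reference, estimate, peak=None):
    """Peak signal-to-noise ratio in dB between a reference image and its
    reconstruction. `peak` defaults to the reference's maximum amplitude."""
    mse = np.mean((reference - estimate) ** 2)
    if peak is None:
        peak = np.abs(reference).max()
    return 10.0 * np.log10(peak**2 / mse)

ref = np.linspace(0.0, 1.0, 100).reshape(10, 10)
est = ref + 0.01                    # constant error of 0.01
print(round(psnr(ref, est), 1))     # 40.0
```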

Training the networks (not yet available)

We are currently investigating an appropriate way to release and distribute the training set, which has been numerically generated using the open-source k-Wave toolbox. The training code, however, is already provided. Once the training set is available, it will be possible to re-train the networks using the following command:

python3 ius2017_train.py

License

The code is released under the terms of the MIT license.

If you are using this code, please cite our paper.

Acknowledgements

This work was supported in part by the UltrasoundToGo RTD project (no. 20NA21_145911), funded by Nano-Tera.ch.
