
matjesg / deepflash2

License: Apache-2.0
A deep-learning pipeline for segmentation of ambiguous microscopic images.

Programming Languages

Jupyter Notebook
11667 projects
python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to deepflash2

instant-segment
Fast English word segmentation in Rust
Stars: ✭ 49 (+44.12%)
Mutual labels:  segmentation
x-force
winning solution of the Digital Manufacturing Algorithm Competition II of JinNan, Tianjin
Stars: ✭ 56 (+64.71%)
Mutual labels:  segmentation
tensorrt-examples
TensorRT Examples (TensorRT, Jetson Nano, Python, C++)
Stars: ✭ 31 (-8.82%)
Mutual labels:  segmentation
docker-containers
Docker images for fastai
Stars: ✭ 143 (+320.59%)
Mutual labels:  fastai
LineSegm
Line Segmentation of Handwritten Documents using the A* Path Planning Algorithm
Stars: ✭ 19 (-44.12%)
Mutual labels:  segmentation
acdc segmenter
Public code for our submission to the 2017 ACDC Cardiac Segmentation challenge
Stars: ✭ 68 (+100%)
Mutual labels:  segmentation
shellnet
ShellNet: Efficient Point Cloud Convolutional Neural Networks using Concentric Shells Statistics
Stars: ✭ 80 (+135.29%)
Mutual labels:  segmentation
cluster tools
Distributed segmentation for bio-image-analysis
Stars: ✭ 26 (-23.53%)
Mutual labels:  segmentation
fastai-fall2018
🏃Notebooks from the USF Deep Learning course (fast.ai v3)
Stars: ✭ 12 (-64.71%)
Mutual labels:  fastai
pointnet2-pytorch
A clean PointNet++ segmentation model implementation. Support batch of samples with different number of points.
Stars: ✭ 45 (+32.35%)
Mutual labels:  segmentation
unet-pytorch
No description or website provided.
Stars: ✭ 18 (-47.06%)
Mutual labels:  segmentation
brainreg-segment
Segmentation of 3D shapes in a common anatomical space
Stars: ✭ 13 (-61.76%)
Mutual labels:  segmentation
Semantic-Aware-Attention-Based-Deep-Object-Co-segmentation
Semantic Aware Attention Based Deep Object Co-segmentation
Stars: ✭ 61 (+79.41%)
Mutual labels:  segmentation
coursera-ai-for-medicine-specialization
Programming assignments, labs and quizzes from all courses in the Coursera AI for Medicine Specialization offered by deeplearning.ai
Stars: ✭ 80 (+135.29%)
Mutual labels:  segmentation
face video segment
Face Video Segmentation - Face segmentation ground truth from videos
Stars: ✭ 84 (+147.06%)
Mutual labels:  segmentation
mri-deep-learning-tools
Resources for MRI image processing and deep learning in 3D
Stars: ✭ 56 (+64.71%)
Mutual labels:  segmentation
pyconvsegnet
Semantic Segmentation PyTorch code for our paper: Pyramidal Convolution: Rethinking Convolutional Neural Networks for Visual Recognition (https://arxiv.org/pdf/2006.11538.pdf)
Stars: ✭ 32 (-5.88%)
Mutual labels:  segmentation
deepprojects
A non-ending collection of jupyter notebooks
Stars: ✭ 30 (-11.76%)
Mutual labels:  fastai
XNet
CNN implementation for medical X-Ray image segmentation
Stars: ✭ 71 (+108.82%)
Mutual labels:  segmentation
Visual-Transformer-Paper-Summary
Summary of Transformer applications for computer vision tasks.
Stars: ✭ 51 (+50%)
Mutual labels:  segmentation

Welcome to

deepflash2

Official repository of deepflash2 - a deep-learning pipeline for segmentation of ambiguous microscopic images.



The best of two worlds: Combining state-of-the-art deep learning with a barrier-free environment for life science researchers.

Read the paper, watch the tutorials, or read the docs.

  • No coding skills required (graphical user interface)
  • Ground truth estimation from the annotations of multiple experts for model training and validation
  • Quality assurance and out-of-distribution detection for reliable prediction on new data
  • Best-in-class performance for semantic and instance segmentation

Kaggle Gold Medal and Innovation Prize Winner: The deepflash2 Python API laid the foundation for winning the Innovation Award and a Kaggle Gold Medal in the HuBMAP - Hacking the Kidney challenge. Have a look at our solution.

Quick Start and Demo

Get started in less than a minute. Watch the tutorials for help.

For a quick start, run deepflash2 in Google Colaboratory (Google account required).


(Demo video: setup4.mov)

Demo usage

The GUI provides built-in access to our sample data.

  1. Start the GUI (in Colab, or follow the installation instructions below)
  2. Select the task (GT Estimation, Training, or Prediction)
  3. Click the Load Sample Data button in the sidebar and continue to the next sidebar section.

For further instructions, watch the tutorials.

We provide an overview of the tasks below:

Ground Truth (GT) Estimation
  • Main Task: STAPLE or Majority Voting
  • Sample Data: 5 masks from 5 experts each
  • Expected Output: 5 GT segmentation masks
  • Estimated Time: ~ 1 min

Training
  • Main Task: Ensemble training and validation
  • Sample Data: 5 image/mask pairs
  • Expected Output: 5 models
  • Estimated Time: ~ 150 min

Prediction
  • Main Task: Semantic and instance segmentation
  • Sample Data: 5 images and 2 trained models
  • Expected Output: 5 predicted segmentation masks (semantic and instance) and uncertainty maps
  • Estimated Time: ~ 4 min

Times are estimated for Google Colab (with free NVIDIA Tesla K80 GPU).

Paper and Experiments

We provide a complete guide to reproduce our experiments using the deepflash2 Python API here. The data is currently available on Google Drive.

The preprint of our paper is available on arXiv. Please cite

@misc{griebel2021deepflash2,
    title={Deep-learning in the bioimaging wild: Handling ambiguous data with deepflash2}, 
    author={Matthias Griebel and Dennis Segebarth and Nikolai Stein and Nina Schukraft and Philip Tovote and Robert Blum and Christoph M. Flath},
    year={2021},
    eprint={2111.06693},
    archivePrefix={arXiv}
}

System requirements

Works in the browser or on your local PC/server

deepflash2 is designed to run on Windows, Linux, or macOS (x86-64) if PyTorch is installable. We generally recommend using Google Colab as it only requires a Google account and a device with a web browser. To run deepflash2 locally, we recommend using a system with a GPU (e.g., 2 CPUs, 8 GB RAM, NVIDIA GPU with 8 GB VRAM or better).
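
If you are unsure whether your local machine has a usable GPU, a quick check with plain PyTorch (not part of deepflash2) might look like this:

import torch

# Report whether PyTorch can see a CUDA-capable GPU; deepflash2 also runs
# on the CPU (see the CPU-only Docker image below), but training is
# considerably slower without a GPU.
if torch.cuda.is_available():
    print("GPU available:", torch.cuda.get_device_name(0))
else:
    print("No GPU detected - running on the CPU")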

deepflash2 requires Python >3.6, and the software dependencies are defined in the settings.ini file. Additionally, the ground truth estimation functionalities are based on SimpleITK>=2.0, and the instance segmentation capabilities are complemented using cellpose v0.6.6.dev13+g316927e.
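
To illustrate the kind of ground truth fusion that SimpleITK provides, here is a minimal sketch using plain SimpleITK calls (LabelVoting and STAPLE); the file names are placeholders, and this is not the deepflash2 API itself:

import SimpleITK as sitk

# Binary (0/1) annotations of the same image from several experts
# (placeholder file names).
segmentations = [sitk.ReadImage(f, sitk.sitkUInt8)
                 for f in ("expert1.png", "expert2.png", "expert3.png")]

# Majority voting: pixels where the experts tie are marked with label 2.
majority_mask = sitk.LabelVoting(segmentations, 2)

# STAPLE: per-pixel probability of belonging to the foreground,
# thresholded to obtain an estimated ground truth mask.
staple_probabilities = sitk.STAPLE(segmentations)
staple_mask = staple_probabilities > 0.5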

deepflash2 is tested on Google Colab (Ubuntu 18.04.5 LTS) and locally (Ubuntu 20.04 LTS, Windows 10, macOS 12.0.1).

Installation Guide

Typical install time is about 1-5 minutes, depending on your internet connection

The GUI of deepflash2 runs as a web application inside a Jupyter Notebook, the de-facto standard of computational notebooks in the scientific community. The GUI is built on top of the deepflash2 Python API, which can be used independently (read the docs).

Google Colab


Open Colab and execute the Set up environment cell, or follow the pip instructions. Colab provides free access to graphics processing units (GPUs) for fast model training and prediction (Google account required).

Other systems

We recommend installation into a clean Python 3.7, 3.8, or 3.9 environment (e.g., using conda).

mamba/conda

Installation with mamba (installation instructions) allows a fast and reliable installation process (you can replace mamba with conda and add the --update-all flag to do the installation with conda).

mamba install -c fastchan -c conda-forge -c matjesg deepflash2 

pip

If you want to use your GPU and install with pip, we recommend installing PyTorch first by following the installation instructions.

pip install -U deepflash2
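
After installation, you can verify from Python that the package is available and which version was installed (uses only the standard library; requires Python >=3.8):

from importlib.metadata import version

# Print the installed deepflash2 package version as a quick sanity check.
print("deepflash2", version("deepflash2"))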

Using the GUI

If you want to use the GUI, make sure to download the GUI notebook, e.g., using curl

curl -o deepflash2_GUI.ipynb https://raw.githubusercontent.com/matjesg/deepflash2/master/deepflash2_GUI.ipynb

and start a Jupyter server.

jupyter notebook

Then, open deepflash2_GUI.ipynb within the Notebook environment.

Docker

Docker images for deepflash2 are built on top of the latest pytorch image.

  • CPU only

docker run -p 8888:8888 matjes/deepflash2 ./run_jupyter.sh

  • For training, we recommend running Docker with GPU support (you need to install NVIDIA Docker to enable GPU compatibility with these containers).

docker run --gpus all --shm-size=256m -p 8888:8888 matjes/deepflash2 ./run_jupyter.sh

All Docker containers are configured to start a Jupyter server. To add data, we recommend using bind mounts with /workspace as the target. To start the GUI, open deepflash2_GUI.ipynb within the Notebook environment.

For more information on how to run Docker, see the Docker orientation and setup.

Creating segmentation masks with Fiji/ImageJ

If you don't have labelled training data available, you can use this instruction manual for creating segmentation masks. The ImageJ macro is available here.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].