
NYUMedML / DARTS

License: GPL-3.0
Code for DARTS: DenseUnet-based Automatic Rapid Tool for brain Segmentation

DenseUnet-based Automatic Rapid Tool for brain Segmentation (DARTS)

Paper associated with the project

Here is the paper describing the project and experiments in detail.

Package

  • The DARTS package can be installed using:
pip install DARTSeg
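
A quick way to check that the install worked is to import the package (a minimal sketch; it only verifies that the Segmentation class used later in this README is importable):

# Minimal install check: this import should succeed if DARTSeg installed correctly.
from DARTS import Segmentation
print("DARTS import OK:", Segmentation.__name__)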

Pre-trained model weights

  • Download the pretrained models from here as follows:
gdown https://drive.google.com/uc?id=1OJ0RmcALNkiU49Npm7Rez6thIKOf3gLQ -O saved_model_wts.zip
unzip saved_model_wts.zip

There are two model architectures: Dense U-Net and U-Net. Each model is trained on 2D slices extracted coronally, sagittally, or axially. The filename of each model indicates its orientation and architecture.
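
To see which architecture/orientation combinations were downloaded, the extracted directory can simply be listed (a minimal sketch; it assumes the zip above was extracted into ./saved_model_wts/ and that the weight files use the .pth extension):

# List the downloaded weight files; each filename encodes the
# architecture (dense-unet / unet) and the slicing orientation.
from pathlib import Path

for wts in sorted(Path("./saved_model_wts").glob("*.pth")):
    print(wts.name)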

Using pre-trained models to perform complete brain segmentation

Follow these steps to perform segmentation:

from DARTS import Segmentation
seg_obj = Segmentation(model_wts_path='./saved_model_wts/dense_unet_saggital_finetuned.pth', model_type="dense-unet")
seg_out, seg_proba_out = seg_obj.predict(inputs="T1.mgz")
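
If the predicted label map should be written back to disk, something like the following could work (a sketch, not part of the DARTS API; it assumes seg_out returned by predict is a NumPy integer array in the input volume's space and uses nibabel as an extra dependency):

# Save the predicted segmentation next to the input scan.
import nibabel as nib
import numpy as np

t1 = nib.load("T1.mgz")  # original scan, reused here only for its affine
seg_img = nib.Nifti1Image(np.asarray(seg_out, dtype=np.int16), t1.affine)
nib.save(seg_img, "T1_seg.nii.gz")  # hypothetical output filename
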
  • Alternatively, the user may execute the perform_pred.py script to perform segmentation; its command-line usage is shown below:
usage: perform_pred.py [-h] [--input_image_path INPUT_IMAGE_PATH]
                       [--segmentation_dir_path SEGMENTATION_DIR_PATH]
                       [--file_name FILE_NAME] [--model_type MODEL_TYPE]
                       [--model_wts_path MODEL_WTS_PATH] [--is_mgz]

optional arguments:
  -h, --help            show this help message and exit
  --input_image_path INPUT_IMAGE_PATH
                        Path to input image (can be of .mgz or .nii.gz
                        format)(required)
  --segmentation_dir_path SEGMENTATION_DIR_PATH
                        Directory path to save the output segmentation
                        (required)
  --file_name FILE_NAME
                        Name of the segmentation file (required)
  --model_type MODEL_TYPE
                        Model types: "dense-unet", "unet" (default: "dense-
                        unet")
  --model_wts_path MODEL_WTS_PATH
                        Path for model wts to be used, provide a model from
                        saved_model_wts/
  --is_mgz              Use this flag when image is in .mgz format

An example could look something like this:

perform_pred.py --input_image_path './../../../data_orig/199251/mri/T1.mgz' \
--segmentation_dir_path './sample_pred/' \
--file_name '199251' \
--is_mgz \
--model_wts_path './saved_model_wts/dense_unet_back2front_non_finetuned.pth'

An illustration can be seen in predicting_segmentation_illustration.ipynb.
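
To segment many scans in one go, the same command can be scripted; the sketch below shells out to perform_pred.py in a loop (the subject IDs, directory layout, and choice of weights file are hypothetical):

# Batch segmentation: run perform_pred.py once per subject.
import subprocess

subjects = ["199251", "199352"]  # hypothetical subject IDs
for sub in subjects:
    subprocess.run(
        [
            "python", "perform_pred.py",
            "--input_image_path", f"./data_orig/{sub}/mri/T1.mgz",
            "--segmentation_dir_path", "./sample_pred/",
            "--file_name", sub,
            "--is_mgz",
            "--model_wts_path", "./saved_model_wts/dense_unet_back2front_non_finetuned.pth",
        ],
        check=True,
    )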

Deep learning models for brain MR segmentation

We pretrained our Dense U-Net model using the Freesurfer segmentations of 1113 subjects available in the Human Connectome Project (HCP) dataset and fine-tuned the model using 101 manually labeled brain scans from the Mindboggle dataset.

The model segments a complete brain within a minute (on a machine with a single GPU). It labels 102 regions in the brain, making it the first model to segment more than 100 brain regions within a minute. The details of the 102 regions can be found below.

Quantitative results on the Mindboggle held out data

The box plot compares the Dice scores of different ROIs for the Dense U-Net and the U-Net. The Dense U-Net consistently outperforms the U-Net and achieves good Dice scores for most of the ROIs.
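
For reference, the per-ROI Dice score underlying the box plot can be computed as follows (a minimal sketch; pred and ref are assumed to be integer label volumes of identical shape):

# Per-ROI Dice score between a predicted and a reference label map.
import numpy as np

def dice_score(pred, ref, label):
    p = (pred == label)
    r = (ref == label)
    denom = p.sum() + r.sum()
    if denom == 0:
        return float("nan")  # ROI absent from both volumes
    return 2.0 * np.logical_and(p, r).sum() / denom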

Qualitative results on the HCP held out data

We perform an expert reader evaluation to measure and compare the performance of the proposed deep learning models with the Freesurfer model. We use HCP held-out test set scans for the reader study; on these scans, the Freesurfer results have undergone manual quality control (QC). We also compare the non-finetuned and fine-tuned models with the Freesurfer model with manual QC. Seven regions of interest (ROIs) were selected: L/R Putamen (axial view), L/R Pallidum (axial view), L/R Caudate (axial view), L/R Thalamus (axial view), L/R Lateral Ventricles (axial view), L/R Insula (axial view), and L/R Cingulate Gyrus (sagittal view). The readers rated each example on a Likert-type scale from 1 (Poor) to 5 (Excellent).

Based on the readers' ratings, we investigate whether there are statistically significant differences between the three methods using a paired t-test and a Wilcoxon signed-rank test at the 95% confidence level. The results can be seen below.
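
As an illustration of the tests mentioned above, paired per-case reader ratings for two methods could be compared with SciPy as follows (the rating values below are placeholders, not the study data):

# Paired t-test and Wilcoxon signed-rank test on per-case reader ratings.
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

ratings_freesurfer = np.array([4, 3, 5, 4, 2, 4, 3, 4])  # placeholder ratings
ratings_finetuned = np.array([5, 4, 5, 4, 3, 5, 4, 5])   # placeholder ratings

print(ttest_rel(ratings_finetuned, ratings_freesurfer))
print(wilcoxon(ratings_finetuned, ratings_freesurfer))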

Output segmentation

The output segmentation has 103 labeled segments, with the last one being the None class. The segmentation labels closely resemble the aseg+aparc segmentation protocol of Freesurfer.

We exclude 4 brain regions that are not common to a normal brain: white-matter and non-white-matter hypointensities, and the left and right frontal and temporal poles. We also exclude the left and right 'unknown' segments, and we exclude the left and right bankssts because there is no definition of these segments that is widely accepted by the neuroradiology community.

The complete list of class numbers and the corresponding segment names can be found here as a pickled object or here as a .txt file.
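
Once downloaded, the mapping can be used to name the classes in an output segmentation (a sketch; the filename is hypothetical and the exact structure of the pickled object should be checked against the linked file):

# Load the class-number -> segment-name mapping and print a few entries.
import pickle

with open("class_to_segment_name.pkl", "rb") as f:  # hypothetical filename
    class_names = pickle.load(f)

for class_idx in list(class_names)[:5]:
    print(class_idx, class_names[class_idx])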

Sample Predictions

Insula

Here we can clearly see that Freesurfer (FS) incorrectly predicts the right insula segment, and the model trained only on FS segmentations learns the same wrong prediction. Our proposed model, which is fine-tuned on the manually annotated dataset, correctly captures the region. Moreover, its segment looks biologically natural, unlike the FS segmentation, which is grainy, noisy, and has non-smooth boundaries.

Putamen

Here again, we see that the FS segmentation is of low quality, but our proposed fine-tuned model performs well and produces a more natural-looking segmentation.

Pallidum

The FS segmentation of the pallidum is also of low quality, but the proposed model performs well.

More Predictions

Some sample predictions for the putamen, caudate, hippocampus, and insula can be seen here. In all the images, Prediction 1 = Freesurfer, Prediction 2 = non-finetuned Dense U-Net, and Prediction 3 = fine-tuned Dense U-Net.

We demonstrate that Freesurfer often makes errors in determining accurate boundaries, whereas the deep learning-based models produce natural-looking ROIs with accurate boundaries.

Contact

If you have any questions regarding the code, please contact ark576[at]nyu.edu or raise an issue on the GitHub repo.
