neuronets / nobrainer

License: Apache 2.0
A framework for developing neural network models for 3D image processing.

Programming Languages

  • Jupyter Notebook: 11,667 projects
  • Python: 139,335 projects (#7 most used programming language)

Projects that are alternatives of or similar to nobrainer

visualqc
VisualQC: an assistive tool to ease the quality control workflow of neuroimaging data.
Stars: ✭ 56 (-54.47%)
Mutual labels:  medical-imaging, neuroimaging
ResUNetPlusPlus-with-CRF-and-TTA
ResUNet++, CRF, and TTA for segmentation of medical images (IEEE JBIHI)
Stars: ✭ 98 (-20.33%)
Mutual labels:  medical-imaging, semantic-segmentation
Slicer
Multi-platform, free open source software for visualization and image computing.
Stars: ✭ 263 (+113.82%)
Mutual labels:  medical-imaging, neuroimaging
kits19-challenge
Kidney Tumor Segmentation Challenge 2019
Stars: ✭ 44 (-64.23%)
Mutual labels:  medical-imaging, semantic-segmentation
Dltk
Deep Learning Toolkit for Medical Image Analysis
Stars: ✭ 1,249 (+915.45%)
Mutual labels:  medical-imaging, neuroimaging
clinicadl
Framework for the reproducible processing of neuroimaging data with deep learning methods
Stars: ✭ 114 (-7.32%)
Mutual labels:  medical-imaging, neuroimaging
Dipy
DIPY is the paragon 3D/4D+ imaging library in Python. Contains generic methods for spatial normalization, signal processing, machine learning, statistical analysis and visualization of medical images. Additionally, it contains specialized methods for computational anatomy including diffusion, perfusion and structural imaging.
Stars: ✭ 417 (+239.02%)
Mutual labels:  medical-imaging, neuroimaging
unet-pytorch
An example implementation of the U-Net model for semantic segmentation
Stars: ✭ 17 (-86.18%)
Mutual labels:  medical-imaging, semantic-segmentation
Extensionsindex
Slicer extensions index
Stars: ✭ 36 (-70.73%)
Mutual labels:  medical-imaging, neuroimaging
Medicaldetectiontoolkit
The Medical Detection Toolkit contains 2D + 3D implementations of prevalent object detectors such as Mask R-CNN, Retina Net, Retina U-Net, as well as a training and inference framework focused on dealing with medical images.
Stars: ✭ 917 (+645.53%)
Mutual labels:  medical-imaging, semantic-segmentation
Cascaded Fcn
Source code for the MICCAI 2016 paper "Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields"
Stars: ✭ 296 (+140.65%)
Mutual labels:  medical-imaging, semantic-segmentation
Segan
SegAN: Semantic Segmentation with Adversarial Learning
Stars: ✭ 143 (+16.26%)
Mutual labels:  medical-imaging, semantic-segmentation
Slicergitsvnarchive
Multi-platform, free open source software for visualization and image computing.
Stars: ✭ 896 (+628.46%)
Mutual labels:  medical-imaging, neuroimaging
Kiu Net Pytorch
Official Pytorch Code of KiU-Net for Image Segmentation - MICCAI 2020 (Oral)
Stars: ✭ 134 (+8.94%)
Mutual labels:  medical-imaging, semantic-segmentation
Livianet
This repository contains the code of LiviaNET, a 3D fully convolutional neural network that was employed in our work: "3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study"
Stars: ✭ 143 (+16.26%)
Mutual labels:  medical-imaging, neuroimaging
Miscnn
A framework for Medical Image Segmentation with Convolutional Neural Networks and Deep Learning
Stars: ✭ 194 (+57.72%)
Mutual labels:  medical-imaging
Gdcm
Grassroots DICOM read-only mirror. Pull requests only. Please report bugs at http://sf.net/p/gdcm
Stars: ✭ 240 (+95.12%)
Mutual labels:  medical-imaging
Fast
A framework for GPU based high-performance medical image processing and visualization
Stars: ✭ 179 (+45.53%)
Mutual labels:  medical-imaging
Visvis
Visvis - the object oriented approach to visualization
Stars: ✭ 180 (+46.34%)
Mutual labels:  medical-imaging
VT-UNet
[MICCAI2022] This is an official PyTorch implementation for A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation
Stars: ✭ 151 (+22.76%)
Mutual labels:  semantic-segmentation

Nobrainer

Nobrainer is a deep learning framework for 3D image processing. It implements several 3D convolutional models from recent literature, methods for loading and augmenting volumetric data that can be used with any TensorFlow or Keras model, losses and metrics for 3D data, and simple utilities for model training, evaluation, prediction, and transfer learning.

Nobrainer also provides pre-trained models for brain extraction, brain segmentation, brain generation and other tasks. Please see the Trained models repository for more information.

The Nobrainer project is supported by NIH RF1MH121885 and is distributed under the Apache 2.0 license. It was started under the support of NIH R01 EB020470.

Table of contents

Implementations

Models

Model | Type | Application
Highresnet (source) | supervised | segmentation/classification
Unet (source) | supervised | segmentation/classification
Vnet (source) | supervised | segmentation/classification
Meshnet (source) | supervised | segmentation/classification
Bayesian Meshnet (source) | Bayesian supervised | segmentation/classification
Bayesian Vnet | Bayesian supervised | segmentation/classification
Semi_Bayesian Vnet | semi-Bayesian supervised | segmentation/classification
DCGAN | self-supervised | generative model
Progressive GAN | self-supervised | generative model
3D Autoencoder | self-supervised | knowledge representation/dimensionality reduction
3D Progressive Autoencoder | self-supervised | knowledge representation/dimensionality reduction
3D SimSiam (source) | self-supervised | Siamese representation learning

Dropout and regularization layers

Bernoulli dropout layer, Concrete dropout layer, Gaussian dropout layer, Group normalization layer, Custom padding layer
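
Several of these have stock Keras counterparts, which makes the idea easy to sketch. The snippet below is an illustration only, not nobrainer's own implementations (those live in nobrainer.layers), and it assumes TensorFlow 2.11 or newer for the built-in GroupNormalization layer.

import tensorflow as tf

# Illustration only: stock Keras equivalents of two of the layers above,
# applied to a small 3D feature map. nobrainer.layers provides its own
# implementations; check that module for the exact class names.
inputs = tf.keras.Input(shape=(32, 32, 32, 1))
x = tf.keras.layers.Conv3D(8, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.GroupNormalization(groups=4)(x)   # requires TF >= 2.11
x = tf.keras.layers.GaussianDropout(0.1)(x)           # multiplicative Gaussian noise
outputs = tf.keras.layers.Conv3D(1, 1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)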

Losses

Dice, Jaccard, Tversky, ELBO, Wasserstein, Gradient Penalty

Metrics

Dice, Generalized Dice, Jaccard, Hamming, Tversky
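
For reference, the Dice coefficient of two masks A and B is 2|A ∩ B| / (|A| + |B|). A generic soft-Dice loss and hard-Dice metric for 3D volumes might look like the sketch below; nobrainer.losses and nobrainer.metrics contain the framework's own, more complete implementations, so treat this as an illustration rather than the library API.

import tensorflow as tf

# Generic soft Dice loss and hard Dice metric for batched 3D volumes of
# shape (batch, x, y, z, channels). Not nobrainer's implementation; see
# nobrainer.losses and nobrainer.metrics for the real ones.
def soft_dice_loss(y_true, y_pred, axis=(1, 2, 3, 4), eps=1e-7):
    intersection = tf.reduce_sum(y_true * y_pred, axis=axis)
    denom = tf.reduce_sum(y_true, axis=axis) + tf.reduce_sum(y_pred, axis=axis)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

def dice_coefficient(y_true, y_pred, threshold=0.5, axis=(1, 2, 3, 4), eps=1e-7):
    y_pred = tf.cast(y_pred > threshold, y_true.dtype)
    intersection = tf.reduce_sum(y_true * y_pred, axis=axis)
    denom = tf.reduce_sum(y_true, axis=axis) + tf.reduce_sum(y_pred, axis=axis)
    return (2.0 * intersection + eps) / (denom + eps)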

Augmentation methods

Spatial Transforms

Center crop, spatial constant padding, random crop, resize, random flip (left and right)

Intensity Transforms

Additive Gaussian noise, min-max intensity scaling, custom intensity scaling, intensity masking, contrast adjustment

Affine Transform

Affine transformation, including rotation, translation, and reflection.
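
As a rough illustration of a few of these transforms (not the nobrainer.transform / nobrainer.volume implementations), a random left-right flip, additive Gaussian noise, and min-max scaling can be sketched in NumPy:

import numpy as np

rng = np.random.default_rng()

def minmax_scale(volume, eps=1e-8):
    vmin, vmax = volume.min(), volume.max()
    return (volume - vmin) / (vmax - vmin + eps)

def random_flip_lr(volume, p=0.5, axis=0):
    # The left-right axis is assumed to be axis 0 here.
    return np.flip(volume, axis=axis) if rng.random() < p else volume

def add_gaussian_noise(volume, std=0.05):
    return volume + rng.normal(0.0, std, size=volume.shape)

volume = rng.normal(size=(64, 64, 64)).astype("float32")
augmented = add_gaussian_noise(random_flip_lr(minmax_scale(volume)))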

Guide Jupyter Notebooks

Please refer to the Jupyter notebooks in the guide directory to get started with Nobrainer. Try them out in Google Colaboratory!

Installation

Container

We recommend using the official Nobrainer Docker container, which includes all of the dependencies necessary to use the framework. Please see the available images on Docker Hub.

GPU support

The Nobrainer containers with GPU support are built on the TensorFlow Jupyter GPU containers. Please check the containers for the version of CUDA installed. NVIDIA drivers are not included in the container.

$ docker pull neuronets/nobrainer:latest-gpu
$ singularity pull docker://neuronets/nobrainer:latest-gpu

CPU only

This container can be used on all systems that have Docker or Singularity and does not require special hardware. This container, however, should not be used for model training (it will be very slow).

$ docker pull neuronets/nobrainer:latest-cpu
$ singularity pull docker://neuronets/nobrainer:latest-cpu

pip

Nobrainer can also be installed with pip.

$ pip install nobrainer

Using pre-trained networks

Pre-trained networks are available in the Trained models repository. Prediction can be done on the command-line with nobrainer predict or in Python. Similarly, generation can be done on the command-line with nobrainer generate or in Python.

Predicting a brainmask for a T1-weighted brain scan

Figure: In the first column are T1-weighted brain scans, in the middle are a trained model's predictions, and on the right are binarized FreeSurfer segmentations. Despite being trained on binarized FreeSurfer segmentations, the model outperforms FreeSurfer in the bottom scan, which exhibits motion distortion. It took about three seconds for the model to predict each brainmask using an NVIDIA GTX 1080 Ti. It takes about 70 seconds on a recent CPU.

In the following examples, we will use a 3D U-Net trained for brain extraction and documented in Trained models.

In the base case, we run the T1w scan through the model for prediction.

# Get sample T1w scan.
wget -nc https://dl.dropbox.com/s/g1vn5p3grifro4d/T1w.nii.gz
docker run --rm -v $PWD:/data neuronets/nobrainer \
  predict \
    --model=/models/neuronets/brainy/0.1.0/brain-extraction-unet-128iso-model.h5 \
    --verbose \
    /data/T1w.nii.gz \
    /data/brainmask.nii.gz
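
As noted above, prediction can also be done in Python rather than through the CLI. A minimal sketch follows; it loads the same Keras .h5 model with TensorFlow and uses nibabel for NIfTI I/O. The paths and preprocessing here are assumptions for illustration only: in practice the volume must be conformed and standardized exactly as during training (the CLI and the guide notebooks handle this for you).

import nibabel as nib
import numpy as np
import tensorflow as tf

# Sketch of prediction in Python (not the CLI's exact pipeline; paths and
# preprocessing are illustrative assumptions).
model = tf.keras.models.load_model(
    "brain-extraction-unet-128iso-model.h5", compile=False)

img = nib.load("T1w.nii.gz")
data = np.asarray(img.dataobj, dtype="float32")
data = (data - data.mean()) / (data.std() + 1e-8)   # simple standardization

# The volume is assumed here to already match the model's input shape;
# in practice it must be conformed or split into blocks first.
prob = model.predict(data[np.newaxis, ..., np.newaxis])[0, ..., 0]
mask = (prob > 0.5).astype("uint8")

nib.save(nib.Nifti1Image(mask, img.affine), "brainmask.nii.gz")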

For binary segmentation where we expect one predicted region, as is the case with brain extraction, we can reduce false positives by removing all predictions not connected to the largest contiguous label.

# Get sample T1w scan.
wget -nc https://dl.dropbox.com/s/g1vn5p3grifro4d/T1w.nii.gz
docker run --rm -v $PWD:/data neuronets/nobrainer \
  predict \
    --model=/models/neuronets/brainy/0.1.0/brain-extraction-unet-128iso-model.h5 \
    --largest-label \
    --verbose \
    /data/T1w.nii.gz \
    /data/brainmask-largestlabel.nii.gz
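
Conceptually, --largest-label keeps only the largest connected component of the binary mask. A generic version of that post-processing step (not necessarily the exact code behind the flag) can be written with scipy.ndimage:

import numpy as np
from scipy import ndimage

def largest_component(mask):
    # Label connected components and keep only the biggest one.
    labeled, n_components = ndimage.label(mask)
    if n_components == 0:
        return mask
    sizes = ndimage.sum(mask, labeled, index=range(1, n_components + 1))
    largest = 1 + int(np.argmax(sizes))   # component labels start at 1
    return (labeled == largest).astype(mask.dtype)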

Because the network was trained on randomly rotated data, it should be agnostic to orientation. Therefore, we can rotate the volume, predict on it, undo the rotation in the prediction, and average the prediction with that from the original volume. This can lead to a better overall prediction but will at least double the processing time. To enable this, use the flag --rotate-and-predict in nobrainer predict.

# Get sample T1w scan.
wget -nc https://dl.dropbox.com/s/g1vn5p3grifro4d/T1w.nii.gz
docker run --rm -v $PWD:/data neuronets/nobrainer \
  predict \
    --model=/models/neuronets/brainy/0.1.0/brain-extraction-unet-128iso-model.h5 \
    --rotate-and-predict \
    --verbose \
    /data/T1w.nii.gz \
    /data/brainmask-withrotation.nii.gz
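
Conceptually, --rotate-and-predict is a form of test-time augmentation: rotate the volume, predict, undo the rotation on the prediction, and average with the prediction on the original volume. A rough sketch of the idea (the flag's actual implementation uses nobrainer's rigid transforms, not scipy):

import numpy as np
from scipy.ndimage import rotate

def rotate_and_predict(predict_fn, volume, angle=15.0, axes=(0, 1)):
    # Prediction on the original orientation.
    pred_orig = predict_fn(volume)
    # Rotate, predict, then rotate the prediction back.
    rotated = rotate(volume, angle, axes=axes, reshape=False, order=1)
    pred_rot = predict_fn(rotated)
    pred_unrot = rotate(pred_rot, -angle, axes=axes, reshape=False, order=1)
    # Average the two predictions.
    return 0.5 * (pred_orig + pred_unrot)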

Combining the above, we can usually achieve the best brain extraction by using --rotate-and-predict in conjunction with --largest-label.

# Get sample T1w scan.
wget -nc https://dl.dropbox.com/s/g1vn5p3grifro4d/T1w.nii.gz
docker run --rm -v $PWD:/data neuronets/nobrainer \
  predict \
    --model=/models/neuronets/brainy/0.1.0/brain-extraction-unet-128iso-model.h5 \
    --largest-label \
    --rotate-and-predict \
    --verbose \
    /data/T1w.nii.gz \
    /data/brainmask-maybebest.nii.gz

Generating a synthetic T1-weighted brain scan

Figure: Progressive generation of a T1-weighted brain MR scan, shown in sagittal, axial, and coronal views, starting from a resolution of 32³ up to 256³ (left to right: 32³, 64³, 128³, 256³). The brain scans are generated from the same latents at all resolutions. It took about 6 milliseconds for the model to generate the 256³ brain scan using an NVIDIA Tesla V100.

In the following examples, we will use a Progressive Generative Adversarial Network trained for brain image generation and documented in Trained models.

In the base case, we generate a T1w scan with the model at a given resolution. We need to pass the directory containing the models (tf.SavedModel) created while training the networks.

docker run --rm -v $PWD:/data neuronets/nobrainer \
  generate \
    --model=/models/neuronets/braingen/0.1.0 \
    --output-shape=128 128 128 \
    /data/generated.nii.gz
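
From Python, the generator directories are ordinary tf.SavedModels, so they can be loaded and inspected with tf.saved_model.load. The directory layout, signature names, and latent dimensionality below are assumptions and should be checked against the Trained models repository before use.

import tensorflow as tf

# Sketch: inspect a braingen SavedModel before calling it. The path and the
# signature/latent details are assumptions; verify them against the
# Trained models repository.
generator = tf.saved_model.load("models/neuronets/braingen/0.1.0/generator_res_128")
print(list(generator.signatures.keys()))   # list the callable signatures

# Hypothetical call once the signature name and latent size are known:
# latents = tf.random.normal([1, latent_size])
# volume = generator.signatures["serving_default"](latents)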

We can also generate multiple resolutions of the brain image from the same latents, to visualize the progression.

docker run --rm -v $PWD:/data neuronets/nobrainer \
  generate \
    --model=/models/neuronets/braingen/0.1.0 \
    --multi-resolution \
    /data/generated.nii.gz

In the above example, the multi-resolution images will be saved as generated_res_{resolution}.nii.gz.

Transfer learning

The pre-trained models can be used for transfer learning. To avoid forgetting important information in the pre-trained model, you can apply regularization to the kernel weights and also use a low learning rate. For more information, please see the Nobrainer guide notebook on transfer learning.
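
As a rough Keras sketch of that recipe (an illustration, not the guide notebook's exact code; the model path, learning rate, number of frozen layers, and regularization strength are all assumptions, and the add_loss pattern assumes TensorFlow 2.x Keras):

import tensorflow as tf

# Transfer-learning sketch: low learning rate plus L2 regularization on the
# convolution kernels. See the Nobrainer transfer-learning guide notebook
# for the recommended recipe.
model = tf.keras.models.load_model(
    "brain-extraction-unet-128iso-model.h5", compile=False)

# Optionally freeze the earliest layers to preserve low-level features.
for layer in model.layers[:10]:
    layer.trainable = False

# Add a simple L2 penalty on every convolution kernel at compile time.
l2 = tf.keras.regularizers.l2(1e-5)
for layer in model.layers:
    if hasattr(layer, "kernel"):
        model.add_loss(lambda layer=layer: l2(layer.kernel))

model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),   # low learning rate
              loss="binary_crossentropy")                 # placeholder loss
# model.fit(new_dataset, epochs=...)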

As an example of transfer learning, @kaczmarj re-trained a brain extraction model to label meningiomas in 3D T1-weighted, contrast-enhanced MR scans. The original model is publicly available and was trained on 10,000 T1-weighted MR brain scans from healthy participants. These were all research scans (i.e., non-clinical) and did not include any contrast agents. The meningioma dataset, on the other hand, was composed of relatively few scans, all of which were clinical and used gadolinium as a contrast agent. You can observe the differences in contrast below.

Figure: brain extraction model prediction (left); meningioma extraction model prediction (right).

Despite the differences between the two datasets, transfer learning led to a much better model than training from randomly-initialized weights. As evidence, please see below violin plots of Dice coefficients on a validation set. In the left plot are Dice coefficients of predictions obtained with the model trained from randomly-initialized weights, and on the right are Dice coefficients of predictions obtained with the transfer-learned model. In general, Dice coefficients are higher on the right, and the variance of Dice scores is lower. Overall, the model on the right is more accurate and more robust than the one on the left.

Package layout

  • nobrainer.io: input/output methods
  • nobrainer.layers: custom layers, which conform to the Keras API
  • nobrainer.losses: loss functions for volumetric segmentation
  • nobrainer.metrics: metrics for volumetric segmentation
  • nobrainer.models: pre-defined Keras models
  • nobrainer.training: training utilities (supports training on single and multiple GPUs)
  • nobrainer.transform: random rigid transformations for data augmentation
  • nobrainer.volume: tf.data.Dataset creation and data augmentation utilities

Citation

If you use this package, please cite it.

Questions or issues

If you have questions about Nobrainer or encounter any issues using the framework, please submit a GitHub issue.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].