
IAmSuyogJadhav / 3D MRI Brain Tumor Segmentation Using Autoencoder Regularization

License: MIT
Keras implementation of the paper "3D MRI brain tumor segmentation using autoencoder regularization" by Myronenko A. (https://arxiv.org/abs/1810.11654).

Projects that are alternatives to, or similar to, 3D MRI Brain Tumor Segmentation Using Autoencoder Regularization

Keras transfer cifar10
Object classification with CIFAR-10 using transfer learning
Stars: ✭ 120 (-42.58%)
Mutual labels:  jupyter-notebook, cnn
Dgc Net
A PyTorch implementation of "DGC-Net: Dense Geometric Correspondence Network"
Stars: ✭ 159 (-23.92%)
Mutual labels:  jupyter-notebook, cnn
Pytorch Sift
PyTorch implementation of SIFT descriptor
Stars: ✭ 123 (-41.15%)
Mutual labels:  jupyter-notebook, cnn
Models
DLTK Model Zoo
Stars: ✭ 101 (-51.67%)
Mutual labels:  jupyter-notebook, cnn
Deep Learning With Python
Deep learning codes and projects using Python
Stars: ✭ 195 (-6.7%)
Mutual labels:  jupyter-notebook, cnn
Self Driving Car
An end-to-end CNN model that predicts the steering wheel angle from video/images
Stars: ✭ 106 (-49.28%)
Mutual labels:  jupyter-notebook, cnn
Visualizing cnns
Using Keras and cats to visualize layers from CNNs
Stars: ✭ 143 (-31.58%)
Mutual labels:  jupyter-notebook, cnn
Pytorch Learners Tutorial
PyTorch tutorial for learners
Stars: ✭ 97 (-53.59%)
Mutual labels:  jupyter-notebook, cnn
Cnn Re Tf
Convolutional Neural Network for Multi-label Multi-instance Relation Extraction in Tensorflow
Stars: ✭ 190 (-9.09%)
Mutual labels:  jupyter-notebook, cnn
Lidc nodule detection
LIDC nodule detection with CNN and LSTM networks
Stars: ✭ 187 (-10.53%)
Mutual labels:  jupyter-notebook, cnn
Keras Oneclassanomalydetection
[5 FPS - 150 FPS] Learning Deep Features for One-Class Classification (anomaly detection). Runs on Raspberry Pi 3. Convertible to TensorFlow, ONNX, Caffe, PyTorch. Implemented in Python + OpenVINO/TensorFlow Lite.
Stars: ✭ 102 (-51.2%)
Mutual labels:  jupyter-notebook, cnn
Raspberrypi Facedetection Mtcnn Caffe With Motion
MTCNN with Motion Detection, on Raspberry Pi with Love
Stars: ✭ 204 (-2.39%)
Mutual labels:  jupyter-notebook, cnn
Codesearchnet
Datasets, tools, and benchmarks for representation learning of code.
Stars: ✭ 1,378 (+559.33%)
Mutual labels:  jupyter-notebook, cnn
Deeplearning tutorials
Deep learning algorithms implemented in TensorFlow
Stars: ✭ 1,580 (+655.98%)
Mutual labels:  jupyter-notebook, cnn
Facedetector
A re-implementation of MTCNN, with joint training, a tutorial, and deployment.
Stars: ✭ 99 (-52.63%)
Mutual labels:  jupyter-notebook, cnn
Image classifier
CNN image classifier implemented in Keras Notebook 🖼️.
Stars: ✭ 139 (-33.49%)
Mutual labels:  jupyter-notebook, cnn
Cnn intent classification
CNN for intent classification task in a Chatbot
Stars: ✭ 90 (-56.94%)
Mutual labels:  jupyter-notebook, cnn
Pytorch Pos Tagging
A tutorial on how to implement models for part-of-speech tagging using PyTorch and TorchText.
Stars: ✭ 96 (-54.07%)
Mutual labels:  jupyter-notebook, cnn
Keraspp
Coding Chef's 3-Minute Deep Learning, Keras Flavor (Korean book title)
Stars: ✭ 178 (-14.83%)
Mutual labels:  jupyter-notebook, cnn
Pratik Derin Ogrenme Uygulamalari
BASIC-LEVEL practical deep learning applications using various libraries, with code explanations in Turkish.
Stars: ✭ 200 (-4.31%)
Mutual labels:  jupyter-notebook, cnn

3D MRI Brain Tumor Segmentation Using Autoencoder Regularization


The model architecture (source: https://arxiv.org/pdf/1810.11654.pdf)

Keras implementation of the paper "3D MRI brain tumor segmentation using autoencoder regularization" by Myronenko A. (https://arxiv.org/abs/1810.11654). The author (team name: NVDLMED) ranked #1 on the BraTS 2018 leaderboard using the model described in the paper.

This repository contains the model complete with the loss function, all implemented end-to-end in Keras. The usage is described in the next section.

Usage

  1. Download the file model.py and keep it in the same folder as your project notebook/script.

  2. In your Python script, import the build_model function from model.py.

    from model import build_model
    

    This will automatically download an additional script needed for the implementation, group_norm.py, which contains the Keras implementation of the group normalization layer.

  3. Note that the input MRI scans you feed to the model must have 4 dimensions, in channels-first format, i.e., the shape should look like (c, H, W, D), where:

  • c, the number of channels, must be divisible by 4.
  • H, W and D (height, width and depth, respectively) must all be divisible by 2⁴, i.e., 16. This is required for the model to produce the correct output shape.
  4. Now, to create the model, simply run:

    model = build_model(input_shape, output_channels)
    

    where input_shape is a 4-tuple (channels, Height, Width, Depth) and output_channels is the number of channels in the output of the model. The output of the model will be the segmentation map, with shape (output_channels, Height, Width, Depth), where Height, Width and Depth are the same as those of the input. A minimal usage sketch is shown below.
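
For concreteness, here is a minimal sketch of building the model and running a forward pass on a dummy scan. The input shape (4, 160, 192, 128), the choice of 3 output channels, and the random dummy data are illustrative assumptions that satisfy the constraints above, not values prescribed by the repository:

    import numpy as np
    from model import build_model

    # Example values only: 4 channels (divisible by 4), spatial dims divisible by 16.
    input_shape = (4, 160, 192, 128)  # (channels, Height, Width, Depth)
    output_channels = 3               # e.g. one channel per tumor sub-region

    model = build_model(input_shape, output_channels)

    # A dummy channels-first batch of one scan, just to illustrate the expected shapes.
    x = np.random.rand(1, *input_shape).astype("float32")
    pred = model.predict(x)
    # Depending on the repository version, the model may also return the VAE
    # reconstruction as a second output (see the Updates section below).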

Example on BraTS2018 dataset

Go through the Example_on_BRATS2018 notebook to see an example where this model is used on the BraTS2018 dataset.

You can also test-run the example on Google Colaboratory by clicking the following button.

Open In Colab

However, note that you will need access to the BraTS2018 dataset before running the example on Google Colaboratory. If you already have access to the dataset, you can simply upload it to Google Drive and enter the dataset path in the example notebook.
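
For reference, the sketch below shows one possible way of loading a single BraTS2018 case into the channels-first format described in the Usage section. The file-naming pattern, the use of nibabel, and the normalization scheme are illustrative assumptions, not code taken from the repository or the example notebook:

    import os
    import numpy as np
    import nibabel as nib  # a common library for reading NIfTI (.nii.gz) MRI volumes

    def load_brats_case(case_dir, case_id, modalities=("t1", "t1ce", "t2", "flair")):
        """Return one BraTS case as a (4, H, W, D) float32 array (channels-first)."""
        volumes = []
        for m in modalities:
            # Hypothetical file layout: <case_dir>/<case_id>_<modality>.nii.gz
            vol = nib.load(os.path.join(case_dir, f"{case_id}_{m}.nii.gz")).get_fdata()
            vol = vol.astype("float32")
            # Simple per-modality normalization over the non-zero (brain) voxels.
            brain = vol[vol > 0]
            if brain.size > 0:
                vol = (vol - brain.mean()) / (brain.std() + 1e-8)
            volumes.append(vol)
        # Remember to crop/pad so that H, W and D are divisible by 16 before feeding the model.
        return np.stack(volumes, axis=0)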

Issues

If you encounter any issues or have feedback, please don't hesitate to open an issue.

Updates

  • Thanks to @Crispy13, issues #29 and #24 are now fixed. The VAE branch output was previously not included in the model's output; the model now gives two outputs: the segmentation map and the VAE output. The VAE branch weights were also not being trained; that should now be fixed. The Dice score calculation has been slightly modified to work with any batch size, and SpatialDropout3D is now used instead of Dropout, as specified in the paper.
  • Added an example notebook showing how to run the model on the BraTS2018 dataset.
  • Added a minus sign before the loss_dice term in the loss function, following the discussion in #7 with @woodywff and @doc78 (a generic sketch of this sign convention appears after this list).
  • Thanks to @doc78, the NaN loss problem has been permanently fixed.
  • The NaN loss problem has now been fixed (clipping the activations for now).
  • Added an argument to the build_model function to allow a different number of channels in the output.
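
For context on the Dice-related items above, the snippet below is a generic soft Dice formulation written against the Keras backend. It only illustrates the sign convention (the Dice coefficient measures overlap, so it enters the loss with a minus sign) and batch-size-agnostic averaging; it is not the exact code from model.py:

    from keras import backend as K

    def soft_dice_coefficient(y_true, y_pred, eps=1e-8):
        """Soft Dice overlap, averaged over the batch (works for any batch size)."""
        axes = tuple(range(1, K.ndim(y_true)))  # every axis except the batch axis
        intersection = K.sum(y_true * y_pred, axis=axes)
        denom = K.sum(K.square(y_true), axis=axes) + K.sum(K.square(y_pred), axis=axes)
        return K.mean(2.0 * intersection / (denom + eps))

    def dice_loss(y_true, y_pred):
        # The minus sign turns a similarity (to be maximized) into a loss (to be minimized).
        return -soft_dice_coefficient(y_true, y_pred)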