
tkuanlun350 / 3dunet Tensorflow Brats18

3D Unet biomedical segmentation model powered by tensorpack with fast io speed


Projects that are alternatives of or similar to 3dunet Tensorflow Brats18

Keras unet plus plus
keras implementation of unet plus plus
Stars: ✭ 166 (-4.05%)
Mutual labels:  segmentation, unet
Open Solution Data Science Bowl 2018
Open solution to the Data Science Bowl 2018
Stars: ✭ 159 (-8.09%)
Mutual labels:  segmentation, unet
Data Science Bowl 2018
End-to-end one-class instance segmentation based on U-Net architecture for Data Science Bowl 2018 in Kaggle
Stars: ✭ 56 (-67.63%)
Mutual labels:  segmentation, unet
Medicalzoopytorch
A pytorch-based deep learning framework for multi-modal 2D/3D medical image segmentation
Stars: ✭ 546 (+215.61%)
Mutual labels:  segmentation, unet
Unet Family
Paper and implementation of UNet-related model.
Stars: ✭ 1,924 (+1012.14%)
Mutual labels:  segmentation, unet
Unet Segmentation Pytorch Nest Of Unets
Implementation of different kinds of Unet Models for Image Segmentation - Unet , RCNN-Unet, Attention Unet, RCNN-Attention Unet, Nested Unet
Stars: ✭ 683 (+294.8%)
Mutual labels:  segmentation, unet
Multiclass Semantic Segmentation Camvid
Tensorflow 2 implementation of complete pipeline for multiclass image semantic segmentation using UNet, SegNet and FCN32 architectures on Cambridge-driving Labeled Video Database (CamVid) dataset.
Stars: ✭ 67 (-61.27%)
Mutual labels:  segmentation, unet
Segmentation models.pytorch
Segmentation models with pretrained backbones. PyTorch.
Stars: ✭ 4,584 (+2549.71%)
Mutual labels:  segmentation, unet
Segmentation
Tensorflow implementation : U-net and FCN with global convolution
Stars: ✭ 101 (-41.62%)
Mutual labels:  segmentation, unet
Brats17
Patch-based 3D U-Net for brain tumor segmentation
Stars: ✭ 85 (-50.87%)
Mutual labels:  segmentation, unet
Unet
unet for image segmentation
Stars: ✭ 3,751 (+2068.21%)
Mutual labels:  segmentation, unet
Lung Segmentation 2d
Lung fields segmentation on CXR images using convolutional neural networks.
Stars: ✭ 138 (-20.23%)
Mutual labels:  segmentation, unet
Bcdu Net
BCDU-Net : Medical Image Segmentation
Stars: ✭ 314 (+81.5%)
Mutual labels:  segmentation, unet
Segmentation Networks Benchmark
Evaluation framework for testing segmentation networks in Keras
Stars: ✭ 34 (-80.35%)
Mutual labels:  segmentation, unet
Tianchi Medical Lungtumordetect
Tianchi Medical AI Competition (Season 1): intelligent diagnosis of pulmonary nodules. UNet/VGG/Inception/ResNet/DenseNet
Stars: ✭ 314 (+81.5%)
Mutual labels:  segmentation, unet
Unet 3d
3D Unet Equipped with Advanced Deep Learning Methods
Stars: ✭ 57 (-67.05%)
Mutual labels:  segmentation, unet
Pytorch Saltnet
Kaggle | 9th place single model solution for TGS Salt Identification Challenge
Stars: ✭ 270 (+56.07%)
Mutual labels:  segmentation, unet
Segmentation models
Segmentation models with pretrained backbones. Keras and TensorFlow Keras.
Stars: ✭ 3,575 (+1966.47%)
Mutual labels:  segmentation, unet
Dlcv for beginners
Companion code for the book 《深度学习与计算机视觉》 (Deep Learning and Computer Vision)
Stars: ✭ 1,244 (+619.08%)
Mutual labels:  segmentation, unet
Paddlex
PaddlePaddle End-to-End Development Toolkit (a full-workflow deep learning development toolkit for PaddlePaddle, 『飞桨』)
Stars: ✭ 3,399 (+1864.74%)
Mutual labels:  segmentation, unet

3DUnet-Tensorflow

[Figures: Tumor Segmentation 1, Tumor Segmentation 2 — example segmentation outputs]

3D Unet biomedical segmentation model powered by tensorpack with fast I/O speed.

This project borrows a lot of code from https://github.com/taigw/brats17/. I improved the pipeline and use tensorpack's DataFlow for faster I/O. Currently it takes around 7 minutes for 500 iterations with patch size [5 x 20 x 144 x 144]. You can achieve reasonable results within 40 epochs (more GPUs will also reduce training time).
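
As a rough illustration of the idea (a sketch under assumed interfaces, not the repo's actual data_loader.py), a tensorpack DataFlow that yields sampled patches and is wrapped with multi-process prefetching could look like this:

from tensorpack.dataflow import DataFlow, BatchData, PrefetchDataZMQ

class PatchFlow(DataFlow):
    """Yields [image_patch, label_patch] datapoints. `sampler` is a hypothetical
    callable that loads one patient directory and returns a (patch, label) pair."""

    def __init__(self, patient_dirs, sampler):
        self.patient_dirs = patient_dirs
        self.sampler = sampler

    def __len__(self):
        return len(self.patient_dirs)

    def __iter__(self):
        for d in self.patient_dirs:
            patch, label = self.sampler(d)
            yield [patch, label]

# Sampling runs in parallel worker processes so the GPU is not starved:
# df = BatchData(PrefetchDataZMQ(PatchFlow(dirs, sampler), 4), batch_size=2)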

I want to verify the effectiveness (consistent improvement despite slight implementation differences and a different deep-learning framework) of several architectural components proposed in recent years, such as Dice loss, generalised Dice loss, residual connections, instance normalization, deep supervision, etc. These designs are popular and appear in many papers from the BraTS competition.
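
For reference, a minimal soft Dice loss in TensorFlow might look like the sketch below; the exact losses used in this repo (and the generalised Dice variant, which adds class weighting) differ in detail:

import tensorflow as tf

def soft_dice_loss(logits, labels, eps=1e-5):
    """Soft Dice loss averaged over classes.

    logits: [batch, D, H, W, num_classes] raw network outputs.
    labels: [batch, D, H, W, num_classes] one-hot ground truth.
    """
    probs = tf.nn.softmax(logits)                      # per-voxel class probabilities
    axes = [1, 2, 3]                                   # sum over the spatial dimensions
    intersection = tf.reduce_sum(probs * labels, axis=axes)
    denominator = tf.reduce_sum(probs + labels, axis=axes)
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - tf.reduce_mean(dice)                  # average over batch and classes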

Dependencies

  • Python 3, TensorFlow, and tensorpack (a NIfTI reader such as nibabel is also needed to load the .nii.gz volumes)
  • BraTS 2018 (or BraTS 2017) data, organized as described below

Data

Arrange the data under a base directory with the following layout:

DIR/
  training/
    HGG/
    LGG/
  val/
    BRATS*.nii.gz

If you don't have BraTS data, you can visit ellisdg/3DUnetCNN, which provides sample data from TCGA.

You can modify data_loader.py to adapt the pipeline to other 3D datasets. The data sampling strategy is defined in the BatchData class in data_sampler.py; a sketch of the patch-sampling idea follows.
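
A minimal sketch of the two sampling strategies referred to in the settings below (one_positive vs. random); this is a hypothetical helper, not the repo's exact sampler:

import numpy as np

def sample_patch(image, label, patch_size=(128, 128, 128), one_positive=True, max_tries=10):
    """Crop a random 3D patch from a [C, D, H, W] image and [D, H, W] label.

    With one_positive=True, re-sample until the patch contains at least one
    tumor voxel (up to max_tries); otherwise sample purely at random.
    """
    d, h, w = image.shape[-3:]
    pd, ph, pw = patch_size
    for _ in range(max_tries):
        z = np.random.randint(0, d - pd + 1)
        y = np.random.randint(0, h - ph + 1)
        x = np.random.randint(0, w - pw + 1)
        img_patch = image[..., z:z + pd, y:y + ph, x:x + pw]
        lab_patch = label[z:z + pd, y:y + ph, x:x + pw]
        if not one_positive or lab_patch.any():
            break
    return img_patch, lab_patch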

Usage

Change config in config.py:

  1. Change BASEDIR to /path/to/DIR as described above (a hypothetical config excerpt is sketched below).
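
A hypothetical excerpt of config.py showing the options referenced in this README (defaults and anything beyond BASEDIR, CROSS_VALIDATION, CROSS_VALIDATION_PATH, and FOLD are assumptions):

# config.py (hypothetical excerpt)
BASEDIR = "/path/to/DIR"               # directory containing training/ and val/
CROSS_VALIDATION = False               # set True to use 5-fold cross validation
CROSS_VALIDATION_PATH = "/path/to/5fold.pkl"
FOLD = 0                               # which fold (0~4) to hold out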

Train:

python3 train.py --logdir=./train_log/unet3d --gpu 0

Eval:

python3 train.py --load=./train_log/unet3d/model-30000 --gpu 0 --evaluate

Predict:

python3 train.py --load=./train_log/unet3d/model-30000 --gpu 0 --predict

If you want to use 5-fold cross validation:

  1. Run generate_5fold.py to save 5fold.pkl (a sketch of this step follows the list).
  2. Set config CROSS_VALIDATION to True.
  3. Set config CROSS_VALIDATION_PATH to {/path/to/5fold.pkl}.
  4. Set config FOLD to {0~4}.
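
A minimal sketch of what the fold-generation step could look like; the exact contents and format of 5fold.pkl produced by generate_5fold.py may differ:

import os
import pickle
import random

base = "/path/to/DIR/training"
patients = [os.path.join(grade, p)
            for grade in ("HGG", "LGG")
            for p in sorted(os.listdir(os.path.join(base, grade)))]
random.seed(0)
random.shuffle(patients)

folds = {i: patients[i::5] for i in range(5)}   # round-robin split into 5 folds
with open("5fold.pkl", "wb") as f:
    pickle.dump(folds, f)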

Results

The detailed parameters and training settings are listed below. The results are derived from the BraTS 2018 online evaluation on the validation set.

Single Model

Setting 1:

Unet3d, num_filters=32 (all), depth=3, sampling=one_positive

  • PatchSize = [5, 20, 144, 144] per gpu, num_gpus = 2, epochs = 40
  • Lr = 0.01, num_step=500, epoch time = 6:35(min), total_training_time ~ 5 hours

Setting 2:

Unet3d, num_filters=32 (all), depth=3, sampling=one_positive

  • PatchSize = [2, 128, 128, 128] per gpu, num_gpus = 2, epochs = 40
  • Lr = 0.01, num_step=500, epoch time = 20:35(min), total_training_time ~ 8 hours

Setting 3:

Unet3d, num_filters=16~256, sampling=one_positive, depth=5, residual

  • PatchSize = [2, 128, 128, 128], num_gpus = 1, epochs = 20
  • Lr = 0.001, num_step=500, epoch time = 20(min), total_training_time ~ 8 hours

Setting 4:

Unet3d, num_filters=16~256, depth=5, residual, InstanceNorm, sampling=random

  • PatchSize = [2, 128, 128, 128], num_gpus = 1, epochs = 20
  • Lr = 0.001, num_step=500, epoch time = 20(min), total_training_time ~ 8 hours

Setting 5:

Unet3d, num_filters=16~256, depth=5, residual, InstanceNorm, sampling=one_positive

  • PatchSize = [2, 128, 128, 128], num_gpus = 1, epochs = 20
  • Lr = 0.001, num_step=500, epoch time = 20(min), total_training_time ~ 8 hours

Setting 6:

Unet3d, num_filters=16~256, depth=5, residual, deep-supervision, InstanceNorm, sampling=one_positive

  • PatchSize = [2, 128, 128, 128], num_gpus = 1, epochs = 20
  • Lr = 0.001, epoch time = 19(min), total_training_time ~ 8 hours

Setting 7:

Unet3d, num_filters=16~256, depth=5, residual, deep-supervision, BatchNorm, sampling=one_positive

  • PatchSize = [2, 128, 128, 128], num_gpus = 1, epochs = 20
  • Lr = 0.001, epoch time = 20(min), total_training_time ~ 8 hours

Setting 8:

Unet3d, num_filters=16~256, depth=5, residual, deep-supervision, InstanceNorm, sampling=random

  • PatchSize = [2, 128, 128, 128], num_gpus = 2, epochs = 20
  • Lr = 0.001, epoch time = 22(min), total_training_time ~ 8 hours

Setting   Dice_ET   Dice_WT   Dice_TC
1         0.74      0.85      0.75
2         0.74      0.83      0.77
2*        0.77      0.84      0.77
3         0.74      0.87      0.78
4         0.75      0.87      0.790
5         0.72      0.87      0.796
6         0.73      0.88      0.80
6*        0.75      0.88      0.80
7         0.73      0.87      0.78
8*        0.77      0.87      0.81

Ensemble Results

Multi-View:

Introduced in "Automatic Brain Tumor Segmentation using Cascaded Anisotropic Convolutional Neural Networks". Models are trained on the axial, sagittal, and coronal views separately, and their prediction probabilities are averaged.

Currently the paths for the per-view models must be set manually (see train.py after line 147).
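
Conceptually, the multi-view ensemble reduces to averaging the per-voxel probabilities of the three view-specific models (a sketch, assuming all predictions have been resampled back to a common orientation):

import numpy as np

def multi_view_average(prob_axial, prob_sagittal, prob_coronal):
    """Average per-voxel class probabilities from three view-specific models."""
    return np.mean(np.stack([prob_axial, prob_sagittal, prob_coronal]), axis=0)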

Test-Time augmentation:

Testing with image augmentation to improve prediction robustness.

  • Flip: predict on the original image and a horizontally flipped copy, then average the prediction probabilities (see the sketch after the table below).

Setting            Dice_ET   Dice_WT   Dice_TC
8+Flip             0.73      0.88      0.81
8*+Flip            0.77      0.88      0.82
Multi-View*        0.78      0.89      0.81
Multi-View*+Flip   0.78      0.89      0.82

P.S. * denotes advanced post-processing.
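
A sketch of flip test-time augmentation; predict_fn is a hypothetical callable mapping an input volume to class probabilities with the same spatial layout:

import numpy as np

def predict_with_flip(predict_fn, image, axis=2):
    """Average predictions over the original input and a copy flipped along `axis`."""
    prob = predict_fn(image)
    flipped = np.flip(image, axis=axis)
    prob_flipped = np.flip(predict_fn(flipped), axis=axis)  # undo the flip on the output
    return 0.5 * (prob + prob_flipped)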

Preprocessing

Zero Mean Unit Variance (default)

Normalize each modality to zero mean and unit variance within the brain region.
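
A minimal sketch, assuming background voxels are exactly zero (as in BraTS) so the brain region can be taken as the set of non-zero voxels:

import numpy as np

def normalize_modality(volume):
    """Zero-mean, unit-variance normalization computed over brain voxels only."""
    brain = volume > 0
    out = np.zeros_like(volume, dtype=np.float32)
    out[brain] = (volume[brain] - volume[brain].mean()) / (volume[brain].std() + 1e-8)
    return out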

Bias Correction

Details in Tustison, Nicholas J., et al. "N4ITK: Improved N3 bias correction." IEEE Transactions on Medical Imaging 29.6 (2010): 1310-1320.

Setting               Dice_ET   Dice_WT   Dice_TC
N4+8*+Flip            0.76      0.87      0.80
Multi-View*+N4+Flip   0.76      0.89      0.80

Use preprocess.py to convert the BraTS data into bias-corrected images. It takes about one day to process 200+ files (multi-threading could help; see the sketch below).
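
A sketch of how the bias-correction step could be parallelized with SimpleITK's N4 filter and a process pool; preprocess.py in this repo may differ, and the glob pattern below is a placeholder:

import glob
import multiprocessing as mp
import SimpleITK as sitk

def n4_correct(path):
    image = sitk.ReadImage(path, sitk.sitkFloat32)
    mask = sitk.OtsuThreshold(image, 0, 1, 200)        # rough foreground (brain) mask
    corrected = sitk.N4BiasFieldCorrection(image, mask)
    sitk.WriteImage(corrected, path.replace(".nii.gz", "_n4.nii.gz"))

if __name__ == "__main__":
    files = glob.glob("/path/to/DIR/training/*/*/*_t1.nii.gz")  # adjust per modality
    with mp.Pool(processes=4) as pool:
        pool.map(n4_correct, files)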

Notes

Results for BraTS 2018 will be updated and more experiments will be included. [2018/8/3]
