BrainMaGe (Brain Mask Generator)

Introduction

The Brain Mask Generator (BrainMaGe) is a robust, generalizable deep-learning (DL) brain-extraction (skull-stripping) tool developed explicitly for brain MRI scans with apparent pathologies, e.g., tumors. Rather than requiring a specific modality or combination of modalities, BrainMaGe introduces a modality-agnostic training method that forces the model to learn the spatial relationships between brain structures and the overall brain shape, as opposed to texture, thereby removing the need for any particular modality. To read more about BrainMaGe, please use the link in Citations for the full performance evaluation we conducted, which shows that such a model achieves comparable (and in most cases better) accuracy than other DL methods while keeping computational and logistical requirements minimal.

Citations

If you use this package, please cite the following paper:

  1. S.Thakur, J.Doshi, S.Pati, S.Rathore, C.Sako, M.Bilello, S.M.Ha, G.Shukla, A.Flanders, A.Kotrotsou, M.Milchenko, S.Liem, G.S.Alexander, J.Lombardo, J.D.Palmer, P.LaMontagne, A.Nazeri, S.Talbar, U.Kulkarni, D.Marcus, R.Colen, C.Davatzikos, G.Erus, S.Bakas, "Brain Extraction on MRI Scans in Presence of Diffuse Glioma: Multi-institutional Performance Evaluation of Deep Learning Methods and Robust Modality-Agnostic Training", NeuroImage, Epub-ahead-of-print, 2020. DOI: 10.1016/j.neuroimage.2020.117081

The following citations are previous conference presentations of related results:

  1. S.P.Thakur, J.Doshi, S.Pati, S.M.Ha, C.Sako, S.Talbar, U.Kulkarni, C.Davatzikos, G.Erus, S.Bakas, "Skull-Stripping of Glioblastoma MRI Scans Using 3D Deep Learning". In International MICCAI BrainLes Workshop, Springer LNCS, 57-68, 2019. DOI: 10.1007/978-3-030-46640-4_6

  2. S.Thakur, J.Doshi, S.M.Ha, G.Shukla, A.Kotrotsou, S.Talbar, U.Kulkarni, D.Marcus, R.Colen, C.Davatzikos, G.Erus, S.Bakas, "NIMG-40. ROBUST MODALITY-AGNOSTIC SKULL-STRIPPING IN PRESENCE OF DIFFUSE GLIOMA: A MULTI-INSTITUTIONAL STUDY", Neuro-Oncology, 21(Supplement_6): vi170, 2019. DOI: 10.1093/neuonc/noz175.710

Installation Instructions

Requirements

Run the following commands:

git clone https://github.com/CBICA/BrainMaGe.git
cd BrainMaGe
git lfs pull
conda env create -f requirements.yml # create a virtual environment named brainmage
conda activate brainmage # activate it
latesttag=$(git describe --tags) # get the latest tag [bash-only]
echo checking out ${latesttag}
git checkout ${latesttag}
python setup.py install # install dependencies and BrainMaGe

Alternative to LFS

In case git lfs pull fails, the weights can be obtained using the following commands:

wget -P ./BrainMaGe/weights https://github.com/CBICA/BrainMaGe/raw/master/BrainMaGe/weights/resunet_ma.pt
wget -P ./BrainMaGe/weights https://github.com/CBICA/BrainMaGe/raw/master/BrainMaGe/weights/resunet_multi_4.pt

Generating brain masks for your data using our pre-trained models

  • This application currently has two modes (more coming soon):
    • Modality Agnostic (MA)
    • Multi-4, i.e., using all 4 structural modalities

Steps to run application

  1. Co-register the scans within each patient to the SRI-24 atlas in the LPS/RAI space.

    An easy way to do this is using the BraTSPipeline application from the Cancer Imaging Phenomics Toolkit (CaPTk). This pipeline currently uses a pre-trained model to extract the skull, but the processed images (in the order defined above, up to and including registration) are also saved.

  2. Make an input CSV including paths to the co-registered images (prepared in the previous step) for which you wish to generate brain masks.

  • Multi-4 (use all 4 structural modalities): Prepare a CSV file with the following headers: Patient_ID,T1_path,T2_path,T1ce_path,Flair_path

  • Modality-agnostic (works with any structural modality): Prepare a CSV file with the following headers: Patient_ID_Modality,image_path
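For illustration, the two CSV layouts might look like the following (patient IDs and file paths are hypothetical):

```
Patient_ID,T1_path,T2_path,T1ce_path,Flair_path
patient_1,/data/patient_1/t1.nii.gz,/data/patient_1/t2.nii.gz,/data/patient_1/t1ce.nii.gz,/data/patient_1/flair.nii.gz
```

and for the modality-agnostic mode:

```
Patient_ID_Modality,image_path
patient_1_t1,/data/patient_1/t1.nii.gz
```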

  3. Make config files:

    Populate a config file with the required parameters, where mode refers to the inference type and is a required parameter.

    Note: Alternatively, you can use a directory structure similar to the training layout described in the next section.
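As a hypothetical sketch only (the actual parameter names come from the example config files shipped with the repository), an inference config might look like:

```
mode = MA                          # inference type (required): MA or Multi-4
csv_file = /path/to/test_ma.csv    # CSV prepared in the previous step
model_dir = /path/to/output        # where generated brain masks are written
```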

  4. Run the application:

    conda activate brainmage
    brain_mage_run -params $test_params_ma.cfg -test True -mode $mode -dev $device

    Where:

    • $mode can be MA for modality agnostic or Multi-4.
    • $device refers to the GPU device on which you want the code to run, or the CPU.

Steps to run application (Alternative)

Although this method is much slower and runs on a single subject at a time, it works flawlessly on both CPUs and GPUs.

conda activate brainmage
brain_mage_single_run -i $path_to_input.nii.gz -o $path_to_output_mask.nii.gz \
  -m $path_to_output_brain.nii.gz -dev $device

Where:
- `$path_to_input.nii.gz` is the path to the input NIfTI image.
- `$path_to_output_mask.nii.gz` is the path where the generated brain mask is saved.
- `$path_to_output_brain.nii.gz` is the path where the skull-stripped brain image is saved.
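Because the single-subject runner processes one scan per invocation, batching over a directory requires a small driver. The following is an illustrative sketch (the flat input layout, the naming scheme, and the build_commands helper are assumptions, not part of BrainMaGe):

```python
from pathlib import Path

def build_commands(input_dir, output_dir, device="cpu"):
    """Build one brain_mage_single_run invocation per NIfTI scan found."""
    input_dir, output_dir = Path(input_dir), Path(output_dir)
    commands = []
    for scan in sorted(input_dir.glob("*.nii.gz")):
        stem = scan.name[: -len(".nii.gz")]
        commands.append([
            "brain_mage_single_run",
            "-i", str(scan),                                # input image
            "-o", str(output_dir / f"{stem}_mask.nii.gz"),  # brain mask
            "-m", str(output_dir / f"{stem}_brain.nii.gz"), # stripped brain
            "-dev", device,                                 # GPU id or "cpu"
        ])
    return commands
```

Each returned command can then be executed sequentially, e.g. with `subprocess.run(cmd, check=True)`.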

[ADVANCED] Train your own model

  1. Co-register the scans within each patient to a common atlas space, such as the SRI-24 atlas in the LPS/RAI space.

    An easy way to do this is using the BraTSPipeline application from the Cancer Imaging Phenomics Toolkit (CaPTk).

    Note: Any changes made in this step need to be reflected during the inference process.

  2. Arrange the input data, co-registered in the previous step, into the following folder structure. Please note that files must be named exactly as below (e.g., ${subjectName}_t1.nii.gz, ${subjectName}_mask.nii.gz, etc.)

    Input_Data_folder -- patient_1 -- patient_1_t1.nii.gz
                                   -- patient_1_t2.nii.gz
                                   -- patient_1_t1ce.nii.gz
                                   -- patient_1_flair.nii.gz
                                   -- patient_1_mask.nii.gz
                         patient_2 -- ...
                         ...
                         patient_n -- ...
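Before standardizing, a quick stdlib check can confirm that every patient folder matches the layout above (the helper name is illustrative, not part of BrainMaGe):

```python
from pathlib import Path

REQUIRED_SUFFIXES = ("t1", "t2", "t1ce", "flair", "mask")

def find_incomplete_patients(data_dir):
    """Return names of patient folders missing any <patient>_<suffix>.nii.gz file."""
    incomplete = []
    for patient in sorted(p for p in Path(data_dir).iterdir() if p.is_dir()):
        expected = {f"{patient.name}_{s}.nii.gz" for s in REQUIRED_SUFFIXES}
        present = {f.name for f in patient.glob("*.nii.gz")}
        if not expected <= present:  # some required file is absent
            incomplete.append(patient.name)
    return incomplete
```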
    
  3. Standardizing Dataset Intensities

    Use the following command to standardize intensities for both training and validation data:

    brain_mage_intensity_standardize -i ${inputSubjectDirectory} -o ${outputSubjectDirectory} -t ${threads}
    
    • ${inputSubjectDirectory} needs to be structured as described in the previous step (Arranging Data).
    • ${threads} is the maximum number of threads that can be used for computation and generally depends on the number of available CPU cores. It should be of type int and should satisfy 0 < ${threads} ≤ maximum_cpu_cores; depending on your CPU, this can range from 1 to 112 threads.
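The constraint above can be enforced with a small clamp before launching the tool (the helper is illustrative, not part of BrainMaGe):

```python
import os

def clamp_threads(requested):
    """Clamp a requested thread count to the range [1, available CPU cores]."""
    max_cores = os.cpu_count() or 1  # os.cpu_count() may return None
    return max(1, min(int(requested), max_cores))
```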
  4. Prepare configuration file

    Populate a config file with required parameters. Example: train_params.cfg

    Change the mode variable in the config file based on what kind of model you want to train (either modality agnostic or multi-4).

  5. Run the training:

    brain_mage_run -params train_params.cfg -train True -dev $device -load $resume.ckpt
    

    Note that -load $resume.ckpt is only needed if you are resuming your training.

  6. [OPTIONAL] Converting weights after training

  • After training a custom model, you will have a .ckpt file instead of a .pt file.
  • The file convert_ckpt_to_pt.py can be used to convert it. For example:
    ./env/python BrainMaGe/utils/convert_ckpt_to_pt.py -i ${path_to_ckpt_file_with_filename} -o ${path_to_pt_file_with_filename}
  • Please note that if you wish to use your own weights, you can pass them with the -load option.
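Conceptually, the conversion extracts the bare model weights from the Lightning checkpoint, which stores them under a state_dict key alongside optimizer and trainer state. A minimal sketch (the function name is illustrative; see convert_ckpt_to_pt.py for the actual script):

```python
import torch

def convert_checkpoint(ckpt_path, pt_path):
    """Save only the model weights from a PyTorch Lightning checkpoint."""
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    # Lightning checkpoints keep the model weights under 'state_dict';
    # fall back to the raw object for plain state-dict files.
    state_dict = checkpoint.get("state_dict", checkpoint)
    torch.save(state_dict, pt_path)
```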

Notes

  • IMPORTANT: This application is neither FDA approved nor CE marked, so the use of this package and any associated risks are the users' responsibility.
  • Please follow instructions carefully and for questions/suggestions, post an issue or contact us.
  • The brain_mage_run command gets installed automatically in the virtual environment.
  • We provide CPU (untested as of 2020/05/31) as well as GPU support.
    • Running on a GPU is considerably faster and should be preferred.
    • You need ~5-6 GB of GPU memory for inference and ~8 GB for training.
  • Support for hole filling and largest connected component analysis (CCA) post-processing is included.

TO-DO

  • Windows support (this currently works but needs a few workarounds)
  • Give example of skull stripping dataset
  • In inference, rename model_dir to results_dir for clarity in the configuration and script(s)
  • Test on CPU
  • Move all dependencies to setup.py for consistency
  • Put option to write logs to specific files in output directory
  • Remove -mode parameter in brain_mage_run

Contact

Please email [email protected] with questions.
