
jeya-maria-jose / Medical Transformer

License: MIT
Pytorch Code for "Medical Transformer: Gated Axial-Attention for Medical Image Segmentation"

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Medical Transformer

Visual-Transformer-Paper-Summary
Summary of Transformer applications for computer vision tasks.
Stars: ✭ 51 (-66.67%)
Mutual labels:  transformer, attention, segmentation
Vnet Tensorflow
Tensorflow implementation of the V-Net architecture for medical imaging segmentation.
Stars: ✭ 84 (-45.1%)
Mutual labels:  segmentation, medical-imaging
Nlp Tutorial
Natural Language Processing Tutorial for Deep Learning Researchers
Stars: ✭ 9,895 (+6367.32%)
Mutual labels:  attention, transformer
Hrnet Semantic Segmentation
The OCR approach is rephrased as Segmentation Transformer: https://arxiv.org/abs/1909.11065. This is an official implementation of semantic segmentation for HRNet. https://arxiv.org/abs/1908.07919
Stars: ✭ 2,369 (+1448.37%)
Mutual labels:  segmentation, transformer
Data Science Bowl 2018
End-to-end one-class instance segmentation based on U-Net architecture for Data Science Bowl 2018 in Kaggle
Stars: ✭ 56 (-63.4%)
Mutual labels:  segmentation, medical-imaging
Deeplearning Nlp Models
A small, interpretable codebase containing the re-implementation of a few "deep" NLP models in PyTorch. Colab notebooks to run with GPUs. Models: word2vec, CNNs, transformer, gpt.
Stars: ✭ 64 (-58.17%)
Mutual labels:  attention, transformer
Setr Pytorch
Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers
Stars: ✭ 96 (-37.25%)
Mutual labels:  segmentation, transformer
Medicaldetectiontoolkit
The Medical Detection Toolkit contains 2D + 3D implementations of prevalent object detectors such as Mask R-CNN, Retina Net, Retina U-Net, as well as a training and inference framework focused on dealing with medical images.
Stars: ✭ 917 (+499.35%)
Mutual labels:  segmentation, medical-imaging
Multiturndialogzoo
Multi-turn dialogue baselines written in PyTorch
Stars: ✭ 106 (-30.72%)
Mutual labels:  attention, transformer
Bertqa Attention On Steroids
BertQA - Attention on Steroids
Stars: ✭ 112 (-26.8%)
Mutual labels:  attention, transformer
Sightseq
Computer vision tools for fairseq, containing PyTorch implementation of text recognition and object detection
Stars: ✭ 116 (-24.18%)
Mutual labels:  attention, transformer
Extensionsindex
Slicer extensions index
Stars: ✭ 36 (-76.47%)
Mutual labels:  segmentation, medical-imaging
Segment Open
Segment Source Distribution
Stars: ✭ 34 (-77.78%)
Mutual labels:  segmentation, medical-imaging
Quicknat pytorch
PyTorch Implementation of QuickNAT and Bayesian QuickNAT, a fast brain MRI segmentation framework with segmentation Quality control using structure-wise uncertainty
Stars: ✭ 74 (-51.63%)
Mutual labels:  segmentation, medical-imaging
Cell Detr
Official and maintained implementation of the paper Attention-Based Transformers for Instance Segmentation of Cells in Microstructures [BIBM 2020].
Stars: ✭ 26 (-83.01%)
Mutual labels:  attention, transformer
Niftynet
[unmaintained] An open-source convolutional neural networks platform for research in medical image analysis and image-guided therapy
Stars: ✭ 1,276 (+733.99%)
Mutual labels:  segmentation, medical-imaging
Kiu Net Pytorch
Official Pytorch Code of KiU-Net for Image Segmentation - MICCAI 2020 (Oral)
Stars: ✭ 134 (-12.42%)
Mutual labels:  segmentation, medical-imaging
Slicergitsvnarchive
Multi-platform, free open source software for visualization and image computing.
Stars: ✭ 896 (+485.62%)
Mutual labels:  segmentation, medical-imaging
Ganseg
Framework for medical image segmentation using deep neural networks
Stars: ✭ 18 (-88.24%)
Mutual labels:  segmentation, medical-imaging
Njunmt Tf
An open-source neural machine translation system developed by Natural Language Processing Group, Nanjing University.
Stars: ✭ 97 (-36.6%)
Mutual labels:  attention, transformer

Medical-Transformer

PyTorch code for the paper "Medical Transformer: Gated Axial-Attention for Medical Image Segmentation"

Google Colab: Link

About this repo:

This repo hosts the code for the following networks:

  1. Gated Axial Attention U-Net
  2. MedT

Introduction

The majority of existing Transformer-based network architectures proposed for vision applications require large-scale datasets to train properly. However, compared to the datasets for vision applications, the number of data samples available in medical imaging is relatively low, making it difficult to efficiently train transformers for medical applications. To this end, we propose a Gated Axial-Attention model which extends the existing architectures by introducing an additional control mechanism in the self-attention module. Furthermore, to train the model effectively on medical images, we propose a Local-Global training strategy (LoGo) which further improves the performance. Specifically, we operate on the whole image and on patches to learn global and local features, respectively. The proposed Medical Transformer (MedT) uses the LoGo training strategy on the Gated Axial Attention U-Net.
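For intuition, here is a minimal sketch of the gating idea in Gated Axial-Attention, assuming a single-axis (1D) attention formulation: learnable scalar gates scale the relative-positional terms so their influence can be kept small when positional biases cannot be learned reliably from limited data. The class name, head count, and gate initialisation below are illustrative assumptions, not the repository's exact module.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAxialAttention1D(nn.Module):
    """Illustrative self-attention along one spatial axis with gated positional terms."""

    def __init__(self, dim, heads=8, axis_len=128):
        super().__init__()
        self.heads = heads
        self.dim_head = dim // heads
        self.to_qkv = nn.Conv1d(dim, dim * 3, kernel_size=1, bias=False)
        # Relative positional embeddings for queries, keys and values.
        self.rel_q = nn.Parameter(torch.randn(self.dim_head, 2 * axis_len - 1))
        self.rel_k = nn.Parameter(torch.randn(self.dim_head, 2 * axis_len - 1))
        self.rel_v = nn.Parameter(torch.randn(self.dim_head, 2 * axis_len - 1))
        # Learnable gates (the "control mechanism"), initialised small so the
        # positional terms start with little influence and grow only if useful.
        self.g_q = nn.Parameter(torch.tensor(0.1))
        self.g_k = nn.Parameter(torch.tensor(0.1))
        self.g_v = nn.Parameter(torch.tensor(0.1))
        idx = torch.arange(axis_len)
        self.register_buffer("rel_idx", idx[None, :] - idx[:, None] + axis_len - 1)

    def forward(self, x):
        # x: (batch, dim, length) along one axis; length must equal axis_len.
        b, d, n = x.shape
        qkv = self.to_qkv(x).reshape(b, self.heads, 3 * self.dim_head, n)
        q, k, v = qkv.chunk(3, dim=2)            # each (b, heads, dim_head, n)
        r_q = self.rel_q[:, self.rel_idx]        # (dim_head, n, n)
        r_k = self.rel_k[:, self.rel_idx]
        r_v = self.rel_v[:, self.rel_idx]
        logits = torch.einsum("bhdi,bhdj->bhij", q, k)
        logits = logits + self.g_q * torch.einsum("bhdi,dij->bhij", q, r_q)
        logits = logits + self.g_k * torch.einsum("bhdj,dij->bhij", k, r_k)
        attn = F.softmax(logits / self.dim_head ** 0.5, dim=-1)
        out = torch.einsum("bhij,bhdj->bhdi", attn, v)
        out = out + self.g_v * torch.einsum("bhij,dij->bhdi", attn, r_v)
        return out.reshape(b, d, n)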

Using the code:

  • Clone this repository:
git clone https://github.com/jeya-maria-jose/Medical-Transformer
cd Medical-Transformer

The code is stable with Python 3.6.10 and PyTorch 1.4.0.

To install all the dependencies using conda:

conda env create -f environment.yml
conda activate medt

To install all the dependencies using pip:

pip install -r requirements.txt

Links for downloading the public Datasets:

  1. MoNuSeG Dataset - Link (Original)
  2. GLAS Dataset - Link (Original)
  3. The Brain Anatomy US dataset from the paper will be made public soon!

Using the Code for your dataset

Dataset Preparation

Prepare the dataset in the following format for easy use of the code. The train, validation, and test folders should each contain two subfolders: img and labelcol. Make sure the images and their corresponding segmentation masks are placed under these folders and share the same file name so they can be matched easily. If you prefer not to prepare the dataset in this format, change the data loaders to suit your needs; a minimal loader sketch for this layout is included after the note on ground-truth values below.

Train Folder-----
      img----
          0001.png
          0002.png
          .......
      labelcol---
          0001.png
          0002.png
          .......
Validation Folder-----
      img----
          0001.png
          0002.png
          .......
      labelcol---
          0001.png
          0002.png
          .......
Test Folder-----
      img----
          0001.png
          0002.png
          .......
      labelcol---
          0001.png
          0002.png
          .......

  • The ground truth images should have pixel values corresponding to the labels. For example, in binary segmentation the pixels in the GT should be 0 or 255.
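If you do write your own loader, a minimal PyTorch Dataset for the img/labelcol layout described above might look like the sketch below. The class name, resizing behaviour, and the 0/255 thresholding are illustrative assumptions, not the repository's own dataloader.

import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class ImgLabelcolDataset(Dataset):
    """Illustrative loader for a folder containing img/ and labelcol/ subfolders."""

    def __init__(self, root, img_size=128, grayscale=False):
        self.img_dir = os.path.join(root, "img")
        self.label_dir = os.path.join(root, "labelcol")
        self.names = sorted(os.listdir(self.img_dir))
        self.img_size = img_size
        self.grayscale = grayscale

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]                   # mask shares the image's file name
        mode = "L" if self.grayscale else "RGB"
        img = Image.open(os.path.join(self.img_dir, name)).convert(mode)
        mask = Image.open(os.path.join(self.label_dir, name)).convert("L")
        img = img.resize((self.img_size, self.img_size))
        mask = mask.resize((self.img_size, self.img_size), Image.NEAREST)

        img = torch.from_numpy(np.array(img, dtype=np.float32) / 255.0)
        img = img.unsqueeze(0) if img.ndim == 2 else img.permute(2, 0, 1)
        # Binary GT stored as 0/255: threshold to class indices {0, 1}.
        mask = torch.from_numpy((np.array(mask) > 127).astype(np.int64))
        return img, mask

Such a dataset can then be wrapped in a standard torch.utils.data.DataLoader for training and testing.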

Training Command:

python train.py --train_dataset "enter train directory" --val_dataset "enter validation directory" --direc 'path for results to be saved' --batch_size 4 --epoch 400 --save_freq 10 --modelname "gatedaxialunet" --learning_rate 0.001 --imgsize 128 --gray "no"
Change the --modelname argument to "MedT" or "logo" to train those variants; a rough sketch of the local-global (LoGo) idea used by MedT follows.
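For reference, the local-global training idea can be sketched roughly as follows: a shallow global branch sees the whole image while a deeper local branch runs on a grid of patches, and the two prediction maps are fused. The branch modules, the 4x4 patch grid, and the 1x1-convolution fusion here are assumptions for illustration, not the exact MedT architecture.

import torch
import torch.nn as nn

class LoGoSegmenter(nn.Module):
    """Illustrative local-global wrapper fusing a whole-image branch with a patch branch."""

    def __init__(self, global_branch, local_branch, num_classes=2, grid=4):
        super().__init__()
        self.global_branch = global_branch   # shallow network applied to the full image
        self.local_branch = local_branch     # deeper network applied to patches
        self.grid = grid                     # image is split into grid x grid patches
        self.fuse = nn.Conv2d(2 * num_classes, num_classes, kernel_size=1)

    def forward(self, x):
        # Both branches are assumed to return (batch, num_classes, h, w) maps;
        # h and w must be divisible by the grid size.
        b, c, h, w = x.shape
        g = self.global_branch(x)

        ph, pw = h // self.grid, w // self.grid
        rows = []
        for i in range(self.grid):
            row = [self.local_branch(x[:, :, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw])
                   for j in range(self.grid)]
            rows.append(torch.cat(row, dim=3))   # stitch patches back along width
        local_out = torch.cat(rows, dim=2)       # then along height

        return self.fuse(torch.cat([g, local_out], dim=1))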

Testing Command:

python test.py --loaddirec "./saved_model_path/model_name.pth" --val_dataset "test dataset directory" --direc 'path for results to be saved' --batch_size 1 --modelname "gatedaxialunet" --imgsize 128 --gray "no"

The results, including the predicted segmentation maps, will be placed in the results folder along with the model weights. Run the performance metrics code in MATLAB to calculate the F1 score and mIoU.
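If MATLAB is not available, an equivalent per-image F1 (Dice) and IoU computation on binary masks can be sketched in Python as below; this is a convenience stand-in, not the repository's evaluation script. Averaging the per-image values over the test set gives mean F1 and mIoU.

import numpy as np

def f1_and_iou(pred, gt, threshold=127):
    """pred, gt: 2-D arrays for one image, with foreground stored as values > threshold."""
    p = np.asarray(pred) > threshold
    g = np.asarray(gt) > threshold
    tp = np.logical_and(p, g).sum()
    fp = np.logical_and(p, ~g).sum()
    fn = np.logical_and(~p, g).sum()
    f1 = 2 * tp / (2 * tp + fp + fn + 1e-8)   # Dice coefficient / F1 score
    iou = tp / (tp + fp + fn + 1e-8)          # Jaccard index / IoU
    return f1, iou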

Notes:

Note that these experiments were conducted on an Nvidia Quadro 8000 GPU with 48 GB of memory.

Acknowledgement:

The dataloader code is inspired by pytorch-UNet. The axial attention code is adapted from axial-deeplab. Thanks to Pavan for the Google Colab guide.

Citation:

@article{valanarasu2021medical,
  title={Medical Transformer: Gated Axial-Attention for Medical Image Segmentation},
  author={Valanarasu, Jeya Maria Jose and Oza, Poojan and Hacihaliloglu, Ilker and Patel, Vishal M},
  journal={arXiv preprint arXiv:2102.10662},
  year={2021}
}

Open an issue or email me directly if you have any queries or suggestions.
