
Medical Image Segmentation with Guided Attention

This repository contains the code of our paper:
"'Multi-scale self-guided attention for medical image segmentation'", which has been recently accepted at the Journal of Biomedical And Health Informatics (JBHI).

Abstract

Even though convolutional neural networks (CNNs) are driving progress in medical image segmentation, standard models still have some drawbacks. First, the use of multi-scale approaches, i.e., encoder-decoder architectures, leads to a redundant use of information, where similar low-level features are extracted multiple times at multiple scales. Second, long-range feature dependencies are not efficiently modeled, resulting in non-optimal discriminative feature representations associated with each semantic class. In this paper, we attempt to overcome these limitations with the proposed architecture by capturing richer contextual dependencies through guided self-attention mechanisms. This approach is able to integrate local features with their corresponding global dependencies, as well as highlight interdependent channel maps in an adaptive manner. Further, the additional loss between different modules guides the attention mechanisms to neglect irrelevant information and focus on the more discriminative regions of the image by emphasizing relevant feature associations. We evaluate the proposed model in the context of abdominal organ segmentation on magnetic resonance imaging (MRI). A series of ablation experiments supports the importance of these attention modules in the proposed architecture. In addition, compared to other state-of-the-art segmentation networks, our model yields better segmentation performance, increasing the accuracy of the predictions while reducing the standard deviation. This demonstrates the efficiency of our approach in generating precise and reliable automatic segmentations of medical images.

Design of the Proposed Model

[Figure: design of the proposed model]
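
For intuition, the sketch below illustrates the two kinds of self-attention the paper builds on: a position-attention block that integrates each spatial location's features with its global dependencies, and a channel-attention block that adaptively re-weights interdependent channel maps. This is a minimal, generic PyTorch sketch of these mechanisms, not the modules from this repository; all class names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionAttention(nn.Module):
    """Self-attention over spatial positions: each location aggregates
    features from all other locations, weighted by similarity."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                     # (B, C//8, HW)
        attn = F.softmax(q @ k, dim=-1)                # (B, HW, HW)
        v = self.value(x).flatten(2)                   # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class ChannelAttention(nn.Module):
    """Self-attention over channels: re-weights channel maps by their
    pairwise similarity, without extra convolutions."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        feat = x.flatten(2)                                     # (B, C, HW)
        attn = F.softmax(feat @ feat.transpose(1, 2), dim=-1)   # (B, C, C)
        out = (attn @ feat).view(b, c, h, w)
        return self.gamma * out + x

# Usage example on a dummy feature map:
x = torch.randn(2, 64, 32, 32)
y = ChannelAttention()(PositionAttention(64)(x))
```

The learnable gamma scalars start at zero, so each block initially behaves as an identity mapping and the attention contribution is blended in gradually during training.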

Results

[Figure: segmentation results]

Requirements

  • The code is written in Python 3.6 and requires PyTorch 1.1.0 (a quick version check is sketched below)
  • Install the dependencies using pip install -r requirements.txt
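
As a quick sanity check before training, a trivial snippet like the following (illustrative, not part of the repository) confirms that the interpreter and PyTorch versions match what the code was written against:

```python
import sys
import torch

# The repository targets Python 3.6 and PyTorch 1.1.0.
assert sys.version_info[:2] == (3, 6), f"expected Python 3.6, got {sys.version}"
assert torch.__version__.startswith("1.1"), f"expected PyTorch 1.1.x, got {torch.__version__}"
```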

Preparing your data

You have to split your data into three folders: train/val/test. Each folder must contain two sub-folders, Img and GT, holding the PNG files for the images and their corresponding ground truths. The naming of these images is important: the code that temporarily saves results (for example, to compute the 3D DSC) relies on the file names.

Specifically, the convention we follow for the names is as follows:

  • Subj_Xslice_Y.png, where X is the subject number (or ID) and Y is the slice number within the whole volume. Do not zero-pad the numbers: the first slice should be 1, not 01.
  • The corresponding mask must be named exactly the same way as its image.

An example image is provided in the dataset folder.
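
Because the code relies on this convention, it can help to sanity-check a dataset before training. The snippet below is a hypothetical helper, not part of this repository; the data root path is a placeholder.

```python
import re
from pathlib import Path

# Expected file name pattern: Subj_Xslice_Y.png (no zero-padded numbers).
NAME_RE = re.compile(r"^Subj_(\d+)slice_(\d+)\.png$")

def validate_layout(root):
    """Check the train/val/test -> Img/GT layout and naming convention."""
    root = Path(root)
    for split in ("train", "val", "test"):
        images = sorted(p.name for p in (root / split / "Img").glob("*.png"))
        masks = sorted(p.name for p in (root / split / "GT").glob("*.png"))
        if images != masks:
            raise ValueError(f"{split}: Img and GT file names do not match")
        for name in images:
            m = NAME_RE.match(name)
            if m is None:
                raise ValueError(f"{split}/{name}: expected Subj_Xslice_Y.png")
            if any(n != str(int(n)) for n in m.groups()):
                raise ValueError(f"{split}/{name}: numbers must not be zero-padded")

validate_layout("path/to/your/data")  # placeholder path
```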

Running the code

Note: Set the data path appropriately in src/main.py before running the code.

To run the code you simply need to use the following script:

bash train.sh

If you use this code for your research, please consider citing our paper:

@article{sinha2020multi,
  title={Multi-scale self-guided attention for medical image segmentation},
  author={Sinha, A and Dolz, J},
  journal={IEEE Journal of Biomedical and Health Informatics},
  year={2020}
}