
Wenxuan-1119 / TransBTS

License: Apache-2.0
This repo provides the official code for: 1) TransBTS: Multimodal Brain Tumor Segmentation Using Transformer (https://arxiv.org/abs/2103.04430), accepted by MICCAI 2021; 2) TransBTSV2: Towards Better and More Efficient Volumetric Segmentation of Medical Images (https://arxiv.org/abs/2201.12785).

Programming Languages

Python

Projects that are alternatives of or similar to TransBTS

Mmsegmentation
OpenMMLab Semantic Segmentation Toolbox and Benchmark.
Stars: ✭ 2,875 (+1031.89%)
Mutual labels:  transformer, medical-image-segmentation
libai
LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training
Stars: ✭ 284 (+11.81%)
Mutual labels:  transformer
TianChi AIEarth
TianChi AIEarth Contest Solution
Stars: ✭ 57 (-77.56%)
Mutual labels:  transformer
ViTs-vs-CNNs
[NeurIPS 2021]: Are Transformers More Robust Than CNNs? (Pytorch implementation & checkpoints)
Stars: ✭ 145 (-42.91%)
Mutual labels:  transformer
t5-japanese
Codes to pre-train Japanese T5 models
Stars: ✭ 39 (-84.65%)
Mutual labels:  transformer
ru-dalle
Generate images from texts. In Russian
Stars: ✭ 1,606 (+532.28%)
Mutual labels:  transformer
vietnamese-roberta
A Robustly Optimized BERT Pretraining Approach for Vietnamese
Stars: ✭ 22 (-91.34%)
Mutual labels:  transformer
cape
Continuous Augmented Positional Embeddings (CAPE) implementation for PyTorch
Stars: ✭ 29 (-88.58%)
Mutual labels:  transformer
Cross-lingual-Summarization
Zero-Shot Cross-Lingual Abstractive Sentence Summarization through Teaching Generation and Attention
Stars: ✭ 28 (-88.98%)
Mutual labels:  transformer
R-MeN
Transformer-based Memory Networks for Knowledge Graph Embeddings (ACL 2020) (Pytorch and Tensorflow)
Stars: ✭ 74 (-70.87%)
Mutual labels:  transformer
Representation-Learning-for-Information-Extraction
Pytorch implementation of Paper by Google Research - Representation Learning for Information Extraction from Form-like Documents.
Stars: ✭ 82 (-67.72%)
Mutual labels:  transformer
bytekit
A Java library for byte manipulation (not a bytecode library)
Stars: ✭ 40 (-84.25%)
Mutual labels:  transformer
php-serializer
Serialize PHP variables, including objects, in any format. Supports unserializing them too.
Stars: ✭ 47 (-81.5%)
Mutual labels:  transformer
MASTER-pytorch
Code for the paper "MASTER: Multi-Aspect Non-local Network for Scene Text Recognition" (Pattern Recognition 2021)
Stars: ✭ 263 (+3.54%)
Mutual labels:  transformer
EfficientUNetPlusPlus
Decoder architecture based on the UNet++. Combining residual bottlenecks with depthwise convolutions and attention mechanisms, it outperforms the UNet++ in a coronary artery segmentation task, while being significantly more computationally efficient.
Stars: ✭ 37 (-85.43%)
Mutual labels:  medical-image-segmentation
query-selector
Long-term series forecasting with Query Selector, an efficient model of sparse attention
Stars: ✭ 63 (-75.2%)
Mutual labels:  transformer
keras-vision-transformer
The Tensorflow, Keras implementation of Swin-Transformer and Swin-UNET
Stars: ✭ 91 (-64.17%)
Mutual labels:  transformer
fastT5
⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x.
Stars: ✭ 421 (+65.75%)
Mutual labels:  transformer
kaggle-champs
Code for the CHAMPS Predicting Molecular Properties Kaggle competition
Stars: ✭ 49 (-80.71%)
Mutual labels:  transformer
PraNet
PraNet: Parallel Reverse Attention Network for Polyp Segmentation, MICCAI 2020 (Oral). Code using Jittor Framework is available.
Stars: ✭ 298 (+17.32%)
Mutual labels:  medical-image-segmentation

TransBTS (MICCAI 2021) & TransBTSV2 (To Be Updated)

This repo is the official implementation for:

  1. TransBTS: Multimodal Brain Tumor Segmentation Using Transformer.

  2. TransBTSV2: Towards Better and More Efficient Volumetric Segmentation of Medical Images.

The details of our TransBTS and TransBTSV2 can be found in the models directory (TransBTS and TransBTSV2) of this repo or in the original papers.

Requirements

  • python 3.7
  • pytorch 1.6.0
  • torchvision 0.7.0
  • pickle
  • nibabel
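
For a fresh environment, a minimal install sketch (pickle ships with the Python standard library, so only the remaining packages need to be installed; the exact CUDA build of the PyTorch wheel may differ on your machine):

pip3 install torch==1.6.0 torchvision==0.7.0 nibabel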

Data Acquisition

  • The multimodal brain tumor datasets (BraTS 2019 & BraTS 2020) can be acquired from here.

  • The liver tumor dataset (LiTS 2017) can be acquired from here.

  • The kidney tumor dataset (KiTS 2019) can be acquired from here.

Data Preprocessing (BraTS 2019 & BraTS 2020)

After downloading the dataset from here, data preprocessing is needed to convert the .nii files into .pkl files and perform data normalization.

python3 preprocess.py
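
Conceptually, the conversion loads each case's .nii volumes with nibabel, applies intensity normalization, and dumps the result as a .pkl file. A minimal sketch of this idea (the file paths, the normalization scheme, and the output layout here are illustrative assumptions, not the repo's exact preprocess.py):

import pickle
import numpy as np
import nibabel as nib

def normalize(volume):
    # z-score normalization computed over non-zero (brain) voxels only
    mask = volume > 0
    mean, std = volume[mask].mean(), volume[mask].std()
    out = np.zeros_like(volume, dtype=np.float32)
    out[mask] = (volume[mask] - mean) / (std + 1e-8)
    return out

def case_to_pkl(modality_paths, label_path, out_path):
    # stack the MRI modalities into one array and store it together with the label
    image = np.stack([normalize(nib.load(p).get_fdata()) for p in modality_paths], axis=-1)
    label = nib.load(label_path).get_fdata().astype(np.uint8)
    with open(out_path, "wb") as f:
        pickle.dump((image, label), f)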

Training

Run the training script on the BraTS dataset. Distributed training is available for training the proposed TransBTS, where --nproc_per_node sets the number of GPUs and --master_port sets the port number.

python3 -m torch.distributed.launch --nproc_per_node=4 --master_port 20003 train.py
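
Under torch.distributed.launch, one process is spawned per GPU and each receives a --local_rank argument, while --master_port is exposed to the processes through the MASTER_PORT environment variable. A minimal sketch of the distributed setup this launch command expects (the Conv3d model is a stand-in, not the actual TransBTS network built in train.py):

import argparse
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # injected by torch.distributed.launch
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend="nccl", init_method="env://")  # reads MASTER_ADDR/MASTER_PORT from the environment

# stand-in model; train.py wraps the real TransBTS network in DDP the same way
model = torch.nn.Conv3d(4, 3, kernel_size=3, padding=1).cuda(args.local_rank)
model = DDP(model, device_ids=[args.local_rank], output_device=args.local_rank)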

Testing

If you want to test the model which has been trained on the BraTS dataset, run the testing script as follows.

python3 test.py

After the testing process finishes, you can upload the submission file here for the final Dice scores.
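
The leaderboard reports a Dice score per tumor sub-region. For a quick local sanity check on a labelled case (this helper is not part of the repo), the binary Dice coefficient between a predicted mask and a ground-truth mask can be computed as:

import numpy as np

def dice_score(pred, target, eps=1e-8):
    # pred and target are binary masks of the same shape (e.g. one tumor sub-region)
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)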

Citation

If you use our code or models in your work or find them helpful, please cite the corresponding paper:

  • TransBTS:
@inproceedings{wang2021transbts,
  title={TransBTS: Multimodal Brain Tumor Segmentation Using Transformer},  
  author={Wang, Wenxuan and Chen, Chen and Ding, Meng and Li, Jiangyun and Yu, Hong and Zha, Sen},
  booktitle={International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI)},
  year={2021}
}
  • TransBTSV2:
@article{li2022transbtsv2,
  title={TransBTSV2: Wider Instead of Deeper Transformer for Medical Image Segmentation},
  author={Li, Jiangyun and Wang, Wenxuan and Chen, Chen and Zhang, Tianxiang and Zha, Sen and Yu, Hong and Wang, Jing},
  journal={arXiv preprint arXiv:2201.12785},
  year={2022}
}

Reference

1. setr-pytorch

2. BraTS2017
