
himashi92 / VT-UNet

License: MIT license
[MICCAI2022] This is an official PyTorch implementation of A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation


Projects that are alternatives of or similar to VT-UNet

semantic-segmentation
SOTA Semantic Segmentation Models in PyTorch
Stars: ✭ 464 (+207.28%)
Mutual labels:  transformer, semantic-segmentation, vision-transformer
image-classification
A collection of SOTA Image Classification Models in PyTorch
Stars: ✭ 70 (-53.64%)
Mutual labels:  transformer, vision-transformer
towhee
Towhee is a framework that is dedicated to making neural data processing pipelines simple and fast.
Stars: ✭ 821 (+443.71%)
Mutual labels:  transformer, vision-transformer
Walk-Transformer
From Random Walks to Transformer for Learning Node Embeddings (ECML-PKDD 2020) (In Pytorch and Tensorflow)
Stars: ✭ 26 (-82.78%)
Mutual labels:  transformer, pytorch-implementation
AdaSpeech
AdaSpeech: Adaptive Text to Speech for Custom Voice
Stars: ✭ 108 (-28.48%)
Mutual labels:  transformer, pytorch-implementation
TransMorph Transformer for Medical Image Registration
TransMorph: Transformer for Unsupervised Medical Image Registration (PyTorch)
Stars: ✭ 130 (-13.91%)
Mutual labels:  transformer, vision-transformer
LaTeX-OCR
pix2tex: Using a ViT to convert images of equations into LaTeX code.
Stars: ✭ 1,566 (+937.09%)
Mutual labels:  transformer, vision-transformer
TitleStylist
Source code for our "TitleStylist" paper at ACL 2020
Stars: ✭ 72 (-52.32%)
Mutual labels:  transformer, pytorch-implementation
Ghostnet
CV backbones including GhostNet, TinyNet and TNT, developed by Huawei Noah's Ark Lab.
Stars: ✭ 1,744 (+1054.97%)
Mutual labels:  transformer, vision-transformer
Mmsegmentation
OpenMMLab Semantic Segmentation Toolbox and Benchmark.
Stars: ✭ 2,875 (+1803.97%)
Mutual labels:  transformer, semantic-segmentation
Hrnet Semantic Segmentation
The OCR approach is rephrased as Segmentation Transformer: https://arxiv.org/abs/1909.11065. This is an official implementation of semantic segmentation for HRNet. https://arxiv.org/abs/1908.07919
Stars: ✭ 2,369 (+1468.87%)
Mutual labels:  transformer, semantic-segmentation
SegFormer
Official PyTorch implementation of SegFormer
Stars: ✭ 1,264 (+737.09%)
Mutual labels:  transformer, semantic-segmentation
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+2163.58%)
Mutual labels:  transformer, pytorch-implementation
YOLOS
You Only Look at One Sequence (NeurIPS 2021)
Stars: ✭ 612 (+305.3%)
Mutual labels:  transformer, vision-transformer
ClusterTransformer
Topic clustering library built on Transformer embeddings and cosine similarity metrics. Compatible with all BERT base transformers from huggingface.
Stars: ✭ 36 (-76.16%)
Mutual labels:  transformer, pytorch-implementation
visualization
a collection of visualization functions
Stars: ✭ 189 (+25.17%)
Mutual labels:  transformer, vision-transformer
libai
LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training
Stars: ✭ 284 (+88.08%)
Mutual labels:  transformer, vision-transformer
transformer-ls
Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021).
Stars: ✭ 201 (+33.11%)
Mutual labels:  transformer, vision-transformer
SwinIR
SwinIR: Image Restoration Using Swin Transformer (official repository)
Stars: ✭ 1,260 (+734.44%)
Mutual labels:  transformer, vision-transformer
Setr Pytorch
Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers
Stars: ✭ 96 (-36.42%)
Mutual labels:  transformer, semantic-segmentation

VT-UNet

This repo contains the supported PyTorch code and configuration files to reproduce the 3D medical image segmentation results of VT-UNet.

VT-UNet Architecture

Our previous code for A Volumetric Transformer for Accurate 3D Tumor Segmentation can be found inside the version 1 folder.

VT-UNet: A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation

Parts of the code are borrowed from nn-UNet.

System requirements

This software was originally developed and run on a system running Ubuntu.

Dataset Preparation

  • Create a folder named DATASET under VTUNet
  • Download the MSD BraTS dataset (http://medicaldecathlon.com/) and put it under DATASET/vtunet_raw/vtunet_raw_data
  • Rename the dataset folder to Task03_tumor
  • Move the dataset.json file into Task03_tumor (the expected layout is sketched below)
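
A rough sketch of the resulting directory tree. The contents of Task03_tumor (imagesTr, labelsTr, imagesTs, dataset.json) follow the standard Medical Segmentation Decathlon layout and are an assumption here rather than something stated in the original instructions:

    VTUNet/
    └── DATASET/
        └── vtunet_raw/
            └── vtunet_raw_data/
                └── Task03_tumor/
                    ├── imagesTr/
                    ├── labelsTr/
                    ├── imagesTs/
                    └── dataset.json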

Pre-trained weights

Create Environment variables

vi ~/.bashrc

  • export vtunet_raw_data_base="/home/VTUNet/DATASET/vtunet_raw/vtunet_raw_data"
  • export vtunet_preprocessed="/home/VTUNet/DATASET/vtunet_preprocessed"
  • export RESULTS_FOLDER_VTUNET="/home/VTUNet/DATASET/vtunet_trained_models"

source ~/.bashrc
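
After sourcing, a quick sanity check (not part of the original instructions) is to echo the variables and confirm they point at the paths above:

    echo $vtunet_raw_data_base
    echo $vtunet_preprocessed
    echo $RESULTS_FOLDER_VTUNET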

Environment setup

Create a virtual environment

  • virtualenv -p /usr/bin/python3.8 venv
  • source venv/bin/activate

Install torch
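
The README does not pin a PyTorch version at this step. A minimal install is shown below; in practice, pick the wheel from https://pytorch.org that matches your Python 3.8 environment and CUDA driver:

    pip install torch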

Install other dependencies

  • pip install -r requirements.txt

Preprocess Data

cd VTUNet

pip install -e .

  • vtunet_convert_decathlon_task -i /home/VTUNet/DATASET/vtunet_raw/vtunet_raw_data/Task03_tumor
  • vtunet_plan_and_preprocess -t 3
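
Assuming the converter follows the usual nnU-Net Decathlon-conversion behaviour, the task is re-created under a three-digit ID (Task003_tumor, the name used in the test commands below), and the planning step writes its output to the vtunet_preprocessed folder. A rough, assumed sketch:

    DATASET/vtunet_raw/vtunet_raw_data/vtunet_raw_data/
    └── Task003_tumor/
        ├── imagesTr/        # one file per modality after conversion
        ├── labelsTr/
        ├── imagesTs/
        └── dataset.json
    DATASET/vtunet_preprocessed/
    └── Task003_tumor/       # created by vtunet_plan_and_preprocess -t 3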

Train Model

cd vtunet

  • CUDA_VISIBLE_DEVICES=0 nohup vtunet_train 3d_fullres vtunetTrainerV2_vtunet_tumor 3 0 &> small.out &
  • CUDA_VISIBLE_DEVICES=0 nohup vtunet_train 3d_fullres vtunetTrainerV2_vtunet_tumor_base 3 0 &> base.out &
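
The positional arguments follow the nnU-Net convention of configuration, trainer class, task ID and fold. For example, training fold 1 of the small model (an illustrative variation, not a command from the original README) would be:

    CUDA_VISIBLE_DEVICES=0 nohup vtunet_train 3d_fullres vtunetTrainerV2_vtunet_tumor 3 1 &> small_fold1.out &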

Test Model

cd /home/VTUNet/DATASET/vtunet_raw/vtunet_raw_data/vtunet_raw_data/Task003_tumor/

  • CUDA_VISIBLE_DEVICES=0 vtunet_predict -i imagesTs -o inferTs/vtunet_tumor -m 3d_fullres -t 3 -f 0 -chk model_best -tr vtunetTrainerV2_vtunet_tumor
  • python vtunet/inference_tumor.py vtunet_tumor
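
Assuming nnU-Net's prediction conventions carry over, the segmentations are written as one NIfTI file per test case into the folder passed with -o, which can be inspected directly:

    ls inferTs/vtunet_tumor    # one .nii.gz segmentation per case in imagesTs (assumed layout)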

Trained Model Weights

  • VT-UNet-S - (fold 0 only)
  • VT-UNet-B (To be updated)

Acknowledgements

This repository makes liberal use of code from open_brats2020, Swin Transformer, Video Swin Transformer, Swin-Unet, nnUNet and nnFormer.

References

Citing VT-UNet

    @inproceedings{peiris2022robust,
      title={A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation},
      author={Peiris, Himashi and Hayat, Munawar and Chen, Zhaolin and Egan, Gary and Harandi, Mehrtash},
      booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
      pages={162--172},
      year={2022},
      organization={Springer}
    }