
YuanXue1993 / Segan

License: MIT
SegAN: Semantic Segmentation with Adversarial Learning

Programming Languages

python

Projects that are alternatives of or similar to Segan

ResUNetPlusPlus-with-CRF-and-TTA
ResUNet++, CRF, and TTA for segmentation of medical images (IEEE JBHI)
Stars: ✭ 98 (-31.47%)
Mutual labels:  medical-imaging, semantic-segmentation
kits19-challenge
Kidney Tumor Segmentation Challenge 2019
Stars: ✭ 44 (-69.23%)
Mutual labels:  medical-imaging, semantic-segmentation
Clan
(CVPR 2019 Oral) Taking A Closer Look at Domain Shift: Category-level Adversaries for Semantics Consistent Domain Adaptation
Stars: ✭ 248 (+73.43%)
Mutual labels:  semantic-segmentation, adversarial-learning
unet-pytorch
An example implementation of the UNet model for semantic segmentation
Stars: ✭ 17 (-88.11%)
Mutual labels:  medical-imaging, semantic-segmentation
Cascaded Fcn
Source code for the MICCAI 2016 paper "Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields"
Stars: ✭ 296 (+106.99%)
Mutual labels:  semantic-segmentation, medical-imaging
Adaptsegnet
Learning to Adapt Structured Output Space for Semantic Segmentation, CVPR 2018 (spotlight)
Stars: ✭ 654 (+357.34%)
Mutual labels:  semantic-segmentation, adversarial-learning
nobrainer
A framework for developing neural network models for 3D image processing.
Stars: ✭ 123 (-13.99%)
Mutual labels:  medical-imaging, semantic-segmentation
cool-papers-in-pytorch
Reimplementing cool papers in PyTorch...
Stars: ✭ 21 (-85.31%)
Mutual labels:  semantic-segmentation, adversarial-learning
Kiu Net Pytorch
Official PyTorch code of KiU-Net for Image Segmentation - MICCAI 2020 (Oral)
Stars: ✭ 134 (-6.29%)
Mutual labels:  semantic-segmentation, medical-imaging
Advsemiseg
Adversarial Learning for Semi-supervised Semantic Segmentation, BMVC 2018
Stars: ✭ 382 (+167.13%)
Mutual labels:  semantic-segmentation, adversarial-learning
Medicaldetectiontoolkit
The Medical Detection Toolkit contains 2D + 3D implementations of prevalent object detectors such as Mask R-CNN, Retina Net, Retina U-Net, as well as a training and inference framework focused on dealing with medical images.
Stars: ✭ 917 (+541.26%)
Mutual labels:  semantic-segmentation, medical-imaging
Pytorch Fcn Easiest Demo
PyTorch Implementation of Fully Convolutional Networks (a very simple and easy demo).
Stars: ✭ 138 (-3.5%)
Mutual labels:  semantic-segmentation
Vision4j Collection
Collection of computer vision models, ready to be included in a JVM project
Stars: ✭ 132 (-7.69%)
Mutual labels:  semantic-segmentation
Awesome Semantic Segmentation Pytorch
Semantic Segmentation on PyTorch (include FCN, PSPNet, Deeplabv3, Deeplabv3+, DANet, DenseASPP, BiSeNet, EncNet, DUNet, ICNet, ENet, OCNet, CCNet, PSANet, CGNet, ESPNet, LEDNet, DFANet)
Stars: ✭ 2,022 (+1313.99%)
Mutual labels:  semantic-segmentation
Paz
Hierarchical perception library in Python for pose estimation, object detection, instance segmentation, keypoint estimation, face recognition, etc.
Stars: ✭ 131 (-8.39%)
Mutual labels:  semantic-segmentation
Spatiotemporalsegmentation
4D Spatio-Temporal Semantic Segmentation on a 3D video (a sequence of 3D scans)
Stars: ✭ 141 (-1.4%)
Mutual labels:  semantic-segmentation
Multi Task Refinenet
Multi-Task (Joint Segmentation / Depth / Surface Normals) Real-Time Light-Weight RefineNet
Stars: ✭ 139 (-2.8%)
Mutual labels:  semantic-segmentation
Ganspapercollection
Stars: ✭ 130 (-9.09%)
Mutual labels:  medical-imaging
Segsort
SegSort: Segmentation by Discriminative Sorting of Segments
Stars: ✭ 130 (-9.09%)
Mutual labels:  semantic-segmentation
Gate
Official public repository of Gate
Stars: ✭ 129 (-9.79%)
Mutual labels:  medical-imaging

SegAN: Semantic Segmentation with Adversarial Learning

A PyTorch implementation of the basic ideas from the paper SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation by Yuan Xue, Tao Xu, Han Zhang, L. Rodney Long, and Xiaolei Huang.

The data and architecture mainly follow the paper Adversarial Learning with Multi-Scale Loss for Skin Lesion Segmentation by Yuan Xue, Tao Xu, and Xiaolei Huang.
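The heart of the method is the multi-scale L1 loss: the critic extracts features at several depths from the input image masked by the predicted mask and, separately, by the ground-truth mask, and the loss is the mean absolute difference between the two sets of feature maps; the segmentor minimizes it while the critic maximizes it. The sketch below is a minimal illustration of that idea, not the repository's actual networks: the Critic class, its layer sizes, and multiscale_l1_loss are hypothetical.

import torch
import torch.nn as nn

class Critic(nn.Module):
    """Toy critic that returns feature maps from several depths (hypothetical sizes)."""
    def __init__(self, in_channels=3):
        super(Critic, self).__init__()
        self.block1 = nn.Sequential(nn.Conv2d(in_channels, 32, 4, 2, 1), nn.LeakyReLU(0.2))
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2))
        self.block3 = nn.Sequential(nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2))

    def forward(self, x):
        f1 = self.block1(x)   # features at three successive scales
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        return [f1, f2, f3]

def multiscale_l1_loss(critic, image, pred_mask, true_mask):
    """Mean absolute error between critic features of the image masked
    by the predicted mask and by the ground-truth mask."""
    pred_feats = critic(image * pred_mask)   # an (N, 1, H, W) mask broadcasts over channels
    true_feats = critic(image * true_mask)
    losses = [torch.mean(torch.abs(p - t)) for p, t in zip(pred_feats, true_feats)]
    return sum(losses) / len(losses)

During training, the segmentor and the critic are updated alternately with opposite signs on this loss.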

Dependencies

Python 2.7

PyTorch 1.2

Data

Training

  • The steps below train a SegAN model on the ISIC skin lesion segmentation dataset.
    • Run: CUDA_VISIBLE_DEVICES=X python train.py --cuda, where X is your GPU id. You can change the training hyperparameters as you wish; the default output folder is ~/outputs. For now, only single-GPU training is supported. The training images will be saved in the ~/outputs folder.
    • The training code also runs validation: validation results are reported every 10 epochs, and validation images are likewise saved in the ~/outputs folder.
  • To try your own dataset, preprocess your data into a format similar to this skin lesion segmentation dataset and place it in a folder like ~/ISIC-2017_Training_Data. Natural image datasets can be used directly; for 3D medical data such as brain MRI scans, you first need to extract 2D slices from the original volumes (see the sketch after this list). If your dataset has more than one label class, you can train multiple S1-1C models as described in the SegAN paper.
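For the 3D-to-2D preprocessing mentioned above, a minimal sketch is given below, assuming the scans are stored as NIfTI volumes. nibabel and Pillow are used only for illustration and are not dependencies of this repository, and volume_to_slices is a hypothetical helper; adapt it to your data layout.

import os
import numpy as np
import nibabel as nib
from PIL import Image

def volume_to_slices(nifti_path, out_dir):
    """Save every axial slice of a 3D scan as an 8-bit PNG (illustrative only)."""
    vol = nib.load(nifti_path).get_fdata()
    # Rescale intensities to [0, 255] so slices can be stored as images.
    vol = (vol - vol.min()) / (vol.max() - vol.min() + 1e-8) * 255.0
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    for z in range(vol.shape[2]):
        slice_img = Image.fromarray(vol[:, :, z].astype(np.uint8))
        slice_img.save(os.path.join(out_dir, 'slice_%03d.png' % z))

The resulting PNGs can then be organized like the ISIC training folder described above.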

Citing SegAN

If you find SegAN useful in your research, please consider citing:

@article{xue2017segan,
  title={SegAN: Adversarial Network with Multi-scale $L_1$ Loss for Medical Image Segmentation},
  author={Xue, Yuan and Xu, Tao and Zhang, Han and Long, L. Rodney and Huang, Xiaolei},
  journal={arXiv preprint arXiv:1706.01805},
  year={2017}
}

References

  • Some of the code for the Global Convolutional Block is borrowed from Zijun Deng's excellent code
  • We thank the PyTorch team; some of our image preprocessing code is borrowed from the official PyTorch examples