
Media-Smart / Vedaseg

License: Apache-2.0
A semantic segmentation toolbox based on PyTorch

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Vedaseg

ResUNetPlusPlus-with-CRF-and-TTA
ResUNet++, CRF, and TTA for segmentation of medical images (IEEE JBIHI)
Stars: ✭ 98 (-73.3%)
Mutual labels:  unet, semantic-segmentation
celldetection
Cell Detection with PyTorch.
Stars: ✭ 44 (-88.01%)
Mutual labels:  unet, semantic-segmentation
multiclass-semantic-segmentation
Experiments with UNET/FPN models and cityscapes/kitti datasets [Pytorch]
Stars: ✭ 96 (-73.84%)
Mutual labels:  unet, semantic-segmentation
Keras Unet
Helper package with multiple U-Net implementations in Keras, plus utility tools useful for image semantic segmentation tasks. The library and underlying tools come from multiple projects I worked on involving semantic segmentation
Stars: ✭ 196 (-46.59%)
Mutual labels:  semantic-segmentation, unet
Bcdu Net
BCDU-Net : Medical Image Segmentation
Stars: ✭ 314 (-14.44%)
Mutual labels:  semantic-segmentation, unet
Deep Learning In Production
Develop production ready deep learning code, deploy it and scale it
Stars: ✭ 216 (-41.14%)
Mutual labels:  semantic-segmentation, unet
pytorch-UNet
2D and 3D UNet implementation in PyTorch.
Stars: ✭ 107 (-70.84%)
Mutual labels:  unet, semantic-segmentation
Geo Deep Learning
Deep learning applied to georeferenced datasets
Stars: ✭ 91 (-75.2%)
Mutual labels:  semantic-segmentation, unet
DDUnet-Modified-Unet-for-WMH-with-Dense-Dilate
WMH segmentation with unet, dilated_unet, and ideas from DenseNet
Stars: ✭ 23 (-93.73%)
Mutual labels:  unet, semantic-segmentation
unet pytorch
PyTorch implementation of UNet for converting aerial satellite images into Google Maps-style images.
Stars: ✭ 27 (-92.64%)
Mutual labels:  unet, semantic-segmentation
Eye In The Sky
Satellite Image Classification using semantic segmentation methods in deep learning
Stars: ✭ 185 (-49.59%)
Mutual labels:  semantic-segmentation, unet
Human Segmentation Pytorch
Human segmentation models, training/inference code, and trained weights, implemented in PyTorch
Stars: ✭ 289 (-21.25%)
Mutual labels:  semantic-segmentation, unet
Semantic Segmentation Of Remote Sensing Images
Semantic segmentation of remote sensing images, based on deep learning with TF.Keras under the TensorFlow framework; runs on TF 2.0+
Stars: ✭ 125 (-65.94%)
Mutual labels:  semantic-segmentation, unet
image-segmentation
Mask R-CNN, FPN, LinkNet, PSPNet and UNet with multiple backbone architectures support readily available
Stars: ✭ 62 (-83.11%)
Mutual labels:  unet, semantic-segmentation
Segmentation
Tensorflow implementation : U-net and FCN with global convolution
Stars: ✭ 101 (-72.48%)
Mutual labels:  semantic-segmentation, unet
3D-UNet-PyTorch-Implementation
The implementation of 3D-UNet using PyTorch
Stars: ✭ 78 (-78.75%)
Mutual labels:  unet, semantic-segmentation
Multiclass Semantic Segmentation Camvid
Tensorflow 2 implementation of complete pipeline for multiclass image semantic segmentation using UNet, SegNet and FCN32 architectures on Cambridge-driving Labeled Video Database (CamVid) dataset.
Stars: ✭ 67 (-81.74%)
Mutual labels:  semantic-segmentation, unet
Paddleseg
End-to-end image segmentation kit based on PaddlePaddle.
Stars: ✭ 1,244 (+238.96%)
Mutual labels:  semantic-segmentation, unet
satellite-Image-Semantic-Segmentation-Unet-Tensorflow-keras
Collection of different UNet variants, such as VggUnet, ResUnet, DenseUnet, Unet, AttUnet, MobileNetUnet, NestedUNet, R2AttUNet, R2UNet, SEUnet, scSEUnet, Unet_Xception_ResNetBlock
Stars: ✭ 43 (-88.28%)
Mutual labels:  unet, semantic-segmentation
TensorFlow-Advanced-Segmentation-Models
A Python Library for High-Level Semantic Segmentation Models based on TensorFlow and Keras with pretrained backbones.
Stars: ✭ 64 (-82.56%)
Mutual labels:  unet, semantic-segmentation

Introduction

vedaseg is an open source semantic segmentation toolbox based on PyTorch.

Features

  • Modular Design

    We decompose the semantic segmentation framework into different components. The flexible and extensible design makes it easy to implement a customized semantic segmentation project by combining different modules, like building with Lego bricks.

  • Support of several popular frameworks

    The toolbox supports several popular semantic segmentation frameworks out of the box, such as DeepLabv3+, DeepLabv3, U-Net, PSPNet and FPN.

  • High efficiency

    Multi-GPU data parallelism & distributed training.

  • Multi-Class/Multi-Label segmentation

    We implement both multi-class and multi-label segmentation (where a pixel can belong to multiple classes); see the sketch after this list.

  • Acceleration and deployment

    Models can be accelerated and deployed with TensorRT.
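
Multi-class and multi-label prediction differ only in how per-pixel scores are decoded; here is a minimal PyTorch sketch of the distinction (the tensor shapes and the 0.5 threshold are illustrative assumptions, not vedaseg internals):

import torch

# Hypothetical logits from a segmentation model: (batch, classes, H, W).
logits = torch.randn(1, 21, 513, 513)

# Multi-class: softmax over classes, exactly one label per pixel.
multi_class_mask = logits.softmax(dim=1).argmax(dim=1)  # (1, 513, 513)

# Multi-label: independent sigmoid per class, so a pixel may carry
# several labels at once; the 0.5 cut-off is an assumed threshold.
multi_label_mask = logits.sigmoid() > 0.5               # (1, 21, 513, 513)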

License

This project is released under the Apache 2.0 license.

Benchmark and model zoo

Note: All models are trained only on PASCAL VOC 2012 trainaug dataset and evaluated on PASCAL VOC 2012 val dataset.

Architecture   Backbone    OS  MS & Flip  mIoU
DeepLabv3plus  ResNet-101  16  True       79.46%
DeepLabv3plus  ResNet-101  16  False      77.90%
DeepLabv3      ResNet-101  16  True       79.22%
DeepLabv3      ResNet-101  16  False      77.08%
FPN            ResNet-101  2   True       76.19%
FPN            ResNet-101  2   False      74.26%
PSPNet         ResNet-101  8   True       74.83%
PSPNet         ResNet-101  8   False      73.28%
U-Net          ResNet-101  1   True       73.89%
U-Net          ResNet-101  1   False      72.21%

OS: Output stride used during evaluation
MS: Multi-scale inputs during evaluation
Flip: Adding left-right flipped inputs during evaluation
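
MS & Flip is standard test-time augmentation: per-class probabilities are averaged over rescaled and horizontally flipped copies of the input. A minimal sketch of the idea in PyTorch (the scale set and the averaging scheme here are assumptions, not necessarily vedaseg's exact procedure):

import torch
import torch.nn.functional as F

def ms_flip_inference(model, image, scales=(0.75, 1.0, 1.25)):
    # image: (N, 3, H, W); model returns per-class logits (N, C, h, w).
    n, _, h, w = image.shape
    probs = 0.0
    for s in scales:
        x = F.interpolate(image, scale_factor=s, mode='bilinear',
                          align_corners=False)
        for flip in (False, True):
            xi = torch.flip(x, dims=[3]) if flip else x
            out = model(xi)
            if flip:
                out = torch.flip(out, dims=[3])  # undo the flip
            out = F.interpolate(out, size=(h, w), mode='bilinear',
                                align_corners=False)
            probs = probs + out.softmax(dim=1)   # accumulate probabilities
    return probs.argmax(dim=1)                   # (N, H, W) label map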

The models above are available on Google Drive.

Installation

Requirements

  • Linux
  • Python 3.6+
  • PyTorch 1.4.0 or higher
  • CUDA 9.0 or higher

We have tested the following versions of OS and software:

  • OS: Ubuntu 16.04.6 LTS
  • CUDA: 10.2
  • PyTorch 1.4.0
  • Python 3.6.9

Install vedaseg

  1. Create a conda virtual environment and activate it.
conda create -n vedaseg python=3.6.9 -y
conda activate vedaseg
  2. Install PyTorch and torchvision following the official instructions, e.g.,
conda install pytorch torchvision -c pytorch
  3. Clone the vedaseg repository.
git clone https://github.com/Media-Smart/vedaseg.git
cd vedaseg
vedaseg_root=${PWD}
  4. Install dependencies.
pip install -r requirements.txt
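
Optionally, verify the environment from a Python shell before moving on (a simple sanity check, not part of the official instructions):

import torch

print(torch.__version__)          # expect 1.4.0 or higher
print(torch.version.cuda)         # expect 9.0 or higher
print(torch.cuda.is_available())  # expect True for GPU training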

Prepare data

VOC data

Download Pascal VOC 2012 and Pascal VOC 2012 augmented (you can get details at Semantic Boundaries Dataset and Benchmark), resulting in 10,582 training images (trainaug) and 1,449 validation images.

cd ${vedaseg_root}
mkdir ${vedaseg_root}/data
cd ${vedaseg_root}/data

wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
wget http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz

tar xf VOCtrainval_11-May-2012.tar
tar xf benchmark.tgz

python ../tools/encode_voc12_aug.py
python ../tools/encode_voc12.py

mkdir VOCdevkit/VOC2012/EncodeSegmentationClass
#cp benchmark_RELEASE/dataset/encode_cls/* VOCdevkit/VOC2012/EncodeSegmentationClass
(cd benchmark_RELEASE/dataset/encode_cls; cp * ${vedaseg_root}/data/VOCdevkit/VOC2012/EncodeSegmentationClass)
#cp VOCdevkit/VOC2012/EncodeSegmentationClassPart/* VOCdevkit/VOC2012/EncodeSegmentationClass
(cd VOCdevkit/VOC2012/EncodeSegmentationClassPart; cp * ${vedaseg_root}/data/VOCdevkit/VOC2012/EncodeSegmentationClass)

comm -23 <(cat benchmark_RELEASE/dataset/{train,val}.txt VOCdevkit/VOC2012/ImageSets/Segmentation/train.txt | sort -u) <(cat VOCdevkit/VOC2012/ImageSets/Segmentation/val.txt | sort -u) > VOCdevkit/VOC2012/ImageSets/Segmentation/trainaug.txt

To avoid these tedious operations, you can save the above Linux commands as a shell script and execute it.
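
For reference, the comm pipeline above builds trainaug as the union of the SBD train/val lists and the VOC train list, minus the VOC val list. The same step expressed in Python, assuming the paths created above and ${vedaseg_root}/data as the current directory:

def read_ids(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

seg = 'VOCdevkit/VOC2012/ImageSets/Segmentation'
ids = (read_ids('benchmark_RELEASE/dataset/train.txt')
       | read_ids('benchmark_RELEASE/dataset/val.txt')
       | read_ids(seg + '/train.txt'))
ids -= read_ids(seg + '/val.txt')  # exclude validation images

with open(seg + '/trainaug.txt', 'w') as f:
    f.write('\n'.join(sorted(ids)) + '\n')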

COCO data

Download the COCO-2017 dataset.

cd ${vedaseg_root}
mkdir ${vedaseg_root}/data
cd ${vedaseg_root}/data
mkdir COCO2017 && cd COCO2017
wget -c http://images.cocodataset.org/zips/train2017.zip
unzip train2017.zip && rm train2017.zip
wget -c http://images.cocodataset.org/zips/val2017.zip
unzip val2017.zip &&  rm val2017.zip
wget -c http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip annotations_trainval2017.zip && rm annotations_trainval2017.zip
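
A quick sanity check that the annotations unpacked correctly, run from ${vedaseg_root}/data (this assumes pycocotools is installed; it is not covered by the steps above):

from pycocotools.coco import COCO

coco = COCO('COCO2017/annotations/instances_val2017.json')
print(len(coco.getImgIds()))  # expect 5000 val2017 images
print(len(coco.getCatIds()))  # expect 80 categories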

Folder structure

The folder structure should be similar to the following:

data
├── COCO2017
│   ├── annotations
│   │   ├── instances_train2017.json
│   │   ├── instances_val2017.json
│   ├── train2017
│   ├── val2017
├── VOCdevkit
│   ├── VOC2012
│   │   ├── JPEGImages
│   │   ├── SegmentationClass
│   │   ├── ImageSets
│   │   │   ├── Segmentation
│   │   │   │   ├── trainaug.txt
│   │   │   │   ├── val.txt
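
A small optional check that the layout is in place before training, run from ${vedaseg_root}/data (the paths mirror the tree above):

import os

expected = [
    'COCO2017/annotations/instances_train2017.json',
    'COCO2017/annotations/instances_val2017.json',
    'VOCdevkit/VOC2012/ImageSets/Segmentation/trainaug.txt',
    'VOCdevkit/VOC2012/ImageSets/Segmentation/val.txt',
]
for path in expected:
    print(('OK      ' if os.path.exists(path) else 'MISSING ') + path)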

Train

  1. Config

Modify the configuration as needed in the config file, e.g. configs/voc_unet.py.

  • For multi-label training, use the config file configs/coco_multilabel_unet.py and modify the configuration as needed. The differences between single-label and multi-label training lie mainly in the following config parameters: nclasses, multi_label, metrics and criterion (a sketch of these entries follows at the end of this section). Currently, multi-label training is only supported in the COCO data format.
  2. Distributed training
./tools/dist_train.sh configs/voc_unet.py gpu_num
  3. Non-distributed training
python tools/train.py configs/voc_unet.py

Snapshots and logs will be generated at ${vedaseg_root}/workdir.
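
As mentioned above, the single-label/multi-label difference boils down to four config entries. A hypothetical fragment in the style of a Python config file (the values are placeholders; check configs/coco_multilabel_unet.py for the real ones):

nclasses = 80        # number of classes, e.g. for COCO (assumed value)
multi_label = True   # enable per-pixel multi-label prediction
metrics = ...        # multi-label metrics instead of single-label mIoU
criterion = ...      # a per-class loss such as BCE instead of cross-entropy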

Test

  1. Config

Modify the configuration as needed in the config file, e.g. configs/voc_unet.py.

  2. Distributed testing
./tools/dist_test.sh configs/voc_unet.py checkpoint_path gpu_num
  3. Non-distributed testing
python tools/test.py configs/voc_unet.py checkpoint_path

Inference

  1. Config

Modify the configuration as needed in the config file, e.g. configs/voc_unet.py.

  2. Run
# visualize the results in a new window
python tools/inference.py configs/voc_unet.py checkpoint_path image_file_path --show

# save the visualization results in a folder named after the image prefix, by default under './result/'
python tools/inference.py configs/voc_unet.py checkpoint_path image_file_path --out folder_name

Deploy

  1. Convert to ONNX

Firstly, install volksdep following the official instructions.

Then, run the following command to convert the PyTorch model to ONNX. The input shape format is CxHxW. If you need an ONNX model with dynamic input shape, add --dynamic_shape at the end. A sanity check for the exported model is sketched after the known issues below.

python tools/torch2onnx.py configs/voc_unet.py weight_path out_path --dummy_input_shape 3,513,513 --opset_version 11

Here are some known issues:

  • Currently the PSPNet model is not supported because of the unsupported operation AdaptiveAvgPool2d.
  • The default ONNX opset version is 9, and the PyTorch Upsample operation is only supported with a specified size, nearest mode and align_corners being None. If bilinear mode and align_corners are wanted, add --opset_version 11 when using torch2onnx.py.
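
After conversion, the exported model can be sanity-checked with onnxruntime (an assumption here; it is installed separately and is not part of vedaseg's requirements):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('out_path.onnx')  # path passed to torch2onnx.py
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 513, 513).astype(np.float32)  # NxCxHxW
outputs = sess.run(None, {input_name: dummy})
print([o.shape for o in outputs])
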
  2. Inference SDK

Firstly, install flexinfer following the official instructions.

Then, see the example for details.

Contact

This repository is currently maintained by Yuxin Zou (@YuxinZou), Tianhe Wang (@DarthThomas), Hongxiang Cai (@hxcai), and Yichao Xiong (@mileistone).
