
CoinCheung / Deeplab V3 Plus Cityscapes

License: MIT
mIOU=80.02 on Cityscapes. My implementation of DeepLabV3+ (also known as 'Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation'), trained on the Cityscapes dataset.

Programming Languages

python

Projects that are alternatives of or similar to Deeplab V3 Plus Cityscapes

Deeplabv3 Plus
Tensorflow 2.3.0 implementation of DeepLabV3-Plus
Stars: ✭ 32 (-73.55%)
Mutual labels:  segmentation, cityscapes
blindassist-ios
BlindAssist iOS app
Stars: ✭ 34 (-71.9%)
Mutual labels:  segmentation, cityscapes
Hrnet Semantic Segmentation
The OCR approach is rephrased as Segmentation Transformer: https://arxiv.org/abs/1909.11065. This is an official implementation of semantic segmentation for HRNet. https://arxiv.org/abs/1908.07919
Stars: ✭ 2,369 (+1857.85%)
Mutual labels:  segmentation, cityscapes
Erfnet pytorch
Pytorch code for semantic segmentation using ERFNet
Stars: ✭ 304 (+151.24%)
Mutual labels:  segmentation, cityscapes
Bisenet
Add bisenetv2. My implementation of BiSeNet
Stars: ✭ 589 (+386.78%)
Mutual labels:  segmentation, cityscapes
Efficient Segmentation Networks
Lightweight models for real-time semantic segmentation on PyTorch (including SQNet, LinkNet, SegNet, UNet, ENet, ERFNet, EDANet, ESPNet, ESPNetv2, LEDNet, ESNet, FSSNet, CGNet, DABNet, Fast-SCNN, ContextNet, FPENet, etc.)
Stars: ✭ 579 (+378.51%)
Mutual labels:  segmentation, cityscapes
LightNet
LightNet: Light-weight Networks for Semantic Image Segmentation (Cityscapes and Mapillary Vistas Dataset)
Stars: ✭ 710 (+486.78%)
Mutual labels:  segmentation, cityscapes
Switchnorm segmentation
Switchable Normalization for semantic image segmentation and scene parsing.
Stars: ✭ 47 (-61.16%)
Mutual labels:  segmentation, cityscapes
Segmentation
Tensorflow implementation : U-net and FCN with global convolution
Stars: ✭ 101 (-16.53%)
Mutual labels:  segmentation
Masktrack
Implementation of MaskTrack method which is the baseline of several state-of-the-art video object segmentation methods in Pytorch
Stars: ✭ 110 (-9.09%)
Mutual labels:  segmentation
Setr Pytorch
Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers
Stars: ✭ 96 (-20.66%)
Mutual labels:  segmentation
Crfasrnn pytorch
CRF-RNN PyTorch version http://crfasrnn.torr.vision
Stars: ✭ 102 (-15.7%)
Mutual labels:  segmentation
Nnunet
No description or website provided.
Stars: ✭ 2,231 (+1743.8%)
Mutual labels:  segmentation
Dataset
Crop/Weed Field Image Dataset
Stars: ✭ 98 (-19.01%)
Mutual labels:  segmentation
Motsfusion
MOTSFusion: Track to Reconstruct and Reconstruct to Track
Stars: ✭ 118 (-2.48%)
Mutual labels:  segmentation
Deepbrainseg
Fully automatic brain tumour segmentation using Deep 3-D convolutional neural networks
Stars: ✭ 96 (-20.66%)
Mutual labels:  segmentation
Retina Features
Project for segmentation of blood vessels, microaneurysms and hard exudates in fundus images.
Stars: ✭ 95 (-21.49%)
Mutual labels:  segmentation
Openvehiclevision
An open-source library for vehicle vision applications (written in MATLAB): lane marking detection, road segmentation
Stars: ✭ 120 (-0.83%)
Mutual labels:  segmentation
Model Quantization
Collections of model quantization algorithms
Stars: ✭ 118 (-2.48%)
Mutual labels:  segmentation

DeepLab V3plus

My implementation of Deeplab_v3plus. This repository is trained on the Cityscapes dataset and reaches an mIOU of 70.54.

I work with Python 3.5 and PyTorch 1.0.0 built from source. Other environments are not tested, but you need at least PyTorch 1.0, since the training code uses torch.distributed to manage the GPUs. I train the model on two 1080ti GPUs, so you also need two GPUs, each with at least 9 GB of memory.
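
Before training, a quick environment check like the following can help (a minimal sketch; the thresholds simply restate the requirements above):

    import torch

    # PyTorch >= 1.0 is required because the training code uses torch.distributed.
    assert torch.__version__ >= '1.0.0', 'PyTorch 1.0 or newer is required'

    # Training assumes two GPUs with roughly 9 GB of memory each.
    assert torch.cuda.device_count() >= 2, 'two GPUs are expected'
    for idx in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(idx)
        print('GPU {}: {}, {:.1f} GB'.format(idx, props.name, props.total_memory / 1024 ** 3))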

Dataset

The experiments use the Cityscapes dataset. You need to register on the Cityscapes website and download the image and annotation archives, then create a data directory and decompress the archives there:

    $ cd DeepLabv3plus
    $ mkdir -p data
    $ mv /path/to/leftImg8bit_trainvaltest.zip data
    $ mv /path/to/gtFine_trainvaltest.zip data
    $ cd data
    $ unzip leftImg8bit_trainvaltest.zip
    $ unzip gtFine_trainvaltest.zip
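
After decompressing, the data directory should contain the standard Cityscapes layout: leftImg8bit and gtFine, each with train/val/test splits. A minimal check of that layout, assuming it sits under data as above:

    import os

    # The standard Cityscapes folder layout expected after decompression.
    for folder in ('leftImg8bit', 'gtFine'):
        for split in ('train', 'val', 'test'):
            path = os.path.join('data', folder, split)
            assert os.path.isdir(path), '{} is missing'.format(path)
    print('Cityscapes directory layout looks complete')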

Train && Eval

After preparing the dataset, you can train on the Cityscapes train set and evaluate on the validation set.
Train:

    $ cd DeepLabv3plus
    $ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 train.py

Training takes around 13 hours on two 1080ti GPUs. Afterwards, the model is evaluated on the val set automatically, and you should see an mIOU of 70.54.
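
torch.distributed.launch starts one worker process per GPU and passes a --local_rank argument to each. A script launched this way typically sets up the process group as in the sketch below; this is only an illustration of the launch protocol, not the exact contents of train.py:

    import argparse

    import torch
    import torch.distributed as dist

    # torch.distributed.launch supplies --local_rank to every worker process.
    parser = argparse.ArgumentParser()
    parser.add_argument('--local_rank', type=int, default=0)
    args = parser.parse_args()

    # Bind this process to its GPU and join the NCCL process group;
    # the launcher sets the MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE env variables.
    torch.cuda.set_device(args.local_rank)
    dist.init_process_group(backend='nccl', init_method='env://')
    print('rank {} of {}'.format(dist.get_rank(), dist.get_world_size()))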

Eval: If you want to evaluate a trained model separately, run:

    $ python evaluate.py

or, to evaluate on multiple GPUs:

    $ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 evaluate.py

Configurations

  • If you want to use your own dataset, you may implement your own dataset file following the example of my cityscapes.py (see the sketch after this list).

  • As for the hyper-parameters, you may change them in the configuration file configs/configurations.py.
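
A custom dataset file would follow the same pattern as cityscapes.py: a torch Dataset whose items are aligned (image, label) tensor pairs. The sketch below is hypothetical; the class name, folder layout, and preprocessing are placeholders, not the repository's actual code:

    import os

    import numpy as np
    import torch
    from PIL import Image
    from torch.utils.data import Dataset

    class MyDataset(Dataset):
        # Hypothetical dataset: images and per-pixel label maps share file names.
        def __init__(self, rootpth, mode='train'):
            self.img_dir = os.path.join(rootpth, 'images', mode)
            self.lb_dir = os.path.join(rootpth, 'labels', mode)
            self.names = sorted(os.listdir(self.img_dir))

        def __getitem__(self, idx):
            name = self.names[idx]
            img = Image.open(os.path.join(self.img_dir, name)).convert('RGB')
            label = Image.open(os.path.join(self.lb_dir, name))
            img = torch.from_numpy(np.array(img, dtype=np.float32)).permute(2, 0, 1) / 255.
            label = torch.from_numpy(np.array(label, dtype=np.int64))
            return img, label

        def __len__(self):
            return len(self.names)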

Pretrained Model

If you need model parameters pretrained on Cityscapes, you can download the pth file here (extraction code: 3i4g).
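
Once downloaded, the checkpoint can be inspected and loaded with the usual PyTorch calls; in the sketch below the file name is a placeholder for the downloaded pth file:

    import torch

    # 'deeplabv3plus_cityscapes.pth' is a placeholder name for the downloaded checkpoint.
    state_dict = torch.load('deeplabv3plus_cityscapes.pth', map_location='cpu')
    print('checkpoint holds {} parameter tensors'.format(len(state_dict)))

    # To use the weights, build the model defined in this repository and call:
    #     net.load_state_dict(state_dict)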
