
wutianyiRosun / Cgnet

License: MIT
CGNet: A Light-weight Context Guided Network for Semantic Segmentation [IEEE Transactions on Image Processing 2020]

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Cgnet

Hrnet Semantic Segmentation
The OCR approach is rephrased as Segmentation Transformer: https://arxiv.org/abs/1909.11065. This is an official implementation of semantic segmentation for HRNet. https://arxiv.org/abs/1908.07919
Stars: ✭ 2,369 (+1173.66%)
Mutual labels:  semantic-segmentation, cityscapes
Deeplabv3 Plus
Tensorflow 2.3.0 implementation of DeepLabV3-Plus
Stars: ✭ 32 (-82.8%)
Mutual labels:  semantic-segmentation, cityscapes
Tusimple Duc
Understanding Convolution for Semantic Segmentation
Stars: ✭ 567 (+204.84%)
Mutual labels:  semantic-segmentation, cityscapes
Panoptic Deeplab
This is Pytorch re-implementation of our CVPR 2020 paper "Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation" (https://arxiv.org/abs/1911.10194)
Stars: ✭ 355 (+90.86%)
Mutual labels:  semantic-segmentation, cityscapes
Nas Segm Pytorch
Code for Fast Neural Architecture Search of Compact Semantic Segmentation Models via Auxiliary Cells, CVPR '19
Stars: ✭ 126 (-32.26%)
Mutual labels:  semantic-segmentation, cityscapes
Icnet Tensorflow
TensorFlow-based implementation of "ICNet for Real-Time Semantic Segmentation on High-Resolution Images".
Stars: ✭ 396 (+112.9%)
Mutual labels:  semantic-segmentation, cityscapes
Lightnet
LightNet: Light-weight Networks for Semantic Image Segmentation (Cityscapes and Mapillary Vistas Dataset)
Stars: ✭ 698 (+275.27%)
Mutual labels:  semantic-segmentation, cityscapes
Erfnet pytorch
Pytorch code for semantic segmentation using ERFNet
Stars: ✭ 304 (+63.44%)
Mutual labels:  semantic-segmentation, cityscapes
Dabnet
Depth-wise Asymmetric Bottleneck for Real-time Semantic Segmentation (BMVC2019)
Stars: ✭ 109 (-41.4%)
Mutual labels:  semantic-segmentation, cityscapes
Chainer Pspnet
PSPNet in Chainer
Stars: ✭ 76 (-59.14%)
Mutual labels:  semantic-segmentation, cityscapes
Ademxapp
Code for https://arxiv.org/abs/1611.10080
Stars: ✭ 333 (+79.03%)
Mutual labels:  semantic-segmentation, cityscapes
Bisenetv2 Tensorflow
Unofficial tensorflow implementation of real-time scene image segmentation model "BiSeNet V2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation"
Stars: ✭ 139 (-25.27%)
Mutual labels:  semantic-segmentation, cityscapes
Edgenets
This repository contains the source code of our work on designing efficient CNNs for computer vision
Stars: ✭ 331 (+77.96%)
Mutual labels:  semantic-segmentation, cityscapes
Fasterseg
[ICLR 2020] "FasterSeg: Searching for Faster Real-time Semantic Segmentation" by Wuyang Chen, Xinyu Gong, Xianming Liu, Qian Zhang, Yuan Li, Zhangyang Wang
Stars: ✭ 438 (+135.48%)
Mutual labels:  semantic-segmentation, cityscapes
Pspnet Tensorflow
TensorFlow-based implementation of "Pyramid Scene Parsing Network".
Stars: ✭ 313 (+68.28%)
Mutual labels:  semantic-segmentation, cityscapes
Efficient Segmentation Networks
Lightweight models for real-time semantic segmentation on PyTorch (includes SQNet, LinkNet, SegNet, UNet, ENet, ERFNet, EDANet, ESPNet, ESPNetv2, LEDNet, ESNet, FSSNet, CGNet, DABNet, Fast-SCNN, ContextNet, FPENet, etc.)
Stars: ✭ 579 (+211.29%)
Mutual labels:  semantic-segmentation, cityscapes
LightNet
LightNet: Light-weight Networks for Semantic Image Segmentation (Cityscapes and Mapillary Vistas Dataset)
Stars: ✭ 710 (+281.72%)
Mutual labels:  semantic-segmentation, cityscapes
DST-CBC
Implementation of our paper "DMT: Dynamic Mutual Training for Semi-Supervised Learning"
Stars: ✭ 98 (-47.31%)
Mutual labels:  semantic-segmentation, cityscapes
Pytorch Auto Drive
Segmentation models (ERFNet, ENet, DeepLab, FCN...) and Lane detection models (SCNN, SAD, PRNet, RESA, LSTR...) based on PyTorch 1.6 with mixed precision training
Stars: ✭ 32 (-82.8%)
Mutual labels:  semantic-segmentation, cityscapes
Contrastiveseg
Exploring Cross-Image Pixel Contrast for Semantic Segmentation
Stars: ✭ 135 (-27.42%)
Mutual labels:  semantic-segmentation, cityscapes

CGNet: A Light-weight Context Guided Network for Semantic Segmentation

Introduction

The demand for applying semantic segmentation models on mobile devices has been increasing rapidly. Current state-of-the-art networks have an enormous number of parameters and are hence unsuitable for mobile devices, while other small-memory-footprint models follow the spirit of classification networks and ignore the inherent characteristics of semantic segmentation. To tackle this problem, we propose the Context Guided Network (CGNet), a light-weight and efficient network for semantic segmentation. We first propose the Context Guided (CG) block, which learns the joint feature of both the local feature and its surrounding context, and further refines the joint feature with the global context. Based on the CG block, we develop CGNet, which captures contextual information in all stages of the network and is specially tailored for increasing segmentation accuracy. CGNet is also elaborately designed to reduce the number of parameters and the memory footprint. With an equivalent number of parameters, the proposed CGNet significantly outperforms existing segmentation networks. Extensive experiments on the Cityscapes and CamVid datasets verify the effectiveness of the proposed approach. Specifically, without any post-processing or multi-scale testing, the proposed CGNet achieves 64.8% mean IoU on Cityscapes with less than 0.5 M parameters.
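The CG block described above can be pictured as a small PyTorch module. The following is a minimal sketch of the idea only, assuming a 1x1 channel-reduction convolution, depthwise 3x3 convolutions for the local branch (f_loc) and the dilated surrounding-context branch (f_sur), BN + PReLU fusion for the joint feature (f_joi), and a squeeze-and-excitation-style global branch (f_glo); the exact layer settings in the released code may differ.

# Minimal sketch of a Context Guided (CG) block; layer widths, the dilation
# rate, and the reduction ratio are illustrative, not the released settings.
import torch
import torch.nn as nn

class ContextGuidedBlock(nn.Module):
    def __init__(self, channels, dilation=2, reduction=16):
        super().__init__()
        half = channels // 2
        self.reduce = nn.Conv2d(channels, half, 1, bias=False)     # 1x1 channel reduction
        self.f_loc = nn.Conv2d(half, half, 3, padding=1,
                               groups=half, bias=False)            # local feature
        self.f_sur = nn.Conv2d(half, half, 3, padding=dilation,
                               dilation=dilation, groups=half,
                               bias=False)                         # surrounding context
        self.f_joi = nn.Sequential(nn.BatchNorm2d(channels),
                                   nn.PReLU(channels))             # joint feature
        self.f_glo = nn.Sequential(                                # global context (SE-style)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.reduce(x)
        joi = self.f_joi(torch.cat([self.f_loc(y), self.f_sur(y)], dim=1))
        return x + joi * self.f_glo(joi)                           # global reweighting + residual

# Quick shape check
if __name__ == "__main__":
    block = ContextGuidedBlock(64)
    print(block(torch.randn(1, 64, 128, 256)).shape)  # torch.Size([1, 64, 128, 256])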

Installation

  1. Install PyTorch
  • Environment: PyTorch 0.4, CUDA 9.2, cuDNN 7.5, Python 3.6
  2. Clone the repository
    git clone https://github.com/wutianyiRosun/CGNet.git 
    cd CGNet
    
  3. Dataset
  • Download the Cityscapes dataset. It should have this basic structure.
├── cityscapes_test_list.txt
├── cityscapes_train_list.txt
├── cityscapes_trainval_list.txt
├── cityscapes_val_list.txt
├── cityscapes_val.txt
├── gtCoarse
│   ├── train
│   ├── train_extra
│   └── val
├── gtFine
│   ├── test
│   ├── train
│   └── val
├── leftImg8bit
│   ├── test
│   ├── train
│   └── val
├── license.txt
  • Download the Camvid dataset. It should have this basic structure (a minimal layout check for both datasets is sketched after this tree).
├── camvid_test_list.txt
├── camvid_train_list.txt
├── camvid_trainval_list.txt
├── camvid_val_list.txt
├── test
├── testannot
├── train
├── trainannot
├── val
└── valannot
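As referenced above, it can help to verify that both dataset roots match the expected layout before training. The sketch below is illustrative only; the two *_ROOT paths are placeholders, not paths used by this repository.

# Sanity-check the dataset directory layouts shown above.
# The two *_ROOT paths are placeholders; point them at your own downloads.
import os

CITYSCAPES_ROOT = "/path/to/cityscapes"
CAMVID_ROOT = "/path/to/camvid"

def check_layout(root, expected_dirs):
    """Print any expected sub-directories that are missing under root."""
    missing = [d for d in expected_dirs if not os.path.isdir(os.path.join(root, d))]
    print(f"{root}: {'missing ' + str(missing) if missing else 'layout looks correct'}")

check_layout(CITYSCAPES_ROOT, ["leftImg8bit/train", "leftImg8bit/val", "leftImg8bit/test",
                               "gtFine/train", "gtFine/val", "gtFine/test"])
check_layout(CAMVID_ROOT, ["train", "trainannot", "val", "valannot", "test", "testannot"])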

Train your own model

For Cityscapes

  1. Training on the train set
python cityscapes_train.py --gpus 0,1 --dataset cityscapes --train_type ontrain --train_data_list ./dataset/list/Cityscapes/cityscapes_train_list.txt --max_epochs 300
  2. Training on the train+val set
python cityscapes_train.py --gpus 0,1 --dataset cityscapes --train_type ontrainval --train_data_list ./dataset/list/Cityscapes/cityscapes_trainval_list.txt --max_epochs 350
  3. Evaluation (on the validation set)
python cityscapes_eval.py --gpus 0 --val_data_list ./dataset/list/Cityscapes/cityscapes_val_list.txt --resume ./checkpoint/cityscapes/CGNet_M3N21bs16gpu2_ontrain/model_cityscapes_train_on_trainset.pth
  4. Testing (on the test set)
python cityscapes_test.py --gpus 0 --test_data_list ./dataset/list/Cityscapes/cityscapes_test_list.txt --resume ./checkpoint/cityscapes/CGNet_M3N21bs16gpu2_ontrainval/model_cityscapes_train_on_trainvalset.pth
  5. Running time on a Tesla V100 (single card, single batch; see the timing sketch after this list)
56.8 ms with torch.cuda.synchronize()
20.0 ms without torch.cuda.synchronize()
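The gap between the two numbers comes from CUDA's asynchronous execution: without torch.cuda.synchronize(), the Python timer returns before the GPU has finished the forward pass, so it mostly measures launch overhead. A sketch of how such a measurement can be reproduced; the model, input resolution, and iteration counts here are illustrative, not the exact protocol behind the numbers above.

# Illustrative latency measurement; input size and iteration counts are assumptions.
import time
import torch

def measure_latency(model, input_size=(1, 3, 1024, 2048),
                    warmup=10, runs=50, sync=True):
    model = model.cuda().eval()
    x = torch.randn(*input_size, device="cuda")
    with torch.no_grad():
        for _ in range(warmup):              # warm up kernels / cuDNN autotuning
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(runs):
            model(x)
            if sync:
                torch.cuda.synchronize()     # wait for the GPU to finish each pass
        return (time.time() - start) / runs * 1000.0  # average ms per forward pass

# Example (hypothetical model object):
#   measure_latency(cgnet_model)              # timed with synchronization
#   measure_latency(cgnet_model, sync=False)  # timer returns after launch only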

For Camvid

  1. Training on the train+val set
python camvid_train.py
  2. Testing (on the test set)
python camvid_test.py

Citation

If CGNet is useful for your research, please consider citing:

@article{wu2020cgnet,
  title={Cgnet: A light-weight context guided network for semantic segmentation},
  author={Wu, Tianyi and Tang, Sheng and Zhang, Rui and Cao, Juan and Zhang, Yongdong},
  journal={IEEE Transactions on Image Processing},
  volume={30},
  pages={1169--1179},
  year={2020},
  publisher={IEEE}
}

License

This code is released under the MIT License. See LICENSE for additional details.

Thanks to the Third Party Libs

https://github.com/speedinghzl/Pytorch-Deeplab.
