
bupt-ai-cz / IAST-ECCV2020

Licence: other
IAST: Instance Adaptive Self-training for Unsupervised Domain Adaptation (ECCV 2020) https://teacher.bupt.edu.cn/zhuchuang/en/index.htm

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives to or similar to IAST-ECCV2020

Seg Uncertainty
IJCAI2020 & IJCV 2020 🌇 Unsupervised Scene Adaptation with Memory Regularization in vivo
Stars: ✭ 202 (+140.48%)
Mutual labels:  semantic-segmentation, cityscapes, domain-adaptation
Fchardnet
Fully Convolutional HarDNet for Segmentation in Pytorch
Stars: ✭ 150 (+78.57%)
Mutual labels:  semantic-segmentation, cityscapes
multichannel-semseg-with-uda
Multichannel Semantic Segmentation with Unsupervised Domain Adaptation
Stars: ✭ 19 (-77.38%)
Mutual labels:  semantic-segmentation, domain-adaptation
Fastseg
📸 PyTorch implementation of MobileNetV3 for real-time semantic segmentation, with pretrained weights & state-of-the-art performance
Stars: ✭ 202 (+140.48%)
Mutual labels:  semantic-segmentation, cityscapes
Contrastiveseg
Exploring Cross-Image Pixel Contrast for Semantic Segmentation
Stars: ✭ 135 (+60.71%)
Mutual labels:  semantic-segmentation, cityscapes
Bisenetv2 Tensorflow
Unofficial tensorflow implementation of real-time scene image segmentation model "BiSeNet V2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation"
Stars: ✭ 139 (+65.48%)
Mutual labels:  semantic-segmentation, cityscapes
Cgnet
CGNet: A Light-weight Context Guided Network for Semantic Segmentation [IEEE Transactions on Image Processing 2020]
Stars: ✭ 186 (+121.43%)
Mutual labels:  semantic-segmentation, cityscapes
Lsd Seg
Learning from Synthetic Data: Addressing Domain Shift for Semantic Segmentation
Stars: ✭ 99 (+17.86%)
Mutual labels:  semantic-segmentation, domain-adaptation
Intrada
Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision (CVPR 2020 Oral)
Stars: ✭ 211 (+151.19%)
Mutual labels:  semantic-segmentation, domain-adaptation
Lightnetplusplus
LightNet++: Boosted Light-weighted Networks for Real-time Semantic Segmentation
Stars: ✭ 218 (+159.52%)
Mutual labels:  semantic-segmentation, cityscapes
Decouplesegnets
Implementation of Our ECCV2020-work: Improving Semantic Segmentation via Decoupled Body and Edge Supervision
Stars: ✭ 232 (+176.19%)
Mutual labels:  semantic-segmentation, cityscapes
Dise Domain Invariant Structure Extraction
Pytorch Implementation -- All about Structure: Adapting Structural Information across Domains for Boosting Semantic Segmentation, CVPR 2019
Stars: ✭ 129 (+53.57%)
Mutual labels:  semantic-segmentation, domain-adaptation
Nas Segm Pytorch
Code for Fast Neural Architecture Search of Compact Semantic Segmentation Models via Auxiliary Cells, CVPR '19
Stars: ✭ 126 (+50%)
Mutual labels:  semantic-segmentation, cityscapes
Cbst
Code for <Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training> in ECCV18
Stars: ✭ 146 (+73.81%)
Mutual labels:  semantic-segmentation, domain-adaptation
Dabnet
Depth-wise Asymmetric Bottleneck for Real-time Semantic Segmentation (BMVC2019)
Stars: ✭ 109 (+29.76%)
Mutual labels:  semantic-segmentation, cityscapes
Hrnet Semantic Segmentation
The OCR approach is rephrased as Segmentation Transformer: https://arxiv.org/abs/1909.11065. This is an official implementation of semantic segmentation for HRNet. https://arxiv.org/abs/1908.07919
Stars: ✭ 2,369 (+2720.24%)
Mutual labels:  semantic-segmentation, cityscapes
Chainer Pspnet
PSPNet in Chainer
Stars: ✭ 76 (-9.52%)
Mutual labels:  semantic-segmentation, cityscapes
Seanet
Self-Ensembling Attention Networks: Addressing Domain Shift for Semantic Segmentation
Stars: ✭ 90 (+7.14%)
Mutual labels:  semantic-segmentation, domain-adaptation
MCIS wsss
Code for ECCV 2020 paper (oral): Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation
Stars: ✭ 151 (+79.76%)
Mutual labels:  semantic-segmentation, eccv2020
Clan
( CVPR2019 Oral ) Taking A Closer Look at Domain Shift: Category-level Adversaries for Semantics Consistent Domain Adaptation
Stars: ✭ 248 (+195.24%)
Mutual labels:  semantic-segmentation, domain-adaptation

IAST: Instance Adaptive Self-training for Unsupervised Domain Adaptation (ECCV 2020)

This repo is the official implementation of our paper "Instance Adaptive Self-training for Unsupervised Domain Adaptation". The purpose of this repo is to communicate with you better and respond to your questions. This repo is almost identical to Another-Version, and you can also refer to that version.

Introduction

Abstract

The divergence between labeled training data and unlabeled testing data is a significant challenge for recent deep learning models. Unsupervised domain adaptation (UDA) attempts to solve such a problem. Recent works show that self-training is a powerful approach to UDA. However, existing methods have difficulty balancing scalability and performance. In this paper, we propose an instance adaptive self-training framework for UDA on the task of semantic segmentation. To effectively improve the quality of pseudo-labels, we develop a novel pseudo-label generation strategy with an instance adaptive selector. Besides, we propose region-guided regularization to smooth the pseudo-label region and sharpen the non-pseudo-label region. Our method is concise and efficient, and can easily be generalized to other unsupervised domain adaptation methods. Experiments on 'GTA5 to Cityscapes' and 'SYNTHIA to Cityscapes' demonstrate the superior performance of our approach compared with the state-of-the-art methods.
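As a rough illustration of the idea, the sketch below shows one way an instance-adaptive selector can be written: per-class confidence thresholds are updated image by image with an exponential moving average of within-image confidence statistics, and pixels below the adapted threshold are ignored. This is a simplified sketch for intuition, not the implementation in this repo; the names alpha, proportion, and ignore_index are illustrative.

import numpy as np

def instance_adaptive_pseudo_labels(prob, thresholds, alpha=0.9, proportion=0.5, ignore_index=255):
    """prob: (C, H, W) softmax output for one target image.
    thresholds: (C,) running per-class thresholds, updated per instance."""
    conf = prob.max(axis=0)          # per-pixel max confidence
    pred = prob.argmax(axis=0)       # per-pixel predicted class

    for c in range(prob.shape[0]):
        mask = pred == c
        if mask.any():
            # instance-level statistic: quantile of this class's confidences in this image
            inst_stat = np.quantile(conf[mask], proportion)
            thresholds[c] = alpha * thresholds[c] + (1 - alpha) * inst_stat

    pseudo = pred.copy()
    pseudo[conf < thresholds[pred]] = ignore_index   # drop low-confidence pixels
    return pseudo, thresholds

In this sketch the thresholds vector would be initialized to a constant (e.g. np.full(num_classes, 0.9)) and carried across images, so that each instance contributes its own statistics instead of relying on a single global threshold.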

IAST Overview

Result

source    target      device           GPU memory   mIoU-19   mIoU-16   mIoU-13   model
GTA5      Cityscapes  Tesla V100-32GB  18.5 GB      51.88     -         -         download
GTA5      Cityscapes  Tesla T4         6.3 GB       51.20     -         -         download
SYNTHIA   Cityscapes  Tesla V100-32GB  18.5 GB      -         51.54     57.81     download
SYNTHIA   Cityscapes  Tesla T4         9.8 GB       -         51.24     57.70     download

mIoU-19 is the full 19-class Cityscapes protocol (used for GTA5 to Cityscapes); mIoU-16 and mIoU-13 are the standard 16-class and 13-class protocols for SYNTHIA to Cityscapes.

Setup

1) Envs

  • PyTorch >= 1.0
  • Python >= 3.6
  • CUDA >= 9.0
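
Before installing the packages, you can optionally confirm that your environment matches these requirements with a quick check like the one below (a small sketch, not part of this repo):

import sys
import torch

print("Python :", sys.version.split()[0])   # expect >= 3.6
print("PyTorch:", torch.__version__)        # expect >= 1.0
print("CUDA   :", torch.version.cuda, "| available:", torch.cuda.is_available())  # expect >= 9.0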

Install python packages

$ pip install -r requirements.txt

apex: tools for easy mixed precision and distributed training in PyTorch

git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
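
For reference, below is a minimal sketch of how apex mixed precision is typically wired into a PyTorch training loop; the model, optimizer, and tensor shapes are placeholders, not the ones used by this repo's training code.

import torch
from apex import amp

model = torch.nn.Conv2d(3, 19, kernel_size=3, padding=1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=2.5e-4)

# opt_level "O1" enables mixed precision with automatic casting
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

images = torch.randn(2, 3, 512, 1024).cuda()
targets = torch.randint(0, 19, (2, 512, 1024)).cuda()

loss = torch.nn.functional.cross_entropy(model(images), targets)
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()   # scaled backward pass for fp16 numerical stability
optimizer.step()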

2) Download Dataset

Please download the GTA5, SYNTHIA (SYNTHIA_RAND_CITYSCAPES), and Cityscapes datasets.

Dataset directory should have this structure:

${ROOT_DIR}/data/GTA5/
${ROOT_DIR}/data/GTA5/images
${ROOT_DIR}/data/GTA5/labels

${ROOT_DIR}/data/SYNTHIA_RAND_CITYSCAPES/RAND_CITYSCAPES
${ROOT_DIR}/data/SYNTHIA_RAND_CITYSCAPES/RAND_CITYSCAPES/RGB
${ROOT_DIR}/data/SYNTHIA_RAND_CITYSCAPES/RAND_CITYSCAPES/GT

${ROOT_DIR}/data/cityscapes
${ROOT_DIR}/data/cityscapes/leftImg8bit
${ROOT_DIR}/data/cityscapes/gtFine
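
If you want to verify the layout before launching training, a small check like the following works (ROOT_DIR and the helper itself are illustrative, not part of this repo):

import os

ROOT_DIR = "."   # assumed repository root
expected = [
    "data/GTA5/images",
    "data/GTA5/labels",
    "data/SYNTHIA_RAND_CITYSCAPES/RAND_CITYSCAPES/RGB",
    "data/SYNTHIA_RAND_CITYSCAPES/RAND_CITYSCAPES/GT",
    "data/cityscapes/leftImg8bit",
    "data/cityscapes/gtFine",
]
for rel in expected:
    path = os.path.join(ROOT_DIR, rel)
    print("OK     " if os.path.isdir(path) else "MISSING", path)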

3) Download Pretrained Models

We provide pre-trained models. We recommend that you download them and put them in pretrained_models/, which will save a lot of time for training and ensure consistent results.

V100 models

T4 models

(Optional) If you have plenty of time, you can of course skip this step and train from scratch; the corresponding scripts are provided below.

Training

Our original experiments were all carried out on Tesla V100 GPUs and require a large amount of GPU memory (batch_size=8). For devices with less GPU memory, we also trained on a Tesla T4 (batch_size=2) so that most people can reproduce the results.

Start self-training (download the pre-trained models first)

cd code

# GTA5 to Cityscapes (V100)
sh ../scripts/self_training_only/run_gtav2cityscapes_self_traing_only_v100.sh
# GTA5 to Cityscapes (T4)
sh ../scripts/self_training_only/run_gtav2cityscapes_self_traing_only_t4.sh
# SYNTHIA to Cityscapes (V100)
sh ../scripts/self_training_only/run_syn2cityscapes_self_traing_only_v100.sh
# SYNTHIA to Cityscapes (T4)
sh ../scripts/self_training_only/run_syn2cityscapes_self_traing_only_t4.sh

(Optional) Training from scratch

cd code

# GTA5 to Cityscapes (V100)
sh ../scripts/from_scratch/run_gtav2cityscapes_self_traing_v100.sh
# GTA5 to Cityscapes (T4)
sh ../scripts/from_scratch/run_gtav2cityscapes_self_traing_t4.sh
# SYNTHIA to Cityscapes (V100)
sh ../scripts/from_scratch/run_syn2cityscapes_self_traing_v100.sh
# SYNTHIA to Cityscapes (T4)
sh ../scripts/from_scratch/run_syn2cityscapes_self_traing_t4.sh

Evaluation

cd code
python eval.py --config_file <path_to_config_file> --resume_from <path_to_*.pth_file>

Multi-scale testing and flip testing are supported.

# Modify the following parameters in the config file

TEST:
  RESIZE_SIZE: [[1024, 512], [1280, 640], [1536, 768], [1800, 900], [2048, 1024]] 
  USE_FLIP: False 
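
Conceptually, multi-scale and flip testing run the model at each RESIZE_SIZE (and optionally on the horizontally flipped input), resize the class probabilities back to the original resolution, and average them. The sketch below illustrates this aggregation; it is not the code in eval.py.

import torch
import torch.nn.functional as F

def multi_scale_flip_inference(model, image, sizes, use_flip=False):
    # image: (1, 3, H, W) tensor; sizes: list of (width, height) pairs as in RESIZE_SIZE
    _, _, H, W = image.shape
    avg_prob, n = 0.0, 0
    for w, h in sizes:
        x = F.interpolate(image, size=(h, w), mode="bilinear", align_corners=False)
        variants = [x, torch.flip(x, dims=[3])] if use_flip else [x]
        for i, v in enumerate(variants):
            with torch.no_grad():
                prob = torch.softmax(model(v), dim=1)
            if i == 1:
                prob = torch.flip(prob, dims=[3])   # un-flip the flipped prediction
            avg_prob = avg_prob + F.interpolate(prob, size=(H, W), mode="bilinear", align_corners=False)
            n += 1
    return (avg_prob / n).argmax(dim=1)   # final label map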

Citation

Please cite this paper in your publications if it helps your research:

@inproceedings{mei2020instance,
  title={Instance Adaptive Self-Training for Unsupervised Domain Adaptation},
  author={Mei, Ke and Zhu, Chuang and Zou, Jiaqi and Zhang, Shanghang},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2020}
}

Author

Ke Mei, Chuang Zhu

If you have any questions, you can contact me directly.
