
voldemortX / DST-CBC

License: BSD-3-Clause
Implementation of our paper "DMT: Dynamic Mutual Training for Semi-Supervised Learning"

Programming Languages

python
shell

Projects that are alternatives of or similar to DST-CBC

Pytorch Auto Drive
Segmentation models (ERFNet, ENet, DeepLab, FCN...) and Lane detection models (SCNN, SAD, PRNet, RESA, LSTR...) based on PyTorch 1.6 with mixed precision training
Stars: ✭ 32 (-67.35%)
Mutual labels:  tensorboard, semantic-segmentation, pascal-voc, cityscapes
Edgenets
This repository contains the source code of our work on designing efficient CNNs for computer vision
Stars: ✭ 331 (+237.76%)
Mutual labels:  semantic-segmentation, pascal-voc, cityscapes
Chainer Pspnet
PSPNet in Chainer
Stars: ✭ 76 (-22.45%)
Mutual labels:  semantic-segmentation, pascal-voc, cityscapes
AttaNet
AttaNet for real-time semantic segmentation.
Stars: ✭ 37 (-62.24%)
Mutual labels:  semantic-segmentation, cityscapes
emotion-recognition-GAN
This project is a semi-supervised approach to detect emotions on faces in-the-wild using GAN
Stars: ✭ 20 (-79.59%)
Mutual labels:  semi-supervised-learning, tensorboard
semantic-segmentation
SOTA Semantic Segmentation Models in PyTorch
Stars: ✭ 464 (+373.47%)
Mutual labels:  semantic-segmentation, cityscapes
multiclass-semantic-segmentation
Experiments with UNET/FPN models and cityscapes/kitti datasets [Pytorch]
Stars: ✭ 96 (-2.04%)
Mutual labels:  semantic-segmentation, cityscapes
panoptic-forecasting
[CVPR 2021] Forecasting the panoptic segmentation of future video frames
Stars: ✭ 44 (-55.1%)
Mutual labels:  semantic-segmentation, cityscapes
PhotographicImageSynthesiswithCascadedRefinementNetworks-Pytorch
Photographic Image Synthesis with Cascaded Refinement Networks - Pytorch Implementation
Stars: ✭ 63 (-35.71%)
Mutual labels:  semantic-segmentation, cityscapes
SemiSeg-AEL
Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning, NeurIPS 2021 (Spotlight)
Stars: ✭ 79 (-19.39%)
Mutual labels:  semi-supervised-learning, semantic-segmentation
plusseg
ShanghaiTech PLUS Lab Segmentation Toolbox and Benchmark
Stars: ✭ 21 (-78.57%)
Mutual labels:  semantic-segmentation, cityscapes
EDANet
Implementation details for EDANet
Stars: ✭ 34 (-65.31%)
Mutual labels:  semantic-segmentation, cityscapes
IAST-ECCV2020
IAST: Instance Adaptive Self-training for Unsupervised Domain Adaptation (ECCV 2020) https://teacher.bupt.edu.cn/zhuchuang/en/index.htm
Stars: ✭ 84 (-14.29%)
Mutual labels:  semantic-segmentation, cityscapes
SegFormer
Official PyTorch implementation of SegFormer
Stars: ✭ 1,264 (+1189.8%)
Mutual labels:  semantic-segmentation, cityscapes
Adversarial-Semisupervised-Semantic-Segmentation
Pytorch Implementation of "Adversarial Learning For Semi-Supervised Semantic Segmentation" for ICLR 2018 Reproducibility Challenge
Stars: ✭ 151 (+54.08%)
Mutual labels:  semi-supervised-learning, semantic-segmentation
DA-RetinaNet
Official Detectron2 implementation of DA-RetinaNet of our Image and Vision Computing 2021 work 'An unsupervised domain adaptation scheme for single-stage artwork recognition in cultural sites'
Stars: ✭ 31 (-68.37%)
Mutual labels:  pascal-voc, cityscapes
semantic-segmentation-tensorflow
Semantic segmentation for the ADE20k & Cityscapes datasets, based on several models.
Stars: ✭ 84 (-14.29%)
Mutual labels:  semantic-segmentation, cityscapes
Dilation-Pytorch-Semantic-Segmentation
A PyTorch implementation of semantic segmentation according to Multi-Scale Context Aggregation by Dilated Convolutions by Yu and Koltun.
Stars: ✭ 32 (-67.35%)
Mutual labels:  semantic-segmentation, cityscapes
FCN-Segmentation-TensorFlow
FCN for Semantic Image Segmentation achieving 68.5 mIoU on PASCAL VOC
Stars: ✭ 34 (-65.31%)
Mutual labels:  semantic-segmentation, pascal-voc
Adversarial Semisupervised Semantic Segmentation
Pytorch Implementation of "Adversarial Learning For Semi-Supervised Semantic Segmentation" for ICLR 2018 Reproducibility Challenge
Stars: ✭ 147 (+50%)
Mutual labels:  semi-supervised-learning, semantic-segmentation


DMT: Dynamic Mutual Training for Semi-Supervised Learning

This repository contains the code for our paper DMT: Dynamic Mutual Training for Semi-Supervised Learning, a concise and effective method for semi-supervised semantic segmentation & image classification.

Some might know it by its previous version, DST-CBC (Semi-Supervised Semantic Segmentation via Dynamic Self-Training and Class-Balanced Curriculum). If you want the old code, check out the dst-cbc branch.

Also, for users of older PyTorch versions (<1.6.0), or for the exact environment that produced the paper's results, refer to commit 53853f6.

News

2021.6.7

Multi-GPU training support (based on Accelerate) is added, and the whole project is upgraded to PyTorch 1.6. Thanks to the code & testing by @jinhuan-hit, and discussions with @lorenmt and @TiankaiHang.
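
For reference, the typical Accelerate training pattern looks roughly like the sketch below. This is illustrative only, not this repository's exact code; the loss-returning forward pass is a hypothetical placeholder.

from accelerate import Accelerator

def train(model, optimizer, loader, epochs):
    # Accelerator handles device placement and multi-GPU setup; run
    # `accelerate config` once, then start training with `accelerate launch <script>`.
    accelerator = Accelerator()
    model, optimizer, loader = accelerator.prepare(model, optimizer, loader)
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            optimizer.zero_grad()
            loss = model(images, targets)  # hypothetical forward pass that returns the loss
            accelerator.backward(loss)     # replaces the usual loss.backward()
            optimizer.step()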

2021.2.10

A slight backbone architecture difference in the segmentation task has been identified and is described in Acknowledgements.

2021.1.1

DMT is released. Happy new year! 😉

2020.12.7

The bug fix for DST-CBC (not fully tested) is released at the scale branch.

2020.11.9

Stay tuned for Dynamic Mutual Training (DMT), an updated version of DST-CBC with overall better and more stable performance, which will be released later.

Also, thanks to @lorenmt, a data augmentation bug fix will be released along with the next version: PASCAL VOC performance is boosted by 1~2% overall, and Cityscapes could also improve. However, the gap to the Oracle will probably remain similar.

Setup

First, you'll need a CUDA 10 and Python 3 environment (preferably on Linux).

1. Setup PyTorch & TorchVision:

pip install torch==1.6.0 torchvision==0.7.0
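
Optionally, you can sanity-check the installation with a couple of lines of Python:

import torch
import torchvision

print(torch.__version__, torchvision.__version__)  # expected: 1.6.0 and 0.7.0
print(torch.cuda.is_available())                    # should print True on a working CUDA 10 setup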

2. Install other python packages you may require:

pip install packaging accelerate future matplotlib tensorboard tqdm
pip install git+https://github.com/ildoonet/pytorch-randaugment

3. Download the code and prepare the scripts:

git clone https://github.com/voldemortX/DST-CBC.git
cd DST-CBC
chmod 777 segmentation/*.sh
chmod 777 classification/*.sh

Getting started

Get started with SEGMENTATION.md for semantic segmentation.

Get started with CLASSIFICATION.md for image classification.

Understand the code

We refer interested readers to this repository's wiki. It is not updated for DMT yet.

Notes

It's best to run our code on a GPU with tensor cores (Volta architecture or newer), since mixed precision computation is much faster there; for instance, the RTX 2080 Ti (which is what we used), Tesla V100, or other RTX 20/30 series cards.
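
For reference, mixed precision training in PyTorch 1.6 follows the native AMP pattern sketched below. This is a generic example, not this repository's exact training loop:

import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, optimizer, criterion, images, targets):
    # Generic PyTorch 1.6 AMP step; GPUs with tensor cores benefit most
    # from the float16 autocast region.
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        outputs = model(images)
        loss = criterion(outputs, targets)
    scaler.scale(loss).backward()  # scale the loss to avoid float16 gradient underflow
    scaler.step(optimizer)
    scaler.update()
    return loss.item()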

Our implementation is fast and memory efficient. A whole run (training 2 models with DMT on PASCAL VOC 2012) takes about 8 hours on a single RTX 2080 Ti using up to 6 GB of GPU memory, including on-the-fly evaluations and training baselines. The Cityscapes experiments are even faster.
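
If you want to compare your own runs against these numbers, PyTorch's memory statistics give a quick check (illustrative snippet; requires a CUDA device):

import torch

torch.cuda.reset_peak_memory_stats()
# ... run some training iterations here ...
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 1024 ** 3:.2f} GB")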

Contact

Issues and PRs are most welcome.

If you have any questions that are not answerable with Google, feel free to contact us through [email protected].

Citation

@article{feng2020dmt,
  title={DMT: Dynamic Mutual Training for Semi-Supervised Learning},
  author={Feng, Zhengyang and Zhou, Qianyu and Gu, Qiqi and Tan, Xin and Cheng, Guangliang and Lu, Xuequan and Shi, Jianping and Ma, Lizhuang},
  journal={arXiv preprint arXiv:2004.08514},
  year={2020}
}

Acknowledgements

The DeepLabV2 network architecture and COCO pre-trained weights are faithfully re-implemented from AdvSemiSeg. The only difference is that we use the so-called ResNetV1.5 implementation for the ResNet-101 backbone (same as torchvision); for the difference between ResNetV1 and V1.5, refer to this issue. The difference is reported to bring only a 0-0.5% gain on ImageNet, and since we use V1 COCO pre-trained weights that mismatch with V1.5, the overall performance should remain similar to V1. The better fully-supervised performance mainly comes from a better training schedule. Besides, we base comparisons on performance relative to the Oracle, not absolute performance.
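
To illustrate the ResNetV1 vs. V1.5 distinction: torchvision's ResNet-101 (the variant used here) places the stride-2 downsampling on the 3x3 convolution of each bottleneck block instead of the first 1x1 convolution, which is where V1 puts it. A quick check, assuming torchvision is installed:

import torchvision

backbone = torchvision.models.resnet101(pretrained=False)
block = backbone.layer2[0]  # first bottleneck block of a downsampling stage
print(block.conv1.stride, block.conv2.stride)  # (1, 1) and (2, 2): stride on the 3x3 conv, i.e. V1.5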

The CBC part of the older version DST-CBC is adapted from CRST.

The overall implementation is based on TorchVision and PyTorch.

The people who've helped to make the method & code better: lorenmt, jinhuan-hit, TiankaiHang, etc.
