
lhoyer / improving_segmentation_with_selfsupervised_depth

Licence: other
[CVPR21] Implementation of our work "Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation"

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to improving_segmentation_with_selfsupervised_depth

SHOT-plus
code for our TPAMI 2021 paper "Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer"
Stars: ✭ 46 (-75.66%)
Mutual labels:  semi-supervised-learning, self-supervised-learning
sc depth pl
Pytorch Lightning Implementation of SC-Depth (V1, V2...) for Unsupervised Monocular Depth Estimation.
Stars: ✭ 86 (-54.5%)
Mutual labels:  depth-estimation, self-supervised-learning
Advsemiseg
Adversarial Learning for Semi-supervised Semantic Segmentation, BMVC 2018
Stars: ✭ 382 (+102.12%)
Mutual labels:  semi-supervised-learning, semantic-segmentation
FisheyeDistanceNet
FisheyeDistanceNet
Stars: ✭ 33 (-82.54%)
Mutual labels:  depth-estimation, self-supervised-learning
sesemi
supervised and semi-supervised image classification with self-supervision (Keras)
Stars: ✭ 43 (-77.25%)
Mutual labels:  semi-supervised-learning, self-supervised-learning
SSL CR Histo
Official code for "Self-Supervised driven Consistency Training for Annotation Efficient Histopathology Image Analysis" Published in Medical Image Analysis (MedIA) Journal, Oct, 2021.
Stars: ✭ 32 (-83.07%)
Mutual labels:  semi-supervised-learning, self-supervised-learning
Adversarial Semisupervised Semantic Segmentation
Pytorch Implementation of "Adversarial Learning For Semi-Supervised Semantic Segmentation" for ICLR 2018 Reproducibility Challenge
Stars: ✭ 147 (-22.22%)
Mutual labels:  semi-supervised-learning, semantic-segmentation
Usss iccv19
Code for Universal Semi-Supervised Semantic Segmentation models paper accepted in ICCV 2019
Stars: ✭ 57 (-69.84%)
Mutual labels:  semi-supervised-learning, semantic-segmentation
exponential-moving-average-normalization
PyTorch implementation of EMAN for self-supervised and semi-supervised learning: https://arxiv.org/abs/2101.08482
Stars: ✭ 76 (-59.79%)
Mutual labels:  semi-supervised-learning, self-supervised-learning
Adversarial-Semisupervised-Semantic-Segmentation
Pytorch Implementation of "Adversarial Learning For Semi-Supervised Semantic Segmentation" for ICLR 2018 Reproducibility Challenge
Stars: ✭ 151 (-20.11%)
Mutual labels:  semi-supervised-learning, semantic-segmentation
PiCIE
PiCIE: Unsupervised Semantic Segmentation using Invariance and Equivariance in clustering (CVPR2021)
Stars: ✭ 102 (-46.03%)
Mutual labels:  semantic-segmentation, self-supervised-learning
Context-Aware-Consistency
Semi-supervised Semantic Segmentation with Directional Context-aware Consistency (CVPR 2021)
Stars: ✭ 121 (-35.98%)
Mutual labels:  semi-supervised-learning, semantic-segmentation
HoHoNet
"HoHoNet: 360 Indoor Holistic Understanding with Latent Horizontal Features" official pytorch implementation.
Stars: ✭ 65 (-65.61%)
Mutual labels:  semantic-segmentation, depth-estimation
DST-CBC
Implementation of our paper "DMT: Dynamic Mutual Training for Semi-Supervised Learning"
Stars: ✭ 98 (-48.15%)
Mutual labels:  semi-supervised-learning, semantic-segmentation
learning-topology-synthetic-data
Tensorflow implementation of Learning Topology from Synthetic Data for Unsupervised Depth Completion (RAL 2021 & ICRA 2021)
Stars: ✭ 22 (-88.36%)
Mutual labels:  depth-estimation, self-supervised-learning
Cct
[CVPR 2020] Semi-Supervised Semantic Segmentation with Cross-Consistency Training.
Stars: ✭ 171 (-9.52%)
Mutual labels:  semi-supervised-learning, semantic-segmentation
SemiSeg-AEL
Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning, NeurIPS 2021 (Spotlight)
Stars: ✭ 79 (-58.2%)
Mutual labels:  semi-supervised-learning, semantic-segmentation
Semantic-Mono-Depth
Geometry meets semantics for semi-supervised monocular depth estimation - ACCV 2018
Stars: ✭ 98 (-48.15%)
Mutual labels:  semantic-segmentation, depth-estimation
spear
SPEAR: Programmatically label and build training data quickly.
Stars: ✭ 81 (-57.14%)
Mutual labels:  semi-supervised-learning
AdversarialAudioSeparation
Code accompanying the paper "Semi-supervised adversarial audio source separation applied to singing voice extraction"
Stars: ✭ 70 (-62.96%)
Mutual labels:  semi-supervised-learning

Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation

This is the official PyTorch implementation of our CVPR 2021 paper "Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation" and of its extension to semi-supervised domain adaptation, "Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with Self-Supervised Depth Estimation".

Training deep networks for semantic segmentation requires large amounts of labeled training data, which presents a major challenge in practice, as labeling segmentation masks is a highly labor-intensive process. To address this issue, we present a framework for semi-supervised and domain-adaptive semantic segmentation, which is enhanced by self-supervised monocular depth estimation (SDE) trained only on unlabeled image sequences.

In particular, we propose four key contributions:

  1. We automatically select the most useful samples to be annotated for semantic segmentation based on the correlation of sample diversity and difficulty between SDE and semantic segmentation.
  2. We implement a strong data augmentation by mixing images and labels using the structure of the scene (see the DepthMix sketch after this list).
  3. We transfer knowledge from features learned during SDE to semantic segmentation by means of transfer and multi-task learning.
  4. We exploit additional labeled synthetic data with Cross-Domain DepthMix and Matching Geometry Sampling to align synthetic and real data.
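
To illustrate the second contribution, below is a minimal sketch of a DepthMix-style augmentation: pixels of one scene occlude the other according to their estimated depth, so the mixed sample respects the geometric scene structure. All names and tensor shapes here are assumptions for illustration; the repository's actual implementation is part of train.py.

import torch

def depth_mix(img_a, img_b, depth_a, depth_b, lbl_a, lbl_b):
    # img_*: (3, H, W) float tensors, depth_*: (1, H, W) SDE estimates,
    # lbl_*: (H, W) long tensors with (pseudo-)labels.
    mask = (depth_a < depth_b).float()              # 1 where scene A is in front
    mixed_img = mask * img_a + (1.0 - mask) * img_b
    mixed_lbl = torch.where(mask[0].bool(), lbl_a, lbl_b)
    return mixed_img, mixed_lbl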

We validate the proposed model on the Cityscapes dataset, where all four contributions demonstrate significant performance gains, and we achieve state-of-the-art results for semi-supervised semantic segmentation as well as for semi-supervised domain adaptation. In particular, with only 1/30 of the Cityscapes labels, our method achieves 92% of the fully-supervised baseline performance, and even 97% when exploiting additional data from GTA.

Below, you can see the qualitative results of our model trained with only 100 annotated semantic segmentation samples.

[Example input/output GIF]

If you find this code useful in your research, please consider citing:

@inproceedings{hoyer2021three,
  title={Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation},
  author={Hoyer, Lukas and Dai, Dengxin and Chen, Yuhua and Köring, Adrian and Saha, Suman and Van Gool, Luc},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={11130--11140},
  year={2021}
}
@article{hoyer2021improving,
  title={Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with Self-Supervised Depth Estimation},
  author={Hoyer, Lukas and Dai, Dengxin and Wang, Qin and Chen, Yuhua and Van Gool, Luc},
  journal={arXiv preprint arXiv:2108.12545 [cs]},
  year={2021}
}

Setup Environment

To install all requirements, you can run:

pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html

Please download Cityscapes from https://www.cityscapes-dataset.com/downloads/. We require the following packages: gtFine_trainvaltest.zip, leftImg8bit_trainvaltest.zip, and leftImg8bit_sequence_trainvaltest.zip. For performance reasons, we work on a downsampled copy of Cityscapes; please refer to data_preprocessing/prepare_cityscapes.py for more information.

Before continuing, you should have the following folder structure prepared:

CITYSCAPES_DIR/
- gtFine/
- leftImg8bit_small/
- leftImg8bit_sequence_small/
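
If you want to sanity-check this layout before training, a small hypothetical helper (not part of the repository) could look like this:

from pathlib import Path

# Adjust CITYSCAPES_DIR to your local dataset path.
CITYSCAPES_DIR = Path("/path/to/cityscapes")
for sub in ("gtFine", "leftImg8bit_small", "leftImg8bit_sequence_small"):
    if not (CITYSCAPES_DIR / sub).is_dir():
        raise FileNotFoundError(f"Expected {sub}/ inside {CITYSCAPES_DIR}")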

You can set up the paths for data and logging in the machine config configs/machine_config.py. In the following, we assume that you have named your machine ws.
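
For illustration, the machine config maps a machine name to its data and log paths. The snippet below is a hypothetical illustration only; the actual schema may differ, so please check configs/machine_config.py:

# Hypothetical illustration; see configs/machine_config.py for the real schema.
MACHINE_CONFIG = {
    "ws": {
        "data_path": "/path/to/datasets/",
        "log_path": "/path/to/logs/",
    },
}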

Inference with a Pretrained Model

If you want to test our pretrained model (trained on 372 Cityscapes images) on some of your own images, you can download the checkpoint here, unzip it, and run it using:

python inference.py --machine ws --model /path/to/checkpoint/dir/ --data /path/to/data/dir/

Pretrain Self-supervised Depth on Cityscapes

To run the two phases of the self-supervised depth estimation pretraining (first, 300k iterations with a frozen encoder; second, 50k iterations with an ImageNet feature distance loss), you can execute:

python train.py --machine ws --config configs/cityscapes_monodepth_highres_dec5_crop.yml
python train.py --machine ws --config configs/cityscapes_monodepth_highres_dec6_crop.yml

You can also skip this section if you want to use our pretrained self-supervised depth estimation model; it will be downloaded automatically in the following section. If you want to use your own pretrained model, you can adapt models/utils.py#L108 so that the model references point to your own model on Google Drive.

Run Semi-Supervised Experiments

The semi-supervised experiments can be executed using:

python run_experiments.py --machine ws --exp EXP_ID

The EXP_ID corresponds to the experiment defined in experiments.py. The following experiments are relevant for the paper:

  • Experiment 210: All configurations that are only based on transfer learning.
  • Experiment 211: Configurations for the automatic data selection for annotation.
  • Experiment 212: Configurations that involve multi-task learning.

To use the labels selected by the automatic data selection for the other experiments, please copy the content of nlabelsXXX_subset.json from the log directory to loader/preselected_labels.py after running experiment 211. For better reproducibility, we have stored our results there as well. Table 3 is generated using experiment 210 with the config sel_{pres_method}_scratch.

Be aware that running all experiments takes multiple weeks on a single GPU. For that reason, we have commented out all but one subset size and seed as well as minor ablations.

Run Semi-Supervised Domain Adaptation Experiments

In order to run our framework extension to semi-supervised domain adaptation, please switch to the ssda branch and follow its README.md instructions.

Framework Structure

Experiments and Configurations
  • configs/machine_config.py: Definition of data and log paths for different machines.
  • configs/cityscapes_monodepth*: Configurations for monodepth pretraining on Cityscapes.
  • configs/cityscapes_joint.yml: Base configuration for all semi-supervised segmentation experiments.
  • experiments.py: Generation of derivative configurations from cityscapes_joint.yml for the different experiments.
  • run_experiments.py: Execution of experiments defined in experiments.py.
Training Logic
  • train.py: Training script for a specific configuration. It contains the main training logic for self-supervised depth estimation, semi-supervised semantic segmentation, and DepthMix.
  • label_selection.py: Logic for automatic data selection for annotation.
  • monodepth_loss.py: Loss for self-supervised depth estimation (a simplified sketch of the photometric term follows at the end of this overview).
Models
  • models/joint_segmentation_depth.py: Combined model for depth estimation, pose prediction, and semantic segmentation (a schematic sketch follows the Models list).
  • models/joint_segmentation_depth_decoder.py: Segmentation decoders for transfer learning from self-supervised depth and multi-task learning.
  • models/depth_decoder.py: Multi-scale depth decoder.
  • models/monodepth_layers.py: Operations necessary for self-supervised depth estimation.
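
As a rough illustration of how transfer and multi-task learning can share computation, a single encoder can feed both task decoders. This sketch is schematic (names and signatures are assumptions) and does not mirror the actual model classes:

import torch.nn as nn

class JointSegmentationDepthSketch(nn.Module):
    # Schematic only: one shared encoder, task-specific decoders.
    def __init__(self, encoder, seg_decoder, depth_decoder):
        super().__init__()
        self.encoder = encoder
        self.seg_decoder = seg_decoder      # predicts class logits
        self.depth_decoder = depth_decoder  # predicts (inverse) depth

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_decoder(feats), self.depth_decoder(feats)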
Data Loading
  • loader/sequence_segmentation_loader.py: Base class for loading image sequences with segmentation labels.
  • loader/cityscapes_loader.py: Implementation for loading Cityscapes.
  • loader/depth_estimator.py: Generates depth estimates with a pretrained self-supervised depth model and stores them so that they can be loaded by sequence_segmentation_loader as pseudo depth labels.
  • loader/preselected_labels.py: A selection of annotated samples obtained with label_selection.py.
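
As referenced above, the photometric term minimized in self-supervised depth training is typically an SSIM + L1 mixture in the style of Monodepth2. The sketch below is illustrative only; the full loss in monodepth_loss.py includes additional components such as multi-scale supervision:

import torch
import torch.nn.functional as F

def ssim_distance(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified single-scale SSIM distance on (B, C, H, W) images.
    mu_x = F.avg_pool2d(x, 3, 1, 1)
    mu_y = F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    ssim_n = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    ssim_d = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return torch.clamp((1 - ssim_n / ssim_d) / 2, 0, 1)

def photometric_error(pred, target, alpha=0.85):
    # Weighted SSIM + L1 mixture between the warped and the target image.
    l1 = (pred - target).abs().mean(1, keepdim=True)
    return alpha * ssim_distance(pred, target).mean(1, keepdim=True) + (1 - alpha) * l1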