Universal Weakly Supervised Segmentation by Pixel-to-Segment Contrastive Learning

By Tsung-Wei Ke, Jyh-Jing Hwang, and Stella X. Yu

Weakly supervised segmentation requires assigning a label to every pixel based on training instances with partial annotations such as image-level tags, object bounding boxes, labeled points and scribbles. This task is challenging, as coarse annotations (tags, boxes) lack precise pixel localization whereas sparse annotations (points, scribbles) lack broad region coverage. Existing methods tackle these two types of weak supervision differently: Class activation maps are used to localize coarse labels and iteratively refine the segmentation model, whereas conditional random fields are used to propagate sparse labels to the entire image.

We formulate weakly supervised segmentation as a semi-supervised metric learning problem, where pixels of the same (different) semantics need to be mapped to the same (distinctive) features. We propose four types of contrastive relationships between pixels and segments in the feature space, capturing low-level image similarity, semantic annotation, co-occurrence, and feature affinity. They act as priors; the pixel-wise features can be learned from training images with any partial annotations in a data-driven fashion. In particular, unlabeled pixels in training images participate not only in data-driven grouping within each image, but also in discriminative feature learning within and across images. We deliver a universal weakly supervised segmenter with significant gains on Pascal VOC and DensePose.
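
To make the metric-learning formulation concrete, below is a schematic PyTorch sketch of a single pixel-to-segment attraction/repulsion term. It is illustrative only: the function name, tensor shapes, and temperature are assumptions, and the actual SPML objective combines all four relationship types rather than this one generic term.

    import torch
    import torch.nn.functional as F

    def pixel_to_segment_loss(pixel_feats, segment_feats, pos_mask, temperature=0.3):
        """pixel_feats: (N, D) L2-normalized pixel embeddings.
        segment_feats: (M, D) L2-normalized segment prototypes.
        pos_mask: (N, M) bool; True where a segment is a positive for that pixel."""
        # Cosine similarity between every pixel and every segment.
        logits = pixel_feats @ segment_feats.t() / temperature
        # Softmax over segments: positives are pulled closer, negatives pushed away.
        log_prob = F.log_softmax(logits, dim=1)
        pos_log_prob = (log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
        return -pos_log_prob.mean()

    # Toy usage: 1024 pixels, 32 segments, 64-D embeddings.
    pix = F.normalize(torch.randn(1024, 64), dim=1)
    seg = F.normalize(torch.randn(32, 64), dim=1)
    pos = torch.zeros(1024, 32, dtype=torch.bool)
    pos[:, 0] = True
    print(pixel_to_segment_loss(pix, seg, pos))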

Code Base

This code release is based on SegSort (ICCV 2019).

Prerequisites

  1. Linux
  2. Python 3 (>= 3.5)
  3. CUDA >= 9.2 and cuDNN >= 7.6

Required Python Packages

  1. PyTorch >= 1.6
  2. NumPy
  3. SciPy
  4. tqdm
  5. easydict == 1.9
  6. PyYAML
  7. PIL (Pillow)
  8. OpenCV
  9. pydensecrf
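
As a quick way to verify the environment before training, the following sketch (ours, not part of the repo) imports each dependency and checks that a CUDA device is visible:

    # Environment sanity check; assumes the packages above are installed.
    import torch, numpy, scipy, yaml, PIL, cv2, easydict, tqdm
    import pydensecrf.densecrf

    print('PyTorch', torch.__version__,
          '| CUDA', torch.version.cuda,
          '| cuDNN', torch.backends.cudnn.version())
    assert torch.cuda.is_available(), 'SPML training expects a CUDA-capable GPU'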

Data Preparation

Pascal VOC 2012

  1. Augmented Pascal VOC training set by SBD. Download link provided by SegSort. Please unzip it and put it beside the VOC2012/ folder as sbd/dataset/clsimg/.
  2. Ground-truth semantic segmentation masks are reformatted as grayscale images. Download link provided by SegSort.
  3. The over-segmentation masks are generated by combining contour detectors with gPb-owt-ucm. HED-owt-ucm masks provided by SegSort.
  4. Scribble and point annotations by ScribbleSup. We follow "Regularized Losses for Weakly-supervised CNN Segmentation" to process the scribble annotations. You can download the processed ground-truth scribble (100%, 80%, 50%, and 30%) and point annotations, and put them under the VOC2012/scribble folder. For scribbles, we set the dilation size to 3; for points, we set the dilation size to 6.
  5. CAM annotations by SEAM. You can download the processed CAMs for image-tag and bounding-box annotations, and put them under VOC2012/cam. For image tags, we set alpha to 6 and the probability threshold to 0.2; pixels with probability below 0.2 are treated as unlabeled regions. For bounding boxes, we normalize the CAM to the range [0, 1] within each box and set the probability threshold to 0.5; pixels outside the boxes are labeled as the background class. (A sketch of both thresholding rules follows the dataset layout below.)
  6. Dataset layout:
   $DATAROOT/
       |-------- sbd/
       |          |-------- dataset/
       |                       |-------- clsimg/
       |
       |-------- VOC2012/
                  |-------- JPEGImages/
                  |-------- segcls/
                  |-------- hed/
                  |-------- scribble/
                  |            |-------- dilate_3/
                  |            |-------- dilate_6_0.0/
                  |
                  |-------- cam/
                               |-------- seam_a6_th0.2/
                               |-------- seambox_a6_th0.5/
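
For reference, here is an illustrative NumPy sketch of the two CAM thresholding rules from step 5. The released files under VOC2012/cam were generated by SEAM; the function names, array shapes, and label encoding below are assumptions for illustration only.

    import numpy as np

    def tag_cam_to_labels(cam, threshold=0.2, unlabeled=255):
        """cam: (C, H, W) class activation maps scaled to [0, 1].
        Pixels whose maximum activation falls below 0.2 stay unlabeled."""
        prob = cam.max(axis=0)
        labels = cam.argmax(axis=0).astype(np.uint8)
        labels[prob < threshold] = unlabeled
        return labels

    def box_cam_to_labels(cam, box, threshold=0.5, background=0):
        """cam: (H, W) activation map for one object; box: (y0, x0, y1, x1).
        The CAM is normalized to [0, 1] inside the box; pixels outside the
        box are labeled background."""
        y0, x0, y1, x1 = box
        labels = np.full(cam.shape, background, np.uint8)
        region = cam[y0:y1, x0:x1]
        region = (region - region.min()) / max(region.max() - region.min(), 1e-8)
        labels[y0:y1, x0:x1][region >= threshold] = 1  # 1 = the box's class (assumed)
        return labels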

DensePose

  1. Images from MSCOCO. Download 2014 Train and Val Images (here).
  2. Ground-truth semantic segmentation by DensePose. You can download the processed ground-truth segmentation here, and put it under the segcls/ folder.
  3. Ground-truth point annotations here. Put them under the segcls/ folder.
  4. The over-segmentation masks by PMI and gPb-owt-ucm. Download the PMI-owt-ucm masks and put them under the seginst/ folder.
  5. Dataset layout:
   $DATAROOT/
       |-------- images/
       |          |-------- train2014/
       |          |-------- val2014/
       |
       |-------- segcls/
       |          |-------- densepose/gray/
       |          |-------- densepose_points/gray/
       |
       |-------- seginst/
                  |-------- pmi_0.1_256/
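
Before training, it may help to confirm the layout is in place. A small sketch (the data root path is a placeholder):

    from pathlib import Path

    DATAROOT = Path('/path/to/DATAROOT')  # adjust to your data root
    expected = [
        'images/train2014', 'images/val2014',
        'segcls/densepose/gray', 'segcls/densepose_points/gray',
        'seginst/pmi_0.1_256',
    ]
    missing = [d for d in expected if not (DATAROOT / d).is_dir()]
    print('missing directories:', missing or 'none')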

ImageNet Pre-Trained Models

We use the same ImageNet-pretrained ResNet101 as EMANet. You can download the pretrained model here and put it under a new directory SPML/snapshots/imagenet/trained/. Note: we do not use the MSCOCO-pretrained ResNet.
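
To confirm the snapshot downloaded correctly, you can peek at its contents; the file name and state-dict layout below are assumptions, so adapt them to the actual download:

    import torch

    ckpt_path = 'SPML/snapshots/imagenet/trained/resnet-101.pth'  # assumed name
    state_dict = torch.load(ckpt_path, map_location='cpu')
    print(len(state_dict), 'entries')
    print(list(state_dict)[:5])  # first few parameter names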

Pascal VOC 2012 Trained Models

We provide the download links for our SPML models trained using image-level tag, bounding box, scribble, and point annotations on PASCAL VOC, and summarize the performance (% mIoU) as follows. Note: we report performance with denseCRF post-processing.

Annotations      val    test
Image Tags       69.5   71.6
Bounding Box     73.5   74.7
Scribbles        76.1   76.4
Points           73.2   74.0

DensePose Trained Models

We provide the download link for our SPML model trained using point annotations on DensePose here. We achieve 44.15% mIoU on the minival2014 set.

Bashscripts to Get Started

Pascal VOC 2012

  • SPML with image-level tags.
source bashscripts/voc12/train_spml_tag.sh
  • SPML with bounding boxes.
source bashscripts/voc12/train_spml_box.sh
  • SPML with scribbles.
source bashscripts/voc12/train_spml_scribble.sh
  • SPML with points.
source bashscripts/voc12/train_spml_point.sh

DensePose

  • SPML with points.
source bashscripts/densepose/train_spml_point.sh

Citation

If you find this code useful for your research, please consider citing our paper Universal Weakly Supervised Segmentation by Pixel-to-Segment Contrastive Learning.

@inproceedings{ke2021spml,
  title={Universal Weakly Supervised Segmentation by Pixel-to-Segment Contrastive Learning},
  author={Ke, Tsung-Wei and Hwang, Jyh-Jing and Yu, Stella X},
  booktitle={International Conference on Learning Representations},
  year={2021}
}

License

SPML is released under the MIT License (refer to the LICENSE file for details).
