
vasgaowei / TS-CAM

License: other
Code for TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization.

Programming Languages

Jupyter Notebook

Projects that are alternatives of or similar to TS-CAM

laravel5-hal-json
Laravel 5 HAL+JSON API Transformer Package
Stars: ✭ 15 (-84.37%)
Mutual labels:  transformer
LaTeX-OCR
pix2tex: Using a ViT to convert images of equations into LaTeX code.
Stars: ✭ 1,566 (+1531.25%)
Mutual labels:  transformer
transformer-models
Deep Learning Transformer models in MATLAB
Stars: ✭ 90 (-6.25%)
Mutual labels:  transformer
dcsp segmentation
No description or website provided.
Stars: ✭ 34 (-64.58%)
Mutual labels:  weakly-supervised-learning
Vision-Language-Transformer
Vision-Language Transformer and Query Generation for Referring Segmentation (ICCV 2021)
Stars: ✭ 127 (+32.29%)
Mutual labels:  transformer
HRFormer
This is an official implementation of our NeurIPS 2021 paper "HRFormer: High-Resolution Transformer for Dense Prediction".
Stars: ✭ 357 (+271.88%)
Mutual labels:  transformer
DeepPhonemizer
Grapheme to phoneme conversion with deep learning.
Stars: ✭ 152 (+58.33%)
Mutual labels:  transformer
text2keywords
Trained T5 and T5-large model for creating keywords from text
Stars: ✭ 53 (-44.79%)
Mutual labels:  transformer
Awesome-Weak-Shot-Learning
A curated list of papers, code and resources pertaining to weak-shot classification, detection, and segmentation.
Stars: ✭ 142 (+47.92%)
Mutual labels:  weakly-supervised-learning
text-style-transfer-benchmark
Text style transfer benchmark
Stars: ✭ 56 (-41.67%)
Mutual labels:  transformer
visualization
a collection of visualization function
Stars: ✭ 189 (+96.88%)
Mutual labels:  transformer
transformer-slt
Sign Language Translation with Transformers (COLING'2020, ECCV'20 SLRTP Workshop)
Stars: ✭ 92 (-4.17%)
Mutual labels:  transformer
Simple-does-it-weakly-supervised-instance-and-semantic-segmentation
Weakly Supervised Segmentation by Tensorflow. Implements semantic segmentation in Simple Does It: Weakly Supervised Instance and Semantic Segmentation, by Khoreva et al. (CVPR 2017).
Stars: ✭ 46 (-52.08%)
Mutual labels:  weakly-supervised-learning
deviation-network
Source code of the KDD19 paper "Deep anomaly detection with deviation networks", weakly/partially supervised anomaly detection, few-shot anomaly detection
Stars: ✭ 94 (-2.08%)
Mutual labels:  weakly-supervised-learning
CSV2RDF
Streaming, transforming, SPARQL-based CSV to RDF converter. Apache license.
Stars: ✭ 48 (-50%)
Mutual labels:  transformer
Relation-Extraction-Transformer
NLP: Relation extraction with position-aware self-attention transformer
Stars: ✭ 63 (-34.37%)
Mutual labels:  transformer
Transformer Survey Study
"A survey of Transformer" paper study 👩🏻‍💻🧑🏻‍💻 KoreaUniv. DSBA Lab
Stars: ✭ 166 (+72.92%)
Mutual labels:  transformer
RSTNet
RSTNet: Captioning with Adaptive Attention on Visual and Non-Visual Words (CVPR 2021)
Stars: ✭ 71 (-26.04%)
Mutual labels:  transformer
deformer
[ACL 2020] DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering
Stars: ✭ 111 (+15.63%)
Mutual labels:  transformer
MinTL
MinTL: Minimalist Transfer Learning for Task-Oriented Dialogue Systems
Stars: ✭ 61 (-36.46%)
Mutual labels:  transformer

TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization

This is the official implementation of the paper TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization, accepted as an ICCV 2021 poster.

This repository contains PyTorch training code, evaluation code, pretrained models, and Jupyter notebooks for visualization.

Illustration

Based on DeiT, TS-CAM couples the attention maps of a vision transformer with semantic-aware maps to obtain accurate localization maps (Token Semantic Coupled Attention Maps, ts-cams).

(Figure: TS-CAM framework illustration)
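
The coupling can be sketched in a few lines of PyTorch. The following is a minimal, illustrative sketch only, not the official implementation: the attention-rollout aggregation, the linear semantic head (sem_head), and the tensor shapes are assumptions.

import torch

def ts_cam_maps(attn_weights, patch_tokens, sem_head, h=14, w=14):
    # attn_weights: list of per-layer attention tensors, each (B, heads, N+1, N+1)
    # patch_tokens: (B, N, D) patch embeddings from the last layer (N = h * w)
    # sem_head:     module mapping D-dim tokens to C class scores, e.g. nn.Linear(D, C)
    B, _, N1, _ = attn_weights[0].shape

    # Attention rollout: average heads, add the residual connection,
    # renormalize, and multiply attention matrices across layers.
    rollout = torch.eye(N1, device=patch_tokens.device).expand(B, N1, N1)
    for a in attn_weights:
        a = a.mean(dim=1) + torch.eye(N1, device=a.device)
        a = a / a.sum(dim=-1, keepdim=True)
        rollout = a @ rollout

    # Class-token attention over the N patch tokens, reshaped to a 2D grid.
    cls_attn = rollout[:, 0, 1:].reshape(B, 1, h, w)

    # Semantic-aware maps: per-patch class scores reshaped to the same grid.
    sem_maps = sem_head(patch_tokens).transpose(1, 2).reshape(B, -1, h, w)

    # Token semantic coupled attention maps: element-wise coupling, (B, C, h, w).
    return cls_attn * sem_maps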

Updates

  • (06/07/2021) Higher performance is reported when using the stronger vision transformer Conformer as the backbone.

Model Zoo

We provide pretrained TS-CAM models trained on the CUB-200-2011 and ImageNet ILSVRC2012 datasets. All numbers below are percentages.

CUB-200-2011 dataset

Backbone     Loc.Acc@1  Loc.Acc@5  Loc.Gt-Known  Cls.Acc@1  Cls.Acc@5  Baidu Drive  Google Drive
Deit-T       64.5       80.9       86.4          72.9       91.9       model        model
Deit-S       71.3       83.8       87.7          80.3       94.8       model        model
Deit-B-384   75.8       84.1       86.6          86.8       96.7       model        model
Conformer-S  77.2       90.9       94.1          81.0       95.8       model        model

ILSVRC2012 dataset

Backbone     Loc.Acc@1  Loc.Acc@5  Loc.Gt-Known  Cls.Acc@1  Cls.Acc@5  Baidu Drive  Google Drive
Deit-S       53.4       64.3       67.6          74.3       92.1       model        model

Note: the extraction code for Baidu Drive is gwg7.

  • On the CUB-200-2011 dataset, we train TS-CAM on one Titan RTX 2080Ti GPU with a batch size of 128 and a learning rate of 5e-5.
  • On the ILSVRC2012 dataset, we train TS-CAM on four Titan RTX 2080Ti GPUs with a batch size of 256 and a learning rate of 5e-4.

Usage

First clone the repository locally:

git clone https://github.com/vasgaowei/TS-CAM.git

Then install PyTorch 1.7.0+, torchvision 0.8.1+, and pytorch-image-models (timm) 0.3.2:


conda create -n pytorch1.7 python=3.6
conda activate pytorch1.7
conda install anaconda
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=10.2 -c pytorch
pip install timm==0.3.2
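
After installation, a quick optional sanity check confirms the expected versions and GPU visibility (version strings may carry a build suffix):

import torch
import torchvision
import timm

print(torch.__version__)          # expect 1.7.0
print(torchvision.__version__)    # expect 0.8.0
print(timm.__version__)           # expect 0.3.2
print(torch.cuda.is_available())  # True if CUDA is set up correctly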

Data preparation

CUB-200-2011 dataset

Please download and extract the CUB-200-2011 dataset.

The directory structure is the following:

TS-CAM/
  data/
    CUB-200-2011/
      attributes/
      images/
      parts/
      bounding_boxes.txt
      classes.txt
      image_class_labels.txt
      images.txt
      image_sizes.txt
      README
      train_test_split.txt
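
Given this layout, the standard CUB metadata files can be parsed into (image path, label, is_train) records with a few lines of Python. This is a hypothetical helper for illustration, not part of the repository:

import os

def load_cub_split(root="data/CUB-200-2011"):
    # Each metadata file maps "<image_id> <value>" per line.
    def read_pairs(name):
        with open(os.path.join(root, name)) as f:
            return dict(line.strip().split(" ", 1) for line in f)

    images = read_pairs("images.txt")              # id -> relative image path
    labels = read_pairs("image_class_labels.txt")  # id -> class id (1-based)
    splits = read_pairs("train_test_split.txt")    # id -> 1 (train) / 0 (test)

    return [(os.path.join(root, "images", path),
             int(labels[img_id]) - 1,              # 0-based class label
             splits[img_id] == "1")                # True for training images
            for img_id, path in images.items()]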

ImageNet1k

Download the ILSVRC2012 dataset and extract the train and val images.

The directory structure is organized as follows:

TS-CAM/
  data/
    ImageNet_ILSVRC2012/
      train/
        n01440764/
          n01440764_18.JPEG
          ...
        n01514859/
          n01514859_1.JPEG
          ...
      val/
        n01440764/
          ILSVRC2012_val_00000293.JPEG
          ...
        n01531178/
          ILSVRC2012_val_00000570.JPEG
          ...
      ILSVRC2012_list/
        train.txt
        val_folder.txt
        val_folder_new.txt
The training and validation data are expected to be in the train/ and val/ folders, respectively.
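
Because both train/ and val/ use one subfolder per class, torchvision's ImageFolder can load them directly. A minimal sketch (the repository may use its own loader and transforms):

from torchvision import datasets, transforms

tf = transforms.Compose([
    transforms.Resize(256),            # illustrative transforms only
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
val_set = datasets.ImageFolder("data/ImageNet_ILSVRC2012/val", transform=tf)
print(len(val_set), len(val_set.classes))  # 50000 images, 1000 classes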

For training:

On CUB-200-2011 dataset:

bash train_val_cub.sh ${GPU_ID} ${NET} ${NET_SCALE} ${SIZE}

On ImageNet1k dataset:

bash train_val_ilsvrc.sh ${GPU_ID} ${NET} ${NET_SCALE} ${SIZE}

Please note that the ImageNet-1k pretrained weights for Deit-tiny, Deit-small, and Deit-base will be downloaded automatically the first time you train a model, so an Internet connection is required.

For evaluation:

On CUB-200-2011 dataset:

bash val_cub.sh ${GPU_ID} ${NET} ${NET_SCALE} ${SIZE} ${MODEL_PATH}

On ImageNet1k dataset:

bash val_ilsvrc.sh ${GPU_ID} ${NET} ${NET_SCALE} ${SIZE} ${MODEL_PATH}

GPU_ID should be specified; multiple GPUs can be used to accelerate training and evaluation.

NET should be chosen from deit and conformer.

NET_SCALE should be chosen from tiny, small, and base.

SIZE should be chosen from 224 and 384.

MODEL_PATH is the path to the pretrained model.
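
For example, to train and then evaluate a Deit-small model with input size 224 on GPU 0 (the checkpoint filename below is hypothetical; substitute the path of the model you trained or downloaded):

bash train_val_cub.sh 0 deit small 224
bash val_cub.sh 0 deit small 224 ts-cam-deit-small-cub.pth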

Visualization

We provide Jupyter notebooks in the tools_cam folder.

TS-CAM/
  tools_cam/
    visualization_attention_map_cub.ipynb
    visualization_attention_map_imagenet.ipynb

Please download the pretrained TS-CAM model weights and explore more visualization results (attention maps produced by our method and by the Attention Rollout method). You can also try other interesting images to inspect their localization maps (ts-cams).
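
Outside the notebooks, a ts-cam can be overlaid on its input image with a short Matplotlib helper. This is a hypothetical sketch, not code from the repository; it assumes cam is a (h, w) score tensor:

import torch.nn.functional as F
import matplotlib.pyplot as plt

def show_cam(image, cam):
    # image: (H, W, 3) uint8 array; cam: (h, w) float tensor of scores.
    H, W = image.shape[:2]
    cam = F.interpolate(cam[None, None], size=(H, W),
                        mode="bilinear", align_corners=False)[0, 0]
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    plt.imshow(image)
    plt.imshow(cam.cpu().numpy(), cmap="jet", alpha=0.5)      # heatmap overlay
    plt.axis("off")
    plt.show()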

Visualize localization results

We provide some visualization results as follows.

(Figure: localization results)

Visualize attention maps

We can also visualize attention maps from different transformer layers.

(Figures: attention maps on CUB-200-2011 and ILSVRC2012)

Contacts

If you have any questions about our work or this repository, please don't hesitate to contact us by email.

You can also open an issue under this project.

Citation

If you use this code for a paper, please cite:

@InProceedings{Gao_2021_ICCV,
    author    = {Gao, Wei and Wan, Fang and Pan, Xingjia and Peng, Zhiliang and Tian, Qi and Han, Zhenjun and Zhou, Bolei and Ye, Qixiang},
    title     = {TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {2886-2895}
}