
berniwal / swin-transformer-pytorch

License: MIT
Implementation of the Swin Transformer in PyTorch.

Programming Languages

python
139,335 projects; #7 most used programming language

Projects that are alternatives to or similar to swin-transformer-pytorch

Deepattention
Deep Visual Attention Prediction (TIP18)
Stars: ✭ 65 (-89.34%)
Mutual labels:  attention-model
Sa Tensorflow
Soft attention mechanism for video caption generation
Stars: ✭ 154 (-74.75%)
Mutual labels:  attention-model
Pytorch Batch Attention Seq2seq
PyTorch implementation of batched bi-RNN encoder and attention-decoder.
Stars: ✭ 245 (-59.84%)
Mutual labels:  attention-model
Code
ECG Classification
Stars: ✭ 78 (-87.21%)
Mutual labels:  attention-model
Image Caption Generator
A neural network to generate captions for an image using CNN and RNN with BEAM Search.
Stars: ✭ 126 (-79.34%)
Mutual labels:  attention-model
Snli Entailment
Attention model for entailment on the SNLI corpus, implemented in TensorFlow and Keras
Stars: ✭ 181 (-70.33%)
Mutual labels:  attention-model
Sockeye
Sequence-to-sequence framework with a focus on Neural Machine Translation based on Apache MXNet
Stars: ✭ 990 (+62.3%)
Mutual labels:  attention-model
Sinet
Camouflaged Object Detection, CVPR 2020 (Oral; covered by New Scientist magazine)
Stars: ✭ 246 (-59.67%)
Mutual labels:  attention-model
Bamnet
Code & data accompanying the NAACL 2019 paper "Bidirectional Attentive Memory Networks for Question Answering over Knowledge Bases"
Stars: ✭ 140 (-77.05%)
Mutual labels:  attention-model
Generative inpainting
DeepFill v1/v2 with Contextual Attention and Gated Convolution, CVPR 2018, and ICCV 2019 Oral
Stars: ✭ 2,659 (+335.9%)
Mutual labels:  attention-model
Attention Gated Networks
Use of Attention Gates in a Convolutional Neural Network / Medical Image Classification and Segmentation
Stars: ✭ 1,237 (+102.79%)
Mutual labels:  attention-model
Linear Attention Recurrent Neural Network
A recurrent attention module consisting of an LSTM cell which can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer Network. The LARNN cell with attention can be easily used inside a loop on the cell state, just like any other RNN. (LARNN)
Stars: ✭ 119 (-80.49%)
Mutual labels:  attention-model
Speech emotion recognition blstm
Bidirectional LSTM network for speech emotion recognition.
Stars: ✭ 203 (-66.72%)
Mutual labels:  attention-model
Pytorch Attention Guided Cyclegan
PyTorch implementation of Unsupervised Attention-guided Image-to-Image Translation.
Stars: ✭ 67 (-89.02%)
Mutual labels:  attention-model
Generative Inpainting Pytorch
A PyTorch reimplementation for paper Generative Image Inpainting with Contextual Attention (https://arxiv.org/abs/1801.07892)
Stars: ✭ 242 (-60.33%)
Mutual labels:  attention-model
Awesome Attention Mechanism In Cv
PyTorch implementation collection of attention modules and other plug-and-play modules used in computer vision
Stars: ✭ 54 (-91.15%)
Mutual labels:  attention-model
Pytorch Acnn Model
Code for the paper "Relation Classification via Multi-Level Attention CNNs"
Stars: ✭ 170 (-72.13%)
Mutual labels:  attention-model
BA-Transformer
[MICCAI 2021] Boundary-aware Transformers for Skin Lesion Segmentation
Stars: ✭ 86 (-85.9%)
Mutual labels:  transformer-architecture
Attentionalpoolingaction
Code/Model release for NIPS 2017 paper "Attentional Pooling for Action Recognition"
Stars: ✭ 248 (-59.34%)
Mutual labels:  attention-model
Keras Attention Mechanism
Attention mechanism implementation for Keras.
Stars: ✭ 2,504 (+310.49%)
Mutual labels:  attention-model

[Image: Linear Self-Attention]

Swin Transformer - PyTorch

Implementation of the Swin Transformer architecture. From the paper's abstract: "This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (86.4 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones."
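
To make the shifted-window scheme concrete, here is a minimal, self-contained sketch (illustrative only, not code from this repository): self-attention is restricted to non-overlapping MxM windows, and every other block cyclically shifts the feature map by M/2 so information can flow across window boundaries.

import torch

def window_partition(x, window_size):
    # x: (B, H, W, C) feature map -> (num_windows * B, window_size, window_size, C)
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)

feat = torch.randn(1, 14, 14, 96)                         # a 14x14 stage with C = 96
windows = window_partition(feat, 7)                       # (4, 7, 7, 96): four 7x7 windows
shifted = torch.roll(feat, shifts=(-3, -3), dims=(1, 2))  # cyclic shift by window_size // 2
shifted_windows = window_partition(shifted, 7)            # windows now straddle old boundaries

Attention is then computed independently inside each window, which is what keeps the overall cost linear in the number of pixels.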

This is NOT the official repository of the Swin Transformer. At the time of writing, the authors' official code is not yet available; it will be published at https://github.com/microsoft/Swin-Transformer.

All credit goes to the authors: Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo.

Install

$ pip install swin-transformer-pytorch

or (if you clone the repository)

$ pip install -r requirements.txt

Usage

import torch
from swin_transformer_pytorch import SwinTransformer

net = SwinTransformer(
    hidden_dim=96,                      # C in the paper
    layers=(2, 2, 6, 2),                # SwinBlocks per stage (the Swin-T configuration)
    heads=(3, 6, 12, 24),               # attention heads per stage
    channels=3,                         # input channels
    num_classes=3,                      # output classes
    head_dim=32,                        # dimension of each attention head
    window_size=7,                      # M in the paper
    downscaling_factors=(4, 2, 2, 2),   # patch-merging factor per stage
    relative_pos_embedding=True         # learnable relative position bias per window
)
dummy_x = torch.randn(1, 3, 224, 224)   # (batch, channels, height, width)
logits = net(dummy_x)                   # (1, 3)
print(net)
print(logits)
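
Other input resolutions should work as long as the spatial size stays divisible by the window size after every downscaling step; with the defaults above, 224 → 56 → 28 → 14 → 7. An illustrative example, reusing the net constructed above (assuming the implementation is resolution-agnostic apart from this divisibility constraint):

big_x = torch.randn(1, 3, 448, 448)  # 448 -> 112 -> 56 -> 28 -> 14, all divisible by 7
logits = net(big_x)                  # still (1, 3)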

Parameters

  • hidden_dim: int.
    Hidden dimension used throughout the architecture, denoted C in the original paper.
  • layers: 4-tuple of even ints.
    Number of layers to apply in each stage. Every entry must be even because a regular and a shifted SwinBlock are always applied as a pair.
  • heads: 4-tuple of ints.
    Number of attention heads in each stage.
  • channels: int.
    Number of channels of the input.
  • num_classes: int.
    Number of classes the output should have.
  • head_dim: int.
    Dimension of each attention head.
  • window_size: int.
    Window size to use. Make sure that after each downscaling step the feature-map dimensions are still divisible by the window size (see the check sketched after this list).
  • downscaling_factors: 4-tuple of ints.
    Downscaling factor to use in each stage. Make sure the image dimensions are large enough for the combined downscaling factors.
  • relative_pos_embedding: bool.
    Whether to use learnable relative position embeddings of size (2M-1)×(2M-1) or full positional embeddings of size M²×M², where M is the window size.
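
A small hypothetical helper (not part of the package) can verify the window-size constraint before constructing the model:

def check_dims(image_size, window_size=7, downscaling_factors=(4, 2, 2, 2)):
    """Hypothetical sanity check: after each downscaling stage, the feature-map
    side length must remain divisible by the window size."""
    size = image_size
    for i, factor in enumerate(downscaling_factors):
        if size % factor:
            raise ValueError(f"stage {i}: size {size} not divisible by factor {factor}")
        size //= factor
        if size % window_size:
            raise ValueError(f"stage {i}: size {size} not divisible by window {window_size}")
    return True

check_dims(224)  # 224 -> 56 -> 28 -> 14 -> 7, all divisible by 7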

TODO

  • Adjust code for and validate on ImageNet-1K and COCO 2017

References

Parts of the code are adapted from the vit-pytorch repository (https://github.com/lucidrains/vit-pytorch), which provides a very clean VisionTransformer implementation to start from.

Citations

@misc{liu2021swin,
      title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows}, 
      author={Ze Liu and Yutong Lin and Yue Cao and Han Hu and Yixuan Wei and Zheng Zhang and Stephen Lin and Baining Guo},
      year={2021},
      eprint={2103.14030},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}