
lucidrains / halonet-pytorch

License: MIT
Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to halonet-pytorch

Bottleneck Transformer Pytorch
Implementation of Bottleneck Transformer in Pytorch
Stars: ✭ 408 (+125.41%)
Mutual labels:  vision, attention-mechanism
visualization
a collection of visualization functions
Stars: ✭ 189 (+4.42%)
Mutual labels:  vision, attention-mechanism
vision-api
Google Vision API made easy!
Stars: ✭ 19 (-89.5%)
Mutual labels:  vision
3HAN
An original implementation of "3HAN: A Deep Neural Network for Fake News Detection" (ICONIP 2017)
Stars: ✭ 29 (-83.98%)
Mutual labels:  attention-mechanism
AttentionGatedVNet3D
Attention Gated VNet3D Model for KiTS19 (2019 Kidney Tumor Segmentation Challenge)
Stars: ✭ 35 (-80.66%)
Mutual labels:  attention-mechanism
datastories-semeval2017-task6
Deep-learning model presented in "DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison".
Stars: ✭ 20 (-88.95%)
Mutual labels:  attention-mechanism
A-Persona-Based-Neural-Conversation-Model
No description or website provided.
Stars: ✭ 22 (-87.85%)
Mutual labels:  attention-mechanism
dodrio
Exploring attention weights in transformer-based models with linguistic knowledge.
Stars: ✭ 233 (+28.73%)
Mutual labels:  attention-mechanism
egfr-att
Drug effect prediction using neural network
Stars: ✭ 17 (-90.61%)
Mutual labels:  attention-mechanism
extkeras
Playground for implementing custom layers and other components compatible with Keras, with the purpose of learning the framework better and perhaps in future offering some utils for others.
Stars: ✭ 18 (-90.06%)
Mutual labels:  attention-mechanism
domain-attention
Code for the paper "Domain Attention Model for Multi-Domain Sentiment Classification"
Stars: ✭ 22 (-87.85%)
Mutual labels:  attention-mechanism
TinyCog
Small Robot, Toy Robot platform
Stars: ✭ 29 (-83.98%)
Mutual labels:  vision
hamnet
PyTorch implementation of AAAI 2021 paper: A Hybrid Attention Mechanism for Weakly-Supervised Temporal Action Localization
Stars: ✭ 30 (-83.43%)
Mutual labels:  attention-mechanism
abcnn pytorch
Implementation of ABCNN (Attention-Based Convolutional Neural Network) in Pytorch
Stars: ✭ 35 (-80.66%)
Mutual labels:  attention-mechanism
LMFD-PAD
Learnable Multi-level Frequency Decomposition and Hierarchical Attention Mechanism for Generalized Face Presentation Attack Detection
Stars: ✭ 27 (-85.08%)
Mutual labels:  attention-mechanism
Multigrid-Neural-Architectures
Multigrid Neural Architecture
Stars: ✭ 28 (-84.53%)
Mutual labels:  attention-mechanism
MathSolver
⌨️ Camera calculator with Vision
Stars: ✭ 70 (-61.33%)
Mutual labels:  vision
minimal-nmt
A minimal NMT example to serve as a seq2seq+attention reference.
Stars: ✭ 36 (-80.11%)
Mutual labels:  attention-mechanism
iOS14-Resources
A curated collection of iOS 14 projects ranging from SwiftUI to ML, AR etc.
Stars: ✭ 85 (-53.04%)
Mutual labels:  vision
Vision CoreML-App
This app predicts the age of a person from a picture taken with the camera or chosen from the photo gallery. It uses the Core ML framework of iOS, specifically the Vision library, for the predictions. The trained model fed to the system is AgeNet.
Stars: ✭ 15 (-91.71%)
Mutual labels:  vision

HaloNet - Pytorch

Implementation of the Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones. This repository will only house the attention layer and not much more.
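
As a rough sketch of the idea behind the layer (an illustration of the paper's mechanism, not the internals of this repository): the feature map is split into non-overlapping query blocks, and each block attends to a slightly larger "haloed" neighborhood of keys and values. The snippet below only shows how those regions could be gathered with F.unfold; the tensor names and the unfold-based gathering are assumptions made for illustration.

import torch
import torch.nn.functional as F

# Toy feature map and halo parameters (illustrative values only)
B, C, H, W = 1, 64, 32, 32
block, halo = 8, 4

fmap = torch.randn(B, C, H, W)

# Queries come from non-overlapping (block x block) regions
q = F.unfold(fmap, kernel_size = block, stride = block)        # (B, C * block^2, num_blocks)
q = q.reshape(B, C, block * block, -1)                         # (B, C, block^2, num_blocks)

# Keys / values come from the same regions, expanded by `halo` pixels on each side
kv_size = block + 2 * halo
kv = F.unfold(fmap, kernel_size = kv_size, stride = block, padding = halo)
kv = kv.reshape(B, C, kv_size * kv_size, -1)                   # (B, C, (block + 2*halo)^2, num_blocks)

print(q.shape, kv.shape)  # torch.Size([1, 64, 64, 16]) torch.Size([1, 64, 256, 16])

Each query block then attends only to its corresponding haloed key/value block, which is what keeps the attention local and parameter efficient.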

Install

$ pip install halonet-pytorch

Usage

import torch
from halonet_pytorch import HaloAttention

attn = HaloAttention(
    dim = 512,         # dimension of feature map
    block_size = 8,    # neighborhood block size (feature map must be divisible by this)
    halo_size = 4,     # halo size (block receptive field)
    dim_head = 64,     # dimension of each head
    heads = 4          # number of attention heads
).cuda()

fmap = torch.randn(1, 512, 32, 32).cuda()
attn(fmap) # (1, 512, 32, 32)
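
Since the layer maps a (batch, dim, height, width) feature map to the same shape, it can be dropped into a residual block. Below is a minimal sketch of one such wrapper; the HaloResidualBlock class and the choice of BatchNorm are illustrative assumptions, not part of the library.

import torch
from torch import nn
from halonet_pytorch import HaloAttention

class HaloResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.BatchNorm2d(dim)      # normalization choice is an assumption
        self.attn = HaloAttention(
            dim = dim,
            block_size = 8,                  # height and width must be divisible by this
            halo_size = 4,
            dim_head = 64,
            heads = 4
        )

    def forward(self, x):
        return x + self.attn(self.norm(x))   # residual connection around the attention layer

block = HaloResidualBlock(dim = 512)
fmap = torch.randn(1, 512, 32, 32)           # 32 is divisible by block_size = 8
out = block(fmap)                            # (1, 512, 32, 32)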

Citations

@misc{vaswani2021scaling,
    title   = {Scaling Local Self-Attention For Parameter Efficient Visual Backbones}, 
    author  = {Ashish Vaswani and Prajit Ramachandran and Aravind Srinivas and Niki Parmar and Blake Hechtman and Jonathon Shlens},
    year    = {2021},
    eprint  = {2103.12731},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV}
}