long-short-transformer: Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch
Stars: ✭ 103 (-24.82%)
Mutual labels: transformers, attention-mechanism
Vit Pytorch: Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
Stars: ✭ 7,199 (+5154.74%)
Mutual labels: transformers, attention-mechanism
Ylg: [CVPR 2020] Official implementation of "Your Local GAN: Designing Two Dimensional Local Attention Mechanisms for Generative Models"
Stars: ✭ 109 (-20.44%)
Mutual labels: attention-mechanism, generative-adversarial-networks
Dalle Pytorch: Implementation / replication of DALL-E, OpenAI's text-to-image transformer, in PyTorch
Stars: ✭ 3,661 (+2572.26%)
Mutual labels: transformers, attention-mechanism
uniformer-pytorch: Implementation of Uniformer, a simple attention and 3D convolutional net that achieved SOTA on a number of video classification tasks, debuted at ICLR 2022
Stars: ✭ 90 (-34.31%)
Mutual labels: transformers, attention-mechanism
STAM-pytorch: Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification
Stars: ✭ 109 (-20.44%)
Mutual labels: transformers, attention-mechanism
Reformer Pytorch: Reformer, the efficient Transformer, in PyTorch
Stars: ✭ 1,644 (+1100%)
Mutual labels: transformers, attention-mechanism
RETRO-pytorch: Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch
Stars: ✭ 473 (+245.26%)
Mutual labels: transformers, attention-mechanism
nuwa-pytorch: Implementation of NÜWA, a state-of-the-art attention network for text-to-video synthesis, in PyTorch
Stars: ✭ 347 (+153.28%)
Mutual labels: transformers, attention-mechanism
keras attention: 🔖 An attention layer in Keras
Stars: ✭ 43 (-68.61%)
Mutual labels: attention-mechanism
attention-guided-sparsity: Attention-based guided structured sparsity of deep neural networks
Stars: ✭ 26 (-81.02%)
Mutual labels: attention-mechanism
Diverse-Structure-Inpainting: [CVPR 2021] "Generating Diverse Structure for Image Inpainting With Hierarchical VQ-VAE"
Stars: ✭ 131 (-4.38%)
Mutual labels: generative-adversarial-networks
knowledge-neurons: A library for finding knowledge neurons in pretrained transformer models
Stars: ✭ 72 (-47.45%)
Mutual labels: transformers
trapper: State-of-the-art NLP through transformer models, with a modular design and consistent APIs
Stars: ✭ 28 (-79.56%)
Mutual labels: transformers
vista-net: Code for the paper "VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis", AAAI'19
Stars: ✭ 67 (-51.09%)
Mutual labels: attention-mechanism
TermiNetwork: 🌏 A zero-dependency networking solution for building modern and secure iOS, watchOS, macOS and tvOS applications
Stars: ✭ 80 (-41.61%)
Mutual labels: transformers
HVT: [ICCV 2021] Official implementation of "Scalable Vision Transformers with Hierarchical Pooling"
Stars: ✭ 26 (-81.02%)
Mutual labels: transformers
BangalASR: Transformer-based Bangla speech recognition
Stars: ✭ 20 (-85.4%)
Mutual labels: transformers