STAM-pytorch: Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification
Stars: ✭ 109 (+21.11%)
Mutual labels: transformers, attention-mechanism, video-classification
keras-deep-learning: Various implementations of and projects on CNNs, RNNs, LSTMs, GANs, etc.
Stars: ✭ 22 (-75.56%)
Mutual labels: attention-mechanism, video-classification
long-short-transformer: Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch
Stars: ✭ 103 (+14.44%)
Mutual labels: transformers, attention-mechanism
RETRO-pytorch: Implementation of RETRO, DeepMind's retrieval-based attention net, in Pytorch
Stars: ✭ 473 (+425.56%)
Mutual labels: transformers, attention-mechanism
nuwa-pytorch: Implementation of NÜWA, a state-of-the-art attention network for text-to-video synthesis, in Pytorch
Stars: ✭ 347 (+285.56%)
Mutual labels: transformers, attention-mechanism
transganformer: Implementation of TransGanFormer, an all-attention GAN that combines findings from the recent GansFormer and TransGAN papers
Stars: ✭ 137 (+52.22%)
Mutual labels: transformers, attention-mechanism
Vit Pytorch: Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
Stars: ✭ 7,199 (+7898.89%)
Mutual labels: transformers, attention-mechanism
Reformer Pytorch: Reformer, the efficient Transformer, in Pytorch
Stars: ✭ 1,644 (+1726.67%)
Mutual labels: transformers, attention-mechanism
Dalle Pytorch: Implementation / replication of DALL-E, OpenAI's text-to-image transformer, in Pytorch
Stars: ✭ 3,661 (+3967.78%)
Mutual labels: transformers, attention-mechanism
C3D-tensorflow: Action recognition with the C3D network, implemented in TensorFlow
Stars: ✭ 34 (-62.22%)
Mutual labels: video-classification
LSTM-Attention: A Comparison of LSTMs and Attention Mechanisms for Forecasting Financial Time Series
Stars: ✭ 53 (-41.11%)
Mutual labels: attention-mechanism
Visual-Attention-Model: Chainer implementation of DeepMind's Visual Attention Model paper
Stars: ✭ 27 (-70%)
Mutual labels: attention-mechanism
TransQuest: Transformer-based translation quality estimation
Stars: ✭ 85 (-5.56%)
Mutual labels: transformers
MiCT-Net-PyTorch: Video recognition using Mixed Convolutional Tube (MiCT) on PyTorch with a ResNet backbone
Stars: ✭ 48 (-46.67%)
Mutual labels: video-classification
conv3d-video-action-recognition: My experiments with action recognition in videos. Contains a Keras implementation of the C3D network based on the original paper "Learning Spatiotemporal Features with 3D Convolutional Networks" (Tran et al.), and includes video processing pipelines coded with the mPyPl package. The model is benchmarked on the popular UCF101 dataset and achieves result…
Stars: ✭ 50 (-44.44%)
Mutual labels: video-classification
memory-compressed-attention: Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia by Summarizing Long Sequences"
Stars: ✭ 47 (-47.78%)
Mutual labels: attention-mechanism
MinkLocMultimodal: MinkLoc++, Lidar and Monocular Image Fusion for Place Recognition
Stars: ✭ 65 (-27.78%)
Mutual labels: 3d-convolutional-network
clip-italian: CLIP (Contrastive Language–Image Pre-training) for Italian
Stars: ✭ 113 (+25.56%)
Mutual labels: transformers
text: Using Transformers from HuggingFace in R
Stars: ✭ 66 (-26.67%)
Mutual labels: transformers