Linear Attention Recurrent Neural Network (LARNN): A recurrent attention module consisting of an LSTM cell that can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer Network. The LARNN cell with attention can easily be used inside a loop on the cell state, just like any other RNN (see the sketch below).
Stars: ✭ 119 (-52.02%)
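The entry above describes the mechanism well enough to sketch. The following is a minimal, single-head simplification in PyTorch (the real LARNN uses multi-head attention and BN-LSTM normalization; class and parameter names here are illustrative, not the repository's API):

```python
import torch
import torch.nn.functional as F
from torch import nn

class WindowedAttentionLSTMCell(nn.Module):
    """LSTM cell that attends over a window of its own past cell states."""

    def __init__(self, input_size, hidden_size, window=16):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.query = nn.Linear(hidden_size, hidden_size)
        self.window = window

    def forward(self, x, state, past_cells):
        # past_cells: list of previous cell states, each of shape (batch, hidden)
        h, c = self.cell(x, state)
        if past_cells:
            mem = torch.stack(past_cells[-self.window:], dim=1)  # (batch, window, hidden)
            q = self.query(h).unsqueeze(1)                       # (batch, 1, hidden)
            attn = F.softmax((q * mem).sum(-1), dim=-1)          # (batch, window)
            context = (attn.unsqueeze(-1) * mem).sum(1)          # (batch, hidden)
            c = c + context                                      # mix attended memory into the new cell state
        return h, c

# Used inside a loop on the cell state, just like any other RNN cell.
cell = WindowedAttentionLSTMCell(input_size=8, hidden_size=32)
x = torch.randn(4, 10, 8)                                        # (batch, time, features)
h, c = torch.zeros(4, 32), torch.zeros(4, 32)
past = []
for t in range(x.size(1)):
    h, c = cell(x[:, t], (h, c), past)
    past.append(c)
```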
Nmt Keras: Neural Machine Translation with Keras
Stars: ✭ 501 (+102.02%)
Sockeye: Sequence-to-sequence framework with a focus on Neural Machine Translation, based on Apache MXNet
Stars: ✭ 990 (+299.19%)
Deepattention: Deep Visual Attention Prediction (TIP18)
Stars: ✭ 65 (-73.79%)
Compact-Global-Descriptor: PyTorch implementation of "Compact Global Descriptor for Neural Networks" (CGD).
Stars: ✭ 22 (-91.13%)
Image Caption Generator: A neural network to generate captions for an image using a CNN and an RNN with beam search (see the beam-search sketch below).
Stars: ✭ 126 (-49.19%)
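For context on the beam-search part of the entry above, here is a generic, framework-free sketch; `step_log_probs` is a hypothetical stand-in for one decoder step of the CNN+RNN caption model (the actual repository's decoding code may differ):

```python
import math

def beam_search(step_log_probs, start_token, end_token, beam_size=3, max_len=20):
    """Keep the beam_size highest-scoring partial captions at each decoding step."""
    beams = [([start_token], 0.0)]                    # (token sequence, summed log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for token, logp in step_log_probs(seq).items():
                candidates.append((seq + [token], score + logp))
        candidates.sort(key=lambda b: b[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_size]:
            (finished if seq[-1] == end_token else beams).append((seq, score))
        if not beams:
            break
    return max(finished + beams, key=lambda b: b[1])

# Toy stand-in for the decoder: a fixed distribution over a four-word vocabulary.
def step_log_probs(seq):
    vocab = {"a": 0.5, "cat": 0.3, "sits": 0.15, "<end>": 0.05}
    return {w: math.log(p) for w, p in vocab.items()}

print(beam_search(step_log_probs, "<start>", "<end>"))
```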
Actionvlad: ActionVLAD for video action classification (CVPR 2017)
Stars: ✭ 217 (-12.5%)
Mmskeleton: An OpenMMLAB toolbox for human pose estimation, skeleton-based action recognition, and action synthesis.
Stars: ✭ 2,378 (+858.87%)
Attentive Gan Derainnet: Unofficial TensorFlow implementation of the "Attentive Generative Adversarial Network for Raindrop Removal from a Single Image" (CVPR 2018) model. https://maybeshewill-cv.github.io/attentive-gan-derainnet/
Stars: ✭ 184 (-25.81%)
Datastories Semeval2017 Task4: Deep-learning model presented in "DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis".
Stars: ✭ 184 (-25.81%)
Self Attention Cv: Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository.
Stars: ✭ 209 (-15.73%)
Neat Vision: Neat (Neural Attention) Vision is a framework-agnostic visualization tool for the attention mechanisms of deep-learning models on Natural Language Processing (NLP) tasks.
Stars: ✭ 213 (-14.11%)
Snli Entailment: Attention model for entailment on the SNLI corpus, implemented in TensorFlow and Keras
Stars: ✭ 181 (-27.02%)
Hidden Two Stream: Caffe implementation for "Hidden Two-Stream Convolutional Networks for Action Recognition"
Stars: ✭ 179 (-27.82%)
Vip: Video Platform for Action Recognition and Object Detection in PyTorch
Stars: ✭ 175 (-29.44%)
Hand pose action: Dataset and code for the paper "First-Person Hand Action Benchmark with RGB-D Videos and 3D Hand Pose Annotations", CVPR 2018.
Stars: ✭ 173 (-30.24%)
Lintel: A Python module to decode video frames directly, using the FFmpeg C API.
Stars: ✭ 240 (-3.23%)
Triplet Attention: Official PyTorch implementation of "Rotate to Attend: Convolutional Triplet Attention Module" (WACV 2021)
Stars: ✭ 222 (-10.48%)
Guided Attention Inference Network: Implementation of the Guided Attention Inference Network (GAIN) presented in "Tell Me Where to Look" (CVPR 2018). This repository aims to apply GAIN to the FCN-8 architecture used for segmentation.
Stars: ✭ 204 (-17.74%)
C3d Keras: C3D for Keras + TensorFlow
Stars: ✭ 171 (-31.05%)
Hnatt: Train and visualize Hierarchical Attention Networks
Stars: ✭ 192 (-22.58%)
Ta3n: [ICCV 2019 (Oral)] Temporal Attentive Alignment for Large-Scale Video Domain Adaptation (PyTorch)
Stars: ✭ 217 (-12.5%)
Action recognition zoo: Code for popular action recognition models, verified on the Something-Something dataset.
Stars: ✭ 227 (-8.47%)
Graph attention pool: Attention over nodes in Graph Neural Networks using PyTorch (NeurIPS 2019)
Stars: ✭ 186 (-25%)
Ig65m Pytorch: PyTorch 3D video classification models pre-trained on 65 million Instagram videos
Stars: ✭ 217 (-12.5%)
Linformer Pytorch: My take on a practical implementation of Linformer for PyTorch (the core idea is sketched below).
Stars: ✭ 239 (-3.63%)
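For readers unfamiliar with Linformer, the core idea (from the Linformer paper, not necessarily this repository's interface) is to project keys and values from sequence length n down to a fixed k, so self-attention costs O(n·k) rather than O(n²). A single-head sketch:

```python
import torch
import torch.nn.functional as F
from torch import nn

class LinformerSelfAttention(nn.Module):
    """Single-head self-attention with length-wise low-rank projections of K and V."""

    def __init__(self, dim, seq_len, k=64):
        super().__init__()
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj_k = nn.Parameter(torch.randn(seq_len, k) / seq_len ** 0.5)
        self.proj_v = nn.Parameter(torch.randn(seq_len, k) / seq_len ** 0.5)
        self.scale = dim ** -0.5

    def forward(self, x):                                       # x: (batch, seq_len, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        k = torch.einsum('bnd,nk->bkd', k, self.proj_k)         # (batch, k, dim)
        v = torch.einsum('bnd,nk->bkd', v, self.proj_v)         # (batch, k, dim)
        attn = F.softmax(torch.einsum('bnd,bkd->bnk', q, k) * self.scale, dim=-1)
        return torch.einsum('bnk,bkd->bnd', attn, v)            # (batch, seq_len, dim)

x = torch.randn(2, 256, 32)
print(LinformerSelfAttention(dim=32, seq_len=256)(x).shape)     # torch.Size([2, 256, 32])
```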
Amass: Data preparation and loader for AMASS
Stars: ✭ 180 (-27.42%)
Generative inpainting: DeepFill v1/v2 with Contextual Attention and Gated Convolution, CVPR 2018 and ICCV 2019 Oral
Stars: ✭ 2,659 (+972.18%)
Ican: [BMVC 2018] iCAN: Instance-Centric Attention Network for Human-Object Interaction Detection
Stars: ✭ 225 (-9.27%)
Lstm attention: Attention-based LSTM/Dense models implemented in Keras
Stars: ✭ 168 (-32.26%)
Linear Attention Transformer: Transformer based on a variant of attention that has linear complexity with respect to sequence length (see the sketch below).
Stars: ✭ 205 (-17.34%)
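A minimal sketch of the generic linear-attention trick this family of models relies on: feature maps φ applied to queries and keys so that φ(Q)(φ(K)ᵀV) replaces softmax(QKᵀ)V and nothing quadratic in sequence length is materialized. The elu+1 feature map follows Katharopoulos et al. (2020); the repository's exact variant may differ:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """O(n) attention: phi(q) @ (phi(k)^T @ v), with elu+1 as the feature map."""
    phi_q = F.elu(q) + 1
    phi_k = F.elu(k) + 1
    kv = torch.einsum('bnd,bne->bde', phi_k, v)                       # (batch, dim, dim) key/value summary
    z = 1 / (torch.einsum('bnd,bd->bn', phi_q, phi_k.sum(1)) + eps)   # per-query normalizer
    return torch.einsum('bnd,bde,bn->bne', phi_q, kv, z)

q = k = v = torch.randn(2, 1024, 64)
print(linear_attention(q, k, v).shape)   # torch.Size([2, 1024, 64])
```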
Video Caffe: A video-friendly Caffe; it comes with the most recent version of Caffe (as of Jan 2019), a video reader, a 3D (ND) pooling layer, and an example training script for the C3D network and UCF-101 data
Stars: ✭ 172 (-30.65%)
Pytorch Acnn Model: Code for "Relation Classification via Multi-Level Attention CNNs"
Stars: ✭ 170 (-31.45%)
Attention Mechanisms: Implementations of a family of attention mechanisms, suitable for all kinds of natural language processing tasks and compatible with TensorFlow 2.0 and Keras.
Stars: ✭ 203 (-18.15%)
Paddlevideo: Comprehensive, up-to-date, and deployable video deep-learning algorithms, covering video recognition, action localization, and temporal action detection. A high-performance, lightweight codebase that provides practical models for video understanding research and applications.
Stars: ✭ 218 (-12.1%)
Eeg Dl: A deep-learning library for EEG signal classification tasks, based on TensorFlow.
Stars: ✭ 165 (-33.47%)
Slot Attention: Implementation of Slot Attention from Google AI
Stars: ✭ 168 (-32.26%)
Dd Net: A lightweight network for body/hand action recognition
Stars: ✭ 161 (-35.08%)
Gat: Graph Attention Networks (https://arxiv.org/abs/1710.10903)
Stars: ✭ 2,229 (+798.79%)
Alphaction: Spatio-Temporal Action Localization System
Stars: ✭ 221 (-10.89%)
X Transformers: A simple but complete full-attention transformer with a set of promising experimental features from various papers
Stars: ✭ 211 (-14.92%)
Dynamic routing between capsules: Implementation of "Dynamic Routing Between Capsules" (Sara Sabour, Nicholas Frosst, Geoffrey E. Hinton, NIPS 2017)
Stars: ✭ 202 (-18.55%)
Picanet Implementation: PyTorch implementation of "PiCANet: Learning Pixel-wise Contextual Attention for Saliency Detection"
Stars: ✭ 157 (-36.69%)
Csa Inpainting: Coherent Semantic Attention for image inpainting (ICCV 2019)
Stars: ✭ 202 (-18.55%)
Sinkhorn Transformer: Practical implementation of Sparse Sinkhorn Attention
Stars: ✭ 156 (-37.1%)
Sa Tensorflow: Soft attention mechanism for video caption generation (a minimal attention step is sketched below)
Stars: ✭ 154 (-37.9%)
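To make the "soft attention" in the last entry concrete, here is a minimal additive (Bahdanau-style) attention step over per-frame features, written in PyTorch rather than TensorFlow purely for brevity; all names are illustrative, not the repository's API:

```python
import torch
import torch.nn.functional as F
from torch import nn

class SoftAttention(nn.Module):
    """Additive soft attention: the decoder state scores each frame; softmax weights them."""

    def __init__(self, feat_dim, state_dim, attn_dim=128):
        super().__init__()
        self.w_feat = nn.Linear(feat_dim, attn_dim)
        self.w_state = nn.Linear(state_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, frames, state):
        # frames: (batch, num_frames, feat_dim); state: (batch, state_dim)
        scores = self.v(torch.tanh(self.w_feat(frames) + self.w_state(state).unsqueeze(1)))
        weights = F.softmax(scores.squeeze(-1), dim=-1)          # (batch, num_frames)
        context = (weights.unsqueeze(-1) * frames).sum(dim=1)    # (batch, feat_dim) context vector
        return context, weights

frames = torch.randn(2, 40, 512)     # 40 frame features per clip
state = torch.randn(2, 256)          # current decoder hidden state
context, weights = SoftAttention(512, 256)(frames, state)
print(context.shape, weights.shape)  # torch.Size([2, 512]) torch.Size([2, 40])
```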