self critical vqa: Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Stars: ✭ 39 (+18.18%)
Absa keras: Keras Implementation of Aspect-Based Sentiment Analysis
Stars: ✭ 126 (+281.82%)
iPerceive: Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self Attention | Published in IEEE Winter Conference on Applications of Computer Vision (WACV) 2021
Stars: ✭ 52 (+57.58%)
Mmf: A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
Stars: ✭ 4,713 (+14181.82%)
lstm-attention: Attention-based bidirectional LSTM for classification tasks (ICASSP); a minimal sketch of this pattern appears below.
Stars: ✭ 87 (+163.64%)
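This entry names a common pattern: additive attention pooling over a bidirectional LSTM encoder for classification. The following is a minimal PyTorch sketch of that pattern, not this repository's actual code; all layer names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    """Hypothetical sketch: attention-pooled BiLSTM classifier."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden_dim, 1)      # one attention score per timestep
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, tokens):                         # tokens: (B, T) int64 ids
        h, _ = self.lstm(self.embed(tokens))           # h: (B, T, 2H)
        weights = torch.softmax(self.score(h), dim=1)  # normalize over time
        context = (weights * h).sum(dim=1)             # attention-weighted pooling
        return self.fc(context)                        # (B, num_classes) logits
```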
Pytorch Gat: My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've supported both Cora (transductive) and PPI (inductive) examples! A minimal layer sketch follows below.
Stars: ✭ 908 (+2651.52%)
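For comparison across the graph-attention entries in this list, here is a minimal single-head GAT layer in the dense-adjacency formulation of Veličković et al.; it is a sketch, not code from this repository. The two linear terms reproduce e_ij = LeakyReLU(a^T [Wh_i || Wh_j]) without materializing concatenated node pairs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """Hypothetical single-head GAT layer over a dense (N, N) adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a_src = nn.Linear(out_dim, 1, bias=False)  # source half of a^T
        self.a_dst = nn.Linear(out_dim, 1, bias=False)  # target half of a^T

    def forward(self, x, adj):          # x: (N, F); adj: (N, N), self-loops included
        h = self.W(x)                   # (N, out_dim)
        e = F.leaky_relu(self.a_src(h) + self.a_dst(h).T, negative_slope=0.2)
        e = e.masked_fill(adj == 0, float('-inf'))  # restrict attention to neighbors
        alpha = torch.softmax(e, dim=-1)            # per-node distribution over neighbors
        return alpha @ h                            # aggregate neighbor features
```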
Isab Pytorch: An implementation of the (Induced) Set Attention Block from the Set Transformer paper; a minimal sketch follows below.
Stars: ✭ 21 (-36.36%)
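As context for this entry: an Induced Set Attention Block attends a small set of m learned inducing points to the input set, then attends the input back to the result, cutting cost from O(n^2) to O(nm). A minimal sketch under those assumptions (not this repository's code; it leans on torch.nn.MultiheadAttention with illustrative sizes):

```python
import torch
import torch.nn as nn

class MAB(nn.Module):
    """Multihead Attention Block: attend from x to y, then feed-forward."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x, y):
        h = self.norm1(x + self.attn(x, y, y)[0])
        return self.norm2(h + self.ff(h))

class ISAB(nn.Module):
    """Induced Set Attention Block with m learned inducing points."""
    def __init__(self, dim, num_inducing=16, heads=4):
        super().__init__()
        self.inducing = nn.Parameter(torch.randn(1, num_inducing, dim))
        self.mab1, self.mab2 = MAB(dim, heads), MAB(dim, heads)

    def forward(self, x):                   # x: (B, n, dim)
        i = self.inducing.expand(x.size(0), -1, -1)
        h = self.mab1(i, x)                 # inducing points attend to the set
        return self.mab2(x, h)              # the set attends back: (B, n, dim)
```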
Neat Vision: Neat (Neural Attention) Vision is a visualization tool for the attention mechanisms of deep-learning models for Natural Language Processing (NLP) tasks (framework-agnostic).
Stars: ✭ 213 (+545.45%)
bottom-up-features: Bottom-up feature extractor implemented in PyTorch.
Stars: ✭ 62 (+87.88%)
Guided Attention Inference Network: Implementation of the Guided Attention Inference Network (GAIN) presented in "Tell Me Where to Look" (CVPR 2018). This repository applies GAIN to the FCN-8 architecture used for segmentation.
Stars: ✭ 204 (+518.18%)
h-transformer-1d: Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning
Stars: ✭ 121 (+266.67%)
Im2LaTeX: An implementation of the Show, Attend and Tell paper in TensorFlow, for the OpenAI Im2LaTeX suggested problem
Stars: ✭ 16 (-51.52%)
just-ask: [TPAMI Special Issue on ICCV 2021 Best Papers, Oral] Just Ask: Learning to Answer Questions from Millions of Narrated Videos
Stars: ✭ 57 (+72.73%)
Pytorch Original Transformer: My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. IWSLT pretrained models are currently included; the core attention op is sketched below.
Stars: ✭ 411 (+1145.45%)
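The operation underlying this and most transformer entries here is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A minimal sketch with illustrative shapes (not taken from this repository):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """q, k, v: (batch, heads, seq_len, d_k); mask: broadcastable 0/1 tensor."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float('-inf'))  # e.g. causal masking
    weights = torch.softmax(scores, dim=-1)  # distribution over key positions
    return weights @ v                       # attention-weighted sum of values
```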
Lambda Networks: Implementation of LambdaNetworks, a new approach to image recognition that reaches SOTA with less compute
Stars: ✭ 1,497 (+4436.36%)
Global Self Attention Network: A PyTorch implementation of the Global Self-Attention Network, a fully attentional backbone for vision tasks
Stars: ✭ 64 (+93.94%)
Hnatt: Train and visualize Hierarchical Attention Networks
Stars: ✭ 192 (+481.82%)
CrabNet: Predict materials properties using only the composition information!
Stars: ✭ 57 (+72.73%)
Self Attention Cv: Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository.
Stars: ✭ 209 (+533.33%)
hexia: Mid-level PyTorch-based framework for Visual Question Answering.
Stars: ✭ 24 (-27.27%)
Datastories Semeval2017 Task4: Deep-learning model presented in "DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis".
Stars: ✭ 184 (+457.58%)
visualization: a collection of visualization functions
Stars: ✭ 189 (+472.73%)
Graph attention pool: Attention over nodes in Graph Neural Networks using PyTorch (NeurIPS 2019)
Stars: ✭ 186 (+463.64%)
FigureQA-baseline: TensorFlow implementation of the CNN-LSTM, Relation Network and text-only baselines for the paper "FigureQA: An Annotated Figure Dataset for Visual Reasoning"
Stars: ✭ 28 (-15.15%)
NTUA-slp-nlp: 💻 Speech and Natural Language Processing (SLP & NLP) lab assignments for ECE NTUA
Stars: ✭ 19 (-42.42%)
Prediction Flow: Deep-learning-based CTR models implemented in PyTorch
Stars: ✭ 138 (+318.18%)
Mac Network: Implementation of the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018)
Stars: ✭ 444 (+1245.45%)
TRAR-VQA: [ICCV 2021] TRAR: Routing the Attention Spans in Transformers for Visual Question Answering (official implementation)
Stars: ✭ 49 (+48.48%)
Neural sp: End-to-end ASR/LM implementation with PyTorch
Stars: ✭ 408 (+1136.36%)
Seq2seq Summarizer: Pointer-generator reinforced seq2seq summarization in PyTorch
Stars: ✭ 306 (+827.27%)
Performer Pytorch: An implementation of Performer, a linear-attention-based transformer, in PyTorch; a simplified linear-attention sketch follows below.
Stars: ✭ 546 (+1554.55%)
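Performer's FAVOR+ approximates softmax attention with random positive features. As a simplified stand-in (not FAVOR+ itself, and not this repository's code), the non-causal linear-attention identity it builds on replaces softmax(QK^T)V with phi(Q)(phi(K)^T V), reducing cost from O(N^2) to O(N) in sequence length; here phi = elu + 1 serves purely as an illustrative feature map:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """q, k, v: (batch, heads, seq_len, dim); non-causal linear attention."""
    q, k = F.elu(q) + 1, F.elu(k) + 1              # positive feature maps
    kv = torch.einsum('bhnd,bhne->bhde', k, v)     # key-value summary, O(N)
    z = 1 / (torch.einsum('bhnd,bhd->bhn', q, k.sum(dim=2)) + eps)  # normalizer
    return torch.einsum('bhnd,bhde,bhn->bhne', q, kv, z)
```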
Attention: Code for several different attention mechanisms
Stars: ✭ 17 (-48.48%)
Attend infer repeat: A TensorFlow implementation of Attend, Infer, Repeat
Stars: ✭ 82 (+148.48%)
Image Caption Generator: A neural network to generate captions for an image using a CNN and an RNN with beam search.
Stars: ✭ 126 (+281.82%)
Vqa regat: Research code for the ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering"
Stars: ✭ 129 (+290.91%)
ntua-slp-semeval2018: Deep-learning models of the NTUA-SLP team submitted to SemEval 2018 tasks 1, 2, and 3.
Stars: ✭ 79 (+139.39%)
datastories-semeval2017-task6: Deep-learning model presented in "DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison".
Stars: ✭ 20 (-39.39%)
LMFD-PAD: Learnable Multi-level Frequency Decomposition and Hierarchical Attention Mechanism for Generalized Face Presentation Attack Detection
Stars: ✭ 27 (-18.18%)
detect-shortcuts: Repo for the ICCV 2021 paper "Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering"
Stars: ✭ 17 (-48.48%)
3HAN: An original implementation of "3HAN: A Deep Neural Network for Fake News Detection" (ICONIP 2017)
Stars: ✭ 29 (-12.12%)
jeelizGlanceTracker: JavaScript/WebGL library that detects whether the user is looking at the screen from the webcam video feed. Lightweight and robust to all lighting conditions. Great for playing/pausing videos depending on whether the user is looking, or for person detection. Link to live demo.
Stars: ✭ 68 (+106.06%)
domain-attention: Code for the paper "Domain Attention Model for Multi-Domain Sentiment Classification"
Stars: ✭ 22 (-33.33%)
dodrio: Exploring attention weights in transformer-based models with linguistic knowledge.
Stars: ✭ 233 (+606.06%)
Multi-task-Conditional-Attention-Networks: A prototype implementation accompanying our submitted paper "Conversion Prediction Using Multi-task Conditional Attention Networks to Support the Creation of Effective Ad Creatives".
Stars: ✭ 21 (-36.36%)
LFattNet: Attention-based View Selection Networks for Light-field Disparity Estimation
Stars: ✭ 41 (+24.24%)
stanford-cs231n-assignments-2020: This repository contains my solutions to the assignments for Stanford's CS231n "Convolutional Neural Networks for Visual Recognition" (Spring 2020).
Stars: ✭ 84 (+154.55%)
long-short-transformer: Implementation of the Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch
Stars: ✭ 103 (+212.12%)
abcnn pytorch: Implementation of ABCNN (Attention-Based Convolutional Neural Network) in PyTorch
Stars: ✭ 35 (+6.06%)
probnmn-clevr: Code for the ICML 2019 paper "Probabilistic Neural-symbolic Models for Interpretable Visual Question Answering" [long oral]
Stars: ✭ 63 (+90.91%)