Image Caption Generator: A neural network that generates captions for an image using a CNN and an RNN with beam search (a minimal beam-search sketch follows this entry).
Stars: ✭ 126 (-72.55%)
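Beam search itself is framework-independent, so here is a minimal Python sketch of the decoding loop. The `step_fn` callback, `bos_id`, and `eos_id` are hypothetical stand-ins for whatever decoder and token ids a given captioning model actually uses.

```python
import math

def beam_search(step_fn, bos_id, eos_id, beam_width=3, max_len=20):
    """Generic beam search: keep the `beam_width` highest-scoring
    partial sequences at each step, scored by summed log-probability.

    `step_fn(seq)` is a hypothetical callback returning a list of
    (token_id, log_prob) candidates for the next token given `seq`.
    """
    beams = [([bos_id], 0.0)]  # (sequence, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, logp in step_fn(seq):
                candidates.append((seq + [tok], score + logp))
        # Keep only the best `beam_width` expansions.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_width]:
            (finished if seq[-1] == eos_id else beams).append((seq, score))
        if not beams:
            break
    finished.extend(beams)
    return max(finished, key=lambda c: c[1])[0]
```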
Hnatt: Train and visualize Hierarchical Attention Networks.
Stars: ✭ 192 (-58.17%)
Global Self Attention Network: A PyTorch implementation of Global Self-Attention Network, a fully attentional backbone for vision tasks.
Stars: ✭ 64 (-86.06%)
ntua-slp-semeval2018: Deep learning models from the NTUA-SLP team, submitted to SemEval 2018 Tasks 1, 2, and 3.
Stars: ✭ 79 (-82.79%)
Isab Pytorch: An implementation of the (Induced) Set Attention Block from the Set Transformer paper (see the sketch below).
Stars: ✭ 21 (-95.42%)
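As background, the Set Transformer paper builds ISAB from two chained attention blocks routed through a small set of learned inducing points, dropping the cost from O(n²) to O(nm). Below is a much-simplified PyTorch sketch that omits the paper's residual, layer-norm, and feed-forward sublayers; the class name and hyperparameters are illustrative only.

```python
import torch
import torch.nn as nn

class SimpleISAB(nn.Module):
    """Simplified Induced Set Attention Block sketch.

    H = Attn(I, X)  -- m learned inducing points attend to the n inputs
    Y = Attn(X, H)  -- the inputs attend back to the m summaries
    Cost is O(n*m) instead of O(n^2). The real block also wraps each
    attention in residual + layer-norm + feed-forward sublayers.
    """
    def __init__(self, dim, num_inducing=16, num_heads=4):
        super().__init__()
        self.inducing = nn.Parameter(torch.randn(1, num_inducing, dim))
        self.attn1 = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn2 = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):              # x: (batch, n, dim)
        i = self.inducing.expand(x.size(0), -1, -1)
        h, _ = self.attn1(i, x, x)     # (batch, m, dim)
        y, _ = self.attn2(x, h, h)     # (batch, n, dim)
        return y
```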
Absa keras: A Keras implementation of aspect-based sentiment analysis.
Stars: ✭ 126 (-72.55%)
Prediction Flow: Deep-learning-based CTR models implemented in PyTorch.
Stars: ✭ 138 (-69.93%)
Pytorch Original Transformer: My implementation of the original Transformer model (Vaswani et al.). The playground.py file is included for visualizing otherwise hard-to-grasp concepts. IWSLT pretrained models are currently included.
Stars: ✭ 411 (-10.46%)
CrabNet: Predict materials properties using only composition information!
Stars: ✭ 57 (-87.58%)
Neural sp: End-to-end ASR/LM implementation in PyTorch.
Stars: ✭ 408 (-11.11%)
datastories-semeval2017-task6: Deep learning model presented in "DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison".
Stars: ✭ 20 (-95.64%)
Pytorch Gat: My implementation of the original GAT paper (Veličković et al.). The playground.py file is included for visualizing the Cora dataset, GAT embeddings, the attention mechanism, and entropy histograms. Both Cora (transductive) and PPI (inductive) examples are supported!
Stars: ✭ 908 (+97.82%)
Linear Attention Recurrent Neural Network: A recurrent attention module consisting of an LSTM cell that can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer network. The LARNN cell with attention can easily be used inside a loop over the cell state, just like any other RNN cell. (A simplified sketch of the windowed-attention idea follows this entry.)
Stars: ✭ 119 (-74.07%)
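To make the "query its own past cell states" idea concrete, here is a heavily simplified, single-head sketch; it is not the LARNN equations themselves, and the tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def windowed_cell_attention(query, past_cells):
    """Much-simplified sketch of the LARNN idea: the current step
    attends over a window of the cell's own past cell states.

    query:      (batch, dim)         -- derived from the current step
    past_cells: (batch, window, dim) -- the last `window` cell states
    Returns a context vector (batch, dim) mixed from past cells.
    """
    d = query.size(-1)
    scores = torch.einsum('bd,bwd->bw', query, past_cells) / d ** 0.5
    weights = F.softmax(scores, dim=-1)           # attention over the window
    return torch.einsum('bw,bwd->bd', weights, past_cells)
```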
Graph attention pool: Attention over nodes in graph neural networks using PyTorch (NeurIPS 2019).
Stars: ✭ 186 (-59.48%)
Deepattention: Deep Visual Attention Prediction (TIP 2018).
Stars: ✭ 65 (-85.84%)
Nlp Journey: Documents, papers, and code related to natural language processing, including topic models, word embeddings, named entity recognition, text classification, text generation, text similarity, machine translation, etc. All code is implemented in TensorFlow 2.0.
Stars: ✭ 1,290 (+181.05%)
h-transformer-1d: Implementation of H-Transformer-1D, hierarchical attention for sequence learning.
Stars: ✭ 121 (-73.64%)
Im2LaTeX: An implementation of the Show, Attend and Tell paper in TensorFlow, for the OpenAI Im2LaTeX suggested problem.
Stars: ✭ 16 (-96.51%)
visualization: A collection of visualization functions.
Stars: ✭ 189 (-58.82%)
AoA-pytorch: A PyTorch implementation of the Attention on Attention module (both self and guided variants) for visual question answering.
Stars: ✭ 33 (-92.81%)
egfr-att: Drug effect prediction using a neural network.
Stars: ✭ 17 (-96.3%)
Self Attention Cv: Implementations of various self-attention mechanisms focused on computer vision. An ongoing repository.
Stars: ✭ 209 (-54.47%)
Ner Bert: BERT-NER (nert-bert) with Google BERT, https://github.com/google-research.
Stars: ✭ 339 (-26.14%)
Compact-Global-Descriptor: PyTorch implementation of "Compact Global Descriptor for Neural Networks" (CGD).
Stars: ✭ 22 (-95.21%)
Performer Pytorch: An implementation of Performer, a linear-attention-based Transformer, in PyTorch (a simplified linear-attention sketch follows this entry).
Stars: ✭ 546 (+18.95%)
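The key idea behind linear attention is to replace softmax(QKᵀ)V with φ(Q)(φ(K)ᵀV), which is linear rather than quadratic in sequence length. The sketch below uses the simple elu(x)+1 feature map purely for illustration; Performer's actual FAVOR+ mechanism instead uses random features to approximate the softmax kernel.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    """Sketch of linear (kernelized) attention over full sequences.

    q, k, v: (batch, seq, dim). Instead of the (seq x seq) score
    matrix, we accumulate phi(K)^T V once and reuse it per query,
    so the cost grows linearly with sequence length.
    """
    q, k = F.elu(q) + 1, F.elu(k) + 1             # positive feature maps
    kv = torch.einsum('bsd,bse->bde', k, v)       # phi(K)^T V: (batch, dim, dim)
    z = 1 / (torch.einsum('bsd,bd->bs', q, k.sum(dim=1)) + 1e-6)
    return torch.einsum('bsd,bde,bs->bse', q, kv, z)
```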
Nmt Keras: Neural machine translation with Keras.
Stars: ✭ 501 (+9.15%)
Sockeye: A sequence-to-sequence framework focused on neural machine translation, built on Apache MXNet.
Stars: ✭ 990 (+115.69%)
reasoning attention: Unofficial implementations of attention models on the SNLI dataset.
Stars: ✭ 34 (-92.59%)
Lambda Networks: Implementation of LambdaNetworks, a new approach to image recognition that reaches state-of-the-art accuracy with less compute.
Stars: ✭ 1,497 (+226.14%)
Datastories Semeval2017 Task4: Deep learning model presented in "DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis".
Stars: ✭ 184 (-59.91%)
Guided Attention Inference Network: Contains an implementation of the Guided Attention Inference Network (GAIN) presented in "Tell Me Where to Look" (CVPR 2018). This repository aims to apply GAIN to the FCN-8 architecture used for segmentation.
Stars: ✭ 204 (-55.56%)
Seq2seq Summarizer: Pointer-generator reinforced seq2seq summarization in PyTorch (a sketch of the pointer-generator mixture follows this entry).
Stars: ✭ 306 (-33.33%)
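A pointer-generator network mixes generating from the vocabulary with copying source tokens via the attention distribution (See et al., 2017). A minimal NumPy sketch of that final mixture, with all inputs hypothetical:

```python
import numpy as np

def pointer_generator_dist(p_gen, vocab_dist, attention, src_ids):
    """Final distribution of a pointer-generator network:
    P(w) = p_gen * P_vocab(w)
         + (1 - p_gen) * sum of attention over source positions where w occurs.

    vocab_dist: (vocab_size,) softmax over the output vocabulary
    attention:  (src_len,)    attention over source tokens
    src_ids:    (src_len,)    vocabulary ids of the source tokens
    """
    final = p_gen * vocab_dist
    # Scatter-add the copy probability mass onto the source tokens' ids.
    np.add.at(final, src_ids, (1.0 - p_gen) * attention)
    return final
```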
Text classification: All kinds of text classification models, and more, with deep learning.
Stars: ✭ 7,179 (+1464.05%)
Simplecvreproduction: Reproductions of simple CV projects, including attention modules, classification, object detection, segmentation, keypoint detection, tracking 😄, etc.
Stars: ✭ 602 (+31.15%)
lstm-attention: Attention-based bidirectional LSTM for classification tasks (ICASSP).
Stars: ✭ 87 (-81.05%)
Attentionalpoolingaction: Code/model release for the NIPS 2017 paper "Attentional Pooling for Action Recognition".
Stars: ✭ 248 (-45.97%)
Neat Vision: Neat (Neural Attention) Vision is a framework-agnostic visualization tool for the attention mechanisms of deep learning models for natural language processing (NLP) tasks.
Stars: ✭ 213 (-53.59%)
NTUA-slp-nlp: 💻 Speech and Natural Language Processing (SLP & NLP) lab assignments for ECE NTUA.
Stars: ✭ 19 (-95.86%)
Attention: Code for several different attention mechanisms.
Stars: ✭ 17 (-96.3%)
Transformer: A TensorFlow implementation of the Transformer from "Attention Is All You Need" (a minimal scaled dot-product attention sketch follows this entry).
Stars: ✭ 3,646 (+694.34%)
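The core operation the paper introduces is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A self-contained NumPy sketch (single head, unbatched, shapes assumed for illustration):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V,
    as defined in "Attention Is All You Need".

    q: (n_q, d_k), k: (n_k, d_k), v: (n_k, d_v)
    mask: optional (n_q, n_k) boolean; False entries are blocked.
    """
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)               # (n_q, n_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)     # block masked positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                            # (n_q, d_v)
```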
Deep learning nlp: Keras, PyTorch, and NumPy implementations of deep learning architectures for NLP.
Stars: ✭ 407 (-11.33%)
Nlp Tutorials: Simple implementations of NLP models. Tutorials are written in Chinese on my website, https://mofanpy.com.
Stars: ✭ 394 (-14.16%)
Keras Gat: Keras implementation of graph attention networks (GAT) by Veličković et al. (2017; https://arxiv.org/abs/1710.10903). A sketch of the GAT attention coefficients follows this entry.
Stars: ✭ 334 (-27.23%)
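GAT scores each edge (i, j) as e_ij = LeakyReLU(aᵀ[Wh_i ‖ Wh_j]), normalizes with a softmax over each node's neighbors, and aggregates the transformed neighbor features. A single-head NumPy sketch, with shapes assumed for illustration:

```python
import numpy as np

def gat_layer(h, adj, W, a, neg_slope=0.2):
    """Single-head graph attention layer sketch (Veličković et al., 2017):
    e_ij = LeakyReLU(a^T [W h_i || W h_j]), masked to existing edges,
    alpha_ij = softmax_j(e_ij), h'_i = sum_j alpha_ij * (W h_j).

    h: (n, f) node features; adj: (n, n) boolean adjacency;
    W: (f, f_out); a: (2 * f_out,) attention vector.
    """
    wh = h @ W                                    # (n, f_out)
    f_out = wh.shape[1]
    # a^T [Wh_i || Wh_j] splits into a_left . Wh_i + a_right . Wh_j.
    e = (wh @ a[:f_out])[:, None] + (wh @ a[f_out:])[None, :]
    e = np.where(e > 0, e, neg_slope * e)         # LeakyReLU
    e = np.where(adj, e, -1e9)                    # keep only real edges
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)     # softmax over neighbors
    return alpha @ wh                             # (n, f_out)
```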
Tensorflow Resources: Curated TensorFlow code resources to help you get started with deep learning.
Stars: ✭ 330 (-28.1%)
Mac Network: Implementation of the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018).
Stars: ✭ 444 (-3.27%)