
zimmerrol / keras-utility-layer-collection

License: MIT
A collection of custom layers and utility functions for Keras that are missing from the main framework.

Programming Languages

python

Projects that are alternatives to or similar to keras-utility-layer-collection

Tf Rnn Attention
Tensorflow implementation of attention mechanism for text classification tasks.
Stars: ✭ 735 (+1066.67%)
Mutual labels:  rnn, attention
Machine Learning
My Attempt(s) In The World Of ML/DL....
Stars: ✭ 78 (+23.81%)
Mutual labels:  rnn, attention
Attentive Neural Processes
implementing "recurrent attentive neural processes" to forecast power usage (w. LSTM baseline, MCDropout)
Stars: ✭ 33 (-47.62%)
Mutual labels:  rnn, attention
EBIM-NLI
Enhanced BiLSTM Inference Model for Natural Language Inference
Stars: ✭ 24 (-61.9%)
Mutual labels:  rnn, attention
Chinese Chatbot
A Chinese chatbot trained on 100,000 dialogue pairs, using an attention mechanism; it generates a meaningful reply to most everyday questions. The trained model has been uploaded and can be run directly (and if it won't run, the author promises to livestream eating their keyboard).
Stars: ✭ 124 (+96.83%)
Mutual labels:  rnn, attention
Base-On-Relation-Method-Extract-News-DA-RNN-Model-For-Stock-Prediction--Pytorch
A dual-stage attention mechanism model based on a relational news-extraction method, used for stock prediction
Stars: ✭ 33 (-47.62%)
Mutual labels:  rnn, attention
Attention Over Attention Tf Qa
Implementation of the AoA model from the paper "Attention-over-Attention Neural Networks for Reading Comprehension"
Stars: ✭ 58 (-7.94%)
Mutual labels:  rnn, attention
automatic-personality-prediction
[AAAI 2020] Modeling Personality with Attentive Networks and Contextual Embeddings
Stars: ✭ 43 (-31.75%)
Mutual labels:  rnn, attention
Captcharecognition
End-to-end variable-length captcha recognition using CNN+RNN+Attention/CTC (PyTorch implementation)
Stars: ✭ 97 (+53.97%)
Mutual labels:  rnn, attention
Cnn lstm for text classify
Chinese text classification with CNN, LSTM, NBOW, and fastText
Stars: ✭ 90 (+42.86%)
Mutual labels:  rnn, attention
Time Attention
Implementation of RNN for Time Series prediction from the paper https://arxiv.org/abs/1704.02971
Stars: ✭ 52 (-17.46%)
Mutual labels:  rnn, attention
Doc Han Att
Hierarchical Attention Networks for Chinese Sentiment Classification
Stars: ✭ 206 (+226.98%)
Mutual labels:  rnn, attention
Attend infer repeat
A TensorFlow implementation of Attend, Infer, Repeat
Stars: ✭ 82 (+30.16%)
Mutual labels:  rnn, attention
Rnn For Joint Nlu
Pytorch implementation of "Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling" (https://arxiv.org/abs/1609.01454)
Stars: ✭ 176 (+179.37%)
Mutual labels:  rnn, attention
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+5325.4%)
Mutual labels:  rnn, attention
dreyeve
[TPAMI 2018] Predicting the Driver’s Focus of Attention: the DR(eye)VE Project. A deep neural network learnt to reproduce the human driver focus of attention (FoA) in a variety of real-world driving scenarios.
Stars: ✭ 88 (+39.68%)
Mutual labels:  attention
chinese ancient poetry
seq2seq attention tensorflow textrank context
Stars: ✭ 30 (-52.38%)
Mutual labels:  attention
Motor-Imagery-Tasks-Classification-using-EEG-data
Implementation of Deep Neural Networks in Keras and Tensorflow to classify motor imagery tasks using EEG data
Stars: ✭ 67 (+6.35%)
Mutual labels:  rnn
STAR Network
[PAMI 2021] Gating Revisited: Deep Multi-layer RNNs That Can Be Trained
Stars: ✭ 16 (-74.6%)
Mutual labels:  rnn
training-charRNN
Training charRNN model for ml5js
Stars: ✭ 87 (+38.1%)
Mutual labels:  rnn

Keras Utility & Layer Collection [WIP]

A collection of custom layers for Keras that are missing from the main framework. These layers can be useful for reproducing current state-of-the-art deep learning papers in Keras.

Applications

Using this library, the following research papers have been reimplemented in Keras:

Overview of implemented Layers

At the moment, the Keras Layer Collection offers the following layers/features:

Scaled Dot-Product Attention

Implementation as described in Attention Is All You Need. Performs a non-linear transformation of the values V by comparing the queries Q with the keys K: softmax-normalized similarity scores between Q and K weight the contribution of each value to the output.
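
In formula form, this is Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. Below is a minimal sketch of that computation in plain TensorFlow ops; the function name and signature are illustrative, not this library's API.

```python
import tensorflow as tf

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = tf.cast(tf.shape(k)[-1], tf.float32)
    scores = tf.matmul(q, k, transpose_b=True) / tf.sqrt(d_k)  # (batch, n_q, n_k)
    weights = tf.nn.softmax(scores, axis=-1)  # one attention distribution per query
    return tf.matmul(weights, v)              # (batch, n_q, d_v)

# Example: 2 samples, 5 queries attending over 7 key/value steps.
q = tf.random.normal((2, 5, 16))
k = tf.random.normal((2, 7, 16))
v = tf.random.normal((2, 7, 32))
out = scaled_dot_product_attention(q, k, v)   # shape (2, 5, 32)
```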

Multi-Head Attention

Implementation as described in Attention Is All You Need. This is essentially several Scaled Dot-Product Attention blocks running in parallel, whose outputs are concatenated and combined by a linear transformation.
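
A minimal sketch of this idea as a Keras layer follows; the class name, the use of tf.keras, and the head-splitting details are illustrative assumptions, not this library's implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

class MultiHeadAttentionSketch(layers.Layer):
    """num_heads scaled dot-product attention blocks, merged by a Dense layer."""

    def __init__(self, num_heads, d_model, **kwargs):
        super().__init__(**kwargs)
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.num_heads, self.d_model = num_heads, d_model
        self.depth = d_model // num_heads
        self.wq, self.wk, self.wv = (layers.Dense(d_model), layers.Dense(d_model),
                                     layers.Dense(d_model))
        self.wo = layers.Dense(d_model)  # final linear combination of all heads

    def _split_heads(self, x):
        # (batch, seq, d_model) -> (batch, heads, seq, depth)
        b = tf.shape(x)[0]
        x = tf.reshape(x, (b, -1, self.num_heads, self.depth))
        return tf.transpose(x, (0, 2, 1, 3))

    def call(self, q, k, v):
        q, k, v = map(self._split_heads, (self.wq(q), self.wk(k), self.wv(v)))
        scores = tf.matmul(q, k, transpose_b=True) / (self.depth ** 0.5)
        heads = tf.matmul(tf.nn.softmax(scores, axis=-1), v)
        b = tf.shape(heads)[0]
        merged = tf.reshape(tf.transpose(heads, (0, 2, 1, 3)), (b, -1, self.d_model))
        return self.wo(merged)

# Self-attention usage: queries, keys, and values are the same sequence.
x = tf.random.normal((2, 10, 64))
out = MultiHeadAttentionSketch(num_heads=4, d_model=64)(x, x, x)  # (2, 10, 64)
```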

Layer Normalization
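
For reference, layer normalization (Ba et al., 2016) normalizes each sample over its feature axis and then applies a learned scale and shift. A minimal functional sketch of the computation, not this library's API; gamma and beta stand in for the learned parameters:

```python
import tensorflow as tf

def layer_norm(x, gamma, beta, eps=1e-6):
    # Per-sample statistics over the last (feature) axis.
    mean = tf.reduce_mean(x, axis=-1, keepdims=True)
    var = tf.reduce_mean(tf.square(x - mean), axis=-1, keepdims=True)
    # Normalize, then rescale and shift with the learned parameters.
    return gamma * (x - mean) / tf.sqrt(var + eps) + beta

x = tf.random.normal((2, 8))
y = layer_norm(x, gamma=tf.ones((8,)), beta=tf.zeros((8,)))
```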

Sequencewise Attention

This layer applies an attention transformation to sequential data. It takes a time series of queries and a time series of values, computes the attention between them, and applies a final linear transformation to obtain the output. Because all time steps are processed at once, it is a faster version of the general attention technique; it is similar to the global attention method described in Effective Approaches to Attention-based Neural Machine Translation. A sketch of the computation follows.
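
The sketch below assumes dot-product scoring (the library may offer other scoring functions) and uses a Dense layer for the final linear transformation; the names and signature are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers

def sequencewise_attention(queries, values, output_proj):
    # queries: (batch, t_q, d), values: (batch, t_v, d).
    # All query steps attend over all value steps in one batched matmul,
    # which is what makes this faster than a step-by-step attention loop.
    scores = tf.matmul(queries, values, transpose_b=True)  # (batch, t_q, t_v)
    weights = tf.nn.softmax(scores, axis=-1)               # one distribution per query step
    context = tf.matmul(weights, values)                   # (batch, t_q, d)
    return output_proj(context)                            # final linear transformation

proj = layers.Dense(32)
q = tf.random.normal((2, 6, 32))
v = tf.random.normal((2, 9, 32))
out = sequencewise_attention(q, v, proj)  # (2, 6, 32)
```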

Attention Wrapper

The idea of the implementation is based on the paper Effective Approaches to Attention-based Neural Machine Translation. This layer can be wrapped around any RNN in Keras. At each time step of the RNN, it computes the attention vector between the previous output and all input steps, and uses it to construct a new attention-based input, which is then fed into the RNN. This is similar to the input-feeding method described in the cited paper.
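
The sketch below illustrates one time step of this scheme with a GRU cell: the previous output scores all input steps, the resulting context vector is combined with the current input, and the combined vector is fed into the cell. Dot-product scoring, the GRU cell, and all names here are illustrative assumptions, not this library's implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def attentive_rnn_step(cell, x_t, prev_output, states, inputs_seq, input_proj):
    # Score every input step against the previous RNN output.
    scores = tf.squeeze(tf.matmul(inputs_seq, prev_output[:, :, None]), -1)
    weights = tf.nn.softmax(scores, axis=-1)                       # (batch, t_in)
    context = tf.reduce_sum(inputs_seq * weights[:, :, None], 1)   # (batch, d)
    # Construct the new attention-based input and feed it into the RNN cell.
    rnn_input = input_proj(tf.concat([x_t, context], axis=-1))
    return cell(rnn_input, states)

cell, proj = layers.GRUCell(16), layers.Dense(16)
inputs_seq = tf.random.normal((2, 10, 16))   # all input steps
x_t = tf.random.normal((2, 16))              # current input step
prev_out = tf.zeros((2, 16))                 # previous RNN output
states = [tf.zeros((2, 16))]                 # GRU cell state
out, states = attentive_rnn_step(cell, x_t, prev_out, states, inputs_seq, proj)
```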
