
nengo / Keras Lmu

Licence: other
Keras implementation of Legendre Memory Units

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Keras Lmu

Ai Reading Materials
Some of the ML- and DL-related reading materials and research papers that I've read
Stars: ✭ 79 (-50.62%)
Mutual labels:  lstm, recurrent-neural-networks
Pytorch Learners Tutorial
PyTorch tutorial for learners
Stars: ✭ 97 (-39.37%)
Mutual labels:  lstm, recurrent-neural-networks
Language Translation
Neural machine translator for English2German translation.
Stars: ✭ 82 (-48.75%)
Mutual labels:  lstm, recurrent-neural-networks
Chicksexer
A Python package for gender classification.
Stars: ✭ 64 (-60%)
Mutual labels:  lstm, recurrent-neural-networks
Stockprediction
Plain Stock Close-Price Prediction via Graves LSTM RNNs
Stars: ✭ 134 (-16.25%)
Mutual labels:  lstm, recurrent-neural-networks
Bitcoin Price Prediction Using Lstm
Bitcoin price prediction (time series) using an LSTM recurrent neural network
Stars: ✭ 67 (-58.12%)
Mutual labels:  lstm, recurrent-neural-networks
Pytorch Pos Tagging
A tutorial on how to implement models for part-of-speech tagging using PyTorch and TorchText.
Stars: ✭ 96 (-40%)
Mutual labels:  lstm, recurrent-neural-networks
Tensorflow Lstm Sin
TensorFlow 1.3 experiment with LSTM (and GRU) RNNs for sine prediction
Stars: ✭ 52 (-67.5%)
Mutual labels:  lstm, recurrent-neural-networks
Image Caption Generator
A neural network to generate captions for an image using CNN and RNN with BEAM Search.
Stars: ✭ 126 (-21.25%)
Mutual labels:  lstm, recurrent-neural-networks
Linear Attention Recurrent Neural Network
A recurrent attention module consisting of an LSTM cell which can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer Network. The LARNN cell with attention can be easily used inside a loop on the cell state, just like any other RNN. (LARNN)
Stars: ✭ 119 (-25.62%)
Mutual labels:  lstm, recurrent-neural-networks
Gdax Orderbook Ml
Application of machine learning to the Coinbase (GDAX) orderbook
Stars: ✭ 60 (-62.5%)
Mutual labels:  lstm, recurrent-neural-networks
Stock Price Predictor
This project seeks to use deep learning models, specifically the Long Short-Term Memory (LSTM) neural network, to predict stock prices.
Stars: ✭ 146 (-8.75%)
Mutual labels:  lstm, recurrent-neural-networks
Sentiment Analysis Nltk Ml Lstm
Sentiment analysis on the first Republican Party debate in 2016, based on Python, NLTK and ML.
Stars: ✭ 61 (-61.87%)
Mutual labels:  lstm, recurrent-neural-networks
Lstm Ctc Ocr
Using an RNN (LSTM or GRU) and CTC to convert line images into text, based on Torch7 and warp-ctc
Stars: ✭ 70 (-56.25%)
Mutual labels:  lstm, recurrent-neural-networks
Image Captioning
Image Captioning: Implementing the Neural Image Caption Generator with python
Stars: ✭ 52 (-67.5%)
Mutual labels:  lstm, recurrent-neural-networks
Multitask sentiment analysis
Multitask Deep Learning for Sentiment Analysis using Character-Level Language Model, Bi-LSTMs for POS Tag, Chunking and Unsupervised Dependency Parsing. Inspired by this great article https://arxiv.org/abs/1611.01587
Stars: ✭ 93 (-41.87%)
Mutual labels:  lstm, recurrent-neural-networks
Sangita
A Natural Language Toolkit for Indian Languages
Stars: ✭ 43 (-73.12%)
Mutual labels:  lstm, recurrent-neural-networks
Deepseqslam
The Official Deep Learning Framework for Route-based Place Recognition
Stars: ✭ 49 (-69.37%)
Mutual labels:  lstm, recurrent-neural-networks
Rnn Text Classification Tf
Tensorflow Implementation of Recurrent Neural Network (Vanilla, LSTM, GRU) for Text Classification
Stars: ✭ 114 (-28.75%)
Mutual labels:  lstm, recurrent-neural-networks
Document Classifier Lstm
A bidirectional LSTM with attention for multiclass/multilabel text classification.
Stars: ✭ 136 (-15%)
Mutual labels:  lstm, recurrent-neural-networks

KerasLMU: Recurrent neural networks using Legendre Memory Units

`Paper <https://papers.nips.cc/paper/9689-legendre-memory-units-continuous-time-representation-in-recurrent-neural-networks.pdf>`_

This is a Keras-based implementation of the Legendre Memory Unit (LMU). The LMU is a novel memory cell for recurrent neural networks that dynamically maintains information across long windows of time using relatively few resources. It has been shown to perform as well as standard LSTM or other RNN-based models in a variety of tasks, generally with fewer internal parameters (see `this paper <https://papers.nips.cc/paper/9689-legendre-memory-units-continuous-time-representation-in-recurrent-neural-networks.pdf>`_ for more details). For the Permuted Sequential MNIST (psMNIST) task in particular, it has been demonstrated to outperform the current state-of-the-art. See the note below for instructions on how to get access to this model.
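As a quick illustration of how the layer is typically used, below is a minimal sketch (not copied from the official documentation) that drops an LMU layer into a Keras model for psMNIST-style sequences. The parameter names (``memory_d``, ``order``, ``theta``, ``hidden_cell``) follow the ``keras_lmu.LMU`` signature, but the specific sizes here are placeholders for illustration only.

.. code-block:: python

   import tensorflow as tf
   import keras_lmu

   # Each MNIST image is presented as a 784-step sequence of single pixels.
   inputs = tf.keras.Input(shape=(784, 1))
   lmu = keras_lmu.LMU(
       memory_d=1,                                       # dimensionality of the signal fed to the memory
       order=256,                                        # number of Legendre coefficients (d in the paper)
       theta=784,                                        # sliding window length, in timesteps
       hidden_cell=tf.keras.layers.SimpleRNNCell(212),   # nonlinear hidden state h
   )(inputs)
   outputs = tf.keras.layers.Dense(10)(lmu)

   model = tf.keras.Model(inputs, outputs)
   model.compile(
       optimizer="adam",
       loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
       metrics=["accuracy"],
   )
   model.summary()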

The LMU is mathematically derived to orthogonalize its continuous-time history – doing so by solving d coupled ordinary differential equations (ODEs), whose phase space linearly maps onto sliding windows of time via the Legendre polynomials up to degree d − 1 (the example for d = 12 is shown below).

.. image:: https://i.imgur.com/Uvl6tj5.png
   :target: https://i.imgur.com/Uvl6tj5.png
   :alt: Legendre polynomials
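For readers who want to reproduce a figure like the one above, the following sketch (an illustration only, not code from this repository) evaluates the Legendre polynomials up to degree d − 1 across a normalized window using SciPy; the rescaling from [0, 1] to [-1, 1] reflects the fact that the memory represents the sliding window on a unit interval.

.. code-block:: python

   import numpy as np
   import matplotlib.pyplot as plt
   from scipy.special import legendre

   d = 12
   # The memory represents the window on [0, 1]; the classical Legendre
   # polynomials live on [-1, 1], so rescale the domain before evaluating.
   t = np.linspace(0, 1, 500)
   for i in range(d):
       plt.plot(t, legendre(i)(2 * t - 1), label=f"degree {i}")
   plt.xlabel("position within the window (t' / theta)")
   plt.ylabel("Legendre polynomial value")
   plt.legend(ncol=2, fontsize="small")
   plt.show()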

A single LMU cell expresses the following computational graph, which takes in an input signal, x, and couples an optimal linear memory, m, with a nonlinear hidden state, h. By default, this coupling is trained via backpropagation, while the dynamics of the memory remain fixed.

.. image:: https://i.imgur.com/IJGUVg6.png
   :target: https://i.imgur.com/IJGUVg6.png
   :alt: Computational graph
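To make the graph above concrete, here is a schematic NumPy sketch of a single cell step; the variable names and shapes are assumptions chosen for illustration, not the package's internals. The encoders e project the input, the previous hidden state, and the previous memory onto the memory's input u; the fixed (Ā, B̄) system updates m; and the kernels W drive the hidden state h.

.. code-block:: python

   import numpy as np

   def lmu_step(x, h, m, A_bar, B_bar, e_x, e_h, e_m, W_x, W_h, W_m, f=np.tanh):
       """One illustrative LMU cell update (m has length d, h has length n)."""
       u = e_x @ x + e_h @ h + e_m @ m        # scalar signal written into the memory
       m = A_bar @ m + B_bar * u              # linear memory update with fixed dynamics
       h = f(W_x @ x + W_h @ h + W_m @ m)     # nonlinear hidden state
       return h, m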

The discretized A and B matrices are initialized according to the LMU's mathematical derivation with respect to some chosen window length, θ. Backpropagation can be used to learn this time-scale, or fine-tune A and B, if necessary.
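A hedged sketch of that initialization, based on the derivation in the paper rather than copied from the package, is shown below; it builds the continuous-time (A, B) system for a given order d and window length θ and then discretizes it with a zero-order hold over a unit timestep.

.. code-block:: python

   import numpy as np
   from scipy.signal import cont2discrete

   def lmu_matrices(d, theta, dt=1.0):
       """Construct and discretize the LMU's (A, B) matrices (illustrative only)."""
       Q = np.arange(d, dtype=np.float64)
       R = (2 * Q + 1)[:, None] / theta
       i, j = np.meshgrid(Q, Q, indexing="ij")
       A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * R   # continuous-time A
       B = ((-1.0) ** Q)[:, None] * R                         # continuous-time B
       # Zero-order-hold discretization over one timestep of length dt.
       A_bar, B_bar, *_ = cont2discrete(
           (A, B, np.ones((1, d)), np.zeros((1, 1))), dt=dt, method="zoh"
       )
       return A_bar, B_bar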

Both the kernels, W, and the encoders, e, are learned. Intuitively, the kernels learn to compute nonlinear functions across the memory, while the encoders learn to project the relevant information into the memory (see the `paper <https://papers.nips.cc/paper/9689-legendre-memory-units-continuous-time-representation-in-recurrent-neural-networks.pdf>`_ for details).

.. note::

   The ``paper`` branch in the lmu GitHub repository includes a pre-trained Keras/TensorFlow model, located at ``models/psMNIST-standard.hdf5``, which obtains a psMNIST result of 97.15%. Note that the network is using fewer internal state-variables and neurons than there are pixels in the input sequence. To reproduce the results from `this paper <https://papers.nips.cc/paper/9689-legendre-memory-units-continuous-time-representation-in-recurrent-neural-networks.pdf>`_, run the notebooks in the ``experiments`` directory within the ``paper`` branch.

Nengo Examples

* `LMUs in Nengo (with online learning) <https://www.nengo.ai/nengo/examples/learning/lmu.html>`_
* `Spiking LMUs in Nengo Loihi (with online learning) <https://www.nengo.ai/nengo-loihi/examples/lmu.html>`_
* `LMUs in NengoDL (reproducing SotA on psMNIST) <https://www.nengo.ai/nengo-dl/examples/lmu.html>`_

Citation

.. code-block::

   @inproceedings{voelker2019lmu,
     title={Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks},
     author={Aaron R. Voelker and Ivana Kaji\'c and Chris Eliasmith},
     booktitle={Advances in Neural Information Processing Systems},
     pages={15544--15553},
     year={2019}
   }

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].