
kaustubhhiware / LSTM-GRU-from-scratch

License: MIT
LSTM and GRU cell implementations from scratch in TensorFlow

Programming Languages

  • Python (139,335 projects; #7 most used programming language)
  • Shell (77,523 projects)

Projects that are alternatives to or similar to LSTM-GRU-from-scratch

Pytorch Rnn Text Classification
Word Embedding + LSTM + FC
Stars: ✭ 112 (+273.33%)
Mutual labels:  lstm, gru
Eeg Dl
A deep learning library for EEG signal classification tasks, based on TensorFlow.
Stars: ✭ 165 (+450%)
Mutual labels:  lstm, gru
Rnn Text Classification Tf
TensorFlow implementation of recurrent neural networks (vanilla RNN, LSTM, GRU) for text classification
Stars: ✭ 114 (+280%)
Mutual labels:  lstm, gru
Tensorflow Lstm Sin
TensorFlow 1.3 experiment with LSTM (and GRU) RNNs for sine prediction
Stars: ✭ 52 (+73.33%)
Mutual labels:  lstm, gru
ConvLSTM-PyTorch
ConvLSTM/ConvGRU (Encoder-Decoder) with PyTorch on Moving-MNIST
Stars: ✭ 202 (+573.33%)
Mutual labels:  lstm, gru
Gdax Orderbook Ml
Application of machine learning to the Coinbase (GDAX) orderbook
Stars: ✭ 60 (+100%)
Mutual labels:  lstm, gru
Pytorch Kaldi
pytorch-kaldi is a project for developing state-of-the-art DNN/RNN hybrid speech recognition systems. The DNN part is managed by PyTorch, while feature extraction, label computation, and decoding are performed with the Kaldi toolkit.
Stars: ✭ 2,097 (+6890%)
Mutual labels:  lstm, gru
Sudl
Lightweight deep neural network toolbox (LSTM, GRU, RNN, CNN, Bi-LSTM, etc.)
Stars: ✭ 29 (-3.33%)
Mutual labels:  lstm, gru
Trafficflowprediction
Traffic flow prediction with neural networks (SAEs, LSTM, GRU).
Stars: ✭ 242 (+706.67%)
Mutual labels:  lstm, gru
Rnn ctc
Recurrent Neural Network and Long Short-Term Memory (LSTM) with Connectionist Temporal Classification, implemented in Theano. Includes a toy training example.
Stars: ✭ 220 (+633.33%)
Mutual labels:  lstm, gru
theano-recurrence
Recurrent Neural Networks (RNN, GRU, LSTM) and their Bidirectional versions (BiRNN, BiGRU, BiLSTM) for word & character level language modelling in Theano
Stars: ✭ 40 (+33.33%)
Mutual labels:  lstm, gru
tf-ran-cell
Recurrent Additive Networks for TensorFlow
Stars: ✭ 16 (-46.67%)
Mutual labels:  lstm, gru
Rnn Notebooks
RNN (SimpleRNN, LSTM, GRU) TensorFlow 2.0 & Keras notebooks (workshop materials)
Stars: ✭ 48 (+60%)
Mutual labels:  lstm, gru
See Rnn
RNN and general weights, gradients, & activations visualization in Keras & TensorFlow
Stars: ✭ 102 (+240%)
Mutual labels:  lstm, gru
Tensorflow Sentiment Analysis On Amazon Reviews Data
Implements different RNN models (LSTM, GRU) and convolution models (Conv1D, Conv2D) on a subset of the Amazon Reviews data with TensorFlow on Python 3. A sentiment analysis project.
Stars: ✭ 34 (+13.33%)
Mutual labels:  lstm, gru
Load forecasting
Load forecasting of Delhi-area electric power demand using ARIMA, RNN, LSTM, and GRU models
Stars: ✭ 160 (+433.33%)
Mutual labels:  lstm, gru
Easy Deep Learning With Keras
Keras tutorial for beginners (using TF backend)
Stars: ✭ 367 (+1123.33%)
Mutual labels:  lstm, gru
Cryptocurrencyprediction
Predict Cryptocurrency Price with Deep Learning
Stars: ✭ 453 (+1410%)
Mutual labels:  lstm, gru
Haste
Haste: a fast, simple, and open RNN library
Stars: ✭ 214 (+613.33%)
Mutual labels:  lstm, gru
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+11293.33%)
Mutual labels:  lstm, gru

LSTM-GRU-from-scratch

LSTM and GRU cell implementations from scratch

Weights for Assignment 4 of the Deep Learning course, CS60010.

Currently includes weights for LSTM and GRU with hidden layer sizes of 32, 64, 128, and 256.

Objective

The aim of this assignment was to compare the performance of LSTM, GRU, and MLP models over a fixed number of iterations while varying the hidden layer size. Please refer to Report.pdf for the details.

Suggested reading: colah's blog on LSTMs. That's really all you need; a minimal sketch of the gating equations it describes appears below.

  • [Figure: Loss as a function of iterations]
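
For reference, the core of an LSTM cell is just the four gating equations from colah's post. Below is a minimal, self-contained NumPy sketch of a single LSTM step; it is not the repository's actual code, and all names and shapes are illustrative assumptions.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h_prev, c_prev, W, U, b):
        # One LSTM step. Shapes (illustrative): x (D,), h_prev/c_prev (H,),
        # W (4H, D), U (4H, H), b (4H,). Gate order: input, forget, output, candidate.
        H = h_prev.shape[0]
        z = W @ x + U @ h_prev + b            # all four gate pre-activations at once
        i = sigmoid(z[0:H])                   # input gate
        f = sigmoid(z[H:2*H])                 # forget gate
        o = sigmoid(z[2*H:3*H])               # output gate
        g = np.tanh(z[3*H:4*H])               # candidate cell state
        c = f * c_prev + i * g                # new cell state
        h = o * np.tanh(c)                    # new hidden state
        return h, c

Stepping this function over the timesteps of a sequence, with learned W, U, and b for each hidden layer size (32, 64, 128, 256), is essentially what a from-scratch implementation amounts to.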

Usage

  • sh run.sh - Run LSTM and GRU for every hidden-unit size and report accuracy. Output here.

  • python train.py --train - Run training and save weights into the weights/ folder. Defaults to LSTM, hidden_unit 32, 30 iterations/epochs.

  • python train.py --train --hidden_unit 32 --model lstm --iter 5 - Train an LSTM for the specified number of iterations and dump the weights.

  • python train.py --test --hidden_unit 32 --model lstm - Load precomputed weights and report test accuracy.
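
Taken together, the flags above suggest a command-line interface roughly like the hypothetical argparse sketch below; train.py's actual parser may differ, and the defaults shown simply mirror the ones stated above.

    import argparse

    # Hypothetical reconstruction of train.py's CLI from the commands above.
    parser = argparse.ArgumentParser(description="Train or test LSTM/GRU cells from scratch")
    parser.add_argument("--train", action="store_true", help="run training and save weights")
    parser.add_argument("--test", action="store_true", help="load saved weights and report accuracy")
    parser.add_argument("--model", choices=["lstm", "gru"], default="lstm", help="cell type")
    parser.add_argument("--hidden_unit", type=int, default=32, choices=[32, 64, 128, 256])
    parser.add_argument("--iter", type=int, default=30, help="training iterations/epochs (assumed default)")
    args = parser.parse_args()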

Code structure

  • data_loader loads the data from the zip files in the data/ folder.
  • module defines the basic LSTM and GRU cells (a rough sketch of such a cell follows this list).
  • train handles input parsing and sets up the model.
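
As a companion to the LSTM sketch above, here is what the GRU half of module might boil down to. Again, this is a minimal NumPy sketch under assumed names and shapes, not the repository's actual code; GRU conventions vary, and this one interpolates as (1 - z) * h_prev + z * h_tilde.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gru_step(x, h_prev, W, U, b):
        # One GRU step. Shapes (illustrative): x (D,), h_prev (H,),
        # W (3H, D), U (3H, H), b (3H,). Gate order: update, reset, candidate.
        H = h_prev.shape[0]
        z = sigmoid(W[0:H] @ x + U[0:H] @ h_prev + b[0:H])            # update gate
        r = sigmoid(W[H:2*H] @ x + U[H:2*H] @ h_prev + b[H:2*H])      # reset gate
        g = np.tanh(W[2*H:] @ x + U[2*H:] @ (r * h_prev) + b[2*H:])   # candidate state
        return (1.0 - z) * h_prev + z * g                             # blend old and candidate

The GRU merges the LSTM's cell and hidden states and uses two gates instead of three, which is why it needs 3H rather than 4H rows of parameters.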

License

The MIT License (MIT) 2018 - Kaustubh Hiware.
