
HendrikStrobelt / Lstmvis

License: BSD-3-Clause
Visualization Toolbox for Long Short Term Memory networks (LSTMs)


Visual Analysis for State Changes in RNNs

More information about LSTMVis, an introduction video, and the link to the live demo can be found at lstm.seas.harvard.edu

Also check out our new work on Sequence-to-Sequence models on GitHub, or try the live demo at http://seq2seq-vis.io/

Changes in V2.1

  • updated to Python 3.7+ (thanks to @nneophyt)

Changes in V2

  • new design and server backend
  • discrete zooming for the hidden-state track
  • added annotation tracks for meta-data and prediction
  • added training and extraction workflow for TensorFlow
  • client is now ES6 and D3 v4
  • some performance enhancements on the client side
  • added a Keras tutorial here (thanks to Mohammadreza Ebrahimi)

Install

Please use Python 3.7 or later to install LSTMVis.

Clone the repository:

git clone https://github.com/HendrikStrobelt/LSTMVis.git; cd LSTMVis

Install Python (server-side) requirements using pip:

python -m venv venv3
source venv3/bin/activate
pip install -r requirements.txt

Download and unzip the example dataset(s) into <LSTMVis>/data/05childbook:

  • Children Book - Gutenberg - 2.2 GB
  • Parens Dataset - 10k small - 0.03 GB

Start the server:

source venv3/bin/activate
python lstm_server.py -dir <datadir>

For the example dataset, use python lstm_server.py -dir data

Open your browser at http://localhost:8888, et voilà!

Adding Your Own Data

If you first want to train a model on your own data, please read the Training document. If you already have your own data at hand, adding it to LSTMVis is very easy. You only need three files:

  • HDF5 file containing the state vectors for each time step (e.g. states.hdf5)
  • HDF5 file containing a word ID for each time step (e.g. train.hdf5)*
  • Dict file containing the mapping from word ID to word (e.g. train.dict)*

A schematic representation of the data:

Data Format

*If you don't have these files yet, but a space-separated .txt file of your training data instead, check out our text conversion tool
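As a rough sketch of what such a conversion involves (this is not the project's actual tool, and the "word id" line format of the .dict file is an assumption here), mapping a space-separated corpus to integer word IDs might look like:

```python
# Sketch: turn a space-separated corpus into a word-ID sequence plus a dict file.
# NOTE: hypothetical helper, not LSTMVis's actual conversion tool; the
# "word<space>id" line format of the .dict file is an assumption.

def build_vocab(text):
    """Assign an integer ID to each word in order of first appearance."""
    word_to_id = {}
    ids = []
    for word in text.split():
        if word not in word_to_id:
            word_to_id[word] = len(word_to_id)
        ids.append(word_to_id[word])
    return word_to_id, ids

corpus = "the cat sat on the mat"
vocab, word_ids = build_vocab(corpus)

# train.dict would then contain one "word id" pair per line:
dict_lines = [f"{w} {i}" for w, i in vocab.items()]
print(word_ids)  # [0, 1, 2, 3, 0, 4]
```

The resulting word_ids sequence is what would be stored as an HDF5 dataset in train.hdf5 (e.g. with h5py), aligned one-to-one with the rows of states.hdf5.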

Data Directory

LSTMVis parses all subdirectories of <datadir> for config files lstm.yml. A typical <datadir> might look like this:

<datadir>
├── paren  		        <--- project directory
│   ├── lstm.yml 		<--- config file
│   ├── states.hdf5 	        <--- states for each time step
│   ├── train.hdf5 		<--- word ID for each time step
│   └── train.dict 		<--- mapping word ID -> word
├── fun .. 
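The scan for lstm.yml config files can be sketched with the standard library (a hypothetical illustration of the behavior described above, not the server's actual code):

```python
# Sketch: find every project directory below <datadir> that holds an lstm.yml.
# Hypothetical helper mirroring the described scan, not LSTMVis's own code.
from pathlib import Path
import tempfile

def find_projects(datadir):
    """Return the directories below datadir that contain an lstm.yml config."""
    return sorted(p.parent for p in Path(datadir).rglob("lstm.yml"))

# Demo on a throwaway layout mirroring the tree above.
with tempfile.TemporaryDirectory() as d:
    paren = Path(d) / "paren"
    paren.mkdir()
    (paren / "lstm.yml").write_text("name: paren demo\n")
    projects = find_projects(d)
    print([p.name for p in projects])  # ['paren']
```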

Config File

A simple example of an lstm.yml:

name: children books  # project name
description: children book texts from the Gutenberg project # little description

files: # assign files to reference name
  states: states.hdf5 # HDF5 files have to end with .h5 or .hdf5 !!!
  train: train.hdf5 # word ids of training set
  words: train.dict # dict files have to end with .dict !!

word_sequence: # defines the word sequence
  file: train # HDF5 file
  path: word_ids # path to table in HDF5
  dict_file: words # dictionary to map IDs from HDF5 to words

states: # section to define which states of your model you want to look at
  file: states # HDF5 files containing the state for each position
  types: [
        {type: state, layer: 1, path: states1}, # type={state, output}, layer=[1..x], path = HDF5 path
        {type: state, layer: 2, path: states2},
        {type: output, layer: 2, path: output2}
  ]
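Once parsed (e.g. with a YAML loader), the states section is just a list of track descriptors; a sketch of how a consumer might pick out the HDF5 paths (variable names are illustrative only, not LSTMVis's API):

```python
# The parsed "states" section of lstm.yml as a plain Python structure
# (illustrative only; keys mirror the YAML example above).
states_section = {
    "file": "states",
    "types": [
        {"type": "state", "layer": 1, "path": "states1"},
        {"type": "state", "layer": 2, "path": "states2"},
        {"type": "output", "layer": 2, "path": "output2"},
    ],
}

# Collect the HDF5 paths of all cell-state tracks (type == "state"):
state_paths = [t["path"] for t in states_section["types"] if t["type"] == "state"]
print(state_paths)  # ['states1', 'states2']
```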

Intrigued? Here is more:

Check out our documents about:

Credits

LSTMVis is a collaborative project of Hendrik Strobelt, Sebastian Gehrmann, Bernd Huber, Hanspeter Pfister, and Alexander M. Rush at Harvard SEAS.
