
albertogaspar / dts

Licence: MIT License
A Keras library for multi-step time-series forecasting.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to dts

Gdax Orderbook Ml
Application of machine learning to the Coinbase (GDAX) orderbook
Stars: ✭ 60 (-53.85%)
Mutual labels:  recurrent-neural-networks, lstm, gru
Rnn Text Classification Tf
Tensorflow Implementation of Recurrent Neural Network (Vanilla, LSTM, GRU) for Text Classification
Stars: ✭ 114 (-12.31%)
Mutual labels:  recurrent-neural-networks, lstm, gru
Tensorflow Lstm Sin
TensorFlow 1.3 experiment with LSTM (and GRU) RNNs for sine prediction
Stars: ✭ 52 (-60%)
Mutual labels:  recurrent-neural-networks, lstm, gru
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+2529.23%)
Mutual labels:  lstm, gru, seq2seq
Tensorflow Lstm Regression
Sequence prediction using recurrent neural networks(LSTM) with TensorFlow
Stars: ✭ 433 (+233.08%)
Mutual labels:  time-series, recurrent-neural-networks, lstm
Deep News Summarization
News summarization using sequence to sequence model with attention in TensorFlow.
Stars: ✭ 167 (+28.46%)
Mutual labels:  recurrent-neural-networks, lstm, seq2seq
ConvLSTM-PyTorch
ConvLSTM/ConvGRU (Encoder-Decoder) with PyTorch on Moving-MNIST
Stars: ✭ 202 (+55.38%)
Mutual labels:  time-series, lstm, gru
Lstm anomaly thesis
Anomaly detection for temporal data using LSTMs
Stars: ✭ 178 (+36.92%)
Mutual labels:  time-series, recurrent-neural-networks, lstm
Rnn ctc
Recurrent Neural Network and Long Short Term Memory (LSTM) with Connectionist Temporal Classification implemented in Theano. Includes a Toy training example.
Stars: ✭ 220 (+69.23%)
Mutual labels:  recurrent-neural-networks, lstm, gru
Pytorch Kaldi
pytorch-kaldi is a project for developing state-of-the-art DNN/RNN hybrid speech recognition systems. The DNN part is managed by pytorch, while feature extraction, label computation, and decoding are performed with the kaldi toolkit.
Stars: ✭ 2,097 (+1513.08%)
Mutual labels:  recurrent-neural-networks, lstm, gru
Deep Learning Time Series
List of papers, code and experiments using deep learning for time series forecasting
Stars: ✭ 796 (+512.31%)
Mutual labels:  time-series, recurrent-neural-networks, lstm
LSTM-Time-Series-Analysis
Using LSTM network for time series forecasting
Stars: ✭ 41 (-68.46%)
Mutual labels:  time-series, recurrent-neural-networks, lstm
LearningMetersPoems
Official repo of the article: Yousef, W. A., Ibrahime, O. M., Madbouly, T. M., & Mahmoud, M. A. (2019), "Learning meters of arabic and english poems with recurrent neural networks: a step forward for language understanding and synthesis", arXiv preprint arXiv:1905.05700
Stars: ✭ 18 (-86.15%)
Mutual labels:  lstm, gru
SpeakerDiarization RNN CNN LSTM
Speaker Diarization is the problem of separating speakers in an audio. There could be any number of speakers and final result should state when speaker starts and ends. In this project, we analyze given audio file with 2 channels and 2 speakers (on separate channels).
Stars: ✭ 56 (-56.92%)
Mutual labels:  recurrent-neural-networks, lstm
sequence-rnn-py
Sequence analyzing using Recurrent Neural Networks (RNN) based on Keras
Stars: ✭ 28 (-78.46%)
Mutual labels:  recurrent-neural-networks, lstm
theano-recurrence
Recurrent Neural Networks (RNN, GRU, LSTM) and their Bidirectional versions (BiRNN, BiGRU, BiLSTM) for word & character level language modelling in Theano
Stars: ✭ 40 (-69.23%)
Mutual labels:  lstm, gru
keras-malicious-url-detector
Malicious URL detector using keras recurrent networks and scikit-learn classifiers
Stars: ✭ 24 (-81.54%)
Mutual labels:  recurrent-neural-networks, lstm
ECGClassifier
CNN, RNN, and Bayesian NN classification for ECG time-series (using TensorFlow in Swift and Python)
Stars: ✭ 53 (-59.23%)
Mutual labels:  time-series, gru
Conversational-AI-Chatbot-using-Practical-Seq2Seq
A simple open domain generative based chatbot based on Recurrent Neural Networks
Stars: ✭ 17 (-86.92%)
Mutual labels:  recurrent-neural-networks, seq2seq
automatic-personality-prediction
[AAAI 2020] Modeling Personality with Attentive Networks and Contextual Embeddings
Stars: ✭ 43 (-66.92%)
Mutual labels:  recurrent-neural-networks, lstm

DTS - Deep Time-Series Forecasting

DTS is a Keras library that provides multiple deep architectures aimed at multi-step time-series forecasting.

The Sacred library is used to keep track of different experiments and allow their reproducibility.

Installation

DTS is compatible with Python 3.5+, and is tested on Ubuntu 16.04.

The setup.py script of DTS does not attempt to install Sacred, Keras, or a Keras backend. Thus, before installing DTS, you have to manually install:

  • The CPU or GPU version of TensorFlow <= 1.14.0 (GPU recommended)
  • Keras <= 2.2.4
  • Sacred <= 0.7.5
  • (Optional, but recommended) MongoDB

This choice has been made to avoid any possible dependency problem for the user. If you are already a Keras/TensorFlow user, mind that if your version of TensorFlow is greater than or equal to 1.14.0 then you need to check out this issue to install Sacred correctly.

I have tested dts with the following dependencies:

ENV 1                 ENV 2
numpy==1.14.2         numpy==1.17.0
tensorflow==1.12.0    tensorflow==1.14.0
keras==2.1.6          keras==2.2.4
sacred==0.7.4         sacred==0.7.5
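
For example, to reproduce ENV 1 before installing DTS (versions taken from the table above; for ENV 2, mind the Sacred caveat mentioned earlier):

pip install numpy==1.14.2 tensorflow==1.12.0 keras==2.1.6 sacred==0.7.4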

To install dts from source:

git clone https://github.com/albertogaspar/dts.git
cd dts
pip install -e .

What's in it & How to use

Time-Series Forecasting

The package includes several deep learning architectures that can be used for multi-step time-series forecasting. The package also provides several utilities to cast the forecasting problem into a supervised machine learning problem. Specifically, a sliding-window approach is used: each model is given a time window of size nT and asked to output a prediction for the following nO timesteps (see the figure below).
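
A minimal sketch of this sliding-window casting (illustrative only: the helper below is not the DTS API, which ships its own utilities for this):

import numpy as np

def sliding_window(series, n_T, n_O):
    # Cast a univariate series into (input window, target window) pairs:
    # each sample holds n_T past observations and the following n_O targets.
    X, y = [], []
    for t in range(len(series) - n_T - n_O + 1):
        X.append(series[t : t + n_T])
        y.append(series[t + n_T : t + n_T + n_O])
    return np.asarray(X), np.asarray(y)

X, y = sliding_window(np.arange(100, dtype=float), n_T=24, n_O=6)
print(X.shape, y.shape)  # (71, 24) (71, 6)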

Run Experiment

python FILENAME.py --add_config FULLPATH_TO_YAML_FILE 

or:

python FILENAME.py --add_config FULLPATH_TO_YAML_FILE --grid_search 

grid_search: defines whether or not you are searching for the best hyperparameters. If True, multiple experiments are run, each with a different combination of hyperparameters. The process terminates when all possible combinations of hyperparameters have been explored.
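
Conceptually, the grid search enumerates the Cartesian product of all hyperparameter values listed in the config file. A minimal sketch of the idea (the parameter names are hypothetical, not the DTS config schema):

import itertools

grid = {
    'units': [32, 64],              # hypothetical hyperparameter values
    'learning_rate': [1e-3, 1e-4],
}
# One experiment per combination: 2 x 2 = 4 runs in this example.
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    print(params)                   # in DTS, each combination drives one training run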

add_config: the experiment's hyperparameters should be defined as a yaml file in the config folder (see How to write a config file for more details). FULLPATH_TO_YAML_FILE is the full path to the .yaml file that stores your configuration. The main function for your model should always look similar to the one below.
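
A minimal sketch of such an entry point (the experiment name and hyperparameters here are hypothetical; Experiment, @ex.config and @ex.automain are Sacred's standard API):

from sacred import Experiment

ex = Experiment('my_model')  # hypothetical experiment name

@ex.config
def config():
    # Defaults; in DTS these values come from the yaml file passed via --add_config.
    units = 64
    epochs = 10

@ex.automain
def main(units, epochs):
    # Build and train the model here using the injected hyperparameters.
    print('training with units=%d for %d epochs' % (units, epochs))
    return 0.0  # the result value that Sacred stores alongside the run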

observer: all the important information about an experiment can be stored either in MongoDB (the default choice) or in multiple files (txt and json) inside a given folder (dts/logs/sacred/). If you want to use the file-based logger, launch the script with the additional argument --observer file (once again, the default choice is --observer mongodb).

If you want to train a model using pretrained weights, just run the model providing the parameter --load followed by the full path to the file containing the weights:

python FILENAME.py --add_config FULLPATH_TO_YAML_FILE --load FULLPATH_TO_WEIGHTS 

The model will be initialized with these weights before training.
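
In plain Keras terms, the warm start corresponds to something like the following (a hedged sketch with a toy model and random data, not the DTS internals; the weights file is assumed to exist):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(6, input_shape=(24,))])  # toy model: 24 inputs -> 6 outputs
model.load_weights('path/to/weights.h5')           # standard Keras API
model.compile(optimizer='adam', loss='mse')
X, y = np.random.rand(32, 24), np.random.rand(32, 6)
model.fit(X, y, epochs=2)                          # training starts from the loaded weights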

Datasets

  • Individual household electric power consumption Data Set: Measurements of electric power consumption in one household with a one-minute sampling rate over a period of almost 4 years. Dataset & Description.
  • GEFCom 2014: hourly consumption data coming from ISO New England (aggregated consumption). Dataset & Description, Paper. If you use the GEFCom2014 data, you should cite this paper to acknowledge the source.

With DTS you can model your input values in many different ways and then feed them to your favourite deep learning architectures. E.g.:

  • you can decide to include exogenous features (like temperature readings) if they are available.

  • you can decide to apply detrending to the time series (see dts.datasets.*.apply_detrend for more details); a conceptual sketch follows this list.
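
The general idea behind detrending, sketched below (an illustration of the technique only; dts.datasets.*.apply_detrend implements its own version):

import numpy as np

def detrend(series, window=24):
    # Estimate a moving-average trend and remove it; a model is trained on
    # the residual, and its predictions are re-trended afterwards.
    trend = np.convolve(series, np.ones(window) / window, mode='same')
    return series - trend, trend

residual, trend = detrend(np.sin(np.linspace(0, 20, 200)) + np.linspace(0, 5, 200))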

See how to format your data or check out the examples in dts.examples to learn more about data formatting and the possibilities offered by DTS.

Available architectures

Included architectures are:

  • Recurrent Neural Networks (Elman, LSTM, GRU) with different training procedures:

    • MIMO: a Dense Network is used to map the last state of the RNN to the output space of size nO. The training and inference procedures are the same.

    • Recursive: the RNN is trained to predict the next step, i.e. the output space during training has size 1. During inference, the network is fed with (part of) the input plus its own predictions in a recurrent fashion until an output vector of length nO is obtained (see the sketch after this list).

  • Seq2Seq:

    different training procedures are available (see Professor Forcing: A New Algorithm for Training Recurrent Networks for more details):

    • Teacher Forcing
    • Self-Generated Samples
    • Professor Forcing: TODO

  • Temporal Convolutional Neural Networks:

    • MIMO training/inference
    • Recursive training/inference: TODO (the method to perform prediction with this strategy is available in dts.models.TCN.py, but it has not been tested and there is no example of using a TCN with this mode in dts.examples.tcn.py)
  • Feedforward Neural Networks:

    • MIMO training/inference
    • Recursive training/inference
  • ResNet: a feedforward neural network with residual connections:

    • MIMO training/inference
    • Recursive training/inference
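
To make the MIMO vs. recursive distinction concrete, here is a schematic sketch of the two inference strategies (illustrative only: model stands for any fitted Keras-style predictor, not a DTS class):

import numpy as np

def mimo_forecast(model, window):
    # A single forward pass maps the input window of size nT directly
    # to all nO outputs.
    return model.predict(window[np.newaxis, :])[0]  # shape: (nO,)

def recursive_forecast(model, window, n_O):
    # A one-step model is applied nO times, each time feeding back its own
    # prediction and sliding the input window forward.
    history = list(window)
    preds = []
    for _ in range(n_O):
        next_step = model.predict(np.asarray(history)[np.newaxis, :])[0, 0]
        preds.append(next_step)
        history = history[1:] + [next_step]  # slide the window over the prediction
    return np.asarray(preds)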

Project Structure & TODO list

  • dts: contains models, utilities and examples to train and test different deep learning models.
  • data: contains raw data and .npz, .npy files (already preprocessed data).
  • config: yaml files to be used for grid search of hyperparameters for all architectures.
  • weights: contains models' weights. If you use Sacred, the artifactID field in each document/json file contains the name of the trained model that achieved the related performance.
  • log: if you use Sacred without MongoDB, all the relevant files are stored in this directory.

Sacred Collected Information

The animation below provides an intuitive explanation of the information collected by Sacred (using MongoDB as Observer). The example refers to a completed experiment of a TCN model trained on the Individual household electric power consumption Data Set (for brevity, 'uci'):

When MongoDB is used as an Observer, the collected information for an experiment is stored in a document. In the animation above, documents are visualized using MongoDB Compass.

Reference

This is the code used for the Deep Learning for Time Series Forecasting: The Electric Load Case paper. Mind that the code has changed a bit since then, so you may notice some differences from the models described in the paper. If you encounter any problem or have any doubt, don't hesitate to contact me.

If you find it interesting, please consider citing us:

@article{gasparin2019deep,
  title={Deep Learning for Time Series Forecasting: The Electric Load Case},
  author={Gasparin, Alberto and Lukovic, Slobodan and Alippi, Cesare},
  journal={arXiv preprint arXiv:1907.09207},
  year={2019}
}