
jdb78 / Pytorch Forecasting

License: MIT
Time series forecasting with PyTorch

Programming Languages

python
139,335 projects - #7 most used programming language

Projects that are alternatives of or similar to Pytorch Forecasting

Neural Api
CAI NEURAL API - Pascal based neural network API optimized for AVX, AVX2 and AVX512 instruction sets plus OpenCL capable devices including AMD, Intel and NVIDIA.
Stars: ✭ 94 (-88.93%)
Mutual labels:  learning, deep, neural, network
Jeelizar
JavaScript object detection lightweight library for augmented reality (WebXR demos included). It uses convolutional neural networks running on the GPU with WebGL.
Stars: ✭ 296 (-65.14%)
Mutual labels:  learning, deep, neural, network
Ludwig
Data-centric declarative deep learning framework
Stars: ✭ 8,018 (+844.41%)
Mutual labels:  learning, machine, deep
Amazon Sagemaker Examples
Example 📓 Jupyter notebooks that demonstrate how to build, train, and deploy machine learning models using 🧠 Amazon SageMaker.
Stars: ✭ 6,346 (+647.47%)
Mutual labels:  learning, machine, deep
Deepj
A deep learning model for style-specific music generation.
Stars: ✭ 681 (-19.79%)
Mutual labels:  learning, machine, deep
pssa
Singular Spectrum Analysis for time series forecasting in Python
Stars: ✭ 119 (-85.98%)
Mutual labels:  time, series, forecasting
Stats
macOS system monitor in your menu bar
Stars: ✭ 7,134 (+740.28%)
Mutual labels:  gpu, network
Timetk
A toolkit for working with time series in R
Stars: ✭ 371 (-56.3%)
Mutual labels:  time, forecasting
Neurokernel
Neurokernel Project
Stars: ✭ 491 (-42.17%)
Mutual labels:  gpu, neural
Moviebox
Machine learning movie recommending system
Stars: ✭ 504 (-40.64%)
Mutual labels:  learning, machine
Sklearn Classification
Data Science Notebook on a Classification Task, using sklearn and Tensorflow.
Stars: ✭ 518 (-38.99%)
Mutual labels:  learning, machine
Machine Learning Mindmap
A mindmap summarising Machine Learning concepts, from Data Analysis to Deep Learning.
Stars: ✭ 5,339 (+528.86%)
Mutual labels:  learning, machine
M
Stars: ✭ 313 (-63.13%)
Mutual labels:  time, neural
Awesome Cybersecurity Datasets
A curated list of amazingly awesome Cybersecurity datasets
Stars: ✭ 380 (-55.24%)
Mutual labels:  learning, deep
Sharplearning
Machine learning for C# .Net
Stars: ✭ 294 (-65.37%)
Mutual labels:  learning, machine
Awesome Falsehood
😱 Falsehoods Programmers Believe in
Stars: ✭ 16,614 (+1856.89%)
Mutual labels:  time, network
Awesome Machine Learning
🎰 A curated list of machine learning resources, preferably CoreML
Stars: ✭ 716 (-15.67%)
Mutual labels:  learning, machine
Variational Autoencoder
Variational autoencoder implemented in tensorflow and pytorch (including inverse autoregressive flow)
Stars: ✭ 807 (-4.95%)
Mutual labels:  learning, deep
Naivecnn
A naive (very simple!) implementation of a convolutional neural network
Stars: ✭ 18 (-97.88%)
Mutual labels:  neural, network
lobe
Lobe is the world's first AI paralegal.
Stars: ✭ 22 (-97.41%)
Mutual labels:  learning, machine

Our article on Towards Data Science introduces the package and provides background information.

PyTorch Forecasting aims to ease state-of-the-art timeseries forecasting with neural networks for real-world cases and research alike. The goal is to provide a high-level API with maximum flexibility for professionals and reasonable defaults for beginners. Specifically, the package provides:

  • A timeseries dataset class which abstracts away the handling of variable transformations, missing values, randomized subsampling, multiple history lengths, etc.
  • A base model class which provides basic training of timeseries models along with logging to TensorBoard and generic visualizations such as actuals vs. predictions and dependency plots
  • Multiple neural network architectures for timeseries forecasting that have been enhanced for real-world deployment and come with in-built interpretation capabilities
  • Multi-horizon timeseries metrics
  • Ranger optimizer for faster model training
  • Hyperparameter tuning with optuna (see the sketch after this list)
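
The optuna integration is exposed through an optimize_hyperparameters helper for the Temporal Fusion Transformer. A minimal sketch, assuming dataloaders built as in the Usage section below; the search ranges and trial count are illustrative only:

from pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters

# run an optuna study over common Temporal Fusion Transformer hyperparameters
study = optimize_hyperparameters(
    train_dataloader,
    val_dataloader,
    model_path="optuna_test",  # directory in which to checkpoint trials
    n_trials=100,
    max_epochs=20,
    gradient_clip_val_range=(0.01, 1.0),
    hidden_size_range=(8, 128),
    learning_rate_range=(0.001, 0.1),
)

# inspect the best hyperparameters found by the study
print(study.best_trial.params)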

The package is built on pytorch-lightning to allow training on CPUs and on single and multiple GPUs out of the box.
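
Switching hardware is then just a matter of trainer arguments. A minimal sketch using the older pl.Trainer API that the Usage section below also relies on (newer Lightning releases replace gpus with accelerator and devices):

import pytorch_lightning as pl

trainer = pl.Trainer(max_epochs=100, gpus=0)  # train on CPU
trainer = pl.Trainer(max_epochs=100, gpus=1)  # train on a single GPU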

Installation

If you are working on Windows, you need to install PyTorch first with

pip install torch -f https://download.pytorch.org/whl/torch_stable.html

Otherwise, you can proceed with

pip install pytorch-forecasting

Alternatively, you can install the package via conda

conda install pytorch-forecasting "pytorch>=1.7" -c pytorch -c conda-forge

PyTorch Forecasting is installed from the conda-forge channel, while PyTorch is installed from the pytorch channel.

Documentation

Visit https://pytorch-forecasting.readthedocs.io to read the documentation with detailed tutorials.

Available models

To implement new models, see the How to implement new models tutorial. It covers basic as well as advanced architectures.
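
In short, a new model subclasses BaseModel and implements forward over the batch dictionary produced by a TimeSeriesDataSet dataloader. A minimal sketch, loosely following the tutorial's fully connected example; the class name and layer sizes are illustrative:

from torch import nn
from pytorch_forecasting.models import BaseModel


class FullyConnectedModel(BaseModel):
    def __init__(self, input_size: int, output_size: int, hidden_size: int, **kwargs):
        # store constructor arguments under self.hparams and initialize the BaseModel machinery
        self.save_hyperparameters()
        super().__init__(**kwargs)
        self.network = nn.Sequential(
            nn.Linear(input_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, output_size),
        )

    def forward(self, x):
        # x is the dictionary of tensors created by the TimeSeriesDataSet dataloader;
        # this toy model only uses the encoder's continuous variables
        network_input = x["encoder_cont"].squeeze(-1)
        prediction = self.network(network_input)
        # rescale predictions into the target space and wrap them in the expected output format
        prediction = self.transform_output(prediction, target_scale=x["target_scale"])
        return self.to_network_output(prediction=prediction)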

Usage

import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor
from pytorch_forecasting.metrics import QuantileLoss
from pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer

# load data: a pandas DataFrame with an integer time index column, the target
# and any covariates (the column names below are placeholders)
data = ...

# define dataset
max_encoder_length = 36
max_prediction_length = 6
training_cutoff = "YYYY-MM-DD"  # day for cutoff

training = TimeSeriesDataSet(
    data[lambda x: x.date <= training_cutoff],
    time_idx=...,  # column name of the integer time index
    target=...,  # column name of the target to forecast
    group_ids=[...],  # column name(s) identifying each time series
    max_encoder_length=max_encoder_length,
    max_prediction_length=max_prediction_length,
    static_categoricals=[...],
    static_reals=[...],
    time_varying_known_categoricals=[...],
    time_varying_known_reals=[...],
    time_varying_unknown_categoricals=[...],
    time_varying_unknown_reals=[...],
)


# create a validation set that predicts beyond the training cutoff,
# reusing the normalizations and encodings fitted on the training set
validation = TimeSeriesDataSet.from_dataset(
    training, data, min_prediction_idx=training.index.time.max() + 1, stop_randomization=True
)

# create dataloaders for the model
batch_size = 128
train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=2)
val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, num_workers=2)


# configure early stopping on the validation loss and learning rate logging
early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=1, verbose=False, mode="min")
lr_logger = LearningRateMonitor()
trainer = pl.Trainer(
    max_epochs=100,
    gpus=0,  # run on CPU; set e.g. gpus=1 to train on a GPU
    gradient_clip_val=0.1,
    limit_train_batches=30,  # run only 30 batches per epoch for a quick check
    callbacks=[lr_logger, early_stop_callback],
)


# initialize the Temporal Fusion Transformer from the dataset definition
tft = TemporalFusionTransformer.from_dataset(
    training,
    learning_rate=0.03,
    hidden_size=32,
    attention_head_size=1,
    dropout=0.1,
    hidden_continuous_size=16,
    output_size=7,  # one output per quantile of the QuantileLoss below
    loss=QuantileLoss(),
    log_interval=2,
    reduce_on_plateau_patience=4,
)
print(f"Number of parameters in network: {tft.size() / 1e3:.1f}k")

# find the optimal learning rate (tuner API of pytorch-lightning >= 1.0)
res = trainer.tuner.lr_find(
    tft, train_dataloader=train_dataloader, val_dataloaders=val_dataloader, early_stop_threshold=1000.0, max_lr=0.3,
)

print(f"suggested learning rate: {res.suggestion()}")
fig = res.plot(show=True, suggest=True)
fig.show()

# fit the network
trainer.fit(
    tft, train_dataloader=train_dataloader, val_dataloaders=val_dataloader,
)
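
After training, you would typically restore the best checkpoint and inspect its predictions on the validation set. A hedged sketch continuing the example above:

# load the best model according to the validation loss tracked by the trainer's checkpoint callback
best_model_path = trainer.checkpoint_callback.best_model_path
best_tft = TemporalFusionTransformer.load_from_checkpoint(best_model_path)

# raw predictions are a dictionary from which, among other things, quantiles can be extracted
raw_predictions, x = best_tft.predict(val_dataloader, mode="raw", return_x=True)
best_tft.plot_prediction(x, raw_predictions, idx=0, add_loss_to_title=True)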