
thuml / Autoformer

License: MIT
About: Code release for "Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting" (NeurIPS 2021), https://arxiv.org/abs/2106.13008

Programming Languages

Python, Jupyter Notebook, Shell

Projects that are alternatives of or similar to Autoformer

midasml
midasml package is dedicated to run predictive high-dimensional mixed data sampling models
Stars: ✭ 31 (-94.53%)
Mutual labels:  time-series
The-Purchase-and-Redemption-Forecast-Challenge-baseline
Solution for the Tianchi "Purchase and Redemption Forecast" challenge baseline; online score 143.5
Stars: ✭ 78 (-86.24%)
Mutual labels:  time-series
wax-ml
A Python library for machine-learning and feedback loops on streaming data
Stars: ✭ 36 (-93.65%)
Mutual labels:  time-series
CausalityTools.jl
Algorithms for causal inference and the detection of dynamical coupling from time series, and for approximation of the transfer operator and invariant measures.
Stars: ✭ 45 (-92.06%)
Mutual labels:  time-series
autoplait
Python implementation of AutoPlait (SIGMOD'14) without smoothing algorithm. NOTE: This repository is for my personal use.
Stars: ✭ 24 (-95.77%)
Mutual labels:  time-series
downsample
Collection of several downsampling methods for time series visualisation purposes.
Stars: ✭ 50 (-91.18%)
Mutual labels:  time-series
sysidentpy
A Python Package For System Identification Using NARMAX Models
Stars: ✭ 139 (-75.49%)
Mutual labels:  time-series
tsmp
R Functions implementing UCR Matrix Profile Algorithm
Stars: ✭ 63 (-88.89%)
Mutual labels:  time-series
ahead
Univariate and multivariate time series forecasting
Stars: ✭ 15 (-97.35%)
Mutual labels:  time-series
modeltime.resample
Resampling Tools for Time Series Forecasting with Modeltime
Stars: ✭ 12 (-97.88%)
Mutual labels:  time-series
cubism-es
ES6 module of cubism.js, based on d3v5.
Stars: ✭ 24 (-95.77%)
Mutual labels:  time-series
pybacen
This library was developed for economic analysis in the Brazilian scenario (Investments, micro and macroeconomic indicators)
Stars: ✭ 40 (-92.95%)
Mutual labels:  time-series
wetterdienst
Open weather data for humans
Stars: ✭ 190 (-66.49%)
Mutual labels:  time-series
Start maja
To process a Sentinel-2 time series with MAJA cloud detection and atmospheric correction processor
Stars: ✭ 47 (-91.71%)
Mutual labels:  time-series
talaria
TalariaDB is a distributed, highly available, and low latency time-series database for Presto
Stars: ✭ 148 (-73.9%)
Mutual labels:  time-series
fastverse
An Extensible Suite of High-Performance and Low-Dependency Packages for Statistical Computing and Data Manipulation in R
Stars: ✭ 123 (-78.31%)
Mutual labels:  time-series
dana
DANA: Dimension-Adaptive Neural Architecture (UbiComp'21, ACM IMWUT)
Stars: ✭ 28 (-95.06%)
Mutual labels:  time-series
state-spaces
Sequence Modeling with Structured State Spaces
Stars: ✭ 694 (+22.4%)
Mutual labels:  time-series
xephon-k
A time series database prototype with multiple backends
Stars: ✭ 22 (-96.12%)
Mutual labels:  time-series
cnosdb
An Open Source Distributed Time Series Database with high performance, high compression ratio and high usability.
Stars: ✭ 858 (+51.32%)
Mutual labels:  time-series

Autoformer (NeurIPS 2021)

Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting

Time series forecasting is a critical demand in real-world applications. Inspired by classic time series analysis and stochastic process theory, we propose Autoformer as a general series forecasting model [paper]. Autoformer goes beyond the Transformer family and is the first to achieve series-wise connections.

In long-term forecasting, Autoformer achieves SOTA, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease.

🚩News (2022.02-2022.03): Autoformer has been deployed at the 2022 Winter Olympics to provide weather forecasting for competition venues, including wind speed and temperature.

Autoformer vs. Transformers

1. Deep decomposition architecture

We renovate the Transformer into a deep decomposition architecture that progressively decomposes the trend and seasonal components during the forecasting process (a minimal sketch of the decomposition block follows Figure 1 below).



Figure 1. Overall architecture of Autoformer.
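
To make this concrete, below is a minimal PyTorch sketch of such a decomposition block: a moving average extracts the trend, and the residual is taken as the seasonal part. The class and argument names here are illustrative assumptions, not necessarily the repo's exact API.

import torch
import torch.nn as nn

class SeriesDecomp(nn.Module):
    # Moving-average decomposition: trend = smoothed series,
    # seasonal = original series minus trend.
    def __init__(self, kernel_size=25):
        super().__init__()
        self.kernel_size = kernel_size
        self.avg = nn.AvgPool1d(kernel_size=kernel_size, stride=1)

    def forward(self, x):
        # x: (batch, length, channels)
        # Repeat the boundary values so the moving average keeps the length.
        front = x[:, :1, :].repeat(1, (self.kernel_size - 1) // 2, 1)
        end = x[:, -1:, :].repeat(1, self.kernel_size // 2, 1)
        padded = torch.cat([front, x, end], dim=1)
        trend = self.avg(padded.permute(0, 2, 1)).permute(0, 2, 1)
        return x - trend, trend  # (seasonal, trend)

x = torch.randn(8, 96, 7)               # e.g. 96 time steps, 7 variables
seasonal, trend = SeriesDecomp(25)(x)   # both (8, 96, 7)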

2. Series-wise Auto-Correlation mechanism

Inspired by stochastic process theory, we design the Auto-Correlation mechanism, which discovers period-based dependencies and aggregates information at the series level. This endows the model with inherent O(L log L) complexity. This series-wise connection contrasts clearly with the previous self-attention family (a minimal sketch follows Figure 2 below).



Figure 2. Auto-Correlation mechanism.
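
As a concrete illustration of the idea, here is a minimal single-head PyTorch sketch: autocorrelations over all lags are computed at once in O(L log L) via FFT (Wiener-Khinchin theorem), the top-k delays are selected, and values are aggregated by rolling. The function name and the simplified batch-level delay selection are our assumptions; the paper's full multi-head mechanism differs in detail.

import torch

def auto_correlation(q, k, v, top_k=3):
    # q, k, v: (batch, length, channels)
    L = q.shape[1]
    # Autocorrelation for every lag at once, via the frequency domain.
    q_fft = torch.fft.rfft(q, dim=1)
    k_fft = torch.fft.rfft(k, dim=1)
    corr = torch.fft.irfft(q_fft * torch.conj(k_fft), n=L, dim=1)
    # Keep only the top-k most correlated time delays
    # (scores averaged over batch and channels for simplicity).
    weights, delays = torch.topk(corr.mean(dim=(0, 2)), top_k)
    weights = torch.softmax(weights, dim=0)
    # Time-delay aggregation: roll the values by each chosen delay
    # and blend them with the normalized correlation scores.
    out = torch.zeros_like(v)
    for w, d in zip(weights, delays):
        out = out + w * torch.roll(v, shifts=-int(d), dims=1)
    return out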

Get Started

  1. Install Python 3.6 and PyTorch 1.9.0.
  2. Download data. You can obtain all six benchmarks from Tsinghua Cloud or Google Drive. All datasets are well pre-processed and easy to use.
  3. Train the model. We provide experiment scripts for all benchmarks under the folder ./scripts. You can reproduce the experiment results by running:
bash ./scripts/ETT_script/Autoformer_ETTm1.sh
bash ./scripts/ECL_script/Autoformer.sh
bash ./scripts/Exchange_script/Autoformer.sh
bash ./scripts/Traffic_script/Autoformer.sh
bash ./scripts/Weather_script/Autoformer.sh
bash ./scripts/ILI_script/Autoformer.sh
  4. Specially designed implementation
  • Speedup of Auto-Correlation: we build the Auto-Correlation mechanism as a batch-normalization-style block to make it more memory-access friendly. See the paper for details.

  • No position embedding: since the series-wise connection inherently preserves sequential information, Autoformer does not need position embeddings, unlike canonical Transformers (a minimal sketch follows below).
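
For illustration, a position-free input embedding might look like the sketch below. The use of a circular 1-D convolution follows common practice in this line of Transformer forecasters, but the class and argument names are our assumptions rather than the repo's exact API.

import torch
import torch.nn as nn

class ValueEmbedding(nn.Module):
    # Projects raw series values to the model dimension.
    # Note: no sinusoidal or learned position embedding is added;
    # the series-wise connection preserves ordering by itself.
    def __init__(self, c_in, d_model):
        super().__init__()
        self.conv = nn.Conv1d(c_in, d_model, kernel_size=3,
                              padding=1, padding_mode="circular")

    def forward(self, x):
        # x: (batch, length, c_in) -> (batch, length, d_model)
        return self.conv(x.permute(0, 2, 1)).permute(0, 2, 1)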

Reproduce with Docker

To reproduce the results easily with Docker, conda, and Make, follow these steps:

  1. Initialize the docker image using: make init.
  2. Download the datasets using: make get_dataset.
  3. Run each script in scripts/ with make run_module, e.g. make run_module module="bash scripts/ETT_script/Autoformer_ETTm1.sh".
  4. Alternatively, run all the scripts at once:
for script in scripts/*/*.sh; do make run_module module="bash $script"; done

A Simple Example

See predict.ipynb for a workflow example (in Chinese).

Main Results

We experiment on six benchmarks, covering five mainstream applications. We compare our model with ten baselines, including Informer, N-BEATS, etc. Generally, in the long-term forecasting setting, Autoformer achieves SOTA, with a 38% relative improvement over previous baselines.

Baselines

We will keep adding series forecasting models to expand this repo:

  • Autoformer
  • Informer
  • Transformer
  • Reformer
  • LogTrans
  • N-BEATS

Citation

If you find this repo useful, please cite our paper.

@inproceedings{wu2021autoformer,
  title={Autoformer: Decomposition Transformers with {Auto-Correlation} for Long-Term Series Forecasting},
  author={Haixu Wu and Jiehui Xu and Jianmin Wang and Mingsheng Long},
  booktitle={Advances in Neural Information Processing Systems},
  year={2021}
}

Contact

If you have any questions or want to use the code, please contact [email protected].

Acknowledgement

We appreciate the following GitHub repositories for their valuable code bases and datasets:

https://github.com/zhouhaoyi/Informer2020

https://github.com/zhouhaoyi/ETDataset

https://github.com/laiguokun/multivariate-time-series-data
