
Davidham3 / Astgcn

⚠️[Deprecated] no longer maintained, please use the code in https://github.com/guoshnBJTU/ASTGCN-r-pytorch

ASTGCN

Attention Based Spatial-Temporal Graph Convolutional Networks for Traffic Flow Forecasting (ASTGCN)

[figure: model architecture]

References

Shengnan Guo, Youfang Lin, Ning Feng, Chao Song, Huaiyu Wan (*). Attention Based Spatial-Temporal Graph Convolutional Networks for Traffic Flow Forecasting. The 33rd AAAI Conference on Artificial Intelligence (AAAI 2019).

Datasets

We validate our model on two highway traffic datasets from California, PeMSD4 and PeMSD8. The data are collected in real time every 30 seconds by the Caltrans Performance Measurement System (PeMS) (Chen et al., 2001), which has more than 39,000 detectors deployed on highways in the major metropolitan areas of California. The raw data are aggregated into 5-minute intervals. Geographic information about the sensor stations is recorded in the datasets. Three kinds of traffic measurements are considered in our experiments: total flow, average speed, and average occupancy.
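As a rough illustration (not code from this repository), aggregating the 30-second raw samples into 5-minute intervals means combining every 10 consecutive samples (summing for flow counts), which yields 12 points per hour:

```python
import numpy as np

# Hypothetical raw data: one hour of 30-second flow readings for 3 detectors.
raw = np.arange(120 * 3, dtype=float).reshape(120, 3)  # 120 samples per hour

# 5 minutes = 10 raw samples; sum flow counts within each interval.
samples_per_interval = 10
aggregated = raw.reshape(-1, samples_per_interval, raw.shape[1]).sum(axis=1)

print(aggregated.shape)  # (12, 3): 12 points per hour, 3 detectors
```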

We provide two datasets: PEMS-04 and PEMS-08.

  1. PEMS-04:

    307 detectors
    January to February 2018
    3 features: flow, occupancy, speed

  2. PEMS-08:

    170 detectors
    July to August 2016
    3 features: flow, occupancy, speed
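Once downloaded, the graph signal matrix is a NumPy .npz archive. The sketch below builds a tiny synthetic stand-in and loads it the same way; the "data" key and the (timesteps, detectors, features) layout are assumptions based on the common PeMS packaging, so check the actual file's keys with np.load(...).files:

```python
import numpy as np

# Tiny synthetic stand-in for PEMS-04 (the 'data' key is an assumption).
fake = np.random.rand(24, 307, 3)  # (timesteps, detectors, features)
np.savez("pems04_demo.npz", data=fake)

signal = np.load("pems04_demo.npz")["data"]
timesteps, num_of_vertices, num_features = signal.shape
print(num_of_vertices, num_features)  # 307 detectors, 3 features
```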

Requirements

  • python >= 3.5
  • mxnet >= 1.3.0
  • mxboard
  • scipy
  • tensorboard

To install MXNet correctly, follow the official MXNet installation instructions.

To run mxboard, you have to install tensorboard.

Other dependencies can be installed using the following command:

pip install -r requirements.txt

If you are using Docker, install nvidia-docker and run the commands below:

# build image
docker build -t astgcn/mxnet:1.4.1_cu100_mkl_py35 -f docker/Dockerfile .

# training model in background
docker run -d -it --rm --runtime=nvidia -v $PWD:/mxnet --name astgcn astgcn/mxnet:1.4.1_cu100_mkl_py35 python3 train.py --config configurations/PEMS04.conf --force True

Usage

train model on PEMS04:

python train.py --config configurations/PEMS04.conf --force True

train model on PEMS08:

python train.py --config configurations/PEMS08.conf --force True

visualize training progress:

tensorboard --logdir logs --port 6006

then open http://127.0.0.1:6006 to visualize the training process.

Improvements

  1. We use a convolution operation to map the output of the last ASTGCN block to the label space, which helps the model achieve better performance.
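A convolution whose kernel spans the full channel and time extent of the block output is equivalent to a learned linear map per predicted step. A minimal NumPy sketch of this idea (shapes and names are illustrative, not the repository's actual layer):

```python
import numpy as np

batch, num_vertices, channels, T = 2, 307, 64, 12
num_for_predict = 12

# Output of the last ASTGCN block: (batch, vertices, channels, time).
block_out = np.random.rand(batch, num_vertices, channels, T)

# A kernel covering all channels and time steps acts as a linear
# projection onto the label space: one value per predicted step.
kernel = np.random.rand(num_for_predict, channels, T)
prediction = np.einsum("bnct,pct->bnp", block_out, kernel)

print(prediction.shape)  # (2, 307, 12)
```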

Configuration

The configuration files in configurations/ contain three parts: Data, Training, and Predict:

Data

  • adj_filename: path of the adjacency matrix file
  • graph_signal_matrix_filename: path of the graph signal matrix file
  • num_of_vertices: number of vertices
  • points_per_hour: number of data points per hour; 12 in our datasets
  • num_for_predict: number of points to predict; 12 in our model

Training

  • model_name: ASTGCN or MSTGCN
  • ctx: set ctx = cpu to use the CPU, or ctx = gpu-0 to use the first GPU device
  • optimizer: sgd, RMSprop, or adam; see the MXNet documentation for more optimizers
  • learning_rate: float, e.g. 0.0001
  • epochs: int, number of epochs to train
  • batch_size: int
  • num_of_weeks: int, how many weeks' data will be used
  • num_of_days: int, how many days' data will be used
  • num_of_hours: int, how many hours' data will be used
  • K: int, order of the Chebyshev polynomials used
  • merge: int, 0 or 1; if merge equals 1, the training and validation sets are merged to train the model
  • prediction_filename: str; if specified, the predictions on the current test set are saved to this file
  • params_dir: the folder for saving parameters
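The .conf files are INI-style and can be read with Python's configparser. The snippet below sketches a plausible PEMS04-like file built from the parameter lists above; the section names, paths, and values are illustrative assumptions, so treat the shipped configurations/ files as authoritative:

```python
import configparser

# Illustrative config text; real paths and values live in configurations/.
sample = """
[Data]
adj_filename = data/PEMS04/distance.csv
graph_signal_matrix_filename = data/PEMS04/pems04.npz
num_of_vertices = 307
points_per_hour = 12
num_for_predict = 12

[Training]
model_name = ASTGCN
ctx = gpu-0
optimizer = adam
learning_rate = 0.0001
epochs = 50
batch_size = 32
num_of_weeks = 1
num_of_days = 1
num_of_hours = 3
K = 3
merge = 0
params_dir = experiment
"""

config = configparser.ConfigParser()
config.read_string(sample)

print(config["Data"].getint("num_of_vertices"))  # 307
print(config["Training"]["model_name"])          # ASTGCN
```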