rohanchandra30 / Spectral Trajectory And Behavior Prediction

This is the code base for Trajectory and Driver Behavior Prediction in Autonomous Vehicles using Spectral Graph Theory


Paper - Forecasting Trajectory and Behavior of Road-Agents Using Spectral Clustering in Graph-LSTMs

Project Page - https://gamma.umd.edu/spectralcows

Please cite our work if you find it useful.

@article{chandra2020forecasting,
  title={Forecasting trajectory and behavior of road-agents using spectral clustering in graph-lstms},
  author={Chandra, Rohan and Guan, Tianrui and Panuganti, Srujan and Mittal, Trisha and Bhattacharya, Uttaran and Bera, Aniket and Manocha, Dinesh},
  journal={IEEE Robotics and Automation Letters},
  year={2020},
  publisher={IEEE}
}

Important - This repo is no longer under active maintenance. Also, please note that the current results produced by the code are normalized RMSE values, not values in meters. Furthermore, the trained models provided in this codebase may not reflect the results in the main paper.

Table of Contents

Repo Details and Contents

Python version: 3.7

List of Trajectory Prediction Methods Implemented

Please cite the methods below if you use them.

As the official implementation of the GRIP method was not available at the time of creating this repo, the code provided here is our own effort to replicate the GRIP method to the best of our ability and does not necessarily reflect the original implementation by the authors.

The original GRIP implementation by the authors is provided here. Please cite their paper if you use their method.

Datasets

How to Run

Installation


  1. Create a conda environment
    conda env create -f env.yml

  2. To activate the environment:
    conda activate sc-glstm

  3. Download resources
    python setup.py

Usage


  • To run our one & two stream model:
    1. cd ours/
    2. python main.py
    3. To switch between the one-stream and two-stream models, set the variable s1 in main.py to True or False.
    4. To change the model, change the DATA and SUFIX variables in main.py.
  • To run EncDec comparison methods:
    1. cd comparison_methods/EncDec/
    2. python main.py
    3. To change the model, change the DATA and SUFIX variables in main.py.
  • To run GRIP comparison methods:
    1. cd comparison_methods/GRIP/
    2. python main.py
    3. To change the model, change the DATA and SUFIX variables in main.py.
  • To run TraPHic/SC-LSTM comparison methods:
    1. cd comparison_methods/traphic_sconv/
    2. python main.py
    3. To change the model and methods, change the DATASET and PREDALGO variables in main.py.
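For reference, here is a minimal sketch of the kind of edit described above. The variable names s1, DATA, and SUFIX come from the instructions above; the values shown are placeholders, not the full set of options supported by each main.py.

    # Illustrative placeholder values only -- check main.py for the valid options.
    s1 = True          # toggles between the one-stream and two-stream model
    DATA = "ARGO"      # placeholder dataset keyword
    SUFIX = "_argo"    # placeholder suffix used to select the trained model files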

Note: During evaluation of the trained models, the best results may differ from the reported errors due to different batch normalization applied to the network. To obtain the same numbers, the network may have to be changed manually.

Resources folder structure:

  • data -- input and output of stream 1 & 2 (this is directly available in the resources folder)
  • raw_data -- location of the raw data (put the downloaded dataset in this folder to process)
  • trained_model -- some saved models

Data Preparation


Important steps if you plan to prepare the Argoverse, Lyft, and Apolloscape datasets from the raw data available on their websites.

Formatting the datasets after downloading from the official websites

  • Run data_processing/format_apolloscape.py to format the downloaded Apolloscape data into our desired representation
  • Run data_processing/format_lyft.py to format the downloaded Lyft data into our desired representation
  • Run data_processing/generate_data.py to format the downloaded Argoverse trajectory data into our desired representation

To convert the formatted data into the data structures our model requires

  • Use data_processing/data_stream.py to generate input data for stream1 and stream2.
  • Use the generate_adjacency() function in data_processing/behaviors.py to generate adjacency matrices.
  • You must use the add_behaviors_stream2() function in data_processing/behaviors.py to add behavior labels to the stream2 data before supplying the data to the network.
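The sketch below only illustrates the order of the steps above as a small driver script. It assumes the formatting and data_stream scripts can be launched without extra command-line arguments (check each file before running), and the adjacency/behavior steps are left as comments because their call signatures are defined in data_processing/behaviors.py.

    # Hypothetical driver illustrating the preparation order described above.
    import subprocess
    import sys

    # 1. Format the raw data (Apolloscape shown as an example).
    subprocess.run([sys.executable, "data_processing/format_apolloscape.py"], check=True)

    # 2. Generate the stream1 and stream2 input data.
    subprocess.run([sys.executable, "data_processing/data_stream.py"], check=True)

    # 3. Build adjacency matrices with generate_adjacency() and attach behavior
    #    labels with add_behaviors_stream2() -- see data_processing/behaviors.py
    #    for their actual signatures.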

Training and Testing on your own dataset


Our code supports any dataset that contains trajectory information. Follow the steps below to integrate your dataset with our code.

1. Prepare your Dataset

The first step is to prepare your dataset in our format: a text file where each row contains 'Frame_ID', 'Agent_ID', 'X coordinate', 'Y coordinate', 'Dataset_ID'.

Make sure:

  • The Frame_ID's must range from 1 to n and the Agent_ID's from 1 to N, where n is the total number of frames and N is the total number of agents. If your dataset uses a different convention for Frame_ID's (for example, some datasets use a timestamp as the Frame_ID), you need to map these IDs to 1 to n. Likewise, if your dataset represents Agent_ID's differently (for example, as strings of characters), you need to map these IDs to 1 to N. A sketch of one way to do this remapping is shown after step 2 below.

  • If the Frame_ID's and Agent_ID's of your dataset are already in the ranges 1 to n and 1 to N, make sure they are sequential with no missing IDs.

  • Dataset_ID's are used to differentiate between different scenes/sets of the same dataset.

2. Convert the text file to .npy format and save this as TrainSet0.npy.
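A minimal sketch of steps 1 and 2, assuming all five columns are numeric and whitespace-delimited (adjust the delimiter if your file is comma-separated); your_dataset.txt is a placeholder name. The loop shows one way to remap Frame_IDs and Agent_IDs to the dense ranges 1 to n and 1 to N.

    import numpy as np

    # Load the five columns: Frame_ID, Agent_ID, X, Y, Dataset_ID.
    data = np.loadtxt("your_dataset.txt")  # add delimiter="," if comma-separated

    # Remap Frame_IDs (column 0) and Agent_IDs (column 1) to 1..n and 1..N.
    for col in (0, 1):
        _, dense = np.unique(data[:, col], return_inverse=True)
        data[:, col] = dense + 1

    # Save in the format expected by data_stream.py.
    np.save("TrainSet0.npy", data)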

3. Run the data_stream.py file in /data_processing. This will generate the pickle files needed to run the main.py files for any method.

Mandatory precautions to take before running data_stream.py:

  • Make sure you have taken all the mandatory precautions mentioned above for preparing your data.

  • You must know the frame rate at which the vehicle trajectories are recorded, i.e., how many frames correspond to one second. For example, if the frame rate is 2 Hz, each second corresponds to 2 frames in the dataset.

  • You must set train_seq_len and pred_seq_len in data_stream.py appropriately based on the frame rate. For example, if the frame rate is 2 Hz and you want to use 3 seconds of observation data, then train_seq_len would be 3*2 = 6; if you want to use the next 5 seconds as prediction data, then pred_seq_len would be 5*2 = 10. Make sure frame_lenth_cap >= (train_seq_len + pred_seq_len); frame_lenth_cap enforces that an Agent_ID is present/visible in at least frame_lenth_cap frames. A worked example is given after this list.

  • If your dataset is very large, you may want to use only a few scenes/sets from the whole data. Use the Dataset ID (D_id) list to tweak the values and reduce the amount of data.

  • Assign a short keyword XXXX for naming your dataset.

  • Expect to see multiple files generated in ./resources/DATA/XXXX/ with names starting with stream1_obs_data_, stream1_pred_data_, stream2_obs_data_, stream2_pred_data_, stream2_obs_eigs_, stream2_pred_eigs_.
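To make the arithmetic above concrete, here is a small worked example using the 2 Hz frame rate and the 3 s / 5 s horizons mentioned above:

    # Example: 2 Hz recordings, 3 s observation window, 5 s prediction window.
    fps = 2                    # frames per second of your dataset
    train_seq_len = 3 * fps    # 3 s observed  -> 6 frames
    pred_seq_len = 5 * fps     # 5 s predicted -> 10 frames

    # frame_lenth_cap in data_stream.py must satisfy:
    #   frame_lenth_cap >= train_seq_len + pred_seq_len   (>= 16 in this example)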

4. Then run the main.py file of any method.

Our network
