crowdbotp / socialways

Licence: other
Social Ways: Learning Multi-Modal Distributions of Pedestrian Trajectories with GANs (CVPR 2019)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to socialways

WIMP
[arXiv] What-If Motion Prediction for Autonomous Driving ❓🚗💨
Stars: ✭ 80 (-29.2%)
Mutual labels:  self-driving-car, trajectory-prediction
Rtm3d
Unofficial PyTorch implementation of "RTM3D: Real-time Monocular 3D Detection from Object Keypoints for Autonomous Driving" (ECCV 2020)
Stars: ✭ 211 (+86.73%)
Mutual labels:  self-driving-car
Autonomousdrivingcookbook
Scenarios, tutorials and demos for Autonomous Driving
Stars: ✭ 1,939 (+1615.93%)
Mutual labels:  self-driving-car
Image To 3d Bbox
Build a CNN network to predict 3D bounding box of car from 2D image.
Stars: ✭ 200 (+76.99%)
Mutual labels:  self-driving-car
Airsim
Open source simulator for autonomous vehicles built on Unreal Engine / Unity, from Microsoft AI & Research
Stars: ✭ 12,528 (+10986.73%)
Mutual labels:  self-driving-car
Seg Uncertainty
IJCAI2020 & IJCV 2020 🌇 Unsupervised Scene Adaptation with Memory Regularization in vivo
Stars: ✭ 202 (+78.76%)
Mutual labels:  self-driving-car
Self Driving Golf Cart
Be Driven 🚘
Stars: ✭ 147 (+30.09%)
Mutual labels:  self-driving-car
Drl based selfdrivingcarcontrol
Deep Reinforcement Learning (DQN) based Self Driving Car Control with Vehicle Simulator
Stars: ✭ 249 (+120.35%)
Mutual labels:  self-driving-car
Behavioral Cloning
Third Project of the Udacity Self-Driving Car Nanodegree Program
Stars: ✭ 210 (+85.84%)
Mutual labels:  self-driving-car
Self driving car specialization
Assignments and notes for the Self Driving Cars course offered by University of Toronto on Coursera
Stars: ✭ 190 (+68.14%)
Mutual labels:  self-driving-car
Awesome Self Driving Car
An awesome list of self-driving cars
Stars: ✭ 185 (+63.72%)
Mutual labels:  self-driving-car
Fusion Ukf
An unscented Kalman Filter implementation for fusing lidar and radar sensor measurements.
Stars: ✭ 162 (+43.36%)
Mutual labels:  self-driving-car
Pi self driving car
A self-driving car implemented with a Raspberry Pi 3B
Stars: ✭ 207 (+83.19%)
Mutual labels:  self-driving-car
Self Driving Car
Udacity Self-Driving Car Engineer Nanodegree projects.
Stars: ✭ 2,103 (+1761.06%)
Mutual labels:  self-driving-car
Ros robotics projects
Example codes of new book ROS Robotics Projects
Stars: ✭ 240 (+112.39%)
Mutual labels:  self-driving-car
Opentraj
Human Trajectory Prediction Dataset Benchmark (ACCV 2020)
Stars: ✭ 144 (+27.43%)
Mutual labels:  self-driving-car
Apollo perception ros
Object detection / tracking / fusion based on Apollo r3.0.0 perception module in ROS
Stars: ✭ 179 (+58.41%)
Mutual labels:  self-driving-car
Sdc Lane And Vehicle Detection Tracking
OpenCV in Python for lane line and vehicle detection/tracking in autonomous cars
Stars: ✭ 200 (+76.99%)
Mutual labels:  self-driving-car
self-driving-car
Term 1 of the Udacity Self-Driving Car Nanodegree
Stars: ✭ 57 (-49.56%)
Mutual labels:  self-driving-car
Awesome Carla
👉 CARLA resources such as tutorial, blog, code and etc https://github.com/carla-simulator/carla
Stars: ✭ 246 (+117.7%)
Mutual labels:  self-driving-car

Social Ways

The PyTorch implementation of the paper

Social Ways: Learning Multi-Modal Distributions of Pedestrian Trajectories with GANs
Javad Amirian, Jean-Bernard Hayet, Julien Pettre
Presented at the CVPR 2019 Precognition Workshop ([arXiv], [slides], [poster])

This work is, in theory, an improvement over Social-GAN, achieved through the following changes:

  1. Replacing max-pooling with attention pooling
  2. Introducing new social features computed between pairs of agents (see the sketch after this list):
  • Bearing angle
  • Euclidean distance
  • Distance of Closest Approach (DCA)
  3. Replacing the L2 loss with an information loss, an idea inspired by InfoGAN
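As a rough illustration (not the repository's code), the three pairwise social features can be computed from relative positions and velocities as sketched below; the function name and array conventions are assumptions for illustration only.

import numpy as np

def social_features(pos_i, vel_i, pos_j, vel_j):
    """Hypothetical sketch of the pairwise social features:
    bearing angle, Euclidean distance, and distance of closest approach (DCA)."""
    rel_pos = pos_j - pos_i                      # vector from agent i to agent j
    rel_vel = vel_j - vel_i                      # velocity of j relative to i

    # Euclidean distance between the two agents
    distance = np.linalg.norm(rel_pos)

    # Bearing angle: angle between agent i's heading and the line of sight to j
    heading = np.arctan2(vel_i[1], vel_i[0])
    line_of_sight = np.arctan2(rel_pos[1], rel_pos[0])
    bearing = np.arctan2(np.sin(line_of_sight - heading),
                         np.cos(line_of_sight - heading))   # wrapped to [-pi, pi]

    # DCA: minimum future distance if both agents keep their current velocities
    speed_sq = np.dot(rel_vel, rel_vel)
    t_star = 0.0 if speed_sq < 1e-8 else max(0.0, -np.dot(rel_pos, rel_vel) / speed_sq)
    dca = np.linalg.norm(rel_pos + t_star * rel_vel)

    return bearing, distance, dca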

System Architecture

The system is composed of two main components: the Trajectory Generator and the Trajectory Discriminator. To generate a prediction sample for the Pedestrian of Interest (POI), the generator needs the following inputs:

  • the observed trajectory of POI,
  • the observed trajectory of surrounding agents,
  • the noise signal (z),
  • and the latent codes (c)

The discriminator takes a pair of observation and prediction samples and decides whether the given prediction sample is real or fake.
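A minimal, shape-level sketch of how these pieces fit together at inference time; the tensor shapes, dimensions, and module names below are assumptions for illustration, not the repository's actual API.

import torch

# Hypothetical shapes: batch of B pedestrians of interest,
# T_obs observed frames, N neighbouring agents.
B, T_obs, N = 16, 8, 5
noise_dim, code_dim = 62, 2

obsv_poi    = torch.randn(B, T_obs, 2)     # observed trajectory of the POI
obsv_others = torch.randn(B, N, T_obs, 2)  # observed trajectories of surrounding agents
z = torch.randn(B, noise_dim)              # noise signal
c = torch.randn(B, code_dim)               # latent codes (recovered via the information loss)

# With generator and discriminator modules (names are hypothetical):
# pred  = generator(obsv_poi, obsv_others, z, c)   # predicted trajectory, e.g. (B, T_pred, 2)
# score = discriminator(obsv_poi, pred)            # real/fake score per sample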

Toy Example

We designed this toy trajectory dataset to assess the generator's ability to preserve the modes of the trajectory distribution. There are six groups of trajectories, each starting from a specific point located on a circle (blue dots). As they approach the circle's center, they split into three subgroups; their endpoints are the green dots.

To create the toy example trajectories, run:

$ python3 create_toy.py --npz [output file]

This will store the required data in a .npz file. The default parameters are:

n_conditions = 8
n_modes = 3
n_samples = 768  
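Once the file is written, a quick way to inspect it is the snippet below; the array names inside the archive depend on create_toy.py, so it simply prints whatever keys and shapes it finds.

import numpy as np

# Inspect the generated toy dataset; replace 'toy.npz' with your output file.
data = np.load('toy.npz')
for key in data.files:
    print(key, data[key].shape, data[key].dtype)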

You can also store the raw trajectories into a .txt file with the following command:

$ python3 create_toy.py --txt [output file]

To see an animation of the toy agents, run:

$ python3 create_toy.py --anim

How to Train

To train the model, edit train.py to select the dataset you want to train on; the next few lines of that file define some of the most critical parameter values. Then execute:

$ python3 train.py
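The kind of edit meant above looks roughly like the following; the variable names are hypothetical and the actual identifiers in train.py may differ, so use this only as a guide to what to look for.

# Hypothetical excerpt of the lines to adjust near the top of train.py.
dataset_name   = 'eth'    # e.g. one of the ETH/UCY scenes or the toy dataset
n_epochs       = 200      # number of training epochs
batch_size     = 64
noise_dim      = 62       # dimension of the noise vector z
n_latent_codes = 2        # dimension of the latent codes c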

How to Visualize Results

$ python3 visualize.py

How to Setup

To run this code, you should use Python >= 3.5. You can use pip to install the required packages:

$ pip install torch torchvision numpy matplotlib tqdm nose
$ pip install seaborn opencv-python   # to run visualize.py

How to Cite

If you are using this code for your work, please cite:

@inproceedings{amirian2019social,
  title={Social ways: Learning multi-modal distributions of pedestrian trajectories with GANs},
  author={Amirian, Javad and Hayet, Jean-Bernard and Pettr{\'e}, Julien},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  pages={0--0},
  year={2019}
}