xz-group / AdverseDrive — Attacking Vision-based Perception in End-to-end Autonomous Driving Models

Adverse Drive

The goal of this project is to attack end-to-end self-driving models using physically realizable adversaries.

Target Objective     Conceptual Overview     Example
Collision Attack     collision_overview      collision_adversary
Hijacking Attack     hijack_overview         hijack_adversary

Prerequisites

  • Ubuntu 16.04
  • Dedicated GPU with relevant CUDA drivers
  • Docker-CE (for docker method)

Note: We highly recommend the dockerized version of this repository: it is system-independent and does not alter the packages installed on your system.

Installation

  1. Clone the AdverseDrive repository:
git clone https://github.com/xz-group/AdverseDrive
  2. Export the Carla paths to PYTHONPATH:
source export_paths.sh
  3. Install the required Python packages:
pip3 install -r requirements.txt
  4. Download the modified version of the Carla simulator[1], carla-adversedrive.tar.gz. Extract it and navigate into the extracted directory:
tar xvzf carla-adversedrive.tar.gz
cd carla-adversedrive
  5. Run the Carla simulator in a terminal:
./CarlaUE4.sh -windowed -ResX=800 -ResY=600

This starts Carla as a server on port 2000. Give it about 10-30 seconds to start up depending on your system.
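Before moving on, you can confirm the simulator is actually accepting connections by polling the port from Python. This is a minimal sketch; the helper name is ours, not part of the repository, and 2000 is Carla's default port as noted above:

```python
import socket
import time

def wait_for_server(host="localhost", port=2000, timeout=60):
    """Poll until a TCP server (here: Carla) accepts connections,
    or give up after `timeout` seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True   # server is up and accepting connections
        except OSError:
            time.sleep(1)     # not ready yet; retry
    return False
```

Calling wait_for_server() after launching CarlaUE4.sh returns True once the server is ready, which is handy when scripting experiments back to back.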

  6. In a new terminal, start a Python HTTP server. This allows the Carla simulator to read the generated attack images and load them into the simulation:
sh run_adv_server.sh

Note: This requires port 8000 to be free.
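If you are unsure whether something else already owns port 8000, a quick check from Python (a hypothetical helper, not part of the repository):

```python
import socket

def port_is_free(port, host="localhost"):
    """Return True if the port can be bound, i.e. no other process holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```

Run port_is_free(8000) before starting run_adv_server.sh; if it returns False, stop whatever is holding the port first.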

  7. In another terminal, run the infraction-objective Python script:
python3 start_infraction_experiments.py

Note: the Jupyter notebook version of this script, start_infraction_experiments.ipynb, describes each step in detail and is the recommended starting point for this repository. Run jupyter notebook to start a Jupyter server in this directory.

How it Works

  1. The steps above set up an experiment defined by the parameters in config/infraction_parameters.json, including the Carla town being used, the task (straight, turn-left, turn-right), the scenes, the port number used by Carla, and the Bayesian optimizer[3] parameters.
  2. The baseline scenario is run first: the Carla Imitation Learning[2] (IL) agent drives a vehicle from point A to point B, as defined by the experiment's scene and task, and returns a metric for the run (e.g., the sum of infractions over all frames). The baseline scenario has no attack in place.
  3. The Bayesian optimizer suggests attack parameters based on the returned metric, which serves as the objective function we are trying to maximize; the attack image is generated by adversary_generator.py and placed at adversary/adversary_{town_name}.png.
  4. Carla reads the adversary image over the HTTP server and places it at pre-determined locations on the road.
  5. The IL model runs through the attack scenario and returns a new metric.
  6. Steps 3-5 are repeated for a set number of experiments, during which successful attacks are found.
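For reference, the shape of config/infraction_parameters.json is roughly as follows. Every key and value here is an illustrative guess based on the description above, not the repository's actual schema; consult the file itself for the real field names:

```json
{
  "town": "Town01",
  "task": "turn-left",
  "scenes": [0, 1, 2],
  "carla_port": 2000,
  "bayes_opt": {
    "init_points": 5,
    "n_iter": 50
  }
}
```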

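The attack loop in steps 3-6 can be sketched as follows. To keep the sketch self-contained, a plain random-search proposer stands in for the Bayesian optimizer[3] and a toy quadratic stands in for the simulator rollout; all names here (run_scenario, optimize, the parameter keys) are illustrative assumptions, not the repository's API:

```python
import random

def run_scenario(params):
    """Stand-in for steps 4-5: the real pipeline would paint the adversary
    from `params`, run the IL agent through the scene, and return the
    infraction metric. A toy quadratic plays that role here."""
    return -(params["pos_x"] - 5.0) ** 2 - (params["rot"] - 90.0) ** 2 / 100.0

def optimize(pbounds, n_iter=200, seed=1):
    """Steps 3 and 6, with random search standing in for the Bayesian
    optimizer: propose parameters, score them, keep the best attack."""
    rng = random.Random(seed)
    best = {"target": float("-inf"), "params": None}
    for _ in range(n_iter):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in pbounds.items()}
        target = run_scenario(params)       # one simulated rollout
        if target > best["target"]:         # keep the most damaging attack
            best = {"target": target, "params": params}
    return best

best = optimize({"pos_x": (0.0, 10.0), "rot": (0.0, 180.0)})
```

A Bayesian optimizer differs only in how the next `params` are proposed: it fits a surrogate model to past (params, metric) pairs instead of sampling uniformly, so it needs far fewer simulator runs.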
Docker Method (recommended)

It is expected that you have some experience with Docker and have already verified that your containers have GPU access. A quick way to test this:

# docker >= 19.03
docker run --gpus all,capabilities=utility nvidia/cuda:9.0-base nvidia-smi

# docker < 19.03 (requires nvidia-docker2)
docker run --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi

You should see the standard nvidia-smi output.

  1. Clone the AdverseDrive repository:
git clone https://github.com/xz-group/AdverseDrive
  2. Pull the modified version of the Carla simulator:
docker pull xzgroup/carla:latest
  3. Pull the AdverseDrive docker image containing all the prerequisite packages for running experiments (also server-friendly):
docker pull xzgroup/adversedrive:latest
  4. Run our dockerized Carla simulator in a terminal:
sh run_carla_docker.sh

This starts Carla as a server on port 2000. Give it about 10-30 seconds to start up depending on your system.

  5. In a new terminal, start a Python HTTP server. This allows the Carla simulator to read the generated attack images and load them into the simulation:
sh run_adv_server.sh

Note: This requires port 8000 to be free.

  6. In another terminal, run the xzgroup/adversedrive docker image:
sh run_docker.sh
  7. Inside the container, run the infraction-objective Python script:
python3 start_infraction_experiments.py


References

  1. Carla Simulator: https://github.com/carla-simulator/carla
  2. Imitation Learning: https://github.com/carla-simulator/imitation-learning
  3. Bayesian Optimization: https://github.com/fmfn/BayesianOptimization

Citation

If you use our work, kindly cite us using the following:

@misc{boloor2019,
    title={Attacking Vision-based Perception in End-to-End Autonomous Driving Models},
    author={Adith Boloor and Karthik Garimella and Xin He and 
    Christopher Gill and Yevgeniy Vorobeychik and Xuan Zhang},
    year={2019},
    eprint={1910.01907},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}