bradyz / 2020_carla_challenge

"Learning by Cheating" (CoRL 2019) submission for the 2020 CARLA Challenge

Programming Languages

shell

Projects that are alternatives to or similar to 2020_carla_challenge

Lidar for ad references
A list of references on lidar point cloud processing for autonomous driving
Stars: ✭ 456 (+647.54%)
Mutual labels:  autonomous-driving
Carla
Open-source simulator for autonomous driving research.
Stars: ✭ 7,012 (+11395.08%)
Mutual labels:  autonomous-driving
Constrained attention filter
(ECCV 2020) Tensorflow implementation of A Generic Visualization Approach for Convolutional Neural Networks
Stars: ✭ 36 (-40.98%)
Mutual labels:  autonomous-driving
Deeplabv3
PyTorch implementation of DeepLabV3, trained on the Cityscapes dataset.
Stars: ✭ 511 (+737.7%)
Mutual labels:  autonomous-driving
Awesome Interaction Aware Trajectory Prediction
A selection of state-of-the-art research materials on trajectory prediction
Stars: ✭ 625 (+924.59%)
Mutual labels:  autonomous-driving
Chosuntruck
Euro Truck Simulator 2 autonomous driving solution
Stars: ✭ 706 (+1057.38%)
Mutual labels:  autonomous-driving
Mscnn
Caffe implementation of our multi-scale object detection framework
Stars: ✭ 397 (+550.82%)
Mutual labels:  autonomous-driving
Pgdrive
PGDrive: an open-ended driving simulator with infinite scenes from procedural generation
Stars: ✭ 60 (-1.64%)
Mutual labels:  autonomous-driving
Highway Env
A minimalist environment for decision-making in autonomous driving
Stars: ✭ 674 (+1004.92%)
Mutual labels:  autonomous-driving
Kittiseg
A KITTI road segmentation model implemented in TensorFlow.
Stars: ✭ 873 (+1331.15%)
Mutual labels:  autonomous-driving
Self Driving Car In Video Games
A deep neural network that learns to drive in video games
Stars: ✭ 559 (+816.39%)
Mutual labels:  autonomous-driving
Apollo Platform
Collections of Apollo Platform Software
Stars: ✭ 611 (+901.64%)
Mutual labels:  autonomous-driving
Dig Into Apollo
Apollo learning notes for beginners.
Stars: ✭ 903 (+1380.33%)
Mutual labels:  autonomous-driving
Multinet
Real-time Joint Semantic Reasoning for Autonomous Driving
Stars: ✭ 471 (+672.13%)
Mutual labels:  autonomous-driving
Dmpr Ps
DMPR-PS: A Novel Approach for Parking-Slot Detection Using Directional Marking-Point Regression
Stars: ✭ 46 (-24.59%)
Mutual labels:  autonomous-driving
Autonomousvehiclepaper
Quick updates on papers related to autonomous driving.
Stars: ✭ 406 (+565.57%)
Mutual labels:  autonomous-driving
Ultra Fast Lane Detection
Ultra Fast Structure-aware Deep Lane Detection (ECCV 2020)
Stars: ✭ 688 (+1027.87%)
Mutual labels:  autonomous-driving
Imitation Learning
Autonomous driving: Tensorflow implementation of the paper "End-to-end Driving via Conditional Imitation Learning"
Stars: ✭ 60 (-1.64%)
Mutual labels:  autonomous-driving
Deepseqslam
The Official Deep Learning Framework for Route-based Place Recognition
Stars: ✭ 49 (-19.67%)
Mutual labels:  autonomous-driving
Tianbot racecar
DISCONTINUED - MIGRATED TO TIANRACER - A Low cost Autonomous Driving Car Educational and Competition Kit
Stars: ✭ 26 (-57.38%)
Mutual labels:  autonomous-driving

Learning by Cheating

[teaser figure]

Learning by Cheating
Dian Chen, Brady Zhou, Vladlen Koltun, Philipp Krähenbühl,
Conference on Robot Learning (CoRL 2019)
arXiv:1912.12294

If you find our repo useful in your research, please consider citing our work:

@inproceedings{chen2019lbc,
  author    = {Dian Chen and Brady Zhou and Vladlen Koltun and Philipp Kr\"ahenb\"uhl},
  title     = {Learning by Cheating},
  booktitle = {Conference on Robot Learning (CoRL)},
  year      = {2019},
}

The code in this repo is based on the original Learning by Cheating codebase, which contains the code for the NoCrash and CoRL 2017 benchmarks.

Installation

Clone this repo with all its submodules

git clone https://github.com/bradyz/2020_CARLA_challenge.git --recursive
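
If you already cloned without --recursive, you can fetch the submodules afterwards with

git submodule update --init --recursive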

All Python packages used are specified in carla_project/requirements.txt.
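
You can install them with pip, for example:

pip3 install -r carla_project/requirements.txt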

This code uses CARLA 0.9.9 and also works with CARLA 0.9.8 and 0.9.10.1.

You will also need to install CARLA 0.9.10.1, along with the additional maps; see the CARLA release documentation for more instructions.
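
As a rough sketch of the usual setup (the file names assume the official 0.9.10.1 release archives; adjust paths as needed):

mkdir -p ~/software/CARLA_0.9.10.1
tar -xzf CARLA_0.9.10.1.tar.gz -C ~/software/CARLA_0.9.10.1
mv AdditionalMaps_0.9.10.1.tar.gz ~/software/CARLA_0.9.10.1/Import/
cd ~/software/CARLA_0.9.10.1 && ./ImportAssets.sh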

Dataset

We provide a dataset of over 70k samples collected over the 75 routes provided in leaderboard/data/routes_*.xml.

Link to full dataset (9 GB).

[sample image]

The dataset is collected using leaderboard/team_code/auto_pilot.py, which follows painfully hand-designed rules (e.g. if a pedestrian is 5 meters ahead, then brake).

Additionally, we change the weather once every couple of seconds within each route to add visual diversity, as a form of on-the-fly augmentation. The simulator runs at 20 FPS, and we save the following data at 2 Hz (i.e. every 10th simulator frame).

  • Left, Center, and Right RGB Images at 256 x 144 resolution
  • A semantic segmentation rendered in the overhead view
  • World position and heading
  • Raw control (steer, throttle, brake)

Note: the overhead view does nothing to address obstructions, such as overhead highways.

We provide a sample trajectory in sample_data, which you can visualize by running

python3 -m carla_project.src.dataset sample_data/route_00/

Data Collection

The autopilot that we used to collect the data could use a lot of work and currently does not support stop signs.

If you're interested in recollecting data after changing the autopilot's driving behavior in leaderboard/team_code/auto_pilot.py, you can collect your own dataset by running the following.

First, spin up a CARLA server

./CarlaUE4.sh -quality-level=Epic -world-port=2000 -resx=800 -resy=600 -opengl
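
If you are on a headless machine, CARLA 0.9.x can also render off-screen by clearing the display (an option, not required):

DISPLAY= ./CarlaUE4.sh -quality-level=Epic -world-port=2000 -opengl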

then run the agent.

export CARLA_ROOT=/home/bradyzhou/software/CARLA_0.9.10.1           # change to where you installed CARLA
export PORT=2000                                                    # change to port that CARLA is running on
export ROUTES=leaderboard/data/routes_training/route_19.xml         # change to desired route
export TEAM_AGENT=auto_pilot.py                                     # no need to change
export TEAM_CONFIG=sample_data                                      # change path to save data

./run_agent.sh
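
To collect data for many routes in one go, you could loop over the route files. A sketch, assuming run_agent.sh picks up ROUTES and TEAM_CONFIG from the environment as above, with one output directory per route:

for ROUTES in leaderboard/data/routes_training/route_*.xml; do
    export ROUTES
    export TEAM_CONFIG=data/$(basename "${ROUTES%.xml}")
    mkdir -p "$TEAM_CONFIG"
    ./run_agent.sh
done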

Run a pretrained model

Download the checkpoint from our Wandb project.

Navigate to one of the runs, like https://app.wandb.ai/bradyz/2020_carla_challenge_lbc/runs/command_coefficient=0.01_sample_by=even_stage2/files

Go to the "files" tab, download the model weights named "epoch=24.ckpt", and pass the file path as TEAM_CONFIG below.
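
To match the TEAM_CONFIG used below, you can simply rename the downloaded file:

mv epoch=24.ckpt model.ckpt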

Spin up a CARLA server

./CarlaUE4.sh -quality-level=Epic -world-port=2000 -resx=800 -resy=600 -opengl

then run the agent.

export CARLA_ROOT=/home/bradyzhou/software/CARLA_0.9.10.1           # change to where you installed CARLA
export PORT=2000                                                    # change to port that CARLA is running on
export ROUTES=leaderboard/data/routes_training/route_19.xml         # change to desired route
export TEAM_AGENT=image_agent.py                                    # no need to change
export TEAM_CONFIG=model.ckpt                                       # change path to checkpoint
export HAS_DISPLAY=1                                                # set to 0 if you don't want a debug window

./run_agent.sh

Training models from scratch

First, download and extract our provided dataset.

Then run the stage 1 training of the privileged agent.

python3 -m carla_project.src.map_model --dataset_dir /path/to/data --hack

We use wandb for logging, so navigate to the generated experiment page to visualize training.
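
If you would rather not sync logs while debugging, wandb supports an offline mode via an environment variable, for example:

WANDB_MODE=offline python3 -m carla_project.src.map_model --dataset_dir /path/to/data --hack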

Important: if you're interested in tuning hyperparameters, see carla_project/src/map_model.py for more detail.
To see the hyperparameters we used for our models, navigate to the corresponding wandb run config.

[sample visualization]

Training the sensorimotor agent (which acts only on raw images) is similar, and can be done by running

python3 -m carla_project.src.image_model --dataset_dir /path/to/data

Docker

To build the Docker container for submission, make sure to edit scripts/Dockerfile.master appropriately, then run

sudo ./scripts/make_docker.sh

Spin up a CARLA server

./CarlaUE4.sh -quality-level=Epic -world-port=2000 -resx=800 -resy=600 -opengl

Now you can either run the Docker container directly or run it interactively.

To run the docker container,

sudo docker run --net=host --gpus all \
    -e NVIDIA_VISIBLE_DEVICES=0 \
    -e REPETITIONS=1 \
    -e DEBUG_CHALLENGE=0 \
    -e PORT=2000 \
    -e ROUTES=leaderboard/data/routes_devtest.xml \
    -e CHECKPOINT_ENDPOINT=tmp.txt \
    -e SCENARIOS=leaderboard/data/all_towns_traffic_scenarios_public.json \
    leaderboard-user:latest \
    ./leaderboard/scripts/run_evaluation.sh

Or if you need to debug something, you can run it interactively

sudo docker run --net=host --gpus all -it leaderboard-user:latest /bin/bash

Run the evaluation through the interactive shell.

export PORT=2000
export DEBUG_CHALLENGE=0
export REPETITIONS=1
export ROUTES=leaderboard/data/routes_devtest/route_00.xml         # change to desired route
export CHECKPOINT_ENDPOINT=tmp.txt
export SCENARIOS=leaderboard/data/all_towns_traffic_scenarios_public.json

conda activate python37

./leaderboard/scripts/run_evaluation.sh
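
The evaluation results accumulate in the file named by CHECKPOINT_ENDPOINT, so you can inspect them afterwards with, e.g.,

cat tmp.txt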