autonomousvision / neat

License: MIT
[ICCV'21] NEAT: Neural Attention Fields for End-to-End Autonomous Driving

Programming Languages

Python: 139335 projects (#7 most used programming language)
XSLT: 1337 projects
HTML: 75241 projects
Shell: 77523 projects
Dockerfile: 14818 projects
CSS: 56736 projects

Projects that are alternatives to or similar to neat

data aggregation
This repository contains the code for the CVPR 2020 paper "Exploring Data Aggregation in Policy Learning for Vision-based Urban Autonomous Driving"
Stars: ✭ 26 (-86.6%)
Mutual labels:  autonomous-driving, imitation-learning
Pgdrive
PGDrive: an open-ended driving simulator with infinite scenes from procedural generation
Stars: ✭ 60 (-69.07%)
Mutual labels:  autonomous-driving, imitation-learning
Imitation Learning
Autonomous driving: Tensorflow implementation of the paper "End-to-end Driving via Conditional Imitation Learning"
Stars: ✭ 60 (-69.07%)
Mutual labels:  autonomous-driving, imitation-learning
Carla
Open-source simulator for autonomous driving research.
Stars: ✭ 7,012 (+3514.43%)
Mutual labels:  autonomous-driving, imitation-learning
Gym Carla
An OpenAI gym wrapper for CARLA simulator
Stars: ✭ 164 (-15.46%)
Mutual labels:  autonomous-driving, imitation-learning
HCFlow
Official PyTorch code for Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (HCFlow, ICCV2021)
Stars: ✭ 140 (-27.84%)
Mutual labels:  iccv2021
urban road filter
Real-time LIDAR-based Urban Road and Sidewalk detection for Autonomous Vehicles 🚗
Stars: ✭ 134 (-30.93%)
Mutual labels:  autonomous-driving
OpenHDMap
An open HD map production process for autonomous car simulation
Stars: ✭ 152 (-21.65%)
Mutual labels:  autonomous-driving
SelfImitationDiverse
Tensorflow code for "Learning Self-Imitating Diverse Policies" (ICLR 2019)
Stars: ✭ 18 (-90.72%)
Mutual labels:  imitation-learning
RCAutopilot
Autonomous RC Car powered by a Convolutional Neural Network implemented in Python with Tensorflow
Stars: ✭ 35 (-81.96%)
Mutual labels:  autonomous-driving
opendlv
OpenDLV - A modern microservice-based software ecosystem powered by libcluon to make vehicles autonomous.
Stars: ✭ 67 (-65.46%)
Mutual labels:  autonomous-driving
Imitation-Learning-from-Imperfect-Demonstration
[ICML 2019] Implementation of "Imitation Learning from Imperfect Demonstration"
Stars: ✭ 36 (-81.44%)
Mutual labels:  imitation-learning
Reinforce-Paraphrase-Generation
This repository contains the data and code for the paper "An Empirical Comparison on Imitation Learning and Reinforcement Learning for Paraphrase Generation" (EMNLP2019).
Stars: ✭ 76 (-60.82%)
Mutual labels:  imitation-learning
cruw-devkit
Develop kit for CRUW dataset
Stars: ✭ 27 (-86.08%)
Mutual labels:  autonomous-driving
hgail
gail, infogail, hierarchical gail implementations
Stars: ✭ 25 (-87.11%)
Mutual labels:  imitation-learning
QmapCompression
Official implementation of "Variable-Rate Deep Image Compression through Spatially-Adaptive Feature Transform", ICCV 2021
Stars: ✭ 27 (-86.08%)
Mutual labels:  iccv2021
ar-tu-do
ROS & Gazebo project for 1/10th scale self-driving race cars
Stars: ✭ 65 (-66.49%)
Mutual labels:  autonomous-driving
BtcDet
Behind the Curtain: Learning Occluded Shapes for 3D Object Detection
Stars: ✭ 104 (-46.39%)
Mutual labels:  autonomous-driving
EgoNet
Official project website for the CVPR 2021 paper "Exploring intermediate representation for monocular vehicle pose estimation"
Stars: ✭ 111 (-42.78%)
Mutual labels:  autonomous-driving
CurveNet
Official implementation of "Walk in the Cloud: Learning Curves for Point Clouds Shape Analysis", ICCV 2021
Stars: ✭ 94 (-51.55%)
Mutual labels:  iccv2021

NEAT: Neural Attention Fields for End-to-End Autonomous Driving

Paper | Supplementary | Video | Poster | Blog

This repository is for the ICCV 2021 paper NEAT: Neural Attention Fields for End-to-End Autonomous Driving.

@inproceedings{Chitta2021ICCV,
  author = {Chitta, Kashyap and Prakash, Aditya and Geiger, Andreas},
  title = {NEAT: Neural Attention Fields for End-to-End Autonomous Driving},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year = {2021}
}

Setup

Please follow the installation instructions from our TransFuser repository to set up the CARLA simulator. The conda environment required for NEAT can be installed via:

conda env create -f environment.yml
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
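As an optional sanity check (this one-liner is ours, not part of the repository), you can confirm that PyTorch resolved with CUDA support:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"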

To run the AIM-VA baseline, you will additionally need to install MMCV and MMSegmentation:

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
pip install mmsegmentation
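To verify that both packages import cleanly (again an optional check of ours; note that the MMSegmentation module is named mmseg):

python -c "import mmcv, mmseg; print(mmcv.__version__, mmseg.__version__)"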

Data Generation

The training data is generated using leaderboard/team_code/auto_pilot.py. Data generation requires routes and scenarios. Each route is defined by a sequence of waypoints (and optionally a weather condition) that the agent needs to follow. Each scenario is defined by a trigger transform (location and orientation) and other actors present in that scenario (optional). We provide several routes and scenarios under leaderboard/data/. The TransFuser repository and leaderboard repository provide additional routes and scenario files.
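For illustration, route files follow the CARLA leaderboard XML schema, with each route given as an ordered list of waypoint transforms. The snippet below is a hand-made sketch with made-up coordinates, not a file shipped with the repository:

<routes>
  <route id="0" town="Town01">
    <waypoint x="100.0" y="55.0" z="0.0" pitch="0.0" roll="0.0" yaw="0.0"/>
    <waypoint x="150.0" y="55.0" z="0.0" pitch="0.0" roll="0.0" yaw="0.0"/>
  </route>
</routes>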

Running a CARLA Server

With Display

./CarlaUE4.sh --world-port=2000 -opengl

Without Display

Without Docker:

SDL_VIDEODRIVER=offscreen SDL_HINT_CUDA_DEVICE=0 ./CarlaUE4.sh --world-port=2000 -opengl

With Docker:

Instructions for setting up Docker are available here. Pull the Docker image of CARLA 0.9.10.1:

docker pull carlasim/carla:0.9.10.1

Docker 18:

docker run -it --rm -p 2000-2002:2000-2002 --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 carlasim/carla:0.9.10.1 ./CarlaUE4.sh --world-port=2000 -opengl

Docker 19:

docker run -it --rm --net=host --gpus '"device=0"' carlasim/carla:0.9.10.1 ./CarlaUE4.sh --world-port=2000 -opengl

If the Docker container doesn't start properly, add another environment variable: -e SDL_AUDIODRIVER=dsp.
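For example, the Docker 19 command with the audio driver override added becomes:

docker run -it --rm --net=host --gpus '"device=0"' -e SDL_AUDIODRIVER=dsp carlasim/carla:0.9.10.1 ./CarlaUE4.sh --world-port=2000 -opengl

Once the server is up, you can check that it responds (an optional check of ours; it assumes the CARLA 0.9.10.1 Python API is on your PYTHONPATH):

python -c "import carla; c = carla.Client('localhost', 2000); c.set_timeout(10.0); print(c.get_server_version())"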

Running the Autopilot

Once the CARLA server is running, roll out the autopilot to start data generation.

./leaderboard/scripts/run_evaluation.sh

The expert agent used for data generation is defined in leaderboard/team_code/auto_pilot.py. The variables that need to be set are specified in leaderboard/scripts/run_evaluation.sh. The expert agent is originally based on the autopilot from this codebase.
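The sketch below shows the kind of variables involved, following the CARLA leaderboard conventions; treat it as illustrative (the placeholder paths are ours) and consult the script itself for the authoritative list and values:

# Illustrative excerpt in the spirit of run_evaluation.sh (paths are placeholders)
export ROUTES=leaderboard/data/<your_routes>.xml         # route definitions to drive
export SCENARIOS=leaderboard/data/<your_scenarios>.json  # scenario triggers along the routes
export TEAM_AGENT=leaderboard/team_code/auto_pilot.py    # expert agent for data generation
export CHECKPOINT_ENDPOINT=<results_file>.json           # where evaluation results are written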

Training

The training code is included in the repository; the pretrained models can be downloaded and extracted as follows:

mkdir model_ckpt
wget https://s3.eu-central-1.amazonaws.com/avg-projects/neat/models.zip -P model_ckpt
unzip model_ckpt/models.zip -d model_ckpt/
rm model_ckpt/models.zip

There are 5 pretrained models provided in model_ckpt/, covering NEAT and the AIM baseline variants evaluated in the paper (including AIM-VA, whose extra dependencies are described in the setup section above).

Additional baselines are available in the TransFuser repository.

Evaluation

Spin up a CARLA server (described above) and run the desired agent. The relevant variables need to be set in leaderboard/scripts/run_evaluation.sh.

CUDA_VISIBLE_DEVICES=0 ./leaderboard/scripts/run_evaluation.sh
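To evaluate one of the pretrained models instead of the expert, point the agent and config variables in run_evaluation.sh at the downloaded checkpoints. The names below are illustrative assumptions, not verified paths; match them to the agent files in leaderboard/team_code/ and the folders extracted into model_ckpt/:

export TEAM_AGENT=leaderboard/team_code/neat_agent.py  # hypothetical agent file name
export TEAM_CONFIG=model_ckpt/neat                     # hypothetical checkpoint folder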

CARLA also has an official Autonomous Driving Leaderboard on which different models can be evaluated. The leaderboard_submission branch of the TransFuser repository provides details on how to build a docker image and submit it to the leaderboard.

Acknowledgements

This implementation primarily extends the existing TransFuser repository.

If you found our work interesting, check out these other papers on autonomous driving from our group.
