
merantix / Imitation Learning

License: MIT
Autonomous driving: Tensorflow implementation of the paper "End-to-end Driving via Conditional Imitation Learning"



Imitation Learning

This repository provides a Tensorflow implementation of the paper End-to-end Driving via Conditional Imitation Learning.
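At the heart of conditional imitation learning is a branched controller: the network produces one set of control outputs per high-level navigation command, and the command selects which branch is used. A minimal sketch in plain Python (command names, branch order, and values are illustrative, not the repository's actual ones):

```python
# One output branch per high-level command; the command acts as a
# switch that selects which branch's prediction drives the car.
# Branch order and command names here are purely illustrative.
BRANCH_ORDER = ["follow_lane", "left", "right", "straight"]

def select_branch(branch_outputs, command):
    """Return the (steer, gas, brake) prediction of the commanded branch."""
    return branch_outputs[BRANCH_ORDER.index(command)]

# Four hypothetical branch predictions, one per command.
outputs = [(0.0, 0.5, 0.0), (-0.3, 0.4, 0.0), (0.3, 0.4, 0.0), (0.0, 0.6, 0.0)]
select_branch(outputs, "left")  # → (-0.3, 0.4, 0.0)
```

In the real network the selection happens inside the graph, so for a given sample only the commanded branch contributes to the loss.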

You can find a pre-trained network here. This repository adds the Tensorflow training code.

There are only a few changes to the setup in the paper:

  • We train for fewer steps (190k instead of the paper's 450k), but this is configurable.
  • The branches for the controller follow the order of the training data.
  • We use different weight hyperparameters for the outputs (steer, gas, brake, speed), since the weights suggested in the paper did not work for us.
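The total training loss combines the four per-output losses as a weighted sum. A hedged sketch of that weighting (the weight values below are illustrative, not the ones we actually use):

```python
# Illustrative per-output loss weights (not the repository's actual values).
LOSS_WEIGHTS = {"steer": 0.5, "gas": 0.3, "brake": 0.1, "speed": 0.1}

def weighted_loss(per_output_losses, weights=LOSS_WEIGHTS):
    """Combine per-output losses into a single scalar training loss."""
    return sum(weights[name] * loss for name, loss in per_output_losses.items())

weighted_loss({"steer": 0.04, "gas": 0.09, "brake": 0.01, "speed": 0.16})
```

Raising the weight of one output (e.g. steer) makes the optimizer trade accuracy on the others for it, which is why these values needed tuning.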

Setup

This repository uses Docker images. To use it, install Docker. To build the image, run:

docker build --build-arg base_image=tensorflow/tensorflow:1.12.0-gpu -t imit-learn .

If you only need a CPU image, omit the build-arg:

docker build -t imit-learn .

So far, we have only tested the setup with Python 2, which the tensorflow:1.12.0 image is based on.

To run a container, use:

cd <root of this repository>
DOCKER_BASH_HISTORY="$(pwd)/data/docker.bash_history"
touch $DOCKER_BASH_HISTORY

docker run -it --rm --name imit_learn \
    -v "$(pwd)/imitation:/imitation" \
    -v "$(pwd)/data:/data" \
    -v "$DOCKER_BASH_HISTORY:/root/.bash_history" \
    imit-learn bash

Download the dataset (24 GB), unpack it, and put the files into data/imitation_learning/h5_files/AgentHuman.

If you don't want to download all the data right away, you can try a very small subset that is included in this repository. To set it up, run:

cd <root of this repository>
mkdir data/imitation_learning/h5_files/
cp -r imitation/test/mock_data_181018/imitation_learning/h5_files/ data/imitation_learning/h5_files/

Preprocessing

The preprocessing converts the downloaded h5 files into tfrecord files so that they can be used more easily for training with Tensorflow.

During preprocessing, the data is shuffled to a certain degree: the h5 files are shuffled, but the frames inside each h5 file are not. Additional shuffling across frames is achieved by using a big shuffle buffer during training.
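The two-stage shuffling described above can be sketched in plain Python. The buffer mimics the behavior of a streaming shuffle such as tf.data's Dataset.shuffle; file names, frame counts, and the buffer size are illustrative:

```python
import random

def shuffle_buffer(items, buffer_size, rng):
    """Streaming shuffle: keep up to buffer_size items and emit a
    randomly chosen one as each new item arrives (like Dataset.shuffle)."""
    buf = []
    for item in items:
        buf.append(item)
        if len(buf) >= buffer_size:
            yield buf.pop(rng.randrange(len(buf)))
    rng.shuffle(buf)
    yield from buf

rng = random.Random(0)

# Stage 1 (preprocessing): shuffle the h5 files, but keep the frames
# inside each file in their original order.
files = [[("file_a", i) for i in range(5)], [("file_b", i) for i in range(5)]]
rng.shuffle(files)
frames = [frame for f in files for frame in f]

# Stage 2 (training): a shuffle buffer mixes frames across files.
mixed = list(shuffle_buffer(frames, buffer_size=4, rng=rng))
assert sorted(mixed) == sorted(frames)  # every frame appears exactly once
```

The larger the buffer relative to a file's frame count, the closer the result gets to a full shuffle of all frames.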

Run preprocessing using:

mkdir -p /data/imitation_learning/preprocessed/
python /imitation/preprocessor.py --preproc_config_paths=config-preprocess-production.yaml

This might run for a while and consume a lot of CPU power. To quickly check that the preprocessing code runs, set --preproc_config_paths=config-preprocess-debug.yaml.

Train

For training to run, the training and validation data need to be in the right place, as described above. To run training with the best hyperparameters on the entire dataset, run:

python trainer.py --config_paths=config-train-production.yaml

To debug training, use:

python trainer.py --config_paths=config-train-debug.yaml

Tests

pytest