[ICLR 2020] Learning to Move with Affordance Maps 🗺️🤖💨

Programming Languages

python

Projects that are alternatives of or similar to A2l

Rsband local planner
A ROS move_base local planner plugin for Car-Like robots with Ackermann or 4-Wheel-Steering.
Stars: ✭ 78 (+239.13%)
Mutual labels:  robotics, navigation
neonavigation
A 2-D/3-DOF seamless global/local mobile robot motion planner package for ROS
Stars: ✭ 199 (+765.22%)
Mutual labels:  robotics, navigation
Navigation
ROS Navigation stack. Code for finding where the robot is and how it can get somewhere else.
Stars: ✭ 1,248 (+5326.09%)
Mutual labels:  robotics, navigation
robo-vln
Pytorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
Stars: ✭ 34 (+47.83%)
Mutual labels:  robotics, navigation
Neural Slam
Pytorch code for ICLR-20 Paper "Learning to Explore using Active Neural SLAM"
Stars: ✭ 414 (+1700%)
Mutual labels:  robotics, navigation
Deepseqslam
The Official Deep Learning Framework for Route-based Place Recognition
Stars: ✭ 49 (+113.04%)
Mutual labels:  robotics, navigation
Object-Goal-Navigation
Pytorch code for NeurIPS-20 Paper "Object Goal Navigation using Goal-Oriented Semantic Exploration"
Stars: ✭ 107 (+365.22%)
Mutual labels:  robotics, navigation
Pepper Robot Programming
Pepper Programs : Object Detection Real Time without ROS
Stars: ✭ 29 (+26.09%)
Mutual labels:  robotics, navigation
CLF reactive planning system
This package provides a CLF-based reactive planning system, described in paper: Efficient Anytime CLF Reactive Planning System for a Bipedal Robot on Undulating Terrain. The reactive planning system consists of a 5-Hz planning thread to guide a robot to a distant goal and a 300-Hz Control-Lyapunov-Function-based (CLF-based) reactive thread to co…
Stars: ✭ 21 (-8.7%)
Mutual labels:  robotics, navigation
ROS Basic SLAM
BUILDING AN AUTOMATIC VEHICLE BASED ON STEREO CAMERA
Stars: ✭ 16 (-30.43%)
Mutual labels:  robotics, navigation
RustRobotics
Rust implementation of PythonRobotics such as EKF, DWA, Pure Pursuit, LQR.
Stars: ✭ 40 (+73.91%)
Mutual labels:  robotics, navigation
Probabilistic robotics
solution of exercises of the book "probabilistic robotics"
Stars: ✭ 734 (+3091.3%)
Mutual labels:  robotics, navigation
Navigation2
ROS2 Navigation Framework and System
Stars: ✭ 528 (+2195.65%)
Mutual labels:  robotics, navigation
Fourth robot pkg
Repository for robot unit 4 (KIT-C4)
Stars: ✭ 7 (-69.57%)
Mutual labels:  robotics, navigation
Ros best practices
Best practices, conventions, and tricks for ROS. Do you want to become a robotics master? Then consider graduating or working at the Robotics Systems Lab at ETH in Zürich!
Stars: ✭ 799 (+3373.91%)
Mutual labels:  robotics
Gradslam
gradslam is an open source differentiable dense SLAM library for PyTorch
Stars: ✭ 833 (+3521.74%)
Mutual labels:  robotics
Animatedbottombar
A customizable and easy to use BottomBar navigation view with sleek animations, with support for ViewPager, ViewPager2, NavController, and badges.
Stars: ✭ 797 (+3365.22%)
Mutual labels:  navigation
Behaviortree.cpp
Behavior Trees Library in C++. Batteries included.
Stars: ✭ 793 (+3347.83%)
Mutual labels:  robotics
Ethx Autonomous Mobile Robot
Autonomous Mobile Robot Problem Sets and Exercises (Spring 2017) @ ETH
Stars: ✭ 17 (-26.09%)
Mutual labels:  robotics
Redtail
Perception and AI components for autonomous mobile robotics.
Stars: ✭ 832 (+3517.39%)
Mutual labels:  robotics

A2L - Active Affordance Learning

Published at ICLR 2020 [OpenReview] [Video] [PDF]

This repo provides a reference implementation for active affordance learning, which can be employed to improve autonomous navigation performance in hazardous environments (demonstrated here using the VizDoom simulator). The repo also contains a variety of convenient utilities that can be re-used in other VizDoom-based projects to improve quality of life.

Setup

This code has been tested on Ubuntu 16.04.

Requirements

  1. python >= 3.5
  2. keras >= 2.2.0
  3. opencv-python >= 3.4.0

Installing Dependencies

  1. Install the VizDoom simulator into your local Python environment.

    # Install ZDoom dependencies
    sudo apt-get install build-essential zlib1g-dev libsdl2-dev libjpeg-dev \
    nasm tar libbz2-dev libgtk2.0-dev cmake git libfluidsynth-dev libgme-dev \
    libopenal-dev timidity libwildmidi-dev unzip
    
    # Install Boost libraries
    sudo apt-get install libboost-all-dev
    
    # Install Python 3 dependencies
    sudo apt-get install python3-dev python3-pip
    pip install numpy
    
  2. Install Keras-based segmentation-models library.

    pip install segmentation-models
    
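With both dependencies installed, a quick import check confirms that the environment is usable. This is a minimal sanity-check sketch; if the VizDoom Python package itself has not been installed yet (for example via pip or a source build), the first import will fail.

    # Sanity check (sketch): verify that the VizDoom bindings and the
    # Keras-based segmentation-models library both import cleanly.
    import vizdoom as vzd
    import segmentation_models as sm

    game = vzd.DoomGame()          # constructing DoomGame confirms the simulator bindings load
    print("segmentation-models:", sm.__version__)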

Downloading Demo Data

In order to download the demo data (which contains train/test maps, pre-computed train beacons, and a pre-trained A2L model), follow the steps below:

  1. Download the demo data into the root of the repo and uncompress using the following commands:

    wget https://www.dropbox.com/s/0hn71njit81xiy7/demo_data.tar.gz
    tar -xvzf demo_data.tar.gz
    
  2. Check that the directory structure looks like the following:

    β”œβ”€β”€ data
    β”‚   β”œβ”€β”€ beacons
    β”‚   β”œβ”€β”€ configs
    β”‚   β”œβ”€β”€ experiments
    β”‚   β”œβ”€β”€ maps
    β”‚   β”œβ”€β”€ samples
    β”‚   └── models
    
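The same check can be scripted; the snippet below is a small sketch that only verifies that the directories listed above exist under data/.

    from pathlib import Path

    # Check that the demo data was extracted into the expected layout
    expected = ["beacons", "configs", "experiments", "maps", "samples", "models"]
    for name in expected:
        status = "ok" if (Path("data") / name).is_dir() else "MISSING"
        print("data/{}: {}".format(name, status))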

Usage

Each executable script in this repo has a file name prefixed with run. For a detailed description of all configuration arguments, run any script with the -h flag.

Generating Partially-Labeled Self-Supervised Samples

The following sequence of commands is used to generate a configurable number of partially-labeled examples of navigability in a self-supervised manner. It should work with any set of maps compatible with VizDoom.

  1. Generate a set of beacons for each map, which describe valid spawn points from which the agent can start a sampling episode.

    python preprocess/run_beacon_generation.py --wad-dir ../data/maps/train/ --save-dir ../data/beacons/train/
    
  2. Generate a configurable number of self-supervised examples per map.

    python train/run_data_sampling.py --wad-dir ../data/maps/train/ --beacon-dir ../data/beacons/train/ --save-dir ../data/samples/train/ --samples-per-map 500
    
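Each generated example pairs a first-person RGB view with a partial navigability labeling. The sketch below shows one way such a pair could be inspected; the file names, mask encoding, and label value are assumptions for illustration, not the actual on-disk format produced by run_data_sampling.py.

    import cv2

    # Hypothetical file names and mask encoding; the real output format of
    # run_data_sampling.py may differ.
    rgb = cv2.imread("../data/samples/train/sample_0000_rgb.png")
    mask = cv2.imread("../data/samples/train/sample_0000_mask.png", cv2.IMREAD_GRAYSCALE)

    overlay = rgb.copy()
    overlay[mask == 1] = (0, 255, 0)   # assumed: label value 1 marks navigable pixels
    cv2.imwrite("sample_overlay.png", overlay)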

Training Affordance Segmentation Models

The following command is used to train a ResNet-18-based UNet segmentation model to predict pixel-wise navigability.

  1. Train UNet-based segmentation model.

    python train/run_train_model.py --data-dir ../data/samples/train/ --save-dir ../data/models/ --epochs 50 --batch-size 40
    
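Internally this relies on the segmentation-models library; for reference, a ResNet-18 UNet of the kind trained here can be constructed as in the sketch below. The input resolution, number of classes, loss, and optimizer shown are assumptions, not necessarily the values used by run_train_model.py.

    import segmentation_models as sm
    from keras.optimizers import Adam

    # ResNet-18 encoder with a UNet decoder; a single-channel navigability output is assumed
    model = sm.Unet(
        backbone_name="resnet18",
        input_shape=(256, 256, 3),      # assumed input resolution
        classes=1,
        activation="sigmoid",
        encoder_weights="imagenet",
    )
    model.compile(
        optimizer=Adam(1e-4),
        loss="binary_crossentropy",
        metrics=[sm.metrics.iou_score],
    )
    model.summary()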

Training Segmentation Models with Active Affordance Learning

The following command is used to train a ResNet-18-based UNet segmentation model using active affordance learning. The script alternates between data generation and model training, using trained seed models to actively seek out difficult examples.

  1. Train UNet-based segmentation model using active affordance learning.

    python train/run_train_A2L.py --wad-dir ../data/maps/train --beacon-dir ../data/beacons/train --save-dir ../data/models/active --active-iterations 5 --samples-per-map 500 --epochs 50 --batch-size 40
    
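Schematically, the script alternates between the two previous stages. The loop below is only a sketch of that alternation; generate_samples and train_segmentation_model are hypothetical placeholders for the sampling and training steps, not functions exported by this repo.

    # Schematic of the active affordance learning loop (placeholders only).
    ACTIVE_ITERATIONS = 5
    SAMPLES_PER_MAP = 500

    def generate_samples(seed_model, samples_per_map):
        """Placeholder: collect partially-labeled samples; a trained seed model
        is used to seek out difficult examples."""
        return []

    def train_segmentation_model(samples, epochs=50, batch_size=40):
        """Placeholder: fit a ResNet-18 UNet on all samples collected so far."""
        return None

    model, dataset = None, []
    for it in range(ACTIVE_ITERATIONS):
        dataset += generate_samples(model, SAMPLES_PER_MAP)
        model = train_segmentation_model(dataset)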

Evaluating Navigation Performance

The following command is used to evaluate the performance of an affordance-based agent in a goal-directed navigation task. It can also be used to evaluate a geometry-based agent by dropping the --model-path argument.

  1. Run navigation experiments specified in a JSON file.

    python eval/run_eval_navigation.py --wad-dir ../data/maps/test --model-path ../data/models/seg_model.h5 --experiment-path ../data/experiments/navigation/demo.json --iterations 5
    
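For reference, loading the pre-trained model outside the evaluation harness and predicting a navigability map for a single frame might look like the sketch below; the frame path, preprocessing, and decision threshold are assumptions.

    import cv2
    import numpy as np
    from keras.models import load_model

    # compile=False avoids needing the training-time loss/metric objects
    model = load_model("../data/models/seg_model.h5", compile=False)

    frame = cv2.imread("frame.png")                 # hypothetical RGB observation from VizDoom
    h, w = model.input_shape[1:3]                   # assumes a fixed input resolution
    x = cv2.resize(frame, (w, h)).astype("float32") / 255.0
    pred = model.predict(x[np.newaxis])[0, ..., 0]  # per-pixel navigability score
    navigable = pred > 0.5                          # assumed decision threshold
    print("navigable fraction: {:.2%}".format(navigable.mean()))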

Citing

If you've found this code to be useful, please consider citing our paper!

@inproceedings{qi2020learning,
  title={Learning to Move with Affordance Maps},
  author={Qi, William and Mullapudi, Ravi Teja and Gupta, Saurabh and Ramanan, Deva},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2020}
}

Questions

If you have additional questions/concerns, please feel free to reach out to [email protected].
