
devendrachaplot / Neural Slam

License: MIT
PyTorch code for the ICLR-20 paper "Learning to Explore using Active Neural SLAM"

Programming Languages

python

Projects that are alternatives to or similar to Neural-SLAM

Object-Goal-Navigation
PyTorch code for the NeurIPS-20 paper "Object Goal Navigation using Goal-Oriented Semantic Exploration"
Stars: ✭ 107 (-74.15%)
Mutual labels:  robotics, navigation, deep-reinforcement-learning
DDPG
End-to-end mobile robot navigation using DDPG (Continuous Control with Deep Reinforcement Learning), based on TensorFlow + Gazebo
Stars: ✭ 41 (-90.1%)
Mutual labels:  navigation, deep-reinforcement-learning
motion-planner-reinforcement-learning
End-to-end motion planner using Deep Deterministic Policy Gradient (DDPG) in Gazebo
Stars: ✭ 99 (-76.09%)
Mutual labels:  navigation, deep-reinforcement-learning
robo-vln
PyTorch code for the ICRA'21 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
Stars: ✭ 34 (-91.79%)
Mutual labels:  robotics, navigation
Pytorch Rl
This repository contains model-free deep reinforcement learning algorithms implemented in PyTorch
Stars: ✭ 394 (-4.83%)
Mutual labels:  robotics, deep-reinforcement-learning
Rsband local planner
A ROS move_base local planner plugin for car-like robots with Ackermann or four-wheel steering.
Stars: ✭ 78 (-81.16%)
Mutual labels:  robotics, navigation
AI
Solving visual tracking and visual navigation problems with deep reinforcement learning
Stars: ✭ 16 (-96.14%)
Mutual labels:  navigation, deep-reinforcement-learning
ROS Basic SLAM
Building an autonomous vehicle based on a stereo camera
Stars: ✭ 16 (-96.14%)
Mutual labels:  robotics, navigation
l2r
Open-source reinforcement learning environment for autonomous racing.
Stars: ✭ 38 (-90.82%)
Mutual labels:  robotics, deep-reinforcement-learning
RustRobotics
Rust implementation of PythonRobotics algorithms such as EKF, DWA, Pure Pursuit, and LQR.
Stars: ✭ 40 (-90.34%)
Mutual labels:  robotics, navigation
Deepseqslam
The Official Deep Learning Framework for Route-based Place Recognition
Stars: ✭ 49 (-88.16%)
Mutual labels:  robotics, navigation
Gym Gazebo2
gym-gazebo2 is a toolkit for developing and comparing reinforcement learning algorithms using ROS 2 and Gazebo
Stars: ✭ 257 (-37.92%)
Mutual labels:  robotics, deep-reinforcement-learning
Pepper Robot Programming
Pepper programs: real-time object detection without ROS
Stars: ✭ 29 (-93%)
Mutual labels:  robotics, navigation
Navigation
ROS Navigation stack. Code for finding where the robot is and how it can get somewhere else.
Stars: ✭ 1,248 (+201.45%)
Mutual labels:  robotics, navigation
A2l
[ICLR 2020] Learning to Move with Affordance Maps 🗺️🤖💨
Stars: ✭ 23 (-94.44%)
Mutual labels:  robotics, navigation
Fourth robot pkg
Repository for unit No. 4 (KIT-C4)
Stars: ✭ 7 (-98.31%)
Mutual labels:  robotics, navigation
Gibsonenv
Gibson Environments: Real-World Perception for Embodied Agents
Stars: ✭ 666 (+60.87%)
Mutual labels:  robotics, deep-reinforcement-learning
Probabilistic robotics
Solutions to exercises from the book "Probabilistic Robotics"
Stars: ✭ 734 (+77.29%)
Mutual labels:  robotics, navigation
neonavigation
A 2-D/3-DOF seamless global/local mobile robot motion planner package for ROS
Stars: ✭ 199 (-51.93%)
Mutual labels:  robotics, navigation
CLF reactive planning system
This package provides a CLF-based reactive planning system, described in the paper "Efficient Anytime CLF Reactive Planning System for a Bipedal Robot on Undulating Terrain". The reactive planning system consists of a 5-Hz planning thread to guide a robot to a distant goal and a 300-Hz Control-Lyapunov-Function-based (CLF-based) reactive thread to co…
Stars: ✭ 21 (-94.93%)
Mutual labels:  robotics, navigation

Active Neural SLAM

This is a PyTorch implementation of the ICLR-20 paper:

Learning To Explore Using Active Neural SLAM
Devendra Singh Chaplot, Dhiraj Gandhi, Saurabh Gupta, Abhinav Gupta, Ruslan Salakhutdinov
Carnegie Mellon University, Facebook AI Research, UIUC

Project Website: https://devendrachaplot.github.io/projects/Neural-SLAM

[Example GIF]

Overview:

The Active Neural SLAM model consists of three modules: a Global Policy, a Local Policy, and a Neural SLAM module. As shown below, the Neural SLAM module predicts a map and an agent pose estimate from incoming RGB observations and sensor readings. The Global Policy uses this map and pose to output a long-term goal, which an analytic path planner converts into a short-term goal. The Local Policy is trained to navigate to this short-term goal.

[Overview: the three modules of Active Neural SLAM]
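
This loop can be sketched in a few lines of Python. The following is a minimal illustration of the data flow described above, with each module stubbed out; all names and shapes are hypothetical stand-ins, not the repository's actual API:

import numpy as np

# Minimal sketch of the Active Neural SLAM loop (hypothetical names, NOT the repo's API).

def neural_slam(rgb, sensor_reading, map_est, pose_est):
    # Neural SLAM module: fuse the new RGB observation and sensor reading
    # into the running map and pose estimates.
    return map_est, pose_est  # stub

def global_policy(map_est, pose_est):
    # Global Policy: pick a distant, long-term exploration goal on the map.
    return (120, 300)  # stub: a (row, col) map cell

def plan_short_term_goal(map_est, pose_est, long_term_goal):
    # Analytic path planner: reduce the long-term goal to a nearby waypoint.
    return long_term_goal  # stub

def local_policy(rgb, short_term_goal):
    # Local Policy: low-level navigation action toward the short-term goal.
    return "move_forward"  # stub

# One step of the loop, with dummy inputs:
rgb = np.zeros((128, 128, 3))
map_est, pose_est = neural_slam(rgb, (0.0, 0.0, 0.0), np.zeros((480, 480)), (0.0, 0.0, 0.0))
long_term_goal = global_policy(map_est, pose_est)
short_term_goal = plan_short_term_goal(map_est, pose_est, long_term_goal)
action = local_policy(rgb, short_term_goal)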

Installing Dependencies

We use earlier versions of habitat-sim and habitat-api. The specific commits are mentioned below.

Installing habitat-sim:

git clone https://github.com/facebookresearch/habitat-sim.git
cd habitat-sim; git checkout 9575dcd45fe6f55d2a44043833af08972a7895a9
pip install -r requirements.txt
python setup.py install --headless # (for headless machines without a display)
python setup.py install # (for Mac OS)

Installing habitat-api:

git clone https://github.com/facebookresearch/habitat-api.git
cd habitat-api; git checkout b5f2b00a25627ecb52b43b13ea96b05998d9a121
pip install -e .
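
If both installs succeeded, the two packages (habitat_sim from habitat-sim and habitat from habitat-api) should import without errors:

python -c "import habitat_sim"
python -c "import habitat"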

Install PyTorch from https://pytorch.org/ according to your system configuration. The code is tested with PyTorch v1.2.0. If you are using conda:

conda install pytorch==1.2.0 torchvision cudatoolkit=10.0 -c pytorch #(Linux with GPU)
conda install pytorch==1.2.0 torchvision==0.4.0 -c pytorch #(Mac OS)
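
To confirm the installed version and whether a GPU is visible:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"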

Setup

Clone the repository and install other requirements:

git clone --recurse-submodules https://github.com/devendrachaplot/Neural-SLAM
cd Neural-SLAM
pip install -r requirements.txt

The code expects the datasets in a data folder with the following layout (the same as habitat-api):

Neural-SLAM/
  data/
    scene_datasets/
      gibson/
        Adrian.glb
        Adrian.navmesh
        ...
    datasets/
      pointnav/
        gibson/
          v1/
            train/
            val/
            ...

Please download the data using the instructions here: https://github.com/facebookresearch/habitat-api#data
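
A quick way to sanity-check this layout from the Neural-SLAM root (paths taken from the tree above, assuming the Gibson PointNav split):

import os

# Check the dataset layout expected by the code (Gibson scenes + PointNav episodes).
required = [
    "data/scene_datasets/gibson",
    "data/datasets/pointnav/gibson/v1/train",
    "data/datasets/pointnav/gibson/v1/val",
]
for path in required:
    print("ok     " if os.path.isdir(path) else "MISSING", path)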

To verify that the dependencies are installed and the data is set up correctly, run:

python main.py -n1 --auto_gpu_config 0 --split val

Usage

Training:

For training the complete Active Neural SLAM model on the Exploration task:

python main.py
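
This trains all three modules jointly with the default settings. The flags from the verification command above also apply here; for example, assuming -n sets the number of parallel processes (as the verification command suggests), a single-process run with manual GPU configuration would be:

python main.py -n1 --auto_gpu_config 0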

Downloading pre-trained models

mkdir pretrained_models
wget -O pretrained_models/model_best.global http://www.cs.cmu.edu/~dchaplot/projects/active_neural_slam/model_best.global
wget -O pretrained_models/model_best.local http://www.cs.cmu.edu/~dchaplot/projects/active_neural_slam/model_best.local
wget -O pretrained_models/model_best.slam http://www.cs.cmu.edu/~dchaplot/projects/active_neural_slam/model_best.slam

For evaluation:

For evaluating the pre-trained models:

python main.py --split val --eval 1 --train_global 0 --train_local 0 --train_slam 0 \
--load_global pretrained_models/model_best.global \
--load_local pretrained_models/model_best.local \
--load_slam pretrained_models/model_best.slam 

For visualizing the agent observations and the predicted map and pose, add -v 1 as an argument to the above command.
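
For example:

python main.py --split val --eval 1 --train_global 0 --train_local 0 --train_slam 0 \
--load_global pretrained_models/model_best.global \
--load_local pretrained_models/model_best.local \
--load_slam pretrained_models/model_best.slam -v 1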

For more detailed instructions, see INSTRUCTIONS.

Cite as

Chaplot, D.S., Gandhi, D., Gupta, S., Gupta, A. and Salakhutdinov, R., 2020. Learning To Explore Using Active Neural SLAM. In International Conference on Learning Representations (ICLR). (PDF)

Bibtex:

@inproceedings{chaplot2020learning,
  title={Learning To Explore Using Active Neural SLAM},
  author={Chaplot, Devendra Singh and Gandhi, Dhiraj and Gupta, Saurabh and Gupta, Abhinav and Salakhutdinov, Ruslan},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2020}
}

Acknowledgements

This repository uses the Habitat API (https://github.com/facebookresearch/habitat-api) and parts of its code. The implementation of PPO is borrowed from https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail/. We thank Guillaume Lample for discussions and coding during the initial stages of this project.
