
devendrachaplot / Object-Goal-Navigation

License: MIT
PyTorch code for the NeurIPS-20 paper "Object Goal Navigation using Goal-Oriented Semantic Exploration"

Programming Languages

  • Python
  • Dockerfile

Projects that are alternatives to or similar to Object-Goal-Navigation

Neural Slam
Pytorch code for ICLR-20 Paper "Learning to Explore using Active Neural SLAM"
Stars: ✭ 414 (+286.92%)
Mutual labels:  robotics, navigation, deep-reinforcement-learning
Fourth robot pkg
Repository for robot unit 4 (KIT-C4)
Stars: ✭ 7 (-93.46%)
Mutual labels:  robotics, navigation
Probabilistic robotics
Solutions to the exercises in the book "Probabilistic Robotics"
Stars: ✭ 734 (+585.98%)
Mutual labels:  robotics, navigation
Deepseqslam
The Official Deep Learning Framework for Route-based Place Recognition
Stars: ✭ 49 (-54.21%)
Mutual labels:  robotics, navigation
robo-vln
Pytorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
Stars: ✭ 34 (-68.22%)
Mutual labels:  robotics, navigation
Habitat Lab
A modular high-level library to train embodied AI agents across a variety of tasks, environments, and simulators.
Stars: ✭ 587 (+448.6%)
Mutual labels:  robotics, deep-reinforcement-learning
Pepper Robot Programming
Pepper Programs: Object Detection in Real Time without ROS
Stars: ✭ 29 (-72.9%)
Mutual labels:  robotics, navigation
AI
Using deep reinforcement learning to solve visual tracking and visual navigation problems
Stars: ✭ 16 (-85.05%)
Mutual labels:  navigation, deep-reinforcement-learning
motion-planner-reinforcement-learning
End to end motion planner using Deep Deterministic Policy Gradient (DDPG) in gazebo
Stars: ✭ 99 (-7.48%)
Mutual labels:  navigation, deep-reinforcement-learning
Navigation
ROS Navigation stack. Code for finding where the robot is and how it can get somewhere else.
Stars: ✭ 1,248 (+1066.36%)
Mutual labels:  robotics, navigation
Navigation2
ROS2 Navigation Framework and System
Stars: ✭ 528 (+393.46%)
Mutual labels:  robotics, navigation
micvision
The micvision package provides exploration and localization for robots using the navigation and cartographer packages
Stars: ✭ 21 (-80.37%)
Mutual labels:  navigation, exploration
Gps
Guided Policy Search
Stars: ✭ 529 (+394.39%)
Mutual labels:  robotics, deep-reinforcement-learning
Gibsonenv
Gibson Environments: Real-World Perception for Embodied Agents
Stars: ✭ 666 (+522.43%)
Mutual labels:  robotics, deep-reinforcement-learning
Visual Pushing Grasping
Train robotic agents to learn to plan pushing and grasping actions for manipulation with deep reinforcement learning.
Stars: ✭ 516 (+382.24%)
Mutual labels:  robotics, deep-reinforcement-learning
A2l
[ICLR 2020] Learning to Move with Affordance Maps 🗺️🤖💨
Stars: ✭ 23 (-78.5%)
Mutual labels:  robotics, navigation
Reward Learning Rl
[RSS 2019] End-to-End Robotic Reinforcement Learning without Reward Engineering
Stars: ✭ 310 (+189.72%)
Mutual labels:  robotics, deep-reinforcement-learning
Pytorch Rl
This repository contains model-free deep reinforcement learning algorithms implemented in Pytorch
Stars: ✭ 394 (+268.22%)
Mutual labels:  robotics, deep-reinforcement-learning
Rsband local planner
A ROS move_base local planner plugin for Car-Like robots with Ackermann or 4-Wheel-Steering.
Stars: ✭ 78 (-27.1%)
Mutual labels:  robotics, navigation
DDPG
End to End Mobile Robot Navigation using DDPG (Continuous Control with Deep Reinforcement Learning) based on Tensorflow + Gazebo
Stars: ✭ 41 (-61.68%)
Mutual labels:  navigation, deep-reinforcement-learning

Object Goal Navigation using Goal-Oriented Semantic Exploration

This is a PyTorch implementation of the NeurIPS-20 paper:

Object Goal Navigation using Goal-Oriented Semantic Exploration
Devendra Singh Chaplot, Dhiraj Gandhi, Abhinav Gupta, Ruslan Salakhutdinov
Carnegie Mellon University, Facebook AI Research

Winner of the CVPR 2020 Habitat ObjectNav Challenge.

Project Website: https://devendrachaplot.github.io/projects/semantic-exploration

[Example: SemExp agent performing Object Goal Navigation]

Overview:

The Goal-Oriented Semantic Exploration (SemExp) model consists of three modules: a Semantic Mapping Module, a Goal-Oriented Semantic Policy, and a deterministic Local Policy. As shown below, the Semantic Mapping Module builds a semantic map over time. The Goal-Oriented Semantic Policy selects a long-term goal based on the semantic map so as to reach the given object goal efficiently. A deterministic Local Policy based on analytical planners takes the low-level navigation actions needed to reach the long-term goal; a minimal sketch of this decomposition follows the figure below.

[Overview figure: Semantic Mapping Module, Goal-Oriented Semantic Policy, and Local Policy]
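
The following is a minimal, illustrative sketch of how the three modules fit together. Class names, map dimensions, and the placeholder logic are assumptions for exposition only, not the repository's actual API.

# Illustrative sketch of the SemExp decomposition (placeholder names, not the repo's API).
import numpy as np


class SemanticMappingModule:
    """Aggregates egocentric observations into a top-down semantic map."""

    def __init__(self, map_size=480, num_categories=16):
        # One channel per object category plus obstacle and explored channels.
        self.map = np.zeros((num_categories + 2, map_size, map_size), dtype=np.float32)

    def update(self, rgb, depth, pose):
        # The real module projects first-person semantic predictions into the
        # allocentric map; this placeholder just returns the current map.
        return self.map


class GoalOrientedSemanticPolicy:
    """Selects a long-term goal (x, y) on the map given the target category."""

    def select_goal(self, semantic_map, goal_category):
        # If the goal category has already been observed, head there;
        # otherwise pick an exploration target (placeholder: the map centre).
        category_channel = semantic_map[2 + goal_category]
        if category_channel.any():
            ys, xs = np.nonzero(category_channel)
            return int(xs[0]), int(ys[0])
        return semantic_map.shape[-1] // 2, semantic_map.shape[-2] // 2


class LocalPolicy:
    """Deterministic planner that outputs a low-level action toward the goal."""

    def act(self, semantic_map, long_term_goal):
        # The real module plans a path with an analytical planner; the
        # placeholder simply moves forward.
        return "move_forward"


def step(mapper, policy, planner, obs, goal_category):
    semantic_map = mapper.update(obs["rgb"], obs["depth"], obs["pose"])
    long_term_goal = policy.select_goal(semantic_map, goal_category)
    return planner.act(semantic_map, long_term_goal)

In the paper, the long-term goal is updated at a coarser time scale than the low-level actions, so the semantic policy decides where to look while the local policy handles how to move.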

This repository contains:

  • Episode train and test datasets for the Object Goal Navigation task on the Gibson dataset in the Habitat simulator.
  • The code to train and evaluate the Semantic Exploration (SemExp) model on the Object Goal Navigation task.
  • Pretrained SemExp model.

Installing Dependencies

Installing habitat-sim:

git clone https://github.com/facebookresearch/habitat-sim.git
cd habitat-sim; git checkout tags/v0.1.5; 
pip install -r requirements.txt; 
python setup.py install --headless
python setup.py install # (for Mac OS)

Installing habitat-lab:

git clone https://github.com/facebookresearch/habitat-lab.git
cd habitat-lab; git checkout tags/v0.1.5; 
pip install -e .

Check habitat installation by running python examples/benchmark.py in the habitat-lab folder.

  • Install PyTorch according to your system configuration. The code is tested with PyTorch v1.6.0 and cudatoolkit v10.2 (a quick version check for both installs follows this list). If you are using conda:
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 #(Linux with GPU)
conda install pytorch==1.6.0 torchvision==0.7.0 -c pytorch #(Mac OS)
  • Install detectron2 according to your system configuration. If you are using conda:
python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.6/index.html #(Linux with GPU)
CC=clang CXX=clang++ ARCHFLAGS="-arch x86_64" python -m pip install 'git+https://github.com/facebookresearch/detectron2.git' #(Mac OS)
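
After installing PyTorch and detectron2, an optional sanity check such as the following (not part of the repository) can confirm that the versions match the tested configuration:

# Optional sanity check for the tested configuration (PyTorch 1.6.0, CUDA 10.2).
import torch
import detectron2

print("torch:", torch.__version__)          # expected: 1.6.0
print("cuda available:", torch.cuda.is_available())
print("cuda version:", torch.version.cuda)  # expected: 10.2 on Linux with GPU
print("detectron2:", detectron2.__version__)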

Docker and Singularity images:

We provide experimental Docker and Singularity images with all the dependencies installed; see the Docker Instructions.

Setup

Clone the repository and install other requirements:

git clone https://github.com/devendrachaplot/Object-Goal-Navigation/
cd Object-Goal-Navigation/;
pip install -r requirements.txt

Downloading scene dataset

The code expects the Gibson scene dataset with semantic annotations under data/scene_datasets/gibson_semantic/ (see the layout below); download the Gibson scenes and place them, or a symlink to them, at that path.

Downloading episode dataset

  • Download the episode dataset:
wget --no-check-certificate 'https://drive.google.com/uc?export=download&id=1tslnZAkH8m3V5nP8pbtBmaR2XEfr8Rau' -O objectnav_gibson_v1.1.zip
  • Unzip the dataset into data/datasets/objectnav/gibson/v1.1/ (see the example below).
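
For example, assuming the zip was downloaded to the repository root (if the archive already contains the v1.1/ folder structure, adjust the -d path accordingly):

mkdir -p data/datasets/objectnav/gibson/v1.1/
unzip objectnav_gibson_v1.1.zip -d data/datasets/objectnav/gibson/v1.1/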

Setting up datasets

The code requires the datasets in a data folder in the following format (same as habitat-lab):

Object-Goal-Navigation/
  data/
    scene_datasets/
      gibson_semantic/
        Adrian.glb
        Adrian.navmesh
        ...
    datasets/
      objectnav/
        gibson/
          v1.1/
            train/
            val/

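Optionally, a quick listing can confirm the layout before running anything:

ls data/scene_datasets/gibson_semantic/ | head
ls data/datasets/objectnav/gibson/v1.1/
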
Test setup

To verify that the data is setup correctly, run:

python test.py --agent random -n1 --num_eval_episodes 1 --auto_gpu_config 0

Usage

Training:

For training the SemExp model on the Object Goal Navigation task:

python main.py
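
The -n and --auto_gpu_config arguments shown for test.py above appear to be shared by main.py; assuming that holds (an assumption, not stated in this README), the number of parallel environment threads can be reduced on memory-constrained GPUs, for example:

python main.py -n 5 --auto_gpu_config 0   # assumed flags; lower -n if GPU memory is limited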

Downloading pre-trained models

mkdir pretrained_models;
wget --no-check-certificate 'https://drive.google.com/uc?export=download&id=171ZA7XNu5vi3XLpuKs8DuGGZrYyuSjL0' -O pretrained_models/sem_exp.pth

Evaluation:

For evaluating the pre-trained model:

python main.py --split val --eval 1 --load pretrained_models/sem_exp.pth

For visualizing the agent observations and predicted semantic map, add -v 1 as an argument to the above command.
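
Combining the two options gives, for example:

python main.py --split val --eval 1 --load pretrained_models/sem_exp.pth -v 1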

The pre-trained model should achieve 0.657 Success (success rate), 0.339 SPL (Success weighted by Path Length), and 1.474 DTG (Distance To Goal).
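
For reference, SPL here follows the standard embodied-navigation definition (Anderson et al., 2018) rather than anything specific to this repository; a minimal sketch:

# Success weighted by Path Length (SPL): each successful episode is credited
# with the ratio of the shortest-path length to the length of the path the
# agent actually took, and the result is averaged over episodes.
def spl(successes, shortest_path_lengths, agent_path_lengths):
    terms = [
        s * (l / max(p, l))
        for s, l, p in zip(successes, shortest_path_lengths, agent_path_lengths)
    ]
    return sum(terms) / len(terms)

# Example: two episodes, one success with a near-optimal path, one failure.
print(spl([1, 0], [5.0, 4.0], [6.0, 10.0]))  # 0.5 * (5/6) ≈ 0.417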

For more detailed instructions, see INSTRUCTIONS.

Cite as

Chaplot, D.S., Gandhi, D., Gupta, A. and Salakhutdinov, R., 2020. Object Goal Navigation using Goal-Oriented Semantic Exploration. In Neural Information Processing Systems (NeurIPS-20). (PDF)

Bibtex:

@inproceedings{chaplot2020object,
  title={Object Goal Navigation using Goal-Oriented Semantic Exploration},
  author={Chaplot, Devendra Singh and Gandhi, Dhiraj and
            Gupta, Abhinav and Salakhutdinov, Ruslan},
  booktitle={Neural Information Processing Systems (NeurIPS)},
  year={2020}
}

Related Projects

Acknowledgements

This repository uses the Habitat Lab implementation for running the RL environment. The implementation of PPO is borrowed from ikostrikov/pytorch-a2c-ppo-acktr-gail. The Mask R-CNN implementation is based on the detectron2 repository. We would also like to thank Shubham Tulsiani and Saurabh Gupta for their help in implementing some parts of the code.
