ducha-aiki / navigation-benchmark

Licence: other
Code for "Benchmarking Classic and Learned Navigation in Complex 3D Environments" paper

Programming Languages

  • python
  • Jupyter Notebook
  • shell

Projects that are alternatives of or similar to navigation-benchmark

Mola
A Modular Optimization framework for Localization and mApping (MOLA)
Stars: ✭ 206 (+226.98%)
Mutual labels:  slam
Deep Learning Interview Book
Deep Learning Interview Book (covering math, machine learning, deep learning, computer vision, natural language processing, SLAM, and other areas)
Stars: ✭ 3,677 (+5736.51%)
Mutual labels:  slam
SLAM-application
LeGO-LOAM, LIO-SAM, LVI-SAM, FAST-LIO2, Faster-LIO, VoxelMap, R3LIVE application and comparison on Gazebo and real-world datasets. Installation and config files are provided.
Stars: ✭ 258 (+309.52%)
Mutual labels:  slam
Iris lama
LaMa - A Localization and Mapping library
Stars: ✭ 217 (+244.44%)
Mutual labels:  slam
Minisam
A general and flexible factor graph non-linear least square optimization framework
Stars: ✭ 246 (+290.48%)
Mutual labels:  slam
li slam ros2
ROS2 package of tightly-coupled lidar inertial ndt/gicp slam
Stars: ✭ 160 (+153.97%)
Mutual labels:  slam
Hypharos minicar
1/20 MiniCar: An ackermann based rover for MPC and Pure-Pursuit controller
Stars: ✭ 194 (+207.94%)
Mutual labels:  slam
r3live
A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package
Stars: ✭ 1,355 (+2050.79%)
Mutual labels:  slam
Iros2020 Paper List
IROS2020 paperlist by paopaorobot
Stars: ✭ 247 (+292.06%)
Mutual labels:  slam
tx2 fcnn node
ROS node for real-time FCNN depth reconstruction
Stars: ✭ 102 (+61.9%)
Mutual labels:  slam
Iros2018 Slam Papers
IROS2018 SLAM papers (ref from PaoPaoRobot)
Stars: ✭ 224 (+255.56%)
Mutual labels:  slam
Iscloam
Intensity Scan Context based full SLAM implementation for autonomous driving. ICRA 2020
Stars: ✭ 232 (+268.25%)
Mutual labels:  slam
maks
Motion Averaging
Stars: ✭ 52 (-17.46%)
Mutual labels:  slam
Iros2019 Paper List
IROS2019 paper list from PaopaoRobot
Stars: ✭ 214 (+239.68%)
Mutual labels:  slam
awesome-mobile-robotics
Useful links of different content related to AI, Computer Vision, and Robotics.
Stars: ✭ 243 (+285.71%)
Mutual labels:  slam
Pvio
Robust and Efficient Visual-Inertial Odometry with Multi-plane Priors
Stars: ✭ 198 (+214.29%)
Mutual labels:  slam
so dso place recognition
A Fast and Robust Place Recognition Approach for Stereo Visual Odometry using LiDAR Descriptors
Stars: ✭ 52 (-17.46%)
Mutual labels:  slam
DSP-SLAM
[3DV 2021] DSP-SLAM: Object Oriented SLAM with Deep Shape Priors
Stars: ✭ 377 (+498.41%)
Mutual labels:  slam
Robotics-Resources
List of commonly used robotics libraries and packages
Stars: ✭ 71 (+12.7%)
Mutual labels:  slam
Awesome-Self-Driving
an awesome list of self-driving algorithms, software, tools
Stars: ✭ 74 (+17.46%)
Mutual labels:  slam

Code for the paper "Benchmarking Classic and Learned Navigation in Complex 3D Environments"

Project website: https://sites.google.com/view/classic-vs-learned-navigation

Video: https://www.youtube.com/watch?v=b1S5ZbOAchc

Paper: https://arxiv.org/abs/1901.10915

If you use this code or the provided environments in your research, please cite the following:

@ARTICLE{Navigation2019,
       author = {{Mishkin}, Dmytro and {Dosovitskiy}, Alexey and {Koltun}, Vladlen},
        title = "{Benchmarking Classic and Learned Navigation in Complex 3D Environments}",
         year = 2019,
        month = Jan,
archivePrefix = {arXiv},
       eprint = {1901.10915},
}

Dependencies:

  • minos
  • numpy
  • pytorch
  • ORBSLAM2

Tested with:

  • Ubuntu 16.04
  • python 3.6
  • pytorch 0.4, 1.0

Benchmark

If you need to reproduce only a subset of the results, comment or uncomment the relevant agents and/or environments. The agents contain random components: RANSAC in ORBSLAM2 and 10% random actions in all agents. Nevertheless, results should be identical across runs on the same machine; from machine to machine, results may differ slightly.
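The reproducibility point above can be sketched in a few lines: if the random components are driven by a seeded generator, repeated runs on the same machine produce the same action sequence. The helper and action names below are illustrative, not the benchmark's actual API.

```python
import random

def mix_in_random_actions(planned_action, actions, rng, epsilon=0.1):
    """With probability epsilon, replace the planned action by a random one.

    This mirrors the "10% random actions" described above; the function
    and argument names are hypothetical, for illustration only.
    """
    if rng.random() < epsilon:
        return rng.choice(actions)
    return planned_action

# Fixing the RNG seed makes the action sequence reproducible on one machine:
rng = random.Random(42)
actions = ["forward", "turn_left", "turn_right"]
trace = [mix_in_random_actions("forward", actions, rng) for _ in range(10)]
```

Re-seeding with the same value yields the same trace, which is why identical results are expected on the same PC despite the randomness.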

You may also want to turn on recording of videos (RGB, depth, ground-truth map, estimated map, beliefs) by setting VIDEO=True in benchmark_all_handcrafted_agents.py.

A simple example of working with the agents is provided in the example.
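In outline, an agent in a navigation benchmark consumes observations and emits discrete actions until the episode ends. The sketch below shows that loop with hypothetical class and function names; it is not the repository's API, where a real agent (e.g. the ORBSLAM2-based one) would use the observation to localize, map, and plan.

```python
import random

class RandomAgent:
    """Toy agent: samples a random discrete action each step (illustrative only)."""

    def __init__(self, actions, seed=0):
        self.actions = actions
        self.rng = random.Random(seed)

    def act(self, observation):
        # A real agent would process the observation here; we just sample.
        return self.rng.choice(self.actions)

def run_episode(agent, step_fn, max_steps=100):
    """Roll out one episode.

    `step_fn(action)` is assumed to return `(observation, done)`, standing in
    for the simulator step; it is a placeholder, not the MINOS interface.
    """
    obs, done, steps = None, False, 0
    while not done and steps < max_steps:
        obs, done = step_fn(agent.act(obs))
        steps += 1
    return steps
```

Swapping `RandomAgent` for a mapping-and-planning agent leaves the loop unchanged, which is what makes side-by-side benchmarking of classic and learned agents straightforward.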

Training

Training code and pre-trained weights for the learned agents are coming soon.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].