
yunchih / Orb Slam2 Gpu2016 Final

Licence: other


Projects that are alternatives to or similar to Orb Slam2 Gpu2016 Final

Deepway
This project is an aid to the blind. To date there has been no technological advancement in the way the blind navigate, so this project uses deep learning, particularly convolutional neural networks, to help them navigate the streets.
Stars: ✭ 118 (-56.62%)
Mutual labels:  gpu, opencv
Caer
High-performance Vision library in Python. Scale your research, not boilerplate.
Stars: ✭ 452 (+66.18%)
Mutual labels:  gpu, opencv
gpu-monitor
Script to remotely check GPU servers for free GPUs
Stars: ✭ 85 (-68.75%)
Mutual labels:  gpu
Object Detection Opencv
YOLO Object detection with OpenCV and Python.
Stars: ✭ 267 (-1.84%)
Mutual labels:  opencv
Opencv Python Blueprints
M. Beyeler (2015). OpenCV with Python Blueprints: Design and develop advanced computer vision projects using OpenCV with Python, Packt Publishing Ltd., ISBN 978-178528269-0.
Stars: ✭ 262 (-3.68%)
Mutual labels:  opencv
Ai Job Notes
A job-hunting guide for AI algorithm roles (covering preparation strategies, coding-problem guides, referrals, and a list of AI companies).
Stars: ✭ 3,191 (+1073.16%)
Mutual labels:  opencv
Opencv Android
Easy way to integrate OpenCv into your Android project via Gradle
Stars: ✭ 264 (-2.94%)
Mutual labels:  opencv
Hetu
A high-performance distributed deep learning system targeting large-scale and automated distributed training.
Stars: ✭ 78 (-71.32%)
Mutual labels:  gpu
Evision
Practice and explorations in computer vision.
Stars: ✭ 268 (-1.47%)
Mutual labels:  opencv
Opencv Python Series
An OpenCV & Python Tutorial Series and Reference.
Stars: ✭ 262 (-3.68%)
Mutual labels:  opencv
Vuh
Vulkan compute for people
Stars: ✭ 264 (-2.94%)
Mutual labels:  gpu
Popsift
PopSift is an implementation of the SIFT algorithm in CUDA.
Stars: ✭ 259 (-4.78%)
Mutual labels:  gpu
Realtime object detection
Plug and Play Real-Time Object Detection App with Tensorflow and OpenCV. No Bugs No Worries. Enjoy!
Stars: ✭ 260 (-4.41%)
Mutual labels:  opencv
Facial Expression Recognition Using Cnn
Deep facial expressions recognition using Opencv and Tensorflow. Recognizing facial expressions from images or camera stream
Stars: ✭ 261 (-4.04%)
Mutual labels:  opencv
Pedestrian detection
Pedestrian detection trained with HOG+SVM on the INRIAPerson dataset, implemented with OpenCV.
Stars: ✭ 253 (-6.99%)
Mutual labels:  opencv
Dynamicfusion
Implementation of Newcombe et al. CVPR 2015 DynamicFusion paper
Stars: ✭ 267 (-1.84%)
Mutual labels:  opencv
LuisaRender
High-Performance Multiple-Backend Renderer Based on LuisaCompute
Stars: ✭ 47 (-82.72%)
Mutual labels:  gpu
Sorting visualization
The Sound of Sorting: Visualize and Audibilize 12 classic sorting algorithms in real time
Stars: ✭ 260 (-4.41%)
Mutual labels:  opencv
Ergo
🧠 A tool that makes AI easier.
Stars: ✭ 264 (-2.94%)
Mutual labels:  gpu
Gprmax
gprMax is open source software that simulates electromagnetic wave propagation using the Finite-Difference Time-Domain (FDTD) method for numerical modelling of Ground Penetrating Radar (GPR).
Stars: ✭ 268 (-1.47%)
Mutual labels:  gpu

ORB-SLAM2-GPU

This is a fork of Raul Mur-Artal's ORB-SLAM2, in which we rewrite the hot paths with CUDA. Our optimizations enable the algorithm to run in real time on an NVIDIA Jetson TX1.

What follows is from the original README of ORB-SLAM2.

Introduction

ORB-SLAM2 is a real-time SLAM library for Monocular, Stereo and RGB-D cameras that computes the camera trajectory and a sparse 3D reconstruction (in the stereo and RGB-D case with true scale). It is able to detect loops and relocalize the camera in real time. We provide examples to run the SLAM system on the KITTI dataset as stereo or monocular, and on the TUM dataset as RGB-D or monocular. We also provide a ROS node to process live monocular or RGB-D streams. The library can be compiled without ROS. ORB-SLAM2 provides a GUI to switch between SLAM Mode and Localization Mode; see section 9 of this document.

##### Videos showing ORB-SLAM2: Tsukuba Dataset, KITTI Dataset, TUM RGBD Dataset, EuRoC Dataset (V1_02, V1_03)

Notice for ORB-SLAM Monocular users: The monocular capabilities of ORB-SLAM2 are similar to those of ORB-SLAM Monocular. However, in ORB-SLAM2 we apply a full bundle adjustment after a loop closure, the ORB extraction is slightly different (trying to improve the dispersion on the image), and the tracking is also slightly faster. The GUI of ORB-SLAM2 also provides new capabilities, such as the modes mentioned above and a reset button. We recommend you try this new software :)

### Related Publications:

[1] Raúl Mur-Artal, J. M. M. Montiel and Juan D. Tardós. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015. (2015 IEEE Transactions on Robotics Best Paper Award). PDF.

[2] Dorian Gálvez-López and Juan D. Tardós. Bags of Binary Words for Fast Place Recognition in Image Sequences. IEEE Transactions on Robotics, vol. 28, no. 5, pp. 1188-1197, 2012. PDF

# 1. License

ORB-SLAM2 is released under a GPLv3 license. For a list of all code/library dependencies (and associated licenses), please see Dependencies.md.

For a closed-source version of ORB-SLAM2 for commercial purposes, please contact the authors: orbslam (at) unizar (dot) es.

If you use ORB-SLAM2 in an academic work, please cite:

@article{murTRO2015,
  title={{ORB-SLAM}: a Versatile and Accurate Monocular {SLAM} System},
  author={Mur-Artal, Ra\'ul and Montiel, J. M. M. and Tard\'os, Juan D.},
  journal={IEEE Transactions on Robotics},
  volume={31},
  number={5},
  pages={1147--1163},
  doi = {10.1109/TRO.2015.2463671},
  year={2015}
 }

# 2. Prerequisites

We have tested the library in Ubuntu 12.04 and 14.04, but it should be easy to compile on other platforms. A powerful computer (e.g. an i7) will ensure real-time performance and provide more stable and accurate results.

C++11 or C++0x Compiler

We use the new thread and chrono functionalities of C++11.
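
For illustration only, a minimal sketch (not taken from the ORB-SLAM2 sources) of the two C++11 features in question: std::thread for concurrent work, as the library uses for its parallel subsystems, and std::chrono for timing, as the example programs use to measure per-frame cost:

```cpp
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    // A worker thread, analogous to running Local Mapping alongside Tracking.
    std::thread worker([] {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    });

    // Time a section with std::chrono, the way the examples time each frame.
    auto t1 = std::chrono::steady_clock::now();
    // ... per-frame work would go here ...
    auto t2 = std::chrono::steady_clock::now();
    double ms =
        std::chrono::duration_cast<std::chrono::duration<double, std::milli>>(t2 - t1).count();
    std::cout << "frame took " << ms << " ms\n";

    worker.join();
    return 0;
}
```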

Pangolin

We use Pangolin for visualization and the user interface. Download and install instructions can be found at: https://github.com/stevenlovegrove/Pangolin.

OpenCV

We use OpenCV to manipulate images and features. Download and install instructions can be found at: http://opencv.org. At least version 2.4.3 is required. Tested with OpenCV 2.4.11.

Eigen3

Required by g2o (see below). Download and install instructions can be found at: http://eigen.tuxfamily.org. At least version 3.1.0 is required.

BLAS and LAPACK

BLAS and LAPACK libraries are required by g2o (see below). On Ubuntu:

sudo apt-get install libblas-dev
sudo apt-get install liblapack-dev

DBoW2 and g2o (Included in Thirdparty folder)

We use modified versions of the DBoW2 library (for place recognition) and the g2o library (for non-linear optimization). Both modified libraries (BSD-licensed) are included in the Thirdparty folder.

ROS (optional)

We provide some examples to process the live input of a monocular, stereo or RGB-D camera using ROS. Building these examples is optional. In case you want to use ROS, version Hydro or newer is needed.

# 3. Building ORB-SLAM2 library and TUM/KITTI examples

Clone the repository:

git clone https://github.com/raulmur/ORB_SLAM2.git ORB_SLAM2

We provide a script build.sh to build the Thirdparty libraries and ORB-SLAM2. Please make sure you have installed all required dependencies (see section 2). Execute:

cd ORB_SLAM2
chmod +x build.sh
./build.sh

This will create libORB_SLAM2.so in the lib folder and the executables mono_tum, mono_kitti, rgbd_tum and stereo_kitti in the Examples folder.

# 4. Monocular Examples

TUM Dataset

  1. Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.

  2. Execute the following command. Change TUMX.yaml to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change PATH_TO_SEQUENCE_FOLDER to the uncompressed sequence folder.

./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUMX.yaml PATH_TO_SEQUENCE_FOLDER

KITTI Dataset

  1. Download the dataset (grayscale images) from http://www.cvlibs.net/datasets/kitti/eval_odometry.php

  2. Execute the following command. Change KITTIX.yaml to KITTI00-02.yaml, KITTI03.yaml or KITTI04-12.yaml for sequences 0 to 2, 3, and 4 to 12 respectively. Change PATH_TO_DATASET_FOLDER to the uncompressed dataset folder. Change SEQUENCE_NUMBER to 00, 01, 02, ..., 11.

./Examples/Monocular/mono_kitti Vocabulary/ORBvoc.txt Examples/Monocular/KITTIX.yaml PATH_TO_DATASET_FOLDER/dataset/sequences/SEQUENCE_NUMBER

# 5. Stereo Example

KITTI Dataset

  1. Download the dataset (grayscale images) from http://www.cvlibs.net/datasets/kitti/eval_odometry.php

  2. Execute the following command. Change KITTIX.yaml to KITTI00-02.yaml, KITTI03.yaml or KITTI04-12.yaml for sequences 0 to 2, 3, and 4 to 12 respectively. Change PATH_TO_DATASET_FOLDER to the uncompressed dataset folder. Change SEQUENCE_NUMBER to 00, 01, 02, ..., 11.

./Examples/Stereo/stereo_kitti Vocabulary/ORBvoc.txt Examples/Stereo/KITTIX.yaml PATH_TO_DATASET_FOLDER/dataset/sequences/SEQUENCE_NUMBER

# 6. RGB-D Example

TUM Dataset

  1. Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.

  2. Associate RGB images and depth images using the python script associate.py. We already provide associations for some of the sequences in Examples/RGB-D/associations/. You can generate your own associations file by executing:

python associate.py PATH_TO_SEQUENCE/rgb.txt PATH_TO_SEQUENCE/depth.txt > associations.txt

  3. Execute the following command. Change TUMX.yaml to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change PATH_TO_SEQUENCE_FOLDER to the uncompressed sequence folder. Change ASSOCIATIONS_FILE to the path of the corresponding associations file.

./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUMX.yaml PATH_TO_SEQUENCE_FOLDER ASSOCIATIONS_FILE

# 7. ROS Examples

Building the nodes for mono, stereo and RGB-D

  1. Add the path including Examples/ROS/ORB_SLAM2 to the ROS_PACKAGE_PATH environment variable. Open your .bashrc file and add the following line at the end, replacing PATH with the folder where you cloned ORB_SLAM2:
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:PATH/ORB_SLAM2/Examples/ROS

  2. Go to the Examples/ROS/ORB_SLAM2 folder and execute:
mkdir build
cd build
cmake .. -DROS_BUILD_TYPE=Release
make -j

Running Monocular Node

For monocular input from topic /camera/image_raw, run node ORB_SLAM2/Mono. You will need to provide the vocabulary file and a settings file. See the monocular examples above.

rosrun ORB_SLAM2 Mono PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE

Running Stereo Node

For stereo input from topics /camera/left/image_raw and /camera/right/image_raw, run node ORB_SLAM2/Stereo. You will need to provide the vocabulary file and a settings file. If you provide rectification matrices (see the Examples/Stereo/EuRoC.yaml example), the node will rectify the images online; otherwise images must be pre-rectified.

rosrun ORB_SLAM2 Stereo PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE ONLINE_RECTIFICATION

Example: Download a rosbag (e.g. V1_01_easy.bag) from the EuRoC dataset (http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets). Open 3 tabs in the terminal and run one of the following commands in each tab:

roscore
rosrun ORB_SLAM2 Stereo Vocabulary/ORBvoc.txt Examples/Stereo/EuRoC.yaml true
rosbag play --pause V1_01_easy.bag /cam0/image_raw:=/camera/left/image_raw /cam1/image_raw:=/camera/right/image_raw

Once ORB-SLAM2 has loaded the vocabulary, press space in the rosbag tab. Enjoy! Note: a powerful computer is required to run the most demanding sequences of this dataset.

Running RGB-D Node

For an RGB-D input from topics /camera/rgb/image_raw and /camera/depth_registered/image_raw, run node ORB_SLAM2/RGBD. You will need to provide the vocabulary file and a settings file. See the RGB-D example above.

rosrun ORB_SLAM2 RGBD PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE

# 8. Processing your own sequences

You will need to create a settings file with the calibration of your camera. See the settings files provided for the TUM and KITTI datasets for monocular, stereo and RGB-D cameras. We use the calibration model of OpenCV. See the examples to learn how to create a program that makes use of the ORB-SLAM2 library and how to pass images to the SLAM system (a minimal sketch follows below). Stereo input must be synchronized and rectified. RGB-D input must be synchronized and depth registered.
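
As a rough sketch of the pattern the bundled examples follow (paths and the settings file name are placeholders; the constructor and TrackMonocular calls follow the public API in include/System.h, but double-check them against your checkout), a monocular client constructs an ORB_SLAM2::System and feeds it one cv::Mat per frame:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

#include "System.h"  // ORB-SLAM2's public interface

int main(int argc, char **argv) {
    if (argc < 2)
        return 1;

    // Placeholder paths: the bundled ORB vocabulary and your own settings
    // YAML holding the OpenCV calibration of your camera (see the TUM/KITTI
    // settings files for the expected keys).
    ORB_SLAM2::System SLAM("Vocabulary/ORBvoc.txt", "my_camera.yaml",
                           ORB_SLAM2::System::MONOCULAR, true /* use viewer */);

    // In a real client this runs in a loop over your camera's frames.
    cv::Mat im = cv::imread(argv[1], CV_LOAD_IMAGE_UNCHANGED);
    double tframe = 0.0;  // timestamp in seconds; use your sensor's real clock
    SLAM.TrackMonocular(im, tframe);

    // Stop all SLAM threads before exiting.
    SLAM.Shutdown();
    return 0;
}
```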

# 9. SLAM and Localization Modes

You can switch between SLAM and Localization mode using the GUI of the map viewer.

SLAM Mode

This is the default mode. The system runs three threads in parallel: Tracking, Local Mapping and Loop Closing. The system localizes the camera, builds a new map and tries to close loops.

Localization Mode

This mode can be used when you have a good map of your working area. In this mode, Local Mapping and Loop Closing are deactivated. The system localizes the camera in the map (which is no longer updated), using relocalization if needed.
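
Beyond the GUI, the same switch is exposed programmatically by the System class (ActivateLocalizationMode / DeactivateLocalizationMode in include/System.h). A minimal sketch, assuming an already-constructed System instance like the one in the section 8 example:

```cpp
#include "System.h"  // ORB-SLAM2's public interface

// Toggle tracking-only operation on an existing SLAM instance.
void SetLocalizationOnly(ORB_SLAM2::System &SLAM, bool enable) {
    if (enable)
        SLAM.ActivateLocalizationMode();    // pause Local Mapping; map is frozen
    else
        SLAM.DeactivateLocalizationMode();  // resume full SLAM (mapping and loop closing)
}
```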
