
zju3dv / Eval Vislam

Licence: apache-2.0
Toolkit for VI-SLAM evaluation.

Projects that are alternatives of or similar to Eval Vislam

vslam research
This repo is for visual SLAM research
Stars: ✭ 22 (-75.28%)
Mutual labels:  augmented-reality, slam
Objectron
Objectron is a dataset of short, object-centric video clips. In addition, the videos also contain AR session metadata including camera poses, sparse point-clouds and planes. In each video, the camera moves around and above the object and captures it from different views. Each object is annotated with a 3D bounding box. The 3D bounding box describes the object’s position, orientation, and dimensions. The dataset contains about 15K annotated video clips and 4M annotated images in the following categories: bikes, books, bottles, cameras, cereal boxes, chairs, cups, laptops, and shoes
Stars: ✭ 1,352 (+1419.1%)
Mutual labels:  dataset, augmented-reality
PlacenoteSDK-Unity
Placenote SDK and sample app for Unity
Stars: ✭ 78 (-12.36%)
Mutual labels:  augmented-reality, slam
PlaceInvaders
Multiplayer AR game sample
Stars: ✭ 24 (-73.03%)
Mutual labels:  augmented-reality, slam
StrayVisualizer
Visualize Data From Stray Scanner https://keke.dev/blog/2021/03/10/Stray-Scanner.html
Stars: ✭ 30 (-66.29%)
Mutual labels:  dataset, slam
Sfm Visual Slam
Stars: ✭ 551 (+519.1%)
Mutual labels:  slam, augmented-reality
Comma2k19
A driving dataset for the development and validation of fused pose estimators and mapping algorithms
Stars: ✭ 391 (+339.33%)
Mutual labels:  slam, dataset
Urbannavdataset
UrbanNav: an Open-Sourcing Localization Data Collected in Asian Urban Canyons, Including Tokyo and Hong Kong
Stars: ✭ 79 (-11.24%)
Mutual labels:  slam, dataset
Google Covid19 Mobility Reports
Data extraction of Google's COVID-19 Mobility Reports
Stars: ✭ 82 (-7.87%)
Mutual labels:  dataset
Fisheye Orb Slam
A real-time robust monocular visual SLAM system based on ORB-SLAM for fisheye cameras, without rectifying or cropping the input images
Stars: ✭ 84 (-5.62%)
Mutual labels:  slam
Vidvrd Helper
To stay up to date with the VRU Grand Challenge, please use https://github.com/NExTplusplus/VidVRD-helper
Stars: ✭ 81 (-8.99%)
Mutual labels:  dataset
Vins mono cg
Annotated version: Modified version of VINS-Mono (commit 9e657be on Jan 9, 2019)
Stars: ✭ 82 (-7.87%)
Mutual labels:  slam
Conmask
ConMask model described in the paper "Open-world Knowledge Graph Completion".
Stars: ✭ 84 (-5.62%)
Mutual labels:  dataset
Openml R
R package to interface with OpenML
Stars: ✭ 81 (-8.99%)
Mutual labels:  dataset
Minisam lib
Lightweight graph optimization (factor graph) library.
Stars: ✭ 86 (-3.37%)
Mutual labels:  slam
Atis dataset
The ATIS (Airline Travel Information System) Dataset
Stars: ✭ 81 (-8.99%)
Mutual labels:  dataset
Recursive Cnns
Implementation of my paper "Real-time Document Localization in Natural Images by Recursive Application of a CNN."
Stars: ✭ 80 (-10.11%)
Mutual labels:  dataset
Hands Detection
Hands video tracker using the Tensorflow Object Detection API and Faster RCNN model. The data used is the Hand Dataset from University of Oxford.
Stars: ✭ 87 (-2.25%)
Mutual labels:  dataset
Sigsep Mus Db
Python parser and tools for MUSDB18 Music Separation Dataset
Stars: ✭ 85 (-4.49%)
Mutual labels:  dataset
Ccpd
[ECCV 2018] CCPD: a diverse and well-annotated dataset for license plate detection and recognition
Stars: ✭ 1,252 (+1306.74%)
Mutual labels:  dataset

eval-vislam

Toolkit for VSLAM and VISLAM evaluation.

For more information, please refer to our project website.

License

This project is released under the Apache 2.0 license.

Prerequisites

ISMAR 2019 SLAM Challenge Scoring Toolkit

Usage:
  python3 ismar-score.py --round <round> --is_vislam <is_vislam> --trajectory_base_dir <trajectory_base_dir> --gt_base_dir <gt_base_dir>

Arguments:
  <round>                    Number of benchmark rounds, e.g. 5.
  <is_vislam>                Set to 1 for VISLAM, set to 0 for VSLAM.
  <trajectory_base_dir>      Folder containing SLAM camera trajectory files (e.g. ~/MY-SLAM/trajectories/). Files must be in TUM format (timestamp[s] px py pz qx qy qz qw).
  <gt_base_dir>              Path to the ground-truth folder, e.g. ~/ISMAR-Dataset/train.
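
For example, a 5-round VISLAM evaluation with trajectories stored under ~/MY-SLAM/trajectories/ and ground truth under ~/ISMAR-Dataset/train (both paths taken from the argument examples above and used here as placeholders) might be invoked like this:

  python3 ismar-score.py --round 5 --is_vislam 1 --trajectory_base_dir ~/MY-SLAM/trajectories/ --gt_base_dir ~/ISMAR-Dataset/train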

Toolkit Usage

Accuracy (APE, RPE, ARE, RRE, Completeness)

Usage:
  ./accuracy <groundtruth> <input> <fix scale>

Arguments:
  <groundtruth>    Path to sequence folder, e.g. ~/VISLAM-Dataset/A0.
  <input>          SLAM camera trajectory file in TUM format (timestamp[s] px py pz qx qy qz qw).
  <fix scale>      Set to 1 for VISLAM, set to 0 for VSLAM.
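
All of the tools consume plain-text trajectories in TUM format, one pose per line. Below is a minimal Python sketch for writing and reading that format; the field order follows the usage strings above, while the function names and the `poses` structure are illustrative assumptions, not part of the toolkit:

  # TUM format: timestamp[s] px py pz qx qy qz qw (one pose per line)
  def save_tum_trajectory(path, poses):
      # `poses` is assumed to be a list of (timestamp, (px, py, pz), (qx, qy, qz, qw))
      with open(path, "w") as f:
          for t, (px, py, pz), (qx, qy, qz, qw) in poses:
              f.write(f"{t:.9f} {px} {py} {pz} {qx} {qy} {qz} {qw}\n")

  def load_tum_trajectory(path):
      poses = []
      with open(path) as f:
          for line in f:
              line = line.strip()
              if not line or line.startswith("#"):  # skip blanks and comments
                  continue
              t, px, py, pz, qx, qy, qz, qw = map(float, line.split())
              poses.append((t, (px, py, pz), (qx, qy, qz, qw)))
      return poses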

Initialization Scale Error and Time

Usage:
  ./initialization <groundtruth> <input> <has inertial> <has static segment>

Arguments:
  <groundtruth>         Path to sequence folder, e.g. ~/VISLAM-Dataset/A0.
  <input>               SLAM camera trajectory file in TUM format (timestamp[s] px py pz qx qy qz qw).
  <has inertial>        Set to 1 for VISLAM, set to 0 for VSLAM.
  <has static segment>  Set to 1 if the sequence has a static segment at the beginning (e.g. the Ax and Cx series).
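
As a concrete example, evaluating VISLAM initialization on an A-series sequence (which starts with a static segment, per the flag above) might look like this, where my_trajectory.tum is a placeholder for your output file:

  ./initialization ~/VISLAM-Dataset/A0 my_trajectory.tum 1 1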

Robustness

Usage:
  ./robustness <groundtruth> <input> <fix scale>

Arguments:
  <groundtruth>    Path to sequence folder, e.g. ~/VISLAM-Dataset/A0.
  <input>          SLAM camera trajectory file in TUM format (timestamp[s] px py pz qx qy qz qw).
  <fix scale>      Set to 1 for VISLAM, set to 0 for VSLAM.
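
When scoring many sequences, it can help to drive the accuracy and robustness binaries from a small script. The sketch below assumes the executables sit in the current directory, that per-sequence trajectories are stored as results/<sequence>.tum, and that sequence folders follow the ~/VISLAM-Dataset/<sequence> layout; all of these are assumptions about your setup, not toolkit requirements:

  import subprocess
  from pathlib import Path

  DATASET = Path.home() / "VISLAM-Dataset"  # assumed dataset root
  TRAJECTORIES = Path("results")            # assumed folder of <sequence>.tum files
  FIX_SCALE = "1"                           # 1 = VISLAM, 0 = VSLAM

  for traj in sorted(TRAJECTORIES.glob("*.tum")):
      sequence = DATASET / traj.stem        # e.g. results/A0.tum -> ~/VISLAM-Dataset/A0
      for tool in ("./accuracy", "./robustness"):
          print(f"== {tool} on {traj.stem} ==")
          subprocess.run([tool, str(sequence), str(traj), FIX_SCALE], check=True)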

Relocalization Time

Usage:
  ./relocalization <groundtruth> <input> <has inertial> <jump detection>

Arguments:
  <groundtruth>    Path to sequence folder, e.g. ~/VISLAM-Dataset/A0.
  <input>          SLAM camera trajectory file in TUM format (timestamp[s] px py pz qx qy qz qw).
  <has inertial>   Set to 1 for VISLAM, set to 0 for VSLAM.
  <jump detection> Threshold for detecting a pose jump when relocalization happens (default 0.05).
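
The jump-detection value sets how large a sudden displacement between consecutive poses must be before it is treated as a relocalization jump. A rough illustration of that idea in Python (this is only a sketch of the concept; the toolkit's actual detection logic may differ):

  # Flag timestamps where the position jumps more than `threshold` between
  # consecutive poses; `poses` uses the (timestamp, position, quaternion)
  # structure from the TUM loader sketched earlier.
  def detect_jumps(poses, threshold=0.05):
      jumps = []
      for (_, p0, _), (t1, p1, _) in zip(poses, poses[1:]):
          dist = sum((a - b) ** 2 for a, b in zip(p0, p1)) ** 0.5
          if dist > threshold:
              jumps.append(t1)
      return jumps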

Citation

If you are using our codebase or dataset for research, please cite the following publication:

@article{li2019survey,
  title   = {Survey and Evaluation of Monocular Visual-Inertial SLAM Algorithms for Augmented Reality},
  author  = {Jinyu Li and Bangbang Yang and Danpeng Chen and Nan Wang and Guofeng Zhang and Hujun Bao},
  journal = {Journal of Virtual Reality \& Intelligent Hardware},
  year    = {2019},
  volume  = {1},
  number  = {4},
  pages   = {386--410},
  url     = {http://www.vr-ih.com/vrih/html/EN/10.3724/SP.J.2096-5796.2018.0011/article.html},
  doi     = {10.3724/SP.J.2096-5796.2018.0011}
}