M2DGR: a Multi-modal and Multi-scenario SLAM Dataset for Ground Robots

Author: Jie Yin, Ang Li, Tao Li, Wenxian Yu, and Danping Zou

Figure 1. Sample Images

ABSTRACT:

We introduce M2DGR: a novel large-scale dataset collected by a ground robot with a full sensor suite, including six fish-eye cameras and one sky-pointing RGB camera, an infrared camera, an event camera, a Visual-Inertial Sensor (VI-sensor), an inertial measurement unit (IMU), a LiDAR, a consumer-grade Global Navigation Satellite System (GNSS) receiver, and a GNSS-IMU navigation system with real-time kinematic (RTK) signals. All these sensors were well calibrated and synchronized, and their data were recorded simultaneously. The ground-truth trajectories were obtained by a motion-capture device, a laser 3D tracker, and an RTK receiver. The dataset comprises 36 sequences (about 1 TB) captured in diverse scenarios, including both indoor and outdoor environments. We evaluate state-of-the-art SLAM algorithms on M2DGR. Results show that existing solutions perform poorly in some scenarios. For the benefit of the research community, we make the dataset and tools public.

Keywords: Dataset, Multi-modal, Multi-scenario, Ground Robot

MAIN CONTRIBUTIONS:

  • We collected long-term challenging sequences for ground robots both indoors and outdoors with a complete sensor suite, which includes six surround-view fish-eye cameras, a sky-pointing fish-eye camera, a perspective color camera, an event camera, an infrared camera, a 32-beam LIDAR, two GNSS receivers, and two IMUs. To our knowledge, this is the first SLAM dataset focusing on ground robot navigation with such rich sensory information.
  • We recorded trajectories in several challenging scenarios, such as lifts and complete darkness, which can easily cause existing localization solutions to fail. These situations are commonly faced in ground robot applications, yet are seldom discussed in previous datasets.
  • We launched a comprehensive benchmark for ground robot navigation. On this benchmark, we evaluated existing state-of-the-art SLAM algorithms of various designs and analyzed their characteristics and limitations individually.

Updates

2022.02.18 We have uploaded a brand-new SLAM dataset with GNSS, vision, and IMU information. Here is the link: SJTU-GVI. Unlike M2DGR, the new data was captured on a real car, and it records GNSS raw measurements with a Ublox ZED-F9P device to facilitate GNSS-SLAM. Give us a star if you like it!

2022.02.01 Our work has been accepted by ICRA2022!

2022.01.10 We will upload more brand new datasets soon! Please follow our work!

1.LICENSE

This work is licensed under the MIT License and is provided for academic purposes. If you are interested in using our project for commercial purposes, please contact us at [email protected] for further communication.

If you face any problems when using this dataset, feel free to open an issue. And if you find our dataset helpful in your research, simply give this project a star.

The paper has been accepted by both RA-L and ICRA 2022. A preprint version of the paper is available on arXiv, and the published version on IEEE RA-L. If you use M2DGR in an academic work, please cite:

@ARTICLE{9664374,
  author={Yin, Jie and Li, Ang and Li, Tao and Yu, Wenxian and Zou, Danping},
  journal={IEEE Robotics and Automation Letters}, 
  title={M2DGR: A Multi-sensor and Multi-scenario SLAM Dataset for Ground Robots}, 
  year={2021},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/LRA.2021.3138527}}

2.SENSOR SETUP

2.1 Acquisition Platform

Physical drawings and schematics of the ground robot are given below. All dimensions in the figures are in centimeters.

Figure 2. The GAEA ground robot equipped with a full sensor suite. The directions of the sensors are marked in different colors: red for X, green for Y, and blue for Z.

2.2 Sensor parameters

All the sensors and tracking devices, together with their most important parameters, are listed below:

  • LiDAR: Velodyne VLP-32C, 360° horizontal FOV, -30° to +10° vertical FOV, 10 Hz, max range 200 m, range resolution 3 cm, horizontal angular resolution 0.2°.

  • RGB Camera: FLIR Pointgrey CM3-U3-13Y3C-CS, fish-eye lens, 1280×1024, 190° H-FOV, 190° V-FOV, 15 Hz.

  • GNSS: Ublox M8T, GPS/BeiDou, 1 Hz.

  • Infrared Camera: PLUG 617, 640×512, 90.2° H-FOV, 70.6° V-FOV, 25 Hz.

  • V-I Sensor: Realsense D435i, RGB/depth 640×480, 69° H-FOV, 42.5° V-FOV, 15 Hz; IMU 6-axis, 200 Hz.

  • Event Camera: Inivation DVXplorer, 640×480, 15 Hz.

  • IMU: Handsfree A9, 9-axis, 150 Hz.

  • GNSS-IMU: Xsens MTi 680G; GNSS-RTK localization precision 2 cm, 100 Hz; IMU 9-axis, 100 Hz.

  • Laser Scanner: Leica MS60, localization accuracy 1 mm + 1.5 ppm.

  • Motion-capture System: Vicon Vero 2.2, localization accuracy 1 mm, 50 Hz.

The ROS topics of our rosbag sequences are listed as follows (a minimal reading example follows the list):

  • LIDAR: /velodyne_points

  • RGB Camera: /camera/left/image_raw/compressed ,
    /camera/right/image_raw/compressed ,
    /camera/third/image_raw/compressed ,
    /camera/fourth/image_raw/compressed ,
    /camera/fifth/image_raw/compressed ,
    /camera/sixth/image_raw/compressed ,
    /camera/head/image_raw/compressed

  • GNSS Ublox M8T:
    /ublox/aidalm ,
    /ublox/aideph ,
    /ublox/fix ,
    /ublox/fix_velocity ,
    /ublox/monhw ,
    /ublox/navclock ,
    /ublox/navpvt ,
    /ublox/navsat ,
    /ublox/navstatus ,
    /ublox/rxmraw

  • Infrared Camera: /thermal_image_raw

  • V-I Sensor:
    /camera/color/image_raw/compressed ,
    /camera/imu

  • Event Camera:
    /dvs/events,
    /dvs_rendering/compressed

  • IMU: /handsfree/imu
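
As a quick sanity check that a downloaded bag contains the expected topics, the following minimal Python sketch iterates over a few of the topics listed above. It assumes a ROS 1 environment with the rosbag Python package; the bag filename street_07.bag is hypothetical.

import rosbag

topics = ["/velodyne_points", "/handsfree/imu", "/ublox/fix"]
with rosbag.Bag("street_07.bag") as bag:
    for topic, msg, t in bag.read_messages(topics=topics):
        if topic == "/handsfree/imu":
            # sensor_msgs/Imu: print the raw accelerometer reading
            a = msg.linear_acceleration
            print("%.3f IMU accel: (%.2f, %.2f, %.2f)" % (t.to_sec(), a.x, a.y, a.z))
        elif topic == "/ublox/fix":
            # sensor_msgs/NavSatFix: print the GNSS position fix
            print("%.3f GNSS fix: lat=%.6f lon=%.6f" % (t.to_sec(), msg.latitude, msg.longitude))
        else:
            # sensor_msgs/PointCloud2 from the Velodyne
            print("%.3f LiDAR scan: %d x %d points" % (t.to_sec(), msg.width, msg.height))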

3.DATASET SEQUENCES

All sequences, together with their ground truth (GT), are now publicly available.

Figure 3. A sample video with fish-eye images (both forward-looking and sky-pointing), a perspective image, a thermal-infrared image, an event image, and LiDAR odometry.

An overview of M2DGR is given in the table below:

Scenario      Street    Circle   Gate     Walk     Hall     Door    Lift     Room    Roomdark  TOTAL
Number        10        2        3        1        5        2       4        3       6         36
Size/GB       590.7     50.6     65.9     21.5     117.4    46.0    112.1    45.3    171.1     1220.6
Duration/s    7958      478      782      291      1226     588     1224     275     866       13688
Dist/m        7727.72   618.03   248.40   263.17   845.15   200.14  266.27   144.13  395.66    10708.67
Ground Truth  RTK/INS   RTK/INS  RTK/INS  RTK/INS  Leica    Leica   Leica    Mocap   Mocap     ---
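
The ground-truth files are plain-text trajectories in TUM format (one pose per row: timestamp tx ty tz qx qy qz qw), as implied by the evo commands in Section 5.2. A minimal Python loader, with a hypothetical filename:

import numpy as np

def load_tum_trajectory(path):
    # Each row: timestamp tx ty tz qx qy qz qw
    data = np.loadtxt(path, comments="#")
    return data[:, 0], data[:, 1:4], data[:, 4:8]

stamps, positions, quaternions = load_tum_trajectory("street_07.txt")
print("%d poses over %.1f s" % (len(stamps), stamps[-1] - stamps[0]))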

3.1 Outdoors

Figure 4. Outdoor sequences: all trajectories are mapped in different colors.

Sequence Name Collection Date Total Size Duration Features Rosbag GT
gate_01 2021-07-31 16.4 GB 172 s dark, around gate Rosbag GT
gate_02 2021-07-31 27.3 GB 327 s dark, loop back Rosbag GT
gate_03 2021-08-04 21.9 GB 283 s day Rosbag GT
Sequence Name Collection Date Total Size Duration Features Rosbag GT
Circle_01 2021-08-03 23.3 GB 234 s circle Rosbag GT
Circle_02 2021-08-07 27.3 GB 244 s circle Rosbag GT
Sequence Name Collection Date Total Size Duration Features Rosbag GT
street_01 2021-08-06 75.8 GB 1028 s street and buildings, night, zigzag, long-term Rosbag GT
street_02 2021-08-03 83.2 GB 1227 s day, long-term Rosbag GT
street_03 2021-08-06 21.3 GB 354 s night, back and forth, full speed Rosbag GT
street_04 2021-08-03 48.7 GB 858 s night, around lawn, loop back Rosbag GT
street_05 2021-08-04 27.4 GB 469 s night, straight line Rosbag GT
street_06 2021-08-04 35.0 GB 494 s night, one turn Rosbag GT
street_07 2021-08-06 77.2 GB 929 s dawn, zigzag, sharp turns Rosbag GT
street_08 2021-08-06 31.2 GB 491 s night, loop back, zigzag Rosbag GT
street_09 2021-08-07 83.2 GB 907 s day, zigzag Rosbag GT
street_010 2021-08-07 86.2 GB 910 s day, zigzag Rosbag GT
walk_01 2021-08-04 21.5 GB 291 s day, back and forth Rosbag GT

3.2 Indoors

Figure 5. Lift sequences: the robot wandered around a hall on the first floor and then went to the second floor by lift. A laser scanner tracked the trajectory outside the lift.

Sequence Name Collection Date Total Size Duration Features Rosbag GT
lift_01 2021-08-04 18.4 GB 225 s lift Rosbag GT
lift_02 2021-08-04 43.6 GB 488 s lift Rosbag GT
lift_03 2021-08-15 22.3 GB 252 s lift Rosbag GT
lift_04 2021-08-15 27.8 GB 299 s lift Rosbag GT
Sequence Name Collection Date Total Size Duration Features Rosbag GT
hall_01 2021-08-01 29.1 GB 351 s random walk Rosbag GT
hall_02 2021-08-08 15.0 GB 128 s random walk Rosbag GT
hall_03 2021-08-08 20.5 GB 164 s random walk Rosbag GT
hall_04 2021-08-15 17.7 GB 181 s random walk Rosbag GT
hall_05 2021-08-15 35.1 GB 402 s circle Rosbag GT

Figure 6. Room sequences: captured under a motion-capture system with twelve cameras.

Sequence Name Collection Date Total Size Duration Features Rosbag GT
room_01 2021-07-30 14.0 GB 72 s room, bright Rosbag GT
room_02 2021-07-30 15.2 GB 75 s room, bright Rosbag GT
room_03 2021-07-30 26.1 GB 128 s room, bright Rosbag GT
room_dark_01 2021-07-30 20.2 GB 111 s room, dark Rosbag GT
room_dark_02 2021-07-30 30.3 GB 165 s room, dark Rosbag GT
room_dark_03 2021-07-30 22.7 GB 116 s room, dark Rosbag GT
room_dark_04 2021-08-15 29.3 GB 143 s room, dark Rosbag GT
room_dark_05 2021-08-15 33.0 GB 159 s room, dark Rosbag GT
room_dark_06 2021-08-15 35.6 GB 172 s room, dark Rosbag GT

3.3 Alternating Indoors and Outdoors

Figure 7. Door sequences: a laser scanner tracked the robot through a door from indoors to outdoors.

Sequence Name Collection Date Total Size Duration Features Rosbag GT
door_01 2021-08-04 35.5 GB 461 s outdoor to indoor to outdoor, long-term Rosbag GT
door_02 2021-08-04 10.5 GB 127 s outdoor to indoor, short-term Rosbag GT

4.CONFIGURATION FILES

For convenience of evaluation, we provide configuration files for some well-known SLAM systems below:

  • A-LOAM
  • LeGO-LOAM
  • LINS
  • LIO-SAM
  • VINS-MONO
  • ORB-Pinhole
  • ORB-Fisheye
  • ORB-Thermal
  • CubemapSLAM

5.DEVELOPMENT TOOLKITS

5.1 Extracting Images

  • For rosbag users, first build image_view:
roscd image_view
rosmake image_view
sudo apt-get install mjpegtools

Open a terminal and type roscore. Then open another terminal and type:

rosrun image_transport republish compressed in:=/camera/color/image_raw raw out:=/camera/color/image_raw respawn="true"
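
Alternatively, for offline use, the compressed image topic can be decoded straight to PNG files without republishing. A short Python sketch, assuming the ROS 1 rosbag package and opencv-python are installed; the bag name and output folder are hypothetical:

import os
import cv2
import numpy as np
import rosbag

os.makedirs("frames", exist_ok=True)
with rosbag.Bag("street_07.bag") as bag:
    for _, msg, t in bag.read_messages(topics=["/camera/color/image_raw/compressed"]):
        # CompressedImage.data holds a JPEG byte stream; decode it with OpenCV.
        img = cv2.imdecode(np.frombuffer(msg.data, np.uint8), cv2.IMREAD_COLOR)
        cv2.imwrite(os.path.join("frames", "%.6f.png" % t.to_sec()), img)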

5.2 Evaluation

We use the open-source tool evo for evaluation. To install evo, type

pip install evo --upgrade --no-binary evo

To evaluate monocular visual SLAM, type

evo_ape tum street_07.txt your_result.txt -vaps

To evaluate LiDAR SLAM, type

evo_ape tum street_07.txt your_result.txt -vap

To test GNSS-based methods, type

evo_ape tum street_07.txt your_result.txt -vp
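
The same metric can also be computed programmatically through evo's Python API. A sketch under the same file-name assumptions as the commands above:

from evo.core import metrics, sync
from evo.tools import file_interface

# Load ground truth and estimate (TUM format), then time-associate them.
traj_ref = file_interface.read_tum_trajectory_file("street_07.txt")
traj_est = file_interface.read_tum_trajectory_file("your_result.txt")
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)

# Align the estimate to the ground truth; correct_scale=True mirrors the
# -s flag used for monocular visual SLAM, whose scale is unobservable.
traj_est.align(traj_ref, correct_scale=True)

ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((traj_ref, traj_est))
print("APE RMSE [m]:", ape.get_statistic(metrics.StatisticsType.rmse))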

5.3 Calibration

For camera intrinsics, visit Ocamcalib for the omnidirectional model, Vins-Fusion for the pinhole and MEI models, and use OpenCV for the Kannala-Brandt model.
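
As an illustration of the Kannala-Brandt route, the sketch below undistorts a fish-eye frame with OpenCV's cv2.fisheye module. The intrinsics are placeholder values, not M2DGR's actual calibration, and the filenames are hypothetical.

import cv2
import numpy as np

# Placeholder intrinsics (fx, fy, cx, cy) and Kannala-Brandt
# distortion coefficients k1..k4 -- not real M2DGR calibration values.
K = np.array([[285.0, 0.0, 640.0],
              [0.0, 285.0, 512.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.1, -0.05, 0.01, -0.002])

img = cv2.imread("fisheye_frame.png")  # hypothetical input frame
undistorted = cv2.fisheye.undistortImage(img, K, D, Knew=K)
cv2.imwrite("undistorted.png", undistorted)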

For IMU intrinsics, visit Imu_utils.

For extrinsics between cameras and IMU, visit Kalibr. For extrinsics between LiDAR and IMU, visit Lidar_IMU_Calib. For extrinsics between cameras and LiDAR, visit Autoware.

5.4 Getting RINEX files

For GNSS-based methods like RTKLIB, we usually need data in RINEX format. To make use of the GNSS raw measurements, we use the Link toolkit.

5.5 ROS drivers for UVC cameras

We wrote a ROS driver for UVC cameras to record our thermal-infrared images: UVC ROS driver.

6.FUTURE PLANS

In the future, we plan to update and extend our project from time to time, striving to build a comprehensive SLAM benchmark similar to the KITTI dataset for ground robots.

If you have any suggestions or questions, do not hesitate to open an issue. And if you find our dataset helpful in your research, a simple star is the best affirmation for us.

7.ACKNOWLEDGEMENT

This work is supported by NSFC (62073214). The authors from SJTU hereby express their appreciation.
