
ArashJavan / DeepLIO

License: Apache-2.0
Deep Lidar Inertial Odometry

Programming Languages

  • Jupyter Notebook
  • Python
  • Shell

Projects that are alternatives of or similar to DeepLIO

SlimeVR-Server
Server app for SlimeVR ecosystem
Stars: ✭ 361 (+684.78%)
Mutual labels:  imu, imu-sensor
imusensor
Python library for communication between a Raspberry Pi and the MPU9250 IMU
Stars: ✭ 47 (+2.17%)
Mutual labels:  imu, imu-sensor
Monocular-Visual-Inertial-Odometry
This contains the code (in development) for monocular visual odometry of a quadrotor. The visual data from the monocular camera is fused with the onboard IMU to develop indoor control and navigation algorithms.
Stars: ✭ 70 (+52.17%)
Mutual labels:  imu, odometry
Urbannavdataset
UrbanNav: an open-source localization dataset collected in Asian urban canyons, including Tokyo and Hong Kong
Stars: ✭ 79 (+71.74%)
Mutual labels:  imu
9dof razor imu
9DOF Razor IMU available from SparkFun Electronics
Stars: ✭ 96 (+108.7%)
Mutual labels:  imu
Sparkfun mpu 9250 Dmp arduino library
Arduino library for the MPU-9250, enabling its digital motion processing (DMP) features.
Stars: ✭ 169 (+267.39%)
Mutual labels:  imu
PointCloudSegmentation
A research project based on the SemanticKITTI dataset: 3D point cloud segmentation and obstacle detection
Stars: ✭ 62 (+34.78%)
Mutual labels:  lidar-point-cloud
Lego Loam
LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain
Stars: ✭ 1,138 (+2373.91%)
Mutual labels:  imu
Map Based Visual Localization
A general framework for map-based visual localization. It contains 1) map generation supporting traditional or deep-learning features, 2) hierarchical localization in a visual (point or line) map, and 3) a fusion framework with IMU, wheel odometry, and GPS sensors.
Stars: ✭ 229 (+397.83%)
Mutual labels:  imu
Venom
All Terrain Autonomous Quadruped
Stars: ✭ 145 (+215.22%)
Mutual labels:  imu
Qcopterflightcontrol
Flight controller board, using STM32F405RG / STM32F411CE
Stars: ✭ 143 (+210.87%)
Mutual labels:  imu
Loose Gnss Imu
Loosely coupled integration of GNSS and IMU
Stars: ✭ 97 (+110.87%)
Mutual labels:  imu
Kalibr
The Kalibr visual-inertial calibration toolbox
Stars: ✭ 2,410 (+5139.13%)
Mutual labels:  imu
Location
Smartphone navigation positionning, fusion GPS and IMU sensors.
Stars: ✭ 87 (+89.13%)
Mutual labels:  imu
Smartimu
Inertial Sensor Development Kit: STM32F411C, NRF52810, MPU9250, LPS22HB
Stars: ✭ 231 (+402.17%)
Mutual labels:  imu
Denoise Imu Gyro
Convolutional Neural Networks for Denoising Gyroscopes of Low-Cost IMUs
Stars: ✭ 67 (+45.65%)
Mutual labels:  imu
Staticmapping
Use LiDAR to map the static world
Stars: ✭ 191 (+315.22%)
Mutual labels:  imu
Bbbmini
BeagleBone ArduPilot DIY Cape
Stars: ✭ 140 (+204.35%)
Mutual labels:  imu
Openimu
Open Source Analytics & Visualisation Software for Inertial Measurement Units
Stars: ✭ 133 (+189.13%)
Mutual labels:  imu
Uwb Localization
Accurate 3D Localization for MAV Swarms by UWB and IMU Fusion.
Stars: ✭ 155 (+236.96%)
Mutual labels:  imu

DeepLIO

Deep Lidar Inertial Odometry

  1. Introduction

DeepLIO is a deep-learning-based odometry estimation approach that fuses LiDAR and IMU measurements.

Cloning deeplio

git clone https://github.com/ArashJavan/DeepLIO.git
cd DeepLIO

1.1 Dependencies

The following packages must already be installed before you can start using DeepLIO.

1.2 Preparing the KITTI Dataset

Downloading KITTI

In this project both the synced and the unsynced (extract) versions of the KITTI dataset are used. From the unsynced (extract) data we only need the IMU measurements, and from the synced data the LiDAR frames and the ground truth. So if you want to reduce the required storage space, you can safely remove the image folders and the LiDAR frames from the unsynced version. You can either download the datasets yourself or run the download_kitti_raw.sh script to download the KITTI raw sequences.

Note: You will need at least 300 GB of free space on your hard drive.

$ mkdir -p datasets/KITTI/sync
$ download_kitti_raw.sh datasets/KITTI  sync

$ mkdir -p datasets/KITTI/extract
$ download_kitti_raw.sh datasets/KITTI  extract

Now wait until the download is complete. It will take a long, long time, so go get some coffee :)

At the end you will find all sequences extracted under the KITTI folder:

KITTI
|-> extract
|   |-> 2011_09_30
|   |   |-> 2011_09_30_drive_0016_extract
|   |       |-> image_00 .. image_03
|   |       |-> oxts
|   |       |-> velodyne_points
|   |   ...
|   |-> 2011_10_03
|   |   |-> 2011_10_03_drive_0027_extract
|   |   ...
|-> sync
|   ...

1.3 DeepLIO Architecture

DeepLIO is completely modular and configurable, i.e. every module can be combined with other modules to build the whole network architecture. As the following figure shows, there are four main modules.

  • LiDAR-Feature-Nets: This module is responsible for extracting (learning) and encoding features from the LiDAR frames, which are first transformed by spherical projection.
  • IMU-Feature-Nets: This module is responsible for extracting (learning) and encoding the IMU measurements, which consist of linear acceleration and angular velocity (dim=6).
  • Fusion-Net: This module is responsible for fusing the features extracted from LiDAR and IMU.
  • Odometry-Net: Finally, in this module the fused features are used to learn a hidden state that captures the odometry information encoded in these features.

[Figure: the DeepLIO network architecture with its four main modules]
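
To make this modular composition concrete, here is a minimal structural sketch; the class name, module interfaces, and call signatures are illustrative assumptions, not DeepLIO's actual code.

import torch.nn as nn

# Hypothetical sketch of the four-module composition described above.
class DeepLIONet(nn.Module):
    def __init__(self, lidar_feat_net, imu_feat_net, fusion_net, odom_net):
        super().__init__()
        self.lidar_feat_net = lidar_feat_net  # encodes projected LiDAR frames
        self.imu_feat_net = imu_feat_net      # encodes IMU measurement sequences
        self.fusion_net = fusion_net          # fuses both feature sets
        self.odom_net = odom_net              # infers odometry from fused features

    def forward(self, lidar_frames, imu_seq):
        lidar_feats = self.lidar_feat_net(lidar_frames)
        imu_feats = self.imu_feat_net(imu_seq)
        fused = self.fusion_net(lidar_feats, imu_feats)
        return self.odom_net(fused)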

1.3.1 LiDAR-Feature-Nets

The inputs to this module are the projected LiDAR frames and their normal images, which in the current configuration are built by stacking the remission (intensity) and depth channels together. The module itself is made of an arbitrary convolutional neural network (CNN). These networks are implemented as a Siamese network. At this point in time, four CNN architectures are implemented:

  1. [FlowNet-Feature-Net]

TODO: add link to FlowNet here.

  2. [ResNet-Feature-Net]

TODO: add link to ResNet here.

  3. [Simple-Feature-Net]

  4. PointSeg-Feature-Net: This network is an implementation of the more demanding PointSeg network. The network itself is made of three functional layers: the so-called Fire layers, the squeeze reweighting layer, and the enlargement layer. Interested readers can find more information about each of these layers in the original PointSeg paper.
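
Since all of these feature nets consume spherically projected frames, the following generic sketch shows how such a projection is typically computed; the image size and field-of-view bounds are illustrative assumptions, not DeepLIO's exact parameters.

import numpy as np

# Generic spherical projection: map 3D LiDAR points onto a 2D image grid.
def spherical_project(points, H=64, W=720, fov_up=3.0, fov_down=-25.0):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)       # range per point
    yaw = np.arctan2(y, x)                          # horizontal angle
    pitch = np.arcsin(z / np.maximum(r, 1e-8))      # vertical angle
    up, down = np.radians(fov_up), np.radians(fov_down)
    u = 0.5 * (1.0 - yaw / np.pi) * W               # column index
    v = (1.0 - (pitch - down) / (up - down)) * H    # row index
    u = np.clip(u, 0, W - 1).astype(int)
    v = np.clip(v, 0, H - 1).astype(int)
    return u, v, r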

All of the networks listed above can be found and configured in the configuration file:

### Lidar Feature Networks ###########################
# feature network simple0 with conv layers

# feature network pointseg
lidar-feat-pointseg:  # pointseg feature
  dropout: 0.1
  classes: ['unknown', 'object']
  bypass: "simple"
  fusion: add # [cat, sub, add]
  part: "encoder" # [encoder]

lidar-feat-flownet:
  dropout: 0.
  fusion: add # [cat, sub, add]

lidar-feat-resnet:
  dropout: 0.25
  fusion: add # [cat, sub, add]

# feature network simple0 with bypassed conv layers
lidar-feat-simple-1:
  dropout: 0.25
  fusion: add # [cat, sub, add]
  bypass: false

With the keyword fusion you can define whether the two branches of the Siamese network should be concatenated, subtracted, or added (cat, sub, add). Please note that with subtraction or addition you get half as many features as with concatenation.
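
As a quick illustration of what these options mean (a sketch with made-up tensor shapes, not DeepLIO's actual code):

import torch

# Two Siamese branches produce feature maps of shape (B, C, H, W).
f1 = torch.randn(4, 64, 16, 100)
f2 = torch.randn(4, 64, 16, 100)

fused_cat = torch.cat([f1, f2], dim=1)  # (B, 2C, H, W): twice the channels
fused_sub = f1 - f2                     # (B, C, H, W): half as many features as cat
fused_add = f1 + f2                     # (B, C, H, W)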

1.3.2 IMU-Feature-Nets

The inputs to this module are sequences of IMU measurements. Basically, any kind of fully connected (FC) network can be configured here. Since the IMU measurements are also a time series, it is quite appealing to handle them with an RNN-based network. For this reason both kinds of networks - FC and RNN (LSTM, GRU) - are implemented in the IMU module. Again, you can freely configure each network in the corresponding config-file section:

### IMU Feature Networks ###########################
# IMU Feature Networks
imu-feat-fc:  # FC
  input-size: 6  # !fixed! do not change
  hidden-size: [1024, 512, 256]
  dropout: 0.

imu-feat-rnn: # RNN
  type: "lstm"
  input-size: 6  # !fixed! do not change
  hidden-size: 128
  num-layers: 2
  bidirectional: true
  dropout: 0.

Please note that the networks are built dynamically, i.e. you can change the number of hidden layers and other settings as required.
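
For instance, a minimal sketch of how the imu-feat-rnn config above could be turned into a network dynamically (assumed code, not DeepLIO's implementation):

import torch
import torch.nn as nn

cfg = {"type": "lstm", "input-size": 6, "hidden-size": 128,
       "num-layers": 2, "bidirectional": True, "dropout": 0.0}

rnn_cls = nn.LSTM if cfg["type"] == "lstm" else nn.GRU
imu_feat_net = rnn_cls(input_size=cfg["input-size"],
                       hidden_size=cfg["hidden-size"],
                       num_layers=cfg["num-layers"],
                       bidirectional=cfg["bidirectional"],
                       dropout=cfg["dropout"],
                       batch_first=True)

imu_seq = torch.randn(4, 10, 6)       # (batch, sequence length, accel + gyro)
features, _ = imu_feat_net(imu_seq)   # (4, 10, 256): bidirectional doubles the size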

1.3.3 Fusion-Layer

After the features of each sensor modality have been encoded by the IMU and LiDAR networks, we must somehow fuse them for the task of odometry inference. One classical approach would be to use some kind of filter, for example a Kalman filter, but since we want a fully differentiable system, this part should also be integrated into the network. One simple procedure is to just concatenate the features; another is to use a further network to learn the correlation between them, as in attention-based networks. For now, only the concatenation procedure is implemented.

### Fusion-Layer for DeepLIO Net ###########################
fusion-layer:
  type: "cat" # [cat, soft-fusion]

1.3.4 Odometry-Feature-Nets

This layer should learn how to extract the information needed for the task of odometry inference from the fused features. Since odometry inference can typically be seen as inference on a time series, both RNN-based and FC-based networks can be used here. As with the IMU features, you can configure the odom-feat network as you need.

### Odometry Feature Networks ###########################
# odometry feature network with fully connected layers
odom-feat-fc:
  size: [256]
  dropout: 0.

# odometry feature network with rnn-layers
odom-feat-rnn:
  type: "lstm"
  hidden-size: 512
  num-layers: 2
  bidirectional: true
  dropout: 0.

1.4 Configuring DeepLIO

As mentioned before, you can configure each module of the network as you need; you can even omit some modules entirely if you want.

### DeepLIO Network ##############################
deeplio:
  dropout: 0.25
  pretrained: false
  model-path: ""
  lidar-feat-net:
    name: "lidar-feat-simple-1"
    pretrained: false
    model-path: ""
  imu-feat-net:
    name: "imu-feat-rnn"
    pretrained: false
    model-path: ""
  odom-feat-net:
    name: "odom-feat-fc"
    pretrained: false
    model-path: ""
  fusion-net: "fusion-layer"

You can choose between the different network implementations by simply replacing the name of the desired network.

1.4.1 DeepIO

Deep Inertial Odometry: You can configure the network so that it only uses IMU measurements for odometry prediction. You do this simply by commenting out ('#') the names of all other modules, as below:

### DeepLIO Network ##############################
deeplio:
  dropout: 0.25
  pretrained: false
  model-path: ""
  lidar-feat-net:
    name: # "lidar-feat-simple-1"  # note: the name is commented out
    pretrained: false
    model-path: ""
  imu-feat-net:
    name: "imu-feat-rnn"
    pretrained: false
    model-path: ""
  odom-feat-net:
    name: # "odom-feat-fc"  # note: the name is commented out
    pretrained: false
    model-path: ""
  fusion-net: "fusion-layer"

Please also note that you can, for example, keep the IMU and odometry modules by commenting out only the LiDAR module!

1.4.2 DeepLO

Deep LiDAR Odometry: In the same way, you can configure the network so that it only uses LiDAR measurements for odometry prediction. Again, you do this by commenting out ('#') the names of all other modules, as below:

### DeepLIO Network ##############################
deeplio:
  dropout: 0.25
  pretrained: false
  model-path: ""
  lidar-feat-net:
    name: "lidar-feat-simple-1"
    pretrained: false
    model-path: ""
  imu-feat-net:
    name: "imu-feat-rnn"  # See the name is commented out
    pretrained: false
    model-path: ""
  odom-feat-net:
    name: # "odom-feat-fc"  # note: the name is commented out
    pretrained: false
    model-path: ""
  fusion-net: "fusion-layer"

Please also note that you can, for example, use the LiDAR and odometry modules together by commenting out only the IMU module! As you can easily guess, you get the whole DeepLIO network by naming all the modules you want in the network.

1.5 Data Normalization

The inputs to the LiDAR feature nets are, as mentioned before, the projected frames, consisting of the remission (intensity) and depth channels. As you can imagine, these do not have the same scale, so they should be normalized before being passed to the network. The same is true for the IMU measurements. The mean and standard deviation of each network input are calculated by the Mean-Std script. For the KITTI dataset these values are already included in the config file.

    # channels: x, y, z, range(x,y,z), remission
    mean-image: [-0.05815, 0.4165, -0.8585, 0.1029, 0.2164]
    std-image: [10.0366, 7.4936, 0.8761, 0.0857, 0.1740]
    mean-imu: [-0.0635, 0.186, 9.7966, 0., 0.0005, 0.0082]
    std-imu: [0.8525, 0.9516, 0.3401, 0.0198, 0.02204, 0.1383]
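
A minimal sketch of how such a per-channel normalization is applied (assumed code, not DeepLIO's implementation; the image size is made up):

import torch

mean_image = torch.tensor([-0.05815, 0.4165, -0.8585, 0.1029, 0.2164])
std_image = torch.tensor([10.0366, 7.4936, 0.8761, 0.0857, 0.1740])

# Projected frame with channels (x, y, z, range, remission): shape (C, H, W)
frame = torch.randn(5, 64, 720)
normalized = (frame - mean_image[:, None, None]) / std_image[:, None, None]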

Note: In this work we use the inverse depth representation, which is more popular in the computer vision community, instead of using depth values directly.
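
For instance, converting a depth channel to inverse depth might look like this (a sketch; the epsilon guard and value range are assumptions):

import torch

depth = torch.rand(64, 720) * 80.0       # depths in meters (illustrative)
inv_depth = 1.0 / depth.clamp(min=1e-6)  # inverse depth, avoiding division by zero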


1.6 Training DeepLIO

You can train the network by first preparing the KITTI dataset as explained before, then making the required adjustments in the config file depending on your needs, and finally starting the training by executing:

python deeplio/train.py

or

python deeplio/train.py --lr 0.01 --weight-decay 0.0001 -b 32 --epochs 100 -j 6 --device cuda --print-freq 10 --lr-decay 20

For each training run, a folder named with the current timestamp is created, and all outputs of the network are saved there. The outputs are:

  • Checkpoints
  • Log file
  • TensorBoard log file
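
You can then inspect the training curves with TensorBoard, for example (the output folder name here is a placeholder):

tensorboard --logdir <timestamp-folder>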

Citation

If you use this package for your research, a footnote with the link to this repository is appreciated: https://github.com/ArashJavan/DeepLIO. For citation with BibTeX:

@article{isprs-annals-V-1-2021-47-2021,
  author = {Javanmard-Gh., A. and Iwaszczuk, D. and Roth, S.},
  doi = {10.5194/isprs-annals-V-1-2021-47-2021},
  journal = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
  pages = {47--54},
  title = {{DEEPLIO: DEEP LIDAR INERTIAL SENSOR FUSION FOR ODOMETRY ESTIMATION}},
  url = {https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/V-1-2021/47/2021/},
  volume = {V-1-2021},
  year = {2021}
}