
ika-rwth-aachen / EviLOG

License: MIT
TensorFlow training pipeline and dataset for prediction of evidential occupancy grid maps from lidar point clouds.


Projects that are alternatives of or similar to EviLOG

OpenMaterial
3D model exchange format with physical material properties for virtual development, test and validation of automated driving.
Stars: ✭ 23 (-23.33%)
Mutual labels:  lidar, automated-driving
FLAT
[ICCV2021 Oral] Fooling LiDAR by Attacking GPS Trajectory
Stars: ✭ 52 (+73.33%)
Mutual labels:  lidar
pcl localization ros2
ROS2 package of 3D LIDAR-based Localization using PCL (Not SLAM)
Stars: ✭ 74 (+146.67%)
Mutual labels:  lidar
mola-fe-lidar
MOLA module: Front-end for point-cloud sensors based on generic ICP algorithms. LiDAR odometry and loop closure.
Stars: ✭ 16 (-46.67%)
Mutual labels:  lidar
LiDAR-GTA-V
A plugin for Grand Theft Auto V that generates a labeled LiDAR point cloud from the game environment.
Stars: ✭ 127 (+323.33%)
Mutual labels:  lidar
continuous-fusion
(ROS) Sensor fusion algorithm for camera+lidar.
Stars: ✭ 26 (-13.33%)
Mutual labels:  lidar
mini-map-maker
A tool for automatically generating 3D printable STLs from freely available lidar scan data.
Stars: ✭ 51 (+70%)
Mutual labels:  lidar
voxelization and sdf
C++11 code for building a 3D occupancy grid and an SDF 3D grid from a mesh
Stars: ✭ 29 (-3.33%)
Mutual labels:  occupancy-grid-map
Python-for-Remote-Sensing
python codes for remote sensing applications will be uploaded here. I will try to teach everything I learn during my projects in here.
Stars: ✭ 20 (-33.33%)
Mutual labels:  lidar
pyGEDI
pyGEDI is a Python Package for NASA's Global Ecosystem Dynamics Investigation (GEDI) mission, data extraction, analysis, processing and visualization.
Stars: ✭ 55 (+83.33%)
Mutual labels:  lidar
camera-pose-estimation
Given a map data (image + lidar), estimate the 6 DoF camera pose of the query image.
Stars: ✭ 23 (-23.33%)
Mutual labels:  lidar
sweep-sdk
Sweep SDK
Stars: ✭ 88 (+193.33%)
Mutual labels:  lidar
gedi tutorials
GEDI L3 and L4 Tutorials
Stars: ✭ 61 (+103.33%)
Mutual labels:  lidar
occupancy-grid-a-star
A Python implementation of the A* algorithm in a 2D Occupancy Grid Map
Stars: ✭ 50 (+66.67%)
Mutual labels:  occupancy-grid-map
lidar-sync-mimics-gps
Open-Source LiDAR Time Synchronization System by Mimicking GPS-clock
Stars: ✭ 52 (+73.33%)
Mutual labels:  lidar
WS3D
Official version of 'Weakly Supervised 3D object detection from Lidar Point Cloud'(ECCV2020)
Stars: ✭ 104 (+246.67%)
Mutual labels:  lidar
pole-localization
Online Range Image-based Pole Extractor for Long-term LiDAR Localization in Urban Environments
Stars: ✭ 107 (+256.67%)
Mutual labels:  lidar
LiDAR fog sim
LiDAR fog simulation
Stars: ✭ 101 (+236.67%)
Mutual labels:  lidar
CarND-Extended-Kalman-Filter-P6
Self Driving Car Project 6 - Sensor Fusion(Extended Kalman Filter)
Stars: ✭ 24 (-20%)
Mutual labels:  lidar
PointPainting
This repository is an open-source PointPainting package which is easy to understand, deploy and run!
Stars: ✭ 152 (+406.67%)
Mutual labels:  lidar

EviLOG: Evidential Lidar Occupancy Grid Mapping

This repository provides the dataset as well as the training pipeline that was used in our paper:

IV 2021 Presentation

A Simulation-based End-to-End Learning Framework for Evidential Occupancy Grid Mapping (IEEE Xplore, arXiv)

Raphael van Kempen, Bastian Lampe, Timo Woopen, and Lutz Eckstein
Institute for Automotive Engineering (ika), RWTH Aachen University

Abstract — Evidential occupancy grid maps (OGMs) are a popular representation of the environment of automated vehicles. Inverse sensor models (ISMs) are used to compute OGMs from sensor data such as lidar point clouds. Geometric ISMs show a limited performance when estimating states in unobserved but inferable areas and have difficulties dealing with ambiguous input. Deep learning-based ISMs face the challenge of limited training data and they often cannot handle uncertainty quantification yet. We propose a deep learning-based framework for learning an OGM algorithm which is both capable of quantifying uncertainty and which does not rely on manually labeled data. Results on synthetic and on real-world data show superiority over other approaches.
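The evidential representation mentioned in the abstract can be illustrated with a small sketch (not from the paper's code): each grid cell carries belief masses for "occupied", "free", and "unknown" (the full frame of discernment), and two pieces of evidence are fused with Dempster's rule of combination. All numbers below are illustrative.

```python
# Illustrative sketch: an evidential grid cell holds belief masses for
# "occupied", "free", and "unknown", and evidence is fused with
# Dempster's rule of combination over the frame {occupied, free}.

def dempster_combine(m1, m2):
    """Fuse two mass functions given as dicts with keys
    'occ', 'free', 'unk' that each sum to 1.
    Conflicting mass (occ vs. free) is normalized out."""
    conflict = m1["occ"] * m2["free"] + m1["free"] * m2["occ"]
    k = 1.0 - conflict  # normalization constant
    occ = (m1["occ"] * m2["occ"] + m1["occ"] * m2["unk"]
           + m1["unk"] * m2["occ"]) / k
    free = (m1["free"] * m2["free"] + m1["free"] * m2["unk"]
            + m1["unk"] * m2["free"]) / k
    unk = (m1["unk"] * m2["unk"]) / k
    return {"occ": occ, "free": free, "unk": unk}

# Strong occupancy evidence fused with weak free-space evidence:
fused = dempster_combine(
    {"occ": 0.6, "free": 0.0, "unk": 0.4},
    {"occ": 0.0, "free": 0.2, "unk": 0.8},
)
```

After fusion the masses still sum to one, and the occupancy belief dominates while some mass remains on "unknown" — the uncertainty quantification that geometric and deep ISMs are compared on here.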

Demo Video

We hope our paper, data and code can help in your research. If this is the case, please cite:

@INPROCEEDINGS{9575715,
  author={van Kempen, Raphael and Lampe, Bastian and Woopen, Timo and Eckstein, Lutz},
  booktitle={2021 IEEE Intelligent Vehicles Symposium (IV)}, 
  title={A Simulation-based End-to-End Learning Framework for Evidential Occupancy Grid Mapping}, 
  year={2021},
  pages={934-939},
  doi={10.1109/IV48863.2021.9575715}}

Content

Installation

We suggest creating a new conda environment with all required packages. This will automatically install the GPU version of TensorFlow with CUDA and cuDNN if an NVIDIA GPU is available.

# EviLOG/
conda env create -f environment.yml

Alternatively, it is possible to install all package dependencies in a Python 3.7 environment (e.g. by using virtualenv) with pip. Note that CMake must be installed to build the point-pillars package.

# EviLOG/
pip install -r requirements.txt

Data

We provide all data that is required to reproduce the results in our paper. The EviLOG dataset comprises:

  • Synthetic training and validation data consisting of lidar point clouds (as pcd files) and evidential occupancy grid maps (as png files)
    • 10,000 training samples
    • 1,000 validation samples
    • 100 test samples
  • Real-world input data that was recorded with a Velodyne VLP32C lidar sensor during a ~9-minute ride in an urban area (5,224 point clouds).
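Since each synthetic sample pairs a pcd point cloud with a png label map, a loader typically matches the two by filename. The sketch below is hypothetical — the dataset's actual directory names and naming scheme may differ — and only assumes that an input and its label share a filename stem.

```python
# Hypothetical sketch of pairing input point clouds (.pcd) with label
# occupancy grid maps (.png) by filename stem. The real dataset layout
# and naming convention may differ.
from pathlib import Path

def pair_samples(input_dir, label_dir):
    """Return (pcd_path, png_path) pairs whose filename stems match."""
    labels = {p.stem: p for p in Path(label_dir).glob("*.png")}
    return [(p, labels[p.stem])
            for p in sorted(Path(input_dir).glob("*.pcd"))
            if p.stem in labels]
```

Inputs without a matching label (such as the real-world recordings, which have no ground truth) simply yield no pair.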

We are very interested in the impact of our provided dataset. Please fill out the dataset request form and we will send you a download link that is valid for 24 hours.

Note: Download size is approximately 6.8 GB, uncompressed size is approximately 11.8 GB.

Put the downloaded tar archive into the data folder and extract it:

# EviLOG/data/
tar xvf EviLOG_2021.tar.gz

Training

Use the scripts model/train.py, model/evaluate.py, and model/predict.py to train a model, evaluate it on validation data, and make predictions on a testing dataset or the provided real-world input point clouds.

Input directories, training parameters, and more can be set via CLI arguments or in a config file. Run the scripts with the --help flag or see one of the provided exemplary config files for reference.
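A config file might look like the hypothetical excerpt below. Only the parameter names are taken from this README; the paths are illustrative, and the real keys and defaults are in model/config.yml.

```yaml
# hypothetical excerpt – consult model/config.yml for the actual keys
input-validation: ../data/input_valid      # illustrative path
label-validation: ../data/label_valid      # illustrative path
model-weights: output/<YOUR-TIMESTAMP>/Checkpoints/best_weights.hdf5
output-dir-testing: output/<YOUR-TIMESTAMP>/Predictions
```

CLI arguments of the same names override the values in the config file.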

Training

Start training the model by passing the provided config file model/config.yml.

# EviLOG/model/
export TF_FORCE_GPU_ALLOW_GROWTH=true  # try this if cuDNN fails to initialize
./train.py -c config.yml

You can visualize training progress by pointing TensorBoard to the output directory (model/output by default). Training metrics will also be printed to stdout.

Evaluation

Before evaluating your trained model on the test data, set the parameter model-weights to point to the best_weights.hdf5 file in the Checkpoints folder of your model's output directory.

# EviLOG/model/
./evaluate.py -c config.yml --input-validation ../data/input_test --label-validation ../data/label_test --model-weights output/<YOUR-TIMESTAMP>/Checkpoints/best_weights.hdf5

The evaluation results will be exported to the Evaluation folder in your model directory. This also comprises a comparison between occupancy grid maps predicted by the neural network and grid maps created using a simple geometric inverse sensor model.
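The geometric inverse sensor model used as a baseline here follows a well-known pattern, sketched below for the 2D case (this is not the repository's implementation): cells traversed by a lidar ray are marked free, the cell containing the return is marked occupied, and untouched cells remain unknown.

```python
# Illustrative sketch of a simple geometric inverse sensor model (2D):
# trace each lidar ray through the grid, mark traversed cells free and
# the endpoint cell occupied; everything else stays unknown.

UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the line from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    x, y = x0, y0
    while True:
        cells.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells

def geometric_ism(size, sensor, endpoints):
    """Build a size x size grid from 2D lidar endpoints in grid coords."""
    grid = [[UNKNOWN] * size for _ in range(size)]
    for ex, ey in endpoints:
        ray = bresenham(sensor[0], sensor[1], ex, ey)
        for cx, cy in ray[:-1]:           # free space along the ray
            if grid[cy][cx] != OCCUPIED:  # occupied evidence wins
                grid[cy][cx] = FREE
        grid[ey][ex] = OCCUPIED           # lidar return -> occupied
    return grid
```

Such a purely geometric model cannot infer states behind the first return or resolve ambiguous input, which is exactly where the learned deep ISM in the comparison is expected to do better.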

Left: Input lidar point cloud. Middle: baseline OGM created by geometric ISM. Right: OGM predicted by deep ISM


Testing

To actually see the predictions your network makes, try it out on unseen input point clouds, such as the provided test data or real-world input point clouds. The predicted occupancy grid maps are exported to the directory specified by the parameter output-dir-testing.

Prediction using synthetic test data:

# EviLOG/model/
./predict.py -c config.yml --model-weights output/<YOUR-TIMESTAMP>/Checkpoints/best_weights.hdf5 --prediction-dir output/<YOUR-TIMESTAMP>/Predictions

Prediction using real-world input point clouds:

# EviLOG/model/
./predict.py -c config.yml --input-testing ../data/input_real --model-weights output/<YOUR-TIMESTAMP>/Checkpoints/best_weights.hdf5 --prediction-dir output/<YOUR-TIMESTAMP>/Predictions-Real

Acknowledgement

This research was accomplished within the project "UNICARagil" (FKZ 16EMO0289). We acknowledge the financial support for the project by the Federal Ministry of Education and Research of Germany (BMBF).
