
cattaneod / CMRNet

Licence: other
Code for "CMRNet: Camera to LiDAR-Map Registration" (ITSC 2019) - WIP

Programming Languages

python
139335 projects - #7 most used programming language
Cuda
1817 projects
C++
36643 projects - #6 most used programming language

Projects that are alternatives of or similar to CMRNet

Self-Driving-Car-NanoDegree-Udacity
This repository contains code and writeups for projects and labs completed as part of Udacity's first-of-its-kind self-driving car nanodegree program.
Stars: ✭ 29 (-58.57%)
Mutual labels:  localization, sensor-fusion
DeepI2P
DeepI2P: Image-to-Point Cloud Registration via Deep Classification. CVPR 2021
Stars: ✭ 130 (+85.71%)
Mutual labels:  localization, point-cloud
GA SLAM
🚀 SLAM for autonomous planetary rovers with global localization
Stars: ✭ 40 (-42.86%)
Mutual labels:  localization, sensor-fusion
FAST LIO LOCALIZATION
A simple localization framework that can re-localize in built maps based on FAST-LIO.
Stars: ✭ 194 (+177.14%)
Mutual labels:  localization
HybVIO
HybVIO visual-inertial odometry and SLAM system
Stars: ✭ 261 (+272.86%)
Mutual labels:  sensor-fusion
hordes-loc
Community driven Hordes.io string localization
Stars: ✭ 57 (-18.57%)
Mutual labels:  localization
MinkLoc3D
MinkLoc3D: Point Cloud Based Large-Scale Place Recognition
Stars: ✭ 83 (+18.57%)
Mutual labels:  point-cloud
LPD-net
LPD-Net: 3D Point Cloud Learning for Large-Scale Place Recognition and Environment Analysis, ICCV 2019, Seoul, Korea
Stars: ✭ 75 (+7.14%)
Mutual labels:  point-cloud
Monocular-Vehicle-Localization
Estimating the orientation and the relative dimensions of vehicles by producing a 3d bounding frame
Stars: ✭ 28 (-60%)
Mutual labels:  monocular
mobile-sdk-ios
Crowdin iOS SDK delivers all new translations from a Crowdin project to the application immediately
Stars: ✭ 93 (+32.86%)
Mutual labels:  localization
www.gafam.info
Sources of the gafam.info web page
Stars: ✭ 14 (-80%)
Mutual labels:  localization
android-api-loquacious
Loquacious is a localized remote resource manager library for Android
Stars: ✭ 24 (-65.71%)
Mutual labels:  localization
bugzilla-tw
Bugzilla traditional Chinese localization files
Stars: ✭ 25 (-64.29%)
Mutual labels:  localization
frustum-convnet
The PyTorch Implementation of F-ConvNet for 3D Object Detection
Stars: ✭ 228 (+225.71%)
Mutual labels:  point-cloud
mobility-actiontext
Translate Rails Action Text rich text with Mobility.
Stars: ✭ 27 (-61.43%)
Mutual labels:  localization
ACKLocalization
Localize your Cocoa apps from Google Spreadsheet
Stars: ✭ 18 (-74.29%)
Mutual labels:  localization
laravel-query-localization
Easy Localization for Laravel
Stars: ✭ 50 (-28.57%)
Mutual labels:  localization
Open-Infra-Platform
This is the official repository of the open-source Open Infra Platform software (as of April 2020).
Stars: ✭ 26 (-62.86%)
Mutual labels:  point-cloud
NSVLocalizationKit
Localize directly from a Storyboard or Xib; it automatically updates all texts after an in-app language change, without a single line of code
Stars: ✭ 21 (-70%)
Mutual labels:  localization
Point2Sequence
Point2Sequence: Learning the Shape Representation of 3D Point Clouds with an Attention-based Sequence to Sequence Network
Stars: ✭ 34 (-51.43%)
Mutual labels:  point-cloud


CMRNet: Camera to LiDAR-Map Registration (IEEE ITSC 2019)

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).

News

Check out our new paper "CMRNet++: Map and Camera Agnostic Monocular Visual Localization in LiDAR Maps".


2022/05/19

  • Updated to support later versions of PyTorch (1.4+) and CUDA 11.x

2020/06/24

2020/05/11

  • We released the SLAM ground truth files, see Local Maps Generation.
  • Multi-GPU training.
  • Added requirements.txt

Code

CMRNet Teaser

PyTorch implementation of CMRNet.

This code is provided "as is", without warranty of any kind. This version only works on GPUs (no CPU version is available).
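
Since the code requires a GPU, a quick sanity check that PyTorch can see a CUDA device (a minimal snippet, not part of the repository):

import torch

# CMRNet only runs on GPUs: verify a CUDA device is visible before installing.
assert torch.cuda.is_available(), "No CUDA-capable GPU found"
print(torch.cuda.get_device_name(0))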

Tested on:

  • Ubuntu 16.04/18.04
  • Python 3.6
  • CUDA 9/10/11.x
  • PyTorch 1.0.1/1.10

Dependencies (this list is not complete): see requirements.txt.

Installation

Install CUDA, PyTorch, and CuPy, making sure every package is built against the same CUDA version.
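
One way to confirm that PyTorch and CuPy target the same CUDA version (a minimal check, not part of the repository):

import torch
import cupy

# Both should report the same major CUDA version (e.g. 11.x).
print("PyTorch built with CUDA:", torch.version.cuda)
print("CuPy CUDA runtime:", cupy.cuda.runtime.runtimeGetVersion())  # e.g. 11030 = 11.3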

⚠️ For CUDA 11.x, please uncomment line 17 in models/CMRNet/correlation_package/setup.py.

Install the prerequisite packages:

pip install -r requirements.txt

And finally, install the correlation_cuda and visibility packages:

cd models/CMRNet/correlation_package/
python setup.py install
cd ../../..
python setup.py install

It is recommended to use a dedicated conda environment.

Data

We trained and tested CMRNet on the KITTI odometry sequences 00, 03, 05, 06, 07, 08, and 09.

We used a LiDAR-based SLAM system to generate the ground truths.

The data loader requires a local point cloud for each camera frame. The point cloud must be expressed with respect to the camera_2 reference frame, BUT (very important) with a different axis convention: X-forward, Y-right, Z-down.
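
As an illustration of that convention, a minimal NumPy sketch (a hypothetical helper, not part of the repository) of the permutation from the camera_2 axes (X-right, Y-down, Z-forward) to the required X-forward, Y-right, Z-down layout:

import numpy as np

def camera_to_loader_axes(pts):
    # pts: (N, 3) points in the camera_2 frame (X-right, Y-down, Z-forward).
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # Forward becomes X, right becomes Y, down becomes Z.
    return np.stack([z, x, y], axis=1)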

For reading speed and file size, we save the point clouds as h5 files.
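
A minimal sketch for inspecting one of these files with h5py (the dataset key 'PC' is an assumption; check preprocess/kitti_maps.py for the actual name):

import h5py
import numpy as np

with h5py.File('KITTI_ODOMETRY/00/local_maps/000000.h5', 'r') as f:
    print(list(f.keys()))        # see which datasets are stored
    pc = np.asarray(f['PC'])     # assumed key for the local point cloud
    print(pc.shape)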

The directory structure should look like this:

KITTI_ODOMETRY
├── 00
│   ├── image_2
│   │   ├── 000000.png
│   │   ├── 000001.png
│   │   ├── ...
│   │   └── 004540.png
│   ├── local_maps
│   │   ├── 000000.h5
│   │   ├── 000001.h5
│   │   ├── ...
│   │   └── 004540.h5
│   └── poses.csv
└── 03
    ├── image_2
    │   ├── 000000.png
    │   ├── 000001.png
    │   ├── ...
    │   └── 000800.png
    ├── local_maps
    │   ├── 000000.h5
    │   ├── 000001.h5
    │   ├── ...
    │   └── 000800.h5
    └── poses.csv
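
Before training, it may help to verify that every image has a matching local map; a small helper (not part of the repository):

from pathlib import Path

seq = Path('KITTI_ODOMETRY/00')
images = {p.stem for p in (seq / 'image_2').glob('*.png')}
maps = {p.stem for p in (seq / 'local_maps').glob('*.h5')}
print(f'{len(images - maps)} frames without a local map')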

Local Maps Generation

To generate the h5 files, use the script preprocess/kitti_maps.py with the ground truth files in data/.

In sequence 08, the SLAM failed to detect a loop closure, so the poses are not coherent around it. We therefore split the map at frame 3000 to obtain two coherent maps for that sequence.

python preprocess/kitti_maps.py --sequence 00 --kitti_folder ./KITTI_ODOMETRY/
python preprocess/kitti_maps.py --sequence 03 --kitti_folder ./KITTI_ODOMETRY/
python preprocess/kitti_maps.py --sequence 05 --kitti_folder ./KITTI_ODOMETRY/
python preprocess/kitti_maps.py --sequence 06 --kitti_folder ./KITTI_ODOMETRY/
python preprocess/kitti_maps.py --sequence 07 --kitti_folder ./KITTI_ODOMETRY/
python preprocess/kitti_maps.py --sequence 08 --kitti_folder ./KITTI_ODOMETRY/ --end 3000
python preprocess/kitti_maps.py --sequence 08 --kitti_folder ./KITTI_ODOMETRY/ --start 3000
python preprocess/kitti_maps.py --sequence 09 --kitti_folder ./KITTI_ODOMETRY/

Single iteration example

Training:

python main_visibility_CALIB.py with batch_size=24 data_folder=./KITTI_ODOMETRY/ epochs=300 max_r=10 max_t=2 BASE_LEARNING_RATE=0.0001 savemodel=./checkpoints/ test_sequence=0

Evaluation:

python evaluate_iterative_single_CALIB.py with test_sequence=00 maps_folder=local_maps data_folder=./KITTI_ODOMETRY/ weight="['./checkpoints/weights.tar']"

Iterative refinement example

Training:

python main_visibility_CALIB.py with batch_size=24 data_folder=./KITTI_ODOMETRY/sequences/ epochs=300 max_r=10 max_t=2   BASE_LEARNING_RATE=0.0001 savemodel=./checkpoints/ test_sequence=0
python main_visibility_CALIB.py with batch_size=24 data_folder=./KITTI_ODOMETRY/sequences/ epochs=300 max_r=2  max_t=1   BASE_LEARNING_RATE=0.0001 savemodel=./checkpoints/ test_sequence=0
python main_visibility_CALIB.py with batch_size=24 data_folder=./KITTI_ODOMETRY/sequences/ epochs=300 max_r=2  max_t=0.6 BASE_LEARNING_RATE=0.0001 savemodel=./checkpoints/ test_sequence=0

Evaluation:

python evaluate_iterative_single_CALIB.py with test_sequence=00 maps_folder=local_maps data_folder=./KITTI_ODOMETRY/sequences/ weight="['./checkpoints/iter1.tar','./checkpoints/iter2.tar','./checkpoints/iter3.tar']"

Pretrained Model

The weights for the three iterations, trained on sequences 03, 05, 06, 07, 08, and 09, are available here: Iteration 1, Iteration 2, Iteration 3
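
A minimal sketch for inspecting a downloaded checkpoint (the internal layout of the .tar files, e.g. a 'state_dict' entry, is an assumption; print the keys first):

import torch

checkpoint = torch.load('./checkpoints/iter1.tar', map_location='cpu')
print(checkpoint.keys())  # inspect the saved entries before loading weights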

Results:

              Median Transl. error   Median Rotation error
Iteration 1   0.46 m                 1.60°
Iteration 2   0.25 m                 1.14°
Iteration 3   0.20 m                 0.97°

Paper

"CMRNet: Camera to LiDAR-Map Registration"

If you use CMRNet, please cite:

@InProceedings{cattaneo2019cmrnet,
  author={Cattaneo, Daniele and Vaghi, Matteo and Ballardini, Augusto Luis and Fontana, Simone and Sorrenti, Domenico Giorgio and Burgard, Wolfram},
  booktitle={2019 IEEE Intelligent Transportation Systems Conference (ITSC)},
  title={CMRNet: Camera to LiDAR-Map Registration},
  year={2019},
  pages={1283-1289},
  doi={10.1109/ITSC.2019.8917470},
  month={Oct}
}

If you use the ground truths, please also cite:

@INPROCEEDINGS{Caselitz_2016, 
  author={T. {Caselitz} and B. {Steder} and M. {Ruhnke} and W. {Burgard}}, 
  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, 
  title={Monocular camera localization in 3D LiDAR maps}, 
  year={2016},
  pages={1926-1931},
  doi={10.1109/IROS.2016.7759304}
}

Acknowledgments

correlation_package was taken from flownet2.

PWCNet.py is a modified version of the original PWC-DC network.

Contacts

Daniele Cattaneo ([email protected] or [email protected])
