ethz-asl / Hierarchical_loc

License: BSD-3-Clause
Deep image retrieval for efficient 6-DoF localization


Hierarchical Localization

⚠️ ⚠️ For a clean and research-friendly implementation of Hierarchical Localization, please refer to the code of our CVPR 2019 paper at ethz-asl/hfnet. ⚠️ ⚠️

This repository contains the training and deployment code used in our paper Leveraging Deep Visual Descriptors for Hierarchical Efficient Localization presented at CoRL 2018. This work introduces MobileNetVLAD, a mobile-friendly image retrieval deep neural network that significantly improves the performance of classical 6-DoF visual localization through a hierarchical search.
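The hierarchical search itself can be sketched as follows: a query image is first matched against the whole map using compact global descriptors, and expensive local matching is then restricted to the shortlisted keyframes. This is a simplified NumPy illustration of the retrieval step only, not the actual implementation (function and variable names are hypothetical):

```python
import numpy as np

def retrieve_keyframes(query_desc, map_descs, k=5):
    """Shortlist the k map keyframes whose global descriptors are
    closest to the query. Descriptors are assumed L2-normalized,
    so the dot product equals the cosine similarity."""
    sims = map_descs @ query_desc      # similarity to every keyframe
    return np.argsort(-sims)[:k]       # indices of the top-k keyframes

# Toy map of 4 keyframe descriptors (dimension 3 for illustration).
map_descs = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [0.6, 0.8, 0.0]])
query = np.array([0.6, 0.8, 0.0])
top = retrieve_keyframes(query, map_descs, k=2)
# Local 2D-3D matching and PnP+RANSAC against only these keyframes
# then yield the 6-DoF pose (that step is not released here).
```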


The approach is described in detail in our video (click to play).

We introduce here two main components:

  • The deployment code of MobileNetVLAD: global-loc, a C++ ROS/Catkin package that can
    • load any trained image retrieval model,
    • efficiently perform inference on GPU or CPU,
    • index a given map and save it as a protobuf,
    • and retrieve keyframes given a query image;
  • The training code: retrievalnet, a modular Python+TensorFlow package that can
    • train the model on any target image domain,
    • with the supervision of any existing teacher network.

The modularity of our system makes it possible to train a model and index a map on a powerful workstation while performing the retrieval on a mobile platform. Our code has thus been extensively tested on an NVIDIA Jetson TX2, a platform widely used in robotics research.


Retrieval on our Zurich dataset: strong illumination and viewpoint changes.

Deployment

The package relies on map primitives provided by maplab, but can be easily adapted to other SLAM frameworks. We thus do not release the code performing the local matching. The trained MobileNetVLAD is provided in global-loc/models/ and is loaded using tensorflow_catkin.

Installation

Both Ubuntu 14.04 and 16.04 are supported. First install the system packages required by maplab.

Then setup the Catkin workspace:

export ROS_VERSION=kinetic #(Ubuntu 16.04: kinetic, Ubuntu 14.04: indigo)
export CATKIN_WS=~/maplab_ws
mkdir -p $CATKIN_WS/src
cd $CATKIN_WS
catkin init
catkin config --merge-devel # Necessary for catkin_tools >= 0.4.
catkin config --extend /opt/ros/$ROS_VERSION
catkin config --cmake-args \
	-DCMAKE_BUILD_TYPE=Release \
	-DENABLE_TIMING=1 \
	-DENABLE_STATISTICS=1 \
	-DCMAKE_CXX_FLAGS="-fext-numeric-literals -msse3 -msse4.1 -msse4.2" \
	-DCMAKE_CXX_STANDARD=14
cd src

If you want to perform the inference on GPU (see the requirements of tensorflow_catkin), add:

catkin config --append-args --cmake-args -DUSE_GPU=ON

Finally clone the repository and build:

git clone https://github.com/ethz-asl/hierarchical_loc.git --recursive
touch hierarchical_loc/catkin_dependencies/maplab_dependencies/3rd_party/eigen_catkin/CATKIN_IGNORE
touch hierarchical_loc/catkin_dependencies/maplab_dependencies/3rd_party/protobuf_catkin/CATKIN_IGNORE
cd $CATKIN_WS && catkin build global_loc

Run the test examples:

./devel/lib/global_loc/test_inference
./devel/lib/global_loc/test_query_index

Indexing

Given a VI map in global-loc/maps/, an index of global descriptors can be created in global-loc/data/:

./devel/lib/global_loc/build_index \
	--map_name <map_name> \
	--model_name mobilenetvlad_depth-0.35 \
	--proto_name <index_name.pb>

As an example, we provide the Zurich map used in our paper. Several indexing options are available in place-retrieval.cc, such as subsampling or mission selection.

Retrieval

An example query is provided in test_query_index.cc. Descriptor indexes for the Zurich dataset are included in global-loc/data/ and can be used to time the queries:

./devel/lib/global_loc/time_query \
	--map_name <map_name> \
	--model_name mobilenetvlad_depth-0.35 \
	--proto_name lindenhof_afternoon_aligned_mobilenet-d0.35.pb \
	--query_mission f6837cac0168580aa8a66be7bbb20805 \
	--use_pca --pca_dims 512 --max_num_queries 100
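The --use_pca and --pca_dims flags project the global descriptors to a lower dimension before matching. The idea behind such a projection can be sketched in NumPy (a generic illustration of PCA reduction, not the package's actual implementation):

```python
import numpy as np

def fit_pca(descs, dims):
    """Learn a PCA projection from a matrix of descriptors (n x d)."""
    mean = descs.mean(axis=0)
    # Principal components via SVD of the centered descriptor matrix.
    _, _, vt = np.linalg.svd(descs - mean, full_matrices=False)
    return mean, vt[:dims]               # keep the top `dims` components

def apply_pca(desc, mean, components):
    proj = components @ (desc - mean)
    return proj / np.linalg.norm(proj)   # re-normalize for cosine search

rng = np.random.default_rng(0)
descs = rng.normal(size=(100, 32))       # 100 descriptors of dimension 32
mean, comps = fit_pca(descs, dims=8)
reduced = apply_pca(descs[0], mean, comps)
assert reduced.shape == (8,)
```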

Use the same indexes to evaluate and visualize the retrieval: install retrievalnet, generate the Python protobuf interface, and refer to tango_evaluation.ipynb and tango_visualize_retrieval.ipynb.
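Retrieval evaluation typically reduces to a recall@k metric: the fraction of queries for which at least one of the top-k retrieved keyframes is a true positive. A minimal sketch of that metric (the actual evaluation lives in the notebooks above; names here are hypothetical):

```python
import numpy as np

def recall_at_k(retrieved, positives, k):
    """Fraction of queries for which at least one of the top-k
    retrieved keyframes is a ground-truth positive."""
    hits = [len(set(r[:k]) & set(p)) > 0
            for r, p in zip(retrieved, positives)]
    return float(np.mean(hits))

# Two toy queries: ranked retrievals and their true positive keyframes.
retrieved = [[3, 1, 0], [2, 0, 1]]
positives = [[3], [1]]
assert recall_at_k(retrieved, positives, 1) == 0.5  # only query 0 hits at k=1
assert recall_at_k(retrieved, positives, 3) == 1.0
```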

Training

We use distillation to compress the original NetVLAD model into a smaller MobileNetVLAD with mobile real-time inference capability.
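The distillation objective can be illustrated as follows: the student (MobileNetVLAD) is trained to regress the global descriptors produced by the teacher (NetVLAD) on the same images. A minimal NumPy sketch of one common choice of loss, a squared error between normalized descriptors (the exact loss used in retrievalnet may differ):

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def distillation_loss(student_descs, teacher_descs):
    """Mean squared error between L2-normalized global descriptors,
    averaged over a batch of images."""
    s = l2_normalize(student_descs)
    t = l2_normalize(teacher_descs)
    return float(np.mean(np.sum((s - t) ** 2, axis=-1)))

# When the student reproduces the teacher's descriptors, the loss is 0.
student = np.array([[1.0, 0.0], [0.0, 1.0]])
teacher = np.array([[1.0, 0.0], [0.0, 1.0]])
assert distillation_loss(student, teacher) == 0.0
```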

Installation

Python 3.5 is required. It is advised to run the following installation commands within a virtual environment. You will be prompted to provide the path to a data folder (subsequently referred to as $DATA_PATH) containing the datasets and pre-trained models, and to an experiment folder ($EXPER_PATH) containing the trained models, training logs, and exported descriptors for evaluation.

cd retrievalnet && make install

Exporting the target descriptors

If you wish to train MobileNetVLAD on the Google Landmarks dataset as done in our paper, you first need to download the index of images and then download the dataset itself with download_google_landmarks.py. The weights of the original NetVLAD model are provided by netvlad_tf_open and should be extracted into $DATA_PATH/weights/.

Finally export the descriptors of Google Landmarks:

python export_descriptors.py config/netvlad_export_distill.yaml google_landmarks/descriptors --as_dataset

Training MobileNetVLAD

Extract the MobileNet encoder pre-trained on ImageNet in $DATA_PATH/weights/ and run:

python train.py config/mobilenetvlad_train_distill.yaml mobilenetvlad

The training can be interrupted at any time with Ctrl+C and can be monitored with TensorBoard summaries saved in $EXPER_PATH/mobilenetvlad/. The weights are also saved there.

Exporting the model for deployment

python export_model.py config/mobilenetvlad_train_distill.yaml mobilenetvlad

will export the model in $EXPER_PATH/saved_models/mobilenetvlad/.

Evaluating on the NCLT dataset

Download the NCLT sequences in $DATA_PATH/nclt/ along with the corresponding pose files (generated with nclt_generate_poses.ipynb). Export the NCLT descriptors, e.g. for MobileNetVLAD:

python export_descriptors.py configs/mobilenetvlad_export_nclt.yaml mobilenetvlad

These can be used to evaluate and visualize the retrieval (see nclt_evaluation.ipynb and nclt_visualize_retrieval.ipynb).

Citation

Please consider citing the corresponding publication if you use this work in an academic context:

@inproceedings{hloc2018,
  title={Leveraging Deep Visual Descriptors for Hierarchical Efficient Localization},
  author={Sarlin, P.-E. and Debraine, F. and Dymczyk, M. and Siegwart, R. and Cadena, C.},
  booktitle={Conference on Robot Learning (CoRL)},
  year={2018}
}