
oravus / lostX

License: MIT
(RSS 2018) LoST - Visual Place Recognition using Visual Semantics for Opposite Viewpoints across Day and Night

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to lostX

seq2single
Visual place recognition from opposing viewpoints under extreme appearance variations
Stars: ✭ 15 (-75%)
Mutual labels:  place-recognition, day-night-cycle, visual-place-recognition
S2DHM
Sparse-to-Dense Hypercolumn Matching for Long-Term Visual Localization (3DV 2019)
Stars: ✭ 64 (+6.67%)
Mutual labels:  visual
Pyicp Slam
Full-python LiDAR SLAM using ICP and Scan Context
Stars: ✭ 155 (+158.33%)
Mutual labels:  recognition
Idcardrecognition
🇨🇳 Recognition of second-generation mainland China ID cards 🆔: automatically reads the information on the card (name, gender, ethnicity, address, ID number) and crops the ID-card photo. iOS developer chat groups: ① 446310206 ② 426087546
Stars: ✭ 191 (+218.33%)
Mutual labels:  recognition
Architectural Floor Plan
AFPlan is an architectural floor plan analysis and recognition system to create extended plans for building services.
Stars: ✭ 165 (+175%)
Mutual labels:  recognition
Labelimg
🖍️ LabelImg is a graphical image annotation tool for labeling object bounding boxes in images
Stars: ✭ 16,088 (+26713.33%)
Mutual labels:  recognition
Crnn.pytorch
Convolutional recurrent network in pytorch
Stars: ✭ 1,914 (+3090%)
Mutual labels:  recognition
Log
Daily logging tool and data visualizer.
Stars: ✭ 30 (-50%)
Mutual labels:  visual
zero-shot-indoor-localization-release
The official code and datasets for "Zero-Shot Multi-View Indoor Localization via Graph Location Networks" (ACMMM 2020)
Stars: ✭ 44 (-26.67%)
Mutual labels:  place-recognition
Speechtotext Websockets Javascript
SDK & Sample to do speech recognition using websockets in Javascript
Stars: ✭ 191 (+218.33%)
Mutual labels:  recognition
Opencv Course
Learn OpenCV in 4 Hours - Code used in my Python and OpenCV course on freeCodeCamp.
Stars: ✭ 185 (+208.33%)
Mutual labels:  recognition
Text Detector
Tool which allows you to detect and translate text.
Stars: ✭ 173 (+188.33%)
Mutual labels:  recognition
Deephccr
Offline Handwritten Chinese Character Recognition based on GoogLeNet and AlexNet (With CaffeModel)
Stars: ✭ 242 (+303.33%)
Mutual labels:  recognition
Lc Finder
An image annotation and object detection tool written in C
Stars: ✭ 163 (+171.67%)
Mutual labels:  recognition
ascii.js
A web-font-based rendering engine for displaying DOS/Amiga ASCII artwork on the web as text
Stars: ✭ 25 (-58.33%)
Mutual labels:  visual
Faceid
An implementation of YOLO v2 for direct facial recognition within the detection layer.
Stars: ✭ 144 (+140%)
Mutual labels:  recognition
Deep Text Recognition Benchmark
Text recognition (optical character recognition) with deep learning methods.
Stars: ✭ 2,665 (+4341.67%)
Mutual labels:  recognition
Php Opencv
php wrapper for opencv
Stars: ✭ 194 (+223.33%)
Mutual labels:  recognition
MinkLocMultimodal
MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition
Stars: ✭ 65 (+8.33%)
Mutual labels:  place-recognition
GuneyOzsanOutThereMusicVideo
Procedurally generated, real-time, demoscene style, open source music video made with Unity 3D for Out There by Guney Ozsan.
Stars: ✭ 26 (-56.67%)
Mutual labels:  visual

LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics

This is the source code for the paper "LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics", [arXiv][RSS 2018 Proceedings].

[Image: An example output showing Keypoint Correspondences]

[Image: Flowchart of the proposed approach]

If you find this work useful, please cite it as:
Sourav Garg, Niko Suenderhauf, and Michael Milford. LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics. In Proceedings of Robotics: Science and Systems XIV, 2018.
bibtex:

@article{garg2018lost,
  title={LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics},
  author={Garg, Sourav and Suenderhauf, Niko and Milford, Michael},
  journal={Proceedings of Robotics: Science and Systems XIV},
  year={2018}
}

Please also consider citing RefineNet, as mentioned on their GitHub page.

Setup and Run

Dependencies

  • Ubuntu (Tested on 14.04)
  • RefineNet
    • Required primarily for visual semantic information. Dense descriptors based on convolutional feature maps are also extracted from the same network.
    • A modified fork of RefineNet's code is used in this work to simultaneously store convolutional dense descriptors.
    • Requires Matlab (Tested on 2017a)
  • Python (Tested on 2.7; a quick version check is sketched after this list)
    • numpy (Tested on 1.11.1, 1.14.2)
    • scipy (Tested on 0.13.3, 0.17.1)
    • skimage (Minimum Required 0.13.1)
    • sklearn (Tested on 0.14.1, 0.19.1)
    • h5py (Tested on 2.7.1)
  • Docker (optional, recommended, tested on 17.12.0-ce)
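
A quick way to confirm that the Python dependencies are importable and to check their versions (a minimal sketch; the import names below are the usual ones for these packages):

    from __future__ import print_function
    # Hypothetical environment check for the dependencies listed above.
    import numpy, scipy, skimage, sklearn, h5py

    for mod in (numpy, scipy, skimage, sklearn, h5py):
        print(mod.__name__, mod.__version__)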

Download

  1. In your workspace, clone the repositories:
    git clone https://github.com/oravus/lostX.git
    cd lostX
    git clone https://github.com/oravus/refinenet.git
    
    NOTE: If you download this repository as a zip, the RefineNet fork will not be downloaded automatically because it is a git submodule.
  2. Download the ResNet-101 model pre-trained on the Cityscapes dataset from here or here. More details are available on RefineNet's GitHub page.
    • Place the downloaded model's .mat file in the refinenet/model_trained/ directory.
  3. If you are using Docker, download the Docker image:
    docker pull souravgarg/vpr-lost-kc:v1
    

Run

  1. Generate and store semantic labels and dense convolutional descriptors from RefineNet's conv5 layer. In the MATLAB workspace, from the refinenet/main/ directory, run:

    demo_predict_mscale_cityscapes
    

    The above will use the sample dataset from the refinenet/datasets/ directory. You can set the path to your data in demo_predict_mscale_cityscapes.m through the variables datasetName and img_data_dir.
    You might have to run vl_compilenn before running the demo; please refer to the instructions for running RefineNet in their official README.md.

  2. [For Docker users]
    If you already have an environment with Python and the other dependencies installed, skip this step; otherwise, run a Docker container:

    docker run -it -v PATH_TO_YOUR_HOME_DIRECTORY/:/workspace/ souravgarg/vpr-lost-kc:v1 /bin/bash
    

    From within the Docker container, navigate to the lostX/lost_kc/ directory.
    The -v option mounts PATH_TO_YOUR_HOME_DIRECTORY to the /workspace directory within the container.

  3. Reformat and pre-process RefineNet's output from the lostX/lost_kc/ directory:

    python reformat_data.py -p $PATH_TO_REFINENET_OUTPUT
    

    $PATH_TO_REFINENET_OUTPUT should be set to the parent directory of predict_result_full, for example, ../refinenet/cache_data/test_examples_cityscapes/1-s_result_20180427152622_predict_custom_data/predict_result_1/

  4. Compute the LoST descriptors (a simplified sketch of the idea appears after this list):

    python LoST.py -p $PATH_TO_REFINENET_OUTPUT 
    
  5. Repeat steps 1, 3, and 4 to generate output for the other dataset by setting the variable datasetName to 2-s.

  6. Perform place matching using the LoST-descriptor-based difference matrix and keypoint correspondences (see the sketch after this list):

    python match_lost_kc.py -n 10 -f 0 -p1 $PATH_TO_REFINENET_OUTPUT_1  -p2 $PATH_TO_REFINENET_OUTPUT_2
    

Note: Run python FILENAME -h for any of the Python source files in steps 3, 4, and 6 for a description of the arguments they accept.
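
For intuition about what steps 4 and 6 compute, the sketch below illustrates the general idea: a LoST-style descriptor aggregates the dense conv5 descriptors per semantic class, and place matching compares the descriptors of the two traverses through a cosine difference matrix. This is a simplified, hypothetical illustration with made-up names, not the actual code in LoST.py or match_lost_kc.py; in particular, the repository's matching additionally uses keypoint correspondences.

    # Simplified, hypothetical sketch of the LoST idea; NOT the repository's code.
    import numpy as np
    from scipy.spatial.distance import cdist

    def lost_like_descriptor(conv_feats, labels, classes):
        """conv_feats: (H, W, D) dense conv5 features; labels: (H, W) semantic labels."""
        parts = []
        for c in classes:
            mask = labels == c
            if mask.any():
                part = conv_feats[mask].mean(axis=0)   # mean descriptor over pixels of class c
            else:
                part = np.zeros(conv_feats.shape[-1])  # class absent from this image
            norm = np.linalg.norm(part)
            parts.append(part / norm if norm > 0 else part)
        return np.concatenate(parts)

    def difference_matrix(descs1, descs2):
        """Cosine-distance matrix between the descriptors of two traverses; lower = more similar."""
        return cdist(np.vstack(descs1), np.vstack(descs2), metric='cosine')

    # Example: best match in traverse 2 for each image of traverse 1.
    # matches = difference_matrix(descs1, descs2).argmin(axis=1)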

License

The code is released under the MIT License.

Related Projects

Delta Descriptors (2020)

CoarseHash (2020)

seq2single (2019)
