Zero-Shot Indoor Localization

[Paper] [Poster] [Video]

The official evaluation code of the paper Zero-Shot Multi-View Indoor Localization via Graph Location Networks, accepted at ACM MM 2020. This repo also includes the two datasets (ICUBE & WCP) used in the paper and useful code snippets for reading them.

Please cite our paper if you use our code/datasets or feel inspired by our work :)

@inproceedings{chiou2020zero,
  title={Zero-Shot Multi-View Indoor Localization via Graph Location Networks},
  author={Chiou, Meng-Jiun and Liu, Zhenguang and Yin, Yifang and Liu, An-An and Zimmermann, Roger},
  booktitle={Proceedings of the 28th ACM International Conference on Multimedia},
  pages={3431--3440},
  year={2020}
}

If you use the ICUBE dataset in particular, you may also want to cite the following paper:

@inproceedings{liu2017multiview,
  title={Multiview and multimodal pervasive indoor localization},
  author={Liu, Zhenguang and Cheng, Li and Liu, Anan and Zhang, Luming and He, Xiangnan and Zimmermann, Roger},
  booktitle={Proceedings of the 25th ACM international conference on Multimedia},
  pages={109--117},
  year={2017}
}

Datasets

Images

Note that for both datasets, each image name follows the pattern {xx}_000{yy}.jpg, where xx is the location ID and yy in {01, 02, ..., 12} indexes the different views of the location; views 01-04, 05-08, and 09-12 form three sets of four views each.
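For illustration, here is a minimal sketch of parsing an image name into its location ID and view set; the helper name parse_image_name is ours, not part of this repo:

import os

def parse_image_name(filename):
    # Parse a name like '12_00003.jpg' into (location_id, view_id, view_set).
    stem = os.path.splitext(os.path.basename(filename))[0]
    loc_str, view_str = stem.split("_")
    loc_id = int(loc_str)            # xx: location ID
    view_id = int(view_str)          # yy: view index in 1..12
    view_set = (view_id - 1) // 4    # 0 for views 01-04, 1 for 05-08, 2 for 09-12
    return loc_id, view_id, view_set

print(parse_image_name("12_00003.jpg"))  # -> (12, 3, 0)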

  • ICUBE Dataset: collected at the ICUBE building (21 Heng Mui Keng Terrace, Singapore 119613) at NUS. Put the image data under data/icube/.
  • West Coast Plaza (WCP) Dataset: collected at the WCP shopping mall (154 West Coast Rd, Singapore 127371). Put the image data under data/wcp/.

Annotations

Annotations are provided in this repository. Each file is described as follows:

  • {dataset}_path.txt -> Each line represents a "path" or "road" and contains three numbers: (1) road ID, (2) start location ID, (3) end location ID.
  • loc_vec.npy -> A 2D array of size (n_location, 2), where row i holds the coordinates of location i, in increasing order (the first row is location 1). Generated from {dataset}_path.txt.
  • adjacency_matrix.npy -> (for zero-shot use) Adjacency matrix of the locations: a 2D array of size (n_location + 1, n_location + 1). The first row and column should be omitted before use. Generated from loc_vec.npy. Note that the adjacency_matrix.npy file for the wcp dataset is currently missing; you have to generate it yourself according to wcp_path.txt (see the sketch after this list).
  • nonzl_loc_to_dict.pkl -> (for zero-shot use) A dictionary that maps the dataset's seen location IDs to (training) class IDs.
  • all_loc_to_dict.pkl -> (for zero-shot use) A dictionary that maps all of the dataset's seen & unseen location IDs to (training & validation) class IDs.
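Since the wcp adjacency matrix is missing, here is a minimal sketch of generating it from wcp_path.txt under the three-numbers-per-line format described above; the file paths and the location count of 394 (inferred from loc_vec_trained_394.npy) are assumptions:

import numpy as np

def build_adjacency_matrix(path_file, n_location):
    # Symmetric 0/1 adjacency matrix of size (n_location + 1, n_location + 1),
    # with the first row and column unused since location IDs start at 1.
    adj = np.zeros((n_location + 1, n_location + 1), dtype=np.float32)
    with open(path_file) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 3:
                continue  # skip blank or malformed lines
            _road_id, start, end = (int(p) for p in parts)
            adj[start, end] = adj[end, start] = 1.0
    return adj

adj = build_adjacency_matrix("data/wcp/wcp_path.txt", 394)  # hypothetical path
np.save("data/wcp/adjacency_matrix.npy", adj)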

Pre-trained Feature

  • loc_vec_trained_{214,394}.npy -> (for zero-shot use) Node features trained with the proposed Map2Vec via compute-loc_vec.py (214 for icube, 394 for wcp).
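For reference, a short sketch of reading the pre-trained features and the label-mapping annotations with NumPy and pickle; the file paths are assumptions and should be adjusted to where these files live in your checkout:

import pickle
import numpy as np

# Trained Map2Vec node features: one row per location
loc_vec_trained = np.load("loc_vec_trained_214.npy")  # icube
print(loc_vec_trained.shape)

# Seen location IDs -> training class IDs
with open("nonzl_loc_to_dict.pkl", "rb") as f:
    seen_loc_to_class = pickle.load(f)

# All (seen & unseen) location IDs -> training & validation class IDs
with open("all_loc_to_dict.pkl", "rb") as f:
    all_loc_to_class = pickle.load(f)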

Installation

  • Python 3.6 or higher
  • PyTorch 1.1 (possibly compatible with versions from 0.4 to 1.5)
  • Torchvision (to be installed along with PyTorch)
  • Other packages in requirements.txt (including torch-geometric==1.3.0)
conda create -n zsgln python=3.6 scipy numpy
conda activate zsgln
# Go to https://pytorch.org/get-started/previous-versions/ to install PyTorch with a compatible GPU (CUDA) version
# Then install the other requirements as follows
pip install -r requirements.txt

Checkpoints

Download the trained models here and put them into the checkpoints folder following the predefined structure below. You can also run ./download_checkpoints.sh to download them automatically (you need to install gdown first via pip install gdown).

checkpoints
|-icube
  |-standard
    |-resnet152-best_model-gln.pth
    |-resnet152-best_model-gln_att.pth
  |-zero-shot
    |-resnet152-best_model-baseline.pth
    |-resnet152-best_model-gln.pth
    |-resnet152-best_model-gln_att.pth
|-wcp
  |-standard
    |-resnet152-best_model-gln.pth
    |-resnet152-best_model-gln_att.pth
  |-zero-shot
    |-resnet152-best_model-baseline.pth
    |-resnet152-best_model-gln.pth
    |-resnet152-best_model-gln_att.pth

Evaluation

The default dataset is icube; change it to wcp as needed. Note that the code assumes a single GPU and takes up around 2 GB of GPU memory. Running on CPU only requires some changes to the code.

Ordinary Indoor Localization

Note that the first number in the printed top1_count corresponds to the meter-level accuracy in Table 1, and the first 6 numbers correspond to the CDF curves in Figure 5.

GLN (GCN based)

python eval-gln.py --network gcn --dataset icube --ckpt checkpoints/icube/standard/resnet152-best_model-gln.pth

GLN + Attention (GAT based)

python eval-gln.py --network gat --dataset icube --ckpt checkpoints/icube/standard/resnet152-best_model-gln_att.pth

Zero-shot Indoor Localization

Note that the printed top1_count corresponds to CDF@k, and Top{1,2,3,5,10} Acc corresponds to Recall@k in Table 2. MED is computed manually by linear interpolation, finding the distance at which top1_count (the CDF) reaches 0.5.
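For reference, a minimal sketch of that interpolation, assuming top1_count is a list of cumulative accuracies (CDF values) at unit-spaced distance thresholds starting at 1; the helper name is ours:

def median_error_distance(top1_count):
    # Find the distance at which the CDF first reaches 0.5, by linear
    # interpolation between adjacent thresholds.
    prev = 0.0  # CDF at distance 0
    for k, cur in enumerate(top1_count):
        if cur >= 0.5:
            # interpolate between distances k and k + 1
            return k + (0.5 - prev) / (cur - prev)
        prev = cur
    raise ValueError("CDF never reaches 0.5")

print(median_error_distance([0.2, 0.45, 0.7, 0.9]))  # -> 2.2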

Baseline

python eval-zs_gln.py --network baseline --dataset icube --ckpt checkpoints/icube/zero-shot/resnet152-best_model-baseline.pth

GLN (GCN based)

python eval-zs_gln.py --network gcn --dataset icube --ckpt checkpoints/icube/zero-shot/resnet152-best_model-gln.pth

GLN + Attention (GAT based)

python eval-zs_gln.py --network gat --dataset icube --ckpt checkpoints/icube/zero-shot/resnet152-best_model-gln_att.pth

(Optional) Computing Map2Vec

Refer to compute-loc_vec.py for computing Map2Vec embeddings for both icube and wcp datasets.

Note that the adjacency_matrix.npy file for the wcp dataset is currently missing; you have to generate it yourself according to wcp_path.txt (see the sketch in the Annotations section). However, for verification/evaluation purposes only, you may skip this step and use loc_vec_trained_394.npy directly.

License

Our code & datasets are released under the MIT license. See LICENSE for additional details.

Enquiry

Feel free to drop an email to [email protected]
