
Suoivy / LPD-net

Licence: other
LPD-Net: 3D Point Cloud Learning for Large-Scale Place Recognition and Environment Analysis, ICCV 2019, Seoul, Korea

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to LPD-net

MinkLocMultimodal
MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition
Stars: ✭ 65 (-13.33%)
Mutual labels:  point-cloud, place-recognition
MinkLoc3D
MinkLoc3D: Point Cloud Based Large-Scale Place Recognition
Stars: ✭ 83 (+10.67%)
Mutual labels:  point-cloud, place-recognition
PlaceRecognition-LoopDetection
Light-weight place recognition and loop detection using road markings
Stars: ✭ 50 (-33.33%)
Mutual labels:  place-recognition
persee-depth-image-server
Stream openni2 depth images over the network
Stars: ✭ 21 (-72%)
Mutual labels:  point-cloud
FLAT
[ICCV2021 Oral] Fooling LiDAR by Attacking GPS Trajectory
Stars: ✭ 52 (-30.67%)
Mutual labels:  point-cloud
LiDAR fog sim
LiDAR fog simulation
Stars: ✭ 101 (+34.67%)
Mutual labels:  point-cloud
zed-ros2-wrapper
ROS 2 wrapper beta for the ZED SDK
Stars: ✭ 61 (-18.67%)
Mutual labels:  point-cloud
WS3D
Official version of 'Weakly Supervised 3D object detection from Lidar Point Cloud'(ECCV2020)
Stars: ✭ 104 (+38.67%)
Mutual labels:  point-cloud
efficient online learning
Efficient Online Transfer Learning for 3D Object Detection in Autonomous Driving
Stars: ✭ 20 (-73.33%)
Mutual labels:  point-cloud
Python-for-Remote-Sensing
python codes for remote sensing applications will be uploaded here. I will try to teach everything I learn during my projects in here.
Stars: ✭ 20 (-73.33%)
Mutual labels:  point-cloud
pyRANSAC-3D
A python tool for fitting primitives 3D shapes in point clouds using RANSAC algorithm
Stars: ✭ 253 (+237.33%)
Mutual labels:  point-cloud
DeepI2P
DeepI2P: Image-to-Point Cloud Registration via Deep Classification. CVPR 2021
Stars: ✭ 130 (+73.33%)
Mutual labels:  point-cloud
ECCV-2020-point-cloud-analysis
ECCV 2020 papers focusing on point cloud analysis
Stars: ✭ 22 (-70.67%)
Mutual labels:  point-cloud
fastDesp-corrProp
Fast Descriptors and Correspondence Propagation for Robust Global Point Cloud Registration
Stars: ✭ 16 (-78.67%)
Mutual labels:  point-cloud
YOHO
[ACM MM 2022] You Only Hypothesize Once: Point Cloud Registration with Rotation-equivariant Descriptors
Stars: ✭ 76 (+1.33%)
Mutual labels:  point-cloud
pcc geo cnn v2
Improved Deep Point Cloud Geometry Compression
Stars: ✭ 55 (-26.67%)
Mutual labels:  point-cloud
mix3d
Mix3D: Out-of-Context Data Augmentation for 3D Scenes (3DV 2021 Oral)
Stars: ✭ 183 (+144%)
Mutual labels:  point-cloud
pcl-edge-detection
Edge-detection application with PointCloud Library
Stars: ✭ 32 (-57.33%)
Mutual labels:  point-cloud
CVPR-2020-point-cloud-analysis
CVPR 2020 papers focusing on point cloud analysis
Stars: ✭ 48 (-36%)
Mutual labels:  point-cloud
point based clothing
Official PyTorch code for the paper: "Point-Based Modeling of Human Clothing" (ICCV 2021)
Stars: ✭ 57 (-24%)
Mutual labels:  point-cloud

LPD-Net: 3D Point Cloud Learning for Large-Scale Place Recognition and Environment Analysis

ICCV 2019, Seoul, Korea

Zhe Liu1, Shunbo Zhou1, Chuanzhe Suo1, Peng Yin3, Wen Chen1, Hesheng Wang2, Haoang Li1, Yun-Hui Liu1

1The Chinese University of Hong Kong, 2Shanghai Jiao Tong University, 3Carnegie Mellon University

[Figure: LPD-Net network architecture]

Introduction

Point-cloud-based place recognition is still an open problem because of the difficulty of extracting local features from a raw 3D point cloud and aggregating them into a global descriptor, and it becomes even harder in large-scale dynamic environments. We develop a novel deep neural network, named LPD-Net (Large-scale Place Description Network), which extracts discriminative and generalizable global descriptors from the raw 3D point cloud. The arXiv version of LPD-Net can be found here.

@InProceedings{Liu_2019_ICCV,
author = {Liu, Zhe and Zhou, Shunbo and Suo, Chuanzhe and Yin, Peng and Chen, Wen and Wang, Hesheng and Li, Haoang and Liu, Yun-Hui},
title = {LPD-Net: 3D Point Cloud Learning for Large-Scale Place Recognition and Environment Analysis},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}

Benchmark Datasets

The benchmark datasets introduced in this work can be downloaded here. They were created by PointNetVLAD for point-cloud-based retrieval for place recognition, using the open-source Oxford RobotCar dataset. Details can be found in PointNetVLAD.

  • All submaps are in binary file format (see the loading sketch below)
  • Ground-truth GPS coordinates of the submaps are found in the corresponding csv file for each run
  • Filenames of the submaps are their timestamps, which are consistent with the timestamps in the csv files
  • The csv files are used to define positive and negative point clouds
  • All submaps are preprocessed with the road removed and downsampled to 4096 points
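
For reference, here is a minimal loading sketch. It assumes that each submap is a flat float64 binary that reshapes to 4096 x 3 (the PointNetVLAD format) and that the csv files contain timestamp, northing, and easting columns; the run folder name is a placeholder.

import numpy as np
import pandas as pd

# Placeholder run folder; substitute a real run from benchmark_datasets/oxford/
run = "benchmark_datasets/oxford/<run>"
locations = pd.read_csv(run + "/pointcloud_locations_20m_10overlap.csv")  # assumed columns: timestamp, northing, easting

timestamp = int(locations.iloc[0]["timestamp"])
points = np.fromfile(run + "/pointcloud_20m_10overlap/%d.bin" % timestamp, dtype=np.float64)
points = points.reshape(-1, 3)  # expected shape: (4096, 3)
print(points.shape)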

Oxford Dataset

  • 45 sets in total of full and partial runs
  • Both full and partial runs are used for training, but only full runs are used for testing/inference
  • Training submaps are found in the folder "pointcloud_20m_10overlap/" and the corresponding csv file is "pointcloud_locations_20m_10overlap.csv"
  • Training submaps are not mutually disjoint per run
  • Each training submap covers ~20 m of car trajectory, and subsequent submaps are ~10 m apart (see the tuple sketch after this list)
  • Test/inference submaps are found in the folder "pointcloud_20m/" and the corresponding csv file is "pointcloud_locations_20m.csv"
  • Test/inference submaps are mutually disjoint
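
The way positives and negatives are drawn from the location csv files can be pictured with the sketch below. It assumes the PointNetVLAD-style convention of treating submaps within roughly 10 m of the anchor as positives and excluding anything within roughly 50 m from the negative pool; the exact thresholds used by the scripts in generating_queries/ may differ.

import pandas as pd
from sklearn.neighbors import KDTree

# Placeholder csv path; use any training run's location file
df = pd.read_csv("benchmark_datasets/oxford/<run>/pointcloud_locations_20m_10overlap.csv")
coords = df[["northing", "easting"]].values
tree = KDTree(coords)

# Illustrative thresholds: positives within 10 m of the anchor, negatives beyond 50 m
anchor = coords[:1]
positives = tree.query_radius(anchor, r=10)[0]
too_close = tree.query_radius(anchor, r=50)[0]
negatives = sorted(set(range(len(df))) - set(too_close))
print(len(positives), len(negatives))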

Project Code

Pre-requisites

  • Python
  • CUDA
  • Tensorflow
  • Scipy
  • Pandas
  • Sklearn

The code was tested with Python 3 on TensorFlow 1.4.0 with CUDA 8.0 and on TensorFlow 1.12.0 with CUDA 9.0.

sudo apt-get install python3-pip python3-dev python-virtualenv
virtualenv --system-site-packages -p python3 ~/tensorflow
source ~/tensorflow/bin/activate
pip install --upgrade pip
pip3 install --upgrade tensorflow-gpu==1.4.0
pip install scipy pandas scikit-learn
pip install glog

Dataset set-up

Download the zip file of the benchmark datasets found here and extract it into the same directory as the project code, so that the directory contains two folders: 1) benchmark_datasets/ and 2) LPD_net/
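
A quick sanity check of this layout, run from the common parent directory (the folder names are the ones listed above):

import os

# Expected layout after extraction: both folders side by side in the current directory
assert os.path.isdir("benchmark_datasets"), "extract the benchmark datasets here"
assert os.path.isdir("LPD_net"), "place the project code here"
print("Directory layout looks correct.")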

Data pre-processing

We first preprocess the benchmark datasets and store the point cloud features in bin files to save training time. These files only need to be generated once and are then used as the network input; generating them may take a few hours.

# For pre-processing dataset to generate pointcloud with features
python prepare_data.py

# Parse arguments: --k_start 20 --k_end 100 --k_step 10 --bin_core_num 10
# k-NN neighborhood sizes from 20 to 100 in steps of 10; parallel process pool size: 10
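
As an illustration of the kind of per-point neighborhood features this step computes, the sketch below derives simple eigenvalue-based shape features over k-nearest-neighbor neighborhoods. It is only a sketch of the idea, not the exact feature set produced by prepare_data.py.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def eigenvalue_features(points, k=20):
    # points: (N, 3) array; returns (N, 3) linearity/planarity/scattering features
    nbrs = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nbrs.kneighbors(points)
    feats = []
    for neighborhood in points[idx]:                     # (k, 3) neighbors of each point
        cov = np.cov(neighborhood.T)                     # 3x3 local covariance
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # eigenvalues, descending
        evals = evals / (evals.sum() + 1e-12)
        linearity = (evals[0] - evals[1]) / (evals[0] + 1e-12)
        planarity = (evals[1] - evals[2]) / (evals[0] + 1e-12)
        scattering = evals[2] / (evals[0] + 1e-12)
        feats.append([linearity, planarity, scattering])
    return np.asarray(feats)

# Repeating this for k = 20, 30, ..., 100 and concatenating the results mirrors
# the multi-scale neighborhoods selected by --k_start/--k_end/--k_step above.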

Generate pickle files

We store the positive and negative point clouds for each anchor in pickle files that are used by the training and evaluation code. These files only need to be generated once; generating them may take a few minutes.

cd generating_queries/ 

# For training tuples in LPD-Net
python generate_training_tuples_baseline.py

# For network evaluation
python generate_test_sets.py

# For network inference
python generate_inference_sets.py 
# Modify the variables (folders or index_list) to specify the target folders
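
To check what was generated, the pickle can be inspected as below. The file name and the PointNetVLAD-style dictionary layout are assumptions; adjust them to whatever the scripts actually write.

import pickle

# Assumed output name of generate_training_tuples_baseline.py
with open("training_queries_baseline.pickle", "rb") as f:
    queries = pickle.load(f)

# Assumed PointNetVLAD-style entry: {"query": <bin file>, "positives": [...], "negatives": [...]}
entry = queries[0]
print(entry["query"], len(entry["positives"]), len(entry["negatives"]))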

Model Training and Evaluation

To train our network, run the following command:

python train_lpdnet.py
# Parse Arguments: --model lpd_FNSF --log_dir log/ --restore

# For example, Train lpd_FNSF network from scratch
python train_lpdnet.py --model lpd_FNSF

# Retrain lpd_FNSF network with pretrained model
python train_lpdnet.py --log_dir log/<model_folder,eg.lpd_FNSF-18-11-16-12-03> --restore

To evaluate the model, run the following command:

python evaluate.py --log_dir log/<model_folder,eg.lpd_FNSF-18-11-16-12-03>
# The results.txt will be saved in results/<model_folder>

To infer the model, run the following command to get global descriptors:

python inference.py --log_dir log/<model_folder,eg.lpd_FNSF-18-11-16-12-03>
# The inference_vectors.bin will be saved in LPD_net folder
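
A minimal sketch of using the resulting global descriptors for place retrieval; the float32 dtype and the 256-dimensional descriptor size are assumptions, so adjust them to the actual network output.

import numpy as np
from sklearn.neighbors import KDTree

# Assumed dtype/descriptor size; reshape(-1, D) with the real output dimension D
descriptors = np.fromfile("inference_vectors.bin", dtype=np.float32).reshape(-1, 256)

tree = KDTree(descriptors)
dist, idx = tree.query(descriptors[:1], k=5)   # top-5 most similar submaps to the first query
print(idx[0], dist[0])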

Pre-trained Models

The pre-trained model for the lpd_FNSF network can be downloaded here. Put it under the log/ folder.

License

This repository is released under the MIT License (see the LICENSE file for details).
