joshi-bharat / deep_underwater_localization

License: GPL-3.0
Source Code for "DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization", accepted to IROS 2020

DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization

Source Code for the paper DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization, accepted to IROS 2020. [Paper]

Introduction

We propose a real-time deep-learning approach for determining the 6D relative pose of an Autonomous Underwater Vehicle (AUV) from a single image. Due to the profound difficulty of collecting ground-truth images with accurate 6D poses underwater, this work uses rendered images from an Unreal Engine simulation for training. An image-to-image translation network is employed to bridge the gap between the rendered and the real images, producing synthetic images for training. The proposed method predicts the 6D pose of an AUV from a single image as 2D image keypoints representing the 8 corners of the AUV's 3D model, and the 6D pose in camera coordinates is then determined using RANSAC-based PnP.
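The keypoint-to-pose step can be illustrated with a minimal NumPy sketch: given the known 3D corner coordinates of the AUV model and their predicted 2D image locations, a plain DLT solver recovers the camera-frame pose. This is only a simplified stand-in for the RANSAC-based PnP used in the paper (e.g. OpenCV's solvePnPRansac); the function names below are illustrative, not from this repository, and the solver assumes noise-free, outlier-free correspondences.

```python
import numpy as np

def project(K, R, t, pts3d):
    """Pinhole projection of Nx3 world points to Nx2 pixel coordinates."""
    cam = pts3d @ R.T + t            # world -> camera frame
    uv = cam @ K.T                   # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective divide

def dlt_pose(K, pts3d, pts2d):
    """Recover (R, t) from 2D-3D correspondences with a plain DLT.

    Needs >= 6 non-coplanar points and assumes exact, outlier-free matches;
    a RANSAC-based PnP (as in the paper) is what you want with real detections.
    """
    # normalize pixel coordinates by the intrinsics
    ones = np.ones((len(pts2d), 1))
    norm = (np.linalg.inv(K) @ np.hstack([pts2d, ones]).T).T
    rows = []
    for (X, Y, Z), (u, v, _) in zip(pts3d, norm):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    M = Vt[-1].reshape(3, 4)                 # [R|t] up to scale and sign
    M /= np.linalg.norm(M[2, :3])            # third row of R has unit norm
    if pts3d[0] @ M[2, :3] + M[2, 3] < 0:    # points must sit in front of camera
        M = -M
    U, _, Vt2 = np.linalg.svd(M[:, :3])      # snap the 3x3 part to a rotation
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt2)]) @ Vt2
    return R, M[:, 3]
```

With the 8 corners of a cube as the object model, projecting under a known pose and solving recovers that pose; real predicted keypoints are noisy, which is why the paper wraps PnP in RANSAC.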

Click on the image for the DeepURL YouTube video

Citation

If you find DeepURL useful in your research, please consider citing:

@misc{joshi2020deepurl,
    title={DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization},
    author={Bharat Joshi and Md Modasshir and Travis Manderson and Hunter Damron and Marios Xanthidis and Alberto Quattrini Li and Ioannis Rekleitis and Gregory Dudek},
    year={2020},
    archivePrefix={arXiv}
}

Installation

Packages

  • Python 3, TensorFlow >= 1.8.0, NumPy, tqdm, opencv-python

Tested on

  • Ubuntu 18.04
  • TensorFlow 1.15.0
  • Python 3.7.6
  • CUDA Toolkit 10.0

Running Demo on Single Image

There are some images from the pool dataset under ./data/demo_data. You can run the demo on a single image by

  1. Download the pretrained DeepURL checkpoint, deepurl_checkpoint.zip, from [GitHub Release] and extract it.
  2. python test_single_image.py --input_image data/demo_data/1537054379109932699.jpeg --checkpoint_dir path_to_extracted_checkpoint

Sample Result:

Training

Note: DeepURL currently supports only one object class.

  1. Download the pretrained Darknet TensorFlow checkpoint, darknet_weight_checkpoint.zip, from [GitHub Release]. Extract the checkpoint and place it inside the ./data/darknet_weights/ directory.

  2. Download the synthetic dataset, synthetic.zip, obtained after image-to-image translation using CycleGAN, from [AFRL DeepURL Dataset] and extract it. The training file is available as ./data/deepurl/train.txt. Each line represents one image in the format image_index image_absolute_path img_width img_height label_index 2D_bounding_box 3D_keypoint_projection. The 2D_bounding_box fields are x_min y_min x_max y_max, where (x_min, y_min) is the top-left and (x_max, y_max) the bottom-right corner. 3D_keypoint_projection contains the image projections of the 8 corners of the 3D model of Aqua (or any other object you want to use).

    For example:

    0 xxx/xxx/45162.png 800 600 0 445 64 571 234 505 151 519 243 546 227 555 209 586 191 440 119 466 105 458 61 489 44
    1 xxx/xxx/3621.png 800 600 0 194 181 560 475 400 300 356 509 305 417 207 422 166 358 620 243 602 169 442 245 422 191
    

    To train, change image_absolute_path to the directory where you downloaded and extracted the synthetic dataset.

    Please refer to this link for a detailed explanation of how to create labels for your own dataset.
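As a sketch of how one line of train.txt could be parsed, the helper below follows the field layout described above. It is hypothetical, not part of the repository; note that the example lines carry nine keypoint pairs because the projection of the 3D center is appended after the 8 corners.

```python
def parse_train_line(line):
    """Parse one annotation line:
    image_index image_path img_width img_height label_index
    x_min y_min x_max y_max  k1_x k1_y ... (keypoint pairs)
    """
    parts = line.split()
    return {
        "index": int(parts[0]),
        "path": parts[1],
        "size": (int(parts[2]), int(parts[3])),            # (width, height)
        "label": int(parts[4]),
        "bbox": tuple(int(v) for v in parts[5:9]),         # x_min, y_min, x_max, y_max
        "keypoints": [(int(parts[i]), int(parts[i + 1]))   # projected model points
                      for i in range(9, len(parts), 2)],
    }
```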

  3. Start the training

    python train.py

    The hyper-parameters and their annotations can be found in args.py. The projection of the 3D Aqua center is also appended at the end of each line for future use. Change nV to 9 in args.py if you want to use the center of the object as a keypoint for training.

Testing on Pool Dataset

  1. Download the pretrained DeepURL checkpoint, deepurl_checkpoint.zip, from [GitHub Release] and extract it.

  2. Download the pool dataset, pool.zip, from [AFRL DeepURL Dataset] and extract it. The testing file is available as ./data/deepurl/pool_test.txt. Each line represents one image in the format image_index image_absolute_path img_width img_height label_index 3D_keypoint_projection. 3D_keypoint_projection contains the image projections of the 8 corners of the 3D model of Aqua (or any other object you want to use).

    For example:

    0 xxx/xxx/1536966071809545499.jpeg 800 600 0 630 278 644 237 436 287 432 249 582 111 589 68 433 125 430 85 278 644
    1 xxx/xxx/1536966073192620099.jpeg 800 600 0 590 385 593 336 400 407 392 361 523 260 522 222 389 279 384 242 385 593
    

    To test on the pool dataset, change image_absolute_path to the directory where you downloaded and extracted the pool dataset.

  3. Start testing

    python test_image_list.py --image_list data/deepurl/pool_test.txt --checkpoint_dir path_to_extracted_checkpoint
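For a quick sanity check of results on the pool set, one simple measure is the mean pixel distance between predicted and ground-truth keypoint projections. This helper is an illustrative sketch, not part of the repository or the paper's evaluation code:

```python
import math

def mean_keypoint_error(pred, gt):
    """Mean Euclidean pixel error between predicted and
    ground-truth 2D keypoints (lists of (x, y) pairs)."""
    assert len(pred) == len(gt) and pred, "keypoint lists must match and be non-empty"
    return sum(math.dist(p, g) for p, g in zip(pred, gt)) / len(pred)
```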

Running Demo on GoPro Video

  1. Download the pretrained DeepURL checkpoint, deepurl_checkpoint.zip, from [GitHub Release] and extract it.
  2. Download GoPro Video from [Google Drive]
  3. python test_video.py --test_video path_to_downloaded_test_video --checkpoint_dir path_to_extracted_checkpoint

Acknowledgments

This code is built on the YOLOv3 implementation by GitHub user @wizyoung.

Contact

For any help, enquiries and comments, please contact me at [email protected].
