
Sachini / ronin

License: GPL-3.0
RoNIN: Robust Neural Inertial Navigation in the Wild

Programming Languages

python

Projects that are alternatives to or similar to ronin

microstrain inertial
ROS driver for all of MicroStrain's current G and C series products. To learn more visit
Stars: ✭ 44 (-69.44%)
Mutual labels:  imu, inertial-navigation-systems
rpc-bench
RPC Benchmark of gRPC, Aeron and KryoNet
Stars: ✭ 59 (-59.03%)
Mutual labels:  benchmark
nowplaying-RS-Music-Reco-FM
#nowplaying-RS: Music Recommendation using Factorization Machines
Stars: ✭ 23 (-84.03%)
Mutual labels:  benchmark
caliper-benchmarks
Sample benchmark files for Hyperledger Caliper https://wiki.hyperledger.org/display/caliper
Stars: ✭ 69 (-52.08%)
Mutual labels:  benchmark
graphql-benchmarks
GraphQL benchmarks using the-benchmarker framework.
Stars: ✭ 54 (-62.5%)
Mutual labels:  benchmark
revl
Helps to benchmark code for Autodesk Maya.
Stars: ✭ 14 (-90.28%)
Mutual labels:  benchmark
Embedded UKF Library
A compact Unscented Kalman Filter (UKF) library for Teensy4/Arduino system (or any real time embedded system in general)
Stars: ✭ 31 (-78.47%)
Mutual labels:  imu
php-simple-benchmark-script
A very simple script for testing the speed of PHP operations (rusoft repo mirror)
Stars: ✭ 50 (-65.28%)
Mutual labels:  benchmark
react-benchmark
A tool for benchmarking the render performance of React components
Stars: ✭ 99 (-31.25%)
Mutual labels:  benchmark
typescript-orm-benchmark
⚖️ ORM benchmarking for Node.js applications written in TypeScript
Stars: ✭ 106 (-26.39%)
Mutual labels:  benchmark
node-vs-ruby-io
Node vs Ruby I/O benchmarks when resizing images with libvips.
Stars: ✭ 11 (-92.36%)
Mutual labels:  benchmark
cpm
Continuous Performance Monitor (CPM) for C++ code
Stars: ✭ 39 (-72.92%)
Mutual labels:  benchmark
sets
Benchmarks for set data structures: hash sets, DAWGs, Bloom filters, etc.
Stars: ✭ 20 (-86.11%)
Mutual labels:  benchmark
hood
The plugin to manage benchmarks on your CI
Stars: ✭ 17 (-88.19%)
Mutual labels:  benchmark
map benchmark
Comprehensive benchmarks of C++ maps
Stars: ✭ 132 (-8.33%)
Mutual labels:  benchmark
beapi-bench
Tool for benchmarking APIs. Uses ApacheBench (ab) to generate data and gnuplot for graphing. Adding new features almost daily
Stars: ✭ 16 (-88.89%)
Mutual labels:  benchmark
wasr network
WaSR Segmentation Network for Unmanned Surface Vehicles v0.5
Stars: ✭ 32 (-77.78%)
Mutual labels:  imu
benchmarking-fft
choosing FFT library...
Stars: ✭ 74 (-48.61%)
Mutual labels:  benchmark
ftsb
Full Text Search Benchmark, a tool for comparing and evaluating full-text search engines.
Stars: ✭ 12 (-91.67%)
Mutual labels:  benchmark
facies classification benchmark
The repository includes PyTorch code, and the data, to reproduce the results for our paper titled "A Machine Learning Benchmark for Facies Classification" (published in the SEG Interpretation Journal, August 2019).
Stars: ✭ 79 (-45.14%)
Mutual labels:  benchmark

RoNIN: Robust Neural Inertial Navigation in the Wild

Paper: ICRA 2020, arXiv
Website: http://ronin.cs.sfu.ca/
Demo: https://youtu.be/JkL3O9jFYrE


Requirements

python3, numpy, scipy, pandas, h5py, numpy-quaternion, matplotlib, torch, torchvision, tensorboardX, numba, plyfile, tqdm, scikit-learn
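
The names above largely match their PyPI package names; note that two of them import under a different name (numpy-quaternion imports as quaternion, scikit-learn as sklearn). As a quick sanity check, which is not part of the repository, you can verify that everything is importable in your environment:

    import importlib.util

    # PyPI name vs. import name: numpy-quaternion -> quaternion, scikit-learn -> sklearn
    required = ["numpy", "scipy", "pandas", "h5py", "quaternion", "matplotlib",
                "torch", "torchvision", "tensorboardX", "numba", "plyfile",
                "tqdm", "sklearn"]
    missing = [name for name in required if importlib.util.find_spec(name) is None]
    print("Missing packages:", ", ".join(missing) if missing else "none")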

Data

The dataset used by this project is collected using an App for Google Tango devices and an App for any Android device, and pre-processed to the data format specified here. Please refer to our paper for more details on data collection.

You can download the RoNIN dataset from our project website or HERE. Unfortunately, due to security concerns we were unable to publish 50% of our dataset.
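
If you want to check what a downloaded sequence actually contains, a quick way is to walk its HDF5 tree with h5py. The file path below is only a placeholder; the data-format description referenced above remains the authoritative source for the key layout.

    import h5py

    def print_node(name, obj):
        # visititems callback: prints every group/dataset path and, for datasets, the shape
        print(name, getattr(obj, "shape", ""))

    # Placeholder path: point this at one sequence from the downloaded dataset.
    with h5py.File("path/to/sequence/data.hdf5", "r") as f:
        f.visititems(print_node)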

Optionally, you can write a custom dataloader (e.g., source/data_ridi.py) to load a different dataset.
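
For illustration only, such a dataloader could look roughly like the sketch below. The file keys, the 200 Hz IMU rate, and the mean-velocity target are assumptions made for this example; the actual interface expected by the training scripts is the one implemented by the dataset classes under source/.

    import numpy as np
    import torch
    from torch.utils.data import Dataset

    class CustomInertialDataset(Dataset):
        """Hypothetical example: fixed windows of gyro+accel mapped to mean 2D velocity."""

        def __init__(self, npz_path, window_size=200, step_size=10, imu_rate=200.0):
            data = np.load(npz_path)  # assumed keys: 'gyro' (N,3), 'acce' (N,3), 'pos' (N,2)
            self.features = np.concatenate([data["gyro"], data["acce"]], axis=1)
            self.positions = data["pos"]
            self.window_size = window_size
            self.dt = window_size / imu_rate
            self.ends = np.arange(window_size, len(self.features), step_size)

        def __len__(self):
            return len(self.ends)

        def __getitem__(self, i):
            end = self.ends[i]
            # (6, window) feature window, channels first
            feat = self.features[end - self.window_size:end].T.astype(np.float32)
            # Target: mean planar velocity over the window
            vel = (self.positions[end] - self.positions[end - self.window_size]) / self.dt
            return torch.from_numpy(feat), torch.tensor(vel, dtype=torch.float32)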

Usage:

  1. Clone the repository.
  2. (Optional) Download the dataset and the pre-trained models¹ from HERE.
  3. Position Networks
    1. To train/test RoNIN ResNet model:
      • Run source/ronin_resnet.py with the --mode argument. Please refer to the source code for the full list of command-line arguments. (A short wrapper script that chains the example training commands in this section is sketched after this list.)
      • Example training command: python ronin_resnet.py --mode train --train_list <path-to-train-list> --root_dir <path-to-dataset-folder> --out_dir <path-to-output-folder>.
      • Example testing command: python ronin_resnet.py --mode test --test_list <path-to-test-list> --root_dir <path-to-dataset-folder> --out_dir <path-to-output-folder> --model_path <path-to-model-checkpoint>.
    2. To train/test RoNIN LSTM or RoNIN TCN model:
      • Run source/ronin_lstm_tcn.py with a mode (train/test) and a model type. Please refer to the source code for the full list of command-line arguments. Optionally, you can specify a configuration file such as config/temporal_model_defaults.json with the data paths.
      • Example training command: python ronin_lstm_tcn.py train --type tcn --config <path-to-your-config-file> --out_dir <path-to-output-folder> --use_scheduler.
      • Example testing command: python ronin_lstm_tcn.py test --type tcn --test_list <path-to-test-list> --data_dir <path-to-dataset-folder> --out_dir <path-to-output-folder> --model_path <path-to-model-checkpoint>.
  4. Heading Network
    • Run source/ronin_body_heading.py with a mode (train/test). Please refer to the source code for the full list of command-line arguments. Optionally, you can specify a configuration file such as config/heading_model_defaults.json with the data paths.
    • Example training command: python ronin_body_heading.py train --config <path-to-your-config-file> --out_dir <path-to-output-folder> --weights 1.0,0.2.
    • Example testing command: python ronin_body_heading.py test --config <path-to-your-config-file> --test_list <path-to-test-list> --out_dir <path-to-output-folder> --model_path <path-to-model-checkpoint>.
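
As a convenience, the example commands above can be chained from a short wrapper script (not part of the repository). All paths below are placeholders, and the script is assumed to be run from the repository root so that the entry points live under source/ and config/.

    import subprocess

    ROOT_DIR = "/path/to/dataset"        # placeholder
    TRAIN_LIST = "/path/to/train_list"   # placeholder
    OUT_DIR = "/path/to/output"          # placeholder

    commands = [
        # RoNIN ResNet training (Position Networks, step 3.1)
        ["python", "source/ronin_resnet.py", "--mode", "train",
         "--train_list", TRAIN_LIST, "--root_dir", ROOT_DIR, "--out_dir", OUT_DIR],
        # RoNIN TCN training (Position Networks, step 3.2)
        ["python", "source/ronin_lstm_tcn.py", "train", "--type", "tcn",
         "--config", "config/temporal_model_defaults.json",
         "--out_dir", OUT_DIR, "--use_scheduler"],
        # Heading network training (step 4)
        ["python", "source/ronin_body_heading.py", "train",
         "--config", "config/heading_model_defaults.json",
         "--out_dir", OUT_DIR, "--weights", "1.0,0.2"],
    ]

    for cmd in commands:
        print("Running:", " ".join(cmd))
        subprocess.run(cmd, check=True)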

¹ The models are trained on the entire dataset.

Citation

Please cite the following paper if you use the code, paper, or data:
Herath, S., Yan, H. and Furukawa, Y., 2020, May. RoNIN: Robust Neural Inertial Navigation in the Wild: Benchmark, Evaluations, & New Methods. In 2020 IEEE International Conference on Robotics and Automation (ICRA) (pp. 3146-3152). IEEE.
