qcr / BenchBot

Licence: BSD-3-Clause
BenchBot is a tool for seamlessly testing and evaluating semantic scene understanding tools, both in realistic 3D simulation and on real robots.

~ Our Robotic Vision Scene Understanding (RVSU) Challenge is live on EvalAI ~
(prizes include $2,500 USD provided by the ACRV, and GPUs provided by our sponsor NVIDIA)

~ Our BenchBot tutorial is the best place to get started developing with BenchBot ~

BenchBot Software Stack

[benchbot_web: demonstration of the BenchBot software stack]

The BenchBot software stack is a collection of software packages that allow end users to control robots in real or simulated environments with a simple Python API. It leverages the simple "observe, act, repeat" approach to robot problems prevalent in reinforcement learning communities (OpenAI Gym users will find the BenchBot API very similar).

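To make the "observe, act, repeat" loop concrete, here is a minimal sketch of interacting with the API. The names used (BenchBot, ActionResult, reset(), step(), actions) follow the patterns described in the benchbot_api documentation, but treat them as illustrative; see the benchbot_api repository for the authoritative interface:

# A minimal sketch of the observe-act-repeat loop (names assumed from the
# benchbot_api docs; consult that repository for the authoritative API)
import random

from benchbot_api import ActionResult, BenchBot

robot = BenchBot()
observations, action_result = robot.reset()  # observe the initial state

while action_result == ActionResult.SUCCESS and robot.actions:
    # Act: a real solution would pick an action based on the observations
    # (e.g. RGB-D images, laser scans, poses) rather than at random
    action = random.choice(robot.actions)
    observations, action_result = robot.step(action)  # act, then observe again
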
BenchBot was created as a tool to assist with the research challenges faced by the semantic scene understanding community: understanding a scene in simulation, transferring algorithms to real-world systems, and meaningfully evaluating algorithm performance. We've since realised that these challenges don't just exist for semantic scene understanding; they're prevalent across a wide range of robotic problems.

This led us to create version 2 of BenchBot, with a focus on allowing users to define their own functionality for BenchBot through add-ons. Want to integrate your own environments? Plug in new robot platforms? Define new tasks? Share examples with others? Add evaluation measures? All of this is now possible with add-ons, and you don't have to do anything more than add some YAML and Python files defining your new content!

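As a loose illustration of how little an add-on requires: an evaluation-measure add-on could pair a short YAML description with a single Python function. Everything below (the file name, pairing, and signature) is hypothetical; see the BenchBot Add-ons Manager documentation for the real format:

# my_measure.py: a hypothetical evaluation measure for a BenchBot add-on.
# An accompanying YAML file would declare the measure's name and point to
# this function; the layout and this signature are illustrative only.

def evaluate(results, ground_truth):
    """Score results as the fraction of ground-truth items they recover."""
    found = sum(1 for item in ground_truth if item in results)
    return {'score': found / max(len(ground_truth), 1)}
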
The "bench" in "BenchBot" refers to benchmarking, with our goal to provide a system that greatly simplifies the benchmarking of novel algorithms in both realistic 3D simulation and on real robot platforms. If there is something else you would like to use BenchBot for (like integrating different simulators), please let us know. We're very interested in BenchBot being the glue between your novel robotics research and whatever your robot platform may be.

This repository contains the software stack needed to develop solutions for BenchBot tasks on your local machine. It installs and configures a significant amount of software for you, wraps that software in stable Docker images (~50GB), and provides simple interaction with the stack through four basic scripts: benchbot_install, benchbot_run, benchbot_submit, and benchbot_eval.

System recommendations and requirements

The BenchBot software stack is designed to run seamlessly on a wide range of system configurations (currently limited to Ubuntu 18.04+). System hardware requirements are relatively high due to the software required for 3D simulation (Unreal Engine, Nvidia Isaac, Vulkan, etc.):

  • Nvidia Graphics card (GeForce GTX 1080 minimum, Titan XP+ / GeForce RTX 2070+ recommended)
  • CPU with multiple cores (Intel i7-6800K minimum)
  • 32GB+ RAM
  • 64GB+ spare storage (an SSD storage device is strongly recommended)

A system that meets the above hardware requirements is all that's required to begin installing the BenchBot software stack. The install script analyses your system configuration and offers to interactively install any missing software components. The third-party software components involved include:

  • Nvidia Driver (418+ required, 450+ recommended)
  • CUDA with GPU support (10.0+ required, 10.1+ recommended)
  • Docker Engine - Community Edition (19.03+ required, 19.03.2+ recommended)
  • Nvidia Container Toolkit (1.0+ required, 1.0.5+ recommended)
  • Nvidia Isaac SDK 2019.2 (requires an Nvidia developer login)

Managing your installation

Installation is simple:

u@pc:~$ git clone https://github.com/qcr/benchbot && cd benchbot
u@pc:~$ ./install

Any missing software components or configuration issues with your system should be detected by the install script and resolved interactively. The installer asks whether you want to add the BenchBot helper scripts to your PATH; choosing yes makes the following commands available from any directory: benchbot_install (the same as ./install above), benchbot_run, benchbot_submit, benchbot_eval, and benchbot_batch.

BenchBot installs a default set of add-ons (currently 'benchbot-addons/ssu'), but this can be changed based on how you want to use BenchBot. For example, the following will also install the 'benchbot-addons/sqa' add-ons:

u@pc:~$ benchbot_install --addons benchbot-addons/ssu,benchbot-addons/sqa

See the BenchBot Add-ons Manager's documentation for more information on using add-ons.

The BenchBot software stack frequently checks for updates and can update itself automatically. To update, simply run the install script again (add the --force-clean flag if you would like to install from scratch):

u@pc:~$ benchbot_install

If you decide to uninstall the BenchBot software stack, run:

u@pc:~$ benchbot_install --uninstall

There are a number of other options to customise your BenchBot installation, which are all described by running:

u@pc:~$ benchbot_install --help

Getting started

Getting a solution up and running with BenchBot is as simple as 1, 2, 3. Here's how to use BenchBot with content from the semantic scene understanding add-on:

  1. Run a simulator with the BenchBot software stack by selecting an available robot, environment, and task definition:

    u@pc:~$ benchbot_run --robot carter --env miniroom:1 --task semantic_slam:active:ground_truth
    

    A number of useful flags exist to help you explore what content is available in your installation (see --help for full details). For example, you can list what tasks are available via --list-tasks and view the task specification via --show-task TASK_NAME.

  2. Create a solution to a BenchBot task, and run it against the software stack. To run a solution you must select a mode. For example, if you've created a solution in my_solution.py (a minimal sketch of such a file appears after these steps) that you would like to run natively:

    u@pc:~$ benchbot_submit --native python my_solution.py
    

    See --help for other options. You also have access to all of the examples available in your installation. For instance, you can run the hello_active example in containerised mode via:

    u@pc:~$ benchbot_submit --containerised --example hello_active
    

    See --list-examples and --show-example EXAMPLE_NAME for full details on what's available out of the box.

  3. Evaluate the performance of your system using a supported evaluation method (see --list-methods). To use the omq evaluation method on my_results.json:

    u@pc:~$ benchbot_eval --method omq my_results.json
    

    You can also simply run evaluation automatically after your submission completes:

    u@pc:~$ benchbot_submit --evaluate-with omq --native --example hello_eval_semantic_slam
    

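For reference, a complete solution file such as my_solution.py can be very small. The sketch below uses the Agent interface pattern from the benchbot_api documentation; the method names and signatures are indicative, so check that repository before relying on them:

# my_solution.py: a minimal random agent, assuming benchbot_api's Agent
# interface (method names indicative; see benchbot_api for the real API)
import json
import random

from benchbot_api import ActionResult, Agent, BenchBot


class MyAgent(Agent):
    def is_done(self, action_result):
        # Finish as soon as an action stops reporting success
        return action_result != ActionResult.SUCCESS

    def pick_action(self, observations, action_list):
        # Choose a random available action, with no action arguments
        return random.choice(action_list), {}

    def save_result(self, filename, empty_results, results_format_fns):
        # Persist the (still empty) results; a real solution fills these in
        with open(filename, 'w') as f:
            json.dump(empty_results, f)


BenchBot(agent=MyAgent()).run()
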
The BenchBot Tutorial is a great place to start working with BenchBot; it takes you from a blank system to a working Semantic SLAM solution, with many educational steps along the way. Also remember the examples in your installation (benchbot-addons/examples_base is a good starting point), which show how to get up and running with the BenchBot software stack.

Power tools for autonomous algorithm evaluation

Once you're confident your algorithm solves the chosen task, the BenchBot software stack's power tools let you explore its performance comprehensively. You can autonomously run your algorithm over multiple environments, then evaluate it holistically to produce a single summary statistic of your algorithm's performance. Here are some examples, again using content from the semantic scene understanding add-on:

  • Use benchbot_batch to run your algorithm in a number of environments and produce a set of results. The script has a number of toggles available to customise the process (see --help for full details). To autonomously run your semantic_slam:active:ground_truth algorithm over 3 environments:

    u@pc:~$ benchbot_batch --robot carter --task semantic_slam:active:ground_truth --envs miniroom:1,miniroom:3,house:5 --native python my_solution.py
    

    Or you can use one of the pre-defined environment batches installed via add-ons (e.g. benchbot-addons/batches_isaac):

    u@pc:~$ benchbot_batch --robot carter --task semantic_slam:active:ground_truth --envs-batch develop_1 --native python my_solution.py
    

    Additionally, you can create a results ZIP and request an overall evaluation score at the end of the batch:

    u@pc:~$ benchbot_batch --robot carter --task semantic_slam:active:ground_truth --envs miniroom:1,miniroom:3,house:5 --zip --evaluate-with omq --native python my_solution.py
    

    Lastly, both native and containerised submissions are supported exactly as in benchbot_submit:

    u@pc:~$ benchbot_batch --robot carter --task semantic_slam:active:ground_truth --envs miniroom:1,miniroom:3,house:5 --containerised my_solution_folder/
    
  • You can also directly invoke the holistic evaluation that benchbot_batch performs above through the benchbot_eval script. The script supports a single results file, multiple results files, or a ZIP of multiple results files. See benchbot_eval --help for full details. Below are examples calling benchbot_eval with a series of results files and a ZIP of results respectively:

    u@pc:~$ benchbot_eval --method omq -o my_jsons_scores result_1.json result_2.json result_3.json

    u@pc:~$ benchbot_eval --method omq -o my_zip_scores results.zip
    

Using BenchBot in your research

BenchBot was made to enable and assist the development of high quality, repeatable research results. We welcome any and all use of the BenchBot software stack in your research.

To use our system, we just ask that you cite our paper on the BenchBot system. This helps us follow uses of BenchBot in the research community, and understand how we can improve the system to support future research. Citation details are as follows:

@misc{talbot2020benchbot,
    title={BenchBot: Evaluating Robotics Research in Photorealistic 3D Simulation and on Real Robots},
    author={Ben Talbot and David Hall and Haoyang Zhang and Suman Raj Bista and Rohan Smith and Feras Dayoub and Niko Sünderhauf},
    year={2020},
    eprint={2008.00635},
    archivePrefix={arXiv},
    primaryClass={cs.RO}
}

Components of the BenchBot software stack

The BenchBot software stack is split into a number of standalone components, each with their own GitHub repository and documentation. This repository glues them all together for you into a working system. The components of the stack are:

  • benchbot_api: user-facing Python interface to the BenchBot system, allowing the user to control simulated or real robots in simulated or real world environments through simple commands
  • benchbot_addons: a Python manager for add-ons to a BenchBot system, with full documentation on how to create and add your own add-ons
  • benchbot_supervisor: an HTTP server facilitating communication between user-facing interfaces and the underlying robot controller
  • benchbot_robot_controller: a wrapping script which controls the low-level ROS functionality of a simulator or real robot, handles automated subprocess management, and exposes interaction via an HTTP server
  • benchbot_simulator: a realistic 3D simulator employing Nvidia's Isaac framework, in combination with Unreal Engine environments
  • benchbot_eval: a Python library for evaluating task performance, based on the results produced by a submission

Further information

  • FAQs: a Wiki page providing answers to frequently asked questions and resolutions to common issues
  • Semantic SLAM Tutorial: a tutorial stepping through creating a semantic SLAM system in BenchBot that utilises the 3D object detector VoteNet

Supporters

Development of the BenchBot software stack was directly supported by:

  • Australian Centre for Robotic Vision
  • QUT Centre for Robotics
