uw-advanced-robotics / aruw-vision-platform-2019

License: MIT
ARUW's vision code from the 2019 season. Published here, read-only, for public reference.

Programming Languages

Python, CMake, Shell

Projects that are alternatives to or similar to aruw-vision-platform-2019

Ros Control Center
A web-based control center for ROS robots.
Stars: ✭ 180 (+386.49%)
Mutual labels:  robotics, ros
Cupoch
Robotics with GPU computing
Stars: ✭ 225 (+508.11%)
Mutual labels:  robotics, ros
Ros2 rust
Rust bindings for ROS2
Stars: ✭ 187 (+405.41%)
Mutual labels:  robotics, ros
2019-UGRP-DPoom
2019 DGIST DPoom project under UGRP : SBC and RGB-D camera based full autonomous driving system for mobile robot with indoor SLAM
Stars: ✭ 35 (-5.41%)
Mutual labels:  robotics, ros
interbotix ros manipulators
ROS Packages for Interbotix Arms
Stars: ✭ 32 (-13.51%)
Mutual labels:  robotics, ros
Xpp
Visualization of Motions for Legged Robots in ros-rviz
Stars: ✭ 177 (+378.38%)
Mutual labels:  robotics, ros
Mcl 3dl
A ROS node to perform a probabilistic 3-D/6-DOF localization system for mobile robots with 3-D LIDAR(s). It implements pointcloud based Monte Carlo localization that uses a reference pointcloud as a map.
Stars: ✭ 221 (+497.3%)
Mutual labels:  robotics, ros
Rosnodejs
Client library for writing ROS nodes in JavaScript with nodejs
Stars: ✭ 145 (+291.89%)
Mutual labels:  robotics, ros
linorobot2
Autonomous mobile robots (2WD, 4WD, Mecanum Drive)
Stars: ✭ 97 (+162.16%)
Mutual labels:  robotics, ros
robomaster s1 can hack
DJI RoboMaster S1 CAN Hack
Stars: ✭ 71 (+91.89%)
Mutual labels:  ros, robomaster
Rt gene
RT-GENE: Real-Time Eye Gaze and Blink Estimation in Natural Environments
Stars: ✭ 157 (+324.32%)
Mutual labels:  robotics, ros
direct lidar odometry
Direct LiDAR Odometry: Fast Localization with Dense Point Clouds
Stars: ✭ 202 (+445.95%)
Mutual labels:  robotics, ros
Urdf Viz
Visualize URDF/XACRO files; URDF Viewer works on Windows/macOS/Linux
Stars: ✭ 149 (+302.7%)
Mutual labels:  robotics, ros
smart grasping sandbox
A public sandbox for Shadow's Smart Grasping System
Stars: ✭ 69 (+86.49%)
Mutual labels:  robotics, ros
Roslibpy
Python ROS Bridge library
Stars: ✭ 146 (+294.59%)
Mutual labels:  robotics, ros
Gqcnn
Python module for GQ-CNN training and deployment with ROS integration.
Stars: ✭ 216 (+483.78%)
Mutual labels:  robotics, ros
Aikido
Artificial Intelligence for Kinematics, Dynamics, and Optimization
Stars: ✭ 133 (+259.46%)
Mutual labels:  robotics, ros
Weloveinterns
Intern community of the Intelligent Software Center at the Institute of Software, Chinese Academy of Sciences
Stars: ✭ 143 (+286.49%)
Mutual labels:  robotics, ros
Ros robotics projects
Example codes of new book ROS Robotics Projects
Stars: ✭ 240 (+548.65%)
Mutual labels:  robotics, ros
aerial autonomy
Easily extendable package for interacting with and defining state machines for autonomous aerial systems
Stars: ✭ 22 (-40.54%)
Mutual labels:  robotics, ros

aruw-vision-platform-2019

Introduction

This repository includes all of the vision-related detection, tracking and aiming code from ARUW's 2019 season. This system is responsible for identifying target plates in video footage, calculating a 3D position in the world for that plate, and computing the angle the turret must point to hit the target.

It uses a Machine Learning (ML) model to recognize targets, then uses the Intel RealSense depth feed to compute the 3D point where the plate exists in space. It fuses that point with calculated odometry to negate our own robot's movement, filters the location data to minimize noise, and then computes the aim angles necessary to account for the target's motion while the projectile is in flight. The goal of these additional calculations, as opposed to the traditional approach of pointing directly at the target, is to make our vision system effective at long range.

Capabilities and Results (Software effects display)

Our system is unique among RoboMaster teams' vision systems in three primary ways:

  • We use a machine learning approach to plate detection rather than classical CV. This enables detection in much more diverse environments and positions with minimal manual intervention: there is no brightness or threshold tuning, and extremely angled plates can still be identified.
  • We use a depth camera and robot odometry to track targets in world-relative 3D space. This allows us to do more intelligent correction and prediction than would be possible with a naive camera-frame-relative alignment approach.
  • We do ballistics calculation and correction to account for gravity, target motion, chassis motion, and bullet velocity; this becomes more significant at longer range (see the sketch below).

[Demo clips: Sentinel Test, Real Match Result 1, Real Match Result 2]

Dependencies and Hardware/Software environment

This is the hardware and software we use on our production robots. It is certainly possible to swap out components with minimal effort if desired (e.g., disable GPU support if no GPU is available, or use an alternate camera SDK if no RealSense is available).

Hardware:

  • Linux environment (tested on Ubuntu 18.04, other platforms may also work)
  • Intel RealSense D435
  • NVIDIA Jetson Xavier DevKit
  • Serial (UART) connection to the main controller

Software:

  • ROS Melodic
  • librealsense 2.24+
  • darknet-aruw (our custom fork with minor fixes)
  • JetPack 4.2.1 (includes OpenCV 3.3.1, CUDA 10.0.326)

See below for installation instructions; most of the setup is handled by the provided scripts.

Compilation and Installation

Follow the instructions in scripts/xavier-setup/README.md to set up the dependencies on an Xavier.

To run the whole production system on the Xavier, run roslaunch aruw_common prod.launch. To only do detection without any serial communication, you can replace prod.launch with dev.launch.

To enable auto-startup for production use in a real match, enter the scripts folder and run ./configure-service.sh -e. This will configure the app as a service that runs on boot. To disable the service, run ./configure-service.sh.

Our setup scripts will configure Bash aliases for interacting with the service:

  • vision-status: display the running/stopped status of the service and the most recent log output.
  • vision-start: start the service.
  • vision-stop: stop the service.
  • vision-log: view the service's log output with less.

Structure and Organization

See our Wiki: Software Architecture: File Structure and Organization

Software and Hardware Block Diagrams and Data Flow

See our Wiki: Block Diagrams and Data Flow, along with its accompanying pages.

Principle Introduction and Theoretical Support Analysis

In addition to the architecture information in the previous sections, see the relevant pages on our Wiki.

Software Architecture and Hierarchy Diagram

See our Wiki for the relevant pages.

Roadmap

We have a strong foundation which supports advanced use-cases for vision in the RoboMaster competition. However, there are many areas we can improve on to increase accuracy, responsiveness and consistency:

  • Make the system more robust to RealSense instability. This requires investigating interop issues at the operating system and library level.
  • Decrease round-trip outer-loop time:
    • Optimize execution of neural network detector (both the wrapper code and actual inference time)
    • Investigate moving ballistics and odometry to the main controller, sending only 3D position and velocity rather than turret aim.
  • Improve precision of frame and odometry timestamps (eliminate fudge factors)
  • Unit tests and simulations for all major components

Plans for improving individual modules are located on their respective wiki pages: https://github.com/uw-advanced-robotics/aruw-vision-platform-2019/wiki
