
peasant98 / Bebop-Autonomy-Vision

Licence: other
An autonomous, vision-based Bebop drone.

Programming Languages

C++
36643 projects - #6 most used programming language
Python
139335 projects - #7 most used programming language
CMake
9771 projects
Mustache
554 projects
EmberScript
38 projects
Makefile
30231 projects
Shell
77523 projects

Projects that are alternatives of or similar to Bebop-Autonomy-Vision

drone-webhook
Drone plugin for triggering webhook notifications
Stars: ✭ 40 (+66.67%)
Mutual labels:  drone
One-Shot-Learning
Matching Networks Tensorflow 2 Implementation for few-shot AD diagnosis
Stars: ✭ 22 (-8.33%)
Mutual labels:  keras-tensorflow
zubax gnss
Zubax GNSS module
Stars: ✭ 45 (+87.5%)
Mutual labels:  drone
G-SimCLR
This is the code base for paper "G-SimCLR : Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling" by Souradip Chakraborty, Aritra Roy Gosthipaty and Sayak Paul.
Stars: ✭ 69 (+187.5%)
Mutual labels:  keras-tensorflow
dl-relu
Deep Learning using Rectified Linear Units (ReLU)
Stars: ✭ 20 (-16.67%)
Mutual labels:  keras-tensorflow
keras-yolo3-facedetection
Real-time face detection model using YOLOv3 with Keras
Stars: ✭ 13 (-45.83%)
Mutual labels:  keras-tensorflow
tf-faster-rcnn
Tensorflow 2 Faster-RCNN implementation from scratch, supporting batch processing, with MobileNetV2 and VGG16 backbones
Stars: ✭ 88 (+266.67%)
Mutual labels:  keras-tensorflow
Rus-SpeechRecognition-LSTM-CTC-VoxForge
Russian-language speech recognition using Tensorflow, trained on the Voxforge dataset
Stars: ✭ 50 (+108.33%)
Mutual labels:  keras-tensorflow
Deep-Quality-Value-Family
Official implementation of the paper "Approximating two value functions instead of one: towards characterizing a new family of Deep Reinforcement Learning Algorithms" (https://arxiv.org/abs/1909.01779), to appear at the NeurIPS 2019 DRL Workshop.
Stars: ✭ 12 (-50%)
Mutual labels:  keras-tensorflow
GestureAI
RNN (Recurrent Neural Network) model that recognizes hand gestures drawing 5 figures.
Stars: ✭ 20 (-16.67%)
Mutual labels:  keras-tensorflow
GapFlyt
GapFlyt: Active Vision Based Minimalist Structure-less Gap Detection For Quadrotor Flight
Stars: ✭ 30 (+25%)
Mutual labels:  drone
TF SemanticSegmentation
Semantic image segmentation network with pyramid atrous convolution and boundary-aware loss for Tensorflow.
Stars: ✭ 26 (+8.33%)
Mutual labels:  deeplabv3
keras-complex
Keras-Tensorflow implementation of complex-valued convolutional neural networks
Stars: ✭ 96 (+300%)
Mutual labels:  keras-tensorflow
digit-recognizer-live
Recognize Digits using Deep Neural Networks in Google Chrome live!
Stars: ✭ 29 (+20.83%)
Mutual labels:  keras-tensorflow
LightNet
LightNet: Light-weight Networks for Semantic Image Segmentation (Cityscapes and Mapillary Vistas Dataset)
Stars: ✭ 710 (+2858.33%)
Mutual labels:  deeplabv3
fall-detection-two-stream-cnn
Real-time fall detection using two-stream convolutional neural net (CNN) with Motion History Image (MHI)
Stars: ✭ 49 (+104.17%)
Mutual labels:  keras-tensorflow
night image semantic segmentation
[ICIP 2019]: This is the official GitHub repository for the paper "What's There in The Dark", accepted at the IEEE International Conference on Image Processing 2019 (ICIP 2019), Taipei, Taiwan.
Stars: ✭ 25 (+4.17%)
Mutual labels:  deeplabv3
stocktwits-sentiment
Stocktwits market sentiment analysis in Python with Keras and TensorFlow.
Stars: ✭ 23 (-4.17%)
Mutual labels:  keras-tensorflow
FCNN-example
This is a fully convolutional neural net exercise to detect houses from aerial images.
Stars: ✭ 28 (+16.67%)
Mutual labels:  drone
drone-irc
Drone plugin for sending IRC messages
Stars: ✭ 12 (-50%)
Mutual labels:  drone

Bebop Autonomy Vision

An autonomous, vision-based Bebop drone. Our final project for Intro to Robotics (CSCI 3302) at the University of Colorado Boulder.

This project consists of ROS-based autonomous, CNN-based navigation (via Dronet) of a Bebop quadrotor, together with SSD300 object detection and semi-direct visual odometry (SVO). New: we now have semantic segmentation with DeepLabv3 working as well! The whole vision suite requires only the Bebop's camera; no other sensors on the drone are needed. Additionally, the object detection and semantic segmentation use the Python torch2trt plugin, a PyTorch-to-TensorRT converter that runs the optimized models much faster than the stock PyTorch models.
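To make the torch2trt step concrete, below is a minimal conversion sketch following torch2trt's documented usage, with torchvision's AlexNet standing in for this repo's actual SSD300 and DeepLabv3 models (the pattern is the same):

import torch
from torch2trt import torch2trt
from torchvision.models.alexnet import alexnet

# Any traceable PyTorch model works; AlexNet is just a small stand-in.
model = alexnet(pretrained=True).eval().cuda()

# An example input fixes the engine's input shape.
x = torch.ones((1, 3, 224, 224)).cuda()

# Build the TensorRT engine; model_trt is then a drop-in replacement for model.
model_trt = torch2trt(model, [x])
y_trt = model_trt(x)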

Contributors

Relevant Papers

Package Description

  • bebop_autonomy - The ROS driver for the Parrot Bebop drone; it is the base of this whole project. A link can be found here

  • catkin_simple - Catkin Simple, a ROS package used with Dronet to simplify catkin builds. Additionally, a link to the GitHub repo can be found here.

  • dronet_control - ETH Zurich's Dronet control package for sending commands to cmd_vel for the Bebop drone (see the sketch after this list). A link to all of Dronet can be found here

  • dronet_perception - Runs the actual Dronet model (best done on a GPU), which outputs a steering angle and a collision probability.

  • rpg_svo - The semi-direct visual odometry (SVO) package for ROS, developed at ETH Zurich; the repository can be found here

  • rpg_vikit - Some vision tools used by this project. The link to the repo is here.
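For intuition, here is a hedged sketch of the Dronet-to-cmd_vel mapping that dronet_control implements: forward speed is scaled down as the collision probability rises, and the steering angle drives the yaw rate. The prediction topic name, message layout, and speed caps here are assumptions for illustration, not the package's exact interface (bebop_autonomy's cmd_vel topic is real):

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist
from std_msgs.msg import Float32MultiArray

MAX_FORWARD = 0.3  # m/s cap on forward speed (illustrative value)
MAX_YAW = 0.5      # rad/s cap on yaw rate (illustrative value)

def on_prediction(msg):
    # Hypothetical layout: [steering_angle, collision_probability].
    steering, collision_prob = msg.data[0], msg.data[1]
    cmd = Twist()
    cmd.linear.x = MAX_FORWARD * (1.0 - collision_prob)  # slow down near obstacles
    cmd.angular.z = MAX_YAW * steering                   # steer along the prediction
    pub.publish(cmd)

rospy.init_node('dronet_to_cmd_vel')
pub = rospy.Publisher('/bebop/cmd_vel', Twist, queue_size=1)
rospy.Subscriber('/cnn_predictions', Float32MultiArray, on_prediction)  # hypothetical topic
rospy.spin()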

Installation

Each package within the bebop_ws ROS workspace requires some extra work to get it fully working with the whole suite. The steps are listed below (first, make sure that you have ROS installed).

  • bebop_autonomy - in-depth docs here

    • Run sudo apt-get install ros-<ros-distro>-parrot-arsdk
  • catkin_simple - required only for building Dronet, which uses catkin_simple

  • dronet_control - ROS package that takes the CNN predictions from Dronet to send the correct commands to the Bebop drone.

  • dronet_perception - ROS package that runs the actual Dronet model.

    • Requires tensorflow, keras, rospy, opencv, Python's gflags, and numpy/sklearn.
    • More information about the Dronet code and setup can be found here; a minimal model-loading sketch appears after this list.
  • rpg_svo - There are some extra steps that you will need to follow; these are detailed well in rpg_svo's wiki. The g2o package is optional. Additionally, you do not need to clone rpg_svo, as it already exists in this repo.

  • rpg_vikit - Nothing extra is needed here.
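As a rough illustration of what dronet_perception does with those dependencies, here is a hedged sketch of loading the Dronet Keras model and querying it on one grayscale frame. The file names and the 200x200 grayscale input follow the public Dronet release; treat them as assumptions if your checkout differs:

import cv2
import numpy as np
from keras.models import model_from_json

# Load the architecture and weights shipped with Dronet (assumed file names).
with open('model_struct.json') as f:
    model = model_from_json(f.read())
model.load_weights('best_weights.h5')

# One grayscale frame, resized and normalized to the network's input.
frame = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)
x = cv2.resize(frame, (200, 200)).astype(np.float32) / 255.0
x = x[None, :, :, None]  # add batch and channel dimensions

# Two outputs: a steering angle and a collision probability.
steering, collision_prob = [float(o.squeeze()) for o in model.predict(x)]
print(steering, collision_prob)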

Misc Installation

We also have several Python files in this repo that are used for easier ROS control and for the vision models, including csci_dronet.py, bebop_object_detection.py, and bebop_semantic_segmentation.py; each is described below.

  • csci_dronet.py - This Python file serves as an easy way to publish (in ROS) to one of three topics. It requires argparse, std_msgs, and rospy, if you haven't installed them already. Usage will be detailed in a later section.

  • bebop_object_detection.py - This Python file runs the SSD-300 object detection model in real time (see the sketch after this list). Its usage will also be detailed in a later section. This requires:

    • torch - link here
    • torchvision
    • ros_numpy - link here
    • torch2trt - link, as well as good installation steps, here
  • bebop_semantic_segmentation.py - This Python file runs the DeepLabv3 semantic segmentation model in real time (not as fast as the object detection model). Usage will be detailed later. It requires the same packages as bebop_object_detection.py.
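For a feel of how these scripts hang together, here is a hedged sketch of a minimal detection node in the spirit of bebop_object_detection.py. torchvision's ssd300_vgg16 stands in for the repo's SSD-300 checkpoint (it requires a recent torchvision), /bebop/image_raw is bebop_autonomy's camera topic, and the torch2trt wrapping shown earlier is omitted for brevity:

#!/usr/bin/env python
import rospy
import ros_numpy
import torch
import torchvision
from sensor_msgs.msg import Image

# Stand-in SSD-300; the repo uses its own checkpoint (optionally torch2trt-optimized).
model = torchvision.models.detection.ssd300_vgg16(pretrained=True).eval().cuda()

def on_image(msg):
    frame = ros_numpy.numpify(msg)  # sensor_msgs/Image -> HxWxC uint8 array
    x = torch.from_numpy(frame.copy()).permute(2, 0, 1).float().cuda() / 255.0
    with torch.no_grad():
        detections = model([x])[0]  # dict with 'boxes', 'labels', 'scores'
    rospy.loginfo('%d detections', len(detections['boxes']))

rospy.init_node('bebop_object_detection')
rospy.Subscriber('/bebop/image_raw', Image, on_image, queue_size=1, buff_size=2**24)
rospy.spin()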

Building the Code

cd bebop_ws
catkin build
  • If you have the above packages/dependencies installed, then catkin build should work fine. However, if there is an error, it is likely that some ROS package is missing; a common fix is to run sudo apt-get install ros-<your-distro>-<package-name> and then retry the previous command.

Also do:

  • In bebop_ws/devel/include:
sudo ln -s /opt/ros/<ros-distro>/include/parrot_arsdk parrot_arsdk
  • In bebop_ws/devel/lib:
sudo ln -s /opt/ros/<ros-distro>/lib/parrot_arsdk parrot_arsdk
echo "export LD_LIBRARY_PATH=<path-to-bebop-ws>/devel/lib/parrot_arsdk:$LD_LIBRARY_PATH" >> ~/.bashrc

Steps to Running the Code

We explain how to use this open-sourced work and how to integrate it with ours.

First, make sure that you have a working Bebop2, and connect to its Wi-Fi network. Also, make sure that you successfully completed the build steps listed above.

cd bebop_ws

source devel/setup.bash

cd src/dronet_perception/launch

roslaunch full_perception_launch.launch

In another terminal, or using tmux (recommended):

cd bebop_ws/

source devel/setup.bash

cd src/dronet_control/launch

roslaunch deep_navigation.launch

Object Detection

  • Using tmux or another terminal:

  • python bebop_object_detection.py or python3.5 bebop_object_detection.py

  • This should open a window that shows the detections from the Bebop drone in real time. Having a GPU helps here.

Here's an example:

[image: real-time object detection on the Bebop's camera feed]

Semantic Segmentation

  • Using tmux or another terminal:

  • python bebop_semantic_segmentation.py or python3.5 bebop_semantic_segmentation.py

  • This should also open a window that shows segmentation results in real time.
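The segmentation script follows the same subscribe-and-infer pattern as the detection node above; the model call itself looks roughly like this hedged sketch, with torchvision's pretrained deeplabv3_resnet101 standing in for the repo's model:

import torch
import torchvision

model = torchvision.models.segmentation.deeplabv3_resnet101(pretrained=True).eval().cuda()
x = torch.rand(1, 3, 480, 640).cuda()  # stand-in for a normalized camera frame
with torch.no_grad():
    logits = model(x)['out']           # (1, num_classes, H, W) class scores
labels = logits.argmax(1)              # per-pixel class ids to colorize and display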

Here's an example:

[image: real-time semantic segmentation results]

SVO

  • Again, using tmux or another terminal:
cd bebop_ws
source devel/setup.bash
roslaunch svo_ros live.launch

For visualization:

rosrun rviz rviz -d bebop_ws/src/rpg_svo/svo_ros/rviz_config.rviz

[image: SVO pose tracking visualized in rviz]

Autonomous Navigation with Bebop

Now we can begin the real fun. Below is a list of commands you can use once the above programs are running.

rostopic pub --once /bebop/takeoff std_msgs/Empty - takes off the drone

rostopic pub --once /bebop/state_change std_msgs/Bool "data: true" - enables Dronet control. SVO and SSD will continue to run.

rostopic pub --once /bebop/state_change std_msgs/Bool "data: false" - disables Dronet control; perception will still run.

rostopic pub --once /bebop/land std_msgs/Empty - lands the drone regardless of whether Dronet is enabled.

Additionally, we can run python csci_dronet.py --option=takeoff, python csci_dronet.py --option=land, python csci_dronet.py --option=dronet_start, or python csci_dronet.py --option=dronet_end to take off, land, start Dronet, and stop Dronet, respectively.
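Based on the four options above, csci_dronet.py boils down to something like the following hedged sketch (the actual script may differ in details):

#!/usr/bin/env python
import argparse
import rospy
from std_msgs.msg import Bool, Empty

parser = argparse.ArgumentParser()
parser.add_argument('--option', required=True,
                    choices=['takeoff', 'land', 'dronet_start', 'dronet_end'])
args = parser.parse_args()

rospy.init_node('csci_dronet', anonymous=True)
if args.option in ('takeoff', 'land'):
    pub = rospy.Publisher('/bebop/' + args.option, Empty, queue_size=1)
    msg = Empty()
else:
    pub = rospy.Publisher('/bebop/state_change', Bool, queue_size=1)
    msg = Bool(data=(args.option == 'dronet_start'))

rospy.sleep(0.5)  # let the publisher connect before the one-shot publish
pub.publish(msg)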

Real Life Vid/Pic

Dronet in action: it follows the road and thankfully stops when one of us gets too close. The Bebop doesn't listen to us directly; rather, Dronet's collision probability from the forward-facing camera rose high enough that the drone stopped, and disaster was averted.

The convolutional neural network is influenced by edges (detailed more in the paper), and here it is clearly moving parallel to the edge of the road.

[image: the Bebop autonomously following the road under Dronet control]
