
pengzhi1998 / Underwater-obstacle-avoidance

Licence: other
The underwater robot obstacle avoidance project with the method of deep reinforcement learning

Programming Languages

python
139335 projects - #7 most used programming language
matlab
3953 projects

Projects that are alternatives of or similar to Underwater-obstacle-avoidance

Quadcopter multi
(Complete). An open-architecture multi-agent quadcopter simulator. We implement a few modern techniques for improving the performance of aerial vehicles, including reinforcement learning and shifting planar inequalities for obstacle avoidance.
Stars: ✭ 16 (-30.43%)
Mutual labels:  obstacle-avoidance
Peabot-Library
Peabot: quadruped robot library for Raspberry Pi
Stars: ✭ 18 (-21.74%)
Mutual labels:  robot
robot
Functions and classes for gradient-based robot motion planning, written in Ivy.
Stars: ✭ 29 (+26.09%)
Mutual labels:  robot
spatialmath-matlab
Create, manipulate and convert representations of position and orientation in 2D or 3D using Python
Stars: ✭ 142 (+517.39%)
Mutual labels:  robot
car-racing
A toolkit for testing control and planning algorithm for car racing.
Stars: ✭ 30 (+30.43%)
Mutual labels:  obstacle-avoidance
moveit python
Pure Python Bindings to ROS MoveIt!
Stars: ✭ 107 (+365.22%)
Mutual labels:  robot
PythonAudioEffects
A Python library that can apply Darth Vader, echo, radio, robotic, and ghost effects to audio samples.
Stars: ✭ 26 (+13.04%)
Mutual labels:  robot
raspimouse2
ROS 2 node for Raspberry Pi Mouse
Stars: ✭ 23 (+0%)
Mutual labels:  robot
homebridge-mi-robot vacuum
XiaoMi robot vacuum plugins for HomeBridge.
Stars: ✭ 53 (+130.43%)
Mutual labels:  robot
TelegramTrader-MT4-MT5
Connect Telegram Messenger to Metatrader for Live Quotes, Charts, Trading, and Managing Robots(Expert Advisors)
Stars: ✭ 74 (+221.74%)
Mutual labels:  robot
facebook-login-for-robots
Facebook Login for 🤖 robots
Stars: ✭ 41 (+78.26%)
Mutual labels:  robot
skynet robot control rtos ethercat
Realtime 6-axis robot controller, based on Qt C++ & OpenCascade & KDL kinematics & HAL
Stars: ✭ 41 (+78.26%)
Mutual labels:  robot
elliot on g0d
elliot_on_g0d
Stars: ✭ 17 (-26.09%)
Mutual labels:  robot
woapp
A web-based simulation of the Android operating system, developed in PHP, with built-in file management, phone, SMS, and camera; running on a Raspberry Pi it can serve as a smart-home controller, video surveillance system, set-top box, and more.
Stars: ✭ 22 (-4.35%)
Mutual labels:  robot
Virtual-Robot-Challenge
How-to on simulating a robot with V-REP and controlling it with ROS
Stars: ✭ 83 (+260.87%)
Mutual labels:  robot
trik-studio
TRIK Studio programming environment
Stars: ✭ 15 (-34.78%)
Mutual labels:  robot
vbot-tuling
A WeChat chatbot powered by the Turing Robot (Tuling) API.
Stars: ✭ 20 (-13.04%)
Mutual labels:  robot
robot-framework-docker
Docker image to run robot framework acceptance testing in a docker container
Stars: ✭ 24 (+4.35%)
Mutual labels:  robot
Steamworks-2017
SERT's code for the 2017 Steamworks game
Stars: ✭ 13 (-43.48%)
Mutual labels:  robot
gym-line-follower
Line follower robot simulator environment for OpenAI Gym.
Stars: ✭ 46 (+100%)
Mutual labels:  robot

Underwater-obstacle-avoidance

Hello everyone, welcome to this repository. This project addresses obstacle avoidance for underwater vehicles using neural networks, combined with a method based on single-beam distance detection. It mainly builds on the following projects: https://github.com/xie9187/Monocular-Obstacle-Avoidance, https://github.com/XPFly1989/FCRN, and https://github.com/iro-cp/FCRN-DepthPrediction.

Contents

  1. Introduction
  2. Guide

1. Introduction

Nowadays, AUVs (autonomous underwater vehicles) are widely used in underwater tasks (environment surveying, cultural-relic salvage, underwater rescue, etc.). To improve their efficiency, a reliable sense of obstacle avoidance is indispensable. However, underwater light conditions are rather complex, with light attenuation, dim environments, reflection, and refraction, and the kinematics are more complicated as well, with capricious currents and greater drag, so it is much harder for robots to work well underwater. We therefore developed an ad-hoc method to deal with these challenges.

In the first part, we implemented an FCRN (fully convolutional residual network) to predict depth from the RGB images of the front monocular camera. To train the network, we used the NYU dataset, whose image pairs were preprocessed to resemble the underwater environment. In the second part, we applied a DDDQN (dueling double deep Q-network) to control the robot in "POSHOLD" mode through the "/rc/override" topic. We trained this DDDQN in a well-designed Gazebo world. Finally, we combined the two neural networks with the method based on the single-beam echo sounder to make the robot, a BlueROV2, avoid obstacles. The scenario is designed as follows (it needs further tests and experiments):

  1. Set a goal point for the robot.
  2. The robot spins toward the goal, then moves forward.
  3. When the echo sounder detects an obstacle right in front of the robot (at a distance of less than 0.8 meters), the robot is controlled by the neural networks based on the front monocular camera until it successfully avoids the object.
  4. Repeat steps 2 and 3 until the robot reaches the goal point.
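The switching logic above can be sketched in plain Python (the helper names and the goal tolerance are hypothetical; the actual controller publishes RC override commands through ROS):

```python
import math

ECHO_THRESHOLD_M = 0.8   # from the scenario: switch to vision below 0.8 m
GOAL_TOLERANCE_M = 0.3   # hypothetical arrival tolerance

def heading_to_goal(pose, goal):
    """Bearing (radians) from the robot's position to the goal point."""
    return math.atan2(goal[1] - pose[1], goal[0] - pose[0])

def control_step(pose, goal, echo_range_m, avoidance_policy):
    """One iteration of the scenario: spin toward the goal and move
    forward unless the echo sounder reports an obstacle closer than
    0.8 m, in which case the learned policy takes over."""
    if math.dist(pose[:2], goal[:2]) < GOAL_TOLERANCE_M:
        return ("stop", 0.0)                       # step 4: goal reached
    if echo_range_m < ECHO_THRESHOLD_M:
        return ("avoid", avoidance_policy())       # step 3: NN avoidance
    return ("seek", heading_to_goal(pose, goal))   # step 2: head to goal
```

Here avoidance_policy stands in for the DDDQN acting on the FCRN's predicted depth map.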

2. Guide

  1. Clone the repository into a directory.
  2. Download the NYU Depth Dataset V2 labeled dataset and the pre-trained TensorFlow weights from Laina et al. (a .npy file covering part of the model) into the FCRN_train folder:
    http://horatio.cs.nyu.edu/mit/silberman/nyu_depth_v2/nyu_depth_v2_labeled.mat;
    http://campar.in.tum.de/files/rupprecht/depthpred/NYU_ResNet-UpProj.npy
  3. Open the create_underwater.m file and change the three attenuation parameters (Red_attenuation, Green_attenuation, and Blue_attenuation) to fit the environment where you'd like to test the performance. Then run it to process the NYU dataset; it will generate a test.mat file in the same directory.
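The idea behind this preprocessing is per-channel color attenuation with depth. A hedged NumPy sketch of the same transformation (the exact formula and coefficient values in create_underwater.m may differ; the Beer-Lambert decay model and the coefficients below are assumptions):

```python
import numpy as np

# Hypothetical attenuation coefficients (1/m); red is absorbed fastest underwater.
RED_ATT, GREEN_ATT, BLUE_ATT = 0.40, 0.10, 0.07

def simulate_underwater(rgb, depth_m):
    """Attenuate each color channel with a Beer-Lambert model,
    I' = I * exp(-c * d), using the per-pixel scene depth.

    rgb:     (H, W, 3) float array in [0, 1]
    depth_m: (H, W) float array of depths in meters
    """
    coeffs = np.array([RED_ATT, GREEN_ATT, BLUE_ATT])
    falloff = np.exp(-depth_m[..., None] * coeffs)   # broadcasts to (H, W, 3)
    return rgb * falloff
```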
  4. Run train.py in FCRN_train to train the FCRN network for depth prediction. After 30 epochs, the performance is relatively good. The model parameters will be saved to checkpoint.pth.tar.
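FCRN-style depth networks are typically trained with the reverse Huber (berHu) loss introduced by Laina et al.; whether train.py uses exactly this loss is an assumption. A minimal NumPy sketch:

```python
import numpy as np

def berhu_loss(pred, target):
    """Reverse Huber (berHu) loss: L1 for small residuals, scaled L2
    beyond the cutoff c = 0.2 * max|residual| over the batch."""
    diff = np.abs(pred - target)
    c = 0.2 * diff.max()
    if c == 0.0:
        return 0.0          # perfect prediction
    l2 = (diff ** 2 + c ** 2) / (2.0 * c)
    return np.where(diff <= c, diff, l2).mean()
```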
  5. To train the DDDQN network, do the following things:
    (1) Move the world file turtlebot3_bighouse.world into the turtlebot3_gazebo worlds folder of turtlebot3 in your workspace, and move the launch file turtlebot3_bighouse.launch into the turtlebot3_gazebo launch folder. Then launch the designed world with the command:
    roslaunch turtlebot3_gazebo turtlebot3_bighouse.launch
    (2) Adjust the uri of the turtlebot3 model to your own path to turtlebot3_description. Meanwhile, to make the training in simulation match your robot in the real world, adjust parameters such as the field of view, image size, and depth-image noise.
    (3) Run DDDQN.py at the same time in another terminal; training then begins. You will find that the robot moves aimlessly at first but starts to show obstacle-avoidance ability after around 200 episodes. The average reward for every 50 episodes can be seen in the graph drawn by visdom.
    We set the maximum number of episodes to 100000. Nevertheless, once the performance is good enough, it is fine to terminate the process. The networks are saved to online_with_noise.pth.tar and target_with_noise.pth.tar.
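For reference, the two ingredients of a DDDQN are the dueling aggregation Q(s, a) = V(s) + A(s, a) - mean_a A(s, a) and the double-DQN target, where the online network selects the next action and the target network evaluates it. A hedged NumPy sketch (network internals omitted; this is not the code in DDDQN.py):

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
    """Double-DQN target: y = r + gamma * Q_tgt(s', argmax_a Q_on(s', a)),
    zeroing the bootstrap term on terminal transitions."""
    best = np.argmax(q_online_next, axis=-1)
    q_eval = np.take_along_axis(q_target_next, best[..., None], axis=-1)[..., 0]
    return reward + gamma * (1.0 - done) * q_eval
```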
  6. Copy checkpoint.pth.tar from FCRN_train and online_with_noise.pth.tar from DDDQN_train into the Test_on_robots folder. Then test on ground robots or underwater robots.
    ping_echo_sounder.launch and pingmessage.py open the single-beam echo sounder to measure the distance between the robot and the object directly in front of it.