
DEEPWAY V2

Autonomous navigation for blind people.
Licence: BSD-2-Clause

Different versions of this project

A question you may have in mind

If I already had a repository, why make another?
  • V1 was based on Keras, and I don't like TensorFlow much, so for more control I have shifted to PyTorch.
  • It is a complete redesign.

How is it better than others:

  1. Cost effective: this version costs approximately 400 dollars.
  2. Blind people generally develop other senses, such as hearing, very well. Taking one of those senses away with earphones would not have been nice, so I provide information to the blind person through haptic feedback.
  3. Everything runs on an edge device, i.e. the DepthAI kit.

Hardware requirements

  1. DepthAI kit.
  2. Arduino Nano.
  3. 2 servo motors.
  4. Raspberry Pi or any other host device. (Will not be required once the DepthAI kit supports GPIO.)
  5. Power adapter for the DepthAI kit.
  6. 3D printer. (Not necessary.)
  7. A laptop (NVIDIA GPU preferred) or any cloud service provider.

Installation instructions

  1. Clone this repository.
  2. Install Anaconda.
  3. Install the required dependencies. Some libraries, like PyTorch and OpenCV, require a little extra attention:

conda env create -f deepway.yml

  4. Download the segmentation model from here, create a directory named "trained_models" inside the deepway folder, and put the model there.
  5. Change the COM port number in the arduino.py file according to your system.
  6. Connect the Arduino Nano.
  7. Compile and upload the Arduino Nano code to the board.
  8. Run runner.py.

1. Collecting the dataset and generating image masks

I made videos of roads and converted them to JPEGs; this way I collected a dataset of approximately 10,000 images. I recorded from the left, right, and center views, so the images were automatically labelled by viewpoint.
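The frame extraction itself is simple; here is a rough sketch with OpenCV (the video file name, output directory, and sampling interval are illustrative, not the repo's actual values):

    # Sketch: dump every 10th frame of a road video to JPEGs for the dataset.
    # File names and the sampling interval are illustrative only.
    import os
    import cv2

    os.makedirs("dataset", exist_ok=True)
    video = cv2.VideoCapture("road_walk.mp4")
    frame_id = saved = 0
    while True:
        ok, frame = video.read()
        if not ok:
            break
        if frame_id % 10 == 0:  # keep every 10th frame to avoid near-duplicates
            cv2.imwrite(f"dataset/frame_{saved:05d}.jpg", frame)
            saved += 1
        frame_id += 1
    video.release()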

For U-Net, I had to create binary masks for the input data; I used Labelbox to generate them. (This took a lot of time.)

To download the labelled data from Labelbox, I made a small utility named "downloader.py"; a rough sketch of the idea is below.
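This is not the repo's downloader.py, just a hedged sketch of the idea, assuming a legacy Labelbox JSON export where each row points at a mask image; the file name and the Label/objects/instanceURI field names are assumptions:

    # Hedged sketch of a Labelbox mask downloader; NOT the repo's downloader.py.
    # Assumes a legacy Labelbox JSON export where each row holds a mask URL
    # under Label -> objects -> instanceURI; actual field names may differ.
    import json
    import os
    import urllib.request

    os.makedirs("masks", exist_ok=True)
    with open("labelbox_export.json") as f:
        rows = json.load(f)

    for i, row in enumerate(rows):
        mask_url = row["Label"]["objects"][0]["instanceURI"]
        urllib.request.urlretrieve(mask_url, f"masks/mask_{i:05d}.png")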

2. Model training

I first trained a lane detection model (now deprecated) that predicts which lane (left, center, or right) I am walking in. The loss vs. iteration curve is as follows:

I then trained a U-Net based model for road segmentation on Azure. The loss (pink: train, green: validation) vs. iterations curve is as follows:

Though the loss is low, that model does not perform well. A model I trained in Keras with a different architecture performs really well; its loss vs. iterations curve is shown below.
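Curve images aside, here is a compact sketch of what one binary-segmentation training step looks like in PyTorch; a single conv layer and random tensors stand in for the real U-Net and dataset:

    # Compact sketch of one binary-segmentation training step in PyTorch.
    # A conv layer stands in for the U-Net; random tensors stand in for data.
    import torch
    import torch.nn as nn

    model = nn.Conv2d(3, 1, kernel_size=3, padding=1)  # stand-in for the U-Net
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.BCEWithLogitsLoss()                 # road vs. not-road

    images = torch.randn(4, 3, 256, 256)                    # stand-in frames
    masks = torch.randint(0, 2, (4, 1, 256, 256)).float()   # stand-in masks

    optimizer.zero_grad()
    logits = model(images)           # raw scores, shape (N, 1, H, W)
    loss = criterion(logits, masks)  # binary cross-entropy with logits
    loss.backward()
    optimizer.step()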

3. 3D modelling and printing

My friend Sangam Kumar Padhi helped me with the CAD model. You can look at it here.

4. Electronics on the spectacles

The electronics on the spectacles are very simple: just two servo motors connected to an Arduino Nano. The Arduino Nano receives signals from the Jetson (using the pyserial library) and drives the servo motors.
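As a hedged illustration of the host side of this link (the actual byte protocol, port name, and baud rate used in the repo may differ):

    # Hedged illustration of the host-side serial link; the repo's actual
    # protocol, port name, and baud rate may differ.
    import serial  # pyserial

    arduino = serial.Serial("COM3", 9600, timeout=1)  # port is system-specific

    def send_cue(direction: str) -> None:
        """Send 'L' or 'R' so the Arduino drives the matching servo."""
        if direction in ("L", "R"):
            arduino.write(direction.encode())

    send_cue("L")  # nudge the user toward the left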

FLOW

  1. Get the camera feed.
  2. Run the image through the road segmentation model.
  3. Get lane lines from the segmentation mask.
  4. Get the depth of all objects in front of the person.
  5. Get all the objects in the current lane.
  6. Plot all the objects on a 2D screen.
  7. Push the person to the left lane (as per Indian traffic rules).
  8. Use the A* algorithm to get a path from the current location to 2 meters ahead while maintaining distance from objects (a minimal sketch follows this list).
  9. Inform the user of all navigation instructions using the servo motors.
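A minimal sketch of step 8 on a toy occupancy grid with 4-connected moves; this illustrates the algorithm only, not the repo's actual planner or map format:

    # Minimal A* on a toy occupancy grid (0 = free, 1 = obstacle).
    import heapq
    import itertools

    def a_star(grid, start, goal):
        """Return a list of (row, col) cells from start to goal, or None."""
        rows, cols = len(grid), len(grid[0])

        def h(c):  # Manhattan distance to the goal
            return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

        tie = itertools.count()  # breaks ties in the heap
        open_set = [(h(start), 0, next(tie), start, None)]
        came_from, seen = {}, set()
        while open_set:
            _, g, _, cell, parent = heapq.heappop(open_set)
            if cell in seen:
                continue
            seen.add(cell)
            came_from[cell] = parent
            if cell == goal:  # walk parents back to the start
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    f = g + 1 + h((nr, nc))
                    heapq.heappush(open_set, (f, g + 1, next(tie), (nr, nc), cell))
        return None

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(a_star(grid, (0, 0), (2, 0)))  # detours around the obstacle row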

Things to be added or improved

  1. Instead of naive A*, use a variant that avoids sharp turns.
  2. Find a more efficient alternative to A*.
  3. Try fitting a degree-3 polynomial to the trajectory to get smooth turns (see the sketch after this list).
  4. Train a segmentation model for DepthAI, so that the complete neural-network inference runs on the device.
  5. Predicting where pedestrians will be at a future time would help plan better paths.
  6. Trajectory planning is currently the same for all types of objects, but trajectories could differ based on an object's size and speed.
  7. The region around an object also has to be considered.
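For item 3, a toy sketch of such smoothing with NumPy; the waypoints are made up, and a real trajectory may need a parametric fit rather than y as a function of x:

    # Toy sketch for item 3: smooth jagged waypoints with a cubic polynomial.
    # Waypoints are made up; a real path may need a parametric fit instead.
    import numpy as np

    path = np.array([(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)])  # toy A* output
    xs, ys = path[:, 0], path[:, 1]
    coeffs = np.polyfit(xs, ys, deg=3)        # fit y(x) as a cubic
    smooth_x = np.linspace(xs.min(), xs.max(), 50)
    smooth_y = np.polyval(coeffs, smooth_x)   # densely sampled smooth turn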

People to Thank

  1. Army Institute of Technology (my college).
  2. Prof. Avinash Patil, Sangam Kumar Padhi, Sahil, and Priyanshu for 3D modelling and printing.
  3. Shivam Sharma and Arpit for data labelling.
  4. Luxonis for providing a free DepthAI kit.
  5. Labelbox for providing me with a free license for their amazing product.
  6. The Luxonis Slack channel.

References

  1. Luxonis API reference
  2. PyImageSearch
  3. PyTorch community, special mention @ptrblck
  4. U-Net
  5. U-Net implementation (usuyama)
  6. U-Net implementation (Heet Sankesara)
  7. Social distancing app
  8. Advanced lane detection (Eddie Forson)

Citations

Labelbox, "Labelbox," 2019. [Online]. Available: https://labelbox.com

Liked it?

If you liked it, tell me by giving it a star. Also check out my other repositories; I always make cool stuff. I even have a YouTube channel, "reactor science", where I post all my work.

Read about v1 at:

  1. Geospatial Magazine
  2. Hackster
  3. Anyline
  4. Arduino blog