
visualbuffer / copilot

License: MIT
Lane and obstacle detection for active assistance during driving. Uses a windowed sweep for lane detection and a combination of object tracking and YOLO for obstacle detection. Determines lane changes, relative velocity, and time to collision.

Programming Languages

Jupyter Notebook
Python

Projects that are alternatives to or similar to copilot

AdvancedLaneLines
Lane identification system for camera based systems.
Stars: ✭ 61 (-35.79%)
Mutual labels:  lane-finding, autonomous-vehicles, lane-detection, adas, lane-detector
LaneandYolovehicle-DetectionLinux
Lane departure and YOLO object detection in C++ on Linux
Stars: ✭ 16 (-83.16%)
Mutual labels:  lane-finding, lane-detection, adas, lane-tracking
conde simulator
Autonomous Driving Simulator for the Portuguese Robotics Open
Stars: ✭ 31 (-67.37%)
Mutual labels:  autonomous-driving, lane-detection, lane-tracking
Virtual-Lane-Boundary-Generation
Virtual Lane Boundary Generation for Human-Compatible Autonomous Driving
Stars: ✭ 22 (-76.84%)
Mutual labels:  autonomous-driving, lane-detection, lane-tracking
lane-detection
Lane detection MATLAB code for Kalman Filter book chapter: Lane Detection
Stars: ✭ 21 (-77.89%)
Mutual labels:  lane-finding, autonomous-driving, lane-detection
Lanenet Lane Detection
Unofficial implementation of the LaneNet model for real-time lane detection using a deep neural network https://maybeshewill-cv.github.io/lanenet-lane-detection/
Stars: ✭ 1,690 (+1678.95%)
Mutual labels:  lane-finding, lane-detection, lane-lines-detection
dreyeve
[TPAMI 2018] Predicting the Driver's Focus of Attention: the DR(eye)VE Project. A deep neural network trained to reproduce the human driver's focus of attention (FoA) in a variety of real-world driving scenarios.
Stars: ✭ 88 (-7.37%)
Mutual labels:  autonomous-driving, autonomous-vehicles, adas
Advanced-lane-finding
Advanced lane finding
Stars: ✭ 50 (-47.37%)
Mutual labels:  lane-finding, lane-detection, lane-curvature
YOLOP-opencv-dnn
Deploys the panoptic driving perception network YOLOP with OpenCV, handling three visual perception tasks at once: traffic object detection, drivable-area segmentation, and lane line detection. Includes both C++ and Python implementations. The program depends only on the OpenCV library, removing any dependency on deep learning frameworks.
Stars: ✭ 178 (+87.37%)
Mutual labels:  autonomous-driving, lane-lines-detection
Alturos.ImageAnnotation
A collaborative tool for labeling image data for yolo
Stars: ✭ 47 (-50.53%)
Mutual labels:  bounding-boxes, yolov3
glcapsnet
Global-Local Capsule Network (GLCapsNet) is a capsule-based architecture able to provide context-based eye fixation prediction for several autonomous driving scenarios, while offering interpretability both globally and locally.
Stars: ✭ 33 (-65.26%)
Mutual labels:  autonomous-driving, autonomous-vehicles
SelfDrivingCarsControlDesign
Self Driving Cars Longitudinal and Lateral Control Design
Stars: ✭ 96 (+1.05%)
Mutual labels:  autonomous-driving, autonomous-vehicles
JuliaAutonomy
Julia sample codes for Autonomy, Robotics and Self-Driving Algorithms.
Stars: ✭ 21 (-77.89%)
Mutual labels:  autonomous-driving, autonomous-vehicles
MIT-Driverless-CV-TrainingInfra
PyTorch pipeline of the MIT Driverless Computer Vision paper (2020)
Stars: ✭ 89 (-6.32%)
Mutual labels:  autonomous-vehicles, yolov3
Autonomous-Parking-System
Automatic Parking is an autonomous car maneuvering system (part of ADAS) that moves a vehicle from a traffic lane into a parking spot to perform parallel parking. The automatic parking system aims to enhance the comfort and safety of driving in constrained environments where much attention and experience is required to steer the car. The parking…
Stars: ✭ 39 (-58.95%)
Mutual labels:  autonomous-driving, adas
loco car
Software for LOCO, our autonomous drifting RC car.
Stars: ✭ 44 (-53.68%)
Mutual labels:  autonomous-driving, autonomous-vehicles
dig-into-apollo
Apollo notes (Apollo学习笔记) - Apollo learning notes for beginners.
Stars: ✭ 1,786 (+1780%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Visualizing-lidar-data
Visualizing lidar data using Uber Autonomous Visualization System (AVS) and Jupyter Notebook Application
Stars: ✭ 75 (-21.05%)
Mutual labels:  autonomous-driving, autonomous-vehicles
highway-path-planning
My path-planning pipeline to navigate a car safely around a virtual highway with other traffic.
Stars: ✭ 39 (-58.95%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Light-Condition-Style-Transfer
Lane Detection in Low-light Conditions Using an Efficient Data Enhancement: Light Conditions Style Transfer (IV 2020)
Stars: ✭ 133 (+40%)
Mutual labels:  lane-detection, lane-lines-detection

Copilot: Driving assistance on mobile devices

Lane and obstacle detection for active assistance during driving.


Vehicle position and collision time superimposed on the top view

Accompanying article: https://towardsdatascience.com/copilot-driving-assistance-635e1a50f14

Global road-accident fatalities total about 1.5 million a year, roughly the population of Mauritius. 90% of these occur in low- and middle-income countries, which hold less than half of the world's vehicles. Advanced driver-assistance systems (ADAS) such as lane detection and collision warning are present in less than 0.1% of vehicles and are almost non-existent in developing countries. Meanwhile, median smartphone ownership in emerging economies is about ten times as high as four-wheeler ownership. While semi-autonomous vehicles are already on the road in parts of the world, this repository examines how close we can come to using a mobile computing platform as an ADAS copilot.
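Since the pipeline reports relative velocity and time to collision, it helps to see the arithmetic. A minimal sketch, not the repository's code (`distances` and `dt` are hypothetical inputs): closing speed is the change in obstacle distance over time, and TTC is the remaining distance divided by that speed.

import numpy as np

def time_to_collision(distances, dt):
    # distances: distances [m] to one obstacle, oldest first (hypothetical input)
    # dt:        time step [s] between measurements
    d = np.asarray(distances, dtype=float)
    # Closing speed: positive when the gap is shrinking.
    rel_velocity = (d[0] - d[-1]) / (dt * (len(d) - 1))
    if rel_velocity <= 0:                     # obstacle pulling away: no collision
        return rel_velocity, np.inf
    return rel_velocity, d[-1] / rel_velocity

# Example: gap shrinks from 30 m to 27 m over 1 s -> 3 m/s closing, TTC = 9 s
v, ttc = time_to_collision([30.0, 29.25, 28.5, 27.75, 27.0], dt=0.25)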

DOWNLOAD WEIGHTS AND CODE

! git clone https://github.com/visualbuffer/copilot.git
! mv copilot/* ./
! wget -P ./model_data/ https://s3-ap-southeast-1.amazonaws.com/deeplearning-mat/backend.h5


Robustness to different illumination conditions

USAGE EXAMPLE

import numpy as np

from frame import FRAME

file_path = "videos/highway.mp4"                # <== Upload appropriate file
video_out = "videos/output11.mov"

frame = FRAME(
    ego_vehicle_offset=0.15,                    # SELF VEHICLE OFFSET
    yellow_lower=np.uint8([20, 50, 100]),       # LOWER YELLOW HLS THRESHOLD
    yellow_upper=np.uint8([35, 255, 255]),      # UPPER YELLOW HLS THRESHOLD
    white_lower=np.uint8([0, 200, 0]),          # LOWER WHITE THRESHOLD
    white_upper=np.uint8([180, 255, 100]),      # UPPER WHITE THRESHOLD
    lum_factor=118,                             # NORMALIZING LUM FACTOR
    max_gap_th=0.45,                            # MAX GAP THRESHOLD
    YOLO_PERIOD=0.25,                           # YOLO PERIOD [s]
    lane_start=[0.35, 0.75],                    # LANE INITIATION
    verbose=3)                                  # VERBOSITY

frame.process_video(file_path, 1,
                    video_out=video_out, pers_frame_time=144,
                    t0=144, t1=150)
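The yellow and white HLS thresholds passed above drive a colour mask of the kind sketched below. This is a minimal illustration with OpenCV, not the repository's internals (`lane_mask` is a hypothetical helper); the table that follows describes how each parameter is tuned.

import cv2
import numpy as np

def lane_mask(bgr_frame):
    # Convert to HLS and keep pixels inside the yellow or white ranges
    # (the same ranges passed to FRAME above).
    hls = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HLS)
    yellow = cv2.inRange(hls, np.uint8([20, 50, 100]), np.uint8([35, 255, 255]))
    white = cv2.inRange(hls, np.uint8([0, 200, 0]), np.uint8([180, 255, 100]))
    return cv2.bitwise_or(yellow, white)        # binary mask of candidate lane pixels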
| PARAMETER | Description |
| --- | --- |
| SELF VEHICLE OFFSET | Trim from the bottom edge of the video if the ego vehicle covers part of the frame (% of front view) |
| LOWER YELLOW HLS THRESHOLD | Lower yellow HLS threshold used to prepare the mask. Tune down if the yellow lane is not detected, up if all the foliage is |
| UPPER YELLOW HLS THRESHOLD | Upper threshold for identifying yellow lanes |
| LOWER WHITE THRESHOLD | Lower white HLS threshold used to prepare the mask. Tune the saturation up if foliage lights up the entire scene |
| UPPER WHITE THRESHOLD | Upper threshold for identifying white lanes |
| NORMALIZING LUM FACTOR | Factor against which luminosity is normalized; reducing it raises the lower luminosity threshold |
| MAX GAP THRESHOLD | Maximum continuous gap tolerated in lane detection (% of top-view height) |
| YOLO PERIOD | Period [s] between YOLO detections, typically 2 s; reducing it lowers processing fps but improves detection |
| LANE INITIATION | Initial guess for the lane start positions (% of top-view width) |
| VERBOSITY | 1 shows the least, 2 shows less, 3 shows everything |
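The YOLO PERIOD row reflects a common detect-then-track cadence: run the expensive detector only every few frames and let a lightweight tracker carry the boxes in between. A rough sketch under that assumption (`detect` and `frames` are hypothetical placeholders, and on newer OpenCV builds the KCF tracker lives under cv2.legacy):

import cv2

YOLO_PERIOD = 0.25                          # seconds between full YOLO passes
FPS = 30
detect_every = max(1, int(YOLO_PERIOD * FPS))

trackers = []
for i, frame in enumerate(frames):          # frames: iterable of BGR images (placeholder)
    if i % detect_every == 0:
        boxes = detect(frame)               # hypothetical YOLO wrapper returning (x, y, w, h)
        trackers = []
        for box in boxes:
            t = cv2.TrackerKCF_create()     # cv2.legacy.TrackerKCF_create() on newer builds
            t.init(frame, box)
            trackers.append(t)
    else:
        boxes = []
        for t in trackers:                  # cheap per-frame update between detections
            ok, box = t.update(frame)
            if ok:
                boxes.append(box)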


Detecting lane change automatically
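The "windowed sweep" named in the project description is, in essence, the classic sliding-window lane search over a binary top-view mask. A minimal sketch, assuming such a mask and a starting column (`sweep_lane` is illustrative, not the repository's function):

import numpy as np

def sweep_lane(mask, x_start, n_windows=9, margin=50, min_pix=40):
    # Sweep stacked windows up a binary top-view mask, re-centring each
    # window on the mean x of the lane pixels it captures.
    h = mask.shape[0]
    win_h = h // n_windows
    ys, xs = mask.nonzero()                 # coordinates of all lit pixels
    x_current, kept = x_start, []
    for w in range(n_windows):
        y_lo, y_hi = h - (w + 1) * win_h, h - w * win_h
        in_win = ((ys >= y_lo) & (ys < y_hi) &
                  (xs >= x_current - margin) & (xs < x_current + margin))
        idx = np.flatnonzero(in_win)
        kept.append(idx)
        if idx.size > min_pix:              # enough pixels: follow the lane sideways
            x_current = int(xs[idx].mean())
    idx = np.concatenate(kept)
    return xs[idx], ys[idx]                 # lane pixel coordinates, ready to fit

The lane_start parameter above seeds x_start as a fraction of the top-view width; a lane change can then be flagged when the recovered lane bases drift across the frame by roughly a lane width between frames.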

Notebooks

| DIRECTORY | COLAB |
| --- | --- |
| ./notebooks/coPilot.ipynb | https://colab.research.google.com/drive/1CdqDXZqssDgSC35W4A-4Gp8kfqzyPKug |

Ref:

https://github.com/qqwweee/keras-yolo3
