
hpc203 / YOLOP-opencv-dnn

Licence: other
Deploy the panoptic driving perception network YOLOP with OpenCV. It handles three visual perception tasks simultaneously: traffic object detection, drivable-area segmentation, and lane line detection, with both C++ and Python implementations. The program depends only on the OpenCV library, so it is completely free of any deep learning framework.

Programming Languages

Python
139335 projects - #7 most used programming language
C++
36643 projects - #6 most used programming language

Projects that are alternatives of or similar to YOLOP-opencv-dnn

copilot
Lane and obstacle detection for active assistance during driving. Uses windowed sweep for lane detection. Combination of object tracking and YOLO for obstacles. Determines lane change, relative velocity and time to collision
Stars: ✭ 95 (-46.63%)
Mutual labels:  autonomous-driving, lane-lines-detection
YOLOP
You Only Look Once for Panoptic Driving Perception. (https://arxiv.org/abs/2108.11250)
Stars: ✭ 1,228 (+589.89%)
Mutual labels:  autonomous-driving, drivable-area-segmentation
Jetson Car
Autonomous racing car on an NVIDIA Jetson TX2 using an end-to-end driving approach. Paper: https://arxiv.org/abs/1604.07316
Stars: ✭ 172 (-3.37%)
Mutual labels:  autonomous-driving
Autonomous-Parking-System
Automatic Parking is an autonomous car maneuvering system (part of ADAS) that moves a vehicle from a traffic lane into a parking spot to perform parallel parking. The automatic parking system aims to enhance the comfort and safety of driving in constrained environments where much attention and experience is required to steer the car. The parking…
Stars: ✭ 39 (-78.09%)
Mutual labels:  autonomous-driving
Segmentation
TensorFlow implementation of ENet, trained on the Cityscapes dataset.
Stars: ✭ 243 (+36.52%)
Mutual labels:  autonomous-driving
Apollo perception ros
Object detection / tracking / fusion based on Apollo r3.0.0 perception module in ROS
Stars: ✭ 179 (+0.56%)
Mutual labels:  autonomous-driving
autonomous-delivery-robot
Repository for Autonomous Delivery Robot project of IvLabs, VNIT
Stars: ✭ 65 (-63.48%)
Mutual labels:  autonomous-driving
Gym Carla
An OpenAI gym wrapper for CARLA simulator
Stars: ✭ 164 (-7.87%)
Mutual labels:  autonomous-driving
SF-GRU
Pedestrian Action Anticipation using Contextual Feature Fusion in Stacked RNNs
Stars: ✭ 43 (-75.84%)
Mutual labels:  autonomous-driving
Carma Platform
CARMA Platform is built on robot operating system (ROS) and utilizes open source software (OSS) that enables Cooperative Driving Automation (CDA) features to allow Automated Driving Systems to interact and cooperate with infrastructure and other vehicles through communication.
Stars: ✭ 243 (+36.52%)
Mutual labels:  autonomous-driving
Virtual-Lane-Boundary-Generation
Virtual Lane Boundary Generation for Human-Compatible Autonomous Driving
Stars: ✭ 22 (-87.64%)
Mutual labels:  autonomous-driving
Iscloam
Intensity Scan Context based full SLAM implementation for autonomous driving. ICRA 2020
Stars: ✭ 232 (+30.34%)
Mutual labels:  autonomous-driving
3d Pointcloud
Papers and Datasets about Point Cloud.
Stars: ✭ 179 (+0.56%)
Mutual labels:  autonomous-driving
dreyeve
[TPAMI 2018] Predicting the Driver’s Focus of Attention: the DR(eye)VE Project. A deep neural network trained to reproduce the human driver’s focus of attention (FoA) in a variety of real-world driving scenarios.
Stars: ✭ 88 (-50.56%)
Mutual labels:  autonomous-driving
3d Bat
3D Bounding Box Annotation Tool (3D-BAT) Point cloud and Image Labeling
Stars: ✭ 179 (+0.56%)
Mutual labels:  autonomous-driving
JuliaAutonomy
Julia sample codes for Autonomy, Robotics and Self-Driving Algorithms.
Stars: ✭ 21 (-88.2%)
Mutual labels:  autonomous-driving
Dbnet
DBNet: A Large-Scale Dataset for Driving Behavior Learning, CVPR 2018
Stars: ✭ 172 (-3.37%)
Mutual labels:  autonomous-driving
Rtm3d
Unofficial PyTorch implementation of "RTM3D: Real-time Monocular 3D Detection from Object Keypoints for Autonomous Driving" (ECCV 2020)
Stars: ✭ 211 (+18.54%)
Mutual labels:  autonomous-driving
OpenMaterial
3D model exchange format with physical material properties for virtual development, test and validation of automated driving.
Stars: ✭ 23 (-87.08%)
Mutual labels:  autonomous-driving
Depth-Guided-Inpainting
Code for ECCV 2020 "DVI: Depth Guided Video Inpainting for Autonomous Driving"
Stars: ✭ 50 (-71.91%)
Mutual labels:  autonomous-driving

YOLOP-opencv-dnn

Deploy the panoptic driving perception network YOLOP with OpenCV. It handles three visual perception tasks simultaneously: traffic object detection, drivable-area segmentation, and lane line detection. As before, both C++ and Python implementations are provided.

Download the onnx file from Baidu Netdisk at https://pan.baidu.com/s/1A_9cldUHeY9GUle_HO4Crg (extraction code: mf1x).

The main program of the C++ version is main.cpp, and that of the Python version is main.py. Once the onnx file has been downloaded into the directory containing the main program, the program can be run. The images folder contains several test images taken from the bdd100k autonomous driving dataset.
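As a rough illustration of how the OpenCV-only inference works, here is a minimal Python sketch. The model file name yolop.onnx, the 640x640 input size, the normalization, and the image path are assumptions for illustration only; main.py is the authoritative implementation.

```python
import cv2

# Assumed model file name and input size -- check main.py for the actual values.
net = cv2.dnn.readNetFromONNX("yolop.onnx")

img = cv2.imread("images/test.jpg")  # placeholder path; use any image from the images folder
blob = cv2.dnn.blobFromImage(img, scalefactor=1 / 255.0, size=(640, 640),
                             swapRB=True, crop=False)
net.setInput(blob)

# YOLOP has three heads: traffic object detection, drivable-area segmentation,
# and lane line detection, so the forward pass returns three output tensors.
outs = net.forward(net.getUnconnectedOutLayersNames())
print([o.shape for o in outs])
```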

This program is an OpenCV inference and deployment program built on top of https://github.com/hustvl/YOLOP, the project recently released by the vision team at Huazhong University of Science and Technology. It depends only on the OpenCV library and is therefore completely free of any deep learning framework. If the program fails to run, the most likely cause is that your installed OpenCV version is too old; upgrading OpenCV should make it run normally.
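If you are unsure which OpenCV build your Python interpreter is using, a quick check is shown below (a recent 4.x build is assumed to be sufficient; the repository does not state an exact minimum version):

```python
import cv2

# Print the OpenCV version; if it is an old 3.x build, upgrade e.g. via
#   pip install --upgrade opencv-python
print(cv2.__version__)
```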

In addition, this program includes an export_onnx.py file, which is the script for generating the onnx file. Note that export_onnx.py cannot be run inside this program's directory. If you want to see how the .onnx file is generated, copy export_onnx.py into the main directory of https://github.com/hustvl/YOLOP and modify the code in lib/models/common.py; running export_onnx.py will then produce the onnx file. For which code to modify in lib/models/common.py, see my CSDN blog post: https://blog.csdn.net/nihate/article/details/112731327
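For reference, the core of an ONNX export script such as export_onnx.py usually comes down to a single torch.onnx.export call. The sketch below is only an assumption of what that looks like: the get_net/cfg imports, the weight path, the 640x640 input size, and the output names are guesses, not code taken from either repository.

```python
import torch
from lib.config import cfg        # assumed: YOLOP's config object
from lib.models import get_net    # assumed: YOLOP's model factory

# Build the model and load pretrained weights (path is a placeholder).
model = get_net(cfg)
checkpoint = torch.load("weights/End-to-end.pth", map_location="cpu")
model.load_state_dict(checkpoint["state_dict"])
model.eval()

# Assumed 1x3x640x640 input; match whatever main.py expects at inference time.
dummy = torch.randn(1, 3, 640, 640)
torch.onnx.export(model, dummy, "yolop.onnx",
                  input_names=["image"],
                  output_names=["det_out", "drive_area_seg", "lane_line_seg"],
                  opset_version=11)
```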
