
Chenzhaowei13 / Light-Condition-Style-Transfer

License: MIT license
Lane Detection in Low-light Conditions Using an Efficient Data Enhancement : Light Conditions Style Transfer (IV 2020)

Programming Languages

python
139335 projects - #7 most used programming language
C++
36643 projects - #6 most used programming language
shell
77523 projects
matlab
3953 projects
TeX
3793 projects
CMake
9771 projects
Makefile
30231 projects

Projects that are alternatives of or similar to Light-Condition-Style-Transfer

Lanenet Lane Detection
Unofficial implementation of the LaneNet model for real-time lane detection using a deep neural network https://maybeshewill-cv.github.io/lanenet-lane-detection/
Stars: ✭ 1,690 (+1170.68%)
Mutual labels:  self-driving-car, lane-detection, instance-segmentation, lane-lines-detection
Awesome-Lane-Detection
A paper list with code of lane detection.
Stars: ✭ 34 (-74.44%)
Mutual labels:  self-driving-car, lane-detection, lane-lines-detection
CAP augmentation
Cut and paste augmentation for object detection and instance segmentation
Stars: ✭ 93 (-30.08%)
Mutual labels:  self-driving-car, instance-segmentation
Awesome Lane Detection
A paper list of lane detection.
Stars: ✭ 1,990 (+1396.24%)
Mutual labels:  lane-detection, lane-lines-detection
copilot
Lane and obstacle detection for active assistance during driving. Uses windowed sweep for lane detection. Combination of object tracking and YOLO for obstacles. Determines lane change, relative velocity and time to collision
Stars: ✭ 95 (-28.57%)
Mutual labels:  lane-detection, lane-lines-detection
Swayam-Self-Driving-Car
This is an implementation of various algorithms and techniques required to build a simple Self Driving Car. A modified version of the Udacity Self Driving Car Simulator is used as a testing environment.
Stars: ✭ 17 (-87.22%)
Mutual labels:  self-driving-car, lane-detection
Self-Driving-Car
Lane Detection for Self Driving Car
Stars: ✭ 14 (-89.47%)
Mutual labels:  self-driving-car, lane-detection
Self Driving Car
Udacity Self-Driving Car Engineer Nanodegree projects.
Stars: ✭ 2,103 (+1481.2%)
Mutual labels:  self-driving-car, lane-detection
YOLOP-opencv-dnn
Deploys the panoramic driving perception network YOLOP with OpenCV, handling three visual perception tasks at once: traffic object detection, drivable-area segmentation, and lane line detection. Both C++ and Python implementations are included. The program depends only on the OpenCV library, removing any dependency on deep learning frameworks.
Stars: ✭ 178 (+33.83%)
Mutual labels:  lane-lines-detection
dig-into-apollo
Apollo notes (Apollo study notes) - Apollo learning notes for beginners.
Stars: ✭ 1,786 (+1242.86%)
Mutual labels:  self-driving-car
GTAV-Self-driving-car
Self driving car in GTAV with Deep Learning
Stars: ✭ 15 (-88.72%)
Mutual labels:  self-driving-car
Algorithms-for-Automated-Driving
Each chapter of this (mini-)book guides you in programming one important software component for automated driving.
Stars: ✭ 153 (+15.04%)
Mutual labels:  self-driving-car
LiDAR-GTA-V
A plugin for Grand Theft Auto V that generates a labeled LiDAR point cloud from the game environment.
Stars: ✭ 127 (-4.51%)
Mutual labels:  self-driving-car
Advanced-Lane-Lines
Udacity Self-Driving Car Engineer Nanodegree. Project: Advanced Lane Finding
Stars: ✭ 52 (-60.9%)
Mutual labels:  self-driving-car
Advanced-lane-finding
Advanced lane finding
Stars: ✭ 50 (-62.41%)
Mutual labels:  lane-detection
Deep-Learning
Side projects and hands-on work
Stars: ✭ 16 (-87.97%)
Mutual labels:  instance-segmentation
SelfDrivingCarsControlDesign
Self Driving Cars Longitudinal and Lateral Control Design
Stars: ✭ 96 (-27.82%)
Mutual labels:  self-driving-car
LaneNetRos
Ros node to use LaneNet to detect the lane in camera
Stars: ✭ 132 (-0.75%)
Mutual labels:  lane-detection
FaPN
[ICCV 2021] FaPN: Feature-aligned Pyramid Network for Dense Image Prediction
Stars: ✭ 173 (+30.08%)
Mutual labels:  instance-segmentation
celldetection
Cell Detection with PyTorch.
Stars: ✭ 44 (-66.92%)
Mutual labels:  instance-segmentation

Light Conditions Style Transfer

Paper

Lane Detection in Low-light Conditions Using an Efficient Data Enhancement : Light Conditions Style Transfer

Accepted by 2020 IEEE Intelligent Vehicles Symposium (IV 2020).

The main framework is as follows:

[Figure: our framework]

Empirically, a lane detection model trained using our method demonstrates adaptability to low-light conditions and robustness in complex scenarios (it achieves a 73.9 F1-measure on the CULane testing set).
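
Concretely, a trained SIM-CycleGAN generator translates real normal-light training images into synthetic low-light images, and the original lane annotations are reused for the translated images, so the detector sees more low-light samples without extra labeling. The sketch below illustrates that augmentation step; the generator checkpoint, preprocessing, and paths are illustrative assumptions, not the repository's actual API.

# Hypothetical sketch of the data-enhancement step: translate one normal-light
# image to low-light with a trained generator and reuse its original lane label.
import torch
from PIL import Image
import torchvision.transforms as T

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Assumed: a normal-to-low-light generator exported as a TorchScript module.
G = torch.jit.load("checkpoints/sim_cyclegan_G_normal2dark.pt").to(device).eval()

preprocess = T.Compose([T.ToTensor(), T.Normalize((0.5,) * 3, (0.5,) * 3)])  # CycleGAN-style [-1, 1]

img = Image.open("examples/normal_light.jpg").convert("RGB")
with torch.no_grad():
    fake_dark = G(preprocess(img).unsqueeze(0).to(device))  # 1 x 3 x H x W in [-1, 1]

# Map back to [0, 255] and save; the lane label of the source image is reused
# unchanged for this synthetic low-light sample.
out = ((fake_dark.squeeze(0).cpu() * 0.5 + 0.5).clamp(0, 1) * 255).byte()
Image.fromarray(out.permute(1, 2, 0).numpy()).save("examples/normal_light_dark.jpg")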

Datasets

CULane

The whole dataset is available at CULane.

CULane
├── driver_23_30frame       # training&validation
├── driver_161_90frame      # training&validation
├── driver_182_30frame      # training&validation
├── driver_193_90frame      # testing
├── driver_100_30frame      # testing
├── driver_37_30frame       # testing
├── laneseg_label_w16       # labels
└── list                    # list
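
The files under list/ pair each image with its segmentation label (and, in the *_gt.txt variants, lane-existence flags). Below is a rough illustration of parsing such a list file; the exact file name and column layout are assumptions that should be verified against your copy of the dataset.

# Rough sketch: parse a CULane-style list file whose lines look like
# "<image_path> <label_path> <flag1> <flag2> <flag3> <flag4>"
# (layout assumed; check the files under CULane/list/ before relying on it).
import os

def load_culane_list(root, list_file):
    samples = []
    with open(os.path.join(root, "list", list_file)) as f:
        for line in f:
            parts = line.strip().split()
            if not parts:
                continue
            image = os.path.join(root, parts[0].lstrip("/"))
            label = os.path.join(root, parts[1].lstrip("/")) if len(parts) > 1 else None
            exist = [int(x) for x in parts[2:6]] if len(parts) >= 6 else None
            samples.append((image, label, exist))
    return samples

# Example usage (file name assumed): samples = load_culane_list("CULane", "train_gt.txt")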

Generated Images

The images in low-light conditions are generated by the proposed SIM-CycleGAN.

Requirements

  • PyTorch 1.3.0.

  • Matlab (for tools/prob2lines), version R2017a or later.

  • OpenCV (for tools/lane_evaluation).

Before you start

conda create -n your_env_name python=3.6
conda activate your_env_name
conda install pytorch==1.3.0 torchvision==0.4.1 cudatoolkit=10.0 -c pytorch
pip install -r requirements.txt 
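
A quick sanity check after installation confirms the expected versions and that PyTorch can see the GPU:

# Verify the environment created above.
import torch
import torchvision

print("torch:", torch.__version__)              # expected 1.3.0
print("torchvision:", torchvision.__version__)  # expected 0.4.1
print("CUDA available:", torch.cuda.is_available())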

SIM-CycleGAN

The source code for SIM-CycleGAN has been released. (11/03)

train

Train your own SIM-CycleGAN model as follows.

python train.py  --name repo_name \
                 --dataset_loadtxt_A /path/to/domain_A_txt \
                 --dataset_loadtxt_B /path/to/domain_B_txt \
                 --gpu_ids 6
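
The --dataset_loadtxt_A and --dataset_loadtxt_B arguments point to text files describing the two image domains (e.g. normal-light and low-light). Assuming each file simply lists one image path per line (confirm this against the repository's data loader), such lists can be generated like this:

# Sketch: write a domain list file with one image path per line.
# The one-path-per-line format is an assumption; confirm it in the dataset loader.
import glob
import os

def write_domain_list(image_glob, out_txt):
    paths = sorted(glob.glob(image_glob, recursive=True))
    os.makedirs(os.path.dirname(out_txt), exist_ok=True)
    with open(out_txt, "w") as f:
        f.write("\n".join(paths) + "\n")

write_domain_list("data/normal_light/**/*.jpg", "lists/domain_A_normal.txt")
write_domain_list("data/low_light/**/*.jpg", "lists/domain_B_dark.txt")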

test

Use your trained model to generate images.

python test.py   --name repo_name \
                 --model simcycle_gan \
                 --dataset_loadtxt_A /path/to/domain_A_txt \
                 --dataset_loadtxt_B /path/to/domain_B_txt \
                 --gpu_ids 6

Lane Detection

The source code used for lane detection was made publicly available by HOU Yuenan.

Test for Demo

We provide a demo for testing a single image or a video.

sh ./demo.sh

You can get results such as the following.

Result for probability map images

Result for points images

If you want to test the model on a video, set mode=0 in demo.sh.
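
The "points" result is obtained by post-processing the per-lane probability maps: for a set of fixed image rows, the most probable column is kept if its probability is high enough. The snippet below is a rough Python equivalent of that idea; the helper name, number of sampled rows, and threshold are illustrative, not the repository's exact settings.

# Illustrative post-processing: turn one lane's probability map into points by
# taking, for a few fixed rows, the column with the highest probability.
# The row count and threshold are assumptions, not the repository's settings.
import numpy as np

def prob_map_to_points(prob_map, num_rows=18, threshold=0.3):
    """prob_map: H x W array of lane probabilities in [0, 1]."""
    h, _ = prob_map.shape
    points = []
    for y in np.linspace(h - 1, 0, num_rows).astype(int):
        row = prob_map[y]
        x = int(row.argmax())
        if row[x] > threshold:
            points.append((x, int(y)))
    return points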

Evaluate the Model

The trained model used in this paper is available in ./trained.

  1. Run test script
sh ./test_erfnet.sh
  2. Get lines from the probability maps
cd tools/prob2lines
matlab -nodisplay -r "main;exit"

Please check the file paths in the Matlab code beforehand.

  3. Evaluation
cd tools/lane_evaluation
make
# You may also use cmake instead of make, via:
# mkdir build && cd build && cmake ..
sh eval_all.sh    # evaluate the whole test set
sh eval_split.sh  # evaluate each scenario separately

The evaluation results are saved in tools/lane_evaluation/output.
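
In the standard CULane protocol, predicted lanes are matched to ground-truth lanes by the IoU of their widened lane masks, and a match with IoU above 0.5 counts as a true positive; the reported score is the F1-measure, i.e. the harmonic mean of precision and recall. A minimal sketch of that final computation (the helper name is illustrative):

# F1-measure from matched-lane counts, as reported by the evaluation scripts.
def f1_measure(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0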

Performance

Light Conditions Style Transfer

Some examples of real images in normal light conditions and their corresponding translated images in low-light conditions are shown below.

Lane Detection

Performance (F1-measure) of different methods on the CULane testing set. For the crossroad category, only FP is shown.

Category      ERFNet   CycleGAN+ERFNet   SIM-CycleGAN+ERFNet (ours)   SCNN   ENet-SAD   ResNet-101-SAD
Normal        91.5     91.7              91.8                         90.6   90.1       90.7
Crowded       71.6     71.5              71.8                         69.7   68.8       70.0
Night         67.1     68.9              69.4                         66.1   66.0       66.3
No Line       45.1     45.2              46.1                         43.4   41.6       43.5
Shadow        71.3     73.1              76.2                         66.9   65.9       67.0
Arrow         87.2     87.2              87.8                         66.9   65.9       67.0
Dazzle Light  66.0     67.5              66.4                         58.5   60.2       59.9
Curve         66.3     69.0              67.1                         64.4   65.7       65.7
Crossroad     2199     2402              2346                         1990   1998       2052
Total         73.1     73.6              73.9                         71.6   70.8       71.8

The probability maps output by the three methods above are shown in the images below.

To do

  • Add attention to ERFNet

  • Release the source code for SIM-CycleGAN

  • Upgrade PyTorch (from 0.3.0 to 1.3.0)

  • Upload a demo for testing

Citation

Please cite this in your publication if our work helps your research.

@inproceedings{Liu2020Lane,
  title={Lane Detection in Low-light Conditions Using an Efficient Data Enhancement : Light Conditions Style Transfer},
  author={Liu, Tong and Chen, Zhaowei and Yang, Yi and Wu, Zehao and Li, Haowei},
  booktitle={2020 IEEE intelligent vehicles symposium (IV)},
  year={2020},
}

Acknowledgement

This project refers to the following projects:
