
cardwing / Codes-for-IntRA-KD

License: MIT
Inter-Region Affinity Distillation for Road Marking Segmentation (CVPR 2020)

Programming Languages

Python
139,335 projects - #7 most used programming language
CUDA
1,817 projects
C
50,402 projects - #5 most used programming language
Shell
77,523 projects

Projects that are alternatives of or similar to Codes-for-IntRA-KD

conde simulator
Autonomous Driving Simulator for the Portuguese Robotics Open
Stars: ✭ 31 (-70.48%)
Mutual labels:  lane-detection
Swayam-Self-Driving-Car
This is an implementation of various algorithms and techniques required to build a simple self-driving car. A modified version of the Udacity Self-Driving Car Simulator is used as the testing environment.
Stars: ✭ 17 (-83.81%)
Mutual labels:  lane-detection
Awesome Lane Detection
A paper list of lane detection.
Stars: ✭ 1,990 (+1795.24%)
Mutual labels:  lane-detection
VPGNet for lane
Vanishing Point Guided Network for lane detection, with post processing
Stars: ✭ 33 (-68.57%)
Mutual labels:  lane-detection
CarND-Detect-Lane-Lines-And-Vehicles
Use segmentation networks to recognize lane lines and vehicles. Infer position and curvature of lane lines relative to self.
Stars: ✭ 66 (-37.14%)
Mutual labels:  lane-detection
LaneandYolovehicle-DetectionLinux
Lane departure and YOLO object detection, C++, Linux
Stars: ✭ 16 (-84.76%)
Mutual labels:  lane-detection
DeepWay.v2
Autonomous navigation for blind people
Stars: ✭ 65 (-38.1%)
Mutual labels:  lane-detection
DVCNN Lane Detection
Accurate and Robust Lane Detection based on Dual-View Convolutional Neural Network
Stars: ✭ 25 (-76.19%)
Mutual labels:  lane-detection
lane detection
Lane detection for the Nvidia Jetson TX2 using OpenCV4Tegra
Stars: ✭ 15 (-85.71%)
Mutual labels:  lane-detection
Lanenet Lane Detection
Unofficial implementation of the LaneNet model for real-time lane detection using a deep neural network https://maybeshewill-cv.github.io/lanenet-lane-detection/
Stars: ✭ 1,690 (+1509.52%)
Mutual labels:  lane-detection
copilot
Lane and obstacle detection for active assistance during driving. Uses windowed sweep for lane detection. Combination of object tracking and YOLO for obstacles. Determines lane change, relative velocity and time to collision
Stars: ✭ 95 (-9.52%)
Mutual labels:  lane-detection
Awesome-Lane-Detection
A paper list with code of lane detection.
Stars: ✭ 34 (-67.62%)
Mutual labels:  lane-detection
SAComputerVisionMachineLearning
Computer Vision and Machine Learning related projects of Udacity's Self-driving Car Nanodegree Program
Stars: ✭ 36 (-65.71%)
Mutual labels:  lane-detection
lane-detection
Lane detection MATLAB code for Kalman Filter book chapter: Lane Detection
Stars: ✭ 21 (-80%)
Mutual labels:  lane-detection
Self Driving Car
Udacity Self-Driving Car Engineer Nanodegree projects.
Stars: ✭ 2,103 (+1902.86%)
Mutual labels:  lane-detection
YOLOP
You Only Look Once for Panoptic Driving Perception. (https://arxiv.org/abs/2108.11250)
Stars: ✭ 1,228 (+1069.52%)
Mutual labels:  lane-detection
Self-Driving-Car-Engines
Gathers signal processing, computer vision, machine learning and deep learning for self-driving car engines.
Stars: ✭ 68 (-35.24%)
Mutual labels:  lane-detection
AirCracker
Basic Python script to detect AirDroid users on a LAN
Stars: ✭ 37 (-64.76%)
Mutual labels:  lane-detection
Virtual-Lane-Boundary-Generation
Virtual Lane Boundary Generation for Human-Compatible Autonomous Driving
Stars: ✭ 22 (-79.05%)
Mutual labels:  lane-detection
Self-Driving-Car
Lane Detection for Self Driving Car
Stars: ✭ 14 (-86.67%)
Mutual labels:  lane-detection

Codes for "Inter-Region Affinity Distillation for Road Marking Segmentation"
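
At a high level, IntRA-KD transfers structural knowledge from a teacher network to a student by matching how their features relate across labelled road-marking regions, rather than matching raw feature maps. The sketch below only illustrates that idea and is not the repo's implementation: the paper uses moment pooling over regions of interest and a graph-matching distance, whereas the per-class mean pooling, cosine affinity, and MSE used here are simplified assumptions.

    # Illustrative sketch of an inter-region affinity loss (PyTorch).
    # NOT the authors' implementation: per-class mean pooling, cosine
    # similarity and MSE are stand-ins for the moment pooling and graph
    # matching described in the paper.
    import torch
    import torch.nn.functional as F

    def region_means(feat, label, num_classes):
        """Average a (C, H, W) feature map over the pixels of each class."""
        C = feat.shape[0]
        feat = feat.view(C, -1)                              # (C, H*W)
        means = []
        for k in range(num_classes):
            mask = (label.view(-1) == k).float()             # (H*W,)
            denom = mask.sum().clamp(min=1.0)
            means.append((feat * mask).sum(dim=1) / denom)   # (C,)
        return torch.stack(means, dim=0)                     # (K, C)

    def affinity_graph(means):
        """Pairwise cosine similarity between region descriptors."""
        normed = F.normalize(means, dim=1)
        return normed @ normed.t()                           # (K, K)

    def intra_kd_loss(student_feat, teacher_feat, label, num_classes):
        """Match the student's inter-region affinity graph to the teacher's."""
        a_s = affinity_graph(region_means(student_feat, label, num_classes))
        a_t = affinity_graph(region_means(teacher_feat.detach(), label, num_classes))
        return F.mse_loss(a_s, a_t)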

Requirements

Before you start

Please follow the list to put ApolloScape in the desired folder. We'll call the directory that you cloned Codes-for-IntRA-KD into `$IntRA_KD_ROOT`.

Testing

  1. Obtain model predictions from trained weights:

Download the trained ResNet-101 and ERFNet, and put them in the folder trained_model.

    cd $IntRA_KD_ROOT
    sh test_pspnet_multi_scale.sh # sh test_erfnet_multi_scale.sh

The output predictions will be saved to road05_tmp by default.

  2. Transfer TrainID to ID:
    python road_npy2img.py

The outputs will be stored in road05 by default.
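
Conceptually, this step remaps each pixel's training ID to the corresponding ApolloScape label ID before writing images. A minimal sketch of such a lookup, assuming the predictions are integer arrays; the mapping values below are placeholders, not the actual IDs used by road_npy2img.py:

    # Illustrative only: remap per-pixel training IDs to dataset label IDs.
    # The real mapping lives in road_npy2img.py; this table is a placeholder.
    import numpy as np

    TRAIN_ID_TO_ID = {0: 0, 1: 200, 2: 204}        # placeholder mapping

    def remap(pred_train_ids: np.ndarray) -> np.ndarray:
        lut = np.zeros(256, dtype=np.uint16)
        for train_id, label_id in TRAIN_ID_TO_ID.items():
            lut[train_id] = label_id
        return lut[pred_train_ids]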

  3. Generate zip files:
    mkdir test
    mv road05 test/
    zip -r test.zip test

Now, just upload test.zip to the ApolloScape online server. The trained ResNet-101 achieves 46.63% mIoU and the trained ERFNet achieves 43.48% mIoU.

  4. (Optional) Produce color maps from model predictions:
    python trainId2color.py
  5. (Optional) Leverage t-SNE to visualize the feature maps:

Please use the script to perform the visualization.
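
If you only need a quick look, the sketch below shows a minimal t-SNE visualization with scikit-learn, assuming the features have already been exported as an (N, C) array with per-point class labels; the file names are placeholders, not files produced by the repo:

    # Illustrative t-SNE of exported feature vectors; paths are placeholders.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    feats = np.load("features.npy")    # (N, C) feature vectors
    labels = np.load("labels.npy")     # (N,) class index per vector

    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(feats)
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=3, cmap="tab20")
    plt.title("t-SNE of feature maps")
    plt.savefig("tsne.png", dpi=200)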

Training

    cd $IntRA_KD_ROOT
    sh train_pspnet.sh # sh train_erfnet_vanilla.sh

Please make sure that you have 8 GPUs and that each GPU has at least 11 GB of memory if you want to train ResNet-101.
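
A quick sanity check before launching the ResNet-101 run (a standalone sketch, not part of the repo's scripts):

    # Sketch: verify that 8 GPUs with at least 11 GB each are visible.
    import torch

    assert torch.cuda.device_count() >= 8, "ResNet-101 training expects 8 GPUs"
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        total_gb = props.total_memory / 1024**3
        print(f"GPU {i}: {props.name}, {total_gb:.1f} GB")
        assert total_gb >= 11, f"GPU {i} has less than 11 GB of memory"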

Citation

If you use the codes, please cite the following publication:

@inproceedings{hou2020interregion,
  title     = {Inter-Region Affinity Distillation for Road Marking Segmentation},
  author    = {Hou, Yuenan and Ma, Zheng and Liu, Chunxiao and Hui, Tak-Wai and Loy, Chen Change},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2020},
} 

Acknowledgement

This repo is built upon ERFNet-CULane-PyTorch.

Contact

If you have any problems reproducing the results, just raise an issue in this repo.

To-Do List

  • Training codes of IntRA-KD and various baseline KD methods for ApolloScape