maudzung / Awesome-Autonomous-Driving-Papers
This repository provides awesome research papers for autonomous driving perception. I have tried my best to keep this repository up to date. If you find a problem or have a suggestion, please raise an issue or make a pull request with the following information (the format of the repo): research paper title, datasets, metrics, objects, source code, publisher, and year.
This summary is categorized into:
- Datasets
- LiDAR-based 3D Object Detection
- Monocular Image-based 3D Object Detection
- LiDAR and RGB Images fusion
- Pseudo-LiDAR
- Training tricks
Abbreviations
- AP-2D: Average Precision for 2D detection (on RGB-image space)
- AP-3D: Average Precision for 3D detection
- AP-BEV: Average Precision for Bird's Eye View detection
- AOS: Average Orientation Similarity (if a 2D bounding box is available)
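The AP metrics above are interpolated precision-recall averages. As an illustration only (not the official KITTI devkit, which uses 11- or 40-point interpolation with per-class IoU thresholds), here is a minimal sketch of 11-point interpolated AP; the `recalls`/`precisions` inputs are assumed to come from a detector's ranked predictions:

```python
# Illustrative sketch of 11-point interpolated Average Precision,
# the kind of metric behind AP-2D / AP-3D / AP-BEV. Not the official
# KITTI evaluation code; input names are hypothetical.

def interpolated_ap(recalls, precisions, num_points=11):
    """Average the interpolated precision at evenly spaced recall levels."""
    ap = 0.0
    for t in [i / (num_points - 1) for i in range(num_points)]:
        # Interpolated precision: the best precision achieved at any
        # recall level >= t (0.0 if that recall is never reached).
        candidates = [p for r, p in zip(recalls, precisions) if r >= t]
        ap += max(candidates) if candidates else 0.0
    return ap / num_points

# A detector with precision 1.0 at every recall level scores AP = 1.0.
print(interpolated_ap([0.0, 0.5, 1.0], [1.0, 1.0, 1.0]))  # 1.0
```

The interpolation step (taking the max precision to the right of each recall threshold) is what makes AP monotone and robust to local precision dips.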
Datasets
1. LiDAR-based 3D Object Detection
1.1 Single-stage detectors
1.2 Two-stage detectors
2. Monocular Image-based 3D Object Detection
Research Paper | Datasets | Metrics | Objects | Source Code | Publisher | Year |
---|---|---|---|---|---|---|
RTM3D: Real-time Monocular 3D Detection from Object Keypoints for Autonomous Driving | | | Cars | PyTorch | ECCV | 2020 |
Stereo R-CNN based 3D Object Detection for Autonomous Driving | | | Cars | PyTorch | CVPR | 2019 |
M3D-RPN: Monocular 3D Region Proposal Network for Object Detection | | | Cars, Pedestrians, Cyclists | PyTorch | ICCV | 2019 |
Mono3D++: Monocular 3D Vehicle Detection with Two-Scale 3D Hypotheses and Task Priors | | | Cars, Pedestrians, Cyclists | --- | ArXiv | 2019 |
3D Bounding Box Estimation Using Deep Learning and Geometry | | | Cars, Cyclists | PyTorch | CVPR | 2017 |
Deep MANTA: A Coarse-to-fine Many-Task Network for joint 2D and 3D vehicle analysis from monocular image | | | Cars | --- | CVPR | 2017 |
Deep MANTA: A Coarse-to-fine Many-Task Network for joint 2D and 3D vehicle analysis from monocular image | | | Cars | Link | ICRA | 2017 |
3. LiDAR and RGB Images Fusion
Research Paper | Datasets | Metrics | Objects | Source Code | Publisher | Year |
---|---|---|---|---|---|---|
ImVoteNet: Boosting 3D Object Detection in Point Clouds with Image Votes | | | 37 object categories | PyTorch | CVPR | 2020 |
Multi-Task Multi-Sensor Fusion for 3D Object Detection | | | Cars, Pedestrians, Cyclists | PyTorch | CVPR | 2019 |
4. Pseudo-LiDAR
Research Paper | Datasets | Metrics | Objects | Source Code | Publisher | Year |
---|---|---|---|---|---|---|
Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving | | | Cars, Pedestrians, Cyclists | PyTorch | ICLR | 2020 |
Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving | | | Cars, Pedestrians, Cyclists | PyTorch | CVPR | 2019 |
5. Training tricks
Research Paper | Datasets | Metrics | Objects | Source Code | Publisher | Year |
---|---|---|---|---|---|---|
PPBA: Improving 3D Object Detection through Progressive Population Based Augmentation | | | Cars, Pedestrians, Cyclists | --- | ArXiv | 2020 |
Class-balanced Grouping and Sampling for Point Cloud 3D Object Detection | | | 10 object categories | PyTorch | ArXiv | 2019 |
Weighted Point Cloud Augmentation for Neural Network Training Data Class-Imbalance | | | | --- | ArXiv | 2019 |
6. Object tracking (in progress)
To do list:
- [x] Add 3D object detection papers based on LiDAR/monocular images/fusion/pseudo-LiDAR.
- [x] Add training-tricks papers.
- [ ] Add object tracking papers.
- [ ] Provide a `download.py` script to automatically download `.pdf` files.
References
- The format of this README was adapted from RedditSota/state-of-the-art-result-for-machine-learning-problems.