
anshulpaigwar / GndNet

License: MIT
GndNet: Fast ground plane estimation and point cloud segmentation for autonomous vehicles using deep neural networks.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to GndNet

patchwork
Official page of Patchwork (RA-L'21 w/ IROS'21)
Stars: ✭ 174 (+53.98%)
Mutual labels:  ground-segmentation, ground-estimator
JuliaAutonomy
Julia sample codes for Autonomy, Robotics and Self-Driving Algorithms.
Stars: ✭ 21 (-81.42%)
Mutual labels:  autonomous-vehicles
Motionplanning
Motion planning algorithms commonly used on autonomous vehicles. (path planning + path tracking)
Stars: ✭ 228 (+101.77%)
Mutual labels:  autonomous-vehicles
dreyeve
[TPAMI 2018] Predicting the Driver’s Focus of Attention: the DR(eye)VE Project. A deep neural network learnt to reproduce the human driver focus of attention (FoA) in a variety of real-world driving scenarios.
Stars: ✭ 88 (-22.12%)
Mutual labels:  autonomous-vehicles
Carma Platform
CARMA Platform is built on robot operating system (ROS) and utilizes open source software (OSS) that enables Cooperative Driving Automation (CDA) features to allow Automated Driving Systems to interact and cooperate with infrastructure and other vehicles through communication.
Stars: ✭ 243 (+115.04%)
Mutual labels:  autonomous-vehicles
SubjuGator
SubjuGator 8 on-board software
Stars: ✭ 31 (-72.57%)
Mutual labels:  autonomous-vehicles
Rtm3d
Unofficial PyTorch implementation of "RTM3D: Real-time Monocular 3D Detection from Object Keypoints for Autonomous Driving" (ECCV 2020)
Stars: ✭ 211 (+86.73%)
Mutual labels:  autonomous-vehicles
realsense explorer bot
Autonomous ground exploration mobile robot with a 3-DOF manipulator and an Intel RealSense D435i, mounted on a tracked skid-steer drive mobile robot. The robot is capable of mapping spaces, exploration through RRT, SLAM, and 3D pose estimation of objects around it. This is a custom robot with a self-built URDF model. The robot uses ROS's navigation…
Stars: ✭ 61 (-46.02%)
Mutual labels:  autonomous-vehicles
PointCloudSegmentation
A research project based on the SemanticKITTI dataset: 3D point cloud segmentation and obstacle detection.
Stars: ✭ 62 (-45.13%)
Mutual labels:  point-cloud-segmentation
autonomous-delivery-robot
Repository for Autonomous Delivery Robot project of IvLabs, VNIT
Stars: ✭ 65 (-42.48%)
Mutual labels:  autonomous-vehicles
Awesome-Self-Driving
an awesome list of self-driving algorithms, software, tools
Stars: ✭ 74 (-34.51%)
Mutual labels:  autonomous-vehicles
Awesome Carla
👉 CARLA resources such as tutorials, blogs, and code: https://github.com/carla-simulator/carla
Stars: ✭ 246 (+117.7%)
Mutual labels:  autonomous-vehicles
ldgcnn
Linked Dynamic Graph CNN: Learning through Point Cloud by Linking Hierarchical Features
Stars: ✭ 66 (-41.59%)
Mutual labels:  point-cloud-segmentation
Deeppicar
Deep Learning Autonomous Car based on Raspberry Pi, SunFounder PiCar-V Kit, TensorFlow, and Google's EdgeTPU Co-Processor
Stars: ✭ 242 (+114.16%)
Mutual labels:  autonomous-vehicles
SelfDrivingCarsControlDesign
Self Driving Cars Longitudinal and Lateral Control Design
Stars: ✭ 96 (-15.04%)
Mutual labels:  autonomous-vehicles
Autodrome
Framework and OpenAI Gym Environment for Autonomous Vehicle Development
Stars: ✭ 214 (+89.38%)
Mutual labels:  autonomous-vehicles
can-go
Controller Area Network (CAN) SDK for Go.
Stars: ✭ 113 (+0%)
Mutual labels:  autonomous-vehicles
loco car
Software for LOCO, our autonomous drifting RC car.
Stars: ✭ 44 (-61.06%)
Mutual labels:  autonomous-vehicles
MIT-Driverless-CV-TrainingInfra
PyTorch pipeline of the MIT Driverless Computer Vision paper (2020)
Stars: ✭ 89 (-21.24%)
Mutual labels:  autonomous-vehicles
3D Ground Segmentation
A ground segmentation algorithm for 3D point clouds based on the work described in “Fast segmentation of 3D point clouds: a paradigm on LIDAR data for Autonomous Vehicle Applications”, D. Zermas, I. Izzat and N. Papanikolopoulos, 2017. Distinguish between road and non-road points. Road surface extraction. Plane fit ground filter
Stars: ✭ 55 (-51.33%)
Mutual labels:  ground-segmentation

GndNet: Fast Ground Plane Estimation and Point Cloud Segmentation for Autonomous Vehicles

Authors: Anshul Paigwar, Ozgur Erkent, David Sierra Gonzalez, Christian Laugier


Introduction

This repository is the code release for our GndNet paper, accepted at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2020. Link

Abstract

Ground plane estimation and ground point segmentation are a crucial precursor for many applications in robotics and intelligent vehicles, such as navigable space detection and occupancy grid generation, 3D object detection, point cloud matching for localization, and registration for mapping. In this paper, we present GndNet, a novel end-to-end approach that estimates the ground plane elevation information in a grid-based representation and segments the ground points simultaneously in real-time. GndNet uses PointNet and a Pillar Feature Encoding network to extract features and regresses the ground height for each cell of the grid. We augment the SemanticKITTI dataset to train our network. We present qualitative and quantitative evaluations of our results for ground elevation estimation and semantic segmentation of point clouds. GndNet establishes a new state of the art, achieving a run-time of 55 Hz for ground plane estimation and ground point segmentation.

Installation

We have tested the algorithm on a system with Ubuntu 18.04, 12 GB RAM, and an NVIDIA GTX 1080.

Dependencies

Python 3.6
CUDA (tested on 10.1)
PyTorch (tested on 1.4)
scipy
ipdb
argparse
numba

Visualization

For visualization of the ground estimation and the semantic segmentation of the point cloud, and for easy integration with our real system, we use the Robot Operating System (ROS):

ROS
ros_numpy

Data Preparation

We train our model using the augmented SemanticKITTI dataset. Sample data is provided in this repository, while the full dataset can be downloaded from link. We use the following procedure to generate our dataset:

  • We first crop the point cloud to the range (x, y) = [(-50, -50), (50, 50)] and apply incremental rotations of [-10, 10] degrees about the X and Y axes to generate data with varying slopes and uphills. (The SemanticKITTI dataset is recorded on mostly flat terrain.)
  • The augmented point cloud is stored as a NumPy file in the reduced_velo folder.
  • To generate ground elevation labels, we then use the CRF-based surface fitting method described in [1].
  • We subdivide the object classes in the SemanticKITTI dataset into two categories:
    1. Ground (road, sidewalk, parking, other-ground, vegetation, terrain)
    2. Non-ground (all others)
  • We filter out the non-ground points from reduced_velo and apply the CRF method [1] to the ground points only to generate an elevation map.
  • The ground elevation is represented as a 2D grid with a cell resolution of 1 m x 1 m and a size of (x, y) = [(-50, -50), (50, 50)], where the value of each cell is the local ground elevation.
  • The ground elevation map is stored as a NumPy file in the gnd_labels folder.
  • Finally, GndNet uses gnd_labels and reduced_velo (containing both ground and non-ground points) for training; a simplified sketch of the cropping, tilting, and gridding steps follows this list.
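
For concreteness, the cropping, tilting, and gridding steps above can be sketched as follows. This is a simplified illustration rather than the repository's actual code: the real elevation labels come from the CRF surface fitting of [1], replaced here by a naive per-cell mean, and all function names are ours.

import numpy as np
from scipy.spatial.transform import Rotation

GRID_RANGE = 50.0  # crop range: (x, y) in [-50, 50] metres
CELL_SIZE = 1.0    # elevation-grid resolution in metres

def augment_cloud(points, max_tilt_deg=10.0, rng=None):
    # Crop an (N, 4) cloud of [x, y, z, intensity] rows to the grid range,
    # then tilt it by a random rotation about the X and Y axes to simulate
    # slopes and uphills.
    rng = rng if rng is not None else np.random.default_rng()
    mask = (np.abs(points[:, 0]) <= GRID_RANGE) & (np.abs(points[:, 1]) <= GRID_RANGE)
    cropped = points[mask].copy()
    angles = rng.uniform(-max_tilt_deg, max_tilt_deg, size=2)
    R = Rotation.from_euler("xy", angles, degrees=True).as_matrix()
    cropped[:, :3] = cropped[:, :3] @ R.T
    return cropped

def elevation_grid(ground_points):
    # Bin ground points into a 100 x 100 grid of 1 m x 1 m cells and take
    # the mean z per cell as the local ground elevation (a stand-in for the
    # CRF-based labels of [1]); cells with no ground points stay NaN.
    n = int(2 * GRID_RANGE / CELL_SIZE)
    ix = ((ground_points[:, 0] + GRID_RANGE) / CELL_SIZE).astype(int).clip(0, n - 1)
    iy = ((ground_points[:, 1] + GRID_RANGE) / CELL_SIZE).astype(int).clip(0, n - 1)
    sums = np.zeros((n, n))
    counts = np.zeros((n, n))
    np.add.at(sums, (ix, iy), ground_points[:, 2])
    np.add.at(counts, (ix, iy), 1)
    return np.divide(sums, counts, out=np.full((n, n), np.nan), where=counts > 0)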

If you find the dataset useful, please consider citing our work. For queries regarding the dataset, please contact the authors.

Training

To train the model, update the data directory path in the config file config_kittiSem.yaml and run:

python main.py -s

It takes around 6 hours for the network to converge, and the model parameters will be stored in the checkpoint.pth.tar file. A pre-trained model is provided in the trained_models folder; it can be used to evaluate a sequence of the SemanticKITTI dataset:

python evaluate_SemanticKITTI.py --resume checkpoint.pth.tar --data_dir /home/.../kitti_semantic/dataset/sequences/07/

Using the pre-trained model

Download the SemanticKITTI dataset from their website link. To visualize the output, we use ROS and RViz. The predicted class (ground or non-ground) of each point is substituted into the intensity field of the sensor_msgs point cloud message. In RViz, use intensity as the color transformer to visualize the segmented point cloud. For the visualization of the ground elevation, we use a ROS line marker.
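
As a rough sketch of that intensity-field substitution, assuming the ros_numpy dependency listed above (publish_segmented_cloud and the frame name are illustrative, not the repository's API):

import numpy as np
import rospy
import ros_numpy
from sensor_msgs.msg import PointCloud2

def publish_segmented_cloud(pub, points, labels, frame_id="velodyne"):
    # Pack an (N, 3) array of xyz points into a PointCloud2 whose intensity
    # field carries the predicted class (e.g. 0 = non-ground, 1 = ground),
    # so RViz can colour the cloud by intensity.
    cloud = np.zeros(len(points), dtype=[
        ("x", np.float32), ("y", np.float32),
        ("z", np.float32), ("intensity", np.float32)])
    cloud["x"], cloud["y"], cloud["z"] = points[:, 0], points[:, 1], points[:, 2]
    cloud["intensity"] = labels.astype(np.float32)
    msg = ros_numpy.msgify(PointCloud2, cloud)
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = frame_id
    pub.publish(msg)

The publisher would be created once, e.g. pub = rospy.Publisher("segmented_cloud", PointCloud2, queue_size=1).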

roscore
rviz
python evaluate_SemanticKITTI.py --resume trained_models/checkpoint.pth.tar -v -gnd --data_dir /home/.../SemanticKITTI/dataset/sequences/00/

Note: The current version of the visualization code is written in Python, which can be very slow, particularly the generation of the ROS marker. To visualize only the segmentation output, without the ground elevation, remove the -gnd flag.
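
For reference, building such a line marker in Python looks roughly like the following; the Marker API is standard ROS, while the grid layout and function name are illustrative:

import rospy
from geometry_msgs.msg import Point
from visualization_msgs.msg import Marker

def elevation_marker(grid, cell_size=1.0, origin=-50.0, frame_id="velodyne"):
    # Trace the estimated ground elevation along one row of the elevation
    # grid as a LINE_STRIP marker.
    m = Marker()
    m.header.frame_id = frame_id
    m.header.stamp = rospy.Time.now()
    m.type = Marker.LINE_STRIP
    m.action = Marker.ADD
    m.pose.orientation.w = 1.0
    m.scale.x = 0.05           # line width in metres
    m.color.g = m.color.a = 1.0
    row = grid.shape[0] // 2   # the row passing closest to the sensor
    for j in range(grid.shape[1]):
        p = Point()
        p.x = origin + row * cell_size
        p.y = origin + j * cell_size
        p.z = float(grid[row, j])
        m.points.append(p)
    return m

Allocating one Python Point object per grid cell on every frame is a large part of what makes the marker generation slow; the C++ rewrite mentioned under TODO would avoid that overhead.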

Results

Semantic segmentation of the point cloud: ground (green) and non-ground (purple).


Ground elevation estimation:


YouTube video (Segmentation):


YouTube video (Ground Estimation):


TODO

  • The current dataloader loads the entire dataset into RAM first; this reduces training time but can hog memory on systems with low RAM (see the sketch after this list).
  • Speed up the visualization of the ground elevation by writing C++ code for the ROS marker generation.
  • Create a generalized ground elevation dataset, in correspondence with SemanticKITTI, to be made public.
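
A minimal sketch of an on-demand alternative for the first item, assuming the reduced_velo/gnd_labels layout described under Data Preparation (the class and file naming are ours, not the repository's):

import os
import numpy as np
import torch
from torch.utils.data import Dataset

class LazyGndDataset(Dataset):
    # Loads each sample from disk on demand instead of caching the whole
    # dataset in RAM, trading some training speed for a small memory
    # footprint.

    def __init__(self, data_dir):
        self.velo_dir = os.path.join(data_dir, "reduced_velo")
        self.label_dir = os.path.join(data_dir, "gnd_labels")
        self.files = sorted(os.listdir(self.velo_dir))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        name = self.files[idx]
        points = np.load(os.path.join(self.velo_dir, name))   # augmented cloud
        labels = np.load(os.path.join(self.label_dir, name))  # elevation grid
        return torch.from_numpy(points).float(), torch.from_numpy(labels).float()

Because the clouds differ in point count, a DataLoader over this dataset needs a custom collate_fn (or per-cloud padding) before batching.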

Citation

If you find this project useful in your research, please consider citing our work:

@inproceedings{paigwar2020gndnet,
  title={GndNet: Fast Ground Plane Estimation and Point Cloud Segmentation for Autonomous Vehicles},
  author={Paigwar, Anshul and Erkent, {\"O}zg{\"u}r and Gonz{\'a}lez, David Sierra and Laugier, Christian},
  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2020}
}

Contribution

Contributions to this repo are welcome; feel free to contact us about any potential bugs and issues.

References

[1] L. Rummelhard, A. Paigwar, A. Nègre and C. Laugier, "Ground estimation and point cloud segmentation using SpatioTemporal Conditional Random Field," 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, 2017, pp. 1105-1110, doi: 10.1109/IVS.2017.7995861.

[2] Behley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Stachniss, C., & Gall, J. (2019). SemanticKITTI: A dataset for semantic scene understanding of lidar sequences. In Proceedings of the IEEE International Conference on Computer Vision (pp. 9297-9307).
