
jbehley / Point_labeler

License: MIT
My awesome point cloud labeling tool


Projects that are alternatives to or similar to Point_labeler

Slate
A Super-Lightweight Annotation Tool for Experts: Label text in a terminal with just Python
Stars: ✭ 61 (-75.89%)
Mutual labels:  labeling
Imagetagger
An open source online platform for collaborative image labeling
Stars: ✭ 182 (-28.06%)
Mutual labels:  labeling
image-sorter2
One-click image sorting/labelling script
Stars: ✭ 65 (-74.31%)
Mutual labels:  labeling
Universal Data Tool
Collaborate & label any type of data, images, text, or documents, in an easy web interface or desktop app.
Stars: ✭ 1,356 (+435.97%)
Mutual labels:  labeling
Fastclass
Little tools to download and then weed through images, delete and classify them into groups for building deep learning image datasets (based on crawler and tkinter)
Stars: ✭ 123 (-51.38%)
Mutual labels:  labeling
Compose
A machine learning tool for automated prediction engineering. It allows you to easily structure prediction problems and generate labels for supervised learning.
Stars: ✭ 203 (-19.76%)
Mutual labels:  labeling
Person Search Annotation
Cross-Platform Annotation Tool for Person Search Datasets
Stars: ✭ 9 (-96.44%)
Mutual labels:  labeling
labelReader
Programmatically find and read labels using Machine Learning
Stars: ✭ 44 (-82.61%)
Mutual labels:  labeling
Kili Playground
Simplest and fastest image and text annotation tool.
Stars: ✭ 166 (-34.39%)
Mutual labels:  labeling
Alturos.ImageAnnotation
A collaborative tool for labeling image data for yolo
Stars: ✭ 47 (-81.42%)
Mutual labels:  labeling
Nova
NOVA is a tool for annotating and analyzing behaviours in social interactions. It supports annotators with machine learning already during the coding process. Furthermore, it features both discrete labels and continuous scores, and a visualization of streams recorded with the SSI Framework.
Stars: ✭ 110 (-56.52%)
Mutual labels:  labeling
Labelbox
Labelbox is the fastest way to annotate data to build and ship computer vision applications.
Stars: ✭ 1,588 (+527.67%)
Mutual labels:  labeling
Bbox Visualizer
Make drawing and labeling bounding boxes easy as cake
Stars: ✭ 225 (-11.07%)
Mutual labels:  labeling
Awesome Data Labeling
A curated list of awesome data labeling tools
Stars: ✭ 1,120 (+342.69%)
Mutual labels:  labeling
labelCloud
A lightweight tool for labeling 3D bounding boxes in point clouds.
Stars: ✭ 264 (+4.35%)
Mutual labels:  labeling
Diffgram
Data Annotation, Data Labeling, Annotation Tooling, Training Data for Machine Learning
Stars: ✭ 43 (-83%)
Mutual labels:  labeling
Ultimatelabeling
A multi-purpose Video Labeling GUI in Python with integrated SOTA detector and tracker
Stars: ✭ 184 (-27.27%)
Mutual labels:  labeling
turktool
Modern React app for bounding box annotation on mturk
Stars: ✭ 46 (-81.82%)
Mutual labels:  labeling
iris3
An upgraded and improved version of the Iris automatic GCP-labeling project
Stars: ✭ 38 (-84.98%)
Mutual labels:  labeling
annotate
Create 3D labelled bounding boxes in RViz
Stars: ✭ 104 (-58.89%)
Mutual labels:  labeling

Point Cloud Labeling Tool

Tool for labeling single point clouds or a stream of point clouds.

Given the poses of a KITTI point cloud dataset, we load tiles of overlapping point clouds; thus, multiple point clouds are labeled at once in a given area.

Features

  • Support for KITTI Vision Benchmark Point Clouds.
  • Human-readable label description files in XML allow you to define label names, ids, and colors.
  • Modern OpenGL shaders for rendering even millions of points.
  • Tools for labeling individual points and selecting regions with polygons.
  • Filtering of labels makes it easy to label even complicated structures.
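For illustration, a label description file could look like the following sketch. The element and attribute names here are assumptions for illustration only; the actual schema is defined by the labels.xml that ships with the tool.

```xml
<!-- Hypothetical sketch of an XML label description file.
     Consult the labels.xml in the repository for the real schema. -->
<labels>
  <label id="10" name="car"        color="#0000ff" />
  <label id="40" name="road"       color="#ff00ff" />
  <label id="70" name="vegetation" color="#00ff00" />
</labels>
```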

Dependencies

  • catkin
  • Eigen >= 3.2
  • boost >= 1.54
  • Qt >= 5.2
  • OpenGL Core Profile >= 4.0
  • glow (catkin package)

Build

On Ubuntu 16.04 and 18.04, most of the dependencies can be installed from the package manager:

sudo apt install git libeigen3-dev libboost-all-dev qtbase5-dev libglew-dev catkin

Additionally, make sure you have catkin-tools and the fetch verb installed:

sudo apt install python-pip
sudo pip install catkin_tools catkin_tools_fetch empy

If you do not have a catkin workspace already, create one:

cd
mkdir catkin_ws
cd catkin_ws
mkdir src
catkin init
cd src
git clone https://github.com/ros/catkin.git

Clone the repository in your catkin workspace:

cd ~/catkin_ws/src
git clone https://github.com/jbehley/point_labeler.git

Download the additional dependencies:

catkin deps fetch

Then, build the project:

catkin build point_labeler

The project root directory (e.g., ~/catkin_ws/src/point_labeler) should now contain a bin directory with the labeler executable.

Usage

In the bin directory, just run ./labeler to start the labeling tool.

The labeling tool lets you label a sequence of point clouds in a tile-based fashion, i.e., the tool loads all scans overlapping with the current tile location. Thus, you always label the part of each scan that overlaps with the current tile.

In the settings.cfg file you can change the following options:

tile size: 100.0     # size of a tile (the smaller, the fewer scans get loaded)
max scans: 500       # number of scans to load for a tile (1000 would be preferable, but this is currently very memory consuming)
min range: 0.0       # minimum distance of points to consider
max range: 50.0      # maximum distance of points in the point cloud
add car points: true # add points at the origin of the sensor possibly caused by the car itself (default: false)

Folder structure

When loading a dataset, the data must be organized as follows:

point cloud folder
├── velodyne/             -- directory containing ".bin" files with Velodyne point clouds
├── labels/   [optional]  -- label directory; generated if not present
├── image_2/  [optional]  -- directory containing ".png" files from the color camera
├── calib.txt             -- calibration of Velodyne vs. camera; needed for projecting the point cloud into the camera image
└── poses.txt             -- file containing the pose of every scan
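As a rough pre-flight check, a short shell sketch like the following can verify that the mandatory entries exist before opening the folder in the labeler. The folder name dataset_example is a placeholder created here for illustration; point DATASET at your own data instead.

```shell
# Sketch: verify the dataset layout described above before labeling.
# "dataset_example" is a placeholder folder created for illustration.
DATASET=dataset_example
mkdir -p "$DATASET/velodyne"   # ".bin" Velodyne scans go here
touch "$DATASET/poses.txt"     # pose of every scan

ok=1
for entry in velodyne poses.txt; do
  if [ ! -e "$DATASET/$entry" ]; then
    echo "missing: $entry"
    ok=0
  fi
done
[ "$ok" -eq 1 ] && echo "layout ok"
```

With the placeholder folder created above, the check prints `layout ok`; on your own data, any missing entry is reported instead.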

Documentation

See the wiki for more information on the usage and other details.

Citation

If you're using the tool in your research, it would be nice if you cited our paper:

@inproceedings{behley2019iccv,
  author    = {J. Behley and M. Garbade and A. Milioto and J. Quenzel and S. Behnke and C. Stachniss and J. Gall},
  title     = {{SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences}},
  booktitle = {Proc. of the IEEE/CVF International Conf.~on Computer Vision (ICCV)},
  year      = {2019}
}

We used the tool to label SemanticKITTI, which overall contains over 40,000 scans organized in 20 sequences.
