
mathieuorhan / pointnet2_semantic

Licence: other
A pointnet++ fork, with a focus on semantic segmentation of different datasets

Programming Languages

Python
C++
CUDA
Shell
CMake

Projects that are alternatives to or similar to pointnet2_semantic

Superpoint graph
Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs
Stars: ✭ 533 (+672.46%)
Mutual labels:  point-cloud, semantic-segmentation
pointnet2-pytorch
A clean PointNet++ segmentation model implementation. Support batch of samples with different number of points.
Stars: ✭ 45 (-34.78%)
Mutual labels:  point-cloud, pointnet2
SimpleView
Official Code for ICML 2021 paper "Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline"
Stars: ✭ 95 (+37.68%)
Mutual labels:  point-cloud, pointnet2
Fpconv
FPConv: Learning Local Flattening for Point Convolution, CVPR 2020
Stars: ✭ 114 (+65.22%)
Mutual labels:  point-cloud, semantic-segmentation
Cylinder3d
Rank 1st in the leaderboard of SemanticKITTI semantic segmentation (both single-scan and multi-scan) (Nov. 2020) (CVPR2021 Oral)
Stars: ✭ 221 (+220.29%)
Mutual labels:  point-cloud, semantic-segmentation
mix3d
Mix3D: Out-of-Context Data Augmentation for 3D Scenes (3DV 2021 Oral)
Stars: ✭ 183 (+165.22%)
Mutual labels:  point-cloud, semantic-segmentation
Asis
Associatively Segmenting Instances and Semantics in Point Clouds, CVPR 2019
Stars: ✭ 228 (+230.43%)
Mutual labels:  point-cloud, semantic-segmentation
Open3D-PointNet2-Semantic3D
Semantic3D segmentation with Open3D and PointNet++
Stars: ✭ 422 (+511.59%)
Mutual labels:  point-cloud, pointnet2
graspnet-baseline
Baseline model for "GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping" (CVPR 2020)
Stars: ✭ 146 (+111.59%)
Mutual labels:  point-cloud
HoHoNet
"HoHoNet: 360 Indoor Holistic Understanding with Latent Horizontal Features" official pytorch implementation.
Stars: ✭ 65 (-5.8%)
Mutual labels:  semantic-segmentation
urban road filter
Real-time LIDAR-based Urban Road and Sidewalk detection for Autonomous Vehicles 🚗
Stars: ✭ 134 (+94.2%)
Mutual labels:  point-cloud
self-sample
Single shape Deep Point Cloud Consolidation [TOG 2021]
Stars: ✭ 33 (-52.17%)
Mutual labels:  point-cloud
DLCV2018SPRING
Deep Learning for Computer Vision (CommE 5052) at NTU
Stars: ✭ 38 (-44.93%)
Mutual labels:  semantic-segmentation
HugsVision
HugsVision is an easy-to-use HuggingFace wrapper for state-of-the-art computer vision
Stars: ✭ 154 (+123.19%)
Mutual labels:  semantic-segmentation
Iterative-Closest-Point
Implementation of the iterative closest point algorithm. A point cloud is transformed such that it best matches a reference point cloud.
Stars: ✭ 101 (+46.38%)
Mutual labels:  point-cloud
DST-CBC
Implementation of our paper "DMT: Dynamic Mutual Training for Semi-Supervised Learning"
Stars: ✭ 98 (+42.03%)
Mutual labels:  semantic-segmentation
lowshot-shapebias
Learning low-shot object classification with explicit shape bias learned from point clouds
Stars: ✭ 37 (-46.38%)
Mutual labels:  point-cloud
volumentations
Augmentation package for 3D data based on albumentations
Stars: ✭ 26 (-62.32%)
Mutual labels:  point-cloud
lvr2
Las Vegas Reconstruction 2.0
Stars: ✭ 39 (-43.48%)
Mutual labels:  point-cloud
ObjectNet
PyTorch implementation of "Pyramid Scene Parsing Network".
Stars: ✭ 15 (-78.26%)
Mutual labels:  semantic-segmentation

PointNet2 for semantic segmentation of 3D point clouds

By Mathieu Orhan and Guillaume Dekeyser (Ecole des Ponts et Chaussées, Paris, 2018).

Introduction

This project is a student fork of PointNet2 by Charles R. Qi, Li (Eric) Yi, Hao Su, and Leonidas J. Guibas of Stanford University. You can refer to the original PointNet2 paper and code (https://github.com/charlesq34/pointnet2) for details.

This fork focuses on semantic segmentation, with the goal of comparing three datasets: ScanNet, Semantic-8, and Bertrand Le Saux's aerial LIDAR dataset. To achieve this, we clean, document, refactor, and improve the original project. We will later compare the same datasets with SnapNet, another state-of-the-art semantic segmentation project.

Dependencies and data

We work on Ubuntu 16.04 with three GTX Titan Black GPUs and a GTX Titan X. On older GPUs, such as a GTX 860M, expect to lower the number of points and the batch size for training; otherwise TensorFlow will raise an out-of-memory error. You have to install TensorFlow with GPU support (we use TF 1.2, CUDA 8.0, and Python 2.7, but it should also work on newer versions with minor changes). Then you have to compile the custom TensorFlow operators in the tf_ops subdirectories using the provided .sh scripts. You may also have to install some additional Python modules.
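
As an illustration, a compile script for one of the custom operators (here the sampling op) typically follows the pattern below; the CUDA path, TensorFlow include path, and file names are assumptions to adapt to your own installation, not the exact contents of the repository's scripts:

    # Hypothetical sketch of a tf_ops compile script; paths and file names are assumptions.
    TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
    CUDA=/usr/local/cuda-8.0

    # Compile the CUDA kernel, then link it with the C++ wrapper into a shared library.
    $CUDA/bin/nvcc tf_sampling_g.cu -o tf_sampling_g.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC
    g++ -std=c++11 tf_sampling.cpp tf_sampling_g.cu.o -o tf_sampling_so.so -shared -fPIC \
        -I $TF_INC -I $CUDA/include -L $CUDA/lib64 -lcudart -O2 -D_GLIBCXX_USE_CXX11_ABI=0

If the resulting shared library fails to load at import time, the _GLIBCXX_USE_CXX11_ABI flag is the usual suspect: it must match the ABI your TensorFlow build was compiled with.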

Get the preprocessed data (you can also preprocess the semantic data from raw data in the directory dataset/preprocessing):

If you want to preprocess the data or to compute results on the raw data, compiling the C++ parts can fail with the error "/usr/bin/ld: cannot find -lvtkproj4". You can work around this with the following trick: ln -s /usr/lib/x86_64-linux-gnu/libvtkCommonCore-6.2.so /usr/lib/libvtkproj4.so (see https://github.com/PointCloudLibrary/pcl/issues/1594 for details).

To download the raw data, go into the dataset/ directory and use the command: ./downloadAndExtractSem8.sh

To preprocess this raw data with the voxel size you want, go into the preprocessing directory and use the command: ./preprocess.sh ../dataset/raw_semantic_data ../dataset/semantic_data 'voxel_size' (voxel size in meters; the default is 0.05).
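
For example, with the default 5 cm voxel size and the paths used in the commands above, the invocation would be:

    cd preprocessing
    ./preprocess.sh ../dataset/raw_semantic_data ../dataset/semantic_data 0.05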

For training, use: python train.py --config=your_config --log=your_logs. Both scannet and semantic_8 should be trainable.
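
For instance, a training run on the semantic dataset might look like the line below; the config name and log directory are placeholders for illustration, not names guaranteed to exist in the repository:

    python train.py --config=semantic --log=logs/semantic_run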

To interpolate results, first run predict.py, for example: python predict.py --cloud=true --n=100 --ckpt=your_ckpt --dataset=semantic --set=test. Files will be created in visu/semantic_test/full_scenes_predictions and will contain predictions on sparse point clouds. The actual interpolation is done in the interpolation directory with the command: ./interpolate path/to/raw/data visu/semantic_test/full_scenes_predictions /path/to/where/to/put/results 'voxel_size' (voxel size in meters; the default is 0.1).
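
Putting the two steps together, an end-to-end sketch looks like this; the checkpoint name, raw-data path, and output directory are assumptions to adapt to your setup:

    # 1. Predict on the sparse point clouds of the test set.
    python predict.py --cloud=true --n=100 --ckpt=your_ckpt --dataset=semantic --set=test

    # 2. Interpolate the sparse predictions back onto the raw clouds (voxel size in meters).
    cd interpolation
    ./interpolate ../dataset/raw_semantic_data ../visu/semantic_test/full_scenes_predictions ../results 0.1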

Please check the source files for more details.
