LatticeNet

Project Page | Video | Paper

LatticeNet: Fast Point Cloud Segmentation Using Permutohedral Lattices
Radu Alexandru Rosu 1, Peer Schütt 1, Jan Quenzel 1, Sven Behnke 1
1University of Bonn, Autonomous Intelligent Systems

This is the official PyTorch implementation of LatticeNet: Fast Point Cloud Segmentation Using Permutohedral Lattices.

LatticeNet can process raw point clouds for semantic segmentation (or any other per-point prediction task). The implementation is written in CUDA and PyTorch. There is no CPU implementation yet.
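To make the per-point prediction setting concrete, here is a minimal, framework-free sketch. The `dummy_model` function is a hypothetical stand-in for the actual network (not the real API): a segmentation model maps N input points with xyz coordinates to N class labels.

```python
import random

NUM_CLASSES = 4  # e.g. part labels for one ShapeNet category

def dummy_model(points):
    """Hypothetical stand-in for the network: per-point class logits.

    The real LatticeNet produces logits by splatting points onto a
    permutohedral lattice, convolving, and slicing back to the points;
    here we return random logits just to illustrate the shapes.
    """
    return [[random.random() for _ in range(NUM_CLASSES)] for _ in points]

# A toy cloud of 5 points, each with xyz coordinates.
cloud = [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
         for _ in range(5)]

logits = dummy_model(cloud)
# Semantic segmentation: one label per input point (argmax over classes).
labels = [max(range(NUM_CLASSES), key=lambda c: row[c]) for row in logits]
print(labels)
```

The key property is that the output has exactly one label per input point, regardless of how many points the cloud contains.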

Getting started

Install

The easiest way to install LatticeNet is by using the included Dockerfile.
You will need Docker >= 19.03 and the NVIDIA drivers installed.
Afterwards, you can build the Docker image, which contains all the LatticeNet dependencies, using:

$ git clone --recursive https://github.com/RaduAlexandru/lattice_net
$ cd lattice_net/docker
$ ./build.sh lattice_img # this will take some time because some packages need to be built from source
$ ./run.sh lattice_img 
$ git clone --recursive https://github.com/RaduAlexandru/easy_pbr
$ cd easy_pbr && make && cd ..
$ git clone --recursive https://github.com/RaduAlexandru/data_loaders  
$ cd data_loaders && make && cd ..
$ git clone --recursive https://github.com/RaduAlexandru/lattice_net
$ cd lattice_net && make && cd ..

Data

LatticeNet uses point clouds for training. The data is loaded with the DataLoaders package and interfaced using EasyPBR. Here we show how to train on the ShapeNet dataset.
While inside the Docker container (after running ./run.sh lattice_img), download and unzip the ShapeNet dataset:

$ bash ./lattice_net/data/shapenet_part_seg/download_shapenet.sh
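The part-segmentation data stores each shape as a .pts file (one "x y z" point per line) paired with a .seg file (one integer part label per line). Below is a minimal stdlib sketch of reading such a pair; the file names and the load_shape helper are illustrative and not part of the DataLoaders API:

```python
import os
import tempfile

def load_shape(pts_path, seg_path):
    """Read one ShapeNet part-seg sample: points and per-point labels.

    Illustrative helper, not part of the DataLoaders package.
    """
    with open(pts_path) as f:
        points = [tuple(map(float, line.split())) for line in f if line.strip()]
    with open(seg_path) as f:
        labels = [int(line) for line in f if line.strip()]
    assert len(points) == len(labels), "expected one label per point"
    return points, labels

# Write a tiny synthetic sample so the sketch is self-contained.
tmp = tempfile.mkdtemp()
pts_file = os.path.join(tmp, "sample.pts")
seg_file = os.path.join(tmp, "sample.seg")
with open(pts_file, "w") as f:
    f.write("0.0 0.1 0.2\n0.3 0.4 0.5\n")
with open(seg_file, "w") as f:
    f.write("1\n2\n")

points, labels = load_shape(pts_file, seg_file)
print(points, labels)
```

In practice the DataLoaders package handles this parsing for you; the sketch only shows the one-label-per-point pairing that the training pipeline expects.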

Usage

Train

LatticeNet uses config files to configure the dataset, the training parameters, the model architecture, and various visualization options.
The config file used to train on the shapenet dataset can be found under "lattice_net/config/ln_train_shapenet_example.cfg".
Running the training script will by default read this config file and start the training.

$ ./lattice_net/latticenet_py/ln_train.py

Configuration options

Several configuration options are worth checking out and modifying. We use ln_train_shapenet_example.cfg as an example.

core: hdpi: false          #can be turned on and off to accommodate high-DPI displays. If the text and fonts in the visualizer are too big, set this option to false
train: with_viewer: false  #setting this to true starts a visualizer which displays the currently segmented point cloud and the difference from the ground truth



If training is performed with the viewer enabled, a window will show the currently segmented point cloud and how it compares to the ground truth as training progresses.

Citation

@inproceedings{rosu2020latticenet,
  title={LatticeNet: Fast point cloud segmentation using permutohedral lattices},
  author={Rosu, Radu Alexandru and Sch{\"u}tt, Peer and Quenzel, Jan and Behnke, Sven},
  booktitle={Proc. of Robotics: Science and Systems (RSS)},
  year={2020}
}
