goodok / fastai_sparse

License: MIT
3D augmentation and transforms for 2D/3D sparse data, such as 3D triangle meshes or point clouds in Euclidean space. An extension of the fast.ai library for training Submanifold Sparse Convolutional Networks.

Programming Languages

  • Jupyter Notebook
  • Python

Projects that are alternatives to or similar to fastai_sparse

volumentations
Augmentation package for 3D data based on albumentations
Stars: ✭ 26 (-43.48%)
Mutual labels:  mesh, data-augmentation
jismesh
Utilities for the Japanese regional grid square system defined in Japanese Industrial Standards (JIS X 0410 地域メッシュ, "regional mesh").
Stars: ✭ 33 (-28.26%)
Mutual labels:  mesh
meshname
Meshname, a universal naming system for all IPv6-based mesh networks, including CJDNS and Yggdrasil
Stars: ✭ 65 (+41.3%)
Mutual labels:  mesh
GAug
AAAI'21: Data Augmentation for Graph Neural Networks
Stars: ✭ 139 (+202.17%)
Mutual labels:  data-augmentation
awesome-graph-self-supervised-learning
Awesome Graph Self-Supervised Learning
Stars: ✭ 805 (+1650%)
Mutual labels:  data-augmentation
audio degrader
Audio degradation toolbox in Python with a command-line tool. It is useful for applying controlled degradations to audio, e.g. for data augmentation or evaluation in noisy conditions.
Stars: ✭ 40 (-13.04%)
Mutual labels:  data-augmentation
XH5For
XDMF parallel partitioned mesh I/O on top of HDF5
Stars: ✭ 23 (-50%)
Mutual labels:  mesh
Keras-MultiClass-Image-Classification
Multiclass image classification using Convolutional Neural Network
Stars: ✭ 48 (+4.35%)
Mutual labels:  data-augmentation
merbridge
Use eBPF to speed up your Service Mesh like crossing an Einstein-Rosen Bridge.
Stars: ✭ 469 (+919.57%)
Mutual labels:  mesh
particle-cookbook
A collection of programming snippets, tips, and tricks for developing with Particle IoT devices
Stars: ✭ 20 (-56.52%)
Mutual labels:  mesh
intersection-wasm
Mesh-Mesh and Triangle-Triangle Intersection tests based on the algorithm by Tomas Akenine-Möller
Stars: ✭ 17 (-63.04%)
Mutual labels:  mesh
KitanaQA
KitanaQA: Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (+26.09%)
Mutual labels:  data-augmentation
Magic-VNet
VNet for 3d volume segmentation
Stars: ✭ 45 (-2.17%)
Mutual labels:  3d-segmentation
Py BL MeshSkeletonization
Mesh Skeleton Extraction Using Laplacian Contraction
Stars: ✭ 32 (-30.43%)
Mutual labels:  mesh
Brain-Tumor-Segmentation-using-Topological-Loss
A Tensorflow Implementation of Brain Tumor Segmentation using Topological Loss
Stars: ✭ 28 (-39.13%)
Mutual labels:  3d-segmentation
coursera-gan-specialization
Programming assignments and quizzes from all courses within the GANs specialization offered by deeplearning.ai
Stars: ✭ 277 (+502.17%)
Mutual labels:  data-augmentation
manifold mixup
Tensorflow implementation of the Manifold Mixup machine learning research paper
Stars: ✭ 24 (-47.83%)
Mutual labels:  data-augmentation
IrregularGradient
Create animated irregular gradients in SwiftUI.
Stars: ✭ 127 (+176.09%)
Mutual labels:  mesh
vyatta-cjdns
A cjdns package for Ubiquiti EdgeOS and VyOS, allowing cjdns to be used on EdgeRouters
Stars: ✭ 39 (-15.22%)
Mutual labels:  mesh
keras-transform
Library for data augmentation
Stars: ✭ 31 (-32.61%)
Mutual labels:  data-augmentation

fastai_sparse

This is an extension of the fast.ai library for training Submanifold Sparse Convolutional Networks, which apply to 2D/3D sparse data such as 3D geometric meshes or point clouds in Euclidean space.

Currently, the library uses SparseConvNet under the hood, which is so far the best-performing approach in 3D (ScanNet benchmark, ShapeNet workshop).
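To illustrate what "submanifold" sparse convolution means, here is a minimal NumPy-only sketch (not SparseConvNet's actual API; the function name and dense-grid representation are purely illustrative): outputs are computed only at active voxels, so the sparsity pattern is preserved across layers instead of dilating as it would with an ordinary convolution.

```python
# Toy illustration of the submanifold sparse convolution idea (NumPy only):
# outputs are computed only at "active" sites, so the active set does not grow.
import numpy as np

def submanifold_conv2d(grid, active, kernel):
    """grid: (H, W) dense values; active: (H, W) bool mask; kernel: (3, 3)."""
    out = np.zeros_like(grid)
    padded = np.pad(grid, 1)                        # zero-pad the values
    mask = np.pad(active, 1)                        # pad the active-site mask
    for i, j in zip(*np.nonzero(active)):           # iterate over active sites only
        patch = padded[i:i + 3, j:j + 3]
        patch_mask = mask[i:i + 3, j:j + 3]
        out[i, j] = np.sum(patch * patch_mask * kernel)  # gather active neighbours
    return out

active = np.zeros((5, 5), dtype=bool)
active[2, 1:4] = True                               # a thin "surface" of active voxels
grid = np.where(active, 1.0, 0.0)
out = submanifold_conv2d(grid, active, np.ones((3, 3)))
# out is non-zero only where the input was active.
```

The point of restricting outputs to the input's active set is that, on surface-like 3D data, an ordinary convolution would blur the sparse surface into an ever-thicker shell after each layer, destroying the sparsity that makes the computation tractable.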

Installation

fastai_sparse is compatible with Python 3.6 and PyTorch 1.0+.

Some key dependencies:

  • Fast.ai
  • PyTorch sparse convolution models: SparseConvNet
  • PLY file reading and 3D geometric mesh transforms, implemented with trimesh
  • ipyvolume, used for interactive visualization in the Jupyter notebook examples

See INSTALL.md for details.

Features:

  • the fast.ai train/inference loop concept (Model + DataBunch → Learner); see the classes overview
  • model training best practices provided by fast.ai (Learning Rate Finder, One Cycle policy)
  • 3D transforms for data preprocessing and augmentation:
    • mesh-level transforms and feature extraction (surface normals, triangle area, ...)
    • point-level spatial transforms (affine, elastic, ...)
    • point-level features (color, brightness)
    • mesh to points
    • points to sparse voxels
  • metrics (IoU, avgIoU) calculation and tracking
  • visualization utils (batch generator output)
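Two of the transform steps above, "mesh to points" and "points to sparse voxels", can be sketched as follows. This is a NumPy-only illustration with hypothetical helper names, not the library's actual transform API: points are sampled uniformly on the mesh surface via area-weighted triangle selection and barycentric coordinates, then quantised into occupied voxel coordinates.

```python
# Minimal sketch of "mesh to points" and "points to sparse voxels" (NumPy only;
# function names are illustrative, not fastai_sparse's API).
import numpy as np

rng = np.random.default_rng(0)

def sample_points(vertices, faces, n):
    """Sample n points uniformly on a triangle mesh via barycentric coordinates."""
    tri = vertices[faces]                           # (F, 3, 3) triangle corners
    # area-weighted triangle choice so sampling is uniform over the surface
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    u, v = rng.random(n), rng.random(n)
    flip = u + v > 1                                # reflect back into the triangle
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    t = tri[idx]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) + v[:, None] * (t[:, 2] - t[:, 0])

def voxelize(points, voxel_size):
    """Quantise points to integer voxel coordinates, one entry per occupied voxel."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    return np.unique(coords, axis=0)

# a single unit triangle in the z=0 plane
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
pts = sample_points(verts, faces, 1000)
voxels = voxelize(pts, voxel_size=0.25)
```

The sparse voxel coordinates (plus per-voxel features such as color or averaged normals) are exactly the kind of input a sparse convolutional network like SparseConvNet consumes.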

Notebooks with examples

TODO

Priority 1:

  • Separate 3D augmentation library with key points, spatial targets
  • Prediction pipeline
  • Classification/regression examples
  • Spatial targets (bounding box, key points, axes)

Priority 2:

  • TTA (test-time augmentation)
  • Multi-GPU
  • PointNet-like feature extraction layer ("VoxelNet" architecture)
  • Confidence / heatmap / kernels visualization