shubhtuls / volumetricPrimitives

Licence: other
Code release for "Learning Shape Abstractions by Assembling Volumetric Primitives" (CVPR 2017)



Learning Shape Abstractions by Assembling Volumetric Primitives

Shubham Tulsiani, Hao Su, Leonidas J. Guibas, Alexei A. Efros, Jitendra Malik. In CVPR, 2017. Project Page

Teaser Image

1) Demo

Please check out the interactive notebook, which shows how to compute the primitive-based representation for an input shape. You'll need to:

  • Install a working implementation of torch and itorch.
  • Edit the path to the blender executable in the demo script.

2) Training

We provide code to train the abstraction models on ShapeNet categories.

a) Preprocessing

We'll first need to preprocess the ShapeNet models to compute the voxelizations required as input, as well as the data required to implement the loss functions.

  • Install gptoolbox in external/gptoolbox. You'll need to compile the mex file for point_mesh_squared_distance. You can first try this precompiled version. If that does not work, you will have to compile it yourself; some steps that were required on my machine are noted here.
  • Modify the path to the ShapeNet dataset (v1) in the startup file.
  • Specify the synsets of interest in the preprocessing script and then run it.
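For intuition, the mex routine returns, for each query point, the squared distance to the closest point on the mesh. A simplified NumPy stand-in is sketched below; note it only considers mesh vertices, whereas the real gptoolbox routine also projects onto triangle faces and edges, and the function name here is mine, not from this repo.

```python
import numpy as np

def point_vertex_squared_distance(points, vertices):
    """Squared distance from each query point to the nearest mesh VERTEX.

    Simplified stand-in for gptoolbox's point_mesh_squared_distance,
    which additionally projects onto triangle faces and edges.
    points:   (N, 3) query points
    vertices: (M, 3) mesh vertices
    returns:  (N,) squared distances
    """
    # Pairwise differences, shape (N, M, 3), then squared norms (N, M)
    diff = points[:, None, :] - vertices[None, :, :]
    sq = np.einsum('nmd,nmd->nm', diff, diff)
    return sq.min(axis=1)

# Tiny example: unit-square corners as the "mesh", one query point
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
pts = np.array([[2., 0., 0.]])
print(point_vertex_squared_distance(pts, verts))  # → [1.]
```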

b) Learning

Training proceeds in two stages. In the first, we use all cuboids while biasing them to be small; in the second, we allow the network to choose to use fewer cuboids. Sample scripts for the synset corresponding to chairs are below.

# Stage 1
cd experiments;
disp=0 gpu=1 nParts=20 nullReward=0 probLrDecay=0.0001 shapeLrDecay=0.01 synset=3001627 usePretrain=0 numTrainIter=20000 name=chairChamferSurf_null_small_init_prob0pt0001_shape0pt01 th cadAutoEncCuboids/primSelTsdfChamfer.lua

After the first network is trained, we allow the learning of primitive existence probabilities.

# Stage 2
cd experiments;
pretrainNet=chairChamferSurf_null_small_init_prob0pt0001_shape0pt01 pretrainIter=20000 disp=0 gpu=1 nParts=20 nullReward=8e-5 shapeLrDecay=0.5   synset=3001627 probLrDecay=0.2 usePretrain=1  numTrainIter=30000 name=chairChamferSurf_null_small_ft_prob0pt2_shape0pt5_null8em5 th cadAutoEncCuboids/primSelTsdfChamfer.lua
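The Chamfer-style consistency term in the loss penalizes shape-surface points that fall outside the predicted cuboids. For an axis-aligned cuboid with half-extents w centered at the origin, the distance from an outside point p has the closed form ‖max(|p| − w, 0)‖. A hedged NumPy sketch of that formula follows; the names are mine, not from the Lua training code, and the actual model also applies each primitive's predicted rotation and translation before this computation.

```python
import numpy as np

def cuboid_outside_distance(points, half_extents):
    """Distance from each point to an origin-centered, axis-aligned cuboid
    (zero for points inside). Closed form: ||max(|p| - w, 0)||.

    In the full loss, points would first be mapped into the cuboid's
    local frame using its predicted rotation and translation.
    """
    # Per-axis overshoot beyond the cuboid faces, clamped at zero inside
    excess = np.maximum(np.abs(points) - half_extents, 0.0)
    return np.linalg.norm(excess, axis=-1)

w = np.array([0.5, 0.5, 0.5])              # unit cube half-extents
pts = np.array([[0.0, 0.0, 0.0],           # center: distance 0
                [1.5, 0.0, 0.0],           # 1.0 outside along x
                [1.5, 1.5, 0.0]])          # sqrt(2) outside along x and y
print(cuboid_outside_distance(pts, w))
```

Minimizing this distance over sampled surface points pulls cuboids toward the shape, while the nullReward term above trades off primitive count against coverage.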

Citation

If you use this code for your research, please consider citing:

@inProceedings{abstractionTulsiani17,
  title={Learning Shape Abstractions by Assembling Volumetric Primitives},
  author = {Shubham Tulsiani
  and Hao Su
  and Leonidas J. Guibas
  and Alexei A. Efros
  and Jitendra Malik},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2017}
}

Other Implementations

Pytorch (by Nilesh Kulkarni)
