
Hippogriff / 3DCSGNet

Licence: other
CSGNet for voxel-based input

Programming Languages

python

Projects that are alternatives to, or similar to, 3DCSGNet

Csgnet
CSGNet: Neural Shape parser for Constructive Solid Geometry
Stars: ✭ 55 (+61.76%)
Mutual labels:  generative-model, 3d-graphics
Curated List Of Awesome 3d Morphable Model Software And Data
The idea of this list is to collect shared data and algorithms around 3D Morphable Models. You are invited to contribute to this list by opening a pull request. The original list arose from the Dagstuhl seminar on 3D Morphable Models (https://www.dagstuhl.de/19102) in March 2019.
Stars: ✭ 375 (+1002.94%)
Mutual labels:  generative-model, 3d-graphics
nautilus
another graphics engine
Stars: ✭ 16 (-52.94%)
Mutual labels:  3d-graphics
limitless-engine
OpenGL C++ Graphics Engine
Stars: ✭ 95 (+179.41%)
Mutual labels:  3d-graphics
MidiTok
A convenient MIDI / symbolic music tokenizer for Deep Learning networks, with multiple strategies 🎶
Stars: ✭ 180 (+429.41%)
Mutual labels:  generative-model
rend3
Easy to use, customizable, efficient 3D renderer library built on wgpu.
Stars: ✭ 546 (+1505.88%)
Mutual labels:  3d-graphics
CycleConsistentDeformation
This repository contains the source codes for the paper "Unsupervised cycle-consistent deformation for shape matching".
Stars: ✭ 58 (+70.59%)
Mutual labels:  3d-deep-learning
RenderingX12
Partially open source: real-time scene rendering using XUSG based on DirectX 12. Purely for showing a demo.
Stars: ✭ 16 (-52.94%)
Mutual labels:  3d-graphics
Minecraft
A Tiny Minecraft clone made with C++ and OpenGL.
Stars: ✭ 88 (+158.82%)
Mutual labels:  3d-graphics
d3d
Devkit for 3D -- Some utils for 3D object detection based on Numpy and Pytorch
Stars: ✭ 27 (-20.59%)
Mutual labels:  3d-deep-learning
gcWGAN
Guided Conditional Wasserstein GAN for De Novo Protein Design
Stars: ✭ 38 (+11.76%)
Mutual labels:  generative-model
SmallFBX
An open-source implementation of Autodesk's FBX
Stars: ✭ 41 (+20.59%)
Mutual labels:  3d-graphics
style-vae
Implementation of VAE and Style-GAN Architecture Achieving State of the Art Reconstruction
Stars: ✭ 25 (-26.47%)
Mutual labels:  generative-model
py-msa-kdenlive
Python script to load a Kdenlive (OSS NLE video editor) project file, and conform the edit on video or numpy arrays.
Stars: ✭ 25 (-26.47%)
Mutual labels:  generative-model
StickMan-3D
StickMan 3D: First Round | indie fighting game | C++ OpenGL
Stars: ✭ 60 (+76.47%)
Mutual labels:  3d-graphics
rabbit-hole
An experimental voxel engine.
Stars: ✭ 39 (+14.71%)
Mutual labels:  3d-graphics
causal-semantic-generative-model
Codes for Causal Semantic Generative model (CSG), the model proposed in "Learning Causal Semantic Representation for Out-of-Distribution Prediction" (NeurIPS-21)
Stars: ✭ 51 (+50%)
Mutual labels:  generative-model
Generalization-Causality
Reading notes on a wide range of research in domain generalization, domain adaptation, causality, robustness, prompting, optimization, and generative models.
Stars: ✭ 482 (+1317.65%)
Mutual labels:  generative-model
pytorch-CycleGAN
Pytorch implementation of CycleGAN.
Stars: ✭ 39 (+14.71%)
Mutual labels:  generative-model
gans-in-action
Code repository for the Korean edition of "GANs in Action" (Hanbit Media, 2020).
Stars: ✭ 29 (-14.71%)
Mutual labels:  generative-model

CSGNet: Neural Shape Parser for Constructive Solid Geometry

This repository contains code accompanying the paper: CSGNet: Neural Shape Parser for Constructive Solid Geometry, CVPR 2018.

This code base contains the model architecture and dataset for 3D-CSGNet. For 2D-CSGNet, see this repository.

Dependency

  • Python > 3.5
  • Please create a conda environment from the environment.yml file:
    conda env create -f environment.yml -n 3DCSGNet
    source activate 3DCSGNet

Data

  • Synthetic Dataset:

    Download the synthetic dataset and un-tar it in the repository root. A pre-trained model is available here; un-tar it into the trained_models/ directory. The synthetic dataset is provided as program expressions rather than rendered images; images for training, validation, and testing are rendered on the fly. The dataset is split by program length.

    tar -zxvf data.tar.gz
  • How to create Voxels from program expressions?

    Start by loading program expressions from the data/x_ops/expressions.txt files. You can then get voxels as a NumPy array using the following:

    import deepdish as dd
    from src.Utils.train_utils import voxels_from_expressions
    
    # pre-rendered shape primitives in the form of voxels for better performance
    primitives = dd.io.load("data/primitives.h5")
    # postfix CSG programs: cy = cylinder, cu = cuboid, sp = sphere, "+" = union
    expressions = ["cy(48,48,32,8,12)cu(24,24,40,28)+", "sp(48,32,32,8,12)cu(24,24,40,28)+"]
    
    voxels = voxels_from_expressions(expressions, primitives, max_len=7)
    print(voxels.shape)
    # output: (2, 64, 64, 64)

    If you hit a KeyError in the above, or if you want to execute programs of greater length or with arbitrary positions and scales, call the same method with max_len set to the length of your program and primitives=None. This renders the primitives on the fly and is therefore slower; a minimal sketch is shown below.
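
    The sketch below reuses the voxels_from_expressions helper and the example expression from above; the keyword arguments follow the note above, so treat it as an illustration rather than the project's documented usage:

    from src.Utils.train_utils import voxels_from_expressions

    # the example expression from above; real ones come from the expressions.txt files
    expressions = ["cy(48,48,32,8,12)cu(24,24,40,28)+"]

    # primitives=None forces on-the-fly rendering of the primitives (slower, but
    # handles arbitrary positions and scales); max_len must cover the longest program
    voxels = voxels_from_expressions(expressions, primitives=None, max_len=7)
    print(voxels.shape)  # expected: (1, 64, 64, 64)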

Supervised Learning

  • To train, update config.yml with the required arguments. In particular, set the proportion of the dataset to train on (the default is 1 percent; the paper trains on 100 percent). Then run:

    python train.py
  • To test, update config.yml with the required arguments. In the config file, set pretrain_model_path=trained_models/models.pth to point at the pre-trained model, set preload_model=True, and set proportion=100 (see the sketch after the commands below). Then run:

    # For top-1 testing
    python test.py
    # For beam-search-k testing
    python test_beam_search.py
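
    As a minimal sketch (assuming the fields named above sit at the top level of config.yml; the real file may organize them differently), the test-time settings could also be applied programmatically with PyYAML:

    import yaml

    # load the existing config, set the fields named above, and write it back
    with open("config.yml") as f:
        cfg = yaml.safe_load(f)

    cfg["preload_model"] = True
    cfg["pretrain_model_path"] = "trained_models/models.pth"
    cfg["proportion"] = 100

    with open("config.yml", "w") as f:
        yaml.safe_dump(cfg, f)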

Cite:

@InProceedings{Sharma_2018_CVPR,
author = {Sharma, Gopal and Goyal, Rishabh and Liu, Difan and Kalogerakis, Evangelos and Maji, Subhransu},
title = {CSGNet: Neural Shape Parser for Constructive Solid Geometry},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}

Contact

To ask questions, please email.
