
ChrisWu1997 / PQ-NET

License: MIT
Code for our CVPR 2020 paper "PQ-NET: A Generative Part Seq2Seq Network for 3D Shapes"

Programming Languages

  • Python: 139335 projects (#7 most used programming language)
  • Cython: 566 projects
  • Shell: 77523 projects

Projects that are alternatives of or similar to PQ-NET

DeepCAD
code for our ICCV 2021 paper "DeepCAD: A Deep Generative Network for Computer-Aided Design Models"
Stars: ✭ 74 (-25.25%)
Mutual labels:  paper, 3d-shapes
Alae
[CVPR2020] Adversarial Latent Autoencoders
Stars: ✭ 3,178 (+3110.1%)
Mutual labels:  paper, cvpr2020
Cvpr2021 Papers With Code
A collection of CVPR 2021 papers and open-source projects
Stars: ✭ 7,138 (+7110.1%)
Mutual labels:  paper, cvpr2020
dot-grid-paper
Dot Grid Paper
Stars: ✭ 97 (-2.02%)
Mutual labels:  paper
event-extraction-paper
Papers from top conferences and journals for event extraction in recent years
Stars: ✭ 54 (-45.45%)
Mutual labels:  paper
research-contributions
Implementations of recent research prototypes/demonstrations using MONAI.
Stars: ✭ 564 (+469.7%)
Mutual labels:  paper
DecisionTrees
Seminar work "Decision Trees - An Introduction" with presentation, seminar paper, and Python implementation
Stars: ✭ 111 (+12.12%)
Mutual labels:  paper
MTF
Modular Tracking Framework
Stars: ✭ 99 (+0%)
Mutual labels:  paper
KMRC-Papers
A list of recent papers regarding knowledge-based machine reading comprehension.
Stars: ✭ 40 (-59.6%)
Mutual labels:  paper
tpprl
Code and data for "Deep Reinforcement Learning of Marked Temporal Point Processes", NeurIPS 2018
Stars: ✭ 68 (-31.31%)
Mutual labels:  paper
Paper-clip
List of various interesting papers
Stars: ✭ 16 (-83.84%)
Mutual labels:  paper
STAWM
Code for the paper 'A Biologically Inspired Visual Working Memory for Deep Networks'
Stars: ✭ 21 (-78.79%)
Mutual labels:  paper
PlantDoc-Dataset
Dataset used in "PlantDoc: A Dataset for Visual Plant Disease Detection" accepted in CODS-COMAD 2020
Stars: ✭ 114 (+15.15%)
Mutual labels:  paper
my-notes
An engineer's self-cultivation (study notes)
Stars: ✭ 28 (-71.72%)
Mutual labels:  paper
3DObjectTracking
Official Code: A Sparse Gaussian Approach to Region-Based 6DoF Object Tracking
Stars: ✭ 375 (+278.79%)
Mutual labels:  paper
ignite
A Mixin and Access Widener mod loader for Spigot/Paper
Stars: ✭ 115 (+16.16%)
Mutual labels:  paper
Awesome-Image-Matting
📓 A curated list of deep learning image matting papers and codes
Stars: ✭ 281 (+183.84%)
Mutual labels:  paper
libpillowfight
Small library containing various image processing algorithms (+ Python 3 bindings) that has almost no dependencies -- Moved to Gnome's Gitlab
Stars: ✭ 60 (-39.39%)
Mutual labels:  paper
dough
Library containing a lot of useful utility classes for the everyday Java and Spigot/Paper developer.
Stars: ✭ 26 (-73.74%)
Mutual labels:  paper
Origami
Bukkit/Spigot/Paper based Minecraft server used by Minebench.de | Looking for a 1.17 version? Most patches have been PR'd into Paper now; Origami 1.17 will continue once patches that Paper won't accept become necessary.
Stars: ✭ 29 (-70.71%)
Mutual labels:  paper

PQ-NET

This repository provides a PyTorch implementation of our paper:

PQ-NET: A Generative Part Seq2Seq Network for 3D Shapes

Rundi Wu, Yixin Zhuang, Kai Xu, Hao Zhang, Baoquan Chen

CVPR 2020

Prerequisites

  • Linux
  • NVIDIA GPU + CUDA CuDNN
  • Python 3.6

Dependencies

Install the Python package dependencies through pip:

pip install -r requirements.txt

Compile the extension module adapted from Occupancy_Networks:

python setup.py build_ext --inplace
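
A quick sanity check that the environment matches the prerequisites (this assumes PyTorch was installed by the previous step):

python -c "import sys, torch; print(sys.version.split()[0], 'CUDA available:', torch.cuda.is_available())"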

Data

We first voxelize PartNet shapes and scale each part to $64^3$ resolution. We provide data for three categories: chair, table, lamp. Please use link1 (PKU disk) or link2 (Google Drive) to download the voxelized PartNet shapes and extract the file to the data/ folder, e.g.

cd data
tar -xvf Lamp.tar.gz

Then run data/sample_points_from_voxel.py to sample paired points and signed values, e.g.:

python data/sample_points_from_voxel.py --src data --category Lamp
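
Conceptually, this produces query points paired with an inside/outside value for each point. Below is a minimal sketch of that idea, assuming a binary 64^3 NumPy occupancy grid; the function name and defaults are illustrative, not taken from the repository:

# illustrative only; data/sample_points_from_voxel.py is the authoritative implementation
import numpy as np

def sample_paired_points(voxel, n_points=4096):
    """Pair uniform random points in [0, 1]^3 with the occupancy of the
    voxel cell each point falls into (a stand-in for the signed values)."""
    res = voxel.shape[0]                      # e.g. 64 for a 64^3 grid
    points = np.random.rand(n_points, 3)      # query points in the unit cube
    idx = np.minimum((points * res).astype(int), res - 1)
    values = voxel[idx[:, 0], idx[:, 1], idx[:, 2]].astype(np.float32)
    return points.astype(np.float32), values

# toy usage: a solid cube occupying the center of a 64^3 grid
vox = np.zeros((64, 64, 64), dtype=bool)
vox[16:48, 16:48, 16:48] = True
pts, vals = sample_paired_points(vox)
print(pts.shape, vals.mean())                 # fraction of points inside the part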

Training

Example training scripts can be found in the scripts folder. See config/ for the definitions of all hyper-parameters.

To train the main model:

# train the part auto-encoder with the multiscale strategy 16^3 -> 32^3 -> 64^3
sh scripts/lamp/train_lamp_partae_multiscale.sh  # uses two GPUs

# train seq2seq model
sh scripts/lamp/train_lamp_seq2seq.sh
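
For intuition on the second stage: the seq2seq network treats a shape as a sequence of per-part codes produced by the part auto-encoder, encodes that sequence into a single shape code, and decodes it back part by part. A minimal conceptual sketch follows; the class name, GRU layers and sizes are assumptions, not the modules defined in this repository or in config/:

# conceptual sketch only; not the repository's model definition
import torch
import torch.nn as nn

class PartSeq2SeqSketch(nn.Module):
    def __init__(self, part_dim=128, hidden_dim=256):
        super().__init__()
        # encoder: consumes the sequence of per-part codes, yields a shape code
        self.encoder = nn.GRU(part_dim, hidden_dim, batch_first=True, bidirectional=True)
        # decoder: regenerates one part code per step, conditioned on the shape code
        self.decoder = nn.GRU(part_dim, 2 * hidden_dim, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, part_dim)

    def forward(self, part_codes):                       # (B, T, part_dim)
        _, h = self.encoder(part_codes)                  # h: (2, B, hidden_dim)
        shape_code = torch.cat([h[0], h[1]], dim=-1)     # (B, 2 * hidden_dim)
        dec_out, _ = self.decoder(part_codes, shape_code.unsqueeze(0))  # teacher forcing
        return self.out(dec_out), shape_code

# toy usage: a batch of 2 shapes, each with 4 parts
recon, z = PartSeq2SeqSketch()(torch.randn(2, 4, 128))
print(recon.shape, z.shape)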

For the random generation task, further train a latent GAN:

# encode each shape to latent space
sh scripts/lamp/enc_lamp_seq2seq.sh

# train latent GAN (wgan-gp)
sh scripts/lamp/train_lamp_lgan.sh
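
The latent GAN here follows the WGAN-GP formulation. For reference, the standard gradient penalty term looks like the sketch below; the critic D and the latent dimensionality are placeholders, not this repository's modules:

# standard WGAN-GP gradient penalty; D and the latent size are placeholders
import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    """Penalize deviation of the critic's gradient norm from 1 on random
    interpolations between real and generated latent vectors."""
    alpha = torch.rand(real.size(0), 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = D(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()

# toy usage with a linear critic on 512-d latent vectors (dimension assumed)
D = torch.nn.Linear(512, 1)
print(gradient_penalty(D, torch.randn(8, 512), torch.randn(8, 512)).item())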

The trained models and experiment logs will be saved in proj_log/pqnet-PartNet-Lamp/ by default.

Testing

Example testing scripts can also be found in the scripts folder.

  • Shape Auto-encoding

    After training the main model, run the model to reconstruct all test shapes:

    sh scripts/lamp/rec_lamp_seq2seq.sh
  • Shape Generation

    After training the latent GAN, run it together with the main model to perform random generation (a decoding sketch follows after this list):

    # run latent GAN to generate fake latent vectors
    sh scripts/lamp/test_lamp_lgan.sh
    
    # run the main model to decode the generated vectors into final shapes
    sh scripts/lamp/dec_lamp_seq2seq.sh

The results will be saved in proj_log/pqnet-PartNet-Lamp/results/ by default.
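
To make the decoding step concrete, the sketch below turns a latent vector into a mesh by evaluating an implicit decoder on a dense grid and running marching cubes. The decoder, its call signature, and the latent size are hypothetical placeholders, not this repository's API:

# hypothetical decoding sketch; not the repository's API
import torch
from skimage import measure

def latent_to_mesh(decoder, z, res=64, threshold=0.5):
    # build a res^3 grid of query points in [0, 1]^3
    lin = torch.linspace(0, 1, res)
    grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), dim=-1).reshape(-1, 3)
    with torch.no_grad():
        occ = decoder(z.expand(grid.size(0), -1), grid)   # assumed signature: (code, points) -> occupancy
    volume = occ.reshape(res, res, res).cpu().numpy()
    verts, faces, _, _ = measure.marching_cubes(volume, level=threshold)
    return verts, faces

# toy usage: a dummy "decoder" that carves out a sphere and ignores the latent code
dummy = lambda z, pts: ((pts - 0.5).norm(dim=1) < 0.3).float()
verts, faces = latent_to_mesh(dummy, torch.zeros(1, 512))
print(verts.shape, faces.shape)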

Pre-trained models

Please use link1 (PKU disk) or link2 (Google Drive) to download the pretrained model. Download and extract it under proj_log/, so that all test scripts can be executed directly.

Voxelization

For those who want to train the model on their own dataset, see the instructions and code for our voxelization process here.
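
If you just want a quick starting point before following those instructions, one common way to obtain an occupancy grid from a mesh is via trimesh. This is an illustration only, not the repository's voxelization code, and part.obj is a placeholder path:

# illustrative voxelization with trimesh (not the repository's tool)
import numpy as np
import trimesh

mesh = trimesh.load("part.obj")              # placeholder path to a part mesh
pitch = mesh.extents.max() / 64              # aim for roughly 64 cells along the longest axis
voxels = mesh.voxelized(pitch).fill()        # solid (filled) voxelization
grid = voxels.matrix.astype(np.uint8)        # boolean occupancy -> 0/1 array
print(grid.shape, grid.sum())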

Cite

Please cite our work if you find it useful:

@InProceedings{Wu_2020_CVPR,
  author    = {Wu, Rundi and Zhuang, Yixin and Xu, Kai and Zhang, Hao and Chen, Baoquan},
  title     = {PQ-NET: A Generative Part Seq2Seq Network for 3D Shapes},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2020}
}