
AllenXiangX / SnowflakeNet

License: MIT
(TPAMI 2022) Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer

Programming Languages

Python
139335 projects - #7 most used programming language
CUDA
1817 projects
C++
36643 projects - #6 most used programming language
C
50402 projects - #5 most used programming language

Projects that are alternatives to or similar to SnowflakeNet

PlaneTR3D
[ICCV'21] PlaneTR: Structure-Guided Transformers for 3D Plane Recovery
Stars: ✭ 58 (-21.62%)
Mutual labels:  3dvision, iccv2021
PoinTr
[ICCV 2021 Oral] PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers
Stars: ✭ 260 (+251.35%)
Mutual labels:  3dvision, iccv2021
Tokenizers
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
Stars: ✭ 5,077 (+6760.81%)
Mutual labels:  transformers
KoBERT-Transformers
KoBERT on 🤗 Huggingface Transformers 🤗 (with Bug Fixed)
Stars: ✭ 162 (+118.92%)
Mutual labels:  transformers
Dalle Pytorch
Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
Stars: ✭ 3,661 (+4847.3%)
Mutual labels:  transformers
Haystack
🔍 Haystack is an open source NLP framework that leverages Transformer models. It enables developers to implement production-ready neural search, question answering, semantic document search and summarization for a wide range of applications.
Stars: ✭ 3,409 (+4506.76%)
Mutual labels:  transformers
Pytorch Sentiment Analysis
Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
Stars: ✭ 3,209 (+4236.49%)
Mutual labels:  transformers
CogView
Text-to-Image generation. The repo for NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers".
Stars: ✭ 708 (+856.76%)
Mutual labels:  transformers
G-SFDA
code for our ICCV 2021 paper 'Generalized Source-free Domain Adaptation'
Stars: ✭ 88 (+18.92%)
Mutual labels:  iccv2021
Simpletransformers
Transformers for Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI
Stars: ✭ 2,881 (+3793.24%)
Mutual labels:  transformers
COCO-LM
[NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
Stars: ✭ 109 (+47.3%)
Mutual labels:  transformers
Spark Nlp
State of the Art Natural Language Processing
Stars: ✭ 2,518 (+3302.7%)
Mutual labels:  transformers
Fast Bert
Super easy library for BERT based NLP models
Stars: ✭ 1,678 (+2167.57%)
Mutual labels:  transformers
Nlp Architect
A model library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing neural networks
Stars: ✭ 2,768 (+3640.54%)
Mutual labels:  transformers
Reformer Pytorch
Reformer, the efficient Transformer, in Pytorch
Stars: ✭ 1,644 (+2121.62%)
Mutual labels:  transformers
Fengshenbang-LM
Fengshenbang-LM (封神榜大模型) is an open-source system of large models led by the Cognitive Computing and Natural Language Research Center at IDEA, built to serve as infrastructure for Chinese AIGC and cognitive intelligence.
Stars: ✭ 1,813 (+2350%)
Mutual labels:  transformers
Vit Pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
Stars: ✭ 7,199 (+9628.38%)
Mutual labels:  transformers
Transmogrifai
TransmogrifAI (pronounced trăns-mŏgˈrə-fī) is an AutoML library for building modular, reusable, strongly typed machine learning workflows on Apache Spark with minimal hand-tuning
Stars: ✭ 2,084 (+2716.22%)
Mutual labels:  transformers
Nn
🧑‍🏫 50! Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
Stars: ✭ 5,720 (+7629.73%)
Mutual labels:  transformers
gpl
Powerful unsupervised domain adaptation method for dense retrieval. Requires only unlabeled corpus and yields massive improvement: "GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval" https://arxiv.org/abs/2112.07577
Stars: ✭ 216 (+191.89%)
Mutual labels:  transformers

Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer (TPAMI 2022)

Peng Xiang*, Xin Wen*, Yu-Shen Liu, Yan-Pei Cao, Pengfei Wan, Wen Zheng, Zhizhong Han

Intro pic

[NEWS]

  • 2023-02 [NEW🎉] The Jittor implementations of SPD are released in the SPD_jittor repo.

  • 2023-01 [NEW🎉] This repository now contains the code of the ICCV paper together with the additional content of the extended version, including:

    • Point cloud completion on the ShapeNet-34/21 dataset for unseen class completion.
    • Point cloud completion on the PCN dataset evaluated under EMD metric.
    • Point cloud auto-encoding and novel shape generation, see the generation folder.
    • Single view reconstruction, see the svr folder.
    • Point cloud upsampling, see the PU folder.
  • 2022-10 [NEW🎉] SPD, the journal extension of SnowflakeNet, is accepted to TPAMI 2022. We have extended the application of snowflake point deconvolution to generative tasks beyond point cloud completion, including point cloud auto-encoding, generation, single view reconstruction (SVR), and point cloud upsampling (PU).

  • 2021-10 SnowflakeNet is published at ICCV 2021, and the code is released!

[SPD]

1. Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer (TPAMI 2022)

2. SnowflakeNet: Point Cloud Completion by Snowflake Point Deconvolution with Skip-Transformer (ICCV 2021, Oral)

Most existing point cloud completion methods suffer from the discrete nature of point clouds and the unstructured prediction of points in local regions, which makes it difficult to reveal fine local geometric details. To resolve this issue, we propose SnowflakeNet with snowflake point deconvolution (SPD) to generate complete point clouds. SPD models the generation of point clouds as the snowflake-like growth of points, where child points are generated progressively by splitting their parent points after each SPD. Our insight into detailed geometry is to introduce a skip-transformer in the SPD to learn the point splitting patterns that best fit the local regions. The skip-transformer leverages an attention mechanism to summarize the splitting patterns used in the previous SPD layer and produce the splitting in the current layer. The locally compact and structured point clouds generated by SPD precisely reveal the structural characteristics of the 3D shape in local patches, which enables us to predict highly detailed geometries. Moreover, since SPD is a general operation that is not limited to completion, we explore its applications in other generative tasks, including point cloud auto-encoding, generation, single image reconstruction, and upsampling. Experimental results show that our method outperforms state-of-the-art methods on widely used benchmarks.
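
To make one SPD step concrete, here is a minimal PyTorch sketch (illustrative only: the class, module, and parameter names are assumptions, and the real implementation conditions the displacements with a full skip-transformer rather than the simple feature concatenation used here). Each parent point is duplicated up_factor times, and every child is moved by a learned displacement conditioned on features carried over from the previous step.

# Minimal sketch of one snowflake point deconvolution (SPD) step.
# Illustrative only -- not the repository's implementation.
import torch
import torch.nn as nn

class SPDSketch(nn.Module):
    def __init__(self, feat_dim=128, up_factor=2):
        super().__init__()
        self.up_factor = up_factor
        # Per-point feature extractor (stands in for the real feature pipeline).
        self.mlp_feat = nn.Sequential(
            nn.Conv1d(3 + feat_dim, feat_dim, 1), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, 1),
        )
        # Predicts a 3D displacement for every child point.
        self.mlp_delta = nn.Sequential(
            nn.Conv1d(2 * feat_dim, feat_dim, 1), nn.ReLU(),
            nn.Conv1d(feat_dim, 3, 1),
        )

    def forward(self, xyz, feat_prev):
        # xyz:       (B, 3, N) parent points
        # feat_prev: (B, C, N) per-point features from the previous SPD step
        feat = self.mlp_feat(torch.cat([xyz, feat_prev], dim=1))
        # Split every parent into up_factor children by duplication.
        xyz_child = xyz.repeat_interleave(self.up_factor, dim=2)
        feat_child = feat.repeat_interleave(self.up_factor, dim=2)
        skip = feat_prev.repeat_interleave(self.up_factor, dim=2)
        # Displace each child; concatenating the previous step's features is a
        # crude stand-in for the skip-transformer, which attends over them to
        # reuse the splitting pattern learned at the previous step.
        delta = self.mlp_delta(torch.cat([feat_child, skip], dim=1))
        return xyz_child + delta, feat_child

Stacking several such steps grows a sparse seed cloud into a dense, locally structured point cloud, which is what allows the network to recover fine geometric detail.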

[Cite this work]

@ARTICLE{xiang2022SPD,
  author={Xiang, Peng and Wen, Xin and Liu, Yu-Shen and Cao, Yan-Pei and Wan, Pengfei and Zheng, Wen and Han, Zhizhong},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  title={Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer}, 
  year={2022},
  volume={},
  number={},
  pages={1-18},
  doi={10.1109/TPAMI.2022.3217161}}

@inproceedings{xiang2021snowflakenet,
  title={{SnowflakeNet}: Point Cloud Completion by Snowflake Point Deconvolution with Skip-Transformer},
  author={Xiang, Peng and Wen, Xin and Liu, Yu-Shen and Cao, Yan-Pei and Wan, Pengfei and Zheng, Wen and Han, Zhizhong},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  year={2021}
}

[Getting Started]

Build Environment

# python environment
$ cd SnowflakeNet
$ conda create -n spd python=3.7
$ conda activate spd
$ pip3 install -r requirements.txt

# pytorch
$ pip3 install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
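
To confirm that the CUDA build of PyTorch was picked up, here is a generic sanity check (not part of the repo's instructions):

# verify the install
$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# expected: 1.7.1+cu110 True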

Build PyTorch Extensions

$ cd models/pointnet2_ops_lib
$ python setup.py install

$ cd ../..

$ cd loss_functions/Chamfer3D
$ python setup.py install

$ cd ../emd
$ python setup.py install
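
If the builds succeed, the compiled extensions should be importable. The module names below are assumptions about what the setup scripts register; if an import fails, check the name defined in the corresponding setup.py:

$ python -c "import pointnet2_ops, chamfer_3D, emd; print('extensions OK')"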

Pre-trained models

We provide pretrained models for the different tasks:

Backup Links:

Visualization of point splitting paths

We provide visualization code for point splitting paths in the visualization folder.
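
For a rough idea of what such a visualization involves, here is a hypothetical matplotlib sketch (not the code from the visualization folder) that draws a segment from each parent point to the children it splits into:

# Hypothetical sketch of plotting point splitting paths; the repo's
# visualization folder contains the actual code.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)

def plot_splitting_paths(parents, children, up_factor=2):
    # parents:  (N, 3) points before an SPD step
    # children: (N * up_factor, 3) points after the step, ordered so that
    #           children[i*up_factor:(i+1)*up_factor] come from parents[i]
    ax = plt.figure().add_subplot(projection='3d')
    ax.scatter(*children.T, s=2, c='tab:blue')
    for i, p in enumerate(parents):
        for c in children[i * up_factor:(i + 1) * up_factor]:
            ax.plot([p[0], c[0]], [p[1], c[1]], [p[2], c[2]],
                    c='tab:gray', linewidth=0.5)
    plt.show()

# Toy example: 64 random parents, each split into two jittered children.
parents = np.random.rand(64, 3)
children = np.repeat(parents, 2, axis=0) + 0.02 * np.random.randn(128, 3)
plot_splitting_paths(parents, children)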

Acknowledgements

Some of the code in this repo is borrowed from:

We thank the authors for their great work!

License

This project is open sourced under the MIT license.
