
IceTTTb / PlaneTR3D

License: Apache-2.0
[ICCV'21] PlaneTR: Structure-Guided Transformers for 3D Plane Recovery

Programming Languages

  • Python: 139,335 projects (#7 most used programming language)
  • Shell: 77,523 projects

Projects that are alternatives of or similar to PlaneTR3D

PoinTr
[ICCV 2021 Oral] PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers
Stars: ✭ 260 (+348.28%)
Mutual labels:  3dvision, iccv2021
SnowflakeNet
(TPAMI 2022) Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer
Stars: ✭ 74 (+27.59%)
Mutual labels:  3dvision, iccv2021
Vision-Language-Transformer
Vision-Language Transformer and Query Generation for Referring Segmentation (ICCV 2021)
Stars: ✭ 127 (+118.97%)
Mutual labels:  iccv2021
dti-sprites
(ICCV 2021) Code for "Unsupervised Layered Image Decomposition into Object Prototypes" paper
Stars: ✭ 33 (-43.1%)
Mutual labels:  iccv2021
algo-ds-101
Curated list of data structures and algorithms in 10+ programming languages.
Stars: ✭ 154 (+165.52%)
Mutual labels:  structure
cycle-confusion
Code and models for ICCV2021 paper "Robust Object Detection via Instance-Level Temporal Cycle Confusion".
Stars: ✭ 67 (+15.52%)
Mutual labels:  iccv2021
CurveNet
Official implementation of "Walk in the Cloud: Learning Curves for Point Clouds Shape Analysis", ICCV 2021
Stars: ✭ 94 (+62.07%)
Mutual labels:  iccv2021
cath-tools
Protein structure comparison tools such as SSAP and SNAP
Stars: ✭ 40 (-31.03%)
Mutual labels:  structure
MVP Benchmark
MVP Benchmark for Multi-View Partial Point Cloud Completion and Registration
Stars: ✭ 74 (+27.59%)
Mutual labels:  iccv2021
HCFlow
Official PyTorch code for Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (HCFlow, ICCV2021)
Stars: ✭ 140 (+141.38%)
Mutual labels:  iccv2021
directory-structure
📦 Print a directory tree structure in your Python code.
Stars: ✭ 40 (-31.03%)
Mutual labels:  structure
QmapCompression
Official implementation of "Variable-Rate Deep Image Compression through Spatially-Adaptive Feature Transform", ICCV 2021
Stars: ✭ 27 (-53.45%)
Mutual labels:  iccv2021
arch
🎉 a Tool to Manage & Automate your Node.js Server 🎉
Stars: ✭ 13 (-77.59%)
Mutual labels:  structure
strcode
Structure your code better.
Stars: ✭ 42 (-27.59%)
Mutual labels:  structure
causal-learn
Causal Discovery for Python. Translation and extension of the Tetrad Java code.
Stars: ✭ 428 (+637.93%)
Mutual labels:  structure
neat
[ICCV'21] NEAT: Neural Attention Fields for End-to-End Autonomous Driving
Stars: ✭ 194 (+234.48%)
Mutual labels:  iccv2021
gnerf
[ICCV 2021 Oral] Our method can estimate camera poses and neural radiance fields jointly when the cameras are initialized at random poses in complex scenarios (outside-in scenes, even with less texture or intense noise)
Stars: ✭ 152 (+162.07%)
Mutual labels:  iccv2021
VSCoding-Sequence
VSCode Extension for interactively visualising protein structure data in the editor
Stars: ✭ 41 (-29.31%)
Mutual labels:  structure
Version3-1
Version 2020 (3.1) of Chem4Word - A Chemistry Add-In for Microsoft Word
Stars: ✭ 14 (-75.86%)
Mutual labels:  structure
Meta-SelfLearning
Meta Self-learning for Multi-Source Domain Adaptation: A Benchmark
Stars: ✭ 157 (+170.69%)
Mutual labels:  iccv2021

PlaneTR: Structure-Guided Transformers for 3D Plane Recovery

This is the official implementation of our ICCV 2021 paper.

News

  • 2021.08.08: The visualization code is now available in 'disp.py'. A simple example of how to visualize the results is shown in 'eval_planeTR.py' (see the sketch below).
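
For reference, the snippet below illustrates one way such an overlay could be drawn with matplotlib. It is only a sketch: the array names, shapes, and colormap are assumptions for illustration, not the actual interface of 'disp.py'.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical helper: overlay a per-pixel plane segmentation map on the input image.
# 'image' is an HxWx3 uint8 array; 'plane_seg' is an HxW array of plane ids,
# with negative values marking non-plane pixels. All names are placeholders.
def show_plane_overlay(image, plane_seg, alpha=0.5):
    plt.subplot(1, 2, 1)
    plt.imshow(image)
    plt.title('input image')
    plt.axis('off')

    plt.subplot(1, 2, 2)
    plt.imshow(image)
    seg = np.ma.masked_where(plane_seg < 0, plane_seg)  # hide non-plane pixels
    plt.imshow(seg, cmap='tab20', alpha=alpha)
    plt.title('plane segmentation overlay')
    plt.axis('off')
    plt.show()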

TODO

  • Add supplementary 2D/3D visualization code.

Getting Started

Clone the repository:

git clone https://github.com/IceTTTb/PlaneTR3D.git

We use Python 3.6 and PyTorch 1.6.0 in our implementation. Please install the dependencies:

conda create -n planeTR python=3.6
conda activate planeTR
conda install pytorch=1.6.0 torchvision=0.7.0 torchaudio cudatoolkit=10.2 -c pytorch
pip install -r requirements.txt
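
A quick sanity check after installation can confirm that the expected PyTorch build and the CUDA runtime are visible; the version numbers below simply mirror the install command above.

# Sanity check for the freshly created environment.
import torch
import torchvision

print(torch.__version__)          # expected: 1.6.0
print(torchvision.__version__)    # expected: 0.7.0
print(torch.cuda.is_available())  # should print True on a machine with CUDA 10.2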

Data Preparation

We train and test our network on the plane dataset created by PlaneNet. We follow PlaneAE to convert the .tfrecords files into .npz files. Please refer to PlaneAE for more details.

We generate line segments using the state-of-the-art line segment detection algorithm HAWP with their pretrained model. The processed line segment data we used can be downloaded here.
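
If you want to verify the converted files before training, a small inspection script such as the one below can help. The path is only an example, and the stored keys are whatever the PlaneAE conversion and HAWP produced, so the script lists them instead of assuming specific names.

import numpy as np

# Example path; point this at any .npz file produced by the conversion step.
sample = np.load('plane_data/train/0.npz')
print('keys:', sample.files)
for key in sample.files:
    print(key, sample[key].shape, sample[key].dtype)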

The structure of the data folder should be:

plane_data/
  --train/*.npz
  --train_img/*
  --val/*.npz
  --val_img/*
  --train.txt
  --val.txt
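
A short check like the sketch below can confirm this layout before launching training (the root path is whatever you chose for 'root_dir'):

# Minimal layout check for the data folder described above.
import os

root = 'plane_data'  # adjust to where you placed the data
expected = ['train', 'train_img', 'val', 'val_img', 'train.txt', 'val.txt']
for name in expected:
    path = os.path.join(root, name)
    print(path, 'ok' if os.path.exists(path) else 'MISSING')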

Training

Download the pretrained model of HRNet and place it under the 'ckpts/' folder.

Change 'root_dir' in the config files to the path where you saved the data.
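
The config files are YAML, so 'root_dir' can also be checked programmatically before a long run. The file name and key location below are assumptions for illustration; use the config you actually pass to 'train_planeTR.py'.

import yaml

# Hypothetical config path; substitute the training config you actually use.
with open('config_planeTR_train.yaml') as f:
    cfg = yaml.safe_load(f)
print('root_dir =', cfg.get('root_dir'))  # the key may be nested, depending on the config layout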

Run the following command to train our network on one GPU:

CUDA_VISIBLE_DEVICES=0 python train_planeTR.py

Run the following command to train our network on multiple GPUs:

CUDA_VISIBLE_DEVICES=0,1,2 python -m torch.distributed.launch --nproc_per_node=3 --master_port 295025 train_planeTR.py

Evaluation

Download the pretrained model here and place it under the 'ckpts/' folder.

Change 'resume_dir' in 'config_planeTR_eval.yaml' to the path where you saved the weight file.

Change 'root_dir' in the config files to the path where you saved the data.
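
Before running the evaluation, it can be useful to confirm that the downloaded weight file loads cleanly. The checkpoint path below is a placeholder for wherever you saved the file.

import torch

# Load the checkpoint on CPU and list its top-level contents.
ckpt = torch.load('ckpts/planeTR_pretrained.pth', map_location='cpu')  # placeholder path
if isinstance(ckpt, dict):
    print('checkpoint keys:', list(ckpt.keys())[:10])
else:
    print('checkpoint object type:', type(ckpt))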

Run the following command to evaluate the performance:

CUDA_VISIBLE_DEVICES=0 python eval_planeTR.py

Citations

If you find our work useful in your research, please consider citing:

@inproceedings{tan2021planeTR,
  title     = {PlaneTR: Structure-Guided Transformers for 3D Plane Recovery},
  author    = {Tan, Bin and Xue, Nan and Bai, Song and Wu, Tianfu and Xia, Gui-Song},
  booktitle = {International Conference on Computer Vision},
  year      = {2021}
}

Contact

[email protected]

https://xuenan.net/

Acknowledgements

We thank the authors of PlaneAE, PlaneRCNN, interplane and DETR. Our implementation is heavily built upon their code.
