
microsoft / SpareNet

License: MIT
Style-based Point Generator with Adversarial Rendering for Point Cloud Completion (CVPR 2021)

Programming Languages

Python
139335 projects - #7 most used programming language
CUDA
1817 projects
C++
36643 projects - #6 most used programming language
Shell
77523 projects

Projects that are alternatives to or similar to SpareNet

Cgal
The public CGAL repository, see the README below
Stars: ✭ 2,825 (+2294.07%)
Mutual labels:  point-cloud
Spvnas
[ECCV 2020] Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution
Stars: ✭ 239 (+102.54%)
Mutual labels:  point-cloud
SGGpoint
[CVPR 2021] Exploiting Edge-Oriented Reasoning for 3D Point-based Scene Graph Analysis (official pytorch implementation)
Stars: ✭ 41 (-65.25%)
Mutual labels:  point-cloud
Point Cloud Annotation Tool
Stars: ✭ 224 (+89.83%)
Mutual labels:  point-cloud
Asis
Associatively Segmenting Instances and Semantics in Point Clouds, CVPR 2019
Stars: ✭ 228 (+93.22%)
Mutual labels:  point-cloud
Scan2Cap
[CVPR 2021] Scan2Cap: Context-aware Dense Captioning in RGB-D Scans
Stars: ✭ 81 (-31.36%)
Mutual labels:  point-cloud
Samplenet
Differentiable Point Cloud Sampling (CVPR 2020 Oral)
Stars: ✭ 212 (+79.66%)
Mutual labels:  point-cloud
UnsupervisedPointCloudReconstruction
Experiments on unsupervised point cloud reconstruction.
Stars: ✭ 133 (+12.71%)
Mutual labels:  point-cloud
Pcn
Code for PCN: Point Completion Network in 3DV'18 (Oral)
Stars: ✭ 238 (+101.69%)
Mutual labels:  point-cloud
ldgcnn
Linked Dynamic Graph CNN: Learning through Point Cloud by Linking Hierarchical Features
Stars: ✭ 66 (-44.07%)
Mutual labels:  point-cloud
Pointnetvlad
PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition, CVPR 2018
Stars: ✭ 224 (+89.83%)
Mutual labels:  point-cloud
Cupoch
Robotics with GPU computing
Stars: ✭ 225 (+90.68%)
Mutual labels:  point-cloud
SnowflakeNet
(TPAMI 2022) Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer
Stars: ✭ 74 (-37.29%)
Mutual labels:  point-cloud-completion
Kitti Dataset
Visualising LIDAR data from KITTI dataset.
Stars: ✭ 217 (+83.9%)
Mutual labels:  point-cloud
Displaz.jl
Julia bindings for the displaz lidar viewer
Stars: ✭ 16 (-86.44%)
Mutual labels:  point-cloud
Pclpy
Python bindings for the Point Cloud Library (PCL)
Stars: ✭ 212 (+79.66%)
Mutual labels:  point-cloud
Flownet3d
FlowNet3D: Learning Scene Flow in 3D Point Clouds (CVPR 2019)
Stars: ✭ 249 (+111.02%)
Mutual labels:  point-cloud
pointcloud viewer
No description or website provided.
Stars: ✭ 16 (-86.44%)
Mutual labels:  point-cloud
MinkLocMultimodal
MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition
Stars: ✭ 65 (-44.92%)
Mutual labels:  point-cloud
GRNet
Implementation of "GRNet: Gridding Residual Network for Dense Point Cloud Completion". (Xie et al., ECCV 2020)
Stars: ✭ 239 (+102.54%)
Mutual labels:  point-cloud-completion

Style-based Point Generator with Adversarial Rendering for Point Cloud Completion (CVPR 2021)

An efficient PyTorch library for Point Cloud Completion.

Project page | Paper | Video

Chulin Xie*, Chuxin Wang*, Bo Zhang, Hao Yang, Dong Chen, and Fang Wen. (*Equal contribution)

Abstract

We propose a novel Style-based Point Generator with Adversarial Rendering (SpareNet) for point cloud completion. First, we present the channel-attentive EdgeConv to fully exploit both the local structures and the global shape in point features. Second, we observe that the concatenation manner used by vanilla folding limits its potential to generate a complex and faithful shape. Enlightened by the success of StyleGAN, we regard the shape feature as a style code that modulates the normalization layers during the folding, which considerably enhances its capability. Third, we realize that existing point supervisions, e.g., Chamfer Distance or Earth Mover's Distance, cannot faithfully reflect the perceptual quality of the reconstructed points. To address this, we propose to project the completed points to depth maps with a differentiable renderer and apply adversarial training to promote perceptual realism under different viewpoints. Comprehensive experiments on ShapeNet and KITTI prove the effectiveness of our method, which achieves state-of-the-art quantitative performance while offering superior visual quality.

Installation

  1. Create a virtual environment via conda.

    conda create -n sparenet python=3.7
    conda activate sparenet
  2. Install torch and torchvision.

    conda install pytorch cudatoolkit=10.1 torchvision -c pytorch
  3. Install requirements.

    pip install -r requirements.txt
  4. Install the CUDA extensions.

    sh setup_env.sh
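
After these steps, a quick sanity check (a minimal sketch, not part of the original instructions) confirms that PyTorch sees the GPU and the CUDA 10.1 toolkit installed above:

    # Environment sanity check -- illustrative only, not part of the repo.
    import torch

    print(torch.__version__)          # the conda-installed PyTorch build
    print(torch.version.cuda)         # expected "10.1" to match the install step
    print(torch.cuda.is_available())  # True if a GPU and driver are visible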

Dataset

  • Download the processed ShapeNet dataset (16384 points) generated by GRNet, and the KITTI dataset.

  • Update the file paths of the datasets in configs/base_config.py (a sketch of how the path templates expand follows this list):

    __C.DATASETS.shapenet.partial_points_path = "/path/to/datasets/ShapeNetCompletion/%s/partial/%s/%s/%02d.pcd"
    __C.DATASETS.shapenet.complete_points_path = "/path/to/datasets/ShapeNetCompletion/%s/complete/%s/%s.pcd"
    __C.DATASETS.kitti.partial_points_path = "/path/to/datasets/KITTI/cars/%s.pcd"
    __C.DATASETS.kitti.bounding_box_file_path = "/path/to/datasets/KITTI/bboxes/%s.txt"
    
    # Dataset Options: ShapeNet, ShapeNetCars, KITTI
    __C.DATASET.train_dataset = "ShapeNet"
    __C.DATASET.test_dataset = "ShapeNet"
    
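The %s and %02d placeholders in the ShapeNet templates expand to the split, synset (category) id, model id, and partial-view index. A minimal sketch of that expansion (the field order is an assumption based on the GRNet ShapeNetCompletion layout, and the ids below are invented):

    # Illustrative template expansion; field order assumed from the GRNet
    # ShapeNetCompletion layout, ids invented for the example.
    partial_tpl = "/path/to/datasets/ShapeNetCompletion/%s/partial/%s/%s/%02d.pcd"
    split, synset_id, model_id, view_idx = "train", "02691156", "1a2b3c4d", 0
    print(partial_tpl % (split, synset_id, model_id, view_idx))
    # /path/to/datasets/ShapeNetCompletion/train/partial/02691156/1a2b3c4d/00.pcd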

Get Started

Inference Using Pretrained Model

Pretrained model checkpoints are provided for inference; download one and pass it via --weights.
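
A plausible invocation, mirroring the training command in the next section (the test.py entry point and its flags are assumptions modeled on the train.py interface, not confirmed by this README):

    python test.py --gpu ${GPUS} \
             --work_dir ${WORK_DIR} \
             --model ${network} \
             --weights ${path to checkpoint}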

Train

All files produced during training, such as log messages and checkpoints, are saved to the work directory.

  • run

    python train.py --gpu ${GPUS} \
             --work_dir ${WORK_DIR} \
             --model ${network} \
             --weights ${path to checkpoint}
  • example

    python train.py --gpu 0,1,2,3 --work_dir /path/to/logfiles --model sparenet --weights /path/to/checkpoint

Differentiable Renderer

A fully differentiable point renderer that enables end-to-end rendering from a 3D point cloud to 2D depth maps. See the paper for details.

Usage of Renderer

The renderer takes a point cloud (pcd), a view selection (views), and per-point radii (radius) as inputs, and produces depth_maps as output.

  • example
    # `projection_mode`: a str with value "perspective" or "orthorgonal"
    # `eyepos_scale`: a float that defines the distance of eyes to (0, 0, 0)
    # `image_size`: an int defining the output image size
    renderer = ComputeDepthMaps(projection_mode, eyepos_scale, image_size)
    
    # `data`: a tensor with shape [batch_size, num_points, 3]
    # `view_id`: the index of selected view satisfying 0 <= view_id < 8
    # `radius_list`: a list of floats, defining the kernel radius to render each point
    depthmaps = renderer(data, view_id, radius_list)
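
For context, a self-contained usage sketch tying the two signatures together. The import path, device placement, and all concrete values are assumptions; only the constructor and call interfaces come from this README:

    # End-to-end usage sketch; interfaces per the comments above, everything
    # else (import path, concrete values) is assumed rather than confirmed.
    import torch
    from utils.p2i_utils import ComputeDepthMaps  # assumed module location

    renderer = ComputeDepthMaps("perspective", 1.0, 256).cuda()

    pcd = torch.rand(4, 2048, 3).cuda()   # [batch_size, num_points, 3] toy cloud
    depth_maps = renderer(pcd, 0, [5.0])  # view 0 of the 8 views, one radius
    print(depth_maps.shape)               # inspect the rendered depth maps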

Test FPD on ShapeNet Dataset

  • Run your model and save its results on the test dataset.

  • Update the file path of the results in test_fpd.py and run it:

    parser.add_argument('--log_dir', default='/path/to/save/logs')
    parser.add_argument('--data_dir', default='/path/to/test/dataset/pcds')
    parser.add_argument('--fake_dir', default='/path/to/methods/pcds',
                               help='/path/to/results/shapenet_fc/pcds/')
    
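FPD (Fréchet Point cloud Distance) follows the FID recipe: features are extracted from the real and completed point clouds with a pretrained point cloud network, and the Fréchet distance between the two fitted Gaussians is reported. A minimal sketch of that final computation, assuming the feature matrices have already been extracted (the extractor itself is not shown):

    # Frechet distance between Gaussians fitted to feature matrices of shape
    # [num_samples, feat_dim]; the pretrained feature extractor is assumed given.
    import numpy as np
    from scipy import linalg

    def frechet_distance(real_feats, fake_feats):
        mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
        cov_r = np.cov(real_feats, rowvar=False)
        cov_f = np.cov(fake_feats, rowvar=False)
        covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)  # matrix square root
        diff = mu_r - mu_f
        return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean.real))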

License

The code and the pretrained models in this repository are released under the MIT license, as specified in the LICENSE file.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

BibTeX

If you find our work useful and use the codebase or models in your research, please cite our work as follows.

@InProceedings{Xie_2021_CVPR,
    author    = {Xie, Chulin and Wang, Chuxin and Zhang, Bo and Yang, Hao and Chen, Dong and Wen, Fang},
    title     = {Style-Based Point Generator With Adversarial Rendering for Point Cloud Completion},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {4619-4628}
}

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].