VCL3D / 3D60

License: BSD-2-Clause
Tools accompanying the 3D60 spherical panoramas dataset

Programming Languages

Python

Projects that are alternatives of or similar to 3D60

HyperSphereSurfaceRegression
Code accompanying the paper "360 Surface Regression with a Hyper-Sphere Loss", 3DV 2019
Stars: ✭ 13 (-84.34%)
Mutual labels:  360, omnidirectional, spherical-panoramas, surface-normals
DeepPanoramaLighting
Deep Lighting Environment Map Estimation from Spherical Panoramas (CVPRW20)
Stars: ✭ 37 (-55.42%)
Mutual labels:  360, omnidirectional, spherical-panoramas
PanoDR
Code and models for "PanoDR: Spherical Panorama Diminished Reality for Indoor Scenes" presented at the OmniCV workshop of CVPR21.
Stars: ✭ 22 (-73.49%)
Mutual labels:  360, spherical-panoramas
GBVS360-BMS360-ProSal
Extending existing saliency prediction models from 2D to omnidirectional images
Stars: ✭ 25 (-69.88%)
Mutual labels:  360, omnidirectional
zed-pytorch
3D Object detection using the ZED and Pytorch
Stars: ✭ 41 (-50.6%)
Mutual labels:  stereo-vision
semi-global-matching
Semi-Global Matching
Stars: ✭ 122 (+46.99%)
Mutual labels:  stereo-vision
Many-Translaters
Google Translate, 360 Translate, iCIBA Translate, Youdao Translate, free APIs
Stars: ✭ 121 (+45.78%)
Mutual labels:  360
pipano-sdk-ios
A Panorama SDK for iOS
Stars: ✭ 20 (-75.9%)
Mutual labels:  360
zed-ros2-wrapper
ROS 2 wrapper beta for the ZED SDK
Stars: ✭ 61 (-26.51%)
Mutual labels:  stereo-vision
edlsm pytorch
PyTorch implementation of stereo matching as described in the paper "Efficient Deep Learning for Stereo Matching"
Stars: ✭ 16 (-80.72%)
Mutual labels:  stereo-vision
RealtimeStereo
Attention-Aware Feature Aggregation for Real-time Stereo Matching on Edge Devices (ACCV, 2020)
Stars: ✭ 110 (+32.53%)
Mutual labels:  stereo-vision
ExampleOfiOSLiDAR
Example Of iOS ARKit LiDAR
Stars: ✭ 373 (+349.4%)
Mutual labels:  depth-map
DispNet-TensorFlow
TensorFlow implementation of DispNet by Zhijian Jiang.
Stars: ✭ 55 (-33.73%)
Mutual labels:  stereo-vision
360WIFI-MAC
💻 Use the 360 Portable Wi-Fi Adapter (1st gen & 2nd gen) on your Mac; 360 macOS driver.
Stars: ✭ 26 (-68.67%)
Mutual labels:  360
zed-matlab
ZED SDK interface sample for Matlab
Stars: ✭ 23 (-72.29%)
Mutual labels:  stereo-vision
ballbot
Firmware for self balancing ballbot
Stars: ✭ 11 (-86.75%)
Mutual labels:  omnidirectional
360-VJ
Add another dimension to your VJing with the 360-VJ effect pack! Rotate 360 and Fisheye videos, convert 360 and Flat videos to Fisheye. Great for fulldome and immersive VJing.
Stars: ✭ 81 (-2.41%)
Mutual labels:  360
RoboVision
Attempting to create a program capable of combining stereo video input with motors and other sensors on a PC running Linux; the target is embedded Linux for use in a robot!
Stars: ✭ 21 (-74.7%)
Mutual labels:  stereo-vision
zed-openpose
Real-time 3D multi-person with OpenPose and the ZED
Stars: ✭ 37 (-55.42%)
Mutual labels:  stereo-vision
dispflownet-tf
Tensorflow implementation of https://lmb.informatik.uni-freiburg.de/Publications/2016/MIFDB16 + pretrained weights + implementation of "Unsupervised Adaptation for Deep Stereo" (ICCV 2017)
Stars: ✭ 18 (-78.31%)
Mutual labels:  stereo-vision

IMPORTANT: An updated dataset is now available that fixes a critical issue with 3D60: the lighting bias introduced by the light source placed at the origin. More information can be found on the Pano3D project page.

OmniDepth Conference Project Page

Spherical View Synthesis Conference Project Page

Surface Regression Conference Project Page

3D60 Toolset

A set of tools for working with the 3D60 dataset:

  • PyTorch data loaders
  • Dataset splits generation scripts

The 3D60 dataset was generated by ray-casting existing 3D datasets, making it a derivative of:

  • Matterport3D [1]
  • Stanford2D3D [2]
  • SunCG [3]

Requirements

This code has been tested with:

Besides PyTorch, the following Python packages are needed:

Data Loading

An example of data loading can be found in visualize_dataset.py, where the dataset is loaded and visualized using visdom.

Given that 3D60 can be used in a variety of machine learning tasks, data loading can be customized w.r.t. which datasets, image_types and placements (for stereo viewpoints) will be loaded by constructing customized dataset iterators:

dataset_iterator = ThreeD60.get_datasets("./splits/3D60_train.txt",
    datasets=["suncg", "m3d", "s2d3d"],
    placements=[ThreeD60.Placements.CENTER, ThreeD60.Placements.RIGHT, ThreeD60.Placements.UP],
    image_types=[ThreeD60.ImageTypes.COLOR, ThreeD60.ImageTypes.DEPTH, ThreeD60.ImageTypes.NORMAL],
    longitudinal_rotation=True)

Randomized horizontal rotation augmentations can also be triggered with the longitudinal_rotation flag.

The returned iterator can be used to construct a PyTorch DataLoader:

dataset_loader = torch.utils.data.DataLoader(dataset_iterator,
    batch_size=32, shuffle=True, pin_memory=False, num_workers=4)

Specific image tensors can be extracted from the returned dictionary with the extract_image function:

for i, b in enumerate(dataset_loader):
    center_color_image = ThreeD60.extract_image(b, ThreeD60.Placements.CENTER, ThreeD60.ImageTypes.COLOR)
    center_depth_map = ThreeD60.extract_image(b, ThreeD60.Placements.CENTER, ThreeD60.ImageTypes.DEPTH)
    center_normal_map = ThreeD60.extract_image(b, ThreeD60.Placements.CENTER, ThreeD60.ImageTypes.NORMAL)

Splits

Published

A set of published splits is provided, as used in the corresponding works:

  • 360D ECCV18 (used in OmniDepth [4])
  • 3D60 3DV19 (with a smaller synthetic part used in [5] & [6])
  • v1 (with a larger synthetic part)

These splits rely on the official splits of the real datasets -- i.e. Matterport3D and Stanford2D3D (fold#1) -- but use a random selection of scenes from the synthetic SunCG dataset.

Custom

We also offer a set of scripts to generate new splits:

calculate_statistics.py: This script calculates a depth value distribution histogram up to a --max_depth argument value (default: 10.0m), as well as the percentage of values below 0.5m and above 5.0m, for each of the datasets. The resulting .csv files are saved in the --stats_path argument folder (default: ./splits/), one for each rendered 3D dataset and prefixed with its codename: "m3d", "s2d3d" and "suncg". The paths containing the rendered data for each dataset are provided with the --m3d_path, --s2d3d_path and --suncg_path arguments for Matterport3D, Stanford2D3D and SunCG respectively.
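
As a rough sketch, a statistics run over all three datasets might look like the following; only the flags described above are used, and the dataset paths are placeholders:

# Illustrative invocation; the rendered-data paths below are placeholders.
python calculate_statistics.py --m3d_path /path/to/rendered/Matterport3D \
    --s2d3d_path /path/to/rendered/Stanford2D3D \
    --suncg_path /path/to/rendered/SunCG \
    --max_depth 10.0 --stats_path ./splits/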

find_outliers.py: This script has two modes based on the --action argument (an illustrative invocation is sketched after the list below):

  • 'calc': Finds and saves outlier renders based on a set of heuristics w.r.t. their near (--lower_threshold) and far (--upper_threshold) depth value distributions. When the percentage of total pixels of an examined depth map exceeds specific bounds, either under the near threshold or over the far threshold, it is considered an outlier or bad render. This can happen due to incomplete scans, scanning artifacts and errors, unfinished 3D modeling or missing assets. The depth maps are located in each dataset's respective folder, provided by the codename-prefixed arguments --m3d_path, --s2d3d_path and --suncg_path, similar to calculate_statistics.py. Different lower and upper bounds can be set for each dataset, as their typical depth distributions differ. They are fractional percentages, set using the codename-prefixed lower and upper bound arguments:

    • --m3d_lower_bound and --m3d_upper_bound for Matterport3D,
    • --s2d3d_lower_bound and --s2d3d_upper_bound for Stanford2D3D, and
    • --suncg_lower_bound and --suncg_upper_bound for SunCG.

    The resulting .csv files contain the file names of the outlier renders, prefixed with each dataset's codename and saved in the --outliers_path argument folder (default: ./splits/).

  • 'save': Reads the calculated outliers from the --outliers_path argument folder and stores them as .png images in the same path. The images contain multiple tiled outliers for quick visual inspection of the results.
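
A possible way to chain the two modes is sketched below; the dataset path and the bound values are illustrative placeholders (not defaults), while the flags are the ones documented above:

# Illustrative only: the path and bound values are placeholders, not defaults.
python find_outliers.py --action calc --m3d_path /path/to/rendered/Matterport3D \
    --m3d_lower_bound 0.05 --m3d_upper_bound 0.05 --outliers_path ./splits/
python find_outliers.py --action save --outliers_path ./splits/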

The split-generation script creates the split (i.e. train, test and val) files that in turn contain the filenames of each rendered modality and placement for each viewpoint. The split files are prefixed with the --name argument and saved in the --outliers_path folder, from where the outlier files are also read. The script has the train/test/validation splits of Matterport3D and Stanford2D3D's fold#1 hardcoded, but uses a random selection of scenes for SunCG. It scans each dataset's folder, as provided by the codename-prefixed arguments --m3d_path, --s2d3d_path and --suncg_path, similar to calculate_statistics.py and find_outliers.py, and ignores the outliers read from the available files for each dataset.
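
A final sketch of generating a full set of splits is shown below; the script's filename is not given in this document, so generate_splits.py is a placeholder, as are the dataset paths and the split name:

# "generate_splits.py", "my_split" and the dataset paths are placeholders for illustration.
python generate_splits.py --name my_split \
    --m3d_path /path/to/rendered/Matterport3D \
    --s2d3d_path /path/to/rendered/Stanford2D3D \
    --suncg_path /path/to/rendered/SunCG \
    --outliers_path ./splits/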

The splits available in ./splits/v1 were generated by running the aforementioned scripts in order with their default parameters. However, the scripts can also be used to create new splits with different parameterisations for outlier rejection, based on custom distance thresholds, or by ignoring specific datasets. If any of the codename-prefixed arguments --m3d_path, --s2d3d_path and --suncg_path is not provided, the corresponding dataset is skipped in all steps/scripts; thus, single-dataset splits can also be generated (e.g. for leave-one-out experiments).

Important Note: Taking this into account, consistency between experiments may not always be possible without using the aforementioned splits, or when custom splits contain synthetic samples. However, the test and validation sets of the realistic samples (i.e. Matterport3D and Stanford2D3D) should be consistent between published and custom splits.

References

[1] Chang, A., Dai, A., Funkhouser, T., Halber, M., Niessner, M., Savva, M., Song, S., Zeng, A. and Zhang, Y. (2017). Matterport3d: Learning from rgb-d data in indoor environments. In Proceedings of the International Conference on 3D Vision (3DV).

[2] Armeni, I., Sax, S., Zamir, A.R. and Savarese, S., 2017. Joint 2d-3d-semantic data for indoor scene understanding. arXiv preprint arXiv:1702.01105.

[3] Song, S., Yu, F., Zeng, A., Chang, A.X., Savva, M. and Funkhouser, T., 2017. Semantic scene completion from a single depth image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[4] Zioulis, N.*, Karakottas, A.*, Zarpalas, D., and Daras, P. (2018). Omnidepth: Dense depth estimation for indoors spherical panoramas. In Proceedings of the European Conference on Computer Vision (ECCV).

[5] Zioulis, N., Karakottas, A., Zarpalas, D., Alvarez, F., and Daras, P. (2019). Spherical View Synthesis for Self-Supervised 360° Depth Estimation. In Proceedings of the International Conference on 3D Vision (3DV).

[6] Karakottas, A., Zioulis, N., Samaras, S., Ataloglou, D., Gkitsas, V., Zarpalas, D., and Daras, P. (2019). 360° Surface Regression with a Hyper-Sphere Loss. In Proceedings of the International Conference on 3D Vision (3DV).
