
VCL3D / PanoDR

Licence: other
Code and models for "PanoDR: Spherical Panorama Diminished Reality for Indoor Scenes" presented at the OmniCV workshop of CVPR21.

Programming Languages

Python
139335 projects - #7 most used programming language
Batchfile
5799 projects

Projects that are alternatives of or similar to PanoDR

DeepPanoramaLighting
Deep Lighting Environment Map Estimation from Spherical Panoramas (CVPRW20)
Stars: ✭ 37 (+68.18%)
Mutual labels:  augmented-reality, 360, spherical-panoramas
Stargan V2
StarGAN v2 - Official PyTorch Implementation (CVPR 2020)
Stars: ✭ 2,700 (+12172.73%)
Mutual labels:  image-to-image-translation, generative-models
overlord
Official pytorch implementation of "Scaling-up Disentanglement for Image Translation", ICCV 2021.
Stars: ✭ 35 (+59.09%)
Mutual labels:  image-to-image-translation, generative-models
Stargan
StarGAN - Official PyTorch Implementation (CVPR 2018)
Stars: ✭ 4,946 (+22381.82%)
Mutual labels:  image-to-image-translation, generative-models
3D60
Tools accompanying the 3D60 spherical panoramas dataset
Stars: ✭ 83 (+277.27%)
Mutual labels:  360, spherical-panoramas
HyperSphereSurfaceRegression
Code accompanying the paper "360 Surface Regression with a Hyper-Sphere Loss", 3DV 2019
Stars: ✭ 13 (-40.91%)
Mutual labels:  360, spherical-panoramas
GazeCorrection
Unsupervised High-Resolution Portrait Gaze Correction and Animation (TIP 2022)
Stars: ✭ 174 (+690.91%)
Mutual labels:  image-inpainting, image-to-image-translation
WebScreenVR
WebScreenVR enhance your workspace while in Virtual Reality, allowing you to cast your screen and different applications around you in a 3D environment.
Stars: ✭ 53 (+140.91%)
Mutual labels:  augmented-reality
AR BusinessCard
Augmented Reality Business Card
Stars: ✭ 19 (-13.64%)
Mutual labels:  augmented-reality
lsun-room
[ICPR 2018] Indoor Scene Layout Estimation from a Single Image.
Stars: ✭ 94 (+327.27%)
Mutual labels:  layout-estimation
AR-Sandbox-for-Construction-Planning
An interactive augmented reality sandbox designed to help illustrate civil engineering concepts
Stars: ✭ 14 (-36.36%)
Mutual labels:  augmented-reality
Depth-Guided-Inpainting
Code for ECCV 2020 "DVI: Depth Guided Video Inpainting for Autonomous Driving"
Stars: ✭ 50 (+127.27%)
Mutual labels:  image-inpainting
spark-ar-physics
A helper module for connecting Spark AR with physics libraries
Stars: ✭ 28 (+27.27%)
Mutual labels:  augmented-reality
FoodMagic
An AR App for Restaurants and Food Delivery ✨✨
Stars: ✭ 53 (+140.91%)
Mutual labels:  augmented-reality
TheLaughingMan-ARKit
Use ARKit to become the infamous Laughing Man from Ghost in the Shell
Stars: ✭ 26 (+18.18%)
Mutual labels:  augmented-reality
quickstart-android
Quick start examples for Banuba Face AR SDK on Android
Stars: ✭ 17 (-22.73%)
Mutual labels:  augmented-reality
captAR
Augmented Reality Geolocation Capture-the-Flag Mobile Game Capstone Project
Stars: ✭ 24 (+9.09%)
Mutual labels:  augmented-reality
example-reactnative
DeepAR SDK React Native example (iOS and Android)
Stars: ✭ 23 (+4.55%)
Mutual labels:  augmented-reality
awesome-visual-localization-papers
The relocalization task aims to estimate the 6-DoF pose of a novel (unseen) frame in the coordinate system given by the prior model of the world.
Stars: ✭ 60 (+172.73%)
Mutual labels:  augmented-reality
ArcV
Computer Vision & Augmented Reality library
Stars: ✭ 27 (+22.73%)
Mutual labels:  augmented-reality

PanoDR: Spherical Panorama Diminished Reality for Indoor Scenes.

Paper | Conference | Workshop | YouTube | Project Page

Model Architecture


Prerequisites

  • Windows 10 or Linux
  • Python 3.7
  • NVIDIA GPU + CUDA CuDNN
  • PyTorch 1.7.1 (or higher)

Installation

  • Clone this repo:
git clone https://github.com/VCL3D/PanoDR.git
cd PanoDR
  • We recommend setting up a virtual environment (follow the virtualenv documentation; an example setup follows the commands below). Once your environment is set up and activated, install the vcl3datlantis package:
cd src/utils
pip install -e .
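
For example, a minimal environment setup using Python's built-in venv module (virtualenv works analogously; the environment name is a placeholder), to be done before the pip install -e . step above:

python -m venv panodr-env
source panodr-env/bin/activate    # Linux; on Windows: panodr-env\Scripts\activate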

Dataset

We use the Structured3D dataset. To train a model on it, please download the dataset from the official website. We follow the official training, validation, and testing splits as defined by the authors. After downloading the dataset, split the scenes into train, validation and test folders (a helper sketch follows the directory tree below). The folders should have the following format:

Structured3D/
    train/
        scene_00000
        scene_00001
        ...
    validation/
        scene_03000
        scene_03001
        ...
    test/
        scene_03250
        scene_03251
        ...
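
A minimal Python sketch for this split, assuming the scenes were extracted flat under Structured3D/ and that the scene index boundaries match the tree above (below 3000 train, 3000-3249 validation, 3250 and up test; adjust the boundaries if your copy of the official splits differs):

import shutil
from pathlib import Path

root = Path("Structured3D")  # path to the extracted dataset

def split_of(index: int) -> str:
    # Index boundaries follow the directory tree above.
    if index < 3000:
        return "train"
    if index < 3250:
        return "validation"
    return "test"

for scene in sorted(root.glob("scene_*")):
    index = int(scene.name.split("_")[1])
    target = root / split_of(index) / scene.name
    target.parent.mkdir(exist_ok=True)
    shutil.move(str(scene), str(target))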

In order to estimate the dense layout maps, specify the paths to the train and test folders and run:

python src/utils/vcl3datlantis/dataset/precompute_structure_semantics.py

Training

In order to train the model, first specify the required parameters (an example invocation follows below):

  • --train_path : /../Structured3D/train/
  • --test_path : /../Structured3D/test/
  • --results_path : The folder where metrics are saved
  • --gt_results_path : The folder where ground truth images are saved for testing
  • --pred_results_path : The folder where predicted images are saved for testing
  • --segmentation_model_chkpnt : The path for the pre-trained dense layout estimation model
  • --model_folder : The folder where checkpoints are saved

After starting a visdom server:

python -m visdom.server

run:

python src/train/train.py --visdom 
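
For example, a full invocation with the parameters listed above might look like this (all paths are placeholders to adapt to your setup, and the checkpoint filenames are hypothetical):

python src/train/train.py --train_path /data/Structured3D/train/ --test_path /data/Structured3D/test/ --results_path results/metrics/ --gt_results_path results/gt/ --pred_results_path results/pred/ --segmentation_model_chkpnt checkpoints/layout_model.pth --model_folder checkpoints/ --visdom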

Inference

You can download the pre-trained models from here and specify the arguments --eval_chkpnt_folder and --segmentation_model_chkpnt, respectively. Assuming the input image and mask follow the same format as the provided input folder, run:

python src/train/test.py --inference --eval_path input/
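
For example (the checkpoint paths below are placeholders for wherever you saved the downloaded pre-trained models):

python src/train/test.py --inference --eval_path input/ --eval_chkpnt_folder checkpoints/panodr/ --segmentation_model_chkpnt checkpoints/layout_model.pth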

Model service

The model is also available via TorchServe. First, install the required dependencies via:

cd service
pip install -r requirements.txt

Next, download the .mar file from here and place it under service/model_store. In order to serve the model using REST calls, run:

torchserve --start --ncs --model-store ./model_store --models panodr=/model_store/panodr.mar

Once the model is served, the endpoint is reachable at http://IP:8080/predictions/panodr, where IP is the address selected when configuring TorchServe (typically localhost; more advanced configurations, via the inference_address setting, can serve the model externally or make it reachable from other machines).

A server is provided for hosting inputs and saving the output files. It can be started via:

cd Imageserver
python imageserver.py

All images are hosted on http://IP:PORT. Further, an endpoint on http://IP:PORT/save/inpainted is provided for obtaining the output files from the service.

The following fields have to be specified in the inputs/request.json file to call the service:

  • DataInputs["rgb"]
  • DataInputs["mask"]

Finally, to obtain predictions from the model, a JSON payload containing the callback URLs needs to be POSTed. Simply run:

curl.exe -X POST http://IP:8080/predictions/panodr -H "Content-Type: application/json" -d @/PATH_TO/PanoDR/service/inputs/request.json  
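
Equivalently, a minimal Python client sketch using the requests library; the payload below is a hypothetical illustration of the DataInputs fields above (the authoritative structure, including any callback/save URLs, is defined in service/inputs/request.json), and the image URLs assume the image server from the previous step is hosting the inputs (hostnames and ports are placeholders):

import requests

# Hypothetical payload sketching service/inputs/request.json; the real file
# may include additional fields such as the callback/save URL.
payload = {
    "DataInputs": {
        "rgb": "http://localhost:8000/input/rgb.png",    # hosted by the image server
        "mask": "http://localhost:8000/input/mask.png",  # hosted by the image server
    }
}

response = requests.post(
    "http://localhost:8080/predictions/panodr",  # TorchServe endpoint
    json=payload,
    timeout=120,
)
response.raise_for_status()
print(response.status_code, response.text)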

Citation

If you use this code for your research, please cite the following:

@inproceedings{gkitsas2021panodr,
  title={PanoDR: Spherical Panorama Diminished Reality for Indoor Scenes},
  author={Gkitsas, Vasileios and Sterzentsenko, Vladimiros and Zioulis, Nikolaos and Albanis, Georgios and Zarpalas, Dimitrios},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={3716--3726},
  year={2021}
}

Acknowledgements

This project has received funding from the European Union's Horizon 2020 research and innovation programme ATLANTIS under grant agreement No 951900.

Our code borrows from SEAN and deepfillv2.
