
cherubicXN / hawp

License: other
Holistically-Attracted Wireframe Parsing

Programming Languages

Python, CUDA

Projects that are alternatives to or similar to hawp

IJCAI2018 SSDH
Semantic Structure-based Unsupervised Deep Hashing IJCAI2018
Stars: ✭ 38 (-73.97%)
Mutual labels:  deep
nlp pycon
Material for PyCon 2019 NLP Tutorial
Stars: ✭ 33 (-77.4%)
Mutual labels:  deep
map-keys-deep-lodash
Map/rename keys recursively with Lodash
Stars: ✭ 16 (-89.04%)
Mutual labels:  deep
jest-expect-contain-deep
Assert deeply nested values in Jest
Stars: ✭ 68 (-53.42%)
Mutual labels:  deep
FastMVSNet
[CVPR'20] Fast-MVSNet: Sparse-to-Dense Multi-View Stereo With Learned Propagation and Gauss-Newton Refinement
Stars: ✭ 193 (+32.19%)
Mutual labels:  cvpr2020
CarND-VehicleDetection
vehicle detection with deep learning
Stars: ✭ 34 (-76.71%)
Mutual labels:  deep
defaults-deep
Like `extend` but recursively copies only the missing properties/values to the target object.
Stars: ✭ 26 (-82.19%)
Mutual labels:  deep
dqn-tensorflow
Deep Q-Network implemented with TensorFlow
Stars: ✭ 25 (-82.88%)
Mutual labels:  deep
introspected
Introspection for serializable arrays and JSON friendly objects.
Stars: ✭ 75 (-48.63%)
Mutual labels:  deep
nemar
[CVPR2020] Unsupervised Multi-Modal Image Registration via Geometry Preserving Image-to-Image Translation
Stars: ✭ 120 (-17.81%)
Mutual labels:  cvpr2020
cca zoo
Canonical Correlation Analysis Zoo: A collection of Regularized, Deep Learning based, Kernel, and Probabilistic methods in a scikit-learn style framework
Stars: ✭ 103 (-29.45%)
Mutual labels:  deep
rl
Reinforcement learning algorithms implemented using Keras and OpenAI Gym
Stars: ✭ 14 (-90.41%)
Mutual labels:  deep
MotionNet
CVPR 2020, "MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird's Eye View Maps"
Stars: ✭ 141 (-3.42%)
Mutual labels:  cvpr2020
zero virus
Zero-VIRUS: Zero-shot VehIcle Route Understanding System for Intelligent Transportation (CVPR 2020 AI City Challenge Track 1)
Stars: ✭ 25 (-82.88%)
Mutual labels:  cvpr2020
DSGN
DSGN: Deep Stereo Geometry Network for 3D Object Detection (CVPR 2020)
Stars: ✭ 276 (+89.04%)
Mutual labels:  cvpr2020
LUVLi
[CVPR 2020] Re-hosting of the LUVLi Face Alignment codebase. Please download the codebase from the original MERL website by agreeing to all terms and conditions. By using this code, you agree to MERL's research-only licensing terms.
Stars: ✭ 24 (-83.56%)
Mutual labels:  cvpr2020
sketch wireframing kit
Quick Sketchapp wireframing tool for UK government digital services
Stars: ✭ 74 (-49.32%)
Mutual labels:  wireframe
cvpr clvision challenge
CVPR 2020 Continual Learning Challenge - Submit your CL algorithm today!
Stars: ✭ 57 (-60.96%)
Mutual labels:  cvpr2020
Tensorflow-Wide-Deep-Local-Prediction
This project demonstrates how to run and save predictions locally using an exported TensorFlow Estimator model
Stars: ✭ 28 (-80.82%)
Mutual labels:  deep
Protobuf-Dreamer
A tiled DeepDream project for creating any size of image, on both CPU and GPU
Stars: ✭ 39 (-73.29%)
Mutual labels:  deep

Holistically-Attracted Wireframe Parsing (CVPR 2020)

This is the official implementation of our CVPR paper.

[News] We now provide an experimental, easy-to-install version of HAWP for inference-only usage; please check out the inference branch for details.

Highlights

  • We propose HAWP, a fast and parsimonious parsing method that accurately and robustly detects a vectorized wireframe in an input image with a single forward pass.
  • The proposed HAWP is fully end-to-end.
  • The proposed HAWP does not require the squeeze module.
  • State-of-the-art performance on the Wireframe dataset and YorkUrban dataset.
  • The proposed HAWP achieves 29.5 FPS on a GPU (Tesla V100) for inference with batch size 1.

Quantitative Results

Wireframe Dataset

Method              sAP5  sAP10  sAP15  msAP  mAPJ  APH          FH           FPS
LSD                 /     /      /      /     /     55.2         62.5         49.6
AFM                 18.5  24.4   27.5   23.5  23.3  69.2         77.2         13.5
DWP                 3.7   5.1    5.9    4.9   40.9  67.8         72.2         2.24
L-CNN               58.9  62.9   64.9   62.2  59.3  80.3 / 82.8  76.9 / 81.3  15.6
L-CNN (re-trained)  59.7  63.6   65.3   62.9  60.2  81.6 / 83.7  77.9 / 81.7  15.6
HAWP (Ours)         62.5  66.5   68.2   65.7  60.2  84.5 / 86.1  80.3 / 83.1  29.5

YorkUrban Dataset

Method              sAP5  sAP10  sAP15  msAP  mAPJ  APH          FH           FPS
LSD                 /     /      /      /     /     50.9         60.1         49.6
AFM                 7.3   9.4    11.1   9.3   12.4  48.2         63.3         13.5
DWP                 1.5   2.1    2.6    2.1   13.4  51.0         61.6         2.24
L-CNN               24.3  26.4   27.5   26.1  30.4  58.5 / 59.6  61.8 / 65.3  15.6
L-CNN (re-trained)  25.0  27.1   28.3   26.8  31.5  58.3 / 59.3  62.2 / 65.2  15.6
HAWP (Ours)         26.1  28.5   29.7   28.1  31.6  60.6 / 61.2  64.8 / 66.3  29.5

Installation (tested on Ubuntu-18.04, CUDA 10.0, GCC 7.4.0)

conda create -n hawp python=3.6
conda activate hawp
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

cd hawp
conda develop .

pip install -r requirement.txt
python setup.py build_ext --inplace
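
After the build, a short sanity check can confirm that PyTorch sees the GPU and was built against the expected CUDA toolkit. This is an optional sketch using only standard PyTorch calls; it is not part of the HAWP codebase.

import torch

# Report the installed PyTorch build and the CUDA toolkit it was compiled against
# (CUDA 10.0 is what the installation above targets).
print("PyTorch:", torch.__version__)
print("Built with CUDA:", torch.version.cuda)

# Confirm that a CUDA device is visible before training or benchmarking FPS.
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device detected; training and the reported FPS require a GPU.")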

Quickstart with the pretrained model

python scripts/predict.py --img figures/example.png

Training & Testing

Data Preparation

  • Download the Wireframe dataset and the YorkUrban dataset from their project pages.
  • Download the JSON-format annotations (Google Drive).
  • Place the images in "hawp/data/wireframe/images/" and "hawp/data/york/images/".
  • Unzip the JSON-format annotations into "hawp/data/wireframe" and "hawp/data/york".

The structure of the data folder should be

data/
   wireframe/images/*.png
   wireframe/train.json
   wireframe/test.json
   ------------------------
   york/images/*.png
   york/test.json
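
A few lines of standard-library Python can verify this layout before training. This is an optional sketch that assumes it is run from the repository root and only checks the files listed above.

import json
from pathlib import Path

DATA_ROOT = Path("data")  # adjust if your data lives elsewhere

for name, ann_files in [("wireframe", ["train.json", "test.json"]),
                        ("york", ["test.json"])]:
    # Count the dataset images.
    images = list((DATA_ROOT / name / "images").glob("*.png"))
    print(f"{name}: {len(images)} images")
    # Check that each expected annotation file is present and parses as JSON.
    for ann in ann_files:
        path = DATA_ROOT / name / ann
        if path.exists():
            with open(path) as f:
                print(f"  {ann}: {len(json.load(f))} entries")
        else:
            print(f"  {ann}: MISSING")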

Training

CUDA_VISIBLE_DEVICES=0 python scripts/train.py --config-file config-files/hawp.yaml

The best model is manually selected from the model files after 25 epochs.

Testing

CUDA_VISIBLE_DEVICES=0 python scripts/test.py --config-file config-files/hawp.yaml [--display]

The results will be saved to OUTPUT_DIR/$dataset_name.json, where dataset_name is either wireframe_test or york_test.
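
The saved file can be inspected with the standard json module. The snippet below only assumes the output path used in the evaluation example further down and that the file parses as JSON; it does not assume anything about the per-record fields.

import json

# Path follows the OUTPUT_DIR/$dataset_name.json convention described above.
with open("outputs/hawp/wireframe_test.json") as f:
    results = json.load(f)

print(f"Loaded {len(results)} records from wireframe_test.json")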

Structural-AP Evaluation

  • Run scripts/test.py to get the wireframe parsing results.
  • Run scripts/eval_sap.py to get the sAP results.
# example on the Wireframe dataset
python scripts/eval_sap.py --path outputs/hawp/wireframe_test.json --threshold 10
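
Conceptually, sAP scores line segments the way detection AP scores boxes: predictions are ranked by confidence and matched greedily to ground truth, and a prediction counts as a true positive if the sum of squared distances between its endpoints and those of an unmatched ground-truth segment falls below the threshold (5, 10, or 15, matching the --threshold flag above). The sketch below illustrates that matching rule only; it is a simplified illustration, not the repository's eval_sap.py.

import numpy as np

def sap_matches(pred, gt, threshold=10.0):
    # pred: (N, 4) predicted segments (x1, y1, x2, y2), sorted by descending score.
    # gt:   (M, 4) ground-truth segments in the same format.
    # A prediction is a true positive if the sum of squared endpoint distances to an
    # unmatched ground-truth segment is below `threshold` (both endpoint orders tried).
    gt = gt.reshape(-1, 2, 2)
    used = np.zeros(len(gt), dtype=bool)
    tp = np.zeros(len(pred), dtype=bool)
    for i, seg in enumerate(pred.reshape(-1, 2, 2)):
        if len(gt) == 0:
            break
        d = np.minimum(
            ((seg - gt) ** 2).sum(axis=(1, 2)),        # endpoints in given order
            ((seg[::-1] - gt) ** 2).sum(axis=(1, 2)),  # endpoints swapped
        )
        j = int(np.argmin(d))
        if d[j] < threshold and not used[j]:
            used[j] = True
            tp[i] = True
    return tp  # precision/recall curves and AP follow from these flags over all images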

Citations

If you find our work useful in your research, please consider citing:

@inproceedings{HAWP,
  title     = {Holistically-Attracted Wireframe Parsing},
  author    = {Nan Xue and Tianfu Wu and Song Bai and Fu-Dong Wang and Gui-Song Xia and Liangpei Zhang and Philip H. S. Torr},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2020}
}

Acknowledgment

We acknowledge the effort of the authors of the Wireframe dataset and the YorkUrban dataset. These datasets make accurate line segment detection and wireframe parsing possible.
