
CubiCasa / Cubicasa5k

Licence: other
CubiCasa5k floor plan dataset

Projects that are alternatives to or similar to Cubicasa5k

Caffenet Benchmark
Evaluation of the CNN design choices performance on ImageNet-2012.
Stars: ✭ 700 (+614.29%)
Mutual labels:  jupyter-notebook, dataset
Chinesetrafficpolicepose
Detects Chinese traffic police commanding poses
Stars: ✭ 49 (-50%)
Mutual labels:  jupyter-notebook, dataset
Covid Ct
COVID-CT-Dataset: A CT Scan Dataset about COVID-19
Stars: ✭ 820 (+736.73%)
Mutual labels:  jupyter-notebook, dataset
Vpgnet
VPGNet: Vanishing Point Guided Network for Lane and Road Marking Detection and Recognition (ICCV 2017)
Stars: ✭ 382 (+289.8%)
Mutual labels:  jupyter-notebook, dataset
Wikipedia ner
📖 Labeled examples from wiki dumps in Python
Stars: ✭ 61 (-37.76%)
Mutual labels:  jupyter-notebook, dataset
Comma2k19
A driving dataset for the development and validation of fused pose estimators and mapping algorithms
Stars: ✭ 391 (+298.98%)
Mutual labels:  jupyter-notebook, dataset
Deep learning projects
Stars: ✭ 28 (-71.43%)
Mutual labels:  jupyter-notebook, dataset
Transportationnetworks
Transportation Networks for Research
Stars: ✭ 312 (+218.37%)
Mutual labels:  jupyter-notebook, dataset
Animegan
A simple PyTorch implementation of Generative Adversarial Networks, focusing on anime face drawing.
Stars: ✭ 1,095 (+1017.35%)
Mutual labels:  jupyter-notebook, dataset
Cinemanet
Stars: ✭ 57 (-41.84%)
Mutual labels:  jupyter-notebook, dataset
Medmnist
[ISBI'21] MedMNIST Classification Decathlon: A Lightweight AutoML Benchmark for Medical Image Analysis
Stars: ✭ 338 (+244.9%)
Mutual labels:  jupyter-notebook, dataset
Symbolic Musical Datasets
🎹 symbolic musical datasets
Stars: ✭ 79 (-19.39%)
Mutual labels:  jupyter-notebook, dataset
Dsprites Dataset
Dataset to assess the disentanglement properties of unsupervised learning methods
Stars: ✭ 340 (+246.94%)
Mutual labels:  jupyter-notebook, dataset
Hate Speech And Offensive Language
Repository for the paper "Automated Hate Speech Detection and the Problem of Offensive Language", ICWSM 2017
Stars: ✭ 543 (+454.08%)
Mutual labels:  jupyter-notebook, dataset
Whylogs
Profile and monitor your ML data pipeline end-to-end
Stars: ✭ 328 (+234.69%)
Mutual labels:  jupyter-notebook, dataset
Tedsds
Turbofan Engine Degradation Simulation Data Set example in Apache Spark
Stars: ✭ 14 (-85.71%)
Mutual labels:  jupyter-notebook, dataset
Datascience course
A Data Science course in Portuguese
Stars: ✭ 294 (+200%)
Mutual labels:  jupyter-notebook, dataset
Covid19 twitter
Covid-19 Twitter dataset for non-commercial research use and pre-processing scripts - under active development
Stars: ✭ 304 (+210.2%)
Mutual labels:  jupyter-notebook, dataset
Covidnet Ct
COVID-Net Open Source Initiative - Models and Data for COVID-19 Detection in Chest CT
Stars: ✭ 57 (-41.84%)
Mutual labels:  jupyter-notebook, dataset
Raccoon dataset
The dataset is used to train my own raccoon detector and I blogged about it on Medium
Stars: ✭ 1,177 (+1101.02%)
Mutual labels:  jupyter-notebook, dataset

CubiCasa5K: A Dataset and an Improved Multi-Task Model for Floorplan Image Analysis

Paper: CubiCasa5K: A Dataset and an Improved Multi-Task Model for Floorplan Image Analysis

Multi-Task Model

The model uses the neural network architecture presented in Raster-to-Vector: Revisiting Floorplan Transformation [1]. The pre- and post-processing parts are modified to suit our dataset, but otherwise the pipeline follows the Torch implementation of [1] as closely as possible. Our model employs the multi-task uncertainty loss function presented in Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics. An example of our trained model's predictions can be found in the samples.ipynb file.
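The uncertainty loss weighs each task's loss by a learned homoscedastic-uncertainty term and adds a regularizer so the weights cannot collapse to zero. A minimal sketch of that idea in PyTorch (the class name and task count are illustrative, not the repo's actual code):

```python
import torch
import torch.nn as nn

class UncertaintyLoss(nn.Module):
    """Multi-task loss weighting via learned homoscedastic uncertainty
    (Kendall et al., 2018): total = sum_i exp(-s_i) * L_i + s_i,
    where s_i = log(sigma_i^2) is a learnable scalar per task."""

    def __init__(self, num_tasks):
        super().__init__()
        # One log-variance parameter per task, initialized to 0 (sigma = 1).
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        # losses: iterable of scalar task losses, one per task.
        losses = torch.stack(list(losses))
        return (torch.exp(-self.log_vars) * losses + self.log_vars).sum()
```

With `log_vars` at their initial value of zero, the total is simply the sum of the task losses; during training the optimizer learns to down-weight noisier tasks.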

Dataset

CubiCasa5K is a large-scale floorplan image dataset containing 5000 samples annotated with over 80 floorplan object categories. The annotations are dense and versatile: polygons are used to separate the different objects. You can download the CubiCasa5K dataset from here and extract the zip file to the data/ folder.

Requirements

The model is written for Python 3.6.5 and PyTorch 1.0.0 with a CUDA-enabled GPU. Other Python dependencies are listed in the requirements.txt file, with the exception of OpenCV (cv2 3.1.0). If you want to use the Dockerfile, you need docker and nvidia-docker2 installed. We use the pre-built image anibali/pytorch:cuda-9.0 as a starting point and install the other required libraries with pip. To build the image, run:

docker build -t cubi -f Dockerfile .

To start JupyterLab in the container:

docker run --rm -it --init \
  --runtime=nvidia \
  --ipc=host \
  --publish 1111:1111 \
  --user="$(id -u):$(id -g)" \
  --volume=$PWD:/app \
  -e NVIDIA_VISIBLE_DEVICES=0 \
  cubi jupyter-lab --port 1111 --ip 0.0.0.0 --no-browser

You can now open a terminal in the JupyterLab web interface to execute further commands in the container.

Database creation

We create an LMDB database of the dataset, storing the floorplan image, segmentation tensors, and heatmap coordinates so the data can be accessed faster during training and evaluation. The downside is that the database takes about 105 GB of hard drive space. There is an option to parse the SVG files on the fly, but it is too slow for training. Commands to create the database:

python create_lmdb.py --txt val.txt
python create_lmdb.py --txt test.txt
python create_lmdb.py --txt train.txt

Train

python train.py

Different training options can be found in the script file. TensorBoard is not included in the Docker container; you need to run it outside the container and point it to the runs_cubi/ folder. For each run, a new folder named with a timestamp is created.

tensorboard --logdir runs_cubi/

Evaluation

Our model weights file can be downloaded here. Once the weights file is in the project folder, evaluation can be run. You can also run the Jupyter notebook to see how the model performs on different floorplans.

python eval.py --weights model_best_val_loss_var.pkl

Additional evaluation options can be found in the script file. The results are written to the runs_cubi/ folder.

Todo

  • Modify create_lmdb.py to save files as uint8 (currently float32, which is the main reason the LMDB file grows to over 100 GB).
  • Modify augmentations.py to operate on NumPy arrays (it currently uses torch tensors because our earlier version applied augmentations to heatmap tensors rather than to heatmap dicts, which is the correct approach).
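The first item above boils down to a dtype change: class-id label maps with fewer than 256 categories fit in uint8 losslessly, at a quarter of the float32 footprint. A small illustrative sketch (the array shape and class count are made up):

```python
import numpy as np

# A segmentation label map holding integer class ids (< 256 classes),
# stored as float32 the way the current pipeline does.
seg_f32 = np.random.randint(0, 80, size=(512, 512)).astype(np.float32)

# Casting to uint8 is lossless for integer class ids below 256 ...
seg_u8 = seg_f32.astype(np.uint8)
assert np.array_equal(seg_u8.astype(np.float32), seg_f32)

# ... and shrinks the on-disk footprint by 4x.
ratio = seg_f32.nbytes / seg_u8.nbytes
```

Heatmap coordinates would still need a wider dtype, but the label tensors dominate the database size.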