
BMW-InnovationLab / BMW-IntelOpenVINO-Segmentation-Inference-API

License: Apache-2.0
This is a repository for a semantic segmentation inference API using the OpenVINO toolkit.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to BMW-IntelOpenVINO-Segmentation-Inference-API

BMW-IntelOpenVINO-Detection-Inference-API
This is a repository for a no-code object detection inference API using the OpenVINO toolkit. It's supported on both Windows and Linux Operating systems.
Stars: ✭ 66 (+112.9%)
Mutual labels:  cpu, inference, deeplearning, nocode, openvino-toolkit
Bmw Tensorflow Inference Api Cpu
This is a repository for an object detection inference API using the TensorFlow framework.
Stars: ✭ 158 (+409.68%)
Mutual labels:  cpu, inference, deeplearning
Pixellib
Visit PixelLib's official documentation https://pixellib.readthedocs.io/en/latest/
Stars: ✭ 327 (+954.84%)
Mutual labels:  deeplearning, image-segmentation, semantic-segmentation
Keras Unet
Helper package with multiple U-Net implementations in Keras, as well as utility tools useful when working on image semantic segmentation tasks. This library and underlying tools come from multiple projects I performed while working on semantic segmentation tasks.
Stars: ✭ 196 (+532.26%)
Mutual labels:  deeplearning, image-segmentation, semantic-segmentation
Dawn Bench Entries
DAWNBench: An End-to-End Deep Learning Benchmark and Competition
Stars: ✭ 254 (+719.35%)
Mutual labels:  inference, deeplearning
Bmw Yolov4 Inference Api Gpu
This is a repository for a no-code object detection inference API using the YOLOv3 and YOLOv4 Darknet framework.
Stars: ✭ 237 (+664.52%)
Mutual labels:  inference, deeplearning
Kimera Semantics
Real-Time 3D Semantic Reconstruction from 2D data
Stars: ✭ 368 (+1087.1%)
Mutual labels:  cpu, semantic-segmentation
Openvino Yolov3
YoloV3/tiny-YoloV3+RaspberryPi3/Ubuntu LaptopPC+NCS/NCS2+USB Camera+Python+OpenVINO
Stars: ✭ 500 (+1512.9%)
Mutual labels:  cpu, deeplearning
Xnnpack
High-efficiency floating-point neural network inference operators for mobile, server, and Web
Stars: ✭ 808 (+2506.45%)
Mutual labels:  cpu, inference
Nnpack
Acceleration package for neural networks on multi-core CPUs
Stars: ✭ 1,538 (+4861.29%)
Mutual labels:  cpu, inference
ResUNetPlusPlus-with-CRF-and-TTA
ResUNet++, CRF, and TTA for segmentation of medical images (IEEE JBIHI)
Stars: ✭ 98 (+216.13%)
Mutual labels:  image-segmentation, semantic-segmentation
Ncnn Benchmark
Benchmarks of ncnn, a high-performance neural network inference framework optimized for the mobile platform
Stars: ✭ 70 (+125.81%)
Mutual labels:  inference, deeplearning
Neuropod
A uniform interface to run deep learning models from multiple frameworks
Stars: ✭ 858 (+2667.74%)
Mutual labels:  inference, deeplearning
Models
Model Zoo for Intel® Architecture: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors
Stars: ✭ 248 (+700%)
Mutual labels:  cpu, inference
K-Net
[NeurIPS2021] Code Release of K-Net: Towards Unified Image Segmentation
Stars: ✭ 434 (+1300%)
Mutual labels:  image-segmentation, semantic-segmentation
gaze-estimation-with-laser-sparking
Deep learning based gaze estimation demo with a fun feature :-)
Stars: ✭ 32 (+3.23%)
Mutual labels:  inference, openvino-toolkit
InferenceHelper
C++ Helper Class for Deep Learning Inference Frameworks: TensorFlow Lite, TensorRT, OpenCV, OpenVINO, ncnn, MNN, SNPE, Arm NN, NNabla, ONNX Runtime, LibTorch, TensorFlow
Stars: ✭ 142 (+358.06%)
Mutual labels:  inference, deeplearning
Semantic-Segmentation-BiSeNet
Keras BiseNet architecture implementation
Stars: ✭ 55 (+77.42%)
Mutual labels:  image-segmentation, semantic-segmentation
Deep Learning In Production
Develop production ready deep learning code, deploy it and scale it
Stars: ✭ 216 (+596.77%)
Mutual labels:  deeplearning, semantic-segmentation
Bmw Yolov4 Inference Api Cpu
This is a repository for a no-code object detection inference API using YOLOv4 and YOLOv3 with OpenCV.
Stars: ✭ 180 (+480.65%)
Mutual labels:  cpu, inference

BMW-IntelOpenVINO-Segmentation-Inference-API

This is a repository for a semantic segmentation inference API using the OpenVINO toolkit. It's supported on both Windows and Linux Operating systems.

Models in Intermediate Representation (IR) format, converted with the Intel® OpenVINO™ toolkit v2021.1, can be deployed in this API. OpenVINO currently supports converting deep learning models trained with several machine learning frameworks, including Caffe and TensorFlow. Please refer to the OpenVINO documentation for further details on converting your model.
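For example, a TensorFlow frozen graph could be converted to IR with the Model Optimizer shipped with OpenVINO. The install path below is an assumption that depends on your setup, and the exact flags depend on your model; see the OpenVINO documentation for the authoritative options:

python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --input_model <your_frozen_graph>.pb --output_dir models/<your_model_folder>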

Note: To be able to use the sample inference model provided with this repository, make sure to clone it with git clone and avoid downloading it as a ZIP archive, because the ZIP download contains only the Git LFS pointer instead of the actual model.

overview

Prerequisites

  • OS:
    • Ubuntu 18.04
    • Windows 10 Pro/Enterprise
  • Docker

Check for prerequisites

To check if you have docker-ce installed:

docker --version

Install prerequisites

Ubuntu

Use the following command to install docker on Ubuntu:

chmod +x install_prerequisites.sh && source install_prerequisites.sh

Windows 10

To install Docker on Windows, please follow the link.

Build The Docker Image

In order to build the project run the following command from the project's root directory:

docker build -t openvino_segmentation -f docker/Dockerfile .

Behind a proxy

docker build --build-arg http_proxy='' --build-arg https_proxy='' -t openvino_segmentation -f docker/Dockerfile .

Run The Docker Container

If you wish to deploy this API using Docker, please issue the following run command.

To run the API, go to the API's directory and run the following:

Using Linux based docker:

docker run -itv $(pwd)/models:/models -v $(pwd)/models_hash:/models_hash -p <port_of_your_choice>:80 openvino_segmentation

Using Windows based docker:

Using PowerShell:
docker run -itv ${PWD}/models:/models -v ${PWD}/models_hash:/models_hash -p <port_of_your_choice>:80 openvino_segmentation
Using CMD:
docker run -itv %cd%/models:/models -v %cd%/models_hash:/models_hash -p <port_of_your_choice>:80 openvino_segmentation

The <port_of_your_choice> (referred to as <docker_host_port> below) can be any free port of your choice.
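For example, to expose the API on port 8080 using Linux-based Docker:

docker run -itv $(pwd)/models:/models -v $(pwd)/models_hash:/models_hash -p 8080:80 openvino_segmentation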

The API will run automatically, and the service will listen for HTTP requests on the chosen port.

result

API Endpoints

To see all available endpoints, open your favorite browser and navigate to:

http://<machine_IP>:<docker_host_port>/docs

Endpoints summary

/load (GET)

Loads all available models and returns every model with its hashed value. Loaded models are stored and aren't loaded again.

load model
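For example, using Python with the requests package; the host and port below are assumptions, so adjust them to your deployment:

import requests

BASE_URL = "http://localhost:8080"  # assumed <machine_IP>:<docker_host_port>

# Load all available models; the response maps each model name to its hash.
response = requests.get(f"{BASE_URL}/load")
print(response.json())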

/models/{model_name}/detect (POST)

Performs inference on an image using the specified model and returns the bounding boxes of the detected classes in JSON format.

detect image
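A minimal Python sketch of calling this endpoint with the requests package; the multipart field name "image" and the model name "model_1" are assumptions, so check the /docs page for the exact request schema:

import requests

BASE_URL = "http://localhost:8080"  # assumed <machine_IP>:<docker_host_port>
MODEL = "model_1"                   # assumed name of a folder under models/

# Send an image as a multipart upload and print the returned JSON.
with open("test.jpg", "rb") as f:
    response = requests.post(f"{BASE_URL}/models/{MODEL}/detect", files={"image": f})
print(response.json())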

/models/{model_name}/image_segmentation (POST)

Performs inference on an image using the specified model, draws the segmentation masks and class labels on the image, and returns the resulting image as a response.

image segmentation
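Similarly, a sketch that saves the returned image to disk (same assumptions about the field and model names as above):

import requests

BASE_URL = "http://localhost:8080"  # assumed <machine_IP>:<docker_host_port>
MODEL = "model_1"                   # assumed name of a folder under models/

# The endpoint returns the segmented image itself, so write the raw bytes to a file.
with open("test.jpg", "rb") as f:
    response = requests.post(f"{BASE_URL}/models/{MODEL}/image_segmentation", files={"image": f})

with open("segmented.jpg", "wb") as out:
    out.write(response.content)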

Model structure

The folder "models" contains subfolders of all the models to be loaded. Inside each subfolder there should be a:

  • bin file (<your_converted_model>.bin): contains the model weights

  • xml file (<your_converted_model>.xml): describes the network topology

  • configuration.json (a JSON file containing information about the model):

      {
        "classes":4,
        "type":"segmentation",
        "classesname":[
          "background",
          "person",
          "bicycle",
          "car"
        ]
      }
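As a quick sanity check, the hypothetical helper below (not part of the repository) verifies that a model folder contains the expected files and that "classes" matches the number of entries in "classesname":

import json
from pathlib import Path

def check_model_folder(folder: str) -> None:
    """Validate a model folder against the structure described above."""
    path = Path(folder)
    for ext in (".bin", ".xml"):
        if not list(path.glob(f"*{ext}")):
            raise FileNotFoundError(f"{folder} is missing a {ext} file")

    config = json.loads((path / "configuration.json").read_text())
    if config["type"] != "segmentation":
        raise ValueError("'type' must be 'segmentation' for this API")
    if config["classes"] != len(config["classesname"]):
        raise ValueError("'classes' must equal the number of entries in 'classesname'")

check_model_folder("models/model_1")  # hypothetical folder name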

How to add a new model

Add a new model and create the palette

Create a new folder and add the model files (the '.bin', '.xml', and 'configuration.json' files). After adding this folder, run the following script:

python generate_random_palette.py -m <ModelName>

This script will generate a random palette and add it to your model folder.
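For reference, a minimal sketch of what such a palette generator might look like, assuming palette.txt simply stores one random RGB triplet per class (the actual format written by generate_random_palette.py may differ):

import json
import random
from pathlib import Path

def generate_palette(model_folder: str) -> None:
    """Write one random RGB colour per class to palette.txt (format is an assumption)."""
    path = Path(model_folder)
    config = json.loads((path / "configuration.json").read_text())
    lines = [
        f"{random.randint(0, 255)} {random.randint(0, 255)} {random.randint(0, 255)}"
        for _ in range(config["classes"])
    ]
    (path / "palette.txt").write_text("\n".join(lines) + "\n")

generate_palette("models/model_1")  # hypothetical folder name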

The "models" folder structure should now be similar to as shown below:

│──models
  │──model_1
  │  │──<model_1>.bin
  │  │──<model_1>.xml
  │  │──configuration.json
  │  │──palette.txt
  │
  │──model_2
  │  │──<model_2>.bin
  │  │──<model_2>.xml
  │  │──configuration.json
  │  │──palette.txt

image segmentation

Acknowledgements

OpenVINO Toolkit

intel.com

Elio Hanna
