
BMW-InnovationLab / BMW-IntelOpenVINO-Detection-Inference-API

License: Apache-2.0
This is a repository for a no-code object detection inference API using OpenVINO. It is supported on both Windows and Linux operating systems.

Programming Languages

Python
139335 projects - #7 most used programming language
Dockerfile
14818 projects
Shell
77523 projects

Projects that are alternatives of or similar to BMW-IntelOpenVINO-Detection-Inference-API

BMW-IntelOpenVINO-Segmentation-Inference-API
This is a repository for a semantic segmentation inference API using the OpenVINO toolkit
Stars: ✭ 31 (-53.03%)
Mutual labels:  cpu, inference, deeplearning, nocode, openvino-toolkit
gaze-estimation-with-laser-sparking
Deep learning based gaze estimation demo with a fun feature :-)
Stars: ✭ 32 (-51.52%)
Mutual labels:  inference, inference-engine, openvino, openvino-toolkit
Bmw Tensorflow Inference Api Cpu
This is a repository for an object detection inference API using the TensorFlow framework.
Stars: ✭ 158 (+139.39%)
Mutual labels:  cpu, inference, deeplearning
Openvino
OpenVINO™ Toolkit repository
Stars: ✭ 2,858 (+4230.3%)
Mutual labels:  inference, inference-engine, openvino
Xnnpack
High-efficiency floating-point neural network inference operators for mobile, server, and Web
Stars: ✭ 808 (+1124.24%)
Mutual labels:  cpu, inference
opencv-python-inference-engine
Wrapper package for OpenCV with Inference Engine python bindings.
Stars: ✭ 32 (-51.52%)
Mutual labels:  inference-engine, openvino
Nnpack
Acceleration package for neural networks on multi-core CPUs
Stars: ✭ 1,538 (+2230.3%)
Mutual labels:  cpu, inference
Bmw Yolov4 Inference Api Cpu
This is a repository for a no-code object detection inference API using YOLOv4 and YOLOv3 with OpenCV.
Stars: ✭ 180 (+172.73%)
Mutual labels:  cpu, inference
Models
Model Zoo for Intel® Architecture: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors
Stars: ✭ 248 (+275.76%)
Mutual labels:  cpu, inference
concurrent-video-analytic-pipeline-optimization-sample-l
Create a concurrent video analysis pipeline featuring multistream face and human pose detection, vehicle attribute detection, and the ability to encode multiple videos to local storage in a single stream.
Stars: ✭ 39 (-40.91%)
Mutual labels:  inference, openvino
r2inference
RidgeRun Inference Framework
Stars: ✭ 22 (-66.67%)
Mutual labels:  inference, inference-engine
Openvino Yolov3
YoloV3/tiny-YoloV3+RaspberryPi3/Ubuntu LaptopPC+NCS/NCS2+USB Camera+Python+OpenVINO
Stars: ✭ 500 (+657.58%)
Mutual labels:  cpu, deeplearning
Dawn Bench Entries
DAWNBench: An End-to-End Deep Learning Benchmark and Competition
Stars: ✭ 254 (+284.85%)
Mutual labels:  inference, deeplearning
Bmw Yolov4 Inference Api Gpu
This is a repository for a no-code object detection inference API using YOLOv3 and YOLOv4 with the Darknet framework.
Stars: ✭ 237 (+259.09%)
Mutual labels:  inference, deeplearning
pytorch YOLO OpenVINO demo
No description or website provided.
Stars: ✭ 73 (+10.61%)
Mutual labels:  openvino, openvino-toolkit
intruder-detector-python
Build an application that alerts you when someone enters a restricted area. Learn how to use models for multiclass object detection.
Stars: ✭ 16 (-75.76%)
Mutual labels:  inference, openvino
motor-defect-detector-python
Predict performance issues with manufacturing equipment motors. Perform local or cloud analytics of the issues found, and then display the data on a user interface to determine when failures might arise.
Stars: ✭ 24 (-63.64%)
Mutual labels:  inference, openvino
Neuropod
A uniform interface to run deep learning models from multiple frameworks
Stars: ✭ 858 (+1200%)
Mutual labels:  inference, deeplearning
Ncnn Benchmark
The benchmark of ncnn that is a high-performance neural network inference framework optimized for the mobile platform
Stars: ✭ 70 (+6.06%)
Mutual labels:  inference, deeplearning
object-flaw-detector-cpp
Detect various irregularities of a product as it moves along a conveyor belt.
Stars: ✭ 19 (-71.21%)
Mutual labels:  inference, openvino

OpenVINO Inference API

This is a repository for an object detection inference API using OpenVINO. It is supported on both Windows and Linux operating systems.

Models in Intermediate Representation (IR) format, converted using the Intel® OpenVINO™ toolkit v2021.1, can be deployed in this API. Currently, OpenVINO supports conversion for models trained in several machine learning frameworks, including Caffe and TensorFlow. Please refer to the OpenVINO documentation for further details on converting your model.
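
For illustration, converting a TensorFlow frozen graph with the Model Optimizer bundled with OpenVINO 2021.1 could look roughly like the following; the installation path, file names, and output folder are assumptions, and detection models may need additional framework-specific flags described in the OpenVINO documentation:

# Sketch only: the install path and file names below are placeholders for your own setup.
python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
    --input_model frozen_inference_graph.pb \
    --model_name model_1 \
    --output_dir models/model_1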

Prerequisites

  • OS:
    • Ubuntu 18.04
    • Windows 10 Pro/Enterprise
  • Docker

Check for prerequisites

To check if you have docker-ce installed:

docker --version

Install prerequisites

Ubuntu

Use the following command to install Docker on Ubuntu:

chmod +x install_prerequisites.sh && source install_prerequisites.sh

Windows 10

To install Docker on Windows, please follow the link.

P.S: For Windows users, open the Docker Desktop menu by clicking the Docker icon in the notifications area. Select Settings, and then the Advanced tab, to adjust the resources available to the Docker engine.

Build The Docker Image

In order to build the project run the following command from the project's root directory:

sudo docker build -t openvino_inference_api .

Behind a proxy

sudo docker build --build-arg http_proxy='' --build-arg https_proxy='' -t openvino_inference_api .

Run The Docker Container

If you wish to deploy this API using Docker, please issue the following run command.

To run the API, go to the API's directory and run the following:

Using Linux-based Docker:

sudo docker run -itv $(pwd)/models:/models -v $(pwd)/models_hash:/models_hash -p <docker_host_port>:80 openvino_inference_api

Using Windows-based Docker:

docker run -itv ${PWD}\models:/models -v ${PWD}\models_hash:/models_hash -p <docker_host_port>:80 openvino_inference_api

The <docker_host_port> can be any available port of your choice.

The API will start automatically, and the service will listen for HTTP requests on the chosen port.
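
As a quick sanity check, assuming the container was started with host port 4343 (an example port, not a requirement), you can verify that the service is reachable before opening the documentation page:

curl http://localhost:4343/models

If the container is running, the API answers instead of refusing the connection.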

API Endpoints

To see all available endpoints, open your favorite browser and navigate to:

http://<machine_IP>:<docker_host_port>/docs

Endpoints summary

/load (GET)

Loads all available models and returns each model with its hash value. Models that are already loaded are stored and aren't loaded again.

load model
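
For example, assuming the container is mapped to host port 4343, the models can be loaded from the command line:

curl -X GET http://localhost:4343/load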

/detect (POST)

Performs inference on an image using the specified model and returns the bounding boxes of the detected objects in JSON format.

detect image
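
A minimal sketch of calling this endpoint with curl is shown below; the host port and the form field names ("model" and "image") are assumptions, so check the /docs page for the exact request schema:

# Field names are assumptions -- verify the exact schema on the /docs page.
curl -X POST http://localhost:4343/detect \
    -F "model=model_1" \
    -F "image=@/path/to/test.jpg"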

/models/{model_name}/predict_image (POST)

Performs inference on an image using the specified model, draws bounding boxes on the image, and returns the resulting image as a response.

predict image
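
A hedged example of retrieving the annotated image with curl, assuming a model folder named model_1, host port 4343, and an "image" form field:

# The "image" form field name is an assumption -- see the /docs page for the exact schema.
curl -X POST http://localhost:4343/models/model_1/predict_image \
    -F "image=@/path/to/test.jpg" \
    --output result.jpg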

/models/{model_name}/config (GET)

Returns the model's configuration

config image

/models (GET)

Lists all the available models

/models/{model_name}/labels (GET)

Returns all the object labels of the model as a list
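
Assuming a model folder named model_1 and host port 4343, these read-only endpoints can be queried directly:

curl http://localhost:4343/models
curl http://localhost:4343/models/model_1/config
curl http://localhost:4343/models/model_1/labels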

/models/{model_name}/predict (POST)

Performs inference on a given image using the specified model and returns the bounding boxes of the detected objects as JSON.

P.S: If you are using endpoints like /detect or /predict_image, you should always call the /load endpoint first, and then use /detect.

Model structure

The folder "models" contains subfolders of all the models to be loaded. Inside each subfolder there should be a:

  • bin file (<your_converted_model>.bin): contains the model weights

  • xml file (<your_converted_model>.xml): describes the network topology

  • class file (classes.txt): contains the names of the object classes, one per line, in the format below:

        class1
        class2
        ...
    
  • config.json (a JSON file containing information about the model)

      {
          "inference_engine_name": "openvino_detection",
          "confidence": 60,
          "predictions": 15,
          "number_of_classes": 2,
          "framework": "openvino",
          "type": "detection",
          "network": "fasterrcnn"
      }

    P.S:

    • You can change the confidence and predictions values while the API is running
    • The API only returns bounding boxes whose confidence is higher than the "confidence" value; a higher "confidence" value yields fewer, more certain predictions

The "models" folder structure should be similar to as shown below:

│──models
  │──model_1
  │  │──<model_1>.bin
  │  │──<model_1>.xml
  │  │──classes.txt
  │  │──config.json
  │
  │──model_2
  │  │──<model_2>.bin
  │  │──<model_2>.xml
  │  │──classes.txt
  │  │──config.json
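
As a sketch, placing a converted model into this structure from the command line could look like the following (the file names are placeholders for your own model):

# Placeholders: replace <your_converted_model> with your model's file name.
mkdir -p models/model_1
cp <your_converted_model>.bin <your_converted_model>.xml models/model_1/
cp classes.txt config.json models/model_1/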

Using with Anonymization Api

In this section, docker-compose is used to build and run the OpenVINO Inference API alongside the Anonymization API.

To build and run both APIs together, clone the Anonymization API repository to your machine and replace its "/jsonFiles/url_configuration.json" with the file in the "/docker_anonymize" directory of this repo.

Two services are configured in the "docker-compose.yml" file in the "/docker_anonymize" directory: the OpenVINO Inference API and the Anonymization API.

You can modify the build context to specify the base directory of the Anonymization API (ensure the correct path is also given for the mounted volumes). You can also modify the host ports you wish to use for the APIs.
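
The preparation steps could look roughly like this; the Anonymization API repository URL and the local paths are assumptions, so substitute the actual locations you are using:

# Assumed repository URL and paths -- adjust to your setup.
git clone https://github.com/BMW-InnovationLab/BMW-Anonymization-API.git
cp docker_anonymize/url_configuration.json BMW-Anonymization-API/jsonFiles/url_configuration.json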

Now, run the following command in the "/docker_anonymize" directory of this repo:

docker-compose up

In the terminal, you should now see all the APIs running together.

Acknowledgements

OpenVINO Toolkit

intel.com

robotron.de
