openvinotoolkit / Model_server

License: apache-2.0
A scalable inference server for models optimized with OpenVINO™

Projects that are alternatives to or similar to Model Server

Server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
Stars: ✭ 2,994 (+594.66%)
Mutual labels:  cloud, inference, edge
object-size-detector-python
Monitor mechanical bolts as they move down a conveyor belt. When a bolt of an irregular size is detected, this solution emits an alert.
Stars: ✭ 26 (-93.97%)
Mutual labels:  inference, edge
intruder-detector-python
Build an application that alerts you when someone enters a restricted area. Learn how to use models for multiclass object detection.
Stars: ✭ 16 (-96.29%)
Mutual labels:  inference, edge
sagemaker-sparkml-serving-container
This code is used to build & run a Docker container for performing predictions against a Spark ML Pipeline.
Stars: ✭ 44 (-89.79%)
Mutual labels:  inference, serving
Deep Learning In Production
Develop production ready deep learning code, deploy it and scale it
Stars: ✭ 216 (-49.88%)
Mutual labels:  ai, cloud
Models
Model Zoo for Intel® Architecture: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors
Stars: ✭ 248 (-42.46%)
Mutual labels:  ai, inference
safety-gear-detector-python
Observe workers as they pass in front of a camera to determine if they have adequate safety protection.
Stars: ✭ 54 (-87.47%)
Mutual labels:  inference, edge
Pai
Resource scheduling and cluster management for AI
Stars: ✭ 2,223 (+415.78%)
Mutual labels:  ai, cloud
spark-ml-serving
Spark ML Lib serving library
Stars: ✭ 49 (-88.63%)
Mutual labels:  inference, serving
serving-runtime
Exposes a serialized machine learning model through a HTTP API.
Stars: ✭ 15 (-96.52%)
Mutual labels:  inference, serving
object-flaw-detector-python
Detect various irregularities of a product as it moves along a conveyor belt.
Stars: ✭ 17 (-96.06%)
Mutual labels:  inference, edge
Libonnx
A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support.
Stars: ✭ 217 (-49.65%)
Mutual labels:  ai, inference
Dl inference
A general-purpose deep learning inference service for quickly deploying models trained with TensorFlow, PyTorch, or Caffe into production.
Stars: ✭ 209 (-51.51%)
Mutual labels:  ai, inference
object-flaw-detector-cpp
Detect various irregularities of a product as it moves along a conveyor belt.
Stars: ✭ 19 (-95.59%)
Mutual labels:  inference, edge
Microsoft Student Partner Workshop Learning Materials Ai Nlp
This repository contains all code and materials for the current session, covering Natural Language Processing and Artificial Intelligence.
Stars: ✭ 187 (-56.61%)
Mutual labels:  ai, cloud
motor-defect-detector-python
Predict performance issues with manufacturing equipment motors. Perform local or cloud analytics of the issues found, and then display the data on a user interface to determine when failures might arise.
Stars: ✭ 24 (-94.43%)
Mutual labels:  inference, edge
K3sup
bootstrap Kubernetes with k3s over SSH < 1 min 🚀
Stars: ✭ 4,012 (+830.86%)
Mutual labels:  cloud, edge
Maixpy
MicroPython for K210 RISC-V, let's play with edge AI easier
Stars: ✭ 1,065 (+147.1%)
Mutual labels:  ai, edge
Blockerized Dockchain
Because all problems are solvable with containers and blockchains
Stars: ✭ 77 (-82.13%)
Mutual labels:  ai, cloud
concurrent-video-analytic-pipeline-optimization-sample-l
Create a concurrent video analysis pipeline featuring multistream face and human pose detection, vehicle attribute detection, and the ability to encode multiple videos to local storage in a single stream.
Stars: ✭ 39 (-90.95%)
Mutual labels:  inference, edge

OpenVINO™ Model Server

OVMS picture

OpenVINO™ Model Server (OVMS) is a scalable, high-performance solution for serving machine learning models optimized for Intel® architectures. The server provides an inference service via gRPC or REST API, making it easy to deploy new algorithms and AI experiments using the same architecture as TensorFlow Serving, for any model trained in a framework supported by OpenVINO.

The server implements the gRPC and REST APIs, with data serialization and deserialization based on the TensorFlow Serving API, and uses OpenVINO™ as the inference execution provider. Model repositories may reside on a locally accessible file system (e.g. NFS), Google Cloud Storage (GCS), Amazon S3, MinIO, or Azure Blob Storage.
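A model repository follows a versioned directory convention: each model directory contains numbered version subdirectories holding the OpenVINO IR files. A minimal sketch (the model name and paths below are illustrative):

```
models/
└── resnet/              # served under a model name such as "resnet"
    ├── 1/               # each numeric subdirectory is one model version
    │   ├── model.xml    # OpenVINO IR topology
    │   └── model.bin    # OpenVINO IR weights
    └── 2/
        ├── model.xml
        └── model.bin
```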

OVMS is now implemented in C++ and scales much better than its Python-based predecessor. You can take full advantage of Intel® Xeon® CPUs or AI accelerators and expose them over a network interface. Read the release notes to find out what's new in the C++ version.

Review the Architecture concept document for more details.

A few key features:

Note: OVMS has been tested on CentOS* and Ubuntu*. Publicly released docker images are based on CentOS.

Run OpenVINO Model Server

A demonstration of how to use OpenVINO Model Server can be found in the quick start guide.
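As a rough sketch of what the quick start covers, the server can be started from the published Docker image. The image tag, ports, paths, and model name below are assumptions for a local deployment; refer to the quick start guide for the exact commands:

```shell
# Pull the published image (CentOS-based, per the note above)
docker pull openvino/model_server:latest

# Serve a model from a local repository; 9000 = gRPC, 8000 = REST.
docker run -d --rm -v /opt/models:/models \
  -p 9000:9000 -p 8000:8000 \
  openvino/model_server:latest \
  --model_path /models/resnet --model_name resnet \
  --port 9000 --rest_port 8000
```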

More detailed guides to using Model Server in various scenarios can be found here:

API documentation

gRPC

Learn more about the gRPC API

Refer to the gRPC example client code to learn how to submit requests over the gRPC interface.

REST

Learn more about the REST API

Refer to the REST API example client code to learn how to use the REST API.
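As an illustration of the TensorFlow Serving-compatible REST API, a Predict request can be built with the Python standard library alone. The host, port, and model name here are assumptions for a local deployment:

```python
import json
from urllib import request

# Assumed local endpoint; adjust host, port, and model name for your deployment.
url = "http://localhost:8000/v1/models/resnet:predict"

# TensorFlow Serving "row" format: one list entry per input instance.
payload = {"instances": [[0.0, 0.1, 0.2, 0.3]]}
body = json.dumps(payload).encode("utf-8")

req = request.Request(url, data=body,
                      headers={"Content-Type": "application/json"})

# Uncomment once a model server is actually running:
# with request.urlopen(req) as resp:
#     predictions = json.loads(resp.read())["predictions"]
```

The response body contains a `predictions` array with one entry per submitted instance.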

Testing

Learn more about tests in the developer guide

Known Limitations

  • Currently, the Predict, GetModelMetadata, and GetModelStatus calls are implemented using the TensorFlow Serving API.
  • Classify, Regress, and MultiInference are not included.
  • output_filter is not effective in the Predict call; all outputs defined in the model are returned to the clients.

OpenVINO Model Server Contribution Policy

  • All contributed code must be compatible with the Apache 2 license.

  • All changes must pass style, unit, and functional tests.

  • All new features need to be covered by tests.

Follow the contributor guide and the developer guide.

References

Contact

Submit a GitHub issue to ask a question, request a feature, or report a bug.


* Other names and brands may be claimed as the property of others.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].