
BMW-InnovationLab / Bmw Tensorflow Inference Api Gpu

License: Apache-2.0
This is a repository for an object detection inference API using the TensorFlow framework.

Programming Languages

python

Projects that are alternatives of or similar to Bmw Tensorflow Inference Api Gpu

Bmw Yolov4 Inference Api Cpu
This is a repository for a no-code object detection inference API using YOLOv4 and YOLOv3 with OpenCV.
Stars: ✭ 180 (-35.02%)
Mutual labels:  api, rest-api, object-detection, deep-neural-networks, inference
Bmw Tensorflow Inference Api Cpu
This is a repository for an object detection inference API using the Tensorflow framework.
Stars: ✭ 158 (-42.96%)
Mutual labels:  api, rest-api, object-detection, inference
Bmw Yolov4 Inference Api Gpu
This is a repository for a no-code object detection inference API using the YOLOv3 and YOLOv4 Darknet framework.
Stars: ✭ 237 (-14.44%)
Mutual labels:  api, rest-api, gpu, inference
Opentpod
Open Toolkit for Painless Object Detection
Stars: ✭ 106 (-61.73%)
Mutual labels:  object-detection, deep-neural-networks, tensorflow-models
Trainyourownyolo
Train a state-of-the-art yolov3 object detector from scratch!
Stars: ✭ 399 (+44.04%)
Mutual labels:  object-detection, gpu, inference
Tf trt models
TensorFlow models accelerated with NVIDIA TensorRT
Stars: ✭ 621 (+124.19%)
Mutual labels:  object-detection, nvidia, inference
Jetson Inference
Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
Stars: ✭ 5,191 (+1774.01%)
Mutual labels:  object-detection, nvidia, inference
Hey Jetson
Deep Learning based Automatic Speech Recognition with attention for the Nvidia Jetson.
Stars: ✭ 161 (-41.88%)
Mutual labels:  rest-api, deep-neural-networks, inference
Tensorflow Object Detection Tutorial
The purpose of this tutorial is to learn how to install and prepare the TensorFlow framework to train your own convolutional neural network object detection classifier for multiple objects, starting from scratch.
Stars: ✭ 113 (-59.21%)
Mutual labels:  object-detection, gpu, tensorflow-models
Deepdetect
Deep Learning API and Server in C++14, with support for Caffe, Caffe2, PyTorch, TensorRT, Dlib, NCNN, Tensorflow, XGBoost and TSNE
Stars: ✭ 2,306 (+732.49%)
Mutual labels:  rest-api, object-detection, gpu
Server
Serve your Rubix ML models in production with scalable stand-alone model inference servers.
Stars: ✭ 30 (-89.17%)
Mutual labels:  api, rest-api, inference
Deep Learning In Production
In this repository, I will share some useful notes and references about deploying deep learning-based models in production.
Stars: ✭ 3,104 (+1020.58%)
Mutual labels:  rest-api, deep-neural-networks, tensorflow-models
Deep Diamond
A fast Clojure Tensor & Deep Learning library
Stars: ✭ 288 (+3.97%)
Mutual labels:  nvidia, gpu, deep-neural-networks
Bmw Tensorflow Training Gui
This repository allows you to get started with GUI-based training of a state-of-the-art deep learning model with little to no configuration needed! No-code training with TensorFlow has never been so easy.
Stars: ✭ 736 (+165.7%)
Mutual labels:  rest-api, object-detection, deep-neural-networks
Keras object detection
Convert any classification model or architecture trained in keras to an object detection model
Stars: ✭ 28 (-89.89%)
Mutual labels:  api, object-detection, gpu
Realtime object detection
Plug and Play Real-Time Object Detection App with Tensorflow and OpenCV. No Bugs No Worries. Enjoy!
Stars: ✭ 260 (-6.14%)
Mutual labels:  api, object-detection, deep-neural-networks
Quora Api
An unofficial API for Quora.
Stars: ✭ 250 (-9.75%)
Mutual labels:  api, rest-api
Jcabi Github
Object Oriented Wrapper of Github API
Stars: ✭ 252 (-9.03%)
Mutual labels:  api, rest-api
Http Fake Backend
Build a fake backend by providing the content of JSON files or JavaScript objects through configurable routes.
Stars: ✭ 253 (-8.66%)
Mutual labels:  api, rest-api
nn-Meter
A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices.
Stars: ✭ 211 (-23.83%)
Mutual labels:  inference, tensorflow-models

Tensorflow GPU Inference API

This is a repository for an object detection inference API using the TensorFlow framework.

This repo is based on the TensorFlow Object Detection API.

The TensorFlow version used is 1.13.1. The inference REST API runs on the GPU and is only supported on Linux operating systems.

Models trained using our TensorFlow training repository can be deployed in this API. Several object detection models can be loaded and used at the same time.

This repo can be deployed using either docker or docker swarm.

Please use docker swarm only if you need to:

  • Provide redundancy in terms of API containers: if a container goes down, incoming requests are redirected to another running instance.

  • Coordinate between the containers: Swarm orchestrates the API replicas and chooses one of them to handle each incoming request.

  • Scale up the inference service to get faster predictions, especially when there is heavy traffic on the service.

If none of the aforementioned requirements are needed, simply use docker.


Prerequisites

  • Ubuntu 18.04
  • NVIDIA Drivers (410.x or higher)
  • Docker CE latest stable release
  • NVIDIA Docker 2

Check for prerequisites

To check if you have docker-ce installed:

docker --version

To check if you have nvidia-docker installed:

nvidia-docker --version

To check your NVIDIA driver version, open your terminal and type the command nvidia-smi


Install prerequisites

Use the following command to install docker on Ubuntu:

chmod +x install_prerequisites.sh && source install_prerequisites.sh

Install NVIDIA Drivers (410.x or higher) and NVIDIA Docker for GPU by following the official docs

Build The Docker Image

To build the project, run the following command from the project's root directory:

sudo docker build -t tensorflow_inference_api_gpu -f docker/dockerfile .

Behind a proxy

sudo docker build --build-arg http_proxy='' --build-arg https_proxy='' -t tensorflow_inference_api_gpu -f ./docker/dockerfile .

Run the docker container

As mentioned before, this container can be deployed using either docker or docker swarm.

If you wish to deploy this API using docker, please issue the following run command.

If you wish to deploy this API using docker swarm, please refer to the docker swarm documentation. After deploying the API with docker swarm, please return to this documentation for further information about the API endpoints and the model structure.

To run the API, go to the API's directory and run the following:

Using Linux-based docker:

sudo NV_GPU=0 nvidia-docker run -itv $(pwd)/models:/models -v $(pwd)/models_hash:/models_hash -p <docker_host_port>:4343 tensorflow_inference_api_gpu

The <docker_host_port> can be any unique port of your choice.

The API will start automatically, and the service will listen for HTTP requests on the chosen port.

NV_GPU defines which GPU the API runs on. If you want the API to run on multiple GPUs, enter multiple numbers separated by commas (NV_GPU=0,1 for example).
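Once the container is running, you can verify that the service is up by querying the /models endpoint described below. A minimal Python sketch, assuming the host port was mapped to 4343 and the requests package is installed:

    import requests

    # list the models currently available to the API
    response = requests.get("http://localhost:4343/models")
    print(response.status_code)   # 200 when the service is up
    print(response.json())        # names of the available models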

API Endpoints

To see all available endpoints, open your favorite browser and navigate to:

http://<machine_IP>:<docker_host_port>/docs

The 'predict_batch' endpoint is not shown on Swagger, since the list-of-files input is not yet supported there.

P.S.: If you are using the custom endpoints /load, /detect, and /get_labels, always call /load first, and only then /detect or /get_labels.
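A minimal sketch of this ordering, assuming the API listens on localhost:4343 (the exact request schemas for /detect and /get_labels are shown on the /docs page):

    import requests

    base = "http://localhost:4343"   # adjust host and port to your deployment

    # /load must be called first: it loads all models and returns their hashes
    models = requests.get(f"{base}/load").json()
    print(models)

    # only afterwards should /detect or /get_labels be called; see /docs for
    # the exact request bodies these two endpoints expect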

Endpoints summary

/load (GET)

Loads all available models and returns each model with its hashed value. Loaded models are cached and aren't loaded again.


/detect (POST)

Performs inference with the specified model on an image and returns the bounding boxes.


/get_labels (POST)

Returns all of the specified model labels with their hashed values


/models/{model_name}/predict_image (POST)

Performs inference with the specified model on an image, draws the bounding boxes on the image, and returns the annotated image as the response.

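A hedged sketch of calling this endpoint with Python; the model name "my_model" and the multipart field name "input_data" are assumptions, so verify both on the /docs page:

    import requests

    url = "http://localhost:4343/models/my_model/predict_image"
    with open("test.jpg", "rb") as image:
        response = requests.post(url, files={"input_data": image})

    # the response body is the image itself with the boxes drawn on it
    with open("result.jpg", "wb") as result:
        result.write(response.content)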

/models (GET)

Lists all available models

/models/{model_name}/load (GET)

Loads the specified model. Loaded models are cached and aren't loaded again.

/models/{model_name}/predict (POST)

Performs inference with the specified model on an image and returns the bounding boxes.
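As with predict_image, a sketch of the call (placeholder model name and assumed field name, to be checked against /docs):

    import requests

    url = "http://localhost:4343/models/my_model/predict"
    with open("test.jpg", "rb") as image:
        response = requests.post(url, files={"input_data": image})
    print(response.json())   # bounding boxes with classes and confidences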

/models/{model_name}/labels (GET)

Returns all of the specified model labels

/models/{model_name}/config (GET)

Returns the specified model's configuration

/models/{model_name}/predict_batch (POST)

Performs inference with the specified model on a list of images and returns the bounding boxes.
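Since a list-of-files input cannot be sent from Swagger, a client-side sketch may help; the field name "images" is an assumption, so check the /docs page or the source:

    import requests

    url = "http://localhost:4343/models/my_model/predict_batch"
    # repeat the same field name once per image to build a multipart list
    files = [
        ("images", open("img1.jpg", "rb")),
        ("images", open("img2.jpg", "rb")),
    ]
    response = requests.post(url, files=files)
    print(response.json())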


Model structure

The folder "models" contains subfolders of all the models to be loaded. Inside each subfolder there should be a:

  • pb file (frozen_inference_graph.pb): contains the model weights

  • pbtxt file (object-detection.pbtxt): contains model classes

  • Config.json: a JSON file containing information about the model, for example:

      {
          "inference_engine_name": "tensorflow_detection",
          "confidence": 60,
          "predictions": 15,
          "number_of_classes": 2,
          "framework": "tensorflow",
          "type": "detection",
          "network": "inception"
      }
    

    P.S.:

    • You can change the confidence and predictions values while the API is running
    • The API returns only bounding boxes with a confidence higher than the "confidence" value; a higher "confidence" restricts the response to more accurate predictions
    • The "predictions" value specifies the maximum number of bounding boxes in the API response

Benchmarking

All times are in seconds per image; the first Intel Xeon column was measured on Windows, the remaining columns on Ubuntu.

Network \ Hardware | Intel Xeon CPU 2.3 GHz (Windows) | Intel Xeon CPU 2.3 GHz (Ubuntu) | Intel Xeon CPU 3.60 GHz (Ubuntu) | GeForce GTX 1080 (Ubuntu)
ssd_fpn            | 0.867                            | 1.016                           | 0.434                            | 0.0658
frcnn_resnet_50    | 4.029                            | 4.219                           | 1.994                            | 0.148
ssd_mobilenet      | 0.055                            | 0.106                           | 0.051                            | 0.052
frcnn_resnet_101   | 4.469                            | 4.985                           | 2.254                            | 0.364
ssd_resnet_50      | 1.34                             | 1.462                           | 0.668                            | 0.091
ssd_inception      | 0.094                            | 0.15                            | 0.074                            | 0.0513
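These figures can be roughly reproduced with a timing loop against the predict endpoint; note that this measures the full HTTP round trip, not just model inference (placeholder names as in the sketches above):

    import time
    import requests

    url = "http://localhost:4343/models/my_model/predict"
    runs = 20
    start = time.time()
    for _ in range(runs):
        with open("test.jpg", "rb") as image:
            requests.post(url, files={"input_data": image})
    print(f"{(time.time() - start) / runs:.3f} seconds/image")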

Acknowledgment

inmind.ai

robotron.de

Joe Sleiman, inmind.ai, Beirut, Lebanon

Antoine Charbel, inmind.ai, Beirut, Lebanon
