
NVIDIA-AI-IOT / deepstream_triton_model_deploy

License: Apache-2.0
How to deploy open source models using DeepStream and Triton Inference Server

Programming Languages

- C++
- Python
- Makefile
- Shell

Deploying an open source model using NVIDIA DeepStream and Triton Inference Server

This repository contains the code and configuration files required to deploy sample open source models for video analytics using Triton Inference Server and the DeepStream SDK 5.0.

Getting Started

Prerequisites:

DeepStream SDK 5.0, or use the Docker image nvcr.io/nvidia/deepstream:5.0.1-20.09-triton for x86, or nvcr.io/nvidia/deepstream-l4t:5.0-20.07-samples for NVIDIA Jetson.
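As a rough sketch of the container-based route, the snippet below picks the image tag listed above based on the host architecture and prints a typical `docker run` invocation. The `--gpus all` flag and the exact tags should be verified against the NGC container pages; this is illustrative, not an official launch command.

```shell
# Select the DeepStream Triton image listed in this README for the platform.
# (Tags are copied from the prerequisites above; confirm current tags on NGC.)
if [ "$(uname -m)" = "aarch64" ]; then
  DS_IMAGE="nvcr.io/nvidia/deepstream-l4t:5.0-20.07-samples"   # NVIDIA Jetson
else
  DS_IMAGE="nvcr.io/nvidia/deepstream:5.0.1-20.09-triton"      # x86
fi

# A typical interactive launch; add volume mounts for your model repository.
echo "docker run --gpus all -it --rm ${DS_IMAGE}"
```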

The following models have been deployed on DeepStream using Triton Inference Server.

For further details, please see each project's README.

TensorFlow Faster RCNN Inception V2 : README

The project shows how to deploy the TensorFlow Faster RCNN Inception V2 network, trained on the MSCOCO dataset, for object detection.
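Triton serves models from a versioned model repository on disk. The sketch below creates a plausible layout for the Faster RCNN model; the directory and file names here are illustrative placeholders, and the actual names and the accompanying `config.pbtxt` are defined in the project's README.

```shell
# Sketch of a Triton model repository layout (names are hypothetical).
# Each model directory holds a config.pbtxt and numbered version folders.
mkdir -p trtis_model_repo/faster_rcnn_inception_v2/1

# The frozen TensorFlow graph from the TF object detection model zoo would
# be placed in the version directory, e.g.:
#   trtis_model_repo/faster_rcnn_inception_v2/1/model.graphdef
# with a config.pbtxt in trtis_model_repo/faster_rcnn_inception_v2/
# describing the model's inputs and outputs.
ls trtis_model_repo/faster_rcnn_inception_v2
```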

ONNX CenterFace : README

The project shows how to deploy the ONNX CenterFace network for face detection and alignment.
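For an ONNX model like CenterFace, Triton needs a `config.pbtxt` describing the backend and tensor shapes. The fragment below is an illustrative sketch only: the tensor names and dimensions are placeholders, and the real values are given by the configuration files in the project's README.

```
# Illustrative Triton config.pbtxt for an ONNX model.
# Tensor names and dims are placeholders; see the project README.
name: "centerface"
platform: "onnxruntime_onnx"
max_batch_size: 1
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 480, 640 ]
  }
]
output [
  {
    name: "heatmap"
    data_type: TYPE_FP32
    dims: [ 1, 120, 160 ]
  }
]
```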

Additional resources:

Developer blog: Building Intelligent Video Analytics Apps Using NVIDIA DeepStream 5.0

Learn more about Triton Inference Server

Post your questions or feedback in the DeepStream SDK developer forums
