Deploying an open source model using NVIDIA DeepStream and Triton Inference Server
This repository contains the code and configuration files required to deploy sample open source models for video analytics using Triton Inference Server and the DeepStream SDK 5.0.
Getting Started
Prerequisites:
DeepStream SDK 5.0, or use the Docker image nvcr.io/nvidia/deepstream:5.0.1-20.09-triton for x86, or nvcr.io/nvidia/deepstream-l4t:5.0-20.07-samples for NVIDIA Jetson.
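As a rough sketch, the x86 container named above can be pulled and launched along the following lines. The GPU and display-forwarding flags are assumptions about a typical host setup, not part of this repository; adjust them for your environment.

```shell
# Pull the DeepStream 5.0 Triton container for x86 (tag taken from the
# prerequisites above)
docker pull nvcr.io/nvidia/deepstream:5.0.1-20.09-triton

# Start an interactive session with GPU access; the X11 volume mount and
# DISPLAY variable are only needed if you want the on-screen display sink
# (assumed host-specific settings)
docker run --gpus all -it --rm \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=$DISPLAY \
  nvcr.io/nvidia/deepstream:5.0.1-20.09-triton
```

Once inside the container, the DeepStream samples and Triton backends are already installed, so the projects below can be built and run without further SDK setup.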
The following models have been deployed on DeepStream using Triton Inference Server.
For further details, please see each project's README.
TensorFlow Faster RCNN Inception V2: This project shows how to deploy the TensorFlow Faster RCNN Inception V2 network, trained on the MSCOCO dataset, for object detection.
ONNX CenterFace: This project shows how to deploy the ONNX CenterFace network for face detection and alignment.
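Both projects follow Triton's model repository convention: DeepStream's nvinferserver plugin points at a directory tree in which each model has a config.pbtxt and one or more numbered version subdirectories. A minimal illustrative layout and configuration for the TensorFlow model is sketched below; the directory names, graph filename, and tensor names here are placeholder assumptions in the style of the TensorFlow Object Detection API, not values taken from this repository.

```
trtis_model_repo/
└── faster_rcnn_inception_v2/
    ├── config.pbtxt
    └── 1/
        └── frozen_inference_graph.pb   # assumed filename
```

```
# config.pbtxt (illustrative sketch, not the project's actual configuration)
name: "faster_rcnn_inception_v2"
platform: "tensorflow_graphdef"
max_batch_size: 1
input [
  {
    name: "image_tensor"        # assumed input tensor name
    data_type: TYPE_UINT8
    dims: [ -1, -1, 3 ]
  }
]
```

Each project's README documents the actual repository layout, tensor names, and the DeepStream nvinferserver configuration that references it.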
Additional resources:
Developer blog: Building Intelligent Video Analytics Apps Using NVIDIA DeepStream 5.0
Learn more about Triton Inference Server
Post your questions or feedback in the DeepStream SDK developer forums