1. server: The Triton Inference Server provides an optimized cloud and edge inferencing solution.
4. model_analyzer: Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of models served by Triton Inference Server.
5. python_backend: A Triton backend that lets you implement pre-processing, post-processing, and other logic in Python.
6. backend: Common source code, scripts, and utilities for creating Triton backends.