triton-inference-server / model_analyzer

License: Apache-2.0
Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of models served by Triton Inference Server.

Programming Languages

Python
139,335 projects (#7 most used programming language)
Shell
77,523 projects

Projects that are alternatives to or similar to model_analyzer

Barracuda-PoseNet-Tutorial
This tutorial series provides step-by-step instructions for how to perform human pose estimation in Unity with the Barracuda inference library.
Stars: ✭ 53 (-63.19%)
Mutual labels:  inference
JMeter-Charts
Application used to generate reports by uploading a JTL file to a REST server
Stars: ✭ 19 (-86.81%)
Mutual labels:  performance-analysis
spark-ml-serving
Spark ML Lib serving library
Stars: ✭ 49 (-65.97%)
Mutual labels:  inference
benchmark-trend
Measure performance trends of Ruby code
Stars: ✭ 60 (-58.33%)
Mutual labels:  performance-analysis
tacc stats
TACC Stats is an automated resource-usage monitoring and analysis package.
Stars: ✭ 36 (-75%)
Mutual labels:  performance-analysis
compile-time-perf
Measures high-level timing and memory usage metrics during compilation
Stars: ✭ 64 (-55.56%)
Mutual labels:  performance-analysis
fast-fomm-mobile
Compressing the First Order Motion Model for Image Animation to enable real-time inference on mobile devices
Stars: ✭ 25 (-82.64%)
Mutual labels:  inference
performance-decorator
🏇 User behavior & function execution tracking solution: user-behavior tracking, function call-chain analysis, and shared, reusable breakpoint debugging for large front-end projects
Stars: ✭ 39 (-72.92%)
Mutual labels:  performance-analysis
safety-gear-detector-python
Observe workers as they pass in front of a camera to determine if they have adequate safety protection.
Stars: ✭ 54 (-62.5%)
Mutual labels:  inference
BMW-IntelOpenVINO-Segmentation-Inference-API
This is a repository for a semantic segmentation inference API using the OpenVINO toolkit
Stars: ✭ 31 (-78.47%)
Mutual labels:  inference
performance-timing
performance-timing.js uses the HTML5 Navigation Timing API to collect front-end performance data; it is a handy tool for building a performance monitoring platform
Stars: ✭ 56 (-61.11%)
Mutual labels:  performance-analysis
woodwork
Woodwork is a Python library that provides robust methods for managing and communicating data typing information.
Stars: ✭ 97 (-32.64%)
Mutual labels:  inference
vuex-context
Write fully type-inferred Vuex modules
Stars: ✭ 11 (-92.36%)
Mutual labels:  inference
performance-budget-plugin
Performance budget plugin for webpack (https://webpack.js.org/)
Stars: ✭ 65 (-54.86%)
Mutual labels:  performance-analysis
BMW-IntelOpenVINO-Detection-Inference-API
This is a repository for a no-code object detection inference API using OpenVINO. It is supported on both Windows and Linux operating systems.
Stars: ✭ 66 (-54.17%)
Mutual labels:  inference
ReactiveMP.jl
Julia package for automatic Bayesian inference on a factor graph with reactive message passing
Stars: ✭ 58 (-59.72%)
Mutual labels:  inference
sagemaker-sparkml-serving-container
This code is used to build & run a Docker container for performing predictions against a Spark ML Pipeline.
Stars: ✭ 44 (-69.44%)
Mutual labels:  inference
inferelator
Task-based gene regulatory network inference using single-cell or bulk gene expression data conditioned on a prior network.
Stars: ✭ 24 (-83.33%)
Mutual labels:  inference
FAST-Pathology
⚡ Open-source software for deep learning-based digital pathology
Stars: ✭ 54 (-62.5%)
Mutual labels:  inference
concurrent-video-analytic-pipeline-optimization-sample-l
Create a concurrent video analysis pipeline featuring multistream face and human pose detection, vehicle attribute detection, and the ability to encode multiple videos to local storage in a single stream.
Stars: ✭ 39 (-72.92%)
Mutual labels:  inference


Triton Model Analyzer

LATEST RELEASE: You are currently on the main branch which tracks under-development progress towards the next release. The latest release of the Triton Model Analyzer is 1.16.0 and is available on branch r22.05.

Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of models served by Triton Inference Server. It profiles your models and generates reports that make the trade-offs between different configurations clear, helping you choose a configuration that maximizes the performance of Triton Inference Server.
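
As a concrete starting point, a typical run looks like the sketch below. The repository paths and the model name are placeholders; the profile subcommand and the --model-repository, --profile-models, and --output-model-repository-path flags follow the Model Analyzer documentation, but options vary between releases, so check model-analyzer profile --help for your version.

# Profile a model from an existing Triton model repository.
# The paths and the model name are placeholders.
model-analyzer profile \
    --model-repository /path/to/model_repository \
    --profile-models my_model \
    --output-model-repository-path /path/to/output_repository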

Features

  • Automatic and manual configuration search: Model Analyzer can automatically find optimal settings for the Max Batch Size, Dynamic Batching, and Instance Group parameters of your model configuration. It uses Performance Analyzer to test the model under different request concurrencies and batch sizes. With Manual Config Search, you can instead define explicit sweeps over any parameter that can be specified in the model configuration (see the config sketch after this list).

  • Detailed and summary reports: Model Analyzer can generate summary and detailed reports that help you understand the trade-offs between the different model configurations available for your model.

  • QoS constraints: Constraints let you filter Model Analyzer's results based on your QoS requirements. For example, you can specify a latency budget to filter out model configurations that do not satisfy a given latency threshold (also shown in the sketch below).
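
To make the search and constraint options above concrete, here is a minimal sketch of a config-file driven run. The key names (model_repository, profile_models, run_config_search_max_concurrency, run_config_search_max_instance_count, constraints, perf_latency_p99) follow the 1.x Model Analyzer configuration documentation, but they have changed between releases, so treat the sketch as illustrative and consult the configuration docs for your version.

# Write a minimal Model Analyzer config; key names follow the 1.x docs
# and may differ in other releases.
cat > config.yaml <<'EOF'
model_repository: /path/to/model_repository   # placeholder path

profile_models:
  - my_model                                  # placeholder model name

# Bound the automatic config search over concurrency and instance count.
run_config_search_max_concurrency: 16
run_config_search_max_instance_count: 3

# QoS constraint: discard configurations whose p99 latency exceeds 100 ms.
constraints:
  perf_latency_p99:
    max: 100
EOF

# Run the profiling step against that config.
model-analyzer profile -f config.yaml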

Documentation

Reporting problems, asking questions

We appreciate any feedback, questions, or bug reports regarding this project. When you need help with code, follow the process outlined in the Stack Overflow guide on minimal reproducible examples (https://stackoverflow.com/help/mcve). Ensure posted examples are:

  • minimal – use as little code as possible that still produces the same problem

  • complete – provide all parts needed to reproduce the problem. Check whether you can strip out external dependencies and still show the problem. The less time we spend reproducing problems, the more time we have to fix them.

  • verifiable – test the code you're about to provide to make sure it reproduces the problem. Remove any other problems that are not related to your request or question.
