ENOT-AutoDL / ONNX-Runtime-with-TensorRT-and-OpenVINO

License: Apache-2.0
Docker scripts for building ONNX Runtime with TensorRT and OpenVINO in manylinux environment

Programming Languages

  • C++
  • Python
  • Shell
  • CUDA

Projects that are alternatives to or similar to ONNX-Runtime-with-TensorRT-and-OpenVINO

vs-mlrt
Efficient ML Filter Runtimes for VapourSynth (with built-in support for waifu2x, DPIR, RealESRGANv2, and Real-CUGAN)
Stars: ✭ 34 (+126.67%)
Mutual labels:  tensorrt, onnx, openvino, onnxruntime
Torch-TensorRT
PyTorch/TorchScript compiler for NVIDIA GPUs using TensorRT
Stars: ✭ 1,216 (+8006.67%)
Mutual labels:  cuda, nvidia, tensorrt
Tengine
Tengine is a lightweight, high-performance, modular inference engine for embedded devices
Stars: ✭ 4,012 (+26646.67%)
Mutual labels:  cuda, tensorrt, onnx
mtomo
Multiple types of NN model optimization environments. The host PC GUI and camera can be accessed directly to verify operation. Supports Intel iHD GPU (iGPU) and NVIDIA GPU (dGPU).
Stars: ✭ 24 (+60%)
Mutual labels:  tensorrt, onnx, openvino
YOLOX
YOLOX is a high-performance anchor-free YOLO, exceeding yolov3~v5 with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported. Documentation: https://yolox.readthedocs.io/
Stars: ✭ 6,570 (+43700%)
Mutual labels:  tensorrt, onnx, openvino
mediapipe plus
The purpose of this project is to apply mediapipe to more AI chips.
Stars: ✭ 38 (+153.33%)
Mutual labels:  tensorrt, onnx
InsightFace-REST
InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker.
Stars: ✭ 308 (+1953.33%)
Mutual labels:  tensorrt, onnx
onnxruntime-rs
Rust wrapper for Microsoft's ONNX Runtime (version 1.8)
Stars: ✭ 149 (+893.33%)
Mutual labels:  onnx, onnxruntime
yolov5 tensorrt int8 tools
TensorRT INT8 quantization of YOLOv5 ONNX models
Stars: ✭ 105 (+600%)
Mutual labels:  tensorrt, onnx
isaac ros dnn inference
Hardware-accelerated DNN model inference ROS2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU
Stars: ✭ 67 (+346.67%)
Mutual labels:  nvidia, tensorrt
YOLOv5-Lite
🍅🍅🍅YOLOv5-Lite: lighter, faster and easier to deploy. Evolved from yolov5; the model size is only 930+ KB (INT8) or 1.7 MB (FP16). It can reach 10+ FPS on the Raspberry Pi 4B with a 320×320 input size.
Stars: ✭ 1,230 (+8100%)
Mutual labels:  tensorrt, onnxruntime
FAST-Pathology
⚡ Open-source software for deep learning-based digital pathology
Stars: ✭ 54 (+260%)
Mutual labels:  tensorrt, openvino
ros-yolo-sort
YOLO v3, v4, v5, v6, v7 + SORT tracking + ROS platform. Supporting: YOLO with Darknet, OpenCV(DNN), OpenVINO, TensorRT(tkDNN). SORT supports python(original) and C++. (Not Deep SORT)
Stars: ✭ 162 (+980%)
Mutual labels:  tensorrt, openvino
torch-model-compression
An automated model structure analysis and modification toolset for PyTorch models, including a library of model compression algorithms based on automatic structure analysis
Stars: ✭ 126 (+740%)
Mutual labels:  tensorrt, onnx
deepvac
PyTorch Project Specification.
Stars: ✭ 507 (+3280%)
Mutual labels:  tensorrt, onnx
yolov5-deepsort-tensorrt
A C++ implementation of YOLOv5 and DeepSORT
Stars: ✭ 207 (+1280%)
Mutual labels:  nvidia, tensorrt
play with tensorrt
Sample projects for TensorRT in C++
Stars: ✭ 39 (+160%)
Mutual labels:  nvidia, tensorrt
lane detection
Lane detection for the Nvidia Jetson TX2 using OpenCV4Tegra
Stars: ✭ 15 (+0%)
Mutual labels:  cuda, nvidia
pytorch YOLO OpenVINO demo
No description or website provided.
Stars: ✭ 73 (+386.67%)
Mutual labels:  onnx, openvino
fastT5
⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x.
Stars: ✭ 421 (+2706.67%)
Mutual labels:  onnx, onnxruntime

ONNX Runtime with TensorRT and OpenVINO

Docker scripts for building ONNX Runtime with TensorRT and OpenVINO in manylinux environment.

Supports x86_64 and aarch64 (JetPack) architectures.

Build requirements

Place the CUDA (.run), cuDNN (.tar.gz), and TensorRT (.tar.gz) distribution files into the distrib folder; an example layout is shown below.
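
For reference, a populated distrib folder might look like the listing below. The file names are illustrative only; the actual versions depend on the packages you download from NVIDIA:

distrib/
├── cuda_11.4.3_470.82.01_linux.run
├── cudnn-11.4-linux-x64-v8.2.4.15.tar.gz
└── TensorRT-8.4.0.6.Linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz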

Building

Run the following command in your terminal:

bash docker-run.sh

The built wheels will be placed into the wheelhouse folder, one per Python target; an example listing follows.
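
For example (the wheel names below are hypothetical; exact names depend on the ONNX Runtime version, Python targets, and platform tag):

wheelhouse/
├── onnxruntime_gpu-1.11.0-cp38-cp38-manylinux_2_17_x86_64.whl
└── onnxruntime_gpu-1.11.0-cp39-cp39-manylinux_2_17_x86_64.whl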

Customization

  • To specify the Python versions for which wheels are built, edit the PYTHON_TARGETS variable in docker-run.sh.
  • To change the number of parallel build threads, edit the THREADS_NUM variable in docker-run.sh. A sketch of both variables follows this list.
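
A minimal sketch of how these variables might appear inside docker-run.sh; the variable names come from this README, but the values and the PYTHON_TARGETS format are illustrative assumptions:

PYTHON_TARGETS="cp38-cp38 cp39-cp39"  # assumed format: build wheels for Python 3.8 and 3.9
THREADS_NUM=8                         # assumed value: number of parallel build threads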

Usage

Wheels compiled for the x86_64 architecture depend on the following packages from the NVIDIA repository:

  • nvidia-cudnn (8.2)
  • nvidia-tensorrt (8.4)
  • nvidia-curand (10.2)
  • nvidia-cufft (10.5)

and on openvino (2021.4) from the standard PyPI repository.
The compiled wheels do not declare the NVIDIA packages as explicit pip dependencies, so you need to install them yourself, for example with the following commands:

pip install nvidia-cuda-runtime-cu114 nvidia-cudnn-cu114 nvidia-cufft-cu114 nvidia-curand-cu114 nvidia-cublas-cu114 --extra-index-url https://pypi.ngc.nvidia.com
pip install nvidia-tensorrt==8.4.0.6 --no-deps --extra-index-url https://pypi.ngc.nvidia.com
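
After installing, a quick sanity check (illustrative) is to list the NVIDIA packages pip can see:

pip list | grep -i nvidia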

The recommended way to install this ONNX Runtime package is to use our install.sh script, which installs ONNX Runtime with all dependencies automatically.

Install GPU version (with all NVIDIA dependencies):

wget -O - https://raw.githubusercontent.com/ENOT-AutoDL/ONNX-Runtime-with-TensorRT-and-OpenVINO/master/install.sh | bash

Install CPU-only version (without NVIDIA packages, use this version if your target device has no GPU):

wget -O - https://raw.githubusercontent.com/ENOT-AutoDL/ONNX-Runtime-with-TensorRT-and-OpenVINO/master/install.sh | bash -s -- -t CPU
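
Whichever variant you install, you can check which execution providers the installed package exposes; for the GPU build you would expect TensorrtExecutionProvider, CUDAExecutionProvider, and OpenVINOExecutionProvider to appear in the output:

python -c "import onnxruntime; print(onnxruntime.get_available_providers())"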