AmusementClub / vs-mlrt

License: GPL-3.0
Efficient ML Filter Runtimes for VapourSynth (with built-in support for waifu2x, DPIR, RealESRGANv2, and Real-CUGAN)

Programming Languages

C++
36643 projects - #6 most used programming language
Python
139335 projects - #7 most used programming language
CMake
9771 projects

Projects that are alternatives to or similar to vs-mlrt

ONNX-Runtime-with-TensorRT-and-OpenVINO
Docker scripts for building ONNX Runtime with TensorRT and OpenVINO in manylinux environment
Stars: ✭ 15 (-55.88%)
Mutual labels:  tensorrt, onnx, openvino, onnxruntime
vs-realesrgan
Real-ESRGAN function for VapourSynth
Stars: ✭ 27 (-20.59%)
Mutual labels:  vapoursynth, onnxruntime, real-esrgan
vs-dpir
DPIR function for VapourSynth
Stars: ✭ 26 (-23.53%)
Mutual labels:  vapoursynth, onnxruntime, dpir
mtomo
Multiple types of NN model optimization environments. It is possible to directly access the host PC GUI and the camera to verify the operation. Intel iHD GPU (iGPU) support. NVIDIA GPU (dGPU) support.
Stars: ✭ 24 (-29.41%)
Mutual labels:  tensorrt, onnx, openvino
YOLOX
YOLOX is a high-performance anchor-free YOLO detector, exceeding YOLOv3–v5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO support. Documentation: https://yolox.readthedocs.io/
Stars: ✭ 6,570 (+19223.53%)
Mutual labels:  tensorrt, onnx, openvino
AI-Lossless-Zoomer
AI lossless image upscaling tool
Stars: ✭ 940 (+2664.71%)
Mutual labels:  waifu2x, real-esrgan
deepvac
PyTorch Project Specification.
Stars: ✭ 507 (+1391.18%)
Mutual labels:  tensorrt, onnx
torch-model-compression
An automated model-structure analysis and modification toolset for PyTorch models, including a model-compression algorithm library that analyzes model structures automatically
Stars: ✭ 126 (+270.59%)
Mutual labels:  tensorrt, onnx
mediapipe plus
The purpose of this project is to apply mediapipe to more AI chips.
Stars: ✭ 38 (+11.76%)
Mutual labels:  tensorrt, onnx
djl
An Engine-Agnostic Deep Learning Framework in Java
Stars: ✭ 3,080 (+8958.82%)
Mutual labels:  ml, onnxruntime
ros-yolo-sort
YOLO v3, v4, v5, v6, v7 + SORT tracking + ROS platform. Supporting: YOLO with Darknet, OpenCV(DNN), OpenVINO, TensorRT(tkDNN). SORT supports python(original) and C++. (Not Deep SORT)
Stars: ✭ 162 (+376.47%)
Mutual labels:  tensorrt, openvino
optimum
🏎️ Accelerate training and inference of 🤗 Transformers with easy-to-use hardware optimization tools
Stars: ✭ 567 (+1567.65%)
Mutual labels:  onnx, onnxruntime
fastT5
⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x.
Stars: ✭ 421 (+1138.24%)
Mutual labels:  onnx, onnxruntime
SynapseML
Simple and Distributed Machine Learning
Stars: ✭ 3,355 (+9767.65%)
Mutual labels:  ml, onnx
pytorch YOLO OpenVINO demo
No description or website provided.
Stars: ✭ 73 (+114.71%)
Mutual labels:  onnx, openvino
onnxruntime-rs
Rust wrapper for Microsoft's ONNX Runtime (version 1.8)
Stars: ✭ 149 (+338.24%)
Mutual labels:  onnx, onnxruntime
FAST-Pathology
⚡ Open-source software for deep learning-based digital pathology
Stars: ✭ 54 (+58.82%)
Mutual labels:  tensorrt, openvino
YOLOv5-Lite
🍅🍅🍅YOLOv5-Lite: lighter, faster, and easier to deploy. Evolved from YOLOv5; the model size is only 930+ KB (int8) and 1.7 MB (fp16). It can reach 10+ FPS on the Raspberry Pi 4B with a 320×320 input.
Stars: ✭ 1,230 (+3517.65%)
Mutual labels:  tensorrt, onnxruntime
Netron
Visualizer for neural network, deep learning, and machine learning models
Stars: ✭ 17,193 (+50467.65%)
Mutual labels:  ml, onnx
Deepstream Project
A highly decoupled deployment project based on DeepStream, covering the full range of YOLO models and a continuously expanding set of deployment targets such as OCR.
Stars: ✭ 120 (+252.94%)
Mutual labels:  tensorrt, onnx

vs-mlrt

VapourSynth ML filter runtimes.

Please see the wiki for supported models.

vsov: OpenVINO-based Pure CPU Runtime

OpenVINO is an AI inference runtime developed by Intel, mainly targeting x86 CPUs and Intel GPUs.

The vs-openvino plugin provides an optimized pure-CPU runtime for some popular AI filters; Intel GPU support is planned.

To install, download the latest release and extract the files into your VS plugins directory.

Please visit the vsov directory for details.
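
For example, these runtimes can be driven through the vsmlrt.py wrapper bundled with the releases. Below is a minimal sketch of a pure-CPU waifu2x invocation, assuming the wrapper's Waifu2x function and Backend.OV_CPU option as documented in the wiki (the source filter is arbitrary; lsmas is used here only for illustration):

```python
import vapoursynth as vs
from vsmlrt import Waifu2x, Waifu2xModel, Backend

core = vs.core

# The ML filters operate on 32-bit float RGB clips.
src = core.lsmas.LWLibavSource("input.mkv")
rgb = core.resize.Bicubic(src, format=vs.RGBS, matrix_in_s="709")

# Run waifu2x entirely on the CPU through the OpenVINO runtime (vsov).
out = Waifu2x(rgb, noise=-1, scale=2,
              model=Waifu2xModel.upconv_7_anime_style_art_rgb,
              backend=Backend.OV_CPU())

out.set_output()
```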

vsort: ONNX Runtime-based CPU/GPU Runtime

ONNX Runtime is an AI inference runtime with many backends.

The vs-onnxruntime plugin provides optimized CPU and CUDA GPU runtimes for some popular AI filters.

To install, download the latest release and extract the files into your VS plugins directory.

Please visit the vsort directory for details.
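
Switching runtimes is a matter of swapping the backend argument. A sketch of GPU inference with the same vsmlrt.py wrapper, assuming its DPIR function and Backend.ORT_CUDA option (parameter names per the wiki; device_id selects the GPU):

```python
import vapoursynth as vs
from vsmlrt import DPIR, DPIRModel, Backend

core = vs.core

src = core.lsmas.LWLibavSource("input.mkv")
rgb = core.resize.Bicubic(src, format=vs.RGBS, matrix_in_s="709")

# DPIR denoising on an NVIDIA GPU through the ONNX Runtime CUDA
# execution provider (vsort); strength controls denoising intensity.
out = DPIR(rgb, strength=5.0, model=DPIRModel.drunet_color,
           backend=Backend.ORT_CUDA(device_id=0))

out.set_output()
```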

vstrt: TensorRT-based GPU Runtime

TensorRT is a highly optimized AI inference runtime for NVIDIA GPUs. It benchmarks candidate kernels to find the optimal ones for your specific GPU, so there is an extra step: building an engine from the ONNX network on the machine where you will run the vstrt filter. This extra step makes deploying models slightly harder than with the other runtimes, but the resulting performance is typically much better than that of vsort's CUDA backend.

To install, download the latest release and extract the files into your VS plugins directory.

Please visit the vstrt directory for details.
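
The engine-build step need not be performed by hand: the vsmlrt.py wrapper's TRT backend can invoke TensorRT's trtexec to build an engine matched to the clip's dimensions and cache it for reuse. A sketch, assuming Backend.TRT and its fp16 option as described in the wiki:

```python
import vapoursynth as vs
from vsmlrt import Waifu2x, Waifu2xModel, Backend

core = vs.core

src = core.lsmas.LWLibavSource("input.mkv")
rgb = core.resize.Bicubic(src, format=vs.RGBS, matrix_in_s="709")

# The first run builds a TensorRT engine from the ONNX network
# (via trtexec) and caches it; subsequent runs reuse the engine.
out = Waifu2x(rgb, noise=-1, scale=2,
              model=Waifu2xModel.upconv_7_anime_style_art_rgb,
              backend=Backend.TRT(fp16=True, device_id=0))

out.set_output()
```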
