
NVIDIA / TensorRT Laboratory

License: BSD-3-Clause
Explore the Capabilities of the TensorRT Platform

Projects that are alternatives to or similar to TensorRT Laboratory

Tensorflow Cmake
TensorFlow examples in C, C++, Go and Python without bazel but with cmake and FindTensorFlow.cmake
Stars: ✭ 418 (+77.12%)
Mutual labels:  inference, cuda
Adlik
Adlik: Toolkit for Accelerating Deep Learning Inference
Stars: ✭ 237 (+0.42%)
Mutual labels:  grpc, inference
Lightseq
LightSeq: A High Performance Inference Library for Sequence Processing and Generation
Stars: ✭ 501 (+112.29%)
Mutual labels:  inference, cuda
Cubert
Fast implementation of BERT inference directly on NVIDIA (CUDA, CUBLAS) and Intel MKL
Stars: ✭ 395 (+67.37%)
Mutual labels:  inference, cuda
Forward
A library for high performance deep learning inference on NVIDIA GPUs.
Stars: ✭ 136 (-42.37%)
Mutual labels:  inference, cuda
Grpc Over Webrtc
gRPC over WebRTC
Stars: ✭ 220 (-6.78%)
Mutual labels:  grpc
Pytorch Hed
a reimplementation of Holistically-Nested Edge Detection in PyTorch
Stars: ✭ 228 (-3.39%)
Mutual labels:  cuda
Libonnx
A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support.
Stars: ✭ 217 (-8.05%)
Mutual labels:  inference
Tigre
TIGRE: Tomographic Iterative GPU-based Reconstruction Toolbox
Stars: ✭ 215 (-8.9%)
Mutual labels:  cuda
Anycable Go
Anycable Go WebSocket Server
Stars: ✭ 234 (-0.85%)
Mutual labels:  grpc
Cudnn Training
A minimal cuDNN deep learning training code sample using LeNet.
Stars: ✭ 231 (-2.12%)
Mutual labels:  cuda
Nvidia Modded Inf
Modified nVidia .inf files to run drivers on all video cards, research & telemetry free drivers
Stars: ✭ 227 (-3.81%)
Mutual labels:  cuda
Softmax Splatting
an implementation of softmax splatting for differentiable forward warping using PyTorch
Stars: ✭ 218 (-7.63%)
Mutual labels:  cuda
Tengine
Tengine is a lightweight, high-performance, modular inference engine for embedded devices
Stars: ✭ 4,012 (+1600%)
Mutual labels:  cuda
Relion
Image-processing software for cryo-electron microscopy
Stars: ✭ 219 (-7.2%)
Mutual labels:  cuda
Occa
JIT Compilation for Multiple Architectures: C++, OpenMP, CUDA, HIP, OpenCL, Metal
Stars: ✭ 230 (-2.54%)
Mutual labels:  cuda
Nicehashquickminer
Super simple & easy Windows 10 cryptocurrency miner made by NiceHash.
Stars: ✭ 211 (-10.59%)
Mutual labels:  cuda
Tnn
TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop and server. TNN is distinguished by several outstanding features, including its cross-platform capability, high performance, model compression and code pruning. Based on ncnn and Rapidnet, TNN further strengthens the support and …
Stars: ✭ 3,257 (+1280.08%)
Mutual labels:  inference
Optix Pathtracer
Simple physically based path tracer based on NVIDIA's OptiX Ray Tracing Engine
Stars: ✭ 231 (-2.12%)
Mutual labels:  cuda
Deepspeech
DeepSpeech neon implementation
Stars: ✭ 223 (-5.51%)
Mutual labels:  cuda

TensorRT Laboratory

The TensorRT Laboratory (trtlab) is a general-purpose set of tools for building custom inference applications and services.

For a professional-grade, production inference server, see NVIDIA Triton Inference Server.

This project is broken into five primary components:

  • memory is based on foonathan/memory. The memory module was designed to make it easy to write custom allocators for both host and GPU memory; several custom allocators are included.

  • core contains host/CPU-side tools for common components such as thread pools, resource pools, and userspace threading based on Boost.Fiber.

  • cuda extends memory with a new memory_type for CUDA device memory. All custom allocators in memory can be used with device_memory, device_managed_memory, or host_pinned_memory (see the sketch after this list).

  • nvrpc is an abstraction layer for building asynchronous microservices. The current implementation is based on gRPC.

  • tensorrt provides an opinionated runtime built on the TensorRT API.
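
The distinction between the three memory types is easiest to see in raw CUDA. The sketch below uses the plain CUDA runtime API, not the trtlab allocator interface, to illustrate what device_memory, device_managed_memory, and host_pinned_memory correspond to; everything other than the CUDA calls themselves is illustrative.

// Minimal sketch of the three CUDA memory types (plain CUDA runtime API,
// not the trtlab allocator interface). Compile with: nvcc memory_types.cu
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t bytes = 1 << 20;              // 1 MiB

    void* device = nullptr;                    // device_memory: resident on the GPU
    cudaMalloc(&device, bytes);

    void* managed = nullptr;                   // device_managed_memory: unified memory
    cudaMallocManaged(&managed, bytes);        // that migrates between host and device

    void* pinned = nullptr;                    // host_pinned_memory: page-locked host
    cudaMallocHost(&pinned, bytes);            // memory for fast, async transfers

    // Pinned host memory is what makes cudaMemcpyAsync truly asynchronous.
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    cudaMemcpyAsync(device, pinned, bytes, cudaMemcpyHostToDevice, stream);
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaFreeHost(pinned);
    cudaFree(managed);
    cudaFree(device);
    std::printf("done\n");
    return 0;
}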

Quickstart

The easiest way to manage the external NVIDIA dependencies is to leverage the containers hosted on NGC. For bare-metal installs, use the Dockerfile as a template for which NVIDIA libraries to install.

docker build -t trtlab . 

For development purposes, the following set of commands first builds the base image, then maps the source code on the host into a running container.

docker build -t trtlab:dev --target base .
docker run --rm -ti --gpus=all -v $PWD:/work --workdir=/work --net=host trtlab:dev bash
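
Inside the development container, a typical out-of-source CMake build would then look something like the following; treat this as a sketch, since the exact configure options depend on the project's CMakeLists.txt.

mkdir -p build && cd build
cmake ..
make -j$(nproc)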

Copyright and License

This project is released under the BSD 3-clause license.

Issues and Contributing

Pull requests with changes of 10 lines or more will require a Contributor License Agreement.
