Neuropod: A uniform interface to run deep learning models from multiple frameworks
Bert Ner: PyTorch named-entity recognition with BERT
Turbotransformers: A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU
Xnnpack: High-efficiency floating-point neural network inference operators for mobile, server, and web
Multi Model Server: A tool for serving neural net models for inference
Awesome Emdl: Embedded and mobile deep learning research resources
Torchlayers: Shape and dimension inference (Keras-like) for PyTorch layers and neural networks
Lightseq: A high-performance inference library for sequence processing and generation
Io Ts: Runtime type system for IO decoding/encoding
Jetson Inference: Hello AI World guide to deploying deep learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson
Causaldiscoverytoolbox: Package for causal inference in graphs and in pairwise settings; tools for graph structure recovery and dependencies are included
Model Server: A scalable inference server for models optimized with OpenVINO™
Tensorflow Cmake: TensorFlow examples in C, C++, Go, and Python without Bazel, using CMake and FindTensorFlow.cmake
Cubert: Fast implementation of BERT inference directly on NVIDIA (CUDA, cuBLAS) and Intel MKL
Gfocal: Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection (NeurIPS 2020)
Cppflow: Run TensorFlow models in C++ without installation and without Bazel
Filetype.py: Small, dependency-free, fast Python package to infer binary file types by checking magic-number signatures
Chaidnn: HLS-based deep neural network accelerator library for Xilinx UltraScale+ MPSoCs
Tnn: A uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by several outstanding features, including cross-platform capability, high performance, model compression, and code pruning. Based on ncnn and Rapidnet, TNN further strengthens the support and …
Adversarial Robustness Toolbox: ART, a Python library for machine learning security (evasion, poisoning, extraction, inference) for red and blue teams
Amazon Sagemaker Examples: Example 📓 Jupyter notebooks that demonstrate how to build, train, and deploy machine learning models using 🧠 Amazon SageMaker
smfsb: Documentation, models, and code relating to the 3rd edition of the textbook Stochastic Modelling for Systems Biology
typeql: TypeQL, the query language of TypeDB, a strongly typed database
hoice: An ICE-based predicate synthesizer for Horn clauses
tf-cpp-pose-estimation: TensorFlow C++ examples for Visual Studio, featuring pose estimation and various techniques for using the TensorFlow C++ interface
causaldag: Python package for the creation, manipulation, and learning of causal DAGs
FATE-Serving: A scalable, high-performance serving system for federated learning models
JOCI: Ordinal common-sense inference
serving-runtime: Exposes a serialized machine learning model through an HTTP API
caffe: A fork of BVLC/Caffe dedicated to supporting the Cambricon deep learning processor and improving this framework's performance on the Machine Learning Unit (MLU)
forestError: A unified framework for random forest prediction error estimation
InferenceHelper: C++ helper class for deep learning inference frameworks: TensorFlow Lite, TensorRT, OpenCV, OpenVINO, ncnn, MNN, SNPE, Arm NN, NNabla, ONNX Runtime, LibTorch, and TensorFlow
pyinfer: A model-agnostic tool for ML developers and researchers to benchmark inference statistics for machine learning models or functions
arboreto: A scalable Python-based framework for gene regulatory network inference using tree-based ensemble regressors
optimum: 🏎️ Accelerate training and inference of 🤗 Transformers with easy-to-use hardware optimization tools
model analyzer: Triton Model Analyzer, a CLI tool for understanding the compute and memory requirements of Triton Inference Server models
inferelator: Task-based gene regulatory network inference using single-cell or bulk gene expression data conditioned on a prior network
FAST-Pathology: ⚡ Open-source software for deep learning-based digital pathology
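One entry above, Filetype.py, infers file types by checking magic-number signatures. The core technique can be sketched in a few lines of plain Python; the signature table and function name below are illustrative, not the library's actual API:

```python
from typing import Optional

# A few well-known magic-number signatures (illustrative subset,
# not filetype.py's internal table).
MAGIC_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
    b"GIF87a": "image/gif",
    b"GIF89a": "image/gif",
    b"%PDF-": "application/pdf",
}

def infer_mime(data: bytes) -> Optional[str]:
    """Return the MIME type whose signature prefixes `data`, if any."""
    for magic, mime in MAGIC_SIGNATURES.items():
        if data.startswith(magic):
            return mime
    return None

print(infer_mime(b"\x89PNG\r\n\x1a\n" + b"\x00" * 16))  # image/png
print(infer_mime(b"plain text"))                        # None
```

Because only the leading bytes are inspected, this approach works regardless of the file's extension, which is why such tools can classify renamed or extensionless files.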