xboot / Libonnx

License: MIT
A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support.

Programming Languages

C
50402 projects - #5 most used programming language

Projects that are alternatives of or similar to Libonnx

Szl
A lightweight, embeddable scripting language
Stars: ✭ 134 (-38.25%)
Mutual labels:  embedded-systems, library, lightweight, embedded
Fbg
Lightweight C 2D graphics API agnostic library with parallelism support
Stars: ✭ 349 (+60.83%)
Mutual labels:  library, lightweight, embedded
Multi Model Server
Multi Model Server is a tool for serving neural net models for inference
Stars: ✭ 770 (+254.84%)
Mutual labels:  ai, inference, onnx
Libcanard
A compact implementation of the UAVCAN/CAN protocol in C for high-integrity real-time embedded systems
Stars: ✭ 151 (-30.41%)
Mutual labels:  embedded-systems, embedded
Statecharts
YAKINDU Statechart Tools (http://www.statecharts.org)
Stars: ✭ 145 (-33.18%)
Mutual labels:  embedded-systems, embedded
Jeelizfacefilter
Javascript/WebGL lightweight face tracking library designed for augmented reality webcam filters. Features : multiple faces detection, rotation, mouth opening. Various integration examples are provided (Three.js, Babylon.js, FaceSwap, Canvas2D, CSS3D...).
Stars: ✭ 2,042 (+841.01%)
Mutual labels:  library, lightweight
Lib Python
Blynk IoT library for Python and Micropython
Stars: ✭ 140 (-35.48%)
Mutual labels:  library, embedded
Awesome Embedded Rust
Curated list of resources for Embedded and Low-level development in the Rust programming language
Stars: ✭ 2,805 (+1192.63%)
Mutual labels:  embedded-systems, embedded
Deeply
PHP client for the DeepL.com translation API (unofficial)
Stars: ✭ 152 (-29.95%)
Mutual labels:  ai, library
React Indiana Drag Scroll
React component which implements scrolling via holding the mouse button or touch
Stars: ✭ 190 (-12.44%)
Mutual labels:  library, lightweight
Netron
Visualizer for neural network, deep learning, and machine learning models
Stars: ✭ 17,193 (+7823.04%)
Mutual labels:  ai, onnx
Ncnn
ncnn is a high-performance neural network inference framework optimized for the mobile platform
Stars: ✭ 13,376 (+6064.06%)
Mutual labels:  inference, onnx
Onnxt5
Summarization, translation, sentiment-analysis, text-generation and more at blazing speed using a T5 version implemented in ONNX.
Stars: ✭ 143 (-34.1%)
Mutual labels:  inference, onnx
Kingly
Zero-cost state-machine library for robust, testable and portable user interfaces (most machines compile ~1-2KB)
Stars: ✭ 147 (-32.26%)
Mutual labels:  library, portable
Wolfssh
wolfSSH is a small, fast, portable SSH implementation, including support for SCP and SFTP.
Stars: ✭ 142 (-34.56%)
Mutual labels:  embedded, portable
Emlearn
Machine Learning inference engine for Microcontrollers and Embedded devices
Stars: ✭ 154 (-29.03%)
Mutual labels:  embedded-systems, inference
Mmlspark
Simple and Distributed Machine Learning
Stars: ✭ 2,899 (+1235.94%)
Mutual labels:  ai, onnx
Depthai
DepthAI Python API utilities, examples, and tutorials.
Stars: ✭ 203 (-6.45%)
Mutual labels:  ai, embedded
Jsontreeviewer
json formatter/viewer/pretty-printer (with jsonTree javascript-library)
Stars: ✭ 211 (-2.76%)
Mutual labels:  library, lightweight
Hfsm2
High-Performance Hierarchical Finite State Machine Framework
Stars: ✭ 134 (-38.25%)
Mutual labels:  embedded-systems, embedded

Libonnx

A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support.

Getting Started

The library's .c and .h files can be dropped into a project and compiled along with it. Before use, a struct onnx_context_t * must be allocated; you can optionally pass an array of struct resolver_t * to enable hardware acceleration.

The filename argument is the path to the ONNX model file.

struct onnx_context_t * ctx = onnx_context_alloc_from_file(filename, NULL, 0);
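For hardware acceleration, the last two arguments take an array of struct resolver_t * pointers and its length. A minimal sketch, where my_resolver stands in for a hypothetical platform-specific resolver you would implement yourself:

/* my_resolver is hypothetical: a resolver you implement for your hardware */
extern struct resolver_t my_resolver;

struct resolver_t * resolvers[] = { &my_resolver };
struct onnx_context_t * ctx = onnx_context_alloc_from_file(filename, resolvers, 1);

Passing NULL and 0, as in the example above, falls back to the library's built-in pure-C operator implementations.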

Then you can get the input and output tensors using the onnx_tensor_search function.

struct onnx_tensor_t * input = onnx_tensor_search(ctx, "input-tensor-name");
struct onnx_tensor_t * output = onnx_tensor_search(ctx, "output-tensor-name");
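How the input is filled depends on its data type. A minimal sketch, assuming a float32 input tensor and that struct onnx_tensor_t exposes its element buffer as datas and its element count as ndata (verify these field names against onnx.h in your copy of the library):

/* Assumes a float32 input; the datas and ndata field names are assumptions */
float * buf = (float *)input->datas;
for(size_t i = 0; i < input->ndata; i++)
	buf[i] = 0.0f; /* replace with your real input samples */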

Once the input tensor has been set, you can run the inference engine using the onnx_run function; the result will be placed into the output tensor.

onnx_run(ctx);

Finally, you must free the struct onnx_context_t * using the onnx_context_free function.

onnx_context_free(ctx);
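Putting the steps together, a minimal end-to-end program might look like the sketch below. The model path, tensor names, and the float32 tensor types are placeholders for your own model, and the header name is assumed to be onnx.h:

#include <stdio.h>
#include <onnx.h> /* assumed header name; adjust the include path to your tree */

int main(void)
{
	struct onnx_context_t * ctx = onnx_context_alloc_from_file("model.onnx", NULL, 0);
	if(!ctx)
	{
		fprintf(stderr, "failed to load model\n");
		return 1;
	}

	struct onnx_tensor_t * input = onnx_tensor_search(ctx, "input-tensor-name");
	struct onnx_tensor_t * output = onnx_tensor_search(ctx, "output-tensor-name");

	if(input)
	{
		/* Placeholder input; assumes a float32 tensor, see the sketch above */
		float * buf = (float *)input->datas;
		for(size_t i = 0; i < input->ndata; i++)
			buf[i] = 0.0f;
	}

	onnx_run(ctx);

	/* Read results before freeing; assumes a float32 output tensor */
	if(output && output->ndata > 0)
		printf("output[0] = %f\n", ((float *)output->datas)[0]);

	onnx_context_free(ctx);
	return 0;
}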

Examples

Just type make in the root directory; this builds a static library along with example and test binaries that demonstrate usage.

cd libonnx
make

Notes

This library is based on ONNX version 1.8.0, with support for the newest opset 13. The supported operator table can be found in the documents directory.

License

This library is free software; you can redistribute it and/or modify it under the terms of the MIT license. See MIT License for details.
