
GValiente / Pocket Tensor

Licence: other
Run Keras models from a C++ application on embedded devices

Programming Languages: cpp11 (221 projects)

Projects that are alternatives to or similar to Pocket Tensor

Cocoos
A cooperative operating system based on coroutines
Stars: ✭ 50 (-23.08%)
Mutual labels:  embedded
Couchbase Lite C
C language bindings for the Couchbase Lite embedded NoSQL database engine
Stars: ✭ 58 (-10.77%)
Mutual labels:  embedded
Unisimd Assembler
SIMD macro assembler unified for ARM, MIPS, PPC and x86
Stars: ✭ 63 (-3.08%)
Mutual labels:  simd
Wasmjit
Small Embeddable WebAssembly Runtime
Stars: ✭ 1,063 (+1535.38%)
Mutual labels:  embedded
Qtwebserver
Qt based web application server
Stars: ✭ 56 (-13.85%)
Mutual labels:  embedded
Spiffs
Wear-leveled SPI flash file system for embedded devices
Stars: ✭ 1,105 (+1600%)
Mutual labels:  embedded
Debootstick
Generate a bootable live image from any Debian/Ubuntu filesystem tree.
Stars: ✭ 48 (-26.15%)
Mutual labels:  embedded
Incubator Nuttx Apps
Apache NuttX Apps is a collection of tools, shells, network utilities, libraries, and interpreters that can be used with the NuttX RTOS
Stars: ✭ 65 (+0%)
Mutual labels:  embedded
Reed Solomon
Reed Solomon BCH encoder and decoder
Stars: ✭ 57 (-12.31%)
Mutual labels:  embedded
Expr
Fast and lightweight math expression evaluator in C99
Stars: ✭ 61 (-6.15%)
Mutual labels:  embedded
Mylinux
myLinux is a small UNIX like OS for embedded systems based on Westermo NetBox
Stars: ✭ 53 (-18.46%)
Mutual labels:  embedded
Gdbstub
A simple, dependency-free GDB stub that can be easily dropped in to your project.
Stars: ✭ 56 (-13.85%)
Mutual labels:  embedded
Awesome Embedded Linux
A curated list of awesome Embedded Linux resources.
Stars: ✭ 60 (-7.69%)
Mutual labels:  embedded
Webfsd
A simple HTTP server for mostly static content written in C
Stars: ✭ 50 (-23.08%)
Mutual labels:  embedded
Claudb
ClauDB is a REDIS implementation in Java
Stars: ✭ 64 (-1.54%)
Mutual labels:  embedded
Punchboot
Punchboot
Stars: ✭ 49 (-24.62%)
Mutual labels:  embedded
Gobusybox
Tools for compiling many Go commands into one binary to save space. Builds are supported in vendor-based Go, module-based Go, and bazel with Starlark.
Stars: ✭ 60 (-7.69%)
Mutual labels:  embedded
Stm32l4xx Hal
A Hardware abstraction layer for the stm32l432xx series chips written in rust.
Stars: ✭ 65 (+0%)
Mutual labels:  embedded
Simavr
simavr is a lean, mean and hackable AVR simulator for linux & OSX
Stars: ✭ 1,135 (+1646.15%)
Mutual labels:  embedded
Str
A SIMD optimized fixed-length string class along with an adaptive hash table for fast searching
Stars: ✭ 60 (-7.69%)
Mutual labels:  simd

pocket-tensor

pocket-tensor is a fork of arquolo's Kerasify designed for running trained Keras models from a C++ application on embedded devices.

Discontinued

This project is no longer under development.

If you want to run trained Keras models from a C++ application, try TensorFlow Lite and frugally-deep.

Design goals

  • Compatibility with sequential networks generated by Keras 2.x using the TensorFlow backend.
  • Multithread CPU support.
  • Low RAM usage.
  • Easy to build and run (no external dependencies).
  • Fast build times.

Improvements over Kerasify

  • Thanks to the awesome libsimdpp library, tensor operations have been rewritten using SIMD instructions to improve prediction performance.
  • Predictions run across multiple CPU cores.
  • Memory (re)usage has been improved to reduce memory allocations.
  • Besides single precision (float), double precision tensors are supported (see the pt_tweakme.h file).
  • Tensor dimensions are rigorously validated at each layer to catch incompatible models early.
  • Besides GCC and Clang, the Visual Studio compiler is properly supported.

Hardware requirements

Since there is no GPU support, pocket-tensor relies on the CPU, and by default it requires the following SIMD instruction sets:

  • ARM: NEON with floating point support.
  • x86: AVX.

The required SIMD instruction sets are specified in the pt_tweakme.h file, so they can be changed with ease.
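As a rough illustration of what such a configuration header looks like, the sketch below selects the instruction set via libsimdpp's real SIMDPP_ARCH_* macros; the structure and the PT_DOUBLE_PRECISION switch are hypothetical, so check the bundled pt_tweakme.h for the actual options:

```cpp
// Hypothetical sketch of a pt_tweakme.h-style header, not the shipped file.
#ifndef PT_TWEAKME_H
#define PT_TWEAKME_H

// Select the SIMD instruction set consumed by libsimdpp:
#if defined(__arm__) || defined(__aarch64__)
    #define SIMDPP_ARCH_ARM_NEON_FLT_SP // NEON with floating point support
#else
    #define SIMDPP_ARCH_X86_AVX // AVX on x86
#endif

// Uncomment to switch tensors from float to double (hypothetical macro):
// #define PT_DOUBLE_PRECISION

#endif
```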

Software requirements

Since a copy of libsimdpp comes bundled with this library, there are no external dependencies; the only software requirements are a C++11-compatible compiler and CMake >= 3.4.

pocket-tensor has been tested with these compilers:

  • GCC 4.9.
  • MSVC 2017.
  • Whatever Clang comes with Apple LLVM 9.1.0.
  • Whatever Clang comes with Android Studio 3.1.3 (see Android section).

How to build

A CMakeLists.txt is provided with this library, so to use it you only need to include this file in your CMake project.
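A minimal consumer project might look like this (the target name my_app, the source file and the pocket-tensor path are placeholders for your own project):

```cmake
cmake_minimum_required(VERSION 3.4)
project(my_app)

# Pull in pocket-tensor's own CMakeLists.txt (placeholder path):
add_subdirectory(/path/to/pocket-tensor pocket-tensor)

add_executable(my_app main.cpp)
target_link_libraries(my_app pocket-tensor)
```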

To build and run the unit tests, you need to generate them first:

python make_tests.py
mkdir tests_build
cd tests_build
cmake -DPT_BUILD_TESTS=ON -DCMAKE_BUILD_TYPE=Release ..
make
./tests/pocket-tensor-tests

Usage

  1. Use Keras to build (model.compile(...)) and train (model.fit(...)) your model as usual.

  2. Now convert it to the pocket-tensor file format with pt.export_model(model, 'example.model').

  3. Finally, load it in C++ (pt::Model::create("example.model")) and use model->predict(...) to perform a prediction with your data.

The following example shows the full workflow:

# make_model.py:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from pt import export_model

test_x = np.random.rand(10, 10).astype('f')
test_y = np.random.rand(10).astype('f')

model = Sequential()
model.add(Dense(1, input_dim=10))

model.compile(loss='mean_squared_error', optimizer='adamax')
model.fit(test_x, test_y, epochs=1)

print(model.predict(np.array([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]])))

export_model(model, 'example.model')
// main.cpp:

#include <iostream>
#include "pt_model.h"
#include "pt_tensor.h"

int main()
{
    // Initialize model:
    auto model = pt::Model::create("example.model");
    // REQUIRE(model);

    // Create input tensor:
    pt::Tensor in(10);
    in.setData({0, 1, 2, 3, 4, 5, 6, 7, 8, 9});

    // Run prediction:
    pt::Tensor out;
    bool success = model->predict(std::move(in), out);
    // REQUIRE(success);
	
    // Print output:
    std::cout << out << std::endl;
    return 0;
}

Supported layer types

The most common layer types used in image recognition and sequences prediction are supported, making many popular model architectures possible:

  • Core: Input, Dense, Flatten, RepeatVector.
  • Convolutional: Conv1D, Conv2D.
  • Pooling: MaxPooling2D, GlobalMaxPooling2D.
  • Locally-connected: LocallyConnected1D.
  • Recurrent: LSTM.
  • Embedding: Embedding.
  • Normalization: BatchNormalization.
  • Activations: Linear, ReLU, ELU, SELU, Softplus, Softsign, Tanh, Sigmoid, HardSigmoid, Softmax.
  • Advanced activations: LeakyReLU, ELU.

Performance

A benchmark application is included with this library. To build and run it:

mkdir benchmark_build
cd benchmark_build
cmake -DPT_BUILD_BENCHMARK=ON -DCMAKE_BUILD_TYPE=Release ..
make
./benchmark/pocket-tensor-benchmark

The prediction time of the following models has been measured on a PC with an Intel Core i7-6500U CPU @ 2.50GHz and on a Raspberry Pi 3:

MNIST CNN

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='sigmoid'))
Library              PC elapsed time (μs)   RPi3 elapsed time (μs)
Keras                1470                   23363
arquolo's Kerasify   3502                   64238
frugally-deep        1402                   29298
pocket-tensor        1049                   27329

IMDB LSTM

model = Sequential()
model.add(Embedding(20000, 128))
model.add(LSTM(128, return_sequences=True, dropout=0.2, recurrent_dropout=0.2))
model.add(LSTM(128, return_sequences=False, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
Library              PC elapsed time (μs)   RPi3 elapsed time (μs)
Keras                10160                  89344
arquolo's Kerasify   5378                   79060
frugally-deep        Not supported          Not supported
pocket-tensor        3314                   67115

Android

pocket-tensor supports Android apps (armeabi-v7a ABI only).

To add pocket-tensor to an Android project with C++ support, you must:

  1. Enable ARM NEON instructions in the build.gradle project file (https://developer.android.com/ndk/guides/cmake):
android {
    ...
    defaultConfig {
        ...
        externalNativeBuild {
            cmake {
                arguments "-DANDROID_ARM_NEON=TRUE"
            }
        }
    }
}
  2. Disable all ABIs except armeabi-v7a in the build.gradle project file (https://developer.android.com/studio/build/configure-apk-splits):
android {
    ...
    splits {
        abi {
            enable true
            reset()
            include "armeabi-v7a"
        }
    }
}
  3. Include pocket-tensor in the CMakeLists.txt file of your native library:
add_subdirectory(/path/to/pocket-tensor pocket-tensor)
target_link_libraries(native-lib pocket-tensor)