
iwatake2222 / play_with_tflite

License: Apache-2.0
Sample projects for TensorFlow Lite in C++ with delegates such as GPU, EdgeTPU, XNNPACK, NNAPI

Programming Languages

C++
36643 projects - #6 most used programming language
CMake
9771 projects
Java
68154 projects - #9 most used programming language
C
50402 projects - #5 most used programming language
Jupyter Notebook
11667 projects
Shell
77523 projects

Projects that are alternatives to or similar to play_with_tflite

Age-Gender Estimation TF-Android
Age + Gender Estimation on Android with TensorFlow Lite
Stars: ✭ 34 (-84.68%)
Mutual labels:  tensorflow-lite
meta-st-stm32mpu-ai
This repository contains the OpenEmbedded meta layer to install AI frameworks and tools for the STM32MP1
Stars: ✭ 32 (-85.59%)
Mutual labels:  tensorflow-lite
tensorflow-vast
TensorFlow binding library for VA Smalltalk
Stars: ✭ 13 (-94.14%)
Mutual labels:  tensorflow-lite
face-detection-tflite
Face and iris detection for Python based on MediaPipe
Stars: ✭ 78 (-64.86%)
Mutual labels:  tensorflow-lite
glDelegateBench
quick and dirty inference time benchmark for TFLite gles delegate
Stars: ✭ 17 (-92.34%)
Mutual labels:  tensorflow-lite
AgroDocRevamp
Agro Doc is basically an app that will help farmers easily pinpoint their crop diseases using their smartphones. The app uses a pre-trained TensorFlow model to identify issues and then suggests possible cures for the crop infections/diseases. #AndroidDevChallenge
Stars: ✭ 21 (-90.54%)
Mutual labels:  tensorflow-lite
FaceIDLight
A lightweight face-recognition toolbox and pipeline based on tensorflow-lite
Stars: ✭ 17 (-92.34%)
Mutual labels:  tensorflow-lite
E2E-tfKeras-TFLite-Android
End to end training MNIST image classifier with tf.Keras, convert to TFLite and deploy to Android
Stars: ✭ 17 (-92.34%)
Mutual labels:  tensorflow-lite
mtomo
Multiple types of NN model optimization environments. It is possible to directly access the host PC GUI and the camera to verify the operation. Intel iHD GPU (iGPU) support. NVIDIA GPU (dGPU) support.
Stars: ✭ 24 (-89.19%)
Mutual labels:  edgetpu
Face-Recognition-Flutter
Realtime face recognition with Flutter
Stars: ✭ 111 (-50%)
Mutual labels:  tensorflow-lite
tensorflow-yolov4
YOLOv4 implemented in TensorFlow 2.
Stars: ✭ 136 (-38.74%)
Mutual labels:  edgetpu
android tflite
GPU Accelerated TensorFlow Lite applications on Android NDK. Higher accuracy face detection, Age and gender estimation, Human pose estimation, Artistic style transfer
Stars: ✭ 105 (-52.7%)
Mutual labels:  tensorflow-lite
glDelegateBenchmark
quick and dirty benchmark for TFLite gles delegate on iOS
Stars: ✭ 13 (-94.14%)
Mutual labels:  tensorflow-lite
TPU-MobilenetSSD
Edge TPU Accelerator / Multi-TPU + MobileNet-SSD v2 + Python + Async + LattePandaAlpha/RaspberryPi3/LaptopPC
Stars: ✭ 82 (-63.06%)
Mutual labels:  tensorflow-lite
CRNN.tf2
Convolutional Recurrent Neural Network (CRNN) for End-to-End Text Recognition - TensorFlow 2
Stars: ✭ 131 (-40.99%)
Mutual labels:  tensorflow-lite
ova-server
OpenVisionAPI server
Stars: ✭ 93 (-58.11%)
Mutual labels:  tensorflow-lite
rpi-urban-mobility-tracker
The easiest way to count pedestrians, cyclists, and vehicles on edge computing devices or live video feeds.
Stars: ✭ 75 (-66.22%)
Mutual labels:  tensorflow-lite
Keras-Android-XOR
How to run a Keras model on Android using the TensorFlow API.
Stars: ✭ 32 (-85.59%)
Mutual labels:  tensorflow-lite
coral-pi-rest-server
Perform inferencing of tensorflow-lite models on an RPi with acceleration from Coral USB stick
Stars: ✭ 49 (-77.93%)
Mutual labels:  tensorflow-lite
TensorFlow Lite SSD RPi 64-bits
TensorFlow Lite SSD on bare Raspberry Pi 4 with 64-bit OS at 24 FPS
Stars: ✭ 25 (-88.74%)
Mutual labels:  tensorflow-lite

Play with tflite

  • Sample projects for using TensorFlow Lite in C++ on multiple platforms
  • A typical project structure is shown in the following diagram
    • 00_doc/design.jpg

Target

  • Platform
    • Linux (x64)
    • Linux (armv7)
    • Linux (aarch64)
    • Android (aarch64)
    • Windows (x64), Visual Studio 2019
  • Delegate
    • Edge TPU
    • XNNPACK
    • GPU
    • NNAPI (CPU, GPU, DSP)

Usage

./main [input]

 - input = blank
    - use the default image file specified in the source code (main.cpp)
    - e.g. ./main
 - input = *.mp4, *.avi, *.webm
    - use a video file
    - e.g. ./main test.mp4
 - input = *.jpg, *.png, *.bmp
    - use an image file
    - e.g. ./main test.jpg
 - input = number (e.g. 0, 1, 2, ...)
    - use a camera
    - e.g. ./main 0
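
For reference, the dispatch on the input argument could look roughly like the sketch below. This is only an illustrative sketch based on OpenCV, not the actual main.cpp; the default image path and the extension checks here are assumptions.

#include <cstdlib>
#include <string>
#include <opencv2/opencv.hpp>

// Illustrative sketch only; the real main.cpp may differ.
// kDefaultImage stands in for the default image file set in the source code.
static constexpr char kDefaultImage[] = "resource/test.jpg";

static bool HasSuffix(const std::string& s, const std::string& suffix) {
    return s.size() >= suffix.size() &&
           s.compare(s.size() - suffix.size(), suffix.size(), suffix) == 0;
}

int main(int argc, char* argv[]) {
    const std::string input = (argc > 1) ? argv[1] : kDefaultImage;
    if (HasSuffix(input, ".jpg") || HasSuffix(input, ".png") || HasSuffix(input, ".bmp")) {
        cv::Mat frame = cv::imread(input);               // image file
        // ... run the image processor on this single frame ...
    } else if (HasSuffix(input, ".mp4") || HasSuffix(input, ".avi") || HasSuffix(input, ".webm")) {
        cv::VideoCapture cap(input);                     // video file
        // ... loop over frames ...
    } else {
        cv::VideoCapture cap(std::atoi(input.c_str()));  // camera index, e.g. 0
        // ... loop over frames ...
    }
    return 0;
}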

How to build a project

0. Requirements

  • OpenCV 4.x

1. Download

  • Download source code and pre-built libraries
    git clone https://github.com/iwatake2222/play_with_tflite.git
    cd play_with_tflite
    git submodule update --init
    sh InferenceHelper/third_party/download_prebuilt_libraries.sh
  • Download models
    sh ./download_resource.sh

2-a. Build in Linux

cd pj_tflite_cls_mobilenet_v2   # for example
mkdir -p build && cd build
cmake ..
make
./main

2-b. Build in Windows (Visual Studio)

  • Configure and Generate a new project using cmake-gui for Visual Studio 2019 64-bit
    • Where is the source code : path-to-play_with_tflite/pj_tflite_cls_mobilenet_v2 (for example)
    • Where to build the binaries : path-to-build (any)
  • Open main.sln
  • Set the main project as the startup project, then build and run!

2-c. Build in Android Studio

  • Please refer to
  • Copy the resource directory to /storage/emulated/0/Android/data/com.iwatake.viewandroidtflite/files/Documents/resource
    • The directory is created after the app runs, so the first run will fail because the model files cannot be read
  • Modify ViewAndroid\app\src\main\cpp\CMakeLists.txt to select the image processor you want to use
    • set(ImageProcessor_DIR "${CMAKE_CURRENT_LIST_DIR}/../../../../../pj_tflite_cls_mobilenet_v2/image_processor")
    • replace pj_tflite_cls_mobilenet_v2 with another project if needed
  • By default, InferenceHelper::TENSORFLOW_LITE_DELEGATE_XNNPACK is used. You can modify ViewAndroid\app\src\main\cpp\CMakeLists.txt to select which delegate to use; InferenceHelper::TENSORFLOW_LITE_GPU usually gives better performance.
    • You also need to select the framework when calling InferenceHelper::create.

Note

Options (Delegate)

# Edge TPU
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=on  -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off
cp libedgetpu.so.1.0 libedgetpu.so.1
#export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`pwd`
sudo LD_LIBRARY_PATH=./ ./main
# you may get "Segmentation fault (core dumped)" without sudo

# GPU
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=on  -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off
# you may need `sudo apt install ocl-icd-opencl-dev` or `sudo apt install libgles2-mesa-dev`

# XNNPACK
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=on

# NNAPI (Note: NNAPI is only available on Android, so modify CMakeLists.txt in Android Studio instead of running the following command)
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_NNAPI=on

You also need to select the framework when calling InferenceHelper::create.
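
As a rough example, the selection might look like the sketch below; the factory signature and return handling are assumptions, and the enum values are simply the ones referred to in this README.

#include <memory>

#include "inference_helper.h"   // header name is an assumption

// Sketch only: pick the type that matches the delegate enabled via the cmake options above.
std::unique_ptr<InferenceHelper> CreateHelper(bool use_gpu) {
    const auto type = use_gpu ? InferenceHelper::TENSORFLOW_LITE_GPU
                              : InferenceHelper::TENSORFLOW_LITE_DELEGATE_XNNPACK;
    return std::unique_ptr<InferenceHelper>(InferenceHelper::create(type));
}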

EdgeTPU

NNAPI

By default, NNAPI selects the most appropriate accelerator for the model. You can also specify the accelerator yourself by modifying the following code in InferenceHelperTensorflowLite.cpp:

// options.accelerator_name = "qti-default";
// options.accelerator_name = "qti-dsp";
// options.accelerator_name = "qti-gpu";
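
With stock TensorFlow Lite, an accelerator name like this is normally passed through StatefulNnApiDelegate::Options. The snippet below is only a sketch of that mechanism; whether InferenceHelperTensorflowLite.cpp wires it up exactly this way is an assumption, and the qti-* names are vendor specific.

#include <memory>

#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"

// Sketch: create an NNAPI delegate bound to a specific accelerator.
std::unique_ptr<tflite::StatefulNnApiDelegate> MakeNnapiDelegate(const char* accelerator) {
    tflite::StatefulNnApiDelegate::Options options;
    options.accelerator_name = accelerator;   // e.g. "qti-default", "qti-dsp", "qti-gpu"
    return std::make_unique<tflite::StatefulNnApiDelegate>(options);
}

// Usage during interpreter setup (sketch):
//   auto delegate = MakeNnapiDelegate("qti-dsp");
//   interpreter->ModifyGraphWithDelegate(delegate.get());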

License

Apache-2.0

Acknowledgements

  • This project utilizes OSS (Open Source Software)
  • This project utilizes models from other projects:
    • Please find model_information.md in resource.zip