
PatWie / Tensorflow Cmake

License: Apache-2.0
TensorFlow examples in C, C++, Go and Python without Bazel but with CMake and FindTensorFlow.cmake

Programming Languages

c - 50,402 projects (#5 most used programming language)
golang - 3,204 projects
cpp - 1,120 projects

Projects that are alternatives to, or similar to, Tensorflow Cmake

Cubert
Fast implementation of BERT inference directly on NVIDIA (CUDA, CUBLAS) and Intel MKL
Stars: ✭ 395 (-5.5%)
Mutual labels:  inference, cuda
Opencv4androidwithcmake
Use Android Studio 3.0 (>= 2.2) and the CMake toolchain to make your Android device fly with OpenCV (OpenCV 3.4.0)
Stars: ✭ 126 (-69.86%)
Mutual labels:  cmake, opencv
Camodet
Lightweight Simple CAmera MOtion DETection application.
Stars: ✭ 26 (-93.78%)
Mutual labels:  cmake, opencv
Coherent Line Drawing
🖼✨Automatically generates line drawing from a photograph
Stars: ✭ 461 (+10.29%)
Mutual labels:  cmake, opencv
cuda-cmake-gtest-gbench-starter
A cross-platform CUDA/C++14 starter project with google test and google benchmark support.
Stars: ✭ 24 (-94.26%)
Mutual labels:  cmake, cuda
Raftlib
The RaftLib C++ library, streaming/dataflow concurrency via C++ iostream-like operators
Stars: ✭ 717 (+71.53%)
Mutual labels:  cmake, opencv
Kinectazuredkprogramming
Samples about Kinect Azure DK programming
Stars: ✭ 92 (-77.99%)
Mutual labels:  cmake, opencv
Bmw Yolov4 Inference Api Cpu
This is a repository for a no-code object detection inference API using YOLOv4 and YOLOv3 with OpenCV.
Stars: ✭ 180 (-56.94%)
Mutual labels:  opencv, inference
Ck Caffe
Collective Knowledge workflow for Caffe to automate installation across diverse platforms and to collaboratively evaluate and optimize Caffe-based workloads across diverse hardware, software and data sets (compilers, libraries, tools, models, inputs):
Stars: ✭ 192 (-54.07%)
Mutual labels:  cmake, cuda
Anms Codes
Efficient adaptive non-maximal suppression algorithms for homogeneous spatial keypoint distribution
Stars: ✭ 174 (-58.37%)
Mutual labels:  cmake, opencv
Gocv
Go package for computer vision using OpenCV 4 and beyond.
Stars: ✭ 4,511 (+979.19%)
Mutual labels:  opencv, cuda
Cppflow
Run TensorFlow models in C++ without installation and without Bazel
Stars: ✭ 357 (-14.59%)
Mutual labels:  inference, tensorflow-examples
Pine
🌲 Aimbot powered by real-time object detection with neural networks, GPU accelerated with Nvidia. Optimized for use with CS:GO.
Stars: ✭ 202 (-51.67%)
Mutual labels:  opencv, cuda
Opencv Mingw Build
👀 MinGW 32bit and 64bit version of OpenCV compiled on Windows. Including OpenCV 3.3.1, 3.4.1, 3.4.1-x64, 3.4.5, 3.4.6, 3.4.7, 3.4.8-x64, 3.4.9, 4.0.0-alpha-x64, 4.0.0-rc-x64, 4.0.1-x64, 4.1.0, 4.1.0-x64, 4.1.1-x64, 4.5.0-with-contrib
Stars: ✭ 401 (-4.07%)
Mutual labels:  cmake, opencv
Pytorch Cpp
PyTorch C++ inference with LibTorch
Stars: ✭ 194 (-53.59%)
Mutual labels:  opencv, inference
Ncnn Benchmark
The benchmark of ncnn that is a high-performance neural network inference framework optimized for the mobile platform
Stars: ✭ 70 (-83.25%)
Mutual labels:  cmake, inference
Opencv Mtcnn
An implementation of MTCNN Face detector using OpenCV's DNN module
Stars: ✭ 59 (-85.89%)
Mutual labels:  opencv, inference
Build Deep Learning Env With Tensorflow Python Opencv
Tutorial on how to build your own research environment for deep learning with OpenCV, Python, and TensorFlow
Stars: ✭ 66 (-84.21%)
Mutual labels:  opencv, cuda
Primitiv
A Neural Network Toolkit.
Stars: ✭ 164 (-60.77%)
Mutual labels:  cmake, cuda
Dynamicfusion
Implementation of Newcombe et al. CVPR 2015 DynamicFusion paper
Stars: ✭ 267 (-36.12%)
Mutual labels:  opencv, cuda

TensorFlow CMake/C++ Collection

Looking at the official docs: What do you see? The usual fare? Now, guess what: this is a Bazel-free zone. We use CMake here!

This collection contains reliable and dead-simple examples for using TensorFlow in C, C++, Go and Python: load a pre-trained model or compile a custom operation, with or without CUDA. All builds are tested against the most recent stable TensorFlow version and rely on CMake with a custom FindTensorFlow.cmake. This CMake file includes common workarounds for bugs in specific TF versions.

TensorFlow   Status
1.14.0       Build Status
1.13.1       Build Status
1.12.0       Build Status
1.11.0       Build Status
1.10.0       Build Status
1.9.0        Build Status

The repository contains the following examples.

Example                       Explanation
custom operation              build a custom operation for TensorFlow in C++/CUDA (requires only pip)
inference (C++)               run inference in C++
inference (C)                 run inference in C
inference (Go)                run inference in Go
event writer                  write event files for TensorBoard in C++
keras cpp-inference example   run a Keras model in C++
simple example                create and run a TensorFlow graph in C++
resize image example          resize an image in TensorFlow with/without OpenCV

Custom Operation

This example illustrates the process of creating a custom operation using C++/CUDA and CMake. It is not intended to show a peak-performance implementation; it is just a boilerplate template. A minimal C++ sketch of what such an op looks like follows the build commands below.

user@host $ pip install tensorflow-gpu --user # solely the pip package is needed
user@host $ cd custom_op/user_ops
user@host $ cmake .
user@host $ make
user@host $ python test_matrix_add.py
user@host $ cd ..
user@host $ python example.py
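
For orientation, a custom operation boils down to two parts: registering the op interface and providing a kernel. Below is a minimal CPU-only sketch, assuming an illustrative op name MyMatrixAdd; the actual code in custom_op/user_ops also ships a CUDA kernel and may differ in details.

#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/shape_inference.h"

using namespace tensorflow;

// Register the op interface: two float tensors in, their element-wise sum out.
REGISTER_OP("MyMatrixAdd")
    .Input("a: float")
    .Input("b: float")
    .Output("sum: float")
    .SetShapeFn([](shape_inference::InferenceContext* c) {
      c->set_output(0, c->input(0));  // the output has the shape of the first input
      return Status::OK();
    });

// Plain CPU kernel; a CUDA version would register the same op for DEVICE_GPU.
class MyMatrixAddOp : public OpKernel {
 public:
  explicit MyMatrixAddOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}

  void Compute(OpKernelContext* ctx) override {
    const Tensor& a = ctx->input(0);
    const Tensor& b = ctx->input(1);

    Tensor* out = nullptr;
    OP_REQUIRES_OK(ctx, ctx->allocate_output(0, a.shape(), &out));

    auto a_flat = a.flat<float>();
    auto b_flat = b.flat<float>();
    auto o_flat = out->flat<float>();
    for (int64 i = 0; i < a_flat.size(); ++i) {
      o_flat(i) = a_flat(i) + b_flat(i);
    }
  }
};

REGISTER_KERNEL_BUILDER(Name("MyMatrixAdd").Device(DEVICE_CPU), MyMatrixAddOp);

The CMake step above compiles exactly this kind of translation unit into a shared library that tf.load_op_library can pick up from Python.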

TensorFlow Graph within C++

This example illustrates the process of loading an image (using OpenCV or TensorFlow), resizing it, and saving it as a JPG or PNG (using OpenCV or TensorFlow). A minimal sketch of the TensorFlow-only variant is shown after the build commands below.

user@host $ cd examples/resize
user@host $ export TENSORFLOW_BUILD_DIR=...
user@host $ export TENSORFLOW_SOURCE_DIR=...
user@host $ cmake .
user@host $ make
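
For reference, the TensorFlow-only path amounts to building a small graph with the C++ ops API and running it in a ClientSession. The following is a minimal sketch, assuming placeholder file names (input.jpg, output.png) and a fixed 256x256 target size; the actual code in examples/resize also covers the OpenCV variants.

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  using namespace tensorflow;
  using namespace tensorflow::ops;

  Scope root = Scope::NewRootScope();

  // Read and decode the image into a uint8 tensor of shape [H, W, 3].
  auto contents = ReadFile(root, "input.jpg");
  auto image = DecodeJpeg(root, contents, DecodeJpeg::Channels(3));

  // ResizeBilinear expects a float batch of shape [N, H, W, C].
  auto batch = ExpandDims(root, Cast(root, image, DT_FLOAT), 0);
  auto resized = ResizeBilinear(root, batch, Const(root, {256, 256}));

  // Drop the batch dimension again, convert back to uint8 and encode as PNG.
  auto encoded = EncodePng(root, Cast(root, Squeeze(root, resized), DT_UINT8));
  auto write = WriteFile(root, "output.png", encoded);

  ClientSession session(root);
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session.Run({}, {}, {write}, &outputs));
  return 0;
}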

TensorFlow Serving

There are two examples demonstrating the handling of TensorFlow Serving: one using a vector input and one using an encoded image input.

user@host $ CHOOSE=basic # or image
user@host $ cd serving/${CHOOSE}/training
user@host $ python create.py # create some model
user@host $ cd serving/server/
user@host $ ./run.sh # start server

# send some example queries

user@host $ cd client/bash
user@host $ ./client.sh
user@host $ cd client/python
# for the basic-example
user@host $ python client_rest.py
user@host $ python client_grpc.py
# for the image-example
user@host $ python client_rest.py /path/to/img.[png,jpg]
user@host $ python client_grpc.py /path/to/img.[png,jpg]

Inference

Create a model in Python, save the graph to disk and load it in C/C++/Go/Python to perform inference. As these examples are based on the TensorFlow C/C++ API, they require the libtensorflow.so and libtensorflow_cc.so libraries, which are not shipped in the pip package (tensorflow-gpu). Hence, you will need to build TensorFlow from source beforehand, e.g.,

user@host $ ls ${TENSORFLOW_SOURCE_DIR}

ACKNOWLEDGMENTS     bazel-genfiles      configure          pip
ADOPTERS.md         bazel-out           configure.py       py.pynano
ANDROID_NDK_HOME    bazel-tensorflow    configure.py.bkp   README.md
...
user@host $ cd ${TENSORFLOW_SOURCE_DIR}
user@host $ ./configure
user@host $ # ... or whatever options you used here
user@host $ bazel build -c opt --copt=-mfpmath=both --copt=-msse4.2 --config=cuda //tensorflow:libtensorflow.so
user@host $ bazel build -c opt --copt=-mfpmath=both --copt=-msse4.2 --config=cuda //tensorflow:libtensorflow_cc.so

user@host $ export TENSORFLOW_BUILD_DIR=/tensorflow_dist
user@host $ mkdir ${TENSORFLOW_BUILD_DIR}
user@host $ cp ${TENSORFLOW_SOURCE_DIR}/bazel-bin/tensorflow/*.so ${TENSORFLOW_BUILD_DIR}/
user@host $ cp ${TENSORFLOW_SOURCE_DIR}/bazel-genfiles/tensorflow/cc/ops/*.h ${TENSORFLOW_BUILD_DIR}/includes/tensorflow/cc/ops/

1. Save Model

We just run a very basic model:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[1, 2], name='input')
output = tf.identity(tf.layers.dense(x, 1), name='output')

Then save the model as you regularly would; this is done in example.py, which also prints some outputs:

user@host $ python example.py

[<tf.Variable 'dense/kernel:0' shape=(2, 1) dtype=float32_ref>, <tf.Variable 'dense/bias:0' shape=(1,) dtype=float32_ref>]
input            [[1. 1.]]
output           [[2.1909506]]
dense/kernel:0   [[0.9070684]
 [1.2838823]]
dense/bias:0     [0.]

2. Run Inference

Python

user@host $ python python/inference.py

[<tf.Variable 'dense/kernel:0' shape=(2, 1) dtype=float32_ref>, <tf.Variable 'dense/bias:0' shape=(1,) dtype=float32_ref>]
input            [[1. 1.]]
output           [[2.1909506]]
dense/kernel:0   [[0.9070684]
 [1.2838823]]
dense/bias:0     [0.]

C++

user@host $ cd cc
user@host $ cmake .
user@host $ make
user@host $ cd ..
user@host $ ./cc/inference_cc

input           Tensor<type: float shape: [1,2] values: [1 1]>
output          Tensor<type: float shape: [1,1] values: [2.19095063]>
dense/kernel:0  Tensor<type: float shape: [2,1] values: [0.907068372][1.28388226]>
dense/bias:0    Tensor<type: float shape: [1] values: 0>
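
Conceptually, the C++ example uses the standard Session API: load a serialized GraphDef, create a session, and feed/fetch tensors by name. The following generic sketch assumes a frozen graph in a file called graph.pb for simplicity; the actual example may instead restore the trained variables from a checkpoint rather than use a frozen graph.

#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

#include <iostream>
#include <memory>
#include <vector>

int main() {
  using namespace tensorflow;

  // Load the serialized graph from disk (file name is a placeholder).
  GraphDef graph_def;
  TF_CHECK_OK(ReadBinaryProto(Env::Default(), "graph.pb", &graph_def));

  // Create a session and attach the graph.
  std::unique_ptr<Session> session(NewSession(SessionOptions()));
  TF_CHECK_OK(session->Create(graph_def));

  // Prepare the [1, 2] float input used throughout this README.
  Tensor input(DT_FLOAT, TensorShape({1, 2}));
  input.matrix<float>()(0, 0) = 1.f;
  input.matrix<float>()(0, 1) = 1.f;

  // Feed "input" and fetch "output" (the names given in example.py).
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session->Run({{"input", input}}, {"output"}, {}, &outputs));

  std::cout << outputs[0].DebugString() << std::endl;
  return 0;
}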

C

user@host $ cd c
user@host $ cmake .
user@host $ make
user@host $ cd ..
user@host $ ./c/inference_c

2.190951

Go

user@host $ go get github.com/tensorflow/tensorflow/tensorflow/go
user@host $ cd go
user@host $ ./build.sh
user@host $ cd ../
user@host $ ./inference_go

input           [[1 1]]
output          [[2.1909506]]
dense/kernel:0  [[0.9070684] [1.2838823]]
dense/bias:0    [0]