
houmo-ai / mediapipe_plus

Licence: other
The purpose of this project is to apply mediapipe to more AI chips.

Programming Languages

C++
Starlark
Python

Projects that are alternatives of or similar to mediapipe_plus

Mediapipe
Cross-platform, customizable ML solutions for live and streaming media.
Stars: ✭ 15,338 (+40263.16%)
Mutual labels:  inference, pipeline-framework, stream-processing, video-processing, graph-framework, mediapipe
Jetson Inference
Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
Stars: ✭ 5,191 (+13560.53%)
Mutual labels:  inference, jetson, tensorrt
watsor
Object detection for video surveillance
Stars: ✭ 203 (+434.21%)
Mutual labels:  stream, tensorrt
godsend
A simple and eloquent workflow for streaming messages to micro-services.
Stars: ✭ 15 (-60.53%)
Mutual labels:  stream, stream-processing
Tuna
🐟 A streaming ETL for fish
Stars: ✭ 11 (-71.05%)
Mutual labels:  stream, stream-processing
Torch2trt
An easy to use PyTorch to TensorRT converter
Stars: ✭ 2,974 (+7726.32%)
Mutual labels:  inference, tensorrt
Volksdep
volksdep is an open-source toolbox for deploying and accelerating PyTorch, ONNX and TensorFlow models with TensorRT.
Stars: ✭ 195 (+413.16%)
Mutual labels:  inference, onnx
Redis Stream Demo
Demo for Redis Streams
Stars: ✭ 24 (-36.84%)
Mutual labels:  stream, stream-processing
Ml Model Ci
MLModelCI is a complete MLOps platform for managing, converting, profiling, and deploying MLaaS (Machine Learning-as-a-Service), bridging the gap between current ML training and serving systems.
Stars: ✭ 122 (+221.05%)
Mutual labels:  inference, onnx
Deepstream Project
This is a highly separated deployment project based on Deepstream, including the full range of Yolo and continuously expanding deployment projects such as OCR.
Stars: ✭ 120 (+215.79%)
Mutual labels:  tensorrt, onnx
Hstream
The streaming database built for IoT data storage and real-time processing in the 5G Era
Stars: ✭ 166 (+336.84%)
Mutual labels:  stream, stream-processing
fastT5
⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x.
Stars: ✭ 421 (+1007.89%)
Mutual labels:  inference, onnx
Ncnn
ncnn is a high-performance neural network inference framework optimized for the mobile platform
Stars: ✭ 13,376 (+35100%)
Mutual labels:  inference, onnx
Libonnx
A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support.
Stars: ✭ 217 (+471.05%)
Mutual labels:  inference, onnx
Onnxt5
Summarization, translation, sentiment-analysis, text-generation and more at blazing speed using a T5 version implemented in ONNX.
Stars: ✭ 143 (+276.32%)
Mutual labels:  inference, onnx
laav
Asynchronous Audio / Video Library for H264 / MJPEG / OPUS / AAC / MP2 encoding, transcoding, recording and streaming from live sources
Stars: ✭ 50 (+31.58%)
Mutual labels:  stream, video-processing
deepvac
PyTorch Project Specification.
Stars: ✭ 507 (+1234.21%)
Mutual labels:  tensorrt, onnx
Mivisionx
MIVisionX toolkit is a set of comprehensive computer vision and machine intelligence libraries, utilities, and applications bundled into a single toolkit. AMD MIVisionX also delivers a highly optimized open-source implementation of the Khronos OpenVX™ and OpenVX™ Extensions.
Stars: ✭ 100 (+163.16%)
Mutual labels:  inference, onnx
Yolov5 Rt Stack
Yet another yolov5, with its runtime stack for libtorch, onnx, tvm and specialized accelerators. You like torchvision's retinanet? You like yolov5? You love yolort!
Stars: ✭ 107 (+181.58%)
Mutual labels:  inference, onnx
Saber
Window-Based Hybrid CPU/GPU Stream Processing Engine
Stars: ✭ 35 (-7.89%)
Mutual labels:  stream, stream-processing

1. About This Project

Our Official Website: www.houmo.ai
Who We Are: We are Houmo - A Great AI Company.
We wish to change the world with unlimited computing power,
and we will disrupt the AI chip with in-memory computing.

This project is the first to integrate the TensorRT inference engine into Google MediaPipe.
The purpose of this project is to apply MediaPipe to more AI chips.

2. Our Build Environment

2.1 Hardware: AGX Xavier System Information

  • NVIDIA Jetson AGX Xavier [16GB]
    • Jetpack 4.6 [L4T 32.6.1]
    • NV Power Mode: MAXN - Type: 0
    • jetson_stats.service: active
  • Board info:
    • Type: AGX Xavier [16GB]
    • SOC Family: tegra194 - ID:25
    • Module: P28xx-00xx - Board: P28xx-00xx
    • Code Name: galen
    • CUDA GPU architecture (ARCH_BIN): 7.2
    • Serial Number: *****
  • Libraries:
    • CUDA: 10.2.300
    • cuDNN: 8.2.1.32
    • TensorRT: 8.0.1.6
    • Visionworks: 1.6.0.501
    • OpenCV: 3.4.15-dev compiled CUDA: NO
    • VPI: ii libnvvpi1 1.1.12 arm64 NVIDIA Vision Programming Interface library
    • Vulkan: 1.2.70
  • jetson-stats:
    • Version 3.1.1
    • Works on Python 2.7.17
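
The summary above is what the jetson-stats tool (jtop) reports. To reproduce it on your own board, jetson-stats is installed via pip; the commands below assume a stock Jetpack image, and the background service may need a re-login or reboot before jtop runs:

$ sudo -H pip install -U jetson-stats
$ jtop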

2.2 Build Essentials

a) gcc and g++ version 8.4.0 (Ubuntu/Linaro 8.4.0-1ubuntu1~18.04)
Install commands:

$ sudo apt install gcc-8 g++-8
$ sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 800 --slave /usr/bin/g++ g++ /usr/bin/g++-8

Check the installed gcc and g++ versions:

$ gcc -v
$ g++ -v
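
If the reported version is not 8.x, the alternatives registration above did not take effect; the active compiler can be switched interactively with standard Debian/Ubuntu tooling (g++ follows gcc via the --slave link registered above):

$ sudo update-alternatives --config gcc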

b) MediaPipe Dependencies
Please refer to the MediaPipe installation guide and install all required software packages and runtime libraries.
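
In practice, the MediaPipe prerequisites on Ubuntu 18.04 come down to Bazel plus the build tools and the OpenCV development libraries; the package list below is a sketch based on typical MediaPipe setups, so defer to the installation guide where they differ:

$ sudo apt install build-essential git
$ sudo apt install libopencv-core-dev libopencv-highgui-dev libopencv-calib3d-dev libopencv-features2d-dev libopencv-imgproc-dev libopencv-video-dev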

c) CUDA-Related
We created two symbolic links pointing to the CUDA headers. Change them to your own paths if needed.

**/mediapipe_plus/third_party/cuda_trt$ tree
.
├── BUILD
├── usr_local_cuda
│   └── include -> /usr/local/cuda/include
└── usr_local_cuda-10.2
    └── include -> /usr/local/cuda-10.2/targets/aarch64-linux/include/
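
If you clone the project onto a board with a different CUDA layout, the links can be recreated along these lines; the target paths below assume the default Jetpack 4.6 install locations from section 2.1:

$ cd third_party/cuda_trt
$ mkdir -p usr_local_cuda usr_local_cuda-10.2
$ ln -sfn /usr/local/cuda/include usr_local_cuda/include
$ ln -sfn /usr/local/cuda-10.2/targets/aarch64-linux/include usr_local_cuda-10.2/include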

3. Build and Run This Project on NVIDIA AGX Xavier

Please follow the instructions below to compile and run our demo.

3.1 Upgrade Your AGX Xavier if Your Jetpack Version Is Below 4.6 or Your TensorRT Version Is Below 8.0

Warning: Before upgrading, please back up your AGX Xavier to prevent data loss.
Refer to the official Jetson documentation on Over-the-Air Update,
section "Updating a Jetson Device -> To update to a new minor release".
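
For reference, the minor-release OTA flow amounts to repointing the L4T apt sources at the new release and dist-upgrading. The snippet below is a sketch that assumes the stock apt source file and an r32.6 (Jetpack 4.6) target; treat NVIDIA's page as authoritative:

# Point the L4T apt sources at the target release, then upgrade
$ sudo sed -i 's/r32[^ ]* main/r32.6 main/' /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
$ sudo apt update
$ sudo apt dist-upgrade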

3.2 Clone and Build the Project

Clone

$ git clone ***
$ cd **

Build the Demo

$ bazel build //calculators/tensorrt:trt_inference_calculator_test
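
The build can take a while on the Xavier itself; for an optimized binary, Bazel's standard compilation-mode flag applies (generic Bazel usage, not a project-specific requirement):

$ bazel build -c opt //calculators/tensorrt:trt_inference_calculator_test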

Run

$ GLOG_logtostderr=1 ./bazel-bin/calculators/tensorrt/trt_inference_calculator_test \
      --input_video_path=./test1.mp4 \
      --remote_run=true

Expected Output
A video file named "trt_infer.mp4" will be generated in the current folder.
Each frame is annotated with the detected face boxes and facial landmarks, like this:
[Image: face_detection_trt sample frame]

3.3 About The Demo

We created several calculators under the directory "./calculators/" that build a TensorRT engine from an ONNX model.
The target trt_inference_calculator_test is a face detection demo that shows how to use these calculators.
The face detection demo is an ultra-fast face detection solution that comes with 6 landmarks and multi-face support.
It is based on BlazeFace, a lightweight and well-performing face detector.
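
To sanity-check an ONNX model outside of the calculators, TensorRT 8 ships with the trtexec tool, which performs the same ONNX-to-engine conversion; the model filename below is a placeholder, not a file from this repository:

# Build and time a serialized TensorRT engine from an ONNX model
$ /usr/src/tensorrt/bin/trtexec --onnx=face_detector.onnx --saveEngine=face_detector.trt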

TODO

We left several TODOs that will be addressed in the next version:

  • Use VPI interfaces to accelerate pre- and post-processing, such as color space conversion, resizing, and saving images.
  • Reuse MediaPipe's officially released post-processing calculators.
Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].