PINTO0309 / mtomo

Licence: MIT
Multiple types of NN model optimization environments. The host PC's GUI and camera can be accessed directly from the container to verify operation. Intel iHD GPU (iGPU) and NVIDIA GPU (dGPU) are supported.

Programming Languages

Dockerfile
Shell

Projects that are alternatives of or similar to mtomo

pytorch2keras
PyTorch to Keras model converter
Stars: ✭ 788 (+3183.33%)
Mutual labels:  onnx, tensorflowjs, models-converter
ONNX-Runtime-with-TensorRT-and-OpenVINO
Docker scripts for building ONNX Runtime with TensorRT and OpenVINO in manylinux environment
Stars: ✭ 15 (-37.5%)
Mutual labels:  tensorrt, onnx, openvino
vs-mlrt
Efficient ML Filter Runtimes for VapourSynth (with built-in support for waifu2x, DPIR, RealESRGANv2, and Real-CUGAN)
Stars: ✭ 34 (+41.67%)
Mutual labels:  tensorrt, onnx, openvino
model-zoo-old
The ONNX Model Zoo is a collection of pre-trained, state-of-the-art deep learning models, available in the ONNX format
Stars: ✭ 38 (+58.33%)
Mutual labels:  mxnet, models, onnx
YOLOX
YOLOX is a high-performance anchor-free YOLO, exceeding yolov3~v5 with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported. Documentation: https://yolox.readthedocs.io/
Stars: ✭ 6,570 (+27275%)
Mutual labels:  tensorrt, onnx, openvino
onnx2tensorRt
TensorRT inference with darknet2onnx, pytorch2onnx, and mxnet2onnx converters (Python version)
Stars: ✭ 14 (-41.67%)
Mutual labels:  mxnet, tensorrt, onnx
Onnx
Open standard for machine learning interoperability
Stars: ✭ 11,829 (+49187.5%)
Mutual labels:  mxnet, onnx
Ncnn
ncnn is a high-performance neural network inference framework optimized for the mobile platform
Stars: ✭ 13,376 (+55633.33%)
Mutual labels:  mxnet, onnx
Netron
Visualizer for neural network, deep learning, and machine learning models
Stars: ✭ 17,193 (+71537.5%)
Mutual labels:  mxnet, onnx
Tengine-Convert-Tools
Tengine Convert Tool supports converting models from multiple frameworks into the tmfile format suitable for the Tengine-Lite AI framework.
Stars: ✭ 89 (+270.83%)
Mutual labels:  mxnet, onnx
Multi Model Server
Multi Model Server is a tool for serving neural net models for inference
Stars: ✭ 770 (+3108.33%)
Mutual labels:  mxnet, onnx
Deepstream Project
A highly decoupled deployment project based on DeepStream, covering the full range of YOLO models and continuously expanding to other deployment targets such as OCR.
Stars: ✭ 120 (+400%)
Mutual labels:  tensorrt, onnx
arcface retinaface mxnet2onnx
Convert ArcFace and RetinaFace models from MXNet to ONNX.
Stars: ✭ 53 (+120.83%)
Mutual labels:  mxnet, onnx
Ngraph
nGraph has moved to OpenVINO
Stars: ✭ 1,322 (+5408.33%)
Mutual labels:  mxnet, onnx
Gluon2pytorch
Gluon to PyTorch deep neural network model converter
Stars: ✭ 70 (+191.67%)
Mutual labels:  mxnet, onnx
Coach
Reinforcement Learning Coach by Intel AI Lab enables easy experimentation with state of the art Reinforcement Learning algorithms
Stars: ✭ 2,085 (+8587.5%)
Mutual labels:  mxnet, onnx
Cv Pretrained Model
A collection of computer vision pre-trained models.
Stars: ✭ 995 (+4045.83%)
Mutual labels:  mxnet, models
pytorch YOLO OpenVINO demo
No description or website provided.
Stars: ✭ 73 (+204.17%)
Mutual labels:  onnx, openvino
torch-model-compression
An automated model structure analysis and modification toolset for PyTorch models, including a library of model compression algorithms that automatically analyze model structure.
Stars: ✭ 126 (+425%)
Mutual labels:  tensorrt, onnx
tensorflow-yolov4
YOLOv4 implemented in TensorFlow 2.
Stars: ✭ 136 (+466.67%)
Mutual labels:  tflite, edgetpu

mtomo

Multiple types of NN model optimization environments. The host PC's GUI and camera can be accessed directly from the container to verify operation. Intel iHD GPU (iGPU) and NVIDIA GPU (dGPU) are supported.

1. Environment

  1. Docker 20.10.5, build 55c4c88

2. Model optimization environment to be built (a quick version check of these components is sketched after the list)

  1. Ubuntu 20.04 x86_64
  2. CUDA 11.2
  3. cuDNN 8.1
  4. TensorFlow v2.5.0-rc1 (MediaPipe Custom OP, FlexDelegate, XNNPACK enabled)
  5. tflite_runtime v2.5.0-rc1 (MediaPipe Custom OP, FlexDelegate, XNNPACK enabled)
  6. edgetpu-compiler
  7. flatc 1.12.0
  8. TensorRT cuda11.1-trt7.2.3.4-ga-20210226
  9. PyTorch 1.8.1+cu112
  10. TorchVision 0.9.1+cu112
  11. TorchAudio 0.8.1
  12. OpenVINO 2021.3.394
  13. tensorflowjs
  14. coremltools
  15. onnx
  16. tf2onnx
  17. tensorflow-datasets
  18. openvino2tensorflow
  19. tflite2tensorflow
  20. onnxruntime
  21. onnx-simplifier
  22. MXNet
  23. gdown
  24. OpenCV 4.5.2-openvino
  25. Intel-Media-SDK
  26. Intel iHD GPU (iGPU) support
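
A quick way to confirm which of the components above are present in a given image is to query them from inside the container. This is only a sketch; it assumes python3 and pip3 are on PATH in the image, and the exact package names may differ.

$ nvcc --version
$ python3 -c "import tensorflow as tf, torch, onnx; print(tf.__version__, torch.__version__, onnx.__version__)"
$ pip3 list | grep -E "tf2onnx|onnx|openvino|tflite"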

3. Usage

3-1. Docker Hub

https://hub.docker.com/repository/docker/pinto0309/mtomo/tags?page=1&ordering=last_updated
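
If you prefer to fetch the image before running it, it can be pulled explicitly. The tag below is the one used in the run command that follows; other available tags are listed at the URL above.

$ docker pull pinto0309/mtomo:ubuntu2004_tf2.5.0-rc1_torch1.8.1_openvino2021.3.394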

$ xhost +local: && \
  docker run -it --rm \
    --gpus all \
    -v `pwd`:/home/user/workdir \
    -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
    --device /dev/video0:/dev/video0:mwr \
    --net=host \
    -e LIBVA_DRIVER_NAME=iHD \
    -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
    -e DISPLAY=$DISPLAY \
    --privileged \
    pinto0309/mtomo:ubuntu2004_tf2.5.0-rc1_torch1.8.1_openvino2021.3.394
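
Once the container is up, the GPU, the iGPU media stack, and the camera can be spot-checked from inside it. This is a minimal sketch, assuming the host drivers are set up and a camera is attached as /dev/video0; vainfo is only usable if the Intel media stack tools are present in the image.

$ nvidia-smi
$ vainfo
$ python3 -c "import cv2; cap = cv2.VideoCapture(0); print('camera opened:', cap.isOpened())"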

3-2. Docker Build

$ git clone https://github.com/PINTO0309/mtomo.git && cd mtomo
$ docker build -t {IMAGE_NAME}:{TAG} .
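
{IMAGE_NAME} and {TAG} are placeholders for whatever name you want to give the local image. For example, with hypothetical values:

$ docker build -t mtomo:local .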

3-3. Docker Run

$ xhost +local: && \
  docker run -it --rm \
    --gpus all \
    -v `pwd`:/home/user/workdir \
    -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
    --device /dev/video0:/dev/video0:mwr \
    --net=host \
    -e LIBVA_DRIVER_NAME=iHD \
    -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
    -e DISPLAY=$DISPLAY \
    --privileged \
    {IMAGE_NAME}:{TAG}
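
Once inside the container, the bundled converters can be chained together. The following is only a sketch of one possible flow, assuming a TensorFlow SavedModel in ./saved_model under the mounted workdir; the file names are hypothetical, the OpenVINO install path may differ in this image, and flags can vary between tool versions.

$ python3 -m tf2onnx.convert --saved-model saved_model --opset 13 --output model.onnx
$ python3 -m onnxsim model.onnx model_simplified.onnx
$ python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
    --input_model model_simplified.onnx \
    --output_dir openvino/FP16 \
    --data_type FP16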

4. Reference articles

  1. openvino2tensorflow
  2. tflite2tensorflow
  3. tensorflow-onnx (a.k.a. tf2onnx)
  4. tensorflowjs
  5. coremltools
  6. OpenVINO
  7. onnx
  8. onnx-simplifier
  9. TensorFlow
  10. PyTorch
  11. flatbuffers (a.k.a. flatc)
  12. TensorRT
  13. Intel-Media-SDK/MediaSDK - Running on GPU under docker
  14. Intel-Media-SDK/MediaSDK - Intel media stack on Ubuntu