
hollance / YOLO-CoreML-MPSNNGraph

License: MIT
Tiny YOLO for iOS implemented using CoreML but also using the new MPS graph API.

Programming Languages

Swift

Projects that are alternatives to or similar to YOLO-CoreML-MPSNNGraph

Android Yolo
Real-time object detection on Android using the YOLO network with TensorFlow
Stars: ✭ 604 (-30.25%)
Mutual labels:  yolo
Getting Things Done With Pytorch
Jupyter Notebook tutorials on solving real-world problems with Machine Learning & Deep Learning using PyTorch. Topics: Face detection with Detectron 2, Time Series anomaly detection with LSTM Autoencoders, Object Detection with YOLO v5, Build your first Neural Network, Time Series forecasting for Coronavirus daily cases, Sentiment Analysis with BERT.
Stars: ✭ 738 (-14.78%)
Mutual labels:  yolo
Mobilenet Yolo
A Caffe implementation of the MobileNet-YOLO detection network
Stars: ✭ 825 (-4.73%)
Mutual labels:  yolo
Label Studio
Label Studio is a multi-type data labeling and annotation tool with standardized output format
Stars: ✭ 7,264 (+738.8%)
Mutual labels:  yolo
Openlabeling
Label images and video for Computer Vision applications
Stars: ✭ 706 (-18.48%)
Mutual labels:  yolo
Yolo tensorflow
TensorFlow implementation of YOLO, including training and test phases.
Stars: ✭ 772 (-10.85%)
Mutual labels:  yolo
Ssds.pytorch
Repository for the Single Shot MultiBox Detector and its variants, implemented with PyTorch (Python 3).
Stars: ✭ 570 (-34.18%)
Mutual labels:  yolo
Pytorch Yolo2
Convert https://pjreddie.com/darknet/yolo/ into PyTorch
Stars: ✭ 941 (+8.66%)
Mutual labels:  yolo
Cv Papers
A curated collection of computer vision papers, with notes and summaries, covering image classification, object detection, visual/object tracking, face recognition/verification, OCR/scene-text detection and recognition, and more. Stars and corrections are welcome, as are contributions. Continuously updated.
Stars: ✭ 738 (-14.78%)
Mutual labels:  yolo
Tvm
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
Stars: ✭ 7,494 (+765.36%)
Mutual labels:  metal
Pixelkit
Live Graphics Framework
Stars: ✭ 644 (-25.64%)
Mutual labels:  metal
Yolo Tf2
YOLO (all versions) implemented in Keras and TensorFlow 2.4
Stars: ✭ 695 (-19.75%)
Mutual labels:  yolo
Tensorflow Yolo
TensorFlow implementation of 'YOLO: Real-Time Object Detection' (train and test)
Stars: ✭ 791 (-8.66%)
Mutual labels:  yolo
Ouzel
C++ game engine for Windows, macOS, Linux, iOS, tvOS, Android, and web browsers
Stars: ✭ 607 (-29.91%)
Mutual labels:  metal
Yolo annotation tool
Annotation tool for YOLO in OpenCV
Stars: ✭ 17 (-98.04%)
Mutual labels:  yolo
Ai Basketball Analysis
🏀🤖🏀 AI web app and API to analyze basketball shots and shooting pose.
Stars: ✭ 582 (-32.79%)
Mutual labels:  yolo
Toypathtracer
Toy path tracer for my own learning purposes (CPU/GPU, C++/C#, Win/Mac/Wasm, DX11/Metal, also Unity)
Stars: ✭ 753 (-13.05%)
Mutual labels:  metal
Tensorflow Yolo V3
Implementation of the YOLO v3 object detector in TensorFlow (TF-Slim)
Stars: ✭ 862 (-0.46%)
Mutual labels:  yolo
Dmsmsgrcg
A photo OCR project that aims to output DMS messages contained in sign structure images.
Stars: ✭ 18 (-97.92%)
Mutual labels:  yolo
Flexibleimage
A simple way to play with the image!
Stars: ✭ 798 (-7.85%)
Mutual labels:  metal

YOLO with Core ML and MPSNNGraph

This is the source code for my blog post YOLO: Core ML versus MPSNNGraph.

YOLO is an object detection network. It detects multiple objects in an image and draws bounding boxes around them. Read my other blog post about YOLO to learn more about how it works.

YOLO in action

Previously, I implemented YOLO in Metal using the Forge library. Since then Apple released Core ML and MPSNNGraph as part of the iOS 11 beta. So I figured, why not try to get YOLO running on these two other technology stacks too?

In this repo you'll find:

  • TinyYOLO-CoreML: A demo app that runs the Tiny YOLO neural network on Core ML.
  • TinyYOLO-NNGraph: The same demo app, but this time it uses the lower-level graph API from Metal Performance Shaders (a brief sketch of this API follows the list).
  • Convert: The scripts needed to convert the original DarkNet YOLO model to Core ML and MPS format.
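
To give an idea of what the NNGraph version looks like, here is a minimal, hypothetical sketch of the MPSNNGraph node API: you describe the layers as nodes and then compile them into a single graph object. This is not the repo's actual network, and dataSource stands in for an object that supplies the converted weights.

import MetalPerformanceShaders

// Hypothetical sketch of the MPSNNGraph node API (iOS 11), not the
// repo's actual network. `dataSource` is a placeholder for an
// MPSCNNConvolutionDataSource that loads the converted weight files.
func makeGraph(device: MTLDevice,
               dataSource: MPSCNNConvolutionDataSource) -> MPSNNGraph? {
    let input = MPSNNImageNode(handle: nil)
    let conv = MPSCNNConvolutionNode(source: input, weights: dataSource)
    let leaky = MPSCNNNeuronReLUNode(source: conv.resultImage, a: 0.1)  // leaky ReLU
    let pool = MPSCNNPoolingMaxNode(source: leaky.resultImage, filterSize: 2)
    // A full Tiny YOLO graph chains all of its conv/pool stages here.
    return MPSNNGraph(device: device, resultImage: pool.resultImage)
}

// Running it: graph?.encode(to: commandBuffer, sourceImages: [inputImage])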

To run the app, just open the xcodeproj file in Xcode 9 or later, and run it on a device with iOS 11 or later installed.

The reported "elapsed" time is how long it takes the YOLO neural net to process a single image. The FPS is the actual throughput achieved by the app.
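
Roughly speaking, the elapsed time is the wall-clock time around a single prediction call, along these lines (a sketch only; model and pixelBuffer are placeholders, and the name of the generated prediction method depends on the model):

import QuartzCore  // CACurrentMediaTime()

// Sketch: time one Core ML prediction. `model` is the generated model
// class instance and `pixelBuffer` a camera frame (both placeholders).
let startTime = CACurrentMediaTime()
_ = try? model.prediction(image: pixelBuffer)
let elapsed = CACurrentMediaTime() - startTime
print("Elapsed \(elapsed) seconds")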

NOTE: Running these kinds of neural networks eats up a lot of battery power. To measure the maximum speed of the model, the setUpCamera() method in ViewController.swift configures the camera to run at 240 FPS, if available. In a real app, you'd use at most 30 FPS and possibly limit the number of times per second the neural net runs to 15 or fewer (i.e. only process every other frame).
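
For illustration, here is a minimal sketch of the idea behind that camera setup (not the repo's exact setUpCamera() code): ask AVFoundation for the capture format with the highest supported frame rate and lock the device to it.

import AVFoundation

// Sketch: pick the capture format with the highest supported frame
// rate (240 FPS on devices that offer it) and lock the device to it.
func configureMaxFrameRate(for device: AVCaptureDevice) throws {
    var bestFormat: AVCaptureDevice.Format?
    var bestRange: AVFrameRateRange?
    for format in device.formats {
        for range in format.videoSupportedFrameRateRanges {
            if bestRange == nil || range.maxFrameRate > bestRange!.maxFrameRate {
                bestFormat = format
                bestRange = range
            }
        }
    }
    guard let format = bestFormat, let range = bestRange else { return }
    try device.lockForConfiguration()
    device.activeFormat = format
    // minFrameDuration corresponds to the maximum frame rate.
    device.activeVideoMinFrameDuration = range.minFrameDuration
    device.activeVideoMaxFrameDuration = range.minFrameDuration
    device.unlockForConfiguration()
}

Throttling to every other frame is then just a matter of keeping a frame counter and skipping the neural net when, say, frameCounter % 2 != 0.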

Tip: Also check out this repo for YOLO v3. It works the same as this repo, but uses the full version of YOLO v3!

iOS 12 and VNRecognizedObjectObservation

The code in my blog post and this repo shows how to take the MLMultiArray output from TinyYOLO and interpret it in your app. That was the only way to do it with iOS 11, but as of iOS 12 there is an easier solution.
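
To give a flavor of that iOS 11 approach, here is a condensed sketch of such a decode (not the repo's exact code). For tiny-yolo-voc the output is a 125×13×13 MLMultiArray: 5 anchor boxes × (4 box coordinates + 1 confidence + 20 class scores) for each cell of a 13×13 grid, with 32 input pixels per grid cell.

import CoreML
import CoreGraphics
import Foundation

// Condensed sketch of decoding the 125x13x13 TinyYOLO output; the
// repo's actual code differs. `anchors` holds the 10 tiny-yolo-voc
// anchor width/height values, in grid-cell units.
func sigmoid(_ x: Double) -> Double { 1 / (1 + exp(-x)) }

func decode(features: MLMultiArray, anchors: [Double],
            threshold: Double = 0.3) -> [(rect: CGRect, classIndex: Int, score: Double)] {
    let gridSize = 13, numBoxes = 5, numClasses = 20
    let blockSize = 32.0  // 416-pixel input / 13 grid cells
    var results: [(rect: CGRect, classIndex: Int, score: Double)] = []
    for cy in 0..<gridSize {
        for cx in 0..<gridSize {
            for b in 0..<numBoxes {
                let channel = b * (numClasses + 5)
                func value(_ c: Int) -> Double {
                    features[[channel + c, cy, cx] as [NSNumber]].doubleValue
                }
                // Box center, width, and height in input-image pixels.
                let x = (Double(cx) + sigmoid(value(0))) * blockSize
                let y = (Double(cy) + sigmoid(value(1))) * blockSize
                let w = exp(value(2)) * anchors[2 * b] * blockSize
                let h = exp(value(3)) * anchors[2 * b + 1] * blockSize
                let confidence = sigmoid(value(4))
                // Softmax over the class scores; keep only the best class.
                let expScores = (0..<numClasses).map { exp(value(5 + $0)) }
                let sum = expScores.reduce(0, +)
                let (bestIndex, bestExp) = expScores.enumerated().max { $0.element < $1.element }!
                let score = confidence * bestExp / sum
                if score > threshold {
                    results.append((CGRect(x: x - w / 2, y: y - h / 2, width: w, height: h),
                                    bestIndex, score))
                }
            }
        }
    }
    return results  // still needs non-maximum suppression afterwards
}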

The Vision framework in iOS 12 directly supports YOLO-like models. The big advantage is that these models do the bounding box decoding and non-maximum suppression (NMS) inside the Core ML model itself. All you need to do is pass in the image and Vision will give you the results as one or more VNRecognizedObjectObservation objects. No more messing around with MLMultiArrays.
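
A minimal sketch of that flow (assuming a generated model class named TinyYOLO whose .mlmodel embeds the decoding + NMS pipeline; the class name and the cgImage input are placeholders):

import CoreML
import Vision

// Sketch: let Vision run the model and hand back decoded detections.
let visionModel = try VNCoreMLModel(for: TinyYOLO().model)
let request = VNCoreMLRequest(model: visionModel) { request, error in
    guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
    for observation in observations {
        // boundingBox is in normalized coordinates (origin bottom-left).
        let bestLabel = observation.labels.first  // sorted by confidence
        print(bestLabel?.identifier ?? "?", bestLabel?.confidence ?? 0, observation.boundingBox)
    }
}
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try handler.perform([request])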

It's also really easy to train such models using Turi Create. It combines TinyYOLO v2 and the new NonMaximumSuppression model type into a so-called pipeline model.

The good news is that this new Vision API also supports other object detection models!

I added a chapter to my book Core ML Survival Guide that shows exactly how this works. In the book you’ll see how to add this same functionality to MobileNetV2 + SSDLite, so that you get VNRecognizedObjectObservation predictions for that model too. The book has lots of other great tips on using Core ML, so check it out! 😄

If you're not ready to go all-in on iOS 12 yet, then read on...

Converting the models

NOTE: You don't need to convert the models yourself. Everything you need to run the demo apps is included in the Xcode projects already.

If you're interested in how the conversion was done, there are three conversion scripts:

YAD2K

The original network is in Darknet format. I used YAD2K to convert this to Keras. Since coremltools currently requires Keras 1.2.2, the included YAD2K source code is actually a modified version that runs on Keras 1.2.2 instead of 2.0.

First, set up a virtualenv with Python 3:

virtualenv -p /usr/local/bin/python3 yad2kenv
source yad2kenv/bin/activate
pip3 install tensorflow
pip3 install keras==1.2.2
pip3 install h5py
pip3 install pydot-ng
pip3 install pillow
brew install graphviz

Run the yad2k.py script to convert the Darknet model to Keras:

cd Convert/yad2k
python3 yad2k.py -p ../tiny-yolo-voc.cfg ../tiny-yolo-voc.weights model_data/tiny-yolo-voc.h5

To test that the model actually works:

python3 test_yolo.py model_data/tiny-yolo-voc.h5 -a model_data/tiny-yolo-voc_anchors.txt -c model_data/pascal_classes.txt 

This places some images with the computed bounding boxes in the yad2k/images/out folder.

coreml.py

The coreml.py script takes the tiny-yolo-voc.h5 model created by YAD2K and converts it to TinyYOLO.mlmodel. Note: this script requires Python 2.7 from /usr/bin/python (i.e. the one that comes with macOS).

To set up the virtual environment:

virtualenv -p /usr/bin/python2.7 coreml
source coreml/bin/activate
pip install tensorflow
pip install keras==1.2.2
pip install h5py
pip install coremltools

Run the coreml.py script to do the conversion (the paths to the model file and the output folder are hardcoded in the script):

python coreml.py

nngraph.py

The nngraph.py script takes the tiny-yolo-voc.h5 model created by YAD2K and converts it to weights files used by MPSNNGraph. Requires Python 3 and Keras 1.2.2.
