Ma-Dan / YOLOv3-CoreML

License: MIT
YOLOv3 for iOS implemented using CoreML.

Projects that are alternatives to or similar to YOLOv3-CoreML

Yolov5
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
Stars: ✭ 19,914 (+11896.39%)
Mutual labels:  coreml, yolov3
Yolov3
YOLOv3 in PyTorch > ONNX > CoreML > TFLite
Stars: ✭ 8,159 (+4815.06%)
Mutual labels:  coreml, yolov3
darknet
php ffi darknet
Stars: ✭ 21 (-87.35%)
Mutual labels:  yolov3
ESP32-CAM-MJPEG-Stream-Decoder-and-Control-Library
An MJPEG stream decoder based on libcurl and OpenCV, written in C/C++.
Stars: ✭ 40 (-75.9%)
Mutual labels:  yolov3
deepvac
PyTorch Project Specification.
Stars: ✭ 507 (+205.42%)
Mutual labels:  coreml
CustomVisionMicrosoftToCoreMLDemoApp
This app recognises 3 hand signs - fist, high five and victory hand (basically rock, paper, scissors :) ) - from a live camera feed. It uses a HandSigns.mlmodel that has been trained using Custom Vision from Microsoft.
Stars: ✭ 25 (-84.94%)
Mutual labels:  coreml
DeTeXt
iOS app that detects LaTeX symbols from drawings. Built using PencilKit, SwiftUI, Combine and CoreML for iOS 14 (or greater) and macOS 11 (or greater).
Stars: ✭ 73 (-56.02%)
Mutual labels:  coreml
DIoU YOLO V3
📈📈📈 [Mask-wearing detection training data | open-source mask detection dataset and pre-trained models] Train D/CIoU_YOLO_V3 with darknet for object detection
Stars: ✭ 53 (-68.07%)
Mutual labels:  yolov3
CarLens-iOS
CarLens - Recognize and Collect Cars
Stars: ✭ 124 (-25.3%)
Mutual labels:  coreml
YOLO-Streaming
Push-pull streaming and Web display of YOLO series
Stars: ✭ 56 (-66.27%)
Mutual labels:  yolov3
facetouch
Neural network that predicts face touching on a live feed and warns you: "don't touch the face".
Stars: ✭ 24 (-85.54%)
Mutual labels:  yolov3
MIT-Driverless-CV-TrainingInfra
PyTorch pipeline of the MIT Driverless Computer Vision paper (2020)
Stars: ✭ 89 (-46.39%)
Mutual labels:  yolov3
IBM-Data-Science-Capstone-Alejandra-Marquez
Homemade face mask detector fine-tuning a Yolo-v3 network
Stars: ✭ 28 (-83.13%)
Mutual labels:  yolov3
udacity-cvnd-projects
My solutions to the projects assigned for the Udacity Computer Vision Nanodegree
Stars: ✭ 36 (-78.31%)
Mutual labels:  yolov3
AnimeGANv3
Use AnimeGANv3 to make your own animation works, including turning photos or videos into anime.
Stars: ✭ 878 (+428.92%)
Mutual labels:  coreml
baai-federated-learning-helmet-baseline
Electric Power AI Data Competition: baseline model for the object-detection track on detecting workers not wearing safety helmets
Stars: ✭ 26 (-84.34%)
Mutual labels:  yolov3
ios-ml-dog-classifier
An iOS app that can detect a dog and determine its breed from an image or video feed.
Stars: ✭ 37 (-77.71%)
Mutual labels:  coreml
detection-pytorch
A PyTorch implementation of classical object detection models.
Stars: ✭ 24 (-85.54%)
Mutual labels:  yolov3
detection util scripts
TF and YOLO utility scripts
Stars: ✭ 49 (-70.48%)
Mutual labels:  yolov3
SentimentVisionDemo
🌅 iOS11 demo application for visual sentiment prediction.
Stars: ✭ 34 (-79.52%)
Mutual labels:  coreml

YOLOv3 with Core ML

This repo was forked and modified from hollance/YOLO-CoreML-MPSNNGraph. Some changes I made:

  1. Added the YOLOv3 model.
  2. Kept only the Keras converter.

About YOLO object detection

YOLO is an object detection network. It can detect multiple objects in an image and puts bounding boxes around them. Read hollance's blog post about YOLO to learn more about how it works.

YOLO in action

In this repo you'll find:

  • YOLOv3-CoreML: A demo app that runs the YOLOv3 neural network on Core ML (a minimal usage sketch follows this list).
  • Converter: The scripts needed to convert the original Keras YOLOv3 model to Core ML.
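
For orientation, here is a minimal, hedged sketch of how a converted YOLOv3 Core ML model can be run on a single image from Swift using the Vision framework. The bundled model name ("YOLOv3") and the Vision-based approach are assumptions for illustration; the demo app itself processes camera frames and decodes the raw YOLO output (boxes, confidences, class scores) before applying non-maximum suppression.

    import Foundation
    import CoreGraphics
    import CoreML
    import Vision

    // Sketch only (not the demo app's exact code): run a converted YOLOv3 Core ML
    // model on one image via Vision. The compiled model name "YOLOv3" is an
    // assumption; adjust it to the .mlmodel actually bundled with the app.
    func detectObjects(in image: CGImage) {
        guard let modelURL = Bundle.main.url(forResource: "YOLOv3", withExtension: "mlmodelc"),
              let coreMLModel = try? MLModel(contentsOf: modelURL),
              let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
            print("Could not load the YOLOv3 Core ML model")
            return
        }

        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            // The raw output is a grid of box coordinates, confidences and class
            // scores; the demo app decodes this grid and applies non-maximum
            // suppression before drawing bounding boxes.
            let observations = request.results ?? []
            print("Received \(observations.count) raw observations")
        }
        request.imageCropAndScaleOption = .scaleFill

        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try? handler.perform([request])
    }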

To run the app:

  1. Extract the YOLOv3 Core ML model in the YOLOv3 CoreML model folder and copy it to the YOLOv3-CoreML/YOLOv3-CoreML folder.
  2. Open the xcodeproj file in Xcode 9 and run it on a device with iOS 11 or later installed.

The reported "elapsed" time is how long it takes the YOLO neural net to process a single image. The FPS is the actual throughput achieved by the app.

NOTE: Running these kinds of neural networks eats up a lot of battery power. The app can put a limit on the number of times per second it runs the neural net. You can change this in setUpCamera() by changing the line videoCapture.fps = 50 to a smaller number.
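
As an illustration, a hedged sketch of that change is below. It mirrors what the note describes; `videoCapture` is the app's camera helper instance inside the demo's view controller, and the exact names and surrounding setup code in this repo may differ.

    // Sketch of the change described above, inside the view controller's
    // setUpCamera(). `VideoCapture` is the app's camera helper; names may differ.
    func setUpCamera() {
        videoCapture = VideoCapture()
        videoCapture.delegate = self

        // The README points at `videoCapture.fps = 50`; a smaller value such as 15
        // runs the neural net fewer times per second and saves battery power.
        videoCapture.fps = 15

        // ... camera session setup and start, as in the original setUpCamera() ...
    }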

Converting the models

NOTE: You don't need to convert the models yourself. Everything you need to run the demo apps is included in the Xcode projects already.

The model is converted from a Keras .h5 model. Follow the Quick Start guide of keras-yolo3 to get the YOLOv3 Keras .h5 model, then use coreml.py to convert the .h5 model to a Core ML model.
