
alexisakers / Mlmoji

License: MIT
Hand-Drawn Emoji Classifier (WWDC18 Scholarship Application)

Programming Languages

Swift
15,916 projects

Projects that are alternatives of or similar to Mlmoji

Visualprogramminglanguage
Visual programming language written in Swift that assembles to executable Swift code. WWDC '18 scholarship submission.
Stars: ✭ 1,145 (+3267.65%)
Mutual labels:  playground, coreml
createml-playgrounds
Create ML playgrounds for building machine learning models. For developers and data scientists.
Stars: ✭ 82 (+141.18%)
Mutual labels:  playground, coreml
mnist-coreml
Simple convolutional neural network to predict handwritten digits using Keras + CoreML for WWDC '18 scholarship [Accepted]
Stars: ✭ 45 (+32.35%)
Mutual labels:  playground, coreml
Ngx Formly Playground
A project with a list of Angular Formly exercises. Each exercise adds a new feature to the previous one.
Stars: ✭ 16 (-52.94%)
Mutual labels:  playground
Deer Executor
An executor for online judges: a code-judging tool implemented in Go
Stars: ✭ 23 (-32.35%)
Mutual labels:  playground
Objectclassifier
An iOS swift app that detects objects using machine learning (CoreML, Vision)
Stars: ✭ 12 (-64.71%)
Mutual labels:  coreml
Momo mind
Face classifier for "Momoiro Clover Z" members using TensorFlow/Keras
Stars: ✭ 31 (-8.82%)
Mutual labels:  coreml
Flexibleimage
A simple way to play with the image!
Stars: ✭ 798 (+2247.06%)
Mutual labels:  playground
Mnistkit
The better way to deal with MNIST model and Core ML in iOS
Stars: ✭ 21 (-38.24%)
Mutual labels:  coreml
Flutterplayground
Playground app for Flutter
Stars: ✭ 859 (+2426.47%)
Mutual labels:  playground
Awesome Ai Books
A collection of AI-related books and PDFs for learning and download, along with some playground models for learning
Stars: ✭ 855 (+2414.71%)
Mutual labels:  playground
Har Keras Coreml
Human Activity Recognition (HAR) with Keras and CoreML
Stars: ✭ 23 (-32.35%)
Mutual labels:  coreml
Coremlimage
A demo of creating your own CoreML model
Stars: ✭ 13 (-61.76%)
Mutual labels:  coreml
Build Ocr
Build an OCR for iOS apps
Stars: ✭ 17 (-50%)
Mutual labels:  coreml
Coremlstyletransfer
A simple demo for Core ML and Style Transfer
Stars: ✭ 30 (-11.76%)
Mutual labels:  coreml
Coremlhelpers
Types and functions that make it a little easier to work with Core ML in Swift.
Stars: ✭ 823 (+2320.59%)
Mutual labels:  coreml
Revery Playground
Live, interactive playground for Revery examples
Stars: ✭ 14 (-58.82%)
Mutual labels:  playground
Vuep
🎡 A component for rendering Vue components with live editor and preview.
Stars: ✭ 840 (+2370.59%)
Mutual labels:  playground
Electron Playground
A project for quickly experimenting with and learning Electron-related APIs
Stars: ✭ 938 (+2658.82%)
Mutual labels:  playground
Liooon Not A Liooon Classifier
A troll app to check if an object seen by your camera is a lion. Uses iOS CoreML, Vision APIs
Stars: ✭ 11 (-67.65%)
Mutual labels:  coreml

MLMOJI

Emoji are becoming more central to the way we express ourselves digitally, whether we want to convey an emotion or add some fun to a conversation. The impact of this growth is visible on mobile devices: the set of available emoji keeps increasing, and finding the right character often requires a lot of swiping.

My playground’s goal is to research a way to make this process more fun and intuitive. Using touch, Core Graphics, and deep learning, I implemented a keyboard that recognizes hand-drawn emoji and converts them to text.

You can download the playground book, the data set and the Core ML model from the releases tab.

For a live demo, you can watch this video:

MLMOJI Demo

🔧 Building MLMOJI

The first step was to create the drawings themselves. I made a view that accumulates points as the user’s finger moves on the screen and renders the stroke incrementally. When the user lifts their finger, a UIGraphicsImageRenderer flattens the strokes into a static image, which improves rendering performance. To achieve smoother lines, I used touch coalescing, which delivers the intermediate touch samples captured between display frames.
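
For illustration, here is a minimal sketch of such a canvas view. The class and property names are hypothetical rather than the playground’s actual API; the sketch only shows the two ideas described above: folding coalesced touches into the current stroke, and flattening finished strokes into a bitmap with UIGraphicsImageRenderer.

```swift
import UIKit

/// Illustrative drawing canvas (names are hypothetical, not the playground's API).
final class DrawingCanvasView: UIView {
    private var currentStroke = UIBezierPath()
    private var flattenedImage: UIImage?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        currentStroke = UIBezierPath()
        currentStroke.lineWidth = 12
        currentStroke.move(to: touch.location(in: self))
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        // Coalesced touches include the intermediate samples captured between
        // display frames, which makes the rendered line noticeably smoother.
        let touchesToRender = event?.coalescedTouches(for: touch) ?? [touch]
        for coalescedTouch in touchesToRender {
            currentStroke.addLine(to: coalescedTouch.location(in: self))
        }
        setNeedsDisplay()
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Flatten the previous image and the finished stroke into one bitmap,
        // so later redraws only blit an image instead of re-stroking every path.
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        flattenedImage = renderer.image { _ in
            flattenedImage?.draw(in: bounds)
            UIColor.black.setStroke()
            currentStroke.stroke()
        }
        currentStroke = UIBezierPath()
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        flattenedImage?.draw(in: bounds)
        UIColor.black.setStroke()
        currentStroke.stroke()
    }
}
```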

The second core component of the playground is a classifier that recognizes a drawn emoji. Building it involved three tasks: gathering training data as images, training the model, and improving its accuracy.

The model uses this training data to learn the features of each emoji class. I collected it with a companion app, built around the drawing component described above, that speeds up the process of generating sample drawings.

With the training images available, I looked into training a Core ML classifier. A convolutional neural network (CNN) was a good fit, since CNNs are well suited to image-recognition tasks.

Training a CNN from scratch can take several weeks because of the complexity of the operations applied to the input. Therefore, I used the “transfer learning” technique to train my classifier. This approach enables you to retrain a general, pre-trained model to detect new features.

Using the TensorFlow ML framework, a Docker container, and a Python script, I was able to retrain MobileNet, a small, fast, mobile-friendly neural network, on each emoji class. I then imported the resulting .mlmodel into my playground.
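
As a rough sketch of how the playground could run the imported model on a flattened drawing, here is an example using the Vision framework. `EmojiClassifier` stands in for the Xcode-generated model class; the model’s actual name in the project may differ, so treat the identifiers here as assumptions.

```swift
import UIKit
import Vision
import CoreML

/// Classifies a flattened drawing with the imported Core ML model.
/// `EmojiClassifier` is a placeholder for the generated model class.
func classifyDrawing(_ drawing: UIImage, completion: @escaping (String?) -> Void) {
    guard
        let cgImage = drawing.cgImage,
        let coreMLModel = try? EmojiClassifier(configuration: MLModelConfiguration()).model,
        let visionModel = try? VNCoreMLModel(for: coreMLModel)
    else {
        completion(nil)
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // The top classification is the emoji class the model is most confident about.
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier)
    }
    // Vision scales the drawing to the model's expected input size (224x224 for MobileNet).
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```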

The first version of the classifier was too specialized and not very reliable because my original data set was not large enough (only 50 drawings per class). I used data augmentation techniques (such as scaling, distortion, and flipping) to generate more training images from the manual drawings. Then I repeated the training process to reach a more acceptable accuracy.
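
The snippet below sketches the idea behind this augmentation step; it is not the project’s actual pipeline, just an example of the kinds of transforms involved. Given one hand-drawn sample, it renders several scaled, flipped, rotated, and sheared variants that could be added to the training set.

```swift
import UIKit

/// Illustrative augmentation sketch: derive several distorted variants
/// from a single hand-drawn sample (not the project's actual pipeline).
func augmentedVariants(of drawing: UIImage) -> [UIImage] {
    let transforms: [CGAffineTransform] = [
        CGAffineTransform(scaleX: 1.2, y: 1.2),                    // zoom in
        CGAffineTransform(scaleX: 0.8, y: 0.8),                    // zoom out
        CGAffineTransform(scaleX: -1, y: 1),                       // horizontal flip
        CGAffineTransform(rotationAngle: .pi / 18),                // slight rotation
        CGAffineTransform(a: 1, b: 0, c: 0.2, d: 1, tx: 0, ty: 0)  // shear/distortion
    ]

    let size = drawing.size
    let renderer = UIGraphicsImageRenderer(size: size)

    return transforms.map { transform in
        renderer.image { context in
            // Apply the transform around the image's center so the drawing stays in frame.
            let cgContext = context.cgContext
            cgContext.translateBy(x: size.width / 2, y: size.height / 2)
            cgContext.concatenate(transform)
            cgContext.translateBy(x: -size.width / 2, y: -size.height / 2)
            drawing.draw(in: CGRect(origin: .zero, size: size))
        }
    }
}
```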

Finally, using the Playground Books format, I created an interactive playground that explains the techniques used and demonstrates a proof of concept. Using features like the glossary and live view proxies, the book provides an accessible and enjoyable learning experience.

The final result comes with limitations. Because of the assignment’s time and size constraints, I was only able to train the model on 7 emoji classes, and its accuracy fluctuates somewhat. However, building this playground taught me a lot about deep learning techniques for mobile devices and encouraged me to pursue further research in this field.
