alibaba / Mnn

MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba

Programming Languages

C++, C, Assembly, Python, Objective-C++, CUDA

Projects that are alternatives of or similar to Mnn

Ml Examples
Arm Machine Learning tutorials and examples
Stars: ✭ 207 (-96.71%)
Mutual labels:  arm, deep-neural-networks, ml
Ncnn Android Styletransfer
The style transfer android example
Stars: ✭ 54 (-99.14%)
Mutual labels:  vulkan, arm
Vkfft
Vulkan Fast Fourier Transform library
Stars: ✭ 594 (-90.55%)
Mutual labels:  vulkan, convolution
Ffdl
Fabric for Deep Learning (FfDL, pronounced fiddle) is a Deep Learning Platform offering TensorFlow, Caffe, PyTorch etc. as a Service on Kubernetes
Stars: ✭ 640 (-89.82%)
Mutual labels:  deep-neural-networks, ml
Ultra Light Fast Generic Face Detector 1mb
💎1MB lightweight face detection model
Stars: ✭ 6,182 (-1.62%)
Mutual labels:  arm, mnn
Ml Kws For Mcu
Keyword spotting on Arm Cortex-M Microcontrollers
Stars: ✭ 823 (-86.9%)
Mutual labels:  arm, deep-neural-networks
ncnn-android-benchmark
ncnn android benchmark app
Stars: ✭ 78 (-98.76%)
Mutual labels:  arm, vulkan
Oneflow
OneFlow is a performance-centered and open-source deep learning framework.
Stars: ✭ 2,868 (-54.36%)
Mutual labels:  deep-neural-networks, ml
ncnn-android-squeezenet
The squeezenet image classification android example
Stars: ✭ 100 (-98.41%)
Mutual labels:  arm, vulkan
Openrec
OpenRec is an open-source and modular library for neural network-inspired recommendation algorithms
Stars: ✭ 360 (-94.27%)
Mutual labels:  deep-neural-networks, ml
Vulkan best practice for mobile developers
Vulkan best practice for mobile developers
Stars: ✭ 424 (-93.25%)
Mutual labels:  vulkan, arm
Tensorflow
An Open Source Machine Learning Framework for Everyone
Stars: ✭ 161,335 (+2467.39%)
Mutual labels:  deep-neural-networks, ml
Nanodet
⚡Super fast and lightweight anchor-free object detection model. 🔥Only 980 KB(int8) / 1.8MB (fp16) and run 97FPS on cellphone🔥
Stars: ✭ 3,640 (-42.08%)
Mutual labels:  deep-neural-networks, mnn
Darkon
Toolkit to Hack Your Deep Learning Models
Stars: ✭ 231 (-96.32%)
Mutual labels:  deep-neural-networks, ml
Serving
A flexible, high-performance serving system for machine learning models
Stars: ✭ 5,306 (-15.56%)
Mutual labels:  deep-neural-networks, ml
Awesome Vulkan
Awesome Vulkan ecosystem
Stars: ✭ 2,322 (-63.05%)
Mutual labels:  vulkan, arm
Andrew Ng Notes
This is Andrew NG Coursera Handwritten Notes.
Stars: ✭ 180 (-97.14%)
Mutual labels:  deep-neural-networks, ml
Tfmesos
Tensorflow in Docker on Mesos #tfmesos #tensorflow #mesos
Stars: ✭ 194 (-96.91%)
Mutual labels:  deep-neural-networks, ml
utest
Lightweight unit testing framework for C/C++ projects. Suitable for embedded devices.
Stars: ✭ 18 (-99.71%)
Mutual labels:  arm, embedded-devices
Compression
Data compression in TensorFlow
Stars: ✭ 458 (-92.71%)
Mutual labels:  deep-neural-networks, ml

MNN

中文版本 (Chinese Version)

MNN Homepage

Intro

MNN is a highly efficient and lightweight deep learning framework. It supports inference and training of deep learning models and delivers industry-leading on-device performance for both. At present, MNN is integrated into more than 20 Alibaba Inc. apps, such as Taobao, Tmall, Youku, DingTalk, and Xianyu, covering more than 70 usage scenarios including live broadcast, short video capture, search recommendation, product search by image, interactive marketing, equity distribution, and security risk control. MNN is also used on embedded devices, such as IoT.

The design principles and performance data of MNN have been published in an MLSys 2020 paper here. Please cite MNN in your publications if it helps your research:

@inproceedings{alibaba2020mnn,
  author = {Jiang, Xiaotang and Wang, Huan and Chen, Yiliu and Wu, Ziqi and Wang, Lichuan and Zou, Bin and Yang, Yafeng and Cui, Zongyang and Cai, Yu and Yu, Tianhang and Lv, Chengfei and Wu, Zhihua},
  title = {MNN: A Universal and Efficient Inference Engine},
  booktitle = {MLSys},
  year = {2020}
}

Documentation and Tools

MNN's docs are hosted in Yuque docs here.

MNN Workbench can be downloaded from MNN's homepage; it provides pretrained models, visualized training tools, and one-click deployment of models to devices.

Key Features

High performance

  • Implements core computation with plenty of hand-optimized assembly code to make full use of the ARM CPU.
  • For iOS, GPU acceleration (Metal) can be turned on and is faster than Apple's native Core ML.
  • For Android, OpenCL, Vulkan, and OpenGL backends are available and deeply tuned for mainstream GPUs (Adreno and Mali).
  • Convolution and transposed convolution algorithms are efficient and stable. The Winograd convolution algorithm is widely used to speed up symmetric convolutions with kernel sizes from 3x3 up to 7x7.
  • Roughly a 2x speed increase on the new ARMv8.2 architecture with its FP16 half-precision calculation support.
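To illustrate why the Winograd algorithm mentioned above saves work, here is a minimal 1-D F(2,3) sketch (illustrative only, not MNN's implementation): it computes two convolution outputs with 4 multiplications instead of the direct method's 6, and the same idea extends to the 2-D tiles MNN uses.

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap filter over a
    4-element input window, using 4 multiplications instead of 6."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct_conv(d, g):
    # Reference: valid cross-correlation, 6 multiplications.
    return [sum(d[i + k] * g[k] for k in range(3)) for i in range(2)]

assert winograd_f23([1, 2, 3, 4], [5, 6, 7]) == direct_conv([1, 2, 3, 4], [5, 6, 7])
```

The filter-side transform (the `(g0 + g1 + g2) / 2` terms) is computed once per kernel and reused across the whole feature map, so in practice only the input/output transforms and the 4 multiplications recur per tile.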

Lightweight

  • Optimized for devices, with no dependencies; easily deployed to mobile phones and a variety of embedded devices.
  • iOS platform: static library size for armv7+arm64 is about 5MB, the size increase of linked executables is about 620KB, and the metallib file is about 600KB.
  • Android platform: the core .so is about 400KB, the OpenCL .so about 400KB, and the Vulkan .so about 400KB.

Versatility

  • Supports TensorFlow, Caffe, and ONNX models, and common neural network architectures such as CNNs, RNNs, and GANs.
  • The MNN model converter supports 149 TensorFlow ops, 58 TFLite ops, 47 Caffe ops, and 74 ONNX ops. Op counts by MNN hardware backend: 111 for CPU, 6 for ARMv8.2, 55 for Metal, 43 for OpenCL, and 32 for Vulkan.
  • Supports iOS 8.0+, Android 4.3+ and embedded devices with POSIX interface.
  • Supports hybrid computing on multiple devices. Currently supports CPU and GPU.

Ease of use

  • Efficient image processing module that speeds up affine transforms and color space conversions without depending on libyuv or OpenCV.
  • Provides callbacks throughout the workflow to extract data or control execution precisely.
  • Provides options for selecting inference branches and parallelizing branches across CPU and GPU.
  • (BETA) The MNN Python API helps ML engineers build, train, and quantize models with MNN without dipping their toes in C++ code.
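The quantization step mentioned above ultimately maps float tensors onto int8 via a scale and zero point. The sketch below shows the standard affine quantization arithmetic such tooling applies; it is a generic illustration with made-up function names, not MNN's actual quantizer API (see MNN's docs for that).

```python
def affine_params(fmin, fmax, qmin=-128, qmax=127):
    """Derive (scale, zero_point) mapping [fmin, fmax] onto int8.
    The representable range must include 0.0 so that zero quantizes exactly."""
    fmin, fmax = min(fmin, 0.0), max(fmax, 0.0)
    scale = (fmax - fmin) / (qmax - qmin)
    zero_point = round(qmin - fmin / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    # Round to the nearest quantized level, then clamp to int8 range.
    return max(qmin, min(qmax, round(x / scale) + zero_point))

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

scale, zp = affine_params(-1.0, 3.0)
q = quantize(1.5, scale, zp)
# Round-tripping loses at most one quantization step of precision.
assert abs(dequantize(q, scale, zp) - 1.5) < scale
```

Real quantizers additionally choose `fmin`/`fmax` per tensor (or per channel) from calibration data, which is where most of the accuracy/size trade-off lives.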

Architecture

architecture

MNN can be divided into two parts: Converter and Interpreter.

Converter consists of Frontends and Graph Optimize. The former supports different training frameworks; MNN currently supports TensorFlow, TensorFlow Lite, Caffe, and ONNX (the export path for PyTorch/MXNet). The latter optimizes graphs by operator fusion, operator substitution, and layout adjustment.

Interpreter consists of Engine and Backends. The former handles model loading and computation-graph scheduling; the latter covers memory allocation and op implementations for each computing device. Across Engine and Backends, MNN applies a variety of optimization schemes, including the Winograd algorithm for convolution and deconvolution, the Strassen algorithm for matrix multiplication, low-precision calculation, NEON optimization, hand-written assembly, multi-threading, memory reuse, and heterogeneous computing.
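As a concrete illustration of the Strassen idea mentioned above (not MNN's implementation, which operates on matrix blocks in C++): one Strassen level replaces the 8 multiplications of a 2x2 block product with 7, at the cost of extra additions.

```python
def strassen_2x2(A, B):
    """One Strassen step on 2x2 matrices: 7 multiplications instead of 8.
    Applied recursively to blocks, this lowers matmul complexity
    from O(n^3) to roughly O(n^2.81)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

assert strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```

Because the savings only pay off once the blocks are large enough to amortize the extra additions, production implementations (MNN included, per the paper) fall back to the direct algorithm below a size threshold.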

How to Discuss and Get Help From MNN Community

Scan the following QR codes to join the DingTalk discussion groups. Discussion is predominantly in Chinese, but English speakers are welcome and will get help.

Group #1 (Full):

Group #2 (Full):

Group #3:

License

Apache 2.0

Acknowledgement

MNN participants: the Taobao Technology Department, the Search Engineering Team, the DAMO Team, Youku, and other Alibaba Group employees.

MNN refers to the following projects:
