
google / Xnnpack

Licence: other
High-efficiency floating-point neural network inference operators for mobile, server, and Web

Programming Languages

C

Projects that are alternatives of or similar to Xnnpack

Nnpack
Acceleration package for neural networks on multi-core CPUs
Stars: ✭ 1,538 (+90.35%)
Mutual labels:  multithreading, cpu, simd, neural-networks, inference
Komputation
Komputation is a neural network framework for the Java Virtual Machine written in Kotlin and CUDA C.
Stars: ✭ 295 (-63.49%)
Mutual labels:  convolutional-neural-networks, neural-networks
Deeplearning.ai Assignments
Stars: ✭ 268 (-66.83%)
Mutual labels:  convolutional-neural-networks, neural-networks
Cppflow
Run TensorFlow models in C++ without installation and without Bazel
Stars: ✭ 357 (-55.82%)
Mutual labels:  neural-networks, inference
CPURasterizer
CPU Based Rasterizer Engine
Stars: ✭ 99 (-87.75%)
Mutual labels:  cpu, multithreading
Netket
Machine learning algorithms for many-body quantum systems
Stars: ✭ 256 (-68.32%)
Mutual labels:  convolutional-neural-networks, neural-networks
Artificio
Deep Learning Computer Vision Algorithms for Real-World Use
Stars: ✭ 326 (-59.65%)
Mutual labels:  convolutional-neural-networks, neural-networks
cpuwhat
Nim utilities for advanced CPU operations: CPU identification, ISA extension detection, bindings to assorted intrinsics
Stars: ✭ 25 (-96.91%)
Mutual labels:  cpu, simd
Layer
Neural network inference the Unix way
Stars: ✭ 539 (-33.29%)
Mutual labels:  convolutional-neural-networks, neural-networks
Dll
Fast Deep Learning Library (DLL) for C++ (ANNs, CNNs, RBMs, DBNs...)
Stars: ✭ 605 (-25.12%)
Mutual labels:  cpu, convolutional-neural-networks
Tensorflow 101
TensorFlow 101: Introduction to Deep Learning for Python Within TensorFlow
Stars: ✭ 642 (-20.54%)
Mutual labels:  convolutional-neural-networks, neural-networks
Planeverb
Project Planeverb is a CPU based real-time wave-based acoustics engine for games. It comes with an integration with the Unity Engine.
Stars: ✭ 22 (-97.28%)
Mutual labels:  cpu, multithreading
BMW-IntelOpenVINO-Detection-Inference-API
This is a repository for a no-code object detection inference API using OpenVINO. It is supported on both Windows and Linux operating systems.
Stars: ✭ 66 (-91.83%)
Mutual labels:  cpu, inference
Sincnet
SincNet is a neural architecture for efficiently processing raw audio samples.
Stars: ✭ 764 (-5.45%)
Mutual labels:  convolutional-neural-networks, neural-networks
BMW-IntelOpenVINO-Segmentation-Inference-API
This is a repository for a semantic segmentation inference API using the OpenVINO toolkit
Stars: ✭ 31 (-96.16%)
Mutual labels:  cpu, inference
Cs231
Complete Assignments for CS231n: Convolutional Neural Networks for Visual Recognition
Stars: ✭ 317 (-60.77%)
Mutual labels:  convolutional-neural-networks, neural-networks
Tensorflow Tutorial
TensorFlow and Deep Learning Tutorials
Stars: ✭ 748 (-7.43%)
Mutual labels:  convolutional-neural-networks, neural-networks
SoftLight
A shader-based Software Renderer Using The LightSky Framework.
Stars: ✭ 2 (-99.75%)
Mutual labels:  multithreading, simd
Easy Deep Learning With Keras
Keras tutorial for beginners (using TF backend)
Stars: ✭ 367 (-54.58%)
Mutual labels:  convolutional-neural-networks, neural-networks
Neurec
Next RecSys Library
Stars: ✭ 731 (-9.53%)
Mutual labels:  convolutional-neural-networks, neural-networks

XNNPACK

XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as TensorFlow Lite, TensorFlow.js, PyTorch, and MediaPipe.

Supported Architectures

  • ARM64 on Android, Linux, macOS, and iOS (including WatchOS and tvOS)
  • ARMv7 (with NEON) on Android, Linux, and iOS (including WatchOS)
  • x86 and x86-64 (up to AVX512) on Windows, Linux, macOS, Android, and iOS simulator
  • WebAssembly MVP
  • WebAssembly SIMD (experimental)

Operator Coverage

XNNPACK implements the following neural network operators:

  • 2D Convolution (including grouped and depthwise)
  • 2D Deconvolution (AKA Transposed Convolution)
  • 2D Average Pooling
  • 2D Max Pooling
  • 2D ArgMax Pooling (Max Pooling + indices)
  • 2D Unpooling
  • 2D Bilinear Resize
  • 2D Depth-to-Space (AKA Pixel Shuffle)
  • Add (including broadcasting, two inputs only)
  • Subtract (including broadcasting)
  • Divide (including broadcasting)
  • Maximum (including broadcasting)
  • Minimum (including broadcasting)
  • Multiply (including broadcasting)
  • Squared Difference (including broadcasting)
  • Global Average Pooling
  • Channel Shuffle
  • Fully Connected
  • Abs (absolute value)
  • Bankers' Rounding (rounding to nearest, ties to even)
  • Ceiling (rounding to integer above)
  • Clamp (includes ReLU and ReLU6)
  • Copy
  • ELU
  • Floor (rounding to integer below)
  • HardSwish
  • Leaky ReLU
  • Negate
  • Sigmoid
  • Softmax
  • Square
  • Truncation (rounding to integer towards zero)
  • PReLU

All operators in XNNPACK support NHWC layout, and additionally allow a custom stride along the channel dimension. Operators can therefore consume a subset of the channels in the input tensor and produce a subset of the channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.

Performance

Mobile phones

The table below presents single-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.

Model                 Pixel, ms   Pixel 2, ms   Pixel 3a, ms
MobileNet v1 1.0X            82            86             88
MobileNet v2 1.0X            49            53             55
MobileNet v3 Large           39            42             44
MobileNet v3 Small           12            14             14

The following table presents multi-threaded (using as many threads as there are big cores) performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.

Model                 Pixel, ms   Pixel 2, ms   Pixel 3a, ms
MobileNet v1 1.0X            43            27             46
MobileNet v2 1.0X            26            18             28
MobileNet v3 Large           22            16             24
MobileNet v3 Small            7             6              8

Benchmarked on March 27, 2020 with end2end_bench --benchmark_min_time=5 on an Android/ARM64 build with Android NDK r21 (bazel build -c opt --config android_arm64 :end2end_bench) and neural network models with randomized weights and inputs.

Raspberry Pi

The table below presents multi-threaded performance of the XNNPACK library on three generations of MobileNet models and four generations of Raspberry Pi boards.

Model                 RPi Zero W (BCM2835), ms   RPi 2 (BCM2836), ms   RPi 3+ (BCM2837B0), ms   RPi 4 (BCM2711), ms
MobileNet v1 1.0X                         4004                   337                      116                    72
MobileNet v2 1.0X                         2011                   195                       83                    41
MobileNet v3 Large                        1694                   163                       70                    38
MobileNet v3 Small                         482                    52                       23                    13

Benchmarked on May 22, 2020 with end2end-bench --benchmark_min_time=5 on a Raspbian Buster build with CMake (./scripts/build-local.sh) and neural network models with randomized weights and inputs.

Publications

Ecosystem

Machine Learning Frameworks

Acknowledgements

XNNPACK is based on the QNNPACK library. Over time, however, the codebase has diverged significantly, and the XNNPACK API is no longer compatible with QNNPACK.
