
NervanaSystems / Ngraph

Licence: apache-2.0
nGraph has moved to OpenVINO

Projects that are alternatives to or similar to Ngraph

Deepc
Vendor-independent deep learning library, compiler, and inference framework for microcomputers and microcontrollers
Stars: ✭ 260 (-80.33%)
Mutual labels:  compiler, deep-neural-networks, onnx, performance
Gluon2pytorch
Gluon to PyTorch deep neural network model converter
Stars: ✭ 70 (-94.7%)
Mutual labels:  deep-neural-networks, mxnet, onnx
Deepo
Setup and customize deep learning environment in seconds.
Stars: ✭ 6,145 (+364.83%)
Mutual labels:  mxnet, onnx, caffe2
Netron
Visualizer for neural network, deep learning, and machine learning models
Stars: ✭ 17,193 (+1200.53%)
Mutual labels:  mxnet, onnx, caffe2
Onnx
Open standard for machine learning interoperability
Stars: ✭ 11,829 (+794.78%)
Mutual labels:  deep-neural-networks, mxnet, onnx
Deep Learning In Production
In this repository, I will share some useful notes and references about deploying deep learning-based models in production.
Stars: ✭ 3,104 (+134.8%)
Mutual labels:  deep-neural-networks, mxnet, caffe2
Yolo2 Pytorch
PyTorch implementation of the YOLO (You Only Look Once) v2
Stars: ✭ 426 (-67.78%)
Mutual labels:  deep-neural-networks, onnx, caffe2
Fastexpressioncompiler
Fast ExpressionTree compiler to delegate
Stars: ✭ 631 (-52.27%)
Mutual labels:  compiler, performance
Windows Machine Learning
Samples and Tools for Windows ML.
Stars: ✭ 663 (-49.85%)
Mutual labels:  onnx, caffe2
Tiramisu
A polyhedral compiler for expressing fast and portable data parallel algorithms
Stars: ✭ 685 (-48.18%)
Mutual labels:  compiler, deep-neural-networks
Onnx Tensorflow
Tensorflow Backend for ONNX
Stars: ✭ 846 (-36.01%)
Mutual labels:  deep-neural-networks, onnx
Mmdnn
MMdnn is a set of tools to help users inter-operate among different deep learning frameworks, e.g. model conversion and visualization. Convert models between Caffe, Keras, MXNet, TensorFlow, CNTK, PyTorch, ONNX, and CoreML.
Stars: ✭ 5,472 (+313.92%)
Mutual labels:  mxnet, onnx
Tusimple Duc
Understanding Convolution for Semantic Segmentation
Stars: ✭ 567 (-57.11%)
Mutual labels:  deep-neural-networks, mxnet
Nimporter
Compile Nim Extensions for Python On Import!
Stars: ✭ 474 (-64.15%)
Mutual labels:  compiler, performance
Halide
a language for fast, portable data-parallel computation
Stars: ✭ 4,722 (+257.19%)
Mutual labels:  compiler, performance
Multi Model Server
Multi Model Server is a tool for serving neural net models for inference
Stars: ✭ 770 (-41.75%)
Mutual labels:  mxnet, onnx
Onnx R
R Interface to Open Neural Network Exchange (ONNX)
Stars: ✭ 31 (-97.66%)
Mutual labels:  deep-neural-networks, onnx
Tvm
Open deep learning compiler stack for cpu, gpu and specialized accelerators
Stars: ✭ 7,494 (+466.87%)
Mutual labels:  compiler, performance
Onnx Scala
An ONNX (Open Neural Network eXchange) API and Backend for Typeful, Functional Deep Learning in Scala
Stars: ✭ 68 (-94.86%)
Mutual labels:  deep-neural-networks, onnx
Caffe2
Caffe2 is a lightweight, modular, and scalable deep learning framework.
Stars: ✭ 8,409 (+536.08%)
Mutual labels:  deep-neural-networks, caffe2

nGraph has moved to OpenVINO: https://github.com/openvinotoolkit/openvino

nGraph Compiler stack

Quick start

To begin using nGraph with popular frameworks, please refer to the links below.

Framework (Version)   Installation guide                 Notes
TensorFlow*           Pip install or Build from source   20 Validated workloads
ONNX 1.5              Pip install                        17 Validated workloads
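
For ONNX, the typical flow is to load a serialized model with the onnx package, import it into nGraph, and compile it for a backend. The snippet below is a hedged sketch that assumes the ngraph-onnx package and its import_onnx_model helper are installed; the model path and the CPU backend name are placeholders.

# Hedged sketch: running an ONNX model through nGraph.
# Assumes the ngraph-onnx package provides import_onnx_model as in its project README;
# "model.onnx" and the "CPU" backend name are placeholders.
import onnx
import ngraph as ng
from ngraph_onnx.onnx_importer.importer import import_onnx_model

model_proto = onnx.load("model.onnx")         # load the serialized ONNX model
ng_function = import_onnx_model(model_proto)  # convert it to an nGraph function

runtime = ng.runtime(backend_name="CPU")      # select a backend
infer = runtime.computation(ng_function)      # compile the function
# outputs = infer(input_array)                # run inference on a NumPy input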

Python wheels for nGraph

The Python wheels for nGraph have been tested and are supported on the following 64-bit systems:

  • Ubuntu 16.04 or later
  • CentOS 7.6
  • Debian 10
  • macOS 10.14.3 (Mojave)

To install via pip, run:

pip install --upgrade pip==19.3.1
pip install ngraph-core
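
Once the wheel is installed, you can build and execute a small computational graph directly from Python. The snippet below is a minimal sketch based on the Python API exposed by ngraph-core; the parameter shapes, names, and the CPU backend choice are illustrative assumptions.

# Minimal sketch: building and running a graph with the nGraph Python API.
# Assumes ngraph-core exposes parameter/runtime/computation as in the nGraph Python docs;
# shapes, names, and the backend choice are illustrative.
import numpy as np
import ngraph as ng

A = ng.parameter(shape=[2, 2], name="A", dtype=np.float32)  # placeholder input
B = ng.parameter(shape=[2, 2], name="B", dtype=np.float32)  # placeholder input
model = (A + B) * A                                         # element-wise graph

runtime = ng.runtime(backend_name="CPU")                    # compile for the CPU backend
computation = runtime.computation(model, A, B)
result = computation(np.ones((2, 2), dtype=np.float32),
                     np.ones((2, 2), dtype=np.float32))
print(result)                                               # expect a 2x2 array of 2.0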

Frameworks using the nGraph Compiler stack to execute workloads have shown up to a 45X performance boost compared to native framework implementations. We've also seen performance boosts when running workloads that are not included in the list of Validated workloads, thanks to nGraph's powerful subgraph pattern matching.

Additionally, we have integrated nGraph with PlaidML to provide deep learning performance acceleration on Intel, NVIDIA, and AMD GPUs. More details on the current architecture of the nGraph Compiler stack can be found in Architecture and features, and recent changes to the stack are explained in the Release Notes.

What is nGraph Compiler?

nGraph Compiler aims to accelerate developing AI workloads using any deep learning framework and deploying to a variety of hardware targets. We strongly believe in providing freedom, performance, and ease-of-use to AI developers.

The diagram below shows the deep learning frameworks and hardware targets supported by nGraph. NNP-T and NNP-I in the diagram refer to Intel's next-generation deep learning accelerators: the Intel® Nervana™ Neural Network Processors for Training and Inference, respectively. Future plans for supporting additional deep learning frameworks and backends are outlined in the ecosystem section.

Our documentation has extensive information about how to use the nGraph Compiler stack to create an nGraph computational graph, integrate custom frameworks, and interact with supported backends. If you wish to contribute to the project, please don't hesitate to ask questions in GitHub issues after reviewing our contribution guide below.

How to contribute

We welcome community contributions to nGraph. If you have an idea for how to improve it:

  • See the contrib guide for code formatting and style guidelines.
  • Share your proposal via GitHub issues.
  • Ensure you can build the product and run all the examples with your patch.
  • In the case of a larger feature, create a test.
  • Submit a pull request.
  • Make sure your PR passes all CI tests. Note: You can test locally with make check.

We will review your contribution and, if any additional fixes or modifications are necessary, may provide feedback to guide you. When accepted, your pull request will be merged to the repository.
