
intel / Compute Runtime

License: MIT
Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver

Projects that are alternatives of or similar to Compute Runtime

Ilgpu
ILGPU JIT Compiler for high-performance .Net GPU programs
Stars: ✭ 374 (-36.93%)
Mutual labels:  intel, gpu, gpgpu, opencl
Parenchyma
An extensible HPC framework for CUDA, OpenCL and native CPU.
Stars: ✭ 71 (-88.03%)
Mutual labels:  intel, gpu, gpgpu, opencl
Arrayfire
ArrayFire: a general purpose GPU library.
Stars: ✭ 3,693 (+522.77%)
Mutual labels:  gpu, gpgpu, opencl
Occa
JIT Compilation for Multiple Architectures: C++, OpenMP, CUDA, HIP, OpenCL, Metal
Stars: ✭ 230 (-61.21%)
Mutual labels:  gpu, gpgpu, opencl
Hipsycl
Implementation of SYCL for CPUs, AMD GPUs, NVIDIA GPUs
Stars: ✭ 377 (-36.42%)
Mutual labels:  gpu, gpgpu, opencl
Compute
A C++ GPU Computing Library for OpenCL
Stars: ✭ 1,192 (+101.01%)
Mutual labels:  gpu, gpgpu, opencl
Cekirdekler
Multi-device OpenCL kernel load balancer and pipeliner API for C#. Uses a shared-distributed memory model to keep GPUs updated fast while using the same kernel on all devices (for simplicity).
Stars: ✭ 76 (-87.18%)
Mutual labels:  gpu, gpgpu, opencl
Futhark
💥💻💥 A data-parallel functional programming language
Stars: ✭ 1,641 (+176.73%)
Mutual labels:  gpu, gpgpu, opencl
Tf Coriander
OpenCL 1.2 implementation for Tensorflow
Stars: ✭ 775 (+30.69%)
Mutual labels:  intel, gpu, opencl
Bitcracker
BitCracker is the first open source password cracking tool for memory units encrypted with BitLocker
Stars: ✭ 463 (-21.92%)
Mutual labels:  gpu, gpgpu, opencl
Aparapi
The New Official Aparapi: a framework for executing native Java and Scala code on the GPU.
Stars: ✭ 352 (-40.64%)
Mutual labels:  gpu, gpgpu, opencl
John
John the Ripper jumbo - advanced offline password cracker, which supports hundreds of hash and cipher types, and runs on many operating systems, CPUs, GPUs, and even some FPGAs
Stars: ✭ 5,656 (+853.79%)
Mutual labels:  gpu, gpgpu, opencl
Neanderthal
Fast Clojure Matrix Library
Stars: ✭ 927 (+56.32%)
Mutual labels:  gpu, gpgpu, opencl
Arrayfire Python
Python bindings for ArrayFire: A general purpose GPU library.
Stars: ✭ 358 (-39.63%)
Mutual labels:  gpu, gpgpu, opencl
Coriander
Build NVIDIA® CUDA™ code for OpenCL™ 1.2 devices
Stars: ✭ 665 (+12.14%)
Mutual labels:  intel, gpu, opencl
rindow-neuralnetworks
Neural networks library for machine learning on PHP
Stars: ✭ 37 (-93.76%)
Mutual labels:  gpu, opencl, gpgpu
Arrayfire Rust
Rust wrapper for ArrayFire
Stars: ✭ 525 (-11.47%)
Mutual labels:  gpu, gpgpu, opencl
Bayadera
High-performance Bayesian Data Analysis on the GPU in Clojure
Stars: ✭ 342 (-42.33%)
Mutual labels:  gpu, opencl
Clblast
Tuned OpenCL BLAS
Stars: ✭ 559 (-5.73%)
Mutual labels:  gpu, opencl
Realsr Ncnn Vulkan
RealSR super resolution implemented with ncnn library
Stars: ✭ 357 (-39.8%)
Mutual labels:  intel, gpu

Intel(R) Graphics Compute Runtime for oneAPI Level Zero and OpenCL(TM) Driver

Introduction

The Intel(R) Graphics Compute Runtime for oneAPI Level Zero and OpenCL(TM) Driver is an open source project providing compute API support (Level Zero, OpenCL) for Intel graphics hardware architectures (HD Graphics, Xe).

What is NEO?

NEO is the shorthand name for Compute Runtime contained within this repository. It is also a development mindset that we adopted when we first started the implementation effort for OpenCL.

The project evolved beyond a single API and NEO no longer implies a specific API. When talking about a specific API, we will mention it by name (e.g. Level Zero, OpenCL).

License

The Intel(R) Graphics Compute Runtime for oneAPI Level Zero and OpenCL(TM) Driver is distributed under the MIT License.

You may obtain a copy of the License at: https://opensource.org/licenses/MIT

Supported Platforms

Platform | OpenCL | Level Zero
Intel Core Processors with Gen8 graphics devices (formerly Broadwell) | 3.0 | -
Intel Core Processors with Gen9 graphics devices (formerly Skylake, Kaby Lake, Coffee Lake) | 3.0 | Y
Intel Atom Processors with Gen9 graphics devices (formerly Apollo Lake, Gemini Lake) | 3.0 | -
Intel Core Processors with Gen11 graphics devices (formerly Ice Lake) | 3.0 | Y
Intel Atom Processors with Gen11 graphics devices (formerly Elkhart Lake) | 3.0 | -
Intel Core Processors with Gen12 graphics devices (formerly Tiger Lake) | 3.0 | Y
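A quick way to check whether a supported Intel graphics device is present at all is to look at the PCI vendor id (0x8086 for Intel) exposed through Linux sysfs. This is a sketch assuming a standard Linux DRM setup; the helper function name is ours:

```shell
# Report Intel graphics devices by matching the PCI vendor id 0x8086
# in the standard DRM sysfs layout.
is_intel_vendor() {
  [ "$1" = "0x8086" ]
}

for vendor_file in /sys/class/drm/card*/device/vendor; do
  # Skip if the glob did not match any card
  [ -e "$vendor_file" ] || continue
  if is_intel_vendor "$(cat "$vendor_file")"; then
    echo "Intel graphics device: ${vendor_file%/device/vendor}"
  fi
done
```

Mapping a detected device to a specific graphics generation still requires the PCI device id, which this sketch does not attempt.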

Release cadence

  • Once a week, we run an extended validation cycle on a selected driver.
  • When the extended validation cycle passes, the corresponding commit on github is tagged using the format yy.ww.bbbb (yy - year, ww - work week, bbbb - incremental build number).
  • Typically, for weekly tags we will post a binary release (e.g. deb).
  • The quality level of the driver (per platform) will be provided in the Release Notes.
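The yy.ww.bbbb tag scheme can be checked mechanically, which is handy in packaging scripts. A minimal sketch (the sample tag below is hypothetical):

```shell
# Validate a weekly release tag of the form yy.ww.bbbb:
# two-digit year, two-digit work week, incremental build number.
is_neo_tag() {
  printf '%s\n' "$1" | grep -Eq '^[0-9]{2}\.[0-9]{2}\.[0-9]+$'
}

is_neo_tag "24.26.30049" && echo "looks like a NEO weekly tag"
```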

Installation Options

To allow NEO to access the GPU device, make sure the user has read/write permissions to the /dev/dri/renderD* files.
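A permission check can be scripted; the sketch below assumes a typical Linux setup where render nodes are owned by a group commonly named "render" (the group name can vary by distribution):

```shell
# Verify the current user can open the DRM render nodes NEO uses.
can_use_render_node() {
  [ -r "$1" ] && [ -w "$1" ]
}

for node in /dev/dri/renderD*; do
  # Skip if the glob did not match any render node
  [ -e "$node" ] || continue
  if can_use_render_node "$node"; then
    echo "access OK: $node"
  else
    echo "no access to $node (try: sudo gpasswd -a \$USER render, then re-login)"
  fi
done
```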

Via system package manager

NEO is available for installation on a variety of Linux distributions and can be installed via the distro's package manager.

For example, on Ubuntu* 20.04:

apt-get install intel-opencl-icd

Procedures for other distributions.
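After installation, the OpenCL ICD loader discovers NEO through a vendor file, conventionally placed under /etc/OpenCL/vendors (per the Khronos ICD loader convention; the exact file name may differ by distribution). A quick presence check, with the directory parameterized so it can be pointed elsewhere:

```shell
# Check whether any OpenCL ICD vendor file is registered.
# Defaults to the conventional /etc/OpenCL/vendors directory.
icd_registered() {
  ls "${1:-/etc/OpenCL/vendors}"/*.icd >/dev/null 2>&1
}

if icd_registered; then
  echo "an OpenCL ICD vendor file is present"
else
  echo "no ICD vendor files found"
fi
```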

Manual download

.deb packages for Ubuntu are provided, along with installation instructions and Release Notes, on the release page.

Linking applications

Directly linking to the runtime library is not supported; applications should instead link against the OpenCL ICD loader or the oneAPI Level Zero loader.
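In practice this means the link line names the loader library, not the NEO runtime itself. A sketch composing such link commands (file names are illustrative, and the helper functions are ours):

```shell
# Compose link commands that target the loader libraries rather than
# the NEO runtime library itself.
opencl_link_cmd() {
  echo "gcc $1 -o ${1%.c} -lOpenCL"      # OpenCL apps link the ICD loader
}
level_zero_link_cmd() {
  echo "gcc $1 -o ${1%.c} -lze_loader"   # Level Zero apps link the L0 loader
}

opencl_link_cmd demo.c
```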

Dependencies

NEO depends on the Intel Graphics Compiler (IGC) and the Intel Graphics Memory Management Library (gmmlib). In addition, to enable performance counters support, the Intel Metrics Discovery (MDAPI) and Metrics Library packages are needed.

How to provide feedback

Please submit issues using the native github.com interface.

How to contribute

Create a pull request on github.com with your patch. Make sure your change builds cleanly and passes ULTs (unit-level tests). A maintainer will contact you if there are questions or concerns. See the contribution guidelines for more details.

See also

Level Zero specific

OpenCL specific

(*) Other names and brands may be claimed as property of others.
