Licence: Apache 2.0

CLBlast: The tuned OpenCL BLAS library

Build status: continuous-integration builds run on Windows, Linux, and OS X, with tests on Intel CPUs and NVIDIA GPUs.

CLBlast is a modern, lightweight, performant and tunable OpenCL BLAS library written in C++11. It is designed to leverage the full performance potential of a wide variety of OpenCL devices from different vendors, including desktop and laptop GPUs, embedded GPUs, and other accelerators. CLBlast implements BLAS routines: basic linear algebra subprograms operating on vectors and matrices. See the CLBlast website for performance reports on various devices as well as the latest CLBlast news.

The library is not tuned for all possible OpenCL devices: if out-of-the-box performance is poor, please run the tuners first. See below for a list of already tuned devices and instructions on how to tune yourself and contribute to future releases of the CLBlast library. See also the CLBlast feature roadmap to get an indication of the future of CLBlast.
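If your device is not covered by the shipped tuning database, the tuners can be built and run locally. A sketch of the typical flow, assuming the standard CMake build (the `-DTUNERS=ON` flag is part of CLBlast's CMake configuration; the exact tuner binary names may differ per version):

```shell
cmake -DTUNERS=ON ..
make
# Run e.g. the GEMM tuner for the currently selected OpenCL device:
./clblast_tuner_xgemm
```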

Why CLBlast and not clBLAS or cuBLAS?

Use CLBlast instead of clBLAS:

  • When you care about achieving maximum performance.
  • When you want to be able to inspect the BLAS kernels or easily customize them to your needs.
  • When you run on exotic OpenCL devices for which you need to tune yourself.
  • When you are still running on OpenCL 1.1 hardware.
  • When you prefer a C++ API over a C API (C API also available in CLBlast).
  • When you value an organized and modern C++ codebase.
  • When you target Intel CPUs and GPUs or embedded devices.
  • When you can benefit from the increased performance of half-precision fp16 data-types.

Use CLBlast instead of cuBLAS:

  • When you want your code to run on devices other than NVIDIA CUDA-enabled GPUs.
  • When you want to tune for a specific configuration (e.g. rectangular matrix-sizes).
  • When you sleep better if you know that the library you use is open-source.
  • When you are using OpenCL rather than CUDA.

When not to use CLBlast:

  • When you run on NVIDIA's CUDA-enabled GPUs only and can benefit from cuBLAS's assembly-level tuned kernels.

Getting started

CLBlast can be compiled with minimal dependencies (apart from OpenCL) in the usual CMake-way, e.g.:

mkdir build && cd build
cmake ..
make

Detailed instructions for various platforms can be found here.

Like clBLAS and cuBLAS, CLBlast also requires OpenCL device buffers as arguments to its routines. This means you'll have full control over the OpenCL buffers and the host-device memory transfers. CLBlast's API is designed to resemble clBLAS's C API as much as possible, requiring little integration effort in case clBLAS was previously used. Using CLBlast starts by including the C++ header:

#include <clblast.h>

Or alternatively the plain C version:

#include <clblast_c.h>

Afterwards, any of CLBlast's routines can be called directly: there is no need to initialize the library. The available routines and their required arguments are described in the above-mentioned include files and the included API documentation. The API is kept as close as possible to the Netlib BLAS and the cuBLAS/clBLAS APIs. For an overview of the supported routines, see here.
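As an illustration, a single-precision GEMM call through the C++ API might look as follows. This is a sketch only: it assumes an existing OpenCL context, a command queue `queue`, an event `event`, and pre-allocated device buffers `a_buf`, `b_buf` and `c_buf` (hypothetical names), and it cannot run without an OpenCL device.

```cpp
#include <clblast.h>

// C = alpha * A * B + beta * C, with A (m x k), B (k x n), C (m x n),
// all stored row-major in cl_mem device buffers.
auto status = clblast::Gemm(clblast::Layout::kRowMajor,
                            clblast::Transpose::kNo, clblast::Transpose::kNo,
                            m, n, k,
                            1.0f,          // alpha
                            a_buf, 0, k,   // buffer, offset, leading dimension
                            b_buf, 0, n,
                            0.0f,          // beta
                            c_buf, 0, n,
                            &queue, &event);
if (status != clblast::StatusCode::kSuccess) {
  // handle the error, e.g. by inspecting static_cast<int>(status)
}
```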

To get started quickly, a couple of stand-alone example programs are included in the samples subfolder. They can optionally be compiled using the CMake infrastructure of CLBlast by providing the -DSAMPLES=ON flag, for example as follows:

cmake -DSAMPLES=ON ..

Afterwards, you can optionally read more about running proper benchmarks and tuning the library.

Full documentation

More detailed documentation is available in separate files:

Known issues

Known performance related issues:

  • Severe performance issues with Beignet v1.3.0 due to missing support for local memory. Please downgrade to v1.2.1 or upgrade to v1.3.1 or newer.

  • Performance issues on Qualcomm Adreno GPUs.

Other known issues:

  • Routines returning an integer are currently not properly tested for half-precision FP16: IHAMAX/IHAMIN/IHMAX/IHMIN

  • Half-precision FP16 tests might sometimes fail depending on the order of multiplication, i.e. (a * b) * c != (c * b) * a

  • The AMD APP SDK has a bug causing a conflict with libstdc++, resulting in a segfault when initialising static variables. This has been reported to occur with the CLBlast tuners.

  • The AMD run-time compiler has a bug causing it to get stuck in an infinite loop. This is reported to happen occasionally when tuning the CLBlast GEMM routine.

  • AMD Southern Islands GPUs might produce wrong results with the amdgpu-pro drivers. Configure CMake with the AMD_SI_EMPTY_KERNEL_WORKAROUND option to resolve the issue, see issue #301.

  • Tests might fail on an Intel IvyBridge GPU with the latest Beignet. Please downgrade Beignet to 1.2.1, see issue #231.

Contributing

Contributions are welcome in the form of tuning results for OpenCL devices previously untested or pull requests. See the contributing guidelines for more details.

The main contributing authors (code, pull requests, testing) are:

Tuning and testing on a variety of OpenCL devices was made possible by:

Hardware/software for this project was contributed by:

More information

Further information on CLBlast is available through the following links:

  • A 20-minute presentation of CLBlast was given at the GPU Technology Conference in May 2017. A recording is available on the GTC on-demand website (albeit with poor audio quality) and a full slide set is also available as a PDF. An updated version was presented at IWOCL in May 2018; that slide set can be found here as a PDF.
  • More in-depth information and experimental results are also available in a scientific paper titled CLBlast: A Tuned OpenCL BLAS Library (v1 May 2017, updated to v2 in April 2018). For CLTune, the inspiration for the included auto-tuner, see also the CLTune: A Generic Auto-Tuner for OpenCL Kernels paper.

How to cite this work:

Cedric Nugteren. CLBlast: A Tuned OpenCL BLAS Library. In IWOCL'18: International Workshop
on OpenCL. ACM, New York, NY, USA, 10 pages. 2018. https://doi.org/10.1145/3204919.3204924

Support us

This project started in March 2015 as a free-time evenings-and-weekends project alongside a full-time job for Cedric Nugteren. If you are in a position to support the project through OpenCL-hardware donations or otherwise, please find contact information on the website of the main author.
