
illuhad / hipSYCL

License: BSD-2-Clause
Implementation of SYCL for CPUs, AMD GPUs, and NVIDIA GPUs

Projects that are alternatives to or similar to hipSYCL

Neanderthal
Fast Clojure Matrix Library
Stars: ✭ 927 (+145.89%)
Mutual labels:  gpu, gpgpu, opencl, cuda, high-performance-computing, gpu-computing
Bayadera
High-performance Bayesian Data Analysis on the GPU in Clojure
Stars: ✭ 342 (-9.28%)
Mutual labels:  gpu, opencl, cuda, high-performance-computing, gpu-computing
Arraymancer
A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends
Stars: ✭ 793 (+110.34%)
Mutual labels:  gpgpu, opencl, cuda, high-performance-computing, gpu-computing
Arrayfire
ArrayFire: a general purpose GPU library.
Stars: ✭ 3,693 (+879.58%)
Mutual labels:  gpu, gpgpu, opencl, cuda
Cuda Api Wrappers
Thin C++-flavored wrappers for the CUDA Runtime API
Stars: ✭ 362 (-3.98%)
Mutual labels:  gpu, gpgpu, cuda, gpu-computing
Stdgpu
stdgpu: Efficient STL-like Data Structures on the GPU
Stars: ✭ 531 (+40.85%)
Mutual labels:  gpu, gpgpu, cuda, gpu-computing
Parenchyma
An extensible HPC framework for CUDA, OpenCL and native CPU.
Stars: ✭ 71 (-81.17%)
Mutual labels:  gpu, gpgpu, opencl, cuda
MatX
An efficient C++17 GPU numerical computing library with Python-like syntax
Stars: ✭ 418 (+10.88%)
Mutual labels:  gpu, cuda, gpgpu, gpu-computing
Futhark
💥💻💥 A data-parallel functional programming language
Stars: ✭ 1,641 (+335.28%)
Mutual labels:  gpu, gpgpu, opencl, cuda
Vuh
Vulkan compute for people
Stars: ✭ 264 (-29.97%)
Mutual labels:  gpu, gpgpu, high-performance-computing, gpu-computing
Tf Quant Finance
High-performance TensorFlow library for quantitative finance.
Stars: ✭ 2,925 (+675.86%)
Mutual labels:  gpu, high-performance, high-performance-computing, gpu-computing
Arrayfire Rust
Rust wrapper for ArrayFire
Stars: ✭ 525 (+39.26%)
Mutual labels:  gpu, gpgpu, opencl, cuda
Bitcracker
BitCracker is the first open source password cracking tool for memory units encrypted with BitLocker
Stars: ✭ 463 (+22.81%)
Mutual labels:  gpu, gpgpu, opencl, cuda
Arrayfire Python
Python bindings for ArrayFire: A general purpose GPU library.
Stars: ✭ 358 (-5.04%)
Mutual labels:  gpu, gpgpu, opencl, cuda
Ilgpu
ILGPU JIT Compiler for high-performance .Net GPU programs
Stars: ✭ 374 (-0.8%)
Mutual labels:  gpu, gpgpu, opencl, cuda
Cekirdekler
Multi-device OpenCL kernel load balancer and pipeliner API for C#. Uses a shared-distributed memory model to keep GPUs updated fast while using the same kernel on all devices (for simplicity).
Stars: ✭ 76 (-79.84%)
Mutual labels:  gpu, gpgpu, opencl, gpu-computing
Occa
JIT Compilation for Multiple Architectures: C++, OpenMP, CUDA, HIP, OpenCL, Metal
Stars: ✭ 230 (-38.99%)
Mutual labels:  gpu, gpgpu, opencl, cuda
Clojurecl
ClojureCL is a Clojure library for parallel computations with OpenCL.
Stars: ✭ 266 (-29.44%)
Mutual labels:  opencl, high-performance, high-performance-computing, gpu-computing
bifrost
A stream processing framework for high-throughput applications.
Stars: ✭ 48 (-87.27%)
Mutual labels:  gpu, cuda, high-performance-computing
rbcuda
CUDA bindings for Ruby
Stars: ✭ 57 (-84.88%)
Mutual labels:  cuda, high-performance-computing, gpu-computing

Project logo

hipSYCL - a SYCL implementation for CPUs and GPUs

hipSYCL is a modern SYCL implementation targeting CPUs and GPUs, with a focus on leveraging existing toolchains such as CUDA or HIP. hipSYCL currently targets the following devices:

  • Any CPU via OpenMP
  • NVIDIA GPUs via CUDA
  • AMD GPUs via HIP/ROCm

hipSYCL supports compiling source files into a single binary that can run on all these backends when building against appropriate clang distributions. See the hipSYCL documentation for details on the compilation model; a minimal example follows below.
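
As a minimal illustration, the following is a plain SYCL program of the kind hipSYCL compiles. It uses only standard SYCL 1.2.1 API, so it should build for any of the backends above (the file name vector_add.cpp is hypothetical):

    // vector_add.cpp - minimal SYCL vector addition (standard SYCL API only)
    #include <CL/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
      constexpr size_t n = 1024;
      std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

      cl::sycl::queue q; // default-selected device (CPU or GPU, depending on backend)
      {
        cl::sycl::buffer<float, 1> buf_a{a.data(), cl::sycl::range<1>{n}};
        cl::sycl::buffer<float, 1> buf_b{b.data(), cl::sycl::range<1>{n}};
        cl::sycl::buffer<float, 1> buf_c{c.data(), cl::sycl::range<1>{n}};

        q.submit([&](cl::sycl::handler& cgh) {
          auto A = buf_a.get_access<cl::sycl::access::mode::read>(cgh);
          auto B = buf_b.get_access<cl::sycl::access::mode::read>(cgh);
          auto C = buf_c.get_access<cl::sycl::access::mode::write>(cgh);
          cgh.parallel_for<class vector_add>(
              cl::sycl::range<1>{n},
              [=](cl::sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
      } // buffers go out of scope here, copying results back to the host vectors

      std::cout << "c[0] = " << c[0] << "\n"; // expected output: c[0] = 3
    }

Such a file can be compiled with syclcc as described under "Installing and using hipSYCL" below.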

The following diagram illustrates how hipSYCL fits into the wider SYCL implementation ecosystem: [image: SYCL implementations]

The philosophy behind hipSYCL is to leverage existing toolchains as much as possible. This not only brings maintenance and stability advantages, but by design enables performance on par with those established toolchains, and it allows for maximum interoperability with existing compute platforms. For example, the hipSYCL CUDA and ROCm backends rely on the clang CUDA/HIP frontends, which hipSYCL augments to also understand SYCL code. The hipSYCL compiler can therefore compile not only SYCL code but also CUDA/HIP code, even when both are mixed in the same source file, which makes all CUDA/HIP features, such as the latest device intrinsics, available from SYCL code (see the documentation for details). Additionally, vendor-optimized template libraries such as rocPRIM or CUB can be used with hipSYCL. Consequently, hipSYCL allows for highly optimized, device-specific code paths in SYCL code.
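
As a hedged sketch of this interoperability, the following mixes a CUDA device intrinsic into a SYCL kernel. Note that the __CUDA_ARCH__ guard is clang's generic mechanism for distinguishing CUDA device-side compilation, not a hipSYCL-specific API, and whether a given intrinsic is available naturally depends on the backend being compiled:

    // Illustrative only: a CUDA intrinsic inside a SYCL kernel, guarded so the
    // same file still compiles for non-CUDA backends.
    #include <CL/sycl.hpp>
    #include <vector>

    // Portable fallback used on backends other than CUDA.
    inline int portable_popcount(unsigned x) {
      int c = 0;
      while (x) { c += x & 1u; x >>= 1u; }
      return c;
    }

    int main() {
      constexpr size_t n = 256;
      std::vector<unsigned> in(n, 0xFu);
      std::vector<int> out(n, 0);

      cl::sycl::queue q;
      {
        cl::sycl::buffer<unsigned, 1> bin{in.data(), cl::sycl::range<1>{n}};
        cl::sycl::buffer<int, 1> bout{out.data(), cl::sycl::range<1>{n}};
        q.submit([&](cl::sycl::handler& cgh) {
          auto I = bin.get_access<cl::sycl::access::mode::read>(cgh);
          auto O = bout.get_access<cl::sycl::access::mode::write>(cgh);
          cgh.parallel_for<class popcount_kernel>(
              cl::sycl::range<1>{n}, [=](cl::sycl::id<1> i) {
    #ifdef __CUDA_ARCH__
            O[i] = __popc(I[i]); // CUDA device intrinsic, CUDA device pass only
    #else
            O[i] = portable_popcount(I[i]); // all other backends and host passes
    #endif
          });
        });
      }
    }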

Because a SYCL program compiled with hipSYCL looks just like any other CUDA or HIP program to vendor-provided software, vendor tools such as profilers or debuggers also work well with hipSYCL.
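
For instance, an executable produced by syclcc for the CUDA backend can, in principle, be used with the standard NVIDIA tools just like a CUDA binary (illustrative; the available tools depend on your CUDA toolkit version):

    nvprof ./vector_add     # profile the binary with the (legacy) CUDA profiler
    cuda-gdb ./vector_add   # debug device code with the CUDA debugger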

About the project

While hipSYCL started its life as a hobby project, development is now led and funded by Heidelberg University. hipSYCL not only serves as a research platform, but is also a solution used in production on machines of all scales, including some of the most powerful supercomputers.

Contributing to hipSYCL

We encourage contributions and look forward to your pull request! Please have a look at CONTRIBUTING.md. If you need any guidance, just open an issue and we will get back to you shortly.

If you are a student at Heidelberg University and wish to work on hipSYCL, please get in touch with us. There are various options possible and we are happy to include you in the project :-)

Citing hipSYCL

hipSYCL is a research project. As such, if you use hipSYCL in your research, we kindly request that you cite:

Aksel Alpay and Vincent Heuveline. 2020. SYCL beyond OpenCL: The architecture, current state and future direction of hipSYCL. In Proceedings of the International Workshop on OpenCL (IWOCL ’20). Association for Computing Machinery, New York, NY, USA, Article 8, 1. DOI:https://doi.org/10.1145/3388333.3388658

(This paper accompanies a talk that is available online. Note that some of the content in the talk is by now outdated.)

Acknowledgements

We gratefully acknowledge contributions from the community.

Performance

hipSYCL has been repeatedly shown to deliver very competitive performance compared to other SYCL implementations or proprietary solutions like CUDA. See for example:

  • Sohan Lal, Aksel Alpay, Philip Salzmann, Biagio Cosenza, Nicolai Stawinoga, Peter Thoman, Thomas Fahringer, and Vincent Heuveline. 2020. SYCL-Bench: A Versatile Single-Source Benchmark Suite for Heterogeneous Computing. In Proceedings of the International Workshop on OpenCL (IWOCL ’20). Association for Computing Machinery, New York, NY, USA, Article 10, 1. DOI:https://doi.org/10.1145/3388333.3388669
  • Brian Homerding and John Tramm. 2020. Evaluating the Performance of the hipSYCL Toolchain for HPC Kernels on NVIDIA V100 GPUs. In Proceedings of the International Workshop on OpenCL (IWOCL ’20). Association for Computing Machinery, New York, NY, USA, Article 16, 1–7. DOI:https://doi.org/10.1145/3388333.3388660
  • Tom Deakin and Simon McIntosh-Smith. 2020. Evaluating the performance of HPC-style SYCL applications. In Proceedings of the International Workshop on OpenCL (IWOCL ’20). Association for Computing Machinery, New York, NY, USA, Article 12, 1–11. DOI:https://doi.org/10.1145/3388333.3388643

Benchmarking hipSYCL

When targeting the CUDA or HIP backends, hipSYCL just massages the AST slightly to get clang -x cuda and clang -x hip to accept SYCL code. hipSYCL is not involved in the actual code generation. Therefore any significant deviation in kernel performance compared to clang-compiled CUDA or clang-compiled HIP is unexpected.

As a consequence, if you compare hipSYCL to other LLVM-based compilers, please make sure to build hipSYCL against the same LLVM version; otherwise you are effectively just comparing the performance of two different LLVM versions. This is particularly true when comparing against clang CUDA or clang HIP.
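
A like-for-like comparison might look as follows, assuming a CUDA kernel saxpy.cu and a SYCL port saxpy.cpp (both hypothetical file names), with clang and hipSYCL built from the same LLVM version. The hipSYCL flag spellings vary between versions and are shown for illustration only:

    # CUDA version, compiled directly by clang's CUDA frontend
    clang++ -O3 -x cuda --cuda-gpu-arch=sm_70 saxpy.cu -o saxpy_cuda -lcudart

    # SYCL version, compiled by hipSYCL against the same clang/LLVM
    syclcc -O3 --hipsycl-platform=cuda --hipsycl-gpu-arch=sm_70 saxpy.cpp -o saxpy_sycl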

Current state

hipSYCL is not yet a fully conformant SYCL implementation, although many SYCL programs already work with hipSYCL.

Hardware and operating system support

Supported hardware:

  • Any CPU for which a C++17 OpenMP compiler exists
  • NVIDIA CUDA GPUs. Note that clang, which hipSYCL relies on, may not always support the latest CUDA release, which can in turn delay support for brand-new hardware. See the clang documentation for more details.
  • AMD GPUs that are supported by ROCm

Operating system support currently focuses strongly on Linux. On macOS, only the CPU backend is expected to work. Windows is currently not supported.

Installing and using hipSYCL

In order to compile software with hipSYCL, use syclcc, which automatically adds all required compiler arguments to the CUDA/HIP compiler. syclcc can be used like a regular compiler; for example, syclcc -o test test.cpp compiles a SYCL application in test.cpp into an executable named test.

syclcc accepts both command-line arguments and environment variables to configure its behavior (e.g., to select which platform to compile for: CUDA, ROCm, or CPU). See syclcc --help for a comprehensive list of options.

When targeting a GPU, you will need to provide a target GPU architecture. The expected formats are defined by clang's CUDA/HIP support. Examples (a usage sketch follows the list):

  • sm_52: NVIDIA Maxwell GPUs
  • sm_60: NVIDIA Pascal GPUs
  • sm_70: NVIDIA Volta GPUs
  • gfx900: AMD Vega 10 GPUs
  • gfx906: AMD Vega 20 GPUs
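
For example, to build the test.cpp application from above for a Volta GPU via the CUDA backend, or for a Vega 20 GPU via the ROCm backend (the flag names below are version-dependent; consult syclcc --help for the spelling your installation expects):

    syclcc -o test test.cpp --hipsycl-platform=cuda --hipsycl-gpu-arch=sm_70
    syclcc -o test test.cpp --hipsycl-platform=rocm --hipsycl-gpu-arch=gfx906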
