m4rs-mt / ILGPU

License: other
ILGPU JIT Compiler for high-performance .NET GPU programs

Projects that are alternatives to or similar to ILGPU

Parenchyma
An extensible HPC framework for CUDA, OpenCL and native CPU.
Stars: ✭ 71 (-81.02%)
Mutual labels:  nvidia, intel, amd, gpu, gpgpu, opencl, cuda
Futhark
💥💻💥 A data-parallel functional programming language
Stars: ✭ 1,641 (+338.77%)
Mutual labels:  compiler, gpu, gpgpu, opencl, cuda
Pyopencl
OpenCL integration for Python, plus shiny features
Stars: ✭ 790 (+111.23%)
Mutual labels:  nvidia, amd, gpu, opencl, cuda
Occa
JIT Compilation for Multiple Architectures: C++, OpenMP, CUDA, HIP, OpenCL, Metal
Stars: ✭ 230 (-38.5%)
Mutual labels:  jit, gpu, gpgpu, opencl, cuda
Coriander
Build NVIDIA® CUDA™ code for OpenCL™ 1.2 devices
Stars: ✭ 665 (+77.81%)
Mutual labels:  nvidia, intel, amd, gpu, opencl
darknet
Darknet on OpenCL Convolutional Neural Networks on OpenCL on Intel & NVidia & AMD & Mali GPUs for macOS & GNU/Linux
Stars: ✭ 160 (-57.22%)
Mutual labels:  cpu, amd, opencl, intel, nvidia
Ocl
OpenCL for Rust
Stars: ✭ 453 (+21.12%)
Mutual labels:  nvidia, intel, amd, gpgpu
Arrayfire
ArrayFire: a general purpose GPU library.
Stars: ✭ 3,693 (+887.43%)
Mutual labels:  gpu, gpgpu, opencl, cuda
Tf Coriander
OpenCL 1.2 implementation for Tensorflow
Stars: ✭ 775 (+107.22%)
Mutual labels:  nvidia, intel, gpu, opencl
Neanderthal
Fast Clojure Matrix Library
Stars: ✭ 927 (+147.86%)
Mutual labels:  gpu, gpgpu, opencl, cuda
Nplusminer
NPlusMiner + GUI | NVIDIA/AMD/CPU miner | AI | Autoupdate | MultiRig remote management
Stars: ✭ 75 (-79.95%)
Mutual labels:  nvidia, amd, gpu, cpu
Waifu2x Ncnn Vulkan
waifu2x converter ncnn version, runs fast on intel / amd / nvidia GPU with vulkan
Stars: ✭ 1,258 (+236.36%)
Mutual labels:  nvidia, intel, amd, gpu
Onemkl
oneAPI Math Kernel Library (oneMKL) Interfaces
Stars: ✭ 122 (-67.38%)
Mutual labels:  intel, gpu, cpu, cuda
Compute Runtime
Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver
Stars: ✭ 593 (+58.56%)
Mutual labels:  intel, gpu, gpgpu, opencl
Hybridizer Basic Samples
Examples of C# code compiled to GPU by hybridizer
Stars: ✭ 186 (-50.27%)
Mutual labels:  compiler, parallel, gpu, cuda
Arrayfire Python
Python bindings for ArrayFire: A general purpose GPU library.
Stars: ✭ 358 (-4.28%)
Mutual labels:  gpu, gpgpu, opencl, cuda
Awesome Cuda
This is a list of useful libraries and resources for CUDA development.
Stars: ✭ 274 (-26.74%)
Mutual labels:  parallel, gpu, gpgpu, cuda
Realsr Ncnn Vulkan
RealSR super resolution implemented with ncnn library
Stars: ✭ 357 (-4.55%)
Mutual labels:  nvidia, intel, amd, gpu
Srmd Ncnn Vulkan
SRMD super resolution implemented with ncnn library
Stars: ✭ 186 (-50.27%)
Mutual labels:  nvidia, intel, amd, gpu
peakperf
Achieve peak performance on x86 CPUs and NVIDIA GPUs
Stars: ✭ 33 (-91.18%)
Mutual labels:  cpu, gpu, cuda, nvidia

ILGPU

ILGPU is a JIT (just-in-time) compiler for high-performance GPU programs written in .NET-based languages. ILGPU is written entirely in C# without any native dependencies. It combines the flexibility and convenience of C++ AMP with the high performance of CUDA programs. Kernel functions do not have to be annotated (they are plain C# functions) and can work on value types. All kernels (including all hardware features like shared memory and atomics) can be executed and debugged on the CPU using the integrated multi-threaded CPU accelerator.
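
As a rough sketch of what this looks like in practice: a kernel is just a static C# method that is JIT-compiled for and launched on an accelerator. The type and method names used below (Index1, CPUAccelerator, LoadAutoGroupedStreamKernel, Allocate, GetAsArray) are taken from older ILGPU samples and are assumptions that may differ between ILGPU versions.

    using System;
    using ILGPU;
    using ILGPU.Runtime;
    using ILGPU.Runtime.CPU;

    static class Program
    {
        // A plain C# method used as a kernel: no attributes or annotations required.
        // It receives the thread index plus value-type arguments and array views.
        static void AddConstantKernel(Index1 index, ArrayView<int> view, int constant)
        {
            view[index] = index + constant;
        }

        static void Main()
        {
            using var context = new Context();

            // The integrated CPU accelerator runs the same kernels on the host,
            // which makes them easy to debug with ordinary .NET tooling.
            using var accelerator = new CPUAccelerator(context);

            // JIT-compile the kernel for the selected accelerator.
            var kernel = accelerator.LoadAutoGroupedStreamKernel<
                Index1, ArrayView<int>, int>(AddConstantKernel);

            using var buffer = accelerator.Allocate<int>(1024);
            kernel(buffer.Length, buffer.View, 42);
            accelerator.Synchronize();

            var result = buffer.GetAsArray();
            Console.WriteLine($"result[0] = {result[0]}, result[1023] = {result[1023]}");
        }
    }

In this sketch, swapping the CPU accelerator for a CUDA accelerator on a CUDA-capable device would run the same kernel on the GPU without changing the kernel code.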

Build Instructions

ILGPU requires Visual Studio 2019 (Community edition or higher).

Use the provided Visual Studio solution to build the ILGPU libs in the desired configurations (Debug/Release).

Note: T4 (*.tt) text templates must be transformed manually, depending on the Visual Studio version. To transform a template, right-click it and select Run Custom Tool. Alternatively, you can open and save any text template in Visual Studio.

Tests

Sometimes the xUnit test runner stops execution when all tests are run in parallel. This is not caused by the ILGPU tests themselves; it is a known xUnit/Visual Studio issue. If the tests stop unexpectedly, simply run the remaining tests again to continue working.
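
If the hangs turn out to be related to xUnit's in-assembly parallelism, one possible workaround (a hedged suggestion, not part of the official build instructions) is to disable parallel execution of test collections for the affected test project via xUnit's assembly-level attribute:

    using Xunit;

    // Place this in any source file of the affected test project.
    // It disables parallel execution of test collections within that assembly,
    // at the cost of a longer, sequential test run.
    [assembly: CollectionBehavior(DisableTestParallelization = true)]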

Note: You can unload ILGPU.Tests.Cuda (for example) if you do not have a CUDA-capable device to execute the CUDA test cases.

Related Information

General Contribution Guidelines

  • Make sure that you follow the general coding style (in terms of braces, whitespace, etc.).
  • Make sure that ILGPU compiles without warnings in all build modes (Debug, DebugVerification and Release).

References

  • Parallel Thread Execution ISA 7.0
    • NVIDIA
  • A Graph-Based Higher-Order Intermediate Representation
    • Roland Leissa, Marcel Koester, and Sebastian Hack
  • Target-Specific Refinement of Multigrid Codes
    • Richard Membarth, Philipp Slusallek, Marcel Koester, Roland Leissa, and Sebastian Hack
  • Code Refinement of Stencil Codes
    • Marcel Koester, Roland Leissa, Sebastian Hack, Richard Membarth, and Philipp Slusallek
  • Simple and Efficient Construction of Static Single Assignment Form
    • Matthias Braun, Sebastian Buchwald, Sebastian Hack, Roland Leissa, Christoph Mallon and Andreas Zwinkau
  • A Simple, Fast Dominance Algorithm
    • Keith D. Cooper, Timothy J. Harvey and Ken Kennedy
  • Fast Half Float Conversions
    • Jeroen van der Zijp
  • Identifying Loops In Almost Linear Time
    • G. Ramalingam

License information

ILGPU is licensed under the University of Illinois/NCSA Open Source License. Detailed license information can be found in LICENSE.txt.

Copyright (c) 2016-2020 Marcel Koester (www.ilgpu.net). All rights reserved.

License information of required dependencies

Different parts of ILGPU require different third-party libraries.

Detailed copyright and license information of these dependencies can be found in LICENSE-3RD-PARTY.txt.
