
boostorg / Compute

License: BSL-1.0
A C++ GPU Computing Library for OpenCL


Projects that are alternatives of or similar to Compute

Arrayfire
ArrayFire: a general purpose GPU library.
Stars: ✭ 3,693 (+209.82%)
Mutual labels:  gpu, gpgpu, opencl, performance, hpc
Parenchyma
An extensible HPC framework for CUDA, OpenCL and native CPU.
Stars: ✭ 71 (-94.04%)
Mutual labels:  gpu, gpgpu, opencl, hpc
Futhark
💥💻💥 A data-parallel functional programming language
Stars: ✭ 1,641 (+37.67%)
Mutual labels:  gpu, gpgpu, opencl, hpc
Arrayfire Python
Python bindings for ArrayFire: A general purpose GPU library.
Stars: ✭ 358 (-69.97%)
Mutual labels:  gpu, gpgpu, opencl, hpc
Occa
JIT Compilation for Multiple Architectures: C++, OpenMP, CUDA, HIP, OpenCL, Metal
Stars: ✭ 230 (-80.7%)
Mutual labels:  gpu, gpgpu, opencl, hpc
Arrayfire Rust
Rust wrapper for ArrayFire
Stars: ✭ 525 (-55.96%)
Mutual labels:  gpu, gpgpu, opencl, hpc
MatX
An efficient C++17 GPU numerical computing library with Python-like syntax
Stars: ✭ 418 (-64.93%)
Mutual labels:  hpc, gpu, gpgpu
rindow-neuralnetworks
Neural networks library for machine learning on PHP
Stars: ✭ 37 (-96.9%)
Mutual labels:  gpu, opencl, gpgpu
Tvm
Open deep learning compiler stack for cpu, gpu and specialized accelerators
Stars: ✭ 7,494 (+528.69%)
Mutual labels:  gpu, opencl, performance
Cekirdekler
Multi-device OpenCL kernel load balancer and pipeliner API for C#. Uses a shared-distributed memory model to keep GPUs updated quickly while running the same kernel on all devices (for simplicity).
Stars: ✭ 76 (-93.62%)
Mutual labels:  gpu, gpgpu, opencl
Aparapi
The New Official Aparapi: a framework for executing native Java and Scala code on the GPU.
Stars: ✭ 352 (-70.47%)
Mutual labels:  gpu, gpgpu, opencl
Neanderthal
Fast Clojure Matrix Library
Stars: ✭ 927 (-22.23%)
Mutual labels:  gpu, gpgpu, opencl
Bitcracker
BitCracker is the first open source password cracking tool for memory units encrypted with BitLocker
Stars: ✭ 463 (-61.16%)
Mutual labels:  gpu, gpgpu, opencl
Pyopencl
OpenCL integration for Python, plus shiny features
Stars: ✭ 790 (-33.72%)
Mutual labels:  gpu, opencl, performance
Onemkl
oneAPI Math Kernel Library (oneMKL) Interfaces
Stars: ✭ 122 (-89.77%)
Mutual labels:  gpu, performance, hpc
Hipsycl
Implementation of SYCL for CPUs, AMD GPUs, NVIDIA GPUs
Stars: ✭ 377 (-68.37%)
Mutual labels:  gpu, gpgpu, opencl
John
John the Ripper jumbo - advanced offline password cracker, which supports hundreds of hash and cipher types, and runs on many operating systems, CPUs, GPUs, and even some FPGAs
Stars: ✭ 5,656 (+374.5%)
Mutual labels:  gpu, gpgpu, opencl
Opencl Intercept Layer
Intercept Layer for Debugging and Analyzing OpenCL Applications
Stars: ✭ 189 (-84.14%)
Mutual labels:  gpgpu, opencl, performance
Ilgpu
ILGPU JIT Compiler for high-performance .Net GPU programs
Stars: ✭ 374 (-68.62%)
Mutual labels:  gpu, gpgpu, opencl
Compute Runtime
Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver
Stars: ✭ 593 (-50.25%)
Mutual labels:  gpu, gpgpu, opencl

Boost.Compute


Boost.Compute is a GPU/parallel-computing library for C++ based on OpenCL.

The core library is a thin C++ wrapper over the OpenCL API and provides access to compute devices, contexts, command queues and memory buffers.

On top of the core library is a generic, STL-like interface providing common algorithms (e.g. transform(), accumulate(), sort()) along with common containers (e.g. vector<T>, flat_set<T>). It also features a number of extensions including parallel-computing algorithms (e.g. exclusive_scan(), scatter(), reduce()) and a number of fancy iterators (e.g. transform_iterator<>, permutation_iterator<>, zip_iterator<>).

The full documentation is available at http://boostorg.github.io/compute/.

Example

The following example shows how to sort a vector of floats on the GPU:

#include <vector>
#include <algorithm>
#include <boost/compute.hpp>

namespace compute = boost::compute;

int main()
{
    // get the default compute device
    compute::device gpu = compute::system::default_device();

    // create a compute context and command queue
    compute::context ctx(gpu);
    compute::command_queue queue(ctx, gpu);

    // generate random numbers on the host
    std::vector<float> host_vector(1000000);
    std::generate(host_vector.begin(), host_vector.end(), rand);

    // create vector on the device
    compute::vector<float> device_vector(1000000, ctx);

    // copy data to the device
    compute::copy(
        host_vector.begin(), host_vector.end(), device_vector.begin(), queue
    );

    // sort data on the device
    compute::sort(
        device_vector.begin(), device_vector.end(), queue
    );

    // copy data back to the host
    compute::copy(
        device_vector.begin(), device_vector.end(), host_vector.begin(), queue
    );

    return 0;
}

Boost.Compute is a header-only library, so no linking is required. The example above can be compiled with:

g++ -I/path/to/compute/include sort.cpp -lOpenCL

More examples can be found in the tutorial and under the examples directory.

Support

Questions about the library (both usage and development) can be posted to the mailing list.

Bugs and feature requests can be reported through the issue tracker.

You can also email me directly with any problems, questions, or feedback.

Help Wanted

The Boost.Compute project is currently looking for additional developers with interest in parallel computing.

Please send an email to Kyle Lutz ([email protected]) for more information.
