
inducer / PyCUDA

Licence: other
CUDA integration for Python, plus shiny features

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to PyCUDA

Pyopencl
OpenCL integration for Python, plus shiny features
Stars: ✭ 790 (-28.96%)
Mutual labels:  array, multidimensional-arrays, gpu, cuda, scientific-computing
Loopy
A code generator for array-based code on CPUs and GPUs
Stars: ✭ 367 (-67%)
Mutual labels:  array, multidimensional-arrays, cuda, scientific-computing
Arraymancer
A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends
Stars: ✭ 793 (-28.69%)
Mutual labels:  multidimensional-arrays, cuda, gpu-computing
cuda memtest
Fork of CUDA GPU memtest 👓
Stars: ✭ 68 (-93.88%)
Mutual labels:  gpu, cuda, gpu-computing
Arrayfire
ArrayFire: a general purpose GPU library.
Stars: ✭ 3,693 (+232.1%)
Mutual labels:  gpu, cuda, scientific-computing
HeCBench
software.intel.com/content/www/us/en/develop/articles/repo-evaluating-performance-productivity-oneapi.html
Stars: ✭ 85 (-92.36%)
Mutual labels:  cuda, scientific-computing, gpu-computing
monolish
monolish: MONOlithic LInear equation Solvers for Highly-parallel architecture
Stars: ✭ 166 (-85.07%)
Mutual labels:  gpu, cuda, scientific-computing
Neanderthal
Fast Clojure Matrix Library
Stars: ✭ 927 (-16.64%)
Mutual labels:  gpu, cuda, gpu-computing
Cuda Api Wrappers
Thin C++-flavored wrappers for the CUDA Runtime API
Stars: ✭ 362 (-67.45%)
Mutual labels:  gpu, cuda, gpu-computing
Nvidia libs test
Tests and benchmarks for cudnn (and in the future, other nvidia libraries)
Stars: ✭ 36 (-96.76%)
Mutual labels:  gpu, cuda, gpu-computing
Deepnet
Deep.Net machine learning framework for F#
Stars: ✭ 99 (-91.1%)
Mutual labels:  gpu, cuda, gpu-computing
Stdgpu
stdgpu: Efficient STL-like Data Structures on the GPU
Stars: ✭ 531 (-52.25%)
Mutual labels:  gpu, cuda, gpu-computing
MatX
An efficient C++17 GPU numerical computing library with Python-like syntax
Stars: ✭ 418 (-62.41%)
Mutual labels:  gpu, cuda, gpu-computing
Bayadera
High-performance Bayesian Data Analysis on the GPU in Clojure
Stars: ✭ 342 (-69.24%)
Mutual labels:  gpu, cuda, gpu-computing
Hipsycl
Implementation of SYCL for CPUs, AMD GPUs, NVIDIA GPUs
Stars: ✭ 377 (-66.1%)
Mutual labels:  gpu, cuda, gpu-computing
Heteroflow
Concurrent CPU-GPU Programming using Task Models
Stars: ✭ 57 (-94.87%)
Mutual labels:  gpu, cuda, gpu-computing
Accelerate
Embedded language for high-performance array computations
Stars: ✭ 751 (-32.46%)
Mutual labels:  cuda, gpu-computing
Kubernetes Gpu Guide
This guide should help fellow researchers and hobbyists to easily automate and accelerate their deep learning training with their own Kubernetes GPU cluster.
Stars: ✭ 740 (-33.45%)
Mutual labels:  gpu, gpu-computing
Marian
Fast Neural Machine Translation in C++
Stars: ✭ 777 (-30.13%)
Mutual labels:  gpu, cuda
Gunrock
High-Performance Graph Primitives on GPUs
Stars: ✭ 718 (-35.43%)
Mutual labels:  gpu, cuda

PyCUDA lets you access `Nvidia <http://nvidia.com>`_'s `CUDA <http://nvidia.com/cuda/>`_ parallel computation API from Python. Several wrappers of the CUDA API already exist, so what's so special about PyCUDA?

.. image:: https://badge.fury.io/py/pycuda.png
   :target: http://pypi.python.org/pypi/pycuda

  • Object cleanup tied to lifetime of objects. This idiom, often called `RAII <http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization>`_ in C++, makes it much easier to write correct, leak- and crash-free code. PyCUDA knows about dependencies, too, so (for example) it won't detach from a context before all memory allocated in it is also freed.

  • Convenience. Abstractions like pycuda.compiler.SourceModule and pycuda.gpuarray.GPUArray make CUDA programming even more convenient than with Nvidia's C-based runtime (see the sketch after this list).

  • Completeness. PyCUDA puts the full power of CUDA's driver API at your disposal, if you wish. It also includes code for interoperability with OpenGL.

  • Automatic Error Checking. All CUDA errors are automatically translated into Python exceptions.

  • Speed. PyCUDA's base layer is written in C++, so all the niceties above are virtually free.

  • Helpful `Documentation <http://documen.tician.de/pycuda>`_ and a `Wiki <http://wiki.tiker.net/PyCuda>`_.
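
A minimal sketch of how these pieces fit together, assuming a working CUDA toolchain and using the pycuda.autoinit convenience module to set up a context (module paths may differ across PyCUDA versions; older releases exposed SourceModule through pycuda.driver):

.. code-block:: python

    import numpy as np
    import pycuda.autoinit                    # creates a context; cleanup follows object lifetimes (RAII)
    import pycuda.gpuarray as gpuarray
    from pycuda.compiler import SourceModule  # runtime compilation of CUDA C

    # GPUArray: a numpy-like array that lives in GPU memory
    a = np.random.randn(400).astype(np.float32)
    a_gpu = gpuarray.to_gpu(a)
    assert np.allclose((2 * a_gpu).get(), 2 * a)   # elementwise arithmetic runs on the GPU

    # SourceModule: hand-written CUDA C callable from Python; compile and
    # launch failures are raised as Python exceptions rather than error codes
    mod = SourceModule("""
    __global__ void double_them(float *x)
    {
        int i = threadIdx.x + blockIdx.x * blockDim.x;
        x[i] *= 2.0f;
    }
    """)
    double_them = mod.get_function("double_them")
    double_them(a_gpu, block=(400, 1, 1), grid=(1, 1))
    assert np.allclose(a_gpu.get(), 2 * a)         # the kernel doubled a_gpu in place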

Relatedly, like-minded computing goodness for `OpenCL <http://khronos.org>`_ is provided by PyCUDA's sister project `PyOpenCL <http://pypi.python.org/pypi/pyopencl>`_.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].