
CUDA.jl

CUDA programming in Julia


The CUDA.jl package is the main programming interface for working with NVIDIA CUDA GPUs using Julia. It features a user-friendly array abstraction, a compiler for writing CUDA kernels in Julia, and wrappers for various CUDA libraries.
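The array abstraction and the kernel compiler can be combined in a few lines. The following sketch (assuming a working GPU and the current CUDA.jl API) adds two vectors first via broadcasting, then via a hand-written kernel:

```julia
using CUDA

# Array abstraction: broadcasting over CuArrays executes on the GPU.
a = CUDA.fill(1.0f0, 1024)
b = CUDA.fill(2.0f0, 1024)
c = a .+ b

# The same operation as a hand-written kernel, compiled by CUDA.jl.
function vadd!(c, a, b)
    i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

@cuda threads=256 blocks=cld(length(c), 256) vadd!(c, a, b)
```

Broadcasting is usually the most convenient entry point; hand-written kernels give finer control over the launch configuration when needed.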

Requirements

The latest development version of CUDA.jl requires Julia 1.6 or higher. If you are using an older version of Julia, you need to use a released version of CUDA.jl. This will happen automatically when you install the package using Julia's package manager.

CUDA.jl currently also requires a CUDA-capable GPU with compute capability 5.0 (Maxwell) or higher, and an accompanying NVIDIA driver with support for CUDA 10.1 or newer. These requirements are not enforced by the Julia package manager when installing CUDA.jl. Depending on your system and GPU, you may need to install an older version of the package.

Quick start

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add CUDA

Or, equivalently, via the Pkg API:

julia> import Pkg; Pkg.add("CUDA")

For an overview of the CUDA toolchain in use, you can run the following command after importing the package:

julia> using CUDA

julia> CUDA.versioninfo()

This may take a while, as it will precompile the package and download a suitable version of the CUDA toolkit. If you prefer to use your own CUDA toolkit installation (not recommended), set the JULIA_CUDA_USE_BINARYBUILDER environment variable to false before importing the package.
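Because the variable is read when the package initializes, it has to be in the environment before `using CUDA` runs, for example by exporting it from the shell that launches Julia:

```shell
# Use a local CUDA toolkit instead of the artifact-provided one
# (not recommended); must be set before `using CUDA`.
export JULIA_CUDA_USE_BINARYBUILDER=false

# then start Julia as usual, e.g.:
#   julia -e 'using CUDA; CUDA.versioninfo()'
```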

If your GPU is not fully supported, the above command (or any other command that initializes the toolkit) will issue a warning. Your devices' compute capability will be listed as part of the versioninfo() output, but you can always query it explicitly:

julia> [CUDA.capability(dev) for dev in CUDA.devices()]
1-element Vector{VersionNumber}:
 v"5.0.0"
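The same query can guard capability-dependent code paths. A minimal sketch (the v"7.0" threshold here is purely illustrative, not a CUDA.jl requirement):

```julia
using CUDA

for dev in CUDA.devices()
    # Warn about devices below an (illustrative) capability cutoff.
    if CUDA.capability(dev) < v"7.0"
        @warn "Device $(CUDA.name(dev)) predates tensor cores; using plain kernels"
    end
end
```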

For more usage instructions and other information, please refer to the documentation.

Supporting and Citing

Much of the software in this ecosystem was developed as part of academic research. If you would like to help support it, please star the repository as such metrics may help us secure funding in the future. If you use our software as part of your research, teaching, or other activities, we would be grateful if you could cite our work. The CITATION.bib file in the root of this repository lists the relevant papers.

Project Status

The package is tested against, and being developed for, Julia 1.3 and above. Main development and testing happens on Linux, but the package is expected to work on macOS and Windows as well.

Questions and Contributions

Usage questions can be posted on the Julia Discourse forum under the GPU domain and/or in the #gpu channel of the Julia Slack.

Contributions are very welcome, as are feature requests and suggestions. Please open an issue if you encounter any problems.
