NVIDIA / Libcudacxx

License: Apache License v2.0 with LLVM Exceptions
The C++ Standard Library for your entire system.

Programming Languages

C++
36643 projects - #6 most used programming language
Python
139335 projects - #7 most used programming language
HTML
75241 projects
CMake
9771 projects
Dockerfile
14818 projects
Shell
77523 projects

Projects that are alternatives to or similar to Libcudacxx

Thrust
The C++ parallel algorithms library.
Stars: ✭ 3,595 (+93.18%)
Mutual labels:  nvidia, gpu, cuda, cpp20, cxx11, cxx14, cxx17, cxx20, nvidia-hpc-sdk
cxx
🔌 Configuration-free utility for building, testing and packaging executables written in C++. Can auto-detect compilation flags based on includes, via the package system and pkg-config.
Stars: ✭ 87 (-95.33%)
Mutual labels:  cpp20, cpp23, cxx20, cxx23
Deep Diamond
A fast Clojure Tensor & Deep Learning library
Stars: ✭ 288 (-84.52%)
Mutual labels:  nvidia, gpu, cuda
Komputation
Komputation is a neural network framework for the Java Virtual Machine written in Kotlin and CUDA C.
Stars: ✭ 295 (-84.15%)
Mutual labels:  nvidia, gpu, cuda
ILGPU
ILGPU JIT Compiler for high-performance .Net GPU programs
Stars: ✭ 374 (-79.9%)
Mutual labels:  nvidia, gpu, cuda
Deep Learning Boot Camp
A community run, 5-day PyTorch Deep Learning Bootcamp
Stars: ✭ 1,270 (-31.76%)
Mutual labels:  nvidia, gpu, cuda
opencv-cuda-docker
Dockerfiles for OpenCV compiled with CUDA, opencv_contrib modules and Python 3 bindings
Stars: ✭ 55 (-97.04%)
Mutual labels:  gpu, cuda, nvidia
Cuda Api Wrappers
Thin C++-flavored wrappers for the CUDA Runtime API
Stars: ✭ 362 (-80.55%)
Mutual labels:  nvidia, gpu, cuda
fameta-counter
Compile time counter that works with all major modern compilers
Stars: ✭ 34 (-98.17%)
Mutual labels:  cxx11, cxx14, cxx17
PyOpenCL
OpenCL integration for Python, plus shiny features
Stars: ✭ 790 (-57.55%)
Mutual labels:  nvidia, gpu, cuda
Coriander
Build NVIDIA® CUDA™ code for OpenCL™ 1.2 devices
Stars: ✭ 665 (-64.27%)
Mutual labels:  nvidia, llvm, gpu
CUB
Cooperative primitives for CUDA C++.
Stars: ✭ 883 (-52.55%)
Mutual labels:  nvidia, gpu, cuda
peakperf
Achieve peak performance on x86 CPUs and NVIDIA GPUs
Stars: ✭ 33 (-98.23%)
Mutual labels:  gpu, cuda, nvidia
gprMax
gprMax is open source software that simulates electromagnetic wave propagation using the Finite-Difference Time-Domain (FDTD) method for numerical modelling of Ground Penetrating Radar (GPR)
Stars: ✭ 268 (-85.6%)
Mutual labels:  nvidia, gpu, cuda
uberswitch
A header-only, unobtrusive, almighty alternative to the C++ switch statement that looks just like the original.
Stars: ✭ 83 (-95.54%)
Mutual labels:  cxx11, cxx14, cxx17
OptiX Path Tracer
OptiX Path Tracer
Stars: ✭ 60 (-96.78%)
Mutual labels:  nvidia, gpu, cuda
Nvidia Modded Inf
Modified nVidia .inf files to run drivers on all video cards, research & telemetry free drivers
Stars: ✭ 227 (-87.8%)
Mutual labels:  nvidia, gpu, cuda
Plotoptix
Data visualisation in Python based on OptiX 7.2 ray tracing framework.
Stars: ✭ 252 (-86.46%)
Mutual labels:  nvidia, gpu, cuda
CudaSift
A CUDA implementation of SIFT for NVidia GPUs (1.2 ms on a GTX 1060)
Stars: ✭ 555 (-70.18%)
Mutual labels:  nvidia, gpu, cuda
Cuda
Experiments with CUDA and Rust
Stars: ✭ 31 (-98.33%)
Mutual labels:  nvidia, gpu, cuda

libcu++: The C++ Standard Library for Your Entire System

Examples | Godbolt | Documentation

libcu++, the NVIDIA C++ Standard Library, is the C++ Standard Library for your entire system. It provides a heterogeneous implementation of the C++ Standard Library that can be used in and between CPU and GPU code.

If you know how to use your C++ Standard Library, then you know how to use libcu++. All you have to do is add cuda/std/ to the start of your Standard Library includes and cuda:: before any uses of std::, like so:

#include <cuda/std/atomic>
cuda::std::atomic<int> x;
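
Below is a slightly fuller sketch of the same idea (the kernel name, launch configuration, and use of managed memory are illustrative assumptions for this sketch, not part of the library's documentation): the atomic is constructed once, incremented by device threads, and read back on the host.

#include <cuda/std/atomic>
#include <cstdio>
#include <new>

__global__ void count(cuda::std::atomic<int>* counter) {
    // Device threads use the atomic exactly as host threads use std::atomic.
    counter->fetch_add(1, cuda::std::memory_order_relaxed);
}

int main() {
    cuda::std::atomic<int>* counter;
    cudaMallocManaged(&counter, sizeof(*counter));   // visible to host and device
    new (counter) cuda::std::atomic<int>(0);         // construct in place

    count<<<2, 128>>>(counter);
    cudaDeviceSynchronize();                         // wait for the kernel

    std::printf("%d\n", counter->load());            // expected: 256
    cudaFree(counter);
}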

The NVIDIA C++ Standard Library is an open source project; it is available on GitHub and included in the NVIDIA HPC SDK and CUDA Toolkit. If you have one of those SDKs installed, no additional installation or compiler flags are needed to use libcu++.
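
For example, assuming the sketch above is saved as example.cu (the file name is an assumption), it builds with an ordinary nvcc invocation on a CUDA 11 or newer toolkit; no extra include paths, libraries, or flags are needed for libcu++ itself:

nvcc -std=c++17 example.cu -o example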

cuda:: and cuda::std::

When used with NVCC, NVIDIA C++ Standard Library facilities live in their own header hierarchy and namespace with the same structure as, but distinct from, the host compiler's Standard Library:

  • std::/<*>: When using NVCC, this is your host compiler's Standard Library, which works in __host__ code only, although you can use the --expt-relaxed-constexpr flag to use any constexpr functions in __device__ code. With NVCC, libcu++ does not replace or interfere with the host compiler's Standard Library.
  • cuda::std::/<cuda/std/*>: Strictly conforming implementations of facilities from the Standard Library that work in __host__ __device__ code.
  • cuda::/<cuda/*>: Conforming extensions to the Standard Library that work in __host__ __device__ code.
  • cuda::device/<cuda/device/*>: Conforming extensions to the Standard Library that work only in __device__ code.
// Standard C++, __host__ only.
#include <atomic>
std::atomic<int> x;

// CUDA C++, __host__ __device__.
// Strictly conforming to the C++ Standard.
#include <cuda/std/atomic>
cuda::std::atomic<int> x;

// CUDA C++, __host__ __device__.
// Conforming extensions to the C++ Standard.
#include <cuda/atomic>
cuda::atomic<int, cuda::thread_scope_block> x;
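
As a hedged illustration of the scope extension in use (the kernel name, launch configuration, and host-side setup below are assumptions made for this sketch), a device-scoped atomic promises ordering only among threads on the same GPU, which can be cheaper than the default system-wide scope:

#include <cuda/atomic>
#include <cstdio>
#include <new>

using device_counter = cuda::atomic<int, cuda::thread_scope_device>;

__global__ void tally(device_counter* counter) {
    // Only threads on this GPU participate, so device scope is sufficient.
    counter->fetch_add(1, cuda::std::memory_order_relaxed);
}

int main() {
    device_counter* counter;
    cudaMallocManaged(&counter, sizeof(device_counter));
    new (counter) device_counter(0);        // construct in place

    tally<<<4, 64>>>(counter);
    cudaDeviceSynchronize();                // makes the result visible to the host

    std::printf("%d\n", counter->load());   // expected: 256
    cudaFree(counter);
}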

libcu++ is Heterogeneous

The NVIDIA C++ Standard Library works across your entire codebase, both in and across host and device code. libcu++ is a C++ Standard Library for your entire system, not just your CPU or your GPU. Everything in cuda:: is __host__ __device__.

libcu++ facilities are designed to be passed between host and device code. Unless otherwise noted, any libcu++ object which is copyable or movable can be copied or moved between host and device code.

Synchronization objects work across host and device code, and can be used to synchronize between host and device threads. However, there are some restrictions to be aware of; please see the synchronization library section for more details.
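
Here is a hedged sketch of that host/device synchronization (the kernel, names, and setup are assumptions; it also assumes a platform where the GPU supports concurrent managed access and system-wide atomics, e.g. a Pascal-or-newer GPU on Linux): a device thread publishes a value through an atomic flag that the host thread is spinning on.

#include <cuda/std/atomic>
#include <cstdio>
#include <new>

__global__ void producer(int* data, cuda::std::atomic<int>* flag) {
    *data = 42;                                        // produce a value...
    flag->store(1, cuda::std::memory_order_release);   // ...then publish it
}

int main() {
    int* data;
    cuda::std::atomic<int>* flag;
    cudaMallocManaged(&data, sizeof(int));
    cudaMallocManaged(&flag, sizeof(cuda::std::atomic<int>));
    new (flag) cuda::std::atomic<int>(0);

    producer<<<1, 1>>>(data, flag);

    // The host thread waits on the same atomic the device thread stores to.
    while (flag->load(cuda::std::memory_order_acquire) == 0) { /* spin */ }
    std::printf("%d\n", *data);                        // prints 42

    cudaDeviceSynchronize();
    cudaFree(flag);
    cudaFree(data);
}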

cuda::device::

A small number of libcu++ facilities only work in device code, usually because there is no sensible implementation in host code.

Such facilities live in cuda::device::.

libcu++ is Incremental

Today, the NVIDIA C++ Standard Library delivers a high-priority subset of the C++ Standard Library, and each release expands the feature set. But it is a subset; not everything is available yet. The Standard API section lists the facilities available and the releases in which they were first introduced.

Licensing

The NVIDIA C++ Standard Library is an open source project developed on GitHub. It is NVIDIA's variant of LLVM's libc++. libcu++ is distributed under the Apache License v2.0 with LLVM Exceptions.

Conformance

The NVIDIA C++ Standard Library aims to be a conforming implementation of the C++ Standard, ISO/IEC IS 14882, Clauses 16 through 32.

ABI Evolution

The NVIDIA C++ Standard Library does not maintain long-term ABI stability. Promising long-term ABI stability would prevent us from fixing mistakes and providing best-in-class performance, so we make no such promises.

The ABI will be broken with every major CUDA Toolkit release. The life cycle of an ABI version is approximately one year, and long-term support for an ABI version ends after approximately two years. Please see the versioning section for more details.

We recommend that you always recompile your code and dependencies with the latest NVIDIA SDKs and use the latest NVIDIA C++ Standard Library ABI. Live at head.
