
NVIDIA / Thrust

Licence: other
The C++ parallel algorithms library.

Programming Languages

C++ - 36643 projects - #6 most used programming language
CUDA - 1817 projects
C - 50402 projects - #5 most used programming language
CMake - 9771 projects
Python - 139335 projects - #7 most used programming language
Shell - 77523 projects

Projects that are alternatives to or similar to Thrust

Libcudacxx
The C++ Standard Library for your entire system.
Stars: ✭ 1,861 (-48.23%)
Mutual labels:  nvidia, gpu, cuda, cpp20, cxx11, cxx14, cxx17, cxx20, nvidia-hpc-sdk
Cub
Cooperative primitives for CUDA C++.
Stars: ✭ 883 (-75.44%)
Mutual labels:  nvidia, algorithms, gpu, cuda
Cuml
cuML - RAPIDS Machine Learning Library
Stars: ✭ 2,504 (-30.35%)
Mutual labels:  nvidia, gpu, cuda
Macos Egpu Cuda Guide
Set up CUDA for machine learning (and gaming) on macOS using an NVIDIA eGPU
Stars: ✭ 187 (-94.8%)
Mutual labels:  nvidia, gpu, cuda
Plotoptix
Data visualisation in Python based on OptiX 7.2 ray tracing framework.
Stars: ✭ 252 (-92.99%)
Mutual labels:  nvidia, gpu, cuda
opencv-cuda-docker
Dockerfiles for OpenCV compiled with CUDA, opencv_contrib modules and Python 3 bindings
Stars: ✭ 55 (-98.47%)
Mutual labels:  gpu, cuda, nvidia
Xmrminer
🐜 A CUDA based miner for Monero
Stars: ✭ 158 (-95.61%)
Mutual labels:  nvidia, gpu, cuda
Nvidia Modded Inf
Modified NVIDIA .inf files to run drivers on all video cards; research- and telemetry-free drivers
Stars: ✭ 227 (-93.69%)
Mutual labels:  nvidia, gpu, cuda
Optix Path Tracer
OptiX Path Tracer
Stars: ✭ 60 (-98.33%)
Mutual labels:  nvidia, gpu, cuda
peakperf
Achieve peak performance on x86 CPUs and NVIDIA GPUs
Stars: ✭ 33 (-99.08%)
Mutual labels:  gpu, cuda, nvidia
uberswitch
A header-only, unobtrusive, almighty alternative to the C++ switch statement that looks just like the original.
Stars: ✭ 83 (-97.69%)
Mutual labels:  cxx11, cxx14, cxx17
lbvh
An implementation of parallel linear BVH (LBVH) on the GPU
Stars: ✭ 67 (-98.14%)
Mutual labels:  gpu, cuda, thrust
Deep Learning Boot Camp
A community-run, 5-day PyTorch Deep Learning Bootcamp
Stars: ✭ 1,270 (-64.67%)
Mutual labels:  nvidia, gpu, cuda
Gmonitor
gmonitor is a GPU monitor (Nvidia only at the moment)
Stars: ✭ 169 (-95.3%)
Mutual labels:  nvidia, gpu, cuda
Parenchyma
An extensible HPC framework for CUDA, OpenCL and native CPU.
Stars: ✭ 71 (-98.03%)
Mutual labels:  nvidia, gpu, cuda
Genomeworks
SDK for GPU accelerated genome assembly and analysis
Stars: ✭ 215 (-94.02%)
Mutual labels:  nvidia, gpu, cuda
Komputation
Komputation is a neural network framework for the Java Virtual Machine written in Kotlin and CUDA C.
Stars: ✭ 295 (-91.79%)
Mutual labels:  nvidia, gpu, cuda
Cuda
Experiments with CUDA and Rust
Stars: ✭ 31 (-99.14%)
Mutual labels:  nvidia, gpu, cuda
fameta-counter
Compile time counter that works with all major modern compilers
Stars: ✭ 34 (-99.05%)
Mutual labels:  cxx11, cxx14, cxx17
Gprmax
gprMax is open source software that simulates electromagnetic wave propagation using the Finite-Difference Time-Domain (FDTD) method for numerical modelling of Ground Penetrating Radar (GPR)
Stars: ✭ 268 (-92.55%)
Mutual labels:  nvidia, gpu, cuda

Thrust: Code at the speed of light

Thrust is a C++ parallel programming library which resembles the C++ Standard Library. Thrust's high-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. Interoperability with established technologies (such as CUDA, TBB, and OpenMP) facilitates integration with existing software. Develop high-performance applications rapidly with Thrust!

Thrust is included in the NVIDIA HPC SDK and the CUDA Toolkit.

Quick Start

Getting the Thrust Source Code

The CUDA Toolkit provides a recent release of the Thrust source code in include/thrust. This will be suitable for most users.

Users who wish to contribute to Thrust or try out newer features should recursively clone the Thrust GitHub repository:

git clone --recursive https://github.com/NVIDIA/thrust.git

Using Thrust From Your Project

Thrust is a header-only library; there is no need to build or install the project unless you want to run the Thrust unit tests.

For CMake-based projects, we provide a CMake package for use with find_package. See the CMake README for more information. Thrust can also be added via add_subdirectory or tools like the CMake Package Manager.

For non-CMake projects, compile with the following (a sample invocation is sketched after this list):

  • The Thrust include path (-I<thrust repo root>)
  • The CUB include path, if using the CUDA device system (-I<thrust repo root>/dependencies/cub/)
  • By default, the CPP host system and CUDA device system are used. These can be changed using compiler definitions:
    • -DTHRUST_HOST_SYSTEM=THRUST_HOST_SYSTEM_XXX, where XXX is CPP (serial, default), OMP (OpenMP), or TBB (Intel TBB)
    • -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_XXX, where XXX is CPP, OMP, TBB, or CUDA (default).
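
As a concrete illustration, the commands below sketch how these flags fit together when compiling a single translation unit against a Thrust checkout. The file name my_app.cu, the <thrust repo root> placeholder, and the GCC host compiler are assumptions for this sketch, not part of the repository:

# Default configuration: CPP host system and CUDA device system (requires nvcc).
nvcc -I<thrust repo root> -I<thrust repo root>/dependencies/cub my_app.cu -o my_app

# Switch the host system to OpenMP while keeping the CUDA device system
# (assumes a GCC-compatible host compiler that accepts -fopenmp).
nvcc -I<thrust repo root> -I<thrust repo root>/dependencies/cub \
     -DTHRUST_HOST_SYSTEM=THRUST_HOST_SYSTEM_OMP \
     -Xcompiler -fopenmp my_app.cu -o my_app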

Examples

Thrust is best explained through examples. The following source code generates random numbers serially and then transfers them to a parallel device where they are sorted.

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/generate.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <algorithm>
#include <cstdlib>

int main(void)
{
  // generate 32M random numbers serially
  thrust::host_vector<int> h_vec(32 << 20);
  std::generate(h_vec.begin(), h_vec.end(), rand);

  // transfer data to the device
  thrust::device_vector<int> d_vec = h_vec;

  // sort data on the device (846M keys per second on GeForce GTX 480)
  thrust::sort(d_vec.begin(), d_vec.end());

  // transfer data back to host
  thrust::copy(d_vec.begin(), d_vec.end(), h_vec.begin());

  return 0;
}
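
Assuming the listing is saved as sort_example.cu (an illustrative name), it can be compiled with nvcc. The CUDA Toolkit's bundled copy of Thrust is found automatically; when building against a Git checkout instead, add the include paths described above:

# Compile against the Thrust headers shipped with the CUDA Toolkit, then run.
nvcc sort_example.cu -o sort_example
./sort_example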

This code sample computes the sum of 100 random numbers in parallel:

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/generate.h>
#include <thrust/reduce.h>
#include <thrust/functional.h>
#include <algorithm>
#include <cstdlib>

int main(void)
{
  // generate random data serially
  thrust::host_vector<int> h_vec(100);
  std::generate(h_vec.begin(), h_vec.end(), rand);

  // transfer to device and compute sum
  thrust::device_vector<int> d_vec = h_vec;
  int x = thrust::reduce(d_vec.begin(), d_vec.end(), 0, thrust::plus<int>());
  return 0;
}
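
Because the backend is chosen at compile time, the same reduction can also run on a multicore CPU instead of a GPU. A minimal sketch using the OpenMP device system with a plain host compiler, assuming the listing is saved as sum_example.cpp (an illustrative name) and built against a Thrust checkout:

# Retarget the device system to OpenMP; no nvcc or CUB include path is needed here.
g++ -O2 -fopenmp \
    -I<thrust repo root> \
    -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_OMP \
    sum_example.cpp -o sum_example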

Additional usage examples can be found in the examples/ and testing/ directories of the GitHub repository.

Supported Compilers

Thrust is regularly tested using the specified versions of the following compilers. Unsupported versions may emit deprecation warnings, which can be silenced by defining THRUST_IGNORE_DEPRECATED_COMPILER during compilation.

  • NVCC 11.0+
  • NVC++ 20.9+
  • GCC 5+
  • Clang 7+
  • MSVC 2019+ (19.20/16.0/14.20)
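
For example, when an otherwise unsupported compiler must be used, the deprecation warning can be silenced by defining the macro on the command line (file name and include path are illustrative):

# Define THRUST_IGNORE_DEPRECATED_COMPILER to suppress the deprecation warning.
g++ -DTHRUST_IGNORE_DEPRECATED_COMPILER -I<thrust repo root> -c my_app.cpp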

Releases

Thrust is distributed with the NVIDIA HPC SDK and the CUDA Toolkit in addition to GitHub.

See the changelog for details about specific releases.

Thrust Release   Included In
1.15.0           TBD
1.14.0           NVIDIA HPC SDK 21.9
1.13.1           CUDA Toolkit 11.5
1.13.0           NVIDIA HPC SDK 21.7
1.12.1           CUDA Toolkit 11.4
1.12.0           NVIDIA HPC SDK 21.3
1.11.0           CUDA Toolkit 11.3
1.10.0           NVIDIA HPC SDK 20.9 & CUDA Toolkit 11.2
1.9.10-1         NVIDIA HPC SDK 20.7 & CUDA Toolkit 11.1
1.9.10           NVIDIA HPC SDK 20.5
1.9.9            CUDA Toolkit 11.0
1.9.8-1          NVIDIA HPC SDK 20.3
1.9.8            CUDA Toolkit 11.0 Early Access
1.9.7-1          CUDA Toolkit 10.2 for Tegra
1.9.7            CUDA Toolkit 10.2
1.9.6-1          NVIDIA HPC SDK 20.3
1.9.6            CUDA Toolkit 10.1 Update 2
1.9.5            CUDA Toolkit 10.1 Update 1
1.9.4            CUDA Toolkit 10.1
1.9.3            CUDA Toolkit 10.0
1.9.2            CUDA Toolkit 9.2
1.9.1-2          CUDA Toolkit 9.1
1.9.0-5          CUDA Toolkit 9.0
1.8.3            CUDA Toolkit 8.0
1.8.2            CUDA Toolkit 7.5
1.8.1            CUDA Toolkit 7.0
1.8.0
1.7.2            CUDA Toolkit 6.5
1.7.1            CUDA Toolkit 6.0
1.7.0            CUDA Toolkit 5.5
1.6.0
1.5.3            CUDA Toolkit 5.0
1.5.2            CUDA Toolkit 4.2
1.5.1            CUDA Toolkit 4.1
1.5.0
1.4.0            CUDA Toolkit 4.0
1.3.0
1.2.1
1.2.0
1.1.1
1.1.0
1.0.0

Development Process

Thrust uses the CMake build system to build unit tests, examples, and header tests. To build Thrust as a developer, follow this recipe:

# Clone Thrust and CUB repos recursively:
git clone --recursive https://github.com/NVIDIA/thrust.git
cd thrust

# Create build directory:
mkdir build
cd build

# Configure -- use one of the following:
cmake ..   # Command line interface.
ccmake ..  # ncurses GUI (Linux only)
cmake-gui  # Graphical UI, set source/build directories in the app

# Build:
cmake --build . -j <num jobs>   # invokes make (or ninja, etc)

# Run tests and examples:
ctest

By default, a serial CPP host system, a CUDA-accelerated device system, and the C++14 standard are used. These can be changed in CMake. More information on configuring your Thrust build and creating a pull request can be found in CONTRIBUTING.md.
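
For instance, a single-configuration developer build can select the host system, device system, and C++ standard at configure time. The cache variable names below follow CONTRIBUTING.md but should be treated as assumptions and verified against that document:

# Assumed variable names; check CONTRIBUTING.md for the authoritative list.
cmake .. -DTHRUST_HOST_SYSTEM=CPP \
         -DTHRUST_DEVICE_SYSTEM=OMP \
         -DTHRUST_CPP_DIALECT=17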
