
LLNL / RAJA

License: BSD-3-Clause
RAJA Performance Portability Layer (C++)

Programming Languages

cpp
1120 projects

Projects that are alternatives to or similar to RAJA

Future.apply
🚀 R package: future.apply - Apply Function to Elements in Parallel using Futures
Stars: ✭ 159 (-31.17%)
Mutual labels:  parallel-computing
Fast
A framework for GPU based high-performance medical image processing and visualization
Stars: ✭ 179 (-22.51%)
Mutual labels:  parallel-computing
Opentimer
A High-performance Timing Analysis Tool for VLSI Systems
Stars: ✭ 213 (-7.79%)
Mutual labels:  parallel-computing
Bigmachine
Bigmachine is a library for self-managing serverless computing in Go
Stars: ✭ 167 (-27.71%)
Mutual labels:  parallel-computing
Dolfinx
Next generation FEniCS problem solving environment
Stars: ✭ 171 (-25.97%)
Mutual labels:  parallel-computing
Sundials
SUNDIALS is a SUite of Nonlinear and DIfferential/ALgebraic equation Solvers. This is a mirror of current releases, and development will move here eventually. Pull requests are welcome for bug fixes and minor changes.
Stars: ✭ 194 (-16.02%)
Mutual labels:  parallel-computing
Embb
Embedded Multicore Building Blocks (EMB²): Library for parallel programming of embedded systems. Star us on GitHub? +1
Stars: ✭ 153 (-33.77%)
Mutual labels:  parallel-computing
Nwchem
NWChem: Open Source High-Performance Computational Chemistry
Stars: ✭ 227 (-1.73%)
Mutual labels:  parallel-computing
Dkeras
Distributed Keras Engine, Make Keras faster with only one line of code.
Stars: ✭ 181 (-21.65%)
Mutual labels:  parallel-computing
Awesome Parallel Computing
A curated list of awesome parallel computing resources
Stars: ✭ 212 (-8.23%)
Mutual labels:  parallel-computing
Klyng
A message-passing distributed computing framework for node.js
Stars: ✭ 167 (-27.71%)
Mutual labels:  parallel-computing
Ngsolve
Netgen/NGSolve is a high performance multiphysics finite element software. It is widely used to analyze models from solid mechanics, fluid dynamics and electromagnetics. Due to its flexible Python interface new physical equations and solution algorithms can be implemented easily.
Stars: ✭ 171 (-25.97%)
Mutual labels:  parallel-computing
Bohrium
Automatic parallelization of Python/NumPy, C, and C++ codes on Linux and MacOSX
Stars: ✭ 209 (-9.52%)
Mutual labels:  parallel-computing
Samrai
Structured Adaptive Mesh Refinement Application Infrastructure - a scalable C++ framework for block-structured AMR application development
Stars: ✭ 160 (-30.74%)
Mutual labels:  parallel-computing
Gipuma
Massively Parallel Multiview Stereopsis by Surface Normal Diffusion
Stars: ✭ 220 (-4.76%)
Mutual labels:  parallel-computing
Geni
A Clojure dataframe library that runs on Spark
Stars: ✭ 152 (-34.2%)
Mutual labels:  parallel-computing
Hyperactive
A hyperparameter optimization and data collection toolbox for convenient and fast prototyping of machine-learning models.
Stars: ✭ 182 (-21.21%)
Mutual labels:  parallel-computing
Feelpp
💎 Feel++: Finite Element Embedded Language and Library in C++
Stars: ✭ 229 (-0.87%)
Mutual labels:  parallel-computing
Dispy
Distributed and Parallel Computing Framework with / for Python
Stars: ✭ 222 (-3.9%)
Mutual labels:  parallel-computing
Joblib
Computing with Python functions.
Stars: ✭ 2,620 (+1034.2%)
Mutual labels:  parallel-computing

RAJA

Build Status | Join the chat at https://gitter.im/llnl/raja | Coverage

RAJA is a library of C++ software abstractions, primarily developed at Lawrence Livermore National Laboratory (LLNL), that enables architecture and programming model portability for HPC applications. RAJA has two main goals:

  • To enable application portability with manageable disruption to existing algorithms and programming styles.
  • To achieve performance comparable to using common programming models, such as OpenMP and CUDA, directly.

RAJA offers portable, parallel loop execution by providing building blocks that extend the generally accepted parallel-for idiom. RAJA relies on standard C++11 features, such as lambda expressions.
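
For illustration, here is a minimal sketch of that idiom: a vector-addition loop written once with RAJA::forall, where swapping the execution policy (for example, an OpenMP or GPU policy) changes how the loop runs without changing how it is written. Exact header and policy names should be checked against the User Guide for your RAJA version.

#include "RAJA/RAJA.hpp"
#include <vector>

int main()
{
  const int N = 1000;
  std::vector<double> a(N, 1.0), b(N, 2.0), c(N, 0.0);
  double* pa = a.data();
  double* pb = b.data();
  double* pc = c.data();

  // Sequential execution policy; substituting RAJA::omp_parallel_for_exec
  // (or a GPU policy) changes where and how the loop executes.
  RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, N),
    [=](RAJA::Index_type i) {
      pc[i] = pa[i] + pb[i];
    });

  return 0;
}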

RAJA's design is rooted in decades of experience working on production mesh-based multiphysics applications. Based on the diversity of algorithms and software engineering styles used in such applications, RAJA is designed to enable application developers to adapt RAJA concepts and specialize them for different code implementation patterns and C++ usage.

RAJA shares goals and concepts found in other C++ portability abstraction approaches, such as Kokkos and Thrust. However, it includes concepts and capabilities, absent in other models, that are fundamental to the applications we work with.

It is important to note that, although RAJA is used in a diversity of production applications, it is very much a work-in-progress. The community of researchers and application developers at LLNL that actively contribute to it is growing. Versions available as GitHub releases contain mostly well-used and well-tested features. Our core interfaces are fairly stable while underlying implementations are being refined. Additional features will appear in future releases.

Quick Start

The RAJA code lives in a GitHub repository. To clone the repo, use the command:

git clone --recursive https://github.com/llnl/raja.git

Then, you can build RAJA like any other CMake project, provided you have a C++ compiler that supports the C++11 standard. The simplest way to build the code, using your system default compiler, is to run the following sequence of commands in the top-level RAJA directory (in-source builds are not allowed!):

mkdir build
cd build
cmake ../
make

More details about RAJA configuration options are located in the User Documentation.

We also maintain a RAJA Template Project that shows how to use RAJA in a CMake project, either as a Git submodule or as an installed library.

User Documentation

The RAJA User Guide and Tutorial is the best place to start learning about RAJA and how to use it.

To cite RAJA, please use the following references:

  • RAJA Performance Portability Layer. https://github.com/LLNL/RAJA

  • D. A. Beckingsale, J. Burmark, R. Hornung, H. Jones, W. Killian, A. J. Kunen, O. Pearce, P. Robinson, B. S. Ryujin, T. R. W. Scogland, "RAJA: Portable Performance for Large-Scale Scientific Applications", 2019 IEEE/ACM International Workshop on Performance, Portability and Productivity in HPC (P3HPC).

Related Software

The RAJA Performance Suite contains a collection of loop kernels implemented in multiple RAJA and non-RAJA variants. We use it to monitor and assess RAJA performance on different platforms using a variety of compilers. Many major compiler vendors use the Suite to improve their support of abstractions like RAJA.

The RAJA Proxies repository contains RAJA versions of several important HPC proxy applications.

CHAI provides a managed array abstraction that works with RAJA to automatically copy data used in RAJA kernels to the appropriate space for execution. It was developed as a complement to RAJA.
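
As a rough sketch of how the two fit together (hypothetical; CHAI's exact headers and requirements, including building CHAI with RAJA plugin support, should be verified against the CHAI documentation), a chai::ManagedArray captured by value in a RAJA kernel is migrated to the execution space automatically:

#include "RAJA/RAJA.hpp"
#include "chai/ManagedArray.hpp"

int main()
{
  const int N = 1000;

  // ManagedArray allocates memory that CHAI can migrate between spaces.
  chai::ManagedArray<double> data(N);

  // Capturing the array by value lets CHAI record the execution context
  // and copy the data to the appropriate space before the loop runs.
  RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, N),
    [=](RAJA::Index_type i) {
      data[i] = 2.0 * static_cast<double>(i);
    });

  data.free();
  return 0;
}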

Communicate with Us

The most effective way to communicate with the core RAJA development team is via our mailing list: [email protected]

You are also welcome to join our RAJA Google Group.

If you have questions, find a bug, or have ideas about expanding the functionality or applicability of RAJA and are interested in contributing to its development, please do not hesitate to contact us. We are very interested in improving RAJA and exploring new ways to use it.

Contributions

The RAJA team follows the GitFlow development model. Folks wishing to contribute to RAJA should include their work in a feature branch created from the RAJA develop branch, which contains the latest work in RAJA. Then, create a pull request with the develop branch as the destination. Periodically, we merge the develop branch into the main branch and tag a new release.

Authors

Please see the RAJA Contributors Page for the full list of contributors to the project.

License

RAJA is licensed under the BSD 3-Clause license.

Copyrights and patents in the RAJA project are retained by contributors. No copyright assignment is required to contribute to RAJA.

Unlimited Open Source - BSD 3-clause Distribution
LLNL-CODE-689114  OCEC-16-063

For release details and restrictions, please see the information in the following:

SPDX usage

Individual files contain SPDX tags instead of the full license text. This enables machine processing of license information based on the SPDX License Identifiers that are available here: https://spdx.org/licenses/

Files that are licensed as BSD 3-Clause contain the following text in the license header:

SPDX-License-Identifier: (BSD-3-Clause)

External Packages

RAJA bundles its external dependencies as submodules in the git repository. These packages are covered by various permissive licenses. A summary listing follows. See the license included with each package for full details.

PackageName: BLT
PackageHomePage: https://github.com/LLNL/blt
PackageLicenseDeclared: BSD-3-Clause

PackageName: camp
PackageHomePage: https://github.com/LLNL/camp
PackageLicenseDeclared: BSD-3-Clause

PackageName: CUB
PackageHomePage: https://github.com/NVlabs/cub
PackageLicenseDeclared: BSD-3-Clause

PackageName: rocPRIM
PackageHomePage: https://github.com/ROCmSoftwarePlatform/rocPRIM.git
PackageLicenseDeclared: MIT License

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].