
ricosjp / allgebra

License: Apache-2.0
Base container for developing C++ and Fortran HPC applications

Programming Languages

Dockerfile
Makefile
Cuda
Shell

Projects that are alternatives of or similar to allgebra

monolish
monolish: MONOlithic LInear equation Solvers for Highly-parallel architecture
Stars: ✭ 166 (+1085.71%)
Mutual labels:  hpc, gpu, openmp, cuda
Occa
JIT Compilation for Multiple Architectures: C++, OpenMP, CUDA, HIP, OpenCL, Metal
Stars: ✭ 230 (+1542.86%)
Mutual labels:  hpc, gpu, openmp, cuda
gpubootcamp
This repository contains GPU bootcamp material for HPC and AI
Stars: ✭ 227 (+1521.43%)
Mutual labels:  hpc, gpu, openmp, cuda
FGPU
No description or website provided.
Stars: ✭ 30 (+114.29%)
Mutual labels:  gpu, openmp, cuda
Onemkl
oneAPI Math Kernel Library (oneMKL) Interfaces
Stars: ✭ 122 (+771.43%)
Mutual labels:  hpc, gpu, cuda
Stdgpu
stdgpu: Efficient STL-like Data Structures on the GPU
Stars: ✭ 531 (+3692.86%)
Mutual labels:  gpu, openmp, cuda
Training Material
A collection of code examples as well as presentations for training purposes
Stars: ✭ 85 (+507.14%)
Mutual labels:  hpc, gpu, openmp
Arrayfire
ArrayFire: a general purpose GPU library.
Stars: ✭ 3,693 (+26278.57%)
Mutual labels:  hpc, gpu, cuda
mbsolve
An open-source solver tool for the Maxwell-Bloch equations.
Stars: ✭ 14 (+0%)
Mutual labels:  hpc, openmp, cuda
MatX
An efficient C++17 GPU numerical computing library with Python-like syntax
Stars: ✭ 418 (+2885.71%)
Mutual labels:  hpc, gpu, cuda
Futhark
💥💻💥 A data-parallel functional programming language
Stars: ✭ 1,641 (+11621.43%)
Mutual labels:  hpc, gpu, cuda
Arrayfire Rust
Rust wrapper for ArrayFire
Stars: ✭ 525 (+3650%)
Mutual labels:  hpc, gpu, cuda
cuda memtest
Fork of CUDA GPU memtest 👓
Stars: ✭ 68 (+385.71%)
Mutual labels:  hpc, gpu, cuda
Arrayfire Python
Python bindings for ArrayFire: A general purpose GPU library.
Stars: ✭ 358 (+2457.14%)
Mutual labels:  hpc, gpu, cuda
Parenchyma
An extensible HPC framework for CUDA, OpenCL and native CPU.
Stars: ✭ 71 (+407.14%)
Mutual labels:  hpc, gpu, cuda
euler2d kokkos
Simple 2d finite volume solver for Euler equations using c++ kokkos library
Stars: ✭ 27 (+92.86%)
Mutual labels:  openmp, cuda
Ginkgo
Numerical linear algebra software package
Stars: ✭ 149 (+964.29%)
Mutual labels:  hpc, cuda
Pymapd
Python client for OmniSci GPU-accelerated SQL engine and analytics platform
Stars: ✭ 109 (+678.57%)
Mutual labels:  hpc, gpu
Nsimd
Agenium Scale vectorization library for CPUs and GPUs
Stars: ✭ 138 (+885.71%)
Mutual labels:  hpc, cuda
Umpire
An application-focused API for memory management on NUMA & GPU architectures
Stars: ✭ 154 (+1000%)
Mutual labels:  hpc, gpu

allgebra

Docker images for developing C++ and Fortran HPC programs

Naming rule of tags

The following image tags are pushed from a private GitLab CI to the public GitHub Container Registry (ghcr.io):

  • {year}.{month}.{patch} formatted tags, e.g. 20.10.0
    • Note that this is not semantic versioning: every release can contain breaking changes, so you should use containers with a fixed tag.
    • See the CHANGELOG for details about changes.
  • latest
    • Corresponds to the latest branch. DO NOT USE unless you are watching every change on the latest branch.

Images

Images are named in the allgebra/{GPU}/{Compiler}/{Math} format:

Image name | CUDA | Compiler | Math
ghcr.io/ricosjp/allgebra/cuda11_4/clang13/mkl | 11.4 | clang 13, gcc 10, nvcc 11.4 | Intel MKL
ghcr.io/ricosjp/allgebra/cuda11_4/clang13/oss | 11.4 | clang 13, gcc 10, nvcc 11.4 | OpenBLAS
ghcr.io/ricosjp/allgebra/cuda11_4/gcc10/mkl | 11.4 | gcc 10, nvcc 11.4 | Intel MKL
ghcr.io/ricosjp/allgebra/cuda11_4/gcc10/oss | 11.4 | gcc 10, nvcc 11.4 | OpenBLAS

In addition, there are supporting containers for reproducible development:

Image name | Application
ghcr.io/ricosjp/allgebra/clang-format | clang-format
ghcr.io/ricosjp/allgebra/doxygen | doxygen
ghcr.io/ricosjp/allgebra/poetry | poetry

OpenMP Offloading, OpenACC examples

The OSS compilers in the allgebra containers (gcc, gfortran, and clang) are built with OpenMP offloading and OpenACC support. There are several examples in this repository, and they are also copied into the above containers.

Compiler | OpenMP Offloading | OpenACC
clang/libomp | clang_omp_offloading | -
gcc/libgomp | gcc_omp_offloading | gcc_openacc
gfortran/libgomp | gfortran_omp_offloading | gfortran_openacc
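
The gcc_openacc and gfortran_openacc examples listed above exercise OpenACC directives. Their sources are not reproduced here; the following is only a minimal, hypothetical C++ sketch of the kind of loop such an example offloads (file contents and values are illustrative), compiled with something like g++ -fopenacc -O2:

// Hypothetical OpenACC sketch (not the repository's gcc_openacc source).
#include <cstdio>
#include <vector>

int main() {
    const int n = 1000000;
    std::vector<double> x(n, 2.0);
    double *px = x.data();
    // Offload the loop to the accelerator; px[0:n] is copied in and back out.
    #pragma acc parallel loop copy(px[0:n])
    for (int i = 0; i < n; ++i)
        px[i] *= 3.0;
    std::printf("x[0] = %g\n", px[0]);  // expected: 6
    return 0;
}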

The requirements of these examples are as follows:

You can build and run, for example, the clang OpenMP offloading example as follows:

$ docker run --rm -it --gpus=all ghcr.io/ricosjp/allgebra/cuda10_1/clang12/oss:21.06.0
root@41b65ab23aaf:/# cd /examples/clang_omp_offloading
root@41b65ab23aaf:/examples/clang_omp_offloading# make test
clang++ -fopenmp -fopenmp-targets=nvptx64 -Xopenmp-target -march=sm_70 -O3 -std=c++11 -lm omp_offloading.cpp -o omp_offloading.out
clang++ -fopenmp -fopenmp-targets=nvptx64 -Xopenmp-target -march=sm_70 -O3 -std=c++11 -lm omp_offloading_cublas.cpp -o omp_offloading_cublas.out -lcuda -lcublas -lcudart
clang++ -fopenmp -fopenmp-targets=nvptx64 -Xopenmp-target -march=sm_70 -O3 -std=c++11 -lm omp_offloading_math.cpp -o omp_offloading_math.out
./omp_offloading.out 1000000
dot = 2e+06
Pass!
./omp_offloading_cublas.out 1000000
dot = 2e+06
Pass!
./omp_offloading_math.out 1000000
ret = 909297
Pass!
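
The omp_offloading.out program built above computes a dot product on the GPU. The repository's omp_offloading.cpp is not reproduced here, but a minimal, hypothetical sketch of such an OpenMP target-offloading dot product (argument handling and output format are assumptions) looks like this:

// Hypothetical sketch of an OpenMP target-offloading dot product
// (not the repository's omp_offloading.cpp). Compile as in the transcript above:
//   clang++ -fopenmp -fopenmp-targets=nvptx64 -Xopenmp-target -march=sm_70 ...
#include <cstdio>
#include <cstdlib>
#include <vector>

int main(int argc, char **argv) {
    const long n = argc > 1 ? std::atol(argv[1]) : 1000000;
    std::vector<double> a(n, 1.0), b(n, 2.0);
    double *pa = a.data(), *pb = b.data();
    double dot = 0.0;
    // Map both arrays to the device and reduce the products on the GPU.
    #pragma omp target teams distribute parallel for reduction(+:dot) \
        map(to: pa[0:n], pb[0:n]) map(tofrom: dot)
    for (long i = 0; i < n; ++i)
        dot += pa[i] * pb[i];
    std::printf("dot = %g\n", dot);  // 2e+06 for n = 1000000, as in the output above
    return 0;
}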

The allgebra_get_device_cc command is included in the allgebra containers; it detects the compute capability of your GPU using the CUDA API. On a system with an NVIDIA TITAN V (compute capability 7.0), for example, it prints 70:

root@3f6b34672c01:/# allgebra_get_device_cc
70

This output is used to generate the flag -Xopenmp-target -march=sm_70 in the above example.
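
The source of allgebra_get_device_cc itself is not shown here; a hypothetical sketch of such a query against the CUDA runtime API (device index 0 assumed), compiled with nvcc, could look like this:

// Hypothetical sketch of a compute-capability query via the CUDA runtime API
// (not the actual source of allgebra_get_device_cc).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        std::fprintf(stderr, "no CUDA-capable device found\n");
        return 1;
    }
    // Compute capability 7.0 is printed as "70", matching -march=sm_70.
    std::printf("%d%d\n", prop.major, prop.minor);
    return 0;
}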

With Singularity

Singularity is a container runtime focused on HPC and AI. Since Singularity supports Docker and OCI container images, allgebra containers can be used as they are.

singularity run --nv docker://ghcr.io/ricosjp/allgebra/cuda10_1/clang12/mkl:latest

The --nv option enables NVIDIA GPU support inside the container. See the official documentation for details.

You can build a SIF (Singularity Image Format) file from an allgebra container:

singularity build allgebra_clang_mkl.sif docker://ghcr.io/ricosjp/allgebra/cuda10_1/clang12/mkl:latest

and run it:

singularity run --nv allgebra_clang_mkl.sif

The --nv option is required for singularity run but not for singularity build, since singularity build only downloads the container and converts it.

Note that this allgebra_clang_mkl.sif contains CUDA and MKL binaries. You have to accept the CUDA End User License Agreement and follow the Intel Simplified Software License.

ALLGEBRA_* environment variables

In order to identify the CUDA and LLVM versions inside the container, the following environment variables are defined:

Name | Example | Description
ALLGEBRA_CUDA_INSTALL_DIR | /usr/local/cuda-11.4 | The top directory where CUDA is installed
ALLGEBRA_CUDA_VERSION | 11.4 | Installed CUDA version
ALLGEBRA_CUDA_VERSION_MAJOR | 11 | The major version of CUDA
ALLGEBRA_CUDA_VERSION_MINOR | 4 | The minor version of CUDA
ALLGEBRA_CUDA_VERSION_PATCH | 1 | The patch version of CUDA
ALLGEBRA_LLVM_INSTALL_DIR | /usr/local/llvm-12.0.1 | The top directory where LLVM is installed
ALLGEBRA_LLVM_VERSION | 12.0.1 | Installed LLVM version
ALLGEBRA_LLVM_VERSION_MAJOR | 12 | The major version of LLVM
ALLGEBRA_LLVM_VERSION_MINOR | 0 | The minor version of LLVM
ALLGEBRA_LLVM_VERSION_PATCH | 1 | The patch version of LLVM
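
These variables can be read like any other environment variable, e.g. from a build script or a configuration step. A minimal C++ sketch (the printed values depend on the container; paths shown in the table are examples only):

// Minimal sketch: read the ALLGEBRA_* variables inside an allgebra container.
#include <cstdio>
#include <cstdlib>

int main() {
    const char *dir = std::getenv("ALLGEBRA_CUDA_INSTALL_DIR");
    const char *ver = std::getenv("ALLGEBRA_CUDA_VERSION");
    std::printf("CUDA %s installed at %s\n",
                ver ? ver : "(unset)", dir ? dir : "(unset)");
    return 0;
}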

Build containers manually

See DEVELOPMENT.md

License

Copyright 2020 RICOS Co. Ltd.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

CAUTION

Note that you need to accept the CUDA End User License Agreement and the Intel Simplified Software License to use these containers. You can find patched source code of GPL applications derived from the nvidia/cuda container at nvidia/OpenSource.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].