VcDevel / Std Simd

Licence: other
std::experimental::simd for GCC [ISO/IEC TS 19570:2018]

Programming Languages

cpp17

Projects that are alternatives of or similar to Std Simd

Xsimd
C++ wrappers for SIMD intrinsics and parallelized, optimized mathematical functions (SSE, AVX, NEON, AVX512)
Stars: ✭ 964 (+250.55%)
Mutual labels:  simd, sse, neon, avx512, avx
Simd
C++ image processing and machine learning library using SIMD: SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, AVX2, AVX-512, VMX (Altivec) and VSX (Power7), NEON for ARM.
Stars: ✭ 1,263 (+359.27%)
Mutual labels:  simd, sse, neon, avx512, avx
Vc
SIMD Vector Classes for C++
Stars: ✭ 985 (+258.18%)
Mutual labels:  simd, sse, neon, avx512, avx
Quadray Engine
Realtime raytracer using SIMD on ARM, MIPS, PPC and x86
Stars: ✭ 13 (-95.27%)
Mutual labels:  simd, sse, neon, avx512, avx
Boost.simd
Boost SIMD
Stars: ✭ 238 (-13.45%)
Mutual labels:  simd, sse, neon, avx512, avx
Unisimd Assembler
SIMD macro assembler unified for ARM, MIPS, PPC and x86
Stars: ✭ 63 (-77.09%)
Mutual labels:  simd, sse, neon, avx512, avx
Simde
Implementations of SIMD instruction sets for systems which don't natively support them.
Stars: ✭ 1,012 (+268%)
Mutual labels:  simd, sse, neon, avx512, avx
Libxsmm
Library for specialized dense and sparse matrix operations, and deep learning primitives.
Stars: ✭ 518 (+88.36%)
Mutual labels:  simd, sse, avx512, avx
Umesimd
UME::SIMD, a library for explicit SIMD vectorization.
Stars: ✭ 66 (-76%)
Mutual labels:  simd, neon, avx512, avx
oversimple
A library for audio oversampling that aims to offer a simple API while wrapping HIIR (by Laurent De Soras) for minimum-phase antialiasing and r8brain-free-src (by Aleksey Vaneev) for linear-phase antialiasing.
Stars: ✭ 25 (-90.91%)
Mutual labels:  neon, avx, sse, simd
Cglm
📽 Highly Optimized Graphics Math (glm) for C
Stars: ✭ 887 (+222.55%)
Mutual labels:  simd, sse, neon, avx
Directxmath
DirectXMath is an all inline SIMD C++ linear algebra library for use in games and graphics apps
Stars: ✭ 859 (+212.36%)
Mutual labels:  simd, sse, neon, avx
Nsimd
Agenium Scale vectorization library for CPUs and GPUs
Stars: ✭ 138 (-49.82%)
Mutual labels:  simd, neon, avx512, avx
Sleef
SIMD Library for Evaluating Elementary Functions, vectorized libm and DFT
Stars: ✭ 353 (+28.36%)
Mutual labels:  simd, neon, avx512, avx
ternary-logic
Support for ternary logic in SSE, XOP, AVX2 and x86 programs
Stars: ✭ 21 (-92.36%)
Mutual labels:  avx, sse, simd, avx512
Libsimdpp
Portable header-only C++ low level SIMD library
Stars: ✭ 914 (+232.36%)
Mutual labels:  simd, sse, neon, avx512
Base64simd
Base64 coding and decoding with SIMD instructions (SSE/AVX2/AVX512F/AVX512BW/AVX512VBMI/ARM Neon)
Stars: ✭ 115 (-58.18%)
Mutual labels:  simd, sse, neon, avx512
Mipp
MIPP is a portable wrapper for SIMD instructions written in C++11. It supports NEON, SSE, AVX and AVX-512.
Stars: ✭ 253 (-8%)
Mutual labels:  simd, sse, neon, avx
penguinV
Simple and fast C++ image processing library with focus on heterogeneous systems
Stars: ✭ 110 (-60%)
Mutual labels:  avx, sse, simd
hpc
Learning and practice of high performance computing (CUDA, Vulkan, OpenCL, OpenMP, TBB, SSE/AVX, NEON, MPI, coroutines, etc. )
Stars: ✭ 39 (-85.82%)
Mutual labels:  avx, sse, simd

std::experimental::simd

portable, zero-overhead C++ types for explicitly data-parallel programming

This package implements ISO/IEC TS 19570:2018 Section 9 "Data-Parallel Types". It targets inclusion into libstdc++. By default, the install.sh script places the std::experimental::simd headers into the directory where the standard library of your C++ compiler (identified via $CXX) resides.

The implementation derives from https://github.com/VcDevel/Vc. It is only tested and supported with GCC 9, even though it may (partially) work with older GCC versions.
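
Once installed, the headers are pulled in via <experimental/simd>. The following minimal sketch shows what that looks like; the stdx alias and the printed values are illustrative, and the element count of native_simd<float> depends on the compiler and the -march flags used:

#include <experimental/simd>
#include <cstddef>
#include <iostream>

namespace stdx = std::experimental;

int main() {
  stdx::native_simd<float> x = 3.f;  // broadcast 3.f into every element
  x = x * x + 1.f;                   // element-wise arithmetic on all elements at once
  for (std::size_t i = 0; i < x.size(); ++i)
    std::cout << x[i] << ' ';        // prints "10 10 ..." (one value per element)
  std::cout << '\n';
}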

Target support

  • x86_64 is the main development platform and thoroughly tested. This includes support from SSE-only up to AVX512 on Xeon Phi or Xeon CPUs.
  • aarch64, arm, and ppc64le were tested and verified to work. No significant performance evaluation was done.
  • In any case, a fallback to correct execution via builtin arithmetic types is available for all targets (see the sketch after this list).
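
For illustration, the TS also provides ABI tags that fix the element count independently of the target. The sketch below (not taken from the project's documentation) shows native_simd next to the always-available scalar and fixed-size variants:

#include <experimental/simd>

namespace stdx = std::experimental;

// native_simd picks the widest ABI the build target supports; without any
// SIMD hardware it falls back to builtin (scalar) arithmetic.
using floatv  = stdx::native_simd<float>;

// These ABI tags compile on every target, regardless of hardware support:
using scalarv = stdx::simd<float, stdx::simd_abi::scalar>;  // exactly 1 element
using fixedv  = stdx::fixed_size_simd<float, 4>;            // always 4 elements

static_assert(scalarv::size() == 1);
static_assert(fixedv::size() == 4);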

Installation Instructions

$ ./install.sh

Use --help to learn about the available options.

Example

Scalar Product

Let's start from the code for calculating a 3D scalar product using builtin floats:

using Vec3D = std::array<float, 3>;
float scalar_product(Vec3D a, Vec3D b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

With simd, we can easily vectorize the code using the native_simd<float> type (Compiler Explorer):

using std::experimental::native_simd;
using Vec3D = std::array<native_simd<float>, 3>;
native_simd<float> scalar_product(Vec3D a, Vec3D b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

The above will scale to 1, 4, 8, 16, etc. scalar products calculated in parallel, depending on the target hardware's capabilities.
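
The example leaves out how the data reaches the simd objects. One way is to load and store contiguous structure-of-arrays data with copy_from/copy_to; the scalar_products driver below, its parameter names, and the SoA layout are illustrative sketches and not part of the project:

#include <experimental/simd>
#include <array>
#include <cstddef>

namespace stdx = std::experimental;
using floatv = stdx::native_simd<float>;
using Vec3D  = std::array<floatv, 3>;

floatv scalar_product(Vec3D a, Vec3D b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Computes N scalar products from structure-of-arrays input (x[], y[], z[]).
// Assumes N is a multiple of floatv::size(); a real loop needs an epilogue
// for the remaining elements.
void scalar_products(const float* ax, const float* ay, const float* az,
                     const float* bx, const float* by, const float* bz,
                     float* out, std::size_t N) {
  for (std::size_t i = 0; i < N; i += floatv::size()) {
    Vec3D a, b;
    a[0].copy_from(ax + i, stdx::element_aligned);
    a[1].copy_from(ay + i, stdx::element_aligned);
    a[2].copy_from(az + i, stdx::element_aligned);
    b[0].copy_from(bx + i, stdx::element_aligned);
    b[1].copy_from(by + i, stdx::element_aligned);
    b[2].copy_from(bz + i, stdx::element_aligned);
    scalar_product(a, b).copy_to(out + i, stdx::element_aligned);
  }
}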

For comparison, the same vectorization using Intel SSE intrinsics is more verbose, uses prefix notation (i.e. function calls), and neither scales to AVX or AVX512, nor is it portable to different SIMD ISAs:

using Vec3D = std::array<__m128, 3>;
__m128 scalar_product(Vec3D a, Vec3D b) {
  return _mm_add_ps(_mm_add_ps(_mm_mul_ps(a[0], b[0]), _mm_mul_ps(a[1], b[1])),
                    _mm_mul_ps(a[2], b[2]));
}

Build Requirements

None; the library is header-only.

However, to build the unit tests you will need:

  • cmake >= 3.0
  • GCC >= 9.1

To execute all AVX512 unit tests, you will need the Intel SDE.

Building the tests

$ make test

This will create a build directory, run cmake, compile the tests, and execute the tests.

Documentation

https://en.cppreference.com/w/cpp/experimental/simd

Publications

License

The simd headers, tests, and benchmarks are released under the terms of the 3-clause BSD license.
