jipolanco / PencilFFTs.jl

License: MIT
Fast Fourier transforms of MPI-distributed Julia arrays

Programming Languages

julia (2034 projects)

Projects that are alternatives to or similar to PencilFFTs.jl

Dash
DASH, the C++ Template Library for Distributed Data Structures with Support for Hierarchical Locality for HPC and Data-Driven Science
Stars: ✭ 134 (+179.17%)
Mutual labels:  mpi, high-performance-computing
PencilArrays.jl
Distributed Julia arrays using the MPI protocol
Stars: ✭ 40 (-16.67%)
Mutual labels:  mpi, high-performance-computing
t8code
Parallel algorithms and data structures for tree-based AMR with arbitrary element shapes.
Stars: ✭ 37 (-22.92%)
Mutual labels:  mpi, high-performance-computing
Edge
Extreme-scale Discontinuous Galerkin Environment (EDGE)
Stars: ✭ 18 (-62.5%)
Mutual labels:  mpi, high-performance-computing
axisem
AxiSEM is a parallel spectral-element method to solve 3D wave propagation in a sphere with axisymmetric or spherically symmetric visco-elastic, acoustic, anisotropic structures.
Stars: ✭ 34 (-29.17%)
Mutual labels:  mpi, high-performance-computing
audiowmark
Audio Watermarking
Stars: ✭ 101 (+110.42%)
Mutual labels:  fft
VoiceNET.Library
.NET library for easily creating voice command control features.
Stars: ✭ 14 (-70.83%)
Mutual labels:  fft
old-audiosync
First implementation of the audio synchronization feature for Vidify, now obsolete
Stars: ✭ 16 (-66.67%)
Mutual labels:  fft
ImplicitGlobalGrid.jl
Almost trivial distributed parallelization of stencil-based GPU and CPU applications on a regular staggered grid
Stars: ✭ 88 (+83.33%)
Mutual labels:  mpi
pypar
Efficient and scalable parallelism using the Message Passing Interface (MPI) to handle big data and computationally intensive problems.
Stars: ✭ 66 (+37.5%)
Mutual labels:  mpi
tbslas
A parallel, fast solver for the scalar advection-diffusion and the incompressible Navier-Stokes equations based on semi-Lagrangian/Volume-Integral method.
Stars: ✭ 21 (-56.25%)
Mutual labels:  mpi
fuzzball
Ongoing development of the Fuzzball MUCK server software and associated functionality.
Stars: ✭ 38 (-20.83%)
Mutual labels:  mpi
research-computing-with-cpp
UCL-RITS *C++ for Research* engineering course
Stars: ✭ 16 (-66.67%)
Mutual labels:  mpi
PartitionedArrays.jl
Vectors and sparse matrices partitioned into pieces for parallel distributed-memory computations.
Stars: ✭ 45 (-6.25%)
Mutual labels:  mpi
course
High-Performance Parallel Programming and Optimization (course slides)
Stars: ✭ 1,610 (+3254.17%)
Mutual labels:  high-performance-computing
dblclockfft
A configurable C++ generator of pipelined Verilog FFT cores
Stars: ✭ 147 (+206.25%)
Mutual labels:  fft
SimInf
A framework for data-driven stochastic disease spread simulations
Stars: ✭ 21 (-56.25%)
Mutual labels:  high-performance-computing
wxparaver
wxParaver is a trace-based visualization and analysis tool designed to study quantitative detailed metrics and obtain qualitative knowledge of the performance of applications, libraries, processors and whole architectures.
Stars: ✭ 23 (-52.08%)
Mutual labels:  mpi
openvino pytorch layers
How to export PyTorch models with unsupported layers to ONNX and then to Intel OpenVINO
Stars: ✭ 17 (-64.58%)
Mutual labels:  fft
yask
YASK--Yet Another Stencil Kit: a domain-specific language and framework to create high-performance stencil code for implementing finite-difference methods and similar applications.
Stars: ✭ 81 (+68.75%)
Mutual labels:  mpi

PencilFFTs

Fast Fourier transforms of MPI-distributed Julia arrays.

This package provides multidimensional FFTs and related transforms on MPI-distributed Julia arrays via the PencilArrays package.

The name of this package originates from the decomposition of 3D domains along two out of three dimensions, sometimes called pencil decomposition. This is illustrated by the figure below, where each coloured block is managed by a different MPI process. Typically, one wants to compute FFTs on a scalar or vector field along the three spatial dimensions. In the case of a pencil decomposition, 3D FFTs are performed one dimension at a time (along the non-decomposed direction, using a serial FFT implementation). Global data transpositions are then needed to switch from one pencil configuration to the other and perform FFTs along the other dimensions.


Pencil decomposition of 3D domains
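
To make the transpose-based algorithm concrete, here is a minimal serial sketch in plain Julia using FFTW. It is illustrative only and does not use the PencilFFTs API: the function name fft3_pencil_sketch is made up for this sketch, and the permutedims calls stand in for what, in the distributed setting, become global MPI transpositions between pencil configurations.

using FFTW

# Compute a 3D FFT one dimension at a time, transposing between stages,
# as a pencil code does (here serially; PencilFFTs does this over MPI).
function fft3_pencil_sketch(u::AbstractArray{<:Complex,3})
    v = fft(u, 1)                  # 1D FFTs along dimension 1; data ordered (i, j, k)
    v = permutedims(v, (2, 1, 3))  # "transpose" -> data ordered (j, i, k)
    v = fft(v, 1)                  # 1D FFTs along original dimension 2
    v = permutedims(v, (3, 2, 1))  # "transpose" -> data ordered (k, i, j)
    v = fft(v, 1)                  # 1D FFTs along original dimension 3
    permutedims(v, (2, 3, 1))      # back to the original (i, j, k) order
end

u = rand(ComplexF64, 8, 8, 8)
@assert fft3_pencil_sketch(u) ≈ fft(u)  # matches a direct 3D FFT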

Features

  • distributed N-dimensional FFTs of MPI-distributed Julia arrays, using the PencilArrays package;

  • FFTs and related transforms (e.g. DCTs / Chebyshev transforms) may be arbitrarily combined along different dimensions (see the sketch after this list);

  • in-place and out-of-place transforms;

  • high scalability up to (at least) tens of thousands of MPI processes.
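
Per-dimension transforms are specified by passing a tuple of transforms to PencilFFTPlan instead of a single one. The following sketch, based on the package documentation, builds a plan equivalent to the 3D Transforms.RFFT() of the quick-start example: a real-to-complex FFT along the first dimension followed by complex FFTs along the other two (the 12-process assumption matches the quick start below).

using MPI
using PencilFFTs

MPI.Init()
comm = MPI.COMM_WORLD  # assumed to contain 12 processes

dims = (16, 32, 64)  # global data dimensions
proc_dims = (3, 4)   # 3 × 4 process grid

# One transform per dimension.
transforms = (Transforms.RFFT(), Transforms.FFT(), Transforms.FFT())
plan = PencilFFTPlan(dims, transforms, proc_dims, comm)

Real-to-real transforms such as Transforms.R2R(FFTW.REDFT01), which can be used to implement Chebyshev transforms, can be substituted along individual dimensions in the same way; see the Transforms section of the docs.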

Installation

PencilFFTs can be installed using the Julia package manager:

julia> ] add PencilFFTs
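
Equivalently, using the functional Pkg API:

julia> using Pkg; Pkg.add("PencilFFTs")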

Quick start

The following example shows how to apply a 3D FFT of real data over 12 MPI processes distributed on a 3 × 4 grid (same distribution as in the figure above).

using MPI
using PencilFFTs
using Random

MPI.Init()

dims = (16, 32, 64)  # input data dimensions
transform = Transforms.RFFT()  # apply a 3D real-to-complex FFT

# Distribute 12 processes on a 3 × 4 grid.
comm = MPI.COMM_WORLD  # we assume MPI.Comm_size(comm) == 12
proc_dims = (3, 4)

# Create plan
plan = PencilFFTPlan(dims, transform, proc_dims, comm)

# Allocate and initialise input data, and apply transform.
u = allocate_input(plan)
rand!(u)
uF = plan * u

# Apply backwards transform. Note that the result is normalised.
v = plan \ uF
@assert u ≈ v
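
Like any MPI program, this example must be started through an MPI launcher so that all 12 processes execute it. Assuming the script above is saved as example.jl (an illustrative name), a typical invocation is:

mpiexec -n 12 julia example.jl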

For more details see the tutorial.

Performance

The performance of PencilFFTs is comparable to that of widely adopted MPI-based FFT libraries implemented in lower-level languages. As seen below, with its default settings, PencilFFTs generally outperforms the Fortran P3DFFT libraries.


Strong scaling of PencilFFTs

See the benchmarks section of the docs for details.
