jipolanco / PencilArrays.jl

License: MIT
Distributed Julia arrays using the MPI protocol

Programming Languages: julia (2,034 projects)

Projects that are alternatives to or similar to PencilArrays.jl

Dash
DASH, the C++ Template Library for Distributed Data Structures with Support for Hierarchical Locality for HPC and Data-Driven Science
Stars: ✭ 134 (+235%)
Mutual labels:  mpi, high-performance-computing
PencilFFTs.jl
Fast Fourier transforms of MPI-distributed Julia arrays
Stars: ✭ 48 (+20%)
Mutual labels:  mpi, high-performance-computing
t8code
Parallel algorithms and data structures for tree-based AMR with arbitrary element shapes.
Stars: ✭ 37 (-7.5%)
Mutual labels:  mpi, high-performance-computing
Edge
Extreme-scale Discontinuous Galerkin Environment (EDGE)
Stars: ✭ 18 (-55%)
Mutual labels:  mpi, high-performance-computing
axisem
AxiSEM is a parallel spectral-element method to solve 3D wave propagation in a sphere with axisymmetric or spherically symmetric visco-elastic, acoustic, anisotropic structures.
Stars: ✭ 34 (-15%)
Mutual labels:  mpi, high-performance-computing
mloperator
Machine Learning Operator & Controller for Kubernetes
Stars: ✭ 85 (+112.5%)
Mutual labels:  mpi
iot-master
IoT Master is a free and open-source IoT smart gateway system. It integrates standard Modbus and mainstream PLC protocols, and supports data acquisition, formula-based computation, scheduled and automatic control, anomaly alarms, traffic monitoring, web-based SCADA configuration, remote debugging, and more, making it suitable for most IoT and industrial-internet applications.
Stars: ✭ 119 (+197.5%)
Mutual labels:  mpi
matrix multiplication
Parallel Matrix Multiplication Using OpenMP, Pthreads, and MPI
Stars: ✭ 41 (+2.5%)
Mutual labels:  mpi
libquo
Dynamic execution environments for coupled, thread-heterogeneous MPI+X applications
Stars: ✭ 21 (-47.5%)
Mutual labels:  mpi
analisis-numerico-computo-cientifico
Numerical analysis and scientific computing
Stars: ✭ 42 (+5%)
Mutual labels:  mpi
SimInf
A framework for data-driven stochastic disease spread simulations
Stars: ✭ 21 (-47.5%)
Mutual labels:  high-performance-computing
petgem
petgem is a parallel and high-order code for active-source (e.g. CSEM) and passive-source (e.g. MT) geophysics electromagnetic modeling in 3D arbitrary marine/land problems under anisotropic conductivities.
Stars: ✭ 18 (-55%)
Mutual labels:  high-performance-computing
muster
Massively Scalable Clustering
Stars: ✭ 22 (-45%)
Mutual labels:  mpi
pystella
A code generator for grid-based PDE solving on CPUs and GPUs
Stars: ✭ 18 (-55%)
Mutual labels:  mpi
mpiGraph
MPI benchmark to generate network bandwidth images
Stars: ✭ 17 (-57.5%)
Mutual labels:  mpi
course
High-performance parallel programming and optimization - course slides
Stars: ✭ 1,610 (+3925%)
Mutual labels:  high-performance-computing
vulk
Desktop/Android Python game development framework on top of Vulkan API
Stars: ✭ 31 (-22.5%)
Mutual labels:  high-performance-computing
gblastn
G-BLASTN is a GPU-accelerated nucleotide alignment tool based on the widely used NCBI-BLAST.
Stars: ✭ 52 (+30%)
Mutual labels:  high-performance-computing
ImplicitGlobalGrid.jl
Almost trivial distributed parallelization of stencil-based GPU and CPU applications on a regular staggered grid
Stars: ✭ 88 (+120%)
Mutual labels:  mpi
snakefiles
🐍 Snakefiles for common RNA-seq data analysis workflows.
Stars: ✭ 78 (+95%)
Mutual labels:  high-performance-computing

PencilArrays

Distributed Julia arrays using the MPI protocol.

This package provides a convenient framework for working with multidimensional Julia arrays distributed among MPI processes.

The name of this package originates from the decomposition of 3D domains along two out of three dimensions, sometimes called pencil decomposition. This is illustrated by the figure below, which represents a distributed 3D array. Each coloured block is managed by a different MPI process.


Pencil decomposition of 3D domains

More generally, PencilArrays can decompose arrays of arbitrary dimension N, along an arbitrary number of subdimensions M < N. (In the image above, N = 3 and M = 2.)
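
For example, here is a minimal sketch of a higher-dimensional case. The array size is purely illustrative, and it assumes the Pencil constructor that takes the decomposed dimensions explicitly; see the package documentation for the exact signature.

using MPI
using PencilArrays

MPI.Init()
comm = MPI.COMM_WORLD

# Decompose a 4D array (N = 4) along dimensions 2, 3 and 4 (M = 3).
dims_global = (16, 16, 16, 16)       # illustrative global size
pen = Pencil(dims_global, (2, 3, 4), comm)

u = PencilArray{Float64}(undef, pen)
size_local(u)   # size of the portion held by the local MPI process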

PencilArrays is the basis for the PencilFFTs package, which provides efficient and highly scalable distributed FFTs.

Features

  • distribution of N-dimensional arrays among MPI processes;

  • decomposition of arrays along an arbitrary subset of dimensions;

  • tools for conveniently and efficiently iterating over the coordinates of distributed multidimensional geometries;

  • transpositions between different decomposition configurations, using point-to-point and collective MPI communications;

  • zero-cost, convenient dimension permutations using the StaticPermutations.jl package (see the sketch after this list);

  • convenient parallel I/O using either MPI-IO or the Parallel HDF5 libraries;

  • distributed FFTs and related transforms via the PencilFFTs.jl package.
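
To illustrate the dimension-permutation feature above, here is a hedged sketch. It assumes the permute keyword of the Pencil constructors and the Permutation type from StaticPermutations.jl (re-exported by PencilArrays); refer to the documentation for the exact API.

using MPI
using PencilArrays
# `Permutation` is defined in StaticPermutations.jl; if your PencilArrays
# version does not re-export it, add `using StaticPermutations` as well.

MPI.Init()
comm = MPI.COMM_WORLD

dims_global = (42, 31, 29)

# Decompose along dimensions (2, 3) and store the local data with its
# dimensions permuted as (3, 2, 1) in memory (assumed `permute` keyword).
pen = Pencil(dims_global, (2, 3), comm; permute = Permutation(3, 2, 1))

u = PencilArray{Float64}(undef, pen)
permutation(u)   # Permutation(3, 2, 1): memory layout of the local data
size(u)          # logical (unpermuted) dimensions, as in the Quick start below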

Installation

PencilArrays can be installed using the Julia package manager:

julia> ] add PencilArrays
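
Equivalently, using the Pkg API:

julia> using Pkg; Pkg.add("PencilArrays")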

Quick start

using MPI
using PencilArrays
using LinearAlgebra: transpose!

MPI.Init()
comm = MPI.COMM_WORLD       # MPI communicator
rank = MPI.Comm_rank(comm)  # rank of local process

# Let's decompose a 3D grid across all MPI processes.
# The resulting configuration is described by a Pencil object.
dims_global = (42, 31, 29)  # global dimensions of the array
pen_x = Pencil(dims_global, comm)

# By default the 3D grid is decomposed along the last two dimensions, similarly
# to the "x-pencil" configuration in the figure above:
println(pen_x)
# Decomposition of 3D data
#   Data dimensions: (42, 31, 29)
#   Decomposed dimensions: (2, 3)
#   Data permutation: NoPermutation()
#   Array type: Array

# We can now allocate distributed arrays in the x-pencil configuration.
Ax = PencilArray{Float64}(undef, pen_x)
fill!(Ax, rank * π)  # each process locally fills its part of the array
parent(Ax)           # parent array holding the local data (here, an Array{Float64,3})
size(Ax)             # total size of the array = (42, 31, 29)
size_local(Ax)       # size of local part, e.g. (42, 8, 10) for a given process
range_local(Ax)      # range of local part on global grid, e.g. (1:42, 16:23, 20:29)

# Let's associate the dimensions with a global grid of coordinates (x_i, y_j, z_k)
xs_global = range(0, 1;  length = dims_global[1])
ys_global = range(0, 2;  length = dims_global[2])
zs_global = range(0, 2π; length = dims_global[3])

# Part of the grid associated to the local MPI process:
grid = localgrid(pen_x, (xs_global, ys_global, zs_global))

# This is convenient for example if we want to initialise the `Ax` array as
# a function of the grid coordinates (x, y, z):
@. Ax = grid.x + (2 * grid.y * cos(grid.z))

# Alternatively (useful in higher dimensions):
@. Ax = grid[1] + (2 * grid[2] * cos(grid[3]))

# Create another pencil configuration, decomposing along dimensions (1, 3).
# We could use the same constructor as before, but it's recommended to reuse the
# previous Pencil instead to reduce memory usage.
pen_y = Pencil(pen_x; decomp_dims = (1, 3))

# Now transpose from the x-pencil to the y-pencil configuration, redistributing
# the data initially in Ax.
Ay = PencilArray{Float64}(undef, pen_y)
transpose!(Ay, Ax)

# We can check that Ax and Ay have the same data (but distributed differently)
# by combining the data from all different processes onto a single process
# (this should never be used for large datasets!)
gather(Ax) == gather(Ay)  # true
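
To actually see the data distributed across processes, run the script above with several MPI processes, for example with the mpiexecjl wrapper provided by MPI.jl (or your system's mpiexec); the file name below is just a placeholder:

mpiexecjl -n 4 julia quickstart.jl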