LLNL / mpiGraph

Licence: other
MPI benchmark to generate network bandwidth images

Programming Languages

perl
6916 projects
c
50402 projects - #5 most used programming language
Makefile
30231 projects

Labels

mpi

Projects that are alternatives to or similar to mpiGraph

SIRIUS
Domain specific library for electronic structure calculations
Stars: ✭ 87 (+411.76%)
Mutual labels:  mpi
nbodykit
Analysis kit for large-scale structure datasets, the massively parallel way
Stars: ✭ 93 (+447.06%)
Mutual labels:  mpi
pbdML
No description or website provided.
Stars: ✭ 13 (-23.53%)
Mutual labels:  mpi
sst-core
SST Structural Simulation Toolkit Parallel Discrete Event Core and Services
Stars: ✭ 82 (+382.35%)
Mutual labels:  mpi
SWCaffe
A Deep Learning Framework customized for Sunway TaihuLight
Stars: ✭ 37 (+117.65%)
Mutual labels:  mpi
bsuir-csn-cmsn-helper
Repository containing ready-made laboratory works in the specialty of computing machines, systems and networks
Stars: ✭ 43 (+152.94%)
Mutual labels:  mpi
Galaxy
Galaxy is an asynchronous parallel visualization ray tracer for performant rendering in distributed computing environments. Galaxy builds upon Intel OSPRay and Intel Embree, including ray queueing and sending logic inspired by TACC GraviT.
Stars: ✭ 18 (+5.88%)
Mutual labels:  mpi
libquo
Dynamic execution environments for coupled, thread-heterogeneous MPI+X applications
Stars: ✭ 21 (+23.53%)
Mutual labels:  mpi
FluxUtils.jl
Sklearn Interface and Distributed Training for Flux.jl
Stars: ✭ 12 (-29.41%)
Mutual labels:  mpi
fdtd3d
fdtd3d is an open source 1D, 2D, 3D FDTD electromagnetics solver with MPI, OpenMP and CUDA support for x86, arm, arm64 architectures
Stars: ✭ 77 (+352.94%)
Mutual labels:  mpi
faabric
Messaging and state layer for distributed serverless applications
Stars: ✭ 39 (+129.41%)
Mutual labels:  mpi
XH5For
XDMF parallel partitioned mesh I/O on top of HDF5
Stars: ✭ 23 (+35.29%)
Mutual labels:  mpi
h5fortran-mpi
HDF5-MPI parallel Fortran object-oriented interface
Stars: ✭ 15 (-11.76%)
Mutual labels:  mpi
gslib
sparse communication library
Stars: ✭ 22 (+29.41%)
Mutual labels:  mpi
ACCL
Accelerated Collective Communication Library: MPI-like communication operations for Xilinx Alveo accelerators
Stars: ✭ 28 (+64.71%)
Mutual labels:  mpi
raptor
General, high performance algebraic multigrid solver
Stars: ✭ 50 (+194.12%)
Mutual labels:  mpi
sboxgates
Program for finding low gate count implementations of S-boxes.
Stars: ✭ 30 (+76.47%)
Mutual labels:  mpi
matrix multiplication
Parallel Matrix Multiplication Using OpenMP, Pthreads, and MPI
Stars: ✭ 41 (+141.18%)
Mutual labels:  mpi
eventgrad
Event-Triggered Communication in Parallel Machine Learning
Stars: ✭ 14 (-17.65%)
Mutual labels:  mpi
fml
Fused Matrix Library
Stars: ✭ 24 (+41.18%)
Mutual labels:  mpi

mpiGraph

Benchmark to generate network bandwidth images

Build

make
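
If the provided Makefile needs adapting to your environment, the equivalent manual compile is a one-liner. A minimal sketch, assuming an MPI compiler wrapper such as mpicc is on your PATH (the exact flags used by the project's Makefile may differ):

mpicc -O2 -o mpiGraph mpiGraph.c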

Run

Run one MPI task per node:

SLURM: srun -n <nodes> -N <nodes> ./mpiGraph 1048576 10 10 > mpiGraph.out
Open MPI: mpirun --map-by node -np <nodes> ./mpiGraph 1048576 10 10 > mpiGraph.out

General usage:

mpiGraph <size> <iters> <window>

To compute bandwidth, each task averages the bandwidth measured over iters iterations. In each iteration, a process sends window messages of size bytes to one process while it simultaneously receives an equal number of messages of the same size from another process. The source and destination in a given step are not necessarily the same process.
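
The measurement loop follows roughly the pattern below. This is a simplified sketch in C under the assumptions just described, not the actual mpiGraph source; the real code also handles partner scheduling, warm-up, and separate send/receive accounting.

#include <mpi.h>

/* Sketch of one timed step: exchange `window` messages of `size` bytes
 * with a send partner and a receive partner, then report bandwidth.
 * Both buffers are assumed to hold at least window * size bytes. */
double step_bandwidth(int dest, int src, char *sendbuf, char *recvbuf,
                      int size, int window, MPI_Comm comm)
{
    MPI_Request req[2 * window];
    double start = MPI_Wtime();
    for (int w = 0; w < window; w++)
        MPI_Irecv(recvbuf + (size_t)w * size, size, MPI_CHAR, src, 0, comm, &req[w]);
    for (int w = 0; w < window; w++)
        MPI_Isend(sendbuf + (size_t)w * size, size, MPI_CHAR, dest, 0, comm, &req[window + w]);
    MPI_Waitall(2 * window, req, MPI_STATUSES_IGNORE);
    double elapsed = MPI_Wtime() - start;
    return ((double)size * window) / elapsed;   /* bytes per second */
}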

Watch progress:

tail -f mpiGraph.out

Results

Parse output and create html report:

crunch_mpiGraph mpiGraph.out

View results in a web browser:

firefox file:///path/to/mpiGraph.out_html/index.html

Description

This package consists of an MPI application called "mpiGraph" written in C to measure message bandwidth and an associated "crunch_mpigraph" script written in Perl to parse the application output and generate an HTML report. The mpiGraph application is designed to inspect the health and scalability of a high-performance interconnect while subjecting it to heavy load. This is useful for detecting hardware and software problems in a system, such as slow nodes, links, or switches, or contention in switch routing. It is also useful for characterizing how interconnect performance changes with different settings, or how one interconnect type compares to another.

Typically, one MPI task is run per node (or per interconnect link). For a job of N MPI tasks, the N tasks are logically arranged in a ring, counting ranks from 0 and increasing to the right, with the end wrapping back around to rank 0. A series of N-1 steps is then executed. In each step, each MPI task sends to the task D units to its right and simultaneously receives from the task D units to its left. The value of D starts at 1 and runs to N-1, so that by the end of the N-1 steps, each task has sent to and received from every other task in the run, excluding itself. At the end of the run, two NxN matrices of bandwidths are gathered and written to stdout -- one for send bandwidths and one for receive bandwidths.
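
The step schedule amounts to a modular shift. A minimal sketch in C, assuming N ranks in MPI_COMM_WORLD (illustration only, not the actual mpiGraph source):

#include <mpi.h>

/* Ring schedule: in step D = 1..N-1, rank r sends to (r + D) mod N and
 * receives from (r - D + N) mod N, so every ordered pair of distinct
 * ranks is exercised exactly once over the N-1 steps. */
void ring_schedule(void)
{
    int rank, N;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &N);
    for (int D = 1; D < N; D++) {
        int dest = (rank + D) % N;        /* task D units to the right */
        int src  = (rank - D + N) % N;    /* task D units to the left  */
        /* ... run the timed window exchange with dest and src here ... */
        (void)dest; (void)src;
    }
}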

The crunch_mpiGraph script is then run on this output to generate a report. It includes a pair of bitmap images representing bandwidth values between different task pairings. Pixels in these images are colored according to relative bandwidth values: the maximum bandwidth value is rendered as pure white (value 255), and other values are scaled toward black (0) in proportion to their percentage of the maximum. One can then visually inspect the images and identify anomalous behavior in the system. One may zoom in and inspect image features in more detail by hovering the mouse cursor over the image; JavaScript embedded in the HTML report opens a pop-up tooltip with a zoomed-in view of the cursor location.
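
The scaling described above reduces to a one-line formula. Expressed in C for illustration (the crunch script itself is written in Perl; this is only the assumed mapping, not its code):

/* Map a bandwidth value to a grayscale intensity: the maximum observed
 * bandwidth renders as pure white (255); lower values scale linearly
 * toward black (0). */
unsigned char shade(double bw, double max_bw)
{
    return (unsigned char)(255.0 * bw / max_bw + 0.5);
}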

References

Contention-free Routing for Shift-based Communication in MPI Applications on Large-scale Infiniband Clusters, Adam Moody, LLNL-TR-418522, Oct 2009
