
cea-hpc / hp2p

License: other
Heavy Peer To Peer: an MPI-based benchmark for network diagnostics

Programming Languages

C
50402 projects - #5 most used programming language
M4
1887 projects
C++
36643 projects - #6 most used programming language

Projects that are alternatives of or similar to hp2p

Core
parallel finite element unstructured meshes
Stars: ✭ 124 (+629.41%)
Mutual labels:  hpc, parallel, mpi, parallel-computing
Foundations of HPC 2021
This repository collects the materials from the course "Foundations of HPC", 2021, at the Data Science and Scientific Computing Department, University of Trieste
Stars: ✭ 22 (+29.41%)
Mutual labels:  hpc, mpi, parallel-computing, hpc-applications
t8code
Parallel algorithms and data structures for tree-based AMR with arbitrary element shapes.
Stars: ✭ 37 (+117.65%)
Mutual labels:  hpc, parallel, mpi, parallel-computing
ParallelUtilities.jl
Fast and easy parallel mapreduce on HPC clusters
Stars: ✭ 28 (+64.71%)
Mutual labels:  hpc, parallel, parallel-computing, hpc-applications
muster
Massively Scalable Clustering
Stars: ✭ 22 (+29.41%)
Mutual labels:  parallel, mpi, parallel-computing
hpc
Learning and practice of high performance computing (CUDA, Vulkan, OpenCL, OpenMP, TBB, SSE/AVX, NEON, MPI, coroutines, etc. )
Stars: ✭ 39 (+129.41%)
Mutual labels:  hpc, mpi, parallel-computing
ParMmg
Distributed parallelization of 3D volume mesh adaptation
Stars: ✭ 19 (+11.76%)
Mutual labels:  hpc, parallel, mpi
Hpcinfo
Information about many aspects of high-performance computing. Wiki content moved to ~/docs.
Stars: ✭ 171 (+905.88%)
Mutual labels:  hpc, parallel, mpi
Dash
DASH, the C++ Template Library for Distributed Data Structures with Support for Hierarchical Locality for HPC and Data-Driven Science
Stars: ✭ 134 (+688.24%)
Mutual labels:  hpc, mpi, parallel-computing
cruise
User space POSIX-like file system in main memory
Stars: ✭ 27 (+58.82%)
Mutual labels:  hpc, parallel, parallel-computing
Easylambda
distributed dataflows with functional list operations for data processing with C++14
Stars: ✭ 475 (+2694.12%)
Mutual labels:  hpc, parallel, mpi
Future.apply
🚀 R package: future.apply - Apply Function to Elements in Parallel using Futures
Stars: ✭ 159 (+835.29%)
Mutual labels:  hpc, parallel, parallel-computing
Ompi
Open MPI main development repository
Stars: ✭ 1,221 (+7082.35%)
Mutual labels:  hpc, mpi
Hiop
HPC solver for nonlinear optimization problems
Stars: ✭ 75 (+341.18%)
Mutual labels:  hpc, mpi
vpic
Vector Particle-In-Cell (VPIC) Project
Stars: ✭ 124 (+629.41%)
Mutual labels:  hpc, hpc-applications
Sundials
SUNDIALS is a SUite of Nonlinear and DIfferential/ALgebraic equation Solvers. This is a mirror of current releases, and development will move here eventually. Pull requests are welcome for bug fixes and minor changes.
Stars: ✭ 194 (+1041.18%)
Mutual labels:  hpc, parallel-computing
Parenchyma
An extensible HPC framework for CUDA, OpenCL and native CPU.
Stars: ✭ 71 (+317.65%)
Mutual labels:  hpc, parallel-computing
Training Material
A collection of code examples as well as presentations for training purposes
Stars: ✭ 85 (+400%)
Mutual labels:  hpc, mpi
Singularity
Singularity: Application containers for Linux
Stars: ✭ 2,290 (+13370.59%)
Mutual labels:  hpc, parallel
Onemkl
oneAPI Math Kernel Library (oneMKL) Interfaces
Stars: ✭ 122 (+617.65%)
Mutual labels:  hpc, parallel-computing

HP2P

The HP2P (Heavy Peer To Peer) benchmark performs non-blocking MPI point-to-point communications between all MPI processes. Its goal is to measure bandwidths and latencies while the network is busy, which helps detect network problems such as congestion or faulty switches and links.
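
To illustrate the communication pattern, here is a minimal sketch of one pairwise exchange measured with non-blocking calls. This is a simplified illustration, not HP2P's actual code: the partner choice, message size, and single exchange are hard-coded here, whereas HP2P draws random couples and repeats the exchanges.

// Simplified sketch of one heavy peer-to-peer step: every rank exchanges a
// message with a partner using non-blocking point-to-point calls, and the
// elapsed time gives a rough pairwise bandwidth estimate.
// NOT HP2P's actual code; the pairing and sizes are hard-coded.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  const int msg_size = 1024;  // bytes per message
  std::vector<char> sendbuf(msg_size, 'x'), recvbuf(msg_size);

  // Toy pairing: rank i talks to rank (size - 1 - i). HP2P instead draws
  // random couples at each iteration.
  int partner = size - 1 - rank;

  MPI_Request reqs[2];
  double t0 = MPI_Wtime();
  MPI_Irecv(recvbuf.data(), msg_size, MPI_CHAR, partner, 0,
            MPI_COMM_WORLD, &reqs[0]);
  MPI_Isend(sendbuf.data(), msg_size, MPI_CHAR, partner, 0,
            MPI_COMM_WORLD, &reqs[1]);
  MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
  double dt = MPI_Wtime() - t0;

  printf("rank %d <-> %d: %.3f us, ~%.1f MB/s\n",
         rank, partner, dt * 1e6, msg_size / dt / 1e6);
  MPI_Finalize();
  return 0;
}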

The benchmark generates an HTML output with an interactive Plotly visualization.


New: an HTML report can be generated instead of the Python GUI, for portability. The following link gives an example of a report.

Plotly example

Prerequisites

Main program:

  • C++ compiler
  • MPI

Getting started

$ ./configure --prefix=<path-to-install>
$ make
$ make install

The program hp2p.exe is generated.

Running HP2P

$ hp2p.exe -h
Usage: ./hp2p.exe [-h] [-n nit] [-k freq] [-m nb_msg]
       [-s msg_size] [-o output] [-a align] [-y]
       [-p file] [-i conf_file]
       [-f bin|html] [-M max_comm_time] [-X mult_time]
Options:
   -i conf_file       Configuration file
   -n nit             Number of iterations
   -k freq            Iterations between snapshot
   -s msg_size        Message size
   -m nb_msg          Number of msg per comm
   -a align           Alignment size for MPI buffer (default=8)
   -t max_time        Max duration
   -c build           Algorithm to build couple
                      (random = 0 (default), mirroring shift = 1)
   -y anon            1 = hide hostname, 0 = write hostname (default)
   -p jsfile          Path to a plotly.min.js file to include into HTML
                      Use get_plotlyjs.py script if plotly is installed
                      in your Python distribution
   -o output          Output file
   -f format          Output format (binary format = bin, plotly
                      format = html) [default: html]
   -M max_comm_time   If set, print a warning each time a
                      communication pair is slower than 
                      max_comm_time
   -X mult_time       If set, print a warning each time a
                      communication pair is slower than 
                      mult_time * mean of previous
                      communication times

The program is an MPI program, so launch it with mpirun:

 $ mpirun -n 32 ./hp2p.exe -n 1000 -s 1024 -m 10 -o output.html

This command launches the benchmark on 32 MPI processes and runs 1000 iterations. An iteration consists of drawing random couples of MPI processes, followed by a phase in which 10 successive communications of 1024 bytes are performed between each couple. Since the benchmark aims to test the network, it is best launched with 1 MPI process per node. At the end of the execution, the output.html file can be viewed with a web browser.
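
The couple-building step (-c 0, the default) can be pictured as drawing a random matching over the ranks. The following sketch is a plausible illustration, not necessarily the exact algorithm HP2P uses:

// Hypothetical sketch of drawing random couples of ranks for one iteration:
// shuffle the rank list and pair consecutive entries. Not necessarily the
// exact algorithm HP2P uses for -c 0.
#include <algorithm>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

std::vector<std::pair<int, int>> draw_couples(int nranks, std::mt19937 &rng) {
  std::vector<int> ranks(nranks);
  std::iota(ranks.begin(), ranks.end(), 0);       // 0, 1, ..., nranks-1
  std::shuffle(ranks.begin(), ranks.end(), rng);  // random permutation
  std::vector<std::pair<int, int>> couples;
  for (int i = 0; i + 1 < nranks; i += 2)         // pair consecutive entries
    couples.emplace_back(ranks[i], ranks[i + 1]);
  return couples;  // with an odd rank count, one rank sits out
}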

Using CUDA

Compilation

$ ./configure --enable-cuda --with-cuda=${CUDA_ROOT}
$ make
$ make install

Running

hp2p should be launched with one MPI process per GPU. For example, if a node has 4 GPUs, launch 4 MPI processes on that node.
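
A common way to achieve this one-process-per-GPU mapping is to derive a node-local rank and use it as the CUDA device index. The sketch below shows that pattern; HP2P's actual device selection may differ.

// Common pattern for binding one MPI process to one GPU: derive a node-local
// rank and use it as the CUDA device index. HP2P's actual selection may differ.
#include <mpi.h>
#include <cuda_runtime.h>

void select_gpu() {
  MPI_Comm node_comm;
  // Split COMM_WORLD into per-node communicators to get a node-local rank.
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &node_comm);
  int local_rank, ndevices;
  MPI_Comm_rank(node_comm, &local_rank);
  cudaGetDeviceCount(&ndevices);
  cudaSetDevice(local_rank % ndevices);  // e.g. 4 processes -> GPUs 0..3
  MPI_Comm_free(&node_comm);
}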

Using UNIX signals

Signals can be sent to one of the hp2p processes to make the program generate an output:

Compilation

$ ./configure --enable-signal
$ make
$ make install

Running

$ mpirun -n 32 ./hp2p.exe -n 1000 -s 1024 -m 10 -o output.html

$ kill -s SIGUSR1 <hp2p process PID> # make hp2p generate an output

$ kill -s SIGTERM <hp2p process PID> # make hp2p generate an output and exit
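
Internally, this kind of signal handling is usually implemented by setting a flag in an async-signal-safe handler and checking it in the main loop. Here is a minimal sketch of the pattern, not HP2P's actual implementation:

// Minimal sketch of signal-triggered output: the handlers only set flags
// (async-signal-safe), and the main iteration loop checks them.
// Not HP2P's actual implementation.
#include <csignal>

static volatile std::sig_atomic_t dump_requested = 0;
static volatile std::sig_atomic_t stop_requested = 0;

extern "C" void on_sigusr1(int) { dump_requested = 1; }
extern "C" void on_sigterm(int) { stop_requested = 1; }

void install_handlers() {
  std::signal(SIGUSR1, on_sigusr1);
  std::signal(SIGTERM, on_sigterm);
}

// In the main benchmark loop (pseudocode):
//   if (dump_requested) { write_output(); dump_requested = 0; }
//   if (stop_requested) { write_output(); break; }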

Contributing

Authors

See the list of AUTHORS who participated in this project.

Contact

Laurent Nguyen - [email protected]

Website

CEA-HPC

License

Copyright 2010-2022 CEA/DAM/DIF

HP2P is distributed under the CeCILL-C license. See the included files
Licence_CeCILL-C_V1-en.txt (English version) and
Licence_CeCILL-C_V1-fr.txt (French version) or visit
http://www.cecill.info for details.

Notes

The benchmark is similar to the FZ-Juelich linktest benchmark.
