
jjmaldonis / mpi-parallelization

License: MIT License
Examples for MPI Spawning and Splitting, and the differences between two implementations

Programming Languages

  • python: 139335 projects; the #7 most used programming language
  • fortran: 972 projects

Projects that are alternatives of or similar to mpi-parallelization

hpc
Learning and practice of high performance computing (CUDA, Vulkan, OpenCL, OpenMP, TBB, SSE/AVX, NEON, MPI, coroutines, etc. )
Stars: ✭ 39 (+143.75%)
Mutual labels:  mpi, mpi4py
nbodykit
Analysis kit for large-scale structure datasets, the massively parallel way
Stars: ✭ 93 (+481.25%)
Mutual labels:  mpi, mpi4py
es pytorch
High performance implementation of Deep neuroevolution in pytorch using mpi4py. Intended for use on HPC clusters
Stars: ✭ 20 (+25%)
Mutual labels:  mpi, mpi4py
analisis-numerico-computo-cientifico
Numerical analysis and scientific computing
Stars: ✭ 42 (+162.5%)
Mutual labels:  mpi
PencilArrays.jl
Distributed Julia arrays using the MPI protocol
Stars: ✭ 40 (+150%)
Mutual labels:  mpi
PencilFFTs.jl
Fast Fourier transforms of MPI-distributed Julia arrays
Stars: ✭ 48 (+200%)
Mutual labels:  mpi
pyccel
Python extension language using accelerators
Stars: ✭ 189 (+1081.25%)
Mutual labels:  mpi
ImplicitGlobalGrid.jl
Almost trivial distributed parallelization of stencil-based GPU and CPU applications on a regular staggered grid
Stars: ✭ 88 (+450%)
Mutual labels:  mpi
mpifx
Modern Fortran wrappers around MPI routines
Stars: ✭ 25 (+56.25%)
Mutual labels:  mpi
tbslas
A parallel, fast solver for the scalar advection-diffusion and the incompressible Navier-Stokes equations based on semi-Lagrangian/Volume-Integral method.
Stars: ✭ 21 (+31.25%)
Mutual labels:  mpi
wxparaver
wxParaver is a trace-based visualization and analysis tool designed to study quantitative detailed metrics and obtain qualitative knowledge of the performance of applications, libraries, processors and whole architectures.
Stars: ✭ 23 (+43.75%)
Mutual labels:  mpi
fuzzball
Ongoing development of the Fuzzball MUCK server software and associated functionality.
Stars: ✭ 38 (+137.5%)
Mutual labels:  mpi
gpubootcamp
This repository consists of GPU bootcamp material for HPC and AI
Stars: ✭ 227 (+1318.75%)
Mutual labels:  mpi
yask
YASK--Yet Another Stencil Kit: a domain-specific language and framework to create high-performance stencil code for implementing finite-difference methods and similar applications.
Stars: ✭ 81 (+406.25%)
Mutual labels:  mpi
hpdbscan
Highly parallel DBSCAN (HPDBSCAN)
Stars: ✭ 19 (+18.75%)
Mutual labels:  mpi
research-computing-with-cpp
UCL-RITS *C++ for Research* engineering course
Stars: ✭ 16 (+0%)
Mutual labels:  mpi
neworder
A dynamic microsimulation framework for python
Stars: ✭ 15 (-6.25%)
Mutual labels:  mpi
EFDCPlus
www.eemodelingsystem.com
Stars: ✭ 9 (-43.75%)
Mutual labels:  mpi
PartitionedArrays.jl
Vectors and sparse matrices partitioned into pieces for parallel distributed-memory computations.
Stars: ✭ 45 (+181.25%)
Mutual labels:  mpi
pypar
Efficient and scalable parallelism using the message passing interface (MPI) to handle big data and highly computational problems.
Stars: ✭ 66 (+312.5%)
Mutual labels:  mpi

This repository contains MPI examples using mpi4py.

Examples

Basic Gather Examples

  • gather.py uses each core to update a different piece of information in a list and passes that information to all cores using the .gather and .bcast methods. The allgather.py example is similar, but uses the .allgather method instead; it then parses the resulting information to keep only the updated values.
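
The sketch below illustrates the two patterns; it is not the repository's exact code, and the payload (the square of each rank) is illustrative.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank fills in only its own slot of the list.
data = [None] * size
data[rank] = rank ** 2

# gather.py-style: collect every rank's list on the root, merge the
# updated slots, then broadcast the merged list back out.
gathered = comm.gather(data, root=0)
merged = [piece[i] for i, piece in enumerate(gathered)] if rank == 0 else None
merged = comm.bcast(merged, root=0)

# allgather.py-style: every rank receives every list directly and keeps
# only the slot that each sender actually updated.
all_lists = comm.allgather(data)
merged_again = [piece[i] for i, piece in enumerate(all_lists)]

assert merged == merged_again
```

Run with, for example, mpirun -np 4 python gather_sketch.py (the script name here is just a placeholder).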

Spawn Multiple Examples

  • spawn_multiple_worker.py is a program designed to be run as a worker/child process; spawn_multiple_worker_fortran.f90 is the corresponding Fortran example. These workers are called by the spawning programs below. They distribute a calculation of pi over the cores allocated to them and return the reduced sum to their parent process (a minimal sketch of this parent/worker pattern follows the list).

  • spawn.py is the most basic example of spawning, and performs a single spawn. spawn_loop.py has similar functionality, but spawns workers iteratively, waiting for each child process to finish before starting a new one.

  • spawn_multiple.py spawns multiple copies of an executable with different data. spawn_multiple_loop.py spawns multiple executables during each iteration of a loop.

  • spawn_fortran_multiple.py and spawn_fortran_multiple_loop.py are the analogous programs that call the Fortran worker rather than the Python worker.
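
Below is a minimal sketch of the parent/worker spawn pattern, loosely following the standard mpi4py pi example; the worker file name (pi_worker.py), the worker count, and the integration resolution are illustrative rather than the repository's exact code.

```python
# Parent sketch: spawn workers and receive the reduced value of pi.
import sys
import numpy
from mpi4py import MPI

# Spawn four copies of the worker script (the count is illustrative).
comm = MPI.COMM_SELF.Spawn(sys.executable, args=['pi_worker.py'], maxprocs=4)

pi = numpy.zeros(1)
# The workers reduce their partial sums into the parent; on this side of
# the intercommunicator the receiving process passes root=MPI.ROOT.
comm.Reduce(None, pi, op=MPI.SUM, root=MPI.ROOT)
comm.Disconnect()
print('pi is approximately', pi[0])
```

The corresponding worker splits a midpoint-rule integration of 4/(1+x^2) over the spawned ranks and reduces the partial sums back to the parent:

```python
# pi_worker.py sketch: each spawned rank integrates part of 4/(1+x^2) on [0, 1].
import numpy
from mpi4py import MPI

comm = MPI.Comm.Get_parent()   # intercommunicator back to the parent
size = comm.Get_size()         # number of spawned workers
rank = comm.Get_rank()

n = 100000                     # illustrative resolution
h = 1.0 / n
s = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(rank, n, size))
partial = numpy.array([s * h])

comm.Reduce(partial, None, op=MPI.SUM, root=0)  # root 0 is the parent process
comm.Disconnect()
```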

Split-Spawn Examples

  • split_multiple.py is analogous to spawn_multiple.py, except that instead of calling Spawn_multiple it splits the world communicator within the parent process and then spawns workers on the newly created communicators (a sketch of this split-then-spawn pattern follows the list). The worker process can be found in split_multiple_worker.py.

  • split_multiple_loop.py splits and spawns multiple executables within a loop.
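
The sketch below shows the split-then-spawn pattern; the number of groups, the worker count, and the color command-line argument to split_multiple_worker.py are illustrative assumptions, not the repository's exact code.

```python
# Split-spawn sketch: split COMM_WORLD into groups, then let each group
# spawn its own set of workers on its sub-communicator.
import sys
import numpy
from mpi4py import MPI

world = MPI.COMM_WORLD
ngroups = 2                                   # illustrative number of groups
color = world.Get_rank() % ngroups
sub = world.Split(color, key=world.Get_rank())

# Spawn is collective over the sub-communicator, so each group gets its own
# workers.  Passing the color on the command line lets the worker identify
# its group without relying on OMPI_MCA_orte_app_num.
inter = sub.Spawn(sys.executable,
                  args=['split_multiple_worker.py', str(color)],
                  maxprocs=2)

result = numpy.zeros(1)
# For a reduction on an intercommunicator, only the receiving parent rank
# passes root=MPI.ROOT; the other parent ranks pass root=MPI.PROC_NULL.
# (The workers would perform the matching Reduce with root=0.)
root = MPI.ROOT if sub.Get_rank() == 0 else MPI.PROC_NULL
inter.Reduce(None, result, op=MPI.SUM, root=root)
inter.Disconnect()
```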

Notes

This code has only been tested with Open MPI 1.10.2 and mpi4py 2.0.0.

Open MPI sets the environment variable OMPI_MCA_orte_app_num, which the spawn_multiple examples rely on. Other MPI implementations can avoid this dependency by passing the color to the worker program explicitly, as the examples illustrate.
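
A worker-side sketch of the two mechanisms (the fallback logic and variable names are illustrative):

```python
# Worker sketch: identify the group either from Open MPI's
# OMPI_MCA_orte_app_num or from a color passed on the command line.
import os
import sys

# Under an Open MPI MPMD launch, for example
#   mpirun -np 2 python a.py : -np 2 python b.py
# each application context sees a different OMPI_MCA_orte_app_num.
app_num = os.environ.get('OMPI_MCA_orte_app_num')

if len(sys.argv) > 1:
    color = int(sys.argv[1])        # portable: color passed in explicitly
elif app_num is not None:
    color = int(app_num)            # Open MPI: fall back to the app number
else:
    color = 0
print('running as part of group', color)
```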

Spawn Multiple vs. Split-Spawn

Spawning multiple workers with Spawn_multiple and with Split -> Spawn takes about the same amount of time (within the margin of error of the simple tests I ran).

The Spawn Multiple programs can take advantage of Open MPI's OMPI_MCA_orte_app_num environment variable directly, while the splitting examples inherently cannot. The Spawn Multiple examples can therefore run naturally under MPMD with Open MPI.

If more MPI implementations provided information analogous to OMPI_MCA_orte_app_num, the MPI_Comm_spawn_multiple function would become significantly more useful.
