
geospace-code / h5fortran-mpi

License: BSD-3-Clause
HDF5-MPI parallel Fortran object-oriented interface

Programming Languages

Fortran
972 projects
CMake
9771 projects
C++
36643 projects - #6 most used programming language
Python
139335 projects - #7 most used programming language
C
50402 projects - #5 most used programming language

Projects that are alternatives of or similar to h5fortran-mpi

XH5For
XDMF parallel partitioned mesh I/O on top of HDF5
Stars: ✭ 23 (+53.33%)
Mutual labels:  mpi, hdf5
libquo
Dynamic execution environments for coupled, thread-heterogeneous MPI+X applications
Stars: ✭ 21 (+40%)
Mutual labels:  mpi, mpi-applications
raptor
General, high performance algebraic multigrid solver
Stars: ✭ 50 (+233.33%)
Mutual labels:  mpi
FluxUtils.jl
Sklearn Interface and Distributed Training for Flux.jl
Stars: ✭ 12 (-20%)
Mutual labels:  mpi
jhdf
A pure Java HDF5 library
Stars: ✭ 83 (+453.33%)
Mutual labels:  hdf5
jupyterlab-h5web
A JupyterLab extension to explore and visualize HDF5 file contents. Based on https://github.com/silx-kit/h5web.
Stars: ✭ 41 (+173.33%)
Mutual labels:  hdf5
EDLib
Exact diagonalization solver for quantum electron models
Stars: ✭ 18 (+20%)
Mutual labels:  mpi
hpc
Learning and practice of high performance computing (CUDA, Vulkan, OpenCL, OpenMP, TBB, SSE/AVX, NEON, MPI, coroutines, etc. )
Stars: ✭ 39 (+160%)
Mutual labels:  mpi
bsuir-csn-cmsn-helper
Repository containing ready-made laboratory works in the specialty of computing machines, systems and networks
Stars: ✭ 43 (+186.67%)
Mutual labels:  mpi
npy2bdv
Fast writing of numpy 3d-arrays into HDF5 Fiji/BigDataViewer files.
Stars: ✭ 25 (+66.67%)
Mutual labels:  hdf5
SWCaffe
A Deep Learning Framework customized for Sunway TaihuLight
Stars: ✭ 37 (+146.67%)
Mutual labels:  mpi
matio-cpp
A C++ wrapper of the matio library, with memory ownership handling, to read and write .mat files.
Stars: ✭ 24 (+60%)
Mutual labels:  hdf5
gslib
sparse communication library
Stars: ✭ 22 (+46.67%)
Mutual labels:  mpi
nc4fortran
Object-oriented Fortran NetCDF4 interface
Stars: ✭ 31 (+106.67%)
Mutual labels:  object-oriented-fortran
SIRIUS
Domain specific library for electronic structure calculations
Stars: ✭ 87 (+480%)
Mutual labels:  mpi
nbodykit
Analysis kit for large-scale structure datasets, the massively parallel way
Stars: ✭ 93 (+520%)
Mutual labels:  mpi
Galaxy
Galaxy is an asynchronous parallel visualization ray tracer for performant rendering in distributed computing environments. Galaxy builds upon Intel OSPRay and Intel Embree, including ray queueing and sending logic inspired by TACC GraviT.
Stars: ✭ 18 (+20%)
Mutual labels:  mpi
faabric
Messaging and state layer for distributed serverless applications
Stars: ✭ 39 (+160%)
Mutual labels:  mpi
niqlow
design, solve and estimate discrete dynamic programs.
Stars: ✭ 16 (+6.67%)
Mutual labels:  mpi-applications
sboxgates
Program for finding low gate count implementations of S-boxes.
Stars: ✭ 30 (+100%)
Mutual labels:  mpi

h5fortran-mpi

Easy-to-use, object-oriented Fortran parallel HDF5-MPI interface. The h5fortran-mpi API can be used with or without MPI. A very similar NetCDF4 interface is nc4fortran.
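
For orientation, here is a minimal sketch of a parallel write. The module name h5fortran, the hdf5_file derived type, and the action= and mpi= option names are assumptions modeled on the h5fortran API; consult the project documentation for the exact interface.

program demo_write
! Sketch: collective parallel write of one dataset.
! Assumes an h5fortran-style interface (hdf5_file type, mpi= option).
use mpi
use h5fortran, only : hdf5_file
implicit none

type(hdf5_file) :: h5f
integer :: ierr
real :: x(100)

call MPI_Init(ierr)
x = 1.0

call h5f%open('demo.h5', action='w', mpi=.true.)  ! collective file open (assumed option)
call h5f%write('/x', x)                           ! write dataset "/x"
call h5f%close()

call MPI_Finalize(ierr)
end program demo_write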

Many computer systems default to the serial HDF5 API, which lacks the HDF5 parallel MPI layer. The scripts/CMakeLists.txt can build the HDF5-MPI stack if needed. To use HDF5-MPI features, the computer must already have a working MPI library installed (e.g. Open MPI, MPICH, Intel MPI, MS-MPI).
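
To confirm that the MPI library itself works before building, a self-contained check using only the standard MPI Fortran API (independent of this project) might look like:

program check_mpi
! Standalone sanity check that an MPI library is installed and usable.
use mpi
implicit none

integer :: ierr, nproc, rank

call MPI_Init(ierr)
call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)
call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
if (rank == 0) print '(a,i0,a)', 'MPI OK: ', nproc, ' ranks'
call MPI_Finalize(ierr)
end program check_mpi

Build with the MPI compiler wrapper (e.g. mpif90 check_mpi.f90) and run with mpiexec -n 2 ./a.out.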

Some operating systems have an installable parallel HDF5 package:

  • Ubuntu: apt install libhdf5-mpi-dev
  • CentOS: yum install hdf5-openmpi-devel
  • macOS Homebrew: brew install hdf5-mpi
  • macOS MacPorts: port install hdf5 +fortran +mpich

HDF5 1.10.2 is the oldest working HDF5 version, but HDF5 ≥ 1.10.5 is recommended in general for bug fixes and performance. For the highest performance with parallel compressed writes, consider HDF5 ≥ 1.12.2.
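
To verify which HDF5 version a program actually linked against at runtime, the standard HDF5 Fortran API provides h5get_libversion_f:

program hdf5_version
! Print the HDF5 library version linked at runtime,
! using the standard HDF5 Fortran API.
use hdf5, only : h5open_f, h5close_f, h5get_libversion_f
implicit none

integer :: major, minor, release, ierr

call h5open_f(ierr)
call h5get_libversion_f(major, minor, release, ierr)
print '(a,i0,a,i0,a,i0)', 'HDF5 version ', major, '.', minor, '.', release
call h5close_f(ierr)
end program hdf5_version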

Compressed parallel HDF5

Compression generally saves significant disk space and can speed up writes and reads. HDF5-MPI file compression requires HDF5 ≥ 1.10.2 and MPI-3. As noted above, HDF5 ≥ 1.10.5 is recommended for stability and performance.
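
As a sketch, a compressed parallel write might look like the following; the comp_lvl= and chunk_size= option names are assumptions modeled on the h5fortran API, and HDF5 requires chunking for any compressed dataset.

program demo_compress
! Sketch: chunked, deflate-compressed parallel write.
! comp_lvl= and chunk_size= are assumed option names (h5fortran style).
use mpi
use h5fortran, only : hdf5_file
implicit none

type(hdf5_file) :: h5f
integer :: ierr
real :: A(1000, 1000)

call MPI_Init(ierr)
A = 0.0

! deflate level 1 is often a good speed/size tradeoff
call h5f%open('compressed.h5', action='w', mpi=.true., comp_lvl=1)
call h5f%write('/A', A, chunk_size=[100, 100])  ! compression requires chunked datasets
call h5f%close()

call MPI_Finalize(ierr)
end program demo_compress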

Windows limitations

Microsoft Windows does not currently support native HDF5 parallel file compression; Windows Subsystem for Linux can be used instead. Native Windows users can read compressed HDF5 files, but without MPI.

Native Windows MPI options are currently limited to MS-MPI and Intel MPI; Windows MS-MPI currently implements only MPI-2. A quirk with Intel oneAPI on Windows is that, despite having MPI-3, collective filtered parallel-compression file I/O does not work with HDF5 1.10.x through at least HDF5 1.12.1. We test for this in CMake and set the compile flags appropriately.

Windows users who need file compression may use Windows Subsystem for Linux (e.g. Ubuntu) and install libhdf5-mpi-dev.

Build this project

Build this project like:

cmake -B build
cmake --build build

If you have previously built or installed a parallel HDF5 library, refer to it (saving build time) like:

cmake -B build -DHDF5_ROOT=~/lib_par
cmake --build build

To build without MPI (serial HDF5 file operations only):

cmake -B build -Dhdf5_parallel=off
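
With the serial build, the same derived-type API is used without MPI; a hedged sketch, again assuming h5fortran-style option names:

program demo_serial
! Sketch: serial write, no MPI initialization needed.
use h5fortran, only : hdf5_file
implicit none

type(hdf5_file) :: h5f

call h5f%open('serial.h5', action='w')  ! no mpi= argument in the serial path (assumption)
call h5f%write('/y', [1.0, 2.0, 3.0])
call h5f%close()
end program demo_serial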

Cray computers may use a CMake toolchain file to work with the Intel or GCC backends.

Fortran Package Manager (FPM) users build like:

fpm build --flag -Dh5fortran_HAVE_PARALLEL
# omitting this flag builds the serial API only

fpm test

Notes

To build and install the HDF5 parallel library use the script:

cmake -B build_hdf5 -S scripts --install-prefix=$HOME/lib_par

cmake --build build_hdf5

This will build and install HDF5 under ~/lib_par (or another directory of your choice).
