High-Performance Material Point Method (CB-Geo mpm)

CB-Geo Computational Geomechanics Research Group


Documentation

Please refer to the CB-Geo MPM Documentation for information on compiling and running the code. The documentation also includes the MPM theory.

If you have any issues running or compiling the MPM code, please open an issue on the CB-Geo Discourse forum.

Running code on Docker
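
A minimal sketch of pulling the image and starting an interactive shell; the image name quay.io/cbgeo/mpm and the mount path inside the container are assumptions, so check the CB-Geo MPM documentation for the published image and tag:

docker pull quay.io/cbgeo/mpm
# Mount the current directory into the container so inputs/outputs persist on the host (paths assumed)
docker run -ti -v "$(pwd)":/home/cbgeo/mpm quay.io/cbgeo/mpm /bin/bash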

Running code locally

Prerequisite packages

The following prerequisite packages can be found in the docker image:

Optional

Fedora installation (recommended)

Please run the following command:

dnf install -y boost boost-devel clang clang-analyzer clang-tools-extra cmake cppcheck dnf-plugins-core \
                   eigen3-devel findutils freeglut freeglut-devel gcc gcc-c++ git hdf5 hdf5-devel \
                   kernel-devel lcov libnsl make ninja-build openmpi openmpi-devel tar \
                   valgrind vim vtk vtk-devel wget

Ubuntu installation

Please run the following commands to install dependencies:

sudo apt update
sudo apt upgrade
sudo apt install -y gcc git libboost-all-dev libeigen3-dev libhdf5-serial-dev libopenmpi-dev libomp-dev

If you are running Ubuntu 18.04 or below, you may want to update the GCC version to 9 for OpenMP 5 specification support.

sudo apt install software-properties-common
sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
sudo apt install gcc-9 g++-9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 90 --slave /usr/bin/g++ g++ /usr/bin/g++-9 --slave /usr/bin/gcov gcov /usr/bin/gcov-9
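
To confirm that the toolchain switch took effect:

gcc --version
g++ --version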

To install other dependencies:

CMake 3.15

sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic main'
sudo apt update
sudo apt upgrade
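
The Kitware repository is signed, so you may also need to add its archive key before updating and then install CMake explicitly. A sketch following Kitware's published instructions (the key URL may change over time):

wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | sudo apt-key add -
sudo apt update
sudo apt install cmake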

OpenGL and X11:Xt

sudo apt-get install freeglut3-dev libxt-dev

VTK

git clone https://gitlab.kitware.com/vtk/vtk.git VTK
cd VTK && mkdir build && cd build/
cmake -DCMAKE_BUILD_TYPE:STRING=Release ..
make -j
sudo make install
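
If VTK ends up in a non-default prefix, you can point the MPM build at it when configuring; the exact versioned directory below is illustrative:

cmake -DCMAKE_BUILD_TYPE=Release -DVTK_DIR=/usr/local/lib/cmake/vtk-9.0 ..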

Partio for Houdini SFX Visualization

mkdir -p ~/workspace && cd ~/workspace/ && git clone https://github.com/wdas/partio.git && \
    cd partio && cmake . && make

Houdini-supported (*.bgeo) files will be generated. These can be rendered using the non-commercial Houdini Apprentice.

KaHIP installation for domain decomposition

cd ~/workspace/ && git clone https://github.com/schulzchristian/KaHIP && \
   cd KaHIP && sh ./compile_withcmake.sh

Compile

See CB-Geo MPM Documentation for more detailed instructions.

  1. Run mkdir build && cd build && cmake -DCMAKE_CXX_COMPILER=g++ ..

  2. Run make clean && make -jN (where N is the number of cores).

To compile without KaHIP partitioning use cmake -DNO_KAHIP=True ..
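
Putting the steps together, a typical Release build from the repository root could look like the following (the flags are taken from the steps above; adjust the KaHIP path to your own checkout):

mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_COMPILER=g++ -DKAHIP_ROOT=~/workspace/KaHIP/ ..
make -j4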

Compile mpm or mpmtest

  • To compile either mpm or mpmtest alone, run make mpm -jN or make mpmtest -jN (where N is the number of cores).

Compile without tests [Editing CMake options]

To compile without tests run: mkdir build && cd build && cmake -DMPM_BUILD_TESTING=Off -DCMAKE_CXX_COMPILER=g++ ..

Compile with MPI (Running on a cluster)

The CB-Geo mpm code can be compiled with MPI to distribute the workload across compute nodes in a cluster.

Additional steps to load OpenMPI on Fedora:

source /etc/profile.d/modules.sh
export MODULEPATH=$MODULEPATH:/usr/share/modulefiles
module load mpi/openmpi-x86_64

Compile with OpenMPI (with halo exchange):

mkdir build && cd build
export CXX_COMPILER=mpicxx
cmake -DCMAKE_BUILD_TYPE=Release -DKAHIP_ROOT=~/workspace/KaHIP/ -DHALO_EXCHANGE=On ..
make -jN

To enable halo exchange, set -DHALO_EXCHANGE=On in CMake. Halo exchange is a more efficient MPI communication scheme; however, use it only when running a larger number of MPI tasks (> 4).

Compile with Ninja build system [Alternative to Make]

  1. Run mkdir build && cd build && cmake -GNinja -DCMAKE_CXX_COMPILER=g++ ..

  2. Run ninja

Compile with Partio viz support

Please include -DPARTIO_ROOT=/path/to/partio/ in the cmake command. A typical cmake command would look like cmake -DCMAKE_BUILD_TYPE=Release -DPARTIO_ROOT=~/workspace/partio/ ..

Run tests

  1. Run ./mpmtest -s (for a verbose output) or ctest -VV.

Run MPM

See CB-Geo MPM Documentation for more detailed instructions.

The CB-Geo MPM code uses a JSON file for input configuration. To run the mpm code:

   ./mpm  [-p <parallel>] [-i <input_file>] -f <working_dir> [--]
          [--version] [-h]

For example:

export OMP_SCHEDULE="static,4"
./mpm -f /path/to/input-dir/ -i mpm-usf-3d.json

Where:

   -p <parallel>,  --parallel <parallel>
     Number of parallel threads

   -i <input_file>,  --input_file <input_file>
     Input JSON file [mpm.json]

   -f <working_dir>,  --working_dir <working_dir>
     (required)  Current working folder

   --,  --ignore_rest
     Ignores the rest of the labeled arguments following this flag.

   --version
     Displays version information and exits.

   -h,  --help
     Displays usage information and exits.
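
For example, to run with eight parallel threads on the same input as the earlier example (paths are illustrative):

./mpm -p 8 -f /path/to/input-dir/ -i mpm-usf-3d.json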

Running the code with MPI

To run the CB-Geo mpm code on a cluster with MPI:

mpirun -N <#-MPI-tasks> ./mpm -f /path/to/input-dir/ -i mpm.json

For example, to run the code on 4 compute nodes (MPI tasks):

mpirun -N 4 ./mpm -f ~/benchmarks/3d/uniaxial-stress -i mpm.json
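
MPI ranks can also be combined with OpenMP threads on each node; a sketch, assuming hybrid runs behave as the separate -p and OMP_SCHEDULE options above suggest:

export OMP_SCHEDULE="static,4"
mpirun -N 4 ./mpm -p 8 -f ~/benchmarks/3d/uniaxial-stress -i mpm.json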

Authors

Please refer to the list of contributors to the CB-Geo MPM code.

Citation

If you publish results using our code, please acknowledge our work by citing the following paper:

Kumar, K., Salmond, J., Kularathna, S., Wilkes, C., Tjung, E., Biscontin, G., & Soga, K. (2019). Scalable and modular material point method for large scale simulations. 2nd International Conference on the Material Point Method. Cambridge, UK. https://arxiv.org/abs/1909.13380
