
mlwong / HAMeRS

License: LGPL-3.0
Hydrodynamics Adaptive Mesh Refinement Simulator (HAMeRS) for compressible multi-species/multi-phase simulations

Programming Languages

C++

Projects that are alternatives of or similar to HAMeRS

SNaC
A multi-block solver for massively parallel direct numerical simulations (DNS) of fluid flows
Stars: ✭ 26 (-39.53%)
Mutual labels:  turbulence, computational-fluid-dynamics
t8code
Parallel algorithms and data structures for tree-based AMR with arbitrary element shapes.
Stars: ✭ 37 (-13.95%)
Mutual labels:  parallel-computing, adaptive-mesh-refinement
opensbli
A framework for the automated derivation and parallel execution of finite difference solvers on a range of computer architectures.
Stars: ✭ 56 (+30.23%)
Mutual labels:  parallel-computing, computational-fluid-dynamics
exadg
ExaDG - High-Order Discontinuous Galerkin for the Exa-Scale
Stars: ✭ 62 (+44.19%)
Mutual labels:  turbulence, computational-fluid-dynamics
CaNS
A code for fast, massively-parallel direct numerical simulations (DNS) of canonical flows
Stars: ✭ 144 (+234.88%)
Mutual labels:  turbulence, computational-fluid-dynamics
sph opengl
SPH simulation in OpenGL compute shader.
Stars: ✭ 57 (+32.56%)
Mutual labels:  parallel-computing, computational-fluid-dynamics
framework
The Arcane Framework for HPC codes
Stars: ✭ 15 (-65.12%)
Mutual labels:  parallel-computing
NGA2
Object-oriented multi-mesh version of the classic reacting turbulent multiphase flow solver
Stars: ✭ 25 (-41.86%)
Mutual labels:  turbulence
tbslas
A parallel, fast solver for the scalar advection-diffusion and the incompressible Navier-Stokes equations based on semi-Lagrangian/Volume-Integral method.
Stars: ✭ 21 (-51.16%)
Mutual labels:  adaptive-mesh-refinement
nnpso
Training of Neural Network using Particle Swarm Optimization
Stars: ✭ 24 (-44.19%)
Mutual labels:  parallel-computing
parallel-dfs-dag
A parallel implementation of DFS for Directed Acyclic Graphs (https://research.nvidia.com/publication/parallel-depth-first-search-directed-acyclic-graphs)
Stars: ✭ 29 (-32.56%)
Mutual labels:  parallel-computing
WABBIT
Wavelet Adaptive Block-Based solver for Interactions with Turbulence
Stars: ✭ 25 (-41.86%)
Mutual labels:  turbulence
libROM
Model reduction library with an emphasis on large scale parallelism and linear subspace methods
Stars: ✭ 66 (+53.49%)
Mutual labels:  parallel-computing
TrainingTracks
Materials for training tracks for continua media - OpenFOAM, vortex method, and other
Stars: ✭ 59 (+37.21%)
Mutual labels:  computational-fluid-dynamics
bitpit
Open source library for scientific HPC
Stars: ✭ 80 (+86.05%)
Mutual labels:  parallel-computing
hybridCentralSolvers
United collection of hybrid Central solvers - one-phase, two-phase and multicomponent versions
Stars: ✭ 42 (-2.33%)
Mutual labels:  computational-fluid-dynamics
CFD
Basic Computational Fluid Dynamics (CFD) schemes implemented in FORTRAN using Finite-Volume and Finite-Difference Methods. Sample simulations and figures are provided.
Stars: ✭ 89 (+106.98%)
Mutual labels:  computational-fluid-dynamics
2D-Turbulence-Python
Simple OOP Python Code to run some Pseudo-Spectral 2D Simulations of Turbulence
Stars: ✭ 24 (-44.19%)
Mutual labels:  turbulence
Corium
Corium is a modern scripting language which combines simple, safe and efficient programming.
Stars: ✭ 18 (-58.14%)
Mutual labels:  parallel-computing
pqdm
Comfortable parallel TQDM using concurrent.futures
Stars: ✭ 118 (+174.42%)
Mutual labels:  parallel-computing

HAMeRS: Hydrodynamics Adaptive Mesh Refinement Simulator

Build Status

HAMeRS is a compressible Navier-Stokes/Euler solver that uses the patch-based adaptive mesh refinement (AMR) technique. The parallelization of the code and the construction, management, and storage of cells are handled by the Structured Adaptive Mesh Refinement Application Infrastructure (SAMRAI) from Lawrence Livermore National Laboratory (LLNL).

The code consists of various explicit high-order finite-difference shock-capturing WCNSs (Weighted Compact Nonlinear Schemes) for capturing shock waves, material interfaces, and turbulent features. The AMR algorithm implemented is based on the one developed by Berger et al.

How do I get set up?

Git is used for version control of the code. To install Git on a Debian-based distribution such as Ubuntu, try apt-get:

sudo apt-get install git-all
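
Once Git is installed, the repository can be cloned. The URL below assumes the publicly hosted mlwong/HAMeRS repository on GitHub:

git clone https://github.com/mlwong/HAMeRS.git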

In general, all you need to compile the code is CMake. For example, after cloning the repository:

cd HAMeRS
mkdir build
cd build
cmake ..
make
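
The build can also be tuned with standard CMake and make options; for example, a release build using four make jobs (CMAKE_BUILD_TYPE and -j are generic CMake/make options, not HAMeRS-specific flags):

cmake -DCMAKE_BUILD_TYPE=Release ..
make -j 4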

The compilers used to build the C, C++, and Fortran parts of HAMeRS can be chosen by setting the environment variables CC, CXX, and F77, respectively, before running CMake. For example, to use the default MPI compiler wrappers, you can run:

export CC=mpicc
export CXX=mpicxx
export F77=mpif77
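
Equivalently, the compilers can be set just for the configure step by prefixing the cmake command; this is plain shell usage and assumes the MPI wrapper compilers are on your PATH:

CC=mpicc CXX=mpicxx F77=mpif77 cmake ..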

To run the code, you need to provide an input file:

src/exec/main <input filename>
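
For example, with a hypothetical input file named input_2d_shock_tube in the current directory, a serial run would be:

src/exec/main input_2d_shock_tube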

To restart a simulation, you need to provide the restart directory and the restore number in addition to the input file:

src/exec/main <input filename> <restart dir> <restore number>
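
For example, assuming the earlier run wrote its restart files to a directory named restart_2d_shock_tube and you want to resume from restore number 10 (both names are placeholders for illustration), the command would look like:

src/exec/main input_2d_shock_tube restart_2d_shock_tube 10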

To run the code in parallel, you need MPI. You can try mpirun:

mpirun -np <number of processors> src/exec/main <input filename>
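
For instance, a run on 4 processes with the same hypothetical input file:

mpirun -np 4 src/exec/main input_2d_shock_tube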

What libraries do I need?

HAMeRS relies on HDF5, Boost, and SAMRAI. Before building HAMeRS, you need to set the following environment variables so that CMake can locate the libraries.

To set up HDF5:

export HDF5_ROOT=<path to the directory of HDF5>

To set up Boost:

export BOOST_ROOT=<path to the directory of Boost>

To set up SAMRAI:

export SAMRAI_ROOT=<path to the directory of SAMRAI>
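
Putting these together, a typical environment setup before configuring might look like the following, where the paths are placeholders for the actual install locations on your system:

export HDF5_ROOT=/opt/hdf5-1.8
export BOOST_ROOT=/opt/boost-1.60
export SAMRAI_ROOT=/opt/samrai-3.11.2
cmake ..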

HAMeRS has been successfully tested with HDF5-1.8, Boost-1.60, and SAMRAI-3.11.2.

Note that SAMRAI no longer depends on the Boost library since version 3.12.0. When the SAMRAI version is 3.12.0 or greater, please build HAMeRS without the Boost dependency by using the CMake flag -DHAMERS_USE_BOOST=OFF.
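
In that case, the configure step would look like the following (again assuming an out-of-source build directory as above):

cmake -DHAMERS_USE_BOOST=OFF ..
make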

How do I change the problem?

To change the problem run by an application, e.g. the Euler application, simply link the corresponding initial-conditions symlink (EulerInitialConditions.cpp in src/apps/Euler) to the actual problem file using ln -sf <absolute path to .cpp file containing the problem's initial conditions> EulerInitialConditions.cpp. If the problem has special boundary conditions, you can supply them in the same way with ln -sf <absolute path to .cpp file containing the problem's user-coded boundary conditions> EulerSpecialBoundaryConditions.cpp. Initial-condition and boundary-condition files for a number of example problems are provided in the problems folder; see the sketch below.
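
As a sketch, suppose the problem files live in the repository's problems folder under a hypothetical ShockTube subdirectory (the directory name and paths below are placeholders for illustration only); the relinking would then be:

cd src/apps/Euler
ln -sf ~/HAMeRS/problems/ShockTube/EulerInitialConditions.cpp EulerInitialConditions.cpp
ln -sf ~/HAMeRS/problems/ShockTube/EulerSpecialBoundaryConditions.cpp EulerSpecialBoundaryConditions.cpp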

Are there more tips and tutorials on how to compile and run the code?

Please have a look at the Wiki page.

Who do I talk to?

The code is maintained by Man-Long Wong ([email protected]), a PhD graduate of the Flow Physics and Aeroacoustics Laboratory (FPAL) in the Department of Aeronautics and Astronautics at Stanford University.

Copyright

HAMeRS is licensed under the GNU Lesser General Public License v3.0.

If you find this work useful, please consider citing the author's dissertation:

@phdthesis{wong2019thesis,
  title={High-order shock-capturing methods for study of shock-induced turbulent mixing with adaptive mesh refinement simulations},
  author={Wong, Man Long},
  year={2019},
  school={Stanford University}
}