
alsvinn / alsvinn

License: GPL-3.0
The fast Finite Volume simulator with UQ support.

Programming Languages

  • C++: 36643 projects (#6 most used programming language)
  • Cuda: 1817 projects
  • CMake: 9771 projects
  • Python: 139335 projects (#7 most used programming language)
  • Dockerfile: 14818 projects
  • Shell: 77523 projects

Projects that are alternatives of or similar to alsvinn

fosolvers
FOSolverS: a Suite of Free & Open Source Solvers.
Stars: ✭ 24 (+9.09%)
Mutual labels:  cfd, fvm
euler2D-kfvs-Fortran2003
2D solver for Euler equations in quadrilateral grid, using kinetic flux vector splitting scheme, written in OOP F2003
Stars: ✭ 17 (-22.73%)
Mutual labels:  cfd, euler-equations
Abyss
🔬 Assemble large genomes using short reads
Stars: ✭ 219 (+895.45%)
Mutual labels:  mpi
CaNS
A code for fast, massively-parallel direct numerical simulations (DNS) of canonical flows
Stars: ✭ 144 (+554.55%)
Mutual labels:  cfd
Foundations of HPC 2021
This repository collects the materials from the course "Foundations of HPC", 2021, at the Data Science and Scientific Computing Department, University of Trieste
Stars: ✭ 22 (+0%)
Mutual labels:  mpi
Batch Shipyard
Simplify HPC and Batch workloads on Azure
Stars: ✭ 240 (+990.91%)
Mutual labels:  mpi
ccfd
A 2D finite volume CFD code, written in C
Stars: ✭ 26 (+18.18%)
Mutual labels:  cfd
Raxml Ng
RAxML Next Generation: faster, easier-to-use and more flexible
Stars: ✭ 191 (+768.18%)
Mutual labels:  mpi
ravel
Ravel MPI trace visualization tool
Stars: ✭ 26 (+18.18%)
Mutual labels:  mpi
hp2p
Heavy Peer To Peer: an MPI-based benchmark for network diagnostics
Stars: ✭ 17 (-22.73%)
Mutual labels:  mpi
az-hop
The Azure HPC On-Demand Platform provides an HPC Cluster Ready solution
Stars: ✭ 33 (+50%)
Mutual labels:  mpi
azurehpc
This repository provides easy automation scripts for building a HPC environment in Azure. It also includes examples to build e2e environment and run some of the key HPC benchmarks and applications.
Stars: ✭ 102 (+363.64%)
Mutual labels:  mpi
Aff3ct
A fast simulator and a library dedicated to channel coding.
Stars: ✭ 240 (+990.91%)
Mutual labels:  mpi
GenomicsDB
Highly performant data storage in C++ for importing, querying and transforming variant data with C/C++/Java/Spark bindings. Used in gatk4.
Stars: ✭ 77 (+250%)
Mutual labels:  mpi
Dmtcp
DMTCP: Distributed MultiThreaded CheckPointing
Stars: ✭ 229 (+940.91%)
Mutual labels:  mpi
t8code
Parallel algorithms and data structures for tree-based AMR with arbitrary element shapes.
Stars: ✭ 37 (+68.18%)
Mutual labels:  mpi
Mpi.jl
MPI wrappers for Julia
Stars: ✭ 197 (+795.45%)
Mutual labels:  mpi
Fluid2d
A versatile Python-Fortran CFD code that solves a large class of 2D flows
Stars: ✭ 49 (+122.73%)
Mutual labels:  cfd
api-spec
API Specifications
Stars: ✭ 30 (+36.36%)
Mutual labels:  mpi
Singularity-tutorial
Singularity 101
Stars: ✭ 31 (+40.91%)
Mutual labels:  mpi


Alsvinn

Alsvinn is a toolset consisting of a finite volume method (FVM) simulator and modules for uncertainty quantification (UQ). All the major operations can be computed on either a multi-core CPU or an NVIDIA GPU (through CUDA). It also supports cluster configurations of either CPUs or GPUs, and it scales well across nodes.

Alsvinn is maintained by Kjetil Olsen Lye at ETH Zurich. We want Alsvinn to be easy to use, so if you have trouble compiling or running it, please don't hesitate to open an issue.

Alsvinn is also available as a Docker container. See below for how to run with Docker or Singularity without any installation needed.

Supported equations

  • The Compressible Euler Equations
  • The scalar Burgers' Equation
  • A scalar cubic conservation law
  • Buckley-Leverett

It is also possible to add new equations without much effort (a tutorial is coming soon).

Initial data

Initial data can easily be specified through a Python script. For instance, the Sod shock tube can be specified as

if x < 0.0:
    # Left state: density, x-velocity and pressure
    rho = 1.
    ux = 0.
    p = 1.0
else:
    # Right state
    rho = .125
    ux = 0.
    p = 0.1

Notable implemented initial data

While it's easy to implement new configurations, we already have a wide variety of configurations implemented, including:

  • Kelvin-Helmholtz instability
  • Richtmyer-Meshkov instability
  • Sod shock tube
  • Cloudshock interaction
  • Shockvortex
  • Fractional Brownian motion

Requirements

See Installing necessary software below for more information.

Cloning

This project uses a git submodule, so the easiest way to clone the repository is

git clone --recursive https://github.com/alsvinn/alsvinn.git

Compiling

This should be as easy as running (for advanced CMake users: the build folder can be located anywhere)

mkdir build
cd build
cmake ..

Note that you should probably configure with -DCMAKE_BUILD_TYPE=Release, i.e.

mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release

Compiling with CUDA

If you do not have a CUDA device in your computer, or if you do not have CUDA installed, run CMake with the -DALSVINN_USE_CUDA=OFF option.

Running tests

Before you try to run any simulations, it's a good idea to validate that the build was successful by running the unit tests. From the build folder, run

./test/library_test/alstest
bash run_mpi_tests.sh

Running alsvinn

The basic input to alsvinn is an .xml file specifying the different options. You can find examples under the examples folder. The initial data is usually specified in a .py file (referenced from the .xml file).

Deterministic runs

You perform a deterministic run with the alsvinncli utility. From the build folder, run

./alsvinncli/alsvinncli <path to xml file>

It has some options for MPI parallelization (for all options, run ./alsvinncli/alsvinncli --help). To run with MPI support, run e.g.

 mpirun -np <number of processes> ./alsvinncli/alsvinncli --multi-x <number of procs in x direction> path-to-xml.xml

UQ run

You perform a UQ run with the alsuqcli utility. From the build folder, run

./alsuqcli/alsuqcli <path to xml file>

It has some options for MPI parallelization (for all options, run ./alsuqcli/alsuqcli --help). To run with MPI support, run e.g.

 mpirun -np <number of processes> ./alsuqcli/alsuqcli --multi-sample <number of procs in sample direction> path-to-xml.xml

Output files

Most output is saved as NetCDF files, which can easily be read from languages such as Python.
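
For example, here is a minimal sketch using the netCDF4 Python package. The file name kelvinhelmholtz_0.nc and the variable name rho below are illustrative assumptions; inspect the variables of your own output file to see what a given run actually wrote.

from netCDF4 import Dataset

# Open an output file produced by alsvinn (the file name is an assumption).
ds = Dataset("kelvinhelmholtz_0.nc", "r")

# List the variables stored in the file.
print(list(ds.variables))

# Read one variable (assumed here to be the density field) as an array.
rho = ds.variables["rho"][:]
print(rho.shape, float(rho.min()), float(rho.max()))

ds.close()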

Python scripts

There is a simple Python API for running alsvinn under the python folder. Check out the README file there for more information.

Running in Docker or Singularity

To run alsvinn using Docker and write the output to the current directory, all you have to do is run

docker run --rm -v $(pwd):$(pwd) -w $(pwd) alsvinn/alsvinn_cuda /examples/kelvinhelmholtz/kelvinhelmholtz.xml

NOTE Replace alsvinn_cuda with alsvinn_cpu if you are running a CPU-only setup.

NOTE You can also make your own configuration files and specify them instead of /examples/kelvinhelmholtz/kelvinhelmholtz.xml

Running with Singularity

On LSF systems with Singularity installed, Alsvinn can be run as

# Cluster WITHOUT GPUs
bsub -R singularity singularity run -B $(pwd):$(pwd) \
     docker://alsvinn/alsvinn_cpu \
     /examples/kelvinhelmholtz/kelvinhelmholtz.xml
 
# Clusters WITH GPUs:
bsub <other options to get GPU> \
     -R singularity singularity run --nv -B $(pwd):$(pwd) \
     docker://alsvinn/alsvinn_cuda \
     /examples/kelvinhelmholtz/kelvinhelmholtz.xml

Note that it is probably a good idea to pull the image first and then run the downloaded image, e.g.

bsub -R light -J downloading_image -R singularity \
     singularity pull docker://alsvinn/alsvinn_cuda

bsub -R singularity -w "done(downloading_image)" \
     singularity run --nv -B $(pwd):$(pwd) \
     alsvinn_cuda.simg \
     /examples/kelvinhelmholtz/kelvinhelmholtz.xml

To run with MPI, we strongly recommend using MVAPICH2 (or any MPICH ABI-compatible implementation) on the cluster. On many clusters, you would do something like:

# Or whatever the mvapich2 module is called
module load mvapich2

bsub -R light -J downloading_image -R singularity \
     singularity pull docker://alsvinn/alsvinn_cuda

bsub -R singularity -w "done(downloading_image)" \
     mpirun -np <number of cores> singularity run --nv -B $(pwd):$(pwd) \
     alsvinn_cuda.simg --multi-y <number of cores> \
     /examples/kelvinhelmholtz/kelvinhelmholtz.xml

Note about Boost and Python

Be sure the Python version you link against is the same as the Python version Boost was built with. Usually, this is taken care of by using the libraries corresponding to the Python executable (this is especially true on CSCS Daint; on Euler you probably have to build Boost with NumPy support yourself).

Note about GCC versions and CUDA

At the moment, CUDA on Linux does not support GCC versions later than 6; therefore, to build with GPU support, you need to set the compiler to GCC 6.

After you have installed GCC 6 on your distribution, you can set the C and C++ compilers as

cmake .. -DCMAKE_CXX_COMPILER=`which g++-6` -DCMAKE_C_COMPILER=`which gcc-6`

Notes on Windows

You will need to download Ghostscript in order for Doxygen to work. Download from

 https://code.google.com/p/ghostscript/downloads/detail?name=gs910w32.exe&can=2&q=

Sometimes CMake finds libhdf5 as the DLL and not the .lib file. Check that HDF5_C_LIBRARY is set to hdf5.lib in CMake.

gtest should be built with "-Dgtest_force_shared_crt=ON" via CMake. This ensures that there are no library compatibility issues.

You will also want to compile both a Debug and a Release version of gtest, and set each library manually in CMake.

If you have installed both Anaconda and HDF5, make sure the right HDF5 version is picked (it is irrelevant which one you pick, but the two can get mixed up in one or several of: include directories, libraries, and PATH (for the DLL)).

Installing necessary software

Ubuntu 19.04

Simply run

sudo apt-get update
sudo apt-get install libnetcdf-mpi-dev libnetcdf-dev \
    cmake python3 python3-matplotlib python3-numpy python3-scipy git \
    libopenmpi-dev gcc g++ libgtest-dev libboost-all-dev doxygen \
    build-essential graphviz libhdf5-mpi-dev libpnetcdf-dev

If you want CUDA (GPU) support, you also have to install the CUDA packages

sudo apt-get install nvidia-cuda-dev nvidia-cuda-toolkit

Compiling with CUDA

Compile with

cd alsvinn/
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DALSVINN_PYTHON_VERSION=3.7
make

(If you are using a different Python version, replace 3.7 with the version you are using.)

Compiling without CUDA

Compile with

cd alsvinn/
mkdir build
cd build
cmake .. -DALSVINN_USE_CUDA=OFF -DCMAKE_BUILD_TYPE=Release \
    -DALSVINN_PYTHON_VERSION=3.7
make

(If you are using a different Python version, replace 3.7 with the version you are using.)

Arch Linux

Pacman should have all the needed packages; simply install

pacman -S git cmake cuda g++ netcdf parallel-netcdf-openmpi hdf5-openmpi doxygen gtest boost python

Manually installing parallel-netcdf

On some older distributions, you have to compile parallel-netcdf manually. You can download the latest version from its webpage and install it using autotools. Note that you should compile with the C flag -fPIC. A quick way to install parallel-netcdf is

wget http://cucis.ece.northwestern.edu/projects/PnetCDF/Release/parallel-netcdf-1.9.0.tar.gz
tar xvf parallel-netcdf-1.9.0.tar.gz
cd parallel-netcdf-1.9.0
export CFLAGS='-fPIC'
./configure --prefix=<some location>
make install

Remember to pass <some location> to CMake via -DCMAKE_PREFIX_PATH=<some location> when configuring alsvinn afterwards.

Using alsvinn as a library

While it is not recommended, there are guides available on how to use alsvinn as a standalone library.
