
SCOREC / Core

License: BSD-3-Clause
Parallel finite element unstructured meshes

Programming Languages

C

Projects that are alternatives of or similar to Core

hp2p
Heavy Peer To Peer: a MPI based benchmark for network diagnostic
Stars: ✭ 17 (-86.29%)
Mutual labels:  hpc, parallel, mpi, parallel-computing
t8code
Parallel algorithms and data structures for tree-based AMR with arbitrary element shapes.
Stars: ✭ 37 (-70.16%)
Mutual labels:  hpc, parallel, mpi, parallel-computing
Foundations of HPC 2021
This repository collects the materials from the course "Foundations of HPC", 2021, at the Data Science and Scientific Computing Department, University of Trieste
Stars: ✭ 22 (-82.26%)
Mutual labels:  hpc, mpi, parallel-computing
ParMmg
Distributed parallelization of 3D volume mesh adaptation
Stars: ✭ 19 (-84.68%)
Mutual labels:  hpc, parallel, mpi
Raftlib
The RaftLib C++ library, streaming/dataflow concurrency via C++ iostream-like operators
Stars: ✭ 717 (+478.23%)
Mutual labels:  cmake, parallel, hpc
Future.apply
🚀 R package: future.apply - Apply Function to Elements in Parallel using Futures
Stars: ✭ 159 (+28.23%)
Mutual labels:  parallel, parallel-computing, hpc
Hpcinfo
Information about many aspects of high-performance computing. Wiki content moved to ~/docs.
Stars: ✭ 171 (+37.9%)
Mutual labels:  parallel, hpc, mpi
hpc
Learning and practice of high performance computing (CUDA, Vulkan, OpenCL, OpenMP, TBB, SSE/AVX, NEON, MPI, coroutines, etc. )
Stars: ✭ 39 (-68.55%)
Mutual labels:  hpc, mpi, parallel-computing
muster
Massively Scalable Clustering
Stars: ✭ 22 (-82.26%)
Mutual labels:  parallel, mpi, parallel-computing
cruise
User space POSIX-like file system in main memory
Stars: ✭ 27 (-78.23%)
Mutual labels:  hpc, parallel, parallel-computing
PyMFEM
Python wrapper for MFEM
Stars: ✭ 91 (-26.61%)
Mutual labels:  hpc, parallel-computing, finite-elements
Dash
DASH, the C++ Template Library for Distributed Data Structures with Support for Hierarchical Locality for HPC and Data-Driven Science
Stars: ✭ 134 (+8.06%)
Mutual labels:  parallel-computing, hpc, mpi
Elmerfem
Official git repository of Elmer FEM software
Stars: ✭ 523 (+321.77%)
Mutual labels:  parallel-computing, finite-elements, mpi
ParallelUtilities.jl
Fast and easy parallel mapreduce on HPC clusters
Stars: ✭ 28 (-77.42%)
Mutual labels:  hpc, parallel, parallel-computing
Easylambda
distributed dataflows with functional list operations for data processing with C++14
Stars: ✭ 475 (+283.06%)
Mutual labels:  parallel, hpc, mpi
Mfem
Lightweight, general, scalable C++ library for finite element methods
Stars: ✭ 667 (+437.9%)
Mutual labels:  parallel-computing, finite-elements, hpc
Prpl
parallel Raster Processing Library (pRPL) is a MPI-enabled C++ programming library that provides easy-to-use interfaces to parallelize raster/image processing algorithms
Stars: ✭ 15 (-87.9%)
Mutual labels:  parallel, mpi
Appiumtestdistribution
A tool for running android and iOS appium tests in parallel across devices... U like it STAR it !
Stars: ✭ 764 (+516.13%)
Mutual labels:  parallel, parallel-computing
Sos
Sandia OpenSHMEM is an implementation of the OpenSHMEM specification over multiple Networking APIs, including Portals 4, the Open Fabric Interface (OFI), and UCX. Please click on the Wiki tab for help with building and using SOS.
Stars: ✭ 34 (-72.58%)
Mutual labels:  parallel-computing, hpc
Ray Tracing Iow Rust
Ray Tracing in One Weekend written in Rust
Stars: ✭ 57 (-54.03%)
Mutual labels:  parallel, parallel-computing

SCOREC Core

The SCOREC Core is a set of C/C++ libraries for unstructured mesh simulations on supercomputers.
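
As a very rough illustration of what a program built on these libraries looks like, the sketch below is a minimal driver in the style of the standalone programs under the test/ subdirectory: it initializes MPI and the PCU communication layer, registers GMI's native model format, loads a partitioned mesh through APF/MDS (these components are described below), verifies it, and tears everything down. The headers and calls shown (PCU_Comm_Init, gmi_register_mesh, apf::loadMdsMesh, and so on) reflect the classic C-style entry points and are assumptions on our part; consult the User's Guide for the current interface.

```c++
// Illustrative minimal driver; not a verbatim example from this repository.
// Assumed usage: mpirun -np <parts> ./driver model.dmg mesh.smb
#include <mpi.h>
#include <PCU.h>       // PCU: communication built on MPI
#include <gmi_mesh.h>  // GMI: geometric model interface (native .dmg format)
#include <apf.h>       // APF: mesh and field abstraction
#include <apfMDS.h>    // MDS: array-based mesh data structure
#include <apfMesh2.h>

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  PCU_Comm_Init();
  gmi_register_mesh();                                // register the native model reader
  apf::Mesh2* m = apf::loadMdsMesh(argv[1], argv[2]); // model file, distributed mesh files
  m->verify();                                        // sanity-check the loaded mesh
  /* ... analysis, adaptation, load balancing, etc. ... */
  m->destroyNative();
  apf::destroyMesh(m);
  PCU_Comm_Free();
  MPI_Finalize();
  return 0;
}
```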

For more information, start at our wiki page.

What is in this repository?

  • PUMI: the Parallel Unstructured Mesh Infrastructure API (see the User's Guide)
  • PCU: Communication and file IO built on MPI
  • APF: Abstract definition of meshes, fields, and related operations (a usage sketch follows this list)
  • GMI: Common interface for geometric modeling kernels
  • MDS: Compact but flexible array-based mesh data structure
  • PARMA: Scalable partitioning and load balancing procedures
  • SPR: Superconvergent Patch Recovery error estimator
  • MA: Anisotropic mixed mesh adaptation and solution transfer
  • SAM: Sizing anisotropic meshes
  • STK: Conversion from APF meshes to Sandia's STK meshes
  • ZOLTAN: Interface to run Sandia's Zoltan code on APF meshes
  • PHASTA: Tools and file formats related to the PHASTA fluid solver
  • MTH: Math containers and routines
  • CRV: Support for curved meshes with Bézier shapes
  • PYCORE: Python Wrappers (see python_wrappers/README.md for build instructions)
  • REE: Residual based implicit error estimator
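
To make the APF mesh and field abstraction a little more concrete, the fragment below (referenced from the APF item above) attaches a scalar field to the vertices of an already-loaded apf::Mesh2, fills it by iterating over the mesh, and writes VTK files for visualization. The helper name writeVertexField and the field name "exampleScalar" are invented for illustration, and the apf:: calls used (createFieldOn, setScalar, writeVtkFiles, and the begin/iterate/end pattern) should be checked against the current headers; treat this as a sketch rather than canonical usage.

```c++
// Illustrative APF usage on an already-loaded mesh (see the driver sketch above).
#include <apf.h>
#include <apfMesh2.h>

void writeVertexField(apf::Mesh2* m)
{
  // Create a nodal scalar field using the mesh's default shape functions.
  apf::Field* f = apf::createFieldOn(m, "exampleScalar", apf::SCALAR);
  apf::MeshIterator* it = m->begin(0); // dimension 0: vertices
  apf::MeshEntity* v;
  while ((v = m->iterate(it))) {
    apf::Vector3 x;
    m->getPoint(v, 0, x);                   // vertex coordinates
    apf::setScalar(f, v, 0, x.getLength()); // store distance from the origin
  }
  m->end(it);
  apf::writeVtkFiles("example_out", m); // one file set per part, viewable in ParaView
}
```

Writing one VTK file set per part this way is a common way to inspect a distributed mesh.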

How do I get set up?

  • Dependencies: CMake for compiling and MPI for running
  • Configuration: Typical CMake configure and build. example_config.sh shows common options to select; use a front-end like ccmake to see the full list of options.
  • Tests: The test/ subdirectory contains tests and standalone tools that can be compiled by explicitly listing them as targets to make.
  • Users: make install places libraries and headers in a specified prefix; application code can then use these in its own compilation process. We also install pkg-config files for all libraries.

Contribution guidelines

  • Don't break the build
  • See the STYLE file
  • If in doubt, make a branch
  • Run the ctest suite
  • Don't try to force push to master or develop; it is disabled

Who do I talk to?

Citing PUMI

If you use these tools, please cite the following paper:

Daniel A. Ibanez, E. Seegyoung Seol, Cameron W. Smith, and Mark S. Shephard. 2016. PUMI: Parallel Unstructured Mesh Infrastructure. ACM Trans. Math. Softw. 42, 3, Article 17 (May 2016), 28 pages. DOI: https://doi.org/10.1145/2814935

We would be happy to provide feedback on journal submissions using PUMI prior to publication.
