
OpenABL / OpenABL

License: Apache-2.0
A domain-specific language for parallel and distributed agent-based simulations.

Programming Languages

C++, C, Python, Yacc, Lex, Shell

OpenABL

OpenABL is a work-in-progress domain-specific language for agent-based simulations. It is designed to compile to multiple backends targeting different computing architectures, including single CPUs, GPUs and clusters.

Installation

The build requires flex, bison, cmake and a C++11-compatible C++ compiler. On Debian and Ubuntu, the build requirements can be installed using:

sudo apt-get install flex bison cmake g++

An out-of-source build can be performed using:

mkdir ./build
cmake -Bbuild -H.
make -C build -j4

Installation of backend libraries

OpenABL supports a number of backend libraries, which need to be installed separately. For convenience, a script is provided to download and build them.

Some of the backends have additional build or runtime dependencies. Most of them can be installed by running:

sudo apt-get install git autoconf libtool libxml2-utils xsltproc \
                     default-jdk libjava3d-java \
                     libgl1-mesa-dev libglu1-mesa-dev libglew-dev freeglut3-dev

FlameGPU additionally requires a CUDA installation.

The backends can then be downloaded and built using the following command:

# To build all
make -C deps

# To build only a specific one
make -C deps mason
make -C deps flame
make -C deps flamegpu
make -C deps dmason

Running

Examples are located in the examples directory.

To compile the examples/circle.abl example using the Mason backend:

build/OpenABL -i examples/circle.abl -o ./output -b mason

The result will be written into the ./output directory. To run the generated code:

cd ./output
./build.sh
./run.sh

For the circle.abl example, this will create a points.json file.
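To inspect the output programmatically, a minimal Python sketch can be used. Note that the exact schema of points.json is backend-defined, so this sketch only assumes the top level is a JSON array or object:

```python
import json

def summarize_output(path):
    """Summarize a JSON results file such as the points.json produced
    by the circle.abl example. Only assumes the top level is a JSON
    array or object; the exact schema is backend-defined."""
    with open(path) as f:
        data = json.load(f)
    if isinstance(data, list):
        return "array with %d entries" % len(data)
    return "object with keys: %s" % ", ".join(sorted(data))
```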

You can also automatically build and run the generated code (if this is supported by the backend):

# Generate + Build
build/OpenABL -i examples/circle.abl -o ./output -b mason -B
# Generate + Build + Run
build/OpenABL -i examples/circle.abl -o ./output -b mason -R

If the backend supports it, it is also possible to run with visualization:

build/OpenABL -i examples/circle.abl -b mason -C visualize=true -R

If -R is used, the output directory can be omitted. In this case a temporary directory will be used.

Running benchmarks

To run benchmarks for the different backends against the sample models, the bench/bench.py script can be used. The script requires Python 2.7 or Python >= 3.2. Usage summary:

usage: bench.py [-h] [-b BACKENDS] [-m MODELS] [-n NUM_AGENTS] [-r RESULT_DIR]
                [-M SEC]

optional arguments:
  -h, --help            show this help message and exit
  -b BACKENDS, --backends BACKENDS
                        Backends to benchmark (comma separated)
  -m MODELS, --models MODELS
                        Models to benchmark (comma separated)
  -n NUM_AGENTS, --num-agents NUM_AGENTS
                        Number of agent range (min-max)
  -r RESULT_DIR, --result-dir RESULT_DIR
                        Directory for benchmark results
  -M SEC, --max-time SEC
                        (Approximate) maximum time per backend per model

Some example usages:

# Benchmark default backends against default models with default agent numbers
# Write results to results/ directory
python bench/bench.py -r results/

# Do the same, but limit the time spent on each backend+model combination to
# approximately 30 seconds. With the default benchmark configuration of
# 4 backends and 6 models this will take approximately 4*6*30s = 12min
python bench/bench.py -r results/ --max-time 30

# Run circle and boids2d models only with 250 to 64000 agents
python bench/bench.py -r results/ -b mason -m circle,boids2d \
       --num-agents=250-64000
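The --num-agents flag expects a range of the form min-max. A small helper like the following illustrates the expected format (this helper is illustrative only; bench.py does its own parsing internally):

```python
def parse_agent_range(spec):
    """Parse a --num-agents value of the form 'min-max', e.g. '250-64000'.

    Illustrative only; bench.py has its own internal parsing."""
    lo_str, hi_str = spec.split("-", 1)
    lo, hi = int(lo_str), int(hi_str)
    if lo <= 0 or hi < lo:
        raise ValueError("invalid agent range: %r" % spec)
    return lo, hi
```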

Benchmark results are written both to stdout and to the specified results directory. The obtained runtimes can then be plotted, which requires matplotlib to be installed:

sudo apt-get install python-matplotlib

Plotting can then be performed by calling:

python bench/plot.py results/
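If you prefer to post-process the results yourself instead of using plot.py, averaging repeated runs per model and agent count is a typical first step. The row format below is a hypothetical stand-in; check the files bench.py actually writes to your results directory:

```python
from collections import defaultdict

def mean_runtimes(rows):
    """Average runtimes over repeated (model, n_agents) measurements.

    `rows` is a list of (model, n_agents, seconds) tuples; this row
    format is a hypothetical stand-in for whatever bench.py writes."""
    acc = defaultdict(list)
    for model, n_agents, seconds in rows:
        acc[(model, n_agents)].append(seconds)
    return {key: sum(vals) / len(vals) for key, vals in acc.items()}
```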

Help

Output of OpenABL --help:

Usage: ./OpenABL -i input.abl -o ./output-dir -b backend

Options:
  -A, --asset-dir    Asset directory (default: ./asset)
  -b, --backend      Backend
  -B, --build        Build the generated code
  -C, --config       Specify a configuration value (name=value)
  -D, --deps         Deps directory (default: ./deps)
  -h, --help         Display this help
  -i, --input        Input file
  -o, --output-dir   Output directory
  -P, --param        Specify a simulation parameter (name=value)
  -R, --run          Build and run the generated code

Available backends:
 * c
 * flame
 * flamegpu
 * mason
 * dmason

Available configuration options:
 * bool use_float (default: false, flame/gpu only)
 * bool visualize (default: false, d/mason only)

Configuration options

  • bool use_float = false: By default models are compiled to use double-precision floating point numbers, as some backends only support doubles. For the Flame and FlameGPU backends this option may be enabled to use single-precision floating point numbers instead.
  • bool visualize = false: Display a graphical visualization of the model. This option is currently only supported by the Mason and DMason backends.

Environment configuration

To use the automatic build and run scripts, some environment variables have to be set for the different backends. If you are using the deps Makefile, OpenABL will set these variables automatically when building and running. You only need to set them yourself if you are using a non-standard configuration or want to invoke the build and run scripts manually.

  • c backend:
    • None.
  • flame backend:
    • FLAME_XPARSER_DIR must be set to the xparser directory.
    • LIBMBOARD_DIR must be set to the libmboard directory.
  • flamegpu backend:
    • FLAMEGPU_DIR must be set to the FLAMEGPU directory.
    • CUDA must be in PATH and LD_LIBRARY_PATH.
    • SMS can be used to specify the SM architecture. This defaults to "30 35 37 50 60".
  • mason backend:
    • MASON_JAR must be set to the MASON Jar file.
  • dmason backend:
    • DMASON_JAR must be set to the DMASON Jar file.
    • DMASON_RESOURCES must be set to the DMASON resources directory.
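Before invoking build.sh and run.sh manually, it can help to verify that the variables for your backend are present. A small sketch (the variable names come from the list above; the check itself is not part of OpenABL):

```python
import os

# Required environment variables per backend, from the list above.
REQUIRED_ENV = {
    "c": [],
    "flame": ["FLAME_XPARSER_DIR", "LIBMBOARD_DIR"],
    "flamegpu": ["FLAMEGPU_DIR"],
    "mason": ["MASON_JAR"],
    "dmason": ["DMASON_JAR", "DMASON_RESOURCES"],
}

def missing_env(backend, env=None):
    """Return the required variables that are not set for `backend`."""
    env = os.environ if env is None else env
    return [var for var in REQUIRED_ENV[backend] if var not in env]
```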

Reference

If you find this code useful in your research, please consider citing:

@inproceedings{CosenzaEUROPAR18,
  author    = {Biagio Cosenza and Nikita Popov and Ben Juurlink and Paul Richmond and Mozhgan Kabiri Chimeh and Carmine Spagnuolo and Gennaro Cordasco and Vittorio Scarano},
  title     = {OpenABL: A Domain-Specific Language for Parallel and Distributed Agent-Based Simulations},
  booktitle = {International European Conference on Parallel and Distributed Computing (Euro-Par)},
  pages     = {505--518},
  year      = {2018},
  url       = {https://doi.org/10.1007/978-3-319-96983-1\_36},
  doi       = {10.1007/978-3-319-96983-1\_36}
}