# Faasm
Faasm is a high-performance stateful serverless runtime.
Faasm provides multi-tenant isolation, yet allows functions to share regions of memory. These shared memory regions give low-latency concurrent access to data, and are synchronised globally to support large-scale parallelism.
Faasm combines software fault isolation from WebAssembly with standard Linux tooling to provide security and resource isolation at low cost. Faasm runs functions side-by-side as threads of a single runtime process, with low overheads and fast boot times.
Faasm is built on Faabric which provides the distributed messaging and state layer.
The underlying WebAssembly execution and code generation is built using WAVM.
Faasm defines a custom host interface which extends WASI to include function inputs and outputs, chaining functions, managing state, accessing the distributed filesystem, dynamic linking, pthreads, OpenMP and MPI.
Our paper on Faasm from Usenix ATC '20 can be found here.
## Quick start
You can start a Faasm cluster locally using the `docker-compose.yml` file in the root of the project:

```bash
docker-compose up -d
```
To interact with this local cluster you can run the Faasm CLI:
```bash
# Start the CLI
./bin/cli.sh

# Compile the demo function
inv compile demo hello

# Upload the demo "hello" function
inv upload demo hello

# Invoke the function
inv invoke demo hello
```
Note that the first time you run the local set-up it will generate some machine code specific to your host. This is stored in the `container/machine-code` directory in the root of the project and reused on subsequent runs.
## More information
More detail on some key features and implementations can be found below:
- Usage and set-up - using the CLI and other features.
- C/C++ functions - writing and deploying Faasm functions in C/C++.
- Python functions - isolating and executing functions in Python.
- Distributed state - sharing state between functions.
- Faasm host interface - the serverless-specific interface between functions and the underlying host.
- Kubernetes and Knative integration - deploying Faasm as part of a full serverless platform.
- Bare metal/VM deployment - deploying Faasm on bare metal or VMs as a stand-alone system.
- API - invoking and managing functions and state through Faasm's HTTP API.
- MPI and OpenMP - executing existing MPI and OpenMP applications in Faasm.
- Developing Faasm - developing and modifying Faasm.
- Releases - instructions for releasing new versions and building container tags.
- Faasm.js - executing Faasm functions in the browser and on the server.
- Threading - executing multi-threaded applications.
- Proto-Faaslets - snapshot-and-restore to reduce cold starts.
- WAMR support - support for the wasm-micro-runtime (WIP).
- SGX - information on executing functions with SGX (WIP).
## Experiments and benchmarks
Faasm experiments and benchmarks live in the Faasm experiments repo:
- TensorFlow Lite - performing inference in Faasm with TensorFlow Lite
- Polybench - benchmarking with Polybench/C
- ParRes Kernels - benchmarking with the ParRes Kernels
- Python performance - executing the Python performance benchmarks