WillSewell / Gc Latency Experiment

Exploring some worst-case latencies in GCs, inspired by a post on GHC's runtime pause times: https://making.pusher.com/latency-working-set-ghc-gc-pick-two/

Projects that are alternatives of or similar to Gc Latency Experiment

Swift Graphql
GraphQL implementation written in Swift
Stars: ✭ 38 (-15.56%)
Mutual labels:  makefile
Lakka Libreelec
Lakka is a lightweight Linux distribution that transforms a small computer into a full blown game console.
Stars: ✭ 1,007 (+2137.78%)
Mutual labels:  makefile
Coreos Stack Bootstrap
Stars: ✭ 43 (-4.44%)
Mutual labels:  makefile
The Ooc Language
📘 The definitive manual on the ooc programming language
Stars: ✭ 38 (-15.56%)
Mutual labels:  makefile
Cloverleaf
A hydrodynamics mini-app to solve the compressible Euler equations in 2D, using an explicit, second-order method.
Stars: ✭ 39 (-13.33%)
Mutual labels:  makefile
Trec Data
scripts to download and standardize trec query and document sets
Stars: ✭ 42 (-6.67%)
Mutual labels:  makefile
Docker Unix 1st Ed
A Docker image that drops you into 1st Edition Unix
Stars: ✭ 37 (-17.78%)
Mutual labels:  makefile
Ansible Newrelic
Ansible role which installs and configures New Relic Server Monitoring Daemon
Stars: ✭ 44 (-2.22%)
Mutual labels:  makefile
Openre
HandsFree OpenRE Tutorial
Stars: ✭ 41 (-8.89%)
Mutual labels:  makefile
Jekyll Bootstrap4
Bootstrap 4 with Jekyll minimalistic example site
Stars: ✭ 43 (-4.44%)
Mutual labels:  makefile
Zig.ko
Linux kernel module written in Zig
Stars: ✭ 39 (-13.33%)
Mutual labels:  makefile
Exopenwrt
Extended OpenWrt repository. Note: Latest dnscrypt-proxy merged to upstream (Designated Driver).
Stars: ✭ 39 (-13.33%)
Mutual labels:  makefile
Turris Os Packages
Mirror of https://gitlab.nic.cz/turris/turris-os-packages
Stars: ✭ 42 (-6.67%)
Mutual labels:  makefile
Acris Download
Download NYC real estate transaction data and drop it in a database
Stars: ✭ 38 (-15.56%)
Mutual labels:  makefile
Sfnd lidar obstacle detection
SFND_Lidar_Obstacle_Detection
Stars: ✭ 44 (-2.22%)
Mutual labels:  makefile
Ananas
An Arduino-based program for the Ananas step motor controller.
Stars: ✭ 38 (-15.56%)
Mutual labels:  makefile
Twemoji Color Font
Twitter Unicode 13 emoji color OpenType-SVG font for Linux/MacOS/Windows
Stars: ✭ 1,006 (+2135.56%)
Mutual labels:  makefile
Debian Packages
debian/ folders for MATE packages
Stars: ✭ 44 (-2.22%)
Mutual labels:  makefile
Perfectdemo
Web server-side development in Swift using the Perfect framework
Stars: ✭ 44 (-2.22%)
Mutual labels:  makefile
Tmwa Client Data
DEPRECATED: The data used by the ManaPlus client for the tmwAthena server used by The Mana World Legacy. All further development will take place in the "client-data" repo.
Stars: ✭ 42 (-6.67%)
Mutual labels:  makefile

What

This repository contains code to measure the worst-case pauses observable for a specific workflow in many languages.

The workflow (allocating N 1 KiB strings, with only the most recent W kept in memory at any time and the oldest string deallocated) comes from James Fisher's May 2016 blog post Low latency, large working set, and GHC's garbage collector: pick two of three, which identifies it as a situation in which the GHC garbage collector (Haskell) exhibits unpleasant latencies.

How to run

Because each benchmark requires a language-specific toolchain to build and run, we include Dockerfiles to keep these environments consistent. With Docker installed, a benchmark can be run with

make racket/results.txt

or by running Docker directly:

docker build -t gc-racket racket
docker run gc-racket

replacing racket with whatever language you are interested in.

How to contribute

The reference repository for this benchmark is Will Sewell's https://github.com/WillSewell/gc-latency-experiment. It was previously maintained by Gabriel Scherer at https://gitlab.com/gasche/gc-latency-experiment.

Pull requests to implement support for a new language are welcome.

The benchmark should write the worst-case pause time in milliseconds to STDOUT. You must include a Dockerfile that installs the benchmark dependencies and runs the benchmark in its entrypoint.
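
For illustration only, a Dockerfile for a hypothetical Go implementation could look like the sketch below (the base image, file names and build flags are assumptions, not something this repository prescribes):

    FROM golang:1.21
    WORKDIR /bench
    COPY main.go .
    RUN go build -o benchmark main.go
    ENTRYPOINT ["./benchmark"]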

We encourage you to use the best-performing compiler and runtime options.

How to measure worst-case latency

The benchmark is essentially a loop where each iteration allocates a new string and adds it to the message set, and (if the maximum window size W is reached) also removes the oldest message in the set.
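
As a rough sketch of this loop (written here in Go, with illustrative message-count and window-size values; it is not necessarily how the implementations in this repository are written), including the per-iteration timing discussed in the next subsection:

    package main

    import (
        "fmt"
        "time"
    )

    const (
        msgCount   = 1_000_000 // total messages pushed (illustrative value)
        windowSize = 200_000   // messages kept live at any time (illustrative value)
        msgSize    = 1024      // 1 KiB payload per message
    )

    // mkMessage allocates a fresh 1 KiB payload so each message really
    // occupies its own memory.
    func mkMessage(n int) []byte {
        m := make([]byte, msgSize)
        for i := range m {
            m[i] = byte(n)
        }
        return m
    }

    func main() {
        store := make(map[int][]byte)
        worst := time.Duration(0)
        prev := time.Now()
        for i := 0; i < msgCount; i++ {
            store[i] = mkMessage(i) // insert the newest message
            if i >= windowSize {
                delete(store, i-windowSize) // evict the oldest message
            }
            // "Manual" measurement: the largest gap between consecutive
            // iterations bounds the worst pause this loop observed.
            now := time.Now()
            if d := now.Sub(prev); d > worst {
                worst = d
            }
            prev = now
        }
        // Report the worst-case pause in milliseconds on STDOUT.
        fmt.Printf("%.3f\n", float64(worst.Nanoseconds())/1e6)
    }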

Runtime instrumentation vs. manual measurement

There are two ways to measure worst-case pause time. One, "instrumentation", is to activate some sort of runtime monitoring or instrumentation specific to the language implementation and read off its worst-case pause number. The other, "manual measurement", is to record the time at each iteration and compute the maximal difference between consecutive timestamps.

We recommend trying both ways (it's good to build knowledge of how to measure GC latencies, and having a Makefile full of instructions for many languages is useful). One has to be careful with instrumentation, as it may be incomplete (it may not account for certain sources of pauses); if the two measures disagree, we consider the manual measurement to be the reference.
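
As an example of the instrumentation approach, the Go runtime can report its own record of GC pauses; the sketch below is Go-specific and, as cautioned above, may not account for every source of pauses:

    package main

    import (
        "fmt"
        "runtime/debug"
        "time"
    )

    func main() {
        // ... run the benchmark loop here ...

        // "Instrumentation": ask the Go runtime for its own record of GC pauses.
        var stats debug.GCStats
        stats.PauseQuantiles = make([]time.Duration, 5) // filled with min, 25%, 50%, 75%, max
        debug.ReadGCStats(&stats)
        maxPause := stats.PauseQuantiles[4] // the maximum recorded pause
        fmt.Printf("%.3f\n", float64(maxPause.Nanoseconds())/1e6) // runtime-reported worst pause, in ms
    }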

Message payloads

The fact that the messages themselves take some space is an essential aspect of the benchmark: without it, GHC's garbage collector does an excellent job. Please ensure that your implementation actually allocates 1 KiB of memory for each message (no copy-on-write, etc.). (It's fine if the GC knows that this memory doesn't need to be traversed.) You should also avoid less-compact string representations (UTF-16, linked lists of bytes, etc.).
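
For example, in Go each payload can be a freshly allocated and filled 1 KiB byte slice; handing out slices of one shared buffer would allocate nothing per message and defeat the benchmark (illustrative sketch, not code from this repository):

    package main

    import "fmt"

    // mkMessage returns a distinct 1 KiB payload for message n: a fresh
    // allocation, filled so the memory is genuinely touched.
    func mkMessage(n int) []byte {
        m := make([]byte, 1024)
        for i := range m {
            m[i] = byte(n)
        }
        return m
    }

    // By contrast, returning slices of one shared buffer gives every
    // "message" the same backing memory.
    var shared = make([]byte, 1024)

    func mkSharedMessage(n int) []byte {
        return shared
    }

    func main() {
        a, b := mkMessage(1), mkMessage(2)
        fmt.Println(&a[0] != &b[0]) // true: distinct allocations
        c, d := mkSharedMessage(1), mkSharedMessage(2)
        fmt.Println(&c[0] == &d[0]) // true: the same allocation
    }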

Message set structure

The message set is an associative data structure where each message is indexed by the time it was inserted.

We are not trying to measure latency caused by the specific choice of associative data structure. For the initial test languages (Haskell, OCaml and Racket), using an array, a balanced search tree or a hash table makes no difference. For a new language, feel free to choose whichever gives the best results; but if one of them creates large latencies, you may want to understand why -- there were bugs in Go's maps that made latencies much higher, and some of them have since been fixed. For GC-ed languages, contiguous arrays may actually be worse than structures with more pointer indirections if the GC doesn't scan arrays incrementally; in that case, use another data structure.
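
For instance, a fixed-size array used as a ring buffer, indexed by insertion time modulo W, implements the same windowed set without a map; as noted above, whether this behaves better or worse than a map depends on the GC (illustrative sketch in Go):

    package main

    const (
        msgCount   = 1_000_000 // illustrative value
        windowSize = 200_000   // illustrative value
    )

    func main() {
        // window[i%windowSize] holds the message from iteration i; writing a
        // slot overwrites (and thereby releases) the message inserted
        // windowSize iterations earlier.
        window := make([][]byte, windowSize)
        for i := 0; i < msgCount; i++ {
            window[i%windowSize] = make([]byte, 1024) // fresh 1 KiB payload
        }
    }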

Links and reference

Gabriel Scherer wrote a blog post on measuring the latency of the OCaml benchmark through GC instrumentation, and on how Racket developers advised tuning the Racket benchmark and decided to make small changes to their runtime: Measuring GC latencies in Haskell, OCaml, Racket.

Will Sewell, who works at the same company as James Fisher, wrote a follow-up blog post on this work in which Go latencies are discussed: Golang's Real-time GC in Theory and Practice.

Gorgi Kosev runs another latency-comparison benchmark, also inspired by the same blog post but additionally involving HTTP requests, at https://github.com/spion/hashtable-latencies.

Santeri Hiltunen has a nice blog post with further measurements, in particular some information on tuning the Java benchmark, a JavaScript benchmark with similar explanations, and data on the importance of pointer-graph structures over contiguous data structures for reducing latencies.
