
darklang / fizzboom

License: MIT
Benchmark to compare async web server + interpreter + web client implementations across various languages

Programming Languages

F#, Rust, OCaml, Shell, Python

Projects that are alternatives to, or similar to, fizzboom

Audit-Test-Automation
The Audit Test Automation Package gives you the ability to get an overview of the compliance status of several systems. You can easily create HTML reports and have a transparent overview of compliance and non-compliance of explicit settings and configurations in comparison to industry standards and hardening guides.
Stars: ✭ 37 (-19.57%)
Mutual labels:  benchmark, webserver
Agoo
A High Performance HTTP Server for Ruby
Stars: ✭ 679 (+1376.09%)
Mutual labels:  benchmark, webserver
KLUE
📖 Korean NLU Benchmark
Stars: ✭ 420 (+813.04%)
Mutual labels:  benchmark
frobtads
Linux and macOS development tools and text-mode interpreter for TADS adventure games.
Stars: ✭ 41 (-10.87%)
Mutual labels:  interpreter
tpch-spark
TPC-H queries in Apache Spark SQL using native DataFrames API
Stars: ✭ 63 (+36.96%)
Mutual labels:  benchmark
copyparty
⇆🎉 http file sharing hub (py2/py3)
Stars: ✭ 45 (-2.17%)
Mutual labels:  webserver
JohnSnow
A tiny C++ webserver; when it goes wrong, it returns "I know nothing."
Stars: ✭ 55 (+19.57%)
Mutual labels:  webserver
best
🏆 Delightful Benchmarking & Performance Testing
Stars: ✭ 73 (+58.7%)
Mutual labels:  benchmark
hedgehog
a toy programming language
Stars: ✭ 24 (-47.83%)
Mutual labels:  interpreter
BinKit
Binary Code Similarity Analysis (BCSA) Benchmark
Stars: ✭ 54 (+17.39%)
Mutual labels:  benchmark
esp32 snow
esp32 evk
Stars: ✭ 74 (+60.87%)
Mutual labels:  webserver
interp
Interpreter experiment. Testing dispatch methods: Switching, Direct/Indirect Threaded Code, Tail-Calls and Inlining
Stars: ✭ 32 (-30.43%)
Mutual labels:  interpreter
warpy
WebAssembly interpreter in RPython
Stars: ✭ 54 (+17.39%)
Mutual labels:  interpreter
cult
CPU Ultimate Latency Test.
Stars: ✭ 67 (+45.65%)
Mutual labels:  benchmark
pip
Pip: an imperative code-golf language
Stars: ✭ 22 (-52.17%)
Mutual labels:  interpreter
EthernetWebServer
This is simple yet complete WebServer library for AVR, Portenta_H7, Teensy, SAM DUE, SAMD21/SAMD51, nRF52, STM32, RP2040-based, etc. boards running Ethernet shields. The functions are similar and compatible to ESP8266/ESP32 WebServer libraries to make life much easier to port sketches from ESP8266/ESP32. Coexisting now with `ESP32 WebServer` and…
Stars: ✭ 118 (+156.52%)
Mutual labels:  webserver
PsWebServer
Civet web server integration plugin for Unreal Engine 4
Stars: ✭ 24 (-47.83%)
Mutual labels:  webserver
sebasic4
SE Basic IV 4.2 Cordelia - A free BASIC interpreter written in Z80 assembly language
Stars: ✭ 44 (-4.35%)
Mutual labels:  interpreter
Filipino-Text-Benchmarks
Open-source benchmark datasets and pretrained transformer models in the Filipino language.
Stars: ✭ 22 (-52.17%)
Mutual labels:  benchmark
ATS-blockchain
⛓️ Blockchain + Smart contracts from scratch
Stars: ✭ 18 (-60.87%)
Mutual labels:  interpreter

Benchmark the same async program across Rust, OCaml, and F#.

These days, it's mostly used to benchmark F# in various configurations.

Benchmark overview

This benchmark tests which language is best for implementing Dark in. Dark already has an implementation, but we are looking for improvements, especially around async.

The benchmark is fizzbuzz: using an interpreter connected to a web server, dynamically calculate fizzbuzz and return it as a JSON response. This tests the raw speed of the HTTP server and the interpreter.
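For illustration, here is a minimal Python sketch of the computation and the kind of JSON body a response might carry; the exact response shape is an assumption, not something the benchmark specifies:

```python
import json

def fizzbuzz(n: int) -> list:
    """Compute fizzbuzz for 1..n; multiples become words, other numbers stay ints."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("fizzbuzz")
        elif i % 3 == 0:
            out.append("fizz")
        elif i % 5 == 0:
            out.append("buzz")
        else:
            out.append(i)
    return out

# A handler would serialize this and return it as the JSON response body:
print(json.dumps(fizzbuzz(15)))
```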

The most important metric is requests/second.

Contributing

No one likes to see their favorite language lose at benchmarks, so please feel free to submit pull requests to improve existing benchmarks or add new variations (different web servers, new languages/frameworks, etc). Some rules:

  • the interpreter must be easy to update, add to, and improve. As such, no micro-optimizations, assembly code, JITs, etc. However, it is fine to:
    • add one-off fixes that, for example, improve the compiler optimization settings, the webserver configuration, etc. Whatever you'd use for best performance in production is fine.
    • propose alternatives if the code has bad performance that unfairly penalizes your language (eg due to a compiler bug)
    • fix existing bad code (eg if data is being copied unnecessarily)
    • provide code review for existing implementations
  • I can't imagine all the ways that people will try to game this, so I'm definitely going to reject things that don't support how we'd actually want to write Dark's backend. New rules will be added as this happens.

Overview of codebase

The benchmark harness is implemented in measure.py and requires wrk to be installed.

Run ./measure to test all the fizzbuzz implementations, or ./measure <directory_name1> <directory_name2> <etc> to test a subset of them.
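As a sketch of the measurement step (the wrk thread/connection settings and the /fizzbuzz URL here are illustrative assumptions; measure.py's actual parameters may differ), one can shell out to wrk and parse the Requests/sec line from its summary:

```python
import re
import subprocess

def requests_per_second(url: str, duration: str = "10s") -> float:
    """Run wrk against url and parse the Requests/sec figure from its summary.
    Thread and connection counts are illustrative, not what measure.py uses."""
    out = subprocess.run(
        ["wrk", "-t", "4", "-c", "64", "-d", duration, url],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Requests/sec:\s+([\d.]+)", out)
    if match is None:
        raise RuntimeError("wrk output did not contain a Requests/sec line")
    return float(match.group(1))

print(requests_per_second("http://localhost:5000/fizzbuzz"))
```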

Benchmarks

Each benchmark candidate is in its own directory, which has some known files (a sketch of how a harness might drive them follows the list):

  • ./install.sh - installs dependencies
  • ./build.sh - builds the server. This should use a release configuration
  • ./run.sh - runs the server on port 5000
  • BROKEN - if this file exists, the implementation in this directory is skipped
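As a rough illustration of that contract (a hypothetical Python driver, not the actual measure.py logic), handling one candidate directory might look like this:

```python
import socket
import subprocess
import time
from pathlib import Path

def bench_directory(path: Path) -> None:
    """Hypothetical driver for one candidate directory, following the
    contract above; the real logic lives in measure.py and may differ."""
    if (path / "BROKEN").exists():
        print(f"skipping {path.name}: BROKEN marker present")
        return
    subprocess.run(["./build.sh"], cwd=path, check=True)
    server = subprocess.Popen(["./run.sh"], cwd=path)
    try:
        # Poll until the server accepts connections on port 5000.
        for _ in range(100):
            try:
                socket.create_connection(("localhost", 5000), timeout=1).close()
                break
            except OSError:
                time.sleep(0.1)
        # ... run wrk against http://localhost:5000 and record requests/sec ...
    finally:
        server.terminate()
        server.wait()
```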

Each benchmark implements an HTTP server connected to an interpreter, which together implement a simple subset of the Dark language.
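To make that shape concrete, here is a toy Python sketch of an interpreter over a hypothetical AST evaluating fizzbuzz; the node types are invented for illustration, and the real Dark subset differs:

```python
# Toy AST nodes; the actual node types in the benchmark implementations differ.
from dataclasses import dataclass

@dataclass
class IntLit:
    value: int

@dataclass
class StrLit:
    value: str

@dataclass
class Var:
    name: str

@dataclass
class BinOp:
    op: str
    left: object
    right: object

@dataclass
class If:
    cond: object
    then: object
    else_: object

def eval_expr(expr, env):
    """Evaluate one expression node against a variable environment."""
    if isinstance(expr, (IntLit, StrLit)):
        return expr.value
    if isinstance(expr, Var):
        return env[expr.name]
    if isinstance(expr, BinOp):
        left = eval_expr(expr.left, env)
        right = eval_expr(expr.right, env)
        if expr.op == "%":
            return left % right
        if expr.op == "==":
            return left == right
        raise ValueError(f"unknown operator: {expr.op}")
    if isinstance(expr, If):
        branch = expr.then if eval_expr(expr.cond, env) else expr.else_
        return eval_expr(branch, env)
    raise TypeError(f"unknown node: {expr!r}")

def divisible_by(n):
    return BinOp("==", BinOp("%", Var("n"), IntLit(n)), IntLit(0))

# fizzbuzz for a single number n, expressed as nested conditionals:
PROGRAM = If(divisible_by(15), StrLit("fizzbuzz"),
             If(divisible_by(3), StrLit("fizz"),
                If(divisible_by(5), StrLit("buzz"), Var("n"))))

print([eval_expr(PROGRAM, {"n": n}) for n in range(1, 16)])
```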

The purpose of the benchmark is to establish:

  • how fast each language is
  • what the cost of async is
  • whether variations on async can improve performance

The sync implementation establishes a performance baseline. We can then compare the sync and async implementations of fizzbuzz to see how much async costs.

Different languages can be compared async-vs-async, which shows raw performance given the fizzbuzz constraints.

The optimized async implementations show the value of different optimizations, and whether there are ways to improve on a baseline async implementation.

Results

Recent results are posted to the Results issue.

Code of Conduct

Dark's community is held to the Dark Code of Conduct. Benchmarks can be contentious; please be kind to everyone involved.

License

MIT
