the-benchmarker / graphql-benchmarks

License: MIT
GraphQL benchmarks using the-benchmarker framework.

Programming Languages

ruby
36898 projects - #4 most used programming language
go
31211 projects - #10 most used programming language
javascript
184084 projects - #8 most used programming language
c
50402 projects - #5 most used programming language
Dockerfile
14818 projects
Makefile
30231 projects

Projects that are alternatives of or similar to graphql-benchmarks

criterion
statistics-driven micro-benchmarking framework
Stars: ✭ 17 (-68.52%)
Mutual labels:  benchmark, measurement
Web Frameworks
Which is the fastest web framework?
Stars: ✭ 6,125 (+11242.59%)
Mutual labels:  benchmark, measurement
react-native-startup-time
measure startup time of your react-native app
Stars: ✭ 88 (+62.96%)
Mutual labels:  benchmark, measurement
quic vs tcp
A Survey and Benchmark of QUIC
Stars: ✭ 41 (-24.07%)
Mutual labels:  benchmark
Python-Complementary-Languages
Just a small test to see which language is better for extending python when using lists of lists
Stars: ✭ 32 (-40.74%)
Mutual labels:  benchmark
hashcat-benchmark-comparison
Hashcat Benchmark Comparison
Stars: ✭ 22 (-59.26%)
Mutual labels:  benchmark
nowplaying-RS-Music-Reco-FM
#nowplaying-RS: Music Recommendation using Factorization Machines
Stars: ✭ 23 (-57.41%)
Mutual labels:  benchmark
php-benchmarks
It is a collection of php benchmarks
Stars: ✭ 38 (-29.63%)
Mutual labels:  benchmark
gl-bench
⏱ WebGL performance monitor with CPU/GPU load.
Stars: ✭ 146 (+170.37%)
Mutual labels:  benchmark
benchmarkjs-pretty
Tiny wrapper around benchmarkjs with a nicer api
Stars: ✭ 20 (-62.96%)
Mutual labels:  benchmark
HArray
Fastest Trie structure (Linux & Windows)
Stars: ✭ 89 (+64.81%)
Mutual labels:  benchmark
IMCtermite
Enables extraction of measurement data from binary files with extension 'raw' used by proprietary software imcFAMOS/imcSTUDIO and facilitates its storage in open source file formats
Stars: ✭ 20 (-62.96%)
Mutual labels:  measurement
TensorTrade
This repository hosts all my code related to TensorTrade. It consists of the main program, its old versions, and some extras for more insights.
Stars: ✭ 16 (-70.37%)
Mutual labels:  benchmark
MDBenchmark
Quickly generate, start and analyze benchmarks for molecular dynamics simulations.
Stars: ✭ 64 (+18.52%)
Mutual labels:  benchmark
glDelegateBenchmark
quick and dirty benchmark for TFLite gles delegate on iOS
Stars: ✭ 13 (-75.93%)
Mutual labels:  benchmark
graphql-bench
A super simple tool to benchmark GraphQL queries
Stars: ✭ 222 (+311.11%)
Mutual labels:  benchmark
LuaJIT-Benchmarks
LuaJIT Benchmark tests
Stars: ✭ 20 (-62.96%)
Mutual labels:  benchmark
SQL-ProcBench
SQL-ProcBench is an open benchmark for procedural workloads in RDBMSs.
Stars: ✭ 26 (-51.85%)
Mutual labels:  benchmark
python-pytest-harvest
Store data created during your `pytest` tests execution, and retrieve it at the end of the session, e.g. for applicative benchmarking purposes.
Stars: ✭ 44 (-18.52%)
Mutual labels:  benchmark
embeddings
Embeddings: State-of-the-art Text Representations for Natural Language Processing tasks, an initial version of library focus on the Polish Language
Stars: ✭ 27 (-50%)
Mutual labels:  benchmark

Which is the fastest GraphQL?

It's all about GraphQL server benchmarking across many languages.

Benchmarks cover maximum throughput and latency under normal load. For a more detailed description of the methodology used, the how, and the why, see the bottom of this page.

Results

Top 5 Ranking

     Rate                            Latency                        Verbosity
1️⃣  agoo-c (c)                      agoo (ruby)                    fastify-mercurius (javascript)
2️⃣  ggql-i (go)                     agoo-c (c)                     express-graphql (javascript)
3️⃣  ggql (go)                       ggql-i (go)                    koa-koa-graphql (javascript)
4️⃣  agoo (ruby)                     ggql (go)                      apollo-server-fastify (javascript)
5️⃣  fastify-mercurius (javascript)  koa-koa-graphql (javascript)   apollo-server-express (javascript)

Parameters

  • Last updated: 2021-08-19
  • OS: Linux (version: 5.7.1-050701-generic, arch: x86_64)
  • CPU Cores: 12
  • Connections: 1000
  • Duration: 20 seconds

Requirements

  • Ruby for tooling
  • Docker as frameworks are isolated into containers
  • perfer, the benchmarking tool (>= 1.5.3)
  • Oj (>= 3.7), needed by the benchmarking Ruby script
  • RSpec is needed for testing

Usage

  • Install all dependencies: Ruby, Docker, perfer, Oj, and RSpec.

  • Build containers

Build all:

build.rb

Build just the named targets:

build.rb [target] [target] ...
  • Run the tests (optional)
rspec spec.rb
  • Run the benchmarks

frameworks is an optional list of frameworks or languages to run (example: ruby agoo-c)

benchmarker.rb [frameworks...]
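The steps above can be chained in a small Ruby driver. This is a hypothetical sketch, not part of the repo: the script names (build.rb, spec.rb, benchmarker.rb) come from this page, but the wrapper itself and its --run flag are illustrative assumptions.

```ruby
#!/usr/bin/env ruby
# Hypothetical driver chaining the documented steps: build the named
# containers, run the spec suite, then benchmark by language.

STEPS = [
  'ruby build.rb agoo-c ggql',   # build just the named targets
  'rspec spec.rb',               # optional sanity tests
  'ruby benchmarker.rb c go',    # benchmark the c and go frameworks
].freeze

# Only execute the commands when --run is given; otherwise print the plan.
run_mode = ARGV.include?('--run')

STEPS.each do |cmd|
  puts "==> #{cmd}"
  system(cmd) || abort("step failed: #{cmd}") if run_mode
end
```

Without --run the script just prints the plan, which makes it easy to review the exact commands before committing to a long benchmark run.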

Methodology

Performance of a framework includes latency and the maximum number of requests that can be handled in a span of time. The assumption is that users of a framework will choose to run at somewhat less than fully loaded, since running fully loaded would leave no room for a spike in usage. With that in mind, the maximum number of requests per second serves as the upper limit for a framework.

Latency tends to vary significantly, not only randomly but also with the load. A typical latency-versus-throughput curve starts at some low-load value and stays fairly flat through the normal-load region until some inflection point. From the inflection point up to the maximum throughput, latency increases.

 |                                                                  *
 |                                                              ****
 |                                                          ****
 |                                                      ****
 |******************************************************
 +---------------------------------------------------------------------
  ^               \             /                       ^           ^
  low-load          normal-load                         inflection  max

These benchmarks show the normal-load latency, as that is what most users will see when using a service. Most deployments do not run at near maximum throughput but try to stay in the normal-load region while remaining prepared for spikes in usage. To accommodate slower frameworks, a rate of 1000 requests per second is used for determining the median latency. The assumption is that a rate of 1000 requests per second falls in the normal range for most, if not all, frameworks tested.
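Once per-request latencies have been collected at the fixed 1000 req/sec rate, the reported median is straightforward to compute. A minimal Ruby sketch; the sample values are synthetic, not measured:

```ruby
# Median of a set of per-request latencies (milliseconds), as would be
# collected while holding the request rate at 1000 req/sec.
def median(samples)
  sorted = samples.sort
  mid = sorted.length / 2
  if sorted.length.odd?
    sorted[mid]                            # middle value
  else
    (sorted[mid - 1] + sorted[mid]) / 2.0  # average of the two middle values
  end
end

# Synthetic latencies; a real run would take these from the tool's output.
latencies = [0.42, 0.39, 0.47, 0.40, 0.44]
puts median(latencies)  # => 0.42
```

The median is used rather than the mean so that a few pathologically slow requests near the inflection point do not dominate the reported number.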

The perfer benchmarking tool is used for these reasons:

  • A rate can be specified for latency determination.
  • JSON output makes parsing output easier.
  • Fewer threads are needed by perfer, leaving more for the application being benchmarked.
  • perfer is faster than wrk, albeit only slightly.
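The JSON output is what makes the results easy to post-process from the Ruby tooling. As a sketch, the field names below (framework, throughput, latency.median) are illustrative assumptions, not perfer's documented schema; the repo uses Oj for parsing, but the stdlib JSON module behaves the same for this example:

```ruby
require 'json'

# Hypothetical perfer-style JSON result; the field names here are
# assumptions for illustration, not perfer's actual output schema.
raw = '{"framework":"agoo-c","throughput":200000,"latency":{"median":0.05}}'

result = JSON.parse(raw)
puts "#{result['framework']}: #{result['throughput']} req/sec, " \
     "median latency #{result['latency']['median']} ms"
```

Parsing structured output like this avoids the fragile text scraping that column-oriented tools such as wrk would otherwise require.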

How to Contribute

In any way you want ...

  • Provide a Pull Request for a framework addition
  • Report a bug (on any implementation)
  • Suggest an idea
  • More details

All ideas are welcome.

Contributors

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].