LesnyRumcajs / grpc_bench

License: MIT
Various gRPC benchmarks



One repo to finally have a clear, objective gRPC benchmark with code for everyone to verify and improve.

Contributions are most welcome!

See the Nexthink blog post for a deeper overview of the project and recent results.

Goal

The goal of this benchmark is to compare the performance and resource usage of various gRPC libraries across different programming languages and technologies. To achieve that, a minimal protobuf contract is used, so as not to pollute the results with unrelated factors (e.g. hash-map performance) and to keep the implementations simple.

That being said, the service implementations should NOT take advantage of this minimalism: the code should stay generic and maintainable. What does generic mean? One should be able to easily adapt the existing code to fundamental use cases (e.g. a thread-safe hash map on the server side that returns values for a given key, performing blocking I/O, or retrieving a network resource).
Keep in mind the following guidelines:

  • No inline assembly or other language-specific tricks or hacks should be used
  • The code should be (reasonably) idiomatic, built upon the modern patterns of the language
  • Don't make any assumptions about the kind of work done inside the server's request handler
  • Don't assume all client requests will have exactly the same content

You decide what is better

Although the results are ultimately sorted by the number of requests served, one should go beyond that and look at resource usage: perhaps one implementation is slightly faster but uses three times as much CPU to achieve it. Maybe it's better to take the former if you're running on a Raspberry Pi and want to get the most out of it; maybe it's better to use the latter on a big server with 32 CPUs because it scales better. It all depends on your use case. This benchmark was created to help people make an informed decision (and get ecstatic when their favourite technology turns out to be really good, beyond doubt).

Metrics

We try to provide some metrics to make this decision easier:

  • req/s - the number of requests the service was able to successfully serve
  • average latency, and 90/95/99 percentiles - time from sending a request to receiving the response
  • average CPU, memory - average resource usage during the benchmark, as reported by docker stats
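The CPU and memory figures come from docker stats, so you can sample the same numbers yourself while a benchmark container is running. As a sketch (the container name below is a placeholder, not one the scripts actually use):

```shell
# One-shot (non-streaming) sample of a container's CPU and memory usage;
# "benchmark-container" is an illustrative name for whatever container
# the benchmark happens to start on your machine.
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}" benchmark-container
```

The benchmark scripts collect these values over the whole run and report the averages.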

What this benchmark does NOT take into account

  1. Completeness of the gRPC library. We test only basic unary RPC at the moment. This is the most common service method, and it may be enough for some business use cases, but not for others. When you're happy with the results of some technology, check out its documentation (if it exists) and decide for yourself whether it is production-ready.
  2. Taste. Some may find beauty in Ruby, some may feel like Java is the only real deal. Others treat languages as tools and don't care at all. We don't judge (officially 😉 ). Unless it's a huge state machine with raw void pointers. Oops!

Prerequisites

Linux or macOS with Docker. Keep in mind that results on macOS may be less reliable, since Docker for Mac runs inside a VM.

Running benchmark

To build the benchmark images, use: ./build.sh [BENCH1] [BENCH2] ... . The images are required to run the benchmarks.

To run the benchmarks, use: ./bench.sh [BENCH1] [BENCH2] ... . They will run sequentially.

To clean up the benchmark images, use: ./clean.sh [BENCH1] [BENCH2] ...
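Putting the three scripts together, a typical session might look like the following sketch (rust_tonic_mt and go_grpc are example benchmark names; substitute any implementations present in the repository):

```shell
# Build the Docker images for the chosen implementations
./build.sh rust_tonic_mt go_grpc

# Run the benchmarks one after another and print the results
./bench.sh rust_tonic_mt go_grpc

# Remove the images when you are done
./clean.sh rust_tonic_mt go_grpc
```

Running the scripts with no arguments operates on every benchmark in the repository, which can take a long time.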

Configuring the benchmark

The benchmark can be configured through the following environment variables:

Name | Description | Default value
GRPC_BENCHMARK_DURATION | Duration of the benchmark. | 20s
GRPC_BENCHMARK_WARMUP | Duration of the warmup, during which stats won't be collected. | 5s
GRPC_REQUEST_SCENARIO | Scenario (from scenarios/) containing the protobuf and the data to be sent in the client request. | complex_proto
GRPC_SERVER_CPUS | Maximum number of CPUs used by the server. | 1
GRPC_SERVER_RAM | Maximum memory used by the server. | 512m
GRPC_CLIENT_CONNECTIONS | Number of connections to use. | 50
GRPC_CLIENT_CONCURRENCY | Number of requests to run concurrently. It can't be smaller than the number of connections. | 1000
GRPC_CLIENT_QPS | Rate limit, in queries per second (QPS). | 0 (unlimited)
GRPC_CLIENT_CPUS | Maximum number of CPUs used by the client. | 1
GRPC_IMAGE_NAME | Name of the Docker image built by ./build.sh. | grpc_bench
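For example, a hypothetical five-minute run giving the server and client two cores each could be configured as follows (the values are illustrative, not recommendations):

```shell
# Illustrative benchmark configuration; adjust to your machine.
export GRPC_BENCHMARK_DURATION=300s   # 5-minute measurement window
export GRPC_BENCHMARK_WARMUP=10s      # stats are discarded during warmup
export GRPC_SERVER_CPUS=2             # server may use up to 2 cores
export GRPC_CLIENT_CPUS=2             # client may use up to 2 cores

# Then run the selected benchmarks, e.g.:
#   ./bench.sh rust_tonic_mt
```

Any variable left unset falls back to the default from the table above.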

Parameter recommendations

  • GRPC_BENCHMARK_DURATION should not be too small. Some implementations need a warm-up before achieving their optimal performance, and most real-life gRPC services are expected to be long-running processes. From what we measured, 300s should be enough.
  • GRPC_SERVER_CPUS + GRPC_CLIENT_CPUS should not exceed the total number of cores on the machine. The reason is that you don't want the ghz client to steal precious CPU cycles from the service under test. Keep in mind that setting GRPC_CLIENT_CPUS too low may fail to saturate the service for some of the more performant implementations. Also keep in mind that limiting GRPC_SERVER_CPUS to 1 will severely hamper performance for some technologies. Is running a service on 1 CPU your use case? It may be, but remember that a load balancer, should you need one, also incurs some cost.
  • GRPC_REQUEST_SCENARIO is a parameter to both build.sh and bench.sh. The images must be rebuilt each time you intend to use a scenario whose helloworld.proto differs from the one run previously.
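Because a scenario with a different helloworld.proto forces a rebuild, switching scenarios might look like this sketch (complex_proto is the default scenario from the table above; the benchmark name is an example):

```shell
# Select the scenario for both the build and the run.
export GRPC_REQUEST_SCENARIO=complex_proto

# Rebuild so the images pick up the scenario's proto, then benchmark:
#   ./build.sh rust_tonic_mt
#   ./bench.sh rust_tonic_mt
```

Re-running bench.sh with a different scenario but without rebuilding will exercise stale generated code, so the rebuild step is not optional.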

Other parameters will depend on your use case. Choose wisely.

Results

You can find our sample results in the Wiki. Be sure to run the benchmarks yourself if you have sufficient hardware, especially for multi-core scenarios.
