microsoft / Labench

License: MIT
Latency Benchmarking tool

Programming Languages

go

Projects that are alternatives to or similar to Labench

http bench
A Go HTTP stress-testing tool that supports single-node and distributed modes
Stars: ✭ 142 (+89.33%)
Mutual labels:  benchmark, http2
Awesome Http Benchmark
HTTP(S) benchmark tools, testing/debugging, & REST APIs (RESTful)
Stars: ✭ 2,236 (+2881.33%)
Mutual labels:  http2, benchmark
Esa Restlight
ESA Restlight is a lightweight, REST-oriented web framework.
Stars: ✭ 67 (-10.67%)
Mutual labels:  http2
Service Mesh Benchmark
Stars: ✭ 72 (-4%)
Mutual labels:  benchmark
Ncnn Benchmark
Benchmarks for ncnn, a high-performance neural network inference framework optimized for mobile platforms
Stars: ✭ 70 (-6.67%)
Mutual labels:  benchmark
Crypto Bench
Benchmarks for crypto libraries (in Rust, or with Rust bindings)
Stars: ✭ 67 (-10.67%)
Mutual labels:  benchmark
Appdocs
Application Performance Optimization Summary
Stars: ✭ 1,169 (+1458.67%)
Mutual labels:  benchmark
Grpc Rust
Rust implementation of gRPC
Stars: ✭ 1,139 (+1418.67%)
Mutual labels:  http2
Unsafe
Assorted Java classes that make use of sun.misc.Unsafe
Stars: ✭ 74 (-1.33%)
Mutual labels:  benchmark
Quantum Benchmarks
Benchmarking quantum circuit emulators for daily research usage
Stars: ✭ 70 (-6.67%)
Mutual labels:  benchmark
Asr benchmark
Program to benchmark various speech recognition APIs
Stars: ✭ 71 (-5.33%)
Mutual labels:  benchmark
Akka Http
The Streaming-first HTTP server/module of Akka
Stars: ✭ 1,163 (+1450.67%)
Mutual labels:  http2
Http Benchmark Tornado
A high-performance HTTP benchmarking tool based on Python Tornado. Java Netty version: https://github.com/junneyang/http-benchmark-netty
Stars: ✭ 67 (-10.67%)
Mutual labels:  benchmark
Ossf Cve Benchmark
The OpenSSF CVE Benchmark consists of code and metadata for over 200 real life CVEs, as well as tooling to analyze the vulnerable codebases using a variety of static analysis security testing (SAST) tools and generate reports to evaluate those tools.
Stars: ✭ 71 (-5.33%)
Mutual labels:  benchmark
Evalne
Source code for EvalNE, a Python library for evaluating Network Embedding methods.
Stars: ✭ 67 (-10.67%)
Mutual labels:  benchmark
Ben
Your benchmark assistant, written in Go.
Stars: ✭ 72 (-4%)
Mutual labels:  benchmark
Umesimd
UME::SIMD, a library for explicit SIMD vectorization.
Stars: ✭ 66 (-12%)
Mutual labels:  benchmark
The Cpp Abstraction Penalty
Modern C++ benchmarking
Stars: ✭ 69 (-8%)
Mutual labels:  benchmark
Attabench
Microbenchmarking app for Swift with nice log-log plots
Stars: ✭ 1,167 (+1456%)
Mutual labels:  benchmark
1m Go Tcp Server
Benchmarks for server implementations that support 1 million connections
Stars: ✭ 1,193 (+1490.67%)
Mutual labels:  benchmark

Introduction

LaBench (for LAtency BENCHmark) is a tool that measures latency percentiles of HTTP GET or POST requests under very even and steady load.

The main feature and distinction of this tool is that, unlike many other benchmarking tools, it dictates the request rate to the server and maintains that rate very evenly even when the server is experiencing slowdowns and hiccups, whereas other tools usually back off and let the server recover (see the Coordinated Omission Problem for more details).

The main difference from the wrk2 tool is the very even load generated by LaBench.
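
To make this concrete, below is a minimal open-loop load generator sketch in Go. This is not LaBench's actual implementation, and the target URL, rate, and duration are placeholder assumptions; it only illustrates the scheduling idea: sends are driven by a fixed ticker, each request runs in its own goroutine, and latency is measured from the scheduled send time, so a slow response never delays (or hides) the next request.

```go
// Minimal open-loop load generator sketch (not LaBench's actual code).
// Unlike a closed-loop client, the send schedule is driven by a fixed
// ticker, so a stalled server cannot slow the request rate — this is
// what avoids the Coordinated Omission Problem.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const (
		ratePerSec = 100                     // placeholder target rate
		duration   = 10 * time.Second        // placeholder test length
		target     = "http://localhost:8080/" // placeholder URL
	)

	latencies := make(chan time.Duration, ratePerSec*10)
	ticker := time.NewTicker(time.Second / ratePerSec)
	defer ticker.Stop()
	deadline := time.After(duration)

	for {
		select {
		case <-deadline:
			// Give in-flight requests a moment to finish, then drain.
			time.Sleep(time.Second)
			for {
				select {
				case l := <-latencies:
					fmt.Println(l) // a real tool feeds these into an HDR histogram
				default:
					return
				}
			}
		case start := <-ticker.C:
			// Each request gets its own goroutine so a slow response
			// never delays the next scheduled send. Latency is measured
			// from the scheduled send time, not from when the request
			// actually managed to go out.
			go func() {
				resp, err := http.Get(target)
				if err == nil {
					resp.Body.Close()
					latencies <- time.Since(start)
				}
			}()
		}
	}
}
```

If the machine cannot keep up with the ticker, sends slip behind schedule; a metric like LaBench's TimelySends percentage (see the Quick-Start Guide below) flags exactly that condition.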

Quick-Start Guide

  1. Copy or compile the LaBench binary (both Windows and Linux executables are available). The Windows version has a more precise clock.
  2. Modify labench.yaml to meet your needs; the basic parameters should be self-explanatory (a minimal illustrative config follows this list). For the full list of supported parameters, see full_config.yaml.
  3. Run the benchmark by simply running labench (you can also specify a .yaml file on the command line, but labench.yaml is used by default).
  4. BEFORE looking at the latency results, check the following things in the tool output:
    1. TimelyTicks percentage. If it is less than, say, 99.9%, you need to increase the number of Clients in the yaml config. It is very realistic to keep it at 100%.
    2. TimelySends percentage. If it is less than, say, 99.9%, you need a beefier machine to run the test. It is very realistic to keep it at 100%.
    3. The number of errors returned by the server (non-200 responses). Some small percentage is OK, but errors are not accounted for in the latency results.
    4. The throughput reported in the last line. It should be close to the RequestRatePerSec value in your .yaml config.
  5. If ANY of the above is not satisfied, the run was not valid and there is no point in looking at the latency results produced; fix the issue and re-run.
  6. The measurement results (latency percentiles) are placed in the out\res.hgrm file. You can open it in Excel or go to http://hdrhistogram.github.io/HdrHistogram/plotFiles.html to plot it.
  7. Note that plotted results have a logarithmic X axis, i.e. the distance between 99% and 99.9% is the same as the distance between 99.9% and 99.99% (a percentile p plots at x = log10(1/(1−p)), so 99% lands at x = 2 and 99.9% at x = 3).
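
For step 2, the sketch below shows what a minimal labench.yaml might look like. Only Clients and RequestRatePerSec are named in this README; every other key is an illustrative assumption, so consult full_config.yaml for the actual parameter names.

```yaml
# Hypothetical labench.yaml sketch — only Clients and RequestRatePerSec
# are named in this README; the remaining keys are illustrative.
Clients: 50               # concurrent connections; raise this if TimelyTicks < ~99.9%
RequestRatePerSec: 1000   # target rate; compare against the reported throughput
Duration: 60              # illustrative: test length in seconds
Request:                  # illustrative: the request to send
  URL: http://localhost:8080/hello
  Method: GET
```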

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
