
udoprog / leaky-bucket

License: MIT/Apache-2.0
A tokio-based leaky bucket rate limiter

Programming Languages

rust

Projects that are alternatives to or similar to leaky-bucket

RateLimiter
A simple implementation of rate limiting algorithms
Stars: ✭ 22 (-31.25%)
Mutual labels:  leaky-bucket, rate-limiter, token-bucket
gentle-force
Brute-force, error and request rate limiting
Stars: ✭ 45 (+40.63%)
Mutual labels:  leaky-bucket, rate-limiter, token-bucket
FireflySoft.RateLimit
A rate limiting library based on .NET Standard.
Stars: ✭ 76 (+137.5%)
Mutual labels:  leaky-bucket, token-bucket
Polite
Be nice on the web
Stars: ✭ 253 (+690.63%)
Mutual labels:  rate-limiter
nestjs-ratelimiter
Distributed consistent flexible NestJS rate limiter based on Redis
Stars: ✭ 49 (+53.13%)
Mutual labels:  rate-limiter
portara-website
Portara dashboard controller to change rate limit settings without redeploying your app
Stars: ✭ 42 (+31.25%)
Mutual labels:  rate-limiter
php-ratelimiter
A framework independent rate limiter for PHP
Stars: ✭ 59 (+84.38%)
Mutual labels:  rate-limiter
Limitrr
Light NodeJS rate limiting and response delaying using Redis - including Express middleware.
Stars: ✭ 203 (+534.38%)
Mutual labels:  rate-limiter
rl
Rate limit from stdin to stdout (drop or keep messages)
Stars: ✭ 38 (+18.75%)
Mutual labels:  rate-limiter
flaps
🛬 Modular rate limiting for PHP.
Stars: ✭ 64 (+100%)
Mutual labels:  leaky-bucket
redislimiter-spring-boot
An API rate limiting framework for Spring Boot/Cloud applications, especially for microservice projects
Stars: ✭ 64 (+100%)
Mutual labels:  rate-limiter
phalcon-throttler
Phalcon Throttler is a Rate Limiter for the PHP Phalcon Framework.
Stars: ✭ 19 (-40.62%)
Mutual labels:  rate-limiter
sample-spring-cloud-gateway
sample spring cloud application with embedded api gateway on spring cloud gateway with or without service discovery with eureka
Stars: ✭ 25 (-21.87%)
Mutual labels:  rate-limiter
lua-resty-limit-rate
Lua module for limiting request rate for OpenResty/ngx_lua, using the "token bucket" method.
Stars: ✭ 64 (+100%)
Mutual labels:  token-bucket
aiolimiter
An efficient implementation of a rate limiter for asyncio.
Stars: ✭ 121 (+278.13%)
Mutual labels:  leaky-bucket
adaptive throttler
manages multiple throttlers with ability to ramp up and down
Stars: ✭ 31 (-3.12%)
Mutual labels:  rate-limiter
Diehard
Clojure library of flexible retry, circuit breaker and rate limiter
Stars: ✭ 227 (+609.38%)
Mutual labels:  rate-limiter
rate-limiter
The Rate Limiter Component provides a Token Bucket implementation to rate limit input and output in your application.
Stars: ✭ 156 (+387.5%)
Mutual labels:  rate-limiter
limitrr-php
Better PHP rate limiting using Redis.
Stars: ✭ 19 (-40.62%)
Mutual labels:  rate-limiter
rush
rush.readthedocs.io/en/latest/
Stars: ✭ 42 (+31.25%)
Mutual labels:  rate-limiter

leaky-bucket


A token-based rate limiter based on the leaky bucket algorithm.

If acquiring tokens would overflow the bucket's configured maximum capacity, the acquiring task is suspended until the required number of tokens has been drained from the bucket.

Since this crate uses timing facilities from tokio, it has to be used within a Tokio runtime with the time feature enabled.


Usage

Add the following to your Cargo.toml:

[dependencies]
leaky-bucket = "0.11.1"
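
Since the tokio time feature must be enabled (see above), your tokio dependency needs to include it; for example, something along these lines, with the exact version and feature set depending on your application:

tokio = { version = "1", features = ["rt-multi-thread", "macros", "time"] }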

Examples

The core type is RateLimiter, which limits the throughput of a section of code through its acquire and acquire_one methods.

use leaky_bucket::RateLimiter;
use std::time;

#[tokio::main]
async fn main() {
    let limiter = RateLimiter::builder()
        .max(10)
        .initial(0)
        .refill(5)
        .build();

    let start = time::Instant::now();

    println!("Waiting for permit...");

    // The three acquires below need 20 tokens in total; with 5 tokens
    // refilled per 100ms interval (the default), this should take about
    // 400ms in total.
    let a = limiter.acquire(7);
    let b = limiter.acquire(3);
    let c = limiter.acquire(10);

    let ((), (), ()) = tokio::join!(a, b, c);

    println!(
        "I made it in {:?}!",
        time::Instant::now().duration_since(start)
    );
}
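
For acquiring a single token at a time, acquire_one can be used. A minimal sketch (the loop bound and limiter configuration here are arbitrary):

use leaky_bucket::RateLimiter;

#[tokio::main]
async fn main() {
    let limiter = RateLimiter::builder()
        .initial(5)
        .refill(1)
        .build();

    for n in 0..10 {
        // Wait until a single token is available before doing the work.
        limiter.acquire_one().await;
        println!("work item {n}");
    }
}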

Implementation details

Each rate limiter has two acquisition modes: a fast path and a slow path. The fast path is used if the desired number of tokens is readily available, and simply increments an atomic counter indicating that the acquired number of tokens has been added to the bucket.

If this counter would go over the configured maximum capacity, acquisition falls back to the slow path, where one of the acquiring tasks switches over to work as a core. This is known as core switching.
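
As a rough illustration of the fast-path idea only (a toy sketch; the names below are invented and do not mirror the crate's actual internals):

use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical model of the fast path: a single atomic update that fails
// once the counter would exceed the configured maximum.
struct Bucket {
    acquired: AtomicUsize,
    max: usize,
}

impl Bucket {
    /// Try to take `permits` tokens on the fast path. Returns `false` when
    /// the counter would overflow `max`, which is where a real
    /// implementation would fall back to the slow path.
    fn try_fast_acquire(&self, permits: usize) -> bool {
        self.acquired
            .fetch_update(Ordering::AcqRel, Ordering::Acquire, |current| {
                let new = current.checked_add(permits)?;
                (new <= self.max).then_some(new)
            })
            .is_ok()
    }
}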

The following example demonstrates core switching in practice:

use leaky_bucket::RateLimiter;
use std::time;

#[tokio::main]
async fn main() {
    let limiter = RateLimiter::builder()
        .initial(10)
        .interval(time::Duration::from_millis(100))
        .build();

    // This is instantaneous since the rate limiter starts with 10 tokens to
    // spare.
    limiter.acquire(10).await;

    // This however needs to core switch and wait for a while until the
    // desired number of tokens is available.
    limiter.acquire(3).await;
}

The core is responsible for sleeping for the configured interval so that more tokens can be added, after which it ensures that any tasks waiting to acquire, including itself, are appropriately unsuspended.

On-demand core switching is what allows this rate limiter implementation to work without a coordinating background thread. But it requires that every asynchronous task that uses RateLimiter either runs its acquire call to completion or is cancelled by being dropped.

If neither of these holds, the core might leak and be locked indefinitely, preventing any future use of the rate limiter from making progress. This is similar to locking an asynchronous Mutex and never dropping its guard.

You can run this example with:

cargo run --example block-forever

use leaky_bucket::RateLimiter;
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Wake};

struct Waker;

// A waker that does nothing, so we can poll the future manually.
impl Wake for Waker {
    fn wake(self: Arc<Self>) {}
}

#[tokio::main]
async fn main() {
    let limiter = Arc::new(RateLimiter::builder().build());

    let waker = Arc::new(Waker).into();
    let mut cx = Context::from_waker(&waker);

    let mut a0 = Box::pin(limiter.acquire(1));
    // Poll once to ensure that the core task is assigned.
    assert!(a0.as_mut().poll(&mut cx).is_pending());
    assert!(a0.is_core());

    // We leak the core task, preventing the rate limiter from making progress
    // by assigning new core tasks.
    std::mem::forget(a0);

    // Awaiting acquire here would block forever.
    // limiter.acquire(1).await;
}
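
In practice this is rarely an issue, since dropping an in-flight acquire, for example through a timeout, is a clean cancellation. A minimal sketch (the durations and token counts here are arbitrary):

use leaky_bucket::RateLimiter;
use std::time::Duration;

#[tokio::main]
async fn main() {
    let limiter = RateLimiter::builder().max(10).initial(0).refill(1).build();

    // The timeout drops the acquire future when it expires, cancelling the
    // acquisition cleanly so another task can take over as core.
    let result = tokio::time::timeout(Duration::from_millis(50), limiter.acquire(10)).await;

    if result.is_err() {
        println!("timed out; the acquire was cancelled by being dropped");
    }
}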

Fairness

By default, RateLimiter uses a fair scheduler. This ensures that the core task makes progress even if there are many tasks waiting to acquire tokens, at the cost of more frequent core switching and therefore more total work. An unfair scheduler is expected to do a bit less work under contention, but without fair scheduling some tasks might end up taking longer to acquire than expected.

This behavior can be tweaked with the Builder::fair option.

use leaky_bucket::RateLimiter;

let limiter = RateLimiter::builder()
    .fair(false)
    .build();

The unfair-scheduling example showcases this phenomenon:

cargo run --example unfair-scheduling
# fair
Max: 1011ms, Total: 1012ms
Timings:
 0: 101ms
 1: 101ms
 2: 101ms
 3: 101ms
 4: 101ms
 ...
# unfair
Max: 1014ms, Total: 1014ms
Timings:
 0: 1014ms
 1: 101ms
 2: 101ms
 3: 101ms
 4: 101ms
 ...

As can be seen above, the first task in the unfair run takes longer to complete because, as the core, it prioritises releasing other tasks waiting to acquire over itself.
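
A rough sketch of a similar driver (the task count, limiter configuration, and output format here are invented for illustration; the shipped example will differ):

use leaky_bucket::RateLimiter;
use std::sync::Arc;
use std::time::{Duration, Instant};

#[tokio::main]
async fn main() {
    // With fair(false), whichever task currently acts as the core may see
    // its own acquisition delayed while it releases other waiters.
    let limiter = Arc::new(
        RateLimiter::builder()
            .refill(1)
            .interval(Duration::from_millis(100))
            .fair(false)
            .build(),
    );

    let start = Instant::now();
    let mut tasks = Vec::new();

    for id in 0..10 {
        let limiter = limiter.clone();
        tasks.push(tokio::spawn(async move {
            limiter.acquire_one().await;
            println!("{id}: acquired after {:?}", start.elapsed());
        }));
    }

    for task in tasks {
        task.await.unwrap();
    }
}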

License: MIT/Apache-2.0
