fastify / Fastify Rate Limit

Licence: MIT
A low overhead rate limiter for your routes

Projects that are alternatives of or similar to Fastify Rate Limit

Bottleneck
Job scheduler and rate limiter, supports Clustering
Stars: ✭ 1,113 (+521.79%)
Mutual labels:  rate-limiting
Speedbump
A Redis-backed rate limiter in Go
Stars: ✭ 107 (-40.22%)
Mutual labels:  rate-limiting
Enroute
EnRoute Universal Gateway: Cloud Native API gateway with OpenAPI support and free L7 rate-limiting built on Envoy proxy
Stars: ✭ 126 (-29.61%)
Mutual labels:  rate-limiting
Webapithrottle
ASP.NET Web API rate limiter for IIS and Owin hosting
Stars: ✭ 1,180 (+559.22%)
Mutual labels:  rate-limiting
Governor
A rate-limiting library for Rust (formerly ratelimit_meter)
Stars: ✭ 99 (-44.69%)
Mutual labels:  rate-limiting
Istio Workshop
In this workshop, you'll learn how to install and configure Istio, an open source framework for connecting, securing, and managing microservices, on Google Kubernetes Engine, Google’s hosted Kubernetes product. You will also deploy an Istio-enabled multi-service application
Stars: ✭ 120 (-32.96%)
Mutual labels:  rate-limiting
Dalli Rate limiter
Arbitrary Memcached-backed rate limiting for Ruby
Stars: ✭ 38 (-78.77%)
Mutual labels:  rate-limiting
Ratelimiter
C# rate limiting utility
Stars: ✭ 159 (-11.17%)
Mutual labels:  rate-limiting
Axios Rate Limit
Rate limit for axios
Stars: ✭ 106 (-40.78%)
Mutual labels:  rate-limiting
Node Rate Limiter Flexible
Node.js rate limit requests by key with atomic increments in single process or distributed environment.
Stars: ✭ 1,950 (+989.39%)
Mutual labels:  rate-limiting
Ring Ratelimit
Rate limiting middleware for Clojure Ring
Stars: ✭ 78 (-56.42%)
Mutual labels:  rate-limiting
Sentinel Cpp
C++ implementation of Sentinel
Stars: ✭ 91 (-49.16%)
Mutual labels:  rate-limiting
Guzzle Advanced Throttle
A Guzzle middleware that can throttle requests according to (multiple) defined rules. It is also possible to define a caching strategy, e.g. get the response from cache when the rate limit is exceeded or always get a cached value to spare your rate limits. Using wildcards in host names is also supported.
Stars: ✭ 120 (-32.96%)
Mutual labels:  rate-limiting
Nekobin
Elegant and open-source pastebin service
Stars: ✭ 61 (-65.92%)
Mutual labels:  rate-limiting
Aspnetcoreratelimit
ASP.NET Core rate limiting middleware
Stars: ✭ 2,199 (+1128.49%)
Mutual labels:  rate-limiting
Bucket4j
Java rate limiting library based on token/leaky-bucket algorithm.
Stars: ✭ 1,025 (+472.63%)
Mutual labels:  rate-limiting
Sentinel Golang
Sentinel Go version (Reliability & Resilience)
Stars: ✭ 1,817 (+915.08%)
Mutual labels:  rate-limiting
Laravel Rate Limited Job Middleware
A job middleware to rate limit jobs
Stars: ✭ 166 (-7.26%)
Mutual labels:  rate-limiting
Nginxconfig.io
⚙️ NGINX config generator on steroids 💉
Stars: ✭ 14,983 (+8270.39%)
Mutual labels:  rate-limiting
Play Guard
Play2 module for rate limiting, based on token bucket algorithm
Stars: ✭ 123 (-31.28%)
Mutual labels:  rate-limiting

fastify-rate-limit

A low overhead rate limiter for your routes. Supports Fastify 2.x - 3.x semver range.

Please refer to this branch and related versions for Fastify 1.x compatibility.

Install

npm i fastify-rate-limit

Usage

Register the plugin and pass it some custom options.
This plugin will add an onRequest hook that checks whether a client (identified by its IP address) has made too many requests within the given timeWindow.

const fastify = require('fastify')()

fastify.register(require('fastify-rate-limit'), {
  max: 100,
  timeWindow: '1 minute'
})

fastify.get('/', (req, reply) => {
  reply.send({ hello: 'world' })
})

fastify.listen(3000, err => {
  if (err) throw err
  console.log('Server listening at http://localhost:3000')
})

In case a client reaches the maximum number of allowed requests, an error will be sent to the user with the status code set to 429:

{
  statusCode: 429,
  error: 'Too Many Requests',
  message: 'Rate limit exceeded, retry in 1 minute'
}

You can change the response by providing a callback to errorResponseBuilder or setting a custom error handler:

fastify.setErrorHandler(function (error, request, reply) {
  if (reply.statusCode === 429) {
    error.message = 'You hit the rate limit! Slow down please!'
  }
  reply.send(error)
})

The response will have some additional headers:

  • x-ratelimit-limit: how many requests the client can make in the time window
  • x-ratelimit-remaining: how many requests remain to the client in the time window
  • x-ratelimit-reset: how many seconds must pass before the rate limit resets
  • retry-after: if the max has been reached, the number of milliseconds the client must wait before performing new requests
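
For illustration, a minimal sketch (not part of the original README) that inspects these headers with Node 18's global fetch against the example server above:

// assumes the example server above is listening on port 3000
fetch('http://localhost:3000/').then((res) => {
  console.log(res.headers.get('x-ratelimit-limit'))     // e.g. '100'
  console.log(res.headers.get('x-ratelimit-remaining')) // e.g. '99'
  console.log(res.headers.get('x-ratelimit-reset'))     // seconds until the window resets
})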

Preventing guessing of URLs through 404s

An attacker could search for valid URLs if your 404 error handling is not rate limited. To rate limit your 404 response, you can use a custom handler:

// `Fastify` and `rateLimit` (fastify-rate-limit) are assumed to be imported,
// and this snippet runs in an async context (top-level await)
const fastify = Fastify()
await fastify.register(rateLimit, { global: true, max: 2, timeWindow: 1000 })
fastify.setNotFoundHandler({
  preHandler: fastify.rateLimit()
}, function (request, reply) {
  reply.code(404).send({ hello: 'world' })
})

Note that you can customize the behaviour of the preHandler in the same way you would for specific routes:

const fastify = Fastify()
await fastify.register(rateLimit, { global: true, max: 2, timeWindow: 1000 })
fastify.setNotFoundHandler({
  preHandler: fastify.rateLimit({
    max: 4,
    timeWindow: 500
  })
}, function (request, reply) {
  reply.code(404).send({ hello: 'world' })
})

Options

You can pass the following options during the plugin registration:

fastify.register(require('fastify-rate-limit'), {
  global: false, // default true
  max: 3, // default 1000
  ban: 2, // default null
  timeWindow: 5000, // default 1000 * 60
  cache: 10000, // default 5000
  allowList: ['127.0.0.1'], // default []
  redis: new Redis({ host: '127.0.0.1' }), // default null
  skipOnError: true, // default false
  keyGenerator: function(req) { /* ... */ }, // default (req) => req.raw.ip
  errorResponseBuilder: function(req, context) { /* ... */},
  enableDraftSpec: true, // default false. Uses the IETF draft header standard
  addHeaders: { // default: show all the response headers when the rate limit is reached
    'x-ratelimit-limit': true,
    'x-ratelimit-remaining': true,
    'x-ratelimit-reset': true,
    'retry-after': true
  }
})
  • global : indicates if the plugin should apply the rate limit setting to all routes within the encapsulation scope
  • max: is the maximum number of requests a single client can perform inside a timeWindow. It can be an async function with the signature async (req, key) => {} where req is the Fastify request object and key is the value generated by the keyGenerator. The function must return a number.
  • ban: the maximum number of 429 responses to return to a single client before returning 403 instead. When the ban limit is exceeded, the context object passed to errorResponseBuilder will have ban set to true (see the ban example after this list). This parameter is an in-memory counter and may not work properly in a distributed environment.
  • timeWindow: the duration of the time window. It can be expressed in milliseconds or as a string (in the ms format)
  • cache: this plugin internally uses an LRU cache to track the clients; you can change the size of the cache with this option
  • allowList: an array of IP addresses to exclude from rate limiting. It can also be a sync function with the signature (req, key) => {} where req is the Fastify request object and key is the value generated by the keyGenerator. If the function returns a truthy value, the request will be excluded from the rate limit.
  • redis: by default this plugin uses an in-memory store, which is fast, but if your application runs on more than one server it is of limited use, since the data is stored locally.
    You can pass a Redis client here to share the counters across servers. To achieve the maximum speed, this plugin requires the use of ioredis. Note: the default parameters of a Redis connection are not the fastest for rate limiting. We suggest customizing the connectTimeout and maxRetriesPerRequest options as in the example.
  • store: a custom store to track requests and rates, which allows you to use your own storage mechanism (such as an RDBMS, MongoDB, etc.) as well as further customize the logic used to calculate the rate limits. A simple example is provided below, and a more detailed example using Knex.js can be found in the example/ folder
  • skipOnError: if true, it will skip errors generated by the storage (e.g. Redis not reachable).
  • keyGenerator: a function to generate a unique identifier for each incoming request. Defaults to (req) => req.ip; the IP is resolved by Fastify using req.connection.remoteAddress, or req.headers['x-forwarded-for'] if the trustProxy option is enabled. Override it if you want a different behavior
  • errorResponseBuilder: a function to generate a custom response object. Defaults to (req, context) => ({ statusCode: 429, error: 'Too Many Requests', message: `Rate limit exceeded, retry in ${context.after}` })
  • addHeaders: defines which headers should be added to the response when the limit is reached. By default all the headers are shown
  • enableDraftSpec: if true, it will change the HTTP rate limit headers to follow the IETF draft document. More information at draft-ietf-httpapi-ratelimit-headers.md.
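
ban example usage:

A minimal sketch (not taken from the original README) combining ban with errorResponseBuilder; as described above, context.ban is true once the ban threshold has been exceeded and the plugin answers with a 403:

fastify.register(require('fastify-rate-limit'), {
  max: 3, // up to 3 requests per timeWindow
  ban: 2, // after 2 requests over the limit, answer with 403 instead of 429
  timeWindow: '1 minute',
  errorResponseBuilder: function (req, context) {
    if (context.ban) {
      // body sent along with the 403 response
      return { statusCode: 403, error: 'Forbidden', message: 'You have been banned, try again later' }
    }
    return { statusCode: 429, error: 'Too Many Requests', message: `Rate limit exceeded, retry in ${context.after}` }
  }
})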

keyGenerator example usage:

fastify.register(require('fastify-rate-limit'), {
  /* ... */
  keyGenerator: function(req) {
    return req.headers['x-real-ip'] // nginx
    || req.headers['x-client-ip'] // apache
    || req.headers['x-forwarded-for'] // use this only if you trust the header
    || req.session.username // you can limit based on any session value
    || req.raw.ip // fallback to default
  }
})

Variable max example usage:

// In the same timeWindow, the max value can change based on request and/or key like this
fastify.register(rateLimit, {
  /* ... */
  keyGenerator (req) { return req.headers['service-key'] },
  max: async (req, key) => { return key === 'pro' ? 3 : 2 },
  timeWindow: 1000
})

errorResponseBuilder example usage:

fastify.register(require('fastify-rate-limit'), {
  /* ... */
  errorResponseBuilder: function(req, context) {
    return {
      code: 429,
      error: 'Too Many Requests',
      message: `I only allow ${context.max} requests per ${context.after} to this Website. Try again soon.`,
      date: Date.now()
    }
  }
})

Dynamic allowList example usage:

fastify.register(require('fastify-rate-limit'), {
  /* ... */
  allowList: function(req, key) {
    return req.headers['x-app-client-id'] === 'internal-usage'
  }
})

Custom store example usage:

NOTE: The timeWindow will always be passed to the store's constructor as a numeric value in milliseconds.

function CustomStore (options) {
  this.options = options
  this.current = 0
}

CustomStore.prototype.incr = function (key, cb) {
  const timeWindow = this.options.timeWindow
  this.current++
  cb(null, { current: this.current, ttl: timeWindow - (this.current * 1000) })
}

CustomStore.prototype.child = function (routeOptions) {
  // We create a merged copy of the current parent parameters with the specific
  // route parameters and pass them into the child store.
  const childParams = Object.assign({}, this.options, routeOptions)
  const store = new CustomStore(childParams)
  // Here is where you may want to do some custom calls on the store with the information
  // in routeOptions first...
  // store.setSubKey(routeOptions.method + routeOptions.url)
  return store
}

fastify.register(require('fastify-rate-limit'), {
  /* ... */
  store: CustomStore
})

The routeOptions object passed to the child method of the store will contain the same options detailed above for the plugin registration, with any route-specific overrides applied. In addition, the following parameter is provided:

  • routeInfo: The configuration of the route, including method, url, path, and the full route config (see the sketch below)
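
For example, a child method could use routeInfo to namespace its counters per route. This is a sketch building on the CustomStore above, and the subKey property is purely illustrative:

CustomStore.prototype.child = function (routeOptions) {
  const childParams = Object.assign({}, this.options, routeOptions)
  const store = new CustomStore(childParams)
  // namespace the counters per route, e.g. 'GET/private'
  store.subKey = routeOptions.routeInfo.method + routeOptions.routeInfo.url
  return store
}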

Options on the endpoint itself

Rate limiting can also be configured for individual routes, applying the configuration independently.

For example, if the allowList is configured:

  • on the plugin registration will affect all endpoints within the encapsulation scope
  • on the route declaration will affect only the targeted endpoint

The global allowlist is configured when registering it with fastify.register(...).

The endpoint allowlist is set on the endpoint directly with the { config : { rateLimit : { allowList : [] } } } object.

ACL checking is performed based on the value of the key from the keyGenerator.

In this example we are checking the IP address, but it could be an allowlist of specific user identifiers (like JWT or tokens):

const fastify = require('fastify')()
const Redis = require('ioredis')
// illustrative connection; see the note about connectTimeout and maxRetriesPerRequest above
const redis = new Redis({ host: '127.0.0.1' })

fastify.register(require('fastify-rate-limit'),
  {
    global: false, // don't apply these settings to all the routes of the context
    max: 3000, // default global max rate limit
    allowList: ['192.168.0.10'], // global allowlist access
    redis: redis, // custom connection to redis
  })

// add a limited route with this configuration plus the global one
fastify.get('/', {
  config: {
    rateLimit: {
      max: 3,
      timeWindow: '1 minute'
    }
  }
}, (req, reply) => {
  reply.send({ hello: 'from ... root' })
})

// add a limited route with this configuration plus the global one
fastify.get('/private', {
  config: {
    rateLimit: {
      max: 3,
      timeWindow: '1 minute'
    }
  }
}, (req, reply) => {
  reply.send({ hello: 'from ... private' })
})

// this route doesn't have any rate limit
fastify.get('/public', (req, reply) => {
  reply.send({ hello: 'from ... public' })
})

// add a limited route with this configuration plus the global one
fastify.get('/public/sub-rated-1', {
  config: {
    rateLimit: {
      timeWindow: '1 minute',
      allowList: ['127.0.0.1'],
      onExceeding: function (req) {
        console.log('callback on exceeding ... executed before response to client')
      },
      onExceeded: function (req) {
        console.log('callback on exceeded ... use it to ban the IP in a security group for example; req is given as an argument')
      }
    }
  }
}, (req, reply) => {
  reply.send({ hello: 'from sub-rated-1 ... using default max value ... ' })
})

When creating a route you can override the same settings used at plugin registration, plus the following additional options:

  • onExceeding : callback executed each time a request is made to a rate-limited route
  • onExceeded : callback executed when a client has reached the maximum number of tries. Can be useful to ban clients (see the sketch below)
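
For instance, here is a minimal sketch (not from the original README) that collects offending IPs in a hypothetical blockedIps set, which external tooling could then act on; the /login route is purely illustrative:

const blockedIps = new Set()

fastify.get('/login', {
  config: {
    rateLimit: {
      max: 5,
      timeWindow: '1 minute',
      onExceeding: function (req) {
        console.log(`client ${req.ip} is approaching the limit`)
      },
      onExceeded: function (req) {
        // e.g. feed this into a firewall or security-group update
        blockedIps.add(req.ip)
      }
    }
  }
}, (req, reply) => {
  reply.send({ ok: true })
})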

Examples of Custom Store

These examples give an overview of the store feature; take inspiration from them and tweak them as needed:

IETF Draft Spec Headers

The response will have the following headers if enableDraftSpec is true:

  • ratelimit-limit: how many requests the client can make in the time window
  • ratelimit-remaining: how many requests remain to the client in the time window
  • ratelimit-reset: how many seconds must pass before the rate limit resets
  • retry-after: contains the same value as ratelimit-reset

License

MIT

Copyright © 2018 Tomas Della Vedova
