caddyrl

package module
v0.0.0-...-cbaf1e5
Published: Aug 10, 2022 License: Apache-2.0 Imports: 20 Imported by: 0

README

Caddy HTTP Rate Limit Module

This module implements both internal and distributed HTTP rate limiting. Requests can be rejected after a specified rate limit is hit.

WORK IN PROGRESS: Please note that this module is still unfinished and may have bugs. Please try it out and file bug reports - thanks!

Features

  • Multiple rate limit zones
  • Sliding window algorithm
  • Scalable ring buffer implementation
    • Buffer pooling
    • Goroutines: 1 (to clean up old buffers)
    • Memory O(Kn) where:
      • K = events allowed in window (constant, configurable)
      • n = number of rate limits allocated in zone (configured by zone key; constant or dynamic)
  • RL state persisted through config reloads
  • Automatically sets Retry-After header
  • Optional jitter for retry times
  • Configurable memory management
  • Distributed rate limiting across a cluster
  • Caddyfile support

PLANNED:

  • Ability to define matchers in zones with Caddyfile
  • Smoothed estimates of distributed rate limiting
  • RL state persisted in storage for resuming after restarts
  • Admin API endpoints to inspect or modify rate limits

Building

To build Caddy with this module, use xcaddy:

$ xcaddy build --with github.com/jamison-phillips/caddy-ratelimit

Overview

The rate_limit HTTP handler module lets you define rate limit zones, which have a unique name of your choosing. A rate limit zone is 1:1 with a rate limit (i.e. events per duration).

A zone also has a key, which is different from its name. Keys associate 1:1 with rate limiters, implemented as ring buffers; i.e. a new key implies allocating a new ring buffer. Keys can be static (no placeholders; same for every request), in which case only one rate limiter will be allocated for the whole zone. Or, keys can contain placeholders which can be different for every request, in which case a zone may contain numerous rate limiters depending on the result of expanding the key.
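
For instance, here is a minimal pair of zone definitions illustrating the two key styles (the zone names are illustrative). The first allocates exactly one rate limiter shared by all requests; the second allocates one limiter per distinct client IP as the placeholder expands:

```json
{
	"static_zone": {"key": "global", "window": "1m", "max_events": 100},
	"per_ip_zone": {"key": "{http.request.remote.host}", "window": "5s", "max_events": 2}
}
```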

A zone is synonymous with a rate limit: a number of events per duration. Both window and max_events are required configuration for a zone; for example: 100 events every 1 minute. Because this module uses a sliding window algorithm, it works by looking back <window> duration and checking whether <max_events> events have already happened in that timeframe. If so, an internal HTTP 429 error is generated and returned, invoking any error routes you have defined. Otherwise, a reservation is made and the event is allowed through.

Each zone may optionally filter the requests it applies to by specifying request matchers.

Unlike nginx's rate limit module, this one does not require you to set a memory bound. Instead, rate limiters are scanned every so often and expired ones are deleted so their memory can be recovered by the garbage collector: Caddy does not drop rate limiters on the floor and forget events like nginx does.

Distributed rate limiting

With a little bit more CPU, I/O, and a teensy bit more memory overhead, this module distributes its rate limit state across a cluster. A cluster is simply defined as other rate limit modules that are configured to use the same storage.

Distributed RL works by periodically writing its internal RL state to storage, while also periodically reading other instances' RL state from storage, then accounting for their states when making allowance decisions. In order for this to work, all instances in the cluster must have the exact same RL zone configurations.

This synchronization algorithm is inherently approximate, but also eventually consistent (and is similar to what other enterprise-only rate limiters do). Its performance depends heavily on parameter tuning (e.g. how often to read and write), configured rate limit windows and event maximums, and performance characteristics of the underlying storage implementation. (It will be fairly heavy on reads, but writes will be lighter, even if more frequent.)

Syntax

This is an HTTP handler module, so it can be used wherever http.handlers modules are accepted.

JSON config
{
	"handler": "rate_limit",
	"rate_limits": {
		"<name>": {
			"match": [],
			"key": "",
			"window": "",
			"max_events": 0
		},
		"distributed": {
			"write_interval": "",
			"read_interval": ""
		},
		"storage": {},
		"jitter": 0.0,
		"sweep_interval": ""
	}
}

All fields are optional, but to be useful, you'll need to define at least one zone, and a zone requires window and max_events to be set. Keys can be static (no placeholders) or dynamic (with placeholders). Matchers can be used to filter requests that apply to a zone. Replace <name> with your RL zone's name.

To enable distributed RL, set distributed to a non-null object. The default read and write intervals are 5s, but you should tune these for your individual deployments.

Storage customizes the storage module that is used. Like normal Caddy convention, all instances with the same storage configuration are considered to be part of a cluster.

Jitter is an optional percentage that adds random variance to the Retry-After time, so that many rate-limited clients don't all retry at the same instant (the thundering herd problem).

Sweep interval configures how often to scan for expired rate limiters. The default is 1m.

Caddyfile config

As with all non-standard HTTP handler modules, this directive is not known to the Caddyfile adapter and so it must be "ordered" manually using global options unless it only appears within a route block. This ordering usually works well, but you should use discretion:

{
	order rate_limit before basicauth
}

Here is the syntax. See the JSON config section above for explanations about each property:

rate_limit {
	zone <name> {
		key    <string>
		window <duration>
		events <max_events>
	}
	distributed {
		read_interval  <duration>
		write_interval <duration>
	}
	storage <module...>
	jitter  <percent>
	sweep_interval <duration>
}

As with the JSON config, all subdirectives are optional and have sensible defaults (but you will obviously want to specify at least one zone).

Multiple zones can be defined. Distributed RL can be enabled just by specifying distributed if you want to use its default settings.

Examples

We'll show an equivalent JSON and Caddyfile example that defines two rate limit zones: static_example and dynamic_example.

In the static_example zone, there is precisely one ring buffer allocated because the key is static (no placeholders), and we also demonstrate defining a matcher set to select which requests the rate limit applies to. Only 100 GET requests will be allowed through every minute, across all clients.

In the dynamic_example zone, the key is dynamic (has a placeholder), and in this case we're using the client's IP address ({http.request.remote.host}). We allow only 2 requests from each client IP in any 5-second window.

We also enable distributed rate limiting. By deploying this config to two or more instances sharing the same storage module (which we did not define here, so Caddy's global storage config will be used), they will act approximately as one instance when making rate limiting decisions.

JSON example
{
	"apps": {
		"http": {
			"servers": {
				"demo": {
					"listen": [":80"],
					"routes": [
						{
							"handle": [
								{
									"handler": "rate_limit",
									"rate_limits": {
										"static_example": {
											"match": [
												{"method": ["GET"]}
											],
											"key": "static",
											"window": "1m",
											"max_events": 100
										},
										"dynamic_example": {
											"key": "{http.request.remote.host}",
											"window": "5s",
											"max_events": 2
										}
									},
									"distributed": {}
								},
								{
									"handler": "static_response",
									"body": "I'm behind the rate limiter!"
								}
							]
						}
					]
				}
			}
		}
	}
}
Caddyfile example

(The Caddyfile does not yet support defining matchers for RL zones, so that has been omitted from this example.)

{
	order rate_limit before basicauth
}

:80

rate_limit {
	distributed
	zone static_example {
		key    static
		events 100
		window 1m
	}
	zone dynamic_example {
		key    {remote_host}
		events 2
		window 5s
	}
}

respond "I'm behind the rate limiter!"

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type DistributedRateLimiting

type DistributedRateLimiting struct {
	// How often to sync internal state to storage. Default: 5s
	WriteInterval caddy.Duration `json:"write_interval,omitempty"`

	// How often to sync other instances' states from storage.
	// Default: 5s
	ReadInterval caddy.Duration `json:"read_interval,omitempty"`
	// contains filtered or unexported fields
}

DistributedRateLimiting enables and customizes distributed rate limiting. It works by writing out the state of all internal rate limiters to storage, and reading in the state of all other rate limiters in the cluster, every so often.

Distributed rate limiting is not exact like the standard internal rate limiting, but it is eventually consistent. Lower (more frequent) sync intervals will result in higher consistency and precision, but more I/O and CPU overhead.

type Handler

type Handler struct {
	// RateLimits contains the definitions of the rate limit zones, keyed by name.
	// The name **MUST** be globally unique across all other instances of this handler.
	RateLimits map[string]*RateLimit `json:"rate_limits,omitempty"`

	// Percentage jitter on expiration times (example: 0.2 means 20% jitter)
	Jitter float64 `json:"jitter,omitempty"`

	// How often to scan for expired rate limit states. Default: 1m.
	SweepInterval caddy.Duration `json:"sweep_interval,omitempty"`

	// Enables distributed rate limiting. For this to work properly, rate limit
	// zones must have the same configuration for all instances in the cluster
	// because an instance's own configuration is used to calculate whether a
	// rate limit is exceeded. As usual, a cluster is defined to be all instances
	// sharing the same storage configuration.
	Distributed *DistributedRateLimiting `json:"distributed,omitempty"`

	// Storage backend through which rate limit state is synced. If not set,
	// the global or default storage configuration will be used.
	StorageRaw json.RawMessage `json:"storage,omitempty" caddy:"namespace=caddy.storage inline_key=module"`
	// contains filtered or unexported fields
}

Handler implements rate limiting functionality.

If a rate limit is exceeded, an HTTP error with status 429 will be returned. This error can be handled using the conventional error handling routes in your config. An additional placeholder is made available, called `{http.rate_limit.exceeded.name}`, which you can use for logging or handling; it contains the name of the rate limit zone whose limit was exceeded.
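
For example, this placeholder could be used in an error route. The following is a hedged Caddyfile sketch assuming Caddy's standard handle_errors directive and {err.status_code} placeholder; it is not taken from this module's documentation:

```caddyfile
handle_errors {
	@ratelimited expression {err.status_code} == 429
	handle @ratelimited {
		respond "Rate limit '{http.rate_limit.exceeded.name}' exceeded; try again later." 429
	}
}
```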

func (Handler) CaddyModule

func (Handler) CaddyModule() caddy.ModuleInfo

CaddyModule returns the Caddy module information.

func (*Handler) Cleanup

func (h *Handler) Cleanup() error

Cleanup cleans up the handler.

func (*Handler) Provision

func (h *Handler) Provision(ctx caddy.Context) error

Provision sets up the handler.

func (Handler) ServeHTTP

func (h Handler) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error

func (*Handler) UnmarshalCaddyfile

func (h *Handler) UnmarshalCaddyfile(d *caddyfile.Dispenser) error

UnmarshalCaddyfile implements caddyfile.Unmarshaler. Syntax:

rate_limit {
    zone <name> {
        key    <string>
        window <duration>
        events <max_events>
    }
    distributed {
        read_interval  <duration>
        write_interval <duration>
    }
    storage <module...>
    jitter  <percent>
    sweep_interval <duration>
}

type RateLimit

type RateLimit struct {
	// Request matchers, which define the class of requests that are in the RL zone.
	MatcherSetsRaw caddyhttp.RawMatcherSets `json:"match,omitempty" caddy:"namespace=http.matchers"`

	// The key which uniquely differentiates rate limits within this zone. It could
	// be a static string (no placeholders), resulting in one and only one rate limiter
	// for the whole zone. Or, placeholders could be used to dynamically allocate
	// rate limiters. For example, a key of "foo" will create exactly one rate limiter
	// for all clients. But a key of "{http.request.remote.host}" will create one rate
	// limiter for each different client IP address.
	Key string `json:"key,omitempty"`

	// Number of events allowed within the window.
	MaxEvents int `json:"max_events,omitempty"`

	// Duration of the sliding window.
	Window caddy.Duration `json:"window,omitempty"`
	// contains filtered or unexported fields
}

RateLimit describes an HTTP rate limit zone.
