leaky

package module
v0.0.0-...-392718e
Published: Mar 1, 2023 License: MIT Imports: 8 Imported by: 0

README

Leaky Bucket Throttling Rate Limiter v0.1.1

An implementation of the leaky bucket rate limiter as Go HTTP middleware.

Prerequisites

  • A running Redis instance to connect to for storing real-time state.
  • A Redis client instance from go-redis

Usage:

go get github.com/2bytes/leaky

Then import the middleware and go-redis:

import (
    "github.com/2bytes/leaky"
    "github.com/redis/go-redis/v9"
)

Create your Redis client instance as necessary, then instantiate the ThrottleManager using it.

Limits can be set per handler, and each handler takes a KeyFunc used to identify a client.

rc := redis.NewClient(&redis.Options{
    Addr: "<redis server address>:6379",
    Password: "<redis password>",
    DB: <db number>,
})

// Consider pinging your Redis server here to ensure it's connected successfully

tm := leaky.NewThrottleManager(rc)

http.Handle("/api", tm.ThrottlingHandler(myHandler, <bucket size>, <leak rate per minute>, keyFunc, "bucket name"))

Bucket size

The number of requests a particular client can make before being rate limited.

Leak rate

The rate at which a filled bucket leaks, allowing more requests in that time period.
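The bucket size and leak rate interact in a simple way. The following is a standalone sketch of the general leaky bucket arithmetic, not this package's internal code; the function and values are illustrative:

```go
package main

import "fmt"

// remaining computes how full a leaky bucket is after some elapsed
// time, given its current fill and a leak rate per minute. This
// mirrors the general algorithm, not the package's internals.
func remaining(fill, ratePerMinute, elapsedSeconds int) int {
	leaked := ratePerMinute * elapsedSeconds / 60 // whole drops leaked so far
	fill -= leaked
	if fill < 0 {
		fill = 0
	}
	return fill
}

func main() {
	// A bucket of size 10 leaking 6 drops per minute loses one drop
	// every 10 seconds. A client that filled the bucket regains
	// 3 slots after 30 seconds.
	fmt.Println(remaining(10, 6, 30)) // 7 drops still in the bucket
}
```

With a unit drop size (see below), each request adds one drop, so "7 drops in the bucket" means the client can make 3 more requests immediately.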

Drop size

The drop size is currently fixed at one unit. Since the bucket size and leak rate can be varied per endpoint, I currently see no need to complicate this with variable drop sizes.

KeyFunc

The middleware provides an interface for supplying your own key function. This is used to identify a particular client by returning a string used to key the bucket values in the Redis database.

Identifying clients

How you identify each client you wish to rate-limit is up to you, and depends entirely on your requirements.

Preferably, your API uses a username or token, and you can use the key function to extract it from the relevant headers and construct a string to use as the key.

Failure state

By design, if the Redis instance is unavailable, the limiter fails open: the bucket counter is reset to its maximum size, allowing requests to continue.

This happens per request; if the server returns, the state is restored to its previous value (taking elapsed time into account).

Documentation

Overview

Package leaky provides a simple implementation of the leaky bucket algorithm. It can be used as a middleware or called manually, depending on requirements.

At the moment, the leaky bucket and the handler middleware are tightly coupled; this should be cleaned up to allow usage in scenarios where the handler middleware is not required or not desired.

The middleware and the leaky bucket are also tightly coupled to Redis as a cache. Although a Redis failure is non-fatal (fail-open), the dependency might not be required or desired, and should be decoupled appropriately.

Package leaky implements a leaky bucket rate limiting middleware.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Bucket

type Bucket struct {
	// contains filtered or unexported fields
}

Bucket is the instance of a leaky bucket

func (*Bucket) Add

func (b *Bucket) Add(count int, keyID string) bool

Add adds drops to the bucket if there is space

func (*Bucket) ServeHTTP

func (b *Bucket) ServeHTTP(w http.ResponseWriter, r *http.Request)

ServeHTTP implements http.Handler

type Handler

type Handler func(w http.ResponseWriter, r *http.Request)

Handler creates a new rate limiting leaky bucket handler

type KeyFunc

type KeyFunc func(r http.Request) string

KeyFunc allows an implementation to use a request value to identify a client

type ThrottleManager

type ThrottleManager struct {
	// contains filtered or unexported fields
}

ThrottleManager manages leaky buckets

func NewThrottleManager

func NewThrottleManager(redis *redis.Client) *ThrottleManager

NewThrottleManager creates a new instance of the bucket manager. It requires a Redis client for storing state.

func (*ThrottleManager) ThrottlingHandler

func (m *ThrottleManager) ThrottlingHandler(handler Handler, size int, rate int, keyFunc KeyFunc, bucketName string) *Bucket

ThrottlingHandler creates a new handler wrapper for use as an HTTP middleware

Directories

Path Synopsis
Example of using the throttling rate limiter. Start a Redis instance in Docker like this: docker run -itd --name redis -p 6379:6379 redis:alpine. Run this example with: go run throttling_api.go. Make some requests and watch them throttle: curl http://localhost:7777/api
