Rate

Overview

The Rate component provides functionality for rate limiting to protect resources from being overwhelmed by too many requests. It implements a token bucket rate limiter that allows a configurable number of requests per second with a configurable burst size.

Features

  • Token Bucket Algorithm: Implements the token bucket algorithm for efficient rate limiting
  • Configurable Rate Limits: Set requests per second and burst size to match your application needs
  • Immediate Rejection: Reject requests immediately when the rate limit is exceeded
  • Wait Mode: Optionally wait for tokens to become available instead of rejecting requests
  • Telemetry Integration: Built-in OpenTelemetry tracing for monitoring rate limiter operations
  • Context Awareness: Respects context cancellation for graceful shutdowns
  • Generic Functions: Type-safe execution with Go generics

Installation

go get github.com/abitofhelp/servicelib/rate

Quick Start

package main

import (
    "context"
    "fmt"

    "github.com/abitofhelp/servicelib/rate"
)

func main() {
    // Create a rate limiter with default configuration
    cfg := rate.DefaultConfig()
    options := rate.DefaultOptions().WithName("api-limiter")
    limiter := rate.NewRateLimiter(cfg, options)
    
    // Execute a function with rate limiting
    ctx := context.Background()
    result, err := rate.Execute(ctx, limiter, "get-user", func(ctx context.Context) (string, error) {
        // Your rate-limited operation here
        return "user data", nil
    })
    
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        return
    }
    
    fmt.Printf("Result: %s\n", result)
}

API Documentation

Core Types
Config

Configuration for the rate limiter.

type Config struct {
    // Enabled determines if the rate limiter is enabled
    Enabled bool
    // RequestsPerSecond is the number of requests allowed per second
    RequestsPerSecond int
    // BurstSize is the maximum number of requests allowed in a burst
    BurstSize int
}
Options

Additional options for the rate limiter.

type Options struct {
    // Logger is used for logging rate limiter operations
    Logger *logging.ContextLogger
    // Tracer is used for tracing rate limiter operations
    Tracer telemetry.Tracer
    // Name is the name of the rate limiter
    Name string
}
RateLimiter

The main rate limiter struct.

type RateLimiter struct {
    // Internal fields
}
Key Methods
NewRateLimiter

Creates a new rate limiter.

func NewRateLimiter(config Config, options Options) *RateLimiter
Allow

Checks if a request should be allowed based on the rate limit.

func (rl *RateLimiter) Allow() bool
Execute

Executes a function with rate limiting, returning an error if the rate limit is exceeded.

func Execute[T any](ctx context.Context, rl *RateLimiter, operation string, fn func(ctx context.Context) (T, error)) (T, error)
ExecuteWithWait

Executes a function with rate limiting, waiting for a token to become available if necessary.

func ExecuteWithWait[T any](ctx context.Context, rl *RateLimiter, operation string, fn func(ctx context.Context) (T, error)) (T, error)
Reset

Resets the rate limiter to its initial state.

func (rl *RateLimiter) Reset()

Examples

Currently, there are no dedicated examples for the rate package in the EXAMPLES directory. The following code snippets demonstrate common usage patterns:

Basic Rate Limiting
// Create a rate limiter that allows 100 requests per second with a burst of 50
cfg := rate.DefaultConfig().
    WithRequestsPerSecond(100).
    WithBurstSize(50)
options := rate.DefaultOptions().WithName("api-limiter")
limiter := rate.NewRateLimiter(cfg, options)

// Check if a request is allowed
if limiter.Allow() {
    // Process the request
} else {
    // Return rate limit exceeded error
}

Rate-Limited Function Execution
// Create a rate limiter
limiter := rate.NewRateLimiter(rate.DefaultConfig(), rate.DefaultOptions())

// Execute a function with rate limiting
ctx := context.Background()
result, err := rate.Execute(ctx, limiter, "get-data", func(ctx context.Context) ([]byte, error) {
    // Make an API call or perform other rate-limited operation
    return fetchData(ctx)
})

if err != nil {
    // Handle error (including rate limit exceeded)
    return err
}

// Use the result
processData(result)

Waiting for Rate Limit
// Create a rate limiter
limiter := rate.NewRateLimiter(rate.DefaultConfig(), rate.DefaultOptions())

// Execute a function with waiting for rate limit
ctx := context.Background()
result, err := rate.ExecuteWithWait(ctx, limiter, "process-item", func(ctx context.Context) (bool, error) {
    // This function will be executed once a token is available
    return processItem(ctx, item)
})

if err != nil {
    // Handle error (context cancellation, etc.)
    return err
}

// Use the result
if result {
    fmt.Println("Item processed successfully")
}

Best Practices

  1. Choose Appropriate Limits: Set rate limits based on your resource capacity and expected load
  2. Use Descriptive Operation Names: Provide meaningful operation names for better telemetry and logging
  3. Handle Rate Limit Errors: Properly handle rate limit exceeded errors in your application
  4. Consider Wait vs. Reject: Use ExecuteWithWait when appropriate to smooth out traffic spikes (see the sketch after this list)
  5. Monitor Rate Limiter Metrics: Use the integrated telemetry to monitor rate limiter performance
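
The choice between rejecting and waiting usually depends on the caller: fail fast on interactive paths so the client can be told to back off, and wait on background work where extra latency is acceptable. The sketch below illustrates both patterns; ctx and limiter are assumed to already exist, and runQuery and processBatch are illustrative helpers, not part of this package.

// Interactive path: reject immediately so the caller can respond with 429.
if _, err := rate.Execute(ctx, limiter, "interactive-query", func(ctx context.Context) (string, error) {
    return runQuery(ctx) // illustrative helper
}); err != nil {
    return err
}

// Background path: wait for a token to smooth out traffic spikes.
if _, err := rate.ExecuteWithWait(ctx, limiter, "batch-job", func(ctx context.Context) (bool, error) {
    return processBatch(ctx) // illustrative helper
}); err != nil {
    return err
}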

Troubleshooting

Common Issues
Rate Limiter Too Restrictive

If your rate limiter is rejecting too many requests, consider increasing the RequestsPerSecond or BurstSize parameters.

cfg := rate.DefaultConfig().
    WithRequestsPerSecond(200).  // Increase from default 100
    WithBurstSize(100)           // Increase from default 50

High Latency with ExecuteWithWait

If you're experiencing high latency with ExecuteWithWait, it might indicate that your rate limits are too low for your traffic. Consider increasing the limits or adding more capacity.
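
One way to bound that latency is to pass a context with a deadline, so the call gives up instead of queueing indefinitely. A minimal sketch (assumes the standard context and time packages; limiter is an existing rate limiter and fetchData is an illustrative helper):

// Bound how long ExecuteWithWait may block by using a context with a timeout.
ctx, cancel := context.WithTimeout(context.Background(), 200*time.Millisecond)
defer cancel()

result, err := rate.ExecuteWithWait(ctx, limiter, "fetch-data", func(ctx context.Context) (string, error) {
    return fetchData(ctx) // illustrative helper
})
if err != nil {
    // If the deadline expires while waiting for a token, the returned error
    // reflects the canceled context; treat it as a sign of insufficient capacity.
    return err
}
fmt.Println(result)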

Related Components

  • Errors - Error handling for rate limiting errors
  • Telemetry - Telemetry integration for rate limiter monitoring
  • Logging - Logging for rate limiter events

Contributing

Contributions to this component are welcome! Please see the Contributing Guide for more information.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Documentation

Overview

Package rate provides functionality for rate limiting to protect resources.

This package implements a token bucket rate limiter that helps protect services and resources from being overwhelmed by too many requests. Rate limiting is essential for maintaining system stability, preventing resource exhaustion, and ensuring fair usage of shared resources.

The rate limiter uses a token bucket algorithm (see the sketch after this list) where:

  • Tokens are added to the bucket at a fixed rate (RequestsPerSecond)
  • Each request consumes one token
  • If tokens are available, the request is allowed
  • If no tokens are available, the request is rejected or delayed
  • The bucket has a maximum capacity (BurstSize) to allow for bursts of traffic
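
To make the algorithm concrete, here is a minimal, standalone token bucket sketch. It is illustrative only, not this package's internal implementation; the tokenBucket type is hypothetical and the code assumes the standard math, sync, and time packages.

// tokenBucket is an illustrative token bucket, not this package's internals.
type tokenBucket struct {
    mu         sync.Mutex
    tokens     float64   // tokens currently in the bucket
    capacity   float64   // maximum number of tokens (BurstSize)
    refillRate float64   // tokens added per second (RequestsPerSecond)
    lastRefill time.Time // time of the last refill
}

// allow refills the bucket for the elapsed time and consumes one token if available.
func (b *tokenBucket) allow() bool {
    b.mu.Lock()
    defer b.mu.Unlock()

    now := time.Now()
    elapsed := now.Sub(b.lastRefill).Seconds()
    b.tokens = math.Min(b.capacity, b.tokens+elapsed*b.refillRate)
    b.lastRefill = now

    if b.tokens >= 1 {
        b.tokens--
        return true
    }
    return false
}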

Key features:

  • Configurable requests per second and burst size
  • Support for blocking and non-blocking rate limiting
  • Integration with OpenTelemetry for tracing
  • Comprehensive logging of rate limiting decisions
  • Thread-safe implementation for concurrent use

Example usage:

// Create a rate limiter with default configuration
limiter := rate.NewRateLimiter(rate.DefaultConfig(), rate.DefaultOptions())

// Execute a function with rate limiting (non-blocking)
result, err := rate.Execute(ctx, limiter, "fetch_data", func(ctx context.Context) (string, error) {
    // Operation to be rate limited
    return fetchData(ctx)
})

// Execute a function with rate limiting (blocking until allowed)
result, err := rate.ExecuteWithWait(ctx, limiter, "process_item", func(ctx context.Context) (string, error) {
    // Operation to be rate limited
    return processItem(ctx)
})

The package is designed to be flexible and can be used to rate limit various types of operations, including API calls, database queries, and resource-intensive computations.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func Execute

func Execute[T any](ctx context.Context, rl *RateLimiter, operation string, fn func(ctx context.Context) (T, error)) (T, error)

Execute executes a function with rate limiting. This is a generic function that works with any return type. If the rate limit is exceeded, it returns an error without executing the function. If the rate limit is not exceeded, it executes the function and returns its result.

This is a non-blocking operation - it will not wait for tokens to become available.

Type Parameters:

  • T: The return type of the function to execute.

Parameters:

  • ctx: The context for the operation. Can be used to cancel the operation.
  • rl: The rate limiter to use. If nil, the function is executed without rate limiting.
  • operation: A name for the operation being performed, used in logs and traces.
  • fn: The function to execute if the rate limit is not exceeded.

Returns:

  • The result of the function execution, or the zero value of T if the rate limit is exceeded.
  • An error if the rate limit is exceeded or if the function returns an error.

func ExecuteWithWait

func ExecuteWithWait[T any](ctx context.Context, rl *RateLimiter, operation string, fn func(ctx context.Context) (T, error)) (T, error)

ExecuteWithWait executes a function with rate limiting, waiting if necessary. This is a generic function that works with any return type. If the rate limit is exceeded, it will wait until a token becomes available before executing the function. If the context is canceled while waiting, it returns an error without executing the function.

This is a blocking operation - it will wait for tokens to become available.

Type Parameters:

  • T: The return type of the function to execute.

Parameters:

  • ctx: The context for the operation. Can be used to cancel the operation.
  • rl: The rate limiter to use. If nil, the function is executed without rate limiting.
  • operation: A name for the operation being performed, used in logs and traces.
  • fn: The function to execute when a token becomes available.

Returns:

  • The result of the function execution, or the zero value of T if the context is canceled.
  • An error if the context is canceled or if the function returns an error.

Types

type Config

type Config struct {
	// Enabled determines if the rate limiter is enabled.
	// If set to false, all requests will be allowed without rate limiting.
	Enabled bool

	// RequestsPerSecond is the number of requests allowed per second.
	// This determines the rate at which tokens are added to the bucket.
	RequestsPerSecond int

	// BurstSize is the maximum number of requests allowed in a burst.
	// This determines the maximum capacity of the token bucket.
	BurstSize int
}

Config contains rate limiter configuration parameters. It defines the behavior of the rate limiter, including whether it's enabled, how many requests are allowed per second, and the maximum burst size.

func DefaultConfig

func DefaultConfig() Config

DefaultConfig returns a default rate limiter configuration. The default configuration includes:

  • Enabled: true (rate limiting is enabled)
  • RequestsPerSecond: 100 (100 requests allowed per second)
  • BurstSize: 50 (maximum of 50 requests allowed in a burst)

Returns:

  • A Config instance with default values.

func (Config) WithBurstSize

func (c Config) WithBurstSize(burstSize int) Config

WithBurstSize sets the maximum number of requests allowed in a burst. This determines the maximum capacity of the token bucket. If a non-positive value is provided, it will be set to 1.

Parameters:

  • burstSize: The maximum number of requests allowed in a burst.

Returns:

  • A new Config instance with the updated BurstSize value.

func (Config) WithEnabled

func (c Config) WithEnabled(enabled bool) Config

WithEnabled sets whether the rate limiter is enabled. If enabled is set to false, all requests will be allowed without rate limiting.

Parameters:

  • enabled: A boolean indicating whether the rate limiter should be enabled.

Returns:

  • A new Config instance with the updated Enabled value.

func (Config) WithRequestsPerSecond

func (c Config) WithRequestsPerSecond(requestsPerSecond int) Config

WithRequestsPerSecond sets the number of requests allowed per second. This determines the rate at which tokens are added to the bucket. If a non-positive value is provided, it will be set to 1.

Parameters:

  • requestsPerSecond: The number of requests allowed per second.

Returns:

  • A new Config instance with the updated RequestsPerSecond value.

type Options

type Options struct {
	// Logger is used for logging rate limiter operations.
	// If nil, a no-op logger will be used.
	Logger *logging.ContextLogger

	// Tracer is used for tracing rate limiter operations.
	// It provides integration with OpenTelemetry for distributed tracing.
	Tracer telemetry.Tracer

	// Name is the name of the rate limiter.
	// This is useful for identifying the rate limiter in logs and traces.
	Name string
}

Options contains additional options for the rate limiter. These options are not directly related to the rate limiting behavior itself, but provide additional functionality like logging, tracing, and identification.

func DefaultOptions

func DefaultOptions() Options

DefaultOptions returns default options for rate limiter operations. The default options include:

  • No logger (a no-op logger will be used)
  • A no-op tracer (no OpenTelemetry integration)
  • Name: "default"

Returns:

  • An Options instance with default values.

func (Options) WithLogger

func (o Options) WithLogger(logger *logging.ContextLogger) Options

WithLogger sets the logger for the rate limiter. The logger is used to log rate limiter operations, such as when requests are allowed or rejected due to rate limiting.

Parameters:

  • logger: A ContextLogger instance for logging rate limiter operations.

Returns:

  • A new Options instance with the updated Logger value.

func (Options) WithName

func (o Options) WithName(name string) Options

WithName sets the name of the rate limiter. The name is used to identify the rate limiter in logs and traces, which is especially useful when multiple rate limiters are used in the same application.

Parameters:

  • name: A string identifier for the rate limiter.

Returns:

  • A new Options instance with the updated Name value.

func (Options) WithOtelTracer

func (o Options) WithOtelTracer(tracer trace.Tracer) Options

WithOtelTracer returns Options with an OpenTelemetry tracer. This allows users to opt-in to OpenTelemetry tracing if they need it. The tracer is used to create spans for rate limiter operations, which can be viewed in a distributed tracing system.

Parameters:

  • tracer: An OpenTelemetry trace.Tracer instance.

Returns:

  • A new Options instance with the provided OpenTelemetry tracer.
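
A sketch of attaching a tracer obtained from the global OpenTelemetry provider (assumes the go.opentelemetry.io/otel package for otel.Tracer; the tracer name "servicelib/rate" is arbitrary):

// Obtain a tracer from the globally registered TracerProvider and attach it.
tracer := otel.Tracer("servicelib/rate")

options := rate.DefaultOptions().
    WithName("api-limiter").
    WithOtelTracer(tracer)
limiter := rate.NewRateLimiter(rate.DefaultConfig(), options)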

type RateLimiter

type RateLimiter struct {
	// contains filtered or unexported fields
}

RateLimiter implements a token bucket rate limiter to protect resources from being overwhelmed by too many requests.

The token bucket algorithm works by maintaining a bucket of tokens that are added at a fixed rate (RequestsPerSecond). Each request consumes one token. If tokens are available, the request is allowed; otherwise, it is rejected or delayed until tokens become available.

This implementation is thread-safe and can be used concurrently from multiple goroutines.

func NewRateLimiter

func NewRateLimiter(config Config, options Options) *RateLimiter

NewRateLimiter creates a new rate limiter with the specified configuration and options. If the rate limiter is disabled (config.Enabled is false), a special no-op rate limiter is returned that allows all requests without rate limiting.

Parameters:

  • config: The configuration parameters for the rate limiter.
  • options: Additional options for the rate limiter, such as logging and tracing.

Returns:

  • A new RateLimiter instance configured according to the provided parameters.
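
For example, disabling the limiter through its configuration is a convenient way to switch rate limiting off in tests or local development (a minimal sketch):

// With Enabled set to false, every request passes through without rate limiting.
cfg := rate.DefaultConfig().WithEnabled(false)
limiter := rate.NewRateLimiter(cfg, rate.DefaultOptions().WithName("test-limiter"))

if limiter.Allow() { // always true while the limiter is disabled
    // process the request
}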

func (*RateLimiter) Allow

func (rl *RateLimiter) Allow() bool

Allow checks if a request should be allowed based on the rate limit. This is a non-blocking method that immediately returns whether the request is allowed or not. If the rate limiter is disabled or nil, all requests are allowed.

The method is thread-safe and can be called concurrently from multiple goroutines.

Returns:

  • true if the request is allowed (a token was available or rate limiting is disabled).
  • false if the request is not allowed (no tokens were available).
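
A typical use of Allow is an HTTP middleware that rejects excess requests with 429 Too Many Requests. This is a hedged sketch using the standard net/http package; the middleware itself is not part of this package:

// rateLimitMiddleware rejects requests when the limiter has no tokens available.
func rateLimitMiddleware(limiter *rate.RateLimiter, next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if !limiter.Allow() {
            http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
            return
        }
        next.ServeHTTP(w, r)
    })
}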

func (*RateLimiter) Reset

func (rl *RateLimiter) Reset()

Reset resets the rate limiter to its initial state. This method refills the token bucket to its maximum capacity (BurstSize) and resets the last refill time to the current time. This is useful for testing or when you want to clear any rate limiting history.

The method is thread-safe and can be called concurrently from multiple goroutines. If the rate limiter is nil, this method does nothing.
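
In tests, Reset can restore a full bucket between cases. A minimal sketch (the small RequestsPerSecond and BurstSize values are chosen only to make exhaustion easy to observe; assumes the standard testing package and that the bucket starts full):

func TestLimiterReset(t *testing.T) {
    cfg := rate.DefaultConfig().WithRequestsPerSecond(1).WithBurstSize(1)
    limiter := rate.NewRateLimiter(cfg, rate.DefaultOptions().WithName("test"))

    if !limiter.Allow() {
        t.Fatal("expected the first request to be allowed")
    }
    if limiter.Allow() {
        t.Fatal("expected the second request to be rejected (bucket exhausted)")
    }

    limiter.Reset() // refill the bucket to BurstSize
    if !limiter.Allow() {
        t.Fatal("expected a request to be allowed after Reset")
    }
}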
