ratelimit

package module
v1.2.0
Published: Nov 2, 2025 License: MIT Imports: 15 Imported by: 0

README

Gorly - Production-Grade Rate Limiting for Go


Gorly is a battle-tested, production-ready rate limiting library for Go with enterprise-grade security and reliability.

Why Gorly?

  • Production-Ready: 744 tests passing, 74% coverage, zero race conditions, security-hardened
  • Flexible: IP, API key, user, tenant, or custom identity extraction
  • Per-Endpoint Rate Limits: Automatically apply different limits to different routes (payments, admin, search) without custom logic (v1.2.0+)
  • Multi-Backend: In-memory (dev) or Redis (production) with the same API
  • Thread-Safe: Race detector tested, production-proven concurrency guarantees
  • Zero Surprises: Explicit configuration, predictable behavior, comprehensive error handling
  • Developer-Friendly: Simple API for basic cases, powerful features for complex requirements

What's New in v1.2.0

  • Pattern-Based Per-Endpoint Rate Limiting: Apply different rate limits to different endpoints automatically - no more manual scope extraction for every route
  • Prometheus Metrics for Routing: Built-in observability for pattern matching performance and routing decisions
  • Debug & Inspection Tools: ExplainMatch(), Inspect(), and ValidateConfiguration() for troubleshooting routing issues
  • Jump to Use Case 9 for a complete example

Quick Start

Installation
go get github.com/itsatony/gorly
30-Second Integration
import (
    "context"
    "fmt"
    "time"

    ratelimit "github.com/itsatony/gorly"
    "github.com/itsatony/gorly/stores"
)

// 1. Create store
store, _ := stores.NewMemoryStore(nil)
defer store.Close()

// 2. Create limiter (100 requests/hour)
limiter, _ := ratelimit.NewSimple(store, 100, time.Hour)
defer limiter.Close()

// 3. Check rate limits
ctx := context.Background()
identity := ratelimit.NewIPContext("192.168.1.1")
result, _ := limiter.Allow(ctx, identity)

if result.Allowed {
    // ✅ Process request
    fmt.Printf("Remaining: %d/%d\n", result.Remaining, result.Limit)
} else {
    // ❌ Rate limited - inform client
    fmt.Printf("Rate limited. Retry after: %.0fs\n", result.RetryAfter.Seconds())
}

Common Use Cases

Use Case 1: API with IP-Based Rate Limiting

Scenario: Public API that limits requests by IP address

store, _ := stores.NewMemoryStore(nil)
limiter, _ := ratelimit.NewSimple(store, 1000, time.Hour)

// In your HTTP handler:
func handleRequest(w http.ResponseWriter, r *http.Request) {
    // RemoteAddr is "ip:port"; strip the port so limits apply per IP, not per connection
    ip, _, err := net.SplitHostPort(r.RemoteAddr)
    if err != nil {
        ip = r.RemoteAddr // fall back to the raw value
    }
    identity := ratelimit.NewIPContext(ip)

    result, err := limiter.Allow(r.Context(), identity)
    if err != nil {
        http.Error(w, "Service unavailable", http.StatusServiceUnavailable)
        return
    }

    if !result.Allowed {
        w.Header().Set("Retry-After", fmt.Sprintf("%.0f", result.RetryAfter.Seconds()))
        http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
        return
    }

    // Process request
    w.Write([]byte("Success"))
}
Use Case 2: SaaS with Multi-Tier Rate Limiting

Scenario: SaaS platform with Free, Premium, and Enterprise tiers

// Create limiter with tier support
limiter, _ := ratelimit.NewBuilder().
    WithStore(store).
    WithTokenBucket().
    WithDefaultTiers(). // Configures standard tiers
    Build()

// Rate limit based on user's tier
func handleAPIRequest(w http.ResponseWriter, r *http.Request) {
    user := getCurrentUser(r)
    tier := getUserTier(user.ID) // "free", "premium", or "enterprise"

    identity := ratelimit.NewUserContext(user.ID, tier)
    result, err := limiter.Allow(r.Context(), identity)

    if err != nil {
        http.Error(w, "Service unavailable", http.StatusServiceUnavailable)
        return
    }

    // Add rate limit headers
    w.Header().Set("X-RateLimit-Limit", fmt.Sprintf("%d", result.Limit))
    w.Header().Set("X-RateLimit-Remaining", fmt.Sprintf("%d", result.Remaining))
    w.Header().Set("X-RateLimit-Reset", fmt.Sprintf("%d", result.ResetAt.Unix()))

    if !result.Allowed {
        w.Header().Set("Retry-After", fmt.Sprintf("%.0f", result.RetryAfter.Seconds()))
        http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
        return
    }

    // Process request
}

Configure custom tier limits:

// Create custom tier configuration
resolverConfig := ratelimit.NewResolverConfig()

// Free tier: 100 requests/hour
resolverConfig.AddTierLimit(ratelimit.TierFree,
    ratelimit.ScopeGlobal,
    ratelimit.NewLimitConfig(100, time.Hour, 10))

// Premium tier: 10,000 requests/hour with higher burst
resolverConfig.AddTierLimit(ratelimit.TierPremium,
    ratelimit.ScopeGlobal,
    ratelimit.NewLimitConfig(10000, time.Hour, 1000))

// Enterprise tier: 1,000,000 requests/hour
resolverConfig.AddTierLimit(ratelimit.TierEnterprise,
    ratelimit.ScopeGlobal,
    ratelimit.NewLimitConfig(1000000, time.Hour, 10000))

limiter, _ := ratelimit.NewWithTiers(store, resolverConfig)
Use Case 3: API with Different Limits per Endpoint

Scenario: Different rate limits for search vs. upload operations

// Create limiter with scope support
resolverConfig := ratelimit.NewResolverConfig()

// Search endpoints: 1000 requests/hour
resolverConfig.AddTierLimit(ratelimit.TierFree, "search",
    ratelimit.NewLimitConfig(1000, time.Hour, 100))

// Upload endpoints: 50 uploads/hour (more expensive operation)
resolverConfig.AddTierLimit(ratelimit.TierFree, "upload",
    ratelimit.NewLimitConfig(50, time.Hour, 5))

// Analytics endpoints: 10 requests/hour (heavy queries)
resolverConfig.AddTierLimit(ratelimit.TierFree, "analytics",
    ratelimit.NewLimitConfig(10, time.Hour, 1))

limiter, _ := ratelimit.NewWithTiers(store, resolverConfig)

// In your handlers:
func handleSearch(w http.ResponseWriter, r *http.Request) {
    identity := ratelimit.NewSimpleContext(userID, "search", userTier, nil)
    result, _ := limiter.Allow(r.Context(), identity)
    // ... handle result
}

func handleUpload(w http.ResponseWriter, r *http.Request) {
    identity := ratelimit.NewSimpleContext(userID, "upload", userTier, nil)
    result, _ := limiter.Allow(r.Context(), identity)
    // ... handle result
}
Use Case 4: Distributed System with Redis

Scenario: Microservices that share rate limits across instances

import "github.com/itsatony/gorly/stores"

// Create Redis store (shared across all instances)
store, err := stores.NewRedisStore(&stores.RedisStoreConfig{
    Addr:         "redis:6379",
    Password:     os.Getenv("REDIS_PASSWORD"),
    DB:           0,
    MaxRetries:   3,
    DialTimeout:  5 * time.Second,
    ReadTimeout:  3 * time.Second,
    WriteTimeout: 3 * time.Second,
})
if err != nil {
    log.Fatal(err)
}
defer store.Close()

// Check store health
if err := store.Health(context.Background()); err != nil {
    log.Fatal("Redis store unhealthy:", err)
}

// Create limiter (rate limits now shared across all instances)
limiter, _ := ratelimit.NewSimple(store, 10000, time.Hour)
defer limiter.Close()

// Use normally - limits apply across all service instances
Use Case 5: API Key-Based Rate Limiting

Scenario: API that authenticates via API keys with tier-based limits

// In your API key validation middleware
func extractIdentity(r *http.Request) (ratelimit.Identity, error) {
    apiKey := r.Header.Get("X-API-Key")
    if apiKey == "" {
        return nil, errors.New("missing API key")
    }

    // Look up API key details from database
    keyInfo, err := db.GetAPIKey(apiKey)
    if err != nil {
        return nil, err
    }

    // Create identity with tier based on API key
    identity := ratelimit.NewAPIKeyContext(apiKey, keyInfo.Tier)
    return identity, nil
}

// In your handler
func handleAPIRequest(w http.ResponseWriter, r *http.Request) {
    identity, err := extractIdentity(r)
    if err != nil {
        http.Error(w, "Unauthorized", http.StatusUnauthorized)
        return
    }

    result, err := limiter.Allow(r.Context(), identity)
    if err != nil {
        http.Error(w, "Service unavailable", http.StatusServiceUnavailable)
        return
    }

    if !result.Allowed {
        http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
        return
    }

    // Process request
}
Use Case 6: Batch Operations (Consuming Multiple Tokens)

Scenario: Batch upload that should consume multiple tokens at once

func handleBatchUpload(w http.ResponseWriter, r *http.Request) {
    // Parse batch size
    files := parseUploadFiles(r)
    numFiles := int64(len(files))

    // Create identity
    identity := ratelimit.NewUserContext(userID, userTier)

    // Check if user can upload this many files
    result, err := limiter.AllowN(r.Context(), identity, numFiles)
    if err != nil {
        http.Error(w, "Service unavailable", http.StatusServiceUnavailable)
        return
    }

    if !result.Allowed {
        msg := fmt.Sprintf("Cannot upload %d files. You have %d requests remaining. Retry after %.0fs",
            numFiles, result.Remaining, result.RetryAfter.Seconds())
        http.Error(w, msg, http.StatusTooManyRequests)
        return
    }

    // Process batch upload (numFiles tokens consumed)
    processBatchUpload(files)
    w.Write([]byte(fmt.Sprintf("Uploaded %d files. Remaining quota: %d", numFiles, result.Remaining)))
}
Use Case 7: Pre-Flight Checks (Check Without Consuming)

Scenario: Show users their current rate limit status before they take action

func handleQuotaStatus(w http.ResponseWriter, r *http.Request) {
    identity := ratelimit.NewUserContext(userID, userTier)

    // Check current status WITHOUT consuming a token
    result, err := limiter.Check(r.Context(), identity)
    if err != nil {
        http.Error(w, "Service unavailable", http.StatusServiceUnavailable)
        return
    }

    // Return quota information
    response := map[string]interface{}{
        "limit":         result.Limit,
        "used":          result.Used,
        "remaining":     result.Remaining,
        "reset_at":      result.ResetAt.Format(time.RFC3339),
        "window":        result.Window.String(),
        "quota_percent": float64(result.Used) / float64(result.Limit) * 100,
    }

    json.NewEncoder(w).Encode(response)
}
Use Case 8: HTTP Middleware Integration

Scenario: Automatic rate limiting for all HTTP endpoints

import "github.com/itsatony/gorly/middleware"

// Create limiter
store, _ := stores.NewMemoryStore(nil)
limiter, _ := ratelimit.NewSimple(store, 1000, time.Hour)

// Create HTTP middleware
mw, err := middleware.NewHTTPMiddleware(&middleware.HTTPMiddlewareConfig{
    Limiter: limiter,

    // Extract identity from request
    ContextExtractor: func(r *http.Request) (ratelimit.Identity, error) {
        // Option 1: Use IP address
        return ratelimit.NewIPContext(r.RemoteAddr), nil

        // Option 2: Use API key from header
        // apiKey := r.Header.Get("X-API-Key")
        // tier := lookupTier(apiKey)
        // return ratelimit.NewAPIKeyContext(apiKey, tier), nil

        // Option 3: Use authenticated user
        // user := getUserFromSession(r)
        // return ratelimit.NewUserContext(user.ID, user.Tier), nil
    },

    // Add standard rate limit headers to all responses
    AddHeaders: true,

    // Optional: Custom rate limit response
    CustomResponse: &middleware.HTTPRateLimitResponse{
        StatusCode: http.StatusTooManyRequests,
        Headers: map[string]string{
            "X-Custom-Error": "Rate-Limited",
        },
        Body: map[string]interface{}{
            "error":   "rate_limit_exceeded",
            "message": "Too many requests. Please slow down.",
        },
    },
})
if err != nil {
    log.Fatal(err)
}

// Apply middleware to routes
mux := http.NewServeMux()
mux.Handle("/api/", mw.Middleware(http.HandlerFunc(apiHandler)))
mux.Handle("/search", mw.Middleware(http.HandlerFunc(searchHandler)))

// Start server
http.ListenAndServe(":8080", mux)
Use Case 9: Pattern-Based Per-Endpoint Rate Limiting (v1.2.0+)

Scenario: Different rate limits for different API endpoints using intelligent pattern matching

Pattern-based routing allows you to map request paths to specific rate limit scopes using exact matches, prefixes, globs, or regex patterns. This eliminates manual scope extraction logic and provides fine-grained control over endpoint-specific limits.

import (
    ratelimit "github.com/itsatony/gorly"
    "github.com/itsatony/gorly/middleware"
    "github.com/itsatony/gorly/routing"
    "github.com/itsatony/gorly/stores"
)

// 1. Configure pattern-based route resolver
resolver := routing.NewBuilder().
    // Exact match - highest priority for critical endpoints
    AddExact("/api/payment/process", "payment_critical", 100).

    // Glob patterns - match path segments
    AddGlob("/api/payment/*", "payment", 50).           // Single segment
    AddGlob("/api/admin/**", "admin", 80).              // Multiple segments

    // Regex patterns - flexible matching (e.g., API versioning)
    AddRegex(`^/api/v[0-9]+/.*`, "api_versioned", 30).

    // Prefix matching - catch-all for API routes
    AddPrefix("/api/", "api_default", 10).

    MustBuild()

// 2. Create limiter with per-scope configurations
store, _ := stores.NewMemoryStore(nil)
config := ratelimit.DefaultConfig()
config.Store = store

// Configure different limits for each scope
config.ScopeLimits = map[string]ratelimit.RateLimit{
    "payment_critical": {RateString: "100/minute", BurstSize: 10},
    "payment":          {RateString: "500/hour", BurstSize: 50},
    "admin":            {RateString: "1000/hour", BurstSize: 100},
    "api_versioned":    {RateString: "5000/hour", BurstSize: 200},
    "api_default":      {RateString: "10000/hour", BurstSize: 500},
}

limiter, _ := ratelimit.NewRateLimiter(config)
defer limiter.Close()

// 3. Create route-aware context extractor
extractor := middleware.RouteAwareContextExtractor(
    resolver,
    func(r *http.Request) (ratelimit.Identity, error) {
        // Extract identity (IP, API key, user, etc.)
        apiKey := r.Header.Get("X-API-Key")
        tier := lookupUserTier(apiKey)
        return ratelimit.NewAPIKeyContext(apiKey, tier), nil
    },
    "global", // default scope if no pattern matches
)

// 4. Use in HTTP middleware
mw, _ := middleware.NewHTTPMiddleware(&middleware.HTTPMiddlewareConfig{
    Limiter:          limiter,
    ContextExtractor: extractor,
    AddHeaders:       true,
})

// 5. Apply to your routes
mux := http.NewServeMux()
mux.Handle("/", mw.Middleware(yourHandler))

http.ListenAndServe(":8080", mux)

How Pattern Resolution Works:

  1. Incoming request: POST /api/payment/process
  2. Pattern matching (priority order):
    • ✅ Exact match /api/payment/process → "payment_critical" scope (priority 100)
    • Glob /api/payment/* → matches, but at lower priority (50)
    • Prefix /api/ → matches, but at lower priority (10)
  3. Rate limit applied: 100 requests/minute with 10 burst
  4. Request proceeds if within limit

Key Benefits:

  • Declarative Configuration: Define patterns once, no per-route logic
  • Priority-Based: Higher priority patterns override lower ones
  • Performance: Exact matches are O(1), glob/regex optimized with caching
  • Security: Built-in ReDoS protection with configurable timeouts
  • Observability: Optional Prometheus metrics for pattern matching

Complete Example: See examples/pattern-routing/ for a full working HTTP server with metrics, debug tools, and multiple pattern types.

Configuration Patterns

limiter, err := ratelimit.NewBuilder().
    WithStore(redisStore).              // Set storage backend
    WithTokenBucket().                  // Algorithm: token bucket (allows bursts)
    WithLimit(1000, time.Hour).         // Base limit: 1000/hour
    WithBurst(100).                     // Allow bursts up to 100
    WithLogger(myLogger).               // Custom logging
    Build()
Using Rate Strings
// Instead of WithLimit(50, 5*time.Minute)
limiter, _ := ratelimit.NewBuilder().
    WithStore(store).
    WithLimitString("50/5m").          // Cleaner syntax
    Build()

// Supported formats:
// "1000/1h"  - 1000 per hour
// "100/1m"   - 100 per minute
// "10/1s"    - 10 per second
// "5000/1d"  - 5000 per day
Preset Configurations

For common use cases, use built-in presets:

// Public REST API (moderate limits with bursts)
apiLimiter, _ := ratelimit.NewForAPI(store)

// Web application (higher limits for UI interactions)
webLimiter, _ := ratelimit.NewForWebApp(store)

// Microservice (very high limits for internal services)
serviceLimiter, _ := ratelimit.NewForMicroservice(store)

// Public API with strict limits (prevents abuse)
publicLimiter, _ := ratelimit.NewForPublicAPI(store)

// Multi-tenant SaaS (tier-based limits)
saasLimiter, _ := ratelimit.NewForSaaS(store)

Rate Limiting Algorithms

Token Bucket

Best for: APIs that should allow occasional bursts while maintaining an average rate

limiter, _ := ratelimit.NewBuilder().
    WithTokenBucket().
    WithLimit(100, time.Minute).       // 100 tokens per minute
    WithBurst(20).                     // Allow bursts up to 20
    Build()

Behavior:

  • Tokens refill at a constant rate (100/minute)
  • Can accumulate up to burst size (20) for sudden spikes
  • Smooth handling of bursty traffic
  • Production-proven, widely used
Sliding Window

Best for: Strict fairness without allowing bursts

limiter, _ := ratelimit.NewBuilder().
    WithSlidingWindow().
    WithLimit(100, time.Minute).
    Build()

Behavior:

  • Precise rate limiting over sliding time windows
  • No burst allowance
  • More computationally expensive than token bucket
  • Best for scenarios requiring exact rate enforcement
Algorithm Comparison

Algorithm      | Burst Support | Fairness  | Performance | Use Case
Token Bucket   | ✅ Yes        | Good      | Excellent   | General APIs, most use cases
Sliding Window | ❌ No         | Excellent | Good        | Strict rate enforcement, billing APIs
Fixed Window   | ⚠️ Boundary   | Fair      | Excellent   | Simple counters, analytics
Leaky Bucket   | ⚠️ Queue      | Good      | Good        | Message queues, job processing

Storage Backends

In-Memory Store (Development & Testing)
store, err := stores.NewMemoryStore(&stores.MemoryStoreConfig{
    CleanupInterval: 5 * time.Minute,    // How often to clean expired keys
    MaxKeys:         10000,               // Maximum keys before eviction
})

Pros:

  • Zero external dependencies
  • Extremely fast (<1ms latency)
  • Perfect for testing and development

Cons:

  • Not distributed (each instance has separate limits)
  • Lost on restart
  • Not suitable for production multi-instance deployments
Redis Store (Production)
store, err := stores.NewRedisStore(&stores.RedisStoreConfig{
    Addr:            "localhost:6379",
    Password:        "your-password",
    DB:              0,

    // Connection pool settings
    PoolSize:        10,
    MinIdleConns:    2,

    // Timeout settings
    DialTimeout:     5 * time.Second,
    ReadTimeout:     3 * time.Second,
    WriteTimeout:    3 * time.Second,

    // Reliability settings
    MaxRetries:      3,
    MinRetryBackoff: 8 * time.Millisecond,
    MaxRetryBackoff: 512 * time.Millisecond,

    // TLS configuration (for production)
    TLSConfig:       &tls.Config{...},
})

Pros:

  • Distributed rate limiting across multiple instances
  • Persistent across restarts
  • Battle-tested in production
  • High performance with proper configuration

Cons:

  • External dependency (Redis server required)
  • Network latency (~1-5ms for local Redis, more for remote)
  • Requires monitoring and maintenance

Production Best Practices:

  • Use Redis Sentinel or Cluster for high availability
  • Enable persistence (AOF or RDB) for durability
  • Monitor Redis performance metrics
  • Set appropriate connection pool sizes
  • Use TLS for production deployments

Error Handling

Distinguishing Rate Limits from Errors

Critical: Always distinguish between rate limiting and operational errors.

result, err := limiter.Allow(ctx, identity)

if err != nil {
    // OPERATIONAL ERROR: Store unavailable, network issue, invalid config
    // Action: Log error, return 503 Service Unavailable
    log.Error("Rate limiter error", "error", err)
    return http.StatusServiceUnavailable
}

if !result.Allowed {
    // RATE LIMIT EXCEEDED: User hit their quota (normal behavior)
    // Action: Return 429 Too Many Requests with Retry-After
    w.Header().Set("Retry-After", fmt.Sprintf("%.0f", result.RetryAfter.Seconds()))
    return http.StatusTooManyRequests
}

// SUCCESS: Process request
Handling Store Failures Gracefully
// Health check before critical operations
if err := limiter.Health(ctx); err != nil {
    log.Warn("Rate limiter unhealthy, bypassing", "error", err)
    // Option 1: Fail open (allow requests but log)
    // Option 2: Fail closed (reject requests)
    // Option 3: Use fallback in-memory limiter
}

// With context timeout
ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
defer cancel()

result, err := limiter.Allow(ctx, identity)
if err != nil {
    // Handle timeout or error
}
Error Types

Gorly provides structured error types for better error handling:

if rateLimitErr, ok := err.(*ratelimit.RateLimitError); ok {
    switch rateLimitErr.Type {
    case ratelimit.ErrorTypeStore:
        // Storage backend error (Redis down, etc.)
        log.Error("Storage error", "error", rateLimitErr)

    case ratelimit.ErrorTypeAlgorithm:
        // Algorithm error (should be rare)
        log.Error("Algorithm error", "error", rateLimitErr)

    case ratelimit.ErrorTypeConfig:
        // Configuration error (invalid limits, etc.)
        log.Error("Config error", "error", rateLimitErr)

    case ratelimit.ErrorTypeNetwork:
        // Network error (Redis connection timeout, etc.)
        log.Warn("Network error", "error", rateLimitErr)

    case ratelimit.ErrorTypeTimeout:
        // Operation timeout
        log.Warn("Timeout", "error", rateLimitErr)
    }
}

Production Deployment

Health Checks
// Add health check endpoint
http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
    ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
    defer cancel()

    if err := limiter.Health(ctx); err != nil {
        w.WriteHeader(http.StatusServiceUnavailable)
        json.NewEncoder(w).Encode(map[string]string{
            "status": "unhealthy",
            "error":  err.Error(),
        })
        return
    }

    json.NewEncoder(w).Encode(map[string]string{
        "status": "healthy",
    })
})
Graceful Shutdown
// Create limiter
limiter, _ := ratelimit.NewSimple(store, 1000, time.Hour)

// Ensure cleanup on shutdown
defer limiter.Close()

// Or with graceful shutdown:
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)

<-sigChan
log.Info("Shutting down...")

ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

// Close limiter (flushes any pending operations)
if err := limiter.Close(); err != nil {
    log.Error("Error closing limiter", "error", err)
}
Monitoring & Observability
// Enable metrics collection
limiter, _ := ratelimit.NewBuilder().
    WithStore(store).
    WithLimit(1000, time.Hour).
    Build()

// Get statistics for monitoring
identity := ratelimit.NewUserContext(userID, tier)
result, _ := limiter.Stats(ctx, identity)

// Export metrics
metrics := map[string]interface{}{
    "limit":           result.Limit,
    "used":            result.Used,
    "remaining":       result.Remaining,
    "quota_percent":   float64(result.Used) / float64(result.Limit) * 100,
    "reset_at":        result.ResetAt,
}

// Log to structured logging / metrics system
log.Info("Rate limit stats", "metrics", metrics)
Performance Tuning

Memory Store:

store, _ := stores.NewMemoryStore(&stores.MemoryStoreConfig{
    CleanupInterval: 1 * time.Minute,     // More frequent for high traffic
    MaxKeys:         100000,               // Higher limit for large user bases
})

Redis Store:

store, _ := stores.NewRedisStore(&stores.RedisStoreConfig{
    PoolSize:        100,                  // Higher pool for high concurrency
    MinIdleConns:    10,                   // Keep connections warm
    ReadTimeout:     100 * time.Millisecond,  // Aggressive timeout
    WriteTimeout:    100 * time.Millisecond,
    MaxRetries:      2,                    // Fail fast
})

Testing

Unit Testing with In-Memory Store
func TestRateLimiting(t *testing.T) {
    // Create test limiter
    store, _ := stores.NewMemoryStore(nil)
    defer store.Close()

    limiter, _ := ratelimit.NewSimple(store, 5, time.Second)
    defer limiter.Close()

    ctx := context.Background()
    identity := ratelimit.NewIPContext("192.168.1.1")

    // First 5 requests should succeed
    for i := 0; i < 5; i++ {
        result, err := limiter.Allow(ctx, identity)
        assert.NoError(t, err)
        assert.True(t, result.Allowed, "Request %d should be allowed", i+1)
    }

    // 6th request should be denied
    result, err := limiter.Allow(ctx, identity)
    assert.NoError(t, err)
    assert.False(t, result.Allowed, "Request should be rate limited")
    assert.Greater(t, result.RetryAfter.Seconds(), 0.0)
}
Integration Testing with Redis
func TestRedisRateLimiting(t *testing.T) {
    // Connect to test Redis instance
    store, err := stores.NewRedisStore(&stores.RedisStoreConfig{
        Addr: "localhost:6379",
        DB:   15, // Use separate DB for tests
    })
    require.NoError(t, err)
    defer store.Close()

    // Test rate limiting
    limiter, _ := ratelimit.NewSimple(store, 10, time.Minute)
    defer limiter.Close()

    // Your tests here
}
Resetting Limits in Tests
func TestWithReset(t *testing.T) {
    store, _ := stores.NewMemoryStore(nil)
    limiter, _ := ratelimit.NewSimple(store, 5, time.Second)
    defer limiter.Close()

    identity := ratelimit.NewIPContext("test-ip")

    // Consume all tokens
    for i := 0; i < 5; i++ {
        limiter.Allow(context.Background(), identity)
    }

    // Reset for next test
    err := limiter.Reset(context.Background(), identity)
    assert.NoError(t, err)

    // Should work again
    result, _ := limiter.Allow(context.Background(), identity)
    assert.True(t, result.Allowed)
}

Security Features (v1.1.0+)

Gorly v1.1.0 includes enterprise-grade security hardening:

DOS Attack Protection

Rate String Parser Protection:

  • Input length validation (max 32 characters)
  • Strict regex patterns (prevents malformed input)
  • Integer overflow protection (safe arithmetic)
  • Zero-value rejection (prevents divide-by-zero)

Key Length Validation:

  • Maximum key length: 256 bytes (prevents memory exhaustion)
  • UTF-8 aware (measures bytes, not characters)
  • Redis-compatible key lengths
Thread Safety Guarantees

Result Object Safety:

  • Safe concurrent reads of immutable fields
  • Thread-safe metadata operations
  • Documented safe/unsafe operation patterns
  • Race detector tested (zero race conditions)

Statistics Integrity:

  • Value clamping (0 ≤ Remaining ≤ Limit)
  • Invariant enforcement (prevents impossible states)
  • Atomic operations (consistency guarantees)
HTTP API Compliance

Standard Headers (always present in 429 responses):

  • X-RateLimit-Limit: Maximum requests allowed
  • X-RateLimit-Remaining: Requests remaining
  • X-RateLimit-Reset: Unix timestamp of limit reset
  • Retry-After: Seconds until retry allowed

Even with custom response handlers, standard headers are always included.

Performance Characteristics

Throughput:

  • In-Memory: 500,000+ requests/second
  • Redis (local): 50,000+ requests/second
  • Redis (remote): Depends on network latency

Latency (p99):

  • In-Memory: <1ms
  • Redis (local): <5ms
  • Redis (remote): <50ms (typical)

Memory:

  • ~200 bytes per tracked identity (in-memory)
  • ~150 bytes per identity (Redis)

Concurrency:

  • Fully thread-safe
  • Zero race conditions (race detector tested)
  • Scales linearly with CPU cores

API Reference

Core Types
// RateLimiter interface - main entry point
type RateLimiter interface {
    Allow(ctx context.Context, identity Identity) (*Result, error)
    AllowN(ctx context.Context, identity Identity, n int64) (*Result, error)
    Check(ctx context.Context, identity Identity) (*Result, error)
    Stats(ctx context.Context, identity Identity) (*Result, error)
    Reset(ctx context.Context, identity Identity) error
    Health(ctx context.Context) error
    Close() error
}

// Result - outcome of rate limit check
type Result struct {
    Allowed   bool          // Whether request is allowed
    Limit     int64         // Maximum requests allowed
    Remaining int64         // Requests remaining in window
    Used      int64         // Requests used in window
    RetryAfter time.Duration // Time until next allowed request
    ResetAt   time.Time     // When limit resets
    Window    time.Duration // Time window for limit
    // ... additional fields
}

// Identity - represents the rate limit subject
type Identity interface {
    Identity() string           // Unique identifier
    Scope() string             // Rate limit scope
    Tier() string              // Service tier
    Metadata() map[string]interface{}
    Key() string               // Storage key
}
Constructors
// Simple constructors
NewSimple(store, limit, window) (RateLimiter, error)
NewWithConfig(config) (RateLimiter, error)
NewWithTiers(store, resolverConfig) (RateLimiter, error)

// Builder pattern
NewBuilder() *Builder

// Preset configurations
NewForAPI(store) (RateLimiter, error)
NewForWebApp(store) (RateLimiter, error)
NewForMicroservice(store) (RateLimiter, error)
NewForPublicAPI(store) (RateLimiter, error)
NewForSaaS(store) (RateLimiter, error)

// Identity constructors
NewIPContext(ip) Identity
NewUserContext(userID, tier) Identity
NewAPIKeyContext(apiKey, tier) Identity
NewTenantContext(tenantID, tier) Identity
NewSimpleContext(identity, scope, tier, metadata) Identity

// Builder for complex identities
NewContextBuilder() *ContextBuilder
Constants
// Tiers
const (
    TierFree       = "free"
    TierPremium    = "premium"
    TierEnterprise = "enterprise"
)

// Scopes
const (
    ScopeGlobal    = "global"
    ScopeAPI       = "api"
    ScopeSearch    = "search"
    ScopeUpload    = "upload"
    ScopeMetadata  = "metadata"
    ScopeAnalytics = "analytics"
    ScopeAdmin     = "admin"
)

Migration from v1.0.0 to v1.1.0

Breaking Changes

None! v1.1.0 is fully backward compatible.

New Features
  1. Enhanced Security: DOS protection, overflow guards, key validation
  2. Thread Safety: Comprehensive Result safety guarantees
  3. Statistics Integrity: Value clamping, invariant enforcement
  4. HTTP Compliance: Standard headers always present
// v1.0.0 (still works)
limiter, _ := ratelimit.NewSimple(store, 1000, time.Hour)

// v1.1.0 (enhanced - recommended)
limiter, _ := ratelimit.NewBuilder().
    WithStore(store).
    WithLimitString("1000/1h").  // Safer parsing
    WithTokenBucket().
    Build()

Examples

See the examples/ directory for complete working examples:

  • basic/ - Simple rate limiting fundamentals
  • builder/ - Builder pattern and configurations
  • middleware/ - HTTP middleware integration
  • tiers/ - Multi-tier SaaS rate limiting
  • pattern-routing/ - ⭐ NEW in v1.2.0: Advanced pattern-based per-endpoint rate limiting with Prometheus metrics and debug tools

Run examples:

cd examples/basic && go run main.go
cd examples/middleware && go run main.go
cd examples/pattern-routing && go run main.go  # New: Pattern-based routing

Troubleshooting

Common Issues

Issue: "Rate limit always returns allowed"

// ❌ Wrong: Using different identity objects
id1 := ratelimit.NewIPContext("192.168.1.1")
id2 := ratelimit.NewIPContext("192.168.1.1")  // Different object!

// ✅ Correct: Reuse same identity or ensure same key
identity := ratelimit.NewIPContext("192.168.1.1")
limiter.Allow(ctx, identity)
limiter.Allow(ctx, identity)  // Same identity = same limit

Issue: "Redis connection timeout"

// ✅ Set appropriate timeouts
store, _ := stores.NewRedisStore(&stores.RedisStoreConfig{
    DialTimeout:  5 * time.Second,   // Connection establishment
    ReadTimeout:  3 * time.Second,   // Read operations
    WriteTimeout: 3 * time.Second,   // Write operations
    MaxRetries:   3,                 // Retry failed operations
})

Issue: "Memory store grows unbounded"

// ✅ Configure cleanup
store, _ := stores.NewMemoryStore(&stores.MemoryStoreConfig{
    CleanupInterval: 5 * time.Minute,  // Regular cleanup
    MaxKeys:         10000,             // Maximum keys
})

Contributing

Contributions welcome! Please read CONTRIBUTING.md first.

  1. Fork the repository
  2. Create feature branch (git checkout -b feature/amazing-feature)
  3. Run tests (go test ./... -race -cover)
  4. Commit changes (git commit -m 'Add amazing feature')
  5. Push to branch (git push origin feature/amazing-feature)
  6. Open Pull Request

License

MIT License - see LICENSE for details.

Acknowledgments

Built with production experience from scaling rate limiting across thousands of services.

Special thanks to the Go community for excellent testing tools and Redis for providing a rock-solid distributed store.

Documentation

Overview

interfaces.go

Package ratelimit provides production-ready rate limiting for Go applications.

Gorly is designed for simplicity and performance, with a clean API that scales from prototype to production without changes. It supports multiple storage backends (in-memory for development, Redis for production), flexible identity extraction (IP-based, API key-based, user-based), and comprehensive HTTP middleware.

Quick Start

Get started with a one-liner:

import "github.com/itsatony/gorly/middleware"
http.Handle("/api/", middleware.QuickLimit(100, time.Minute, yourHandler))

Or build a custom limiter in 3 lines:

store, _ := stores.NewMemoryStore(nil)
limiter, _ := ratelimit.NewSimple(store, 100, time.Hour)
result, _ := limiter.Allow(ctx, ratelimit.NewIPContext("192.168.1.1"))

Core Concepts

Identity: Who/what is being rate limited (IP address, user ID, API key, etc.)

Scope: What operation is being rate limited (global, search, upload, etc.)

Tier: The user's subscription level (free, premium, enterprise)

Store: Where rate limit state is persisted (memory or Redis)

Production Features

  • Thread-safe with race detector testing
  • Proper error handling (distinguish errors from rate limits)
  • Redis-backed for distributed systems
  • Configurable algorithms (token bucket, sliding window)
  • Multi-tier support (different limits per subscription level)
  • Scope-based limits (different limits per operation type)

Architecture

Gorly follows a clean architecture with clear separation:

RateLimiter (interface) -> Algorithm -> Store
     ↓
  Identity (who is making the request)
     ↓
  Result (allowed/denied + metadata)

See the README for detailed examples and recipes.

Package ratelimit also exposes version information via the go-version package.

Index

Constants

View Source
const (
	// IDPrefixRateLimiter is the prefix for rate limiter instance IDs
	IDPrefixRateLimiter = "rl"

	// IDPrefixContext is the prefix for rate limit context IDs
	IDPrefixContext = "rlc"

	// IDPrefixResult is the prefix for result IDs (if needed)
	IDPrefixResult = "rlr"
)
View Source
const (
	// StrategyTokenBucket implements token bucket algorithm
	// Allows burst traffic, tokens regenerate over time
	StrategyTokenBucket = "token_bucket"

	// StrategySlidingWindow implements sliding window algorithm
	// More precise than token bucket, no burst allowance
	StrategySlidingWindow = "sliding_window"

	// StrategyFixedWindow implements fixed window algorithm
	// Simple window-based counting, fast performance
	StrategyFixedWindow = "fixed_window"

	// StrategyLeakyBucket implements leaky bucket algorithm
	// Smooth rate limiting, queue-based processing
	StrategyLeakyBucket = "leaky_bucket"
)
View Source
const (
	// StoreTypeMemory uses in-memory storage (default)
	// Good for: single instance, development, testing
	StoreTypeMemory = "memory"

	// StoreTypeRedis uses Redis for distributed storage
	// Good for: multi-instance, high availability, shared limits
	StoreTypeRedis = "redis"

	// StoreTypeMock uses mock store for testing
	StoreTypeMock = "mock"
)
View Source
const (
	// TierFree is the free service tier
	TierFree = "free"

	// TierPremium is the premium/paid service tier
	TierPremium = "premium"

	// TierEnterprise is the enterprise service tier
	TierEnterprise = "enterprise"

	// TierInternal is for internal/system requests
	TierInternal = "internal"

	// TierDefault is used when no tier is specified
	TierDefault = "default"
)
View Source
const (
	// ScopeGlobal applies to all operations
	ScopeGlobal = "global"

	// ScopeAPI applies to API requests
	ScopeAPI = "api"

	// ScopeSearch applies to search operations
	ScopeSearch = "search"

	// ScopeUpload applies to file upload operations
	ScopeUpload = "upload"

	// ScopeDownload applies to file download operations
	ScopeDownload = "download"

	// ScopeDatabase applies to database query operations
	ScopeDatabase = "database"

	// ScopeEvents applies to event processing
	ScopeEvents = "events"

	// ScopeAdmin applies to admin operations
	ScopeAdmin = "admin"

	// ScopeMetadata applies to metadata operations
	ScopeMetadata = "metadata"

	// ScopeAnalytics applies to analytics operations
	ScopeAnalytics = "analytics"
)
View Source
const (
	// MetadataKeyIP stores the client IP address
	MetadataKeyIP = "ip"

	// MetadataKeyUserAgent stores the client user agent
	MetadataKeyUserAgent = "user_agent"

	// MetadataKeyResource stores the resource being accessed
	MetadataKeyResource = "resource"

	// MetadataKeyMethod stores the HTTP method
	MetadataKeyMethod = "method"

	// MetadataKeyPath stores the URL path
	MetadataKeyPath = "path"

	// MetadataKeyTimestamp stores the request timestamp
	MetadataKeyTimestamp = "timestamp"

	// MetadataKeyRequestID stores the request ID
	MetadataKeyRequestID = "request_id"

	// MetadataKeyUserID stores the user ID
	MetadataKeyUserID = "user_id"

	// MetadataKeyTenant stores the tenant ID
	MetadataKeyTenant = "tenant"

	// MetadataKeyRegion stores the region
	MetadataKeyRegion = "region"
)
View Source
const (
	// ErrCodeInvalidConfig indicates invalid configuration
	ErrCodeInvalidConfig = "GORLY_INVALID_CONFIG"

	// ErrCodeInvalidContext indicates invalid rate limit context
	ErrCodeInvalidContext = "GORLY_INVALID_CONTEXT"

	// ErrCodeStorageFailure indicates storage backend failure
	ErrCodeStorageFailure = "GORLY_STORAGE_FAILURE"

	// ErrCodeStrategyFailure indicates strategy execution failure
	ErrCodeStrategyFailure = "GORLY_STRATEGY_FAILURE"

	// ErrCodeResolverFailure indicates config resolver failure
	ErrCodeResolverFailure = "GORLY_RESOLVER_FAILURE"

	// ErrCodeLimitExceeded indicates rate limit exceeded
	ErrCodeLimitExceeded = "GORLY_LIMIT_EXCEEDED"

	// ErrCodeInvalidLimit indicates invalid limit value
	ErrCodeInvalidLimit = "GORLY_INVALID_LIMIT"

	// ErrCodeInvalidWindow indicates invalid time window
	ErrCodeInvalidWindow = "GORLY_INVALID_WINDOW"

	// ErrCodeInvalidBurst indicates invalid burst value
	ErrCodeInvalidBurst = "GORLY_INVALID_BURST"

	// ErrCodeClosed indicates limiter is closed
	ErrCodeClosed = "GORLY_CLOSED"

	// ErrCodeKeyNotFound indicates key not found in store
	ErrCodeKeyNotFound = "GORLY_KEY_NOT_FOUND"

	// ErrCodeConnectionFailed indicates connection failure
	ErrCodeConnectionFailed = "GORLY_CONNECTION_FAILED"

	// ErrCodeTimeout indicates operation timeout
	ErrCodeTimeout = "GORLY_TIMEOUT"

	// ErrCodeScriptNotSupported indicates script execution is not supported
	ErrCodeScriptNotSupported = "GORLY_SCRIPT_NOT_SUPPORTED"

	// ErrCodeKeyTooLong indicates the key exceeds maximum allowed length
	ErrCodeKeyTooLong = "GORLY_KEY_TOO_LONG"
)
View Source
const (
	// ErrMsgInvalidConfig is returned when configuration is invalid
	ErrMsgInvalidConfig = "invalid rate limiter configuration"

	// ErrMsgInvalidContext is returned when context is invalid
	ErrMsgInvalidContext = "invalid rate limit context"

	// ErrMsgStorageFailure is returned when storage operation fails
	ErrMsgStorageFailure = "storage backend failure"

	// ErrMsgStrategyFailure is returned when strategy execution fails
	ErrMsgStrategyFailure = "rate limiting strategy failure"

	// ErrMsgResolverFailure is returned when config resolver fails
	ErrMsgResolverFailure = "rate limit resolver failure"

	// ErrMsgLimitExceeded is returned when rate limit is exceeded
	ErrMsgLimitExceeded = "rate limit exceeded"

	// ErrMsgInvalidLimit is returned when limit value is invalid
	ErrMsgInvalidLimit = "invalid rate limit value"

	// ErrMsgInvalidWindow is returned when window value is invalid
	ErrMsgInvalidWindow = "invalid time window"

	// ErrMsgInvalidBurst is returned when burst value is invalid
	ErrMsgInvalidBurst = "invalid burst value"

	// ErrMsgClosed is returned when limiter is closed
	ErrMsgClosed = "rate limiter is closed"

	// ErrMsgKeyNotFound is returned when key not found
	ErrMsgKeyNotFound = "key not found"

	// ErrMsgConnectionFailed is returned when connection fails
	ErrMsgConnectionFailed = "connection failed"

	// ErrMsgTimeout is returned when operation times out
	ErrMsgTimeout = "operation timeout"

	// ErrMsgScriptNotSupported is returned when store doesn't support script execution
	ErrMsgScriptNotSupported = "script execution not supported by this store"

	// ErrMsgKeyTooLong is returned when key exceeds maximum length
	ErrMsgKeyTooLong = "key length exceeds maximum allowed"
)
View Source
const (
	// DefaultLimit is the default rate limit (requests per window)
	DefaultLimit = int64(1000)

	// DefaultWindowSeconds is the default time window in seconds
	DefaultWindowSeconds = int64(3600) // 1 hour

	// DefaultWindow is the default time window duration
	DefaultWindow = time.Duration(DefaultWindowSeconds) * time.Second

	// DefaultBurst is the default burst size
	DefaultBurst = int64(100)

	// DefaultCleanupIntervalSeconds is the default cleanup interval for memory store
	DefaultCleanupIntervalSeconds = int64(300) // 5 minutes

	// DefaultCleanupInterval is the default cleanup interval duration
	DefaultCleanupInterval = time.Duration(DefaultCleanupIntervalSeconds) * time.Second

	// DefaultMaxKeys is the default maximum keys in memory store
	DefaultMaxKeys = int64(100000)

	// DefaultShardCount is the default number of shards for memory store
	DefaultShardCount = int(32)

	// DefaultRedisPoolSize is the default Redis connection pool size
	DefaultRedisPoolSize = int(10)

	// DefaultRedisTimeout is the default Redis operation timeout
	DefaultRedisTimeout = 5 * time.Second

	// DefaultRedisMaxRetries is the default number of Redis retries
	DefaultRedisMaxRetries = int(3)
)
View Source
const (
	// MinLimit is the minimum allowed rate limit
	MinLimit = int64(1)

	// MaxLimit is the maximum allowed rate limit
	MaxLimit = int64(1000000)

	// MinWindowSeconds is the minimum window duration in seconds
	MinWindowSeconds = int64(1)

	// MaxWindowSeconds is the maximum window duration in seconds (24 hours)
	MaxWindowSeconds = int64(86400)

	// MinBurst is the minimum burst size
	MinBurst = int64(0)

	// MaxBurst is the maximum burst size
	MaxBurst = int64(100000)

	// MinCleanupIntervalSeconds is the minimum cleanup interval
	MinCleanupIntervalSeconds = int64(10)

	// MaxCleanupIntervalSeconds is the maximum cleanup interval
	MaxCleanupIntervalSeconds = int64(3600)

	// MaxKeyLength is the maximum allowed length for store keys (in bytes)
	// This prevents DOS attacks via oversized keys and ensures compatibility
	// with Redis (512 MB max key size) while being reasonable for rate limiting
	MaxKeyLength = 256
)
View Source
const (
	// HeaderRateLimitLimit is the header for rate limit total
	HeaderRateLimitLimit = "X-RateLimit-Limit"

	// HeaderRateLimitRemaining is the header for remaining requests
	HeaderRateLimitRemaining = "X-RateLimit-Remaining"

	// HeaderRateLimitReset is the header for reset timestamp
	HeaderRateLimitReset = "X-RateLimit-Reset"

	// HeaderRateLimitRetryAfter is the header for retry delay
	HeaderRateLimitRetryAfter = "Retry-After"

	// HeaderRateLimitUsed is the header for used requests
	HeaderRateLimitUsed = "X-RateLimit-Used"
)
View Source
const (
	// StorageKeyPrefixDefault is the default prefix for storage keys
	StorageKeyPrefixDefault = "gorly"

	// StorageKeySeparator is the separator for key components
	StorageKeySeparator = ":"
)
View Source
const (
	// LogMsgLimiterCreated is logged when limiter is created
	LogMsgLimiterCreated = "rate limiter created"

	// LogMsgLimiterClosed is logged when limiter is closed
	LogMsgLimiterClosed = "rate limiter closed"

	// LogMsgCheckAllowed is logged when request is allowed
	LogMsgCheckAllowed = "request allowed"

	// LogMsgCheckDenied is logged when request is denied
	LogMsgCheckDenied = "request denied"

	// LogMsgReset is logged when rate limit is reset
	LogMsgReset = "rate limit reset"

	// LogMsgStoreConnected is logged when store connects
	LogMsgStoreConnected = "store connected"

	// LogMsgStoreClosed is logged when store closes
	LogMsgStoreClosed = "store closed"

	// LogMsgCleanupStarted is logged when cleanup starts
	LogMsgCleanupStarted = "cleanup started"

	// LogMsgCleanupCompleted is logged when cleanup completes
	LogMsgCleanupCompleted = "cleanup completed"
)
View Source
const (
	// RateUnitSecond represents per-second rate
	RateUnitSecond = "second"

	// RateUnitMinute represents per-minute rate
	RateUnitMinute = "minute"

	// RateUnitHour represents per-hour rate
	RateUnitHour = "hour"

	// RateUnitDay represents per-day rate
	RateUnitDay = "day"

	// RateUnitShortSecond is the short form for second
	RateUnitShortSecond = "s"

	// RateUnitShortMinute is the short form for minute
	RateUnitShortMinute = "m"

	// RateUnitShortHour is the short form for hour
	RateUnitShortHour = "h"

	// RateUnitShortDay is the short form for day
	RateUnitShortDay = "d"
)

Variables

View Source
var (
	// ErrInvalidConfig indicates the rate limiter configuration is invalid
	ErrInvalidConfig = errors.New(ErrMsgInvalidConfig)

	// ErrInvalidContext indicates the rate limit context is invalid
	ErrInvalidContext = errors.New(ErrMsgInvalidContext)

	// ErrStorageFailure indicates a storage backend failure
	ErrStorageFailure = errors.New(ErrMsgStorageFailure)

	// ErrStrategyFailure indicates a rate limiting strategy failure
	ErrStrategyFailure = errors.New(ErrMsgStrategyFailure)

	// ErrResolverFailure indicates a config resolver failure
	ErrResolverFailure = errors.New(ErrMsgResolverFailure)

	// ErrLimitExceeded indicates the rate limit has been exceeded
	ErrLimitExceeded = errors.New(ErrMsgLimitExceeded)

	// ErrInvalidLimit indicates an invalid rate limit value
	ErrInvalidLimit = errors.New(ErrMsgInvalidLimit)

	// ErrInvalidWindow indicates an invalid time window value
	ErrInvalidWindow = errors.New(ErrMsgInvalidWindow)

	// ErrInvalidBurst indicates an invalid burst value
	ErrInvalidBurst = errors.New(ErrMsgInvalidBurst)

	// ErrClosed indicates the rate limiter has been closed
	ErrClosed = errors.New(ErrMsgClosed)

	// ErrKeyNotFound indicates a key was not found in the store
	ErrKeyNotFound = errors.New(ErrMsgKeyNotFound)

	// ErrConnectionFailed indicates a connection failure
	ErrConnectionFailed = errors.New(ErrMsgConnectionFailed)

	// ErrTimeout indicates an operation timeout
	ErrTimeout = errors.New(ErrMsgTimeout)

	// ErrScriptNotSupported indicates the store doesn't support script execution
	ErrScriptNotSupported = errors.New(ErrMsgScriptNotSupported)

	// ErrKeyTooLong indicates the key exceeds maximum allowed length
	ErrKeyTooLong = errors.New(ErrMsgKeyTooLong)
)

Core sentinel errors using the standard errors package. These are wrapped with cuserr when additional context is needed.

Functions

func CheckMultiple added in v1.1.0

func CheckMultiple(ctx context.Context, limiter RateLimiter, identities []string, scope, tier string) map[string]*Result

CheckMultiple checks rate limits for multiple identities at once. Returns a map of identity -> result.

func ContextsEqual added in v1.1.0

func ContextsEqual(a, b Identity) bool

ContextsEqual checks if two contexts are equivalent for rate limiting

func ConvertLogFields added in v1.1.0

func ConvertLogFields(fields []LogField) []interface{}

ConvertLogFields converts LogField slice to interface{} slice for logging

func FormatRateString added in v1.1.0

func FormatRateString(limit int64, window time.Duration) string

FormatRateString formats limit and window into a rate string

func FormatVersionString added in v1.1.0

func FormatVersionString() string

FormatVersionString returns a formatted version string with additional info

func GetAPIVersion added in v1.1.0

func GetAPIVersion(apiName string) (string, error)

GetAPIVersion returns the version of a specific API

func GetComponentVersion added in v1.1.0

func GetComponentVersion(componentName string) (string, error)

GetComponentVersion returns the version of a specific component

func GetGitCommit

func GetGitCommit() string

GetGitCommit returns the git commit hash

func GetProjectName added in v1.1.0

func GetProjectName() string

GetProjectName returns the project name

func GetRetryAfter

func GetRetryAfter(result *Result) time.Duration

GetRetryAfter returns the retry-after duration from a result

func GetSchemaVersion added in v1.1.0

func GetSchemaVersion(schemaName string) (string, error)

GetSchemaVersion returns the version of a specific schema

func GetUsagePercent added in v1.1.0

func GetUsagePercent(result *Result) float64

GetUsagePercent returns the usage as a percentage (0-100)

func GetVersion

func GetVersion() string

GetVersion returns the current version string

func GetVersionInfo

func GetVersionInfo() *version.Info

GetVersionInfo returns comprehensive version information. Initializes version info if not already done.

func HealthHandler added in v1.1.0

func HealthHandler() http.Handler

HealthHandler returns an HTTP handler for health checks with version info

func InitializeVersion added in v1.1.0

func InitializeVersion(opts ...version.Option) error

InitializeVersion initializes the version information from the manifest. This should be called once during application startup.

func IsClosed added in v1.1.0

func IsClosed(err error) bool

IsClosed checks if error indicates limiter is closed

func IsConfigError

func IsConfigError(err error) bool

IsConfigError checks if error is a configuration error

func IsConnectionError

func IsConnectionError(err error) bool

IsConnectionError checks if error is a connection error

func IsContextError added in v1.1.0

func IsContextError(err error) bool

IsContextError checks if error is a context error

func IsKeyTooLong added in v1.1.0

func IsKeyTooLong(err error) bool

IsKeyTooLong checks if error indicates key length exceeded maximum

func IsNearLimit added in v1.1.0

func IsNearLimit(result *Result, thresholdPercent float64) bool

IsNearLimit checks if usage is near the limit (>= threshold %)

func IsRateLimitExceeded

func IsRateLimitExceeded(err error) bool

IsRateLimitExceeded checks if error is a rate limit exceeded error

func IsRateLimited added in v1.1.0

func IsRateLimited(result *Result) bool

IsRateLimited checks if a result indicates rate limiting

func IsScriptNotSupported added in v1.1.0

func IsScriptNotSupported(err error) bool

IsScriptNotSupported checks if error indicates scripts are not supported

func IsStorageError added in v1.1.0

func IsStorageError(err error) bool

IsStorageError checks if an error is a storage-related error

func IsStorageFailure added in v1.1.0

func IsStorageFailure(err error) bool

IsStorageFailure checks if error is a storage failure

func IsTimeoutError added in v1.1.0

func IsTimeoutError(err error) bool

IsTimeoutError checks if error is a timeout error

func MustGetVersionInfo added in v1.1.0

func MustGetVersionInfo() *version.Info

MustGetVersionInfo returns version information or panics if unavailable

func NewConnectionError added in v1.1.0

func NewConnectionError(storeType, address string, err error, keyValues ...interface{}) error

NewConnectionError creates a connection error with context

func NewLimitExceededError added in v1.1.0

func NewLimitExceededError(identity, scope string, limit, used int64, keyValues ...interface{}) error

NewLimitExceededError creates a rate limit exceeded error with context

func NewTimeoutError added in v1.1.0

func NewTimeoutError(operation string, duration interface{}, keyValues ...interface{}) error

NewTimeoutError creates a timeout error with context

func ParseKey added in v1.1.0

func ParseKey(key string) (tier, scope, identity string, err error)

ParseKey parses a storage key back into its components. Expected format: "gorly:tier:scope:identity"

func ParseRateString

func ParseRateString(rateStr string) (int64, time.Duration, error)

ParseRateString parses a rate string like "1000/1h" into a limit and window. Supported formats:

  • "1000/1h" - 1000 requests per hour
  • "100/1m" - 100 requests per minute
  • "10/1s" - 10 requests per second
  • "5000/1d" - 5000 requests per day

SECURITY: This function includes comprehensive validation to prevent DOS attacks:

  • Input length limited to 32 characters
  • Only accepts positive non-zero values
  • Overflow protection for all calculations
  • Strict bounds checking on all parameters

func QuickCheck added in v1.1.0

func QuickCheck(ctx context.Context, limiter RateLimiter, identity, scope, tier string) (bool, error)

QuickCheck is a convenience function for simple rate limit checks. Returns true if the request is allowed, false if rate limited, and an error if the operation failed.

BREAKING CHANGE: Now returns (bool, error) instead of just bool. This allows callers to distinguish between rate limiting and errors.

Example:

allowed, err := ratelimit.QuickCheck(ctx, limiter, "user123", "global", "free")
if err != nil {
    // Handle error (e.g., store unavailable)
    return err
}
if !allowed {
    // Rate limit exceeded
    return http.StatusTooManyRequests
}

func QuickCheckN added in v1.1.0

func QuickCheckN(ctx context.Context, limiter RateLimiter, identity, scope, tier string, n int64) (bool, error)

QuickCheckN is a convenience function for checking N tokens. Returns true if the request is allowed, false if rate limited, and an error if the operation failed.

BREAKING CHANGE: Now returns (bool, error) instead of just bool

func QuickStats added in v1.1.0

func QuickStats(ctx context.Context, limiter RateLimiter, identity, scope, tier string) (limit, used, remaining int64, err error)

QuickStats returns basic usage statistics for an identity. Returns limit, used, remaining, and error.

BREAKING CHANGE: Now returns (int64, int64, int64, error) instead of just (int64, int64, int64). Zero values no longer indicate errors - check the error return instead.

func ResetMultiple added in v1.1.0

func ResetMultiple(ctx context.Context, limiter RateLimiter, identities []string, scope, tier string) error

ResetMultiple resets rate limits for multiple identities

func ResetVersion added in v1.1.0

func ResetVersion()

ResetVersion resets the version information (useful for testing)

func ResultsEqual added in v1.1.0

func ResultsEqual(a, b *Result) bool

ResultsEqual checks if two results are equivalent

func ValidateBurstValue added in v1.1.0

func ValidateBurstValue(burst int64) error

ValidateBurstValue validates a burst value

func ValidateContext added in v1.1.0

func ValidateContext(ctx Identity) error

ValidateContext validates a rate limit context

func ValidateKeyLength added in v1.1.0

func ValidateKeyLength(key string) error

ValidateKeyLength validates a storage key length. Returns ErrKeyTooLong if the key exceeds MaxKeyLength bytes.

func ValidateLimitValue added in v1.1.0

func ValidateLimitValue(limit int64) error

ValidateLimitValue validates a rate limit value

func ValidateVersions added in v1.1.0

func ValidateVersions(ctx context.Context, validators ...version.Validator) error

ValidateVersions validates that all components meet minimum version requirements. This is useful for ensuring compatibility during initialization.

func ValidateWindowSeconds added in v1.1.0

func ValidateWindowSeconds(windowSec int64) error

ValidateWindowSeconds validates a window duration in seconds

func VersionHandler added in v1.1.0

func VersionHandler() http.Handler

VersionHandler returns an HTTP handler that exposes version information. This uses the go-version package's built-in handler.

func VersionHandlerFunc added in v1.1.0

func VersionHandlerFunc() http.HandlerFunc

VersionHandlerFunc returns an HTTP handler function that exposes version information

func VersionMiddleware added in v1.1.0

func VersionMiddleware(next http.Handler) http.Handler

VersionMiddleware returns middleware that adds version information to the context

func WrapConfigError added in v1.1.0

func WrapConfigError(err error, message string, keyValues ...interface{}) error

WrapConfigError wraps a configuration error with additional context

func WrapContextError added in v1.1.0

func WrapContextError(err error, message string, keyValues ...interface{}) error

WrapContextError wraps a context error with additional context

func WrapResolverError added in v1.1.0

func WrapResolverError(err error, message string, keyValues ...interface{}) error

WrapResolverError wraps a resolver error with additional context

func WrapStorageError added in v1.1.0

func WrapStorageError(err error, operation string, keyValues ...interface{}) error

WrapStorageError wraps a storage error with additional context

func WrapStrategyError added in v1.1.0

func WrapStrategyError(err error, strategyName string, keyValues ...interface{}) error

WrapStrategyError wraps a strategy error with additional context

Types

type AggregatedResult added in v1.1.0

type AggregatedResult struct {
	// Overall indicates if all checks passed
	Overall bool

	// Results contains individual results by scope
	Results map[string]*Result

	// LowestRemaining is the scope with the least remaining capacity
	LowestRemaining string

	// NextReset is the earliest reset time across all scopes
	NextReset time.Time
}

AggregatedResult combines multiple results (e.g., from multi-scope checks)

func NewAggregatedResult added in v1.1.0

func NewAggregatedResult(results map[string]*Result) *AggregatedResult

NewAggregatedResult creates an aggregated result from multiple results

func (*AggregatedResult) String added in v1.1.0

func (ar *AggregatedResult) String() string

String returns a human-readable representation

type Algorithm

type Algorithm interface {
	// Name returns the algorithm name
	Name() string

	// Allow checks if a request is allowed and returns the result
	Allow(ctx context.Context, store Store, key string, limit int64, window time.Duration, n int64) (*Result, error)

	// Reset resets the rate limit for the given key
	Reset(ctx context.Context, store Store, key string) error
}

Algorithm represents a rate limiting algorithm

func NewFixedWindowAlgorithm added in v1.1.0

func NewFixedWindowAlgorithm() Algorithm

NewFixedWindowAlgorithm creates a fixed window algorithm. This is a placeholder - not yet implemented.

func NewLeakyBucketAlgorithm added in v1.1.0

func NewLeakyBucketAlgorithm() Algorithm

NewLeakyBucketAlgorithm creates a leaky bucket algorithm. This is a placeholder - not yet implemented.

func NewSlidingWindowAlgorithm added in v1.1.0

func NewSlidingWindowAlgorithm() Algorithm

NewSlidingWindowAlgorithm creates a sliding window algorithm

func NewTokenBucketAlgorithm added in v1.1.0

func NewTokenBucketAlgorithm() Algorithm

NewTokenBucketAlgorithm creates a token bucket algorithm

type Builder

type Builder struct {
	// contains filtered or unexported fields
}

Builder provides a fluent API for constructing rate limiters

func NewBuilder added in v1.1.0

func NewBuilder() *Builder

NewBuilder creates a new rate limiter builder

func (*Builder) Build

func (b *Builder) Build() (RateLimiter, error)

Build creates the rate limiter

func (*Builder) MustBuild added in v1.1.0

func (b *Builder) MustBuild() RateLimiter

MustBuild creates the rate limiter or panics on error. Use only when you're certain the configuration is valid.

func (*Builder) WithAlgorithm added in v1.1.0

func (b *Builder) WithAlgorithm(algorithm Algorithm) *Builder

WithAlgorithm sets a custom algorithm

func (*Builder) WithBurst added in v1.1.0

func (b *Builder) WithBurst(burst int64) *Builder

WithBurst sets the burst size

func (*Builder) WithDefaultTiers added in v1.1.0

func (b *Builder) WithDefaultTiers() *Builder

WithDefaultTiers sets up tier-based limiting with default configuration

func (*Builder) WithGenerousTiers added in v1.1.0

func (b *Builder) WithGenerousTiers() *Builder

WithGenerousTiers sets up tier-based limiting with generous limits

func (*Builder) WithLimit added in v1.1.0

func (b *Builder) WithLimit(limit int64, window time.Duration) *Builder

WithLimit sets the default rate limit

func (*Builder) WithLimitString added in v1.1.0

func (b *Builder) WithLimitString(rateStr string) *Builder

WithLimitString sets the limit from a rate string like "1000/1h"

func (*Builder) WithLogger added in v1.1.0

func (b *Builder) WithLogger(logger Logger) *Builder

WithLogger sets a custom logger

func (*Builder) WithMetrics added in v1.1.0

func (b *Builder) WithMetrics(enable bool) *Builder

WithMetrics enables metrics collection

func (*Builder) WithResolver added in v1.1.0

func (b *Builder) WithResolver(resolver LimitResolver) *Builder

WithResolver sets a limit resolver for tier-based limits

func (*Builder) WithSlidingWindow added in v1.1.0

func (b *Builder) WithSlidingWindow() *Builder

WithSlidingWindow sets the sliding window algorithm

func (*Builder) WithStore added in v1.1.0

func (b *Builder) WithStore(store Store) *Builder

WithStore sets a custom store

func (*Builder) WithStrictTiers added in v1.1.0

func (b *Builder) WithStrictTiers() *Builder

WithStrictTiers sets up tier-based limiting with strict limits

func (*Builder) WithTiers added in v1.1.0

func (b *Builder) WithTiers(resolverConfig *ResolverConfig) *Builder

WithTiers sets up tier-based limiting with the given configuration

func (*Builder) WithTokenBucket added in v1.1.0

func (b *Builder) WithTokenBucket() *Builder

WithTokenBucket sets the token bucket algorithm

type Config

type Config struct {
	// Store is the backend storage for rate limit data
	Store Store

	// Algorithm is the rate limiting algorithm to use
	Algorithm Algorithm

	// DefaultLimit is the default rate limit (requests per window)
	DefaultLimit int64

	// DefaultWindow is the default time window
	DefaultWindow time.Duration

	// DefaultBurst is the default burst size for token bucket
	DefaultBurst int64

	// Resolver is the optional limit resolver for dynamic limits
	// If set, it will override DefaultLimit/DefaultWindow/DefaultBurst
	// based on tier and scope resolution
	Resolver LimitResolver

	// Logger for rate limiter operations
	Logger Logger

	// EnableMetrics enables metrics collection
	EnableMetrics bool
}

Config holds rate limiter configuration

func DefaultConfig

func DefaultConfig() *Config

DefaultConfig returns a configuration with sensible defaults

func (*Config) Validate

func (c *Config) Validate() error

Validate validates the configuration

type ContextBuilder added in v1.1.0

type ContextBuilder struct {
	// contains filtered or unexported fields
}

ContextBuilder provides a fluent API for building rate limit contexts

func NewContextBuilder added in v1.1.0

func NewContextBuilder() *ContextBuilder

NewContextBuilder creates a new context builder

func (*ContextBuilder) AddMetadata added in v1.1.0

func (cb *ContextBuilder) AddMetadata(key string, value interface{}) *ContextBuilder

AddMetadata adds a single metadata entry

func (*ContextBuilder) Build added in v1.1.0

func (cb *ContextBuilder) Build() (Identity, error)

Build creates the rate limit context

func (*ContextBuilder) MustBuild added in v1.1.0

func (cb *ContextBuilder) MustBuild() Identity

MustBuild creates the rate limit context or panics

func (*ContextBuilder) WithIP added in v1.1.0

func (cb *ContextBuilder) WithIP(ip string) *ContextBuilder

WithIP adds IP address to metadata

func (*ContextBuilder) WithIdentity added in v1.1.0

func (cb *ContextBuilder) WithIdentity(identity string) *ContextBuilder

WithIdentity sets the identity

func (*ContextBuilder) WithMetadata added in v1.1.0

func (cb *ContextBuilder) WithMetadata(metadata map[string]interface{}) *ContextBuilder

WithMetadata adds metadata (replaces existing)

func (*ContextBuilder) WithMethod added in v1.1.0

func (cb *ContextBuilder) WithMethod(method string) *ContextBuilder

WithMethod adds HTTP method to metadata

func (*ContextBuilder) WithPath added in v1.1.0

func (cb *ContextBuilder) WithPath(path string) *ContextBuilder

WithPath adds URL path to metadata

func (*ContextBuilder) WithScope added in v1.1.0

func (cb *ContextBuilder) WithScope(scope string) *ContextBuilder

WithScope sets the scope

func (*ContextBuilder) WithTier added in v1.1.0

func (cb *ContextBuilder) WithTier(tier string) *ContextBuilder

WithTier sets the tier

func (*ContextBuilder) WithUserAgent added in v1.1.0

func (cb *ContextBuilder) WithUserAgent(ua string) *ContextBuilder

WithUserAgent adds user agent to metadata

type HealthStatus

type HealthStatus struct {
	// Overall health
	Healthy bool `json:"healthy"`

	// Component health
	StoreHealthy    bool `json:"store_healthy"`
	StrategyHealthy bool `json:"strategy_healthy"`

	// Details
	Message   string                 `json:"message,omitempty"`
	Timestamp time.Time              `json:"timestamp"`
	Metadata  map[string]interface{} `json:"metadata,omitempty"`

	// Performance
	ResponseTime time.Duration `json:"response_time,omitempty"`
}

HealthStatus represents the health status of the rate limiter

func NewHealthStatus added in v1.1.0

func NewHealthStatus(healthy bool, message string) *HealthStatus

NewHealthStatus creates a new health status

type IdentifiedContext added in v1.1.0

type IdentifiedContext struct {
	// contains filtered or unexported fields
}

IdentifiedContext wraps an Identity with a unique ID

func NewIdentifiedContext added in v1.1.0

func NewIdentifiedContext(wrapped Identity) *IdentifiedContext

NewIdentifiedContext creates a context with a unique ID

func (*IdentifiedContext) ID added in v1.1.0

func (ic *IdentifiedContext) ID() string

ID returns the unique context ID

func (*IdentifiedContext) Identity added in v1.1.0

func (ic *IdentifiedContext) Identity() string

Identity returns the wrapped context's identity

func (*IdentifiedContext) Key added in v1.1.0

func (ic *IdentifiedContext) Key() string

Key returns the wrapped context's key

func (*IdentifiedContext) Metadata added in v1.1.0

func (ic *IdentifiedContext) Metadata() map[string]interface{}

Metadata returns the wrapped context's metadata

func (*IdentifiedContext) Scope added in v1.1.0

func (ic *IdentifiedContext) Scope() string

Scope returns the wrapped context's scope

func (*IdentifiedContext) Tier added in v1.1.0

func (ic *IdentifiedContext) Tier() string

Tier returns the wrapped context's tier

type Identity added in v1.1.0

type Identity interface {
	// Identity returns the unique identifier for this rate limit subject
	// Examples: user ID, API key, IP address, tenant ID, connection ID
	Identity() string

	// Scope returns the rate limit scope
	// Examples: "api", "search", "upload", "db_query", "events"
	Scope() string

	// Tier returns the service tier
	// Examples: "free", "premium", "enterprise", "internal"
	Tier() string

	// Metadata returns additional context for rate limiting decisions
	// Can include: IP address, user agent, resource being accessed, etc.
	Metadata() map[string]interface{}

	// Key generates the storage key for this context
	// Format: "gorly:tier:scope:identity"
	Key() string
}

Identity represents the complete context for a rate limit decision. It replaces the old AuthEntity concept with something more flexible and generic.

func CloneContext added in v1.1.0

func CloneContext(ctx Identity) Identity

CloneContext creates a deep copy of a rate limit context

func ContextFromKey added in v1.1.0

func ContextFromKey(key string) (Identity, error)

ContextFromKey creates a simple context from a storage key

func NewAPIKeyContext added in v1.1.0

func NewAPIKeyContext(apiKey, tier string) Identity

NewAPIKeyContext creates a rate limit context for API key-based limiting

func NewIPContext added in v1.1.0

func NewIPContext(ip string) Identity

NewIPContext creates a rate limit context for IP-based limiting

func NewScopedContext added in v1.1.0

func NewScopedContext(identity, scope, tier string, metadata map[string]interface{}) Identity

NewScopedContext creates a context for a user with a specific scope. This is a helper that wraps NewSimpleContext with clearer naming.

func NewTenantContext added in v1.1.0

func NewTenantContext(tenantID, tier string) Identity

NewTenantContext creates a rate limit context for tenant-based limiting

func NewUserContext added in v1.1.0

func NewUserContext(userID, tier string) Identity

NewUserContext creates a rate limit context for user-based limiting

type LimitConfig added in v1.1.0

type LimitConfig struct {
	// Limit is the number of requests allowed
	Limit int64

	// Window is the time window for the limit
	Window time.Duration

	// Burst is the burst size (for token bucket)
	Burst int64

	// Strategy is the rate limiting strategy to use
	Strategy string
}

LimitConfig represents a single rate limit configuration

func NewLimitConfig added in v1.1.0

func NewLimitConfig(limit int64, window time.Duration, burst int64) *LimitConfig

NewLimitConfig creates a new limit configuration

func NewLimitConfigFromRate added in v1.1.0

func NewLimitConfigFromRate(rateStr string, burst int64) (*LimitConfig, error)

NewLimitConfigFromRate creates a limit config from a rate string

func (*LimitConfig) Clone added in v1.1.0

func (lc *LimitConfig) Clone() *LimitConfig

Clone creates a copy of the limit configuration

func (*LimitConfig) Validate added in v1.1.0

func (lc *LimitConfig) Validate() error

Validate validates the limit configuration

type LimitResolver added in v1.1.0

type LimitResolver interface {
	// ResolveLimit resolves the rate limit for the given context
	// Returns the resolved limit configuration based on the hierarchy:
	// 1. Entity-specific override (if enabled)
	// 2. Tier-specific limit for scope (if enabled)
	// 3. Tier default limit (if enabled)
	// 4. Global default limit
	ResolveLimit(rlCtx Identity) (*LimitConfig, error)

	// SetEntityOverride sets a limit override for a specific entity
	SetEntityOverride(entityID, scope string, limit *LimitConfig) error

	// RemoveEntityOverride removes a limit override for an entity
	RemoveEntityOverride(entityID, scope string) error

	// GetTierConfig returns the configuration for a tier
	GetTierConfig(tier string) (*TierConfig, error)
}

LimitResolver resolves rate limit configuration for a given context

func NewLimitResolver added in v1.1.0

func NewLimitResolver(config *ResolverConfig) (LimitResolver, error)

NewLimitResolver creates a new limit resolver with the given configuration

type LogField added in v1.1.0

type LogField struct {
	Key   string
	Value interface{}
}

LogField is a helper type for structured logging fields

func F added in v1.1.0

func F(key string, value interface{}) LogField

F creates a LogField (shorthand for structured logging)

type Logger

type Logger interface {
	// Debug logs a debug message with optional fields
	Debug(msg string, fields ...interface{})

	// Info logs an info message with optional fields
	Info(msg string, fields ...interface{})

	// Warn logs a warning message with optional fields
	Warn(msg string, fields ...interface{})

	// Error logs an error message with optional fields
	Error(msg string, fields ...interface{})

	// With creates a child logger with the given fields
	With(fields ...interface{}) Logger

	// Named creates a named logger
	Named(name string) Logger
}

Logger defines the logging interface for gorly. This allows injection of any logging implementation.

func NewCustomLogger added in v1.1.0

func NewCustomLogger(level zapcore.Level, encoding string, outputPaths []string) Logger

NewCustomLogger creates a logger with custom configuration

func NewDevelopmentLogger added in v1.1.0

func NewDevelopmentLogger() Logger

NewDevelopmentLogger creates a logger optimized for development

func NewLoggerFromZap added in v1.1.0

func NewLoggerFromZap(logger *zap.Logger) Logger

NewLoggerFromZap wraps an existing zap.Logger

func NewNopLogger added in v1.1.0

func NewNopLogger() Logger

NewNopLogger creates a logger that discards all logs. Useful for testing or when logging is disabled.

func NewProductionLogger added in v1.1.0

func NewProductionLogger() Logger

NewProductionLogger creates a logger optimized for production

type RateLimiter

type RateLimiter interface {
	// Check performs a rate limit check WITHOUT consuming tokens
	// Useful for preflight checks or monitoring
	Check(ctx context.Context, rlCtx Identity) (*Result, error)

	// Allow performs a rate limit check and CONSUMES one token if allowed
	// This is the main method for enforcing rate limits
	Allow(ctx context.Context, rlCtx Identity) (*Result, error)

	// AllowN performs a rate limit check and consumes N tokens if allowed
	// Useful for batch operations or operations with different costs
	AllowN(ctx context.Context, rlCtx Identity, n int64) (*Result, error)

	// Reset clears the rate limit for the given context
	// Useful for administrative overrides or testing
	Reset(ctx context.Context, rlCtx Identity) error

	// Stats returns usage statistics for the given context
	// Provides visibility into current rate limit state
	Stats(ctx context.Context, rlCtx Identity) (*Result, error)

	// Health checks the health of the rate limiter
	// Verifies store connectivity and system health
	Health(ctx context.Context) error

	// Close cleanly shuts down the rate limiter
	// Releases resources, closes connections, stops goroutines
	Close() error
}

RateLimiter is the core interface for rate limiting functionality. This is the main entry point for all rate limiting operations.

func NewForAPI added in v1.1.0

func NewForAPI(store Store) (RateLimiter, error)

NewForAPI creates a rate limiter optimized for API gateway use:

  - Higher limits for throughput
  - Token bucket for burst handling

Requires a store to be provided.

func NewForMicroservice added in v1.1.0

func NewForMicroservice(store Store) (RateLimiter, error)

NewForMicroservice creates a rate limiter optimized for microservices:

  - Lower limits to protect services
  - Token bucket for burst handling

Requires a store to be provided.

func NewForPublicAPI added in v1.1.0

func NewForPublicAPI(store Store) (RateLimiter, error)

NewForPublicAPI creates a rate limiter for public-facing APIs:

  - Strict limits to prevent abuse
  - Sliding window for fairness
  - Multi-tier support

Requires a store to be provided.

func NewForSaaS added in v1.1.0

func NewForSaaS(store Store) (RateLimiter, error)

NewForSaaS creates a rate limiter for SaaS applications:

  - Tier-based limits (free, premium, enterprise)
  - Token bucket for user experience

Requires a store to be provided.

func NewForWebApp added in v1.1.0

func NewForWebApp(store Store) (RateLimiter, error)

NewForWebApp creates a rate limiter optimized for web applications:

  - Moderate limits
  - Sliding window for fairness

Requires a store to be provided.

func NewRateLimiter

func NewRateLimiter(config *Config) (RateLimiter, error)

NewRateLimiter creates a new rate limiter with the given configuration

func NewSimple added in v1.1.0

func NewSimple(store Store, limit int64, window time.Duration) (RateLimiter, error)

NewSimple creates a rate limiter with the simplest possible configuration. Requires a store to be provided (use stores.NewMemoryStore(nil) for in-memory storage).

Example:

store, _ := stores.NewMemoryStore(nil)
limiter, err := ratelimit.NewSimple(store, 100, time.Hour)
if err != nil {
    log.Fatal(err)
}
defer limiter.Close()

func NewWithConfig added in v1.1.0

func NewWithConfig(config *Config) (RateLimiter, error)

NewWithConfig creates a rate limiter with the given configuration. Fills in sensible defaults for any missing values.

Example:

store, _ := stores.NewMemoryStore(nil)
limiter, err := ratelimit.NewWithConfig(&ratelimit.Config{
    Store:         store,
    DefaultLimit:  1000,
    DefaultWindow: time.Hour,
    DefaultBurst:  100,
})

func NewWithTiers added in v1.1.0

func NewWithTiers(store Store, resolverConfig *ResolverConfig) (RateLimiter, error)

NewWithTiers creates a rate limiter with multi-tier support. Uses the provided resolver configuration for tier-based limits. Requires a store to be provided.

Example:

store, _ := stores.NewMemoryStore(nil)
resolverConfig := ratelimit.NewDefaultResolverConfig()
limiter, err := ratelimit.NewWithTiers(store, resolverConfig)

type ResolverConfig added in v1.1.0

type ResolverConfig struct {
	// TierConfigs maps tier names to their configurations
	TierConfigs map[string]*TierConfig

	// EntityOverrides maps entity IDs to their specific limit configurations
	// This takes highest precedence in resolution
	EntityOverrides map[string]map[string]*LimitConfig // entity_id -> scope -> limit

	// DefaultTierConfig is used when no tier-specific config is found
	DefaultTierConfig *TierConfig

	// EnableEntityOverrides enables entity-specific overrides
	EnableEntityOverrides bool

	// EnableTierLimits enables tier-based limits
	EnableTierLimits bool

	// EnableScopeLimits enables scope-based limits
	EnableScopeLimits bool
}

ResolverConfig represents configuration for the limit resolver

func NewDefaultResolverConfig added in v1.1.0

func NewDefaultResolverConfig() *ResolverConfig

NewDefaultResolverConfig creates a resolver config with sensible defaults

func NewGenerousResolverConfig added in v1.1.0

func NewGenerousResolverConfig() *ResolverConfig

NewGenerousResolverConfig creates a resolver config with generous limits

func NewResolverConfig added in v1.1.0

func NewResolverConfig() *ResolverConfig

NewResolverConfig creates a new resolver configuration

func NewStrictResolverConfig added in v1.1.0

func NewStrictResolverConfig() *ResolverConfig

NewStrictResolverConfig creates a resolver config with strict limits

func (*ResolverConfig) SetEntityOverride added in v1.1.0

func (rc *ResolverConfig) SetEntityOverride(entityID, scope string, limit *LimitConfig)

SetEntityOverride sets a limit override for a specific entity and scope

func (*ResolverConfig) SetTierConfig added in v1.1.0

func (rc *ResolverConfig) SetTierConfig(tier string, config *TierConfig)

SetTierConfig sets the configuration for a tier

func (*ResolverConfig) Validate added in v1.1.0

func (rc *ResolverConfig) Validate() error

Validate validates the resolver configuration

type Result

type Result struct {
	// Allowed indicates if the request is permitted
	Allowed bool

	// Limit is the maximum requests allowed in the window
	Limit int64

	// Remaining is how many requests are left in current window
	Remaining int64

	// Used is how many requests have been consumed
	Used int64

	// RetryAfter indicates when the client can retry (if not allowed)
	// This is the duration to wait before the next request
	RetryAfter time.Duration

	// ResetAt indicates when the rate limit window resets
	ResetAt time.Time

	// Window is the rate limit time window duration
	Window time.Duration

	// Scope is the scope that was evaluated
	Scope string

	// Entity is the entity that was evaluated
	Entity string

	// Tier is the tier that was evaluated
	Tier string

	// Strategy is the name of the strategy that was used
	Strategy string
	// contains filtered or unexported fields
}

Result represents the outcome of a rate limit check. All fields are concrete types; no interface{}.

============================================================================
P0-4 THREAD SAFETY DOCUMENTATION
============================================================================

SAFE OPERATIONS (can be called concurrently):

  • GetMetadata(), SetMetadata(), GetAllMetadata(), HasMetadata() - all metadata operations
  • Clone() - creates a thread-safe snapshot (FIXED: now properly synchronized)
  • String(), UsagePercentage() - read-only helper methods

CONDITIONALLY SAFE (safe ONLY if WithContext/WithStrategy not called concurrently):

  • Reading fields: Allowed, Limit, Remaining, Used, RetryAfter, ResetAt, Window
  • Reading context fields: Scope, Entity, Tier, Strategy

UNSAFE OPERATIONS (NOT thread-safe, must NOT be called concurrently):

  • WithContext() - modifies Scope, Entity, Tier without synchronization
  • WithStrategy() - modifies Strategy without synchronization
  • Direct field writes after construction
  • Reading Scope/Entity/Tier/Strategy while WithContext/WithStrategy are running

RECOMMENDED USAGE PATTERN:

  1. Create Result with constructor (NewAllowedResult, NewDeniedResult)
  2. Call WithContext() and WithStrategy() to configure (BEFORE sharing)
  3. Return/share Result (now treat as read-only except for metadata)
  4. Use only GetMetadata/SetMetadata for concurrent metadata access
  5. If multiple goroutines need different fields, use Clone()

ANTI-PATTERNS (will cause race conditions):

❌ Sharing Result and calling WithContext/WithStrategy concurrently
❌ Modifying non-metadata fields after sharing across goroutines
❌ Direct writes to fields after construction: result.Scope = "new"

Example - SAFE:

result := NewAllowedResult(100, 90, 10, resetTime, window)
result.WithContext("api", "user123", "premium")  // OK - before sharing
result.WithStrategy("token_bucket")              // OK - before sharing
go processResult(result)                         // OK - Result now read-only

Example - UNSAFE (RACE CONDITION):

result := NewAllowedResult(...)
go func() { result.WithContext(...) }()  // ❌ RACE
go func() { fmt.Println(result.Scope) }() // ❌ RACE

IMPORTANT: While individual Result instances are thread-safe for metadata operations, Result objects should generally not be shared across goroutines. Each rate limit check returns a new Result that should be consumed by a single goroutine or properly synchronized if sharing is necessary.

If you need to safely share a Result across goroutines:

  • Option 1: Use Clone() to create independent copies
  • Option 2: Only read immutable fields and use Get/SetMetadata()
  • Option 3: Add external synchronization (sync.Mutex)

func NewAllowedResult added in v1.1.0

func NewAllowedResult(limit, remaining, used int64, resetAt time.Time, window time.Duration) *Result

NewAllowedResult creates a result indicating the request is allowed

func NewDeniedResult added in v1.1.0

func NewDeniedResult(limit, used int64, resetAt time.Time, window time.Duration) *Result

NewDeniedResult creates a result indicating the request is denied

func NewEmptyResult added in v1.1.0

func NewEmptyResult(limit int64, window time.Duration) *Result

NewEmptyResult creates an empty result (for stats queries when no data exists)

func (*Result) Clone added in v1.1.0

func (r *Result) Clone() *Result

Clone creates a deep copy of the result (thread-safe)

THREAD SAFETY: This method is fully thread-safe and can be called concurrently with all other methods, including WithContext() and WithStrategy(). It acquires a read lock for the entire duration of the clone operation to ensure a consistent snapshot of all fields.

func (*Result) GetAllMetadata added in v1.1.0

func (r *Result) GetAllMetadata() map[string]interface{}

GetAllMetadata returns a copy of all metadata (thread-safe). Returns a new map so modifications won't affect the original.

func (*Result) GetMetadata added in v1.1.0

func (r *Result) GetMetadata(key string) (interface{}, bool)

GetMetadata retrieves a metadata value by key (thread-safe). Returns the value and a boolean indicating if the key exists.

func (*Result) HasMetadata added in v1.1.0

func (r *Result) HasMetadata(key string) bool

HasMetadata checks if a metadata key exists (thread-safe)

func (*Result) IsNearLimit added in v1.1.0

func (r *Result) IsNearLimit(threshold float64) bool

IsNearLimit checks if usage is near the limit (>= threshold percentage). The threshold should be between 0 and 100.

func (*Result) ResetAtUnix added in v1.1.0

func (r *Result) ResetAtUnix() int64

ResetAtUnix returns the ResetAt timestamp as Unix seconds. Useful for HTTP headers.

func (*Result) RetryAfterSeconds added in v1.1.0

func (r *Result) RetryAfterSeconds() int64

RetryAfterSeconds returns the RetryAfter duration as seconds. Useful for HTTP headers.

func (*Result) SetMetadata added in v1.1.0

func (r *Result) SetMetadata(key string, value interface{})

SetMetadata sets a metadata key-value pair (thread-safe)

func (*Result) SetMetadataMap added in v1.1.0

func (r *Result) SetMetadataMap(metadata map[string]interface{})

SetMetadataMap sets multiple metadata entries (thread-safe)

func (*Result) String added in v1.1.0

func (r *Result) String() string

String returns a human-readable representation of the result

func (*Result) TimeUntilReset added in v1.1.0

func (r *Result) TimeUntilReset() time.Duration

TimeUntilReset returns the duration until the window resets

func (*Result) UsagePercentage added in v1.1.0

func (r *Result) UsagePercentage() float64

UsagePercentage returns the percentage of the limit used (0-100)

func (*Result) WithContext added in v1.1.0

func (r *Result) WithContext(scope, entity, tier string) *Result

WithContext adds context information to the result

P0-4 THREAD SAFETY WARNING: This method modifies the Result and is NOT thread-safe. It should only be called during Result construction before the Result is shared across goroutines. Do NOT call this method on a Result that may be accessed concurrently by multiple goroutines.

Safe pattern:

result := NewAllowedResult(...)
result.WithContext(scope, entity, tier)  // OK - before sharing
return result

Unsafe pattern:

go func() { result.WithContext(...) }()  // RACE CONDITION
go func() { fmt.Println(result.Scope) }()  // RACE CONDITION

func (*Result) WithStrategy added in v1.1.0

func (r *Result) WithStrategy(strategy string) *Result

WithStrategy adds strategy information to the result

P0-4 THREAD SAFETY WARNING: This method modifies the Result and is NOT thread-safe. It should only be called during Result construction before the Result is shared across goroutines. Do NOT call this method on a Result that may be accessed concurrently by multiple goroutines.

Safe pattern:

result := NewAllowedResult(...)
result.WithStrategy("token_bucket")  // OK - before sharing
return result

Unsafe pattern:

go func() { result.WithStrategy(...) }()  // RACE CONDITION
go func() { fmt.Println(result.Strategy) }()  // RACE CONDITION

type ResultBuilder added in v1.1.0

type ResultBuilder struct {
	// contains filtered or unexported fields
}

ResultBuilder provides a fluent API for building results

func NewResultBuilder added in v1.1.0

func NewResultBuilder() *ResultBuilder

NewResultBuilder creates a new result builder

func (*ResultBuilder) Allowed added in v1.1.0

func (rb *ResultBuilder) Allowed(allowed bool) *ResultBuilder

Allowed sets whether the request is allowed

func (*ResultBuilder) Build added in v1.1.0

func (rb *ResultBuilder) Build() *Result

Build returns the constructed result

func (*ResultBuilder) Entity added in v1.1.0

func (rb *ResultBuilder) Entity(entity string) *ResultBuilder

Entity sets the entity

func (*ResultBuilder) Limit added in v1.1.0

func (rb *ResultBuilder) Limit(limit int64) *ResultBuilder

Limit sets the limit

func (*ResultBuilder) Metadata added in v1.1.0

func (rb *ResultBuilder) Metadata(key string, value interface{}) *ResultBuilder

Metadata adds metadata (thread-safe)

func (*ResultBuilder) Remaining added in v1.1.0

func (rb *ResultBuilder) Remaining(remaining int64) *ResultBuilder

Remaining sets the remaining count

func (*ResultBuilder) ResetAt added in v1.1.0

func (rb *ResultBuilder) ResetAt(resetAt time.Time) *ResultBuilder

ResetAt sets the reset time

func (*ResultBuilder) RetryAfter added in v1.1.0

func (rb *ResultBuilder) RetryAfter(retryAfter time.Duration) *ResultBuilder

RetryAfter sets the retry after duration

func (*ResultBuilder) Scope added in v1.1.0

func (rb *ResultBuilder) Scope(scope string) *ResultBuilder

Scope sets the scope

func (*ResultBuilder) Strategy added in v1.1.0

func (rb *ResultBuilder) Strategy(strategy string) *ResultBuilder

Strategy sets the strategy

func (*ResultBuilder) Tier added in v1.1.0

func (rb *ResultBuilder) Tier(tier string) *ResultBuilder

Tier sets the tier

func (*ResultBuilder) Used added in v1.1.0

func (rb *ResultBuilder) Used(used int64) *ResultBuilder

Used sets the used count

func (*ResultBuilder) Window added in v1.1.0

func (rb *ResultBuilder) Window(window time.Duration) *ResultBuilder

Window sets the window duration

type ScriptSupporter added in v1.1.0

type ScriptSupporter interface {
	Store
	// LoadScript pre-loads a script and returns its SHA1 hash for later execution
	// This enables using EVALSHA instead of EVAL for better performance
	LoadScript(ctx context.Context, script string) (string, error)

	// ExecuteScriptSHA executes a pre-loaded script by its SHA1 hash
	ExecuteScriptSHA(ctx context.Context, sha string, keys []string, args ...interface{}) (interface{}, error)
}

ScriptSupporter is an optional interface for stores that support Lua scripts. Stores can implement this to provide script caching and optimization.

type SimpleContext added in v1.1.0

type SimpleContext struct {
	// contains filtered or unexported fields
}

SimpleContext is a basic implementation of Identity

func NewSimpleContext added in v1.1.0

func NewSimpleContext(identity, scope, tier string, metadata map[string]interface{}) *SimpleContext

NewSimpleContext creates a new simple rate limit context

func (*SimpleContext) Identity added in v1.1.0

func (sc *SimpleContext) Identity() string

Identity returns the identity

func (*SimpleContext) Key added in v1.1.0

func (sc *SimpleContext) Key() string

Key generates the storage key

func (*SimpleContext) Metadata added in v1.1.0

func (sc *SimpleContext) Metadata() map[string]interface{}

Metadata returns the metadata

func (*SimpleContext) Scope added in v1.1.0

func (sc *SimpleContext) Scope() string

Scope returns the scope

func (*SimpleContext) Tier added in v1.1.0

func (sc *SimpleContext) Tier() string

Tier returns the tier

type Store

type Store interface {
	// Get retrieves a value from the store
	Get(ctx context.Context, key string) ([]byte, error)

	// Set stores a value in the store with an optional expiration
	Set(ctx context.Context, key string, value []byte, expiration time.Duration) error

	// Increment atomically increments a counter and returns the new value
	Increment(ctx context.Context, key string, expiration time.Duration) (int64, error)

	// IncrementBy atomically increments a counter by the given amount
	IncrementBy(ctx context.Context, key string, amount int64, expiration time.Duration) (int64, error)

	// Delete removes a key from the store
	Delete(ctx context.Context, key string) error

	// Exists checks if a key exists in the store
	Exists(ctx context.Context, key string) (bool, error)

	// ExecuteScript executes a Lua script atomically in the store
	// This enables complex atomic operations for race-free rate limiting
	// Returns the script result as interface{} which must be type-asserted by caller
	// Returns ErrScriptNotSupported if the store doesn't support scripting
	ExecuteScript(ctx context.Context, script string, keys []string, args ...interface{}) (interface{}, error)

	// Health checks the health of the store connection
	Health(ctx context.Context) error

	// Close closes the store connection
	Close() error
}

Store represents a storage backend for rate limiting data

type TierConfig

type TierConfig struct {
	// TierName is the name of the tier
	TierName string

	// ScopeLimits maps scope names to their specific limits
	// Example: "global" -> 1000/1h, "search" -> 100/1m
	ScopeLimits map[string]*LimitConfig

	// DefaultLimit is the default limit for scopes not explicitly configured
	DefaultLimit *LimitConfig

	// Strategy is the default strategy for this tier
	Strategy string

	// Description is a human-readable description of this tier
	Description string
}

TierConfig represents rate limit configuration for a service tier

func NewTierConfig added in v1.1.0

func NewTierConfig(tierName string, defaultLimit *LimitConfig) *TierConfig

NewTierConfig creates a new tier configuration

func (*TierConfig) Clone added in v1.1.0

func (tc *TierConfig) Clone() *TierConfig

Clone creates a copy of the tier configuration

func (*TierConfig) GetScopeLimit added in v1.1.0

func (tc *TierConfig) GetScopeLimit(scope string) *LimitConfig

GetScopeLimit gets the limit for a specific scope, or default if not found

func (*TierConfig) SetScopeLimit added in v1.1.0

func (tc *TierConfig) SetScopeLimit(scope string, limit *LimitConfig)

SetScopeLimit sets the limit for a specific scope

func (*TierConfig) Validate added in v1.1.0

func (tc *TierConfig) Validate() error

Validate validates the tier configuration

Directories

Path Synopsis
algorithms/sliding_window.go
examples
basic command
Package main demonstrates the most basic usage of gorly
builder command
Package main demonstrates the builder pattern for configuring rate limiters
middleware command
Package main demonstrates HTTP middleware integration
pattern-routing command
Package main demonstrates pattern-based rate limiting with gorly.
tiers command
Package main demonstrates multi-tier rate limiting with scopes
version-info command
Package main demonstrates comprehensive version information features using go-version
middleware/http.go
Package routing provides debugging and inspection tools for pattern-based routing.
