README

SWR Cache


A high-performance, production-ready Stale-While-Revalidate (SWR) cache implementation in Go. Perfect for applications that need intelligent caching with background refresh capabilities.

🚀 Why SWR Cache?

Stale-While-Revalidate is a powerful caching strategy that:

  • Serves stale content instantly while refreshing in the background
  • Eliminates cache stampedes through intelligent request deduplication
  • Provides predictable performance with sub-millisecond cache hits
  • Handles failures gracefully with expired content fallbacks

Perfect for APIs, microservices, and any application where low latency and high availability matter.

✨ Features

  • 🏃‍♂️ Ultra-Fast: Lock-free hot path with only 2 atomic reads
  • 🔄 True SWR Pattern: Background refresh while serving stale content
  • 🛡️ Circuit Breaker: Built-in protection against cascading failures
  • 📊 Pluggable Metrics: Monitor cache performance and hit rates
  • 🔧 Flexible Serialization: JSON, Gob, custom serializers, with compression
  • High Concurrency: Optimized for millions of concurrent requests
  • 💾 Multiple Storage Backends: Badger (embedded), Redis, or custom
  • 🎯 Dynamic TTL: Configure cache lifetime based on content or patterns
  • 🧠 Memory Efficient: Smart buffer pooling and optimized allocations

📦 Installation

go get github.com/aiagentinc/swre

🚀 Quick Start

Basic Usage
package main

import (
    "context"
    "fmt"
    "log"
    "time"
    
    "github.com/aiagentinc/swre"
    "go.uber.org/zap"
)

func main() {
    // Create logger (you can use any logger that implements swre.Logger)
    zapLogger, _ := zap.NewProduction()
    logger, _ := swre.NewZapAdapter(zapLogger)
    
    // Create storage backend
    storage, err := swre.NewBadgerStorage(context.Background(), 
        swre.DefaultBadgerConfig("/tmp/cache"))
    if err != nil {
        log.Fatal(err)
    }
    defer storage.Close()
    
    // Create SWR engine
    engine, err := swre.NewStaleEngine(storage, logger)
    if err != nil {
        log.Fatal(err)
    }
    defer engine.Shutdown()
    
    // Use the cache
    ctx := context.Background()
    entry, err := engine.Execute(ctx, "user:123", func() (interface{}, error) {
        // Simulate expensive operation (database call, API request, etc.)
        time.Sleep(100 * time.Millisecond)
        return map[string]interface{}{
            "id":   123,
            "name": "John Doe",
            "email": "john@example.com",
        }, nil
    })
    
    if err != nil {
        log.Fatal(err)
    }
    
    fmt.Printf("Cache Status: %s\n", entry.Status)
    fmt.Printf("Data: %s\n", string(entry.Value))
}
Type-Safe Usage
type User struct {
    ID    int    `json:"id"`
    Name  string `json:"name"`
    Email string `json:"email"`
}

var user User
err := engine.ExecuteGeneric(ctx, "user:123", &user, func() (interface{}, error) {
    return fetchUserFromDatabase(123)
})

if err != nil {
    log.Printf("Cache error: %v", err)
    return
}

fmt.Printf("User: %+v\n", user)

🔧 Advanced Configuration

Custom TTL Configuration
// Create logger
zapLogger, _ := zap.NewProduction()
logger, _ := swre.NewZapAdapter(zapLogger)

// Configure cache behavior with fine-grained TTL control
engine, err := swre.NewStaleEngineWithOptions(storage, logger,
    swre.WithCacheTTL(&swre.CacheTTL{
        FreshSeconds:   300,  // 5 minutes fresh
        StaleSeconds:   3600, // 1 hour stale (background refresh)
        ExpiredSeconds: 3600, // 1 hour expired (fallback)
    }),
    swre.WithMaxConcurrentRefreshes(100),
    swre.WithRefreshTimeout(10*time.Second),
)
Dynamic TTL Based on Content
// Create logger
zapLogger, _ := zap.NewProduction()
logger, _ := swre.NewZapAdapter(zapLogger)

// Customize TTL based on data characteristics
ttlCalc := &swre.DynamicTTLCalculator{
    Calculator: func(key string, value interface{}) (ttl, staleTTL time.Duration, err error) {
        switch v := value.(type) {
        case User:
            if v.IsActive {
                return 1 * time.Hour, 50 * time.Minute, nil
            }
            return 24 * time.Hour, 23 * time.Hour, nil
        case APIResponse:
            // Cache API responses based on their type
            if v.StatusCode == 200 {
                return 30 * time.Minute, 25 * time.Minute, nil
            }
            return 5 * time.Minute, 4 * time.Minute, nil
        default:
            return 15 * time.Minute, 10 * time.Minute, nil
        }
    },
}

engine, err := swre.NewStaleEngineWithOptions(storage, logger,
    swre.WithTTLCalculator(ttlCalc),
)
Circuit Breaker Protection
// Add fault tolerance with circuit breaker
cb := swre.NewCircuitBreaker(swre.CircuitBreakerConfig{
    Engine:           engine,
    FailureThreshold: 5,  // Open after 5 failures
    SuccessThreshold: 2,  // Close after 2 successes
    Timeout:          30 * time.Second,
    MaxHalfOpenReqs:  3,  // Allow 3 test requests in half-open state
})

// Use circuit breaker instead of engine directly
entry, err := cb.Execute(ctx, "api:weather", func() (interface{}, error) {
    return callExternalAPI()
})

if errors.Is(err, swre.ErrCircuitOpen) {
    // Circuit is open, use fallback data
    return getFallbackWeatherData()
}
Compression for Large Payloads
// Create logger
zapLogger, _ := zap.NewProduction()
logger, _ := swre.NewZapAdapter(zapLogger)

// Compress large responses to save memory and storage
compressedSerializer := swre.NewCompressedSerializer(&swre.JSONSerializer{})

engine, err := swre.NewStaleEngineWithOptions(storage, logger,
    swre.WithSerializer(compressedSerializer),
)
Custom Serialization
// Use Protocol Buffers or other serialization formats
type ProtobufSerializer struct{}

func (p *ProtobufSerializer) Marshal(v interface{}) ([]byte, error) {
    if pb, ok := v.(proto.Message); ok {
        return proto.Marshal(pb)
    }
    return nil, fmt.Errorf("value is not a protobuf message")
}

func (p *ProtobufSerializer) Unmarshal(data []byte, v interface{}) error {
    if pb, ok := v.(proto.Message); ok {
        return proto.Unmarshal(data, pb)
    }
    return fmt.Errorf("value is not a protobuf message")
}

📝 Flexible Logging

SWR Cache uses a generic Logger interface, allowing you to use any logging library:

import (
    "go.uber.org/zap"
    "github.com/aiagentinc/swre"
)

// Create zap logger
zapLogger, _ := zap.NewProduction()
logger, _ := swre.NewZapAdapter(zapLogger)

// Create engine with zap logger
engine, _ := swre.NewStaleEngine(storage, logger)
Using Standard Library slog
import (
    "log/slog"
    "github.com/aiagentinc/swre"
)

// Create slog logger
slogLogger := slog.Default()
logger, _ := swre.NewSlogAdapter(slogLogger)

// Create engine with slog logger
engine, _ := swre.NewStaleEngine(storage, logger)
Custom Logger Implementation
// Implement the Logger interface for any logging library
type MyLogger struct {
    // your logger implementation
}

func (l *MyLogger) Debug(msg string, fields ...swre.Field) { /* ... */ }
func (l *MyLogger) Info(msg string, fields ...swre.Field) { /* ... */ }
func (l *MyLogger) Warn(msg string, fields ...swre.Field) { /* ... */ }
func (l *MyLogger) Error(msg string, fields ...swre.Field) { /* ... */ }
func (l *MyLogger) Named(name string) swre.Logger { /* ... */ }

// Use your custom logger
logger := &MyLogger{}
engine, _ := swre.NewStaleEngine(storage, logger)
Testing with Mock Logger
// Use the provided test helpers
func TestMyCache(t *testing.T) {
    logger := swre.NewTestLogger(t)
    engine, _ := swre.NewStaleEngine(storage, logger)
    // ... your tests
}

// Or use a no-op logger for benchmarks
func BenchmarkCache(b *testing.B) {
    logger := swre.NewNoOpLogger()
    engine, _ := swre.NewStaleEngine(storage, logger)
    // ... your benchmark
}

📊 Cache Status Types

Understanding cache entry statuses helps with debugging and monitoring:

  • hit: Fresh cache hit (optimal performance)
  • stale: Stale content served, background refresh triggered
  • miss: Cache miss, fetched from source synchronously
  • expired: Expired content served while refreshing
  • fallback: Error occurred, serving expired content as last resort
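For example, a minimal sketch that branches on entry.Status after Execute (fetchUser is a hypothetical callback returning (interface{}, error)):

entry, err := engine.Execute(ctx, "user:123", fetchUser)
if err != nil {
    return err
}
switch entry.Status {
case "hit":
    // fresh data, nothing to do
case "stale", "expired":
    // served from cache while a refresh runs
case "fallback":
    // upstream failed; expired data served as a last resort
}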

🏗️ Storage Backends

Badger (Embedded)

High-performance embedded key-value store, perfect for single-node deployments:

// Production-optimized configuration
cfg := swre.DefaultBadgerConfig("/var/lib/myapp/cache")
cfg.Compression = options.ZSTD        // Enable compression
cfg.MemTableSize = 512 << 20          // 512MB memory tables
cfg.ValueLogFileSize = 2 << 30        // 2GB value log files
cfg.GCInterval = 5 * time.Minute      // Frequent garbage collection

storage, err := swre.NewBadgerStorage(ctx, cfg)
Custom Storage Backend

Implement the Storage interface for Redis, MongoDB, or any other backend:

type Storage interface {
    Get(ctx context.Context, key string) (*CacheEntry, error)
    Set(ctx context.Context, key string, value *CacheEntry) error
    Delete(ctx context.Context, key string) error
    Close() error
}

// Example Redis storage implementation
type RedisStorage struct {
    client *redis.Client
}

func (r *RedisStorage) Get(ctx context.Context, key string) (*CacheEntry, error) {
    data, err := r.client.Get(ctx, key).Result()
    if err == redis.Nil {
        return nil, swre.ErrNotFound
    }
    if err != nil {
        return nil, err
    }
    
    var entry swre.CacheEntry
    err = json.Unmarshal([]byte(data), &entry)
    return &entry, err
}
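A possible completion of the sketch above, assuming the go-redis client; the Redis TTL is derived from the entry's own expiry so the backend drops entries the engine would no longer serve:

func (r *RedisStorage) Set(ctx context.Context, key string, value *swre.CacheEntry) error {
    data, err := json.Marshal(value)
    if err != nil {
        return err
    }
    // ExpiresAfter is a Unix-millisecond timestamp on CacheEntry
    ttl := time.Until(time.UnixMilli(value.ExpiresAfter))
    if ttl <= 0 {
        ttl = time.Minute // assumption: keep briefly for fallback serving
    }
    return r.client.Set(ctx, key, data, ttl).Err()
}

func (r *RedisStorage) Delete(ctx context.Context, key string) error {
    return r.client.Del(ctx, key).Err()
}

func (r *RedisStorage) Close() error {
    return r.client.Close()
}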

📈 Monitoring & Metrics

Prometheus Integration
type PrometheusMetrics struct {
    hits    *prometheus.CounterVec
    misses  prometheus.Counter
    errors  prometheus.Counter
    latency prometheus.Histogram
}

func (p *PrometheusMetrics) RecordHit(key string, status string) {
    p.hits.WithLabelValues(status).Inc()
}

func (p *PrometheusMetrics) RecordMiss(key string) {
    p.misses.Inc()
}

func (p *PrometheusMetrics) RecordError(key string, err error) {
    p.errors.Inc()
}

func (p *PrometheusMetrics) RecordLatency(key string, duration time.Duration) {
    p.latency.Observe(duration.Seconds())
}

// Create logger
zapLogger, _ := zap.NewProduction()
logger, _ := swre.NewZapAdapter(zapLogger)

// Use with engine
engine, err := swre.NewStaleEngineWithOptions(storage, logger,
    swre.WithMetrics(prometheusMetrics),
)

🎯 Use Cases

API Response Caching
// Cache external API responses with smart TTL
engine.Execute(ctx, "weather:london", func() (interface{}, error) {
    return callWeatherAPI("London")
})
Database Query Caching
// Cache expensive database queries
engine.ExecuteGeneric(ctx, "users:active", &users, func() (interface{}, error) {
    return db.Query("SELECT * FROM users WHERE active = true")
})
Microservice Communication
// Cache service-to-service calls
engine.Execute(ctx, "inventory:product:123", func() (interface{}, error) {
    return inventoryService.GetProduct(123)
})
Static Content Caching
// Cache generated content (reports, images, etc.)
engine.Execute(ctx, "report:monthly:2024-01", func() (interface{}, error) {
    return generateMonthlyReport("2024-01")
})

⚡ Performance

Benchmarks

Our benchmarks show exceptional performance for cache operations:

BenchmarkStaleEngine_CacheHit-8           20000000    105 ns/op      0 B/op     0 allocs/op
BenchmarkStaleEngine_ConcurrentMiss-8      2000000   1053 ns/op    512 B/op     8 allocs/op
BenchmarkStaleEngine_StaleServing-8       15000000    120 ns/op      0 B/op     0 allocs/op
Performance Characteristics
  • 🏃‍♂️ Hot Path: Lock-free cache hits with only 2 atomic reads
  • 🔄 Concurrent Protection: Single upstream call per key via singleflight
  • 🧠 Memory Efficient: Smart buffer pooling reduces GC pressure
  • ⚖️ Load Balancing: Background refreshes prevent request clustering

💡 Best Practices

1. Key Design
// ✅ Good: Hierarchical keys
"user:profile:123"
"api:weather:london:current"
"db:products:category:electronics"

// ❌ Avoid: Flat keys without structure
"user123profile"
"weatherlondon"
2. TTL Strategy
// ✅ Match TTL to data characteristics
type TTLCalculator struct{}

func (t *TTLCalculator) CalculateTTL(key string, value interface{}) (time.Duration, time.Duration, error) {
    switch {
    case strings.HasPrefix(key, "user:session:"):
        return 30 * time.Minute, 25 * time.Minute, nil  // Short-lived
    case strings.HasPrefix(key, "config:"):
        return 24 * time.Hour, 23 * time.Hour, nil      // Stable data
    case strings.HasPrefix(key, "api:stock:"):
        return 1 * time.Minute, 50 * time.Second, nil   // High volatility
    default:
        return 15 * time.Minute, 10 * time.Minute, nil
    }
}
3. Error Handling
// ✅ Always handle errors gracefully
entry, err := engine.Execute(ctx, key, fetchFunc)
switch {
case err == nil:
    // Use cached data
    return entry.Value, nil
case errors.Is(err, swre.ErrCircuitOpen):
    // Circuit breaker is open, use fallback
    return getFallbackData(), nil
default:
    // Other errors, decide based on criticality
    log.Printf("Cache error: %v", err)
    return nil, err
}
4. Resource Management
// ✅ Always clean up resources
defer engine.Shutdown()  // Graceful shutdown
defer storage.Close()    // Close storage connections
5. Monitoring
// ✅ Track important metrics
type Metrics struct {
    HitRate    float64
    MissRate   float64
    ErrorRate  float64
    P99Latency time.Duration
}

// Monitor cache effectiveness
if hitRate < 0.8 {
    log.Printf("low cache hit rate (%.2f), consider tuning TTL", hitRate)
}

🚀 Production Configuration

// Create logger
zapLogger, _ := zap.NewProduction()
logger, _ := swre.NewZapAdapter(zapLogger)

// Production-ready configuration
cfg := &swre.EngineConfig{
    Storage: storage,
    Logger:  logger.Named("cache"),
    
    // Configure for your workload
    DefaultCacheTTL: &swre.CacheTTL{
        FreshSeconds:   1800,  // 30 minutes fresh
        StaleSeconds:   3600,  // 1 hour stale window  
        ExpiredSeconds: 1800,  // 30 minutes expired buffer
    },
    
    // Performance tuning
    MaxConcurrentRefreshes: 100,
    RefreshTimeout:         10 * time.Second,
    
    // Enable observability
    Metrics: prometheusMetrics,
    
    // Optimize for your data
    Serializer: swre.NewCompressedSerializer(&swre.JSONSerializer{}),
}

engine, err := swre.NewStaleEngineWithConfig(cfg)

🤝 Contributing

We welcome contributions! Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Add tests for new functionality
  4. Run tests: go test ./...
  5. Run benchmarks: go test -bench=. ./...
  6. Commit changes (git commit -m 'Add amazing feature')
  7. Push to branch (git push origin feature/amazing-feature)
  8. Open a Pull Request
Development Commands
# Run tests with race detection
go test -race ./...

# Run benchmarks
go test -bench=. -benchmem ./...

# Check coverage
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙋‍♂️ Support

Documentation

Overview

Package swre provides a high-performance, production-ready SWR cache implementation in Go.

Package swre provides test helpers for logger testing.

Index

Constants

This section is empty.

Variables

var (
	ErrCircuitOpen = errors.New("circuit breaker is open")
)
var (
	// ErrNilLogger is returned when a nil logger is provided
	ErrNilLogger = errors.New("logger cannot be nil")
)

Common errors

var ErrNotFound = errors.New("key not found in storage")

ErrNotFound is returned when a key is not found in the storage.

Functions

func NowUnixMilli

func NowUnixMilli() int64

NowUnixMilli returns current time in Unix milliseconds with intelligent caching. This function optimizes for high-throughput scenarios by caching the current time for up to 10ms to reduce expensive syscall overhead.

Performance Characteristics:

  • Under high load (>100K ops/sec), reduces syscalls by ~90%
  • Maintains accuracy within 10ms for most operations
  • Thread-safe through atomic operations
  • Zero allocation in the fast path

Usage Guidelines:

  • Use for performance-critical paths and logging
  • For precise TTL calculations, consider using time.Now() directly
  • The 10ms tolerance is acceptable for most caching scenarios
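A small sketch (entry and logger are assumed to be in scope) using the cached clock to log an entry's age without an extra syscall per lookup:

ageMs := swre.NowUnixMilli() - entry.CreatedAt // CreatedAt is Unix milliseconds
if ageMs > 60_000 {
    logger.Debug("serving old entry", swre.Int64("age_ms", ageMs))
}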

func WithWriteBatch

func WithWriteBatch(ctx context.Context, wb *badger.WriteBatch) context.Context

Types

type BadgerConfig

type BadgerConfig struct {
	// Storage paths
	Dir      string // Directory for LSM tree and metadata
	ValueDir string // Directory for value log files (can be same as Dir)

	// Write behavior
	SyncWrites bool // Whether to sync writes to disk immediately (impacts performance)

	// Compression settings
	Compression        options.CompressionType // Compression algorithm (None, Snappy, ZSTD)
	ZSTDCompressionLvl int                     // ZSTD compression level (1-22, higher = better compression, slower)

	// Conflict detection (expensive, typically disabled for cache workloads)
	DetectConflicts bool

	// Memory and storage tuning
	ValueThreshold int   // Size threshold for storing values in value log vs LSM tree
	MemTableSize   int64 // Size of each memtable before flushing to disk
	IndexCacheSize int64 // Size of block cache for index blocks (0 = disabled)
	BlockCacheSize int64 // Size of block cache for data blocks (0 = disabled)
	MaxTableSize   int64 // Maximum size of each SST file
	NumCompactors  int   // Number of concurrent compaction goroutines

	// Value log configuration
	ValueLogFileSize int64 // Maximum size of each value log file

	// Garbage collection settings
	GCInterval     time.Duration // How often to run value log GC
	GCDiscardRatio float64       // Minimum ratio of discardable data to trigger GC

	// Logging (optional)
	Logger badger.Logger // Custom logger for Badger operations (can be nil)
}

BadgerConfig provides comprehensive configuration for Badger v4 storage backend. This configuration is optimized for high-throughput DNS caching workloads with careful tuning of memory usage, compression, and garbage collection.

Performance Considerations:

  • Memory tables and caches sized for DNS response patterns
  • Compression disabled by default for latency optimization
  • Sync writes disabled for maximum throughput (relies on OS buffering)
  • Value log GC tuned for write-heavy workloads

func DefaultBadgerConfig

func DefaultBadgerConfig(dir string) BadgerConfig

DefaultBadgerConfig returns a production-ready configuration optimized for DNS caching. These defaults are tuned for high-throughput, low-latency cache workloads with typical DNS response sizes (mostly under 4KB).

Configuration Rationale:

  • No compression: Prioritizes latency over storage efficiency
  • Large memtables (256MB): Reduces write amplification for high write rates
  • No block caching: DNS workloads typically don't benefit from read caching
  • Aggressive GC: Prevents value log bloat in write-heavy scenarios
  • Compactor count = CPU cores: Utilizes available parallelism for compaction

type CacheEntry

type CacheEntry struct {
	// Key stores the cache key for debugging and logging purposes
	Key string `json:"key"`

	// Value contains the serialized cached data as bytes
	// This allows storage of any data type after serialization
	Value []byte `json:"value"`

	// CreatedAt is the Unix millisecond timestamp when this entry was created
	// Used for metrics, debugging, and age-based eviction policies
	CreatedAt int64 `json:"created_at"`

	// ExpiresAfter is the Unix millisecond timestamp when the entry expires completely
	// After this time, the entry may still be served as a fallback during refresh failures
	ExpiresAfter int64 `json:"expires_after"`

	// StaleAfter is the Unix millisecond timestamp when the entry becomes stale
	// Between StaleAfter and ExpiresAfter, the entry triggers background refresh
	StaleAfter int64 `json:"stale_after"`

	// IsShared indicates if this entry is being accessed by multiple goroutines
	// Used for optimization decisions and concurrent access patterns
	IsShared bool `json:"is_shared"`

	// Status indicates the current state of the cache entry
	// Common values: "hit", "miss", "stale", "expired", "refresh", "fallback"
	Status string `json:"status"`
}

CacheEntry represents a cache entry with full SWR (Stale-While-Revalidate) metadata. This structure encapsulates all information needed to implement SWR caching logic including precise timing control and status tracking.

SWR Lifecycle:

  1. Fresh: 0 < now < StaleAfter (serve immediately)
  2. Stale: StaleAfter < now < ExpiresAfter (serve + background refresh)
  3. Expired: ExpiresAfter < now (optional serve + foreground refresh)
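A short sketch of checking those phases with the entry's own helpers (a usage illustration, not engine internals):

nowMs := swre.NowUnixMilli()
switch {
case !entry.IsStaleAt(nowMs):
    // fresh: serve immediately
case !entry.IsExpiredAt(nowMs):
    // stale: serve cached value and trigger a background refresh
default:
    // expired: refresh in the foreground, optionally serve as fallback
}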

func NewCacheEntry deprecated

func NewCacheEntry(key string, val interface{}) (*CacheEntry, error)

Deprecated: Use NewCacheEntryWithTTL instead. This function is kept for backward compatibility but will be removed in a future version.

func NewCacheEntryWithTTL

func NewCacheEntryWithTTL(key string, value []byte, ttl, staleTTL time.Duration) *CacheEntry

NewCacheEntryWithTTL creates a new cache entry with specified TTL values. This constructor properly initializes all SWR-related timestamps and applies sensible defaults to prevent degenerate cache behavior.

Parameters:

  • key: Cache key for identification and debugging
  • value: Serialized data to cache
  • ttl: Total time-to-live (fresh + stale + expired periods)
  • staleTTL: Time after which entry becomes stale (triggers background refresh)

TTL Validation:

  • Minimum TTL of 5 seconds enforced to prevent cache thrashing
  • Negative staleTTL values are normalized to prevent timing issues
  • staleTTL is clamped to be <= ttl for logical consistency

func (*CacheEntry) CloneWithStatus

func (e *CacheEntry) CloneWithStatus(status string) *CacheEntry

CloneWithStatus creates a deep copy of the CacheEntry with a new status. This method ensures thread-safety by providing each caller with independent data structures, preventing race conditions in concurrent environments.

Performance Notes:

  • Uses bytes.Clone() (Go 1.20+) for optimal slice copying
  • Struct copy is efficient (~64 bytes of fixed-size data)
  • Zero allocation for empty Value slices
  • Each clone is fully independent (no shared references)

Thread Safety:

  • 100% race-free operation
  • No shared mutable state between clones
  • Safe for concurrent read access to original and clones

func (*CacheEntry) IsExpired

func (e *CacheEntry) IsExpired() bool

IsExpired checks if the entry is expired now. Uses fresh time for accurate TTL checking.

func (*CacheEntry) IsExpiredAt

func (e *CacheEntry) IsExpiredAt(nowMs int64) bool

IsExpiredAt checks if entry is expired at the given time

func (*CacheEntry) IsStale

func (e *CacheEntry) IsStale() bool

IsStale checks if the entry is stale now. Uses fresh time for accurate TTL checking.

func (*CacheEntry) IsStaleAt

func (e *CacheEntry) IsStaleAt(nowMs int64) bool

IsStaleAt checks if entry is stale at the given time

func (*CacheEntry) Marshal

func (e *CacheEntry) Marshal() ([]byte, error)

Marshal serializes the cache entry to JSON with buffer pooling

func (*CacheEntry) Unmarshal

func (e *CacheEntry) Unmarshal(data []byte) error

Unmarshal deserializes JSON data into the cache entry

type CacheKey

type CacheKey struct {
	Key string
	TTL *CacheTTL // nil means use engine defaults
}

CacheKey represents a cache key with optional TTL overrides

type CacheMetrics

type CacheMetrics interface {
	// RecordHit tracks cache hit events with status details.
	// Status values: "hit", "stale", "expired", "refresh", "fallback"
	RecordHit(key string, status string)

	// RecordMiss tracks cache miss events requiring upstream fetch.
	RecordMiss(key string)

	// RecordError tracks error events for reliability monitoring.
	RecordError(key string, err error)

	// RecordLatency tracks operation timing for performance analysis.
	RecordLatency(key string, duration time.Duration)
}

CacheMetrics defines the interface for cache observability. This abstraction enables pluggable metrics collection for monitoring cache performance, hit rates, and error patterns.

Metrics Categories:

  • Hit/Miss ratios for cache effectiveness
  • Latency distributions for performance monitoring
  • Error rates and types for reliability tracking
  • Per-key patterns for hotspot identification

Implementation Guidelines:

  • Methods should be non-blocking and fast
  • Consider sampling for high-frequency operations
  • Include relevant labels/tags for aggregation
  • Handle nil/invalid inputs gracefully

type CacheTTL

type CacheTTL struct {
	// FreshSeconds is how long the cache entry is considered fresh
	FreshSeconds int

	// StaleSeconds is how long the cache entry can be served stale while refreshing
	StaleSeconds int

	// ExpiredSeconds is additional time before the entry is completely removed
	ExpiredSeconds int
}

CacheTTL defines cache TTL settings in seconds. The relationship between TTLs:

  • Fresh period: 0 to FreshSeconds (cache returns immediately)
  • Stale period: FreshSeconds to (FreshSeconds + StaleSeconds) (returns stale + triggers refresh)
  • Total lifetime: FreshSeconds + StaleSeconds + ExpiredSeconds (data is removed after this)

func (*CacheTTL) ToEngineTTLs

func (c *CacheTTL) ToEngineTTLs() (time.Duration, time.Duration)

ToEngineTTLs converts CacheTTL to the internal TTL format used by the engine. Returns (totalTTL, freshDuration).
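A worked example of that conversion, assuming the mapping the comments above describe (total lifetime is the sum of all three phases; the fresh duration is FreshSeconds alone):

ttl := &swre.CacheTTL{FreshSeconds: 300, StaleSeconds: 3600, ExpiredSeconds: 1800}
total, fresh := ttl.ToEngineTTLs()
// total should be 5700s (300+3600+1800) and fresh 300s, per the relationship above
fmt.Println(total, fresh)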

type CircuitBreaker

type CircuitBreaker struct {
	// contains filtered or unexported fields
}

CircuitBreaker wraps StaleEngine with circuit breaker pattern

func NewCircuitBreaker

func NewCircuitBreaker(cfg CircuitBreakerConfig) *CircuitBreaker

NewCircuitBreaker creates a new circuit breaker wrapper

func (*CircuitBreaker) Execute

Execute wraps StaleEngine.Execute with circuit breaker logic

func (*CircuitBreaker) ExecuteGeneric

func (cb *CircuitBreaker) ExecuteGeneric(ctx context.Context, key string, result interface{}, fn StaleEngineCallback) error

ExecuteGeneric wraps StaleEngine.ExecuteGeneric with circuit breaker logic

func (*CircuitBreaker) GetState

func (cb *CircuitBreaker) GetState() string

GetState returns the current circuit breaker state

func (*CircuitBreaker) Reset

func (cb *CircuitBreaker) Reset()

Reset resets the circuit breaker to closed state

type CircuitBreakerConfig

type CircuitBreakerConfig struct {
	Engine           *StaleEngine
	FailureThreshold int32         // Failures before opening circuit
	SuccessThreshold int32         // Successes in half-open before closing
	Timeout          time.Duration // Time before trying half-open
	MaxHalfOpenReqs  int32         // Max concurrent requests in half-open state
	MaxKeyBreakers   int           // Maximum number of per-key circuit breakers (default: 10000)
}

CircuitBreakerConfig holds circuit breaker configuration

type CircuitState

type CircuitState int32

CircuitState represents the state of a circuit breaker

const (
	CircuitClosed CircuitState = iota
	CircuitOpen
	CircuitHalfOpen
)

type CompressedSerializer

type CompressedSerializer struct {
	Inner Serializer // Underlying serializer to compress
	Level int        // gzip compression level (1=fast, 9=best compression)
}

CompressedSerializer adds gzip compression to any underlying serializer. This decorator pattern allows compression to be layered onto existing serialization strategies for size-sensitive scenarios.

Use Cases:

  • Large DNS responses (e.g., TXT records with certificates)
  • Memory-constrained environments
  • Network bandwidth optimization
  • Long-term storage of large cache entries

Performance Trade-offs:

  • Significant CPU overhead during serialization/deserialization
  • Reduced memory usage and network transfer
  • Compression ratio varies by data type (JSON compresses well)

func NewCompressedSerializer

func NewCompressedSerializer(inner Serializer) *CompressedSerializer

func (*CompressedSerializer) Marshal

func (c *CompressedSerializer) Marshal(v interface{}) ([]byte, error)

func (*CompressedSerializer) Unmarshal

func (c *CompressedSerializer) Unmarshal(data []byte, v interface{}) error

type DefaultTTLCalculator

type DefaultTTLCalculator struct {
	TTL      time.Duration // Total time-to-live for cache entries
	StaleTTL time.Duration // Fresh period before background refresh
}

DefaultTTLCalculator provides static TTL values for all cache entries. This is the simplest TTL strategy, using fixed durations regardless of key patterns or content characteristics.

Configuration:

  • TTL: Total cache entry lifetime
  • StaleTTL: Fresh period before stale-while-revalidate

Suitable for:

  • Uniform caching policies
  • Simple deployment scenarios
  • When content characteristics don't vary significantly
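A minimal sketch wiring the static calculator into an engine via the option shown earlier; the durations are illustrative:

calc := &swre.DefaultTTLCalculator{
    TTL:      15 * time.Minute, // total lifetime
    StaleTTL: 10 * time.Minute, // fresh period before background refresh
}
engine, err := swre.NewStaleEngineWithOptions(storage, logger,
    swre.WithTTLCalculator(calc),
)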

func (*DefaultTTLCalculator) CalculateTTL

func (d *DefaultTTLCalculator) CalculateTTL(key string, value interface{}) (time.Duration, time.Duration, error)

type DynamicTTLCalculator

type DynamicTTLCalculator struct {
	// Calculator function implementing custom TTL logic
	Calculator func(key string, value interface{}) (ttl time.Duration, staleTTL time.Duration, err error)
}

DynamicTTLCalculator enables custom TTL logic via a user-provided function. This flexible approach allows TTL decisions based on content analysis, key patterns, external factors, or complex business logic.

Function Signature:

  • Input: cache key and value for context
  • Output: total TTL, stale TTL, and optional error
  • Error triggers fallback to default TTL values

Example Use Cases:

  • DNS record type-specific TTLs (A=1h, MX=24h, TXT=10m)
  • Size-based TTL (larger responses cached longer)
  • Time-of-day adjustments (shorter TTL during peak hours)
  • External API-driven TTL decisions

func (*DynamicTTLCalculator) CalculateTTL

func (d *DynamicTTLCalculator) CalculateTTL(key string, value interface{}) (time.Duration, time.Duration, error)

type EngineConfig

type EngineConfig struct {
	// Core required components
	Storage Storage // Backend storage implementation
	Logger  Logger  // Structured logger for operations

	// Legacy TTL configuration (deprecated, use DefaultCacheTTL)
	DefaultTTL         time.Duration // Total cache lifetime
	DefaultStaleOffset time.Duration // Offset from total TTL to stale time

	// Modern TTL configuration with explicit phases
	DefaultCacheTTL *CacheTTL // Structured TTL with fresh/stale/expired periods

	// Pluggable components for extensibility
	Serializer       Serializer       // Value serialization strategy
	TTLCalculator    TTLCalculator    // Dynamic TTL calculation
	ValueTransformer ValueTransformer // Value preprocessing pipeline
	Metrics          CacheMetrics     // Observability and monitoring

	// Performance and concurrency controls
	MaxConcurrentRefreshes int           // Limit for background refresh goroutines
	RefreshTimeout         time.Duration // Individual refresh operation timeout
}

EngineConfig provides comprehensive configuration for StaleEngine initialization. This structure centralizes all tunable parameters for cache behavior, performance characteristics, and operational features.

Configuration Categories:

  • Core: Required components (Storage, Logger)
  • TTL: Cache lifetime management
  • Extensibility: Pluggable components for customization
  • Performance: Concurrency and timeout controls

Usage Patterns:

  • Use SetDefaults() to apply sensible defaults
  • Override specific fields for custom behavior
  • Validate configuration before engine creation

func (*EngineConfig) SetDefaults

func (c *EngineConfig) SetDefaults()

SetDefaults applies default values to config

type Field

type Field interface {
	// Key returns the field's key/name.
	Key() string

	// Value returns the field's value.
	// The type of the value depends on the field type.
	Value() interface{}

	// Type returns the field type for optimized handling.
	Type() FieldType
}

Field represents a structured logging field with a key-value pair. Implementations of this interface can optimize storage and serialization based on the specific logging library being used.

func Any

func Any(key string, val interface{}) Field

Any creates a field with any type of value. This should be used sparingly as it may be less efficient than typed fields.

func ByteString

func ByteString(key string, val []byte) Field

ByteString creates a byte string field. This is useful for logging binary data that should be treated as a string.

func Duration

func Duration(key string, val time.Duration) Field

Duration creates a time.Duration field.

func Error

func Error(err error) Field

Error creates an error field with the key "error". If err is nil, it returns a field with a nil value.

func ErrorKey

func ErrorKey(key string, err error) Field

ErrorKey creates an error field with a custom key.

func Int

func Int(key string, val int) Field

Int creates an integer field.

func Int32

func Int32(key string, val int32) Field

Int32 creates a 32-bit integer field.

func Int64

func Int64(key string, val int64) Field

Int64 creates a 64-bit integer field.

func Stack

func Stack(val string) Field

Stack creates a stack trace field with the key "stacktrace". The value should be a string representation of the stack trace.

func StackKey

func StackKey(key string, val string) Field

StackKey creates a stack trace field with a custom key.

func String

func String(key, val string) Field

String creates a string field.

func Time

func Time(key string, val time.Time) Field

Time creates a time.Time field.
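A short sketch combining the typed constructors above in a single log call; key and took are assumed local variables:

logger.Info("cache refresh complete",
    swre.String("key", key),
    swre.Duration("took", took),
    swre.Int("attempt", 1),
)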

type FieldType

type FieldType int

FieldType represents the type of a logging field. This allows logging adapters to optimize serialization.

const (
	// FieldTypeUnknown indicates an unknown field type.
	FieldTypeUnknown FieldType = iota
	// FieldTypeString indicates a string field.
	FieldTypeString
	// FieldTypeInt indicates an integer field.
	FieldTypeInt
	// FieldTypeInt32 indicates a 32-bit integer field.
	FieldTypeInt32
	// FieldTypeInt64 indicates a 64-bit integer field.
	FieldTypeInt64
	// FieldTypeDuration indicates a time.Duration field.
	FieldTypeDuration
	// FieldTypeTime indicates a time.Time field.
	FieldTypeTime
	// FieldTypeError indicates an error field.
	FieldTypeError
	// FieldTypeAny indicates a field with any type.
	FieldTypeAny
	// FieldTypeByteString indicates a byte string field.
	FieldTypeByteString
	// FieldTypeStack indicates a stack trace field.
	FieldTypeStack
)

type GobSerializer

type GobSerializer struct{}

GobSerializer implements Go-specific binary serialization using encoding/gob. This serializer is more efficient than JSON for Go-native types but sacrifices cross-language compatibility.

Trade-offs:

  • Faster serialization for complex Go structures
  • More compact binary representation
  • Go-specific format (not interoperable with other languages)
  • Requires type registration for complex interfaces

Best suited for:

  • Internal Go-to-Go communication
  • Complex nested structures
  • When binary size matters more than interoperability

func (*GobSerializer) Marshal

func (g *GobSerializer) Marshal(v interface{}) ([]byte, error)

func (*GobSerializer) Unmarshal

func (g *GobSerializer) Unmarshal(data []byte, v interface{}) error
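A sketch selecting gob encoding for a Go-only deployment; note that values stored behind interfaces may need gob.Register, per encoding/gob's usual rules:

engine, err := swre.NewStaleEngineWithOptions(storage, logger,
    swre.WithSerializer(&swre.GobSerializer{}),
)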

type JSONSerializer

type JSONSerializer struct{}

JSONSerializer implements high-performance JSON serialization using jsoniter. This serializer provides 2-3x better performance than the standard library while maintaining full compatibility with encoding/json.

Performance Characteristics:

  • Optimized for speed over memory usage
  • Uses jsoniter.ConfigFastest for maximum throughput
  • Suitable for high-frequency cache operations
  • Thread-safe for concurrent use

func (*JSONSerializer) Marshal

func (j *JSONSerializer) Marshal(v interface{}) ([]byte, error)

func (*JSONSerializer) Unmarshal

func (j *JSONSerializer) Unmarshal(data []byte, v interface{}) error

type LogCall

type LogCall struct {
	Message string
	Fields  []Field
}

LogCall represents a single log method call.

type Logger

type Logger interface {
	// Debug logs a message at debug level with optional structured fields.
	// Debug messages are typically used for detailed troubleshooting information.
	Debug(msg string, fields ...Field)

	// Info logs a message at info level with optional structured fields.
	// Info messages are used for general informational messages.
	Info(msg string, fields ...Field)

	// Warn logs a message at warning level with optional structured fields.
	// Warning messages indicate potentially harmful situations.
	Warn(msg string, fields ...Field)

	// Error logs a message at error level with optional structured fields.
	// Error messages indicate error conditions that should be investigated.
	Error(msg string, fields ...Field)

	// Named creates a new Logger instance with the specified name.
	// This is useful for creating subsystem-specific loggers with a common prefix.
	// The behavior of naming is implementation-specific (e.g., dot-separated, slash-separated).
	Named(name string) Logger
}

Logger defines a structured logging interface that can be implemented by various logging libraries. This interface is designed to provide structured logging capabilities while maintaining flexibility for users to choose their preferred logging implementation.

The interface follows common logging patterns and supports the most frequently used log levels and field types required by the SWR cache engine.

func NewBenchmarkLogger

func NewBenchmarkLogger() Logger

NewBenchmarkLogger creates a no-op logger suitable for benchmarks. This avoids the overhead of logging during performance tests.

func NewDevelopmentLogger

func NewDevelopmentLogger() Logger

NewDevelopmentLogger creates a logger suitable for development and debugging. This logger outputs human-readable logs that are helpful during development.

func NewNoOpLogger

func NewNoOpLogger() Logger

NewNoOpLogger creates a new no-op logger.

func NewProductionLogger

func NewProductionLogger() Logger

NewProductionLogger creates a logger suitable for production use. This logger outputs structured JSON logs optimized for production environments.

func NewSafeTestLogger

func NewSafeTestLogger(t *testing.T) Logger

NewSafeTestLogger creates a logger that safely handles logging after test completion. This is useful when dealing with background goroutines that might log after the test ends.

func NewTestLogger

func NewTestLogger(t *testing.T) Logger

NewTestLogger creates a logger suitable for testing. This provides a simple way to create loggers in tests that work with both the old zap-based API and the new Logger interface.

type NoOpLogger

type NoOpLogger struct{}

NoOpLogger is a logger implementation that discards all log messages. It's useful for testing or when logging needs to be disabled.

func (NoOpLogger) Debug

func (n NoOpLogger) Debug(msg string, fields ...Field)

Debug implements Logger.Debug.

func (NoOpLogger) Error

func (n NoOpLogger) Error(msg string, fields ...Field)

Error implements Logger.Error.

func (NoOpLogger) Info

func (n NoOpLogger) Info(msg string, fields ...Field)

Info implements Logger.Info.

func (NoOpLogger) Named

func (n NoOpLogger) Named(name string) Logger

Named implements Logger.Named.

func (NoOpLogger) Warn

func (n NoOpLogger) Warn(msg string, fields ...Field)

Warn implements Logger.Warn.

type NoOpMetrics

type NoOpMetrics struct{}

NoOpMetrics provides a null implementation of CacheMetrics. This allows metrics collection to be optional while maintaining a consistent interface throughout the caching system.

Behavior:

  • All metric recording methods are no-ops
  • Zero performance overhead
  • Useful for testing or when metrics aren't needed
  • Can be replaced with a real metrics implementation later

func (*NoOpMetrics) RecordError

func (n *NoOpMetrics) RecordError(key string, err error)

func (*NoOpMetrics) RecordHit

func (n *NoOpMetrics) RecordHit(key string, status string)

func (*NoOpMetrics) RecordLatency

func (n *NoOpMetrics) RecordLatency(key string, duration time.Duration)

func (*NoOpMetrics) RecordMiss

func (n *NoOpMetrics) RecordMiss(key string)

type NoOpTransformer

type NoOpTransformer struct{}

NoOpTransformer provides a pass-through implementation of ValueTransformer. This null object pattern allows the transformer interface to be optional while maintaining a consistent API.

Behavior:

  • Transform: Returns input value unchanged
  • Restore: Returns input data unchanged
  • No processing overhead
  • Useful as a default or placeholder implementation

func (*NoOpTransformer) Restore

func (n *NoOpTransformer) Restore(ctx context.Context, key string, data []byte) (interface{}, error)

func (*NoOpTransformer) Transform

func (n *NoOpTransformer) Transform(ctx context.Context, key string, value interface{}) (interface{}, error)

type Option

type Option func(*StaleEngine)

Option configures a StaleEngine. All options are purely additive and do not break existing callers using NewStaleEngine.

Example:

eng, _ := NewStaleEngineWithOptions(storage, logger,
    WithCacheTTL(&CacheTTL{FreshSeconds: 300, StaleSeconds: 3600}),
    WithMetrics(myMetrics),
)

Exposing Option publicly is optional; keep it internal if you do not want to encourage tuning.

func WithCacheTTL

func WithCacheTTL(ttl *CacheTTL) Option

WithCacheTTL sets the default cache TTL configuration

func WithMaxConcurrentRefreshes

func WithMaxConcurrentRefreshes(max int) Option

WithMaxConcurrentRefreshes sets the maximum concurrent background refreshes

func WithMetrics

func WithMetrics(m CacheMetrics) Option

WithMetrics sets a custom metrics collector

func WithRefreshTimeout

func WithRefreshTimeout(timeout time.Duration) Option

WithRefreshTimeout sets the timeout for refresh operations

func WithSerializer

func WithSerializer(s Serializer) Option

WithSerializer sets a custom serializer

func WithTTLCalculator

func WithTTLCalculator(c TTLCalculator) Option

WithTTLCalculator sets a custom TTL calculator

func WithValueTransformer

func WithValueTransformer(t ValueTransformer) Option

WithValueTransformer sets a custom value transformer

type RefreshTracker

type RefreshTracker struct {
	// contains filtered or unexported fields
}

RefreshTracker prevents duplicate background refresh operations through time-based tracking with automatic memory leak prevention.

Key Features:

  • Thread-safe tracking of active refresh operations
  • Automatic cleanup of expired tracking entries
  • Memory-efficient with bounded growth
  • Graceful shutdown support
  • Sharded design for high concurrency performance

Performance Optimizations:

  • Uses 16 shards to reduce lock contention
  • Incremental cleanup to avoid blocking all operations
  • Read-write mutex per shard for optimal read performance

Design Rationale:

  • Uses time-based expiration instead of explicit cleanup
  • Background cleanup goroutine prevents unbounded memory growth
  • Sharding reduces lock contention by 16x
  • Incremental cleanup processes one shard at a time

func NewRefreshTracker

func NewRefreshTracker(ttl time.Duration) *RefreshTracker

NewRefreshTracker creates a new refresh tracker with automatic cleanup. The TTL parameter controls how long refresh operations are tracked, which affects both memory usage and the precision of duplicate detection.

TTL Guidelines:

  • Too short: May allow duplicate refreshes for slow operations
  • Too long: Increases memory usage and cleanup overhead
  • Recommended: 2-5x your typical refresh timeout
  • Default: 5 minutes for most caching scenarios

func (*RefreshTracker) Delete

func (rt *RefreshTracker) Delete(key string)

Delete removes a key from the refresh tracker.

func (*RefreshTracker) Get

func (rt *RefreshTracker) Get(key string) bool

Get checks if a key is currently being refreshed without side effects. This read-only operation uses shared locking for optimal concurrency in scenarios with frequent refresh status checks.

Performance Characteristics:

  • Uses read lock for minimal contention with other readers
  • O(1) lookup time with map-based storage
  • Automatic expiration check without cleanup overhead
  • Safe for high-frequency polling

Use Cases:

  • Quick duplicate refresh detection
  • Monitoring and debugging refresh patterns
  • Load balancing decisions based on refresh status

func (*RefreshTracker) Size

func (rt *RefreshTracker) Size() int

Size returns the current number of tracked refresh operations. This method is primarily intended for monitoring, debugging, and capacity planning purposes.

Monitoring Uses:

  • Track refresh concurrency patterns
  • Detect potential memory leaks
  • Monitor system load and refresh rates
  • Validate cleanup effectiveness

Note: The returned value is a snapshot and may change immediately after the call returns due to concurrent operations.

func (*RefreshTracker) Stop

func (rt *RefreshTracker) Stop()

Stop gracefully shuts down the refresh tracker and its cleanup goroutine.

func (*RefreshTracker) TrySet

func (rt *RefreshTracker) TrySet(key string) bool

TrySet atomically attempts to mark a key as being refreshed. This method implements optimistic concurrency control to prevent duplicate refresh operations while allowing expired entries to be reused.

Return Values:

  • true: Successfully marked key for refresh (caller should proceed)
  • false: Key already being refreshed by another goroutine (caller should skip)

Race Condition Handling:

  • Uses per-shard locking to minimize contention
  • Automatically handles expired entries without external cleanup
  • Provides strong consistency guarantees for refresh coordination
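A sketch of the coordination pattern TrySet enables; rt is a shared tracker and refreshKey is a hypothetical refresh function:

if rt.TrySet(key) {
    go func() {
        defer rt.Delete(key) // release the slot when the refresh finishes
        refreshKey(key)      // perform the actual upstream refresh
    }()
}
// otherwise another goroutine already owns the refresh; skip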

type Serializer

type Serializer interface {
	// Marshal converts a Go value to bytes for storage
	Marshal(v interface{}) ([]byte, error)

	// Unmarshal converts stored bytes back to a Go value
	Unmarshal(data []byte, v interface{}) error
}

Serializer defines the interface for cache value serialization. This abstraction allows pluggable serialization strategies optimized for different data types and performance requirements.

Implementation Guidelines:

  • Should be thread-safe for concurrent use
  • Marshal should handle nil values gracefully
  • Unmarshal should validate input data
  • Consider compression for large payloads

type SlogAdapter

type SlogAdapter struct {
	// contains filtered or unexported fields
}

SlogAdapter adapts Go's standard slog.Logger to implement the Logger interface. This adapter enables using the standard library's structured logger with SWR cache.

func NewSlogAdapter

func NewSlogAdapter(logger *slog.Logger) (*SlogAdapter, error)

NewSlogAdapter creates a new SlogAdapter from an existing slog.Logger.

func (*SlogAdapter) Debug

func (s *SlogAdapter) Debug(msg string, fields ...Field)

Debug logs a message at debug level with optional structured fields.

func (*SlogAdapter) Error

func (s *SlogAdapter) Error(msg string, fields ...Field)

Error logs a message at error level with optional structured fields.

func (*SlogAdapter) Info

func (s *SlogAdapter) Info(msg string, fields ...Field)

Info logs a message at info level with optional structured fields.

func (*SlogAdapter) Named

func (s *SlogAdapter) Named(name string) Logger

Named creates a new Logger instance with the specified name.

func (*SlogAdapter) Warn

func (s *SlogAdapter) Warn(msg string, fields ...Field)

Warn logs a message at warning level with optional structured fields.

type StaleEngine

type StaleEngine struct {
	// contains filtered or unexported fields
}

StaleEngine implements SWR caching logic that scales to massive concurrency. All public methods are goroutine‑safe.

Storage is an application‑defined interface (not included here) providing Get and Set semantics.

func NewStaleEngine

func NewStaleEngine(storage Storage, logger Logger) (*StaleEngine, error)

NewStaleEngine constructs an engine with default parameters.

func NewStaleEngineWithConfig

func NewStaleEngineWithConfig(cfg *EngineConfig) (*StaleEngine, error)

NewStaleEngineWithConfig creates engine with full configuration

func NewStaleEngineWithOptions

func NewStaleEngineWithOptions(storage Storage, logger Logger, opts ...Option) (*StaleEngine, error)

NewStaleEngineWithOptions allows fine‑grained tuning via functional options.

func (*StaleEngine) Execute

func (e *StaleEngine) Execute(ctx context.Context, keyOrCacheKey interface{}, fn StaleEngineCallback) (*CacheEntry, error)

Execute returns the cached (or freshly fetched) value for key. Supports both string keys and CacheKey with TTL overrides.

Guarantees:

  • Single upstream call per key at any time.
  • Upstream call continues even if the original caller cancels context.
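A sketch of the CacheKey form, overriding the engine's default TTL for a single hot key; fetchQuote is a hypothetical fetcher:

entry, err := engine.Execute(ctx, swre.CacheKey{
    Key: "api:stock:AAPL",
    TTL: &swre.CacheTTL{FreshSeconds: 60, StaleSeconds: 300},
}, func() (interface{}, error) {
    return fetchQuote("AAPL")
})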

func (*StaleEngine) ExecuteGeneric

func (e *StaleEngine) ExecuteGeneric(ctx context.Context, key string, result interface{}, fn StaleEngineCallback) error

ExecuteGeneric is a high-level method that handles serialization/deserialization automatically

func (*StaleEngine) GetCacheEntry

func (e *StaleEngine) GetCacheEntry(ctx context.Context, key string) (*CacheEntry, error)

GetCacheEntry retrieves a cache entry without triggering refresh

func (*StaleEngine) Shutdown

func (e *StaleEngine) Shutdown()

Shutdown gracefully shuts down the StaleEngine, cancelling all background refreshes

type StaleEngineCallback

type StaleEngineCallback func() (interface{}, error)

StaleEngineCallback represents the user‑supplied function to fetch fresh data. It must be idempotent and safe to call concurrently.

The engine recovers panics inside the callback and returns them as errors.

type Storage

type Storage interface {
	// Get retrieves a cache entry by key.
	// Returns ErrNotFound if the key does not exist.
	// The returned CacheEntry contains all metadata needed for SWR logic.
	Get(ctx context.Context, key string) (*CacheEntry, error)

	// Set stores a cache entry with the key.
	// The storage implementation should respect the context for timeout control.
	// TTL is managed at the CacheEntry level, not the storage level.
	Set(ctx context.Context, key string, value *CacheEntry) error

	// Delete removes a key from the storage.
	// Should be idempotent (no error if key doesn't exist).
	Delete(ctx context.Context, key string) error

	// Close gracefully shuts down the storage and releases resources.
	// Should be safe to call multiple times.
	Close() error
}

Storage defines the interface for the underlying cache storage layer. This interface provides a clean abstraction over different storage backends (e.g., Badger, Redis, memory maps) while maintaining high performance.

Key Design Principles:

  • Context-aware operations for timeout and cancellation support
  • Structured error handling with ErrNotFound for cache misses
  • TTL management delegated to CacheEntry for consistency
  • Resource management through Close() method

func NewBadgerStorage

func NewBadgerStorage(ctx context.Context, cfg BadgerConfig) (Storage, error)

type TTLCalculator

type TTLCalculator interface {
	// CalculateTTL computes TTL values for a cache entry.
	// Returns (totalTTL, staleTTL, error) where:
	// - totalTTL: Complete lifetime of the cache entry
	// - staleTTL: Fresh period before stale-while-revalidate kicks in
	// - error: Calculation failure (uses defaults)
	CalculateTTL(key string, value interface{}) (time.Duration, time.Duration, error)
}

TTLCalculator defines the interface for dynamic TTL calculation. This allows cache TTL to be determined based on the actual data content, key patterns, or external factors like DNS record types.

TTL Relationship:

  • totalTTL = fresh period + stale period + expired period
  • staleTTL = fresh period (when background refresh starts)
  • The difference (totalTTL - staleTTL) is the stale serving window

Use Cases:

  • DNS records with varying TTLs based on record type
  • Content-based TTL (e.g., larger responses cached longer)
  • Time-of-day based TTL adjustments

type TestLoggerMock

type TestLoggerMock struct {
	DebugCalls []LogCall
	InfoCalls  []LogCall
	WarnCalls  []LogCall
	ErrorCalls []LogCall
	// contains filtered or unexported fields
}

TestLoggerMock is a mock logger that captures log calls for testing. This is useful when you need to verify that specific log messages were generated.

func NewTestLoggerMock

func NewTestLoggerMock() *TestLoggerMock

NewTestLoggerMock creates a new mock logger for testing.
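A minimal sketch of the capture-and-assert flow inside a test; the message string is illustrative:

mock := swre.NewTestLoggerMock()
mock.Info("refresh complete", swre.String("key", "user:123"))
if !mock.HasInfoMessage("refresh complete") {
    t.Fatal("expected info message to be captured")
}
mock.Reset() // clears all captured calls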

func (*TestLoggerMock) Debug

func (m *TestLoggerMock) Debug(msg string, fields ...Field)

Debug captures debug log calls.

func (*TestLoggerMock) Error

func (m *TestLoggerMock) Error(msg string, fields ...Field)

Error captures error log calls.

func (*TestLoggerMock) GetAllCalls

func (m *TestLoggerMock) GetAllCalls() []LogCall

GetAllCalls returns all captured log calls across all levels.

func (*TestLoggerMock) HasDebugMessage

func (m *TestLoggerMock) HasDebugMessage(msg string) bool

HasDebugMessage checks if a debug message was logged.

func (*TestLoggerMock) HasErrorMessage

func (m *TestLoggerMock) HasErrorMessage(msg string) bool

HasErrorMessage checks if an error message was logged.

func (*TestLoggerMock) HasInfoMessage

func (m *TestLoggerMock) HasInfoMessage(msg string) bool

HasInfoMessage checks if an info message was logged.

func (*TestLoggerMock) HasWarnMessage

func (m *TestLoggerMock) HasWarnMessage(msg string) bool

HasWarnMessage checks if a warn message was logged.

func (*TestLoggerMock) Info

func (m *TestLoggerMock) Info(msg string, fields ...Field)

Info captures info log calls.

func (*TestLoggerMock) Named

func (m *TestLoggerMock) Named(name string) Logger

Named creates a new mock logger with the given name.

func (*TestLoggerMock) Reset

func (m *TestLoggerMock) Reset()

Reset clears all captured log calls.

func (*TestLoggerMock) Warn

func (m *TestLoggerMock) Warn(msg string, fields ...Field)

Warn captures warn log calls.

type ValueTransformer

type ValueTransformer interface {
	// Transform processes a value before caching.
	// Called after upstream fetch but before serialization.
	Transform(ctx context.Context, key string, value interface{}) (interface{}, error)

	// Restore converts cached data back to usable form.
	// Called after deserialization but before returning to caller.
	Restore(ctx context.Context, key string, data []byte) (interface{}, error)
}

ValueTransformer defines the interface for value preprocessing. This abstraction enables custom transformations like compression, encryption, or format conversion before caching.

Common Use Cases:

  • Compression for large DNS responses
  • Encryption for sensitive data
  • Format normalization (e.g., DNS wire format)
  • Content filtering or sanitization

Implementation Notes:

  • Transform operates on pre-serialization values
  • Restore works with post-deserialization data
  • Both methods should be idempotent
  • Must handle context cancellation gracefully
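A pass-through sketch showing the two hook points; a real implementation would compress or encrypt instead. UpperTransformer is hypothetical:

type UpperTransformer struct{}

// Transform runs after the upstream fetch, before serialization.
func (u *UpperTransformer) Transform(ctx context.Context, key string, value interface{}) (interface{}, error) {
    if s, ok := value.(string); ok {
        return strings.ToUpper(s), nil
    }
    return value, nil
}

// Restore runs after deserialization, before returning to the caller.
func (u *UpperTransformer) Restore(ctx context.Context, key string, data []byte) (interface{}, error) {
    return data, nil // pass stored bytes through unchanged
}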

type ZapAdapter

type ZapAdapter struct {
	// contains filtered or unexported fields
}

ZapAdapter adapts a zap.Logger to implement the Logger interface. This adapter provides a bridge between the generic Logger interface and zap's specific implementation, allowing existing zap users to continue using their configured loggers seamlessly.

func NewZapAdapter

func NewZapAdapter(logger *zap.Logger) (*ZapAdapter, error)

NewZapAdapter creates a new ZapAdapter from an existing zap.Logger. If the provided logger is nil, it returns an error.

func (*ZapAdapter) Debug

func (z *ZapAdapter) Debug(msg string, fields ...Field)

Debug logs a message at debug level with optional structured fields.

func (*ZapAdapter) Error

func (z *ZapAdapter) Error(msg string, fields ...Field)

Error logs a message at error level with optional structured fields.

func (*ZapAdapter) Info

func (z *ZapAdapter) Info(msg string, fields ...Field)

Info logs a message at info level with optional structured fields.

func (*ZapAdapter) Named

func (z *ZapAdapter) Named(name string) Logger

Named creates a new Logger instance with the specified name.

func (*ZapAdapter) Warn

func (z *ZapAdapter) Warn(msg string, fields ...Field)

Warn logs a message at warning level with optional structured fields.
