cache

package module
v0.2.0
Published: Mar 31, 2026 License: MIT Imports: 18 Imported by: 0

README

cache

A hybrid L1+L2 cache for Go, ported from FusionCache. The core library has a single external dependency: golang.org/x/sync.


Features

  • In-process L1 memory cache with optional distributed L2 backend
  • Cache stampede protection via singleflight
  • Fail-safe: return stale values when the factory or L2 fails
  • Soft and hard factory timeouts with background completion
  • Eager refresh: background factory call before expiry, so hot keys never observe a miss
  • TTL jitter to spread expiry across nodes
  • Tag-based bulk invalidation across L1 and L2
  • Inter-node invalidation via a pluggable backplane
  • Circuit breakers for L2 and the backplane
  • Observable lifecycle events (EventEmitter)
  • Type-safe generic helpers (GetOrSet[T], Get[T])
  • Every pluggable point is a pure interface; bring your own adapters
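The stampede protection listed above follows the singleflight pattern: concurrent callers for the same key share a single factory execution. A minimal, illustrative sketch of the idea (the library itself depends on golang.org/x/sync, whose singleflight package it presumably uses; this toy group is not its implementation):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// group deduplicates concurrent calls per key: the first caller runs fn,
// later callers for the same in-flight key block and share its result.
type group struct {
	mu    sync.Mutex
	calls map[string]*call
}

type call struct {
	wg  sync.WaitGroup
	val any
}

func (g *group) Do(key string, fn func() any) any {
	g.mu.Lock()
	if c, ok := g.calls[key]; ok {
		g.mu.Unlock()
		c.wg.Wait() // another goroutine is computing this key; wait for it
		return c.val
	}
	c := &call{}
	c.wg.Add(1)
	g.calls[key] = c
	g.mu.Unlock()

	c.val = fn() // only one goroutine per in-flight key reaches here
	c.wg.Done()

	g.mu.Lock()
	delete(g.calls, key)
	g.mu.Unlock()
	return c.val
}

// run fires 10 concurrent lookups for one key and reports how many times
// the (slow) factory actually ran.
func run() int32 {
	g := &group{calls: map[string]*call{}}
	var factoryCalls int32
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			g.Do("product:42", func() any {
				atomic.AddInt32(&factoryCalls, 1)
				time.Sleep(50 * time.Millisecond) // simulate a slow database call
				return "Widget"
			})
		}()
	}
	wg.Wait()
	return atomic.LoadInt32(&factoryCalls)
}

func main() {
	fmt.Println("factory calls:", run())
}
```

Without deduplication, ten concurrent misses would mean ten database calls; with it, one.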

Requirements

Go 1.21 or later.


Modules

  • github.com/coefficient-engineering/cache v0.2.0
  • github.com/coefficient-engineering/cache/adapters/l2/redis v0.2.0
  • github.com/coefficient-engineering/cache/adapters/backplane/redis v0.2.0

Installation

Core library (L1 only, depends only on golang.org/x/sync):

go get github.com/coefficient-engineering/cache

Redis L2 adapter:

go get github.com/coefficient-engineering/cache/adapters/l2/redis

Redis backplane adapter:

go get github.com/coefficient-engineering/cache/adapters/backplane/redis

Quick Start

This example uses the Redis L2 adapter and Redis backplane for a production-style setup. If you only need an in-process cache, skip the L2, serializer, and backplane setup and call cache.New() with only WithDefaultEntryOptions.

package main

import (
	"context"
	"log/slog"
	"time"

	"github.com/coefficient-engineering/cache"
	redisbp "github.com/coefficient-engineering/cache/adapters/backplane/redis"
	redisl2 "github.com/coefficient-engineering/cache/adapters/l2/redis"
	"github.com/coefficient-engineering/cache/adapters/serializer/json"
	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	l2 := redisl2.New(rdb,
		redisl2.WithKeyPrefix("myapp:"),
		redisl2.WithLogger(slog.Default()),
	)

	bp := redisbp.New(rdb,
		redisbp.WithChannel("myapp:backplane"),
		redisbp.WithLogger(slog.Default()),
	)

	c, err := cache.New(
		cache.WithCacheName("myapp"),
		cache.WithL2(l2),
		cache.WithSerializer(&json.Serializer{}),
		cache.WithBackplane(bp),
		cache.WithLogger(slog.Default()),
		cache.WithL2CircuitBreaker(5, 10*time.Second),
		cache.WithDefaultEntryOptions(cache.EntryOptions{
			Duration:                 5 * time.Minute,
			IsFailSafeEnabled:        true,
			FailSafeMaxDuration:      2 * time.Hour,
			FailSafeThrottleDuration: 30 * time.Second,
			EagerRefreshThreshold:    0.9,
			FactorySoftTimeout:       100 * time.Millisecond,
			FactoryHardTimeout:       2 * time.Second,
			JitterMaxDuration:        10 * time.Second,
		}),
	)
	if err != nil {
		panic(err)
	}
	defer c.Close()

	type Product struct {
		ID   int
		Name string
	}

	product, err := cache.GetOrSet[*Product](ctx, c, "product:42",
		func(ctx context.Context, fctx *cache.FactoryExecutionContext) (*Product, error) {
			// Replace with a real database call.
			return &Product{ID: 42, Name: "Widget"}, nil
		},
	)
	if err != nil {
		panic(err)
	}

	_ = product
}

Adapters

  • github.com/coefficient-engineering/cache/adapters/l2/memory: in-process L2 using sync.Map (testing)
  • github.com/coefficient-engineering/cache/adapters/l2/redis: Redis L2 (accepts redis.Cmdable)
  • github.com/coefficient-engineering/cache/adapters/backplane/memory: in-process backplane via Go channel (testing)
  • github.com/coefficient-engineering/cache/adapters/backplane/noop: no-op backplane for single-node deployments
  • github.com/coefficient-engineering/cache/adapters/backplane/redis: Redis pub/sub backplane
  • github.com/coefficient-engineering/cache/adapters/serializer/json: encoding/json serializer (stdlib, zero extra deps)

To use a different backend, implement the relevant interface:

  • l1.Adapter (github.com/coefficient-engineering/cache/l1): Get, Set, Delete, CompareAndSwap, Range, Clear
  • l2.Adapter (github.com/coefficient-engineering/cache/l2): Get, Set, Delete, DeleteByTag, Clear, Ping
  • backplane.Backplane (github.com/coefficient-engineering/cache/backplane): Publish, Subscribe, Close
  • serializer.Serializer (github.com/coefficient-engineering/cache/serializer): Marshal, Unmarshal
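To illustrate, a drop-in serializer built on encoding/gob might look like the sketch below. It assumes serializer.Serializer uses the conventional Marshal(any) ([]byte, error) / Unmarshal([]byte, any) error signatures, which this page lists by name but does not spell out; the local type is only a model.

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// GobSerializer encodes values with encoding/gob. The method set mirrors
// the assumed serializer.Serializer contract (Marshal, Unmarshal).
type GobSerializer struct{}

func (GobSerializer) Marshal(v any) ([]byte, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(v); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func (GobSerializer) Unmarshal(data []byte, v any) error {
	return gob.NewDecoder(bytes.NewReader(data)).Decode(v)
}

type Product struct {
	ID   int
	Name string
}

func main() {
	s := GobSerializer{}
	b, err := s.Marshal(Product{ID: 42, Name: "Widget"})
	if err != nil {
		panic(err)
	}
	var p Product
	if err := s.Unmarshal(b, &p); err != nil {
		panic(err)
	}
	fmt.Println(p.ID, p.Name) // round-trip preserves the value
}
```

gob is a reasonable choice when every node is a Go process; it avoids JSON's loss of type fidelity at the cost of cross-language readability.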

Documentation

Overview

Package cache provides a hybrid L1+L2 cache for distributed systems, inspired by ZiggyCreatures FusionCache.

The core library has no external dependencies beyond the Go standard library and golang.org/x/sync. Every pluggable point (the distributed cache, the inter-node backplane, and the serializer) is expressed as a Go interface. See packages github.com/coefficient-engineering/cache/l1, github.com/coefficient-engineering/cache/l2, github.com/coefficient-engineering/cache/backplane, and github.com/coefficient-engineering/cache/serializer for those contracts.

Quick Start

Create a pure in-process (L1-only) cache with New and use the type-safe generic helpers GetOrSet and Get:

c, err := cache.New(
    cache.WithDefaultEntryOptions(cache.EntryOptions{
        Duration: 5 * time.Minute,
    }),
)
if err != nil {
    log.Fatal(err)
}
defer c.Close()

product, err := cache.GetOrSet[*Product](ctx, c, "product:42",
    func(ctx context.Context, fctx *cache.FactoryExecutionContext) (*Product, error) {
        return db.GetProduct(ctx, 42)
    },
)

Generics Strategy

The Cache interface stores and returns any. Generic package-level functions (GetOrSet and Get) provide compile-time type safety at the call site without complicating the core. Go methods cannot declare their own type parameters, so these helpers must be package-level functions.

Construction

Use New to create a Cache instance, passing Option functions to configure it. See WithL2, WithSerializer, WithBackplane, and WithDefaultEntryOptions for common options.

New returns an error if an L2 adapter is configured without a serializer.

Index

Constants

This section is empty.

Variables

var ErrFactoryHardTimeout = fmt.Errorf("cache: factory hard timeout")

ErrFactoryHardTimeout is returned by [Cache.GetOrSet] when the factory exceeds [EntryOptions.FactoryHardTimeout] and no stale fail-safe value is available.

Use errors.Is to check:

if errors.Is(err, cache.ErrFactoryHardTimeout) {
    // handle timeout
}

var ErrLockTimeout = fmt.Errorf("cache: stampede lock timeout")

ErrLockTimeout is returned when [EntryOptions.LockTimeout] elapses before the stampede protection lock is acquired. The caller may proceed without the lock (best-effort).

Use errors.Is to check:

if errors.Is(err, cache.ErrLockTimeout) {
    // handle lock timeout
}

Functions

func Get

func Get[T any](ctx context.Context, c Cache, key string, opts ...EntryOption) (T, bool, error)

Get is the type-safe counterpart to [Cache.Get].

T can be any type, including pointers and interfaces. Returns a type assertion error if the cached value is not of type T.

func GetOrSet

func GetOrSet[T any](
	ctx context.Context,
	c Cache,
	key string,
	factory func(ctx context.Context, fctx *FactoryExecutionContext) (T, error),
	opts ...EntryOption,
) (T, error)

GetOrSet is the type-safe counterpart to [Cache.GetOrSet].

T can be any type, including pointers and interfaces. Returns a type assertion error if the cached value is not of type T.

Types

type Cache

type Cache interface {
	// Get returns the cached value for key if present and not logically
	// expired. Returns (nil, false, nil) on a clean cache miss.
	//
	// When [EntryOptions.AllowStaleOnReadOnly] is set, logically expired
	// values are returned with ok == true. The event emitted in that case
	// is [EventCacheHit] with IsStale: true.
	//
	// Get reads L1 first, then L2 if configured and L1 missed.
	// Use the type-safe wrapper [Get] for compile-time type safety.
	Get(ctx context.Context, key string, opts ...EntryOption) (any, bool, error)

	// Set stores value under key in all configured cache layers (L1, L2)
	// and publishes a backplane notification. Skip flags in [EntryOptions]
	// control which layers are written.
	Set(ctx context.Context, key string, value any, opts ...EntryOption) error

	// GetOrSet returns the cached value for key. On a cache miss it calls
	// factory, stores the result, and returns it. This is the primary method.
	//
	// The full execution path:
	//  1. Read L1 (unless [EntryOptions.SkipL1Read])
	//  2. On L1 miss, enter stampede protection (singleflight)
	//  3. Re-check L1 inside the lock
	//  4. Read L2 if configured (unless [EntryOptions.SkipL2Read] or circuit breaker open)
	//  5. Call factory with fail-safe and timeout handling
	//  6. Store result in L1, L2, and notify backplane
	//
	// On a fresh L1 hit with [EntryOptions.EagerRefreshThreshold] configured,
	// a background refresh may be started.
	//
	// Use the type-safe wrapper [GetOrSet] for compile-time type safety.
	GetOrSet(ctx context.Context, key string, factory FactoryFunc, opts ...EntryOption) (any, error)

	// Delete removes the entry from L1 and L2. Publishes a
	// [backplane.MessageTypeDelete] backplane notification.
	// Does not error if the key does not exist.
	Delete(ctx context.Context, key string, opts ...EntryOption) error

	// DeleteByTag removes all entries associated with tag from L1
	// (via the in-memory tag index) and L2 (via [l2.Adapter.DeleteByTag]).
	// Publishes a [backplane.MessageTypeDelete] backplane notification for
	// each removed key.
	DeleteByTag(ctx context.Context, tag string, opts ...EntryOption) error

	// Expire marks the entry as logically expired without removing it from
	// L1. The value remains available as a fail-safe fallback until physical
	// expiry. If fail-safe is not enabled on subsequent reads, the stale
	// value is not returned.
	//
	// Publishes a [backplane.MessageTypeExpire] backplane notification.
	Expire(ctx context.Context, key string, opts ...EntryOption) error

	// Clear removes all entries from L1 and clears the tag index. When
	// clearL2 is true and an L2 adapter is configured, calls
	// [l2.Adapter.Clear]. Publishes a [backplane.MessageTypeClear]
	// backplane notification.
	Clear(ctx context.Context, clearL2 bool) error

	// Name returns the configured cache name. Defaults to "default".
	Name() string

	// DefaultEntryOptions returns a copy of the cache-wide default entry
	// options. Safe to read and modify without affecting the cache.
	DefaultEntryOptions() EntryOptions

	// Events returns the [EventEmitter] for subscribing to cache lifecycle
	// events.
	Events() *EventEmitter

	// Close shuts down background goroutines (backplane subscription) and
	// releases resources. Calls Close on the backplane and L1 adapter.
	// Safe to call multiple times.
	Close() error
}

Cache is the primary interface for interacting with the cache.

The concrete implementation is created via New and is safe for concurrent use by multiple goroutines.

func New

func New(opts ...Option) (Cache, error)

New creates and returns a new Cache instance.

Validation:

  • If L2 is non-nil and Serializer is nil, returns an error.

Side effects at construction:

  • If NodeID is empty, generates a random 16-byte hex string.
  • If Logger is nil, creates a discard logger.
  • If L1 is nil, creates a default syncmap adapter.
  • If Backplane is non-nil, calls Subscribe to start receiving messages.
  • Creates circuit breakers for L2 and backplane (disabled when threshold is 0).

type EntryOption

type EntryOption func(*EntryOptions)

EntryOption modifies an EntryOptions value. Compose multiple options freely; they are applied in order on top of a copy of the cache-wide [Options.DefaultEntryOptions].

func WithAllowStaleOnReadOnly

func WithAllowStaleOnReadOnly() EntryOption

WithAllowStaleOnReadOnly sets [EntryOptions.AllowStaleOnReadOnly] to true, permitting stale values from read-only operations ([Cache.Get]).

func WithAutoClone

func WithAutoClone() EntryOption

WithAutoClone sets [EntryOptions.EnableAutoClone] to true. Values returned from L1 are deep-cloned via the serializer.Serializer (marshal then unmarshal) before being returned, preventing callers from mutating cached objects. Requires a Serializer to be configured.

func WithBackgroundL2Ops

func WithBackgroundL2Ops() EntryOption

WithBackgroundL2Ops sets [EntryOptions.AllowBackgroundDistributedCacheOperations] to true, making L2 writes fire-and-forget goroutines.

func WithDistributedCacheTimeouts

func WithDistributedCacheTimeouts(soft, hard time.Duration) EntryOption

WithDistributedCacheTimeouts sets [EntryOptions.DistributedCacheSoftTimeout] and [EntryOptions.DistributedCacheHardTimeout] for L2 operations.

func WithDuration

func WithDuration(d time.Duration) EntryOption

WithDuration sets [EntryOptions.Duration], the logical TTL of the entry.

func WithEagerRefresh

func WithEagerRefresh(threshold float32) EntryOption

WithEagerRefresh sets [EntryOptions.EagerRefreshThreshold]. The threshold must be in (0.0, 1.0). For example, 0.9 starts a background refresh when 90% of Duration has elapsed.

func WithFactoryTimeouts

func WithFactoryTimeouts(soft, hard time.Duration) EntryOption

WithFactoryTimeouts sets [EntryOptions.FactorySoftTimeout] and [EntryOptions.FactoryHardTimeout]. The soft timeout returns a stale value early (factory continues in the background); the hard timeout is an absolute deadline.

func WithFailSafe

func WithFailSafe(maxDuration, throttleDuration time.Duration) EntryOption

WithFailSafe enables fail-safe and sets [EntryOptions.FailSafeMaxDuration] and [EntryOptions.FailSafeThrottleDuration]. When fail-safe is enabled and a factory or L2 fetch fails, a stale value (if available) is returned instead of an error.

func WithJitter

func WithJitter(max time.Duration) EntryOption

WithJitter sets [EntryOptions.JitterMaxDuration], adding a random extra TTL in [0, max) to prevent thundering-herd expiry spikes.

func WithLockTimeout

func WithLockTimeout(d time.Duration) EntryOption

WithLockTimeout sets [EntryOptions.LockTimeout], the maximum time to wait for the stampede protection lock. Returns ErrLockTimeout on timeout.

func WithPriority

func WithPriority(p EvictionPriority) EntryOption

WithPriority sets [EntryOptions.Priority], an eviction hint for bounded L1 adapters.

func WithSize

func WithSize(n int64) EntryOption

WithSize sets [EntryOptions.Size], an arbitrary weight hint for bounded L1 adapters.

func WithSkipL2

func WithSkipL2() EntryOption

WithSkipL2 sets [EntryOptions.SkipL2Read], [EntryOptions.SkipL2Write], and [EntryOptions.SkipBackplaneNotifications] to true, bypassing L2 and the backplane entirely for this operation.

func WithSkipL2ReadWhenStale

func WithSkipL2ReadWhenStale() EntryOption

WithSkipL2ReadWhenStale sets [EntryOptions.SkipL2ReadWhenStale] to true. When L1 has a stale entry, the L2 check is skipped.

func WithTags

func WithTags(tags ...string) EntryOption

WithTags appends tag labels to [EntryOptions.Tags] for bulk invalidation via [Cache.DeleteByTag].

type EntryOptions

type EntryOptions struct {
	// Duration is how long the entry is considered fresh (logically valid).
	// When fail-safe is enabled the physical TTL in backing stores is
	// FailSafeMaxDuration; Duration marks the "stale after" boundary.
	Duration time.Duration

	// DistributedCacheDuration overrides Duration for the L2 layer only.
	// Zero means "use Duration".
	DistributedCacheDuration time.Duration

	// JitterMaxDuration adds a random extra TTL in [0, JitterMaxDuration) to
	// both L1 and L2 entries. Prevents thundering-herd expiry spikes across
	// nodes in multi-instance deployments. Zero disables jitter.
	JitterMaxDuration time.Duration

	// IsFailSafeEnabled activates the fail-safe mechanism: if a factory call
	// or L2 fetch fails and a stale entry exists, the stale value is returned
	// rather than propagating the error.
	IsFailSafeEnabled bool

	// FailSafeMaxDuration is the total lifetime of an entry in the backing
	// store when fail-safe is on. The entry will be physically present (but
	// logically stale) for this duration after it was first written, enabling
	// it to be used as a fallback.
	// Must be >= Duration when fail-safe is enabled.
	FailSafeMaxDuration time.Duration

	// DistributedCacheFailSafeMaxDuration overrides FailSafeMaxDuration for
	// the L2 physical TTL. Zero means "use FailSafeMaxDuration".
	DistributedCacheFailSafeMaxDuration time.Duration

	// FailSafeThrottleDuration is how long a fail-safe-activated stale value
	// is temporarily promoted back to "fresh" in L1 to prevent the factory
	// from being hammered again immediately after an error.
	FailSafeThrottleDuration time.Duration

	// AllowStaleOnReadOnly permits stale (logically expired) values to be
	// returned from read-only operations (Get) without triggering a factory call.
	AllowStaleOnReadOnly bool

	// FactorySoftTimeout is the maximum time to wait for the factory before
	// returning a stale fail-safe value to the caller. The factory continues
	// running in the background and caches its result when done.
	// Zero means no soft timeout.
	FactorySoftTimeout time.Duration

	// FactoryHardTimeout is the absolute maximum time to wait for the factory.
	// After this the call returns a stale fail-safe value if one is available,
	// or ErrFactoryHardTimeout otherwise.
	// Zero means wait indefinitely.
	FactoryHardTimeout time.Duration

	// AllowTimedOutFactoryBackgroundCompletion: when true, a factory that
	// triggered a soft or hard timeout (but eventually succeeds) will have its
	// result stored in the cache. Default: true.
	AllowTimedOutFactoryBackgroundCompletion bool

	// DistributedCacheSoftTimeout is the max time to wait for an L2 read/write
	// before falling back to a stale value (fail-safe must be on for a fallback
	// to be available). Zero means no soft timeout.
	DistributedCacheSoftTimeout time.Duration

	// DistributedCacheHardTimeout is the absolute max time for any L2 operation.
	// Zero means wait indefinitely.
	DistributedCacheHardTimeout time.Duration

	// AllowBackgroundDistributedCacheOperations: when true, L2 writes are
	// fire-and-forget goroutines. This improves latency but means a write
	// failure is logged rather than returned to the caller.
	// Default: false (blocking, deterministic behaviour).
	AllowBackgroundDistributedCacheOperations bool

	// EagerRefreshThreshold: when a cache hit occurs after this fraction of
	// Duration has elapsed, a background factory call is started to refresh
	// the entry before it expires, so callers never observe a miss.
	// Value must be in (0.0, 1.0); zero or values outside this range disable
	// eager refresh.
	// Example: 0.9 starts refreshing at 90% of Duration elapsed.
	EagerRefreshThreshold float32

	// SkipBackplaneNotifications: if true, mutations (Set/Delete/Expire) will
	// not publish backplane messages for this operation.
	SkipBackplaneNotifications bool

	// AllowBackgroundBackplaneOperations: when true, backplane publishes are
	// fire-and-forget goroutines. Default: true.
	AllowBackgroundBackplaneOperations bool

	// ReThrowBackplaneExceptions: when true and AllowBackgroundBackplaneOperations
	// is false, backplane publish errors are returned to the caller.
	ReThrowBackplaneExceptions bool

	// ReThrowDistributedCacheExceptions: when true and AllowBackgroundDistributedCacheOperations
	// is false, L2 errors are returned to the caller.
	ReThrowDistributedCacheExceptions bool

	// ReThrowSerializationExceptions: when true, serialization errors during L2
	// reads/writes are returned to the caller. Default: true.
	ReThrowSerializationExceptions bool

	// Priority hints to the L1 eviction policy. Higher priority entries are
	// evicted last under memory pressure. Default: PriorityNormal.
	Priority EvictionPriority

	// Size is an arbitrary weight unit used by L1 when a SizeLimit is configured
	// on the cache. Typically represents relative byte size or item weight.
	Size int64

	// SkipL1Read: bypass reading from the in-process memory cache (L1).
	// Use with care: it removes stampede protection.
	SkipL1Read bool

	// SkipL1Write: bypass writing to L1 after a factory call or L2 hit.
	SkipL1Write bool

	// SkipL2Read: bypass reading from the distributed cache (L2).
	SkipL2Read bool

	// SkipL2Write: bypass writing to L2.
	SkipL2Write bool

	// SkipL2ReadWhenStale: when L1 has a stale entry, skip checking L2 for a
	// newer version. Useful when L2 is local (not shared across nodes).
	SkipL2ReadWhenStale bool

	// Tags associates string labels with this entry for bulk invalidation via
	// DeleteByTag. Tags are stored in both L1 (in-memory reverse index) and L2
	// (implementation-defined, e.g., a Redis SET per tag).
	Tags []string

	// LockTimeout is the maximum time to wait to acquire the stampede protection
	// lock for this key. After this, the caller proceeds without the lock
	// (risks a mini-stampede but prevents indefinite starvation).
	// Zero means wait indefinitely.
	LockTimeout time.Duration

	// EnableAutoClone: when true, values returned from L1 are deep-cloned
	// (via the Serializer: marshal -> unmarshal) before being returned to the
	// caller. Prevents callers from inadvertently mutating cached objects.
	// Requires a Serializer to be configured.
	EnableAutoClone bool
}

EntryOptions holds all per-entry settings. It is a plain value type; the cache copies it on every call so mutations never affect the stored defaults.

Per-call configuration is expressed as EntryOption functions. The cache copies [Options.DefaultEntryOptions] and applies the provided funcs before each operation via [applyOptions].

Defaults

When [Options.DefaultEntryOptions] is not explicitly set, New uses:

  • Duration: 5 * time.Minute
  • AllowTimedOutFactoryBackgroundCompletion: true
  • AllowBackgroundBackplaneOperations: true
  • ReThrowSerializationExceptions: true
  • Priority: PriorityNormal

All other fields default to their zero value.

type Event

type Event interface {
	// contains filtered or unexported methods
}

Event is the marker interface for all cache events. Use a type switch to handle specific event types.

cache.Events().On(func(e cache.Event) {
    switch ev := e.(type) {
    case cache.EventCacheHit:
        hits.Inc()
    case cache.EventCacheMiss:
        misses.Inc()
    }
})

type EventBackplaneCircuitBreakerStateChange

type EventBackplaneCircuitBreakerStateChange struct{ Open bool }

EventBackplaneCircuitBreakerStateChange is emitted when the backplane circuit breaker opens or closes.

type EventBackplaneReceived

type EventBackplaneReceived struct {
	Key  string
	Type backplane.MessageType
}

EventBackplaneReceived is emitted when a backplane message is received from another node.

type EventBackplaneSent

type EventBackplaneSent struct {
	Key  string
	Type backplane.MessageType
}

EventBackplaneSent is emitted when a backplane message is published successfully.

type EventCacheHit

type EventCacheHit struct {
	Key     string
	IsStale bool
}

EventCacheHit is emitted on an L1 hit (fresh or stale with [EntryOptions.AllowStaleOnReadOnly]).

type EventCacheMiss

type EventCacheMiss struct{ Key string }

EventCacheMiss is emitted when no fresh L1 entry is found.

type EventEagerRefreshComplete

type EventEagerRefreshComplete struct{ Key string }

EventEagerRefreshComplete is emitted when a background eager refresh goroutine completes.

type EventEagerRefreshStarted

type EventEagerRefreshStarted struct{ Key string }

EventEagerRefreshStarted is emitted when a background eager refresh goroutine starts.

type EventEmitter

type EventEmitter struct {
	// contains filtered or unexported fields
}

EventEmitter is a simple in-process event bus. Safe for concurrent use.

Obtain a reference via [Cache.Events]. Register handlers with EventEmitter.On.

func (*EventEmitter) On

func (e *EventEmitter) On(handler EventHandler) (unsubscribe func())

On registers handler to receive all cache events and returns an unsubscribe function that removes the handler.

Handlers are called synchronously on the goroutine that produced the event. Keep handlers fast; offload expensive work to a channel or goroutine.
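A handler that must do slow work can forward events to a buffered channel and let a consumer goroutine process them off the hot path. A self-contained sketch (the Event stand-in models cache.Event; the real handler would be registered via Events().On):

```go
package main

import "fmt"

type Event any // stand-in for cache.Event

// fanOut installs a fast, non-blocking handler that forwards events to a
// buffered channel; a consumer goroutine does the counting off the hot path.
func fanOut(n int) int {
	events := make(chan Event, 256) // buffer absorbs bursts
	done := make(chan int)
	go func() {
		count := 0
		for range events {
			count++ // stands in for expensive work (metrics, logging, ...)
		}
		done <- count
	}()

	handler := func(e Event) {
		select {
		case events <- e:
		default: // drop rather than block the cache's emitting goroutine
		}
	}
	for i := 0; i < n; i++ {
		handler(struct{ Key string }{"product:42"})
	}
	close(events)
	return <-done
}

func main() {
	fmt.Println("processed:", fanOut(10))
}
```

Dropping on a full buffer trades completeness for latency; a metrics pipeline usually prefers that over stalling cache reads.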

type EventFactoryCall

type EventFactoryCall struct{ Key string }

EventFactoryCall is emitted when the factory is about to be called.

type EventFactoryError

type EventFactoryError struct {
	Key string
	Err error
}

EventFactoryError is emitted when the factory returns an error (after fail-safe evaluation).

type EventFactorySuccess

type EventFactorySuccess struct {
	Key    string
	Shared bool
}

EventFactorySuccess is emitted when the factory returns successfully. Shared indicates the result came from singleflight (another goroutine's call).

type EventFailSafeActivated

type EventFailSafeActivated struct {
	Key        string
	StaleValue any
}

EventFailSafeActivated is emitted when a stale value is returned due to a factory or L2 failure.

type EventHandler

type EventHandler func(Event)

EventHandler is a callback invoked for each cache event. Handlers are called synchronously on the goroutine that produced the event. Keep handlers fast; offload expensive work to a channel or goroutine.

type EventHardTimeoutActivated

type EventHardTimeoutActivated struct{ Key string }

EventHardTimeoutActivated is emitted when the factory hard timeout fires.

type EventL2CircuitBreakerStateChange

type EventL2CircuitBreakerStateChange struct{ Open bool }

EventL2CircuitBreakerStateChange is emitted when the L2 circuit breaker opens or closes.

type EventL2Error

type EventL2Error struct {
	Key string
	Err error
}

EventL2Error is emitted when an L2 read or write fails.

type EventL2Hit

type EventL2Hit struct{ Key string }

EventL2Hit is emitted when L2 returns a fresh entry.

type EventL2Miss

type EventL2Miss struct{ Key string }

EventL2Miss is emitted when L2 does not have the key or has a stale entry.

type EventSoftTimeoutActivated

type EventSoftTimeoutActivated struct{ Key string }

EventSoftTimeoutActivated is emitted when the factory soft timeout fires and a stale value is returned.

type EvictionPriority

type EvictionPriority int

EvictionPriority hints to the L1 eviction policy. Higher priority entries are evicted last under memory pressure. Unbounded L1 adapters (like the default sync.Map adapter) ignore this.

const (
	PriorityLow         EvictionPriority = -1 // evict first
	PriorityNormal      EvictionPriority = 0  // default
	PriorityHigh        EvictionPriority = 1  // evict last
	PriorityNeverRemove EvictionPriority = 2  // never evict
)

Eviction priority constants for the L1 cache.

type FactoryExecutionContext added in v0.2.0

type FactoryExecutionContext struct {
	// Options is a mutable pointer to the EntryOptions for this cache
	// operation. Changes made by the factory are honoured when the
	// value is stored.
	Options *EntryOptions

	// StaleValue is the previously cached (now stale) value, if one
	// exists. Nil when there is no stale entry.
	StaleValue any

	// HasStaleValue is true when a stale entry was found in L1 or L2.
	// Use this to distinguish a nil stale value from "no stale entry".
	HasStaleValue bool
}

FactoryExecutionContext provides adaptive caching capabilities to factory functions. It is passed to every FactoryFunc invocation and allows the factory to modify cache entry options based on the value being cached.

Mutate the Options pointer to change how the entry is stored. For example, set Options.Duration to control TTL based on the fetched value's characteristics.

type FactoryFunc

type FactoryFunc func(ctx context.Context, fctx *FactoryExecutionContext) (any, error)

FactoryFunc is the function called on a cache miss to produce a fresh value. It receives the request context and should respect cancellation. Return a non-nil error to signal failure; when fail-safe is enabled and a stale value exists, the error is swallowed and the stale value is returned instead. See [EntryOptions.IsFailSafeEnabled].

type Option

type Option func(*Options)

Option configures an Options value at construction time. Pass Option values to New.

func WithBackplane

func WithBackplane(bp backplane.Backplane) Option

WithBackplane sets [Options.Backplane], the inter-node invalidation transport. If nil, no cross-node notifications are sent or received.

func WithBackplaneCircuitBreaker

func WithBackplaneCircuitBreaker(threshold int, openDuration time.Duration) Option

WithBackplaneCircuitBreaker configures the backplane circuit breaker. threshold is the consecutive-failure count before the breaker opens. openDuration is how long it stays open. A threshold of 0 disables it.

func WithCacheName

func WithCacheName(name string) Option

WithCacheName sets [Options.CacheName], which identifies this cache instance in logs, events, and backplane messages. Default: "default".

func WithDefaultEntryOptions

func WithDefaultEntryOptions(eo EntryOptions) Option

WithDefaultEntryOptions sets [Options.DefaultEntryOptions], the baseline EntryOptions for every cache operation. Per-call EntryOption functions are applied on top of a copy of this value.

func WithKeyPrefix

func WithKeyPrefix(prefix string) Option

WithKeyPrefix sets [Options.KeyPrefix], prepended to every key before L1, L2, and backplane access. Enables namespace isolation when multiple caches share an L2 backend.

func WithL1

func WithL1(adapter l1.Adapter) Option

WithL1 sets [Options.L1], the in-process memory cache adapter. If not set, a default github.com/coefficient-engineering/cache/adapters/l1/syncmap adapter is used.

func WithL2

func WithL2(adapter l2.Adapter) Option

WithL2 sets [Options.L2], the distributed cache adapter. Requires WithSerializer to also be set.

func WithL2CircuitBreaker

func WithL2CircuitBreaker(threshold int, openDuration time.Duration) Option

WithL2CircuitBreaker configures the L2 circuit breaker. threshold is the number of consecutive L2 errors before the breaker opens. openDuration is how long the breaker stays open before attempting recovery. A threshold of 0 disables the L2 circuit breaker.

func WithLogger

func WithLogger(logger *slog.Logger) Option

WithLogger sets [Options.Logger], the structured logger for internal diagnostics. If not set, all logging is silently discarded.

func WithNodeID

func WithNodeID(id string) Option

WithNodeID sets [Options.NodeID], which uniquely identifies this cache node in backplane messages for self-message suppression. If not set, a random 16-byte hex string is generated.

func WithSerializer

func WithSerializer(s serializer.Serializer) Option

WithSerializer sets [Options.Serializer], which encodes and decodes Go values for L2 storage and auto-clone. Required when WithL2 is used.

type Options

type Options struct {
	// CacheName identifies this cache instance in logs, events, and backplane messages.
	// Default: "default"
	CacheName string

	// KeyPrefix is prepended to every key before it is passed to L1, L2, and the
	// backplane. Enables namespace isolation when multiple caches share an L2 backend.
	// Example: "myapp:products:"
	KeyPrefix string

	// DefaultEntryOptions is the baseline EntryOptions for every cache operation.
	// Per-call EntryOption funcs are applied on top of a copy of this value.
	DefaultEntryOptions EntryOptions

	// L1 is the in-process memory cache adapter.
	// If nil, a default sync.Map-backed adapter is used.
	L1 l1.Adapter

	// L2 is the distributed cache adapter.
	// If nil, cache operates as a pure in-process memory cache.
	L2 l2.Adapter

	// Serializer is required when L2 is non-nil.
	// It encodes/decodes Go values to/from []byte for L2 storage.
	Serializer serializer.Serializer

	// Backplane enables inter-node invalidation.
	// If nil, no cross-node notifications are sent or received.
	Backplane backplane.Backplane

	// Logger is the structured logger for internal diagnostics.
	// If nil, all logging is silently discarded.
	Logger *slog.Logger

	// NodeID uniquely identifies this cache node in backplane messages.
	// Used to suppress processing of self-sent notifications.
	// If empty, a random 16-byte hex string is generated at construction time.
	NodeID string

	// DistributedCacheCircuitBreakerThreshold is the number of consecutive L2
	// errors that cause the circuit breaker to open (reject further L2 calls).
	// 0 disables the L2 circuit breaker entirely.
	DistributedCacheCircuitBreakerThreshold int

	// DistributedCacheCircuitBreakerDuration is how long the L2 circuit
	// breaker stays open before attempting recovery.
	DistributedCacheCircuitBreakerDuration time.Duration

	// BackplaneCircuitBreakerThreshold is the consecutive-failure threshold
	// for the backplane circuit breaker. 0 disables it.
	BackplaneCircuitBreakerThreshold int

	// BackplaneCircuitBreakerDuration is how long the backplane circuit
	// breaker stays open. Only relevant when threshold > 0.
	BackplaneCircuitBreakerDuration time.Duration

	// BackplaneAutoRecovery enables automatic L1 re-sync when the backplane
	// reconnects after an outage. Default: true.
	BackplaneAutoRecovery bool

	// AutoRecoveryMaxRetries is the maximum number of recovery attempts.
	// Default: 10.
	AutoRecoveryMaxRetries int

	// AutoRecoveryDelay is the pause between recovery attempts.
	// Default: 2s.
	AutoRecoveryDelay time.Duration

	// IgnoreIncomingBackplaneNotifications disables acting on received backplane
	// messages. Useful for read-only replicas or during testing.
	IgnoreIncomingBackplaneNotifications bool

	// SkipL2OnError: if true (default), L2 errors are logged and swallowed.
	// cache continues with L1 only. If false, L2 errors propagate to callers
	// unless overridden by the per-entry ReThrowDistributedCacheExceptions flag.
	SkipL2OnError bool
}

Options configures the entire cache instance. Set once via Option functions passed to New. Never mutated after construction.

See the individual field documentation for defaults. See WithCacheName, WithKeyPrefix, WithL1, WithL2, WithSerializer, WithBackplane, WithLogger, WithNodeID, WithL2CircuitBreaker, and WithDefaultEntryOptions for the functional option constructors.

Directories

Path	Synopsis
adapters
	backplane/memory	Package memory provides an in-process backplane.Backplane using Go channels.
	backplane/noop	Package noop provides a backplane.Backplane that silently discards all messages.
	l1/syncmap	Package syncmap provides an unbounded l1.Adapter backed by sync.Map.
	l2/memory	Package memory provides an in-process l2.Adapter backed by sync.Map.
	serializer/json	Package json provides a serializer.Serializer using encoding/json.
	l2/redis	module
backplane	Package backplane defines the Backplane interface for inter-node cache invalidation.
internal
	clock	Package clock provides a time abstraction so cache internals can be tested without sleeping or time.Sleep-based races.
l1	Package l1 defines the Adapter interface for the in-process (L1) memory cache layer.
l2	Package l2 defines the Adapter interface for the distributed (L2) cache layer.
serializer	Package serializer defines the Serializer interface for encoding and decoding Go values for storage in the L2 cache layer.
