redcache

package module
v0.1.7
Published: Apr 27, 2026 License: Apache-2.0 Imports: 17 Imported by: 0

README

redcache


A cache-aside implementation for Redis, built on the rueidis client.

Features

  • Cache-aside pattern — automatic cache population with Get and GetMulti.
  • Distributed locking — SET NX GET PX with UUIDv7 prevents thundering herd.
  • Client-side caching — rueidis client-side cache with Redis invalidation messages reduces round trips.
  • Cluster support — multi-key operations are grouped by slot for efficient batching.
  • Lock ownership verification — Lua scripts atomically verify lock ownership before SET/DEL.

Requirements

  • Go 1.23+
  • Redis 7+

Installation

go get github.com/dcbickfo/redcache

Usage

package main

import (
    "context"
    "database/sql"
    "log"
    "time"

    "github.com/redis/rueidis"
    "github.com/dcbickfo/redcache"
)

func main() {
    if err := run(); err != nil {
        log.Fatal(err)
    }
}

func run() error {
    var db *sql.DB
    // initialize db
    client, err := redcache.NewRedCacheAside(
        rueidis.ClientOption{
            InitAddress: []string{"127.0.0.1:6379"},
        },
        redcache.CacheAsideOption{
            LockTTL:   time.Second * 1,
        },
    )
    if err != nil {
        return err
    }

    repo := Repository{
        client: client,
        db:     db,
    }

    val, err := repo.GetByID(context.Background(), "key")
    if err != nil {
        return err
    }

    vals, err := repo.GetByIDs(context.Background(), []string{"key1", "key2"})
    if err != nil {
        return err
    }
    _, _ = val, vals
    return nil
}

type Repository struct {
    client  *redcache.CacheAside
    db      *sql.DB
}

func (r Repository) GetByID(ctx context.Context, key string) (string, error) {
    val, err := r.client.Get(ctx, time.Minute, key, func(ctx context.Context, key string) (val string, err error) {
        if err = r.db.QueryRowContext(ctx, "SELECT val FROM mytab WHERE id = ?", key).Scan(&val); err == sql.ErrNoRows {
            val = "NULL" // cache null to avoid penetration.
            err = nil     // clear err in case of sql.ErrNoRows.
        }
        return
    })
    if err != nil {
        return "", err
    }
    if val == "NULL" {
        return "", sql.ErrNoRows
    }
    return val, nil
}

func (r Repository) GetByIDs(ctx context.Context, keys []string) (map[string]string, error) {
    val, err := r.client.GetMulti(ctx, time.Minute, keys, func(ctx context.Context, keys []string) (val map[string]string, err error) {
        val = make(map[string]string)
        // database/sql does not expand a slice for a single "?", so build
        // one placeholder per key (placeholder syntax varies by driver).
        args := make([]any, len(keys))
        placeholders := ""
        for i, k := range keys {
            if i > 0 {
                placeholders += ","
            }
            placeholders += "?"
            args[i] = k
        }
        rows, err := r.db.QueryContext(ctx, "SELECT id, val FROM mytab WHERE id IN ("+placeholders+")", args...)
        if err != nil {
            return nil, err
        }
        defer rows.Close()
        for rows.Next() {
            var id, rowVal string
            if err = rows.Scan(&id, &rowVal); err != nil {
                return nil, err
            }
            val[id] = rowVal
        }
        if len(val) != len(keys) {
            for _, k := range keys {
                if _, ok := val[k]; !ok {
                    val[k] = "NULL" // cache null to avoid penetration.
                }
            }
        }
        return val, nil
    })
    if err != nil {
        return nil, err
    }
    // handle any NULL vals if desired
    // ...

    return val, nil
}
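
The "handle any NULL vals" step left open above can be sketched as a small helper. The "NULL" sentinel string is the one chosen in these examples, and whether missing keys are silently dropped or surfaced as errors is an application decision:

```go
package main

import "fmt"

// stripNullSentinels removes entries that were cached as the "NULL"
// sentinel (used above to prevent cache penetration), returning only
// rows that actually exist in the database.
func stripNullSentinels(vals map[string]string) map[string]string {
	out := make(map[string]string, len(vals))
	for k, v := range vals {
		if v != "NULL" {
			out[k] = v
		}
	}
	return out
}

func main() {
	vals := map[string]string{"key1": "a", "key2": "NULL"}
	fmt.Println(stripNullSentinels(vals)) // key2 was a cached miss, so it is dropped
}
```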

Configuration

CacheAsideOption controls the behavior of the cache-aside client:

  • LockTTL (time.Duration, default 10s) — Maximum time a lock can be held. Also the timeout for waiting on lost invalidation messages. Must be at least 100ms.
  • ClientBuilder (func(rueidis.ClientOption) (rueidis.Client, error), default nil) — Custom builder for the underlying rueidis client. Uses rueidis.NewClient when nil.
  • Logger (Logger, default slog.Default()) — Logger for errors and debug information. Must be safe for concurrent use.
  • Metrics (Metrics, default NoopMetrics{}) — Receives observability events. Must be concurrent-safe — methods run on the hot path.
  • LockPrefix (string, default "__redcache:lock:") — Prefix for distributed lock values. Choose a prefix unlikely to conflict with your data keys.
  • RefreshLockPrefix (string, default "__redcache:refresh:") — Prefix for distributed refresh-ahead locks.
  • RefreshAfterFraction (float64, default 0 = disabled) — Fraction of TTL after which a refresh-ahead is triggered. Must be in [0, 1).
  • RefreshWorkers (int, default 4 when refresh enabled) — Background workers processing refresh jobs.
  • RefreshQueueSize (int, default 64 when refresh enabled) — Capacity of the refresh job queue. Jobs are silently dropped when full; the stale value continues to serve.

Refresh-Ahead

Set RefreshAfterFraction to enable background refreshes. When a Get/GetMulti returns a value whose TTL has crossed the configured threshold, the stale value is returned immediately and a background worker repopulates the entry. Distributed and local dedup ensure only one refresh runs per key.

client, err := redcache.NewRedCacheAside(
    rueidis.ClientOption{InitAddress: []string{"127.0.0.1:6379"}},
    redcache.CacheAsideOption{
        LockTTL:              5 * time.Second,
        RefreshAfterFraction: 0.8, // refresh once 80% of TTL has elapsed
        RefreshWorkers:       4,
        RefreshQueueSize:     64,
    },
)

Metrics

Implement Metrics (or embed NoopMetrics and override the methods you care about) to wire counters into Prometheus, OpenTelemetry, or any other backend. Methods are called on the hot path and must be concurrent-safe.

type myMetrics struct {
    redcache.NoopMetrics
    hits, misses atomic.Int64
}

func (m *myMetrics) CacheHit(string)  { m.hits.Add(1) }
func (m *myMetrics) CacheMiss(string) { m.misses.Add(1) }

client, err := redcache.NewRedCacheAside(
    rueidis.ClientOption{InitAddress: []string{"127.0.0.1:6379"}},
    redcache.CacheAsideOption{
        LockTTL: 5 * time.Second,
        Metrics: &myMetrics{},
    },
)

Local Development

# Start Redis
docker compose up -d

# Run tests
go test -race ./...

# Lint
golangci-lint run

# Benchmarks
go test -bench=. -benchtime=3s ./...

Documentation

Overview

Package redcache provides a cache-aside implementation for Redis with distributed locking.

This library builds on the rueidis Redis client to provide:

  • Cache-aside pattern with automatic cache population
  • Distributed locking to prevent thundering herd
  • Client-side caching to reduce Redis round trips
  • Redis cluster support with slot-aware batching
  • Automatic cleanup of expired lock entries

Basic Usage

client, err := redcache.NewRedCacheAside(
    rueidis.ClientOption{InitAddress: []string{"localhost:6379"}},
    redcache.CacheAsideOption{LockTTL: 10 * time.Second},
)
if err != nil {
    return err
}
defer client.Client().Close()

// Get a single value with automatic cache population
value, err := client.Get(ctx, time.Minute, "user:123", func(ctx context.Context, key string) (string, error) {
    return fetchFromDatabase(ctx, key)
})

// Get multiple values with batched cache population
values, err := client.GetMulti(ctx, time.Minute, []string{"user:1", "user:2"}, func(ctx context.Context, keys []string) (map[string]string, error) {
    return fetchMultipleFromDatabase(ctx, keys)
})

Distributed Locking

The library ensures that only one goroutine (across all instances of your application) executes the callback function for a given key at a time. Other goroutines will wait for the lock to be released and then return the cached value.

Locks are implemented using Redis SET NX with a configurable TTL. Lock values use UUIDv7 for uniqueness and are prefixed (default: "__redcache:lock:") to avoid collisions with application data.
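
The ownership verification mentioned above (a Lua script that checks the lock token before deleting) typically has the following shape. This is a sketch, not the library's actual script — but the principle is the same: DEL only succeeds if the lock still holds the token this client wrote, so a lock that expired and was reacquired by another client is never released by the wrong owner.

```go
package main

import "fmt"

// delIfOwner is a sketch of a check-and-delete script. KEYS[1] would be
// the prefixed lock key (e.g. "__redcache:lock:<key>") and ARGV[1] the
// UUIDv7 token written when the lock was acquired. GET + DEL run
// atomically because Redis executes the whole script as one unit.
const delIfOwner = `
if redis.call("GET", KEYS[1]) == ARGV[1] then
    return redis.call("DEL", KEYS[1])
end
return 0
`

func main() {
	// In practice this would be loaded and invoked through the rueidis
	// client; shown here only to illustrate the ownership check.
	fmt.Println(len(delIfOwner) > 0)
}
```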

Context and Timeouts

All operations respect context cancellation. The LockTTL option controls:

  • Maximum time a lock can be held before automatic expiration
  • Timeout for waiting on locks when handling invalidation messages
  • Context timeout for cleanup operations

Use context deadlines to control overall operation timeout:

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
value, err := client.Get(ctx, time.Minute, key, callback)

Client-Side Caching

The library uses rueidis client-side caching with Redis invalidation messages. When a key is modified in Redis, invalidation messages automatically clear the local cache, ensuring consistency across distributed instances.


Constants

const (
	DefaultLockPrefix    = "__redcache:lock:"
	DefaultRefreshPrefix = "__redcache:refresh:"
)

Default prefixes used by CacheAside. These can be overridden via CacheAsideOption.LockPrefix and CacheAsideOption.RefreshLockPrefix.

Variables

var ErrLockLost = errors.New("lock was lost or expired before value could be set")

ErrLockLost indicates the distributed lock was lost or expired before the value could be set. This can occur if the lock TTL expires during callback execution or if Redis invalidates the lock.

Functions

func NewBatchError added in v0.1.7

func NewBatchError(failed map[string]error, succeeded []string) error

NewBatchError creates a BatchError from the given failures and successes. Returns nil (untyped) if there are no failures, so it is safe to return directly as an error interface value.

Types

type BatchError added in v0.1.7

type BatchError struct {
	// Failed maps each failed key to its error.
	Failed map[string]error
	// Succeeded lists the keys that were set successfully.
	Succeeded []string
}

BatchError represents partial failures in a multi-key operation. Some keys may have succeeded while others failed.

func (*BatchError) Error added in v0.1.7

func (e *BatchError) Error() string

Error returns a human-readable summary of the batch failure.

func (*BatchError) ErrorFor added in v0.1.7

func (e *BatchError) ErrorFor(key string) error

ErrorFor returns the error recorded for key, or nil if the key did not fail. Safe to call on a nil receiver, so callers can chain after errors.As without a nil-check.

func (*BatchError) HasError added in v0.1.7

func (e *BatchError) HasError(key string) bool

HasError reports whether the given key failed. Safe to call on a nil receiver.

func (*BatchError) HasFailures added in v0.1.7

func (e *BatchError) HasFailures() bool

HasFailures returns true if any keys failed.

type CacheAside

type CacheAside struct {
	// contains filtered or unexported fields
}

CacheAside provides a cache-aside pattern backed by Redis with distributed locking and client-side caching via rueidis invalidation messages.

func NewRedCacheAside

func NewRedCacheAside(clientOption rueidis.ClientOption, caOption CacheAsideOption) (*CacheAside, error)

NewRedCacheAside creates a CacheAside with the given Redis client and cache-aside options.

func (*CacheAside) Client

func (rca *CacheAside) Client() rueidis.Client

Client returns the underlying rueidis.Client for advanced operations. Most users should not need direct client access. Use with caution as direct operations bypass the cache-aside pattern and distributed locking.

func (*CacheAside) Close added in v0.1.7

func (rca *CacheAside) Close()

Close cancels all pending lock entries and shuts down refresh workers. It does NOT close the underlying Redis client — that is the caller's responsibility. If refresh-ahead is enabled, Close waits for in-flight refresh jobs to complete (bounded by LockTTL). Safe to call multiple times.

Shutdown signals workers via the refreshDone channel rather than closing the data channel. Concurrent send + close on the same channel is a data race even when the panic is recovered; closing only the signal channel keeps refreshQueue senders race-free since closed channels are read-safe but write-unsafe.

func (*CacheAside) Del

func (rca *CacheAside) Del(ctx context.Context, key string) error

Del removes a key from Redis, triggering invalidation on all clients.

func (*CacheAside) DelMulti

func (rca *CacheAside) DelMulti(ctx context.Context, keys ...string) error

DelMulti removes multiple keys from Redis, triggering invalidation on all clients.

All commands are issued; on partial failure each per-key error is logged and the first error encountered is returned wrapped with key context. Some deletes may have succeeded.

func (*CacheAside) Get

func (rca *CacheAside) Get(
	ctx context.Context,
	ttl time.Duration,
	key string,
	fn func(ctx context.Context, key string) (val string, err error),
) (string, error)

Get returns the cached value for key, populating the cache by calling fn on a miss. Only one goroutine across all instances executes fn for a given key at a time; other callers wait for the result via Redis invalidation messages.

Empty-string values are valid: an empty value present in Redis is returned as a cache hit, not treated as a miss.

Example
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/rueidis"

	"github.com/dcbickfo/redcache"
)

func main() {
	client, err := redcache.NewRedCacheAside(
		rueidis.ClientOption{
			InitAddress: []string{"127.0.0.1:6379"},
		},
		redcache.CacheAsideOption{
			LockTTL: 5 * time.Second,
		},
	)
	if err != nil {
		panic(err)
	}
	defer client.Client().Close()

	val, err := client.Get(context.Background(), time.Minute, "example:get", func(ctx context.Context, key string) (string, error) {
		// Called only on cache miss — fetch from your data source.
		return "hello", nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(val)
}
Output:
hello

func (*CacheAside) GetMulti

func (rca *CacheAside) GetMulti(
	ctx context.Context,
	ttl time.Duration,
	keys []string,
	fn func(ctx context.Context, keys []string) (val map[string]string, err error),
) (map[string]string, error)

GetMulti returns cached values for the given keys, populating any misses by calling fn. SET operations are grouped by Redis cluster slot for efficient batching.

Example
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/rueidis"

	"github.com/dcbickfo/redcache"
)

func main() {
	client, err := redcache.NewRedCacheAside(
		rueidis.ClientOption{
			InitAddress: []string{"127.0.0.1:6379"},
		},
		redcache.CacheAsideOption{
			LockTTL: 5 * time.Second,
		},
	)
	if err != nil {
		panic(err)
	}
	defer client.Client().Close()

	keys := []string{"example:multi:a", "example:multi:b"}
	vals, err := client.GetMulti(context.Background(), time.Minute, keys, func(ctx context.Context, keys []string) (map[string]string, error) {
		// Called only for keys not in cache — fetch from your data source.
		result := make(map[string]string, len(keys))
		for _, k := range keys {
			result[k] = "value-for-" + k
		}
		return result, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(vals))
}
Output:
2

type CacheAsideOption

type CacheAsideOption struct {
	// LockTTL is the maximum time a lock can be held, and also the timeout for waiting
	// on locks when handling lost Redis invalidation messages. Defaults to 10 seconds.
	LockTTL time.Duration
	// ClientBuilder optionally overrides how the rueidis.Client is created.
	// When nil, rueidis.NewClient is used.
	ClientBuilder func(option rueidis.ClientOption) (rueidis.Client, error)
	// Logger for logging errors and debug information. Defaults to slog.Default().
	// The logger should handle log levels internally (e.g., only log Debug if level is enabled).
	Logger Logger
	// Metrics receives observability events. Defaults to NoopMetrics.
	// Implementations must be concurrent-safe; methods run on the hot path.
	Metrics Metrics
	// LockPrefix for distributed locks. Defaults to DefaultLockPrefix.
	// Choose a prefix unlikely to conflict with your data keys.
	LockPrefix string
	// RefreshLockPrefix is the key prefix used for distributed refresh-ahead locks.
	// Defaults to DefaultRefreshPrefix. Refresh keys also include a hash tag wrapping
	// the data key so they hash to the same Redis cluster slot as the data key
	// (e.g. "__redcache:refresh:{user:123}").
	RefreshLockPrefix string
	// RefreshAfterFraction enables refresh-ahead caching. When a cached value
	// is returned and more than this fraction of its TTL has elapsed, a
	// background worker refreshes the value while the stale one is returned
	// immediately. For example, 0.8 means "refresh after 80% of TTL has passed"
	// (i.e., when 20% remains). Set to 0 (default) to disable. Must be in [0, 1).
	//
	// The refresh threshold is based on the client-side cache TTL (CachePTTL),
	// which tracks the remaining lifetime of the locally cached entry. This closely
	// approximates the server-side TTL when the same ttl parameter is used
	// consistently for a given key across Get calls.
	RefreshAfterFraction float64
	// RefreshWorkers is the number of background workers that process refresh-ahead
	// jobs. Defaults to 4 when RefreshAfterFraction > 0. Must be > 0 when refresh
	// is enabled.
	RefreshWorkers int
	// RefreshQueueSize is the maximum number of pending refresh jobs. When the queue
	// is full, new refresh requests are silently dropped — the stale value continues
	// to be served until the next access. Defaults to 64 when RefreshAfterFraction > 0.
	// Must be > 0 when refresh is enabled.
	RefreshQueueSize int
}

CacheAsideOption configures a CacheAside instance.

type Logger added in v0.1.6

type Logger interface {
	// Error logs error messages. Should be used for unexpected failures or critical issues.
	Error(msg string, args ...any)
	// Debug logs detailed diagnostic information useful for development and troubleshooting.
	// Call Debug to record verbose output about internal state, cache operations, or lock handling.
	// Debug messages should not include sensitive information and may be omitted in production.
	Debug(msg string, args ...any)
}

Logger defines the logging interface used by CacheAside. Implementations must be safe for concurrent use and should handle log levels internally.

type Metrics added in v0.1.7

type Metrics interface {
	// CacheHit fires when a Get/GetMulti served a value from the client-side cache.
	CacheHit(key string)
	// CacheMiss fires when a Get/GetMulti had to populate via the user callback.
	CacheMiss(key string)
	// LockContended fires when an operation observed an existing lock and waited.
	LockContended(key string)
	// LockLost fires when a CAS detected the operation's lock was no longer held
	// (typically because a ForceSet or similar overwrote it).
	LockLost(key string)
	// RefreshTriggered fires when a refresh-ahead job was enqueued.
	RefreshTriggered(key string)
	// RefreshSkipped fires when a refresh was skipped due to local or distributed dedup.
	RefreshSkipped(key string)
	// RefreshDropped fires when a refresh was dropped because the worker queue was full.
	RefreshDropped(key string)
	// RefreshPanicked fires once per affected key when a refresh worker
	// recovered from a panic in the callback. The panic value itself is logged
	// via the configured logger; the metric carries only the key for tagging.
	RefreshPanicked(key string)
	// RefreshError fires when a refresh-ahead operation failed due to a Redis
	// error (network, timeout, command failure) rather than expected dedup
	// contention. Distinct from RefreshSkipped, which signals healthy contention.
	RefreshError(key string)
	// InvalidationError fires when a Redis invalidation message could not be
	// parsed. The key is unknown in this case, so no key is reported.
	InvalidationError()
}

Metrics receives observability events from CacheAside operations.

All methods must be safe for concurrent use; callers may invoke them from background workers and request goroutines simultaneously. Implementations should be cheap — they run on the hot path.

Implementations that only care about a subset of events can embed NoopMetrics and override the methods of interest.

type NoopMetrics added in v0.1.7

type NoopMetrics struct{}

NoopMetrics is a Metrics implementation that does nothing. Embed it to opt in to a subset of events:

type myMetrics struct {
    redcache.NoopMetrics
}

func (myMetrics) CacheMiss(key string) { /* count miss */ }

func (NoopMetrics) CacheHit added in v0.1.7

func (NoopMetrics) CacheHit(string)

CacheHit implements Metrics.

func (NoopMetrics) CacheMiss added in v0.1.7

func (NoopMetrics) CacheMiss(string)

CacheMiss implements Metrics.

func (NoopMetrics) InvalidationError added in v0.1.7

func (NoopMetrics) InvalidationError()

InvalidationError implements Metrics.

func (NoopMetrics) LockContended added in v0.1.7

func (NoopMetrics) LockContended(string)

LockContended implements Metrics.

func (NoopMetrics) LockLost added in v0.1.7

func (NoopMetrics) LockLost(string)

LockLost implements Metrics.

func (NoopMetrics) RefreshDropped added in v0.1.7

func (NoopMetrics) RefreshDropped(string)

RefreshDropped implements Metrics.

func (NoopMetrics) RefreshError added in v0.1.7

func (NoopMetrics) RefreshError(string)

RefreshError implements Metrics.

func (NoopMetrics) RefreshPanicked added in v0.1.7

func (NoopMetrics) RefreshPanicked(string)

RefreshPanicked implements Metrics.

func (NoopMetrics) RefreshSkipped added in v0.1.7

func (NoopMetrics) RefreshSkipped(string)

RefreshSkipped implements Metrics.

func (NoopMetrics) RefreshTriggered added in v0.1.7

func (NoopMetrics) RefreshTriggered(string)

RefreshTriggered implements Metrics.

type PrimeableCacheAside added in v0.1.7

type PrimeableCacheAside struct {
	*CacheAside
	// contains filtered or unexported fields
}

PrimeableCacheAside extends CacheAside with explicit Set operations for cache priming and coordinated cache updates.

It inherits all Get/GetMulti/Del/DelMulti capabilities and adds:

  • Set/SetMulti for coordinated cache updates with write locking
  • ForceSet/ForceSetMulti for unconditional writes bypassing locks

func NewPrimeableCacheAside added in v0.1.7

func NewPrimeableCacheAside(clientOption rueidis.ClientOption, caOption CacheAsideOption) (*PrimeableCacheAside, error)

NewPrimeableCacheAside creates a PrimeableCacheAside that wraps a CacheAside with additional Set operations.

func (*PrimeableCacheAside) Close added in v0.1.7

func (pca *PrimeableCacheAside) Close()

Close cancels all pending lock entries. It does NOT close the underlying Redis client.

func (*PrimeableCacheAside) ForceSet added in v0.1.7

func (pca *PrimeableCacheAside) ForceSet(ctx context.Context, ttl time.Duration, key, value string) error

ForceSet unconditionally writes a value to Redis, bypassing all locks. Any in-progress Get or Set on this key will see ErrLockLost and retry.

ttl must be > 0 (Redis rejects PX 0). Use Del to remove a key.

Prefer Set when you need rollback semantics on callback failure.

func (*PrimeableCacheAside) ForceSetMulti added in v0.1.7

func (pca *PrimeableCacheAside) ForceSetMulti(ctx context.Context, ttl time.Duration, values map[string]string) error

ForceSetMulti unconditionally writes multiple values to Redis, bypassing all locks. Any in-progress Get or Set on these keys will see ErrLockLost and retry.

ttl must be > 0 (Redis rejects PX 0).

All commands are issued; on partial failure each per-key error is logged and the first error encountered is returned. Some writes may have succeeded. Callers that need structured per-key status should use SetMulti, which returns a BatchError with per-key results.

func (*PrimeableCacheAside) Set added in v0.1.7

func (pca *PrimeableCacheAside) Set(
	ctx context.Context,
	ttl time.Duration,
	key string,
	fn func(ctx context.Context, key string) (string, error),
) error

Set acquires a write lock on the key, calls fn to produce the value, and atomically sets it in Redis. If another operation holds a lock, Set waits for it to complete.

The callback fn receives the key and should return the value to cache. Set respects context cancellation for timeouts.

On callback error, the previous value is restored only if Set still holds the lock. If a concurrent ForceSet has stolen the lock, the stealer's value is preserved rather than overwritten with the stale prior value, and Set returns the callback error. The CAS-set after a successful callback may also return ErrLockLost under the same race; in that case, the lock-stealer's value is preserved.

func (*PrimeableCacheAside) SetMulti added in v0.1.7

func (pca *PrimeableCacheAside) SetMulti(
	ctx context.Context,
	ttl time.Duration,
	keys []string,
	fn func(ctx context.Context, keys []string) (map[string]string, error),
) error

SetMulti acquires write locks on all keys, calls fn once with all keys, and atomically sets the returned values. Locks are acquired in sorted order to prevent deadlocks.

On partial CAS failure, returns a *BatchError listing succeeded and failed keys. On full success, returns nil.

Directories

Path Synopsis
internal
cmdx
Package cmdx provides Redis cluster slot calculation utilities.
lockpool
Package lockpool provides fast lock value generation using an atomic counter and a per-instance UUID prefix.
mapsx
Package mapsx provides generic helpers for map operations.
syncx
Package syncx provides generic typed wrappers around standard library sync primitives.
