cache

Published: Mar 6, 2026 License: MIT Imports: 22 Imported by: 2


cache gives your services one cache API with multiple backend options. Swap drivers without refactoring.


What cache is

An explicit cache abstraction with a minimal Store interface and ergonomic Cache helpers. Drivers are chosen when you construct the store, so swapping backends is a dependency-injection change instead of a refactor.

Installation

go get github.com/goforj/cache

Optional backends are separate modules. Install only what you use:

go get github.com/goforj/cache/driver/rediscache
go get github.com/goforj/cache/driver/memcachedcache
go get github.com/goforj/cache/driver/natscache
go get github.com/goforj/cache/driver/dynamocache
go get github.com/goforj/cache/driver/sqlitecache
go get github.com/goforj/cache/driver/postgrescache
go get github.com/goforj/cache/driver/mysqlcache

Drivers

Driver / Backend Mode Shared Durable TTL Counters Locks RateLimit Prefix Batch Shaping Notes
Null No-op - - - - No-op No-op Great for tests: cache calls are no-ops and never persist.
File Local filesystem - Local Local - Simple durability on a single host; set StoreConfig.FileDir (or use NewFileStore).
Memory In-process - - Local Local - Fastest; per-process only, best for single-node or short-lived data.
Memcached Networked - Shared Shared TTL resolution is 1s; configure addresses via memcachedcache.Config.Addresses.
Redis Networked - Shared Shared Full feature set; counters refresh TTL (Redis counter TTL granularity currently 1s).
NATS Networked - Shared Shared JetStream KV-backed driver; inject an existing bucket via natscache.Config.KeyValue.
DynamoDB Networked Shared Shared Backed by DynamoDB (supports localstack/dynamodb-local).
SQLite Local / file - Local Local sqlitecache (via sqlcore); great for embedded/local durable cache.
Postgres Networked Shared Shared postgrescache (via sqlcore); good shared durable backend.
MySQL Networked Shared Shared mysqlcache (via sqlcore); good shared durable backend.

Driver constructor quick examples

Use root constructors for in-process backends, and driver-module constructors for external backends. Driver backends live in separate modules so applications only import/link the optional backend dependencies they actually use.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/goforj/cache"
	"github.com/goforj/cache/cachecore"
	"github.com/goforj/cache/driver/dynamocache"
	"github.com/goforj/cache/driver/memcachedcache"
	"github.com/goforj/cache/driver/mysqlcache"
	"github.com/goforj/cache/driver/natscache"
	"github.com/goforj/cache/driver/postgrescache"
	"github.com/goforj/cache/driver/rediscache"
	"github.com/goforj/cache/driver/sqlitecache"
)

func main() {
	ctx := context.Background()
	base := cachecore.BaseConfig{DefaultTTL: 5 * time.Minute, Prefix: "app"}

	cache.NewMemoryStore(ctx)               // in-process memory
	cache.NewFileStore(ctx, "./cache-data") // local file-backed
	cache.NewNullStore(ctx)                 // disabled / drop-only

	// Redis (driver-owned connection config; no direct redis client required)
	redisStore := rediscache.New(rediscache.Config{BaseConfig: base, Addr: "127.0.0.1:6379"})
	_ = redisStore

	// Memcached (one or more server addresses)
	memcachedStore := memcachedcache.New(memcachedcache.Config{
		BaseConfig: base,
		Addresses:  []string{"127.0.0.1:11211"},
	})
	_ = memcachedStore

	// NATS JetStream KV (inject a bucket from your NATS setup)
	var kv natscache.KeyValue // create via your NATS JetStream setup
	natsStore := natscache.New(natscache.Config{BaseConfig: base, KeyValue: kv})
	_ = natsStore

	// DynamoDB (auto-creates client when Client is nil)
	dynamoStore, err := dynamocache.New(ctx, dynamocache.Config{
		BaseConfig: base,
		Region:     "us-east-1",
		Table:      "cache_entries",
	})
	fmt.Println(dynamoStore, err)

	// SQLite (via sqlcore)
	sqliteStore, err := sqlitecache.New(sqlitecache.Config{
		BaseConfig: base,
		DSN:        "file::memory:?cache=shared",
		Table:      "cache_entries",
	})
	fmt.Println(sqliteStore, err)

	// Postgres (via sqlcore)
	postgresStore, err := postgrescache.New(postgrescache.Config{
		BaseConfig: base,
		DSN:        "postgres://user:pass@127.0.0.1:5432/app?sslmode=disable",
		Table:      "cache_entries",
	})
	fmt.Println(postgresStore, err)

	// MySQL (via sqlcore)
	mysqlStore, err := mysqlcache.New(mysqlcache.Config{
		BaseConfig: base,
		DSN:        "user:pass@tcp(127.0.0.1:3306)/app?parseTime=true",
		Table:      "cache_entries",
	})
	fmt.Println(mysqlStore, err)
}

Module Layout

Category Module Purpose
Core github.com/goforj/cache Cache API and root-backed stores (memory, file, null)
Core github.com/goforj/cache/cachecore Shared contracts, types, and base config
Core github.com/goforj/cache/cachetest Shared store contract test harness
Optional drivers github.com/goforj/cache/driver/*cache Backend driver modules
Optional drivers github.com/goforj/cache/driver/sqlcore Shared SQL implementation for dialect wrappers
Testing and tooling github.com/goforj/cache/integration Integration suites (root, all)
Testing and tooling github.com/goforj/cache/docs Docs + benchmark tooling

Quick Start

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/goforj/cache"
    "github.com/goforj/cache/cachecore"
    "github.com/goforj/cache/driver/rediscache"
)

func main() {
    ctx := context.Background()

    store := cache.NewMemoryStoreWithConfig(ctx, cache.StoreConfig{
        BaseConfig: cachecore.BaseConfig{DefaultTTL: 5 * time.Minute},
    })
    c := cache.NewCache(store)

    type Profile struct { Name string `json:"name"` }

    // Typed lifecycle (generic helpers): set -> get -> delete
    _ = cache.Set(c, "user:42:profile", Profile{Name: "Ada"}, time.Minute)
    profile, ok, err := cache.Get[Profile](c, "user:42:profile")
    fmt.Println(err == nil, ok, profile.Name) // true true Ada
    _ = c.Delete("user:42:profile")

    // String lifecycle: set -> get -> delete
    _ = c.SetString("settings:mode", "dark", time.Minute)
    mode, ok, err := c.GetString("settings:mode")
    fmt.Println(err == nil, ok, mode) // true true dark
    _ = c.Delete("settings:mode")

    // Remember pattern.
    profile, err = cache.Remember[Profile](c, "user:42:profile", time.Minute, func() (Profile, error) {
        return Profile{Name: "Ada"}, nil
    })
    fmt.Println(profile.Name) // Ada

    // Switch to Redis (dependency injection, no code changes below).
    store = rediscache.New(rediscache.Config{
        BaseConfig: cachecore.BaseConfig{
            Prefix:     "app",
            DefaultTTL: 5 * time.Minute,
        },
        Addr: "127.0.0.1:6379",
    })
    c = cache.NewCache(store)
}

Config Options

Cache uses explicit config structs throughout, with shared fields embedded via cachecore.BaseConfig.

Shared config (embedded by root stores and optional drivers):

type BaseConfig struct {
	DefaultTTL    time.Duration
	Prefix        string
	Compression   CompressionCodec
	MaxValueBytes int
	EncryptionKey []byte
}

Root-backed stores use cache.StoreConfig:

type StoreConfig struct {
	cachecore.BaseConfig
	MemoryCleanupInterval time.Duration
	FileDir               string
}

Typical root constructor usage:

store := cache.NewMemoryStoreWithConfig(ctx, cache.StoreConfig{
	BaseConfig: cachecore.BaseConfig{
		DefaultTTL: 5 * time.Minute,
		Prefix:     "app",
	},
	MemoryCleanupInterval: time.Minute,
})

Optional backends use driver-local config types that embed the same cachecore.BaseConfig plus backend-specific fields.

Example shapes:

// rediscache.Config (abridged)
type Config struct {
	cachecore.BaseConfig
	Client rediscache.Client
}
// sqlitecache.Config (abridged)
type Config struct {
	cachecore.BaseConfig
	DSN   string
	Table string
}

See the API Index Driver Configs section for per-driver defaults and compile-checked examples for: rediscache, memcachedcache, natscache, dynamocache, sqlitecache, postgrescache, mysqlcache, and sqlcore.

Behavior Semantics

For precise runtime semantics, see Behavior Semantics:

  • TTL/default-TTL matrix by operation/helper
  • stale and refresh-ahead behavior and edge cases
  • lock and rate-limit guarantees (process-local vs distributed scope)

Production Guidance

For deployment defaults and operational patterns, see Production Guide:

  • recommended defaults and tuning
  • key naming/versioning conventions
  • TTL jitter and miss-storm mitigation
  • observability instrumentation patterns

Memoized reads

Wrap any store with NewMemoStore to memoize reads within the process; memoized entries are invalidated automatically on write paths that go through the same process.

memoStore := cache.NewMemoStore(store)
memoRepo := cache.NewCache(memoStore)

Staleness note: memoization is per-process only. Writes that happen in other processes (or outside your app) will not invalidate this memo cache. Use it when local staleness is acceptable, or scope it narrowly (e.g., per-request) if multiple writers exist.

Testing

Unit tests cover the public helpers. Shared cross-driver integration coverage runs from the integration module (with testcontainers-go for container-backed backends):

cd integration
go test -tags=integration ./all

Use the INTEGRATION_DRIVER environment variable (a comma-separated list of driver names, e.g. INTEGRATION_DRIVER=sqlitecache) to select which fixtures run, or use the repo helper:

bash scripts/test-all-modules.sh

Benchmarks

cd docs
go test -tags benchrender ./bench -run TestRenderBenchmarks -count=1 -v

Note: NATS numbers can look slower than Redis/memory because the NATS driver preserves per-operation TTL semantics by storing per-key expiry metadata (envelope encode/decode) and may do extra compare/update steps for some operations. Generic helper benchmarks (Get[T] / Set[T]) use the default JSON codec, so compare them against GetBytes / SetBytes (and GetString / SetString) when evaluating convenience vs raw-path performance.

Note: DynamoDB is intentionally omitted from these local charts because emulator-based numbers are not representative of real AWS latency.

NATS variants in these charts:

  • nats: per-key TTL semantics using a binary envelope (magic/expiresAt/value). This preserves per-key expiry parity with other drivers, with modest metadata overhead.
  • nats_bucket_ttl: bucket-level TTL mode (WithNATSBucketTTL(true)), raw value path; faster but different expiry semantics.

Latency (ns/op)

Cache benchmark latency chart

Iterations (N)

Cache benchmark iteration chart

Allocated Bytes (B/op)

Cache benchmark bytes chart

Allocations (allocs/op)

Cache benchmark allocs chart

API reference

The API section below is autogenerated; do not edit between the markers. Many functions also provide ...Context variants that accept an explicit context.Context.

API Index

Group Functions
Constructors NewFileStore NewFileStoreWithConfig NewMemoryStore NewMemoryStoreWithConfig NewNullStore NewNullStoreWithConfig
Core Driver NewCache NewCacheWithTTL Ready Store
Driver Configs Shared BaseConfig DynamoDB Config Memcached Config MySQL Config NATS Config Postgres Config Redis Config SQL Core Config SQLite Config
Invalidation Delete DeleteMany Flush Pull PullBytes
Locking Acquire Block Lock LockContext LockHandle.Get NewLockHandle Release TryLock Unlock
Memoization NewMemoStore
Observability OnCacheOp WithObserver
Rate Limiting RateLimit
Read Through Remember RememberBytes RememberStale RememberStaleBytes RememberStaleContext
Reads BatchGetBytes Get GetBytes GetJSON GetString
Refresh Ahead RefreshAhead RefreshAheadBytes RefreshAheadValueWithCodec
Testing Helpers AssertCalled AssertNotCalled AssertTotal Cache Count New Reset Total
Writes Add BatchSetBytes Decrement Increment Set SetBytes SetJSON SetString

Examples assume ctx := context.Background() and c := cache.NewCache(cache.NewMemoryStore(ctx)) unless shown otherwise.

Constructors

NewFileStore

NewFileStore is a convenience for a filesystem-backed store.

ctx := context.Background()
store := cache.NewFileStore(ctx, "/tmp/my-cache")
fmt.Println(store.Driver()) // file

NewFileStoreWithConfig

NewFileStoreWithConfig builds a filesystem-backed store using explicit root config.

ctx := context.Background()
store := cache.NewFileStoreWithConfig(ctx, cache.StoreConfig{
	BaseConfig: cachecore.BaseConfig{
		EncryptionKey: []byte("01234567890123456789012345678901"),
		MaxValueBytes: 4096,
		Compression:   cache.CompressionGzip,
	},
	FileDir: "/tmp/my-cache",
})
fmt.Println(store.Driver()) // file

NewMemoryStore

NewMemoryStore is a convenience for an in-process store using defaults.

ctx := context.Background()
store := cache.NewMemoryStore(ctx)
fmt.Println(store.Driver()) // memory

NewMemoryStoreWithConfig

NewMemoryStoreWithConfig builds an in-process store using explicit root config.

ctx := context.Background()
store := cache.NewMemoryStoreWithConfig(ctx, cache.StoreConfig{
	BaseConfig: cachecore.BaseConfig{
		DefaultTTL:  30 * time.Second,
		Compression: cache.CompressionGzip,
	},
	MemoryCleanupInterval: 5 * time.Minute,
})
fmt.Println(store.Driver()) // memory

NewNullStore

NewNullStore is a no-op store useful for tests where caching should be disabled.

ctx := context.Background()
store := cache.NewNullStore(ctx)
fmt.Println(store.Driver()) // null

NewNullStoreWithConfig

NewNullStoreWithConfig builds a null store with shared wrappers (compression/encryption/limits).

ctx := context.Background()
store := cache.NewNullStoreWithConfig(ctx, cache.StoreConfig{
	BaseConfig: cachecore.BaseConfig{
		Compression:   cache.CompressionGzip,
		MaxValueBytes: 1024,
	},
})
fmt.Println(store.Driver()) // null

Core

Driver

Driver reports the underlying store driver.

NewCache

NewCache creates a cache facade bound to a concrete store.

ctx := context.Background()
s := cache.NewMemoryStore(ctx)
c := cache.NewCache(s)
fmt.Println(c.Driver()) // memory

NewCacheWithTTL

NewCacheWithTTL lets callers override the default TTL applied when ttl <= 0.

ctx := context.Background()
s := cache.NewMemoryStore(ctx)
c := cache.NewCacheWithTTL(s, 2*time.Minute)
fmt.Println(c.Driver(), c != nil) // memory true

Ready

Ready checks whether the underlying store is ready to serve requests.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
fmt.Println(c.Ready() == nil) // true

Store

Store returns the underlying store implementation.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
fmt.Println(c.Store().Driver()) // memory

Driver Configs

Optional backend config examples (compile-checked from generated examples and driver New(...) docs).

Shared cachecore.BaseConfig

Shared fields are embedded via cachecore.BaseConfig on every driver config:

  • DefaultTTL: defaults to 5*time.Minute when zero in all optional drivers
  • Prefix: defaults to "app" when empty in all optional drivers
  • Compression: default zero value (cachecore.CompressionNone) unless set
  • MaxValueBytes: default 0 (no limit) unless set
  • EncryptionKey: default nil (disabled) unless set

DynamoDB

Defaults:

  • Region: "us-east-1" when empty
  • Table: "cache_entries" when empty
  • DefaultTTL: 5*time.Minute when zero
  • Prefix: "app" when empty
  • Client: auto-created when nil (uses Region and optional Endpoint)
  • Endpoint: empty by default (normal AWS endpoint resolution)
ctx := context.Background()
store, err := dynamocache.New(ctx, dynamocache.Config{
	BaseConfig: cachecore.BaseConfig{
		DefaultTTL: 5 * time.Minute,
		Prefix:     "app",
	},
	Region: "us-east-1",
	Table:  "cache_entries",
})
if err != nil {
	panic(err)
}
fmt.Println(store.Driver()) // dynamo

Memcached

Defaults:

  • Addresses: []string{"127.0.0.1:11211"} when empty
  • DefaultTTL: 5*time.Minute when zero
  • Prefix: "app" when empty
store := memcachedcache.New(memcachedcache.Config{
	BaseConfig: cachecore.BaseConfig{
		DefaultTTL: 5 * time.Minute,
		Prefix:     "app",
	},
	Addresses: []string{"127.0.0.1:11211"},
})
fmt.Println(store.Driver()) // memcached

MySQL

Defaults:

  • DefaultTTL: 5*time.Minute when zero
  • Prefix: "app" when empty
  • Table: "cache_entries" when empty
  • DSN: required
store, err := mysqlcache.New(mysqlcache.Config{
	BaseConfig: cachecore.BaseConfig{
		DefaultTTL: 5 * time.Minute,
		Prefix:     "app",
	},
	DSN:   "user:pass@tcp(127.0.0.1:3306)/app?parseTime=true",
	Table: "cache_entries",
})
if err != nil {
	panic(err)
}
fmt.Println(store.Driver()) // sql

NATS

Defaults:

  • DefaultTTL: 5*time.Minute when zero
  • Prefix: "app" when empty
  • BucketTTL: false (TTL enforced in value envelope metadata)
  • KeyValue: required for real operations (nil allowed, operations return errors)
var kv natscache.KeyValue // provided by your NATS setup
store := natscache.New(natscache.Config{
	BaseConfig: cachecore.BaseConfig{
		DefaultTTL: 5 * time.Minute,
		Prefix:     "app",
	},
	KeyValue:  kv,
	BucketTTL: false,
})
fmt.Println(store.Driver()) // nats

Postgres

Defaults:

  • DefaultTTL: 5*time.Minute when zero
  • Prefix: "app" when empty
  • Table: "cache_entries" when empty
  • DSN: required
store, err := postgrescache.New(postgrescache.Config{
	BaseConfig: cachecore.BaseConfig{
		DefaultTTL: 5 * time.Minute,
		Prefix:     "app",
	},
	DSN:   "postgres://user:pass@localhost:5432/app?sslmode=disable",
	Table: "cache_entries",
})
if err != nil {
	panic(err)
}
fmt.Println(store.Driver()) // sql

Redis

Defaults:

  • DefaultTTL: 5*time.Minute when zero
  • Prefix: "app" when empty
  • Addr: empty by default (no client auto-created unless Addr is set)
  • Client: optional advanced override (takes precedence when set)
  • If neither Client nor Addr is set, operations return errors until a client is provided
store := rediscache.New(rediscache.Config{
	BaseConfig: cachecore.BaseConfig{
		DefaultTTL: 5 * time.Minute,
		Prefix:     "app",
	},
	Addr: "127.0.0.1:6379",
})
fmt.Println(store.Driver()) // redis

SQL Core (advanced/shared implementation)

Defaults:

  • Table: "cache_entries" when empty
  • DefaultTTL: 5*time.Minute when zero
  • Prefix: "app" when empty
  • DriverName: required
  • DSN: required
store, err := sqlcore.New(sqlcore.Config{
	BaseConfig: cachecore.BaseConfig{
		DefaultTTL: 5 * time.Minute,
		Prefix:     "app",
	},
	DriverName: "sqlite",
	DSN:        "file::memory:?cache=shared",
	Table:      "cache_entries",
})
if err != nil {
	panic(err)
}
fmt.Println(store.Driver()) // sql

SQLite

Defaults:

  • DefaultTTL: 5*time.Minute when zero
  • Prefix: "app" when empty
  • Table: "cache_entries" when empty
  • DSN: required
store, err := sqlitecache.New(sqlitecache.Config{
	BaseConfig: cachecore.BaseConfig{
		DefaultTTL: 5 * time.Minute,
		Prefix:     "app",
	},
	DSN:   "file::memory:?cache=shared",
	Table: "cache_entries",
})
if err != nil {
	panic(err)
}
fmt.Println(store.Driver()) // sql

Invalidation

Delete

Delete removes a single key.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetBytes("a", []byte("1"), time.Minute)
fmt.Println(c.Delete("a") == nil) // true

DeleteMany

DeleteMany removes multiple keys.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
fmt.Println(c.DeleteMany("a", "b") == nil) // true

Flush

Flush clears all keys for this store scope.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetBytes("a", []byte("1"), time.Minute)
fmt.Println(c.Flush() == nil) // true

Pull

Pull returns a typed value for key and removes it, using the default codec (JSON).

type Token struct { Value string `json:"value"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = cache.Set(c, "reset:token:42", Token{Value: "abc"}, time.Minute)
tok, ok, err := cache.Pull[Token](c, "reset:token:42")
fmt.Println(err == nil, ok, tok.Value) // true true abc

PullBytes

PullBytes returns value and removes it from cache.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetString("reset:token:42", "abc", time.Minute)
body, ok, _ := c.PullBytes("reset:token:42")
fmt.Println(ok, string(body)) // true abc

Locking

Acquire

Acquire attempts to acquire the lock once (non-blocking).

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, err := lock.Acquire()
fmt.Println(err == nil, locked) // true true

Block

Block waits up to timeout to acquire the lock, runs fn if acquired, then releases.

retryInterval <= 0 falls back to the cache default lock retry interval.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, err := lock.Block(500*time.Millisecond, 25*time.Millisecond, func() error {
	// do protected work
	return nil
})
fmt.Println(err == nil, locked) // true true

Lock

Lock waits until the lock is acquired or timeout elapses.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
locked, err := c.Lock("job:sync", 10*time.Second, time.Second)
fmt.Println(err == nil, locked) // true true

LockContext

LockContext retries lock acquisition until success or context cancellation.

LockHandle.Get

Get acquires the lock once, runs fn if acquired, then releases automatically.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, err := lock.Get(func() error {
	// do protected work
	return nil
})
fmt.Println(err == nil, locked) // true true

NewLockHandle

NewLockHandle creates a reusable lock handle for a key/ttl pair.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, err := lock.Acquire()
fmt.Println(err == nil, locked) // true true
if locked {
	_ = lock.Release()
}

Release

Release unlocks the key if this handle previously acquired it.

It is safe to call multiple times; repeated calls become no-ops after the first successful release.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, _ := lock.Acquire()
if locked {
	_ = lock.Release()
}

TryLock

TryLock acquires a short-lived lock key when not already held.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
locked, _ := c.TryLock("job:sync", 10*time.Second)
fmt.Println(locked) // true

Unlock

Unlock releases a previously acquired lock key.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
locked, _ := c.TryLock("job:sync", 10*time.Second)
if locked {
	_ = c.Unlock("job:sync")
}

Memoization

NewMemoStore

NewMemoStore decorates store with per-process read memoization.

Behavior:

  • First Get hits the backing store, clones the value, and memoizes it in-process.
  • Subsequent Get for the same key returns the memoized clone (no backend call).
  • Any write/delete/flush invalidates the memo entry so local reads stay in sync with changes made through this process.
  • Memo data is per-process only; other processes or external writers will not invalidate it. Use only when that staleness window is acceptable.
ctx := context.Background()
base := cache.NewMemoryStore(ctx)
memo := cache.NewMemoStore(base)
c := cache.NewCache(memo)
fmt.Println(c.Driver()) // memory

Observability

OnCacheOp

OnCacheOp implements Observer.

obs := cache.ObserverFunc(func(ctx context.Context, op, key string, hit bool, err error, dur time.Duration, driver cachecore.Driver) {
	fmt.Println(op, key, hit, err == nil, driver)
	_ = ctx
	_ = dur
})
obs.OnCacheOp(context.Background(), "get", "user:42", true, nil, time.Millisecond, cachecore.DriverMemory)

WithObserver

WithObserver attaches an observer to receive operation events.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
c = c.WithObserver(cache.ObserverFunc(func(ctx context.Context, op, key string, hit bool, err error, dur time.Duration, driver cachecore.Driver) {
	// See docs/production-guide.md for a real metrics recipe.
	fmt.Println(op, driver, hit, err == nil)
	_ = ctx
	_ = key
	_ = dur
}))
_, _, _ = c.GetBytes("profile:42")

Rate Limiting

RateLimit

RateLimit increments a fixed-window counter and returns allowance metadata.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
res, err := c.RateLimit("rl:api:ip:1.2.3.4", 100, time.Minute)
fmt.Println(err == nil, res.Allowed, res.Count, res.Remaining, !res.ResetAt.IsZero())
// Output: true true 1 99 true

Read Through

Remember

Remember is the ergonomic, typed remember helper using JSON encoding by default.

type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
profile, err := cache.Remember[Profile](c, "profile:42", time.Minute, func() (Profile, error) {
	return Profile{Name: "Ada"}, nil
})
fmt.Println(err == nil, profile.Name) // true Ada

RememberBytes

RememberBytes returns key value or computes/stores it when missing.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
data, err := c.RememberBytes("dashboard:summary", time.Minute, func() ([]byte, error) {
	return []byte("payload"), nil
})
fmt.Println(err == nil, string(data)) // true payload

RememberStale

RememberStale returns a typed value with stale fallback semantics using JSON encoding by default.

type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
profile, usedStale, err := cache.RememberStale[Profile](c, "profile:42", time.Minute, 10*time.Minute, func() (Profile, error) {
	return Profile{Name: "Ada"}, nil
})
fmt.Println(err == nil, usedStale, profile.Name) // true false Ada

RememberStaleBytes

RememberStaleBytes returns a fresh value when available, otherwise computes and caches it. If computing fails and a stale value exists, it returns the stale value. The returned bool is true when a stale fallback was used.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
body, usedStale, err := c.RememberStaleBytes("profile:42", time.Minute, 10*time.Minute, func() ([]byte, error) {
	return []byte(`{"name":"Ada"}`), nil
})
fmt.Println(err == nil, usedStale, len(body) > 0)

RememberStaleContext

RememberStaleContext returns a typed value with stale fallback semantics using JSON encoding by default.

type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
profile, usedStale, err := cache.RememberStaleContext[Profile](ctx, c, "profile:42", time.Minute, 10*time.Minute, func(ctx context.Context) (Profile, error) {
	return Profile{Name: "Ada"}, nil
})
fmt.Println(err == nil, usedStale, profile.Name) // true false Ada

Reads

BatchGetBytes

BatchGetBytes returns all found values for the provided keys. Missing keys are omitted from the returned map.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetBytes("a", []byte("1"), time.Minute)
_ = c.SetBytes("b", []byte("2"), time.Minute)
values, err := c.BatchGetBytes("a", "b", "missing")
fmt.Println(err == nil, string(values["a"]), string(values["b"])) // true 1 2

Get

Get returns a typed value for key using the default codec (JSON) when present.

type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = cache.Set(c, "profile:42", Profile{Name: "Ada"}, time.Minute)
_ = cache.Set(c, "settings:mode", "dark", time.Minute)
profile, ok, err := cache.Get[Profile](c, "profile:42")
mode, ok2, err2 := cache.Get[string](c, "settings:mode")
fmt.Println(err == nil, ok, profile.Name, err2 == nil, ok2, mode) // true true Ada true true dark

GetBytes

GetBytes returns raw bytes for key when present.

ctx := context.Background()
s := cache.NewMemoryStore(ctx)
c := cache.NewCache(s)
_ = c.SetBytes("user:42", []byte("Ada"), 0)
value, ok, _ := c.GetBytes("user:42")
fmt.Println(ok, string(value)) // true Ada

GetJSON

GetJSON decodes a JSON value into T when key exists, using background context.

type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = cache.SetJSON(c, "profile:42", Profile{Name: "Ada"}, time.Minute)
profile, ok, err := cache.GetJSON[Profile](c, "profile:42")
fmt.Println(err == nil, ok, profile.Name) // true true Ada

GetString

GetString returns a UTF-8 string value for key when present.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetString("user:42:name", "Ada", 0)
name, ok, _ := c.GetString("user:42:name")
fmt.Println(ok, name) // true Ada

Refresh Ahead

RefreshAhead

RefreshAhead returns a typed value and refreshes asynchronously when near expiry.

type Summary struct { Text string `json:"text"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
s, err := cache.RefreshAhead[Summary](c, "dashboard:summary", time.Minute, 10*time.Second, func() (Summary, error) {
	return Summary{Text: "ok"}, nil
})
fmt.Println(err == nil, s.Text) // true ok

RefreshAheadBytes

RefreshAheadBytes returns cached value immediately and refreshes asynchronously when near expiry. On miss, it computes and stores synchronously.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
body, err := c.RefreshAheadBytes("dashboard:summary", time.Minute, 10*time.Second, func() ([]byte, error) {
	return []byte("payload"), nil
})
fmt.Println(err == nil, len(body) > 0) // true true

RefreshAheadValueWithCodec

RefreshAheadValueWithCodec allows custom encoding/decoding for typed refresh-ahead operations.

Testing Helpers

AssertCalled

AssertCalled verifies key was touched by op the expected number of times.

f := cachefake.New()
c := f.Cache()
_ = c.SetString("settings:mode", "dark", 0)
t := &testing.T{}
f.AssertCalled(t, cachefake.OpSet, "settings:mode", 1)

AssertNotCalled

AssertNotCalled ensures key was never touched by op.

f := cachefake.New()
t := &testing.T{}
f.AssertNotCalled(t, cachefake.OpDelete, "settings:mode")

AssertTotal

AssertTotal ensures the total call count for an op matches times.

f := cachefake.New()
c := f.Cache()
_ = c.Delete("a")
_ = c.Delete("b")
t := &testing.T{}
f.AssertTotal(t, cachefake.OpDelete, 2)

Cache

Cache returns the cache facade to inject into code under test.

f := cachefake.New()
c := f.Cache()
_, _, _ = c.GetBytes("settings:mode")

Count

Count returns calls for op+key.

f := cachefake.New()
c := f.Cache()
_ = c.SetString("settings:mode", "dark", 0)
n := f.Count(cachefake.OpSet, "settings:mode")
_ = n

New

New creates a Fake using an in-memory store.

f := cachefake.New()
c := f.Cache()
_ = c.SetString("settings:mode", "dark", 0)

Reset

Reset clears recorded counts.

f := cachefake.New()
_ = f.Cache().SetString("settings:mode", "dark", 0)
f.Reset()

Total

Total returns total calls for an op across keys.

f := cachefake.New()
c := f.Cache()
_ = c.Delete("a")
_ = c.Delete("b")
n := f.Total(cachefake.OpDelete)
_ = n

Writes

Add

Add writes value only when key is not already present.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
created, _ := c.Add("boot:seeded", []byte("1"), time.Hour)
fmt.Println(created) // true

BatchSetBytes

BatchSetBytes writes many key/value pairs using a shared ttl.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
err := c.BatchSetBytes(map[string][]byte{
	"a": []byte("1"),
	"b": []byte("2"),
}, time.Minute)
fmt.Println(err == nil) // true

Decrement

Decrement decrements a numeric value and returns the result.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
val, _ := c.Decrement("rate:login:42", 1, time.Minute)
fmt.Println(val) // -1

Increment

Increment increments a numeric value and returns the result.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
val, _ := c.Increment("rate:login:42", 1, time.Minute)
fmt.Println(val) // 1

Set

Set encodes value with the default codec (JSON) and writes it to key.

type Settings struct { Enabled bool `json:"enabled"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
err := cache.Set(c, "settings:alerts", Settings{Enabled: true}, time.Minute)
err2 := cache.Set(c, "settings:mode", "dark", time.Minute)
fmt.Println(err == nil, err2 == nil) // true true

SetBytes

SetBytes writes raw bytes to key.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
fmt.Println(c.SetBytes("token", []byte("abc"), time.Minute) == nil) // true

SetJSON

SetJSON encodes value as JSON and writes it to key using background context.

type Settings struct { Enabled bool `json:"enabled"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
err := cache.SetJSON(c, "settings:alerts", Settings{Enabled: true}, time.Minute)
fmt.Println(err == nil) // true

SetString

SetString writes a string value to key.

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
fmt.Println(c.SetString("user:42:name", "Ada", time.Minute) == nil) // true

Payload size caps (effective bytes written)

| Driver | Hard / default cap | Configurable | Notes |
| --- | --- | --- | --- |
| Null | N/A | N/A | No persistence. |
| Memory | Process memory | - | No backend hard cap. |
| File | Disk / filesystem | - | No backend hard cap. |
| Redis | Backend practical (memory/SLO) | Server-side | No low per-value hard cap is commonly hit in application use. |
| NATS | Server/bucket payload limits | Server-side | Depends on NATS/JetStream config. |
| Memcached | ~1 MiB per item (default) | ✓ (server `-I`) | Backend-enforced item limit. |
| DynamoDB | 400 KB item hard cap | No | Includes key/metadata overhead, so usable value bytes are lower. |
| SQL | DB/engine config dependent | Server-side | Blob/row/packet limits vary by engine and deployment. |

StoreConfig.MaxValueBytes (root-backed stores) is the uniform application-level cap, and it applies to post-shaping bytes (after compression/encryption overhead).

Integration Coverage

| Area | What is validated | Scope |
| --- | --- | --- |
| Core store contract | Set/Get, TTL expiry, Add, counters, Delete/DeleteMany, Flush, typed Remember | All drivers |
| Option contracts | prefix, compression, encryption, prefix+compression+encryption, `max_value_bytes`, `default_ttl` | All drivers (per option case) |
| Locking | single-winner contention, timeout/cancel, TTL expiry reacquire, unlock safety | All drivers |
| Rate limiting | monotonic counts, remaining >= 0, window rollover reset | All drivers |
| Refresh-ahead | miss/hit behavior, async refresh success/error, malformed metadata handling | All drivers |
| Remember stale | stale fallback semantics, TTL interactions, stale/fresh independent expiry, joined errors | All drivers |
| Batch ops | partial misses, empty input behavior, default TTL application | All drivers |
| Counter semantics | signed deltas, zero delta, TTL refresh extension | All drivers |
| Context cancellation | GetContext/SetContext/LockContext/RefreshAheadContext/Remember*Ctx prompt return + driver-aware cancel semantics | All drivers (driver-aware assertions) |
| Latency / transient faults | injected slow Get/Add/Increment, timeout propagation, no hidden retries for `RefreshAhead/Remember*/LockContext/RateLimit*` | All drivers (integration wrappers over real stores) |
| Prefix isolation | Delete/Flush isolation + helper-generated keys (`__lock:`, `:__refresh_exp`, `:__stale`, rate-limit buckets) | Shared/prefixed backends |
| Payload shaping / corruption | compression+encryption round-trips, corrupted compressed/encrypted payload errors | Shared/persistent backends |
| Payload size limits | large binary payload round-trips; backend-specific near/over-limit checks (Memcached, DynamoDB) | Driver-specific where meaningful |
| Cross-store scope | shared vs local semantics across store instances (e.g. rate-limit counters) | Driver-specific expectations |
| Backend fault / recovery | backend restart mid-suite, outage errors, post-recovery round-trip/lock/refresh/stale flows | Container-backed drivers (runs automatically when container-backed fixtures are selected) |
| Observer metadata | op names, hit/miss flags, propagated errors, driver labels | Unit contract tests (integration helper paths exercise emissions indirectly) |
| Memo store caveats | per-process memoization, local-only invalidation, cross-process staleness behavior | Unit tests |

Default integration runs cover the contract suite above. Fault/recovery restart tests run automatically when the selected integration suite includes container-backed fixtures.

Contributing (README updates)

README content is a mix of generated sections and manual sections.

  • API reference (<!-- api:embed:start --> ... <!-- api:embed:end -->) is generated.
  • Test badges are updated separately.
  • Sections like driver notes and the integration coverage table are manual.

Update generated API docs

go run ./docs/readme/main.go

Update test badges

Static counts (fast, watcher-friendly; counts top-level Test* funcs):

go run ./docs/readme/main.go

Executed counts (runs tests and counts real go test -json test/subtest starts):

go run ./docs/readme/testcounts/main.go

Watch mode

./docs/watcher.sh

Notes:

  • The badge watcher runs real tests, so it is slower than API/example regeneration.
  • Fault/recovery integration tests run with the integration suite when container-backed fixtures are selected.

Documentation


Constants

const (
	CompressionNone   = cachecore.CompressionNone
	CompressionGzip   = cachecore.CompressionGzip
	CompressionSnappy = cachecore.CompressionSnappy
)

Variables

var (
	ErrValueTooLarge      = errors.New("cache: value exceeds max size")
	ErrUnsupportedCodec   = errors.New("cache: unsupported compression codec")
	ErrCorruptCompression = errors.New("cache: corrupt compressed payload")
)
var (
	ErrEncryptionKey      = errors.New("cache: encryption key must be 16, 24, or 32 bytes")
	ErrDecryptFailed      = errors.New("cache: decrypt failed")
	ErrEncryptValueTooBig = errors.New("cache: encrypt value too large")
)

Functions

func Get

func Get[T any](cache *Cache, key string) (T, bool, error)

Get returns a typed value for key using the default codec (JSON) when present. @group Reads

Example: get typed values (struct + string)

type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = cache.Set(c, "profile:42", Profile{Name: "Ada"}, time.Minute)
_ = cache.Set(c, "settings:mode", "dark", time.Minute)
profile, ok, err := cache.Get[Profile](c, "profile:42")
mode, ok2, err2 := cache.Get[string](c, "settings:mode")
fmt.Println(err == nil, ok, profile.Name, err2 == nil, ok2, mode) // true true Ada true true dark

func GetContext added in v0.1.5

func GetContext[T any](ctx context.Context, cache *Cache, key string) (T, bool, error)

GetContext is the context-aware variant of Get. @group Reads

func GetJSON

func GetJSON[T any](cache *Cache, key string) (T, bool, error)

GetJSON decodes a JSON value into T when key exists, using background context. @group Reads

Example: get typed JSON

type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = cache.SetJSON(c, "profile:42", Profile{Name: "Ada"}, time.Minute)
profile, ok, err := cache.GetJSON[Profile](c, "profile:42")
fmt.Println(err == nil, ok, profile.Name) // true true Ada

func GetJSONContext added in v0.1.5

func GetJSONContext[T any](ctx context.Context, cache *Cache, key string) (T, bool, error)

GetJSONContext is the context-aware variant of GetJSON. @group Reads

func NewFileStore

func NewFileStore(ctx context.Context, dir string) cachecore.Store

NewFileStore is a convenience for a filesystem-backed store. @group Constructors

Example: file helper

ctx := context.Background()
store := cache.NewFileStore(ctx, "/tmp/my-cache")
fmt.Println(store.Driver()) // file

func NewFileStoreWithConfig

func NewFileStoreWithConfig(ctx context.Context, cfg StoreConfig) cachecore.Store

NewFileStoreWithConfig builds a filesystem-backed store using explicit root config. @group Constructors

Example: file helper with root config

ctx := context.Background()
store := cache.NewFileStoreWithConfig(ctx, cache.StoreConfig{
	BaseConfig: cachecore.BaseConfig{
		EncryptionKey: []byte("01234567890123456789012345678901"),
		MaxValueBytes: 4096,
		Compression:   cache.CompressionGzip,
	},
	FileDir: "/tmp/my-cache",
})
fmt.Println(store.Driver()) // file

func NewMemoStore

func NewMemoStore(store cachecore.Store) cachecore.Store

NewMemoStore decorates store with per-process read memoization.

Behavior:

  • First Get hits the backing store, clones the value, and memoizes it in-process.
  • Subsequent Get for the same key returns the memoized clone (no backend call).
  • Any write/delete/flush invalidates the memo entry so local reads stay in sync with changes made through this process.
  • Memo data is per-process only; other processes or external writers will not invalidate it. Use only when that staleness window is acceptable.

@group Memoization

Example: memoize a backing store

ctx := context.Background()
base := cache.NewMemoryStore(ctx)
memo := cache.NewMemoStore(base)
c := cache.NewCache(memo)
fmt.Println(c.Driver()) // memory

func NewMemoryStore

func NewMemoryStore(ctx context.Context) cachecore.Store

NewMemoryStore is a convenience for an in-process store using defaults. @group Constructors

Example: memory helper

ctx := context.Background()
store := cache.NewMemoryStore(ctx)
fmt.Println(store.Driver()) // memory

func NewMemoryStoreWithConfig

func NewMemoryStoreWithConfig(ctx context.Context, cfg StoreConfig) cachecore.Store

NewMemoryStoreWithConfig builds an in-process store using explicit root config. @group Constructors

Example: memory helper with root config

ctx := context.Background()
store := cache.NewMemoryStoreWithConfig(ctx, cache.StoreConfig{
	BaseConfig: cachecore.BaseConfig{
		DefaultTTL:  30 * time.Second,
		Compression: cache.CompressionGzip,
	},
	MemoryCleanupInterval: 5 * time.Minute,
})
fmt.Println(store.Driver()) // memory

func NewNullStore

func NewNullStore(ctx context.Context) cachecore.Store

NewNullStore is a no-op store useful for tests where caching should be disabled. @group Constructors

Example: null helper

ctx := context.Background()
store := cache.NewNullStore(ctx)
fmt.Println(store.Driver()) // null

func NewNullStoreWithConfig

func NewNullStoreWithConfig(ctx context.Context, cfg StoreConfig) cachecore.Store

NewNullStoreWithConfig builds a null store with shared wrappers (compression/encryption/limits). @group Constructors

Example: null helper with shared wrappers enabled

ctx := context.Background()
store := cache.NewNullStoreWithConfig(ctx, cache.StoreConfig{
	BaseConfig: cachecore.BaseConfig{
		Compression:   cache.CompressionGzip,
		MaxValueBytes: 1024,
	},
})
fmt.Println(store.Driver()) // null

func Pull

func Pull[T any](cache *Cache, key string) (T, bool, error)

Pull returns a typed value for key and removes it, using the default codec (JSON). @group Invalidation

Example: pull typed value

type Token struct { Value string `json:"value"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = cache.Set(c, "reset:token:42", Token{Value: "abc"}, time.Minute)
tok, ok, err := cache.Pull[Token](c, "reset:token:42")
fmt.Println(err == nil, ok, tok.Value) // true true abc

func PullContext added in v0.1.5

func PullContext[T any](ctx context.Context, cache *Cache, key string) (T, bool, error)

PullContext is the context-aware variant of Pull. @group Invalidation

func RefreshAhead

func RefreshAhead[T any](cache *Cache, key string, ttl, refreshAhead time.Duration, fn func() (T, error)) (T, error)

RefreshAhead returns a typed value and refreshes asynchronously when near expiry. @group Refresh Ahead

Example: refresh ahead typed

type Summary struct { Text string `json:"text"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
s, err := cache.RefreshAhead[Summary](c, "dashboard:summary", time.Minute, 10*time.Second, func() (Summary, error) {
	return Summary{Text: "ok"}, nil
})
fmt.Println(err == nil, s.Text) // true ok

func RefreshAheadContext added in v0.1.5

func RefreshAheadContext[T any](ctx context.Context, cache *Cache, key string, ttl, refreshAhead time.Duration, fn func(context.Context) (T, error)) (T, error)

RefreshAheadContext is the context-aware variant of RefreshAhead. @group Refresh Ahead

func RefreshAheadValueWithCodec

func RefreshAheadValueWithCodec[T any](ctx context.Context, cache *Cache, key string, ttl, refreshAhead time.Duration, fn func() (T, error), codec ValueCodec[T]) (T, error)

RefreshAheadValueWithCodec allows custom encoding/decoding for typed refresh-ahead operations. @group Refresh Ahead

func Remember

func Remember[T any](cache *Cache, key string, ttl time.Duration, fn func() (T, error)) (T, error)

Remember is the ergonomic, typed remember helper using JSON encoding by default. @group Read Through

Example: remember typed value

type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
profile, err := cache.Remember[Profile](c, "profile:42", time.Minute, func() (Profile, error) {
	return Profile{Name: "Ada"}, nil
})
fmt.Println(err == nil, profile.Name) // true Ada

func RememberContext added in v0.1.5

func RememberContext[T any](ctx context.Context, cache *Cache, key string, ttl time.Duration, fn func(context.Context) (T, error)) (T, error)

RememberContext is the context-aware variant of Remember. @group Read Through

func RememberStale

func RememberStale[T any](cache *Cache, key string, ttl, staleTTL time.Duration, fn func() (T, error)) (T, bool, error)

RememberStale returns a typed value with stale fallback semantics using JSON encoding by default. @group Read Through

Example: remember stale typed

type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
profile, usedStale, err := cache.RememberStale[Profile](c, "profile:42", time.Minute, 10*time.Minute, func() (Profile, error) {
	return Profile{Name: "Ada"}, nil
})
fmt.Println(err == nil, usedStale, profile.Name) // true false Ada

func RememberStaleContext added in v0.1.5

func RememberStaleContext[T any](ctx context.Context, cache *Cache, key string, ttl, staleTTL time.Duration, fn func(context.Context) (T, error)) (T, bool, error)

RememberStaleContext is the context-aware variant of RememberStale. @group Read Through

Example: remember stale typed with context

type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
profile, usedStale, err := cache.RememberStaleContext[Profile](ctx, c, "profile:42", time.Minute, 10*time.Minute, func(ctx context.Context) (Profile, error) {
	return Profile{Name: "Ada"}, nil
})
fmt.Println(err == nil, usedStale, profile.Name) // true false Ada

func Set

func Set[T any](cache *Cache, key string, value T, ttl time.Duration) error

Set encodes value with the default codec (JSON) and writes it to key. @group Writes

Example: set typed values (struct + string)

type Settings struct { Enabled bool `json:"enabled"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
err := cache.Set(c, "settings:alerts", Settings{Enabled: true}, time.Minute)
err2 := cache.Set(c, "settings:mode", "dark", time.Minute)
fmt.Println(err == nil, err2 == nil) // true true

func SetContext added in v0.1.5

func SetContext[T any](ctx context.Context, cache *Cache, key string, value T, ttl time.Duration) error

SetContext is the context-aware variant of Set. @group Writes

func SetJSON

func SetJSON[T any](cache *Cache, key string, value T, ttl time.Duration) error

SetJSON encodes value as JSON and writes it to key using background context. @group Writes

Example: set typed JSON

type Settings struct { Enabled bool `json:"enabled"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
err := cache.SetJSON(c, "settings:alerts", Settings{Enabled: true}, time.Minute)
fmt.Println(err == nil) // true

func SetJSONContext added in v0.1.5

func SetJSONContext[T any](ctx context.Context, cache *Cache, key string, value T, ttl time.Duration) error

SetJSONContext is the context-aware variant of SetJSON. @group Writes

Types

type Cache

type Cache struct {
	// contains filtered or unexported fields
}

Cache provides an ergonomic cache API on top of Store.

func NewCache

func NewCache(store cachecore.Store) *Cache

NewCache creates a cache facade bound to a concrete store. @group Core

Example: cache from store

ctx := context.Background()
s := cache.NewMemoryStore(ctx)
c := cache.NewCache(s)
fmt.Println(c.Driver()) // memory

func NewCacheWithTTL

func NewCacheWithTTL(store cachecore.Store, defaultTTL time.Duration) *Cache

NewCacheWithTTL lets callers override the default TTL applied when ttl <= 0. @group Core

Example: cache with custom default TTL

ctx := context.Background()
s := cache.NewMemoryStore(ctx)
c := cache.NewCacheWithTTL(s, 2*time.Minute)
fmt.Println(c.Driver(), c != nil) // memory true

func (*Cache) Add

func (c *Cache) Add(key string, value []byte, ttl time.Duration) (bool, error)

Add writes value only when key is not already present. @group Writes

Example: add once

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
created, _ := c.Add("boot:seeded", []byte("1"), time.Hour)
fmt.Println(created) // true

func (*Cache) AddContext added in v0.1.5

func (c *Cache) AddContext(ctx context.Context, key string, value []byte, ttl time.Duration) (bool, error)

AddContext is the context-aware variant of Add. @group Writes

func (*Cache) BatchGetBytes

func (c *Cache) BatchGetBytes(keys ...string) (map[string][]byte, error)

BatchGetBytes returns all found values for the provided keys. Missing keys are omitted from the returned map. @group Reads

Example: batch get keys

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetBytes("a", []byte("1"), time.Minute)
_ = c.SetBytes("b", []byte("2"), time.Minute)
values, err := c.BatchGetBytes("a", "b", "missing")
fmt.Println(err == nil, string(values["a"]), string(values["b"])) // true 1 2

func (*Cache) BatchGetBytesContext added in v0.1.5

func (c *Cache) BatchGetBytesContext(ctx context.Context, keys ...string) (map[string][]byte, error)

BatchGetBytesContext is the context-aware variant of BatchGetBytes. @group Reads

func (*Cache) BatchSetBytes

func (c *Cache) BatchSetBytes(values map[string][]byte, ttl time.Duration) error

BatchSetBytes writes many key/value pairs using a shared ttl. @group Writes

Example: batch set keys

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
err := c.BatchSetBytes(map[string][]byte{
	"a": []byte("1"),
	"b": []byte("2"),
}, time.Minute)
fmt.Println(err == nil) // true

func (*Cache) BatchSetBytesContext added in v0.1.5

func (c *Cache) BatchSetBytesContext(ctx context.Context, values map[string][]byte, ttl time.Duration) error

BatchSetBytesContext is the context-aware variant of BatchSetBytes. @group Writes

func (*Cache) Decrement

func (c *Cache) Decrement(key string, delta int64, ttl time.Duration) (int64, error)

Decrement decrements a numeric value and returns the result. @group Writes

Example: decrement counter

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
val, _ := c.Decrement("rate:login:42", 1, time.Minute)
fmt.Println(val) // -1

func (*Cache) DecrementContext added in v0.1.5

func (c *Cache) DecrementContext(ctx context.Context, key string, delta int64, ttl time.Duration) (int64, error)

DecrementContext is the context-aware variant of Decrement. @group Writes

func (*Cache) Delete

func (c *Cache) Delete(key string) error

Delete removes a single key. @group Invalidation

Example: delete key

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetBytes("a", []byte("1"), time.Minute)
fmt.Println(c.Delete("a") == nil) // true

func (*Cache) DeleteContext added in v0.1.5

func (c *Cache) DeleteContext(ctx context.Context, key string) error

DeleteContext is the context-aware variant of Delete. @group Invalidation

func (*Cache) DeleteMany

func (c *Cache) DeleteMany(keys ...string) error

DeleteMany removes multiple keys. @group Invalidation

Example: delete many keys

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
fmt.Println(c.DeleteMany("a", "b") == nil) // true

func (*Cache) DeleteManyContext added in v0.1.5

func (c *Cache) DeleteManyContext(ctx context.Context, keys ...string) error

DeleteManyContext is the context-aware variant of DeleteMany. @group Invalidation

func (*Cache) Driver

func (c *Cache) Driver() cachecore.Driver

Driver reports the underlying store driver. @group Core

func (*Cache) Flush

func (c *Cache) Flush() error

Flush clears all keys for this store scope. @group Invalidation

Example: flush all keys

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetBytes("a", []byte("1"), time.Minute)
fmt.Println(c.Flush() == nil) // true

func (*Cache) FlushContext added in v0.1.5

func (c *Cache) FlushContext(ctx context.Context) error

FlushContext is the context-aware variant of Flush. @group Invalidation

func (*Cache) GetBytes

func (c *Cache) GetBytes(key string) ([]byte, bool, error)

GetBytes returns raw bytes for key when present. @group Reads

Example: get bytes

ctx := context.Background()
s := cache.NewMemoryStore(ctx)
c := cache.NewCache(s)
_ = c.SetBytes("user:42", []byte("Ada"), 0)
value, ok, _ := c.GetBytes("user:42")
fmt.Println(ok, string(value)) // true Ada

func (*Cache) GetBytesContext added in v0.1.5

func (c *Cache) GetBytesContext(ctx context.Context, key string) ([]byte, bool, error)

GetBytesContext is the context-aware variant of GetBytes. @group Reads

func (*Cache) GetString

func (c *Cache) GetString(key string) (string, bool, error)

GetString returns a UTF-8 string value for key when present. @group Reads

Example: get string

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetString("user:42:name", "Ada", 0)
name, ok, _ := c.GetString("user:42:name")
fmt.Println(ok, name) // true Ada

func (*Cache) GetStringContext added in v0.1.5

func (c *Cache) GetStringContext(ctx context.Context, key string) (string, bool, error)

GetStringContext is the context-aware variant of GetString. @group Reads

func (*Cache) Increment

func (c *Cache) Increment(key string, delta int64, ttl time.Duration) (int64, error)

Increment increments a numeric value and returns the result. @group Writes

Example: increment counter

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
val, _ := c.Increment("rate:login:42", 1, time.Minute)
fmt.Println(val) // 1

func (*Cache) IncrementContext added in v0.1.5

func (c *Cache) IncrementContext(ctx context.Context, key string, delta int64, ttl time.Duration) (int64, error)

IncrementContext is the context-aware variant of Increment. @group Writes

func (*Cache) Lock

func (c *Cache) Lock(key string, ttl, timeout time.Duration) (bool, error)

Lock waits until the lock is acquired or timeout elapses. @group Locking

Example: lock with timeout

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
locked, err := c.Lock("job:sync", 10*time.Second, time.Second)
fmt.Println(err == nil, locked) // true true

func (*Cache) LockContext added in v0.1.5

func (c *Cache) LockContext(ctx context.Context, key string, ttl, retryInterval time.Duration) (bool, error)

LockContext retries lock acquisition until success or context cancellation. @group Locking

func (*Cache) NewLockHandle

func (c *Cache) NewLockHandle(key string, ttl time.Duration) *LockHandle

NewLockHandle creates a reusable lock handle for a key/ttl pair. @group Locking

Example: lock handle acquire/release

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, err := lock.Acquire()
fmt.Println(err == nil, locked) // true true
if locked {
	_ = lock.Release()
}

func (*Cache) PullBytes

func (c *Cache) PullBytes(key string) ([]byte, bool, error)

PullBytes returns value and removes it from cache. @group Invalidation

Example: pull and delete

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetString("reset:token:42", "abc", time.Minute)
body, ok, _ := c.PullBytes("reset:token:42")
fmt.Println(ok, string(body)) // true abc

func (*Cache) PullBytesContext added in v0.1.5

func (c *Cache) PullBytesContext(ctx context.Context, key string) ([]byte, bool, error)

PullBytesContext is the context-aware variant of PullBytes. @group Invalidation

func (*Cache) RateLimit

func (c *Cache) RateLimit(key string, limit int64, window time.Duration) (RateLimitStatus, error)

RateLimit increments a fixed-window counter and returns allowance metadata. @group Rate Limiting

Example: fixed-window rate limit metadata

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
res, err := c.RateLimit("rl:api:ip:1.2.3.4", 100, time.Minute)
fmt.Println(err == nil, res.Allowed, res.Count, res.Remaining, !res.ResetAt.IsZero())
// Output: true true 1 99 true

func (*Cache) RateLimitContext added in v0.1.5

func (c *Cache) RateLimitContext(ctx context.Context, key string, limit int64, window time.Duration) (RateLimitStatus, error)

RateLimitContext is the context-aware variant of RateLimit. @group Rate Limiting

func (*Cache) Ready added in v0.1.4

func (c *Cache) Ready() error

Ready checks whether the underlying store is ready to serve requests. @group Core

Example: readiness probe

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
fmt.Println(c.Ready() == nil) // true

func (*Cache) ReadyContext added in v0.1.5

func (c *Cache) ReadyContext(ctx context.Context) error

ReadyContext is the context-aware variant of Ready. @group Core

func (*Cache) RefreshAheadBytes

func (c *Cache) RefreshAheadBytes(key string, ttl, refreshAhead time.Duration, fn func() ([]byte, error)) ([]byte, error)

RefreshAheadBytes returns cached value immediately and refreshes asynchronously when near expiry. On miss, it computes and stores synchronously. @group Refresh Ahead

Example: refresh ahead

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
body, err := c.RefreshAheadBytes("dashboard:summary", time.Minute, 10*time.Second, func() ([]byte, error) {
	return []byte("payload"), nil
})
fmt.Println(err == nil, len(body) > 0) // true true

func (*Cache) RefreshAheadBytesContext added in v0.1.5

func (c *Cache) RefreshAheadBytesContext(ctx context.Context, key string, ttl, refreshAhead time.Duration, fn func(context.Context) ([]byte, error)) ([]byte, error)

RefreshAheadBytesContext is the context-aware variant of RefreshAheadBytes. @group Refresh Ahead

func (*Cache) RememberBytes

func (c *Cache) RememberBytes(key string, ttl time.Duration, fn func() ([]byte, error)) ([]byte, error)

RememberBytes returns key value or computes/stores it when missing. @group Read Through

Example: remember bytes

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
data, err := c.RememberBytes("dashboard:summary", time.Minute, func() ([]byte, error) {
	return []byte("payload"), nil
})
fmt.Println(err == nil, string(data)) // true payload

func (*Cache) RememberBytesContext added in v0.1.5

func (c *Cache) RememberBytesContext(ctx context.Context, key string, ttl time.Duration, fn func(context.Context) ([]byte, error)) ([]byte, error)

RememberBytesContext is the context-aware variant of RememberBytes. @group Read Through

func (*Cache) RememberStaleBytes

func (c *Cache) RememberStaleBytes(key string, ttl, staleTTL time.Duration, fn func() ([]byte, error)) ([]byte, bool, error)

RememberStaleBytes returns a fresh value when available, otherwise computes and caches it. If computing fails and a stale value exists, it returns the stale value. The returned bool is true when a stale fallback was used. @group Read Through

Example: stale fallback on upstream failure

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
body, usedStale, err := c.RememberStaleBytes("profile:42", time.Minute, 10*time.Minute, func() ([]byte, error) {
	return []byte(`{"name":"Ada"}`), nil
})
fmt.Println(err == nil, usedStale, len(body) > 0)

func (*Cache) RememberStaleBytesContext added in v0.1.5

func (c *Cache) RememberStaleBytesContext(ctx context.Context, key string, ttl, staleTTL time.Duration, fn func(context.Context) ([]byte, error)) ([]byte, bool, error)

RememberStaleBytesContext is the context-aware variant of RememberStaleBytes. @group Read Through

func (*Cache) SetBytes

func (c *Cache) SetBytes(key string, value []byte, ttl time.Duration) error

SetBytes writes raw bytes to key. @group Writes

Example: set bytes with ttl

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
fmt.Println(c.SetBytes("token", []byte("abc"), time.Minute) == nil) // true

func (*Cache) SetBytesContext added in v0.1.5

func (c *Cache) SetBytesContext(ctx context.Context, key string, value []byte, ttl time.Duration) error

SetBytesContext is the context-aware variant of SetBytes. @group Writes

func (*Cache) SetString

func (c *Cache) SetString(key string, value string, ttl time.Duration) error

SetString writes a string value to key. @group Writes

Example: set string

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
fmt.Println(c.SetString("user:42:name", "Ada", time.Minute) == nil) // true

func (*Cache) SetStringContext added in v0.1.5

func (c *Cache) SetStringContext(ctx context.Context, key string, value string, ttl time.Duration) error

SetStringContext is the context-aware variant of SetString. @group Writes

func (*Cache) Store

func (c *Cache) Store() cachecore.Store

Store returns the underlying store implementation. @group Core

Example: access store

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
fmt.Println(c.Store().Driver()) // memory

func (*Cache) TryLock

func (c *Cache) TryLock(key string, ttl time.Duration) (bool, error)

TryLock acquires a short-lived lock key when not already held. @group Locking

Example: try lock

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
locked, _ := c.TryLock("job:sync", 10*time.Second)
fmt.Println(locked) // true

func (*Cache) TryLockContext added in v0.1.5

func (c *Cache) TryLockContext(ctx context.Context, key string, ttl time.Duration) (bool, error)

TryLockContext is the context-aware variant of TryLock. @group Locking

func (*Cache) Unlock

func (c *Cache) Unlock(key string) error

Unlock releases a previously acquired lock key. @group Locking

Example: unlock key

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
locked, _ := c.TryLock("job:sync", 10*time.Second)
if locked {
	_ = c.Unlock("job:sync")
}

func (*Cache) UnlockContext added in v0.1.5

func (c *Cache) UnlockContext(ctx context.Context, key string) error

UnlockContext is the context-aware variant of Unlock. @group Locking

func (*Cache) WithObserver

func (c *Cache) WithObserver(o Observer) *Cache

WithObserver attaches an observer to receive operation events. @group Observability

Example: attach observer

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
c = c.WithObserver(cache.ObserverFunc(func(ctx context.Context, op, key string, hit bool, err error, dur time.Duration, driver cachecore.Driver) {
	// See docs/production-guide.md for a real metrics recipe.
	fmt.Println(op, driver, hit, err == nil)
	_ = ctx
	_ = key
	_ = dur
}))
_, _, _ = c.GetBytes("profile:42")

type CacheAPI

CacheAPI is the composed application-facing interface for Cache.

type CompressionCodec

type CompressionCodec = cachecore.CompressionCodec

CompressionCodec represents a value compression algorithm.

type CoreAPI

type CoreAPI interface {
	Driver() cachecore.Driver
	Ready() error
	ReadyContext(ctx context.Context) error
}

CoreAPI exposes basic cache metadata.

type CounterAPI

type CounterAPI interface {
	Increment(key string, delta int64, ttl time.Duration) (int64, error)
	IncrementContext(ctx context.Context, key string, delta int64, ttl time.Duration) (int64, error)
	Decrement(key string, delta int64, ttl time.Duration) (int64, error)
	DecrementContext(ctx context.Context, key string, delta int64, ttl time.Duration) (int64, error)
}

CounterAPI exposes increment/decrement operations.

type LockAPI

type LockAPI interface {
	TryLock(key string, ttl time.Duration) (bool, error)
	TryLockContext(ctx context.Context, key string, ttl time.Duration) (bool, error)
	Lock(key string, ttl, timeout time.Duration) (bool, error)
	LockContext(ctx context.Context, key string, ttl, retryInterval time.Duration) (bool, error)
	Unlock(key string) error
	UnlockContext(ctx context.Context, key string) error
}

LockAPI exposes lock helpers based on cache keys.

type LockHandle

type LockHandle struct {
	// contains filtered or unexported fields
}

LockHandle provides ergonomic lock management on top of Cache lock helpers.

It wraps TryLock/Lock/Unlock and adds callback-based helpers.

Caveat:

  • Release is a best-effort wrapper over Unlock and does not perform owner-token validation. Do not assume ownership safety after lock expiry.

@group Locking

func (*LockHandle) Acquire

func (l *LockHandle) Acquire() (bool, error)

Acquire attempts to acquire the lock once (non-blocking). @group Locking

Example: single acquire attempt

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, err := lock.Acquire()
fmt.Println(err == nil, locked) // true true

func (*LockHandle) AcquireContext added in v0.1.5

func (l *LockHandle) AcquireContext(ctx context.Context) (bool, error)

AcquireContext is the context-aware variant of Acquire. @group Locking

func (*LockHandle) Block

func (l *LockHandle) Block(timeout, retryInterval time.Duration, fn func() error) (bool, error)

Block waits up to timeout to acquire the lock, runs fn if acquired, then releases.

retryInterval <= 0 falls back to the cache default lock retry interval. @group Locking

Example: wait for lock, then auto-release

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, err := lock.Block(500*time.Millisecond, 25*time.Millisecond, func() error {
	// do protected work
	return nil
})
fmt.Println(err == nil, locked) // true true

func (*LockHandle) BlockContext added in v0.1.5

func (l *LockHandle) BlockContext(ctx context.Context, retryInterval time.Duration, fn func(context.Context) error) (bool, error)

BlockContext is the context-aware variant of Block. @group Locking

func (*LockHandle) Get

func (l *LockHandle) Get(fn func() error) (bool, error)

Get acquires the lock once, runs fn if acquired, then releases automatically. @group Locking

Example: acquire once and auto-release

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, err := lock.Get(func() error {
	// do protected work
	return nil
})
fmt.Println(err == nil, locked) // true true

func (*LockHandle) GetContext added in v0.1.5

func (l *LockHandle) GetContext(ctx context.Context, fn func(context.Context) error) (bool, error)

GetContext is the context-aware variant of Get. @group Locking

func (*LockHandle) Release

func (l *LockHandle) Release() error

Release unlocks the key if this handle previously acquired it.

It is safe to call multiple times; repeated calls become no-ops after the first successful release. @group Locking

Example: release a held lock

ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, _ := lock.Acquire()
if locked {
	_ = lock.Release()
}

func (*LockHandle) ReleaseContext added in v0.1.5

func (l *LockHandle) ReleaseContext(ctx context.Context) error

ReleaseContext is the context-aware variant of Release. @group Locking

type Observer

type Observer interface {
	OnCacheOp(ctx context.Context, op string, key string, hit bool, err error, dur time.Duration, driver cachecore.Driver)
}

Observer receives events for cache operations. It is called from Cache helpers after each operation completes.

type ObserverFunc

type ObserverFunc func(ctx context.Context, op string, key string, hit bool, err error, dur time.Duration, driver cachecore.Driver)

ObserverFunc adapts a function to the Observer interface.

func (ObserverFunc) OnCacheOp

func (f ObserverFunc) OnCacheOp(ctx context.Context, op string, key string, hit bool, err error, dur time.Duration, driver cachecore.Driver)

OnCacheOp implements Observer. @group Observability

Example: observer func callback

obs := cache.ObserverFunc(func(ctx context.Context, op, key string, hit bool, err error, dur time.Duration, driver cachecore.Driver) {
	fmt.Println(op, key, hit, err == nil, driver)
	_ = ctx
	_ = dur
})
obs.OnCacheOp(context.Background(), "get", "user:42", true, nil, time.Millisecond, cachecore.DriverMemory)

type RateLimitAPI

type RateLimitAPI interface {
	RateLimit(key string, limit int64, window time.Duration) (RateLimitStatus, error)
	RateLimitContext(ctx context.Context, key string, limit int64, window time.Duration) (RateLimitStatus, error)
}

RateLimitAPI exposes rate limiting helpers.

type RateLimitStatus

type RateLimitStatus struct {
	Allowed   bool
	Count     int64
	Remaining int64
	ResetAt   time.Time
}

RateLimitStatus contains fixed-window rate limiting metadata. @group Rate Limiting

type ReadAPI

type ReadAPI interface {
	GetBytes(key string) ([]byte, bool, error)
	GetBytesContext(ctx context.Context, key string) ([]byte, bool, error)
	BatchGetBytes(keys ...string) (map[string][]byte, error)
	BatchGetBytesContext(ctx context.Context, keys ...string) (map[string][]byte, error)
	GetString(key string) (string, bool, error)
	GetStringContext(ctx context.Context, key string) (string, bool, error)
	PullBytes(key string) ([]byte, bool, error)
	PullBytesContext(ctx context.Context, key string) ([]byte, bool, error)
}

ReadAPI exposes read-oriented cache operations.

type RefreshAheadAPI

type RefreshAheadAPI interface {
	RefreshAheadBytes(key string, ttl, refreshAhead time.Duration, fn func() ([]byte, error)) ([]byte, error)
	RefreshAheadBytesContext(ctx context.Context, key string, ttl, refreshAhead time.Duration, fn func(context.Context) ([]byte, error)) ([]byte, error)
}

RefreshAheadAPI exposes refresh-ahead helpers.

type RememberAPI

type RememberAPI interface {
	RememberBytes(key string, ttl time.Duration, fn func() ([]byte, error)) ([]byte, error)
	RememberBytesContext(ctx context.Context, key string, ttl time.Duration, fn func(context.Context) ([]byte, error)) ([]byte, error)
	RememberStaleBytes(key string, ttl, staleTTL time.Duration, fn func() ([]byte, error)) ([]byte, bool, error)
	RememberStaleBytesContext(ctx context.Context, key string, ttl, staleTTL time.Duration, fn func(context.Context) ([]byte, error)) ([]byte, bool, error)
}

RememberAPI exposes remember and stale-remember helpers.

type StoreConfig

type StoreConfig struct {
	cachecore.BaseConfig

	// MemoryCleanupInterval controls how often the in-process
	// memory driver evicts expired entries.
	MemoryCleanupInterval time.Duration

	// FileDir controls where the file driver stores cache entries.
	FileDir string
}

StoreConfig controls shared/root store construction settings.

type ValueCodec

type ValueCodec[T any] struct {
	Encode func(T) ([]byte, error)
	Decode func([]byte) (T, error)
}

ValueCodec defines how to encode/decode typed values for helper operations.

type WriteAPI

type WriteAPI interface {
	SetBytes(key string, value []byte, ttl time.Duration) error
	SetBytesContext(ctx context.Context, key string, value []byte, ttl time.Duration) error
	SetString(key string, value string, ttl time.Duration) error
	SetStringContext(ctx context.Context, key string, value string, ttl time.Duration) error
	BatchSetBytes(values map[string][]byte, ttl time.Duration) error
	BatchSetBytesContext(ctx context.Context, values map[string][]byte, ttl time.Duration) error
	Add(key string, value []byte, ttl time.Duration) (bool, error)
	AddContext(ctx context.Context, key string, value []byte, ttl time.Duration) (bool, error)
	Delete(key string) error
	DeleteContext(ctx context.Context, key string) error
	DeleteMany(keys ...string) error
	DeleteManyContext(ctx context.Context, keys ...string) error
	Flush() error
	FlushContext(ctx context.Context) error
}

WriteAPI exposes write and invalidation operations.

Directories

Path Synopsis
cachecore module
cachetest module
driver
dynamocache module
mysqlcache module
natscache module
postgrescache module
rediscache module
sqlcore module
sqlitecache module
