Documentation ¶
Overview ¶
Package memcache implements a bounded, generic, thread-safe in-memory cache designed for long-running Go programs: REPLs, daemons, watch-mode build tools, and other CLI workloads where cache state must survive across many requests but the host process restarts often enough that a fast warm restart matters.
The primary type is the generic Cache, constructed via New with one or more Option functional options:
c, err := memcache.New[string, *Profile](
memcache.WithMaxEntries(10_000),
memcache.WithDefaultTTL(15*time.Minute),
memcache.WithPolicy(memcache.PolicyS3FIFO),
)
if err != nil { panic(err) }
defer c.Close()
c.Set("alice", profile)
got, ok := c.Get("alice")
Bounded by Default ¶
The cache must be bounded at construction. New returns ErrUnbounded if neither WithMaxEntries nor WithMaxBytes is supplied. Use NewUnbounded only when the key space is provably bounded by something else.
Eviction Policies ¶
The default eviction policy is PolicyS3FIFO, a modern algorithm that achieves hit rates comparable to TinyLFU with substantially simpler state. Other policies are available via WithPolicy: PolicyLRU, PolicyLFU, PolicyTinyLFU, PolicyFIFO, PolicyARC, Policy2Q.
Stampede Protection ¶
When a Loader is configured via WithLoader, the cache deduplicates concurrent misses for the same key (singleflight semantics). Use WithRefreshAhead to pre-fetch entries before they expire and WithStaleWhileRevalidate to serve stale values during reload. WithNegativeCache caches "not found" results to avoid repeated upstream calls on missing keys.
Snapshot & Restore ¶
The cache can be persisted to disk and restored on the next process launch, the killer feature for REPL-style workloads where a 30-second warm-up cost dominates user perception. Cache.Save and Cache.Load handle the basic flow; WithAutoSave and WithAutoLoad make persistence transparent for the typical CLI case.
Tags ¶
Entries can be tagged at insert time and invalidated as a group:
c.SetWithTags("user:42:profile", profile, "user-42")
c.SetWithTags("user:42:settings", settings, "user-42")
// Later, invalidate everything tagged "user-42":
c.InvalidateTag("user-42")
Tag invalidation is O(k) where k is the number of tagged entries, far cheaper than scanning the cache.
Errors ¶
All errors returned by this package match a sentinel via errors.Is:
if errors.Is(err, memcache.ErrNotFound) { ... }
if errors.Is(err, memcache.ErrClosed) { ... }
See ConfigError, CapacityError, LoadError, SnapshotError, CodecError, and the full error type list in errors.go.
Index ¶
- Variables
- func Decrement[K comparable, V Number](c *Cache[K, V], key K) (V, error)
- func DeleteCacheable[V CacheKeyer](c *Cache[string, V], proto V) bool
- func GetCacheable[V CacheKeyer](c *Cache[string, V], proto V) (V, bool)
- func Increment[K comparable, V Number](c *Cache[K, V], key K) (V, error)
- func IncrementBy[K comparable, V Number](c *Cache[K, V], key K, delta V) (V, error)
- func SetCacheable[V CacheKeyer](c *Cache[string, V], v V, opts ...SetOption) error
- type AdmissionPolicy
- type AdmitAlways
- type Attr
- type BulkLoader
- type Cache
- func (c *Cache[K, V]) Bytes() int64
- func (c *Cache[K, V]) Capacity() int64
- func (c *Cache[K, V]) Clear()
- func (c *Cache[K, V]) Clone() (*Cache[K, V], error)
- func (c *Cache[K, V]) Close() error
- func (c *Cache[K, V]) Coldest(n int) []KeyedItem[K, V]
- func (c *Cache[K, V]) CompareAndSwap(key K, old, newValue V) bool
- func (c *Cache[K, V]) Compute(key K, fn func(cur V, ok bool) (V, ComputeAction, error)) (V, error)
- func (c *Cache[K, V]) ComputeIfAbsent(key K, fn func() (V, time.Duration, error)) (value V, computed bool, err error)
- func (c *Cache[K, V]) ComputeIfPresent(key K, fn func(cur V) (V, ComputeAction, error)) (V, error)
- func (c *Cache[K, V]) Delete(key K) bool
- func (c *Cache[K, V]) DeleteCtx(ctx context.Context, key K) (bool, error)
- func (c *Cache[K, V]) DeleteExpired() int
- func (c *Cache[K, V]) DeleteIf(key K, pred func(V) bool) bool
- func (c *Cache[K, V]) DeleteMulti(keys []K) int
- func (c *Cache[K, V]) DeletePrefix(prefix string) int
- func (c *Cache[K, V]) DeleteWhere(pred func(key K, value V) bool) int
- func (c *Cache[K, V]) Dump(w io.Writer) error
- func (c *Cache[K, V]) Expiry(key K) (time.Time, bool)
- func (c *Cache[K, V]) Get(key K) (V, bool)
- func (c *Cache[K, V]) GetCtx(ctx context.Context, key K) (V, bool, error)
- func (c *Cache[K, V]) GetMulti(keys []K) map[K]V
- func (c *Cache[K, V]) GetMultiOrLoad(ctx context.Context, keys []K) (map[K]V, error)
- func (c *Cache[K, V]) GetOrLoad(ctx context.Context, key K) (V, error)
- func (c *Cache[K, V]) GetOrLoadFn(ctx context.Context, key K, ...) (V, error)
- func (c *Cache[K, V]) GetOrSet(key K, value V) (V, bool, error)
- func (c *Cache[K, V]) GetWithExpiry(key K) (V, time.Time, bool)
- func (c *Cache[K, V]) Has(key K) bool
- func (c *Cache[K, V]) Histogram() Histogram
- func (c *Cache[K, V]) Hottest(n int) []KeyedItem[K, V]
- func (c *Cache[K, V]) InvalidateTag(tag string) int
- func (c *Cache[K, V]) InvalidateTags(tags ...string) int
- func (c *Cache[K, V]) ItemMetadata(key K) (Metadata, bool)
- func (c *Cache[K, V]) Items() []KeyedItem[K, V]
- func (c *Cache[K, V]) Keys() []K
- func (c *Cache[K, V]) Len() int
- func (c *Cache[K, V]) Load(r io.Reader) (n int, err error)
- func (c *Cache[K, V]) LoadFile(path string) (n int, err error)
- func (c *Cache[K, V]) LongestLived(n int) []KeyedItem[K, V]
- func (c *Cache[K, V]) Merge(r io.Reader) (int, error)
- func (c *Cache[K, V]) Peek(key K) (V, bool)
- func (c *Cache[K, V]) PeekOrAdd(key K, value V) (V, bool, error)
- func (c *Cache[K, V]) Range(fn func(key K, value V) bool)
- func (c *Cache[K, V]) Refresh(ctx context.Context, key K) error
- func (c *Cache[K, V]) RefreshAll(ctx context.Context) int
- func (c *Cache[K, V]) Reset()
- func (c *Cache[K, V]) ResetStats()
- func (c *Cache[K, V]) Resize(newSize int64) int
- func (c *Cache[K, V]) Save(w io.Writer) (err error)
- func (c *Cache[K, V]) SaveFile(path string) error
- func (c *Cache[K, V]) Set(key K, value V) error
- func (c *Cache[K, V]) SetCtx(ctx context.Context, key K, value V) error
- func (c *Cache[K, V]) SetIfAbsent(key K, value V) (bool, error)
- func (c *Cache[K, V]) SetIfPresent(key K, value V) (bool, error)
- func (c *Cache[K, V]) SetMulti(items map[K]V) error
- func (c *Cache[K, V]) SetWithOptions(key K, value V, opts ...SetOption) error
- func (c *Cache[K, V]) SetWithTTL(key K, value V, ttl time.Duration) error
- func (c *Cache[K, V]) SetWithTags(key K, value V, tags ...string) error
- func (c *Cache[K, V]) SoonestExpiring(n int) []KeyedItem[K, V]
- func (c *Cache[K, V]) Stats() Stats
- func (c *Cache[K, V]) Subscribe(buf int, kinds ...EventKind) (<-chan Event[K, V], func())
- func (c *Cache[K, V]) Sync(ctx context.Context) error
- func (c *Cache[K, V]) TTL(key K) (time.Duration, bool)
- func (c *Cache[K, V]) Tags(key K) []string
- func (c *Cache[K, V]) Touch(key K) bool
- func (c *Cache[K, V]) TouchWithTTL(key K, ttl time.Duration) bool
- func (c *Cache[K, V]) Update(key K, fn func(cur V) V) (V, error)
- func (c *Cache[K, V]) View() *CacheView[K, V]
- type CacheKeyer
- type CacheTTLer
- type CacheTagger
- type CacheView
- func (v *CacheView[K, V]) Bytes() int64
- func (v *CacheView[K, V]) Expiry(key K) (time.Time, bool)
- func (v *CacheView[K, V]) Get(key K) (V, bool)
- func (v *CacheView[K, V]) Has(key K) bool
- func (v *CacheView[K, V]) Keys() []K
- func (v *CacheView[K, V]) Len() int
- func (v *CacheView[K, V]) Peek(key K) (V, bool)
- func (v *CacheView[K, V]) Range(fn func(key K, value V) bool)
- func (v *CacheView[K, V]) Stats() Stats
- func (v *CacheView[K, V]) TTL(key K) (time.Duration, bool)
- type Cacheable
- type CapacityError
- type Clock
- type Codec
- type CodecError
- type CompressedCodec
- type ComputeAction
- type ConfigError
- type Doorkeeper
- type EncryptedCodec
- type Event
- type EventKind
- type EvictionReason
- type FakeClock
- type GobCodec
- type Histogram
- type Item
- type JSONCodec
- type KeyedItem
- type LoadError
- type LoadResult
- type Loader
- type LoaderFunc
- type MemoryStore
- func (m *MemoryStore[K, V]) Close() error
- func (m *MemoryStore[K, V]) Delete(ctx context.Context, key K) (bool, error)
- func (m *MemoryStore[K, V]) Get(ctx context.Context, key K) (V, bool, error)
- func (m *MemoryStore[K, V]) Iterate(ctx context.Context, fn func(key K, value V) bool) error
- func (m *MemoryStore[K, V]) Len(ctx context.Context) (int, error)
- func (m *MemoryStore[K, V]) Set(ctx context.Context, key K, value V, ttl time.Duration) error
- type Metadata
- type Number
- type Option
- func WithAdmissionPolicy[K comparable](p AdmissionPolicy[K]) Option
- func WithAsyncWrites() Option
- func WithAutoLoad(path string) Option
- func WithAutoLoadIgnoreErrors(b bool) Option
- func WithAutoSave(path string, interval time.Duration) Option
- func WithBulkLoader[K comparable, V any](l BulkLoader[K, V]) Option
- func WithCallbackTimeout(d time.Duration) Option
- func WithClock(clk Clock) Option
- func WithCodec(codec Codec) Option
- func WithCollisionTracking(b bool) Option
- func WithCompressedCodec(base Codec, level int) Option
- func WithCopyOnGet[V any](fn func(V) V) Option
- func WithDefaultTTL(d time.Duration) Option
- func WithDoorkeeper(b bool) Option
- func WithEncryptedCodec(base Codec, key []byte) Option
- func WithErrorTTL(d time.Duration) Option
- func WithEventsBuffer(n int) Option
- func WithExpireFunc[K comparable, V any](fn func(key K, value V, meta Metadata) bool) Option
- func WithExpvar(name string) Option
- func WithFlatStorage() Option
- func WithGroup(name string, capacity int) Option
- func WithHasher[K comparable](fn func(K) uint64) Option
- func WithInvalidationPublisher[K comparable](fn func(key K, reason EvictionReason)) Option
- func WithInvalidationSubscriber[K comparable](ch <-chan K) Option
- func WithJanitorInterval(d time.Duration) Option
- func WithLoader[K comparable, V any](l Loader[K, V]) Option
- func WithLoaderRateLimit(perSecond int) Option
- func WithLoaderTimeout(d time.Duration) Option
- func WithLockFreeRead() Option
- func WithLogger(l *slog.Logger) Option
- func WithMaxBytes(n int64) Option
- func WithMaxConcurrentLoads(n int) Option
- func WithMaxEntries(n int) Option
- func WithMaxKeySize(n int) Option
- func WithMaxSnapshotBytes(n int64) Option
- func WithMaxTagsPerEntry(n int) Option
- func WithMaxTagsTotal(n int) Option
- func WithMaxValueWeight(n int64) Option
- func WithName(name string) Option
- func WithNegativeCache(negativeTTL time.Duration) Option
- func WithOnEvict[K comparable, V any](fn func(key K, value V, reason EvictionReason)) Option
- func WithOnExpire[K comparable, V any](fn func(key K, value V)) Option
- func WithOnHit[K comparable, V any](fn func(key K, value V)) Option
- func WithOnLoad[K comparable, V any](fn func(key K, value V, ttl time.Duration, err error)) Option
- func WithOnMiss[K comparable](fn func(key K)) Option
- func WithPolicy(p Policy) Option
- func WithPurgeVisitor[K comparable, V any](fn func(key K, value V) error) Option
- func WithRefreshAhead(refreshAt float64) Option
- func WithSafeKeys(b bool) Option
- func WithShardedStats(b bool) Option
- func WithShards(n int) Option
- func WithSlidingTTL(b bool) Option
- func WithSnapshotMetadata(meta map[string]string) Option
- func WithStaleWhileRevalidate(staleFor time.Duration) Option
- func WithStatsEnabled(b bool) Option
- func WithStore[K comparable, V any](store Store[K, V]) Option
- func WithTTLBuckets(slots, tickPerBucket int) Option
- func WithTTLJitter(j time.Duration) Option
- func WithTracer(t Tracer) Option
- func WithWeigher[V any](fn Weigher[V]) Option
- type Policy
- type PolicyDetail2Q
- type PolicyDetailARC
- type PolicyDetailFIFO
- type PolicyDetailLFU
- type PolicyDetailLRU
- type PolicyDetailS3FIFO
- type PolicyDetailTinyLFU
- type Prefixer
- type RawCodec
- type RealClock
- type SetOption
- type SnapshotError
- type SnapshotInfo
- type SnapshotMarshaler
- type SnapshotUnmarshaler
- type Span
- type Stats
- type Store
- type Tiered
- func (t *Tiered[K, V]) Close() error
- func (t *Tiered[K, V]) Delete(key K) bool
- func (t *Tiered[K, V]) Get(key K) (V, bool)
- func (t *Tiered[K, V]) GetCtx(ctx context.Context, key K) (V, bool, error)
- func (t *Tiered[K, V]) Has(key K) bool
- func (t *Tiered[K, V]) InvalidateTag(tag string) int
- func (t *Tiered[K, V]) InvalidateTags(tags ...string) int
- func (t *Tiered[K, V]) L1() *Cache[K, V]
- func (t *Tiered[K, V]) L2() *Cache[K, V]
- func (t *Tiered[K, V]) Len() int
- func (t *Tiered[K, V]) Set(key K, value V) error
- func (t *Tiered[K, V]) SetWithOptions(key K, value V, opts ...SetOption) error
- func (t *Tiered[K, V]) SetWithTTL(key K, value V, ttl time.Duration) error
- func (t *Tiered[K, V]) Stats() TieredStats
- func (t *Tiered[K, V]) Sync(ctx context.Context) error
- type TieredStats
- type Timer
- type Tracer
- type Weigher
Examples ¶
Constants ¶
This section is empty.
Variables ¶
var (
	// ErrNotFound is returned by Get-style methods when the key is absent
	// or expired. Loaders may also return this to signal a not-found
	// result, which the cache may negatively cache.
	ErrNotFound = errors.New("memcache: not found")

	// ErrClosed is returned when an operation is attempted on a closed
	// cache.
	ErrClosed = errors.New("memcache: cache is closed")

	// ErrUnbounded is returned by New when no size bound is configured.
	ErrUnbounded = errors.New("memcache: must specify WithMaxEntries or WithMaxBytes")

	// ErrNoLoader is returned by GetOrLoad when no Loader is configured.
	ErrNoLoader = errors.New("memcache: no loader configured")

	// ErrInvalidTTL is returned by SetWithTTL on a negative TTL.
	ErrInvalidTTL = errors.New("memcache: ttl must be non-negative")

	// ErrSnapshotIncompatible is returned by Load when the snapshot format
	// version is not understood.
	ErrSnapshotIncompatible = errors.New("memcache: snapshot version not supported")

	// ErrSnapshotCorrupt is returned by Load when the snapshot fails its
	// CRC check or is otherwise malformed.
	ErrSnapshotCorrupt = errors.New("memcache: snapshot corrupt")

	// ErrKeyTooLarge is returned by Set when a key exceeds WithMaxKeySize.
	ErrKeyTooLarge = errors.New("memcache: key exceeds size limit")

	// ErrValueTooLarge is returned by Set when a value's weight exceeds
	// WithMaxValueWeight.
	ErrValueTooLarge = errors.New("memcache: value exceeds weight limit")

	// ErrTooManyTags is returned by SetWithTags when the entry has more
	// tags than WithMaxTagsPerEntry permits.
	ErrTooManyTags = errors.New("memcache: too many tags for entry")

	// ErrLoaderRateLimited is returned by GetOrLoad when the loader rate
	// limit (WithLoaderRateLimit) has been exceeded for this tick.
	ErrLoaderRateLimited = errors.New("memcache: loader rate limit exceeded")

	// ErrLoaderTimeout is returned when a Loader exceeds the per-call
	// deadline set by WithLoaderTimeout.
	ErrLoaderTimeout = errors.New("memcache: loader timeout exceeded")

	// ErrLoaderTooManyInFlight is returned by GetOrLoad when the number of
	// in-flight Loader calls has reached WithMaxConcurrentLoads and the
	// caller's context cancels before a slot opens.
	ErrLoaderTooManyInFlight = errors.New("memcache: too many in-flight loads")

	// ErrComputeReentrant is returned (panicked) when a Compute callback
	// attempts to call back into the cache for the same key.
	ErrComputeReentrant = errors.New("memcache: compute callback re-entered cache")
)
Sentinel errors. All errors returned by this package satisfy errors.Is against one of these.
Functions ¶
func Decrement ¶
func Decrement[K comparable, V Number](c *Cache[K, V], key K) (V, error)
Decrement is IncrementBy(c, key, -1). For unsigned V, decrementing past zero wraps around per Go integer rules; use Cache.Compute for saturation.
func DeleteCacheable ¶
func DeleteCacheable[V CacheKeyer](c *Cache[string, V], proto V) bool
DeleteCacheable removes the entry whose key equals proto.CacheKey() and returns true when an entry was removed.
func GetCacheable ¶
func GetCacheable[V CacheKeyer](c *Cache[string, V], proto V) (V, bool)
GetCacheable looks up an entry in c by deriving the key from proto.CacheKey(). Returns the cached value, or the zero value of V on a miss.
func Increment ¶
func Increment[K comparable, V Number](c *Cache[K, V], key K) (V, error)
Increment is IncrementBy(c, key, 1).
Example ¶
ExampleIncrement shows the typed numeric helper as a thin wrapper over Compute.
package main
import (
"fmt"
"github.com/go-rotini/memcache"
)
func main() {
c, _ := memcache.New[string, int64](memcache.WithMaxEntries(8))
defer c.Close()
_, _ = memcache.Increment(c, "events")
_, _ = memcache.Increment(c, "events")
v, _ := memcache.IncrementBy(c, "events", 10)
fmt.Println(v)
}
Output: 12
func IncrementBy ¶
func IncrementBy[K comparable, V Number](c *Cache[K, V], key K, delta V) (V, error)
IncrementBy atomically adds delta to the integer value at key. If absent, the entry is created with value 0+delta. Returns the post-increment value.
func SetCacheable ¶
func SetCacheable[V CacheKeyer](c *Cache[string, V], v V, opts ...SetOption) error
SetCacheable stores v in c under the key returned by v.CacheKey(). Without opts, routes through Cache.Set so CacheTTLer, CacheTagger, and template tags fire as usual; with opts, routes through Cache.SetWithOptions and explicit options override auto-extraction.
Example ¶
package main
import (
"fmt"
"github.com/go-rotini/memcache"
)
// ExampleSetCacheable demonstrates the CacheKeyer pattern: define a
// type whose values know their own cache key, then store them
// without restating the derivation at every call site.
type exampleProfile struct {
UserID int
Email string
}
func (p exampleProfile) CacheKey() string { return fmt.Sprintf("user:%d", p.UserID) }
func main() {
c, _ := memcache.New[string, exampleProfile](memcache.WithMaxEntries(64))
defer c.Close()
_ = memcache.SetCacheable(c, exampleProfile{UserID: 42, Email: "a@b.c"})
// Look up by passing only the ID-bearing prototype.
got, _ := memcache.GetCacheable(c, exampleProfile{UserID: 42})
fmt.Println(got.Email)
}
Output: a@b.c
Types ¶
type AdmissionPolicy ¶
type AdmissionPolicy[K comparable] interface {
	// Admit returns true if the candidate key should be inserted into
	// the cache.
	Admit(key K) bool

	// Observe records a key access. Called on Set (regardless of Admit's
	// verdict) and on every successful Get hit.
	Observe(key K)

	// Reset clears the policy's internal state.
	Reset()
}
AdmissionPolicy decides whether to admit a candidate key into the cache. It is consulted on each Cache.Set before the entry is written; returning false drops the candidate silently. Implementations must be safe for concurrent use.
type AdmitAlways ¶
type AdmitAlways[K comparable] struct{}
AdmitAlways is an AdmissionPolicy that accepts every candidate. It is the package default when no admission option is supplied.
type BulkLoader ¶
type BulkLoader[K comparable, V any] interface {
	// LoadMulti fetches values for the given keys. Implementations
	// should populate the returned map with one entry per requested
	// key; missing keys may be omitted (treated as load-failures with
	// [ErrNotFound]) or present with a per-key error in
	// LoadResult.Err.
	LoadMulti(ctx context.Context, keys []K) (map[K]LoadResult[V], error)
}
BulkLoader loads multiple keys at once, e.g. via a database batch query. Used by Cache.GetMultiOrLoad when configured with WithBulkLoader.
type Cache ¶
type Cache[K comparable, V any] struct { // contains filtered or unexported fields }
Cache is a generic, bounded, thread-safe in-memory cache.
The zero value of Cache is not usable. Construct one with New.
All methods are safe for concurrent use. Per-shard locking keeps contention bounded; each Get acquires at most one shard's RLock.
func Must ¶
func Must[K comparable, V any](c *Cache[K, V], err error) *Cache[K, V]
Must wraps a call to New and panics if err is non-nil. Intended for package-level variable initialization.
func New ¶
func New[K comparable, V any](opts ...Option) (*Cache[K, V], error)
New constructs a Cache. At least one of WithMaxEntries or WithMaxBytes must be supplied; otherwise New returns ErrUnbounded. Returned errors of type *ConfigError indicate a specific option that was rejected.
Example ¶
ExampleNew shows minimal construction. Either WithMaxEntries or WithMaxBytes MUST be supplied; New refuses an unbounded cache.
package main
import (
"fmt"
"github.com/go-rotini/memcache"
)
func main() {
c, err := memcache.New[string, int](memcache.WithMaxEntries(1000))
if err != nil {
panic(err)
}
defer c.Close()
_ = c.Set("answer", 42)
v, ok := c.Get("answer")
fmt.Println(v, ok)
}
Output: 42 true
Example (UnboundedRejected) ¶
ExampleNew_unboundedRejected demonstrates the spec-mandated safety check: New refuses to construct an unbounded cache.
package main
import (
"errors"
"fmt"
"github.com/go-rotini/memcache"
)
func main() {
_, err := memcache.New[string, int]()
fmt.Println(errors.Is(err, memcache.ErrUnbounded))
}
Output: true
func NewUnbounded ¶
func NewUnbounded[K comparable, V any](opts ...Option) (*Cache[K, V], error)
NewUnbounded constructs a Cache with no size bound. Use only when the key space is provably bounded by something else; an unbounded cache is otherwise a memory leak in disguise.
func (*Cache[K, V]) Bytes ¶
Bytes returns the current total weight of all entries. When no Weigher is configured, this equals Cache.Len.
func (*Cache[K, V]) Capacity ¶
Capacity returns the cache's effective configured bound: entries when WithMaxEntries is set, bytes when WithMaxBytes is set, or 0 when unbounded.
func (*Cache[K, V]) Clear ¶
func (c *Cache[K, V]) Clear()
Clear removes every entry, recording each removal under EvictReasonClear. When WithPurgeVisitor is configured, the visitor is invoked for every entry before its slot is recycled.
func (*Cache[K, V]) Clone ¶
Clone returns a new Cache populated with a snapshot of c's live entries. Hit counts and policy positioning are not preserved. Holds each source shard's read lock per pass; concurrent mutations produce a per-shard-consistent (not cache-wide-consistent) snapshot.
func (*Cache[K, V]) Close ¶
Close releases all resources, stops the auto-save goroutine after writing a final snapshot, stops every shard's janitor, closes every active subscriber channel, and disables further operations. Close is idempotent. After Close, writes return ErrClosed and reads return the zero value with ok=false. A final auto-save error is logged but not returned.
func (*Cache[K, V]) Coldest ¶
Coldest is the inverse of Cache.Hottest; entries with the lowest hit counts come first.
func (*Cache[K, V]) CompareAndSwap ¶
CompareAndSwap atomically replaces the value for key with newValue when the current value equals old, using reflect.DeepEqual for comparison. Returns true when the swap happened.
func (*Cache[K, V]) Compute ¶
func (c *Cache[K, V]) Compute(key K, fn func(cur V, ok bool) (V, ComputeAction, error)) (V, error)
Compute atomically applies fn to the entry for key. fn receives the current value (zero V when absent) and a presence bool; it returns the new value, a ComputeAction, and an optional error.
Actions: ComputeStore stores with default TTL, ComputeDelete removes (under EvictReasonComputed), ComputeNoOp discards the returned value.
fn runs under the shard write lock and MUST NOT call into the cache for any key on the same shard.
Example ¶
ExampleCache_Compute is the canonical atomic read-modify-write for a counter.
package main
import (
"fmt"
"github.com/go-rotini/memcache"
)
func main() {
c, _ := memcache.New[string, int](memcache.WithMaxEntries(8))
defer c.Close()
for range 5 {
_, _ = c.Compute("hits", func(cur int, _ bool) (int, memcache.ComputeAction, error) {
return cur + 1, memcache.ComputeStore, nil
})
}
v, _ := c.Get("hits")
fmt.Println(v)
}
Output: 5
func (*Cache[K, V]) ComputeIfAbsent ¶
func (c *Cache[K, V]) ComputeIfAbsent(key K, fn func() (V, time.Duration, error)) (value V, computed bool, err error)
ComputeIfAbsent invokes fn only when the key is absent or expired. On a hit, the existing value is returned with computed=false. fn's TTL of 0 means use WithDefaultTTL; a negative TTL returns ErrInvalidTTL. fn errors are propagated and no entry is stored.
func (*Cache[K, V]) ComputeIfPresent ¶
func (c *Cache[K, V]) ComputeIfPresent(key K, fn func(cur V) (V, ComputeAction, error)) (V, error)
ComputeIfPresent invokes fn only when the key is present and fresh. On a miss, it returns the zero value of V and a nil error.
func (*Cache[K, V]) Delete ¶
Delete removes key from the cache and returns true when an entry was removed. With WithStore the delete is mirrored through; Store errors are logged but do not affect the returned bool. Use Cache.DeleteCtx to surface Store errors. With WithAsyncWrites the delete is queued and the bool is true except on a closed cache; call Cache.Sync first to observe the post-delete state.
func (*Cache[K, V]) DeleteCtx ¶
DeleteCtx is the context-aware variant of Cache.Delete. The boolean reports whether an entry was removed from the in-memory cache; the error is non-nil when ctx was canceled or a configured Store Delete failed. The in-memory delete completes regardless of Store error.
func (*Cache[K, V]) DeleteExpired ¶
DeleteExpired sweeps every shard, removing entries whose TTL has elapsed, and returns the count removed.
func (*Cache[K, V]) DeleteIf ¶
DeleteIf removes the entry for key only when pred returns true for the current value. pred is invoked under the shard write lock and MUST NOT call back into the cache for the same key.
func (*Cache[K, V]) DeleteMulti ¶
DeleteMulti removes every key in keys. Returns the number of entries actually removed (absent keys are silently skipped).
func (*Cache[K, V]) DeletePrefix ¶
DeletePrefix removes every entry whose key begins with the given prefix. Only valid when K is string or implements Prefixer. For caches with other K types DeletePrefix is a no-op and returns 0; callers can detect the unsupported case beforehand with a Prefixer type assertion on K.
Iteration is O(n) shard-by-shard. Tag-based invalidation (Cache.InvalidateTag, Phase 7) is the preferred mechanism when applicable.
func (*Cache[K, V]) DeleteWhere ¶
DeleteWhere removes every entry for which pred returns true. pred runs OUTSIDE the shard write lock and may call into the cache. An entry modified between snapshot and delete is still removed if its prior value satisfied pred (eventual semantics).
func (*Cache[K, V]) Dump ¶
Dump writes a human-readable summary to w. The format is for debugging only and is NOT stable across releases.
func (*Cache[K, V]) Expiry ¶
Expiry returns the absolute expiry time for the entry at key, or the zero time when the entry has no TTL. The bool is true when the entry exists and is fresh.
func (*Cache[K, V]) Get ¶
Get returns the value stored for key, or the zero value of V and false if absent or expired. Get does NOT invoke a configured Loader; use Cache.GetOrLoad for that.
When WithRefreshAhead is enabled, a successful hit on an entry past refreshAt*TTL triggers an asynchronous Loader call. When WithStaleWhileRevalidate is enabled, a hit on an entry within the staleFor window returns the stale value and triggers an asynchronous refresh. When WithStore is configured, an in-memory miss falls through to the Store and a hit is promoted into the in-memory cache.
func (*Cache[K, V]) GetCtx ¶
GetCtx is the context-aware variant of Cache.Get. The ctx is checked at entry and threaded through to any Store consultation triggered by an in-memory miss; cancellation surfaces as ctx.Err().
func (*Cache[K, V]) GetMulti ¶
func (c *Cache[K, V]) GetMulti(keys []K) map[K]V
GetMulti returns the cached value for each key. Missing or expired keys are absent from the returned map. Each lookup goes through Cache.Get, so refresh-ahead, SWR, and sliding-TTL all apply.
func (*Cache[K, V]) GetMultiOrLoad ¶
GetMultiOrLoad returns the cached value for each key. Hits are returned directly; misses are coalesced via [BulkLoader.LoadMulti] when WithBulkLoader is configured, otherwise via per-key Cache.GetOrLoad. The map contains only successfully-resolved keys; the first transport error from LoadMulti aborts the call and is returned.
func (*Cache[K, V]) GetOrLoad ¶
GetOrLoad returns the cached value for key, or invokes the configured Loader when the key is absent or expired. Concurrent callers for the same missing key share a single Loader invocation (singleflight). Returns ErrNoLoader, ErrClosed, ctx.Err(), or any error from the Loader. ErrNotFound with WithNegativeCache active records a tombstone and returns the same error.
Example ¶
ExampleCache_GetOrLoad demonstrates the singleflight-deduplicated loader path. 1000 concurrent gets on a missing key would still invoke the loader exactly once.
package main
import (
"context"
"fmt"
"time"
"github.com/go-rotini/memcache"
)
func main() {
loader := memcache.LoaderFunc[string, int](
func(_ context.Context, k string) (int, time.Duration, error) {
return len(k), 5 * time.Minute, nil
},
)
c, _ := memcache.New[string, int](
memcache.WithMaxEntries(64),
memcache.WithLoader(loader),
)
defer c.Close()
v, _ := c.GetOrLoad(context.Background(), "hello")
fmt.Println(v)
}
Output: 5
func (*Cache[K, V]) GetOrLoadFn ¶
func (c *Cache[K, V]) GetOrLoadFn(ctx context.Context, key K, fn func(ctx context.Context, key K) (V, time.Duration, error)) (V, error)
GetOrLoadFn is Cache.GetOrLoad with a per-call loader function. fn is invoked at most once per concurrent miss (singleflight by key).
func (*Cache[K, V]) GetOrSet ¶
GetOrSet returns the cached value for key when present and fresh, or stores value and returns it otherwise. The bool is true when the returned value came from the cache. Atomic with respect to concurrent Set on the same key. Does not invoke any configured Loader; for that see Cache.GetOrLoad.
func (*Cache[K, V]) GetWithExpiry ¶
GetWithExpiry returns the value, its absolute expiry time, and a hit boolean. expiry is the zero time when the entry has no TTL.
func (*Cache[K, V]) Has ¶
Has reports whether the cache contains a fresh entry for key. It does NOT promote the entry in the eviction policy and never invokes the configured Loader. The shard RLock MUST be held through every field read on the resolved entry to avoid racing with eviction's pool recycling.
func (*Cache[K, V]) Histogram ¶
Histogram returns a coarse summary bucketed by age, weight, and hit count. See Histogram for bucket bounds.
func (*Cache[K, V]) Hottest ¶
Hottest returns the n entries with the highest hit counts, descending. n=0 returns every entry. Hit count is the per-entry hits counter, a coarse proxy for popularity; frequency-aware policy state is not exposed here.
Example ¶
ExampleCache_Hottest sorts by hit count for "what's been getting the most attention?" REPL views.
package main
import (
"fmt"
"github.com/go-rotini/memcache"
)
func main() {
c, _ := memcache.New[string, int](memcache.WithMaxEntries(8))
defer c.Close()
_ = c.Set("cold", 0)
_ = c.Set("hot", 1)
for range 5 {
_, _ = c.Get("hot")
}
top := c.Hottest(1)
fmt.Println(top[0].Key)
}
Output: hot
func (*Cache[K, V]) InvalidateTag ¶
InvalidateTag removes every entry currently tagged with tag. Returns the number of entries removed; eviction stats record each removal under EvictReasonTag.
Lock discipline: the tag→keys snapshot is taken under the index's read lock and that lock is released before any per-shard write lock is acquired, so InvalidateTag never holds both simultaneously (matching spec §19.2.1).
Example ¶
ExampleCache_SetWithTags + InvalidateTag is the "drop everything owned by a user" pattern.
package main
import (
"fmt"
"github.com/go-rotini/memcache"
)
func main() {
c, _ := memcache.New[string, int](memcache.WithMaxEntries(64))
defer c.Close()
_ = c.SetWithTags("user:42:profile", 1, "user-42")
_ = c.SetWithTags("user:42:settings", 2, "user-42")
_ = c.SetWithTags("user:7:profile", 3, "user-7")
dropped := c.InvalidateTag("user-42")
fmt.Println("dropped", dropped)
fmt.Println("user-7 survives:", c.Has("user:7:profile"))
}
Output: dropped 2
user-7 survives: true
func (*Cache[K, V]) InvalidateTags ¶
InvalidateTags removes every entry tagged with ANY of the supplied tags (set union). An entry tagged with multiple matching tags is counted once.
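The union-with-dedup semantics can be sketched with a toy inverted index. This is a hypothetical stand-in (the `tagIndex` type and its methods are not part of the package) showing why an entry carrying several matching tags is counted once:

```go
package main

import "fmt"

// tagIndex is a hypothetical inverted index from tag to key set,
// illustrating the union semantics InvalidateTags describes: gather the
// keys under every supplied tag, dedupe, and remove each key once.
type tagIndex map[string]map[string]struct{}

func (t tagIndex) add(key string, tags ...string) {
	for _, tag := range tags {
		if t[tag] == nil {
			t[tag] = map[string]struct{}{}
		}
		t[tag][key] = struct{}{}
	}
}

// invalidate returns the deduplicated key set for the union of tags.
func (t tagIndex) invalidate(tags ...string) []string {
	seen := map[string]struct{}{}
	var keys []string
	for _, tag := range tags {
		for key := range t[tag] {
			if _, dup := seen[key]; dup {
				continue // tagged with several matching tags: counted once
			}
			seen[key] = struct{}{}
			keys = append(keys, key)
		}
	}
	return keys
}

func main() {
	idx := tagIndex{}
	idx.add("user:42:profile", "user-42", "team-eng")
	idx.add("user:42:settings", "user-42")
	idx.add("user:7:profile", "user-7")
	// "user:42:profile" matches both tags but is counted once.
	fmt.Println(len(idx.invalidate("user-42", "team-eng")))
}
```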
func (*Cache[K, V]) ItemMetadata ¶
ItemMetadata returns the entry's metadata without copying its value. Absent or negative-tombstone entries return (zero, false).
func (*Cache[K, V]) Items ¶
Items returns a snapshot of every live entry plus metadata. Iteration order is shard-by-shard, then map order; neither is stable. O(n).
Example ¶
ExampleCache_Items is the canonical diagnostic pattern.
package main
import (
"fmt"
"sort"
"github.com/go-rotini/memcache"
)
func main() {
c, _ := memcache.New[string, int](memcache.WithMaxEntries(8))
defer c.Close()
_ = c.Set("a", 1)
_ = c.Set("b", 2)
keys := []string{}
for _, ki := range c.Items() {
keys = append(keys, ki.Key)
}
sort.Strings(keys)
fmt.Println(keys)
}
Output: [a b]
func (*Cache[K, V]) Keys ¶
func (c *Cache[K, V]) Keys() []K
Keys returns a freshly-allocated snapshot of every key in the cache. Use Cache.Range for non-allocating iteration.
func (*Cache[K, V]) Load ¶
Load replaces the cache contents with the snapshot read from r and returns the number of entries inserted. Records past their TTL are skipped. On error the cache may be partially loaded.
func (*Cache[K, V]) LoadFile ¶
LoadFile loads a snapshot from path; convenience wrapper for Cache.Load that opens and closes the file.
func (*Cache[K, V]) LongestLived ¶
LongestLived returns the n entries with the oldest insertion time, oldest first. n=0 returns every entry.
func (*Cache[K, V]) Merge ¶
Merge folds the snapshot's entries into the cache, overwriting existing keys. Untouched keys are preserved. Returns inserts plus replacements.
func (*Cache[K, V]) Peek ¶
Peek returns the value without affecting eviction policy state. A negative-cache tombstone is reported as a miss.
func (*Cache[K, V]) PeekOrAdd ¶
PeekOrAdd behaves like Cache.GetOrSet but does not promote the existing entry in the eviction policy. New inserts follow normal policy placement.
func (*Cache[K, V]) Range ¶
Range calls fn for every entry in the cache. Iteration is shard by shard under each shard's read lock; expired and negative-cache entries are skipped. The visit set is NOT a consistent snapshot. fn returning false stops iteration immediately.
func (*Cache[K, V]) Refresh ¶
Refresh asynchronously triggers a Loader call for key, replacing the cached value when the load completes. Joins an existing in-flight call. Returns ErrNoLoader, ErrClosed, or ctx.Err() if already canceled.
func (*Cache[K, V]) RefreshAll ¶
RefreshAll triggers a Loader call for every key in the cache. Returns the number of flights queued, skipping keys with an in-flight load. Negative-cache tombstones are not refreshed.
func (*Cache[K, V]) Reset ¶
func (c *Cache[K, V]) Reset()
Reset removes all entries without firing eviction callbacks.
func (*Cache[K, V]) ResetStats ¶
func (c *Cache[K, V]) ResetStats()
ResetStats zeros counters and stamps [Stats.LastResetAt]. Live entry/byte counts are preserved.
func (*Cache[K, V]) Resize ¶
Resize changes the cache's bound at runtime. newSize is interpreted as MaxEntries unless the cache was built with WithMaxBytes, in which case it is the byte budget. Returns the number of entries evicted by shrinking; growing the cache or passing newSize <= 0 returns 0. Concurrent Resize calls produce last-writer-wins on the bound.
func (*Cache[K, V]) Save ¶
Save writes a snapshot of the cache to w using the cache's configured Codec: magic-prefixed header, length-prefixed records, and a trailing CRC32-Castagnoli. Acquires each shard's read lock in turn; writers to a shard being snapshotted block until its pass completes.
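The framing Save describes — magic prefix, length-prefixed records, trailing CRC32-Castagnoli — can be sketched in stdlib Go. The `MEMC` magic and little-endian u32 framing here are illustrative assumptions, not the package's actual wire format:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

var castagnoli = crc32.MakeTable(crc32.Castagnoli)

// writeSnapshot frames records as: a magic prefix, then a u32
// little-endian length plus payload per record, then a trailing
// CRC32-Castagnoli over everything before it.
func writeSnapshot(records [][]byte) []byte {
	var buf bytes.Buffer
	buf.WriteString("MEMC") // illustrative magic
	for _, rec := range records {
		var n [4]byte
		binary.LittleEndian.PutUint32(n[:], uint32(len(rec)))
		buf.Write(n[:])
		buf.Write(rec)
	}
	var tail [4]byte
	binary.LittleEndian.PutUint32(tail[:], crc32.Checksum(buf.Bytes(), castagnoli))
	buf.Write(tail[:])
	return buf.Bytes()
}

// readSnapshot verifies the trailer first, so corruption anywhere in
// the stream is caught before any record is parsed.
func readSnapshot(b []byte) ([][]byte, error) {
	if len(b) < 8 || string(b[:4]) != "MEMC" {
		return nil, fmt.Errorf("bad header")
	}
	body, tail := b[:len(b)-4], b[len(b)-4:]
	if crc32.Checksum(body, castagnoli) != binary.LittleEndian.Uint32(tail) {
		return nil, fmt.Errorf("checksum mismatch")
	}
	var recs [][]byte
	for p := body[4:]; len(p) > 0; {
		if len(p) < 4 {
			return nil, fmt.Errorf("truncated length")
		}
		n := int(binary.LittleEndian.Uint32(p[:4]))
		if 4+n > len(p) {
			return nil, fmt.Errorf("truncated record")
		}
		recs = append(recs, p[4:4+n])
		p = p[4+n:]
	}
	return recs, nil
}

func main() {
	snap := writeSnapshot([][]byte{[]byte("alice"), []byte("bob")})
	recs, err := readSnapshot(snap)
	fmt.Println(len(recs), err) // 2 <nil>
}
```

Checking the trailer before parsing any record is what lets Load reject a torn or bit-flipped file up front rather than partway through deserialization.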
func (*Cache[K, V]) SaveFile ¶
SaveFile writes a snapshot to path atomically via temp file + fsync + rename. Readers see either the prior or new snapshot, never torn.
Example ¶
ExampleCache_SaveFile demonstrates the warm-restart pattern for REPLs and daemons: snapshot to disk, reload on restart.
package main
import (
"fmt"
"os"
"path/filepath"
"github.com/go-rotini/memcache"
)
func main() {
dir, _ := os.MkdirTemp("", "memcache-example-*")
defer os.RemoveAll(dir)
path := filepath.Join(dir, "cache.snap")
c, _ := memcache.New[string, int](memcache.WithMaxEntries(64))
_ = c.Set("warm", 99)
if err := c.SaveFile(path); err != nil {
fmt.Println("save:", err)
return
}
_ = c.Close()
c2, _ := memcache.New[string, int](memcache.WithMaxEntries(64))
defer c2.Close()
if _, err := c2.LoadFile(path); err != nil {
fmt.Println("load:", err)
return
}
v, ok := c2.Get("warm")
fmt.Println(v, ok)
}
Output: 99 true
func (*Cache[K, V]) Set ¶
Set stores value under key with the cache's default TTL. If V implements CacheTTLer the value's TTL overrides WithDefaultTTL; if V implements CacheTagger its tags are merged into the entry's tag set. After insertion, WithGroup-bounded groups named by the entry's tags are shrunk back to capacity.
func (*Cache[K, V]) SetCtx ¶
SetCtx is the context-aware variant of Cache.Set. The ctx is checked at entry and threaded through to any Store write-through; on Store failure the in-memory entry is rolled back to keep both sides consistent.
func (*Cache[K, V]) SetIfAbsent ¶
SetIfAbsent stores value under key only when the key is absent (or expired). Returns stored=true when the value was inserted.
func (*Cache[K, V]) SetIfPresent ¶
SetIfPresent updates the value under key only when an entry already exists (and is fresh). Returns updated=true when the entry was replaced.
func (*Cache[K, V]) SetMulti ¶
SetMulti stores every (key, value) in items. The first error aborts; successfully-stored entries are NOT rolled back.
func (*Cache[K, V]) SetWithOptions ¶
SetWithOptions stores value under key with per-call overrides supplied via SetOption. Cache defaults apply to fields no option touches; when they conflict, the per-call option wins.
Example ¶
ExampleCache_SetWithOptions composes per-call overrides.
package main
import (
"fmt"
"sort"
"time"
"github.com/go-rotini/memcache"
)
func main() {
c, _ := memcache.New[string, string](memcache.WithMaxEntries(8))
defer c.Close()
_ = c.SetWithOptions("user:42",
"alice@example.com",
memcache.SetTTL(time.Hour),
memcache.SetTags("user-42", "team-eng"),
)
tags := c.Tags("user:42")
sort.Strings(tags)
fmt.Println(tags)
}
Output: [team-eng user-42]
func (*Cache[K, V]) SetWithTTL ¶
SetWithTTL stores value under key with the given TTL. A TTL of 0 means no expiry; a negative TTL returns ErrInvalidTTL.
Example ¶
ExampleCache_SetWithTTL stores a value with a per-call TTL.
package main
import (
"fmt"
"time"
"github.com/go-rotini/memcache"
)
func main() {
// WithTTLJitter(0) keeps the TTL exact for the example; the
// default 5%-of-TTL jitter would produce a non-deterministic
// godoc output.
c, _ := memcache.New[string, int](
memcache.WithMaxEntries(8),
memcache.WithTTLJitter(0),
)
defer c.Close()
_ = c.SetWithTTL("session-token", 1234, 30*time.Minute)
d, _ := c.TTL("session-token")
fmt.Println(d <= 30*time.Minute)
}
Output: true
func (*Cache[K, V]) SetWithTags ¶
SetWithTags stores value under key and indexes the entry under each supplied tag. The cache-level default TTL applies. Tags can later be used to remove this and similarly-tagged entries in one call via Cache.InvalidateTag.
Returns the same errors as Cache.Set, plus a *CapacityError wrapping ErrTooManyTags when a WithMaxTagsPerEntry or WithMaxTagsTotal limit is exceeded.
func (*Cache[K, V]) SoonestExpiring ¶
SoonestExpiring returns the n entries closest to TTL expiry, soonest first. Entries without a TTL are excluded entirely.
func (*Cache[K, V]) Subscribe ¶
Subscribe registers a buffered channel for events matching kinds (no kinds = all). The returned cancel function unsubscribes and closes the channel. Drops on full channels are counted in [Stats.EventsDropped].
Example ¶
ExampleCache_Subscribe shows event-driven observability. EventInsert/Update/Evict and friends fan out to subscribers through a non-blocking publish; full channels drop and EventsDropped tracks the loss.
package main
import (
"fmt"
"github.com/go-rotini/memcache"
)
func main() {
c, _ := memcache.New[string, int](memcache.WithMaxEntries(4))
defer c.Close()
ch, cancel := c.Subscribe(8, memcache.EventInsert)
defer cancel()
_ = c.Set("k", 1)
e := <-ch
fmt.Println(e.Kind, e.Key, e.Value)
}
Output: insert k 1
func (*Cache[K, V]) Sync ¶
Sync drains pending background work (async tag cleanup, async writes) and returns when the cache is quiescent. Returns ctx.Err() if ctx cancels first. Useful before Cache.Save so the snapshot reflects every Set/Delete already returned.
func (*Cache[K, V]) TTL ¶
TTL returns the remaining time-to-live for the entry at key. (0, true) means the entry exists with no TTL; (0, false) means the entry is absent or expired.
func (*Cache[K, V]) Tags ¶
Tags returns a copy of the tags carried by the entry at key, or nil when the key is absent. The returned slice is independent of cache storage; callers may mutate it freely.
func (*Cache[K, V]) Touch ¶
Touch refreshes the TTL of the entry at key without modifying its value. The new TTL is the entry's sliding-TTL when set, otherwise WithDefaultTTL. Returns true when the entry was refreshed.
func (*Cache[K, V]) TouchWithTTL ¶
TouchWithTTL resets the TTL of the entry at key to ttl. A TTL of 0 clears the expiry; a negative TTL is a no-op returning false.
func (*Cache[K, V]) Update ¶
Update is shorthand for Cache.ComputeIfPresent that always stores fn's result. Returns (zero, ErrNotFound) when the entry is absent.
type CacheKeyer ¶
type CacheKeyer interface {
CacheKey() string
}
CacheKeyer is implemented by values whose natural cache key is a deterministic function of the value itself. Pair with SetCacheable, GetCacheable, and DeleteCacheable to avoid repeating the key-derivation expression at every call site. Implementations must be deterministic.
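A minimal implementation sketch, assuming a hypothetical Profile value type (the local interface mirrors the one above; `setCacheable` is an illustrative stand-in for the SetCacheable helper, whose exact signature is not shown here):

```go
package main

import "fmt"

// CacheKeyer mirrors the interface documented above.
type CacheKeyer interface {
	CacheKey() string
}

// Profile is a hypothetical value whose natural key is a deterministic
// function of the value itself: same ID, same key, every time.
type Profile struct {
	ID   int
	Name string
}

func (p Profile) CacheKey() string {
	return fmt.Sprintf("profile:%d", p.ID)
}

// setCacheable sketches how a SetCacheable-style helper derives the key
// from the value, so call sites never repeat the key expression.
func setCacheable(store map[string]any, v CacheKeyer) {
	store[v.CacheKey()] = v
}

func main() {
	store := map[string]any{}
	setCacheable(store, Profile{ID: 42, Name: "alice"})
	_, ok := store["profile:42"]
	fmt.Println(ok)
}
```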
type CacheTTLer ¶
CacheTTLer is implemented by values that supply their own TTL at insert time. Cache.Set uses the returned duration in place of the configured default; explicit TTLs from Cache.SetWithTTL / Cache.SetWithOptions take precedence. Returning 0 means no expiry; a negative duration falls back to the cache default.
type CacheTagger ¶
type CacheTagger interface {
CacheTags() []string
}
CacheTagger is implemented by values that auto-tag themselves at insert time. Cache.Set merges the returned tags into the entry's tag set; Cache.SetWithTags and Cache.SetWithOptions override and drop auto-tags. A nil or empty slice leaves the entry untagged.
type CacheView ¶
type CacheView[K comparable, V any] struct {
	// contains filtered or unexported fields
}
CacheView is a read-only handle that shares storage with a Cache. View methods that touch a closed cache return zero values.
func (*CacheView[K, V]) Bytes ¶
Bytes returns the underlying cache's Cache.Bytes.
func (*CacheView[K, V]) Expiry ¶
Expiry delegates to Cache.Expiry.
func (*CacheView[K, V]) Get ¶
Get delegates to Cache.Get, including eviction-policy promotion on hit. Use CacheView.Peek for a no-promotion read.
func (*CacheView[K, V]) Keys ¶
func (v *CacheView[K, V]) Keys() []K
Keys returns a snapshot of cache keys via Cache.Keys.
func (*CacheView[K, V]) Peek ¶
Peek delegates to Cache.Peek (no eviction-policy side effect).
func (*CacheView[K, V]) Range ¶
Range iterates entries via Cache.Range.
func (*CacheView[K, V]) Stats ¶
Stats returns the underlying cache's Cache.Stats.
type Cacheable ¶
type Cacheable interface {
CacheKeyer
CacheTTLer
CacheTagger
SnapshotMarshaler
SnapshotUnmarshaler
}
Cacheable is the umbrella interface for V types that participate in cache-specific behaviors. Every method is independently optional; the cache type-asserts against the sub-interfaces individually.
type CapacityError ¶
type CapacityError struct {
Key any
Reason string
LimitField string
// Cause is the underlying sentinel (e.g. [ErrTooManyTags]).
// Optional; populated when the limit field has a dedicated sentinel.
Cause error
}
CapacityError describes a Set rejected because of size limits.
func (*CapacityError) Error ¶
func (e *CapacityError) Error() string
func (*CapacityError) Is ¶
func (e *CapacityError) Is(target error) bool
Is reports whether target is a *CapacityError or the cached sentinel cause.
func (*CapacityError) Unwrap ¶
func (e *CapacityError) Unwrap() error
Unwrap returns the underlying sentinel cause, if any.
type Clock ¶
type Clock interface {
// Now returns the current time.
Now() time.Time
// AfterFunc schedules fn to run after d. The returned Timer can be
// stopped or reset.
AfterFunc(d time.Duration, fn func()) Timer
}
Clock is the abstraction the cache uses to read time and schedule callbacks. Tests use FakeClock for deterministic timing.
type Codec ¶
type Codec interface {
// Marshal encodes v to bytes.
Marshal(v any) ([]byte, error)
// Unmarshal decodes data into v (which must be a non-nil pointer).
Unmarshal(data []byte, v any) error
// Name returns the codec's stable identifier (used in snapshot
// headers to detect codec mismatches on Load).
Name() string
}
Codec is the abstraction the snapshot subsystem uses to serialize keys and values to bytes. The package ships gob, JSON, and raw implementations; users may supply their own via WithCodec.
type CodecError ¶
type CodecError struct {
Op string // "marshal" or "unmarshal"
Codec string // codec name ("gob", "json", "raw", ...)
Err error
}
CodecError wraps an error from a Codec implementation.
func (*CodecError) Error ¶
func (e *CodecError) Error() string
func (*CodecError) Is ¶
func (e *CodecError) Is(target error) bool
Is reports whether target is a *CodecError.
func (*CodecError) Unwrap ¶
func (e *CodecError) Unwrap() error
Unwrap returns the underlying error.
type CompressedCodec ¶
type CompressedCodec struct {
// contains filtered or unexported fields
}
CompressedCodec wraps a base Codec with gzip compression. The resulting codec marshals through the base codec first, then runs the bytes through gzip; Unmarshal reverses the order. The Name returned is `<base>+gzip` so snapshot codec-mismatch checks catch the difference between a compressed and an uncompressed snapshot of the same payload.
Use this when snapshots cross a network boundary, when on-disk space matters more than save/load CPU, or when the cache holds highly compressible data (parsed config, repeated tags, ASCII).
Constructed via WithCompressedCodec or directly with NewCompressedCodec; either form composes with WithCodec.
func NewCompressedCodec ¶
func NewCompressedCodec(base Codec, level int) CompressedCodec
NewCompressedCodec wraps base with gzip at the given compression level. Pass gzip.DefaultCompression for the standard speed/size tradeoff, gzip.BestSpeed (1) for tighter latency budgets, or gzip.BestCompression (9) for the smallest snapshots. Invalid levels fall back to gzip.DefaultCompression at Marshal time.
func (CompressedCodec) Marshal ¶
func (c CompressedCodec) Marshal(v any) ([]byte, error)
Marshal encodes v through the base codec and then gzip-compresses the result.
func (CompressedCodec) Name ¶
func (c CompressedCodec) Name() string
Name reports `<base>+gzip` (e.g. `gob+gzip`, `json+gzip`).
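The codec layering can be sketched with stdlib pieces; this stand-in pairs JSON with gzip and is illustrative, not the package's CompressedCodec (type and method bodies here are assumptions that satisfy the Codec shape documented above):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/json"
	"fmt"
	"io"
)

// gzipJSONCodec marshals through a base codec (JSON here), then gzips
// the bytes; Unmarshal reverses the order. Name carries the "+gzip"
// suffix so a snapshot header records which layering produced it.
type gzipJSONCodec struct{}

func (gzipJSONCodec) Name() string { return "json+gzip" }

func (gzipJSONCodec) Marshal(v any) ([]byte, error) {
	raw, err := json.Marshal(v)
	if err != nil {
		return nil, err
	}
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(raw); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil { // Close flushes the gzip trailer
		return nil, err
	}
	return buf.Bytes(), nil
}

func (gzipJSONCodec) Unmarshal(data []byte, v any) error {
	zr, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return err
	}
	raw, err := io.ReadAll(zr)
	if err != nil {
		return err
	}
	return json.Unmarshal(raw, v)
}

func main() {
	c := gzipJSONCodec{}
	b, _ := c.Marshal(map[string]int{"warm": 99})
	var out map[string]int
	_ = c.Unmarshal(b, &out)
	fmt.Println(c.Name(), out["warm"])
}
```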
type ComputeAction ¶
type ComputeAction uint8
ComputeAction is the return discriminator for Compute callbacks.
const (
	ComputeStore  ComputeAction = iota // store the returned value
	ComputeDelete                      // remove the entry
	ComputeNoOp                        // leave the entry unchanged
)
Compute actions.
func (ComputeAction) String ¶
func (a ComputeAction) String() string
String returns a human-readable name for the action.
type ConfigError ¶
ConfigError describes an invalid configuration passed to New.
func (*ConfigError) Error ¶
func (e *ConfigError) Error() string
func (*ConfigError) Is ¶
func (e *ConfigError) Is(target error) bool
Is reports whether target is a *ConfigError.
type Doorkeeper ¶
type Doorkeeper[K comparable] struct {
	// contains filtered or unexported fields
}
Doorkeeper is a "seen-twice" admission filter backed by a bloom filter. A candidate is admitted only after Doorkeeper.Observe has been called for it at least once previously. Doorkeeper is safe for concurrent use.
func NewDoorkeeper ¶
func NewDoorkeeper[K comparable](expected int, fn func(K) uint64) *Doorkeeper[K]
NewDoorkeeper builds a Doorkeeper sized for expected distinct keys, hashing through fn.
func (*Doorkeeper[K]) Admit ¶
func (d *Doorkeeper[K]) Admit(key K) bool
Admit reports whether the doorkeeper has previously observed key.
func (*Doorkeeper[K]) Observe ¶
func (d *Doorkeeper[K]) Observe(key K)
Observe records key so subsequent Admit calls return true for the same key until Doorkeeper.Reset.
func (*Doorkeeper[K]) Reset ¶
func (d *Doorkeeper[K]) Reset()
Reset clears the bloom so every key starts fresh.
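A minimal single-goroutine sketch of the seen-twice mechanism (string keys, a fixed bitset, two positions derived from one FNV hash — all simplifying assumptions; the real Doorkeeper is generic, concurrency-safe, and sized from an expected-keys estimate):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// doorkeeper records observed keys in a small bloom-style bitset;
// Admit returns true only when every bit for the key is already set,
// i.e. the key was (probably) observed before. False positives are
// possible, as with any bloom filter; false negatives are not.
type doorkeeper struct {
	bits []uint64
}

func newDoorkeeper(bitCount int) *doorkeeper {
	return &doorkeeper{bits: make([]uint64, (bitCount+63)/64)}
}

// hashes derives two bit positions from one FNV-1a hash.
func (d *doorkeeper) hashes(key string) (uint64, uint64) {
	h := fnv.New64a()
	h.Write([]byte(key))
	v := h.Sum64()
	n := uint64(len(d.bits) * 64)
	return v % n, (v >> 32) % n
}

func (d *doorkeeper) Observe(key string) {
	a, b := d.hashes(key)
	d.bits[a/64] |= 1 << (a % 64)
	d.bits[b/64] |= 1 << (b % 64)
}

func (d *doorkeeper) Admit(key string) bool {
	a, b := d.hashes(key)
	return d.bits[a/64]&(1<<(a%64)) != 0 && d.bits[b/64]&(1<<(b%64)) != 0
}

func main() {
	d := newDoorkeeper(1 << 12)
	fmt.Println(d.Admit("hot")) // never observed: rejected
	d.Observe("hot")
	fmt.Println(d.Admit("hot")) // seen before: admitted
}
```

The payoff for a cache: one-hit-wonder keys never displace entries the policy is already tracking, because admission requires a second appearance.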
type EncryptedCodec ¶
type EncryptedCodec struct {
// contains filtered or unexported fields
}
EncryptedCodec wraps a base Codec with AES-256-GCM authenticated encryption. Wire format: 12-byte nonce followed by GCM-sealed ciphertext. Construct via NewEncryptedCodec; install via WithCodec or WithEncryptedCodec. The package does not manage key rotation.
func NewEncryptedCodec ¶
func NewEncryptedCodec(base Codec, key []byte) (*EncryptedCodec, error)
NewEncryptedCodec wraps base with AES-256-GCM keyed by key. key must be exactly 32 bytes; shorter or longer keys return a configuration error at construction time so the failure is loud rather than masked as a runtime decode error later.
func (*EncryptedCodec) Marshal ¶
func (c *EncryptedCodec) Marshal(v any) ([]byte, error)
Marshal encodes v through the base codec, then encrypts the resulting bytes with a fresh 96-bit nonce. The output layout is nonce ‖ ciphertext.
func (*EncryptedCodec) Name ¶
func (c *EncryptedCodec) Name() string
Name reports `<base>+aes-256-gcm` so snapshot codec-mismatch detection catches the difference between encrypted and plaintext snapshots of the same payload.
func (*EncryptedCodec) Unmarshal ¶
func (c *EncryptedCodec) Unmarshal(data []byte, v any) error
Unmarshal splits the leading nonce off data, decrypts the remainder, and feeds the recovered plaintext to the base codec. Returns an error when the ciphertext fails authentication (wrong key, truncation, tampering).
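The nonce ‖ ciphertext wire format can be sketched with crypto/aes and crypto/cipher; these free functions are an illustrative reduction of the codec, not its API:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// seal encrypts plaintext under a 32-byte key (AES-256) with a fresh
// 12-byte nonce, returning nonce ‖ ciphertext: the layout Unmarshal
// expects to split apart.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize()) // 12 bytes for standard GCM
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Seal appends the ciphertext to nonce, producing nonce ‖ ciphertext.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// open splits the leading nonce off data and decrypts the remainder;
// GCM authentication fails on a wrong key, truncation, or tampering.
func open(key, data []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(data) < gcm.NonceSize() {
		return nil, fmt.Errorf("short ciphertext")
	}
	nonce, ct := data[:gcm.NonceSize()], data[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := make([]byte, 32) // all-zero key: demo only, never production
	sealed, _ := seal(key, []byte("snapshot bytes"))
	plain, err := open(key, sealed)
	fmt.Println(string(plain), err)
}
```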
type Event ¶
type Event[K comparable, V any] struct {
	Kind   EventKind
	Key    K
	Value  V
	Reason EvictionReason // populated for EventEvict
	Err    error          // populated for EventLoadError
	At     time.Time
	Tags   []string
}
Event is an observable cache event delivered to Subscribe channels.
type EventKind ¶
type EventKind uint8
EventKind identifies an event kind for Subscribe.
type EvictionReason ¶
type EvictionReason uint8
EvictionReason describes why an entry was removed from the cache.
const (
	EvictReasonCapacity      EvictionReason = iota // policy chose to evict
	EvictReasonExpired                             // TTL hit
	EvictReasonExpireFunc                          // WithExpireFunc returned true
	EvictReasonReplaced                            // overwritten by Set
	EvictReasonDeleted                             // explicit Delete
	EvictReasonDeletedPrefix                       // DeletePrefix
	EvictReasonDeletedWhere                        // DeleteWhere
	EvictReasonTag                                 // InvalidateTag
	EvictReasonRemote                              // WithInvalidationSubscriber
	EvictReasonResize                              // Resize shrunk the cache
	EvictReasonComputed                            // Compute returned ComputeDelete
	EvictReasonClear                               // Clear()
	EvictReasonStoreRollback                       // Store write-through failed; in-memory rolled back
)
Eviction reasons. The order is fixed for stable indexing into Stats.EvictionsByReason.
func (EvictionReason) String ¶
func (r EvictionReason) String() string
String returns a human-readable name for the reason.
type FakeClock ¶
type FakeClock struct {
// contains filtered or unexported fields
}
FakeClock is a controllable Clock for tests. Calls to Advance and Set drive any scheduled timers whose deadline has been reached.
FakeClock is safe for concurrent use.
func NewFakeClock ¶
NewFakeClock returns a FakeClock seeded at the given time. If t is the zero value, it is replaced with the Unix epoch.
func (*FakeClock) Advance ¶
Advance moves the FakeClock forward by d. A non-positive d has no effect. Any timers due fire in deadline order.
type GobCodec ¶
type GobCodec struct{}
GobCodec is the default snapshot codec. Compact, fast, Go-only. Each snapshot embeds type schema in the byte stream, so a type whose fields change between write and read tolerates the drift (gob's documented behavior for missing or extra fields).
type Histogram ¶
type Histogram struct {
AgeBuckets [8]int64 // bucket bounds: 1s, 10s, 1m, 10m, 1h, 1d, 7d, +inf
WeightBuckets [8]int64 // bucket bounds: 1, 16, 256, 4Ki, 64Ki, 1Mi, 16Mi, +inf
HitsBuckets [8]int64 // bucket bounds: 0, 1, 2, 4, 16, 64, 256, +inf
TotalEntries int
TotalWeight int64
}
Histogram is a coarse multi-dimensional summary of cache contents.
type Item ¶
type Item[V any] struct {
	Value      V
	Expiry     time.Time // zero if no TTL
	LastAccess time.Time
	Inserted   time.Time
	Hits       uint32 // hit count since insertion
	Weight     int64
	Tags       []string
	Sliding    bool
}
Item is the value-with-metadata view used by Range, snapshots, and events.
type JSONCodec ¶
type JSONCodec struct{}
JSONCodec is a JSON snapshot codec. Larger and slower than gob, but portable across languages and `jq`-able. Requires V (and K when used as a map key in the snapshot format) to be JSON-marshalable.
type KeyedItem ¶
type KeyedItem[K comparable, V any] struct {
	Item[V]
	Key K
}
KeyedItem pairs a key with its Item view.
type LoadError ¶
LoadError wraps an error returned by a Loader.
type LoadResult ¶
type LoadResult[V any] struct {
	Value V
	TTL   time.Duration
	Err   error // per-key error; cache treats these as load failures, not catastrophic
}
LoadResult is one entry in a BulkLoader response.
type Loader ¶
type Loader[K comparable, V any] interface {
	// Load fetches the value for key. Implementations should respect
	// ctx cancellation and return promptly when ctx is done.
	Load(ctx context.Context, key K) (value V, ttl time.Duration, err error)
}
Loader is the interface invoked on a cache miss when a Cache has been configured with WithLoader. The returned TTL is applied to the resulting entry; a zero TTL falls back to the cache's configured default.
type LoaderFunc ¶
LoaderFunc adapts a plain function to the Loader interface.
type MemoryStore ¶
type MemoryStore[K comparable, V any] struct {
	// contains filtered or unexported fields
}
MemoryStore is the default in-process Store. TTLs are enforced lazily on Get. Safe for concurrent use; zero value is invalid (use NewMemoryStore).
func NewMemoryStore ¶
func NewMemoryStore[K comparable, V any](clock Clock) *MemoryStore[K, V]
NewMemoryStore constructs an empty in-memory Store. clock controls TTL evaluation; passing nil falls back to RealClock so callers can inline `NewMemoryStore[K, V](nil)` for typical production use.
func (*MemoryStore[K, V]) Close ¶
func (m *MemoryStore[K, V]) Close() error
Close empties the store and marks it closed. Subsequent calls return ErrClosed. Idempotent.
func (*MemoryStore[K, V]) Delete ¶
func (m *MemoryStore[K, V]) Delete(ctx context.Context, key K) (bool, error)
Delete removes the entry for key.
func (*MemoryStore[K, V]) Get ¶
func (m *MemoryStore[K, V]) Get(ctx context.Context, key K) (V, bool, error)
Get returns the value at key, honoring TTL. ctx cancellation is checked once at entry; the rest of the call is a single map lookup under read lock.
func (*MemoryStore[K, V]) Iterate ¶
func (m *MemoryStore[K, V]) Iterate(ctx context.Context, fn func(key K, value V) bool) error
Iterate calls fn for each non-expired entry. fn returns false to stop iteration. Iteration order is map-iteration order (unspecified); callers must not rely on it.
fn runs under the store's read lock; mutations from fn would deadlock. Callers who need to mutate should snapshot first.
type Metadata ¶
type Metadata struct {
Expiry time.Time
Inserted time.Time
LastAccess time.Time
Hits uint32
Weight int64
Tags []string
Sliding bool
}
Metadata is the value-less view used by ItemMetadata.
type Number ¶
type Number interface {
~int | ~int8 | ~int16 | ~int32 | ~int64 |
~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 |
~float32 | ~float64
}
Number is the type constraint for numeric atomic operations (Increment, Decrement, IncrementBy).
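The kind of generic helper this constraint enables can be sketched as follows (`incrementBy` is an illustrative free function, not the cache method):

```go
package main

import "fmt"

// Number mirrors the constraint documented above: any type whose
// underlying type is a built-in integer or float.
type Number interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 |
		~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 |
		~float32 | ~float64
}

// incrementBy adds delta to the counter at key and returns the new
// value; one generic body serves every numeric counter type.
func incrementBy[N Number](counters map[string]N, key string, delta N) N {
	counters[key] += delta
	return counters[key]
}

func main() {
	hits := map[string]int64{}
	incrementBy(hits, "alice", 1)
	fmt.Println(incrementBy(hits, "alice", 2))
}
```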
type Option ¶
type Option func(*config)
Option configures a Cache at construction time. Options are applied in order; later options override earlier ones for the same setting.
func WithAdmissionPolicy ¶
func WithAdmissionPolicy[K comparable](p AdmissionPolicy[K]) Option
WithAdmissionPolicy installs a custom AdmissionPolicy. Composes freely with WithPolicy: admission decides whether a key is tracked, eviction decides which tracked key is dropped.
func WithAsyncWrites ¶
func WithAsyncWrites() Option
WithAsyncWrites decouples Cache.Set / Cache.Delete from the storage update; ops enqueue into a per-shard pending map drained by a per-cache apply goroutine. Visibility: a Set followed by a Get returns the new value (read paths consult pending before storage). Eviction state, hit counters, refresh-ahead, and Store write-through are deferred to apply time; Store.Set errors during apply are logged, not returned. Cache.Compute stays synchronous. Cache.Sync blocks until pending is empty; Cache.Close drains pending. EXPERIMENTAL.
func WithAutoLoad ¶
WithAutoLoad populates the cache from a snapshot at path during New. A missing file is not an error; any other load error fails New unless WithAutoLoadIgnoreErrors is also set.
func WithAutoLoadIgnoreErrors ¶
WithAutoLoadIgnoreErrors makes WithAutoLoad log and swallow load errors instead of aborting New.
func WithAutoSave ¶
WithAutoSave persists a snapshot to path every interval via a per-cache goroutine. Cache.Close writes a final snapshot before tearing it down. Errors are logged. A non-positive interval disables the periodic save.
func WithBulkLoader ¶
func WithBulkLoader[K comparable, V any](l BulkLoader[K, V]) Option
WithBulkLoader attaches a BulkLoader used by Cache.GetMultiOrLoad. When configured, missing keys in a bulk-get are coalesced into a single LoadMulti call rather than firing one Loader invocation per key.
func WithCallbackTimeout ¶
WithCallbackTimeout bounds synchronous-hook duration before a warning is logged. Best-effort: Go cannot preempt the callback. d <= 0 disables; default 0.
func WithClock ¶
WithClock overrides the clock used for all TTL and timing decisions. The default is RealClock; tests may pass FakeClock.
func WithCodec ¶
WithCodec selects the codec used by Cache.Save and Cache.Load. The default is GobCodec.
func WithCollisionTracking ¶
WithCollisionTracking enables hash-collision counting in [Stats.HashCollisions]. Off by default; turn on only when diagnosing hot keys or weak custom hashers, since it adds map lookup+write per insert.
func WithCompressedCodec ¶
WithCompressedCodec installs NewCompressedCodec(base, level) as the cache's snapshot codec. Convenience wrapper around WithCodec for the common gzip case.
func WithCopyOnGet ¶
WithCopyOnGet installs fn applied to V on every successful Get/Peek return; callers receive fn(value). fn is called under the shard's read lock and MUST be fast and side-effect-free.
func WithDefaultTTL ¶
WithDefaultTTL sets the TTL applied by Set when no per-call TTL is supplied. A zero duration means "no TTL by default".
func WithDoorkeeper ¶
WithDoorkeeper enables a Doorkeeper sized for the cache's max-entries budget (or 1024 when unbounded). Pass false to opt out.
func WithEncryptedCodec ¶
WithEncryptedCodec installs NewEncryptedCodec(base, key) as the cache's snapshot codec. Construction errors (e.g., wrong key length) propagate through New as a *ConfigError.
func WithErrorTTL ¶
WithErrorTTL caches Loader errors (other than ErrNotFound) for d. Subsequent Cache.GetOrLoad calls within the window return the cached error without re-invoking the Loader.
func WithEventsBuffer ¶
WithEventsBuffer sets the default buffer size used by Cache.Subscribe when the caller does not supply an explicit size. Default 64.
func WithExpireFunc ¶
func WithExpireFunc[K comparable, V any](fn func(key K, value V, meta Metadata) bool) Option
WithExpireFunc registers a per-entry expiry predicate. fn is called on every Get and janitor sweep with the entry's metadata; true marks it expired regardless of TTL. Removals via fn are recorded under EvictReasonExpireFunc. fn MUST be fast and side-effect-free; it is called under shard locks and is NOT routed through WithCallbackTimeout. A panicking fn is recovered and the entry is treated as fresh.
func WithExpvar ¶
WithExpvar registers the cache's Stats under name in stdlib expvar as a JSON object keyed on snake-case stat names. A duplicate registration is silently a no-op.
func WithFlatStorage ¶
func WithFlatStorage() Option
WithFlatStorage opts each shard into a flat hash-probed storage layout (linear probing + tombstone-driven compactions) instead of the default Go map. [Stats.Compactions] reports total rebuilds across shards. EXPERIMENTAL.
func WithGroup ¶
WithGroup defines a capacity-bounded tag group. After a Set whose tags include name, the cache evicts oldest-first under EvictReasonTag if more than capacity entries carry that tag. Multiple WithGroup calls register independent groups; last call wins per name. Non-positive capacity removes a prior registration.
func WithHasher ¶
func WithHasher[K comparable](fn func(K) uint64) Option
WithHasher overrides the default key hasher. Useful when the caller can supply a faster hasher for their key type than the package's default reflect-based fallback.
func WithInvalidationPublisher ¶
func WithInvalidationPublisher[K comparable](fn func(key K, reason EvictionReason)) Option
WithInvalidationPublisher registers fn invoked synchronously on every eviction (any reason). The callback runs on the eviction path and MUST be fast; slow callbacks delay every operation on the shard. Bound runtime with WithCallbackTimeout. Fires after stats/events update.
func WithInvalidationSubscriber ¶
func WithInvalidationSubscriber[K comparable](ch <-chan K) Option
WithInvalidationSubscriber attaches a channel of keys to invalidate. A goroutine drains it until close or Cache.Close; each key triggers a delete under EvictReasonRemote. Type mismatches return *ConfigError.
func WithJanitorInterval ¶
WithJanitorInterval sets how often each shard's background expiry sweep runs. A non-positive duration disables the janitor entirely; expired entries are still removed lazily on Get.
func WithLoader ¶
func WithLoader[K comparable, V any](l Loader[K, V]) Option
WithLoader attaches a Loader used by Cache.GetOrLoad and related methods.
func WithLoaderRateLimit ¶
WithLoaderRateLimit bounds Loader invocations per second across the cache. Excess callers receive ErrLoaderRateLimited (no blocking). A non-positive value disables rate limiting.
func WithLoaderTimeout ¶
WithLoaderTimeout sets the per-call deadline applied to Loader contexts when the caller does not supply one.
func WithLockFreeRead ¶
func WithLockFreeRead() Option
WithLockFreeRead enables a sync.Map-style read fast path: each shard publishes an atomic snapshot consulted without lock. Misses fall through to the locked path; entries published into a snapshot are NOT returned to the sync.Pool. Gated to S3-FIFO with no WithExpireFunc. EXPERIMENTAL.
func WithLogger ¶
WithLogger attaches a slog.Logger for internal warnings (snapshot corruption, dropped events, slow callbacks). Default slog.Default.
func WithMaxBytes ¶
WithMaxBytes sets the maximum total weight of entries across the cache. Effective only when the cache has a configured Weigher.
func WithMaxConcurrentLoads ¶
WithMaxConcurrentLoads caps in-flight Loader calls across the cache. Leaders blocked on a slot return ErrLoaderTooManyInFlight on timeout (bounded by WithLoaderTimeout). When all waiters cancel, the leader's ctx is canceled. A non-positive value disables the cap.
func WithMaxEntries ¶
WithMaxEntries sets the maximum number of entries (across all shards). Must be > 0.
func WithMaxKeySize ¶
WithMaxKeySize rejects keys whose serialized representation exceeds n bytes. Default 0 (no limit).
func WithMaxSnapshotBytes ¶
WithMaxSnapshotBytes caps snapshots accepted by Cache.Load / Cache.LoadFile. A non-positive value falls back to the default (256 MiB).
func WithMaxTagsPerEntry ¶
WithMaxTagsPerEntry caps the number of tags on any single entry. Inputs exceeding the cap return *CapacityError wrapping ErrTooManyTags without inserting. 0 disables the cap.
func WithMaxTagsTotal ¶
WithMaxTagsTotal caps the number of distinct tags the cache's inverted index may carry. A Set introducing a new tag past the cap returns *CapacityError wrapping ErrTooManyTags without inserting. 0 disables the cap.
func WithMaxValueWeight ¶
WithMaxValueWeight rejects values whose Weigher result exceeds n. Default 0 (no limit).
func WithName ¶
WithName attaches a human-readable name to the cache. Used in stats, events, and error messages.
func WithNegativeCache ¶
WithNegativeCache caches "not found" results. When a Loader returns ErrNotFound, the cache stores a tombstone with the supplied TTL and subsequent calls return ErrNotFound without re-invoking the Loader. negativeTTL <= 0 disables negative caching.
func WithOnEvict ¶
func WithOnEvict[K comparable, V any](fn func(key K, value V, reason EvictionReason)) Option
WithOnEvict registers a callback invoked when an entry is removed for any reason OTHER than TTL expiry. The callback fires AFTER the shard lock is released, so re-entering the cache from the callback is safe. Multiple removals from one locked section batch their callbacks.
func WithOnExpire ¶
func WithOnExpire[K comparable, V any](fn func(key K, value V)) Option
WithOnExpire registers a callback invoked when an entry is removed by TTL expiry or because WithExpireFunc returned true. The callback fires AFTER the shard lock is released, so re-entry is safe.
func WithOnHit ¶
func WithOnHit[K comparable, V any](fn func(key K, value V)) Option
WithOnHit registers a synchronous callback invoked on every Get hit. The callback runs under the shard lock; slow callbacks block the Get path and re-entering the cache for a same-shard key deadlocks. Use Cache.Subscribe for asynchronous notification.
func WithOnLoad ¶
WithOnLoad registers a synchronous callback invoked after every Loader completion (success or failure).
func WithOnMiss ¶
func WithOnMiss[K comparable](fn func(key K)) Option
WithOnMiss registers a synchronous callback invoked on every Get miss (including expired and negative-tombstone hits). Same caveats as WithOnHit apply.
func WithPolicy ¶
WithPolicy selects the eviction policy. The default is PolicyS3FIFO.
func WithPurgeVisitor ¶
func WithPurgeVisitor[K comparable, V any](fn func(key K, value V) error) Option
WithPurgeVisitor registers fn called for every entry during Cache.Clear and Cache.Close. Called outside any shard lock; may block, do I/O, or re-enter the cache. Errors are logged.
func WithRefreshAhead ¶
WithRefreshAhead enables refresh-ahead: when an entry's age exceeds refreshAt*TTL, the next Cache.Get triggers an asynchronous Loader call. refreshAt must be in (0, 1); values outside that interval disable refresh-ahead.
func WithSafeKeys ¶
WithSafeKeys enables a construction-time check that warns when K is a pointer or contains pointer fields. Off by default.
func WithShardedStats ¶
WithShardedStats enables per-CPU sharded counters for Hits/Misses, reducing cache-line contention on >32-core machines at the cost of slightly more memory and a fan-in pass on every Cache.Stats read. Default off.
func WithShards ¶
WithShards sets the shard count. Must be a power of two; values that are not are rounded up at construction time.
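The round-up rule can be illustrated with a small helper (a sketch of the documented behavior; `nextPow2` is an invented name, not the package's internal code):

```go
package main

import (
	"fmt"
	"math/bits"
)

// nextPow2 rounds n up to the nearest power of two, mirroring the
// construction-time shard-count normalization described above.
func nextPow2(n int) int {
	if n <= 1 {
		return 1
	}
	// bits.Len(uint(n-1)) is the position of the highest set bit of
	// n-1; shifting 1 by it yields the next power of two >= n.
	return 1 << bits.Len(uint(n-1))
}

func main() {
	for _, n := range []int{1, 3, 8, 9, 100} {
		fmt.Println(n, "->", nextPow2(n))
	}
}
```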
func WithSlidingTTL ¶
WithSlidingTTL toggles sliding-TTL behavior on the cache's default TTL. When true, every Get refreshes the entry's expiry to now+slidingTTL.
func WithSnapshotMetadata ¶
WithSnapshotMetadata embeds a key/value map in the snapshot header, exposed by InspectSnapshot. Up to 65535 keys; the map is cloned on each option call. Calling this option more than once replaces the metadata wholesale.
func WithStaleWhileRevalidate ¶
WithStaleWhileRevalidate enables stale-while-revalidate: when an entry has expired but its expireAt was less than staleFor ago, the next Cache.Get returns the stale value and triggers a background refresh. staleFor <= 0 disables SWR.
func WithStatsEnabled ¶
WithStatsEnabled toggles statistics collection. Default on; disabling removes a small amount of atomic-add overhead from the hot path.
func WithStore ¶
func WithStore[K comparable, V any](store Store[K, V]) Option
WithStore wires a user-supplied Store behind the cache as the source of truth. Reads check in-memory first then fall through to Store with promotion; writes go through to Store and roll the in-memory entry back on Store error; deletes go through and the in-memory delete completes regardless of Store error. The cache does NOT close the Store on Cache.Close. Iteration helpers (Cache.Range, Cache.Keys) walk only the in-memory portion. A Store whose K/V do not match the cache parameters is a ConfigError at New.
func WithTTLBuckets ¶
WithTTLBuckets enables a hashed timing wheel as the per-shard TTL backend, replacing the default min-heap. The wheel has slots buckets with per-tick duration ~janitorInterval/tickPerBucket (one tick per janitor interval when tickPerBucket <= 0). Useful for very large numbers of TTL'd entries where the heap's O(log n) cost dominates.
func WithTTLJitter ¶
WithTTLJitter sets the jitter window applied to TTL expirations: actual expiry is ttl + uniform(-j, +j). Default is 5% of the resolved TTL; an explicit zero disables jitter entirely.
func WithTracer ¶
WithTracer attaches a Tracer for span emission. See Tracer for the catalog of span names.
func WithWeigher ¶
WithWeigher attaches a Weigher used together with WithMaxBytes for byte-bounded caches. A weigher whose type does not match V is rejected by New as a *ConfigError.
type Policy ¶
type Policy uint8
Policy selects an eviction policy.
const (
	// PolicyS3FIFO is the default eviction policy: small/main/ghost FIFO
	// queues with a 2-bit saturating frequency counter. Achieves
	// hit-rate competitive with TinyLFU at lower implementation cost.
	PolicyS3FIFO Policy = iota

	// PolicyLRU evicts the least-recently-used entry.
	PolicyLRU

	// PolicyLFU evicts the least-frequently-used entry. Frequency is
	// tracked with a doubly-linked list of frequency buckets for O(1)
	// access and eviction.
	PolicyLFU

	// PolicyTinyLFU is W-TinyLFU: a small window LRU plus an LFU main
	// region admitted by a 4-bit count-min sketch.
	PolicyTinyLFU

	// PolicyFIFO evicts in insertion order. Cheapest policy.
	PolicyFIFO

	// PolicyARC is the Adaptive Replacement Cache. Splits the cache into
	// recency and frequency sub-caches with adaptive sizing.
	PolicyARC

	// Policy2Q is the 2Q algorithm: A1in (FIFO) + Am (LRU) + A1out
	// (ghost). Lighter than ARC, similar workload coverage.
	Policy2Q
)
Eviction policies. PolicyS3FIFO is the default.
type PolicyDetail2Q ¶
type PolicyDetail2Q struct {
A1inSize, AmSize, A1outSize int
}
PolicyDetail2Q is the snapshot payload for Policy2Q.
type PolicyDetailARC ¶
PolicyDetailARC is the snapshot payload for PolicyARC.
type PolicyDetailFIFO ¶
type PolicyDetailFIFO struct {
Size int
}
PolicyDetailFIFO is the snapshot payload for PolicyFIFO.
type PolicyDetailLFU ¶
PolicyDetailLFU is the snapshot payload for PolicyLFU; reports the number of distinct frequency buckets currently populated.
type PolicyDetailLRU ¶
type PolicyDetailLRU struct {
Size int
}
PolicyDetailLRU is the [evictionPolicy.Snapshot] payload for PolicyLRU. The list size equals the shard's entry count.
type PolicyDetailS3FIFO ¶
PolicyDetailS3FIFO is the snapshot payload for PolicyS3FIFO. Sizes are the per-region counts; ghost is a key-only queue with no associated entries.
type PolicyDetailTinyLFU ¶
PolicyDetailTinyLFU is the snapshot payload for PolicyTinyLFU.
type Prefixer ¶
Prefixer is implemented by key types that have a meaningful "prefix" concept. Cache.DeletePrefix first checks whether K is string, then falls back to this interface; for K types that satisfy neither, DeletePrefix is a silent no-op (returns 0).
type RawCodec ¶
type RawCodec struct{}
RawCodec passes byte slices through without encoding. It is only valid for caches whose value type is []byte. Marshal returns the input directly (a defensive copy is made so subsequent mutations of the caller's slice don't affect the snapshot stream); Unmarshal copies the input into the destination *[]byte.
type RealClock ¶
type RealClock struct{}
RealClock is the wall-clock implementation. Methods are safe for concurrent use. The zero value is usable.
type SetOption ¶
type SetOption func(*setConfig)
SetOption applies a per-call override to Cache.SetWithOptions. Options are applied in order; later options win for the same setting. The zero SetOption is nil and is silently ignored.
func SetExpireAt ¶
SetExpireAt sets an absolute wall-clock expiry. Takes precedence over SetTTL and the cache default. NTP backsteps may temporarily un-expire the entry; use SetTTL for monotonic-safe expiry.
func SetSliding ¶
SetSliding marks the entry's TTL as sliding: every Get refreshes the expiry to now+TTL. Pass false to opt out of the cache's configured sliding default for this single insert.
func SetTTL ¶
SetTTL overrides the per-call TTL. A TTL of 0 stores the entry with no expiry; a negative TTL produces ErrInvalidTTL when the resulting Cache.SetWithOptions runs.
func SetTags ¶
SetTags attaches tags to the entry. Calling SetTags with no arguments explicitly opts out of CacheTagger / template auto-tag derivation. SetTags("a", "b") overrides CacheTags() and any cache:"...,tag=..." template.
type SnapshotError ¶
type SnapshotError struct {
Op string // "save" or "load"
Path string
Offset int64
Message string
Err error
}
SnapshotError describes a snapshot save or load failure.
func (*SnapshotError) Error ¶
func (e *SnapshotError) Error() string
func (*SnapshotError) Is ¶
func (e *SnapshotError) Is(target error) bool
Is reports whether target is a *SnapshotError or one of the snapshot sentinels (ErrSnapshotCorrupt, ErrSnapshotIncompatible) when the corresponding error is wrapped.
func (*SnapshotError) Unwrap ¶
func (e *SnapshotError) Unwrap() error
Unwrap returns the underlying error.
type SnapshotInfo ¶
type SnapshotInfo struct {
Version uint8
Codec string
Name string
SaveTime time.Time
Count int64
SizeBytes int64
Metadata map[string]string
}
SnapshotInfo summarizes a snapshot file's header without loading it.
func InspectSnapshot ¶
func InspectSnapshot(path string) (*SnapshotInfo, error)
InspectSnapshot reads only the header of a snapshot file and returns its metadata, skipping records and CRC. Useful for compatibility checks before a full Load.
type SnapshotMarshaler ¶
SnapshotMarshaler is implemented by values with custom snapshot encoding. The cache calls SnapshotMarshal in place of the configured Codec during Cache.Save / Cache.SaveFile.
type SnapshotUnmarshaler ¶
SnapshotUnmarshaler is the receive-side counterpart of SnapshotMarshaler. The cache calls SnapshotUnmarshal during Cache.Load / Cache.LoadFile for every value whose pointer implements it. Implementations almost always have a pointer receiver.
type Span ¶
type Span interface {
// End closes the span. err annotates whether the operation
// succeeded; pass nil on success.
End(err error)
// SetAttr attaches a key/value attribute mid-span. Useful
// when an attribute (e.g., hit/miss) only becomes known
// partway through the operation.
SetAttr(key string, value any)
}
Span is one in-flight tracing scope.
type Stats ¶
type Stats struct {
// Hits is the number of Get-style operations that returned a
// fresh value.
Hits uint64
// Misses is the number of Get-style operations that returned
// nothing (missing or expired).
Misses uint64
// Inserts is the number of new keys added to the cache.
Inserts uint64
// Updates is the number of existing keys whose value was
// replaced.
Updates uint64
// Deletes is the number of explicit deletes that removed a key.
Deletes uint64
// Evictions is the total number of entries removed by the
// eviction policy or by capacity-driven sweeps.
Evictions uint64
// Expirations is the total number of entries removed by TTL
// expiry (lazy or via the janitor).
Expirations uint64
// EvictionsByReason counts the per-reason eviction breakdown,
// indexed by [EvictionReason].
EvictionsByReason [numEvictionReasons]uint64
// LoadsTotal is the number of Loader invocations.
LoadsTotal uint64
// LoadHits is the count of loader calls that resolved successfully.
LoadHits uint64
// LoadErrors is the count of loader calls that returned an error.
LoadErrors uint64
// LoadCoalesced is the count of waiters that joined an in-flight
// load (singleflight savings).
LoadCoalesced uint64
// EventsDropped is the count of subscriber events dropped because
// a subscriber's channel was full.
EventsDropped uint64
// HashCollisions is the count of distinct keys that hashed to the
// same bucket. Populated only when [WithCollisionTracking] is on.
HashCollisions uint64
// AdmissionRejects counts inserts dropped by the admission
// policy ([WithAdmissionPolicy] / [WithDoorkeeper]). Each
// rejection means a [Cache.Set] call returned without storing
// the value because the policy refused the candidate.
AdmissionRejects uint64
// LoadTimeouts counts loader invocations that exceeded
// [WithLoaderTimeout]. Each is also a [LoadErrors] increment.
LoadTimeouts uint64
// LoadRateLimited counts GetOrLoad calls rejected by
// [WithLoaderRateLimit] before the loader fired.
LoadRateLimited uint64
// LoadCachedError counts GetOrLoad calls that short-circuited
// against a tombstone produced by [WithErrorTTL].
LoadCachedError uint64
// RefreshAhead counts asynchronous loader calls triggered by
// [WithRefreshAhead].
RefreshAhead uint64
// StaleWhileRevalidate counts asynchronous loader calls
// triggered by [WithStaleWhileRevalidate].
StaleWhileRevalidate uint64
// NegativeHits counts Get-style calls that short-circuited
// against a [WithNegativeCache] tombstone.
NegativeHits uint64
// Resizes counts [Cache.Resize] invocations.
Resizes uint64
// TagInvalidations counts [Cache.InvalidateTag] /
// [Cache.InvalidateTags] calls (one per call, not per dropped
// entry; those land in EvictionsByReason[EvictReasonTag]).
TagInvalidations uint64
// TagsTracked is the live tag-index size (distinct tag count).
TagsTracked int
// TagCleanupBacklog is the depth of the per-cache async untag
// queue. A persistently-positive value indicates the eviction
// rate is outpacing the index drainer; consider raising the
// queue capacity (compile-time `tagCleanupBuffer` constant) or
// reducing tag churn.
TagCleanupBacklog int
// Compactions counts cumulative storage rebuilds across shards;
// always 0 unless [WithFlatStorage] is enabled.
Compactions uint64
// Uptime is the wall-clock duration since [New] returned.
Uptime time.Duration
// LastSnapshotAt is the wall-clock time of the most recent
// successful Save / Load. Zero when no snapshot has run.
LastSnapshotAt time.Time
// LastResetAt is the wall-clock time of the most recent
// [Cache.ResetStats]. Zero when never reset.
LastResetAt time.Time
// PolicyName is the canonical name of the configured eviction
// policy (e.g. "S3FIFO", "LRU"). Filled from
// [Policy.String].
PolicyName string
// PolicyDetail is the per-policy diagnostic struct returned by
// [evictionPolicy.Snapshot] for shard 0; concrete type depends
// on the configured policy ([PolicyDetailLRU] / [PolicyDetailS3FIFO]
// / etc.). Use a type switch or assertion when consuming.
// nil when the cache has no shards (degenerate state).
PolicyDetail any
// LoadLatency is the arithmetic mean of every Loader call that
// completed since the last [Cache.ResetStats].
LoadLatency time.Duration
// LoadLatencyP50 is the median Loader-call duration. Computed
// from a fixed-bucket log-spaced histogram with roughly half-
// bucket precision (see [internal/tdigest]).
LoadLatencyP50 time.Duration
// LoadLatencyP99 is the 99th-percentile Loader-call duration.
// Same precision caveat as P50.
LoadLatencyP99 time.Duration
// Entries is the current number of entries in the cache (live
// Len at snapshot time).
Entries int64
// Bytes is the current total weight of all entries (sum of the
// configured Weigher's results), or the entry count when no
// Weigher is configured.
Bytes int64
// Capacity is the cache's current configured bound (entries when
// MaxEntries is set, bytes otherwise).
Capacity int64
// At is the wall-clock time at which the snapshot was taken.
At time.Time
}
Stats is a point-in-time snapshot of per-cache statistics.
All counters are monotonic and are reset only by Cache.ResetStats. Each field is loaded atomically; the snapshot as a whole may straddle concurrent operations.
type Store ¶
type Store[K comparable, V any] interface {
	// Get returns the value stored for key. Missing keys produce
	// (zero V, false, nil); only I/O failures populate err.
	Get(ctx context.Context, key K) (V, bool, error)

	// Set stores value under key with the given TTL. ttl <= 0
	// means "no expiry"; the implementation decides how to record
	// that internally.
	Set(ctx context.Context, key K, value V, ttl time.Duration) error

	// Delete removes the entry for key. Returns whether an entry
	// was actually removed.
	Delete(ctx context.Context, key K) (bool, error)

	// Iterate calls fn for each currently-stored entry. Returning
	// false from fn stops iteration. Order is unspecified;
	// callers cannot rely on it.
	Iterate(ctx context.Context, fn func(key K, value V) bool) error

	// Len returns the number of entries currently in the store.
	Len(ctx context.Context) (int, error)

	// Close releases backend resources. Stores not backed by an
	// external resource may return nil. Close is idempotent.
	Close() error
}
Store is the pluggable backend interface. The package ships MemoryStore as the default; disk-backed (BoltDB-style), Redis-backed, and other adapters are intended to be third-party implementations layered behind this contract.
Every method takes a context.Context so backends can support per-call deadlines, cancellation, and tracing without retrofitting them later. Implementations of in-process stores (like MemoryStore) can choose to honor only ctx.Err() at entry.
Store implementations MUST be safe for concurrent use.
A Cache built with WithStore treats the Store as its source of truth (read-through with promotion, write-through, delete-through). Store also serves as L2 for Tiered.
type Tiered ¶
type Tiered[K comparable, V any] struct { // contains filtered or unexported fields }
Tiered combines two Cache instances into an L1/L2 hierarchy. Reads hit L1 first; L2 hits are promoted to L1. Writes are write-through. Tiered exposes the high-traffic methods directly; reach the rest via Tiered.L1 / Tiered.L2. Both caches must be non-nil.
func NewTiered ¶
func NewTiered[K comparable, V any](l1, l2 *Cache[K, V]) *Tiered[K, V]
NewTiered constructs a Tiered cache from two non-nil Cache instances. Tiered.Close closes both, so pass in a cache that is shared elsewhere only if it does not need to outlive Tiered.Close.
Example ¶
ExampleNewTiered composes a small fast L1 with a larger warm L2.
package main
import (
"fmt"
"github.com/go-rotini/memcache"
)
func main() {
l1, _ := memcache.New[string, int](memcache.WithMaxEntries(16))
l2, _ := memcache.New[string, int](memcache.WithMaxEntries(1024))
t := memcache.NewTiered(l1, l2)
defer t.Close()
_ = t.Set("k", 42)
v, _ := t.Get("k")
fmt.Println(v)
}
Output: 42
func (*Tiered[K, V]) Close ¶
Close closes BOTH tiers. The first non-nil error encountered is returned; subsequent close errors are silently swallowed (the caller cannot act on more than one shutdown failure usefully). Idempotent: subsequent Close calls return nil.
func (*Tiered[K, V]) Delete ¶
Delete removes key from BOTH tiers. Returns true when at least one tier had an entry to remove.
func (*Tiered[K, V]) Get ¶
Get returns the cached value for key. Lookup order: L1, then L2 with best-effort promotion to L1, then miss. A failed L1 promotion still returns the L2 hit.
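The lookup order can be sketched with plain maps standing in for the two tiers (illustrative only; the real promotion goes through Cache.Set and may fail, in which case the L2 hit is still returned):

```go
package main

import "fmt"

// tieredGet mirrors the documented order: L1 first, then L2 with
// best-effort promotion into L1, then miss.
func tieredGet(l1, l2 map[string]int, key string) (int, bool) {
	if v, ok := l1[key]; ok {
		return v, true // fast-tier hit
	}
	if v, ok := l2[key]; ok {
		l1[key] = v // promote so the next Get is an L1 hit
		return v, true
	}
	return 0, false // miss in both tiers
}

func main() {
	l1 := map[string]int{}
	l2 := map[string]int{"k": 42}
	v, ok := tieredGet(l1, l2, "k")
	_, promoted := l1["k"]
	fmt.Println(v, ok, promoted)
}
```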
func (*Tiered[K, V]) GetCtx ¶
GetCtx is the context-aware variant of Tiered.Get. The ctx is threaded through to each tier's Cache.GetCtx; either tier's WithStore read-through honors cancellation. The boolean reports whether either tier had the entry; the error surfaces any underlying ctx or Store failure.
func (*Tiered[K, V]) InvalidateTag ¶
InvalidateTag removes every entry tagged with t from BOTH tiers. Returns the total number of entries removed across both tiers. Negative-cache tombstones and invalidated read-snapshot entries are not counted by either tier's InvalidateTag.
func (*Tiered[K, V]) InvalidateTags ¶
InvalidateTags removes every entry tagged with at least one of tags from BOTH tiers. Returns the total removed across both.
func (*Tiered[K, V]) L1 ¶
L1 returns the underlying L1 Cache. Useful for callers that need to apply L1-specific operations (e.g., InvalidateTag) that the Tiered wrapper does not expose directly.
func (*Tiered[K, V]) Len ¶
Len returns the number of entries in L2. Because writes are write-through, L1 ⊆ L2 in steady state, so reporting L2's Len is the most accurate "how many distinct keys does this tiered cache hold" measurement. Use Tiered.L1/Tiered.L2 and their own Len() if you need per-tier counts.
func (*Tiered[K, V]) Set ¶
Set writes value to BOTH tiers (write-through). Returns the first error encountered; partial failure is possible but uncommon (e.g., L1 succeeds but L2 hits a weight limit). In that case L1 may carry a fresher value than L2 until the next Set rebalances.
func (*Tiered[K, V]) SetWithOptions ¶
SetWithOptions writes value to both tiers with the same options. For per-tier overrides, call into Tiered.L1 / Tiered.L2 directly.
func (*Tiered[K, V]) SetWithTTL ¶
SetWithTTL writes value to BOTH tiers with the given TTL. Returns the first error encountered.
func (*Tiered[K, V]) Stats ¶
func (t *Tiered[K, V]) Stats() TieredStats
Stats returns the combined observability snapshot.
type TieredStats ¶
type TieredStats struct {
// L1 is the L1 cache's full Stats snapshot.
L1 Stats
// L2 is the L2 cache's full Stats snapshot.
L2 Stats
// L1Hits is the count of Tiered.Get calls served from L1.
L1Hits uint64
// L2Hits is the count of Tiered.Get calls served from L2
// (with subsequent promotion to L1).
L2Hits uint64
// Misses is the count of Tiered.Get calls that found neither
// tier.
Misses uint64
// Promotions is L2Hits but recorded after the L1 promotion
// actually wrote successfully (failed Sets do not bump it).
Promotions uint64
}
TieredStats reports observability for a Tiered cache split between its two tiers. L1Hits are hits served from the fast (in-memory) tier; L2Hits are served from the slower-but-larger tier and trigger a promotion. Misses are queries that found neither tier.
type Timer ¶
type Timer interface {
// Stop attempts to cancel the timer. It returns true if the timer
// was canceled before firing.
Stop() bool
// Reset reschedules the timer to fire after d. Returns true if the
// timer had been active.
Reset(d time.Duration) bool
}
Timer is the abstraction over time.Timer-like values returned by Clock.AfterFunc.
type Tracer ¶
type Tracer interface {
// Start begins a span for op. The returned context carries
// span data so child spans can chain via ctx; the returned
// Span MUST be closed via End when the operation finishes.
Start(ctx context.Context, op string, attrs ...Attr) (context.Context, Span)
}
Tracer emits observability spans. Spans: memcache.load (Loader runs from Cache.GetOrLoad), memcache.snapshot.save, memcache.snapshot.load. Get/Set/Delete do NOT emit spans (per-call cost would dominate). Implementations must be thread-safe.
type Weigher ¶
Weigher returns the weight of a value, used with WithMaxBytes to bound the cache by total weight rather than entry count. Implementations must be deterministic; weights below 1 are clamped to 1.
func BytesWeigher ¶
BytesWeigher returns a Weigher that reports the byte length of a byte slice.
func StringWeigher ¶
StringWeigher returns a Weigher that reports the byte length of a string.
func UnitWeigher ¶
UnitWeigher returns a Weigher that reports 1 for every value.
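The clamping rule can be sketched as follows (`bytesWeigher` is a hypothetical helper mirroring BytesWeigher's documented behavior plus the below-1 clamp; not the package's code):

```go
package main

import "fmt"

// bytesWeigher reports a value's byte length, with weights below 1
// clamped to 1 so that empty values still occupy capacity.
func bytesWeigher(v []byte) int64 {
	if len(v) < 1 {
		return 1 // documented clamp: weights below 1 count as 1
	}
	return int64(len(v))
}

func main() {
	fmt.Println(bytesWeigher([]byte("hello")), bytesWeigher(nil))
}
```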
Source Files
¶
- admission.go
- arc.go
- asyncwrites.go
- cache.go
- cache_ctx.go
- cache_store.go
- cacheable.go
- cachetag.go
- clock.go
- codec.go
- codec_compress.go
- codec_encrypt.go
- compute.go
- diagnose.go
- doc.go
- entry.go
- errors.go
- events.go
- expiry.go
- expvar.go
- fifo.go
- flatstore.go
- invalidation.go
- lfu.go
- loader.go
- lru.go
- options.go
- options_safety.go
- policy.go
- ratelimit.go
- readmap.go
- s3fifo.go
- setoptions.go
- shard.go
- sharded_counter.go
- shardstore.go
- snapshot.go
- stats.go
- store.go
- tagcleanup.go
- tags.go
- tiered.go
- tinylfu.go
- tracer.go
- ttlbackend.go
- twoq.go
- types.go
- view.go
- weigher.go
- wheel_ttl.go
Directories
¶
| Path | Synopsis |
|---|---|
| internal | |
| internal/sketch | Package sketch hosts probabilistic data structures (count-min sketch, bloom filter) and SipHash-2-4 used by the cache. |
| internal/tdigest | Package tdigest hosts the latency-quantile estimator used by [memcache.Stats.LoadLatencyP50] / P99 / LoadLatency. |
| internal/wheel | Package wheel implements a hashed timing wheel with O(1) amortized add/cancel/expire. |