mcache

Published: Jan 28, 2026 License: MIT Imports: 14 Imported by: 7

README

go-mcache


High-performance, thread-safe in-memory cache for Go with generics, TinyLFU eviction, lock-free data structures, and Redis-style iterators.

Features

  • Generic API — Type-safe Cache[K, V] with any comparable key type and any value type
  • TinyLFU Eviction — Smart admission policy for better hit ratios (inspired by Ristretto)
  • Lock-Free Structures — Lock-free Count-Min Sketch, Bloom filter, and TinyLFU for high concurrency
  • SIMD Optimizations — AVX2/SSE optimized operations on amd64
  • Cost-Based Eviction — Evict by memory cost, not just entry count
  • Redis-Style Iterators — Scan, ScanPrefix, ScanMatch with cursor-based pagination
  • Glob Pattern Matching — Find keys with *, ?, [abc], [a-z] patterns
  • Prefix Search — O(k) prefix lookups via radix tree (for string keys)
  • Optimized Batch Operations — GetBatch, GetBatchOptimized with prefetching
  • Callbacks — OnEvict, OnExpire, OnReject hooks
  • Metrics — Hit ratio, evictions, expirations tracking
  • GC-Free Storage — Optional mode for reduced GC pressure (BigCache-style)
  • Zero-Allocation Reads — Optimized read path
  • Backward Compatible — Legacy CacheDriver API still works

Installation

go get github.com/OrlovEvgeny/go-mcache

Requires Go 1.21+

Quick Start

package main

import (
    "fmt"
    "time"

    "github.com/OrlovEvgeny/go-mcache"
)

func main() {
    // Create a cache with string keys and int values
    cache := mcache.NewCache[string, int]()
    defer cache.Close()

    // Set with TTL
    cache.Set("counter", 42, 5*time.Minute)

    // Get (type-safe, no casting needed)
    if val, ok := cache.Get("counter"); ok {
        fmt.Printf("counter = %d\n", val)
    }

    // Delete
    cache.Delete("counter")
}
Legacy API
cache := mcache.New()
defer cache.Close()

cache.Set("key", "value", 5*time.Minute)

if val, ok := cache.Get("key"); ok {
    fmt.Println(val.(string))
}

Configuration

cache := mcache.NewCache[string, []byte](
    // Size limits
    mcache.WithMaxEntries[string, []byte](100000),      // Max 100k entries
    mcache.WithMaxCost[string, []byte](1<<30),          // Max 1GB total cost

    // Cost function (for []byte, use length)
    mcache.WithCostFunc[string, []byte](func(v []byte) int64 {
        return int64(len(v))
    }),

    // Callbacks
    mcache.WithOnEvict[string, []byte](func(key string, val []byte, cost int64) {
        fmt.Printf("evicted: %s (cost=%d)\n", key, cost)
    }),
    mcache.WithOnExpire[string, []byte](func(key string, val []byte) {
        fmt.Printf("expired: %s\n", key)
    }),

    // Performance tuning
    mcache.WithShardCount[string, []byte](2048),        // More shards = less contention
    mcache.WithNumCounters[string, []byte](1000000),    // TinyLFU counters (10x entries)
    mcache.WithBufferItems[string, []byte](64),         // Async write buffer

    // Default TTL
    mcache.WithDefaultTTL[string, []byte](time.Hour),
)
All Options

Option              Description                                     Default
WithMaxEntries      Maximum number of entries                       unlimited
WithMaxCost         Maximum total cost                              unlimited
WithNumCounters     TinyLFU counters (recommend 10x max entries)    auto
WithShardCount      Number of shards (power of 2)                   1024
WithBufferItems     Async write buffer size (0 = sync)              0
WithOnEvict         Called when an entry is evicted                 nil
WithOnExpire        Called when an entry expires                    nil
WithOnReject        Called when an entry is rejected by TinyLFU     nil
WithCostFunc        Custom cost calculator                          cost=1
WithKeyHasher       Custom key hash function                        auto
WithDefaultTTL      Default TTL for entries                         0 (no expiry)
WithGCFreeStorage   Use GC-free storage                             false

API Reference

Basic Operations
// Set with TTL (0 = no expiration)
cache.Set(key K, value V, ttl time.Duration) bool

// Set with explicit cost
cache.SetWithCost(key K, value V, cost int64, ttl time.Duration) bool

// Get value
cache.Get(key K) (V, bool)

// Check existence
cache.Has(key K) bool

// Delete
cache.Delete(key K) bool

// Count entries
cache.Len() int

// Clear all
cache.Clear()

// Shutdown
cache.Close()
Batch Operations
// Get multiple keys (simple)
results := cache.GetMany([]string{"a", "b", "c"})
// returns map[string]V with found entries

// Get multiple keys (optimized with prefetching)
batch := cache.GetBatch(keys)
for i, key := range batch.Keys {
    if batch.Found[i] {
        fmt.Printf("%s = %v\n", key, batch.Values[i])
    }
}

// Get multiple keys (shard-order optimized for best cache locality)
batch := cache.GetBatchOptimized(keys)

// Get batch as map
results := cache.GetBatchToMap(keys)

// Set multiple items
items := []mcache.Item[string, int]{
    {Key: "a", Value: 1, TTL: time.Minute},
    {Key: "b", Value: 2, TTL: time.Minute},
    {Key: "c", Value: 3, Cost: 100, TTL: time.Hour},
}
count := cache.SetMany(items)

// Delete multiple keys
deleted := cache.DeleteMany([]string{"a", "b"})
Iterators (Redis-style)
// Scan all entries
iter := cache.Scan(0, 100)  // cursor=0, count=100
for iter.Next() {
    fmt.Printf("%v = %v\n", iter.Key(), iter.Value())
}
if err := iter.Err(); err != nil {
    log.Fatal(err)
}

// Resume from cursor
nextCursor := iter.Cursor()
iter2 := cache.Scan(nextCursor, 100)
Prefix Search (string keys only)
cache := mcache.NewCache[string, int]()

cache.Set("user:1:name", 1, 0)
cache.Set("user:1:email", 2, 0)
cache.Set("user:2:name", 3, 0)
cache.Set("order:1", 4, 0)

// Find all user:* keys
iter := cache.ScanPrefix("user:", 0, 100)
for iter.Next() {
    fmt.Println(iter.Key())
}
// Output:
// user:1:name
// user:1:email
// user:2:name
Pattern Matching (string keys only)
// Find keys matching pattern
iter := cache.ScanMatch("user:*:name", 0, 100)
for iter.Next() {
    fmt.Println(iter.Key())
}
// Output:
// user:1:name
// user:2:name

Supported patterns:

  • * — matches any characters
  • ? — matches single character
  • ** — matches any characters including path separator
  • [abc] — matches any character in set
  • [a-z] — matches any character in range
  • [^abc] — matches any character NOT in set
Iterator Helpers
// Collect all remaining entries
items := iter.All()  // []Item[K, V]

// Collect keys only
keys := iter.Keys()  // []K

// Collect values only
values := iter.Values()  // []V

// Count remaining
count := iter.Count()

// ForEach with early exit
iter.ForEach(func(key K, value V) bool {
    fmt.Println(key, value)
    return true  // continue, false to stop
})
Metrics
metrics := cache.Metrics()

fmt.Printf("Hits: %d\n", metrics.Hits)
fmt.Printf("Misses: %d\n", metrics.Misses)
fmt.Printf("Hit Ratio: %.2f%%\n", metrics.HitRatio*100)
fmt.Printf("Evictions: %d\n", metrics.Evictions)
fmt.Printf("Expirations: %d\n", metrics.Expirations)
fmt.Printf("Rejections: %d\n", metrics.Rejections)  // TinyLFU rejections
Async Writes
// Enable write buffering for higher throughput
cache := mcache.NewCache[string, int](
    mcache.WithBufferItems[string, int](64),
)

// Writes are batched asynchronously
cache.Set("key", 42, 0)

// Wait for pending writes to complete
cache.Wait()

// Now read is guaranteed to see the value
val, _ := cache.Get("key")

Architecture

┌──────────────────────────────────────────────────────────────────────────┐
│                            Cache[K, V]                                    │
├──────────────────────────────────────────────────────────────────────────┤
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────────────┐  │
│  │  ShardedStore   │  │     Policy      │  │      Radix Tree         │  │
│  │  (1024 shards)  │  │ (TinyLFU+SLFU)  │  │   (prefix search)       │  │
│  │  ┌───┐ ┌───┐    │  │  ┌──────────┐   │  │                         │  │
│  │  │ 0 │ │ 1 │... │  │  │ CM Sketch│   │  │  Only for string keys   │  │
│  │  └───┘ └───┘    │  │  │(LockFree)│   │  │                         │  │
│  │  map[K]*Entry   │  │  │ Bloom    │   │  │                         │  │
│  │  + Prefetching  │  │  │(LockFree)│   │  │                         │  │
│  │  + Cache Padding│  │  │SampledLFU│   │  │                         │  │
│  └─────────────────┘  │  └──────────┘   │  └─────────────────────────┘  │
│                       └─────────────────┘                                │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────────────┐  │
│  │  Write Buffer   │  │    Metrics      │  │    Expiration Worker    │  │
│  │ (ring buffer)   │  │  (atomic ops)   │  │  (background goroutine) │  │
│  └─────────────────┘  └─────────────────┘  └─────────────────────────┘  │
└──────────────────────────────────────────────────────────────────────────┘
TinyLFU Admission Policy

When the cache is full and a new item arrives:

  1. Doorkeeper — Lock-free Bloom filter checks if item was seen before
  2. Count-Min Sketch — Lock-free frequency estimation (8-bit atomic counters)
  3. Sampled LFU — Samples 5 random victims, picks lowest frequency
  4. Admission — The new item is admitted only if its estimated frequency exceeds the victim's

This prevents "one-hit wonders" from evicting frequently accessed items.
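The steps above can be sketched in a few lines. This is an illustrative simplification, not the library's code: a plain map stands in for the lock-free Count-Min sketch and Bloom doorkeeper, and the victim sample is drawn with rand.Perm:

```go
package main

import (
	"fmt"
	"math/rand"
)

// freq stands in for the Count-Min sketch: estimated access counts per key.
var freq = map[string]int{}

// access records a sighting (doorkeeper + sketch increment, collapsed).
func access(key string) { freq[key]++ }

// admit samples `samples` distinct resident keys, finds the least frequent
// victim, and admits the candidate only if its frequency exceeds the victim's.
func admit(candidate string, residents []string, samples int) bool {
	best := int(^uint(0) >> 1) // max int
	for _, i := range rand.Perm(len(residents))[:samples] {
		if freq[residents[i]] < best {
			best = freq[residents[i]]
		}
	}
	return freq[candidate] > best
}

func main() {
	residents := []string{"hot1", "hot2", "cold"}
	for i := 0; i < 100; i++ {
		access("hot1")
		access("hot2")
	}
	access("cold")
	access("one-hit-wonder") // seen once
	for i := 0; i < 5; i++ {
		access("rising-star") // seen repeatedly
	}
	// Sampling all residents here makes the demo deterministic.
	fmt.Println("one-hit-wonder admitted:", admit("one-hit-wonder", residents, 3))
	fmt.Println("rising-star admitted:   ", admit("rising-star", residents, 3))
}
```

The one-hit wonder loses to every possible victim, while the repeatedly seen key beats the coldest resident, which is exactly the "one-hit wonders can't evict hot items" property.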

Lock-Free Data Structures

The cache uses lock-free implementations for the hot path:

  • Lock-Free Count-Min Sketch — 8-bit atomic counters packed in atomic.Uint32, CAS-based increments
  • Lock-Free Bloom Filter — Atomic bit operations with atomic.Uint64
  • Lock-Free TinyLFU — Completely lock-free Access() path for read operations

This eliminates contention on the admission policy during concurrent reads.
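The packed-counter technique can be shown in miniature. This is a hedged sketch of the idea (four 8-bit saturating counters in one atomic.Uint32, incremented via a CAS loop), not the library's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// counters packs four 8-bit saturating counters into one atomic.Uint32.
type counters struct{ w atomic.Uint32 }

// inc increments counter i (0..3) with a CAS loop, saturating at 255.
func (c *counters) inc(i uint) {
	shift := i * 8
	for {
		old := c.w.Load()
		if (old>>shift)&0xFF == 0xFF {
			return // saturated: stop, so the byte never carries out
		}
		if c.w.CompareAndSwap(old, old+(1<<shift)) {
			return
		}
	}
}

// get reads counter i without any lock.
func (c *counters) get(i uint) uint32 { return (c.w.Load() >> (i * 8)) & 0xFF }

func main() {
	var c counters
	var wg sync.WaitGroup
	for g := 0; g < 8; g++ { // 8 goroutines, 25 increments each
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := 0; n < 25; n++ {
				c.inc(1)
			}
		}()
	}
	wg.Wait()
	fmt.Println(c.get(1)) // 200: no increments lost, no locks taken
}
```

The CAS loop retries on contention instead of blocking, and the saturation check guarantees an increment can never carry into the neighboring byte.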

Advanced Data Structures

The library includes several advanced data structures:

  • Blocked Bloom Filter — Cache-line sized blocks (512 bits), at most one cache miss per lookup
  • Cuckoo Filter — Supports deletion, only 2 memory accesses per lookup
  • Swiss Table — SIMD-friendly hash table with 16-byte control groups
Storage Options

Standard (default):

  • map[K]*Entry[K,V] per shard
  • Works with any value type
  • Values tracked by GC
  • Cache-line padded shards to prevent false sharing

GC-Free (opt-in):

  • map[uint64]uint32 + byte slices
  • Values serialized, invisible to GC
  • Best for []byte values and large caches
  • Reduces GC pause times
cache := mcache.NewCache[string, []byte](
    mcache.WithGCFreeStorage[string, []byte](),
)

Performance

Benchmarks on Apple M4 Pro (arm64):

Generic API (with TinyLFU)
BenchmarkCacheGet-12                          4915591    445.9 ns/op     0 B/op    0 allocs/op
BenchmarkCacheSet-12                          2185386   1061   ns/op    49 B/op    1 allocs/op
BenchmarkCacheSetWithTTL-12                   2511756    973.9 ns/op    49 B/op    1 allocs/op
BenchmarkCacheMixed-12                        4878026    473.3 ns/op     9 B/op    0 allocs/op
BenchmarkCacheHas-12                        137653788     17.48 ns/op    0 B/op    0 allocs/op
BenchmarkCacheOperations/Read-12             40523341     57.98 ns/op    5 B/op    0 allocs/op
BenchmarkCacheOperations/ParallelReadWrite-12 11975133   229.5 ns/op    94 B/op    3 allocs/op
BenchmarkCacheOperationsPreallocated/Read-12  75477038    31.97 ns/op    0 B/op    0 allocs/op
BenchmarkCacheZipf-12                          683647   3624   ns/op   87.40 hit%  13 B/op  0 allocs/op
Lock-Free Structures Performance
BenchmarkCMSketchLockFreeIncrement-12         58410620   20.76 ns/op    0 B/op    0 allocs/op
BenchmarkCMSketchLockFreeIncrementParallel-12 43926879   24.45 ns/op    0 B/op    0 allocs/op
BenchmarkBloomFilterLockFreeAdd-12           123533065    9.52 ns/op    0 B/op    0 allocs/op
BenchmarkBloomFilterLockFreeAddParallel-12   889117638    1.25 ns/op    0 B/op    0 allocs/op
BenchmarkTinyLFULockFreeIncrement-12          32159007   38.91 ns/op    0 B/op    0 allocs/op
BenchmarkPolicyLockFreeAccess-12              35165865   36.34 ns/op    0 B/op    0 allocs/op
Comparison: Lock-Free vs Original

Operation             Original    Lock-Free   Improvement
CM Sketch Increment   38.23 ns    20.76 ns    1.84x faster
Bloom Filter Add      13.39 ns     9.52 ns    1.41x faster
TinyLFU Increment     46.73 ns    38.91 ns    1.20x faster
Advanced Structures
BenchmarkBlockedBloomFilterContains-12  272477785   4.05 ns/op    0 B/op    0 allocs/op
BenchmarkSwissTableGet-12                47139546  26.22 ns/op    0 B/op    0 allocs/op
Legacy API (no TinyLFU)
BenchmarkLegacyGet-12       57359919     21 ns/op      0 B/op    0 allocs/op
BenchmarkLegacySet-12        6964370    222 ns/op     72 B/op    1 allocs/op
BenchmarkLegacyMixed-12     23142158     75 ns/op     14 B/op    0 allocs/op

The generic API is slower due to TinyLFU policy overhead, but provides:

  • Better hit ratio on skewed workloads
  • Cost-based eviction
  • Iterator support
  • Prefix search
  • Metrics

Run benchmarks:

go test -bench=. -benchmem

Examples

Session Cache
type Session struct {
    UserID    int64
    Token     string
    ExpiresAt time.Time
}

cache := mcache.NewCache[string, *Session](
    mcache.WithMaxEntries[string, *Session](100000),
    mcache.WithDefaultTTL[string, *Session](24*time.Hour),
    mcache.WithOnExpire[string, *Session](func(key string, s *Session) {
        log.Printf("session expired: user=%d", s.UserID)
    }),
)

// Store session
cache.Set(sessionToken, &Session{
    UserID: 123,
    Token:  sessionToken,
}, 0)  // Uses default TTL

// Lookup
if session, ok := cache.Get(sessionToken); ok {
    fmt.Printf("User: %d\n", session.UserID)
}
Rate Limiter
cache := mcache.NewCache[string, int](
    mcache.WithDefaultTTL[string, int](time.Minute),
)

func checkRateLimit(ip string, limit int) bool {
    count, _ := cache.Get(ip)
    if count >= limit {
        return false
    }
    cache.Set(ip, count+1, 0)
    return true
}
LRU-style Cache with Cost
cache := mcache.NewCache[string, []byte](
    mcache.WithMaxCost[string, []byte](100<<20),  // 100MB
    mcache.WithCostFunc[string, []byte](func(v []byte) int64 {
        return int64(len(v))
    }),
)

// Large values will evict multiple small ones
cache.Set("large", make([]byte, 10<<20), 0)  // 10MB
High-Throughput Batch Operations
cache := mcache.NewCache[string, int]()

// Prefill cache
for i := 0; i < 10000; i++ {
    cache.Set(fmt.Sprintf("key:%d", i), i, 0)
}

// Batch read with prefetching (30-50% faster than individual reads)
keys := []string{"key:1", "key:2", "key:3", "key:100", "key:500"}
batch := cache.GetBatchOptimized(keys)

for i, key := range batch.Keys {
    if batch.Found[i] {
        fmt.Printf("%s = %d\n", key, batch.Values[i])
    }
}
User Data with Prefix Queries
cache := mcache.NewCache[string, string]()

// Store user data with namespaced keys
cache.Set("user:1:name", "Alice", 0)
cache.Set("user:1:email", "alice@example.com", 0)
cache.Set("user:2:name", "Bob", 0)
cache.Set("user:2:email", "bob@example.com", 0)

// Get all data for user 1
iter := cache.ScanPrefix("user:1:", 0, 100)
for iter.Next() {
    fmt.Printf("%s = %s\n", iter.Key(), iter.Value())
}

// Find all email keys
iter = cache.ScanMatch("user:*:email", 0, 100)
emails := iter.Values()
Different Key Types
// Integer keys
intCache := mcache.NewCache[int, string]()
intCache.Set(42, "answer", 0)

// Struct keys (must be comparable)
type CacheKey struct {
    Namespace string
    ID        int64
}
structCache := mcache.NewCache[CacheKey, []byte]()
structCache.Set(CacheKey{"users", 123}, data, time.Hour)

Migration from v1

The legacy API (mcache.New()) remains fully functional. To migrate to the generic API:

// Before (v1)
cache := mcache.New()
cache.Set("key", myValue, time.Hour)
val, ok := cache.Get("key")
if ok {
    typed := val.(MyType)  // Type assertion needed
}

// After (v2)
cache := mcache.NewCache[string, MyType]()
cache.Set("key", myValue, time.Hour)
val, ok := cache.Get("key")  // val is already MyType

Thread Safety

All operations are thread-safe. The cache uses:

  • Sharded storage with per-shard sync.RWMutex
  • Lock-free TinyLFU admission policy for read path
  • Atomic operations for metrics
  • Lock-free ring buffer for async writes
  • Cache-line padded structures to prevent false sharing

Internal Packages

The library includes several optimized internal packages:

Package             Description
internal/policy     TinyLFU, Count-Min Sketch, Bloom filters (lock-free variants)
internal/store      Sharded storage with prefetching and batch operations
internal/hashtable  Swiss table implementation
internal/hash       FNV-1a hashing with batch operations
internal/prefetch   CPU prefetch intrinsics (amd64)
internal/alloc      Aligned memory allocation for SIMD
internal/radix      Radix tree for prefix search
internal/glob       Glob pattern matching

License

MIT

Documentation

Index

Constants

View Source
const TTL_FOREVER = 0

TTL_FOREVER represents an infinite TTL (no expiration).

Variables

This section is empty.

Functions

This section is empty.

Types

type BatchResult

type BatchResult[K comparable, V any] struct {
	Keys   []K
	Values []V
	Found  []bool
	Hashes []uint64
}

BatchResult holds the result of a batch get operation.

type Cache

type Cache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

Cache is a generic, high-performance in-memory cache.

func NewCache

func NewCache[K comparable, V any](opts ...Option[K, V]) *Cache[K, V]

NewCache creates a new generic Cache with the given options.

func (*Cache[K, V]) Clear

func (c *Cache[K, V]) Clear()

Clear removes all entries from the cache.

func (*Cache[K, V]) Close

func (c *Cache[K, V]) Close()

Close stops the cache and releases resources.

func (*Cache[K, V]) Delete

func (c *Cache[K, V]) Delete(key K) bool

Delete removes a value from the cache. Returns true if the value was found and deleted.

func (*Cache[K, V]) DeleteMany

func (c *Cache[K, V]) DeleteMany(keys []K) int

DeleteMany removes multiple keys from the cache. Returns the number of keys successfully deleted.

func (*Cache[K, V]) Get

func (c *Cache[K, V]) Get(key K) (V, bool)

Get retrieves a value from the cache. Returns the value and true if found, zero value and false otherwise.

func (*Cache[K, V]) GetBatch

func (c *Cache[K, V]) GetBatch(keys []K) *BatchResult[K, V]

GetBatch retrieves multiple values from the cache with optimized prefetching. This is more efficient than calling Get in a loop, especially for large batches.

func (*Cache[K, V]) GetBatchOptimized

func (c *Cache[K, V]) GetBatchOptimized(keys []K) *BatchResult[K, V]

GetBatchOptimized retrieves multiple values with shard-order optimization. Keys are processed in shard order for better cache locality, but results are returned in the original key order.

func (*Cache[K, V]) GetBatchToMap

func (c *Cache[K, V]) GetBatchToMap(keys []K) map[K]V

GetBatchToMap retrieves multiple values and returns them as a map. More convenient than GetBatch when you need map access.

func (*Cache[K, V]) GetMany

func (c *Cache[K, V]) GetMany(keys []K) map[K]V

GetMany retrieves multiple values from the cache. Returns a map of found keys to their values.

func (*Cache[K, V]) Has

func (c *Cache[K, V]) Has(key K) bool

Has checks if a key exists in the cache.

func (*Cache[K, V]) Len

func (c *Cache[K, V]) Len() int

Len returns the number of entries in the cache.

func (*Cache[K, V]) Metrics

func (c *Cache[K, V]) Metrics() MetricsSnapshot

Metrics returns the cache metrics.

func (*Cache[K, V]) Scan

func (c *Cache[K, V]) Scan(cursor uint64, count int) *Iterator[K, V]

Scan returns an iterator over cache entries. cursor is the starting position (0 for beginning). count is the maximum number of entries to return per iteration.

func (*Cache[K, V]) ScanMatch

func (c *Cache[K, V]) ScanMatch(pattern string, cursor uint64, count int) *Iterator[K, V]

ScanMatch returns an iterator over entries with keys matching the glob pattern. Only works when K is string. Supported patterns: * (any chars), ? (single char), [abc] (char class).

func (*Cache[K, V]) ScanPrefix

func (c *Cache[K, V]) ScanPrefix(prefix string, cursor uint64, count int) *Iterator[K, V]

ScanPrefix returns an iterator over entries with keys matching the prefix. Only works when K is string.

func (*Cache[K, V]) Set

func (c *Cache[K, V]) Set(key K, value V, ttl time.Duration) bool

Set stores a value in the cache with the given TTL. A TTL of 0 means the entry never expires. Returns true if the value was stored, false if rejected by admission policy.

func (*Cache[K, V]) SetMany

func (c *Cache[K, V]) SetMany(items []Item[K, V]) int

SetMany stores multiple items in the cache. Returns the number of items successfully stored.

func (*Cache[K, V]) SetWithCost

func (c *Cache[K, V]) SetWithCost(key K, value V, cost int64, ttl time.Duration) bool

SetWithCost stores a value with a specified cost. Cost is used for eviction decisions when MaxCost is set.

func (*Cache[K, V]) Wait

func (c *Cache[K, V]) Wait()

Wait waits for all pending write operations to complete.

type CacheDriver

type CacheDriver struct {
	// contains filtered or unexported fields
}

CacheDriver manages cache operations with storage and expiration.

func New

func New() *CacheDriver

New creates and initializes a new CacheDriver.

func StartInstance

func StartInstance() *CacheDriver

StartInstance is deprecated; use New instead.

func (*CacheDriver) Close

func (mc *CacheDriver) Close() map[string]interface{}

Close stops the GC and returns all non-expired entries.

func (*CacheDriver) GCBufferQueue

func (mc *CacheDriver) GCBufferQueue() int

GCBufferQueue returns the count of pending expirations in the GC.

func (*CacheDriver) Get

func (mc *CacheDriver) Get(key string) (interface{}, bool)

Get retrieves a value by key. Returns (value, true) if found and not expired.

func (*CacheDriver) GetPointer

func (mc *CacheDriver) GetPointer(key string) (interface{}, bool)

GetPointer is deprecated; use Get instead.

func (*CacheDriver) Len

func (mc *CacheDriver) Len() int

Len returns the number of current cache entries.

func (*CacheDriver) Remove

func (mc *CacheDriver) Remove(key string)

Remove deletes a key from the cache and expiration tracking.

func (*CacheDriver) Set

func (mc *CacheDriver) Set(key string, value interface{}, ttl time.Duration) error

Set inserts or updates a key with the given value and TTL.

func (*CacheDriver) SetPointer

func (mc *CacheDriver) SetPointer(key string, value interface{}, ttl time.Duration) error

SetPointer is deprecated; use Set instead.

func (*CacheDriver) Truncate

func (mc *CacheDriver) Truncate()

Truncate clears all cache entries and pending expirations.

type Item

type Item[K comparable, V any] struct {
	Key   K
	Value V
	Cost  int64         // 0 = auto-calculate (cost of 1)
	TTL   time.Duration // 0 = no expiration
}

Item represents an item to be stored in the cache.

type Iterator

type Iterator[K comparable, V any] struct {
	// contains filtered or unexported fields
}

Iterator provides a Redis-style iterator over cache entries.

func (*Iterator[K, V]) All

func (it *Iterator[K, V]) All() []Item[K, V]

All collects all remaining entries and returns them. Warning: This may be memory-intensive for large result sets.

func (*Iterator[K, V]) Count

func (it *Iterator[K, V]) Count() int

Count counts the remaining entries without collecting them.

func (*Iterator[K, V]) Cursor

func (it *Iterator[K, V]) Cursor() uint64

Cursor returns the current cursor position. This can be used to resume iteration later.

func (*Iterator[K, V]) Entry

func (it *Iterator[K, V]) Entry() (K, V)

Entry returns the current entry (key, value pair).

func (*Iterator[K, V]) Err

func (it *Iterator[K, V]) Err() error

Err returns any error that occurred during iteration.

func (*Iterator[K, V]) ForEach

func (it *Iterator[K, V]) ForEach(fn func(key K, value V) bool)

ForEach calls fn for each remaining entry. If fn returns false, iteration stops.

func (*Iterator[K, V]) Key

func (it *Iterator[K, V]) Key() K

Key returns the current entry's key.

func (*Iterator[K, V]) Keys

func (it *Iterator[K, V]) Keys() []K

Keys collects all remaining keys and returns them.

func (*Iterator[K, V]) Next

func (it *Iterator[K, V]) Next() bool

Next advances the iterator to the next entry. Returns true if there is an entry available, false when exhausted.

func (*Iterator[K, V]) Value

func (it *Iterator[K, V]) Value() V

Value returns the current entry's value.

func (*Iterator[K, V]) Values

func (it *Iterator[K, V]) Values() []V

Values collects all remaining values and returns them.

type Metrics

type Metrics struct {
	// contains filtered or unexported fields
}

Metrics holds cache statistics.

func (*Metrics) Reset

func (m *Metrics) Reset()

Reset resets all metrics to zero.

func (*Metrics) Snapshot

func (m *Metrics) Snapshot() MetricsSnapshot

Snapshot returns a point-in-time snapshot of the metrics.

type MetricsSnapshot

type MetricsSnapshot struct {
	Hits        int64   // Total cache hits
	Misses      int64   // Total cache misses
	Sets        int64   // Total successful sets
	Deletes     int64   // Total successful deletes
	Evictions   int64   // Total evictions due to size/cost limit
	Expirations int64   // Total expirations due to TTL
	Rejections  int64   // Total rejections by admission policy
	CostAdded   int64   // Total cost added over time
	CostEvicted int64   // Total cost evicted over time
	HitRatio    float64 // Hit ratio (hits / (hits + misses))
}

MetricsSnapshot is a point-in-time snapshot of cache metrics.

type Option

type Option[K comparable, V any] func(*config[K, V])

Option is a function that configures a Cache.

func WithBufferItems

func WithBufferItems[K comparable, V any](n int64) Option[K, V]

WithBufferItems sets the write buffer size. Writes are batched in this buffer before being applied to the cache. Default is 64.

func WithCostFunc

func WithCostFunc[K comparable, V any](fn func(V) int64) Option[K, V]

WithCostFunc sets a custom function to calculate the cost of a value. If not set, each entry has a cost of 1.

func WithDefaultTTL

func WithDefaultTTL[K comparable, V any](ttl time.Duration) Option[K, V]

WithDefaultTTL sets the default TTL for entries that don't specify one. A value of 0 means no expiration (default).

func WithGCFreeStorage

func WithGCFreeStorage[K comparable, V any]() Option[K, V]

WithGCFreeStorage enables GC-free storage mode. This mode stores values as serialized bytes, reducing GC pressure. Best suited for caches with []byte values.

func WithIgnoreInternalCost

func WithIgnoreInternalCost[K comparable, V any](ignore bool) Option[K, V]

WithIgnoreInternalCost configures whether internal metadata cost should be ignored when calculating total cache cost.

func WithKeyHasher

func WithKeyHasher[K comparable, V any](fn func(K) uint64) Option[K, V]

WithKeyHasher sets a custom function to hash keys. If not set, a default hasher is used based on the key type.

func WithMaxCost

func WithMaxCost[K comparable, V any](cost int64) Option[K, V]

WithMaxCost sets the maximum total cost of entries in the cache. Each entry's cost is determined by CostFunc or defaults to 1. A value of 0 means unlimited cost (default).

func WithMaxEntries

func WithMaxEntries[K comparable, V any](n int64) Option[K, V]

WithMaxEntries sets the maximum number of entries in the cache. When the limit is reached, entries are evicted using the configured policy. A value of 0 means unlimited entries (default).

func WithNumCounters

func WithNumCounters[K comparable, V any](n int64) Option[K, V]

WithNumCounters sets the number of counters for TinyLFU frequency estimation. Recommended value is 10x the expected number of entries. A value of 0 uses a default based on MaxEntries.

func WithOnEvict

func WithOnEvict[K comparable, V any](fn func(K, V, int64)) Option[K, V]

WithOnEvict sets a callback function that is called when an entry is evicted. The callback receives the key, value, and cost of the evicted entry.

func WithOnExpire

func WithOnExpire[K comparable, V any](fn func(K, V)) Option[K, V]

WithOnExpire sets a callback function that is called when an entry expires. The callback receives the key and value of the expired entry.

func WithOnReject

func WithOnReject[K comparable, V any](fn func(K, V)) Option[K, V]

WithOnReject sets a callback function that is called when an entry is rejected by the TinyLFU admission policy.

func WithShardCount

func WithShardCount[K comparable, V any](n int) Option[K, V]

WithShardCount sets the number of shards for concurrent access. Must be a power of 2. Default is 1024.
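The power-of-two requirement lets shard selection use a bit mask instead of a modulo, e.g.:

```go
package main

import "fmt"

func main() {
	const shards = 1024 // power of 2, as WithShardCount requires
	hash := uint64(0xDEADBEEFCAFE)
	// With a power-of-two shard count, hash % shards reduces to a mask,
	// which is a single AND instruction instead of a division.
	idx := hash & (shards - 1)
	fmt.Println(idx == hash%shards) // true
	fmt.Println(idx)
}
```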

func WithStandardStorage

func WithStandardStorage[K comparable, V any]() Option[K, V]

WithStandardStorage uses the standard storage mode (default). This mode supports any value type but values are tracked by GC.

Directories

Path Synopsis
internal
alloc
Package alloc provides aligned memory allocation utilities for SIMD operations.
buffer
Package buffer provides lock-free ring buffer for write coalescing.
clock
Package clock provides a cached time source for high-performance scenarios.
glob
Package glob provides glob pattern matching for cache keys.
hash
Package hash provides optimized hash functions for cache keys.
hashtable
Package hashtable provides high-performance hash table implementations.
policy
Package policy provides cache eviction policies.
pool
Package pool provides sync.Pool utilities for reducing allocations.
prefetch
Package prefetch provides software prefetching utilities for cache optimization.
radix
Package radix provides a radix tree for efficient prefix search.
store
Package store provides storage backends for the cache.
Package item defines the structure used for cache entries.
