mcache

package module
v1.0.2
Published: Apr 10, 2026 License: MIT Imports: 17 Imported by: 7

README

go-mcache


Generic in-memory cache for Go with TinyLFU admission, sharded storage, batched read tracking, and a timing-wheel expiration path.

Design

mcache combines two ideas to achieve both high hit ratios and high throughput:

Admission control via TinyLFU. Not every new item gets into the cache. On insertion, a Count-Min Sketch estimates the access frequency of the incoming item and compares it against a random sample of existing entries (SampledLFU). If the new item has lower frequency, it is rejected. This prevents one-time keys from evicting frequently accessed data — a problem that LRU caches have by design. The frequency sketch uses 4-bit counters (8 bytes per ~16 tracked keys) and resets periodically to adapt to changing access patterns.
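As a rough illustration of that admission path, here is a minimal 4-bit Count-Min Sketch in Go. The row count, seed constants, and saturation behavior are assumptions made for this sketch, not mcache's actual internals:

```go
package main

import "fmt"

// Per-row mixing constants (arbitrary odd 64-bit values for this sketch).
var seeds = [4]uint64{0x9E3779B97F4A7C15, 0xC2B2AE3D27D4EB4F, 0x165667B19E3779F9, 0x27D4EB2F165667C5}

// cmSketch packs 16 4-bit counters per uint64 word; four rows vote and the
// minimum across rows is the frequency estimate. Counters saturate at 15.
type cmSketch struct {
	rows [4][]uint64
	mask uint64 // numCounters-1, where numCounters is a power of two
}

func newCMSketch(numCounters uint64) *cmSketch {
	size := uint64(16) // at least one full word of counters
	for size < numCounters {
		size <<= 1
	}
	s := &cmSketch{mask: size - 1}
	for i := range s.rows {
		s.rows[i] = make([]uint64, size/16)
	}
	return s
}

func (s *cmSketch) slot(row int, h uint64) (word, shift uint64) {
	idx := (h ^ seeds[row]*h) & s.mask // cheap per-row re-mix of the key hash
	return idx / 16, (idx % 16) * 4
}

func (s *cmSketch) Increment(h uint64) {
	for i := range s.rows {
		w, sh := s.slot(i, h)
		if (s.rows[i][w]>>sh)&0xF < 15 {
			s.rows[i][w] += 1 << sh
		}
	}
}

func (s *cmSketch) Estimate(h uint64) uint64 {
	min := uint64(15)
	for i := range s.rows {
		w, sh := s.slot(i, h)
		if c := (s.rows[i][w] >> sh) & 0xF; c < min {
			min = c
		}
	}
	return min
}

func main() {
	s := newCMSketch(1024)
	for i := 0; i < 10; i++ {
		s.Increment(42) // frequently accessed key
	}
	s.Increment(7) // one-time key
	// Admission compares the incoming key's estimate against sampled
	// residents and rejects the lower-frequency side.
	fmt.Println(s.Estimate(42) > s.Estimate(7)) // true
}
```

Note how 16 counters share one 8-byte word, which is where the "8 bytes per ~16 tracked keys" figure comes from.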

Cheap read path. Reads always go through a sharded map lookup. Frequency tracking is updated in a best-effort batched buffer once the cache becomes meaningfully occupied, which avoids paying TinyLFU bookkeeping on every hot read while the cache is still far from capacity.
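A minimal sketch of that best-effort batching idea: a try-lock buffer that drops events under contention and replays them to the policy one batch at a time. Names and sizes here are illustrative, not mcache's internals:

```go
package main

import (
	"fmt"
	"sync"
)

// lossyBuffer collects key hashes from the read path and replays them to
// the policy in batches. Record never blocks: if another reader holds the
// lock, the event is simply dropped (frequency data is allowed to be lossy).
type lossyBuffer struct {
	mu    sync.Mutex
	buf   []uint64
	size  int
	drain func([]uint64) // e.g. replay each hash into the frequency sketch
}

func newLossyBuffer(size int, drain func([]uint64)) *lossyBuffer {
	return &lossyBuffer{buf: make([]uint64, 0, size), size: size, drain: drain}
}

func (b *lossyBuffer) Record(keyHash uint64) {
	if !b.mu.TryLock() { // contended: drop rather than stall the read
		return
	}
	defer b.mu.Unlock()
	b.buf = append(b.buf, keyHash)
	if len(b.buf) >= b.size {
		b.drain(b.buf)
		b.buf = b.buf[:0]
	}
}

func main() {
	replayed := 0
	b := newLossyBuffer(4, func(hs []uint64) { replayed += len(hs) })
	for i := 0; i < 10; i++ {
		b.Record(uint64(i))
	}
	fmt.Println(replayed) // 8: two full batches drained, two events still pending
}
```

The design choice is deliberate: losing a few access events barely moves the frequency estimates, while taking a contended lock on every hot read would dominate the cost of the lookup itself.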

Architecture
Cache[K, V]
├── ShardedStore         1024 shards, cache-line padded, per-shard RWMutex
│   └── map[K]*Entry     standard Go map per shard
├── Policy[K]            generic over key type (no hash-collision ambiguity)
│   ├── TinyLFU          doorkeeper (Bloom filter) + Count-Min Sketch
│   └── SampledLFU[K]    dense array + map for O(1) random sampling
├── ExpiryWheel          coarse timing wheel for best-effort background expiry
├── RadixTree            opt-in, for prefix search on string keys
├── WriteBuffer          lock-free ring buffer for async batching
├── ReadBuffer           lossy batched policy-access replay
└── Metrics              optional atomic counters
Why exact-key policy matters

The policy tracks entries by exact key K, not by hash. This means:

  • No hash collision ambiguity — two keys with the same hash are tracked independently
  • Eviction victims are returned as {Key, KeyHash} pairs — no reverse lookup needed
  • The SampledLFU uses a dense array with swap-delete, so random sampling is O(sampleSize) instead of O(n) map iteration
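The dense-array bookkeeping behind that last point can be sketched as follows (illustrative only; the type and field names are invented here, not taken from the library):

```go
package main

import (
	"fmt"
	"math/rand"
)

// sampledSet keeps keys in a dense slice so drawing a random sample is just
// a few random indexes, and delete is O(1) via swap-with-last. This is the
// idea behind SampledLFU's O(sampleSize) sampling, not its exact code.
type sampledSet[K comparable] struct {
	keys []K
	pos  map[K]int // key -> index in keys
}

func newSampledSet[K comparable]() *sampledSet[K] {
	return &sampledSet[K]{pos: make(map[K]int)}
}

func (s *sampledSet[K]) Add(k K) {
	if _, ok := s.pos[k]; ok {
		return
	}
	s.pos[k] = len(s.keys)
	s.keys = append(s.keys, k)
}

func (s *sampledSet[K]) Remove(k K) {
	i, ok := s.pos[k]
	if !ok {
		return
	}
	last := len(s.keys) - 1
	s.keys[i] = s.keys[last] // swap-delete: last element fills the hole
	s.pos[s.keys[i]] = i
	s.keys = s.keys[:last]
	delete(s.pos, k)
}

// Sample returns up to n random resident keys in O(n), not O(len).
func (s *sampledSet[K]) Sample(n int) []K {
	out := make([]K, 0, n)
	for i := 0; i < n && len(s.keys) > 0; i++ {
		out = append(out, s.keys[rand.Intn(len(s.keys))])
	}
	return out
}

func main() {
	s := newSampledSet[string]()
	s.Add("a")
	s.Add("b")
	s.Add("c")
	s.Remove("b") // O(1): "c" is swapped into b's slot
	fmt.Println(len(s.keys), len(s.Sample(2))) // 2 2
}
```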
Expiration

Entries with TTL are scheduled into a coarse timing wheel for best-effort background cleanup. Exact TTL enforcement still happens on Get/Has by checking the entry's ExpireAt, while the background worker lazily removes entries whose scheduled expiration still matches the live entry.

Install

go get github.com/OrlovEvgeny/go-mcache

Requires Go 1.23+

Usage

cache := mcache.NewCache[string, int](
    mcache.WithMaxEntries[string, int](100_000),
)
defer cache.Close()

cache.Set("key", 42, 5*time.Minute)

val, ok := cache.Get("key")  // type-safe, no assertion
Cost-based eviction
cache := mcache.NewCache[string, []byte](
    mcache.WithMaxCost[string, []byte](100 << 20), // 100 MB
    mcache.WithCostFunc[string, []byte](func(v []byte) int64 {
        return int64(len(v))
    }),
)

// A 10 MB value may evict multiple smaller entries
cache.Set("large", make([]byte, 10<<20), 0)
Batch reads
batch := cache.GetBatchOptimized(keys)
// Keys are sorted by shard index before lookup
// for sequential memory access and reduced lock contention
for i, key := range batch.Keys {
    if batch.Found[i] {
        process(key, batch.Values[i])
    }
}
Prefix search (string keys, opt-in)
cache := mcache.NewCache[string, int](
    mcache.WithPrefixSearch[string, int](true),
)

cache.Set("user:1:name", 1, 0)
cache.Set("user:1:email", 2, 0)
cache.Set("user:2:name", 3, 0)

iter := cache.ScanPrefix("user:1:", 0, 100)
for iter.Next() {
    fmt.Println(iter.Key()) // user:1:name, user:1:email
}
Async writes
cache := mcache.NewCache[string, int](
    mcache.WithBufferItems[string, int](64),
)

cache.Set("key", 1, 0)    // buffered, returns immediately
cache.Wait()               // blocks until buffer is flushed
val, _ := cache.Get("key") // guaranteed to see the value

When the write buffer is full, the operation falls back to synchronous execution instead of dropping the entry.

Configuration

Option Description Default
WithMaxEntries Maximum number of entries unlimited
WithMaxCost Maximum total cost unlimited
WithNumCounters TinyLFU counters (recommend 10x max entries) auto
WithShardCount Number of shards (power of 2) 1024
WithBufferItems Async write buffer size (0 = sync) 0
WithMetrics Enable cache metrics collection true
WithExpirationResolution Background expiration tick resolution 100ms
WithDefaultTTL Default TTL for entries without explicit TTL 0 (no expiry)
WithCostFunc Custom cost calculator cost = 1
WithKeyHasher Custom key hash function auto (FNV-1a)
WithLockFreePolicy Use lock-free TinyLFU for reads true
WithPrefixSearch Enable radix tree for ScanPrefix false
WithOnEvict Callback on eviction nil
WithOnExpire Callback on TTL expiration nil
WithOnReject Callback when TinyLFU rejects entry nil
Supported key types

Built-in zero-allocation hashing for: string, int, int8 through int64, uint through uint64, float32, float64, bool, uintptr. Other comparable types use a fmt.Sprintf fallback — provide WithKeyHasher for production use with custom types.
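For a composite key type, such a hasher might look like the following. The userKey struct is hypothetical, invented for this example; only the WithKeyHasher hook it would be passed to is part of the actual API:

```go
package main

import "fmt"

// userKey is a hypothetical composite key type; it is comparable, so it is
// usable as K.
type userKey struct {
	Tenant uint32
	ID     uint64
}

// hashUserKey folds both fields through FNV-1a byte by byte, avoiding the
// fmt.Sprintf fallback. It would be supplied via
// mcache.WithKeyHasher[userKey, V](hashUserKey).
func hashUserKey(k userKey) uint64 {
	const offset, prime = 14695981039346656037, 1099511628211
	h := uint64(offset)
	for i := 0; i < 4; i++ { // fold in Tenant, least significant byte first
		h ^= uint64(k.Tenant >> (8 * i) & 0xFF)
		h *= prime
	}
	for i := 0; i < 8; i++ { // then ID
		h ^= k.ID >> (8 * i) & 0xFF
		h *= prime
	}
	return h
}

func main() {
	a := hashUserKey(userKey{Tenant: 1, ID: 42})
	b := hashUserKey(userKey{Tenant: 1, ID: 42})
	fmt.Println(a == b, a != hashUserKey(userKey{Tenant: 2, ID: 42})) // true true
}
```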

API

// Core
cache.Set(key K, value V, ttl time.Duration) bool
cache.SetWithCost(key K, value V, cost int64, ttl time.Duration) bool
cache.Get(key K) (V, bool)
cache.Has(key K) bool
cache.Delete(key K) bool
cache.Len() int
cache.Clear()
cache.Close()
cache.Wait()

// Batch
cache.GetMany(keys []K) map[K]V
cache.GetBatch(keys []K) *BatchResult[K, V]
cache.GetBatchOptimized(keys []K) *BatchResult[K, V]
cache.SetMany(items []Item[K, V]) int
cache.DeleteMany(keys []K) int

// Iterators (cursor-based, Redis-style)
cache.Scan(cursor uint64, count int) *Iterator[K, V]
cache.ScanPrefix(prefix string, cursor uint64, count int) *Iterator[K, V]
cache.ScanMatch(pattern string, cursor uint64, count int) *Iterator[K, V]

// Iterator methods
iter.Next() bool
iter.Key() K
iter.Value() V
iter.All() []Item[K, V]
iter.Keys() []K
iter.ForEach(func(K, V) bool)
iter.Count() int
iter.Cursor() uint64
iter.Close()

// Metrics
cache.Metrics() MetricsSnapshot
// Fields: Hits, Misses, HitRatio, Sets, Deletes, Evictions,
//         Expirations, Rejections, CostAdded, CostEvicted, BufferDrops

Benchmarks

The repo includes a cross-library comparison suite for local in-process caches: go-mcache, ristretto, bigcache, freecache, go-cache, ttlcache, golang-lru/expirable, otter, and theine.

Run the same suite used for the numbers below:

env GOCACHE=/tmp/go-build-cache GOTOOLCHAIN=auto \
go test -run '^$' -bench '^BenchmarkCompare/' -benchmem -benchtime=1s -count=3 .

Current reference run:

  • machine: Apple M4 Pro
  • metric shown in tables: median ns/op across count=3 runs (lower is better)

The suite separates:

  • core throughput (ReadParallelHot, WriteParallelOverwrite, MixedParallel80_20, DeleteCycle)
  • TTL overhead (SetWithTTLParallel, ExpiredRead for precise TTL implementations)
  • bounded-cache pressure (MixedParallelZipf95_5, MissThenSetZipf)

These numbers are useful as a comparative signal for this repository, but they are still microbenchmarks on one machine. They should not be read as a universal "best cache for every workload" claim.

Core scenarios:

Scenario go-mcache ristretto bigcache freecache go-cache ttlcache golang-lru-expirable otter theine
Core/ReadParallelHot 8.598 14.640 51.220 52.180 120.700 408.300 319.300 4.919 8.412
Core/WriteParallelOverwrite 21.600 238.000 50.130 52.840 227.900 365.100 320.500 393.600 284.500
Core/MixedParallel80_20 15.040 75.300 38.090 51.160 49.070 401.000 340.900 85.120 92.620
Core/DeleteCycle 148.500 241.700 62.730 39.670 36.910 78.370 85.140 129.500 197.400

TTL and bounded scenarios:

Scenario go-mcache ristretto bigcache freecache go-cache ttlcache golang-lru-expirable otter theine
TTL/SetWithTTLParallel 237.000 507.300 59.800 53.430 271.900 261.900 298.600 401.600 339.300
TTL/ExpiredRead 7.032 20.850 158.100 350.100 100.900 3.635 11.010
Bounded/MixedParallelZipf95_5 37.430 64.920 86.130 122.200 320.700 263.100 21.790 49.720
Bounded/MissThenSetZipf 22.440 26.600 54.650 120.700 324.400 262.100 6.258 9.766

A missing value in a row means the scenario is not part of that library's comparison set in this suite (for example, ExpiredRead is only run for caches with precise TTL semantics).

Thread safety

All operations are safe for concurrent use. The concurrency model:

  • Reads: shard RLock + optional best-effort read buffering for policy replay
  • Writes: overwrite fast path updates entries in-place when possible; inserts go through admission/eviction
  • Expiration: timing-wheel scheduling on writes, lazy delete on background ticks
  • Metrics: atomic counters when enabled
  • Shards: cache-line padded to prevent false sharing between cores

Legacy API

The mcache.New() / CacheDriver API from v1 still works. It uses safeMap + GC-based expiration without TinyLFU. See mcache.go and gcmap/ for details. For new code, use the generic NewCache[K, V] API.

License

MIT

Documentation

Index

Constants

const TTL_FOREVER = 0

TTL_FOREVER represents an infinite TTL (no expiration).

Variables

This section is empty.

Functions

This section is empty.

Types

type BatchResult

type BatchResult[K comparable, V any] struct {
	Keys   []K
	Values []V
	Found  []bool
	Hashes []uint64
}

BatchResult holds the result of a batch get operation.

type Cache

type Cache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

Cache is a generic, high-performance in-memory cache.

func NewCache

func NewCache[K comparable, V any](opts ...Option[K, V]) *Cache[K, V]

NewCache creates a new generic Cache with the given options.

func (*Cache[K, V]) Clear

func (c *Cache[K, V]) Clear()

Clear removes all entries from the cache.

func (*Cache[K, V]) Close

func (c *Cache[K, V]) Close()

Close stops the cache and releases resources.

func (*Cache[K, V]) Delete

func (c *Cache[K, V]) Delete(key K) bool

Delete removes a value from the cache. Returns true if the value was found and deleted.

func (*Cache[K, V]) DeleteMany

func (c *Cache[K, V]) DeleteMany(keys []K) int

DeleteMany removes multiple keys from the cache. Returns the number of keys successfully deleted.

func (*Cache[K, V]) Get

func (c *Cache[K, V]) Get(key K) (V, bool)

Get retrieves a value from the cache. Returns the value and true if found, zero value and false otherwise.

func (*Cache[K, V]) GetBatch

func (c *Cache[K, V]) GetBatch(keys []K) *BatchResult[K, V]

GetBatch retrieves multiple values from the cache with optimized prefetching. This is more efficient than calling Get in a loop, especially for large batches.

func (*Cache[K, V]) GetBatchOptimized

func (c *Cache[K, V]) GetBatchOptimized(keys []K) *BatchResult[K, V]

GetBatchOptimized retrieves multiple values with shard-order optimization. Keys are processed in shard order for better cache locality, but results are returned in the original key order.

func (*Cache[K, V]) GetBatchToMap

func (c *Cache[K, V]) GetBatchToMap(keys []K) map[K]V

GetBatchToMap retrieves multiple values and returns them as a map. More convenient than GetBatch when you need map access.

func (*Cache[K, V]) GetMany

func (c *Cache[K, V]) GetMany(keys []K) map[K]V

GetMany retrieves multiple values from the cache. Returns a map of found keys to their values.

func (*Cache[K, V]) Has

func (c *Cache[K, V]) Has(key K) bool

Has checks if a key exists in the cache.

func (*Cache[K, V]) Len

func (c *Cache[K, V]) Len() int

Len returns the number of entries in the cache.

func (*Cache[K, V]) Metrics

func (c *Cache[K, V]) Metrics() MetricsSnapshot

Metrics returns the cache metrics.

func (*Cache[K, V]) Scan

func (c *Cache[K, V]) Scan(cursor uint64, count int) *Iterator[K, V]

Scan returns an iterator over cache entries. cursor is the starting position (0 for beginning). count is the maximum number of entries to return per iteration.

func (*Cache[K, V]) ScanMatch

func (c *Cache[K, V]) ScanMatch(pattern string, cursor uint64, count int) *Iterator[K, V]

ScanMatch returns an iterator over entries with keys matching the glob pattern. Only works when K is string. Supported patterns: * (any chars), ? (single char), [abc] (char class).

func (*Cache[K, V]) ScanPrefix

func (c *Cache[K, V]) ScanPrefix(prefix string, cursor uint64, count int) *Iterator[K, V]

ScanPrefix returns an iterator over entries with keys matching the prefix. Only works when K is string.

func (*Cache[K, V]) Set

func (c *Cache[K, V]) Set(key K, value V, ttl time.Duration) bool

Set stores a value in the cache with the given TTL. A TTL of 0 means the entry never expires. Returns true if the value was stored, false if rejected by admission policy.

func (*Cache[K, V]) SetMany

func (c *Cache[K, V]) SetMany(items []Item[K, V]) int

SetMany stores multiple items in the cache. Returns the number of items successfully stored.

func (*Cache[K, V]) SetWithCost

func (c *Cache[K, V]) SetWithCost(key K, value V, cost int64, ttl time.Duration) bool

SetWithCost stores a value with a specified cost. Cost is used for eviction decisions when MaxCost is set.

func (*Cache[K, V]) Wait

func (c *Cache[K, V]) Wait()

Wait waits for all pending write operations to complete.

type CacheDriver

type CacheDriver struct {
	// contains filtered or unexported fields
}

CacheDriver manages cache operations with storage and expiration.

func New

func New() *CacheDriver

New creates and initializes a new CacheDriver.

func StartInstance

func StartInstance() *CacheDriver

StartInstance is deprecated; use New instead.

func (*CacheDriver) Close

func (mc *CacheDriver) Close() map[string]interface{}

Close stops the GC and returns all non-expired entries.

func (*CacheDriver) GCBufferQueue

func (mc *CacheDriver) GCBufferQueue() int

GCBufferQueue returns the count of pending expirations in the GC.

func (*CacheDriver) Get

func (mc *CacheDriver) Get(key string) (interface{}, bool)

Get retrieves a value by key. Returns (value, true) if found and not expired.

func (*CacheDriver) GetPointer

func (mc *CacheDriver) GetPointer(key string) (interface{}, bool)

GetPointer is deprecated; use Get instead.

func (*CacheDriver) Len

func (mc *CacheDriver) Len() int

Len returns the number of current cache entries.

func (*CacheDriver) Remove

func (mc *CacheDriver) Remove(key string)

Remove deletes a key from the cache and expiration tracking.

func (*CacheDriver) Set

func (mc *CacheDriver) Set(key string, value interface{}, ttl time.Duration) error

Set inserts or updates a key with the given value and TTL.

func (*CacheDriver) SetPointer

func (mc *CacheDriver) SetPointer(key string, value interface{}, ttl time.Duration) error

SetPointer is deprecated; use Set instead.

func (*CacheDriver) Truncate

func (mc *CacheDriver) Truncate()

Truncate clears all cache entries and pending expirations.

type Item

type Item[K comparable, V any] struct {
	Key   K
	Value V
	Cost  int64         // 0 = auto-calculate (cost of 1)
	TTL   time.Duration // 0 = no expiration
}

Item represents an item to be stored in the cache.

type Iterator

type Iterator[K comparable, V any] struct {
	// contains filtered or unexported fields
}

Iterator provides a Redis-style iterator over cache entries.

func (*Iterator[K, V]) All

func (it *Iterator[K, V]) All() []Item[K, V]

All collects all remaining entries and returns them. Warning: This may be memory-intensive for large result sets.

func (*Iterator[K, V]) Close

func (it *Iterator[K, V]) Close()

Close releases resources associated with the iterator.

func (*Iterator[K, V]) Count

func (it *Iterator[K, V]) Count() int

Count counts the remaining entries without collecting them.

func (*Iterator[K, V]) Cursor

func (it *Iterator[K, V]) Cursor() uint64

Cursor returns the current cursor position. This can be used to resume iteration later.

func (*Iterator[K, V]) Entry

func (it *Iterator[K, V]) Entry() (K, V)

Entry returns the current entry (key, value pair).

func (*Iterator[K, V]) Err

func (it *Iterator[K, V]) Err() error

Err returns any error that occurred during iteration.

func (*Iterator[K, V]) ForEach

func (it *Iterator[K, V]) ForEach(fn func(key K, value V) bool)

ForEach calls fn for each remaining entry. If fn returns false, iteration stops.

func (*Iterator[K, V]) Key

func (it *Iterator[K, V]) Key() K

Key returns the current entry's key.

func (*Iterator[K, V]) Keys

func (it *Iterator[K, V]) Keys() []K

Keys collects all remaining keys and returns them.

func (*Iterator[K, V]) Next

func (it *Iterator[K, V]) Next() bool

Next advances the iterator to the next entry. Returns true if there is an entry available, false when exhausted.

func (*Iterator[K, V]) Value

func (it *Iterator[K, V]) Value() V

Value returns the current entry's value.

func (*Iterator[K, V]) Values

func (it *Iterator[K, V]) Values() []V

Values collects all remaining values and returns them.

type Metrics

type Metrics struct {
	// contains filtered or unexported fields
}

Metrics holds cache statistics.

func (*Metrics) Reset

func (m *Metrics) Reset()

Reset resets all metrics to zero.

func (*Metrics) Snapshot

func (m *Metrics) Snapshot() MetricsSnapshot

Snapshot returns a point-in-time snapshot of the metrics.

type MetricsSnapshot

type MetricsSnapshot struct {
	Hits        int64   // Total cache hits
	Misses      int64   // Total cache misses
	Sets        int64   // Total successful sets
	Deletes     int64   // Total successful deletes
	Evictions   int64   // Total evictions due to size/cost limit
	Expirations int64   // Total expirations due to TTL
	Rejections  int64   // Total rejections by admission policy
	CostAdded   int64   // Total cost added over time
	CostEvicted int64   // Total cost evicted over time
	BufferDrops int64   // Times buffer was full and sync fallback was used
	HitRatio    float64 // Hit ratio (hits / (hits + misses))
}

MetricsSnapshot is a point-in-time snapshot of cache metrics.

type Option

type Option[K comparable, V any] func(*config[K, V])

Option is a function that configures a Cache.

func WithBufferItems

func WithBufferItems[K comparable, V any](n int64) Option[K, V]

WithBufferItems sets the write buffer size. Writes are batched in this buffer before being applied to the cache. Default is 64.

func WithCostFunc

func WithCostFunc[K comparable, V any](fn func(V) int64) Option[K, V]

WithCostFunc sets a custom function to calculate the cost of a value. If not set, each entry has a cost of 1.

func WithDefaultTTL

func WithDefaultTTL[K comparable, V any](ttl time.Duration) Option[K, V]

WithDefaultTTL sets the default TTL for entries that don't specify one. A value of 0 means no expiration (default).

func WithExpirationResolution added in v1.0.2

func WithExpirationResolution[K comparable, V any](resolution time.Duration) Option[K, V]

WithExpirationResolution sets the proactive expiration sweep resolution. Expired entries are always rejected exactly on Get/Has; this option only controls how quickly background cleanup observes expired items.

func WithIgnoreInternalCost

func WithIgnoreInternalCost[K comparable, V any](ignore bool) Option[K, V]

WithIgnoreInternalCost configures whether internal metadata cost should be ignored when calculating total cache cost.

func WithKeyHasher

func WithKeyHasher[K comparable, V any](fn func(K) uint64) Option[K, V]

WithKeyHasher sets a custom function to hash keys. If not set, a default hasher is used based on the key type.

func WithLockFreePolicy

func WithLockFreePolicy[K comparable, V any](enabled bool) Option[K, V]

WithLockFreePolicy configures whether to use the lock-free policy. When enabled (default), the Access() method is completely lock-free, which significantly reduces contention on read-heavy workloads. When disabled, the standard mutex-based policy is used.

func WithMaxCost

func WithMaxCost[K comparable, V any](cost int64) Option[K, V]

WithMaxCost sets the maximum total cost of entries in the cache. Each entry's cost is determined by CostFunc or defaults to 1. A value of 0 means unlimited cost (default).

func WithMaxEntries

func WithMaxEntries[K comparable, V any](n int64) Option[K, V]

WithMaxEntries sets the maximum number of entries in the cache. When the limit is reached, entries are evicted using the configured policy. A value of 0 means unlimited entries (default).

func WithMetrics added in v1.0.2

func WithMetrics[K comparable, V any](enabled bool) Option[K, V]

WithMetrics enables or disables metrics collection. Default: false.

func WithNumCounters

func WithNumCounters[K comparable, V any](n int64) Option[K, V]

WithNumCounters sets the number of counters for TinyLFU frequency estimation. Recommended value is 10x the expected number of entries. A value of 0 uses a default based on MaxEntries.

func WithOnEvict

func WithOnEvict[K comparable, V any](fn func(K, V, int64)) Option[K, V]

WithOnEvict sets a callback function that is called when an entry is evicted. The callback receives the key, value, and cost of the evicted entry.

func WithOnExpire

func WithOnExpire[K comparable, V any](fn func(K, V)) Option[K, V]

WithOnExpire sets a callback function that is called when an entry expires. The callback receives the key and value of the expired entry.

func WithOnReject

func WithOnReject[K comparable, V any](fn func(K, V)) Option[K, V]

WithOnReject sets a callback function that is called when an entry is rejected by the TinyLFU admission policy.

func WithPrefixSearch

func WithPrefixSearch[K comparable, V any](enabled bool) Option[K, V]

WithPrefixSearch enables prefix search functionality for string keys. When enabled, a radix tree is maintained for efficient prefix lookups. This adds memory overhead and slight write latency, so only enable if you need ScanPrefix functionality. Default: false (disabled).

func WithShardCount

func WithShardCount[K comparable, V any](n int) Option[K, V]

WithShardCount sets the number of shards for concurrent access. Must be a power of 2. Default is 1024.

Directories

Path Synopsis
internal
alloc
Package alloc provides aligned memory allocation utilities for SIMD operations.
buffer
Package buffer provides lock-free ring buffer for write coalescing.
clock
Package clock provides a cached time source for high-performance scenarios.
glob
Package glob provides glob pattern matching for cache keys.
hash
Package hash provides optimized hash functions for cache keys.
hashtable
Package hashtable provides high-performance hash table implementations.
policy
Package policy provides cache eviction policies.
pool
Package pool provides sync.Pool utilities for reducing allocations.
prefetch
Package prefetch provides software prefetching utilities for cache optimization.
radix
Package radix provides a radix tree for efficient prefix search.
store
Package store provides storage backends for the cache.
item
Package item defines the structure used for cache entries.
