multicache

package module
v1.7.0 Latest
Published: Dec 29, 2025 License: Apache-2.0 Imports: 11 Imported by: 0

README

multicache

multicache is an absurdly fast, multi-threaded, multi-tiered in-memory cache library for Go; in our benchmarks it offers higher performance than any other option available for the language.

It offers optional persistence with compression, and has been specifically optimized for Cloud Compute environments where the process is periodically restarted, such as Kubernetes or Google Cloud Run.

Install

go get github.com/codeGROOVE-dev/multicache

Use

cache := multicache.New[string, int](multicache.Size(10000))
cache.Set("answer", 42)
val, ok := cache.Get("answer")

With persistence:

store, _ := localfs.New[string, User]("myapp", "")
cache, _ := multicache.NewTiered(store)

cache.Set(ctx, "user:123", user)           // sync write
cache.SetAsync(ctx, "user:456", user)      // async write

GetSet deduplicates concurrent loads to prevent thundering herd situations:

user, err := cache.GetSet("user:123", func() (User, error) {
    return db.LoadUser("123")
})

Options

multicache.Size(n)           // max entries (default 16384)
multicache.TTL(time.Hour)    // default expiration

Persistence

Memory cache backed by durable storage. Reads check memory first; writes go to both.

Backend                    Import
Local filesystem           pkg/store/localfs
Valkey/Redis               pkg/store/valkey
Google Cloud Datastore     pkg/store/datastore
Auto-detect (Cloud Run)    pkg/store/cloudrun

For maximum efficiency, all backends support S2 or Zstd compression via pkg/store/compress.

Performance

multicache has been exhaustively tested for performance using gocachemark.

Where multicache wins:

  • Throughput: 954M int gets/sec at 16 threads (2.2X faster than otter). 140M string sets/sec (9X faster than otter).
  • Hit rate: Wins 7 of 9 workloads. Highest average across all datasets (+2.9% vs otter, +0.9% vs sieve).
  • Latency: 8ns int gets, 10ns string gets, zero allocations (4X lower latency than otter).

Where others win:

  • Memory: freelru and otter use less memory per entry (multicache carries 73 bytes/item of overhead vs 15 for otter)
  • Specific workloads: clock +0.07% on ibm-docker, theine +0.34% on zipf

Much of the credit for high throughput goes to puzpuzpuz/xsync. While highly sharded maps and flightGroups performed well, you can't beat xsync's lock-free data structures.

Run make benchmark for full results, or see benchmarks/gocachemark_results.md.

Algorithm

multicache uses S3-FIFO, which features three queues: small (new entries), main (promoted entries), and ghost (recently evicted keys). New items enter small; items accessed twice move to main. The ghost queue tracks evicted keys in a bloom filter to fast-track their return.

multicache has been hyper-tuned for high performance, and deviates from the original paper in a handful of ways:

  • Dynamic sharding - scales to 16×GOMAXPROCS shards; at 32 threads: 21x Get throughput, 6x Set throughput vs single shard
  • Tuned small queue - 90% vs paper's 10%, tuned via binary search to maximize average hit rate across 9 production traces
  • Full ghost frequency restoration - returning keys restore 100% of their previous access count
  • Reduced frequency cap - max freq=2 vs paper's 3, tuned via binary search for best average hit rate
  • Hot item demotion - items accessed at least once (peakFreq≥1) are demoted to the small queue instead of being evicted
  • Extended ghost capacity - 8x cache size for ghost tracking, tuned via binary search
  • Death row buffer - capacity/768 buffer holds recently evicted items for instant resurrection
  • Ghost frequency ring buffer - fixed-size 256-entry ring replaces map allocations; -5.1% string latency, -44.5% memory

License

Apache 2.0

Documentation

Overview

Package multicache provides a high-performance cache with optional persistence.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Cache

type Cache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

Cache is an in-memory cache. All operations are synchronous and infallible.

func New

func New[K comparable, V any](opts ...Option) *Cache[K, V]

New creates an in-memory cache.

func (*Cache[K, V]) Close

func (*Cache[K, V]) Close()

Close is a no-op for Cache (provided for API symmetry with TieredCache).

func (*Cache[K, V]) Delete

func (c *Cache[K, V]) Delete(key K)

Delete removes a key from the cache.

func (*Cache[K, V]) Flush

func (c *Cache[K, V]) Flush() int

Flush removes all entries. Returns count removed.

func (*Cache[K, V]) Get

func (c *Cache[K, V]) Get(key K) (V, bool)

Get returns the value for key, or zero and false if not found.

func (*Cache[K, V]) GetSet

func (c *Cache[K, V]) GetSet(key K, loader func() (V, error), ttl ...time.Duration) (V, error)

GetSet returns cached value or calls loader to compute it. Concurrent calls for the same key share one loader invocation.

func (*Cache[K, V]) Len

func (c *Cache[K, V]) Len() int

Len returns the number of entries.

func (*Cache[K, V]) Set

func (c *Cache[K, V]) Set(key K, value V, ttl ...time.Duration)

Set stores a value. Uses default TTL if none provided.

func (*Cache[K, V]) SetIfAbsent

func (c *Cache[K, V]) SetIfAbsent(key K, value V, ttl ...time.Duration) (V, bool)

SetIfAbsent stores value only if key is missing. No deduplication.

type Option

type Option func(*config)

Option configures a Cache.

func Size

func Size(n int) Option

Size sets maximum entries. Default 16384.

func TTL

func TTL(d time.Duration) Option

TTL sets default expiration. Default 0 (none).

type Store

type Store[K comparable, V any] interface {
	ValidateKey(key K) error
	Get(ctx context.Context, key K) (V, time.Time, bool, error)
	Set(ctx context.Context, key K, value V, expiry time.Time) error
	Delete(ctx context.Context, key K) error
	Cleanup(ctx context.Context, maxAge time.Duration) (int, error)
	Location(key K) string
	Flush(ctx context.Context) (int, error)
	Len(ctx context.Context) (int, error)
	Close() error
}

Store is the persistence backend interface.

type TieredCache

type TieredCache[K comparable, V any] struct {
	Store Store[K, V] // direct access to persistence layer
	// contains filtered or unexported fields
}

TieredCache combines an in-memory cache with persistent storage.

func NewTiered

func NewTiered[K comparable, V any](store Store[K, V], opts ...Option) (*TieredCache[K, V], error)

NewTiered creates a cache backed by the given store.

func (*TieredCache[K, V]) Close

func (c *TieredCache[K, V]) Close() error

Close releases store resources.

func (*TieredCache[K, V]) Delete

func (c *TieredCache[K, V]) Delete(ctx context.Context, key K) error

Delete removes from memory and persistence.

func (*TieredCache[K, V]) Flush

func (c *TieredCache[K, V]) Flush(ctx context.Context) (int, error)

Flush clears memory and persistence. Returns total entries removed.

func (*TieredCache[K, V]) Get

func (c *TieredCache[K, V]) Get(ctx context.Context, key K) (V, bool, error)

Get checks memory, then persistence. Found values are cached in memory.

func (*TieredCache[K, V]) GetSet

func (c *TieredCache[K, V]) GetSet(ctx context.Context, key K, loader func(context.Context) (V, error), ttl ...time.Duration) (V, error)

GetSet returns cached value or calls loader. Concurrent calls share one loader.

func (*TieredCache[K, V]) Len

func (c *TieredCache[K, V]) Len() int

Len returns the memory cache size. Use Store.Len for persistence count.

func (*TieredCache[K, V]) Set

func (c *TieredCache[K, V]) Set(ctx context.Context, key K, value V, ttl ...time.Duration) error

Set stores to memory first (always), then persistence.

func (*TieredCache[K, V]) SetAsync

func (c *TieredCache[K, V]) SetAsync(ctx context.Context, key K, value V, ttl ...time.Duration) error

SetAsync stores to memory synchronously, persistence asynchronously. Persistence errors are logged, not returned.

Directories

Path              Synopsis
pkg
store/localfs     module
