bdcache v0.9.8 · Published: Dec 4, 2025 · License: Apache-2.0 · Imports: 7 · Imported by: 0

README

bdcache - Big Dumb Cache



Stupid fast in-memory Go cache with optional L2 persistence layer.

Originally designed for persistently caching HTTP fetches in unreliable environments like Google Cloud Run, this cache has something for everyone.

Features

  • Faster than a bat out of hell - Best-in-class latency and throughput
  • S3-FIFO eviction - Better hit rates than LRU (learn more)
  • Pluggable persistence - Bring your own database or use a built-in backend (localfs, datastore, cloudrun)
  • Per-item TTL - Optional expiration
  • Graceful degradation - The cache keeps working even if persistence fails
  • Zero-allocation reads - Minimal GC thrashing
  • Type safe - Built on Go generics

Usage

As a stupid-fast in-memory cache:

import "github.com/codeGROOVE-dev/bdcache"

// strings as keys, ints as values
cache := bdcache.Memory[string, int]()
cache.Set("answer", 42, 0)
val, found := cache.Get("answer")

or with local file persistence to survive restarts:

import (
  "github.com/codeGROOVE-dev/bdcache"
  "github.com/codeGROOVE-dev/bdcache/persist/localfs"
)

p, _ := localfs.New[string, User]("myapp", "")
cache, _ := bdcache.Persistent[string, User](ctx, p)

cache.SetAsync(ctx, "user:123", user, 0) // Don't wait for the key to persist
cache.Store.Len(ctx)                      // Access persistence layer directly

Or as a persistent cache suitable for Cloud Run or local development, using Cloud Datastore if available:

import (
  "github.com/codeGROOVE-dev/bdcache"
  "github.com/codeGROOVE-dev/bdcache/persist/cloudrun"
)

p, _ := cloudrun.New[string, User](ctx, "myapp")
cache, _ := bdcache.Persistent[string, User](ctx, p)

Performance against the Competition

bdcache prioritizes high hit rates and low read latency, but it performs well all around.

Here are the results from an M4 MacBook Pro; run make bench to reproduce them:

Hit Rate (Zipf α=0.99, 1M ops, 1M keyspace)

Cache         Size=1%   Size=2.5%   Size=5%
bdcache 🟡    94.45%    94.91%      95.09%
otter 🦦      94.28%    94.69%      95.09%
ristretto ☕  91.63%    92.44%      93.02%
tinylfu 🔬    94.31%    94.87%      95.09%
freecache 🆓  94.03%    94.15%      94.75%
lru 📚        94.10%    94.84%      95.09%

🏆 Hit rate: +0.1% better than 2nd best (tinylfu)

Single-Threaded Latency (sorted by Get)

Cache         Get ns/op   Get B/op   Get allocs   Set ns/op   Set B/op   Set allocs
bdcache 🟡    7.0         0          0            12.0        0          0
lru 📚        24.0        0          0            22.0        0          0
ristretto ☕  30.0        13         0            69.0        119        3
otter 🦦      32.0        0          0            145.0       51         1
freecache 🆓  72.0        15         1            57.0        4          0
tinylfu 🔬    89.0        3          0            106.0       175        3

🏆 Get latency: +243% faster than 2nd best (lru)
🏆 Set latency: +83% faster than 2nd best (lru)

Single-Threaded Throughput (mixed read/write)

Cache         Get QPS   Set QPS
bdcache 🟡    77.36M    61.54M
lru 📚        34.69M    35.25M
ristretto ☕  29.44M    13.61M
otter 🦦      25.63M    7.10M
freecache 🆓  12.92M    15.65M
tinylfu 🔬    10.87M    8.93M

🏆 Get throughput: +123% faster than 2nd best (lru)
🏆 Set throughput: +75% faster than 2nd best (lru)

Concurrent Throughput (mixed read/write): 4 threads

Cache         Get QPS   Set QPS
bdcache 🟡    45.67M    38.65M
otter 🦦      28.11M    4.06M
ristretto ☕  27.06M    13.41M
freecache 🆓  24.67M    20.84M
lru 📚        9.29M     9.56M
tinylfu 🔬    5.72M     4.94M

🏆 Get throughput: +62% faster than 2nd best (otter)
🏆 Set throughput: +85% faster than 2nd best (freecache)

Concurrent Throughput (mixed read/write): 8 threads

Cache         Get QPS   Set QPS
bdcache 🟡    22.31M    22.84M
otter 🦦      19.49M    3.30M
ristretto ☕  18.67M    11.46M
freecache 🆓  17.34M    16.36M
lru 📚        7.66M     7.75M
tinylfu 🔬    4.81M     4.11M

🏆 Get throughput: +14% faster than 2nd best (otter)
🏆 Set throughput: +40% faster than 2nd best (freecache)

Concurrent Throughput (mixed read/write): 12 threads

Cache         Get QPS   Set QPS
bdcache 🟡    26.25M    24.04M
ristretto ☕  21.71M    11.49M
otter 🦦      19.78M    2.93M
freecache 🆓  15.84M    16.10M
lru 📚        7.50M     8.92M
tinylfu 🔬    4.08M     3.37M

🏆 Get throughput: +21% faster than 2nd best (ristretto)
🏆 Set throughput: +49% faster than 2nd best (freecache)

Concurrent Throughput (mixed read/write): 16 threads

Cache         Get QPS   Set QPS
bdcache 🟡    16.92M    16.00M
ristretto ☕  15.73M    11.97M
otter 🦦      15.70M    2.89M
freecache 🆓  14.67M    14.42M
lru 📚        7.53M     8.07M
tinylfu 🔬    4.75M     3.41M

🏆 Get throughput: +7.6% faster than 2nd best (ristretto)
🏆 Set throughput: +11% faster than 2nd best (freecache)

Concurrent Throughput (mixed read/write): 24 threads

Cache         Get QPS   Set QPS
bdcache 🟡    20.08M    16.56M
ristretto ☕  16.76M    12.81M
otter 🦦      15.71M    2.93M
freecache 🆓  14.43M    14.59M
lru 📚        7.71M     7.75M
tinylfu 🔬    4.80M     3.09M

🏆 Get throughput: +20% faster than 2nd best (ristretto)
🏆 Set throughput: +14% faster than 2nd best (freecache)

Concurrent Throughput (mixed read/write): 32 threads

Cache         Get QPS   Set QPS
bdcache 🟡    15.84M    15.29M
ristretto ☕  15.36M    13.49M
otter 🦦      15.04M    2.91M
freecache 🆓  14.87M    13.95M
lru 📚        7.79M     8.23M
tinylfu 🔬    5.34M     3.09M

🏆 Get throughput: +3.1% faster than 2nd best (ristretto)
🏆 Set throughput: +9.6% faster than 2nd best (freecache)

NOTE: Performance characteristics often have trade-offs. There are almost certainly workloads where other cache implementations are faster, but nobody blends speed and persistence the way that bdcache does.

License

Apache 2.0

Documentation

Overview

Package bdcache provides a high-performance cache with S3-FIFO eviction and optional persistence.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Entry

type Entry[K comparable, V any] struct {
	Key       K
	Value     V
	Expiry    time.Time
	UpdatedAt time.Time
}

Entry represents a cache entry with its metadata.

type MemoryCache added in v0.9.7

type MemoryCache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

MemoryCache is a fast in-memory cache without persistence. All operations are context-free and never return errors.

func Memory added in v0.9.7

func Memory[K comparable, V any](opts ...Option) *MemoryCache[K, V]

Memory creates a new memory-only cache.

Example:

cache := bdcache.Memory[string, User](
    bdcache.WithSize(10000),
    bdcache.WithTTL(time.Hour),
)
defer cache.Close()

cache.Set("user:123", user)              // uses default TTL
cache.Set("user:123", user, time.Hour)   // explicit TTL
user, ok := cache.Get("user:123")

func (*MemoryCache[K, V]) Close added in v0.9.7

func (*MemoryCache[K, V]) Close()

Close releases resources held by the cache. For MemoryCache this is a no-op, but provided for API consistency.

func (*MemoryCache[K, V]) Delete added in v0.9.7

func (c *MemoryCache[K, V]) Delete(key K)

Delete removes a value from the cache.

func (*MemoryCache[K, V]) Flush added in v0.9.7

func (c *MemoryCache[K, V]) Flush() int

Flush removes all entries from the cache. Returns the number of entries removed.

func (*MemoryCache[K, V]) Get added in v0.9.7

func (c *MemoryCache[K, V]) Get(key K) (V, bool)

Get retrieves a value from the cache. Returns the value and true if found, or the zero value and false if not found.

func (*MemoryCache[K, V]) GetOrSet added in v0.9.7

func (c *MemoryCache[K, V]) GetOrSet(key K, loader func() V, ttl ...time.Duration) V

GetOrSet retrieves a value from the cache, or computes and stores it if not found. The loader function is only called if the key is not in the cache. If no TTL is provided, the default TTL is used. This is optimized to perform a single shard lookup and lock acquisition.
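The "loader runs only on a miss, under a single lock acquisition" semantics can be illustrated with a deliberately simplified sketch (bdcache's real implementation is sharded and TTL-aware; this single-mutex version only shows the contract):

```go
package main

import "sync"

// simpleCache illustrates GetOrSet semantics with one mutex-guarded
// map. This is a sketch, not bdcache's implementation.
type simpleCache[K comparable, V any] struct {
	mu sync.Mutex
	m  map[K]V
}

// GetOrSet returns the cached value if present; otherwise it runs the
// loader exactly once and stores the result, all under one lock.
func (c *simpleCache[K, V]) GetOrSet(key K, loader func() V) V {
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.m[key]; ok {
		return v // hit: loader never runs
	}
	v := loader()
	c.m[key] = v
	return v
}
```

Holding the lock across the lookup and insert is what makes the check-then-store atomic with respect to other callers of the same shard.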

func (*MemoryCache[K, V]) Len added in v0.9.7

func (c *MemoryCache[K, V]) Len() int

Len returns the number of items in the cache.

func (*MemoryCache[K, V]) Set added in v0.9.7

func (c *MemoryCache[K, V]) Set(key K, value V, ttl ...time.Duration)

Set stores a value in the cache. If no TTL is provided, the default TTL is used. If no default TTL is configured, the item never expires.

func (*MemoryCache[K, V]) SetIfAbsent added in v0.9.8

func (c *MemoryCache[K, V]) SetIfAbsent(key K, value V, ttl ...time.Duration) (V, bool)

SetIfAbsent stores a value only if the key is not already in the cache. Returns the existing value and true if found, or the new value and false if inserted. This is optimized to perform a single shard lookup and lock acquisition.

type Option

type Option func(*config)

Option configures a MemoryCache or PersistentCache.

func WithSize added in v0.9.7

func WithSize(n int) Option

WithSize sets the maximum number of items in the memory cache.

func WithTTL added in v0.9.7

func WithTTL(d time.Duration) Option

WithTTL sets the default TTL for cache items. Items without an explicit TTL will use this value.

func WithWarmup

func WithWarmup(n int) Option

WithWarmup enables cache warmup by loading the N most recently updated entries from persistence on startup. Only applies to PersistentCache. By default, warmup is disabled (0). Set to a positive number to load that many entries.

type PersistenceLayer

type PersistenceLayer[K comparable, V any] interface {
	// ValidateKey checks if a key is valid for this persistence layer.
	// Returns an error if the key violates constraints.
	ValidateKey(key K) error

	// Load retrieves a value from persistent storage.
	// Returns the value, expiry time, whether it was found, and any error.
	Load(ctx context.Context, key K) (V, time.Time, bool, error)

	// Store saves a value to persistent storage with an expiry time.
	Store(ctx context.Context, key K, value V, expiry time.Time) error

	// Delete removes a value from persistent storage.
	Delete(ctx context.Context, key K) error

	// LoadRecent returns channels for streaming the most recently updated entries from persistent storage.
	// Used for warming up the cache on startup. Returns up to 'limit' most recently updated entries.
	// If limit is 0, returns all entries.
	// The entry channel should be closed when all entries have been sent.
	// If an error occurs, send it on the error channel.
	LoadRecent(ctx context.Context, limit int) (<-chan Entry[K, V], <-chan error)

	// Cleanup removes expired entries from persistent storage.
	// maxAge specifies how old entries must be before deletion.
	// Returns the number of entries deleted and any error.
	Cleanup(ctx context.Context, maxAge time.Duration) (int, error)

	// Location returns the storage location/identifier for a given key.
	// For file-based persistence, this returns the file path.
	// For database persistence, this returns the database key/ID.
	// Useful for testing and debugging to verify where items are stored.
	Location(key K) string

	// Flush removes all entries from persistent storage.
	// Returns the number of entries removed and any error.
	Flush(ctx context.Context) (int, error)

	// Len returns the number of entries in persistent storage.
	Len(ctx context.Context) (int, error)

	// Close releases any resources held by the persistence layer.
	Close() error
}

PersistenceLayer defines the interface for cache persistence backends.

type PersistentCache added in v0.9.7

type PersistentCache[K comparable, V any] struct {
	// Store provides direct access to the persistence layer.
	// Use this for persistence-specific operations:
	//   cache.Store.Len(ctx)
	//   cache.Store.Flush(ctx)
	//   cache.Store.Cleanup(ctx, maxAge)
	Store PersistenceLayer[K, V]
	// contains filtered or unexported fields
}

PersistentCache is a cache backed by both memory and persistent storage. Core operations require context for I/O, while memory operations like Len() do not.

func Persistent added in v0.9.7

func Persistent[K comparable, V any](ctx context.Context, p PersistenceLayer[K, V], opts ...Option) (*PersistentCache[K, V], error)

Persistent creates a cache with persistence backing.

Example:

store, _ := localfs.New[string, User]("myapp", "")
cache, err := bdcache.Persistent[string, User](ctx, store,
    bdcache.WithSize(10000),
    bdcache.WithTTL(time.Hour),
    bdcache.WithWarmup(1000),
)
if err != nil {
    return err
}
defer cache.Close()

cache.Set(ctx, "user:123", user)              // uses default TTL
cache.Set(ctx, "user:123", user, time.Hour)   // explicit TTL
user, ok, err := cache.Get(ctx, "user:123")
storeCount, _ := cache.Store.Len(ctx)

func (*PersistentCache[K, V]) Close added in v0.9.7

func (c *PersistentCache[K, V]) Close() error

Close releases resources held by the cache.

func (*PersistentCache[K, V]) Delete added in v0.9.7

func (c *PersistentCache[K, V]) Delete(ctx context.Context, key K) error

Delete removes a value from the cache. The value is always removed from memory. Returns an error if persistence deletion fails.

func (*PersistentCache[K, V]) Flush added in v0.9.7

func (c *PersistentCache[K, V]) Flush(ctx context.Context) (int, error)

Flush removes all entries from the cache, including persistent storage. Returns the total number of entries removed from memory and persistence.

func (*PersistentCache[K, V]) Get added in v0.9.7

func (c *PersistentCache[K, V]) Get(ctx context.Context, key K) (V, bool, error)

Get retrieves a value from the cache. It first checks the memory cache, then falls back to persistence.

func (*PersistentCache[K, V]) GetOrSet added in v0.9.7

func (c *PersistentCache[K, V]) GetOrSet(ctx context.Context, key K, loader func(context.Context) (V, error), ttl ...time.Duration) (V, error)

GetOrSet retrieves a value from the cache, or computes and stores it if not found. The loader function is only called if the key is not in the cache. If no TTL is provided, the default TTL is used. If the loader returns an error, it is propagated.

func (*PersistentCache[K, V]) Len added in v0.9.7

func (c *PersistentCache[K, V]) Len() int

Len returns the number of items in the memory cache. For persistence item count, use cache.Store.Len(ctx).

func (*PersistentCache[K, V]) Set added in v0.9.7

func (c *PersistentCache[K, V]) Set(ctx context.Context, key K, value V, ttl ...time.Duration) error

Set stores a value in the cache. If no TTL is provided, the default TTL is used. The value is ALWAYS stored in memory, even if persistence fails. Returns an error if the key violates persistence constraints or if persistence fails.

func (*PersistentCache[K, V]) SetAsync added in v0.9.7

func (c *PersistentCache[K, V]) SetAsync(ctx context.Context, key K, value V, ttl ...time.Duration) (<-chan error, error)

SetAsync stores a value in the cache, handling persistence asynchronously. If no TTL is provided, the default TTL is used. Key validation and in-memory caching happen synchronously. Returns a channel that will receive any persistence error (or be closed on success). Returns an error only for validation failures (e.g., invalid key format).
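The "channel that receives an error or is closed on success" shape lets callers choose between fire-and-forget and blocking until the write is durable. A self-contained sketch of the pattern (persistAsync is a hypothetical stand-in, not bdcache's API):

```go
package main

import "errors"

// persistAsync stands in for the persistence half of SetAsync: it
// returns a buffered channel that receives one error on failure, or is
// closed without a send on success.
func persistAsync(fail bool) <-chan error {
	done := make(chan error, 1) // buffered: the goroutine never blocks
	go func() {
		if fail {
			done <- errors.New("persist failed")
			return
		}
		close(done) // success: a receive now yields nil
	}()
	return done
}
```

Because a receive from a closed channel yields the zero value, a single `err := <-done` covers both outcomes: nil on success, non-nil on failure. Callers that don't care simply ignore the channel.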

Directories

Path Synopsis
persist
cloudrun module
datastore module
localfs module
