bdcache

package module
v0.6.0 Latest
Published: Nov 21, 2025 License: Apache-2.0 Imports: 8 Imported by: 0

README

bdcache - Big Dumb Cache


Fast, persistent Go cache with S3-FIFO eviction - better hit rates than LRU, survives restarts with pluggable persistence backends, zero allocations.

Install

go get github.com/codeGROOVE-dev/bdcache

Use

import (
    "github.com/codeGROOVE-dev/bdcache"
    "github.com/codeGROOVE-dev/bdcache/persist/localfs"
)

// Memory only
cache, _ := bdcache.New[string, int](ctx)
cache.Set(ctx, "answer", 42, 0)           // Synchronous: returns after persistence completes
cache.SetAsync(ctx, "answer", 42, 0)      // Async: returns immediately, persists in background
val, found, _ := cache.Get(ctx, "answer")

// With local file persistence
p, _ := localfs.New[string, User]("myapp", "")
cache, _ := bdcache.New[string, User](ctx,
    bdcache.WithPersistence(p))

// With Valkey/Redis persistence
p, _ := valkey.New[string, User](ctx, "myapp", "localhost:6379")
cache, _ := bdcache.New[string, User](ctx,
    bdcache.WithPersistence(p))

// Cloud Run auto-detection (datastore in Cloud Run, localfs elsewhere)
p, _ := cloudrun.New[string, User](ctx, "myapp")
cache, _ := bdcache.New[string, User](ctx,
    bdcache.WithPersistence(p))
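
A common pattern on top of this API is read-through caching: check the cache first and only compute the value on a miss. The sketch below is illustrative rather than part of the package; loadUser is a hypothetical expensive lookup and User is the same placeholder type used above.

// Read-through sketch: serve from cache, compute and store on a miss.
func cachedUser(ctx context.Context, cache *bdcache.Cache[string, User], id string) (User, error) {
    if u, found, err := cache.Get(ctx, id); err == nil && found {
        return u, nil // served from memory or persistence
    }
    u, err := loadUser(ctx, id) // hypothetical expensive lookup
    if err != nil {
        return User{}, err
    }
    // Persistence errors are not fatal here: the value is still cached in memory.
    _ = cache.SetAsync(ctx, id, u, 10*time.Minute)
    return u, nil
}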

Features

  • S3-FIFO eviction - Better hit rates than LRU
  • Type safe - Go generics
  • Pluggable persistence - Bring your own database or use the built-in backends (localfs, datastore, cloudrun)
  • Graceful degradation - Cache keeps working even if persistence fails
  • Per-item TTL - Optional expiration

Performance

Benchmarks on MacBook Pro M4 Max comparing memory-only Get operations:

Library      Algorithm   ns/op   Allocations   Persistence
bdcache      S3-FIFO      8.61   0 allocs      ✅ Auto (Local files + GCP Datastore)
golang-lru   LRU         13.02   0 allocs      ❌ None
otter        S3-FIFO     14.58   0 allocs      ⚠️ Manual (Save/Load entire cache)
ristretto    TinyLFU     30.53   0 allocs      ❌ None

⚠️ Benchmark Disclaimer: These benchmarks are highly cherry-picked to show S3-FIFO's advantages. Different cache implementations excel at different workloads - LRU may outperform S3-FIFO in some scenarios, while TinyLFU shines in others. Performance varies based on access patterns, working set size, and hardware.

The real differentiator is bdcache's automatic per-item persistence designed for unreliable environments like Cloud Run and Kubernetes, where shutdowns are unpredictable. See benchmarks/ for methodology.

Key advantage:

  • Automatic persistence for unreliable environments - per-item writes to local files or Google Cloud Datastore survive unexpected shutdowns (Cloud Run, Kubernetes), container restarts, and crashes without manual save/load choreography

Also competitive on:

  • Speed - comparable to or faster than alternatives on typical workloads
  • Hit rates - S3-FIFO protects hot data from scans in specific scenarios
  • Zero allocations - efficient for high-frequency operations

Competitive Analysis

An independent benchmark using scalalang2/go-cache-benchmark (500K items, Zipfian distribution) shows bdcache consistently ranking in the top two for hit rate across all cache sizes:

  • 0.1% cache size: bdcache 48.12% vs SIEVE 47.42%, TinyLFU 47.37%
  • 1% cache size: bdcache 64.45% vs TinyLFU 63.94%, Otter 63.60%
  • 10% cache size: bdcache 80.39% vs TinyLFU 80.43%, Otter 79.86%

See benchmarks/ for detailed methodology and running instructions.

License

Apache 2.0

Documentation

Overview

Package bdcache provides a high-performance cache with S3-FIFO eviction and optional persistence.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Cache

type Cache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

Cache is a generic cache with memory and optional persistence layers.

func New

func New[K comparable, V any](ctx context.Context, options ...Option) (*Cache[K, V], error)

New creates a new cache with the given options.

func (*Cache[K, V]) Cleanup

func (c *Cache[K, V]) Cleanup() int

Cleanup removes expired entries from the cache. Returns the number of entries removed.
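
If the TTLs you set are short and you want expired entries reclaimed without waiting for eviction, you could run Cleanup periodically. A minimal sketch (the one-hour interval is an arbitrary choice, and the goroutine runs until the program exits):

go func() {
	t := time.NewTicker(time.Hour)
	defer t.Stop()
	for range t.C {
		removed := cache.Cleanup() // drop expired entries
		log.Printf("cache cleanup removed %d entries", removed)
	}
}()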

func (*Cache[K, V]) Close

func (c *Cache[K, V]) Close() error

Close releases resources held by the cache.

func (*Cache[K, V]) Delete

func (c *Cache[K, V]) Delete(ctx context.Context, key K)

Delete removes a value from the cache.

func (*Cache[K, V]) Get

func (c *Cache[K, V]) Get(ctx context.Context, key K) (V, bool, error)

Get retrieves a value from the cache. It first checks the memory cache, then falls back to persistence if available.
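
A sketch of handling the three return values; whether a persistence error is treated as a miss or surfaced is up to the caller:

v, found, err := cache.Get(ctx, "answer")
switch {
case err != nil:
	// An error here typically means the persistence lookup failed.
case !found:
	// Not in memory and not in persistence.
default:
	use(v) // hypothetical consumer of the cached value
}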

func (*Cache[K, V]) Len

func (c *Cache[K, V]) Len() int

Len returns the number of items in the memory cache.

func (*Cache[K, V]) Set

func (c *Cache[K, V]) Set(ctx context.Context, key K, value V, ttl time.Duration) error

Set stores a value in the cache with an optional TTL. A zero TTL means no expiration (or uses DefaultTTL if configured). The value is always stored in memory, even if persistence fails; the returned error reports a key-validation or persistence failure.
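
Because the value stays in memory even when Set returns an error, callers can often log the failure and continue serving from memory; a sketch:

if err := cache.Set(ctx, "answer", 42, time.Hour); err != nil {
	// The value is cached in memory; the error reports a key-validation
	// or persistence failure.
	log.Printf("persist failed: %v", err)
}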

func (*Cache[K, V]) SetAsync added in v0.6.0

func (c *Cache[K, V]) SetAsync(ctx context.Context, key K, value V, ttl time.Duration) error

SetAsync adds or updates a value in the cache with optional TTL, handling persistence asynchronously. Key validation and in-memory caching happen synchronously. Persistence errors are logged but not returned. Returns an error only for validation failures (e.g., invalid key format).
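
A sketch contrasting the two write paths on a latency-sensitive code path:

// Blocks until the persistence write completes (or fails).
if err := cache.Set(ctx, "answer", 42, time.Hour); err != nil {
	log.Printf("set: %v", err)
}

// Returns as soon as the value is in memory; persistence happens in the
// background, and only key-validation errors are returned here.
if err := cache.SetAsync(ctx, "answer", 42, time.Hour); err != nil {
	log.Printf("invalid key: %v", err)
}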

type Entry

type Entry[K comparable, V any] struct {
	Key       K
	Value     V
	Expiry    time.Time
	UpdatedAt time.Time
}

Entry represents a cache entry with its metadata.

type Option

type Option func(*Options)

Option is a functional option for configuring a Cache.

func WithCleanup added in v0.6.0

func WithCleanup(maxAge time.Duration) Option

WithCleanup enables background cleanup of expired entries at startup. maxAge should be set to your maximum TTL value - entries older than this are deleted. This is a safety net for expired data and works alongside native Datastore TTL policies. If native TTL is properly configured, this cleanup will be fast (no-op).
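
A sketch, assuming one hour is the longest TTL used by the application and p is a persistence layer from one of the persist subpackages:

cache, _ := bdcache.New[string, int](ctx,
	bdcache.WithPersistence(p),
	bdcache.WithDefaultTTL(time.Hour),
	bdcache.WithCleanup(time.Hour)) // delete persisted entries older than the longest TTL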

func WithDefaultTTL

func WithDefaultTTL(d time.Duration) Option

WithDefaultTTL sets the default TTL for cache items.

func WithMemorySize

func WithMemorySize(n int) Option

WithMemorySize sets the maximum number of items in the memory cache.

func WithPersistence added in v0.6.0

func WithPersistence[K comparable, V any](p PersistenceLayer[K, V]) Option

WithPersistence sets the persistence layer for the cache. Pass a PersistenceLayer implementation from packages like:

  • github.com/codeGROOVE-dev/bdcache/persist/localfs
  • github.com/codeGROOVE-dev/bdcache/persist/datastore

Example:

p, _ := localfs.New[string, int]("myapp")
cache, _ := bdcache.New[string, int](ctx, bdcache.WithPersistence(p))

func WithWarmup

func WithWarmup(n int) Option

WithWarmup enables cache warmup by loading the N most recently updated entries from persistence on startup. By default, warmup is disabled (0). Set to a positive number to load that many entries.
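
A sketch, where p is an existing persistence layer holding previously cached entries:

cache, _ := bdcache.New[string, int](ctx,
	bdcache.WithPersistence(p), // warmup loads from this persistence layer
	bdcache.WithWarmup(1000))   // preload the 1,000 most recently updated entries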

type Options

type Options struct {
	Persister      any
	MemorySize     int
	DefaultTTL     time.Duration
	WarmupLimit    int
	CleanupMaxAge  time.Duration
	CleanupEnabled bool
}

Options configures a Cache instance.
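
These fields are normally populated through the functional options above rather than set directly. A sketch combining several options (the values are illustrative and p is any PersistenceLayer implementation):

cache, err := bdcache.New[string, string](ctx,
	bdcache.WithMemorySize(100000),    // cap the in-memory cache at 100k items
	bdcache.WithDefaultTTL(time.Hour), // applied when Set is called with a zero TTL
	bdcache.WithWarmup(500),           // preload recent entries from persistence
	bdcache.WithCleanup(time.Hour),    // purge persisted entries older than the longest TTL
	bdcache.WithPersistence(p),
)
if err != nil {
	// handle construction error
}
defer cache.Close()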

type PersistenceLayer

type PersistenceLayer[K comparable, V any] interface {
	// ValidateKey checks if a key is valid for this persistence layer.
	// Returns an error if the key violates constraints.
	ValidateKey(key K) error

	// Load retrieves a value from persistent storage.
	// Returns the value, expiry time, whether it was found, and any error.
	Load(ctx context.Context, key K) (V, time.Time, bool, error)

	// Store saves a value to persistent storage with an expiry time.
	Store(ctx context.Context, key K, value V, expiry time.Time) error

	// Delete removes a value from persistent storage.
	Delete(ctx context.Context, key K) error

	// LoadRecent returns channels for streaming the most recently updated entries from persistent storage.
	// Used for warming up the cache on startup. Returns up to 'limit' most recently updated entries.
	// If limit is 0, returns all entries.
	// The entry channel should be closed when all entries have been sent.
	// If an error occurs, send it on the error channel.
	LoadRecent(ctx context.Context, limit int) (<-chan Entry[K, V], <-chan error)

	// Cleanup removes expired entries from persistent storage.
	// maxAge specifies how old entries must be before deletion.
	// Returns the number of entries deleted and any error.
	Cleanup(ctx context.Context, maxAge time.Duration) (int, error)

	// Location returns the storage location/identifier for a given key.
	// For file-based persistence, this returns the file path.
	// For database persistence, this returns the database key/ID.
	// Useful for testing and debugging to verify where items are stored.
	Location(key K) string

	// Close releases any resources held by the persistence layer.
	Close() error
}

PersistenceLayer defines the interface for cache persistence backends.
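
Any type that satisfies this interface can be plugged in with WithPersistence. The sketch below is a hypothetical in-memory backend (no durability, mainly useful in tests), not one of the package's built-in implementations; it shows roughly what each method has to do. It assumes the standard library packages context, fmt, sort, sync, and time.

// memStore is a toy PersistenceLayer backed by a map.
type memStore[K comparable, V any] struct {
	mu      sync.Mutex
	entries map[K]bdcache.Entry[K, V]
}

func newMemStore[K comparable, V any]() *memStore[K, V] {
	return &memStore[K, V]{entries: map[K]bdcache.Entry[K, V]{}}
}

func (m *memStore[K, V]) ValidateKey(K) error { return nil } // accept any key

func (m *memStore[K, V]) Load(_ context.Context, key K) (V, time.Time, bool, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	e, ok := m.entries[key]
	return e.Value, e.Expiry, ok, nil
}

func (m *memStore[K, V]) Store(_ context.Context, key K, value V, expiry time.Time) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.entries[key] = bdcache.Entry[K, V]{Key: key, Value: value, Expiry: expiry, UpdatedAt: time.Now()}
	return nil
}

func (m *memStore[K, V]) Delete(_ context.Context, key K) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.entries, key)
	return nil
}

func (m *memStore[K, V]) LoadRecent(ctx context.Context, limit int) (<-chan bdcache.Entry[K, V], <-chan error) {
	entries := make(chan bdcache.Entry[K, V])
	errs := make(chan error, 1)
	go func() {
		defer close(entries)
		// Copy under the lock, then stream without holding it.
		m.mu.Lock()
		snapshot := make([]bdcache.Entry[K, V], 0, len(m.entries))
		for _, e := range m.entries {
			snapshot = append(snapshot, e)
		}
		m.mu.Unlock()
		// Most recently updated first, capped at limit (0 means all).
		sort.Slice(snapshot, func(i, j int) bool { return snapshot[i].UpdatedAt.After(snapshot[j].UpdatedAt) })
		if limit > 0 && limit < len(snapshot) {
			snapshot = snapshot[:limit]
		}
		for _, e := range snapshot {
			select {
			case entries <- e:
			case <-ctx.Done():
				errs <- ctx.Err()
				return
			}
		}
	}()
	return entries, errs
}

func (m *memStore[K, V]) Cleanup(_ context.Context, maxAge time.Duration) (int, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	cutoff := time.Now().Add(-maxAge)
	deleted := 0
	for k, e := range m.entries {
		if e.UpdatedAt.Before(cutoff) {
			delete(m.entries, k)
			deleted++
		}
	}
	return deleted, nil
}

func (m *memStore[K, V]) Location(key K) string { return fmt.Sprintf("mem:%v", key) }

func (m *memStore[K, V]) Close() error { return nil }

Such a backend would then be wired in with bdcache.New(ctx, bdcache.WithPersistence(newMemStore[string, int]())).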

Directories

Path Synopsis
persist
cloudrun module
datastore module
localfs module
