Published: Nov 20, 2025 License: Apache-2.0 Imports: 17 Imported by: 0

bdcache - Big Dumb Cache



Fast, persistent Go cache with S3-FIFO eviction: better hit rates than LRU, survives restarts via local files or Google Cloud Datastore, and zero allocations on the hot path.

Install

go get github.com/codeGROOVE-dev/bdcache

Use

// Memory only
cache, err := bdcache.New[string, int](ctx)
if err != nil {
    return err
}
if err := cache.Set(ctx, "answer", 42, 0); err != nil {
    return err
}
val, found, err := cache.Get(ctx, "answer")

// With smart persistence (local files for dev, Google Cloud Datastore for Cloud Run)
cache, err := bdcache.New[string, User](ctx, bdcache.WithBestStore("myapp"))

Features

  • S3-FIFO eviction - Better than LRU (learn more)
  • Type safe - Go generics
  • Persistence - Local files (gob) or Google Cloud Datastore (JSON)
  • Graceful degradation - Cache works even if persistence fails
  • Per-item TTL - Optional expiration

Performance

Benchmarks on MacBook Pro M4 Max comparing memory-only Get operations:

Library      Algorithm   ns/op   Allocations   Persistence
bdcache      S3-FIFO      8.61   0 allocs      ✅ Auto (local files + GCP Datastore)
golang-lru   LRU         13.02   0 allocs      ❌ None
otter        S3-FIFO     14.58   0 allocs      ⚠️ Manual (save/load entire cache)
ristretto    TinyLFU     30.53   0 allocs      ❌ None

⚠️ Benchmark Disclaimer: These benchmarks are highly cherry-picked to show S3-FIFO's advantages. Different cache implementations excel at different workloads - LRU may outperform S3-FIFO in some scenarios, while TinyLFU shines in others. Performance varies with access patterns, working set size, and hardware.

The real differentiator is bdcache's automatic per-item persistence designed for unreliable environments like Cloud Run and Kubernetes, where shutdowns are unpredictable. See benchmarks/ for methodology.

Key advantage:

  • Automatic persistence for unreliable environments - per-item writes to local files or Google Cloud Datastore survive unexpected shutdowns (Cloud Run, Kubernetes), container restarts, and crashes without manual save/load choreography

Also competitive on:

  • Speed - comparable to or faster than alternatives on typical workloads
  • Hit rates - S3-FIFO protects hot data from scans in specific scenarios
  • Zero allocations - efficient for high-frequency operations

Competitive Analysis

Independent benchmark using scalalang2/go-cache-benchmark (500K items, Zipfian distribution):

Hit Rate Leadership:

  • 0.1% cache size: bdcache 48.12% vs SIEVE 47.42%, TinyLFU 47.37%, S3-FIFO 47.16%
  • 1% cache size: bdcache 64.45% vs TinyLFU 63.94%, Otter 63.60%, S3-FIFO 63.59%, SIEVE 63.33%
  • 10% cache size: bdcache 80.39%, just behind TinyLFU 80.43%; Otter 79.86%, S3-FIFO 79.84%

bdcache consistently ranks in the top two for hit rate across all cache sizes while maintaining competitive throughput (5-12M QPS). The S3-FIFO implementation prioritizes cache efficiency over raw speed, making bdcache ideal when hit rate matters.

Detailed Benchmarks

Memory-only operations:

BenchmarkCache_Get_Hit-16      56M ops/sec    17.8 ns/op       0 B/op     0 allocs
BenchmarkCache_Set-16          56M ops/sec    17.8 ns/op       0 B/op     0 allocs

With file persistence enabled:

BenchmarkCache_Get_PersistMemoryHit-16    85M ops/sec    11.8 ns/op       0 B/op     0 allocs
BenchmarkCache_Get_PersistDiskRead-16     73K ops/sec    13.8 µs/op    7921 B/op   178 allocs
BenchmarkCache_Set_WithPersistence-16      9K ops/sec   112.3 µs/op    2383 B/op    36 allocs

License

Apache 2.0

Documentation

Overview

Package bdcache provides a high-performance cache with S3-FIFO eviction and optional persistence.

Index

Examples

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Cache

type Cache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

Cache is a generic cache with memory and optional persistence layers.

Example (Basic)
package main

import (
	"context"
	"fmt"

	"github.com/codeGROOVE-dev/bdcache"
)

func main() {
	ctx := context.Background()

	// Create a simple in-memory cache
	cache, err := bdcache.New[string, int](ctx)
	if err != nil {
		panic(err)
	}
	defer func() { _ = cache.Close() }() //nolint:errcheck // Example code, error handling would clutter the example

	// Store a value
	if err := cache.Set(ctx, "answer", 42, 0); err != nil {
		panic(err)
	}

	// Retrieve it
	val, found, err := cache.Get(ctx, "answer")
	if err != nil {
		panic(err)
	}
	if found {
		fmt.Printf("The answer is %d\n", val)
	}

}
Output:

The answer is 42
Example (StructValues)
package main

import (
	"context"
	"fmt"

	"github.com/codeGROOVE-dev/bdcache"
)

func main() {
	ctx := context.Background()

	type User struct {
		ID    int
		Name  string
		Email string
	}

	// Cache can store any type
	cache, err := bdcache.New[int, User](ctx)
	if err != nil {
		panic(err)
	}
	defer func() { _ = cache.Close() }() //nolint:errcheck // Example code, error handling would clutter the example

	// Store a struct
	user := User{
		ID:    1,
		Name:  "Alice",
		Email: "alice@example.com",
	}
	if err := cache.Set(ctx, user.ID, user, 0); err != nil {
		panic(err)
	}

	// Retrieve it
	retrieved, found, err := cache.Get(ctx, 1)
	if err != nil {
		panic(err)
	}
	if found {
		fmt.Printf("User: %s (%s)\n", retrieved.Name, retrieved.Email)
	}

}
Output:

User: Alice (alice@example.com)
Example (WithBestStore)
package main

import (
	"context"
	"fmt"

	"github.com/codeGROOVE-dev/bdcache"
)

func main() {
	ctx := context.Background()

	// Automatically selects best storage:
	// - Cloud Datastore if K_SERVICE env var is set (Cloud Run/Knative)
	// - Local files otherwise
	cache, err := bdcache.New[string, int](ctx,
		bdcache.WithBestStore("myapp"),
	)
	if err != nil {
		panic(err)
	}
	defer func() { _ = cache.Close() }() //nolint:errcheck // Example code, error handling would clutter the example

	if err := cache.Set(ctx, "counter", 100, 0); err != nil {
		panic(err)
	}

	val, found, err := cache.Get(ctx, "counter")
	if err != nil {
		panic(err)
	}
	if found {
		fmt.Printf("Counter: %d\n", val)
	}

}
Output:

Counter: 100
Example (WithLocalStore)
package main

import (
	"context"
	"fmt"

	"github.com/codeGROOVE-dev/bdcache"
)

func main() {
	ctx := context.Background()

	// Create cache with local file persistence
	cache, err := bdcache.New[string, string](ctx,
		bdcache.WithLocalStore("myapp"),
		bdcache.WithMemorySize(5000),
	)
	if err != nil {
		panic(err)
	}
	defer func() { _ = cache.Close() }() //nolint:errcheck // Example code, error handling would clutter the example

	// Values are cached in memory and persisted to disk
	if err := cache.Set(ctx, "config", "production", 0); err != nil {
		panic(err)
	}

	// After restart, values are loaded from disk automatically
	val, found, err := cache.Get(ctx, "config")
	if err != nil {
		panic(err)
	}
	if found {
		fmt.Printf("Config: %s\n", val)
	}

}
Output:

Config: production
Example (WithTTL)
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/codeGROOVE-dev/bdcache"
)

func main() {
	ctx := context.Background()

	// Create cache with default TTL
	cache, err := bdcache.New[string, string](ctx,
		bdcache.WithDefaultTTL(5*time.Minute),
	)
	if err != nil {
		panic(err)
	}
	defer func() { _ = cache.Close() }() //nolint:errcheck // Example code, error handling would clutter the example

	// Set with default TTL (ttl=0 uses configured DefaultTTL)
	if err := cache.Set(ctx, "session", "user-123", 0); err != nil {
		panic(err)
	}

	// Set with custom TTL (overrides default)
	if err := cache.Set(ctx, "token", "abc123", 1*time.Hour); err != nil {
		panic(err)
	}

	// Retrieve values
	session, found, err := cache.Get(ctx, "session")
	if err != nil {
		panic(err)
	}
	if found {
		fmt.Printf("Session: %s\n", session)
	}

}
Output:

Session: user-123

func New

func New[K comparable, V any](ctx context.Context, options ...Option) (*Cache[K, V], error)

New creates a new cache with the given options.

func (*Cache[K, V]) Cleanup

func (c *Cache[K, V]) Cleanup() int

Cleanup removes expired entries from the cache. Returns the number of entries removed.

func (*Cache[K, V]) Close

func (c *Cache[K, V]) Close() error

Close releases resources held by the cache.

func (*Cache[K, V]) Delete

func (c *Cache[K, V]) Delete(ctx context.Context, key K)

Delete removes a value from the cache.

func (*Cache[K, V]) Get

func (c *Cache[K, V]) Get(ctx context.Context, key K) (value V, found bool, err error)

Get retrieves a value from the cache. It first checks the memory cache, then falls back to persistence if available.

func (*Cache[K, V]) Len

func (c *Cache[K, V]) Len() int

Len returns the number of items in the memory cache.

func (*Cache[K, V]) Set

func (c *Cache[K, V]) Set(ctx context.Context, key K, value V, ttl time.Duration) error

Set stores a value in the cache with an optional TTL. A zero TTL means no expiration (or uses DefaultTTL if configured). The value is ALWAYS stored in memory, even if persistence fails. Returns an error if the key violates persistence constraints or if persistence fails. Even when an error is returned, the value is cached in memory.

type Entry

type Entry[K comparable, V any] struct {
	Key       K
	Value     V
	Expiry    time.Time
	UpdatedAt time.Time
}

Entry represents a cache entry with its metadata.

type Option

type Option func(*Options)

Option is a functional option for configuring a Cache.

func WithBestStore

func WithBestStore(cacheID string) Option

WithBestStore automatically selects the best persistence option:

  • If the K_SERVICE environment variable is set (Google Cloud Run/Knative): Cloud Datastore
  • Otherwise: local file store

func WithCloudDatastore

func WithCloudDatastore(cacheID string) Option

WithCloudDatastore enables Cloud Datastore persistence using the given cache ID as database ID. An empty project ID will auto-detect the correct project.

func WithDefaultTTL

func WithDefaultTTL(d time.Duration) Option

WithDefaultTTL sets the default TTL for cache items.

func WithLocalStore

func WithLocalStore(cacheID string) Option

WithLocalStore enables local file persistence using the given cache ID as subdirectory name. Files are stored in os.UserCacheDir()/cacheID.

func WithMemorySize

func WithMemorySize(n int) Option

WithMemorySize sets the maximum number of items in the memory cache.

func WithWarmup

func WithWarmup(n int) Option

WithWarmup enables cache warmup by loading the N most recently updated entries from persistence on startup. By default, warmup is disabled (0). Set to a positive number to load that many entries.

type Options

type Options struct {
	CacheID      string
	MemorySize   int
	DefaultTTL   time.Duration
	WarmupLimit  int
	UseDatastore bool
}

Options configures a Cache instance.

type PersistenceLayer

type PersistenceLayer[K comparable, V any] interface {
	// ValidateKey checks if a key is valid for this persistence layer.
	// Returns an error if the key violates constraints.
	ValidateKey(key K) error

	// Load retrieves a value from persistent storage.
	// Returns the value, expiry time, whether it was found, and any error.
	Load(ctx context.Context, key K) (V, time.Time, bool, error)

	// Store saves a value to persistent storage with an expiry time.
	Store(ctx context.Context, key K, value V, expiry time.Time) error

	// Delete removes a value from persistent storage.
	Delete(ctx context.Context, key K) error

	// LoadRecent returns channels for streaming the most recently updated entries from persistent storage.
	// Used for warming up the cache on startup. Returns up to 'limit' most recently updated entries.
	// The entry channel should be closed when all entries have been sent.
	// If an error occurs, send it on the error channel.
	LoadRecent(ctx context.Context, limit int) (<-chan Entry[K, V], <-chan error)

	// LoadAll returns channels for streaming all entries from persistent storage.
	// Equivalent to LoadRecent(ctx, 0).
	LoadAll(ctx context.Context) (<-chan Entry[K, V], <-chan error)

	// Close releases any resources held by the persistence layer.
	Close() error
}

PersistenceLayer defines the interface for cache persistence backends.
