fido

Published: Jan 2, 2026 License: Apache-2.0 Imports: 11 Imported by: 0

README

Fido


fido is a high-performance cache for Go, focused on high hit rates, high throughput, and low latency. Built on carefully tuned algorithms and lock-free data structures, it is designed to thrive in unstable environments like Kubernetes, Cloud Run, or Borg, and offers an optional multi-tier persistence architecture.

As of January 2026, nobody fetches better - and we have the benchmarks to prove it.


Install

go get github.com/codeGROOVE-dev/fido

Use

c := fido.New[string, int](fido.Size(10000))
c.Set("answer", 42)
val, ok := c.Get("answer")

With persistence:

store, err := localfs.New[string, User]("myapp", "")
cache, err := fido.NewTiered(store)

err = cache.Set(ctx, "user:123", user)       // sync write
err = cache.SetAsync(ctx, "user:456", user)  // async write

Fetch deduplicates concurrent loads to prevent thundering herd situations:

user, err := cache.Fetch("user:123", func() (User, error) {
    return db.LoadUser("123")
})

Options

fido.Size(n)           // max entries (default 16384)
fido.TTL(time.Hour)    // default expiration

Persistence

Memory cache backed by durable storage. Reads check memory first; writes go to both.

Backend                    Import
Local filesystem           pkg/store/localfs
Valkey/Redis               pkg/store/valkey
Google Cloud Datastore     pkg/store/datastore
Auto-detect (Cloud Run)    pkg/store/cloudrun

For maximum efficiency, all backends support S2 or Zstd compression via pkg/store/compress.

Performance

fido has been exhaustively tested for performance using gocachemark.

Where fido wins:

  • Throughput: 727M int gets/sec avg (2.7X faster than otter). 70M string sets/sec avg (22X faster than otter).
  • Hit rate: Wins 6 of 9 workloads. Highest average across all datasets (+2.8% vs otter, +0.9% vs sieve).
  • Latency: 8ns int gets, 10ns string gets, zero allocations (4X lower latency than otter).

Where others win:

  • Memory: freelru and otter use less memory per entry (fido carries 49 bytes/item of overhead vs 15 for otter)
  • Specific workloads: sieve +0.5% on thesios-block, clock +0.1% on ibm-docker, theine +0.6% on zipf

Much of the credit for high throughput goes to puzpuzpuz/xsync and its lock-free data structures.

Run make benchmark for full results, or see benchmarks/gocachemark_results.md.

Algorithm

fido uses S3-FIFO, which features three queues: small (new entries), main (promoted entries), and ghost (recently evicted keys). New items enter small; items accessed twice move to main. The ghost queue tracks evicted keys in a bloom filter to fast-track their return.

fido has been hyper-tuned for high performance, and deviates from the original paper in a handful of ways:

  • Size-adaptive small queue - 12-15% vs paper's 10%, interpolated per cache size via binary search tuning
  • Full ghost frequency restoration - returning keys restore 100% of their previous access count
  • Increased frequency cap - max freq=5 vs paper's 3, tuned via binary search for best average hit rate
  • Death row - hot items (high peakFreq) get a second chance before eviction
  • Size-adaptive ghost capacity - 0.9x to 2.2x cache size, larger caches need more ghost tracking
  • Ghost frequency ring buffer - fixed-size 256-entry ring replaces map allocations

License

Apache 2.0

Documentation

Overview

Package fido provides a high-performance cache with optional persistence.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Cache

type Cache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

Cache is an in-memory cache. All operations are synchronous and infallible.

func New

func New[K comparable, V any](opts ...Option) *Cache[K, V]

New creates an in-memory cache.

func (*Cache[K, V]) Delete

func (c *Cache[K, V]) Delete(key K)

Delete removes a key from the cache.

func (*Cache[K, V]) Fetch added in v1.10.0

func (c *Cache[K, V]) Fetch(key K, loader func() (V, error)) (V, error)

Fetch returns cached value or calls loader to compute it. Concurrent calls for the same key share one loader invocation. Computed values are stored with the default TTL.

func (*Cache[K, V]) FetchTTL added in v1.10.0

func (c *Cache[K, V]) FetchTTL(key K, ttl time.Duration, loader func() (V, error)) (V, error)

FetchTTL is like Fetch but stores computed values with an explicit TTL.

func (*Cache[K, V]) Flush

func (c *Cache[K, V]) Flush() int

Flush removes all entries. Returns count removed.

func (*Cache[K, V]) Get

func (c *Cache[K, V]) Get(key K) (V, bool)

Get returns the value for key, or zero and false if not found.

func (*Cache[K, V]) Len

func (c *Cache[K, V]) Len() int

Len returns the number of entries.

func (*Cache[K, V]) Set

func (c *Cache[K, V]) Set(key K, value V)

Set stores a value using the default TTL specified at cache creation. If no default TTL was set, the entry never expires.

func (*Cache[K, V]) SetTTL

func (c *Cache[K, V]) SetTTL(key K, value V, ttl time.Duration)

SetTTL stores a value with an explicit TTL. A zero or negative TTL means the entry never expires.

type Option

type Option func(*config)

Option configures a Cache.

func Size

func Size(n int) Option

Size sets maximum entries. Default 16384.

func TTL

func TTL(d time.Duration) Option

TTL sets default expiration. Default 0 (none).

type Store

type Store[K comparable, V any] interface {
	ValidateKey(key K) error
	Get(ctx context.Context, key K) (V, time.Time, bool, error)
	Set(ctx context.Context, key K, value V, expiry time.Time) error
	Delete(ctx context.Context, key K) error
	Cleanup(ctx context.Context, maxAge time.Duration) (int, error)
	Flush(ctx context.Context) (int, error)
	Len(ctx context.Context) (int, error)
	Close() error
}

Store is the persistence backend interface.

type TieredCache

type TieredCache[K comparable, V any] struct {
	Store Store[K, V] // direct access to persistence layer
	// contains filtered or unexported fields
}

TieredCache combines an in-memory cache with persistent storage.

func NewTiered

func NewTiered[K comparable, V any](store Store[K, V], opts ...Option) (*TieredCache[K, V], error)

NewTiered creates a cache backed by the given store.

func (*TieredCache[K, V]) Close

func (c *TieredCache[K, V]) Close() error

Close releases store resources.

func (*TieredCache[K, V]) Delete

func (c *TieredCache[K, V]) Delete(ctx context.Context, key K) error

Delete removes from memory and persistence.

func (*TieredCache[K, V]) Fetch added in v1.10.0

func (c *TieredCache[K, V]) Fetch(ctx context.Context, key K, loader func(context.Context) (V, error)) (V, error)

Fetch returns cached value or calls loader. Concurrent calls share one loader. Computed values are stored with the default TTL.

func (*TieredCache[K, V]) FetchTTL added in v1.10.0

func (c *TieredCache[K, V]) FetchTTL(ctx context.Context, key K, ttl time.Duration, loader func(context.Context) (V, error)) (V, error)

FetchTTL is like Fetch but stores computed values with an explicit TTL.

func (*TieredCache[K, V]) Flush

func (c *TieredCache[K, V]) Flush(ctx context.Context) (int, error)

Flush clears memory and persistence. Returns total entries removed.

func (*TieredCache[K, V]) Get

func (c *TieredCache[K, V]) Get(ctx context.Context, key K) (V, bool, error)

Get checks memory, then persistence. Found values are cached in memory.

func (*TieredCache[K, V]) Len

func (c *TieredCache[K, V]) Len() int

Len returns the memory cache size. Use Store.Len for persistence count.

func (*TieredCache[K, V]) Set

func (c *TieredCache[K, V]) Set(ctx context.Context, key K, value V) error

Set stores to memory first (always), then persistence. Uses the default TTL specified at cache creation.

func (*TieredCache[K, V]) SetAsync

func (c *TieredCache[K, V]) SetAsync(ctx context.Context, key K, value V) error

SetAsync stores to memory synchronously, persistence asynchronously. Uses the default TTL. Persistence errors are logged, not returned.

func (*TieredCache[K, V]) SetAsyncTTL

func (c *TieredCache[K, V]) SetAsyncTTL(ctx context.Context, key K, value V, ttl time.Duration) error

SetAsyncTTL stores to memory synchronously, persistence asynchronously with explicit TTL. Persistence errors are logged, not returned.

func (*TieredCache[K, V]) SetTTL

func (c *TieredCache[K, V]) SetTTL(ctx context.Context, key K, value V, ttl time.Duration) error

SetTTL stores to memory first (always), then persistence with explicit TTL. A zero or negative TTL means the entry never expires.

Directories

Path Synopsis
pkg
store/localfs module
store/null module
