hypercache

package module
v0.1.8 Latest
Published: Aug 21, 2025 License: MPL-2.0 Imports: 15 Imported by: 0

README

HyperCache


Synopsis

HyperCache is a thread-safe high-performance cache implementation in Go that supports multiple backends with optional size limits, item expiration, and pluggable eviction algorithms. It can be used as a standalone cache (single process or distributed via Redis / Redis Cluster) or wrapped by the service interface to decorate operations with middleware (logging, metrics, tracing, etc.).

It is optimized for performance and flexibility:

  • Tunable expiration and eviction intervals (or fully proactive eviction when the eviction interval is set to 0).
  • Debounced & coalesced expiration trigger channel to avoid thrashing.
  • Non-blocking manual TriggerEviction() signal.
  • Serializer‑aware memory accounting (item size reflects the backend serialization format when available).
  • Multiple eviction algorithms with the ability to register custom ones.
  • Multiple stats collectors (default histogram) and middleware hooks.

It ships with a default histogram stats collector and several eviction algorithms. You can register new ones by implementing the Eviction Algorithm interface; see the eviction package for details.

Features
  • Thread-safe & lock‑optimized (sharded map + worker pool)
  • High-performance (low allocations on hot paths, pooled items, serializer‑aware sizing)
  • Multiple backends (extensible):
    1. In-memory
    2. Redis
    3. Redis Cluster
  • Item expiration & proactive expiration triggering (debounced/coalesced)
  • Background or proactive (interval = 0) eviction using pluggable algorithms
  • Manual, non-blocking eviction triggering (TriggerEviction())
  • Maximum cache size (bytes) & capacity (item count) controls
  • Serializer‑aware size accounting for consistent memory tracking across backends
  • Stats collection (histogram by default) + pluggable collectors
  • Middleware-friendly service wrapper (logging, metrics, tracing, custom)
  • Zero-cost if an interval is disabled (tickers are only created when > 0)

Installation

Install HyperCache:

go get -u github.com/hyp3rd/hypercache

Performance

Sample benchmark output (numbers will vary by hardware / Go version):

goos: darwin
goarch: arm64
pkg: github.com/hyp3rd/hypercache/tests/benchmark
cpu: Apple M2 Pro
BenchmarkHyperCache_Get-12                        72005894          66.71 ns/op         0 B/op           0 allocs/op
BenchmarkHyperCache_Get_ProactiveEviction-12      71068249          67.22 ns/op         0 B/op           0 allocs/op
BenchmarkHyperCache_List-12                       36435114          129.5 ns/op        80 B/op           1 allocs/op
BenchmarkHyperCache_Set-12                        10365289          587.4 ns/op       191 B/op           3 allocs/op
BenchmarkHyperCache_Set_Proactive_Eviction-12      3264818           1521 ns/op       282 B/op           5 allocs/op
PASS
ok   github.com/hyp3rd/hypercache/tests/benchmark 26.853s

Examples

To run the examples, use the following command:

make run-example group=eviction  # or any other example

For a complete list of examples, refer to the examples directory.

Observability (OpenTelemetry)

HyperCache provides optional OpenTelemetry middleware for tracing and metrics.

  • Tracing: wrap the service with middleware.NewOTelTracingMiddleware using a trace.Tracer.
  • Metrics: wrap with middleware.NewOTelMetricsMiddleware using a metric.Meter.

Example wiring:

svc := hypercache.ApplyMiddleware(svc,
    func(next hypercache.Service) hypercache.Service { return middleware.NewOTelTracingMiddleware(next, tracer) },
    func(next hypercache.Service) hypercache.Service { mw, _ := middleware.NewOTelMetricsMiddleware(next, meter); return mw },
)

Use your preferred OpenTelemetry SDK setup for exporters and processors in production; the snippet uses no-op providers for simplicity.
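
For local development or tests, the tracer and meter above can come from the OpenTelemetry no-op providers. A minimal sketch (the instrumentation name "hypercache" is arbitrary; import paths are from the OpenTelemetry Go API):

import (
    metricnoop "go.opentelemetry.io/otel/metric/noop"
    tracenoop "go.opentelemetry.io/otel/trace/noop"
)

// No-op providers: spans and metrics are accepted but never exported.
tracer := tracenoop.NewTracerProvider().Tracer("hypercache")
meter := metricnoop.NewMeterProvider().Meter("hypercache")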

Eviction algorithms

Available algorithm names you can pass to WithEvictionAlgorithm:

  • "lru" — Least Recently Used (default)
  • "lfu" — Least Frequently Used (with LRU tie-breaker for equal frequencies)
  • "clock" — Second-chance clock
  • "cawolfu" — Cache-Aware Write-Optimized LFU
  • "arc" — Adaptive Replacement Cache (experimental; not registered by default)

Note: ARC is experimental and isn’t included in the default registry. If you choose to use it, register it manually or enable it explicitly in your build.

API

NewInMemoryWithDefaults(capacity) is the quickest way to start:

cache, err := hypercache.NewInMemoryWithDefaults(100)
if err != nil {
    // handle error
}
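
Once created, the cache is ready to use. A minimal sketch based on the Set and Get signatures documented below (key, value, and TTL are arbitrary):

ctx := context.Background()

// Store a value with a one-hour TTL.
if err := cache.Set(ctx, "greeting", "hello", time.Hour); err != nil {
    // handle error
}

// Read it back; the boolean reports whether the item was found.
if value, ok := cache.Get(ctx, "greeting"); ok {
    fmt.Println(value)
}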

For fine‑grained control use NewConfig + New:

config := hypercache.NewConfig[backend.InMemory](constants.InMemoryBackend)
config.HyperCacheOptions = []hypercache.Option[backend.InMemory]{
    hypercache.WithEvictionInterval[backend.InMemory](time.Minute * 5),
    hypercache.WithEvictionAlgorithm[backend.InMemory]("cawolfu"),
    hypercache.WithExpirationTriggerDebounce[backend.InMemory](250 * time.Millisecond),
    hypercache.WithMaxEvictionCount[backend.InMemory](100),
    hypercache.WithMaxCacheSize[backend.InMemory](64 * 1024 * 1024), // 64 MiB logical cap
}
config.InMemoryOptions = []backend.Option[backend.InMemory]{ backend.WithCapacity[backend.InMemory](10) }

cache, err := hypercache.New(hypercache.GetDefaultManager(), config)
if err != nil {
    fmt.Fprintln(os.Stderr, err)
    return
}
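
When the cache is no longer needed, stop it to shut down the background expiration and eviction loops; eviction can also be requested manually. A short sketch using the Stop and TriggerEviction methods documented below:

defer cache.Stop() // stops the expiration and eviction loops

// Ask the eviction loop to run now; the call does not block.
cache.TriggerEviction()
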
Advanced options quick reference

  • WithEvictionInterval: periodic eviction loop; set to 0 for proactive per-write eviction.
  • WithExpirationInterval: periodic scan for expired items.
  • WithExpirationTriggerBuffer: buffer size for the coalesced expiration trigger channel.
  • WithExpirationTriggerDebounce: drop rapid-fire triggers within a window.
  • WithEvictionAlgorithm: select the eviction algorithm (lru, lfu, clock, cawolfu, arc*).
  • WithMaxEvictionCount: cap the number of items evicted per cycle.
  • WithMaxCacheSize: maximum cumulative serialized item size (bytes).
  • WithStatsCollector: choose the stats collector implementation.

*ARC is experimental (not registered by default).

Redis / Redis Cluster notes

When using Redis or Redis Cluster, item size accounting uses the configured serializer (e.g. msgpack) to align in-memory and remote representations. Provide the serializer via backend options (WithSerializer / WithClusterSerializer).

Refer to config.go for complete option definitions and the GoDoc on pkg.go.dev for an exhaustive API reference.

Usage

Full examples are too broad for a README; refer to the examples directory for a comprehensive overview.

License

The code and documentation in this project are released under Mozilla Public License 2.0.

Author

I'm a surfer and a software architect with 15 years of experience designing highly available, distributed production systems and developing cloud-native apps in public and private clouds. Feel free to connect with me on LinkedIn.


Documentation

Overview

Package hypercache provides a high-performance, generic caching library with configurable backends and eviction algorithms. It supports multiple backend types including in-memory and Redis, with various eviction strategies like LRU, LFU, and more. The package is designed to be flexible and extensible, allowing users to customize cache behavior through configuration options.

Example usage:

cache, err := hypercache.NewInMemoryWithDefaults(100)
if err != nil {
    // handle error
}
_ = cache.Set(context.Background(), "key", "value", time.Hour)
value, found := cache.Get(context.Background(), "key")

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func ApplyHyperCacheOptions added in v0.0.4

func ApplyHyperCacheOptions[T backend.IBackendConstrain](cache *HyperCache[T], options ...Option[T])

ApplyHyperCacheOptions applies the given options to the given cache.

Types

type BackendManager added in v0.1.3

type BackendManager struct {
	// contains filtered or unexported fields
}

BackendManager is a factory for creating HyperCache backend instances. It maintains a registry of backend constructors; they are stored as any internally and cast to the typed constructor at the call site based on T.

func GetDefaultManager added in v0.1.3

func GetDefaultManager() *BackendManager

GetDefaultManager returns a new BackendManager with default backends pre-registered. This replaces the previous global instance with a factory function.

func NewBackendManager added in v0.1.3

func NewBackendManager() *BackendManager

NewBackendManager creates a new BackendManager with default backends pre-registered.

func NewEmptyBackendManager added in v0.1.6

func NewEmptyBackendManager() *BackendManager

NewEmptyBackendManager creates a new BackendManager without default backends. This is useful for testing or when you want to register only specific backends.

func (*BackendManager) RegisterBackend added in v0.1.3

func (hcm *BackendManager) RegisterBackend(name string, constructor any)

RegisterBackend registers a new backend constructor. The constructor should be a value implementing IBackendConstructor[T] for some T; stored as any.

type Config added in v0.0.4

type Config[T backend.IBackendConstrain] struct {
	// BackendType is the type of the backend to use.
	BackendType string
	// InMemoryOptions is a slice of options that can be used to configure the `InMemory`.
	InMemoryOptions []backend.Option[backend.InMemory]
	// RedisOptions is a slice of options that can be used to configure the `Redis`.
	RedisOptions []backend.Option[backend.Redis]
	// RedisClusterOptions is a slice of options to configure the `RedisCluster` backend.
	RedisClusterOptions []backend.Option[backend.RedisCluster]
	// HyperCacheOptions is a slice of options that can be used to configure `HyperCache`.
	HyperCacheOptions []Option[T]
}

Config is a struct that wraps all the configuration options to set up `HyperCache` and its backend.

func NewConfig added in v0.0.4

func NewConfig[T backend.IBackendConstrain](backendType string) *Config[T]

NewConfig returns a new `Config` struct with default values:

  • `InMemoryOptions` is empty
  • `RedisOptions` is empty
  • `HyperCacheOptions` is set to:
    - `WithExpirationInterval[T](30 * time.Minute)`
    - `WithEvictionAlgorithm[T]("lfu")`
    - `WithEvictionInterval[T](10 * time.Minute)`

Each of the above defaults can be overridden by adding the corresponding option to the returned `Config` (for example via the `HyperCacheOptions` field), customizing the behavior of `HyperCache` and its backend.

type HyperCache

type HyperCache[T backend.IBackendConstrain] struct {

	// StatsCollector to collect cache statistics
	StatsCollector stats.ICollector
	// contains filtered or unexported fields
}

HyperCache is a cache that stores items with a key and expiration duration. It supports multiple backends and multiple eviction algorithms. The default in-memory implementation uses a custom `ConcurrentMap` to store the items. The configuration is provided by the `Config` struct and can be customized using the `With` functions. The cache has two loops that run in the background:

  • The expiration loop runs every `expirationInterval` and checks for expired items.
  • The eviction loop runs every `evictionInterval` and evicts items using the eviction algorithm.

The cache leverages two channels to signal the expiration and eviction loops to start:

  • The expirationTriggerCh channel is used to signal the expiration loop to start.
  • The evictCh channel is used to signal the eviction loop to start.

The cache also has a mutex that protects the eviction algorithm from concurrent access. The stop channel is used to signal the expiration and eviction loops to stop.

func New added in v0.0.5

func New[T backend.IBackendConstrain](bm *BackendManager, config *Config[T]) (*HyperCache[T], error)

New initializes a new HyperCache with the given configuration. The default configuration is:

  • The eviction interval is set to 5 minutes.
  • The eviction algorithm is set to LRU.
  • The expiration interval is set to 30 minutes.
  • The stats collector is set to the HistogramStatsCollector stats collector.

func NewInMemoryWithDefaults added in v0.0.5

func NewInMemoryWithDefaults(capacity int) (*HyperCache[backend.InMemory], error)

NewInMemoryWithDefaults initializes a new HyperCache with the default configuration. The default configuration is:

  • The eviction interval is set to 10 minutes.
  • The eviction algorithm is set to LRU.
  • The expiration interval is set to 30 minutes.
  • The capacity of the in-memory backend is set to the provided capacity; 0 means no limit.
  • The maximum cache size in bytes is set to 0 (no limit).

func (*HyperCache[T]) Allocation added in v0.1.0

func (hyperCache *HyperCache[T]) Allocation() int64

Allocation returns the size allocation in bytes of the current cache.

func (*HyperCache[T]) Capacity

func (hyperCache *HyperCache[T]) Capacity() int

Capacity returns the capacity of the cache.

func (*HyperCache[T]) Clear

func (hyperCache *HyperCache[T]) Clear(ctx context.Context) error

Clear removes all items from the cache.

func (*HyperCache[T]) Count added in v0.1.0

func (hyperCache *HyperCache[T]) Count(ctx context.Context) int

Count returns the number of items in the cache.

func (*HyperCache[T]) Get

func (hyperCache *HyperCache[T]) Get(ctx context.Context, key string) (any, bool)

Get retrieves the item with the given key from the cache returning the value and a boolean indicating if the item was found.

func (*HyperCache[T]) GetMultiple

func (hyperCache *HyperCache[T]) GetMultiple(ctx context.Context, keys ...string) (map[string]any, map[string]error)

GetMultiple retrieves the items with the given keys from the cache.
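
A usage sketch against the signature above, assuming the second return value maps keys to per-key retrieval errors:

found, failed := cache.GetMultiple(ctx, "a", "b", "c")
for key, value := range found {
    fmt.Println(key, value)
}
for key, err := range failed {
    fmt.Println("could not get", key, err)
}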

func (*HyperCache[T]) GetOrSet

func (hyperCache *HyperCache[T]) GetOrSet(ctx context.Context, key string, value any, expiration time.Duration) (any, error)

GetOrSet retrieves the item with the given key. If the item is not found, it adds the item to the cache with the given value and expiration duration. If the capacity of the cache is reached, the eviction algorithm is leveraged to make room.
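
For example (a sketch; the key, value, and TTL are arbitrary, and the stored value is returned if the key already exists):

value, err := cache.GetOrSet(ctx, "answer", 42, 10*time.Minute)
if err != nil {
    // handle error
}
fmt.Println(value)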

func (*HyperCache[T]) GetStats added in v0.0.4

func (hyperCache *HyperCache[T]) GetStats() stats.Stats

GetStats returns the stats collected by the cache.

func (*HyperCache[T]) GetWithInfo added in v0.1.0

func (hyperCache *HyperCache[T]) GetWithInfo(ctx context.Context, key string) (*cache.Item, bool)

GetWithInfo retrieves the item with the given key from the cache returning the `Item` object and a boolean indicating if the item was found.

func (*HyperCache[T]) List

func (hyperCache *HyperCache[T]) List(ctx context.Context, filters ...backend.IFilter) ([]*cache.Item, error)

List lists the items in the cache that meet the specified criteria. It accepts a variadic number of backend.IFilter filters, checks the backend type, and calls the corresponding implementation of the List function for that backend, passing the filters along.

func (*HyperCache[T]) MaxCacheSize added in v0.1.0

func (hyperCache *HyperCache[T]) MaxCacheSize() int64

MaxCacheSize returns the maximum size in bytes of the cache.

func (*HyperCache[T]) Remove

func (hyperCache *HyperCache[T]) Remove(ctx context.Context, keys ...string) error

Remove removes the items with the given keys from the cache. If an item is not found, it does nothing.

func (*HyperCache[T]) Set

func (hyperCache *HyperCache[T]) Set(ctx context.Context, key string, value any, expiration time.Duration) error

Set adds an item to the cache with the given key and value. If an item with the same key already exists, it updates the value of the existing item. If the expiration duration is greater than zero, the item will expire after the specified duration. If the capacity of the cache is reached, the cache evicts proactively using the eviction algorithm when the evictionInterval is zero; otherwise, the background eviction loop takes care of it.

func (*HyperCache[T]) SetCapacity

func (hyperCache *HyperCache[T]) SetCapacity(ctx context.Context, capacity int)

SetCapacity sets the capacity of the cache. If the new capacity is smaller than the current number of items in the cache, it evicts the excess items from the cache.

func (*HyperCache[T]) Stop

func (hyperCache *HyperCache[T]) Stop()

Stop stops the expiration and eviction loops and closes the stop channel.

func (*HyperCache[T]) TriggerEviction

func (hyperCache *HyperCache[T]) TriggerEviction()

TriggerEviction sends a signal to the eviction loop to start.

type IBackendConstructor added in v0.1.3

type IBackendConstructor[T backend.IBackendConstrain] interface {
	Create(cfg *Config[T]) (backend.IBackend[T], error)
}

IBackendConstructor is an interface for backend constructors with type safety. It returns a typed backend.IBackend[T] instead of any.

type InMemoryBackendConstructor added in v0.1.3

type InMemoryBackendConstructor struct{}

InMemoryBackendConstructor constructs InMemory backends.

func (InMemoryBackendConstructor) Create added in v0.1.3

Create creates a new InMemory backend.

type JobFunc added in v0.1.3

type JobFunc func() error

JobFunc is a function that can be enqueued in a worker pool.

type Middleware added in v0.0.4

type Middleware func(Service) Service

Middleware describes a service middleware.

type Option

type Option[T backend.IBackendConstrain] func(*HyperCache[T])

Option is a function type that can be used to configure the `HyperCache` struct.

func WithEvictionAlgorithm added in v0.0.4

func WithEvictionAlgorithm[T backend.IBackendConstrain](name string) Option[T]

WithEvictionAlgorithm is an option that sets the eviction algorithm name field of the `HyperCache` struct. The eviction algorithm name determines which eviction algorithm will be used to evict items from the cache. The eviction algorithm name must be one of the following:

  • "LRU" (Least Recently Used) - Implemented in the `eviction/lru.go` file
  • "LFU" (Least Frequently Used) - Implemented in the `eviction/lfu.go` file
  • "CAWOLFU" (Cache-Aware Write-Optimized LFU) - Implemented in the `eviction/cawolfu.go` file
  • "FIFO" (First In First Out)
  • "RANDOM" (Random)
  • "CLOCK" (Clock) - Implemented in the `eviction/clock.go` file
  • "ARC" (Adaptive Replacement Cache) - Experimental (not enabled by default)
  • "TTL" (Time To Live)
  • "LFUDA" (Least Frequently Used with Dynamic Aging)
  • "SLRU" (Segmented Least Recently Used)

func WithEvictionInterval

func WithEvictionInterval[T backend.IBackendConstrain](evictionInterval time.Duration) Option[T]

WithEvictionInterval is an option that sets the eviction interval field of the `HyperCache` struct. The eviction interval determines how often the cache runs the eviction process to remove items according to the configured eviction algorithm.

func WithExpirationInterval

func WithExpirationInterval[T backend.IBackendConstrain](expirationInterval time.Duration) Option[T]

WithExpirationInterval is an option that sets the expiration interval field of the `HyperCache` struct. The expiration interval determines how often the cache will check for and remove expired items.

func WithExpirationTriggerBuffer added in v0.1.6

func WithExpirationTriggerBuffer[T backend.IBackendConstrain](size int) Option[T]

WithExpirationTriggerBuffer sets the buffer size of the expiration trigger channel. If set to <= 0, the default (capacity/2, minimum 1) is used.

func WithExpirationTriggerDebounce added in v0.1.6

func WithExpirationTriggerDebounce[T backend.IBackendConstrain](interval time.Duration) Option[T]

WithExpirationTriggerDebounce sets an optional debounce interval for coalescing expiration triggers. Triggers arriving within this interval after the last accepted trigger may be dropped.

func WithMaxCacheSize added in v0.1.0

func WithMaxCacheSize[T backend.IBackendConstrain](maxCacheSize int64) Option[T]

WithMaxCacheSize is an option that sets the maximum size of the cache in bytes, i.e. the maximum cumulative (serialized) size of the items that can be stored in the cache. If the maximum size is reached, items are evicted according to the configured eviction algorithm.

func WithMaxEvictionCount

func WithMaxEvictionCount[T backend.IBackendConstrain](maxEvictionCount uint) Option[T]

WithMaxEvictionCount is an option that sets the max eviction count field of the `HyperCache` struct. The max eviction count determines the maximum number of items that can be removed during a single eviction run.

func WithStatsCollector

func WithStatsCollector[T backend.IBackendConstrain](name string) Option[T]

WithStatsCollector is an option that sets the stats collector field of the `HyperCache` struct. The stats collector is used to collect statistics about the cache.

type RedisBackendConstructor added in v0.1.3

type RedisBackendConstructor struct{}

RedisBackendConstructor constructs Redis backends.

func (RedisBackendConstructor) Create added in v0.1.3

Create creates a new Redis backend.

type RedisClusterBackendConstructor added in v0.1.8

type RedisClusterBackendConstructor struct{}

RedisClusterBackendConstructor constructs Redis Cluster backends.

func (RedisClusterBackendConstructor) Create added in v0.1.8

Create creates a new Redis Cluster backend.

type Service added in v0.0.4

type Service interface {

	// Capacity returns the capacity of the cache
	Capacity() int
	// Allocation returns the allocation in bytes of the current cache
	Allocation() int64
	// Count returns the number of items in the cache
	Count(ctx context.Context) int
	// TriggerEviction triggers the eviction of the cache
	TriggerEviction()
	// Stop stops the cache
	Stop()
	// GetStats returns the stats of the cache
	GetStats() stats.Stats
	// contains filtered or unexported methods
}

Service is the service interface for the HyperCache. It enables middleware to be added to the service.

func ApplyMiddleware added in v0.0.4

func ApplyMiddleware(svc Service, mw ...Middleware) Service

ApplyMiddleware applies middlewares to a service.
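
Custom middleware can embed the Service interface so that every method, including unexported ones, is promoted from the wrapped service, and override only the calls it needs. A sketch (countingMiddleware and its counter are illustrative, not part of the package; atomic.Int64 is from sync/atomic):

type countingMiddleware struct {
    hypercache.Service // delegate all Service methods to the wrapped service
    evictions atomic.Int64
}

func (m *countingMiddleware) TriggerEviction() {
    m.evictions.Add(1)
    m.Service.TriggerEviction()
}

svc = hypercache.ApplyMiddleware(svc, func(next hypercache.Service) hypercache.Service {
    return &countingMiddleware{Service: next}
})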

type WorkerPool added in v0.1.3

type WorkerPool struct {
	// contains filtered or unexported fields
}

WorkerPool is a pool of workers that can execute jobs concurrently.

func NewWorkerPool added in v0.1.3

func NewWorkerPool(workers int) *WorkerPool

NewWorkerPool creates a new worker pool with the given number of workers.

func (*WorkerPool) Enqueue added in v0.1.3

func (pool *WorkerPool) Enqueue(job JobFunc)

Enqueue adds a job to the worker pool.

func (*WorkerPool) Errors added in v0.1.3

func (pool *WorkerPool) Errors() <-chan error

Errors returns a channel that can be used to receive errors from the worker pool.

func (*WorkerPool) Resize added in v0.1.3

func (pool *WorkerPool) Resize(newSize int)

Resize resizes the worker pool.

func (*WorkerPool) Shutdown added in v0.1.3

func (pool *WorkerPool) Shutdown()

Shutdown shuts down the worker pool. It waits for all jobs to finish.
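
A usage sketch based on the methods above (the job body is illustrative):

pool := hypercache.NewWorkerPool(4)

for i := 0; i < 10; i++ {
    i := i // capture the loop variable for the closure
    pool.Enqueue(func() error {
        fmt.Println("processing job", i)
        return nil
    })
}

// Errors returned by failed jobs can be received from pool.Errors().

pool.Shutdown() // waits for queued jobs to finish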

Directories

Path Synopsis
__examples
clear command
eviction command
get command
list command
observability command
redis command
service command
size command
stats command
internal
constants
Package constants defines default configuration values and backend types for the hypercache system.
introspect
Package introspect provides utilities for runtime inspection and type checking of cache backends.
libs/serializer
Package serializer provides serialization interfaces and implementations for converting Go values to and from byte slices.
sentinel
Package sentinel provides standardized error definitions for the hypercache system.
pkg
backend
Package backend provides interfaces and types for implementing cache backends.
backend/redis
Package redis provides configuration options and utilities for Redis backend implementation.
backend/rediscluster
Package rediscluster provides configuration options and utilities for Redis Cluster backend implementation.
cache
Package cache provides a thread-safe concurrent map implementation with sharding for improved performance in high-concurrency scenarios.
cache/v2
Package cachev2 provides a high-performance concurrent map implementation optimized for cache operations.
eviction
Package eviction - Adaptive Replacement Cache (ARC) algorithm implementation.
middleware
Package middleware provides various middleware implementations for the hypercache service.
stats
Package stats provides a comprehensive statistics collection system for hypercache.
