tiercache

package module v1.0.3

Published: Mar 12, 2026 License: MIT Imports: 5 Imported by: 0

README
Tiercache

Tiercache is a flexible, multi-level caching library for Go. It helps improve application performance by orchestrating multiple caching layers, such as a fast in-memory cache and a larger distributed cache.

Features

  • Multi-Level Caching: Combine multiple cache stores (e.g., in-memory, Redis) into a single, cohesive cache.
  • Cache-Aside Pattern: Automatically fetches data from your primary data source on a cache miss and back-populates the cache layers.
  • Extensible: Use middleware to add custom logic like logging, metrics, or tracing to any cache store.
  • Generic: Works with any comparable key type and any value type.
  • Simple API: Easy-to-use interface for getting, setting, and deleting cache entries.

Installation

go get github.com/mbeoliero/tiercache

Presets (Quick Start)

Tiercache provides pre-configured cache patterns for common use cases in the preset package.

1. Distributed Cache (Redis + DB)

Ideal for most distributed applications requiring shared state.

import "github.com/mbeoliero/tiercache/preset"

// ...

// Create a standardized Redis cache
userCache := preset.NewRedisCache[int, User](
    redisClient,       // Redis client
    "users:",          // Key prefix
    time.Hour,         // Redis TTL
    func(ctx context.Context, id int) (User, error) {
        return db.GetUser(id) // Simple fetcher function
    },
)

// Use it
user, _, err := userCache.Get(ctx, 1001)

2. Hybrid Cache (Local + Redis + DB)

Perfect for hot data (e.g., configurations, popular content) to reduce network latency and Redis load.

// Create a local + Redis cache with local memory layer
configCache := preset.NewLocalAndRedisCache[string, Config](
    redisClient,
    "config:",
    time.Hour,         // Redis TTL
    5*time.Minute,     // Local Memory TTL
    func(ctx context.Context, key string) (Config, error) {
        return db.GetConfig(key)
    },
)
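Usage mirrors the Redis preset. The three-value return below assumes the preset exposes the same `Get` signature as `MultiLevelCache` (and `"feature-flags"` is just an example key):

```go
cfg, found, err := configCache.Get(ctx, "feature-flags")
```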

Advanced Usage (Custom)

For more control, you can manually compose layers using the core API.

Here's a simple example of how to set up a two-level cache (in-memory and Redis) with a data source.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/mbeoliero/tiercache"
	"github.com/mbeoliero/tiercache/datasource"
	"github.com/mbeoliero/tiercache/localcache"
	"github.com/mbeoliero/tiercache/rediscache"
	"github.com/redis/go-redis/v9"
)

// Your data model
type User struct {
	ID   int
	Name string
}

// A mock database function
func fetchUsersFromDB(ctx context.Context, keys []int) (map[int]User, error) {
	fmt.Println("Fetching from database for keys:", keys)
	results := make(map[int]User)
	for _, key := range keys {
		// In a real app, you would query your database here
		results[key] = User{ID: key, Name: fmt.Sprintf("User-%d", key)}
	}
	return results, nil
}

func main() {
	ctx := context.Background()

	// 1. Set up your cache stores
	// Level 1: In-memory cache with a 5-minute TTL
	localStore := localcache.NewLocalCache[int, User](5 * time.Minute)

	// Level 2: Redis cache with a 1-hour TTL
	redisClient := redis.NewClient(&redis.Options{
		Addr: "localhost:6379",
	})
	redisStore := rediscache.NewRedisCache[int, User](redisClient, 1*time.Hour).ToStore()

	// 2. Set up your data source
	ds := datasource.NewDataSource(fetchUsersFromDB)

	// 3. Create the multi-level cache
	cache := tiercache.NewMultiLevelCache[int, User](
		localStore,
		redisStore,
		ds,
	).Build()

	// --- Usage Example ---

	// First Get: Data is not in any cache, so it's fetched from the DB
	// and populates both Redis and the local in-memory cache.
	fmt.Println("--- First request ---")
	user, _, err := cache.Get(ctx, 123)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Got user: %+v\n\n", user)

	// Second Get: Data is now in the local in-memory cache, so it's returned from there.
	// No database call is made.
	fmt.Println("--- Second request ---")
	user, _, err = cache.Get(ctx, 123)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Got user: %+v\n", user)
}
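
The same cache also exposes batch operations (MGet, MSet, MDel; see the type documentation below). Since the data source fetcher already accepts a slice of keys, batch reads map onto it directly. A hedged sketch:

```go
// Any keys missing from the cache layers are fetched from the
// data source in a single batch call.
users, err := cache.MGet(ctx, []int{1, 2, 3})
if err != nil {
	panic(err)
}
for id, u := range users {
	fmt.Printf("%d => %+v\n", id, u)
}
```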

Middleware

Tiercache supports middleware to add custom logic to any cache store. This is useful for tasks like logging, metrics, or tracing.

Here's how you can apply the built-in logger middleware:

import "github.com/mbeoliero/tiercache/middleware"

// ... (inside your main function)

// Create a new cache and apply middleware
cacheWithLogger := tiercache.NewMultiLevelCache[int, User](
    localStore,
    redisStore,
    ds,
).Use(middleware.LoggerMiddleware[int, User]()).Build()


// Now, all operations on the cache will be logged
fmt.Println("--- Request with logging ---")
user, _, err := cacheWithLogger.Get(ctx, 456)
if err != nil {
    panic(err)
}
fmt.Printf("Got user: %+v\n", user)

This will produce detailed logs for each cache operation, helping you debug and monitor your cache's behavior.

Options

Skipping Layers

You can dynamically skip specific cache layers for a request using WithShouldSkipLayer. This is useful for cases like force-refreshing data from a lower level (e.g., skipping local cache to hit Redis or DB).

import (
    "github.com/mbeoliero/tiercache"
    "github.com/mbeoliero/tiercache/cacher"
)

// ...

// Skip the first level (Level 1, e.g., local in-memory cache)
user, _, err := cache.Get(ctx, 123, tiercache.WithShouldSkipLayer(func(ctx context.Context, info cacher.BaseInfo) bool {
    // Skip if Level is 1
    return cacher.GetRunInfo(ctx).Level() == 1
}))

// Or skip by name (if your store implements Name())
user, _, err = cache.Get(ctx, 123, tiercache.WithShouldSkipLayer(func(ctx context.Context, info cacher.BaseInfo) bool {
    return info.Name() == "local-cache"
}))
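
To force-refresh a key from the data source, the same option can skip every cache layer. `forceRefresh` below is a hypothetical helper built on the documented option, not part of the library:

```go
// forceRefresh skips all cache layers so the read falls through
// to the data source.
func forceRefresh() tiercache.OptFunc {
    return tiercache.WithShouldSkipLayer(func(ctx context.Context, info cacher.BaseInfo) bool {
        return true // skip every layer
    })
}

user, _, err := cache.Get(ctx, 123, forceRefresh())
```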

Testing

To run the project's tests:

go test ./...

License

This project is licensed under the MIT License. See the LICENSE file for details.

Documentation


Types

type LevelCache

type LevelCache[K comparable, V any] struct {
	Store       cacher.Interface[K, V]
	Middlewares []cacher.Middleware[K, V]
}

type MultiLevelCache

type MultiLevelCache[K comparable, V any] struct {
	sync.RWMutex
	// contains filtered or unexported fields
}

func NewMultiLevelCache

func NewMultiLevelCache[K comparable, V any](stores ...cacher.Interface[K, V]) *MultiLevelCache[K, V]

func (*MultiLevelCache[K, V]) Build

func (c *MultiLevelCache[K, V]) Build() *MultiLevelCache[K, V]

func (*MultiLevelCache[K, V]) Del

func (c *MultiLevelCache[K, V]) Del(ctx context.Context, key K, opts ...OptFunc) error

func (*MultiLevelCache[K, V]) Get

func (c *MultiLevelCache[K, V]) Get(ctx context.Context, key K, opts ...OptFunc) (V, bool, error)

func (*MultiLevelCache[K, V]) MDel

func (c *MultiLevelCache[K, V]) MDel(ctx context.Context, keys []K, opts ...OptFunc) error

func (*MultiLevelCache[K, V]) MGet

func (c *MultiLevelCache[K, V]) MGet(ctx context.Context, keys []K, opts ...OptFunc) (map[K]V, error)

func (*MultiLevelCache[K, V]) MSet

func (c *MultiLevelCache[K, V]) MSet(ctx context.Context, entities map[K]V, opts ...OptFunc) error

func (*MultiLevelCache[K, V]) Set

func (c *MultiLevelCache[K, V]) Set(ctx context.Context, key K, value V, opts ...OptFunc) error

func (*MultiLevelCache[K, V]) Use

func (c *MultiLevelCache[K, V]) Use(middleware cacher.Middleware[K, V]) *MultiLevelCache[K, V]

type OptFunc

type OptFunc func(*cacheOpts)

func WithFallbackOnLayerError

func WithFallbackOnLayerError(shouldFallback func(ctx context.Context, info cacher.BaseInfo, err error) bool) OptFunc

WithFallbackOnLayerError sets whether to fallback to the next layer when an error occurs in the current layer (e.g., Redis connection failure). The function should return true to indicate fallback (default behavior), or false to return the error immediately.
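
For example, to fail fast instead of silently falling through when a layer errors:

```go
user, _, err := cache.Get(ctx, 123, tiercache.WithFallbackOnLayerError(
    func(ctx context.Context, info cacher.BaseInfo, err error) bool {
        return false // surface layer errors (e.g., Redis down) instead of falling back
    },
))
if err != nil {
    // handle the layer error here rather than masking it
}
```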

func WithShouldSkipLayer

func WithShouldSkipLayer(shouldSkip func(ctx context.Context, info cacher.BaseInfo) bool) OptFunc

WithShouldSkipLayer sets the rule for skipping a cache layer. When the shouldSkip function returns true, the corresponding cache layer will be skipped, and the query will proceed directly to the next layer.

Note: Cache levels are 1-based (e.g., Level 1, Level 2...), not 0-based.

Example: Skip Level 1 or a cache layer named "redis"

cache.Get(ctx, key, tiercache.WithShouldSkipLayer(func(ctx context.Context, info cacher.BaseInfo) bool {
    // cacher.GetRunInfo(ctx).Level() gets the current level (starts from 1)
    return cacher.GetRunInfo(ctx).Level() == 1 || info.Name() == "redis"
}))

Directories

  • examples/cache (command)
  • internal
