lcw

package module
v2.0.0
Published: Feb 20, 2024 License: MIT Imports: 15 Imported by: 1

Documentation

Overview

Package lcw adds a thin layer on top of lru and expirable caches, providing more limits and a common interface. The primary method to get (and set) data to/from the cache is LoadingCache.Get, which returns the stored data for a given key or calls the provided func to retrieve and store it, similar to Guava's loading cache. Limits allow setting maximums for key size, number of keys, value size, and total size of values in the cache. CacheStat gives general stats on cache performance. Three flavors of cache are provided: Nop (do-nothing cache), ExpirableCache (TTL-based), and LruCache.
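A minimal sketch of this loading pattern, in the same snippet style as the examples on this page (the key name and limits are made up for illustration):

o := NewOpts[string]()
c, err := NewLruCache(o.MaxKeys(50), o.MaxValSize(1024)) // cap number of keys and per-value size
if err != nil {
	panic(err)
}
defer c.Close()

// Get returns the cached value for "answer" or calls the loader func and stores the result
v, err := c.Get("answer", func() (string, error) {
	return "42", nil // runs only if "answer" is not cached yet
})
if err == nil {
	fmt.Println(v) // 42
}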

Example (LoadingCacheMutability)

ExampleLoadingCacheMutability illustrates changing a mutable stored item outside of the cache; this works only for non-Redis caches.

o := NewOpts[[]string]()
c, err := NewExpirableCache(o.MaxKeys(10), o.TTL(time.Minute*30)) // make expirable cache (30m o.TTL) with up to 10 keys
if err != nil {
	panic("can' make cache")
}
defer c.Close()

mutableSlice := []string{"key1", "key2"}

// put mutableSlice in "mutableSlice" cache key
_, _ = c.Get("mutableSlice", func() ([]string, error) {
	return mutableSlice, nil
})

// get from cache, func won't run because mutableSlice is cached
// value is original now
v, _ := c.Get("mutableSlice", func() ([]string, error) {
	return nil, nil
})
fmt.Printf("got %v slice from cache\n", v)

mutableSlice[0] = "another_key_1"
mutableSlice[1] = "another_key_2"

// get from cache, func won't run because mutableSlice is cached
// value is changed inside the cache now because mutableSlice is stored as-is, in a mutable state
v, _ = c.Get("mutableSlice", func() ([]string, error) {
	return nil, nil
})
fmt.Printf("got %v slice from cache after it's change outside of cache\n", v)
Output:

got [key1 key2] slice from cache
got [another_key_1 another_key_2] slice from cache after its change outside of cache


Constants

const RedisValueSizeLimit = 512 * 1024 * 1024

RedisValueSizeLimit is the maximum allowed value size in Redis

Variables

This section is empty.

Functions

This section is empty.

Types

type CacheStat

type CacheStat struct {
	Hits   int64
	Misses int64
	Keys   int
	Size   int64
	Errors int64
}

CacheStat represents cache stats values

func (CacheStat) String

func (s CacheStat) String() string

String formats cache stats
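
For instance, a hedged sketch of the formatted output (field values made up; the format matches the stats lines shown in the examples below):

stats := CacheStat{Hits: 2, Misses: 1, Keys: 1}
fmt.Println(stats.String()) // prints something like {hits:2, misses:1, ratio:0.67, keys:1, size:0, errors:0}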

type ExpirableCache

type ExpirableCache[V any] struct {
	Workers[V]
	CacheStat
	// contains filtered or unexported fields
}

ExpirableCache implements LoadingCache with TTL.

func NewExpirableCache

func NewExpirableCache[V any](opts ...Option[V]) (*ExpirableCache[V], error)

NewExpirableCache makes an expirable LoadingCache implementation with 1000 max keys and 5m TTL by default

func (*ExpirableCache[V]) Close

func (c *ExpirableCache[V]) Close() error

Close is supposed to stop the cleanup goroutine, but that's not possible until https://github.com/hashicorp/golang-lru/issues/159 is solved, so for now it just cleans the cache.

func (*ExpirableCache[V]) Delete

func (c *ExpirableCache[V]) Delete(key string)

Delete cache item by key

func (*ExpirableCache[V]) Get

func (c *ExpirableCache[V]) Get(key string, fn func() (V, error)) (data V, err error)

Get gets value by key or load with fn if not found in cache

func (*ExpirableCache[V]) Invalidate

func (c *ExpirableCache[V]) Invalidate(fn func(key string) bool)

Invalidate removes keys matching the passed predicate fn, i.e. a key is evicted if fn(key) returns true
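
For example, a sketch of predicate-based eviction (assumes a cache c as in the example above and hypothetical "user:"-prefixed keys):

// evict every key with the "user:" prefix, keep the rest
c.Invalidate(func(key string) bool {
	return strings.HasPrefix(key, "user:")
})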

func (*ExpirableCache[V]) Keys

func (c *ExpirableCache[V]) Keys() (res []string)

Keys returns cache keys

func (*ExpirableCache[V]) Peek

func (c *ExpirableCache[V]) Peek(key string) (V, bool)

Peek returns the key value (or the zero value if not found) without updating the "recently used"-ness of the key.
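
For instance (assumes the same cache c; "mutableSlice" is the key cached in the example above):

if v, ok := c.Peek("mutableSlice"); ok {
	fmt.Printf("cached without loading: %v\n", v) // no loader call, recency not updated
}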

func (*ExpirableCache[V]) Purge

func (c *ExpirableCache[V]) Purge()

Purge clears the cache completely.

func (*ExpirableCache[V]) Stat

func (c *ExpirableCache[V]) Stat() CacheStat

Stat returns cache statistics

type FlusherRequest

type FlusherRequest struct {
	// contains filtered or unexported fields
}

FlusherRequest is used as input for cache.Flush

func Flusher

func Flusher(partition string) FlusherRequest

Flusher makes a new FlusherRequest with empty scopes

func (FlusherRequest) Scopes

func (f FlusherRequest) Scopes(scopes ...string) FlusherRequest

Scopes adds scopes to FlusherRequest

type Key

type Key struct {
	// contains filtered or unexported fields
}

Key for the scoped cache. Created for a given partition (can be empty) and set with ID and Scopes. Example: k := NewKey("sys1").ID(postID).Scopes("last_posts", customer_id)

func NewKey

func NewKey(partition ...string) Key

NewKey makes base key for given partition. Partition can be omitted.

func (Key) ID

func (k Key) ID(id string) Key

ID sets key id

func (Key) Scopes

func (k Key) Scopes(scopes ...string) Key

Scopes of the key

func (Key) String

func (k Key) String() string

String makes the full string key from the primary key, partition, and scopes. The key string is formed as <partition>@@<id>@@<scope1>$$<scope2>....
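
For example, a sketch with made-up id and scopes; the expected output follows the format described above:

k := NewKey("sys1").ID("post-42").Scopes("last_posts", "customer-1")
fmt.Println(k.String()) // expected: sys1@@post-42@@last_posts$$customer-1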

type LoadingCache

type LoadingCache[V any] interface {
	Get(key string, fn func() (V, error)) (val V, err error) // load or get from cache
	Peek(key string) (V, bool)                               // get from cache by key
	Invalidate(fn func(key string) bool)                     // invalidate items for func(key) == true
	Delete(key string)                                       // delete by key
	Purge()                                                  // clear cache
	Stat() CacheStat                                         // cache stats
	Keys() []string                                          // list of all keys
	Close() error                                            // close open connections
}

LoadingCache defines a guava-like cache with a Get method returning the cached value or retrieving it if not in the cache

func New

func New[V any](uri string) (LoadingCache[V], error)

New parses the uri and makes any of the supported caches (see the sketch after the list). Supported URIs:

  • redis://<ip>:<port>?db=123&max_keys=10
  • mem://lru?max_keys=10&max_cache_size=1024
  • mem://expirable?ttl=30s&max_val_size=100
  • nop://
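
A sketch of URI-based construction using the mem://lru form from the list above (parameters are made up):

c, err := New[string]("mem://lru?max_keys=10&max_cache_size=1024")
if err != nil {
	panic(err)
}
defer c.Close()

v, _ := c.Get("k1", func() (string, error) { return "v1", nil })
fmt.Println(v) // v1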

type LruCache

type LruCache[V any] struct {
	Workers[V]
	CacheStat
	// contains filtered or unexported fields
}

LruCache wraps lru.LruCache with loading cache Get and size limits

Example

This example illustrates the use of the LRU loading cache

// set up test server for single response
var hitCount int
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	if r.URL.String() == "/post/42" && hitCount == 0 {
		hitCount++ // serve the page only once, later requests must come from the cache
		_, _ = w.Write([]byte("<html><body>test response</body></html>"))
		return
	}
	w.WriteHeader(404)
}))

// load page function
loadURL := func(url string) (string, error) {
	resp, err := http.Get(url) // nolint
	if err != nil {
		return "", err
	}
	b, err := io.ReadAll(resp.Body)
	_ = resp.Body.Close()
	if err != nil {
		return "", err
	}
	return string(b), nil
}

// fixed size LRU cache, 100 items, up to 10k in total size
o := NewOpts[string]()
cache, err := NewLruCache(o.MaxKeys(100), o.MaxCacheSize(10*1024))
if err != nil {
	log.Printf("can't make lru cache, %v", err)
}

// url not in cache, load data
url := ts.URL + "/post/42"
val, err := cache.Get(url, func() (val string, err error) {
	return loadURL(url)
})
if err != nil {
	log.Fatalf("can't load url %s, %v", url, err)
}
fmt.Println(val)

// url already in cache, load won't be called
val, err = cache.Get(url, func() (val string, err error) {
	return loadURL(url)
})
if err != nil {
	log.Fatalf("can't load url %s, %v", url, err)
}
fmt.Println(val)

// url cached, skip load and get from the cache
val, err = cache.Get(url, func() (val string, err error) {
	return loadURL(url)
})
if err != nil {
	log.Fatalf("can't load url %s, %v", url, err)
}
fmt.Println(val)

// get cache stats
stats := cache.Stat()
fmt.Printf("%+v\n", stats)

// close test HTTP server after all log.Fatalf are passed
ts.Close()
Output:

<html><body>test response</body></html>
<html><body>test response</body></html>
<html><body>test response</body></html>
{hits:2, misses:1, ratio:0.67, keys:1, size:0, errors:0}

func NewLruCache

func NewLruCache[V any](opts ...Option[V]) (*LruCache[V], error)

NewLruCache makes an LRU LoadingCache implementation with 1000 max keys by default

func (*LruCache[V]) Close

func (c *LruCache[V]) Close() error

Close does nothing for this type of cache

func (*LruCache[V]) Delete

func (c *LruCache[V]) Delete(key string)

Delete cache item by key

func (*LruCache[V]) Get

func (c *LruCache[V]) Get(key string, fn func() (V, error)) (data V, err error)

Get gets value by key or load with fn if not found in cache

func (*LruCache[V]) Invalidate

func (c *LruCache[V]) Invalidate(fn func(key string) bool)

Invalidate removes keys matching the passed predicate fn, i.e. a key is evicted if fn(key) returns true

func (*LruCache[V]) Keys

func (c *LruCache[V]) Keys() (res []string)

Keys returns cache keys

func (*LruCache[V]) Peek

func (c *LruCache[V]) Peek(key string) (V, bool)

Peek returns the key value (or the zero value if not found) without updating the "recently used"-ness of the key.

func (*LruCache[V]) Purge

func (c *LruCache[V]) Purge()

Purge clears the cache completely.

func (*LruCache[V]) Stat

func (c *LruCache[V]) Stat() CacheStat

Stat returns cache statistics

type Nop

type Nop[V any] struct{}

Nop is a do-nothing implementation of LoadingCache

func NewNopCache

func NewNopCache[V any]() *Nop[V]

NewNopCache makes a new do-nothing cache

func (*Nop[V]) Close

func (n *Nop[V]) Close() error

Close does nothing for nop cache

func (*Nop[V]) Delete

func (n *Nop[V]) Delete(string)

Delete does nothing for nop cache

func (*Nop[V]) Get

func (n *Nop[V]) Get(_ string, fn func() (V, error)) (V, error)

Get calls fn without any caching
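
For instance (a trivial sketch):

n := NewNopCache[string]()
v, _ := n.Get("k1", func() (string, error) { return "fresh", nil })
fmt.Println(v) // "fresh"; the loader runs on every call since nothing is stored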

func (*Nop[V]) Invalidate

func (n *Nop[V]) Invalidate(func(key string) bool)

Invalidate does nothing for nop cache

func (*Nop[V]) Keys

func (n *Nop[V]) Keys() []string

Keys does nothing for nop cache

func (*Nop[V]) Peek

func (n *Nop[V]) Peek(string) (V, bool)

Peek does nothing and always returns false

func (*Nop[V]) Purge

func (n *Nop[V]) Purge()

Purge does nothing for nop cache

func (*Nop[V]) Stat

func (n *Nop[V]) Stat() CacheStat

Stat always returns zeros for the nop cache

type Option

type Option[V any] func(o *Workers[V]) error

Option func type

type RedisCache

type RedisCache[V any] struct {
	Workers[V]
	CacheStat
	// contains filtered or unexported fields
}

RedisCache implements LoadingCache for Redis.

func NewRedisCache

func NewRedisCache[V any](backend *redis.Client, opts ...Option[V]) (*RedisCache[V], error)

NewRedisCache makes a Redis LoadingCache implementation. It supports only string and string-based types and returns an error otherwise.
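
A hedged sketch of construction; the redis.Client is assumed to come from the go-redis package lcw builds on, and the address is made up:

backend := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"}) // assumed go-redis client
o := NewOpts[string]()
rc, err := NewRedisCache(backend, o.MaxKeys(100))
if err != nil {
	panic(err)
}
defer rc.Close()

v, _ := rc.Get("greeting", func() (string, error) { return "hello", nil })
fmt.Println(v) // hello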

func (*RedisCache[V]) Close

func (c *RedisCache[V]) Close() error

Close closes underlying connections

func (*RedisCache[V]) Delete

func (c *RedisCache[V]) Delete(key string)

Delete cache item by key

func (*RedisCache[V]) Get

func (c *RedisCache[V]) Get(key string, fn func() (V, error)) (data V, err error)

Get gets value by key or load with fn if not found in cache

func (*RedisCache[V]) Invalidate

func (c *RedisCache[V]) Invalidate(fn func(key string) bool)

Invalidate removes keys matching the passed predicate fn, i.e. a key is evicted if fn(key) returns true

func (*RedisCache[V]) Keys

func (c *RedisCache[V]) Keys() (res []string)

Keys gets all keys for the cache

func (*RedisCache[V]) Peek

func (c *RedisCache[V]) Peek(key string) (data V, found bool)

Peek returns the key value (or the zero value if not found) without updating the "recently used"-ness of the key.

func (*RedisCache[V]) Purge

func (c *RedisCache[V]) Purge()

Purge clears the cache completely.

func (*RedisCache[V]) Stat

func (c *RedisCache[V]) Stat() CacheStat

Stat returns cache statistics

type Scache

type Scache[V any] struct {
	// contains filtered or unexported fields
}

Scache wraps LoadingCache with partitions (sub-systems) and scopes. It provides a simplified interface with just 4 funcs: Get, Flush, Stat, and Close.

Example

This example illustrates the use of Scache, the scoped cache, on top of an LRU loading cache

// set up test server for single response
var hitCount int
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	if r.URL.String() == "/post/42" && hitCount == 0 {
		hitCount++ // serve the page only once, later requests must come from the cache
		_, _ = w.Write([]byte("<html><body>test response</body></html>"))
		return
	}
	w.WriteHeader(404)
}))

// load page function
loadURL := func(url string) ([]byte, error) {
	resp, err := http.Get(url) // nolint
	if err != nil {
		return nil, err
	}
	b, err := io.ReadAll(resp.Body)
	_ = resp.Body.Close()
	if err != nil {
		return nil, err
	}
	return b, nil
}

// fixed size LRU cache, 100 items, up to 10k in total size
o := NewOpts[[]byte]()
backend, err := NewLruCache(o.MaxKeys(100), o.MaxCacheSize(10*1024))
if err != nil {
	log.Fatalf("can't make lru cache, %v", err)
}

cache := NewScache[[]byte](backend)

// url not in cache, load data
url := ts.URL + "/post/42"
key := NewKey().ID(url).Scopes("test")
val, err := cache.Get(key, func() (val []byte, err error) {
	return loadURL(url)
})
if err != nil {
	log.Fatalf("can't load url %s, %v", url, err)
}
fmt.Println(string(val))

// url already in cache, load won't be called
key = NewKey().ID(url).Scopes("test")
val, err = cache.Get(key, func() (val []byte, err error) {
	return loadURL(url)
})
if err != nil {
	log.Fatalf("can't load url %s, %v", url, err)
}
fmt.Println(string(val))

// url cached, skip load and get from the cache
key = NewKey().ID(url).Scopes("test")
val, err = cache.Get(key, func() (val []byte, err error) {
	return loadURL(url)
})
if err != nil {
	log.Fatalf("can't load url %s, %v", url, err)
}
fmt.Println(string(val))

// get cache stats
stats := cache.Stat()
fmt.Printf("%+v\n", stats)

// close cache and test HTTP server after all log.Fatalf are passed
ts.Close()
err = cache.Close()
if err != nil {
	log.Fatalf("can't close cache %v", err)
}
Output:

<html><body>test response</body></html>
<html><body>test response</body></html>
<html><body>test response</body></html>
{hits:2, misses:1, ratio:0.67, keys:1, size:0, errors:0}

func NewScache

func NewScache[V any](lc LoadingCache[V]) *Scache[V]

NewScache creates a Scache on top of a LoadingCache

func (*Scache[V]) Close

func (m *Scache[V]) Close() error

Close calls Close function of the underlying cache

func (*Scache[V]) Flush

func (m *Scache[V]) Flush(req FlusherRequest)

Flush clears the cache and calls postFlushFn asynchronously
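
For example, flushing scoped entries of a partition (partition and scope names are made up; assumes a Scache named cache as in the example above):

// drop everything cached under partition "sys1" within scope "last_posts"
cache.Flush(Flusher("sys1").Scopes("last_posts"))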

func (*Scache[V]) Get

func (m *Scache[V]) Get(key Key, fn func() (V, error)) (data V, err error)

Get retrieves a key from underlying backend

func (*Scache[V]) Stat

func (m *Scache[V]) Stat() CacheStat

Stat delegates the call to the underlying cache backend

type Sizer

type Sizer interface {
	Size() int
}

Sizer allows performing size-based restrictions (optional). If not defined, both maxValueSize and maxCacheSize checks are ignored.
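
For example, a hypothetical value type implementing Sizer so the cache can enforce MaxValSize and MaxCacheSize:

type sizedValue struct {
	data []byte
}

// Size reports the value's size in bytes for the cache's size checks
func (v sizedValue) Size() int { return len(v.data) }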

type WorkerOptions

type WorkerOptions[T any] struct{}

WorkerOptions holds the option setting methods

func NewOpts

func NewOpts[T any]() *WorkerOptions[T]

NewOpts creates a new WorkerOptions instance

func (*WorkerOptions[V]) EventBus

func (o *WorkerOptions[V]) EventBus(pubSub eventbus.PubSub) Option[V]

EventBus sets PubSub for distributed cache invalidation

func (*WorkerOptions[V]) MaxCacheSize

func (o *WorkerOptions[V]) MaxCacheSize(max int64) Option[V]

MaxCacheSize functional option defines the total size of cached data. By default, it is 0, which means unlimited.

func (*WorkerOptions[V]) MaxKeySize

func (o *WorkerOptions[V]) MaxKeySize(max int) Option[V]

MaxKeySize functional option defines the largest key size allowed to be used in the cache. By default, it is 0, which means unlimited.

func (*WorkerOptions[V]) MaxKeys

func (o *WorkerOptions[V]) MaxKeys(max int) Option[V]

MaxKeys functional option defines how many keys to keep. By default, it is 0, which means unlimited.

func (*WorkerOptions[V]) MaxValSize

func (o *WorkerOptions[V]) MaxValSize(max int) Option[V]

MaxValSize functional option defines the largest value size allowed to be cached. By default, it is 0, which means unlimited.

func (*WorkerOptions[V]) OnEvicted

func (o *WorkerOptions[V]) OnEvicted(fn func(key string, value V)) Option[V]

OnEvicted sets callback on invalidation event

func (*WorkerOptions[V]) StrToV

func (o *WorkerOptions[V]) StrToV(fn func(string) V) Option[V]

StrToV sets strToV function for RedisCache

func (*WorkerOptions[V]) TTL

func (o *WorkerOptions[V]) TTL(ttl time.Duration) Option[V]

TTL functional option defines the expiration duration. Works for ExpirableCache only.
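
Putting several options together, a sketch building an expirable cache with limits, TTL, and an eviction callback (values are made up):

o := NewOpts[string]()
c, err := NewExpirableCache(
	o.MaxKeys(500),
	o.MaxValSize(4*1024),
	o.TTL(10*time.Minute),
	o.OnEvicted(func(key string, value string) {
		fmt.Printf("evicted %s\n", key) // invoked when an entry is invalidated
	}),
)
if err != nil {
	panic(err)
}
defer c.Close()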

type Workers

type Workers[V any] struct {
	// contains filtered or unexported fields
}

Directories

Path     Synopsis
eventbus Package eventbus provides PubSub interface used for distributed cache invalidation, as well as NopPubSub and RedisPubSub implementations.
