Documentation

Index

Constants
This section is empty.

Variables
This section is empty.

Functions
This section is empty.

Types
type LRUStore

type LRUStore[K comparable, V any] struct {
	// contains filtered or unexported fields
}
LRUStore is an example of a memory-bounded cache implementation using the Least Recently Used (LRU) eviction policy. This demonstrates how to create a custom cache store that prevents unbounded memory growth.
This is a simple example implementation. For production use, consider using optimized libraries like github.com/elastic/go-freelru or github.com/Yiling-J/theine-go.
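A minimal standalone sketch of the LRU idea: a map for O(1) lookup plus a doubly linked list ordered from most to least recently used. The names below are illustrative, not LRUStore's actual fields, and TTL handling is omitted for brevity:

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

// miniLRU is an illustrative LRU cache, not the package's implementation.
type miniLRU[K comparable, V any] struct {
	mu      sync.Mutex
	maxSize int
	items   map[K]*list.Element
	order   *list.List // front = most recently used
}

type lruEntry[K comparable, V any] struct {
	key   K
	value V
}

func newMiniLRU[K comparable, V any](maxSize int) *miniLRU[K, V] {
	return &miniLRU[K, V]{
		maxSize: maxSize,
		items:   make(map[K]*list.Element),
		order:   list.New(),
	}
}

func (c *miniLRU[K, V]) Set(key K, value V) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		el.Value.(*lruEntry[K, V]).value = value
		c.order.MoveToFront(el)
		return
	}
	c.items[key] = c.order.PushFront(&lruEntry[K, V]{key, value})
	if c.order.Len() > c.maxSize {
		// Evict the least recently used entry from the back of the list.
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*lruEntry[K, V]).key)
	}
}

func (c *miniLRU[K, V]) Get(key K) (V, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el) // mark as recently used
		return el.Value.(*lruEntry[K, V]).value, true
	}
	var zero V
	return zero, false
}

func main() {
	c := newMiniLRU[string, int](2)
	c.Set("a", 1)
	c.Set("b", 2)
	c.Get("a")    // "a" is now most recently used
	c.Set("c", 3) // evicts "b", the least recently used entry
	_, ok := c.Get("b")
	fmt.Println(ok) // false
}
```

The map gives constant-time lookup while the list keeps recency order, so both Get and eviction are O(1); this is the structural advantage LRUStore has over the O(n) eviction in NewTTLStoreWithMaxSize.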
func (*LRUStore[K, V]) Close
func (s *LRUStore[K, V]) Close()
Close stops the background cleanup goroutine.
func (*LRUStore[K, V]) Delete
func (s *LRUStore[K, V]) Delete(key K)
Delete removes an entry from the cache.
func (*LRUStore[K, V]) Get

func (s *LRUStore[K, V]) Get(key K) (V, bool)

Get retrieves a value from the cache. Returns the value and true if found and not expired, or the zero value and false otherwise. Marks the entry as recently used.
func (*LRUStore[K, V]) GetOrCompute

func (s *LRUStore[K, V]) GetOrCompute(key K, compute func() V, ttl time.Duration) V

GetOrCompute atomically gets an existing value or computes and stores a new value.
type Store

type Store[K comparable, V any] interface {
	// Set adds or updates an entry in the cache. The implementation should handle the TTL.
	Set(key K, value V, ttl time.Duration)

	// Get retrieves a value from the cache. Returns the value and true if found and not expired,
	// or the zero value and false otherwise.
	Get(key K) (V, bool)

	// GetOrCompute atomically gets an existing value or computes and stores a new value.
	// This method prevents duplicate computation when multiple goroutines request the same key.
	// The compute function is called only if the key is not found or has expired.
	GetOrCompute(key K, compute func() V, ttl time.Duration) V

	// Delete removes an entry from the cache.
	Delete(key K)

	// Purge removes all items from the cache.
	Purge()

	// Close releases any resources used by the cache, such as background goroutines.
	Close()
}
Store defines the interface for a pluggable cache. This allows users to provide their own caching implementations, such as LRU, LFU, or even distributed caches. The cache implementation is responsible for handling its own eviction policies (TTL, size limits, etc.).
func NewLRUStore
func NewLRUStore[K comparable, V any](maxSize int) Store[K, V]
NewLRUStore creates a new LRU cache with the specified maximum size. When the cache reaches maxSize, the least recently used items are evicted.
func NewTTLStore
func NewTTLStore[K comparable, V any]() Store[K, V]
NewTTLStore creates a new TTL-based cache store with a default 1-minute cleaner interval.
func NewTTLStoreWithInterval
func NewTTLStoreWithInterval[K comparable, V any](cleanInterval time.Duration) Store[K, V]
NewTTLStoreWithInterval creates a new TTL-based cache store with a configurable cleaner interval.
func NewTTLStoreWithMaxSize
func NewTTLStoreWithMaxSize[K comparable, V any](maxSize int) Store[K, V]
NewTTLStoreWithMaxSize creates a new TTL-based cache store with a maximum number of entries. When the cache exceeds maxSize during Set or GetOrCompute, the oldest entries are evicted. Note: eviction is O(n) per insertion that exceeds maxSize. For large caches where eviction performance matters, use LRUStore instead. A maxSize of 0 or less means unlimited.
type TTLStore

type TTLStore[K comparable, V any] struct {
	// contains filtered or unexported fields
}
TTLStore is a time-to-live based cache implementation that mimics the original htmgo caching behavior. It stores values with expiration times and periodically cleans up expired entries.
func (*TTLStore[K, V]) Close
func (s *TTLStore[K, V]) Close()
Close stops the background cleaner goroutine.
func (*TTLStore[K, V]) Delete
func (s *TTLStore[K, V]) Delete(key K)
Delete removes an entry from the cache.
func (*TTLStore[K, V]) Get

func (s *TTLStore[K, V]) Get(key K) (V, bool)

Get retrieves a value from the cache. Returns the value and true if found and not expired, or the zero value and false otherwise.
func (*TTLStore[K, V]) GetOrCompute

func (s *TTLStore[K, V]) GetOrCompute(key K, compute func() V, ttl time.Duration) V

GetOrCompute gets an existing value or computes and stores a new value. Uses per-key deduplication so that concurrent requests for the same key trigger only a single computation, without blocking operations on other keys.