Documentation ¶
Overview ¶
Package kcache provides thread-safe caching implementations. It includes a generic LRU (Least Recently Used) cache with TTL support.
Index ¶
- Constants
- type AtomicCache
- func (c *AtomicCache[K, V]) BatchGet(keys []K) map[K]V
- func (c *AtomicCache[K, V]) BatchSet(items map[K]V)
- func (c *AtomicCache[K, V]) Clear()
- func (c *AtomicCache[K, V]) Delete(key K)
- func (c *AtomicCache[K, V]) FastGet(key K) (*V, bool)
- func (c *AtomicCache[K, V]) Get(key K) (V, bool)
- func (c *AtomicCache[K, V]) Has(key K) bool
- func (c *AtomicCache[K, V]) Keys() []K
- func (c *AtomicCache[K, V]) Range(f func(key K, value V) bool)
- func (c *AtomicCache[K, V]) Set(key K, value V)
- func (c *AtomicCache[K, V]) SetWithTTL(key K, value V, ttl time.Duration)
- func (c *AtomicCache[K, V]) Size() int
- func (c *AtomicCache[K, V]) Stats() *Stats
- type Cache
- type FastLRU
- type LRU
- func (c *LRU[K, V]) Clear()
- func (c *LRU[K, V]) Delete(key K)
- func (c *LRU[K, V]) Get(key K) (V, bool)
- func (c *LRU[K, V]) Has(key K) bool
- func (c *LRU[K, V]) Set(key K, value V)
- func (c *LRU[K, V]) SetWithTTL(key K, value V, ttl time.Duration)
- func (c *LRU[K, V]) Size() int
- func (c *LRU[K, V]) Stats() *Stats
- type ShardedLRU
- func (c *ShardedLRU[K, V]) Clear()
- func (c *ShardedLRU[K, V]) Delete(key K)
- func (c *ShardedLRU[K, V]) Get(key K) (V, bool)
- func (c *ShardedLRU[K, V]) Has(key K) bool
- func (c *ShardedLRU[K, V]) Keys() []K
- func (c *ShardedLRU[K, V]) Range(f func(key K, value V) bool)
- func (c *ShardedLRU[K, V]) Set(key K, value V)
- func (c *ShardedLRU[K, V]) SetWithTTL(key K, value V, ttl time.Duration)
- func (c *ShardedLRU[K, V]) Size() int
- func (c *ShardedLRU[K, V]) Stats() *Stats
- type Stats
Constants ¶
const (
	// DefaultShards is the default number of shards (should be a power of 2).
	DefaultShards = 256
	// MaxShards limits the maximum number of shards.
	MaxShards = 1024
)
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AtomicCache ¶
type AtomicCache[K comparable, V any] struct {
	// contains filtered or unexported fields
}
AtomicCache is a lock-free cache for read-heavy workloads. It uses atomic operations and RCU (Read-Copy-Update) pattern.
Suitable for:
- Read-heavy workloads (90%+ reads)
- Small to medium sized caches
- Low latency requirements
Type parameters:
- K: The type of keys (must be comparable)
- V: The type of values (can be any type)
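The Read-Copy-Update pattern behind AtomicCache can be illustrated with a self-contained sketch: readers take one atomic pointer load and then read a plain map snapshot, while writers copy the map, modify the copy, and atomically publish it. The names here (rcuMap, get, set) are illustrative only and are not the package's API; the real implementation also handles capacity and TTL, which this sketch omits.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// rcuMap sketches the RCU pattern: an atomic pointer to an immutable
// map snapshot. Readers never lock; writers swap in a fresh copy.
type rcuMap struct {
	m atomic.Pointer[map[string]int]
}

func newRCUMap() *rcuMap {
	r := &rcuMap{}
	empty := map[string]int{}
	r.m.Store(&empty)
	return r
}

// get is lock-free: one atomic pointer load, then a plain map read.
func (r *rcuMap) get(key string) (int, bool) {
	v, ok := (*r.m.Load())[key]
	return v, ok
}

// set copies the current snapshot, updates the copy, and publishes it.
// Concurrent writers would need a mutex or CAS loop; omitted for brevity.
func (r *rcuMap) set(key string, value int) {
	old := *r.m.Load()
	next := make(map[string]int, len(old)+1)
	for k, v := range old {
		next[k] = v
	}
	next[key] = value
	r.m.Store(&next)
}

func main() {
	r := newRCUMap()
	r.set("a", 1)
	v, ok := r.get("a")
	fmt.Println(v, ok)
}
```

The copy-on-write step is why this design suits read-heavy workloads: reads stay wait-free, while each write pays an O(n) copy.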
func NewAtomicCache ¶
func NewAtomicCache[K comparable, V any](capacity int) *AtomicCache[K, V]
NewAtomicCache creates a new atomic cache with the specified capacity.
func (*AtomicCache[K, V]) BatchGet ¶
func (c *AtomicCache[K, V]) BatchGet(keys []K) map[K]V
BatchGet retrieves multiple values in a single operation.
func (*AtomicCache[K, V]) BatchSet ¶
func (c *AtomicCache[K, V]) BatchSet(items map[K]V)
BatchSet stores multiple key-value pairs in a single operation.
func (*AtomicCache[K, V]) Clear ¶
func (c *AtomicCache[K, V]) Clear()
Clear removes all entries from the cache.
func (*AtomicCache[K, V]) Delete ¶
func (c *AtomicCache[K, V]) Delete(key K)
Delete removes a key from the cache.
func (*AtomicCache[K, V]) FastGet ¶
func (c *AtomicCache[K, V]) FastGet(key K) (*V, bool)
FastGet returns a pointer to avoid copying large values. WARNING: the pointed-to value must not be modified, as it may be shared across goroutines.
func (*AtomicCache[K, V]) Get ¶
func (c *AtomicCache[K, V]) Get(key K) (V, bool)
Get retrieves a value from the cache using lock-free read.
func (*AtomicCache[K, V]) Has ¶
func (c *AtomicCache[K, V]) Has(key K) bool
Has checks if a key exists in the cache.
func (*AtomicCache[K, V]) Keys ¶
func (c *AtomicCache[K, V]) Keys() []K
Keys returns all keys in the cache.
func (*AtomicCache[K, V]) Range ¶
func (c *AtomicCache[K, V]) Range(f func(key K, value V) bool)
Range calls f for each key-value pair in the cache. If f returns false, iteration stops.
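The Range contract (callback per entry, returning false stops iteration) can be sketched over a plain map; rangeMap below is an illustrative stand-in, not the package's implementation. As with a Go map, iteration order over a cache should be treated as unspecified.

```go
package main

import "fmt"

// rangeMap invokes f for each entry and stops early when f returns false,
// mirroring the Range contract documented above.
func rangeMap[K comparable, V any](m map[K]V, f func(K, V) bool) {
	for k, v := range m {
		if !f(k, v) {
			return
		}
	}
}

func main() {
	m := map[string]int{"a": 1, "b": 2, "c": 3}
	count := 0
	rangeMap(m, func(k string, v int) bool {
		count++
		return count < 2 // visit at most two entries
	})
	fmt.Println(count)
}
```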
func (*AtomicCache[K, V]) Set ¶
func (c *AtomicCache[K, V]) Set(key K, value V)
Set stores a key-value pair in the cache.
func (*AtomicCache[K, V]) SetWithTTL ¶
func (c *AtomicCache[K, V]) SetWithTTL(key K, value V, ttl time.Duration)
SetWithTTL stores a key-value pair with TTL using RCU pattern.
func (*AtomicCache[K, V]) Size ¶
func (c *AtomicCache[K, V]) Size() int
Size returns the current number of entries.
func (*AtomicCache[K, V]) Stats ¶
func (c *AtomicCache[K, V]) Stats() *Stats
Stats returns cache statistics.
type Cache ¶
type Cache[K comparable, V any] interface {
	// Get retrieves a value from the cache by its key.
	// Returns the value and true if found, or zero value and false if not found or expired.
	Get(key K) (V, bool)

	// Set stores a key-value pair in the cache without expiration.
	Set(key K, value V)

	// SetWithTTL stores a key-value pair in the cache with a TTL (Time To Live).
	// The entry will be automatically removed after the specified duration.
	SetWithTTL(key K, value V, ttl time.Duration)

	// Delete removes a key-value pair from the cache.
	Delete(key K)

	// Clear removes all entries from the cache.
	Clear()

	// Size returns the current number of entries in the cache.
	Size() int

	// Has checks if a key exists in the cache without retrieving its value.
	// Returns false if the key doesn't exist or if the entry has expired.
	Has(key K) bool
}
Cache defines the interface for cache implementations. It provides a generic interface for storing and retrieving key-value pairs with optional TTL support.
Type parameters:
- K: The type of keys (must be comparable)
- V: The type of values (can be any type)
type FastLRU ¶
type FastLRU[K comparable, V any] struct {
	// contains filtered or unexported fields
}
FastLRU is an LRU implementation with reduced locking.
type LRU ¶
type LRU[K comparable, V any] struct {
	// contains filtered or unexported fields
}
LRU implements a Least Recently Used cache with optional TTL support. It is thread-safe and uses a doubly-linked list. The cache automatically evicts the least recently used items when capacity is reached.
Type parameters:
- K: The type of keys (must be comparable)
- V: The type of values (can be any type)
Example:
cache := NewLRU[string, int](100)
cache.Set("key", 42)
if val, ok := cache.Get("key"); ok {
	fmt.Println(val) // Output: 42
}
func NewLRU ¶
func NewLRU[K comparable, V any](capacity int) *LRU[K, V]
NewLRU creates a new LRU cache with the specified capacity. If capacity is <= 0, it defaults to 128.
Parameters:
- capacity: Maximum number of entries the cache can hold
Returns:
- *LRU[K, V]: A new LRU cache instance
Example:
cache := NewLRU[string, interface{}](1000)
func (*LRU[K, V]) Clear ¶
func (c *LRU[K, V]) Clear()
Clear removes all entries from the cache and resets it to an empty state. All cached entries are returned to the pool for reuse.
This operation is O(n) where n is the number of entries.
func (*LRU[K, V]) Delete ¶
func (c *LRU[K, V]) Delete(key K)
Delete removes a key-value pair from the cache. If the key doesn't exist, this operation is a no-op.
Parameters:
- key: The key to delete
This operation is O(1) in time complexity.
func (*LRU[K, V]) Get ¶
func (c *LRU[K, V]) Get(key K) (V, bool)
Get retrieves a value from the cache by its key. If the key exists and hasn't expired, it moves the entry to the front (most recently used) and returns the value with true. Otherwise, returns zero value with false.
Parameters:
- key: The key to retrieve
Returns:
- V: The value associated with the key (or zero value if not found)
- bool: true if the key was found and valid, false otherwise
This operation is O(1) in time complexity.
func (*LRU[K, V]) Has ¶
func (c *LRU[K, V]) Has(key K) bool
Has checks if a key exists in the cache without retrieving its value. This method doesn't affect the LRU ordering. Returns false if the key doesn't exist or if the entry has expired.
Parameters:
- key: The key to check
Returns:
- bool: true if the key exists and is valid, false otherwise
This operation is O(1) in time complexity.
func (*LRU[K, V]) Set ¶
func (c *LRU[K, V]) Set(key K, value V)
Set stores a key-value pair in the cache without expiration. If the key already exists, it updates the value and moves it to the front. If the cache is at capacity, it evicts the least recently used entry.
Parameters:
- key: The key to store
- value: The value to associate with the key
This operation is O(1) in time complexity.
func (*LRU[K, V]) SetWithTTL ¶
func (c *LRU[K, V]) SetWithTTL(key K, value V, ttl time.Duration)
SetWithTTL stores a key-value pair in the cache with a Time To Live. After the TTL expires, the entry will be considered invalid and removed on next access. If ttl is 0 or negative, the entry will not expire.
Parameters:
- key: The key to store
- value: The value to associate with the key
- ttl: Time To Live duration for the entry
Example:
cache.SetWithTTL("session", userData, 30*time.Minute)
This operation is O(1) in time complexity.
func (*LRU[K, V]) Size ¶
func (c *LRU[K, V]) Size() int
Size returns the current number of entries in the cache. This count includes only valid (non-expired) entries.
Returns:
- int: The number of entries currently in the cache
This operation is O(1) in time complexity.
func (*LRU[K, V]) Stats ¶
func (c *LRU[K, V]) Stats() *Stats
Stats returns the cache performance statistics. The statistics are not reset by this call and accumulate over the cache lifetime.
Returns:
- *Stats: Current cache statistics including hits, misses, sets, and evictions
Example:
stats := cache.Stats()
hitRate := float64(stats.Hits.Load()) / float64(stats.Hits.Load() + stats.Misses.Load())
fmt.Printf("Cache hit rate: %.2f%%\n", hitRate * 100)
type ShardedLRU ¶
type ShardedLRU[K comparable, V any] struct {
	// contains filtered or unexported fields
}
ShardedLRU is a sharded LRU cache that reduces lock contention by distributing entries across multiple independent LRU caches.
Type parameters:
- K: The type of keys (must be comparable)
- V: The type of values (can be any type)
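Shard selection in a sharded cache typically hashes the key and masks with numShards-1, which is why the shard count should be a power of two (like DefaultShards = 256): the mask then distributes keys uniformly without a modulo. The hash function below (hash/maphash) is an assumption for illustration; the package may use a different one internally.

```go
package main

import (
	"fmt"
	"hash/maphash"
)

var seed = maphash.MakeSeed()

// shardIndex picks a shard for a key. Masking with numShards-1 is only
// correct when numShards is a power of two.
func shardIndex(key string, numShards int) int {
	var h maphash.Hash
	h.SetSeed(seed)
	h.WriteString(key)
	return int(h.Sum64() & uint64(numShards-1))
}

func main() {
	for _, k := range []string{"alpha", "beta", "gamma"} {
		fmt.Printf("%s -> shard %d\n", k, shardIndex(k, 256))
	}
}
```

Because each shard is an independent LRU with its own lock, goroutines touching different shards never contend with one another.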
func NewShardedLRU ¶
func NewShardedLRU[K comparable, V any](capacity int, numShards int) *ShardedLRU[K, V]
NewShardedLRU creates a new sharded LRU cache.
func (*ShardedLRU[K, V]) Clear ¶
func (c *ShardedLRU[K, V]) Clear()
Clear removes all entries from the sharded cache.
func (*ShardedLRU[K, V]) Delete ¶
func (c *ShardedLRU[K, V]) Delete(key K)
Delete removes a key from the sharded cache.
func (*ShardedLRU[K, V]) Get ¶
func (c *ShardedLRU[K, V]) Get(key K) (V, bool)
Get retrieves a value from the sharded cache.
func (*ShardedLRU[K, V]) Has ¶
func (c *ShardedLRU[K, V]) Has(key K) bool
Has checks if a key exists in the sharded cache.
func (*ShardedLRU[K, V]) Keys ¶
func (c *ShardedLRU[K, V]) Keys() []K
Keys returns all keys in the cache across all shards.
func (*ShardedLRU[K, V]) Range ¶
func (c *ShardedLRU[K, V]) Range(f func(key K, value V) bool)
Range calls f for each key-value pair in the cache. If f returns false, iteration stops.
func (*ShardedLRU[K, V]) Set ¶
func (c *ShardedLRU[K, V]) Set(key K, value V)
Set stores a key-value pair in the sharded cache.
func (*ShardedLRU[K, V]) SetWithTTL ¶
func (c *ShardedLRU[K, V]) SetWithTTL(key K, value V, ttl time.Duration)
SetWithTTL stores a key-value pair with TTL in the sharded cache.
func (*ShardedLRU[K, V]) Size ¶
func (c *ShardedLRU[K, V]) Size() int
Size returns the total number of entries across all shards.
func (*ShardedLRU[K, V]) Stats ¶
func (c *ShardedLRU[K, V]) Stats() *Stats
Stats returns aggregated statistics from all shards.
type Stats ¶
type Stats struct {
	// Hits is the number of successful cache retrievals.
	Hits *atomic.Uint64

	// Misses is the number of failed cache retrievals.
	Misses *atomic.Uint64

	// Sets is the number of cache insertions.
	Sets *atomic.Uint64

	// Evictions is the number of entries removed due to capacity limits.
	Evictions *atomic.Uint64
}
Stats holds cache statistics for monitoring. This struct uses pointers to atomic values to avoid copy issues.