Documentation ¶
Index ¶
- Constants
- type BatchResult
- type Cache
- func (c *Cache[K, V]) Clear()
- func (c *Cache[K, V]) Close()
- func (c *Cache[K, V]) Delete(key K) bool
- func (c *Cache[K, V]) DeleteMany(keys []K) int
- func (c *Cache[K, V]) Get(key K) (V, bool)
- func (c *Cache[K, V]) GetBatch(keys []K) *BatchResult[K, V]
- func (c *Cache[K, V]) GetBatchOptimized(keys []K) *BatchResult[K, V]
- func (c *Cache[K, V]) GetBatchToMap(keys []K) map[K]V
- func (c *Cache[K, V]) GetMany(keys []K) map[K]V
- func (c *Cache[K, V]) Has(key K) bool
- func (c *Cache[K, V]) Len() int
- func (c *Cache[K, V]) Metrics() MetricsSnapshot
- func (c *Cache[K, V]) Scan(cursor uint64, count int) *Iterator[K, V]
- func (c *Cache[K, V]) ScanMatch(pattern string, cursor uint64, count int) *Iterator[K, V]
- func (c *Cache[K, V]) ScanPrefix(prefix string, cursor uint64, count int) *Iterator[K, V]
- func (c *Cache[K, V]) Set(key K, value V, ttl time.Duration) bool
- func (c *Cache[K, V]) SetMany(items []Item[K, V]) int
- func (c *Cache[K, V]) SetWithCost(key K, value V, cost int64, ttl time.Duration) bool
- func (c *Cache[K, V]) Wait()
- type CacheDriver
- func (mc *CacheDriver) Close() map[string]interface{}
- func (mc *CacheDriver) GCBufferQueue() int
- func (mc *CacheDriver) Get(key string) (interface{}, bool)
- func (mc *CacheDriver) GetPointer(key string) (interface{}, bool)
- func (mc *CacheDriver) Len() int
- func (mc *CacheDriver) Remove(key string)
- func (mc *CacheDriver) Set(key string, value interface{}, ttl time.Duration) error
- func (mc *CacheDriver) SetPointer(key string, value interface{}, ttl time.Duration) error
- func (mc *CacheDriver) Truncate()
- type Item
- type Iterator
- func (it *Iterator[K, V]) All() []Item[K, V]
- func (it *Iterator[K, V]) Count() int
- func (it *Iterator[K, V]) Cursor() uint64
- func (it *Iterator[K, V]) Entry() (K, V)
- func (it *Iterator[K, V]) Err() error
- func (it *Iterator[K, V]) ForEach(fn func(key K, value V) bool)
- func (it *Iterator[K, V]) Key() K
- func (it *Iterator[K, V]) Keys() []K
- func (it *Iterator[K, V]) Next() bool
- func (it *Iterator[K, V]) Value() V
- func (it *Iterator[K, V]) Values() []V
- type Metrics
- type MetricsSnapshot
- type Option
- func WithBufferItems[K comparable, V any](n int64) Option[K, V]
- func WithCostFunc[K comparable, V any](fn func(V) int64) Option[K, V]
- func WithDefaultTTL[K comparable, V any](ttl time.Duration) Option[K, V]
- func WithGCFreeStorage[K comparable, V any]() Option[K, V]
- func WithIgnoreInternalCost[K comparable, V any](ignore bool) Option[K, V]
- func WithKeyHasher[K comparable, V any](fn func(K) uint64) Option[K, V]
- func WithMaxCost[K comparable, V any](cost int64) Option[K, V]
- func WithMaxEntries[K comparable, V any](n int64) Option[K, V]
- func WithNumCounters[K comparable, V any](n int64) Option[K, V]
- func WithOnEvict[K comparable, V any](fn func(K, V, int64)) Option[K, V]
- func WithOnExpire[K comparable, V any](fn func(K, V)) Option[K, V]
- func WithOnReject[K comparable, V any](fn func(K, V)) Option[K, V]
- func WithShardCount[K comparable, V any](n int) Option[K, V]
- func WithStandardStorage[K comparable, V any]() Option[K, V]
Constants ¶
const TTL_FOREVER = 0
TTL_FOREVER represents an infinite TTL (no expiration).
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type BatchResult ¶
type BatchResult[K comparable, V any] struct {
	Keys   []K
	Values []V
	Found  []bool
	Hashes []uint64
}
BatchResult holds the result of a batch get operation.
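The three result slices are parallel: Found[i] reports whether Keys[i] was present, and Values[i] holds its value (or the zero value). A self-contained sketch of walking a result, using a local copy of the documented struct rather than the package itself:

```go
package main

import "fmt"

// Local copy of the documented BatchResult layout, for illustration only.
type BatchResult[K comparable, V any] struct {
	Keys   []K
	Values []V
	Found  []bool
	Hashes []uint64
}

func main() {
	r := &BatchResult[string, int]{
		Keys:   []string{"a", "b", "c"},
		Values: []int{1, 0, 3},
		Found:  []bool{true, false, true},
	}
	// Skip missing keys; Values[i] is only meaningful when Found[i] is true.
	for i, k := range r.Keys {
		if r.Found[i] {
			fmt.Printf("%s=%d\n", k, r.Values[i])
		}
	}
}
```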
type Cache ¶
type Cache[K comparable, V any] struct {
	// contains filtered or unexported fields
}
Cache is a generic, high-performance in-memory cache.
func NewCache ¶
func NewCache[K comparable, V any](opts ...Option[K, V]) *Cache[K, V]
NewCache creates a new generic Cache with the given options.
func (*Cache[K, V]) Close ¶
func (c *Cache[K, V]) Close()
Close stops the cache and releases resources.
func (*Cache[K, V]) Delete ¶
func (c *Cache[K, V]) Delete(key K) bool
Delete removes a value from the cache. Returns true if the value was found and deleted.
func (*Cache[K, V]) DeleteMany ¶
func (c *Cache[K, V]) DeleteMany(keys []K) int
DeleteMany removes multiple keys from the cache. Returns the number of keys successfully deleted.
func (*Cache[K, V]) Get ¶
func (c *Cache[K, V]) Get(key K) (V, bool)
Get retrieves a value from the cache. Returns the value and true if found, zero value and false otherwise.
func (*Cache[K, V]) GetBatch ¶
func (c *Cache[K, V]) GetBatch(keys []K) *BatchResult[K, V]
GetBatch retrieves multiple values from the cache with optimized prefetching. This is more efficient than calling Get in a loop, especially for large batches.
func (*Cache[K, V]) GetBatchOptimized ¶
func (c *Cache[K, V]) GetBatchOptimized(keys []K) *BatchResult[K, V]
GetBatchOptimized retrieves multiple values with shard-order optimization. Keys are processed in shard order for better cache locality, but results are returned in the original key order.
func (*Cache[K, V]) GetBatchToMap ¶
func (c *Cache[K, V]) GetBatchToMap(keys []K) map[K]V
GetBatchToMap retrieves multiple values and returns them as a map. More convenient than GetBatch when you need map access.
func (*Cache[K, V]) GetMany ¶
func (c *Cache[K, V]) GetMany(keys []K) map[K]V
GetMany retrieves multiple values from the cache. Returns a map of found keys to their values.
func (*Cache[K, V]) Metrics ¶
func (c *Cache[K, V]) Metrics() MetricsSnapshot
Metrics returns the cache metrics.
func (*Cache[K, V]) Scan ¶
func (c *Cache[K, V]) Scan(cursor uint64, count int) *Iterator[K, V]
Scan returns an iterator over cache entries. cursor is the starting position (0 for beginning). count is the maximum number of entries to return per iteration.
func (*Cache[K, V]) ScanMatch ¶
func (c *Cache[K, V]) ScanMatch(pattern string, cursor uint64, count int) *Iterator[K, V]
ScanMatch returns an iterator over entries with keys matching the glob pattern. Only works when K is string. Supported patterns: * (any chars), ? (single char), [abc] (char class).
func (*Cache[K, V]) ScanPrefix ¶
func (c *Cache[K, V]) ScanPrefix(prefix string, cursor uint64, count int) *Iterator[K, V]
ScanPrefix returns an iterator over entries with keys matching the prefix. Only works when K is string.
func (*Cache[K, V]) Set ¶
func (c *Cache[K, V]) Set(key K, value V, ttl time.Duration) bool
Set stores a value in the cache with the given TTL. A TTL of 0 means the entry never expires. Returns true if the value was stored, false if rejected by admission policy.
func (*Cache[K, V]) SetMany ¶
func (c *Cache[K, V]) SetMany(items []Item[K, V]) int
SetMany stores multiple items in the cache. Returns the number of items successfully stored.
func (*Cache[K, V]) SetWithCost ¶
func (c *Cache[K, V]) SetWithCost(key K, value V, cost int64, ttl time.Duration) bool
SetWithCost stores a value with a specified cost. Cost is used for eviction decisions when MaxCost is set.
type CacheDriver ¶
type CacheDriver struct {
// contains filtered or unexported fields
}
CacheDriver manages cache operations with storage and expiration.
func StartInstance ¶
func StartInstance() *CacheDriver
StartInstance is deprecated; use New instead.
func (*CacheDriver) Close ¶
func (mc *CacheDriver) Close() map[string]interface{}
Close stops the GC and returns all non-expired entries.
func (*CacheDriver) GCBufferQueue ¶
func (mc *CacheDriver) GCBufferQueue() int
GCBufferQueue returns the count of pending expirations in the GC.
func (*CacheDriver) Get ¶
func (mc *CacheDriver) Get(key string) (interface{}, bool)
Get retrieves a value by key. Returns (value, true) if found and not expired.
func (*CacheDriver) GetPointer ¶
func (mc *CacheDriver) GetPointer(key string) (interface{}, bool)
GetPointer is deprecated; use Get instead.
func (*CacheDriver) Len ¶
func (mc *CacheDriver) Len() int
Len returns the number of current cache entries.
func (*CacheDriver) Remove ¶
func (mc *CacheDriver) Remove(key string)
Remove deletes a key from the cache and expiration tracking.
func (*CacheDriver) Set ¶
func (mc *CacheDriver) Set(key string, value interface{}, ttl time.Duration) error
Set inserts or updates a key with the given value and TTL.
func (*CacheDriver) SetPointer ¶
func (mc *CacheDriver) SetPointer(key string, value interface{}, ttl time.Duration) error
SetPointer is deprecated; use Set instead.
func (*CacheDriver) Truncate ¶
func (mc *CacheDriver) Truncate()
Truncate clears all cache entries and pending expirations.
type Item ¶
type Item[K comparable, V any] struct {
	Key   K
	Value V
	Cost  int64         // 0 = auto-calculate (cost of 1)
	TTL   time.Duration // 0 = no expiration
}
Item represents an item to be stored in the cache.
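Building a batch for SetMany: a zero Cost falls back to a cost of 1 and a zero TTL means no expiration, so most fields can be omitted. Item is copied locally here only to keep the sketch self-contained:

```go
package main

import (
	"fmt"
	"time"
)

// Local copy of the documented Item layout.
type Item[K comparable, V any] struct {
	Key   K
	Value V
	Cost  int64         // 0 = auto-calculate (cost of 1)
	TTL   time.Duration // 0 = no expiration
}

func main() {
	items := []Item[string, []byte]{
		{Key: "session:1", Value: []byte("alice"), TTL: 30 * time.Minute},
		{Key: "config", Value: []byte("v2")}, // cost 1, never expires
	}
	// stored := cache.SetMany(items) would report how many were admitted.
	fmt.Println(len(items))
}
```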
type Iterator ¶
type Iterator[K comparable, V any] struct {
	// contains filtered or unexported fields
}
Iterator provides a Redis-style iterator over cache entries.
func (*Iterator[K, V]) All ¶
func (it *Iterator[K, V]) All() []Item[K, V]
All collects all remaining entries and returns them. Warning: This may be memory-intensive for large result sets.
func (*Iterator[K, V]) Cursor ¶
func (it *Iterator[K, V]) Cursor() uint64
Cursor returns the current cursor position. This can be used to resume iteration later.
func (*Iterator[K, V]) Entry ¶
func (it *Iterator[K, V]) Entry() (K, V)
Entry returns the current entry (key, value pair).
func (*Iterator[K, V]) ForEach ¶
func (it *Iterator[K, V]) ForEach(fn func(key K, value V) bool)
ForEach calls fn for each remaining entry. If fn returns false, iteration stops.
func (*Iterator[K, V]) Keys ¶
func (it *Iterator[K, V]) Keys() []K
Keys collects all remaining keys and returns them.
func (*Iterator[K, V]) Next ¶
func (it *Iterator[K, V]) Next() bool
Next advances the iterator to the next entry. Returns true if there is an entry available, false when exhausted.
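Consumption follows the usual Go iterator loop. This sketch uses a minimal local stand-in with the same Next/Entry contract to show the loop shape; the real Iterator is backed by the cache, not a slice:

```go
package main

import "fmt"

// Minimal stand-in honoring the documented Next/Entry contract.
type sliceIter struct {
	keys []string
	vals []int
	pos  int
}

// Next advances and reports whether an entry is available.
func (it *sliceIter) Next() bool { it.pos++; return it.pos <= len(it.keys) }

// Entry returns the current key/value pair.
func (it *sliceIter) Entry() (string, int) {
	return it.keys[it.pos-1], it.vals[it.pos-1]
}

func main() {
	it := &sliceIter{keys: []string{"a", "b"}, vals: []int{1, 2}}
	// The same loop shape applies to iterators returned by Scan and friends.
	for it.Next() {
		k, v := it.Entry()
		fmt.Println(k, v)
	}
}
```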
type Metrics ¶
type Metrics struct {
// contains filtered or unexported fields
}
Metrics holds cache statistics.
func (*Metrics) Snapshot ¶
func (m *Metrics) Snapshot() MetricsSnapshot
Snapshot returns a point-in-time snapshot of the metrics.
type MetricsSnapshot ¶
type MetricsSnapshot struct {
Hits int64 // Total cache hits
Misses int64 // Total cache misses
Sets int64 // Total successful sets
Deletes int64 // Total successful deletes
Evictions int64 // Total evictions due to size/cost limit
Expirations int64 // Total expirations due to TTL
Rejections int64 // Total rejections by admission policy
CostAdded int64 // Total cost added over time
CostEvicted int64 // Total cost evicted over time
HitRatio float64 // Hit ratio (hits / (hits + misses))
}
MetricsSnapshot is a point-in-time snapshot of cache metrics.
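HitRatio is derived from the counters above. A local copy of the relevant exported fields makes the relationship concrete (assuming, as is conventional, a ratio of 0 when no lookups have occurred):

```go
package main

import "fmt"

// Local copy of the MetricsSnapshot fields relevant to the ratio.
type MetricsSnapshot struct {
	Hits   int64
	Misses int64
}

// hitRatio computes hits / (hits + misses), guarding against division by zero.
func hitRatio(s MetricsSnapshot) float64 {
	total := s.Hits + s.Misses
	if total == 0 {
		return 0
	}
	return float64(s.Hits) / float64(total)
}

func main() {
	fmt.Println(hitRatio(MetricsSnapshot{Hits: 90, Misses: 10}))
}
```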
type Option ¶
type Option[K comparable, V any] func(*config[K, V])
Option is a function that configures a Cache.
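Option follows Go's functional-options pattern: each With… constructor returns a closure that mutates the unexported config, and NewCache applies them in order over the defaults. A minimal sketch of the mechanism (the field names here are illustrative, not the package's actual unexported fields):

```go
package main

import "fmt"

// Illustrative config; the real config fields are unexported.
type config struct {
	maxEntries int64
	shardCount int
}

type Option func(*config)

func WithMaxEntries(n int64) Option { return func(c *config) { c.maxEntries = n } }
func WithShardCount(n int) Option   { return func(c *config) { c.shardCount = n } }

// newConfig mirrors what a constructor does with opts: defaults first, then each option.
func newConfig(opts ...Option) *config {
	c := &config{shardCount: 1024} // documented default
	for _, opt := range opts {
		opt(c)
	}
	return c
}

func main() {
	c := newConfig(WithMaxEntries(100_000), WithShardCount(256))
	fmt.Println(c.maxEntries, c.shardCount)
}
```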
func WithBufferItems ¶
func WithBufferItems[K comparable, V any](n int64) Option[K, V]
WithBufferItems sets the write buffer size. Writes are batched in this buffer before being applied to the cache. Default is 64.
func WithCostFunc ¶
func WithCostFunc[K comparable, V any](fn func(V) int64) Option[K, V]
WithCostFunc sets a custom function to calculate the cost of a value. If not set, each entry has a cost of 1.
func WithDefaultTTL ¶
func WithDefaultTTL[K comparable, V any](ttl time.Duration) Option[K, V]
WithDefaultTTL sets the default TTL for entries that don't specify one. A value of 0 means no expiration (default).
func WithGCFreeStorage ¶
func WithGCFreeStorage[K comparable, V any]() Option[K, V]
WithGCFreeStorage enables GC-free storage mode. This mode stores values as serialized bytes, reducing GC pressure. Best suited for caches with []byte values.
func WithIgnoreInternalCost ¶
func WithIgnoreInternalCost[K comparable, V any](ignore bool) Option[K, V]
WithIgnoreInternalCost configures whether internal metadata cost should be ignored when calculating total cache cost.
func WithKeyHasher ¶
func WithKeyHasher[K comparable, V any](fn func(K) uint64) Option[K, V]
WithKeyHasher sets a custom function to hash keys. If not set, a default hasher is used based on the key type.
func WithMaxCost ¶
func WithMaxCost[K comparable, V any](cost int64) Option[K, V]
WithMaxCost sets the maximum total cost of entries in the cache. Each entry's cost is determined by CostFunc or defaults to 1. A value of 0 means unlimited cost (default).
func WithMaxEntries ¶
func WithMaxEntries[K comparable, V any](n int64) Option[K, V]
WithMaxEntries sets the maximum number of entries in the cache. When the limit is reached, entries are evicted using the configured policy. A value of 0 means unlimited entries (default).
func WithNumCounters ¶
func WithNumCounters[K comparable, V any](n int64) Option[K, V]
WithNumCounters sets the number of counters for TinyLFU frequency estimation. Recommended value is 10x the expected number of entries. A value of 0 uses a default based on MaxEntries.
func WithOnEvict ¶
func WithOnEvict[K comparable, V any](fn func(K, V, int64)) Option[K, V]
WithOnEvict sets a callback function that is called when an entry is evicted. The callback receives the key, value, and cost of the evicted entry.
func WithOnExpire ¶
func WithOnExpire[K comparable, V any](fn func(K, V)) Option[K, V]
WithOnExpire sets a callback function that is called when an entry expires. The callback receives the key and value of the expired entry.
func WithOnReject ¶
func WithOnReject[K comparable, V any](fn func(K, V)) Option[K, V]
WithOnReject sets a callback function that is called when an entry is rejected by the TinyLFU admission policy.
func WithShardCount ¶
func WithShardCount[K comparable, V any](n int) Option[K, V]
WithShardCount sets the number of shards for concurrent access. Must be a power of 2. Default is 1024.
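Because the shard count must be a power of two (which typically lets a shard be selected with a cheap bitmask rather than a modulo), a quick validity check before passing a value:

```go
package main

import "fmt"

// A power of two has exactly one bit set, so n & (n-1) clears it to zero.
func isPowerOfTwo(n int) bool { return n > 0 && n&(n-1) == 0 }

func main() {
	fmt.Println(isPowerOfTwo(1024), isPowerOfTwo(768))
}
```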
func WithStandardStorage ¶
func WithStandardStorage[K comparable, V any]() Option[K, V]
WithStandardStorage uses the standard storage mode (default). This mode supports any value type but values are tracked by GC.
Directories ¶
| Path | Synopsis |
|---|---|
| internal | |
| alloc | Package alloc provides aligned memory allocation utilities for SIMD operations. |
| buffer | Package buffer provides a lock-free ring buffer for write coalescing. |
| clock | Package clock provides a cached time source for high-performance scenarios. |
| glob | Package glob provides glob pattern matching for cache keys. |
| hash | Package hash provides optimized hash functions for cache keys. |
| hashtable | Package hashtable provides high-performance hash table implementations. |
| policy | Package policy provides cache eviction policies. |
| pool | Package pool provides sync.Pool utilities for reducing allocations. |
| prefetch | Package prefetch provides software prefetching utilities for cache optimization. |
| radix | Package radix provides a radix tree for efficient prefix search. |
| store | Package store provides storage backends for the cache. |
| item | Package item defines the structure used for cache entries. |