package utils

v0.0.0-...-61396ba

Published: Feb 10, 2026 License: Apache-2.0 Imports: 34 Imported by: 0

Documentation


Constants

View Source
const (
	RateLimitTierStandard   = "standard"
	RateLimitTierPremium    = "premium"
	RateLimitTierRestricted = "restricted"
	RateLimitTierBlocked    = "blocked"
)

Tier name constants for type safety and easier code reviews

Variables

View Source
var (
	ConfigurationFileDirectory string
)
View Source
var (
	DefaultTierStandard = RateLimitTier{
		Name:              RateLimitTierStandard,
		MaxReadRPS:        800,
		MaxWriteRPS:       800,
		MaxListRPS:        100,
		MaxReadBandwidth:  1 << 30,
		MaxWriteBandwidth: 1 << 30,
		MaxObjects:        0,
		MaxSizeBytes:      0,
	}
)

Default tier configurations

Functions

func Crc32PoolGetHasher

func Crc32PoolGetHasher() hash.Hash32

func Crc32PoolPutHasher

func Crc32PoolPutHasher(h hash.Hash32)

func Crc64nvmePoolGetHasher

func Crc64nvmePoolGetHasher() hash.Hash64

func Crc64nvmePoolPutHasher

func Crc64nvmePoolPutHasher(h hash.Hash64)

func DetectedHostAddress

func DetectedHostAddress() string

func FnvPoolGetHasher

func FnvPoolGetHasher() hash.Hash64

func FnvPoolPutHasher

func FnvPoolPutHasher(h hash.Hash64)

func GetBuffer

func GetBuffer(size int) []byte

GetBuffer returns a byte slice of at least the requested size. The returned slice may be larger than requested (rounded up to a power of 2). Use PutBuffer to return it to the pool when done.

For sizes > 1MB, allocates directly without pooling.
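A minimal sketch of how such a size-class pool can work: one sync.Pool per power-of-2 class, with requests above the pooled maximum bypassing the pools. The names and internals here are illustrative, not the package's actual implementation.

```go
package main

import (
	"fmt"
	"math/bits"
	"sync"
)

const maxPoolSize = 1 << 20 // 1MB; larger buffers are not pooled

var pools [21]sync.Pool // class i holds buffers of capacity 1<<i, up to 1<<20

// classFor rounds size up to the next power of 2 and returns its class index.
func classFor(size int) int {
	if size <= 1 {
		return 0
	}
	return bits.Len(uint(size - 1))
}

func getBuffer(size int) []byte {
	if size > maxPoolSize {
		return make([]byte, size) // too big to pool: allocate directly
	}
	c := classFor(size)
	if v := pools[c].Get(); v != nil {
		return v.([]byte)[:size]
	}
	return make([]byte, size, 1<<c)
}

func putBuffer(buf []byte) {
	if cap(buf) > maxPoolSize {
		return // silently discard oversized buffers
	}
	c := classFor(cap(buf))
	if cap(buf) == 1<<c { // only return exact size classes to the pool
		pools[c].Put(buf[:cap(buf)])
	}
}

func main() {
	b := getBuffer(1000)
	fmt.Println(len(b), cap(b)) // len is as requested; cap rounds up to 1024
	putBuffer(b)
}
```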

func GetBufferCap

func GetBufferCap(capacity int) []byte

GetBufferCap returns a byte slice with at least the requested capacity. The slice length is 0, but capacity is the pool size. Useful when you need to append to a buffer.

func GetDefaultTiers

func GetDefaultTiers() map[string]RateLimitTier

GetDefaultTiers returns the built-in tier definitions

func GetServerDialOption

func GetServerDialOption(certFile, keyFile, caFile string) (grpc.DialOption, error)

GetServerDialOption returns the appropriate grpc.DialOption for client connections. Use the same cert/key files if doing mTLS; otherwise pass empty strings for an insecure connection.

func GetServerOption

func GetServerOption(certFile, keyFile string) (grpc.ServerOption, error)

GetServerOption returns the appropriate grpc.ServerOption for server setup. Returns nil if certFile and keyFile are empty (insecure mode).

func Jitter

func Jitter(base time.Duration, fraction float64) time.Duration

Jitter adds random jitter to a duration to prevent thundering herd. The jitter is applied as a percentage of the base duration.

Example: Jitter(time.Minute, 0.1) returns 54s-66s (±10%)

func JitterUp

func JitterUp(base time.Duration, fraction float64) time.Duration

JitterUp adds random jitter that only increases the duration. Useful when you want minimum spacing but allow longer delays.

Example: JitterUp(time.Minute, 0.25) returns 60s-75s (+0-25%)

func JitteredTicker

func JitteredTicker(base time.Duration, fraction float64) (<-chan time.Time, func())

JitteredTicker returns a channel that sends at jittered intervals. Each tick has independent jitter applied. The returned stop function must be called to clean up resources.

func JoinHostPort

func JoinHostPort(host string, port int) string

func LoadClientTLSConfig

func LoadClientTLSConfig(certFile, keyFile, caFile string) (credentials.TransportCredentials, error)

LoadClientTLSConfig loads the TLS configuration for a gRPC client.

  • certFile: client certificate (optional, for mTLS)
  • keyFile: client key (optional, for mTLS)
  • caFile: CA certificate to verify the server (optional; uses the system CA if empty)

Returns insecure credentials if all files are empty.

func LoadConfiguration

func LoadConfiguration(configFileName string, required bool) bool

func LoadServerTLSConfig

func LoadServerTLSConfig(certFile, keyFile string) (credentials.TransportCredentials, error)

LoadServerTLSConfig loads the TLS configuration for a gRPC server. Returns nil if certFile and keyFile are empty (insecure mode).

func Md5PoolGetHasher

func Md5PoolGetHasher() hash.Hash

func Md5PoolPutHasher

func Md5PoolPutHasher(h hash.Hash)

func NewListener

func NewListener(addr string, timeout time.Duration) (ipListener net.Listener, err error)

func PutBuffer

func PutBuffer(buf []byte)

PutBuffer returns a buffer to the pool. Only buffers obtained from GetBuffer/GetBufferCap should be returned. Buffers larger than maxPoolSize are silently discarded.

WARNING: Do not use the buffer after calling PutBuffer.

func ResolvePath

func ResolvePath(path string) string

func Sha256PoolGetHasher

func Sha256PoolGetHasher() hash.Hash

func Sha256PoolPutHasher

func Sha256PoolPutHasher(h hash.Hash)

func SyncPoolGetBuffer

func SyncPoolGetBuffer() *bytes.Buffer

func SyncPoolPutBuffer

func SyncPoolPutBuffer(buffer *bytes.Buffer)

func TestWritableFile

func TestWritableFile(folder string) error

Types

type BloomFilter

type BloomFilter struct {
	// contains filtered or unexported fields
}

BloomFilter is a space-efficient probabilistic data structure for set membership tests. It can have false positives but never false negatives.

Thread-safe for concurrent use.

Usage:

// Create a filter for ~10000 items with 1% false positive rate
bf := NewBloomFilter(10000, 0.01)
bf.Add("key1")
bf.Add("key2")

bf.Contains("key1") // true
bf.Contains("key3") // false (probably)

func NewBloomFilter

func NewBloomFilter(expectedItems int, falsePositiveRate float64) *BloomFilter

NewBloomFilter creates a new Bloom filter optimized for the expected number of items and desired false positive rate.

Parameters:

  • expectedItems: expected number of items to be added
  • falsePositiveRate: desired false positive rate (e.g., 0.01 for 1%)

Memory usage is approximately: -1.44 * expectedItems * log2(falsePositiveRate) / 8 bytes
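These are the textbook Bloom filter sizing formulas: for n expected items and false-positive rate p, the optimal bit count is m = -n·ln(p)/(ln 2)² and the optimal hash count is k = (m/n)·ln 2. A sketch of the standard math, not necessarily the package's exact rounding:

```go
package main

import (
	"fmt"
	"math"
)

// bloomParams computes the optimal number of bits and hash functions for a
// Bloom filter with the given capacity and false-positive rate, using the
// standard formulas m = -n·ln(p)/(ln 2)² and k = (m/n)·ln 2.
func bloomParams(expectedItems int, falsePositiveRate float64) (numBits, numHash uint64) {
	n := float64(expectedItems)
	m := math.Ceil(-n * math.Log(falsePositiveRate) / (math.Ln2 * math.Ln2))
	k := math.Ceil(m / n * math.Ln2)
	return uint64(m), uint64(k)
}

func main() {
	// The documented example: ~10000 items at a 1% false positive rate.
	bits, hashes := bloomParams(10000, 0.01)
	fmt.Printf("%d bits (~%d KB), %d hash functions\n", bits, bits/8/1024, hashes)
}
```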

func NewBloomFilterFromSize

func NewBloomFilterFromSize(numBits, numHash uint64) *BloomFilter

NewBloomFilterFromSize creates a Bloom filter with explicit bit size and hash count. Use this when you want precise control over memory usage.

func (*BloomFilter) Add

func (bf *BloomFilter) Add(item string)

Add adds an item to the filter.

func (*BloomFilter) AddBytes

func (bf *BloomFilter) AddBytes(item []byte)

AddBytes adds a byte slice to the filter.

func (*BloomFilter) Clear

func (bf *BloomFilter) Clear()

Clear resets the filter to empty state.

func (*BloomFilter) Contains

func (bf *BloomFilter) Contains(item string) bool

Contains tests whether an item might be in the filter. Returns true if the item is possibly in the set (may be false positive). Returns false if the item is definitely not in the set.

func (*BloomFilter) ContainsBytes

func (bf *BloomFilter) ContainsBytes(item []byte) bool

ContainsBytes tests whether a byte slice might be in the filter.

func (*BloomFilter) Count

func (bf *BloomFilter) Count() uint64

Count returns the approximate number of items added to the filter.

func (*BloomFilter) EstimatedFalsePositiveRate

func (bf *BloomFilter) EstimatedFalsePositiveRate() float64

EstimatedFalsePositiveRate returns the current estimated false positive rate based on the number of items added.

func (*BloomFilter) FillRatio

func (bf *BloomFilter) FillRatio() float64

FillRatio returns the ratio of set bits to total bits. A high fill ratio (>0.5) indicates the filter may have too many false positives.

func (*BloomFilter) SizeBytes

func (bf *BloomFilter) SizeBytes() int

SizeBytes returns the memory usage of the filter in bytes.

type BloomFilterSet

type BloomFilterSet struct {
	// contains filtered or unexported fields
}

BloomFilterSet wraps a BloomFilter with a backing store for rebuilding. Useful when you need to periodically rebuild the filter from source data.

func NewBloomFilterSet

func NewBloomFilterSet(capacity int, falsePositiveRate float64) *BloomFilterSet

NewBloomFilterSet creates a new BloomFilterSet with backing store.

func (*BloomFilterSet) Add

func (bfs *BloomFilterSet) Add(item string)

Add adds an item to both the filter and backing store.

func (*BloomFilterSet) Contains

func (bfs *BloomFilterSet) Contains(item string) bool

Contains checks if an item might be in the set using the Bloom filter.

func (*BloomFilterSet) ContainsExact

func (bfs *BloomFilterSet) ContainsExact(item string) bool

ContainsExact checks if an item is definitely in the set using the backing store.

func (*BloomFilterSet) Len

func (bfs *BloomFilterSet) Len() int

Len returns the number of items in the backing store.

func (*BloomFilterSet) Rebuild

func (bfs *BloomFilterSet) Rebuild()

Rebuild reconstructs the Bloom filter from the backing store. Call this periodically if items are removed frequently.

func (*BloomFilterSet) Remove

func (bfs *BloomFilterSet) Remove(item string)

Remove removes an item from the backing store. Note: Item will still be in the Bloom filter until Rebuild() is called.

type BufferPoolLevel

type BufferPoolLevel struct {
	Size int // Buffer size at this level
}

type BufferPoolStats

type BufferPoolStats struct {
	Levels []BufferPoolLevel
}

BufferPoolStats holds statistics about buffer pool usage. Useful for debugging and monitoring.

func GetBufferPoolStats

func GetBufferPoolStats() BufferPoolStats

type Conn

type Conn struct {
	net.Conn
	ReadTimeout  time.Duration
	WriteTimeout time.Duration
	// contains filtered or unexported fields
}

Conn wraps a net.Conn and sets a deadline for every read and write operation.

func (*Conn) Close

func (c *Conn) Close() error

func (*Conn) Read

func (c *Conn) Read(b []byte) (count int, e error)

func (*Conn) Write

func (c *Conn) Write(b []byte) (count int, e error)

type FreeSpace

type FreeSpace struct {
	Type    FreeSpaceType
	Bytes   uint64
	Percent float32
	Raw     string
}

func ParseMinFreeSpace

func ParseMinFreeSpace(s string) (*FreeSpace, error)

func (FreeSpace) IsLow

func (s FreeSpace) IsLow(freeBytes uint64, freePercent float32) (bool, string)

func (FreeSpace) String

func (s FreeSpace) String() string

type FreeSpaceType

type FreeSpaceType int
const (
	AsPercent FreeSpaceType = iota
	AsBytes
)

type HasherPool

type HasherPool struct {
	// contains filtered or unexported fields
}

HasherPool provides pooled hashers for high-throughput scenarios.

func NewHasherPool

func NewHasherPool() *HasherPool

NewHasherPool creates a new pool of FNV hashers.

func (*HasherPool) Get

func (hp *HasherPool) Get() hash.Hash64

Get returns a hasher from the pool.

func (*HasherPool) Put

func (hp *HasherPool) Put(h hash.Hash64)

Put returns a hasher to the pool after resetting it.

type HyperLogLog

type HyperLogLog struct {
	// contains filtered or unexported fields
}

HyperLogLog is a probabilistic data structure for cardinality estimation. It can estimate the number of unique elements in a set using minimal memory.

Thread-safe for concurrent use.

Memory usage: 2^precision bytes (e.g., precision=14 uses 16KB)
Error rate: approximately 1.04 / sqrt(2^precision)

Usage:

hll := NewHyperLogLog(14) // ~0.8% error, 16KB memory
hll.Add("user1")
hll.Add("user2")
hll.Add("user1") // duplicate
count := hll.Count() // ~2

func NewHyperLogLog

func NewHyperLogLog(precision uint8) *HyperLogLog

NewHyperLogLog creates a new HyperLogLog with the given precision. Precision must be between 4 and 18 (inclusive).

Common precision values:

  • 10: ~1024 bytes, ~3.25% error
  • 12: ~4KB, ~1.625% error
  • 14: ~16KB, ~0.8% error (recommended)
  • 16: ~64KB, ~0.4% error

func (*HyperLogLog) Add

func (hll *HyperLogLog) Add(item string)

Add adds an element to the HyperLogLog.

func (*HyperLogLog) AddBytes

func (hll *HyperLogLog) AddBytes(item []byte)

AddBytes adds a byte slice to the HyperLogLog.

func (*HyperLogLog) AddHash

func (hll *HyperLogLog) AddHash(hash uint64)

AddHash adds a pre-computed 64-bit hash to the HyperLogLog. Useful when you already have a hash value.

func (*HyperLogLog) Clear

func (hll *HyperLogLog) Clear()

Clear resets the HyperLogLog to empty state.

func (*HyperLogLog) Count

func (hll *HyperLogLog) Count() uint64

Count returns the estimated cardinality.

func (*HyperLogLog) ErrorRate

func (hll *HyperLogLog) ErrorRate() float64

ErrorRate returns the theoretical standard error rate.

func (*HyperLogLog) Merge

func (hll *HyperLogLog) Merge(other *HyperLogLog)

Merge combines another HyperLogLog into this one. Both must have the same precision.
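The standard HyperLogLog merge rule (assumed here; the package's internals aren't shown) is the element-wise maximum of the two register arrays, which is why both sketches must have the same precision, i.e. the same register count:

```go
package main

import "fmt"

// mergeRegisters folds src into dst by taking the per-register maximum.
// The result estimates the cardinality of the union of the two sets.
func mergeRegisters(dst, src []uint8) {
	for i, v := range src {
		if v > dst[i] {
			dst[i] = v
		}
	}
}

func main() {
	a := []uint8{3, 0, 5, 1}
	b := []uint8{1, 4, 2, 1}
	mergeRegisters(a, b)
	fmt.Println(a) // [3 4 5 1]
}
```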

func (*HyperLogLog) SizeBytes

func (hll *HyperLogLog) SizeBytes() int

SizeBytes returns the memory usage in bytes.

type HyperLogLogSet

type HyperLogLogSet struct {
	// contains filtered or unexported fields
}

HyperLogLogSet manages multiple HyperLogLogs for different keys. Useful for tracking cardinality per bucket, user, etc.

func NewHyperLogLogSet

func NewHyperLogLogSet(precision uint8) *HyperLogLogSet

NewHyperLogLogSet creates a new set of HyperLogLogs.

func (*HyperLogLogSet) Add

func (hlls *HyperLogLogSet) Add(key, item string)

Add adds an item to the HyperLogLog for the given key.

func (*HyperLogLogSet) Count

func (hlls *HyperLogLogSet) Count(key string) uint64

Count returns the estimated cardinality for a key.

func (*HyperLogLogSet) Delete

func (hlls *HyperLogLogSet) Delete(key string)

Delete removes a key's HyperLogLog.

func (*HyperLogLogSet) Keys

func (hlls *HyperLogLogSet) Keys() []string

Keys returns all tracked keys.

func (*HyperLogLogSet) Len

func (hlls *HyperLogLogSet) Len() int

Len returns the number of tracked keys.

func (*HyperLogLogSet) MergeInto

func (hlls *HyperLogLogSet) MergeInto(destKey, srcKey string)

MergeInto merges all HyperLogLogs from one key into another.

func (*HyperLogLogSet) SizeBytes

func (hlls *HyperLogLogSet) SizeBytes() int

SizeBytes returns the total memory usage across all HyperLogLogs.

type Listener

type Listener struct {
	net.Listener
	ReadTimeout  time.Duration
	WriteTimeout time.Duration
}

Listener wraps a net.Listener and stores the timeout parameters. On Accept, it wraps the accepted net.Conn in our Conn type.

func (*Listener) Accept

func (l *Listener) Accept() (net.Conn, error)

type LockFreeMap

type LockFreeMap[K comparable, V any] struct {
	// contains filtered or unexported fields
}

LockFreeMap is a high-performance concurrent map using xsync.MapOf. Provides lock-free reads and fine-grained locking for writes. Best for mixed read/write workloads with high concurrency.

Performance characteristics:

  • Reads: Lock-free O(1)
  • Writes: Fine-grained locking O(1)
  • Memory: Optimized for concurrent access

This is a thin wrapper around xsync.MapOf to provide a consistent API with ShardedMap and ShardedMapCOW.

func NewLockFreeMap

func NewLockFreeMap[K comparable, V any](opts ...LockFreeMapOption[K, V]) *LockFreeMap[K, V]

NewLockFreeMap creates a new lock-free map.

func (*LockFreeMap[K, V]) Clear

func (lf *LockFreeMap[K, V]) Clear()

Clear removes all entries.

func (*LockFreeMap[K, V]) Compute

func (lf *LockFreeMap[K, V]) Compute(key K, f func(oldValue V, exists bool) (newValue V, store bool)) (V, bool)

Compute atomically computes a new value for a key. If f returns false, the entry is deleted.

func (*LockFreeMap[K, V]) Delete

func (lf *LockFreeMap[K, V]) Delete(key K)

Delete removes a key from the map.

func (*LockFreeMap[K, V]) DeleteIf

func (lf *LockFreeMap[K, V]) DeleteIf(predicate func(key K, value V) bool) int

DeleteIf deletes entries where predicate returns true. Returns the number of deleted entries.

func (*LockFreeMap[K, V]) Keys

func (lf *LockFreeMap[K, V]) Keys() []K

Keys returns all keys.

func (*LockFreeMap[K, V]) Len

func (lf *LockFreeMap[K, V]) Len() int

Len returns the number of entries. This is an approximation due to concurrent access.

func (*LockFreeMap[K, V]) Load

func (lf *LockFreeMap[K, V]) Load(key K) (V, bool)

Load returns the value for a key. Lock-free.

func (*LockFreeMap[K, V]) LoadAndDelete

func (lf *LockFreeMap[K, V]) LoadAndDelete(key K) (V, bool)

LoadAndDelete atomically loads and deletes a key.

func (*LockFreeMap[K, V]) LoadAndStore

func (lf *LockFreeMap[K, V]) LoadAndStore(key K, value V) (V, bool)

LoadAndStore atomically loads the old value and stores a new value. Returns the old value and whether it existed.

func (*LockFreeMap[K, V]) LoadOrCompute

func (lf *LockFreeMap[K, V]) LoadOrCompute(key K, compute func() V) (V, bool)

LoadOrCompute returns the existing value if present, otherwise computes and stores a new value. The compute function is called exactly once if the key doesn't exist.
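A mutex-based illustration of the LoadOrCompute contract: the compute function runs at most once per missing key, even under concurrent callers. The real LockFreeMap delegates to xsync.MapOf; this simplified version only demonstrates the semantics:

```go
package main

import (
	"fmt"
	"sync"
)

// computeMap is a toy map with LoadOrCompute semantics, guarded by one mutex.
type computeMap[K comparable, V any] struct {
	mu sync.Mutex
	m  map[K]V
}

func newComputeMap[K comparable, V any]() *computeMap[K, V] {
	return &computeMap[K, V]{m: make(map[K]V)}
}

// LoadOrCompute returns (existing, true) if key is present; otherwise it
// calls compute exactly once, stores the result, and returns (result, false).
func (c *computeMap[K, V]) LoadOrCompute(key K, compute func() V) (V, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.m[key]; ok {
		return v, true
	}
	v := compute()
	c.m[key] = v
	return v, false
}

func main() {
	calls := 0
	m := newComputeMap[string, int]()
	v, loaded := m.LoadOrCompute("a", func() int { calls++; return 42 })
	fmt.Println(v, loaded) // 42 false — computed and stored
	v, loaded = m.LoadOrCompute("a", func() int { calls++; return 99 })
	fmt.Println(v, loaded, calls) // 42 true 1 — compute not called again
}
```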

func (*LockFreeMap[K, V]) LoadOrStore

func (lf *LockFreeMap[K, V]) LoadOrStore(key K, value V) (V, bool)

LoadOrStore returns the existing value if present, otherwise stores and returns the new value. Returns true if loaded, false if stored.

func (*LockFreeMap[K, V]) Range

func (lf *LockFreeMap[K, V]) Range(f func(key K, value V) bool)

Range calls f for each key-value pair. If f returns false, iteration stops.

func (*LockFreeMap[K, V]) Store

func (lf *LockFreeMap[K, V]) Store(key K, value V)

Store sets a value for a key.

type LockFreeMapOption

type LockFreeMapOption[K comparable, V any] func(*xsync.MapConfig)

LockFreeMapOption configures a LockFreeMap.

func WithLockFreeGrowOnly

func WithLockFreeGrowOnly[K comparable, V any]() LockFreeMapOption[K, V]

WithLockFreeGrowOnly disables shrinking, useful for maps that only grow.

func WithLockFreeSize

func WithLockFreeSize[K comparable, V any](size int) LockFreeMapOption[K, V]

WithLockFreeSize sets the initial size hint for the map.

type RateLimitTier

type RateLimitTier struct {
	Name              string
	MaxReadRPS        int
	MaxWriteRPS       int
	MaxListRPS        int
	MaxReadBandwidth  int64
	MaxWriteBandwidth int64
	MaxObjects        int64
	MaxSizeBytes      int64
}

type ShardedMap

type ShardedMap[K comparable, V any] struct {
	// contains filtered or unexported fields
}

ShardedMap is a concurrent map with sharding for reduced lock contention. Supports generic key types and configurable shard count. More efficient than sync.Map for high-throughput scenarios with mixed reads/writes.
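The contention win comes from routing: each key is hashed (FNV-1a is the documented default for string keys) and the hash selects one of N independently locked shards, so writers touching different shards never block each other. A minimal routing sketch:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

const shardCount = 256 // the documented default shard count

// shardFor maps a string key to a shard index via FNV-1a.
func shardFor(key string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return h.Sum64() % shardCount
}

func main() {
	for _, k := range []string{"bucket/a", "bucket/b", "bucket/c"} {
		fmt.Printf("%-9s -> shard %d\n", k, shardFor(k))
	}
}
```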

func NewShardedMap

func NewShardedMap[K comparable, V any](opts ...ShardedMapOption[K, V]) *ShardedMap[K, V]

NewShardedMap creates a new sharded map with the given options.

func (*ShardedMap[K, V]) Clear

func (sm *ShardedMap[K, V]) Clear()

Clear removes all entries from the map.

func (*ShardedMap[K, V]) Compute

func (sm *ShardedMap[K, V]) Compute(key K, f func(oldValue V, exists bool) (newValue V, store bool)) (V, bool)

Compute atomically computes a new value for a key. If the key exists, f receives the current value and true. If the key doesn't exist, f receives zero value and false. If f returns false, the key is deleted (if it exists). Returns the new value and whether it was stored.

func (*ShardedMap[K, V]) Delete

func (sm *ShardedMap[K, V]) Delete(key K)

Delete removes a key from the map.

func (*ShardedMap[K, V]) DeleteIf

func (sm *ShardedMap[K, V]) DeleteIf(predicate func(key K, value V) bool) int

DeleteIf deletes entries where the predicate returns true. Returns the number of entries deleted.

func (*ShardedMap[K, V]) Keys

func (sm *ShardedMap[K, V]) Keys() []K

Keys returns all keys in the map. Note: This is not atomic - keys may be added/removed during iteration.

func (*ShardedMap[K, V]) Len

func (sm *ShardedMap[K, V]) Len() int

Len returns the total number of entries across all shards.

func (*ShardedMap[K, V]) Load

func (sm *ShardedMap[K, V]) Load(key K) (V, bool)

Load returns the value for a key, or the zero value if not found.

func (*ShardedMap[K, V]) LoadAndDelete

func (sm *ShardedMap[K, V]) LoadAndDelete(key K) (V, bool)

LoadAndDelete loads and deletes a key atomically. Returns the value and true if found, zero value and false otherwise.

func (*ShardedMap[K, V]) LoadOrStore

func (sm *ShardedMap[K, V]) LoadOrStore(key K, value V) (V, bool)

LoadOrStore returns the existing value if present, otherwise stores and returns the new value. Returns true if the value was loaded, false if stored.

func (*ShardedMap[K, V]) Range

func (sm *ShardedMap[K, V]) Range(f func(key K, value V) bool)

Range calls f for each key-value pair in the map. If f returns false, iteration stops.

func (*ShardedMap[K, V]) Store

func (sm *ShardedMap[K, V]) Store(key K, value V)

Store sets a value for a key.

type ShardedMapCOW

type ShardedMapCOW[K comparable, V any] struct {
	// contains filtered or unexported fields
}

ShardedMapCOW is a copy-on-write sharded map optimized for read-heavy workloads. Reads are completely lock-free, while writes copy the entire shard. Best for scenarios with high read:write ratios (>10:1).

Trade-offs:

  • Reads: O(1) lock-free
  • Writes: O(n) where n is shard size (copies entire shard)
  • Memory: Higher during writes (temporary copies)
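The trade-offs above can be seen in a single-shard sketch of the copy-on-write pattern: readers atomically load an immutable map and never lock; each write copies the whole map, mutates the copy, and swaps the pointer. Illustrative only, not the package's implementation:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// cowShard is one copy-on-write shard: lock-free reads, O(n) writes.
type cowShard[K comparable, V any] struct {
	mu   sync.Mutex                // serializes writers only
	data atomic.Pointer[map[K]V]   // readers load this without locking
}

func newCOWShard[K comparable, V any]() *cowShard[K, V] {
	s := &cowShard[K, V]{}
	m := make(map[K]V)
	s.data.Store(&m)
	return s
}

// Load is a lock-free read against the current snapshot.
func (s *cowShard[K, V]) Load(key K) (V, bool) {
	v, ok := (*s.data.Load())[key]
	return v, ok
}

// Store copies the entire shard, applies the write, and swaps the pointer.
func (s *cowShard[K, V]) Store(key K, value V) {
	s.mu.Lock()
	defer s.mu.Unlock()
	old := *s.data.Load()
	next := make(map[K]V, len(old)+1)
	for k, v := range old { // O(n) copy — the documented write cost
		next[k] = v
	}
	next[key] = value
	s.data.Store(&next)
}

func main() {
	s := newCOWShard[string, int]()
	s.Store("a", 1)
	v, ok := s.Load("a")
	fmt.Println(v, ok) // 1 true
}
```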

func NewShardedMapCOW

func NewShardedMapCOW[K comparable, V any](opts ...ShardedMapCOWOption[K, V]) *ShardedMapCOW[K, V]

NewShardedMapCOW creates a new copy-on-write sharded map.

func (*ShardedMapCOW[K, V]) Clear

func (sm *ShardedMapCOW[K, V]) Clear()

Clear removes all entries from the map.

func (*ShardedMapCOW[K, V]) Compute

func (sm *ShardedMapCOW[K, V]) Compute(key K, f func(oldValue V, exists bool) (newValue V, store bool)) (V, bool)

Compute atomically computes a new value for a key.

func (*ShardedMapCOW[K, V]) Delete

func (sm *ShardedMapCOW[K, V]) Delete(key K)

Delete removes a key from the map.

func (*ShardedMapCOW[K, V]) DeleteIf

func (sm *ShardedMapCOW[K, V]) DeleteIf(predicate func(key K, value V) bool) int

DeleteIf deletes entries where predicate returns true.

func (*ShardedMapCOW[K, V]) Keys

func (sm *ShardedMapCOW[K, V]) Keys() []K

Keys returns all keys. Lock-free snapshot.

func (*ShardedMapCOW[K, V]) Len

func (sm *ShardedMapCOW[K, V]) Len() int

Len returns the total number of entries. Lock-free but not atomic across shards.

func (*ShardedMapCOW[K, V]) Load

func (sm *ShardedMapCOW[K, V]) Load(key K) (V, bool)

Load returns the value for a key. Lock-free read.

func (*ShardedMapCOW[K, V]) LoadAndDelete

func (sm *ShardedMapCOW[K, V]) LoadAndDelete(key K) (V, bool)

LoadAndDelete atomically loads and deletes a key.

func (*ShardedMapCOW[K, V]) LoadOrStore

func (sm *ShardedMapCOW[K, V]) LoadOrStore(key K, value V) (V, bool)

LoadOrStore returns the existing value if present, otherwise stores and returns the new value. Returns true if loaded, false if stored.

func (*ShardedMapCOW[K, V]) Range

func (sm *ShardedMapCOW[K, V]) Range(f func(key K, value V) bool)

Range calls f for each key-value pair. Lock-free iteration. Note: Iterates over a snapshot, so concurrent writes won't affect this iteration.

func (*ShardedMapCOW[K, V]) Snapshot

func (sm *ShardedMapCOW[K, V]) Snapshot() map[K]V

Snapshot returns a copy of all entries. Lock-free.

func (*ShardedMapCOW[K, V]) Store

func (sm *ShardedMapCOW[K, V]) Store(key K, value V)

Store sets a value for a key. Copies the entire shard.

type ShardedMapCOWOption

type ShardedMapCOWOption[K comparable, V any] func(*ShardedMapCOW[K, V])

ShardedMapCOWOption configures a ShardedMapCOW.

func WithCOWHashFunc

func WithCOWHashFunc[K comparable, V any](fn func(K) uint64) ShardedMapCOWOption[K, V]

WithCOWHashFunc sets a custom hash function.

func WithCOWShardCount

func WithCOWShardCount[K comparable, V any](count int) ShardedMapCOWOption[K, V]

WithCOWShardCount sets the number of shards.

type ShardedMapOption

type ShardedMapOption[K comparable, V any] func(*ShardedMap[K, V])

ShardedMapOption configures a ShardedMap.

func WithHashFunc

func WithHashFunc[K comparable, V any](fn func(K) uint64) ShardedMapOption[K, V]

WithHashFunc sets a custom hash function for keys. Default uses FNV-1a for strings and maphash for other types.

func WithShardCount

func WithShardCount[K comparable, V any](count int) ShardedMapOption[K, V]

WithShardCount sets the number of shards. Default is 256. More shards = less contention but more memory overhead.

type TierConfig

type TierConfig struct {
	DefaultTier     string
	Tiers           map[string]RateLimitTier
	CollectionTiers map[string]string // collection (bucket) -> tier overrides
	UserTiers       map[string]string // user ID -> tier overrides
}

TierConfig holds tier configuration loaded from tiers.toml

func LoadTierConfig

func LoadTierConfig() *TierConfig

LoadTierConfig loads tier configuration from tiers.toml. Returns a config with defaults if the file is not found.
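The file format isn't documented here; a purely hypothetical tiers.toml showing how the TierConfig fields might map. Every key name below is invented for illustration and is not the package's actual schema:

```toml
# Hypothetical schema — key names invented for illustration only.
default_tier = "standard"

[tiers.premium]
max_read_rps  = 2000
max_write_rps = 2000

[collection_tiers]
"logs-bucket" = "restricted"

[user_tiers]
"user-123" = "premium"
```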
