Documentation
Constants
This section is empty.
Variables
This section is empty.
Functions
This section is empty.
Types
type AtomicMap
type AtomicMap[K comparable, V any] interface { Map[K, V] }
AtomicMap is a goroutine-safe hash map based on atomic operations. Readers are non-blocking; writers use a per-bucket mutex and a single resize mutex. The map is optimized for read/write operations and uses a pool for memory allocation.
Implementation
The map uses a variant of reference counting with two reference counts described in "C++ Concurrency in Action" by Anthony Williams. Each entry has an external reference count and an internal reference count.
Readers increment the external reference count when acquiring an entry, and decrement the internal reference count when releasing it.
Writers atomically swap the reference with a new one and add (external reference count - 1) to the internal reference count.
When the internal reference count reaches zero, the entry is freed and returned to the pool.
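Below is a minimal, illustrative sketch of this split reference-counting scheme. The names (slot, externalRef, acquire, release, swap) are invented for the example and are not the package's actual entry layout; in particular, the real implementation avoids the per-read allocation shown here and recycles entries through a pool.

package sketch

import "sync/atomic"

// entry is a value with an internal reference count.
type entry[V any] struct {
	value    V
	internal atomic.Int64 // internal reference count
}

// externalRef pairs the current entry with an external reference count.
// Readers install a copy with count+1, so the count records how many
// readers acquired the entry while it was current.
type externalRef[V any] struct {
	count int64
	entry *entry[V]
}

// slot is a single map slot holding the current reference.
type slot[V any] struct {
	ref atomic.Pointer[externalRef[V]]
}

// acquire increments the external reference count and returns the entry.
func (s *slot[V]) acquire() *entry[V] {
	for {
		old := s.ref.Load()
		if old == nil {
			return nil
		}
		next := &externalRef[V]{count: old.count + 1, entry: old.entry}
		if s.ref.CompareAndSwap(old, next) {
			return next.entry
		}
	}
}

// release decrements the internal reference count; the last release frees
// the entry (the real map returns it to the pool here).
func (e *entry[V]) release() {
	if e.internal.Add(-1) == 0 {
		// free: return the entry to the pool
	}
}

// swap atomically replaces the entry and credits the old entry's internal
// count with (external count - 1), keeping it alive for outstanding readers.
func (s *slot[V]) swap(newEntry *entry[V]) {
	old := s.ref.Swap(&externalRef[V]{count: 1, entry: newEntry})
	if old != nil && old.entry.internal.Add(old.count-1) == 0 {
		// free: return the old entry to the pool
	}
}

Whichever operation drives the internal count to zero, the last reader's release or the writer's transfer, observes zero exactly once, so the entry is freed exactly once.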
Benchmarks
cpu: Apple M1 Pro
BenchmarkAtomicMap_Read-10                              74086842     14.26 ns/op    70.15 mops     0 B/op    0 allocs/op
BenchmarkAtomicMap_Read_Parallel-10                     61127808     18.12 ns/op    55.20 mops     0 B/op    0 allocs/op
BenchmarkAtomicMap_Write-10                             27489913     43.14 ns/op    23.18 mops     0 B/op    0 allocs/op
BenchmarkAtomicMap_Write_Parallel-10                     7670359    161.00 ns/op     6.21 mops     0 B/op    0 allocs/op
BenchmarkAtomicMap_Read_Write_Parallel-10                7588120    163.60 ns/op     2.57 rmops    6.111 wmops    0 B/op    0 allocs/op
BenchmarkAtomicMap_Read_Parallel_Write_Parallel-10       3187329    360.30 ns/op    43.66 rmops    2.775 wmops    0 B/op    0 allocs/op
func NewAtomicMap
func NewAtomicMap[K comparable, V any]() AtomicMap[K, V]
NewAtomicMap returns a new atomic map.
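A short usage sketch; the key and value types are arbitrary, and the methods used are part of the Map interface documented below:

	m := NewAtomicMap[string, int]()

	m.Set("a", 1)
	m.SetAbsent("b", 2)

	if v, ok := m.Get("a"); ok {
		_ = v // use the value
	}

	m.Range(func(k string, v int) bool {
		return true // continue iteration
	})

	m.Delete("b")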
type AtomicShardedMap
type AtomicShardedMap[K comparable, V any] interface { Map[K, V] }
AtomicShardedMap is a goroutine-safe hash map based on atomic operations, multiple shards to reduce contention, and hierarchical hashing.
Readers are non-blocking, writers use a mutex per bucket, and a resize mutex per shard. See AtomicMap for the implementation details.
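The sketch below shows one plausible reading of the hierarchical layout, with invented names and sizes (numShards, the bucket count, and the string key type are assumptions): a key is hashed once, the top bits of the hash select a shard, and the same hash is reused inside the shard to select a bucket.

package sketch

import "hash/fnv"

const numShards = 16 // assumed shard count

// bucket is a placeholder for per-bucket state (entries and a write mutex).
type bucket struct{}

// shard is a single sub-map; in the sharded map each shard also has its own
// resize mutex.
type shard struct {
	buckets [64]bucket // assumed bucket count
}

type shardedMap struct {
	shards [numShards]shard
}

// hashKey hashes a key once; both levels reuse the same hash value.
func hashKey(key string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	return h.Sum32()
}

// shardOf uses the top bits of the hash to select a shard.
func (m *shardedMap) shardOf(hash uint32) *shard {
	return &m.shards[(hash>>28)%numShards]
}

// bucketOf uses the lower bits of the hash to select a bucket within a shard.
func (s *shard) bucketOf(hash uint32) *bucket {
	return &s.buckets[hash%uint32(len(s.buckets))]
}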
Benchmarks
cpu: Apple M1 Pro
BenchmarkAtomicShardedMap_Read-10                              67143218     17.81 ns/op    56.15 mops    0 B/op    0 allocs/op
BenchmarkAtomicShardedMap_Read_Parallel-10                     66365866     19.75 ns/op    50.63 mops    0 B/op    0 allocs/op
BenchmarkAtomicShardedMap_Write-10                             26752994     44.23 ns/op    22.61 mops    0 B/op    0 allocs/op
BenchmarkAtomicShardedMap_Write_Parallel-10                    24188610     43.98 ns/op    22.74 mops    0 B/op    0 allocs/op
BenchmarkAtomicShardedMap_Read_Write_Parallel-10               24252631     49.88 ns/op     5.58 rmops    20.05 wmops    0 B/op    0 allocs/op
BenchmarkAtomicShardedMap_Read_Parallel_Write_Parallel-10       5750632    237.40 ns/op    43.42 rmops     4.21 wmops    0 B/op    0 allocs/op
func NewAtomicShardedMap
func NewAtomicShardedMap[K comparable, V any]() AtomicShardedMap[K, V]
NewAtomicShardedMap returns a new atomic sharded map.
type KeyLock
type KeyLock interface {
	// Lock returns a channel, receiving from which locks the key.
	Lock() <-chan struct{}

	// Unlock unlocks the key lock.
	Unlock()

	// Free frees the acquired key.
	Free()
}
KeyLock is a single lock for a key; the lock must be freed after use.
type LockMap
type LockMap[K comparable] interface {
	// Get returns a key lock, the lock must be freed after use.
	//
	// Usage:
	//
	//	m := NewLockMap[int]()
	//
	//	lock := m.Get(123)
	//	defer lock.Free()
	//
	//	select {
	//	case <-lock.Lock():
	//	case <-time.After(time.Second):
	//		return status.Timeout
	//	case <-ctx.Wait():
	//		return ctx.Status()
	//	}
	//	defer lock.Unlock()
	Get(key K) KeyLock

	// Contains returns true if the key is present.
	//
	// Usually it means that the key is locked, but it is not guaranteed.
	// In the latter case the key is unlocked but is not yet freed.
	Contains(key K) bool

	// Lock returns a locked key, the key must be freed after use.
	//
	// Usage:
	//
	//	m := NewLockMap[int]()
	//
	//	lock, st := m.Lock(ctx, 123)
	//	if !st.OK() {
	//		return st
	//	}
	//	defer lock.Free()
	Lock(ctx async.Context, key K) (LockedKey, status.Status)

	// LockMap locks the map itself, internally it locks all buckets.
	//
	// Usage:
	//
	//	m := NewLockMap[int]()
	//
	//	locks := m.LockMap()
	//	defer locks.Free()
	//
	//	for key := range keys {
	//		ok := locks.Contains(key)
	//		// ...
	//	}
	LockMap() LockedLockMap[K]
}
LockMap holds locks for different keys.
The map uses multiple buckets (shards), each with its own mutex. Buckets are spread across multiple cache lines to try to avoid false sharing; the number of cache lines equals the number of CPUs.
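For illustration, here is a minimal sketch of that padded-bucket idea. The names, the string key type, the FNV hash, and the 64-byte cache-line size are assumptions for the example, not the actual lock_map_bucket.go layout:

package sketch

import (
	"hash/fnv"
	"runtime"
	"sync"
)

const cacheLineSize = 64 // assumed cache-line size

// bucket guards a subset of keys with its own mutex.
type bucket struct {
	mu   sync.Mutex
	keys map[string]struct{} // keys currently held by this bucket
}

// paddedBucket pads each bucket so that neighbouring mutexes do not share
// a cache line, reducing false sharing between CPUs.
type paddedBucket struct {
	bucket
	_ [cacheLineSize]byte
}

// lockBuckets spreads buckets across as many cache lines as there are CPUs.
type lockBuckets struct {
	buckets []paddedBucket
}

func newLockBuckets() *lockBuckets {
	n := runtime.NumCPU()
	b := make([]paddedBucket, n)
	for i := range b {
		b[i].keys = make(map[string]struct{})
	}
	return &lockBuckets{buckets: b}
}

// bucketOf hashes a key to select its bucket.
func (l *lockBuckets) bucketOf(key string) *paddedBucket {
	h := fnv.New32a()
	h.Write([]byte(key))
	return &l.buckets[int(h.Sum32()%uint32(len(l.buckets)))]
}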
Benchmarks
cpu: Apple M1 Pro
BenchmarkLockMap_Get-10              20442770    53.08 ns/op    18.84 mops    8 B/op    1 allocs/op
BenchmarkLockMap_Get_Parallel-10     18444724    66.06 ns/op    15.14 mops    8 B/op    1 allocs/op
BenchmarkLockMap_Lock-10             12843542    93.09 ns/op    10.74 mops    8 B/op    1 allocs/op
BenchmarkLockMap_Lock_Parallel-10    14246732    88.46 ns/op    11.30 mops    8 B/op    1 allocs/op
type LockedKey
type LockedKey interface {
	// Free unlocks and frees the key.
Free()
}
LockedKey is a locked key which is unlocked when freed.
type LockedLockMap
type LockedLockMap[K comparable] interface {
	// Contains returns true if the key is present.
	Contains(key K) bool

	// Range ranges over all keys.
	Range(fn func(key K) bool)

	// Free unlocks the map itself, internally it unlocks all buckets.
	Free()
}
LockedLockMap is an interface to interact with a map locked in exclusive mode.
type LockedMap
type LockedMap[K comparable, V any] interface {
	// Clear deletes all items.
	Clear()

	// Range iterates over all key-value pairs.
	// The iteration stops if the function returns false.
	Range(fn func(K, V) bool)

	// Free unlocks the locked map.
	Free()
}
LockedMap provides exclusive access to an async map. The map must be freed after use.
type Map
type Map[K comparable, V any] interface {
	// Len returns the number of keys.
	Len() int

	// Clear deletes all items.
	Clear()

	// Contains returns true if a key exists.
	Contains(key K) bool

	// Get returns a key value, or false.
	Get(key K) (V, bool)

	// GetOrSet returns a key value and true, or sets a value and false.
	GetOrSet(key K, value V) (V, bool)

	// Delete deletes a key value, and returns the previous value.
	Delete(key K) (V, bool)

	// LockMap exclusively locks the map.
	//
	// Usage:
	//
	//	m := NewAtomicMap[int, int]()
	//
	//	locked := m.LockMap()
	//	defer locked.Free()
	//
	//	// Handle items if required
	//	locked.Range(func(k int, v int) bool {
	//		return true
	//	})
	//
	//	// Clear items if required
	//	locked.Clear()
	LockMap() LockedMap[K, V]

	// Set sets a value for a key.
	Set(key K, value V)

	// SetAbsent sets a key value if absent, returns true if set.
	SetAbsent(key K, value V) bool

	// Swap swaps a key value and returns the previous value.
	Swap(key K, value V) (V, bool)

	// Range iterates over all key-value pairs.
	// The iteration stops if the function returns false.
	Range(fn func(K, V) bool)
}
Map is an abstract interface for a concurrent map.
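A short usage sketch of the methods not shown above; any of the constructors below can be used, NewShardedMap is picked arbitrarily:

	m := NewShardedMap[string, int]()

	// GetOrSet returns the existing value and true, or sets 1 and returns false.
	v, loaded := m.GetOrSet("a", 1)
	_ = v
	_ = loaded

	// SetAbsent sets the value only if the key is absent.
	if m.SetAbsent("b", 2) {
		// "b" was absent and has been set
	}

	// Swap replaces the value and returns the previous one.
	prev, ok := m.Swap("a", 3)
	_ = prev
	_ = ok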
func NewShardedMap
func NewShardedMap[K comparable, V any]() Map[K, V]
NewShardedMap returns a new sharded map.
type ShardedMap
type ShardedMap[K comparable, V any] interface { Map[K, V] }
ShardedMap is a map which uses multiple hash maps, each guarded by a separate mutex.
This map is optimized for read-write operations. Use SyncMap if you need a map optimized mostly for read operations. Use AtomicMap or AtomicShardedMap if you need a map optimized for read/write operations and non-blocking reads.
Benchmarks
cpu: Apple M1 Pro
BenchmarkShardedMap_Read-10                             28416379      37.92 ns/op    26.37 mops    0 B/op    0 allocs/op
BenchmarkShardedMap_Read_Parallel-10                    45233161      25.85 ns/op    38.68 mops    0 B/op    0 allocs/op
BenchmarkShardedMap_Write-10                            28739761      41.63 ns/op    24.02 mops    0 B/op    0 allocs/op
BenchmarkShardedMap_Write_Parallel-10                   11624372     100.70 ns/op     9.92 mops    0 B/op    0 allocs/op
BenchmarkShardedMap_Read_Write_Parallel-10               8651703     139.70 ns/op     1.31 rmops    7.15 wmops    0 B/op    0 allocs/op
BenchmarkShardedMap_Read_Parallel_Write_Parallel-10      1000000    1162.00 ns/op    22.56 rmops    0.86 wmops    0 B/op    0 allocs/op
type SyncMap
type SyncMap[K comparable, V any] interface { Map[K, V] }
SyncMap is a generic wrapper around the standard sync.Map.
This map is optimized mostly for read operations. Write operations are slower and allocate memory. Write performance degrades heavily if keys are deleted from the map.
Use AtomicMap or AtomicShardedMap if you need a map optimized for read-write operations.
Benchmarks
cpu: Apple M1 Pro
BenchmarkSyncMap_Read-10                                 38069961     30.49 ns/op     32.79 mops    0 B/op    0 allocs/op
BenchmarkSyncMap_Read_Parallel-10                       274098883      4.34 ns/op    230.10 mops    0 B/op    0 allocs/op
BenchmarkSyncMap_Write-10                                15118998     79.18 ns/op     12.63 mops    28 B/op    2 allocs/op
BenchmarkSyncMap_Write_Parallel-10                        4176376    290.10 ns/op      3.44 mops    28 B/op    2 allocs/op
BenchmarkSyncMap_Read_Write_Parallel-10                  31551691     37.75 ns/op     11.16 rmops    26.49 wmops    28 B/op    2 allocs/op
BenchmarkSyncMap_Read_Parallel_Write_Parallel-10         11116138    126.10 ns/op    174.20 rmops     7.93 wmops    28 B/op    2 allocs/op
func NewSyncMap
func NewSyncMap[K comparable, V any]() SyncMap[K, V]
NewSyncMap returns a new generic wrapper around a standard sync.Map.
Source Files
- atomic_map.go
- atomic_map_bucket.go
- atomic_map_entry.go
- atomic_map_entry_ref.go
- atomic_map_locked.go
- atomic_map_shard.go
- atomic_map_state.go
- atomic_sharded_locked.go
- atomic_sharded_map.go
- lock_map.go
- lock_map_bucket.go
- lock_map_entry.go
- lock_map_item.go
- lock_map_key_lock.go
- lock_map_key_locked.go
- lock_map_locked.go
- map.go
- sharded_map.go
- sharded_map_shard.go
- sync_map.go