bigcache

package module
v3.0.1+incompatible

Published: Aug 14, 2020 License: Apache-2.0 Imports: 9 Imported by: 0

README

BigCache

Fast, concurrent, evicting in-memory cache written to keep a large number of entries without impact on performance. BigCache keeps entries on heap but omits GC for them. To achieve that, operations take place on byte slices, therefore entry (de)serialization in front of the cache will be needed in most use cases.
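
Because the cache stores only byte slices, structured values have to be serialized before Set and deserialized after Get. A minimal sketch, assuming JSON via the standard encoding/json package (any codec works); the user type, key, and values are made up for illustration:

type user struct {
	Name string `json:"name"`
	Age  int    `json:"age"`
}

cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))

// serialize in front of the cache
buf, _ := json.Marshal(user{Name: "alice", Age: 30})
_ = cache.Set("user:alice", buf)

// deserialize after reading back
raw, err := cache.Get("user:alice")
if err == nil {
	var u user
	_ = json.Unmarshal(raw, &u)
	fmt.Println(u.Name)
}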

Requires Go 1.13 or newer.

This is a very reluctant fork of the original BigCache v2.2.2. For any documentation, issues or discussion, please visit the original project.

The original is presently unstable (see the issues there; in my production tests I cannot keep it up for more than 10 minutes before it dies with an index out of range panic), and when working on a fix I got carried away with too many changes. While cleaning up I attempted to keep the code as efficient as it was while making it somewhat easier to read and modify. The original code has unused functions (a result of moving functionality to shard.go), unnecessary flags trying to patch edge conditions (q.full? really?), too many OnRemove callbacks kept for backward compatibility, and an outdated logger (and, to some degree, overall configuration). The locking iterator design is a bit strange for highly concurrent code.

All in all I got the feeling that contributors are more interested in expanding the feature set with additional callbacks than in keeping a readable, simple, stable and performant code base, as was the case for a very long time with v1.

Until I decide whether I want to make an upstream PR I will keep this fork - it presently benchmarks as fast as or faster than the original bigcache at v2.2.2 and has some additional functionality I need (see GetWithProcessing).

License

BigCache is released under the Apache 2.0 license (see LICENSE).

Documentation

Overview

Example
cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))

cache.Set("my-unique-key", []byte("value"))

entry, _ := cache.Get("my-unique-key")
fmt.Println(string(entry))
Output:

value
Example (Custom)
// When the cache load can be predicted in advance, it is better to use custom initialization
// because additional memory allocation can be avoided that way.
config := bigcache.Config{
	// number of shards (must be a power of 2)
	Shards: 1024,

	// time after which entry can be evicted
	LifeWindow: 10 * time.Minute,

	// Interval between removing expired entries (clean up).
	// If set to <= 0 then no action is performed.
	// Setting to < 1 second is counterproductive — bigcache has a one second resolution.
	CleanWindow: 5 * time.Minute,

	// rps * lifeWindow, used only in initial memory allocation
	MaxEntriesInWindow: 1000 * 10 * 60,

	// max entry size in bytes, used only in initial memory allocation
	MaxEntrySize: 500,

	// cache will not allocate more memory than this limit, value in MB
	// if the limit is reached then the oldest entries can be overwritten by new ones
	// 0 value means no size limit
	HardMaxCacheSize: 8192,

	// callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called. The removal reason is passed to the callback.
	// Default value is nil, which means no callback; this also avoids unwrapping the oldest entry.
	OnRemove: nil,
}

cache, initErr := bigcache.NewBigCache(config)
if initErr != nil {
	log.Fatal(initErr)
}

err := cache.Set("my-unique-key", []byte("value"))
if err != nil {
	log.Fatal(err)
}

entry, err := cache.Get("my-unique-key")
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(entry))
Output:

value

Index

Examples

Constants

This section is empty.

Variables

var (
	ErrEntryNotFound       = errors.New("entry not found")
	ErrInvalidShardsNumber = errors.New("invalid number of shards, must be power of two")
)

var (
	ErrQueueEmpty        = errors.New("byte queue is empty")
	ErrQueueFull         = errors.New("byte queue is full, size limit is reached")
	ErrQueueInvalidIndex = errors.New("byte queue index is out of bounds (0 <= index < right)")
	ErrQueueEntryTooBig  = errors.New("byte queue cannot expand, entry is too big")
)

var (
	ErrCacheEntryCorrupted = errors.New("cache entry is corrupted, unable to read")
)

Functions

This section is empty.

Types

type BigCache

type BigCache struct {
	// contains filtered or unexported fields
}

BigCache is a fast, concurrent, evicting cache created to keep a large number of entries without impact on performance. It keeps entries on heap but omits GC for them. To achieve that, operations take place on byte arrays, therefore entry (de)serialization in front of the cache will be needed in most use cases.

func NewBigCache

func NewBigCache(config Config) (*BigCache, error)

NewBigCache initializes a new instance of BigCache.

func (*BigCache) Append

func (c *BigCache) Append(key string, entry []byte) error

Append appends the entry under the key if the key exists, otherwise it will set the key (same behaviour as Set()). With Append() you can concatenate multiple entries under the same key in a lock-optimized way.
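
A hedged sketch of accumulating newline-delimited records under a single key; the key and record framing are illustrative, and since Append concatenates raw bytes, any delimiter is the caller's responsibility:

// the first Append behaves like Set
_ = cache.Append("audit:today", []byte("login ok\n"))
_ = cache.Append("audit:today", []byte("password changed\n"))

entry, _ := cache.Get("audit:today")
fmt.Print(string(entry)) // prints both records in insertion order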

func (*BigCache) AppendHashed

func (c *BigCache) AppendHashed(hashedKey uint64, entry []byte) error

AppendHashed appends the entry under the key if the key exists, otherwise it will set the key (same behaviour as Set()). With AppendHashed() you can concatenate multiple entries under the same key in a lock-optimized way. NOTE: it expects an already hashed key.

func (*BigCache) Capacity added in v1.2.0

func (c *BigCache) Capacity() int

Capacity returns the number of bytes stored in the cache.

func (*BigCache) Close added in v1.2.0

func (c *BigCache) Close() error

Close is used to signal a shutdown of the cache when you are done with it. This allows the cleanup goroutines to exit and ensures no references to the cache are kept that would prevent GC of the entire cache.
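
A typical pattern, sketched under the assumption that the cache lives for the duration of the surrounding function or service:

cache, err := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))
if err != nil {
	log.Fatal(err)
}
// let internal cleanup goroutines exit and drop references when done
defer cache.Close()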

func (*BigCache) Delete added in v1.1.0

func (c *BigCache) Delete(key string) error

Delete removes the key.

func (*BigCache) DeleteHashed

func (c *BigCache) DeleteHashed(hashedKey uint64) error

DeleteHashed removes the key. NOTE: it expects an already hashed key.

func (*BigCache) Get

func (c *BigCache) Get(key string) ([]byte, error)

Get reads the entry for the key, returning a copy of the cached data. It returns ErrEntryNotFound when no entry exists for the given key.
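
A sketch of distinguishing a cache miss from a real error; errors.Is is used on the assumption that ErrEntryNotFound is returned as (or wrapped around) the sentinel value, and the key is illustrative:

entry, err := cache.Get("maybe-missing")
switch {
case errors.Is(err, bigcache.ErrEntryNotFound):
	fmt.Println("cache miss")
case err != nil:
	log.Fatal(err)
default:
	fmt.Println(string(entry))
}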

func (*BigCache) GetHashed

func (c *BigCache) GetHashed(hashedKey uint64) ([]byte, error)

GetHashed reads the entry for the key, returning a copy of the cached data. It returns ErrEntryNotFound when no entry exists for the given key. NOTE: it expects an already hashed key.

func (*BigCache) GetHashedWithProcessing

func (c *BigCache) GetHashedWithProcessing(hashedKey uint64, processor Processor) error

GetHashedWithProcessing reads the entry for the key. If found, it gives the provided Processor closure a chance to process the cached entry efficiently. It returns ErrEntryNotFound when no entry exists for the given key. NOTE: it expects an already hashed key.

func (*BigCache) GetWithProcessing

func (c *BigCache) GetWithProcessing(key string, processor Processor) error

GetWithProcessing reads the entry for the key. If found, it gives the provided Processor closure a chance to process the cached entry efficiently. It returns ErrEntryNotFound when no entry exists for the given key.
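
A minimal sketch of a Processor closure; it only inspects the entry in place and copies nothing, and the key is illustrative. Data inside the closure points at the shard buffer, so anything kept past the call should go through CopyData:

var size int
err := cache.GetWithProcessing("my-unique-key", func(ce *bigcache.CacheEntry) error {
	// ce.Data is only valid while the shard lock is held, i.e. inside this closure
	size = len(ce.Data)
	return nil
})
if err != nil && !errors.Is(err, bigcache.ErrEntryNotFound) {
	log.Fatal(err)
}
fmt.Println(size)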

func (*BigCache) Len

func (c *BigCache) Len() int

Len computes the number of entries in the cache.

func (*BigCache) Range

func (c *BigCache) Range(f Processor) error

Range attempts to call f sequentially for each key and value present in the cache. If at any point f returns ErrEntryNotFound, Range stops the iteration.

Range does not necessarily correspond to any consistent snapshot of the cache contents: no entry will be visited more than once, but if the entry is stored or deleted concurrently, Range may reflect any mapping for that entry from any point during the Range call or miss it completely.

Range may be O(N) with the number of elements in the cache even if f stops the iteration after a constant number of calls.

Range is a replacement for the over-complicated EntryInfoIterator.
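
A hedged sketch of a full scan that counts entries without copying them; returning ErrEntryNotFound from the closure would stop the scan early:

count := 0
_ = cache.Range(func(ce *bigcache.CacheEntry) error {
	// the entry points into shard memory; use the Copy methods before retaining anything
	count++
	return nil
})
fmt.Println(count)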

func (*BigCache) Reset

func (c *BigCache) Reset() error

Reset empties all cache shards.

func (*BigCache) Set

func (c *BigCache) Set(key string, entry []byte) error

Set saves entry under the key.

func (*BigCache) SetHashed

func (c *BigCache) SetHashed(hashedKey uint64, entry []byte) error

SetHashed saves the entry under the key. NOTE: it expects an already hashed key.

func (*BigCache) Stats added in v1.1.0

func (c *BigCache) Stats() Stats

Stats returns cache's statistics.

type CacheEntry

type CacheEntry struct {
	TS   uint64
	Hash uint64
	Key  []byte
	Data []byte
}

CacheEntry is used to interface between bigcache and shards and to provide the user with access to low level cache memory when necessary. NOTE: In some cases (usually in callbacks), for efficiency, the Key and Data fields give access to the underlying queue buffer, which is only safe while the shard lock is held. To make this obvious, Copy methods exist; it is up to the user to decide when to use them.

func (*CacheEntry) CopyData

func (ce *CacheEntry) CopyData(newCap int) []byte

CopyData returns a copy of the underlying slice - safe to use without the shard lock. As an optimization a bigger backing array can be allocated via newCap.

func (*CacheEntry) CopyKey

func (ce *CacheEntry) CopyKey() string

CopyKey returns a copy of the Key slice as a string - safe to use without the shard lock.

func (*CacheEntry) CopyKeyData

func (ce *CacheEntry) CopyKeyData() (s []byte)

CopyKeyData returns a copy of the Key slice - safe to use without the shard lock.

func (*CacheEntry) Size

func (ce *CacheEntry) Size() int

Size returns the number of bytes needed to store the entry. When called on a nil entry it returns the size of the header - the minimal size of any entry in the cache. NOTE: the size includes the length field itself, and that is what is written into the underlying buffer when the entry is stored.

type Config

type Config struct {
	// Number of cache shards, value must be a power of two
	Shards int
	// Time after which entry can be evicted
	LifeWindow time.Duration
	// Interval between removing expired entries (clean up).
	// If set to <= 0 then no action is performed. Setting to < 1 second is counterproductive — bigcache has a one second resolution.
	CleanWindow time.Duration
	// Max number of entries in life window. Used only to calculate initial size for cache shards.
	// When proper value is set then additional memory allocation does not occur.
	MaxEntriesInWindow int
	// Max size of entry in bytes. Used only to calculate initial size for cache shards.
	MaxEntrySize int
	// Hasher used to map between string keys and unsigned 64bit integers, by default fnv64 hashing is used.
	Hasher Hasher
	// HardMaxCacheSize is a limit for cache size in MB. Cache will not allocate more memory than this limit.
	// It can protect the application from consuming all available memory on the machine and thus from
	// triggering the OOM killer. Default value is 0 which means unlimited size. When the limit is greater
	// than 0 and reached, the oldest entries are overwritten by new ones.
	HardMaxCacheSize int
	// OnRemove is a callback fired when the entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called.
	// Default value is nil which means no callback
	OnRemove OnRemoveCallback
	// Logger is a logging interface. Defaults to `NopLogger()`
	Logger Logger
}

Config for BigCache.

func DefaultConfig

func DefaultConfig(eviction time.Duration) Config

DefaultConfig initializes the config with sane default values. When the load on BigCache can be predicted in advance, it is better to use a custom config.

type Hasher

type Hasher interface {
	Sum64(string) uint64
}

Hasher is responsible for generating an unsigned, 64-bit hash of the provided string. Hasher should minimize collisions (generating the same hash for different strings), and since performance is also important, fast functions are preferable (e.g. the FarmHash family).
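
A minimal sketch of plugging in a custom Hasher, here backed by the stdlib hash/fnv package. The default hasher is already an fnv64 variant, so this only illustrates the mechanics; a production hasher would avoid the per-call allocation:

type fnvHasher struct{}

func (fnvHasher) Sum64(key string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key)) // Write on a hash.Hash never returns an error
	return h.Sum64()
}

config := bigcache.DefaultConfig(10 * time.Minute)
config.Hasher = fnvHasher{}
cache, err := bigcache.NewBigCache(config)
if err != nil {
	log.Fatal(err)
}
_ = cache.Set("my-unique-key", []byte("value"))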

type Logger added in v1.1.0

type Logger interface {
	Printf(format string, v ...interface{})
}

Logger interface.
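
Since the stdlib *log.Logger already provides Printf with this signature, it satisfies the interface directly. A hedged sketch routing cache messages to stderr with an illustrative prefix:

config := bigcache.DefaultConfig(10 * time.Minute)
config.Logger = log.New(os.Stderr, "bigcache: ", log.LstdFlags)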

func DefaultLogger added in v1.1.0

func DefaultLogger() Logger

DefaultLogger returns an implementation backed by the stdlib log package.

type OnRemoveCallback

type OnRemoveCallback func(*CacheEntry, RemoveReason)

type Processor

type Processor func(*CacheEntry) error

Processor is a closure supplied to the set of WithProcessing functions to take ownership of the data []byte, avoiding extra memory allocation and copying. It has access to the exported CacheEntry fields, including the byte slice pointing to the actual underlying shard buffer (not a copy!).

type RemoveReason added in v1.2.0

type RemoveReason int

RemoveReason is a value used to signal to the user why a particular key was removed in the OnRemove callback.

const (
	NoReason RemoveReason = iota
	Expired               // key is past its LifeWindow
	NoSpace               // key is the oldest and the cache size was at its maximum when Set() was called, or entry size exceeded the maximum shard size
	Deleted               // key was removed as a result of Delete() call
)
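
A hedged sketch of an OnRemove callback that switches on the reason; the log messages are illustrative, and the keys are copied because the entry may point into shard memory that is reused after the callback returns:

config := bigcache.DefaultConfig(10 * time.Minute)
config.OnRemove = func(ce *bigcache.CacheEntry, reason bigcache.RemoveReason) {
	switch reason {
	case bigcache.Expired:
		log.Printf("expired: %s", ce.CopyKey())
	case bigcache.NoSpace:
		log.Printf("evicted to free space: %s", ce.CopyKey())
	case bigcache.Deleted:
		log.Printf("deleted: %s", ce.CopyKey())
	}
}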

type Stats added in v1.1.0

type Stats struct {
	// Hits is the number of successfully found keys
	Hits int64 `json:"hits"`
	// Misses is the number of not found keys
	Misses int64 `json:"misses"`
	// DelHits is the number of successfully deleted keys
	DelHits int64 `json:"delete_hits"`
	// DelMisses is the number of keys that could not be deleted
	DelMisses int64 `json:"delete_misses"`
	// Collisions is the number of key collisions that occurred
	Collisions int64 `json:"collisions"`
	// EvictedExpired is the number of entries evicted due to expiration
	EvictedExpired int64 `json:"expired"`
	// EvictedNoSpace is the number of entries evicted due to lack of free space
	EvictedNoSpace int64 `json:"nospace"`
}

Stats stores cache statistics.
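
For instance, a small sketch computing the hit ratio from a snapshot of the counters (field names as declared above):

s := cache.Stats()
if total := s.Hits + s.Misses; total > 0 {
	fmt.Printf("hit ratio: %.1f%%\n", 100*float64(s.Hits)/float64(total))
}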
