bigcache


README

BigCache

Fast, concurrent, evicting in-memory cache written to keep a large number of entries without impacting performance. BigCache keeps entries on the heap but omits GC for them. To achieve that, it operates on byte slices, so entries will usually need to be (de)serialized in front of the cache.

Requires Go 1.12 or newer.

Usage

Simple initialization
import (
	"context"
	"fmt"
	"time"

	"github.com/allegro/bigcache/v3"
)

cache, _ := bigcache.New(context.Background(), bigcache.DefaultConfig(10 * time.Minute))

cache.Set("my-unique-key", []byte("value"))

entry, _ := cache.Get("my-unique-key")
fmt.Println(string(entry))
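
Since the cache stores only byte slices, typed values have to be serialized in front of it and deserialized after reading. A minimal sketch, reusing the cache above and assuming a hypothetical User type with encoding/json added to the imports:

type User struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

// serialize before writing
buf, _ := json.Marshal(User{ID: 1, Name: "alice"})
cache.Set("user-1", buf)

// deserialize after reading
data, _ := cache.Get("user-1")
var u User
if err := json.Unmarshal(data, &u); err == nil {
	fmt.Println(u.Name)
}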
Custom initialization

When the cache load can be predicted in advance, it is better to use custom initialization, because additional memory allocation can then be avoided.

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/allegro/bigcache/v3"
)

config := bigcache.Config{
	// number of shards (must be a power of 2)
	Shards: 1024,

	// time after which entry can be evicted
	LifeWindow: 10 * time.Minute,

	// Interval between removing expired entries (clean up).
	// If set to <= 0 then no action is performed.
	// Setting to < 1 second is counterproductive — bigcache has a one second resolution.
	CleanWindow: 5 * time.Minute,

	// rps * lifeWindow, used only in initial memory allocation
	MaxEntriesInWindow: 1000 * 10 * 60,

	// max entry size in bytes, used only in initial memory allocation
	MaxEntrySize: 500,

	// prints information about additional memory allocation
	Verbose: true,

	// cache will not allocate more memory than this limit, value in MB
	// if value is reached then the oldest entries can be overwritten by new ones
	// 0 value means no size limit
	HardMaxCacheSize: 8192,

	// callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called. A bitmask representing the reason will be returned.
	// Default value is nil which means no callback; this also prevents the oldest entry from being unwrapped.
	OnRemove: nil,

	// OnRemoveWithReason is a callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called. A constant representing the reason will be passed through.
	// Default value is nil which means no callback; this also prevents the oldest entry from being unwrapped.
	// Ignored if OnRemove is specified.
	OnRemoveWithReason: nil,
}

cache, initErr := bigcache.New(context.Background(), config)
if initErr != nil {
	log.Fatal(initErr)
}

cache.Set("my-unique-key", []byte("value"))

if entry, err := cache.Get("my-unique-key"); err == nil {
	fmt.Println(string(entry))
}
LifeWindow & CleanWindow
  1. LifeWindow is a duration. After it elapses, an entry is considered dead, but it is not deleted yet.

  2. CleanWindow is a duration. Every CleanWindow, all dead entries are deleted; entries still within their LifeWindow are kept (see the timing sketch below).
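
A small timing sketch of the two windows (the 2-second LifeWindow and 1-second CleanWindow are arbitrary values chosen only for this demonstration):

config := bigcache.DefaultConfig(2 * time.Second) // LifeWindow = 2s
config.CleanWindow = 1 * time.Second              // dead entries are removed every second

cache, _ := bigcache.New(context.Background(), config)
cache.Set("key", []byte("value"))

// within the LifeWindow the entry is alive and retrievable
if entry, err := cache.Get("key"); err == nil {
	fmt.Println(string(entry)) // value
}

// after LifeWindow plus CleanWindow have passed, the dead entry has been cleaned up
time.Sleep(4 * time.Second)
if _, err := cache.Get("key"); err != nil {
	fmt.Println(err) // Entry not found
}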

Benchmarks

Three caches were compared: bigcache, freecache and map. Benchmark tests were run on an i7-6700K CPU @ 4.00GHz with 32GB of RAM on Ubuntu 18.04 LTS (5.2.12-050212-generic).

Benchmarks source code can be found here

Writes and reads
go version
go version go1.13 linux/amd64

go test -bench=. -benchmem -benchtime=4s ./... -timeout 30m
goos: linux
goarch: amd64
pkg: github.com/allegro/bigcache/v3/caches_bench
BenchmarkMapSet-8                     	12999889	       376 ns/op	     199 B/op	       3 allocs/op
BenchmarkConcurrentMapSet-8           	 4355726	      1275 ns/op	     337 B/op	       8 allocs/op
BenchmarkFreeCacheSet-8               	11068976	       703 ns/op	     328 B/op	       2 allocs/op
BenchmarkBigCacheSet-8                	10183717	       478 ns/op	     304 B/op	       2 allocs/op
BenchmarkMapGet-8                     	16536015	       324 ns/op	      23 B/op	       1 allocs/op
BenchmarkConcurrentMapGet-8           	13165708	       401 ns/op	      24 B/op	       2 allocs/op
BenchmarkFreeCacheGet-8               	10137682	       690 ns/op	     136 B/op	       2 allocs/op
BenchmarkBigCacheGet-8                	11423854	       450 ns/op	     152 B/op	       4 allocs/op
BenchmarkBigCacheSetParallel-8        	34233472	       148 ns/op	     317 B/op	       3 allocs/op
BenchmarkFreeCacheSetParallel-8       	34222654	       268 ns/op	     350 B/op	       3 allocs/op
BenchmarkConcurrentMapSetParallel-8   	19635688	       240 ns/op	     200 B/op	       6 allocs/op
BenchmarkBigCacheGetParallel-8        	60547064	        86.1 ns/op	     152 B/op	       4 allocs/op
BenchmarkFreeCacheGetParallel-8       	50701280	       147 ns/op	     136 B/op	       3 allocs/op
BenchmarkConcurrentMapGetParallel-8   	27353288	       175 ns/op	      24 B/op	       2 allocs/op
PASS
ok  	github.com/allegro/bigcache/v3/caches_bench	256.257s

Writes and reads in bigcache are faster than in freecache. Writes to map are the slowest.

GC pause time
go version
go version go1.13 linux/amd64

go run caches_gc_overhead_comparison.go

Number of entries:  20000000
GC pause for bigcache:  1.506077ms
GC pause for freecache:  5.594416ms
GC pause for map:  9.347015ms
go version
go version go1.13 linux/arm64

go run caches_gc_overhead_comparison.go
Number of entries:  20000000
GC pause for bigcache:  22.382827ms
GC pause for freecache:  41.264651ms
GC pause for map:  72.236853ms

The test shows how long GC pauses are for caches filled with 20 million entries. Bigcache and freecache have very similar GC pause times.

Memory usage

You may encounter system memory reporting what appears to be an exponential increase; however, this is expected behaviour. The Go runtime allocates memory in chunks, or 'spans', and informs the OS when they are no longer required by changing their state to 'idle'. The spans remain part of the process's resource usage until the OS needs to repurpose the address space. Further reading available here.
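
To observe this from Go's own accounting rather than OS-level numbers, runtime.MemStats distinguishes heap spans that are in use from those that are idle or already released back to the OS. A minimal sketch:

import (
	"fmt"
	"runtime"
)

var m runtime.MemStats
runtime.ReadMemStats(&m)

// HeapSys is what the process obtained from the OS for the heap,
// HeapIdle are spans with no objects in them, and HeapReleased is
// the part of HeapIdle already returned to the OS.
fmt.Printf("heap sys: %d MiB, idle: %d MiB, released: %d MiB\n",
	m.HeapSys/1024/1024, m.HeapIdle/1024/1024, m.HeapReleased/1024/1024)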

How it works

BigCache relies on an optimization introduced in Go 1.5 (issue-9477): if a map contains no pointers in its keys and values, the GC omits scanning its contents. BigCache therefore uses a map[uint64]uint32, where keys are hashed and values are offsets of entries.

Entries are kept in byte slices, again to avoid GC overhead. A byte slice can grow to gigabytes without impacting performance, because the GC sees only a single pointer to it.
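
The idea can be sketched as a pointer-free index map plus one large, length-prefixed byte buffer. This is not bigcache's actual implementation (bigcache shards the data and supports eviction); it is only a minimal illustration of why the GC has almost nothing to scan:

import "encoding/binary"

type miniCache struct {
	index map[uint64]uint32 // hashed key -> offset in buf; no pointers, so the GC skips the contents
	buf   []byte            // all entries appended here, each prefixed with its length
}

func (c *miniCache) set(hashedKey uint64, value []byte) {
	c.index[hashedKey] = uint32(len(c.buf))
	var length [4]byte
	binary.LittleEndian.PutUint32(length[:], uint32(len(value)))
	c.buf = append(c.buf, length[:]...)
	c.buf = append(c.buf, value...)
}

func (c *miniCache) get(hashedKey uint64) ([]byte, bool) {
	offset, ok := c.index[hashedKey]
	if !ok {
		return nil, false
	}
	n := binary.LittleEndian.Uint32(c.buf[offset : offset+4])
	return c.buf[offset+4 : offset+4+n], true
}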

Collisions

BigCache does not handle collisions. When a new item is inserted and its hash collides with a previously stored item, the new item overwrites the previously stored value.
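
The behaviour can be demonstrated by plugging a deliberately bad Hasher into Config.Hasher, one that maps every key to the same value (an illustrative device only, never use such a hasher in practice):

// constHasher forces every key onto the same hash, so every Set collides.
type constHasher struct{}

func (constHasher) Sum64(string) uint64 { return 42 }

config := bigcache.DefaultConfig(10 * time.Minute)
config.Hasher = constHasher{}

cache, _ := bigcache.New(context.Background(), config)
cache.Set("first", []byte("first value"))
cache.Set("second", []byte("second value")) // collides and overwrites the previous entry

_, err := cache.Get("first") // the value stored under "first" is gone; Get reports it as not found
fmt.Println(err)
entry, _ := cache.Get("second")
fmt.Println(string(entry)) // second value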

Bigcache vs Freecache

Both caches provide the same core features but they reduce GC overhead in different ways. Bigcache relies on map[uint64]uint32, freecache implements its own mapping built on slices to reduce number of pointers.

Results from the benchmark tests are presented above. One advantage of bigcache over freecache is that you don't need to know the size of the cache in advance: when bigcache is full, it can allocate additional memory for new entries instead of overwriting existing ones, as freecache does currently. However, a hard maximum size can also be set in bigcache; see HardMaxCacheSize.

HTTP Server

This package also includes an easily deployable HTTP implementation of BigCache, which can be found in the server package.

More

The genesis of bigcache is described in the allegro.tech blog post: writing a very fast cache service in Go

License

BigCache is released under the Apache 2.0 license (see LICENSE)

Documentation

Overview

Example
cache, _ := bigcache.New(context.Background(), bigcache.DefaultConfig(10*time.Minute))

cache.Set("my-unique-key", []byte("value"))

entry, _ := cache.Get("my-unique-key")
fmt.Println(string(entry))
Output:

value
Example (Custom)
// When the cache load can be predicted in advance, it is better to use custom initialization
// because additional memory allocation can then be avoided.
config := bigcache.Config{
	// number of shards (must be a power of 2)
	Shards: 1024,

	// time after which entry can be evicted
	LifeWindow: 10 * time.Minute,

	// Interval between removing expired entries (clean up).
	// If set to <= 0 then no action is performed.
	// Setting to < 1 second is counterproductive — bigcache has a one second resolution.
	CleanWindow: 5 * time.Minute,

	// rps * lifeWindow, used only in initial memory allocation
	MaxEntriesInWindow: 1000 * 10 * 60,

	// max entry size in bytes, used only in initial memory allocation
	MaxEntrySize: 500,

	// prints information about additional memory allocation
	Verbose: true,

	// cache will not allocate more memory than this limit, value in MB
	// if value is reached then the oldest entries can be overwritten by new ones
	// 0 value means no size limit
	HardMaxCacheSize: 8192,

	// callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called. A bitmask representing the reason will be returned.
	// Default value is nil which means no callback; this also prevents the oldest entry from being unwrapped.
	OnRemove: nil,

	// OnRemoveWithReason is a callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called. A constant representing the reason will be passed through.
	// Default value is nil which means no callback; this also prevents the oldest entry from being unwrapped.
	// Ignored if OnRemove is specified.
	OnRemoveWithReason: nil,
}

cache, initErr := bigcache.New(context.Background(), config)
if initErr != nil {
	log.Fatal(initErr)
}

err := cache.Set("my-unique-key", []byte("value"))
if err != nil {
	log.Fatal(err)
}

entry, err := cache.Get("my-unique-key")
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(entry))
Output:

value

Index

Examples

Constants

const (
	// Expired means the key is past its LifeWindow.
	Expired = RemoveReason(1)
	// NoSpace means the key is the oldest and the cache size was at its maximum when Set was called, or the
	// entry exceeded the maximum shard size.
	NoSpace = RemoveReason(2)
	// Deleted means Delete was called and this key was removed as a result.
	Deleted = RemoveReason(3)
)
const ErrCannotRetrieveEntry = iteratorError("Could not retrieve entry from cache")

ErrCannotRetrieveEntry is reported when an entry cannot be retrieved from the underlying cache

const ErrInvalidIteratorState = iteratorError("Iterator is in invalid state. Use SetNext() to move to next position")

ErrInvalidIteratorState is reported when the iterator is in an invalid state

Variables

var (
	// ErrEntryNotFound is returned when no entry is found for the provided key
	ErrEntryNotFound = errors.New("Entry not found")
)

Functions

func DefaultLogger

func DefaultLogger() *log.Logger

DefaultLogger returns a `Logger` implementation backed by stdlib's log

Types

type BigCache

type BigCache struct {
	// contains filtered or unexported fields
}

BigCache is a fast, concurrent, evicting cache created to keep a large number of entries without impacting performance. It keeps entries on the heap but omits GC for them. To achieve that, operations take place on byte arrays, so entries will usually need to be (de)serialized in front of the cache.

func New added in v3.1.0

func New(ctx context.Context, config Config) (*BigCache, error)

New initializes a new instance of BigCache

func NewBigCache deprecated

func NewBigCache(config Config) (*BigCache, error)

NewBigCache initializes a new instance of BigCache

Deprecated: NewBigCache is deprecated; please use New(ctx, config) instead. New takes a context and can shut down gracefully on context cancellation.

func (*BigCache) Append

func (c *BigCache) Append(key string, entry []byte) error

Append appends an entry under the key if the key exists, otherwise it sets the key (same behaviour as Set()). With Append() you can concatenate multiple entries under the same key in a lock-optimized way.
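
A short usage sketch, assuming an existing cache and accumulating log lines under one key:

cache.Append("access-log", []byte("GET /index\n"))
cache.Append("access-log", []byte("GET /about\n"))

entry, _ := cache.Get("access-log")
fmt.Print(string(entry))
// GET /index
// GET /about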

func (*BigCache) Capacity

func (c *BigCache) Capacity() int

Capacity returns the amount of bytes stored in the cache.

func (*BigCache) Close

func (c *BigCache) Close() error

Close is used to signal a shutdown of the cache when you are done with it. This allows the cleaning goroutines to exit and ensures that no references to the cache are kept, which would otherwise prevent the entire cache from being garbage collected.

func (*BigCache) Delete

func (c *BigCache) Delete(key string) error

Delete removes the key

func (*BigCache) Get

func (c *BigCache) Get(key string) ([]byte, error)

Get reads entry for the key. It returns an ErrEntryNotFound when no entry exists for the given key.

func (*BigCache) GetWithInfo

func (c *BigCache) GetWithInfo(key string) ([]byte, Response, error)

GetWithInfo reads entry for the key with Response info. It returns an ErrEntryNotFound when no entry exists for the given key.
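
A sketch of using the Response info to detect entries that are past their LifeWindow but have not been cleaned up yet:

entry, resp, err := cache.GetWithInfo("my-unique-key")
if err != nil {
	// no entry stored at all
	log.Println(err)
} else if resp.EntryStatus == bigcache.Expired {
	// the entry is still stored but already past its LifeWindow
	log.Println("stale entry:", string(entry))
}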

func (*BigCache) Iterator

func (c *BigCache) Iterator() *EntryInfoIterator

Iterator returns an iterator for walking over the EntryInfo values of the whole cache.
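
A usage sketch for the iterator:

iterator := cache.Iterator()
for iterator.SetNext() {
	entry, err := iterator.Value()
	if err != nil {
		break
	}
	fmt.Println(entry.Key(), string(entry.Value()))
}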

func (*BigCache) KeyMetadata

func (c *BigCache) KeyMetadata(key string) Metadata

KeyMetadata returns the number of times a cached resource was requested.
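
The counter is only maintained when Config.StatsEnabled is true; a sketch:

config := bigcache.DefaultConfig(10 * time.Minute)
config.StatsEnabled = true // track per-key request counts

cache, _ := bigcache.New(context.Background(), config)
cache.Set("my-unique-key", []byte("value"))
cache.Get("my-unique-key")
cache.Get("my-unique-key")

fmt.Println(cache.KeyMetadata("my-unique-key").RequestCount) // 2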

func (*BigCache) Len

func (c *BigCache) Len() int

Len computes the number of entries in the cache

func (*BigCache) Reset

func (c *BigCache) Reset() error

Reset empties all cache shards

func (*BigCache) ResetStats added in v3.1.0

func (c *BigCache) ResetStats() error

ResetStats resets cache stats

func (*BigCache) Set

func (c *BigCache) Set(key string, entry []byte) error

Set saves an entry under the key

func (*BigCache) Stats

func (c *BigCache) Stats() Stats

Stats returns cache's statistics

type Config

type Config struct {
	// Number of cache shards, value must be a power of two
	Shards int
	// Time after which entry can be evicted
	LifeWindow time.Duration
	// Interval between removing expired entries (clean up).
	// If set to <= 0 then no action is performed. Setting to < 1 second is counterproductive — bigcache has a one second resolution.
	CleanWindow time.Duration
	// Max number of entries in life window. Used only to calculate initial size for cache shards.
	// When a proper value is set, additional memory allocation does not occur.
	MaxEntriesInWindow int
	// Max size of entry in bytes. Used only to calculate initial size for cache shards.
	MaxEntrySize int
	// StatsEnabled if true calculate the number of times a cached resource was requested.
	StatsEnabled bool
	// Verbose mode prints information about new memory allocation
	Verbose bool
	// Hasher used to map string keys to unsigned 64-bit integers; by default fnv64 hashing is used.
	Hasher Hasher
	// HardMaxCacheSize is a limit for BytesQueue size in MB.
	// It can protect application from consuming all available memory on machine, therefore from running OOM Killer.
	// Default value is 0 which means unlimited size. When the limit is higher than 0 and is reached,
	// the oldest entries are overwritten by new ones. The maximum memory consumption will be bigger than
	// HardMaxCacheSize because of the shards' additional memory. Every shard consumes additional memory for its map of keys
	// and statistics (map[uint64]uint32); the size of this map is proportional to the number of entries in
	// the cache, ~ 2×(64+32)×n bits + the overhead of the map itself.
	HardMaxCacheSize int
	// OnRemove is a callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called.
	// Default value is nil which means no callback; this also prevents the oldest entry from being unwrapped.
	// Ignored if OnRemoveWithMetadata is specified.
	OnRemove func(key string, entry []byte)
	// OnRemoveWithMetadata is a callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called. A structure representing details about the removed entry is passed through.
	// Default value is nil which means no callback; this also prevents the oldest entry from being unwrapped.
	OnRemoveWithMetadata func(key string, entry []byte, keyMetadata Metadata)
	// OnRemoveWithReason is a callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called. A constant representing the reason will be passed through.
	// Default value is nil which means no callback; this also prevents the oldest entry from being unwrapped.
	// Ignored if OnRemove is specified.
	OnRemoveWithReason func(key string, entry []byte, reason RemoveReason)

	// Logger is a logging interface and used in combination with `Verbose`
	// Defaults to `DefaultLogger()`
	Logger Logger
	// contains filtered or unexported fields
}

Config for BigCache

func DefaultConfig

func DefaultConfig(eviction time.Duration) Config

DefaultConfig initializes config with default values. When load for BigCache can be predicted in advance then it is better to use custom config.
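
A common middle ground is to start from DefaultConfig and override individual fields; a sketch:

config := bigcache.DefaultConfig(10 * time.Minute)
config.HardMaxCacheSize = 512 // cap the cache at roughly 512 MB
config.Verbose = false

cache, err := bigcache.New(context.Background(), config)
if err != nil {
	log.Fatal(err)
}
defer cache.Close()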

func (Config) OnRemoveFilterSet

func (c Config) OnRemoveFilterSet(reasons ...RemoveReason) Config

OnRemoveFilterSet sets which remove reasons will trigger a call to OnRemoveWithReason. Filtering out reasons prevents bigcache from unwrapping them, which saves CPU.
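
A sketch that reacts only to expirations, filtering out deletes and removals for lack of space:

config := bigcache.DefaultConfig(10 * time.Minute)
config.OnRemoveWithReason = func(key string, entry []byte, reason bigcache.RemoveReason) {
	log.Printf("entry %q removed, reason: %d", key, reason)
}
// only Expired removals will invoke the callback
config = config.OnRemoveFilterSet(bigcache.Expired)

cache, _ := bigcache.New(context.Background(), config)
defer cache.Close()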

type EntryInfo

type EntryInfo struct {
	// contains filtered or unexported fields
}

EntryInfo holds information about an entry in the cache

func (EntryInfo) Hash

func (e EntryInfo) Hash() uint64

Hash returns entry's hash value

func (EntryInfo) Key

func (e EntryInfo) Key() string

Key returns entry's underlying key

func (EntryInfo) Timestamp

func (e EntryInfo) Timestamp() uint64

Timestamp returns entry's timestamp (time of insertion)

func (EntryInfo) Value

func (e EntryInfo) Value() []byte

Value returns entry's underlying value

type EntryInfoIterator

type EntryInfoIterator struct {
	// contains filtered or unexported fields
}

EntryInfoIterator allows iterating over entries in the cache

func (*EntryInfoIterator) SetNext

func (it *EntryInfoIterator) SetNext() bool

SetNext moves to the next element and returns true if it exists.

func (*EntryInfoIterator) Value

func (it *EntryInfoIterator) Value() (EntryInfo, error)

Value returns the current value from the iterator

type Hasher

type Hasher interface {
	Sum64(string) uint64
}

Hasher is responsible for generating an unsigned, 64-bit hash of a provided string. Hasher should minimize collisions (generating the same hash for different strings); since performance is also important, fast functions are preferable (e.g. the FarmHash family).
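
Any type implementing Sum64 can be plugged in through Config.Hasher. A sketch wrapping the standard library's FNV-1a (functionally similar to the default hasher, shown only to illustrate the plumbing):

import (
	"hash/fnv"
	"time"

	"github.com/allegro/bigcache/v3"
)

type fnvHasher struct{}

func (fnvHasher) Sum64(key string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return h.Sum64()
}

config := bigcache.DefaultConfig(10 * time.Minute)
config.Hasher = fnvHasher{}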

type Logger

type Logger interface {
	Printf(format string, v ...interface{})
}

Logger is invoked when `Config.Verbose=true`
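
Any value with a matching Printf method satisfies the interface, including *log.Logger. A sketch routing bigcache's verbose output through a prefixed standard library logger:

import (
	"log"
	"os"
	"time"
)

config := bigcache.DefaultConfig(10 * time.Minute)
config.Verbose = true
config.Logger = log.New(os.Stderr, "bigcache: ", log.LstdFlags)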

type Metadata

type Metadata struct {
	RequestCount uint32
}

Metadata contains information about a specific entry

type RemoveReason

type RemoveReason uint32

RemoveReason is a value used to signal to the user why a particular key was removed in the OnRemove callback.

type Response

type Response struct {
	EntryStatus RemoveReason
}

Response will contain metadata about the entry for which GetWithInfo(key) was called

type Stats

type Stats struct {
	// Hits is the number of successfully found keys
	Hits int64 `json:"hits"`
	// Misses is the number of not found keys
	Misses int64 `json:"misses"`
	// DelHits is the number of successfully deleted keys
	DelHits int64 `json:"delete_hits"`
	// DelMisses is the number of not deleted keys
	DelMisses int64 `json:"delete_misses"`
	// Collisions is the number of key collisions that occurred
	Collisions int64 `json:"collisions"`
}

Stats stores cache statistics
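
Because the fields carry JSON tags, a snapshot can be logged or exposed directly, e.g. from a debug endpoint. A sketch, assuming an existing cache:

stats := cache.Stats()
if total := stats.Hits + stats.Misses; total > 0 {
	fmt.Printf("hit ratio: %.2f\n", float64(stats.Hits)/float64(total))
}

buf, _ := json.Marshal(stats)
fmt.Println(string(buf)) // {"hits":...,"misses":...,"delete_hits":...,"delete_misses":...,"collisions":...}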
