freecache

package
v1.3.0
Published: Jul 1, 2019 License: Apache-2.0, MIT Imports: 10 Imported by: 0

README

FreeCache - A cache library for Go with zero GC overhead and high concurrent performance.

Long-lived objects in memory introduce expensive GC overhead. With FreeCache, you can cache an unbounded number of objects in memory without increased latency or degraded throughput.


Features

  • Store hundreds of millions of entries
  • Zero GC overhead
  • High concurrent thread-safe access
  • Pure Go implementation
  • Expiration support
  • Nearly LRU algorithm
  • Strictly limited memory usage
  • Comes with a toy server that supports a few basic Redis commands with pipelining

Performance

Here are the benchmark results compared to the built-in map: Set performance is about 2x faster than the built-in map, and Get performance is about half as fast. Since this is a single-threaded benchmark, in a multi-threaded environment FreeCache should be many times faster than a built-in map protected by a single lock.

BenchmarkCacheSet        3000000               446 ns/op
BenchmarkMapSet          2000000               861 ns/op
BenchmarkCacheGet        3000000               517 ns/op
BenchmarkMapGet         10000000               212 ns/op
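
For reference, a minimal sketch of how a Set benchmark like the one above can be written with the standard testing package, placed in a _test.go file; the cache size and value size here are assumptions, not the original benchmark's parameters:

package bench

import (
	"strconv"
	"testing"

	"github.com/coocood/freecache"
)

// BenchmarkCacheSet measures Set throughput on a 256 MB cache
// with small keys and 256-byte values.
func BenchmarkCacheSet(b *testing.B) {
	cache := freecache.NewCache(256 * 1024 * 1024)
	value := make([]byte, 256)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		key := []byte(strconv.Itoa(i))
		cache.Set(key, value, 0) // expireSeconds <= 0 means no expiration
	}
}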

Example Usage

package main

import (
	"fmt"
	"runtime/debug"

	"github.com/coocood/freecache"
)

func main() {
	cacheSize := 100 * 1024 * 1024
	cache := freecache.NewCache(cacheSize)
	debug.SetGCPercent(20)
	key := []byte("abc")
	val := []byte("def")
	expire := 60 // expire in 60 seconds
	cache.Set(key, val, expire)
	got, err := cache.Get(key)
	if err != nil {
		fmt.Println(err)
	} else {
		fmt.Println(string(got))
	}
	affected := cache.Del(key)
	fmt.Println("deleted key ", affected)
	fmt.Println("entry count ", cache.EntryCount())
}

Notice

  • Memory is preallocated.
  • If you allocate a large amount of memory, you may need to call debug.SetGCPercent() with a much lower percentage to keep a normal GC frequency (see the sketch below).
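
A minimal sketch of pairing a large preallocated cache with a lower GC percentage; the 1 GiB size and the value 10 are illustrative, not recommendations:

package main

import (
	"runtime/debug"

	"github.com/coocood/freecache"
)

func main() {
	// A 1 GiB cache is preallocated up front, so the live heap is large and stable.
	cache := freecache.NewCache(1024 * 1024 * 1024)
	// Lower the GC target percentage so collections still run at a normal frequency.
	debug.SetGCPercent(10)
	_ = cache
}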

How it is done

FreeCache avoids GC overhead by reducing the number of pointers. No matter how many entries are stored in it, there are only 512 pointers. The data set is sharded into 256 segments by the hash value of the key. Each segment has only two pointers: one is the ring buffer that stores keys and values, the other is the index slice used to look up an entry. Each segment has its own lock, so it supports highly concurrent access.
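
A conceptual sketch of this layout (simplified, not the library's exact internals; the type and field names here are illustrative):

package sketch

import "sync"

// segment holds everything for 1/256th of the key space.
// Its only pointers are the ring buffer's backing slice and the index slice.
type segment struct {
	mu    sync.Mutex // per-segment lock: unrelated keys rarely contend
	ring  []byte     // fixed-size ring buffer holding serialized keys and values
	index []entryPtr // positions in the ring buffer, used to look up entries
}

// entryPtr locates one entry inside the segment's ring buffer.
type entryPtr struct {
	offset int64  // position of the entry in the ring buffer
	hash16 uint16 // short hash used to narrow the search within the index
}

// segmentFor picks one of the 256 segments from the key's hash value.
func segmentFor(keyHash uint64, segments *[256]segment) *segment {
	return &segments[keyHash&255]
}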

TODO

  • Support dump to file and load from file.
  • Support resize cache size at runtime.

License

The MIT License

Documentation

Index

Constants

View Source
const ENTRY_HDR_SIZE = 24
View Source
const HASH_ENTRY_SIZE = 16

Variables

View Source
var ErrLargeEntry = errors.New("The entry size is larger than 1/1024 of cache size")
View Source
var ErrLargeKey = errors.New("The key is larger than 65535")
View Source
var ErrNotFound = errors.New("Entry not found")
View Source
var ErrOutOfRange = errors.New("out of range")
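
For example, a cache miss from Get can be distinguished from other conditions by comparing against these exported errors; a minimal sketch:

package main

import (
	"fmt"

	"github.com/coocood/freecache"
)

func main() {
	cache := freecache.NewCache(1024 * 1024)
	_, err := cache.Get([]byte("missing"))
	if err == freecache.ErrNotFound {
		fmt.Println("cache miss")
	} else if err != nil {
		fmt.Println("unexpected error:", err)
	}
}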

Functions

This section is empty.

Types

type Cache

type Cache struct {
	// contains filtered or unexported fields
}

func NewCache

func NewCache(size int) (cache *Cache)

The cache size will be set to 512KB at minimum. If the size is set relatively large, you should call `debug.SetGCPercent()` and set it to a much smaller value to limit memory consumption and GC pause time.

func (*Cache) AverageAccessTime

func (cache *Cache) AverageAccessTime() int64

The average unix timestamp at which entries were accessed. Entries with a greater access time will be evacuated when they are about to be overwritten by new values.

func (*Cache) Clear

func (cache *Cache) Clear()

func (*Cache) Del

func (cache *Cache) Del(key []byte) (affected bool)

func (*Cache) DelInt

func (cache *Cache) DelInt(key int64) (affected bool)

func (*Cache) EntryCount

func (cache *Cache) EntryCount() (entryCount int64)

func (*Cache) EvacuateCount

func (cache *Cache) EvacuateCount() (count int64)

func (*Cache) ExpiredCount

func (cache *Cache) ExpiredCount() (count int64)

func (*Cache) Get

func (cache *Cache) Get(key []byte) (value []byte, err error)

Get returns the value, or a not-found error if the key does not exist.

func (*Cache) GetInt

func (cache *Cache) GetInt(key int64) (value []byte, err error)

func (*Cache) HitCount

func (cache *Cache) HitCount() int64

func (*Cache) HitRate

func (cache *Cache) HitRate() float64

func (*Cache) LookupCount

func (cache *Cache) LookupCount() int64

func (*Cache) OverwriteCount

func (cache *Cache) OverwriteCount() (overwriteCount int64)

func (*Cache) ResetStatistics

func (cache *Cache) ResetStatistics()
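
The statistics accessors above can be combined to report on cache effectiveness; a small sketch (the key set and loop are illustrative):

package main

import (
	"fmt"
	"strconv"

	"github.com/coocood/freecache"
)

func main() {
	cache := freecache.NewCache(10 * 1024 * 1024)
	for i := 0; i < 1000; i++ {
		key := []byte(strconv.Itoa(i))
		cache.Set(key, []byte("value"), 0)
		cache.Get(key)
		cache.Get([]byte("missing")) // guaranteed miss, so HitRate is meaningful
	}
	fmt.Println("entries:", cache.EntryCount())
	fmt.Println("lookups:", cache.LookupCount())
	fmt.Println("hits:   ", cache.HitCount())
	fmt.Printf("hit rate: %.2f\n", cache.HitRate())
	cache.ResetStatistics() // counters start again from zero
}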

func (*Cache) Set

func (cache *Cache) Set(key, value []byte, expireSeconds int) (err error)

If the key is larger than 65535 bytes or the value is larger than 1/1024 of the cache size, the entry will not be written to the cache. expireSeconds <= 0 means no expiration, but the entry can still be evicted when the cache is full.
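
A short sketch of handling the oversized-entry case described above; the cache and value sizes are chosen only to trigger the error:

package main

import (
	"fmt"

	"github.com/coocood/freecache"
)

func main() {
	// 1 MB cache: each entry is limited to roughly 1/1024 of the cache size.
	cache := freecache.NewCache(1024 * 1024)
	bigValue := make([]byte, 64*1024) // far larger than the per-entry limit
	err := cache.Set([]byte("key"), bigValue, 0)
	if err == freecache.ErrLargeEntry {
		fmt.Println("entry too large for this cache size:", err)
	}
}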

func (*Cache) SetInt

func (cache *Cache) SetInt(key int64, value []byte, expireSeconds int) (err error)
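
The *Int variants take an int64 key directly instead of a byte slice; a minimal sketch of using them together (the userID value is illustrative):

package main

import (
	"fmt"

	"github.com/coocood/freecache"
)

func main() {
	cache := freecache.NewCache(1024 * 1024)
	var userID int64 = 42 // illustrative int64 key
	cache.SetInt(userID, []byte("alice"), 0)
	if val, err := cache.GetInt(userID); err == nil {
		fmt.Println(string(val))
	}
	fmt.Println("deleted:", cache.DelInt(userID))
}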

type RingBuf

type RingBuf struct {
	// contains filtered or unexported fields
}

The ring buffer has a fixed size; when data exceeds the size, old data will be overwritten by new data. It only contains the data in the stream from begin to end.

func NewRingBuf

func NewRingBuf(size int, begin int64) (rb RingBuf)

func (*RingBuf) Begin

func (rb *RingBuf) Begin() int64

func (*RingBuf) Dump

func (rb *RingBuf) Dump() []byte

Create a copy of the buffer.

func (*RingBuf) End

func (rb *RingBuf) End() int64

func (*RingBuf) EqualAt

func (rb *RingBuf) EqualAt(p []byte, off int64) bool

func (*RingBuf) Evacuate

func (rb *RingBuf) Evacuate(off int64, length int) (newOff int64)

Evacuate reads the data at off, then writes it to the end of the data stream, keeping it from being overwritten by new data.

func (*RingBuf) ReadAt

func (rb *RingBuf) ReadAt(p []byte, off int64) (n int, err error)

ReadAt reads up to len(p) bytes at offset off of the data stream.

func (*RingBuf) Resize

func (rb *RingBuf) Resize(newSize int)

func (*RingBuf) Size

func (rb *RingBuf) Size() int64

func (*RingBuf) Skip

func (rb *RingBuf) Skip(length int64)

func (*RingBuf) String

func (rb *RingBuf) String() string

func (*RingBuf) Write

func (rb *RingBuf) Write(p []byte) (n int, err error)

func (*RingBuf) WriteAt

func (rb *RingBuf) WriteAt(p []byte, off int64) (n int, err error)
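
A short usage sketch of the RingBuf API documented above; the buffer size and payload are arbitrary:

package main

import (
	"fmt"

	"github.com/coocood/freecache"
)

func main() {
	rb := freecache.NewRingBuf(16, 0) // 16-byte buffer, data stream begins at offset 0
	rb.Write([]byte("hello"))
	rb.Write([]byte("world")) // appended to the stream; once the size is exceeded, oldest data is overwritten

	p := make([]byte, 5)
	rb.ReadAt(p, rb.Begin()) // read the oldest bytes still held in the buffer
	fmt.Println(string(p), rb.Begin(), rb.End())
}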

Directories

Path Synopsis
Native (and fast) implementation of Austin Appleby's MurmurHash3.
A basic freecache server that supports the Redis protocol.
