cache

package module v1.1.1
Published: Jun 10, 2022 License: Apache-2.0 Imports: 13 Imported by: 2

README

go-cache


Photo by Ashley McNamara, via ashleymcnamara/gophers (CC BY-NC-SA 4.0)

A Go library for a multi-level key:value store that combines a private (in-memory) cache with a shared cache (e.g. Redis). It applies the Cache-Aside strategy when dealing with both, and maintains the consistency of the private caches across distributed systems with the Pub-Sub pattern.

Caching is a common technique that aims to improve the performance and scalability of a system. It does this by temporarily copying frequently accessed data to fast storage close to the application. Distributed applications typically implement either or both of the following strategies when caching data:

  • Using a private cache, where data is held locally on the computer that's running an instance of an application or service.
  • Using a shared cache, serving as a common source that can be accessed by multiple processes and machines.

Using a local private cache with a shared cache (Ref: https://docs.microsoft.com/en-us/azure/architecture/best-practices/images/caching/caching3.png)

With flexibility, efficiency, and consistency in mind, we built our own caching framework.

Features

  • Easy to use : provides a friendly interface to deal with both caching mechanisms through simple configuration, and limits the resources used on a single instance (pod) as well.
  • Maintain consistency : evicts keys across distributed systems with the Pub-Sub pattern.
  • Data compression : provides customized marshal and unmarshal functions.
  • Fix concurrency issue : prevents data races on a single instance (pod).
  • Metrics : provides callback functions to measure performance (e.g. hit rate, private cache usage, ...); see the sketch after this list.
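
For instance, here is a minimal sketch of wiring the metric callbacks while building the Factory. Only OnCacheHitFunc / OnCacheMissFunc and the constructors come from this package; the counters and reporting are illustrative placeholders.

package main

import (
	"fmt"
	"sync/atomic"

	"github.com/go-redis/redis/v8"

	"github.com/viney-shih/go-cache"
)

func main() {
	var hits, misses int64

	tinyLfu := cache.NewTinyLFU(10000)
	rds := cache.NewRedis(redis.NewRing(&redis.RingOptions{
		Addrs: map[string]string{"server1": ":6379"},
	}))

	// Register the metric callbacks while building the Factory.
	cacheFactory := cache.NewFactory(rds, tinyLfu,
		cache.OnCacheHitFunc(func(prefix string, key string, count int) {
			atomic.AddInt64(&hits, int64(count))
		}),
		cache.OnCacheMissFunc(func(prefix string, key string, count int) {
			atomic.AddInt64(&misses, int64(count))
		}),
	)
	defer cacheFactory.Close()

	// ... register Settings with cacheFactory.NewCache() and use the Cache as usual ...

	fmt.Printf("hits=%d misses=%d\n", atomic.LoadInt64(&hits), atomic.LoadInt64(&misses))
}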

Data flow

Load the cache with Cache-Aside strategy
sequenceDiagram
    participant APP as Application
    participant M as go-cache
    participant L as Local Cache
    participant S as Shared Cache
    participant R as Resource (Microservice / DB)
    
    APP ->> M: Cache.Get() / Cache.MGet()
    alt Local Cache hit
        M ->> L: Adapter.MGet()
        L -->> M: {[]Value, error}
        M -->> APP: return
    else Local Cache miss but Shared Cache hit
        M ->> L: Adapter.MGet()
        L -->> M: cache miss
        M ->> S: Adapter.MGet()
        S -->> M: {[]Value, error}
        M ->> L: Adapter.MSet()
        M -->> APP: return
    else All miss
        M ->> L: Adapter.MGet()
        L -->> M: cache miss
        M ->> S: Adapter.MGet()
        S -->> M: cache miss
        M ->> R: OneTimeGetterFunc() / MGetterFunc()
        R -->> M: return from getter
        M ->> S: Adapter.MSet()
        M ->> L: Adapter.MSet()
        M -->> APP: return
    end

Evict the cache
sequenceDiagram
    participant APP as Application
    participant M as go-cache
    participant L as Local Cache
    participant S as Shared Cache
    participant PS as PubSub
    
    APP ->> M: Cache.Del()
    M ->> S: Adapter.Del()
    S -->> M: return error if necessary
    M ->> L: Adapter.Del()
    L -->> M: return error if necessary
    M ->> PS: Pubsub.Pub() (broadcast key eviction)
    M -->> APP: return nil or error
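
To broadcast those evictions between instances, pass a Pubsub implementation to the Factory. The sketch below reuses the Redis adapter, which also implements Pubsub; the prefix, TTLs, and the Redis address are placeholders taken from the other examples.

package main

import (
	"context"
	"time"

	"github.com/go-redis/redis/v8"

	"github.com/viney-shih/go-cache"
)

func main() {
	tinyLfu := cache.NewTinyLFU(10000)
	rds := cache.NewRedis(redis.NewRing(&redis.RingOptions{
		Addrs: map[string]string{"server1": ":6379"},
	}))

	// The Redis adapter implements both Adapter and Pubsub, so the same
	// instance can carry the key-eviction broadcasts between pods.
	cacheFactory := cache.NewFactory(rds, tinyLfu, cache.WithPubSub(rds))
	defer cacheFactory.Close()

	c := cacheFactory.NewCache([]cache.Setting{
		{
			Prefix: "example",
			CacheAttributes: map[cache.Type]cache.Attribute{
				cache.SharedCacheType: {TTL: time.Hour},
				cache.LocalCacheType:  {TTL: 10 * time.Minute},
			},
		},
	})

	// Del evicts the key from the shared and local caches, and publishes the
	// eviction so other instances drop their local copies as well.
	_ = c.Del(context.TODO(), "example", "key")
}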

Installation

go get github.com/viney-shih/go-cache

Get Started

Basic usage: Set-And-Get

Adopting the singleton pattern, initialize the Factory in main.go at the beginning, and pass it to each package or piece of business logic.

// Initialize the Factory in main.go
tinyLfu := cache.NewTinyLFU(10000)
rds := cache.NewRedis(redis.NewRing(&redis.RingOptions{
    Addrs: map[string]string{
        "server1": ":6379",
    },
}))

cacheFactory := cache.NewFactory(rds, tinyLfu)

Treat it as an ordinary key:value store, just like Redis. More than that, it coordinates the usage across the multi-level caching mechanism inside.

type Object struct {
    Str string
    Num int
}

func Example_setAndGetPattern() {
    // We create a group of cache named "set-and-get".
    // It uses the shared cache only with TTL of ten seconds.
    c := cacheFactory.NewCache([]cache.Setting{
        {
            Prefix: "set-and-get",
            CacheAttributes: map[cache.Type]cache.Attribute{
                cache.SharedCacheType: {TTL: 10 * time.Second},
            },
        },
    })

    ctx := context.TODO()

    // set the cache
    obj := &Object{
        Str: "value1",
        Num: 1,
    }
    if err := c.Set(ctx, "set-and-get", "key", obj); err != nil {
        panic("not expected")
    }

    // read the cache
    container := &Object{}
    if err := c.Get(ctx, "set-and-get", "key", container); err != nil {
        panic("not expected")
    }
    fmt.Println(container) // Output: Object{ Str: "value1", Num: 1}

    // read the cache but failed
    if err := c.Get(ctx, "set-and-get", "no-such-key", container); err != nil {
        fmt.Println(err) //  Output: errors.New("cache key is missing")
    }

    // Output:
    // &{value1 1}
    // cache key is missing
}

Advanced usage: Cache-Aside strategy

GetByFunc() is the easiest way to deal with the cache: implement the getter function as a parameter. When the cache misses, it reads the data with the getter function and refills the cache automatically.

func ExampleCache_GetByFunc() {
    // We create a group of cache named "get-by-func".
    // It uses the local cache only with TTL of ten minutes.
    c := cacheFactory.NewCache([]cache.Setting{
        {
            Prefix: "get-by-func",
            CacheAttributes: map[cache.Type]cache.Attribute{
                cache.LocalCacheType: {TTL: 10 * time.Minute},
            },
        },
    })

    ctx := context.TODO()
    container2 := &Object{}
    if err := c.GetByFunc(ctx, "get-by-func", "key2", container2, func() (interface{}, error) {
        // The getter is used to generate data when the cache misses, and refill the cache automatically.
        // You can read from DB or other microservices.
        // Assume we read from MySQL according to the key "key2" and get the value of Object{Str: "value2", Num: 2}
        return Object{Str: "value2", Num: 2}, nil
    }); err != nil {
        panic("not expected")
    }

    fmt.Println(container2) // Object{ Str: "value2", Num: 2}

    // Output:
    // &{value2 2}
}

MGetter is another way to approach this. Set this function when registering the Setting.

func ExampleService_Create_mGetter() {
    // We create a group of cache named "mgetter".
    // It uses both shared and local caches with separated TTL of one hour and ten minutes.
    c := cacheFactory.NewCache([]cache.Setting{
        {
            Prefix: "mgetter",
            CacheAttributes: map[cache.Type]cache.Attribute{
                cache.SharedCacheType: {TTL: time.Hour},
                cache.LocalCacheType:  {TTL: 10 * time.Minute},
            },
            MGetter: func(keys ...string) (interface{}, error) {
                // The MGetter is used to generate data when the cache misses, and refill the cache automatically.
                // You can read from DB or other microservices.
                // Assume we read from MySQL according to the key "key3" and get the value of Object{Str: "value3", Num: 3}
                // HINT: remember to return a slice, and the item order needs to be consistent with the keys in the parameters.
                return []Object{{Str: "value3", Num: 3}}, nil
            },
        },
    })

    ctx := context.TODO()
    container3 := &Object{}
    if err := c.Get(ctx, "mgetter", "key3", container3); err != nil {
        panic("not expected")
    }

    fmt.Println(container3) // Object{ Str: "value3", Num: 3}

    // Output:
    // &{value3 3}
}

More examples

References

License

Apache-2.0

Documentation

Overview

Example (ReadThroughPattern)

Example_readThroughPattern will demo multiple cache layers and multiple prefix keys at the same time.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/go-redis/redis/v8"

	"github.com/viney-shih/go-cache"
)

type Person struct {
	FirstName string
	LastName  string
	Age       int
}

// Example_readThroughPattern will demo multiple cache layers and multiple
// prefix keys at the same time.
func main() {
	tinyLfu := cache.NewTinyLFU(10000)
	rds := cache.NewRedis(redis.NewRing(&redis.RingOptions{
		Addrs: map[string]string{
			"server1": ":6379",
		},
	}))

	cacheF := cache.NewFactory(rds, tinyLfu)

	c := cacheF.NewCache([]cache.Setting{
		{
			Prefix: "teacher",
			CacheAttributes: map[cache.Type]cache.Attribute{
				cache.SharedCacheType: {TTL: time.Hour},
				cache.LocalCacheType:  {TTL: 10 * time.Minute},
			},
		},
		{
			Prefix: "student",
			CacheAttributes: map[cache.Type]cache.Attribute{
				cache.SharedCacheType: {TTL: time.Hour},
				cache.LocalCacheType:  {TTL: 10 * time.Minute},
			},
			MGetter: func(keys ...string) (interface{}, error) {
				// The MGetter is used to generate data when the cache misses, and refill the cache automatically.
				// You can read from DB or other microservices.
				// Assume we read from MySQL according to the key "jacky" and get the value of
				// Person{FirstName: "Jacky", LastName: "Lin", Age: 38}
				// HINT: remember to return a slice, and the item order needs to be consistent with the keys in the parameters.
				if len(keys) == 1 && keys[0] == "jacky" {
					return []Person{{FirstName: "Jacky", LastName: "Lin", Age: 38}}, nil
				}

				return nil, fmt.Errorf("XD")
			},
		},
	})

	ctx := context.TODO()
	teacher := &Person{}
	if err := c.GetByFunc(ctx, "teacher", "jacky", teacher, func() (interface{}, error) {
		// The getter is used to generate data when the cache misses, and refill the cache automatically.
		// You can read from DB or other microservices.
		// Assume we read from MySQL according to the key "jacky" and get the value of Object{Str: "value2", Num: 2}
		return Person{FirstName: "Jacky", LastName: "Wang", Age: 83}, nil
	}); err != nil {
		panic("not expected")
	}

	fmt.Println(teacher) // Output: {FirstName: "Jacky", LastName: "Wang", Age: 83}

	student := &Person{}
	if err := c.Get(ctx, "student", "jacky", student); err != nil {
		panic("not expected")
	}

	fmt.Println(student)
}

Output:

&{Jacky Wang 83}
&{Jacky Lin 38}
Example (SetAndGetPattern)
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/go-redis/redis/v8"

	"github.com/viney-shih/go-cache"
)

type Object struct {
	Str string
	Num int
}

func main() {
	tinyLfu := cache.NewTinyLFU(10000)
	rds := cache.NewRedis(redis.NewRing(&redis.RingOptions{
		Addrs: map[string]string{
			"server1": ":6379",
		},
	}))

	cacheF := cache.NewFactory(rds, tinyLfu)

	// We create a group of cache named "set-and-get".
	// It uses the shared cache only with TTL of ten seconds.
	c := cacheF.NewCache([]cache.Setting{
		{
			Prefix: "set-and-get",
			CacheAttributes: map[cache.Type]cache.Attribute{
				cache.SharedCacheType: {TTL: 10 * time.Second},
			},
		},
	})

	ctx := context.TODO()

	// set the cache
	obj := &Object{
		Str: "value1",
		Num: 1,
	}
	if err := c.Set(ctx, "set-and-get", "key", obj); err != nil {
		panic("not expected")
	}

	// read the cache
	container := &Object{}
	if err := c.Get(ctx, "set-and-get", "key", container); err != nil {
		panic("not expected")
	}
	fmt.Println(container) // Output: Object{ Str: "value1", Num: 1}

	// read the cache but failed
	if err := c.Get(ctx, "set-and-get", "no-such-key", container); err != nil {
		fmt.Println(err) // Output: cache key is missing
	}
}

Output:

&{value1 1}
cache key is missing


Constants

This section is empty.

Variables

var (
	// ErrCacheMiss indicates the key is missing
	ErrCacheMiss = errors.New("cache key is missing")
	// ErrPfxNotRegistered means the prefix is not registered
	ErrPfxNotRegistered = errors.New("prefix not registered")
	// ErrMGetterResponseLengthInvalid means mgetter return a slice with wrong length,
	// the response length should be equal to the getterParams length
	ErrMGetterResponseLengthInvalid = errors.New("wrong mgetter response length")
	// ErrMGetterResponseNotSlice means mgetter's response type is not slice
	ErrMGetterResponseNotSlice = errors.New("mgetter response not a slice")
	// ErrResultIndexInvalid means the index for Result.Get is out of range
	ErrResultIndexInvalid = errors.New("index out of range")
)
var (
	// ErrSelfEvent indicates event triggered by itself.
	ErrSelfEvent = errors.New("event triggered by itself")
)

Functions

func ClearPrefix

func ClearPrefix()

ClearPrefix is only used by unit tests to clean up the registered prefixes; otherwise, a duplicated-prefix registration panic might occur across multiple tests.
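
A hedged sketch of how a test suite might use it: the package name and the newTestCache helper are illustrative, only ClearPrefix, NewFactory, NewEmpty, and NewTinyLFU come from this package.

package example_test

import (
	"testing"
	"time"

	"github.com/viney-shih/go-cache"
)

// newTestCache is an illustrative helper each test can call. ClearPrefix drops
// prefixes registered by earlier tests, avoiding the duplicated-prefix panic.
func newTestCache(t *testing.T) cache.Cache {
	t.Helper()
	cache.ClearPrefix()

	factory := cache.NewFactory(cache.NewEmpty(), cache.NewTinyLFU(100))
	t.Cleanup(factory.Close)

	return factory.NewCache([]cache.Setting{
		{
			Prefix: "test",
			CacheAttributes: map[cache.Type]cache.Attribute{
				cache.LocalCacheType: {TTL: time.Minute},
			},
		},
	})
}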

Types

type Adapter

type Adapter interface {
	MGet(context context.Context, keys []string) ([]Value, error)
	MSet(context context.Context, keyVals map[string][]byte, ttl time.Duration, options ...MSetOptions) error
	Del(context context.Context, keys ...string) error
}

Adapter is the interface communicating with shared/local caches.
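
Besides the built-in Redis and TinyLFU adapters, you can plug in your own backend by implementing this interface. Below is a minimal, hedged sketch backed by a plain map; the mapAdapter type and NewMapAdapter constructor are ours, not part of the package, and it ignores TTLs to stay short.

package example

import (
	"context"
	"sync"
	"time"

	"github.com/viney-shih/go-cache"
)

// mapAdapter is an illustrative Adapter backed by a plain map. It ignores the
// TTL and the MSetOptions; a real backend should honor both.
type mapAdapter struct {
	mu   sync.RWMutex
	data map[string][]byte
}

// NewMapAdapter returns the in-memory Adapter. Plug it into the Factory as the
// local (or shared) layer, e.g. cache.NewFactory(rds, NewMapAdapter()).
func NewMapAdapter() cache.Adapter {
	return &mapAdapter{data: map[string][]byte{}}
}

func (a *mapAdapter) MGet(_ context.Context, keys []string) ([]cache.Value, error) {
	a.mu.RLock()
	defer a.mu.RUnlock()

	vals := make([]cache.Value, 0, len(keys))
	for _, k := range keys {
		b, ok := a.data[k]
		vals = append(vals, cache.Value{Valid: ok, Bytes: b})
	}
	return vals, nil
}

func (a *mapAdapter) MSet(_ context.Context, keyVals map[string][]byte, _ time.Duration, _ ...cache.MSetOptions) error {
	a.mu.Lock()
	defer a.mu.Unlock()

	for k, v := range keyVals {
		a.data[k] = v
	}
	return nil
}

func (a *mapAdapter) Del(_ context.Context, keys ...string) error {
	a.mu.Lock()
	defer a.mu.Unlock()

	for _, k := range keys {
		delete(a.data, k)
	}
	return nil
}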

func NewEmpty

func NewEmpty() Adapter

NewEmpty generates Adapter without implementation

func NewTinyLFU

func NewTinyLFU(size int, options ...TinyLFUOptions) Adapter

NewTinyLFU generates Adapter with tinylfu

type Attribute

type Attribute struct {
	TTL time.Duration
}

Attribute specifies the details. For example, you need to indicate the TTL with which each key expires.

type Cache

type Cache interface {
	// GetByFunc returns a value in the cache. It also follows the read-through pattern.
	// When a cache miss happens, it reloads the value with the getter and fills in the cache again.
	GetByFunc(context context.Context, prefix, key string, container interface{}, getter OneTimeGetterFunc) error
	// Get returns a value in the cache.
	// When a cache miss happens, it reloads the value with the MGetter specified in the setting if possible.
	// Otherwise, it returns ErrCacheMiss.
	Get(context context.Context, prefix, key string, container interface{}) error
	// MGet returns values in the cache with the interface Result.
	// When a cache miss happens, it reloads values with the MGetter specified in the setting if possible.
	// Otherwise, it returns ErrCacheMiss.
	MGet(context context.Context, prefix string, keys ...string) (Result, error)
	// Del removes keys in the cache.
	Del(context context.Context, prefix string, keys ...string) error
	// Set sets up a value into the cache.
	Set(context context.Context, prefix string, key string, value interface{}) error
	// MSet sets up values into the cache.
	MSet(context context.Context, prefix string, keyValues map[string]interface{}) error
}

Cache is generated by the Factory based on the needs specified in the Setting slice. Use the following methods to operate the key:value store.
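
Set, Get, and GetByFunc are demonstrated in the README above. The sketch below, written in the same style as those examples, shows the batch methods; it assumes the cacheFactory and Object type from the Get Started section, and the "profile" prefix is registered only for this illustration.

func ExampleCache_batch() {
	// "profile" is a prefix registered just for this sketch.
	c := cacheFactory.NewCache([]cache.Setting{
		{
			Prefix: "profile",
			CacheAttributes: map[cache.Type]cache.Attribute{
				cache.SharedCacheType: {TTL: time.Hour},
			},
		},
	})

	ctx := context.TODO()

	// MSet writes several keys under the same prefix in one call.
	if err := c.MSet(ctx, "profile", map[string]interface{}{
		"key1": &Object{Str: "value1", Num: 1},
		"key2": &Object{Str: "value2", Num: 2},
	}); err != nil {
		panic("not expected")
	}

	// Del evicts both keys; with a Pubsub configured, other instances drop
	// their local copies as well.
	if err := c.Del(ctx, "profile", "key1", "key2"); err != nil {
		panic("not expected")
	}
}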

type EventType added in v1.1.0

type EventType int32

EventType is an enumeration of events used to communicate with each other via Pubsub.

ENUM( None // Not registered Event by default. Evict // Evict presents eviction event. )

const (
	// EventTypeNone is a EventType of type None.
	// Not registered Event by default.
	EventTypeNone EventType = iota
	// EventTypeEvict is a EventType of type Evict.
	// Evict presents eviction event.
	EventTypeEvict
)

func ParseEventType added in v1.1.0

func ParseEventType(name string) (EventType, error)

ParseEventType attempts to convert a string to an EventType.

func (EventType) String added in v1.1.0

func (x EventType) String() string

String implements the Stringer interface.

func (EventType) Topic added in v1.1.0

func (x EventType) Topic() string

Topic generates the topic for specified event.

type Factory

type Factory interface {
	NewCache(settings []Setting) Cache
	Close()
}

Factory is initialized in main.go and used to generate a Cache for each piece of business logic.

func NewFactory

func NewFactory(sharedCache Adapter, localCache Adapter, options ...ServiceOptions) Factory

NewFactory returns the Factory initialized in the main.go.

type MGetterFunc

type MGetterFunc func(keys ...string) (interface{}, error)

MGetterFunc should return a slice of elements that has a one-to-one mapping with the provided keys.

type MSetOptions

type MSetOptions func(opts *msetOptions)

MSetOptions is an alias for functional argument.

func WithOnCostAddFunc

func WithOnCostAddFunc(f func(key string, cost int)) MSetOptions

WithOnCostAddFunc sets up the callback when adding the cache with key and cost.

func WithOnCostEvictFunc

func WithOnCostEvictFunc(f func(key string, cost int)) MSetOptions

WithOnCostEvictFunc sets up the callback when evicting the cache with key and cost.

type MarshalFunc

type MarshalFunc func(interface{}) ([]byte, error)

MarshalFunc specifies the algorithm for marshaling the value to bytes. The default is json.Marshal.

type Message

type Message interface {
	// Topic returns the topic
	Topic() string
	// Content returns the content of the message
	Content() []byte
}

Message is the interface for receiving messages from the message queue.

type OneTimeGetterFunc

type OneTimeGetterFunc func() (interface{}, error)

OneTimeGetterFunc should be provided as a parameter in GetByFunc()

type Pubsub

type Pubsub interface {
	// Pub publishes the message to the message queue with specified topic
	Pub(context context.Context, topic string, message []byte) error
	// Sub subscribes messages from the message queue with specified topics
	Sub(context context.Context, topic ...string) <-chan Message
	// Close closes the subscription only if Sub() is used.
	// In other words, it should handle the abnormal case in which Sub() was never called.
	Close()
}

Pubsub is the interface to deal with the message queue

type Redis

type Redis interface {
	Adapter
	Pubsub
}

Redis supports two interfaces: Adapter and Pubsub.

func NewRedis

func NewRedis(ring *redis.Ring) Redis

NewRedis generates Adapter with go-redis

type Result

type Result interface {
	Len() int
	Get(ctx context.Context, index int, container interface{}) error
}

Result holds the return values from MGet(). Use a for loop to parse all of the values.
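
A short hedged sketch of parsing the Result, in the style of the README examples; it assumes a Cache c built with the "mgetter" Setting and the Object type from the Get Started section, and treating a per-element error as a miss is our assumption.

ctx := context.TODO()

res, err := c.MGet(ctx, "mgetter", "key3", "key4")
if err != nil {
	panic("not expected")
}

// Result keeps the same order as the requested keys; unpack each element
// into its own container with Get().
for i := 0; i < res.Len(); i++ {
	container := &Object{}
	if err := res.Get(ctx, i, container); err != nil {
		// e.g. when that element could not be loaded (assumption: a miss surfaces here)
		continue
	}
	fmt.Println(container)
}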

type ServiceOptions

type ServiceOptions func(opts *serviceOptions)

ServiceOptions is an alias for functional argument.

func OnCacheHitFunc

func OnCacheHitFunc(f func(prefix string, key string, count int)) ServiceOptions

OnCacheHitFunc sets up the callback function on a cache hit.

func OnCacheMissFunc

func OnCacheMissFunc(f func(prefix string, key string, count int)) ServiceOptions

OnCacheMissFunc sets up the callback function on a cache miss.

func OnLocalCacheCostAddFunc

func OnLocalCacheCostAddFunc(f func(prefix string, key string, cost int)) ServiceOptions

OnLocalCacheCostAddFunc sets up the callback function on adding the cost of a key in the local cache.

func OnLocalCacheCostEvictFunc

func OnLocalCacheCostEvictFunc(f func(prefix string, key string, cost int)) ServiceOptions

OnLocalCacheCostEvictFunc sets up the callback function on evicting the cost of a key in the local cache.

func WithMarshalFunc

func WithMarshalFunc(f MarshalFunc) ServiceOptions

WithMarshalFunc sets up the specified marshal function. Consider it together with the unmarshal function.

func WithPubSub

func WithPubSub(pb Pubsub) ServiceOptions

WithPubSub is used to evict keys in the local cache.

func WithUnmarshalFunc

func WithUnmarshalFunc(f UnmarshalFunc) ServiceOptions

WithUnmarshalFunc sets up the specified unmarshal function. Consider it together with the marshal function.
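
As a hedged sketch of the data-compression feature, the pair below gzips JSON on the way in and reverses it on the way out. The compression scheme is our choice; only WithMarshalFunc / WithUnmarshalFunc and the constructors come from the package.

package main

import (
	"bytes"
	"compress/gzip"
	"encoding/json"
	"io"

	"github.com/go-redis/redis/v8"

	"github.com/viney-shih/go-cache"
)

func gzipMarshal(v interface{}) ([]byte, error) {
	raw, err := json.Marshal(v)
	if err != nil {
		return nil, err
	}
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(raw); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func gzipUnmarshal(data []byte, v interface{}) error {
	zr, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return err
	}
	defer zr.Close()
	raw, err := io.ReadAll(zr)
	if err != nil {
		return err
	}
	return json.Unmarshal(raw, v)
}

func main() {
	tinyLfu := cache.NewTinyLFU(10000)
	rds := cache.NewRedis(redis.NewRing(&redis.RingOptions{
		Addrs: map[string]string{"server1": ":6379"},
	}))

	// Register both functions together so values written compressed are read back the same way.
	_ = cache.NewFactory(rds, tinyLfu,
		cache.WithMarshalFunc(gzipMarshal),
		cache.WithUnmarshalFunc(gzipUnmarshal),
	)
}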

type Setting

type Setting struct {
	// Prefix is unique id for a group of the cache.
	Prefix string
	// CacheAttributes includes all detail attributes.
	CacheAttributes map[Type]Attribute
	// MGetter should be provided when using read-through pattern
	MGetter MGetterFunc
	// MarshalFunc specified the marshal function
	// Needs to consider with unmarshal function at the same time.
	MarshalFunc MarshalFunc
	// UnmarshalFunc specified the unmarshal function
	// Needs to consider with marshal function at the same time.
	UnmarshalFunc UnmarshalFunc
}

Setting maps a Prefix to its detailed Attributes. One Setting stands for one group of the cache, with the Prefix as its unique ID. In other words, each cache group has its own Attributes such as the TTL.

type TinyLFUOptions

type TinyLFUOptions func(opts *tinyLFUOptions)

TinyLFUOptions is an alias for functional argument.

func WithOffset

func WithOffset(offset time.Duration) TinyLFUOptions

WithOffset sets up the offset used to randomize TTLs, preventing keys from expiring at the same time.
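
A one-line sketch: give the local TTLs up to roughly ten seconds of jitter so hot keys don't all expire together (the size and offset values here are arbitrary).

// Randomize local TTLs with an offset so keys don't all expire at once.
tinyLfu := cache.NewTinyLFU(10000, cache.WithOffset(10*time.Second))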

type Type

type Type int32

Type decides which components are used in the multi-layer cache structure.

const (
	// NoneType
	NoneType Type = iota
	// SharedCacheType means shared caching. It ensures that different application instances see the same view of cached data.
	// The famous frameworks are Redis, Memcached, ... (Ref: https://en.wikipedia.org/wiki/Distributed_cache)
	SharedCacheType
	// LocalCacheType means private caching in a single application instance, and the most basic type of cache is an in-memory store.
	// It's held in the address space of a single process and accessed directly by the code that runs in that process.
	// Due to the limited space of memory, we need to consider the efficient cache eviction policy to keep the most important
	// items in it. (Ref: https://en.wikipedia.org/wiki/Cache_replacement_policies)
	LocalCacheType
)

All kinds of cache component types.

type UnmarshalFunc

type UnmarshalFunc func([]byte, interface{}) error

UnmarshalFunc specifies the algorithm for unmarshaling the bytes to the value. The default is json.Unmarshal.

type Value

type Value struct {
	// Valid stands for existing in cache or not.
	Valid bool
	// Bytes stands for the return value in byte format.
	Bytes []byte
}

Value is returned by MGet()
