datastorecache

package
v0.0.0-...-d60a78d
Published: Jul 29, 2023 License: Apache-2.0 Imports: 22 Imported by: 3

Documentation

Overview

Package datastorecache implements a managed, versatile datastore cache. Each datastorecache client obtains its own Cache instance for its specific cache type. That cache instance is given a "name" and is managed independently from other cache types.

Each cache instance additionally requires a management cron task to handle that specific cache "name": a handler must be registered for that cache name, and a cron task must be configured to periodically hit that handler.

Periodically, the management cron task will iterate through all cache entries and use their Handler to refresh those that are near expiration and delete those that haven't been used in a while.
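
For illustration, a minimal sketch of that wiring follows. The import paths (go.chromium.org/luci/appengine/datastorecache and go.chromium.org/luci/server/router), the cache name, the cron path, and the myHandler type are assumptions made for this sketch rather than requirements of the package; a Handler sketch appears under the Handler type below.

import (
	"context"

	"go.chromium.org/luci/appengine/datastorecache"
	"go.chromium.org/luci/server/router"
)

// myCache is a hypothetical cache instance. Its Name distinguishes its
// entries from those of other caches sharing the same datastore.
var myCache = datastorecache.Cache{
	Name: "my-cache",
	HandlerFunc: func(c context.Context) datastorecache.Handler {
		return &myHandler{} // hypothetical Handler implementation (see below)
	},
}

// installRoutes registers the management cron handler. A cron.yaml entry
// must be configured to hit this same path periodically.
func installRoutes(r *router.Router, base router.MiddlewareChain) {
	myCache.InstallCronRoute("/internal/cron/my-cache", r, base)
}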

Manager Task

The manager task runs periodically, triggered by cron. On each pass, it queries for all currently-registered cache entries and chooses an action for each:

  • If the entry hasn't been accessed in a while, it will be deleted.
  • If the entry references a Handler that isn't registered, it will be deleted eventually.
  • If the entry's "last refresh" timestamp is past its refresh period, it will be refreshed via its Handler.
  • Otherwise, the entry is left alone for the next pass.

TODO: Each datastorecache cache is designed to be shard-able if the manager refresh ever becomes too burdensome for a single cron session. However, sharding isn't currently implemented.


Constants

const (

	// DefaultCacheNamespace is the default datastore namespace for cache entries.
	DefaultCacheNamespace = "luci.datastoreCache"
)

Variables

var ErrDeleteCacheEntry = errors.New("delete this cache entry")

ErrDeleteCacheEntry is a sentinel error value that, if returned from a Handler's Refresh function, indicates that the cache entry being refreshed is no longer necessary and should be deleted.

var ErrFailedToLock = memlock.ErrFailedToLock

ErrFailedToLock is a sentinel error returned by Locker.TryWithLock if the lock is already held by another entity.

Functions

This section is empty.

Types

type Cache

type Cache struct {
	// Name is the name of this cache. This must be unique from other caches
	// managed by this GAE instance, and will be used to differentiate cache
	// keys from other caches sharing the same datastore.
	//
	// If Name is empty, the cache will choose a default name. It is critical
	// that either Name or Namespace be unique to this cache, or else its cache
	// entries will conflict with other cache instances.
	Name string

	// Namespace, if not empty, overrides the datastore namespace where entries
	// for this cache will be stored. If empty, DefaultCacheNamespace will be
	// used.
	Namespace string

	// AccessUpdateInterval is the amount of time after a cached entry has been
	// last marked as accessed before it should be re-marked as accessed. We do
	// this sparingly, enough that the entity is not likely to be considered a
	// candidate for pruning if it's being actively used.
	//
	// Recommended interval is 1 day. The only hard requirement is that this is
	// less than the PruneInterval. If this is <= 0, cached entities will never
	// have their access times updated.
	AccessUpdateInterval time.Duration

	// PruneFactor is the number of additional AccessUpdateInterval periods old
	// that a cache entry can be before it becomes a candidate for pruning.
	//
	// An entry becomes a candidate for pruning after
	// [AccessUpdateInterval * (PruneFactor+1)] time has passed since the last
	// AccessUpdateInterval.
	//
	// If this is <= 0, no entities will ever be pruned. This is potentially
	// acceptable when the total number of cached entries is expected to be low.
	PruneFactor int

	// Parallel is the number of parallel refreshes that this Handler type should
	// execute during the course of a single maintenance run.
	//
	// If this is <= 0, at most one parallel request will happen per shard.
	Parallel int

	// HandlerFunc returns a Handler implementation to use for this cache. It is
	// used both by the running application and by the manager cron to perform
	// cache operations.
	//
	// If HandlerFunc is nil, or if it returns nil, the cache will not be
	// accessible.
	HandlerFunc func(context.Context) Handler
}

Cache defines a generic, basic datastore cache. Content that is added to the cache is periodically refreshed and (if unused) pruned by a supporting cron task.

Using this cache requires a cache handler to be installed and a cron task to be configured to hit the handler's endpoint.

The cache works as follows:

  • The cache is scanned for an item
  • Locks are used to provide best-effort deduplication of refreshes for the same entity. Inability to lock will not prevent cache operations.
  • Upon access, an entry's "Accessed" timestamp will be updated to note that it is still in use. This happens probabilistically after an "access update interval" so that the update cost is not pointlessly incurred on every access.

The supporting cron task will execute periodically and maintain the cache:

  • If a cached entry's "Accessed" timestamp falls too far behind, the entry will be deleted.
  • If the cached entry is near expiration, the cron task will refresh the entry's data.

TODO(dnj): This caching scheme intentionally lends itself to sharding. This would be implemented by having the maintenance cron task kick off processing shard tasks, each querying a subset of the cached entity keyspace, rather than handling the whole key space itself. To this end, some areas of code for this cache will be programmed to operate on shards, even though currently the number of shards will always equal 1 (#0).
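
As a hedged illustration of the fields above, here is a cache whose entries are re-marked as accessed roughly once a day and become candidates for pruning after about four days without access. The values, the "config" name, and the configHandler type are examples for this sketch (not recommendations from the package), and the time package is assumed to be imported alongside the imports shown in the Overview sketch.

var configCache = datastorecache.Cache{
	Name:                 "config",
	AccessUpdateInterval: 24 * time.Hour, // re-mark "Accessed" about once a day
	PruneFactor:          3,              // prunable after (3+1) * 24h without access
	Parallel:             4,              // up to 4 concurrent refreshes per maintenance shard
	HandlerFunc: func(c context.Context) datastorecache.Handler {
		return &configHandler{} // hypothetical Handler implementation
	},
}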

func (*Cache) Get

func (cache *Cache) Get(c context.Context, key []byte) (Value, error)

Get retrieves a cached Value from the cache.

If the value is not defined, the Handler's Refresh function will be used to obtain the value and, upon success, the Value will be added to the cache and returned.

If the Refresh function returns an error, that error will be propagated as-is and returned.
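
A brief usage sketch, reusing myCache from the Overview sketch; the key and the processData helper are hypothetical:

value, err := myCache.Get(c, []byte("user:1234"))
if err != nil {
	return err // includes errors propagated from the Handler's Refresh
}
processData(value.Data) // hypothetical consumer of the cached bytes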

func (*Cache) InstallCronRoute

func (cache *Cache) InstallCronRoute(path string, r *router.Router, base router.MiddlewareChain)

InstallCronRoute installs a handler for this Cache's management cron task into the supplied Router at the specified path.

It is recommended to assert in the middleware that this endpoint is only accessible from a cron task.
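
For example, the route can be installed with a middleware chain extended to reject non-cron requests. The sketch below assumes gaemiddleware.RequireCron from go.chromium.org/luci/appengine/gaemiddleware; any middleware performing the same assertion works.

// Restrict the endpoint so only the cron service can reach it.
myCache.InstallCronRoute(
	"/internal/cron/my-cache",
	r,
	base.Extend(gaemiddleware.RequireCron),
)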

type Handler

type Handler interface {
	// RefreshInterval is the amount of time that can expire before data becomes
	// a candidate for refresh.
	//
	// This depends on the freshness of the data, and should be chosen by the
	// implementation. The only hard requirement is that this is less than the
	// PruneInterval. If this is <= 0, cached entities will never be refreshed.
	RefreshInterval(key []byte) time.Duration

	// Refresh is a callback function to refresh a given cache entity.
	//
	// This function must be concurrency-safe.
	//
	// The entity is described by key, which is the byte key for this entity. v
	// holds the current cache value for the entry; if there is no current cached
	// value, it will be a zero-value struct.
	//
	// If the ErrDeleteCacheEntry sentinel error is returned, the entity will be
	// deleted. If an error is returned, it will be propagated verbatim to the
	// caller. Otherwise, the return value will be used to update the cache
	// entity.
	Refresh(c context.Context, key []byte, v Value) (Value, error)

	// Locker returns the Locker instance to use.
	//
	// The Locker is optional, and serves to prevent multiple independent cache
	// calls for the same data from each independently refreshing that data. If
	// Locker returns nil, no such locking will be performed.
	Locker(c context.Context) Locker
}

Handler is a cache handler for a specific type of data. It is used at cache runtime to make decisions on how to populate and manage cache entries.
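
A hedged sketch of a Handler implementation follows. The myHandler type, the fetchFromBackend helper, the schema string, and the one-hour refresh interval are all hypothetical; only the method signatures and the Value, ErrDeleteCacheEntry, Locker, and MemLocker identifiers come from this package.

// myHandler caches the result of a (hypothetical) backend fetch.
type myHandler struct{}

// RefreshInterval: entries become candidates for refresh after one hour.
func (h *myHandler) RefreshInterval(key []byte) time.Duration {
	return time.Hour
}

// Refresh re-fetches the data for key, or asks for the entry's deletion if
// the underlying record no longer exists.
func (h *myHandler) Refresh(c context.Context, key []byte, v datastorecache.Value) (datastorecache.Value, error) {
	data, err := fetchFromBackend(c, key) // hypothetical fetch; returns nil if absent
	if err != nil {
		return datastorecache.Value{}, err
	}
	if data == nil {
		return datastorecache.Value{}, datastorecache.ErrDeleteCacheEntry
	}
	return datastorecache.Value{
		Schema:      "v1",
		Data:        data,
		Description: "cached backend record",
	}, nil
}

// Locker deduplicates concurrent refreshes of the same entry via memcache.
func (h *myHandler) Locker(c context.Context) datastorecache.Locker {
	return datastorecache.MemLocker(c)
}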

type Locker

type Locker interface {
	// TryWithLock blocks on acquiring a lock for the specified key, invokes the
	// supplied function while holding the lock, and releases the lock before
	// returning.
	//
	// If the lock is already held, TryWithLock should return ErrFailedToLock.
	// Otherwise, TryWithLock will forward the return value of fn.
	TryWithLock(c context.Context, key string, fn func(context.Context) error) error
}

Locker is an interface to a generic locking function.

func MemLocker

func MemLocker(c context.Context) Locker

MemLocker returns a Locker instance that uses a memcache lock bound to the current request ID.
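
A small usage sketch of TryWithLock with the memcache-backed Locker; the lock key and the doWork function are hypothetical:

locker := datastorecache.MemLocker(c)
switch err := locker.TryWithLock(c, "refresh:some-key", func(c context.Context) error {
	return doWork(c) // hypothetical work performed while holding the lock
}); err {
case nil:
	// Work completed while holding the lock.
case datastorecache.ErrFailedToLock:
	// Another entity holds the lock; skip this attempt.
default:
	// The work itself failed; handle err.
}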

type Value

type Value struct {
	// Schema is an optional schema string that will be encoded in the cache
	// entry.
	Schema string

	// Data is the cache entry's data value.
	Data []byte

	// Description is an optional description string that will be added to the
	// datastore entry for humans.
	Description string
}

Value is a cached value.
