Package dscache provides a transparent cache for RawDatastore which is backed by Memcache.

Although this is not a port of any particular implementation, it takes inspiration from several fine libraries in this space, such as goon, nds, and ndb.

Memcache contains cache entries for single datastore entities. The memcache key looks like

"gae:" | vers | ":" | shard | ":" | Base64_std_nopad(SHA1(datastore.Key))


- vers is an ascii-hex-encoded number (currently 1).
- shard is a zero-based ascii-hex-encoded number (depends on shardsForKey).
- SHA1 has been chosen as unlikely (p == 1e-18) to collide, given dedicated
  memcache sizes of up to 170 Exabytes (assuming an average entry size of
  100KB including the memcache key). This is clearly overkill, but MD5
  could start showing collisions at this probability in as small as a 26GB
  cache (and also MD5 sucks).
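The key layout above can be sketched directly. In this sketch, `memcacheKey` is an illustrative helper (not the package's API), the version is hard-coded to "1", and `keyBytes` stands in for a serialized datastore.Key:

```go
package main

import (
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

// memcacheKey builds a cache key following the
// "gae:" | vers | ":" | shard | ":" | Base64_std_nopad(SHA1(key)) layout.
// keyBytes stands in for a serialized datastore.Key.
func memcacheKey(shard int, keyBytes []byte) string {
	digest := sha1.Sum(keyBytes)
	// RawStdEncoding is standard base64 without padding.
	hash := base64.RawStdEncoding.EncodeToString(digest[:])
	return fmt.Sprintf("gae:1:%x:%s", shard, hash)
}

func main() {
	// Shard 0 of some entity's key.
	fmt.Println(memcacheKey(0, []byte("example-serialized-key")))
}
```

Because a SHA1 digest is 20 bytes, the unpadded base64 portion is always 27 characters, keeping the memcache key short and fixed-length regardless of the datastore key's size.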

The memcache value is a compression byte, indicating the scheme, followed by the encoded (and possibly compressed) value. Encoding is done with datastore.PropertyMap.Write(). The memcache value may also be the empty byte sequence, indicating that this entity is deleted.

The memcache entry may also have a 'flags' value set to one of the following:

- 0 "entity" (cached value)
- 1 "lock"   (someone is mutating this entry)

Algorithm - Put and Delete

On a Put (or Delete), an empty value is unconditionally written to memcache with a MutationLockTimeout expiration (default 2 min), and a memcache flag value of 0x1 (indicating that it's a put-locked key). The empty value precludes Get operations from believing that they possess the lock, since Get locks always carry a random nonce.

NOTE: If this memcache Set fails, it's a HARD ERROR. See DANGER ZONE.

The datastore operation will then occur. Assuming success, Put will then unconditionally delete all of the memcache locks. At some point later, a Get will write its own lock, get the value from datastore, and compare and swap to populate the value (detailed below).

Algorithm - Get

On a Get, "Add" a lock for it (which only does something if there's no entry in memcache yet) with a nonce value. We immediately Get the memcache entries back (for CAS purposes later).

If it doesn't exist (unlikely since we just Add'd it) or if its flag is "lock" and the Value != the nonce we put there, go hit the datastore without trying to update memcache.

If its flag is "entity", decode the object and return it. If the Value is the empty byte sequence, return ErrNoSuchEntity.

If its flag is "lock" and the Value equals the nonce, go get it from the datastore. If that's successful, then encode the value to bytes, and CAS the object to memcache. The CAS will succeed if nothing else touched the memcache in the meantime (like a Put, a memcache expiration/eviction, etc.).
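The Get decision table from the paragraphs above can be condensed into one switch. Here `item` and `getAction` are illustrative stand-ins for CacheItem and the internal Get logic:

```go
package main

import (
	"bytes"
	"fmt"
)

// item is a toy stand-in for CacheItem: a non-nil nonce means "lock".
type item struct {
	nonce []byte // nil for data items
	data  []byte // nil for lock items; empty (non-nil) means "deleted"
}

// getAction decides what a Get should do with the fetched cache item.
func getAction(it *item, ourNonce []byte) string {
	switch {
	case it == nil:
		// Missing entirely (evicted between Add and Get): skip the cache.
		return "datastore-only"
	case it.nonce != nil && !bytes.Equal(it.nonce, ourNonce):
		// Someone else's lock: read datastore, don't touch memcache.
		return "datastore-only"
	case it.nonce != nil:
		// Our own lock: read datastore, then CAS the value back in.
		return "datastore-then-cas"
	case len(it.data) == 0:
		// Cached negative lookup: the entity is known to be deleted.
		return "ErrNoSuchEntity"
	default:
		// Cached value: decode and return it.
		return "decode-cached"
	}
}

func main() {
	nonce := []byte("our-nonce")
	fmt.Println(getAction(&item{nonce: nonce}, nonce))
	fmt.Println(getAction(&item{nonce: []byte("theirs")}, nonce))
	fmt.Println(getAction(&item{data: []byte{}}, nonce))
	fmt.Println(getAction(&item{data: []byte("blob")}, nonce))
}
```

The nonce comparison is what makes the protocol safe: only the Get that wrote the lock may CAS a value into that slot, and the CAS itself rejects the write if anything else touched the entry in the meantime.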

Algorithm - Transactions

In a transaction, all Put memcache operations are held until the very end of the transaction. Right before the transaction is committed, all accumulated Put memcache items are unconditionally set into memcache.

NOTE: If this memcache Set fails, it's a HARD ERROR. See DANGER ZONE.

If the transaction is successfully committed (err == nil), then all the locks will be deleted.

The assumption here is that Get operations apply all outstanding transactions before they return data, and so it is safe to purge all the locks if the transaction is known-good.

If the transaction succeeds, but RunInTransaction returns an error (which can happen), or if the transaction fails, then the lock entries time out naturally. This will mean Gets will directly hit datastore until the locks expire, but it's the more-correct thing to do.

Gets and Queries in a transaction pass right through without reading or writing memcache.

Cache control

An entity may expose the following metadata (see datastore.PropertyLoadSaver.GetMeta) to control the behavior of its cache.

- `gae:"$dscache.enable,<true|false>"` - whether or not this entity should
  be cached at all. If omitted, dscache defaults to true.
- `gae:"$dscache.expiration,#seconds"` - the number of seconds of
  persistence to use when this item is cached. 0 is infinite. If omitted,
  defaults to CacheDuration.

In addition, the application may set a function shardsForKey(key) which returns the number of shards to use for a given datastore key. This function is set with the invocation of AddShardFunctions.

Shards have the effect that all write (Put/Delete) operations clear all memcache entries for the given datastore entry, and all reads read (and possibly populate) one of the memcache entries. So if an entity has 4 shards, a datastore Get for it will pull from one of the 4 possible memcache keys at random. This is good for heavily-read, but infrequently updated, entities. The purpose of sharding is to alleviate hot memcache keys.
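The shard-function mechanism can be sketched as follows. `Key`, `shardFns`, and `shardsForKey` are local stand-ins for the real datastore key type and the internal list maintained by AddShardFunctions:

```go
package main

import "fmt"

// Key stands in for *datastore.Key; only Kind is needed for this sketch.
type Key struct{ Kind string }

// ShardFunction mirrors dscache.ShardFunction: ok=false means
// "not recognized, ask the next function".
type ShardFunction func(*Key) (shards int, ok bool)

// shardFns plays the role of the list maintained by AddShardFunctions.
var shardFns []ShardFunction

// shardsForKey evaluates the functions in order and falls back to
// DefaultShards (1) when none of them recognize the key.
func shardsForKey(k *Key) int {
	for _, fn := range shardFns {
		if n, ok := fn(k); ok {
			return n
		}
	}
	return 1
}

func main() {
	// Heavily-read but rarely-updated counters get 4 shards.
	shardFns = append(shardFns, func(k *Key) (int, bool) {
		if k.Kind == "Counter" {
			return 4, true
		}
		return 0, false
	})
	fmt.Println(shardsForKey(&Key{Kind: "Counter"}), shardsForKey(&Key{Kind: "User"})) // → 4 1
}
```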


A few things to note that may differ from other appengine datastore caching libraries (like goon, nds, or ndb):

- It does NOT provide in-memory ("per-request") caching.
- It's INtolerant of some memcache failures, but in exchange will not return
  inconsistent results. See DANGER ZONE for details.
- Queries do not interact with the cache at all.
- Negative lookups (e.g. ErrNoSuchEntity) are cached.


DANGER ZONE

As mentioned in the Put/Delete/Transactions sections above, if the memcache Set fails, that's a HARD ERROR. The reason for this is that otherwise, in the event of transient memcache failures, memcache may be left permanently in an inconsistent state, since there would be nothing to ensure that the bad value is ever flushed. As long as the Put is allowed to write the lock, then all will (eventually) be well, and so all other memcache operations are best effort.

So, if memcache is DOWN, you will effectively see tons of errors in the logs, and all cached datastore access will be essentially degraded to a slow read-only state.

AppEngine's memcache reserves the right to evict keys at any moment. This is especially true for shared memcache, which is subject to pressures outside of your application. When eviction happens due to memory pressure, least-recently-used values are evicted first.

Eviction presents a potential race window, as lock items that were put in memcache may be evicted prior to the locked operations completing (or failing), causing concurrent Get operations to cache bad data indefinitely.

In practice, a dedicated memcache will be safe. LRU-based eviction means that recently added locks will almost certainly not be evicted before their operations complete, and a dedicated memcache limits eviction pressure to a single application's workload. Production systems that have data integrity requirements are encouraged to use a dedicated memcache.

Note that flushing memcache of a running application may also induce this race. Flushes should be performed with this concern in mind.

TODO: A potential mitigation to lock eviction poisoning is to use memcache Statistics to identify the oldest memcache item and use that age to bound the lifetime of cached datastore entries. This would cause dscache items created around the time of a flush to expire quickly (instead of never), bounding the period of time when poisoned data may reside in the cache.



const (
	// MutationLockTimeout is expiration time of a "lock" memcache entry that
	// protects mutations (Put/Delete/Commit). It should be larger than the
	// maximum expected duration of datastore mutating operations. Must have
	// seconds precision.
	MutationLockTimeout = 120 * time.Second

	// RefreshLockTimeout is expiration time of a "lock" memcache entry that
	// protects the cache refresh process (during Get). It should be larger than
	// expected Get duration, but it's not a big deal if the lock expires sooner.
	// Must have seconds precision.
	RefreshLockTimeout = 20 * time.Second

	// CacheDuration is the default duration that a cached entity will be retained
	// (memcache contention notwithstanding). Must have seconds precision.
	CacheDuration = time.Hour * 24

	// CompressionThreshold is the number of bytes of entity value after which
	// compression kicks in.
	CompressionThreshold = 16 * 1024

	// DefaultShards is the default number of shards to use per key.
	DefaultShards = 1

	// MaxShards is the maximum number of shards a single entity can have.
	MaxShards = 256

	// MemcacheVersion will be incremented in the event that the in-memcache
	// representation of the cache data is modified.
	MemcacheVersion = "1"

	// KeyFormat is the format string used to generate memcache keys. It's
	//   gae:<version>:<shard#>:<base64_std_nopad(sha1(datastore.Key))>
	KeyFormat = "gae:" + MemcacheVersion + ":%x:%s"

	// ValueSizeLimit is the maximum encoded size a datastore key+entry may
	// occupy. If a datastore entity is too large, it will have an indefinite
	// lock which will cause all clients to fetch it from the datastore.
	// 80 is approximately the internal GAE padding. 36 is the maximum length
	// of our keys.
	ValueSizeLimit = (1024 * 1024) - 80 - 36

	// CacheEnableMeta is the gae metadata key name for whether or not dscache
	// is enabled for an entity type at all.
	CacheEnableMeta = "dscache.enable"

	// CacheExpirationMeta is the gae metadata key name for the default
	// expiration time (in seconds) for an entity type.
	CacheExpirationMeta = "dscache.expiration"

	// NonceBytes is the number of bytes to use in the 'lock' nonce.
	NonceBytes = 8
)

var UseZstd = false

UseZstd, if true, instructs dscache to use the zstd algorithm for compression instead of zlib.

Entities that were already compressed with zlib remain readable even when UseZstd is true.


func AddShardFunctions

func AddShardFunctions(ctx context.Context, shardFns ...ShardFunction) context.Context

AddShardFunctions appends the provided shardFn functions to the internal list of shard functions. Within a single call they are evaluated left to right, and functions from later calls are evaluated before those from earlier calls (i.e. bottom to top).

nil functions will cause a panic.


ctx = AddShardFunctions(ctx, A, B, C)
ctx = AddShardFunctions(ctx, D, E, F)

Would evaluate `D, E, F, A, B, C`

func FilterRDS

func FilterRDS(ctx context.Context, impl Cache) context.Context

FilterRDS installs a caching RawDatastore filter in the context.

It uses the given `impl` to actually do caching. If nil, uses GAE Memcache.


type Cache

type Cache interface {
	// PutLocks is called before mutating entities during Put/Delete/Commit.
	// `keys` represent CacheItem keys of all shards of all to-be-mutated
	// entities. The implementation should unconditionally write locks into
	// all of these keys.
	// Errors are treated as fatal.
	PutLocks(ctx context.Context, keys []string, timeout time.Duration) error

	// DropLocks is called after finishing Put/Delete/Commit.
	// The implementation should unconditionally remove these keys, thus unlocking
	// them (if they were locked).
	// Errors are logged, but ignored.
	DropLocks(ctx context.Context, keys []string) error

	// TryLockAndFetch is called before executing Get.
	// Each key is either empty, or contains some random shard of a to-be-fetched
	// entity (one such key per entity). For each non-empty key, if it doesn't
	// exist yet, the implementation should try to write a lock with the nonce.
	// It then should fetch all keys (whatever they might be).
	// Should always return len(keys) items, even on errors. Items matching empty
	// keys should be nil. Items that do not exist in the cache should also be
	// represented by nils.
	// Errors are logged, but ignored (i.e. treated as cache misses).
	TryLockAndFetch(ctx context.Context, keys []string, nonce []byte, timeout time.Duration) ([]CacheItem, error)

	// CompareAndSwap stores promoted items (see CacheItem) in place of locks
	// they formerly represented iff the cache still has the same locks there.
	// Errors are logged, but ignored.
	CompareAndSwap(ctx context.Context, items []CacheItem) error
}

Cache abstracts a particular memcache implementation.

This interface is tightly coupled to the dscache algorithm (rather than trying to emulate a generic cache API) to allow the implementation to be as efficient as possible.

type CacheItem

type CacheItem interface {
	// Key is the item's key as passed to TryLockAndFetch.
	Key() string

	// Nonce returns nil for data items or a lock nonce for lock items.
	Nonce() []byte

	// Data returns nil for lock items or an item's data for data items.
	Data() []byte

	// Prefix should be written to the data buffer passed to PromoteToData.
	Prefix() []byte

	// PromoteToData converts this lock item into a data item.
	// `data` must start with whatever Prefix() returned.
	// Panics if self is not a lock item.
	PromoteToData(data []byte, exp time.Duration)

	// PromoteToIndefiniteLock converts this lock into an indefinite lock.
	// An indefinite lock means that the datastore item is not cacheable for some
	// reasons and 'Get' should not try to cache it. Such locks are removed by
	// PutLocks/DropLocks.
	// Panics if self is not a lock item.
	PromoteToIndefiniteLock()
}

CacheItem represents either a cached datastore entity or a placeholder lock that "promises" that such entity is being fetched now (either by us or by someone else).

CacheItem is created by TryLockAndFetch. An item that represents a lock can be "promoted" into either a data item or a permanent lock. Such promoted items are stored by CompareAndSwap.

type ShardFunction

type ShardFunction func(*ds.Key) (shards int, ok bool)

ShardFunction is a user-controllable function which calculates the number of shards to use for a certain datastore key. The provided key will always be valid and complete. It should return ok=true if it recognized the Key, and false otherwise.

The # of shards returned may be between 1 and 256. Values above this range will be clamped into that range. A return value of 0 means that NO cache operations should be done for this key, regardless of the dscache.enable setting.