blobstore

package
v0.0.0-...-938d447
Published: Nov 5, 2019 License: Apache-2.0 Imports: 24 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type BlobAccess

type BlobAccess interface {
	Get(ctx context.Context, digest *util.Digest) (int64, io.ReadCloser, error)
	Put(ctx context.Context, digest *util.Digest, sizeBytes int64, r io.ReadCloser) error
	Delete(ctx context.Context, digest *util.Digest) error
	FindMissing(ctx context.Context, digests []*util.Digest) ([]*util.Digest, error)
}

BlobAccess is an abstraction for a data store that can be used to hold both a Bazel Action Cache (AC) and Content Addressable Storage (CAS).
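
A short usage sketch (the blobstore and util package qualifiers refer to this module's own packages; their import paths are not shown on this page and are assumed):

// uploadIfMissing stores a blob only if the backend does not already hold it.
// Sketch only: requires the standard "bytes", "context" and "io/ioutil"
// imports plus this module's blobstore and util packages.
func uploadIfMissing(ctx context.Context, storage blobstore.BlobAccess, digest *util.Digest, data []byte) error {
	missing, err := storage.FindMissing(ctx, []*util.Digest{digest})
	if err != nil {
		return err
	}
	if len(missing) == 0 {
		return nil // The blob is already present.
	}
	return storage.Put(ctx, digest, int64(len(data)), ioutil.NopCloser(bytes.NewReader(data)))
}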

func NewActionCacheBlobAccess

func NewActionCacheBlobAccess(client *grpc.ClientConn) BlobAccess

NewActionCacheBlobAccess creates a BlobAccess handle that relays any requests to a GRPC service that implements the remoteexecution.ActionCache service. That is the service that Bazel uses to access action results stored in the Action Cache.
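
A sketch of typical wiring, assuming an insecure connection to a caller-supplied endpoint:

// newActionCache dials a remote cache and exposes its Action Cache as a
// BlobAccess. Sketch only: TLS and other dial options are omitted.
func newActionCache(address string) (blobstore.BlobAccess, error) {
	conn, err := grpc.Dial(address, grpc.WithInsecure())
	if err != nil {
		return nil, err
	}
	return blobstore.NewActionCacheBlobAccess(conn), nil
}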

func NewCloudBlobAccess

func NewCloudBlobAccess(bucket *blob.Bucket, keyPrefix string, keyFormat util.DigestKeyFormat) BlobAccess

NewCloudBlobAccess creates a BlobAccess that uses a cloud-based blob storage as a backend.
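
For example, a Go CDK bucket can be opened by URL and handed to the constructor. In this sketch the bucket URL and "cas/" key prefix are placeholders, and the key format is taken as a parameter because the available util.DigestKeyFormat values are not listed on this page.

// newCloudCAS stores blobs in a Go CDK bucket (S3, GCS, Azure, ...).
// Opening an "s3://" URL additionally requires a blank import of
// gocloud.dev/blob/s3blob to register the driver.
func newCloudCAS(ctx context.Context, keyFormat util.DigestKeyFormat) (blobstore.BlobAccess, error) {
	bucket, err := blob.OpenBucket(ctx, "s3://my-cache-bucket?region=us-east-1")
	if err != nil {
		return nil, err
	}
	return blobstore.NewCloudBlobAccess(bucket, "cas/", keyFormat), nil
}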

func NewContentAddressableStorageBlobAccess

func NewContentAddressableStorageBlobAccess(client *grpc.ClientConn, readChunkSize int) BlobAccess

NewContentAddressableStorageBlobAccess creates a BlobAccess handle that relays any requests to a GRPC service that implements the bytestream.ByteStream and remoteexecution.ContentAddressableStorage services. Those are the services that Bazel uses to access blobs stored in the Content Addressable Storage.
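
A sketch reusing a gRPC connection such as the one dialed for the Action Cache above; the 64 KiB chunk size is an arbitrary illustrative value.

// newRemoteCAS exposes a remote CAS over the ByteStream and
// ContentAddressableStorage services. The chunk size bounds individual reads.
func newRemoteCAS(conn *grpc.ClientConn) blobstore.BlobAccess {
	return blobstore.NewContentAddressableStorageBlobAccess(conn, 64*1024)
}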

func NewErrorBlobAccess

func NewErrorBlobAccess(err error) BlobAccess

NewErrorBlobAccess creates a BlobAccess that returns a fixed error response. Such an implementation is useful for adding explicit rejection of oversized requests or disabling storage entirely.
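
For example, an instance could refuse Action Cache requests entirely by wiring in a fixed gRPC status error (a sketch; the code and message are illustrative, using the google.golang.org/grpc codes and status packages):

// disabledAC rejects every Action Cache request with a fixed error.
var disabledAC = blobstore.NewErrorBlobAccess(
	status.Error(codes.PermissionDenied, "This instance does not accept Action Cache requests"))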

func NewMerkleBlobAccess

func NewMerkleBlobAccess(blobAccess BlobAccess) BlobAccess

NewMerkleBlobAccess creates an adapter that validates that blobs read from and written to storage correspond with the digest that is used for identification. It ensures that the size and the SHA-256 based checksum match. This is used to ensure clients cannot corrupt the CAS and that if corruption were to occur, use of corrupted data is prevented.
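
Because it is a decorator around another BlobAccess, enabling validation is a single wrap. A sketch, using the hypothetical newRemoteCAS helper from the earlier example:

// validatedCAS rejects any blob whose size or SHA-256 checksum does not
// match the digest used to store or retrieve it.
func validatedCAS(conn *grpc.ClientConn) blobstore.BlobAccess {
	return blobstore.NewMerkleBlobAccess(newRemoteCAS(conn))
}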

func NewMetricsBlobAccess

func NewMetricsBlobAccess(blobAccess BlobAccess, name string) BlobAccess

NewMetricsBlobAccess creates an adapter for BlobAccess that adds basic instrumentation in the form of Prometheus metrics.
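
The name argument distinguishes the backend in the exported metrics. A sketch stacking it on top of the validated CAS above (the "cas" label is illustrative):

// instrumentedCAS records Prometheus metrics for every call, labelled "cas".
func instrumentedCAS(conn *grpc.ClientConn) blobstore.BlobAccess {
	return blobstore.NewMetricsBlobAccess(validatedCAS(conn), "cas")
}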

func NewRedisBlobAccess

func NewRedisBlobAccess(redisClient *redis.Client, blobKeyFormat util.DigestKeyFormat) BlobAccess

NewRedisBlobAccess creates a BlobAccess that uses Redis as its backing store.
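
A sketch using the go-redis client; the server address is a placeholder, and the key format is taken as a parameter because the available util.DigestKeyFormat values are not listed on this page.

// newRedisBackend keeps blobs in a Redis server, which suits small,
// frequently accessed objects such as Action Cache entries.
func newRedisBackend(blobKeyFormat util.DigestKeyFormat) blobstore.BlobAccess {
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	return blobstore.NewRedisBlobAccess(client, blobKeyFormat)
}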

func NewRemoteBlobAccess

func NewRemoteBlobAccess(address, prefix string) BlobAccess

NewRemoteBlobAccess creates a BlobAccess that uses an HTTP/1.1 remote cache as its backend.

See: https://docs.bazel.build/versions/master/remote-caching.html#http-caching-protocol
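
A sketch; the address is a placeholder, and the prefix is assumed to name the cache namespace on the HTTP server (for example "cas" or "ac").

// newHTTPCAS talks to a plain HTTP/1.1 cache as described in the Bazel
// remote caching documentation linked above.
func newHTTPCAS() blobstore.BlobAccess {
	return blobstore.NewRemoteBlobAccess("http://cache.example.com:8080", "cas")
}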

func NewSizeDistinguishingBlobAccess

func NewSizeDistinguishingBlobAccess(smallBlobAccess BlobAccess, largeBlobAccess BlobAccess, cutoffSizeBytes int64) BlobAccess

NewSizeDistinguishingBlobAccess creates a BlobAccess that splits up requests between two backends based on the size of the object specified in the digest. Backends tend to have different performance characteristics based on blob size. This adapter may be used to optimize performance based on that.
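
A sketch of a common split, routing blobs up to 1 MiB (a hypothetical cutoff) to a fast small-object store such as Redis and everything larger to a cloud bucket:

// newTieredStorage sends small blobs to one backend and large blobs to
// another. The 1 MiB cutoff is illustrative; tune it to the backends used.
func newTieredStorage(small blobstore.BlobAccess, large blobstore.BlobAccess) blobstore.BlobAccess {
	return blobstore.NewSizeDistinguishingBlobAccess(small, large, 1<<20)
}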
