blob

package
v0.8.4
Published: Apr 26, 2021 License: Apache-2.0 Imports: 6 Imported by: 6

Documentation

Overview

Package blob implements simple storage of immutable, unstructured binary large objects (BLOBs).

Constants

This section is empty.

Variables

var ErrBlobNotFound = errors.New("BLOB not found")

ErrBlobNotFound is returned when a BLOB cannot be found in storage.

var ErrInvalidRange = errors.Errorf("invalid blob offset or length")

ErrInvalidRange is returned when the requested blob offset or length is invalid.

var ErrSetTimeUnsupported = errors.Errorf("SetTime is not supported")

ErrSetTimeUnsupported is returned by implementations of Storage that don't support SetTime.
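
A minimal sketch (not part of this package) of distinguishing these sentinel errors, assuming the import path github.com/kopia/kopia/repo/blob and that implementations return or wrap the sentinels so that errors.Is matches:

import (
	"context"
	"errors"
	"fmt"

	"github.com/kopia/kopia/repo/blob"
)

// readBlob fetches the entire blob and maps the package's sentinel errors to friendlier messages.
func readBlob(ctx context.Context, st blob.Storage, id blob.ID) ([]byte, error) {
	data, err := st.GetBlob(ctx, id, 0, -1) // length < 0 fetches the entire blob
	switch {
	case errors.Is(err, blob.ErrBlobNotFound):
		return nil, fmt.Errorf("blob %v does not exist: %w", id, err)
	case errors.Is(err, blob.ErrInvalidRange):
		return nil, fmt.Errorf("invalid range requested for blob %v: %w", id, err)
	case err != nil:
		return nil, err
	}
	return data, nil
}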

Functions

func AddSupportedStorage

func AddSupportedStorage(
	urlScheme string,
	defaultConfigFunc func() interface{},
	createStorageFunc func(context.Context, interface{}) (Storage, error),
)

AddSupportedStorage registers a factory function to create storage with a given type name.
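
A hedged sketch of how a provider package might call this function from its init(), assuming the module's import path github.com/kopia/kopia/repo/blob; the mystore package, its Options type, and the assumption that NewStorage decodes configuration into the value returned by defaultConfigFunc before invoking createStorageFunc are all illustrative, not part of this module:

// Package mystore is a hypothetical storage provider used only to illustrate registration.
package mystore

import (
	"context"
	"errors"

	"github.com/kopia/kopia/repo/blob"
)

// Options is a hypothetical, JSON-serializable provider configuration.
type Options struct {
	Endpoint string `json:"endpoint"`
}

func init() {
	blob.AddSupportedStorage(
		"mystore",                                // type name recorded in ConnectionInfo.Type
		func() interface{} { return &Options{} }, // prototype config for NewStorage to decode into
		func(ctx context.Context, cfg interface{}) (blob.Storage, error) {
			return newStorage(ctx, cfg.(*Options))
		},
	)
}

// newStorage would construct the real provider; it is stubbed here to keep the sketch self-contained.
func newStorage(ctx context.Context, opt *Options) (blob.Storage, error) {
	return nil, errors.New("mystore: not implemented in this sketch")
}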

func EnsureLengthAndTruncate added in v0.8.0

func EnsureLengthAndTruncate(b []byte, length int64) ([]byte, error)

EnsureLengthAndTruncate validates that the length of the given slice is at least the provided value and returns ErrInvalidRange if it is not. As a special case, length < 0 disables validation.

func EnsureLengthExactly added in v0.8.0

func EnsureLengthExactly(b []byte, length int64) ([]byte, error)

EnsureLengthExactly validates that the length of the given slice is exactly the provided value and returns ErrInvalidRange if it is not. As a special case, length < 0 disables validation.
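
A brief sketch showing both helpers side by side, assuming the import path github.com/kopia/kopia/repo/blob; the truncation behaviour mentioned in the comments is inferred from the function name rather than stated above:

import "github.com/kopia/kopia/repo/blob"

// validateRead illustrates the difference between the two helpers for a buffer read from a backend.
func validateRead(buf []byte, want int64) ([]byte, []byte, error) {
	// At-least semantics: fails with ErrInvalidRange when len(buf) < want;
	// as the name suggests, the result is presumably limited to want bytes.
	atLeast, err := blob.EnsureLengthAndTruncate(buf, want)
	if err != nil {
		return nil, nil, err
	}

	// Exact semantics: fails with ErrInvalidRange unless len(buf) == want.
	exact, err := blob.EnsureLengthExactly(buf, want)
	if err != nil {
		return nil, nil, err
	}

	// Passing want < 0 disables validation in both helpers.
	return atLeast, exact, nil
}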

func IterateAllPrefixesInParallel

func IterateAllPrefixesInParallel(ctx context.Context, parallelism int, st Storage, prefixes []ID, callback func(Metadata) error) error

IterateAllPrefixesInParallel invokes the provided callback for each blob matching any of the given prefixes, using the specified parallelism, and returns the first error returned by the callback, or nil.
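
An illustrative sketch of counting blobs under several prefixes, assuming the import path github.com/kopia/kopia/repo/blob; whether the callback may be invoked concurrently is an assumption here, so the counter is updated atomically to be safe:

import (
	"context"
	"sync/atomic"

	"github.com/kopia/kopia/repo/blob"
)

// countBlobs returns the number of blobs whose IDs start with any of the given prefixes.
func countBlobs(ctx context.Context, st blob.Storage, prefixes []blob.ID) (int64, error) {
	var total int64

	err := blob.IterateAllPrefixesInParallel(ctx, 4, st, prefixes,
		func(bm blob.Metadata) error {
			atomic.AddInt64(&total, 1)
			return nil
		})

	return total, err
}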

Types

type Bytes added in v0.6.0

type Bytes interface {
	io.WriterTo

	Length() int
	Reader() io.Reader
}

Bytes encapsulates a sequence of bytes, possibly stored in non-contiguous buffers, which can be written sequentially or treated as an io.Reader.
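
A minimal sketch of a Bytes implementation backed by a single contiguous []byte; it is illustrative only and not the implementation used by this module:

import (
	"bytes"
	"io"
)

// sliceBytes adapts a plain []byte to the Bytes interface.
type sliceBytes []byte

func (s sliceBytes) Length() int       { return len(s) }
func (s sliceBytes) Reader() io.Reader { return bytes.NewReader(s) }

// WriteTo satisfies io.WriterTo by writing the whole buffer sequentially.
func (s sliceBytes) WriteTo(w io.Writer) (int64, error) {
	n, err := w.Write(s)
	return int64(n), err
}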

type ConnectionInfo

type ConnectionInfo struct {
	Type   string
	Config interface{}
}

ConnectionInfo represents JSON-serializable configuration of a blob storage.

func (ConnectionInfo) MarshalJSON

func (c ConnectionInfo) MarshalJSON() ([]byte, error)

MarshalJSON returns JSON-encoded storage configuration.

func (*ConnectionInfo) UnmarshalJSON

func (c *ConnectionInfo) UnmarshalJSON(b []byte) error

UnmarshalJSON parses the JSON-encoded data into ConnectionInfo.
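
A hedged sketch of round-tripping ConnectionInfo through encoding/json, assuming the import path github.com/kopia/kopia/repo/blob; the "mystore" type name follows the hypothetical registration sketch above, and it is assumed that decoding requires the corresponding provider type to have been registered via AddSupportedStorage:

import (
	"encoding/json"
	"fmt"

	"github.com/kopia/kopia/repo/blob"
)

// roundTripConnectionInfo encodes a ConnectionInfo to JSON and decodes it back.
func roundTripConnectionInfo() error {
	ci := blob.ConnectionInfo{
		Type:   "mystore",                                            // hypothetical provider from the sketch above
		Config: map[string]string{"endpoint": "http://localhost:9000"}, // illustrative config shape
	}

	encoded, err := json.Marshal(ci) // invokes ConnectionInfo.MarshalJSON
	if err != nil {
		return err
	}

	var decoded blob.ConnectionInfo
	if err := json.Unmarshal(encoded, &decoded); err != nil { // invokes (*ConnectionInfo).UnmarshalJSON
		return err
	}

	fmt.Println(decoded.Type)
	return nil
}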

type ID

type ID string

ID is a string that represents a blob identifier.

type Metadata

type Metadata struct {
	BlobID    ID        `json:"id"`
	Length    int64     `json:"length"`
	Timestamp time.Time `json:"timestamp"`
}

Metadata represents metadata about a single BLOB in storage.

func ListAllBlobs

func ListAllBlobs(ctx context.Context, st Storage, prefix ID) ([]Metadata, error)

ListAllBlobs returns Metadata for all blobs in a given storage that have the provided name prefix.
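
An illustrative use of ListAllBlobs that prints each blob's metadata and sums the total size, assuming the import path github.com/kopia/kopia/repo/blob:

import (
	"context"
	"fmt"

	"github.com/kopia/kopia/repo/blob"
)

// totalSize adds up the Length of every blob whose ID starts with prefix.
func totalSize(ctx context.Context, st blob.Storage, prefix blob.ID) (int64, error) {
	blobs, err := blob.ListAllBlobs(ctx, st, prefix)
	if err != nil {
		return 0, err
	}

	var total int64
	for _, bm := range blobs {
		fmt.Printf("%v: %v bytes, modified %v\n", bm.BlobID, bm.Length, bm.Timestamp)
		total += bm.Length
	}
	return total, nil
}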

func (*Metadata) String added in v0.6.0

func (m *Metadata) String() string

type Reader added in v0.8.0

type Reader interface {
	// GetBlob returns full or partial contents of a blob with the given ID.
	// If length>0, the function retrieves a range of bytes [offset,offset+length).
	// If length<0, the entire blob must be fetched.
	// Returns ErrInvalidRange if the fetched blob length is invalid.
	GetBlob(ctx context.Context, blobID ID, offset, length int64) ([]byte, error)

	// GetMetadata returns Metadata about a single blob.
	GetMetadata(ctx context.Context, blobID ID) (Metadata, error)

	// ListBlobs invokes the provided callback for each blob in the storage.
	// Iteration continues until the callback returns an error or until all matching blobs have been reported.
	ListBlobs(ctx context.Context, blobIDPrefix ID, cb func(bm Metadata) error) error

	// ConnectionInfo returns a JSON-serializable data structure containing the information required to
	// connect to storage.
	ConnectionInfo() ConnectionInfo

	// Name of the storage used for quick identification by humans.
	DisplayName() string
}

Reader defines the read-access API to blob storage.
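
A sketch of consuming the Reader interface with one partial GetBlob and one streaming ListBlobs pass, assuming the import path github.com/kopia/kopia/repo/blob and that an empty prefix matches all blobs:

import (
	"context"

	"github.com/kopia/kopia/repo/blob"
)

// readHeaderAndCount fetches the first 16 bytes of one blob and counts all blobs in storage.
func readHeaderAndCount(ctx context.Context, r blob.Reader, id blob.ID) ([]byte, int, error) {
	// Bytes [0,16) of the blob; a shorter blob would surface ErrInvalidRange.
	header, err := r.GetBlob(ctx, id, 0, 16)
	if err != nil {
		return nil, 0, err
	}

	// Returning a non-nil error from the callback stops iteration early.
	count := 0
	err = r.ListBlobs(ctx, "", func(bm blob.Metadata) error {
		count++
		return nil
	})

	return header, count, err
}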

type Storage

type Storage interface {
	Reader

	// PutBlob uploads the blob with the given data to the repository, or replaces an existing blob with the
	// provided ID, with contents gathered from the specified list of slices.
	PutBlob(ctx context.Context, blobID ID, data Bytes) error

	// SetTime changes the last modification time of a given blob, if supported; otherwise it returns ErrSetTimeUnsupported.
	SetTime(ctx context.Context, blobID ID, t time.Time) error

	// DeleteBlob removes the blob from storage. Future GetBlob() operations will fail with ErrBlobNotFound.
	DeleteBlob(ctx context.Context, blobID ID) error

	// Close releases all resources associated with storage.
	Close(ctx context.Context) error
}

Storage encapsulates the API for connecting to blob storage.

The underlying storage system must provide:

* high durability, availability and bit-rot protection
* read-after-write - a blob written using PutBlob() must be immediately readable using GetBlob() and ListBlobs()
* atomicity - it mustn't be possible to observe partial results of PutBlob() via either GetBlob() or ListBlobs()
* timestamps that don't go back in time (small clock skew of up to a few minutes is allowed)
* reasonably low latency for retrievals

The required semantics are provided by existing commercial cloud storage products (Google Cloud, AWS, Azure).

func NewStorage

func NewStorage(ctx context.Context, cfg ConnectionInfo) (Storage, error)

NewStorage creates new storage based on ConnectionInfo. The storage type must have been previously registered using AddSupportedStorage.
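
An end-to-end sketch: open storage from a ConnectionInfo, write a blob, read it back, delete it and close. It assumes the import path github.com/kopia/kopia/repo/blob, that a provider matching cfg.Type has already been registered (for example via the providers subpackage), and it reuses the illustrative sliceBytes type from the Bytes sketch above:

import (
	"context"

	"github.com/kopia/kopia/repo/blob"
)

// demo exercises the basic Storage lifecycle against whatever backend cfg describes.
func demo(ctx context.Context, cfg blob.ConnectionInfo) error {
	st, err := blob.NewStorage(ctx, cfg)
	if err != nil {
		return err
	}
	defer st.Close(ctx) // release resources; error ignored in this sketch

	// sliceBytes is the illustrative Bytes implementation shown earlier.
	if err := st.PutBlob(ctx, "example-blob", sliceBytes([]byte("hello"))); err != nil {
		return err
	}

	data, err := st.GetBlob(ctx, "example-blob", 0, -1) // length < 0 fetches the entire blob
	if err != nil {
		return err
	}
	_ = data // use the contents here

	return st.DeleteBlob(ctx, "example-blob")
}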

Directories

Path Synopsis
azure       Package azure implements Azure Blob Storage.
b2          Package b2 implements Storage based on a Backblaze B2 bucket.
filesystem  Package filesystem implements filesystem-based Storage.
gcs         Package gcs implements Storage based on a Google Cloud Storage bucket.
logging     Package logging implements a wrapper around Storage that logs all activity.
providers   Package providers registers all storage providers that are included as part of Kopia.
rclone      Package rclone implements a blob storage provider proxied by rclone (http://rclone.org).
readonly    Package readonly implements a wrapper around Storage that prevents all mutations.
retrying    Package retrying implements a wrapper around blob.Storage that adds a retry loop around all operations in case they return unexpected errors.
s3          Package s3 implements Storage based on an S3 bucket.
sftp        Package sftp implements blob storage provided over SFTP/SSH.
sharded     Package sharded implements common support for sharded blob providers, such as filesystem or webdav.
webdav      Package webdav implements WebDAV-based Storage.
