dsblob

package module
v0.0.0-...-18b4d76
Published: Mar 25, 2023 License: Apache-2.0 Imports: 13 Imported by: 1

README

go-ds-blob

A cloudy blobstore-backed datastore

This is a go-datastore adapter backed by the blob-store services offered by popular cloud providers. It uses the blob library from the Go Cloud Development Kit as its storage backend, so multiple cloud providers are supported through a single interface.

Why should I use this datastore?

  • I already run a go-datastore application and I want to make management easier
  • I am paying for empty block storage, e.g. badgerdb-on-ebs
  • I run workloads on multiple clouds
  • I have a large datastore that cannot fit on a single block-storage disk
  • I want to share a datastore with multiple processes, machines, containers, lambda/function
  • I want to use local files or cloud providers without plugins or rebuilding

Why should I not use this datastore?

  • I want my data to be local (although, local file backend is also supported)
  • I frequently query data by attributes or relations
  • I need ACID transactions

Get started

By default the backend is inferred from the prefix of the bucket name you specify. If default credentials for that provider are set up, they are used automatically.

// bkt := "gs://my-bucket"
// bkt := "s3://my-bucket"
// bkt := "azblob://my-bucket"
// bkt := "file://my-bucket"

bkt := "mem://my-bucket"

d, err := New(context.Background(), bkt)
if err != nil {
	// handle error
}

disclaimer

blobstores do not make a perfect datastore. Many blob-store services are eventually consistent, their query APIs are basic compared to other databases, and it is difficult to support every datastore feature well. However, while they lack the rich query features found in RDBMS or document databases, they are often an order of magnitude less expensive to operate, and performant at high scale.

prior art

There is already an S3-only implementation maintained by Protocol Labs: github.com/ipfs/go-ds-s3

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type CloudDatastore

type CloudDatastore struct {
	// contains filtered or unexported fields
}

CloudDatastore implements the ipfs/go-datastore Datastore interface, using a google/go-cloud blob.Bucket to store data.

func New

func New(ctx context.Context, bucketName string) (*CloudDatastore, error)

New creates a CloudDatastore from a bucket name, using default parameters appropriate for the bucket type. GCS, S3, Azure, memory, and file buckets are supported; each system has its own default parameters. See https://gocloud.dev/howto/blob/ for information on each system.

bucketName should have the appropriate prefix for the bucket type, e.g.

  • "gs://my-bucket" for GCS
  • "s3://my-bucket" for S3
  • "azblob://my-bucket" for Azure
  • "file://my-bucket" for file
  • "mem://my-bucket" for memory

func NewGCPWithCredentials

func NewGCPWithCredentials(ctx context.Context, creds *google.Credentials, bucketName string) (*CloudDatastore, error)

NewGCPWithCredentials creates a new CloudDatastore backed by a GCP bucket. The caller must authenticate with GCP separately and provide the credentials. bucketName should be a bare bucket name, e.g. "my-bucket".

func NewS3WithConfig

func NewS3WithConfig(ctx context.Context, bucketName string, cfg aws.Config) (*CloudDatastore, error)

NewS3WithConfig creates a new CloudDatastore with the given bucket name and aws.Config.

func NewWithBucket

func NewWithBucket(bucket *blob.Bucket) *CloudDatastore

NewWithBucket returns a new CloudDatastore based on an existing google/go-cloud blob.Bucket.

func (*CloudDatastore) Batch

func (cds *CloudDatastore) Batch(ctx context.Context) (datastore.Batch, error)

Batch returns a Batch object for grouping operations. This is required for the datastore batching feature.

func (*CloudDatastore) Close

func (cds *CloudDatastore) Close() error

Close closes the Datastore

func (*CloudDatastore) Delete

func (cds *CloudDatastore) Delete(ctx context.Context, key datastore.Key) error

Delete removes a key from the Datastore

func (*CloudDatastore) DiskUsage

func (cds *CloudDatastore) DiskUsage(ctx context.Context) (uint64, error)

DiskUsage returns the total size of the datastore. This is required for the persistent feature. Note that this is not a cheap operation: it iterates over the entire bucket.

func (*CloudDatastore) Get

func (cds *CloudDatastore) Get(ctx context.Context, key datastore.Key) (value []byte, err error)

Get retrieves a value from the Datastore

func (*CloudDatastore) GetSize

func (cds *CloudDatastore) GetSize(ctx context.Context, key datastore.Key) (size int, err error)

GetSize returns the size of the `value` named by `key`. In some contexts, it may be much cheaper to only get the size of the value rather than retrieving the value itself.

func (*CloudDatastore) Has

func (cds *CloudDatastore) Has(ctx context.Context, key datastore.Key) (exists bool, err error)

Has returns whether the `key` is mapped to a `value`.

func (*CloudDatastore) Put

func (cds *CloudDatastore) Put(ctx context.Context, key datastore.Key, value []byte) error

Put stores a value in the Datastore

func (*CloudDatastore) Query

func (cds *CloudDatastore) Query(ctx context.Context, q query.Query) (query.Results, error)

Query searches the Datastore. This implementation does not support filters or orders.

func (*CloudDatastore) Sync

func (cds *CloudDatastore) Sync(ctx context.Context, prefix datastore.Key) error

Sync synchronizes the Datastore. Writes made through Put are persisted when the writer is closed; however, underlying implementations may be eventually consistent.
