Documentation
Overview
Package storage uploads and downloads data from a GCS bucket.
Index
- Constants
- func ConnectToBucket(ctx context.Context, p *ConnectParameters) (*storage.BucketHandle, bool)
- func DeleteObject(ctx context.Context, bucketHandle *storage.BucketHandle, objectName string, ...) error
- func ListObjects(ctx context.Context, bucketHandle *storage.BucketHandle, prefix, filter string, ...) ([]*storage.ObjectAttrs, error)
- type BucketConnector
- type Client
- type ConnectParameters
- type DefaultTokenGetter
- type HTTPClient
- type IOFileCopier
- type JSONCredentialsGetter
- type MultipartWriter
- type ParallelReader
- type ReadWriter
- func (rw *ReadWriter) Download(ctx context.Context) (int64, error)
- func (rw *ReadWriter) NewMultipartWriter(ctx context.Context, newClient HTTPClient, tokenGetter DefaultTokenGetter, ...) (*MultipartWriter, error)
- func (rw *ReadWriter) NewParallelReader(ctx context.Context, decodedKey []byte) (*ParallelReader, error)
- func (rw *ReadWriter) Read(p []byte) (n int, err error)
- func (rw *ReadWriter) Upload(ctx context.Context) (int64, error)
- func (rw *ReadWriter) Write(p []byte) (n int, err error)
Constants
const DefaultChunkSizeMb = 16
DefaultChunkSizeMb provides a default chunk size when uploading data.
const DefaultLogDelay = time.Minute
DefaultLogDelay sets the default upload and download progress logging to once a minute.
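A small sketch of what DefaultChunkSizeMb implies for upload request counts. The helper below is illustrative only (it is not part of the package); it assumes the documented behavior that objects larger than the chunk size are split over multiple requests.

```go
package main

import "fmt"

// DefaultChunkSizeMb mirrors the package constant: 16 MiB per upload request.
const DefaultChunkSizeMb = 16

// chunkCount is a hypothetical helper computing how many chunked requests
// an upload of totalBytes would need at the default chunk size.
func chunkCount(totalBytes int64) int64 {
	chunkBytes := int64(DefaultChunkSizeMb) * 1024 * 1024
	return (totalBytes + chunkBytes - 1) / chunkBytes // ceiling division
}

func main() {
	fmt.Println(chunkCount(4 * 1024 * 1024))   // 4 MiB fits in a single request
	fmt.Println(chunkCount(100 * 1024 * 1024)) // 100 MiB spans multiple requests
}
```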
Variables
This section is empty.
Functions
func ConnectToBucket
func ConnectToBucket(ctx context.Context, p *ConnectParameters) (*storage.BucketHandle, bool)
ConnectToBucket creates the storage client with custom retry logic and attempts to connect to the GCS bucket. Returns false if there is a connection failure (bucket does not exist, invalid credentials, etc.). userAgentSuffix is an optional parameter to set the User-Agent header. verifyConnection requires bucket read access in order to list the bucket's objects.
func DeleteObject
func DeleteObject(ctx context.Context, bucketHandle *storage.BucketHandle, objectName string, maxRetries int64) error
DeleteObject deletes the specified object from the bucket.
func ListObjects
func ListObjects(ctx context.Context, bucketHandle *storage.BucketHandle, prefix, filter string, maxRetries int64) ([]*storage.ObjectAttrs, error)
ListObjects returns all objects in the bucket with the given prefix, sorted by creation time with the most recently created first. The prefix can be empty and can contain multiple folders separated by forward slashes. The optional filter restricts results to object names that contain the filter string. Errors are returned if no handle is defined or if there are iteration issues.
Types
type BucketConnector
type BucketConnector func(ctx context.Context, p *ConnectParameters) (*storage.BucketHandle, bool)
BucketConnector abstracts ConnectToBucket() for unit testing purposes.
type ConnectParameters
type ConnectParameters struct {
	StorageClient    Client
	ServiceAccount   string
	BucketName       string
	UserAgentSuffix  string
	Endpoint         string
	OAuthToken       secret.String
	VerifyConnection bool
	MaxRetries       int64
	UserAgent        string
}
ConnectParameters provides parameters for bucket connection.
type DefaultTokenGetter
DefaultTokenGetter abstracts obtaining a default oauth2 token source.
type HTTPClient
HTTPClient abstracts creating a new HTTP client to connect to GCS.
type IOFileCopier
IOFileCopier abstracts copying data from an io.Reader to an io.Writer.
type JSONCredentialsGetter
JSONCredentialsGetter abstracts obtaining JSON oauth2 google credentials.
type MultipartWriter
type MultipartWriter struct {
// contains filtered or unexported fields
}
MultipartWriter is a writer following GCS multipart upload protocol.
func (*MultipartWriter) Close
func (w *MultipartWriter) Close() error
Close waits for all transfers to complete then generates the final object.
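The wait-then-assemble behavior of Close can be sketched with a sync.WaitGroup. The stdlib-only `partWriter` below is a hypothetical stand-in, not MultipartWriter itself: workers add parts concurrently, and Close blocks until all are done before composing the final object in part-number order, as the multipart protocol requires.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// part pairs a part number with its payload; the final object must be
// assembled in part-number order regardless of completion order.
type part struct {
	number int
	data   []byte
}

// partWriter is an illustrative stand-in for a multipart writer.
type partWriter struct {
	mu    sync.Mutex
	wg    sync.WaitGroup
	parts []part
}

// writePart launches a worker that records one part of the object.
func (w *partWriter) writePart(number int, data []byte) {
	w.wg.Add(1)
	go func() {
		defer w.wg.Done()
		w.mu.Lock()
		w.parts = append(w.parts, part{number, data})
		w.mu.Unlock()
	}()
}

// Close waits for all in-flight parts, then concatenates them by part number.
func (w *partWriter) Close() []byte {
	w.wg.Wait()
	sort.Slice(w.parts, func(i, j int) bool { return w.parts[i].number < w.parts[j].number })
	var object []byte
	for _, p := range w.parts {
		object = append(object, p.data...)
	}
	return object
}

func main() {
	w := &partWriter{}
	w.writePart(2, []byte("world"))
	w.writePart(1, []byte("hello "))
	fmt.Println(string(w.Close())) // hello world
}
```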
type ParallelReader
type ParallelReader struct {
// contains filtered or unexported fields
}
ParallelReader is a reader capable of downloading from GCS in parallel.
func (*ParallelReader) Close
func (r *ParallelReader) Close() error
Close cancels any in progress transfers and performs any necessary clean up.
type ReadWriter
type ReadWriter struct {
	// Reader must be set in order to upload data.
	Reader io.Reader

	// Writer must be set in order to download data.
	Writer io.Writer

	// Abstracted for testing purposes, production users should use io.Copy().
	Copier IOFileCopier

	// ChunkSizeMb sets the max bytes that are written at a time to the bucket during uploads.
	// Objects smaller than the size will be sent in a single request, while larger
	// objects will be split over multiple requests.
	//
	// If ChunkSizeMb is set to zero, chunking will be disabled and the object will
	// be uploaded in a single request without the use of a buffer. This will
	// further reduce memory used during uploads, but will also prevent the writer
	// from retrying in case of a transient error from the server or resuming an
	// upload that fails midway through, since the buffer is required in order to
	// retry the failed request.
	ChunkSizeMb int64

	// Call ConnectToBucket() to generate a bucket handle.
	BucketHandle *storage.BucketHandle

	// The name of the bucket must match the handle.
	BucketName string

	// ObjectName is the destination or source in the bucket for uploading or downloading data.
	ObjectName string

	// Metadata is an optional parameter to add metadata to uploads.
	Metadata map[string]string

	// CustomTime is an optional parameter to set the custom time for uploads.
	CustomTime time.Time

	// ObjectRetentionMode is an optional parameter to set the object retention mode for uploads.
	// Accepted values are "Locked" or "Unlocked".
	ObjectRetentionMode string

	// ObjectRetentionTime is an optional parameter to set the object retention time for uploads.
	ObjectRetentionTime time.Time

	// If TotalBytes is not set, percent completion cannot be calculated when logging progress.
	TotalBytes int64

	// If LogDelay is not set, it will be defaulted to DefaultLogDelay.
	LogDelay time.Duration

	// Compress enables client side compression with gzip for uploads.
	// Downloads will decompress automatically based on the file's content type.
	Compress bool

	// EncryptionKey enables customer-supplied server side encryption with a
	// base64 encoded AES-256 key string.
	// Providing both EncryptionKey and KMSKey will result in an error.
	EncryptionKey string

	// KMSKey enables customer-managed server side encryption with a cloud
	// key management service encryption key.
	// Providing both EncryptionKey and KMSKey will result in an error.
	KMSKey string

	// VerifyUpload ensures the object is in the bucket and bytesWritten matches
	// the object's size in the bucket. Read access on the bucket is required.
	VerifyUpload bool

	// StorageClass sets the storage class for uploads, default is "STANDARD".
	StorageClass string

	// DumpData discards bytes during upload rather than write to the bucket.
	DumpData bool

	// RateLimitBytes caps the maximum number of bytes transferred per second.
	// A default value of 0 prevents rate limiting.
	RateLimitBytes int64

	// MaxRetries sets the maximum amount of retries when executing an API call.
	// An exponential backoff delays each subsequent retry.
	MaxRetries int64

	// RetryBackoffInitial is an optional parameter to set the initial backoff
	// interval. The default value is 10 seconds.
	RetryBackoffInitial time.Duration

	// RetryBackoffMax is an optional parameter to set the maximum backoff
	// interval. The default value is 300 seconds.
	RetryBackoffMax time.Duration

	// RetryBackoffMultiplier is an optional parameter to set the multiplier for
	// the backoff interval. Must be greater than 1 and the default value is 2.
	RetryBackoffMultiplier float64

	// XMLMultipartUpload is an optional parameter to configure the upload to use
	// the XML multipart API rather than the GCS storage client API.
	XMLMultipartUpload bool

	// XMLMultipartWorkers defines the number of workers, or parts, used in the
	// XML multipart upload. XMLMultipartUpload must be set to true to use.
	XMLMultipartWorkers int64

	// XMLMultipartServiceAccount will override application default credentials
	// with a JSON credentials file for authenticating API requests.
	XMLMultipartServiceAccount string

	// XMLMultipartEndpoint will override the default client endpoint used for
	// making requests.
	XMLMultipartEndpoint string

	// ParallelDownloadWorkers defines the number of workers, or parts, used in the
	// parallel reader download. If 0, the download will happen sequentially.
	ParallelDownloadWorkers int64

	// ParallelDownloadConnectParams provides parameters for bucket connection for
	// downloading in parallel restore.
	ParallelDownloadConnectParams *ConnectParameters

	// contains filtered or unexported fields
}
ReadWriter wraps io.Reader and io.Writer to provide progress updates when uploading or downloading files.
func (*ReadWriter) Download
func (rw *ReadWriter) Download(ctx context.Context) (int64, error)
Download reads the data contained in ObjectName in the bucket and writes it to the Writer. Returns bytesWritten and any error due to creating the reader or reading the data.
func (*ReadWriter) NewMultipartWriter
func (rw *ReadWriter) NewMultipartWriter(ctx context.Context, newClient HTTPClient, tokenGetter DefaultTokenGetter, jsonCredentialsGetter JSONCredentialsGetter) (*MultipartWriter, error)
NewMultipartWriter creates a writer and workers for a multipart upload.
func (*ReadWriter) NewParallelReader
func (rw *ReadWriter) NewParallelReader(ctx context.Context, decodedKey []byte) (*ParallelReader, error)
NewParallelReader creates workers with readers for parallel download.
func (*ReadWriter) Read
func (rw *ReadWriter) Read(p []byte) (n int, err error)
Read wraps io.Reader to provide download progress updates.