s3util

package
v0.0.11
Published: Mar 7, 2024 License: Apache-2.0 Imports: 13 Imported by: 1

Documentation

Index

Constants

const (
	// DefaultS3ObjectCopySizeLimit is the max size of an object for a single PUT Object Copy request.
	// As per AWS: https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
	// the max size allowed is 5GB, but we use a smaller size here to speed up large file copies.
	DefaultS3ObjectCopySizeLimit = 256 << 20 // 256MiB

	// DefaultS3MultipartCopyPartSize is the max size of each part when doing a multi-part copy.
	// Note: Though parts can be as large as DefaultS3ObjectCopySizeLimit, for large files
	// using smaller parts (copied concurrently) is much faster.
	DefaultS3MultipartCopyPartSize = 128 << 20 // 128MiB
)

Variables

var (
	// DefaultRetryPolicy is the default retry policy
	DefaultRetryPolicy = retry.MaxRetries(retry.Jitter(retry.Backoff(1*time.Second, time.Minute, 2), 0.25), defaultMaxRetries)
)

Functions

func CtxErr

func CtxErr(ctx context.Context, other error) error

CtxErr will return the context's error (if any) or the other error. This is particularly useful to interpret AWS S3 API call errors because AWS sometimes wraps context errors (context.Canceled or context.DeadlineExceeded).
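
A minimal sketch of how CtxErr might be used to surface a caller-side cancellation rather than a wrapped S3 failure. The import path github.com/grailbio/base/s3util and the helper name are assumptions for illustration:

package example

import (
	"context"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3iface"
	"github.com/grailbio/base/s3util"
)

// headObject is a hypothetical helper that surfaces a caller-side cancellation
// as the context's error rather than as a wrapped S3 failure.
func headObject(ctx context.Context, client s3iface.S3API, bucket, key string) (*s3.HeadObjectOutput, error) {
	out, err := client.HeadObjectWithContext(ctx, &s3.HeadObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		// AWS sometimes wraps context.Canceled or context.DeadlineExceeded;
		// CtxErr returns the context's own error if the context is done, and
		// the original error otherwise.
		return nil, s3util.CtxErr(ctx, err)
	}
	return out, nil
}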

func KindAndSeverity

func KindAndSeverity(err error) (errors.Kind, errors.Severity)

KindAndSeverity interprets a given error and returns errors.Kind and errors.Severity. This is particularly useful to interpret AWS S3 API call errors.

func Severity

func Severity(err error) errors.Severity

Severity interprets a given error and returns errors.Severity. This is particularly useful to interpret AWS S3 API call errors.
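
A sketch of classifying an S3 error with these helpers; the import path github.com/grailbio/base/s3util and the helper name are assumptions for illustration:

package example

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3iface"
	"github.com/grailbio/base/s3util"
)

// deleteObject is a hypothetical helper that logs the kind and severity of a
// failed DeleteObject call before returning the error.
func deleteObject(ctx context.Context, client s3iface.S3API, bucket, key string) error {
	_, err := client.DeleteObjectWithContext(ctx, &s3.DeleteObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		kind, severity := s3util.KindAndSeverity(err)
		log.Printf("DeleteObject s3://%s/%s: kind=%v severity=%v: %v", bucket, key, kind, severity, err)
		// Severity is a shorthand when only the severity matters, e.g. to
		// decide whether the call is worth retrying.
		_ = s3util.Severity(err)
	}
	return err
}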

Types

type Copier

type Copier struct {

	// S3ObjectCopySizeLimit is the max size of an object for a single PUT Object Copy request.
	S3ObjectCopySizeLimit int64
	// S3MultipartCopyPartSize is the max size of each part when doing a multi-part copy.
	S3MultipartCopyPartSize int64

	Debugger
	// contains filtered or unexported fields
}

func NewCopier

func NewCopier(client s3iface.S3API) *Copier

func NewCopierWithParams

func NewCopierWithParams(client s3iface.S3API, retrier retry.Policy, s3ObjectCopySizeLimit int64, s3MultipartCopyPartSize int64, debugger Debugger) *Copier

func (*Copier) Copy

func (c *Copier) Copy(ctx context.Context, srcUrl, dstUrl string, srcSize int64, dstMetadata map[string]*string) error

Copy copies the S3 object from srcUrl to dstUrl (both expected to be full S3 URLs). The size of the source object (srcSize) determines whether the copy is done as a single or multi-part copy.

dstMetadata must be set if the caller wishes to set metadata on the dstUrl object. The AWS API copies the metadata over when a single CopyObject is used, but NOT when a multi-part copy is done; to remove this ambiguity, this method requires that dstMetadata always be provided explicitly. So if metadata is desired on the dstUrl object, *it must always be provided*.
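
A minimal sketch of a copy call, assuming the package import path github.com/grailbio/base/s3util; the URLs, size, and metadata below are illustrative, and srcSize would normally come from a prior HeadObject on the source:

package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/grailbio/base/s3util"
)

func main() {
	sess := session.Must(session.NewSession())
	copier := s3util.NewCopier(s3.New(sess))

	// srcSize determines whether Copy issues a single CopyObject request or a
	// concurrent multi-part copy (here 512MiB exceeds the 256MiB default
	// limit, so a multi-part copy is used).
	srcSize := int64(512 << 20)
	dstMetadata := map[string]*string{"origin": aws.String("s3util-example")}

	err := copier.Copy(context.Background(),
		"s3://src-bucket/path/to/object",
		"s3://dst-bucket/path/to/object",
		srcSize, dstMetadata)
	if err != nil {
		log.Fatal(err)
	}
}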

type Debugger

type Debugger interface {
	Debugf(format string, args ...interface{})
}
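
Any type with a printf-style Debugf method satisfies Debugger. A sketch that forwards debug output to the standard logger and wires it in through NewCopierWithParams; the import paths and the retry count of 3 are assumptions (the package's own default count is unexported), while the size limits reuse the exported defaults:

package example

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/service/s3/s3iface"
	"github.com/grailbio/base/retry"
	"github.com/grailbio/base/s3util"
)

// logDebugger is an illustrative Debugger backed by the standard log package.
type logDebugger struct{}

func (logDebugger) Debugf(format string, args ...interface{}) {
	log.Printf("s3util: "+format, args...)
}

// newVerboseCopier builds a Copier that logs its debug output. The retry
// policy mirrors DefaultRetryPolicy except that the maximum retry count is
// spelled out explicitly here.
func newVerboseCopier(client s3iface.S3API) *s3util.Copier {
	policy := retry.MaxRetries(retry.Jitter(retry.Backoff(1*time.Second, time.Minute, 2), 0.25), 3)
	return s3util.NewCopierWithParams(client, policy,
		s3util.DefaultS3ObjectCopySizeLimit,
		s3util.DefaultS3MultipartCopyPartSize,
		logDebugger{})
}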
