manager

Documentation

Overview

Package manager provides utilities to upload and download objects from S3 concurrently. Helpful when working with large objects.

Constants

const DefaultDownloadConcurrency = 5

DefaultDownloadConcurrency is the default number of goroutines to spin up when using Download().

const DefaultDownloadPartSize = 1024 * 1024 * 5

DefaultDownloadPartSize is the default range of bytes to get at a time when using Download().

const DefaultPartBodyMaxRetries = 3

DefaultPartBodyMaxRetries is the default number of retries to make when a part fails to download.

const DefaultUploadConcurrency = 5

DefaultUploadConcurrency is the default number of goroutines to spin up when using Upload().

const DefaultUploadPartSize = MinUploadPartSize

DefaultUploadPartSize is the default part size to buffer chunks of a payload into.

const MaxUploadParts int32 = 10000

MaxUploadParts is the maximum allowed number of parts in a multi-part upload on Amazon S3.

const MinUploadPartSize int64 = 1024 * 1024 * 5

MinUploadPartSize is the minimum allowed part size when uploading a part to Amazon S3.
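As a worked illustration of how these constants interact (a sketch, not SDK code): MaxUploadParts together with the object size determines the required part size, which must be rounded up and clamped to MinUploadPartSize. For example, a 100 GiB object cannot be uploaded in 5 MiB parts without exceeding 10,000 parts, so the part size must grow to roughly 10.2 MiB:

	// Sketch: computing the smallest part size that keeps a large upload
	// within the MaxUploadParts limit. The Uploader performs an equivalent
	// calculation internally; objectSize is an assumed input.
	objectSize := int64(100) * 1024 * 1024 * 1024 // 100 GiB

	partSize := objectSize / int64(manager.MaxUploadParts)
	if objectSize%int64(manager.MaxUploadParts) != 0 {
		partSize++ // round up so the final part absorbs the remainder
	}
	if partSize < manager.MinUploadPartSize {
		partSize = manager.MinUploadPartSize
	}
	// partSize is now 10,737,419 bytes (~10.2 MiB)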

Variables

This section is empty.

Functions

func GetBucketRegion

func GetBucketRegion(ctx context.Context, client HeadBucketAPIClient, bucket string, optFns ...func(*s3.Options)) (string, error)

GetBucketRegion will attempt to get the region for a bucket using the client's configured region to determine which AWS partition to perform the query on.

The request will not be signed, and will not use your AWS credentials.

A BucketNotFound error will be returned if the bucket does not exist in the AWS partition the client region belongs to.

For example, to get the region of a bucket that exists in "eu-central-1", you could provide a region hint of "us-west-2".

cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
	log.Println("error:", err)
	return
}

bucket := "my-bucket"
region, err := manager.GetBucketRegion(context.TODO(), s3.NewFromConfig(cfg), bucket)
if err != nil {
	var bnf manager.BucketNotFound
	if errors.As(err, &bnf) {
		fmt.Fprintf(os.Stderr, "unable to find bucket %s's region\n", bucket)
	}
	return
}
fmt.Printf("Bucket %s is in %s region\n", bucket, region)

By default, the request will be made to the Amazon S3 endpoint using virtual-hosted-style addressing.

bucketname.s3.us-west-2.amazonaws.com/

To configure GetBucketRegion to make a request via the Amazon S3 FIPS endpoints directly when a FIPS region name is not available (e.g. fips-us-gov-west-1), set the EndpointResolver on the config or client that the utility is called with.

cfg, err := config.LoadDefaultConfig(context.TODO(),
	config.WithEndpointResolver(
		aws.EndpointResolverFunc(func(service, region string) (aws.Endpoint, error) {
			return aws.Endpoint{URL: "https://s3-fips.us-west-2.amazonaws.com"}, nil
		}),
	),
)
if err != nil {
	panic(err)
}

func WithDownloaderClientOptions

func WithDownloaderClientOptions(opts ...func(*s3.Options)) func(*Downloader)

WithDownloaderClientOptions appends to the Downloader's API client options.

func WithUploaderRequestOptions

func WithUploaderRequestOptions(opts ...func(*s3.Options)) func(*Uploader)

WithUploaderRequestOptions appends to the Uploader's API client options.
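For example, to apply a client option to every API operation made by an uploader (a minimal sketch; client is an existing s3.Client and the option shown is purely illustrative):

	uploader := manager.NewUploader(client,
		manager.WithUploaderRequestOptions(func(o *s3.Options) {
			o.Region = "us-west-2" // illustrative: applied to every API call this uploader makes
		}),
	)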

Types

type BucketNotFound

type BucketNotFound interface {
	error
	// contains filtered or unexported methods
}

BucketNotFound indicates the bucket was not found in the partition when calling GetBucketRegion.

type BufferedReadSeeker

type BufferedReadSeeker struct {
	// contains filtered or unexported fields
}

BufferedReadSeeker is a buffered io.ReadSeeker.

func NewBufferedReadSeeker

func NewBufferedReadSeeker(r io.ReadSeeker, b []byte) *BufferedReadSeeker

NewBufferedReadSeeker returns a new BufferedReadSeeker. If len(b) == 0, the buffer will be initialized to 64 KiB.

func (*BufferedReadSeeker) Read

func (b *BufferedReadSeeker) Read(p []byte) (n int, err error)

Read will read up to len(p) bytes into p and will return the number of bytes read and any error that occurred. If len(p) is greater than the buffer size, a single read request will be issued to the underlying io.ReadSeeker for len(p) bytes. A Read request will at most perform a single Read on the underlying io.ReadSeeker, and may return fewer than len(p) bytes if serviced from the buffer.

func (*BufferedReadSeeker) ReadAt

func (b *BufferedReadSeeker) ReadAt(p []byte, off int64) (int, error)

ReadAt will read up to len(p) bytes at the given file offset. This will result in the buffer being cleared.

func (*BufferedReadSeeker) Seek

func (b *BufferedReadSeeker) Seek(offset int64, whence int) (int64, error)

Seek will position the underlying io.ReadSeeker at the given offset and will clear the buffer.

type BufferedReadSeekerWriteTo

type BufferedReadSeekerWriteTo struct {
	*BufferedReadSeeker
}

BufferedReadSeekerWriteTo wraps a BufferedReadSeeker with an io.WriterTo implementation.

func (*BufferedReadSeekerWriteTo) WriteTo

func (b *BufferedReadSeekerWriteTo) WriteTo(writer io.Writer) (int64, error)

WriteTo writes to the given io.Writer from BufferedReadSeeker until there's no more data to write or an error occurs. Returns the number of bytes written and any error encountered during the write.

type BufferedReadSeekerWriteToPool

type BufferedReadSeekerWriteToPool struct {
	// contains filtered or unexported fields
}

BufferedReadSeekerWriteToPool uses a sync.Pool to create and reuse []byte slices for buffering parts in memory.

func NewBufferedReadSeekerWriteToPool

func NewBufferedReadSeekerWriteToPool(size int) *BufferedReadSeekerWriteToPool

NewBufferedReadSeekerWriteToPool will return a new BufferedReadSeekerWriteToPool that will create a pool of reusable buffers. If size is less than 64 KiB, the buffer will default to 64 KiB. Reason: io.Copy from writers or readers that don't support io.WriteTo or io.ReadFrom respectively will default to copying 32 KiB.

func (*BufferedReadSeekerWriteToPool) GetWriteTo

func (p *BufferedReadSeekerWriteToPool) GetWriteTo(seeker io.ReadSeeker) (r ReadSeekerWriteTo, cleanup func())

GetWriteTo will wrap the provided io.ReadSeeker with a BufferedReadSeekerWriteTo. The provided cleanup function must be called after operations on the returned ReadSeekerWriteTo have completed, in order to signal the return of resources to the pool.
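A sketch of the intended call pattern, assuming file is an io.ReadSeeker and dst an io.Writer obtained elsewhere:

	pool := manager.NewBufferedReadSeekerWriteToPool(1024 * 1024) // pool of 1 MiB buffers

	r, cleanup := pool.GetWriteTo(file)
	// cleanup must run once r is no longer used, returning the buffer to the pool.
	defer cleanup()

	n, err := r.WriteTo(dst)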

type DeleteObjectsAPIClient

type DeleteObjectsAPIClient interface {
	DeleteObjects(context.Context, *s3.DeleteObjectsInput, ...func(*s3.Options)) (*s3.DeleteObjectsOutput, error)
}

DeleteObjectsAPIClient is an S3 API client that can invoke the DeleteObjects operation.

type DownloadAPIClient

type DownloadAPIClient interface {
	GetObject(context.Context, *s3.GetObjectInput, ...func(*s3.Options)) (*s3.GetObjectOutput, error)
}

DownloadAPIClient is an S3 API client that can invoke the GetObject operation.

type Downloader

type Downloader struct {
	// The size (in bytes) to request from S3 for each part.
	// The minimum allowed part size is 5MB, and if this value is set to zero,
	// the DefaultDownloadPartSize value will be used.
	//
	// PartSize is ignored if the Range input parameter is provided.
	PartSize int64

	// PartBodyMaxRetries is the number of retry attempts to make for failed part downloads.
	PartBodyMaxRetries int

	// Logger to send logging messages to
	Logger logging.Logger

	// Enable Logging of part download retry attempts
	LogInterruptedDownloads bool

	// The number of goroutines to spin up in parallel when sending parts.
	// If this is set to zero, the DefaultDownloadConcurrency value will be used.
	//
	// Concurrency of 1 will download the parts sequentially.
	//
	// Concurrency is ignored if the Range input parameter is provided.
	Concurrency int

	// An S3 client to use when performing downloads.
	S3 DownloadAPIClient

	// List of client options that will be passed down to individual API
	// operation requests made by the downloader.
	ClientOptions []func(*s3.Options)

	// Defines the buffer strategy used when downloading a part.
	//
	// If a WriterReadFromProvider is given the Download manager
	// will pass the io.WriterAt of the Download request to the provider
	// and will use the returned WriterReadFrom from the provider as the
	// destination writer when copying from http response body.
	BufferProvider WriterReadFromProvider
}

The Downloader structure that calls Download(). It is safe to call Download() on this structure for multiple objects and across concurrent goroutines. It is not safe to mutate the Downloader's properties concurrently.

func NewDownloader

func NewDownloader(c DownloadAPIClient, options ...func(*Downloader)) *Downloader

NewDownloader creates a new Downloader instance to download objects from S3 in concurrent chunks. Pass in additional functional options to customize the downloader's behavior. Requires a client that satisfies the DownloadAPIClient interface, such as one created by s3.NewFromConfig.

Example:

// Load AWS Config
cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
	panic(err)
}

// Create an S3 client using the loaded configuration
client := s3.NewFromConfig(cfg)

// Create a downloader passing it the S3 client
downloader := manager.NewDownloader(client)

// Create a downloader with the client and custom downloader options
downloader = manager.NewDownloader(client, func(d *manager.Downloader) {
	d.PartSize = 64 * 1024 * 1024 // 64MB per part
})

func (Downloader) Download

func (d Downloader) Download(ctx context.Context, w io.WriterAt, input *s3.GetObjectInput, options ...func(*Downloader)) (n int64, err error)

Download downloads an object in S3 and writes the payload into w using concurrent GET requests. The n int64 returned is the size of the object downloaded in bytes.

The Context must not be nil; a nil Context will cause a panic. Use the Context to add deadlines, timeouts, etc. Download may create sub-contexts for the individual underlying requests.

Additional functional options can be provided to configure the individual download. These options are copies of the Downloader instance Download is called from. Modifying the options will not impact the original Downloader instance. Use the WithDownloaderClientOptions helper function to pass in request options that will be applied to all API operations made with this downloader.

The w io.WriterAt can be satisfied by an os.File for multipart concurrent downloads, or by an in-memory []byte wrapper such as manager.WriteAtBuffer. If you download files into memory, pre-allocate the memory to avoid additional allocations and GC runs.

Example:

// pre-allocate an in-memory buffer, where headObject is a *s3.HeadObjectOutput
buf := make([]byte, int(headObject.ContentLength))
// wrap with a manager.WriteAtBuffer
w := manager.NewWriteAtBuffer(buf)
// download the object into the memory buffer
numBytesDownloaded, err := downloader.Download(ctx, w, &s3.GetObjectInput{
	Bucket: aws.String(bucket),
	Key:    aws.String(item),
})

Specifying a Downloader.Concurrency of 1 will cause the Downloader to download the parts from S3 sequentially.

It is safe to call this method concurrently across goroutines.

If the GetObjectInput's Range value is provided, the downloader will perform a single GetObject request for that object's range. This causes the part size and concurrency configurations to be ignored.
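For the common case of downloading an object to disk, an os.File satisfies io.WriterAt directly (a sketch; the bucket, key, and file name are placeholders):

	f, err := os.Create("local-file")
	if err != nil {
		return err
	}
	defer f.Close()

	n, err := downloader.Download(context.TODO(), f, &s3.GetObjectInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("my-key"),
	})
	if err != nil {
		return err
	}
	fmt.Printf("downloaded %d bytes\n", n)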

type HeadBucketAPIClient

type HeadBucketAPIClient interface {
	HeadBucket(context.Context, *s3.HeadBucketInput, ...func(*s3.Options)) (*s3.HeadBucketOutput, error)
}

HeadBucketAPIClient is an S3 API client that can invoke the HeadBucket operation.

type ListObjectsV2APIClient

type ListObjectsV2APIClient interface {
	ListObjectsV2(context.Context, *s3.ListObjectsV2Input, ...func(*s3.Options)) (*s3.ListObjectsV2Output, error)
}

ListObjectsV2APIClient is an S3 API client that can invoke the ListObjectsV2 operation.

type MultiUploadFailure

type MultiUploadFailure interface {
	error

	// UploadID returns the upload id for the S3 multipart upload that failed.
	UploadID() string
}

A MultiUploadFailure wraps a failed S3 multipart upload. An error returned will satisfy this interface when a multipart upload fails to upload all chunks to S3. In the case of a failure, the UploadID is needed to operate on the chunks, if any, that were uploaded.

Example:

u := manager.NewUploader(client)
_, err := u.Upload(context.Background(), input)
if err != nil {
	var multierr manager.MultiUploadFailure
	if errors.As(err, &multierr) {
		fmt.Printf("upload failure UploadID=%s, %s\n", multierr.UploadID(), multierr.Error())
	} else {
		fmt.Printf("upload failure, %s\n", err.Error())
	}
}

type PooledBufferedReadFromProvider

type PooledBufferedReadFromProvider struct {
	// contains filtered or unexported fields
}

PooledBufferedReadFromProvider is a WriterReadFromProvider that uses a sync.Pool to manage allocation and reuse of *bufio.Writer structures.

func NewPooledBufferedWriterReadFromProvider

func NewPooledBufferedWriterReadFromProvider(size int) *PooledBufferedReadFromProvider

NewPooledBufferedWriterReadFromProvider returns a new PooledBufferedReadFromProvider. Size is used to control the size of the underlying *bufio.Writer created for calls to GetReadFrom.

func (*PooledBufferedReadFromProvider) GetReadFrom

func (p *PooledBufferedReadFromProvider) GetReadFrom(writer io.Writer) (r WriterReadFrom, cleanup func())

GetReadFrom takes an io.Writer and wraps it with a type that satisfies the WriterReadFrom interface. Additionally, a cleanup function is provided which must be called after usage of the WriterReadFrom has completed, in order to allow the reuse of the *bufio.Writer.
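Typical usage wires the provider into a Downloader rather than calling GetReadFrom directly (a sketch; the 5 MiB buffer size is an arbitrary choice):

	downloader := manager.NewDownloader(client, func(d *manager.Downloader) {
		// Buffer each part download through a pooled *bufio.Writer.
		d.BufferProvider = manager.NewPooledBufferedWriterReadFromProvider(5 * 1024 * 1024)
	})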

type ReadSeekerWriteTo

type ReadSeekerWriteTo interface {
	io.ReadSeeker
	io.WriterTo
}

ReadSeekerWriteTo defines an interface combining io.ReadSeeker and io.WriterTo.

type ReadSeekerWriteToProvider

type ReadSeekerWriteToProvider interface {
	GetWriteTo(seeker io.ReadSeeker) (r ReadSeekerWriteTo, cleanup func())
}

ReadSeekerWriteToProvider provides an implementation of io.WriterTo for an io.ReadSeeker.

type ReaderSeekerCloser added in v0.2.0

type ReaderSeekerCloser struct {
	// contains filtered or unexported fields
}

ReaderSeekerCloser represents a reader that can also delegate io.Seeker and io.Closer interfaces to the underlying object if they are available.

func ReadSeekCloser added in v0.2.0

func ReadSeekCloser(r io.Reader) *ReaderSeekerCloser

ReadSeekCloser wraps an io.Reader, returning a ReaderSeekerCloser. This allows the SDK to accept an io.Reader that is not also an io.Seeker for unsigned streaming payload API operations.

A ReaderSeekerCloser wrapping a non-seekable io.Reader used in an API operation's input will prevent that operation from being retried in the case of network errors, and will cause operation requests to fail if the operation requires payload signing.

Note: If used with S3 PutObject to stream an object upload, the SDK's S3 Upload Manager (manager.Uploader) provides support for streaming, with the ability to retry network errors.
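A sketch of wrapping a non-seekable stream, where resp.Body stands in for any plain io.Reader such as an http.Response body:

	// resp.Body is an io.ReadCloser and is not seekable.
	body := manager.ReadSeekCloser(resp.Body)

	// body can now be passed where an io.ReadSeeker is expected, subject to
	// the retry and signing caveats described above.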

func (*ReaderSeekerCloser) Close added in v0.2.0

func (r *ReaderSeekerCloser) Close() error

Close closes the ReaderSeekerCloser.

If the underlying reader is not an io.Closer, nothing will be done.

func (*ReaderSeekerCloser) GetLen added in v0.2.0

func (r *ReaderSeekerCloser) GetLen() (int64, error)

GetLen returns the length of the bytes remaining in the underlying reader. Checks first for Len(), then io.Seeker to determine the size of the underlying reader.

Will return -1 if the length cannot be determined.

func (*ReaderSeekerCloser) HasLen added in v0.2.0

func (r *ReaderSeekerCloser) HasLen() (int, bool)

HasLen returns the length of the underlying reader if the value implements the Len() int method.

func (*ReaderSeekerCloser) IsSeeker added in v0.2.0

func (r *ReaderSeekerCloser) IsSeeker() bool

IsSeeker returns whether the underlying reader is also an io.Seeker.

func (*ReaderSeekerCloser) Read added in v0.2.0

func (r *ReaderSeekerCloser) Read(p []byte) (int, error)

Read reads from the reader up to the size of p. The number of bytes read and any error that occurred will be returned.

If the underlying reader is not an io.Reader, zero bytes read and a nil error will be returned.

Performs the same functionality as io.Reader's Read.

func (*ReaderSeekerCloser) Seek added in v0.2.0

func (r *ReaderSeekerCloser) Seek(offset int64, whence int) (int64, error)

Seek sets the offset for the next Read to offset, interpreted according to whence: 0 means relative to the origin of the file, 1 means relative to the current offset, and 2 means relative to the end. Seek returns the new offset and an error, if any.

If the underlying reader is not an io.Seeker, nothing will be done.

type UploadAPIClient

type UploadAPIClient interface {
	PutObject(context.Context, *s3.PutObjectInput, ...func(*s3.Options)) (*s3.PutObjectOutput, error)
	UploadPart(context.Context, *s3.UploadPartInput, ...func(*s3.Options)) (*s3.UploadPartOutput, error)
	CreateMultipartUpload(context.Context, *s3.CreateMultipartUploadInput, ...func(*s3.Options)) (*s3.CreateMultipartUploadOutput, error)
	CompleteMultipartUpload(context.Context, *s3.CompleteMultipartUploadInput, ...func(*s3.Options)) (*s3.CompleteMultipartUploadOutput, error)
	AbortMultipartUpload(context.Context, *s3.AbortMultipartUploadInput, ...func(*s3.Options)) (*s3.AbortMultipartUploadOutput, error)
}

UploadAPIClient is an S3 API client that can invoke PutObject, UploadPart, CreateMultipartUpload, CompleteMultipartUpload, and AbortMultipartUpload operations.
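Because the Uploader depends only on this interface, a test double can stand in for a real S3 client. A minimal sketch, where mockUploadClient is hypothetical and only PutObject (used for single-part uploads) is overridden:

	type mockUploadClient struct {
		manager.UploadAPIClient // embedded; unoverridden methods satisfy the interface but panic if called
	}

	func (m *mockUploadClient) PutObject(ctx context.Context, in *s3.PutObjectInput, opts ...func(*s3.Options)) (*s3.PutObjectOutput, error) {
		// Return a canned response instead of calling S3.
		return &s3.PutObjectOutput{}, nil
	}

	uploader := manager.NewUploader(&mockUploadClient{})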

type UploadOutput

type UploadOutput struct {
	// The URL where the object was uploaded to.
	Location string

	// The version of the object that was uploaded. Will only be populated if
	// the S3 Bucket is versioned. If the bucket is not versioned this field
	// will not be set.
	VersionID *string

	// The ID for a multipart upload to S3. In the case of an error the error
	// can be cast to the MultiUploadFailure interface to extract the upload ID.
	UploadID string
}

UploadOutput represents a response from the Upload() call.

type Uploader

type Uploader struct {
	// The buffer size (in bytes) to use when buffering data into chunks and
	// sending them as parts to S3. The minimum allowed part size is 5MB, and
	// if this value is set to zero, the DefaultUploadPartSize value will be used.
	PartSize int64

	// The number of goroutines to spin up in parallel per call to Upload when
	// sending parts. If this is set to zero, the DefaultUploadConcurrency value
	// will be used.
	//
	// The concurrency pool is not shared between calls to Upload.
	Concurrency int

	// Setting this value to true will cause the SDK to avoid calling
	// AbortMultipartUpload on a failure, leaving all successfully uploaded
	// parts on S3 for manual recovery.
	//
	// Note that storing parts of an incomplete multipart upload counts towards
	// space usage on S3 and will add additional costs if not cleaned up.
	LeavePartsOnError bool

	// MaxUploadParts is the maximum number of parts which will be uploaded
	// to S3, and is used to calculate the part size of the object to be
	// uploaded. E.g: a 5GB file, with MaxUploadParts set to 100, will be
	// uploaded as 100 parts of 50MB each, subject to the overall limit of
	// MaxUploadParts (10,000 parts).
	//
	// MaxUploadParts must not be used to limit the total number of bytes
	// uploaded. Use a type like io.LimitedReader
	// (https://golang.org/pkg/io/#LimitedReader) instead. An io.LimitedReader
	// is helpful when uploading an unbounded reader to S3, and you know its
	// maximum size. Otherwise the reader's io.EOF returned error must be used
	// to signal end of stream.
	//
	// Defaults to the package constant MaxUploadParts value.
	MaxUploadParts int32

	// The client to use when uploading to S3.
	S3 UploadAPIClient

	// List of request options that will be passed down to individual API
	// operation requests made by the uploader.
	ClientOptions []func(*s3.Options)

	// Defines the buffer strategy used when uploading a part
	BufferProvider ReadSeekerWriteToProvider
	// contains filtered or unexported fields
}

The Uploader structure that calls Upload(). It is safe to call Upload() on this structure for multiple objects and across concurrent goroutines. It is not safe to mutate the Uploader's properties concurrently.

func NewUploader

func NewUploader(client UploadAPIClient, options ...func(*Uploader)) *Uploader

NewUploader creates a new Uploader instance to upload objects to S3. Pass in additional functional options to customize the uploader's behavior. Requires a client that satisfies the UploadAPIClient interface, such as one created by s3.NewFromConfig.

Example:

// Load AWS Config
cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
	panic(err)
}

// Create an S3 Client with the config
client := s3.NewFromConfig(cfg)

// Create an uploader passing it the client
uploader := manager.NewUploader(client)

// Create an uploader with the client and custom options
uploader = manager.NewUploader(client, func(u *manager.Uploader) {
	u.PartSize = 64 * 1024 * 1024 // 64MB per part
})
Example (OverrideReadSeekerProvider)

ExampleNewUploader_overrideReadSeekerProvider gives an example of how a custom ReadSeekerWriteToProvider can be provided to the Uploader to define how parts will be buffered in memory.

package main

import (
	"bytes"
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		panic(err)
	}

	uploader := manager.NewUploader(s3.NewFromConfig(cfg), func(u *manager.Uploader) {
		// Define a strategy that will buffer 25 MiB in memory
		u.BufferProvider = manager.NewBufferedReadSeekerWriteToPool(25 * 1024 * 1024)
	})

	_, err = uploader.Upload(context.TODO(), &s3.PutObjectInput{
		Bucket: aws.String("examplebucket"),
		Key:    aws.String("largeobject"),
		Body:   bytes.NewReader([]byte("large_multi_part_upload")),
	})
	if err != nil {
		panic(err)
	}
}

Example (OverrideTransport)

ExampleNewUploader_overrideTransport gives an example of how to override the default HTTP transport. This can be used to tune timeouts, such as the response header timeout, or the write/read buffer usage of the net/http transport.

package main

import (
	"bytes"
	"context"
	"net/http"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		panic(err)
	}

	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		// Override Default Transport Values
		o.HTTPClient = awshttp.NewBuildableClient().WithTransportOptions(func(tr *http.Transport) {
			tr.ResponseHeaderTimeout = 1 * time.Second
			tr.WriteBufferSize = 1024 * 1024
			tr.ReadBufferSize = 1024 * 1024
		})
	})

	uploader := manager.NewUploader(client)

	_, err = uploader.Upload(context.TODO(), &s3.PutObjectInput{
		Bucket: aws.String("examplebucket"),
		Key:    aws.String("largeobject"),
		Body:   bytes.NewReader([]byte("large_multi_part_upload")),
	})
	if err != nil {
		panic(err)
	}
}

func (Uploader) Upload

func (u Uploader) Upload(ctx context.Context, input *s3.PutObjectInput, opts ...func(*Uploader)) (*UploadOutput, error)

Upload uploads an object to S3, intelligently buffering large files into smaller chunks and sending them in parallel across multiple goroutines. You can configure the buffer size and concurrency through the Uploader parameters.

Additional functional options can be provided to configure the individual upload. These options are copies of the Uploader instance Upload is called from. Modifying the options will not impact the original Uploader instance.

Use the WithUploaderRequestOptions helper function to pass in request options that will be applied to all API operations made with this uploader.

It is safe to call this method concurrently across goroutines.
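For example, one Uploader can be shared by many goroutines (a sketch; keys and openReader are hypothetical):

	var wg sync.WaitGroup
	for _, key := range keys {
		wg.Add(1)
		go func(key string) {
			defer wg.Done()
			_, err := uploader.Upload(context.TODO(), &s3.PutObjectInput{
				Bucket: aws.String("my-bucket"),
				Key:    aws.String(key),
				Body:   openReader(key), // hypothetical helper returning an io.Reader
			})
			if err != nil {
				log.Printf("upload %s failed: %v", key, err)
			}
		}(key)
	}
	wg.Wait()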

type WriteAtBuffer added in v0.2.0

type WriteAtBuffer struct {

	// GrowthCoeff defines the growth rate of the internal buffer. By
	// default, the growth rate is 1, where expanding the internal
	// buffer will allocate only enough capacity to fit the new expected
	// length.
	GrowthCoeff float64
	// contains filtered or unexported fields
}

A WriteAtBuffer provides an in-memory buffer supporting the io.WriterAt interface. Can be used with the manager.Downloader to download content to a buffer in memory. Safe to use concurrently.

func NewWriteAtBuffer added in v0.2.0

func NewWriteAtBuffer(buf []byte) *WriteAtBuffer

NewWriteAtBuffer creates a WriteAtBuffer with an internal buffer provided by buf.

func (*WriteAtBuffer) Bytes added in v0.2.0

func (b *WriteAtBuffer) Bytes() []byte

Bytes returns a slice of bytes written to the buffer.

func (*WriteAtBuffer) WriteAt added in v0.2.0

func (b *WriteAtBuffer) WriteAt(p []byte, pos int64) (n int, err error)

WriteAt writes a slice of bytes to the buffer starting at the position provided. The number of bytes written, or an error, will be returned. Can overwrite previously written slices if the writes overlap.
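Putting the pieces together, a sketch of downloading an object into memory with a Downloader and reading the payload back out (bucket and key are placeholders):

	buf := manager.NewWriteAtBuffer(nil) // grows as parts are written

	_, err := downloader.Download(context.TODO(), buf, &s3.GetObjectInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("my-key"),
	})
	if err != nil {
		return err
	}
	data := buf.Bytes() // the complete object payload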

type WriterReadFrom

type WriterReadFrom interface {
	io.Writer
	io.ReaderFrom
}

WriterReadFrom defines an interface combining io.Writer and io.ReaderFrom.

type WriterReadFromProvider

type WriterReadFromProvider interface {
	GetReadFrom(writer io.Writer) (w WriterReadFrom, cleanup func())
}

WriterReadFromProvider provides an implementation of io.ReaderFrom for the given io.Writer.

Directories

Path Synopsis
internal
