Version: v1.0.3

Published: Jul 6, 2017 License: Apache-2.0 Imports: 31 Imported by: 0




const (
	// S3AccessKeyParam is the query parameter for access_key in an S3 URI.
	S3AccessKeyParam = "AWS_ACCESS_KEY_ID"
	// S3SecretParam is the query parameter for the 'secret' in an S3 URI.
	S3SecretParam = "AWS_SECRET_ACCESS_KEY"

	// AzureAccountNameParam is the query parameter for account_name in an azure URI.
	AzureAccountNameParam = "AZURE_ACCOUNT_NAME"
	// AzureAccountKeyParam is the query parameter for account_key in an azure URI.
	AzureAccountKeyParam = "AZURE_ACCOUNT_KEY"
)
const ExportRequestLimit = 5

ExportRequestLimit is the number of Export requests that can run at once. Each extracts data from RocksDB to a temp file and then uploads it to cloud storage. In order to not exhaust the disk or memory, or saturate the network, limit the number of these that can be run in parallel. This number was chosen by a guess. If SST files are likely to not be over 200MB, then 5 parallel workers hopefully won't use more than 1GB of space in the temp directory. It could be improved by more measured heuristics.
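The limiting scheme described above can be sketched with a buffered-channel semaphore. This is an illustrative sketch, not the package's actual internals; all names here are hypothetical:

```go
package main

import (
	"fmt"
	"sync"
)

// exportRequestLimit mirrors the package's ExportRequestLimit constant.
const exportRequestLimit = 5

// maxObservedParallelism launches n simulated Export requests through a
// buffered-channel semaphore and reports the peak number in flight. The
// channel's capacity caps concurrency: a send blocks once the limit is hit.
func maxObservedParallelism(n int) int {
	sem := make(chan struct{}, exportRequestLimit)
	var mu sync.Mutex
	var wg sync.WaitGroup
	inFlight, peak := 0, 0
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot (blocks at the limit)
			defer func() { <-sem }() // release the slot when done

			mu.Lock()
			inFlight++
			if inFlight > peak {
				peak = inFlight
			}
			mu.Unlock()

			// ... extract to a temp file and upload to cloud storage ...

			mu.Lock()
			inFlight--
			mu.Unlock()
		}()
	}
	wg.Wait()
	return peak
}

func main() {
	fmt.Println(maxObservedParallelism(20) <= exportRequestLimit)
}
```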




func ExportStorageConfFromURI

func ExportStorageConfFromURI(path string) (roachpb.ExportStorage, error)

ExportStorageConfFromURI generates an ExportStorage config from a URI string.
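The scheme-driven dispatch behind such a function can be sketched with net/url; storageConf is a hypothetical stand-in for roachpb.ExportStorage, and the field layout is assumed for illustration:

```go
package main

import (
	"fmt"
	"net/url"
)

// storageConf is a hypothetical stand-in for roachpb.ExportStorage.
type storageConf struct {
	Provider  string
	Bucket    string
	Path      string
	AccessKey string
}

// confFromURI sketches how a provider config can be derived from a URI's
// scheme, host, path, and query parameters.
func confFromURI(uri string) (storageConf, error) {
	u, err := url.Parse(uri)
	if err != nil {
		return storageConf{}, err
	}
	c := storageConf{Bucket: u.Host, Path: u.Path}
	switch u.Scheme {
	case "s3":
		c.Provider = "s3"
		c.AccessKey = u.Query().Get("AWS_ACCESS_KEY_ID")
	case "azure":
		c.Provider = "azure"
	default:
		return storageConf{}, fmt.Errorf("unsupported scheme: %q", u.Scheme)
	}
	return c, nil
}

func main() {
	c, err := confFromURI("s3://backups/2017?AWS_ACCESS_KEY_ID=AKIA")
	fmt.Println(c.Provider, c.Bucket, err)
}
```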

func FetchFile

func FetchFile(
	ctx context.Context, tempPrefix string, e ExportStorage, basename string,
) (string, func(), error)

FetchFile returns the path to a local file containing the content of the requested filename, and a cleanup func to be called when done reading it.

func SanitizeExportStorageURI

func SanitizeExportStorageURI(path string) (string, error)

SanitizeExportStorageURI returns the export storage URI with sensitive credentials stripped.
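Credential stripping can be sketched with net/url. The parameter names reuse the constants above, but the function itself is hypothetical, not the package's implementation:

```go
package main

import (
	"fmt"
	"net/url"
)

// sensitiveParams lists query parameters that carry credentials.
var sensitiveParams = []string{
	"AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AZURE_ACCOUNT_KEY",
}

// sanitizeURI drops credential query parameters from a storage URI so it
// can be logged or displayed safely.
func sanitizeURI(uri string) (string, error) {
	u, err := url.Parse(uri)
	if err != nil {
		return "", err
	}
	q := u.Query()
	for _, p := range sensitiveParams {
		q.Del(p)
	}
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	s, _ := sanitizeURI("s3://backups/2017?AWS_ACCESS_KEY_ID=AKIA&AWS_SECRET_ACCESS_KEY=shh")
	fmt.Println(s)
}
```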


type ExportFileWriter

type ExportFileWriter interface {
	// LocalFile returns the path to a local file to which a caller should write.
	LocalFile() string

	// Finish indicates that no further writes to the local file are expected and
	// that the implementation should store the content (copy it, upload, etc) if
	// that has not already been done in a streaming fashion (e.g. via a pipe).
	Finish(ctx context.Context) error

	// Close removes any temporary files or resources that were made on behalf
	// of this writer. If `Finish` has not been called, any writes to
	// `LocalFile` will be lost. Implementations of `Close` are required to be
	// idempotent and should log any errors.
	Close(ctx context.Context)
}

ExportFileWriter provides a local path, to a file or pipe, that can be written to before calling Finish() to store the written content to an ExportStorage. This caters to non-Go clients (like RocksDB) that want to open and write to a file, rather than just use a Go io.Reader/io.Writer interface.

func MakeExportFileTmpWriter

func MakeExportFileTmpWriter(
	_ context.Context, tempPrefix string, store ExportStorage, name string,
) (ExportFileWriter, error)

MakeExportFileTmpWriter returns an ExportFileWriter backed by a tempfile.

type ExportStorage

type ExportStorage interface {

	// Conf should return the serializable configuration required to reconstruct
	// this ExportStorage implementation.
	Conf() roachpb.ExportStorage

	// ReadFile should return a Reader for the requested name.
	ReadFile(ctx context.Context, basename string) (io.ReadCloser, error)

	// WriteFile should write the content to the requested name.
	WriteFile(ctx context.Context, basename string, content io.ReadSeeker) error

	// Delete removes the named file from the store.
	Delete(ctx context.Context, basename string) error
}

ExportStorage provides functions to read and write files in some storage, such as the various cloud storage providers, for example to store backups.

func MakeExportStorage

func MakeExportStorage(ctx context.Context, dest roachpb.ExportStorage) (ExportStorage, error)

MakeExportStorage creates an ExportStorage from the given config.

type KeyRewriter

type KeyRewriter struct {
	// contains filtered or unexported fields
}

KeyRewriter rewrites old table IDs to new table IDs. It is able to descend into interleaved keys, and is able to function on partial keys for spans and splits.

func MakeKeyRewriter

func MakeKeyRewriter(rekeys []roachpb.ImportRequest_TableRekey) (*KeyRewriter, error)

MakeKeyRewriter creates a KeyRewriter from the Rekeys field of an ImportRequest.

func (*KeyRewriter) RewriteKey

func (kr *KeyRewriter) RewriteKey(key []byte) ([]byte, bool, error)

RewriteKey modifies key similarly to PrefixRewriter.RewriteKey, but is also able to account for interleaved tables. It works by inspecting the key for table and index IDs, then using the corresponding table and index descriptors to determine whether interleaved data is present. If it is, the function finds the next prefix of an interleaved child and calls itself recursively until all interleaved children have been rekeyed.

type PrefixRewrite

type PrefixRewrite struct {
	OldPrefix []byte
	NewPrefix []byte
}

PrefixRewrite holds information for a single byte replacement of a prefix.

type PrefixRewriter

type PrefixRewriter []PrefixRewrite

PrefixRewriter is a matcher for an ordered list of byte-prefix rewrite rules. For dependency reasons, the implementation of the matching is here, but the interesting constructor is in sqlccl.

func (PrefixRewriter) RewriteKey

func (p PrefixRewriter) RewriteKey(key []byte) ([]byte, bool)

RewriteKey modifies key using the first matching rule and returns it. If no rule matches, it returns the original key and false.
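The first-match prefix replacement described above can be sketched as follows (lower-cased stand-in types, not the package's actual code):

```go
package main

import (
	"bytes"
	"fmt"
)

// prefixRewrite mirrors PrefixRewrite: one old-to-new prefix replacement.
type prefixRewrite struct {
	oldPrefix, newPrefix []byte
}

// prefixRewriter mirrors PrefixRewriter: an ordered list of rules.
type prefixRewriter []prefixRewrite

// rewriteKey applies the first rule whose old prefix matches; it returns
// the (possibly rewritten) key and whether any rule matched.
func (p prefixRewriter) rewriteKey(key []byte) ([]byte, bool) {
	for _, r := range p {
		if bytes.HasPrefix(key, r.oldPrefix) {
			// Copy the new prefix, then append the remainder of the key.
			out := append([]byte{}, r.newPrefix...)
			return append(out, key[len(r.oldPrefix):]...), true
		}
	}
	return key, false
}

func main() {
	p := prefixRewriter{{oldPrefix: []byte("/t/51/"), newPrefix: []byte("/t/52/")}}
	k, ok := p.rewriteKey([]byte("/t/51/abc"))
	fmt.Println(string(k), ok)
}
```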
