sql


Documentation

Overview

Package sql provides the user-facing API for access to a Cockroach datastore. As the name suggests, the API is based around SQL, the same SQL you find in traditional RDBMS systems like Oracle, MySQL or Postgres. The core Cockroach system implements a distributed, transactional, monolithic sorted key-value map. The sql package builds on top of this core system (provided by the storage and kv packages), adding parsing, query planning and query execution, as well as defining the privilege model.

Databases and Tables

The two primary objects are databases and tables. A database is a namespace which holds a series of tables. Conceptually, a database can be viewed as a directory in a filesystem plus some additional metadata. A table is like a file on steroids: it contains a structured layout of rows and columns along with secondary indexes.

Like a directory, a database has a name and some metadata. The metadata is defined by the DatabaseDescriptor:

message DatabaseDescriptor {
  optional string name;
  optional uint32 id;
  optional PrivilegeDescriptor privileges;
}

As you can see, currently the metadata we store for databases just consists of privileges.

Similarly, tables have a TableDescriptor:

message TableDescriptor {
  optional string name;
  optional uint32 id;
  repeated ColumnDescriptor columns;
  optional IndexDescriptor primary_index;
  repeated IndexDescriptor indexes;
  optional PrivilegeDescriptor privileges;
}

Both the database ID and the table ID are allocated from the same "ID space" and IDs are never reused.

The namespace in which databases and tables exist contains only two levels: the root level contains databases and the database level contains tables. The "system.namespace" and "system.descriptor" tables implement the mapping from database/table name to ID and from ID to descriptor:

CREATE TABLE system.namespace (
  "parentID" INT,
  "name"     CHAR,
  "id"       INT,
  PRIMARY KEY ("parentID", "name")
);

CREATE TABLE system.descriptor (
  "id"         INT PRIMARY KEY,
  "descriptor" BLOB
);

The ID 0 is a reserved ID used for the "root" of the namespace in which the databases reside. In order to look up the ID of a database given its name, the system runs the underlying key-value operations that correspond to the following query:

SELECT id FROM system.namespace WHERE parentID = 0 AND name = <database-name>

And given a database/table ID, the system looks up the descriptor using the following query:

SELECT descriptor FROM system.descriptor WHERE id = <ID>
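To make the resolution concrete, here is a small self-contained Go sketch of the two-step lookup, using in-memory maps to stand in for the two system tables (the table IDs 50 and 51 are invented for the example, not real assignments):

package main

import "fmt"

// nsKey mirrors the (parentID, name) primary key of system.namespace.
type nsKey struct {
	parentID uint32
	name     string
}

// In-memory stand-ins for the system tables described above.
var namespace = map[nsKey]uint32{}
var descriptors = map[uint32]string{}

const rootID = 0 // reserved ID for the root of the namespace

func main() {
	// Seed: database "test" (ID 50) containing table "inventory" (ID 51).
	namespace[nsKey{rootID, "test"}] = 50
	namespace[nsKey{50, "inventory"}] = 51
	descriptors[51] = "<TableDescriptor for inventory>"

	// SELECT id FROM system.namespace WHERE parentID = 0 AND name = 'test'
	dbID := namespace[nsKey{rootID, "test"}]
	// SELECT id FROM system.namespace WHERE parentID = 50 AND name = 'inventory'
	tableID := namespace[nsKey{dbID, "inventory"}]
	// SELECT descriptor FROM system.descriptor WHERE id = 51
	fmt.Println(descriptors[tableID])
}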

Let's also create two new tables to use as running examples, one relatively simple, and one a little more complex. The first table is just a list of stores, with a "store_id" primary key that is an automatically incremented unique integer (the "SERIAL" datatype) and a name.

CREATE DATABASE test;
SET DATABASE TO test;

CREATE TABLE stores (
  "store_id" SERIAL PRIMARY KEY,
  "name" CHAR UNIQUE
);

The second table is an inventory of items, where each item is stocked at a particular store and references that store via a foreign key:

CREATE TABLE inventory (
  "item_id" INT UNIQUE,
  "name" CHAR UNIQUE,
  "at_store" INT,
  "stock" INT,
  PRIMARY KEY (item_id, at_store),
  CONSTRAINT at_store_fk FOREIGN KEY (at_store) REFERENCES stores (store_id)
);

Primary Key Addressing

All of the SQL data stored in tables is mapped down to individual keys and values. We call the exact mapping from a table row to one or more key-value pairs "key addressing". Cockroach's key addressing relies upon a primary key, and thus all tables have a primary key, whether explicitly listed in the schema or automatically generated. Note that the notion of a "primary key" refers to the primary key in the SQL sense, and is unrelated to the "key" in Cockroach's underlying key-value pairs.

Primary keys consist of one or more non-NULL columns from the table. For a given row of the table, the columns for the primary key are encoded into a single string. For example, the primary key columns of our inventory table would be encoded as:

/item_id/at_store

[Note that "/" is being used to disambiguate the components of the key. The actual encodings do not use the "/" character; they are implemented in the util/encoding package. These encoding routines allow for the encoding of NULL values, integers, floating point numbers and strings such that the lexicographic ordering of the encoded strings corresponds to the ordering of the unencoded data.]
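To make the ordering property concrete, here is a tiny self-contained Go demonstration of one order-preserving scheme, big-endian integer encoding (a simplification; the real routines in util/encoding also handle NULLs, floats and strings):

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// encodeUint64 returns a big-endian encoding of v. Big-endian byte order is
// order-preserving: if a < b, the encoding of a sorts before the encoding of b.
func encodeUint64(v uint64) []byte {
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], v)
	return buf[:]
}

func main() {
	a, b := encodeUint64(41), encodeUint64(42)
	fmt.Println(bytes.Compare(a, b) < 0) // true: encoded order matches numeric order
}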

Before being stored in the monolithic key-value space, the encoded primary key columns are prefixed with the table ID and an ID indicating that the key corresponds to the primary index. The prefix for the inventory table looks like this:

/TableID/PrimaryIndexID/item_id/at_store

Each column value is stored in a key with that prefix. Every column has a unique ID (local to the table), and the value of each cell is stored at the key:

/TableID/PrimaryIndexID/item_id/at_store/ColumnID -> ColumnValue

Thus, a scan over the range

[/TableID/PrimaryIndexID/item_id/at_store,
/TableID/PrimaryIndexID/item_id/at_storf)

where the abuse of notation "at_storf" in the end key refers to the key resulting from incrementing the final byte of the start key, retrieves every cell of the row. As an optimization, we do not store NULL column values, and the rows returned from the above scan still give us enough information to construct the entire row. However, a row that has exclusively NULL values in its non-primary-key columns would have nothing stored at all. Thus, to note the existence of a row with only a primary key and remaining NULLs, every row also has a sentinel key indicating its existence. The sentinel key is simply the primary index key, with an empty value:

/TableID/PrimaryIndexID/item_id/at_store -> <empty>

Thus the above scan on such a row would return a single key, which we can use to reconstruct the row, filling in NULLs for the non-primary-key values.
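As a worked example of the layout, the following self-contained Go sketch prints the keys for one inventory row, using the same "/"-separated pseudo-notation. The IDs are invented for illustration: table ID 51, primary index ID 1, and column IDs 2 and 4 for "name" and "stock".

package main

import "fmt"

// Invented IDs for illustration only.
const (
	tableID    = 51
	primaryIdx = 1
	nameColID  = 2
	stockColID = 4
)

func main() {
	item, store := 10, 2 // primary key values (item_id, at_store)

	// Sentinel key marking the row's existence, with an empty value.
	fmt.Printf("/%d/%d/%d/%d -> <empty>\n", tableID, primaryIdx, item, store)
	// One key per non-NULL, non-primary-key column.
	fmt.Printf("/%d/%d/%d/%d/%d -> %q\n", tableID, primaryIdx, item, store, nameColID, "glove")
	fmt.Printf("/%d/%d/%d/%d/%d -> %d\n", tableID, primaryIdx, item, store, stockColID, 7)
}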

Column Families

The above structure is inefficient if we have many columns, since each row in an N-column table results in up to N+1 entries (1 sentinel key + N keys if every column was non-NULL). Thus, Cockroach has the ability to group multiple columns together and write them as a single key-value pair. We call this a "column family", and there are more details in this blog post: https://www.cockroachlabs.com/blog/sql-cockroachdb-column-families/

Secondary Indexes

Despite not being a formal part of the SQL standard, secondary indexes are one of SQL's most powerful features. Secondary indexes are a level of indirection that allow quick lookups of a row using something other than the primary key. As an example, here is a secondary index on the "inventory" table, using only the "name" column:

CREATE INDEX name ON inventory (name);

This secondary index allows fast lookups based on just the "name". We use the following key addressing scheme for this non-unique index:

/TableID/SecondaryIndexID/name/item_id/at_store -> <empty>

Notice that while the index is on "name", the key contains both "name" and the values for item_id and at_store. This is done to ensure that each row for a table has a unique key for the non-unique index. In general, in order to guarantee that a non-unique index is unique, we encode the index's columns followed by any primary key columns that have not already been mentioned. Since the primary key must uniquely define a row, this transforms any non-unique index into a unique index.
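Here is a small self-contained Go sketch of that suffixing rule (the "/"-separated pseudo-keys and column names follow the notation above; nothing here is this package's actual API):

package main

import (
	"fmt"
	"strings"
)

// indexKey builds a pseudo-key for a non-unique index by appending any
// primary key columns not already present among the indexed columns.
func indexKey(prefix string, indexed, primary []string) string {
	parts := append([]string{prefix}, indexed...)
	seen := make(map[string]bool)
	for _, c := range indexed {
		seen[c] = true
	}
	for _, c := range primary {
		if !seen[c] { // only primary key columns missing from the index
			parts = append(parts, c)
		}
	}
	return strings.Join(parts, "/")
}

func main() {
	// Index on "name"; the primary key is (item_id, at_store).
	fmt.Println(indexKey("/TableID/SecondaryIndexID",
		[]string{"name"}, []string{"item_id", "at_store"}))
	// Output: /TableID/SecondaryIndexID/name/item_id/at_store
}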

Let's suppose that we had instead defined the index as:

CREATE UNIQUE INDEX name ON inventory (name, item_id);

Since this index is defined on creation as a unique index, we do not need to append the rest of the primary key columns to ensure uniqueness; instead, any insertion of a row into the table that would result in a duplication in the index will fail (and if there already are duplicates upon creation, the index creation itself will fail). However, we still need to be able to decode the full primary key by reading this index, as we will see later, in order to read any columns that are not in this index:

SELECT at_store FROM inventory WHERE name = 'foo';

The solution is to put any remaining primary key columns into the value. Thus, the key addressing for this unique index looks like this:

/TableID/SecondaryIndexID/name/item_id -> at_store

The value for a unique index is composed of any primary key columns that are not already part of the index ("at_store" in this example). The goal of this key addressing scheme is to ensure that the primary key is fully specified by the key-value pair, and that the key portion is unique. However, any lookup of a column that is in neither the primary key nor the index requires two reads: first to decode the primary key, and then to read the full row for that primary key, which contains all the columns. For instance, to read the value of the "stock" column in this table:

SELECT stock FROM inventory WHERE name = 'foo';

Looking this up by the index on "name" does not give us the value of the "stock" column. Instead, to process this query, Cockroach does two key-value reads, which are morally equivalent to the following two SQL queries:

SELECT item_id, at_store FROM inventory WHERE name = 'foo';

Then we use the values for the primary key that we received from the first query to perform the lookup:

SELECT stock FROM inventory WHERE item_id = <item_id> AND at_store = <at_store>;
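Here is a self-contained Go sketch of those two reads, with in-memory maps standing in for the secondary index and the primary index (all values invented):

package main

import "fmt"

// pk mirrors the (item_id, at_store) primary key of the inventory table.
type pk struct{ itemID, atStore int }

// In-memory stand-ins for the two indexes.
var nameIndex = map[string]pk{"foo": {10, 2}} // index on "name" -> primary key
var stockByPK = map[pk]int{{10, 2}: 7}        // primary index -> "stock" column

func main() {
	// Read 1: the secondary index yields the full primary key.
	key := nameIndex["foo"]
	// Read 2: the primary index yields the remaining "stock" column.
	fmt.Println(stockByPK[key]) // 7
}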

Query Planning and Execution

SQL queries are executed by converting every SQL query into a set of transactional key-value operations. The Cockroach distributed transactional key-value store provides a few operations, of which we shall discuss execution using two important ones: conditional puts, and ordered scans.

Query planning is the system which takes a parsed SQL statement (described by an abstract syntax tree) and creates an execution plan which is itself a tree of operations. The execution tree consists of leaf nodes that are SCANs and PUTs, and internal nodes that consist of operations such as join, group by, sort, or projection. For the bulk of SQL statements, query planning is straightforward: the complexity lies in SELECT.

At one end of the performance spectrum, an implementation of SELECT can be straightforward: do a full scan of the (joined) tables in the FROM clause, filter rows based on the WHERE clause, group the resulting rows based on the GROUP BY clause, filter those rows using the HAVING clause, and sort the remaining rows using the ORDER BY clause. There are a number of steps, but they all have well defined semantics and are mostly just an exercise in software engineering: retrieve the rows as quickly as possible and then send them through the pipeline of filtering, grouping, filtering and sorting.

However, this naive execution plan would have poor performance if the initial scans return large amounts of data: if we scan orders of magnitude more data than we need, only to discard the vast majority of rows while filtering, this is needlessly inefficient. Instead, the query planner attempts to take advantage of secondary indexes to limit the data retrieved by the leaves. Additionally, the query planner makes joins between tables faster by taking advantage of the different sort orders of various secondary indexes, avoiding re-sorting (or taking advantage of partial sorts to limit the amount of sorting done). As query planning is under active development, the details of how we implement this are in flux and will continue to be in flux for the foreseeable future. This section is intended to provide a high-level overview of a few of the techniques involved.

For a SELECT query, after parsing it, the query planner performs semantic analysis to verify that the query obeys basic type-safety checks, and to resolve names within the query to actual objects within the system. Let's consider a query which looks up the stock of an item in the inventory table named 'test' with item_id X:

SELECT stock FROM inventory WHERE item_id = X AND name = 'test'

The query planner first needs to resolve the "inventory" qualified name in the FROM clause to the appropriate TableDescriptor. It also needs to resolve the "item_id", "stock" and "name" column references to the appropriate column descriptors within the "inventory" TableDescriptor. Lastly, as part of semantic analysis, the query planner verifies that the expressions in the select targets and the WHERE clause are valid (e.g. that the WHERE clause evaluates to a boolean).

From that starting point, the query planner then analyzes the GROUP BY and ORDER BY clauses, adding "hidden" targets for expressions used in those clauses that are not explicit targets of the query. Our example query does not have any GROUP BY or ORDER BY clauses, so we move straight to the next step: index selection. Index selection is the stage where the query planner selects the best index to scan and selects the start and end keys that minimize the amount of scanned data. Depending on the complexity of the query, the query planner might even select multiple ranges to scan from an index or multiple ranges from different indexes.

How does the query planner decide which index to use and which range of the index to scan? We currently use a restricted form of value propagation in order to determine the range of possible values for columns referenced in the WHERE clause. Using this range information, each index is examined to determine if it is a potential candidate and ranked according to its specificity. In addition to ranking indexes by the column value range information, they are ranked by how well they match the sorting required by the ORDER BY clause. A more detailed description is here: https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/. Returning to the example above, the range information would determine that:

item_id >= X AND item_id <= X AND name >= 'test' AND name <= 'test'

Since there are two indexes on the "inventory" table, a non-unique index on "name" and a unique index on ("name", "item_id"), the latter is selected as the candidate for performing a scan. To perform this scan, we need a start key (inclusive) and an end key (exclusive). The start key is computed using the SecondaryIndexID of the chosen index and the constraints from the range information above:

/inventory/SecondaryIndexID/name/item_id

The end key is:

/inventory/SecondaryIndexID/name/item_ie

The "item_ie" suffix is not a typo: it is an abuse of notation to demonstrate how we calculate the end key: the end key is computed by incrementing the final byte of the start key, such that "d" becomes "e".

Since every column of this unique index is constrained to a single value, our example scan will return at most one key-value pair:

/inventory/SecondaryIndexID/name/item_id -> at_store

The key contains the values for "name" and "item_id", and the value contains the remaining primary key column "at_store". Together they fully specify the primary key, which is then used to read the "stock" column from the primary index; that value is the result we need to return for this SQL query.


Constants

const (
	// PgServerVersion is the latest version of postgres that we claim to support.
	PgServerVersion = "9.5.0"
)

const TableTruncateChunkSize = indexTruncateChunkSize

TableTruncateChunkSize is the maximum number of keys deleted per chunk during a table truncation.

Variables

var (
	MetaTxnBegin           = metric.Metadata{Name: "sql.txn.begin.count"}
	MetaTxnCommit          = metric.Metadata{Name: "sql.txn.commit.count"}
	MetaTxnAbort           = metric.Metadata{Name: "sql.txn.abort.count"}
	MetaTxnRollback        = metric.Metadata{Name: "sql.txn.rollback.count"}
	MetaSelect             = metric.Metadata{Name: "sql.select.count"}
	MetaSQLExecLatency     = metric.Metadata{Name: "sql.exec.latency"}
	MetaDistSQLSelect      = metric.Metadata{Name: "sql.distsql.select.count"}
	MetaDistSQLExecLatency = metric.Metadata{Name: "sql.distsql.exec.latency"}
	MetaUpdate             = metric.Metadata{Name: "sql.update.count"}
	MetaInsert             = metric.Metadata{Name: "sql.insert.count"}
	MetaDelete             = metric.Metadata{Name: "sql.delete.count"}
	MetaDdl                = metric.Metadata{Name: "sql.ddl.count"}
	MetaMisc               = metric.Metadata{Name: "sql.misc.count"}
	MetaQuery              = metric.Metadata{Name: "sql.query.count"}
)

Fully-qualified names for metrics.

var (
	// LeaseDuration is the mean duration a lease will be acquired for. The
	// actual duration is jittered in the range
	// [0.75,1.25]*LeaseDuration. Exported for testing purposes only.
	LeaseDuration = 5 * time.Minute
	// MinLeaseDuration is the minimum duration a lease will have remaining upon
	// acquisition. Exported for testing purposes only.
	MinLeaseDuration = time.Minute
)

var (
	// SchemaChangeLeaseDuration is the duration a lease will be acquired for.
	// Exported for testing purposes only.
	SchemaChangeLeaseDuration = 5 * time.Minute
	// MinSchemaChangeLeaseDuration is the minimum duration a lease will have
	// remaining upon acquisition. Exported for testing purposes only.
	MinSchemaChangeLeaseDuration = time.Minute
)

var NilVirtualTabler nilVirtualTabler

NilVirtualTabler implements VirtualTabler that returns nil.

Functions

func AddPlanHook

func AddPlanHook(f planHookFn)

AddPlanHook adds a hook used to short-circuit creating a planNode from a parser.Statement. If the func returned by the hook is non-nil, it is used to construct a planNode that runs that func during Start.

func CreateTestTableDescriptor

func CreateTestTableDescriptor(
	parentID, id sqlbase.ID, schema string, privileges *sqlbase.PrivilegeDescriptor,
) (sqlbase.TableDescriptor, error)

CreateTestTableDescriptor converts a SQL string to a table for test purposes. Will fail on complex tables where that operation requires e.g. looking up other tables or otherwise utilizing a planner, since the planner used here is just a zero value placeholder.

func EvalAsOfTimestamp

func EvalAsOfTimestamp(
	evalCtx *parser.EvalContext, asOf parser.AsOfClause, max hlc.Timestamp,
) (hlc.Timestamp, error)

EvalAsOfTimestamp evaluates and returns the timestamp from an AS OF SYSTEM TIME clause.

func GenerateInsertRow

func GenerateInsertRow(
	defaultExprs []parser.TypedExpr,
	insertColIDtoRowIndex map[sqlbase.ColumnID]int,
	insertCols []sqlbase.ColumnDescriptor,
	evalCtx parser.EvalContext,
	tableDesc *sqlbase.TableDescriptor,
	rowVals parser.Datums,
) (parser.Datums, error)

GenerateInsertRow prepares a row tuple for insertion. It fills in default expressions, verifies non-nullable columns, and checks column widths.

func GenerateUniqueDescID

func GenerateUniqueDescID(txn *client.Txn) (sqlbase.ID, error)

GenerateUniqueDescID returns the next available Descriptor ID and increments the counter.

func GetKeysForTableDescriptor

func GetKeysForTableDescriptor(
	tableDesc *sqlbase.TableDescriptor,
) (zoneKey roachpb.Key, nameKey roachpb.Key, descKey roachpb.Key)

GetKeysForTableDescriptor retrieves the KV keys corresponding to the zone, name and descriptor of a table.

func GetTableDesc

func GetTableDesc(cfg config.SystemConfig, id sqlbase.ID) (*sqlbase.TableDescriptor, error)

GetTableDesc returns the table descriptor for the table with 'id'. Returns nil if the descriptor is not present, or is present but is not a table.

func GetUserHashedPassword

func GetUserHashedPassword(
	ctx context.Context, executor *Executor, metrics *MemoryMetrics, username string,
) ([]byte, error)

GetUserHashedPassword returns the hashedPassword for the given username if found in system.users.

func GetZoneConfig

func GetZoneConfig(cfg config.SystemConfig, id uint32) (config.ZoneConfig, bool, error)

GetZoneConfig returns the zone config for the object with 'id'.

func MakeTableDesc

func MakeTableDesc(
	txn *client.Txn,
	vt VirtualTabler,
	searchPath parser.SearchPath,
	n *parser.CreateTable,
	parentID, id sqlbase.ID,
	privileges *sqlbase.PrivilegeDescriptor,
	affected map[sqlbase.ID]*sqlbase.TableDescriptor,
	sessionDB string,
) (sqlbase.TableDescriptor, error)

MakeTableDesc creates a table descriptor from a CreateTable statement.

func MustGetDatabaseDesc

func MustGetDatabaseDesc(
	txn *client.Txn, vt VirtualTabler, name string,
) (*sqlbase.DatabaseDescriptor, error)

MustGetDatabaseDesc looks up the database descriptor given its name, returning an error if the descriptor is not found.

func ProcessDefaultColumns

func ProcessDefaultColumns(
	cols []sqlbase.ColumnDescriptor,
	tableDesc *sqlbase.TableDescriptor,
	parse *parser.Parser,
	evalCtx *parser.EvalContext,
) ([]sqlbase.ColumnDescriptor, []parser.TypedExpr, error)

ProcessDefaultColumns adds columns with DEFAULT to cols if not present and returns the defaultExprs for cols.

func SetDefaultDistSQLMode

func SetDefaultDistSQLMode(mode string) func()

SetDefaultDistSQLMode changes the default DistSQL mode; returns a function that can be used to restore the previous mode.

func SetTxnTimestamps

func SetTxnTimestamps(txn *client.Txn, ts hlc.Timestamp)

SetTxnTimestamps sets the transaction's proto timestamps and deadline to ts. This is for use with AS OF queries, and should be called in the retry block (except in the case of prepare which doesn't use retry). The deadline-checking code checks that the `Timestamp` field of the proto hasn't exceeded the deadline. Since we set the Timestamp field each retry, it won't ever exceed the deadline, and thus setting the deadline here is not strictly needed. However, it doesn't do anything incorrect and it will possibly find problems if things change in the future, so it is left in.

func TestDisableTableLeases

func TestDisableTableLeases() func()

TestDisableTableLeases disables table leases and returns a function that can be used to enable it.

Types

type AuthorizationAccessor

type AuthorizationAccessor interface {
	// CheckPrivilege verifies that the user has `privilege` on `descriptor`.
	CheckPrivilege(
		descriptor sqlbase.DescriptorProto, privilege privilege.Kind,
	) error

	// RequireSuperUser errors unless the session user is the super-user
	// (i.e. root). Includes the named action in the error message.
	RequireSuperUser(action string) error
	// contains filtered or unexported methods
}

AuthorizationAccessor for checking authorization (e.g. desc privileges).

type CopyDataBlock

type CopyDataBlock struct {
	Done bool
}

CopyDataBlock represents a data block of a COPY FROM statement.

func (CopyDataBlock) Format

func (CopyDataBlock) Format(buf *bytes.Buffer, f parser.FmtFlags)

Format implements the NodeFormatter interface.

func (CopyDataBlock) StatementTag

func (CopyDataBlock) StatementTag() string

StatementTag returns a short string identifying the type of statement.

func (CopyDataBlock) StatementType

func (CopyDataBlock) StatementType() parser.StatementType

StatementType implements the Statement interface.

func (CopyDataBlock) String

func (CopyDataBlock) String() string

type DatabaseAccessor

type DatabaseAccessor interface {
	// contains filtered or unexported methods
}

DatabaseAccessor provides helper methods for using SQL database descriptors.

type DependencyAnalyzer

type DependencyAnalyzer interface {
	// Independent determines if the provided planNodes are independent from one
	// another. Implementations of Independent are always commutative.
	Independent(*planNode, *planNode) bool
}

DependencyAnalyzer determines if plans are independent of one another, where independent plans are defined by whether their execution could be safely reordered without having an effect on their runtime semantics or on their results. This means that DependencyAnalyzer can be used to test whether it is safe for multiple statements to be run concurrently by the PipelineQueue.

var NoDependenciesAnalyzer DependencyAnalyzer = dependencyAnalyzerFunc(func(
	_ *planNode, _ *planNode,
) bool {
	return true
})

NoDependenciesAnalyzer is a DependencyAnalyzer that performs no analysis on planNodes and asserts that all plans are independent.

type DescriptorAccessor

type DescriptorAccessor interface {
	// contains filtered or unexported methods
}

DescriptorAccessor provides helper methods for using descriptors to SQL objects.

type EventLogType

type EventLogType string

EventLogType represents an event type that can be recorded in the event log.

const (
	// EventLogCreateDatabase is recorded when a database is created.
	EventLogCreateDatabase EventLogType = "create_database"
	// EventLogDropDatabase is recorded when a database is dropped.
	EventLogDropDatabase EventLogType = "drop_database"

	// EventLogCreateTable is recorded when a table is created.
	EventLogCreateTable EventLogType = "create_table"
	// EventLogDropTable is recorded when a table is dropped.
	EventLogDropTable EventLogType = "drop_table"
	// EventLogAlterTable is recorded when a table is altered.
	EventLogAlterTable EventLogType = "alter_table"

	// EventLogCreateIndex is recorded when an index is created.
	EventLogCreateIndex EventLogType = "create_index"
	// EventLogDropIndex is recorded when an index is dropped.
	EventLogDropIndex EventLogType = "drop_index"

	// EventLogCreateView is recorded when a view is created.
	EventLogCreateView EventLogType = "create_view"
	// EventLogDropView is recorded when a view is dropped.
	EventLogDropView EventLogType = "drop_view"

	// EventLogReverseSchemaChange is recorded when an in-progress schema change
	// encounters a problem and is reversed.
	EventLogReverseSchemaChange EventLogType = "reverse_schema_change"
	// EventLogFinishSchemaChange is recorded when a previously initiated schema
	// change has completed.
	EventLogFinishSchemaChange EventLogType = "finish_schema_change"

	// EventLogNodeJoin is recorded when a node joins the cluster.
	EventLogNodeJoin EventLogType = "node_join"
	// EventLogNodeRestart is recorded when an existing node rejoins the cluster
	// after being offline.
	EventLogNodeRestart EventLogType = "node_restart"
)

NOTE: When you add a new event type here, please manually add it to ui/app/util/eventTypes.ts so that it will be recognized in the UI.

type EventLogger

type EventLogger struct {
	InternalExecutor
}

An EventLogger exposes methods used to record events to the event table.

func MakeEventLogger

func MakeEventLogger(leaseMgr *LeaseManager) EventLogger

MakeEventLogger constructs a new EventLogger. A LeaseManager is required in order to correctly execute SQL statements.

func (EventLogger) InsertEventRecord

func (ev EventLogger) InsertEventRecord(
	ctx context.Context,
	txn *client.Txn,
	eventType EventLogType,
	targetID, reportingID int32,
	info interface{},
) error

InsertEventRecord inserts a single event into the event log as part of the provided transaction.

type Executor

type Executor struct {

	// Transient stats.
	SelectCount *metric.Counter
	// The subset of SELECTs that are processed through DistSQL.
	DistSQLSelectCount *metric.Counter
	DistSQLExecLatency *metric.Histogram
	SQLExecLatency     *metric.Histogram
	TxnBeginCount      *metric.Counter

	// txnCommitCount counts the number of times a COMMIT was attempted.
	TxnCommitCount *metric.Counter

	TxnAbortCount    *metric.Counter
	TxnRollbackCount *metric.Counter
	UpdateCount      *metric.Counter
	InsertCount      *metric.Counter
	DeleteCount      *metric.Counter
	DdlCount         *metric.Counter
	MiscCount        *metric.Counter
	QueryCount       *metric.Counter
	// contains filtered or unexported fields
}

An Executor executes SQL statements. Executor is thread-safe.

func NewExecutor

func NewExecutor(cfg ExecutorConfig, stopper *stop.Stopper) *Executor

NewExecutor creates an Executor and registers a callback on the system config.

func (*Executor) AnnotateCtx

func (e *Executor) AnnotateCtx(ctx context.Context) context.Context

AnnotateCtx is a convenience wrapper; see AmbientContext.

func (*Executor) CopyData

func (e *Executor) CopyData(session *Session, data string) StatementResults

CopyData adds data to the COPY buffer and executes if there are enough rows.

func (*Executor) CopyDone

func (e *Executor) CopyDone(session *Session) StatementResults

CopyDone executes the buffered COPY data.

func (*Executor) ExecuteStatements

func (e *Executor) ExecuteStatements(
	session *Session, stmts string, pinfo *parser.PlaceholderInfo,
) StatementResults

ExecuteStatements executes the given statement(s) and returns a response.

func (*Executor) IsVirtualDatabase

func (e *Executor) IsVirtualDatabase(name string) bool

IsVirtualDatabase checks if the provided name corresponds to a virtual database, exposing this information on the Executor object itself.

func (*Executor) Prepare

func (e *Executor) Prepare(
	query string, session *Session, pinfo parser.PlaceholderTypes,
) (*PreparedStatement, error)

Prepare returns the result types of the given statement. pinfo may contain partial type information for placeholders. Prepare will populate the missing types. The PreparedStatement is returned (or nil if there are no results).
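A hedged usage sketch, written as if inside this package (it assumes an existing Executor and Session, whose construction requires a full server setup; the query text is illustrative):

func printPreparedColumns(e *Executor, session *Session) error {
	var pinfo parser.PlaceholderTypes // no type hints; Prepare fills in missing types
	stmt, err := e.Prepare("SELECT stock FROM inventory WHERE name = $1", session, pinfo)
	if err != nil || stmt == nil {
		return err
	}
	for _, col := range stmt.Columns { // names and types of the result columns
		fmt.Println(col.Name, col.Typ)
	}
	return nil
}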

func (*Executor) SetDistSQLSpanResolver

func (e *Executor) SetDistSQLSpanResolver(spanResolver distsqlplan.SpanResolver)

SetDistSQLSpanResolver changes the SpanResolver used for DistSQL. It is the caller's responsibility to make sure no queries are being run with DistSQL at the same time.

func (*Executor) Start

func (e *Executor) Start(
	ctx context.Context, startupMemMetrics *MemoryMetrics, nodeDesc roachpb.NodeDescriptor,
)

Start starts workers for the executor and initializes the distSQLPlanner.

type ExecutorConfig

type ExecutorConfig struct {
	AmbientCtx   log.AmbientContext
	NodeID       *base.NodeIDContainer
	DB           *client.DB
	Gossip       *gossip.Gossip
	DistSender   *kv.DistSender
	RPCContext   *rpc.Context
	LeaseManager *LeaseManager
	Clock        *hlc.Clock
	DistSQLSrv   *distsqlrun.ServerImpl

	TestingKnobs              *ExecutorTestingKnobs
	SchemaChangerTestingKnobs *SchemaChangerTestingKnobs
	// HistogramWindowInterval is (server.Context).HistogramWindowInterval.
	HistogramWindowInterval time.Duration
}

An ExecutorConfig encompasses the auxiliary objects and configuration required to create an executor. All fields holding a pointer or an interface are required to create an Executor; the rest will have sane defaults set if omitted.

type ExecutorTestingKnobs

type ExecutorTestingKnobs struct {
	// WaitForGossipUpdate causes metadata-mutating operations to wait
	// for the new metadata to back-propagate through gossip.
	WaitForGossipUpdate bool

	// CheckStmtStringChange causes Executor.execStmtsInCurrentTxn to verify
	// that executed statements are not modified during execution.
	CheckStmtStringChange bool

	// StatementFilter can be used to trap execution of SQL statements and
	// optionally change their results. The filter function is invoked after each
	// statement has been executed.
	StatementFilter StatementFilter

	// DisableAutoCommit, if set, disables the auto-commit functionality of some
	// SQL statements. That functionality allows some statements to commit
	// directly when they're executed in an implicit SQL txn, without waiting for
	// the Executor to commit the implicit txn.
	// This has to be set in tests that need to abort such statements using a
	// StatementFilter; otherwise, the statement commits immediately after
	// execution so there'll be nothing left to abort by the time the filter runs.
	DisableAutoCommit bool
}

ExecutorTestingKnobs is part of the context used to control parts of the system during testing.

func (*ExecutorTestingKnobs) ModuleTestingKnobs

func (*ExecutorTestingKnobs) ModuleTestingKnobs()

ModuleTestingKnobs is part of the base.ModuleTestingKnobs interface.

type FKCheck

type FKCheck int

FKCheck indicates a kind of FK check (delete, insert, or both).

const (
	// CheckDeletes checks if rows reference a changed value.
	CheckDeletes FKCheck = iota
	// CheckInserts checks if a new/changed value references an existing row.
	CheckInserts
	// CheckUpdates checks all references (CheckDeletes+CheckInserts).
	CheckUpdates
)

type InternalExecutor

type InternalExecutor struct {
	LeaseManager *LeaseManager
}

InternalExecutor can be used internally by cockroach to execute SQL statements without needing to open a SQL connection. InternalExecutor assumes that the caller has access to a cockroach KV client to handle connection and transaction management.

func (InternalExecutor) ExecuteStatementInTransaction

func (ie InternalExecutor) ExecuteStatementInTransaction(
	ctx context.Context, opName string, txn *client.Txn, statement string, qargs ...interface{},
) (int, error)

ExecuteStatementInTransaction executes the supplied SQL statement as part of the supplied transaction. Statements are currently executed as the root user.
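A hedged in-package sketch of its use (the LeaseManager and open KV transaction are assumed to exist; the statement and the reading of the returned count as rows affected are illustrative assumptions):

func deleteOutOfStock(ctx context.Context, leaseMgr *LeaseManager, txn *client.Txn) error {
	ie := InternalExecutor{LeaseManager: leaseMgr}
	// The returned count is taken here to be the number of rows affected
	// (an assumption of this sketch).
	n, err := ie.ExecuteStatementInTransaction(
		ctx, "delete-out-of-stock", txn, "DELETE FROM test.inventory WHERE stock = $1", 0,
	)
	if err != nil {
		return err
	}
	fmt.Println("deleted", n, "rows")
	return nil
}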

func (InternalExecutor) GetTableSpan

func (ie InternalExecutor) GetTableSpan(
	ctx context.Context, user string, txn *client.Txn, dbName, tableName string,
) (roachpb.Span, error)

GetTableSpan gets the key span for a SQL table, including any indices.

type LeaseManager

type LeaseManager struct {
	LeaseStore
	// contains filtered or unexported fields
}

LeaseManager manages acquiring and releasing per-table leases. It also handles resolving table names to descriptor IDs.

Exported only for testing.

The locking order is: LeaseManager.mu > tableState.mu > tableNameCache.mu > LeaseState.mu

func NewLeaseManager

func NewLeaseManager(
	nodeID *base.NodeIDContainer,
	db client.DB,
	clock *hlc.Clock,
	testingKnobs LeaseManagerTestingKnobs,
	stopper *stop.Stopper,
	memMetrics *MemoryMetrics,
) *LeaseManager

NewLeaseManager creates a new LeaseManager.

stopper is used to run async tasks. Can be nil in tests.

func (*LeaseManager) Acquire

func (m *LeaseManager) Acquire(
	ctx context.Context, txn *client.Txn, tableID sqlbase.ID, version sqlbase.DescriptorVersion,
) (*LeaseState, error)

Acquire acquires a read lease for the specified table ID. If version is non-zero the lease is grabbed for the specified version. Otherwise it is grabbed for the most recent version of the descriptor that the lease manager knows about. TODO(andrei): move the tests that use this to the sql package and un-export it.

func (*LeaseManager) AcquireByName

func (m *LeaseManager) AcquireByName(
	ctx context.Context, txn *client.Txn, dbID sqlbase.ID, tableName string,
) (*LeaseState, error)

AcquireByName acquires a read lease for the specified table. The lease is grabbed for the most recent version of the descriptor that the lease manager knows about.

func (*LeaseManager) RefreshLeases

func (m *LeaseManager) RefreshLeases(s *stop.Stopper, db *client.DB, gossip *gossip.Gossip)

RefreshLeases starts a goroutine that refreshes the lease manager leases for tables received in the latest system configuration via gossip.

func (*LeaseManager) Release

func (m *LeaseManager) Release(lease *LeaseState) error

Release releases a previously acquired read lease.

func (*LeaseManager) SetDraining

func (m *LeaseManager) SetDraining(drain bool)

SetDraining (when called with 'true') removes all inactive leases. Any leases that are active will be removed once the lease's reference count drops to 0.

type LeaseManagerTestingKnobs

type LeaseManagerTestingKnobs struct {
	// A callback called when a gossip update is received, before the leases are
	// refreshed. Careful when using this to block for too long - you can block
	// all the gossip users in the system.
	GossipUpdateEvent func(config.SystemConfig)
	// A callback called after the leases are refreshed as a result of a gossip update.
	TestingLeasesRefreshedEvent func(config.SystemConfig)

	LeaseStoreTestingKnobs LeaseStoreTestingKnobs
}

LeaseManagerTestingKnobs contains test knobs.

func (*LeaseManagerTestingKnobs) ModuleTestingKnobs

func (*LeaseManagerTestingKnobs) ModuleTestingKnobs()

ModuleTestingKnobs is part of the base.ModuleTestingKnobs interface.

type LeaseState

type LeaseState struct {
	// This descriptor is immutable and can be shared by many goroutines.
	// Care must be taken to not modify it.
	sqlbase.TableDescriptor
	// contains filtered or unexported fields
}

LeaseState holds the state for a lease. Exported only for testing.

func (*LeaseState) Expiration

func (s *LeaseState) Expiration() time.Time

Expiration returns the expiration time of the lease.

func (*LeaseState) Refcount

func (s *LeaseState) Refcount() int

Refcount returns the reference count of the lease.

func (*LeaseState) String

func (s *LeaseState) String() string

type LeaseStore

type LeaseStore struct {
	// contains filtered or unexported fields
}

LeaseStore implements the operations for acquiring and releasing leases and publishing a new version of a descriptor. Exported only for testing.

func (LeaseStore) Acquire

func (s LeaseStore) Acquire(
	ctx context.Context,
	txn *client.Txn,
	tableID sqlbase.ID,
	minVersion sqlbase.DescriptorVersion,
	minExpirationTime parser.DTimestamp,
) (*LeaseState, error)

Acquire a lease on the most recent version of a table descriptor. If the lease cannot be obtained because the descriptor is in the process of being dropped, the error will be errTableDropped.

func (LeaseStore) Publish

func (s LeaseStore) Publish(
	ctx context.Context,
	tableID sqlbase.ID,
	update func(*sqlbase.TableDescriptor) error,
	logEvent func(*client.Txn) error,
) (*sqlbase.Descriptor, error)

Publish updates a table descriptor. It also maintains the invariant that there are at most two versions of the descriptor out in the wild at any time by first waiting for all nodes to be on the current (pre-update) version of the table desc. The update closure is called after the wait, and it provides the new version of the descriptor to be written. In a multi-step schema operation, this update should perform a single step. The closure may be called multiple times if retries occur; make sure it does not have side effects. Returns the updated version of the descriptor.
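A hedged sketch of the update-closure contract (s, ctx and tableID are assumed to exist; passing nil for logEvent is an assumption of this sketch, and the closure must be free of side effects because it may be retried):

func renameStep(s LeaseStore, ctx context.Context, tableID sqlbase.ID) error {
	_, err := s.Publish(ctx, tableID,
		func(desc *sqlbase.TableDescriptor) error {
			desc.Name = "inventory2" // perform exactly one step of the change
			return nil
		},
		nil, // logEvent: no event logging in this sketch
	)
	return err
}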

func (LeaseStore) Release

func (s LeaseStore) Release(ctx context.Context, stopper *stop.Stopper, lease *LeaseState)

Release a previously acquired table descriptor lease.

func (LeaseStore) WaitForOneVersion

func (s LeaseStore) WaitForOneVersion(
	ctx context.Context, tableID sqlbase.ID, retryOpts retry.Options,
) (sqlbase.DescriptorVersion, error)

WaitForOneVersion returns once there are no unexpired leases on the previous version of the table descriptor. It returns the current version. After returning there can only be versions of the descriptor >= to the returned version. Lease acquisition (see acquire()) maintains the invariant that no new leases for desc.Version-1 will be granted once desc.Version exists.

type LeaseStoreTestingKnobs

type LeaseStoreTestingKnobs struct {
	// Called after a lease is removed from the store, with any operation error.
	// See LeaseRemovalTracker.
	LeaseReleasedEvent func(lease *LeaseState, err error)
	// Called after a lease is acquired, with any operation error.
	LeaseAcquiredEvent func(lease *LeaseState, err error)
	// Allow the use of expired leases.
	CanUseExpiredLeases bool
	// RemoveOnceDereferenced forces leases to be removed
	// as soon as they are dereferenced.
	RemoveOnceDereferenced bool
}

LeaseStoreTestingKnobs contains testing knobs.

func (*LeaseStoreTestingKnobs) ModuleTestingKnobs

func (*LeaseStoreTestingKnobs) ModuleTestingKnobs()

ModuleTestingKnobs is part of the base.ModuleTestingKnobs interface.

type MemoryMetrics

type MemoryMetrics struct {
	MaxBytesHist         *metric.Histogram
	CurBytesCount        *metric.Counter
	TxnMaxBytesHist      *metric.Histogram
	TxnCurBytesCount     *metric.Counter
	SessionMaxBytesHist  *metric.Histogram
	SessionCurBytesCount *metric.Counter
}

MemoryMetrics contains pointers to the metrics object for one of the SQL endpoints:

- "client" for connections received via pgwire.
- "admin" for connections received via the admin RPC.
- "internal" for activities related to leases, schema changes, etc.

func MakeMemMetrics

func MakeMemMetrics(endpoint string, histogramWindow time.Duration) MemoryMetrics

MakeMemMetrics instantiates the metric objects for an SQL endpoint.

func (MemoryMetrics) MetricStruct

func (MemoryMetrics) MetricStruct()

MetricStruct implements the metrics.Struct interface.

type PipelineQueue

type PipelineQueue struct {
	// contains filtered or unexported fields
}

PipelineQueue maintains a set of planNodes running with pipelined execution. It uses a DependencyAnalyzer to determine dependencies between plans. Using this knowledge, the queue provides the following guarantees about the execution of plans:

  1. No two plans will ever be run concurrently if they are dependent on one another.
  2. If two dependent plans are added to the queue, the plan added first will be executed before the plan added second.
  3. No plans will begin execution once an error has been seen until Wait is called to drain the plans and reset the error state.

The queue performs all computation on pointers to planNode interfaces. This is because it wants to operate on unique objects, and equality of interfaces does not necessarily imply pointer equality.
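A hedged in-package sketch (planNode is unexported, so this only compiles inside the sql package; the zero-valued plans are placeholders for real plans produced by the planner):

func runConcurrently() error {
	var p1, p2 planNode // placeholders; real plans come from the planner
	pq := MakePipelineQueue(NoDependenciesAnalyzer)
	pq.Add(&p1, func() error { return nil }) // would execute plan 1
	pq.Add(&p2, func() error { return nil }) // would execute plan 2
	// Wait drains all running and pending plans and returns the error of the
	// last batch before resetting the error state.
	return pq.Wait()
}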

func MakePipelineQueue

func MakePipelineQueue(analyzer DependencyAnalyzer) PipelineQueue

MakePipelineQueue creates a new empty PipelineQueue that uses the provided DependencyAnalyzer to determine plan dependencies.

func (*PipelineQueue) Add

func (pq *PipelineQueue) Add(plan *planNode, exec func() error)

Add inserts a new plan in the queue and executes the provided function when appropriate, obeying the guarantees made by the PipelineQueue.

Add should not be called concurrently with Wait. See Wait's comment for more details.

func (*PipelineQueue) Err

func (pq *PipelineQueue) Err() error

Err returns the PipelineQueue's error.

func (*PipelineQueue) Len

func (pq *PipelineQueue) Len() int

Len returns the number of plans in the PipelineQueue.

func (*PipelineQueue) Wait

func (pq *PipelineQueue) Wait() error

Wait blocks until the PipelineQueue finishes executing all plans. It then returns the error of the last batch of pipelined execution before resetting the error to allow for future use.

Wait cannot be called concurrently with Add. If we need to lift this restriction, consider replacing the sync.WaitGroup with a syncutil.RWMutex, which will provide the desired starvation and ordering properties: once Wait is called, future Adds will not be reordered ahead of Wait's attempts to drain all running and pending plans.

type PlanHookState

type PlanHookState interface {
	ExecCfg() *ExecutorConfig
	AuthorizationAccessor
}

PlanHookState exposes the subset of planner needed by plan hooks. We pass this as one interface, rather than individually passing each field or interface as we find we need them, to avoid churn in the planHookFn sig and the hooks that implement it.

type PreparedPortal

type PreparedPortal struct {
	Stmt  *PreparedStatement
	Qargs parser.QueryArguments

	ProtocolMeta interface{} // a field for protocol implementations to hang metadata off of.
	// contains filtered or unexported fields
}

PreparedPortal is a PreparedStatement that has been bound with query arguments.

type PreparedPortals

type PreparedPortals struct {
	// contains filtered or unexported fields
}

PreparedPortals is a mapping of PreparedPortal names to their corresponding PreparedPortals.

func (PreparedPortals) Delete

func (pp PreparedPortals) Delete(ctx context.Context, name string) bool

Delete removes the PreparedPortal with the provided name from the PreparedPortals. The method returns whether a portal with that name was found and removed.

func (PreparedPortals) Exists

func (pp PreparedPortals) Exists(name string) bool

Exists returns whether a PreparedPortal with the provided name exists.

func (PreparedPortals) Get

func (pp PreparedPortals) Get(name string) (*PreparedPortal, bool)

Get returns the PreparedPortal with the provided name.

func (PreparedPortals) New

New creates a new PreparedPortal with the provided name and corresponding PreparedStatement, binding the statement using the given QueryArguments.

type PreparedStatement

type PreparedStatement struct {
	Query    string
	Type     parser.StatementType
	SQLTypes parser.PlaceholderTypes
	Columns  ResultColumns

	ProtocolMeta interface{} // a field for protocol implementations to hang metadata off of.
	// contains filtered or unexported fields
}

PreparedStatement is a SQL statement that has been parsed and the types of arguments and results have been determined.

type PreparedStatements

type PreparedStatements struct {
	// contains filtered or unexported fields
}

PreparedStatements is a mapping of PreparedStatement names to their corresponding PreparedStatements.

func (PreparedStatements) Delete

func (ps PreparedStatements) Delete(ctx context.Context, name string) bool

Delete removes the PreparedStatement with the provided name from the PreparedStatements. The method returns whether a statement with that name was found and removed.

func (*PreparedStatements) DeleteAll

func (ps *PreparedStatements) DeleteAll(ctx context.Context)

DeleteAll removes all PreparedStatements from the PreparedStatements. This will in turn remove all PreparedPortals from the session's PreparedPortals. This is used by the "delete" message in the pgwire protocol; after DeleteAll statements and portals can be added again.

func (PreparedStatements) Exists

func (ps PreparedStatements) Exists(name string) bool

Exists returns whether a PreparedStatement with the provided name exists.

func (PreparedStatements) Get

Get returns the PreparedStatement with the provided name.

func (PreparedStatements) New

func (ps PreparedStatements) New(
	e *Executor, name, query string, placeholderHints parser.PlaceholderTypes,
) (*PreparedStatement, error)

New creates a new PreparedStatement with the provided name and corresponding query string, using the given PlaceholderTypes hints to assist in inferring placeholder types.

ps.session.Ctx() is used as the logging context for the prepare operation.

type Result

type Result struct {
	Err error
	// The type of statement that the result is for.
	Type parser.StatementType
	// The tag of the statement that the result is for.
	PGTag string
	// RowsAffected will be populated if the statement type is "RowsAffected".
	RowsAffected int
	// Columns will be populated if the statement type is "Rows". It will contain
	// the names and types of the columns returned in the result set in the order
	// specified in the SQL statement. The number of columns will equal the number
	// of values in each Row.
	Columns ResultColumns
	// Rows will be populated if the statement type is "Rows". It will contain
	// the result set of the result.
	// TODO(nvanbenschoten): Can this be streamed from the planNode?
	Rows *RowContainer
}

Result corresponds to the execution of a single SQL statement.

func (*Result) Close

func (r *Result) Close(ctx context.Context)

Close ensures that the resources claimed by the result are released.

type ResultColumn

type ResultColumn struct {
	Name string
	Typ  parser.Type
	// contains filtered or unexported fields
}

ResultColumn contains the name and type of a SQL "cell".

type ResultColumns

type ResultColumns []ResultColumn

ResultColumns is the type used throughout the sql module to describe the column types of a table.

type ResultList

type ResultList []Result

ResultList represents a list of results for a list of SQL statements. There is one result object per SQL statement in the request.

func (ResultList) Close

func (rl ResultList) Close(ctx context.Context)

Close ensures that the resources claimed by the results are released.

type RowBuffer

type RowBuffer struct {
	*RowContainer
	// contains filtered or unexported fields
}

RowBuffer is a buffer for rows of DTuples. Rows must be added using AddRow(); once the work is done, the Close() method must be called to release the allocated memory.

This is intended for nodes where it is simpler to compute a batch of rows at once instead of maintaining internal state in order to operate correctly under the constraints imposed by Next() and Values() under the planNode interface.

func (*RowBuffer) Next

func (rb *RowBuffer) Next() bool

Next here is analogous to Next() as defined under planNode: if no pre-computed results were buffered prior to the call, we return false; else we stage the next output value for the subsequent call to Values().

func (*RowBuffer) Values

func (rb *RowBuffer) Values() parser.Datums

Values here is analogous to Values() as defined under planNode.

The result is available after a call to Next(), and is only valid until the next call to Next().

type RowContainer

type RowContainer struct {
	// contains filtered or unexported fields
}

RowContainer is a container for rows of Datums which tracks the approximate amount of memory allocated for row data. Rows must be added using AddRow(); once the work is done the Close() method must be called to release the allocated memory.

TODO(knz): this does not currently track the amount of memory used for the outer array of Datums references.

func NewRowContainer

func NewRowContainer(acc mon.BoundAccount, h ResultColumns, rowCapacity int) *RowContainer

NewRowContainer allocates a new row container.

The acc argument indicates where to register memory allocations by this row container. Should probably be created by Session.makeBoundAccount() or Session.TxnState.makeBoundAccount().

The rowCapacity argument indicates how many rows are to be expected; it is used to pre-allocate the outer array of row references, in the fashion of Go's capacity argument to the make() function.

Note that we could, but do not (yet), report the size of the row container itself to the monitor in this constructor. This is because the various planNodes are not (yet) equipped to call Close() upon encountering errors in their constructor (all nodes initializing a RowContainer there) and SetLimitHint() (for sortNode which initializes a RowContainer there). This would be rather error-prone to implement consistently and hellishly difficult to test properly. The trade-off is that very large table schemas or column selections could cause unchecked and potentially dangerous memory growth.

func (*RowContainer) AddRow

func (c *RowContainer) AddRow(ctx context.Context, row parser.Datums) (parser.Datums, error)

AddRow attempts to insert a new row in the RowContainer. The row slice is not used directly: the Datum values inside the Datums are copied to internal storage. Returns an error if the allocation was denied by the MemoryMonitor.
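A hedged in-package sketch of the AddRow/Close lifecycle (the bound account is assumed to come from a Session as described above; parser.TypeInt and parser.NewDInt are assumed helpers from the parser package):

func demoRowContainer(ctx context.Context, acc mon.BoundAccount) error {
	cols := ResultColumns{{Name: "n", Typ: parser.TypeInt}}
	rc := NewRowContainer(acc, cols, 1 /* expected number of rows */)
	defer rc.Close(ctx)
	if _, err := rc.AddRow(ctx, parser.Datums{parser.NewDInt(42)}); err != nil {
		return err // the allocation was denied by the memory monitor
	}
	fmt.Println(rc.Len(), rc.At(0)) // 1 [42]
	return nil
}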

func (*RowContainer) At

func (c *RowContainer) At(i int) parser.Datums

At accesses a row at a specific index.

func (*RowContainer) Close

func (c *RowContainer) Close(ctx context.Context)

Close releases the memory associated with the RowContainer.

func (*RowContainer) Len

func (c *RowContainer) Len() int

Len reports the number of rows currently held in this RowContainer.

func (*RowContainer) NumCols

func (c *RowContainer) NumCols() int

NumCols reports the number of columns held in this RowContainer.

func (*RowContainer) PopFirst

func (c *RowContainer) PopFirst()

PopFirst discards the first row added to the RowContainer.

func (*RowContainer) Replace

func (c *RowContainer) Replace(ctx context.Context, i int, newRow parser.Datums) error

Replace substitutes one row for another. This does query the MemoryMonitor to determine whether the new row fits the allowance.

func (*RowContainer) Swap

func (c *RowContainer) Swap(i, j int)

Swap exchanges two rows. Used for sorting.

type RowInserter

type RowInserter struct {
	InsertColIDtoRowIndex map[sqlbase.ColumnID]int
	// contains filtered or unexported fields
}

RowInserter abstracts the key/value operations for inserting table rows.

func MakeRowInserter

func MakeRowInserter(
	txn *client.Txn,
	tableDesc *sqlbase.TableDescriptor,
	fkTables tableLookupsByID,
	insertCols []sqlbase.ColumnDescriptor,
	checkFKs bool,
) (RowInserter, error)

MakeRowInserter creates a RowInserter for the given table.

insertCols must contain every column in the primary key.

func (*RowInserter) InsertRow

func (ri *RowInserter) InsertRow(
	ctx context.Context, b puter, values []parser.Datum, ignoreConflicts bool,
) error

InsertRow adds to the batch the kv operations necessary to insert a table row with the given values.

type SchemaAccessor

type SchemaAccessor interface {
	// contains filtered or unexported methods
}

SchemaAccessor provides helper methods for using the SQL schema.

type SchemaChangeManager

type SchemaChangeManager struct {
	// contains filtered or unexported fields
}

SchemaChangeManager processes pending schema changes seen in gossip updates. Most schema changes are executed synchronously by the node that created the schema change. If the node dies while processing the schema change this manager acts as a backup execution mechanism.

func NewSchemaChangeManager

func NewSchemaChangeManager(
	testingKnobs *SchemaChangerTestingKnobs,
	db client.DB,
	nodeDesc roachpb.NodeDescriptor,
	rpcContext *rpc.Context,
	distSQLServ *distsqlrun.ServerImpl,
	distSender *kv.DistSender,
	gossip *gossip.Gossip,
	leaseMgr *LeaseManager,
) *SchemaChangeManager

NewSchemaChangeManager returns a new SchemaChangeManager.

func (*SchemaChangeManager) Start

func (s *SchemaChangeManager) Start(stopper *stop.Stopper)

Start starts a goroutine that runs outstanding schema changes for tables received in the latest system configuration via gossip.

type SchemaChanger

type SchemaChanger struct {
	// contains filtered or unexported fields
}

SchemaChanger is used to change the schema on a table.

func NewSchemaChangerForTesting

func NewSchemaChangerForTesting(
	tableID sqlbase.ID,
	mutationID sqlbase.MutationID,
	nodeID roachpb.NodeID,
	db client.DB,
	leaseMgr *LeaseManager,
) SchemaChanger

NewSchemaChangerForTesting only for tests.

func (*SchemaChanger) AcquireLease

AcquireLease acquires a schema change lease on the table if an unexpired lease doesn't exist. It returns the lease.

func (*SchemaChanger) ExtendLease

func (sc *SchemaChanger) ExtendLease(
	existingLease *sqlbase.TableDescriptor_SchemaChangeLease,
) error

ExtendLease for the current leaser. This needs to be called often while doing a schema change to prevent more than one node attempting to apply a schema change (which is still safe, but unwise). It updates existingLease with the new lease.

func (*SchemaChanger) MaybeIncrementVersion

func (sc *SchemaChanger) MaybeIncrementVersion(ctx context.Context) (*sqlbase.Descriptor, error)

MaybeIncrementVersion increments the version if needed. If the version is to be incremented, it also assures that all nodes are on the current (pre-increment) version of the descriptor. Returns the (potentially updated) descriptor.

func (*SchemaChanger) ReleaseLease

ReleaseLease releases the table lease if it is the one registered with the table descriptor.

func (*SchemaChanger) RunStateMachineBeforeBackfill

func (sc *SchemaChanger) RunStateMachineBeforeBackfill(ctx context.Context) error

RunStateMachineBeforeBackfill moves the state machine forward and waits to ensure that all nodes are seeing the latest version of the table.

type SchemaChangerTestingKnobs

type SchemaChangerTestingKnobs struct {
	// SyncFilter is called before running schema changers synchronously (at
	// the end of a txn). The function can be used to clear the schema
	// changers (if the test doesn't want them run using the synchronous path)
	// or to temporarily block execution. Note that this has nothing to do
	// with the async path for running schema changers. To block that, set
	// AsyncExecNotification.
	SyncFilter SyncSchemaChangersFilter

	// RunBeforeBackfill is called just before starting the backfill.
	RunBeforeBackfill func() error

	// RunBeforeBackfillChunk is called before executing each chunk of a
	// backfill during a schema change operation. It is called with the
	// current span and returns an error which eventually is returned to the
	// caller of SchemaChanger.exec(). It is called at the start of the
	// backfill function passed into the transaction executing the chunk.
	RunBeforeBackfillChunk func(sp roachpb.Span) error

	// RunAfterBackfillChunk is called after executing each chunk of a
	// backfill during a schema change operation. It is called just before
	// returning from the backfill function passed into the transaction
	// executing the chunk. It is always called even when the backfill
	// function returns an error, or if the table has already been dropped.
	RunAfterBackfillChunk func()

	// RenameOldNameNotInUseNotification is called during a rename schema
	// change, after all leases on the version of the descriptor with the old
	// name are gone, and just before the mapping of the old name to the
	// descriptor id is about to be deleted.
	RenameOldNameNotInUseNotification func()

	// AsyncExecNotification is a function called before running a schema
	// change asynchronously. Returning an error will prevent the asynchronous
	// execution path from running.
	AsyncExecNotification func() error

	// AsyncExecQuickly executes queued schema changes as soon as possible.
	AsyncExecQuickly bool

	// WriteCheckpointInterval is the interval after which a checkpoint is
	// written.
	WriteCheckpointInterval time.Duration

	// BackfillChunkSize is to be used for all backfill chunked operations.
	BackfillChunkSize int64

	// RunAfterTableNameDropped is called when a table is being dropped.
	// It is called as soon as the table name is released and before the
	// table is truncated.
	RunAfterTableNameDropped func() error
}

SchemaChangerTestingKnobs for testing the schema change execution path through both the synchronous and asynchronous paths.
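
For example, a test might install knobs like the following; the field names match the struct above, but how the knobs value is plumbed into a test server is not shown here and is an assumption of the sketch:

	knobs := &SchemaChangerTestingKnobs{
		// Run queued schema changes as soon as possible rather than on
		// the normal asynchronous cadence.
		AsyncExecQuickly: true,
		// Small chunks make the per-chunk hooks fire often in tests.
		BackfillChunkSize: 10,
		// Inspect each chunk; a non-nil error would eventually be
		// returned to the caller of the schema change.
		RunBeforeBackfillChunk: func(sp roachpb.Span) error {
			fmt.Printf("backfilling span %s\n", sp)
			return nil
		},
	}
	_ = knobs // wire into the server's testing knobs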

func (*SchemaChangerTestingKnobs) ModuleTestingKnobs

func (*SchemaChangerTestingKnobs) ModuleTestingKnobs()

ModuleTestingKnobs is part of the base.ModuleTestingKnobs interface.

type Session

type Session struct {
	Database string
	// SearchPath is a list of databases that will be searched for a table name
	// before the session's current database. Currently, this is used only for
	// SELECTs. Names in the search path must already be normalized.
	SearchPath  parser.SearchPath
	User        string
	Syntax      int32
	DistSQLMode distSQLExecMode

	// Info about the open transaction (if any).
	TxnState txnState

	PreparedStatements PreparedStatements
	PreparedPortals    PreparedPortals

	Location              *time.Location
	DefaultIsolationLevel enginepb.IsolationType
	// contains filtered or unexported fields
}

Session contains the state of a SQL client connection. Create instances using NewSession().

func NewSession

func NewSession(
	ctx context.Context, args SessionArgs, e *Executor, remote net.Addr, memMetrics *MemoryMetrics,
) *Session

NewSession creates and initializes a new Session object. remote can be nil.
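
A minimal sketch, assuming ctx, an *Executor, and a *MemoryMetrics are available from the surrounding server or test harness:

	args := SessionArgs{Database: "test", User: "root"}
	// remote may be nil, per the description above.
	session := NewSession(ctx, args, executor, nil, memMetrics)
	defer session.Finish(executor)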

func (*Session) ClearStatementsAndPortals

func (s *Session) ClearStatementsAndPortals(ctx context.Context)

ClearStatementsAndPortals de-registers all statements and portals. Afterwards, no more can be added.

func (*Session) CopyEnd

func (session *Session) CopyEnd(ctx context.Context)

CopyEnd ends the COPY mode. Any buffered data is discarded.

func (*Session) Ctx

func (s *Session) Ctx() context.Context

Ctx returns the current context for the session: if there is an active SQL transaction it returns the transaction context, otherwise it returns the session context. Note that in some cases we may want the session context even if there is an active transaction (an example is when we want to log an event to the session event log); in that case s.context should be used directly.

func (*Session) Finish

func (s *Session) Finish(e *Executor)

Finish releases resources held by the Session.

func (*Session) OpenAccount

func (s *Session) OpenAccount() WrappableMemoryAccount

OpenAccount interfaces between Session and mon.MemoryMonitor.

func (*Session) StartMonitor

func (s *Session) StartMonitor(pool *mon.MemoryMonitor, reserved mon.BoundAccount)

StartMonitor interfaces between Session and mon.MemoryMonitor.

func (*Session) StartUnlimitedMonitor

func (s *Session) StartUnlimitedMonitor()

StartUnlimitedMonitor interfaces between Session and mon.MemoryMonitor.

type SessionArgs

type SessionArgs struct {
	Database string
	User     string
}

SessionArgs contains arguments for creating a new Session with NewSession().

type StatementFilter

type StatementFilter func(context.Context, string, *Result)

StatementFilter is the type of callback that ExecutorTestingKnobs.StatementFilter takes.
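
As an illustration, a test-only filter of this type could record each statement it sees; installing it via ExecutorTestingKnobs.StatementFilter, as named in the description, is assumed:

	var (
		mu   sync.Mutex
		seen []string
	)
	filter := StatementFilter(func(ctx context.Context, stmt string, res *Result) {
		mu.Lock()
		defer mu.Unlock()
		seen = append(seen, stmt)
	})
	_ = filter // assign to ExecutorTestingKnobs.StatementFilter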

type StatementResults

type StatementResults struct {
	ResultList
	// Indicates that after parsing, the request contained 0 non-empty statements.
	Empty bool
}

StatementResults represents a list of results from running a batch of SQL statements, plus some meta info about the batch.

func (*StatementResults) Close

func (s *StatementResults) Close(ctx context.Context)

Close ensures that the resources claimed by the results are released.
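
A caller that receives a StatementResults value (from an Executor; how it is produced is not shown in this sketch) is responsible for closing it:

	// res is a StatementResults returned by executing a batch.
	defer res.Close(ctx)
	if res.Empty {
		// The request parsed to zero non-empty statements.
		return nil
	}
	// ... consume the embedded ResultList ...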

type SyncSchemaChangersFilter

type SyncSchemaChangersFilter func(TestingSchemaChangerCollection)

SyncSchemaChangersFilter is the type of a hook to be installed through the ExecutorContext for blocking or otherwise manipulating schema changers run through the sync schema changers path.

type TestingSchemaChangerCollection

type TestingSchemaChangerCollection struct {
	// contains filtered or unexported fields
}

TestingSchemaChangerCollection is an exported (for testing) version of schemaChangerCollection. TODO(andrei): get rid of this type once we can have tests internal to the sql package (as of April 2016 we can't because sql can't import server).

func (TestingSchemaChangerCollection) ClearSchemaChangers

func (tscc TestingSchemaChangerCollection) ClearSchemaChangers()

ClearSchemaChangers clears the schema changers from the collection. If this is called from a SyncSchemaChangersFilter, no schema changer will be run.
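
Combined with the SyncSchemaChangersFilter hook above, a test can suppress the synchronous execution path entirely; a minimal sketch:

	knobs := &SchemaChangerTestingKnobs{
		SyncFilter: func(tscc TestingSchemaChangerCollection) {
			// Drop all queued schema changers so none run at the end
			// of the transaction.
			tscc.ClearSchemaChangers()
		},
	}
	_ = knobs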

type TxnStateEnum

type TxnStateEnum int

TxnStateEnum represents the state of a SQL txn.

const (
	// No txn is in scope. Either there never was one, or it got committed/rolled back.
	NoTxn TxnStateEnum = iota
	// A txn is in scope.
	Open
	// The txn has encountered a (non-retriable) error.
	// Statements will be rejected until a COMMIT/ROLLBACK is seen.
	Aborted
	// The txn has encountered a retriable error.
	// Statements will be rejected until a RESTART_TRANSACTION is seen.
	RestartWait
	// The KV txn has been committed successfully through a RELEASE.
	// Statements are rejected until a COMMIT is seen.
	CommitWait
)

func (TxnStateEnum) String

func (i TxnStateEnum) String() string
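
A sketch of how a caller might branch on the state (the executor's actual handling is internal to the package); the comments restate the constant descriptions above:

	switch state {
	case NoTxn:
		// No transaction in scope; statements may begin a new one.
	case Open:
		// Execute statements inside the current transaction.
	case Aborted:
		// Reject statements until a COMMIT/ROLLBACK is seen.
	case RestartWait:
		// Reject statements until a RESTART_TRANSACTION is seen.
	case CommitWait:
		// KV txn already committed via RELEASE; waiting for COMMIT.
	}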

type VirtualTabler

type VirtualTabler interface {
	// contains filtered or unexported methods
}

VirtualTabler is used to fetch descriptors for virtual tables and databases.

type WrappableMemoryAccount

type WrappableMemoryAccount struct {
	// contains filtered or unexported fields
}

WrappableMemoryAccount encapsulates a MemoryAccount to give it the Wsession()/Wtxn() methods below.

func (*WrappableMemoryAccount) Wsession

Wsession captures the current session monitor pointer so it can be provided transparently to the other Account APIs below.

func (*WrappableMemoryAccount) Wtxn

Wtxn captures the current txn-specific monitor pointer so it can be provided transparently to the other Account APIs below.

type WrappedMemoryAccount

type WrappedMemoryAccount struct {
	// contains filtered or unexported fields
}

WrappedMemoryAccount is the transient structure that carries the extra argument to the MemoryAccount APIs.

func (WrappedMemoryAccount) Clear

func (w WrappedMemoryAccount) Clear(ctx context.Context)

Clear interfaces between Session and mon.MemoryMonitor.

func (WrappedMemoryAccount) Close

func (w WrappedMemoryAccount) Close(ctx context.Context)

Close interfaces between Session and mon.MemoryMonitor.

func (WrappedMemoryAccount) Grow

func (w WrappedMemoryAccount) Grow(ctx context.Context, extraSize int64) error

Grow interfaces between Session and mon.MemoryMonitor.

func (WrappedMemoryAccount) OpenAndInit

func (w WrappedMemoryAccount) OpenAndInit(ctx context.Context, initialAllocation int64) error

OpenAndInit interfaces between Session and mon.MemoryMonitor.

func (WrappedMemoryAccount) ResizeItem

func (w WrappedMemoryAccount) ResizeItem(ctx context.Context, oldSize, newSize int64) error

ResizeItem interfaces between Session and mon.MemoryMonitor.
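
Taken together, the intended lifecycle of a wrapped account looks roughly like the following sketch. How the WrappedMemoryAccount is obtained via Wsession/Wtxn is elided, since those signatures are not listed above; rowSize and newRowSize are hypothetical values:

	var w WrappedMemoryAccount // obtained via Wsession() or Wtxn()
	if err := w.OpenAndInit(ctx, 0 /* initialAllocation */); err != nil {
		return err
	}
	defer w.Close(ctx)
	// Reserve memory before buffering a large result row.
	if err := w.Grow(ctx, rowSize); err != nil {
		return err
	}
	// If the row is later replaced by one of a different size:
	if err := w.ResizeItem(ctx, rowSize, newRowSize); err != nil {
		return err
	}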

Directories

Path	Synopsis
distsqlrun	Package distsqlrun is a generated protocol buffer package.
sqlbase	Package sqlbase is a generated protocol buffer package.
