kv

package
Published: Apr 5, 2016 License: Apache-2.0

Documentation

Overview

Package kv provides a key-value API to an underlying cockroach datastore. Cockroach itself provides a single, monolithic, sorted key-value map, distributed over multiple nodes. Each node holds a set of key ranges. Package kv translates between the monolithic, logical map that Cockroach clients experience and the physically distributed key ranges that comprise the whole.

Package kv implements the logic necessary to locate appropriate nodes based on the keys being read or written. When a request spans a range of keys, multiple RPCs may be sent out.
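
From the client's perspective, this fan-out is transparent. A hedged usage sketch, assuming db is a *client.DB whose sender stack includes this package's DistSender:

// A Scan whose span crosses range boundaries is split into
// per-range RPCs by the kv layer and recombined into one result.
rows, err := db.Scan("a", "z", 0)
if err != nil {
	// Errors from range lookups or RPCs surface here.
	log.Fatal(err)
}
for _, row := range rows {
	fmt.Println(row.Key)
}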

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func GetDefaultDistSenderRetryOptions

func GetDefaultDistSenderRetryOptions() retry.Options

GetDefaultDistSenderRetryOptions returns the default retry options for a DistSender. This is helpful for users who want to override a subset of the default options when creating a custom DistSenderContext.
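
A minimal sketch of that pattern, overriding one knob while keeping the remaining defaults; the MaxBackoff value is illustrative only:

opts := kv.GetDefaultDistSenderRetryOptions()
opts.MaxBackoff = 5 * time.Second // illustrative override
dsCtx := &kv.DistSenderContext{RPCRetryOptions: &opts}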

Types

type DBServer

type DBServer struct {
	// contains filtered or unexported fields
}

A DBServer provides an HTTP server endpoint serving the key-value API. It accepts either JSON or serialized protobuf content types.

func NewDBServer

func NewDBServer(ctx *base.Context, sender client.Sender, stopper *stop.Stopper) *DBServer

NewDBServer allocates and returns a new DBServer.

func (*DBServer) Batch

Batch implements the roachpb.KVServer interface.

type DistSender

type DistSender struct {
	opentracing.Tracer
	// contains filtered or unexported fields
}

A DistSender provides methods to access Cockroach's monolithic, distributed key value store. Each method invocation triggers a lookup or lookups to find replica metadata for implicated key ranges. RPCs are sent to one or more of the replicas to satisfy the method invocation.

func NewDistSender

func NewDistSender(ctx *DistSenderContext, gossip *gossip.Gossip) *DistSender

NewDistSender returns a batch.Sender instance which connects to the Cockroach cluster via the supplied gossip instance. Supplying a DistSenderContext or the fields within is optional. For omitted values, sane defaults will be used.

func (*DistSender) CountRanges

func (ds *DistSender) CountRanges(rs roachpb.RSpan) (int64, *roachpb.Error)

CountRanges returns the number of ranges that encompass the given key span.

func (*DistSender) FirstRange

func (ds *DistSender) FirstRange() (*roachpb.RangeDescriptor, *roachpb.Error)

FirstRange returns the RangeDescriptor for the first range on the cluster, which is retrieved from the gossip protocol instead of the datastore.

func (*DistSender) RangeLookup

func (ds *DistSender) RangeLookup(
	key roachpb.RKey, desc *roachpb.RangeDescriptor, considerIntents, useReverseScan bool,
) ([]roachpb.RangeDescriptor, *roachpb.Error)

RangeLookup dispatches a RangeLookup request for the given metadata key to the replicas of the given range. Note that we allow inconsistent reads when doing range lookups for efficiency. Getting stale data is not a correctness problem; it may infrequently result in additional latency, as additional range lookups may be required. Note also that RangeLookup bypasses the DistSender's Send() method, so there is no error inspection or retry logic here; this is not an issue, since the lookup performs only a single inconsistent read.

func (*DistSender) Send

Send implements the batch.Sender interface. It subdivides the Batch into batches admissible for sending (preventing certain illegal mixtures of requests), executes each individual part (which may span multiple ranges), and recombines the response. When the request spans ranges, it is split up and the corresponding ranges are queried serially, in ascending order. In particular, the first write in a transaction may not be part of the first request sent. This is relevant since the first write is a BeginTransaction request, thus opening up a window of time during which there may be intents of a transaction, but no transaction record entry. Pushing such a transaction will succeed, and may lead to the transaction being aborted early.

type DistSenderContext

type DistSenderContext struct {
	Clock                    *hlc.Clock
	RangeDescriptorCacheSize int32
	// RangeLookupMaxRanges sets how many ranges will be prefetched into the
	// range descriptor cache when dispatching a range lookup request.
	RangeLookupMaxRanges int32
	LeaderCacheSize      int32
	RPCRetryOptions      *retry.Options

	// The RPC dispatcher. Defaults to send but can be changed here for testing
	// purposes.
	RPCSend           rpcSendFn
	RPCContext        *rpc.Context
	RangeDescriptorDB RangeDescriptorDB
	Tracer            opentracing.Tracer
	// contains filtered or unexported fields
}

DistSenderContext holds auxiliary objects that can be passed to NewDistSender.
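
A hedged construction sketch: only a couple of fields are populated, and the rest fall back to the defaults described under NewDistSender. The clock and gossip instance are assumed to be set up elsewhere.

dsCtx := &kv.DistSenderContext{
	Clock:                clock, // assumed *hlc.Clock built elsewhere
	RangeLookupMaxRanges: 8,     // illustrative prefetch count
}
ds := kv.NewDistSender(dsCtx, g) // g is an assumed *gossip.Gossip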

type LocalTestCluster

type LocalTestCluster struct {
	Manual *hlc.ManualClock
	Clock  *hlc.Clock
	Gossip *gossip.Gossip
	Eng    engine.Engine
	Store  *storage.Store
	DB     *client.DB

	Sender *TxnCoordSender

	Stopper *stop.Stopper
	Latency time.Duration // sleep for each RPC sent
	// contains filtered or unexported fields
}

A LocalTestCluster encapsulates an in-memory instantiation of a Cockroach node with a single store using a local sender. Example usage of a LocalTestCluster follows:

s := &kv.LocalTestCluster{}
s.Start(t)
defer s.Stop()

Note that the LocalTestCluster is different from server.TestCluster in that although it uses a distributed sender, there is no RPC traffic.

func (*LocalTestCluster) Start

func (ltc *LocalTestCluster) Start(t util.Tester)

Start starts the test cluster by bootstrapping an in-memory store (defaults to a maximum of 50M). The server is started, launching the node RPC server and all HTTP endpoints. Use the value of TestServer.Addr after Start() for client connections. Use Stop() to shut down the server after the test completes.
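
A hedged sketch of typical use inside a Go test; the key and value are illustrative:

func TestPut(t *testing.T) {
	ltc := &kv.LocalTestCluster{}
	ltc.Start(t)
	defer ltc.Stop()
	// ltc.DB is a *client.DB wired to the in-memory store.
	if err := ltc.DB.Put("test-key", "test-value"); err != nil {
		t.Fatal(err)
	}
}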

func (*LocalTestCluster) Stop

func (ltc *LocalTestCluster) Stop()

Stop stops the cluster.

type RangeDescriptorDB

type RangeDescriptorDB interface {
	// rangeLookup takes a meta key to look up descriptors for,
	// for example \x00\x00meta1aa or \x00\x00meta2f.
	// The two booleans are considerIntents and useReverseScan respectively.
	RangeLookup(roachpb.RKey, *roachpb.RangeDescriptor, bool, bool) ([]roachpb.RangeDescriptor, *roachpb.Error)
	// FirstRange returns the descriptor for the first Range. This is the
	// Range containing all \x00\x00meta1 entries.
	FirstRange() (*roachpb.RangeDescriptor, *roachpb.Error)
}

RangeDescriptorDB is a type which can query range descriptors from an underlying datastore. This interface is used by rangeDescriptorCache to initially retrieve information which will be cached.
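
Because the method signatures are fully specified above, a test double is straightforward. A hedged sketch that always returns a fixed descriptor list, e.g. for exercising the descriptor cache:

// fixedDescriptorDB is a hypothetical test double, not part of this package.
type fixedDescriptorDB struct {
	descs []roachpb.RangeDescriptor
}

func (db fixedDescriptorDB) RangeLookup(
	key roachpb.RKey, _ *roachpb.RangeDescriptor, _, _ bool,
) ([]roachpb.RangeDescriptor, *roachpb.Error) {
	return db.descs, nil
}

func (db fixedDescriptorDB) FirstRange() (*roachpb.RangeDescriptor, *roachpb.Error) {
	return &db.descs[0], nil
}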

type ReplicaInfo

type ReplicaInfo struct {
	roachpb.ReplicaDescriptor
	NodeDesc *roachpb.NodeDescriptor
}

ReplicaInfo extends the Replica structure with the associated node descriptor.

type ReplicaSlice

type ReplicaSlice []ReplicaInfo

A ReplicaSlice is a slice of ReplicaInfo.

func (ReplicaSlice) FindReplica

func (rs ReplicaSlice) FindReplica(storeID roachpb.StoreID) int

FindReplica returns the index of the replica which matches the specified store ID. If no replica matches, -1 is returned.

func (ReplicaSlice) FindReplicaByNodeID

func (rs ReplicaSlice) FindReplicaByNodeID(nodeID roachpb.NodeID) int

FindReplicaByNodeID returns the index of the replica which matches the specified node ID. If no replica matches, -1 is returned.

func (ReplicaSlice) MoveToFront

func (rs ReplicaSlice) MoveToFront(i int)

MoveToFront moves the replica at the given index to the front of the slice, keeping the order of the remaining elements stable. The function will panic when invoked with an invalid index.
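
An illustrative before/after, with hypothetical replicas r0..r3:

// rs holds [r0 r1 r2 r3].
rs.MoveToFront(2)
// rs now holds [r2 r0 r1 r3]; r0, r1 and r3 keep their relative order.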

func (ReplicaSlice) SortByCommonAttributePrefix

func (rs ReplicaSlice) SortByCommonAttributePrefix(attrs []string) int

SortByCommonAttributePrefix rearranges the ReplicaSlice by comparing the attributes to the given reference attributes. The basis for the comparison is that of the common prefix of replica attributes (i.e. the number of equal attributes, starting at the first), with a longer prefix sorting first. The number of attributes successfully matched to at least one replica is returned (hence, if the return value equals the length of the ReplicaSlice, at least one replica matched all attributes).
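
The ordering is by length of the shared attribute prefix. A minimal sketch of that comparison, not this package's internal implementation:

// commonPrefixLen is a hypothetical helper illustrating the basis of the sort.
func commonPrefixLen(ref, attrs []string) int {
	n := 0
	for n < len(ref) && n < len(attrs) && ref[n] == attrs[n] {
		n++
	}
	return n
}

For example, with reference attributes ["us-east", "ssd"], a replica carrying ["us-east", "ssd"] (prefix length 2) sorts before one carrying ["us-east", "hdd"] (prefix length 1), which in turn sorts before one carrying ["eu-west", "ssd"] (prefix length 0).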

func (ReplicaSlice) Swap

func (rs ReplicaSlice) Swap(i, j int)

Swap interchanges the replicas stored at the given indices.

type SendOptions

type SendOptions struct {
	context.Context // must not be nil
	// Ordering indicates how the available endpoints are ordered when
	// deciding which to send to (if there are more than one).
	Ordering orderingPolicy
	// SendNextTimeout is the duration after which RPCs are sent to
	// other replicas in a set.
	SendNextTimeout time.Duration
	// Timeout is the maximum duration of an RPC before failure.
	// 0 for no timeout.
	Timeout time.Duration
}

A SendOptions structure describes the algorithm for sending RPCs to one or more replicas, depending on error conditions and how many successful responses are required.
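
A hedged construction sketch; the timeout values are illustrative, and the unexported Ordering field is left at its zero value:

opts := kv.SendOptions{
	Context:         context.Background(),
	SendNextTimeout: 500 * time.Millisecond, // fan out to the next replica after 500ms
	Timeout:         5 * time.Second,        // per-RPC cap; 0 means no timeout
}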

type TxnCoordSender

type TxnCoordSender struct {
	sync.Mutex // protects txns and txnStats
	// contains filtered or unexported fields
}

A TxnCoordSender is an implementation of client.Sender which wraps a lower-level Sender (either a storage.Stores or a DistSender) to which it sends commands. It acts as a man-in-the-middle, coordinating transaction state for clients. After a transaction is started, the TxnCoordSender starts asynchronously sending heartbeat messages to that transaction's txn record, to keep it live. It also keeps track of each written key or key range over the course of the transaction. When the transaction is committed or aborted, it clears accumulated write intents for the transaction.

func NewTxnCoordSender

func NewTxnCoordSender(
	wrapped client.Sender,
	clock *hlc.Clock,
	linearizable bool,
	tracer opentracing.Tracer,
	stopper *stop.Stopper,
	txnMetrics *TxnMetrics,
) *TxnCoordSender

NewTxnCoordSender creates a new TxnCoordSender for use from a KV distributed DB instance.
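
A hedged wiring sketch: a TxnCoordSender wrapping a DistSender, with the result handed to a client.DB. The names ds, clock, tracer, stopper, and txnMetrics are assumed to be constructed elsewhere (see NewDistSender and NewTxnMetrics), and client.NewDB is assumed as the DB constructor.

sender := kv.NewTxnCoordSender(ds, clock, false /* linearizable */, tracer, stopper, txnMetrics)
db := client.NewDB(sender) // transactions issued via db are now coordinated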

func (*TxnCoordSender) Send

Send implements the batch.Sender interface. If the request is part of a transaction, the TxnCoordSender adds the transaction to a map of active transactions and begins heartbeating it. Every subsequent request for the same transaction updates the lastUpdate timestamp to prevent live transactions from being considered abandoned and garbage collected. Read/write mutating requests have their key or key range added to the transaction's interval tree of key ranges for eventual cleanup via resolved write intents; they're tagged to an outgoing EndTransaction request, with the receiving replica in charge of resolving them.

type TxnMetrics

type TxnMetrics struct {
	Aborts    metric.Rates
	Commits   metric.Rates
	Abandons  metric.Rates
	Durations metric.Histograms

	// Restarts is the number of times we had to restart the transaction.
	Restarts *metric.Histogram
}

TxnMetrics holds all metrics relating to KV transactions.

func NewTxnMetrics

func NewTxnMetrics(txnRegistry *metric.Registry) *TxnMetrics

NewTxnMetrics returns a new instance of TxnMetrics that contains metrics which have been registered with the provided Registry.
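
A minimal sketch, assuming the util/metric package's NewRegistry constructor:

registry := metric.NewRegistry()
txnMetrics := kv.NewTxnMetrics(registry)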
