package module

v0.23.8
Published: Jan 31, 2022 License: AGPL-3.0 Imports: 18 Imported by: 64

Documentation

Index

Constants

const (
	ConsumeProgressVerificationBlockHeight = "ConsumeProgressVerificationBlockHeight"
	ConsumeProgressVerificationChunkIndex  = "ConsumeProgressVerificationChunkIndex"
)

Variables

var ErrMultipleStartup = errors.New("component may only be started once")

Functions

This section is empty.

Types

type AggregatingSigner

type AggregatingSigner interface {
	AggregatingVerifier
	Sign(msg []byte) (crypto.Signature, error)
	Aggregate(sigs []crypto.Signature) (crypto.Signature, error)
}

AggregatingSigner is a signer that can sign a simple message and aggregate multiple signatures into a single aggregated signature.

type AggregatingVerifier

type AggregatingVerifier interface {
	Verifier
	VerifyMany(msg []byte, sig crypto.Signature, keys []crypto.PublicKey) (bool, error)
}

AggregatingVerifier can verify a message against a signature from either a single key or many keys.

type BlockRequester

type BlockRequester interface {

	// RequestBlock indicates that the given block should be queued for retrieval.
	RequestBlock(blockID flow.Identifier)

	// RequestHeight indicates that the given block height should be queued for retrieval.
	RequestHeight(height uint64)

	// Prune manually prunes requests with respect to the given finalized header.
	Prune(final *flow.Header)
}

BlockRequester enables components to request particular blocks by ID from the synchronization system.

type Builder

type Builder interface {

	// BuildOn generates a new payload that is valid with respect to the parent
	// being built upon, with the view being provided by the consensus algorithm.
	// The builder stores the block and validates it against the protocol state
	// before returning it.
	//
	// NOTE: Since the block is stored within Builder, HotStuff MUST propose the
	// block once BuildOn successfully returns.
	BuildOn(parentID flow.Identifier, setter func(*flow.Header) error) (*flow.Header, error)
}

Builder represents an abstracted block construction module that can be used in more than one consensus algorithm. The resulting block is consistent within itself and can be wrapped with additional consensus information such as QCs.
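
To make the BuildOn contract concrete, here is a minimal sketch of a proposer helper that supplies a setter and then hands the stored block off for proposing. The types below are simplified local stand-ins (string IDs instead of flow.Identifier, a reduced header), so all names and fields here are assumptions for illustration only.

package example

import "fmt"

// header is a simplified stand-in for flow.Header.
type header struct {
	ParentID string
	View     uint64
}

// builder mirrors the shape of the Builder interface above, using the local header type.
type builder interface {
	BuildOn(parentID string, setter func(*header) error) (*header, error)
}

// proposeNext builds a block on top of parentID and hands it to the consensus
// algorithm for proposing, since the builder has already stored it.
func proposeNext(b builder, parentID string, view uint64, propose func(*header)) error {
	h, err := b.BuildOn(parentID, func(h *header) error {
		// The setter is where the consensus algorithm injects its own fields,
		// e.g. the current view, before the builder validates and stores the block.
		h.View = view
		return nil
	})
	if err != nil {
		return fmt.Errorf("could not build block on %s: %w", parentID, err)
	}
	// Per the interface docs: once BuildOn succeeds, the block is stored,
	// so it MUST be proposed.
	propose(h)
	return nil
}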

type CacheMetrics

type CacheMetrics interface {
	// report the total number of cached items
	CacheEntries(resource string, entries uint)
	// report the number of times the queried item is found in the cache
	CacheHit(resource string)
	// report the number of times the queried item is found neither in the cache nor in the database
	CacheNotFound(resource string)
	// report the number of times the queried item is not found in the cache but is found in the database
	CacheMiss(resource string)
}

type ChunkAssigner

type ChunkAssigner interface {
	// Assign generates the assignment
	// error returns:
	//  * NoValidChildBlockError indicates that no valid child block is known
	//    (which contains the block's source of randomness)
	//  * unexpected errors should be considered symptoms of internal bugs
	Assign(result *flow.ExecutionResult, blockID flow.Identifier) (*chmodels.Assignment, error)
}

ChunkAssigner presents an interface for assigning chunks to the verifier nodes

type ChunkVerifier

type ChunkVerifier interface {
	// Verify verifies the given VerifiableChunk by executing it and checking the final state commitment.
	// It returns a Spock Secret as a byte array, a verification fault of the chunk, and an error.
	// Note: Verify should only be executed on non-system chunks. It returns an error if it is invoked on
	// a system chunk.
	// TODO return challenges plus errors
	Verify(ch *verification.VerifiableChunkData) ([]byte, chmodels.ChunkFault, error)

	// SystemChunkVerify verifies a given VerifiableChunk corresponding to a system chunk
	// by executing it and checking the final state commitment.
	// It returns a Spock Secret as a byte array, a verification fault of the chunk, and an error.
	// Note: SystemChunkVerify should only be executed on system chunks. It returns an error if it is invoked on
	// non-system chunks.
	SystemChunkVerify(ch *verification.VerifiableChunkData) ([]byte, chmodels.ChunkFault, error)
}

ChunkVerifier provides functionality to verify chunks

type CleanerMetrics

type CleanerMetrics interface {
	RanGC(took time.Duration)
}

type ClusterRootQCVoter

type ClusterRootQCVoter interface {

	// Vote handles the full procedure of generating a vote, submitting it to the
	// epoch smart contract, and verifying submission. Returns an error only if
	// there is a critical error that would make it impossible for the vote to be
	// submitted. Otherwise, exits when the vote has been successfully submitted.
	//
	// It is safe to run Vote multiple times within a single setup phase.
	Vote(context.Context, protocol.Epoch) error
}

ClusterRootQCVoter is responsible for submitting a vote to the cluster QC contract to coordinate generation of a valid root quorum certificate for the next epoch.
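
Since Vote is documented as safe to run multiple times within a single setup phase, a caller can simply retry it until it succeeds or the phase ends. The sketch below assumes a local mirror of the interface with the epoch argument already bound; it is not the package's own retry logic.

package example

import (
	"context"
	"log"
	"time"
)

// qcVoter stands in for the ClusterRootQCVoter interface above, with the
// epoch argument already bound for brevity.
type qcVoter interface {
	Vote(ctx context.Context) error
}

// submitRootQCVote runs Vote when the EpochSetup phase starts. Since Vote is
// documented as safe to run multiple times within a single setup phase, we
// simply retry on error until the phase context is cancelled.
func submitRootQCVote(ctx context.Context, voter qcVoter, retryInterval time.Duration) {
	for {
		err := voter.Vote(ctx)
		if err == nil {
			return // vote submitted and verified
		}
		log.Printf("cluster root QC vote failed, retrying: %v", err)

		select {
		case <-ctx.Done():
			return // setup phase over (or node shutting down)
		case <-time.After(retryInterval):
		}
	}
}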

type CollectionMetrics

type CollectionMetrics interface {
	// TransactionIngested is called when a new transaction is ingested by the
	// node. It increments the total count of ingested transactions and starts
	// a tx->col span for the transaction.
	TransactionIngested(txID flow.Identifier)

	// ClusterBlockProposed is called when a new collection is proposed by us or
	// any other node in the cluster.
	ClusterBlockProposed(block *cluster.Block)

	// ClusterBlockFinalized is called when a collection is finalized.
	ClusterBlockFinalized(block *cluster.Block)
}

type ComplianceMetrics

type ComplianceMetrics interface {
	FinalizedHeight(height uint64)
	CommittedEpochFinalView(view uint64)
	SealedHeight(height uint64)
	BlockFinalized(*flow.Block)
	BlockSealed(*flow.Block)
	BlockProposalDuration(duration time.Duration)
	CurrentEpochCounter(counter uint64)
	CurrentEpochPhase(phase flow.EpochPhase)
	CurrentEpochFinalView(view uint64)
	CurrentDKGPhase1FinalView(view uint64)
	CurrentDKGPhase2FinalView(view uint64)
	CurrentDKGPhase3FinalView(view uint64)
	EpochEmergencyFallbackTriggered()
}

type ConsensusMetrics

type ConsensusMetrics interface {
	// StartCollectionToFinalized reports Metrics C1: Collection Received by CCL → Collection Included in Finalized Block
	StartCollectionToFinalized(collectionID flow.Identifier)

	// FinishCollectionToFinalized reports Metrics C1: Collection Received by CCL → Collection Included in Finalized Block
	FinishCollectionToFinalized(collectionID flow.Identifier)

	// StartBlockToSeal reports Metrics C4: Block Received by CCL → Block Seal in finalized block
	StartBlockToSeal(blockID flow.Identifier)

	// FinishBlockToSeal reports Metrics C4: Block Received by CCL → Block Seal in finalized block
	FinishBlockToSeal(blockID flow.Identifier)

	// EmergencySeal increments the number of seals that were created in emergency mode
	EmergencySeal()

	// OnReceiptProcessingDuration records the number of seconds spent processing a receipt
	OnReceiptProcessingDuration(duration time.Duration)

	// OnApprovalProcessingDuration records the number of seconds spent processing an approval
	OnApprovalProcessingDuration(duration time.Duration)

	// CheckSealingDuration records absolute time for the full sealing check by the consensus match engine
	CheckSealingDuration(duration time.Duration)
}

type DKGContractClient added in v0.22.4

type DKGContractClient interface {

	// Broadcast broadcasts a message to all other nodes participating in the
	// DKG. The message is broadcast by submitting a transaction to the DKG
	// smart contract. An error is returned if the transaction has failed.
	// TBD: retry logic
	Broadcast(msg messages.BroadcastDKGMessage) error

	// ReadBroadcast reads the broadcast messages from the smart contract.
	// Messages are returned in the order in which they were broadcast (received
	// and stored in the smart contract). The parameters are:
	//
	// * fromIndex: return messages with index >= fromIndex
	// * referenceBlock: a marker for the state against which the query should
	//   be executed
	//
	// DKG nodes should call ReadBroadcast one final time once they have
	// observed the phase deadline trigger to guarantee they receive all
	// messages for that phase.
	ReadBroadcast(fromIndex uint, referenceBlock flow.Identifier) ([]messages.BroadcastDKGMessage, error)

	// SubmitResult submits the final public result of the DKG protocol. This
	// represents the group public key and the node's local computation of the
	// public keys for each DKG participant.
	//
	// SubmitResult must be called strictly after the final phase has ended.
	SubmitResult(crypto.PublicKey, []crypto.PublicKey) error
}

DKGContractClient enables interacting with the DKG smart contract. This contract is deployed to the service account as part of a collection of smart contracts that facilitate and manage epoch transitions.
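
As a sketch of the ReadBroadcast bookkeeping described above, the poller below tracks fromIndex so each call only returns messages it has not seen yet. The message and client types are simplified local stand-ins for the flow-go types, so their shapes are assumptions.

package example

// broadcastMessage is a simplified stand-in for messages.BroadcastDKGMessage.
type broadcastMessage struct {
	Data []byte
}

// broadcastReader mirrors the ReadBroadcast method of DKGContractClient,
// using a string block identifier instead of flow.Identifier.
type broadcastReader interface {
	ReadBroadcast(fromIndex uint, referenceBlock string) ([]broadcastMessage, error)
}

// dkgPoller remembers how many broadcast messages it has already consumed.
type dkgPoller struct {
	client    broadcastReader
	fromIndex uint
}

// poll fetches all broadcast messages with index >= fromIndex against the
// given reference block, and advances fromIndex so the next poll only sees
// newer messages. Per the docs, it should also be called one final time after
// the phase deadline to guarantee no messages for that phase are missed.
func (p *dkgPoller) poll(referenceBlock string) ([]broadcastMessage, error) {
	msgs, err := p.client.ReadBroadcast(p.fromIndex, referenceBlock)
	if err != nil {
		return nil, err
	}
	p.fromIndex += uint(len(msgs))
	return msgs, nil
}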

type DKGController added in v0.22.4

type DKGController interface {

	// Run starts the DKG controller and begins phase 1. It is a blocking call
	// that returns only when the controller is shut down or when an error is
	// encountered in one of the protocol phases.
	Run() error

	// EndPhase1 notifies the controller to end phase 1, and start phase 2.
	EndPhase1() error

	// EndPhase2 notifies the controller to end phase 2, and start phase 3.
	EndPhase2() error

	// End terminates the DKG state machine and records the artifacts.
	End() error

	// Shutdown stops the controller regardless of the current state.
	Shutdown()

	// Poll instructs the controller to actively fetch broadcast messages (e.g.
	// read from the DKG smart contract). The method does not return until all
	// received messages are processed.
	Poll(blockReference flow.Identifier) error

	// GetArtifacts returns our node's private key share, the group public key,
	// and the list of all nodes' public keys (including ours), as computed by
	// the DKG.
	GetArtifacts() (crypto.PrivateKey, crypto.PublicKey, []crypto.PublicKey)

	// GetIndex returns the index of this node in the DKG committee list.
	GetIndex() int

	// SubmitResult instructs the broker to publish the results of the DKG run
	// (e.g. publish to the DKG smart contract).
	SubmitResult() error
}

DKGController controls the execution of a Joint Feldman DKG instance.
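
The phase transitions have to be driven externally, for example by deadlines observed by the node. Below is a hedged sketch of such a driver over a local mirror of the controller interface; the deadline channels and the exact ordering around Run returning are assumptions, not the package's own engine.

package example

// phaseController mirrors the subset of DKGController driven below.
type phaseController interface {
	Run() error
	EndPhase1() error
	EndPhase2() error
	End() error
	SubmitResult() error
}

// runDKG drives a controller through the three Joint Feldman phases. The
// phase1Done, phase2Done and phase3Done channels are assumed to be closed by
// an external observer (e.g. when the respective deadline view is reached).
func runDKG(ctrl phaseController, phase1Done, phase2Done, phase3Done <-chan struct{}) error {
	runErr := make(chan error, 1)
	go func() { runErr <- ctrl.Run() }() // blocks until shutdown or an error in a phase

	<-phase1Done
	if err := ctrl.EndPhase1(); err != nil {
		return err
	}
	<-phase2Done
	if err := ctrl.EndPhase2(); err != nil {
		return err
	}
	<-phase3Done
	if err := ctrl.End(); err != nil { // terminates the state machine and records the artifacts
		return err
	}
	// Publish the results (e.g. to the DKG smart contract), then surface any
	// error with which Run exited.
	if err := ctrl.SubmitResult(); err != nil {
		return err
	}
	return <-runErr
}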

type DKGControllerFactory added in v0.22.4

type DKGControllerFactory interface {

	// Create instantiates a new DKGController.
	Create(dkgInstanceID string, participants flow.IdentityList, seed []byte) (DKGController, error)
}

DKGControllerFactory is a factory to create instances of DKGController.

type EngineMetrics

type EngineMetrics interface {
	MessageSent(engine string, message string)
	MessageReceived(engine string, message string)
	MessageHandled(engine string, messages string)
}

type EntriesFunc added in v0.14.6

type EntriesFunc func() uint

type EpochLookup added in v0.20.0

type EpochLookup interface {

	// EpochForView returns the counter of the epoch that the view belongs to.
	EpochForView(view uint64) (epochCounter uint64, err error)

	// EpochForViewWithFallback returns the counter of the epoch that the view belongs to.
	EpochForViewWithFallback(view uint64) (epochCounter uint64, err error)
}

EpochLookup provides a method to find epochs by view.

type ExecutionMetrics

type ExecutionMetrics interface {
	LedgerMetrics
	RuntimeMetrics
	ProviderMetrics
	WALMetrics

	// StartBlockReceivedToExecuted starts a span to trace the duration of a block
	// from being received for execution to execution being finished
	StartBlockReceivedToExecuted(blockID flow.Identifier)

	// FinishBlockReceivedToExecuted finishes a span to trace the duration of a block
	// from being received for execution to execution being finished
	FinishBlockReceivedToExecuted(blockID flow.Identifier)

	// ExecutionStateReadsPerBlock reports number of state access/read operations per block
	ExecutionStateReadsPerBlock(reads uint64)

	// ExecutionStorageStateCommitment reports the storage size of a state commitment in bytes
	ExecutionStorageStateCommitment(bytes int64)

	// ExecutionLastExecutedBlockHeight reports last executed block height
	ExecutionLastExecutedBlockHeight(height uint64)

	// ExecutionBlockExecuted reports the total time and computation spent on executing a block
	ExecutionBlockExecuted(dur time.Duration, compUsed uint64, txCounts int, colCounts int)

	// ExecutionCollectionExecuted reports the total time and computation spent on executing a collection
	ExecutionCollectionExecuted(dur time.Duration, compUsed uint64, txCounts int)

	// ExecutionTransactionExecuted reports the total time and computation spent on executing a single transaction
	ExecutionTransactionExecuted(dur time.Duration, compUsed uint64, eventCounts int, failed bool)

	// ExecutionScriptExecuted reports the time spent on executing a script
	ExecutionScriptExecuted(dur time.Duration, compUsed uint64)

	// ExecutionCollectionRequestSent reports when a request for a collection is sent to a collection node
	ExecutionCollectionRequestSent()

	// Unused
	ExecutionCollectionRequestRetried()

	// ExecutionSync reports when the state syncing is triggered or stopped.
	ExecutionSync(syncing bool)

	ExecutionBlockDataUploadStarted()

	ExecutionBlockDataUploadFinished(dur time.Duration)
}

type Finalizer

type Finalizer interface {

	// MakeValid will mark a block as having passed the consensus algorithm's
	// internal validation.
	MakeValid(blockID flow.Identifier) error

	// MakeFinal will declare a block and all of its ancestors as finalized, which
	// makes it an immutable part of the blockchain. Returning an error indicates
	// some fatal condition and will cause the finalization logic to terminate.
	MakeFinal(blockID flow.Identifier) error
}

Finalizer is used by the consensus algorithm to inform other components (such as the protocol state) about the validity of block headers and the finalization of blocks.

Since we have two different protocol states, one for the main consensus and one for the collection cluster consensus, the Finalizer interface allows each protocol state to provide its own implementation for updating its state when a block has been validated or finalized.

Why do MakeValid and MakeFinal need to return an error? Updating the protocol state should always succeed when the data is consistent. However, if the protocol state is corrupted, an error should be returned and the consensus algorithm should halt. The error returned from MakeValid and MakeFinal therefore allows the protocol state to report such exceptions.
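
To make that error contract concrete, a caller typically treats any returned error as fatal rather than recoverable. A minimal sketch, using a local stand-in for the interface with string block IDs:

package example

import "log"

// finalizer mirrors the Finalizer interface above with string block IDs.
type finalizer interface {
	MakeValid(blockID string) error
	MakeFinal(blockID string) error
}

// finalizeBlock marks a block valid and then final. Any error indicates a
// corrupted protocol state, so the caller halts instead of continuing.
func finalizeBlock(f finalizer, blockID string) {
	if err := f.MakeValid(blockID); err != nil {
		log.Fatalf("protocol state exception while marking block %s valid: %v", blockID, err)
	}
	if err := f.MakeFinal(blockID); err != nil {
		log.Fatalf("protocol state exception while finalizing block %s: %v", blockID, err)
	}
}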

type HotStuff

type HotStuff interface {
	ReadyDoneAware

	// SubmitProposal submits a new block proposal to the HotStuff event loop.
	// This method blocks until the proposal is accepted into the event queue.
	//
	// Block proposals must be submitted in order and only if they extend a
	// block already known to HotStuff core.
	SubmitProposal(proposal *flow.Header, parentView uint64)

	// SubmitVote submits a new vote to the HotStuff event loop.
	// This method blocks until the vote is accepted into the event queue.
	//
	// Votes may be submitted in any order.
	SubmitVote(originID flow.Identifier, blockID flow.Identifier, view uint64, sigData []byte)
}

HotStuff defines the interface to the core HotStuff algorithm. It includes a method to start the event loop, and utilities to submit block proposals and votes received from other replicas.

type HotStuffFollower

type HotStuffFollower interface {
	ReadyDoneAware

	// SubmitProposal feeds a new block proposal into the HotStuffFollower.
	// This method blocks until the proposal is accepted into the event queue.
	//
	// Block proposals must be submitted in order, i.e. a proposal's parent must
	// have been previously processed by the HotStuffFollower.
	SubmitProposal(proposal *flow.Header, parentView uint64)
}

HotStuffFollower is run by non-consensus nodes to observe the blockchain and make a local determination about block finalization. While the process of reaching consensus (and guaranteeing its safety and liveness) is very intricate, the criteria to confirm that consensus has been reached are relatively straightforward. Each non-consensus node can simply observe the blockchain and determine locally which blocks have been finalized without requiring additional information from the consensus nodes.

Specifically, the HotStuffFollower informs other components within the node about the finalization of blocks. It consumes block proposals broadcast by the consensus nodes, verifies the block header, and locally evaluates the finalization rules.

Notes:

  • HotStuffFollower does not handle disconnected blocks. Each block's parent must have been previously processed by the HotStuffFollower.
  • HotStuffFollower internally prunes blocks below the last finalized view. When receiving a block proposal, it might not have the proposal's parent anymore. Nevertheless, HotStuffFollower needs the parent's view, which must be supplied in addition to the proposal, as sketched below.
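
A hedged sketch of that bookkeeping: the forwarder below caches the view of every header it has forwarded, so it can supply parentView even after the follower has pruned the parent. All types are simplified local stand-ins (string IDs, no pruning of the cache), not the actual flow-go types.

package example

import "fmt"

// hdr is a simplified stand-in for flow.Header.
type hdr struct {
	ID       string
	ParentID string
	View     uint64
}

// follower mirrors HotStuffFollower.SubmitProposal with local types.
type follower interface {
	SubmitProposal(proposal *hdr, parentView uint64)
}

// forwarder remembers the view of every header it has seen, so it can provide
// the parent's view alongside each proposal (the follower may have pruned it).
type forwarder struct {
	follower follower
	views    map[string]uint64 // header ID -> view; pruning omitted for brevity
}

func (f *forwarder) submit(proposal *hdr) error {
	parentView, ok := f.views[proposal.ParentID]
	if !ok {
		return fmt.Errorf("unknown parent %s: proposals must be submitted in order", proposal.ParentID)
	}
	f.views[proposal.ID] = proposal.View
	f.follower.SubmitProposal(proposal, parentView)
	return nil
}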

type HotstuffMetrics

type HotstuffMetrics interface {
	// HotStuffBusyDuration reports Metrics C6 HotStuff Busy Duration
	HotStuffBusyDuration(duration time.Duration, event string)

	// HotStuffIdleDuration reports Metrics C6 HotStuff Idle Duration
	HotStuffIdleDuration(duration time.Duration)

	// HotStuffWaitDuration reports Metrics C6 HotStuff Wait Duration
	HotStuffWaitDuration(duration time.Duration, event string)

	// SetCurView reports Metrics C8: Current View
	SetCurView(view uint64)

	// SetQCView reports Metrics C9: View of Newest Known QC
	SetQCView(view uint64)

	// CountSkipped reports the number of times we skipped ahead.
	CountSkipped()

	// CountTimeout reports the number of times we timed out.
	CountTimeout()

	// SetTimeout sets the current timeout duration
	SetTimeout(duration time.Duration)

	// CommitteeProcessingDuration measures the time which the HotStuff's core logic
	// spends in the hotstuff.Committee component, i.e. the time determining consensus
	// committee relations.
	CommitteeProcessingDuration(duration time.Duration)

	// SignerProcessingDuration measures the time which the HotStuff's core logic
	// spends in the hotstuff.Signer component, i.e. the time spent on crypto-related operations.
	SignerProcessingDuration(duration time.Duration)

	// ValidatorProcessingDuration measures the time which the HotStuff's core logic
	// spends in the hotstuff.Validator component, i.e. the time spent verifying
	// consensus messages.
	ValidatorProcessingDuration(duration time.Duration)

	// PayloadProductionDuration measures the time which the HotStuff's core logic
	// spends in the module.Builder component, i.e. the time spent generating block payloads.
	PayloadProductionDuration(duration time.Duration)
}

type Job added in v0.15.0

type Job interface {
	// each job has a unique ID for deduplication
	ID() JobID
}

type JobConsumer added in v0.15.0

type JobConsumer interface {
	NewJobListener

	// Start starts processing jobs from a job queue. If this is the first time, a processed index
	// will be initialized in the storage. If it fails to initialize, an error will be returned
	Start(defaultIndex uint64) error

	// Stop gracefully stops the consumer from reading new jobs from the job queue. It does not stop
	// existing workers from finishing their jobs.
	// It blocks until the existing workers finish processing their jobs.
	Stop()

	// NotifyJobIsDone lets the consumer know a job has been finished, so that the consumer will take
	// the next job from the job queue if there are workers available. It returns the last processed job index.
	NotifyJobIsDone(JobID) uint64

	// Size returns the number of jobs currently being processed by the consumer.
	Size() uint
}

JobConsumer consumes jobs from a job queue. It remembers which jobs it has processed and is able to resume processing from the next one.

type JobID added in v0.15.0

type JobID string

JobID is a unique ID of the job.

type JobQueue added in v0.15.0

type JobQueue interface {
	// Add a job to the job queue
	Add(job Job) error
}

type Jobs added in v0.15.0

type Jobs interface {
	AtIndex(index uint64) (Job, error)

	// Head returns the index of the last job
	Head() (uint64, error)
}

Jobs is the reader for an ordered job queue. Jobs can be fetched by index, which starts from 0.
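
For illustration, here is a minimal in-memory Jobs reader (a sketch, not one of the package's storage-backed implementations) showing the 0-based indexing and the Head contract:

package example

import (
	"errors"
	"sync"
)

type JobID string

// Job mirrors the interface above.
type Job interface {
	ID() JobID
}

// memJobs is an in-memory, append-only job reader. Indexing starts at 0.
type memJobs struct {
	mu   sync.RWMutex
	jobs []Job
}

var errNotFound = errors.New("job not found")

// Add appends a job, which satisfies the JobQueue interface above.
func (m *memJobs) Add(job Job) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.jobs = append(m.jobs, job)
	return nil
}

// AtIndex returns the job stored at the given index, or an error if no job
// with that index exists yet.
func (m *memJobs) AtIndex(index uint64) (Job, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	if index >= uint64(len(m.jobs)) {
		return nil, errNotFound
	}
	return m.jobs[index], nil
}

// Head returns the index of the last (most recently added) job.
func (m *memJobs) Head() (uint64, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	if len(m.jobs) == 0 {
		return 0, errNotFound
	}
	return uint64(len(m.jobs) - 1), nil
}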

type LedgerMetrics

type LedgerMetrics interface {
	// ForestApproxMemorySize records approximate memory usage of forest (all in-memory trees)
	ForestApproxMemorySize(bytes uint64)

	// ForestNumberOfTrees current number of trees in a forest (in memory)
	ForestNumberOfTrees(number uint64)

	// LatestTrieRegCount records the number of unique registers allocated (in the latest created trie)
	LatestTrieRegCount(number uint64)

	// LatestTrieRegCountDiff records the difference between the number of unique registers allocated in the latest created trie and its parent trie
	LatestTrieRegCountDiff(number uint64)

	// LatestTrieMaxDepth records the maximum depth of the last created trie
	LatestTrieMaxDepth(number uint64)

	// LatestTrieMaxDepthDiff records the difference between the max depth of the latest created trie and parent trie
	LatestTrieMaxDepthDiff(number uint64)

	// UpdateCount increases a counter of performed updates
	UpdateCount()

	// ProofSize records a proof size
	ProofSize(bytes uint32)

	// UpdateValuesNumber accumulates the number of updated values
	UpdateValuesNumber(number uint64)

	// UpdateValuesSize accumulates the total size (in bytes) of updated values
	UpdateValuesSize(byte uint64)

	// UpdateDuration records absolute time for the update of a trie
	UpdateDuration(duration time.Duration)

	// UpdateDurationPerItem records update time for single value (total duration / number of updated values)
	UpdateDurationPerItem(duration time.Duration)

	// ReadValuesNumber accumulates the number of read values
	ReadValuesNumber(number uint64)

	// ReadValuesSize accumulates the total size (in bytes) of read values
	ReadValuesSize(byte uint64)

	// ReadDuration records absolute time for the read from a trie
	ReadDuration(duration time.Duration)

	// ReadDurationPerItem records read time for single value (total duration / number of read values)
	ReadDurationPerItem(duration time.Duration)
}

LedgerMetrics provides an interface to record Ledger Storage metrics. Ledger storage is non-linear (fork-aware), so certain metrics are averaged and computed before being emitted, for better visibility.

type Local

type Local interface {

	// NodeID returns the node ID of the local node.
	NodeID() flow.Identifier

	// Address returns the (listen) address of the local node.
	Address() string

	// Sign provides a signature oracle that, given a message and a hasher,
	// generates and returns a signature over the message using the node's private key
	// and the given hasher.
	Sign([]byte, hash.Hasher) (crypto.Signature, error)

	// NotMeFilter returns a handy filter that excludes the local node's identity when searching identities.
	NotMeFilter() flow.IdentityFilter

	// SignFunc provides a signature oracle that, given a message, a hasher, and a signing function,
	// generates and returns a signature over the message using the node's private key
	// and the given hasher, by invoking the given signing function. The overall idea of this function
	// is to not expose the private key to the caller.
	SignFunc([]byte, hash.Hasher, func(crypto.PrivateKey, []byte, hash.Hasher) (crypto.Signature,
		error)) (crypto.Signature, error)
}

Local encapsulates the stable local node information.

type MempoolMetrics

type MempoolMetrics interface {
	MempoolEntries(resource string, entries uint)
	Register(resource string, entriesFunc EntriesFunc) error
}

type Merger

type Merger interface {
	// Join concatenates the provided signatures. The merger has an internal notion
	// of the expected length of each signature. It returns the sentinel error
	// `verification.ErrInvalidFormat` if one of the signatures does not conform
	// to the expected byte length.
	Join(sig1, sig2 crypto.Signature) ([]byte, error)
	// Split separates the concatenated signature into its two components. The
	// merger has an internal notion of the expected byte length of each
	// signature. It returns the sentinel error `verification.ErrInvalidFormat`
	// if either signature does not conform to the expected length.
	Split(combined []byte) (crypto.Signature, crypto.Signature, error)
}

Merger is responsible for combining two signatures, but it must do so in a cryptographically unaware way (agnostic of the byte structure of the signatures). However, the merger has an internal notion of the expected length of each signature and errors if the inputs have incompatible lengths.
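
A byte-level sketch of such a merger, assuming two fixed expected lengths; the concrete lengths and the sentinel error live in the actual package, so plain values and a local error are used here:

package example

import (
	"errors"
	"fmt"
)

// errInvalidFormat stands in for the package's sentinel error
// verification.ErrInvalidFormat.
var errInvalidFormat = errors.New("invalid signature format")

// merger joins and splits two signatures of fixed, known byte lengths without
// interpreting their contents.
type merger struct {
	len1, len2 int // expected byte lengths of the first and second signature
}

// Join concatenates sig1 and sig2 after checking their lengths.
func (m merger) Join(sig1, sig2 []byte) ([]byte, error) {
	if len(sig1) != m.len1 || len(sig2) != m.len2 {
		return nil, fmt.Errorf("unexpected signature lengths (%d, %d): %w", len(sig1), len(sig2), errInvalidFormat)
	}
	combined := make([]byte, 0, m.len1+m.len2)
	combined = append(combined, sig1...)
	combined = append(combined, sig2...)
	return combined, nil
}

// Split separates a combined signature back into its two components.
func (m merger) Split(combined []byte) ([]byte, []byte, error) {
	if len(combined) != m.len1+m.len2 {
		return nil, nil, fmt.Errorf("unexpected combined length %d: %w", len(combined), errInvalidFormat)
	}
	return combined[:m.len1], combined[m.len1:], nil
}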

type NetworkMetrics

type NetworkMetrics interface {
	ResolverMetrics

	// NetworkMessageSent size in bytes and count of the network message sent
	NetworkMessageSent(sizeBytes int, topic string, messageType string)

	// NetworkMessageReceived size in bytes and count of the network message received
	NetworkMessageReceived(sizeBytes int, topic string, messageType string)

	// NetworkDuplicateMessagesDropped counts number of messages dropped due to duplicate detection
	NetworkDuplicateMessagesDropped(topic string, messageType string)

	// Message receive queue metrics
	// MessageAdded increments the metric tracking the number of messages in the queue with the given priority
	MessageAdded(priority int)

	// MessageRemoved decrements the metric tracking the number of messages in the queue with the given priority
	MessageRemoved(priority int)

	// QueueDuration tracks the time spent by a message with the given priority in the queue
	QueueDuration(duration time.Duration, priority int)

	// InboundProcessDuration tracks the time a queue worker is blocked by an engine while processing an incoming message on the specified topic (i.e., channel).
	InboundProcessDuration(topic string, duration time.Duration)

	// OutboundConnections updates the metric tracking the number of outbound connections of this node
	OutboundConnections(connectionCount uint)

	// InboundConnections updates the metric tracking the number of inbound connections of this node
	InboundConnections(connectionCount uint)

	// UnstakedOutboundConnections updates the metric tracking the number of outbound connections to unstaked nodes
	UnstakedOutboundConnections(connectionCount uint)

	// UnstakedInboundConnections updates the metric tracking the number of inbound connections from unstaked nodes
	UnstakedInboundConnections(connectionCount uint)
}

type NewJobListener added in v0.15.0

type NewJobListener interface {
	// Check lets the producer notify the consumer that a new job has been added, so that the consumer
	// can check whether there is a worker available to process that job.
	Check()
}

type NoopReadDoneAware added in v0.20.6

type NoopReadDoneAware struct{}

func (*NoopReadDoneAware) Done added in v0.20.6

func (n *NoopReadDoneAware) Done() <-chan struct{}

func (*NoopReadDoneAware) Ready added in v0.20.6

func (n *NoopReadDoneAware) Ready() <-chan struct{}

type PendingBlockBuffer

type PendingBlockBuffer interface {
	Add(originID flow.Identifier, proposal *messages.BlockProposal) bool

	ByID(blockID flow.Identifier) (*flow.PendingBlock, bool)

	ByParentID(parentID flow.Identifier) ([]*flow.PendingBlock, bool)

	DropForParent(parentID flow.Identifier)

	PruneByHeight(height uint64)

	Size() uint
}

PendingBlockBuffer defines an interface for a cache of pending blocks that cannot yet be processed because they do not connect to the rest of the chain state. They are indexed by parent ID to enable processing all of a parent's children once the parent is received.
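
The parent-ID indexing can be sketched with plain maps: the buffer below stores pending items keyed both by their own ID and by their parent, so all children become retrievable once the parent arrives. Types are simplified local stand-ins and height pruning is omitted.

package example

// pendingBlock is a simplified stand-in for flow.PendingBlock.
type pendingBlock struct {
	ID       string
	ParentID string
	Height   uint64
}

// pendingBuffer caches blocks whose parents are not yet known, indexed both by
// their own ID and by their parent ID.
type pendingBuffer struct {
	byID     map[string]*pendingBlock
	byParent map[string][]*pendingBlock
}

func newPendingBuffer() *pendingBuffer {
	return &pendingBuffer{
		byID:     make(map[string]*pendingBlock),
		byParent: make(map[string][]*pendingBlock),
	}
}

// Add caches a pending block; it returns false if the block is already known.
func (b *pendingBuffer) Add(block *pendingBlock) bool {
	if _, ok := b.byID[block.ID]; ok {
		return false
	}
	b.byID[block.ID] = block
	b.byParent[block.ParentID] = append(b.byParent[block.ParentID], block)
	return true
}

// ByParentID returns all cached children of the given parent, so they can all
// be processed once the parent has been received.
func (b *pendingBuffer) ByParentID(parentID string) ([]*pendingBlock, bool) {
	children, ok := b.byParent[parentID]
	return children, ok
}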

type PendingClusterBlockBuffer

type PendingClusterBlockBuffer interface {
	Add(originID flow.Identifier, proposal *messages.ClusterBlockProposal) bool

	ByID(blockID flow.Identifier) (*cluster.PendingBlock, bool)

	ByParentID(parentID flow.Identifier) ([]*cluster.PendingBlock, bool)

	DropForParent(parentID flow.Identifier)

	PruneByHeight(height uint64)

	Size() uint
}

PendingClusterBlockBuffer is the same thing as PendingBlockBuffer, but for collection node cluster consensus.

type PingMetrics

type PingMetrics interface {
	// NodeReachable tracks the round trip time in milliseconds taken to ping a node
	// The nodeInfo provides additional information about the node such as the name of the node operator
	NodeReachable(node *flow.Identity, nodeInfo string, rtt time.Duration)

	// NodeInfo tracks the software version, sealed height and hotstuff view of a node
	NodeInfo(node *flow.Identity, nodeInfo string, version string, sealedHeight uint64, hotstuffCurView uint64)
}

type ProcessingNotifier added in v0.15.0

type ProcessingNotifier interface {
	Notify(entityID flow.Identifier)
}

ProcessingNotifier allows the engine underneath a worker to report that an entity has been processed, without knowing about the job queue. It is a callback through which the worker can convert the entity ID into a job ID and notify the consumer about a finished job.

In the current version, the entities used with this interface are chunk and block IDs.

type ProviderMetrics

type ProviderMetrics interface {
	// ChunkDataPackRequested is executed every time a chunk data pack request arrives at the execution node.
	// It increases the request counter by one.
	ChunkDataPackRequested()
}

type QCContractClient

type QCContractClient interface {

	// SubmitVote submits the given vote to the cluster QC aggregator smart
	// contract. This function returns only once the transaction has been
	// processed by the network. An error is returned if the transaction has
	// failed and should be re-submitted.
	SubmitVote(ctx context.Context, vote *model.Vote) error

	// Voted returns true if we have successfully submitted a vote to the
	// cluster QC aggregator smart contract for the current epoch.
	Voted(ctx context.Context) (bool, error)
}

QCContractClient enables interacting with the cluster QC aggregator smart contract. This contract is deployed to the service account as part of a collection of smart contracts that facilitate and manage epoch transitions.

type ReadyDoneAware

type ReadyDoneAware interface {
	// Ready commences startup of the module, and returns a ready channel that is closed once
	// startup has completed. Note that the ready channel may never close if errors are
	// encountered during startup.
	// If shutdown has already commenced before this method is called for the first time,
	// startup will not be performed and the returned channel will also never close.
	// This should be an idempotent method.
	Ready() <-chan struct{}

	// Done commences shutdown of the module, and returns a done channel that is closed once
	// shutdown has completed. Note that the done channel should be closed even if errors are
	// encountered during shutdown.
	// This should be an idempotent method.
	Done() <-chan struct{}
}

WARNING: The semantics of this interface will be changing in the near future, with startup / shutdown capabilities being delegated to the Startable interface instead. For more details, see https://github.com/onflow/flow-go/pull/1167

ReadyDoneAware provides an easy interface to wait for module startup and shutdown. Modules that implement this interface only support a single start-stop cycle, and will not restart if Ready() is called again after shutdown has already commenced.
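
A common way to satisfy this interface is a pair of channels guarded by sync.Once, so repeated calls are idempotent and only a single start-stop cycle is performed. A minimal sketch (it omits the subtlety of never closing the ready channel when shutdown precedes startup):

package example

import "sync"

// component is a minimal ReadyDoneAware implementation.
type component struct {
	startOnce sync.Once
	stopOnce  sync.Once
	ready     chan struct{}
	done      chan struct{}
}

func newComponent() *component {
	return &component{
		ready: make(chan struct{}),
		done:  make(chan struct{}),
	}
}

// Ready commences startup and returns a channel that is closed once startup
// has completed. Calling it again has no further effect.
func (c *component) Ready() <-chan struct{} {
	c.startOnce.Do(func() {
		go func() {
			// ... perform startup work here ...
			close(c.ready)
		}()
	})
	return c.ready
}

// Done commences shutdown and returns a channel that is closed once shutdown
// has completed, even if startup never finished.
func (c *component) Done() <-chan struct{} {
	c.stopOnce.Do(func() {
		go func() {
			// ... perform shutdown work here ...
			close(c.done)
		}()
	})
	return c.done
}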

type ReceiptValidator added in v0.14.0

type ReceiptValidator interface {

	// Validate verifies that the ExecutionReceipt satisfies
	// the following conditions:
	// 	* is from Execution node with positive weight
	//	* has valid signature
	//	* chunks are in correct format
	// 	* execution result has a valid parent and satisfies the subgraph check
	// Returns nil if all checks passed successfully.
	// Expected errors during normal operations:
	// * engine.InvalidInputError
	//   if receipt violates protocol condition
	// * engine.UnverifiableInputError
	//   if receipt's parent result is unknown
	Validate(receipts *flow.ExecutionReceipt) error

	// ValidatePayload verifies the ExecutionReceipts and ExecutionResults
	// in the payload for compliance with the protocol:
	// Receipts:
	// 	* are from Execution node with positive weight
	//	* have valid signature
	//	* chunks are in correct format
	//  * no duplicates in fork
	// Results:
	// 	* have valid parents and satisfy the subgraph check
	//  * extend the execution tree, where the tree root is the latest
	//    finalized block and only results from this fork are included
	//  * no duplicates in fork
	// Expected errors during normal operations:
	// * engine.InvalidInputError
	//   if some receipts in the candidate block violate protocol condition
	// * engine.UnverifiableInputError
	//   if for some of the receipts, their respective parent result is unknown
	ValidatePayload(candidate *flow.Block) error
}

ReceiptValidator is an interface which is used for validating receipts with respect to current protocol state.

type Requester

type Requester interface {
	// EntityByID will request an entity through the request engine backing
	// the interface. The additional selector will be applied to the subset
	// of valid providers for the entity and allows finer-grained control
	// over which providers to request a given entity from. Use `filter.Any`
	// if no additional restrictions are required. Data integrity of response
	// will be checked upon arrival. This function should be used for requesting
	// entities by their IDs.
	EntityByID(entityID flow.Identifier, selector flow.IdentityFilter)

	// Query will request data through the request engine backing the interface.
	// The additional selector will be applied to the subset
	// of valid providers for the data and allows finer-grained control
	// over which providers to request data from. It doesn't perform an integrity check
	// and can be used to get entities without knowing their ID.
	Query(key flow.Identifier, selector flow.IdentityFilter)

	// Force will force the dispatcher to send all possible batches immediately.
	// It can be used in cases where responsiveness is of utmost importance, at
	// the cost of additional network messages.
	Force()
}

type ResolverMetrics added in v0.21.0

type ResolverMetrics interface {
	// DNSLookupDuration tracks the time spent to resolve a DNS address.
	DNSLookupDuration(duration time.Duration)

	// OnDNSCacheMiss tracks the total number of dns requests resolved through looking up the network.
	OnDNSCacheMiss()

	// OnDNSCacheHit tracks the total number of dns requests resolved through the cache without
	// looking up the network.
	OnDNSCacheHit()

	// OnDNSCacheInvalidated is called whenever dns cache is invalidated for an entry
	OnDNSCacheInvalidated()
}

ResolverMetrics encapsulates the metrics collectors for dns resolver module of the networking layer.

type RuntimeMetrics

type RuntimeMetrics interface {
	// RuntimeTransactionParsed reports the time spent parsing a single transaction
	RuntimeTransactionParsed(dur time.Duration)

	// RuntimeTransactionChecked reports the time spent checking a single transaction
	RuntimeTransactionChecked(dur time.Duration)

	// RuntimeTransactionInterpreted reports the time spent interpreting a single transaction
	RuntimeTransactionInterpreted(dur time.Duration)

	// RuntimeSetNumberOfAccounts sets the total number of accounts on the network
	RuntimeSetNumberOfAccounts(count uint64)
}

type SDKClientWrapper added in v0.20.0

type SDKClientWrapper interface {
	GetAccount(context.Context, sdk.Address, ...grpc.CallOption) (*sdk.Account, error)
	GetAccountAtLatestBlock(context.Context, sdk.Address, ...grpc.CallOption) (*sdk.Account, error)
	SendTransaction(context.Context, sdk.Transaction, ...grpc.CallOption) error
	GetLatestBlock(context.Context, bool, ...grpc.CallOption) (*sdk.Block, error)
	GetTransactionResult(context.Context, sdk.Identifier, ...grpc.CallOption) (*sdk.TransactionResult, error)
	ExecuteScriptAtLatestBlock(context.Context, []byte, []cadence.Value, ...grpc.CallOption) (cadence.Value, error)
	ExecuteScriptAtBlockID(context.Context, sdk.Identifier, []byte, []cadence.Value, ...grpc.CallOption) (cadence.Value, error)
}

SDKClientWrapper is a temporary solution to mocking the `sdk.Client` interface from `flow-go-sdk`

type SealValidator added in v0.14.0

type SealValidator interface {
	Validate(candidate *flow.Block) (*flow.Seal, error)
}

SealValidator checks seals with respect to the current protocol state. It accepts a `candidate` block with seals that need to be verified for protocol state validity, and returns the following values:

  • the last seal in the `candidate` block, in case of success
  • engine.InvalidInputError, in case the `candidate` block violates the protocol state
  • an exception in case of any other error; this is usually not expected

PREREQUISITE: The SealValidator can only process blocks which are attached to the main chain (without any missing ancestors). This is the case because:

  • the Seal validator walks the chain backwards and requires the relevant ancestors to be known and validated
  • the storage.Seals only holds seals for blocks that are attached to the main chain.

type Signer

type Signer interface {
	Verifier
	Sign(msg []byte) (crypto.Signature, error)
}

Signer is a simple cryptographic signer that can sign a simple message to generate a signature, and verify the signature against the message.

type Startable added in v0.22.4

type Startable interface {
	// Start starts the component. Any irrecoverable errors encountered while the component is running
	// should be thrown with the given SignalerContext.
	// This method should only be called once, and subsequent calls should panic with ErrMultipleStartup.
	Start(irrecoverable.SignalerContext)
}

Startable provides an interface to start a component. Once started, the component can be stopped by cancelling the given context.
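
To honor the contract that a second Start call panics with ErrMultipleStartup, an implementation can guard the call with an atomic flag. The sketch below uses a plain context.Context in place of irrecoverable.SignalerContext to stay self-contained; that substitution and the local error value are assumptions.

package example

import (
	"context"
	"errors"
	"sync/atomic"
)

// errMultipleStartup stands in for the package-level ErrMultipleStartup.
var errMultipleStartup = errors.New("component may only be started once")

// startable runs its work exactly once; a second Start call panics.
type startable struct {
	started atomic.Bool
	run     func(ctx context.Context)
}

// Start launches the component. Irrecoverable errors would be thrown through
// the signaler context in the real interface; here we only show the
// single-startup guard.
func (s *startable) Start(ctx context.Context) {
	if !s.started.CompareAndSwap(false, true) {
		panic(errMultipleStartup)
	}
	go s.run(ctx)
}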

type SyncCore

type SyncCore interface {

	// HandleBlock handles receiving a new block. It returns true if the block
	// should be passed along to the rest of the system for processing, or false
	// if it should be discarded.
	HandleBlock(header *flow.Header) bool

	// HandleHeight handles receiving a new highest finalized height from another node.
	HandleHeight(final *flow.Header, height uint64)

	// ScanPending scans all pending block statuses for blocks that should be
	// requested. It apportions requestable items into range and batch requests
	// according to configured maximums, giving precedence to range requests.
	ScanPending(final *flow.Header) ([]flow.Range, []flow.Batch)

	// WithinTolerance returns whether or not the given height is within the configured
	// height tolerance, with respect to the given local finalized header.
	WithinTolerance(final *flow.Header, height uint64) bool

	// RangeRequested updates sync state after a range is requested.
	RangeRequested(ran flow.Range)

	// BatchRequested updates sync state after a batch is requested.
	BatchRequested(batch flow.Batch)
}

SyncCore represents state management for chain state synchronization.
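
One way the pieces fit together is a periodic scan loop: ScanPending decides what to request, the node dispatches the requests, and RangeRequested / BatchRequested record that they are in flight. The sketch below uses simplified local stand-ins for flow.Range, flow.Batch and the finalized header, so all of its types and names are assumptions.

package example

import "time"

// blockRange and blockBatch are simplified stand-ins for flow.Range and flow.Batch.
type blockRange struct{ From, To uint64 }
type blockBatch struct{ BlockIDs []string }

// syncCore mirrors the subset of SyncCore used by the loop, with the finalized
// header reduced to its height.
type syncCore interface {
	ScanPending(finalHeight uint64) ([]blockRange, []blockBatch)
	RangeRequested(r blockRange)
	BatchRequested(b blockBatch)
}

// syncLoop periodically scans pending blocks and dispatches requests until the
// done channel is closed. sendRange and sendBatch stand in for the node's
// network layer.
func syncLoop(core syncCore, finalHeight func() uint64,
	sendRange func(blockRange), sendBatch func(blockBatch), done <-chan struct{}) {

	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			ranges, batches := core.ScanPending(finalHeight())
			for _, r := range ranges { // ranges take precedence over batches
				sendRange(r)
				core.RangeRequested(r)
			}
			for _, b := range batches {
				sendBatch(b)
				core.BatchRequested(b)
			}
		}
	}
}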

type ThresholdSigner

type ThresholdSigner interface {
	ThresholdVerifier
	Sign(msg []byte) (crypto.Signature, error)
	Reconstruct(size uint, shares []crypto.Signature, indices []uint) (crypto.Signature, error)
}

ThresholdSigner is a signer that can sign a message to generate a signature share and construct a threshold signature from the given shares.

type ThresholdSignerStore added in v0.20.0

type ThresholdSignerStore interface {
	GetThresholdSigner(view uint64) (ThresholdSigner, error)
}

type ThresholdVerifier

type ThresholdVerifier interface {
	Verifier
	VerifyThreshold(msg []byte, sig crypto.Signature, key crypto.PublicKey) (bool, error)
}

ThresholdVerifier can verify a message against a signature share from a single key or a threshold signature against many keys.

type TraceSpan added in v0.22.0

type TraceSpan opentracing.Span

type Tracer

type Tracer interface {
	ReadyDoneAware

	// StartBlockSpan starts a span for a block, built as a child of rootSpan.
	// It also returns the context including this span, which can be used for nested calls,
	// and a boolean reporting whether this span is sampled (used to avoid unnecessary computation for tags).
	StartBlockSpan(
		ctx context.Context,
		blockID flow.Identifier,
		spanName trace.SpanName,
		opts ...opentracing.StartSpanOption) (opentracing.Span, context.Context, bool)

	// StartCollectionSpan starts a span for a collection, built as a child of rootSpan.
	// It also returns the context including this span, which can be used for nested calls,
	// and a boolean reporting whether this span is sampled (used to avoid unnecessary computation for tags).
	StartCollectionSpan(
		ctx context.Context,
		collectionID flow.Identifier,
		spanName trace.SpanName,
		opts ...opentracing.StartSpanOption) (opentracing.Span, context.Context, bool)

	// StartTransactionSpan starts a span for a transaction, built as a child of rootSpan.
	// It also returns the context including this span, which can be used for nested calls,
	// and a boolean reporting whether this span is sampled (used to avoid unnecessary computation for tags).
	StartTransactionSpan(
		ctx context.Context,
		transactionID flow.Identifier,
		spanName trace.SpanName,
		opts ...opentracing.StartSpanOption) (opentracing.Span, context.Context, bool)

	StartSpanFromContext(
		ctx context.Context,
		operationName trace.SpanName,
		opts ...opentracing.StartSpanOption,
	) (opentracing.Span, context.Context)

	StartSpanFromParent(
		span opentracing.Span,
		operationName trace.SpanName,
		opts ...opentracing.StartSpanOption,
	) opentracing.Span

	// RecordSpanFromParent records a span at finish time;
	// the start time is computed as time.Now() minus the given duration.
	RecordSpanFromParent(
		span opentracing.Span,
		operationName trace.SpanName,
		duration time.Duration,
		logs []opentracing.LogRecord,
		opts ...opentracing.StartSpanOption,
	)

	// WithSpanFromContext encapsulates executing a function within a span, i.e., it starts a span with the specified SpanName from the context,
	// executes the function f, and finishes the span once the function returns.
	WithSpanFromContext(ctx context.Context,
		operationName trace.SpanName,
		f func(),
		opts ...opentracing.StartSpanOption)
}

Tracer is the interface for tracers in Flow. It uses OpenTracing span definitions.

type TransactionMetrics

type TransactionMetrics interface {
	// TransactionReceived starts tracking of transaction execution/finalization/sealing
	TransactionReceived(txID flow.Identifier, when time.Time)

	// TransactionFinalized reports the time spent between the transaction being received and finalized. Reporting only
	// works if the transaction was earlier added as received.
	TransactionFinalized(txID flow.Identifier, when time.Time)

	// TransactionExecuted reports the time spent between the transaction being received and executed. Reporting only
	// works if the transaction was earlier added as received.
	TransactionExecuted(txID flow.Identifier, when time.Time)

	// TransactionExpired tracks number of expired transactions
	TransactionExpired(txID flow.Identifier)

	// TransactionSubmissionFailed should be called whenever we try to submit a transaction and it fails
	TransactionSubmissionFailed()
}

type VerificationMetrics

type VerificationMetrics interface {
	// OnBlockConsumerJobDone is invoked by block consumer whenever it is notified a job is done by a worker. It
	// sets the last processed block job index.
	OnBlockConsumerJobDone(uint64)
	// OnChunkConsumerJobDone is invoked by chunk consumer whenever it is notified a job is done by a worker. It
	// sets the last processed chunk job index.
	OnChunkConsumerJobDone(uint64)
	// OnExecutionResultReceivedAtAssignerEngine is called whenever a new execution result arrives
	// at Assigner engine. It increments total number of received execution results.
	OnExecutionResultReceivedAtAssignerEngine()

	// OnVerifiableChunkReceivedAtVerifierEngine increments a counter that keeps track of number of verifiable chunks received at
	// verifier engine from fetcher engine.
	OnVerifiableChunkReceivedAtVerifierEngine()

	// OnFinalizedBlockArrivedAtAssigner sets a gauge that keeps track of the latest finalized block height arriving
	// at the assigner engine. Note that it assumes blocks arrive at the assigner engine in strictly increasing order of their height.
	OnFinalizedBlockArrivedAtAssigner(height uint64)

	// OnChunksAssignmentDoneAtAssigner increments a counter that keeps track of the total number of assigned chunks to
	// the verification node.
	OnChunksAssignmentDoneAtAssigner(chunks int)

	// OnAssignedChunkProcessedAtAssigner increments a counter that keeps track of the total number of assigned chunks pushed by
	// assigner engine to the fetcher engine.
	OnAssignedChunkProcessedAtAssigner()

	// OnAssignedChunkReceivedAtFetcher increments a counter that keeps track of the number of assigned chunks that arrive at the fetcher engine.
	OnAssignedChunkReceivedAtFetcher()

	// OnChunkDataPackRequestSentByFetcher increments a counter that keeps track of number of chunk data pack requests that fetcher engine
	// sends to requester engine.
	OnChunkDataPackRequestSentByFetcher()

	// OnChunkDataPackRequestReceivedByRequester increments a counter that keeps track of the number of chunk data pack requests
	// that arrive at the requester engine from the fetcher engine.
	OnChunkDataPackRequestReceivedByRequester()

	// OnChunkDataPackRequestDispatchedInNetwork increments a counter that keeps track of number of chunk data pack requests that the
	// requester engine dispatches in the network (to the execution nodes).
	OnChunkDataPackRequestDispatchedInNetworkByRequester()

	// OnChunkDataPackResponseReceivedFromNetwork increments a counter that keeps track of number of chunk data pack responses that the
	// requester engine receives from execution nodes (through network).
	OnChunkDataPackResponseReceivedFromNetworkByRequester()

	// SetMaxChunkDataPackAttemptsForNextUnsealedHeightAtRequester is invoked when a cycle of requesting chunk data packs is done by requester engine.
	// It updates the maximum number of attempts made by requester engine for requesting the chunk data packs of the next unsealed height.
	// The maximum is taken over the history of all chunk data packs requested during that cycle that belong to the next unsealed height.
	SetMaxChunkDataPackAttemptsForNextUnsealedHeightAtRequester(attempts uint64)

	// OnChunkDataPackSentToFetcher increments a counter that keeps track of number of chunk data packs sent to the fetcher engine from
	// requester engine.
	OnChunkDataPackSentToFetcher()

	// OnChunkDataPackArrivedAtFetcher increments a counter that keeps track of number of chunk data packs arrived at fetcher engine from
	// requester engine.
	OnChunkDataPackArrivedAtFetcher()

	// OnVerifiableChunkSentToVerifier increments a counter that keeps track of number of verifiable chunks fetcher engine sent to verifier engine.
	OnVerifiableChunkSentToVerifier()

	// OnResultApprovalDispatchedInNetwork increments a counter that keeps track of number of result approvals dispatched in the network
	// by verifier engine.
	OnResultApprovalDispatchedInNetworkByVerifier()
}

type Verifier

type Verifier interface {
	Verify(msg []byte, sig crypto.Signature, key crypto.PublicKey) (bool, error)
}

Verifier is responsible for verifying a signature over a given message against a public key.

type WALMetrics added in v0.15.4

type WALMetrics interface {
	// DiskSize records the amount of disk space used by the storage (in bytes)
	DiskSize(uint64)
}

Directories

Path Synopsis
builder
Package dkg implements a controller that manages the lifecycle of a Joint Feldman DKG node, as well as a broker that enables the controller to communicate with other nodes. A new controller must be instantiated for every epoch.
finalizer
Package ingress implements accepting transactions into the system.
(c) 2019 Dapper Labs - ALL RIGHTS RESERVED
stdmap
(c) 2019 Dapper Labs - ALL RIGHTS RESERVED
Package mocks is a generated GoMock package.
