Documentation ¶
Index ¶
Constants ¶
const LastSubmittedDataHeightKey = "last-submitted-data-height"
LastSubmittedDataHeightKey is the key used for persisting the last submitted data height in store.
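For illustration, a minimal sketch of how a caller might persist a height under this key, assuming a hypothetical key-value Store interface and big-endian encoding; the package's real store type and encoding are not shown here and may differ.

package example

import (
	"context"
	"encoding/binary"
)

// LastSubmittedDataHeightKey mirrors the constant documented above.
const LastSubmittedDataHeightKey = "last-submitted-data-height"

// Store is a hypothetical key-value interface standing in for the package's
// actual metadata store (an assumption, not the real store type).
type Store interface {
	Put(ctx context.Context, key string, value []byte) error
}

// persistLastSubmittedDataHeight encodes the height as 8 big-endian bytes and
// writes it under LastSubmittedDataHeightKey.
func persistLastSubmittedDataHeight(ctx context.Context, s Store, height uint64) error {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, height)
	return s.Put(ctx, LastSubmittedDataHeightKey, buf)
}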
Variables ¶
var (
	// DefaultTxCacheRetention is the default time to keep transaction hashes in cache
	DefaultTxCacheRetention = 24 * time.Hour
)
Functions ¶
This section is empty.
Types ¶
type Cache ¶
type Cache[T any] struct {
	// contains filtered or unexported fields
}
Cache is a generic cache that maintains items that are seen and hard confirmed
func (*Cache[T]) LoadFromDisk ¶
LoadFromDisk loads the cache contents from the specified folder on disk and populates the current cache instance. If files are missing, the corresponding parts of the cache will be empty. It's the caller's responsibility to ensure that type T (and any types it contains) are registered with the gob package if necessary (e.g., using gob.Register).
func (*Cache[T]) SaveToDisk ¶
SaveToDisk saves the cache contents to disk in the specified folder. It's the caller's responsibility to ensure that type T (and any types it contains) are registered with the gob package if necessary (e.g., using gob.Register).
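The gob requirement above is easiest to see in isolation. The sketch below uses illustrative Item and TxBody types (not part of this package) and round-trips them through gob, which is what SaveToDisk and LoadFromDisk rely on when persisting cache contents; without the gob.Register call the round trip fails for interface-typed fields.

package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// Item stands in for a cache payload whose Body field holds an interface value.
type Item struct {
	Body interface{}
}

// TxBody is one concrete type that may appear behind Item.Body.
type TxBody struct {
	Hash   string
	Height uint64
}

func main() {
	// Concrete types stored behind interface fields must be registered with
	// gob before encoding or decoding; SaveToDisk and LoadFromDisk have the
	// same requirement for type T and anything it contains.
	gob.Register(TxBody{})

	// Round-trip an Item through gob, mirroring what the cache does when
	// persisting its contents to disk.
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(Item{Body: TxBody{Hash: "ab", Height: 7}}); err != nil {
		panic(err)
	}
	var out Item
	if err := gob.NewDecoder(&buf).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", out.Body) // {Hash:ab Height:7}
}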
type Manager ¶
type Manager interface {
	// Header operations
	IsHeaderSeen(hash string) bool
	SetHeaderSeen(hash string, blockHeight uint64)
	GetHeaderDAIncluded(hash string) (uint64, bool)
	SetHeaderDAIncluded(hash string, daHeight uint64, blockHeight uint64)
	RemoveHeaderDAIncluded(hash string)
	DaHeight() uint64

	// Data operations
	IsDataSeen(hash string) bool
	SetDataSeen(hash string, blockHeight uint64)
	GetDataDAIncluded(hash string) (uint64, bool)
	SetDataDAIncluded(hash string, daHeight uint64, blockHeight uint64)

	// Transaction operations
	IsTxSeen(hash string) bool
	SetTxSeen(hash string)
	CleanupOldTxs(olderThan time.Duration) int

	// Pending operations
	GetPendingHeaders(ctx context.Context) ([]*types.SignedHeader, error)
	GetPendingData(ctx context.Context) ([]*types.SignedData, error)
	SetLastSubmittedHeaderHeight(ctx context.Context, height uint64)
	SetLastSubmittedDataHeight(ctx context.Context, height uint64)
	NumPendingHeaders() uint64
	NumPendingData() uint64

	// Pending events syncing coordination
	GetNextPendingEvent(blockHeight uint64) *common.DAHeightEvent
	SetPendingEvent(blockHeight uint64, event *common.DAHeightEvent)

	// Disk operations
	SaveToDisk() error
	LoadFromDisk() error
	ClearFromDisk() error

	// Cleanup operations
	DeleteHeight(blockHeight uint64)
}
Manager provides centralized cache management for both executing and syncing components
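As a rough sketch of how the executing and syncing components might drive this interface, the helpers below use only the documented methods; they are illustrative assumptions, not part of the package API, and are written as if they lived inside the cache package.

package cache // illustrative sketch, not part of the package API

// markHeaderDAIncluded records a header as seen the first time it is
// observed, then records the DA height once inclusion is confirmed.
func markHeaderDAIncluded(m Manager, hash string, blockHeight, daHeight uint64) {
	if !m.IsHeaderSeen(hash) {
		m.SetHeaderSeen(hash, blockHeight)
	}
	m.SetHeaderDAIncluded(hash, daHeight, blockHeight)
}

// pruneTxCache drops transaction hashes kept longer than the default
// retention and returns how many were removed.
func pruneTxCache(m Manager) int {
	return m.CleanupOldTxs(DefaultTxCacheRetention)
}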
type PendingData ¶
type PendingData struct {
	// contains filtered or unexported fields
}
PendingData maintains Data that needs to be published to the DA layer.
Important assertions:
  - data is safely stored in database before submission to DA
  - data is always pushed to DA in order (by height)
  - DA submission of multiple data is atomic - it's impossible to submit only part of a batch
lastSubmittedDataHeight is updated only after receiving confirmation from DA. The worst-case scenario is when data was successfully submitted to DA but confirmation was not received (e.g. the node was restarted or a networking issue occurred). In this case the data is re-submitted to DA (at extra cost). evolve is able to skip duplicate data, so this shouldn't affect full nodes. Note: submission of pending data to DA should account for the DA max blob size.
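A minimal sketch of the submission flow described above: pending data is fetched in height order, submitted as one atomic batch, and the last submitted height is advanced only after DA confirms. submitToDA and lastHeightOf are hypothetical stand-ins for the real DA client and height accessor, and the types import path is an assumption.

package cache // illustrative sketch, not part of the package API

import (
	"context"

	"github.com/evstack/ev-node/types" // assumed import path for the types package used above
)

// submitPendingData submits the pending batch to DA and advances the last
// submitted height only on confirmed success. On error the height is not
// advanced, so the same batch is re-submitted later and duplicates are
// skipped by full nodes.
func submitPendingData(
	ctx context.Context,
	pd *PendingData,
	submitToDA func(context.Context, []*types.SignedData) error,
	lastHeightOf func([]*types.SignedData) uint64,
) error {
	batch, err := pd.GetPendingData(ctx)
	if err != nil || len(batch) == 0 {
		return err
	}
	if err := submitToDA(ctx, batch); err != nil {
		// No confirmation received: do not advance the height.
		return err
	}
	pd.SetLastSubmittedDataHeight(ctx, lastHeightOf(batch))
	return nil
}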
func NewPendingData ¶
NewPendingData returns a new PendingData struct
func (*PendingData) GetPendingData ¶
GetPendingData returns a sorted slice of pending Data.
func (*PendingData) NumPendingData ¶
func (pd *PendingData) NumPendingData() uint64
func (*PendingData) SetLastSubmittedDataHeight ¶
func (pd *PendingData) SetLastSubmittedDataHeight(ctx context.Context, newLastSubmittedDataHeight uint64)
type PendingHeaders ¶
type PendingHeaders struct {
	// contains filtered or unexported fields
}
PendingHeaders maintains headers that need to be published to the DA layer.
Important assertions:
  - headers are safely stored in database before submission to DA
  - headers are always pushed to DA in order (by height)
  - DA submission of multiple headers is atomic - it's impossible to submit only part of a batch
lastSubmittedHeaderHeight is updated only after receiving confirmation from DA. The worst-case scenario is when headers were successfully submitted to DA but confirmation was not received (e.g. the node was restarted or a networking issue occurred). In this case the headers are re-submitted to DA (at extra cost). evolve is able to skip duplicate headers, so this shouldn't affect full nodes. TODO(tzdybal): we shouldn't try to push all pending headers at once; this should depend on max blob size
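The TODO above concerns batching by blob size. A hedged sketch of one way a prefix of the height-sorted pending headers could be trimmed to fit a maximum blob size follows; sizeOf is a hypothetical encoding-size helper and the types import path is an assumption.

package cache // illustrative sketch, not part of the package API

import (
	"github.com/evstack/ev-node/types" // assumed import path for the types package used above
)

// trimToMaxBlobSize keeps the longest prefix of the height-sorted pending
// headers whose combined encoded size fits within maxBlobBytes; the remainder
// stays pending for the next submission round, preserving in-order submission.
func trimToMaxBlobSize(headers []*types.SignedHeader, maxBlobBytes uint64, sizeOf func(*types.SignedHeader) uint64) []*types.SignedHeader {
	var total uint64
	for i, h := range headers {
		sz := sizeOf(h)
		if total+sz > maxBlobBytes {
			return headers[:i]
		}
		total += sz
	}
	return headers
}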
func NewPendingHeaders ¶
NewPendingHeaders returns a new PendingHeaders struct
func (*PendingHeaders) GetPendingHeaders ¶
func (ph *PendingHeaders) GetPendingHeaders(ctx context.Context) ([]*types.SignedHeader, error)
GetPendingHeaders returns a sorted slice of pending headers.
func (*PendingHeaders) NumPendingHeaders ¶
func (ph *PendingHeaders) NumPendingHeaders() uint64
func (*PendingHeaders) SetLastSubmittedHeaderHeight ¶
func (ph *PendingHeaders) SetLastSubmittedHeaderHeight(ctx context.Context, newLastSubmittedHeaderHeight uint64)