Documentation ¶
Index ¶
- Constants
- Variables
- type Cache
- type CacheConfig
- type CacheManager
- type Manager
- type PendingData
- func (pd *PendingData) GetLastSubmittedDataHeight() uint64
- func (pd *PendingData) GetPendingData(ctx context.Context) ([]*types.Data, [][]byte, error)
- func (pd *PendingData) NumPendingData() uint64
- func (pd *PendingData) SetLastSubmittedDataHeight(ctx context.Context, newLastSubmittedDataHeight uint64)
- type PendingHeaders
- func (ph *PendingHeaders) GetLastSubmittedHeaderHeight() uint64
- func (ph *PendingHeaders) GetPendingHeaders(ctx context.Context) ([]*types.SignedHeader, [][]byte, error)
- func (ph *PendingHeaders) NumPendingHeaders() uint64
- func (ph *PendingHeaders) SetLastSubmittedHeaderHeight(ctx context.Context, newLastSubmittedHeaderHeight uint64)
- type PendingManager
Constants ¶
const (
    // DefaultItemsCacheSize is the default size for items cache.
    DefaultItemsCacheSize = 200_000
    // DefaultHashesCacheSize is the default size for hash tracking.
    DefaultHashesCacheSize = 200_000
    // DefaultDAIncludedCacheSize is the default size for DA inclusion tracking.
    DefaultDAIncludedCacheSize = 200_000
)
const DefaultPendingCacheSize = 200_000
DefaultPendingCacheSize is the default size for the pending items cache.
const LastSubmittedDataHeightKey = "last-submitted-data-height"
LastSubmittedDataHeightKey is the key used for persisting the last submitted data height in store.
Variables ¶
var (
    // DefaultTxCacheRetention is the default time to keep transaction hashes in cache
    DefaultTxCacheRetention = 24 * time.Hour
)
Functions ¶
This section is empty.
Types ¶
type Cache ¶
type Cache[T any] struct {
    // contains filtered or unexported fields
}
Cache is a generic cache that maintains items that are seen and hard confirmed. It uses bounded, thread-safe LRU caches to prevent unbounded memory growth.
func NewCacheWithConfig ¶
func NewCacheWithConfig[T any](config CacheConfig) (*Cache[T], error)
NewCacheWithConfig returns a new Cache struct with custom sizes.
func (*Cache[T]) LoadFromDisk ¶
LoadFromDisk loads the cache contents from the specified folder on disk and populates the current cache instance. If files are missing, the corresponding parts of the cache will be empty. It's the caller's responsibility to ensure that type T (and any types it contains) are registered with the gob package if necessary (e.g., using gob.Register).
func (*Cache[T]) SaveToDisk ¶
SaveToDisk saves the cache contents to disk in the specified folder. It's the caller's responsibility to ensure that type T (and any types it contains) are registered with the gob package if necessary (e.g., using gob.Register).
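For illustration, here is a minimal persistence sketch: it registers the cached payload type with gob (as required above) and round-trips the cache through disk. The payload type, the cache import path, and the folder-path/error signature of SaveToDisk and LoadFromDisk are assumptions, not taken from this page.

import (
    "encoding/gob"

    "example.com/app/cache" // assumed import path for this package
)

// payload is a hypothetical cached type; register it with gob so the cache
// can be encoded to and decoded from disk.
type payload struct {
    Height uint64
    Hash   string
}

// persistAndReload assumes SaveToDisk and LoadFromDisk take the target folder
// and return an error, as the descriptions above suggest.
func persistAndReload(c *cache.Cache[payload], dir string) error {
    gob.Register(payload{})
    if err := c.SaveToDisk(dir); err != nil {
        return err
    }
    return c.LoadFromDisk(dir)
}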
type CacheConfig ¶
CacheConfig holds configuration for cache sizes.
func DefaultCacheConfig ¶
func DefaultCacheConfig() CacheConfig
DefaultCacheConfig returns the default cache configuration.
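As a construction sketch (the entry type, the function name, and the cache import alias are assumed), a cache can be built from the default configuration:

// entry is a hypothetical payload type stored in the cache.
type entry struct {
    Hash string
}

// newEntryCache builds a Cache[entry] using the default sizes listed in the
// constants above.
func newEntryCache() (*cache.Cache[entry], error) {
    cfg := cache.DefaultCacheConfig()
    return cache.NewCacheWithConfig[entry](cfg)
}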
type CacheManager ¶
type CacheManager interface {
    // Header operations
    IsHeaderSeen(hash string) bool
    SetHeaderSeen(hash string, blockHeight uint64)
    GetHeaderDAIncluded(hash string) (uint64, bool)
    SetHeaderDAIncluded(hash string, daHeight uint64, blockHeight uint64)
    RemoveHeaderDAIncluded(hash string)
    DaHeight() uint64

    // Data operations
    IsDataSeen(hash string) bool
    SetDataSeen(hash string, blockHeight uint64)
    GetDataDAIncluded(hash string) (uint64, bool)
    SetDataDAIncluded(hash string, daHeight uint64, blockHeight uint64)

    // Transaction operations
    IsTxSeen(hash string) bool
    SetTxSeen(hash string)
    CleanupOldTxs(olderThan time.Duration) int

    // Pending events syncing coordination
    GetNextPendingEvent(blockHeight uint64) *common.DAHeightEvent
    SetPendingEvent(blockHeight uint64, event *common.DAHeightEvent)

    // Disk operations
    SaveToDisk() error
    LoadFromDisk() error
    ClearFromDisk() error

    // Cleanup operations
    DeleteHeight(blockHeight uint64)
}
CacheManager provides thread-safe cache operations for tracking seen blocks and DA inclusion status during block execution and syncing.
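The interface can be exercised without knowing the concrete implementation. The helpers below are a hypothetical sketch (the function names and the cache import alias are assumed) of the header seen/DA-included flow and of transaction cleanup, using only the methods and the DefaultTxCacheRetention variable listed on this page:

// markHeaderDAIncluded records a header as seen, marks it as DA-included,
// and reads back the DA height at which it was included.
func markHeaderDAIncluded(cm cache.CacheManager, hash string, blockHeight, daHeight uint64) (uint64, bool) {
    if !cm.IsHeaderSeen(hash) {
        cm.SetHeaderSeen(hash, blockHeight)
    }
    cm.SetHeaderDAIncluded(hash, daHeight, blockHeight)
    return cm.GetHeaderDAIncluded(hash)
}

// pruneTxs drops transaction hashes older than the default retention window
// and reports how many were removed.
func pruneTxs(cm cache.CacheManager) int {
    return cm.CleanupOldTxs(cache.DefaultTxCacheRetention)
}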
func NewCacheManager ¶
NewCacheManager creates a new cache manager instance.
type Manager ¶
type Manager interface {
    CacheManager
    PendingManager
}
Manager provides centralized cache management for both executing and syncing components.
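Because Manager embeds both interfaces, a single value can serve the executing and syncing paths. A brief sketch (the helper name and cache import alias are assumed):

// snapshot persists the caches via the CacheManager side and reports the DA
// submission backlog via the PendingManager side.
func snapshot(m cache.Manager) (pendingHeaders, pendingData uint64, err error) {
    if err = m.SaveToDisk(); err != nil {
        return 0, 0, err
    }
    return m.NumPendingHeaders(), m.NumPendingData(), nil
}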
type PendingData ¶
type PendingData struct {
// contains filtered or unexported fields
}
PendingData maintains Data that need to be published to the DA layer.
Important assertions:
- data is safely stored in the database before submission to DA
- data is always pushed to DA in order (by height)
- DA submission of multiple data is atomic: it's impossible to submit only part of a batch
lastSubmittedDataHeight is updated only after receiving confirmation from DA. The worst-case scenario is when data was successfully submitted to DA but confirmation was not received (e.g., the node was restarted or a networking issue occurred). In this case the data is re-submitted to DA at extra cost. evolve is able to skip duplicate data, so this shouldn't affect full nodes. Note: Submission of pending data to DA should account for the DA max blob size.
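The sketch below reflects these guarantees: the watermark is advanced only after DA confirmation, and a failed or unconfirmed submission leaves it untouched so the same data is retried. submitToDA is a hypothetical stand-in for the DA client, the cache import alias is assumed, and it is assumed that types.Data exposes its height via Height().

import "context"

// submitToDA is a hypothetical stand-in for the real DA client; it must return
// nil only once the DA layer has confirmed the whole batch.
var submitToDA func(ctx context.Context, blobs [][]byte) error

func submitPendingData(ctx context.Context, pd *cache.PendingData) error {
    data, blobs, err := pd.GetPendingData(ctx)
    if err != nil || len(data) == 0 {
        return err
    }
    // Callers must keep the batch within the DA max blob size, per the note above.
    if err := submitToDA(ctx, blobs); err != nil {
        // The watermark is untouched, so this data is re-submitted later.
        return err
    }
    // Advance the watermark only after confirmation from DA.
    pd.SetLastSubmittedDataHeight(ctx, data[len(data)-1].Height())
    return nil
}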
func NewPendingData ¶
NewPendingData returns a new PendingData struct.
func (*PendingData) GetLastSubmittedDataHeight ¶
func (pd *PendingData) GetLastSubmittedDataHeight() uint64
func (*PendingData) GetPendingData ¶
func (pd *PendingData) GetPendingData(ctx context.Context) ([]*types.Data, [][]byte, error)
GetPendingData returns a sorted slice of pending Data along with their marshalled bytes.
func (*PendingData) NumPendingData ¶
func (pd *PendingData) NumPendingData() uint64
func (*PendingData) SetLastSubmittedDataHeight ¶
func (pd *PendingData) SetLastSubmittedDataHeight(ctx context.Context, newLastSubmittedDataHeight uint64)
type PendingHeaders ¶
type PendingHeaders struct {
// contains filtered or unexported fields
}
PendingHeaders maintains headers that need to be published to the DA layer.
Important assertions:
- headers are safely stored in the database before submission to DA
- headers are always pushed to DA in order (by height)
- DA submission of multiple headers is atomic: it's impossible to submit only part of a batch
lastSubmittedHeaderHeight is updated only after receiving confirmation from DA. The worst-case scenario is when headers were successfully submitted to DA but confirmation was not received (e.g., the node was restarted or a networking issue occurred). In this case the headers are re-submitted to DA at extra cost. evolve is able to skip duplicate headers, so this shouldn't affect full nodes.
func NewPendingHeaders ¶
NewPendingHeaders returns a new PendingHeaders struct.
func (*PendingHeaders) GetLastSubmittedHeaderHeight ¶
func (ph *PendingHeaders) GetLastSubmittedHeaderHeight() uint64
func (*PendingHeaders) GetPendingHeaders ¶
func (ph *PendingHeaders) GetPendingHeaders(ctx context.Context) ([]*types.SignedHeader, [][]byte, error)
GetPendingHeaders returns a sorted slice of pending headers along with their marshalled bytes.
func (*PendingHeaders) NumPendingHeaders ¶
func (ph *PendingHeaders) NumPendingHeaders() uint64
func (*PendingHeaders) SetLastSubmittedHeaderHeight ¶
func (ph *PendingHeaders) SetLastSubmittedHeaderHeight(ctx context.Context, newLastSubmittedHeaderHeight uint64)
type PendingManager ¶
type PendingManager interface {
    GetPendingHeaders(ctx context.Context) ([]*types.SignedHeader, [][]byte, error)
    GetPendingData(ctx context.Context) ([]*types.SignedData, [][]byte, error)
    SetLastSubmittedHeaderHeight(ctx context.Context, height uint64)
    GetLastSubmittedHeaderHeight() uint64
    SetLastSubmittedDataHeight(ctx context.Context, height uint64)
    GetLastSubmittedDataHeight() uint64
    NumPendingHeaders() uint64
    NumPendingData() uint64
}
PendingManager provides operations for managing pending headers and data.
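A monitoring sketch against the interface above (the helper name and cache import alias are assumed), reporting the current DA submission backlog for headers and data:

import "fmt"

// logBacklog prints how many headers and data items are still waiting for DA
// submission, together with the last heights confirmed by DA.
func logBacklog(pm cache.PendingManager) {
    fmt.Printf("headers: %d pending (last submitted %d); data: %d pending (last submitted %d)\n",
        pm.NumPendingHeaders(), pm.GetLastSubmittedHeaderHeight(),
        pm.NumPendingData(), pm.GetLastSubmittedDataHeight())
}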
func NewPendingManager ¶
NewPendingManager creates a new pending manager instance.