nodestore

package
v0.0.0-...-2b6a5f4
Published: May 15, 2026 License: ISC Imports: 15 Imported by: 0

Documentation

Overview

Package nodestore provides blockchain state storage for XRPL node data.

It stores and retrieves SHAMap tree nodes (inner nodes and leaf data) that make up ledger state and transaction trees. The nodestore is built on top of the kvstore interface, with support for batched writes, LRU caching, negative caching, and rotating database backends for online deletion.

Node data is keyed by its SHA-512Half hash and encoded with type and compression metadata.


Index

Constants

const (
	// DefaultPreallocationSize is the default number of writes to preallocate space for.
	DefaultPreallocationSize = 256

	// DefaultLimitSize is the default maximum number of writes in a batch before flushing.
	DefaultLimitSize = 65536

	// DefaultFlushInterval is the default interval between periodic flushes.
	DefaultFlushInterval = 100 * time.Millisecond
)

Variables

var (
	// ErrNotFound is returned when a requested node is not present in the store.
	ErrNotFound = errors.New("node not found")

	// ErrDataCorrupt indicates that stored data is corrupted.
	ErrDataCorrupt = errors.New("data corrupt")

	// ErrBackendClosed indicates that the backend is closed.
	ErrBackendClosed = errors.New("backend closed")

	// ErrInvalidNode indicates that a node is invalid.
	ErrInvalidNode = errors.New("invalid node")

	// ErrInvalidHash indicates that a hash is invalid.
	ErrInvalidHash = errors.New("invalid hash")

	// ErrInvalidConfig indicates that the configuration is invalid.
	ErrInvalidConfig = errors.New("invalid config")

	// ErrShutdown indicates that the database is shutting down.
	ErrShutdown = errors.New("nodestore shutdown")
)

Functions

func AvailableBackends

func AvailableBackends() []string

AvailableBackends returns a list of available backend names.

func IsBackendAvailable

func IsBackendAvailable(name string) bool

IsBackendAvailable checks if a backend with the given name is available.

func IsBackendClosed

func IsBackendClosed(err error) bool

IsBackendClosed checks if an error indicates that the backend is closed.

func IsDataCorrupt

func IsDataCorrupt(err error) bool

IsDataCorrupt checks if an error indicates data corruption.

func IsNotFound

func IsNotFound(err error) bool

IsNotFound checks if an error indicates that a node was not found.

func IsShutdown

func IsShutdown(err error) bool

IsShutdown checks if an error indicates that the database is shutting down.

func IsZero

func IsZero(h Hash256) bool

IsZero reports whether h is the zero hash.

func RegisterBackend

func RegisterBackend(name string, factory BackendFactory)

RegisterBackend registers a backend factory with the given name.

Types

type Backend

type Backend interface {
	// Name returns a human-readable name for this backend.
	Name() string

	// Open opens the backend for use.
	Open(createIfMissing bool) error

	// Close closes the backend and releases resources.
	Close() error

	// IsOpen returns true if the backend is currently open.
	IsOpen() bool

	// Fetch retrieves a single object by key.
	Fetch(key Hash256) (*Node, Status)

	// FetchBatch retrieves multiple objects efficiently.
	FetchBatch(keys []Hash256) ([]*Node, Status)

	// Store saves a single object.
	Store(node *Node) Status

	// StoreBatch saves multiple objects efficiently.
	StoreBatch(nodes []*Node) Status

	// Sync forces pending writes to be flushed.
	Sync() Status

	// ForEach iterates over all objects in the backend.
	ForEach(fn func(*Node) error) error

	// GetWriteLoad returns an estimate of pending write operations.
	GetWriteLoad() int

	// SetDeletePath marks the backend for deletion when closed.
	SetDeletePath()

	// FdRequired returns the number of file descriptors needed.
	FdRequired() int
}

Backend defines the interface for storage backends.

func CreateBackend

func CreateBackend(name string, config *Config) (Backend, error)

CreateBackend creates a new backend instance for the given name and configuration.

func NewMemoryBackendFromConfig

func NewMemoryBackendFromConfig(config *Config) (Backend, error)

NewMemoryBackendFromConfig creates a new in-memory backend from config. The config is ignored for memory backends but required for the BackendFactory signature.

func NewPebbleBackend

func NewPebbleBackend(config *Config) (Backend, error)

NewPebbleBackend creates a new optimized PebbleDB backend.

type BackendFactory

type BackendFactory func(config *Config) (Backend, error)

BackendFactory is a function that creates a new backend instance.

type BackendInfo

type BackendInfo struct {
	Name            string // Backend name
	Description     string // Human-readable description
	FileDescriptors int    // Number of file descriptors required
	Persistent      bool   // Whether the backend provides persistent storage
	Compression     bool   // Whether the backend supports compression
}

BackendInfo provides information about a backend.

func (BackendInfo) String

func (bi BackendInfo) String() string

String returns a string representation of the backend info.

type BackendStats

type BackendStats struct {
	Reads        int64 // Number of read operations
	Writes       int64 // Number of write operations
	BytesRead    int64 // Total bytes read
	BytesWritten int64 // Total bytes written
	NodeCount    int64 // Number of nodes stored
}

BackendStats holds statistics for a backend.

type BackendVerifier

type BackendVerifier struct {
	// contains filtered or unexported fields
}

BackendVerifier wraps any Backend to provide verification capabilities.

func NewBackendVerifier

func NewBackendVerifier(backend Backend) *BackendVerifier

NewBackendVerifier creates a new verifier for the given backend.

func (*BackendVerifier) Verify

func (v *BackendVerifier) Verify() error

Verify implements the Verifier interface.

func (*BackendVerifier) VerifyAll

func (v *BackendVerifier) VerifyAll(opts *VerifyOptions) (*VerificationResult, error)

VerifyAll performs full verification and returns detailed results.

func (*BackendVerifier) VerifyNode

func (v *BackendVerifier) VerifyNode(hash Hash256) error

VerifyNode verifies a single node by its hash.

func (*BackendVerifier) VerifyWithOptions

func (v *BackendVerifier) VerifyWithOptions(opts *VerifyOptions) error

VerifyWithOptions performs verification with custom options.

type BackendWithInfo

type BackendWithInfo interface {
	Backend
	Info() BackendInfo
}

BackendWithInfo is an interface that backends can implement to provide additional information about their capabilities.

type BatchWriteCollector

type BatchWriteCollector struct {
	// contains filtered or unexported fields
}

BatchWriteCollector collects results from multiple batch write operations.

func NewBatchWriteCollector

func NewBatchWriteCollector() *BatchWriteCollector

NewBatchWriteCollector creates a new collector for batch write results.

func (*BatchWriteCollector) Add

func (c *BatchWriteCollector) Add(hash Hash256, result <-chan error)

Add adds a write result channel to the collector.

func (*BatchWriteCollector) Clear

func (c *BatchWriteCollector) Clear()

Clear resets the collector for reuse.

func (*BatchWriteCollector) Count

func (c *BatchWriteCollector) Count() int

Count returns the number of tracked writes.

func (*BatchWriteCollector) Wait

func (c *BatchWriteCollector) Wait() []BatchWriteResult

Wait waits for all writes to complete and returns the results.

func (*BatchWriteCollector) WaitWithErrors

func (c *BatchWriteCollector) WaitWithErrors() error

WaitWithErrors waits for all writes and returns only the errors.
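The collector pattern above, tracking one result channel per submitted write and then draining them all, can be sketched as follows. `collector`, `add`, and `waitWithErrors` are illustrative stand-ins, not the package's implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// collector tracks one result channel per submitted write and
// drains them all when waiting, joining any failures.
type collector struct {
	results []<-chan error
}

func (c *collector) add(ch <-chan error) { c.results = append(c.results, ch) }

// waitWithErrors blocks until every tracked write has reported.
func (c *collector) waitWithErrors() error {
	var errs []error
	for _, ch := range c.results {
		if err := <-ch; err != nil {
			errs = append(errs, err)
		}
	}
	return errors.Join(errs...)
}

// write simulates an asynchronous write that reports its outcome
// on a buffered channel.
func write(fail bool) <-chan error {
	ch := make(chan error, 1)
	if fail {
		ch <- errors.New("disk full")
	} else {
		ch <- nil
	}
	return ch
}

func main() {
	var c collector
	c.add(write(false))
	c.add(write(true))
	fmt.Println(c.waitWithErrors())
}
```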

type BatchWriteConfig

type BatchWriteConfig struct {
	// PreallocationSize is the initial capacity of the write buffer.
	PreallocationSize int

	// LimitSize is the maximum number of writes to batch before flushing.
	LimitSize int

	// FlushInterval is the maximum time between flushes.
	FlushInterval time.Duration

	// SyncOnFlush determines whether to sync the backend after each flush.
	SyncOnFlush bool
}

BatchWriteConfig holds configuration for the batch writer. Kept for backwards compatibility; batch writing is now handled internally.

func DefaultBatchWriteConfig

func DefaultBatchWriteConfig() *BatchWriteConfig

DefaultBatchWriteConfig returns a BatchWriteConfig with sensible defaults.

func (*BatchWriteConfig) Validate

func (c *BatchWriteConfig) Validate() error

Validate checks if the configuration is valid.

type BatchWriteResult

type BatchWriteResult struct {
	Hash  Hash256 // Hash of the written node
	Error error   // Error that occurred (nil if successful)
}

BatchWriteResult holds the result of a batch write operation.

type BatchWriter

type BatchWriter struct {
	// contains filtered or unexported fields
}

BatchWriter is kept for backwards compatibility. It is a stub that wraps a Backend for synchronous writes.

func NewBatchWriter

func NewBatchWriter(backend Backend, config *BatchWriteConfig) (*BatchWriter, error)

NewBatchWriter creates a new BatchWriter with the given backend. The config is validated but ignored (batch writing is now synchronous).

func (*BatchWriter) Close

func (bw *BatchWriter) Close() error

Close is a no-op.

func (*BatchWriter) Flush

func (bw *BatchWriter) Flush() error

Flush is a no-op since writes are synchronous.

func (*BatchWriter) PendingCount

func (bw *BatchWriter) PendingCount() int

PendingCount always returns 0 for synchronous writes.

func (*BatchWriter) Stats

func (bw *BatchWriter) Stats() BatchWriterStats

Stats returns statistics about the batch writer.

func (*BatchWriter) Write

func (bw *BatchWriter) Write(hash Hash256, data []byte) <-chan error

Write submits a write operation synchronously.

func (*BatchWriter) WriteNode

func (bw *BatchWriter) WriteNode(node *Node) <-chan error

WriteNode submits a node for writing.

func (*BatchWriter) WriteNodeSync

func (bw *BatchWriter) WriteNodeSync(node *Node) error

WriteNodeSync submits a node for writing and waits for completion.

func (*BatchWriter) WriteSync

func (bw *BatchWriter) WriteSync(hash Hash256, data []byte) error

WriteSync submits a write operation and waits for completion.

type BatchWriterStats

type BatchWriterStats struct {
	TotalWrites   int64 // Total number of writes submitted
	BatchedWrites int64 // Number of writes successfully batched
	Flushes       int64 // Number of flush operations
	Errors        int64 // Number of errors encountered
	BytesWritten  int64 // Total bytes written
	PendingCount  int   // Current number of pending writes
}

BatchWriterStats holds statistics for the batch writer.

func (BatchWriterStats) String

func (s BatchWriterStats) String() string

String returns a formatted string representation of the statistics.

type Blob

type Blob []byte

type Cache

type Cache struct {
	// contains filtered or unexported fields
}

Cache implements an LRU cache with TTL support for NodeStore.

func NewCache

func NewCache(maxSize int, ttl time.Duration) *Cache

NewCache creates a new LRU cache with the specified configuration.

func (*Cache) ByteSize

func (c *Cache) ByteSize() int

ByteSize returns the current total bytes stored in the cache.

func (*Cache) Clear

func (c *Cache) Clear()

Clear removes all entries from the cache.

func (*Cache) Get

func (c *Cache) Get(hash Hash256) (*Node, bool)

Get retrieves a node from the cache. Returns the node and true if found, nil and false otherwise.

func (*Cache) Put

func (c *Cache) Put(node *Node)

Put stores a node in the cache.

func (*Cache) Remove

func (c *Cache) Remove(hash Hash256)

Remove removes a node from the cache.

func (*Cache) SetMaxSize

func (c *Cache) SetMaxSize(maxSize int)

SetMaxSize updates the maximum size of the cache. If the new size is smaller than the current size, oldest entries are evicted.

func (*Cache) SetTTL

func (c *Cache) SetTTL(ttl time.Duration)

SetTTL updates the TTL for the cache. This only affects new entries; existing entries keep their original expiration.

func (*Cache) Size

func (c *Cache) Size() int

Size returns the current number of items in the cache.

func (*Cache) Stats

func (c *Cache) Stats() CacheStats

Stats returns cache statistics.

func (*Cache) Sweep

func (c *Cache) Sweep() int

Sweep removes expired entries from the cache.

type CacheStats

type CacheStats struct {
	Hits         uint64        // Number of cache hits
	Misses       uint64        // Number of cache misses
	Evictions    uint64        // Number of evictions due to size limit
	Expirations  uint64        // Number of expirations due to TTL
	CurrentSize  int           // Current number of items
	CurrentBytes int           // Current total bytes stored
	MaxSize      int           // Maximum number of items
	TTL          time.Duration // Time to live for entries
}

CacheStats holds statistics about cache performance.

func (CacheStats) HitRate

func (s CacheStats) HitRate() float64

HitRate returns the cache hit rate as a percentage.

func (CacheStats) String

func (s CacheStats) String() string

String returns a string representation of the cache statistics.

type Config

type Config struct {
	// Backend specifies the storage backend to use.
	Backend string `json:"backend" yaml:"backend"`

	// Path specifies the file system path for data storage.
	Path string `json:"path" yaml:"path"`

	// Cache configuration.
	CacheSize int           `json:"cache_size" yaml:"cache_size"`
	CacheTTL  time.Duration `json:"cache_ttl" yaml:"cache_ttl"`

	// Compressor is kept for backwards compatibility but is ignored.
	// Pebble handles compression natively via SnappyCompression.
	Compressor string `json:"compressor" yaml:"compressor"`

	// CompressionLevel is kept for backwards compatibility but is ignored.
	CompressionLevel int `json:"compression_level" yaml:"compression_level"`

	// BatchSize is the default batch size for operations.
	BatchSize int `json:"batch_size" yaml:"batch_size"`

	// CreateIfMissing controls whether the database should be created if it doesn't exist.
	CreateIfMissing bool `json:"create_if_missing" yaml:"create_if_missing"`
}

Config holds configuration options for the NodeStore.

func DefaultConfig

func DefaultConfig() *Config

DefaultConfig returns a configuration with sensible defaults.

func (*Config) Clone

func (c *Config) Clone() *Config

Clone creates a copy of the configuration.

func (*Config) Validate

func (c *Config) Validate() error

Validate checks if the configuration is valid.

type Database

type Database interface {
	// Store persists a node to the store.
	Store(ctx context.Context, node *Node) error

	// Fetch retrieves a node by its hash synchronously.
	Fetch(ctx context.Context, hash Hash256) (*Node, error)

	// FetchBatch retrieves multiple nodes efficiently in a single operation.
	FetchBatch(ctx context.Context, hashes []Hash256) ([]*Node, error)

	// FetchAsync retrieves a node asynchronously, returning a channel for the result.
	FetchAsync(ctx context.Context, hash Hash256) <-chan Result

	// StoreBatch stores multiple nodes efficiently in a single operation.
	StoreBatch(ctx context.Context, nodes []*Node) error

	// Sweep removes expired entries from caches.
	Sweep() error

	// Stats returns performance statistics.
	Stats() Statistics

	// Close gracefully closes the database and releases resources.
	Close() error

	// Sync forces any pending writes to be flushed to disk.
	// The supplied ctx unblocks the caller on cancellation; the
	// underlying backend flush is uninterruptible and continues
	// running so partial fsync state is never observed.
	//
	// Concurrency contract: callers MUST serialise Sync invocations.
	// On ctx cancellation Sync returns to the caller while the
	// in-flight backend flush is still running; a subsequent Sync
	// would invoke the backend concurrently with that flush, and
	// not all backends are required to be re-entrant. The current
	// in-tree caller (Service.persistLedger) is serialised by the
	// Service mutex.
	Sync(ctx context.Context) error
}

Database defines the main interface for the NodeStore.

type DatabaseConfig

type DatabaseConfig struct {
	// CacheSize is the maximum number of items in the positive cache.
	CacheSize int

	// CacheTTL is the time-to-live for positive cache entries.
	CacheTTL time.Duration

	// NegativeCacheTTL is the time-to-live for negative cache entries.
	// Set to 0 to disable negative caching.
	NegativeCacheTTL time.Duration

	// NegativeCacheMaxSize is the maximum number of entries in the negative cache.
	NegativeCacheMaxSize int

	// BatchWriteConfig is kept for backwards compatibility but is ignored.
	BatchWriteConfig *BatchWriteConfig
}

DatabaseConfig holds configuration for creating a Database.

func DefaultDatabaseConfig

func DefaultDatabaseConfig() *DatabaseConfig

DefaultDatabaseConfig returns a DatabaseConfig with sensible defaults.

type DatabaseImpl

type DatabaseImpl struct {
	// contains filtered or unexported fields
}

DatabaseImpl wraps a Backend to implement the Database interface.

func NewDatabase

func NewDatabase(backend Backend, cacheSize int, cacheTTL time.Duration) *DatabaseImpl

NewDatabase creates a new Database from a Backend.

func NewDatabaseWithConfig

func NewDatabaseWithConfig(backend Backend, config *DatabaseConfig) (*DatabaseImpl, error)

NewDatabaseWithConfig creates a new Database from a Backend with full configuration.

func (*DatabaseImpl) BatchWriter

func (d *DatabaseImpl) BatchWriter() *BatchWriter

BatchWriter returns the batch writer (may be nil if not configured).

func (*DatabaseImpl) Close

func (d *DatabaseImpl) Close() error

Close gracefully closes the database.

func (*DatabaseImpl) ExtendedStats

func (d *DatabaseImpl) ExtendedStats() ExtendedStatistics

ExtendedStats returns extended statistics including negative cache stats.

func (*DatabaseImpl) Fetch

func (d *DatabaseImpl) Fetch(ctx context.Context, hash Hash256) (*Node, error)

Fetch retrieves a node by its hash.

func (*DatabaseImpl) FetchAsync

func (d *DatabaseImpl) FetchAsync(ctx context.Context, hash Hash256) <-chan Result

FetchAsync retrieves a node asynchronously.

func (*DatabaseImpl) FetchBatch

func (d *DatabaseImpl) FetchBatch(ctx context.Context, hashes []Hash256) ([]*Node, error)

FetchBatch retrieves multiple nodes efficiently.

func (*DatabaseImpl) NegativeCache

func (d *DatabaseImpl) NegativeCache() *NegativeCache

NegativeCache returns the negative cache (for advanced operations).

func (*DatabaseImpl) Stats

func (d *DatabaseImpl) Stats() Statistics

Stats returns performance statistics.

func (*DatabaseImpl) Store

func (d *DatabaseImpl) Store(ctx context.Context, node *Node) error

Store persists a node to the store.

func (*DatabaseImpl) StoreAsync

func (d *DatabaseImpl) StoreAsync(ctx context.Context, node *Node) <-chan error

StoreAsync stores a node asynchronously; in the current implementation it falls back to synchronous storage.

func (*DatabaseImpl) StoreBatch

func (d *DatabaseImpl) StoreBatch(ctx context.Context, nodes []*Node) error

StoreBatch stores multiple nodes efficiently.

func (*DatabaseImpl) Sweep

func (d *DatabaseImpl) Sweep() error

Sweep removes expired entries from caches.

func (*DatabaseImpl) Sync

func (d *DatabaseImpl) Sync(ctx context.Context) error

Sync forces pending writes to disk. The flush itself is uninterruptible (a partial fsync would be worse than blocking), but ctx cancellation unblocks the caller while the flush continues in the background.

type ExtendedStatistics

type ExtendedStatistics struct {
	Statistics

	// Negative cache metrics
	NegativeCacheHits    uint64 // Number of negative cache hits
	NegativeCacheSize    uint64 // Current size of negative cache
	NegativeCacheMaxSize uint64 // Maximum size of negative cache

	// Batch writer metrics (kept for backwards compatibility)
	BatchWriterPending int    // Number of pending batch writes
	BatchWriterFlushes uint64 // Number of batch flushes
}

ExtendedStatistics holds extended performance metrics including negative cache stats.

type Hash256

type Hash256 [32]byte

func ComputeHash256

func ComputeHash256(data Blob) Hash256

ComputeHash256 computes the SHA-256 hash of data.

func Hash256FromData

func Hash256FromData(b Blob) (Hash256, error)

type KVDatabaseImpl

type KVDatabaseImpl struct {
	// contains filtered or unexported fields
}

KVDatabaseImpl wraps a kvstore.KeyValueStore to implement the Database interface. This is the new preferred implementation that uses the generic KV layer.

func NewKVDatabase

func NewKVDatabase(store kvstore.KeyValueStore, name string, cacheSize int, cacheTTL time.Duration) *KVDatabaseImpl

NewKVDatabase creates a new Database from a kvstore.KeyValueStore.

func NewKVDatabaseWithConfig

func NewKVDatabaseWithConfig(store kvstore.KeyValueStore, name string, config *DatabaseConfig) (*KVDatabaseImpl, error)

NewKVDatabaseWithConfig creates a new Database from a kvstore.KeyValueStore with full configuration.

func (*KVDatabaseImpl) Close

func (d *KVDatabaseImpl) Close() error

Close closes the database.

func (*KVDatabaseImpl) Fetch

func (d *KVDatabaseImpl) Fetch(ctx context.Context, hash Hash256) (*Node, error)

Fetch retrieves a node by its hash.

func (*KVDatabaseImpl) FetchAsync

func (d *KVDatabaseImpl) FetchAsync(ctx context.Context, hash Hash256) <-chan Result

FetchAsync retrieves a node asynchronously.

func (*KVDatabaseImpl) FetchBatch

func (d *KVDatabaseImpl) FetchBatch(ctx context.Context, hashes []Hash256) ([]*Node, error)

FetchBatch retrieves multiple nodes, going through the cache for each.

func (*KVDatabaseImpl) Stats

func (d *KVDatabaseImpl) Stats() Statistics

Stats returns performance statistics.

func (*KVDatabaseImpl) Store

func (d *KVDatabaseImpl) Store(ctx context.Context, node *Node) error

Store persists a node to the store.

func (*KVDatabaseImpl) StoreBatch

func (d *KVDatabaseImpl) StoreBatch(ctx context.Context, nodes []*Node) error

StoreBatch stores multiple nodes efficiently using a batch.

func (*KVDatabaseImpl) Sweep

func (d *KVDatabaseImpl) Sweep() error

Sweep removes expired entries from caches.

func (*KVDatabaseImpl) Sync

func (d *KVDatabaseImpl) Sync(ctx context.Context) error

Sync forces pending writes to disk. The flush itself is uninterruptible; ctx cancellation unblocks the caller while the underlying store flush continues in the background.

type MemoryBackend

type MemoryBackend struct {
	// contains filtered or unexported fields
}

MemoryBackend implements an in-memory Backend for testing purposes. It provides thread-safe operations and is useful for unit tests and development.

func NewMemoryBackend

func NewMemoryBackend() *MemoryBackend

NewMemoryBackend creates a new in-memory backend.

func (*MemoryBackend) Clear

func (m *MemoryBackend) Clear()

Clear removes all nodes from the backend without closing it.

func (*MemoryBackend) Close

func (m *MemoryBackend) Close() error

Close closes the backend and clears all data.

func (*MemoryBackend) Delete

func (m *MemoryBackend) Delete(hash Hash256) Status

Delete removes a node by its hash.

func (*MemoryBackend) FdRequired

func (m *MemoryBackend) FdRequired() int

FdRequired returns the number of file descriptors needed (0 for memory backend).

func (*MemoryBackend) Fetch

func (m *MemoryBackend) Fetch(key Hash256) (*Node, Status)

Fetch retrieves a single object by key.

func (*MemoryBackend) FetchBatch

func (m *MemoryBackend) FetchBatch(keys []Hash256) ([]*Node, Status)

FetchBatch retrieves multiple objects efficiently.

func (*MemoryBackend) ForEach

func (m *MemoryBackend) ForEach(fn func(*Node) error) error

ForEach iterates over all objects in the backend.

func (*MemoryBackend) GetWriteLoad

func (m *MemoryBackend) GetWriteLoad() int

GetWriteLoad returns an estimate of pending write operations (always 0 for memory backend).

func (*MemoryBackend) HasNode

func (m *MemoryBackend) HasNode(hash Hash256) bool

HasNode checks if a node with the given hash exists.

func (*MemoryBackend) Info

func (m *MemoryBackend) Info() BackendInfo

Info returns information about this backend.

func (*MemoryBackend) IsOpen

func (m *MemoryBackend) IsOpen() bool

IsOpen returns true if the backend is currently open.

func (*MemoryBackend) Name

func (m *MemoryBackend) Name() string

Name returns the name of this backend.

func (*MemoryBackend) Open

func (m *MemoryBackend) Open(createIfMissing bool) error

Open opens the backend for use.

func (*MemoryBackend) SetDeletePath

func (m *MemoryBackend) SetDeletePath()

SetDeletePath marks the backend for deletion when closed (no-op for memory backend).

func (*MemoryBackend) Size

func (m *MemoryBackend) Size() int

Size returns the number of nodes stored in the backend.

func (*MemoryBackend) Stats

func (m *MemoryBackend) Stats() BackendStats

Stats returns performance statistics.

func (*MemoryBackend) Store

func (m *MemoryBackend) Store(node *Node) Status

Store saves a single object.

func (*MemoryBackend) StoreBatch

func (m *MemoryBackend) StoreBatch(nodes []*Node) Status

StoreBatch saves multiple objects efficiently.

func (*MemoryBackend) Sync

func (m *MemoryBackend) Sync() Status

Sync forces pending writes to be flushed (no-op for memory backend).

func (*MemoryBackend) Verify

func (m *MemoryBackend) Verify() error

Verify implements the Verifier interface for MemoryBackend.

func (*MemoryBackend) VerifyAll

func (m *MemoryBackend) VerifyAll(opts *VerifyOptions) (*VerificationResult, error)

VerifyAll performs full verification and returns detailed results for MemoryBackend.

func (*MemoryBackend) VerifyNode

func (m *MemoryBackend) VerifyNode(hash Hash256) error

VerifyNode implements the Verifier interface for MemoryBackend.

func (*MemoryBackend) VerifyWithOptions

func (m *MemoryBackend) VerifyWithOptions(opts *VerifyOptions) error

VerifyWithOptions performs verification with custom options for MemoryBackend.

type NegativeCache

type NegativeCache struct {
	// contains filtered or unexported fields
}

NegativeCache tracks nodes that are known to be missing from the store. This optimization prevents repeated backend lookups for non-existent nodes.

func NewNegativeCache

func NewNegativeCache(ttl time.Duration) *NegativeCache

NewNegativeCache creates a new negative cache with the given TTL.

func NewNegativeCacheWithConfig

func NewNegativeCacheWithConfig(config *NegativeCacheConfig) *NegativeCache

NewNegativeCacheWithConfig creates a new negative cache with the given configuration.

func (*NegativeCache) Clear

func (nc *NegativeCache) Clear()

Clear removes all entries from the negative cache.

func (*NegativeCache) Close

func (nc *NegativeCache) Close() error

Close closes the negative cache.

func (*NegativeCache) IsMissing

func (nc *NegativeCache) IsMissing(hash Hash256) bool

IsMissing checks if a node is known to be missing. Returns true if the node is in the negative cache and not expired.

func (*NegativeCache) MarkMissing

func (nc *NegativeCache) MarkMissing(hash Hash256)

MarkMissing records that a node is not present in the store.

func (*NegativeCache) Remove

func (nc *NegativeCache) Remove(hash Hash256)

Remove removes an entry from the negative cache. This should be called when a node is added to the store.

func (*NegativeCache) SetMaxSize

func (nc *NegativeCache) SetMaxSize(maxSize int)

SetMaxSize updates the maximum size of the cache.

func (*NegativeCache) SetTTL

func (nc *NegativeCache) SetTTL(ttl time.Duration)

SetTTL updates the TTL for new entries.

func (*NegativeCache) Size

func (nc *NegativeCache) Size() int

Size returns the current number of entries in the cache.

func (*NegativeCache) Stats

func (nc *NegativeCache) Stats() NegativeCacheStats

Stats returns statistics about the negative cache.

func (*NegativeCache) Sweep

func (nc *NegativeCache) Sweep() int

Sweep removes all expired entries from the cache.

type NegativeCacheConfig

type NegativeCacheConfig struct {
	// TTL is the time-to-live for negative cache entries.
	TTL time.Duration

	// MaxSize is the maximum number of entries to cache (0 = unlimited).
	MaxSize int

	// SweepInterval is how often to sweep expired entries (0 = manual sweep only).
	SweepInterval time.Duration
}

NegativeCacheConfig holds configuration for the negative cache.

func DefaultNegativeCacheConfig

func DefaultNegativeCacheConfig() *NegativeCacheConfig

DefaultNegativeCacheConfig returns a NegativeCacheConfig with sensible defaults.

type NegativeCacheStats

type NegativeCacheStats struct {
	Hits        int64         // Number of cache hits
	Misses      int64         // Number of cache misses
	Insertions  int64         // Number of entries added
	Expirations int64         // Number of entries expired
	Evictions   int64         // Number of entries evicted
	Size        int           // Current number of entries
	MaxSize     int           // Maximum number of entries
	TTL         time.Duration // Time-to-live for entries
}

NegativeCacheStats holds statistics for the negative cache.

func (NegativeCacheStats) HitRate

func (s NegativeCacheStats) HitRate() float64

HitRate returns the cache hit rate as a percentage.

func (NegativeCacheStats) String

func (s NegativeCacheStats) String() string

String returns a formatted string representation of the statistics.

type NegativeCacheSweeper

type NegativeCacheSweeper struct {
	// contains filtered or unexported fields
}

NegativeCacheSweeper automatically sweeps expired entries from a negative cache.

func NewNegativeCacheSweeper

func NewNegativeCacheSweeper(cache *NegativeCache, interval time.Duration) *NegativeCacheSweeper

NewNegativeCacheSweeper creates a new sweeper for the given cache.

func (*NegativeCacheSweeper) Start

func (s *NegativeCacheSweeper) Start()

Start starts the sweeper background goroutine.

func (*NegativeCacheSweeper) Stop

func (s *NegativeCacheSweeper) Stop()

Stop stops the sweeper background goroutine.

type Node

type Node struct {
	Type      NodeType  // Type of the ledger object
	Hash      Hash256   // SHA-256 content hash (serves as the key)
	Data      Blob      // Serialized ledger object data
	LedgerSeq uint32    // Optional ledger sequence number
	CreatedAt time.Time // Timestamp when the node was created
}

Node represents a stored ledger object with its metadata.

func NewNode

func NewNode(nodeType NodeType, data Blob) *Node

NewNode creates a new Node with the specified type and data. The hash is computed automatically from the data.

func (*Node) IsValid

func (n *Node) IsValid() bool

IsValid returns true if the node has valid data and hash.

func (*Node) Size

func (n *Node) Size() int

Size returns the size of the node's data in bytes.

type NodeType

type NodeType uint32

NodeType represents the type of ledger object stored in the nodestore.

const (
	// NodeUnknown represents an unknown or invalid node type
	NodeUnknown NodeType = 0
	// NodeLedger represents a complete ledger header
	NodeLedger NodeType = 1
	// NodeAccount represents an account state object
	NodeAccount NodeType = 3
	// NodeTransaction represents a transaction object
	NodeTransaction NodeType = 4
	// NodeDummy represents an invalid or missing object (used for negative caching)
	NodeDummy NodeType = 512
)

func (NodeType) String

func (nt NodeType) String() string

String returns the string representation of the NodeType.

type Option

type Option func(*Config)

Option represents a functional option for configuring the NodeStore.

func WithBackend

func WithBackend(backend string) Option

WithBackend sets the storage backend.

func WithBatchSize

func WithBatchSize(size int) Option

WithBatchSize sets the default batch size for operations.

func WithCacheSize

func WithCacheSize(size int) Option

WithCacheSize sets the cache size (number of items).

func WithCacheTTL

func WithCacheTTL(ttl time.Duration) Option

WithCacheTTL sets the cache time-to-live duration.

func WithCreateIfMissing

func WithCreateIfMissing(create bool) Option

WithCreateIfMissing controls whether the database should be created if it doesn't exist.

func WithPath

func WithPath(path string) Option

WithPath sets the storage path.
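The Option functions above follow Go's functional-options pattern: each returns a closure that mutates a Config, and a constructor folds them over defaults. A self-contained sketch of the pattern, with illustrative field names and a hypothetical default (the package's actual Config fields and defaults may differ):

```go
package main

import (
	"fmt"
	"time"
)

// Config is an illustrative stand-in for the package's Config.
type Config struct {
	Path      string
	CacheSize int
	CacheTTL  time.Duration
}

// Option mirrors the documented functional-option type.
type Option func(*Config)

func WithPath(p string) Option            { return func(c *Config) { c.Path = p } }
func WithCacheSize(n int) Option          { return func(c *Config) { c.CacheSize = n } }
func WithCacheTTL(d time.Duration) Option { return func(c *Config) { c.CacheTTL = d } }

// apply folds options over a default config, the way a constructor
// like New(opts ...Option) typically would.
func apply(opts ...Option) *Config {
	c := &Config{CacheSize: 1024} // hypothetical default
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	c := apply(WithPath("/var/lib/nodestore"), WithCacheTTL(time.Minute))
	fmt.Println(c.Path, c.CacheSize, c.CacheTTL)
}
```

Options not passed keep their defaults, so callers only specify what they want to override.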

type PebbleBackend

type PebbleBackend struct {
	// contains filtered or unexported fields
}

PebbleBackend implements a high-performance PebbleDB storage backend.

func (*PebbleBackend) BackendInfo

func (p *PebbleBackend) BackendInfo() BackendInfo

BackendInfo returns information about this backend.

func (*PebbleBackend) Close

func (p *PebbleBackend) Close() error

Close closes the backend and releases resources.

func (*PebbleBackend) Compact

func (p *PebbleBackend) Compact() error

Compact triggers manual compaction of the database.

func (*PebbleBackend) EstimateSize

func (p *PebbleBackend) EstimateSize(start, end Hash256) (uint64, error)

EstimateSize returns an estimate of the total size of data in the given range.

func (*PebbleBackend) FdRequired

func (p *PebbleBackend) FdRequired() int

FdRequired returns the number of file descriptors needed.

func (*PebbleBackend) Fetch

func (p *PebbleBackend) Fetch(key Hash256) (*Node, Status)

Fetch retrieves a single object by key; the read path is optimized for zero allocations.

func (*PebbleBackend) FetchBatch

func (p *PebbleBackend) FetchBatch(keys []Hash256) ([]*Node, Status)

FetchBatch retrieves multiple objects efficiently using individual gets.

func (*PebbleBackend) ForEach

func (p *PebbleBackend) ForEach(fn func(*Node) error) error

ForEach iterates over all objects in the backend.

func (*PebbleBackend) GetWriteLoad

func (p *PebbleBackend) GetWriteLoad() int

GetWriteLoad returns 0 (no async write queue).

func (*PebbleBackend) IsOpen

func (p *PebbleBackend) IsOpen() bool

IsOpen returns true if the backend is currently open.

func (*PebbleBackend) Name

func (p *PebbleBackend) Name() string

Name returns the name of this backend.

func (*PebbleBackend) Open

func (p *PebbleBackend) Open(createIfMissing bool) error

Open opens the backend for use.

func (*PebbleBackend) SetDeletePath

func (p *PebbleBackend) SetDeletePath()

SetDeletePath marks the backend for deletion when closed.

func (*PebbleBackend) Stats

func (p *PebbleBackend) Stats() map[string]interface{}

Stats returns performance statistics.

func (*PebbleBackend) Store

func (p *PebbleBackend) Store(node *Node) Status

Store saves a single object synchronously.

func (*PebbleBackend) StoreBatch

func (p *PebbleBackend) StoreBatch(nodes []*Node) Status

StoreBatch saves multiple objects efficiently using batched writes.

func (*PebbleBackend) Sync

func (p *PebbleBackend) Sync() Status

Sync forces pending writes to be flushed.
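The backend lifecycle documented above is Open, then Store/Fetch, then Close. The sketch below illustrates that call sequence against a trivial in-memory stand-in; the real PebbleBackend persists to disk via PebbleDB and returns Status values rather than the simplified results used here:

```go
package main

import (
	"errors"
	"fmt"
)

type Hash256 [32]byte

type Node struct {
	Hash Hash256
	Data []byte
}

// memBackend is an in-memory stand-in for illustrating the
// Open/Store/Fetch/Close lifecycle only.
type memBackend struct {
	open bool
	m    map[Hash256]*Node
}

func (b *memBackend) Open(createIfMissing bool) error {
	b.m, b.open = map[Hash256]*Node{}, true
	return nil
}

func (b *memBackend) Store(n *Node) error {
	if !b.open {
		return errors.New("backend closed")
	}
	b.m[n.Hash] = n
	return nil
}

func (b *memBackend) Fetch(key Hash256) (*Node, bool) {
	n, ok := b.m[key]
	return n, ok
}

func (b *memBackend) Close() error { b.open = false; return nil }

func main() {
	var b memBackend
	b.Open(true)
	defer b.Close()
	b.Store(&Node{Hash: Hash256{1}, Data: []byte("obj")})
	got, ok := b.Fetch(Hash256{1})
	fmt.Println(ok, string(got.Data))
}
```

Note that operations against a closed backend fail, matching the ErrBackendClosed sentinel the package exports.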

type Result

type Result struct {
	Node *Node // The retrieved node (nil if not found or error occurred)
	Err  error // Error that occurred during the operation (nil if successful)
}

Result represents the result of an asynchronous operation.

type RotatingDatabase

type RotatingDatabase struct {
	// contains filtered or unexported fields
}

RotatingDatabase wraps primary and rotating backends for database rotation. It stores new data in the primary backend and reads from both primary and rotating backends. The rotating backend contains older data that can be disposed of after the retention period.

func NewRotatingDatabase

func NewRotatingDatabase(config *RotationConfig, factory BackendFactory) (*RotatingDatabase, error)

NewRotatingDatabase creates a new rotating database with the given configuration.

func (*RotatingDatabase) Close

func (rd *RotatingDatabase) Close() error

Close closes all backends in the rotating database.

func (*RotatingDatabase) FdRequired

func (rd *RotatingDatabase) FdRequired() int

FdRequired returns the total number of file descriptors needed.

func (*RotatingDatabase) Fetch

func (rd *RotatingDatabase) Fetch(key Hash256) (*Node, Status)

Fetch retrieves a node by its hash. It tries the primary backend first, then the rotating backends from newest to oldest.

func (*RotatingDatabase) FetchBatch

func (rd *RotatingDatabase) FetchBatch(keys []Hash256) ([]*Node, Status)

FetchBatch retrieves multiple nodes efficiently.

func (*RotatingDatabase) ForEach

func (rd *RotatingDatabase) ForEach(fn func(*Node) error) error

ForEach iterates over all objects in all backends.

func (*RotatingDatabase) GetWriteLoad

func (rd *RotatingDatabase) GetWriteLoad() int

GetWriteLoad returns an estimate of pending write operations.

func (*RotatingDatabase) IsOpen

func (rd *RotatingDatabase) IsOpen() bool

IsOpen returns true if the rotating database is open.

func (*RotatingDatabase) Name

func (rd *RotatingDatabase) Name() string

Name returns the name of this backend.

func (*RotatingDatabase) Open

func (rd *RotatingDatabase) Open(createIfMissing bool) error

Open opens the rotating database.

func (*RotatingDatabase) PrimaryBackend

func (rd *RotatingDatabase) PrimaryBackend() Backend

PrimaryBackend returns the primary backend (for advanced operations).

func (*RotatingDatabase) Rotate

func (rd *RotatingDatabase) Rotate() error

Rotate performs a hot-swap of backends. The current primary becomes a rotating backend, and a new primary is created.

func (*RotatingDatabase) RotatingBackends

func (rd *RotatingDatabase) RotatingBackends() []Backend

RotatingBackends returns the rotating backends (for advanced operations).

func (*RotatingDatabase) SetDeletePath

func (rd *RotatingDatabase) SetDeletePath()

SetDeletePath marks all backends for deletion when closed.

func (*RotatingDatabase) ShouldRotate

func (rd *RotatingDatabase) ShouldRotate() bool

ShouldRotate returns true if the primary backend has exceeded the rotation threshold.

func (*RotatingDatabase) Stats

func (rd *RotatingDatabase) Stats() RotatingDatabaseStats

Stats returns statistics about the rotating database.

func (*RotatingDatabase) Store

func (rd *RotatingDatabase) Store(node *Node) Status

Store saves a node to the primary backend only.

func (*RotatingDatabase) StoreBatch

func (rd *RotatingDatabase) StoreBatch(nodes []*Node) Status

StoreBatch saves multiple nodes to the primary backend only.

func (*RotatingDatabase) Sync

func (rd *RotatingDatabase) Sync() Status

Sync forces pending writes to be flushed to all backends.
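The documented read/write split — writes go only to the primary, reads fall back through rotating backends newest-to-oldest, and Rotate hot-swaps the primary into the rotating set — can be sketched with plain maps (an illustration of the scheme, not the package's implementation):

```go
package main

import "fmt"

type store map[string]string

// rotDB sketches the rotating-database read and rotation paths.
type rotDB struct {
	primary  store
	rotating []store // index 0 = newest
}

// Store writes to the primary backend only.
func (d *rotDB) Store(k, v string) { d.primary[k] = v }

// Fetch tries the primary first, then rotating stores newest-first.
func (d *rotDB) Fetch(k string) (string, bool) {
	if v, ok := d.primary[k]; ok {
		return v, true
	}
	for _, s := range d.rotating {
		if v, ok := s[k]; ok {
			return v, true
		}
	}
	return "", false
}

// Rotate hot-swaps: the current primary becomes the newest
// rotating store and a fresh primary is created.
func (d *rotDB) Rotate() {
	d.rotating = append([]store{d.primary}, d.rotating...)
	d.primary = store{}
}

func main() {
	d := &rotDB{primary: store{}}
	d.Store("a", "old")
	d.Rotate()
	d.Store("b", "new")
	fmt.Println(d.Fetch("a")) // still readable from the rotating store
	fmt.Println(d.Fetch("b"))
}
```

Online deletion then amounts to dropping the oldest rotating store once its retention period expires, without blocking writes to the primary.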

type RotatingDatabaseStats

type RotatingDatabaseStats struct {
	Rotations        int64 // Number of rotation operations performed
	PrimaryReads     int64 // Number of reads from primary backend
	RotatingReads    int64 // Number of reads from rotating backends
	PrimaryWrites    int64 // Number of writes to primary backend
	BytesWritten     int64 // Total bytes written
	BytesRead        int64 // Total bytes read
	DisposedBackends int64 // Number of backends disposed after retention
	RotatingCount    int   // Current number of rotating backends
}

RotatingDatabaseStats holds statistics for the rotating database.

func (RotatingDatabaseStats) String

func (s RotatingDatabaseStats) String() string

String returns a formatted string representation of the statistics.

type RotationConfig

type RotationConfig struct {
	// RotationThreshold is the number of nodes after which rotation should occur.
	// When the primary backend exceeds this threshold, rotation is recommended.
	RotationThreshold int64

	// RetentionPeriod is how long to keep rotating backends before disposal.
	RetentionPeriod time.Duration

	// PrimaryConfig is the configuration for the primary backend.
	PrimaryConfig *Config

	// RotatingPath is the base path for rotating backends.
	// Rotating backends will be created at RotatingPath_N where N is a sequence number.
	RotatingPath string
}

RotationConfig holds configuration for database rotation.

func DefaultRotationConfig

func DefaultRotationConfig() *RotationConfig

DefaultRotationConfig returns a RotationConfig with sensible defaults.

func (*RotationConfig) Validate

func (c *RotationConfig) Validate() error

Validate checks if the rotation configuration is valid.
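A plausible shape for Validate, given the documented fields, is a set of positivity and non-empty checks; the specific rules below are guesses for illustration, not the package's actual validation logic:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// RotationConfig mirrors the documented fields (PrimaryConfig omitted).
type RotationConfig struct {
	RotationThreshold int64
	RetentionPeriod   time.Duration
	RotatingPath      string
}

// Validate sketches the kind of checks a rotation config needs:
// a positive threshold, a positive retention window, and a base path.
func (c *RotationConfig) Validate() error {
	if c.RotationThreshold <= 0 {
		return errors.New("invalid config: rotation threshold must be positive")
	}
	if c.RetentionPeriod <= 0 {
		return errors.New("invalid config: retention period must be positive")
	}
	if c.RotatingPath == "" {
		return errors.New("invalid config: rotating path is required")
	}
	return nil
}

func main() {
	c := &RotationConfig{
		RotationThreshold: 1_000_000,
		RetentionPeriod:   24 * time.Hour,
		RotatingPath:      "/var/lib/nodestore/rot",
	}
	fmt.Println(c.Validate() == nil)
}
```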

type Statistics

type Statistics struct {
	// Read metrics
	Reads        uint64 // Total number of read operations
	CacheHits    uint64 // Number of successful cache hits
	CacheMisses  uint64 // Number of cache misses
	ReadBytes    uint64 // Total bytes read
	ReadDuration uint64 // Total read duration in microseconds

	// Write metrics
	Writes        uint64 // Total number of write operations
	WriteBytes    uint64 // Total bytes written
	WriteDuration uint64 // Total write duration in microseconds

	// Cache metrics
	CacheSize    uint64 // Current number of items in cache
	CacheMaxSize uint64 // Maximum cache size

	// Backend metrics
	BackendName string // Name of the storage backend
	AsyncReads  uint64 // Number of pending async reads
}

Statistics holds performance metrics for the NodeStore.

func (Statistics) String

func (s Statistics) String() string

String returns a formatted string representation of the statistics.

type Status

type Status int

Status represents the status of a backend operation.

const (
	// OK indicates the operation was successful
	OK Status = iota
	// NotFound indicates the requested object was not found
	NotFound
	// DataCorrupt indicates the stored data is corrupted
	DataCorrupt
	// BackendError indicates an error in the storage backend
	BackendError
	// Unknown indicates an unknown error occurred
	Unknown
)

func (Status) String

func (s Status) String() string

String returns the string representation of Status.

type VerificationResult

type VerificationResult struct {
	TotalNodes    int64     // Total number of nodes checked
	CorruptNodes  int64     // Number of corrupt nodes found
	MissingData   int64     // Number of nodes with missing data
	HashMismatch  int64     // Number of nodes with hash mismatches
	CorruptHashes []Hash256 // List of corrupt node hashes (limited to first 100)
}

VerificationResult holds the result of a verification operation.

func (*VerificationResult) IsValid

func (r *VerificationResult) IsValid() bool

IsValid returns true if no corruption was detected.

func (*VerificationResult) String

func (r *VerificationResult) String() string

String returns a formatted string representation of the verification result.

type Verifier

type Verifier interface {
	// Verify checks the integrity of all nodes in the backend.
	// It returns an error if any node fails verification.
	Verify() error

	// VerifyNode verifies a single node by its hash.
	// It returns an error if the node doesn't exist or fails verification.
	VerifyNode(hash Hash256) error
}

Verifier defines the interface for data verification operations.

type VerifyOptions

type VerifyOptions struct {
	// StopOnFirstError stops verification when the first error is encountered.
	StopOnFirstError bool

	// MaxCorruptNodes limits the number of corrupt node hashes collected.
	// Default is 100.
	MaxCorruptNodes int

	// ProgressCallback is called periodically with the number of nodes verified.
	// Can be nil to disable progress reporting.
	ProgressCallback func(verified int64)

	// ProgressInterval specifies how often to call ProgressCallback.
	// Default is every 10000 nodes.
	ProgressInterval int64
}

VerifyOptions holds options for verification operations.

func DefaultVerifyOptions

func DefaultVerifyOptions() *VerifyOptions

DefaultVerifyOptions returns default verification options.
