Based Sequencer
Overview
The Based Sequencer is a sequencer implementation that exclusively retrieves transactions from the Data Availability (DA) layer via the forced inclusion mechanism. Unlike other sequencer types, it does not accept transactions from a mempool or reaper - it treats the DA layer as a transaction queue.
This design ensures that all transactions are force-included from DA, making the sequencer completely "based" on the DA layer's transaction ordering.
Architecture
Core Components
- ForcedInclusionRetriever: Fetches transactions from DA at epoch boundaries
- CheckpointStore: Persists processing position to enable crash recovery
- BasedSequencer: Orchestrates transaction retrieval and batch creation
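How these pieces fit together is sketched below; the struct layout is an assumption inferred from the field names used in the code snippets later on this page:

```go
// Assumed composition of the based sequencer, based on the fields referenced
// in the snippets below (s.checkpoint, s.checkpointStore, s.currentBatchTxs).
type BasedSequencer struct {
	retriever       *ForcedInclusionRetriever // fetches forced-inclusion txs from DA at epoch ends
	checkpointStore *CheckpointStore          // persists the processing position across restarts
	checkpoint      *Checkpoint               // current position: (DAHeight, TxIndex)
	currentBatchTxs [][]byte                  // in-memory cache of the current epoch's transactions
}
```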
Key Interfaces
The Based Sequencer implements the Sequencer interface from core/sequencer/sequencing.go:
- SubmitBatchTxs() - No-op for the based sequencer (transactions are not accepted)
- GetNextBatch() - Retrieves the next batch from DA via forced inclusion
- VerifyBatch() - Always returns true (all transactions come from DA)
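The two trivial methods are sketched below with simplified signatures (the real ones are defined in core/sequencer/sequencing.go); GetNextBatch is covered in detail in the following sections:

```go
// SubmitBatchTxs is a no-op: a based sequencer never accepts transactions
// directly, it only reads them from DA. (Simplified signature.)
func (s *BasedSequencer) SubmitBatchTxs(ctx context.Context, txs [][]byte) error {
	return nil
}

// VerifyBatch always succeeds: every transaction in a batch was already
// force-included on DA. (Simplified signature.)
func (s *BasedSequencer) VerifyBatch(ctx context.Context, batch [][]byte) (bool, error) {
	return true, nil
}
```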
Epoch-Based Transaction Retrieval
How Epochs Work
Transactions are retrieved from DA in epochs, not individual DA blocks. An epoch is a range of DA blocks defined by DAEpochForcedInclusion in the genesis configuration.
Example: If DAStartHeight = 100 and DAEpochForcedInclusion = 10:
- Epoch 1: DA heights 100-109
- Epoch 2: DA heights 110-119
- Epoch 3: DA heights 120-129
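The epoch containing a given DA height follows directly from DAStartHeight and DAEpochForcedInclusion. A small sketch of that arithmetic (the helper name epochBounds is illustrative, not the actual implementation):

```go
// epochBounds returns the first and last DA heights of the epoch that
// contains daHeight, given the genesis parameters. Illustrative helper only.
func epochBounds(daHeight, daStartHeight, epochSize uint64) (epochStart, epochEnd uint64) {
	epochIndex := (daHeight - daStartHeight) / epochSize // 0-based epoch number
	epochStart = daStartHeight + epochIndex*epochSize
	epochEnd = epochStart + epochSize - 1
	return epochStart, epochEnd
}
```

With the values above, epochBounds(115, 100, 10) returns (110, 119), matching epoch 2.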
Epoch Boundary Fetching
The ForcedInclusionRetriever only returns transactions when queried at the epoch end (the last DA height in an epoch):
```go
// When NOT at epoch end -> returns empty transactions
if daHeight != epochEnd {
	return &ForcedInclusionEvent{
		StartDaHeight: daHeight,
		EndDaHeight:   daHeight,
		Txs:           [][]byte{},
	}, nil
}

// When AT epoch end -> fetches entire epoch
// Retrieves ALL transactions from epochStart to epochEnd (inclusive)
```
When at an epoch end, the retriever fetches transactions from all DA blocks in that epoch:
- Fetches forced inclusion blobs from epochStart
- Fetches forced inclusion blobs from each height between start and end
- Fetches forced inclusion blobs from epochEnd
- Returns all transactions as a single ForcedInclusionEvent
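A minimal sketch of that behavior, reusing the epochBounds helper from the previous section; the retriever fields (daStartHeight, epochSize) and the per-height fetch helper retrieveBlobsAt are assumptions, not the actual API:

```go
// RetrieveForcedIncludedTxs, sketched: only fetch when daHeight closes an
// epoch, then collect every forced-inclusion blob in that epoch.
func (r *ForcedInclusionRetriever) RetrieveForcedIncludedTxs(ctx context.Context, daHeight uint64) (*ForcedInclusionEvent, error) {
	epochStart, epochEnd := epochBounds(daHeight, r.daStartHeight, r.epochSize)

	// Not at the epoch end yet: return an empty event (see the snippet above).
	if daHeight != epochEnd {
		return &ForcedInclusionEvent{StartDaHeight: daHeight, EndDaHeight: daHeight, Txs: [][]byte{}}, nil
	}

	// At the epoch end: walk every DA height in the epoch and accumulate
	// transactions in DA order.
	var txs [][]byte
	for h := epochStart; h <= epochEnd; h++ {
		blobs, err := r.retrieveBlobsAt(ctx, h) // hypothetical per-height fetch
		if err != nil {
			return nil, err
		}
		txs = append(txs, blobs...)
	}

	return &ForcedInclusionEvent{StartDaHeight: epochStart, EndDaHeight: epochEnd, Txs: txs}, nil
}
```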
Why Epoch-Based
- Efficiency: Reduces the number of DA queries
- Batching: Allows processing multiple DA blocks worth of transactions together
- Determinism: Clear boundaries for when to fetch from DA
- Gas optimization: Fewer DA reads means lower operational costs
Checkpoint System
Purpose
The checkpoint system tracks the exact position in the transaction stream to enable crash recovery and ensure no transactions are lost or duplicated.
Checkpoint Structure
```go
type Checkpoint struct {
	// DAHeight is the DA block height currently being processed
	DAHeight uint64

	// TxIndex is the index of the next transaction to process
	// within the DA block's forced inclusion batch
	TxIndex uint64
}
```
How Checkpoints Work
1. Initial State
Checkpoint: (DAHeight: 100, TxIndex: 0)
- Ready to fetch epoch starting at DA height 100
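On first start there is no persisted checkpoint yet; a plausible (assumed) initialization that matches the DAStartHeight = 100 example is:

```go
// Assumed initialization: if nothing has been persisted yet, start at the
// genesis DA start height with no transactions consumed.
checkpoint, err := s.checkpointStore.Load(ctx)
if err != nil {
	return err
}
if checkpoint == nil {
	checkpoint = &Checkpoint{DAHeight: genesis.DAStartHeight, TxIndex: 0}
}
s.checkpoint = checkpoint
```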
2. Fetching Transactions
When GetNextBatch() is called and we're at an epoch end:
Request: GetNextBatch(maxBytes: 1MB)
Action: Fetch all transactions from epoch (DA heights 100-109)
Result: currentBatchTxs = [tx1, tx2, tx3, ..., txN] (from entire epoch)
3. Processing Transactions
Transactions are processed sequentially to maintain ordering and enable crash recovery. Processing stops at the first FilterPostpone to ensure we don't skip ahead:
Epoch txs: [tx0, tx1, tx2, tx3, tx4]
Batch 1: Filter returns [OK, Postpone, OK, OK, OK]
Processing stops at tx1 (Postpone)
Result: [tx0] consumed
Checkpoint: (DAHeight: 100, TxIndex: 1)
Batch 2: Slice from TxIndex=1 → [tx1, tx2, tx3, tx4]
Filter returns [OK, OK, OK, OK]
Result: [tx1, tx2, tx3, tx4] consumed
Checkpoint: (DAHeight: 101, TxIndex: 0)
- Moved to next DA epoch
Why sequential processing?
- Maintains forced inclusion ordering guarantees
- TxIndex accurately tracks consumed txs for crash recovery
- On restart, we can safely skip already-processed txs by slicing from TxIndex
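A sketch of that stop-at-first-postpone loop; FilterPostpone comes from the description above, while the FilterOK/FilterRemove names and the filter's exact shape are assumptions:

```go
// FilterStatus is an assumed result type for the per-transaction filter.
type FilterStatus int

const (
	FilterOK FilterStatus = iota
	FilterRemove
	FilterPostpone
)

// selectTxs walks the remaining transactions in order and stops at the first
// postponed one, so TxIndex only ever advances past fully handled txs.
func selectTxs(remaining [][]byte, filterTx func([]byte) FilterStatus) (batch [][]byte, consumedCount uint64) {
loop:
	for _, tx := range remaining {
		switch filterTx(tx) {
		case FilterOK:
			batch = append(batch, tx) // included in the produced batch
			consumedCount++
		case FilterRemove:
			consumedCount++ // invalid: consumed (skipped) but never included
		case FilterPostpone:
			break loop // not ready yet: retry from this tx in the next batch
		}
	}
	return batch, consumedCount
}
```

The returned consumedCount is exactly what advances the checkpoint in the persistence step below.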
4. Checkpoint Persistence
Critical: The checkpoint is persisted to disk after every batch of transactions is processed:
```go
// Advance TxIndex by the number of consumed transactions (OK + Remove)
s.checkpoint.TxIndex += consumedCount

// Check if we've consumed all transactions from the epoch
if s.checkpoint.TxIndex >= uint64(len(s.currentBatchTxs)) {
	// All txs consumed, advance to next DA epoch
	s.checkpoint.DAHeight = daHeight + 1
	s.checkpoint.TxIndex = 0
	s.currentBatchTxs = nil
	s.SetDAHeight(s.checkpoint.DAHeight)
}

// Persist checkpoint to disk
if err := s.checkpointStore.Save(ctx, s.checkpoint); err != nil {
	return nil, fmt.Errorf("failed to save checkpoint: %w", err)
}
```
Key points:
- TxIndex is incremented by the consumed count (OK + Remove), not reset to 0
- The original cache (currentBatchTxs) is preserved, not replaced
- On the next batch, we slice from TxIndex to get the remaining txs
Crash Recovery Behavior
Scenario: Crash Mid-Epoch
Setup:
- Epoch spans DA height 100
- Fetched 5 transactions: [tx0, tx1, tx2, tx3, tx4]
- Processed tx0, tx1, tx2 (TxIndex = 3)
- Crash occurs before processing tx3, tx4
On Restart:
- Load Checkpoint: (DAHeight: 100, TxIndex: 3)
- Lost Cache: currentBatchTxs is empty (in-memory only)
- Fetch Epoch: RetrieveForcedIncludedTxs(100)
- Re-fetch: Retrieve all 5 transactions again: [tx0, tx1, tx2, tx3, tx4]
- Resume: Slice from TxIndex=3 → [tx3, tx4]
- Continue: Process tx3, tx4 without re-executing tx0, tx1, tx2
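Put together, restart recovery is "load checkpoint, re-fetch the epoch, slice from TxIndex". A sketch using the names from the snippets above (the recover helper itself is hypothetical):

```go
// recover reloads the persisted position and rebuilds the in-memory cache by
// re-fetching the epoch from DA. Hypothetical helper mirroring the steps above.
func (s *BasedSequencer) recover(ctx context.Context) error {
	checkpoint, err := s.checkpointStore.Load(ctx)
	if err != nil {
		return err
	}
	s.checkpoint = checkpoint // e.g. (DAHeight: 100, TxIndex: 3)

	// Re-fetch the whole epoch; the in-memory cache was lost with the process.
	event, err := s.retriever.RetrieveForcedIncludedTxs(ctx, s.checkpoint.DAHeight)
	if err != nil {
		return err
	}
	s.currentBatchTxs = event.Txs // [tx0, tx1, tx2, tx3, tx4]

	// Already-processed txs are skipped on the next GetNextBatch call by
	// slicing from TxIndex: s.currentBatchTxs[3:] == [tx3, tx4].
	return nil
}
```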
Important Implications
The entire epoch will be re-fetched after a crash, even with fine-grained checkpoints.
Why?
- Transactions are only available at epoch boundaries
- In-memory cache (currentBatchTxs) is lost on restart
- Must wait until the next epoch end to fetch transactions again
What the checkpoint prevents:
- ✅ Re-execution of already processed transactions
- ✅ Correct resumption within a DA block's transaction list
- ✅ No transaction loss or duplication
What the checkpoint does NOT prevent:
- ❌ Re-fetching the entire epoch from DA
- ❌ Re-validation of previously fetched transactions
Checkpoint Storage
The checkpoint is stored using a key-value datastore:
```go
// Checkpoint key in the datastore
checkpointKey = ds.NewKey("/based/checkpoint")

// Operations
checkpoint, err := checkpointStore.Load(ctx) // Load from disk
err = checkpointStore.Save(ctx, checkpoint)  // Save to disk
err = checkpointStore.Delete(ctx)            // Delete from disk
```
The checkpoint is serialized using Protocol Buffers (pb.SequencerDACheckpoint) for efficient storage and cross-version compatibility.
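A sketch of what such a store could look like on top of a go-datastore instance; the pb.SequencerDACheckpoint field names, the generated package's import path, and the nil-on-missing convention for Load are assumptions:

```go
import (
	"context"
	"errors"

	ds "github.com/ipfs/go-datastore"
	"google.golang.org/protobuf/proto"
	// pb "<generated protobuf package>" // import path omitted; assumption
)

// checkpointKey is where the checkpoint lives in the datastore.
var checkpointKey = ds.NewKey("/based/checkpoint")

// CheckpointStore persists the checkpoint in a key-value datastore.
type CheckpointStore struct {
	store ds.Datastore
}

// Save serializes the checkpoint with protobuf and writes it to the datastore.
func (c *CheckpointStore) Save(ctx context.Context, cp *Checkpoint) error {
	// pb.SequencerDACheckpoint field names are assumptions.
	bz, err := proto.Marshal(&pb.SequencerDACheckpoint{DaHeight: cp.DAHeight, TxIndex: cp.TxIndex})
	if err != nil {
		return err
	}
	return c.store.Put(ctx, checkpointKey, bz)
}

// Load reads and deserializes the checkpoint; it returns nil when none has
// been saved yet (assumed convention).
func (c *CheckpointStore) Load(ctx context.Context) (*Checkpoint, error) {
	bz, err := c.store.Get(ctx, checkpointKey)
	if errors.Is(err, ds.ErrNotFound) {
		return nil, nil
	}
	if err != nil {
		return nil, err
	}
	var msg pb.SequencerDACheckpoint
	if err := proto.Unmarshal(bz, &msg); err != nil {
		return nil, err
	}
	return &Checkpoint{DAHeight: msg.DaHeight, TxIndex: msg.TxIndex}, nil
}

// Delete removes the persisted checkpoint.
func (c *CheckpointStore) Delete(ctx context.Context) error {
	return c.store.Delete(ctx, checkpointKey)
}
```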