Documentation ¶
Index ¶
- Constants
- type CheckPoint
- type ClientRecord
- type Consensus
- type EngineStatus
- type NormalLog
- type RequestTimer
- type Set
- type Stage
- type StateEngine
- func (s *StateEngine) GetON(newVID int64) (int64, int64, message.OMessage, message.OMessage, *message.ViewChange)
- func (s *StateEngine) InspireConsensus(request *message.Request) error
- func (s *StateEngine) ResetState(reply *message.Reply)
- func (s *StateEngine) StartConsensus(sig chan interface{})
- func (s *StateEngine) ViewChange()
- type VCCache
Constants ¶
const CheckPointInterval = 1 << 5 // 32
const CheckPointK = 2 * CheckPointInterval // 64
const MaxStateMsgNO = 100
const StateTimerOut = 5 * time.Second
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type CheckPoint ¶
type CheckPoint struct {
	Seq      int64                         `json:"sequence"`
	Digest   string                        `json:"digest"`
	IsStable bool                          `json:"isStable"`
	ViewID   int64                         `json:"viewID"`
	CPMsg    map[int64]*message.CheckPoint `json:"checks"`
}
Generating these proofs after executing every operation would be expensive. Instead, they are generated periodically, when a Request with a sequence number divisible by some constant (e.g., 100) is executed. We will refer to the states produced by the execution of these requests as checkpoints, and we will say that a checkpoint with a proof is a stable checkpoint.
A replica maintains several logical copies of the service state: the last stable checkpoint, zero or more
checkpoints that are not stable, and a current state. Copy-on-write techniques can be used to reduce the space overhead to store the extra copies of the state, as discussed in Section 6.3.
The proof of correctness for a checkpoint is generated as follows. When a replica produces a checkpoint, it multicasts a message ⟨CHECKPOINT, n, d, i⟩ to the other replicas, where n is the sequence number of the last Request whose execution is reflected in the state and d is the digest of the state. Each replica collects checkpoint messages in its log until it has 2f + 1 of them for sequence number n with the same digest signed by different replicas (including possibly its own such message). These 2f + 1 messages are the proof of correctness for the checkpoint.
A checkpoint with a proof becomes stable, and the replica discards all PRE-PREPARE, PREPARE, and COMMIT messages with sequence number less than or equal to n from its log; it also discards all earlier checkpoints and checkpoint messages.
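As a minimal sketch of the proof-collection rule above (the `checkpointMsg` type and `proofComplete` helper are illustrative stand-ins, not part of this package's API):

```go
package main

import "fmt"

// checkpointMsg is a simplified stand-in for message.CheckPoint: a sequence
// number, a digest of the state, and the replica that signed the message.
type checkpointMsg struct {
	Seq     int64
	Digest  string
	Replica int64
}

// proofComplete reports whether msgs contains CHECKPOINT messages for
// sequence seq with digest d signed by at least 2f+1 distinct replicas,
// i.e. a proof of correctness that makes the checkpoint stable.
func proofComplete(msgs []checkpointMsg, seq int64, d string, f int) bool {
	signers := map[int64]bool{}
	for _, m := range msgs {
		if m.Seq == seq && m.Digest == d {
			signers[m.Replica] = true // count each replica at most once
		}
	}
	return len(signers) >= 2*f+1
}

func main() {
	f := 1 // tolerate one faulty replica, so 2f+1 = 3 matching messages are needed
	msgs := []checkpointMsg{
		{Seq: 32, Digest: "d32", Replica: 0},
		{Seq: 32, Digest: "d32", Replica: 1},
		{Seq: 32, Digest: "d32", Replica: 2},
	}
	fmt.Println(proofComplete(msgs, 32, "d32", f)) // true: proof is complete
}
```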
Computing the proofs is efficient because the digest can be computed using incremental cryptography [1] as
discussed in Section 6.3, and proofs are generated rarely.
The checkpoint protocol is used to advance the low and high water marks (which limit what messages will be accepted).
The low-water mark h is equal to the sequence number of the last stable checkpoint. The high water mark H = h + k, where k is big enough so that replicas do not stall waiting for a checkpoint to become stable. For example, if checkpoints are taken every 100 requests, k might be 200.
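The water-mark rule above can be sketched as follows; the constants mirror CheckPointInterval and CheckPointK from this package, while `inWaterMarks` is a hypothetical helper, not an exported function of StateEngine:

```go
package main

import "fmt"

const (
	checkpointInterval = 1 << 5                 // a checkpoint every 32 requests
	k                  = 2 * checkpointInterval // window size; large enough that replicas do not stall
)

// inWaterMarks reports whether sequence number n falls inside the window
// (h, h+k], where h is the sequence number of the last stable checkpoint
// and H = h + k is the high-water mark.
func inWaterMarks(h, n int64) bool {
	return n > h && n <= h+k
}

func main() {
	h := int64(64) // last stable checkpoint
	fmt.Println(inWaterMarks(h, 65))  // true: just above the low-water mark
	fmt.Println(inWaterMarks(h, 128)) // true: exactly at H = h + k
	fmt.Println(inWaterMarks(h, 129)) // false: beyond H, wait for a new stable checkpoint
}
```

When a checkpoint at sequence n becomes stable, h advances to n and the window slides forward, which is what lets the replica discard old log entries.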
func NewCheckPoint ¶
func NewCheckPoint(sq, vi int64) *CheckPoint
type ClientRecord ¶
type ClientRecord struct {
	LastReplyTime int64                      `json:"lastReply"`
	Request       map[int64]*message.Request `json:"Request"`
	Reply         map[int64]*message.Reply   `json:"Reply"`
}
func NewClientRecord ¶
func NewClientRecord() *ClientRecord
type EngineStatus ¶
type EngineStatus int8
const (
	Syncing EngineStatus = iota
	Serving
	ViewChanging
)
func (EngineStatus) String ¶
func (es EngineStatus) String() string
type NormalLog ¶
type NormalLog struct {
	Stage      Stage                     `json:"Stage"`
	PrePrepare *message.PrePrepare       `json:"PrePrepare"`
	Prepare    message.PrepareMsg        `json:"Prepare"`
	Commit     map[int64]*message.Commit `json:"Commit"`
	// contains filtered or unexported fields
}
func NewNormalLog ¶
func NewNormalLog() *NormalLog
type RequestTimer ¶
type StateEngine ¶
type StateEngine struct {
	NodeID      int64 `json:"nodeID"`
	CurViewID   int64 `json:"viewID"`
	CurSequence int64 `json:"curSeq"`
	LasExeSeq   int64 `json:"lastExeSeq"`
	PrimaryID   int64 `json:"primaryID"`
	Timer       *RequestTimer
	MsgChan     <-chan *message.ConMessage
	MiniSeq     int64 `json:"miniSeq"`
	MaxSeq      int64 `json:"maxSeq"`
	// contains filtered or unexported fields
}
func InitConsensus ¶
func InitConsensus(id int64, cChan chan<- *message.RequestRecord, rChan chan<- *message.Reply) *StateEngine
func (*StateEngine) GetON ¶
func (s *StateEngine) GetON(newVID int64) (int64, int64, message.OMessage, message.OMessage, *message.ViewChange)
func (*StateEngine) InspireConsensus ¶
func (s *StateEngine) InspireConsensus(request *message.Request) error
func (*StateEngine) ResetState ¶
func (s *StateEngine) ResetState(reply *message.Reply)
func (*StateEngine) StartConsensus ¶
func (s *StateEngine) StartConsensus(sig chan interface{})
func (*StateEngine) ViewChange ¶
func (s *StateEngine) ViewChange()
View-Change Messages
When a backup i suspects the primary for view v is faulty, it enters view v + 1 and multicasts a ⟨VIEW-CHANGE, v + 1, h, C, P, Q, i⟩αi message to all replicas. Here h is the sequence number of the latest stable checkpoint known to i, C is a set of pairs with the sequence number and digest of each checkpoint stored at i, and P and Q are the sets described above. These sets are updated before sending the VIEW-CHANGE message using the information in the log, as explained in Figure 3. Once the VIEW-CHANGE message has been sent, i removes PRE-PREPARE, PREPARE, and COMMIT messages from its log.
The number of tuples in Q may grow without bound if the algorithm changes views repeatedly without making progress. In Castro [2001], we describe a modification to the algorithm that bounds the size of Q by a constant. It is interesting to note that VIEW-CHANGE messages do not include PRE-PREPARE, PREPARE, or CHECKPOINT messages.
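The view-change trigger above can be sketched as follows; `viewChange` is a simplified stand-in for message.ViewChange, and `suspectPrimary` is a hypothetical helper (the sets P and Q are omitted for brevity):

```go
package main

import "fmt"

// viewChange is a simplified stand-in for message.ViewChange: the new view
// number, the last stable checkpoint h, the checkpoint set C, and the sender i.
type viewChange struct {
	NewView     int64
	StableSeq   int64            // h: latest stable checkpoint known to the sender
	Checkpoints map[int64]string // C: sequence number -> state digest
	Replica     int64            // i: the backup sending the message
}

// suspectPrimary models what a backup i does when it suspects the primary of
// view v: enter view v+1 and build the VIEW-CHANGE message to multicast.
func suspectPrimary(v, h, i int64, c map[int64]string) viewChange {
	return viewChange{NewView: v + 1, StableSeq: h, Checkpoints: c, Replica: i}
}

func main() {
	c := map[int64]string{64: "d64"} // checkpoints stored at replica 2
	vc := suspectPrimary(3, 64, 2, c)
	fmt.Println(vc.NewView, vc.StableSeq, vc.Replica) // 4 64 2
}
```

After multicasting this message, the backup stops accepting messages for view v and waits for the new primary of view v + 1 to assemble a NEW-VIEW message.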
type VCCache ¶
type VCCache struct {
// contains filtered or unexported fields
}
func NewVCCache ¶
func NewVCCache() *VCCache