Version: v0.0.0-...-0efea2d

This package is not in the latest version of its module.

Published: Feb 25, 2021 License: GPL-3.0 Imports: 5 Imported by: 1
Constants

const CheckPointInterval = 1 << 5 // 32
const CheckPointK = 2 * CheckPointInterval // 64
const MaxStateMsgNO = 100
const StateTimerOut = 5 * time.Second


Variables

This section is empty.

Functions

This section is empty.


Types

type CheckPoint

type CheckPoint struct {
	Seq      int64                         `json:"sequence"`
	Digest   string                        `json:"digest"`
	IsStable bool                          `json:"isStable"`
	ViewID   int64                         `json:"viewID"`
	CPMsg    map[int64]*message.CheckPoint `json:"checks"`
}
Generating these proofs after executing every operation would be expensive. Instead, they are generated periodically, when a request with a sequence number divisible by some constant (e.g., 100) is executed. We will refer to the states produced by the execution of these requests as checkpoints, and we will say that a checkpoint with a proof is a stable checkpoint.

A replica maintains several logical copies of the service state: the last stable checkpoint, zero or more checkpoints that are not stable, and a current state. Copy-on-write techniques can be used to reduce the space overhead of storing the extra copies of the state, as discussed in Section 6.3.

The proof of correctness for a checkpoint is generated as follows. When a replica produces a checkpoint, it multicasts a message ⟨CHECKPOINT, n, d, i⟩ to the other replicas, where n is the sequence number of the last request whose execution is reflected in the state and d is the digest of the state. Each replica collects checkpoint messages in its log until it has 2f + 1 of them for sequence number n with the same digest, signed by different replicas (possibly including its own such message). These 2f + 1 messages are the proof of correctness for the checkpoint.

A checkpoint with a proof becomes stable, and the replica discards all Pre-Prepare, Prepare, and Commit messages with sequence number less than or equal to n from its log; it also discards all earlier checkpoints and checkpoint messages.

Computing the proofs is efficient because the digest can be computed using incremental cryptography [1], as discussed in Section 6.3, and proofs are generated rarely.

The checkpoint protocol is used to advance the low and high water marks (which limit what messages will be accepted).

The low water mark h is equal to the sequence number of the last stable checkpoint. The high water mark H = h + k, where k is big enough that replicas do not stall waiting for a checkpoint to become stable. For example, if checkpoints are taken every 100 requests, k might be 200.

func NewCheckPoint

func NewCheckPoint(sq, vi int64) *CheckPoint

type ClientRecord

type ClientRecord struct {
	LastReplyTime int64                      `json:"lastReply"`
	Request       map[int64]*message.Request `json:"Request"`
	Reply         map[int64]*message.Reply   `json:"Reply"`
}

func NewClientRecord

func NewClientRecord() *ClientRecord

type Consensus

type Consensus interface {
	// contains filtered or unexported methods
}

type EngineStatus

type EngineStatus int8
const (
	Syncing EngineStatus = iota
)

func (EngineStatus) String

func (es EngineStatus) String() string

type NormalLog

type NormalLog struct {
	Stage      Stage                     `json:"Stage"`
	PrePrepare *message.PrePrepare       `json:"PrePrepare"`
	Prepare    message.PrepareMsg        `json:"Prepare"`
	Commit     map[int64]*message.Commit `json:"Commit"`
	// contains filtered or unexported fields
}

func NewNormalLog

func NewNormalLog() *NormalLog

type RequestTimer

type RequestTimer struct {
	IsOk bool
}

type Set

type Set map[interface{}]bool

type Stage

type Stage int
const (
	Idle Stage = iota
)

func (Stage) String

func (s Stage) String() string

type StateEngine

type StateEngine struct {
	NodeID      int64 `json:"nodeID"`
	CurViewID   int64 `json:"viewID"`
	CurSequence int64 `json:"curSeq"`
	LasExeSeq   int64 `json:"lastExeSeq"`
	PrimaryID   int64 `json:"primaryID"`

	Timer *RequestTimer

	MsgChan <-chan *message.ConMessage

	MiniSeq int64 `json:"miniSeq"`
	MaxSeq  int64 `json:"maxSeq"`
	// contains filtered or unexported fields
}

func InitConsensus

func InitConsensus(id int64, cChan chan<- *message.RequestRecord, rChan chan<- *message.Reply) *StateEngine

func (*StateEngine) GetON

func (*StateEngine) InspireConsensus

func (s *StateEngine) InspireConsensus(request *message.Request) error

func (*StateEngine) ResetState

func (s *StateEngine) ResetState(reply *message.Reply)

func (*StateEngine) StartConsensus

func (s *StateEngine) StartConsensus(sig chan interface{})

func (*StateEngine) ViewChange

func (s *StateEngine) ViewChange()

View-Change Messages

When a backup i suspects the primary for view v is faulty, it enters view v + 1 and multicasts a ⟨VIEW-CHANGE, v + 1, h, C, P, Q, i⟩αi message to all replicas. Here h is the sequence number of the latest stable checkpoint known to i, C is a set of pairs with the sequence number and digest of each checkpoint stored at i, and P and Q are the sets described above. These sets are updated before sending the VIEW-CHANGE message using the information in the log, as explained in Figure 3. Once the VIEW-CHANGE message has been sent, i removes PRE-PREPARE, PREPARE, and COMMIT messages from its log. The number of tuples in Q may grow without bound if the algorithm changes views repeatedly without making progress. In Castro [2001], we describe a modification to the algorithm that bounds the size of Q by a constant. It is interesting to note that VIEW-CHANGE messages do not include PRE-PREPARE, PREPARE, or CHECKPOINT messages.

type VCCache

type VCCache struct {
	// contains filtered or unexported fields
}

func NewVCCache

func NewVCCache() *VCCache
