Documentation ¶
Overview ¶
Package raft implements a Consensus component for IPFS Cluster which uses Raft (go-libp2p-raft).
Index ¶
- Constants
- Variables
- func CleanupRaft(cfg *Config) error
- func LastStateRaw(cfg *Config) (io.Reader, bool, error)
- func OfflineState(cfg *Config, store ds.Datastore) (state.State, error)
- func SnapshotSave(cfg *Config, newState state.State, pids []peer.ID) error
- type Config
- type Consensus
- func (cc *Consensus) AddPeer(ctx context.Context, pid peer.ID) error
- func (cc *Consensus) Clean(ctx context.Context) error
- func (cc *Consensus) Distrust(ctx context.Context, pid peer.ID) error
- func (cc *Consensus) IsTrustedPeer(ctx context.Context, p peer.ID) bool
- func (cc *Consensus) Leader(ctx context.Context) (peer.ID, error)
- func (cc *Consensus) LogPin(ctx context.Context, pin *api.Pin) error
- func (cc *Consensus) LogUnpin(ctx context.Context, pin *api.Pin) error
- func (cc *Consensus) Peers(ctx context.Context) ([]peer.ID, error)
- func (cc *Consensus) Ready(ctx context.Context) <-chan struct{}
- func (cc *Consensus) RmPeer(ctx context.Context, pid peer.ID) error
- func (cc *Consensus) Rollback(state state.State) error
- func (cc *Consensus) SetClient(c *rpc.Client)
- func (cc *Consensus) Shutdown(ctx context.Context) error
- func (cc *Consensus) State(ctx context.Context) (state.ReadOnly, error)
- func (cc *Consensus) Trust(ctx context.Context, pid peer.ID) error
- func (cc *Consensus) WaitForSync(ctx context.Context) error
- type LogOp
- type LogOpType
Constants ¶
const (
    LogOpPin = iota + 1
    LogOpUnpin
)
Type of consensus operation
Variables ¶
var (
    DefaultDataSubFolder        = "raft"
    DefaultWaitForLeaderTimeout = 15 * time.Second
    DefaultCommitRetries        = 1
    DefaultNetworkTimeout       = 10 * time.Second
    DefaultCommitRetryDelay     = 200 * time.Millisecond
    DefaultBackupsRotate        = 6
    DefaultDatastoreNamespace   = "/r" // from "/raft"
)
Configuration defaults
var RaftLogCacheSize = 512
RaftLogCacheSize is the maximum number of logs to cache in-memory. This is used to reduce disk I/O for the recently committed entries.
var RaftMaxSnapshots = 5
RaftMaxSnapshots indicates how many snapshots to keep in the consensus data folder. TODO: Maybe include this in Config. Not sure how useful it is to touch this anyways.
Functions ¶
func CleanupRaft ¶ added in v0.3.2
CleanupRaft moves the current data folder to a backup location.
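A minimal usage sketch (hypothetical wiring; the consensus component should already be shut down):

    // Rotate the Raft data folder into a backup location instead of
    // deleting it outright. At most cfg.BackupsRotate copies are kept.
    if err := raft.CleanupRaft(cfg); err != nil {
        log.Fatal(err)
    }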
func LastStateRaw ¶ added in v0.3.1
LastStateRaw returns a reader over the bytes of the last snapshot stored and a flag indicating whether any snapshot was found.
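For illustration, a hedged sketch of probing for a previous snapshot; the boolean is false on a fresh node:

    r, found, err := raft.LastStateRaw(cfg)
    if err != nil {
        return err
    }
    if !found {
        return nil // no snapshot yet: nothing to inspect or migrate
    }
    raw, err := io.ReadAll(r) // raw snapshot bytes
    if err != nil {
        return err
    }
    _ = raw // decode or migrate the state as needed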
func OfflineState ¶ added in v0.11.0
OfflineState returns a cluster state by reading the Raft data and writing it to the given datastore, which is then wrapped as a state.State. Usually an in-memory datastore suffices. The given datastore should be thread-safe.
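A sketch of loading the last committed state without starting Raft, where ds and dssync are assumed to be the go-datastore root and sync packages:

    store := dssync.MutexWrap(ds.NewMapDatastore()) // in-memory and thread-safe
    st, err := raft.OfflineState(cfg, store)
    if err != nil {
        return err
    }
    // st is a state.State rebuilt from the Raft snapshot and log data.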
func SnapshotSave ¶ added in v0.3.1
SnapshotSave saves the provided state to a snapshot in the Raft data path. Old Raft data is backed up and replaced by the new snapshot. pids contains the config-specified peer IDs to include in the snapshot metadata when no previous snapshot exists from which to copy the Raft metadata.
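A hedged sketch, e.g. as part of a state-format upgrade performed while the cluster is offline (newState is assumed to hold the upgraded state):

    // Replace the stored snapshot with newState. The existing Raft data
    // is backed up first. cfg.InitPeerset is only consulted when there
    // is no previous snapshot providing Raft metadata to copy.
    if err := raft.SnapshotSave(cfg, newState, cfg.InitPeerset); err != nil {
        return err
    }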
Types ¶
type Config ¶ added in v0.2.0
type Config struct {
    config.Saver

    // A folder to store Raft's data.
    DataFolder string

    // InitPeerset provides the list of initial cluster peers for new Raft
    // peers (with no prior state). It is ignored when Raft was already
    // initialized or when starting in staging mode.
    InitPeerset []peer.ID

    // WaitForLeaderTimeout specifies how long to wait for a leader before
    // failing an operation.
    WaitForLeaderTimeout time.Duration

    // NetworkTimeout specifies how long before a Raft network
    // operation is timed out.
    NetworkTimeout time.Duration

    // CommitRetries specifies how many times we retry a failed commit until
    // we give up.
    CommitRetries int

    // How long to wait between retries.
    CommitRetryDelay time.Duration

    // BackupsRotate specifies the maximum number of Raft's DataFolder
    // copies that we keep as backups (renaming) after cleanup.
    BackupsRotate int

    // Namespace to use when writing keys to the datastore.
    DatastoreNamespace string

    // A Hashicorp Raft's configuration object.
    RaftConfig *hraft.Config

    // Tracing enables propagation of contexts across binary boundaries.
    Tracing bool
    // contains filtered or unexported fields
}
Config allows configuring the Raft Consensus component for IPFS Cluster. The component's configuration section is represented by ConfigJSON. Config implements the ComponentConfig interface.
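A minimal configuration sketch, starting from defaults and overriding a few fields (the folder path is hypothetical):

    cfg := &raft.Config{}
    if err := cfg.Default(); err != nil {
        return err
    }
    cfg.DataFolder = "/var/lib/ipfs-cluster/raft" // hypothetical location
    cfg.WaitForLeaderTimeout = 30 * time.Second
    cfg.CommitRetries = 2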
func (*Config) ApplyEnvVars ¶ added in v0.10.0
ApplyEnvVars fills in any Config fields found as environment variables.
func (*Config) ConfigKey ¶ added in v0.2.0
ConfigKey returns a human-friendly identifier for this Config.
func (*Config) Default ¶ added in v0.2.0
Default initializes this configuration with working defaults.
func (*Config) GetDataFolder ¶ added in v0.4.0
GetDataFolder returns the Raft data folder that we are using.
func (*Config) LoadJSON ¶ added in v0.2.0
LoadJSON parses a json-encoded configuration (see jsonConfig). The Config will have default values for all fields not explicitly set in the given json object.
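A short sketch of loading the JSON section and then letting environment variables override it; raw is assumed to hold this component's section, e.g. extracted from a service.json:

    var raw []byte // this component's JSON configuration section
    cfg := &raft.Config{}
    if err := cfg.LoadJSON(raw); err != nil {
        return err
    }
    if err := cfg.ApplyEnvVars(); err != nil {
        return err
    }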
type Consensus ¶
type Consensus struct {
// contains filtered or unexported fields
}
Consensus handles the work of keeping a shared-state between the peers of an IPFS Cluster, as well as modifying that state and applying any updates in a thread-safe manner.
func NewConsensus ¶
func NewConsensus(
    host host.Host,
    cfg *Config,
    store ds.Datastore,
    staging bool,
) (*Consensus, error)
NewConsensus builds a new ClusterConsensus component using Raft.
Raft saves state snapshots regularly and persists log data in a BoltDB datastore. Therefore, unless memory usage is a concern, it is recommended to use an in-memory go-datastore as the store parameter.
The staging parameter controls whether the Raft peer should start in staging mode (used when joining a new Raft peerset with other peers).
The store parameter should be a thread-safe datastore.
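A wiring sketch, assuming h is a running libp2p host.Host and rpcClient is the cluster's *rpc.Client (both created elsewhere):

    store := dssync.MutexWrap(ds.NewMapDatastore()) // thread-safe, in-memory
    cc, err := raft.NewConsensus(h, cfg, store, false) // staging=false
    if err != nil {
        return err
    }
    cc.SetClient(rpcClient) // the component needs an rpc.Client before use
    <-cc.Ready(ctx)         // block until bootstrapping has finished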
func (*Consensus) AddPeer ¶ added in v0.3.0
AddPeer adds a new peer to participate in this consensus. If this peer is not the leader, the operation is forwarded to the leader.
func (*Consensus) IsTrustedPeer ¶ added in v0.11.0
IsTrustedPeer returns true. In Raft we trust all peers.
func (*Consensus) Leader ¶
Leader returns the peerID of the Leader of the cluster. It returns an error when there is no leader.
func (*Consensus) LogPin ¶
LogPin submits a Cid to the shared state of the cluster. If this peer is not the leader, the operation is forwarded to the leader.
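A usage sketch; api.PinCid is assumed to build the *api.Pin for a plain CID, as in the cluster api package:

    c, err := cid.Decode("QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG")
    if err != nil {
        return err
    }
    if err := cc.LogPin(ctx, api.PinCid(c)); err != nil {
        return err // commit failed after CommitRetries attempts
    }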
func (*Consensus) Peers ¶ added in v0.3.0
Peers returns the current list of peers in the consensus. The list will be sorted alphabetically.
func (*Consensus) Ready ¶
Ready returns a channel which is signaled when the Consensus algorithm has finished bootstrapping and is ready to use.
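A hedged sketch of waiting for readiness without blocking forever:

    select {
    case <-cc.Ready(ctx):
        // safe to call LogPin, Peers, Leader, etc.
    case <-ctx.Done():
        return ctx.Err() // caller gave up waiting
    }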
func (*Consensus) RmPeer ¶ added in v0.3.0
RmPeer removes a peer from this consensus. If this peer is not the leader, the operation is forwarded to the leader.
func (*Consensus) Rollback ¶
Rollback replaces the current agreed-upon state with the state provided. Only the consensus leader can perform this operation.
func (*Consensus) Shutdown ¶
Shutdown stops the component so it will not process any more updates. The underlying consensus is permanently shutdown, along with the libp2p transport.
func (*Consensus) State ¶
State retrieves the current consensus State. It may error if no State has been agreed upon or the state is not consistent. The returned State is the last agreed-upon State known by this node. No writes are allowed, as all writes to the shared state should happen through the Consensus component methods.
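For illustration:

    st, err := cc.State(ctx)
    if err != nil {
        return err // e.g. no state agreed upon yet
    }
    // st satisfies state.ReadOnly: use its read methods to inspect pins;
    // modifications must go through LogPin/LogUnpin instead.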
type LogOp ¶
type LogOp struct {
    SpanCtx trace.SpanContext `codec:"s,omitempty"`
    TagCtx  []byte            `codec:"t,omitempty"`
    Cid     *api.Pin          `codec:"c,omitempty"`
    Type    LogOpType         `codec:"p,omitempty"`
    // contains filtered or unexported fields
}
LogOp represents an operation for the OpLogConsensus system. It implements the consensus.Op interface and it is used by the Consensus component.