Documentation ¶
Overview ¶
Package crdt implements the IPFS Cluster consensus interface using a CRDT-backed datastore (go-ds-crdt) to replicate the cluster's global state to every peer.
Index ¶
- Variables
- func Clean(ctx context.Context, cfg *Config, store ds.Datastore) error
- func OfflineState(cfg *Config, store ds.Datastore) (state.BatchingState, error)
- type BatchingConfig
- type Config
- type Consensus
- func (css *Consensus) AddPeer(ctx context.Context, pid peer.ID) error
- func (css *Consensus) Clean(ctx context.Context) error
- func (css *Consensus) Distrust(ctx context.Context, pid peer.ID) error
- func (css *Consensus) IsTrustedPeer(ctx context.Context, pid peer.ID) bool
- func (css *Consensus) Leader(ctx context.Context) (peer.ID, error)
- func (css *Consensus) LogPin(ctx context.Context, pin api.Pin) error
- func (css *Consensus) LogUnpin(ctx context.Context, pin api.Pin) error
- func (css *Consensus) Peers(ctx context.Context) ([]peer.ID, error)
- func (css *Consensus) Ready(ctx context.Context) <-chan struct{}
- func (css *Consensus) RmPeer(ctx context.Context, pid peer.ID) error
- func (css *Consensus) SetClient(c *rpc.Client)
- func (css *Consensus) Shutdown(ctx context.Context) error
- func (css *Consensus) State(ctx context.Context) (state.ReadOnly, error)
- func (css *Consensus) Trust(ctx context.Context, pid peer.ID) error
- func (css *Consensus) WaitForSync(ctx context.Context) error
Constants ¶
This section is empty.
Variables ¶
var (
	DefaultClusterName          = "ipfs-cluster"
	DefaultPeersetMetric        = "ping"
	DefaultDatastoreNamespace   = "/c" // from "/crdt"
	DefaultRebroadcastInterval  = time.Minute
	DefaultTrustedPeers         = []peer.ID{}
	DefaultTrustAll             = true
	DefaultBatchingMaxQueueSize = 50000
	DefaultRepairInterval       = time.Hour
)
Default configuration values.
var (
	ErrNoLeader            = errors.New("crdt consensus component does not provide a leader")
	ErrRmPeer              = errors.New("crdt consensus component cannot remove peers")
	ErrMaxQueueSizeReached = errors.New("batching max_queue_size reached. Too many operations are waiting to be batched. Try increasing the max_queue_size or adjusting the batching options")
)
Common variables for the module.
Functions ¶
func OfflineState ¶
OfflineState returns an offline, batching state using the given datastore. This allows inspecting and modifying the shared state in offline mode.
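A minimal sketch of how OfflineState might be used to inspect the shared state while no cluster peer is running. The inspectState helper is hypothetical, and the import paths for the crdt, state and go-datastore packages depend on the ipfs-cluster version in use.

// inspectState assumes cfg has been loaded from the cluster configuration
// and store has been opened on the cluster's datastore, with no peer
// currently running against it.
func inspectState(cfg *crdt.Config, store ds.Datastore) (state.BatchingState, error) {
	st, err := crdt.OfflineState(cfg, store)
	if err != nil {
		return nil, err
	}
	// st can now be used to read or modify pins; batched modifications
	// need to be committed before the datastore is closed.
	return st, nil
}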
Types ¶
type BatchingConfig ¶ added in v0.13.3
BatchingConfig configures parameters for batching multiple pins in a single CRDT-put operation.
MaxBatchSize will trigger a commit whenever the number of pins in the batch reaches the limit.
MaxBatchAge will trigger a commit when the oldest update in the batch reaches that age. Setting both values to 0 disables batching.
MaxQueueSize specifies how many items can be waiting to be batched before the LogPin/Unpin operations block.
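As an illustration only, batching could be enabled on an existing *crdt.Config along these lines. The concrete values are arbitrary, imports are omitted, and the field types are assumed from their descriptions (a duration for MaxBatchAge, integers for the sizes); commits are triggered by whichever of the two limits is reached first.

cfg.Batching = crdt.BatchingConfig{
	MaxBatchSize: 100,              // commit once 100 pins are queued...
	MaxBatchAge:  10 * time.Second, // ...or when the oldest queued update is 10s old
	MaxQueueSize: 50000,            // cap on updates waiting to be batched
}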
type Config ¶
type Config struct {
	config.Saver

	// The topic we wish to subscribe to
	ClusterName string

	// TrustAll specifies whether we should trust all peers regardless of
	// the TrustedPeers contents.
	TrustAll bool

	// Any update received from a peer outside this set is ignored and not
	// forwarded. Trusted peers can also access additional RPC endpoints
	// for this peer that are forbidden for other peers.
	TrustedPeers []peer.ID

	// Specifies whether to batch CRDT updates for increased
	// performance.
	Batching BatchingConfig

	// The interval before re-announcing the current state
	// to the network when no activity is observed.
	RebroadcastInterval time.Duration

	// The name of the metric we use to obtain the peerset (every peer
	// with valid metric of this type is part of it).
	PeersetMetric string

	// All keys written to the datastore will be namespaced with this prefix.
	DatastoreNamespace string

	// How often the underlying crdt store triggers a repair when the
	// datastore is marked dirty.
	RepairInterval time.Duration

	// Tracing enables propagation of contexts across binary boundaries.
	Tracing bool
	// contains filtered or unexported fields
}
Config is the configuration object for Consensus.
func (*Config) ApplyEnvVars ¶
ApplyEnvVars fills in any Config fields found as environment variables.
func (*Config) ToDisplayJSON ¶ added in v0.13.0
ToDisplayJSON returns JSON config as a string.
type Consensus ¶
type Consensus struct {
// contains filtered or unexported fields
}
Consensus implements ipfscluster.Consensus and provides the facility to add and remove pins from the Cluster shared state. It uses a CRDT-backed implementation of go-datastore (go-ds-crdt).
func New ¶
func New(
	host host.Host,
	dht routing.Routing,
	pubsub *pubsub.PubSub,
	cfg *Config,
	store ds.Datastore,
) (*Consensus, error)
New creates a new crdt Consensus component. The given PubSub will be used to broadcast new heads. The given thread-safe datastore will be used to persist data, and all keys will be prefixed with cfg.DatastoreNamespace.
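A minimal wiring sketch, assuming a libp2p host, a routing instance (such as a DHT), a GossipSub instance and a thread-safe datastore have already been created by the caller, and that cfg has been loaded and validated elsewhere. The setupConsensus helper is hypothetical, and the import paths may differ between ipfs-cluster and go-libp2p versions.

import (
	"context"

	ds "github.com/ipfs/go-datastore"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
	host "github.com/libp2p/go-libp2p/core/host"
	routing "github.com/libp2p/go-libp2p/core/routing"

	crdt "github.com/ipfs-cluster/ipfs-cluster/consensus/crdt"
)

func setupConsensus(
	ctx context.Context,
	h host.Host,
	dht routing.Routing,
	psub *pubsub.PubSub,
	cfg *crdt.Config,
	store ds.Datastore, // must be thread-safe
) (*crdt.Consensus, error) {
	css, err := crdt.New(h, dht, psub, cfg, store)
	if err != nil {
		return nil, err
	}
	// Block until the component has finished its setup, or give up when
	// the context is cancelled.
	select {
	case <-css.Ready(ctx):
	case <-ctx.Done():
		_ = css.Shutdown(ctx) // best-effort cleanup
		return nil, ctx.Err()
	}
	return css, nil
}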
func (*Consensus) AddPeer ¶
AddPeer is a no-op as we do not need to do peerset management with Merkle-CRDTs. Therefore adding a peer to the peerset means doing nothing.
func (*Consensus) IsTrustedPeer ¶
IsTrustedPeer returns whether the given peer is taken into account when submitting updates to the consensus state.
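For example, a sketch (assuming css is a running *crdt.Consensus and pid is a known peer.ID; the trustPeer helper is hypothetical) combining Trust and IsTrustedPeer:

func trustPeer(ctx context.Context, css *crdt.Consensus, pid peer.ID) error {
	// Accept updates from pid and allow it to call trusted-only RPC
	// endpoints on this peer.
	if err := css.Trust(ctx, pid); err != nil {
		return err
	}
	// IsTrustedPeer now reports true for pid (it always does when
	// TrustAll is enabled).
	_ = css.IsTrustedPeer(ctx, pid)
	return nil
}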
func (*Consensus) Peers ¶
Peers returns the current known peerset. It uses the monitor component and considers every peer with valid known metrics a member.
func (*Consensus) Ready ¶
Ready returns a channel which is signalled when the component is ready to use.
func (*Consensus) RmPeer ¶
RmPeer is a no-op which always errors: since we do not do peerset management, we have no way to remove a peer from the peerset.
func (*Consensus) SetClient ¶
SetClient gives the component the ability to communicate and leaves it ready to use.
func (*Consensus) Shutdown ¶
Shutdown closes this component, cancelling the pubsub subscription and closing the datastore.
func (*Consensus) State ¶
State returns the cluster shared state. It will block until the consensus component is ready or shut down, or until the given context has been cancelled.
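A small sketch of calling State, assuming css is a *crdt.Consensus; the readState helper is hypothetical. Because the call can block, it is bounded with a timeout so a component that never becomes ready does not hang the caller forever.

func readState(css *crdt.Consensus) (state.ReadOnly, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	// Returns a read-only view of the shared pinset, or an error if the
	// component shut down or the context expired first.
	return css.State(ctx)
}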