ipfscluster

package module v0.0.2

Published: Jan 25, 2017 License: MIT

README

ipfs-cluster

Clustering for IPFS nodes.

WORK IN PROGRESS

DO NOT USE IN PRODUCTION

ipfs-cluster is a tool which groups a number of IPFS nodes together, allowing them to collectively perform operations such as pinning.

In order to do so, IPFS Cluster nodes use a libp2p-based consensus algorithm (currently Raft) to agree on a log of operations and build a consistent state across the cluster. The state represents which objects should be pinned by which nodes.

Additionally, cluster nodes act as a proxy/wrapper to the IPFS API, so they can be used as a regular node, with the difference that pin requests are handled by the Cluster.
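The proxy/wrapper behaviour can be sketched with the standard library alone. This is an illustrative assumption, not the real component (the actual implementation is IPFSHTTPConnector, documented further down): the `/api/v0/pin/` path check and all names here are stand-ins.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// isPinRequest reports whether a request should be intercepted by the
// cluster rather than forwarded verbatim to the IPFS daemon. The path
// prefix is an illustrative assumption, not the exact check used by
// ipfs-cluster.
func isPinRequest(path string) bool {
	return strings.HasPrefix(path, "/api/v0/pin/")
}

// newProxy builds a handler that forwards every request to the IPFS API,
// except pin requests, which are handed to the cluster via onPin.
func newProxy(ipfsAPI *url.URL, onPin func(path string)) http.Handler {
	rp := httputil.NewSingleHostReverseProxy(ipfsAPI)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if isPinRequest(r.URL.Path) {
			onPin(r.URL.Path) // the cluster, not the daemon, handles this pin
			fmt.Fprint(w, `{"handled_by":"cluster"}`)
			return
		}
		rp.ServeHTTP(w, r) // everything else behaves like a regular IPFS node
	})
}

func main() {
	ipfs, _ := url.Parse("http://127.0.0.1:5001")
	handler := newProxy(ipfs, func(p string) { fmt.Println("cluster pin:", p) })
	_ = handler // e.g. http.ListenAndServe("127.0.0.1:9095", handler)
	fmt.Println(isPinRequest("/api/v0/pin/add"))
}
```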

IPFS Cluster provides a cluster-node application (ipfs-cluster-service), a Go API, an HTTP API and a command-line tool (ipfs-cluster-ctl).

Current functionality only allows pinning in all cluster members, but more strategies (like setting a replication factor for each pin) will be developed.

Background

Since the start of IPFS it was clear that a tool to coordinate a number of different nodes (and the content they are supposed to store) would add great value to the IPFS ecosystem. Naïve approaches are possible, but they present some weaknesses, especially when dealing with error handling, recovery and the implementation of advanced pinning strategies.

ipfs-cluster aims to address these issues by providing an IPFS node wrapper which coordinates multiple cluster members via a consensus algorithm. This ensures that the desired state of the system is always agreed upon and can be easily maintained by the members of the cluster. Thus, every cluster member knows which content is tracked, can decide whether to ask IPFS to pin it, and can react to contingencies like node reboots.

Captain

This project is captained by @hsanjuan. See the captain's log for information about current status and upcoming features. You can also check out the project's Roadmap.

Install

In order to install ipfs-cluster-service and the ipfs-cluster-ctl tool, simply download this repository and run make as follows:

$ go get -u -d github.com/ipfs/ipfs-cluster
$ cd $GOPATH/src/github.com/ipfs/ipfs-cluster
$ make install

This will install ipfs-cluster-service and ipfs-cluster-ctl in your $GOPATH/bin folder.

Usage

ipfs-cluster-service

ipfs-cluster-service runs a member node for the cluster. Usage information can be obtained by running:

$ ipfs-cluster-service -h

Before running ipfs-cluster-service for the first time, initialize a configuration file with:

$ ipfs-cluster-service -init

The configuration will be placed in ~/.ipfs-cluster/service.json by default.

You can add the multiaddresses for the other members of the cluster in the cluster_peers variable. For example, here is a valid configuration for a cluster of 4 members:

{
    "id": "QmSGCzHkz8gC9fNndMtaCZdf9RFtwtbTEEsGo4zkVfcykD",
    "private_key" : "<redacted>",
    "cluster_peers" : [
          "/ip4/192.168.1.2/tcp/9096/ipfs/QmcQ5XvrSQ4DouNkQyQtEoLczbMr6D9bSenGy6WQUCQUBt",
          "/ip4/192.168.1.3/tcp/9096/ipfs/QmdFBMf9HMDH3eCWrc1U11YCPenC3Uvy9mZQ2BedTyKTDf",
          "/ip4/192.168.1.4/tcp/9096/ipfs/QmYY1ggjoew5eFrvkenTR3F4uWqtkBkmgfJk8g9Qqcwy51",
          "/ip4/192.168.1.5/tcp/9096/ipfs/QmSGCzHkz8gC9fNndMtaCZdf9RFtwtbTEEsGo4zkVfcykD"
          ],
    "cluster_multiaddress": "/ip4/0.0.0.0/tcp/9096",
    "api_listen_multiaddress": "/ip4/127.0.0.1/tcp/9094",
    "ipfs_proxy_listen_multiaddress": "/ip4/127.0.0.1/tcp/9095",
    "ipfs_node_multiaddress": "/ip4/127.0.0.1/tcp/5001",
    "consensus_data_folder": "/home/user/.ipfs-cluster/data",
    "raft_config": {
        "snapshot_interval_seconds": 120,
        "enable_single_node": true
    }
}

The configuration file should probably be identical among all cluster members, except for the id and private_key fields. To facilitate configuration, cluster_peers may include its own address, but it does not have to. For additional information about the configuration format, see the JSONConfig documentation.
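For illustration, the shape of this file can be decoded with encoding/json. The struct below is a trimmed stand-in covering only a few fields, not the real JSONConfig type (documented further down):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// serviceConfig mirrors a subset of the service.json fields shown above.
// It is a trimmed stand-in for illustration, not the real JSONConfig type.
type serviceConfig struct {
	ID                  string   `json:"id"`
	ClusterPeers        []string `json:"cluster_peers"`
	ClusterMultiaddress string   `json:"cluster_multiaddress"`
}

// parseServiceConfig decodes a service.json-style document.
func parseServiceConfig(data []byte) (*serviceConfig, error) {
	var cfg serviceConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}

func main() {
	raw := []byte(`{
	    "id": "QmSGCzHkz8gC9fNndMtaCZdf9RFtwtbTEEsGo4zkVfcykD",
	    "cluster_peers": ["/ip4/192.168.1.2/tcp/9096/ipfs/QmcQ5XvrSQ4DouNkQyQtEoLczbMr6D9bSenGy6WQUCQUBt"],
	    "cluster_multiaddress": "/ip4/0.0.0.0/tcp/9096"
	}`)
	cfg, err := parseServiceConfig(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.ID, len(cfg.ClusterPeers))
}
```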

Once every cluster member has the configuration in place, you can run ipfs-cluster-service to start the cluster.

ipfs-cluster-ctl

ipfs-cluster-ctl is the client application to manage the cluster nodes and perform actions. ipfs-cluster-ctl uses the HTTP API provided by the nodes.

After installing, you can run ipfs-cluster-ctl --help to display general description and options, or alternatively ipfs-cluster-ctl help [cmd] to display information about supported commands.

In summary, it works as follows:

$ ipfs-cluster-ctl member ls                                                # list cluster members
$ ipfs-cluster-ctl pin add Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58   # pins a Cid in the cluster
$ ipfs-cluster-ctl pin rm Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58    # unpins a Cid from the cluster
$ ipfs-cluster-ctl status                                                   # display tracked Cids information
$ ipfs-cluster-ctl sync Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58      # sync and recover a Cid in error status

Go

IPFS Cluster nodes can be launched directly from Go. The Cluster object provides methods to interact with the cluster and perform actions.

Documentation and examples on how to use IPFS Cluster from Go can be found in godoc.org/github.com/ipfs/ipfs-cluster.

API

TODO: Swagger

Contribute

PRs accepted.

Small note: If editing the README, please conform to the standard-readme specification.

License

MIT © Protocol Labs, Inc.

Documentation

Overview

Package ipfscluster implements a wrapper for the IPFS daemon which allows orchestrating pinning operations among several IPFS nodes.

IPFS Cluster uses go-libp2p-raft to keep a shared state between the different members of the cluster. It also uses libp2p to enable communication between its different components, which perform different tasks like managing the underlying IPFS daemons, or providing APIs for external control.

Index

Constants

View Source
const (
	DefaultConfigCrypto    = crypto.RSA
	DefaultConfigKeyLength = 2048
	DefaultAPIAddr         = "/ip4/127.0.0.1/tcp/9094"
	DefaultIPFSProxyAddr   = "/ip4/127.0.0.1/tcp/9095"
	DefaultIPFSNodeAddr    = "/ip4/127.0.0.1/tcp/5001"
	DefaultClusterAddr     = "/ip4/0.0.0.0/tcp/9096"
)

Default parameters for the configuration

View Source
const (
	LogOpPin = iota + 1
	LogOpUnpin
)

Type of pin operation

View Source
const (
	// IPFSStatus should never take this value
	Bug = iota
	// The cluster node is offline or not responding
	ClusterError
	// An error occurred pinning
	PinError
	// An error occurred unpinning
	UnpinError
	// The IPFS daemon has pinned the item
	Pinned
	// The IPFS daemon is currently pinning the item
	Pinning
	// The IPFS daemon is currently unpinning the item
	Unpinning
	// The IPFS daemon is not pinning the item
	Unpinned
	// The IPFS daemon is not pinning the item but it is being tracked
	RemotePin
)

IPFSStatus values
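As a sketch of how these iota constants behave, here is a standalone version with a String method. The string forms are assumptions for illustration, not necessarily what the library's IPFSStatus.String returns:

```go
package main

import "fmt"

// status mirrors the iota-based IPFSStatus constants above.
type status int

const (
	bug status = iota // should never occur, as in the original block
	clusterError
	pinError
	unpinError
	pinned
	pinning
	unpinning
	unpinned
	remotePin
)

// String maps each value to a readable name (names are illustrative).
func (st status) String() string {
	names := [...]string{
		"bug", "cluster_error", "pin_error", "unpin_error",
		"pinned", "pinning", "unpinning", "unpinned", "remote",
	}
	if st >= 0 && int(st) < len(names) {
		return names[st]
	}
	return "unknown"
}

func main() {
	fmt.Println(pinned, pinning)
}
```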

View Source
const Version = ""

Version is the current cluster version. Version alignment between components, apis and tools ensures compatibility among them.

Variables

View Source
var (
	// maximum duration before timing out read of the request
	IPFSProxyServerReadTimeout = 5 * time.Second
	// maximum duration before timing out write of the response
	IPFSProxyServerWriteTimeout = 10 * time.Second
	// server-side the amount of time a Keep-Alive connection will be
	// kept idle before being reused
	IPFSProxyServerIdleTimeout = 60 * time.Second
)

IPFS Proxy settings

View Source
var (
	PinningTimeout   = 15 * time.Minute
	UnpinningTimeout = 10 * time.Second
)

A Pin or Unpin operation will be considered failed if the Cid has stayed in Pinning or Unpinning state for longer than these values.

View Source
var (
	// maximum duration before timing out read of the request
	RESTAPIServerReadTimeout = 5 * time.Second
	// maximum duration before timing out write of the response
	RESTAPIServerWriteTimeout = 10 * time.Second
	// server-side the amount of time a Keep-Alive connection will be
	// kept idle before being reused
	RESTAPIServerIdleTimeout = 60 * time.Second
)

Server settings

View Source
var Commit string

Commit is the current build commit of cluster. See Makefile

View Source
var LeaderTimeout = 120 * time.Second

LeaderTimeout specifies how long to wait during initialization before failing for not having a leader.

View Source
var RPCProtocol = protocol.ID("/ipfscluster/" + Version + "/rpc")

RPCProtocol is used to send libp2p messages between cluster members

View Source
var SilentRaft = true

SilentRaft controls whether all Raft log messages are discarded.

Functions

func SetLogLevel

func SetLogLevel(l string)

SetLogLevel sets the level in the logs

Types

type API

type API interface {
	Component
}

API is a component which offers an API for Cluster. This is a base component.

type CidArg

type CidArg struct {
	Cid string
}

CidArg is an argument that carries a Cid. It may carry more things in the future.

func NewCidArg

func NewCidArg(c *cid.Cid) *CidArg

NewCidArg returns a CidArg which carries the given Cid. It panics if the Cid is nil.

func (*CidArg) CID

func (arg *CidArg) CID() (*cid.Cid, error)

CID decodes and returns a Cid from a CidArg.

type Cluster

type Cluster struct {
	// contains filtered or unexported fields
}

Cluster is the main IPFS cluster component. It provides the go-API for it and orchestrates the components that make up the system.

func NewCluster

func NewCluster(cfg *Config, api API, ipfs IPFSConnector, state State, tracker PinTracker) (*Cluster, error)

NewCluster builds a new IPFS Cluster. It initializes a LibP2P host, creates an RPC Server and client, and sets up all components.

func (*Cluster) GlobalSync

func (c *Cluster) GlobalSync() ([]GlobalPinInfo, error)

GlobalSync triggers Sync() operations in all members of the Cluster.

func (*Cluster) GlobalSyncCid

func (c *Cluster) GlobalSyncCid(h *cid.Cid) (GlobalPinInfo, error)

GlobalSyncCid triggers a LocalSyncCid() operation for a given Cid in all members of the Cluster.

GlobalSyncCid will only return when all operations have either failed, succeeded or timed-out.

func (*Cluster) ID

func (c *Cluster) ID() ID

func (*Cluster) LocalSync

func (c *Cluster) LocalSync() ([]PinInfo, error)

LocalSync makes sure that the current state of the Tracker matches the IPFS daemon state by triggering a Tracker.Sync() and Recover() on all items that need it. Returns PinInfo for items changed on Sync().

LocalSync triggers recoveries asynchronously, and will not wait for them to fail or succeed before returning.

func (*Cluster) LocalSyncCid

func (c *Cluster) LocalSyncCid(h *cid.Cid) (PinInfo, error)

LocalSyncCid performs a Tracker.Sync() operation followed by a Recover() when needed. It returns the latest known PinInfo for the Cid.

LocalSyncCid will wait for the Recover operation to fail or succeed before returning.

func (*Cluster) Members

func (c *Cluster) Members() []peer.ID

Members returns the IDs of the members of this Cluster

func (*Cluster) Pin

func (c *Cluster) Pin(h *cid.Cid) error

Pin makes the cluster Pin a Cid. This implies adding the Cid to the IPFS Cluster peers shared-state. Depending on the cluster pinning strategy, the PinTracker may then request the IPFS daemon to pin the Cid.

Pin returns an error if the operation could not be persisted to the global state. Pin does not reflect the success or failure of underlying IPFS daemon pinning operations.

func (*Cluster) Pins

func (c *Cluster) Pins() []*cid.Cid

Pins returns the list of Cids managed by Cluster and which are part of the current global state. This is the source of truth as to which pins are managed, but does not indicate if the item is successfully pinned.

func (*Cluster) Shutdown

func (c *Cluster) Shutdown() error

Shutdown stops the IPFS cluster components

func (*Cluster) StateSync

func (c *Cluster) StateSync() ([]PinInfo, error)

StateSync syncs the consensus state to the Pin Tracker, ensuring that every Cid that should be tracked is tracked. It returns PinInfo for Cids which were added or deleted.

func (*Cluster) Status

func (c *Cluster) Status() ([]GlobalPinInfo, error)

Status returns the GlobalPinInfo for all tracked Cids. If an error happens, the slice will contain as much information as could be fetched.

func (*Cluster) StatusCid

func (c *Cluster) StatusCid(h *cid.Cid) (GlobalPinInfo, error)

StatusCid returns the GlobalPinInfo for a given Cid. If an error happens, the GlobalPinInfo should contain as much information as could be fetched.

func (*Cluster) Unpin

func (c *Cluster) Unpin(h *cid.Cid) error

Unpin makes the cluster Unpin a Cid. This implies adding the Cid to the IPFS Cluster peers shared-state.

Unpin returns an error if the operation could not be persisted to the global state. Unpin does not reflect the success or failure of underlying IPFS daemon unpinning operations.

func (*Cluster) Version

func (c *Cluster) Version() string

Version returns the current IPFS Cluster version

type Component

type Component interface {
	SetClient(*rpc.Client)
	Shutdown() error
}

Component represents a piece of ipfscluster. Cluster components usually run their own goroutines (a http server for example). They communicate with the main Cluster component and other components (both local and remote), using an instance of rpc.Client.

type Config

type Config struct {
	// Libp2p ID and private key for Cluster communication (including)
	// the Consensus component.
	ID         peer.ID
	PrivateKey crypto.PrivKey

	// List of multiaddresses of the peers of this cluster.
	ClusterPeers []ma.Multiaddr

	// Listen parameters for the Cluster libp2p Host. Used by
	// the RPC and Consensus components.
	ClusterAddr ma.Multiaddr

	// Listen parameters for the the Cluster HTTP API component.
	APIAddr ma.Multiaddr

	// Listen parameters for the IPFS Proxy. Used by the IPFS
	// connector component.
	IPFSProxyAddr ma.Multiaddr

	// Host/Port for the IPFS daemon.
	IPFSNodeAddr ma.Multiaddr

	// Storage folder for snapshots, log store etc. Used by
	// the Consensus component.
	ConsensusDataFolder string

	// Hashicorp's Raft configuration
	RaftConfig *hashiraft.Config
}

Config represents an ipfs-cluster configuration. It is used by Cluster components. An initialized version of it can be obtained with NewDefaultConfig().

func LoadConfig

func LoadConfig(path string) (*Config, error)

LoadConfig reads a JSON configuration file from the given path, parses it and returns a new Config object.

func NewDefaultConfig

func NewDefaultConfig() (*Config, error)

NewDefaultConfig returns a default configuration object with a randomly generated ID and private key.

func (*Config) Save

func (cfg *Config) Save(path string) error

Save stores a configuration as a JSON file in the given path.

func (*Config) ToJSONConfig

func (cfg *Config) ToJSONConfig() (j *JSONConfig, err error)

ToJSONConfig converts a Config object to its JSON representation which is focused on user presentation and easy understanding.

type Consensus

type Consensus struct {
	// contains filtered or unexported fields
}

Consensus handles the work of keeping a shared-state between the members of an IPFS Cluster, as well as modifying that state and applying any updates in a thread-safe manner.

func NewConsensus

func NewConsensus(cfg *Config, host host.Host, state State) (*Consensus, error)

NewConsensus builds a new ClusterConsensus component. The state is used to initialize the Consensus system, so any information in it is discarded.

func (*Consensus) Leader

func (cc *Consensus) Leader() (peer.ID, error)

Leader returns the peerID of the Leader of the cluster. It returns an error when there is no leader.

func (*Consensus) LogPin

func (cc *Consensus) LogPin(c *cid.Cid) error

LogPin submits a Cid to the shared state of the cluster. It will forward the operation to the leader if this is not it.

func (*Consensus) LogUnpin

func (cc *Consensus) LogUnpin(c *cid.Cid) error

LogUnpin removes a Cid from the shared state of the cluster.

func (*Consensus) Rollback

func (cc *Consensus) Rollback(state State) error

Rollback replaces the current agreed-upon state with the state provided. Only the consensus leader can perform this operation.

func (*Consensus) SetClient

func (cc *Consensus) SetClient(c *rpc.Client)

SetClient makes the component ready to perform RPC requests.

func (*Consensus) Shutdown

func (cc *Consensus) Shutdown() error

Shutdown stops the component so it will not process any more updates. The underlying consensus is permanently shutdown, along with the libp2p transport.

func (*Consensus) State

func (cc *Consensus) State() (State, error)

State retrieves the current consensus State. It may error if no State has been agreed upon or the state is not consistent. The returned State is the last agreed-upon State known by this node.

type GlobalPinInfo

type GlobalPinInfo struct {
	Cid    *cid.Cid
	Status map[peer.ID]PinInfo
}

GlobalPinInfo contains cluster-wide status information about a tracked Cid, indexed by cluster member.
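A sketch of the aggregation this type represents, with plain strings standing in for peer.ID and *cid.Cid:

```go
package main

import "fmt"

// pinInfo is a trimmed stand-in for PinInfo.
type pinInfo struct {
	Peer   string
	Status string
}

// globalPinInfo groups per-peer reports for one Cid, indexed by cluster
// member, mirroring the GlobalPinInfo layout above.
type globalPinInfo struct {
	Cid    string
	Status map[string]pinInfo
}

// aggregate builds the cluster-wide view from individual peer reports.
func aggregate(cid string, reports []pinInfo) globalPinInfo {
	gpi := globalPinInfo{Cid: cid, Status: make(map[string]pinInfo, len(reports))}
	for _, r := range reports {
		gpi.Status[r.Peer] = r
	}
	return gpi
}

func main() {
	gpi := aggregate("Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58", []pinInfo{
		{Peer: "peerA", Status: "pinned"},
		{Peer: "peerB", Status: "pinning"},
	})
	fmt.Println(len(gpi.Status))
}
```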

type ID

type ID struct {
	ID                 peer.ID
	PublicKey          crypto.PubKey
	Addresses          []ma.Multiaddr
	Version            string
	Commit             string
	RPCProtocolVersion protocol.ID
}

type IPFSConnector

type IPFSConnector interface {
	Component
	Pin(*cid.Cid) error
	Unpin(*cid.Cid) error
	IsPinned(*cid.Cid) (bool, error)
}

IPFSConnector is a component which allows cluster to interact with an IPFS daemon. This is a base component.

type IPFSHTTPConnector

type IPFSHTTPConnector struct {
	// contains filtered or unexported fields
}

IPFSHTTPConnector implements the IPFSConnector interface and provides a component which does two tasks:

On one side, it proxies HTTP requests to the configured IPFS daemon. It can, however, intercept these requests and perform extra operations on them.

On the other side, it is used to perform on-demand requests against the configured IPFS daemon (such as a pin request).

func NewIPFSHTTPConnector

func NewIPFSHTTPConnector(cfg *Config) (*IPFSHTTPConnector, error)

NewIPFSHTTPConnector creates the component and leaves it ready to be started

func (*IPFSHTTPConnector) IsPinned

func (ipfs *IPFSHTTPConnector) IsPinned(hash *cid.Cid) (bool, error)

IsPinned performs a "pin ls" request against the configured IPFS daemon. It returns true when the given Cid is pinned, excluding indirect pins.

func (*IPFSHTTPConnector) Pin

func (ipfs *IPFSHTTPConnector) Pin(hash *cid.Cid) error

Pin performs a pin request against the configured IPFS daemon.

func (*IPFSHTTPConnector) SetClient

func (ipfs *IPFSHTTPConnector) SetClient(c *rpc.Client)

SetClient makes the component ready to perform RPC requests.

func (*IPFSHTTPConnector) Shutdown

func (ipfs *IPFSHTTPConnector) Shutdown() error

Shutdown stops any listeners and stops the component from taking any requests.

func (*IPFSHTTPConnector) Unpin

func (ipfs *IPFSHTTPConnector) Unpin(hash *cid.Cid) error

Unpin performs an unpin request against the configured IPFS daemon.

type IPFSStatus

type IPFSStatus int

IPFSStatus represents the status of a tracked Cid in the IPFS daemon

func (IPFSStatus) String

func (st IPFSStatus) String() string

String converts an IPFSStatus into a readable string.

type JSONConfig

type JSONConfig struct {
	// Libp2p ID and private key for Cluster communication (including)
	// the Consensus component.
	ID         string `json:"id"`
	PrivateKey string `json:"private_key"`

	// List of multiaddresses of the peers of this cluster. This list may
	// include the multiaddress of this node.
	ClusterPeers []string `json:"cluster_peers"`

	// Listen address for the Cluster libp2p host. This is used for
	// internal RPC and Consensus communications between cluster members.
	ClusterListenMultiaddress string `json:"cluster_multiaddress"`

	// Listen address for the Cluster HTTP API component.
	// Tools like ipfs-cluster-ctl will connect to this endpoint to
	// manage the cluster.
	APIListenMultiaddress string `json:"api_listen_multiaddress"`

	// Listen address for the IPFS Proxy, which forwards requests to
	// an IPFS daemon.
	IPFSProxyListenMultiaddress string `json:"ipfs_proxy_listen_multiaddress"`

	// API address for the IPFS daemon.
	IPFSNodeMultiaddress string `json:"ipfs_node_multiaddress"`

	// Storage folder for snapshots, log store etc. Used by
	// the Consensus component.
	ConsensusDataFolder string `json:"consensus_data_folder"`

	// Raft configuration
	RaftConfig *RaftConfig `json:"raft_config"`
}

JSONConfig represents a Cluster configuration as it will look when it is saved using JSON. Most configuration keys are converted into simple types like strings, and key names aim to be self-explanatory for the user.

func (*JSONConfig) ToConfig

func (jcfg *JSONConfig) ToConfig() (c *Config, err error)

ToConfig converts a JSONConfig to its internal Config representation, where options are parsed into their native types.

type MapPinTracker

type MapPinTracker struct {
	// contains filtered or unexported fields
}

MapPinTracker is a PinTracker implementation which uses a Go map to store the status of the tracked Cids. This component is thread-safe.

func NewMapPinTracker

func NewMapPinTracker(cfg *Config) *MapPinTracker

NewMapPinTracker returns a new object which has been correctly initialized with the given configuration.

func (*MapPinTracker) Recover

func (mpt *MapPinTracker) Recover(c *cid.Cid) error

Recover will re-track or re-untrack a Cid in error state, possibly retriggering an IPFS pinning operation and returning only when it is done.

func (*MapPinTracker) SetClient

func (mpt *MapPinTracker) SetClient(c *rpc.Client)

SetClient makes the MapPinTracker ready to perform RPC requests to other components.

func (*MapPinTracker) Shutdown

func (mpt *MapPinTracker) Shutdown() error

Shutdown finishes the services provided by the MapPinTracker and cancels any active context.

func (*MapPinTracker) Status

func (mpt *MapPinTracker) Status() []PinInfo

Status returns information for all Cids tracked by this MapPinTracker.

func (*MapPinTracker) StatusCid

func (mpt *MapPinTracker) StatusCid(c *cid.Cid) PinInfo

StatusCid returns information for a Cid tracked by this MapPinTracker.

func (*MapPinTracker) Sync

func (mpt *MapPinTracker) Sync(c *cid.Cid) bool

Sync verifies that the status of a Cid matches the status of it in the IPFS daemon. If not, it will be transitioned to Pin or Unpin error. Sync returns true if the status was modified or the status is error. Pins in error states can be recovered with Recover().
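The decision Sync makes can be sketched as a pure transition function. String statuses stand in for IPFSStatus values, and the assumed inputs are the tracked status plus whether the daemon reports the Cid as pinned:

```go
package main

import "fmt"

// syncStatus compares a tracked status with what the IPFS daemon reports
// and returns the corrected status plus whether the caller should pay
// attention (the status changed or is an error), mirroring Sync's contract.
func syncStatus(tracked string, pinnedOnIPFS bool) (string, bool) {
	switch tracked {
	case "pinned":
		if !pinnedOnIPFS {
			return "pin_error", true // expected pinned, daemon disagrees
		}
	case "unpinned":
		if pinnedOnIPFS {
			return "unpin_error", true // expected unpinned, daemon disagrees
		}
	case "pin_error", "unpin_error":
		return tracked, true // already in error: deserves attention
	}
	return tracked, false
}

func main() {
	st, attention := syncStatus("pinned", false)
	fmt.Println(st, attention)
}
```

Pins that end up in an error state here would then be candidates for Recover().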

func (*MapPinTracker) Track

func (mpt *MapPinTracker) Track(c *cid.Cid) error

Track tells the MapPinTracker to start managing a Cid, possibly triggering Pin operations on the IPFS daemon.

func (*MapPinTracker) Untrack

func (mpt *MapPinTracker) Untrack(c *cid.Cid) error

Untrack tells the MapPinTracker to stop managing a Cid. If the Cid is pinned locally, it will be unpinned.

type MapState

type MapState struct {
	PinMap map[string]struct{}
	// contains filtered or unexported fields
}

MapState is a very simple database to store the state of the system using a Go map. It is thread safe. It implements the State interface.
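A minimal stand-in for this pattern, using a mutex-guarded set of Cid strings (the real MapState stores *cid.Cid keys as strings too; names here are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// mapState is a minimal, thread-safe set of pinned Cid strings.
type mapState struct {
	mu     sync.Mutex
	pinMap map[string]struct{}
}

func newMapState() *mapState {
	return &mapState{pinMap: make(map[string]struct{})}
}

// AddPin adds a Cid to the internal map.
func (st *mapState) AddPin(c string) error {
	st.mu.Lock()
	defer st.mu.Unlock()
	st.pinMap[c] = struct{}{}
	return nil
}

// RmPin removes a Cid from the internal map.
func (st *mapState) RmPin(c string) error {
	st.mu.Lock()
	defer st.mu.Unlock()
	delete(st.pinMap, c)
	return nil
}

// HasPin returns true if the Cid belongs to the state.
func (st *mapState) HasPin(c string) bool {
	st.mu.Lock()
	defer st.mu.Unlock()
	_, ok := st.pinMap[c]
	return ok
}

func main() {
	st := newMapState()
	st.AddPin("Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58")
	fmt.Println(st.HasPin("Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58"))
}
```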

func NewMapState

func NewMapState() *MapState

NewMapState initializes the internal map and returns a new MapState object.

func (*MapState) AddPin

func (st *MapState) AddPin(c *cid.Cid) error

AddPin adds a Cid to the internal map.

func (*MapState) HasPin

func (st *MapState) HasPin(c *cid.Cid) bool

HasPin returns true if the Cid belongs to the State.

func (*MapState) ListPins

func (st *MapState) ListPins() []*cid.Cid

ListPins provides a list of Cids in the State.

func (*MapState) RmPin

func (st *MapState) RmPin(c *cid.Cid) error

RmPin removes a Cid from the internal map.

type Peered

type Peered interface {
	AddPeer(p peer.ID)
	RmPeer(p peer.ID)
	SetPeers(peers []peer.ID)
}

Peered represents a component which needs to be aware of the peers in the Cluster and of any changes to the peer set.

type PinInfo

type PinInfo struct {
	CidStr string
	Peer   peer.ID
	IPFS   IPFSStatus
	TS     time.Time
	Error  string
}

PinInfo holds information about local pins. PinInfo is serialized when requesting the Global status, therefore we cannot use *cid.Cid.

type PinTracker

type PinTracker interface {
	Component
	// Track tells the tracker that a Cid is now under its supervision
	// The tracker may decide to perform an IPFS pin.
	Track(*cid.Cid) error
	// Untrack tells the tracker that a Cid is to be forgotten. The tracker
	// may perform an IPFS unpin operation.
	Untrack(*cid.Cid) error
	// Status returns the list of pins with their local status.
	Status() []PinInfo
	// StatusCid returns the local status of a given Cid.
	StatusCid(*cid.Cid) PinInfo
	// Sync makes sure that the Cid status reflect the real IPFS status.
	// The return value indicates if the Cid status deserved attention,
	// either because its state was updated or because it is in error state.
	Sync(*cid.Cid) bool
	// Recover retriggers a Pin/Unpin operation in Cids with error status.
	Recover(*cid.Cid) error
}

PinTracker represents a component which tracks the status of the pins in this cluster and ensures they are in sync with the IPFS daemon. This component should be thread safe.

type RESTAPI

type RESTAPI struct {
	// contains filtered or unexported fields
}

RESTAPI implements an API and aims to provide a RESTful HTTP API for Cluster.

func NewRESTAPI

func NewRESTAPI(cfg *Config) (*RESTAPI, error)

NewRESTAPI creates a new object which is ready to be started.

func (*RESTAPI) SetClient

func (api *RESTAPI) SetClient(c *rpc.Client)

SetClient makes the component ready to perform RPC requests.

func (*RESTAPI) Shutdown

func (api *RESTAPI) Shutdown() error

Shutdown stops any API listeners.

type RPCAPI

type RPCAPI struct {
	// contains filtered or unexported fields
}

RPCAPI is a go-libp2p-gorpc service which provides the internal ipfs-cluster API, which enables components and members of the cluster to communicate and request actions from each other.

The RPC API methods are usually redirects to the actual methods in the different components of ipfs-cluster, with very little added logic. Refer to documentation on those methods for details on their behaviour.
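This redirect pattern resembles the method shape of Go's net/rpc: each exported method takes an input argument and an output pointer and simply delegates. A stdlib-only sketch, where the component and method names are stand-ins:

```go
package main

import "fmt"

// pinner is a stand-in for a cluster component that does the real work.
type pinner struct{ pins map[string]bool }

func (p *pinner) Pin(cid string) error {
	p.pins[cid] = true
	return nil
}

// rpcAPI exposes the component over an RPC-style boundary: methods with
// (in, *out) signatures that contain almost no logic of their own.
type rpcAPI struct{ cluster *pinner }

// Pin runs pinner.Pin(), mirroring how RPCAPI.Pin runs Cluster.Pin().
func (api *rpcAPI) Pin(in string, out *struct{}) error {
	return api.cluster.Pin(in)
}

func main() {
	p := &pinner{pins: make(map[string]bool)}
	api := &rpcAPI{cluster: p}
	var out struct{}
	if err := api.Pin("Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58", &out); err != nil {
		panic(err)
	}
	fmt.Println(p.pins["Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58"])
}
```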

func (*RPCAPI) ConsensusLogPin

func (api *RPCAPI) ConsensusLogPin(in *CidArg, out *struct{}) error

ConsensusLogPin runs Consensus.LogPin().

func (*RPCAPI) ConsensusLogUnpin

func (api *RPCAPI) ConsensusLogUnpin(in *CidArg, out *struct{}) error

ConsensusLogUnpin runs Consensus.LogUnpin().

func (*RPCAPI) GlobalSync

func (api *RPCAPI) GlobalSync(in struct{}, out *[]GlobalPinInfo) error

GlobalSync runs Cluster.GlobalSync().

func (*RPCAPI) GlobalSyncCid

func (api *RPCAPI) GlobalSyncCid(in *CidArg, out *GlobalPinInfo) error

GlobalSyncCid runs Cluster.GlobalSyncCid().

func (*RPCAPI) ID

func (api *RPCAPI) ID(in struct{}, out *ID) error

func (*RPCAPI) IPFSIsPinned

func (api *RPCAPI) IPFSIsPinned(in *CidArg, out *bool) error

IPFSIsPinned runs IPFSConnector.IsPinned().

func (*RPCAPI) IPFSPin

func (api *RPCAPI) IPFSPin(in *CidArg, out *struct{}) error

IPFSPin runs IPFSConnector.Pin().

func (*RPCAPI) IPFSUnpin

func (api *RPCAPI) IPFSUnpin(in *CidArg, out *struct{}) error

IPFSUnpin runs IPFSConnector.Unpin().

func (*RPCAPI) LocalSync

func (api *RPCAPI) LocalSync(in struct{}, out *[]PinInfo) error

LocalSync runs Cluster.LocalSync().

func (*RPCAPI) LocalSyncCid

func (api *RPCAPI) LocalSyncCid(in *CidArg, out *PinInfo) error

LocalSyncCid runs Cluster.LocalSyncCid().

func (*RPCAPI) MemberList

func (api *RPCAPI) MemberList(in struct{}, out *[]peer.ID) error

MemberList runs Cluster.Members().

func (*RPCAPI) Pin

func (api *RPCAPI) Pin(in *CidArg, out *struct{}) error

Pin runs Cluster.Pin().

func (*RPCAPI) PinList

func (api *RPCAPI) PinList(in struct{}, out *[]string) error

PinList runs Cluster.Pins().

func (*RPCAPI) StateSync

func (api *RPCAPI) StateSync(in struct{}, out *[]PinInfo) error

StateSync runs Cluster.StateSync().

func (*RPCAPI) Status

func (api *RPCAPI) Status(in struct{}, out *[]GlobalPinInfo) error

Status runs Cluster.Status().

func (*RPCAPI) StatusCid

func (api *RPCAPI) StatusCid(in *CidArg, out *GlobalPinInfo) error

StatusCid runs Cluster.StatusCid().

func (*RPCAPI) Track

func (api *RPCAPI) Track(in *CidArg, out *struct{}) error

Track runs PinTracker.Track().

func (*RPCAPI) TrackerStatus

func (api *RPCAPI) TrackerStatus(in struct{}, out *[]PinInfo) error

TrackerStatus runs PinTracker.Status().

func (*RPCAPI) TrackerStatusCid

func (api *RPCAPI) TrackerStatusCid(in *CidArg, out *PinInfo) error

TrackerStatusCid runs PinTracker.StatusCid().

func (*RPCAPI) Unpin

func (api *RPCAPI) Unpin(in *CidArg, out *struct{}) error

Unpin runs Cluster.Unpin().

func (*RPCAPI) Untrack

func (api *RPCAPI) Untrack(in *CidArg, out *struct{}) error

Untrack runs PinTracker.Untrack().

func (*RPCAPI) Version

func (api *RPCAPI) Version(in struct{}, out *string) error

Version runs Cluster.Version().

type RaftConfig

type RaftConfig struct {
	SnapshotIntervalSeconds int  `json:"snapshot_interval_seconds"`
	EnableSingleNode        bool `json:"enable_single_node"`
}

RaftConfig is a configuration section which affects the behaviour of the Raft component. See https://godoc.org/github.com/hashicorp/raft#Config for more information. Only the options below are customizable, the rest will take the default values from raft.DefaultConfig().

type State

type State interface {
	// AddPin adds a pin to the State
	AddPin(*cid.Cid) error
	// RmPin removes a pin from the State
	RmPin(*cid.Cid) error
	// ListPins lists all the pins in the state
	ListPins() []*cid.Cid
	// HasPin returns true if the state is holding a Cid
	HasPin(*cid.Cid) bool
}

State represents the shared state of the cluster and is used by the Consensus component to keep track of which objects are pinned. This component should be thread safe.

Notes

Bugs

  • See go-libp2p-raft#16:

        err = cc.p2pRaft.transport.Close()
        if err != nil {
            errMsgs += "could not close libp2p transport: " + err.Error() + ".\n"
        }
