network

package
v1.8.22
Published: Jan 31, 2019 License: GPL-3.0 Imports: 21 Imported by: 0

README

Streaming

Streaming is a new protocol of the swarm bzz bundle of protocols. This protocol provides the basic logic for chunk-based data flow. It implements simple retrieve requests and delivery using a priority queue. A data exchange stream is a directional flow of chunks between peers. The source of the data chunks is the upstream peer, the receiver is called the downstream peer. Each streaming protocol defines an outgoing streamer and an incoming streamer, the former installed on the upstream peer, the latter on the downstream peer.

Subscribe on StreamerPeer launches an incoming streamer that sends a subscribe msg upstream. The streamer on the upstream peer handles the subscribe msg by installing the relevant outgoing streamer. The modules then engage in a process of the upstream peer sending a sequence of chunk hashes downstream (OfferedHashesMsg). The downstream peer evaluates which hashes are needed and gets them delivered by sending back a msg (WantedHashesMsg).
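
To make the exchange concrete, here is a minimal sketch of the two message types in play. Only the message names (OfferedHashesMsg, WantedHashesMsg) come from the protocol; the field layout and the helper below are illustrative assumptions, not the actual wire format.

package main

import "fmt"

// offeredHashes is an illustrative stand-in for OfferedHashesMsg: the upstream
// peer offers a batch of chunk hashes for a given stream and index range.
type offeredHashes struct {
	Stream string
	From   uint64
	To     uint64
	Hashes [][]byte
}

// wantedHashes is an illustrative stand-in for WantedHashesMsg: the downstream
// peer answers with one flag per offered hash, marking the ones it needs.
type wantedHashes struct {
	Stream string
	From   uint64
	To     uint64
	Want   []bool
}

// selectWanted marks every offered hash the downstream peer does not already
// have in its local store.
func selectWanted(offer offeredHashes, have func([]byte) bool) wantedHashes {
	want := make([]bool, len(offer.Hashes))
	for i, h := range offer.Hashes {
		want[i] = !have(h)
	}
	return wantedHashes{Stream: offer.Stream, From: offer.From, To: offer.To, Want: want}
}

func main() {
	offer := offeredHashes{Stream: "SYNC|6", From: 0, To: 2, Hashes: [][]byte{{1}, {2}, {3}}}
	have := func(h []byte) bool { return h[0] == 2 } // pretend chunk 2 is already stored
	fmt.Println(selectWanted(offer, have).Want)      // [true false true]
}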

Historical syncing is supported (currently not the right abstraction): state is kept across sessions by saving a series of intervals after their last batch has actually arrived.

Live streaming is also supported, by starting the session from the first item after the subscription.

Provable data exchange: in case a stream represents a swarm document's data layer or higher-level chunks, streaming up to a certain index is always provable. It saves on sending intermediate chunks.

Using the streamer logic, various stream types are easy to implement:

  • light node requests:
    • url lookup with offset
    • document download
    • document upload
  • syncing
    • live session syncing
    • historical syncing
  • simple retrieve requests and deliveries
  • swarm feeds streams
  • receipting for finger pointing

Syncing

Syncing is the process that makes sure storer nodes end up storing all and only the chunks that are requested from them.

Requirements

  • eventual consistency: every chunk, including historical ones, should be syncable
  • since the same chunk can and will arrive from many peers, network traffic should be optimised: only one transfer of data per chunk
  • explicit request deliveries should be prioritised higher than recent chunks received during the ongoing session, which in turn should be prioritised higher than historical chunks
  • insured chunks should be receipted for finger pointing litigation; receipt storage should be organised efficiently, and the upstream peer should be able to find the receipt for a deleted chunk easily in order to refute a challenge
  • syncing should be resilient to cut connections; metadata that keeps track of syncing state across sessions should be persisted, and historical syncing state should survive a restart
  • extra data structures to support syncing should be kept to a minimum
  • syncing is not organised separately for chunk types (Swarm feed updates vs. regular content chunks)
  • various types of streams should have common logic abstracted

Syncing is now entirely mediated by the localstore, i.e., there are no processes or memory leaks due to network contention. When a new chunk is stored, its chunk hash is indexed by proximity bin.

Peers synchronise by getting the chunks that are closer to the downstream peer than to the upstream one. Consequently, peers just sync all stored items for the kademlia bin the receiving peer falls into. The special case of nearest-neighbour sets is handled by the downstream peer indicating that they want to sync all kademlia bins with proximity equal to or higher than their depth.
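
A rough sketch of just this bin-selection rule, assuming the downstream peer sits at proximity order peerPO relative to the upstream peer and has neighbourhood depth depth; the helper and its signature are made up for illustration:

package main

import "fmt"

// binsToSync returns the proximity-order bins a downstream peer would ask an
// upstream peer to sync, following the rule described above: normally only the
// bin the downstream peer falls into relative to the upstream peer; for peers
// within the nearest-neighbour set, every bin from the neighbourhood depth up.
func binsToSync(peerPO, depth, maxPO int) []int {
	if peerPO < depth {
		return []int{peerPO} // regular case: just the peer's own bin
	}
	bins := make([]int, 0, maxPO-depth+1)
	for po := depth; po <= maxPO; po++ {
		bins = append(bins, po) // nearest neighbour: all bins >= depth
	}
	return bins
}

func main() {
	fmt.Println(binsToSync(3, 5, 16)) // [3]
	fmt.Println(binsToSync(7, 5, 16)) // all bins from depth 5 up to 16
}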

This sync state represents the initial state of a sync connection session. Retrieval is dictated by downstream peers simply using a special streamer protocol.

Syncing chunks created during the session by the upstream peer is called live session syncing, while syncing of earlier chunks is historical syncing.

Once the relevant batch of hashes is retrieved, the downstream peer looks up each hash segment in its localstore and sends the upstream peer a message with a bitvector indicating which of the offered chunks are missing, i.e., new items. In turn, the upstream peer sends the relevant chunk data alongside their indexes.
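
A minimal sketch of how such a bitvector could be packed, one bit per offered hash; the packing helper is an assumption, the real code uses the package's own bitvector type:

package main

import "fmt"

// packWanted packs per-hash "needed" flags into a bitvector, one bit per
// offered hash, which lets the downstream peer compactly tell the upstream
// peer which chunks to deliver.
func packWanted(missing []bool) []byte {
	bv := make([]byte, (len(missing)+7)/8)
	for i, m := range missing {
		if m {
			bv[i/8] |= 1 << uint(i%8)
		}
	}
	return bv
}

func main() {
	missing := []bool{true, false, true, true, false, false, false, true}
	fmt.Printf("%08b\n", packWanted(missing)[0]) // 10001101
}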

On sending chunks there is a priority queue system. If, while looking up hashes in its localstore, the downstream peer hits an open request, then a retrieve request is sent immediately to the upstream peer, indicating that no extra round of checks is needed. If another peer's syncer hits the same open request, it is slightly unsafe not to ask that peer too: if the first one disconnects before delivering, or fails to deliver and therefore gets disconnected, we should still be able to continue with the other. The minimal redundant traffic caused by such simultaneous eventualities should be sufficiently rare not to warrant more complex treatment.

Session syncing involves the downstream peer requesting a new state on a bin from the upstream peer. Using the new state, the range (of chunks) between the previous state and the new one is retrieved and chunks are requested just as in the historical case. After receiving all the missing chunks from the new hashes, the downstream peer will request a new range. If this happens before the upstream peer records a new state, we say that session syncing is live, or that the two peers are in sync. In general, the time interval from the downstream peer's request up to the current session cursor is a good indication of a permanent (probably increasing) lag.

If there is no historical backlog and the downstream peer has an acceptable 'last synced' tag, then it is said to be fully synced with the upstream peer. If a peer is fully synced with all its storer peers, it can advertise itself as globally fully synced.

The downstream peer persists the record of the last synced offset. When the two peers disconnect and reconnect, syncing can start from there. This situation, however, can also arise while historical syncing is not yet complete. Effectively this means that the peer needs to persist a record of an arbitrary array of offset ranges covered.
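
A toy version of such a persisted record of covered offset ranges, assuming simple [from, to] pairs that get merged as batches complete; the type and merge logic are illustrative, not the package's actual intervals store:

package main

import "fmt"

// intervals is a minimal sketch of the per-stream sync state the downstream
// peer persists: a list of [from, to] ranges already covered, so that syncing
// can resume after a disconnect.
type intervals [][2]uint64

// add records a newly covered range and merges it with adjacent or
// overlapping ranges already on record.
func (iv intervals) add(from, to uint64) intervals {
	out := intervals{}
	for _, r := range iv {
		if r[1]+1 < from || to+1 < r[0] { // disjoint: keep as is
			out = append(out, r)
			continue
		}
		if r[0] < from { // overlapping or adjacent: widen the new range
			from = r[0]
		}
		if r[1] > to {
			to = r[1]
		}
	}
	return append(out, [2]uint64{from, to})
}

func main() {
	iv := intervals{}.add(1, 10).add(20, 30).add(11, 19)
	fmt.Println(iv) // [[1 30]] — the three batches have merged into one covered range
}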

Delivery requests

Once the appropriate ranges of the hash stream are retrieved and buffered, the downstream peer just scans the hashes and looks them up in the localstore; if a hash is not found, it creates a request entry. The range is referenced by the chunk index. Alongside the name (indicating the stream, e.g., content chunks for bin 6) and the range, the downstream peer sends a 128-bit-long bitvector indicating which chunks are needed. Newly created requests are bound together in a waitgroup which, when done, prompts sending the next one. To be able to do checking and storage concurrently, we keep a buffer of one, so we start with two batches of hashes. If there is nothing to give, the upstream peer's SetNextBatch is blocking. Subscription ends with an unsubscribe, which removes the syncer from the map.

Cancelling requests (for instance, the late chunks of an erasure batch) should be done via a channel closed on the request.
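
For illustration, the usual Go pattern for this kind of cancellation is a quit channel carried on the request and closed to cancel it; the request type and serve function below are made-up stand-ins:

package main

import (
	"fmt"
	"time"
)

// request carries a quit channel; closing it cancels the request, e.g., for
// the late chunks of an erasure batch mentioned above.
type request struct {
	addr []byte
	quit chan struct{}
}

func serve(r *request) {
	select {
	case <-time.After(100 * time.Millisecond): // pretend delivery takes this long
		fmt.Println("delivered")
	case <-r.quit:
		fmt.Println("request cancelled")
	}
}

func main() {
	r := &request{addr: []byte{0x01}, quit: make(chan struct{})}
	go serve(r)
	close(r.quit) // cancel immediately
	time.Sleep(10 * time.Millisecond)
}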

A simple request is also a subscribe. Different streaming protocols are different p2p protocols with the same message types; the constructor is the Run function itself, which takes a streamer peer as its argument.

Provable streams

The swarm hash over the hash stream has many advantages. It implements a provable data transfer and provides efficient storage for receipts in the form of inclusion proofs usable for finger pointing litigation. When challenged on a missing chunk, the upstream peer will provide an inclusion proof of a chunk hash against the state of the sync stream. In order to be able to generate such an inclusion proof, the upstream peer needs to store the hash index (counting consecutive hash-size segments) alongside the chunk data and preserve it even when the chunk data is deleted, until the chunk is no longer insured. If there is no valid insurance on the file, the entry may be deleted. As long as the chunk is preserved, no takeover proof will be needed, since the node can respond to any challenge. However, once the node needs to delete an insured chunk for capacity reasons, a receipt should be available to refute the challenge by finger pointing to a downstream peer. As part of the deletion protocol, then, hashes of insured chunks to be removed are pushed to an infinite stream for every bin.
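
To illustrate the idea of an inclusion proof against the state of the stream, here is a self-contained sketch using a plain binary Merkle tree over a power-of-two number of chunk hashes. This is purely an assumption for illustration: the real receipts are proofs against the swarm hash of the hash stream, not this flat sha256 tree.

package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

func hashPair(a, b []byte) []byte {
	h := sha256.Sum256(append(append([]byte{}, a...), b...))
	return h[:]
}

// root computes a binary Merkle root over a power-of-two number of leaves.
func root(leaves [][]byte) []byte {
	level := leaves
	for len(level) > 1 {
		next := make([][]byte, 0, len(level)/2)
		for i := 0; i < len(level); i += 2 {
			next = append(next, hashPair(level[i], level[i+1]))
		}
		level = next
	}
	return level[0]
}

// inclusionProof returns the sibling hashes needed to recompute the root from
// the leaf at index i.
func inclusionProof(leaves [][]byte, i int) [][]byte {
	var siblings [][]byte
	level := leaves
	for len(level) > 1 {
		siblings = append(siblings, level[i^1]) // sibling at this level
		next := make([][]byte, 0, len(level)/2)
		for j := 0; j < len(level); j += 2 {
			next = append(next, hashPair(level[j], level[j+1]))
		}
		level = next
		i /= 2
	}
	return siblings
}

// verify recomputes the root from the leaf, its index and the sibling path.
func verify(leaf []byte, i int, siblings [][]byte, want []byte) bool {
	acc := leaf
	for _, s := range siblings {
		if i%2 == 0 {
			acc = hashPair(acc, s)
		} else {
			acc = hashPair(s, acc)
		}
		i /= 2
	}
	return bytes.Equal(acc, want)
}

func main() {
	var leaves [][]byte
	for i := 0; i < 4; i++ { // pretend these are four chunk hashes in the stream
		h := sha256.Sum256([]byte{byte(i)})
		leaves = append(leaves, h[:])
	}
	r := root(leaves)
	p := inclusionProof(leaves, 2)
	fmt.Println(verify(leaves[2], 2, p, r)) // true: chunk 2 is provably in the stream
}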

The downstream peer, on the other hand, needs to make sure that it can only be finger pointed about a chunk it did receive and store. For this, the check of a state should be exhaustive. If historical syncing finishes on one state, all hashes before it are covered, so there are no surprises; in other words, historical syncing is self-verifying. With session syncing, however, it is not enough to check going back over the range from the old offset to the new one. Continuity (i.e., that the new state is an extension of the old) needs to be verified: after the downstream peer reads the range into a buffer, it appends to the buffer the last known state at the last known offset and verifies that the resulting hash matches the latest state. Past intervals of historical syncing are checked via the session root. The upstream peer signs the states, which downstream peers can use as handover proofs. Downstream peers sign off on a state together with an initial offset.
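
The continuity check can be sketched as follows; a flat sha256 is used purely for illustration, whereas the real stream state is a swarm hash:

package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// verifyContinuity is a toy version of the check described above: the new
// state must equal the hash of the previously known state concatenated with
// the newly read range of hashes.
func verifyContinuity(prevState, newRange, newState []byte) bool {
	h := sha256.Sum256(append(append([]byte{}, prevState...), newRange...))
	return bytes.Equal(h[:], newState)
}

func main() {
	prev := sha256.Sum256([]byte("state at offset 100"))
	rng := []byte("hashes 100..228")
	next := sha256.Sum256(append(append([]byte{}, prev[:]...), rng...))
	fmt.Println(verifyContinuity(prev[:], rng, next[:])) // true: the new state extends the old
}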

Once historical syncing is complete and the session does not lag, the downstream peer only preserves the latest upstream state and stores the signed version.

The upstream peer needs to keep the latest takeover states: each deleted chunk's hash should be covered by a takeover proof of at least one peer. If historical syncing is complete, the upstream peer will typically store only the latest takeover proof from the downstream peer. Crucially, the structure is totally independent of the number of peers in the bin, so it scales extremely well.

Implementation

The simplest protocol just involves the upstream peer prefixing the key with the kademlia proximity order (say 0-15 or 0-31) and simply iterating on the index per bin when syncing with a peer.
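
A sketch of that key layout, assuming a one-byte bin prefix followed by a big-endian per-bin index; the exact encoding is an assumption:

package main

import (
	"encoding/binary"
	"fmt"
)

// binKey sketches the "prefix the key with the proximity order" idea: the
// store key is the bin number followed by a per-bin counter, so syncing a bin
// is just an iteration over a contiguous key range.
func binKey(po uint8, index uint64) []byte {
	key := make([]byte, 9)
	key[0] = po
	binary.BigEndian.PutUint64(key[1:], index)
	return key
}

func main() {
	for i := uint64(0); i < 3; i++ {
		fmt.Printf("%x\n", binKey(6, i)) // keys for bin 6 sort consecutively
	}
}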

Priority queues are used for sending chunks so that user-triggered requests are responded to first, session syncing second, and historical syncing with lower priority. The request for a chunk remains implemented as a dataless entry in the memory store. The lifecycle of this object should be more carefully thought through, i.e., when retrieval fails the entry should be removed.
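
A minimal sketch of such a three-level priority scheme; the constants and the slice-of-queues structure are assumptions for illustration, not the package's actual priority queue:

package main

import "fmt"

// Priority levels for outgoing chunk deliveries as described above.
const (
	Low  = iota // historical syncing
	Mid         // live session syncing
	High        // user-triggered retrieve requests
	priorities
)

type delivery struct{ chunk string }

type priorityQueues [priorities][]delivery

func (pq *priorityQueues) push(p int, d delivery) { pq[p] = append(pq[p], d) }

// pop returns the next delivery, always draining higher priorities first.
func (pq *priorityQueues) pop() (delivery, bool) {
	for p := priorities - 1; p >= 0; p-- {
		if len(pq[p]) > 0 {
			d := pq[p][0]
			pq[p] = pq[p][1:]
			return d, true
		}
	}
	return delivery{}, false
}

func main() {
	var pq priorityQueues
	pq.push(Low, delivery{"historical"})
	pq.push(High, delivery{"retrieve request"})
	pq.push(Mid, delivery{"session"})
	for d, ok := pq.pop(); ok; d, ok = pq.pop() {
		fmt.Println(d.chunk) // retrieve request, session, historical
	}
}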

Documentation

Index

Constants

const (
	DefaultNetworkID = 3
)

Variables

var BzzSpec = &protocols.Spec{
	Name:       "bzz",
	Version:    8,
	MaxMsgSize: 10 * 1024 * 1024,
	Messages: []interface{}{
		HandshakeMsg{},
	},
}

BzzSpec is the spec of the generic swarm handshake

var DiscoverySpec = &protocols.Spec{
	Name:       "hive",
	Version:    8,
	MaxMsgSize: 10 * 1024 * 1024,
	Messages: []interface{}{
		peersMsg{},
		subPeersMsg{},
	},
}

DiscoverySpec is the spec for the bzz discovery subprotocols

var Pof = pot.DefaultPof(256)

var RequestTimeout = 10 * time.Second

Time to consider peer to be skipped. Also used in stream delivery.

Functions

func Label added in v1.8.12

func Label(e *entry) string

Label is a short tag for the entry for debug

func LogAddrs added in v1.8.12

func LogAddrs(nns [][]byte) string

func NewPeerPotMap added in v1.8.12

func NewPeerPotMap(neighbourhoodSize int, addrs [][]byte) map[string]*PeerPot

NewPeerPotMap creates a map of pot records of *BzzAddr with keys as hexadecimal representations of the address; the NeighbourhoodSize of the passed kademlia is used. Used for testing only. TODO: move to separate testing tools file

func NotifyDepth added in v1.8.12

func NotifyDepth(depth uint8, kad *Kademlia)

NotifyDepth sends a message to all connections if depth of saturation is changed

func NotifyPeer added in v1.8.12

func NotifyPeer(p *BzzAddr, k *Kademlia)

NotifyPeer informs all peers about a newly added node

Types

type Bzz

type Bzz struct {
	*Hive
	NetworkID uint64
	LightNode bool
	// contains filtered or unexported fields
}

Bzz is the swarm protocol bundle

func NewBzz added in v1.8.12

func NewBzz(config *BzzConfig, kad *Kademlia, store state.Store, streamerSpec *protocols.Spec, streamerRun func(*BzzPeer) error) *Bzz

NewBzz is the swarm protocol constructor. Arguments: * bzz config * overlay driver * peer store
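
A hedged usage sketch wiring together the constructors documented on this page; the import paths, the in-memory state store, and the nil streamer spec/run are assumptions of this example rather than requirements of the API:

package main

import (
	"github.com/ethereum/go-ethereum/swarm/network"
	"github.com/ethereum/go-ethereum/swarm/state"
)

func main() {
	// Generate a random overlay/underlay address pair for this node.
	addr := network.RandomAddr()

	// Overlay connectivity driver with default parameters.
	kad := network.NewKademlia(addr.OAddr, network.NewKadParams())

	config := &network.BzzConfig{
		OverlayAddr:  addr.OAddr,
		UnderlayAddr: addr.UAddr,
		HiveParams:   network.NewHiveParams(),
		NetworkID:    network.DefaultNetworkID,
	}

	// No extra streamer subprotocol in this sketch, hence the nil spec/run;
	// an in-memory store is assumed here in place of a persistent one.
	bzz := network.NewBzz(config, kad, state.NewInmemoryStore(), nil, nil)

	_ = bzz.Protocols() // p2p protocols (bzz handshake/hive, discovery) to register on the node's p2p.Server
}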

func (*Bzz) APIs added in v1.8.12

func (b *Bzz) APIs() []rpc.API

APIs returns the APIs offered by bzz: * hive. Bzz implements the node.Service interface.

func (*Bzz) GetHandshake added in v1.8.12

func (b *Bzz) GetHandshake(peerID enode.ID) (*HandshakeMsg, bool)

GetHandshake returns the bzz handshake that the remote peer with peerID sent

func (*Bzz) NodeInfo added in v1.8.12

func (b *Bzz) NodeInfo() interface{}

NodeInfo returns the node's overlay address

func (*Bzz) Protocols added in v1.8.12

func (b *Bzz) Protocols() []p2p.Protocol

Protocols returns the protocols swarm offers: * handshake/hive * discovery. Bzz implements the node.Service interface.

func (*Bzz) RunProtocol added in v1.8.12

func (b *Bzz) RunProtocol(spec *protocols.Spec, run func(*BzzPeer) error) func(*p2p.Peer, p2p.MsgReadWriter) error

RunProtocol is a wrapper for swarm subprotocols; it returns a p2p protocol run function that can be assigned to the p2p.Protocol#Run field. Arguments:

  • p2p protocol spec
  • run function taking a BzzPeer as argument; this run function is meant to block for the duration of the protocol session; on return the session is terminated and the peer is disconnected

The protocol waits for the bzz handshake to be negotiated; the overlay address on the BzzPeer is set from the remote handshake.

func (*Bzz) UpdateLocalAddr added in v1.8.12

func (b *Bzz) UpdateLocalAddr(byteaddr []byte) *BzzAddr

UpdateLocalAddr updates the underlay address of the running node

type BzzAddr added in v1.8.12

type BzzAddr struct {
	OAddr []byte
	UAddr []byte
}

BzzAddr implements the PeerAddr interface

func NewAddr added in v1.8.17

func NewAddr(node *enode.Node) *BzzAddr

NewAddr constructs a BzzAddr from a node record.

func RandomAddr added in v1.8.12

func RandomAddr() *BzzAddr

RandomAddr is a utility method generating an address from a public key

func (*BzzAddr) Address added in v1.8.12

func (a *BzzAddr) Address() []byte

Address implements OverlayPeer interface to be used in Overlay.

func (*BzzAddr) ID added in v1.8.12

func (a *BzzAddr) ID() enode.ID

ID returns the node identifier in the underlay.

func (*BzzAddr) Over added in v1.8.12

func (a *BzzAddr) Over() []byte

Over returns the overlay address.

func (*BzzAddr) String added in v1.8.12

func (a *BzzAddr) String() string

String pretty prints the address

func (*BzzAddr) Under added in v1.8.12

func (a *BzzAddr) Under() []byte

Under returns the underlay address.

func (*BzzAddr) Update added in v1.8.12

func (a *BzzAddr) Update(na *BzzAddr) *BzzAddr

Update updates the underlay address of a peer record

type BzzConfig added in v1.8.12

type BzzConfig struct {
	OverlayAddr  []byte // base address of the overlay network
	UnderlayAddr []byte // node's underlay address
	HiveParams   *HiveParams
	NetworkID    uint64
	LightNode    bool
}

BzzConfig captures the config params used by the hive

type BzzPeer added in v1.8.12

type BzzPeer struct {
	*protocols.Peer // represents the connection for online peers
	*BzzAddr        // remote address -> implements Addr interface = protocols.Peer

	LightNode bool
	// contains filtered or unexported fields
}

BzzPeer is the bzz protocol view of a protocols.Peer (itself an extension of p2p.Peer). It implements the Peer interface and all interfaces Peer implements: Addr, OverlayPeer

func NewBzzPeer added in v1.8.16

func NewBzzPeer(p *protocols.Peer) *BzzPeer

func (*BzzPeer) ID added in v1.8.17

func (p *BzzPeer) ID() enode.ID

ID returns the peer's underlay node identifier.

type Fetcher added in v1.8.16

type Fetcher struct {
	// contains filtered or unexported fields
}

Fetcher is created when a chunk is not found locally. It starts a request handler loop once and keeps it alive until all active requests are completed. This can happen:

  1. either because the chunk is delivered
  2. or because the requestor cancelled or timed out

The Fetcher destroys itself after it is completed. TODO: cancel all forward requests after termination

func NewFetcher added in v1.8.16

func NewFetcher(addr storage.Address, rf RequestFunc, skipCheck bool) *Fetcher

NewFetcher creates a new Fetcher for the given chunk address using the given request function.

func (*Fetcher) Offer added in v1.8.16

func (f *Fetcher) Offer(ctx context.Context, source *enode.ID)

Offer is called when an upstream peer offers the chunk via syncing as part of `OfferedHashesMsg` and the node does not have the chunk locally.

func (*Fetcher) Request added in v1.8.16

func (f *Fetcher) Request(ctx context.Context, hopCount uint8)

Request is called when an upstream peer requests the chunk as part of `RetrieveRequestMsg`, or from a local request through FileStore, and the node does not have the chunk locally.

type FetcherFactory added in v1.8.16

type FetcherFactory struct {
	// contains filtered or unexported fields
}

FetcherFactory is initialised with a request function and can create fetchers

func NewFetcherFactory added in v1.8.16

func NewFetcherFactory(request RequestFunc, skipCheck bool) *FetcherFactory

NewFetcherFactory takes a request function and skip check parameter and creates a FetcherFactory

func (*FetcherFactory) New added in v1.8.16

func (f *FetcherFactory) New(ctx context.Context, source storage.Address, peersToSkip *sync.Map) storage.NetFetcher

New constructs a new Fetcher for the given chunk. Peers in peersToSkip are not requested to deliver the given chunk; peersToSkip should always contain the peers which are actively requesting this chunk, to make sure we don't request the chunk back from them. The created Fetcher is started and returned.

type HandshakeMsg added in v1.8.12

type HandshakeMsg struct {
	Version   uint64
	NetworkID uint64
	Addr      *BzzAddr
	LightNode bool
	// contains filtered or unexported fields
}
Handshake

* Version: 8 byte integer version of the protocol * NetworkID: 8 byte integer network identifier * Addr: the address advertised by the node, including underlay and overlay connections

func (*HandshakeMsg) String added in v1.8.12

func (bh *HandshakeMsg) String() string

String pretty prints the handshake

type Health added in v1.8.12

type Health struct {
	KnowNN           bool     // whether node knows all its neighbours
	CountKnowNN      int      // amount of neighbors known
	MissingKnowNN    [][]byte // which neighbours we should have known but we don't
	ConnectNN        bool     // whether node is connected to all its neighbours
	CountConnectNN   int      // amount of neighbours connected to
	MissingConnectNN [][]byte // which neighbours we should have been connected to but we're not
	Saturated        bool     // whether we are connected to all the peers we would have liked to
	Hive             string
}

Health state of the Kademlia. Used for testing only.

type Hive

type Hive struct {
	*HiveParams             // settings
	*Kademlia               // the overlay connectivity driver
	Store       state.Store // storage interface to save peers across sessions
	// contains filtered or unexported fields
}

Hive manages network connections of the swarm node

func NewHive

func NewHive(params *HiveParams, kad *Kademlia, store state.Store) *Hive

NewHive constructs a new hive. HiveParams: config parameters; Kademlia: connectivity driver using a network topology; StateStore: to save peers across sessions

func (*Hive) NodeInfo added in v1.8.12

func (h *Hive) NodeInfo() interface{}

NodeInfo function is used by the p2p.server RPC interface to display protocol specific node information

func (*Hive) PeerInfo added in v1.8.12

func (h *Hive) PeerInfo(id enode.ID) interface{}

PeerInfo function is used by the p2p.server RPC interface to display protocol specific information about any connected peer referred to by their NodeID

func (*Hive) Run added in v1.8.12

func (h *Hive) Run(p *BzzPeer) error

Run protocol run function

func (*Hive) Start

func (h *Hive) Start(server *p2p.Server) error

Start starts the hive; it receives the p2p.Server only at startup. The server is used to connect to a peer based on its NodeID or enode URL; these calls are made on the p2p.Server which runs on the node

func (*Hive) Stop

func (h *Hive) Stop() error

Stop terminates the updateloop and saves the peers

type HiveParams

type HiveParams struct {
	Discovery             bool  // whether we want discovery or not
	PeersBroadcastSetSize uint8 // how many peers to use when relaying
	MaxPeersPerRequest    uint8 // max size for peer address batches
	KeepAliveInterval     time.Duration
}

HiveParams holds the config options to hive

func NewHiveParams

func NewHiveParams() *HiveParams

NewHiveParams returns hive config with only the

type KadParams added in v1.8.12

type KadParams struct {
	// adjustable parameters
	MaxProxDisplay    int   // number of rows the table shows
	NeighbourhoodSize int   // nearest neighbour core minimum cardinality
	MinBinSize        int   // minimum number of peers in a row
	MaxBinSize        int   // maximum number of peers in a row before pruning
	RetryInterval     int64 // initial interval before a peer is first redialed
	RetryExponent     int   // exponent to multiply retry intervals with
	MaxRetries        int   // maximum number of redial attempts
	// function to sanction or prevent suggesting a peer
	Reachable func(*BzzAddr) bool `json:"-"`
}

KadParams holds the config params for Kademlia

func NewKadParams added in v1.8.12

func NewKadParams() *KadParams

NewKadParams returns a params struct with default values

type Kademlia added in v1.8.12

type Kademlia struct {
	*KadParams // Kademlia configuration parameters
	// contains filtered or unexported fields
}

Kademlia is a table of live peers and a db of known peers (node records)

func NewKademlia added in v1.8.12

func NewKademlia(addr []byte, params *KadParams) *Kademlia

NewKademlia creates a Kademlia table for base address addr with parameters as in params; if params is nil, it uses default values
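
A small usage sketch exercising only calls documented on this page (RandomAddr, NewKadParams, NewKademlia, Register, SuggestPeer); the import path is an assumption:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/swarm/network"
)

func main() {
	base := network.RandomAddr()
	kad := network.NewKademlia(base.Over(), network.NewKadParams()) // default params

	// Register a few known (not yet connected) peer addresses.
	if err := kad.Register(network.RandomAddr(), network.RandomAddr()); err != nil {
		fmt.Println("register:", err)
	}

	// Ask the table which known peer to dial next.
	if a, po, _ := kad.SuggestPeer(); a != nil {
		fmt.Printf("suggested peer %s at bin %d\n", a, po)
	}
}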

func (*Kademlia) AddrCountC added in v1.8.12

func (k *Kademlia) AddrCountC() <-chan int

AddrCountC returns the channel that sends a new address count value on each change. Not receiving from the returned channel will block the Register function when the address count value changes.

func (*Kademlia) BaseAddr added in v1.8.12

func (k *Kademlia) BaseAddr() []byte

BaseAddr returns the kademlia base address

func (*Kademlia) EachAddr added in v1.8.12

func (k *Kademlia) EachAddr(base []byte, o int, f func(*BzzAddr, int) bool)

EachAddr, called with (base, o, f), is an iterator applying f to each known peer that has proximity order o or less as measured from base; if base is nil, the kademlia base address is used

func (*Kademlia) EachConn added in v1.8.12

func (k *Kademlia) EachConn(base []byte, o int, f func(*Peer, int) bool)

EachConn is an iterator with args (base, o, f) that applies f to each live peer that has proximity order o or less as measured from base; if base is nil, the kademlia base address is used

func (*Kademlia) Healthy added in v1.8.12

func (k *Kademlia) Healthy(pp *PeerPot) *Health

Healthy reports the health state of the kademlia connectivity

The PeerPot argument provides an all-knowing view of the network. The resulting Health object is the result of comparisons between the actual composition of the kademlia in question (the receiver) and what it SHOULD have been when we take everything we know about the network into consideration.

used for testing only

func (*Kademlia) NeighbourhoodDepth added in v1.8.19

func (k *Kademlia) NeighbourhoodDepth() (depth int)

func (*Kademlia) NeighbourhoodDepthC added in v1.8.12

func (k *Kademlia) NeighbourhoodDepthC() <-chan int

NeighbourhoodDepthC returns the channel that sends a new kademlia neighbourhood depth on each change. Not receiving from the returned channel will block the On function when the neighbourhood depth is changed. TODO: Why is this exported, and if it should be, why can't we have more subscribers than one?

func (*Kademlia) Off added in v1.8.12

func (k *Kademlia) Off(p *Peer)

Off removes a peer from among live peers

func (*Kademlia) On added in v1.8.12

func (k *Kademlia) On(p *Peer) (uint8, bool)

On inserts the peer as a kademlia peer into the live peers

func (*Kademlia) Register added in v1.8.12

func (k *Kademlia) Register(peers ...*BzzAddr) error

Register enters each address as kademlia peer record into the database of known peer addresses

func (*Kademlia) String added in v1.8.12

func (k *Kademlia) String() string

String returns kademlia table + kaddb table displayed with ascii

func (*Kademlia) SuggestPeer added in v1.8.12

func (k *Kademlia) SuggestPeer() (a *BzzAddr, o int, want bool)

SuggestPeer returns a known peer for the lowest proximity bin with the lowest bin count below depth; naturally, if there is an empty row, it returns a peer for that

type Peer added in v1.8.12

type Peer struct {
	*BzzPeer
	// contains filtered or unexported fields
}

Peer wraps BzzPeer and embeds Kademlia overlay connectivity driver

func NewPeer added in v1.8.16

func NewPeer(p *BzzPeer, kad *Kademlia) *Peer

NewPeer constructs a discovery peer

func (*Peer) HandleMsg added in v1.8.16

func (d *Peer) HandleMsg(ctx context.Context, msg interface{}) error

HandleMsg is the message handler that delegates incoming messages

func (*Peer) NotifyDepth added in v1.8.16

func (d *Peer) NotifyDepth(po uint8)

NotifyDepth sends a subPeersMsg to the receiver, notifying them about a change in the depth of saturation

func (*Peer) NotifyPeer added in v1.8.16

func (d *Peer) NotifyPeer(a *BzzAddr, po uint8)

NotifyPeer notifies the remote node (recipient) about a peer if the peer's PO is within the recipient's advertised depth OR the peer is closer to the recipient than self, unless already notified during the connection session

type PeerPot added in v1.8.12

type PeerPot struct {
	NNSet [][]byte
}

PeerPot keeps info about expected nearest neighbours. Used for testing only. TODO: move to separate testing tools file

type Request added in v1.8.16

type Request struct {
	Addr      storage.Address // chunk address
	Source    *enode.ID       // nodeID of peer to request from (can be nil)
	SkipCheck bool            // whether to offer the chunk first or deliver directly

	HopCount uint8 // number of forwarded requests (hops)
	// contains filtered or unexported fields
}

func NewRequest added in v1.8.16

func NewRequest(addr storage.Address, skipCheck bool, peersToSkip *sync.Map) *Request

NewRequest returns a new instance of Request based on chunk address, skip check flag, and a map of peers to skip.

func (*Request) SkipPeer added in v1.8.16

func (r *Request) SkipPeer(nodeID string) bool

SkipPeer returns whether the peer with nodeID should not be requested to deliver a chunk. Peers to skip are kept per Request and for a time period of RequestTimeout. This function is used in the stream package, in Delivery.RequestFromPeers, to optimize requests for chunks.
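
A usage sketch under the assumption that the peersToSkip map stores, per node ID, the time the peer was added (which is what the RequestTimeout-based behaviour described above suggests); import paths are assumptions:

package main

import (
	"fmt"
	"sync"
	"time"

	"github.com/ethereum/go-ethereum/swarm/network"
	"github.com/ethereum/go-ethereum/swarm/storage"
)

func main() {
	peersToSkip := new(sync.Map)
	// Assumption: the map holds, per node ID, the time the peer started being
	// asked for this chunk; SkipPeer's RequestTimeout window is based on it.
	peersToSkip.Store("enode-id-of-peer-already-asked", time.Now())

	req := network.NewRequest(storage.Address(make([]byte, 32)), false, peersToSkip)

	fmt.Println(req.SkipPeer("enode-id-of-peer-already-asked")) // true, within RequestTimeout
	fmt.Println(req.SkipPeer("some-other-peer"))                // false
}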

type RequestFunc added in v1.8.16

type RequestFunc func(context.Context, *Request) (*enode.ID, chan struct{}, error)

Directories

Path Synopsis
You can run this simulation using go run ./swarm/network/simulations/overlay.go
You can run this simulation using go run ./swarm/network/simulations/overlay.go
