rcmgr

package
v0.24.2
Published: Dec 31, 2022 License: MIT Imports: 23 Imported by: 117

README

The libp2p Network Resource Manager

This package contains the canonical implementation of the libp2p Network Resource Manager interface.

The implementation is based on the concept of Resource Management Scopes, whereby resource usage is constrained by a DAG of scopes, accounting for multiple levels of resource constraints.

The Resource Manager doesn't prioritize resource requests; it simply checks whether the requested resource is currently below the defined limits and returns an error if the limit is reached. It has no notion of honest vs. bad peers.

The Resource Manager does have a special notion of allowlisted multiaddrs that have their own limits if the normal system limits are reached.

Usage

The Resource Manager is intended to be used with go-libp2p. go-libp2p sets up a resource manager with the default autoscaled limits if none is provided, but if you want to configure limits or enable metrics, you'll use the resource manager like so:

// Start with the default scaling limits.
scalingLimits := rcmgr.DefaultLimits

// Add limits around included libp2p protocols
libp2p.SetDefaultServiceLimits(&scalingLimits)

// Turn the scaling limits into a static set of limits using `.AutoScale`. This
// scales the limits proportional to your system memory.
limits := scalingLimits.AutoScale()

// The resource manager expects a limiter, so we create one from our limits.
limiter := rcmgr.NewFixedLimiter(limits)

// (Optional if you want metrics) Construct the OpenCensus metrics reporter.
str, err := rcmgrObs.NewStatsTraceReporter()
if err != nil {
  panic(err)
}

// Initialize the resource manager
rm, err := rcmgr.NewResourceManager(limiter, rcmgr.WithTraceReporter(str))
if err != nil {
  panic(err)
}

// Create a libp2p host with the resource manager
host, err := libp2p.New(libp2p.ResourceManager(rm))
if err != nil {
  panic(err)
}

Basic Resources

Memory

Perhaps the most fundamental resource is memory, and in particular buffers used for network operations. The system must provide an interface for components to reserve memory that accounts for buffers (and possibly other live objects), which is scoped within the component. Before a new buffer is allocated, the component should try a memory reservation, which can fail if the resource limit is exceeded. It is then up to the component to react to the error condition, depending on the situation. For example, a muxer failing to grow a buffer in response to a window change should simply retain the old buffer and operate at perhaps degraded performance.
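The reserve-then-react pattern described above can be sketched as follows. This is an illustrative model, not the package's real API: `memScope`, `growBuffer`, and `errLimitExceeded` are hypothetical stand-ins for a resource scope with a memory limit.

```go
package main

import (
	"errors"
	"fmt"
)

// memScope is a hypothetical stand-in for a resource scope with a memory limit.
type memScope struct {
	used, limit int64
}

var errLimitExceeded = errors.New("resource limit exceeded")

// ReserveMemory fails instead of over-committing; the caller must handle the error.
func (s *memScope) ReserveMemory(size int64) error {
	if s.used+size > s.limit {
		return errLimitExceeded
	}
	s.used += size
	return nil
}

// growBuffer tries to reserve memory for a larger buffer. On failure it keeps
// the old buffer, like a muxer refusing a window increase and operating at
// degraded performance.
func growBuffer(s *memScope, buf []byte, newSize int64) []byte {
	if err := s.ReserveMemory(newSize - int64(len(buf))); err != nil {
		return buf // degraded but functional
	}
	grown := make([]byte, newSize)
	copy(grown, buf)
	return grown
}

func main() {
	scope := &memScope{limit: 1 << 10} // 1 KiB budget
	buf := growBuffer(scope, make([]byte, 256), 512)
	fmt.Println(len(buf)) // 512: reservation succeeds
	buf = growBuffer(scope, buf, 4096)
	fmt.Println(len(buf)) // 512: growth would exceed the limit, old buffer retained
}
```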

File Descriptors

File descriptors are an important resource that uses memory (and computational time) at the system level. They are also a scarce resource, as typically (unless the user explicitly intervenes) they are constrained by the system. Exhaustion of file descriptors may render the application incapable of operating (e.g., because it is unable to open a file). This is important for libp2p because most operating systems represent sockets as file descriptors.

Connections

Connections are a higher-level concept endemic to libp2p; in order to communicate with another peer, a connection must first be established. Connections are an important resource in libp2p, as they consume memory, goroutines, and possibly file descriptors.

We distinguish between inbound and outbound connections. Inbound connections are initiated by remote peers and consume resources in response to network events; they need to be tightly controlled in order to protect the application from overload or attack. Outbound connections are typically initiated by the application's volition and don't need to be controlled as tightly. However, outbound connections still consume resources and may be initiated in response to network events by (potentially faulty) application logic, so they still need to be constrained.

Streams

Streams are the fundamental object of interaction in libp2p; all protocol interactions happen through a stream that goes over some connection. Streams are a fundamental resource in libp2p, as they consume memory and goroutines at all levels of the stack.

Streams always belong to a peer, specify a protocol, and may belong to some service in the system. Hence, apart from global limits, we can constrain stream usage at a finer granularity: the protocol and service level.

Once again, we distinguish between inbound and outbound streams. Inbound streams are initiated by remote peers and consume resources in response to network events; controlling inbound stream usage is again paramount for protecting the system from overload or attack. Outbound streams are normally initiated by the application or some service in the system in order to effect some protocol interaction. However, they can also be initiated in response to network events because of application or service logic, so we still need to constrain them.

Resource Scopes

The Resource Manager is based on the concept of resource scopes. Resource Scopes account for resource usage that is temporally delimited for the span of the scope. Resource Scopes conceptually form a DAG, providing us with a mechanism to enforce multiresolution resource accounting. Downstream resource usage is aggregated at scopes higher up the graph.
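The aggregation up the DAG can be sketched with a toy model. The `scope` type and `addStream` method here are illustrative, not the package's internal types: a reservation in a child scope is charged to every ancestor, and fails if any scope on the path is at its limit.

```go
package main

import (
	"errors"
	"fmt"
)

// scope is a toy resource scope: usage reserved in a child is also charged to
// every ancestor, so limits compose up the DAG.
type scope struct {
	name    string
	streams int
	limit   int
	parents []*scope
}

var errLimitExceeded = errors.New("resource limit exceeded")

// addStream charges this scope and all ancestors, failing atomically if any
// scope on the path is already at its limit.
func (s *scope) addStream() error {
	all := s.ancestors()
	for _, sc := range all {
		if sc.streams+1 > sc.limit {
			return fmt.Errorf("%s: %w", sc.name, errLimitExceeded)
		}
	}
	for _, sc := range all {
		sc.streams++
	}
	return nil
}

// ancestors returns this scope plus all scopes reachable through parents,
// visiting each scope in the DAG once.
func (s *scope) ancestors() []*scope {
	seen := map[*scope]bool{}
	var out []*scope
	var walk func(*scope)
	walk = func(sc *scope) {
		if seen[sc] {
			return
		}
		seen[sc] = true
		out = append(out, sc)
		for _, p := range sc.parents {
			walk(p)
		}
	}
	walk(s)
	return out
}

func main() {
	system := &scope{name: "system", limit: 2}
	peer := &scope{name: "peer", limit: 10, parents: []*scope{system}}
	stream := &scope{name: "stream", limit: 1, parents: []*scope{peer}}
	fmt.Println(stream.addStream())           // nil: charged to stream, peer, and system
	fmt.Println(system.streams, peer.streams) // downstream usage aggregated upward
}
```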

The following diagram depicts the canonical scope graph:

System
  +------------> Transient.............+................+
  |                                    .                .
  +------------>  Service------------- . ----------+    .
  |                                    .           |    .
  +------------->  Protocol----------- . ----------+    .
  |                                    .           |    .
  +-------------->* Peer               \/          |    .
                     +------------> Connection     |    .
                     |                             \/   \/
                     +--------------------------->  Stream
The System Scope

The system scope is the top level scope that accounts for global resource usage at all levels of the system. This scope nests and constrains all other scopes and institutes global hard limits.

The Transient Scope

The transient scope accounts for resources that are in the process of being fully established. For instance, a new connection prior to the handshake does not belong to any peer, but it still needs to be constrained, as this opens an avenue for attacks through transient resource usage. Similarly, a stream that has not yet negotiated a protocol is constrained by the transient scope.

The transient scope effectively represents a DMZ (demilitarized zone), where resource usage is accounted for connections and streams that are not yet fully established.

The Allowlist System Scope

Same as the normal system scope above, but used if the normal system scope is already at its limits and the resource is from an allowlisted peer. See Allowlisting multiaddrs to mitigate eclipse attacks for more information.

The Allowlist Transient Scope

Same as the normal transient scope above, but used if the normal transient scope is already at its limits and the resource is from an allowlisted peer. See Allowlisting multiaddrs to mitigate eclipse attacks for more information.

Service Scopes

The system is typically organized across services, which may be ambient and provide basic functionality to the system (e.g. identify, autonat, relay, etc). Alternatively, services may be explicitly instantiated by the application, and provide core components of its functionality (e.g. pubsub, the DHT, etc).

Services are logical groupings of streams that implement protocol flow and may additionally consume resources such as memory. Services typically have at least one stream handler, so they are subject to inbound stream creation and resource usage in response to network events. As such, the system explicitly models them, allowing for isolated resource usage that can be tuned by the user.

Protocol Scopes

Protocol Scopes account for resources at the protocol level. They are an intermediate resource scope that can constrain streams that have no associated service, or provide resource control within a service. They also give system operators an opportunity to explicitly restrict specific protocols.

For instance, a service that is not aware of the resource manager and has not been ported to mark its streams may still gain limits transparently, without any programmer intervention. Furthermore, the protocol scope can constrain resource usage for services that implement multiple protocols for the sake of backwards compatibility. A tighter limit on some older protocol can protect the application from resource consumption caused by legacy clients or potential attacks.

For a concrete example, consider pubsub with the gossipsub router: the service also understands the floodsub protocol for backwards compatibility and support for unsophisticated clients that are lagging in the implementation effort. By specifying a lower limit for the floodsub protocol, we can constrain the service level for legacy clients using an inefficient protocol.

Peer Scopes

The peer scope accounts for resource usage by an individual peer. This constrains connections and streams and limits the blast radius of resource consumption by a single remote peer.

This ensures that no single peer can use more resources than allowed by the peer limits. Every peer has a default limit, but the programmer may raise (or lower) limits for specific peers.

Connection Scopes

The connection scope is delimited to the duration of a connection and constrains resource usage by a single connection. The scope is a leaf in the DAG, with a span that begins when a connection is established and ends when the connection is closed. Its resources are aggregated to the resource usage of a peer.

Stream Scopes

The stream scope is delimited to the duration of a stream, and constrains resource usage by a single stream. This scope is also a leaf in the DAG, with span that begins when a stream is created and ends when the stream is closed. Its resources are aggregated to the resource usage of a peer, and constrained by a service and protocol scope.

User Transaction Scopes

User transaction scopes can be created as a child of any extant resource scope, and provide the programmer with a delimited scope for easy resource accounting. Transactions may form a tree that is rooted to some canonical scope in the scope DAG.

For instance, a programmer may create a transaction scope within a service that accounts for some control flow delimited resource usage. Similarly, a programmer may create a transaction scope for some interaction within a stream, e.g. a Request/Response interaction that uses a buffer.

Limits

Each resource scope has an associated limit object, which designates limits for all basic resources. The limit is checked every time some resource is reserved and provides the system with an opportunity to constrain resource usage.

There are separate limits for each class of scope, allowing for multiresolution and aggregate resource accounting. As such, we have limits for the system and transient scopes, default and specific limits for services, protocols, and peers, and limits for connections and streams.

Scaling Limits

When building software that is supposed to run on many different kinds of machines, with various memory and CPU configurations, it is desirable to have limits that scale with the size of the machine.

This is done using the ScalingLimitConfig. For every scope, this configuration struct defines the absolute bare-minimum limits and an (optional) increase of these limits, which will be applied on nodes that have sufficient memory.

A ScalingLimitConfig can be converted into a LimitConfig (which can then be used to initialize a fixed limiter with NewFixedLimiter) by calling the Scale method. The Scale method takes two parameters: the amount of memory and the number of file descriptors that an application is willing to dedicate to libp2p.

These amounts will differ between use cases. A blockchain node running on a dedicated server might have a lot of memory, and dedicate 1/4 of that memory to libp2p. On the other end of the spectrum, a desktop companion application running as a background task on a consumer laptop will probably dedicate significantly less than 1/4 of its system memory to libp2p.

For convenience, the ScalingLimitConfig also provides an AutoScale method, which determines the amount of memory and file descriptors available on the system, and dedicates up to 1/8 of the memory and 1/2 of the file descriptors to libp2p.

For example, one might set:

var scalingLimits = ScalingLimitConfig{
  SystemBaseLimit: BaseLimit{
    ConnsInbound:    64,
    ConnsOutbound:   128,
    Conns:           128,
    StreamsInbound:  512,
    StreamsOutbound: 1024,
    Streams:         1024,
    Memory:          128 << 20,
    FD:              256,
  },
  SystemLimitIncrease: BaseLimitIncrease{
    ConnsInbound:    32,
    ConnsOutbound:   64,
    Conns:           64,
    StreamsInbound:  256,
    StreamsOutbound: 512,
    Streams:         512,
    Memory:          256 << 20,
    FDFraction:      1,
  },
}

The base limit (SystemBaseLimit) here is the minimum configuration that any node will have, no matter how little memory it possesses. For every GiB of memory passed into the Scale method, an increase of SystemLimitIncrease is added.

For example, calling Scale with 4 GB of memory will result in a limit of 384 for Conns (128 + 4*64).

The FDFraction defines how many of the file descriptors are allocated to this scope. In the example above, when called with a file descriptor value of 1000, this would result in a limit of 1000 (1000 * 1) file descriptors for the system scope. See TestReadmeExample in limit_test.go.
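The arithmetic above can be sketched as follows. The `scaledLimit` function is an illustrative helper, not the package's Scale method, reproducing the rule: base + increase per GiB of memory, with FDs scaled by FDFraction.

```go
package main

import "fmt"

// scaledLimit computes base + increasePerGiB for every GiB of memory
// dedicated to libp2p.
func scaledLimit(base, increasePerGiB, memoryGiB int) int {
	return base + increasePerGiB*memoryGiB
}

// scaledFDs computes the file descriptor limit for a scope from the total
// FDs dedicated to libp2p and the scope's FDFraction.
func scaledFDs(fds int, fdFraction float64) int {
	return int(float64(fds) * fdFraction)
}

func main() {
	// SystemBaseLimit.Conns = 128, SystemLimitIncrease.Conns = 64, 4 GiB:
	fmt.Println(scaledLimit(128, 64, 4)) // 384
	// FDFraction 1 with 1000 file descriptors dedicated to libp2p:
	fmt.Println(scaledFDs(1000, 1)) // 1000
}
```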

Note that we only showed the configuration for the system scope here; equivalent configuration options apply to all other scopes as well.

Default limits

By default the resource manager ships with some reasonable scaling limits and makes a reasonable guess at how much system memory you want to dedicate to the go-libp2p process. For the default definitions see DefaultLimits and ScalingLimitConfig.AutoScale().

Tweaking Defaults

If the defaults seem mostly okay, but you want to adjust one facet you can simply copy the default struct object and update the field you want to change. You can apply changes to a BaseLimit, BaseLimitIncrease, and LimitConfig with .Apply.

Example

// An example on how to tweak the default limits
tweakedDefaults := DefaultLimits
tweakedDefaults.ProtocolBaseLimit.Streams = 1024
tweakedDefaults.ProtocolBaseLimit.StreamsInbound = 512
tweakedDefaults.ProtocolBaseLimit.StreamsOutbound = 512
How to tune your limits

Once you've set your limits and monitoring (see Monitoring below), you can tune your limits better. The rcmgr_blocked_resources metric will tell you what was blocked and in which scope. If you see a steady stream of blocked requests, your resource limits are too low for your usage. If you see a rare sudden spike, that's okay: it means the resource manager protected you from some anomaly.

How to disable limits

Disabling all limits is sometimes useful when you want to see how many resources you use during normal operation. You can then use this information to define your initial limits. Disable the limits by using InfiniteLimits.
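Assuming the constructors shown in Usage above, a configuration fragment for this "measurement mode" might look like the following. Resource usage is still tracked (so metrics can inform the real limits you set later), but nothing is blocked.

```go
// No limits enforced, but usage is still accounted for.
limiter := rcmgr.NewFixedLimiter(rcmgr.InfiniteLimits)
rm, err := rcmgr.NewResourceManager(limiter)
if err != nil {
  panic(err)
}
host, err := libp2p.New(libp2p.ResourceManager(rm))
```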

Debug "resource limit exceeded" errors

These errors occur whenever a limit is hit. For example, you'll get this error if you are at your limit for the number of streams you can have, and you try to open one more.

Example Log:

2022-08-12T15:49:35.459-0700	DEBUG	rcmgr	go-libp2p-resource-manager@v0.5.3/scope.go:541	blocked connection from constraining edge	{"scope": "conn-19667", "edge": "system", "direction": "Inbound", "usefd": false, "current": 100, "attempted": 1, "limit": 100, "stat": {"NumStreamsInbound":28,"NumStreamsOutbound":66,"NumConnsInbound":37,"NumConnsOutbound":63,"NumFD":33,"Memory":8687616}, "error": "system: cannot reserve connection: resource limit exceeded"}

The log line above is an example log line that gets emitted if you enable debug logging in the resource manager. You can do this by setting the environment variable GOLOG_LOG_LEVEL="rcmgr=info". By default only the error is returned to the caller, and nothing is logged by the resource manager itself.

The log line message (and returned error) will tell you which resource limit was hit (connection in the log above) and what blocked it (in this case it was the system scope that blocked it). The log will also include some more information about the current usage of the resources. In the example log above, there is a limit of 100 connections, and you can see that we have 37 inbound connections and 63 outbound connections. We've reached the limit and the resource manager will block any further connections.

The next step in debugging is seeing if this is a recurring problem or just a transient error. If it's a transient error it's okay to ignore it since the resource manager was doing its job in keeping resource usage under the limit. If it's recurring then you should understand what's causing you to hit these limits and either refactor your application or raise the limits.

To check if it's a recurring problem you can count the number of times you've seen the "resource limit exceeded" error over time. You can also check the rcmgr_blocked_resources metric to see how many times the resource manager has blocked a resource over time.

Example graph of blocked resources over time

If the resource is blocked by a protocol-level scope, take a look at the various resource usages in the metrics. For example, if you run into a new stream being blocked, you can check the rcmgr_streams metric and the "Streams by protocol" graph in the Grafana dashboard (assuming you've set that up or something similar – see Monitoring) to understand the usage pattern of that specific protocol. This can help answer questions such as: "Am I constantly around my limit?", "Does it make sense to raise my limit?", "Are there any patterns around hitting this limit?", and "Should I refactor my protocol implementation?"

Monitoring

Once you have limits set, you'll want to monitor to see if you're running into your limits often. This could be a sign that you need to raise your limits (your process is more intensive than you originally thought) or that you need to fix something in your application (surely you don't need over 1000 streams?).

There are OpenCensus metrics that can be hooked up to the resource manager. See obs/stats_test.go for an example on how to enable this, and DefaultViews in stats.go for recommended views. These metrics can be hooked up to Prometheus or any other OpenCensus supported platform.

There is also an included Grafana dashboard to help kickstart your observability into the resource manager. Find more information about it here.

Allowlisting multiaddrs to mitigate eclipse attacks

If you have a set of trusted peers and IP addresses, you can use the resource manager's Allowlist to protect yourself from eclipse attacks. The set of peers in the allowlist will have their own limits in case the normal limits are reached. This means you will always be able to connect to these trusted peers even if you've already reached your system limits.

Look at WithAllowlistedMultiaddrs and its example in the GoDoc to learn more.

ConnManager vs Resource Manager

go-libp2p already includes a connection manager, so what's the difference between the ConnManager and the ResourceManager?

ConnManager:

  1. Configured with a low and high watermark number of connections.
  2. Attempts to maintain the number of connections between the low and high markers.
  3. Connections can be given metadata and weight (e.g. a hole punched connection is more valuable than a connection to a publicly addressable endpoint since it took more effort to make the hole punched connection).
  4. The ConnManager will trim connections once the high watermark is reached, trimming down to the low watermark.
  5. Won't block adding another connection above the high watermark, but will trigger the trim mentioned above.
  6. Can trim and prioritize connections with custom logic.
  7. No concept of scopes (unlike the resource manager).

Resource Manager:

  1. Configured with limits on the number of outgoing and incoming connections at different resource scopes.
  2. Will block adding any more connections if any of the scope-specific limits would be exceeded.

The natural question when comparing these two managers is "how do the watermarks and limits interact with each other?". The short answer is that they don't know about each other. This can lead to some surprising subtleties, such as the trimming never happening because the resource manager's limit is lower than the high watermark. This is confusing, and we'd like to fix it. The issue is captured in go-libp2p#1640.

When configuring the resource manager and connection manager, you should set the limits in the resource manager as your hard limits that you would never want to go over, and set the low/high watermarks as the range at which your application works best.

Examples

Here we consider some concrete examples that elucidate the abstract design described so far.

Stream Lifetime

Let's consider a stream and the limits that apply to it. The stream's scope is created when the stream is first opened, by calling ResourceManager.OpenStream.

Initially the stream is constrained by:

  • the system scope, where global hard limits apply.
  • the transient scope, where unnegotiated streams live.
  • the peer scope, where the limits for the peer at the other end of the stream apply.

Once the protocol has been negotiated, the protocol is set by calling StreamManagementScope.SetProtocol. The constraint from the transient scope is removed and the stream is now constrained by the protocol instead.

More specifically, the following constraints apply:

  • the system scope, where global hard limits apply.
  • the peer scope, where the limits for the peer at the other end of the stream apply.
  • the protocol scope, where the limits of the specific protocol used apply.

The existence of the protocol limit allows us to implicitly constrain streams for services that have not been ported to the resource manager yet. Once the programmer attaches a stream to a service by calling StreamScope.SetService, the stream resources are aggregated and constrained by the service scope in addition to its protocol scope.

More specifically the following constraints apply:

  • the system scope, where global hard limits apply.
  • the peer scope, where the limits for the peer at the other end of the stream apply.
  • the service scope, where the limits of the specific service owning the stream apply.
  • the protocol scope, where the limits of the specific protocol for the stream apply.

The resource transfer that happens in SetProtocol and SetService gives the resource manager the opportunity to gate streams. If the transfer would result in exceeding the scope limits, an error indicating "resource limit exceeded" is returned. The wrapped error includes the name of the scope rejecting the resource acquisition, to aid understanding of the applicable limits. Note that the (wrapped) error implements net.Error and is marked as temporary, so the programmer can handle it with backoff and retry.

Implementation Notes

  • The package only exports a constructor for the resource manager and basic types for defining limits. Internals are not exposed.
  • Internally, there is a resources object that is embedded in every scope and implements resource accounting.
  • There is a single implementation of a generic resource scope, that provides all necessary interface methods.
  • There are concrete types for all canonical scopes, embedding a pointer to a generic resource scope.
  • Peer and Protocol scopes, which may be created in response to network events, are periodically garbage collected.

Design Considerations

  • The Resource Manager must account for basic resource usage at all levels of the stack, from the internals to application components that use the network facilities of libp2p.
  • Basic resources include memory, streams, connections, and file descriptors. These account for both space and time used by the stack, as each resource has a direct effect on the system availability and performance.
  • The design must support seamless integration for user applications, which should reap the benefits of resource management without any changes. That is, existing applications should be oblivious to the resource manager and transparently obtain limits that protect them from resource exhaustion and OOM conditions.
  • At the same time, the design must support opt-in resource usage accounting for applications that want to explicitly utilize the facilities of the system to inform about and constrain their own resource usage.
  • The design must allow the user to set their own limits, which can be static (fixed) or dynamic.

Documentation

Overview

Package rcmgr is the resource manager for go-libp2p. It allows you to track the resources being used throughout your go-libp2p process and to make sure the process doesn't use more resources than the limits you define. The resource manager only knows about things it is told about, so it is the responsibility of the user of this library (either go-libp2p or a go-libp2p user) to check with the resource manager before actually allocating a resource.

Index

Examples

Constants

This section is empty.

Variables

var DefaultLimits = ScalingLimitConfig{
	SystemBaseLimit: BaseLimit{
		ConnsInbound:    64,
		ConnsOutbound:   128,
		Conns:           128,
		StreamsInbound:  64 * 16,
		StreamsOutbound: 128 * 16,
		Streams:         128 * 16,
		Memory:          128 << 20,
		FD:              256,
	},

	SystemLimitIncrease: BaseLimitIncrease{
		ConnsInbound:    64,
		ConnsOutbound:   128,
		Conns:           128,
		StreamsInbound:  64 * 16,
		StreamsOutbound: 128 * 16,
		Streams:         128 * 16,
		Memory:          1 << 30,
		FDFraction:      1,
	},

	TransientBaseLimit: BaseLimit{
		ConnsInbound:    32,
		ConnsOutbound:   64,
		Conns:           64,
		StreamsInbound:  128,
		StreamsOutbound: 256,
		Streams:         256,
		Memory:          32 << 20,
		FD:              64,
	},

	TransientLimitIncrease: BaseLimitIncrease{
		ConnsInbound:    16,
		ConnsOutbound:   32,
		Conns:           32,
		StreamsInbound:  128,
		StreamsOutbound: 256,
		Streams:         256,
		Memory:          128 << 20,
		FDFraction:      0.25,
	},

	AllowlistedSystemBaseLimit: BaseLimit{
		ConnsInbound:    64,
		ConnsOutbound:   128,
		Conns:           128,
		StreamsInbound:  64 * 16,
		StreamsOutbound: 128 * 16,
		Streams:         128 * 16,
		Memory:          128 << 20,
		FD:              256,
	},

	AllowlistedSystemLimitIncrease: BaseLimitIncrease{
		ConnsInbound:    64,
		ConnsOutbound:   128,
		Conns:           128,
		StreamsInbound:  64 * 16,
		StreamsOutbound: 128 * 16,
		Streams:         128 * 16,
		Memory:          1 << 30,
		FDFraction:      1,
	},

	AllowlistedTransientBaseLimit: BaseLimit{
		ConnsInbound:    32,
		ConnsOutbound:   64,
		Conns:           64,
		StreamsInbound:  128,
		StreamsOutbound: 256,
		Streams:         256,
		Memory:          32 << 20,
		FD:              64,
	},

	AllowlistedTransientLimitIncrease: BaseLimitIncrease{
		ConnsInbound:    16,
		ConnsOutbound:   32,
		Conns:           32,
		StreamsInbound:  128,
		StreamsOutbound: 256,
		Streams:         256,
		Memory:          128 << 20,
		FDFraction:      0.25,
	},

	ServiceBaseLimit: BaseLimit{
		StreamsInbound:  1024,
		StreamsOutbound: 4096,
		Streams:         4096,
		Memory:          64 << 20,
	},

	ServiceLimitIncrease: BaseLimitIncrease{
		StreamsInbound:  512,
		StreamsOutbound: 2048,
		Streams:         2048,
		Memory:          128 << 20,
	},

	ServicePeerBaseLimit: BaseLimit{
		StreamsInbound:  128,
		StreamsOutbound: 256,
		Streams:         256,
		Memory:          16 << 20,
	},

	ServicePeerLimitIncrease: BaseLimitIncrease{
		StreamsInbound:  4,
		StreamsOutbound: 8,
		Streams:         8,
		Memory:          4 << 20,
	},

	ProtocolBaseLimit: BaseLimit{
		StreamsInbound:  512,
		StreamsOutbound: 2048,
		Streams:         2048,
		Memory:          64 << 20,
	},

	ProtocolLimitIncrease: BaseLimitIncrease{
		StreamsInbound:  256,
		StreamsOutbound: 512,
		Streams:         512,
		Memory:          164 << 20,
	},

	ProtocolPeerBaseLimit: BaseLimit{
		StreamsInbound:  64,
		StreamsOutbound: 128,
		Streams:         256,
		Memory:          16 << 20,
	},

	ProtocolPeerLimitIncrease: BaseLimitIncrease{
		StreamsInbound:  4,
		StreamsOutbound: 8,
		Streams:         16,
		Memory:          4,
	},

	PeerBaseLimit: BaseLimit{

		ConnsInbound:    8,
		ConnsOutbound:   8,
		Conns:           8,
		StreamsInbound:  256,
		StreamsOutbound: 512,
		Streams:         512,
		Memory:          64 << 20,
		FD:              4,
	},

	PeerLimitIncrease: BaseLimitIncrease{
		StreamsInbound:  128,
		StreamsOutbound: 256,
		Streams:         256,
		Memory:          128 << 20,
		FDFraction:      1.0 / 64,
	},

	ConnBaseLimit: BaseLimit{
		ConnsInbound:  1,
		ConnsOutbound: 1,
		Conns:         1,
		FD:            1,
		Memory:        32 << 20,
	},

	StreamBaseLimit: BaseLimit{
		StreamsInbound:  1,
		StreamsOutbound: 1,
		Streams:         1,
		Memory:          16 << 20,
	},
}

DefaultLimits are the limits used by the default limiter constructors.

var InfiniteLimits = LimitConfig{
	System:               infiniteBaseLimit,
	Transient:            infiniteBaseLimit,
	AllowlistedSystem:    infiniteBaseLimit,
	AllowlistedTransient: infiniteBaseLimit,
	ServiceDefault:       infiniteBaseLimit,
	ServicePeerDefault:   infiniteBaseLimit,
	ProtocolDefault:      infiniteBaseLimit,
	ProtocolPeerDefault:  infiniteBaseLimit,
	PeerDefault:          infiniteBaseLimit,
	Conn:                 infiniteBaseLimit,
	Stream:               infiniteBaseLimit,
}

InfiniteLimits is a limiter configuration that uses infinite limits, thus effectively not limiting anything. Keep in mind that the operating system limits the number of file descriptors an application can use.

Functions

func IsConnScope

func IsConnScope(name string) bool

func IsSpan

func IsSpan(name string) bool

IsSpan will return true if this name was created by newResourceScopeSpan.

func IsStreamScope

func IsStreamScope(name string) bool

func IsSystemScope

func IsSystemScope(name string) bool

func IsTransientScope

func IsTransientScope(name string) bool

func NewResourceManager

func NewResourceManager(limits Limiter, opts ...Option) (network.ResourceManager, error)

func ParsePeerScopeName

func ParsePeerScopeName(name string) peer.ID

ParsePeerScopeName returns "" if name is not a peerScopeName

func ParseProtocolScopeName

func ParseProtocolScopeName(name string) string

ParseProtocolScopeName returns the protocol name if name is a protocolScopeName. Otherwise returns "".

func ParseServiceScopeName

func ParseServiceScopeName(name string) string

ParseServiceScopeName returns the service name if name is a serviceScopeName. Otherwise returns ""

Types

type Allowlist

type Allowlist struct {
	// contains filtered or unexported fields
}

func GetAllowlist

func GetAllowlist(rcmgr network.ResourceManager) *Allowlist

GetAllowlist tries to get the allowlist from the given network.ResourceManager interface by checking whether its concrete type is a resourceManager. Returns nil if it fails to get the allowlist.

func (*Allowlist) Add

func (al *Allowlist) Add(ma multiaddr.Multiaddr) error

Add takes a multiaddr and adds it to the allowlist. The multiaddr should be the IP address of the peer, with or without a `/p2p` protocol. E.g. /ip4/1.2.3.4/p2p/QmFoo, /ip4/1.2.3.4, and /ip4/1.2.3.0/ipcidr/24 are valid; /p2p/QmFoo is not.

func (*Allowlist) Allowed

func (al *Allowlist) Allowed(ma multiaddr.Multiaddr) bool

func (*Allowlist) AllowedPeerAndMultiaddr

func (al *Allowlist) AllowedPeerAndMultiaddr(peerID peer.ID, ma multiaddr.Multiaddr) bool

func (*Allowlist) Remove

func (al *Allowlist) Remove(ma multiaddr.Multiaddr) error

type BaseLimit

type BaseLimit struct {
	Streams         int
	StreamsInbound  int
	StreamsOutbound int
	Conns           int
	ConnsInbound    int
	ConnsOutbound   int
	FD              int
	Memory          int64
}

BaseLimit is a mixin type for basic resource limits.

func (*BaseLimit) Apply

func (l *BaseLimit) Apply(l2 BaseLimit)

Apply overwrites all zero-valued limits with the values of l2. Must not use a pointer receiver.

func (*BaseLimit) GetConnLimit

func (l *BaseLimit) GetConnLimit(dir network.Direction) int

func (*BaseLimit) GetConnTotalLimit

func (l *BaseLimit) GetConnTotalLimit() int

func (*BaseLimit) GetFDLimit

func (l *BaseLimit) GetFDLimit() int

func (*BaseLimit) GetMemoryLimit

func (l *BaseLimit) GetMemoryLimit() int64

func (*BaseLimit) GetStreamLimit

func (l *BaseLimit) GetStreamLimit(dir network.Direction) int

func (*BaseLimit) GetStreamTotalLimit

func (l *BaseLimit) GetStreamTotalLimit() int

type BaseLimitIncrease

type BaseLimitIncrease struct {
	Streams         int
	StreamsInbound  int
	StreamsOutbound int
	Conns           int
	ConnsInbound    int
	ConnsOutbound   int
	// Memory is in bytes. Values over 1<<30 (1GiB) don't make sense.
	Memory int64
	// FDFraction is expected to be >= 0 and <= 1.
	FDFraction float64
}

BaseLimitIncrease is the increase per GiB of allowed memory.

func (*BaseLimitIncrease) Apply

func (l *BaseLimitIncrease) Apply(l2 BaseLimitIncrease)

Apply overwrites all zero-valued limits with the values of l2. Must not use a pointer receiver.

type Limit

type Limit interface {
	// GetMemoryLimit returns the (current) memory limit.
	GetMemoryLimit() int64
	// GetStreamLimit returns the stream limit, for inbound or outbound streams.
	GetStreamLimit(network.Direction) int
	// GetStreamTotalLimit returns the total stream limit
	GetStreamTotalLimit() int
	// GetConnLimit returns the connection limit, for inbound or outbound connections.
	GetConnLimit(network.Direction) int
	// GetConnTotalLimit returns the total connection limit
	GetConnTotalLimit() int
	// GetFDLimit returns the file descriptor limit.
	GetFDLimit() int
}

Limit is an object that specifies basic resource limits.

type LimitConfig

type LimitConfig struct {
	System    BaseLimit `json:",omitempty"`
	Transient BaseLimit `json:",omitempty"`

	// Limits that are applied to resources with an allowlisted multiaddr.
	// These will only be used if the normal System & Transient limits are
	// reached.
	AllowlistedSystem    BaseLimit `json:",omitempty"`
	AllowlistedTransient BaseLimit `json:",omitempty"`

	ServiceDefault BaseLimit            `json:",omitempty"`
	Service        map[string]BaseLimit `json:",omitempty"`

	ServicePeerDefault BaseLimit            `json:",omitempty"`
	ServicePeer        map[string]BaseLimit `json:",omitempty"`

	ProtocolDefault BaseLimit                 `json:",omitempty"`
	Protocol        map[protocol.ID]BaseLimit `json:",omitempty"`

	ProtocolPeerDefault BaseLimit                 `json:",omitempty"`
	ProtocolPeer        map[protocol.ID]BaseLimit `json:",omitempty"`

	PeerDefault BaseLimit             `json:",omitempty"`
	Peer        map[peer.ID]BaseLimit `json:",omitempty"`

	Conn   BaseLimit `json:",omitempty"`
	Stream BaseLimit `json:",omitempty"`
}

func (*LimitConfig) Apply

func (cfg *LimitConfig) Apply(c LimitConfig)

func (*LimitConfig) MarshalJSON

func (cfg *LimitConfig) MarshalJSON() ([]byte, error)

type Limiter

type Limiter interface {
	GetSystemLimits() Limit
	GetTransientLimits() Limit
	GetAllowlistedSystemLimits() Limit
	GetAllowlistedTransientLimits() Limit
	GetServiceLimits(svc string) Limit
	GetServicePeerLimits(svc string) Limit
	GetProtocolLimits(proto protocol.ID) Limit
	GetProtocolPeerLimits(proto protocol.ID) Limit
	GetPeerLimits(p peer.ID) Limit
	GetStreamLimits(p peer.ID) Limit
	GetConnLimits() Limit
}

Limiter is the interface for providing limits to the resource manager.

func NewDefaultLimiterFromJSON

func NewDefaultLimiterFromJSON(in io.Reader) (Limiter, error)

NewDefaultLimiterFromJSON creates a new limiter by parsing a JSON configuration, using the default limits for fallback.
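A minimal configuration might look like the following; field names mirror LimitConfig, omitted fields fall back to the defaults, and the values and service name here are illustrative only:

```json
{
  "System": {
    "Memory": 1073741824,
    "FD": 512,
    "Conns": 256,
    "ConnsInbound": 128,
    "Streams": 2048
  },
  "Service": {
    "example-service": {
      "StreamsInbound": 64
    }
  }
}
```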

func NewFixedLimiter

func NewFixedLimiter(conf LimitConfig) Limiter

func NewLimiterFromJSON

func NewLimiterFromJSON(in io.Reader, defaults LimitConfig) (Limiter, error)

NewLimiterFromJSON creates a new limiter by parsing a JSON configuration.

type MetricsReporter

type MetricsReporter interface {
	// AllowConn is invoked when opening a connection is allowed
	AllowConn(dir network.Direction, usefd bool)
	// BlockConn is invoked when opening a connection is blocked
	BlockConn(dir network.Direction, usefd bool)

	// AllowStream is invoked when opening a stream is allowed
	AllowStream(p peer.ID, dir network.Direction)
	// BlockStream is invoked when opening a stream is blocked
	BlockStream(p peer.ID, dir network.Direction)

	// AllowPeer is invoked when attaching a connection to a peer is allowed
	AllowPeer(p peer.ID)
	// BlockPeer is invoked when attaching a connection to a peer is blocked
	BlockPeer(p peer.ID)

	// AllowProtocol is invoked when setting the protocol for a stream is allowed
	AllowProtocol(proto protocol.ID)
	// BlockProtocol is invoked when setting the protocol for a stream is blocked
	BlockProtocol(proto protocol.ID)
	// BlockProtocolPeer is invoked when setting the protocol for a stream is blocked at the per protocol peer scope
	BlockProtocolPeer(proto protocol.ID, p peer.ID)

	// AllowService is invoked when setting the service for a stream is allowed
	AllowService(svc string)
	// BlockService is invoked when setting the service for a stream is blocked
	BlockService(svc string)
	// BlockServicePeer is invoked when setting the service for a stream is blocked at the per service peer scope
	BlockServicePeer(svc string, p peer.ID)

	// AllowMemory is invoked when a memory reservation is allowed
	AllowMemory(size int)
	// BlockMemory is invoked when a memory reservation is blocked
	BlockMemory(size int)
}

MetricsReporter is an interface for collecting metrics from resource manager actions

type Option

type Option func(*resourceManager) error

func WithAllowlistedMultiaddrs

func WithAllowlistedMultiaddrs(mas []multiaddr.Multiaddr) Option

WithAllowlistedMultiaddrs sets the multiaddrs to be in the allowlist

Example
somePeer, err := test.RandPeerID()
if err != nil {
	panic("Failed to generate somePeer")
}

limits := DefaultLimits.AutoScale()
rcmgr, err := NewResourceManager(NewFixedLimiter(limits), WithAllowlistedMultiaddrs([]multiaddr.Multiaddr{
	// Any peer connecting from this IP address
	multiaddr.StringCast("/ip4/1.2.3.4"),
	// Only the specified peer from this address
	multiaddr.StringCast("/ip4/2.2.3.4/p2p/" + somePeer.String()),
	// Only peers from this 1.2.3.0/24 IP address range
	multiaddr.StringCast("/ip4/1.2.3.0/ipcidr/24"),
}))
if err != nil {
	panic("Failed to start resource manager")
}

// Use rcmgr as before
_ = rcmgr

func WithMetrics

func WithMetrics(reporter MetricsReporter) Option

WithMetrics is a resource manager option to enable metrics collection

func WithTrace

func WithTrace(path string) Option

func WithTraceReporter

func WithTraceReporter(reporter TraceReporter) Option

type ResourceManagerStat

type ResourceManagerStat struct {
	System    network.ScopeStat
	Transient network.ScopeStat
	Services  map[string]network.ScopeStat
	Protocols map[protocol.ID]network.ScopeStat
	Peers     map[peer.ID]network.ScopeStat
}

type ResourceManagerState

type ResourceManagerState interface {
	ListServices() []string
	ListProtocols() []protocol.ID
	ListPeers() []peer.ID

	Stat() ResourceManagerStat
}

ResourceManagerState is a trait that allows you to access resource manager state.

type ResourceScopeLimiter

type ResourceScopeLimiter interface {
	Limit() Limit
	SetLimit(Limit)
}

ResourceScopeLimiter is a trait interface that allows you to access scope limits.

type ScalingLimitConfig

type ScalingLimitConfig struct {
	SystemBaseLimit     BaseLimit
	SystemLimitIncrease BaseLimitIncrease

	TransientBaseLimit     BaseLimit
	TransientLimitIncrease BaseLimitIncrease

	AllowlistedSystemBaseLimit     BaseLimit
	AllowlistedSystemLimitIncrease BaseLimitIncrease

	AllowlistedTransientBaseLimit     BaseLimit
	AllowlistedTransientLimitIncrease BaseLimitIncrease

	ServiceBaseLimit     BaseLimit
	ServiceLimitIncrease BaseLimitIncrease
	ServiceLimits        map[string]baseLimitConfig // use AddServiceLimit to modify

	ServicePeerBaseLimit     BaseLimit
	ServicePeerLimitIncrease BaseLimitIncrease
	ServicePeerLimits        map[string]baseLimitConfig // use AddServicePeerLimit to modify

	ProtocolBaseLimit     BaseLimit
	ProtocolLimitIncrease BaseLimitIncrease
	ProtocolLimits        map[protocol.ID]baseLimitConfig // use AddProtocolLimit to modify

	ProtocolPeerBaseLimit     BaseLimit
	ProtocolPeerLimitIncrease BaseLimitIncrease
	ProtocolPeerLimits        map[protocol.ID]baseLimitConfig // use AddProtocolPeerLimit to modify

	PeerBaseLimit     BaseLimit
	PeerLimitIncrease BaseLimitIncrease
	PeerLimits        map[peer.ID]baseLimitConfig // use AddPeerLimit to modify

	ConnBaseLimit     BaseLimit
	ConnLimitIncrease BaseLimitIncrease

	StreamBaseLimit     BaseLimit
	StreamLimitIncrease BaseLimitIncrease
}

ScalingLimitConfig is a struct for configuring default limits. {}BaseLimit are the limits that apply to a minimal node (128 MB of memory for libp2p and 256 file descriptors). {}LimitIncrease is the additional limit granted for every additional 1 GiB of RAM.

func (*ScalingLimitConfig) AddPeerLimit

func (cfg *ScalingLimitConfig) AddPeerLimit(p peer.ID, base BaseLimit, inc BaseLimitIncrease)

func (*ScalingLimitConfig) AddProtocolLimit

func (cfg *ScalingLimitConfig) AddProtocolLimit(proto protocol.ID, base BaseLimit, inc BaseLimitIncrease)

func (*ScalingLimitConfig) AddProtocolPeerLimit

func (cfg *ScalingLimitConfig) AddProtocolPeerLimit(proto protocol.ID, base BaseLimit, inc BaseLimitIncrease)

func (*ScalingLimitConfig) AddServiceLimit

func (cfg *ScalingLimitConfig) AddServiceLimit(svc string, base BaseLimit, inc BaseLimitIncrease)

func (*ScalingLimitConfig) AddServicePeerLimit

func (cfg *ScalingLimitConfig) AddServicePeerLimit(svc string, base BaseLimit, inc BaseLimitIncrease)

func (*ScalingLimitConfig) AutoScale

func (cfg *ScalingLimitConfig) AutoScale() LimitConfig

func (*ScalingLimitConfig) Scale

func (cfg *ScalingLimitConfig) Scale(memory int64, numFD int) LimitConfig

Scale scales up a limit configuration. memory is the amount of memory that the stack is allowed to consume; for a dedicated node it's recommended to use 1/8 of the installed system memory. If memory is smaller than 128 MB, the base configuration will be used.

type TraceEvt

type TraceEvt struct {
	Time string
	Type TraceEvtTyp

	Scope *scopeClass `json:",omitempty"`
	Name  string      `json:",omitempty"`

	Limit interface{} `json:",omitempty"`

	Priority uint8 `json:",omitempty"`

	Delta    int64 `json:",omitempty"`
	DeltaIn  int   `json:",omitempty"`
	DeltaOut int   `json:",omitempty"`

	Memory int64 `json:",omitempty"`

	StreamsIn  int `json:",omitempty"`
	StreamsOut int `json:",omitempty"`

	ConnsIn  int `json:",omitempty"`
	ConnsOut int `json:",omitempty"`

	FD int `json:",omitempty"`
}

type TraceEvtTyp

type TraceEvtTyp string
const (
	TraceStartEvt              TraceEvtTyp = "start"
	TraceCreateScopeEvt        TraceEvtTyp = "create_scope"
	TraceDestroyScopeEvt       TraceEvtTyp = "destroy_scope"
	TraceReserveMemoryEvt      TraceEvtTyp = "reserve_memory"
	TraceBlockReserveMemoryEvt TraceEvtTyp = "block_reserve_memory"
	TraceReleaseMemoryEvt      TraceEvtTyp = "release_memory"
	TraceAddStreamEvt          TraceEvtTyp = "add_stream"
	TraceBlockAddStreamEvt     TraceEvtTyp = "block_add_stream"
	TraceRemoveStreamEvt       TraceEvtTyp = "remove_stream"
	TraceAddConnEvt            TraceEvtTyp = "add_conn"
	TraceBlockAddConnEvt       TraceEvtTyp = "block_add_conn"
	TraceRemoveConnEvt         TraceEvtTyp = "remove_conn"
)

type TraceReporter

type TraceReporter interface {
	// ConsumeEvent consumes a trace event. This is called synchronously,
	// implementations should process the event quickly.
	ConsumeEvent(TraceEvt)
}
