inprocess

package
v1.3.0
Published: Oct 7, 2021 License: Apache-2.0 Imports: 35 Imported by: 0

Documentation

Overview

Package inprocess contains code for spinning up M3 resources in-process for the sake of integration testing.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func NewAggregator

func NewAggregator(yamlCfg string, opts AggregatorOptions) (resources.Aggregator, error)

NewAggregator creates a new in-process aggregator based on the configuration and options provided.
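
For example, assuming a valid aggregator YAML configuration string is available (the aggregatorYAML variable here is hypothetical; ignoring error checking):

agg, _ := NewAggregator(aggregatorYAML, AggregatorOptions{Logger: zap.NewNop()})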

func NewCluster

func NewCluster(opts ClusterOptions) (resources.M3Resources, error)

NewCluster creates a new M3 cluster based on the ClusterOptions provided. Expects at least a coordinator and a dbnode config.
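
For example, a small cluster could be spun up from raw YAML configuration strings (the coordYAML and dbnodeYAML variables here are hypothetical; ignoring error checking):

dbOpts := NewDBNodeClusterOptions()
dbOpts.Config = DBNodeClusterConfig{ConfigString: dbnodeYAML}
cluster, _ := NewCluster(ClusterOptions{
	Coordinator: CoordinatorClusterOptions{
		Config: CoordinatorClusterConfig{ConfigString: coordYAML},
	},
	DBNode: dbOpts,
})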

func NewCoordinator

func NewCoordinator(cfg config.Configuration, opts CoordinatorOptions) (resources.Coordinator, error)

NewCoordinator creates a new in-process coordinator based on the configuration and options provided. Use NewCoordinator or any of the convenience constructors (e.g. NewCoordinatorFromYAML, NewCoordinatorFromConfigFile) to get a running coordinator.

The most typical usage of this method will be in an integration test to validate some behavior. For example, assuming we have a running DB node already, we could do the following to create a new namespace and write to it (note: ignoring error checking):

coord, _ := NewCoordinatorFromYAML(defaultCoordConfig, CoordinatorOptions{})
coord.AddNamespace(admin.NamespaceAddRequest{...})
coord.WaitForNamespace(namespaceName)
coord.WriteProm("cpu", map[string]string{"host": host}, samples)

The coordinator will start up as specified in your config. However, there is some helper logic to avoid port and filesystem collisions when spinning up multiple components within the same process. If you specify GeneratePorts: true in the CoordinatorOptions, address ports in the config will be replaced with open ports.

Similarly, filepath fields will be updated to point at a temp directory that is cleaned up when the coordinator is destroyed. This should ensure that many instances of the same component can be spun up in-process without collisions.
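
For example, to have the helper logic choose open ports rather than those in the config (reusing the hypothetical defaultCoordConfig from above; ignoring error checking):

coord, _ := NewCoordinatorFromYAML(defaultCoordConfig, CoordinatorOptions{GeneratePorts: true})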

func NewCoordinatorFromConfigFile

func NewCoordinatorFromConfigFile(pathToCfg string, opts CoordinatorOptions) (resources.Coordinator, error)

NewCoordinatorFromConfigFile creates a new in-process coordinator based on the config file and options provided.

func NewCoordinatorFromYAML

func NewCoordinatorFromYAML(yamlCfg string, opts CoordinatorOptions) (resources.Coordinator, error)

NewCoordinatorFromYAML creates a new in-process coordinator based on the YAML configuration string and options provided.

func NewDBNode

func NewDBNode(cfg config.Configuration, opts DBNodeOptions) (resources.Node, error)

NewDBNode creates a new in-process DB node based on the configuration and options provided. Use NewDBNode or any of the convenience constructors (e.g. NewDBNodeFromYAML, NewDBNodeFromConfigFile) to get a running dbnode.

The most typical usage of this method will be in an integration test to validate some behavior. For example, assuming we have a valid placement available already we could do the following to read and write to a namespace (note: ignoring error checking):

dbnode, _ := NewDBNodeFromYAML(defaultDBNodeConfig, DBNodeOptions{})
dbnode.WaitForBootstrap()
dbnode.WriteTaggedPoint(&rpc.WriteTaggedRequest{...})
res, _ := dbnode.FetchTagged(&rpc.FetchTaggedRequest{...})

The dbnode will start up as specified in your config. However, there is some helper logic to avoid port and filesystem collisions when spinning up multiple components within the same process. If you specify GeneratePorts: true in the DBNodeOptions, address ports in the config will be replaced with open ports.

Similarly, filepath fields will be updated to point at a temp directory that is cleaned up when the dbnode is destroyed. This should ensure that many instances of the same component can be spun up in-process without collisions.
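
For example, to spin up two dbnodes in the same process without port or host ID collisions (reusing the hypothetical defaultDBNodeConfig from above; ignoring error checking):

opts := DBNodeOptions{GeneratePorts: true, GenerateHostID: true}
dbnode1, _ := NewDBNodeFromYAML(defaultDBNodeConfig, opts)
dbnode2, _ := NewDBNodeFromYAML(defaultDBNodeConfig, opts)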

func NewDBNodeFromConfigFile

func NewDBNodeFromConfigFile(pathToCfg string, opts DBNodeOptions) (resources.Node, error)

NewDBNodeFromConfigFile creates a new in-process DB node based on the config file and options provided.

func NewDBNodeFromYAML

func NewDBNodeFromYAML(yamlCfg string, opts DBNodeOptions) (resources.Node, error)

NewDBNodeFromYAML creates a new in-process DB node based on the YAML configuration string and options provided.

func NewEmbeddedCoordinator

func NewEmbeddedCoordinator(d *dbNode) (resources.Coordinator, error)

NewEmbeddedCoordinator creates a coordinator from one embedded within an existing DB node. This function expects the DB node to have already been started before it is called.

func NewM3Resources

func NewM3Resources(options ResourceOptions) resources.M3Resources

NewM3Resources returns an implementation of resources.M3Resources backed by in-process implementations of the M3 components.
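
For example, components constructed individually can be wrapped into a single resources.M3Resources value (this sketch assumes resources.Nodes is a slice of resources.Node; ignoring error checking):

coord, _ := NewCoordinatorFromYAML(defaultCoordConfig, CoordinatorOptions{})
dbnode, _ := NewDBNodeFromYAML(defaultDBNodeConfig, DBNodeOptions{})
m3 := NewM3Resources(ResourceOptions{
	Coordinator: coord,
	DBNodes:     resources.Nodes{dbnode},
})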

Types

type AggregatorOptions

type AggregatorOptions struct {
	// Logger is the logger to use for the in-process aggregator.
	Logger *zap.Logger
}

AggregatorOptions are options for starting an in-process aggregator.

type ClusterOptions

type ClusterOptions struct {
	// Coordinator contains cluster options for spinning up a coordinator.
	Coordinator CoordinatorClusterOptions
	// DBNode contains cluster options for spinning up dbnodes.
	DBNode DBNodeClusterOptions
}

ClusterOptions contains options for spinning up a new M3 cluster composed of in-process components.

type CoordinatorClusterConfig

type CoordinatorClusterConfig struct {
	// ConfigString contains the configuration as a raw YAML string.
	ConfigString string
	// ConfigObject contains the configuration as an inflated object.
	ConfigObject *coordinatorcfg.Configuration
}

CoordinatorClusterConfig contains the configuration for coordinators in the cluster. Exactly one of ConfigString or ConfigObject must be specified.
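
For example, configuring cluster coordinators from a raw YAML string (the coordYAML variable here is hypothetical):

cfg := CoordinatorClusterConfig{ConfigString: coordYAML}
if err := cfg.Validate(); err != nil {
	// Exactly one of ConfigString or ConfigObject must be set.
}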

func (*CoordinatorClusterConfig) ToConfig

func (c *CoordinatorClusterConfig) ToConfig() (coordinatorcfg.Configuration, error)

ToConfig generates a coordinatorcfg.Configuration object from the CoordinatorClusterConfig.

func (*CoordinatorClusterConfig) Validate

func (c *CoordinatorClusterConfig) Validate() error

Validate validates the CoordinatorClusterConfig.

type CoordinatorClusterOptions

type CoordinatorClusterOptions struct {
	// Config contains the coordinator configuration.
	Config CoordinatorClusterConfig
}

CoordinatorClusterOptions contains the options for spinning up a coordinator.

func (*CoordinatorClusterOptions) Validate

func (c *CoordinatorClusterOptions) Validate() error

Validate validates the CoordinatorClusterOptions.

type CoordinatorOptions

type CoordinatorOptions struct {
	// GeneratePorts will automatically update the config to use open ports
	// if set to true. If false, configuration is used as-is re: ports.
	GeneratePorts bool
	// Logger is the logger to use for the coordinator. If not provided,
	// a default one will be created.
	Logger *zap.Logger
}

CoordinatorOptions are options for starting a coordinator server.

type DBNodeClusterConfig

type DBNodeClusterConfig struct {
	// ConfigString contains the configuration as a raw YAML string.
	ConfigString string
	// ConfigObject contains the configuration as an inflated object.
	ConfigObject *dbcfg.Configuration
}

DBNodeClusterConfig contains the configuration for dbnodes in the cluster. Exactly one of ConfigString or ConfigObject must be specified.

func (*DBNodeClusterConfig) ToConfig

func (d *DBNodeClusterConfig) ToConfig() (dbcfg.Configuration, error)

ToConfig generates a dbcfg.Configuration object from the DBNodeClusterConfig.

func (*DBNodeClusterConfig) Validate

func (d *DBNodeClusterConfig) Validate() error

Validate validates the DBNodeClusterConfig.

type DBNodeClusterOptions

type DBNodeClusterOptions struct {
	// Config contains the dbnode configuration.
	Config DBNodeClusterConfig
	// RF is the replication factor to use for the cluster.
	RF int32
	// NumShards is the number of shards to use for each RF.
	NumShards int32
	// NumInstances is the number of dbnode instances per RF.
	NumInstances int32
	// NumIsolationGroups is the number of isolation groups to split
	// nodes into.
	NumIsolationGroups int32
}

DBNodeClusterOptions contains the cluster options for spinning up dbnodes.

func NewDBNodeClusterOptions

func NewDBNodeClusterOptions() DBNodeClusterOptions

NewDBNodeClusterOptions creates DBNodeClusterOptions with sane defaults. A DBNode config must still be provided.
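
For example, starting from the defaults and overriding only what a test needs (the dbnodeYAML variable here is hypothetical):

dbOpts := NewDBNodeClusterOptions()
dbOpts.Config = DBNodeClusterConfig{ConfigString: dbnodeYAML}
dbOpts.RF = 3
dbOpts.NumInstances = 3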

func (*DBNodeClusterOptions) Validate

func (d *DBNodeClusterOptions) Validate() error

Validate validates the DBNodeClusterOptions.

type DBNodeOptions

type DBNodeOptions struct {
	// GeneratePorts will automatically update the config to use open ports
	// if set to true. If false, configuration is used as-is re: ports.
	GeneratePorts bool
	// GenerateHostID will automatically update the host ID specified in
	// the config if set to true. If false, configuration is used as-is re: host ID.
	GenerateHostID bool
	// Logger is the logger to use for the dbnode. If not provided,
	// a default one will be created.
	Logger *zap.Logger
}

DBNodeOptions are options for starting a DB node server.

type ResourceOptions

type ResourceOptions struct {
	Coordinator resources.Coordinator
	DBNodes     resources.Nodes
}

ResourceOptions are the options for creating new resources.M3Resources.
