package config

v1.0.1-0...-fbc24b0
Published: Jun 30, 2023 License: Apache-2.0 Imports: 31 Imported by: 0

Overview

Package config contains configuration structures and related helper logic for all agent components.

Constants
const (
	TraceExportFormatSAPM   = "sapm"
	TraceExportFormatZipkin = "zipkin"
)

Variables

var (
	// EnvReplacer replaces . and - with _
	EnvReplacer = strings.NewReplacer(".", "_", "-", "_")
)
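
A minimal sketch of the substitution EnvReplacer performs, useful when mapping config keys to environment variable names (the example reconstructs the replacer locally for illustration; the exported config.EnvReplacer can be used directly):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Mirrors config.EnvReplacer: "." and "-" both become "_", which is how a
	// config key like "writer.max-requests" maps to an env var name.
	r := strings.NewReplacer(".", "_", "-", "_")
	fmt.Println(r.Replace("writer.max-requests")) // writer_max_requests
}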

Functions

func BundlePythonHomeEnvvar

func BundlePythonHomeEnvvar() string

BundlePythonHomeEnvvar returns an envvar string that sets the PYTHONHOME envvar to the bundled Python runtime. It is in a form that is ready to append to cmd.Env.
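
A hedged sketch of appending the returned "KEY=value" entry to a subprocess environment (the import path github.com/signalfx/signalfx-agent/pkg/core/config is assumed):

package main

import (
	"os"
	"os/exec"

	"github.com/signalfx/signalfx-agent/pkg/core/config" // assumed import path
)

func main() {
	cmd := exec.Command("python", "--version")
	// BundlePythonHomeEnvvar returns a ready-made "PYTHONHOME=<bundle path>"
	// entry, so it can be appended directly to cmd.Env.
	cmd.Env = append(os.Environ(), config.BundlePythonHomeEnvvar())
	_ = cmd.Run()
}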

func CallConfigure

func CallConfigure(instance, conf interface{}) error

CallConfigure will call the Configure method on an observer or monitor with a `conf` object, typed to the correct type. This allows monitors/observers to set the type of the config object to their own config and not have to worry about casting or converting.
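
A sketch of the pattern CallConfigure enables (the monitor type and its config struct below are hypothetical): Configure declares the monitor's own config type, and CallConfigure invokes it via reflection with a correctly typed value.

// Hypothetical monitor-specific config embedding the core MonitorConfig.
type fileMonitorConfig struct {
	config.MonitorConfig `yaml:",inline"`
	Path                 string `yaml:"path"`
}

type fileMonitor struct{}

// Configure takes the monitor's own config type directly -- no casting from
// a generic interface{} is needed inside the monitor.
func (m *fileMonitor) Configure(conf *fileMonitorConfig) error {
	// ... use conf.Path ...
	return nil
}

// The agent core would then do something like:
//   err := config.CallConfigure(&fileMonitor{}, &fileMonitorConfig{Path: "/tmp"})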

func ClientConfigFromWriterConfig

func ClientConfigFromWriterConfig(conf *WriterConfig) correlations.ClientConfig

func DecodeExtraConfig

func DecodeExtraConfig(in CustomConfigurable, out interface{}, strict bool) error

DecodeExtraConfig will pull out the OtherConfig values from both ObserverConfig and MonitorConfig and decode them to a struct that is provided in the `out` arg. Whether all fields have to be in 'out' is determined by the 'strict' flag. Any errors decoding will cause `out` to be nil.

func DecodeExtraConfigStrict

func DecodeExtraConfigStrict(in CustomConfigurable, out interface{}) error

DecodeExtraConfigStrict will pull out any config values from 'in' and put them on the 'out' struct, returning an error if anything in 'in' isn't in 'out'.
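
A sketch of pulling monitor-specific keys out of OtherConfig (the redisConfig struct and helper are hypothetical):

// Hypothetical destination for keys the agent core doesn't know about.
type redisConfig struct {
	Host string `yaml:"host"`
	Port uint16 `yaml:"port"`
}

func decodeRedis(mc *config.MonitorConfig) (*redisConfig, error) {
	var rc redisConfig
	// strict=true would instead error on keys in mc.OtherConfig that have
	// no corresponding field in redisConfig.
	if err := config.DecodeExtraConfig(mc, &rc, false); err != nil {
		return nil, err
	}
	return &rc, nil
}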

func FillInConfigTemplate

func FillInConfigTemplate(embeddedFieldName string, configTemplate interface{}, conf CustomConfigurable) error

FillInConfigTemplate takes a config template value that a monitor/observer provided and fills it in dynamically from the provided conf

func LoadConfig

func LoadConfig(ctx context.Context, configPath string) (<-chan *Config, error)

LoadConfig handles loading the main config file and recursively rendering any dynamic values in the config. If the watch interval is 0, the config will be loaded once and sent to the returned channel, after which the channel will be closed. Otherwise, the returned channel will remain open and will be sent any config updates.
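
A sketch of consuming the returned channel (the config path and applyConfig helper are illustrative):

func watchConfig(ctx context.Context) error {
	configs, err := config.LoadConfig(ctx, "/etc/signalfx/agent.yaml")
	if err != nil {
		return err
	}
	// The first value is the initial config; if watching is enabled, the
	// channel stays open and delivers re-rendered configs on changes.
	for cfg := range configs {
		applyConfig(cfg) // hypothetical helper
	}
	return nil
}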

func ToString

func ToString(conf interface{}) string

ToString converts a config struct to a pseudo-yaml text output. If a struct field has the 'neverLog' tag, its value will be replaced by asterisks, or completely omitted if the tag value is 'omit'.
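
For example, given a hypothetical struct using both tag forms:

type dbConfig struct {
	User     string      `yaml:"user"`
	Password string      `yaml:"password" neverLog:"true"` // rendered as asterisks
	Scratch  interface{} `yaml:"scratch" neverLog:"omit"`  // dropped from output entirely
}

// config.ToString(&dbConfig{User: "admin", Password: "hunter2"}) would print
// the user but mask the password and omit the scratch value.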

Types

type AdditionalConfig

type AdditionalConfig map[string]interface{}

AdditionalConfig is the type that should be used for any "catch-all" config fields in a monitor/observer. That field should be marked as `yaml:",inline"`. It will receive special handling when config is rendered to merge all values from multiple decoding rounds.
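
A sketch of the declared usage (the surrounding struct is hypothetical):

// Hypothetical config with one declared field plus a catch-all that absorbs
// any remaining YAML keys, merged across multiple decoding rounds.
type pluginConfig struct {
	Name  string                  `yaml:"name"`
	Other config.AdditionalConfig `yaml:",inline"`
}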

type CollectdConfig

type CollectdConfig struct {
	// If you won't be using any collectd monitors, this can be set to true to
	// prevent collectd from pre-initializing
	DisableCollectd bool `yaml:"disableCollectd" default:"false"`
	// How many read intervals before abandoning a metric. Doesn't affect much
	// in normal usage.
	// See [Timeout](https://collectd.org/documentation/manpages/collectd.conf.5.shtml#timeout_iterations).
	Timeout int `yaml:"timeout" default:"40"`
	// Number of threads dedicated to executing read callbacks. See
	// [ReadThreads](https://collectd.org/documentation/manpages/collectd.conf.5.shtml#readthreads_num)
	ReadThreads int `yaml:"readThreads" default:"5"`
	// Number of threads dedicated to writing value lists to write callbacks.
	// This should be much less than readThreads because writing is batched in
	// the write_http plugin that writes back to the agent.
	// See [WriteThreads](https://collectd.org/documentation/manpages/collectd.conf.5.shtml#writethreads_num).
	WriteThreads int `yaml:"writeThreads" default:"2"`
	// The maximum numbers of values in the queue to be written back to the
	// agent from collectd.  Since the values are written to a local socket
	// that the agent exposes, there should be almost no queuing and the
	// default should be more than sufficient. See
	// [WriteQueueLimitHigh](https://collectd.org/documentation/manpages/collectd.conf.5.shtml#writequeuelimithigh_highnum)
	WriteQueueLimitHigh int `yaml:"writeQueueLimitHigh" default:"500000"`
	// The lowest number of values in the collectd queue before which metrics
	// begin being randomly dropped.  See
	// [WriteQueueLimitLow](https://collectd.org/documentation/manpages/collectd.conf.5.shtml#writequeuelimitlow_lownum)
	WriteQueueLimitLow int `yaml:"writeQueueLimitLow" default:"400000"`
	// Collectd's log level -- info, notice, warning, or err
	LogLevel string `yaml:"logLevel" default:"notice"`
	// A default read interval for collectd plugins.  If zero or undefined,
	// will default to the global agent interval.  Some collectd python
	// monitors do not support overriding the interval at the monitor level,
	// but this setting will apply to them.
	IntervalSeconds int `yaml:"intervalSeconds" default:"0"`
	// The local IP address of the server that the agent exposes to which
	// collectd will send metrics.  This defaults to an arbitrary address in
	// the localhost subnet, but can be overridden if needed.
	WriteServerIPAddr string `yaml:"writeServerIPAddr" default:"127.9.8.7"`
	// The port of the agent's collectd metric sink server.  If set to zero
	// (the default) it will allow the OS to assign it a free port.
	WriteServerPort uint16 `yaml:"writeServerPort" default:"0"`
	// This is where the agent will write the collectd config files that it
	// manages.  If you have secrets in those files, consider setting this to a
	// path on a tmpfs mount.  The files in this directory should be considered
	// transient -- there is no value in editing them by hand.  If you want to
	// add your own collectd config, see the collectd/custom monitor.
	ConfigDir string `yaml:"configDir" default:"/var/run/signalfx-agent/collectd"`

	// The following are propagated from the top-level config
	BundleDir            string `yaml:"-"`
	HasGenericJMXMonitor bool   `yaml:"-"`
	// Assigned by manager, not by user
	InstanceName string `yaml:"-"`
	// A hack to allow custom collectd to easily specify a single monitorID via
	// query parameter
	WriteServerQuery string `yaml:"-"`
}

CollectdConfig holds high-level configuration for the collectd subprocess managed by the agent.
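
A sketch of decoding a partial collectd block (the import path is assumed; note that a raw yaml.Unmarshal sets only the listed fields -- the struct-tag defaults above are applied by the agent's own defaulting logic):

package main

import (
	"fmt"

	"gopkg.in/yaml.v2"

	"github.com/signalfx/signalfx-agent/pkg/core/config" // assumed import path
)

func main() {
	raw := `
readThreads: 10
logLevel: info
configDir: /run/signalfx-agent/collectd
`
	var cc config.CollectdConfig
	if err := yaml.Unmarshal([]byte(raw), &cc); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cc)
}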

func (*CollectdConfig) ConfigFilePath

func (cc *CollectdConfig) ConfigFilePath() string

ConfigFilePath returns the path where collectd should render its main config file.

func (*CollectdConfig) Hash

func (cc *CollectdConfig) Hash() uint64

Hash calculates a unique hash value for this config struct

func (*CollectdConfig) InstanceConfigDir

func (cc *CollectdConfig) InstanceConfigDir() string

InstanceConfigDir is the directory underneath the ConfigDir that is specific to this collectd instance.

func (*CollectdConfig) ManagedConfigDir

func (cc *CollectdConfig) ManagedConfigDir() string

ManagedConfigDir returns the dir path where all monitor config should go.

func (*CollectdConfig) Validate

func (cc *CollectdConfig) Validate() error

Validate the collectd specific config

func (*CollectdConfig) WriteServerURL

func (cc *CollectdConfig) WriteServerURL() string

WriteServerURL is the local address served by the agent where collectd should write datapoints.

type Config

type Config struct {
	// The access token for the org that should receive the metrics emitted by
	// the agent.
	SignalFxAccessToken string `yaml:"signalFxAccessToken" neverLog:"true"`
	// The URL of SignalFx ingest server.  Should be overridden if using the
	// SignalFx Gateway.  If not set, this will be determined by the
	// `signalFxRealm` option below.  If you want to send trace spans to a
	// different location, set the `traceEndpointUrl` option.  If you want to
	// send events to a different location, set the `eventEndpointUrl` option.
	IngestURL string `yaml:"ingestUrl"`
	// The full URL (including path) to the event ingest server.  If this is
	// not set, all events will be sent to the same place as `ingestUrl`
	// above.
	EventEndpointURL string `yaml:"eventEndpointUrl"`
	// The full URL (including path) to the trace ingest server.  If this is
	// not set, all trace spans will be sent to the same place as `ingestUrl`
	// above.
	TraceEndpointURL string `yaml:"traceEndpointUrl"`
	// The SignalFx API base URL.  If not set, this will be determined by the
	// `signalFxRealm` option below.
	APIURL string `yaml:"apiUrl"`
	// The SignalFx Realm that the organization you want to send to is a part
	// of.  This defaults to the original realm (`us0`) but if you are setting
	// up the agent for the first time, you quite likely need to change this.
	SignalFxRealm string `yaml:"signalFxRealm" default:"us0"`
	// The hostname that will be reported as the `host` dimension. If blank,
	// this will be auto-determined by the agent based on a reverse lookup of
	// the machine's IP address.
	Hostname string `yaml:"hostname"`
	// If true (the default), and the `hostname` option is not set, the
	// hostname will be determined by doing a reverse DNS query on the IP
	// address that is returned by querying for the bare hostname.  This is
	// useful in cases where the hostname reported by the kernel is a short
	// name. (**default**: `true`)
	UseFullyQualifiedHost *bool `yaml:"useFullyQualifiedHost" noDefault:"true"`
	// Our standard agent model is to collect metrics for services running on
	// the same host as the agent.  Therefore, host-specific dimensions (e.g.
	// `host`, `AWSUniqueId`, etc) are automatically added to every datapoint
	// that is emitted from the agent by default.  Set this to true if you are
	// using the agent primarily to monitor things on other hosts.  You can set
	// this option at the monitor level as well.
	DisableHostDimensions bool `yaml:"disableHostDimensions" default:"false"`
	// How often to send metrics to SignalFx.  Monitors can override this
	// individually.
	IntervalSeconds int `yaml:"intervalSeconds" default:"10"`
	// This flag sets the HTTP timeout duration for metadata queries from AWS, Azure and GCP.
	// This should be a duration string that is accepted by https://golang.org/pkg/time/#ParseDuration
	CloudMetadataTimeout timeutil.Duration `yaml:"cloudMetadataTimeout" default:"2s"`
	// Dimensions (key:value pairs) that will be added to every datapoint emitted by the agent.
	// To specify that all metrics should be high-resolution, add the dimension `sf_hires: 1`
	GlobalDimensions map[string]string `yaml:"globalDimensions" default:"{}"`
	// Tags (key:value pairs) that will be added to every span emitted by the agent.
	GlobalSpanTags map[string]string `yaml:"globalSpanTags" default:"{}"`
	// The logical environment/cluster that this agent instance is running in.
	// All of the services that this instance monitors should be in the same
	// environment as well. This value, if provided, will be synced as a
	// property onto the `host` dimension, or onto any cloud-provided specific
	// dimensions (`AWSUniqueId`, `gcp_id`, and `azure_resource_id`) when
	// available. Example values: "prod-usa", "dev"
	Cluster string `yaml:"cluster"`
	// If true, force syncing of the `cluster` property on the `host` dimension,
	// even when cloud-specific dimensions are present.
	SyncClusterOnHostDimension bool `yaml:"syncClusterOnHostDimension"`
	// If true, a warning will be emitted if a discovery rule contains
	// variables that can never possibly match.  If using multiple
	// observers, it is convenient to set this to false to suppress spurious
	// errors.
	ValidateDiscoveryRules *bool `yaml:"validateDiscoveryRules" default:"false"`
	// A list of observers to use (see observer config)
	Observers []ObserverConfig `yaml:"observers" default:"[]"`
	// A list of monitors to use (see monitor config)
	Monitors []MonitorConfig `yaml:"monitors" default:"[]"`
	// Configuration of the datapoint/event writer
	Writer WriterConfig `yaml:"writer"`
	// Log configuration
	Logging LogConfig `yaml:"logging" default:"{}"`
	// Configuration of the managed collectd subprocess
	Collectd CollectdConfig `yaml:"collectd" default:"{}"`
	// This must be unset or explicitly set to true. In prior versions of the
	// agent, there was a filtering mechanism that relied heavily on an
	// external whitelist.json file to determine which metrics were sent by
	// default.  This is all inherent to the agent now and the old style of
	// filtering is no longer available.
	EnableBuiltInFiltering *bool `yaml:"enableBuiltInFiltering" default:"true"`
	// A list of metric filters that will include metrics.  These
	// filters take priority over the filters specified in `metricsToExclude`.
	MetricsToInclude []MetricFilter `yaml:"metricsToInclude" default:"[]"`
	// A list of metric filters
	MetricsToExclude []MetricFilter `yaml:"metricsToExclude" default:"[]"`
	// A list of properties filters
	PropertiesToExclude []PropertyFilterConfig `yaml:"propertiesToExclude" default:"[]"`

	// The host on which the internal status server will listen.  The internal
	// status HTTP server serves internal metrics and diagnostic information
	// about the agent and can be scraped by the `internal-metrics` monitor.
	// Can be set to `0.0.0.0` if you want to monitor the agent from another
	// host.  If you set this to blank/null, the internal status server will
	// not be started.  See `internalStatusPort`.
	InternalStatusHost string `yaml:"internalStatusHost" default:"localhost"`
	// The port on which the internal status server will listen.  See
	// `internalStatusHost`.
	InternalStatusPort uint16 `yaml:"internalStatusPort" default:"8095"`

	// Enables Go pprof endpoint on port 6060 that serves profiling data for
	// development
	EnableProfiling bool `yaml:"profiling" default:"false"`
	// The host/ip address for the pprof profile server to listen on.
	// `profiling` must be enabled for this to have any effect.
	ProfilingHost string `yaml:"profilingHost" default:"127.0.0.1"`
	// The port for the pprof profile server to listen on. `profiling` must be
	// enabled for this to have any effect.
	ProfilingPort int `yaml:"profilingPort" default:"6060"`
	// Path to the directory holding the agent dependencies.  This will
	// normally be derived automatically. Overrides the envvar
	// SIGNALFX_BUNDLE_DIR if set.
	BundleDir string `yaml:"bundleDir"`
	// This exists purely to give the user a place to put common yaml values to
	// reference in other parts of the config file.
	Scratch interface{} `yaml:"scratch" neverLog:"omit"`
	// Configuration of remote config stores
	Sources sources.SourceConfig `yaml:"configSources"`
	// Path to the host's `/proc` filesystem.
	// This is useful for containerized environments.
	ProcPath string `yaml:"procPath" default:"/proc"`
	// Path to the host's `/etc` directory.
	// This is useful for containerized environments.
	EtcPath string `yaml:"etcPath" default:"/etc"`
	// Path to the host's `/var` directory.
	// This is useful for containerized environments.
	VarPath string `yaml:"varPath" default:"/var"`
	// Path to the host's `/run` directory.
	// This is useful for containerized environments.
	RunPath string `yaml:"runPath" default:"/run"`
	// Path to the host's `/sys` directory.
	// This is useful for containerized environments.
	SysPath string `yaml:"sysPath" default:"/sys"`
}

Config is the top-level config struct for configurations that are common to all platforms.
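
As a sketch, a minimal agent config exercising a few of the fields above (field names follow the yaml struct tags on Config; the monitor types and values are illustrative):

const minimalAgentYAML = `
signalFxAccessToken: ${SFX_ACCESS_TOKEN}
signalFxRealm: us1
intervalSeconds: 10
globalDimensions:
  env: prod-usa
monitors:
  - type: cpu
  - type: memory
`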

type CustomConfigurable

type CustomConfigurable interface {
	ExtraConfig() (map[string]interface{}, error)
}

CustomConfigurable should be implemented by config structs that have the concept of generic other config that is initially deserialized into a map[string]interface{} to be later transformed to another form.

type ExtraMetrics

type ExtraMetrics interface {
	GetExtraMetrics() []string
}

ExtraMetrics is an interface for monitors that support generating additional metrics to allow through.

type LogConfig

type LogConfig struct {
	// Valid levels include `debug`, `info`, `warn`, `error`.  Note that
	// `debug` logging may leak sensitive configuration (e.g. passwords) to the
	// agent output.
	Level string `yaml:"level" default:"info"`
	// The log output format to use.  Valid values are: `text`, `json`.
	Format string `yaml:"format" validate:"oneof=text json" default:"text"`
}

LogConfig contains configuration related to logging

func (*LogConfig) LogrusFormatter

func (lc *LogConfig) LogrusFormatter() log.Formatter

LogrusFormatter returns the formatter to use based on the config

func (*LogConfig) LogrusLevel

func (lc *LogConfig) LogrusLevel() *log.Level

LogrusLevel returns a logrus log level based on the configured level in LogConfig.

type MetricFilter

type MetricFilter struct {
	// A map of dimension key/values to match against.  All key/values must
	// match a datapoint for it to be matched.  The map values can be either a
	// single string or a list of strings.
	Dimensions map[string]interface{} `yaml:"dimensions" default:"{}"`
	// A list of metric names to match against
	MetricNames []string `yaml:"metricNames"`
	// A single metric name to match against
	MetricName string `yaml:"metricName"`
	// (**Only applicable for the top level filters**) Limits the scope of the
	// filter to datapoints from a specific monitor. If specified, any
	// datapoints not from this monitor type will never match against this
	// filter.
	MonitorType string `yaml:"monitorType"`
	// (**Only applicable for the top level filters**) Negates the result of
	// the match so that it matches all datapoints that do NOT match the metric
	// name and dimension values given. This does not negate monitorType, if
	// given.
	Negated bool `yaml:"negated"`
}

MetricFilter describes a set of subtractive filters applied to datapoints right before they are sent.
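
A sketch of a top-level exclude filter using these fields (the metric name, dimension values, and monitor type are illustrative):

// Dimension values may be a single string or a list of strings.
const metricsToExcludeYAML = `
metricsToExclude:
  - monitorType: docker-container-stats
    metricNames:
      - container_cpu_utilization
    dimensions:
      container_name:
        - redis
        - memcached
`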

func AddOrMerge

func AddOrMerge(mtes []MetricFilter, mf2 MetricFilter) []MetricFilter

AddOrMerge MetricFilter to list or merge with existing MetricFilter

func (*MetricFilter) MakeFilter

func (mf *MetricFilter) MakeFilter() (dpfilters.DatapointFilter, error)

MakeFilter returns an actual filter instance from the config

func (*MetricFilter) MergeWith

func (mf *MetricFilter) MergeWith(mf2 MetricFilter) MetricFilter

MergeWith merges mf2's MetricFilter.MetricNames into receiver mf MetricFilter.MetricNames

func (*MetricFilter) Normalize

func (mf *MetricFilter) Normalize() (map[string][]string, error)

Normalize puts any singular metricName into the metricNames list and also returns a normalized dimension set.

func (*MetricFilter) ShouldMerge

func (mf *MetricFilter) ShouldMerge(mf2 MetricFilter) bool

ShouldMerge checks whether mf2 should be merged into the receiver mf. Filters with the same monitorType, negation, and dimensions should be merged.
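
A sketch of the merge behavior (metric names are illustrative): because both filters share the same monitorType, negation, and dimensions, AddOrMerge folds the second filter's metric names into the first instead of appending a new entry.

filters := []config.MetricFilter{
	{MonitorType: "cpu", MetricNames: []string{"cpu.idle"}},
}
// ShouldMerge is true here, so the result is still a single filter whose
// MetricNames now contains both "cpu.idle" and "cpu.nice".
filters = config.AddOrMerge(filters, config.MetricFilter{
	MonitorType: "cpu",
	MetricNames: []string{"cpu.nice"},
})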

type MonitorConfig

type MonitorConfig struct {
	// The type of the monitor
	Type string `yaml:"type" json:"type"`
	// The rule used to match up this configuration with a discovered endpoint.
	// If blank, the configuration will be run immediately when the agent is
	// started.  If multiple endpoints match this rule, multiple instances of
	// the monitor type will be created with the same configuration (except
	// different host/port).
	DiscoveryRule string `yaml:"discoveryRule" json:"discoveryRule"`
	// If true, a warning will be emitted if a discovery rule contains
	// variables that can never possibly match.  If using multiple
	// observers, it is convenient to set this to false to suppress spurious
	// errors.  The top-level setting `validateDiscoveryRules` acts as a
	// default if this isn't set.
	ValidateDiscoveryRule *bool `yaml:"validateDiscoveryRule"`
	// A set of extra dimensions (key:value pairs) to include on datapoints emitted by the
	// monitor(s) created from this configuration. To specify metrics from this
	// monitor should be high-resolution, add the dimension `sf_hires: 1`
	ExtraDimensions map[string]string `yaml:"extraDimensions" json:"extraDimensions"`
	// A set of extra span tags (key:value pairs) to include on spans emitted by the
	// monitor(s) created from this configuration.
	ExtraSpanTags map[string]string `yaml:"extraSpanTags" json:"extraSpanTags"`
	// A mapping of extra span tag names to a [discovery rule
	// expression](https://docs.splunk.com/observability/gdi/smart-agent/smart-agent-resources.html#service-discovery-using-the-smart-agent)
	// that is used to derive the value of the span tag.  For example, to use
	// a certain container label as a span tag, you could use something like this
	// in your monitor config block: `extraSpanTagsFromEndpoint: {env: 'Get(container_labels, "myapp.com/environment")'}`.
	// This only applies when the monitor has a `discoveryRule` or was
	// dynamically instantiated by an endpoint. It does nothing, for example,
	// in the `signalfx-forwarder` monitor.
	ExtraSpanTagsFromEndpoint map[string]string `yaml:"extraSpanTagsFromEndpoint" json:"extraSpanTagsFromEndpoint"`
	// A set of default span tags (key:value pairs) to include on spans emitted by the
	// monitor(s) created from this configuration.
	DefaultSpanTags map[string]string `yaml:"defaultSpanTags" json:"defaultSpanTags"`
	// A mapping of default span tag names to a [discovery rule
	// expression](https://docs.splunk.com/observability/gdi/smart-agent/smart-agent-resources.html#service-discovery-using-the-smart-agent)
	// that is used to derive the default value of the span tag.  For example, to use
	// a certain container label as a span tag, you could use something like this
	// in your monitor config block: `defaultSpanTagsFromEndpoint: {env: 'Get(container_labels, "myapp.com/environment")'}`
	// This only applies when the monitor has a `discoveryRule` or was
	// dynamically instantiated by an endpoint. It does nothing, for example,
	// in the `signalfx-forwarder` monitor.
	DefaultSpanTagsFromEndpoint map[string]string `yaml:"defaultSpanTagsFromEndpoint" json:"defaultSpanTagsFromEndpoint"`
	// A mapping of extra dimension names to a [discovery rule
	// expression](https://docs.splunk.com/observability/gdi/smart-agent/smart-agent-resources.html#service-discovery-using-the-smart-agent)
	// that is used to derive the value of the dimension.  For example, to use
	// a certain container label as a dimension, you could use something like this
	// in your monitor config block: `extraDimensionsFromEndpoint: {env: 'Get(container_labels, "myapp.com/environment")'}`.
	// This only applies when the monitor has a `discoveryRule` or was
	// dynamically instantiated by an endpoint. It does nothing, for example,
	// in the `signalfx-forwarder` monitor.
	ExtraDimensionsFromEndpoint map[string]string `yaml:"extraDimensionsFromEndpoint" json:"extraDimensionsFromEndpoint"`
	// A set of mappings from a configuration option on this monitor to
	// attributes of a discovered endpoint.  The keys are the config option on
	// this monitor and the value can be any valid expression used in discovery
	// rules.
	ConfigEndpointMappings map[string]string `yaml:"configEndpointMappings" json:"configEndpointMappings"`
	// The interval (in seconds) at which to emit datapoints from the
	// monitor(s) created by this configuration.  If not set (or set to 0), the
	// global agent intervalSeconds config option will be used instead.
	IntervalSeconds int `yaml:"intervalSeconds" json:"intervalSeconds"`
	// If one or more configurations have this set to true, only those
	// configurations will be considered. This setting can be useful for testing.
	Solo bool `yaml:"solo" json:"solo"`
	// A list of datapoint filters.  These filters allow you to comprehensively
	// define which datapoints to exclude by metric name or dimension set, as
	// well as the ability to define overrides to re-include metrics excluded
	// by previous patterns within the same filter item.  See [monitor
	// filtering](./filtering.md#additional-monitor-level-filtering)
	// for examples and more information.
	DatapointsToExclude []MetricFilter `yaml:"datapointsToExclude" json:"datapointsToExclude" default:"[]"`
	// Some monitors pull metrics from services not running on the same host
	// and should not get the host-specific dimensions set on them (e.g.
	// `host`, `AWSUniqueId`, etc).  Setting this to `true` causes those
	// dimensions to be omitted.  You can disable this globally with the
	// `disableHostDimensions` option on the top level of the config.
	DisableHostDimensions bool `yaml:"disableHostDimensions" json:"disableHostDimensions" default:"false"`
	// This can be set to true if you don't want to include the dimensions that
	// are specific to the endpoint that was discovered by an observer.  This
	// is useful when you have an endpoint whose identity is not particularly
	// important since it acts largely as a proxy or adapter for other metrics.
	DisableEndpointDimensions bool `yaml:"disableEndpointDimensions" json:"disableEndpointDimensions"`
	// A map from _original_ metric name to a replacement value.  The keys are
	// interpreted as regular expressions and the values can contain
	// backreferences. This means that you should escape any RE characters in
	// the original metric name with `\` (the most common escape necessary will
	// be `\.`, as an unescaped period matches any single character).  The
	// [Go regexp language](https://github.com/google/re2/wiki/Syntax) is used,
	// and backreferences are of the form `$1`.
	// If there are multiple entries in the list of maps, they will each be run
	// in sequence, using the transformation from the previous entry as the
	// input to the subsequent transformation.
	// To add a common prefix to all metrics coming out of a monitor, use a
	// mapping like this: `(.*): myprefix.$1`
	MetricNameTransformations yaml.MapSlice `yaml:"metricNameTransformations"`
	// A map from dimension names emitted by the monitor to the desired
	// dimension name that will be emitted in the datapoint that goes to
	// SignalFx.  This can be useful if you have custom metrics from your
	// applications and want to make the dimensions from a monitor match those.
	// Also can be useful when scraping free-form metrics, say with the
	// `prometheus-exporter` monitor.  Right now, only static key/value
	// transformations are supported.  Note that filtering by dimensions will
	// be done on the *original* dimension name and not the new name. Note that
	// it is possible to remove unwanted dimensions via this configuration, by
	// making the desired dimension name an empty string.
	DimensionTransformations map[string]string `yaml:"dimensionTransformations" json:"dimensionTransformations"`
	// Extra metrics to enable besides the default included ones.  This is an
	// [overridable filter](https://docs.splunk.com/observability/gdi/smart-agent/smart-agent-resources.html#filtering-data-using-the-smart-agent).
	ExtraMetrics []string `yaml:"extraMetrics" json:"extraMetrics"`
	// Extra metric groups to enable in addition to the metrics that are
	// emitted by default.  A metric group is simply a collection of metrics,
	// and they are defined in each monitor's documentation.
	ExtraGroups []string `yaml:"extraGroups" json:"extraGroups"`
	// OtherConfig is everything else that is custom to a particular monitor
	OtherConfig map[string]interface{} `yaml:",inline" neverLog:"omit"`
	Hostname    string                 `yaml:"-" json:"-"`
	ProcPath    string                 `yaml:"-" json:"-"`
	// ValidationError is where a message concerning validation issues can go
	// so that diagnostics can output it.
	ValidationError string          `yaml:"-" json:"-" hash:"ignore"`
	MonitorID       types.MonitorID `yaml:"-" hash:"ignore"`
}

MonitorConfig is used to configure monitor instances. One instance of MonitorConfig may be used to configure multiple monitor instances. If a monitor's discovery rule does not match any discovered services, the monitor will not run.
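
A sketch of a dynamic monitor block using several of these fields (the monitor type, discovery rule, and metric name are illustrative):

const monitorYAML = `
monitors:
  - type: collectd/redis
    discoveryRule: container_image =~ "redis" && port == 6379
    intervalSeconds: 30
    extraDimensions:
      service: cache
    datapointsToExclude:
      - metricNames:
          - gauge.slave_repl_offset
`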

func (*MonitorConfig) BundleDir

func (mc *MonitorConfig) BundleDir() string

BundleDir returns the path to the agent's bundle directory.

func (*MonitorConfig) Equals

func (mc *MonitorConfig) Equals(other *MonitorConfig) bool

Equals tests if two monitor configs are sufficiently equal to each other. Two monitors should only be equal if it doesn't make sense for two configurations to be active at the same time.

func (*MonitorConfig) ExtraConfig

func (mc *MonitorConfig) ExtraConfig() (map[string]interface{}, error)

ExtraConfig returns generic config as a map

func (*MonitorConfig) FilterSet

func (mc *MonitorConfig) FilterSet() (*dpfilters.FilterSet, error)

FilterSet makes a filter set using the new filter style.

func (*MonitorConfig) HasAutoDiscovery

func (mc *MonitorConfig) HasAutoDiscovery() bool

HasAutoDiscovery returns whether the monitor is static (i.e. doesn't rely on autodiscovered services and is manually configured) or dynamic.

func (*MonitorConfig) Hash

func (mc *MonitorConfig) Hash() uint64

Hash calculates a unique hash value for this config struct

func (*MonitorConfig) IsCollectdBased

func (mc *MonitorConfig) IsCollectdBased() bool

IsCollectdBased returns whether this monitor type depends on the collectd subprocess to run.

func (*MonitorConfig) MetricNameExprs

func (mc *MonitorConfig) MetricNameExprs() ([]*RegexpWithReplace, error)

func (*MonitorConfig) MonitorConfigCore

func (mc *MonitorConfig) MonitorConfigCore() *MonitorConfig

MonitorConfigCore provides a way of getting the MonitorConfig when embedded in a struct that is referenced through a more generic interface.

func (*MonitorConfig) ShouldValidateDiscoveryRule

func (mc *MonitorConfig) ShouldValidateDiscoveryRule() bool

ShouldValidateDiscoveryRule returns ValidateDiscoveryRule, or false if that is nil.

func (*MonitorConfig) Validate

func (mc *MonitorConfig) Validate() error

Validate ensures the config is correct beyond what basic YAML parsing ensures

type MonitorCustomConfig

type MonitorCustomConfig interface {
	MonitorConfigCore() *MonitorConfig
}

MonitorCustomConfig represents monitor-specific configuration that doesn't appear in the MonitorConfig struct.
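
A sketch of satisfying this interface by embedding MonitorConfig (the struct below is hypothetical); the embedded type's MonitorConfigCore method is promoted, so no extra code is needed:

type redisMonitorConfig struct {
	config.MonitorConfig `yaml:",inline"`
	Host                 string `yaml:"host"`
	Port                 uint16 `yaml:"port"`
}

// redisMonitorConfig satisfies MonitorCustomConfig via the promoted
// MonitorConfigCore method on the embedded config.MonitorConfig.
var _ config.MonitorCustomConfig = &redisMonitorConfig{}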

type ObserverConfig

type ObserverConfig struct {
	// The type of the observer
	Type        string                 `yaml:"type,omitempty"`
	OtherConfig map[string]interface{} `yaml:",inline" default:"{}"`
}

ObserverConfig holds the configuration for an observer

func (*ObserverConfig) ExtraConfig

func (oc *ObserverConfig) ExtraConfig() (map[string]interface{}, error)

ExtraConfig returns generic config as a map

type PropertyFilterConfig

type PropertyFilterConfig struct {
	// A single property name to match
	PropertyName *string `yaml:"propertyName" default:"*"`
	// A property value to match
	PropertyValue *string `yaml:"propertyValue" default:"*"`
	// A dimension name to match
	DimensionName *string `yaml:"dimensionName" default:"*"`
	// A dimension value to match
	DimensionValue *string `yaml:"dimensionValue" default:"*"`
}

PropertyFilterConfig describes a set of subtractive filters applied to properties used to create a PropertyFilter
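
A sketch of an entry (the property and dimension names are illustrative); any field left unset keeps its `*` wildcard default:

const propertiesToExcludeYAML = `
propertiesToExclude:
  - propertyName: pod-template-hash
    dimensionName: kubernetes_pod_uid
`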

func (*PropertyFilterConfig) MakePropertyFilter

func (pfc *PropertyFilterConfig) MakePropertyFilter() (propfilters.DimensionFilter, error)

MakePropertyFilter returns an actual filter instance from the config

type RegexpWithReplace

type RegexpWithReplace struct {
	Regexp      *regexp.Regexp
	Replacement string
}

type SplunkConfig

type SplunkConfig struct {
	// Enable logging to a Splunk Enterprise instance
	Enabled bool `yaml:"enabled"`
	// Full URL (including path) of Splunk HTTP Event Collector (HEC) endpoint
	URL string `yaml:"url"`
	// Splunk HTTP Event Collector token
	Token string `yaml:"token"`
	// Splunk source field value, description of the source of the event
	MetricsSource string `yaml:"source"`
	// Splunk source type, optional name of a sourcetype field value
	MetricsSourceType string `yaml:"sourceType"`
	// Splunk index, optional name of the Splunk index to store the event in
	MetricsIndex string `yaml:"index"`
	// Splunk index, specifically for traces (must be event type)
	EventsIndex string `yaml:"eventsIndex"`
	// Splunk source field value, description of the source of the trace
	EventsSource string `yaml:"eventsSource"`
	// Splunk trace source type, optional name of a sourcetype field value
	EventsSourceType string `yaml:"eventsSourceType"`
	// Skip verifying the certificate of the HTTP Event Collector
	SkipTLSVerify bool `yaml:"skipTLSVerify"`

	// The maximum number of Splunk log entries of all types (e.g. metric,
	// event) to be buffered before old events are dropped.  Defaults to the
	// writer.maxDatapointsBuffered config if not specified.
	MaxBuffered int `yaml:"maxBuffered"`
	// The maximum number of simultaneous requests to the Splunk HEC endpoint.
	// Defaults to the writer.maxBuffered config if not specified.
	MaxRequests int `yaml:"maxRequests"`
	// The maximum number of Splunk log entries to submit in one request to the
	// HEC
	MaxBatchSize int `yaml:"maxBatchSize"`
}

SplunkConfig configures the writer specifically writing to Splunk.
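
A sketch of enabling the Splunk output under the writer config (the URL, token placeholder, and index names are illustrative):

const splunkWriterYAML = `
writer:
  splunk:
    enabled: true
    url: https://splunk.example.com:8088/services/collector
    token: ${SPLUNK_HEC_TOKEN}
    index: metrics
    eventsIndex: traces
`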

type StoreConfig

type StoreConfig struct {
	OtherConfig map[string]interface{} `yaml:",inline,omitempty" default:"{}"`
}

StoreConfig holds configuration related to config stores (e.g. filesystem, zookeeper, etc)

func (*StoreConfig) ExtraConfig

func (sc *StoreConfig) ExtraConfig() map[string]interface{}

ExtraConfig returns generic config as a map

type WriterConfig

type WriterConfig struct {
	// The maximum number of datapoints to include in a batch before sending the
	// batch to the ingest server.  Smaller batch sizes than this will be sent
	// if datapoints originate in smaller chunks.
	DatapointMaxBatchSize int `yaml:"datapointMaxBatchSize" default:"1000"`
	// The maximum number of datapoints that are allowed to be buffered in the
	// agent (i.e. received from a monitor but have not yet received
	// confirmation of successful receipt by the target ingest/gateway server
	// downstream).  Any datapoints that come in beyond this number will
	// overwrite existing datapoints if they have not been sent yet, starting
	// with the oldest.
	MaxDatapointsBuffered int `yaml:"maxDatapointsBuffered" default:"25000"`
	// The analogue of `datapointMaxBatchSize` for trace spans.
	TraceSpanMaxBatchSize int `yaml:"traceSpanMaxBatchSize" default:"1000"`
	// Format to export traces in. Choices are "zipkin" and "sapm"
	TraceExportFormat string `yaml:"traceExportFormat" default:"zipkin"`
	// Deprecated: use `maxRequests` instead.
	DatapointMaxRequests int `yaml:"datapointMaxRequests"`
	// The maximum number of concurrent requests to make to a single ingest server
	// with datapoints/events/trace spans.  This number multiplied by
	// `datapointMaxBatchSize` is more or less the maximum number of datapoints
	// that can be "in-flight" at any given time.  Same thing for the
	// `traceSpanMaxBatchSize` option and trace spans.
	MaxRequests int `yaml:"maxRequests" default:"10"`
	// Timeout specifies a time limit for requests made to the ingest server.
	// The timeout includes connection time, any redirects, and reading the response body.
	// The default is 5 seconds; a timeout of zero means no timeout.
	Timeout timeutil.Duration `yaml:"timeout" default:"5s"`
	// The agent does not send events immediately upon a monitor generating
	// them, but buffers them and sends them in batches.  The lower this
	// number, the less delay for events to appear in SignalFx.
	EventSendIntervalSeconds int `yaml:"eventSendIntervalSeconds" default:"1"`
	// The analogue of `maxRequests` for dimension property requests.
	PropertiesMaxRequests uint `yaml:"propertiesMaxRequests" default:"20"`
	// How many dimension property updates to hold pending being sent before
	// dropping subsequent property updates.  Property updates will be resent
	// eventually and they are slow to change so dropping them (esp on agent
	// start up) usually isn't a big deal.
	PropertiesMaxBuffered uint `yaml:"propertiesMaxBuffered" default:"10000"`
	// How long to wait for property updates to be sent once they are
	// generated.  Any duplicate updates to the same dimension within this time
	// frame will result in the latest property set being sent.  This helps
	// prevent spurious updates that get immediately overwritten by very flappy
	// property generation.
	PropertiesSendDelaySeconds uint `yaml:"propertiesSendDelaySeconds" default:"30"`
	// Properties that are synced to SignalFx are cached to prevent duplicate
	// requests from being sent, causing unnecessary load on our backend.
	PropertiesHistorySize uint `yaml:"propertiesHistorySize" default:"10000"`
	// If the log level is set to `debug` and this is true, all datapoints
	// generated by the agent will be logged.
	LogDatapoints bool `yaml:"logDatapoints"`
	// The analogue of `logDatapoints` for events.
	LogEvents bool `yaml:"logEvents"`
	// The analogue of `logDatapoints` for trace spans.
	LogTraceSpans bool `yaml:"logTraceSpans"`
	// If `true`, traces and spans that weren't successfully received by the
	// backend will be logged as JSON.
	LogTraceSpansFailedToShip bool `yaml:"logTraceSpansFailedToShip"`
	// If `true`, dimension updates will be logged at the INFO level.
	LogDimensionUpdates bool `yaml:"logDimensionUpdates"`
	// If true, and the log level is `debug`, filtered out datapoints will be
	// logged.
	LogDroppedDatapoints bool `yaml:"logDroppedDatapoints"`
	// If true, the dimensions specified in the top-level `globalDimensions`
	// configuration will be added to the tag set of all spans that are emitted
	// by the writer.  If this is false, only the "host id" dimensions such as
	// `host`, `AwsUniqueId`, etc. are added to the span tags.
	AddGlobalDimensionsAsSpanTags bool `yaml:"addGlobalDimensionsAsSpanTags"`
	// Whether to send host correlation metrics to correlate traced services
	// with the underlying host
	SendTraceHostCorrelationMetrics *bool `yaml:"sendTraceHostCorrelationMetrics" default:"true"`
	// How long to wait after a trace span's service name is last seen to
	// continue sending the correlation datapoints for that service.  This
	// should be a duration string that is accepted by
	// https://golang.org/pkg/time/#ParseDuration.  This option is irrelevant if
	// `sendTraceHostCorrelationMetrics` is false.
	StaleServiceTimeout timeutil.Duration `yaml:"staleServiceTimeout" default:"5m"`
	// How frequently to purge host correlation caches that are generated from
	// the service and environment names seen in trace spans sent through or by
	// the agent.  This should be a duration string that is accepted by
	// https://golang.org/pkg/time/#ParseDuration.
	TraceHostCorrelationPurgeInterval timeutil.Duration `yaml:"traceHostCorrelationPurgeInterval" default:"1m"`
	// How frequently to send host correlation metrics that are generated from
	// the service name seen in trace spans sent through or by the agent.  This
	// should be a duration string that is accepted by
	// https://golang.org/pkg/time/#ParseDuration.  This option is irrelevant if
	// `sendTraceHostCorrelationMetrics` is false.
	TraceHostCorrelationMetricsInterval timeutil.Duration `yaml:"traceHostCorrelationMetricsInterval" default:"1m"`
	// How many times to retry requests related to trace host correlation
	TraceHostCorrelationMaxRequestRetries uint `yaml:"traceHostCorrelationMaxRequestRetries" default:"2"`
	// How many trace spans are allowed to be in the process of sending.  While
	// this number is exceeded, the oldest spans will be discarded to
	// accommodate newly generated spans and avoid memory exhaustion.  If you see
	// log messages about "Aborting pending trace requests..." or "Dropping new
	// trace spans..." it means that the downstream target for traces is not
	// able to accept them fast enough. Usually if the downstream is offline
	// you will get connection refused errors and most likely spans will not
	// build up in the agent (there is no retry mechanism). In the case of slow
	// downstreams, you might be able to increase `maxRequests` to increase the
	// concurrent stream of spans downstream (if the target can make efficient
	// use of additional connections) or, less likely, increase
	// `traceSpanMaxBatchSize` if your batches are maxing out (turn on debug
	// logging to see the batch sizes being sent) and being split up too much.
	// If neither of those options helps, your downstream is likely too slow to
	// handle the volume of trace spans and should be upgraded to more powerful
	// hardware/networking.
	MaxTraceSpansInFlight uint `yaml:"maxTraceSpansInFlight" default:"100000"`
	// Configures the writer specifically writing to Splunk.
	Splunk *SplunkConfig `yaml:"splunk"`
	// If set to `false`, output to SignalFx will be disabled.
	SignalFxEnabled *bool `yaml:"signalFxEnabled" default:"true"`
	// Additional headers to add to any outgoing HTTP requests from the agent.
	ExtraHeaders map[string]string `yaml:"extraHeaders"`
	// The following are propagated from elsewhere
	HostIDDims          map[string]string      `yaml:"-"`
	IngestURL           string                 `yaml:"-"`
	APIURL              string                 `yaml:"-"`
	EventEndpointURL    string                 `yaml:"-"`
	TraceEndpointURL    string                 `yaml:"-"`
	SignalFxAccessToken string                 `yaml:"-"`
	GlobalDimensions    map[string]string      `yaml:"-"`
	GlobalSpanTags      map[string]string      `yaml:"-"`
	MetricsToInclude    []MetricFilter         `yaml:"-"`
	MetricsToExclude    []MetricFilter         `yaml:"-"`
	PropertiesToExclude []PropertyFilterConfig `yaml:"-"`
}

WriterConfig holds configuration for the datapoint writer.
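
A sketch of overriding a few writer settings (values are illustrative; everything else keeps the defaults listed above):

const writerYAML = `
writer:
  datapointMaxBatchSize: 1000
  maxRequests: 20
  traceExportFormat: sapm
  logDimensionUpdates: true
`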

func (*WriterConfig) DatapointFilters

func (wc *WriterConfig) DatapointFilters() (*dpfilters.FilterSet, error)

DatapointFilters creates the filter set for datapoints

func (*WriterConfig) DefaultTraceEndpointPath

func (wc *WriterConfig) DefaultTraceEndpointPath() string

DefaultTraceEndpointPath returns the default path based on the export format.

func (*WriterConfig) Hash

func (wc *WriterConfig) Hash() uint64

Hash calculates a unique hash value for this config struct

func (*WriterConfig) IsSignalFxOutputEnabled

func (wc *WriterConfig) IsSignalFxOutputEnabled() bool

func (*WriterConfig) IsSplunkOutputEnabled

func (wc *WriterConfig) IsSplunkOutputEnabled() bool

func (*WriterConfig) ParsedAPIURL

func (wc *WriterConfig) ParsedAPIURL() *url.URL

ParsedAPIURL parses and returns the API server URL

func (*WriterConfig) ParsedEventEndpointURL

func (wc *WriterConfig) ParsedEventEndpointURL() *url.URL

ParsedEventEndpointURL parses and returns the event endpoint server URL

func (*WriterConfig) ParsedIngestURL

func (wc *WriterConfig) ParsedIngestURL() *url.URL

ParsedIngestURL parses and returns the ingest URL

func (*WriterConfig) ParsedTraceEndpointURL

func (wc *WriterConfig) ParsedTraceEndpointURL() *url.URL

ParsedTraceEndpointURL parses and returns the trace endpoint server URL

func (*WriterConfig) PropertyFilters

func (wc *WriterConfig) PropertyFilters() (*propfilters.FilterSet, error)

PropertyFilters creates the filter set for dimension properties

func (*WriterConfig) Validate

func (wc *WriterConfig) Validate() error

Directories

Path	Synopsis
sources	Package sources contains all of the config source logic.
sources/env
sources/vault	Package vault contains the logic for using Vault as a remote config source.
sources/zookeeper	Package zookeeper contains the logic for using Zookeeper as a config source.
