config

package
v4.2.0+incompatible
Published: Mar 18, 2019 License: Apache-2.0 Imports: 27 Imported by: 0

Documentation

Overview

Package config contains configuration structures and related helper logic for all agent components.

Index

Constants

This section is empty.

Variables

var (
	// EnvReplacer replaces . and - with _
	EnvReplacer = strings.NewReplacer(".", "_", "-", "_")
)

Functions

func CallConfigure

func CallConfigure(instance, conf interface{}) error

CallConfigure will call the Configure method on an observer or monitor with a `conf` object, typed to the correct type. This allows monitors/observers to set the type of the config object to their own config and not have to worry about casting or converting.

func DecodeExtraConfig

func DecodeExtraConfig(in CustomConfigurable, out interface{}, strict bool) error

DecodeExtraConfig will pull out the OtherConfig values from both ObserverConfig and MonitorConfig and decode them to a struct that is provided in the `out` arg. Whether all fields have to be in 'out' is determined by the 'strict' flag. Any errors decoding will cause `out` to be nil.

func DecodeExtraConfigStrict

func DecodeExtraConfigStrict(in CustomConfigurable, out interface{}) error

DecodeExtraConfigStrict will pull out any config values from 'in' and put them on the 'out' struct, returning an error if anything in 'in' isn't in 'out'.

func FillInConfigTemplate

func FillInConfigTemplate(embeddedFieldName string, configTemplate interface{}, conf CustomConfigurable) error

FillInConfigTemplate takes a config template value that a monitor/observer provided and fills it in dynamically from the provided conf.

func LoadConfig

func LoadConfig(ctx context.Context, configPath string) (<-chan *Config, error)

LoadConfig handles loading the main config file and recursively rendering any dynamic values in the config. If the watch interval is zero, the config will be loaded once and sent to the returned channel, after which the channel will be closed. Otherwise, the returned channel will remain open and will receive any config updates.

func ToString

func ToString(conf interface{}) string

ToString converts a config struct to a pseudo-yaml text output. If a struct field has the 'neverLog' tag, its value will be replaced by asterisks, or completely omitted if the tag value is 'omit'.

Types

type CollectdConfig

type CollectdConfig struct {
	// If you won't be using any collectd monitors, this can be set to true to
	// prevent collectd from pre-initializing
	DisableCollectd bool `yaml:"disableCollectd" default:"false"`
	// How many read intervals before abandoning a metric. Doesn't affect much
	// in normal usage.
	// See [Timeout](https://collectd.org/documentation/manpages/collectd.conf.5.shtml#timeout_iterations).
	Timeout int `yaml:"timeout" default:"40"`
	// Number of threads dedicated to executing read callbacks. See
	// [ReadThreads](https://collectd.org/documentation/manpages/collectd.conf.5.shtml#readthreads_num)
	ReadThreads int `yaml:"readThreads" default:"5"`
	// Number of threads dedicated to writing value lists to write callbacks.
	// This should be much less than readThreads because writing is batched in
	// the write_http plugin that writes back to the agent.
	// See [WriteThreads](https://collectd.org/documentation/manpages/collectd.conf.5.shtml#writethreads_num).
	WriteThreads int `yaml:"writeThreads" default:"2"`
	// The maximum number of values in the queue to be written back to the
	// agent from collectd.  Since the values are written to a local socket
	// that the agent exposes, there should be almost no queuing and the
	// default should be more than sufficient. See
	// [WriteQueueLimitHigh](https://collectd.org/documentation/manpages/collectd.conf.5.shtml#writequeuelimithigh_highnum)
	WriteQueueLimitHigh int `yaml:"writeQueueLimitHigh" default:"500000"`
	// The lowest number of values in the collectd queue before which metrics
	// begin being randomly dropped.  See
	// [WriteQueueLimitLow](https://collectd.org/documentation/manpages/collectd.conf.5.shtml#writequeuelimitlow_lownum)
	WriteQueueLimitLow int `yaml:"writeQueueLimitLow" default:"400000"`
	// Collectd's log level -- info, notice, warning, or err
	LogLevel string `yaml:"logLevel" default:"notice"`
	// A default read interval for collectd plugins.  If zero or undefined,
	// will default to the global agent interval.  Some collectd python
	// monitors do not support overriding the interval at the monitor level,
	// but this setting will apply to them.
	IntervalSeconds int `yaml:"intervalSeconds" default:"0"`
	// The local IP address of the server that the agent exposes to which
	// collectd will send metrics.  This defaults to an arbitrary address in
	// the localhost subnet, but can be overridden if needed.
	WriteServerIPAddr string `yaml:"writeServerIPAddr" default:"127.9.8.7"`
	// The port of the agent's collectd metric sink server.  If set to zero
	// (the default) it will allow the OS to assign it a free port.
	WriteServerPort uint16 `yaml:"writeServerPort" default:"0"`
	// This is where the agent will write the collectd config files that it
	// manages.  If you have secrets in those files, consider setting this to a
	// path on a tmpfs mount.  The files in this directory should be considered
	// transient -- there is no value in editing them by hand.  If you want to
	// add your own collectd config, see the collectd/custom monitor.
	ConfigDir string `yaml:"configDir" default:"/var/run/signalfx-agent/collectd"`

	// The following are propagated from the top-level config
	BundleDir            string `yaml:"-"`
	HasGenericJMXMonitor bool   `yaml:"-"`
	// Assigned by manager, not by user
	InstanceName string `yaml:"-"`
	// A hack to allow custom collectd to easily specify a single monitorID via
	// query parameter
	WriteServerQuery string `yaml:"-"`
}

CollectdConfig holds the high-level configuration for the managed collectd subprocess.

func (*CollectdConfig) ConfigFilePath

func (cc *CollectdConfig) ConfigFilePath() string

ConfigFilePath returns the path where collectd should render its main config file.

func (*CollectdConfig) Hash

func (cc *CollectdConfig) Hash() uint64

Hash calculates a unique hash value for this config struct

func (*CollectdConfig) InstanceConfigDir

func (cc *CollectdConfig) InstanceConfigDir() string

InstanceConfigDir is the directory underneath the ConfigDir that is specific to this collectd instance.

func (*CollectdConfig) ManagedConfigDir

func (cc *CollectdConfig) ManagedConfigDir() string

ManagedConfigDir returns the dir path where all monitor config should go.

func (*CollectdConfig) Validate

func (cc *CollectdConfig) Validate() error

Validate the collectd specific config

func (*CollectdConfig) WriteServerURL

func (cc *CollectdConfig) WriteServerURL() string

WriteServerURL is the local address served by the agent where collectd should write datapoints.

type Config

type Config struct {
	// The access token for the org that should receive the metrics emitted by
	// the agent.
	SignalFxAccessToken string `yaml:"signalFxAccessToken" neverLog:"true"`
	// The URL of SignalFx ingest server.  Should be overridden if using the
	// SignalFx Gateway.  If not set, this will be determined by the
	// `signalFxRealm` option below.  If you want to send trace spans to a
	// different location, set the `traceEndpointUrl` option.
	IngestURL string `yaml:"ingestUrl"`
	// The full URL (including path) to the trace ingest server.  If this is
	// not set, all trace spans will be sent to the same place as `ingestUrl`
	// above.
	TraceEndpointURL string `yaml:"traceEndpointUrl"`
	// The SignalFx API base URL.  If not set, this will be determined by the
	// `signalFxRealm` option below.
	APIURL string `yaml:"apiUrl"`
	// The SignalFx Realm that the organization you want to send to is a part
	// of.  This defaults to the original realm (`us0`) but if you are setting
	// up the agent for the first time, you quite likely need to change this.
	SignalFxRealm string `yaml:"signalFxRealm" default:"us0"`
	// The hostname that will be reported as the `host` dimension. If blank,
	// this will be auto-determined by the agent based on a reverse lookup of
	// the machine's IP address.
	Hostname string `yaml:"hostname"`
	// If true (the default), and the `hostname` option is not set, the
	// hostname will be determined by doing a reverse DNS query on the IP
	// address that is returned by querying for the bare hostname.  This is
	// useful in cases where the hostname reported by the kernel is a short
	// name. (**default**: `true`)
	UseFullyQualifiedHost *bool `yaml:"useFullyQualifiedHost"`
	// Our standard agent model is to collect metrics for services running on
	// the same host as the agent.  Therefore, host-specific dimensions (e.g.
	// `host`, `AWSUniqueId`, etc) are automatically added to every datapoint
	// that is emitted from the agent by default.  Set this to true if you are
	// using the agent primarily to monitor things on other hosts.  You can set
	// this option at the monitor level as well.
	DisableHostDimensions bool `yaml:"disableHostDimensions" default:"false"`
	// How often to send metrics to SignalFx.  Monitors can override this
	// individually.
	IntervalSeconds int `yaml:"intervalSeconds" default:"10"`
	// Dimensions (key:value pairs) that will be added to every datapoint emitted by the agent.
	// To specify that all metrics should be high-resolution, add the dimension `sf_hires: 1`
	GlobalDimensions map[string]string `yaml:"globalDimensions" default:"{}"`
	// Whether to send the machine-id dimension on all host-specific datapoints
	// generated by the agent.  This dimension is derived from the Linux
	// machine-id value.
	SendMachineID bool `yaml:"sendMachineID"`
	// A list of observers to use (see observer config)
	Observers []ObserverConfig `yaml:"observers" default:"[]" neverLog:"omit"`
	// A list of monitors to use (see monitor config)
	Monitors []MonitorConfig `yaml:"monitors" default:"[]" neverLog:"omit"`
	// Configuration of the datapoint/event writer
	Writer WriterConfig `yaml:"writer"`
	// Log configuration
	Logging LogConfig `yaml:"logging" default:"{}"`
	// Configuration of the managed collectd subprocess
	Collectd CollectdConfig `yaml:"collectd" default:"{}"`
	// A list of metric filters that will whitelist/include metrics.  These
	// filters take priority over the filters specified in `metricsToExclude`.
	MetricsToInclude []MetricFilter `yaml:"metricsToInclude" default:"[]"`
	// A list of metric filters
	MetricsToExclude []MetricFilter `yaml:"metricsToExclude" default:"[]"`
	// A list of properties filters
	PropertiesToExclude []PropertyFilterConfig `yaml:"propertiesToExclude" default:"[]"`

	// The host on which the internal status server will listen.  The internal
	// status HTTP server serves internal metrics and diagnostic information
	// about the agent and can be scraped by the `internal-metrics` monitor.
	// Can be set to `0.0.0.0` if you want to monitor the agent from another
	// host.  If you set this to blank/null, the internal status server will
	// not be started.  See `internalStatusPort`.
	InternalStatusHost string `yaml:"internalStatusHost" default:"localhost"`
	// The port on which the internal status server will listen.  See
	// `internalStatusHost`.
	InternalStatusPort uint16 `yaml:"internalStatusPort" default:"8095"`

	// Enables Go pprof endpoint on port 6060 that serves profiling data for
	// development
	EnableProfiling bool `yaml:"profiling" default:"false"`
	// The host/ip address for the pprof profile server to listen on.
	// `profiling` must be enabled for this to have any effect.
	ProfilingHost string `yaml:"profilingHost" default:"127.0.0.1"`
	// The port for the pprof profile server to listen on. `profiling` must be
	// enabled for this to have any effect.
	ProfilingPort int `yaml:"profilingPort" default:"6060"`
	// Path to the directory holding the agent dependencies.  This will
	// normally be derived automatically. Overrides the envvar
	// SIGNALFX_BUNDLE_DIR if set.
	BundleDir string `yaml:"bundleDir"`
	// This exists purely to give the user a place to put common yaml values to
	// reference in other parts of the config file.
	Scratch interface{} `yaml:"scratch" neverLog:"omit"`
	// Configuration of remote config stores
	Sources sources.SourceConfig `yaml:"configSources"`
	// Path to the host's `/proc` filesystem.
	// This is useful for containerized environments.
	ProcPath string `yaml:"procPath" default:"/proc"`
	// Path to the host's `/etc` directory.
	// This is useful for containerized environments.
	EtcPath string `yaml:"etcPath" default:"/etc"`
	// Path to the host's `/var` directory.
	// This is useful for containerized environments.
	VarPath string `yaml:"varPath" default:"/var"`
	// Path to the host's `/run` directory.
	// This is useful for containerized environments.
	RunPath string `yaml:"runPath" default:"/run"`
	// Path to the host's `/sys` directory.
	// This is useful for containerized environments.
	SysPath string `yaml:"sysPath" default:"/sys"`
}

Config is the top-level config struct for configuration that is common to all platforms.

type CustomConfigurable

type CustomConfigurable interface {
	ExtraConfig() (map[string]interface{}, error)
}

CustomConfigurable should be implemented by config structs that have the concept of generic other config that is initially deserialized into a map[string]interface{} to be later transformed to another form.

type LogConfig

type LogConfig struct {
	// Valid levels include `debug`, `info`, `warn`, `error`.  Note that
	// `debug` logging may leak sensitive configuration (e.g. passwords) to the
	// agent output.
	Level string `yaml:"level" default:"info"`
}

LogConfig contains configuration related to logging

func (*LogConfig) LogrusLevel

func (lc *LogConfig) LogrusLevel() *log.Level

LogrusLevel returns a logrus log level based on the configured level in LogConfig.

type MetricFilter

type MetricFilter struct {
	// A map of dimension key/values to match against.  All key/values must
	// match a datapoint for it to be matched.
	Dimensions map[string]string `yaml:"dimensions" default:"{}"`
	// A list of metric names to match against, OR'd together
	MetricNames []string `yaml:"metricNames"`
	// A single metric name to match against
	MetricName string `yaml:"metricName"`
	// (**Only applicable for the top level filters**) Limits the scope of the
	// filter to datapoints from a specific monitor. If specified, any
	// datapoints not from this monitor type will never match against this
	// filter.
	MonitorType string `yaml:"monitorType"`
	// Negates the result of the match so that it matches all datapoints that
	// do NOT match the metric name and dimension values given. This does not
	// negate monitorType, if given.
	Negated bool `yaml:"negated"`
}

MetricFilter describes a set of subtractive filters applied to datapoints right before they are sent.

func AddOrMerge

func AddOrMerge(mtes []MetricFilter, mf2 MetricFilter) []MetricFilter

AddOrMerge appends mf2 to the list, or merges it into an existing compatible MetricFilter.

func (*MetricFilter) MakeFilter

func (mf *MetricFilter) MakeFilter() (dpfilters.DatapointFilter, error)

MakeFilter returns an actual filter instance from the config

func (*MetricFilter) MergeWith

func (mf *MetricFilter) MergeWith(mf2 MetricFilter) MetricFilter

MergeWith merges mf2's MetricNames into the receiver mf's MetricNames.

func (*MetricFilter) ShouldMerge

func (mf *MetricFilter) ShouldMerge(mf2 MetricFilter) bool

ShouldMerge checks whether mf2 should be merged into the receiver mf. Filters with the same monitorType, negation, and dimensions should be merged.

type MonitorConfig

type MonitorConfig struct {
	// The type of the monitor
	Type string `yaml:"type" json:"type"`
	// The rule used to match up this configuration with a discovered endpoint.
	// If blank, the configuration will be run immediately when the agent is
	// started.  If multiple endpoints match this rule, multiple instances of
	// the monitor type will be created with the same configuration (except
	// different host/port).
	DiscoveryRule string `yaml:"discoveryRule" json:"discoveryRule"`
	// A set of extra dimensions (key:value pairs) to include on datapoints emitted by the
	// monitor(s) created from this configuration. To specify that metrics from
	// this monitor should be high-resolution, add the dimension `sf_hires: 1`.
	ExtraDimensions map[string]string `yaml:"extraDimensions" json:"extraDimensions"`
	// A set of mappings from a configuration option on this monitor to
	// attributes of a discovered endpoint.  The keys are the config option on
	// this monitor and the value can be any valid expression used in discovery
	// rules.
	ConfigEndpointMappings map[string]string `yaml:"configEndpointMappings" json:"configEndpointMappings"`
	// The interval (in seconds) at which to emit datapoints from the
	// monitor(s) created by this configuration.  If not set (or set to 0), the
	// global agent intervalSeconds config option will be used instead.
	IntervalSeconds int `yaml:"intervalSeconds" json:"intervalSeconds"`
	// If one or more configurations have this set to true, only those
	// configurations will be considered. This setting can be useful for testing.
	Solo bool `yaml:"solo" json:"solo"`
	// A list of metric filters
	MetricsToExclude []MetricFilter `yaml:"metricsToExclude" json:"metricsToExclude" default:"[]"`
	// Some monitors pull metrics from services not running on the same host
	// and should not get the host-specific dimensions set on them (e.g.
	// `host`, `AWSUniqueId`, etc).  Setting this to `true` causes those
	// dimensions to be omitted.  You can disable this globally with the
	// `disableHostDimensions` option on the top level of the config.
	DisableHostDimensions bool `yaml:"disableHostDimensions" json:"disableHostDimensions" default:"false"`
	// This can be set to true if you don't want to include the dimensions that
	// are specific to the endpoint that was discovered by an observer.  This
	// is useful when you have an endpoint whose identity is not particularly
	// important since it acts largely as a proxy or adapter for other metrics.
	DisableEndpointDimensions bool `yaml:"disableEndpointDimensions" json:"disableEndpointDimensions"`
	// OtherConfig is everything else that is custom to a particular monitor
	OtherConfig map[string]interface{} `yaml:",inline" neverLog:"omit"`
	Hostname        string               `yaml:"-" json:"-"`
	BundleDir       string               `yaml:"-" json:"-"`
	// ValidationError is where a message concerning validation issues can go
	// so that diagnostics can output it.
	ValidationError string               `yaml:"-" json:"-" hash:"ignore"`
	MonitorID       types.MonitorID      `yaml:"-" hash:"ignore"`
	Filter          *dpfilters.FilterSet `yaml:"-" json:"-" hash:"ignore"`
}

MonitorConfig is used to configure monitor instances. One instance of MonitorConfig may be used to configure multiple monitor instances. If a monitor's discovery rule does not match any discovered services, the monitor will not run.

func (*MonitorConfig) Equals

func (mc *MonitorConfig) Equals(other *MonitorConfig) bool

Equals tests if two monitor configs are sufficiently equal to each other. Two monitors should only be equal if it doesn't make sense for two configurations to be active at the same time.

func (*MonitorConfig) ExtraConfig

func (mc *MonitorConfig) ExtraConfig() (map[string]interface{}, error)

ExtraConfig returns generic config as a map

func (*MonitorConfig) HasAutoDiscovery

func (mc *MonitorConfig) HasAutoDiscovery() bool

HasAutoDiscovery returns whether the monitor is dynamic (i.e. relies on autodiscovered services) as opposed to static (manually configured to run immediately).

func (*MonitorConfig) Hash

func (mc *MonitorConfig) Hash() uint64

Hash calculates a unique hash value for this config struct

func (*MonitorConfig) MonitorConfigCore

func (mc *MonitorConfig) MonitorConfigCore() *MonitorConfig

MonitorConfigCore provides a way of getting the MonitorConfig when embedded in a struct that is referenced through a more generic interface.

type MonitorCustomConfig

type MonitorCustomConfig interface {
	MonitorConfigCore() *MonitorConfig
}

MonitorCustomConfig represents monitor-specific configuration that doesn't appear in the MonitorConfig struct.

type ObserverConfig

type ObserverConfig struct {
	// The type of the observer
	Type        string                 `yaml:"type,omitempty"`
	OtherConfig map[string]interface{} `yaml:",inline" default:"{}"`
}

ObserverConfig holds the configuration for an observer

func (*ObserverConfig) ExtraConfig

func (oc *ObserverConfig) ExtraConfig() (map[string]interface{}, error)

ExtraConfig returns generic config as a map

type PropertyFilterConfig

type PropertyFilterConfig struct {
	// A single property name to match
	PropertyName *string `yaml:"propertyName" default:"*"`
	// A property value to match
	PropertyValue *string `yaml:"propertyValue" default:"*"`
	// A dimension name to match
	DimensionName *string `yaml:"dimensionName" default:"*"`
	// A dimension value to match
	DimensionValue *string `yaml:"dimensionValue" default:"*"`
}

PropertyFilterConfig describes a set of subtractive filters applied to properties used to create a PropertyFilter

func (*PropertyFilterConfig) MakePropertyFilter

func (pfc *PropertyFilterConfig) MakePropertyFilter() (propfilters.DimPropsFilter, error)

MakePropertyFilter returns an actual filter instance from the config

type StoreConfig

type StoreConfig struct {
	OtherConfig map[string]interface{} `yaml:",inline,omitempty" default:"{}"`
}

StoreConfig holds configuration related to config stores (e.g. filesystem, zookeeper, etc)

func (*StoreConfig) ExtraConfig

func (sc *StoreConfig) ExtraConfig() map[string]interface{}

ExtraConfig returns generic config as a map

type WriterConfig

type WriterConfig struct {
	// The maximum number of datapoints to include in a batch before sending the
	// batch to the ingest server.  Smaller batch sizes than this will be sent
	// if datapoints originate in smaller chunks.  Larger batch sizes may also
	// be used if the `maxRequests` requests limit is hit -- the next request
	// will consist of all of the datapoints queued in the meantime.
	DatapointMaxBatchSize int `yaml:"datapointMaxBatchSize" default:"1000"`
	// The maximum number of datapoints that are allowed to be buffered in the
	// agent (i.e. received from a monitor but have not yet received
	// confirmation of successful receipt by the target ingest/gateway server
	// downstream).  Any datapoints that come in beyond this number will
	// overwrite existing datapoints if they have not been sent yet, starting
	// with the oldest.
	MaxDatapointsBuffered int `yaml:"maxDatapointsBuffered" default:"5000"`
	// The analogue of `datapointMaxBatchSize` for trace spans.
	TraceSpanMaxBatchSize int `yaml:"traceSpanMaxBatchSize" default:"1000"`
	// Deprecated: use `maxRequests` instead.
	DatapointMaxRequests int `yaml:"datapointMaxRequests"`
	// The maximum number of concurrent requests to make to a single ingest server
	// with datapoints/events/trace spans.  This number multiplied by
	// `datapointMaxBatchSize` is more or less the maximum number of datapoints
	// that can be "in-flight" at any given time.  Same thing for the
	// `traceSpanMaxBatchSize` option and trace spans.
	MaxRequests int `yaml:"maxRequests" default:"10"`
	// The agent does not send events immediately upon a monitor generating
	// them, but buffers them and sends them in batches.  The lower this
	// number, the less delay for events to appear in SignalFx.
	EventSendIntervalSeconds int `yaml:"eventSendIntervalSeconds" default:"1"`
	// The analogue of `maxRequests` for dimension property requests.
	PropertiesMaxRequests uint `yaml:"propertiesMaxRequests" default:"20"`
	// How many dimension property updates to hold pending being sent before
	// dropping subsequent property updates.  Property updates will be resent
	// eventually and they are slow to change so dropping them (esp on agent
	// start up) usually isn't a big deal.
	PropertiesMaxBuffered uint `yaml:"propertiesMaxBuffered" default:"10000"`
	// How long to wait for property updates to be sent once they are
	// generated.  Any duplicate updates to the same dimension within this time
	// frame will result in the latest property set being sent.  This helps
	// prevent spurious updates that get immediately overwritten by very flappy
	// property generation.
	PropertiesSendDelaySeconds uint `yaml:"propertiesSendDelaySeconds" default:"30"`
	// Properties that are synced to SignalFx are cached to prevent duplicate
	// requests from being sent, causing unnecessary load on our backend.
	PropertiesHistorySize uint `yaml:"propertiesHistorySize" default:"10000"`
	// If the log level is set to `debug` and this is true, all datapoints
	// generated by the agent will be logged.
	LogDatapoints bool `yaml:"logDatapoints"`
	// The analogue of `logDatapoints` for events.
	LogEvents bool `yaml:"logEvents"`
	// The analogue of `logDatapoints` for trace spans.
	LogTraceSpans bool `yaml:"logTraceSpans"`
	// If true, and the log level is `debug`, filtered out datapoints will be
	// logged.
	LogDroppedDatapoints bool `yaml:"logDroppedDatapoints"`
	// Whether to send host correlation metrics to correlate traced services
	// with the underlying host
	SendTraceHostCorrelationMetrics *bool `yaml:"sendTraceHostCorrelationMetrics" default:"false"`
	// How long to wait after a trace span's service name is last seen to
	// continue sending the correlation datapoints for that service.  This
	// should be a duration string that is accepted by
	// https://golang.org/pkg/time/#ParseDuration.  This option is irrelevant if
	// `sendTraceHostCorrelationMetrics` is false.
	StaleServiceTimeout time.Duration `yaml:"staleServiceTimeout" default:"5m"`
	// How frequently to send host correlation metrics that are generated from
	// the service name seen in trace spans sent through or by the agent.  This
	// should be a duration string that is accepted by
	// https://golang.org/pkg/time/#ParseDuration.  This option is irrelevant if
	// `sendTraceHostCorrelationMetrics` is false.
	TraceHostCorrelationMetricsInterval time.Duration `yaml:"traceHostCorrelationMetricsInterval" default:"1m"`
	// How many trace spans are allowed to be in the process of sending.  While
	// this number is exceeded, existing pending spans will be randomly dropped,
	// if possible, to accommodate newly generated spans and avoid memory
	// exhaustion.  If you see log messages about "Aborting pending trace
	// requests..." or "Dropping new trace spans..." it means that the
	// downstream target for traces is not able to accept them fast enough.
	// Usually if the downstream is offline you will get connection refused
	// errors and most likely spans will not build up in the agent (there is no
	// retry mechanism). In the case of slow downstreams, you might be able to
	// increase `maxRequests` to increase the concurrent stream of spans
	// downstream (if the target can make efficient use of additional
	// connections) or, less likely, increase `traceSpanMaxBatchSize` if your
	// batches are maxing out (turn on debug logging to see the batch sizes
	// being sent) and being split up too much. If neither of those options
	// helps, your downstream is likely too slow to handle the volume of trace
	// spans and should be upgraded to more powerful hardware/networking.
	MaxTraceSpansInFlight uint `yaml:"maxTraceSpansInFlight" default:"100000"`
	// The following are propagated from elsewhere
	HostIDDims          map[string]string      `yaml:"-"`
	IngestURL           string                 `yaml:"-"`
	APIURL              string                 `yaml:"-"`
	TraceEndpointURL    string                 `yaml:"-"`
	SignalFxAccessToken string                 `yaml:"-"`
	GlobalDimensions    map[string]string      `yaml:"-"`
	MetricsToInclude    []MetricFilter         `yaml:"-"`
	MetricsToExclude    []MetricFilter         `yaml:"-"`
	PropertiesToExclude []PropertyFilterConfig `yaml:"-"`
}

WriterConfig holds configuration for the datapoint writer.

func (*WriterConfig) DatapointFilters

func (wc *WriterConfig) DatapointFilters() (*dpfilters.FilterSet, error)

DatapointFilters creates the filter set for datapoints

func (*WriterConfig) Hash

func (wc *WriterConfig) Hash() uint64

Hash calculates a unique hash value for this config struct

func (*WriterConfig) ParsedAPIURL

func (wc *WriterConfig) ParsedAPIURL() *url.URL

ParsedAPIURL parses and returns the API server URL

func (*WriterConfig) ParsedIngestURL

func (wc *WriterConfig) ParsedIngestURL() *url.URL

ParsedIngestURL parses and returns the ingest URL

func (*WriterConfig) ParsedTraceEndpointURL

func (wc *WriterConfig) ParsedTraceEndpointURL() *url.URL

ParsedTraceEndpointURL parses and returns the trace endpoint server URL

func (*WriterConfig) PropertyFilters

func (wc *WriterConfig) PropertyFilters() (*propfilters.FilterSet, error)

PropertyFilters creates the filter set for dimension properties

Directories

Path Synopsis
sources Package sources contains all of the config source logic.
env
vault Package vault contains the logic for using Vault as a remote config source. How to use auth methods with Vault Go client: https://groups.google.com/forum/#!msg/vault-tool/cS7J2KbAwZg/7pu6PYSRAAAJ
zookeeper Package zookeeper contains the logic for using Zookeeper as a config source.
