veneur

package module
v1.1.0
Published: Mar 2, 2017 License: MIT Imports: 48 Imported by: 0

README


Veneur (venn-urr) is a server implementation of the DogStatsD protocol for aggregating metrics and sending them to downstream storage, typically Datadog. It can also act as a global aggregator for histograms, sets and counters.

More generically, Veneur is a convenient, host-local sink for various observability primitives.

Status

Veneur is currently handling all metrics for Stripe and is considered production ready. It is under active development and maintenance!

Building veneur requires Go 1.7 or later.

Motivation

We wanted percentiles, histograms and sets to be global. Veneur helps us do that!

Veneur is a DogStatsD implementation that acts as a local collector and — optionally — as an aggregator for some metric types, such that the metrics are global rather than host-local. This is particularly useful for histograms, timers and sets, as in their normal, per-host configuration the percentiles for histograms can be less effective or even meaningless. Per-host unique sets are also often not what's desired.

Global *StatsD installations can be problematic, as they require either client-side or proxy sharding to prevent a single instance from becoming a Single Point of Failure (SPoF) for all metrics. Veneur aims to solve this problem. Non-global metrics like counters and gauges are collected by a local Veneur instance and sent to storage at flush time. Global metrics (histograms and sets) are forwarded to a central Veneur instance for aggregation before being sent to storage.

How Veneur Is Different From Official DogStatsD

Veneur is different for a few reasons. They are enumerated here.

Protocol

Veneur adheres to the official DogStatsD datagram format with the exceptions below:

  • The tag veneurlocalonly is stripped and influences forwarding behavior, as discussed below.
  • The tag veneurglobalonly is stripped and influences forwarding behavior, as discussed below.

Global Aggregation

If configured to do so, Veneur can selectively aggregate global metrics to be cumulative across all instances that report to a central Veneur, allowing global percentile calculation and global set counts.

For example, say you emit a timer foo.bar.call_duration_ms from 20 hosts that are configured to forward to a central veneur. In Datadog you'll see the following:

  • Metrics that have been "globalized"
    • foo.bar.call_duration_ms.50percentile: the p50 across all hosts, by tag
    • foo.bar.call_duration_ms.90percentile: the p90 across all hosts, by tag
    • foo.bar.call_duration_ms.95percentile: the p95 across all hosts, by tag
    • foo.bar.call_duration_ms.99percentile: the p99 across all hosts, by tag
  • Metrics that remain host-local
    • foo.bar.call_duration_ms.avg: by-host tagged average
    • foo.bar.call_duration_ms.count: by-host tagged count which (when summed) shows the total count of times this metric was emitted
    • foo.bar.call_duration_ms.max: by-host tagged maximum value
    • foo.bar.call_duration_ms.median: by-host tagged median value
    • foo.bar.call_duration_ms.min: by-host tagged minimum value
    • foo.bar.call_duration_ms.sum: by-host tagged sum value representing the total time

Clients can choose to override this behavior by including the tag veneurlocalonly.
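
As an illustration, here is a minimal sketch of emitting that timer from each host with the DataDog Go statsd client (github.com/DataDog/datadog-go/statsd); the port and tag values are examples, not part of Veneur:

package main

import (
	"time"

	"github.com/DataDog/datadog-go/statsd"
)

func main() {
	// Point the client at the local Veneur (the port here is the example
	// udp_address suggested in the configuration section below).
	client, err := statsd.New("127.0.0.1:8126")
	if err != nil {
		panic(err)
	}
	// Each host reports its own samples; the central Veneur merges them and
	// publishes the global percentiles at flush time.
	client.Timing("foo.bar.call_duration_ms", 137*time.Millisecond, []string{"endpoint:checkout"}, 1)
}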

Approximate Histograms

Because Veneur is built to handle lots and lots of data, it uses approximate histograms. We have our own implementation of Dunning's t-digest, which has bounded memory consumption and reduced error at extreme quantiles. Metrics are consistently routed to the same worker to distribute load and to be added to the same histogram.
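
The routing is conceptually similar to the sketch below, which hashes a metric's name and tags to pick a worker; the exact key layout and hash function Veneur uses may differ:

package main

import (
	"fmt"
	"hash/fnv"
	"strings"
)

// workerIndex deterministically maps a metric (name plus tags) to a worker,
// so every sample of that metric is merged into the same t-digest.
func workerIndex(name string, tags []string, numWorkers int) int {
	h := fnv.New32a()
	h.Write([]byte(name))
	h.Write([]byte(strings.Join(tags, ","))) // assumes tags are already canonicalized
	return int(h.Sum32()) % numWorkers
}

func main() {
	fmt.Println(workerIndex("foo.bar.call_duration_ms", []string{"endpoint:checkout"}, 8))
}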

Datadog's DogStatsD — and StatsD — uses an exact histogram which retains all samples and is reset every flush period. This means that there is a loss of precision when using Veneur, but the resulting percentile values are meant to be more representative of a global view.

Approximate Sets

Veneur uses HyperLogLogs for approximate unique sets. These are a very efficient unique counter with fixed memory consumption.

Global Counters

Via an optional magic tag Veneur will forward counters to a global host for accumulation. This feature was primarily developed to control tag cardinality. Some counters are valuable but do not require per-host tagging.

Lack of Host Tags for Aggregated Metrics

By definition, the hostname is not applicable to global metrics that Veneur processes. Note that if you do include a hostname tag, Veneur will not strip it for you. Veneur will add its own configured hostname to the metrics it sends to Datadog.

Expiration

Veneur expires all metrics on each flush. If a metric is no longer being sent (or is sent sparsely) Veneur will not send it as zeros! This was chosen because the combination of the approximation's features and the additional hysteresis imposed by retaining these approximations over time was deemed more complex than desirable.

Concepts

  • Global metrics are those that benefit from being aggregated for chunks — or all — of your infrastructure. These are histograms (including the percentiles generated by timers) and sets.
  • Metrics that are sent to another Veneur instance for aggregation are said to be "forwarded". This terminology helps to decipher configuration and metric options below.
  • Flushed, in Veneur, means metrics sent to Datadog.

By Metric Type Behavior

To clarify how each metric type behaves in Veneur, please use the following:

  • Counters: Locally accrued, flushed to Datadog (see magic tags for global version)
  • Gauges: Locally accrued, flushed to Datadog
  • Histograms: Locally accrued, count, max and min flushed to Datadog, percentiles forwarded to forward_address for global aggregation when set.
  • Timers: Locally accrued, count, max and min flushed to Datadog, percentiles forwarded to forward_address for global aggregation when set.
  • Sets: Locally accrued, forwarded to forward_address for global aggregation when set.

Usage

veneur -f example.yaml

See example.yaml for a sample config. Be sure to set your Datadog API key!

Plugins

Veneur includes optional plugins to extend its capabilities. These plugins are enabled via configuration options. Please consult each plugin's README for more information:

  • S3 Plugin - Emit flushed metrics as a TSV file to Amazon S3
  • InfluxDB Plugin - Emit flushed metrics to InfluxDB (experimental)

Setup

Here we'll explain some of the setup choices you may make when using Veneur.

Einhorn Usage

When you upgrade Veneur (deploy, stop, start with new binary) there will be a brief period where Veneur will not be able to handle HTTP requests. At Stripe we use Einhorn as a shared socket manager to bridge the gap until Veneur is ready to handle HTTP requests again.

You'll need to consult Einhorn's documentation for installation, setup and usage. But once you've done that you can tell Veneur to use Einhorn by setting http_address to einhorn@0. This informs goji/bind to use its Einhorn handling code to bind to the file descriptor for HTTP.
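
That is, in the config file:

http_address: "einhorn@0"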

Forwarding

Veneur instances can be configured to forward their global metrics to another Veneur instance. You can use this feature to get the best of both worlds: metrics that benefit from global aggregation can be passed up to a single global Veneur, but other metrics can be published locally with host-scoped information. Note: Forwarding adds an additional delay to metric availability corresponding to the value of the interval configuration option, as the local Veneur will flush to its configured upstream, which will then flush any received metrics when its own interval expires.

To configure this feature, you need one Veneur, which we'll call the global instance, and one or more other Veneurs, which we'll call local instances. The local instances should have their forward_address configured to the global instance's http_address. The global instance should have an empty forward_address (i.e. just don't set it). You can then report metrics to any Veneur's udp_address as usual.
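
A minimal sketch of the relevant options on each side (hostnames, ports, and the URL scheme of forward_address are illustrative):

# Local instances
udp_address: "127.0.0.1:8126"
forward_address: "http://veneur-global.example.com:8127"

# Global instance
udp_address: "127.0.0.1:8126"
http_address: "0.0.0.0:8127"
# forward_address is left unset on the global instance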

If a local instance receives a histogram or set, it will publish the local parts of that metric (the count, min and max) directly to DataDog, but instead of publishing percentiles, it will package the entire histogram and send it to the global instance. The global instance will aggregate all the histograms together and publish their percentiles to DataDog.

Note that the global instance can also receive metrics over UDP. It will publish a count, min and max for the samples that were sent directly to it, but it will not count samples forwarded from other Veneur instances (this ensures that things don't get double-counted). You can even chain multiple levels of forwarding together if you want. This might be useful if, for example, your global Veneur is under too much load. The root of the tree will be the Veneur instance that has an empty forward_address. (Do not tell a Veneur instance to forward metrics to itself. We don't support that and it doesn't really make sense in the first place.)

With respect to the tags configuration option, the tags that will be added are those of the Veneur that actually publishes to DataDog. If a local instance forwards its histograms and sets to a global instance, the local instance's tags will not be attached to the forwarded structures. It will still use its own tags for the other metrics it publishes, but the percentiles will get extra tags only from the global instance.

Magic Tag

If you want a metric to be strictly host-local, you can tell Veneur not to forward it by including a veneurlocalonly tag in the metric packet, e.g. foo:1|h|#veneurlocalonly. This tag will not actually appear in DataDog; Veneur removes it.

Counters

Relatedly, if you want to forward a counter to the global Veneur instance to reduce tag cardinality, you can tell Veneur to flush it to the global instance by including a veneurglobalonly tag in the count's metric packet. This tag will also not appear in Datadog. Note: for global counters to report correctly, the local and global Veneur instances should be configured to have the same flush interval.
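
For example, a counter datagram such as foo.signups:1|c|#veneurglobalonly (the metric name is illustrative) will be accumulated on the global instance rather than flushed per host.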

Hostname and Device

Veneur also honors the same "magic" tags as the DogStatsD daemon included in the Datadog Agent. The tag host will override Hostname in the metric and device will override DeviceName.
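
For example, a datagram such as foo.disk_usage:23|g|#host:web-01,device:sda1 (values illustrative) will be reported with Hostname web-01 and DeviceName sda1.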

Configuration

Veneur expects to have a config file supplied via -f PATH. The included example.yaml outlines the options (a minimal sketch also follows this list):

  • api_hostname - The Datadog API URL to post to. Probably https://app.datadoghq.com.
  • metric_max_length - How big a buffer to allocate for incoming metrics. Metrics longer than this will be truncated!
  • flush_max_per_body - how many metrics to include in each JSON body POSTed to Datadog. Veneur will POST multiple bodies in parallel if it goes over this limit. A value around 5k-10k is recommended; in practice we've seen Datadog reject bodies over about 195k.
  • debug - Should we output lots of debug info? :)
  • hostname - The hostname to be used with each metric sent. Defaults to os.Hostname()
  • omit_empty_hostname - If true and hostname is empty ("") Veneur will not add a host tag to its own metrics.
  • interval - How often to flush. Something like 10s seems good. Note: If you change this, it breaks all kinds of things on Datadog's side. You'll have to change all of your metrics' metadata.
  • key - Your Datadog API key
  • percentiles - The percentiles to generate from our timers and histograms. Specified as an array of float64s.
  • aggregates - The aggregates to generate from our timers and histograms. Specified as an array of strings; choices: min, max, median, avg, count, sum. Default: min, max, count.
  • udp_address - The address on which to listen for metrics. Probably :8126 so as not to interfere with normal DogStatsD.
  • http_address - The address to serve HTTP healthchecks and other endpoints. This can be a simple ip:port combination like 127.0.0.1:8127. If you're under einhorn, you probably want einhorn@0.
  • forward_address - The address of an upstream Veneur to forward metrics to. See below.
  • num_workers - The number of worker goroutines to start.
  • num_readers - The number of reader goroutines to start. Veneur supports SO_REUSEPORT on Linux to scale to multiple readers. On other platforms, this should always be 1; other values will probably cause errors at startup. See below.
  • read_buffer_size_bytes - The size of the receive buffer for the UDP socket. Defaults to 2MB, as having a lot of buffer prevents packet drops during flush!
  • sentry_dsn - A DSN for Sentry, where errors will be sent when they happen.
  • stats_address - The address to send internally generated metrics. Probably 127.0.0.1:8125. In practice this means you'll be sending metrics to yourself. This is expected!
  • tags - Tags to add to every metric that is sent to Veneur. Expects an array of strings!
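
For reference, a minimal sketch of a local-instance config built only from the options above (all values are illustrative):

api_hostname: https://app.datadoghq.com
key: "<your Datadog API key>"
interval: "10s"
udp_address: "127.0.0.1:8126"
http_address: "127.0.0.1:8127"
forward_address: "http://veneur-global.example.com:8127"
stats_address: "127.0.0.1:8125"
metric_max_length: 4096
flush_max_per_body: 5000
num_workers: 4
num_readers: 1
read_buffer_size_bytes: 2097152
percentiles: [0.5, 0.9, 0.99]
aggregates: ["min", "max", "count"]
tags: ["team:observability"]
debug: false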

Monitoring

Here are the important things to monitor with Veneur:

At Local Node

When running as a local instance, you will be primarily concerned with the following metrics:

  • veneur.flush*.error_total as a count of errors when flushing metrics to Datadog. This should rarely happen. Occasional errors are fine, but sustained errors are bad.
  • veneur.flush.total_duration_ns and veneur.flush.total_duration_ns.count. These metrics track the per-host time spent performing a flush to Datadog. The time should be minimal!

Forwarding

If you are forwarding metrics to a central Veneur, you'll want to monitor these:

  • veneur.forward.error_total and the cause tag. This should pretty much never happen and definitely not be sustained.
  • veneur.forward.duration_ns and veneur.forward.duration_ns.count. These metrics track the per-host time spent performing a forward. The time should be minimal!

At Global Node

When forwarding you'll want to also monitor the global nodes you're using for aggregation:

  • veneur.import.request_error_total and the cause tag. This should pretty much never happen and definitely not be sustained.
  • veneur.import.response_duration_ns and veneur.import.response_duration_ns.count to monitor duration and number of received forwards. This should not fail and not take very long. How long it takes will depend on how many metrics you're forwarding.
  • And the same veneur.flush.* metrics from the "At Local Node" section.

Metrics

Veneur will emit metrics to the stats_address configured above in DogStatsD form. Those metrics are:

  • veneur.packet.error_total - Number of packets that Veneur could not parse due to some sort of formatting error by the client.
  • veneur.flush.post_metrics_total - The total number of time-series points that will be submitted to Datadog via POST. Datadog's rate limiting is roughly proportional to this number.
  • veneur.forward.post_metrics_total - Indicates how many metrics are being forwarded in a given POST request. A "metric", in this context, refers to a unique combination of name, tags and metric type.
  • veneur.*.content_length_bytes.* - The number of bytes in a single POST body. Remember that Veneur POSTs large sets of metrics in multiple separate bodies in parallel. Uses a histogram, so there are multiple metrics generated depending on your local DogStatsD config.
  • veneur.flush.duration_ns - Time taken for a single POST transaction to the Datadog API. Tagged by part for each sub-part marshal (assembling the request body) and post (blocking on an HTTP response).
  • veneur.forward.duration_ns - Same as flush.duration_ns, but for forwarding requests.
  • veneur.flush.total_duration_ns - Total time spent POSTing to Datadog, across all parallel requests. Under most circumstances, this should be roughly equal to the total veneur.flush.duration_ns. If it's not, then some of the POSTs are happening in sequence, which suggests some kind of goroutine scheduling issue.
  • veneur.flush.error_total - Number of errors received POSTing to Datadog.
  • veneur.forward.error_total - Number of errors received POSTing to an upstream Veneur. See also import.request_error_total below.
  • veneur.flush.worker_duration_ns - Per-worker timing — tagged by worker - for flush. This is important as it is the time in which the worker holds a lock and is unavailable for other work.
  • veneur.worker.metrics_processed_total - Total number of metric packets processed between flushes by workers, tagged by worker. This helps you find hot spots where a single worker is handling a lot of metrics. The sum across all workers should be approximately proportional to the number of packets received.
  • veneur.worker.metrics_flushed_total - Total number of metrics flushed at each flush time, tagged by metric_type. A "metric", in this context, refers to a unique combination of name, tags and metric type. You can use this metric to detect when your clients are introducing new instrumentation, or when you acquire new clients.
  • veneur.worker.metrics_imported_total - Total number of metrics received via the importing endpoint. A "metric", in this context, refers to a unique combination of name, tags, type and originating host. This metric indicates how much of a Veneur instance's load is coming from imports.
  • veneur.import.response_duration_ns - Time spent responding to import HTTP requests. This metric is broken into part tags for request (time spent blocking the client) and merge (time spent sending metrics to workers).
  • veneur.import.request_error_total - A counter for the number of import requests that have errored out. You can use this for monitoring and alerting when imports fail.

Error Handling

In addition to logging, Veneur will dutifully send any errors it generates to a Sentry instance. This will occur if you set the sentry_dsn configuration option. Not setting the option will disable Sentry reporting.

Performance

Processing packets quickly is the name of the game.

Benchmarks

The common use case for Veneur is as an aggregator and host-local replacement for DogStatsD, so raw UDP throughput is no longer the top priority. That said, we were processing > 60k packets/second in production before shifting to the current local aggregation method. This outperformed both the Datadog-provided DogStatsD and StatsD in our infrastructure.

Compressed, Chunked POST

Datadog's API is tuned for small POST bodies from lots of hosts since they work on a per-host basis. There are also limits on the size of the body that can be posted. As a result Veneur chunks metrics into smaller bits — governed by flush_max_per_body — and sends them (compressed) concurrently to Datadog. This is essential for reasonable performance as Datadog's API seems to be somewhat O(n) with the size of the body (which is proportional to the number of metrics).

We've found that our hosts generate around 5k metrics and have reasonable performance, so in our case 5k is used as the flush_max_per_body.

Sysctl

The following sysctl settings are used in testing, and are the same one would use for StatsD:

sysctl -w net.ipv4.udp_rmem_min=67108864
sysctl -w net.ipv4.udp_wmem_min=67108864
sysctl -w net.core.netdev_max_backlog=200000
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.rmem_default=16777216
sysctl -w net.ipv4.udp_mem="4648512 6198016 9297024"

SO_REUSEPORT

As other implementations have observed, there's a limit to how many UDP packets a single kernel thread can consume before it starts to fall over. Veneur now supports the SO_REUSEPORT socket option on Linux, allowing multiple threads to share the UDP socket with kernel-space balancing between them. If you've tried throwing more cores at Veneur and it's just not going fast enough, this feature can probably help by allowing more of those cores to work on the socket (which is Veneur's hottest code path by far). Note that this is only supported on Linux (right now). We have not added support for other platforms, like darwin and BSDs.
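
Veneur handles this internally when num_readers is greater than 1, but for reference, the usual Linux pattern for opening such a socket in Go looks roughly like the sketch below (this is not Veneur's own code):

package main

import (
	"context"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

// listenReusePort opens a UDP socket with SO_REUSEPORT set, so several
// sockets (one per reader goroutine) can bind the same address while the
// kernel balances packets between them. Linux-only.
func listenReusePort(addr string) (net.PacketConn, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			if err := c.Control(func(fd uintptr) {
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			}); err != nil {
				return err
			}
			return sockErr
		},
	}
	return lc.ListenPacket(context.Background(), "udp", addr)
}

func main() {
	conn, err := listenReusePort("127.0.0.1:8126")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
}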

TCP connections

Veneur supports reading the statsd protocol from TCP connections. This is mostly to support TLS encryption and authentication, but might be useful on its own. Since TCP is a continuous stream of bytes, this requires each stat to be terminated by a newline character ('\n'). Most statsd clients only add newlines between stats within a single UDP packet, and omit the final trailing newline. This means you will likely need to modify your client to use this feature.
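
A minimal sketch of a client sending a single newline-terminated stat over TCP (the address matches the tcp_address example below):

package main

import (
	"fmt"
	"net"
)

func main() {
	conn, err := net.Dial("tcp", "localhost:8129")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// Every stat sent over the TCP stream must end with '\n'.
	fmt.Fprintf(conn, "foo.bar:1|c|#env:test\n")
}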

TLS encryption and authentication

If you specify the tls_key and tls_certificate options, Veneur will only accept TLS connections on its TCP port. This allows the metrics sent to Veneur to be encrypted.

If you specify the tls_authority_certificate option, Veneur will require clients to present a client certificate, signed by this authority. This ensures that only authenticated clients can connect.

You can generate your own set of keys using openssl:

# Generate the authority key and certificate (2048-bit RSA signed using SHA-256)
openssl genrsa -out cakey.pem 2048
openssl req -new -x509 -sha256 -key cakey.pem -out cacert.pem -days 1095 -subj "/O=Example Inc/CN=Example Certificate Authority"

# Generate the server key and certificate, signed by the authority
openssl genrsa -out serverkey.pem 2048
openssl req -new -sha256 -key serverkey.pem -out serverkey.csr -days 1095 -subj "/O=Example Inc/CN=veneur.example.com"
openssl x509 -sha256 -req -in serverkey.csr -CA cacert.pem -CAkey cakey.pem -CAcreateserial -out servercert.pem -days 1095

# Generate a client key and certificate, signed by the authority
openssl genrsa -out clientkey.pem 2048
openssl req -new -sha256 -key clientkey.pem -out clientkey.csr -days 1095 -subj "/O=Example Inc/CN=Veneur client key"
openssl x509 -req -in clientkey.csr -CA cacert.pem -CAkey cakey.pem -CAcreateserial -out clientcert.pem -days 1095

Set tcp_address, tls_key, tls_certificate, and tls_authority_certificate:

tcp_address: "localhost:8129"
tls_certificate: |
  -----BEGIN CERTIFICATE-----
  MIIC8TCCAdkCCQDc2V7P5nCDLjANBgkqhkiG9w0BAQsFADBAMRUwEwYDVQQKEwxC
  ...
  -----END CERTIFICATE-----
tls_key: |
  -----BEGIN RSA PRIVATE KEY-----
  MIIEpAIBAAKCAQEA7Sntp4BpEYGzgwQR8byGK99YOIV2z88HHtPDwdvSP0j5ZKdg
  ...
  -----END RSA PRIVATE KEY-----
tls_authority_certificate: |
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----
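
For reference, a sketch of a Go client that connects using the client certificate generated above (file names follow the openssl commands; the server name matches the example CN):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io/ioutil"
)

func main() {
	// Present the client certificate signed by the authority.
	cert, err := tls.LoadX509KeyPair("clientcert.pem", "clientkey.pem")
	if err != nil {
		panic(err)
	}
	// Trust the authority that signed the server certificate.
	caPEM, err := ioutil.ReadFile("cacert.pem")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	conn, err := tls.Dial("tcp", "localhost:8129", &tls.Config{
		Certificates: []tls.Certificate{cert},
		RootCAs:      pool,
		ServerName:   "veneur.example.com", // CN of the server certificate above
	})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// Stats over TCP/TLS must be newline-terminated, as with plain TCP.
	fmt.Fprintf(conn, "foo.bar:1|c\n")
}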

Name

The veneur was a person acting as superintendent of the chase, and especially of hounds, in French medieval venery, and an important officer of the royal household. In other words, the master of dogs. :)

Documentation

Index

Constants

This section is empty.

Variables

View Source
var VERSION = "dirty"

VERSION stores the current veneur version. It must be a var so it can be set at link time.

Functions

Types

type Config

type Config struct {
	Aggregates              []string  `yaml:"aggregates"`
	APIHostname             string    `yaml:"api_hostname"`
	AwsAccessKeyID          string    `yaml:"aws_access_key_id"`
	AwsRegion               string    `yaml:"aws_region"`
	AwsS3Bucket             string    `yaml:"aws_s3_bucket"`
	AwsSecretAccessKey      string    `yaml:"aws_secret_access_key"`
	Debug                   bool      `yaml:"debug"`
	EnableProfiling         bool      `yaml:"enable_profiling"`
	FlushMaxPerBody         int       `yaml:"flush_max_per_body"`
	ForwardAddress          string    `yaml:"forward_address"`
	Hostname                string    `yaml:"hostname"`
	HTTPAddress             string    `yaml:"http_address"`
	InfluxAddress           string    `yaml:"influx_address"`
	InfluxConsistency       string    `yaml:"influx_consistency"`
	InfluxDBName            string    `yaml:"influx_db_name"`
	Interval                string    `yaml:"interval"`
	Key                     string    `yaml:"key"`
	MetricMaxLength         int       `yaml:"metric_max_length"`
	NumReaders              int       `yaml:"num_readers"`
	NumWorkers              int       `yaml:"num_workers"`
	OmitEmptyHostname       bool      `yaml:"omit_empty_hostname"`
	Percentiles             []float64 `yaml:"percentiles"`
	ReadBufferSizeBytes     int       `yaml:"read_buffer_size_bytes"`
	SentryDsn               string    `yaml:"sentry_dsn"`
	StatsAddress            string    `yaml:"stats_address"`
	Tags                    []string  `yaml:"tags"`
	TcpAddress              string    `yaml:"tcp_address"`
	TLSAuthorityCertificate string    `yaml:"tls_authority_certificate"`
	TLSCertificate          string    `yaml:"tls_certificate"`
	TLSKey                  string    `yaml:"tls_key"`
	TraceAddress            string    `yaml:"trace_address"`
	TraceAPIAddress         string    `yaml:"trace_api_address"`
	TraceMaxLengthBytes     int       `yaml:"trace_max_length_bytes"`
	UdpAddress              string    `yaml:"udp_address"`
}

func ReadConfig

func ReadConfig(path string) (c Config, err error)

ReadConfig unmarshals the config file and slurps in its data.

func (Config) ParseInterval

func (c Config) ParseInterval() (time.Duration, error)

ParseInterval handles parsing the flush interval as a time.Duration

type DatadogTraceSpan

type DatadogTraceSpan struct {
	Duration int64              `json:"duration"`
	Error    int64              `json:"error"`
	Meta     map[string]string  `json:"meta"`
	Metrics  map[string]float64 `json:"metrics"`
	Name     string             `json:"name"`
	ParentID int64              `json:"parent_id,omitempty"`
	Resource string             `json:"resource,omitempty"`
	Service  string             `json:"service"`
	SpanID   int64              `json:"span_id"`
	Start    int64              `json:"start"`
	TraceID  int64              `json:"trace_id"`
	Type     string             `json:"type"`
}

DatadogTraceSpan represents a trace span as JSON for the Datadog tracing API.

type EventWorker

type EventWorker struct {
	EventChan        chan samplers.UDPEvent
	ServiceCheckChan chan samplers.UDPServiceCheck
	// contains filtered or unexported fields
}

EventWorker is similar to a Worker but it collects events and service checks instead of metrics.

func NewEventWorker

func NewEventWorker(stats *statsd.Client) *EventWorker

NewEventWorker creates an EventWorker ready to collect events and service checks.

func (*EventWorker) Flush

Flush returns the EventWorker's stored events and service checks and resets the stored contents.

func (*EventWorker) Work

func (ew *EventWorker) Work()

Work will start the EventWorker listening for events and service checks. This function will never return.

type Server

type Server struct {
	Workers     []*Worker
	EventWorker *EventWorker
	TraceWorker *TraceWorker

	Hostname string
	Tags     []string

	DDHostname     string
	DDAPIKey       string
	DDTraceAddress string
	HTTPClient     *http.Client

	HTTPAddr    string
	ForwardAddr string
	UDPAddr     *net.UDPAddr
	TraceAddr   *net.UDPAddr
	RcvbufBytes int

	TCPAddr *net.TCPAddr

	HistogramPercentiles []float64
	FlushMaxPerBody      int

	HistogramAggregates samplers.HistogramAggregates
	// contains filtered or unexported fields
}

A Server is the actual veneur instance that will be run.

func NewFromConfig

func NewFromConfig(conf Config) (ret Server, err error)

NewFromConfig creates a new veneur server from a configuration specification.

func (*Server) ConsumePanic

func (s *Server) ConsumePanic(err interface{})

ConsumePanic is intended to be called inside a deferred function when recovering from a panic. It accepts the value of recover() as its only argument, and reports the panic to Sentry, prints the stack, and then repanics (to ensure your program terminates)

func (*Server) Flush

func (s *Server) Flush()

Flush takes the slices of metrics, combines them, and marshals them to JSON for posting to Datadog.

func (*Server) FlushGlobal

func (s *Server) FlushGlobal(ctx context.Context)

FlushGlobal sends any global metrics to their destination.

func (*Server) FlushLocal

func (s *Server) FlushLocal(ctx context.Context)

FlushLocal takes the slices of metrics, combines them, and marshals them to JSON for posting to Datadog.

func (*Server) HTTPServe

func (s *Server) HTTPServe()

HTTPServe starts the HTTP server and listens perpetually until it encounters an unrecoverable error.

func (*Server) HandleMetricPacket

func (s *Server) HandleMetricPacket(packet []byte) error

HandleMetricPacket processes each packet that is sent to the server, and sends to an appropriate worker (EventWorker or Worker).

func (*Server) HandleTracePacket

func (s *Server) HandleTracePacket(packet []byte)

HandleTracePacket accepts an incoming packet as bytes and sends it to the appropriate worker.

func (*Server) Handler

func (s *Server) Handler() http.Handler

Handler returns the Handler responsible for routing request processing.

func (*Server) ImportMetrics

func (s *Server) ImportMetrics(ctx context.Context, jsonMetrics []samplers.JSONMetric)

ImportMetrics feeds a slice of JSON metrics to the server's workers.

func (*Server) IsLocal

func (s *Server) IsLocal() bool

IsLocal indicates whether veneur is running as a local instance (forwarding non-local data to a global veneur instance) or is running as a global instance (sending all data directly to the final destination).

func (*Server) ReadMetricSocket

func (s *Server) ReadMetricSocket(packetPool *sync.Pool, reuseport bool)

ReadMetricSocket listens for available packets to handle.

func (*Server) ReadTCPSocket

func (s *Server) ReadTCPSocket()

ReadTCPSocket listens on Server.TCPAddr for new connections, starting a goroutine for each.

func (*Server) ReadTraceSocket

func (s *Server) ReadTraceSocket(packetPool *sync.Pool, reuseport bool)

ReadTraceSocket listens for available packets to handle.

func (*Server) Shutdown

func (s *Server) Shutdown()

Shutdown signals the server to shut down after closing all current connections.

func (*Server) Start

func (s *Server) Start()

Start spins up the Server to do actual work, firing off goroutines for various workers and utilities.
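
Putting the exported pieces together, a minimal sketch of a veneur binary might look like this (assuming the import path github.com/stripe/veneur; the real cmd/veneur entry point may differ):

package main

import "github.com/stripe/veneur"

func main() {
	conf, err := veneur.ReadConfig("example.yaml")
	if err != nil {
		panic(err)
	}
	server, err := veneur.NewFromConfig(conf)
	if err != nil {
		panic(err)
	}
	// Start the workers and socket readers, then serve HTTP (healthchecks,
	// imports) until an unrecoverable error occurs.
	server.Start()
	server.HTTPServe()
}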

func (*Server) TracingEnabled

func (s *Server) TracingEnabled() bool

TracingEnabled returns true if tracing is enabled.

type TraceWorker

type TraceWorker struct {
	TraceChan chan ssf.SSFSample
	// contains filtered or unexported fields
}

TraceWorker is similar to a Worker, but it collects trace spans instead of metrics.

func NewTraceWorker

func NewTraceWorker(stats *statsd.Client) *TraceWorker

NewTraceWorker creates a TraceWorker ready to collect trace spans.

func (*TraceWorker) Flush

func (tw *TraceWorker) Flush() *ring.Ring

Flush returns the TraceWorker's stored spans and resets the stored contents.

func (*TraceWorker) Work

func (tw *TraceWorker) Work()

Work will start the TraceWorker listening for trace spans. This function will never return.

type Worker

type Worker struct {
	PacketChan chan samplers.UDPMetric
	ImportChan chan []samplers.JSONMetric
	QuitChan   chan struct{}
	// contains filtered or unexported fields
}

Worker is the doodad that does work.

func NewWorker

func NewWorker(id int, stats *statsd.Client, logger *logrus.Logger) *Worker

NewWorker creates and returns a new Worker object.

func (*Worker) Flush

func (w *Worker) Flush() WorkerMetrics

Flush resets the worker's internal metrics and returns their contents.

func (*Worker) ImportMetric

func (w *Worker) ImportMetric(other samplers.JSONMetric)

ImportMetric receives a metric from another veneur instance

func (*Worker) ProcessMetric

func (w *Worker) ProcessMetric(m *samplers.UDPMetric)

ProcessMetric takes a Metric and samples it

This is standalone to facilitate testing

func (*Worker) Stop

func (w *Worker) Stop()

Stop tells the worker to stop listening for work requests.

Note that the worker will only stop *after* it has finished its work.

func (*Worker) Work

func (w *Worker) Work()

Work will start the worker listening for metrics to process or import. It will not return until the worker is sent a message to terminate using Stop()

type WorkerMetrics

type WorkerMetrics struct {
	// contains filtered or unexported fields
}

WorkerMetrics is just a plain struct bundling together the flushed contents of a worker

func NewWorkerMetrics

func NewWorkerMetrics() WorkerMetrics

NewWorkerMetrics initializes a WorkerMetrics struct

func (WorkerMetrics) Upsert

func (wm WorkerMetrics) Upsert(mk samplers.MetricKey, Scope samplers.MetricScope, tags []string) bool

Upsert creates an entry on the WorkerMetrics struct for the given metrickey (if one does not already exist) and updates the existing entry (if one already exists). Returns true if the metric entry was created and false otherwise.

Directories

Path Synopsis
cmd
s3
ssf	Package ssf is a generated protocol buffer package.
tdigest	Package tdigest provides an implementation of Ted Dunning's t-digest, an approximate histogram for online, distributed applications.
trace	Package trace provides an experimental API for initiating traces.
