Package metric

v0.7.0
Published: Jun 26, 2020 License: Apache-2.0 Imports: 8 Imported by: 48

Documentation

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Accumulation added in v0.7.0

type Accumulation struct {
	Metadata
	// contains filtered or unexported fields
}

Accumulation contains the exported data for a single metric instrument and label set, as prepared by an Accumulator for the Processor.

func NewAccumulation added in v0.7.0

func NewAccumulation(descriptor *metric.Descriptor, labels *label.Set, resource *resource.Resource, aggregator Aggregator) Accumulation

NewAccumulation allows Accumulator implementations to construct new Accumulations to send to Processors. The Descriptor, Labels, Resource, and Aggregator represent aggregate metric events received over a single collection period.
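
For illustration, a minimal sketch of how an Accumulator implementation might use NewAccumulation during a collection pass. The import paths (api/metric, api/label, sdk/resource) and the `export` alias for this package are assumptions based on the v0.7.0 module layout, and sendCheckpoint is a hypothetical helper.

package example

import (
	"go.opentelemetry.io/otel/api/label"
	"go.opentelemetry.io/otel/api/metric"
	export "go.opentelemetry.io/otel/sdk/export/metric"
	"go.opentelemetry.io/otel/sdk/resource"
)

// sendCheckpoint is a hypothetical helper showing how an Accumulator
// implementation might package one checkpointed aggregator and hand it to
// the Processor during a collection pass.
func sendCheckpoint(proc export.Processor, desc *metric.Descriptor, labels *label.Set,
	res *resource.Resource, checkpoint export.Aggregator) error {
	acc := export.NewAccumulation(desc, labels, res, checkpoint)
	return proc.Process(acc)
}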

func (Accumulation) Aggregator added in v0.7.0

func (r Accumulation) Aggregator() Aggregator

Aggregator returns the checkpointed aggregator. It is safe to access the checkpointed state without locking.

type Aggregator

type Aggregator interface {
	// Aggregation returns an Aggregation interface to access the
	// current state of this Aggregator.  The caller is
	// responsible for synchronization and must not call any of the
	// other methods in this interface concurrently while using
	// the Aggregation.
	Aggregation() aggregation.Aggregation

	// Update receives a new measured value and incorporates it
	// into the aggregation.  Update() may be called
	// concurrently.
	//
	// Descriptor.NumberKind() should be consulted to determine
	// whether the provided number is an int64 or float64.
	//
	// The Context argument comes from user-level code and could be
	// inspected for a `correlation.Map` or `trace.SpanContext`.
	Update(context.Context, metric.Number, *metric.Descriptor) error

	// SynchronizedMove is called during collection to finish one
	// period of aggregation by atomically saving the
	// currently-updating state into the argument Aggregator AND
	// resetting the current value to the zero state.
	//
	// SynchronizedMove() is called concurrently with Update().  These
	// two methods must be synchronized with respect to each
	// other, for correctness.
	//
	// After saving a synchronized copy, the Aggregator can be converted
	// into one or more of the interfaces in the `aggregation` sub-package,
	// according to the kind of Aggregator that was selected.
	//
	// This method will return an InconsistentAggregatorError if
	// this Aggregator cannot be copied into the destination due
	// to an incompatible type.
	//
	// This call has no Context argument because it is expected to
	// perform only computation.
	SynchronizedMove(destination Aggregator, descriptor *metric.Descriptor) error

	// Merge combines the checkpointed state from the argument
	// Aggregator into this Aggregator.  Merge is not synchronized
	// with respect to Update or SynchronizedMove.
	//
	// The owner of an Aggregator being merged is responsible for
	// synchronization of both Aggregator states.
	Merge(Aggregator, *metric.Descriptor) error
}

Aggregator implements a specific aggregation behavior, e.g., a behavior to track a sequence of updates to an instrument. Sum-only instruments commonly use a simple Sum aggregator, but for the distribution instruments (ValueRecorder, ValueObserver) there are a number of possible aggregators with different cost and accuracy tradeoffs.

Note that any Aggregator may be attached to any instrument--this is the result of the OpenTelemetry API/SDK separation. It is possible to attach a Sum aggregator to a ValueRecorder instrument or a MinMaxSumCount aggregator to a Counter instrument.
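
To make the contract concrete, here is a hedged sketch of a mutex-based sum Aggregator. It is not the SDK's own sum aggregator (which uses atomic Number operations); it assumes that aggregation.Aggregation requires only a Kind() method, that aggregation.SumKind and the aggregation.Sum interface exist in v0.7.0, and that metric.Number offers CoerceToFloat64 and NewFloat64Number as in nearby releases.

package example

import (
	"context"
	"fmt"
	"sync"

	"go.opentelemetry.io/otel/api/metric"
	export "go.opentelemetry.io/otel/sdk/export/metric"
	"go.opentelemetry.io/otel/sdk/export/metric/aggregation"
)

// float64SumAggregator is a hypothetical, mutex-based Aggregator that keeps a
// running float64 sum. This sketch favors clarity over performance.
type float64SumAggregator struct {
	lock sync.Mutex
	sum  float64
}

var _ export.Aggregator = (*float64SumAggregator)(nil)

// Kind marks this state as a sum (assumes aggregation.SumKind is a v0.7.0 constant).
func (a *float64SumAggregator) Kind() aggregation.Kind { return aggregation.SumKind }

// Sum exposes the current total (assumes the aggregation.Sum interface shape).
func (a *float64SumAggregator) Sum() (metric.Number, error) {
	return metric.NewFloat64Number(a.sum), nil
}

// Aggregation returns the current state; per the interface contract, the
// caller is responsible for synchronization while using it.
func (a *float64SumAggregator) Aggregation() aggregation.Aggregation { return a }

// Update folds one measurement into the sum; Descriptor.NumberKind says how
// to interpret the raw metric.Number.
func (a *float64SumAggregator) Update(_ context.Context, num metric.Number, desc *metric.Descriptor) error {
	a.lock.Lock()
	defer a.lock.Unlock()
	a.sum += num.CoerceToFloat64(desc.NumberKind())
	return nil
}

// SynchronizedMove saves the current sum into dest and resets this aggregator
// to zero, ending one period of aggregation.
func (a *float64SumAggregator) SynchronizedMove(dest export.Aggregator, _ *metric.Descriptor) error {
	d, ok := dest.(*float64SumAggregator)
	if !ok {
		return fmt.Errorf("inconsistent aggregator type: %T", dest)
	}
	a.lock.Lock()
	defer a.lock.Unlock()
	d.sum, a.sum = a.sum, 0
	return nil
}

// Merge adds another checkpointed sum into this one; the owner of both
// aggregators is responsible for synchronization.
func (a *float64SumAggregator) Merge(other export.Aggregator, _ *metric.Descriptor) error {
	o, ok := other.(*float64SumAggregator)
	if !ok {
		return fmt.Errorf("inconsistent aggregator type: %T", other)
	}
	a.sum += o.sum
	return nil
}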

type AggregatorSelector added in v0.7.0

type AggregatorSelector interface {
	// AggregatorFor allocates a variable number of aggregators of
	// a kind suitable for the requested export.  This method
	// initializes a `...*Aggregator`, to support making a single
	// allocation.
	//
	// When the call returns without initializing the *Aggregator
	// to a non-nil value, the metric instrument is explicitly
	// disabled.
	//
	// This must return a consistent type to avoid confusion in
	// later stages of the metrics export process, i.e., when
	// Merging or Checkpointing aggregators for a specific
	// instrument.
	//
	// Note: This is context-free because the aggregator should
	// not relate to the incoming context.  This call should not
	// block.
	AggregatorFor(*metric.Descriptor, ...*Aggregator)
}

AggregatorSelector supports selecting the kind of Aggregator to use at runtime for a specific metric instrument.
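
A sketch of a selector that reuses the float64SumAggregator from the previous example for every instrument; sumOnlySelector is a hypothetical name, and a production setup would more likely use one of the SDK's built-in selectors.

package example

import (
	"go.opentelemetry.io/otel/api/metric"
	export "go.opentelemetry.io/otel/sdk/export/metric"
)

// sumOnlySelector is a hypothetical AggregatorSelector that gives every
// instrument the float64SumAggregator sketched above. Leaving one of the
// *Aggregator entries nil instead would disable that instrument.
type sumOnlySelector struct{}

var _ export.AggregatorSelector = sumOnlySelector{}

// AggregatorFor initializes each requested *Aggregator with the same concrete
// type for a given Descriptor, as required for later Merge calls.
func (sumOnlySelector) AggregatorFor(_ *metric.Descriptor, aggPtrs ...*export.Aggregator) {
	for i := range aggPtrs {
		*aggPtrs[i] = &float64SumAggregator{}
	}
}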

type CheckpointSet

type CheckpointSet interface {
	// ForEach iterates over aggregated checkpoints for all
	// metrics that were updated during the last collection
	// period. The callback passed as the function parameter may
	// return an error for any individual checkpoint.
	//
	// The ExportKindSelector argument is used to determine
	// whether the Record is computed using Delta or Cumulative
	// aggregation.
	//
	// ForEach tolerates ErrNoData silently, as this is
	// expected from the Meter implementation. Any other kind
	// of error will immediately halt ForEach and return
	// the error to the caller.
	ForEach(ExportKindSelector, func(Record) error) error

	// Locker supports locking the checkpoint set.  Collection
	// into the checkpoint set cannot take place (in case of a
	// stateful processor) while it is locked.
	//
	// The Processor attached to the Accumulator MUST be called
	// with the lock held.
	sync.Locker

	// RLock acquires a read lock corresponding to this Locker.
	RLock()
	// RUnlock releases a read lock corresponding to this Locker.
	RUnlock()
}

CheckpointSet allows a controller to access a complete checkpoint of aggregated metrics from the Processor. This is passed to the Exporter which may then use ForEach to iterate over the collection of aggregated metrics.
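
As a sketch of the locking contract, one plausible pattern is for the caller (typically an Exporter) to hold the read lock for the duration of ForEach; countRecords is a hypothetical helper.

package example

import (
	export "go.opentelemetry.io/otel/sdk/export/metric"
)

// countRecords is a hypothetical helper illustrating the CheckpointSet
// contract: hold the read lock while iterating, and let the selector decide
// between Delta and Cumulative for each record.
func countRecords(set export.CheckpointSet, selector export.ExportKindSelector) (int, error) {
	set.RLock()
	defer set.RUnlock()

	count := 0
	err := set.ForEach(selector, func(export.Record) error {
		count++
		return nil
	})
	return count, err
}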

type ExportKind added in v0.7.0

type ExportKind int

ExportKind indicates the kind of data exported by an exporter. These bits may be OR-d together when multiple exporters are in use.

const (
	// CumulativeExporter indicates that the Exporter expects a
	// Cumulative Aggregation.
	CumulativeExporter ExportKind = 1 // e.g., Prometheus

	// DeltaExporter indicates that the Exporter expects a
	// Delta Aggregation.
	DeltaExporter ExportKind = 2 // e.g., StatsD

	// PassThroughExporter indicates that the Exporter expects
	// either a Cumulative or a Delta Aggregation, whichever does
	// not require maintaining state for the given instrument.
	PassThroughExporter ExportKind = 4 // e.g., OTLP
)

func (ExportKind) ExportKindFor added in v0.7.0

func (kind ExportKind) ExportKindFor(_ *metric.Descriptor, _ aggregation.Kind) ExportKind

ExportKindFor returns the ExportKind constant itself, allowing an ExportKind value to be used directly as an ExportKindSelector.

func (ExportKind) Includes added in v0.7.0

func (kind ExportKind) Includes(has ExportKind) bool

Includes tests whether `kind` includes a specific kind of exporter.
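
A small sketch of the bit-flag behavior: the kinds supported by several exporters can be OR-d together and queried with Includes, and a single ExportKind can stand in wherever an ExportKindSelector is required. describeKinds is a hypothetical function; the commented results follow from the OR-d flag values above.

package example

import (
	"fmt"

	export "go.opentelemetry.io/otel/sdk/export/metric"
)

func describeKinds() {
	supported := export.CumulativeExporter | export.DeltaExporter

	fmt.Println(supported.Includes(export.CumulativeExporter))  // true: Cumulative is one of the OR-d kinds
	fmt.Println(supported.Includes(export.PassThroughExporter)) // false: PassThrough was not OR-d in

	// An ExportKind implements ExportKindSelector by returning itself.
	var selector export.ExportKindSelector = export.CumulativeExporter
	_ = selector
}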

func (ExportKind) MemoryRequired added in v0.7.0

func (kind ExportKind) MemoryRequired(mkind metric.Kind) bool

MemoryRequired returns whether an exporter of this kind requires memory to export correctly.

func (ExportKind) String added in v0.7.0

func (i ExportKind) String() string

type ExportKindSelector added in v0.7.0

type ExportKindSelector interface {
	// ExportKindFor should return the correct ExportKind that
	// should be used when exporting data for the given metric
	// instrument and Aggregator kind.
	ExportKindFor(*metric.Descriptor, aggregation.Kind) ExportKind
}

ExportKindSelector is a sub-interface of Exporter used to indicate whether the Processor should compute Delta or Cumulative Aggregations.
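
A hedged sketch of a custom selector; deltaSumsSelector is a hypothetical name, and it assumes aggregation.SumKind is one of the v0.7.0 Kind constants.

package example

import (
	"go.opentelemetry.io/otel/api/metric"
	export "go.opentelemetry.io/otel/sdk/export/metric"
	"go.opentelemetry.io/otel/sdk/export/metric/aggregation"
)

// deltaSumsSelector is a hypothetical ExportKindSelector that asks the
// Processor for Delta aggregations of sums and Cumulative aggregations of
// everything else.
type deltaSumsSelector struct{}

var _ export.ExportKindSelector = deltaSumsSelector{}

func (deltaSumsSelector) ExportKindFor(_ *metric.Descriptor, kind aggregation.Kind) export.ExportKind {
	if kind == aggregation.SumKind {
		return export.DeltaExporter
	}
	return export.CumulativeExporter
}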

type Exporter

type Exporter interface {
	// Export is called immediately after completing a collection
	// pass in the SDK.
	//
	// The Context comes from the controller that initiated
	// collection.
	//
	// The CheckpointSet interface refers to the Processor that just
	// completed collection.
	Export(context.Context, CheckpointSet) error

	// ExportKindSelector is an interface used by the Processor
	// in deciding whether to compute Delta or Cumulative
	// Aggregations when passing Records to this Exporter.
	ExportKindSelector
}

Exporter handles presentation of the checkpoint of aggregate metrics. This is the final stage of a metrics export pipeline, where metric data are formatted for a specific system.
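
A minimal, hypothetical Exporter that prints sums to stdout. It relies on the documented fact that an ExportKind value is itself an ExportKindSelector, and it assumes the aggregation sub-package exposes a Sum interface with a Sum() (metric.Number, error) accessor and that metric.Number has CoerceToFloat64; printExporter is not a real exporter in the SDK.

package example

import (
	"context"
	"fmt"

	export "go.opentelemetry.io/otel/sdk/export/metric"
	"go.opentelemetry.io/otel/sdk/export/metric/aggregation"
)

// printExporter is a hypothetical Exporter that writes sum records to stdout.
// Embedding an ExportKind value supplies the required ExportKindSelector;
// PassThroughExporter avoids forcing the Processor to keep extra state.
type printExporter struct {
	export.ExportKind
}

var _ export.Exporter = (*printExporter)(nil)

func newPrintExporter() *printExporter {
	return &printExporter{ExportKind: export.PassThroughExporter}
}

// Export walks the CheckpointSet produced by the Processor after a collection
// pass, printing any record whose aggregation exposes a sum.
func (e *printExporter) Export(_ context.Context, set export.CheckpointSet) error {
	set.RLock()
	defer set.RUnlock()

	return set.ForEach(e, func(rec export.Record) error {
		// aggregation.Sum is assumed to be one of the v0.7.0 sub-interfaces.
		if s, ok := rec.Aggregation().(aggregation.Sum); ok {
			sum, err := s.Sum()
			if err != nil {
				return err
			}
			fmt.Printf("%s{%v} sum=%v [%v, %v]\n",
				rec.Descriptor().Name(), rec.Labels(),
				sum.CoerceToFloat64(rec.Descriptor().NumberKind()),
				rec.StartTime(), rec.EndTime())
		}
		return nil
	})
}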

type Metadata added in v0.7.0

type Metadata struct {
	// contains filtered or unexported fields
}

Metadata contains the common elements for exported metric data that are shared by the Accumulator->Processor and Processor->Exporter steps.

func (Metadata) Descriptor added in v0.7.0

func (m Metadata) Descriptor() *metric.Descriptor

Descriptor describes the metric instrument being exported.

func (Metadata) Labels added in v0.7.0

func (m Metadata) Labels() *label.Set

Labels describes the labels associated with the instrument and the aggregated data.

func (Metadata) Resource added in v0.7.0

func (m Metadata) Resource() *resource.Resource

Resource contains common attributes that apply to this metric event.

type Processor added in v0.7.0

type Processor interface {
	// AggregatorSelector is responsible for selecting the
	// concrete type of Aggregator used for a metric in the SDK.
	//
	// This may be a static decision based on fields of the
	// Descriptor, or it could use an external configuration
	// source to customize the treatment of each metric
	// instrument.
	//
	// The result from AggregatorSelector.AggregatorFor should be
	// the same type for a given Descriptor, or else nil, because
	// Aggregators only know how to Merge with their own type.  If
	// the result is nil, the metric instrument will be disabled.
	//
	// Note that the SDK only calls AggregatorFor when new records
	// require an Aggregator. This does not provide a way to
	// disable metrics with active records.
	AggregatorSelector

	// Process is called by the SDK once per internal record,
	// passing the export Accumulation (a Descriptor, the corresponding
	// Labels, and the checkpointed Aggregator).  This call has no
	// Context argument because it is expected to perform only
	// computation.  An SDK is not expected to call exporters from
	// within Process; use a controller for that (see
	// ./controllers/{pull,push}).
	Process(Accumulation) error
}

Processor is responsible for deciding which kind of aggregation to use (via AggregatorSelector), gathering exported results from the SDK during collection, and deciding over which dimensions to group the exported data.

The SDK supports binding only one of these interfaces, as it has the sole responsibility of determining which Aggregator to use for each record.

The embedded AggregatorSelector interface is called (concurrently) in instrumentation context to select the appropriate Aggregator for an instrument.

The `Process` method is called during collection in a single-threaded context from the SDK, after the aggregator is checkpointed, allowing the processor to build the set of metrics currently being exported.
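
A minimal sketch of a Processor that delegates aggregator selection and simply records what it is given; listProcessor is hypothetical, and a usable processor would additionally group the data and expose it to the Exporter as a CheckpointSet.

package example

import (
	"sync"

	export "go.opentelemetry.io/otel/sdk/export/metric"
)

// listProcessor is a hypothetical Processor that delegates aggregator
// selection to an existing AggregatorSelector and collects the Accumulations
// handed to it during a collection pass.
type listProcessor struct {
	export.AggregatorSelector // chooses the Aggregator for each new record

	lock          sync.Mutex
	accumulations []export.Accumulation
}

var _ export.Processor = (*listProcessor)(nil)

func newListProcessor(selector export.AggregatorSelector) *listProcessor {
	return &listProcessor{AggregatorSelector: selector}
}

// Process is called once per record during collection, after the record's
// aggregator has been checkpointed.
func (p *listProcessor) Process(acc export.Accumulation) error {
	p.lock.Lock()
	defer p.lock.Unlock()
	p.accumulations = append(p.accumulations, acc)
	return nil
}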

type Record

type Record struct {
	Metadata
	// contains filtered or unexported fields
}

Record contains the exported data for a single metric instrument and label set, as prepared by the Processor for the Exporter. This includes the effective start and end time for the aggregation.

func NewRecord

func NewRecord(descriptor *metric.Descriptor, labels *label.Set, resource *resource.Resource, aggregation aggregation.Aggregation, start, end time.Time) Record

NewRecord allows Processor implementations to construct export records. The Descriptor, Labels, Resource, and Aggregation represent aggregate metric events received over a single collection period.
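
For example, a Processor might convert a stored Accumulation into a Record as follows; toRecord is a hypothetical helper, and the accessors come from the embedded Metadata and from Accumulation.Aggregator.

package example

import (
	"time"

	export "go.opentelemetry.io/otel/sdk/export/metric"
)

// toRecord is a hypothetical helper a Processor might use to turn one
// checkpointed Accumulation into an export Record covering [start, end).
func toRecord(acc export.Accumulation, start, end time.Time) export.Record {
	return export.NewRecord(
		acc.Descriptor(),
		acc.Labels(),
		acc.Resource(),
		acc.Aggregator().Aggregation(),
		start,
		end,
	)
}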

func (Record) Aggregation added in v0.7.0

func (r Record) Aggregation() aggregation.Aggregation

Aggregation returns the record's aggregation, an interface to the checkpointed aggregator state; the methods it supports depend on the kind of both the aggregator and the exporter.

func (Record) EndTime added in v0.7.0

func (r Record) EndTime() time.Time

EndTime is the end time of the interval covered by this aggregation.

func (Record) StartTime added in v0.7.0

func (r Record) StartTime() time.Time

StartTime is the start time of the interval covered by this aggregation.

type Subtractor added in v0.7.0

type Subtractor interface {
	// Subtract subtracts the `operand` from this Aggregator and
	// outputs the value in `result`.
	Subtract(operand, result Aggregator, descriptor *metric.Descriptor) error
}

Subtractor is an optional interface implemented by some Aggregators. An Aggregator must support `Subtract()` in order to be configured for a Precomputed-Sum instrument (SumObserver, UpDownSumObserver) using a DeltaExporter.
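
A hedged sketch of how a Processor serving a DeltaExporter might use this interface; computeDelta is a hypothetical helper.

package example

import (
	"fmt"

	"go.opentelemetry.io/otel/api/metric"
	export "go.opentelemetry.io/otel/sdk/export/metric"
)

// computeDelta subtracts the previously exported cumulative value from the
// current one, writing the difference into delta, for an Aggregator that
// supports subtraction.
func computeDelta(current, previous, delta export.Aggregator, desc *metric.Descriptor) error {
	subtractor, ok := current.(export.Subtractor)
	if !ok {
		return fmt.Errorf("aggregator %T does not implement Subtract", current)
	}
	// Subtract(operand, result, ...): delta = current - previous.
	return subtractor.Subtract(previous, delta, desc)
}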
