stats

package module
Version: v4.1.0+incompatible
Published: Aug 13, 2018 License: MIT Imports: 13 Imported by: 0

README


A Go package for abstracting stats collection.

Installation

go get github.com/segmentio/stats

Migration to v4

Version 4 of the stats package introduced a new way of producing metrics, based on defining struct types with tags on certain fields that describe how to interpret the values. This approach makes metric production much more efficient: the program does quick assignments and increments of the struct fields to set the values to be reported, then submits them all with one call to the stats engine, resulting in orders of magnitude faster metric production. Here's an example:

type funcMetrics struct {
    calls struct {
        count int           `metric:"count" type:"counter"`
        time  time.Duration `metric:"time"  type:"histogram"`
    } `metric:"func.calls"`
}
t := time.Now()
f()
callTime := time.Since(t)

m := &funcMetrics{}
m.calls.count = 1
m.calls.time = callTime

// Equivalent to:
//
//   stats.Incr("func.calls.count")
//   stats.Observe("func.calls.time", callTime)
//
stats.Report(m)

To avoid greatly increasing the complexity of the codebase, some old APIs were removed in favor of this new approach, and others were transformed to provide more flexibility and leverage new features.

The stats package used to support only float values. Metrics can now be of various numeric types (see stats.MakeMeasures for a detailed description), so functions like stats.Add now accept an interface{} value instead of a float64. stats.ObserveDuration was also removed, since the new approach makes it obsolete: durations can be passed to stats.Observe directly.

The stats.Engine type used to be configured through a configuration object passed to its constructor function, and a few methods (like Register) were exposed to mutate engine instances. This required synchronization to make it safe to modify an engine from multiple goroutines. We haven't had a use case for modifying an engine after creating it, so the thread-safety constraint was lifted and the fields were exposed directly on the stats.Engine struct type to communicate that they are unsafe to modify concurrently. The helper methods remain, though, to make migration of existing code smoother.

Histogram buckets (mostly used by the prometheus client) are now defined by default on the stats.Buckets global variable instead of within the engine. This decoupling avoids paying the cost of histogram bucket lookups when producing metrics to backends that don't use them (like datadog or influxdb, for example).

The data model also changed a little. Handlers for metrics produced by an engine now accept a list of measures instead of single metrics, each measure being made of a name, a set of fields, and tags to apply to each of those fields. This allows a more generic and more efficient approach to metric production, better fits the influxdb data model, and remains compatible with other clients (datadog, prometheus, ...). A single timeseries is usually identified by the combination of the measure name, a field name and value, and the set of tags set on that measure. Refer to each client for details about how measures are translated to individual metrics.

Note that no changes were made to the metrics produced by each sub-package (httpstats, procstats, ...). This was important because the behavior must stay backward compatible: changes here would implicitly break dashboards or monitors set up on the various metric collection systems that this package supports, potentially causing production issues.

If you find a bug, or an API that is no longer available but deserves to be ported, feel free to open an issue.

Quick Start

Engine

A core concept of the stats package is the Engine. Every program importing the package gets a default engine where all metrics produced are aggregated. The program then has to instantiate clients that will consume from the engine at regular time intervals and report the state of the engine to metrics collection platforms.

package main

import (
    "github.com/segmentio/stats"
    "github.com/segmentio/stats/datadog"
)

func main() {
    // Creates a new datadog client publishing metrics to localhost:8125
    dd := datadog.NewClient("localhost:8125")

    // Register the client so it receives metrics from the default engine.
    stats.Register(dd)

    // Flush the default stats engine on return to ensure all buffered
    // metrics are sent to the dogstatsd server.
    defer stats.Flush()

    // That's it! Metrics produced by the application will now be reported!
    // ...
}

Metrics

package main

import (
    "github.com/segmentio/stats"
    "github.com/segmentio/stats/datadog"
)

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    // Increment counters.
    stats.Incr("user.login")
    defer stats.Incr("user.logout")

    // Set a tag on a counter increment.
    stats.Incr("user.login", stats.Tag{"user", "luke"})

    // ...
}

Flushing Metrics

Metrics are stored in a buffer, which will be flushed when it reaches its capacity. For most use-cases, you do not need to explicitly send out metrics.

If you're producing metrics only very infrequently, you may have metrics that stay in the buffer and never get sent out. In that case, you can manually trigger stats flushes like so:

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    // Force a metrics flush every second
    go func() {
      for range time.Tick(time.Second) {
        stats.Flush()
      }
    }()

    // ...
}

Monitoring

Processes

The github.com/segmentio/stats/procstats package exposes an API for creating a statistics collector on local processes. Statistics are collected for the current process and metrics including Goroutine count and memory usage are reported.

Here's an example of how to use the collector:

package main

import (
    "github.com/segmentio/stats"
    "github.com/segmentio/stats/datadog"
    "github.com/segmentio/stats/procstats"
)

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    // Start a new collector for the current process, reporting Go metrics.
    c := procstats.StartCollector(procstats.NewGoMetrics())

    // Gracefully stop stats collection.
    defer c.Close()

    // ...
}

One can also collect additional statistics on resource delays, such as CPU delays, block I/O delays, and paging/swapping delays. This capability is currently only available on Linux, and can be optionally enabled as follows:

func main() {
    // As above...

    // Start a new collector for the current process, reporting Go metrics.
    c := procstats.StartCollector(procstats.NewDelayMetrics())
    defer c.Close()
}

HTTP Servers

The github.com/segmentio/stats/httpstats package exposes a decorator of http.Handler that automatically adds metric collection to an HTTP handler, reporting things like request processing time, error counters, and header and body sizes.

Here's an example of how to use the decorator:

package main

import (
    "net/http"

    "github.com/segmentio/stats"
    "github.com/segmentio/stats/datadog"
    "github.com/segmentio/stats/httpstats"
)

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    // ...

    http.ListenAndServe(":8080", httpstats.NewHandler(
        http.HandlerFunc(func(res http.ResponseWriter, req *http.Request) {
            // This HTTP handler is automatically reporting metrics for all
            // requests it handles.
            // ...
        }),
    ))
}

HTTP Clients

The github.com/segmentio/stats/httpstats package exposes a decorator of http.RoundTripper which collects and reports metrics for client requests the same way it's done on the server side.

Here's an example of how to use the decorator:

package main

import (
    "net/http"

    "github.com/segmentio/stats"
    "github.com/segmentio/stats/datadog"
    "github.com/segmentio/stats/httpstats"
)

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    // Make a new HTTP client with a transport that reports HTTP metrics
    // to the default engine.
    httpc := &http.Client{
        Transport: httpstats.NewTransport(
            &http.Transport{},
        ),
    }

    // ...
}

You can also modify the default HTTP client to automatically collect metrics for all packages using it, which is very convenient for gaining insights into dependencies.

package main

import (
    "net/http"

    "github.com/segmentio/stats"
    "github.com/segmentio/stats/datadog"
    "github.com/segmentio/stats/httpstats"
)

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    // Wraps the default HTTP client's transport.
    http.DefaultClient.Transport = httpstats.NewTransport(http.DefaultClient.Transport)

    // ...
}

Redis

The github.com/segmentio/stats/redisstats package exposes decorators of the github.com/segmentio/redis-go client and server types, which collect and report metrics the same way the httpstats package does for HTTP.

Here's an example of how to use the decorator on the client side:

package main

import (
    "github.com/segmentio/redis-go"
    "github.com/segmentio/stats"
    "github.com/segmentio/stats/datadog"
    "github.com/segmentio/stats/redisstats"
)

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    client := redis.Client{
        Addr:      "127.0.0.1:6379",
        Transport: redisstats.NewTransport(&redis.Transport{}),
    }

    // ...
}

And on the server side:

package main

import (
    "github.com/segmentio/redis-go"
    "github.com/segmentio/stats"
    "github.com/segmentio/stats/datadog"
    "github.com/segmentio/stats/redisstats"
)

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    handler := redis.HandlerFunc(func(res redis.ResponseWriter, req *redis.Request) {
        // Implement handler function here
    })

    server := redis.Server{
        Handler: redisstats.NewHandler(handler),
    }

    server.ListenAndServe()

    // ...
}

Documentation

Overview

Package stats exposes tools for producing application performance metrics to various metric collection backends.

Index

Constants

This section is empty.

Variables

View Source
var Buckets = HistogramBuckets{}

Buckets is a registry where histogram buckets are placed. Some metric collection backends need to have histogram buckets defined by the program (like Prometheus), a common pattern is to use the init function of a package to register buckets for the various histograms that it produces.

View Source
var DefaultEngine = NewEngine(progname(), Discard)

DefaultEngine is the engine used by global helper functions.

View Source
var Discard = &discard{}

Discard is a handler that doesn't do anything with the measures it receives.

Functions

func Add

func Add(name string, value interface{}, tags ...Tag)

Add increments by value the counter identified by name and tags.

func Flush

func Flush()

Flush flushes the default engine.

func Incr

func Incr(name string, tags ...Tag)

Incr increments by one the counter identified by name and tags.

func Observe

func Observe(name string, value interface{}, tags ...Tag)

Observe reports value for the histogram identified by name and tags.

func Register

func Register(handler Handler)

Register adds handler to the default engine.

func Report

func Report(metrics interface{}, tags ...Tag)

Report is a helper function that delegates to DefaultEngine.

func ReportAt

func ReportAt(time time.Time, metrics interface{}, tags ...Tag)

ReportAt is a helper function that delegates to DefaultEngine.

func Set

func Set(name string, value interface{}, tags ...Tag)

Set sets to value the gauge identified by name and tags.

func TagsAreSorted

func TagsAreSorted(tags []Tag) bool

TagsAreSorted returns true if the given list of tags is sorted by tag name, and false otherwise.

Types

type Buffer

type Buffer struct {
	// Target size of the memory buffer where metrics are serialized.
	//
	// If left to zero, a size of 1024 bytes is used as default (this is low,
	// you should set this value).
	//
	// Note that if the buffer size is small, the program may generate metrics
	// that don't fit into the configured buffer size. In that case the buffer
	// will still pass the serialized byte slice to its Serializer to leave the
	// decision of accepting or rejecting the metrics.
	BufferSize int

	// Size of the internal buffer pool, this controls how well the buffer
	// performs in highly concurrent environments. If unset, 2 x GOMAXPROCS
	// is used as a default value.
	BufferPoolSize int

	// The Serializer used to write the measures.
	//
	// This field cannot be nil.
	Serializer Serializer
	// contains filtered or unexported fields
}

Buffer is the implementation of a measure handler which uses a Serializer to serialize the measures into a memory buffer, and writes them once the buffer has reached a target size.

func (*Buffer) Flush

func (b *Buffer) Flush()

Flush satisfies the Flusher interface.

func (*Buffer) HandleMeasures

func (b *Buffer) HandleMeasures(time time.Time, measures ...Measure)

HandleMeasures satisfies the Handler interface.

type Clock

type Clock struct {
	// contains filtered or unexported fields
}

The Clock type can be used to report statistics on durations.

Clocks are useful to measure the duration taken by sequential execution steps and therefore aren't safe to be used concurrently by multiple goroutines.

func (*Clock) Stamp

func (c *Clock) Stamp(name string)

Stamp reports the time difference between now and the last time the method was called (or since the clock was created).

The metric produced by this method call will have a "stamp" tag set to name.

func (*Clock) StampAt

func (c *Clock) StampAt(name string, now time.Time)

StampAt reports the time difference between now and the last time the method was called (or since the clock was created).

The metric produced by this method call will have a "stamp" tag set to name.

func (*Clock) Stop

func (c *Clock) Stop()

Stop reports the time difference between now and the time the clock was created at.

The metric produced by this method call will have a "stamp" tag set to "total".

func (*Clock) StopAt

func (c *Clock) StopAt(now time.Time)

StopAt reports the time difference between now and the time the clock was created at.

The metric produced by this method call will have a "stamp" tag set to "total".

type Engine

type Engine struct {
	// The measure handler that the engine forwards measures to.
	Handler Handler

	// A prefix set on all metric names produced by the engine.
	Prefix string

	// A list of tags set on all metrics produced by the engine.
	//
	// The list of tags has to be sorted. This is automatically managed by the
	// helper methods WithPrefix, WithTags and the NewEngine function. A program
	// that manipulates this field directly has to respect this requirement.
	Tags []Tag
	// contains filtered or unexported fields
}

An Engine carries the context for producing metrics, it is configured by setting the exported fields or using the helper methods to create sub-engines that inherit the configuration of the base they were created from.

The program must not modify the engine's handler, prefix, or tags after it started using it. If changes need to be made new engines must be created by calls to WithPrefix or WithTags.

func NewEngine

func NewEngine(prefix string, handler Handler, tags ...Tag) *Engine

NewEngine creates and returns a new engine configured with prefix, handler, and tags.

func WithPrefix

func WithPrefix(prefix string, tags ...Tag) *Engine

WithPrefix returns a copy of the default engine with prefix appended to its current prefix and tags set to the merge of its current tags and those passed as arguments. Both the default engine and the returned engine share the same handler.

func WithTags

func WithTags(tags ...Tag) *Engine

WithTags returns a copy of the engine with tags set to the merge of the default engine's current tags and those passed as arguments. Both the default engine and the returned engine share the same handler.

func (*Engine) Add

func (eng *Engine) Add(name string, value interface{}, tags ...Tag)

Add increments by value the counter identified by name and tags.

func (*Engine) Clock

func (eng *Engine) Clock(name string, tags ...Tag) *Clock

Clock returns a new clock identified by name and tags.

func (*Engine) Flush

func (eng *Engine) Flush()

Flush flushes eng's handler (if it implements the Flusher interface).

func (*Engine) Incr

func (eng *Engine) Incr(name string, tags ...Tag)

Incr increments by one the counter identified by name and tags.

func (*Engine) Observe

func (eng *Engine) Observe(name string, value interface{}, tags ...Tag)

Observe reports value for the histogram identified by name and tags.

func (*Engine) Register

func (eng *Engine) Register(handler Handler)

Register adds handler to eng.

func (*Engine) Report

func (eng *Engine) Report(metrics interface{}, tags ...Tag)

Report calls ReportAt with time.Now() as first argument.

func (*Engine) ReportAt

func (eng *Engine) ReportAt(time time.Time, metrics interface{}, tags ...Tag)

ReportAt reports a set of metrics for a given time. The metrics must be of type struct, pointer to struct, or a slice or array of one of those. See MakeMeasures for details about how to make struct types exposing metrics.

func (*Engine) Set

func (eng *Engine) Set(name string, value interface{}, tags ...Tag)

Set sets to value the gauge identified by name and tags.

func (*Engine) WithPrefix

func (eng *Engine) WithPrefix(prefix string, tags ...Tag) *Engine

WithPrefix returns a copy of the engine with prefix appended to eng's current prefix and tags set to the merge of eng's current tags and those passed as argument. Both eng and the returned engine share the same handler.

func (*Engine) WithTags

func (eng *Engine) WithTags(tags ...Tag) *Engine

WithTags returns a copy of the engine with tags set to the merge of eng's current tags and those passed as arguments. Both eng and the returned engine share the same handler.

type Field

type Field struct {
	Name  string
	Value Value
}

A Field is a key/value type that represents a single metric in a Measure.

func MakeField

func MakeField(name string, value interface{}, ftype FieldType) Field

MakeField constructs and returns a new Field from name, value, and ftype.

func (Field) String

func (f Field) String() string

func (Field) Type

func (f Field) Type() FieldType

Type returns the type of f.

type FieldType

type FieldType int32

FieldType is an enumeration of the different metric types that may be set on a Field value.

const (
	// Counter represents incrementing counter metrics.
	Counter FieldType = iota

	// Gauge represents metrics that snapshot a value that may increase and
	// decrease.
	Gauge

	// Histogram represents metrics to observe the distribution of values.
	Histogram
)

func (FieldType) GoString

func (t FieldType) GoString() string

func (FieldType) String

func (t FieldType) String() string

type Flusher

type Flusher interface {
	Flush()
}

Flusher is an interface implemented by measure handlers in order to flush any buffered data.

type Handler

type Handler interface {
	// HandleMeasures is called by the Engine on which the handler was set
	// whenever new measures are produced by the program. The first argument
	// is the time at which the measures were taken.
	//
	// The method must treat the list of measures as read-only values, and
	// must not retain pointers to any of the measures or their sub-fields
	// after returning.
	HandleMeasures(time time.Time, measures ...Measure)
}

The Handler interface is implemented by types that produce measures to various metric collection backends.

func MultiHandler

func MultiHandler(handlers ...Handler) Handler

MultiHandler constructs a handler which dispatches measures to all given handlers.

type HandlerFunc

type HandlerFunc func(time.Time, ...Measure)

HandlerFunc is a function type that makes it possible to use simple functions as measure handlers.

func (HandlerFunc) HandleMeasures

func (f HandlerFunc) HandleMeasures(time time.Time, measures ...Measure)

HandleMeasures calls f, satisfies the Handler interface.

type HistogramBuckets

type HistogramBuckets map[Key][]Value

HistogramBuckets is a map type storing histogram buckets.

func (HistogramBuckets) Set

func (b HistogramBuckets) Set(key string, buckets ...interface{})

Set sets the buckets associated with key to the given list of sorted values.

type Key

type Key struct {
	Measure string
	Field   string
}

Key is a type used to uniquely identify metrics.

type Measure

type Measure struct {
	Name   string
	Fields []Field
	Tags   []Tag
}

Measure is a type that represents a single measure made by the application. Measures are identified by a name, a set of fields that define what has been instrumented, and a set of tags representing different dimensions of the measure.

Implementations of the Handler interface receive lists of measures produced by the application, and assume the tags will be sorted.

func MakeMeasures

func MakeMeasures(prefix string, value interface{}, tags ...Tag) []Measure

MakeMeasures takes a struct value or a pointer to a struct value as argument, and extracts and returns the list of measures that it represents.

The rules for converting values to measure are:

1. All fields exposing a 'metric' tag are expected to be of type bool, int,
int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64, uintptr,
float32, float64, or time.Duration, and represent fields of the measures.
The struct fields may also define a 'type' tag with a value of "counter",
"gauge" or "histogram" to tune the behavior of the measure handlers.

2. All fields exposing a 'tag' tag are expected to be of type string and
represent tags of the measures.

3. All struct fields are searched recursively for fields matching rule (1)
and (2). Tags found within a struct are inherited by measures generated from
sub-fields, they may also be overwritten.

func (Measure) Clone

func (m Measure) Clone() Measure

Clone creates and returns a deep copy of m. The original and returned values do not share any pointers to mutable types (but may share string values, for example).

func (Measure) String

func (m Measure) String() string

type Serializer

type Serializer interface {
	io.Writer

	// Appends the serialized representation of the given measures into b.
	//
	// The method must not retain any of the arguments.
	AppendMeasures(b []byte, time time.Time, measures ...Measure) []byte
}

The Serializer interface is used to abstract the logic of serializing measures.

type Tag

type Tag struct {
	Name  string
	Value string
}

A Tag is a pair of a string key and value set on measures to define the dimensions of the metrics.

func SortTags

func SortTags(tags []Tag) []Tag

SortTags sorts the slice of tags.

func T

func T(k, v string) Tag

T is shorthand for `stats.Tag{Name: k, Value: v}`; it returns the tag with Name k and Value v.

func (Tag) String

func (t Tag) String() string

type Type

type Type int32

const (
	Null Type = iota
	Bool
	Int
	Uint
	Float
	Duration
)

func (Type) GoString

func (t Type) GoString() string

func (Type) String

func (t Type) String() string

type Value

type Value struct {
	// contains filtered or unexported fields
}

func ValueOf

func ValueOf(v interface{}) Value

func (Value) Bool

func (v Value) Bool() bool

func (Value) Duration

func (v Value) Duration() time.Duration

func (Value) Float

func (v Value) Float() float64

func (Value) Int

func (v Value) Int() int64

func (Value) Interface

func (v Value) Interface() interface{}

func (Value) String

func (v Value) String() string

func (Value) Type

func (v Value) Type() Type

func (Value) Uint

func (v Value) Uint() uint64
