README

Metrictank


Introduction

Metrictank is a multi-tenant timeseries platform that can be used as a backend or replacement for Graphite. It provides long-term retention, high availability, and efficient storage, retrieval, and processing for large-scale environments.

Grafana Labs has been running Metrictank in production since December 2015. It currently requires an external datastore like Cassandra or Bigtable, and we highly recommend using Kafka to support clustering, as well as a cluster manager like Kubernetes. This makes it non-trivial to operate, though Grafana Labs has an on-premises product that makes this process much easier.

Features

  • 100% open source
  • Heavily compressed chunks (inspired by the Facebook gorilla paper) dramatically lower CPU, memory, and storage requirements, and get much greater performance out of Cassandra than other solutions.
  • Writeback RAM buffers and chunk caches, serving most data out of memory.
  • Multiple rollup functions (e.g. min/max/sum/count/average) can be configured per series (or group of series) and selected at query time via consolidateBy(). This lets us do consolidation (combined runtime + archived) accurately and correctly, unlike most other Graphite backends such as whisper.
  • Flexible tenancy: can be used as single tenant or multi tenant. Selected data can be shared across all tenants.
  • Input options: carbon, metrics2.0, kafka.
  • Guards against excessively large queries. (per-request series/points restrictions)
  • Data backfill/import from whisper
  • Speculative execution means you can use replicas not only for high availability but also to reduce query latency.
  • Write-Ahead buffer based on Kafka facilitates robust clustering and enables other analytics use cases.
  • Tags and Meta Tags support
  • Render response metadata: performance statistics, series lineage information and rollup indicator visible through Grafana
  • Index pruning (hide inactive/stale series)
  • Timeseries can change resolution (interval) over time; they are merged seamlessly at read time, with no need for any data migrations.
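To give a feel for why gorilla-style chunks compress so well, here is a minimal sketch of the delta-of-delta idea from the paper. This is not Metrictank's actual chunk encoding (see the chunk/tsz package and devdocs/chunk-format.md for that); it only shows why regularly spaced timestamps collapse to almost nothing:

```go
package main

import "fmt"

// deltaOfDelta encodes a series of rising unix timestamps as
// second-order differences. Regularly spaced points collapse to zeros,
// which a bit-level encoder (as in the gorilla paper) can then store
// in as little as one bit each.
func deltaOfDelta(ts []int64) []int64 {
	out := make([]int64, 0, len(ts))
	var prev, prevDelta int64
	for i, t := range ts {
		switch i {
		case 0:
			out = append(out, t) // first timestamp stored as-is
		case 1:
			prevDelta = t - prev
			out = append(out, prevDelta)
		default:
			delta := t - prev
			out = append(out, delta-prevDelta)
			prevDelta = delta
		}
		prev = t
	}
	return out
}

func main() {
	// points arriving every 10s: all deltas-of-deltas are 0
	fmt.Println(deltaOfDelta([]int64{1000, 1010, 1020, 1030, 1040})) // [1000 10 0 0 0]
}
```

A real encoder then packs those zeros into single bits and applies similar tricks (XOR of successive values) to the float64 payloads.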

Relation to Graphite

The goal of Metrictank is to provide a more scalable, secure, resource efficient and performant version of Graphite that is backwards compatible, while also adding some novel functionality. (see Features, above)

There are two main ways to deploy Metrictank:

  • as a backend for Graphite-web, by setting the CLUSTER_SERVERS configuration value.
  • as an alternative to a Graphite stack. This enables most of the additional functionality. Note that Metrictank's API is not yet fully on par with Graphite-web: some less commonly used functions are not yet implemented natively, in which case Metrictank relies on a graphite-web process to handle those requests. See our graphite comparison page for more details.
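For the first option, the relevant graphite-web setting might look roughly like this; the hostname and port are placeholders for your own Metrictank endpoint:

```python
# graphite-web local_settings.py (sketch)
# Point graphite-web at Metrictank as a cluster backend.
# "metrictank:6060" is a placeholder address; substitute your deployment's.
CLUSTER_SERVERS = ["metrictank:6060"]
```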

Limitations

  • No performance/availability isolation between tenants per instance. (only data isolation)
  • Minimal computation locality: we move the data from storage to the processing code (both Metrictank and Graphite).
  • Old data can't be overwritten. We support reordering within the most recent time window, but that's it (unless you restart Metrictank).
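The reorder-window limitation can be pictured with a toy sketch. This is not Metrictank's implementation; the point type and window check below are illustrative only, assuming a fixed window in seconds relative to the newest point seen:

```go
package main

import "fmt"

// point is one timestamped value.
type point struct {
	ts  uint32
	val float64
}

// reorderBuffer accepts points that may arrive slightly out of order,
// as long as they fall within `window` seconds of the newest point seen.
// Anything older is rejected, mirroring the limitation described above.
type reorderBuffer struct {
	window uint32
	newest uint32
	buf    []point
}

func (r *reorderBuffer) add(p point) bool {
	if p.ts+r.window <= r.newest {
		return false // too old: outside the reorder window
	}
	if p.ts > r.newest {
		r.newest = p.ts
	}
	// insert sorted by timestamp
	i := len(r.buf)
	for i > 0 && r.buf[i-1].ts > p.ts {
		i--
	}
	r.buf = append(r.buf, point{})
	copy(r.buf[i+1:], r.buf[i:])
	r.buf[i] = p
	return true
}

func main() {
	r := &reorderBuffer{window: 30}
	fmt.Println(r.add(point{100, 1})) // true
	fmt.Println(r.add(point{120, 2})) // true
	fmt.Println(r.add(point{110, 3})) // true: within window, reordered
	fmt.Println(r.add(point{80, 4}))  // false: older than newest - window
}
```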

Interesting design characteristics (feature or limitation... up to you)

  • Upgrades / process restarts require running multiple instances (potentially only for the duration of the maintenance) and possibly re-assigning the primary role. Otherwise the current chunks will be lost. See the operations guide.
  • Clustering works best with an orchestrator like Kubernetes. Metrictank itself does not automate primary promotions. See clustering for more.
  • Only float64 values. Ints and bools are currently stored as floats (this works quite well thanks to the gorilla compression).
  • Only uint32 unix timestamps, in second resolution. For higher resolution, consider streaming directly to Grafana.
  • We distribute data by hashing keys, like many similar systems. This means no data locality (data that is often used together may not live together).
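The locality trade-off of hash distribution can be seen in a few lines. Metrictank's actual partitioning (via Kafka partitions) differs in detail; this sketch just hashes a key with FNV to pick a shard, showing that lexically adjacent keys land on unrelated shards:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor assigns a metric key to one of n shards by hashing the key.
// Deterministic and uniform, but with no relationship between similar
// keys and their shard: data often queried together may not live together.
func shardFor(key string, n uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	return h.Sum32() % n
}

func main() {
	for _, k := range []string{"servers.web1.cpu", "servers.web1.mem", "servers.web2.cpu"} {
		fmt.Printf("%-20s -> shard %d\n", k, shardFor(k, 8))
	}
}
```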

Docs

  • installation, configuration and operation
  • features in-depth
  • other

Releases and versioning

  • releases and changelog

  • we aim to keep master stable and vet code before merging to master

  • We're pre-1.0 but adopt semver for our 0.MAJOR.MINOR format. The rules are simple:

    • MAJOR version for incompatible API or functionality changes
    • MINOR version for functionality added in a backwards-compatible manner

    We don't do patch-level releases, since minor releases are frequent enough.

License

Copyright 2016-2019 Grafana Labs

This software is distributed under the terms of the GNU Affero General Public License.

Some specific packages have a different license.

Directories

Path Synopsis

api

Package batch implements batched processing for slices of points, in particular aggregations.

Package clock provides aligned tickers.

cmd

mt-fakemetrics/metricbuilder

Package metricbuilder provides various methods to build metrics, or more specifically MetricData structures. Be aware of this behavior when it comes to the MetricName property:

| mpo | metricName has %d directive? | outcome                           |
|-----|------------------------------|-----------------------------------|
| 1   | Yes                          | use directive to always print 1   |
| >1  | Yes                          | use directive to print the number |
| 1   | No                           | don't use directive               |
| >1  | No                           | invalid: will panic               |

cmd-dev

Package conf reads config data from two of carbon's config files:
  • storage-schemas.conf (old and new retention format), see https://graphite.readthedocs.io/en/0.9.9/config-carbon.html#storage-schemas-conf
  • storage-aggregation.conf, see http://graphite.readthedocs.io/en/latest/config-carbon.html#storage-aggregation-conf
as well as our own file index-rules.conf. It also adds defaults (the same ones as graphite), so that even if nothing is matched in the user-provided schemas or aggregations, a setting is *always* found. Uses some modified snippets from github.com/lomik/go-carbon and github.com/lomik/go-whisper.

Package consolidation provides an abstraction for consolidators.

argument types.

idx

Package in provides interfaces, concrete implementations, and utilities to ingest data into metrictank.

carbon

Package carbon provides a traditional carbon input for metrictank. Note: it does not support the "carbon2.0" protocol that serializes metrics2.0 into a plaintext carbon-like protocol.

Package logger provides a custom TextFormatter for use with the github.com/sirupsen/logrus library.

Package mdata stands for "managed data" (or "metrics data", if you will). It has all the stuff to keep metric data in memory, store it, and synchronize save states over the network.

chunk

Package chunk encodes timeseries in chunks of data; see devdocs/chunk-format.md for more information.

chunk/tsz

Package tsz implements time-series compression. It is a fork of https://github.com/dgryski/go-tsz, which implements http://www.vldb.org/pvldb/vol8/p1816-teller.pdf; see devdocs/chunk-format.md for more info.

msg

stacktest

Package stats provides functionality for instrumenting metrics and reporting them. The metrics can be user-specified, or sourced from the runtime (reporters). To use this package correctly, you must instantiate exactly 1 output.

Package test contains utility functions used by tests/benchmarks in various packages.

Package tracing contains some helpers to make working with opentracing a tad simpler.