# OpenTelemetry Go SDK Prometheus Remote Write Exporter for Cortex

This module contains an exporter that sends cumulative metrics data from the OpenTelemetry Go SDK to Cortex using the Prometheus Remote Write API. While it is aimed at Cortex, it should work with other backends that ingest data with the same API.

This exporter is push-based and integrates with the OpenTelemetry Go SDK's push Controller. The Controller periodically collects data and passes it to the exporter, which converts it into `TimeSeries`, a format that Cortex accepts, and sends it to Cortex through HTTP POST requests. The request body is formatted according to the protocol defined by the Prometheus Remote Write API. See Prometheus's remote storage integration documentation for more details on the Remote Write API.

See the `example` submodule for a working example of this exporter.
## Installation

```bash
go get -u go.opentelemetry.io/contrib/exporters/metric/cortex
```
## Setting up the Exporter

Users only need to call the `InstallNewPipeline` function to set up the exporter. It requires a `Config` struct and returns a push Controller and an error. If the error is nil, the setup was successful and the user can begin creating instruments. No other action is needed.

```go
// Create a Config struct named `config`.

pusher, err := cortex.InstallNewPipeline(config)
if err != nil {
    return err
}

// Make instruments and record data using `global.MeterProvider`.
```
## Configuring the Exporter

The Exporter requires certain information, such as the endpoint URL and push interval duration, to function properly. This information is stored in a `Config` struct, which is passed into the Exporter during the setup pipeline.

There are two options for creating a `Config` struct:

1. Use the `utils` submodule to read settings from a YAML file into a new `Config` struct.
   - Call `utils.NewConfig(...)` to create the struct. More details can be found in the `utils` module's README.
   - `Config` structs have a `Validate()` method that sets defaults and checks for errors, but it isn't necessary to call it since `utils.NewConfig()` does that already.
2. Create the `Config` struct manually.
   - Users should call the `Config` struct's `Validate()` method to set default values and check for errors.
```go
// (Option 1) Create Config struct using the utils module.
config, err := utils.NewConfig("config.yml")
if err != nil {
    return err
}

// (Option 2) Create Config struct manually.
configTwo := cortex.Config{
    Endpoint:      "http://localhost:9009/api/prom/push",
    RemoteTimeout: 30 * time.Second,
    PushInterval:  5 * time.Second,
    Headers: map[string]string{
        "test": "header",
    },
}

// Validate() should be called when creating the Config struct manually.
err = configTwo.Validate()
if err != nil {
    return err
}
```
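For reference, a YAML file consumed by Option 1 might look like the following. The values below are illustrative placeholders that mirror the manual example above, not required settings.

```yaml
# config.yml -- a hypothetical settings file for utils.NewConfig().
url: http://localhost:9009/api/prom/push
remote_timeout: 30s
push_interval: 5s
headers:
  test: header
```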
The Config struct supports many different configuration options. Here is the Config struct definition as well as the supported YAML properties. The `mapstructure` tags are used by the `utils` submodule.

```go
type Config struct {
	Endpoint            string            `mapstructure:"url"`
	RemoteTimeout       time.Duration     `mapstructure:"remote_timeout"`
	Name                string            `mapstructure:"name"`
	BasicAuth           map[string]string `mapstructure:"basic_auth"`
	BearerToken         string            `mapstructure:"bearer_token"`
	BearerTokenFile     string            `mapstructure:"bearer_token_file"`
	TLSConfig           map[string]string `mapstructure:"tls_config"`
	ProxyURL            *url.URL          `mapstructure:"proxy_url"`
	PushInterval        time.Duration     `mapstructure:"push_interval"`
	Quantiles           []float64         `mapstructure:"quantiles"`
	HistogramBoundaries []float64         `mapstructure:"histogram_boundaries"`
	Headers             map[string]string `mapstructure:"headers"`
	Client              *http.Client
}
```
### Supported YAML Properties

This is sourced from the Prometheus Remote Write Configuration documentation.

```yaml
# The URL of the endpoint to send samples to.
url: <string>

# Timeout for requests to the remote write endpoint.
[ remote_timeout: <duration> | default = 30s ]

# Name of the remote write config, which if specified must be unique among remote
# write configs. The name will be used in metrics and logging in place of a
# generated value to help users distinguish between remote write configs.
[ name: <string> ]

# Sets the `Authorization` header on every remote write request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <string> ]
  [ password_file: <string> ]

# Sets the `Authorization` header on every remote write request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <string> ]

# Sets the `Authorization` header on every remote write request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: /path/to/bearer/token/file ]

# Configures the remote write request's TLS settings.
tls_config:
  # CA certificate to validate API server certificate with.
  [ ca_file: <filename> ]
  # Certificate and key files for client cert authentication to the server.
  [ cert_file: <filename> ]
  [ key_file: <filename> ]
  # ServerName extension to indicate the name of the server.
  # https://tools.ietf.org/html/rfc4366#section-3.1
  [ server_name: <string> ]
  # Disable validation of the server certificate.
  [ insecure_skip_verify: <boolean> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Quantiles for Distribution aggregations.
[ quantiles: ]
  - <float64>
  - <float64>
  - ...

# Histogram boundaries.
[ histogram_boundaries: ]
  - <float64>
  - <float64>
  - ...
```
## Securing the Exporter

### Authentication

The exporter provides two forms of authentication, which are shown below. Users can add their own custom authentication by providing their own HTTP Client through the `Config` struct and customizing it as needed.

1. **Basic Authentication**

   Basic authentication sets an HTTP Authorization header containing a base64-encoded username/password pair. See RFC 7617 for more information. Note that the password and password file are mutually exclusive; the `Config` struct's `Validate()` method will return an error if both are set.

   ```go
   // Basic Authentication properties in the Config struct.
   cortex.Config{
       // ...
       BasicAuth: map[string]string{
           "username":      "user",
           "password":      "password",
           "password_file": "passwordFile",
       },
   }
   ```

   ```yaml
   # Basic Authentication properties in the YAML file.
   basic_auth:
     username: user
     password: password
     password_file: passwordfile
   ```

2. **Bearer Token Authentication**

   Bearer token authentication sets an HTTP Authorization header containing a bearer token. See RFC 6750 for more information. Note that the bearer token and bearer token file are mutually exclusive; the `Config` struct's `Validate()` method will return an error if both are set.

   ```go
   // Bearer Token Authentication properties in the Config struct.
   cortex.Config{
       BearerToken:     "token",
       BearerTokenFile: "tokenfile",
   }
   ```

   ```yaml
   # Bearer Token Authentication properties in the YAML file.
   bearer_token: token
   bearer_token_file: tokenfile
   ```
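To make the base64 encoding concrete, the sketch below builds the same kind of `Authorization` header value described by RFC 7617. The helper name `basicAuthHeader` is ours for illustration, not part of the exporter's API.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// basicAuthHeader returns the value of an HTTP Authorization header for basic
// authentication: "Basic " followed by base64("username:password").
func basicAuthHeader(username, password string) string {
	credentials := username + ":" + password
	return "Basic " + base64.StdEncoding.EncodeToString([]byte(credentials))
}

func main() {
	// The exporter's HTTP client sends a header equivalent to this one.
	fmt.Println(basicAuthHeader("user", "password"))
	// Output: Basic dXNlcjpwYXNzd29yZA==
}
```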
### TLS

Users can add TLS to the exporter's HTTP Client through the `Config` struct by providing certificate and key files. The certificate type does not matter. See `TestBuildClient()` and `TestMutualTLS()` in `auth_test.go` for an example of how TLS can be used with the exporter. `TestMutualTLS()` checks certificates between the exporter and a server both ways.

```go
// TLS properties in the Config struct.
cortex.Config{
    TLSConfig: map[string]string{
        "ca_file":              "cafile",
        "cert_file":            "certfile",
        "key_file":             "keyfile",
        "server_name":          "server",
        "insecure_skip_verify": "0",
    },
}
```

```yaml
# TLS properties in the YAML file.
tls_config:
  ca_file: cafile
  cert_file: certfile
  key_file: keyfile
  server_name: server
  insecure_skip_verify: true
```
## Instrument to Aggregation Mapping

The exporter uses the `simple` selector's `NewWithHistogramDistribution()`. This means that instruments are mapped to aggregations as shown in the table below.

| Instrument        | Aggregation |
|-------------------|-------------|
| Counter           | Sum         |
| UpDownCounter     | Sum         |
| ValueRecorder     | Histogram   |
| SumObserver       | Sum         |
| UpDownSumObserver | Sum         |
| ValueObserver     | Histogram   |

Although only the `Sum` and `Histogram` aggregations are currently being used, the exporter supports five different aggregations:

- `Sum`
- `LastValue`
- `MinMaxSumCount`
- `Distribution`
- `Histogram`
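To illustrate what the histogram boundaries in the `Config` struct do, here is a plain-Go sketch of bucketing. This is our own illustration, not the SDK's implementation: each recorded value is counted in the first bucket whose upper boundary it does not exceed, with an overflow bucket at the end.

```go
package main

import "fmt"

// bucketCounts counts how many values fall into each histogram bucket defined
// by the given sorted upper boundaries. Values above the last boundary go into
// a final overflow bucket, so len(result) == len(boundaries)+1.
func bucketCounts(values, boundaries []float64) []int {
	counts := make([]int, len(boundaries)+1)
	for _, v := range values {
		i := 0
		for i < len(boundaries) && v > boundaries[i] {
			i++
		}
		counts[i]++
	}
	return counts
}

func main() {
	boundaries := []float64{10, 100, 1000} // e.g. Config.HistogramBoundaries
	values := []float64{3, 42, 42, 500, 5000}
	fmt.Println(bucketCounts(values, boundaries)) // [1 2 1 1]
}
```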
## Error Handling

In general, errors are returned to the calling function or method. Eventually, errors make their way up to the push Controller, which calls the exporter's `Export()` method. The push Controller passes the errors to the OpenTelemetry Go SDK's global error handler.

The exception is when the exporter fails to send an HTTP request to Cortex. Regardless of the status code, the error is ignored. See the retry logic section below for more details.
## Retry Logic

The exporter does not implement any retry logic. Because the exporter sends cumulative metrics data, data will be preserved even if some exports fail.

For example, consider a situation where a user increments a `Counter` instrument 5 times and an export happens between each increment. If the exports happen like so:

```
SUCCESS FAIL FAIL SUCCESS SUCCESS
1       2    3    4       5
```

Then the received data will be:

```
1 4 5
```

The end result is the same since the aggregations are cumulative.
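The reasoning above can be simulated in plain Go: a cumulative counter keeps its running total locally, so the backend's view catches up as soon as any later export succeeds. The function name below is ours, for illustration only.

```go
package main

import "fmt"

// delivered returns the cumulative values the backend actually receives, given
// the counter's value at each export and whether that export succeeded.
func delivered(values []int, succeeded []bool) []int {
	var received []int
	for i, v := range values {
		if succeeded[i] {
			received = append(received, v)
		}
	}
	return received
}

func main() {
	// Counter incremented 5 times; the second and third exports fail.
	values := []int{1, 2, 3, 4, 5}
	succeeded := []bool{true, false, false, true, true}

	received := delivered(values, succeeded)
	fmt.Println(received) // [1 4 5] -- the final cumulative value, 5, still arrives
}
```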
## Design Document

The design document is not in this module, as it contains large images that would increase the size of the overall repo significantly.
## Future Enhancements

- Add a configuration option for different selectors. Users may not want to use the default Histogram selector and should be able to choose which selector to use.
## Documentation

### Variables

```go
var (
	// ErrTwoPasswords occurs when the YAML file contains both `password` and
	// `password_file`.
	ErrTwoPasswords = fmt.Errorf("cannot have two passwords in the YAML file")

	// ErrTwoBearerTokens occurs when the YAML file contains both `bearer_token` and
	// `bearer_token_file`.
	ErrTwoBearerTokens = fmt.Errorf("cannot have two bearer tokens in the YAML file")

	// ErrConflictingAuthorization occurs when the YAML file contains both BasicAuth
	// and bearer token authorization.
	ErrConflictingAuthorization = fmt.Errorf("cannot have both basic auth and bearer token authorization")

	// ErrNoBasicAuthUsername occurs when no username was provided for basic
	// authentication.
	ErrNoBasicAuthUsername = fmt.Errorf("no username provided for basic authentication")

	// ErrNoBasicAuthPassword occurs when no password or password file was provided
	// for basic authentication.
	ErrNoBasicAuthPassword = fmt.Errorf("no password or password file provided for basic authentication")

	// ErrInvalidQuantiles occurs when the supplied quantiles are not between 0 and 1.
	ErrInvalidQuantiles = fmt.Errorf("cannot have quantiles that are less than 0 or greater than 1")
)

// ErrFailedToReadFile occurs when a password / bearer token file exists, but could
// not be read.
var ErrFailedToReadFile = fmt.Errorf("failed to read password / bearer token file")
```

### Functions

```go
// InstallNewPipeline registers a push Controller's MeterProvider globally.
func InstallNewPipeline(config Config, options ...controller.Option) (*controller.Controller, error)

// NewExportPipeline sets up a complete export pipeline with a push Controller and
// Exporter.
func NewExportPipeline(config Config, options ...controller.Option) (*controller.Controller, error)
```

### Types

```go
// Config contains properties the Exporter uses to export metrics data to Cortex.
type Config struct {
	Endpoint            string            `mapstructure:"url"`
	RemoteTimeout       time.Duration     `mapstructure:"remote_timeout"`
	Name                string            `mapstructure:"name"`
	BasicAuth           map[string]string `mapstructure:"basic_auth"`
	BearerToken         string            `mapstructure:"bearer_token"`
	BearerTokenFile     string            `mapstructure:"bearer_token_file"`
	TLSConfig           map[string]string `mapstructure:"tls_config"`
	ProxyURL            *url.URL          `mapstructure:"proxy_url"`
	PushInterval        time.Duration     `mapstructure:"push_interval"`
	Quantiles           []float64         `mapstructure:"quantiles"`
	HistogramBoundaries []float64         `mapstructure:"histogram_boundaries"`
	Headers             map[string]string `mapstructure:"headers"`
	Client              *http.Client
}

// Exporter forwards metrics to a Cortex instance.
type Exporter struct {
	// contains filtered or unexported fields
}

// NewRawExporter validates the Config struct and creates an Exporter with it.
func NewRawExporter(config Config) (*Exporter, error)

// ConvertToTimeSeries converts a CheckpointSet to a slice of TimeSeries pointers.
// Based on the aggregation type, ConvertToTimeSeries will call helper functions
// like convertFromSum to generate the correct number of TimeSeries.
func (e *Exporter) ConvertToTimeSeries(checkpointSet export.CheckpointSet) ([]*prompb.TimeSeries, error)

// ExportKindFor returns CumulativeExporter so the Processor correctly aggregates data.
func (e *Exporter) ExportKindFor(*apimetric.Descriptor, aggregation.Kind) metric.ExportKind
```
## Directories

| Path             | Synopsis |
|------------------|----------|
| `utils` (module) |          |