datadogconnector

package module
v0.93.0
Warning: This package is not in the latest version of its module.
Published: Jan 24, 2024 | License: Apache-2.0 | Imports: 14 | Imported by: 2

README

Datadog Connector

Status
Distributions: contrib
Code Owners: @mx-psi, @dineshg13
Emeritus: @gbbr

Supported Pipeline Types

Exporter Pipeline Type | Receiver Pipeline Type | Stability Level
traces                 | metrics                | beta
traces                 | traces                 | beta

Description

The Datadog Connector is a connector component that computes Datadog APM Stats before sampling, so those stats stay accurate when your traces pipeline is sampled with components such as the tailsamplingprocessor or the probabilisticsamplerprocessor.

The connector is most useful when one of your pipelines uses a sampling component such as the tailsamplingprocessor or the probabilisticsamplerprocessor. Duplicate the sampled traces pipeline and add the Datadog connector to the copy that is not sampled, so that the APM Stats reported to the backend reflect all traces rather than only the sampled subset (see the example configuration under Usage below).

Usage

To use the Datadog Connector, add it to the unsampled copy of the duplicated pipelines while sampling the other; the connector computes APM Stats on every span it sees. Here is an example of how to add it to a pipeline that uses the probabilisticsamplerprocessor:

Before:
# ...
processors:
  # ...
  probabilistic_sampler:
    sampling_percentage: 20
  # add the "datadog" processor definition
  datadog:

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      # prepend it to the sampler in your pipeline:
      processors: [batch, datadog, probabilistic_sampler]
      exporters: [datadog]

    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog]
After:

# ...
processors:
  probabilistic_sampler:
    sampling_percentage: 20

connectors:
  # add the "datadog" connector definition and further configurations
  datadog/connector:

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog/connector]

    traces/2: # this pipeline uses sampling
      receivers: [datadog/connector]
      processors: [batch, probabilistic_sampler]
      exporters: [datadog]

    metrics:
      receivers: [datadog/connector]
      processors: [batch]
      exporters: [datadog]

Here there are two traces pipelines that ingest the same data, but only one is sampled. The sampled pipeline sends its data to the Datadog backend, so you see the sampled subset of the total traces. The unsampled pipeline feeds the Datadog connector, which passes the computed APM Stats on to the metrics pipeline; because it sees every span, those stats give the full picture of how much trace data the application emits.

Feature Gate for Performance

If you are experiencing high memory usage with the Datadog Connector, similar to the reported issue, use the feature gate connector.datadogconnector.performance. With the feature gate enabled, the connector takes OTLP traces and produces an OTLP metric named dd.internal.stats.payload; this metric carries an attribute, also named dd.internal.stats.payload, that contains the bytes of the StatsPayload. With the gate enabled, the Datadog Connector can only be used in conjunction with the Datadog Exporter. Enable the feature only if you need it for performance reasons and higher throughput, and enable it on all collectors in the pipeline that sends data to Datadog (especially in a gateway deployment). We plan to refactor this component in the future so that the signals it produces are usable in any metrics pipeline.
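For illustration, a sketch of enabling the gate from the command line using the collector's standard --feature-gates flag; the otelcol-contrib binary name assumes the contrib distribution and may differ in your build:

otelcol-contrib --config=config.yaml --feature-gates=connector.datadogconnector.performance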

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func NewFactory

func NewFactory() connector.Factory

NewFactory creates a factory for the Datadog connector.
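For illustration, a minimal sketch of registering this factory when assembling a custom collector build; connector.MakeFactoryMap and the import paths follow the standard collector conventions and are assumptions, not something documented by this package:

package main

import (
	"fmt"
	"log"

	"github.com/open-telemetry/opentelemetry-collector-contrib/connector/datadogconnector"
	"go.opentelemetry.io/collector/component"
	"go.opentelemetry.io/collector/connector"
)

// connectorFactories assembles the connector factory map the way a custom
// collector build typically does, registering the Datadog connector factory.
func connectorFactories() (map[component.Type]connector.Factory, error) {
	return connector.MakeFactoryMap(
		datadogconnector.NewFactory(),
	)
}

func main() {
	factories, err := connectorFactories()
	if err != nil {
		log.Fatal(err)
	}
	for typ := range factories {
		fmt.Println("registered connector:", typ) // expected to print "datadog"
	}
}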

Types

type Config

type Config struct{}
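The configuration fields are not documented here. As a small sketch, the default configuration is normally obtained from the factory rather than constructed by hand; the type assertion to *datadogconnector.Config is an assumption about the concrete type CreateDefaultConfig returns:

package main

import (
	"fmt"

	"github.com/open-telemetry/opentelemetry-collector-contrib/connector/datadogconnector"
)

func main() {
	factory := datadogconnector.NewFactory()

	// The concrete type returned by CreateDefaultConfig is an assumption;
	// the assertion guards against it being something other than *Config.
	cfg, ok := factory.CreateDefaultConfig().(*datadogconnector.Config)
	fmt.Println(ok, cfg)
}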

Directories

Path Synopsis
internal
