unrollprocessor

package module
v0.140.1
Published: Nov 18, 2025 License: Apache-2.0 Imports: 10 Imported by: 1

README

Unroll Processor

Status

Stability:     alpha (logs)
Distributions: contrib
Code Owners:   @axw, @schmikei, @rnishtala-sumo

The Unroll Processor takes log records with slice bodies and expands each element of the slice into its own log record. This allows for better processing and analysis of structured log data that contains arrays or lists.

Supported pipelines

  • Logs

How it works

The Unroll Processor processes log records through the following steps:

  1. The processor examines each incoming log record to determine if the body contains a slice (array) structure
  2. For log records with slice bodies, each element of the slice is extracted and used to create a new individual log record
  3. Each new log record retains all the original metadata (timestamps, attributes, etc.) from the parent record
  4. When recursive is enabled, the processor will also unroll nested slices within slice elements
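The steps above can be sketched in Go. This is an illustrative model only, not the processor's actual implementation: `logRecord` is a hypothetical simplified stand-in for a collector log record, with the body held as a plain `any` value.

```go
package main

import "fmt"

// logRecord is a simplified, hypothetical stand-in for a collector
// log record: a body plus the metadata carried over to children.
type logRecord struct {
	Body       any
	Attributes map[string]string
}

// unroll expands a record whose body is a slice into one record per
// element, each retaining the parent's metadata. With recursive set,
// nested slices inside elements are expanded as well.
func unroll(rec logRecord, recursive bool) []logRecord {
	body, ok := rec.Body.([]any)
	if !ok {
		// Non-slice bodies pass through unchanged.
		return []logRecord{rec}
	}
	var out []logRecord
	for _, elem := range body {
		child := logRecord{Body: elem, Attributes: rec.Attributes}
		if recursive {
			out = append(out, unroll(child, true)...)
		} else {
			out = append(out, child)
		}
	}
	return out
}

func main() {
	rec := logRecord{
		Body:       []any{"1", []any{"2", "3"}},
		Attributes: map[string]string{"log.file.name": "test.txt"},
	}
	// Prints 1, 2 and 3 on separate lines.
	for _, r := range unroll(rec, true) {
		fmt.Println(r.Body)
	}
}
```

With `recursive` false, the nested slice above would instead remain intact as the body of a single child record.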

Config

General Config
unroll:
  recursive: false   # Whether to recursively unroll nested slices
Field      Type  Default  Description
recursive  bool  false    Whether to recursively unroll nested slices within slice elements
Example configuration
unroll:
  recursive: false

Examples

Basic Usage

The simplest configuration for the unroll processor:

processors:
  unroll:
    recursive: false

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [unroll]
      exporters: [logging]
Split a log record into multiple via a delimiter

The following configuration uses the transform processor to split the original string body on a delimiter; the unroll processor then creates multiple log records from the resulting slice.

receivers:
  filelog:
    include: [ ./test.txt ]
    start_at: beginning

processors:
  transform:
    log_statements:
      - context: log
        statements:
          - set(body, Split(body, ","))
  unroll:
    recursive: false

exporters:
  file:
    path: ./test/output.json

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [transform, unroll]
      exporters: [file]
Input and Output Example
Sample Input Data

Input file (test.txt):

1,2,3

After transform processor (before unroll): The body becomes a slice: ["1", "2", "3"]

Final Output (after unroll)
{
  "resourceLogs": [
    {
      "resource": {},
      "scopeLogs": [
        {
          "scope": {},
          "logRecords": [
            {
              "observedTimeUnixNano": "1733240156591852000",
              "body": { "stringValue": "1" },
              "attributes": [
                {
                  "key": "log.file.name",
                  "value": { "stringValue": "test.txt" }
                }
              ],
              "traceId": "",
              "spanId": ""
            },
            {
              "observedTimeUnixNano": "1733240156591852000",
              "body": { "stringValue": "2" },
              "attributes": [
                {
                  "key": "log.file.name",
                  "value": { "stringValue": "test.txt" }
                }
              ],
              "traceId": "",
              "spanId": ""
            },
            {
              "observedTimeUnixNano": "1733240156591852000",
              "body": { "stringValue": "3" },
              "attributes": [
                {
                  "key": "log.file.name",
                  "value": { "stringValue": "test.txt" }
                }
              ],
              "traceId": "",
              "spanId": ""
            }
          ]
        }
      ]
    }
  ]
}
Recursive Unrolling

When dealing with nested slices, you can enable recursive unrolling:

processors:
  unroll:
    recursive: true

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [unroll]
      exporters: [logging]

This configuration will unroll nested slices within slice elements, creating individual log records for all nested elements.
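As an illustration (hypothetical values), consider a record whose body contains a nested slice:

```text
Input body:        ["1", ["2", "3"]]

recursive: false → 2 records, with bodies "1" and ["2", "3"]
recursive: true  → 3 records, with bodies "1", "2" and "3"
```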

Common Issues
Log records not being unrolled
  • Cause: The log body is not a slice/array type
  • Solution: Ensure the log body contains a slice. You may need to use the transform processor to convert string data to slices first
Unexpected number of output records
  • Cause: Nested slices with recursive: false setting
  • Solution: Enable recursive: true if you want to unroll nested slices, or restructure your data to avoid nested arrays
Performance issues with large slices
  • Cause: Very large slices being unrolled into many individual log records
  • Solution: Consider preprocessing the data to limit slice sizes, or batch the resulting records downstream (e.g. with the batch processor)

Warnings

The Unroll Processor modifies the structure and quantity of log records in your telemetry pipeline. Consider the following warnings:

  • Data Volume: Unrolling slices can significantly increase the number of log records, which may impact downstream processing performance and storage requirements.
  • Resource Usage: Large slices will consume more memory and CPU resources during the unrolling process.
  • Downstream Compatibility: Ensure that downstream processors and exporters can handle the increased volume of log records.
  • Metadata Duplication: Each unrolled log record retains the same metadata (timestamps, attributes, etc.) from the original record, which may result in data duplication.

Use this processor carefully in production environments and monitor resource usage and performance impact.

Documentation

Overview

Package unrollprocessor contains the logic to unroll log-based telemetry.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func NewFactory

func NewFactory() processor.Factory

NewFactory returns a new factory for the Unroll processor.

Types

type Config

type Config struct {
	Recursive bool `mapstructure:"recursive"`
	// contains filtered or unexported fields
}

Config is the configuration for the unroll processor.

func (*Config) Validate

func (*Config) Validate() error

Validate is a no-op, as no configuration value can be invalid after unmarshalling.

Directories

Path Synopsis
internal
