metricio

package
v1.2.10
Warning

This package is not in the latest version of its module.

Published: Mar 24, 2026 License: Apache-2.0 Imports: 22 Imported by: 0

Documentation

Overview

Package metricio provides readers and writers for metric data in various formats including Parquet, CSV, JSON, and Prometheus remote_write.

Index

Constants

const (
	// DefaultCompressionLevel is the default Brotli compression level.
	DefaultCompressionLevel = 8

	// NoCompression disables compression when passed to NewJSONWriterToWriter.
	NoCompression = -1
)
const (
	// DefaultTimeout is the default HTTP request timeout.
	DefaultTimeout = 30 * time.Second
	// DefaultMaxRetries is the default number of retries for failed requests.
	DefaultMaxRetries = 3
)
const (
	// DefaultBatchSize is the default number of metrics to read at a time.
	DefaultBatchSize = 10000
)

Variables

This section is empty.

Functions

func ConvertToTimeSeries

func ConvertToTimeSeries(metrics []types.ParquetMetric) []prompb.TimeSeries

ConvertToTimeSeries converts a slice of ParquetMetric to prompb.TimeSeries. Labels are sorted to ensure consistent ordering for Prometheus/Mimir.

func CreateWriteRequest

func CreateWriteRequest(timeSeries []prompb.TimeSeries) *prompb.WriteRequest

CreateWriteRequest creates a prompb.WriteRequest from a slice of TimeSeries.

func FindFiles

func FindFiles(dir string, filter func(path string) bool) ([]string, error)

FindFiles recursively finds all files in the given directory that match the filter. It follows symlinks to directories. If filter is nil, all files are included.

func FindSupportedFiles

func FindSupportedFiles(dir string) ([]string, error)

FindSupportedFiles recursively finds all supported metric files in the given directory. Supported formats: .csv, .parquet, .json, .json.br

func IsSupportedFile

func IsSupportedFile(path string) bool

IsSupportedFile returns true if the file has a supported extension for metric data. Supported formats: .csv, .parquet, .json, .json.br

func MetricToTimeSeries

func MetricToTimeSeries(m types.Metric) prompb.TimeSeries

MetricToTimeSeries converts a types.Metric to a prompb.TimeSeries. Labels are sorted alphabetically as required by Prometheus.

func TimeSeriesToMetrics

func TimeSeriesToMetrics(ts prompb.TimeSeries) []types.Metric

TimeSeriesToMetrics converts a prompb.TimeSeries to a slice of types.Metric. Each sample in the TimeSeries produces a separate Metric.

Types

type CAdvisorMetric

type CAdvisorMetric struct {
	Timestamp time.Time
	Value     float64
	Labels    map[string]string
}

CAdvisorMetric represents a row from the cAdvisor CSV export.

type CSVReader

type CSVReader struct {
	// contains filtered or unexported fields
}

CSVReader reads cAdvisor metrics from CSV files exported from Snowflake.

func NewCSVReader

func NewCSVReader(batchSize int) *CSVReader

NewCSVReader creates a new CSVReader with the specified batch size.

func (*CSVReader) ReadCSVFile

func (r *CSVReader) ReadCSVFile(path string, callback func([]prompb.TimeSeries) error) error

ReadCSVFile reads cAdvisor metrics from a CSV file with columns: USAGE_DATE, VALUE, LABELS. It calls the callback function with batches of TimeSeries.

type CSVWriter

type CSVWriter struct {
	// contains filtered or unexported fields
}

CSVWriter writes metrics to CSV in the Snowflake export format. The format uses columns: USAGE_DATE, VALUE, LABELS (with labels as JSON).

func NewCSVWriter

func NewCSVWriter(path string) (*CSVWriter, error)

NewCSVWriter creates a new CSVWriter that writes to the specified path.

func NewCSVWriterToWriter

func NewCSVWriterToWriter(dest io.Writer) (*CSVWriter, error)

NewCSVWriterToWriter creates a new CSVWriter that writes to the given io.Writer.

func (*CSVWriter) Close

func (w *CSVWriter) Close() error

Close closes the writer, flushing any remaining data. If the underlying writer implements io.Closer, it will be closed.

func (*CSVWriter) Write

func (w *CSVWriter) Write(metrics []types.Metric) error

Write writes a batch of metrics to the CSV output.

func (*CSVWriter) WriteOne

func (w *CSVWriter) WriteOne(metric types.Metric) error

WriteOne writes a single metric to the CSV output.

type JSONReader

type JSONReader struct {
	// contains filtered or unexported fields
}

JSONReader reads metrics from JSON or Brotli-compressed JSON files.

func NewJSONReader

func NewJSONReader(batchSize int) *JSONReader

NewJSONReader creates a new JSONReader with the specified batch size.

func (*JSONReader) ReadAllMetrics

func (r *JSONReader) ReadAllMetrics(path string) ([]types.Metric, error)

ReadAllMetrics reads all metrics from a JSON or JSON.br file into memory. This is a convenience method for smaller files.

func (*JSONReader) ReadFromReader

func (r *JSONReader) ReadFromReader(reader io.Reader, callback func([]types.Metric) error) error

ReadFromReader reads metrics from an io.Reader containing JSON data. This is useful for reading from streams or testing.

func (*JSONReader) ReadJSONFile

func (r *JSONReader) ReadJSONFile(path string, callback func([]types.Metric) error) error

ReadJSONFile reads metrics from a JSON or JSON.br file. It detects compression from the file extension and calls the callback with batches of metrics.

type JSONWriter

type JSONWriter struct {
	// contains filtered or unexported fields
}

JSONWriter writes metrics to JSON or Brotli-compressed JSON.

func NewJSONWriter

func NewJSONWriter(path string) (*JSONWriter, error)

NewJSONWriter creates a new JSONWriter that writes to the specified path. If the path ends with .br, Brotli compression is used at the default level.

func NewJSONWriterToWriter

func NewJSONWriterToWriter(dest io.Writer, compressionLevel int) (*JSONWriter, error)

NewJSONWriterToWriter creates a new JSONWriter that writes to the given io.Writer. If compressionLevel >= 0, Brotli compression is applied at that level. If compressionLevel < 0 (e.g., NoCompression), no compression is used.

func NewJSONWriterWithCompression

func NewJSONWriterWithCompression(path string, compressionLevel int) (*JSONWriter, error)

NewJSONWriterWithCompression creates a new JSONWriter that writes to the specified path with a specific compression level. Use NoCompression (-1) to disable compression.

func (*JSONWriter) Close

func (w *JSONWriter) Close() error

Close closes the writer, flushing any remaining data. If the underlying writer implements io.Closer, it will be closed.

func (*JSONWriter) Write

func (w *JSONWriter) Write(metrics []types.Metric) error

Write writes a batch of metrics to the JSON output.

func (*JSONWriter) WriteOne

func (w *JSONWriter) WriteOne(metric types.Metric) error

WriteOne writes a single metric to the JSON output.

type ParquetReader

type ParquetReader struct {
	// contains filtered or unexported fields
}

ParquetReader reads ParquetMetric records from Parquet files.

func NewParquetReader

func NewParquetReader(batchSize int) *ParquetReader

NewParquetReader creates a new ParquetReader with the specified batch size.

func (*ParquetReader) ReadFile

func (r *ParquetReader) ReadFile(path string, callback func([]types.ParquetMetric) error) error

ReadFile reads all ParquetMetric records from a single Parquet file. It calls the callback function with batches of metrics.

func (*ParquetReader) ReadFiles

func (r *ParquetReader) ReadFiles(paths []string, callback func(path string, metrics []types.ParquetMetric) error) error

ReadFiles reads all ParquetMetric records from multiple Parquet files. It calls the callback function with batches of metrics from each file.

type ParquetWriter

type ParquetWriter struct {
	// contains filtered or unexported fields
}

ParquetWriter writes metrics to a Snappy-compressed Parquet file.

func NewParquetWriter

func NewParquetWriter(path string) (*ParquetWriter, error)

NewParquetWriter creates a new ParquetWriter that writes to the specified path.

func NewParquetWriterToWriter

func NewParquetWriterToWriter(dest io.Writer) (*ParquetWriter, error)

NewParquetWriterToWriter creates a new ParquetWriter that writes to the given io.Writer.

func (*ParquetWriter) Close

func (w *ParquetWriter) Close() error

Close closes the writer, flushing any remaining data. If the underlying writer implements io.Closer, it will be closed.

func (*ParquetWriter) Write

func (w *ParquetWriter) Write(metrics []types.Metric) error

Write writes a batch of metrics to the Parquet file. Metrics are converted to ParquetMetric format before writing.

func (*ParquetWriter) WriteOne

func (w *ParquetWriter) WriteOne(metric types.Metric) error

WriteOne writes a single metric to the Parquet file.

func (*ParquetWriter) WriteParquetMetrics

func (w *ParquetWriter) WriteParquetMetrics(metrics []types.ParquetMetric) error

WriteParquetMetrics writes pre-converted ParquetMetric records directly.

type RemoteWriter

type RemoteWriter struct {
	// contains filtered or unexported fields
}

RemoteWriter sends metrics to a Prometheus remote_write endpoint.

func NewRemoteWriter

func NewRemoteWriter(url string) *RemoteWriter

NewRemoteWriter creates a new RemoteWriter.

func (*RemoteWriter) Write

func (w *RemoteWriter) Write(ctx context.Context, timeSeries []prompb.TimeSeries) error

Write sends a batch of TimeSeries to the remote_write endpoint.
