Documentation ¶
Overview ¶
Package metricio provides readers and writers for metric data in various formats including Parquet, CSV, JSON, and Prometheus remote_write.
Index ¶
- Constants
- func ConvertToTimeSeries(metrics []types.ParquetMetric) []prompb.TimeSeries
- func CreateWriteRequest(timeSeries []prompb.TimeSeries) *prompb.WriteRequest
- func FindFiles(dir string, filter func(path string) bool) ([]string, error)
- func FindSupportedFiles(dir string) ([]string, error)
- func IsSupportedFile(path string) bool
- func MetricToTimeSeries(m types.Metric) prompb.TimeSeries
- func TimeSeriesToMetrics(ts prompb.TimeSeries) []types.Metric
- type CAdvisorMetric
- type CSVReader
- type CSVWriter
- type JSONReader
- type JSONWriter
- type ParquetReader
- type ParquetWriter
- type RemoteWriter
Constants ¶
const (
// DefaultCompressionLevel is the default Brotli compression level.
DefaultCompressionLevel = 8
// NoCompression disables compression when passed to NewJSONWriterToWriter.
NoCompression = -1
)
const (
// DefaultTimeout is the default HTTP request timeout.
DefaultTimeout = 30 * time.Second
// DefaultMaxRetries is the default number of retries for failed requests.
DefaultMaxRetries = 3
)
const (
// DefaultBatchSize is the default number of metrics to read at a time.
DefaultBatchSize = 10000
)
Variables ¶
This section is empty.
Functions ¶
func ConvertToTimeSeries ¶
func ConvertToTimeSeries(metrics []types.ParquetMetric) []prompb.TimeSeries
ConvertToTimeSeries converts a slice of ParquetMetric to prompb.TimeSeries. Labels are sorted to ensure consistent ordering for Prometheus/Mimir.
func CreateWriteRequest ¶
func CreateWriteRequest(timeSeries []prompb.TimeSeries) *prompb.WriteRequest
CreateWriteRequest creates a prompb.WriteRequest from a slice of TimeSeries.
func FindFiles ¶
func FindFiles(dir string, filter func(path string) bool) ([]string, error)
FindFiles recursively finds all files in the given directory that match the filter. It follows symlinks to directories. If filter is nil, all files are included.
func FindSupportedFiles ¶
func FindSupportedFiles(dir string) ([]string, error)
FindSupportedFiles recursively finds all supported metric files in the given directory. Supported formats: .csv, .parquet, .json, .json.br
func IsSupportedFile ¶
func IsSupportedFile(path string) bool
IsSupportedFile returns true if the file has a supported extension for metric data. Supported formats: .csv, .parquet, .json, .json.br
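The documented extension check can be sketched with the standard library alone. The `isSupportedFile` helper below is a hypothetical re-implementation for illustration, not the package's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// isSupportedFile mirrors the documented behavior: a path is supported
// when it ends in .csv, .parquet, .json, or .json.br.
func isSupportedFile(path string) bool {
	for _, ext := range []string{".csv", ".parquet", ".json", ".json.br"} {
		if strings.HasSuffix(path, ext) {
			return true
		}
	}
	return false
}

func main() {
	for _, p := range []string{"metrics.json.br", "export.csv", "notes.txt"} {
		fmt.Println(p, isSupportedFile(p))
	}
}
```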
func MetricToTimeSeries ¶
func MetricToTimeSeries(m types.Metric) prompb.TimeSeries
MetricToTimeSeries converts a types.Metric to a prompb.TimeSeries. Labels are sorted alphabetically as required by Prometheus.
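The alphabetical label ordering can be sketched with `sort.Slice`. The `label` struct here stands in for prompb.Label (which also carries Name/Value pairs); the sort itself is the point:

```go
package main

import (
	"fmt"
	"sort"
)

// label stands in for prompb.Label: a Name/Value pair.
type label struct{ Name, Value string }

// sortLabels orders labels by name, as Prometheus requires. The special
// __name__ label naturally sorts first because '_' precedes lowercase letters.
func sortLabels(labels []label) {
	sort.Slice(labels, func(i, j int) bool { return labels[i].Name < labels[j].Name })
}

func main() {
	ls := []label{{"pod", "web-0"}, {"__name__", "cpu_usage"}, {"container", "app"}}
	sortLabels(ls)
	for _, l := range ls {
		fmt.Printf("%s=%s\n", l.Name, l.Value)
	}
}
```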
func TimeSeriesToMetrics ¶
func TimeSeriesToMetrics(ts prompb.TimeSeries) []types.Metric
TimeSeriesToMetrics converts a prompb.TimeSeries to a slice of types.Metric. Each sample in the TimeSeries produces a separate Metric.
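The sample fan-out can be sketched with stand-in types (the real prompb.TimeSeries and types.Metric fields may differ; only the one-metric-per-sample shape is taken from the doc):

```go
package main

import "fmt"

// Stand-ins for prompb sample/series and types.Metric.
type sample struct {
	Value     float64
	Timestamp int64
}
type series struct {
	Labels  map[string]string
	Samples []sample
}
type metric struct {
	Labels    map[string]string
	Value     float64
	Timestamp int64
}

// toMetrics fans one series out into one metric per sample, with every
// metric sharing the series' label set.
func toMetrics(ts series) []metric {
	out := make([]metric, 0, len(ts.Samples))
	for _, s := range ts.Samples {
		out = append(out, metric{Labels: ts.Labels, Value: s.Value, Timestamp: s.Timestamp})
	}
	return out
}

func main() {
	ts := series{
		Labels:  map[string]string{"__name__": "cpu_usage"},
		Samples: []sample{{0.5, 1000}, {0.7, 2000}},
	}
	fmt.Println(len(toMetrics(ts))) // two samples -> two metrics
}
```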
Types ¶
type CAdvisorMetric ¶
CAdvisorMetric represents a row from the cAdvisor CSV export.
type CSVReader ¶
type CSVReader struct {
// contains filtered or unexported fields
}
CSVReader reads cAdvisor metrics from CSV files exported from Snowflake.
func NewCSVReader ¶
NewCSVReader creates a new CSVReader with the specified batch size.
func (*CSVReader) ReadCSVFile ¶
ReadCSVFile reads cAdvisor metrics from a CSV file with columns: USAGE_DATE, VALUE, LABELS. It calls the callback function with batches of TimeSeries.
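Parsing the documented USAGE_DATE, VALUE, LABELS layout needs only the standard library; the column meanings come from the doc, while `parseRow` and the sample data are illustrative:

```go
package main

import (
	"encoding/csv"
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

// parseRow converts one data row of the documented layout into its parts:
// a date string, a float value, and a JSON-encoded label map.
func parseRow(row []string) (date string, value float64, labels map[string]string, err error) {
	date = row[0]
	value, err = strconv.ParseFloat(row[1], 64)
	if err != nil {
		return
	}
	err = json.Unmarshal([]byte(row[2]), &labels)
	return
}

func main() {
	// One header row plus one data row; the labels column is JSON,
	// double-quoted per CSV escaping rules.
	raw := "USAGE_DATE,VALUE,LABELS\n" +
		`2024-01-15 00:00:00,0.25,"{""container"":""app"",""pod"":""web-0""}"` + "\n"

	rows, err := csv.NewReader(strings.NewReader(raw)).ReadAll()
	if err != nil {
		panic(err)
	}
	for _, row := range rows[1:] { // skip the header
		date, value, labels, err := parseRow(row)
		if err != nil {
			panic(err)
		}
		fmt.Println(date, value, labels["container"])
	}
}
```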
type CSVWriter ¶
type CSVWriter struct {
// contains filtered or unexported fields
}
CSVWriter writes metrics to CSV in the Snowflake export format. The format uses columns: USAGE_DATE, VALUE, LABELS (with labels as JSON).
func NewCSVWriter ¶
NewCSVWriter creates a new CSVWriter that writes to the specified path.
func NewCSVWriterToWriter ¶
NewCSVWriterToWriter creates a new CSVWriter that writes to the given io.Writer.
func (*CSVWriter) Close ¶
Close closes the writer, flushing any remaining data. If the underlying writer implements io.Closer, it will be closed.
type JSONReader ¶
type JSONReader struct {
// contains filtered or unexported fields
}
JSONReader reads metrics from JSON or Brotli-compressed JSON files.
func NewJSONReader ¶
func NewJSONReader(batchSize int) *JSONReader
NewJSONReader creates a new JSONReader with the specified batch size.
func (*JSONReader) ReadAllMetrics ¶
func (r *JSONReader) ReadAllMetrics(path string) ([]types.Metric, error)
ReadAllMetrics reads all metrics from a JSON or JSON.br file into memory. This is a convenience method for smaller files.
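A read-all convenience method like this is typically built on top of the batch callback by appending every batch into one slice. A self-contained sketch of that pattern, with `readBatches` standing in for a callback-based reader such as ReadJSONFile:

```go
package main

import "fmt"

type metric struct {
	Name  string
	Value float64
}

// readBatches stands in for a callback-based reader, delivering metrics
// in batches of at most batchSize.
func readBatches(batchSize int, all []metric, cb func([]metric) error) error {
	for start := 0; start < len(all); start += batchSize {
		end := start + batchSize
		if end > len(all) {
			end = len(all)
		}
		if err := cb(all[start:end]); err != nil {
			return err
		}
	}
	return nil
}

// readAll shows how a ReadAllMetrics-style convenience method can wrap
// the batch callback: accumulate every batch into a single slice.
func readAll(batchSize int, all []metric) ([]metric, error) {
	var out []metric
	err := readBatches(batchSize, all, func(batch []metric) error {
		out = append(out, batch...)
		return nil
	})
	return out, err
}

func main() {
	src := []metric{{"a", 1}, {"b", 2}, {"c", 3}}
	got, _ := readAll(2, src)
	fmt.Println(len(got)) // 3
}
```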
func (*JSONReader) ReadFromReader ¶
ReadFromReader reads metrics from an io.Reader containing JSON data. This is useful for reading from streams or testing.
func (*JSONReader) ReadJSONFile ¶
ReadJSONFile reads metrics from a JSON or JSON.br file. It detects compression from the file extension and calls the callback with batches of metrics.
type JSONWriter ¶
type JSONWriter struct {
// contains filtered or unexported fields
}
JSONWriter writes metrics to JSON or Brotli-compressed JSON.
func NewJSONWriter ¶
func NewJSONWriter(path string) (*JSONWriter, error)
NewJSONWriter creates a new JSONWriter that writes to the specified path. If the path ends with .br, Brotli compression is used at the default level.
func NewJSONWriterToWriter ¶
func NewJSONWriterToWriter(dest io.Writer, compressionLevel int) (*JSONWriter, error)
NewJSONWriterToWriter creates a new JSONWriter that writes to the given io.Writer. If compressionLevel >= 0, Brotli compression is applied at that level. If compressionLevel < 0 (e.g., NoCompression), no compression is used.
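The two documented rules, suffix-based selection in NewJSONWriter and sign-based selection in NewJSONWriterToWriter, can be sketched without the third-party Brotli encoder itself. The helper names below are hypothetical; the constants mirror the documented ones:

```go
package main

import (
	"fmt"
	"strings"
)

// Mirror of the documented constants.
const (
	defaultCompressionLevel = 8
	noCompression           = -1
)

// levelForPath picks a level the way NewJSONWriter is documented to:
// a .br suffix selects Brotli at the default level.
func levelForPath(path string) int {
	if strings.HasSuffix(path, ".br") {
		return defaultCompressionLevel
	}
	return noCompression
}

// compressed applies the NewJSONWriterToWriter rule: levels >= 0 enable
// Brotli; negative levels (e.g. NoCompression) write plain JSON.
func compressed(level int) bool { return level >= 0 }

func main() {
	fmt.Println(compressed(levelForPath("metrics.json.br"))) // true
	fmt.Println(compressed(levelForPath("metrics.json")))    // false
}
```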
func NewJSONWriterWithCompression ¶
func NewJSONWriterWithCompression(path string, compressionLevel int) (*JSONWriter, error)
NewJSONWriterWithCompression creates a new JSONWriter that writes to the specified path with a specific compression level. Use NoCompression (-1) to disable compression.
func (*JSONWriter) Close ¶
func (w *JSONWriter) Close() error
Close closes the writer, flushing any remaining data. If the underlying writer implements io.Closer, it will be closed.
type ParquetReader ¶
type ParquetReader struct {
// contains filtered or unexported fields
}
ParquetReader reads ParquetMetric records from Parquet files.
func NewParquetReader ¶
func NewParquetReader(batchSize int) *ParquetReader
NewParquetReader creates a new ParquetReader with the specified batch size.
func (*ParquetReader) ReadFile ¶
func (r *ParquetReader) ReadFile(path string, callback func([]types.ParquetMetric) error) error
ReadFile reads all ParquetMetric records from a single Parquet file. It calls the callback function with batches of metrics.
func (*ParquetReader) ReadFiles ¶
func (r *ParquetReader) ReadFiles(paths []string, callback func(path string, metrics []types.ParquetMetric) error) error
ReadFiles reads all ParquetMetric records from multiple Parquet files. It calls the callback function with batches of metrics from each file.
type ParquetWriter ¶
type ParquetWriter struct {
// contains filtered or unexported fields
}
ParquetWriter writes metrics to a Snappy-compressed Parquet file.
func NewParquetWriter ¶
func NewParquetWriter(path string) (*ParquetWriter, error)
NewParquetWriter creates a new ParquetWriter that writes to the specified path.
func NewParquetWriterToWriter ¶
func NewParquetWriterToWriter(dest io.Writer) (*ParquetWriter, error)
NewParquetWriterToWriter creates a new ParquetWriter that writes to the given io.Writer.
func (*ParquetWriter) Close ¶
func (w *ParquetWriter) Close() error
Close closes the writer, flushing any remaining data. If the underlying writer implements io.Closer, it will be closed.
func (*ParquetWriter) Write ¶
func (w *ParquetWriter) Write(metrics []types.Metric) error
Write writes a batch of metrics to the Parquet file. Metrics are converted to ParquetMetric format before writing.
func (*ParquetWriter) WriteOne ¶
func (w *ParquetWriter) WriteOne(metric types.Metric) error
WriteOne writes a single metric to the Parquet file.
func (*ParquetWriter) WriteParquetMetrics ¶
func (w *ParquetWriter) WriteParquetMetrics(metrics []types.ParquetMetric) error
WriteParquetMetrics writes pre-converted ParquetMetric records directly.
type RemoteWriter ¶
type RemoteWriter struct {
// contains filtered or unexported fields
}
RemoteWriter sends metrics to a Prometheus remote_write endpoint.
func NewRemoteWriter ¶
func NewRemoteWriter(url string) *RemoteWriter
NewRemoteWriter creates a new RemoteWriter.
func (*RemoteWriter) Write ¶
func (w *RemoteWriter) Write(ctx context.Context, timeSeries []prompb.TimeSeries) error
Write sends a batch of TimeSeries to the remote_write endpoint.
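A remote_write request is an HTTP POST carrying a snappy-compressed, protobuf-encoded WriteRequest. Both encoders are third-party, so the sketch below only shows the standard protocol headers with `net/http`, using a placeholder body; `newRemoteWriteRequest` is a hypothetical helper:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// newRemoteWriteRequest builds a POST with the standard Prometheus
// remote_write headers. The body would normally be a snappy-compressed,
// protobuf-encoded WriteRequest.
func newRemoteWriteRequest(url string, body []byte) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/x-protobuf")
	req.Header.Set("Content-Encoding", "snappy")
	req.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")
	return req, nil
}

func main() {
	// Test server that echoes the remote_write headers it received.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "%s %s", r.Header.Get("Content-Encoding"), r.Header.Get("X-Prometheus-Remote-Write-Version"))
	}))
	defer srv.Close()

	req, err := newRemoteWriteRequest(srv.URL, []byte("placeholder"))
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	echoed, _ := io.ReadAll(resp.Body)
	fmt.Println(string(echoed)) // snappy 0.1.0
}
```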