Documentation ¶
Index ¶
- Variables
- func ExpandChunks(iter chunks.Iterator) ([]chunks.Meta, error)
- func ExpandSamples(iter chunkenc.Iterator, newSampleFn func(t int64, v float64) tsdbutil.Sample) ([]tsdbutil.Sample, error)
- func NewListChunkSeriesIterator(chks ...chunks.Meta) chunks.Iterator
- func NewListSeriesIterator(samples Samples) chunkenc.Iterator
- type Appendable
- type Appender
- type BufferedSeriesIterator
- func (b *BufferedSeriesIterator) Buffer() chunkenc.Iterator
- func (b *BufferedSeriesIterator) Err() error
- func (b *BufferedSeriesIterator) Next() bool
- func (b *BufferedSeriesIterator) PeekBack(n int) (t int64, v float64, ok bool)
- func (b *BufferedSeriesIterator) ReduceDelta(delta int64) bool
- func (b *BufferedSeriesIterator) Reset(it chunkenc.Iterator)
- func (b *BufferedSeriesIterator) Seek(t int64) bool
- func (b *BufferedSeriesIterator) Values() (int64, float64)
- type ChunkIteratable
- type ChunkQuerier
- type ChunkQueryable
- type ChunkSeries
- type ChunkSeriesEntry
- type ChunkSeriesSet
- type LabelQuerier
- type Labels
- type Querier
- type Queryable
- type QueryableFunc
- type SampleAndChunkQueryable
- type SampleIteratable
- type Samples
- type SelectHints
- type Series
- type SeriesEntry
- type SeriesSet
- type Storage
- type VerticalChunkSeriesMergeFunc
- type VerticalSeriesMergeFunc
- type Warnings
Constants ¶
This section is empty.
Variables ¶
var (
	ErrNotFound                    = errors.New("not found")
	ErrOutOfOrderSample            = errors.New("out of order sample")
	ErrDuplicateSampleForTimestamp = errors.New("duplicate sample for timestamp")
	ErrOutOfBounds                 = errors.New("out of bounds")
)
The errors exposed.
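A minimal sketch of how a write path might classify append failures against these sentinel errors. The droppable helper, the drop-versus-fail policy, and the upstream Prometheus import path are illustrative assumptions, not part of this package; errors.Is is used in case an implementation wraps the sentinels.

package errexample

import (
	"errors"
	"log"

	"github.com/prometheus/prometheus/storage"
)

// droppable reports whether an append error can be skipped rather than
// aborting the whole batch: duplicate and out-of-order samples are commonly
// dropped, while anything else is treated as fatal here.
func droppable(err error) bool {
	switch {
	case errors.Is(err, storage.ErrOutOfOrderSample),
		errors.Is(err, storage.ErrDuplicateSampleForTimestamp):
		log.Printf("dropping sample: %v", err)
		return true
	default:
		return false // e.g. storage.ErrOutOfBounds, storage.ErrNotFound, or an unknown error
	}
}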
Functions ¶
func ExpandChunks ¶ added in v1.25.3
func ExpandChunks(iter chunks.Iterator) ([]chunks.Meta, error)
ExpandChunks iterates over all chunks in the iterator, buffering them all in a slice.
func ExpandSamples ¶ added in v1.25.3
func ExpandSamples(iter chunkenc.Iterator, newSampleFn func(t int64, v float64) tsdbutil.Sample) ([]tsdbutil.Sample, error)
ExpandSamples iterates over all samples in the iterator, buffering them all in a slice. Optionally it takes a sample constructor, which is useful when you want to compare sample slices built from different sample implementations. If nil, the sample type from this package is used.
func NewListChunkSeriesIterator ¶ added in v1.25.3
func NewListChunkSeriesIterator(chks ...chunks.Meta) chunks.Iterator
NewListChunkSeriesIterator returns a listChunkSeriesIterator that allows iterating over the provided chunks.
func NewListSeriesIterator ¶ added in v1.25.3
func NewListSeriesIterator(samples Samples) chunkenc.Iterator
NewListSeriesIterator returns a listSeriesIterator that allows iterating over the provided samples.
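A short sketch tying ExpandSamples to a hand-built in-memory series, created with NewListSeries (documented under SeriesEntry below). The sample type, the Demo function, and the upstream Prometheus import paths are assumptions for illustration.

package listexample

import (
	"fmt"

	"github.com/prometheus/prometheus/pkg/labels"
	"github.com/prometheus/prometheus/storage"
	"github.com/prometheus/prometheus/tsdb/tsdbutil"
)

// sample is a minimal tsdbutil.Sample implementation for test data; the
// T()/V() method set is assumed from the tsdbutil package.
type sample struct {
	t int64
	v float64
}

func (s sample) T() int64   { return s.t }
func (s sample) V() float64 { return s.v }

// Demo builds an in-memory series and drains its iterator back into a slice.
func Demo() error {
	series := storage.NewListSeries(
		labels.FromStrings("__name__", "demo_metric"),
		[]tsdbutil.Sample{sample{1000, 1}, sample{2000, 2}, sample{3000, 3}},
	)

	// Passing nil as the constructor uses this package's own sample type.
	out, err := storage.ExpandSamples(series.Iterator(), nil)
	if err != nil {
		return err
	}
	for _, s := range out {
		fmt.Println(s.T(), s.V())
	}
	return nil
}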
Types ¶
type Appendable ¶ added in v1.25.3
type Appendable interface {
	// Appender returns a new appender for the storage. The implementation
	// can choose whether or not to use the context, for deadlines or to check
	// for errors.
	Appender(ctx context.Context) Appender
}
Appendable allows creating appenders.
type Appender ¶ added in v1.25.3
type Appender interface {
	// Add adds a sample pair for the given series. A reference number is
	// returned which can be used to add further samples in the same or later
	// transactions.
	// Returned reference numbers are ephemeral and may be rejected in calls
	// to AddFast() at any point. Adding the sample via Add() returns a new
	// reference number.
	// If the reference is 0 it must not be used for caching.
	Add(l labels.Labels, t int64, v float64) (uint64, error)

	// AddFast adds a sample pair for the referenced series. It is generally
	// faster than adding a sample by providing its full label set.
	AddFast(ref uint64, t int64, v float64) error

	// Commit submits the collected samples and purges the batch. If Commit
	// returns a non-nil error, it also rolls back all modifications made in
	// the appender so far, as Rollback would do. In any case, an Appender
	// must not be used anymore after Commit has been called.
	Commit() error

	// Rollback rolls back all modifications made in the appender so far.
	// Appender has to be discarded after rollback.
	Rollback() error
}
Appender provides batched appends against a storage. It must be completed with a call to Commit or Rollback and must not be reused afterwards.
Operations on the Appender interface are not goroutine-safe.
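A sketch of the intended Appender lifecycle under the assumptions above: one appender per batch, samples appended with a cached reference where possible, then exactly one Commit or Rollback. The writeBatch helper, the fallback-on-ErrNotFound behavior for rejected references, and the import paths are assumptions for illustration.

package appendexample

import (
	"context"
	"errors"

	"github.com/prometheus/prometheus/pkg/labels"
	"github.com/prometheus/prometheus/storage"
)

// writeBatch appends one series' samples as a single batch.
func writeBatch(ctx context.Context, db storage.Appendable, lset labels.Labels, ts []int64, vs []float64) error {
	app := db.Appender(ctx)

	var ref uint64
	for i := range ts {
		var err error
		if ref == 0 {
			// No usable cached reference: add with the full label set.
			ref, err = app.Add(lset, ts[i], vs[i])
		} else {
			err = app.AddFast(ref, ts[i], vs[i])
			if errors.Is(err, storage.ErrNotFound) {
				// References are ephemeral; a rejected one (assumed to surface
				// as ErrNotFound) means we fall back to a full Add.
				ref, err = app.Add(lset, ts[i], vs[i])
			}
		}
		if err != nil {
			_ = app.Rollback() // discard the whole batch on any other error
			return err
		}
	}
	return app.Commit()
}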
type BufferedSeriesIterator ¶ added in v1.25.3
type BufferedSeriesIterator struct {
// contains filtered or unexported fields
}
BufferedSeriesIterator wraps an iterator with a look-back buffer.
func NewBuffer ¶ added in v1.25.3
func NewBuffer(delta int64) *BufferedSeriesIterator
NewBuffer returns a new iterator that buffers the values within the time range of the current element and the duration of delta before, initialized with an empty iterator. Use Reset() to set an actual iterator to be buffered.
func NewBufferIterator ¶ added in v1.25.3
func NewBufferIterator(it chunkenc.Iterator, delta int64) *BufferedSeriesIterator
NewBufferIterator returns a new iterator that buffers the values within the time range of the current element and the duration of delta before.
func (*BufferedSeriesIterator) Buffer ¶ added in v1.25.3
func (b *BufferedSeriesIterator) Buffer() chunkenc.Iterator
Buffer returns an iterator over the buffered data. Invalidates previously returned iterators.
func (*BufferedSeriesIterator) Err ¶ added in v1.25.3
func (b *BufferedSeriesIterator) Err() error
Err returns the last encountered error.
func (*BufferedSeriesIterator) Next ¶ added in v1.25.3
func (b *BufferedSeriesIterator) Next() bool
Next advances the iterator to the next element.
func (*BufferedSeriesIterator) PeekBack ¶ added in v1.25.3
func (b *BufferedSeriesIterator) PeekBack(n int) (t int64, v float64, ok bool)
PeekBack returns the nth previous element of the iterator. If there is none buffered, ok is false.
func (*BufferedSeriesIterator) ReduceDelta ¶ added in v1.25.3
func (b *BufferedSeriesIterator) ReduceDelta(delta int64) bool
ReduceDelta lowers the buffered time delta, for the current SeriesIterator only.
func (*BufferedSeriesIterator) Reset ¶ added in v1.25.3
func (b *BufferedSeriesIterator) Reset(it chunkenc.Iterator)
Reset re-uses the buffer with a new iterator, resetting the buffered time delta to its original value.
func (*BufferedSeriesIterator) Seek ¶ added in v1.25.3
func (b *BufferedSeriesIterator) Seek(t int64) bool
Seek advances the iterator to the element at time t or greater.
func (*BufferedSeriesIterator) Values ¶ added in v1.25.3
func (b *BufferedSeriesIterator) Values() (int64, float64)
Values returns the current element of the iterator.
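A sketch of using BufferedSeriesIterator to walk a raw sample iterator while keeping a 5-minute look-back window. The printWithLookback helper and the import paths are assumptions; the input iterator could come from any Series.Iterator().

package bufferexample

import (
	"fmt"

	"github.com/prometheus/prometheus/storage"
	"github.com/prometheus/prometheus/tsdb/chunkenc"
)

// printWithLookback prints each sample next to the previous one, if any is
// still held in the 300 000 ms (5 min) look-back buffer.
func printWithLookback(it chunkenc.Iterator) error {
	buf := storage.NewBufferIterator(it, 5*60*1000)
	for buf.Next() {
		t, v := buf.Values()
		if pt, pv, ok := buf.PeekBack(1); ok {
			fmt.Printf("t=%d v=%g (prev t=%d v=%g)\n", t, v, pt, pv)
		} else {
			fmt.Printf("t=%d v=%g (no previous sample buffered)\n", t, v)
		}
	}
	return buf.Err()
}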
type ChunkIteratable ¶ added in v1.25.3
type ChunkQuerier ¶ added in v1.25.3
type ChunkQuerier interface {
	LabelQuerier

	// Select returns a set of series that matches the given label matchers.
	// Caller can specify if it requires returned series to be sorted. Prefer not requiring sorting for better performance.
	// It allows passing hints that can help in optimising select, but it's up to implementation how this is used if used at all.
	Select(sortSeries bool, hints *SelectHints, matchers ...*labels.Matcher) ChunkSeriesSet
}
ChunkQuerier provides querying access over time series data of a fixed time range.
func NewMergeChunkQuerier ¶ added in v1.25.3
func NewMergeChunkQuerier(primaries []ChunkQuerier, secondaries []ChunkQuerier, mergeFn VerticalChunkSeriesMergeFunc) ChunkQuerier
NewMergeChunkQuerier returns a new ChunkQuerier that merges the results of the given primary and secondary chunk queriers. See the NewFanout commentary to learn more about the difference between primary and secondary queriers.
In case of overlaps between the data returned by the primaries' and secondaries' Selects, the merge function will be used. TODO(bwplotka): Currently merge will compact overlapping chunks with a bigger chunk, without limit. Split it: https://github.com/prometheus/tsdb/issues/670
func NoopChunkedQuerier ¶ added in v1.25.3
func NoopChunkedQuerier() ChunkQuerier
NoopChunkedQuerier is a ChunkQuerier that does nothing.
type ChunkQueryable ¶ added in v1.25.3
type ChunkQueryable interface {
	// ChunkQuerier returns a new ChunkQuerier on the storage.
	ChunkQuerier(ctx context.Context, mint, maxt int64) (ChunkQuerier, error)
}
A ChunkQueryable handles queries against a storage. Use it when you need access to samples in their encoded (chunk) format.
type ChunkSeries ¶ added in v1.25.3
type ChunkSeries interface {
	Labels
	ChunkIteratable
}
ChunkSeries exposes a single time series and allows iterating over chunks.
type ChunkSeriesEntry ¶ added in v1.25.3
func NewListChunkSeriesFromSamples ¶ added in v1.25.3
func NewListChunkSeriesFromSamples(lset labels.Labels, samples ...[]tsdbutil.Sample) *ChunkSeriesEntry
NewListChunkSeriesFromSamples returns a chunk series entry that allows iterating over the provided samples. NOTE: It uses an inefficient chunk encoding implementation that does not care about chunk size.
func (*ChunkSeriesEntry) Iterator ¶ added in v1.25.3
func (s *ChunkSeriesEntry) Iterator() chunks.Iterator
func (*ChunkSeriesEntry) Labels ¶ added in v1.25.3
func (s *ChunkSeriesEntry) Labels() labels.Labels
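A sketch that builds a ChunkSeriesEntry from two sample batches and drains its chunk iterator with ExpandChunks. The sample type, the Demo function, and the import paths are assumptions for illustration.

package chunkexample

import (
	"fmt"

	"github.com/prometheus/prometheus/pkg/labels"
	"github.com/prometheus/prometheus/storage"
	"github.com/prometheus/prometheus/tsdb/tsdbutil"
)

// sample is a minimal tsdbutil.Sample implementation for test data; the
// T()/V() method set is assumed from the tsdbutil package.
type sample struct {
	t int64
	v float64
}

func (s sample) T() int64   { return s.t }
func (s sample) V() float64 { return s.v }

// Demo encodes one chunk per sample batch and counts the resulting chunks.
func Demo() error {
	cs := storage.NewListChunkSeriesFromSamples(
		labels.FromStrings("__name__", "demo_metric"),
		[]tsdbutil.Sample{sample{1000, 1}, sample{2000, 2}},
		[]tsdbutil.Sample{sample{3000, 3}},
	)

	chks, err := storage.ExpandChunks(cs.Iterator())
	if err != nil {
		return err
	}
	fmt.Println("labels:", cs.Labels(), "chunks:", len(chks))
	return nil
}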
type ChunkSeriesSet ¶ added in v1.25.3
type ChunkSeriesSet interface {
	Next() bool

	// At returns a full chunk series. The returned series should be iterable even after Next is called.
	At() ChunkSeries

	// The error that iteration has failed with.
	// When an error occurs, the set cannot continue to iterate.
	Err() error

	// A collection of warnings for the whole set.
	// Warnings could be returned even if iteration has not failed with an error.
	Warnings() Warnings
}
ChunkSeriesSet contains a set of chunked series.
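A sketch of the iteration contract shared by ChunkSeriesSet (and SeriesSet): advance with Next, read with At, and always check Err and Warnings once iteration stops. The drainChunkSeriesSet helper and the import path are assumptions for illustration.

package setexample

import (
	"fmt"

	"github.com/prometheus/prometheus/storage"
)

// drainChunkSeriesSet prints the labels of every series in the set.
func drainChunkSeriesSet(set storage.ChunkSeriesSet) error {
	for set.Next() {
		cs := set.At()
		fmt.Println(cs.Labels())
		// cs.Iterator() would yield the encoded chunks of this series.
	}
	if err := set.Err(); err != nil {
		return err
	}
	fmt.Println("warnings:", set.Warnings())
	return nil
}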
func EmptyChunkSeriesSet ¶ added in v1.25.3
func EmptyChunkSeriesSet() ChunkSeriesSet
EmptyChunkSeriesSet returns a chunk series set that's always empty.
func ErrChunkSeriesSet ¶ added in v1.25.3
func ErrChunkSeriesSet(err error) ChunkSeriesSet
ErrChunkSeriesSet returns a chunk series set that wraps an error.
func NewMergeChunkSeriesSet ¶ added in v1.25.3
func NewMergeChunkSeriesSet(sets []ChunkSeriesSet, mergeFunc VerticalChunkSeriesMergeFunc) ChunkSeriesSet
NewMergeChunkSeriesSet returns a new ChunkSeriesSet that merges many ChunkSeriesSets together.
func NewSeriesSetToChunkSet ¶ added in v1.25.3
func NewSeriesSetToChunkSet(chk SeriesSet) ChunkSeriesSet
NewSeriesSetToChunkSet converts SeriesSet to ChunkSeriesSet by encoding chunks from samples.
func NoopChunkedSeriesSet ¶ added in v1.25.3
func NoopChunkedSeriesSet() ChunkSeriesSet
NoopChunkedSeriesSet is a ChunkSeriesSet that does nothing.
type LabelQuerier ¶ added in v1.25.3
type LabelQuerier interface {
	// LabelValues returns all potential values for a label name.
	// It is not safe to use the strings beyond the lifetime of the querier.
	// If matchers are specified, the returned result set is reduced
	// to label values of metrics matching the matchers.
	LabelValues(name string, matchers ...*labels.Matcher) ([]string, Warnings, error)

	// LabelNames returns all the unique label names present in the block in sorted order.
	// TODO(yeya24): support matchers or hints.
	LabelNames() ([]string, Warnings, error)

	// Close releases the resources of the Querier.
	Close() error
}
LabelQuerier provides querying access over labels.
type Labels ¶ added in v1.25.3
type Labels interface {
	// Labels returns the complete set of labels. For series it means all labels identifying the series.
	Labels() labels.Labels
}
Labels represents an item that has labels, e.g. a time series.
type Querier ¶ added in v1.25.3
type Querier interface {
	LabelQuerier

	// Select returns a set of series that matches the given label matchers.
	// Caller can specify if it requires returned series to be sorted. Prefer not requiring sorting for better performance.
	// It allows passing hints that can help in optimising select, but it's up to implementation how this is used if used at all.
	Select(sortSeries bool, hints *SelectHints, matchers ...*labels.Matcher) SeriesSet
}
Querier provides querying access over time series data of a fixed time range.
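A sketch of typical Select usage: build matchers, iterate the SeriesSet, and walk each series' samples. The printRequests helper, the metric name, and the import paths are assumptions; the sample-iterator method set (Next/At/Err) is assumed from the chunkenc package of this era.

package queryexample

import (
	"fmt"

	"github.com/prometheus/prometheus/pkg/labels"
	"github.com/prometheus/prometheus/storage"
)

// printRequests selects all series of one metric and prints every sample.
// sortSeries is false because no ordering is needed, and no hints are passed.
func printRequests(q storage.Querier) error {
	matcher, err := labels.NewMatcher(labels.MatchEqual, "__name__", "http_requests_total")
	if err != nil {
		return err
	}

	set := q.Select(false, nil, matcher)
	for set.Next() {
		s := set.At()
		fmt.Println(s.Labels())
		it := s.Iterator()
		for it.Next() {
			t, v := it.At()
			fmt.Printf("  %d %g\n", t, v)
		}
		if err := it.Err(); err != nil {
			return err
		}
	}
	return set.Err()
}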
func NewMergeQuerier ¶ added in v1.25.3
func NewMergeQuerier(primaries []Querier, secondaries []Querier, mergeFn VerticalSeriesMergeFunc) Querier
NewMergeQuerier returns a new Querier that merges the results of the given primary and secondary queriers. See the NewFanout commentary to learn more about the difference between primary and secondary queriers.
In case of overlaps between the data returned by the primaries' and secondaries' Selects, the merge function will be used.
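A sketch of wiring a merge querier over one local and two remote queriers. ChainedSeriesMerge (documented under type Series below) is used as the merge function, assuming its variadic-Series signature from upstream Prometheus; the mergedQuerier helper is illustrative.

package mergeexample

import (
	"github.com/prometheus/prometheus/storage"
)

// mergedQuerier treats the local querier as primary (its errors fail the
// query) and the remotes as secondary (their errors only become warnings).
// ChainedSeriesMerge deduplicates overlapping samples from replicated series.
func mergedQuerier(local, remoteA, remoteB storage.Querier) storage.Querier {
	return storage.NewMergeQuerier(
		[]storage.Querier{local},
		[]storage.Querier{remoteA, remoteB},
		storage.ChainedSeriesMerge,
	)
}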
func NoopQuerier ¶ added in v1.25.3
func NoopQuerier() Querier
NoopQuerier is a Querier that does nothing.
type Queryable ¶ added in v1.25.3
type Queryable interface {
	// Querier returns a new Querier on the storage.
	Querier(ctx context.Context, mint, maxt int64) (Querier, error)
}
A Queryable handles queries against a storage. Use it when you need access to all samples without the chunk encoding abstraction, e.g. PromQL.
type QueryableFunc ¶ added in v1.25.3
QueryableFunc is an adapter to allow the use of ordinary functions as Queryables. It follows the idea of http.HandlerFunc. TODO(bwplotka): Move to promql/engine_test.go?
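A sketch of the adapter pattern, mirroring http.HandlerFunc. The function signature used here, (ctx, mint, maxt) returning (Querier, error), is an assumption matching the Queryable.Querier method; the staticQueryable helper is illustrative.

package queryablefuncexample

import (
	"context"

	"github.com/prometheus/prometheus/storage"
)

// staticQueryable adapts a plain function into a Queryable that always
// returns the given Querier.
func staticQueryable(q storage.Querier) storage.Queryable {
	return storage.QueryableFunc(func(ctx context.Context, mint, maxt int64) (storage.Querier, error) {
		return q, nil
	})
}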
type SampleAndChunkQueryable ¶ added in v1.25.3
type SampleAndChunkQueryable interface {
	Queryable
	ChunkQueryable
}
SampleAndChunkQueryable allows retrieving samples as well as encoded samples in the form of chunks.
type SampleIteratable ¶ added in v1.25.3
type Samples ¶ added in v1.25.3
The Samples interface allows working with slices of types that are compatible with tsdbutil.Sample.
type SelectHints ¶ added in v1.25.3
type SelectHints struct {
	Start int64 // Start time in milliseconds for this select.
	End   int64 // End time in milliseconds for this select.

	Step int64  // Query step size in milliseconds.
	Func string // String representation of surrounding function or aggregation.

	Grouping []string // List of label names used in aggregation.
	By       bool     // Indicate whether it is without or by.
	Range    int64    // Range vector selector range in milliseconds.
}
SelectHints specifies hints passed for data selections. The hints are only an option; it is up to the implementation how, and whether, they are used.
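A sketch of the hints a query engine might pass when evaluating something like rate(http_requests_total[5m]) over a window. The hintsForRate helper, the 5-minute widening, and the import path are assumptions for illustration; all values are milliseconds.

package hintsexample

import (
	"github.com/prometheus/prometheus/storage"
)

// hintsForRate builds hints for a 5-minute rate over [start, end] at the
// given step.
func hintsForRate(start, end, step int64) *storage.SelectHints {
	return &storage.SelectHints{
		Start: start - 5*60*1000, // widened by the range selector so the first step has data
		End:   end,
		Step:  step,
		Func:  "rate",
		Range: 5 * 60 * 1000,
	}
}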
type Series ¶ added in v1.25.3
type Series interface {
	Labels
	SampleIteratable
}
Series exposes a single time series and allows iterating over samples.
func ChainedSeriesMerge ¶ added in v1.25.3
ChainedSeriesMerge returns a single series built from many identically labelled, potentially overlapping series by chaining their samples together. If samples overlap, one of the overlapping samples is kept (chosen arbitrarily) and all others with the same timestamp are dropped.
This works best with replicated series, where the data from two series is exactly the same. It does not work well with data that is only "almost" the same, e.g. from two Prometheus HA replicas. This is fine, since from the Prometheus perspective that never happens.
It's optimized for non-overlap cases as well.
type SeriesEntry ¶ added in v1.25.3
func NewListSeries ¶ added in v1.25.3
func NewListSeries(lset labels.Labels, s []tsdbutil.Sample) *SeriesEntry
NewListSeries returns a series entry whose iterator allows iterating over the provided samples.
func (*SeriesEntry) Iterator ¶ added in v1.25.3
func (s *SeriesEntry) Iterator() chunkenc.Iterator
func (*SeriesEntry) Labels ¶ added in v1.25.3
func (s *SeriesEntry) Labels() labels.Labels
type SeriesSet ¶ added in v1.25.3
type SeriesSet interface {
	Next() bool

	// At returns a full series. The returned series should be iterable even after Next is called.
	At() Series

	// The error that iteration has failed with.
	// When an error occurs, the set cannot continue to iterate.
	Err() error

	// A collection of warnings for the whole set.
	// Warnings could be returned even if iteration has not failed with an error.
	Warnings() Warnings
}
SeriesSet contains a set of series.
func EmptySeriesSet ¶ added in v1.25.3
func EmptySeriesSet() SeriesSet
EmptySeriesSet returns a series set that's always empty.
func ErrSeriesSet ¶ added in v1.25.3
ErrSeriesSet returns a series set that wraps an error.
func NewMergeSeriesSet ¶ added in v1.25.3
func NewMergeSeriesSet(sets []SeriesSet, mergeFunc VerticalSeriesMergeFunc) SeriesSet
NewMergeSeriesSet returns a new SeriesSet that merges many SeriesSets together.
func NewSeriesSetFromChunkSeriesSet ¶ added in v1.25.3
func NewSeriesSetFromChunkSeriesSet(chk ChunkSeriesSet) SeriesSet
NewSeriesSetFromChunkSeriesSet converts ChunkSeriesSet to SeriesSet by decoding chunks one by one.
func NoopSeriesSet ¶ added in v1.25.3
func NoopSeriesSet() SeriesSet
NoopSeriesSet is a SeriesSet that does nothing.
type Storage ¶ added in v1.25.3
type Storage interface {
	SampleAndChunkQueryable
	Appendable

	// StartTime returns the oldest timestamp stored in the storage.
	StartTime() (int64, error)

	// Close closes the storage and all its underlying resources.
	Close() error
}
Storage ingests and manages samples, along with various indexes. All methods are goroutine-safe. Storage implements storage.SampleAppender.
func NewFanout ¶ added in v1.25.3
NewFanout returns a new fanout Storage, which proxies reads and writes through to multiple underlying storages.
The difference between primary and secondary Storage exists only on the read (Querier) path and goes as follows:
- If the primary querier returns an error, then every Querier operation will fail.
- If any secondary querier returns an error, the result from that querier is discarded. The overall operation will still succeed, and the error from the secondary querier will be returned as a warning.
NOTE: In the case of Prometheus, it treats all remote storages as secondary / best effort.
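A sketch of wiring a fanout Storage from one local and one remote Storage. The constructor's logger parameter, its go-kit logger type, and the newFanoutStorage helper are assumptions based on the upstream Prometheus implementation of this era, not confirmed by this documentation.

package fanoutexample

import (
	"github.com/go-kit/kit/log"

	"github.com/prometheus/prometheus/storage"
)

// newFanoutStorage makes the local storage primary and the remote storage a
// best-effort secondary, matching how Prometheus treats remote storages.
func newFanoutStorage(local, remote storage.Storage) storage.Storage {
	return storage.NewFanout(log.NewNopLogger(), local, remote)
}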
type VerticalChunkSeriesMergeFunc ¶ added in v1.25.3
type VerticalChunkSeriesMergeFunc func(...ChunkSeries) ChunkSeries
VerticalChunkSeriesMergeFunc returns a merged chunk series implementation that merges potentially time-overlapping chunk series with the same labels into a single ChunkSeries.
NOTE: It's up to the implementation how series are vertically merged (whether chunks are sorted, re-encoded, etc.).
func NewCompactingChunkSeriesMerger ¶ added in v1.25.3
func NewCompactingChunkSeriesMerger(mergeFunc VerticalSeriesMergeFunc) VerticalChunkSeriesMergeFunc
NewCompactingChunkSeriesMerger returns a VerticalChunkSeriesMergeFunc that merges the same chunk series into a single chunk series. When chunks overlap, it compacts them into one or more time-ordered, non-overlapping chunks with merged data. Samples from overlapping chunks are merged using the series vertical merge func. It expects the same labels for each given series.
NOTE: Use the returned merge function only when you expect potentially overlapping series, as it introduces a small overhead to handle overlaps between series.
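A sketch of combining NewCompactingChunkSeriesMerger with NewMergeChunkQuerier so that overlapping chunks of the same series are compacted, with overlapping samples resolved by ChainedSeriesMerge (its variadic-Series signature is assumed from upstream Prometheus). The mergedChunkQuerier helper is illustrative.

package chunkmergeexample

import (
	"github.com/prometheus/prometheus/storage"
)

// mergedChunkQuerier builds a ChunkQuerier over a primary and any number of
// secondary chunk queriers, compacting overlapping chunks per series.
func mergedChunkQuerier(primary storage.ChunkQuerier, secondaries ...storage.ChunkQuerier) storage.ChunkQuerier {
	return storage.NewMergeChunkQuerier(
		[]storage.ChunkQuerier{primary},
		secondaries,
		storage.NewCompactingChunkSeriesMerger(storage.ChainedSeriesMerge),
	)
}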
type VerticalSeriesMergeFunc ¶ added in v1.25.3
VerticalSeriesMergeFunc returns a merged series implementation that merges series with the same labels together. It has to handle time-overlapping series as well.