storage

package
v2.25.0-snrc.5
Published: Feb 17, 2021 License: Apache-2.0 Imports: 17 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

var (
	ErrNotFound                    = errors.New("not found")
	ErrOutOfOrderSample            = errors.New("out of order sample")
	ErrDuplicateSampleForTimestamp = errors.New("duplicate sample for timestamp")
	ErrOutOfBounds                 = errors.New("out of bounds")
)

The errors exposed.

Functions

func ExpandChunks

func ExpandChunks(iter chunks.Iterator) ([]chunks.Meta, error)

ExpandChunks iterates over all chunks in the iterator, buffering all of them in a slice.

func ExpandSamples

func ExpandSamples(iter chunkenc.Iterator, newSampleFn func(t int64, v float64) tsdbutil.Sample) ([]tsdbutil.Sample, error)

ExpandSamples iterates over all samples in the iterator, buffering all of them in a slice. Optionally, it takes a sample constructor, which is useful when you want to compare sample slices with different sample implementations. If nil, the sample type from this package is used.

func NewListChunkSeriesIterator

func NewListChunkSeriesIterator(chks ...chunks.Meta) chunks.Iterator

NewListChunkSeriesIterator returns a listChunkSeriesIterator that allows iterating over the provided chunks.

func NewListSeriesIterator

func NewListSeriesIterator(samples Samples) chunkenc.Iterator

NewListSeriesIterator returns a listSeriesIterator that allows iterating over the provided samples.
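
The following is an illustrative sketch, not part of this package's documentation, showing how NewListSeriesIterator and ExpandSamples fit together. The sample and sampleSlice helper types are defined locally for the example, and the import paths assume the upstream github.com/prometheus/prometheus module layout, which may differ for this fork.

package main

import (
	"fmt"

	"github.com/prometheus/prometheus/storage"
	"github.com/prometheus/prometheus/tsdb/tsdbutil"
)

// sample is a trivial tsdbutil.Sample implementation used only for this sketch.
type sample struct {
	t int64
	v float64
}

func (s sample) T() int64   { return s.t }
func (s sample) V() float64 { return s.v }

// sampleSlice satisfies the Samples interface (Get/Len) over a plain Go slice.
type sampleSlice []sample

func (s sampleSlice) Get(i int) tsdbutil.Sample { return s[i] }
func (s sampleSlice) Len() int                  { return len(s) }

func main() {
	in := sampleSlice{{t: 1000, v: 1}, {t: 2000, v: 2}, {t: 3000, v: 3}}

	// Wrap the slice in a chunkenc.Iterator...
	it := storage.NewListSeriesIterator(in)

	// ...and expand it back into a slice, supplying a constructor so the result
	// uses the local sample type. Passing nil would use the package's own type.
	out, err := storage.ExpandSamples(it, func(t int64, v float64) tsdbutil.Sample {
		return sample{t: t, v: v}
	})
	if err != nil {
		panic(err)
	}
	for _, s := range out {
		fmt.Println(s.T(), s.V())
	}
}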

Types

type Appendable

type Appendable interface {
	// Appender returns a new appender for the storage. The implementation
	// can choose whether or not to use the context, for deadlines or to check
	// for errors.
	Appender(ctx context.Context) Appender
}

Appendable allows creating appenders.

type Appender

type Appender interface {
	// Add adds a sample pair for the given series. A reference number is
	// returned which can be used to add further samples in the same or later
	// transactions.
	// Returned reference numbers are ephemeral and may be rejected in calls
	// to AddFast() at any point. Adding the sample via Add() returns a new
	// reference number.
	// If the reference is 0 it must not be used for caching.
	Add(l labels.Labels, t int64, v float64) (uint64, error)

	// AddFast adds a sample pair for the referenced series. It is generally
	// faster than adding a sample by providing its full label set.
	AddFast(ref uint64, t int64, v float64) error

	// Commit submits the collected samples and purges the batch. If Commit
	// returns a non-nil error, it also rolls back all modifications made in
	// the appender so far, as Rollback would do. In any case, an Appender
	// must not be used anymore after Commit has been called.
	Commit() error

	// Rollback rolls back all modifications made in the appender so far.
	// Appender has to be discarded after rollback.
	Rollback() error
}

Appender provides batched appends against a storage. It must be completed with a call to Commit or Rollback and must not be reused afterwards.

Operations on the Appender interface are not goroutine-safe.
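
As an illustration of that contract, the sketch below drives one batch through an Appender obtained from an Appendable. The point type is a local placeholder, the import paths assume the upstream v2.25-era module layout, and the fallback from AddFast to Add on storage.ErrNotFound is an assumption about how a concrete implementation reports a stale reference.

import (
	"context"
	"errors"

	"github.com/prometheus/prometheus/pkg/labels"
	"github.com/prometheus/prometheus/storage"
)

// point is a placeholder for whatever sample representation the caller has.
type point struct {
	T int64
	V float64
}

func appendBatch(ctx context.Context, a storage.Appendable, lset labels.Labels, points []point) error {
	app := a.Appender(ctx)

	var ref uint64
	for _, p := range points {
		var err error
		if ref == 0 {
			// No usable cached reference: add by full label set and cache the result.
			ref, err = app.Add(lset, p.T, p.V)
		} else if err = app.AddFast(ref, p.T, p.V); errors.Is(err, storage.ErrNotFound) {
			// References are ephemeral and may be rejected at any point; fall back to Add.
			ref, err = app.Add(lset, p.T, p.V)
		}
		if err != nil {
			_ = app.Rollback() // Discard the whole batch; the appender must not be reused.
			return err
		}
	}
	// Commit publishes the batch; the appender must not be reused afterwards either.
	return app.Commit()
}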

type BufferedSeriesIterator

type BufferedSeriesIterator struct {
	// contains filtered or unexported fields
}

BufferedSeriesIterator wraps an iterator with a look-back buffer.

func NewBuffer

func NewBuffer(delta int64) *BufferedSeriesIterator

NewBuffer returns a new iterator that buffers the values within the time range of the current element and the duration of delta before, initialized with an empty iterator. Use Reset() to set an actual iterator to be buffered.

func NewBufferIterator

func NewBufferIterator(it chunkenc.Iterator, delta int64) *BufferedSeriesIterator

NewBufferIterator returns a new iterator that buffers the values within the time range of the current element and the duration of delta before.

func (*BufferedSeriesIterator) Buffer

func (b *BufferedSeriesIterator) Buffer() chunkenc.Iterator

Buffer returns an iterator over the buffered data. Invalidates previously returned iterators.

func (*BufferedSeriesIterator) Err

func (b *BufferedSeriesIterator) Err() error

Err returns the last encountered error.

func (*BufferedSeriesIterator) Next

func (b *BufferedSeriesIterator) Next() bool

Next advances the iterator to the next element.

func (*BufferedSeriesIterator) PeekBack

func (b *BufferedSeriesIterator) PeekBack(n int) (t int64, v float64, ok bool)

PeekBack returns the nth previous element of the iterator. If there is none buffered, ok is false.

func (*BufferedSeriesIterator) ReduceDelta

func (b *BufferedSeriesIterator) ReduceDelta(delta int64) bool

ReduceDelta lowers the buffered time delta, for the current SeriesIterator only.

func (*BufferedSeriesIterator) Reset

func (b *BufferedSeriesIterator) Reset(it chunkenc.Iterator)

Reset re-uses the buffer with a new iterator, resetting the buffered time delta to its original value.

func (*BufferedSeriesIterator) Seek

func (b *BufferedSeriesIterator) Seek(t int64) bool

Seek advances the iterator to the element at time t or greater.

func (*BufferedSeriesIterator) Values

func (b *BufferedSeriesIterator) Values() (int64, float64)

Values returns the current element of the iterator.
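
A hedged usage sketch for the buffered iterator follows; the raw chunkenc.Iterator is assumed to come from a Series obtained elsewhere, and the 5-minute window is arbitrary.

import (
	"fmt"
	"time"

	"github.com/prometheus/prometheus/storage"
	"github.com/prometheus/prometheus/tsdb/chunkenc"
)

func inspect(raw chunkenc.Iterator) error {
	delta := int64(5 * time.Minute / time.Millisecond) // look-back window in milliseconds

	b := storage.NewBufferIterator(raw, delta)
	for b.Next() {
		t, v := b.Values() // current element

		// One sample back, if it is still inside the buffer.
		if pt, pv, ok := b.PeekBack(1); ok {
			fmt.Printf("t=%d v=%g (previous: t=%d v=%g)\n", t, v, pt, pv)
		}

		// All buffered samples within delta before the current element.
		buf := b.Buffer()
		for buf.Next() {
			bt, bv := buf.At()
			_, _ = bt, bv
		}
	}
	return b.Err()
}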

type ChunkIteratable

type ChunkIteratable interface {
	// Iterator returns a new, independent iterator that iterates over potentially overlapping
	// chunks of the series, sorted by min time.
	Iterator() chunks.Iterator
}

type ChunkQuerier

type ChunkQuerier interface {
	LabelQuerier

	// Select returns a set of series that matches the given label matchers.
	// The caller can specify if it requires the returned series to be sorted. Prefer not requiring sorting for better performance.
	// It allows passing hints that can help in optimising the select, but it is up to the implementation how these are used, if at all.
	Select(sortSeries bool, hints *SelectHints, matchers ...*labels.Matcher) ChunkSeriesSet
}

ChunkQuerier provides querying access over time series data of a fixed time range.

func NewMergeChunkQuerier

func NewMergeChunkQuerier(primaries []ChunkQuerier, secondaries []ChunkQuerier, mergeFn VerticalChunkSeriesMergeFunc) ChunkQuerier

NewMergeChunkQuerier returns a new ChunkQuerier that merges the results of the given primary and secondary chunk queriers. See the NewFanout commentary to learn more about the primary vs secondary differences.

In case of overlaps between the data given by the primaries' and secondaries' Selects, the merge function will be used. TODO(bwplotka): Currently the merge will compact overlapping chunks into a bigger chunk, without limit. Split it: https://github.com/prometheus/tsdb/issues/670

func NoopChunkedQuerier

func NoopChunkedQuerier() ChunkQuerier

NoopChunkedQuerier is a ChunkQuerier that does nothing.

type ChunkQueryable

type ChunkQueryable interface {
	// ChunkQuerier returns a new ChunkQuerier on the storage.
	ChunkQuerier(ctx context.Context, mint, maxt int64) (ChunkQuerier, error)
}

A ChunkQueryable handles queries against a storage. Use it when you need to have access to samples in encoded format.

type ChunkSeries

type ChunkSeries interface {
	Labels
	ChunkIteratable
}

ChunkSeries exposes a single time series and allows iterating over chunks.

type ChunkSeriesEntry

type ChunkSeriesEntry struct {
	Lset            labels.Labels
	ChunkIteratorFn func() chunks.Iterator
}

func NewListChunkSeriesFromSamples

func NewListChunkSeriesFromSamples(lset labels.Labels, samples ...[]tsdbutil.Sample) *ChunkSeriesEntry

NewListChunkSeriesFromSamples returns a chunk series entry that allows iterating over the provided samples. NOTE: it uses an inefficient chunk encoding implementation that does not care about chunk size.
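
For illustration, the sketch below builds such an entry and lists its chunk metadata with ExpandChunks; it reuses the local sample helper and imports from the ExpandSamples sketch above, plus github.com/prometheus/prometheus/pkg/labels.

func listChunks() error {
	cs := storage.NewListChunkSeriesFromSamples(
		labels.FromStrings("__name__", "up", "job", "example"),
		[]tsdbutil.Sample{sample{t: 1000, v: 1}, sample{t: 2000, v: 0}},
		[]tsdbutil.Sample{sample{t: 3000, v: 1}}, // each slice is encoded separately
	)

	chks, err := storage.ExpandChunks(cs.Iterator())
	if err != nil {
		return err
	}
	for _, c := range chks {
		fmt.Println(c.MinTime, c.MaxTime)
	}
	return nil
}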

func (*ChunkSeriesEntry) Iterator

func (s *ChunkSeriesEntry) Iterator() chunks.Iterator

func (*ChunkSeriesEntry) Labels

func (s *ChunkSeriesEntry) Labels() labels.Labels

type ChunkSeriesSet

type ChunkSeriesSet interface {
	Next() bool
	// At returns full chunk series. Returned series should be iteratable even after Next is called.
	At() ChunkSeries
	// The error that iteration has failed with.
	// When an error occurs, set cannot continue to iterate.
	Err() error
	// A collection of warnings for the whole set.
	// Warnings could be returned even if iteration has not failed with an error.
	Warnings() Warnings
}

ChunkSeriesSet contains a set of chunked series.

func EmptyChunkSeriesSet

func EmptyChunkSeriesSet() ChunkSeriesSet

EmptyChunkSeriesSet returns a chunk series set that's always empty.

func ErrChunkSeriesSet

func ErrChunkSeriesSet(err error) ChunkSeriesSet

ErrChunkSeriesSet returns a chunk series set that wraps an error.

func NewMergeChunkSeriesSet

func NewMergeChunkSeriesSet(sets []ChunkSeriesSet, mergeFunc VerticalChunkSeriesMergeFunc) ChunkSeriesSet

NewMergeChunkSeriesSet returns a new ChunkSeriesSet that merges many ChunkSeriesSets together.

func NewSeriesSetToChunkSet

func NewSeriesSetToChunkSet(chk SeriesSet) ChunkSeriesSet

NewSeriesSetToChunkSet converts SeriesSet to ChunkSeriesSet by encoding chunks from samples.

func NoopChunkedSeriesSet

func NoopChunkedSeriesSet() ChunkSeriesSet

NoopChunkedSeriesSet is a ChunkSeriesSet that does nothing.

type LabelQuerier

type LabelQuerier interface {
	// LabelValues returns all potential values for a label name.
	// It is not safe to use the strings beyond the lifetime of the querier.
	// If matchers are specified the returned result set is reduced
	// to label values of metrics matching the matchers.
	LabelValues(name string, matchers ...*labels.Matcher) ([]string, Warnings, error)

	// LabelNames returns all the unique label names present in the block in sorted order.
	// TODO(yeya24): support matchers or hints.
	LabelNames() ([]string, Warnings, error)

	// Close releases the resources of the Querier.
	Close() error
}

LabelQuerier provides querying access over labels.

type Labels

type Labels interface {
	// Labels returns the complete set of labels. For series it means all labels identifying the series.
	Labels() labels.Labels
}

Labels represents an item that has labels, e.g. a time series.

type Querier

type Querier interface {
	LabelQuerier

	// Select returns a set of series that matches the given label matchers.
	// The caller can specify if it requires the returned series to be sorted. Prefer not requiring sorting for better performance.
	// It allows passing hints that can help in optimising the select, but it is up to the implementation how these are used, if at all.
	Select(sortSeries bool, hints *SelectHints, matchers ...*labels.Matcher) SeriesSet
}

Querier provides querying access over time series data of a fixed time range.
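
The sketch below shows one way to drive Select; the metric and job matchers are placeholders, and the Querier is assumed to come from a Queryable's Querier(ctx, mint, maxt) call and to be closed by the caller once the results are consumed.

import (
	"github.com/prometheus/prometheus/pkg/labels"
	"github.com/prometheus/prometheus/storage"
)

func selectUp(q storage.Querier, mint, maxt int64) storage.SeriesSet {
	matchers := []*labels.Matcher{
		labels.MustNewMatcher(labels.MatchEqual, "__name__", "up"),
		labels.MustNewMatcher(labels.MatchRegexp, "job", "api.*"),
	}
	hints := &storage.SelectHints{Start: mint, End: maxt}

	// sortSeries=false: only request sorting when it is actually needed (e.g. for merging).
	return q.Select(false, hints, matchers...)
}

The returned SeriesSet is consumed with the loop shown under SeriesSet below.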

func NewMergeQuerier

func NewMergeQuerier(primaries []Querier, secondaries []Querier, mergeFn VerticalSeriesMergeFunc) Querier

NewMergeQuerier returns a new Querier that merges the results of the given primary and secondary queriers. See the NewFanout commentary to learn more about the primary vs secondary differences.

In case of overlaps between the data given by the primaries' and secondaries' Selects, the merge function will be used.

func NoopQuerier

func NoopQuerier() Querier

NoopQuerier is a Querier that does nothing.

type Queryable

type Queryable interface {
	// Querier returns a new Querier on the storage.
	Querier(ctx context.Context, mint, maxt int64) (Querier, error)
}

A Queryable handles queries against a storage. Use it when you need access to all samples without the chunk encoding abstraction, e.g. PromQL.

type QueryableFunc

type QueryableFunc func(ctx context.Context, mint, maxt int64) (Querier, error)

TODO(bwplotka): Move to promql/engine_test.go? QueryableFunc is an adapter to allow the use of ordinary functions as Queryables. It follows the idea of http.HandlerFunc.

func (QueryableFunc) Querier

func (f QueryableFunc) Querier(ctx context.Context, mint, maxt int64) (Querier, error)

Querier calls f() with the given parameters.
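
A small sketch of the adapter pattern, analogous to http.HandlerFunc; it simply hands back the package's no-op querier and is not meant as a real implementation.

func newQueryable() storage.Queryable {
	// Any ordinary function with this signature can be used as a Queryable.
	return storage.QueryableFunc(func(ctx context.Context, mint, maxt int64) (storage.Querier, error) {
		return storage.NoopQuerier(), nil
	})
}

Calling Querier on the returned value invokes the wrapped function, e.g. q := newQueryable(); querier, err := q.Querier(ctx, mint, maxt).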

type SampleAndChunkQueryable

type SampleAndChunkQueryable interface {
	Queryable
	ChunkQueryable
}

SampleAndChunkQueryable allows retrieving samples as well as encoded samples in form of chunks.

type SampleIteratable

type SampleIteratable interface {
	// Iterator returns a new, independent iterator of the data of the series.
	Iterator() chunkenc.Iterator
}

type Samples

type Samples interface {
	Get(i int) tsdbutil.Sample
	Len() int
}

The Samples interface allows working on lists of types that are compatible with tsdbutil.Sample.

type SelectHints

type SelectHints struct {
	Start int64 // Start time in milliseconds for this select.
	End   int64 // End time in milliseconds for this select.

	Step int64  // Query step size in milliseconds.
	Func string // String representation of surrounding function or aggregation.

	Grouping []string // List of label names used in aggregation.
	By       bool     // Indicates whether the grouping is done 'by' (true) or 'without' (false).
	Range    int64    // Range vector selector range in milliseconds.
}

SelectHints specifies hints passed for data selections. This is used only as an option for implementation to use.
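
As an illustration, the hints a PromQL-like caller might pass for sum by (job) (rate(http_requests_total[5m])) evaluated over a range could look roughly like the literal below; the exact values an engine sends depend on its implementation.

hints := &storage.SelectHints{
	Start:    1613500000000,   // query start, Unix milliseconds
	End:      1613503600000,   // query end, Unix milliseconds
	Step:     30000,           // 30s resolution
	Func:     "rate",          // function or aggregation surrounding the selector
	Grouping: []string{"job"}, // labels used by the surrounding aggregation
	By:       true,            // "by (job)" rather than "without (job)"
	Range:    300000,          // the [5m] range selector, in milliseconds
}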

type Series

type Series interface {
	Labels
	SampleIteratable
}

Series exposes a single time series and allows iterating over samples.

func ChainedSeriesMerge

func ChainedSeriesMerge(series ...Series) Series

ChainedSeriesMerge returns a single series built from many identical, potentially overlapping series by chaining samples together. If one or more samples overlap, one of the overlapping samples is kept at random and all others with the same timestamp are dropped.

This works best with replicated series, where the data of the two series is exactly the same. It does not work well with data that is only "almost" the same, e.g. from two Prometheus HA replicas. This is fine, since from the Prometheus perspective this never happens.

It's optimized for non-overlap cases as well.
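
A sketch of de-duplicating two replicas of the same series follows; it uses NewListSeries (documented below) and the local sample helper from the ExpandSamples sketch above.

func dedupe() {
	lset := labels.FromStrings("__name__", "up", "job", "example")

	replicaA := storage.NewListSeries(lset, []tsdbutil.Sample{sample{t: 1000, v: 1}, sample{t: 2000, v: 1}})
	replicaB := storage.NewListSeries(lset, []tsdbutil.Sample{sample{t: 2000, v: 1}, sample{t: 3000, v: 1}})

	merged := storage.ChainedSeriesMerge(replicaA, replicaB)

	// Yields t=1000, 2000, 3000; only one of the two samples at t=2000 is kept.
	it := merged.Iterator()
	for it.Next() {
		t, v := it.At()
		fmt.Println(t, v)
	}
}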

type SeriesEntry

type SeriesEntry struct {
	Lset             labels.Labels
	SampleIteratorFn func() chunkenc.Iterator
}

func NewListSeries

func NewListSeries(lset labels.Labels, s []tsdbutil.Sample) *SeriesEntry

NewListSeries returns a series entry with an iterator that allows iterating over the provided samples.

func (*SeriesEntry) Iterator

func (s *SeriesEntry) Iterator() chunkenc.Iterator

func (*SeriesEntry) Labels

func (s *SeriesEntry) Labels() labels.Labels

type SeriesSet

type SeriesSet interface {
	Next() bool
	// At returns full series. Returned series should be iteratable even after Next is called.
	At() Series
	// The error that iteration has failed with.
	// When an error occurs, set cannot continue to iterate.
	Err() error
	// A collection of warnings for the whole set.
	// Warnings could be returned even if iteration has not failed with an error.
	Warnings() Warnings
}

SeriesSet contains a set of series.
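
The consumption loop below is the intended usage pattern (the ChunkSeriesSet loop is analogous, with chunks.Iterator instead of chunkenc.Iterator); the specific error and warning handling shown is a sketch, not prescribed by the interface.

func drain(ss storage.SeriesSet) error {
	for ss.Next() {
		s := ss.At()
		fmt.Println(s.Labels())

		it := s.Iterator()
		for it.Next() {
			t, v := it.At()
			_, _ = t, v
		}
		if err := it.Err(); err != nil {
			return err
		}
	}
	// Check the set-level error after the loop, then surface any warnings.
	if err := ss.Err(); err != nil {
		return err
	}
	for _, w := range ss.Warnings() {
		fmt.Println("warning:", w)
	}
	return nil
}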

func EmptySeriesSet

func EmptySeriesSet() SeriesSet

EmptySeriesSet returns a series set that's always empty.

func ErrSeriesSet

func ErrSeriesSet(err error) SeriesSet

ErrSeriesSet returns a series set that wraps an error.

func NewMergeSeriesSet

func NewMergeSeriesSet(sets []SeriesSet, mergeFunc VerticalSeriesMergeFunc) SeriesSet

NewMergeSeriesSet returns a new SeriesSet that merges many SeriesSets together.

func NewSeriesSetFromChunkSeriesSet

func NewSeriesSetFromChunkSeriesSet(chk ChunkSeriesSet) SeriesSet

NewSeriesSetFromChunkSeriesSet converts ChunkSeriesSet to SeriesSet by decoding chunks one by one.

func NoopSeriesSet

func NoopSeriesSet() SeriesSet

NoopSeriesSet is a SeriesSet that does nothing.

type Storage

type Storage interface {
	SampleAndChunkQueryable
	Appendable

	// StartTime returns the oldest timestamp stored in the storage.
	StartTime() (int64, error)

	// Close closes the storage and all its underlying resources.
	Close() error
}

Storage ingests and manages samples, along with various indexes. All methods are goroutine-safe. Storage implements storage.SampleAppender. New database backends will need to implement this interface; currently main() passes the interface in.

func NewFanout

func NewFanout(logger log.Logger, primary Storage, secondaries ...Storage) Storage

NewFanout returns a new fanout Storage, which proxies reads and writes through to multiple underlying storages.

The difference between primary and secondary Storage is only in the read (Querier) path, and it goes as follows:

* If the primary querier returns an error, then any of the Querier operations will fail.
* If any secondary querier returns an error, the result from that querier is discarded. The overall operation will succeed, and the error from the secondary querier will be returned as a warning.

NOTE: In the case of Prometheus, it treats all remote storages as secondary / best effort.
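
A wiring sketch follows. localTSDB and remoteStorage are placeholders for real Storage implementations, and the go-kit logger import path differs across Prometheus versions (github.com/go-kit/kit/log in this era, github.com/go-kit/log later).

import (
	"github.com/go-kit/kit/log"

	"github.com/prometheus/prometheus/storage"
)

func buildFanout(localTSDB, remoteStorage storage.Storage) storage.Storage {
	// Reads prefer the primary; a failing secondary only produces warnings,
	// mirroring how Prometheus treats remote storages as best effort.
	return storage.NewFanout(log.NewNopLogger(), localTSDB, remoteStorage)
}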

type VerticalChunkSeriesMergeFunc

type VerticalChunkSeriesMergeFunc func(...ChunkSeries) ChunkSeries

VerticalChunkSeriesMergeFunc returns a merged chunk series implementation that merges potentially time-overlapping chunk series with the same labels into a single ChunkSeries.

NOTE: It's up to implementation how series are vertically merged (if chunks are sorted, re-encoded etc).

func NewCompactingChunkSeriesMerger

func NewCompactingChunkSeriesMerger(mergeFunc VerticalSeriesMergeFunc) VerticalChunkSeriesMergeFunc

NewCompactingChunkSeriesMerger returns a VerticalChunkSeriesMergeFunc that merges the same chunk series into a single chunk series. In case of chunk overlaps, it compacts those into one or more time-ordered, non-overlapping chunks with merged data. Samples from overlapping chunks are merged using the series vertical merge func. It expects the same labels for each given series.

NOTE: Use the returned merge function only when you expect potentially overlapping series, as it introduces a small overhead to handle overlaps between series.
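
A typical wiring sketch: compact overlapping chunks inside a merge querier, using ChainedSeriesMerge for the overlapping samples. Here, primaries and secondaries are assumed to be []storage.ChunkQuerier values obtained elsewhere.

mergeFn := storage.NewCompactingChunkSeriesMerger(storage.ChainedSeriesMerge)
cq := storage.NewMergeChunkQuerier(primaries, secondaries, mergeFn)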

type VerticalSeriesMergeFunc

type VerticalSeriesMergeFunc func(...Series) Series

VerticalSeriesMergeFunc returns a merged series implementation that merges series with the same labels together. It has to handle time-overlapping series as well.

type Warnings

type Warnings []error
