tsdb

package module
v0.10.3
Published: May 4, 2020 License: Apache-2.0

README

TSDB

THIS PROJECT IS BEING MOVED TO PROMETHEUS REPOSITORY: https://github.com/prometheus/prometheus/pull/5805


This repository contains the Prometheus storage layer that is used in its 2.x releases.

A writeup of its design can be found here.

Based on the Gorilla TSDB white papers.

Video: Storing 16 Bytes at Scale from PromCon 2017.

See also the format documentation.

Documentation

Overview

Package tsdb implements a time series storage for float64 sample data.

Index

Constants

const (
	// WALMagic is a 4 byte number every WAL segment file starts with.
	WALMagic = uint32(0x43AF00EF)

	// WALFormatDefault is the version flag for the default outer segment file format.
	WALFormatDefault = byte(1)
)
const (
	// MagicTombstone is 4 bytes at the head of a tombstone file.
	MagicTombstone = 0x0130BA30
)

Variables

var (
	// ErrNotFound is returned if a looked up resource was not found.
	ErrNotFound = errors.Errorf("not found")

	// ErrOutOfOrderSample is returned if an appended sample has a
	// timestamp smaller than the most recent sample.
	ErrOutOfOrderSample = errors.New("out of order sample")

	// ErrAmendSample is returned if an appended sample has the same timestamp
	// as the most recent sample but a different value.
	ErrAmendSample = errors.New("amending sample")

	// ErrOutOfBounds is returned if an appended sample is out of the
	// writable time range.
	ErrOutOfBounds = errors.New("out of bounds")
)
var DefaultOptions = &Options{
	WALSegmentSize:         wal.DefaultSegmentSize,
	RetentionDuration:      15 * 24 * 60 * 60 * 1000, // 15 days in milliseconds
	BlockRanges:            ExponentialBlockRanges(int64(2*time.Hour)/1e6, 3, 5),
	NoLockfile:             false,
	AllowOverlappingBlocks: false,
	WALCompression:         false,
}

DefaultOptions are the default options used for the DB. They are sane for setups using millisecond precision timestamps.

var ErrClosed = errors.New("db already closed")

ErrClosed is returned when the db is closed.

var ErrClosing = errors.New("block is closing")

ErrClosing is returned when a block is in the process of being closed.

Functions

func DeleteCheckpoints

func DeleteCheckpoints(dir string, maxIndex int) error

DeleteCheckpoints deletes all checkpoints in a directory below a given index.

func ExponentialBlockRanges

func ExponentialBlockRanges(minSize int64, steps, stepSize int) []int64

ExponentialBlockRanges returns the time ranges based on the stepSize.
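
A minimal sketch, assuming each returned range is the previous one multiplied by stepSize; this is how the default BlockRanges are derived (values in milliseconds):

ranges := tsdb.ExponentialBlockRanges(int64(2*time.Hour)/1e6, 3, 5)
// ranges == []int64{7200000, 36000000, 180000000}, i.e. 2h, 10h and 50h.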

func LastCheckpoint

func LastCheckpoint(dir string) (string, int, error)

LastCheckpoint returns the directory name and index of the most recent checkpoint. If dir does not contain any checkpoints, ErrNotFound is returned.
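
A minimal sketch of probing for a previous checkpoint, assuming the error is returned unwrapped and walDir is a hypothetical WAL directory:

dir, idx, err := tsdb.LastCheckpoint(walDir)
if err == tsdb.ErrNotFound {
	// No checkpoint exists yet; replay starts from the first segment.
} else if err != nil {
	return err
}
_, _ = dir, idx // dir names follow the checkpoint.N scheme described under Checkpoint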

func MigrateWAL

func MigrateWAL(logger log.Logger, dir string) (err error)

MigrateWAL rewrites the deprecated write ahead log into the new format.

func PostingsForMatchers

func PostingsForMatchers(ix IndexReader, ms ...labels.Matcher) (index.Postings, error)

PostingsForMatchers assembles a single postings iterator against the index reader based on the given matchers.
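
A sketch of consuming the resulting postings iterator, assuming ir is an IndexReader (e.g. from Block.Index) and an equality matcher from the accompanying labels package:

p, err := tsdb.PostingsForMatchers(ir, labels.NewEqualMatcher("job", "api"))
if err != nil {
	return err
}
for p.Next() {
	ref := p.At() // series reference into the index
	_ = ref
}
if err := p.Err(); err != nil {
	return err
}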

Types

type Appendable

type Appendable interface {
	// Appender returns a new Appender against an underlying store.
	Appender() Appender
}

Appendable defines an entity to which data can be appended.

type Appender

type Appender interface {
	// Add adds a sample pair for the given series. A reference number is
	// returned which can be used to add further samples in the same or later
	// transactions.
	// Returned reference numbers are ephemeral and may be rejected in calls
	// to AddFast() at any point. Adding the sample via Add() returns a new
	// reference number.
	// If the reference is 0 it must not be used for caching.
	Add(l labels.Labels, t int64, v float64) (uint64, error)

	// AddFast adds a sample pair for the referenced series. It is generally
	// faster than adding a sample by providing its full label set.
	AddFast(ref uint64, t int64, v float64) error

	// Commit submits the collected samples and purges the batch.
	Commit() error

	// Rollback rolls back all modifications made in the appender so far.
	Rollback() error
}

Appender allows appending a batch of data. It must be completed with a call to Commit or Rollback and must not be reused afterwards.

Operations on the Appender interface are not goroutine-safe.
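
A sketch of the intended pattern, assuming db is an open *DB, t and v hold a sample, and labels.FromStrings from the accompanying labels package; an ephemeral reference rejected by AddFast is recovered by falling back to Add:

app := db.Appender()

lset := labels.FromStrings("__name__", "http_requests_total", "job", "api")

// Add returns a reference that can be cached for faster follow-up appends.
ref, err := app.Add(lset, t, v)
if err != nil {
	return err
}

// AddFast may reject the reference at any point; fall back to Add.
if err := app.AddFast(ref, t+1000, v); err != nil {
	if ref, err = app.Add(lset, t+1000, v); err != nil {
		return err
	}
}

if err := app.Commit(); err != nil {
	return err
}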

type Block

type Block struct {
	// contains filtered or unexported fields
}

Block represents a directory of time series data covering a continuous time range.

func OpenBlock

func OpenBlock(logger log.Logger, dir string, pool chunkenc.Pool) (pb *Block, err error)

OpenBlock opens the block in the directory. It can be passed a chunk pool, which is used to instantiate chunk structs.

func (*Block) Chunks

func (pb *Block) Chunks() (ChunkReader, error)

Chunks returns a new ChunkReader against the block data.

func (*Block) CleanTombstones

func (pb *Block) CleanTombstones(dest string, c Compactor) (*ulid.ULID, error)

CleanTombstones will remove the tombstones and rewrite the block (only if there are any tombstones). If there was a rewrite, then it returns the ULID of the new block written, else nil.

func (*Block) Close

func (pb *Block) Close() error

Close closes the on-disk block. It blocks as long as there are readers reading from the block.

func (*Block) Delete

func (pb *Block) Delete(mint, maxt int64, ms ...labels.Matcher) error

Delete matching series between mint and maxt in the block.

func (*Block) Dir

func (pb *Block) Dir() string

Dir returns the directory of the block.

func (*Block) GetSymbolTableSize

func (pb *Block) GetSymbolTableSize() uint64

GetSymbolTableSize returns the size of the symbol table in the index of this block.

func (*Block) Index

func (pb *Block) Index() (IndexReader, error)

Index returns a new IndexReader against the block data.

func (*Block) LabelNames

func (pb *Block) LabelNames() ([]string, error)

LabelNames returns all the unique label names present in the Block in sorted order.

func (*Block) MaxTime added in v0.10.2

func (pb *Block) MaxTime() int64

MaxTime returns the max time of the meta.

func (*Block) Meta

func (pb *Block) Meta() BlockMeta

Meta returns meta information about the block.

func (*Block) MinTime added in v0.10.2

func (pb *Block) MinTime() int64

MinTime returns the min time of the meta.

func (*Block) OverlapsClosedInterval

func (pb *Block) OverlapsClosedInterval(mint, maxt int64) bool

OverlapsClosedInterval returns true if the block overlaps [mint, maxt].

func (*Block) Size added in v0.10.2

func (pb *Block) Size() int64

Size returns the number of bytes that the block takes up.

func (*Block) Snapshot

func (pb *Block) Snapshot(dir string) error

Snapshot creates a snapshot of the block into dir.

func (*Block) String

func (pb *Block) String() string

func (*Block) Tombstones

func (pb *Block) Tombstones() (TombstoneReader, error)

Tombstones returns a new TombstoneReader against the block data.

type BlockDesc

type BlockDesc struct {
	ULID    ulid.ULID `json:"ulid"`
	MinTime int64     `json:"minTime"`
	MaxTime int64     `json:"maxTime"`
}

BlockDesc describes a block by ULID and time range.

type BlockMeta

type BlockMeta struct {
	// Unique identifier for the block and its contents. Changes on compaction.
	ULID ulid.ULID `json:"ulid"`

	// MinTime and MaxTime specify the time range all samples
	// in the block are in.
	MinTime int64 `json:"minTime"`
	MaxTime int64 `json:"maxTime"`

	// Stats about the contents of the block.
	Stats BlockStats `json:"stats,omitempty"`

	// Information on compactions the block was created from.
	Compaction BlockMetaCompaction `json:"compaction"`

	// Version of the index format.
	Version int `json:"version"`
}

BlockMeta provides meta information about a block.

type BlockMetaCompaction

type BlockMetaCompaction struct {
	// Maximum number of compaction cycles any source block has
	// gone through.
	Level int `json:"level"`
	// ULIDs of all source head blocks that went into the block.
	Sources []ulid.ULID `json:"sources,omitempty"`
	// Deletable indicates that compaction resulted in a block without any
	// samples, so it should be deleted on the next reload.
	Deletable bool `json:"deletable,omitempty"`
	// Short descriptions of the direct blocks that were used to create
	// this block.
	Parents []BlockDesc `json:"parents,omitempty"`
	Failed  bool        `json:"failed,omitempty"`
}

BlockMetaCompaction holds information about compactions a block went through.

type BlockReader

type BlockReader interface {
	// Index returns an IndexReader over the block's data.
	Index() (IndexReader, error)

	// Chunks returns a ChunkReader over the block's data.
	Chunks() (ChunkReader, error)

	// Tombstones returns a TombstoneReader over the block's deleted data.
	Tombstones() (TombstoneReader, error)

	// Meta provides meta information about the block reader.
	Meta() BlockMeta
}

BlockReader provides reading access to a data block.

type BlockStats

type BlockStats struct {
	NumSamples    uint64 `json:"numSamples,omitempty"`
	NumSeries     uint64 `json:"numSeries,omitempty"`
	NumChunks     uint64 `json:"numChunks,omitempty"`
	NumTombstones uint64 `json:"numTombstones,omitempty"`
}

BlockStats contains stats about contents of a block.

type CheckpointStats

type CheckpointStats struct {
	DroppedSeries     int
	DroppedSamples    int
	DroppedTombstones int
	TotalSeries       int // Processed series including dropped ones.
	TotalSamples      int // Processed samples including dropped ones.
	TotalTombstones   int // Processed tombstones including dropped ones.
}

CheckpointStats holds stats about a created checkpoint.

func Checkpoint

func Checkpoint(w *wal.WAL, from, to int, keep func(id uint64) bool, mint int64) (*CheckpointStats, error)

Checkpoint creates a compacted checkpoint of segments in range [from, to] in the given WAL. It includes the most recent checkpoint if it exists. All series not satisfying keep and samples below mint are dropped.

The checkpoint is stored in a directory named checkpoint.N in the same segmented format as the original WAL itself. This makes it easy to read it through the WAL package and concatenate it with the original WAL.
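
A sketch of the checkpoint-then-cleanup flow, assuming w is an opened *wal.WAL whose Dir method yields the WAL directory, and keepSeries is a hypothetical predicate over series references:

stats, err := tsdb.Checkpoint(w, from, to, keepSeries, mint)
if err != nil {
	return err
}
_ = stats.DroppedSamples // e.g. for logging or metrics

// Checkpoints below the new index are superseded and can be removed.
if err := tsdb.DeleteCheckpoints(w.Dir(), to); err != nil {
	return err
}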

type ChunkReader

type ChunkReader interface {
	// Chunk returns the series data chunk with the given reference.
	Chunk(ref uint64) (chunkenc.Chunk, error)

	// Close releases all underlying resources of the reader.
	Close() error
}

ChunkReader provides reading access of serialized time series data.

type ChunkSeriesSet

type ChunkSeriesSet interface {
	Next() bool
	At() (labels.Labels, []chunks.Meta, Intervals)
	Err() error
}

ChunkSeriesSet exposes the chunks and intervals of a series instead of the actual series itself.

func LookupChunkSeries

func LookupChunkSeries(ir IndexReader, tr TombstoneReader, ms ...labels.Matcher) (ChunkSeriesSet, error)

LookupChunkSeries retrieves all series for the given matchers and returns a ChunkSeriesSet over them. It drops chunks based on tombstones in the given reader.

type ChunkWriter

type ChunkWriter interface {
	// WriteChunks writes several chunks. The Chunk field of the ChunkMetas
	// must be populated.
	// After returning successfully, the Ref fields in the ChunkMetas
	// are set and can be used to retrieve the chunks from the written data.
	WriteChunks(chunks ...chunks.Meta) error

	// Close writes any required finalization and closes the resources
	// associated with the underlying writer.
	Close() error
}

ChunkWriter serializes a time block of chunked series data.

type Compactor

type Compactor interface {
	// Plan returns a set of directories that can be compacted concurrently.
	// The directories can be overlapping.
	// Results returned when compactions are in progress are undefined.
	Plan(dir string) ([]string, error)

	// Write persists a Block into a directory.
	// If the resulting Block has 0 samples, no Block is written and an
	// empty ulid.ULID{} is returned.
	Write(dest string, b BlockReader, mint, maxt int64, parent *BlockMeta) (ulid.ULID, error)

	// Compact runs compaction against the provided directories. Must
	// only be called concurrently with results of Plan().
	// Can optionally pass a list of already open blocks,
	// to avoid having to reopen them.
	// When resulting Block has 0 samples
	//  * No block is written.
	//  * The source dirs are marked Deletable.
	//  * Returns empty ulid.ULID{}.
	Compact(dest string, dirs []string, open []*Block) (ulid.ULID, error)
}

Compactor provides compaction against an underlying storage of time series data.

type DB

type DB struct {
	// contains filtered or unexported fields
}

DB handles reads and writes of time series falling into a hashed partition of a series database.

func Open

func Open(dir string, l log.Logger, r prometheus.Registerer, opts *Options) (db *DB, err error)

Open returns a new DB in the given directory.
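
A minimal sketch of opening and closing a database, assuming a nil logger and registerer fall back to no-op implementations:

package main

import (
	"log"

	"github.com/prometheus/tsdb"
)

func main() {
	// nil logger/registerer are assumed to default to no-op implementations.
	db, err := tsdb.Open("data", nil, nil, tsdb.DefaultOptions)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}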

func (*DB) Appender

func (db *DB) Appender() Appender

Appender opens a new appender against the database.

func (*DB) Blocks

func (db *DB) Blocks() []*Block

Blocks returns the database's persisted blocks.

func (*DB) CleanTombstones

func (db *DB) CleanTombstones() (err error)

CleanTombstones re-writes any blocks with tombstones.

func (*DB) Close

func (db *DB) Close() error

Close the partition.

func (*DB) Delete

func (db *DB) Delete(mint, maxt int64, ms ...labels.Matcher) error

Delete implements deletion of metrics. It only has atomicity guarantees on a per-block basis.

func (*DB) Dir

func (db *DB) Dir() string

Dir returns the directory of the database.

func (*DB) DisableCompactions

func (db *DB) DisableCompactions()

DisableCompactions disables auto compactions.

func (*DB) EnableCompactions

func (db *DB) EnableCompactions()

EnableCompactions enables auto compactions.

func (*DB) Head

func (db *DB) Head() *Head

Head returns the database's head.

func (*DB) Querier

func (db *DB) Querier(mint, maxt int64) (Querier, error)

Querier returns a new querier over the data partition for the given time range. A goroutine must not handle more than one open Querier.
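
A sketch of a full read, assuming db is an open *DB, mint and maxt bound the query, and an equality matcher from the accompanying labels package:

q, err := db.Querier(mint, maxt)
if err != nil {
	return err
}
defer q.Close()

set, err := q.Select(labels.NewEqualMatcher("job", "api"))
if err != nil {
	return err
}
for set.Next() {
	s := set.At()
	it := s.Iterator()
	for it.Next() {
		t, v := it.At()
		_, _ = t, v
	}
	if err := it.Err(); err != nil {
		return err
	}
}
if err := set.Err(); err != nil {
	return err
}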

func (*DB) Snapshot

func (db *DB) Snapshot(dir string, withHead bool) error

Snapshot writes the current data to the directory. If withHead is set to true it will create a new block containing all data that's currently in the memory buffer/WAL.

func (*DB) String

func (db *DB) String() string

type DBReadOnly added in v0.10.2

type DBReadOnly struct {
	// contains filtered or unexported fields
}

DBReadOnly provides APIs for read only operations on a database. The current implementation doesn't support concurrency, so all API calls should happen in the same goroutine.

func OpenDBReadOnly added in v0.10.2

func OpenDBReadOnly(dir string, l log.Logger) (*DBReadOnly, error)

OpenDBReadOnly opens DB in the given directory for read only operations.

func (*DBReadOnly) Blocks added in v0.10.2

func (db *DBReadOnly) Blocks() ([]BlockReader, error)

Blocks returns a slice of block readers for persisted blocks.

func (*DBReadOnly) Close added in v0.10.2

func (db *DBReadOnly) Close() error

Close all block readers.

func (*DBReadOnly) Querier added in v0.10.2

func (db *DBReadOnly) Querier(mint, maxt int64) (Querier, error)

Querier loads the WAL and returns a new querier over the data partition for the given time range. The current implementation doesn't support multiple Queriers.
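
A sketch of read-only access to an existing data directory, assuming a nil logger defaults to a no-op implementation:

db, err := tsdb.OpenDBReadOnly("data", nil)
if err != nil {
	return err
}
defer db.Close()

q, err := db.Querier(mint, maxt)
if err != nil {
	return err
}
defer q.Close()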

type Head

type Head struct {
	// contains filtered or unexported fields
}

Head handles reads and writes of time series data within a time window.

func NewHead

func NewHead(r prometheus.Registerer, l log.Logger, wal *wal.WAL, chunkRange int64) (*Head, error)

NewHead returns a new Head backed by the given WAL.

func (*Head) Appender

func (h *Head) Appender() Appender

Appender returns a new Appender on the database.

func (*Head) Chunks

func (h *Head) Chunks() (ChunkReader, error)

Chunks returns a ChunkReader against the block.

func (*Head) Close

func (h *Head) Close() error

Close flushes the WAL and closes the head.

func (*Head) Delete

func (h *Head) Delete(mint, maxt int64, ms ...labels.Matcher) error

Delete all samples in the range of [mint, maxt] for series that satisfy the given label matchers.

func (*Head) Index

func (h *Head) Index() (IndexReader, error)

Index returns an IndexReader against the block.

func (*Head) Init

func (h *Head) Init(minValidTime int64) error

Init loads data from the write ahead log and prepares the head for writes. It should be called before using an appender so that it limits the ingested samples to the head's min valid time.
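
A sketch of the documented ordering, assuming w is an opened *wal.WAL, minValidTime is a hypothetical lower bound, and a nil registerer and logger default to no-op implementations:

h, err := tsdb.NewHead(nil, nil, w, tsdb.DefaultOptions.BlockRanges[0])
if err != nil {
	return err
}
// Replay the WAL before appending so ingestion is bounded correctly.
if err := h.Init(minValidTime); err != nil {
	return err
}
app := h.Appender()
_ = app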

func (*Head) MaxTime

func (h *Head) MaxTime() int64

MaxTime returns the highest timestamp seen in data of the head.

func (*Head) Meta added in v0.10.2

func (h *Head) Meta() BlockMeta

Meta returns meta information about the head. The head is dynamic, so the returned results may change between calls.

func (*Head) MinTime

func (h *Head) MinTime() int64

MinTime returns the lowest time bound on visible data in the head.

func (*Head) NumSeries added in v0.10.2

func (h *Head) NumSeries() uint64

NumSeries returns the number of active series in the head.

func (*Head) Tombstones

func (h *Head) Tombstones() (TombstoneReader, error)

Tombstones returns a new reader over the head's tombstones.

func (*Head) Truncate

func (h *Head) Truncate(mint int64) (err error)

Truncate removes old data before mint from the head.

type IndexReader

type IndexReader interface {
	// Symbols returns a set of string symbols that may occur in series' labels
	// and indices.
	Symbols() (map[string]struct{}, error)

	// LabelValues returns the possible label values.
	LabelValues(names ...string) (index.StringTuples, error)

	// Postings returns the postings list iterator for the label pair.
	// The Postings here contain the offsets to the series inside the index.
	// Found IDs are not strictly required to point to a valid Series, e.g. during
	// background garbage collections.
	Postings(name, value string) (index.Postings, error)

	// SortedPostings returns a postings list that is reordered to be sorted
	// by the label set of the underlying series.
	SortedPostings(index.Postings) index.Postings

	// Series populates the given labels and chunk metas for the series identified
	// by the reference.
	// Returns ErrNotFound if the ref does not resolve to a known series.
	Series(ref uint64, lset *labels.Labels, chks *[]chunks.Meta) error

	// LabelIndices returns a list of string tuples for which a label value index exists.
	// NOTE: This is deprecated. Use `LabelNames()` instead.
	LabelIndices() ([][]string, error)

	// LabelNames returns all the unique label names present in the index in sorted order.
	LabelNames() ([]string, error)

	// Close releases the underlying resources of the reader.
	Close() error
}

IndexReader provides reading access of serialized index data.

type IndexWriter

type IndexWriter interface {
	// AddSymbols registers all string symbols that are encountered in series
	// and other indices.
	AddSymbols(sym map[string]struct{}) error

	// AddSeries populates the index writer with a series and its offsets
	// of chunks that the index can reference.
	// Implementations may require series to be inserted in increasing order by
	// their labels.
	// The reference numbers are used to resolve entries in postings lists that
	// are added later.
	AddSeries(ref uint64, l labels.Labels, chunks ...chunks.Meta) error

	// WriteLabelIndex serializes an index from label names to values.
	// The passed in values are chained tuples of strings of the length of names.
	WriteLabelIndex(names []string, values []string) error

	// WritePostings writes a postings list for a single label pair.
	// The Postings here contain refs to the series that were added.
	WritePostings(name, value string, it index.Postings) error

	// Close writes any finalization and closes the resources associated with
	// the underlying writer.
	Close() error
}

IndexWriter serializes the index for a block of series data. The methods must be called in the order they are specified in.

type Interval

type Interval struct {
	Mint, Maxt int64
}

Interval represents a single time-interval.

type Intervals

type Intervals []Interval

Intervals represents a set of increasing and non-overlapping time-intervals.

type LeveledCompactor

type LeveledCompactor struct {
	// contains filtered or unexported fields
}

LeveledCompactor implements the Compactor interface.

func NewLeveledCompactor

func NewLeveledCompactor(ctx context.Context, r prometheus.Registerer, l log.Logger, ranges []int64, pool chunkenc.Pool) (*LeveledCompactor, error)

NewLeveledCompactor returns a LeveledCompactor.

func (*LeveledCompactor) Compact

func (c *LeveledCompactor) Compact(dest string, dirs []string, open []*Block) (uid ulid.ULID, err error)

Compact creates a new block in the compactor's directory from the blocks in the provided directories.

func (*LeveledCompactor) Plan

func (c *LeveledCompactor) Plan(dir string) ([]string, error)

Plan returns a list of compactable blocks in the provided directory.

func (*LeveledCompactor) Write

func (c *LeveledCompactor) Write(dest string, b BlockReader, mint, maxt int64, parent *BlockMeta) (ulid.ULID, error)

type Options

type Options struct {
	// Segments (wal files) max size.
	// WALSegmentSize = 0, segment size is default size.
	// WALSegmentSize > 0, segment size is WALSegmentSize.
	// WALSegmentSize < 0, wal is disabled.
	WALSegmentSize int

	// Duration of persisted data to keep.
	RetentionDuration uint64

	// Maximum number of bytes in blocks to be retained.
	// 0 or less means disabled.
	// NOTE: For proper storage calculations, the size of the WAL folder
	// also needs to be considered; it is not included when calculating
	// the current size of the database.
	MaxBytes int64

	// The sizes of the Blocks.
	BlockRanges []int64

	// NoLockfile disables creation and consideration of a lock file.
	NoLockfile bool

	// Overlapping blocks are allowed if AllowOverlappingBlocks is true.
	// This in-turn enables vertical compaction and vertical query merge.
	AllowOverlappingBlocks bool

	// WALCompression will turn on Snappy compression for records on the WAL.
	WALCompression bool
}

Options of the DB storage.

type Overlaps

type Overlaps map[TimeRange][]BlockMeta

Overlaps contains overlapping blocks aggregated by overlapping range.

func OverlappingBlocks

func OverlappingBlocks(bm []BlockMeta) Overlaps

OverlappingBlocks returns all overlapping blocks from given meta files.

func (Overlaps) String

func (o Overlaps) String() string

String returns a human-readable string form of the overlapping blocks.

type Querier

type Querier interface {
	// Select returns a set of series that matches the given label matchers.
	Select(...labels.Matcher) (SeriesSet, error)

	// LabelValues returns all potential values for a label name.
	LabelValues(string) ([]string, error)

	// LabelValuesFor returns all potential values for a label name
	// under the constraint of another label.
	LabelValuesFor(string, labels.Label) ([]string, error)

	// LabelNames returns all the unique label names present in the block in sorted order.
	LabelNames() ([]string, error)

	// Close releases the resources of the Querier.
	Close() error
}

Querier provides querying access over time series data of a fixed time range.

func NewBlockQuerier

func NewBlockQuerier(b BlockReader, mint, maxt int64) (Querier, error)

NewBlockQuerier returns a querier against the reader.

type RecordDecoder

type RecordDecoder struct {
}

RecordDecoder decodes series, sample, and tombstone records. The zero value is ready to use.

func (*RecordDecoder) Samples

func (d *RecordDecoder) Samples(rec []byte, samples []RefSample) ([]RefSample, error)

Samples appends samples in rec to the given slice.

func (*RecordDecoder) Series

func (d *RecordDecoder) Series(rec []byte, series []RefSeries) ([]RefSeries, error)

Series appends series in rec to the given slice.

func (*RecordDecoder) Tombstones

func (d *RecordDecoder) Tombstones(rec []byte, tstones []Stone) ([]Stone, error)

Tombstones appends tombstones in rec to the given slice.

func (*RecordDecoder) Type

func (d *RecordDecoder) Type(rec []byte) RecordType

Type returns the type of the record. It returns RecordInvalid if no valid record type is found.

type RecordEncoder

type RecordEncoder struct {
}

RecordEncoder encodes series, sample, and tombstone records. The zero value is ready to use.
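
A sketch of a round trip through the encoder and decoder:

var (
	enc tsdb.RecordEncoder
	dec tsdb.RecordDecoder
)

rec := enc.Samples([]tsdb.RefSample{{Ref: 1, T: 1000, V: 0.5}}, nil)

if dec.Type(rec) == tsdb.RecordSamples {
	samples, err := dec.Samples(rec, nil)
	if err != nil {
		return err
	}
	_ = samples
}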

func (*RecordEncoder) Samples

func (e *RecordEncoder) Samples(samples []RefSample, b []byte) []byte

Samples appends the encoded samples to b and returns the resulting slice.

func (*RecordEncoder) Series

func (e *RecordEncoder) Series(series []RefSeries, b []byte) []byte

Series appends the encoded series to b and returns the resulting slice.

func (*RecordEncoder) Tombstones

func (e *RecordEncoder) Tombstones(tstones []Stone, b []byte) []byte

Tombstones appends the encoded tombstones to b and returns the resulting slice.

type RecordType

type RecordType uint8

RecordType represents the data type of a record.

const (
	// RecordInvalid is returned for unrecognised WAL record types.
	RecordInvalid RecordType = 255
	// RecordSeries is used to match WAL records of type Series.
	RecordSeries RecordType = 1
	// RecordSamples is used to match WAL records of type Samples.
	RecordSamples RecordType = 2
	// RecordTombstones is used to match WAL records of type Tombstones.
	RecordTombstones RecordType = 3
)

type RefSample

type RefSample struct {
	Ref uint64
	T   int64
	V   float64
	// contains filtered or unexported fields
}

RefSample is a timestamp/value pair associated with a reference to a series.

type RefSeries

type RefSeries struct {
	Ref    uint64
	Labels labels.Labels
}

RefSeries is the series labels with the series ID.

type SegmentWAL

type SegmentWAL struct {
	// contains filtered or unexported fields
}

SegmentWAL is a write ahead log for series data.

DEPRECATED: use wal pkg combined with the record coders instead.

func OpenSegmentWAL

func OpenSegmentWAL(dir string, logger log.Logger, flushInterval time.Duration, r prometheus.Registerer) (*SegmentWAL, error)

OpenSegmentWAL opens or creates a write ahead log in the given directory. The WAL must be read completely before new data is written.

func (*SegmentWAL) Close

func (w *SegmentWAL) Close() error

Close syncs all data and closes the underlying resources.

func (*SegmentWAL) LogDeletes

func (w *SegmentWAL) LogDeletes(stones []Stone) error

LogDeletes writes a batch of new deletes to the log.

func (*SegmentWAL) LogSamples

func (w *SegmentWAL) LogSamples(samples []RefSample) error

LogSamples writes a batch of new samples to the log.

func (*SegmentWAL) LogSeries

func (w *SegmentWAL) LogSeries(series []RefSeries) error

LogSeries writes a batch of new series labels to the log. The series have to be ordered.

func (*SegmentWAL) Reader

func (w *SegmentWAL) Reader() WALReader

Reader returns a new reader over the write ahead log data. It must be completely consumed before writing to the WAL.

func (*SegmentWAL) Sync

func (w *SegmentWAL) Sync() error

Sync flushes the changes to disk.

func (*SegmentWAL) Truncate

func (w *SegmentWAL) Truncate(mint int64, keep func(uint64) bool) error

Truncate deletes the values prior to mint and the series that the keep function does not indicate should be preserved.

type Series

type Series interface {
	// Labels returns the complete set of labels identifying the series.
	Labels() labels.Labels

	// Iterator returns a new iterator of the data of the series.
	Iterator() SeriesIterator
}

Series exposes a single time series.

type SeriesIterator

type SeriesIterator interface {
	// Seek advances the iterator forward to the given timestamp.
	// If there's no value exactly at t, it advances to the first value
	// after t.
	Seek(t int64) bool
	// At returns the current timestamp/value pair.
	At() (t int64, v float64)
	// Next advances the iterator by one.
	Next() bool
	// Err returns the current error.
	Err() error
}

SeriesIterator iterates over the data of a time series.
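
A sketch of Seek's documented behavior, assuming s is a Series obtained from a SeriesSet and t0 is a hypothetical timestamp:

it := s.Iterator()
// Position at the first sample at or after t0.
if it.Seek(t0) {
	t, v := it.At()
	_, _ = t, v
}
if err := it.Err(); err != nil {
	return err
}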

type SeriesSet

type SeriesSet interface {
	Next() bool
	At() Series
	Err() error
}

SeriesSet contains a set of series.

func EmptySeriesSet

func EmptySeriesSet() SeriesSet

EmptySeriesSet returns a series set that's always empty.

func NewMergedSeriesSet

func NewMergedSeriesSet(a, b SeriesSet) SeriesSet

NewMergedSeriesSet merges two series sets into a single series set. The input series sets must be sorted and sequential in time, i.e. if they have the same label set, the datapoints of a must be before the datapoints of b.

func NewMergedVerticalSeriesSet added in v0.10.2

func NewMergedVerticalSeriesSet(a, b SeriesSet) SeriesSet

NewMergedVerticalSeriesSet merges two series sets into a single series set. The input series sets must be sorted, and the time ranges of the series may overlap.

type Stone

type Stone struct {
	// contains filtered or unexported fields
}

Stone holds the information on the posting and time-range that is deleted.

type StringTuples

type StringTuples interface {
	// Total number of tuples in the list.
	Len() int
	// At returns the tuple at position i.
	At(i int) ([]string, error)
}

StringTuples provides access to a sorted list of string tuples.

type TimeRange

type TimeRange struct {
	Min, Max int64
}

TimeRange specifies minTime and maxTime range.

type TombstoneReader

type TombstoneReader interface {
	// Get returns deletion intervals for the series with the given reference.
	Get(ref uint64) (Intervals, error)

	// Iter calls the given function for each encountered interval.
	Iter(func(uint64, Intervals) error) error

	// Total returns the total count of tombstones.
	Total() uint64

	// Close releases any underlying resources.
	Close() error
}

TombstoneReader gives access to tombstone intervals by series reference.

type WAL

type WAL interface {
	Reader() WALReader
	LogSeries([]RefSeries) error
	LogSamples([]RefSample) error
	LogDeletes([]Stone) error
	Truncate(mint int64, keep func(uint64) bool) error
	Close() error
}

WAL is a write ahead log that can log new series labels and samples. It must be completely read before new entries are logged.

DEPRECATED: use wal pkg combined with the record coders instead.

type WALEntryType

type WALEntryType uint8

WALEntryType indicates what data a WAL entry contains.

const (
	WALEntrySymbols WALEntryType = 1
	WALEntrySeries  WALEntryType = 2
	WALEntrySamples WALEntryType = 3
	WALEntryDeletes WALEntryType = 4
)

Entry types in a segment file.

type WALReader

type WALReader interface {
	Read(
		seriesf func([]RefSeries),
		samplesf func([]RefSample),
		deletesf func([]Stone),
	) error
}

WALReader reads entries from a WAL.

Directories

Path	Synopsis
cmd
fileutil	Package fileutil provides utility methods used when dealing with the filesystem in tsdb.
goversion	Package goversion enforces the go version supported by the tsdb module.
