tsdb

package
v0.9.4-rc1.0...-2cf2382
Published: Aug 24, 2017 License: MIT Imports: 27 Imported by: 0

README

Line Protocol

The line protocol is a text based format for writing points to InfluxDB. Each line defines a single point. Multiple lines must be separated by the newline character \n. The format of the line consists of three parts:

[key] [fields] [timestamp]

Each section is separated by spaces. The minimum required point consists of a measurement name and at least one field. Points without a specified timestamp will be written using the server's local timestamp. Timestamps are assumed to be in nanoseconds unless a precision value is passed in the query string.

Key

The key is the measurement name and any optional tags separated by commas. Measurement names, tag keys, and tag values must escape any spaces or commas using a backslash (\). For example: \ and \,. All tag values are stored as strings and should not be surrounded in quotes.

Tags should be sorted by key before being sent for best performance. The sort should match that from the Go bytes.Compare function (http://golang.org/pkg/bytes/#Compare).
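
A minimal Go sketch (not part of this package's README) showing one way a client might order tags to match bytes.Compare before building the key; the measurement, tags, and field below are placeholder values.

package main

import (
	"bytes"
	"fmt"
	"sort"
)

func main() {
	tags := map[string]string{"region": "us-west", "host": "serverA"}

	// Sort tag keys using the same ordering as bytes.Compare.
	keys := make([]string, 0, len(tags))
	for k := range tags {
		keys = append(keys, k)
	}
	sort.Slice(keys, func(i, j int) bool {
		return bytes.Compare([]byte(keys[i]), []byte(keys[j])) < 0
	})

	// Assemble the key section: measurement followed by comma-separated tags.
	line := "cpu"
	for _, k := range keys {
		line += fmt.Sprintf(",%s=%s", k, tags[k])
	}
	fmt.Println(line + " value=1i") // cpu,host=serverA,region=us-west value=1i
}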

Examples
# measurement only
cpu

# measurement and tags
cpu,host=serverA,region=us-west

# measurement with commas
cpu\,01,host=serverA,region=us-west

# tag value with spaces
cpu,host=server\ A,region=us\ west

Fields

Fields are key-value metrics associated with the measurement. Every line must have at least one field. Multiple fields must be separated with commas and not spaces.

Field keys are always strings and follow the same syntactical rules as described above for tag keys and values. Field values can be one of four types. The first value written for a given field on a given measurement defines the type of that field for all series under that measurement.

  • integer - Numeric values that do not include a decimal and are followed by a trailing i when inserted (e.g. 1i, 345i, 2015i, -10i). Note that all values must have a trailing i. If they do not they will be written as floats.
  • float - Numeric values that are not followed by a trailing i (e.g. 1, 1.0, -3.14, 6.0e+5, 10).
  • boolean - A value indicating true or false. Valid boolean strings are (t, T, true, TRUE, f, F, false, and FALSE).
  • string - A text value. All string values must be surrounded in double quotes ("). If the string contains a double quote or a backslash, it must be escaped with a backslash, e.g. \", \\.
# integer value
cpu value=1i

cpu value=1.1i # will result in a parse error

# float value
cpu_load value=1

cpu_load value=1.0

cpu_load value=1.2

# boolean value
error fatal=true

# string value
event msg="logged out"

# multiple values
cpu load=10,alert=true,reason="value above maximum threshold"

Timestamp

The timestamp section is optional but should be specified if possible. The value is an integer representing nanoseconds since the epoch. If the timestamp is not provided the point will inherit the server's local timestamp.

Some write APIs allow passing a lower precision. If the API supports a lower precision, the timestamp may also be an integer epoch in microseconds, milliseconds, seconds, minutes or hours.
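
For illustration, the same point written with second precision (assuming the write API's precision=s query parameter) could look like:

# timestamp in seconds
cpu,host=server01 value=1 1434055562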

Full Example

A full example is shown below.

cpu,host=server01,region=uswest value=1 1434055562000000000
cpu,host=server02,region=uswest value=3 1434055562000010000

In this example the first line shows a measurement of "cpu", there are two tags "host" and "region", the value is 1.0, and the timestamp is 1434055562000000000. Following this is a second line, also a point in the measurement "cpu" but belonging to a different "host".

cpu,host=server\ 01,region=uswest value=1,msg="all systems nominal"
cpu,host=server\ 01,region=us\,west value_int=1i

In these examples, the "host" is set to server 01. The field value associated with field key msg is double-quoted, as it is a string. The second example shows a region of us,west with the comma properly escaped. In the first example value is written as a floating point number. In the second, value_int is an integer.

Distributed Queries

Documentation

Overview

Package tsdb implements a durable time series database.

Index

Constants

const (
	// DefaultEngine is the default engine for new shards
	DefaultEngine = "tsm1"

	// DefaultIndex is the default index for new shards
	DefaultIndex = "inmem"

	// DefaultCacheMaxMemorySize is the maximum size a shard's cache can
	// reach before it starts rejecting writes.
	DefaultCacheMaxMemorySize = 1024 * 1024 * 1024 // 1GB

	// DefaultCacheSnapshotMemorySize is the size at which the engine will
	// snapshot the cache and write it to a TSM file, freeing up memory
	DefaultCacheSnapshotMemorySize = 25 * 1024 * 1024 // 25MB

	// DefaultCacheSnapshotWriteColdDuration is the length of time at which
	// the engine will snapshot the cache and write it to a new TSM file if
	// the shard hasn't received writes or deletes
	DefaultCacheSnapshotWriteColdDuration = time.Duration(10 * time.Minute)

	// DefaultCompactFullWriteColdDuration is the duration at which the engine
	// will compact all TSM files in a shard if it hasn't received a write or delete
	DefaultCompactFullWriteColdDuration = time.Duration(4 * time.Hour)

	// DefaultMaxPointsPerBlock is the maximum number of points in an encoded
	// block in a TSM file
	DefaultMaxPointsPerBlock = 1000

	// DefaultMaxSeriesPerDatabase is the maximum number of series a node can hold per database.
	// This limit only applies to the "inmem" index.
	DefaultMaxSeriesPerDatabase = 1000000

	// DefaultMaxValuesPerTag is the maximum number of values a tag can have within a measurement.
	DefaultMaxValuesPerTag = 100000

	// DefaultMaxConcurrentCompactions is the maximum number of concurrent full and level compactions
	// that can run at one time.  A value of 0 results in runtime.GOMAXPROCS(0) used at runtime.
	DefaultMaxConcurrentCompactions = 0
)

EOF represents a "not found" key returned by a Cursor.

Variables

var (
	// ErrFormatNotFound is returned when no format can be determined from a path.
	ErrFormatNotFound = errors.New("format not found")

	// ErrUnknownEngineFormat is returned when the engine format is
	// unknown. ErrUnknownEngineFormat is currently returned if a format
	// other than tsm1 is encountered.
	ErrUnknownEngineFormat = errors.New("unknown engine format")
)
var (
	// ErrFieldOverflow is returned when too many fields are created on a measurement.
	ErrFieldOverflow = errors.New("field overflow")

	// ErrFieldTypeConflict is returned when a new field already exists with a different type.
	ErrFieldTypeConflict = errors.New("field type conflict")

	// ErrFieldNotFound is returned when a field cannot be found.
	ErrFieldNotFound = errors.New("field not found")

	// ErrFieldUnmappedID is returned when the system is presented, during decode, with a field ID
	// there is no mapping for.
	ErrFieldUnmappedID = errors.New("field ID not mapped")

	// ErrEngineClosed is returned when a caller attempts indirectly to
	// access the shard's underlying engine.
	ErrEngineClosed = errors.New("engine is closed")

	// ErrShardDisabled is returned when the shard is not available for
	// queries or writes.
	ErrShardDisabled = errors.New("shard is disabled")
)
var (
	// ErrShardNotFound is returned when trying to get a non existing shard.
	ErrShardNotFound = fmt.Errorf("shard not found")
	// ErrStoreClosed is returned when trying to use a closed Store.
	ErrStoreClosed = fmt.Errorf("store is closed")
)
var NewInmemIndex func(name string) (interface{}, error)

NewInmemIndex returns a new "inmem" index type.

Functions

func MakeTagsKey

func MakeTagsKey(keys []string, tags models.Tags) []byte

MakeTagsKey converts a tag set to bytes for use as a lookup key.

func MarshalTags

func MarshalTags(tags map[string]string) []byte

MarshalTags converts a tag set to bytes for use as a lookup key.
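
The sketch below (not generated documentation; import paths are assumptions) shows how a lookup key might be built with MarshalTags and MakeTagsKey.

package main

import (
	"fmt"

	"github.com/influxdata/influxdb/models"
	"github.com/influxdata/influxdb/tsdb"
)

func main() {
	// MarshalTags takes a plain map of tag key/value pairs.
	key := tsdb.MarshalTags(map[string]string{"host": "serverA", "region": "us-west"})
	fmt.Printf("%q\n", key)

	// MakeTagsKey builds a key from a models.Tags value for the listed tag keys.
	tags := models.NewTags(map[string]string{"host": "serverA", "region": "us-west"})
	fmt.Printf("%q\n", tsdb.MakeTagsKey([]string{"host", "region"}, tags))
}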

func MeasurementFromSeriesKey

func MeasurementFromSeriesKey(key []byte) []byte

MeasurementFromSeriesKey returns the name of the measurement from a key that contains a measurement name.

func NewFieldKeysIterator

func NewFieldKeysIterator(sh *Shard, opt influxql.IteratorOptions) (influxql.Iterator, error)

NewFieldKeysIterator returns an iterator that can be iterated over to retrieve field keys.

func NewShardError

func NewShardError(id uint64, err error) error

NewShardError returns a new ShardError.

func NewTagKeysIterator

func NewTagKeysIterator(sh *Shard, opt influxql.IteratorOptions) (influxql.Iterator, error)

NewTagKeysIterator returns a new instance of TagKeysIterator.

func RegisterEngine

func RegisterEngine(name string, fn NewEngineFunc)

RegisterEngine registers a storage engine initializer by name.

func RegisterIndex

func RegisterIndex(name string, fn NewIndexFunc)

RegisterIndex registers a storage index initializer by name.

func RegisteredEngines

func RegisteredEngines() []string

RegisteredEngines returns the slice of currently registered engines.

func RegisteredIndexes

func RegisteredIndexes() []string

RegisteredIndexes returns the slice of currently registered indexes.
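
A small sketch (import paths are assumptions) that prints the registered engines and indexes; per the engine subpackage's synopsis, importing it registers the available engines.

package main

import (
	"fmt"

	"github.com/influxdata/influxdb/tsdb"
	_ "github.com/influxdata/influxdb/tsdb/engine" // registers the built-in engines (e.g. tsm1)
)

func main() {
	fmt.Println("engines:", tsdb.RegisteredEngines())
	// RegisteredIndexes may be empty unless an index package has also been
	// imported for its registration side effect (assumption).
	fmt.Println("indexes:", tsdb.RegisteredIndexes())
}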

Types

type Config

type Config struct {
	Dir    string `toml:"dir"`
	Engine string `toml:"-"`
	Index  string `toml:"index-version"`

	// General WAL configuration options
	WALDir string `toml:"wal-dir"`

	// WALFsyncDelay is the amount of time that a write will wait before fsyncing.  A duration
	// greater than 0 can be used to batch up multiple fsync calls.  This is useful for slower
	// disks or when WAL write contention is seen.  A value of 0 fsyncs every write to the WAL.
	WALFsyncDelay toml.Duration `toml:"wal-fsync-delay"`

	// Query logging
	QueryLogEnabled bool `toml:"query-log-enabled"`

	// Compaction options for tsm1 (descriptions above with defaults)
	CacheMaxMemorySize             uint64        `toml:"cache-max-memory-size"`
	CacheSnapshotMemorySize        uint64        `toml:"cache-snapshot-memory-size"`
	CacheSnapshotWriteColdDuration toml.Duration `toml:"cache-snapshot-write-cold-duration"`
	CompactFullWriteColdDuration   toml.Duration `toml:"compact-full-write-cold-duration"`

	// MaxSeriesPerDatabase is the maximum number of series a node can hold per database.
	// When this limit is exceeded, writes return a 'max series per database exceeded' error.
	// A value of 0 disables the limit. This limit only applies when using the "inmem" index.
	MaxSeriesPerDatabase int `toml:"max-series-per-database"`

	// MaxValuesPerTag is the maximum number of tag values a single tag key can have within
	// a measurement.  When the limit is exceeded, writes return an error.
	// A value of 0 disables the limit.
	MaxValuesPerTag int `toml:"max-values-per-tag"`

	// MaxConcurrentCompactions is the maximum number of concurrent level and full compactions
	// that can be running at one time across all shards.  Compactions scheduled to run when the
	// limit is reached are blocked until a running compaction completes.  Snapshot compactions are
	// not affected by this limit.  A value of 0 limits compactions to runtime.GOMAXPROCS(0).
	MaxConcurrentCompactions int `toml:"max-concurrent-compactions"`

	TraceLoggingEnabled bool `toml:"trace-logging-enabled"`
}

Config holds the configuration for the tsdb package.

func NewConfig

func NewConfig() Config

NewConfig returns the default configuration for tsdb.
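
A hedged sketch of constructing and validating a configuration; the directory paths are placeholders and the fields follow the Config struct above.

package main

import (
	"log"

	"github.com/influxdata/influxdb/tsdb"
)

func main() {
	cfg := tsdb.NewConfig()
	cfg.Dir = "/var/lib/influxdb/data"   // shard data directory (placeholder)
	cfg.WALDir = "/var/lib/influxdb/wal" // WAL directory (placeholder)
	cfg.MaxValuesPerTag = 0              // 0 disables the per-tag value limit

	if err := cfg.Validate(); err != nil {
		log.Fatal(err)
	}
}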

func (Config) Diagnostics

func (c Config) Diagnostics() (*diagnostics.Diagnostics, error)

Diagnostics returns a diagnostics representation of a subset of the Config.

func (*Config) Validate

func (c *Config) Validate() error

Validate validates the configuration held by c.

type Cursor

type Cursor interface {
	SeekTo(seek int64) (key int64, value interface{})
	Next() (key int64, value interface{})
	Ascending() bool
}

Cursor represents an iterator over a series.

type Engine

type Engine interface {
	Open() error
	Close() error
	SetEnabled(enabled bool)
	SetCompactionsEnabled(enabled bool)

	WithLogger(zap.Logger)

	LoadMetadataIndex(shardID uint64, index Index) error

	CreateSnapshot() (string, error)
	Backup(w io.Writer, basePath string, since time.Time) error
	Restore(r io.Reader, basePath string) error
	Import(r io.Reader, basePath string) error

	CreateIterator(measurement string, opt influxql.IteratorOptions) (influxql.Iterator, error)
	WritePoints(points []models.Point) error

	CreateSeriesIfNotExists(key, name []byte, tags models.Tags) error
	CreateSeriesListIfNotExists(keys, names [][]byte, tags []models.Tags) error
	DeleteSeriesRange(keys [][]byte, min, max int64) error

	SeriesSketches() (estimator.Sketch, estimator.Sketch, error)
	MeasurementsSketches() (estimator.Sketch, estimator.Sketch, error)
	SeriesN() int64

	MeasurementExists(name []byte) (bool, error)
	MeasurementNamesByExpr(expr influxql.Expr) ([][]byte, error)
	MeasurementNamesByRegex(re *regexp.Regexp) ([][]byte, error)
	MeasurementFields(measurement []byte) *MeasurementFields
	ForEachMeasurementName(fn func(name []byte) error) error
	DeleteMeasurement(name []byte) error

	// TagKeys(name []byte) ([][]byte, error)
	HasTagKey(name, key []byte) (bool, error)
	MeasurementTagKeysByExpr(name []byte, expr influxql.Expr) (map[string]struct{}, error)
	MeasurementTagKeyValuesByExpr(name []byte, key []string, expr influxql.Expr, keysSorted bool) ([][]string, error)
	ForEachMeasurementTagKey(name []byte, fn func(key []byte) error) error
	TagKeyCardinality(name, key []byte) int

	// InfluxQL iterators
	MeasurementSeriesKeysByExpr(name []byte, condition influxql.Expr) ([][]byte, error)
	ForEachMeasurementSeriesByExpr(name []byte, expr influxql.Expr, fn func(tags models.Tags) error) error
	SeriesPointIterator(opt influxql.IteratorOptions) (influxql.Iterator, error)

	// Statistics will return statistics relevant to this engine.
	Statistics(tags map[string]string) []models.Statistic
	LastModified() time.Time
	DiskSize() int64
	IsIdle() bool

	io.WriterTo
}

Engine represents a swappable storage engine for the shard.

func NewEngine

func NewEngine(id uint64, i Index, database, path string, walPath string, options EngineOptions) (Engine, error)

NewEngine returns an instance of an engine based on its format. If the path does not exist then the DefaultFormat is used.

type EngineFormat

type EngineFormat int

EngineFormat represents the format for an engine.

const (
	// TSM1Format is the format used by the tsm1 engine.
	TSM1Format EngineFormat = 2
)

type EngineOptions

type EngineOptions struct {
	EngineVersion     string
	IndexVersion      string
	ShardID           uint64
	InmemIndex        interface{} // shared in-memory index
	CompactionLimiter limiter.Fixed

	Config Config
}

EngineOptions represents the options used to initialize the engine.

func NewEngineOptions

func NewEngineOptions() EngineOptions

NewEngineOptions returns the default options.

type Field

type Field struct {
	ID   uint8             `json:"id,omitempty"`
	Name string            `json:"name,omitempty"`
	Type influxql.DataType `json:"type,omitempty"`
}

Field represents a series field.

type FieldCreate

type FieldCreate struct {
	Measurement []byte
	Field       *Field
}

FieldCreate holds information for a field to create on a measurement.

type Index

type Index interface {
	Open() error
	Close() error
	WithLogger(zap.Logger)

	MeasurementExists(name []byte) (bool, error)
	MeasurementNamesByExpr(expr influxql.Expr) ([][]byte, error)
	MeasurementNamesByRegex(re *regexp.Regexp) ([][]byte, error)
	DropMeasurement(name []byte) error
	ForEachMeasurementName(fn func(name []byte) error) error

	InitializeSeries(key, name []byte, tags models.Tags) error
	CreateSeriesIfNotExists(key, name []byte, tags models.Tags) error
	CreateSeriesListIfNotExists(keys, names [][]byte, tags []models.Tags) error
	DropSeries(key []byte) error

	SeriesSketches() (estimator.Sketch, estimator.Sketch, error)
	MeasurementsSketches() (estimator.Sketch, estimator.Sketch, error)
	SeriesN() int64

	HasTagKey(name, key []byte) (bool, error)
	TagSets(name []byte, options influxql.IteratorOptions) ([]*influxql.TagSet, error)
	MeasurementTagKeysByExpr(name []byte, expr influxql.Expr) (map[string]struct{}, error)
	MeasurementTagKeyValuesByExpr(name []byte, keys []string, expr influxql.Expr, keysSorted bool) ([][]string, error)

	ForEachMeasurementTagKey(name []byte, fn func(key []byte) error) error
	TagKeyCardinality(name, key []byte) int

	// InfluxQL system iterators
	MeasurementSeriesKeysByExpr(name []byte, condition influxql.Expr) ([][]byte, error)
	ForEachMeasurementSeriesByExpr(name []byte, expr influxql.Expr, fn func(tags models.Tags) error) error
	SeriesPointIterator(opt influxql.IteratorOptions) (influxql.Iterator, error)

	// Sets a shared fieldset from the engine.
	SetFieldSet(fs *MeasurementFieldSet)

	// Creates hard links inside path for snapshotting.
	SnapshotTo(path string) error

	// To be removed w/ tsi1.
	SetFieldName(measurement []byte, name string)
	AssignShard(k string, shardID uint64)
	UnassignShard(k string, shardID uint64) error
	RemoveShard(shardID uint64)

	Type() string
}

func MustOpenIndex

func MustOpenIndex(id uint64, database, path string, options EngineOptions) Index

func NewIndex

func NewIndex(id uint64, database, path string, options EngineOptions) (Index, error)

NewIndex returns an instance of an index based on its format. If the path does not exist then the DefaultFormat is used.

type IndexFormat

type IndexFormat int

IndexFormat represents the format for an index.

const (
	// InMemFormat is the format used by the original in-memory shared index.
	InMemFormat IndexFormat = 1

	// TSI1Format is the format used by the tsi1 index.
	TSI1Format IndexFormat = 2
)

type KeyValue

type KeyValue struct {
	Key, Value string
}

KeyValue holds a string key and a string value.

type KeyValues

type KeyValues []KeyValue

KeyValues is a sortable slice of KeyValue.

func (KeyValues) Len

func (a KeyValues) Len() int

Len implements sort.Interface.

func (KeyValues) Less

func (a KeyValues) Less(i, j int) bool

Less implements sort.Interface. Keys are compared before values.

func (KeyValues) Swap

func (a KeyValues) Swap(i, j int)

Swap implements sort.Interface.
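
A minimal sketch demonstrating that KeyValues sorts with keys compared before values.

package main

import (
	"fmt"
	"sort"

	"github.com/influxdata/influxdb/tsdb"
)

func main() {
	kvs := tsdb.KeyValues{
		{Key: "region", Value: "us-west"},
		{Key: "host", Value: "serverB"},
		{Key: "host", Value: "serverA"},
	}
	sort.Sort(kvs)
	fmt.Println(kvs) // host/serverA, then host/serverB, then region/us-west
}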

type LimitError

type LimitError struct {
	Reason string
}

LimitError represents an error caused by a configurable limit.

func (*LimitError) Error

func (e *LimitError) Error() string

type MeasurementFieldSet

type MeasurementFieldSet struct {
	// contains filtered or unexported fields
}

MeasurementFieldSet represents a collection of fields by measurement. It is safe for concurrent use.

func NewMeasurementFieldSet

func NewMeasurementFieldSet() *MeasurementFieldSet

NewMeasurementFieldSet returns a new instance of MeasurementFieldSet.

func (*MeasurementFieldSet) CreateFieldsIfNotExists

func (fs *MeasurementFieldSet) CreateFieldsIfNotExists(name []byte) *MeasurementFields

CreateFieldsIfNotExists returns fields for a measurement by name.

func (*MeasurementFieldSet) Delete

func (fs *MeasurementFieldSet) Delete(name string)

Delete removes a field set for a measurement.

func (*MeasurementFieldSet) DeleteWithLock

func (fs *MeasurementFieldSet) DeleteWithLock(name string, fn func() error) error

DeleteWithLock executes fn and removes a field set from a measurement under lock.

func (*MeasurementFieldSet) Fields

func (fs *MeasurementFieldSet) Fields(name string) *MeasurementFields

Fields returns fields for a measurement by name.

type MeasurementFields

type MeasurementFields struct {
	// contains filtered or unexported fields
}

MeasurementFields holds the fields of a measurement and their codec.

func NewMeasurementFields

func NewMeasurementFields() *MeasurementFields

NewMeasurementFields returns an initialised *MeasurementFields value.

func (*MeasurementFields) Clone

Clone returns a copy of the MeasurementFields.

func (*MeasurementFields) CreateFieldIfNotExists

func (m *MeasurementFields) CreateFieldIfNotExists(name []byte, typ influxql.DataType, limitCount bool) error

CreateFieldIfNotExists creates a new field with an autoincrementing ID. Returns an error if 255 fields have already been created on the measurement or the field already exists with a different type.
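
A hedged sketch of registering fields on a measurement; the measurement and field names are placeholders, and the influxql import path is an assumption for this version.

package main

import (
	"fmt"
	"log"

	"github.com/influxdata/influxdb/influxql"
	"github.com/influxdata/influxdb/tsdb"
)

func main() {
	fs := tsdb.NewMeasurementFieldSet()
	mf := fs.CreateFieldsIfNotExists([]byte("cpu"))

	if err := mf.CreateFieldIfNotExists([]byte("value"), influxql.Float, false); err != nil {
		log.Fatal(err)
	}
	// Creating the same field again with a different type reports a conflict.
	if err := mf.CreateFieldIfNotExists([]byte("value"), influxql.String, false); err != nil {
		fmt.Println(err)
	}
}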

func (*MeasurementFields) Field

func (m *MeasurementFields) Field(name string) *Field

Field returns the field for name, or nil if there is no field for name.

func (*MeasurementFields) FieldBytes

func (m *MeasurementFields) FieldBytes(name []byte) *Field

FieldBytes returns the field for name, or nil if there is no field for name. FieldBytes should be preferred to Field when the caller has a []byte, because it avoids a string allocation, which can't be avoided if the caller converts the []byte to a string and calls Field.

func (*MeasurementFields) FieldN

func (m *MeasurementFields) FieldN() int

func (*MeasurementFields) FieldSet

func (m *MeasurementFields) FieldSet() map[string]influxql.DataType

FieldSet returns the set of fields and their types for the measurement.

func (*MeasurementFields) HasField

func (m *MeasurementFields) HasField(name string) bool

func (*MeasurementFields) MarshalBinary

func (m *MeasurementFields) MarshalBinary() ([]byte, error)

MarshalBinary encodes the object to a binary format.

func (*MeasurementFields) UnmarshalBinary

func (m *MeasurementFields) UnmarshalBinary(buf []byte) error

UnmarshalBinary decodes the object from a binary format.

type NewEngineFunc

type NewEngineFunc func(id uint64, i Index, database, path string, walPath string, options EngineOptions) Engine

NewEngineFunc creates a new engine.

type NewIndexFunc

type NewIndexFunc func(id uint64, database, path string, options EngineOptions) Index

NewIndexFunc creates a new index.

type PartialWriteError

type PartialWriteError struct {
	Reason  string
	Dropped int

	// The set of series keys that were dropped. Can be nil.
	DroppedKeys map[string]struct{}
}

PartialWriteError indicates a write request could only write a portion of the requested values.

func (PartialWriteError) Error

func (e PartialWriteError) Error() string

type PointBatcher

type PointBatcher struct {
	// contains filtered or unexported fields
}

PointBatcher accepts Points and will emit a batch of those points when either a) the batch reaches a certain size, or b) a certain time passes.

func NewPointBatcher

func NewPointBatcher(sz int, bp int, d time.Duration) *PointBatcher

NewPointBatcher returns a new PointBatcher. sz is the batching size, bp is the maximum number of batches that may be pending. d is the time after which a batch will be emitted after the first point is received for the batch, regardless of its size.
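
A minimal lifecycle sketch (import paths and values are assumptions): points are sent on In() and a batch is read from Out() once the size threshold or the timeout is reached.

package main

import (
	"fmt"
	"time"

	"github.com/influxdata/influxdb/models"
	"github.com/influxdata/influxdb/tsdb"
)

func main() {
	b := tsdb.NewPointBatcher(100, 10, 50*time.Millisecond)
	b.Start()
	defer b.Stop()

	p, err := models.NewPoint("cpu",
		models.NewTags(map[string]string{"host": "serverA"}),
		models.Fields{"value": 1.0}, time.Now())
	if err != nil {
		panic(err)
	}
	b.In() <- p

	// Only one point is pending, so the batch is emitted after the 50ms timeout.
	batch := <-b.Out()
	fmt.Println(len(batch), "point(s) in batch")
}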

func (*PointBatcher) Flush

func (b *PointBatcher) Flush()

Flush instructs the batcher to emit any pending points in a batch, regardless of batch size. If there are no pending points, no batch is emitted.

func (*PointBatcher) In

func (b *PointBatcher) In() chan<- models.Point

In returns the channel to which points should be written.

func (*PointBatcher) Out

func (b *PointBatcher) Out() <-chan []models.Point

Out returns the channel from which batches should be read.

func (*PointBatcher) Start

func (b *PointBatcher) Start()

Start starts the batching process. Returns the in and out channels for points and point-batches respectively.

func (*PointBatcher) Stats

func (b *PointBatcher) Stats() *PointBatcherStats

Stats returns a PointBatcherStats object for the PointBatcher. While each statistic should be closely correlated with the others, this is not guaranteed.

func (*PointBatcher) Stop

func (b *PointBatcher) Stop()

Stop stops the batching process. Stop waits for the batching routine to stop before returning.

type PointBatcherStats

type PointBatcherStats struct {
	BatchTotal   uint64 // Total count of batches transmitted.
	PointTotal   uint64 // Total count of points processed.
	SizeTotal    uint64 // Number of batches that reached size threshold.
	TimeoutTotal uint64 // Number of timeouts that occurred.
}

PointBatcherStats are the statistics each batcher tracks.

type Shard

type Shard struct {
	EnableOnOpen bool
	// contains filtered or unexported fields
}

Shard represents a self-contained time series database. An inverted index of the measurement and tag data is kept along with the raw time series data. Data can be split across many shards. The query engine in TSDB is responsible for combining the output of many shards into a single query result.

func NewShard

func NewShard(id uint64, path string, walPath string, opt EngineOptions) *Shard

NewShard returns a new initialized Shard. walPath doesn't apply to the b1 type index.

func (*Shard) Close

func (s *Shard) Close() error

Close shuts down the shard's store.

func (*Shard) CloseFast

func (s *Shard) CloseFast() error

CloseFast closes the shard without cleaning up the shard ID or any of the shard's series keys from the index it belongs to.

CloseFast can be called when the entire index is being removed, e.g., when the database the shard belongs to is being dropped.

func (*Shard) CreateIterator

func (s *Shard) CreateIterator(measurement string, opt influxql.IteratorOptions) (influxql.Iterator, error)

CreateIterator returns an iterator for the data in the shard.

func (*Shard) CreateSnapshot

func (s *Shard) CreateSnapshot() (string, error)

CreateSnapshot will return a path to a temp directory containing hard links to the underlying shard files.

func (*Shard) Database

func (s *Shard) Database() string

Database returns the database of the shard.

func (*Shard) DeleteMeasurement

func (s *Shard) DeleteMeasurement(name []byte) error

DeleteMeasurement deletes a measurement and all underlying series.

func (*Shard) DeleteSeries

func (s *Shard) DeleteSeries(seriesKeys [][]byte) error

DeleteSeries deletes a list of series.

func (*Shard) DeleteSeriesRange

func (s *Shard) DeleteSeriesRange(seriesKeys [][]byte, min, max int64) error

DeleteSeriesRange deletes all values for seriesKeys between min and max (inclusive).

func (*Shard) DiskSize

func (s *Shard) DiskSize() (int64, error)

DiskSize returns the size on disk of this shard.

func (*Shard) ExpandSources

func (s *Shard) ExpandSources(sources influxql.Sources) (influxql.Sources, error)

ExpandSources expands regex sources and removes duplicates. NOTE: sources must be normalized (db and rp set) before calling this function.

func (*Shard) FieldDimensions

func (s *Shard) FieldDimensions(measurements []string) (fields map[string]influxql.DataType, dimensions map[string]struct{}, err error)

FieldDimensions returns unique sets of fields and dimensions across a list of sources.

func (*Shard) ForEachMeasurementTagKey

func (s *Shard) ForEachMeasurementTagKey(name []byte, fn func(key []byte) error) error

func (*Shard) ID

func (s *Shard) ID() uint64

ID returns the shard's ID.

func (*Shard) Import

func (s *Shard) Import(r io.Reader, basePath string) error

Import imports data to the underlying engine for the shard. r should be a reader from a backup created by Backup.

func (*Shard) IndexType

func (s *Shard) IndexType() string

func (*Shard) IsIdle

func (s *Shard) IsIdle() bool

IsIdle returns true if the shard is not receiving writes and is fully compacted.

func (*Shard) LastModified

func (s *Shard) LastModified() time.Time

LastModified returns the time when this shard was last modified.

func (*Shard) MapType

func (s *Shard) MapType(measurement, field string) influxql.DataType

MapType returns the data type for the field within the measurement.

func (*Shard) MeasurementExists

func (s *Shard) MeasurementExists(name []byte) (bool, error)

func (*Shard) MeasurementFields

func (s *Shard) MeasurementFields(name []byte) *MeasurementFields

MeasurementFields returns fields for a measurement.

func (*Shard) MeasurementNamesByExpr

func (s *Shard) MeasurementNamesByExpr(cond influxql.Expr) ([][]byte, error)

MeasurementNamesByExpr returns names of measurements matching the condition. If cond is nil then all measurement names are returned.

func (*Shard) MeasurementsByRegex

func (s *Shard) MeasurementsByRegex(re *regexp.Regexp) []string

func (*Shard) MeasurementsSketches

func (s *Shard) MeasurementsSketches() (estimator.Sketch, estimator.Sketch, error)

MeasurementsSketches returns the measurement sketches for the shard.

func (*Shard) Open

func (s *Shard) Open() error

Open initializes and opens the shard's store.

func (*Shard) Path

func (s *Shard) Path() string

Path returns the path set on the shard when it was created.

func (*Shard) Restore

func (s *Shard) Restore(r io.Reader, basePath string) error

Restore restores data to the underlying engine for the shard. The shard is reopened after restore.

func (*Shard) RetentionPolicy

func (s *Shard) RetentionPolicy() string

RetentionPolicy returns the retention policy of the shard.

func (*Shard) SeriesN

func (s *Shard) SeriesN() int64

SeriesN returns the unique number of series in the shard.

func (*Shard) SeriesSketches

func (s *Shard) SeriesSketches() (estimator.Sketch, estimator.Sketch, error)

SeriesSketches returns the series sketches for the shard.

func (*Shard) SetCompactionsEnabled

func (s *Shard) SetCompactionsEnabled(enabled bool)

SetCompactionsEnabled enables or disables shard background compactions.

func (*Shard) SetEnabled

func (s *Shard) SetEnabled(enabled bool)

SetEnabled enables the shard for queries and writes. When disabled, all writes and queries return an error and compactions are stopped for the shard.

func (*Shard) Statistics

func (s *Shard) Statistics(tags map[string]string) []models.Statistic

Statistics returns statistics for periodic monitoring.

func (*Shard) TagKeyCardinality

func (s *Shard) TagKeyCardinality(name, key []byte) int

func (*Shard) UnloadIndex

func (s *Shard) UnloadIndex()

UnloadIndex removes all references to this shard from the DatabaseIndex.

func (*Shard) WithLogger

func (s *Shard) WithLogger(log zap.Logger)

WithLogger sets the logger on the shard.

func (*Shard) WritePoints

func (s *Shard) WritePoints(points []models.Point) error

WritePoints will write the raw data points and any new metadata to the index in the shard.

func (*Shard) WriteTo

func (s *Shard) WriteTo(w io.Writer) (int64, error)

WriteTo writes the shard's data to w.

type ShardError

type ShardError struct {
	Err error
	// contains filtered or unexported fields
}

A ShardError implements the error interface, and contains extra context about the shard that generated the error.

func (ShardError) Error

func (e ShardError) Error() string

Error returns the string representation of the error, to satisfy the error interface.

type ShardGroup

type ShardGroup interface {
	MeasurementsByRegex(re *regexp.Regexp) []string
	FieldDimensions(measurements []string) (fields map[string]influxql.DataType, dimensions map[string]struct{}, err error)
	MapType(measurement, field string) influxql.DataType
	CreateIterator(measurement string, opt influxql.IteratorOptions) (influxql.Iterator, error)
	ExpandSources(sources influxql.Sources) (influxql.Sources, error)
}

type ShardStatistics

type ShardStatistics struct {
	WriteReq           int64
	WriteReqOK         int64
	WriteReqErr        int64
	FieldsCreated      int64
	WritePointsErr     int64
	WritePointsDropped int64
	WritePointsOK      int64
	BytesWritten       int64
	DiskBytes          int64
}

ShardStatistics maintains statistics for a shard.

type Shards

type Shards []*Shard

Shards represents a sortable list of shards.

func (Shards) CreateIterator

func (a Shards) CreateIterator(measurement string, opt influxql.IteratorOptions) (influxql.Iterator, error)

func (Shards) ExpandSources

func (a Shards) ExpandSources(sources influxql.Sources) (influxql.Sources, error)

func (Shards) FieldDimensions

func (a Shards) FieldDimensions(measurements []string) (fields map[string]influxql.DataType, dimensions map[string]struct{}, err error)

func (Shards) Len

func (a Shards) Len() int

Len implements sort.Interface.

func (Shards) Less

func (a Shards) Less(i, j int) bool

Less implements sort.Interface.

func (Shards) MapType

func (a Shards) MapType(measurement, field string) influxql.DataType

func (Shards) MeasurementsByRegex

func (a Shards) MeasurementsByRegex(re *regexp.Regexp) []string

func (Shards) Swap

func (a Shards) Swap(i, j int)

Swap implements sort.Interface.

type Store

type Store struct {
	EngineOptions EngineOptions

	Logger zap.Logger
	// contains filtered or unexported fields
}

Store manages shards and indexes for databases.

func NewStore

func NewStore(path string) *Store

NewStore returns a new store with the given path and a default configuration. The returned store must be initialized by calling Open before using it.
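
A hedged end-to-end sketch of opening a store and writing a point to a shard; the paths, database, retention policy, shard ID, and import paths are placeholders or assumptions.

package main

import (
	"log"
	"time"

	"github.com/influxdata/influxdb/models"
	"github.com/influxdata/influxdb/tsdb"
	_ "github.com/influxdata/influxdb/tsdb/engine" // registers the built-in engines; index registration is assumed to work the same way
)

func main() {
	store := tsdb.NewStore("/var/lib/influxdb/data")
	store.EngineOptions.Config.WALDir = "/var/lib/influxdb/wal"

	if err := store.Open(); err != nil {
		log.Fatal(err)
	}
	defer store.Close()

	// Create shard 1 for database "db0" and retention policy "rp0", enabled.
	if err := store.CreateShard("db0", "rp0", 1, true); err != nil {
		log.Fatal(err)
	}

	p, err := models.NewPoint("cpu",
		models.NewTags(map[string]string{"host": "serverA"}),
		models.Fields{"value": 1.0}, time.Now())
	if err != nil {
		log.Fatal(err)
	}
	if err := store.WriteToShard(1, []models.Point{p}); err != nil {
		log.Fatal(err)
	}
}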

func (*Store) BackupShard

func (s *Store) BackupShard(id uint64, since time.Time, w io.Writer) error

BackupShard will get the shard and have the engine backup since the passed in time to the writer.

func (*Store) Close

func (s *Store) Close() error

Close closes the store and all associated shards. After calling Close accessing shards through the Store will result in ErrStoreClosed being returned.

func (*Store) CreateShard

func (s *Store) CreateShard(database, retentionPolicy string, shardID uint64, enabled bool) error

CreateShard creates a shard with the given id and retention policy on a database.

func (*Store) CreateShardSnapshot

func (s *Store) CreateShardSnapshot(id uint64) (string, error)

CreateShardSnapshot will create a hard link to the underlying shard and return a path. The caller is responsible for cleaning up (removing) the file path returned.

func (*Store) Databases

func (s *Store) Databases() []string

Databases returns the names of all databases managed by the store.

func (*Store) DeleteDatabase

func (s *Store) DeleteDatabase(name string) error

DeleteDatabase will close all shards associated with a database and remove the directory and files from disk.

func (*Store) DeleteMeasurement

func (s *Store) DeleteMeasurement(database, name string) error

DeleteMeasurement removes a measurement and all associated series from a database.

func (*Store) DeleteRetentionPolicy

func (s *Store) DeleteRetentionPolicy(database, name string) error

DeleteRetentionPolicy will close all shards associated with the provided retention policy, remove the retention policy directories on both the DB and WAL, and remove all shard files from disk.

func (*Store) DeleteSeries

func (s *Store) DeleteSeries(database string, sources []influxql.Source, condition influxql.Expr) error

DeleteSeries loops through the local shards and deletes the series data for the passed in series keys.

func (*Store) DeleteShard

func (s *Store) DeleteShard(shardID uint64) error

DeleteShard removes a shard from disk.

func (*Store) DiskSize

func (s *Store) DiskSize() (int64, error)

DiskSize returns the size of all the shard files in bytes. This size does not include the WAL size.

func (*Store) ExpandSources

func (s *Store) ExpandSources(sources influxql.Sources) (influxql.Sources, error)

ExpandSources expands sources against all local shards.

func (*Store) ImportShard

func (s *Store) ImportShard(id uint64, r io.Reader) error

ImportShard imports the contents of r to a given shard. All files in the backup are added as new files, which may cause duplicated data and require more expensive compactions.

func (*Store) MeasurementNames

func (s *Store) MeasurementNames(database string, cond influxql.Expr) ([][]byte, error)

MeasurementNames returns a slice of all measurement names. It accepts an optional condition expression; if cond is nil, all measurements for the database are returned.

func (*Store) MeasurementSeriesCounts

func (s *Store) MeasurementSeriesCounts(database string) (measuments int, series int)

MeasurementSeriesCounts returns the number of measurements and series in all the shards' indices.

func (*Store) MeasurementsCardinality

func (s *Store) MeasurementsCardinality(database string) (int64, error)

MeasurementsCardinality returns the measurement cardinality for the provided database.

func (*Store) Open

func (s *Store) Open() error

Open initializes the store, creating all necessary directories, loading all shards as well as initializing periodic maintenance of them.

func (*Store) Path

func (s *Store) Path() string

Path returns the store's root path.

func (*Store) RestoreShard

func (s *Store) RestoreShard(id uint64, r io.Reader) error

RestoreShard restores a backup from r to a given shard. This will only overwrite files included in the backup.

func (*Store) SeriesCardinality

func (s *Store) SeriesCardinality(database string) (int64, error)

SeriesCardinality returns the series cardinality for the provided database.

func (*Store) SetShardEnabled

func (s *Store) SetShardEnabled(shardID uint64, enabled bool) error

SetShardEnabled enables or disables a shard for read and writes.

func (*Store) Shard

func (s *Store) Shard(id uint64) *Shard

Shard returns a shard by id.

func (*Store) ShardGroup

func (s *Store) ShardGroup(ids []uint64) ShardGroup

ShardGroup returns a ShardGroup with a list of shards by id.

func (*Store) ShardIDs

func (s *Store) ShardIDs() []uint64

ShardIDs returns a slice of all ShardIDs under management.

func (*Store) ShardN

func (s *Store) ShardN() int

ShardN returns the number of shards in the store.

func (*Store) ShardRelativePath

func (s *Store) ShardRelativePath(id uint64) (string, error)

ShardRelativePath will return the relative path to the shard, i.e., <database>/<retention>/<id>.

func (*Store) Shards

func (s *Store) Shards(ids []uint64) []*Shard

Shards returns a list of shards by id.

func (*Store) Statistics

func (s *Store) Statistics(tags map[string]string) []models.Statistic

Statistics returns statistics for periodic monitoring.

func (*Store) TagValues

func (s *Store) TagValues(database string, cond influxql.Expr) ([]TagValues, error)

TagValues returns the tag keys and values in the given database, matching the condition.

func (*Store) WithLogger

func (s *Store) WithLogger(log zap.Logger)

WithLogger sets the logger for the store.

func (*Store) WriteToShard

func (s *Store) WriteToShard(shardID uint64, points []models.Point) error

WriteToShard writes a list of points to a shard identified by its ID.

type TagValues

type TagValues struct {
	Measurement string
	Values      []KeyValue
}

type TagValuesSlice

type TagValuesSlice []TagValues

func (TagValuesSlice) Len

func (a TagValuesSlice) Len() int

func (TagValuesSlice) Less

func (a TagValuesSlice) Less(i, j int) bool

func (TagValuesSlice) Swap

func (a TagValuesSlice) Swap(i, j int)

Directories

Path        Synopsis
engine      Package engine can be imported to initialize and register all available TSDB engines.
tsm1        Package tsm1 provides a TSDB in the Time Structured Merge tree format.
inmem       Package inmem implements a shared, in-memory index for each database.
tsi1        Package tsi1 provides a memory-mapped index implementation that supports high cardinality series.
internal    Package meta is a generated protocol buffer package.
