storage

package
v1.7.0
Published: May 28, 2021 License: Apache-2.0 Imports: 27 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func ActivePeriodConfig added in v1.6.9

func ActivePeriodConfig(configs []chunk.PeriodConfig) int

ActivePeriodConfig returns the index of the active PeriodConfig that applies to logs pushed starting now. Note: another PeriodConfig might apply to future logs, which can change the index type.
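
The sketch below shows one way to call it. The period configs, their values, and the import paths (assumed to be the upstream Loki and Cortex module paths) are illustrative assumptions, not taken from this documentation.

package main

import (
	"fmt"
	"time"

	"github.com/cortexproject/cortex/pkg/chunk"
	"github.com/grafana/loki/pkg/storage"
	"github.com/prometheus/common/model"
)

func main() {
	// Two schema periods: an older one and a boltdb-shipper period
	// that started a day ago. Values are made up for illustration.
	configs := []chunk.PeriodConfig{
		{
			From:      chunk.DayTime{Time: model.TimeFromUnix(time.Now().Add(-30 * 24 * time.Hour).Unix())},
			IndexType: "bigtable",
		},
		{
			From:      chunk.DayTime{Time: model.TimeFromUnix(time.Now().Add(-24 * time.Hour).Unix())},
			IndexType: "boltdb-shipper",
		},
	}

	// Index of the period that applies to logs pushed right now.
	i := storage.ActivePeriodConfig(configs)
	fmt.Println("active period:", i, "index type:", configs[i].IndexType)
}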

func IsBlockOverlapping added in v1.6.9

func IsBlockOverlapping(b chunkenc.Block, with *LazyChunk, direction logproto.Direction) bool

func NewTableClient added in v1.6.9

func NewTableClient(name string, cfg Config) (chunk.TableClient, error)

NewTableClient creates a TableClient for managing the tables of the index/chunk store. TODO: add support in Cortex for registering a custom table client, as is done for index clients.
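
A minimal usage sketch, assuming the name argument is the period's index type (for example "boltdb-shipper") and that cfg has already been loaded; newTableClient is a made-up wrapper, and import paths are assumed:

package example

import (
	"github.com/cortexproject/cortex/pkg/chunk"
	"github.com/grafana/loki/pkg/storage"
)

// newTableClient builds the table client used for table management of
// a schema period. The name is assumed to be the period's index type.
func newTableClient(name string, cfg storage.Config) (chunk.TableClient, error) {
	return storage.NewTableClient(name, cfg)
}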

func RegisterCustomIndexClients added in v1.6.9

func RegisterCustomIndexClients(cfg *Config, registerer prometheus.Registerer)

func UsingBoltdbShipper added in v1.6.9

func UsingBoltdbShipper(configs []chunk.PeriodConfig) bool

UsingBoltdbShipper checks whether the current or the next index type is boltdb-shipper and returns true if so.
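
A small sketch of the typical check; wireShipperComponents is a hypothetical helper, the single-entry config is made up, and import paths are assumed:

package main

import (
	"fmt"

	"github.com/cortexproject/cortex/pkg/chunk"
	"github.com/grafana/loki/pkg/storage"
)

// wireShipperComponents only sets up shipper-specific pieces when the
// current or upcoming schema period uses boltdb-shipper.
func wireShipperComponents(configs []chunk.PeriodConfig) {
	if !storage.UsingBoltdbShipper(configs) {
		return
	}
	fmt.Println("boltdb-shipper is (or will be) active; wiring shipper components")
}

func main() {
	configs := []chunk.PeriodConfig{{IndexType: "boltdb-shipper"}}
	wireShipperComponents(configs)
}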

Types

type AsyncStore added in v1.6.9

type AsyncStore struct {
	chunk.Store
	// contains filtered or unexported fields
}

AsyncStore queries both the ingesters and the chunk store and combines the results after deduplicating them. It should be used with an async store like boltdb-shipper. AsyncStore is meant to be used only in queriers or in any service other than ingesters; it must never be used in ingesters, since that would spiral into ingesters repeatedly querying other ingesters.

func NewAsyncStore added in v1.6.9

func NewAsyncStore(store chunk.Store, querier IngesterQuerier) *AsyncStore
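
A sketch of where the constructor fits: the base chunk store and the ingester querier are assumed to be built elsewhere by the querier's bootstrap code, newQuerierStore is a made-up helper, and import paths are assumed:

package querier

import (
	"github.com/cortexproject/cortex/pkg/chunk"
	"github.com/grafana/loki/pkg/storage"
)

// newQuerierStore wraps the base chunk store with an AsyncStore so
// that queriers also ask ingesters for chunk refs when an async index
// store such as boltdb-shipper is in use.
func newQuerierStore(base chunk.Store, ingesters storage.IngesterQuerier) chunk.Store {
	return storage.NewAsyncStore(base, ingesters)
}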

func (*AsyncStore) GetChunkRefs added in v1.6.9

func (a *AsyncStore) GetChunkRefs(ctx context.Context, userID string, from, through model.Time, matchers ...*labels.Matcher) ([][]chunk.Chunk, []*chunk.Fetcher, error)

type ChunkMetrics added in v1.6.9

type ChunkMetrics struct {
	// contains filtered or unexported fields
}

func NewChunkMetrics added in v1.6.9

func NewChunkMetrics(r prometheus.Registerer, maxBatchSize int) *ChunkMetrics

type Config

type Config struct {
	storage.Config      `yaml:",inline"`
	MaxChunkBatchSize   int            `yaml:"max_chunk_batch_size"`
	BoltDBShipperConfig shipper.Config `yaml:"boltdb_shipper"`
}

Config is the Loki storage configuration.

func (*Config) RegisterFlags

func (cfg *Config) RegisterFlags(f *flag.FlagSet)

RegisterFlags registers the flags required to configure storage with the given flag set.
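
A minimal sketch of registering and parsing the flags on a standalone FlagSet; the FlagSet name and the printed field are illustrative, and the import path is assumed:

package main

import (
	"flag"
	"fmt"
	"os"

	"github.com/grafana/loki/pkg/storage"
)

func main() {
	var cfg storage.Config

	// Register the storage flags (including the embedded Cortex
	// storage flags) on a dedicated FlagSet, then parse the command
	// line into cfg.
	fs := flag.NewFlagSet("storage", flag.ContinueOnError)
	cfg.RegisterFlags(fs)
	if err := fs.Parse(os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	fmt.Println("max chunk batch size:", cfg.MaxChunkBatchSize)
}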

type IngesterQuerier added in v1.6.9

type IngesterQuerier interface {
	GetChunkIDs(ctx context.Context, from, through model.Time, matchers ...*labels.Matcher) ([]string, error)
}

type LazyChunk added in v1.6.9

type LazyChunk struct {
	Chunk   chunk.Chunk
	IsValid bool
	Fetcher *chunk.Fetcher
	// contains filtered or unexported fields
}

LazyChunk loads the chunk when it is accessed.

func (*LazyChunk) IsOverlapping added in v1.6.9

func (c *LazyChunk) IsOverlapping(with *LazyChunk, direction logproto.Direction) bool

func (*LazyChunk) Iterator added in v1.6.9

func (c *LazyChunk) Iterator(
	ctx context.Context,
	from, through time.Time,
	direction logproto.Direction,
	pipeline logql.Pipeline,
	nextChunk *LazyChunk,
) (iter.EntryIterator, error)

Iterator returns an entry iterator. If the next chunk is passed, the returned iterator caches the entries of blocks that overlap with it, so that when they are reused for ordering across batches the data does not have to be decompressed again.
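
A sketch of draining the iterator, assuming the LazyChunk and the logql.Pipeline (normally built from a parsed LogQL selector) already exist; printEntries is a made-up helper and import paths are assumed:

package example

import (
	"context"
	"fmt"
	"time"

	"github.com/grafana/loki/pkg/logproto"
	"github.com/grafana/loki/pkg/logql"
	"github.com/grafana/loki/pkg/storage"
)

// printEntries drains a LazyChunk's entries over [from, through) in
// forward order with no next chunk. Building the pipeline from a
// LogQL expression is out of scope here.
func printEntries(ctx context.Context, c *storage.LazyChunk, from, through time.Time, pipeline logql.Pipeline) error {
	it, err := c.Iterator(ctx, from, through, logproto.FORWARD, pipeline, nil)
	if err != nil {
		return err
	}
	defer it.Close()

	for it.Next() {
		e := it.Entry()
		fmt.Println(e.Timestamp.Format(time.RFC3339Nano), e.Line)
	}
	return it.Error()
}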

func (*LazyChunk) SampleIterator added in v1.6.9

func (c *LazyChunk) SampleIterator(
	ctx context.Context,
	from, through time.Time,
	extractor logql.SampleExtractor,
	nextChunk *LazyChunk,
) (iter.SampleIterator, error)

SampleIterator returns a sample iterator. If the next chunk is passed, the returned iterator caches the entries of blocks that overlap with it, so that when they are reused for ordering across batches the data does not have to be decompressed again.

type SchemaConfig added in v1.6.9

type SchemaConfig struct {
	chunk.SchemaConfig `yaml:",inline"`
}

SchemaConfig contains the config for our chunk index schemas.

func (*SchemaConfig) Validate added in v1.6.9

func (cfg *SchemaConfig) Validate() error

Validate validates the schema config and returns an error if validation fails.

type Store

type Store interface {
	chunk.Store
	SelectSamples(ctx context.Context, req logql.SelectSampleParams) (iter.SampleIterator, error)
	SelectLogs(ctx context.Context, req logql.SelectLogParams) (iter.EntryIterator, error)
	GetSeries(ctx context.Context, req logql.SelectLogParams) ([]logproto.SeriesIdentifier, error)
	GetSchemaConfigs() []chunk.PeriodConfig
}

Store is the Loki chunk store, used to save and retrieve chunks.

func NewStore

func NewStore(cfg Config, schemaCfg SchemaConfig, chunkStore chunk.Store, registerer prometheus.Registerer) (Store, error)

NewStore creates a new Loki Store using the supplied configuration.
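
A hedged wiring sketch: cfg, schemaCfg and the Cortex chunk store are assumed to come from the usual bootstrap code, newLokiStore is a made-up helper, and import paths are assumed:

package example

import (
	"github.com/cortexproject/cortex/pkg/chunk"
	"github.com/grafana/loki/pkg/storage"
	"github.com/prometheus/client_golang/prometheus"
)

// newLokiStore validates the schema config and builds the Loki store
// on top of an already-constructed Cortex chunk store.
func newLokiStore(cfg storage.Config, schemaCfg storage.SchemaConfig, chunkStore chunk.Store, reg prometheus.Registerer) (storage.Store, error) {
	if err := schemaCfg.Validate(); err != nil {
		return nil, err
	}
	return storage.NewStore(cfg, schemaCfg, chunkStore, reg)
}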

Directories

Path Synopsis
stores
