Documentation ¶
Index ¶
- func ActivePeriodConfig(configs []chunk.PeriodConfig) int
- func IsBlockOverlapping(b chunkenc.Block, with *LazyChunk, direction logproto.Direction) bool
- func NewTableClient(name string, cfg Config) (chunk.TableClient, error)
- func RegisterCustomIndexClients(cfg *Config, registerer prometheus.Registerer)
- func UsingBoltdbShipper(configs []chunk.PeriodConfig) bool
- type AsyncStore
- type ChunkMetrics
- type Config
- type IngesterQuerier
- type LazyChunk
- func (c *LazyChunk) IsOverlapping(with *LazyChunk, direction logproto.Direction) bool
- func (c *LazyChunk) Iterator(ctx context.Context, from, through time.Time, direction logproto.Direction, ...) (iter.EntryIterator, error)
- func (c *LazyChunk) SampleIterator(ctx context.Context, from, through time.Time, extractor logql.SampleExtractor, ...) (iter.SampleIterator, error)
- type SchemaConfig
- type Store
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func ActivePeriodConfig ¶ added in v1.6.9
func ActivePeriodConfig(configs []chunk.PeriodConfig) int
ActivePeriodConfig returns the index of the active PeriodConfig, i.e. the one applicable to logs pushed starting now. Note: a different PeriodConfig might apply to future logs, which can change the index type.
func IsBlockOverlapping ¶ added in v1.6.9
func IsBlockOverlapping(b chunkenc.Block, with *LazyChunk, direction logproto.Direction) bool
func NewTableClient ¶ added in v1.6.9
func NewTableClient(name string, cfg Config) (chunk.TableClient, error)
NewTableClient creates a TableClient for managing the tables of the index/chunk store. TODO: add support in Cortex for registering a custom table client, like the index client.
func RegisterCustomIndexClients ¶ added in v1.6.9
func RegisterCustomIndexClients(cfg *Config, registerer prometheus.Registerer)
func UsingBoltdbShipper ¶ added in v1.6.9
func UsingBoltdbShipper(configs []chunk.PeriodConfig) bool
UsingBoltdbShipper returns true if either the current or the upcoming index type is boltdb-shipper.
Types ¶
type AsyncStore ¶ added in v1.6.9
AsyncStore queries both the ingesters and the chunk store, and combines the results after deduping them. It should be used with an async store like boltdb-shipper. AsyncStore is meant to be used only in queriers, or in any service other than ingesters; it must never be used in ingesters, otherwise ingesters would end up querying each other over and over again.
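The fan-out-then-dedupe behaviour can be illustrated with a small sketch. mergeChunkIDs and the plain string IDs are assumptions for illustration; the real AsyncStore merges chunk references returned by chunk.Store and IngesterQuerier:

```go
package main

import (
	"fmt"
	"sort"
)

// mergeChunkIDs combines chunk IDs fetched from the chunk store and
// from the ingesters, dropping duplicates. This is the essence of what
// AsyncStore does after fanning a query out to both sources: a chunk
// may already be flushed to the store while still held by an ingester.
func mergeChunkIDs(storeIDs, ingesterIDs []string) []string {
	seen := make(map[string]struct{}, len(storeIDs)+len(ingesterIDs))
	var merged []string
	for _, id := range append(append([]string{}, storeIDs...), ingesterIDs...) {
		if _, ok := seen[id]; ok {
			continue // already returned by the other source
		}
		seen[id] = struct{}{}
		merged = append(merged, id)
	}
	sort.Strings(merged)
	return merged
}

func main() {
	store := []string{"chunk-a", "chunk-b"}
	ingesters := []string{"chunk-b", "chunk-c"} // chunk-b exists in both
	fmt.Println(mergeChunkIDs(store, ingesters)) // prints [chunk-a chunk-b chunk-c]
}
```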
func NewAsyncStore ¶ added in v1.6.9
func NewAsyncStore(store chunk.Store, querier IngesterQuerier) *AsyncStore
type ChunkMetrics ¶ added in v1.6.9
type ChunkMetrics struct {
// contains filtered or unexported fields
}
func NewChunkMetrics ¶ added in v1.6.9
func NewChunkMetrics(r prometheus.Registerer, maxBatchSize int) *ChunkMetrics
type Config ¶
type Config struct {
	storage.Config      `yaml:",inline"`
	MaxChunkBatchSize   int            `yaml:"max_chunk_batch_size"`
	BoltDBShipperConfig shipper.Config `yaml:"boltdb_shipper"`
}
Config is the Loki storage configuration.
func (*Config) RegisterFlags ¶
RegisterFlags adds the flags required to configure the storage config to the given flag set.
type IngesterQuerier ¶ added in v1.6.9
type LazyChunk ¶ added in v1.6.9
type LazyChunk struct {
	Chunk   chunk.Chunk
	IsValid bool
	Fetcher *chunk.Fetcher
	// contains filtered or unexported fields
}
LazyChunk loads the chunk when it is accessed.
func (*LazyChunk) IsOverlapping ¶ added in v1.6.9
func (c *LazyChunk) IsOverlapping(with *LazyChunk, direction logproto.Direction) bool
func (*LazyChunk) Iterator ¶ added in v1.6.9
func (c *LazyChunk) Iterator(
	ctx context.Context,
	from, through time.Time,
	direction logproto.Direction,
	pipeline logql.Pipeline,
	nextChunk *LazyChunk,
) (iter.EntryIterator, error)
Iterator returns an entry iterator. If a next chunk is passed, the returned iterator caches the entries of blocks that overlap with it, so that when the entries are reused for ordering across batches the data is not decompressed again.
func (*LazyChunk) SampleIterator ¶ added in v1.6.9
func (c *LazyChunk) SampleIterator(
	ctx context.Context,
	from, through time.Time,
	extractor logql.SampleExtractor,
	nextChunk *LazyChunk,
) (iter.SampleIterator, error)
SampleIterator returns a sample iterator. If a next chunk is passed, the returned iterator caches the entries of blocks that overlap with it, so that when the entries are reused for ordering across batches the data is not decompressed again.
type SchemaConfig ¶ added in v1.6.9
type SchemaConfig struct {
chunk.SchemaConfig `yaml:",inline"`
}
SchemaConfig contains the config for our chunk index schemas
func (*SchemaConfig) Validate ¶ added in v1.6.9
func (cfg *SchemaConfig) Validate() error
Validate validates the schema config and returns an error if the validation does not pass.
type Store ¶
type Store interface {
	chunk.Store
	SelectSamples(ctx context.Context, req logql.SelectSampleParams) (iter.SampleIterator, error)
	SelectLogs(ctx context.Context, req logql.SelectLogParams) (iter.EntryIterator, error)
	GetSeries(ctx context.Context, req logql.SelectLogParams) ([]logproto.SeriesIdentifier, error)
	GetSchemaConfigs() []chunk.PeriodConfig
}
Store is the Loki chunk store to retrieve and save chunks.
func NewStore ¶
func NewStore(cfg Config, schemaCfg SchemaConfig, chunkStore chunk.Store, registerer prometheus.Registerer) (Store, error)
NewStore creates a new Loki Store using the supplied configuration.