package compact

v0.3.2
Published: Mar 4, 2019 License: Apache-2.0 Imports: 19 Imported by: 0

Documentation

Index

Constants

const (
	ResolutionLevelRaw = ResolutionLevel(downsample.ResLevel0)
	ResolutionLevel5m  = ResolutionLevel(downsample.ResLevel1)
	ResolutionLevel1h  = ResolutionLevel(downsample.ResLevel2)
)

Variables

This section is empty.

Functions

func ApplyRetentionPolicyByResolution

func ApplyRetentionPolicyByResolution(ctx context.Context, logger log.Logger, bkt objstore.Bucket, retentionByResolution map[ResolutionLevel]time.Duration) error

ApplyRetentionPolicyByResolution removes blocks, based on their MaxTime, according to the specified retentionByResolution map. A value of 0 disables retention for that resolution.
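
A minimal usage sketch (imports omitted), assuming an already configured log.Logger and objstore.Bucket are in scope; the helper name and durations are illustrative only:

func applyRetention(ctx context.Context, logger log.Logger, bkt objstore.Bucket) error {
	retentionByResolution := map[compact.ResolutionLevel]time.Duration{
		compact.ResolutionLevelRaw: 0,                    // 0 keeps raw blocks forever
		compact.ResolutionLevel5m:  90 * 24 * time.Hour,  // drop 5m-downsampled blocks older than ~90 days
		compact.ResolutionLevel1h:  365 * 24 * time.Hour, // drop 1h-downsampled blocks older than ~1 year
	}
	return compact.ApplyRetentionPolicyByResolution(ctx, logger, bkt, retentionByResolution)
}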

func GroupKey

func GroupKey(meta metadata.Meta) string

GroupKey returns a unique identifier for the group the block belongs to. It considers the downsampling resolution and the block's labels.

func IsHaltError

func IsHaltError(err error) bool

IsHaltError returns true if the base error is a HaltError.

func IsIssue347Error

func IsIssue347Error(err error) bool

IsIssue347Error returns true if the base error is an Issue347Error.

func IsRetryError

func IsRetryError(err error) bool

IsRetryError returns true if the base error is a RetryError.

func RepairIssue347

func RepairIssue347(ctx context.Context, logger log.Logger, bkt objstore.Bucket, issue347Err error) error

RepairIssue347 repairs the https://github.com/prometheus/tsdb/issues/347 issue when having issue347Error.
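
A hypothetical sketch (imports omitted) of how the error predicates above and RepairIssue347 can be combined around a compaction run; the helper name and the exact halting and retrying behavior are up to the caller:

func handleCompactionErr(ctx context.Context, logger log.Logger, bkt objstore.Bucket, err error) error {
	switch {
	case err == nil:
		return nil
	case compact.IsHaltError(err):
		// Unrecoverable: stop all further compaction work.
		return err
	case compact.IsIssue347Error(err):
		// Broken block (prometheus/tsdb issue 347): attempt an automatic repair.
		return compact.RepairIssue347(ctx, logger, bkt, err)
	case compact.IsRetryError(err):
		// Transient: log a warning and let the caller retry the whole compaction loop.
		logger.Log("msg", "retriable compaction error", "err", err)
		return nil
	default:
		return err
	}
}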

Types

type BucketCompactor

type BucketCompactor struct {
	// contains filtered or unexported fields
}

BucketCompactor compacts blocks in a bucket.

func NewBucketCompactor

func NewBucketCompactor(logger log.Logger, sy *Syncer, comp tsdb.Compactor, compactDir string, bkt objstore.Bucket) *BucketCompactor

NewBucketCompactor creates a new bucket compactor.

func (*BucketCompactor) Compact

func (c *BucketCompactor) Compact(ctx context.Context) error

Compact runs compaction over the bucket.
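
A hypothetical end-to-end sketch (imports omitted): a Syncer is built for the bucket and handed to a BucketCompactor together with a tsdb.Compactor supplied by the caller. The sync delay and scratch directory are illustrative values.

func runCompaction(ctx context.Context, logger log.Logger, reg prometheus.Registerer, bkt objstore.Bucket, comp tsdb.Compactor) error {
	// Only consider blocks older than 30 minutes so freshly uploaded blocks are skipped.
	sy, err := compact.NewSyncer(logger, reg, bkt, 30*time.Minute)
	if err != nil {
		return err
	}
	// "/tmp/thanos-compact" is a local scratch directory used while compacting blocks.
	c := compact.NewBucketCompactor(logger, sy, comp, "/tmp/thanos-compact", bkt)
	return c.Compact(ctx)
}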

type Group

type Group struct {
	// contains filtered or unexported fields
}

Group captures a set of blocks that have the same origin labels and downsampling resolution. Those blocks generally contain the same series and can thus efficiently be compacted.

func (*Group) Add

func (cg *Group) Add(meta *metadata.Meta) error

Add the block with the given meta to the group.

func (*Group) Compact

func (cg *Group) Compact(ctx context.Context, dir string, comp tsdb.Compactor) (ulid.ULID, error)

Compact plans and runs a single compaction against the group. The compacted result is uploaded into the bucket the blocks were retrieved from.

func (*Group) IDs

func (cg *Group) IDs() (ids []ulid.ULID)

IDs returns all sorted IDs of blocks in the group.

func (*Group) Key

func (cg *Group) Key() string

Key returns an identifier for the group.

func (*Group) Labels

func (cg *Group) Labels() labels.Labels

Labels returns the labels that all blocks in the group share.

func (*Group) Resolution

func (cg *Group) Resolution() int64

Resolution returns the common downsampling resolution of blocks in the group.
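
A small sketch (imports omitted) that logs the accessors above for a slice of groups, for example as returned by Syncer.Groups; the helper name is illustrative:

func logGroups(logger log.Logger, groups []*compact.Group) {
	for _, g := range groups {
		logger.Log(
			"group", g.Key(),
			"labels", g.Labels(),
			"resolution", g.Resolution(),
			"blocks", len(g.IDs()),
		)
	}
}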

type HaltError

type HaltError struct {
	// contains filtered or unexported fields
}

HaltError is a type wrapper for errors that should halt any further progress on compactions.

func (HaltError) Error

func (e HaltError) Error() string

type Issue347Error

type Issue347Error struct {
	// contains filtered or unexported fields
}

Issue347Error is a type wrapper for errors that should invoke the repair process for a broken block.

func (Issue347Error) Error

func (e Issue347Error) Error() string

type ResolutionLevel

type ResolutionLevel int64

type RetryError

type RetryError struct {
	// contains filtered or unexported fields
}

RetryError is a type wrapper for errors that should trigger a warning log and a retry of the whole compaction loop, while aborting further progress of the current compaction.

func (RetryError) Error

func (e RetryError) Error() string

type Syncer

type Syncer struct {
	// contains filtered or unexported fields
}

Syncer synchronizes block metas from a bucket into a local directory. It sorts them into compaction groups based on equal label sets.

func NewSyncer

func NewSyncer(logger log.Logger, reg prometheus.Registerer, bkt objstore.Bucket, syncDelay time.Duration) (*Syncer, error)

NewSyncer returns a new Syncer for the given Bucket and directory. Blocks must be at least as old as the sync delay to be considered.

func (*Syncer) GarbageBlocks

func (c *Syncer) GarbageBlocks(resolution int64) (ids []ulid.ULID, err error)

func (*Syncer) GarbageCollect

func (c *Syncer) GarbageCollect(ctx context.Context) error

GarbageCollect deletes blocks from the bucket if their data is available as part of a block with a higher compaction level.

func (*Syncer) Groups

func (c *Syncer) Groups() (res []*Group, err error)

Groups returns the compaction groups for all blocks currently known to the syncer. It creates all groups from scratch on every call.

func (*Syncer) SyncMetas

func (c *Syncer) SyncMetas(ctx context.Context) error

SyncMetas synchronizes all meta files from blocks in the bucket into memory.
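
A minimal maintenance-pass sketch (imports omitted): refresh the in-memory metas first, then garbage-collect blocks that are superseded by higher-level compactions. The helper name is illustrative.

func syncAndGC(ctx context.Context, sy *compact.Syncer) error {
	// Refresh the in-memory view of block metas from the bucket.
	if err := sy.SyncMetas(ctx); err != nil {
		return err
	}
	// Delete blocks whose data is already contained in higher-level compacted blocks.
	return sy.GarbageCollect(ctx)
}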

