package store

v0.3.2
Published: Mar 28, 2018 · License: Apache-2.0 · Imports: 31 · Imported by: 8

Documentation

Constants

const (
	APIPathUserQuery      = "/query"
	APIPathInternalQuery  = "/_query"
	APIPathUserStream     = "/stream"
	APIPathInternalStream = "/_stream"
	APIPathReplicate      = "/replicate"
	APIPathClusterState   = "/_clusterstate"
)

These are the store API URL paths.
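
For example, a client might address the user-facing query endpoint as follows. The host is hypothetical, and the parameter names are taken from the JSON tags on QueryParams below; per the ulidOrTime type, from and to presumably accept either a ULID or a timestamp:

u := url.URL{
	Scheme: "http",
	Host:   "store.example.com:7650", // hypothetical store node
	Path:   store.APIPathUserQuery,
}
v := url.Values{}
v.Set("from", "2018-03-28T00:00:00Z")
v.Set("to", "2018-03-28T01:00:00Z")
v.Set("q", "error")
u.RawQuery = v.Encode()
resp, err := http.Get(u.String())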

Variables

var ErrNoSegmentsAvailable = errors.New("no segments available")

ErrNoSegmentsAvailable is returned by various methods to indicate no qualifying segments are currently available.
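
Callers compare against the sentinel directly. A sketch, assuming Overlapping is among the methods that return it:

segs, err := l.Overlapping()
if err == store.ErrNoSegmentsAvailable {
	return nil // nothing to compact right now
}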

var ErrShortRead = errors.New("short read")

ErrShortRead is returned when a read is unexpectedly shortened.

Functions

This section is empty.

Types

type API

type API struct {
	// contains filtered or unexported fields
}

API serves the store API.

func NewAPI

func NewAPI(
	peer ClusterPeer,
	log Log,
	queryClient, streamClient Doer,
	replicatedSegments, replicatedBytes prometheus.Counter,
	duration *prometheus.HistogramVec,
	reporter EventReporter,
) *API

NewAPI returns a usable API.

func (*API) Close added in v0.2.0

func (a *API) Close() error

Close out the API, including the streaming query registry.

func (*API) ServeHTTP

func (a *API) ServeHTTP(w http.ResponseWriter, r *http.Request)
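
ServeHTTP makes *API an http.Handler, so it can be mounted directly on a mux. A minimal sketch, assuming the peer, log, Prometheus metrics, and reporter are constructed elsewhere; the route prefix and port are arbitrary:

api := store.NewAPI(
	peer, fileLog,
	http.DefaultClient, http.DefaultClient, // queryClient, streamClient
	replicatedSegments, replicatedBytes,
	duration,
	reporter,
)
defer api.Close()

mux := http.NewServeMux()
mux.Handle("/store/", http.StripPrefix("/store", api))
if err := http.ListenAndServe(":7650", mux); err != nil {
	panic(err)
}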

type ClusterPeer added in v0.2.1

type ClusterPeer interface {
	Current(cluster.PeerType) []string
	State() map[string]interface{}
}

ClusterPeer models cluster.Peer.
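
Any value with these two methods satisfies the interface; in tests, a stub (not part of this package) can stand in for a real cluster.Peer:

type staticPeer struct{}

func (staticPeer) Current(cluster.PeerType) []string { return nil }
func (staticPeer) State() map[string]interface{}     { return nil }

var _ store.ClusterPeer = staticPeer{}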

type Compacter

type Compacter struct {
	// contains filtered or unexported fields
}

Compacter is responsible for all post-flush segment mutation. That includes compacting highly-overlapping segments, compacting small and sequential segments, and enforcing the retention window.

func NewCompacter

func NewCompacter(
	log Log,
	segmentTargetSize int64, retain time.Duration, purge time.Duration,
	compactDuration *prometheus.HistogramVec, trashSegments, purgeSegments *prometheus.CounterVec,
	reporter EventReporter,
) *Compacter

NewCompacter creates a Compacter. Don't forget to Run it.

func (*Compacter) Run

func (c *Compacter) Run()

Run performs compactions and cleanups. Run returns when Stop is invoked.

func (*Compacter) Stop

func (c *Compacter) Stop()

Stop the compacter from compacting.
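
A minimal sketch of wiring up and running a Compacter; the metric names, label sets, and durations are illustrative only:

compactDuration := prometheus.NewHistogramVec(
	prometheus.HistogramOpts{Name: "store_compact_duration_seconds"},
	[]string{"kind", "compacted", "result"}, // hypothetical labels
)
trashSegments := prometheus.NewCounterVec(
	prometheus.CounterOpts{Name: "store_trashed_segments"},
	[]string{"success"}, // hypothetical label
)
purgeSegments := prometheus.NewCounterVec(
	prometheus.CounterOpts{Name: "store_purged_segments"},
	[]string{"success"},
)
c := store.NewCompacter(
	fileLog,
	128*1024*1024, // segmentTargetSize: 128 MB
	24*time.Hour,  // retain: keep segments queryable this long
	1*time.Hour,   // purge: grace period in the trash before hard delete
	compactDuration, trashSegments, purgeSegments,
	reporter,
)
go c.Run() // don't forget to Run it
defer c.Stop()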

type Consumer

type Consumer struct {
	// contains filtered or unexported fields
}

Consumer reads segments from the ingesters, and replicates merged segments to the rest of the cluster. It's implemented as a state machine: gather segments, replicate, commit, and repeat. All failures invalidate the entire batch.

func NewConsumer

func NewConsumer(
	peer *cluster.Peer,
	client *http.Client,
	segmentTargetSize int64,
	segmentTargetAge time.Duration,
	segmentDelay time.Duration,
	replicationFactor int,
	consumedSegments, consumedBytes prometheus.Counter,
	replicatedSegments, replicatedBytes prometheus.Counter,
	reporter EventReporter,
) *Consumer

NewConsumer creates a consumer. Don't forget to Run it.

func (*Consumer) Run

func (c *Consumer) Run()

Run consumes segments from ingest nodes, and replicates them to the cluster. Run returns when Stop is invoked.

func (*Consumer) Stop

func (c *Consumer) Stop()

Stop the consumer from consuming.
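
Consumer follows the same lifecycle as Compacter. A sketch with illustrative values; peer, the Prometheus counters, and reporter are assumed to exist:

c := store.NewConsumer(
	peer,               // *cluster.Peer
	http.DefaultClient,
	128*1024*1024,      // segmentTargetSize: 128 MB
	10*time.Second,     // segmentTargetAge
	5*time.Second,      // segmentDelay
	2,                  // replicationFactor
	consumedSegments, consumedBytes,
	replicatedSegments, replicatedBytes,
	reporter,
)
go c.Run()
defer c.Stop()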

type Doer added in v0.2.1

type Doer interface {
	Do(*http.Request) (*http.Response, error)
}

Doer models http.Client.
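
An *http.Client satisfies Doer as-is. In tests, a function adapter (not part of this package) can stand in:

type doerFunc func(*http.Request) (*http.Response, error)

func (f doerFunc) Do(req *http.Request) (*http.Response, error) { return f(req) }

var _ store.Doer = http.DefaultClient
var _ store.Doer = doerFunc(nil)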

type Event added in v0.3.0

type Event struct {
	Debug   bool
	Op      string
	File    string
	Error   error
	Warning error
	Msg     string
}

Event is emitted by store components, typically when things go wrong.

type EventReporter added in v0.3.0

type EventReporter interface {
	ReportEvent(Event)
}

EventReporter can receive (and, presumably, do something with) Events.
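
Any type with a ReportEvent method will do. For example, a hypothetical reporter that only counts hard errors:

type errorCounter struct{ n int }

func (c *errorCounter) ReportEvent(e store.Event) {
	if e.Error != nil {
		c.n++
	}
}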

type Log

type Log interface {
	// Create a new segment for writes.
	Create() (WriteSegment, error)

	// Query written and closed segments.
	Query(qp QueryParams, statsOnly bool) (QueryResult, error)

	// Overlapping returns segments that have a high degree of time overlap and
	// can be compacted.
	Overlapping() ([]ReadSegment, error)

	// Sequential returns segments that are small and sequential and can be
	// compacted.
	Sequential() ([]ReadSegment, error)

	// Trashable segments are read segments whose newest record is older than
	// the given time. They may be trashed, i.e. made unavailable for querying.
	Trashable(oldestRecord time.Time) ([]ReadSegment, error)

	// Purgeable segments are trash segments whose modification time (i.e. the
	// time they were trashed) is older than the given time. They may be purged,
	// i.e. hard deleted.
	Purgeable(oldestModTime time.Time) ([]TrashSegment, error)

	// Stats of the current state of the store log.
	Stats() (LogStats, error)

	// Close the log, releasing any claimed lock.
	Close() error
}

Log is an abstraction for segments on a storage node.
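
Trashable and Purgeable suggest a two-phase retention scheme: first make old segments unavailable for querying, then hard-delete them after a grace period. A sketch of the first phase, along the lines of what the Compacter presumably does (retain is the retention window):

segs, err := l.Trashable(time.Now().Add(-retain))
if err != nil && err != store.ErrNoSegmentsAvailable {
	return err
}
for _, s := range segs {
	if err := s.Trash(); err != nil {
		return err
	}
}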

func NewFileLog

func NewFileLog(filesys fs.Filesystem, root string, segmentTargetSize, segmentBufferSize int64, reporter EventReporter) (Log, error)

NewFileLog returns a Log backed by the filesystem at path root. Note that we don't own segment files! They may disappear.
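
A minimal sketch of opening a file-backed log; the root path is arbitrary, the sizes are illustrative, and filesys is assumed to come from the sibling fs package:

var filesys fs.Filesystem // e.g. the real-disk implementation from the fs package
reporter := store.LogReporter{Logger: kitlog.NewLogfmtLogger(os.Stderr)} // go-kit log
l, err := store.NewFileLog(filesys, "/var/lib/oklog/store", 128*1024*1024, 1024*1024, reporter)
if err != nil {
	return err
}
defer l.Close() // releases the claimed lock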

type LogReporter added in v0.3.0

type LogReporter struct{ log.Logger }

LogReporter is a default implementation of EventReporter that logs events to the wrapped logger. By default, events are logged at warn level; if the event's Error field is non-nil, it is logged at error level.

func (LogReporter) ReportEvent added in v0.3.0

func (r LogReporter) ReportEvent(e Event)

ReportEvent implements EventReporter.

type LogStats

type LogStats struct {
	ActiveSegments  int64
	ActiveBytes     int64
	FlushedSegments int64
	FlushedBytes    int64
	ReadingSegments int64
	ReadingBytes    int64
	TrashedSegments int64
	TrashedBytes    int64
}

LogStats describe the current state of the store log.
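
Stats is a point-in-time snapshot, suitable for export as gauges. For example:

stats, err := l.Stats()
if err != nil {
	return err
}
fmt.Printf("active: %d segments, %d bytes\n", stats.ActiveSegments, stats.ActiveBytes)
fmt.Printf("trashed: %d segments, %d bytes\n", stats.TrashedSegments, stats.TrashedBytes)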

type QueryParams added in v0.1.3

type QueryParams struct {
	From  ulidOrTime `json:"from"`
	To    ulidOrTime `json:"to"`
	Q     string     `json:"q"`
	Regex bool       `json:"regex"`
}

QueryParams defines all dimensions of a query. The statsOnly flag (see Log.Query) is not part of the parameters; it is implied by the HTTP method.

func (*QueryParams) DecodeFrom added in v0.1.3

func (qp *QueryParams) DecodeFrom(u *url.URL, rb rangeBehavior) error

DecodeFrom populates a QueryParams from a URL.

type QueryResult

type QueryResult struct {
	Params QueryParams `json:"query"`

	NodesQueried    int    `json:"nodes_queried"`
	SegmentsQueried int    `json:"segments_queried"`
	MaxDataSetSize  int64  `json:"max_data_set_size"`
	ErrorCount      int    `json:"error_count,omitempty"`
	Duration        string `json:"duration"`

	Records io.ReadCloser // TODO(pb): audit to ensure closing is valid throughout
}

QueryResult contains statistics about, and matching records for, a query.

func (*QueryResult) DecodeFrom

func (qr *QueryResult) DecodeFrom(resp *http.Response) error

DecodeFrom decodes the QueryResult from the HTTP response.

func (*QueryResult) EncodeTo

func (qr *QueryResult) EncodeTo(w http.ResponseWriter)

EncodeTo encodes the QueryResult to the HTTP response writer. It also closes the records ReadCloser.

func (*QueryResult) Merge

func (qr *QueryResult) Merge(other QueryResult) error

Merge the other QueryResult into this one.
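
A gateway fanning a query out to several store nodes might fold the per-node results together like this (a sketch; results is assumed non-empty):

merged := results[0]
for _, qr := range results[1:] {
	if err := merged.Merge(qr); err != nil {
		return store.QueryResult{}, err
	}
}
return merged, nil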

type ReadSegment

type ReadSegment interface {
	io.Reader
	Reset() error
	Trash() error
	Purge() error
}

ReadSegment can be read from, reset (back to flushed state), trashed (made unavailable for queries), or purged (hard deleted).

type TrashSegment

type TrashSegment interface {
	Purge() error
}

TrashSegment may only be purged (hard deleted).

type WriteSegment

type WriteSegment interface {
	io.Writer
	Close(low, high ulid.ULID) error
	Delete() error
}

WriteSegment can be written to, and either closed or deleted.
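
The write path runs through Log.Create. A sketch, assuming each record is a ULID-prefixed line (the record format here is illustrative) and that low and high are the ULIDs of the first and last records in the segment:

ws, err := l.Create()
if err != nil {
	return err
}
id := ulid.MustNew(ulid.Now(), rand.Reader) // crypto/rand as entropy
if _, err := fmt.Fprintf(ws, "%s hello, world\n", id); err != nil {
	ws.Delete() // abandon the partial segment
	return err
}
return ws.Close(id, id) // low == high: only one record written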
