Documentation ¶
Overview ¶
Package memory provides an in-memory store, which can be used as a write-through or write-back cache.
Generated from series_store_int.go DO NOT EDIT!
Index ¶
- Constants
- func Intersect(postings ...[]common.SeriesID) []common.SeriesID
- func Term(tagKey string, tagValue string) string
- func Union(postings ...[]common.SeriesID) []common.SeriesID
- type Config
- type Data
- type DoubleSeriesStore
- func (store *DoubleSeriesStore) GetName() string
- func (store *DoubleSeriesStore) GetSeriesType() int64
- func (store *DoubleSeriesStore) GetTags() map[string]string
- func (store *DoubleSeriesStore) ReadByStartEndTime(startTime int64, endTime int64) *common.DoubleSeries
- func (store *DoubleSeriesStore) WriteSeries(newSeries common.DoubleSeries) error
- type Index
- type IntSeriesStore
- func (store *IntSeriesStore) GetName() string
- func (store *IntSeriesStore) GetSeriesType() int64
- func (store *IntSeriesStore) GetTags() map[string]string
- func (store *IntSeriesStore) ReadByStartEndTime(startTime int64, endTime int64) *common.IntSeries
- func (store *IntSeriesStore) WriteSeries(newSeries common.IntSeries) error
- type InvertedIndex
- type SeriesStore
- type Store
- func (store *Store) QuerySeries(queries []common.Query) ([]common.QueryResult, []common.Series, error)
- func (store *Store) Shutdown()
- func (store *Store) StoreType() string
- func (store *Store) WriteDoubleSeries(series []common.DoubleSeries) error
- func (store *Store) WriteIntSeries(series []common.IntSeries) error
- type StoreMap
Constants ¶
const (
MinimalChunkSize = 10 * 1024 * 1024
)
Variables ¶
This section is empty.
Functions ¶
func Intersect ¶ added in v0.0.2
Intersect is used for AND, i.e. app=nginx AND os=ubuntu

- sort the lists by length
- loop through the elements in the shortest list
- use exponential search to check whether each element exists in the other lists; add it to the result only if it appears in all lists
- if any list reaches its end, the outer loop breaks

NOTE:

- we didn't use the algorithm in the VLDB paper, just a naive one with some similar ideas
- in fact, this is just the `join` operation in an RDBMS

TODO:

- it is also possible to sort by value range

Ref

- https://www.quora.com/Which-is-the-best-algorithm-to-merge-k-ordered-lists
- 'adaptive list intersection'
- Improving performance of list intersection http://www.vldb.org/pvldb/2/vldb09-pvldb37.pdf
- Dynamic probe
- Exponential (galloping) search https://en.wikipedia.org/wiki/Exponential_search
func Term ¶ added in v0.0.2
TODO: should add a separator; in Prometheus `db.go` it's `const sep = '\xff'`
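The separator TODO matters because plain concatenation is ambiguous: ("ab", "c") and ("a", "bc") would produce the same term. A minimal sketch, using a hypothetical `term` helper with a `'\xff'` separator as in Prometheus:

```go
package main

import "fmt"

// term builds an inverted-index term from a tag key/value pair.
// The "\xff" byte cannot appear in valid UTF-8 tag data, so it safely
// marks the key/value boundary; without it, ("ab", "c") and ("a", "bc")
// would collide.
func term(tagKey string, tagValue string) string {
	return tagKey + "\xff" + tagValue
}

func main() {
	fmt.Println(term("ab", "c") == term("a", "bc")) // prints false
}
```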
func Union ¶ added in v0.0.2
Union is used for OR, i.e. app=nginx OR app=apache

- sort all the lists by length? or just pick the smallest one?
- get the first len(smallest) elements of each array into one array and sort it? this is nk * log(k)

NOTE

- linear search merge requires duplicate comparisons
- divide-and-conquer merge requires extra space
- heap merge requires a heap
- lists need to be excluded once they reach their end, which might require a map

Ref

- https://en.wikipedia.org/wiki/K-Way_Merge_Algorithms
- https://github.com/prometheus/tsdb/issues/50
- k-way merging and k-ary sorts http://cs.uno.edu/people/faculty/bill/k-way-merge-n-sort-ACM-SE-Regl-1993.pdf
- https://www.cs.cmu.edu/~adamchik/15-121/lectures/Binary%20Heaps/heaps.html
Types ¶
type Config ¶ added in v0.1.0
type Config struct {
	Layout      string                 `yaml:"layout" json:"layout"`
	ChunkSize   int                    `yaml:"chunkSize" json:"chunkSize"`
	EnableIndex bool                   `yaml:"enableIndex" json:"enableIndex"`
	XXX         map[string]interface{} `yaml:",inline"`
}
func (*Config) UnmarshalYAML ¶ added in v0.1.0
type Data ¶
type Data struct {
// contains filtered or unexported fields
}
Data is a map using SeriesID as key
func (*Data) ReadSeries ¶ added in v0.0.3
func (*Data) WriteDoubleSeries ¶ added in v0.0.3
type DoubleSeriesStore ¶ added in v0.0.3
type DoubleSeriesStore struct {
// contains filtered or unexported fields
}
DoubleSeriesStore protects the underlying DoubleSeries with a RWMutex
func NewDoubleSeriesStore ¶ added in v0.0.3
func NewDoubleSeriesStore(s common.DoubleSeries) *DoubleSeriesStore
NewDoubleSeriesStore creates a DoubleSeriesStore
func (*DoubleSeriesStore) GetName ¶ added in v0.0.3
func (store *DoubleSeriesStore) GetName() string
func (*DoubleSeriesStore) GetSeriesType ¶ added in v0.0.3
func (store *DoubleSeriesStore) GetSeriesType() int64
func (*DoubleSeriesStore) GetTags ¶ added in v0.0.3
func (store *DoubleSeriesStore) GetTags() map[string]string
func (*DoubleSeriesStore) ReadByStartEndTime ¶ added in v0.0.3
func (store *DoubleSeriesStore) ReadByStartEndTime(startTime int64, endTime int64) *common.DoubleSeries
ReadByStartEndTime filters and returns a copy of the data. TODO: we were previously returning *common.DoubleSeries, but I suppose that should not make any copy of the underlying points?
func (*DoubleSeriesStore) WriteSeries ¶ added in v0.0.3
func (store *DoubleSeriesStore) WriteSeries(newSeries common.DoubleSeries) error
WriteSeries merges the new series with the existing one, replacing old points with new points when their timestamps match. TODO: what happens when no memory is available? maybe this function should return an error
type Index ¶
type Index struct {
// contains filtered or unexported fields
}
Index is a map of inverted indexes keyed by tag name; each tag value serves as a term in its inverted index
type IntSeriesStore ¶
type IntSeriesStore struct {
// contains filtered or unexported fields
}
IntSeriesStore protects the underlying IntSeries with a RWMutex
func NewIntSeriesStore ¶
func NewIntSeriesStore(s common.IntSeries) *IntSeriesStore
NewIntSeriesStore creates an IntSeriesStore
func (*IntSeriesStore) GetName ¶ added in v0.0.3
func (store *IntSeriesStore) GetName() string
func (*IntSeriesStore) GetSeriesType ¶ added in v0.0.3
func (store *IntSeriesStore) GetSeriesType() int64
func (*IntSeriesStore) GetTags ¶ added in v0.0.3
func (store *IntSeriesStore) GetTags() map[string]string
func (*IntSeriesStore) ReadByStartEndTime ¶
func (store *IntSeriesStore) ReadByStartEndTime(startTime int64, endTime int64) *common.IntSeries
ReadByStartEndTime filters and returns a copy of the data. TODO: we were previously returning *common.IntSeries, but I suppose that should not make any copy of the underlying points?
func (*IntSeriesStore) WriteSeries ¶
func (store *IntSeriesStore) WriteSeries(newSeries common.IntSeries) error
WriteSeries merges the new series with the existing one, replacing old points with new points when their timestamps match. TODO: what happens when no memory is available? maybe this function should return an error
type InvertedIndex ¶ added in v0.0.2
type InvertedIndex struct {
	Term     string
	Postings []common.SeriesID
	// contains filtered or unexported fields
}
InvertedIndex uses Term for the tag value and Postings for a sorted list of series IDs. TODO: series IDs should use locality-sensitive hashing https://en.wikipedia.org/wiki/Locality-sensitive_hashing
func (*InvertedIndex) Add ¶ added in v0.0.2
func (iidx *InvertedIndex) Add(id common.SeriesID)
TODO: actually we could keep a fixed-size map to cache hot series, so there would be no need to look up whether the id is already present
type SeriesStore ¶ added in v0.0.3
type Store ¶
type Store struct {
// contains filtered or unexported fields
}
Store is the in memory storage with data and index
func CreateStore ¶ added in v0.1.0
func NewMemStore ¶
func (*Store) QuerySeries ¶ added in v0.0.3
func (store *Store) QuerySeries(queries []common.Query) ([]common.QueryResult, []common.Series, error)
QuerySeries implements Store interface
func (*Store) WriteDoubleSeries ¶ added in v0.0.3
func (store *Store) WriteDoubleSeries(series []common.DoubleSeries) error
WriteDoubleSeries implements Store interface