pilosa

package module
v3.35.0

Published: Mar 30, 2023 License: Apache-2.0 Imports: 106 Imported by: 0

README

FeatureBase

Pilosa is now FeatureBase

As of September 7, 2022, the Pilosa project is now FeatureBase. The core of the project remains the same: FeatureBase is the first real-time distributed database built entirely on bitmaps. (More information about updated capabilities and improvements below.)

FeatureBase delivers low-latency query results, regardless of throughput or query volumes, on fresh data with extreme efficiency. It works because bitmaps are faster, simpler, and far more I/O efficient than traditional column-oriented data formats. With FeatureBase, you can ingest data from batch data sources (e.g. S3, CSV, Snowflake, BigQuery, etc.) and/or streaming data sources (e.g. Kafka/Confluent, Kinesis, Pulsar).

For more information about FeatureBase, please visit www.featurebase.com.

Getting Started

Build FeatureBase Server from source
  1. Install Go. Ensure that your shell's search path includes the go/bin directory.
  2. Clone the FeatureBase repository (or download as zip).
  3. In the featurebase directory, run make install to compile the FeatureBase server binary. By default, it will be installed in the go/bin directory.
  4. In the idk directory, run make install to compile the ingester binaries. By default, they will be installed in the go/bin directory.
  5. Run featurebase server --handler.allowed-origins=http://localhost:3000 to run FeatureBase server with default settings (learn more about configuring FeatureBase at the link below). The --handler.allowed-origins parameter allows the standalone web UI to talk to the server; this can be omitted if the web UI is not needed.
  6. Run curl localhost:10101/status to verify the server is running and accessible.
Ingest Data and Query
  1. Run
molecula-consumer-csv \
    --index repository \
    --header "language__ID_F,project_id__ID_F" \
    --id-field project_id \
    --batch-size 1000 \
    --files example.csv

This will ingest the example.csv file into a FeatureBase table called repository. If the table does not exist, it will be created automatically. Learn more about ingesting data into FeatureBase.

  2. Query your data.
curl localhost:10101/index/repository/query \
     -X POST \
     -d 'Row(example=5)'

Learn about supported SQL and the native Pilosa Query Language (PQL).
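For example, a minimal Go sketch of the same request, assuming the server from the steps above is listening on localhost:10101:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// POST a PQL query to the repository index created during ingest.
	resp, err := http.Post(
		"http://localhost:10101/index/repository/query",
		"application/x-www-form-urlencoded",
		strings.NewReader("Row(example=5)"),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON query results
}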

Data Model

Because FeatureBase is built on bitmaps, there is a bit of a learning curve to grasp how your data is represented. Learn about Data Modeling.

More Information

Installation

Configuration

Community

You can email us at community@featurebase.com or learn more about contributing at https://www.featurebase.com/community.

Chat with us: https://discord.gg/FBn2vEp7Na

What's Changed Since the Pilosa Days?

A lot has changed since the days of Pilosa. This list highlights some of the new capabilities included in FeatureBase. We have also made significant improvements to the performance, scalability, and stability of the FeatureBase product.

  • Query Languages: FeatureBase supports Pilosa Query Language (PQL), as well as SQL
  • Stream and Batch Ingest: Combine real-time data streams with batch historical data and act on it within milliseconds.
  • Mutable: Perform inserts, updates, and deletes at scale, in real time and on-the-fly. This is key for meeting data compliance requirements, and for reflecting the constantly-changing nature of high-volume data.
  • Multi-Valued Set Fields: Store multiple comma-delimited values within a single field while increasing query performance of counts, TopKs, etc.
  • Time Quantums: Setting a time quantum on a field creates extra views which allow ranged Row queries down to the time interval specified. For example, if the time quantum is set to YMD, ranged Row queries down to the granularity of a day are supported (see the sketch after this list).
  • RBF storage backend: a new compressed bitmap storage format that adds ACID support on a per-shard basis, prevents issues with the number of open files, reduces memory allocation and lock contention for reads, provides more consistent garbage collection, and allows backups to run concurrently with writes. Because of this change, however, Pilosa backup files cannot be restored into FeatureBase.
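As a sketch of the time quantum feature (the field name here is hypothetical), a ranged Row query can be built in Go using the TimeFormat constant documented below and posted to the same /index/{index}/query endpoint shown earlier:

package main

import (
	"fmt"
	"time"
)

// TimeFormat mirrors pilosa.TimeFormat, documented in the constants below.
const TimeFormat = "2006-01-02T15:04"

func main() {
	from := time.Date(2021, 1, 1, 0, 0, 0, 0, time.UTC)
	to := time.Date(2022, 1, 1, 0, 0, 0, 0, time.UTC)
	// "stargazer" is a hypothetical field configured with a time quantum.
	pql := fmt.Sprintf("Row(stargazer=10, from='%s', to='%s')",
		from.Format(TimeFormat), to.Format(TimeFormat))
	fmt.Println(pql)
	// Row(stargazer=10, from='2021-01-01T00:00', to='2022-01-01T00:00')
}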

License

FeatureBase is licensed under the Apache License, Version 2.0

Documentation

Overview

Package pilosa implements the core of the Pilosa distributed bitmap index. It contains all the domain objects, interfaces, and logic that defines pilosa.


Index

Constants

const (
	TRANSACTION_START    = "start"
	TRANSACTION_FINISH   = "finish"
	TRANSACTION_VALIDATE = "validate"
)

Transaction Actions

const (
	DefaultFieldType = FieldTypeSet

	DefaultCacheType = CacheTypeRanked

	// Default ranked field cache
	DefaultCacheSize = 50000
)

Default field settings.

const (
	FieldTypeSet       = "set"
	FieldTypeInt       = "int"
	FieldTypeTime      = "time"
	FieldTypeMutex     = "mutex"
	FieldTypeBool      = "bool"
	FieldTypeDecimal   = "decimal"
	FieldTypeTimestamp = "timestamp"
)

Field types.

const (
	CacheTypeLRU    = "lru"
	CacheTypeRanked = "ranked"
	CacheTypeNone   = "none"
)

Cache types.

const (
	TimeUnitSeconds      = "s"
	TimeUnitMilliseconds = "ms"
	TimeUnitMicroseconds = "µs"
	TimeUnitUSeconds     = "us"
	TimeUnitNanoseconds  = "ns"
)

Constants related to timestamp.

const (
	// ShardWidth is the number of column IDs in a shard. It must be a power of 2 greater than or equal to 16.
	// shardWidthExponent = 20 // set in shardwidthNN.go files
	ShardWidth = 1 << shardwidth.Exponent

	// HashBlockSize is the number of rows in a merkle hash block.
	HashBlockSize = 100
)
const (
	RequestActionSet       = "set"
	RequestActionClear     = "clear"
	RequestActionOverwrite = "overwrite"
)
const (

	// DiscoDir is the default data directory used by the disco implementation.
	DiscoDir = "disco"

	// IndexesDir is the default indexes directory used by the holder.
	IndexesDir = "indexes"

	// FieldsDir is the default fields directory used by each index.
	FieldsDir = "fields"

	// DataframesDir is the directory where we store the dataframe files (currently Apache Arrow)
	DataframesDir = "dataframes"
)
const (
	// OriginalIPHeader is the original IP for client
	// It is used mainly for authenticating on remote nodes
	// ForwardedIPHeader gets updated to the node's IP
	// when requests are forward to other nodes in the cluster
	OriginalIPHeader = "X-Molecula-Original-IP"

	// ForwardedIPHeader is part of the standard header
	// it is used to identify the originating IP of a client
	ForwardedIPHeader = "X-Forwarded-For"

	// AllowedNetworksGroupName is used for the admin group authorization
	// when authentication is completed through checking the client IP
	// against the allowed networks
	AllowedNetworksGroupName = "allowed-networks"
)
const (
	QueryResultTypeRow uint32 = iota
	QueryResultTypePairs
	QueryResultTypeUint64
)

QueryResult types.

const (
	MetricCreateIndex                     = "create_index_total"
	MetricDeleteIndex                     = "delete_index_total"
	MetricCreateField                     = "create_field_total"
	MetricDeleteField                     = "delete_field_total"
	MetricDeleteAvailableShard            = "delete_available_shard_total"
	MetricRecalculateCache                = "recalculate_cache_total"
	MetricInvalidateCache                 = "invalidate_cache_total"
	MetricInvalidateCacheSkipped          = "invalidate_cache_skipped_total"
	MetricReadDirtyCache                  = "dirty_cache_total"
	MetricRankCacheLength                 = "rank_cache_length"
	MetricCacheThresholdReached           = "cache_threshold_reached_total"
	MetricRow                             = "query_row_total"
	MetricRowBSI                          = "query_row_bsi_total"
	MetricSetBit                          = "set_bit_total"
	MetricClearBit                        = "clear_bit_total"
	MetricImportingN                      = "importing_total"
	MetricImportedN                       = "imported_total"
	MetricClearingN                       = "clearing_total"
	MetricClearedN                        = "cleared_total"
	MetricSnapshotDurationSeconds         = "snapshot_duration_seconds"
	MetricBlockRepair                     = "block_repair_total"
	MetricSyncFieldDurationSeconds        = "sync_field_duration_seconds"
	MetricSyncIndexDurationSeconds        = "sync_index_duration_seconds"
	MetricHTTPRequest                     = "http_request_duration_seconds"
	MetricGRPCUnaryQueryDurationSeconds   = "grpc_request_pql_unary_query_duration_seconds"
	MetricGRPCUnaryFormatDurationSeconds  = "grpc_request_pql_unary_format_duration_seconds"
	MetricGRPCStreamQueryDurationSeconds  = "grpc_request_pql_stream_query_duration_seconds"
	MetricGRPCStreamFormatDurationSeconds = "grpc_request_pql_stream_format_duration_seconds"
	MetricMaxShard                        = "maximum_shard"
	MetricAntiEntropy                     = "antientropy_total"
	MetricAntiEntropyDurationSeconds      = "antientropy_duration_seconds"
	MetricGarbageCollection               = "garbage_collection_total"
	MetricGoroutines                      = "goroutines"
	MetricOpenFiles                       = "open_files"
	MetricHeapAlloc                       = "heap_alloc"
	MetricHeapInuse                       = "heap_inuse"
	MetricStackInuse                      = "stack_inuse"
	MetricMallocs                         = "mallocs"
	MetricFrees                           = "frees"
	MetricTransactionStart                = "transaction_start"
	MetricTransactionEnd                  = "transaction_end"
	MetricTransactionBlocked              = "transaction_blocked"
	MetricExclusiveTransactionRequest     = "transaction_exclusive_request"
	MetricExclusiveTransactionActive      = "transaction_exclusive_active"
	MetricExclusiveTransactionEnd         = "transaction_exclusive_end"
	MetricExclusiveTransactionBlocked     = "transaction_exclusive_blocked"
	MetricPqlQueries                      = "pql_queries_total"
	MetricSqlQueries                      = "sql_queries_total"
	MetricDeleteDataframe                 = "delete_dataframe"
)
const (
	// MetricBatchImportDurationSeconds records the full time of the
	// RecordBatch.Import call. This includes starting and finishing a
	// transaction, doing key translation, building fragments locally,
	// importing all data, and resetting internal structures.
	MetricBatchImportDurationSeconds = "batch_import_duration_seconds"

	// MetricBatchFlushDurationSeconds records the full time for
	// RecordBatch.Flush (if splitBatchMode is in use). This includes
	// starting and finishing a transaction, importing all data, and
	// resetting internal structures.
	MetricBatchFlushDurationSeconds = "batch_flush_duration_seconds"

	// MetricBatchShardImportBuildRequestsSeconds is the time it takes
	// after making fragments to build the shard-transactional request
	// objects (but not actually import them or do any network activity).
	MetricBatchShardImportBuildRequestsSeconds = "batch_shard_import_build_requests_seconds"

	// MetricBatchShardImportDurationSeconds is the time it takes to
	// import all data for all shards in the batch using the
	// shard-transactional endpoint. This does not include the time it
	// takes to build the requests locally.
	MetricBatchShardImportDurationSeconds = "batch_shard_import_duration_seconds"
)
const (
	// raw - for when you just want a count of something
	CTR_TYPE_RAW = 0
	// per second - for when you accumulate counts of things
	// a consumer would sample this at intervals to arrive at a delta
	// then divide by the time in seconds between the samples to get a
	// per-second value
	CTR_TYPE_PER_SECOND = 1
	// ratio - for when you accumulate a count of something that you
	// want to use as a numerator in a ratio calculation
	// e.g. 'cache hits' could be a counter of this type and you could
	// divide it by a 'cache lookups' counter to get the hit ratio (see below)
	CTR_TYPE_RATIO = 2
	// ratio base - for when you accumulate a count of something that you
	// want to use as a denominator in a ratio calculation
	// e.g. 'cache lookups' could be a counter of this type and you could
	// use it as the denominator in a division with a 'cache hits' counter
	// as the numerator to get the hit ratio
	CTR_TYPE_RATIO_BASE = 3
)

Constants for the counter types.
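As a sketch of how a consumer might use these counter types (this helper code is illustrative, not part of the package):

// rate turns two samples of a CTR_TYPE_PER_SECOND counter, taken
// `seconds` apart, into a per-second value.
func rate(prev, cur uint64, seconds float64) float64 {
	return float64(cur-prev) / seconds
}

// ratio divides a CTR_TYPE_RATIO counter (e.g. cache hits) by its
// CTR_TYPE_RATIO_BASE counter (e.g. cache lookups) to get a hit ratio.
func ratio(num, base uint64) float64 {
	if base == 0 {
		return 0
	}
	return float64(num) / float64(base)
}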

const DetectMemAccessPastTx = false

DetectMemAccessPastTx, when true, helps us catch places in api and executor where mmapped memory is accessed after the point at which the transaction has committed or rolled back. Since memory segments will be recycled by the underlying databases, this can lead to corruption. When DetectMemAccessPastTx is true, code in bolt.go will copy the transactionally viewed memory before returning it for bitmap reading, and then zero it or overwrite it with -2 when the Tx completes.

Should be false for production.

const ErrTransactionExclusive = Error("there is an exclusive transaction, try later")

const ErrTransactionExists = Error("transaction with the given id already exists")

const ErrTransactionNotFound = Error("transaction not found")

const (
	ErrViewNotFound = Error("view not found")
)
const (
	// HeaderRequestUserID is request userid header
	HeaderRequestUserID = "X-Request-Userid"
)
const LeftShifted16MaxContainerKey = uint64(0xffffffffffff0000) // or math.MaxUint64 - (1<<16 - 1), or 18446744073709486080

LeftShifted16MaxContainerKey is 0xffffffffffff0000. It is similar to the roaring.maxContainerKey 0x0000ffffffffffff, but shifted 16 bits to the left so that its domain is the full [0, 2^64) bit space. It is used to match the semantics of the roaring.OffsetRange() API. This is the maximum endx value for Tx.OffsetRange(), because the lowbits, as in roaring.OffsetRange(), are not allowed to be set. It is used in Tx.RoaringBitmap() to obtain the full contents of a fragment from a call to tx.OffsetRange() by requesting [0, LeftShifted16MaxContainerKey) with an offset of 0.
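The three spellings of this constant given above agree, as a quick check confirms:

package main

import (
	"fmt"
	"math"
)

const LeftShifted16MaxContainerKey = uint64(0xffffffffffff0000)

func main() {
	fmt.Println(LeftShifted16MaxContainerKey == math.MaxUint64-(1<<16-1)) // true
	fmt.Println(LeftShifted16MaxContainerKey == 18446744073709486080)     // true
	fmt.Println(LeftShifted16MaxContainerKey == 0x0000ffffffffffff<<16)   // true
}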

const (
	RBFTxn string = "rbf"
)

Public strings that pilosa/server/config.go can reference.

const TimeFormat = "2006-01-02T15:04"

TimeFormat is the go-style time format used to parse string dates.
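For instance, a date string at this granularity parses as follows (a minimal sketch using only the standard library):

package main

import (
	"fmt"
	"time"
)

const TimeFormat = "2006-01-02T15:04" // pilosa.TimeFormat

func main() {
	t, err := time.Parse(TimeFormat, "2017-03-02T15:04")
	if err != nil {
		panic(err)
	}
	fmt.Println(t.UTC()) // 2017-03-02 15:04:00 +0000 UTC
}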

const TxInitialMmapSize = 4 << 30 // 4GB

Variables

var (
	DefaultEpoch     = time.Unix(0, 0).UTC()            // 1970-01-01T00:00:00Z
	MinTimestampNano = time.Unix(-1<<32, 0).UTC()       // 1833-11-24T17:31:44Z
	MaxTimestampNano = time.Unix(1<<32, 0).UTC()        // 2106-02-07T06:28:16Z
	MinTimestamp     = time.Unix(-62135596799, 0).UTC() // 0001-01-01T00:00:01Z
	MaxTimestamp     = time.Unix(253402300799, 0).UTC() // 9999-12-31T23:59:59Z
)

Timestamp field ranges.

var (
	ErrHostRequired = errors.New("host required")

	ErrIndexRequired = errors.New("index required")
	ErrIndexExists   = disco.ErrIndexExists
	ErrIndexNotFound = errors.New("index not found")

	ErrInvalidAddress = errors.New("invalid address")
	ErrInvalidSchema  = errors.New("invalid schema")

	ErrForeignIndexNotFound = errors.New("foreign index not found")

	// ErrFieldRequired is returned when no field is specified.
	ErrFieldRequired  = errors.New("field required")
	ErrColumnRequired = errors.New("column required")
	ErrFieldExists    = disco.ErrFieldExists
	ErrFieldNotFound  = errors.New("field not found")

	ErrBSIGroupNotFound         = errors.New("bsigroup not found")
	ErrBSIGroupExists           = errors.New("bsigroup already exists")
	ErrBSIGroupNameRequired     = errors.New("bsigroup name required")
	ErrInvalidBSIGroupType      = errors.New("invalid bsigroup type")
	ErrInvalidBSIGroupRange     = errors.New("invalid bsigroup range")
	ErrInvalidBSIGroupValueType = errors.New("invalid bsigroup value type")
	ErrBSIGroupValueTooLow      = errors.New("value too low for configured field range")
	ErrBSIGroupValueTooHigh     = errors.New("value too high for configured field range")
	ErrInvalidRangeOperation    = errors.New("invalid range operation")
	ErrInvalidBetweenValue      = errors.New("invalid value for between operation")
	ErrDecimalOutOfRange        = errors.New("decimal value out of range")

	ErrViewRequired     = errors.New("view required")
	ErrViewExists       = disco.ErrViewExists
	ErrInvalidView      = errors.New("invalid view")
	ErrInvalidCacheType = errors.New("invalid cache type")

	ErrName = errors.New("invalid index or field name, must match [a-z][a-z0-9Θ_-]* and contain at most 230 characters")

	// ErrFragmentNotFound is returned when a fragment does not exist.
	ErrFragmentNotFound = errors.New("fragment not found")
	ErrQueryRequired    = errors.New("query required")
	ErrQueryCancelled   = errors.New("query cancelled")
	ErrQueryTimeout     = errors.New("query timeout")
	ErrTooManyWrites    = errors.New("too many write commands")

	// TODO(2.0) poorly named - used when a *node* doesn't own a shard. Probably
	// we won't need this error at all by 2.0 though.
	ErrClusterDoesNotOwnShard = errors.New("node does not own shard")

	// ErrPreconditionFailed is returned when specified index/field createdAt timestamps don't match
	ErrPreconditionFailed = errors.New("precondition failed")

	ErrNodeIDNotExists = errors.New("node with provided ID does not exist")
	ErrNodeNotPrimary  = errors.New("node is not the primary")

	ErrNotImplemented            = errors.New("not implemented")
	ErrFieldsArgumentRequired    = errors.New("fields argument required")
	ErrExpectedFieldListArgument = errors.New("expected field list argument")

	ErrIntFieldWithKeys       = errors.New("int field cannot be created with 'keys=true' option")
	ErrDecimalFieldWithKeys   = errors.New("decimal field cannot be created with 'keys=true' option")
	ErrTimestampFieldWithKeys = errors.New("timestamp field cannot be created with 'keys=true' option")
)

System errors.

var (
	ErrTranslateStoreClosed       = errors.New("translate store closed")
	ErrTranslateStoreReaderClosed = errors.New("translate store reader closed")
	ErrReplicationNotSupported    = errors.New("replication not supported")
	ErrTranslateStoreReadOnly     = errors.New("translate store could not find or create key, translate store read only")
	ErrTranslateStoreNotFound     = errors.New("translate store not found")
	ErrTranslatingKeyNotFound     = errors.New("translating key not found")
)

Translate store errors.

var (
	// ErrTranslateStoreClosed is returned when reading from an TranslateEntryReader
	// and the underlying store is closed.
	ErrBoltTranslateStoreClosed = errors.New("boltdb: translate store closing")

	// ErrTranslateKeyNotFound is returned when translating key
	// and the underlying store returns an empty set
	ErrTranslateKeyNotFound = errors.New("boltdb: translating key returned empty set")
)
var BuildTime string

var Commit string
var CounterCacheThresholdReached = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricCacheThresholdReached,
		Help:      "TODO",
	},
)
var CounterClearBit = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricClearBit,
		Help:      "TODO",
	},
)
var CounterClearedN = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricClearedN,
		Help:      "TODO",
	},
)
var CounterClearingingN = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricClearingN,
		Help:      "TODO",
	},
)
var CounterCreateField = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricCreateField,
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterCreateIndex = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricCreateIndex,
		Help:      "TODO",
	},
)
var CounterDeleteAvailableShard = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricDeleteAvailableShard,
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterDeleteDataframe = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricDeleteDataframe,
		Help:      "TODO",
	},
)
var CounterDeleteField = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricDeleteField,
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterDeleteIndex = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricDeleteIndex,
		Help:      "TODO",
	},
)
var CounterExclusiveTransactionActive = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricExclusiveTransactionActive,
		Help:      "TODO",
	},
)
var CounterExclusiveTransactionBlocked = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricExclusiveTransactionBlocked,
		Help:      "TODO",
	},
)
var CounterExclusiveTransactionEnd = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricExclusiveTransactionEnd,
		Help:      "TODO",
	},
)
var CounterExclusiveTransactionRequest = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricExclusiveTransactionRequest,
		Help:      "TODO",
	},
)
var CounterGarbageCollection = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricGarbageCollection,
		Help:      "TODO",
	},
)
var CounterImportedN = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricImportedN,
		Help:      "TODO",
	},
)
var CounterImportingN = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricImportingN,
		Help:      "TODO",
	},
)
var CounterInvalidateCache = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricInvalidateCache,
		Help:      "TODO",
	},
)
var CounterInvalidateCacheSkipped = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricInvalidateCacheSkipped,
		Help:      "TODO",
	},
)
var CounterJobTotal = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "job_total",
		Help:      "TODO",
	},
)
var CounterPQLQueries = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricPqlQueries,
		Help:      "TODO",
	},
)
var CounterQueryAllTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_all_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryApplyTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_apply_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryArrowTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_arrow_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryBitmapTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_bitmap_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)

CounterQueryBitmapTotal represents bitmap calls.

var CounterQueryClearRowTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_clearrow_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryClearTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_clear_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryConstRowTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_constrow_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryCountTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_count_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryDeleteTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_delete_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryDifferenceTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_difference_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryDistinctTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_distinct_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryExternalLookupTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_externallookup_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryExtractTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_extract_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryFieldValueTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_fieldvalue_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryGroupByTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_groupby_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryIncludesColumnTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_includescolumn_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryInnerUnionRowsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_innerunionrows_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryIntersectTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_intersect_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryLimitTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_limit_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryMaxRowTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_maxrow_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryMaxTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_max_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryMinRowTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_minrow_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryMinTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_min_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryNotTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_not_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryOptionsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_options_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryPercentileTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_percentile_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryPrecomputedTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_precomputed_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryRangeTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_range_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryRowBSITotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_row_bsi_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryRowTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_row_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryRowsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_rows_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQuerySetTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_set_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryShiftTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_shift_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQuerySortTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_sort_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryStoreTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_store_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQuerySumTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_sum_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryTopKTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_topk_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryTopNTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_topn_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryUnionRowsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_unionrows_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryUnionTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_union_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterQueryXorTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "query_xor_total",
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var CounterReadDirtyCache = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricReadDirtyCache,
		Help:      "TODO",
	},
)
var CounterRecalculateCache = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricRecalculateCache,
		Help:      "TODO",
	},
)
var CounterSQLQueries = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricSqlQueries,
		Help:      "TODO",
	},
)
var CounterSetBit = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricSetBit,
		Help:      "TODO",
	},
)
var CounterSetRow = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      "setRow",
		Help:      "TODO",
	},
)
var CounterTransactionBlocked = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricTransactionBlocked,
		Help:      "TODO",
	},
)
var CounterTransactionEnd = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricTransactionEnd,
		Help:      "TODO",
	},
)
var CounterTransactionStart = prometheus.NewCounter(
	prometheus.CounterOpts{
		Namespace: "pilosa",
		Name:      MetricTransactionStart,
		Help:      "TODO",
	},
)
var DoPerQueryProfiling = false

var ErrAborted = fmt.Errorf("error: update was aborted")

var ErrInvalidTimeQuantum = errors.New("invalid time quantum")

ErrInvalidTimeQuantum is returned when parsing a time quantum.

var ErrNoData = fmt.Errorf("no data")

var ErrQcxDone = fmt.Errorf("Qcx already Aborted or Finished, so must call reset before re-use")
var GaugeFrees = prometheus.NewGauge(
	prometheus.GaugeOpts{
		Namespace: "pilosa",
		Name:      MetricFrees,
		Help:      "TODO",
	},
)
var GaugeGoroutines = prometheus.NewGauge(
	prometheus.GaugeOpts{
		Namespace: "pilosa",
		Name:      MetricGoroutines,
		Help:      "TODO",
	},
)
var GaugeHeapAlloc = prometheus.NewGauge(
	prometheus.GaugeOpts{
		Namespace: "pilosa",
		Name:      MetricHeapAlloc,
		Help:      "TODO",
	},
)
var GaugeHeapInUse = prometheus.NewGauge(
	prometheus.GaugeOpts{
		Namespace: "pilosa",
		Name:      MetricHeapInuse,
		Help:      "TODO",
	},
)
var GaugeIndexMaxShard = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Namespace: "pilosa",
		Name:      MetricMaxShard,
		Help:      "TODO",
	},
	[]string{
		"index",
	},
)
var GaugeMallocs = prometheus.NewGauge(
	prometheus.GaugeOpts{
		Namespace: "pilosa",
		Name:      MetricMallocs,
		Help:      "TODO",
	},
)
var GaugeOpenFiles = prometheus.NewGauge(
	prometheus.GaugeOpts{
		Namespace: "pilosa",
		Name:      MetricOpenFiles,
		Help:      "TODO",
	},
)
var GaugeRankCacheLength = prometheus.NewGauge(
	prometheus.GaugeOpts{
		Namespace: "pilosa",
		Name:      MetricRankCacheLength,
		Help:      "TODO",
	},
)
var GaugeStackInUse = prometheus.NewGauge(
	prometheus.GaugeOpts{
		Namespace: "pilosa",
		Name:      MetricStackInuse,
		Help:      "TODO",
	},
)
var GaugeWorkerTotal = prometheus.NewGauge(
	prometheus.GaugeOpts{
		Namespace: "pilosa",
		Name:      "worker_total",
		Help:      "TODO",
	},
)
var GoVersion string = runtime.Version()

var NaN = math.NaN()

var NewAuditor func() testhook.Auditor = NewNopAuditor

var NoopFinisher = func(perr *error) {}
var NopBroadcaster broadcaster = &nopBroadcaster{}

NopBroadcaster represents a Broadcaster that doesn't do anything.

var PerfCounterSQLBulkInsertBatchesSec = perfCtr{
	// contains filtered or unexported fields
}

var PerfCounterSQLBulkInsertsSec = perfCtr{
	// contains filtered or unexported fields
}

var PerfCounterSQLDeletesSec = perfCtr{
	// contains filtered or unexported fields
}

var PerfCounterSQLInsertsSec = perfCtr{
	// contains filtered or unexported fields
}

var PerfCounterSQLRequestSec = perfCtr{
	// contains filtered or unexported fields
}
var SummaryBatchFlushDurationSeconds = prometheus.NewSummary(
	prometheus.SummaryOpts{
		Namespace:  "pilosa",
		Name:       MetricBatchFlushDurationSeconds,
		Help:       "TODO",
		Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
	},
)
var SummaryBatchImportDurationSeconds = prometheus.NewSummary(
	prometheus.SummaryOpts{
		Namespace:  "pilosa",
		Name:       MetricBatchImportDurationSeconds,
		Help:       "TODO",
		Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
	},
)
var SummaryBatchShardImportBuildRequestsSeconds = prometheus.NewSummary(
	prometheus.SummaryOpts{
		Namespace:  "pilosa",
		Name:       MetricBatchShardImportBuildRequestsSeconds,
		Help:       "TODO",
		Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
	},
)
var SummaryBatchShardImportDurationSeconds = prometheus.NewSummary(
	prometheus.SummaryOpts{
		Namespace:  "pilosa",
		Name:       MetricBatchShardImportDurationSeconds,
		Help:       "TODO",
		Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
	},
)
var SummaryGRPCStreamFormatDurationSeconds = prometheus.NewSummary(
	prometheus.SummaryOpts{
		Namespace:  "pilosa",
		Name:       MetricGRPCStreamFormatDurationSeconds,
		Help:       "TODO",
		Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
	},
)
var SummaryGRPCStreamQueryDurationSeconds = prometheus.NewSummary(
	prometheus.SummaryOpts{
		Namespace:  "pilosa",
		Name:       MetricGRPCStreamQueryDurationSeconds,
		Help:       "TODO",
		Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
	},
)
var SummaryGRPCUnaryFormatDurationSeconds = prometheus.NewSummary(
	prometheus.SummaryOpts{
		Namespace:  "pilosa",
		Name:       MetricGRPCUnaryFormatDurationSeconds,
		Help:       "TODO",
		Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
	},
)
var SummaryGRPCUnaryQueryDurationSeconds = prometheus.NewSummary(
	prometheus.SummaryOpts{
		Namespace:  "pilosa",
		Name:       MetricGRPCUnaryQueryDurationSeconds,
		Help:       "TODO",
		Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
	},
)
var SummaryHttpRequests = prometheus.NewSummaryVec(
	prometheus.SummaryOpts{
		Namespace:  "pilosa",
		Name:       MetricHTTPRequest,
		Help:       "TODO",
		Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
	},
	[]string{
		"method",
		"path",
		"slow",
		"useragent",
		"where",
	},
)
var TrialDeadline string

var Variant string
var Version string

Functions

func AddAuthToken

func AddAuthToken(ctx context.Context, header *http.Header)

AddAuthToken checks in a couple of places for our authorization token and, if it finds one, adds it to the Authorization header of the request. It does the same for refresh tokens.

func AddressWithDefaults

func AddressWithDefaults(addr string) (*pnet.URI, error)

AddressWithDefaults converts addr into a valid address, using defaults when necessary.

func CPUProfileForDur

func CPUProfileForDur(dur time.Duration, outpath string)

CPUProfileForDur (where "Dur" is short for "Duration") is used for performance tuning during development. Its only call site, in holder.go, is currently commented out.

func CheckEpochOutOfRange

func CheckEpochOutOfRange(epoch, min, max time.Time) error

CheckEpochOutOfRange checks whether the epoch is after max or before min.
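A usage sketch, assuming the module path github.com/featurebasedb/featurebase/v3 and the timestamp range variables listed above:

package main

import (
	"log"
	"time"

	pilosa "github.com/featurebasedb/featurebase/v3"
)

func main() {
	epoch := time.Date(2020, 1, 1, 0, 0, 0, 0, time.UTC)
	// err is non-nil if epoch falls outside [MinTimestampNano, MaxTimestampNano].
	if err := pilosa.CheckEpochOutOfRange(epoch, pilosa.MinTimestampNano, pilosa.MaxTimestampNano); err != nil {
		log.Fatal(err)
	}
}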

func CompareTransactions

func CompareTransactions(t1, t2 *Transaction) error

func Concat

func Concat(schema *arrow.Schema, tables []*BasicTable, mem memory.Allocator) arrow.Table

func DeleteRows

func DeleteRows(ctx context.Context, src *Row, idx *Index, shard uint64) (bool, error)

func DeleteRowsWithFlow

func DeleteRowsWithFlow(ctx context.Context, src *Row, idx *Index, shard uint64, normalFlow bool) (change bool, err error)

func DeleteRowsWithFlowWithKeys

func DeleteRowsWithFlowWithKeys(ctx context.Context, columns *roaring.Bitmap, idx *Index, shard uint64, normalFlow bool) (bool, error)

func DeleteRowsWithOutKeysFlow

func DeleteRowsWithOutKeysFlow(ctx context.Context, columns *roaring.Bitmap, idx *Index, shard uint64, normalFlow bool) (changed bool, err error)

func FieldFromFieldOptions

func FieldFromFieldOptions(fname dax.FieldName, opts ...FieldOption) (*dax.Field, error)

FieldFromFieldOptions creates a dax.Field given a set of existing field options. It should possibly be unconditionally setting TrackExistence, because it's called in two places in SQL3 both of which are creating new tables, but for now I'm trying to keep its behavior transparent, and handle the enabling of TrackExistence in the code that knows it is creating a field, thus, in sql's create/alter table, or in api.CreateField.

func FieldInfoToField

func FieldInfoToField(fi *FieldInfo) *dax.Field

FieldInfoToField converts a featurebase.FieldInfo to a dax.Field.

func FieldInfosToFields added in v3.34.0

func FieldInfosToFields(fis []*FieldInfo) []*dax.Field

FieldInfosToFields converts a []*featurebase.FieldInfo to a []*dax.Field.

func FormatQualifiedFieldName

func FormatQualifiedFieldName(index, field string) string

FormatQualifiedFieldName generates a qualified name for the field to be used with Tx operations.

func FormatQualifiedFragmentName

func FormatQualifiedFragmentName(index, field, view string, shard uint64) string

FormatQualifiedFragmentName generates a qualified name for the fragment to be used with Tx operations.

func FormatQualifiedIndexName

func FormatQualifiedIndexName(index string) string

FormatQualifiedIndexName generates a qualified name for the index to be used with Tx operations.

func FormatQualifiedViewName

func FormatQualifiedViewName(index, field, view string) string

FormatQualifiedViewName generates a qualified name for the view to be used with Tx operations.

func GenerateNextPartitionedID

func GenerateNextPartitionedID(index string, prev uint64, partitionID, partitionN int) uint64

GenerateNextPartitionedID returns the next ID within the same partition.

func GenericApplyFilter

func GenericApplyFilter(tx Tx, index, field, view string, shard uint64, ckey uint64, filter roaring.BitmapFilter) (err error)

GenericApplyFilter implements ApplyFilter in terms of tx.ContainerIterator, as a convenience if a Tx backend hasn't implemented this new function yet.

func GetHTTPClient

func GetHTTPClient(t *tls.Config, opts ...ClientOption) *http.Client

func GetIP

func GetIP(r *http.Request) string

func GetLoopProgress

func GetLoopProgress(start time.Time, now time.Time, iteration uint, total uint) (remaining time.Duration, pctDone float64)

GetLoopProgress returns the estimated time remaining and the completion percentage for a loop, given the following parameters: the start time, the current time, the current iteration, and the total number of items.
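A usage sketch (same module-path assumption as above):

package main

import (
	"fmt"
	"time"

	pilosa "github.com/featurebasedb/featurebase/v3"
)

func main() {
	start := time.Now().Add(-30 * time.Second)
	// 25 of 100 iterations done in roughly 30s, so expect an estimate
	// on the order of 90s remaining.
	remaining, pctDone := pilosa.GetLoopProgress(start, time.Now(), 25, 100)
	fmt.Println(remaining, pctDone)
}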

func IndexInfoToTable

func IndexInfoToTable(ii *IndexInfo) *dax.Table

IndexInfoToTable converts a featurebase.IndexInfo to a dax.Table.

func IndexInfosToTables

func IndexInfosToTables(iis []*IndexInfo) []*dax.Table

IndexInfosToTables converts a slice of featurebase.IndexInfo to a slice of dax.Table.

func IsValidTimeUnit

func IsValidTimeUnit(unit string) bool

IsValidTimeUnit returns true if unit is valid.

func IvyReduce

func IvyReduce(reduceCode string, opCode string, opt *ExecOptions) (func(ctx context.Context, prev, v interface{}) interface{}, func() (*dataframe.DataFrame, error))

Possibly combine all arrays together then apply some interesting computation at the end?

func MarshalInternalMessage

func MarshalInternalMessage(m Message, s Serializer) ([]byte, error)

MarshalInternalMessage serializes the pilosa message and adds pilosa internal type info which is used by the internal messaging stuff.

func MemProfileForDur

func MemProfileForDur(dur time.Duration, outpath string)

MemProfileForDur (where "Dur" is short for "Duration") is used for performance tuning during development. Its only call site, in holder.go, is currently commented out.

func MustBackendToTxtype

func MustBackendToTxtype(backend string) (typ txtype)

func NewNopAuditor

func NewNopAuditor() testhook.Auditor

func NewOnPremImporter

func NewOnPremImporter(api *API) *onPremImporter

func NewOnPremSchema

func NewOnPremSchema(api *API) *onPremSchema

func NewRankCache

func NewRankCache(maxEntries uint32) *rankCache

NewRankCache returns a new instance of RankCache.

func OpenIDAllocator

func OpenIDAllocator(path string, enableFsync bool) (*idAllocator, error)

func OptAPIDirectiveWorkerPoolSize

func OptAPIDirectiveWorkerPoolSize(size int) apiOption

func OptAPIImportWorkerPoolSize

func OptAPIImportWorkerPoolSize(size int) apiOption

func OptAPIIsComputeNode

func OptAPIIsComputeNode(is bool) apiOption

func OptAPIServer

func OptAPIServer(s *Server) apiOption

func OptAPIServerlessStorage added in v3.27.0

func OptAPIServerlessStorage(mm *storage.ResourceManager) apiOption

func OptHandlerAPI

func OptHandlerAPI(api *API) handlerOption

func OptHandlerAllowedOrigins

func OptHandlerAllowedOrigins(origins []string) handlerOption

func OptHandlerAuthN

func OptHandlerAuthN(authn *authn.Auth) handlerOption

func OptHandlerAuthZ

func OptHandlerAuthZ(gp *authz.GroupPermissions) handlerOption

func OptHandlerCloseTimeout

func OptHandlerCloseTimeout(d time.Duration) handlerOption

OptHandlerCloseTimeout controls how long to wait for the http Server to shut down cleanly before forcibly destroying it. Default is 30 seconds.

func OptHandlerFileSystem

func OptHandlerFileSystem(fs FileSystem) handlerOption

func OptHandlerListener

func OptHandlerListener(ln net.Listener, url string) handlerOption

OptHandlerListener sets the listener that will be used by the HTTP server. url must be the advertised URL; it is used to log where the Web UI can be reached. This option is mandatory.

func OptHandlerLogger

func OptHandlerLogger(logger logger.Logger) handlerOption

func OptHandlerMiddleware

func OptHandlerMiddleware(middleware func(http.Handler) http.Handler) handlerOption

func OptHandlerQueryLogger

func OptHandlerQueryLogger(logger logger.Logger) handlerOption

func OptHandlerRoaringSerializer

func OptHandlerRoaringSerializer(s Serializer) handlerOption

func OptHandlerSerializer

func OptHandlerSerializer(s Serializer) handlerOption

func ParseQualifiedFragmentName

func ParseQualifiedFragmentName(name string) (index, field, view string, shard uint64, err error)

ParseQualifiedFragmentName parses a qualified name into its parts.

func ReplaceFirstFromBack

func ReplaceFirstFromBack(s, toReplace, replacement string) string

ReplaceFirstFromBack replaces the first instance of toReplace from the back of the string s (that is, the last occurrence).
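The described behavior can be pictured with an equivalent sketch (not the package's actual implementation):

import "strings"

// replaceLastOccurrence mirrors the documented behavior: only the last
// occurrence of toReplace in s is replaced.
func replaceLastOccurrence(s, toReplace, replacement string) string {
	i := strings.LastIndex(s, toReplace)
	if i < 0 {
		return s
	}
	return s[:i] + replacement + s[i+len(toReplace):]
}

// replaceLastOccurrence("a.b.c", ".", "/") == "a.b/c"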

func Rev

func Rev(input string) string

Rev reverses a string

func TimeUnitNanos

func TimeUnitNanos(unit string) int64

TimeUnitNanos returns the number of nanoseconds in unit.
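Given the time unit constants listed above, the expected mapping is as follows (a sketch, same module-path assumption as above):

package main

import (
	"fmt"

	pilosa "github.com/featurebasedb/featurebase/v3"
)

func main() {
	for _, unit := range []string{
		pilosa.TimeUnitSeconds,      // expect 1000000000
		pilosa.TimeUnitMilliseconds, // expect 1000000
		pilosa.TimeUnitMicroseconds, // expect 1000
		pilosa.TimeUnitNanoseconds,  // expect 1
	} {
		fmt.Printf("%s = %d ns\n", unit, pilosa.TimeUnitNanos(unit))
	}
}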

func TimestampToVal

func TimestampToVal(unit string, ts time.Time) int64

TimestampToVal takes a time unit and a time.Time and converts it to an integer value.

func ValToTimestamp

func ValToTimestamp(unit string, val int64) (time.Time, error)

ValToTimestamp takes a time unit and an integer value and converts it to a time.Time.
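These two functions are inverses at the chosen granularity; a round-trip sketch (same module-path assumption as above, and presuming values are counted from the Unix epoch):

package main

import (
	"fmt"
	"time"

	pilosa "github.com/featurebasedb/featurebase/v3"
)

func main() {
	ts := time.Date(2022, 6, 1, 12, 0, 0, 0, time.UTC)
	val := pilosa.TimestampToVal(pilosa.TimeUnitSeconds, ts)
	back, err := pilosa.ValToTimestamp(pilosa.TimeUnitSeconds, val)
	if err != nil {
		panic(err)
	}
	fmt.Println(back.Equal(ts)) // expect true at second granularity
}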

func ValidateName

func ValidateName(name string) error

ValidateName ensures that the index, field, or view name has a valid format.

func VersionInfo

func VersionInfo(rename bool) string

Types

type API

type API struct {
	Serializer Serializer
	// contains filtered or unexported fields
}

API provides the top level programmatic interface to Pilosa. It is usually wrapped by a handler which provides an external interface (e.g. HTTP).

func NewAPI

func NewAPI(opts ...apiOption) (*API, error)

NewAPI returns a new API instance.

func (*API) ActiveQueries

func (api *API) ActiveQueries(ctx context.Context) ([]ActiveQueryStatus, error)

func (*API) ApplyDataframeChangeset

func (api *API) ApplyDataframeChangeset(ctx context.Context, index string, cs *ChangesetRequest, shard uint64) error

func (*API) ApplyDirective

func (api *API) ApplyDirective(ctx context.Context, d *dax.Directive) error

ApplyDirective applies a Directive received, from the Controller, at the /directive endpoint.

func (*API) ApplySchema

func (api *API) ApplySchema(ctx context.Context, s *Schema, remote bool) error

ApplySchema takes the given schema and applies it across the cluster (if remote is false), or just to this node (if remote is true). This is designed for the use case of replicating a schema from one Pilosa cluster to another which is initially empty. It is not officially supported in other scenarios and may produce surprising results.

func (*API) AvailableShards

func (api *API) AvailableShards(ctx context.Context, indexName string) (*roaring.Bitmap, error)

AvailableShards returns bitmap of available shards for a single index.

func (*API) AvailableShardsByIndex

func (api *API) AvailableShardsByIndex(ctx context.Context) map[string]*roaring.Bitmap

AvailableShardsByIndex returns bitmaps of shards with available by index name.

func (*API) Close

func (api *API) Close() error

Close closes the api and waits for it to shut down.

func (*API) ClusterMessage

func (api *API) ClusterMessage(ctx context.Context, reqBody io.Reader) error

ClusterMessage is for internal use. It decodes a protobuf message out of the body and forwards it to the BroadcastHandler.

func (*API) ClusterName

func (api *API) ClusterName() string

ClusterName returns the cluster name.

func (*API) CommitIDs

func (api *API) CommitIDs(key IDAllocKey, session [32]byte, count uint64) error

func (*API) CompilePlan

func (api *API) CompilePlan(ctx context.Context, q string) (planner_types.PlanOperator, error)

CompilePlan takes a sql string and returns a PlanOperator. Note that this is different from the internal CompilePlan() method on the CompilePlanner interface, which takes a parser statement and returns a PlanOperator. In other words, this CompilePlan() both parses and plans the provided sql string; it's the equivalent of the CompileExecutionPlan() method on Server. TODO: consider renaming this to something with less conflict.

func (*API) CreateField

func (api *API) CreateField(ctx context.Context, indexName string, fieldName string, opts ...FieldOption) (*Field, error)

CreateField makes the named field in the named index with the given options.

The resulting field will always have TrackExistence set.

func (*API) CreateFieldKeys

func (api *API) CreateFieldKeys(ctx context.Context, index, field string, keys ...string) (map[string]uint64, error)

CreateFieldKeys looks up keys in a field, mapping them to IDs. If a key does not exist, it will be created.

func (*API) CreateIndex

func (api *API) CreateIndex(ctx context.Context, indexName string, options IndexOptions) (*Index, error)

CreateIndex makes a new Pilosa index.

func (*API) CreateIndexKeys

func (api *API) CreateIndexKeys(ctx context.Context, index string, keys ...string) (map[string]uint64, error)

CreateIndexKeys looks up column keys in the index, mapping them to IDs. If a key does not exist, it will be created.

func (*API) DeleteAvailableShard

func (api *API) DeleteAvailableShard(_ context.Context, indexName, fieldName string, shardID uint64) error

DeleteAvailableShard removes a shard ID from the available shard set cache.

func (*API) DeleteDataframe

func (api *API) DeleteDataframe(ctx context.Context, indexName string) error

func (*API) DeleteField

func (api *API) DeleteField(ctx context.Context, indexName string, fieldName string) error

DeleteField removes the named field from the named index. If the index is not found, an error is returned. If the field is not found, it is ignored and no action is taken.

func (*API) DeleteIndex

func (api *API) DeleteIndex(ctx context.Context, indexName string) error

DeleteIndex removes the named index. If the index is not found it does nothing and returns no error.

func (*API) DeleteView

func (api *API) DeleteView(ctx context.Context, indexName string, fieldName string, viewName string) error

DeleteView removes the given view.

func (*API) Directive

func (api *API) Directive(ctx context.Context, d *dax.Directive) error

Directive applies the provided Directive to the local computer.

func (*API) DirectiveApplied

func (api *API) DirectiveApplied(ctx context.Context) (bool, error)

DirectiveApplied returns true if the computer's current Directive has been applied and is ready to be queried. This is temporary (primarily for tests) and needs to be refactored as we improve the logic around controller-to-computer communication.

func (*API) ExportCSV

func (api *API) ExportCSV(ctx context.Context, indexName string, fieldName string, shard uint64, w io.Writer) error

ExportCSV encodes the fragment designated by the index, field, and shard as CSV of the form <row>,<col>.

func (*API) Field

func (api *API) Field(ctx context.Context, indexName, fieldName string) (*Field, error)

Field retrieves the named field.

func (*API) FieldInfo

func (api *API) FieldInfo(ctx context.Context, indexName, fieldName string) (*FieldInfo, error)

FieldInfo returns the same information as Schema(), but only for a single field.

func (*API) FieldTranslateData

func (api *API) FieldTranslateData(ctx context.Context, indexName, fieldName string) (TranslateStore, error)

FieldTranslateData returns all translation data in the specified field.

func (*API) FindFieldKeys

func (api *API) FindFieldKeys(ctx context.Context, index, field string, keys ...string) (map[string]uint64, error)

FindFieldKeys looks up keys in a field, mapping them to IDs. If a key does not exist, it will be absent from the resulting map.

func (*API) FindIndexKeys

func (api *API) FindIndexKeys(ctx context.Context, index string, keys ...string) (map[string]uint64, error)

FindIndexKeys looks up column keys in the index, mapping them to IDs. If a key does not exist, it will be absent from the resulting map.

func (*API) FinishTransaction

func (api *API) FinishTransaction(ctx context.Context, id string, remote bool) (*Transaction, error)

func (*API) FragmentData

func (api *API) FragmentData(ctx context.Context, indexName, fieldName, viewName string, shard uint64) (io.WriterTo, error)

FragmentData returns all data in the specified fragment.

func (*API) GetDataframeSchema

func (api *API) GetDataframeSchema(ctx context.Context, indexName string) (interface{}, error)

func (*API) GetTransaction

func (api *API) GetTransaction(ctx context.Context, id string, remote bool) (*Transaction, error)

func (*API) GetTranslateEntryReader

func (api *API) GetTranslateEntryReader(ctx context.Context, offsets TranslateOffsetMap) (_ TranslateEntryReader, err error)

GetTranslateEntryReader provides an entry reader for key translation logs starting at offset.

func (*API) Holder

func (api *API) Holder() *Holder

func (*API) Hosts

func (api *API) Hosts(ctx context.Context) []*disco.Node

Hosts returns a list of the hosts in the cluster including their ID, URL, and which is the primary.

func (*API) Import

func (api *API) Import(ctx context.Context, qcx *Qcx, req *ImportRequest, opts ...ImportOption) (err error)

Import does the top-level importing.

func (*API) ImportAtomicRecord

func (api *API) ImportAtomicRecord(ctx context.Context, qcx *Qcx, req *AtomicRecord, opts ...ImportOption) error

func (*API) ImportRoaring

func (api *API) ImportRoaring(ctx context.Context, indexName, fieldName string, shard uint64, remote bool, req *ImportRoaringRequest) (err0 error)

ImportRoaring is a low level interface for importing data to Pilosa when extremely high throughput is desired. The data must be encoded in a particular way which may be unintuitive (discussed below). The data is merged with existing data.

It takes as input a roaring bitmap which it uses as the data for the indicated index, field, and shard. The bitmap may be encoded according to the official roaring spec (https://github.com/RoaringBitmap/RoaringFormatSpec), or to the pilosa roaring spec which supports 64 bit integers (https://www.pilosa.com/docs/latest/architecture/#roaring-bitmap-storage-format).

The data should be encoded the same way that Pilosa stores fragments internally. A bit "i" being set in the input bitmap indicates that the bit is set in Pilosa row "i/ShardWidth", and in column (shard*ShardWidth)+(i%ShardWidth). That is to say that "data" represents all of the rows in this shard of this field concatenated together in one long bitmap.
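
Concretely, the position math described above can be sketched as follows (ShardWidth is the package's shard width constant):

func roaringImportPosition(rowID, columnID, shard uint64) uint64 {
	// A bit i set in the import bitmap means:
	//   row    = i / ShardWidth
	//   column = (shard * ShardWidth) + (i % ShardWidth)
	// so to encode rowID/columnID for this shard, set bit:
	return rowID*ShardWidth + (columnID % ShardWidth)
}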

func (*API) ImportRoaringShard

func (api *API) ImportRoaringShard(ctx context.Context, indexName string, shard uint64, req *ImportRoaringShardRequest) error

ImportRoaringShard transactionally imports roaring-encoded data across many fields in a single shard. It can both set and clear bits and updates caches/bitDepth as appropriate, although only the bitmap parts happen truly transactionally.

This function does not attempt to do existence tracking, because it can't; there's no way to distinguish empty sets from not setting bits. As a result, users of this endpoint are responsible for providing corrected existence views for fields with existence tracking. Our batch API does that.

func (*API) ImportValue

func (api *API) ImportValue(ctx context.Context, qcx *Qcx, req *ImportValueRequest, opts ...ImportOption) error

ImportValue is a wrapper around the common code in ImportValueWithTx, which currently just translates req.Clear into a clear ImportOption.

func (*API) ImportValueWithTx

func (api *API) ImportValueWithTx(ctx context.Context, qcx *Qcx, req *ImportValueRequest, options *ImportOptions) (err0 error)

ImportValueWithTx bulk imports values into a particular field.

func (*API) ImportWithTx

func (api *API) ImportWithTx(ctx context.Context, qcx *Qcx, req *ImportRequest, options *ImportOptions) error

ImportWithTx bulk imports data into a particular index,field,shard.

func (*API) Index

func (api *API) Index(ctx context.Context, indexName string) (*Index, error)

Index retrieves the named index.

func (*API) IndexInfo

func (api *API) IndexInfo(ctx context.Context, name string) (*IndexInfo, error)

IndexInfo returns the same information as Schema(), but only for a single index.

func (*API) IndexShardSnapshot

func (api *API) IndexShardSnapshot(ctx context.Context, indexName string, shard uint64, writeTx bool) (io.ReadCloser, error)

IndexShardSnapshot returns a reader that contains the contents of an RBF snapshot for an index/shard. When snapshotting for serverless, we need to be able to transactionally move the write log to the new version, so we expose writeTx to allow the caller to request a write transaction for the snapshot even though we'll just be reading inside RBF.

func (*API) Info

func (api *API) Info() serverInfo

Info returns information about this server instance.

func (*API) LongQueryTime

func (api *API) LongQueryTime() time.Duration

LongQueryTime returns the configured threshold for logging/statting long running queries.

func (*API) MatchField

func (api *API) MatchField(ctx context.Context, index, field string, like string) ([]uint64, error)

MatchField finds the IDs of all field keys matching a filter.

func (*API) MaxShards

func (api *API) MaxShards(ctx context.Context) map[string]uint64

MaxShards returns the maximum shard number for each index in a map. TODO (2.0): This method has been deprecated. Instead, use AvailableShardsByIndex.

func (*API) MutexCheck

func (api *API) MutexCheck(ctx context.Context, qcx *Qcx, indexName string, fieldName string, details bool, limit int) (result interface{}, err error)

MutexCheck checks a named field for mutex violations, returning a map of record IDs to values for records that have multiple values in the field. The return will be one of:

details true:
map[uint64][]uint64 // unkeyed index, unkeyed field
map[uint64][]string // unkeyed index, keyed field
map[string][]uint64 // keyed index, unkeyed field
map[string][]string // keyed index, keyed field
details false:
[]uint64            // unkeyed index
[]string            // keyed index
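
A sketch of consuming one of these result shapes (assumes an unkeyed index and field with details true; the qcx, names, and limit value are illustrative):

func exampleMutexCheck(ctx context.Context, api *API, qcx *Qcx) error {
	result, err := api.MutexCheck(ctx, qcx, "repository", "language", true, 0)
	if err != nil {
		return err
	}
	// Unkeyed index, unkeyed field, details=true:
	if violations, ok := result.(map[uint64][]uint64); ok {
		for record, values := range violations {
			_ = record // record ID with more than one value
			_ = values // the conflicting values
		}
	}
	return nil
}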

func (*API) MutexCheckNode

func (api *API) MutexCheckNode(ctx context.Context, qcx *Qcx, indexName string, fieldName string, details bool, limit int) (map[uint64]map[uint64][]uint64, error)

MutexCheckNode checks for collisions in a given mutex field. The response is a map[shard]map[column]values, not translated.

func (*API) Node

func (api *API) Node() *disco.Node

Node gets the ID, URI and primary status for this particular node.

func (*API) NodeID

func (api *API) NodeID() string

NodeID returns the node's ID alone, so callers don't have to do a complete lookup of the node just to get back the ID they searched by.

func (*API) PartitionNodes

func (api *API) PartitionNodes(ctx context.Context, partitionID int) ([]*disco.Node, error)

PartitionNodes returns the node and all replicas which should contain the data for a partition key.

func (*API) PastQueries

func (api *API) PastQueries(ctx context.Context, remote bool) ([]PastQueryStatus, error)

func (*API) PrimaryNode

func (api *API) PrimaryNode() *disco.Node

PrimaryNode returns the primary node for the cluster.

func (*API) PrimaryReplicaNodeURL

func (api *API) PrimaryReplicaNodeURL() url.URL

PrimaryReplicaNodeURL returns the URL of the cluster's primary replica.

func (*API) Query

func (api *API) Query(ctx context.Context, req *QueryRequest) (QueryResponse, error)

Query parses a PQL query out of the request and executes it.
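
A minimal sketch of executing PQL through the API (assumes QueryRequest exposes Index and Query fields, as used by the HTTP handler; the index name and query are illustrative):

func exampleQuery(ctx context.Context, api *API) (QueryResponse, error) {
	return api.Query(ctx, &QueryRequest{
		Index: "repository",
		Query: "Row(language=5)",
	})
}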

func (*API) RBFDebugInfo

func (api *API) RBFDebugInfo() map[string]*rbf.DebugInfo

func (*API) RecalculateCaches

func (api *API) RecalculateCaches(ctx context.Context) error

RecalculateCaches forces all TopN caches to be updated. This is done internally within a TopN query, but a user may want to do it ahead of time.

func (*API) RehydratePlanOperator added in v3.29.0

func (api *API) RehydratePlanOperator(ctx context.Context, reader io.Reader) (planner_types.PlanOperator, error)

func (*API) ReserveIDs

func (api *API) ReserveIDs(key IDAllocKey, session [32]byte, offset uint64, count uint64) ([]IDRange, error)

func (*API) ResetIDAlloc

func (api *API) ResetIDAlloc(index string) error

func (*API) RestoreIDAlloc

func (api *API) RestoreIDAlloc(r io.Reader) error

func (*API) RestoreShard

func (api *API) RestoreShard(ctx context.Context, indexName string, shard uint64, rd io.Reader) error

RestoreShard is used by the restore tool to restore previously backed up data. This call is specific to RBF data for a shard.

func (*API) Schema

func (api *API) Schema(ctx context.Context, withViews bool) ([]*IndexInfo, error)

Schema returns information about each index in Pilosa including which fields they contain.

func (*API) SetAPIOptions

func (api *API) SetAPIOptions(opts ...apiOption) error

SetAPIOptions applies the given functional options to the API.

func (*API) ShardDistribution

func (api *API) ShardDistribution(ctx context.Context) map[string]interface{}

ShardDistribution returns an object representing the distribution of shards across nodes for each index, distinguishing between primary and replica. The structure of this information is [indexName][nodeID][primaryOrReplica][]uint64. This function supports a view in the UI.

func (*API) ShardNodes

func (api *API) ShardNodes(ctx context.Context, indexName string, shard uint64) ([]*disco.Node, error)

ShardNodes returns the node and all replicas which should contain a shard's data.

func (*API) SnapshotFieldKeys

func (api *API) SnapshotFieldKeys(ctx context.Context, req *dax.SnapshotFieldKeysRequest) error

SnapshotFieldKeys triggers the node to perform a field keys snapshot based on the provided SnapshotFieldKeysRequest.

func (*API) SnapshotShardData

func (api *API) SnapshotShardData(ctx context.Context, req *dax.SnapshotShardDataRequest) error

SnapshotShardData triggers the node to perform a shard snapshot based on the provided SnapshotShardDataRequest.

func (*API) SnapshotTableKeys

func (api *API) SnapshotTableKeys(ctx context.Context, req *dax.SnapshotTableKeysRequest) error

SnapshotTableKeys triggers the node to perform a table keys snapshot based on the provided SnapshotTableKeysRequest.

func (*API) StartTransaction

func (api *API) StartTransaction(ctx context.Context, id string, timeout time.Duration, exclusive bool, remote bool) (*Transaction, error)

func (*API) State

func (api *API) State() (disco.ClusterState, error)

State returns the cluster state which is usually "NORMAL", but could be "STARTING", or potentially others. See disco.go for more details.

func (*API) Transactions

func (api *API) Transactions(ctx context.Context) (map[string]*Transaction, error)

func (*API) TranslateData

func (api *API) TranslateData(ctx context.Context, indexName string, partition int) (TranslateStore, error)

TranslateData returns all translation data in the specified partition.

func (*API) TranslateFieldDB

func (api *API) TranslateFieldDB(ctx context.Context, indexName, fieldName string, rd io.Reader) error

TranslateFieldDB is an internal function to load the field keys database.

func (*API) TranslateIDs

func (api *API) TranslateIDs(ctx context.Context, r io.Reader) (_ []byte, err error)

TranslateIDs handles a TranslateIDRequest.

func (*API) TranslateIndexDB

func (api *API) TranslateIndexDB(ctx context.Context, indexName string, partitionID int, rd io.Reader) error

TranslateIndexDB is an internal function to load the index keys database; rd is a boltdb file.

func (*API) TranslateIndexIDs

func (api *API) TranslateIndexIDs(ctx context.Context, indexName string, ids []uint64) ([]string, error)

func (*API) TranslateIndexKey

func (api *API) TranslateIndexKey(ctx context.Context, indexName string, key string, writable bool) (uint64, error)

func (*API) TranslateKeys

func (api *API) TranslateKeys(ctx context.Context, r io.Reader) (_ []byte, err error)

TranslateKeys handles a TranslateKeyRequest. An ErrTranslatingKeyNotFound error will be swallowed here, so an empty response will be returned.

func (*API) Txf

func (api *API) Txf() *TxFactory

func (*API) UpdateField

func (api *API) UpdateField(ctx context.Context, indexName, fieldName string, update FieldUpdate) error

func (*API) Version

func (api *API) Version() string

Version returns the Pilosa version.

func (*API) Views

func (api *API) Views(ctx context.Context, indexName string, fieldName string) ([]*view, error)

Views returns the views in the given field.

func (*API) WriteIDAllocDataTo

func (api *API) WriteIDAllocDataTo(w io.Writer) error

type ActiveQueryStatus

type ActiveQueryStatus struct {
	PQL   string        `json:"PQL"`
	SQL   string        `json:"SQL,omitempty"`
	Node  string        `json:"node"`
	Index string        `json:"index"`
	Age   time.Duration `json:"age"`
}

type ApplyResult

type ApplyResult *arrow.Column

type AtomicRecord

type AtomicRecord struct {
	Index string
	Shard uint64

	Ivr []*ImportValueRequest // BSI values
	Ir  []*ImportRequest      // other field types, e.g. single bit
}

AtomicRecord applies all its Ivr and Ir atomically, in a Tx. The top-level Shard has to agree with Ivr[i].Shard and Ir[i].Shard for all i included (in Ivr and Ir). The same goes for the top-level Index: all records have to be writes to the same Index. These requirements are checked.

func (*AtomicRecord) Clone

func (ar *AtomicRecord) Clone() *AtomicRecord

type BSIData

type BSIData []*Row

BSIData contains BSI-structured data.

func AddBSI

func AddBSI(x, y BSIData) BSIData

AddBSI adds two BSI bitmaps together. It does not handle sign and has no concept of overflow.

func (BSIData) PivotDescending

func (bsi BSIData) PivotDescending(filter *Row, branch uint64, limit, offset *uint64, fn func(uint64, ...uint64))

PivotDescending loops over nonzero BSI values in descending order. For each value, the provided function is called with the value and a slice of the associated columns. If limit or offset are non-nil, they will be applied. Applying a limit or offset may modify the pointed-to value.
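
A sketch of iterating values with PivotDescending (assumes bsi BSIData and filter *Row are populated elsewhere; the branch, limit, and offset values are illustrative):

func examplePivot(bsi BSIData, filter *Row) {
	limit := uint64(10) // may be decremented by the call
	bsi.PivotDescending(filter, 0, &limit, nil, func(value uint64, columns ...uint64) {
		_ = value   // a nonzero BSI value, in descending order
		_ = columns // the columns holding that value
	})
}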

type BadRequestError

type BadRequestError struct {
	// contains filtered or unexported fields
}

BadRequestError wraps an error value to signify that a request could not be read, decoded, or parsed such that in an HTTP scenario, http.StatusBadRequest would be returned.

func NewBadRequestError

func NewBadRequestError(err error) BadRequestError

NewBadRequestError returns err wrapped in a BadRequestError.

type BasicTable added in v3.30.0

type BasicTable struct {
	// contains filtered or unexported fields
}

func BasicTableFromArrow

func BasicTableFromArrow(table arrow.Table, mem memory.Allocator) *BasicTable

func (*BasicTable) Column added in v3.30.0

func (st *BasicTable) Column(i int) *arrow.Column

func (*BasicTable) Get added in v3.30.0

func (st *BasicTable) Get(column, row int) interface{}

func (*BasicTable) IsFiltered added in v3.30.0

func (st *BasicTable) IsFiltered() bool

func (*BasicTable) MarshalJSON added in v3.30.0

func (st *BasicTable) MarshalJSON() ([]byte, error)

func (*BasicTable) Name added in v3.30.0

func (st *BasicTable) Name() string

func (*BasicTable) NumCols added in v3.30.0

func (st *BasicTable) NumCols() int64

func (*BasicTable) NumRows added in v3.30.0

func (st *BasicTable) NumRows() int64

func (*BasicTable) Release added in v3.30.0

func (st *BasicTable) Release()

func (*BasicTable) Retain added in v3.30.0

func (st *BasicTable) Retain()

func (*BasicTable) Schema added in v3.30.0

func (st *BasicTable) Schema() *arrow.Schema

type Bit

type Bit struct {
	RowID     uint64
	ColumnID  uint64
	RowKey    string
	ColumnKey string
	Timestamp int64
}

Bit represents the intersection of a row and a column. It can be specified by integer ids or string keys.

type Bits

type Bits []Bit

Bits is a slice of Bit.

func (Bits) ColumnIDs

func (p Bits) ColumnIDs() []uint64

ColumnIDs returns a slice of all the column IDs.

func (Bits) ColumnKeys

func (p Bits) ColumnKeys() []string

ColumnKeys returns a slice of all the column keys.

func (Bits) GroupByShard

func (p Bits) GroupByShard() map[uint64][]Bit

GroupByShard returns a map of bits by shard.

func (Bits) HasColumnKeys

func (p Bits) HasColumnKeys() bool

HasColumnKeys returns true if any values use a column key.

func (Bits) HasRowKeys

func (p Bits) HasRowKeys() bool

HasRowKeys returns true if any values use a row key.

func (Bits) Len

func (p Bits) Len() int

func (Bits) Less

func (p Bits) Less(i, j int) bool

func (Bits) RowIDs

func (p Bits) RowIDs() []uint64

RowIDs returns a slice of all the row IDs.

func (Bits) RowKeys

func (p Bits) RowKeys() []string

RowKeys returns a slice of all the row keys.

func (Bits) Swap

func (p Bits) Swap(i, j int)

func (Bits) Timestamps

func (p Bits) Timestamps() []int64

Timestamps returns a slice of all the timestamps.

type BitsByPos

type BitsByPos []Bit

BitsByPos is a slice of bits sorted row then column.
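
Because BitsByPos implements sort.Interface (Len, Less, Swap below), sorting is the standard library call:

import "sort"

func sortBitsByPosition(bits []Bit) {
	sort.Sort(BitsByPos(bits)) // orders by row, then column
}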

func (BitsByPos) Len

func (p BitsByPos) Len() int

func (BitsByPos) Less

func (p BitsByPos) Less(i, j int) bool

func (BitsByPos) Swap

func (p BitsByPos) Swap(i, j int)

type BlockDataRequest

type BlockDataRequest struct {
	Index string
	Field string
	View  string
	Shard uint64
	Block uint64
}

BlockDataRequest describes the structure of a request for fragment block data.

type BlockDataResponse

type BlockDataResponse struct {
	RowIDs    []uint64
	ColumnIDs []uint64
}

BlockDataResponse is the structured response of a block data request.

type BoltInMemTranslateStore added in v3.27.0

type BoltInMemTranslateStore struct {
	*BoltTranslateStore
}

func (*BoltInMemTranslateStore) Close added in v3.27.0

func (b *BoltInMemTranslateStore) Close() error

type BoltTranslateEntryReader added in v3.27.0

type BoltTranslateEntryReader struct {
	// contains filtered or unexported fields
}

func (*BoltTranslateEntryReader) Close added in v3.27.0

func (r *BoltTranslateEntryReader) Close() error

Close closes the reader.

func (*BoltTranslateEntryReader) ReadEntry added in v3.27.0

func (r *BoltTranslateEntryReader) ReadEntry(entry *TranslateEntry) error

ReadEntry reads the next entry from the underlying translate store.

type BoltTranslateStore added in v3.27.0

type BoltTranslateStore struct {

	// File path to database file.
	Path string
	// contains filtered or unexported fields
}

BoltTranslateStore is an on-disk storage engine for translating string-to-uint64 values. An empty string will be converted into the sentinel byte slice:

var emptyKey = []byte{
	0x00, 0x00, 0x00,
	0x4d, 0x54, 0x4d, 0x54, // MTMT
	0x00,
	0xc2, 0xa0, // NO-BREAK SPACE
	0x00,
}

func NewBoltTranslateStore added in v3.27.0

func NewBoltTranslateStore(index, field string, partitionID, partitionN int, fsyncEnabled bool) *BoltTranslateStore

NewBoltTranslateStore returns a new instance of TranslateStore.

func (*BoltTranslateStore) Begin added in v3.27.0

func (s *BoltTranslateStore) Begin(write bool) (TranslatorTx, error)

Begin starts and returns a transaction on the underlying store.

func (*BoltTranslateStore) Close added in v3.27.0

func (s *BoltTranslateStore) Close() (err error)

Close closes the underlying database.

func (*BoltTranslateStore) CreateKeys added in v3.27.0

func (s *BoltTranslateStore) CreateKeys(keys ...string) (map[string]uint64, error)

CreateKeys maps all keys to IDs, creating the IDs if they do not exist. If the translator is read-only, this will return an error.
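
A minimal lifecycle sketch (assumes Path must be set before Open, per the struct above; the path and partition values are illustrative):

func exampleBoltTranslateStore() error {
	s := NewBoltTranslateStore("repository", "language", 0, 256, true)
	s.Path = "/tmp/language.translate.boltdb" // hypothetical location
	if err := s.Open(); err != nil {
		return err
	}
	defer s.Close()
	ids, err := s.CreateKeys("go", "rust") // creates missing keys
	if err != nil {
		return err
	}
	_ = ids
	found, err := s.FindKeys("go", "zig") // "zig" absent unless created
	_ = found
	return err
}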

func (*BoltTranslateStore) Delete added in v3.27.0

func (s *BoltTranslateStore) Delete(records *roaring.Bitmap) (Commitor, error)

Delete removes the lookup pairs in order to make them available for reuse, but doesn't commit the transaction; that commit is tied to the associated RBF transaction being successful.

func (*BoltTranslateStore) EntryReader added in v3.27.0

func (s *BoltTranslateStore) EntryReader(ctx context.Context, offset uint64) (TranslateEntryReader, error)

EntryReader returns a reader that streams the underlying data file.

func (*BoltTranslateStore) FindKeys added in v3.27.0

func (s *BoltTranslateStore) FindKeys(keys ...string) (map[string]uint64, error)

FindKeys looks up the ID for each key. Keys are not created if they do not exist. Missing keys are not considered errors, so the length of the result may be less than that of the input.

func (*BoltTranslateStore) ForceSet added in v3.27.0

func (s *BoltTranslateStore) ForceSet(id uint64, key string) error

ForceSet writes the id/key pair to the store even if read only. Used by replication.

func (*BoltTranslateStore) FreeIDs added in v3.27.0

func (s *BoltTranslateStore) FreeIDs() (*roaring.Bitmap, error)

func (*BoltTranslateStore) Match added in v3.27.0

func (s *BoltTranslateStore) Match(filter func([]byte) bool) ([]uint64, error)

Match finds the IDs of all keys matching a filter.

func (*BoltTranslateStore) MaxID added in v3.27.0

func (s *BoltTranslateStore) MaxID() (max uint64, err error)

MaxID returns the highest id in the store.

func (*BoltTranslateStore) MergeFree added in v3.27.0

func (s *BoltTranslateStore) MergeFree(tx *bolt.Tx, newIDs *roaring.Bitmap) error

func (*BoltTranslateStore) Open added in v3.27.0

func (s *BoltTranslateStore) Open() (err error)

Open opens the translate file.

func (*BoltTranslateStore) PartitionID added in v3.27.0

func (s *BoltTranslateStore) PartitionID() int

PartitionID returns the partition id the store was initialized with.

func (*BoltTranslateStore) ReadFrom added in v3.27.0

func (s *BoltTranslateStore) ReadFrom(r io.Reader) (n int64, err error)

ReadFrom reads the content and overwrites the existing store.

func (*BoltTranslateStore) ReadOnly added in v3.27.0

func (s *BoltTranslateStore) ReadOnly() bool

ReadOnly returns true if the store is in read-only mode.

func (*BoltTranslateStore) SetReadOnly added in v3.27.0

func (s *BoltTranslateStore) SetReadOnly(v bool)

SetReadOnly toggles whether the store is in read-only mode.

func (*BoltTranslateStore) Size added in v3.27.0

func (s *BoltTranslateStore) Size() int64

Size returns the number of bytes in the data file.

func (*BoltTranslateStore) TranslateID added in v3.27.0

func (s *BoltTranslateStore) TranslateID(id uint64) (string, error)

TranslateID converts an integer ID to a string key. Returns a blank string if ID does not exist.

func (*BoltTranslateStore) TranslateIDs added in v3.27.0

func (s *BoltTranslateStore) TranslateIDs(ids []uint64) ([]string, error)

TranslateIDs converts a list of integer IDs to a list of string keys.

func (*BoltTranslateStore) WriteNotify added in v3.27.0

func (s *BoltTranslateStore) WriteNotify() <-chan struct{}

WriteNotify returns a channel that is closed when a new entry is written.

type ChangesetRequest

type ChangesetRequest struct {
	ShardIds     []int64 // only shardwidth bits to provide 0 indexing inside shard file
	Columns      []interface{}
	SimpleSchema []NameType
}

func (*ChangesetRequest) ArrowSchema

func (cr *ChangesetRequest) ArrowSchema() *arrow.Schema

type ClientOption

type ClientOption func(client *http.Client, dialer *net.Dialer) *http.Client

func ClientDialTimeoutOption

func ClientDialTimeoutOption(dur time.Duration) ClientOption

func ClientResponseHeaderTimeoutOption

func ClientResponseHeaderTimeoutOption(dur time.Duration) ClientOption

type ClusterNode

type ClusterNode struct {
	ID        string
	Type      string
	State     string
	URI       string
	GRPCURI   string
	IsPrimary bool
}

type ClusterStatus

type ClusterStatus struct {
	ClusterID string
	State     string
	Nodes     []*disco.Node
	Schema    *Schema
}

ClusterStatus describes the status of the cluster including its state and node topology.

type Commitor

type Commitor interface {
	Rollback()
	Commit() error
}

type ConflictError

type ConflictError struct {
	// contains filtered or unexported fields
}

ConflictError wraps an error value to signify that a conflict with an existing resource occurred such that in an HTTP scenario, http.StatusConflict would be returned.

func (ConflictError) Unwrap

func (c ConflictError) Unwrap() error

Unwrap makes it so that a ConflictError wrapping ErrFieldExists returns true from errors.Is(err, ErrFieldExists).
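
For example, callers can test the wrapped sentinel directly (a sketch; assumes err is a ConflictError wrapping ErrFieldExists, as described above):

import "errors"

func fieldExists(err error) bool {
	return errors.Is(err, ErrFieldExists) // true even through ConflictError
}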

type CreateFieldMessage

type CreateFieldMessage struct {
	Index     string
	Field     string
	CreatedAt int64
	Owner     string
	Meta      *FieldOptions
}

CreateFieldMessage is an internal message indicating field creation.

type CreateFieldObj

type CreateFieldObj struct {
	Name    string
	Options []FieldOption
}

CreateFieldObj is used to encapsulate the information required for creating a field in the SchemaAPI.CreateIndexAndFields interface method.

type CreateIndexMessage

type CreateIndexMessage struct {
	Index     string
	CreatedAt int64
	Owner     string

	Meta IndexOptions
}

CreateIndexMessage is an internal message indicating index creation.

type CreateShardMessage

type CreateShardMessage struct {
	Index string
	Field string
	Shard uint64
}

CreateShardMessage is an internal message indicating shard creation.

type CreateViewMessage

type CreateViewMessage struct {
	Index string
	Field string
	View  string
}

CreateViewMessage is an internal message indicating view creation.

type DBHolder

type DBHolder struct {
	Index map[string]*DBIndex
}

func NewDBHolder

func NewDBHolder() *DBHolder

type DBIndex

type DBIndex struct {
	Shard map[uint64]*DBShard
}

type DBPerShard

type DBPerShard struct {
	Mu sync.Mutex

	HolderDir string

	// just flat, not buried within the Node hierarchy.
	// Easily see how many we have.
	Flatmap map[flatkey]*DBShard

	StorageConfig *storage.Config
	RBFConfig     *rbfcfg.Config
	// contains filtered or unexported fields
}

func (*DBPerShard) Close

func (per *DBPerShard) Close() (err error)

func (*DBPerShard) DeleteFieldFromStore

func (per *DBPerShard) DeleteFieldFromStore(index, field, fieldPath string) (err error)

func (*DBPerShard) DeleteFragment

func (per *DBPerShard) DeleteFragment(index, field, view string, shard uint64, frag *fragment) error

func (*DBPerShard) DeleteIndex

func (per *DBPerShard) DeleteIndex(index string) (err error)

func (*DBPerShard) GetDBShard

func (per *DBPerShard) GetDBShard(index string, shard uint64, idx *Index) (dbs *DBShard, err error)

func (*DBPerShard) GetFieldView2ShardsMapForIndex

func (per *DBPerShard) GetFieldView2ShardsMapForIndex(idx *Index) (vs *FieldView2Shards, err error)

func (*DBPerShard) LoadExistingDBs

func (per *DBPerShard) LoadExistingDBs() (err error)

func (*DBPerShard) TypedDBPerShardGetShardsForIndex

func (per *DBPerShard) TypedDBPerShardGetShardsForIndex(ty txtype, idx *Index, roaringViewPath string, requireData bool) (shardMap map[uint64]struct{}, err error)

requireData means open the database file and verify that at least one key is set. The returned shard map should not be modified; we cache it for subsequent queries.

When a new DBShard is made, we update the list of shards at that point. Thus per.index2shard should always be up to date after the first call here.

type DBRegistry

type DBRegistry interface {
	OpenDBWrapper(path string, doAllocZero bool, cfg *storage.Config) (DBWrapper, error)
}

type DBShard

type DBShard struct {
	HolderPath string

	Index string
	Shard uint64
	Open  bool

	W             DBWrapper
	ParentDBIndex *DBIndex
	// contains filtered or unexported fields
}

func (*DBShard) AllFieldViews

func (dbs *DBShard) AllFieldViews() (fvs []txkey.FieldView, err error)

func (*DBShard) Close

func (dbs *DBShard) Close() (err error)

func (*DBShard) DeleteFieldFromStore

func (dbs *DBShard) DeleteFieldFromStore(index, field, fieldPath string) (err error)

func (*DBShard) DeleteFragment

func (dbs *DBShard) DeleteFragment(index, field, view string, shard uint64, frag interface{}) (err error)

func (*DBShard) NewTx

func (dbs *DBShard) NewTx(write bool, initialIndexName string, o Txo) (tx Tx, err error)

type DBWrapper

type DBWrapper interface {
	NewTx(write bool, initialIndexName string, o Txo) (tx Tx, err error)
	Close() error
	DeleteFragment(index, field, view string, shard uint64, frag interface{}) error
	DeleteField(index, field, fieldPath string) error
	OpenListString() string
	Path() string
	HasData() (has bool, err error)
	SetHolder(h *Holder)
	//needed for restore
	CloseDB() error
	OpenDB() error
}

type DeleteAvailableShardMessage

type DeleteAvailableShardMessage struct {
	Index   string
	Field   string
	ShardID uint64
}

DeleteAvailableShardMessage is an internal message indicating available shard deletion.

type DeleteDataframeMessage

type DeleteDataframeMessage struct {
	Index string
}

DeleteDataframeMessage is an internal message indicating dataframe deletion.

type DeleteFieldMessage

type DeleteFieldMessage struct {
	Index string
	Field string
}

DeleteFieldMessage is an internal message indicating field deletion.

type DeleteIndexMessage

type DeleteIndexMessage struct {
	Index string
}

DeleteIndexMessage is an internal message indicating index deletion.

type DeleteViewMessage

type DeleteViewMessage struct {
	Index string
	Field string
	View  string
}

DeleteViewMessage is an internal message indicating view deletion.

type DiskUsage

type DiskUsage struct {
	Usage int64 `json:"usage"`
}

func GetDiskUsage

func GetDiskUsage(path string) (DiskUsage, error)

GetDiskUsage gets the disk usage of the given path.

type DistinctTimestamp

type DistinctTimestamp struct {
	Values []string
	Name   string
}

func (DistinctTimestamp) ToRows

func (d DistinctTimestamp) ToRows(callback func(*proto.RowResponse) error) error

ToRows implements the ToRowser interface.

func (DistinctTimestamp) ToTable

func (d DistinctTimestamp) ToTable() (*proto.TableResponse, error)

ToTable implements the ToTabler interface for DistinctTimestamp.

func (*DistinctTimestamp) Union

Union returns the union of the values of `d` and `other`.

type Error

type Error string

func (Error) Error

func (e Error) Error() string

type ExecOptions

type ExecOptions struct {
	Remote        bool
	Profile       bool
	PreTranslated bool
	EmbeddedData  []*Row
	MaxMemory     int64
}

ExecOptions represents an execution context for a single Execute() call.

type ExecutionPlannerFn

type ExecutionPlannerFn func(executor Executor, api *API, sql string) sql3.CompilePlanner

type ExecutionRequest

type ExecutionRequest struct {
	// the id of the request
	RequestID string
	// the id of the user
	UserID string
	// time the request started
	StartTime time.Time
	// time the request finished - zero if it has not finished
	EndTime time.Time
	// status of the request 'running' or 'complete' now, could have other values later
	Status string
	// future: if the request is waiting, the type of wait that is occurring
	WaitType string
	// future: the cumulative wait time for this request
	WaitTime time.Duration
	// future: if the request is waiting, the thing it is waiting on
	WaitResource string
	// future: the cumulative cpu time for this request
	CPUTime time.Duration
	// the elapsed time for this request
	ElapsedTime time.Duration
	// future: the cumulative number of physical reads for this request
	Reads int64
	// future: the cumulative number of physical writes for this request
	Writes int64
	// future: the cumulative number of logical reads for this request
	LogicalReads int64
	// future: the cumulative number of rows affected for this request
	RowCount int64
	// the query plan for this request formatted in json
	Plan string
	// the sql for this request
	SQL string
}

ExecutionRequest holds data about an (sql) execution request

func (*ExecutionRequest) Copy

Copy returns a copy of the ExecutionRequest passed.

type ExecutionRequestsAPI

type ExecutionRequestsAPI interface {
	// add a request
	AddRequest(requestID string, userID string, startTime time.Time, sql string) error

	// update a request
	UpdateRequest(requestID string,
		endTime time.Time,
		status string,
		waitType string,
		waitTime time.Duration,
		waitResource string,
		cpuTime time.Duration,
		reads int64,
		writes int64,
		logicalReads int64,
		rowCount int64,
		plan string) error

	// list all the requests
	ListRequests() ([]ExecutionRequest, error)

	// get a specific request
	GetRequest(requestID string) (ExecutionRequest, error)
}

ExecutionRequestsAPI defines the API for storing, updating and querying internal state around (sql) execution requests

type Executor

type Executor interface {
	Execute(context.Context, dax.TableKeyer, *pql.Query, []uint64, *ExecOptions) (QueryResponse, error)
}

type ExtractedIDColumn

type ExtractedIDColumn struct {
	ColumnID uint64
	Rows     [][]uint64
}

type ExtractedIDMatrix

type ExtractedIDMatrix struct {
	Fields  []string
	Columns []ExtractedIDColumn
}

func (*ExtractedIDMatrix) Append

func (e *ExtractedIDMatrix) Append(m ExtractedIDMatrix)

type ExtractedIDMatrixSorted

type ExtractedIDMatrixSorted struct {
	ExtractedIDMatrix *ExtractedIDMatrix
	RowKVs            []RowKV
}

func MergeExtractedIDMatrixSorted

func MergeExtractedIDMatrixSorted(a, b ExtractedIDMatrixSorted, sort_desc bool) (ExtractedIDMatrixSorted, error)

type ExtractedTable

type ExtractedTable struct {
	Fields  []ExtractedTableField  `json:"fields"`
	Columns []ExtractedTableColumn `json:"columns"`
}

func (ExtractedTable) ToRows

func (t ExtractedTable) ToRows(callback func(*proto.RowResponse) error) error

ToRows implements the ToRowser interface.

func (ExtractedTable) ToTable

func (t ExtractedTable) ToTable() (*proto.TableResponse, error)

ToTable converts the table to protobuf format.

type ExtractedTableColumn

type ExtractedTableColumn struct {
	Column KeyOrID       `json:"column"`
	Rows   []interface{} `json:"rows"`
}

type ExtractedTableField

type ExtractedTableField struct {
	Name string `json:"name"`
	Type string `json:"type"`
}

type FeatureBaseSystemAPI

type FeatureBaseSystemAPI struct {
	*API
}

FeatureBaseSystemAPI is a wrapper around pilosa.API. It implements the SystemAPI interface.

func (*FeatureBaseSystemAPI) ClusterName

func (fsapi *FeatureBaseSystemAPI) ClusterName() string

func (*FeatureBaseSystemAPI) ClusterNodeCount

func (fsapi *FeatureBaseSystemAPI) ClusterNodeCount() int

func (*FeatureBaseSystemAPI) ClusterNodes

func (fsapi *FeatureBaseSystemAPI) ClusterNodes() []ClusterNode

func (*FeatureBaseSystemAPI) ClusterReplicaCount

func (fsapi *FeatureBaseSystemAPI) ClusterReplicaCount() int

func (*FeatureBaseSystemAPI) ClusterState

func (fsapi *FeatureBaseSystemAPI) ClusterState() string

func (*FeatureBaseSystemAPI) DataDir added in v3.27.0

func (fsapi *FeatureBaseSystemAPI) DataDir() string

func (*FeatureBaseSystemAPI) NodeID added in v3.29.0

func (fsapi *FeatureBaseSystemAPI) NodeID() string

func (*FeatureBaseSystemAPI) PlatformDescription

func (fsapi *FeatureBaseSystemAPI) PlatformDescription() string

func (*FeatureBaseSystemAPI) PlatformVersion

func (fsapi *FeatureBaseSystemAPI) PlatformVersion() string

func (*FeatureBaseSystemAPI) ShardWidth

func (fsapi *FeatureBaseSystemAPI) ShardWidth() int

func (*FeatureBaseSystemAPI) Version

func (fsapi *FeatureBaseSystemAPI) Version() string

type Field

type Field struct {

	// Instantiates new translation stores
	OpenTranslateStore OpenTranslateStoreFunc
	// contains filtered or unexported fields
}

Field represents a container for views.

func (*Field) AddRemoteAvailableShards

func (f *Field) AddRemoteAvailableShards(b *roaring.Bitmap) error

AddRemoteAvailableShards merges the set of available shards into the current known set and saves the set to a file.

func (*Field) AvailableShards

func (f *Field) AvailableShards(localOnly bool) *roaring.Bitmap

AvailableShards returns a bitmap of shards that contain data.

func (*Field) CacheSize

func (f *Field) CacheSize() uint32

CacheSize returns the ranked field cache size.

func (*Field) ClearBit

func (f *Field) ClearBit(qcx *Qcx, rowID, colID uint64) (changed bool, err error)

ClearBit clears a bit within the field.

This does not, for now, create existence bits for the field, because it doesn't create them for the index.

func (*Field) ClearValue

func (f *Field) ClearValue(qcx *Qcx, columnID uint64) (changed bool, err error)

ClearValue removes a field value for a column.

func (*Field) Close

func (f *Field) Close() error

Close closes the field and its views.

func (*Field) CreatedAt

func (f *Field) CreatedAt() int64

CreatedAt is a timestamp for a specific version of the field.

func (*Field) Existing added in v3.35.0

func (f *Field) Existing(tx Tx, shard uint64) (*Row, error)

Existing returns the existence row for this field, which comes from either the BSI view or the existence view.

func (*Field) ForeignIndex

func (f *Field) ForeignIndex() string

ForeignIndex returns the foreign index name attached to the field. Returns blank string if no foreign index exists.

func (*Field) GetIndex

func (f *Field) GetIndex() *Index

func (*Field) Import

func (f *Field) Import(qcx *Qcx, rowIDs, columnIDs []uint64, timestamps []int64, shard uint64, options *ImportOptions) (err0 error)

Import bulk imports data.

func (*Field) Index

func (f *Field) Index() string

Index returns the index name the field was initialized with.

func (*Field) Keys

func (f *Field) Keys() bool

Keys returns true if the field uses string keys.

func (*Field) LocalAvailableShards

func (f *Field) LocalAvailableShards() *roaring.Bitmap

LocalAvailableShards returns a bitmap of shards that contain data, but only from the local node. This prevents txfactory from making db-per-shard for remote shards.

func (*Field) MarkExisting added in v3.35.0

func (f *Field) MarkExisting(tx Tx, columnIDs []uint64, shard uint64) error

MarkExisting sets a range of column IDs as existing. The columnIDs are assumed to include the shard offset, but this will also work if they are shard-relative, as it's just stripping the offset.

Positions aren't the same as column IDs; this function takes advantage of the fact that we're always doing row 0, so we don't have to think hard about this. It doesn't overwrite its input because the column IDs could be reused by other things.

Note that this is subtly inefficient; if you're tracking existence for a field, we're computing the same column ID set to write to the index's existence field as we're using for the field's existence view. We don't have a good way to coalesce those, yet. (Also, that's not accurate in the ImportValue case, where we don't write to the existence view, etc.)
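
Since existence bits always live in row 0, the position math from ImportRoaring collapses to just the in-shard offset, e.g. (a sketch of the arithmetic, not the method's actual implementation):

func existencePosition(columnID uint64) uint64 {
	// row 0: position = 0*ShardWidth + (columnID % ShardWidth)
	return columnID % ShardWidth
}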

func (*Field) MarkNotExisting added in v3.35.0

func (f *Field) MarkNotExisting(tx Tx, columnIDs []uint64, shard uint64) error

MarkNotExisting is just like MarkExisting, except it is clearing bits, so it doesn't have to create the view or fragment if it doesn't exist. Because the bits reported to us in the case we wrote this for are likely to be sorted by position in the fragment, not by column ID, we sort the list after stripping the rows from the positions.

func (*Field) MaxForShard

func (f *Field) MaxForShard(qcx *Qcx, shard uint64, filter *Row) (ValCount, error)

func (*Field) MinForShard

func (f *Field) MinForShard(qcx *Qcx, shard uint64, filter *Row) (ValCount, error)

MinForShard returns the minimum value which appears in this shard (this field must be an Int or Decimal field). It also returns the number of times the minimum value appears.

func (*Field) MutexCheck

func (f *Field) MutexCheck(ctx context.Context, qcx *Qcx, details bool, limit int) (map[uint64]map[uint64][]uint64, error)

MutexCheck performs a sanity check on the available fragments for a field. The return is map[column]map[shard][]values for collisions only.

func (*Field) Name

func (f *Field) Name() string

Name returns the name the field was initialized with.

func (*Field) Open

func (f *Field) Open() error

Open opens and initializes the field.

func (*Field) Options

func (f *Field) Options() FieldOptions

Options returns all options for this field.

func (*Field) Path

func (f *Field) Path() string

Path returns the path the field was initialized with.

func (*Field) Range

func (f *Field) Range(qcx *Qcx, name string, op pql.Token, predicate int64) (*Row, error)

Range performs a conditional operation on Field.

func (*Field) RemoveAvailableShard

func (f *Field) RemoveAvailableShard(v uint64) error

RemoveAvailableShard removes a shard from the bitmap cache.

NOTE: This can be overridden on the next sync so all nodes should be updated.

func (*Field) Row

func (f *Field) Row(qcx *Qcx, rowID uint64) (*Row, error)

Row returns a row of the standard view. It seems this method is only being used by the test package, and the fact that it's only allowed on `set`, `mutex`, and `bool` fields is odd. This may be considered for deprecation in a future version.

func (*Field) RowTime

func (f *Field) RowTime(qcx *Qcx, rowID uint64, time time.Time, quantum string) (*Row, error)

RowTime gets the row at the particular time with the granularity specified by the quantum.

func (*Field) SetBit

func (f *Field) SetBit(qcx *Qcx, rowID, colID uint64, t *time.Time) (changed bool, err error)

SetBit sets a bit on a view within the field.

func (*Field) SetValue

func (f *Field) SetValue(qcx *Qcx, columnID uint64, value int64) (changed bool, err error)

SetValue sets a field value for a column.

func (*Field) SortShardRow

func (f *Field) SortShardRow(tx Tx, shard uint64, filter *Row, sort_desc bool) (*SortedRow, error)

func (*Field) StringValue

func (f *Field) StringValue(qcx *Qcx, columnID uint64) (value string, exists bool, err error)

StringValue reads an integer field value for a column, and converts it to a string based on a foreign index string key.

func (*Field) TTL

func (f *Field) TTL() time.Duration

TTL returns the ttl of the field.

func (*Field) TimeQuantum

func (f *Field) TimeQuantum() TimeQuantum

TimeQuantum returns the time quantum for the field.

func (*Field) TranslateStore

func (f *Field) TranslateStore() TranslateStore

TranslateStore returns the field's translation store.

func (*Field) TranslateStorePath

func (f *Field) TranslateStorePath() string

TranslateStorePath returns the translation database path for the field.

func (*Field) Type

func (f *Field) Type() string

Type returns the field type.

func (*Field) Value

func (f *Field) Value(qcx *Qcx, columnID uint64) (value int64, exists bool, err error)

Value reads a field value for a column.

type FieldInfo

type FieldInfo struct {
	Name        string       `json:"name"`
	CreatedAt   int64        `json:"createdAt,omitempty"`
	Owner       string       `json:"owner"`
	Options     FieldOptions `json:"options"`
	Cardinality *uint64      `json:"cardinality,omitempty"`
	Views       []*ViewInfo  `json:"views,omitempty"`
}

FieldInfo represents schema information for a field.

func FieldToFieldInfo

func FieldToFieldInfo(fld *dax.Field) *FieldInfo

FieldToFieldInfo converts a dax.Field to a featurebase.FieldInfo. Note: it does not return errors; there is one scenario where a timestamp epoch could be out of range. In that case, this function will only log the error and then proceed with timestamp option values which are likely incorrect. We are going to leave this as is for now because, since this is used for internal conversions of types which already exist and have been validated, we assume the option values are valid. TODO(tlt): add error handling to this function; worst case: panic.

func UnmarshalFieldOptions

func UnmarshalFieldOptions(name string, createdAt int64, buf []byte) (*FieldInfo, error)

type FieldOption

type FieldOption func(fo *FieldOptions) error

FieldOption is a functional option type for pilosa.fieldOptions.

func FieldOptionsFromField

func FieldOptionsFromField(fld *dax.Field) ([]FieldOption, error)

FieldOptionsFromField returns a slice of featurebase.FieldOption based on the given dax.Field.

func OptFieldForeignIndex

func OptFieldForeignIndex(index string) FieldOption

OptFieldForeignIndex marks this field as a foreign key to another index. That is, the values of this field should be interpreted as referencing records (Pilosa columns) in another index. TODO explain where/how this is used by Pilosa.

func OptFieldKeys

func OptFieldKeys() FieldOption

OptFieldKeys is a functional option on FieldOptions used to specify whether keys are used for this field.

func OptFieldTrackExistence added in v3.35.0

func OptFieldTrackExistence() FieldOption

OptFieldTrackExistence exists mostly to allow the FieldFromFieldOptions/FieldOptionsFromField round-trip to work. If you are actually creating a field, via api.CreateField, it will be turned on unconditionally. You can't turn it off.

func OptFieldTypeBool

func OptFieldTypeBool() FieldOption

OptFieldTypeBool is a functional option on FieldOptions used to specify the field as being type `bool` and to provide any respective configuration values.

func OptFieldTypeDecimal

func OptFieldTypeDecimal(scale int64, minmax ...pql.Decimal) FieldOption

OptFieldTypeDecimal is a functional option for creating a `decimal` field. Unless we decide to expand the range of supported values, `scale` is restricted to the range [0,19]. This supports anything from:

scale = 0: min: -9223372036854775808. max: 9223372036854775807.

to:

scale = 19: min: -0.9223372036854775808 max: 0.9223372036854775807

While it's possible to support scale values outside of this range, the coverage for those scales is no longer continuous. For example,

scale = -2:
	min : [-922337203685477580800, -100]
	GAPs: [-99, -1], [-199, -101] ... [-922337203685477580799, -922337203685477580701]
	0
	max : [100, 922337203685477580700]
	GAPs: [1, 99], [101, 199] ... [922337203685477580601, 922337203685477580699]

An alternative to this gap strategy would be to scale the supported range to a continuous 64-bit space (which is not unreasonable using bsiGroup.Base). The issue with this approach is that we would need to know which direction to favor. For example, there are two possible ranges for `scale = -2`:

min : [-922337203685477580800, -922337203685477580800+(2^64)] max : [922337203685477580700-(2^64), 922337203685477580700]
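
For instance, a scale-2 decimal field (a sketch; the index and field names are hypothetical):

func exampleDecimalField(ctx context.Context, api *API) (*Field, error) {
	// scale=2 stores values with two digits after the decimal point.
	return api.CreateField(ctx, "sales", "price", OptFieldTypeDecimal(2))
}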

func OptFieldTypeDefault

func OptFieldTypeDefault() FieldOption

OptFieldTypeDefault is a functional option on FieldOptions used to set the field type and cache setting to the default values.

func OptFieldTypeInt

func OptFieldTypeInt(min, max int64) FieldOption

OptFieldTypeInt is a functional option on FieldOptions used to specify the field as being type `int` and to provide any respective configuration values.

func OptFieldTypeMutex

func OptFieldTypeMutex(cacheType string, cacheSize uint32) FieldOption

OptFieldTypeMutex is a functional option on FieldOptions used to specify the field as being type `mutex` and to provide any respective configuration values.

func OptFieldTypeSet

func OptFieldTypeSet(cacheType string, cacheSize uint32) FieldOption

OptFieldTypeSet is a functional option on FieldOptions used to specify the field as being type `set` and to provide any respective configuration values.

func OptFieldTypeTime

func OptFieldTypeTime(timeQuantum TimeQuantum, ttl string, opt ...bool) FieldOption

OptFieldTypeTime is a functional option on FieldOptions used to specify the field as being type `time` and to provide any respective configuration values. Pass true to skip creation of the standard view.

func OptFieldTypeTimestamp

func OptFieldTypeTimestamp(epoch time.Time, timeUnit string) FieldOption

OptFieldTypeTimestamp is a functional option on FieldOptions used to specify the field as being type `timestamp` and to provide any respective configuration values.
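
A sketch combining the time-oriented options (the quantum "YMD", ttl "720h", time unit "s", and epoch are illustrative assumptions):

func exampleTimeFields(ctx context.Context, api *API) error {
	// A time field with year/month/day views, expiring after 720h.
	if _, err := api.CreateField(ctx, "events", "seen", OptFieldTypeTime("YMD", "720h")); err != nil {
		return err
	}
	// A timestamp field storing seconds relative to the Unix epoch.
	_, err := api.CreateField(ctx, "events", "at",
		OptFieldTypeTimestamp(time.Unix(0, 0).UTC(), "s"))
	return err
}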

type FieldOptions

type FieldOptions struct {
	Base           int64         `json:"base,omitempty"`
	BitDepth       uint64        `json:"bitDepth,omitempty"`
	Min            pql.Decimal   `json:"min,omitempty"`
	Max            pql.Decimal   `json:"max,omitempty"`
	Scale          int64         `json:"scale,omitempty"`
	Keys           bool          `json:"keys"`
	NoStandardView bool          `json:"noStandardView,omitempty"`
	TrackExistence bool          `json:"trackExistence,omitempty"`
	CacheSize      uint32        `json:"cacheSize,omitempty"`
	CacheType      string        `json:"cacheType,omitempty"`
	Type           string        `json:"type,omitempty"`
	TimeUnit       string        `json:"timeUnit,omitempty"`
	TimeQuantum    TimeQuantum   `json:"timeQuantum,omitempty"`
	ForeignIndex   string        `json:"foreignIndex"`
	TTL            time.Duration `json:"ttl,omitempty"`
}

FieldOptions represents options to set when initializing a field.

func (*FieldOptions) ActuallyTrackingExistence added in v3.35.0

func (o *FieldOptions) ActuallyTrackingExistence() bool

ActuallyTrackingExistence reflects the distinction between the TrackExistence bool, which is enabled by default for most fields, and whether we actually do existence tracking. Specifically, we don't do existence tracking for time quantum fields which don't have a standard view, or for BSI fields.

func (*FieldOptions) MarshalJSON

func (o *FieldOptions) MarshalJSON() ([]byte, error)

MarshalJSON marshals FieldOptions to JSON such that only those attributes associated to the field type are included.

type FieldRow

type FieldRow struct {
	Field        string        `json:"field"`
	RowID        uint64        `json:"rowID"`
	RowKey       string        `json:"rowKey,omitempty"`
	Value        *int64        `json:"value,omitempty"`
	FieldOptions *FieldOptions `json:"-"`
}

FieldRow is used to distinguish rows in a group by result.

func (*FieldRow) Clone

func (fr *FieldRow) Clone() (clone *FieldRow)

func (FieldRow) MarshalJSON

func (fr FieldRow) MarshalJSON() ([]byte, error)

MarshalJSON marshals FieldRow to JSON such that either a Key or an ID is included.

func (FieldRow) String

func (fr FieldRow) String() string

String is the FieldRow stringer.

type FieldStatus

type FieldStatus struct {
	Name            string
	CreatedAt       int64
	AvailableShards *roaring.Bitmap
}

FieldStatus is an internal message representing the contents of a field.

type FieldUpdate

type FieldUpdate struct {
	Option string `json:"option"`
	Value  string `json:"value"`
}

FieldUpdate represents a change to a field. The thinking is to only support changing one field option at a time to keep the implementation sane. At time of writing, only TTL is supported.

type FieldValue

type FieldValue struct {
	ColumnID  uint64
	ColumnKey string
	Value     int64
}

FieldValue represents the value for a column within a range-encoded field.

type FieldValues

type FieldValues []FieldValue

FieldValues represents a slice of field values.

func (FieldValues) ColumnIDs

func (p FieldValues) ColumnIDs() []uint64

ColumnIDs returns a slice of all the column IDs.

func (FieldValues) ColumnKeys

func (p FieldValues) ColumnKeys() []string

ColumnKeys returns a slice of all the column keys.

func (FieldValues) GroupByShard

func (p FieldValues) GroupByShard() map[uint64][]FieldValue

GroupByShard returns a map of field values by shard.

func (FieldValues) HasColumnKeys

func (p FieldValues) HasColumnKeys() bool

HasColumnKeys returns true if any values use a column key.

func (FieldValues) Len

func (p FieldValues) Len() int

func (FieldValues) Less

func (p FieldValues) Less(i, j int) bool

func (FieldValues) Swap

func (p FieldValues) Swap(i, j int)

func (FieldValues) Values

func (p FieldValues) Values() []int64

Values returns a slice of all the values.

type FieldView2Shards

type FieldView2Shards struct {
	// contains filtered or unexported fields
}

func NewFieldView2Shards

func NewFieldView2Shards() *FieldView2Shards

func (*FieldView2Shards) String

func (vs *FieldView2Shards) String() (r string)

type FileSystem

type FileSystem interface {
	New() (http.FileSystem, error)
}

FileSystem represents an interface for file system for serving the Lattice UI.

var NopFileSystem FileSystem

NopFileSystem represents a FileSystem that returns an error if called.

type FragmentInfo

type FragmentInfo struct {
	BitmapInfo roaring.BitmapInfo
}

type GCNotifier

type GCNotifier interface {
	Close()
	AfterGC() <-chan struct{}
}

GCNotifier represents an interface for garbage collection notifications.

var NopGCNotifier GCNotifier = &nopGCNotifier{}

NopGCNotifier represents a GCNotifier that doesn't do anything.

type GroupCount

type GroupCount struct {
	Group      []FieldRow   `json:"group"`
	Count      uint64       `json:"count"`
	Agg        int64        `json:"-"`
	DecimalAgg *pql.Decimal `json:"-"`
}

GroupCount represents a result item for a group by query.

func ApplyConditionToGroupCounts

func ApplyConditionToGroupCounts(gcs []GroupCount, subj string, cond *pql.Condition) []GroupCount

ApplyConditionToGroupCounts filters the contents of gcs according to the condition. Currently, `count` and `sum` are the only fields supported.

func (*GroupCount) Clone

func (g *GroupCount) Clone() (r *GroupCount)

func (GroupCount) Compare

func (g GroupCount) Compare(o GroupCount) int

Compare is used in ordering two GroupCount objects.

type GroupCounts

type GroupCounts struct {
	// contains filtered or unexported fields
}

GroupCounts is a list of GroupCount.

func NewGroupCounts

func NewGroupCounts(agg string, groups ...GroupCount) *GroupCounts

NewGroupCounts creates a GroupCounts with the given type and slice of GroupCount objects. There's intentionally no externally-accessible way to change the []GroupCount after creation.

func (*GroupCounts) AggregateColumn

func (g *GroupCounts) AggregateColumn() string

AggregateColumn gives the likely column name to use for aggregates, because for historical reasons we used "sum" when it was a sum, but don't want to use that when it's something else. This will likely get revisited.

func (*GroupCounts) Groups

func (g *GroupCounts) Groups() []GroupCount

Groups is a convenience method to let us not worry as much about the potentially-nil nature of a *GroupCounts.

func (*GroupCounts) MarshalJSON

func (g *GroupCounts) MarshalJSON() ([]byte, error)

MarshalJSON makes GroupCounts satisfy interface json.Marshaler and customizes the JSON output of the aggregate field label.

func (*GroupCounts) ToRows

func (g *GroupCounts) ToRows(callback func(*proto.RowResponse) error) error

ToRows implements the ToRowser interface.

func (*GroupCounts) ToTable

func (g *GroupCounts) ToTable() (*proto.TableResponse, error)

ToTable implements the ToTabler interface.

type HTTPError

type HTTPError struct {
	// Human-readable message.
	Message string `json:"message"`
}

HTTPError defines a standard application error.

func (*HTTPError) Error

func (e *HTTPError) Error() string

Error returns the string representation of the error message.

type HTTPTranslateEntryReader

type HTTPTranslateEntryReader struct {

	// Lookup of offsets for each index & field.
	// Must be set before calling Open().
	Offsets TranslateOffsetMap

	// URL to stream entries from.
	// Must be set before calling Open().
	URL string

	HTTPClient *http.Client

	Logger logger.Logger
	// contains filtered or unexported fields
}

HTTPTranslateEntryReader represents an implementation of TranslateEntryReader. It consolidates all index & field translate entries into a single reader.

func NewTranslateEntryReader

func NewTranslateEntryReader(ctx context.Context, client *http.Client) *HTTPTranslateEntryReader

NewTranslateEntryReader returns a new instance of TranslateEntryReader.

func (*HTTPTranslateEntryReader) Close

func (r *HTTPTranslateEntryReader) Close() error

Close stops the reader.

func (*HTTPTranslateEntryReader) Open

func (r *HTTPTranslateEntryReader) Open() error

Open initiates the reader.

func (*HTTPTranslateEntryReader) ReadEntry

func (r *HTTPTranslateEntryReader) ReadEntry(entry *TranslateEntry) error

ReadEntry reads the next entry from the stream into entry. Returns io.EOF at the end of the stream.
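
The usual consumption pattern is a read loop terminated by io.EOF (a sketch; assumes r was constructed with Offsets and URL set, then opened):

import "io"

func drainEntries(r *HTTPTranslateEntryReader) error {
	var entry TranslateEntry
	for {
		if err := r.ReadEntry(&entry); err == io.EOF {
			return nil // end of stream
		} else if err != nil {
			return err
		}
		// process entry...
	}
}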

type Handler

type Handler struct {
	Handler http.Handler
	// contains filtered or unexported fields
}

Handler represents an HTTP handler.

func NewHandler

func NewHandler(opts ...handlerOption) (*Handler, error)

NewHandler returns a new instance of Handler with a default logger.

func (*Handler) Close

func (h *Handler) Close() error

Close tries to cleanly shutdown the HTTP server, and failing that, after a timeout, calls Server.Close.

func (*Handler) DiscardHTTPServerLogs added in v3.33.0

func (h *Handler) DiscardHTTPServerLogs()

func (*Handler) Serve

func (h *Handler) Serve() error

func (*Handler) ServeHTTP

func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request)

ServeHTTP handles an HTTP request.

type HandlerI

type HandlerI interface {
	Serve() error
	Close() error
}

HandlerI is the interface for the data handler, a wrapper around Pilosa's data store.

var NopHandler HandlerI = nopHandler{}

NopHandler is a no-op implementation of the Handler interface.

type Holder

type Holder struct {
	Schemator disco.Schemator

	Logger logger.Logger

	// Instantiates new translation stores
	OpenTranslateStore  OpenTranslateStoreFunc
	OpenTranslateReader OpenTranslateReaderFunc

	// Func to open whatever implementation of transaction store we're using.
	OpenTransactionStore OpenTransactionStoreFunc

	// Func to open the ID allocator.
	OpenIDAllocator func(string, bool) (*idAllocator, error)

	Opts HolderOpts

	Auditor testhook.Auditor
	// contains filtered or unexported fields
}

Holder represents a container for indexes.

func NewHolder

func NewHolder(path string, cfg *HolderConfig) *Holder

NewHolder returns a new instance of Holder for the given path.

func (*Holder) Activate

func (h *Holder) Activate()

Activate runs the background tasks relevant to keeping a holder in a stable state, such as flushing caches. This is separate from opening because, while a server would nearly always want to do this, other use cases (like consistency checks of a data directory) need to avoid it even getting started.

func (*Holder) BeginTx

func (h *Holder) BeginTx(writable bool, idx *Index, shard uint64) (Tx, error)

BeginTx starts a transaction on the holder. The index and shard must be specified.
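As a sketch (assuming idx is an *Index held by h), a read-only transaction on shard 0 might be opened and released like so:

	tx, err := h.BeginTx(false, idx, 0)
	if err != nil {
		return err
	}
	defer tx.Rollback() // read-only transactions are simply rolled back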

func (*Holder) Close

func (h *Holder) Close() error

Close closes all open fragments.

func (*Holder) CreateIndex

func (h *Holder) CreateIndex(name string, requestUserID string, opt IndexOptions) (*Index, error)

CreateIndex creates an index. An error is returned if the index already exists.

func (*Holder) CreateIndexAndBroadcast

func (h *Holder) CreateIndexAndBroadcast(ctx context.Context, cim *CreateIndexMessage) (*Index, error)

CreateIndexAndBroadcast creates an index locally, then broadcasts the creation to other nodes so they can create locally as well. An error is returned if the index already exists.

func (*Holder) CreateIndexIfNotExists

func (h *Holder) CreateIndexIfNotExists(name string, requestUserID string, opt IndexOptions) (*Index, error)

CreateIndexIfNotExists returns an index by name. The index is created if it does not already exist.
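A minimal usage sketch; the index name, user ID, and options are illustrative:

	idx, err := h.CreateIndexIfNotExists("repository", "admin", IndexOptions{
		Keys:           false,
		TrackExistence: true,
	})
	if err != nil {
		return err
	}
	_ = idx // idx is valid whether the index was created or already existed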

func (*Holder) DeleteDataframe

func (h *Holder) DeleteDataframe(name string) error

func (*Holder) DeleteIndex

func (h *Holder) DeleteIndex(name string) error

DeleteIndex removes an index from the holder.

func (*Holder) Directive

func (h *Holder) Directive() dax.Directive

func (*Holder) DirectiveApplied

func (h *Holder) DirectiveApplied() bool

DirectiveApplied returns true if the Holder's latest directive has been fully applied and is safe for queries. This is primarily used in testing and will likely evolve to something smarter.

func (*Holder) Field

func (h *Holder) Field(index, name string) *Field

Field returns the field for an index and name.

func (*Holder) FinishTransaction

func (h *Holder) FinishTransaction(ctx context.Context, id string) (*Transaction, error)

func (*Holder) GetTransaction

func (h *Holder) GetTransaction(ctx context.Context, id string) (*Transaction, error)

func (*Holder) HasData

func (h *Holder) HasData() (bool, error)

HasData returns true if Holder contains at least one index. This is used to determine if the rebalancing of data is necessary when a node joins the cluster.

func (*Holder) Index

func (h *Holder) Index(name string) (idx *Index)

Index returns the index by name.

func (*Holder) IndexPath

func (h *Holder) IndexPath(name string) string

IndexPath returns the path where a given index is stored.

func (*Holder) Indexes

func (h *Holder) Indexes() []*Index

Indexes returns a list of all indexes in the holder.

func (*Holder) IndexesPath

func (h *Holder) IndexesPath() string

IndexesPath returns the path of the indexes directory.

func (*Holder) LoadField

func (h *Holder) LoadField(index, field string) (*Field, error)

LoadField creates a field based on the information stored in Schemator. An error is returned if the field already exists.

func (*Holder) LoadIndex

func (h *Holder) LoadIndex(name string) (*Index, error)

LoadIndex creates an index based on the information stored in Schemator. An error is returned if the index already exists.

func (*Holder) LoadSchema

func (h *Holder) LoadSchema() error

LoadSchema creates all indexes based on the information stored in Schemator. It does not return an error if an index already exists. The thinking is that this method will load all indexes that don't already exist. We likely want to revisit this; for example, we might want to confirm that the createdAt timestamps on each of the indexes matches the value in etcd.

func (*Holder) LoadView

func (h *Holder) LoadView(index, field, view string) (*view, error)

LoadView creates a view based on the information stored in Schemator. Unlike index and field, it is not considered an error if the view already exists.

func (*Holder) Open

func (h *Holder) Open() error

Open initializes the root data directory for the holder.

func (*Holder) Path

func (h *Holder) Path() string

Path returns the directory path the holder was created with.

func (*Holder) Schema

func (h *Holder) Schema() ([]*IndexInfo, error)

Schema returns schema information for all indexes, fields, and views.

func (*Holder) SetDirective

func (h *Holder) SetDirective(d *dax.Directive)

func (*Holder) SetDirectiveApplied

func (h *Holder) SetDirectiveApplied(a bool)

SetDirectiveApplied sets the value of directiveApplied. See the note on the DirectiveApplied method.

func (*Holder) StartTransaction

func (h *Holder) StartTransaction(ctx context.Context, id string, timeout time.Duration, exclusive bool) (*Transaction, error)

func (*Holder) Transactions

func (h *Holder) Transactions(ctx context.Context) (map[string]*Transaction, error)

func (*Holder) Txf

func (h *Holder) Txf() *TxFactory

type HolderConfig

type HolderConfig struct {
	PartitionN           int
	OpenTranslateStore   OpenTranslateStoreFunc
	OpenTranslateReader  OpenTranslateReaderFunc
	OpenTransactionStore OpenTransactionStoreFunc
	OpenIDAllocator      OpenIDAllocatorFunc
	TranslationSyncer    TranslationSyncer
	Serializer           Serializer
	Schemator            disco.Schemator
	Sharder              disco.Sharder
	CacheFlushInterval   time.Duration
	Logger               logger.Logger

	StorageConfig *storage.Config
	RBFConfig     *rbfcfg.Config

	LookupDBDSN string
}

HolderConfig holds configuration details that need to be set up at initial holder creation. NewHolder takes a *HolderConfig, which can be nil. Use DefaultHolderConfig to get a default-valued HolderConfig you can then alter.

func DefaultHolderConfig

func DefaultHolderConfig() *HolderConfig

DefaultHolderConfig provides a holder config with reasonable defaults. Note that a production server would almost certainly need to override these; that's usually handled by server options such as OptServerOpenTranslateStore.
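A sketch of the intended pattern; the data path and the overridden field are illustrative:

	cfg := DefaultHolderConfig()
	cfg.CacheFlushInterval = time.Minute // override a default as needed
	h := NewHolder("/var/lib/featurebase", cfg)
	if err := h.Open(); err != nil {
		return err
	}
	defer h.Close()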

func TestHolderConfig

func TestHolderConfig() *HolderConfig

TestHolderConfig provides a holder config with reasonable defaults for tests. This means it tries to disable fsync and sets significantly smaller file size limits for RBF, for instance. Do not use this outside of the test infrastructure.

type HolderOpts

type HolderOpts struct {
	// StorageBackend controls the tx/storage engine we instantiate. Set by
	// server.go OptServerStorageConfig
	StorageBackend string
}

HolderOpts holds information about the holder which other things might want to look up later while using the holder.

type IDAllocCommitRequest

type IDAllocCommitRequest struct {
	Key     IDAllocKey `json:"key"`
	Session [32]byte   `json:"session"`
	Count   uint64     `json:"count"`
}

type IDAllocKey

type IDAllocKey struct {
	Index string `json:"index"`
	Key   string `json:"key,omitempty"`
}

IDAllocKey is an ID allocation key.

func (IDAllocKey) String

func (k IDAllocKey) String() string

type IDAllocReserveRequest

type IDAllocReserveRequest struct {
	Key     IDAllocKey `json:"key"`
	Session [32]byte   `json:"session"`
	Offset  uint64     `json:"offset"`
	Count   uint64     `json:"count"`
}

type IDOffsetDesyncError added in v3.34.0

type IDOffsetDesyncError struct {
	// Requested is the offset that the client attempted to reserve.
	Requested uint64 `json:"requested"`

	// Base is the next uncommitted offset for which IDs may be reserved.
	Base uint64 `json:"base"`
}

IDOffsetDesyncError is an error generated when attempting to reserve IDs at a committed offset. This typically happens when Kafka partitions are moved between Kafka ingesters - there may be a brief period in which two ingesters are processing the same messages at the same time. The ingester can resolve this by ignoring messages under Base.
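An ingester might detect and recover from this condition roughly as follows (a sketch; the surrounding reservation call that produced err is assumed):

	var desync IDOffsetDesyncError
	if errors.As(err, &desync) {
		// Discard messages with offsets below desync.Base, then retry
		// the reservation starting from desync.Base.
	}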

func (IDOffsetDesyncError) Error added in v3.34.0

func (err IDOffsetDesyncError) Error() string

type IDRange

type IDRange struct {
	First uint64 `json:"first"`
	Last  uint64 `json:"last"`
}

IDRange is a reserved ID range.

type IDSet

type IDSet []int64

IDSet is a return type specific to SQLResponse types.

func (IDSet) String

func (ii IDSet) String() string

type ImportOption

type ImportOption func(*ImportOptions) error

ImportOption is a functional option type for API.Import.
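Because each option is simply a func(*ImportOptions) error, options can be applied to an ImportOptions value directly; a sketch:

	var opts ImportOptions
	for _, opt := range []ImportOption{
		OptImportOptionsClear(true),
		OptImportOptionsIgnoreKeyCheck(true),
	} {
		if err := opt(&opts); err != nil {
			return err
		}
	}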

func OptImportOptionsClear

func OptImportOptionsClear(c bool) ImportOption

OptImportOptionsClear is a functional option on ImportOption used to specify whether the import is a set or clear operation.

func OptImportOptionsIgnoreKeyCheck

func OptImportOptionsIgnoreKeyCheck(b bool) ImportOption

OptImportOptionsIgnoreKeyCheck is a functional option on ImportOption used to specify whether key check should be ignored.

func OptImportOptionsPresorted

func OptImportOptionsPresorted(b bool) ImportOption

func OptImportOptionsSuppressLog

func OptImportOptionsSuppressLog(b bool) ImportOption

type ImportOptions

type ImportOptions struct {
	Clear          bool
	IgnoreKeyCheck bool
	Presorted      bool

	// test Tx atomicity if > 0
	SimPowerLossAfter int
	// contains filtered or unexported fields
}

ImportOptions holds the options for the API.Import method.

TODO(2.0) we have entirely missed the point of functional options by exporting this structure. If it needs to be exported for some reason, we should consider not using functional options here which just adds complexity.

type ImportRequest

type ImportRequest struct {
	Index          string
	IndexCreatedAt int64
	Field          string
	FieldCreatedAt int64
	Shard          uint64
	RowIDs         []uint64
	ColumnIDs      []uint64
	RowKeys        []string
	ColumnKeys     []string
	Timestamps     []int64
	Clear          bool
}

ImportRequest describes the import request structure for an import. BSIs use the ImportValueRequest instead.

func (*ImportRequest) Clone

func (ir *ImportRequest) Clone() *ImportRequest

Clone allows copying an import request. Normally you wouldn't, but some import functions are destructive on their inputs, and if you want to *re-use* an import request, you might need this. If you're using this outside tx_test, something is probably wrong.

func (*ImportRequest) SortToShards

func (ir *ImportRequest) SortToShards() (result map[uint64]*ImportRequest)

SortToShards takes an import request which has been translated, but may not be sorted, and turns it into a map from shard IDs to individual import requests. We don't sort the entries within each shard because the correct sorting depends on the field type and we don't want to deal with that here.
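For example (a sketch), the per-shard sub-requests can then be dispatched independently:

	byShard := ir.SortToShards()
	for shard, sub := range byShard {
		// sub is an *ImportRequest containing only the rows/columns
		// belonging to shard; send it to the owning node.
		_, _ = shard, sub
	}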

func (*ImportRequest) ValidateWithTimestamp

func (ir *ImportRequest) ValidateWithTimestamp(indexCreatedAt, fieldCreatedAt int64) error

ValidateWithTimestamp ensures that the payload of the request is valid.

type ImportResponse

type ImportResponse struct {
	Err string
}

ImportResponse is the structured response of an import.

type ImportRoaringRequest

type ImportRoaringRequest struct {
	IndexCreatedAt  int64
	FieldCreatedAt  int64
	Clear           bool
	Action          string // [set, clear, overwrite]
	Block           int
	Views           map[string][]byte
	UpdateExistence bool
	SuppressLog     bool
}

ImportRoaringRequest describes the import request structure for an import containing roaring-encoded data.

func (*ImportRoaringRequest) ValidateWithTimestamp

func (irr *ImportRoaringRequest) ValidateWithTimestamp(indexCreatedAt, fieldCreatedAt int64) error

ValidateWithTimestamp ensures that the payload of the request is valid.

type ImportRoaringShardRequest

type ImportRoaringShardRequest struct {
	// Has this request already been forwarded to all replicas? If
	// Remote=false, then the handling server is responsible for
	// ensuring this request is sent to all replicas before returning
	// a successful response to the client.
	Remote bool
	Views  []RoaringUpdate

	// SuppressLog requests we not write to the write log. Typically
	// that would be because this request is being replayed from a
	// write log.
	SuppressLog bool
}

ImportRoaringShardRequest is the request for the shard transactional endpoint.

type ImportValueRequest

type ImportValueRequest struct {
	Index          string
	IndexCreatedAt int64
	Field          string
	FieldCreatedAt int64
	// if Shard is MaxUint64 (an impossible shard value), this
	// indicates that the column IDs may come from multiple shards.
	Shard           uint64
	ColumnIDs       []uint64 // e.g. weather stationID
	ColumnKeys      []string
	Values          []int64 // e.g. temperature, humidity, barometric pressure
	FloatValues     []float64
	TimestampValues []time.Time
	StringValues    []string
	Clear           bool
	// contains filtered or unexported fields
}

ImportValueRequest describes the import request structure for a value (BSI) import. Note: there are no RowIDs here; BSI values have to be converted into RowIDs internally.
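A minimal sketch of building an integer BSI import request; the names and values are illustrative:

	req := &ImportValueRequest{
		Index:     "weather",
		Field:     "temperature",
		Shard:     0,
		ColumnIDs: []uint64{1, 2, 3}, // e.g. station IDs
		Values:    []int64{20, 21, 19},
	}
	if err := req.Validate(); err != nil {
		return err
	}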

func (*ImportValueRequest) Clone

func (ivr *ImportValueRequest) Clone() *ImportValueRequest

func (*ImportValueRequest) Len

func (ivr *ImportValueRequest) Len() int

func (*ImportValueRequest) Less

func (ivr *ImportValueRequest) Less(i, j int) bool

func (*ImportValueRequest) Swap

func (ivr *ImportValueRequest) Swap(i, j int)

func (*ImportValueRequest) Validate

func (ivr *ImportValueRequest) Validate() error

Validate ensures that the payload of the request is valid.

func (*ImportValueRequest) ValidateWithTimestamp

func (ivr *ImportValueRequest) ValidateWithTimestamp(indexCreatedAt, fieldCreatedAt int64) error

ValidateWithTimestamp ensures that the payload of the request is valid.

type Importer

type Importer interface {
	StartTransaction(ctx context.Context, id string, timeout time.Duration, exclusive bool, requestTimeout time.Duration) (*Transaction, error)
	FinishTransaction(ctx context.Context, id string) (*Transaction, error)
	CreateTableKeys(ctx context.Context, tid dax.TableID, keys ...string) (map[string]uint64, error)
	CreateFieldKeys(ctx context.Context, tid dax.TableID, fname dax.FieldName, keys ...string) (map[string]uint64, error)
	ImportRoaringBitmap(ctx context.Context, tid dax.TableID, fld *dax.Field, shard uint64, views map[string]*roaring.Bitmap, clear bool) error
	ImportRoaringShard(ctx context.Context, tid dax.TableID, shard uint64, request *ImportRoaringShardRequest) error
	EncodeImportValues(ctx context.Context, tid dax.TableID, fld *dax.Field, shard uint64, vals []int64, ids []uint64, clear bool) (path string, data []byte, err error)
	EncodeImport(ctx context.Context, tid dax.TableID, fld *dax.Field, shard uint64, vals, ids []uint64, clear bool) (path string, data []byte, err error)
	DoImport(ctx context.Context, tid dax.TableID, fld *dax.Field, shard uint64, path string, data []byte) error
}

type InMemTransactionStore

type InMemTransactionStore struct {
	// contains filtered or unexported fields
}

InMemTransactionStore does not persist transaction data and is only useful for testing.

func NewInMemTransactionStore

func NewInMemTransactionStore() *InMemTransactionStore

func (*InMemTransactionStore) Get

func (*InMemTransactionStore) List

func (s *InMemTransactionStore) List() (map[string]*Transaction, error)

func (*InMemTransactionStore) Put

func (s *InMemTransactionStore) Put(trns *Transaction) error

func (*InMemTransactionStore) Remove

func (s *InMemTransactionStore) Remove(id string) (*Transaction, error)

type Index

type Index struct {

	// Instantiates new translation stores
	OpenTranslateStore OpenTranslateStoreFunc
	// contains filtered or unexported fields
}

Index represents a container for fields.

func NewIndex

func NewIndex(holder *Holder, path, name string) (*Index, error)

NewIndex returns an existing (but possibly empty) instance of Index at path. It will not erase any prior content.

func (*Index) AvailableShards

func (i *Index) AvailableShards(localOnly bool) *roaring.Bitmap

AvailableShards returns a bitmap of all shards with data in the index.

func (*Index) BeginTx

func (i *Index) BeginTx(writable bool, shard uint64) (Tx, error)

BeginTx starts a transaction on a shard of the index.

func (*Index) Close

func (i *Index) Close() error

Close closes the index and its fields.

func (*Index) CreateField

func (i *Index) CreateField(name string, requestUserID string, opts ...FieldOption) (*Field, error)

CreateField creates a field. This interface enforces the setting of the TrackExistence flag; if you don't want that, use createNullableField, but actually don't. That should be used only for applying previously-created fields.

func (*Index) CreateFieldIfNotExists

func (i *Index) CreateFieldIfNotExists(name string, requestUserID string, opts ...FieldOption) (*Field, error)

CreateFieldIfNotExists creates a field with the given options if it doesn't exist.

Does NOT apply the "default" TrackExistence.

func (*Index) CreateFieldIfNotExistsWithOptions

func (i *Index) CreateFieldIfNotExistsWithOptions(name string, requestUserID string, opt *FieldOptions) (*Field, error)

CreateFieldIfNotExistsWithOptions is a method which I created because I needed the functionality of CreateFieldIfNotExists, but taking a *FieldOptions struct instead of functional options. TODO: This should definitely be refactored so we don't have these virtually equivalent methods, but I'm putting this here for now just to see if it works.

Does NOT apply the "default" TrackExistence.

func (*Index) CreatedAt

func (i *Index) CreatedAt() int64

CreatedAt is a timestamp for a specific version of an index.

func (*Index) DataframesPath

func (i *Index) DataframesPath() string

DataframesPath returns the path of the dataframes specific to an index.

func (*Index) DeleteField

func (i *Index) DeleteField(name string) error

DeleteField removes a field from the index.

func (*Index) Field

func (i *Index) Field(name string) *Field

Field returns a field in the index by name.

func (*Index) Fields

func (i *Index) Fields() []*Field

Fields returns a list of all fields in the index.

func (*Index) FieldsPath

func (i *Index) FieldsPath() string

FieldsPath returns the path of the fields directory.

func (*Index) GetDataFramePath

func (i *Index) GetDataFramePath(shard uint64) string

TODO (twg) refine parquet strategy a bit

func (*Index) Holder

func (i *Index) Holder() *Holder

Holder yields this index's Holder.

func (*Index) Keys

func (i *Index) Keys() bool

Keys returns true if the index uses string keys.

func (*Index) Name

func (i *Index) Name() string

Name returns name of the index.

func (*Index) NewTx

func (i *Index) NewTx(txo Txo) Tx

func (*Index) Open

func (i *Index) Open() error

Open opens and initializes the index.

func (*Index) OpenWithSchema

func (i *Index) OpenWithSchema(idx *disco.Index) error

OpenWithSchema opens the index and uses the provided schema to verify that the index's fields are expected.

func (*Index) Options

func (i *Index) Options() IndexOptions

Options returns all options for this index.

func (*Index) Path

func (i *Index) Path() string

Path returns the path the index was initialized with.

func (*Index) QualifiedName

func (i *Index) QualifiedName() string

QualifiedName returns the qualified name of the index.

func (*Index) SetTranslatePartitions

func (i *Index) SetTranslatePartitions(tp dax.PartitionNums)

SetTranslatePartitions sets the cached value: translatePartitions.

There's already logic in api_directive.go which creates a new index with partitions. This particular function is used when the index already exists on the node, but we get a Directive which changes its partition list. In that case, we need to update this cached value. Really, this is kind of hacky and we need to revisit the ApplyDirective logic so that it's more intuitive with respect to index.translatePartitions.

func (*Index) TranslateStore

func (i *Index) TranslateStore(partitionID int) TranslateStore

TranslateStore returns the translation store for a given partition.

func (*Index) TranslateStorePath

func (i *Index) TranslateStorePath(partitionID int) string

TranslateStorePath returns the translation database path for a partition.

func (*Index) UpdateField

func (i *Index) UpdateField(ctx context.Context, name string, requestUserID string, update FieldUpdate) (*CreateFieldMessage, error)

func (*Index) UpdateFieldLocal

func (i *Index) UpdateFieldLocal(cfm *CreateFieldMessage, update FieldUpdate) error

type IndexInfo

type IndexInfo struct {
	Name           string       `json:"name"`
	CreatedAt      int64        `json:"createdAt,omitempty"`
	UpdatedAt      int64        `json:"updatedAt"`
	Owner          string       `json:"owner"`
	LastUpdateUser string       `json:"lastUpdatedUser"`
	Options        IndexOptions `json:"options"`
	Fields         []*FieldInfo `json:"fields"`
	ShardWidth     uint64       `json:"shardWidth"`
}

IndexInfo represents schema information for an index.

func TableToIndexInfo

func TableToIndexInfo(tbl *dax.Table) *IndexInfo

TableToIndexInfo converts a dax.Table to a featurebase.IndexInfo.

func TablesToIndexInfos

func TablesToIndexInfos(tbls []*dax.Table) []*IndexInfo

TablesToIndexInfos converts a slice of dax.Table to a slice of featurebase.IndexInfo.

func (*IndexInfo) Field

func (ii *IndexInfo) Field(name string) *FieldInfo

Field returns the FieldInfo for the provided field name. If the field does not exist, it returns nil.

type IndexOptions

type IndexOptions struct {
	Keys           bool   `json:"keys"`
	TrackExistence bool   `json:"trackExistence"`
	PartitionN     int    `json:"partitionN"`
	Description    string `json:"description"`
}

IndexOptions represents options to set when initializing an index.

func UnmarshalIndexOptions

func UnmarshalIndexOptions(name string, createdAt int64, buf []byte) (*IndexOptions, error)

type IndexStatus

type IndexStatus struct {
	Name      string
	CreatedAt int64
	Fields    []*FieldStatus
}

IndexStatus is an internal message representing the contents of an index.

type IndexTranslateOffsetMap

type IndexTranslateOffsetMap struct {
	Partitions map[int]uint64    `json:"partitions"`
	Fields     map[string]uint64 `json:"fields"`
}

func NewIndexTranslateOffsetMap

func NewIndexTranslateOffsetMap() *IndexTranslateOffsetMap

func (*IndexTranslateOffsetMap) Empty

func (i *IndexTranslateOffsetMap) Empty() bool

Empty reports whether this map has neither partitions nor fields.

type InternalClient

type InternalClient struct {
	// contains filtered or unexported fields
}

InternalClient represents a client to the Pilosa cluster.

func NewInternalClient

func NewInternalClient(host string, remoteClient *http.Client, opts ...InternalClientOption) (*InternalClient, error)

NewInternalClient returns a new instance of InternalClient to connect to host. If api is non-nil, the client uses it for some same-host operations instead of going through http.

func NewInternalClientFromURI

func NewInternalClientFromURI(defaultURI *pnet.URI, remoteClient *http.Client, opts ...InternalClientOption) *InternalClient

func (*InternalClient) AvailableShards

func (c *InternalClient) AvailableShards(ctx context.Context, indexName string) ([]uint64, error)

AvailableShards returns a list of shards for an index.

func (*InternalClient) CreateField

func (c *InternalClient) CreateField(ctx context.Context, index, field string) error

func (*InternalClient) CreateFieldKeysNode

func (c *InternalClient) CreateFieldKeysNode(ctx context.Context, uri *pnet.URI, index string, field string, keys ...string) (transMap map[string]uint64, err error)

func (*InternalClient) CreateFieldWithOptions

func (c *InternalClient) CreateFieldWithOptions(ctx context.Context, index, field string, opt FieldOptions) error

CreateFieldWithOptions creates a new field on the server.

func (*InternalClient) CreateIndex

func (c *InternalClient) CreateIndex(ctx context.Context, index string, opt IndexOptions) error

CreateIndex creates a new index on the server.

func (*InternalClient) CreateIndexKeysNode

func (c *InternalClient) CreateIndexKeysNode(ctx context.Context, uri *pnet.URI, index string, keys ...string) (transMap map[string]uint64, err error)

func (*InternalClient) EnsureField

func (c *InternalClient) EnsureField(ctx context.Context, indexName string, fieldName string) error

func (*InternalClient) EnsureFieldWithOptions

func (c *InternalClient) EnsureFieldWithOptions(ctx context.Context, indexName string, fieldName string, opt FieldOptions) error

func (*InternalClient) EnsureIndex

func (c *InternalClient) EnsureIndex(ctx context.Context, name string, options IndexOptions) error

func (*InternalClient) ExportCSV

func (c *InternalClient) ExportCSV(ctx context.Context, index, field string, shard uint64, w io.Writer) error

ExportCSV bulk exports data for a single shard from a host to CSV format.

func (*InternalClient) FieldTranslateDataReader

func (c *InternalClient) FieldTranslateDataReader(ctx context.Context, index, field string) (io.ReadCloser, error)

FieldTranslateDataReader returns a reader that provides a snapshot of translation data for a field.

func (*InternalClient) FindFieldKeysNode

func (c *InternalClient) FindFieldKeysNode(ctx context.Context, uri *pnet.URI, index string, field string, keys ...string) (transMap map[string]uint64, err error)

func (*InternalClient) FindIndexKeysNode

func (c *InternalClient) FindIndexKeysNode(ctx context.Context, uri *pnet.URI, index string, keys ...string) (transMap map[string]uint64, err error)

func (*InternalClient) FinishTransaction

func (c *InternalClient) FinishTransaction(ctx context.Context, id string) (*Transaction, error)

func (*InternalClient) FragmentNodes

func (c *InternalClient) FragmentNodes(ctx context.Context, index string, shard uint64) ([]*disco.Node, error)

FragmentNodes returns a list of nodes that own a shard.

func (*InternalClient) GetDataframeShard

func (c *InternalClient) GetDataframeShard(ctx context.Context, index string, shard uint64) (*http.Response, error)

func (*InternalClient) GetDiskUsage

func (c *InternalClient) GetDiskUsage(ctx context.Context) (DiskUsage, error)

GetDiskUsage gets the size of data directory across all nodes.

func (*InternalClient) GetIndexUsage

func (c *InternalClient) GetIndexUsage(ctx context.Context, index string) (DiskUsage, error)

GetIndexUsage gets the size of an index across all nodes.

func (*InternalClient) GetPastQueries

func (c *InternalClient) GetPastQueries(ctx context.Context, uri *pnet.URI) ([]PastQueryStatus, error)

GetPastQueries retrieves the query history log for the specified node.

func (*InternalClient) GetTransaction

func (c *InternalClient) GetTransaction(ctx context.Context, id string) (*Transaction, error)

func (*InternalClient) IDAllocDataReader

func (c *InternalClient) IDAllocDataReader(ctx context.Context) (io.ReadCloser, error)

IDAllocDataReader returns a reader that provides a snapshot of ID allocation data.

func (*InternalClient) IDAllocDataWriter

func (c *InternalClient) IDAllocDataWriter(ctx context.Context, f io.Reader, primary *disco.Node) error

func (*InternalClient) Import

func (c *InternalClient) Import(ctx context.Context, qcx *Qcx, req *ImportRequest, options *ImportOptions) error

Import imports values using an ImportRequest, whether or not it's keyed. It may modify the contents of req.

If a request comes in with Shard -1, it will be sent to only one node, which will translate if necessary, split into shards, and loop back through this for each sub-request. If a request uses record keys, it will be set to use shard = -1 unconditionally, because we know that it has to be translated and possibly reshuffled. Value keys don't override the shard.

If we get a non-nil qcx, and have an associated API, we'll use that API directly for the local shard.

func (*InternalClient) ImportFieldKeys

func (c *InternalClient) ImportFieldKeys(ctx context.Context, uri *pnet.URI, index, field string, remote bool, readerFunc func() (io.Reader, error)) error

func (*InternalClient) ImportIndexKeys

func (c *InternalClient) ImportIndexKeys(ctx context.Context, uri *pnet.URI, index string, partitionID int, remote bool, readerFunc func() (io.Reader, error)) error

func (*InternalClient) ImportRoaring

func (c *InternalClient) ImportRoaring(ctx context.Context, uri *pnet.URI, index, field string, shard uint64, remote bool, req *ImportRoaringRequest) error

ImportRoaring does fast import of raw bits in roaring format (pilosa or official format, see API.ImportRoaring).

func (*InternalClient) ImportRoaringShard added in v3.27.0

func (c *InternalClient) ImportRoaringShard(ctx context.Context, uri *pnet.URI, index string, shard uint64, remote bool, req *ImportRoaringShardRequest) error

ImportRoaringShard sends an ImportRoaringShardRequest for the given index and shard to the node at uri.

func (*InternalClient) ImportValue

func (c *InternalClient) ImportValue(ctx context.Context, qcx *Qcx, req *ImportValueRequest, options *ImportOptions) error

ImportValue imports values using an ImportValueRequest, whether or not it's keyed. It may modify the contents of req.

If a request comes in with Shard -1, it will be sent to only one node, which will translate if necessary, split into shards, and loop back through this for each sub-request. If a request uses record keys, it will be set to use shard = -1 unconditionally, because we know that it has to be translated and possibly reshuffled. Value keys don't override the shard.

If we get a non-nil qcx, and have an associated API, we'll use that API directly for the local shard.

func (*InternalClient) IndexTranslateDataReader

func (c *InternalClient) IndexTranslateDataReader(ctx context.Context, index string, partitionID int) (io.ReadCloser, error)

IndexTranslateDataReader returns a reader that provides a snapshot of translation data for a partition in an index.

func (*InternalClient) MatchFieldKeysNode

func (c *InternalClient) MatchFieldKeysNode(ctx context.Context, uri *pnet.URI, index string, field string, like string) (matches []uint64, err error)

func (*InternalClient) MaxShardByIndex

func (c *InternalClient) MaxShardByIndex(ctx context.Context) (map[string]uint64, error)

MaxShardByIndex returns the number of shards on a server by index.

func (*InternalClient) MutexCheck

func (c *InternalClient) MutexCheck(ctx context.Context, uri *pnet.URI, indexName string, fieldName string, details bool, limit int) (map[uint64]map[uint64][]uint64, error)

MutexCheck uses the mutex-check endpoint to request mutex collision data from a single node. It produces per-shard results, and does not translate them.

func (*InternalClient) Nodes

func (c *InternalClient) Nodes(ctx context.Context) ([]*disco.Node, error)

Nodes returns a list of all nodes.

func (*InternalClient) OAuthConfig

func (c *InternalClient) OAuthConfig() (rsp oauth2.Config, err error)

func (*InternalClient) PartitionNodes

func (c *InternalClient) PartitionNodes(ctx context.Context, partitionID int) ([]*disco.Node, error)

func (*InternalClient) PostSchema

func (c *InternalClient) PostSchema(ctx context.Context, uri *pnet.URI, s *Schema, remote bool) error

func (*InternalClient) Query

func (c *InternalClient) Query(ctx context.Context, index string, queryRequest *QueryRequest) (*QueryResponse, error)

Query executes query against the index.

func (*InternalClient) QueryNode

func (c *InternalClient) QueryNode(ctx context.Context, addr dax.Address, index string, queryRequest *QueryRequest) (*QueryResponse, error)

QueryNode executes query against the index, sending the request to the node specified.

func (*InternalClient) RetrieveShardFromURI

func (c *InternalClient) RetrieveShardFromURI(ctx context.Context, index, field, view string, shard uint64, uri pnet.URI) (io.ReadCloser, error)

RetrieveShardFromURI returns a ReadCloser which contains the data of the specified shard from the specified node. Caller *must* close the returned ReadCloser or risk leaking goroutines/tcp connections.
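For example (a sketch; the index, field, view, shard, and uri values are assumptions):

	rc, err := c.RetrieveShardFromURI(ctx, "repository", "language", "standard", 0, uri)
	if err != nil {
		return err
	}
	defer rc.Close() // required: avoids leaking goroutines and TCP connections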

func (*InternalClient) RetrieveTranslatePartitionFromURI

func (c *InternalClient) RetrieveTranslatePartitionFromURI(ctx context.Context, index string, partition int, uri pnet.URI) (io.ReadCloser, error)

RetrieveTranslatePartitionFromURI returns a ReadCloser which contains the data of the specified translate partition from the specified node. Caller *must* close the returned ReadCloser or risk leaking goroutines/tcp connections.

func (*InternalClient) Schema

func (c *InternalClient) Schema(ctx context.Context) ([]*IndexInfo, error)

Schema returns all index and field schema information.

func (*InternalClient) SchemaNode

func (c *InternalClient) SchemaNode(ctx context.Context, uri *pnet.URI, views bool) ([]*IndexInfo, error)

SchemaNode returns all index and field schema information from the specified node.

func (*InternalClient) SendMessage

func (c *InternalClient) SendMessage(ctx context.Context, uri *pnet.URI, msg []byte) error

SendMessage posts a message synchronously.

func (*InternalClient) SetInternalAPI

func (c *InternalClient) SetInternalAPI(api *API)

func (*InternalClient) ShardReader

func (c *InternalClient) ShardReader(ctx context.Context, index string, shard uint64) (io.ReadCloser, error)

ShardReader returns a reader that provides a snapshot of the current shard RBF data.

func (*InternalClient) StartTransaction

func (c *InternalClient) StartTransaction(ctx context.Context, id string, timeout time.Duration, exclusive bool) (*Transaction, error)

func (*InternalClient) Status

func (c *InternalClient) Status(ctx context.Context) (string, error)

Status returns the pilosa cluster state as a string ("NORMAL", "DEGRADED", "DOWN", ...).

func (*InternalClient) Transactions

func (c *InternalClient) Transactions(ctx context.Context) (map[string]*Transaction, error)

func (*InternalClient) TranslateIDsNode

func (c *InternalClient) TranslateIDsNode(ctx context.Context, uri *pnet.URI, index, field string, ids []uint64) ([]string, error)

TranslateIDsNode sends an id translation request to a specific node.

func (*InternalClient) TranslateKeysNode

func (c *InternalClient) TranslateKeysNode(ctx context.Context, uri *pnet.URI, index, field string, keys []string, writable bool) ([]uint64, error)

TranslateKeysNode function is mainly called to translate keys from primary node. If primary node returns 404 error the function wraps it with ErrTranslatingKeyNotFound.

type InternalClientOption

type InternalClientOption func(c *InternalClient)

func WithClientLogger

func WithClientLogger(log logger.Logger) InternalClientOption

func WithClientRetryPeriod

func WithClientRetryPeriod(period time.Duration) InternalClientOption

WithClientRetryPeriod sets the maximum total time the client will spend retrying failed requests using exponential backoff.

func WithPathPrefix

func WithPathPrefix(prefix string) InternalClientOption

WithPathPrefix sets the http path prefix.

func WithSecretKey

func WithSecretKey(secretKey string) InternalClientOption

WithSecretKey adds the secretKey used for inter-node communication when auth is enabled.

func WithSerializer

func WithSerializer(s Serializer) InternalClientOption

type KeyOrID

type KeyOrID struct {
	ID    uint64
	Key   string
	Keyed bool
}

func (KeyOrID) MarshalJSON

func (kid KeyOrID) MarshalJSON() ([]byte, error)

type LoadSchemaMessage

type LoadSchemaMessage struct{}

LoadSchemaMessage is an internal message used to inform a node to load the latest schema from etcd.

type MemoryUsage

type MemoryUsage struct {
	Capacity uint64 `json:"capacity"`
	TotalUse uint64 `json:"totalUsed"`
}

func GetMemoryUsage

func GetMemoryUsage() (MemoryUsage, error)

GetMemoryUsage gets the memory usage.

type Message

type Message interface{}

Message is the interface implemented by all core pilosa types which can be serialized to messages. TODO add at least a single "isMessage()" method.

type MessageProcessingError

type MessageProcessingError struct {
	Err error
}

MessageProcessingError is an error indicating that a cluster message could not be processed.

func (MessageProcessingError) Cause

func (err MessageProcessingError) Cause() error

Cause allows the error to be unwrapped.

func (MessageProcessingError) Error

func (err MessageProcessingError) Error() string

func (MessageProcessingError) Unwrap

func (err MessageProcessingError) Unwrap() error

Unwrap allows the error to be unwrapped.

type MultiTranslateEntryReader

type MultiTranslateEntryReader struct {
	// contains filtered or unexported fields
}

MultiTranslateEntryReader reads from multiple TranslateEntryReader instances and merges them into a single reader.

func NewMultiTranslateEntryReader

func NewMultiTranslateEntryReader(ctx context.Context, readers []TranslateEntryReader) *MultiTranslateEntryReader

NewMultiTranslateEntryReader returns a new instance of MultiTranslateEntryReader.

func (*MultiTranslateEntryReader) Close

func (r *MultiTranslateEntryReader) Close() error

Close stops the reader & child readers and waits for all goroutines to stop.

func (*MultiTranslateEntryReader) ReadEntry

func (r *MultiTranslateEntryReader) ReadEntry(entry *TranslateEntry) error

ReadEntry reads the next available entry into entry. Returns an error if any of the child readers error. Returns io.EOF if reader is closed.

type NameType

type NameType struct {
	Name     string
	DataType arrow.DataType
}

type NodeEvent

type NodeEvent struct {
	Event NodeEventType
	Node  *disco.Node
}

NodeEvent is a single event related to node activity in the cluster.

type NodeEventType

type NodeEventType int

NodeEventType are the types of node events.

const (
	NodeJoin NodeEventType = iota
	NodeLeave
	NodeUpdate
)

Constant node event types.

type NodeStateMessage

type NodeStateMessage struct {
	NodeID string `protobuf:"bytes,1,opt,name=NodeID,proto3" json:"NodeID,omitempty"`
	State  string `protobuf:"bytes,2,opt,name=State,proto3" json:"State,omitempty"`
}

NodeStateMessage is an internal message for broadcasting a node's state.

type NodeStatus

type NodeStatus struct {
	Node    *disco.Node
	Indexes []*IndexStatus
	Schema  *Schema
}

NodeStatus is an internal message representing the contents of a node.

type NopCommitor

type NopCommitor struct{}

func (*NopCommitor) Commit

func (c *NopCommitor) Commit() error

func (*NopCommitor) Rollback

func (c *NopCommitor) Rollback()

type NopSchemaAPI added in v3.32.0

type NopSchemaAPI struct{}

NopSchemaAPI is a no-op implementation of the SchemaAPI.

func (*NopSchemaAPI) ClusterName added in v3.32.0

func (n *NopSchemaAPI) ClusterName() string

func (*NopSchemaAPI) CreateDatabase added in v3.32.0

func (n *NopSchemaAPI) CreateDatabase(context.Context, *dax.Database) error

func (*NopSchemaAPI) CreateField added in v3.32.0

func (n *NopSchemaAPI) CreateField(ctx context.Context, tname dax.TableName, fld *dax.Field) error

func (*NopSchemaAPI) CreateTable added in v3.32.0

func (n *NopSchemaAPI) CreateTable(ctx context.Context, tbl *dax.Table) error

func (*NopSchemaAPI) DatabaseByID added in v3.32.0

func (n *NopSchemaAPI) DatabaseByID(ctx context.Context, dbid dax.DatabaseID) (*dax.Database, error)

func (*NopSchemaAPI) DatabaseByName added in v3.32.0

func (n *NopSchemaAPI) DatabaseByName(ctx context.Context, dbname dax.DatabaseName) (*dax.Database, error)

func (*NopSchemaAPI) Databases added in v3.32.0

func (n *NopSchemaAPI) Databases(context.Context, ...dax.DatabaseID) ([]*dax.Database, error)

func (*NopSchemaAPI) DeleteField added in v3.32.0

func (n *NopSchemaAPI) DeleteField(ctx context.Context, tname dax.TableName, fname dax.FieldName) error

func (*NopSchemaAPI) DeleteTable added in v3.32.0

func (n *NopSchemaAPI) DeleteTable(ctx context.Context, tname dax.TableName) error

func (*NopSchemaAPI) DropDatabase added in v3.32.0

func (n *NopSchemaAPI) DropDatabase(context.Context, dax.DatabaseID) error

func (*NopSchemaAPI) SetDatabaseOption added in v3.32.0

func (n *NopSchemaAPI) SetDatabaseOption(ctx context.Context, dbid dax.DatabaseID, option string, value string) error

func (*NopSchemaAPI) TableByID added in v3.32.0

func (n *NopSchemaAPI) TableByID(ctx context.Context, tid dax.TableID) (*dax.Table, error)

func (*NopSchemaAPI) TableByName added in v3.32.0

func (n *NopSchemaAPI) TableByName(ctx context.Context, tname dax.TableName) (*dax.Table, error)

func (*NopSchemaAPI) Tables added in v3.32.0

func (n *NopSchemaAPI) Tables(ctx context.Context) ([]*dax.Table, error)

type NopSystemAPI added in v3.29.0

type NopSystemAPI struct{}

NopSystemAPI is a no-op implementation of the SystemAPI.

func (*NopSystemAPI) ClusterName added in v3.29.0

func (napi *NopSystemAPI) ClusterName() string

func (*NopSystemAPI) ClusterNodeCount added in v3.29.0

func (napi *NopSystemAPI) ClusterNodeCount() int

func (*NopSystemAPI) ClusterNodes added in v3.29.0

func (napi *NopSystemAPI) ClusterNodes() []ClusterNode

func (*NopSystemAPI) ClusterReplicaCount added in v3.29.0

func (napi *NopSystemAPI) ClusterReplicaCount() int

func (*NopSystemAPI) ClusterState added in v3.29.0

func (napi *NopSystemAPI) ClusterState() string

func (*NopSystemAPI) DataDir added in v3.29.0

func (napi *NopSystemAPI) DataDir() string

func (*NopSystemAPI) NodeID added in v3.29.0

func (napi *NopSystemAPI) NodeID() string

func (*NopSystemAPI) PlatformDescription added in v3.29.0

func (napi *NopSystemAPI) PlatformDescription() string

func (*NopSystemAPI) PlatformVersion added in v3.29.0

func (napi *NopSystemAPI) PlatformVersion() string

func (*NopSystemAPI) ShardWidth added in v3.29.0

func (napi *NopSystemAPI) ShardWidth() int

func (*NopSystemAPI) Version added in v3.29.0

func (napi *NopSystemAPI) Version() string

type NotFoundError

type NotFoundError error

NotFoundError wraps an error value to signify that a resource was not found such that in an HTTP scenario, http.StatusNotFound would be returned.

type OpenIDAllocatorFunc

type OpenIDAllocatorFunc func(path string, enableFsync bool) (*idAllocator, error) // whyyyyyyyyy

type OpenTransactionStoreFunc

type OpenTransactionStoreFunc func(path string) (TransactionStore, error)

type OpenTranslateReaderFunc

type OpenTranslateReaderFunc func(ctx context.Context, nodeURL string, offsets TranslateOffsetMap) (TranslateEntryReader, error)

OpenTranslateReaderFunc represents a function for instantiating and opening a TranslateEntryReader.

func GetOpenTranslateReaderFunc

func GetOpenTranslateReaderFunc(client *http.Client) OpenTranslateReaderFunc

func GetOpenTranslateReaderWithLockerFunc

func GetOpenTranslateReaderWithLockerFunc(client *http.Client, locker sync.Locker) OpenTranslateReaderFunc

type OpenTranslateStoreFunc

type OpenTranslateStoreFunc func(path, index, field string, partitionID, partitionN int, fsyncEnabled bool) (TranslateStore, error)

OpenTranslateStoreFunc represents a function for instantiating and opening a TranslateStore.

type Pair

type Pair struct {
	ID    uint64 `json:"id"`
	Key   string `json:"key"`
	Count uint64 `json:"count"`
}

Pair holds an id/count pair.

type PairField

type PairField struct {
	Pair  Pair
	Field string
}

PairField is a Pair with its associated field.

func (PairField) Clone

func (p PairField) Clone() (r PairField)

func (PairField) MarshalJSON

func (p PairField) MarshalJSON() ([]byte, error)

MarshalJSON marshals PairField into a JSON-encoded byte slice, excluding `Field`.

func (PairField) ToRows

func (p PairField) ToRows(callback func(*pb.RowResponse) error) error

ToRows implements the ToRowser interface.

func (PairField) ToTable

func (p PairField) ToTable() (*pb.TableResponse, error)

ToTable implements the ToTabler interface.

type Pairs

type Pairs []Pair

Pairs is a sortable slice of Pair objects.

func (Pairs) Add

func (p Pairs) Add(other []Pair) []Pair

Add merges other into p and returns a new slice.

func (Pairs) Keys

func (p Pairs) Keys() []uint64

Keys returns a slice of all keys in p.

func (Pairs) Len

func (p Pairs) Len() int

func (Pairs) Less

func (p Pairs) Less(i, j int) bool

func (*Pairs) Pop

func (p *Pairs) Pop() interface{}

Pop removes the minimum element from the Pair slice.

func (*Pairs) Push

func (p *Pairs) Push(x interface{})

Push appends the element onto the Pair slice.

func (Pairs) String

func (p Pairs) String() string

func (Pairs) Swap

func (p Pairs) Swap(i, j int)

type PairsField

type PairsField struct {
	Pairs []Pair
	Field string
}

PairsField is a Pairs object with its associated field.

func (*PairsField) Clone

func (p *PairsField) Clone() (r *PairsField)

func (PairsField) MarshalJSON

func (p PairsField) MarshalJSON() ([]byte, error)

MarshalJSON marshals PairsField into a JSON-encoded byte slice, excluding `Field`.

func (*PairsField) ToRows

func (p *PairsField) ToRows(callback func(*pb.RowResponse) error) error

ToRows implements the ToRowser interface.

func (*PairsField) ToTable

func (p *PairsField) ToTable() (*pb.TableResponse, error)

ToTable implements the ToTabler interface.

type PastQueryStatus

type PastQueryStatus struct {
	PQL       string        `json:"PQL"`
	SQL       string        `json:"SQL,omitempty"`
	Node      string        `json:"nodeID"`
	Index     string        `json:"index"`
	Start     time.Time     `json:"start"`
	Runtime   time.Duration `json:"runtime"` // deprecated
	RuntimeNs time.Duration `json:"runtimeNanoseconds"`
}

type PerformanceCounter added in v3.29.0

type PerformanceCounter struct {
	NameSpace   string
	SubSystem   string
	CounterName string
	Help        string
	Value       int64
	CounterType int64
}

PerformanceCounter holds data about a performance counter for external consumers.

type PerformanceCounters added in v3.29.0

type PerformanceCounters struct {
	// contains filtered or unexported fields
}
var PerfCounters *PerformanceCounters = newPerformanceCounters()

func (*PerformanceCounters) ListCounters added in v3.29.0

func (p *PerformanceCounters) ListCounters() ([]PerformanceCounter, error)

ListCounters lists all the counters. We can just read here without locking because, if the counters get changed midway through the loop, absent evidence to the contrary, the world will not end.

type PreconditionFailedError

type PreconditionFailedError struct {
	// contains filtered or unexported fields
}

type Qcx

type Qcx struct {
	Grp *TxGroup
	Txf *TxFactory

	// RequiredForAtomicWriteTx is used by api.ImportAtomicRecord
	// to ensure that all writes happen on this one Tx.
	RequiredForAtomicWriteTx *Tx

	// efficient access to the options for RequiredForAtomicWriteTx
	RequiredTxo *Txo
	// contains filtered or unexported fields
}

Qcx is a (Pilosa) Query Context.

It flexibly expresses the desired grouping of Tx for mass rollback at a query's end. It provides one-time commit for an atomic import write Tx that involves multiple fragments.

The most common use of Qcx is to call GetTx() to obtain a Tx locally, once the index/shard pair is known:

	someFunc(qcx *Qcx, idx *Index, shard uint64) (err0 error) {
		tx, finisher, err := qcx.GetTx(Txo{Write: true, Index: idx, Shard: shard, ...})
		if err != nil {
			return err
		}
		defer finisher(&err0)
		...
	}

Qcx reuses read-only Tx on the same index/shard pair. See Qcx.GetTx() for further discussion. The caveat is of course that your "new" read Tx actually has an "old" view of the database.

At the moment, most writes to individual shards are committed eagerly and locally when the `defer finisher(&err0)` is run. This is done by returning a finisher that actually Commits, thus freeing the one write slot for re-use. A single writer is also required by RBF, so this design accommodates both.

In contrast, the default read Tx generated (or re-used) will return a no-op finisher, and the group of reads as a whole will be rolled back (mmap memory released) en masse when Qcx.Abort() is called at the top-most level.

Local use of a (Tx, finisher) pair obtained from Qcx.GetTx() doesn't need to care about these details. Local use should always invoke finisher(&err0) or finisher(nil) to complete the Tx within the local function scope.

In summary, write Tx are typically "local" and are never saved into the TxGroup. The parallelism supplied by TxGroup typically applies only to read Tx.

The one exception to this rule is the one write Tx used during the api.ImportAtomicRecord routine. There we make a special write Tx and use it for all matching writes. This is then committed at the final, top-level Qcx.Finish() call.

See also the Qcx.GetTx() example and the TxGroup description below.

func (*Qcx) Abort

func (q *Qcx) Abort()

Abort rolls back all Tx generated and stored within the Qcx. The Qcx is then reset and can be used again immediately.

func (*Qcx) Finish

func (q *Qcx) Finish() (err error)

Finish commits/rolls back all stored Tx. It no longer resets the Qcx for further operations automatically; the user must call Reset() or NewQcx() again.

func (*Qcx) GetTx

func (qcx *Qcx) GetTx(o Txo) (tx Tx, finisher func(perr *error), err error)

GetTx is used like this:

someFunc(ctx context.Context, shard uint64) (_ interface{}, err0 error) {

	tx, finisher, err := qcx.GetTx(Txo{Write: !writable, Index: idx, Shard: shard})
	if err != nil {
		return nil, err
	}
	defer finisher(&err0)

	return e.executeIncludesColumnCallShard(ctx, tx, index, c, shard, col)
}

Note that we are tracking the returned err0 error value of someFunc(). An alternative is to say

defer finisher(nil)

This means always Commit writes, ignoring if there were errors. This style is expected to be rare compared to the typical

defer finisher(&err0)

invocation, where err0 is the error returned from the enclosing function. If the Tx is local and not part of a group, then the finisher consults that error to decide whether to Commit() or Rollback().

If instead the Tx becomes part of a group, then the local finisher() is always a no-op, in deference to the Qcx.Finish() or Qcx.Abort() calls.

Take care that finisher(&err) captures the address of the enclosing function's err and that it has not been shadowed locally by another _, err := f() call. For this reason, it can be clearer (and much safer) to rename the enclosing function's 'err' to 'err0', making it clear that we are referring to the first and final error.
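Putting those pieces together, a sketch of the safe pattern with a named err0 return (doWork is an illustrative name):

	func doWork(qcx *Qcx, idx *Index, shard uint64) (err0 error) {
		tx, finisher, err := qcx.GetTx(Txo{Write: true, Index: idx, Shard: shard})
		if err != nil {
			return err
		}
		defer finisher(&err0) // captures the named return, not a shadowed err
		// ... use tx ...
		_ = tx
		return nil
	}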

func (*Qcx) ListOpenTx

func (qcx *Qcx) ListOpenTx() string

func (*Qcx) Reset

func (q *Qcx) Reset()

Reset forgets everything and starts fresh with an empty group, ready for use again as if NewQcx() had been called.

func (*Qcx) StartAtomicWriteTx

func (qcx *Qcx) StartAtomicWriteTx(o Txo)

StartAtomicWriteTx allocates a Tx and stores it in qcx.RequiredForAtomicWriteTx. All subsequent writes to this shard/index will re-use it.

type QueryAPI

type QueryAPI interface {
	Query(ctx context.Context, req *QueryRequest) (QueryResponse, error)
}

QueryAPI is a subset of the API methods which have to do with query.

type QueryRequest

type QueryRequest struct {
	// Index to execute query against.
	Index string

	// The query string to parse and execute.
	Query string

	// The SQL source query, if applicable.
	SQLQuery string

	// The shards to include in the query execution.
	// If empty, all shards are included.
	Shards []uint64

	// If true, indicates that query is part of a larger distributed query.
	// If false, this request is on the originating node.
	Remote bool

	// Query has already been translated. This is only used if Remote
	// is false, Remote=true implies this.
	PreTranslated bool

	// Should we profile this query?
	Profile bool

	// Additional data associated with the query, in cases where there's
	// row-style inputs for precomputed values.
	EmbeddedData []*Row

	// Limit on memory used by request (Extract() only)
	MaxMemory int64
}

QueryRequest represent a request to process a query.

type QueryResponse

type QueryResponse struct {
	// Result for each top-level query call.
	// The result type differs depending on the query; types
	// include: Row, RowIdentifiers, GroupCounts, SignedRow,
	// ValCount, Pair, Pairs, bool, uint64.
	Results []interface{}

	// Error during parsing or execution.
	Err error

	// Profiling data, if any
	Profile *tracing.Profile
}

QueryResponse represent a response from a processed query.

func (*QueryResponse) MarshalJSON

func (resp *QueryResponse) MarshalJSON() ([]byte, error)

MarshalJSON marshals QueryResponse into a JSON-encoded byte slice

func (*QueryResponse) SameAs

func (qr *QueryResponse) SameAs(other *QueryResponse) error

SameAs compares one QueryResponse to another and returns nil if the Results and Err are both identical, or a descriptive error if they differ. This function replaces using reflect.DeepEqual directly on the QueryResponse, since QueryResponse contains an error field and reflect.DeepEqual should not be used on errors.

type RBFTx

type RBFTx struct {
	Db *RbfDBWrapper
	// contains filtered or unexported fields
}

func (*RBFTx) Add

func (tx *RBFTx) Add(index, field, view string, shard uint64, a ...uint64) (changeCount int, err error)

Add sets all the a bits hot in the specified fragment.

func (*RBFTx) ApplyFilter

func (tx *RBFTx) ApplyFilter(index, field, view string, shard uint64, ckey uint64, filter roaring.BitmapFilter) (err error)

func (*RBFTx) ApplyRewriter

func (tx *RBFTx) ApplyRewriter(index, field, view string, shard uint64, ckey uint64, filter roaring.BitmapRewriter) (err error)

func (*RBFTx) Commit

func (tx *RBFTx) Commit() (err error)

func (*RBFTx) Container

func (tx *RBFTx) Container(index, field, view string, shard uint64, key uint64) (*roaring.Container, error)

func (*RBFTx) ContainerIterator

func (tx *RBFTx) ContainerIterator(index, field, view string, shard uint64, key uint64) (citer roaring.ContainerIterator, found bool, err error)

func (*RBFTx) Contains

func (tx *RBFTx) Contains(index, field, view string, shard uint64, v uint64) (exists bool, err error)

func (*RBFTx) Count

func (tx *RBFTx) Count(index, field, view string, shard uint64) (uint64, error)

func (*RBFTx) CountRange

func (tx *RBFTx) CountRange(index, field, view string, shard uint64, start, end uint64) (n uint64, err error)

CountRange returns the count of hot bits in the [start, end) range on the fragment. roaring.countRange counts the number of bits set between [start, end).

func (*RBFTx) DBPath

func (tx *RBFTx) DBPath() string

func (*RBFTx) GetFieldSizeBytes

func (tx *RBFTx) GetFieldSizeBytes(index, field string) (uint64, error)

func (*RBFTx) GetSortedFieldViewList

func (tx *RBFTx) GetSortedFieldViewList(idx *Index, shard uint64) (fvs []txkey.FieldView, err error)

func (*RBFTx) ImportRoaringBits

func (tx *RBFTx) ImportRoaringBits(index, field, view string, shard uint64, rit roaring.RoaringIterator, clear bool, log bool, rowSize uint64) (changed int, rowSet map[uint64]int, err error)

func (*RBFTx) Max

func (tx *RBFTx) Max(index, field, view string, shard uint64) (uint64, error)

func (*RBFTx) Min

func (tx *RBFTx) Min(index, field, view string, shard uint64) (uint64, bool, error)

func (*RBFTx) OffsetRange

func (tx *RBFTx) OffsetRange(index, field, view string, shard uint64, offset, start, end uint64) (*roaring.Bitmap, error)

func (*RBFTx) PutContainer

func (tx *RBFTx) PutContainer(index, field, view string, shard uint64, key uint64, c *roaring.Container) error

func (*RBFTx) Remove

func (tx *RBFTx) Remove(index, field, view string, shard uint64, a ...uint64) (changeCount int, err error)

Remove clears all the specified a bits in the chosen fragment.

func (*RBFTx) RemoveContainer

func (tx *RBFTx) RemoveContainer(index, field, view string, shard uint64, key uint64) error

func (*RBFTx) Removed added in v3.35.0

func (tx *RBFTx) Removed(index, field, view string, shard uint64, a ...uint64) (changed []uint64, err error)

Removed clears the specified bits and tells you which ones it actually removed.

func (*RBFTx) RoaringBitmap

func (tx *RBFTx) RoaringBitmap(index, field, view string, shard uint64) (*roaring.Bitmap, error)

func (*RBFTx) Rollback

func (tx *RBFTx) Rollback()

func (*RBFTx) SnapshotReader

func (tx *RBFTx) SnapshotReader() (io.Reader, error)

SnapshotReader returns a reader that provides a snapshot of the current database.

func (*RBFTx) Type

func (tx *RBFTx) Type() string

type RbfDBWrapper

type RbfDBWrapper struct {
	// contains filtered or unexported fields
}

RbfDBWrapper wraps an *rbf.DB.

func (*RbfDBWrapper) CleanupTx

func (w *RbfDBWrapper) CleanupTx(tx Tx)

func (*RbfDBWrapper) Close

func (w *RbfDBWrapper) Close() error

func (*RbfDBWrapper) CloseDB

func (w *RbfDBWrapper) CloseDB() error

func (*RbfDBWrapper) DeleteField

func (w *RbfDBWrapper) DeleteField(index, field, fieldPath string) error

func (*RbfDBWrapper) DeleteFragment

func (w *RbfDBWrapper) DeleteFragment(index, field, view string, shard uint64, frag interface{}) error

func (*RbfDBWrapper) DeleteIndex

func (w *RbfDBWrapper) DeleteIndex(indexName string) error

func (*RbfDBWrapper) HasData

func (w *RbfDBWrapper) HasData() (has bool, err error)

func (*RbfDBWrapper) NewTx

func (w *RbfDBWrapper) NewTx(write bool, initialIndex string, o Txo) (_ Tx, err error)

func (*RbfDBWrapper) OpenDB

func (w *RbfDBWrapper) OpenDB() error

func (*RbfDBWrapper) OpenListString

func (w *RbfDBWrapper) OpenListString() (r string)

func (*RbfDBWrapper) Path

func (w *RbfDBWrapper) Path() string

func (*RbfDBWrapper) SetHolder

func (w *RbfDBWrapper) SetHolder(h *Holder)

type RecalculateCaches

type RecalculateCaches struct{}

RecalculateCaches is an internal message for recalculating all caches within a holder.

type RedirectError

type RedirectError struct {
	HostPort string
	// contains filtered or unexported fields
}

func (RedirectError) Error

func (r RedirectError) Error() string

type RoaringUpdate

type RoaringUpdate struct {
	Field string
	View  string

	// Clear is a roaring encoded bitmatrix of bits to clear. For
	// mutex or int-like fields, only the first row is looked at and
	// the bits in that row are cleared from every row.
	Clear []byte

	// Set is the roaring encoded bitmatrix of bits to set. If this is
	// a mutex or int-like field, we'll assume the first shard width
	// of containers is the exists row and we will first clear all
	// bits in those columns and then set
	Set []byte

	// ClearRecords, when true, denotes that Clear should be
	// interpreted as a single row which will be subtracted from every
	// row in this view.
	ClearRecords bool
}

RoaringUpdate represents the bits to clear and then set in a particular view.

type Row

type Row struct {
	Segments []RowSegment

	// String keys translated to/from segment columns.
	Keys []string

	// Index tells what index this row is from - needed for key translation.
	Index string

	// Field tells what field this row is from if it's a "vertical"
	// row. It may be the result of a Distinct query or Rows
	// query. Knowing the index and field, we can figure out how to
	// interpret the row data.
	Field string

	// NoSplit indicates that this row may not be split.
	// This is used for `Rows` calls in a GroupBy.
	NoSplit bool
}

Row is a set of integers (the associated columns).

func NewRow

func NewRow(columns ...uint64) *Row

NewRow returns a new instance of Row.
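
A small sketch of the set semantics on rows, using methods documented below (same assumed import path as the earlier sketch; the round-trip through the roaring encoding is assumed to be lossless):

package main

import (
	"fmt"

	pilosa "github.com/featurebasedb/featurebase/v3"
)

func main() {
	a := pilosa.NewRow(1, 2, 3)
	b := pilosa.NewRow(2, 3, 4)

	fmt.Println(a.Union(b).Columns())      // [1 2 3 4]
	fmt.Println(a.Intersect(b).Columns())  // [2 3]
	fmt.Println(a.Difference(b).Columns()) // [1]
	fmt.Println(a.Count())                 // 3

	// Round-trip through the roaring encoding.
	c := pilosa.NewRowFromRoaring(a.Roaring())
	fmt.Println(c.Count()) // 3
}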

func NewRowFromBitmap

func NewRowFromBitmap(b *roaring.Bitmap) *Row

NewRowFromBitmap divides a bitmap into rows, which it then treats as shards. This is a transposition: data that was in any shard for row 0 is now considered shard 0, and so on.

func NewRowFromRoaring

func NewRowFromRoaring(data []byte) *Row

NewRowFromRoaring parses a roaring data file as a row, dividing it into bitmaps and rowSegments based on shard width.

func (*Row) Any

func (r *Row) Any() bool

Any returns true if row contains any bits.

func (*Row) Clone

func (r *Row) Clone() (clone *Row)

func (*Row) Columns

func (r *Row) Columns() []uint64

Columns returns the columns in r as a slice of uint64.

func (*Row) Count

func (r *Row) Count() uint64

Count returns the number of columns in the row.

func (*Row) Difference

func (r *Row) Difference(others ...*Row) *Row

Difference returns the set difference of r and others (the bits set in r but in none of the others).

func (*Row) Freeze

func (r *Row) Freeze()

func (*Row) Hash

func (r *Row) Hash() uint64

Hash calculates a checksum of the row; this can be useful in a block hash join.

func (*Row) Includes

func (r *Row) Includes(col uint64) bool

Includes returns true if the row contains the given column.

func (*Row) Intersect

func (r *Row) Intersect(other *Row) *Row

Intersect returns the intersection of r and other.

func (*Row) IsEmpty

func (r *Row) IsEmpty() bool

IsEmpty returns true if the row doesn't contain any set bits.

func (*Row) MarshalJSON

func (r *Row) MarshalJSON() ([]byte, error)

MarshalJSON returns a JSON-encoded byte slice of r.

func (*Row) Merge

func (r *Row) Merge(other *Row)

Merge merges data from other into r.

func (*Row) Roaring

func (r *Row) Roaring() []byte

Roaring returns the row treated as a unified roaring bitmap.

func (*Row) SetBit

func (r *Row) SetBit(i uint64) (changed bool)

SetBit sets the i-th column of the row.

func (*Row) ShardColumns

func (r *Row) ShardColumns() []int64

func (*Row) Shift

func (r *Row) Shift(n int64) (*Row, error)

Shift returns the bitwise shift of r by n bits. Currently only positive shift values are supported.

NOTE: the Shift method is currently unsupported and is considered to be incorrect. Please DO NOT use it. We are leaving it here in case someone internally wants to use it with the understanding that the results may be incorrect.

Why unsupported? For a full description, see: https://github.com/featurebasedb/pilosa/issues/403. In short, the current implementation will shift a bit at the edge of a shard out of the shard and into a container which is assumed to be an invalid container for the shard. So for example, shifting the last bit of shard 0 (containers 0-15) will shift that bit out to container 16. While this "sort of" works, it breaks an assumption about containers, and might stop working in the future if that assumption is enforced.

func (*Row) ToRows

func (r *Row) ToRows(callback func(*pb.RowResponse) error) error

ToRows implements the ToRowser interface.

func (*Row) ToTable

func (r *Row) ToTable() (*pb.TableResponse, error)

ToTable implements the ToTabler interface.

func (*Row) Union

func (r *Row) Union(others ...*Row) *Row

Union returns the bitwise union of r and the given rows.

func (*Row) Xor

func (r *Row) Xor(other *Row) *Row

Xor returns the xor of r and other.

type RowIDs

type RowIDs []uint64

RowIDs is a query return type for just uint64 row ids. It should only be used internally (since RowIdentifiers is the external return type), but it is exported because the proto package needs access to it.

func (RowIDs) Merge

func (r RowIDs) Merge(other RowIDs, limit int) RowIDs

type RowIdentifiers

type RowIdentifiers struct {
	Rows  []uint64 `json:"rows"`
	Keys  []string `json:"keys,omitempty"`
	Field string
}

RowIdentifiers is a return type for a list of row ids or row keys. The names `Rows` and `Keys` are meant to follow the same convention as the Row query which returns `Columns` and `Keys`. TODO: Rename this to something better. Anything.

func (*RowIdentifiers) Clone

func (r *RowIdentifiers) Clone() (clone *RowIdentifiers)

func (RowIdentifiers) ToRows

func (r RowIdentifiers) ToRows(callback func(*proto.RowResponse) error) error

ToRows implements the ToRowser interface.

func (RowIdentifiers) ToTable

func (r RowIdentifiers) ToTable() (*proto.TableResponse, error)

ToTable implements the ToTabler interface.

type RowKV

type RowKV struct {
	RowID uint64      `json:"id"`
	Value interface{} `json:"value"`
}

func (*RowKV) Compare

func (r *RowKV) Compare(o RowKV, desc bool) (bool, bool)

type RowSegment

type RowSegment struct {
	// contains filtered or unexported fields
}

RowSegment holds a subset of a row. This could point to a mmapped roaring bitmap or an in-memory bitmap. The width of the segment will always match the shard width.

func (*RowSegment) ClearBit

func (s *RowSegment) ClearBit(i uint64) (changed bool)

ClearBit clears the i-th column of the row.

func (*RowSegment) Columns

func (s *RowSegment) Columns() []uint64

Columns returns a list of all columns set in the segment.

func (*RowSegment) Count

func (s *RowSegment) Count() uint64

Count returns the number of set columns in the row.

func (*RowSegment) Difference

func (s *RowSegment) Difference(others ...*RowSegment) *RowSegment

Difference returns the set difference of s and others.

func (*RowSegment) Freeze

func (s *RowSegment) Freeze()

func (*RowSegment) Intersect

func (s *RowSegment) Intersect(other *RowSegment) *RowSegment

Intersect returns the intersection of s and other.

func (*RowSegment) IntersectionCount

func (s *RowSegment) IntersectionCount(other *RowSegment) uint64

IntersectionCount returns the number of intersections between s and other.

func (*RowSegment) InvalidateCount

func (s *RowSegment) InvalidateCount()

InvalidateCount updates the cached count in the row.

func (*RowSegment) Merge

func (s *RowSegment) Merge(other *RowSegment)

Merge adds chunks from other to s. Chunks in s are overwritten if they exist in other.

func (*RowSegment) SetBit

func (s *RowSegment) SetBit(i uint64) (changed bool)

SetBit sets the i-th column of the row.

func (*RowSegment) Shard

func (s *RowSegment) Shard() uint64

func (*RowSegment) ShardColumns

func (s *RowSegment) ShardColumns() []int64

ShardColumns returns a list of all columns set in the segment, normalized to the range [0, shardwidth).

func (*RowSegment) Shift

func (s *RowSegment) Shift() (*RowSegment, error)

Shift returns s shifted by 1 bit.

func (*RowSegment) Union

func (s *RowSegment) Union(others ...*RowSegment) *RowSegment

Union returns the bitwise union of s and others.

func (*RowSegment) Xor

func (s *RowSegment) Xor(other *RowSegment) *RowSegment

Xor returns the xor of s and other.

type Schema

type Schema struct {
	Indexes []*IndexInfo `json:"indexes"`
}

Schema contains information about indexes and their configuration.

type SchemaAPI

type SchemaAPI interface {
	CreateDatabase(context.Context, *dax.Database) error
	DropDatabase(context.Context, dax.DatabaseID) error

	DatabaseByName(ctx context.Context, dbname dax.DatabaseName) (*dax.Database, error)
	DatabaseByID(ctx context.Context, dbid dax.DatabaseID) (*dax.Database, error)
	SetDatabaseOption(ctx context.Context, dbid dax.DatabaseID, option string, value string) error
	Databases(context.Context, ...dax.DatabaseID) ([]*dax.Database, error)

	TableByName(ctx context.Context, tname dax.TableName) (*dax.Table, error)
	TableByID(ctx context.Context, tid dax.TableID) (*dax.Table, error)
	Tables(ctx context.Context) ([]*dax.Table, error)

	CreateTable(ctx context.Context, tbl *dax.Table) error
	CreateField(ctx context.Context, tname dax.TableName, fld *dax.Field) error

	DeleteTable(ctx context.Context, tname dax.TableName) error
	DeleteField(ctx context.Context, tname dax.TableName, fname dax.FieldName) error
}

type Serializer

type Serializer interface {
	Marshal(Message) ([]byte, error)
	Unmarshal([]byte, Message) error
}

Serializer is an interface for serializing pilosa types to bytes and back.

var GobSerializer Serializer = &gobSerializer{}

GobSerializer represents a Serializer that uses gob encoding. This is only used in tests; there's really no reason to use this instead of the proto serializer except that, as it's currently implemented, the proto serializer can't be used in internal tests (i.e. tests in the pilosa package) because the proto package imports the pilosa package, so it would result in circular imports. We really need all the pilosa types to be in a sub-package of pilosa, so that both proto and pilosa can import them without resulting in circular imports.

var NopSerializer Serializer = &nopSerializer{}

NopSerializer represents a Serializer that doesn't do anything.

type Server

type Server struct {
	SystemLayer SystemLayerAPI
	// contains filtered or unexported fields
}

Server represents a holder wrapped by a running HTTP server.

func NewServer

func NewServer(opts ...ServerOption) (*Server, error)

NewServer returns a new instance of Server.
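
A hedged sketch of constructing a server with functional options (the options shown are documented below under ServerOption; whether this minimal option set is enough to fully boot a node depends on defaults not covered here):

package main

import (
	"log"
	"time"

	pilosa "github.com/featurebasedb/featurebase/v3"
)

func main() {
	srv, err := pilosa.NewServer(
		pilosa.OptServerDataDir("/tmp/featurebase-data"),
		pilosa.OptServerReplicaN(1),
		pilosa.OptServerLongQueryTime(30*time.Second),
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := srv.Open(); err != nil {
		log.Fatal(err)
	}
	defer srv.Close()
}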

func (*Server) Close

func (s *Server) Close() error

Close closes the server and waits for it to shutdown.

func (*Server) CompileExecutionPlan

func (s *Server) CompileExecutionPlan(ctx context.Context, q string) (planner_types.PlanOperator, error)

CompileExecutionPlan parses and compiles an execution plan from a SQL statement using a new parser and planner.

func (*Server) FinishTransaction

func (srv *Server) FinishTransaction(ctx context.Context, id string, remote bool) (*Transaction, error)

func (*Server) GRPCURI

func (s *Server) GRPCURI() pnet.URI

func (*Server) GetTransaction

func (srv *Server) GetTransaction(ctx context.Context, id string, remote bool) (*Transaction, error)

func (*Server) Holder

func (s *Server) Holder() *Holder

Holder returns the holder for server.

func (*Server) InternalClient

func (s *Server) InternalClient() *InternalClient

func (*Server) IsPrimary

func (s *Server) IsPrimary() bool

IsPrimary reports whether this node is currently the primary.

func (*Server) NodeID

func (s *Server) NodeID() string

NodeID returns the server's node id.

func (*Server) Open

func (s *Server) Open() error

Open opens and initializes the server.

func (*Server) RehydratePlanOperator added in v3.29.0

func (s *Server) RehydratePlanOperator(ctx context.Context, reader io.Reader) (planner_types.PlanOperator, error)

func (*Server) SendAsync

func (s *Server) SendAsync(m Message) error

SendAsync represents an implementation of Broadcaster.

func (*Server) SendSync

func (s *Server) SendSync(m Message) error

SendSync represents an implementation of Broadcaster.

func (*Server) SendTo

func (s *Server) SendTo(node *disco.Node, m Message) error

SendTo represents an implementation of Broadcaster.

func (*Server) SetAPI

func (s *Server) SetAPI(api *API)

func (*Server) StartTransaction

func (srv *Server) StartTransaction(ctx context.Context, id string, timeout time.Duration, exclusive bool, remote bool) (*Transaction, error)

func (*Server) Transactions

func (srv *Server) Transactions(ctx context.Context) (map[string]*Transaction, error)

func (*Server) UpAndDown

func (s *Server) UpAndDown() error

UpAndDown brings the server up minimally and shuts it down again; basically, it exists for testing holder open and close.

func (*Server) ViewsRemoval

func (s *Server) ViewsRemoval(ctx context.Context)

ViewsRemoval removes views based on these criteria: 1. views that are older than the specified TTL; 2. the "standard" view of a field, if the field's "noStandardView" option is set to true.

type ServerOption

type ServerOption func(s *Server) error

ServerOption is a functional option type for pilosa.Server.
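
Because a ServerOption is just a func(*Server) error, options compose naturally. A hypothetical helper (not part of this package) illustrating that:

package pilosaopts

import pilosa "github.com/featurebasedb/featurebase/v3"

// composeOptions bundles several ServerOptions into one, applying them
// in order and stopping at the first error. Purely illustrative.
func composeOptions(opts ...pilosa.ServerOption) pilosa.ServerOption {
	return func(s *pilosa.Server) error {
		for _, o := range opts {
			if err := o(s); err != nil {
				return err
			}
		}
		return nil
	}
}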

func OptServerClusterHasher

func OptServerClusterHasher(h disco.Hasher) ServerOption

OptServerClusterHasher is a functional option on Server used to specify the consistent hash algorithm for data location within the cluster.

func OptServerClusterName

func OptServerClusterName(name string) ServerOption

OptServerClusterName sets the human-readable cluster name.

func OptServerDataDir

func OptServerDataDir(dir string) ServerOption

OptServerDataDir is a functional option on Server used to set the data directory.

func OptServerDataframeUseParquet added in v3.27.0

func OptServerDataframeUseParquet(is bool) ServerOption

func OptServerDiagnosticsInterval

func OptServerDiagnosticsInterval(dur time.Duration) ServerOption

OptServerDiagnosticsInterval is a functional option on Server used to specify the duration between diagnostic checks.

func OptServerDisCo

func OptServerDisCo(disCo disco.DisCo,
	noder disco.Noder,
	sharder disco.Sharder,
	schemator disco.Schemator,
) ServerOption

OptServerDisCo is a functional option on Server used to set the Distributed Consensus implementation.

func OptServerExecutionPlannerFn

func OptServerExecutionPlannerFn(fn ExecutionPlannerFn) ServerOption

func OptServerExecutorPoolSize

func OptServerExecutorPoolSize(size int) ServerOption

func OptServerGCNotifier

func OptServerGCNotifier(gcn GCNotifier) ServerOption

OptServerGCNotifier is a functional option on Server used to set the garbage collection notification source.

func OptServerGRPCURI

func OptServerGRPCURI(uri *pnet.URI) ServerOption

OptServerGRPCURI is a functional option on Server used to set the server gRPC URI.

func OptServerInternalClient

func OptServerInternalClient(c *InternalClient) ServerOption

OptServerInternalClient is a functional option on Server used to set the implementation of InternalClient.

func OptServerIsComputeNode

func OptServerIsComputeNode(is bool) ServerOption

OptServerIsComputeNode specifies that this node is running as a DAX compute node.

func OptServerIsDataframeEnabled

func OptServerIsDataframeEnabled(is bool) ServerOption

OptServerIsDataframeEnabled specifies whether experimental dataframe support is available.

func OptServerLogger

func OptServerLogger(l logger.Logger) ServerOption

OptServerLogger is a functional option on Server used to set the logger.

func OptServerLongQueryTime

func OptServerLongQueryTime(dur time.Duration) ServerOption

OptServerLongQueryTime is a functional option on Server used to set long query duration.

func OptServerLookupDB

func OptServerLookupDB(dsn string) ServerOption

OptServerLookupDB configures a connection to an external postgres database for ExternalLookup queries.

func OptServerMaxQueryMemory

func OptServerMaxQueryMemory(v int64) ServerOption

OptServerMaxQueryMemory sets the maximum memory that may be used per Extract() and SELECT query.

func OptServerMaxWritesPerRequest

func OptServerMaxWritesPerRequest(n int) ServerOption

OptServerMaxWritesPerRequest is a functional option on Server used to set the maximum number of writes allowed per request.

func OptServerMetricInterval

func OptServerMetricInterval(dur time.Duration) ServerOption

OptServerMetricInterval is a functional option on Server used to set the interval between metric samples.

func OptServerNodeDownRetries

func OptServerNodeDownRetries(retries int, sleep time.Duration) ServerOption

OptServerNodeDownRetries is a functional option on Server used to specify the retries and sleep duration for node down checks.

func OptServerNodeID

func OptServerNodeID(nodeID string) ServerOption

OptServerNodeID is a functional option on Server used to set the server node ID.

func OptServerOpenIDAllocator

func OptServerOpenIDAllocator(fn OpenIDAllocatorFunc) ServerOption

OptServerOpenIDAllocator is a functional option on Server used to specify the ID allocator data store type. Except not really (because there's only one at this time).

func OptServerOpenTranslateReader

func OptServerOpenTranslateReader(fn OpenTranslateReaderFunc) ServerOption

OptServerOpenTranslateReader is a functional option on Server used to specify the remote translation data reader.

func OptServerOpenTranslateStore

func OptServerOpenTranslateStore(fn OpenTranslateStoreFunc) ServerOption

OptServerOpenTranslateStore is a functional option on Server used to specify the translation data store type.

func OptServerPartitionAssigner

func OptServerPartitionAssigner(p string) ServerOption

func OptServerPrimaryTranslateStore

func OptServerPrimaryTranslateStore(store TranslateStore) ServerOption

OptServerPrimaryTranslateStore has been deprecated.

func OptServerQueryHistoryLength

func OptServerQueryHistoryLength(length int) ServerOption

OptServerQueryHistoryLength is a functional option on Server used to specify the length of the query history buffer that maintains the information returned at /query-history.

func OptServerQueryLogger

func OptServerQueryLogger(l logger.Logger) ServerOption

func OptServerRBFConfig

func OptServerRBFConfig(cfg *rbfcfg.Config) ServerOption

OptServerRBFConfig conveys the RBF flags to the Holder.

func OptServerReplicaN

func OptServerReplicaN(n int) ServerOption

OptServerReplicaN is a functional option on Server used to set the number of replicas.

func OptServerSerializer

func OptServerSerializer(ser Serializer) ServerOption

OptServerSerializer is a functional option on Server used to set the serializer.

func OptServerServerlessStorage added in v3.27.0

func OptServerServerlessStorage(mm *daxstorage.ResourceManager) ServerOption

func OptServerStorageConfig

func OptServerStorageConfig(cfg *storage.Config) ServerOption

OptServerStorageConfig is a functional option on Server used to specify the transactional-storage backend to use, resulting in RoaringTx or RbfTx being used for all Tx interface calls.

func OptServerSystemInfo

func OptServerSystemInfo(si SystemInfo) ServerOption

OptServerSystemInfo is a functional option on Server used to set the system information source.

func OptServerURI

func OptServerURI(uri *pnet.URI) ServerOption

OptServerURI is a functional option on Server used to set the server URI.

func OptServerUUIDFile added in v3.33.0

func OptServerUUIDFile(uf string) ServerOption

OptServerUUIDFile is a functional option on Server used to set the file name for storing the checkin UUID.

func OptServerVerChkAddress added in v3.33.0

func OptServerVerChkAddress(addr string) ServerOption

OptServerVerChkAddress is a functional option on Server used to set the address to check for the current version.

func OptServerViewsRemovalInterval

func OptServerViewsRemovalInterval(interval time.Duration) ServerOption

OptServerViewsRemovalInterval is a functional option on Server used to set the TTL-based view removal interval.

type ShardFile

type ShardFile struct {
	// contains filtered or unexported fields
}

func NewShardFile

func NewShardFile(ctx context.Context, name string, mem memory.Allocator, e *executor) (*ShardFile, error)

func (*ShardFile) EnsureSchema

func (sf *ShardFile) EnsureSchema(cs *ChangesetRequest) error

func (*ShardFile) LoadBlobs added in v3.29.0

func (sf *ShardFile) LoadBlobs() error

func (*ShardFile) Process

func (sf *ShardFile) Process(cs *ChangesetRequest) error

func (*ShardFile) ReplaceString added in v3.29.0

func (sf *ShardFile) ReplaceString(col, chunk, l int, s string)

func (*ShardFile) Save

func (sf *ShardFile) Save(name string) error

func (*ShardFile) SetFloatValue

func (sf *ShardFile) SetFloatValue(col int, row int64, val float64)

func (*ShardFile) SetIntValue

func (sf *ShardFile) SetIntValue(col int, row int64, val int64)

The row offset must be reset to 0 for the slices being appended.

func (*ShardFile) SetStringValue added in v3.29.0

func (sf *ShardFile) SetStringValue(col int, row int64, val string)

type ShowColumnsResponse added in v3.34.0

type ShowColumnsResponse struct {
	Fields []*dax.Field
}

ShowColumnsResponse is a type used to marshal the results of a `SHOW COLUMNS` statement.

func (*ShowColumnsResponse) Field added in v3.34.0

func (s *ShowColumnsResponse) Field(name dax.FieldName) *dax.Field

Field returns the field by name. If the field is not found, nil is returned.

type SignedRow

type SignedRow struct {
	Neg   *Row   `json:"neg"`
	Pos   *Row   `json:"pos"`
	Field string `json:"-"`
}

SignedRow represents a signed *Row with two (neg/pos) *Rows.

func (*SignedRow) Clone

func (s *SignedRow) Clone() (r *SignedRow)

func (SignedRow) ToRows

func (s SignedRow) ToRows(callback func(*proto.RowResponse) error) error

ToRows implements the ToRowser interface.

func (SignedRow) ToTable

func (s SignedRow) ToTable() (*proto.TableResponse, error)

ToTable implements the ToTabler interface.

func (*SignedRow) Union

func (sr *SignedRow) Union(other SignedRow) SignedRow

type SortByOther added in v3.29.0

type SortByOther twoSlices

func (SortByOther) Len added in v3.29.0

func (sbo SortByOther) Len() int

func (SortByOther) Less added in v3.29.0

func (sbo SortByOther) Less(i, j int) bool

func (SortByOther) Swap added in v3.29.0

func (sbo SortByOther) Swap(i, j int)

type SortedRow

type SortedRow struct {
	Row    *Row
	RowKVs []RowKV
}

func (*SortedRow) Columns

func (s *SortedRow) Columns() []uint64

func (*SortedRow) Merge

func (s *SortedRow) Merge(o *SortedRow, sort_desc bool) error

func (*SortedRow) ToRows

func (s *SortedRow) ToRows(callback func(*proto.RowResponse) error) error

type StringSet

type StringSet []string

StringSet is a return type specific to SQLResponse types.

func (StringSet) String

func (ss StringSet) String() string

type SystemAPI

type SystemAPI interface {
	ClusterName() string
	Version() string
	PlatformDescription() string
	PlatformVersion() string
	ClusterNodeCount() int
	ClusterReplicaCount() int
	ShardWidth() int
	ClusterState() string
	DataDir() string

	NodeID() string
	ClusterNodes() []ClusterNode
}

type SystemInfo

type SystemInfo interface {
	Uptime() (uint64, error)
	Platform() (string, error)
	Family() (string, error)
	OSVersion() (string, error)
	KernelVersion() (string, error)
	MemFree() (uint64, error)
	MemTotal() (uint64, error)
	MemUsed() (uint64, error)
	CPUModel() string
	CPUCores() (physical int, logical int, err error)
	CPUMHz() (int, error)
	CPUArch() string
	DiskCapacity(string) (uint64, error)
}

SystemInfo collects information about the host OS.

type SystemLayerAPI

type SystemLayerAPI interface {
	ExecutionRequests() ExecutionRequestsAPI
}

SystemLayerAPI defines an API allowing access to internal FeatureBase state.

type TimeArgs

type TimeArgs struct {
	From time.Time
	To   time.Time
}

type TimeQuantum

type TimeQuantum string

TimeQuantum represents a time granularity for time-based bitmaps.

func (TimeQuantum) Granularity

func (q TimeQuantum) Granularity() rune

func (TimeQuantum) HasDay

func (q TimeQuantum) HasDay() bool

HasDay returns true if the quantum contains a 'D' unit.

func (TimeQuantum) HasHour

func (q TimeQuantum) HasHour() bool

HasHour returns true if the quantum contains an 'H' unit.

func (TimeQuantum) HasMonth

func (q TimeQuantum) HasMonth() bool

HasMonth returns true if the quantum contains a 'M' unit.

func (TimeQuantum) HasYear

func (q TimeQuantum) HasYear() bool

HasYear returns true if the quantum contains a 'Y' unit.

func (TimeQuantum) IsEmpty

func (q TimeQuantum) IsEmpty() bool

IsEmpty returns true if the quantum is empty.

func (*TimeQuantum) Set

func (q *TimeQuantum) Set(value string) error

Set sets the time quantum value.

func (TimeQuantum) String

func (q TimeQuantum) String() string

func (TimeQuantum) Type

func (q TimeQuantum) Type() string

Type returns the type of a time quantum value.

func (TimeQuantum) Valid

func (q TimeQuantum) Valid() bool

Valid returns true if q is a valid time quantum value.
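
A quick sketch tying the methods above together (same assumed import path as earlier sketches):

package main

import (
	"fmt"

	pilosa "github.com/featurebasedb/featurebase/v3"
)

func main() {
	var q pilosa.TimeQuantum
	if err := q.Set("YMD"); err != nil {
		panic(err)
	}
	fmt.Println(q.Valid())   // true
	fmt.Println(q.HasYear()) // true
	fmt.Println(q.HasDay())  // true
	fmt.Println(q.HasHour()) // false
}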

type Transaction

type Transaction struct {
	// ID is an arbitrary string identifier. All transactions must have a unique ID.
	ID string `json:"id"`

	// Active notes whether an exclusive transaction is active, or
	// still pending (if other active transactions exist). All
	// non-exclusive transactions are always active.
	Active bool `json:"active"`

	// Exclusive is set to true for transactions which can only become active when no other
	// transactions exist.
	Exclusive bool `json:"exclusive"`

	// Timeout is the minimum idle time for which this transaction should continue to exist.
	Timeout time.Duration `json:"timeout"`

	// CreatedAt is the timestamp at which the transaction was created. This supports
	// the case of listing transactions in a useful order.
	CreatedAt time.Time `json:"createdAt"`

	// Deadline is calculated from Timeout. TODO reset deadline each time there is activity
	// on the transaction. (we can't do this until there is some method of associating a
	// request/call with a transaction)
	Deadline time.Time `json:"deadline"`

	// Stats track statistics for the transaction. Not yet used.
	Stats TransactionStats `json:"stats"`
}

Transaction contains information related to a block of work that needs to be tracked and spans multiple API calls.

func (*Transaction) Copy

func (trns *Transaction) Copy() *Transaction

func (*Transaction) MarshalJSON

func (trns *Transaction) MarshalJSON() ([]byte, error)

func (*Transaction) UnmarshalJSON

func (trns *Transaction) UnmarshalJSON(b []byte) error

type TransactionManager

type TransactionManager struct {
	Log logger.Logger
	// contains filtered or unexported fields
}

TransactionManager enforces the rules for transactions on a single node. It is goroutine-safe. It should be created by a call to NewTransactionManager where it takes a TransactionStore. If logging is desired, Log should be set before an instance of TransactionManager is used.

func NewTransactionManager

func NewTransactionManager(store TransactionStore) *TransactionManager

NewTransactionManager creates a new TransactionManager with the given store, and starts a deadline-checker in a goroutine.

func (*TransactionManager) Finish

func (tm *TransactionManager) Finish(ctx context.Context, id string) (*Transaction, error)

Finish completes and removes a transaction, returning the completed transaction (so that the caller can, e.g., view the Stats).

func (*TransactionManager) Get

func (tm *TransactionManager) Get(ctx context.Context, id string) (*Transaction, error)

Get retrieves the transaction with the given ID. Returns ErrTransactionNotFound if there isn't one.

func (*TransactionManager) List

func (tm *TransactionManager) List(ctx context.Context) (map[string]*Transaction, error)

List returns a map of all transactions by their ID. It is a copy and so may be retained and modified by the caller.

func (*TransactionManager) ResetDeadline

func (tm *TransactionManager) ResetDeadline(ctx context.Context, id string) (*Transaction, error)

ResetDeadline updates the deadline for the transaction with the given ID to be equal to the current time plus the transaction's timeout.

func (*TransactionManager) Start

func (tm *TransactionManager) Start(ctx context.Context, id string, timeout time.Duration, exclusive bool) (*Transaction, error)

Start starts a new transaction with the given parameters. If an exclusive transaction is pending or in progress, ErrTransactionExclusive is returned. If a transaction with the same id already exists, that transaction is returned along with ErrTransactionExists. If there is no error, the created transaction is returned—this is primarily so that the caller can discover if an exclusive transaction has been made immediately active or if they need to poll.
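
A hedged sketch of the exclusive-transaction polling flow described above. It assumes that ErrTransactionExists is an exported sentinel as named in these docs, that Get takes the same context/id arguments as Finish, and that the path argument to the in-memory store (documented below) may be left empty:

package main

import (
	"context"
	"errors"
	"log"
	"time"

	pilosa "github.com/featurebasedb/featurebase/v3"
)

func main() {
	ctx := context.Background()

	store, err := pilosa.OpenInMemTransactionStore("")
	if err != nil {
		log.Fatal(err)
	}
	tm := pilosa.NewTransactionManager(store)

	// Request an exclusive transaction; it may start out pending.
	trns, err := tm.Start(ctx, "maintenance-1", time.Minute, true)
	if err != nil && !errors.Is(err, pilosa.ErrTransactionExists) {
		log.Fatal(err)
	}

	// Poll until the exclusive transaction becomes active.
	for !trns.Active {
		time.Sleep(100 * time.Millisecond)
		if trns, err = tm.Get(ctx, "maintenance-1"); err != nil {
			log.Fatal(err)
		}
	}

	// ... exclusive work happens here ...

	if _, err := tm.Finish(ctx, "maintenance-1"); err != nil {
		log.Fatal(err)
	}
}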

type TransactionMessage

type TransactionMessage struct {
	Transaction *Transaction
	Action      string
}

type TransactionResponse

type TransactionResponse struct {
	Transaction *Transaction `json:"transaction,omitempty"`
	Error       string       `json:"error,omitempty"`
}

type TransactionStats

type TransactionStats struct{}

type TransactionStore

type TransactionStore interface {
	// Put stores a new transaction or replaces an existing transaction with the given one.
	Put(trns *Transaction) error
	// Get retrieves the transaction at id or returns ErrTransactionNotFound if there isn't one.
	Get(id string) (*Transaction, error)
	// List returns a map of all transactions by ID. The map must be safe to modify by the caller.
	List() (map[string]*Transaction, error)
	// Remove deletes the transaction from the store. It must return ErrTransactionNotFound if there isn't one.
	Remove(id string) (*Transaction, error)
}

TransactionStore declares the functionality which a store for Pilosa transactions must implement.
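
For illustration, a minimal in-memory implementation sketch (assuming ErrTransactionNotFound is the exported sentinel referenced above; the real OpenInMemTransactionStore below is what tests actually use):

package memstore

import (
	"sync"

	pilosa "github.com/featurebasedb/featurebase/v3"
)

// memTransactionStore is an illustrative, goroutine-safe TransactionStore.
type memTransactionStore struct {
	mu sync.Mutex
	m  map[string]*pilosa.Transaction
}

// Compile-time check that the interface is satisfied.
var _ pilosa.TransactionStore = (*memTransactionStore)(nil)

func newMemTransactionStore() *memTransactionStore {
	return &memTransactionStore{m: make(map[string]*pilosa.Transaction)}
}

func (s *memTransactionStore) Put(trns *pilosa.Transaction) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[trns.ID] = trns.Copy() // store a copy so callers can't mutate it
	return nil
}

func (s *memTransactionStore) Get(id string) (*pilosa.Transaction, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	trns, ok := s.m[id]
	if !ok {
		return nil, pilosa.ErrTransactionNotFound
	}
	return trns.Copy(), nil
}

func (s *memTransactionStore) List() (map[string]*pilosa.Transaction, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	out := make(map[string]*pilosa.Transaction, len(s.m))
	for id, trns := range s.m {
		out[id] = trns.Copy() // copies, so the caller may modify the map
	}
	return out, nil
}

func (s *memTransactionStore) Remove(id string) (*pilosa.Transaction, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	trns, ok := s.m[id]
	if !ok {
		return nil, pilosa.ErrTransactionNotFound
	}
	delete(s.m, id)
	return trns, nil
}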

func OpenInMemTransactionStore

func OpenInMemTransactionStore(path string) (TransactionStore, error)

type TranslateEntry

type TranslateEntry struct {
	Index string `json:"index,omitempty"`
	Field string `json:"field,omitempty"`
	ID    uint64 `json:"id,omitempty"`
	Key   string `json:"key,omitempty"`
}

TranslateEntry represents a key/ID pair from a TranslateStore.

type TranslateEntryReader

type TranslateEntryReader interface {
	io.Closer
	ReadEntry(entry *TranslateEntry) error
}

TranslateEntryReader represents a stream of translation entries.

type TranslateIDsRequest

type TranslateIDsRequest struct {
	Index string
	Field string
	IDs   []uint64
}

TranslateIDsRequest describes the structure of a request for a batch of id translations.

type TranslateIDsResponse

type TranslateIDsResponse struct {
	Keys []string
}

TranslateIDsResponse is the structured response of an ID translation request.

type TranslateKeysRequest

type TranslateKeysRequest struct {
	Index string
	Field string
	Keys  []string

	// NotWritable is an awkward name, but it's just to keep backward compatibility with client and idk.
	NotWritable bool
}

TranslateKeysRequest describes the structure of a request for a batch of key translations.

type TranslateKeysResponse

type TranslateKeysResponse struct {
	IDs []uint64
}

TranslateKeysResponse is the structured response of a key translation request.

type TranslateOffsetMap

type TranslateOffsetMap map[string]*IndexTranslateOffsetMap

TranslateOffsetMap maintains a set of offsets for both indexes & fields.

func (TranslateOffsetMap) Empty

func (m TranslateOffsetMap) Empty() bool

Empty reports whether there are any actual entries in the map. This is distinct from len(m) == 0 in that an entry in this map which is itself empty doesn't count as non-empty.

func (TranslateOffsetMap) FieldOffset

func (m TranslateOffsetMap) FieldOffset(index, name string) uint64

FieldOffset returns the offset for the given field.

func (TranslateOffsetMap) IndexPartitionOffset

func (m TranslateOffsetMap) IndexPartitionOffset(name string, partitionID int) uint64

IndexPartitionOffset returns the offset for the given index partition.

func (TranslateOffsetMap) SetFieldOffset

func (m TranslateOffsetMap) SetFieldOffset(index, name string, offset uint64)

SetFieldOffset sets the offset for the given field.

func (TranslateOffsetMap) SetIndexPartitionOffset

func (m TranslateOffsetMap) SetIndexPartitionOffset(name string, partitionID int, offset uint64)

SetIndexPartitionOffset sets the offset for the given index partition.
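
A short usage sketch (assuming the setters allocate nested index entries as needed):

package main

import (
	"fmt"

	pilosa "github.com/featurebasedb/featurebase/v3"
)

func main() {
	m := make(pilosa.TranslateOffsetMap)
	m.SetIndexPartitionOffset("repository", 0, 7)
	m.SetFieldOffset("repository", "language", 42)

	fmt.Println(m.IndexPartitionOffset("repository", 0)) // 7
	fmt.Println(m.FieldOffset("repository", "language")) // 42
	fmt.Println(m.Empty())                               // false
}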

type TranslateStore

type TranslateStore interface {
	io.Closer

	// Returns the maximum ID set on the store.
	MaxID() (uint64, error)

	// Retrieves the partition ID associated with the store.
	// Only applies to index stores.
	PartitionID() int

	// Sets & retrieves whether the store is read-only.
	ReadOnly() bool
	SetReadOnly(v bool)

	// FindKeys looks up the ID for each key.
	// Keys are not created if they do not exist.
	// Missing keys are not considered errors, so the length of the result may be less than that of the input.
	FindKeys(keys ...string) (map[string]uint64, error)

	// CreateKeys maps all keys to IDs, creating the IDs if they do not exist.
	// If the translator is read-only, this will return an error.
	CreateKeys(keys ...string) (map[string]uint64, error)

	// Match finds IDs of strings matching the filter.
	Match(filter func([]byte) bool) ([]uint64, error)

	// Converts an integer ID to its associated string key.
	TranslateID(id uint64) (string, error)
	TranslateIDs(id []uint64) ([]string, error)

	// Forces the write of a key/id pair, even if read only. Used by replication.
	ForceSet(id uint64, key string) error

	// Returns a reader from the given ID offset.
	EntryReader(ctx context.Context, offset uint64) (TranslateEntryReader, error)

	Begin(write bool) (TranslatorTx, error)

	// ReadFrom ensures that the TranslateStore implements io.ReaderFrom.
	// It should read from the reader and replace the data store with
	// the read payload.
	ReadFrom(io.Reader) (int64, error)

	Delete(records *roaring.Bitmap) (Commitor, error)
}

TranslateStore is the storage for translating string keys to uint64 values. For the BoltDB implementation, an empty string will be converted into the sentinel byte slice:

var emptyKey = []byte{
	0x00, 0x00, 0x00,
	0x4d, 0x54, 0x4d, 0x54, // MTMT
	0x00,
	0xc2, 0xa0, // NO-BREAK SPACE
	0x00,
}

func OpenInMemTranslateStore

func OpenInMemTranslateStore(rawurl, index, field string, partitionID, partitionN int, fsyncEnabled bool) (TranslateStore, error)

OpenInMemTranslateStore returns a new instance of a BoltDB based TranslateStore which removes all its files when it's closed, and tries to operate off a RAM disk if one is configured and set in the environment. Implements OpenTranslateStoreFunc.
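
A hedged sketch of the FindKeys/CreateKeys semantics described in the TranslateStore docs above (the empty rawurl and field arguments are placeholder assumptions):

package main

import (
	"fmt"
	"log"

	pilosa "github.com/featurebasedb/featurebase/v3"
)

func main() {
	ts, err := pilosa.OpenInMemTranslateStore("", "repository", "", 0, 1, false)
	if err != nil {
		log.Fatal(err)
	}
	defer ts.Close()

	// CreateKeys assigns IDs, creating them as needed.
	ids, err := ts.CreateKeys("go", "rust")
	if err != nil {
		log.Fatal(err)
	}

	// FindKeys only looks up existing keys; missing keys are simply absent.
	found, err := ts.FindKeys("go", "zig")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(found)) // 1: only "go" exists

	// TranslateID converts an ID back to its key.
	key, err := ts.TranslateID(ids["go"])
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(key) // "go"
}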

func OpenTranslateStore added in v3.27.0

func OpenTranslateStore(path, index, field string, partitionID, partitionN int, fsyncEnabled bool) (TranslateStore, error)

OpenTranslateStore opens and initializes a boltdb translation store.

type TranslationSyncer

type TranslationSyncer interface {
	Reset() error
}

TranslationSyncer provides an interface allowing a function to notify the server that an action has occurred which requires the translation sync process to be reset. In general, this includes anything which modifies schema (add/remove index, etc), or anything that changes the cluster topology (add/remove node). I originally considered leveraging the broadcaster since that was already in place and provides similar event messages, but the broadcaster is really meant for notifying other nodes, while this is more akin to an internal message bus. In fact, I think a future iteration on this may be to make it more generic so it can act as an internal message bus where one of the messages being published is "translationSyncReset".

var NopTranslationSyncer TranslationSyncer = &nopTranslationSyncer{}

NopTranslationSyncer represents a translationSyncer that doesn't do anything.

type TranslatorTx added in v3.27.0

type TranslatorTx interface {
	WriteTo(io.Writer) (int64, error)
	Rollback() error
}

TranslatorTx reproduces a subset of the methods on the BoltDB Tx object. Others may be needed in the future and we should just add them here. The idea is not to scatter direct references to bolt stuff throughout the whole codebase.

type Tx

type Tx interface {

	// Type returns "roaring", "rbf", or one of the other
	// Tx types at the top of txfactory.go
	Type() string

	// Rollback must be called at the end of read-only transactions. Either
	// Rollback or Commit must be called at the end of writable transactions.
	// It is safe to call Rollback multiple times, but it must be
	// called at least once to release resources. Any Rollback after
	// a Commit is ignored, so 'defer tx.Rollback()' should be commonly
	// written after starting a new transaction.
	//
	// If there is an error during internal Rollback processing,
	// this would be quite serious, and the underlying storage is
	// expected to panic. Hence there is no explicit error returned
	// from Rollback that needs to be checked.
	Rollback()

	// Commit makes the updates in the Tx visible to subsequent transactions.
	Commit() error

	// ContainerIterator loops over the containers in the conceptual
	// roaring.Bitmap for the specified fragment.
	// Calling Next() on the returned roaring.ContainerIterator gives
	// you a roaring.Container that is either run, array, or raw bitmap.
	// Return value 'found' is true when the ckey container was present.
	// ckey of 0 gives all containers (in the fragment).
	//
	// ContainerIterator must not have side-effects.
	//
	// citer.Close() must be called when the client is done using it.
	ContainerIterator(index, field, view string, shard uint64, ckey uint64) (citer roaring.ContainerIterator, found bool, err error)

	// ApplyFilter applies a roaring.BitmapFilter to a specified shard,
	// starting at the given container key. The filter's ConsiderData
	// method may be called with transient Container objects which *must
	// not* be retained or referenced after that function exits. Similarly,
	// their data must not be retained. If you need the data later, you
	// must copy it into some other memory.
	ApplyFilter(index, field, view string, shard uint64, ckey uint64, filter roaring.BitmapFilter) (err error)

	// ApplyRewriter applies a roaring.BitmapRewriter to a specified shard,
	// starting at the given container key. The filter's ConsiderData
	// method may be called with transient Container objects which *must
	// not* be retained or referenced after that function exits. Similarly,
	// their data must not be retained. If you need the data later, you
	// must copy it into some other memory. However, it is safe to overwrite
	// the returned container; for instance, you can DifferenceInPlace on
	// it.
	ApplyRewriter(index, field, view string, shard uint64, ckey uint64, filter roaring.BitmapRewriter) (err error)

	// RoaringBitmap retrieves the roaring.Bitmap for the entire shard.
	RoaringBitmap(index, field, view string, shard uint64) (*roaring.Bitmap, error)

	// Container returns the roaring.Container for the given ckey
	// (container-key or highbits) in the chosen fragment.
	Container(index, field, view string, shard uint64, ckey uint64) (*roaring.Container, error)

	// PutContainer stores c under the given ckey (container-key) in the specified fragment.
	PutContainer(index, field, view string, shard uint64, ckey uint64, c *roaring.Container) error

	// RemoveContainer deletes the roaring.Container under the given ckey (container-key)
	// in the specified fragment.
	RemoveContainer(index, field, view string, shard uint64, ckey uint64) error

	// Add adds the 'a' values to the Bitmap for the fragment.
	Add(index, field, view string, shard uint64, a ...uint64) (changeCount int, err error)

	// Remove removes the 'a' values from the Bitmap for the fragment.
	Remove(index, field, view string, shard uint64, a ...uint64) (changeCount int, err error)

	// Removed removes values, returning the set of values it removed.
	// It may overwrite the slice passed to it.
	Removed(index, field, view string, shard uint64, a ...uint64) (changed []uint64, err error)

	// Contains tests if the uint64 v is stored in the fragment's Bitmap.
	Contains(index, field, view string, shard uint64, v uint64) (exists bool, err error)

	// Count returns the count of hot bits on the fragment.
	Count(index, field, view string, shard uint64) (uint64, error)

	// Max returns the maximum value set in the Bitmap for the fragment.
	Max(index, field, view string, shard uint64) (uint64, error)

	// Min returns the minimum value set in the Bitmap for the fragment.
	Min(index, field, view string, shard uint64) (uint64, bool, error)

	// CountRange returns the count of hot bits in the [start, end) range on the
	// fragment.
	CountRange(index, field, view string, shard uint64, start, end uint64) (uint64, error)

	// OffsetRange returns a *roaring.Bitmap containing the portion of the Bitmap for the fragment
	// which is specified by a combination of (offset, [start, end)).
	//
	// start  - The value at which to start reading. This must be the zero value
	//          of a container; i.e. [0, 65536, ...]
	// end    - The value at which to end reading. This must be the zero value
	//          of a container; i.e. [0, 65536, ...]
	// offset - The number of positions to shift the resulting bitmap. This must
	//          be the zero value of a container; i.e. [0, 65536, ...]
	//
	// For example, if (index, field, view, shard) represents the following bitmap:
	// [1, 2, 3, 65536, 65539]
	//
	// then the following results are achieved based on (offset, start, end):
	// (0, 0, 131072)          => [1, 2, 3, 65536, 65539]
	// (0, 65536, 131072)      => [0, 3]
	// (65536, 65536, 131072)  => [65536, 65539]
	// (262144, 65536, 131072) => [262144, 262147]
	//
	OffsetRange(index, field, view string, shard uint64, offset, start, end uint64) (*roaring.Bitmap, error)

	// ImportRoaringBits does efficient bulk import using rit, a roaring.RoaringIterator.
	//
	// See the roaring package for details of the RoaringIterator.
	//
	// If clear is true, the bits from rit are cleared, otherwise they are set in the
	// specified fragment.
	//
	// The ImportRoaringBits return values changed and rowSet may be inaccurate
	// if the data []byte is supplied (the RoaringTx implementation neglects this for speed).
	ImportRoaringBits(index, field, view string, shard uint64, rit roaring.RoaringIterator, clear bool, log bool, rowSize uint64) (changed int, rowSet map[uint64]int, err error)

	// GetSortedFieldViewList gets the set of FieldView(s)
	GetSortedFieldViewList(idx *Index, shard uint64) (fvs []txkey.FieldView, err error)

	GetFieldSizeBytes(index, field string) (uint64, error)
}

Tx providers offer transactional storage for high-level roaring.Bitmaps and low-level roaring.Containers.

The common 4-tuple of (index, field, view, shard) jointly specify a fragment. A fragment conceptually holds one roaring.Bitmap.

Within the fragment, the ckey or container-key is the uint64 that specifies the high 48-bits of the roaring.Bitmap 64-bit space. The ckey is used to retrieve a specific roaring.Container that is either a run, array, or raw bitmap. The roaring.Container is the low 16-bits of the roaring.Bitmap space. Its size is at most 8KB (2^16 bits / (8 bits / byte) == 8192 bytes).

The grain of the transaction is guaranteed to be at least at the shard within one index. Therefore updates to any of the fields within the same shard will be atomically visible only once the transaction commits. Reads from another, concurrently open, transaction will not see updates that have not been committed.
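
Two small sketches grounded in the description above: the ckey arithmetic for locating a bit's container, and the defer-Rollback usage pattern from the Rollback docs (the helper names are illustrative, not part of this package):

package txutil

import pilosa "github.com/featurebasedb/featurebase/v3"

// containerKey returns the ckey (high 48 bits) identifying the
// roaring.Container that holds bit v; containerLow is v's position
// within that container's 2^16-bit space.
func containerKey(v uint64) uint64 { return v >> 16 }
func containerLow(v uint64) uint16 { return uint16(v) }

// addBits demonstrates the canonical write pattern: defer Rollback
// unconditionally (a Rollback after Commit is ignored), Add the bits,
// then Commit to make them visible to later transactions.
func addBits(tx pilosa.Tx, index, field, view string, shard uint64, bits ...uint64) error {
	defer tx.Rollback()
	if _, err := tx.Add(index, field, view, shard, bits...); err != nil {
		return err
	}
	return tx.Commit()
}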

type TxFactory

type TxFactory struct {
	// contains filtered or unexported fields
}

TxFactory abstracts the creation of Tx interface-level transactions so that RBF, Roaring fragment files, or several of these at once in parallel can be used as the storage and transaction layer.

func NewTxFactory

func NewTxFactory(backend string, holderDir string, holder *Holder) (f *TxFactory, err error)

NewTxFactory always opens an existing database. If you want a fresh database, call os.RemoveAll on dir/name ahead of time. We always store files in a subdir of holderDir.

func (*TxFactory) Close

func (f *TxFactory) Close() (err error)

func (*TxFactory) CloseIndex

func (f *TxFactory) CloseIndex(idx *Index) error

CloseIndex is a no-op. This seems to be in place for debugging purposes.

func (*TxFactory) DeleteFieldFromStore

func (f *TxFactory) DeleteFieldFromStore(index, field, fieldPath string) (err error)

func (*TxFactory) DeleteFragmentFromStore

func (f *TxFactory) DeleteFragmentFromStore(
	index, field, view string, shard uint64, frag *fragment,
) (err error)

func (*TxFactory) DeleteIndex

func (f *TxFactory) DeleteIndex(name string) (err error)

func (*TxFactory) GetDBShardPath

func (f *TxFactory) GetDBShardPath(index string, shard uint64, idx *Index, ty txtype, write bool) (shardPath string, err error)

func (*TxFactory) GetFieldView2ShardsMapForIndex

func (txf *TxFactory) GetFieldView2ShardsMapForIndex(idx *Index) (vs *FieldView2Shards, err error)

func (*TxFactory) GetShardsForIndex

func (f *TxFactory) GetShardsForIndex(idx *Index, roaringViewPath string, requireData bool) (map[uint64]struct{}, error)

GetShardsForIndex returns the shards for idx. If requireData, we open the database and verify that it has a key, rather than assuming that the presence of the database file is enough.

func (*TxFactory) NewDBPerShard

func (txf *TxFactory) NewDBPerShard(typ txtype, holderDir string, holder *Holder) (d *DBPerShard)

func (*TxFactory) NewQcx

func (f *TxFactory) NewQcx() (qcx *Qcx)

NewQcx allocates a fresh, empty query context (Qcx). The top-level Qcx is not marked writable. A non-writable Qcx should not be used to request write Tx.

func (*TxFactory) NewTx

func (f *TxFactory) NewTx(o Txo) (txn Tx)

func (*TxFactory) NewTxGroup

func (f *TxFactory) NewTxGroup() (g *TxGroup)

NewTxGroup returns a new, empty TxGroup.

func (*TxFactory) NewWritableQcx

func (f *TxFactory) NewWritableQcx() (qcx *Qcx)

NewWritableQcx allocates a fresh, empty query context (Qcx). The resulting Qcx is marked writable.

func (*TxFactory) Open

func (f *TxFactory) Open() error

Open should be called only once the index metadata is loaded from Holder.Open(), so we find all of our indexes.

func (*TxFactory) TxTyp

func (f *TxFactory) TxTyp() txtype

func (*TxFactory) TxType

func (f *TxFactory) TxType() string

type TxGroup

type TxGroup struct {
	// contains filtered or unexported fields
}

TxGroup holds a set of read transactions that will all have Rollback() called on them en masse when TxGroup.Finish() is invoked. Alternatively, TxGroup.Abort() will call Rollback() on all Tx group members.

It used to support writes, but that was never actually used, because the Qcx needs every commit to get its own transaction.

func (*TxGroup) AbortGroup

func (g *TxGroup) AbortGroup()

AbortGroup calls Rollback() on all the group's Tx and marks the group as finished. Either AbortGroup() or FinishGroup() must be called on the TxGroup.

func (*TxGroup) AddTx

func (g *TxGroup) AddTx(tx Tx, o Txo)

AddTx adds tx to the group.

func (*TxGroup) AlreadyHaveTx

func (g *TxGroup) AlreadyHaveTx(o Txo) (tx Tx, already bool)

func (*TxGroup) FinishGroup

func (g *TxGroup) FinishGroup() (err error)

FinishGroup commits any write Tx and calls Rollback() on the read Tx contained in the group. Either AbortGroup() or FinishGroup() must be called on the TxGroup exactly once.

func (*TxGroup) String

func (g *TxGroup) String() (r string)

type Txo

type Txo struct {
	Write    bool
	Field    *Field
	Index    *Index
	Fragment *fragment
	Shard    uint64
	// contains filtered or unexported fields
}

Txo holds the transaction options.

type UpdateFieldMessage

type UpdateFieldMessage struct {
	CreateFieldMessage CreateFieldMessage
	Update             FieldUpdate
}

UpdateFieldMessage represents a change to an existing field. The CreateFieldMessage holds the changed field, while the update shows the change that was made.

type ValCount

type ValCount struct {
	Val          int64        `json:"value"`
	FloatVal     float64      `json:"floatValue"`
	DecimalVal   *pql.Decimal `json:"decimalValue"`
	TimestampVal time.Time    `json:"timestampValue"`
	Count        int64        `json:"count"`
}

ValCount represents a grouping of value & count, used by Sum() and Average() calls, and also by Min() and Max().

func (*ValCount) Add

func (vc *ValCount) Add(other ValCount) ValCount

func (*ValCount) Cleanup

func (vc *ValCount) Cleanup()

Cleanup removes the integer value (Val) from the ValCount if one of the other fields is in use.

ValCounts are normally holding data which is stored as a BSI (integer) under the hood. Sometimes it's convenient to be able to compare the underlying integer values rather than their interpretation as decimal, timestamp, etc, so the lower level functions may return both integer and the interpreted value, but we don't want to pass that all the way back to the client, so we remove it here.

func (*ValCount) Clone

func (v *ValCount) Clone() (r *ValCount)

func (*ValCount) Larger

func (vc *ValCount) Larger(other ValCount) ValCount

Larger returns the larger of the two ValCounts.

func (*ValCount) Smaller

func (vc *ValCount) Smaller(other ValCount) ValCount

Smaller returns the smaller of the two ValCounts.

func (ValCount) ToRows

func (v ValCount) ToRows(callback func(*proto.RowResponse) error) error

ToRows implements the ToRowser interface.

func (ValCount) ToTable

func (v ValCount) ToTable() (*proto.TableResponse, error)

ToTable implements the ToTabler interface.

type ViewInfo

type ViewInfo struct {
	Name string `json:"name"`
}

ViewInfo represents schema information for a view.

type WireQueryField

type WireQueryField struct {
	Name     dax.FieldName          `json:"name"`
	Type     string                 `json:"type"`      // human readable display (e.g. "decimal(2)")
	BaseType dax.BaseType           `json:"base-type"` // for programmatic switching on type (e.g. "decimal")
	TypeInfo map[string]interface{} `json:"type-info"` // type modifiers (like scale), but not constraints (like min/max)
}

WireQueryField is a field name along with a supported BaseType and type information.

type WireQueryResponse

type WireQueryResponse struct {
	Schema        WireQuerySchema        `json:"schema"`
	Data          [][]interface{}        `json:"data"`
	Error         string                 `json:"error"`
	Warnings      []string               `json:"warnings"`
	QueryPlan     map[string]interface{} `json:"query-plan"`
	ExecutionTime int64                  `json:"execution-time"`
}

WireQueryResponse is the standard FeatureBase response type, which can be serialized and sent over the wire.

func (*WireQueryResponse) ShowColumnsResponse added in v3.34.0

func (s *WireQueryResponse) ShowColumnsResponse() (*ShowColumnsResponse, error)

ShowColumnsResponse returns a structure which is specific to a `SHOW COLUMNS` statement, derived from the results in the WireQueryResponse. This is kind of a crude way to unmarshal a WireQueryResponse into a type which is specific to the sql operation. TODO(tlt): see if we can standardize on this logic, because it would be useful to have the same thing for SHOW DATABASES and SHOW TABLES. TODO(tlt): the fields unmarshalled in this method are a subset of the actual columns available; at the moment, we only handle the ones we need in the CLI.

func (*WireQueryResponse) UnmarshalJSON

func (s *WireQueryResponse) UnmarshalJSON(in []byte) error

UnmarshalJSON is a custom unmarshaller for the SQLResponse that converts the value types in `Data` based on the types in `Schema`.
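
A hedged sketch of decoding a response (the JSON payload is hypothetical; its keys follow the struct tags shown above):

package main

import (
	"encoding/json"
	"fmt"
	"log"

	pilosa "github.com/featurebasedb/featurebase/v3"
)

func main() {
	raw := []byte(`{
		"schema": {"fields": [{"name": "language", "type": "string", "base-type": "string"}]},
		"data": [["go"], ["rust"]],
		"execution-time": 1200
	}`)

	var resp pilosa.WireQueryResponse
	// json.Unmarshal dispatches to the custom UnmarshalJSON, which converts
	// the values in Data according to the types declared in Schema.
	if err := json.Unmarshal(raw, &resp); err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Schema.Fields[0].Name, resp.Data)
}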

func (*WireQueryResponse) UnmarshalJSONTyped

func (s *WireQueryResponse) UnmarshalJSONTyped(in []byte, typed bool) error

UnmarshalJSONTyped is a temporary measure until we send typed values back in SQL responses. At that point, we can get rid of the typed=false path. In order to do that, we need sql3 to return typed values, and we need the sql3/test/defs to define results as typed values (like `IDSet`) instead of (for example) `[]int64`.

type WireQuerySchema

type WireQuerySchema struct {
	Fields []*WireQueryField `json:"fields"`
}

WireQuerySchema is a list of Fields which map to the data columns in the Response.

Directories

Path Synopsis
api
authn - Package authn handles authentication.
cli - Package cli contains a FeatureBase command line interface.
csv
cmd
dax - Package dax defines DAX domain level types.
computer - Package computer contains the compute-specific portions of the DAX architecture.
controller - Package controller provides the core Controller struct.
controller/balancer - Package balancer is an implementation of the controller.Balancer interface.
controller/client - Package client is an HTTP client for Controller.
controller/http - Package http provides the http implementation of the Director interface.
controller/partitioner - Package partitioner provides the Partitioner type, which provides helper methods for determining partitions based on string keys.
controller/poller - Package poller provides the core Poller struct.
controller/schemar - Package schemar provides the core Schemar interface.
queryer/client - Package client is an HTTP client for the Queryer.
snapshotter - Package snapshotter provides the core snapshotter structs.
test - Package test includes external test apps, helper functions, and test data.
writelogger - Package writelogger provides the writelogger structs.
encoding
errors - Package errors wraps pkg/errors and includes some custom features such as error codes.
idk
api
csv
sql
internal
lru - Package lru implements an LRU cache.
qa
querycontext - Package querycontext provides a semi-transactional layer wrapping access to multiple underlying transactional databases.
rbf
cfg
sql3 - Package sql3 contains the latest version of FeatureBase SQL support.
task - Package task provides an interface for indicating when an operation has been blocked, so that a worker pool which wants to be doing N things at a time can start trying new things when some things are blocked.
