flux

package module
v0.86.0
Published: Sep 21, 2020 License: MIT Imports: 23 Imported by: 88

README ¶

Flux - Influx data language

Flux is a lightweight scripting language for querying databases (like InfluxDB) and working with data. It is part of InfluxDB 1.7 and 2.0, but can be run independently of those. This repository contains the language definition and an implementation of the language core.

Specification

A complete specification can be found in SPEC.md. The specification contains many examples to start learning Flux.

Requirements

Building Flux requires the following:

  • Go 1.12 or greater with module support enabled
  • Latest stable version of Rust and Cargo (can be installed with rustup)
  • Clang

Getting Started

Flux is currently available in InfluxDB 1.7 and 2.0, or through the REPL that can be compiled from this repository.

To build Flux, first install Clang and the system pkg-config utility, then install the influxdata pkg-config wrapper:

# On Debian/Ubuntu
$ sudo apt-get install -y clang pkg-config
# On Mac OS X with Homebrew
$ brew install pkg-config
# Install the pkg-config wrapper utility
$ go get github.com/influxdata/pkg-config
# Ensure the GOBIN directory is on your PATH
$ export PATH=${GOPATH}/bin:${PATH}

To verify that the wrapper is found before the system pkg-config, you can use which -a; the copy in your Go bin directory should be listed first.

$ which -a pkg-config
/home/user/go/bin/pkg-config
/usr/bin/pkg-config

To compile and start the REPL, run the following commands:

$ go build ./cmd/flux
$ ./flux repl

If you do not want to add the pkg-config wrapper to your PATH, you can instead set the PKG_CONFIG environment variable and Go will use it.

$ export PKG_CONFIG=/home/user/go/bin/pkg-config
$ go build ./cmd/flux
$ ./flux repl

From within the REPL, you can run any Flux expression. You can also load a file directly into the REPL by typing @ followed by the filename.

> @my_file_to_load.flux

Basic Syntax

Here are a few examples of the language to get an idea of the syntax.

// This line is a comment

// Support for traditional math operators
1 + 1

// Several data types are built-in
true                     // a boolean true value
1                        // an int
1.0                      // a float
"this is a string"       // a string literal
1h5m                     // a duration of time representing 1 hour and 5 minutes
2018-10-10               // a time starting at midnight for the default timezone on Oct 10th 2018
2018-10-10T10:05:00      // a time at 10:05 AM for the default timezone on Oct 10th 2018
[1,1,2]                  // an array of integers
{foo: "str", bar: false} // an object with two keys and their values

// Values can be assigned to identifiers
x = 5.0
x + 3.0 // 8.0

// Import libraries
import "math"

// Functions are always called using keyword arguments
math.pow(x: 5.0, y: 3.0) // 5^3 = 125

// Functions are defined by assigning them to identifiers
add = (a, b) => a + b

// Call add using keyword arguments
add(a: 5, b: 3) // 8

// Functions are polymorphic
add(a: 5.5, b: 2.5) // 8.0

// And strongly typed
add(a: 5, b: 2.5) // type error

// Access data from a database and store it as an identifier
// This is only possible within the influxdb repl (at the moment).
import "influxdata/influxdb"
data = influxdb.from(bucket:"telegraf/autogen")

// When running inside of influxdb, the import isn't needed.
data = from(bucket:"telegraf/autogen")

// Chain more transformation functions to further specify the desired data
cpu = data 
    // only get the last 5m of data
    |> range(start: -5m)
    // only get the "usage_user" data from the _measurement "cpu"
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user")

// Return the data to the client
cpu |> yield()

// Group an aggregate along different dimensions
cpu
    // organize data into groups by host and region
    |> group(columns:["host","region"])
    // compute the mean of each group
    |> mean()
    // yield this result to the client
    |> yield()

// Window an aggregate over time
cpu
    // organize data into groups of 1 minute
    // compute the mean of each group
    |> aggregateWindow(every: 1m, fn: mean)
    // yield this result to the client
    |> yield()

// Gather different data
mem = data 
    // only get the last 5m of data
    |> range(start: -5m)
    // only get the "used_percent" data from the _measurement "mem"
    |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")


// Join data to create wider tables and map a function over the result
join(tables: {cpu:cpu, mem:mem}, on:["_time", "host"])
    // compute the ratio of cpu usage to mem used_percent
    |> map(fn: (r) => ({_time: r._time, _value: r._value_cpu / r._value_mem}))
    // again yield this result to the client
    |> yield()

The above examples give only a taste of what is possible with Flux. See the complete documentation for more examples and for instructions on using Flux with InfluxDB 2.0.

Contributing

Flux welcomes contributions to the language and the runtime.

If you are interested in contributing, please read the contributing guide for more information.

Development basics

If you modify any Rust code, you will need to force Go to rebuild the library.

$ go generate ./libflux/go/libflux

If you create or change any Flux functions, you will need to rebuild the stdlib and inform Go that it must rebuild libflux:

$ go generate ./stdlib ./libflux/go/libflux

Any new code should be formatted with go fmt so that it coexists nicely with the existing codebase. For example, if you add code to stdlib/universe:

$ go fmt ./stdlib/universe/

Don't forget to add your tests and make sure they work. Here is an example showing how to run the tests for the stdlib/universe package:

$ go test ./stdlib/universe/

Documentation ¶

Index ¶

Constants ¶

const (
	TablesParameter = "tables"
)

Variables ¶

var (
	MinTime = Time{
		Absolute: time.Unix(0, math.MinInt64),
	}
	MaxTime = Time{
		Absolute: time.Unix(0, math.MaxInt64),
	}
	Now = Time{
		IsRelative: true,
	}
)

Functions ¶

func ErrorCode ¶ added in v0.36.0

func ErrorCode(err error) codes.Code

ErrorCode returns the error code for the given error. If the error is not a flux.Error, this will return Unknown for the code. If the error is a flux.Error and its code is Inherit, then this will return the wrapped error's code.

func ErrorDocURL ¶ added in v0.82.0

func ErrorDocURL(err error) string

ErrorDocURL returns the DocURL associated with this error if one exists. This will return the outermost DocURL associated with this error unless the code is Inherit. If the code for an error is Inherit, this will return the DocURL for the nested error if it exists.
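
For illustration, a hedged sketch of how ErrorCode and ErrorDocURL might be used together to report a failed query; inspectError is a hypothetical helper and the error is assumed to come from query execution:

package example

import (
	"fmt"

	"github.com/influxdata/flux"
	"github.com/influxdata/flux/codes"
)

// inspectError reports the flux error code for err and, when one is
// attached, a documentation URL describing the failure.
func inspectError(err error) {
	if err == nil {
		return
	}
	// ErrorCode returns codes.Unknown when err is not a flux.Error.
	if code := flux.ErrorCode(err); code != codes.Unknown {
		fmt.Printf("flux error code: %v\n", code)
	}
	// ErrorDocURL returns an empty string when no DocURL is associated.
	if url := flux.ErrorDocURL(err); url != "" {
		fmt.Println("see:", url)
	}
}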

func FmtJSON ¶

func FmtJSON(f *formatter)

func Formatted ¶

func Formatted(q *Spec, opts ...FormatOption) fmt.Formatter

func FunctionValue ¶ added in v0.14.0

func FunctionValue(name string, c CreateOperationSpec, ft semantic.MonoType) (values.Value, error)

FunctionValue creates a values.Value from the operation spec and signature. name is the name of the function as it would be called, c is a function reference of type CreateOperationSpec, and ft is the function signature type that specifies the names and types of each argument for the function.

func FunctionValueWithSideEffect ¶ added in v0.14.0

func FunctionValueWithSideEffect(name string, c CreateOperationSpec, ft semantic.MonoType) (values.Value, error)

FunctionValueWithSideEffect creates a values.Value from the operation spec and signature. name is the name of the function as it would be called, c is a function reference of type CreateOperationSpec, and ft is the function signature type that specifies the names and types of each argument for the function.

func IsQueryTracingEnabled ¶ added in v0.86.0

func IsQueryTracingEnabled(ctx context.Context) bool

IsQueryTracingEnabled will return true if the context contains a key indicating that experimental tracing is enabled.

func MustValue ¶ added in v0.68.0

func MustValue(v values.Value, err error) values.Value

MustValue panics if err is not nil, otherwise value is returned.

func NumberOfOperations ¶

func NumberOfOperations() int

func RegisterOpSpec ¶

func RegisterOpSpec(k OperationKind, c NewOperationSpec)

RegisterOpSpec registers an operation spec with a given kind. k is a label that uniquely identifies this operation; if the kind has already been registered, the call panics. c is a function reference that creates a new, default-initialized OperationSpec for the given kind. TODO(nathanielc): make this part of RegisterMethod/RegisterFunction.
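
For illustration, a minimal sketch of registering a hypothetical operation spec; MyOpKind and MyOpSpec are invented names used only for this example:

package example

import "github.com/influxdata/flux"

// MyOpKind uniquely identifies the hypothetical operation.
const MyOpKind = flux.OperationKind("myOp")

// MyOpSpec is a hypothetical OperationSpec; it only has to report its Kind.
type MyOpSpec struct {
	N int
}

func (*MyOpSpec) Kind() flux.OperationKind { return MyOpKind }

func init() {
	// The constructor returns a new, default-initialized spec.
	// RegisterOpSpec panics if MyOpKind has already been registered.
	flux.RegisterOpSpec(MyOpKind, func() flux.OperationSpec {
		return new(MyOpSpec)
	})
}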

func SemanticType ¶ added in v0.14.0

func SemanticType(typ ColType) semantic.MonoType

func WithQueryTracingEnabled ¶ added in v0.86.0

func WithQueryTracingEnabled(parentCtx context.Context) context.Context

WithQueryTracingEnabled will return a child context that will turn on experimental query tracing.
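
A minimal sketch of how these two tracing helpers pair up, assuming the context is later handed to query execution:

package main

import (
	"context"
	"fmt"

	"github.com/influxdata/flux"
)

func main() {
	ctx := context.Background()

	// Enable experimental query tracing for everything derived from ctx.
	ctx = flux.WithQueryTracingEnabled(ctx)

	// Downstream components can check whether tracing was requested.
	if flux.IsQueryTracingEnabled(ctx) {
		fmt.Println("query tracing is enabled for this context")
	}
}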

Types ¶

type ASTHandle ¶ added in v0.68.0

type ASTHandle interface {
	// ASTHandle is a no-op method whose purpose is to avoid types unintentionally
	// implementing this interface.
	ASTHandle()

	// GetError will return the first error encountered when parsing Flux source code,
	// if any.
	GetError() error
}

ASTHandle is an opaque type that represents an abstract syntax tree.

type Administration ¶

type Administration struct {
	// contains filtered or unexported fields
}

func (*Administration) AddParent ¶

func (a *Administration) AddParent(np *TableObject)

AddParent instructs the evaluation Context that a new edge should be created from the parent to the current operation. Duplicate parents will be removed, so the caller need not concern itself with which parents have already been added.

func (*Administration) AddParentFromArgs ¶

func (a *Administration) AddParentFromArgs(args Arguments) error

AddParentFromArgs reads the args for the `table` argument and adds the value as a parent.

type Arguments ¶

type Arguments struct {
	interpreter.Arguments
}

func (Arguments) GetDuration ¶

func (a Arguments) GetDuration(name string) (Duration, bool, error)

func (Arguments) GetRequiredDuration ¶

func (a Arguments) GetRequiredDuration(name string) (Duration, error)

func (Arguments) GetRequiredTime ¶

func (a Arguments) GetRequiredTime(name string) (Time, error)

func (Arguments) GetTime ¶

func (a Arguments) GetTime(name string) (Time, bool, error)

type Bounds ¶

type Bounds struct {
	Start Time
	Stop  Time
	Now   time.Time
}

func (Bounds) HasZero ¶

func (b Bounds) HasZero() bool

HasZero returns true if the given bounds contain a Go zero time value as either Start or Stop.

func (Bounds) IsEmpty ¶

func (b Bounds) IsEmpty() bool

IsEmpty reports whether the given bounds are empty, i.e., if start >= stop.
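
For illustration, a minimal sketch that builds a Bounds value covering the last hour and checks it; this assumes relative times are resolved against the Now field as described for Time:

package main

import (
	"fmt"
	"time"

	"github.com/influxdata/flux"
)

func main() {
	// A window covering the last hour, resolved against Now.
	b := flux.Bounds{
		Start: flux.Time{IsRelative: true, Relative: -time.Hour},
		Stop:  flux.Now,
		Now:   time.Now(),
	}
	fmt.Println("empty:", b.IsEmpty())    // start resolves before stop
	fmt.Println("has zero:", b.HasZero()) // neither bound is a zero time
}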

type BufferedTable ¶ added in v0.31.0

type BufferedTable interface {
	Table

	// Buffer returns the i'th buffer in the buffered table.
	// This allows accessing the buffered table contents without
	// using the Table.
	Buffer(i int) ColReader

	// BufferN returns the number of buffers in this table.
	BufferN() int

	// Copy will return a copy of the BufferedTable without
	// consuming the Table itself. If this Table has already
	// been consumed by the Do method, then this will panic.
	Copy() BufferedTable
}

BufferedTable is an implementation of Table that has all of its data buffered.

type ColMeta ¶

type ColMeta struct {
	// Label is the name of the column. The label is unique per table.
	Label string
	// Type is the type of the column. Only basic types are allowed.
	Type ColType
}

ColMeta contains the information about the column metadata.

type ColReader ¶

type ColReader interface {
	Key() GroupKey
	// Cols returns a list of column metadata.
	Cols() []ColMeta
	// Len returns the length of the slices.
	// All slices will have the same length.
	Len() int
	Bools(j int) *array.Boolean
	Ints(j int) *array.Int64
	UInts(j int) *array.Uint64
	Floats(j int) *array.Float64
	Strings(j int) *array.Binary
	Times(j int) *array.Int64

	// Retain will retain this buffer to avoid having the
	// memory consumed by it freed.
	Retain()

	// Release will release a reference to this buffer.
	Release()
}

ColReader allows access to reading arrow buffers of column data. All data the ColReader exposes is guaranteed to be in memory. A ColReader that is produced when processing a Table will be released once it goes out of scope. Retain can be used to keep a reference to the buffered memory.
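
As a hedged sketch, sumFloatColumn below is a hypothetical helper that consumes a Table once and sums one of its float columns; it assumes the arrow arrays returned by Floats expose Len, IsNull, and Value accessors:

package example

import (
	"fmt"

	"github.com/influxdata/flux"
)

// sumFloatColumn consumes tbl exactly once and sums the float column
// identified by label, skipping null entries.
func sumFloatColumn(tbl flux.Table, label string) (float64, error) {
	var sum float64
	err := tbl.Do(func(cr flux.ColReader) error {
		// Locate the requested column in the buffer's schema.
		idx := -1
		for j, c := range cr.Cols() {
			if c.Label == label && c.Type == flux.TFloat {
				idx = j
				break
			}
		}
		if idx < 0 {
			return fmt.Errorf("no float column %q", label)
		}
		// All column slices in a ColReader have the same length.
		vs := cr.Floats(idx)
		for i := 0; i < cr.Len(); i++ {
			if vs.IsNull(i) {
				continue
			}
			sum += vs.Value(i)
		}
		return nil
	})
	return sum, err
}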

type ColType ¶

type ColType int

ColType is the type for a column. This covers only basic data types.

const (
	TInvalid ColType = iota
	TBool
	TInt
	TUInt
	TFloat
	TString
	TTime
)

func ColumnType ¶

func ColumnType(typ semantic.MonoType) ColType

ColumnType returns the column type when given a semantic.MonoType. It returns flux.TInvalid if the type is not a valid column type.

func (ColType) String ¶

func (t ColType) String() string

String returns a string representation of the column type.

type Compiler ¶

type Compiler interface {
	// Compile produces a specification for the query.
	Compile(ctx context.Context, runtime Runtime) (Program, error)
	CompilerType() CompilerType
}

Compiler produces a specification for the query.

type CompilerMappings ¶

type CompilerMappings map[CompilerType]CreateCompiler

func (CompilerMappings) Add ¶

type CompilerType ¶

type CompilerType string

CompilerType is the name of a query compiler.

type CreateCompiler ¶

type CreateCompiler func() Compiler

type CreateDialect ¶

type CreateDialect func() Dialect

type CreateOperationSpec ¶

type CreateOperationSpec func(args Arguments, a *Administration) (OperationSpec, error)

type DelimitedMultiResultEncoder ¶

type DelimitedMultiResultEncoder struct {
	Delimiter []byte
	Encoder   interface {
		ResultEncoder
		// EncodeError encodes an error on the writer.
		EncodeError(w io.Writer, err error) error
	}
}

DelimitedMultiResultEncoder encodes multiple results using a trailing delimiter. The delimiter is written after every result.

If an error is encountered when iterating and the error is an encoder error, the error will be returned. Otherwise, the error is assumed to have arisen from query execution, and said error will be encoded with the EncodeError method of the Encoder field.

If the io.Writer implements flusher, it will be flushed after each delimiter.

func (*DelimitedMultiResultEncoder) Encode ¶

Encode will encode the results into the writer using the Encoder and separating each entry by the Delimiter. If an error occurs while processing the ResultIterator or is returned from the underlying Encoder, Encode will return the error if nothing has yet been written to the Writer. If something has been written to the Writer, then an error will only be returned when the error is an EncoderError.

type Dependencies ¶ added in v0.48.0

type Dependencies interface {
	Dependency
	HTTPClient() (http.Client, error)
	FilesystemService() (filesystem.Service, error)
	SecretService() (secret.Service, error)
	URLValidator() (url.Validator, error)
}

func GetDependencies ¶ added in v0.48.0

func GetDependencies(ctx context.Context) Dependencies

func NewEmptyDependencies ¶ added in v0.48.0

func NewEmptyDependencies() Dependencies

NewEmptyDependencies produces an empty set of dependencies. Accessing any dependency will result in an error.

type Dependency ¶ added in v0.48.0

type Dependency interface {
	Inject(ctx context.Context) context.Context
}

Dependency is an interface that must be implemented by every injectable dependency. On Inject, the dependency is injected into the context and the resulting one is returned. Every dependency must provide a function to extract it from the context.

type Deps ¶ added in v0.48.0

type Deps struct {
	Deps WrappedDeps
}

Deps implements Dependencies. Any deps which are nil will produce an explicit error.

func NewDefaultDependencies ¶ added in v0.48.0

func NewDefaultDependencies() Deps

NewDefaultDependencies produces a set of dependencies. Dependencies that do not have a valid default are left unset.
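
A minimal sketch of the intended injection flow, assuming the resulting context is later used to execute queries:

package main

import (
	"context"

	"github.com/influxdata/flux"
)

func main() {
	// Build the default dependency set; anything without a valid default
	// stays unset and returns an error when accessed.
	deps := flux.NewDefaultDependencies()

	// Inject the dependencies into a context so that code executing
	// queries can retrieve them again with flux.GetDependencies.
	ctx := deps.Inject(context.Background())

	_ = flux.GetDependencies(ctx)
}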

func (Deps) FilesystemService ¶ added in v0.48.0

func (d Deps) FilesystemService() (filesystem.Service, error)

func (Deps) HTTPClient ¶ added in v0.48.0

func (d Deps) HTTPClient() (http.Client, error)

func (Deps) Inject ¶ added in v0.48.0

func (d Deps) Inject(ctx context.Context) context.Context

func (Deps) SecretService ¶ added in v0.48.0

func (d Deps) SecretService() (secret.Service, error)

func (Deps) URLValidator ¶ added in v0.48.0

func (d Deps) URLValidator() (url.Validator, error)

type Dialect ¶

type Dialect interface {
	// Encoder creates an encoder for the results
	Encoder() MultiResultEncoder
	// DialectType reports the type of the dialect
	DialectType() DialectType
}

Dialect describes how to encode results.

type DialectMappings ¶

type DialectMappings map[DialectType]CreateDialect

func (DialectMappings) Add ¶

type DialectType ¶

type DialectType string

DialectType is the name of a query result dialect.

type Duration ¶

type Duration = values.Duration

Duration is a marshalable duration type.

func ConvertDuration ¶ added in v0.51.0

func ConvertDuration(v time.Duration) Duration

ConvertDuration will convert a time.Duration into a flux.Duration.

type Edge ¶

type Edge struct {
	Parent OperationID `json:"parent"`
	Child  OperationID `json:"child"`
}

Edge is a data flow relationship between a parent and a child

type EncoderError ¶

type EncoderError interface {
	IsEncoderError() bool
}

EncoderError is an interface that any error produced from a ResultEncoder implementation should conform to. It allows for differentiation between errors that occur in results, and errors that occur while encoding results.

type Error ¶ added in v0.36.0

type Error = errors.Error

type FormatOption ¶

type FormatOption func(*formatter)

TODO(nathanielc): Add better options for formatting plans as Graphviz dot format.

type GroupKey ¶

type GroupKey interface {
	Cols() []ColMeta
	Values() []values.Value

	HasCol(label string) bool
	LabelValue(label string) values.Value

	IsNull(j int) bool
	ValueBool(j int) bool
	ValueUInt(j int) uint64
	ValueInt(j int) int64
	ValueFloat(j int) float64
	ValueString(j int) string
	ValueDuration(j int) values.Duration
	ValueTime(j int) values.Time
	Value(j int) values.Value

	Equal(o GroupKey) bool
	Less(o GroupKey) bool
	String() string
}

type GroupKeys ¶ added in v0.15.0

type GroupKeys []GroupKey

GroupKeys provides a sortable collection of group keys.

func (GroupKeys) Len ¶ added in v0.15.0

func (a GroupKeys) Len() int

func (GroupKeys) Less ¶ added in v0.15.0

func (a GroupKeys) Less(i, j int) bool

func (GroupKeys) String ¶ added in v0.15.0

func (a GroupKeys) String() string

String returns a string representation of the keys

func (GroupKeys) Swap ¶ added in v0.15.0

func (a GroupKeys) Swap(i, j int)
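
Because GroupKeys provides Len, Less, and Swap, it satisfies sort.Interface; a minimal sketch:

package example

import (
	"sort"

	"github.com/influxdata/flux"
)

// sortGroupKeys orders the keys in place using the ordering defined by
// GroupKey.Less.
func sortGroupKeys(keys flux.GroupKeys) flux.GroupKeys {
	sort.Sort(keys)
	return keys
}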

type GroupMode ¶ added in v0.14.0

type GroupMode int

GroupMode represents the method for grouping data

const (
	// GroupModeNone indicates that no grouping action is specified
	GroupModeNone GroupMode = 0
	// GroupModeBy produces a table for each unique value of the specified GroupKeys.
	GroupModeBy GroupMode = 1 << iota
	// GroupModeExcept produces a table for the unique values of all keys, except those specified by GroupKeys.
	GroupModeExcept
)

type IDer ¶

type IDer interface {
	ID(*TableObject) OperationID
}

IDer produces the mapping of TableObjects to OperationIDs.

type IDerOpSpec ¶

type IDerOpSpec interface {
	IDer(ider IDer)
}

IDerOpSpec is the interface that must be implemented by any operation spec that needs access to OperationIDs in the query spec.

type MultiResultDecoder ¶

type MultiResultDecoder interface {
	// Decode decodes multiple results from r.
	Decode(r io.ReadCloser) (ResultIterator, error)
}

MultiResultDecoder can decode multiple results from a reader.

type MultiResultEncoder ¶

type MultiResultEncoder interface {
	// Encode writes multiple results from r into w.
	// Returns the number of bytes written to w and any error resulting from the encoding process.
	// It is up to the specific implementation whether it encodes any errors
	// that occur from the ResultIterator.
	Encode(w io.Writer, results ResultIterator) (int64, error)
}

MultiResultEncoder can encode multiple results into a writer.

type NewOperationSpec ¶

type NewOperationSpec func() OperationSpec

func OperationSpecNewFn ¶

func OperationSpecNewFn(k OperationKind) NewOperationSpec

type Operation ¶

type Operation struct {
	ID     OperationID     `json:"id"`
	Spec   OperationSpec   `json:"spec"`
	Source OperationSource `json:"source"`
}

Operation denotes a single operation in a query.

func (Operation) MarshalJSON ¶

func (o Operation) MarshalJSON() ([]byte, error)

func (*Operation) UnmarshalJSON ¶

func (o *Operation) UnmarshalJSON(data []byte) error

type OperationID ¶

type OperationID string

OperationID is a unique ID within a query for the operation.

type OperationKind ¶

type OperationKind string

OperationKind denotes the kind of operations.

type OperationSource ¶ added in v0.83.0

type OperationSource struct {
	Stack []interpreter.StackEntry `json:"stack"`
}

OperationSource specifies the source location that created an operation.

type OperationSpec ¶

type OperationSpec interface {
	// Kind returns the kind of the operation.
	Kind() OperationKind
}

OperationSpec specifies an operation as part of a query.

type Priority ¶

type Priority int32

Priority is an integer that represents the query priority. Any positive 32-bit integer value may be used. Special constants are provided to represent the extreme high and low priorities.

const (
	// High is the highest possible priority = 0
	High Priority = 0
	// Low is the lowest possible priority = MaxInt32
	Low Priority = math.MaxInt32
)

func (Priority) MarshalText ¶

func (p Priority) MarshalText() ([]byte, error)

func (*Priority) UnmarshalText ¶

func (p *Priority) UnmarshalText(txt []byte) error

type Program ¶ added in v0.26.0

type Program interface {
	// Start begins execution of the program and returns immediately.
	// As results are produced they arrive on the channel.
	// The program is finished once the result channel is closed and all results have been consumed.
	Start(context.Context, *memory.Allocator) (Query, error)
}

Program defines a Flux script which has been compiled.

type Query ¶

type Query interface {
	// Results returns a channel that will deliver the query results.
	// It's possible that the channel is closed before any results arrive,
	// in which case the query should be inspected for an error using Err().
	Results() <-chan Result

	// Done must always be called to free resources. It is safe to call Done
	// multiple times.
	Done()

	// Cancel will signal that query execution should stop.
	// Done must still be called to free resources.
	// It is safe to call Cancel multiple times.
	Cancel()

	// Err reports any error the query may have encountered.
	Err() error

	// Statistics reports the statistics for the query.
	// The statistics are not complete until Done is called.
	Statistics() Statistics

	// ProfilerResults returns profiling results for the query
	ProfilerResults() (ResultIterator, error)
}

Query represents an active query.
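
For illustration, a hedged sketch of the Program/Query lifecycle: start the program, drain every result, then call Done and check Err. runProgram is a hypothetical helper and the memory.Allocator zero value is assumed to be a usable allocator:

package example

import (
	"context"
	"fmt"

	"github.com/influxdata/flux"
	"github.com/influxdata/flux/memory"
)

// runProgram starts a compiled program and consumes all of its results.
func runProgram(ctx context.Context, prog flux.Program) error {
	q, err := prog.Start(ctx, new(memory.Allocator))
	if err != nil {
		return err
	}
	// Done must always be called to free resources.
	defer q.Done()

	// Results arrive on a channel that is closed when the query finishes.
	for res := range q.Results() {
		if err := res.Tables().Do(func(tbl flux.Table) error {
			fmt.Println("group key:", tbl.Key())
			// Each table must be consumed (or Done called) exactly once.
			return tbl.Do(func(flux.ColReader) error { return nil })
		}); err != nil {
			return err
		}
	}
	// The channel may close before any results arrive, so check Err.
	return q.Err()
}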

type ResourceManagement ¶

type ResourceManagement struct {
	// Priority of the query.
	// Queries with a lower value will move to the front of the priority queue.
	// A zero value indicates the highest priority.
	Priority Priority `json:"priority"`
	// ConcurrencyQuota is the number of concurrency workers allowed to process this query.
	// A zero value indicates the planner can pick the optimal concurrency.
	ConcurrencyQuota int `json:"concurrency_quota"`
	// MemoryBytesQuota is the number of bytes of RAM this query may consume.
	// There is a small amount of overhead memory being consumed by a query that will not be counted towards this limit.
	// A zero value indicates unlimited.
	MemoryBytesQuota int64 `json:"memory_bytes_quota"`
}

ResourceManagement defines how the query should consume available resources.

type Result ¶

type Result interface {
	Name() string
	// Tables returns a TableIterator for iterating through results
	Tables() TableIterator
}

type ResultDecoder ¶

type ResultDecoder interface {
	// Decode decodes data from r into a result.
	Decode(r io.Reader) (Result, error)
}

ResultDecoder can decode a result from a reader.

type ResultEncoder ¶

type ResultEncoder interface {
	// Encode encodes data from the result into w.
	// Returns the number of bytes written to w and any error.
	Encode(w io.Writer, result Result) (int64, error)
}

ResultEncoder can encode a result into a writer.

type ResultIterator ¶

type ResultIterator interface {
	// More indicates if there are more results.
	More() bool

	// Next returns the next result.
	// If More is false, Next panics.
	Next() Result

	// Release discards the remaining results and frees the currently used resources.
	// It must always be called to free resources. It can be called even if there are
	// more results. It is safe to call Release multiple times.
	Release()

	// Err reports the first error encountered.
	// Err will not report anything unless More has returned false,
	// or the query has been cancelled.
	Err() error

	// Statistics reports the statistics for the query.
	// The statistics are not complete until Release is called.
	Statistics() Statistics
}

ResultIterator allows iterating through all results synchronously. A ResultIterator is not thread-safe and all of the methods are expected to be called within the same goroutine.

func NewMapResultIterator ¶

func NewMapResultIterator(results map[string]Result) ResultIterator

func NewResultIteratorFromQuery ¶

func NewResultIteratorFromQuery(q Query) ResultIterator

func NewSliceResultIterator ¶

func NewSliceResultIterator(results []Result) ResultIterator
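
For illustration, a minimal sketch that wraps an active query in a ResultIterator (using NewResultIteratorFromQuery above) and walks the results synchronously:

package example

import (
	"fmt"

	"github.com/influxdata/flux"
)

// drainResults iterates every result of q and then releases the iterator.
func drainResults(q flux.Query) error {
	ri := flux.NewResultIteratorFromQuery(q)
	// Release must always be called, even if results remain.
	defer ri.Release()

	for ri.More() {
		res := ri.Next()
		fmt.Println("result:", res.Name())
	}
	// Err reports nothing until More has returned false
	// or the query has been cancelled.
	return ri.Err()
}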

type Runtime ¶ added in v0.68.0

type Runtime interface {
	// Parse parses a Flux script and produces a handle to an AST.
	Parse(flux string) (ASTHandle, error)

	// JSONToHandle takes JSON data and returns an AST handle.
	JSONToHandle(json []byte) (ASTHandle, error)

	// MergePackages removes all the files from src and appends them to the list
	// of files in dst.
	MergePackages(dst, src ASTHandle) error

	// Eval accepts a Flux AST and evaluates it to produce a set of side effects (as a slice of values) and a scope.
	Eval(ctx context.Context, astPkg ASTHandle, opts ...ScopeMutator) ([]interpreter.SideEffect, values.Scope, error)

	// IsPreludePackage will return whether the named package is part
	// of the prelude for this runtime.
	IsPreludePackage(pkg string) bool

	// LookupBuiltinType returns the type of the builtin value for a given
	// Flux stdlib package. Returns an error if lookup fails.
	LookupBuiltinType(pkg, name string) (semantic.MonoType, error)
}

Runtime encapsulates the operations supported by the flux runtime.

type ScopeMutator ¶ added in v0.14.0

type ScopeMutator = func(r Runtime, scope values.Scope)

ScopeMutator is any function that mutates the scope of an identifier.

func SetNowOption ¶ added in v0.48.0

func SetNowOption(now time.Time) ScopeMutator

SetNowOption returns a ScopeMutator that sets the `now` option to the given time.

func SetOption ¶ added in v0.14.0

func SetOption(pkg, name string, fn func(r Runtime) values.Value) ScopeMutator

SetOption returns a func that adds a var binding to a scope.
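
A minimal sketch of building the scope mutators that would be passed to Runtime.Eval as its variadic opts argument; the fixed timestamp is only an illustrative value:

package example

import (
	"time"

	"github.com/influxdata/flux"
)

// evalOptions pins the `now` option so that query results are reproducible.
func evalOptions() []flux.ScopeMutator {
	return []flux.ScopeMutator{
		flux.SetNowOption(time.Date(2020, 9, 21, 0, 0, 0, 0, time.UTC)),
	}
}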

type Spec ¶

type Spec struct {
	Operations []*Operation       `json:"operations"`
	Edges      []Edge             `json:"edges"`
	Resources  ResourceManagement `json:"resources"`
	Now        time.Time          `json:"now"`
	// contains filtered or unexported fields
}

Spec specifies a query.

func (*Spec) Children ¶

func (q *Spec) Children(id OperationID) []*Operation

Children returns a list of children for a given operation. If the query is invalid no children will be returned.

func (*Spec) Functions ¶

func (q *Spec) Functions() ([]string, error)

Functions returns the names of all functions used in the plan.

func (*Spec) Parents ¶

func (q *Spec) Parents(id OperationID) []*Operation

Parents returns a list of parents for a given operation. If the query is invalid no parents will be returned.

func (*Spec) Validate ¶

func (q *Spec) Validate() error

Validate ensures the query is a valid DAG.

func (*Spec) Walk ¶

func (q *Spec) Walk(f func(o *Operation) error) error

Walk calls f on each operation exactly once. The function f will be called on an operation only after all of its parents have already been passed to f.
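
For illustration, a minimal sketch that validates a Spec and prints its operations in dependency order (parents before children), relying on Walk's ordering guarantee:

package example

import (
	"fmt"

	"github.com/influxdata/flux"
)

// listOperations validates the query and prints each operation once.
func listOperations(spec *flux.Spec) error {
	if err := spec.Validate(); err != nil {
		return err
	}
	return spec.Walk(func(o *flux.Operation) error {
		fmt.Printf("%s (%s)\n", o.ID, o.Spec.Kind())
		return nil
	})
}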

type Statistics ¶

type Statistics struct {
	// TotalDuration is the total amount of time in nanoseconds spent.
	TotalDuration time.Duration `json:"total_duration"`
	// CompileDuration is the amount of time in nanoseconds spent compiling the query.
	CompileDuration time.Duration `json:"compile_duration"`
	// QueueDuration is the amount of time in nanoseconds spent queueing.
	QueueDuration time.Duration `json:"queue_duration"`
	// PlanDuration is the amount of time in nanoseconds spent in planning the query.
	PlanDuration time.Duration `json:"plan_duration"`
	// RequeueDuration is the amount of time in nanoseconds spent requeueing.
	RequeueDuration time.Duration `json:"requeue_duration"`
	// ExecuteDuration is the amount of time in nanoseconds spent in executing the query.
	ExecuteDuration time.Duration `json:"execute_duration"`

	// Concurrency is the number of goroutines allocated to process the query
	Concurrency int `json:"concurrency"`
	// MaxAllocated is the maximum number of bytes the query allocated.
	MaxAllocated int64 `json:"max_allocated"`
	// TotalAllocated is the total number of bytes allocated.
	// The number includes memory that was freed and then used again.
	TotalAllocated int64 `json:"total_allocated"`

	// RuntimeErrors contains error messages that happened during the execution of the query.
	RuntimeErrors []string `json:"runtime_errors"`

	// Metadata contains metadata key/value pairs that have been attached during execution.
	Metadata metadata.Metadata `json:"metadata"`
}

Statistics is a collection of statistics about the processing of a query.

func (Statistics) Add ¶ added in v0.7.1

func (s Statistics) Add(other Statistics) Statistics

Add returns the sum of s and other.

type Table ¶

type Table interface {
	// Key returns the set of data that is common among all rows
	// in the table.
	Key() GroupKey

	// Cols contains metadata about the column schema.
	Cols() []ColMeta

	// Do calls f to process the data contained within the table.
	// This must only be called once and implementations should return
	// an error if this is called multiple times.
	Do(f func(ColReader) error) error

	// Done indicates that this table is no longer needed and that the
	// underlying processor that produces the table may discard any
	// buffers that need to be processed. If the table has already been
	// read with Do, this happens automatically.
	// This is also not required if the table is empty.
	// It should be safe to always call this function and call it multiple
	// times.
	Done()

	// Empty returns whether the table contains no records.
	Empty() bool
}

Table represents a set of streamed data with a common schema. The contents of the table can be read exactly once.

This data structure is not thread-safe.

type TableIterator ¶

type TableIterator interface {
	Do(f func(Table) error) error
}

type TableObject ¶

type TableObject struct {
	Kind    OperationKind
	Spec    OperationSpec
	Source  OperationSource
	Parents []*TableObject
	// contains filtered or unexported fields
}

TableObject represents the value returned by a transformation. As such, it holds the OperationSpec of the transformation it is associated with, and it is a values.Value (and also a values.Object). It can be compiled and executed as a flux.Program by using a lang.TableObjectCompiler.

func (*TableObject) Append ¶ added in v0.68.0

func (t *TableObject) Append(v values.Value)

func (*TableObject) Array ¶

func (t *TableObject) Array() values.Array

func (*TableObject) Bool ¶

func (t *TableObject) Bool() bool

func (*TableObject) Bytes ¶ added in v0.40.0

func (t *TableObject) Bytes() []byte

func (*TableObject) Duration ¶

func (t *TableObject) Duration() values.Duration

func (*TableObject) Equal ¶

func (t *TableObject) Equal(rhs values.Value) bool

func (*TableObject) Float ¶

func (t *TableObject) Float() float64

func (*TableObject) Function ¶

func (t *TableObject) Function() values.Function

func (*TableObject) Get ¶

func (t *TableObject) Get(i int) values.Value

func (*TableObject) Int ¶

func (t *TableObject) Int() int64

func (*TableObject) IsNull ¶ added in v0.14.0

func (t *TableObject) IsNull() bool

func (*TableObject) Len ¶

func (t *TableObject) Len() int

func (*TableObject) Object ¶

func (t *TableObject) Object() values.Object

func (*TableObject) Operation ¶

func (t *TableObject) Operation(ider IDer) *Operation

func (*TableObject) Range ¶

func (t *TableObject) Range(f func(i int, v values.Value))

func (*TableObject) Regexp ¶

func (t *TableObject) Regexp() *regexp.Regexp

func (*TableObject) Set ¶

func (t *TableObject) Set(i int, v values.Value)

func (*TableObject) Sort ¶ added in v0.68.0

func (t *TableObject) Sort(f func(i, j values.Value) bool)

func (*TableObject) Str ¶

func (t *TableObject) Str() string

func (*TableObject) String ¶

func (t *TableObject) String() string

func (*TableObject) Time ¶

func (t *TableObject) Time() values.Time

func (*TableObject) Type ¶

func (t *TableObject) Type() semantic.MonoType

func (*TableObject) UInt ¶

func (t *TableObject) UInt() uint64

type Time ¶

type Time struct {
	IsRelative bool
	Relative   time.Duration
	Absolute   time.Time
}

Time represents either a relative or absolute time. If Time is its zero value, then it represents a time.Time{}. To represent the current time (now), set IsRelative to true.
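
A minimal sketch of how relative and absolute Time values resolve against a supplied now, using the Time method documented below:

package main

import (
	"fmt"
	"time"

	"github.com/influxdata/flux"
)

func main() {
	now := time.Date(2020, 9, 21, 12, 0, 0, 0, time.UTC)

	// A relative time is resolved against the supplied now value.
	oneHourAgo := flux.Time{IsRelative: true, Relative: -time.Hour}
	fmt.Println(oneHourAgo.Time(now)) // 2020-09-21 11:00:00 +0000 UTC

	// An absolute time ignores now entirely.
	abs := flux.Time{Absolute: time.Date(2018, 10, 10, 0, 0, 0, 0, time.UTC)}
	fmt.Println(abs.Time(now)) // 2018-10-10 00:00:00 +0000 UTC
}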

func ToQueryTime ¶

func ToQueryTime(value values.Value) (Time, error)

func (Time) IsZero ¶

func (t Time) IsZero() bool

func (Time) MarshalText ¶

func (t Time) MarshalText() ([]byte, error)

func (Time) Time ¶

func (t Time) Time(now time.Time) time.Time

Time returns the time specified relative to now.

func (*Time) UnmarshalText ¶

func (t *Time) UnmarshalText(data []byte) error

type WrappedDeps ¶ added in v0.48.0

type WrappedDeps struct {
	HTTPClient        http.Client
	FilesystemService filesystem.Service
	SecretService     secret.Service
	URLValidator      url.Validator
}

Directories ¶

Path Synopsis
ast
Package ast declares the types used to represent the syntax tree for Flux source code.
asttest
Package asttest implements utilities for testing the abstract syntax tree.
internal/fbast
Package fbast contains code generated by the FlatBuffers compiler for serializing AST.
Package builtin ensures all packages related to Flux built-ins are imported and initialized.
cmd
Package codes defines the error codes used by flux.
colm
The compiler package provides a compiler and Go runtime for a subset of the Flux language.
Package complete provides types to aid with auto-completion of Flux scripts in editors.
Package csv contains the csv result encoders and decoders.
dependencies
url
Package execute contains the implementation of the execution phase in the query engine.
executetest
Package executetest contains utilities for testing the query execution phase.
table/static
Package static provides utilities for easily constructing static tables that are meant for tests.
internal
cmd/cmpgen
cmpgen generates comparison options for the asttest package.
fbsemantic
Package fbsemantic contains code generated by the FlatBuffers compiler for serializing the semantic graph.
gen
tools Module
Package interpreter provides the implementation of the Flux interpreter.
libflux
Package mock contains mock implementations of the query package interfaces for testing.
Package parser implements a parser for Flux source files.
plantest
Package plantest contains utilities for testing each query planning phase.
Package querytest contains utilities for testing the query end-to-end.
Package repl implements the read-eval-print-loop for the command line flux query console.
The semantic package provides a graph structure that represents the meaning of a Flux script.
semantictest
Package semantictest contains utilities for testing the semantic package.
Package stdlib represents the Flux standard library.
csv
influxdata/influxdb
From is an operation that mocks the real implementation of InfluxDB's from.
socket
Package socket implements a source that gets input from a socket connection and produces tables given a decoder.
sql
universe
Package universe contains the implementations for the builtin transformation functions.
Package values declares the flux data types and implements them.
