Documentation ¶
Index ¶
- Constants
- func ColumnsUsed(expr Expression, schema column.TableSchema) (cols []string)
- func ColumnsUsedMultiple(schema column.TableSchema, exprs ...Expression) []string
- func Evaluate(expr Expression, chunkLength int, columnData map[string]column.Chunk, ...) (column.Chunk, error)
- func HasIdentifiers(expr Expression) bool
- func InitAggregator(fun *Function, schema column.TableSchema) error
- func PruneFunctionCalls(ex Expression)
- func UpdateAggregator(fun *Function, buckets []uint64, ndistinct int, ...) error
- type Bool
- type Dataset
- type Expression
- type Float
- type Function
- type Identifier
- type Infix
- type Integer
- type Null
- type Ordering
- type Parentheses
- type Parser
- type Prefix
- type Query
- type Relabel
- type String
- type Tuple
Constants ¶
const (
	LOWEST int
	BOOL_AND_OR // TODO(next): is it really that AND and OR have the same precedence?
	EQUALS      // ==, !=
	LESSGREATER // >, <, <=, >=
	ADDITION    // +
	PRODUCT     // *
	PREFIX      // -X or NOT X
	NAMESPACE   // foo.bar
	CALL        // myFunction(X)
)
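These levels drive a Pratt-style (precedence-climbing) parser: each infix token maps to a precedence, and the parser keeps consuming the right-hand side only while the next token binds more tightly. A minimal self-contained sketch of that idea; the token-to-level mapping below is illustrative, not the package's actual table:

```go
package main

import "fmt"

// Precedence levels mirroring the constants above.
const (
	LOWEST int = iota
	BOOL_AND_OR
	EQUALS
	LESSGREATER
	ADDITION
	PRODUCT
	PREFIX
	NAMESPACE
	CALL
)

// precedence maps an operator token to its binding power (illustrative mapping).
func precedence(tok string) int {
	switch tok {
	case "AND", "OR":
		return BOOL_AND_OR
	case "==", "!=":
		return EQUALS
	case ">", "<", ">=", "<=":
		return LESSGREATER
	case "+", "-":
		return ADDITION
	case "*", "/":
		return PRODUCT
	case ".":
		return NAMESPACE
	case "(":
		return CALL
	}
	return LOWEST
}

func main() {
	// "*" binds tighter than "+", so 1 + 2 * 3 parses as 1 + (2 * 3).
	fmt.Println(precedence("*") > precedence("+")) // true
}
```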
Variables ¶
This section is empty.
Functions ¶
func ColumnsUsed ¶
func ColumnsUsed(expr Expression, schema column.TableSchema) (cols []string)
ARCH: this panics when a given column is not in the schema, but since we already validated the schema during the ReturnType call, we should be fine. It's still a bit worrying that we might panic, though. TODO(next)/TODO(joins): all the columnsUsed functions need to support multiple schemas and namespaces; perhaps we should return []*Identifier instead, which would solve a few other issues as well.
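Since every node exposes Children(), collecting the columns an expression references is a plain tree walk with deduplication. A self-contained sketch of that idea; the Expression, Identifier, and Infix types below are simplified stand-ins for the package's own, and the sorting/dedup behaviour is an assumption about the contract, not taken from the implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// Minimal stand-ins for the package's Expression and node types.
type Expression interface{ Children() []Expression }

type Identifier struct{ Name string }

func (id *Identifier) Children() []Expression { return nil }

type Infix struct{ Left, Right Expression }

func (in *Infix) Children() []Expression { return []Expression{in.Left, in.Right} }

// columnsUsed walks the tree and returns a sorted, deduplicated
// list of identifier names.
func columnsUsed(expr Expression) []string {
	seen := make(map[string]bool)
	var walk func(ex Expression)
	walk = func(ex Expression) {
		if id, ok := ex.(*Identifier); ok {
			seen[id.Name] = true
		}
		for _, child := range ex.Children() {
			walk(child)
		}
	}
	walk(expr)
	cols := make([]string, 0, len(seen))
	for name := range seen {
		cols = append(cols, name)
	}
	sort.Strings(cols)
	return cols
}

func main() {
	// foo + (bar + foo)
	expr := &Infix{
		Left:  &Identifier{Name: "foo"},
		Right: &Infix{Left: &Identifier{Name: "bar"}, Right: &Identifier{Name: "foo"}},
	}
	fmt.Println(columnsUsed(expr)) // [bar foo]
}
```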
func ColumnsUsedMultiple ¶
func ColumnsUsedMultiple(schema column.TableSchema, exprs ...Expression) []string
func Evaluate ¶
func Evaluate(expr Expression, chunkLength int, columnData map[string]column.Chunk, filter *bitmap.Bitmap) (column.Chunk, error)
OPTIM: we're doing a lot of type shenanigans at runtime:
- when we evaluate a function on each stripe, we redo the same tree of operations
- this applies not just here, but in projections.go as well
- e.g. if we have `intA - intB`, we know we'll run a function for ints; we don't need to decide that for each stripe
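The optimisation suggested above amounts to resolving the typed kernel once from the static schema and then just calling it per stripe. A hypothetical, self-contained sketch (only the int64 subtraction case is shown; compileSub, kernel, and the string type names are illustrative, not part of the package):

```go
package main

import "fmt"

// A kernel is a type-specialized binary operation over chunk data.
type kernel func(a, b []int64) []int64

// subInts is the int64-specialized subtraction kernel.
func subInts(a, b []int64) []int64 {
	out := make([]int64, len(a))
	for i := range a {
		out[i] = a[i] - b[i]
	}
	return out
}

// compileSub picks the typed kernel once, based on static schema types,
// so per-stripe evaluation is a plain function call with no dispatch.
func compileSub(leftType, rightType string) (kernel, error) {
	if leftType == "int" && rightType == "int" {
		return subInts, nil
	}
	return nil, fmt.Errorf("unsupported types: %s - %s", leftType, rightType)
}

func main() {
	k, err := compileSub("int", "int")
	if err != nil {
		panic(err)
	}
	// The same compiled kernel is reused for every stripe.
	for _, stripe := range [][2][]int64{
		{{5, 7}, {2, 3}},
		{{10}, {4}},
	} {
		fmt.Println(k(stripe[0], stripe[1]))
	}
}
```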
func HasIdentifiers ¶
func HasIdentifiers(expr Expression) bool
func InitAggregator ¶
func InitAggregator(fun *Function, schema column.TableSchema) error
func PruneFunctionCalls ¶
func PruneFunctionCalls(ex Expression)
Types ¶
type Bool ¶
type Bool struct {
// contains filtered or unexported fields
}
func (*Bool) Children ¶
func (ex *Bool) Children() []Expression
func (*Bool) ReturnType ¶
type Dataset ¶ added in v0.1.3
type Expression ¶
type Expression interface {
	ReturnType(ts column.TableSchema) (column.Schema, error)
	String() string
	Children() []Expression
}
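A new node type only has to satisfy these three methods. A self-contained sketch of a literal implementing the interface; the Schema and TableSchema types here are simplified stand-ins for the column package's, and the exported value field is illustrative (the package's own Bool keeps its fields unexported):

```go
package main

import "fmt"

// Simplified stand-ins for column.Schema and column.TableSchema.
type Schema struct{ Dtype string }
type TableSchema []Schema

// Expression mirrors the three-method interface above.
type Expression interface {
	ReturnType(ts TableSchema) (Schema, error)
	String() string
	Children() []Expression
}

// Bool is a minimal literal node implementing Expression.
type Bool struct{ value bool }

// ReturnType of a literal ignores the table schema entirely.
func (ex *Bool) ReturnType(ts TableSchema) (Schema, error) {
	return Schema{Dtype: "bool"}, nil
}
func (ex *Bool) String() string         { return fmt.Sprintf("%v", ex.value) }
func (ex *Bool) Children() []Expression { return nil } // literals are leaves

func main() {
	var ex Expression = &Bool{value: true}
	rt, _ := ex.ReturnType(nil)
	fmt.Println(ex.String(), rt.Dtype) // true bool
}
```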
func ParseStringExpr ¶
func ParseStringExpr(s string) (Expression, error)
func ParseStringExprs ¶
func ParseStringExprs(s string) ([]Expression, error)
type Float ¶
type Float struct {
// contains filtered or unexported fields
}
func (*Float) Children ¶
func (ex *Float) Children() []Expression
func (*Float) ReturnType ¶
type Function ¶
type Function struct {
// contains filtered or unexported fields
}
func AggExpr ¶
func AggExpr(expr Expression) ([]*Function, error)
func NewFunction ¶
NewFunction is one of the very few constructors, as we have to do some fiddling here
func (*Function) Children ¶
func (ex *Function) Children() []Expression
func (*Function) ReturnType ¶
Now, all function return types are centralised here, but they should probably be embedded in the individual functions' definitions. We'll need to have some structs in place anyway (for state management in aggregating funcs), so those could have methods like `ReturnType(args)`, `IsValid(args)`, `IsAggregating` etc. Also, should we make multiplication, inequality etc. just functions like nullif or coalesce? That would allow us to fold all the functionality of eval() into a (recursive) function call. TODO: make sure that these return types are honoured in aggregators' resolvers.
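The decentralised design floated above could look like a registry where each function carries its own return-type logic instead of one central switch. A hypothetical sketch; funcDef, registry, and the string type names are all made up for illustration:

```go
package main

import "fmt"

// funcDef bundles a function with its own return-type logic,
// rather than resolving everything in a central switch.
type funcDef struct {
	name       string
	returnType func(args []string) (string, error)
}

var registry = map[string]funcDef{
	"nullif": {
		name: "nullif",
		returnType: func(args []string) (string, error) {
			if len(args) != 2 {
				return "", fmt.Errorf("nullif expects 2 args, got %d", len(args))
			}
			// same type as the first argument (nullability ignored here)
			return args[0], nil
		},
	},
	"count": {
		name:       "count",
		returnType: func(args []string) (string, error) { return "int", nil },
	},
}

func main() {
	rt, _ := registry["nullif"].returnType([]string{"float", "float"})
	fmt.Println(rt) // float
	rt, _ = registry["count"].returnType(nil)
	fmt.Println(rt) // int
}
```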
type Identifier ¶
type Identifier struct {
	Namespace *Identifier
	Name      string
	// contains filtered or unexported fields
}
func NewIdentifier ¶
func NewIdentifier(name string) *Identifier
TODO(quoting): rules are quite non-transparent - unify and document somehow
func (*Identifier) Children ¶
func (ex *Identifier) Children() []Expression
func (*Identifier) ReturnType ¶
func (ex *Identifier) ReturnType(ts column.TableSchema) (column.Schema, error)
func (*Identifier) String ¶
func (ex *Identifier) String() string
type Infix ¶
type Infix struct {
// contains filtered or unexported fields
}
func (*Infix) Children ¶
func (ex *Infix) Children() []Expression
func (*Infix) ReturnType ¶
type Integer ¶
type Integer struct {
// contains filtered or unexported fields
}
func (*Integer) Children ¶
func (ex *Integer) Children() []Expression
func (*Integer) ReturnType ¶
type Null ¶
type Null struct{}
func (*Null) Children ¶
func (ex *Null) Children() []Expression
func (*Null) ReturnType ¶
type Ordering ¶
type Ordering struct {
Asc, NullsFirst bool // ARCH: consider *bool for better stringers (and better roundtrip tests)
// contains filtered or unexported fields
}
func (*Ordering) Children ¶
func (ex *Ordering) Children() []Expression
func (*Ordering) ReturnType ¶
type Parentheses ¶
type Parentheses struct {
// contains filtered or unexported fields
}
func (*Parentheses) Children ¶
func (ex *Parentheses) Children() []Expression
func (*Parentheses) ReturnType ¶
func (ex *Parentheses) ReturnType(ts column.TableSchema) (column.Schema, error)
func (*Parentheses) String ¶
func (ex *Parentheses) String() string
type Prefix ¶
type Prefix struct {
// contains filtered or unexported fields
}
func (*Prefix) Children ¶
func (ex *Prefix) Children() []Expression
func (*Prefix) ReturnType ¶
type Query ¶
type Query struct {
	Select    []Expression
	Dataset   *Dataset
	Filter    Expression
	Aggregate []Expression
	Order     []Expression
	Limit     *int
}
Query describes what we want to retrieve from a given dataset. There are basically four places you need to edit (and test!) in order to extend this:
- The engine itself needs to support this functionality (usually a method on Dataset or column.Chunk)
- The query method has to be able to translate query parameters to the engine
- The query endpoint handler needs to be able to process the incoming body to the Query struct (the Unmarshaler should mostly take care of this)
- The HTML/JS frontend needs to incorporate this in some way
func ParseQuerySQL ¶
type Relabel ¶
type Relabel struct {
	Label string // exporting it, because there's no other way of getting to it
	// contains filtered or unexported fields
}
func (*Relabel) Children ¶
func (ex *Relabel) Children() []Expression
func (*Relabel) ReturnType ¶
type String ¶
type String struct {
// contains filtered or unexported fields
}
func (*String) Children ¶
func (ex *String) Children() []Expression
func (*String) ReturnType ¶
type Tuple ¶
type Tuple struct {
// contains filtered or unexported fields
}
func (*Tuple) Children ¶
func (ex *Tuple) Children() []Expression
func (*Tuple) ReturnType ¶
This is a bit weird: because a Tuple is a container, it doesn't "return" anything, so we'll just return the homogeneous type it contains. So (1, 2, 3) -> int, (1, 2.0, 3) -> float, (1, 'foo', 3) -> err. TODO/ARCH: we don't worry about whether these are all literals... should we? Or should we leave that to eval?
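The type-folding behaviour described above can be sketched as a small unification function: identical types pass through, mixed int/float upcasts to float, and anything else is an error. A self-contained sketch under the assumption that int-to-float upcasting is the only coercion (the string type names are illustrative stand-ins for the column package's dtypes):

```go
package main

import "fmt"

// unifyTypes folds a tuple's element types into a single type,
// upcasting int to float where mixed and failing otherwise.
func unifyTypes(types []string) (string, error) {
	if len(types) == 0 {
		return "", fmt.Errorf("cannot infer the type of an empty tuple")
	}
	result := types[0]
	for _, t := range types[1:] {
		switch {
		case t == result:
			// same type, nothing to do
		case (t == "int" && result == "float") || (t == "float" && result == "int"):
			result = "float" // numeric upcast
		default:
			return "", fmt.Errorf("incompatible types in tuple: %s and %s", result, t)
		}
	}
	return result, nil
}

func main() {
	fmt.Println(unifyTypes([]string{"int", "int", "int"}))   // (1, 2, 3) -> int
	fmt.Println(unifyTypes([]string{"int", "float", "int"})) // (1, 2.0, 3) -> float
	_, err := unifyTypes([]string{"int", "string", "int"})
	fmt.Println(err != nil) // (1, 'foo', 3) -> err
}
```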