lql

package module
v0.17.1
Published: Apr 11, 2026 License: MIT Imports: 24 Imported by: 5

README

lql

LQL (Lockd Query Language) provides a compact selector and mutation syntax for JSON documents. This module includes:

  • A Go library for parsing selectors and mutations, plus selector evaluation.
  • A CLI (cmd/lql) for selecting JSON documents, applying mutations, and formatting output via pkt.systems/prettyx.

Install

go get pkt.systems/lql@latest

Development Targets

The repository provides smoke (fast) and full (expensive) validation targets:

make test                 # fast test pass
make benchmark            # smoke benchmarks
make benchmark-lockd      # lockd fixture benchmarks
make benchmark-lockd-save # save lockd baseline under perf/baselines
make fuzz                 # smoke fuzzers
make test-all             # check + smoke fuzz + smoke bench

make test-all-full        # full coverage + full fuzz + full bench
make ci-lockd             # lockd CI profile (contracts + lockd benches)

For CPU-constrained systems, cap Go runtime parallelism for any target:

CPU_LIMIT=2 make test-all

Library usage

Selectors

Selectors are declarative predicates built over JSON Pointer fields.

sel, err := lql.ParseSelectorString(`
  and.eq{field=/status,value=open},
  and.range{field=/progress,gte=50}
`)
if err != nil {
  log.Fatal(err)
}

doc := map[string]any{
  "status":   "open",
  "progress": 72,
}

if lql.Matches(sel, doc) {
  fmt.Println("match")
}

If you already have selectors split into a slice, you can combine them with AND or OR without rejoining them yourself:

sel, err := lql.ParseSelectorStrings([]string{`/status="ok"`, `/msg="done"`})   // AND
sel, err = lql.ParseSelectorStringsOr([]string{`/status="ok"`, `/msg="done"`}) // OR

Shorthand forms are supported:

sel, _ := lql.ParseSelectorString(`/status="open",/progress>=50`)

Datetime literals are also supported in shorthand range comparisons:

sel, _ := lql.ParseSelectorString(`/timestamp>=2026-03-05T10:28:21Z`)
sel, _ = lql.ParseSelectorString(`/timestamp>=2026-03-11T01:11:28`)
sel, _ = lql.ParseSelectorString(`/timestamp<"2026-03-05T11:29:41.265+01:00"`)

eq shorthand is datetime-aware when both sides parse as temporal values:

sel, _ := lql.ParseSelectorString(`/timestamp="2025-01-01"`)
// Matches values such as "2025-01-01T15:00:00Z" (date-only intersection).

Explicit date selectors are available for intent and macro support:

sel, _ := lql.ParseSelectorString(`date{field=/timestamp,after=2025-01-01,before=2025-02-01}`)
sel, _ = lql.ParseSelectorString(`date{f=/timestamp,a=2025-01-01,b=2025-01-03}`)
sel, _ = lql.ParseSelectorString(`date{f=/timestamp,since=yesterday}`)

Programmatic range-term construction can use exported bound helpers:

sel := lql.Selector{
  Range: &lql.RangeTerm{
    Field: "/timestamp",
    GTE:   lql.NewDatetimeRangeBound("2026-03-05T10:28:21Z"),
    LT:    lql.NewDatetimeRangeBound("2026-03-05T10:30:00Z"),
  },
}

Temporal behavior summary:

  • Supported temporal literals: YYYY-MM-DD, RFC3339, RFC3339Nano, and naive UTC datetimes like 2026-03-11T01:11:28 or 2026-03-11T01:11:28.123456789.
  • Timezones are normalized to the same instant for datetime comparison.
  • range{...} supports numeric or datetime bounds (gt/gte/lt/lte) but cannot mix numeric and datetime bounds in one clause.
  • Relative macros are only supported by date{...,since=...} (now, today, yesterday).
  • Shorthand/range comparators require explicit numeric or datetime literals (no macros).

Temporal selector performance note (measured on March 5, 2026, synthetic benchmark, go test on Intel i7-1355U):

go test . -run '^$' \
  -bench '^BenchmarkQueryStreamSynthetic/(large_ndjson$|large_ndjson_datetime_range$|large_ndjson_date_selector$)/decision_only_selector/steady_state$' \
  -benchmem -benchtime=500ms -count=5

  • large_ndjson/decision_only_selector/steady_state: ~10.25 ms/op, 0 B/op, 0 allocs/op
  • large_ndjson_datetime_range/decision_only_selector/steady_state: ~16.33 ms/op, 0 B/op, 0 allocs/op (1.59x)
  • large_ndjson_date_selector/decision_only_selector/steady_state: ~16.29 ms/op, 0 B/op, 0 allocs/op (1.59x)

Interpretation: temporal selectors keep zero steady-state allocations, but runtime is higher than plain string equality because candidate values are parsed as datetimes during matching.

Fuzz-style alloc regression guard:

  • TestQueryStreamSelectorAllocBudgetFuzzReplayTemporal replays deterministic parity-fuzz payloads and enforces per-candidate allocation ceilings for temporal shorthand/range/date selectors.
  • This runs in normal go test (and therefore make test, make test-all, make test-all-full) to catch alloc regressions without fuzz nondeterminism.

If you want implicit OR semantics across a single string, use ParseSelectorStringOr:

sel, _ := lql.ParseSelectorStringOr(`/status="open",/status="queued"`)

Array element selection is supported via JSON Pointer indices:

sel, _ := lql.ParseSelectorString(`/devices/0/status="online"`)

Wildcard selection follows explicit semantics:

  • * matches any child value of an object (objects only; arrays do not match)
  • [] matches any element of an array (arrays only; objects do not match)
  • ** matches any child (object value or array element)
  • ... matches any descendant at any depth (objects or arrays)
  • Type mismatches do not match (e.g. [] on an object)
  • Bracket sugar: /items[]/sku is the same as /items/[]/sku

sel, _ := lql.ParseSelectorString(`/labels/*="production"`)
sel, _ = lql.ParseSelectorString(`/items[]/sku="ABC-123"`)
sel, _ = lql.ParseSelectorString(`/items/**/sku="ABC-123"`)
sel, _ = lql.ParseSelectorString(`/items/.../sku="ABC-123"`)

String terms use the same clause syntax:

sel, _ = lql.ParseSelectorString(`icontains{field=/message,value=timeout}`)
sel, _ = lql.ParseSelectorString(`contains{field=/message,any=timeout|degraded}`)
sel, _ = lql.ParseSelectorString(`prefix{field=/service,value=auth,ignoreCase=t}`)

Mutations

Mutations modify JSON objects in-place.

doc := map[string]any{
  "state": map[string]any{
    "status":  "queued",
    "retries": 1,
  },
}

if err := lql.Mutate(doc,
  "/state/status=running",
  "/state/retries=+2",
  "rm:/state/legacy",
); err != nil {
  log.Fatal(err)
}

Brace shorthand applies multiple mutations under a prefix:

_ = lql.Mutate(doc, `/state{/owner="alice",/note="hi"}`)

Time-prefixed mutations normalize timestamps to RFC3339Nano:

_ = lql.MutateWithTime(doc, time.Now(), `time:/state/updated=NOW`)

Mutations support the same wildcard semantics as selectors; missing paths under wildcards are skipped.

File-backed mutation values are available for streaming mutation paths via ParseMutationsWithOptions and the file:, textfile:, and base64file: prefixes. They are disabled by default, produce JSON string values, and are not supported by ApplyMutations.

Create a new JSON document from {} while streaming file content into a field:

printf '{}\n' | lql -F \
  -m '/filename=notes.txt' \
  -m '/tags/kind=document' \
  -m '/tags/source=local' \
  -m 'textfile:/content=notes.txt'

Auto mode chooses escaped text for UTF-8 files without NUL bytes and base64 for binary-looking input:

printf '{}\n' | lql -F \
  -m '/filename=photo.jpg' \
  -m '/tags/media=image' \
  -m 'file:/content=photo.jpg'

Library callers can do the same thing with MutateStream:

muts, _ := lql.ParseMutationsWithOptions([]string{
  `/filename=notes.txt`,
  `/tags/kind=document`,
  `/tags/source=local`,
  `textfile:/content=notes.txt`,
}, time.Now(), lql.ParseMutationsOptions{
  EnableFileValues: true,
  FileValueBaseDir: ".",
})

_ = lql.MutateStream(lql.MutateStreamRequest{
  Reader: strings.NewReader(`{}`),
  Writer: os.Stdout,
  Mutations: muts,
})

Stream mutations over large inputs without loading the whole stream:

muts, _ := lql.ParseMutations([]string{`/state/status=running`}, time.Now())
_ = lql.MutateStream(lql.MutateStreamRequest{
  Ctx: context.Background(),
  Reader: strings.NewReader(`{"state":{"status":"queued"}}`),
  Writer: io.Discard, // optional compact NDJSON sink
  Mutations: muts,
  OnValue: func(v lql.MutateStreamValue) error {
    fmt.Printf("%s\n", v.JSON)
    return nil
  },
})

QueryStream supports QueryDecisionOnly and QueryDecisionPlusValue modes, and both stream APIs return typed *StreamError values for contract failures with machine-usable codes (invalid_selector, invalid_body, document_too_large, context_canceled, internal). Helper predicates are available: IsStreamInvalidSelector, IsStreamInvalidBody, IsStreamDocumentTooLarge, IsStreamContextCanceled, IsStreamInternal.

Selector capability routing helpers are available via InspectSelectorCapabilities and InspectSelectorExecutionTraits.

For deterministic stream accounting and early-stop contracts, use QueryStreamWithResult and MutateStreamWithResult. QueryStreamRequest supports additive stop limits: MaxMatches, MaxCandidates, MaxBytesRead. QueryStreamWithResult reports: CandidatesSeen, CandidatesMatched, BytesRead, BytesCaptured, SpillCount, SpillBytes, StoppedEarly, StopReason, LastOffset. MutateStreamWithResult reports: CandidatesSeen, CandidatesWritten, BytesRead, BytesWritten, BytesCaptured, SpillCount, SpillBytes, StoppedEarly, StopReason, LastOffset.

QueryStreamRequest.OnDecision runs once per candidate before OnValue and still fires when MatchedOnly=true and value callbacks are skipped. Return ErrStreamStop (or wrapping it) from stream callbacks for graceful callback-driven stop (StoppedEarly=true, stop reason callback_stop, nil error return).

In QueryDecisionPlusValue/IncludeJSON mode, candidate payloads spool in memory up to 3 MiB by default, then spill to temp files (/tmp by default). Configure with SpoolMemoryBytes, SpoolTempDir, and SpoolFilePattern. Set MatchedOnly to invoke callbacks only for matched candidates. Tune capture behavior with CapturePolicy: QueryCaptureAllCandidates (default) or QueryCaptureMatchesOnlyBestEffort for lower spool pressure on low-hit scans.

For caller-managed payload storage, set DisableInternalSpool=true and provide PayloadSinkFactory with a custom QueryStreamPayloadSink. For low-churn spill reuse across many candidates, use NewReusableQueryPayloadSinkFactory(...) and pass factory.Factory() as the payload sink factory, then call factory.Close() when done.

MutateStream callback payload capture also supports caller-managed sinks via DisableInternalSpool and PayloadSinkFactory. For reusable spill behavior, use NewReusableMutatePayloadSinkFactory(...) and pass factory.Factory(), then call factory.Close() when done.

MutateStream supports strict framing/root modes: MutateSingleValueOnly, MutateObjectRootOnly, and MutateSingleObjectOnly.

Candidate size accounting contract:

  • QueryStream MaxCandidateBytes and QueryStreamValue.Size count bytes from the first non-whitespace byte of each candidate to its closing JSON token.
  • Top-level separators and surrounding whitespace are excluded.

CLI usage

usage: lql [-m mutator...] [-f field...] selector... [data.json]
   or: lql selector... < data.json
   or: cat data.json | lql selector...

By default, multiple selector arguments are combined with AND. Use -O (or --or) to combine them with OR.

Selectors determine which JSON documents to output. The matching documents are printed in full by default. If the input is a JSON array, each element is treated as a candidate document.

Mutations apply to each JSON object in the input stream (NDJSON or JSON arrays). When selectors are provided alongside -m, mutations are applied only to matching objects. With -m, selectors no longer filter output; they only control which objects are mutated (output still includes all objects, subject to -f). Use -M/--matches-only to keep selectors acting as output filters even when -m is provided. Output always contains the full (possibly mutated) object unless -f is used.

Local file-backed mutation values are disabled by default in the CLI. Enable them with -F or --enable-file-mutations to use file:/field=path, textfile:/field=path, or base64file:/field=path. Leading ~/ expands to the current user's home directory.

Selection examples

Select documents matching a status and region:

lql '/status="open",/region="us-west"' data.json

Full LQL examples

CLI (full LQL expressions):

lql 'and.eq{field=/status,value=open},and.range{field=/progress,gte=50}' data.json
lql 'or.eq{field=/region,value=us},or.eq{field=/region,value=eu}' data.json
lql 'not.eq{field=/state,value=disabled}' data.json
lql 'exists{/metadata/etag}' data.json
lql 'in{field=/env,any=prod|stage|dev}' data.json
lql 'and.eq{field=/items[]/sku,value=ABC-123},and.range{field=/items[]/price,lt=20}' data.json
lql 'contains{field=/msg,value=timeout,ic=t}' data.json
lql 'contains{field=/msg,any=timeout|degraded},icontains{field=/service,a=AUTH|EDGE}' data.json
lql 'icontains{field=/msg,value=timeout},iprefix{field=/service,value=auth}' data.json
lql '/timestamp>=2026-03-05T10:28:21Z' data.json
lql 'date{field=/timestamp,after=2025-01-01,before=2025-02-01}' data.json
lql 'date{f=/timestamp,a=2025-01-01,b=2025-01-03}' data.json
lql 'date{f=/timestamp,since=yesterday}' data.json

Note: full LQL expressions use {} and should be quoted (or the braces escaped) to avoid shell brace expansion.

Note: inside {}, you can use f=/v= as aliases for field=/value=, a= as an alias for any= (in, contains, and icontains), and ic= as an alias for ignoreCase= in string terms (contains/prefix). For date, a= is an alias for after= and b= is an alias for before=. ignoreCase accepts true/false or shorthand t/f.

Note: omitted string values stay field-scoped. For example contains{field=/msg}, icontains{field=/msg}, prefix{field=/name}, and iprefix{field=/name} act as path assertions and require those paths to exist (regardless of the terminal value type). Only root/wildcard-any forms (such as field=/, field=/*, field=/...) collapse to match-all for empty string terms.

SDK (parse full LQL expressions):

sel, err := lql.ParseSelectorString(
  "and.eq{field=/status,value=open},and.range{field=/progress,gte=50}",
)
sel, err = lql.ParseSelectorString(
  "or.eq{field=/region,value=us},or.eq{field=/region,value=eu}",
)
sel, err = lql.ParseSelectorString("not.eq{field=/state,value=disabled}")
sel, err = lql.ParseSelectorString("exists{/metadata/etag}")
sel, err = lql.ParseSelectorString("in{field=/env,any=prod|stage|dev}")
sel, err = lql.ParseSelectorString(
  "and.eq{field=/items[]/sku,value=ABC-123},and.range{field=/items[]/price,lt=20}",
)
sel, err = lql.ParseSelectorString(
  "icontains{field=/msg,value=timeout},iprefix{field=/service,value=auth}",
)
sel, err = lql.ParseSelectorString(
  "contains{field=/msg,any=timeout|degraded},icontains{field=/service,a=AUTH|EDGE}",
)
sel, err = lql.ParseSelectorString(
  "date{field=/timestamp,after=2025-01-01,before=2025-02-01}",
)
sel, err = lql.ParseSelectorString("date{f=/timestamp,since=yesterday}")

Select only a few fields from matching documents:

lql '/status="open"' -f /id -f /status -f /region data.json

Filter array input (each element is evaluated):

cat devices.json | lql '/telemetry/battery_mv<3600'

Match on array elements inside each document:

lql '/devices/0/status="online"' data.json

Match on any array element using wildcards:

lql '/items[]/sku="ABC-123"' data.json

Match on any descendant using recursive descent:

lql '/items/.../sku="ABC-123"' data.json

Mutation examples

Apply mutations conditionally:

lql -m '/state/retries++' -m '/state/status=running' '/state/status="queued"' state.json

Apply mutations and emit only selected fields:

lql -m '/state/retries=+3' -f /state/retries -f /state/status state.json

Apply mutations across array elements:

lql -m '/items[]/status=ready' data.json

Apply mutations using recursive descent:

lql -m '/groups/.../status=ready' data.json

Note: mutations apply to a JSON object root, but paths may traverse arrays using wildcards or numeric indices.

Write mutations inline:

lql -m '/state/status=done' -i state.json

Output formatting

By default, output is pretty-printed using prettyx. Use -c for compact one-line JSON documents and -t to select a prettyx theme (see lql -h).

Selector grammar overview

  • Logical: and, or, not
  • Clauses: eq, contains, icontains, prefix, iprefix, range, date, in, exists
  • contains/icontains support value= (single term) or any=/a= (pipe-delimited terms)
  • JSON Pointer fields: /path/to/field
  • Shorthand: /field=value, /field!=value, /field>=10, /field<5
  • Datetime shorthand: /timestamp>=2026-03-05T10:28:21Z, /timestamp>=2026-03-11T01:11:28, /timestamp<"2026-03-05T11:29:41.265+01:00"
  • range bounds: gt, gte, lt, lte with numeric or datetime literals (single-mode per clause)
  • date keys: value, after/a, before/b, gt, gte, lt, lte, since
  • date.since macros: now, today, yesterday
  • Arrays: /items/0/sku="ABC-123"
  • Wildcards: * (object values), [] (array elements), ** (any child), ... (recursive descent)

Mutation grammar overview

  • Set: /path=value
  • Increment: /path++, /path--, /path=+3, /path=-2
  • Remove: rm:/path, delete:/path
  • Time: time:/path=NOW or RFC3339Nano timestamp (RFC3339 also accepted)
  • Brace: /path{/a=1,/b=2}
  • Wildcards: *, [], **, ... in path segments

Documentation

Overview

Package lql implements the Lockd Query Language (LQL) selector and mutation syntax for JSON documents.

Selectors

LQL selectors are declarative predicates over JSON documents. They compile to a Selector AST and can be evaluated with Matches.

Basic forms:

eq{field=/status,value=open}
contains{field=/message,value=timeout}
contains{field=/message,any=timeout|degraded}
icontains{field=/message,value=timeout}
icontains{field=/service,a=AUTH|EDGE}
prefix{field=/owner,value=team-}
iprefix{field=/owner,value=team-}
range{field=/progress,gt=10}
range{field=/timestamp,gte="2026-03-05T10:28:21Z"}
date{field=/timestamp,after=2025-01-01,before=2025-02-01}
date{f=/timestamp,a=2025-01-01,b=2025-01-03}
date{f=/timestamp,since=yesterday}
in{field=/state,any=["queued","running"]}
exists{/metadata/etag}

String terms (`contains`/`prefix`) support `ignoreCase=true|false` (or shorthand `ic=t|f`) for per-clause case handling. `contains`/`icontains` also support `any=`/`a=` for pipe-delimited multi-term matching.

Omitted values for `contains`/`icontains`/`prefix`/`iprefix` remain field-scoped path assertions (for example `contains{field=/msg}` requires `/msg` to resolve, regardless of terminal value type). Only root/wildcard-any fields such as `/`, `/*`, and `/...` collapse to match-all for empty string terms.

Logical composition:

and.eq{field=/status,value=open},and.range{field=/progress,gte=50}
or.eq{field=/region,value=us},or.eq{field=/region,value=eu}
not.eq{field=/state,value=disabled}

Shorthand:

/status="open"
/status!=closed
/progress>=50
/timestamp>=2026-03-05T10:28:21Z
/timestamp>=2026-03-11T01:11:28
/timestamp<"2026-03-05T11:29:41.265+01:00"

Temporal selector semantics:

  • Supported temporal literals: YYYY-MM-DD, RFC3339, RFC3339Nano, and naive UTC datetimes like YYYY-MM-DDTHH:MM:SS or YYYY-MM-DDTHH:MM:SS.fffffffff.
  • eq is datetime-aware when both query value and field value parse as temporal values.
  • Date-only equality intersects timestamps by calendar date (for example "2025-01-01" matches "2025-01-01T15:00:00Z").
  • range supports numeric or datetime bounds (gt/gte/lt/lte), but does not allow mixed numeric + datetime bounds in one clause.
  • Programmatic range construction can use NewNumericRangeBound and NewDatetimeRangeBound.
  • date.since supports relative macros now/today/yesterday.

Arrays:

/devices/0/status="online"
/items/2/sku="ABC-123"

Wildcards (selector paths):

  • * matches any child value of an object (objects only; arrays do not match)
  • [] matches any element of an array (arrays only; objects do not match)
  • ** matches any child (object value or array element)
  • ... matches any descendant at any depth (objects or arrays)

Type mismatches do not match (e.g. [] on an object). Bracket sugar expands "/items[]/sku" to "/items/[]/sku".

/labels/*="production"
/items[]/sku="ABC-123"
/items/**/sku="ABC-123"
/items/.../sku="ABC-123"

Example:

sel, _ := lql.ParseSelectorString(`/status="open",/progress>=50`)
ok := lql.Matches(sel, map[string]any{
  "status": "open",
  "progress": 72,
})

Mutations

Mutations mutate JSON objects in-place using JSON Pointer paths.

/state/status=running        # set
/state/retries++             # increment by 1
/state/retries=+3            # add 3
rm:/state/legacy             # delete
time:/state/updated=NOW      # RFC3339Nano timestamp (RFC3339 also accepted)

Mutations support the same wildcard semantics as selectors. When a wildcard path segment is used, missing paths under matched nodes are skipped.

Brace shorthand applies a set of nested mutations under a prefix:

/state{/owner="alice",/note="hi"}

File-backed mutation values are supported only in streaming mutation paths. They are disabled by default and require ParseMutationsWithOptions:

muts, _ := lql.ParseMutationsWithOptions([]string{`file:/payload=blob.bin`}, time.Now(), lql.ParseMutationsOptions{
  EnableFileValues: true,
  FileValueBaseDir: "/workdir",
})

Streaming mutation can also create a new JSON object from `{}` while loading file content into a field:

muts, _ := lql.ParseMutationsWithOptions([]string{
  `/filename=notes.txt`,
  `/tags/kind=document`,
  `/tags/source=local`,
  `textfile:/content=notes.txt`,
}, time.Now(), lql.ParseMutationsOptions{
  EnableFileValues: true,
  FileValueBaseDir: ".",
})
_ = lql.MutateStream(lql.MutateStreamRequest{
  Reader: strings.NewReader(`{}`),
  Writer: os.Stdout,
  Mutations: muts,
})

Example:

doc := map[string]any{"state": map[string]any{"retries": 1}}
_ = lql.Mutate(doc, "/state/retries=+2", "/state/status=running")

Comma/newline separated mutation strings can also drive streaming mutation:

muts, _ := lql.ParseMutationsString("/filename=notes.txt,\n/tags/kind=document,\n/tags/source=local", time.Now())
_ = lql.MutateStream(lql.MutateStreamRequest{
  Reader: strings.NewReader(`{}`),
  Writer: os.Stdout,
  Mutations: muts,
})

The package is intentionally small and dependency-free so it can be embedded in CLIs and services that need LQL parsing or evaluation.

Streaming Queries

Query streams can be evaluated without materializing full JSON objects:

sel, _ := lql.ParseSelectorString(`/status="open"`)
_ = lql.QueryStream(lql.QueryStreamRequest{
  Reader: strings.NewReader(`{"status":"open"}`),
  Ctx: context.Background(),
  Selector: sel,
  Mode: lql.QueryDecisionPlusValue,
  MatchedOnly: true,
  OnValue: func(v lql.QueryStreamValue) error {
    if v.Matched {
      // v.JSON contains candidate payload bytes when in-memory.
      // If spooled to disk, use v.OpenJSON.
    }
    return nil
  },
})

Streaming Mutations

Mutation streams can be applied candidate-by-candidate without materializing the full input stream:

muts, _ := lql.ParseMutations([]string{`/state/status=running`}, time.Now())
_ = lql.MutateStream(lql.MutateStreamRequest{
  Reader: strings.NewReader(`{"state":{"status":"queued"}}`),
  Ctx: context.Background(),
  Writer: io.Discard, // optional compact NDJSON output sink
  SpoolMemoryBytes: 1 << 20, // optional callback payload memory threshold
  Mode: lql.MutateSingleObjectOnly,
  Mutations: muts,
  OnValue: func(v lql.MutateStreamValue) error {
    // v.JSON/v.Value are in-memory when available.
    // v.OpenJSON works for both in-memory and spooled payloads.
    return nil
  },
})

Index

Examples

Constants

This section is empty.

Variables

View Source
var ErrStreamStop = errors.New("lql: stream stop")

ErrStreamStop requests graceful early termination from stream callbacks.

When returned from supported callbacks, stream APIs stop scanning and return nil error with stop metadata in result-bearing variants.

Functions

func ApplyMutations

func ApplyMutations(doc map[string]any, muts []Mutation) error

ApplyMutations mutates the provided JSON object in-place according to muts.

func IsStreamContextCanceled added in v0.7.0

func IsStreamContextCanceled(err error) bool

IsStreamContextCanceled reports whether err is a StreamErrorContextCanceled.

func IsStreamDocumentTooLarge added in v0.7.0

func IsStreamDocumentTooLarge(err error) bool

IsStreamDocumentTooLarge reports whether err is a StreamErrorDocumentTooLarge.

func IsStreamErrorCode added in v0.7.0

func IsStreamErrorCode(err error, code StreamErrorCode) bool

IsStreamErrorCode reports whether err unwraps to *StreamError with code.

func IsStreamInternal added in v0.7.0

func IsStreamInternal(err error) bool

IsStreamInternal reports whether err is a StreamErrorInternal.

func IsStreamInvalidBody added in v0.7.0

func IsStreamInvalidBody(err error) bool

IsStreamInvalidBody reports whether err is a StreamErrorInvalidBody.

func IsStreamInvalidSelector added in v0.7.0

func IsStreamInvalidSelector(err error) bool

IsStreamInvalidSelector reports whether err is a StreamErrorInvalidSelector.

func Matches

func Matches(sel Selector, doc map[string]any) bool

Matches reports whether doc satisfies sel. A nil document never matches.

func MatchesValue

func MatchesValue(sel Selector, value any) bool

MatchesValue reports whether value satisfies sel. Only JSON objects match.

func Mutate

func Mutate(doc map[string]any, exprs ...string) error

Mutate applies the provided mutation expressions to doc using time.Now().

func MutateStream added in v0.7.0

func MutateStream(req MutateStreamRequest) error

MutateStream applies mutations to JSON values from a stream and emits values in deterministic input order. Top-level arrays are treated as streams of candidate values (including nested top-level arrays).

func MutateWithTime

func MutateWithTime(doc map[string]any, now time.Time, exprs ...string) error

MutateWithTime applies mutation expressions using the supplied time.

func QueryMutateStream added in v0.8.0

func QueryMutateStream(req QueryMutateStreamRequest) error

QueryMutateStream runs QueryMutateStreamWithResult and discards summary.

func QueryStream added in v0.7.0

func QueryStream(req QueryStreamRequest) error

QueryStream evaluates selector matches against a JSON stream without materializing full documents.

Top-level arrays are treated as streams of candidate values.

Types

type DateTerm added in v0.13.0

type DateTerm struct {
	Field  string `json:"field"`
	Value  string `json:"value,omitempty"`
	Since  string `json:"since,omitempty"`
	After  string `json:"after,omitempty"`
	Before string `json:"before,omitempty"`
	GTE    string `json:"gte,omitempty"`
	GT     string `json:"gt,omitempty"`
	LTE    string `json:"lte,omitempty"`
	LT     string `json:"lt,omitempty"`
	// contains filtered or unexported fields
}

DateTerm captures explicit datetime selector parameters.

func (*DateTerm) UnmarshalJSON added in v0.13.0

func (t *DateTerm) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes the date selector term and primes temporal caches.

type InTerm

type InTerm struct {
	Field string   `json:"field"`
	Any   []string `json:"any"`
}

InTerm represents a small set membership filter.

type MutateStreamMode added in v0.7.0

type MutateStreamMode uint8

MutateStreamMode controls top-level framing and root-shape validation.

const (
	// MutateModeAuto preserves backward-compatible behavior:
	// multiple top-level values are allowed and top-level arrays are flattened.
	MutateModeAuto MutateStreamMode = iota
	// MutateSingleValueOnly requires exactly one top-level JSON value and disables
	// top-level array flattening.
	MutateSingleValueOnly
	// MutateObjectRootOnly requires each top-level value to be a JSON object root.
	// Top-level arrays/scalars/null are rejected.
	MutateObjectRootOnly
	// MutateSingleObjectOnly combines MutateSingleValueOnly and MutateObjectRootOnly.
	MutateSingleObjectOnly
)

type MutateStreamPayloadSink added in v0.8.0

type MutateStreamPayloadSink interface {
	io.Writer
	Finalize() error
	Open() (io.ReadCloser, error)
	Bytes() []byte
	SizeHint() int
	Cleanup() error
}

MutateStreamPayloadSink receives candidate payload bytes in callback mode.

type MutateStreamPayloadSinkFactory added in v0.8.0

type MutateStreamPayloadSinkFactory func(MutateStreamPayloadSinkRequest) (MutateStreamPayloadSink, error)

MutateStreamPayloadSinkFactory creates per-candidate payload sinks.

type MutateStreamPayloadSinkRequest added in v0.8.0

type MutateStreamPayloadSinkRequest struct {
	Offset int64
}

MutateStreamPayloadSinkRequest describes one candidate sink allocation.

type MutateStreamPlan added in v0.7.0

type MutateStreamPlan struct {
	// contains filtered or unexported fields
}

MutateStreamPlan reuses compiled mutation state for MutateStream.

A zero-value plan is treated as unset.

func NewMutateStreamPlan added in v0.7.0

func NewMutateStreamPlan(mutations []Mutation) (MutateStreamPlan, error)

NewMutateStreamPlan compiles mutations for reuse across MutateStream calls.

func (MutateStreamPlan) IsZero added in v0.7.0

func (p MutateStreamPlan) IsZero() bool

IsZero reports whether the plan is unset.

type MutateStreamRequest added in v0.7.0

type MutateStreamRequest struct {
	Ctx    context.Context
	Reader io.Reader
	Writer io.Writer
	Mode   MutateStreamMode
	// Plan reuses compiled mutation state across stream invocations.
	// When set, Mutations must be empty.
	Plan      MutateStreamPlan
	Mutations []Mutation
	// SpoolMemoryBytes sets the in-memory callback payload threshold.
	// Values <= 0 default to 1 MiB.
	SpoolMemoryBytes int64
	// SpoolTempDir sets the temp directory used when callback payloads spill to disk.
	// Empty defaults to /tmp.
	SpoolTempDir string
	// SpoolFilePattern controls os.CreateTemp naming for spilled callback payloads.
	// Empty defaults to "lql-spool-*.json".
	SpoolFilePattern string
	// DisableInternalSpool requires caller-managed payload sink when callback
	// capture is enabled.
	DisableInternalSpool bool
	// PayloadSinkFactory creates a candidate payload sink for caller-managed
	// callback capture.
	PayloadSinkFactory MutateStreamPayloadSinkFactory
	// MaxCandidateBytes is measured on canonical emitted candidate bytes
	// (compact JSON form after mutation for object roots) from the first
	// non-whitespace byte to closing token.
	MaxCandidateBytes int64
	OnValue           func(MutateStreamValue) error
}

MutateStreamRequest configures streaming mutation evaluation.

type MutateStreamResult added in v0.8.0

type MutateStreamResult struct {
	CandidatesSeen    int64
	CandidatesWritten int64
	BytesRead         int64
	BytesWritten      int64
	BytesCaptured     int64
	SpillCount        int64
	SpillBytes        int64
	StoppedEarly      bool
	StopReason        MutateStreamStopReason
	LastOffset        int64
}

MutateStreamResult reports deterministic run summary for MutateStream.

func MutateStreamWithResult added in v0.8.0

func MutateStreamWithResult(req MutateStreamRequest) (result MutateStreamResult, err error)

MutateStreamWithResult applies mutations to JSON values from a stream and emits values in deterministic input order. Top-level arrays are treated as streams of candidate values (including nested top-level arrays).

type MutateStreamStopReason added in v0.8.0

type MutateStreamStopReason string

MutateStreamStopReason classifies graceful early-stop outcomes.

const (
	// MutateStreamStopNone indicates normal end-of-stream completion.
	MutateStreamStopNone MutateStreamStopReason = ""
	// MutateStreamStopCallbackStop indicates callback requested graceful stop.
	MutateStreamStopCallbackStop MutateStreamStopReason = "callback_stop"
)

type MutateStreamValue added in v0.7.0

type MutateStreamValue struct {
	// Value mirrors JSON for compatibility with older callback handlers.
	// Prefer JSON for new code.
	Value json.RawMessage
	JSON  []byte
	// OpenJSON returns payload bytes from offset 0, whether in-memory or spooled.
	OpenJSON func() (io.ReadCloser, error)
	Size     int64
	Offset   int64
}

MutateStreamValue describes one mutated candidate value from the stream. JSON, Value, and OpenJSON are valid only during the callback invocation.

type Mutation

type Mutation struct {
	Path  []string
	Kind  MutationKind
	Value any
	Delta float64
	// contains filtered or unexported fields
}

Mutation describes a parsed mutation expression.

func Mutations

func Mutations(now time.Time, exprs ...string) ([]Mutation, error)

Mutations parses variadic mutation expressions (each of which may contain comma/newline separated clauses) using the provided timestamp for time: operands.

func ParseMutations

func ParseMutations(exprs []string, now time.Time) ([]Mutation, error)

ParseMutations parses CLI-style mutation expressions into Mutation structs. Paths follow JSON Pointer semantics (`/foo/bar`), so literal dots or spaces in keys require no extra quoting. Brace shorthand (`/foo{/bar=1,/baz=2}`), rm:/time: prefixes, and ++/--/+=/-= increment forms are supported.
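
The JSON Pointer path semantics mentioned above can be illustrated with a minimal stdlib sketch (splitPointer is a hypothetical helper; the module ships its own jsonpointer package for this). Per RFC 6901, segments are split on `/`, then `~1` is unescaped to `/` before `~0` is unescaped to `~`, so keys with literal dots or slashes need no quoting:

```go
package main

import (
	"fmt"
	"strings"
)

// splitPointer splits an RFC 6901 JSON Pointer into unescaped segments.
func splitPointer(ptr string) []string {
	if ptr == "" {
		return nil
	}
	parts := strings.Split(strings.TrimPrefix(ptr, "/"), "/")
	for i, p := range parts {
		// Order matters: unescape ~1 -> "/" first, then ~0 -> "~".
		p = strings.ReplaceAll(p, "~1", "/")
		parts[i] = strings.ReplaceAll(p, "~0", "~")
	}
	return parts
}

func main() {
	// A dotted key and a key containing a literal slash.
	fmt.Println(splitPointer("/foo.bar/a~1b")) // prints: [foo.bar a/b]
}
```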

func ParseMutationsString

func ParseMutationsString(expr string, now time.Time) ([]Mutation, error)

ParseMutationsString parses a comma/newline separated LQL mutation string.

func ParseMutationsWithOptions added in v0.15.0

func ParseMutationsWithOptions(exprs []string, now time.Time, opts ParseMutationsOptions) ([]Mutation, error)

ParseMutationsWithOptions parses CLI-style mutation expressions into Mutation structs using the supplied parser options.

type MutationFileValueResolver added in v0.15.0

type MutationFileValueResolver interface {
	Open(path string) (io.ReadCloser, error)
}

MutationFileValueResolver opens file-backed mutation sources.

Implementations must return a new reader positioned at offset 0 on each Open call. Auto mode may open the same path more than once per mutation application.

type MutationKind

type MutationKind int

MutationKind identifies the operation applied to a JSON path.

const (
	// MutationSet assigns a value to the target path.
	MutationSet MutationKind = iota
	// MutationIncrement adds or subtracts a numeric delta at the target path.
	MutationIncrement
	// MutationRemove deletes the target path.
	MutationRemove
)

type ParseMutationsOptions added in v0.15.0

type ParseMutationsOptions struct {
	EnableFileValues  bool
	FileValueBaseDir  string
	FileValueResolver MutationFileValueResolver
}

ParseMutationsOptions configures ParseMutationsWithOptions behavior.

type ProjectFieldsRequest added in v0.7.0

type ProjectFieldsRequest struct {
	Ctx    context.Context
	Reader io.Reader
	Writer io.Writer
	Paths  []ProjectionPath
	Plan   *ProjectionPlan

	MaxOutputBytes int64

	SpoolMemoryBytes int64
	SpoolTempDir     string
	SpoolFilePattern string
}

ProjectFieldsRequest configures streaming projection for one JSON value.

Reader must provide exactly one JSON value. The root value must be an object. Paths are matched as JSON Pointer segments and only matched fields are emitted. Selected field payloads are spooled in memory and spill to disk according to the spool configuration.

type ProjectFieldsResult added in v0.7.0

type ProjectFieldsResult struct {
	Found bool
	Size  int64
}

ProjectFieldsResult describes one projection invocation.

func ProjectFields added in v0.7.0

func ProjectFields(req ProjectFieldsRequest) (result ProjectFieldsResult, err error)

ProjectFields projects selected fields from one JSON object without materializing the full input document.

type ProjectionPath added in v0.7.0

type ProjectionPath struct {
	Raw      string
	Segments []string
}

ProjectionPath is a compiled JSON Pointer path used for field projection.

func ParseProjectionPaths added in v0.7.0

func ParseProjectionPaths(paths []string) ([]ProjectionPath, error)

ParseProjectionPaths parses repeated JSON Pointer fields used by projection.

type ProjectionPlan added in v0.7.0

type ProjectionPlan struct {
	// contains filtered or unexported fields
}

ProjectionPlan stores compiled projection paths for repeated invocations.

func NewProjectionPlan added in v0.7.0

func NewProjectionPlan(paths []ProjectionPath) (*ProjectionPlan, error)

NewProjectionPlan compiles projection paths for repeated ProjectFields calls.

func (*ProjectionPlan) Paths added in v0.7.0

func (p *ProjectionPlan) Paths() []ProjectionPath

Paths returns a copy of compiled projection paths.

type QueryMutateStreamRequest added in v0.8.0

type QueryMutateStreamRequest struct {
	Ctx    context.Context
	Reader io.Reader
	Writer io.Writer

	Selector  Selector
	QueryPlan QueryStreamPlan

	Mutations  []Mutation
	MutatePlan MutateStreamPlan

	// Query-side limits/capture options.
	QueryCapturePolicy      QueryStreamCapturePolicy
	QuerySpoolMemoryBytes   int64
	QuerySpoolTempDir       string
	QuerySpoolFilePattern   string
	QueryDisableSpool       bool
	QueryPayloadSinkFactory QueryStreamPayloadSinkFactory
	QueryMaxCandidateBytes  int64
	MaxMatches              int64
	MaxCandidates           int64
	MaxBytesRead            int64

	// Mutate-side options.
	MutateMode               MutateStreamMode
	MutateSpoolMemoryBytes   int64
	MutateSpoolTempDir       string
	MutateSpoolFilePattern   string
	MutateDisableSpool       bool
	MutatePayloadSinkFactory MutateStreamPayloadSinkFactory
	MutateMaxCandidateBytes  int64

	OnDecision func(QueryStreamDecision) error
}

QueryMutateStreamRequest configures fused query+mutate streaming.

It selects matched candidates from Reader and applies mutations to each matched candidate in streaming order, writing mutated output to Writer.

type QueryMutateStreamResult added in v0.8.0

type QueryMutateStreamResult struct {
	Query  QueryStreamResult
	Mutate MutateStreamResult
}

QueryMutateStreamResult reports fused query+mutate run summaries.

func QueryMutateStreamWithResult added in v0.8.0

func QueryMutateStreamWithResult(req QueryMutateStreamRequest) (QueryMutateStreamResult, error)

QueryMutateStreamWithResult selects matched candidates from Reader and applies mutations to each matched candidate, writing mutated output to Writer without pipe handoff orchestration.

type QueryStreamCapturePolicy added in v0.8.0

type QueryStreamCapturePolicy uint8

QueryStreamCapturePolicy controls candidate payload capture strategy in QueryDecisionPlusValue mode.

const (
	// QueryCaptureAuto preserves backward-compatible capture behavior.
	QueryCaptureAuto QueryStreamCapturePolicy = iota
	// QueryCaptureAllCandidates captures payload bytes for every candidate.
	QueryCaptureAllCandidates
	// QueryCaptureMatchesOnlyBestEffort captures only when match payload may be
	// needed, with conservative fallback to full capture.
	QueryCaptureMatchesOnlyBestEffort
)

type QueryStreamDecision added in v0.8.0

type QueryStreamDecision struct {
	Matched bool
	Size    int64
	Offset  int64
}

QueryStreamDecision describes one per-candidate decision event.

type QueryStreamMode added in v0.7.0

type QueryStreamMode uint8

QueryStreamMode controls payload behavior for QueryStream callbacks.

const (
	// QueryModeAuto preserves backward-compatible behavior using IncludeJSON.
	QueryModeAuto QueryStreamMode = iota
	// QueryDecisionOnly reports only match/unmatch state.
	QueryDecisionOnly
	// QueryDecisionPlusValue includes candidate payload access.
	QueryDecisionPlusValue
)

type QueryStreamPayloadSink added in v0.7.0

type QueryStreamPayloadSink interface {
	io.Writer
	// Finalize is called once, after the candidate is parsed and before the
	// callback observes the payload.
	Finalize() error
	// Open returns a readable payload view from offset 0.
	Open() (io.ReadCloser, error)
	// Bytes returns in-memory payload bytes when available; may return nil.
	Bytes() []byte
	// SizeHint lets reuse-aware callers tune the next allocation.
	SizeHint() int
	// Cleanup releases all resources; always called exactly once after callback,
	// including on errors.
	Cleanup() error
}

QueryStreamPayloadSink receives candidate payload bytes in IncludeJSON mode. Implementations may keep bytes in memory, spill to disk, or forward elsewhere.

type QueryStreamPayloadSinkFactory added in v0.7.0

type QueryStreamPayloadSinkFactory func(QueryStreamPayloadSinkRequest) (QueryStreamPayloadSink, error)

QueryStreamPayloadSinkFactory creates per-candidate payload sinks.

type QueryStreamPayloadSinkRequest added in v0.7.0

type QueryStreamPayloadSinkRequest struct {
	Offset int64
}

QueryStreamPayloadSinkRequest describes one candidate sink allocation.

type QueryStreamPlan added in v0.7.0

type QueryStreamPlan struct {
	// contains filtered or unexported fields
}

QueryStreamPlan reuses compiled selector state for QueryStream.

A zero-value plan is treated as unset.

func NewQueryStreamPlan added in v0.7.0

func NewQueryStreamPlan(selector Selector) (QueryStreamPlan, error)

NewQueryStreamPlan compiles selector state for reuse across QueryStream calls.

func (QueryStreamPlan) IsZero added in v0.7.0

func (p QueryStreamPlan) IsZero() bool

IsZero reports whether the plan is unset.

type QueryStreamRequest added in v0.7.0

type QueryStreamRequest struct {
	Ctx      context.Context
	Reader   io.Reader
	Selector Selector
	// Plan reuses compiled selector state across query invocations.
	// When set, Selector must be empty.
	Plan        QueryStreamPlan
	Mode        QueryStreamMode
	IncludeJSON bool
	// MatchedOnly invokes OnValue only for matched candidates.
	// Default false preserves callback-per-candidate behavior.
	MatchedOnly bool
	// SpoolMemoryBytes sets the in-memory payload threshold for IncludeJSON mode.
	// Values <= 0 default to 3 MiB.
	SpoolMemoryBytes int64
	// SpoolTempDir sets the temp directory used when payloads spill to disk.
	// Empty defaults to /tmp.
	SpoolTempDir string
	// SpoolFilePattern controls os.CreateTemp naming for spilled payload files.
	// Empty defaults to "lql-spool-*.json".
	SpoolFilePattern string
	// DisableInternalSpool requires caller-managed payload sink in IncludeJSON mode.
	DisableInternalSpool bool
	// PayloadSinkFactory creates a candidate payload sink for caller-managed spooling.
	// When set, QueryStream uses this sink instead of internal spool.
	PayloadSinkFactory QueryStreamPayloadSinkFactory
	// MaxCandidateBytes is measured as canonical candidate bytes from the first
	// non-whitespace byte of each candidate to its closing JSON token, excluding
	// surrounding top-level separators/whitespace.
	MaxCandidateBytes int64
	// MaxMatches stops stream after this many matched candidates (<=0 disabled).
	MaxMatches int64
	// MaxCandidates stops stream after this many scanned candidates (<=0 disabled).
	MaxCandidates int64
	// MaxBytesRead stops stream after this many bytes consumed from input
	// (<=0 disabled).
	MaxBytesRead int64
	// CapturePolicy controls payload capture strategy in plus-value mode.
	CapturePolicy QueryStreamCapturePolicy
	// OnDecision runs once per candidate after match decision and before OnValue.
	OnDecision func(QueryStreamDecision) error
	OnValue    func(QueryStreamValue) error
}

QueryStreamRequest configures token-stream query evaluation.

type QueryStreamResult added in v0.8.0

type QueryStreamResult struct {
	CandidatesSeen    int64
	CandidatesMatched int64
	BytesRead         int64
	BytesCaptured     int64
	SpillCount        int64
	SpillBytes        int64
	StoppedEarly      bool
	StopReason        QueryStreamStopReason
	LastOffset        int64
}

QueryStreamResult reports a deterministic run summary for QueryStream.

func QueryStreamWithResult added in v0.8.0

func QueryStreamWithResult(req QueryStreamRequest) (QueryStreamResult, error)

QueryStreamWithResult evaluates selector matches against a JSON stream without materializing full documents and returns a run summary.

Top-level arrays are treated as streams of candidate values.

type QueryStreamStopReason added in v0.8.0

type QueryStreamStopReason string

QueryStreamStopReason classifies graceful early-stop outcomes.

const (
	// QueryStreamStopNone indicates normal end-of-stream completion.
	QueryStreamStopNone QueryStreamStopReason = ""
	// QueryStreamStopMatchLimit indicates MaxMatches stopped the scan.
	QueryStreamStopMatchLimit QueryStreamStopReason = "match_limit"
	// QueryStreamStopCandidateLimit indicates MaxCandidates stopped the scan.
	QueryStreamStopCandidateLimit QueryStreamStopReason = "candidate_limit"
	// QueryStreamStopByteLimit indicates MaxBytesRead stopped the scan.
	QueryStreamStopByteLimit QueryStreamStopReason = "byte_limit"
	// QueryStreamStopCallbackStop indicates callback requested graceful stop.
	QueryStreamStopCallbackStop QueryStreamStopReason = "callback_stop"
)

type QueryStreamValue added in v0.7.0

type QueryStreamValue struct {
	OpenJSON func() (io.ReadCloser, error)
	JSON     []byte
	Size     int64
	Matched  bool
}

QueryStreamValue describes one candidate JSON value from the stream.

JSON and OpenJSON are valid only during the callback invocation. JSON may be nil when payloads are spooled to disk; use OpenJSON to read bytes.

type RangeBound added in v0.13.0

type RangeBound struct {
	// contains filtered or unexported fields
}

RangeBound stores a numeric or datetime literal.

func NewDatetimeRangeBound added in v0.13.0

func NewDatetimeRangeBound(v string) *RangeBound

NewDatetimeRangeBound builds a datetime range bound for RangeTerm fields. The value is trimmed; semantic validation occurs during selector validation/eval.

func NewNumericRangeBound added in v0.13.0

func NewNumericRangeBound(v float64) *RangeBound

NewNumericRangeBound builds a numeric range bound for RangeTerm fields.

func (*RangeBound) DateTime added in v0.13.0

func (b *RangeBound) DateTime() (string, bool)

DateTime returns the datetime bound literal when present.

func (*RangeBound) Number added in v0.13.0

func (b *RangeBound) Number() (float64, bool)

Number returns the numeric bound when present.

type RangeTerm

type RangeTerm struct {
	Field string      `json:"field"`
	GTE   *RangeBound `json:"gte,omitempty"`
	GT    *RangeBound `json:"gt,omitempty"`
	LTE   *RangeBound `json:"lte,omitempty"`
	LT    *RangeBound `json:"lt,omitempty"`
}

RangeTerm captures numeric or datetime bounds.

func (RangeTerm) MarshalJSON added in v0.13.0

func (r RangeTerm) MarshalJSON() ([]byte, error)

MarshalJSON writes numeric bounds as JSON numbers and datetime bounds as strings.

func (*RangeTerm) UnmarshalJSON added in v0.13.0

func (r *RangeTerm) UnmarshalJSON(data []byte) error

UnmarshalJSON accepts numeric or datetime strings for each range bound.

type ReusableMutatePayloadSinkFactory added in v0.8.0

type ReusableMutatePayloadSinkFactory struct {
	// contains filtered or unexported fields
}

ReusableMutatePayloadSinkFactory reuses in-memory buffers and spill files across MutateStream callback payloads.

func NewReusableMutatePayloadSinkFactory added in v0.8.0

func NewReusableMutatePayloadSinkFactory(opts ReusableMutatePayloadSinkFactoryOptions) *ReusableMutatePayloadSinkFactory

NewReusableMutatePayloadSinkFactory builds a reusable caller-managed sink factory for MutateStreamRequest.PayloadSinkFactory.

func (*ReusableMutatePayloadSinkFactory) Close added in v0.8.0

Close releases all reusable sink resources including any temp files.

func (*ReusableMutatePayloadSinkFactory) Factory added in v0.8.0

Factory returns a MutateStream payload sink factory function.

func (*ReusableMutatePayloadSinkFactory) NewSink added in v0.8.0

NewSink implements MutateStreamPayloadSinkFactory.

type ReusableMutatePayloadSinkFactoryOptions added in v0.8.0

type ReusableMutatePayloadSinkFactoryOptions = ReusableQueryPayloadSinkFactoryOptions

ReusableMutatePayloadSinkFactoryOptions configures reusable caller-managed mutate payload sinks.

type ReusableQueryPayloadSinkFactory added in v0.8.0

type ReusableQueryPayloadSinkFactory struct {
	// contains filtered or unexported fields
}

ReusableQueryPayloadSinkFactory provides a caller-managed QueryStream payload sink factory that reuses in-memory buffers and spill files across candidates to reduce allocation and filesystem churn.

Call Close when the factory is no longer needed to release temp files and pooled buffers.

func NewReusableQueryPayloadSinkFactory added in v0.8.0

func NewReusableQueryPayloadSinkFactory(opts ReusableQueryPayloadSinkFactoryOptions) *ReusableQueryPayloadSinkFactory

NewReusableQueryPayloadSinkFactory builds a reusable caller-managed sink factory for QueryStreamRequest.PayloadSinkFactory.

func (*ReusableQueryPayloadSinkFactory) Close added in v0.8.0

Close releases all reusable sink resources including any temp files.

func (*ReusableQueryPayloadSinkFactory) Factory added in v0.8.0

Factory returns a QueryStream payload sink factory function.

func (*ReusableQueryPayloadSinkFactory) NewSink added in v0.8.0

NewSink implements QueryStreamPayloadSinkFactory.

type ReusableQueryPayloadSinkFactoryOptions added in v0.8.0

type ReusableQueryPayloadSinkFactoryOptions struct {
	SpoolMemoryBytes int64
	SpoolTempDir     string
	SpoolFilePattern string
}

ReusableQueryPayloadSinkFactoryOptions configures reusable caller-managed payload sinks for QueryStream.

type Selector

type Selector struct {
	And       []Selector `json:"and,omitempty"`
	Or        []Selector `json:"or,omitempty"`
	Not       *Selector  `json:"not,omitempty"`
	Eq        *Term      `json:"eq,omitempty"`
	Contains  *Term      `json:"contains,omitempty"`
	IContains *Term      `json:"icontains,omitempty"`
	Prefix    *Term      `json:"prefix,omitempty"`
	IPrefix   *Term      `json:"iprefix,omitempty"`
	Range     *RangeTerm `json:"range,omitempty"`
	Date      *DateTerm  `json:"date,omitempty"`
	In        *InTerm    `json:"in,omitempty"`
	Exists    string     `json:"exists,omitempty"`
}

Selector represents the recursive selector AST.

func ParseSelectorString

func ParseSelectorString(expr string) (Selector, error)

ParseSelectorString parses a comma/newline separated selector expression string (LQL). When multiple expressions are provided, non-explicit clauses are combined with AND.

func ParseSelectorStringOr added in v0.4.0

func ParseSelectorStringOr(expr string) (Selector, error)

ParseSelectorStringOr parses a comma/newline separated selector expression string (LQL). When multiple expressions are provided, non-explicit clauses are combined with OR.

Example
sel, err := ParseSelectorStringOr(`/status="ok",/msg="done"`)
if err != nil {
	fmt.Println("error:", err)
	return
}
fmt.Println(!sel.IsEmpty(), len(sel.Or))
Output:
true 2

func ParseSelectorStrings added in v0.3.0

func ParseSelectorStrings(exprs []string) (Selector, error)

ParseSelectorStrings parses each selector expression and combines them with AND.

Example
sel, err := ParseSelectorStrings([]string{`/status="ok"`, `/msg="done"`})
if err != nil {
	fmt.Println("error:", err)
	return
}
fmt.Println(!sel.IsEmpty(), len(sel.And))
Output:
true 2

func ParseSelectorStringsOr added in v0.3.0

func ParseSelectorStringsOr(exprs []string) (Selector, error)

ParseSelectorStringsOr parses each selector expression and combines them with OR.

Example
sel, err := ParseSelectorStringsOr([]string{`/status="ok"`, `/msg="done"`})
if err != nil {
	fmt.Println("error:", err)
	return
}
fmt.Println(!sel.IsEmpty(), len(sel.Or))
Output:
true 2

func ParseSelectorValues

func ParseSelectorValues(values url.Values) (Selector, error)

ParseSelectorValues scans URL parameters for selector expressions and returns the AST.

func (Selector) IsEmpty

func (s Selector) IsEmpty() bool

IsEmpty reports whether the selector contains no clauses.

func (*Selector) UnmarshalJSON

func (s *Selector) UnmarshalJSON(data []byte) error

UnmarshalJSON ensures empty selectors stay zeroed without nil pointers.

type SelectorCapabilities added in v0.7.0

type SelectorCapabilities struct {
	And      bool
	Or       bool
	Not      bool
	Eq       bool
	Range    bool
	Date     bool
	In       bool
	Prefix   bool
	Contains bool
	Exists   bool

	WildcardPath  bool
	RecursivePath bool
}

SelectorCapabilities reports feature families used by a selector AST.

func InspectSelectorCapabilities added in v0.7.0

func InspectSelectorCapabilities(sel Selector) SelectorCapabilities

InspectSelectorCapabilities reports selector feature families and path complexity.

func (SelectorCapabilities) Families added in v0.7.0

func (c SelectorCapabilities) Families() []string

Families returns sorted selector clause families referenced by the selector.

type SelectorExecutionTraits added in v0.8.0

type SelectorExecutionTraits struct {
	UsesContainsLike   bool
	UsesRecursivePath  bool
	UsesWildcardPath   bool
	RequiresObjectRoot bool
	// EarlyNonMatchLikely is a best-effort heuristic for low-match workloads.
	EarlyNonMatchLikely bool
}

SelectorExecutionTraits reports selector planning hints for stream execution.

func InspectSelectorExecutionTraits added in v0.8.0

func InspectSelectorExecutionTraits(sel Selector) SelectorExecutionTraits

InspectSelectorExecutionTraits reports execution planning hints for selector evaluation.

type StreamError added in v0.7.0

type StreamError struct {
	Code   StreamErrorCode
	Detail string
	Offset int64
	Path   string
	Err    error
}

StreamError is returned by QueryStream and MutateStream.

func AsStreamError added in v0.7.0

func AsStreamError(err error) (*StreamError, bool)

AsStreamError unwraps err into *StreamError.

func (*StreamError) Error added in v0.7.0

func (e *StreamError) Error() string

func (*StreamError) Unwrap added in v0.7.0

func (e *StreamError) Unwrap() error

type StreamErrorCode added in v0.7.0

type StreamErrorCode string

StreamErrorCode is a machine-usable error class for streaming contracts.

const (
	// StreamErrorInvalidSelector indicates selector parse/compile incompatibility.
	StreamErrorInvalidSelector StreamErrorCode = "invalid_selector"
	// StreamErrorInvalidBody indicates malformed JSON/body or stream framing input.
	StreamErrorInvalidBody StreamErrorCode = "invalid_body"
	// StreamErrorDocumentTooLarge indicates a configured candidate-size limit was exceeded.
	StreamErrorDocumentTooLarge StreamErrorCode = "document_too_large"
	// StreamErrorContextCanceled indicates request cancellation or timeout.
	StreamErrorContextCanceled StreamErrorCode = "context_canceled"
	// StreamErrorInternal indicates an unexpected internal execution failure.
	StreamErrorInternal StreamErrorCode = "internal"
)

func StreamErrorCodeOf added in v0.7.0

func StreamErrorCodeOf(err error) (StreamErrorCode, bool)

StreamErrorCodeOf returns the StreamError code when err unwraps to a *StreamError.

type Term

type Term struct {
	Field      string   `json:"field"`
	Value      string   `json:"value"`
	Any        []string `json:"any,omitempty"`
	IgnoreCase bool     `json:"ignoreCase,omitempty"`
	// contains filtered or unexported fields
}

Term represents a simple field/value predicate.

func (Term) MarshalJSON added in v0.12.0

func (t Term) MarshalJSON() ([]byte, error)

MarshalJSON preserves omitted-value intent for transport round-trips.

func (*Term) UnmarshalJSON

func (t *Term) UnmarshalJSON(data []byte) error

UnmarshalJSON accepts string/bool/number for value and converts to string.

Directories

Path	Synopsis
cmd/lql	Command lql evaluates LQL selectors and applies LQL mutations over JSON input.
jsonpointer	Package jsonpointer implements RFC 6901 JSON Pointer utilities.
