quark

package module
v0.6.0 Latest
Published: May 14, 2026 License: Apache-2.0 Imports: 27 Imported by: 0

README

Quark ORM

A type-safe, security-first ORM for Go — generics on the surface, six dialects underneath.


Docs · Quick Start · Examples · CLI · Changelog


📌 Status

Quark is in v0.6 — late alpha. v0.6 is the Phase 3 cut: schema-as-code migrations land, covering:

  • neutral schema introspection across the four CI dialects + SQLite
  • a pure-Go schema diff
  • the Client.PlanMigration / ApplyPlan round-trip with transactional + resumable execution
  • the quarkmigrate plan/verify/apply CLI workflow
  • orchestrated Backfill, a per-Client model registry, and a distributed migration lock
  • the Array[T] column wrapper

Prior releases: v0.5 was a Phase 0 cleanup that made the cross-engine integration matrix (PostgreSQL, MySQL, MariaDB, MSSQL via testcontainers; Oracle excluded pending an image issue) blocking in CI. v0.4 landed the composable query builder (typed expression AST, subqueries, CTEs, window functions, set operators, pessimistic locking, structured Join builder, nested-Preload dotted paths, IN(...) chunking, HavingAggregate). v0.3 landed the rich-type / dirty-tracking layer (Nullable[T], JSON[T], RegisterTypeMapper, optimistic locking, soft-delete scopes).

Quark is not yet v1.0 production-ready — see docs/ANALISIS_MADUREZ.md for the honest gap analysis and the path to a real v1.0.

Breaking changes are documented in docs/MIGRATION_vX.Y.Z.md per version (none for v0.6 or v0.5; MIGRATION_v0.4.0.md covers the Join builder rename from v0.3.x). Release notes per version live under docs/RELEASE_NOTES_*.md.


🏗️ Why I built this

After running production services on GORM, four patterns kept causing incidents: every db.Find(&result) forced an interface{} cast the compiler couldn't verify; column names in WHERE clauses were plain strings with no guard against typos or injection in dynamic queries; N+1 queries appeared silently whenever a Preload was forgotten, only surfacing in slow-query logs hours later; and multi-tenant isolation meant copy-pasting WHERE tenant_id = ? everywhere, relying on discipline instead of enforcement. Quark is the ORM I wished existed: generics end the casts, SQLGuard validates every identifier at the API boundary, eager loading is explicit, and multi-tenancy is first-class — not an afterthought.


🚀 Quick Start

go get github.com/jcsvwinston/quark
package main

import (
    "context"
    "log"

    "github.com/jcsvwinston/quark"
    _ "modernc.org/sqlite"
)

type User struct {
    ID    int64  `db:"id"    pk:"true"`
    Name  string `db:"name"  quark:"not_null"`
    Email string `db:"email" quark:"unique"`
    Age   int    `db:"age"`
}

func main() {
    client, err := quark.New("sqlite", "file:app.db?cache=shared")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    ctx := context.Background()

    // Create the table
    client.Migrate(ctx, &User{})

    // Insert
    u := User{Name: "Alice", Email: "alice@example.com", Age: 30}
    quark.For[User](ctx, client).Create(&u)
    // u.ID is now set

    // Query
    users, _ := quark.For[User](ctx, client).
        Where("age", ">=", 18).
        OrderBy("name", "ASC").
        Limit(20).
        List()

    // Update (partial — only non-zero fields) → returns (rowsAffected int64, err error)
    u.Name = "Alice Smith"
    rows, err := quark.For[User](ctx, client).Update(&u)
    _, _ = rows, err

    // Delete
    _, _ = quark.For[User](ctx, client).HardDelete(&u)

    _ = users
}

Switch to PostgreSQL — change one line, zero query code changes:

client, _ = quark.New("postgres", "postgres://user:pass@localhost/db")

See the per-dialect runnable examples under examples/ (one folder per supported engine).


🎬 Demo

Recording coming soon. To preview Quark locally right now:

git clone https://github.com/jcsvwinston/quark
cd quark
go run ./examples/sqlite

Why Quark?

Most Go ORMs make you choose between safety and ergonomics. Quark doesn't.

Quark vs. GORM vs. sqlx vs. ent:

  • Native generics (no interface{}) — GORM: partial¹
  • SQL injection guard — Quark validates identifiers and values; GORM and ent protect values only²; sqlx is manual
  • 6 dialects, zero-config switch — partial elsewhere
  • Native multi-tenant (DB/Schema/RLS) — GORM: manual/plugin; sqlx: manual; ent: manual/interceptor
  • Immutable query builder — GORM: mutable³; sqlx: N/A
  • Integrated L2 cache — plugin-based elsewhere
  • stdlib *sql.DB — no magic pool
  • OpenTelemetry built-in — plugin for GORM and ent
  • Batch ops (Delete/Upsert/Update) — GORM: partial⁴; partial elsewhere

¹ GORM v2 core API uses interface{}; generic wrappers exist but are not part of the primary API.
² GORM and ent use parameterized queries that protect values against injection. Quark additionally validates identifiers (column/table names) at the API layer. See docs/comparison.md for a detailed breakdown with code examples.
³ GORM queries can mutate shared state when chained; Session(&gorm.Session{NewDB: true}) mitigates this but is opt-in.
⁴ GORM supports CreateInBatches; batch DELETE and batch UPDATE require custom loops.

For a cell-by-cell justification with code examples, see docs/comparison.md.


✨ Features

  • 100% Type-Safe — Go Generics end interface{} casts and silent runtime errors forever
  • SQLGuard — Every identifier (column, table, operator) is validated before touching the wire
  • Immutable Builder — Clone-on-write query builder, safe for concurrent goroutines
  • 6 Dialects — PostgreSQL · MySQL · MariaDB · SQLite · MSSQL · Oracle, all with idiomatic SQL generation
  • Native Multi-Tenancy — Database-per-tenant, schema-per-tenant, and Row-Level Security out of the box
  • L2 Cache — Pluggable cache backend (in-memory, Redis) wired directly into the query lifecycle
  • OpenTelemetry — Distributed tracing and metrics without changing your query code
  • Batch Operations — Chunked DeleteBatch, dialect-optimal UpsertBatch, atomic UpdateBatch
  • Eager Loading — Single-query Preload() eliminates N+1 queries
  • Auto-Migrations & Sync — Migrate() creates tables; Sync() evolves them, including column renames
  • Hooks & Middleware — Full lifecycle hooks (BeforeCreate, AfterDelete…) and stackable middleware
  • Versioned Migrations — Code-first migration files with Up/Down and dry-run support
  • Composite PKs — First-class support for multi-column primary keys across all dialects
  • Streaming — Iter(), Cursor(), and Paginate() prevent OOM on large datasets
  • CLI — quark model generate, quark migrate up, quark inspect schema, and more

🔒 SQLGuard — Security by Default

Quark refuses to build a query with an unknown column, operator, or identifier:

// Compile-time + runtime guard: "drop_table" is not a valid operator
quark.For[User](ctx, client).Where("name", "drop_table", "x").List()
// → ErrInvalidQuery: operator "drop_table" not allowed

// Raw subqueries require explicit opt-in
quark.For[User](ctx, client).WhereSubquery("id", "IN", rawSQL).List()
// → ErrInvalidQuery: WhereSubquery requires AllowRawQueries to be enabled

Enable raw queries only where you deliberately need them:

lims := quark.DefaultLimits()
lims.AllowRawQueries = true
client, _ = quark.New("postgres", "postgres://user:pass@localhost/db", quark.WithLimits(lims))

📖 Core Operations

CRUD
// Create
err := quark.For[User](ctx, client).Create(&user)

// Find by PK
user, err := quark.For[User](ctx, client).Find(1)

// Update (partial — zero-value fields are skipped)
user.Name = "Bob"
rows, err := quark.For[User](ctx, client).Update(&user)

// UpdateMap (force any value, including zero)
rows, err := quark.For[User](ctx, client).
    Where("id", "=", user.ID).
    UpdateMap(map[string]any{"active": false, "score": 0})

// Upsert (INSERT … ON CONFLICT) — all 6 dialects
err = quark.For[User](ctx, client).Upsert(&user, []string{"email"}, []string{"name", "age"})

// Soft delete (sets deleted_at) or hard delete
rows, err = quark.For[User](ctx, client).Delete(&user)
rows, err = quark.For[User](ctx, client).HardDelete(&user)
rows, err = quark.For[User](ctx, client).Where("active", "=", false).DeleteBy()
Batch Operations
// DeleteBatch — chunked IN clauses, respects dialect limits
affected, err := quark.For[User](ctx, client).DeleteBatch([]int64{1, 2, 3, 100})

// UpsertBatch — dialect-optimal (multi-row ON CONFLICT / bulk MERGE / individual MERGE)
err = quark.For[User](ctx, client).UpsertBatch(users, []string{"email"}, []string{"name", "age"})

// UpdateBatch — N partial updates in a single transaction
affected, err = quark.For[User](ctx, client).UpdateBatch(users)
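The chunking behaviour behind DeleteBatch can be pictured with a plain helper — an illustrative sketch, not Quark's internal code: split the key list into dialect-sized chunks and issue one IN (...) per chunk.

```go
package main

import "fmt"

// chunk splits ids into slices of at most size elements, mirroring how a
// chunked "DELETE ... WHERE id IN (...)" batches its keys.
func chunk(ids []int64, size int) [][]int64 {
	var out [][]int64
	for len(ids) > 0 {
		n := size
		if len(ids) < n {
			n = len(ids)
		}
		out = append(out, ids[:n])
		ids = ids[n:]
	}
	return out
}

func main() {
	for _, batch := range chunk([]int64{1, 2, 3, 100}, 3) {
		fmt.Println(batch) // one IN (...) clause per batch
	}
}
```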
Query Builder
users, err := quark.For[User](ctx, client).
    Select("id", "name", "email").
    Where("active", "=", true).
    Where("age", ">", 18).
    WhereIn("role", []any{"admin", "editor"}).
    WhereBetween("created_at", start, end).
    WhereNot("banned", "=", true).
    Or(func(q *quark.Query[User]) *quark.Query[User] {
        return q.Where("tier", "=", "vip")
    }).
    OrderBy("created_at", "DESC").
    Limit(50).Offset(100).
    List()

count, err := quark.For[User](ctx, client).Where("active", "=", true).Count()
total, err := quark.For[Order](ctx, client).Sum("amount")
Transactions & Savepoints
err := client.Tx(ctx, func(tx *quark.Tx) error {
    if err := quark.ForTx[User](ctx, tx).Create(&u); err != nil {
        return err // triggers ROLLBACK
    }
    tx.Savepoint("checkpoint")
    // nested savepoint — partial rollback possible
    return nil // triggers COMMIT
})

🏢 Multi-Tenancy

cfg := quark.DefaultTenantConfig()
cfg.Strategy  = quark.DatabasePerTenant  // or SchemaPerTenant, RowLevelSecurity
cfg.BaseClient = adminClient

router := quark.NewTenantRouter(cfg,
    func(ctx context.Context) string {
        return ctx.Value("tenant_id").(string)
    }, nil)

// Queries are automatically routed & isolated — no code changes
users, _ := quark.For[User](tenantCtx, router).List()

📦 Caching (L2)

import "github.com/jcsvwinston/quark/cache/memory"

store := memory.New()
client, _ := quark.New("postgres", "postgres://user:pass@localhost/db",
    quark.WithCacheStore(store),
)

// Cache for 5 minutes, tagged "users"
users, _ := quark.For[User](ctx, client).
    Cache(5*time.Minute, "users").
    List()

// Invalidate the tag (e.g. after a write)
store.InvalidateTags(ctx, "users")

Redis backend: github.com/jcsvwinston/quark/cache/redis


🔭 OpenTelemetry

import quarkotel "github.com/jcsvwinston/quark/otel"

client, _ := quark.New("postgres", "postgres://user:pass@localhost/db",
    quark.WithMiddleware(quarkotel.New()),
)
// Every query now emits spans and metrics to your configured OTEL exporter

🗄 Migrations

Auto-Migrate & Sync
// Create table if not exists
client.Migrate(ctx, &User{}, &Order{})

// Evolve schema: add columns, rename with quark:"rename:old_col", drop with safe=false
client.Sync(ctx, &User{})
Versioned Migrations
// migrations/20240101_create_users.go
migrate.Register(&migrate.Migration{
    ID: "20240101_create_users",
    Up: func(ctx context.Context, client *quark.Client) error {
        return client.Exec(ctx, `CREATE TABLE users (...)`)
    },
    Down: func(ctx context.Context, client *quark.Client) error {
        return client.Exec(ctx, `DROP TABLE users`)
    },
})
quark migrate up
quark migrate down --steps 1
quark migrate status

🛠️ CLI

Install:

go install github.com/jcsvwinston/quark/cmd/quark@latest
  • quark init — Scaffold a new project with .quark.yml config
  • quark model generate --from-table users — Generate Go structs from live database tables
  • quark migrate create add_index — Create a new versioned migration file
  • quark migrate up — Apply pending migrations
  • quark migrate down --steps 1 — Revert the last migration
  • quark migrate status — Show applied / pending migrations
  • quark inspect schema — Print the full database schema
  • quark inspect table users — Inspect a specific table
  • quark validate --table users — Validate column ↔ struct mapping
  • quark seed run — Execute registered seeders
  • quark tenant provision acme — Provision a new tenant
  • quark tenant migrate-all — Run migrations across all tenants

📐 Project Structure

github.com/jcsvwinston/quark
├── *.go                  Core ORM (client, query builder, CRUD, dialect)
├── cache/
│   ├── memory/           In-memory L2 cache
│   └── redis/            Redis L2 cache
├── migrate/              Versioned migration engine
├── otel/                 OpenTelemetry middleware
├── internal/             Private implementation (guard, schema, introspection)
├── cmd/
│   └── quark/            CLI tool
├── examples/             Runnable examples per dialect
└── docs/                 Architecture, API reference, multi-tenancy guide

⚙️ Configuration Reference

client, err := quark.New("postgres", "postgres://user:pass@localhost/db",
    quark.WithLimits(quark.Limits{
        MaxResults:         10_000,          // hard cap on List() results
        MaxWhereConditions: 20,              // prevent runaway WHERE chains
        MaxJoins:           5,
        QueryTimeout:       30 * time.Second,
        AllowRawQueries:    false,           // explicit opt-in for raw SQL
        SafeMigrations:     true,            // block DROP COLUMN by default
    }),
    quark.WithCacheStore(store),             // L2 cache backend
    quark.WithMiddleware(myMiddleware),      // stackable middleware
    quark.WithQueryObserver(myObserver),     // query logging / metrics
)

🤝 Contributing

Pull requests are welcome. For major changes, please open an issue first.

git clone https://github.com/jcsvwinston/quark
cd quark
go test ./...           # all unit + integration tests (SQLite runs offline)

External engine tests require env vars:

QUARK_TEST_POSTGRES_DSN=postgres://...
QUARK_TEST_MYSQL_DSN=user:pass@tcp(...)/db?parseTime=true
QUARK_TEST_MSSQL_DSN=sqlserver://...
QUARK_TEST_ORACLE_DSN=oracle://...

📄 License

Apache 2.0 — see LICENSE

Documentation

Overview

Package quark provides a modern, type-safe ORM for Go. It supports multiple SQL dialects and is designed to be framework-agnostic.

Index

Constants

This section is empty.

Variables

var (
	// ErrNotFound indicates that no record was found for the given criteria.
	ErrNotFound = errors.New("record not found")

	// ErrInvalidModel indicates that the provided model is invalid or not registered.
	ErrInvalidModel = errors.New("invalid model")

	// ErrInvalidQuery indicates that the query is malformed or invalid.
	ErrInvalidQuery = errors.New("invalid query")

	// ErrInvalidIdentifier indicates that a table or column identifier is invalid.
	ErrInvalidIdentifier = errors.New("invalid identifier")

	// ErrInvalidJSONPath indicates that a JSON path passed to WhereJSON is malformed
	// or contains characters that could enable SQL injection. Quark accepts dotted
	// paths shaped like "user.name"; see guard.ValidateJSONPath for the grammar.
	// Array indexes and engine-specific syntax are out of scope for WhereJSON;
	// use RawQuery for those.
	ErrInvalidJSONPath = errors.New("invalid JSON path")

	// ErrInvalidJoin indicates that a JOIN ... ON clause passed to Join,
	// LeftJoin, or RightJoin does not match the minimal identifier-only
	// grammar Quark accepts while a structured Join().On() builder is pending
	// (Phase 2 AST). See guard.ValidateJoinOn for the grammar; use a
	// structured Join (when available) or RawQuery for shapes outside it.
	ErrInvalidJoin = errors.New("invalid JOIN ON clause")

	// ErrStaleEntity indicates that an optimistic-locking update failed
	// because the row's version column had been bumped by another writer
	// since the entity was loaded. The caller should reload the row, replay
	// the change against the fresh state, and retry — or surface the
	// conflict to the user. Returned by Update / UpdateFields / Tracked.Save
	// when the model carries a quark:"version" field.
	ErrStaleEntity = errors.New("stale entity (optimistic-locking conflict)")

	// ErrUnsupportedFeature indicates that a feature is not supported by the
	// active database dialect. Returned by builder methods (e.g. ForUpdate
	// on SQLite) so callers can branch by dialect or fall back to a different
	// strategy. The error message includes the dialect name and the feature
	// being requested.
	ErrUnsupportedFeature = errors.New("feature not supported by dialect")

	// ErrDialectNotSupported indicates that the database dialect is not supported.
	ErrDialectNotSupported = errors.New("dialect not supported")

	// ErrConnection indicates a database connection error.
	ErrConnection = errors.New("database connection error")

	// ErrTimeout indicates that a query timed out.
	ErrTimeout = errors.New("query timeout")

	// ErrConstraintViolation indicates a database constraint violation.
	ErrConstraintViolation = errors.New("constraint violation")
)

Common errors returned by quark operations.
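The reload-and-retry loop that ErrStaleEntity calls for can be sketched without the ORM; errStale and the callbacks below are local stand-ins, not Quark API:

```go
package main

import (
	"errors"
	"fmt"
)

// errStale stands in for quark.ErrStaleEntity.
var errStale = errors.New("stale entity (optimistic-locking conflict)")

// updateWithRetry retries an optimistic-locking update: on a stale-entity
// conflict it reloads fresh state, replays the change, and tries again.
// Any other error aborts immediately.
func updateWithRetry(attempts int, reload func() error, update func() error) error {
	for i := 0; i < attempts; i++ {
		err := update()
		if err == nil {
			return nil
		}
		if !errors.Is(err, errStale) {
			return err // a real failure, not a version conflict
		}
		if err := reload(); err != nil {
			return err
		}
	}
	return errStale
}

func main() {
	conflicts := 2 // first two attempts lose the race to another writer
	err := updateWithRetry(5,
		func() error { return nil }, // reload the fresh row state
		func() error {
			if conflicts > 0 {
				conflicts--
				return errStale
			}
			return nil
		})
	fmt.Println("update succeeded:", err == nil)
}
```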

var ErrLockTimeout = errors.New("migration lock acquisition timed out")

ErrLockTimeout is returned by AcquireMigrationLock when the lock cannot be acquired within the given timeout. It is distinct from ErrUnsupportedFeature (which means the dialect doesn't model distributed locks at all) and from generic driver errors.

var HasPlaceholders = guard.HasPlaceholders

HasPlaceholders checks if a query string contains parameter placeholders.
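As a rough illustration of what such a check involves (this regexp is an approximation, not guard's actual implementation), the placeholder styles the supported drivers use can be matched like so:

```go
package main

import (
	"fmt"
	"regexp"
)

// placeholderRe approximates the placeholder styles of the supported
// drivers: "?" (MySQL/SQLite), "$1" (Postgres), "@p1" (MSSQL),
// ":name" (Oracle).
var placeholderRe = regexp.MustCompile(`\?|\$\d+|@p\d+|:\w+`)

func hasPlaceholders(query string) bool {
	return placeholderRe.MatchString(query)
}

func main() {
	fmt.Println(hasPlaceholders("SELECT * FROM users WHERE id = ?"))  // true
	fmt.Println(hasPlaceholders("SELECT * FROM users WHERE id = $1")) // true
	fmt.Println(hasPlaceholders("SELECT 1"))                          // false
}
```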

Functions

func Call

func Call(ctx context.Context, provider ClientProvider, procedure string, args ...any) error

Call executes a stored procedure that does not return a result set, but may modify OUT parameters.

func Notify

func Notify(ctx context.Context, provider ClientProvider, channel, payload string) error

Notify is a helper to trigger a database event (e.g., NOTIFY in Postgres).

func RegisterDialect

func RegisterDialect(name string, d Dialect)

RegisterDialect allows developers to register custom database dialects. This enables support for proprietary or non-standard databases.

Example:

quark.RegisterDialect("cockroach", myCockroachDialect)

The registered dialect can then be used with:

client, err := quark.New("pgx", dsn, quark.WithDialect(quark.DetectDialectByName("cockroach")))

func RegisterTypeMapper added in v0.3.0

func RegisterTypeMapper(t reflect.Type, m TypeMapper)

RegisterTypeMapper registers a custom Go-type → SQL-type mapping for the migration / sync layer. Pointer types are stripped before registration: registering for time.Duration also covers *time.Duration. Re-registering the same type overwrites the previous mapper.

Example: storing google/uuid.UUID as native UUID on Postgres and as a 36-char string on the rest:

quark.RegisterTypeMapper(reflect.TypeOf(uuid.UUID{}), func(d string, _ quark.TypeOptions) string {
    if d == "postgres" {
        return "UUID"
    }
    return "VARCHAR(36)"
})

The mapper is consulted by client.Migrate and client.Sync. database/sql's Scanner / driver.Valuer interfaces still apply to the read/write side — a type registered here must also implement those interfaces (or be already supported by the underlying driver) for round-trip to work.

Types

type AfterCreateHook

type AfterCreateHook interface {
	AfterCreate(ctx context.Context) error
}

AfterCreateHook is executed after an entity is created.

type AfterDeleteHook

type AfterDeleteHook interface {
	AfterDelete(ctx context.Context) error
}

AfterDeleteHook is executed after an entity is deleted.

type AfterUpdateHook

type AfterUpdateHook interface {
	AfterUpdate(ctx context.Context) error
}

AfterUpdateHook is executed after an entity is updated.

type Array added in v0.6.0

type Array[T any] struct {
	V []T
}

Array[T] is the typed wrapper for a column that holds a list of T. It round-trips through JSON regardless of dialect, so the same model definition works on every supported engine without per-dialect Scan / Value code:

Postgres → JSONB
MySQL / MariaDB → JSON
SQLite → TEXT
SQL Server → NVARCHAR(MAX)
Oracle → CLOB

Why Array[T] rather than `JSON[[]T]`? Semantic clarity: `Tags Array[string]` reads as "the column holds a list of strings", and the helper methods (`Len`, `Slice`, `Contains`) carry that intent through call sites that read the field.

Trade-off: this wrapper does NOT use Postgres' native `INT[]` / `TEXT[]` array types — operators like `@>`, `&&`, and `array_agg` won't fire on Array[T] columns. For PG-native arrays with operators, drop down to `pgx`/`pgtype.Array` directly or use `RawQuery`. The "neutral wrapper" spec in TASKS § Bloque B explicitly asked for a dialect-independent type that doesn't import `pgtype`; JSON-backed is the simplest shape that satisfies that constraint.

Example:

type Post struct {
    ID   int64                 `db:"id" pk:"true"`
    Tags quark.Array[string]   `db:"tags"`
}

p := Post{Tags: quark.Array[string]{V: []string{"go", "orm"}}}
_ = client.Migrate(ctx, &Post{})
_ = quark.For[Post](ctx, client).Create(&p)
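The Scan/Value mechanics described above can be reproduced standalone — a simplified sketch, not the shipped implementation:

```go
package main

import (
	"database/sql/driver"
	"encoding/json"
	"fmt"
	"strings"
)

// array is a stripped-down stand-in for quark.Array[T]: the column stores
// a JSON array, so the same model works on every dialect.
type array[T any] struct{ V []T }

// Value marshals V; nil serialises to "[]" rather than SQL NULL.
func (a array[T]) Value() (driver.Value, error) {
	if a.V == nil {
		return []byte("[]"), nil
	}
	return json.Marshal(a.V)
}

// Scan accepts []byte or string sources; NULL and empty payloads clear V to nil.
func (a *array[T]) Scan(src any) error {
	var raw []byte
	switch s := src.(type) {
	case nil:
		a.V = nil
		return nil
	case []byte:
		raw = s
	case string:
		raw = []byte(s)
	default:
		return fmt.Errorf("unsupported source %T", src)
	}
	if strings.TrimSpace(string(raw)) == "" {
		a.V = nil
		return nil
	}
	return json.Unmarshal(raw, &a.V)
}

func main() {
	tags := array[string]{V: []string{"go", "orm"}}
	v, _ := tags.Value()
	fmt.Printf("stored as: %s\n", v)

	var back array[string]
	_ = back.Scan(v)
	fmt.Println("round-trip:", back.V)
}
```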

func (Array[T]) Len added in v0.6.0

func (a Array[T]) Len() int

Len returns the number of elements in the array. Safe on the zero value (returns 0).

func (*Array[T]) Scan added in v0.6.0

func (a *Array[T]) Scan(src any) error

Scan implements sql.Scanner. Accepts []byte and string sources (the two forms drivers return for JSON columns). NULL clears V to nil. An empty / whitespace-only payload also resolves to nil so a column default of `'[]'` round-trips through the zero value cleanly.

func (Array[T]) Slice added in v0.6.0

func (a Array[T]) Slice() []T

Slice returns the underlying slice. Mutations on the returned slice affect the Array's storage — useful for appending without re-allocating, but dangerous if the Array value is reused afterwards.

func (Array[T]) Value added in v0.6.0

func (a Array[T]) Value() (driver.Value, error)

Value implements driver.Valuer by JSON-marshalling V. A nil V serialises to `[]` (empty array) rather than `null` — the more useful default when the column is also used in SQL operations. Pair with quark.Nullable[Array[T]] when you need to distinguish NULL from empty.

type BackfillSpec added in v0.6.0

type BackfillSpec struct {
	// Name uniquely identifies this backfill across runs. Used as
	// the primary key of `quark_backfill_state`. Two backfills with
	// the same Name share the same resume token — that's how a
	// retry resumes. Different backfills MUST use different Names
	// or they'll trample each other's state.
	Name string

	// Table is the source table being iterated.
	Table string

	// PKColumn is the column the helper orders + paginates by.
	// Defaults to "id" if empty. The column must be a sortable
	// integer-like type (int64 / bigint) — text PKs aren't
	// supported in F3-6-core. Composite PKs aren't supported
	// either; the helper takes the first PK column only.
	PKColumn string

	// BatchSize is the number of PKs fetched per round-trip.
	// Defaults to 1000. Larger batches are more efficient but
	// hold locks longer; tune for your workload.
	BatchSize int

	// Process is invoked once per batch with the PK list in
	// ascending order. The callback can do any SQL it wants
	// (typically UPDATE ... WHERE id IN (...) or similar). Errors
	// abort the backfill and propagate; the state table records
	// the highest PK from successful batches, so a retry resumes
	// from there.
	Process func(ctx context.Context, batchPKs []int64) error
}

BackfillSpec describes a single backfill operation. A backfill iterates a table by primary key in batches, invokes a user callback per batch, and persists the highest PK seen so a process kill (or a deliberate retry) resumes at the next un-processed row rather than re-running the entire table.

The user callback receives the slice of PKs in the current batch and is responsible for whatever data work the backfill needs — the helper deliberately doesn't read row contents itself. Why: backfill SQL is rarely "SELECT * + transform"; it's usually "UPDATE ... WHERE id IN (...)" or "INSERT ... SELECT ... WHERE id IN (...)" where the user knows the relevant columns and the helper would just be in the way trying to scan them generically.
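The resume contract is easy to see in a self-contained simulation, with an in-memory checkpoint standing in for the quark_backfill_state table:

```go
package main

import "fmt"

// runBackfill iterates rows by PK in ascending batches (pks assumed sorted,
// as an ORDER BY pk fetch would return them), invoking process per batch
// and checkpointing the highest PK after each success — so a retry resumes
// at the next un-processed row. lastPK simulates the state row keyed by
// the backfill's Name.
func runBackfill(pks []int64, batchSize int, lastPK *int64, process func([]int64) error) error {
	for {
		var batch []int64
		for _, pk := range pks {
			if pk > *lastPK {
				batch = append(batch, pk)
				if len(batch) == batchSize {
					break
				}
			}
		}
		if len(batch) == 0 {
			return nil // nothing left above the checkpoint
		}
		if err := process(batch); err != nil {
			return err // checkpoint keeps the last successful batch
		}
		*lastPK = batch[len(batch)-1]
	}
}

func main() {
	pks := []int64{1, 2, 3, 4, 5}
	var lastPK int64 = -1
	failOnce := true
	process := func(batch []int64) error {
		if failOnce && batch[0] == 3 {
			failOnce = false
			return fmt.Errorf("transient failure")
		}
		fmt.Println("processed", batch)
		return nil
	}
	if err := runBackfill(pks, 2, &lastPK, process); err != nil {
		fmt.Println("aborted:", err, "— retrying")
	}
	_ = runBackfill(pks, 2, &lastPK, process) // resumes at PK 3
}
```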

type BaseMiddleware

type BaseMiddleware struct{}

BaseMiddleware provides default implementations that pass through to the next handler. Embed this in your middleware so you only need to override the methods you care about.

func (BaseMiddleware) WrapExec

func (BaseMiddleware) WrapExec(next ExecFunc) ExecFunc

func (BaseMiddleware) WrapQuery

func (BaseMiddleware) WrapQuery(next QueryFunc) QueryFunc

func (BaseMiddleware) WrapQueryRow

func (BaseMiddleware) WrapQueryRow(next QueryRowFunc) QueryRowFunc
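The wrap-the-next-handler shape is ordinary function middleware. A standalone sketch with simplified types (execFunc stands in for Quark's ExecFunc; this is not the package's own code):

```go
package main

import "fmt"

// execFunc is a simplified stand-in for quark's ExecFunc.
type execFunc func(query string) error

// middleware wraps an execFunc and returns a new one.
type middleware func(next execFunc) execFunc

// chain applies middlewares so the first listed runs outermost.
func chain(base execFunc, mws ...middleware) execFunc {
	for i := len(mws) - 1; i >= 0; i-- {
		base = mws[i](base)
	}
	return base
}

func main() {
	logging := func(next execFunc) execFunc {
		return func(q string) error {
			fmt.Println("exec:", q) // observe, then pass through
			return next(q)
		}
	}
	exec := chain(func(q string) error { return nil }, logging)
	_ = exec("DELETE FROM sessions WHERE expired = 1")
}
```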

type BaseQuery

type BaseQuery struct {
	// contains filtered or unexported fields
}

BaseQuery holds the non-generic state of a database query.

type BeforeCreateHook

type BeforeCreateHook interface {
	BeforeCreate(ctx context.Context) error
}

BeforeCreateHook is executed before an entity is created.

type BeforeDeleteHook

type BeforeDeleteHook interface {
	BeforeDelete(ctx context.Context) error
}

BeforeDeleteHook is executed before an entity is deleted.

type BeforeUpdateHook

type BeforeUpdateHook interface {
	BeforeUpdate(ctx context.Context) error
}

BeforeUpdateHook is executed before an entity is updated.

type CacheConfig

type CacheConfig struct {
	TTL     time.Duration
	Tags    []string
	Enabled bool
}

CacheConfig holds the caching parameters for a specific query.

type CacheStore

type CacheStore interface {
	// Get retrieves a value from the cache.
	Get(ctx context.Context, key string) ([]byte, error)
	// Set stores a value in the cache with a specific TTL and associated tags.
	Set(ctx context.Context, key string, val []byte, ttl time.Duration, tags ...string) error
	// Delete removes a specific key.
	Delete(ctx context.Context, key string) error
	// InvalidateTags removes all entries associated with the given tags (usually table names).
	InvalidateTags(ctx context.Context, tags ...string) error
}

CacheStore defines the contract for any caching backend. Implementations should be provided in separate packages (e.g., quark/cache/redis).

type Check added in v0.6.0

type Check struct {
	Name       string
	Expression string
}

Check is one CHECK constraint declared on a table. Expression is the raw catalog text — dialect-specific phrasing, parenthesisation, and whitespace. The introspector deliberately does NOT normalise the expression because expression-equivalence across dialects is an AST-level problem (`(x > 0)` vs `x > 0`, `'a' IN ('a','b')` vs `'a' = ANY (ARRAY['a','b'])`, etc.) that belongs to F3-3's diff engine, not to the catalog reader.

Name comes from the catalog. Inline anonymous checks (`age INTEGER CHECK (age > 0)` without an explicit `CONSTRAINT <name>`) get dialect-generated names (`age_check`, `CK__table__age__hash`, etc.). F3-3-core's `Diff` matches checks **by name only** — there is no fallback to expression equivalence for anonymous ones, because expression equivalence is AST-level work that's out of scope for the diff layer (each dialect emits its own canonical form; comparing them is its own problem). If your checks must round-trip cleanly cross-dialect, give them explicit `CONSTRAINT <name>` clauses in DDL. TODO(F3-3-checks-anon): consider an opt-in pass that matches anonymous checks by normalised expression once the AST equivalence work lands.

Coverage: PostgreSQL, MySQL, MariaDB, and MSSQL implement the introspector. **SQLite** returns `Checks=nil` because SQLite has no catalog for CHECK constraints — the only path is parsing `sqlite_master.sql` DDL, which is brittle and intentionally out of scope for the catalog-reader layer. A future F3-2-checks-sqlite follow-up could add DDL parsing if user demand justifies it.
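Name-only matching can be sketched in a few lines. Whether same-name checks are also compared by raw expression text is an assumption here, but the sketch shows why `(x > 0)` vs `x > 0` would surface as a spurious change under textual comparison:

```go
package main

import "fmt"

type check struct{ Name, Expression string }

// diffChecks matches checks by name only: added (in want, not in have),
// dropped (in have, not in want), and changed — same name but different
// raw catalog expression text (textual, not AST, comparison — an
// assumption for illustration).
func diffChecks(want, have []check) (added, dropped, changed []string) {
	haveBy := map[string]string{}
	for _, c := range have {
		haveBy[c.Name] = c.Expression
	}
	seen := map[string]bool{}
	for _, c := range want {
		expr, ok := haveBy[c.Name]
		seen[c.Name] = true
		switch {
		case !ok:
			added = append(added, c.Name)
		case expr != c.Expression:
			changed = append(changed, c.Name)
		}
	}
	for _, c := range have {
		if !seen[c.Name] {
			dropped = append(dropped, c.Name)
		}
	}
	return
}

func main() {
	// "(age > 0)" vs "age > 0" is semantically identical but textually
	// different, so it lands in "changed" — the cross-dialect caveat.
	added, dropped, changed := diffChecks(
		[]check{{"ck_age", "age > 0"}, {"ck_email", "email <> ''"}},
		[]check{{"ck_age", "(age > 0)"}, {"ck_old", "1 = 1"}},
	)
	fmt.Println(added, dropped, changed)
}
```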

type Client

type Client struct {
	// contains filtered or unexported fields
}

Client is the main entry point for quark ORM operations. It wraps a database connection and provides type-safe query building.

func New

func New(driverName, dataSource string, opts ...any) (*Client, error)

New creates a new quark Client with the given driver name and data source.

Example:

client, err := quark.New("sqlite", "example.db",
    quark.WithMaxOpenConns(25),
    quark.WithMaxIdleConns(5),
)

For PostgreSQL:

client, err := quark.New("pgx", "postgres://user:pass@localhost/db",
    quark.WithMaxOpenConns(25),
)

The dialect is auto-detected from the driver name. You can override it with WithDialect().

func (*Client) AcquireMigrationLock added in v0.6.0

func (c *Client) AcquireMigrationLock(ctx context.Context, name string, timeout time.Duration) (MigrationLock, error)

AcquireMigrationLock attempts to acquire a cluster-wide advisory lock named `name` for migration operations. The first concurrent caller wins; subsequent callers block up to `timeout` (or receive ErrLockTimeout if the timeout elapses).

Typical use:

lock, err := client.AcquireMigrationLock(ctx, "schema-migrations", 30*time.Second)
if err != nil {
    return err
}
defer lock.Release(ctx)

if err := client.Migrate(ctx, &User{}, &Order{}); err != nil {
    return err
}

Behaviour per dialect (see TASKS § F3-1):

  • PostgreSQL: `pg_advisory_lock(hashtext(name))` on a dedicated connection. Released by `pg_advisory_unlock` on Release.
  • MySQL / MariaDB: `GET_LOCK(name, timeout_seconds)` + `RELEASE_LOCK`.
  • MSSQL: `sp_getapplock @LockMode='Exclusive', @LockOwner='Session'`; released by `sp_releaseapplock`.
  • SQLite: returns `ErrUnsupportedFeature` — no distributed-lock primitive; use a `BEGIN IMMEDIATE` transaction inside the process for single-writer semantics.
  • Oracle: returns `ErrUnsupportedFeature` — the DBMS_LOCK API requires PL/SQL blocks and per-lock allocation handles; deferred to a follow-up PR.

func (*Client) AddForeignKey

func (c *Client) AddForeignKey(ctx context.Context, table, constraintName string, columns []string, refTable string, refColumns []string, onDelete, onUpdate string) error

AddForeignKey adds a FOREIGN KEY constraint to an existing table. constraintName is the constraint identifier; refTable is the referenced table; columns and refColumns are matched by position.

Example:

client.AddForeignKey(ctx, "orders", "fk_orders_user", []string{"user_id"}, "users", []string{"id"}, "CASCADE", "SET NULL")

func (*Client) ApplyPlan added in v0.6.0

func (c *Client) ApplyPlan(ctx context.Context, plan Plan) error

ApplyPlan executes the operations in `plan` against the database in the order they appear. It dispatches each op to the appropriate per-dialect DDL — column ops via the `Dialect.AlterTable*` helpers, index / FK / table ops via inline dispatch using the existing `Client.CreateIndex` / `Client.AddForeignKey` where applicable.

**Transactional behaviour** (F3-4-tx):

  • **PostgreSQL, MSSQL, SQLite** — DDL is transactional on these engines. ApplyPlan opens a BEGIN, runs all ops, and COMMITs. On ANY failure the transaction is rolled back, leaving the schema in its pre-plan state. This is the safety net users should rely on when running migrations against production.

  • **MySQL, MariaDB, Oracle** — DDL implicitly commits the current transaction on every statement, so wrapping is pointless. Instead, ApplyPlan uses a **resumable** path backed by a `quark_migration_state` checkpoint table (F3-4-resumable). Each successfully applied op is recorded by `(plan_hash, op_index)`; a re-invocation against the same plan (identified by `Plan.Hash()`) skips ops that were already recorded. A mid-plan failure can therefore be fixed (the underlying constraint addressed, the connection restored, etc.) and resumed simply by calling ApplyPlan again with the same plan — no re-applying earlier successful ops, no manual state management.

    Drift detection: if the plan changed between runs (different ops, different table names, anything that flips the hash), the state from the prior run doesn't apply and the new plan starts fresh from op 0. This is the safety boundary that prevents "resume from op 3" against a plan whose op 3 means something different.

    **Concurrency**: on non-transactional engines, two processes calling ApplyPlan against the same plan simultaneously race on the state table — both read `resumeFrom = -1`, both try to apply op 0, one (or both) of them hit DDL errors (table already exists) and a `duplicate-PK` on the state insert. `Client.AcquireMigrationLock` (F3-1) is the right primitive to serialise — wrap your ApplyPlan call:

    lock, err := client.AcquireMigrationLock(ctx, "schema", 30*time.Second)
    if err != nil {
        return err
    }
    defer lock.Release(ctx)
    err = client.ApplyPlan(ctx, plan)

    The reason this isn't done automatically inside ApplyPlan: not every dialect implements `MigrationLocker` yet (Oracle is pending) and the lock name / timeout are workflow choices that belong to the caller. Future: a `Client.MigrateAtomic` wrapper that bundles lock + plan + apply, target for F3-5.

The returned error carries the index of the op that failed and the op's String() rendering for debuggability, regardless of which path was taken.

Operation-specific caveats:

  • **OpAlterColumn** today only emits DDL for the Type change via `Dialect.AlterTableAlterColumn`. Nullable and Default deltas are NOT emitted as DDL yet — they're logged as TODO and the column lands in the requested type but keeps its old nullable/default. F3-3-execute-alter follow-up will close this.
  • **OpDropForeignKey on SQLite** returns `ErrUnsupportedFeature` because SQLite doesn't support `ALTER TABLE DROP CONSTRAINT`; a real drop would require a full table rebuild via the 12-step procedure documented in the SQLite manual. That belongs to F3-3-execute-sqlite-rebuild, a separate follow-up.
  • **OpDropCheck on SQLite** has the same limitation as OpDropForeignKey for the same reason — returns `ErrUnsupportedFeature`.

All other ops work uniformly across the 4 CI dialects + SQLite (for the supported subset).

func (*Client) Backfill added in v0.6.0

func (c *Client) Backfill(ctx context.Context, spec BackfillSpec) error

Backfill iterates [BackfillSpec.Table] by primary key in batches, invoking [BackfillSpec.Process] for each batch. It persists the highest PK seen in `quark_backfill_state` so that a kill, an error, or a deliberate retry resumes at the next unprocessed PK instead of re-running the entire table.

Workflow when something goes wrong:

  1. Backfill processes batches 0..N successfully; state.last_pk records the max PK from batch N.
  2. Batch N+1 fails (callback error, process killed, DB connection lost, etc.). State remains at batch N's max.
  3. Caller fixes the underlying issue and re-invokes Backfill with the same Name. The helper reads state.last_pk and starts from `WHERE pk > last_pk` — no re-processing of batches 0..N.

Once the helper sees an empty batch (no PKs left), the backfill is complete. A subsequent re-invocation with the same Name finds nothing to do and returns nil immediately — idempotent.

Concurrency: like ApplyPlan's resumable path, Backfill is not safe for concurrent invocations against the same Name. Wrap with Client.AcquireMigrationLock if you need cross-process serialisation. The state table's primary key on Name prevents silent corruption — concurrent runs would race on the UPDATE, not produce divergent state.

func (*Client) BeginTx

func (c *Client) BeginTx(ctx context.Context, opts *sql.TxOptions) (*Tx, error)

BeginTx starts a new database transaction with the given options.

Example:

tx, err := client.BeginTx(ctx, nil)
if err != nil { log.Fatal(err) }
defer tx.Rollback()

quark.ForTx[User](ctx, tx).Create(&user)
tx.Commit()

func (*Client) Close

func (c *Client) Close() error

Close closes the underlying database connection.

func (*Client) CreateIndex

func (c *Client) CreateIndex(ctx context.Context, table, indexName string, columns []string, unique bool) error

CreateIndex creates an index on the given table and columns. If unique is true, a UNIQUE INDEX is created. If the index already exists the error is silently ignored for compatible dialects.

Example:

client.CreateIndex(ctx, "users", "idx_users_email", []string{"email"}, true)

func (*Client) Dialect

func (c *Client) Dialect() Dialect

Dialect returns the dialect being used.

func (*Client) Exec

func (c *Client) Exec(ctx context.Context, query string, args ...any) error

Exec executes a raw SQL statement (INSERT, UPDATE, DELETE, DDL). This is primarily used for migrations and schema changes.

func (*Client) GetClient

func (c *Client) GetClient(ctx context.Context) (*Client, error)

GetClient implements ClientProvider for the basic Client.

func (*Client) IntrospectSchema added in v0.6.0

func (c *Client) IntrospectSchema(ctx context.Context) (Schema, error)

IntrospectSchema reads the current state of the database's schema and returns it as a dialect-neutral Schema. It's the first half of the F3 migration story: the diff comparator (F3-3) takes the Schema produced here plus the Schema derived from the Go models and emits the operations needed to bring them into alignment.

Supported dialects: PostgreSQL, SQLite, MySQL, MariaDB, MSSQL. Oracle returns `ErrUnsupportedFeature` until F3-2-oracle lands (deferred while the container image situation is resolved).

Surface: tables, columns, non-PK indexes, foreign keys, and CHECK constraints. SQLite returns `Checks=nil` (the only catalog read it doesn't implement; see the Check godoc for the rationale).

Code that reads Schema should treat the unpopulated slices as "not yet introspected" (or, for SQLite Checks, "intentionally not surfaced"), not "no constraints exist".
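The nil-vs-empty convention matters when consuming the result. A self-contained sketch using local stand-in types that mirror the documented shape (the real Schema / Table / Check types live in this package):

```go
package main

// Local stand-ins mirroring the documented introspection surface.
type checkStandIn struct{ Name, Expr string }

type tableStandIn struct {
	Name   string
	Checks []checkStandIn // nil ⇒ not introspected (e.g. SQLite), not "no checks"
}

// describeChecks distinguishes "catalog read not implemented" from
// "catalog read ran and found zero constraints".
func describeChecks(t tableStandIn) string {
	if t.Checks == nil {
		return t.Name + ": checks not introspected"
	}
	if len(t.Checks) == 0 {
		return t.Name + ": no check constraints"
	}
	return t.Name + ": has check constraints"
}
```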

func (*Client) Migrate

func (c *Client) Migrate(ctx context.Context, models ...any) error

Migrate creates tables for the given models if they don't exist. This is a simplistic auto-migration tool for development. It uses the "db" and "pk" tags to generate CREATE TABLE statements. It also creates join tables for many-to-many relations.

func (*Client) MigrateRegistered added in v0.6.0

func (c *Client) MigrateRegistered(ctx context.Context) error

MigrateRegistered is a convenience for `Migrate(ctx, c.RegisteredModels()...)`. Returns nil immediately if no models have been registered (no-op rather than error — letting the caller initialise the Client in stages without worrying about the registration phase).

func (*Client) PlanMigration added in v0.6.0

func (c *Client) PlanMigration(ctx context.Context, models ...any) (Plan, error)

PlanMigration computes the Plan of operations needed to align the database schema with the Go-side models. It does NOT execute anything — the returned Plan is inert.

The pipeline is:

  1. Build a `desired` Schema by reflecting on the model structs (the same metadata path Client.Migrate uses to render CREATE TABLE DDL).
  2. Read the `current` Schema via Client.IntrospectSchema.
  3. Call Diff to produce the ordered op list.
  4. Wrap the ops in a Plan.

Surface caveats — known asymmetries between desired and current:

  • **Type strings**: the desired Schema uses the migrator's `SQLTypeWithOpts` output (`BIGINT`, `VARCHAR(255)`) while the introspector returns whatever the catalog stores (lowercase, parameter-bearing, or canonical form per dialect). The diff's [normalizeType] helper collapses these to a comparable form (case-fold, PG `character varying` ↔ `varchar` alias, MySQL display-width strip), so a clean round-trip works on all 5 supported dialects since F3-3-types. Edge cases not yet normalised: PG `int8`/`int4`/`int2` aliases (don't arise from introspection; only relevant if the desired Schema is hand-constructed with those names).
  • **Indexes / FKs / CHECK** declared on the model: struct tags don't yet carry index or FK metadata (CreateIndex / AddForeignKey are explicit calls). PlanMigration's `desired` Schema is column-only — indexes and FKs present in the database but not in the model would show up as OpDropIndex / OpDropForeignKey if Diff were left to its own devices. To avoid that, PlanMigration **copies** the indexes / FKs / checks from the current schema into the desired schema before diffing, on the assumption that schema-level objects not declared in models are managed manually. A future F3-3-plan-indexes follow-up will let struct tags drive these.

SQLite quirk: same Checks=nil handling as the rest of F3-3-core — no spurious drops when the database doesn't introspect checks.
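The type-string normalisation described above can be sketched with three of the documented rules (case-fold, MySQL display-width strip, PG `character varying` alias). Illustrative only — the package's [normalizeType] may apply more rules than this stand-in:

```go
package main

import (
	"regexp"
	"strings"
)

// displayWidth matches MySQL integer display widths, e.g. int(11).
var displayWidth = regexp.MustCompile(`^(tinyint|smallint|mediumint|bigint|int)\(\d+\)$`)

// normalizeTypeSketch collapses dialect-native type strings to a
// comparable form: case-fold, strip MySQL display widths, and alias
// Postgres "character varying" to "varchar".
func normalizeTypeSketch(t string) string {
	t = strings.ToLower(strings.TrimSpace(t))
	if m := displayWidth.FindStringSubmatch(t); m != nil {
		t = m[1]
	}
	if strings.HasPrefix(t, "character varying") {
		t = "varchar" + strings.TrimPrefix(t, "character varying")
	}
	return t
}
```

With this shape, `BIGINT` (migrator) and `bigint` (catalog) compare equal, as do `character varying(255)` and `VARCHAR(255)`.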

func (*Client) PlanMigrationRegistered added in v0.6.0

func (c *Client) PlanMigrationRegistered(ctx context.Context) (Plan, error)

PlanMigrationRegistered is a convenience for `PlanMigration(ctx, c.RegisteredModels()...)`. Returns an empty Plan when no models are registered (consistent with the IsEmpty() semantics used elsewhere — no models means nothing to plan against).

func (*Client) Raw

func (c *Client) Raw() *sql.DB

Raw returns the underlying *sql.DB for advanced operations. Use with caution: this bypasses quark's safety features.

func (*Client) RawQuery

func (c *Client) RawQuery(ctx context.Context, query string, args ...any) (*sql.Rows, error)

RawQuery executes a raw SQL query with the given arguments. Placeholders are required by default to prevent SQL injection. Raw queries must be enabled with WithLimits(Limits{AllowRawQueries: true}).

func (*Client) RegisterModel added in v0.6.0

func (c *Client) RegisterModel(models ...any) error

RegisterModel records one or more model values on the Client so follow-on calls like Client.MigrateRegistered and Client.PlanMigrationRegistered don't need the caller to re-list them. The typical setup pattern:

client, _ := quark.New(...)
client.RegisterModel(&User{}, &Order{}, &Invoice{})
if err := client.MigrateRegistered(ctx); err != nil { ... }

Each model goes through the same reflection validation as Client.Migrate (must be a struct value or pointer to struct; nil is rejected) so registration fails fast on a bad model rather than at migration time.

Calling RegisterModel multiple times APPENDS to the registry — it does NOT deduplicate. If you call `RegisterModel(&User{})` twice, Client.MigrateRegistered will see User twice. That's intentional: Quark doesn't try to be clever about identity of reflect.Type-equal values; the caller controls the list.

Safe for concurrent use — the registry is mutex-protected. In practice you'll call this once at startup, not from request handlers.

The per-Client registry is intentionally additive — it does NOT replace the global type-meta cache in `internal/schema`, which is correct as global state (deterministic per `reflect.Type`). F3-7 only adds per-Client state for "which models does this Client manage", not for "what's the cached meta of type X".

func (*Client) RegisteredModels added in v0.6.0

func (c *Client) RegisteredModels() []any

RegisteredModels returns a snapshot of the models registered on this Client via Client.RegisterModel. The returned slice is a COPY — mutations to the returned value don't affect the internal registry. Order matches the registration order.

Useful for introspection / debugging and for the user CLI wrappers (e.g. `quarkmigrate.Run` could accept a Client with pre-registered models instead of taking the variadic models argument — though that's not yet wired).

func (*Client) Sync

func (c *Client) Sync(ctx context.Context, opts SyncOptions, models ...any) error

Sync synchronizes the database schema with the provided models. It detects missing columns, renames, and can drop columns if safe mode is disabled.

func (*Client) Tx

func (c *Client) Tx(ctx context.Context, fn func(tx *Tx) error) error

Tx executes fn within a transaction. If fn returns nil, the transaction is committed. If fn returns an error or panics, the transaction is rolled back.

Example:

err := client.Tx(ctx, func(tx *quark.Tx) error {
    quark.ForTx[User](ctx, tx).Create(&user)
    quark.ForTx[Order](ctx, tx).Create(&order)
    return nil // auto-commit
})

func (*Client) Validate

func (c *Client) Validate(ctx context.Context, model any) error

Validate checks a model's fields using standard validation tags (e.g. validate:"required"). It is automatically called before Create and Update operations if the model is a struct.

func (*Client) WithOptions added in v0.3.0

func (c *Client) WithOptions(opts ...any) (*Client, error)

WithOptions creates a new client with the same database connection but different options. This is useful for tests that need to create clients with different configurations.

type ClientProvider

type ClientProvider interface {
	GetClient(ctx context.Context) (*Client, error)
}

ClientProvider is an interface that provides a database client. Both *Client and *TenantRouter implement this.

type Column added in v0.6.0

type Column struct {
	Name     string
	Type     string
	Nullable bool
	Default  *string
}

Column is one column in a table.

`Type` is the raw dialect-native type string as returned by the catalog (e.g. `INTEGER`, `bigint`, `character varying(255)`, `NVARCHAR(MAX)`). Normalisation to a cross-dialect form is the diff layer's responsibility (F3-3), not the introspector's.

`Default` is `nil` when no default is set, and a `*string` when one is present — preserving the distinction between "no default" and "default is the empty string". The value is the raw dialect-native expression: a literal, a function call (`CURRENT_TIMESTAMP`, `gen_random_uuid()`), or `NULL`.

type Cursor

type Cursor[T any] struct {
	// contains filtered or unexported fields
}

Cursor provides manual iteration over query results. Similar to sql.Rows but type-safe for model T.
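The loop shape is the familiar sql.Rows pattern. A self-contained stand-in with the same Next/Scan/Err/Close surface shows the intended consumption loop (the real Cursor is driven by a live result set; the names below are local to this example):

```go
package main

// sliceCursor is a stand-in exposing the same iteration surface as
// Cursor[T], backed by a slice instead of a result set.
type sliceCursor[T any] struct {
	rows []T
	i    int
}

func (c *sliceCursor[T]) Next() bool        { c.i++; return c.i <= len(c.rows) }
func (c *sliceCursor[T]) Scan(dst *T) error { *dst = c.rows[c.i-1]; return nil }
func (c *sliceCursor[T]) Err() error        { return nil }
func (c *sliceCursor[T]) Close() error      { return nil }

// collect shows the canonical consumption loop: Next / Scan until
// exhausted, then check Err, with Close deferred up front.
func collect[T any](c *sliceCursor[T]) ([]T, error) {
	defer c.Close()
	var out []T
	for c.Next() {
		var v T
		if err := c.Scan(&v); err != nil {
			return nil, err
		}
		out = append(out, v)
	}
	return out, c.Err()
}
```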

func (*Cursor[T]) Close

func (c *Cursor[T]) Close() error

Close releases resources and notifies observers.

func (*Cursor[T]) Err

func (c *Cursor[T]) Err() error

Err returns any error encountered during iteration.

func (*Cursor[T]) Next

func (c *Cursor[T]) Next() bool

Next advances to the next row. Returns false when done or on error.

func (*Cursor[T]) Scan

func (c *Cursor[T]) Scan(dest *T) error

Scan copies the current row into the destination struct.

type DBConn added in v0.6.0

type DBConn interface {
	ExecContext(ctx context.Context, query string, args ...any) (Result, error)
	QueryRowContext(ctx context.Context, query string, args ...any) Row
	Close() error
}

DBConn is the per-connection subset the lock implementations consume. It wraps *sql.Conn so the locks can call ExecContext and Close on a single connection without coupling directly to database/sql package types.

type DBConnector added in v0.6.0

type DBConnector interface {
	Conn(ctx context.Context) (DBConn, error)
}

DBConnector is the narrow subset of *sql.DB the lock implementations need. It exists so the optional-interface contract doesn't drag the full Executor surface into MigrationLocker. The only implementation in practice is *sql.DB itself; the narrow interface keeps tests honest without re-exporting database/sql.

type Dialect

type Dialect interface {
	// Name returns the dialect name (e.g., "postgres", "mysql", "sqlite").
	Name() string

	// Placeholder returns the placeholder for the given parameter index.
	// PostgreSQL: $1, $2, etc.
	// MySQL/SQLite: ?
	// MSSQL: @p1, @p2, etc.
	// Oracle: :1, :2, etc.
	Placeholder(index int) string

	// Quote returns a quoted identifier (table/column name).
	// PostgreSQL: "identifier"
	// MySQL: `identifier`
	// MSSQL: [identifier]
	// SQLite/Oracle: "identifier"
	Quote(identifier string) string

	// Placeholders returns a slice of placeholders for n parameters.
	Placeholders(n int) []string

	// LimitOffset returns the LIMIT/OFFSET clause for the given parameters.
	LimitOffset(limit, offset int) string

	// SupportsReturning indicates if the dialect supports RETURNING clause.
	SupportsReturning() bool

	// Returning returns the RETURNING clause for the given columns.
	// Returns empty string if not supported.
	Returning(columns ...string) string

	// SupportsLastInsertID indicates if the dialect supports LastInsertId().
	SupportsLastInsertID() bool

	// LastInsertIDQuery returns the query to get the last insert ID.
	// Used for dialects that don't support RETURNING.
	LastInsertIDQuery(table, pkColumn string) string

	// CurrentTimestamp returns the SQL function for current timestamp.
	CurrentTimestamp() string

	// BuildRoutineQuery returns the SQL for a table-valued function or routine returning rows.
	// E.g., Postgres: SELECT * FROM func($1, $2)
	BuildRoutineQuery(routine string, argCount int) string

	// BuildProcedureCall returns the SQL for calling a procedure (pure logic / OUT params).
	// E.g., MySQL: CALL proc(?, ?)
	BuildProcedureCall(procedure string, argCount int) string

	// JSONExtract returns the SQL expression to extract a value from a JSON column,
	// the bind args required by that expression, or an error if the path is
	// malformed.
	//
	// The returned SQL fragment uses literal '?' as a neutral bind marker; the
	// caller (typically buildWhereClause) substitutes each '?' for the dialect's
	// placeholder syntax (`$N`, `?`, `@pN`, `:N`) at the appropriate arg index.
	//
	// The path is validated and passed as a bind parameter — never interpolated
	// into the SQL surface. This closes the SQL-injection vector that existed
	// while the path was concatenated with fmt.Sprintf.
	//
	// Example outputs (with column "data" and path "user.name"):
	//   Postgres: jsonb_extract_path_text(("data")::jsonb, ?, ?) / args=["user","name"]
	//   MySQL:    JSON_EXTRACT(`data`, ?) / args=["$.user.name"]
	//   SQLite:   JSON_EXTRACT("data", ?) / args=["$.user.name"]
	//   MSSQL:    JSON_VALUE([data], ?) / args=["$.user.name"]
	//   Oracle:   JSON_VALUE("DATA", ?) / args=["$.user.name"]
	JSONExtract(column, path string) (sql string, args []any, err error)

	// AlterTableAddColumn returns SQL to add a column to a table.
	// E.g., PostgreSQL: ALTER TABLE "users" ADD COLUMN "email" VARCHAR(255)
	AlterTableAddColumn(table, column, dataType string) string

	// AlterTableDropColumn returns SQL to drop a column from a table.
	// E.g., PostgreSQL: ALTER TABLE "users" DROP COLUMN "email"
	AlterTableDropColumn(table, column string) string

	// AlterTableAlterColumn returns SQL to alter a column's type.
	// E.g., PostgreSQL: ALTER TABLE "users" ALTER COLUMN "email" TYPE VARCHAR(255)
	AlterTableAlterColumn(table, column, newDataType string) string

	// RenameColumn returns SQL to rename a column.
	// E.g., PostgreSQL: ALTER TABLE "users" RENAME COLUMN "old_name" TO "new_name"
	RenameColumn(table, oldName, newName string) string

	// RenameTable returns SQL to rename a table.
	// E.g., PostgreSQL: ALTER TABLE "users" RENAME TO "accounts"
	RenameTable(oldName, newName string) string

	// SupportsTransactionalDDL indicates if the dialect supports DDL in transactions.
	SupportsTransactionalDDL() bool

	// LockSuffix returns the SQL fragments needed to attach a pessimistic
	// lock to a SELECT.
	//
	//   - tableHint is appended after the FROM clause's table name. MSSQL
	//     uses this slot for `WITH (UPDLOCK, ROWLOCK)`-style hints; the
	//     row-level locking dialects return "" here.
	//   - suffix is appended at the very end of the SELECT (after ORDER BY
	//     and LIMIT/OFFSET) — `FOR UPDATE [SKIP LOCKED|NOWAIT]` for the
	//     PG/MySQL/Oracle/MariaDB family.
	//
	// Returning ErrUnsupportedFeature signals "this dialect doesn't speak
	// pessimistic locks at this level" — SQLite is the canonical case.
	// A zero-value LockOptions (IsZero() == true) must always return ("", "", nil).
	LockSuffix(opts LockOptions) (tableHint, suffix string, err error)

	// UpsertSQL returns the dialect-specific upsert (INSERT … ON CONFLICT … DO UPDATE)
	// fragment that is appended after the VALUES clause.
	// conflictCols: columns that define the conflict target (e.g. primary key or unique index).
	// updateCols:   columns to update on conflict; if empty defaults to all non-conflict columns.
	// argOffset:    current placeholder index (1-based) so positional dialects stay in sync.
	// Returns the SQL fragment to append after the VALUES clause.
	UpsertSQL(conflictCols, updateCols []string, argOffset int) string
}

Dialect defines the interface for database-specific SQL generation. Each supported database (PostgreSQL, MySQL, SQLite, etc.) implements this interface.
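The placeholder conventions listed in the interface comments can be summarised in one standalone sketch (illustrative, not the package's per-dialect implementations):

```go
package main

import "fmt"

// placeholderFor renders the positional bind marker each dialect family
// expects, per the conventions documented on Dialect.Placeholder.
func placeholderFor(dialect string, index int) string {
	switch dialect {
	case "postgres":
		return fmt.Sprintf("$%d", index) // $1, $2, …
	case "mssql":
		return fmt.Sprintf("@p%d", index) // @p1, @p2, …
	case "oracle":
		return fmt.Sprintf(":%d", index) // :1, :2, …
	default: // mysql, mariadb, sqlite
		return "?"
	}
}
```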

func DetectDialect

func DetectDialect(driverName string) (Dialect, error)

DetectDialect attempts to auto-detect the dialect from a driver name.

func DetectDialectByName

func DetectDialectByName(name string) (Dialect, error)

DetectDialectByName attempts to get a dialect by name from all registered dialects including custom ones. This is useful when you know the exact dialect name.

func MSSQL

func MSSQL() Dialect

MSSQL returns the Microsoft SQL Server dialect instance.

func MariaDB

func MariaDB() Dialect

MariaDB returns a MariaDB dialect instance.

func MySQL

func MySQL() Dialect

MySQL returns the MySQL dialect instance.

func Oracle

func Oracle() Dialect

Oracle returns the Oracle Database dialect instance.

func PostgreSQL

func PostgreSQL() Dialect

PostgreSQL returns the PostgreSQL dialect instance.

func SQLite

func SQLite() Dialect

SQLite returns the SQLite dialect instance.

type EventBus

type EventBus struct {
	// contains filtered or unexported fields
}

EventBus provides a dialect-agnostic factory for creating EventListeners. Since not all databases support PubSub natively (e.g., SQLite), this may return ErrNotSupported for certain dialects.

func NewEventBus

func NewEventBus(client *Client) *EventBus

NewEventBus creates a new EventBus for the given client.

func (*EventBus) CreateListener

func (eb *EventBus) CreateListener() (EventListener, error)

CreateListener creates an EventListener based on the dialect.

NOTE: EventBus is experimental in V1. Native LISTEN/NOTIFY (PostgreSQL) requires a dedicated connection with a driver-specific implementation (e.g., github.com/lib/pq). This will be completed in a future release.

type EventListener

type EventListener interface {
	// Listen subscribes to a specific channel.
	Listen(ctx context.Context, channel string) error

	// Unlisten unsubscribes from a channel.
	Unlisten(ctx context.Context, channel string) error

	// Receive blocks until an event is received, returning the payload or an error.
	Receive(ctx context.Context) (EventPayload, error)

	// Close terminates the listener connection.
	Close() error
}

EventListener defines an interface for listening to database events. This is typically implemented via PubSub mechanisms like PostgreSQL's LISTEN/NOTIFY.

type EventPayload

type EventPayload struct {
	Channel string
	Payload string
}

EventPayload represents a message received from a database event channel.

type ExecFunc

type ExecFunc func(ctx context.Context, exec Executor, sqlStr string, args []any) (sql.Result, error)

ExecFunc is the signature for SQL execution functions used by middleware.

type Executor

type Executor interface {
	QueryContext(ctx context.Context, query string, args ...any) (*sql.Rows, error)
	QueryRowContext(ctx context.Context, query string, args ...any) *sql.Row
	ExecContext(ctx context.Context, query string, args ...any) (sql.Result, error)
}

Executor is the common interface for *sql.DB and *sql.Tx. It allows Query[T] to execute against either a raw connection or a transaction.

type Expr added in v0.4.0

type Expr interface {
	ToSQL(d Dialect, g *SQLGuard) (sql string, args []any, err error)
}

Expr is the composable expression node Phase 2's query builder rests on. Each node implements ToSQL, returning a SQL fragment with `?` as a neutral bind marker plus the args that fill those markers.

Callers (WhereExpr, HavingExpr) hand the rendered fragment to the existing buildWhereClause / substitutePathMarkers pipeline that already rewrites `?` to the dialect's placeholder syntax at the correct arg index. That keeps the AST dialect-agnostic at construction time and lets us compose deep trees without juggling indices.

The exported constructors below (Col, Lit, And, Or, Not, Cmp, Eq, Ne, Lt, Gt, Lte, Gte, In, NotIn, Func) are the core v0.4 AST surface. Subqueries, Cast, and Exists landed in the follow-up subqueries-and-CTE PR, also in v0.4.
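The ToSQL contract — a fragment with neutral `?` markers plus the args that fill them — composes mechanically, with no index bookkeeping. A miniature stand-in (local names, and no SQLGuard validation, unlike the real AST):

```go
package main

// expr mirrors the Expr contract: a fragment with '?' markers plus args.
type expr interface{ toSQL() (string, []any) }

// col is a column reference; the real Col validates via SQLGuard.
type col string

func (c col) toSQL() (string, []any) { return string(c), nil }

// lit binds a value: it never reaches the SQL surface, only args.
type lit struct{ v any }

func (l lit) toSQL() (string, []any) { return "?", []any{l.v} }

// cmp is a binary comparison node.
type cmp struct {
	lhs expr
	op  string
	rhs expr
}

// toSQL concatenates the child fragments; args stay aligned with the
// markers no matter how deep the tree is.
func (c cmp) toSQL() (string, []any) {
	ls, la := c.lhs.toSQL()
	rs, ra := c.rhs.toSQL()
	return ls + " " + c.op + " " + rs, append(la, ra...)
}
```

Downstream, the builder rewrites each `?` to the dialect's placeholder at the correct index, exactly as the Expr docs describe.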

func And added in v0.4.0

func And(parts ...Expr) Expr

And composes expressions with logical AND. An empty And is a no-op (renders to ""). A single-element And renders the inner expression without parentheses; two or more are wrapped so precedence is explicit.

func Cmp added in v0.4.0

func Cmp(lhs Expr, op string, rhs Expr) Expr

Cmp is the general comparison constructor. Operator goes through SQLGuard.ValidateOperator so the AST cannot smuggle arbitrary tokens into the SQL surface.

func Col added in v0.4.0

func Col(name string) Expr

Col references a column by name. The name is validated through SQLGuard.ValidateIdentifier — the AST inherits the same identifier safety the rest of the builder enforces. The wildcard "*" is accepted as-is for use inside aggregate calls (e.g. Func("COUNT", Col("*"))).

func DenseRank added in v0.4.0

func DenseRank() Expr

DenseRank renders `DENSE_RANK()`.

func Eq added in v0.4.0

func Eq(lhs, rhs Expr) Expr

Eq, Ne, Lt, Gt, Lte, Gte are the syntactic shortcuts for Cmp with the most common operators. Built on top of Lit / Col for the typical "Col(x) = Lit(v)" shape.

func Exists added in v0.4.0

func Exists(s *Subquery) Expr

Exists renders `EXISTS (<subquery>)`. Typically used as a top-level WhereExpr predicate.

Example:

hasOrders, _ := quark.For[Order](ctx, client).
    Where("user_id", "=", quark.Col("users.id")).
    Select("1").
    AsSubquery()
q := quark.For[User](ctx, client).WhereExpr(quark.Exists(hasOrders))

(Correlated subqueries require the inner query to reference the outer table by qualified name; see the JoinOn grammar for what's accepted.)

func Func added in v0.4.0

func Func(name string, args ...Expr) Expr

Func calls a SQL function. The name is normalised to upper-case and matched against a whitelist; unknown names return ErrInvalidQuery rather than reaching the SQL surface. An empty argument list is allowed and emits a bare `NAME()`; for COUNT(*), pass Col("*") explicitly.

func Gt added in v0.4.0

func Gt(lhs, rhs Expr) Expr

func Gte added in v0.4.0

func Gte(lhs, rhs Expr) Expr

func In added in v0.4.0

func In(lhs Expr, values ...Expr) Expr

In renders "lhs IN (v1, v2, …)". Empty values list is a logic error and returns ErrInvalidQuery — `WHERE x IN ()` is non-portable across dialects (Postgres errors, MySQL silently matches nothing) so we refuse to emit it. Use a no-rows query instead.

func InSub added in v0.4.0

func InSub(lhs Expr, s *Subquery) Expr

InSub renders `lhs IN (<subquery>)`. Useful for the common `WHERE x IN (SELECT id FROM ...)` shape; the inner Subquery should `Select("…")` exactly one column for the comparison.

func Lag added in v0.4.0

func Lag(col Expr, offset int) Expr

Lag renders `LAG(<col>, <offset>)`. The offset is bound as a parameter so the path is uniform with the rest of the AST — no SQL-surface integers, no per-dialect numeric formatting concerns.

func Lead added in v0.4.0

func Lead(col Expr, offset int) Expr

Lead renders `LEAD(<col>, <offset>)`.

func Lit added in v0.4.0

func Lit(v any) Expr

Lit binds a Go value as a SQL parameter. The value never reaches the SQL surface — it always travels through args, regardless of how nested the expression tree is.

func Lt added in v0.4.0

func Lt(lhs, rhs Expr) Expr

func Lte added in v0.4.0

func Lte(lhs, rhs Expr) Expr

func Ne added in v0.4.0

func Ne(lhs, rhs Expr) Expr

func Not added in v0.4.0

func Not(e Expr) Expr

Not negates an expression. Renders as "NOT (<inner>)" so precedence is explicit.

func NotExists added in v0.4.0

func NotExists(s *Subquery) Expr

NotExists is the negated form: `NOT EXISTS (<subquery>)`.

func NotIn added in v0.4.0

func NotIn(lhs Expr, values ...Expr) Expr

NotIn is the negation. Same emptiness rules apply.

func NotInSub added in v0.4.0

func NotInSub(lhs Expr, s *Subquery) Expr

NotInSub is the negated form: `lhs NOT IN (<subquery>)`.

func Or added in v0.4.0

func Or(parts ...Expr) Expr

Or composes expressions with logical OR. Same parenthesis and emptiness rules as And.

func Over added in v0.4.0

func Over(inner Expr, w *Window) Expr

Over wraps an inner Expr with a Window: `<inner> OVER (<window>)`. Typical use is wrapping a window-function leaf (RowNumber, Rank, DenseRank, Lag, Lead) but any aggregate function from the AST whitelist (`COUNT`, `SUM`, etc.) is also valid as the inner — the SQL spec defines them all as windowable.

func Rank added in v0.4.0

func Rank() Expr

Rank renders `RANK()`.

func RowNumber added in v0.4.0

func RowNumber() Expr

RowNumber renders `ROW_NUMBER()`. Meaningless outside Over().

func Sub added in v0.4.0

func Sub(s *Subquery) Expr

Sub wraps a Subquery as an Expr leaf so it can take the place of a Lit or Col anywhere an Expr is accepted (e.g. `Eq(Col("user_id"), Sub(maxID))`, `Cmp(Col("price"), ">", Sub(avgPrice))`). Renders as `(<inner-sql>)`.

type FieldMeta

type FieldMeta = schema.FieldMeta

FieldMeta is the metadata for a single struct field.

type ForeignKey added in v0.6.0

type ForeignKey struct {
	Name       string
	Columns    []string
	RefTable   string
	RefColumns []string
	OnDelete   string
	OnUpdate   string
}

ForeignKey is one FOREIGN KEY constraint declared on a table. It captures the surface that Client.AddForeignKey takes as input so the diff comparator (F3-3) can match introspected FKs against Go-side declarations symmetrically.

Columns and RefColumns are positionally matched — `Columns[i]` references `RefColumns[i]` (FK constraints can span multiple columns, e.g. composite FKs).

OnDelete / OnUpdate are normalised to the SQL-standard verbose form (`"CASCADE"`, `"SET NULL"`, `"SET DEFAULT"`, `"RESTRICT"`, `"NO ACTION"`) regardless of how the underlying catalog encodes them (PG single-char `confdeltype`, MSSQL `delete_referential_action_desc` with underscores, etc.). The empty string never appears here in practice — every catalog returns a verbose label.

Catalog asymmetry to know about: when a foreign key is declared without an explicit ON DELETE/ON UPDATE clause, **MariaDB** stores the SQL-standard default as `"RESTRICT"` in `INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS`, while **MySQL**, **PostgreSQL**, **MSSQL**, and **SQLite** store it as `"NO ACTION"`. In SQL semantics RESTRICT and NO ACTION are equivalent in immediate-check mode (the only mode these engines support); the difference is purely how each catalog labels the default. The introspector reports what the catalog says rather than normalising to a single canonical form — the diff layer (F3-3) treats the two as equivalent on the MySQL/MariaDB side.

Name comes from the catalog. SQLite returns `""` for inline FKs declared without an explicit `CONSTRAINT <name>` clause; the diff layer handles unnamed FKs by matching on (columns, ref_table, ref_columns) tuples.
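The RESTRICT / NO ACTION equivalence the diff layer applies can be sketched as follows (a hedged stand-in; the real comparator's scope and matching rules live in the package):

```go
package main

// refActionsEqual compares referential actions, treating the SQL-standard
// default labels RESTRICT and NO ACTION as equivalent — mirroring the
// MariaDB-vs-everyone catalog asymmetry described above.
func refActionsEqual(a, b string) bool {
	norm := func(s string) string {
		if s == "RESTRICT" {
			return "NO ACTION"
		}
		return s
	}
	return norm(a) == norm(b)
}
```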

type Index added in v0.6.0

type Index struct {
	Name    string
	Columns []string
	Unique  bool
}

Index is one secondary (non-primary-key) index on a table. The PK is a constraint rather than an index in the diff model and lives on the Column (via Default / future PrimaryKey field), not here. The introspector deliberately filters PK-backing indexes per dialect so `Table.Indexes` only carries what F3-3 diffs need to add/drop.

Columns is the ordered list of column names as the index defines them (the order is significant for B-tree indexes — a (a,b) index is not the same as (b,a)).

type JSON added in v0.3.0

type JSON[T any] struct {
	V T
}

JSON[T] wraps a Go value so it round-trips through a SQL JSON column via encoding/json. Use it when you want a typed view of a JSON-shaped column without writing scan / value code by hand.

JSON implements database/sql.Scanner and database/sql/driver.Valuer, so every driver Quark supports handles the round-trip with no extra reflect in Quark's hot paths. The migrate layer detects JSON[T] and emits the dialect-native JSON column type:

Postgres → JSONB
MySQL / MariaDB → JSON
SQLite → TEXT (with json_* functions available at query time)
SQL Server → NVARCHAR(MAX)
Oracle → CLOB

Example:

type Settings struct {
    Theme  string `json:"theme"`
    Volume int    `json:"volume"`
}

type Profile struct {
    ID       int64                  `db:"id" pk:"true"`
    Settings quark.JSON[Settings]   `db:"settings"`
}

p := Profile{Settings: quark.JSON[Settings]{V: Settings{Theme: "dark", Volume: 7}}}
_ = client.Migrate(ctx, &Profile{})
_ = quark.For[Profile](ctx, client).Create(&p)

func (*JSON[T]) Scan added in v0.3.0

func (j *JSON[T]) Scan(src any) error

Scan implements sql.Scanner. Accepts []byte and string sources (the two forms drivers return for JSON columns). NULL clears V to its zero value rather than erroring; pair with quark.Nullable[JSON[T]] when you want to distinguish NULL from "valid but empty" payloads.

func (JSON[T]) Value added in v0.3.0

func (j JSON[T]) Value() (driver.Value, error)

Value implements driver.Valuer by JSON-marshalling V. Drivers receive the resulting []byte (treated as a JSON string) and store it in the column.

type JoinBuilder added in v0.4.0

type JoinBuilder[T any] struct {
	// contains filtered or unexported fields
}

JoinBuilder is the structured form returned by Join, LeftJoin, and RightJoin. Complete the JOIN by chaining `.On(left, op, right)` (the typed identifier form) or `.OnRaw(onClause)` (the legacy free-form string for ON clauses outside the simple binary grammar). Both chain-terminate by returning *Query[T] so subsequent builder calls pick up where the JOIN left off.

JoinBuilder values are immutable; the underlying *Query[T] is cloned before the JOIN is appended, matching the rest of the builder's thread-safety contract.

func (*JoinBuilder[T]) On added in v0.4.0

func (b *JoinBuilder[T]) On(left, op, right string) *Query[T]

On appends an INNER/LEFT/RIGHT JOIN with a single binary identifier comparison as the ON clause: `<left> <op> <right>`. The three arguments are concatenated as `left + " " + op + " " + right` and the resulting clause is validated as a whole against `guard.ValidateJoinOn` at exec time, surfacing `ErrInvalidJoin` for any shape outside the identifier-only grammar (literal RHS, function calls, parens, comments, mismatched operators). The grammar accepts the binary comparison operators `=`, `!=`, `<>`, `<`, `<=`, `>`, `>=` and AND-chained compound clauses.

Most JOINs need only this form — the typical `users.id = orders.user_id` shape. For multi-condition ON clauses or any expression the ValidateJoinOn grammar accepts (AND-chained identifier comparisons), use `OnRaw`.

Example:

quark.For[Order](ctx, client).
    Join("users").On("users.id", "=", "orders.user_id").
    List()

func (*JoinBuilder[T]) OnRaw added in v0.4.0

func (b *JoinBuilder[T]) OnRaw(onClause string) *Query[T]

OnRaw appends the JOIN with a free-form ON clause string. The clause must match the minimal identifier-only grammar that `guard.ValidateJoinOn` enforces (AND-chained binary comparisons of qualified identifiers, e.g. `users.id = orders.user_id AND users.tenant_id = orders.tenant_id`). Literals, function calls, subqueries, and parentheses are rejected. Drop down to `RawQuery` for shapes outside this grammar.

OnRaw is the migration path for callers of the v0.3.x string-raw `Join(table, onClause)`: rewrite as `Join(table).OnRaw(onClause)`.

type Limits

type Limits struct {
	MaxQueryLength     int
	MaxResults         int
	MaxJoins           int
	MaxWhereConditions int
	QueryTimeout       time.Duration
	AllowRawQueries    bool
	SafeMigrations     bool
}

Limits defines security and performance limits for queries.

func DefaultLimits

func DefaultLimits() Limits

DefaultLimits returns sensible default limits.

type LockMode added in v0.4.0

type LockMode int

LockMode is the kind of pessimistic lock requested for a SELECT.

const (
	// LockNone means no lock clause is emitted (the default).
	LockNone LockMode = iota
	// LockForUpdate locks the rows for update; other transactions cannot
	// read-with-lock or write the matching rows until the current
	// transaction commits or rolls back. Most engines support it.
	LockForUpdate
	// LockForShare takes a shared read lock — other transactions can also
	// read-with-lock but not write. Supported on PG / MySQL 8+ / MariaDB;
	// not on SQLite. MSSQL approximates with HOLDLOCK.
	LockForShare
)

type LockOptions added in v0.4.0

type LockOptions struct {
	Mode       LockMode
	SkipLocked bool
	NoWait     bool
}

LockOptions describes the pessimistic-lock behaviour for a SELECT. The zero value (LockMode == LockNone) emits nothing — callers opt in via ForUpdate / ForShare on Query[T].

func (LockOptions) IsZero added in v0.4.0

func (o LockOptions) IsZero() bool

IsZero reports whether the options request no lock at all. Used by dialects to short-circuit their LockSuffix implementations.

type MSSQLDialect

type MSSQLDialect struct {
	// contains filtered or unexported fields
}

MSSQLDialect implements the Microsoft SQL Server dialect.

func (*MSSQLDialect) AcquireMigrationLock added in v0.6.0

func (d *MSSQLDialect) AcquireMigrationLock(ctx context.Context, db DBConnector, name string, timeout time.Duration) (MigrationLock, error)

AcquireMigrationLock uses `sp_getapplock` with @LockOwner='Session', scoped to the dedicated connection's session. The procedure returns an integer status: 0 (granted immediately), 1 (granted after wait), -1 (timeout), -2 (cancelled), -3 (deadlock victim), -999 (parameter / fatal error). We map -1 to `ErrLockTimeout` and the other negative statuses to descriptive errors.

Timeout is in milliseconds; we round the Go Duration to ms.

func (*MSSQLDialect) AlterTableAddColumn

func (m *MSSQLDialect) AlterTableAddColumn(table, column, dataType string) string

func (*MSSQLDialect) AlterTableAlterColumn

func (m *MSSQLDialect) AlterTableAlterColumn(table, column, newDataType string) string

func (*MSSQLDialect) AlterTableDropColumn

func (m *MSSQLDialect) AlterTableDropColumn(table, column string) string

func (*MSSQLDialect) BuildProcedureCall

func (m *MSSQLDialect) BuildProcedureCall(procedure string, argCount int) string

func (*MSSQLDialect) BuildRoutineQuery

func (m *MSSQLDialect) BuildRoutineQuery(routine string, argCount int) string

func (*MSSQLDialect) CurrentTimestamp

func (m *MSSQLDialect) CurrentTimestamp() string

func (*MSSQLDialect) IntrospectSchema added in v0.6.0

func (d *MSSQLDialect) IntrospectSchema(ctx context.Context, exec Executor) (Schema, error)

IntrospectSchema reads the MSSQL schema via `sys.tables`, `sys.columns`, `sys.types`, and `sys.default_constraints`. MSSQL stores defaults in a separate catalog joined on parent object_id + column_id, so default-extraction needs a LEFT JOIN; everything else is a straight catalog read.

MSSQL caveats handled here:

  • System-shipped tables are filtered via `is_ms_shipped = 0` plus a `name NOT LIKE 'quark[_]%'` filter (the `[_]` character class prevents `_` from being interpreted as the single-character wildcard).
  • Type reassembly: `sys.types` returns the bare type name (`varchar`, `decimal`); we glue `(N)` / `(P,S)` / `(MAX)` onto it from the adjacent columns. `max_length = -1` is the MSSQL convention for VARCHAR(MAX) / NVARCHAR(MAX). For nvarchar, `max_length` is bytes (chars * 2), so we divide by 2 when emitting the displayed type, matching what a user would write in DDL.
  • Default values: MSSQL wraps them in parens (`(0)`, `(getdate())`, `('draft')`). We pass that through raw — the diff layer (F3-3) is responsible for unwrapping if it needs to compare against the Go-side DDL.

func (*MSSQLDialect) JSONExtract

func (m *MSSQLDialect) JSONExtract(column, path string) (string, []any, error)

func (*MSSQLDialect) LastInsertIDQuery

func (m *MSSQLDialect) LastInsertIDQuery(table, pkColumn string) string

func (*MSSQLDialect) LimitOffset

func (m *MSSQLDialect) LimitOffset(limit, offset int) string

func (*MSSQLDialect) LockSuffix added in v0.4.0

func (m *MSSQLDialect) LockSuffix(opts LockOptions) (string, string, error)

MSSQL uses table hints attached to the FROM clause (not a SELECT suffix). Single-row pessimistic-style locks come from (UPDLOCK, ROWLOCK) for ForUpdate and (HOLDLOCK, ROWLOCK) for ForShare. SkipLocked maps to READPAST; NoWait has no direct hint, so it errors out rather than silently blocking.

func (*MSSQLDialect) Name

func (d *MSSQLDialect) Name() string

func (*MSSQLDialect) Placeholder

func (m *MSSQLDialect) Placeholder(index int) string

func (*MSSQLDialect) Placeholders

func (m *MSSQLDialect) Placeholders(n int) []string

func (*MSSQLDialect) Quote

func (m *MSSQLDialect) Quote(identifier string) string

func (*MSSQLDialect) RenameColumn

func (m *MSSQLDialect) RenameColumn(table, oldName, newName string) string

func (*MSSQLDialect) RenameTable

func (m *MSSQLDialect) RenameTable(oldName, newName string) string

func (*MSSQLDialect) Returning

func (m *MSSQLDialect) Returning(columns ...string) string

func (*MSSQLDialect) SupportsLastInsertID

func (m *MSSQLDialect) SupportsLastInsertID() bool

func (*MSSQLDialect) SupportsReturning

func (m *MSSQLDialect) SupportsReturning() bool

func (*MSSQLDialect) SupportsTransactionalDDL

func (m *MSSQLDialect) SupportsTransactionalDDL() bool

func (*MSSQLDialect) UpsertSQL

func (m *MSSQLDialect) UpsertSQL(conflictCols, updateCols []string, _ int) string

UpsertSQL for MSSQL is MERGE-based. MSSQL requires MERGE syntax, which cannot be appended to a plain INSERT, so this returns a marker that buildUpsert handles specially.

type MariaDBDialect

type MariaDBDialect struct {
	MySQLDialect // embed MySQL — identical wire protocol and driver
}

MariaDBDialect implements the MariaDB dialect. MariaDB is a fork of MySQL with significant additions:

  • RETURNING clause in INSERT/DELETE/UPDATE (10.5+)
  • Native sequences via CREATE SEQUENCE (10.3+)
  • Temporal tables / system-versioned tables (10.3.4+)
  • JSON_TABLE support (10.6+)
  • INTERSECT / EXCEPT set operations (10.3+)
  • Descending indexes (10.6+)
  • UUID() and UUID_SHORT() built-ins
  • IGNORE INDEX / USE INDEX hints identical to MySQL

func (*MariaDBDialect) AcquireMigrationLock added in v0.6.0

func (d *MariaDBDialect) AcquireMigrationLock(ctx context.Context, db DBConnector, name string, timeout time.Duration) (MigrationLock, error)

func (*MariaDBDialect) AlterTableAlterColumn

func (m *MariaDBDialect) AlterTableAlterColumn(table, column, newDataType string) string

AlterTableAlterColumn uses MODIFY COLUMN (same as MySQL).

func (*MariaDBDialect) CreateSequence

func (m *MariaDBDialect) CreateSequence(name string, start, increment int64) string

CreateSequence returns the DDL to create a named sequence (MariaDB 10.3+).

func (*MariaDBDialect) CreateSystemVersionedTable

func (m *MariaDBDialect) CreateSystemVersionedTable(table string, columnDefs string) string

CreateSystemVersionedTable returns the DDL for a system-versioned (temporal) table. Requires MariaDB 10.3.4+.

func (*MariaDBDialect) HistoryBetween

func (m *MariaDBDialect) HistoryBetween(table, from, to string) string

HistoryBetween returns SELECT … FOR SYSTEM_TIME BETWEEN for a time range.

func (*MariaDBDialect) HistoryQuery

func (m *MariaDBDialect) HistoryQuery(table string) string

HistoryQuery returns SELECT … FOR SYSTEM_TIME ALL to query full row history.

func (*MariaDBDialect) IntrospectSchema added in v0.6.0

func (d *MariaDBDialect) IntrospectSchema(ctx context.Context, exec Executor) (Schema, error)

func (*MariaDBDialect) JSONExtract

func (m *MariaDBDialect) JSONExtract(column, path string) (string, []any, error)

JSONExtract uses the MariaDB / MySQL JSON_VALUE syntax (10.2.3+). MariaDB also accepts the arrow operator col->>'$.key' from 10.4.3+.

func (*MariaDBDialect) JSONTable

func (m *MariaDBDialect) JSONTable(source, path string, columns ...string) string

JSONTable returns a JSON_TABLE expression (MariaDB 10.6+). source: SQL expression producing JSON; path: root path e.g. '$[*]'; columns: column definitions e.g. "id INT PATH '$.id'".

The path is validated against guard.ValidateJSONTablePath (JSONPath grammar rooted at "$"). source and columns must be trusted strings — the JSON_TABLE row syntax intermixes column types and PATH literals, so binding it as a parameter is not possible. Callers MUST NOT pass user-controlled values for source or columns. If the path is invalid, the returned SQL embeds an obvious sentinel that fails parsing at execution time, surfacing the misuse rather than silently producing executable injection.

TODO(public-api): when JSONTable graduates from internal-only to a public builder, change the signature to return (string, error) so callers can detect validation failure with errors.Is rather than scanning the SQL for JSON_TABLE_PATH_INVALID.

func (*MariaDBDialect) LastInsertIDQuery

func (m *MariaDBDialect) LastInsertIDQuery(table, pkColumn string) string

LastInsertIDQuery is kept as fallback for engines older than 10.5.

func (*MariaDBDialect) LimitOffset

func (m *MariaDBDialect) LimitOffset(limit, offset int) string

LimitOffset for MariaDB uses standard LIMIT … OFFSET … syntax (unlike MySQL which uses LIMIT offset, count).

func (*MariaDBDialect) LockSuffix added in v0.4.0

func (m *MariaDBDialect) LockSuffix(opts LockOptions) (string, string, error)

MariaDB 10.6+ supports SKIP LOCKED and NOWAIT in the same shape as MySQL.

func (*MariaDBDialect) Name

func (d *MariaDBDialect) Name() string

func (*MariaDBDialect) NextVal

func (m *MariaDBDialect) NextVal(sequenceName string) string

NextVal returns the SQL expression that reads the next value from a sequence.

func (*MariaDBDialect) RenameColumn

func (m *MariaDBDialect) RenameColumn(table, oldName, newName string) string

RenameColumn uses the standard SQL syntax supported since MariaDB 10.4.2.

func (*MariaDBDialect) Returning

func (m *MariaDBDialect) Returning(columns ...string) string

Returning generates a RETURNING clause compatible with MariaDB 10.5+.

func (*MariaDBDialect) SupportsLastInsertID

func (m *MariaDBDialect) SupportsLastInsertID() bool

SupportsLastInsertID returns false: the ORM prefers RETURNING over LAST_INSERT_ID() for MariaDB, so the last-insert-id path is only a fallback.

func (*MariaDBDialect) SupportsReturning

func (m *MariaDBDialect) SupportsReturning() bool

SupportsReturning returns true: MariaDB 10.5+ supports RETURNING in INSERT … RETURNING, DELETE … RETURNING and UPDATE … RETURNING.

func (*MariaDBDialect) SupportsTransactionalDDL

func (m *MariaDBDialect) SupportsTransactionalDDL() bool

SupportsTransactionalDDL returns false — MariaDB (like MySQL) performs implicit commits around DDL statements.

func (*MariaDBDialect) UpsertSQL

func (m *MariaDBDialect) UpsertSQL(conflictCols, updateCols []string, argOffset int) string

UpsertSQL for MariaDB: INSERT … ON DUPLICATE KEY UPDATE (same as MySQL)

type Middleware

type Middleware interface {
	WrapExec(next ExecFunc) ExecFunc
	WrapQuery(next QueryFunc) QueryFunc
	WrapQueryRow(next QueryRowFunc) QueryRowFunc
}

Middleware wraps query execution for cross-cutting concerns like logging, retry logic, caching, rate limiting, etc. It intercepts all types of database interactions (Exec, Query, QueryRow).

type MigrationLock added in v0.6.0

type MigrationLock interface {
	// Release relinquishes the lock and returns the underlying
	// connection to the pool. Safe to call multiple times; subsequent
	// calls are no-ops. Returns an error only if the release RPC fails
	// — not if the lock was already released.
	Release(ctx context.Context) error
}

MigrationLock is the handle returned by Client.AcquireMigrationLock. The caller must invoke Release before the *Client is closed; the lock is held by a dedicated connection for its entire lifetime so a process panic / Client.Close releases it automatically through the underlying driver's session teardown.

The lock guarantees mutual exclusion across processes sharing the same database. Concurrent acquirers of the same `name` block up to the requested timeout; the first one wins, the rest receive `ErrLockTimeout` if the timeout elapses.

type MigrationLocker added in v0.6.0

type MigrationLocker interface {
	AcquireMigrationLock(ctx context.Context, db DBConnector, name string, timeout time.Duration) (MigrationLock, error)
}

MigrationLocker is the optional interface a Dialect implements to support distributed migration locks. PG / MySQL / MariaDB / MSSQL implement it; SQLite and (currently) Oracle do not.

Kept as an optional interface — not a required method on Dialect — so custom Dialect implementations downstream don't have to grow this method to keep compiling. They opt in if and when they need distributed-lock support.

type ModelMeta

type ModelMeta = schema.ModelMeta

ModelMeta is the cached metadata for a model struct.

func GetModelMeta

func GetModelMeta[T any]() *ModelMeta

GetModelMeta returns the cached metadata for model type T.

func GetModelMetaByType

func GetModelMetaByType(t reflect.Type) *ModelMeta

GetModelMetaByType returns the cached metadata for a reflect.Type.

type MySQLDialect

type MySQLDialect struct {
	// contains filtered or unexported fields
}

MySQLDialect implements the MySQL dialect.

func (*MySQLDialect) AcquireMigrationLock added in v0.6.0

func (d *MySQLDialect) AcquireMigrationLock(ctx context.Context, db DBConnector, name string, timeout time.Duration) (MigrationLock, error)

AcquireMigrationLock uses MySQL's `GET_LOCK(name, timeout_seconds)`, which is session-bound (released when the connection ends). Returns 1 on success, 0 on timeout, NULL on error. We dedicate a connection for the lock's lifetime; `Release` calls `RELEASE_LOCK` and returns the connection to the pool.

Timeout argument: GET_LOCK accepts seconds (negative = wait forever); we round Duration to seconds. Sub-second timeouts are rounded UP to 1 second — the next-best approximation of the caller's intent given the protocol granularity.

MariaDB shares MySQL's GET_LOCK semantics — same code path.

func (*MySQLDialect) AlterTableAddColumn

func (m *MySQLDialect) AlterTableAddColumn(table, column, dataType string) string

func (*MySQLDialect) AlterTableAlterColumn

func (m *MySQLDialect) AlterTableAlterColumn(table, column, newDataType string) string

func (*MySQLDialect) AlterTableDropColumn

func (m *MySQLDialect) AlterTableDropColumn(table, column string) string

func (*MySQLDialect) BuildProcedureCall

func (m *MySQLDialect) BuildProcedureCall(procedure string, argCount int) string

func (*MySQLDialect) BuildRoutineQuery

func (m *MySQLDialect) BuildRoutineQuery(routine string, argCount int) string

func (*MySQLDialect) CurrentTimestamp

func (m *MySQLDialect) CurrentTimestamp() string

func (*MySQLDialect) IntrospectSchema added in v0.6.0

func (d *MySQLDialect) IntrospectSchema(ctx context.Context, exec Executor) (Schema, error)

IntrospectSchema reads the MySQL/MariaDB schema using `INFORMATION_SCHEMA.TABLES` and `INFORMATION_SCHEMA.COLUMNS`. Both engines share the same catalog structure for the column-level surface, so a single implementation covers them (the two Dialect types just delegate here).

MySQL caveats handled here:

  • Scope: `TABLE_SCHEMA = DATABASE()` — the current database, which is MySQL's analogue of PG's `current_schema()`. Cross-database introspection is out of scope (caller would need to switch DBs explicitly).
  • Type representation: we use `COLUMN_TYPE` (full type string with parameters and modifiers — `int(11) unsigned`, `varchar(255)`, `decimal(10,2)`) instead of reassembling from `DATA_TYPE`. MySQL returns this verbatim, which means the round-trip vs the Go migrate-side DDL is comparable without per-type switches.
  • System tables: MySQL exposes `mysql`, `information_schema`, `performance_schema`, `sys` as system databases. Our scope is the user's current DB, so those don't surface; we additionally filter `quark_%` for our internal tables.

func (*MySQLDialect) JSONExtract

func (m *MySQLDialect) JSONExtract(column, path string) (string, []any, error)

func (*MySQLDialect) LastInsertIDQuery

func (m *MySQLDialect) LastInsertIDQuery(table, pkColumn string) string

func (*MySQLDialect) LimitOffset

func (m *MySQLDialect) LimitOffset(limit, offset int) string

func (*MySQLDialect) LockSuffix added in v0.4.0

func (m *MySQLDialect) LockSuffix(opts LockOptions) (string, string, error)

MySQL 8.0+ supports SKIP LOCKED and NOWAIT. Older 5.x ignores those modifiers (driver does not error; lock still taken).

func (*MySQLDialect) Name

func (d *MySQLDialect) Name() string

func (*MySQLDialect) Placeholder

func (m *MySQLDialect) Placeholder(index int) string

func (*MySQLDialect) Placeholders

func (m *MySQLDialect) Placeholders(n int) []string

func (*MySQLDialect) Quote

func (m *MySQLDialect) Quote(identifier string) string

func (*MySQLDialect) RenameColumn

func (m *MySQLDialect) RenameColumn(table, oldName, newName string) string

func (*MySQLDialect) RenameTable

func (m *MySQLDialect) RenameTable(oldName, newName string) string

func (*MySQLDialect) Returning

func (m *MySQLDialect) Returning(columns ...string) string

func (*MySQLDialect) SupportsLastInsertID

func (m *MySQLDialect) SupportsLastInsertID() bool

func (*MySQLDialect) SupportsReturning

func (m *MySQLDialect) SupportsReturning() bool

func (*MySQLDialect) SupportsTransactionalDDL

func (m *MySQLDialect) SupportsTransactionalDDL() bool

func (*MySQLDialect) UpsertSQL

func (m *MySQLDialect) UpsertSQL(conflictCols, updateCols []string, _ int) string

UpsertSQL for MySQL: INSERT … ON DUPLICATE KEY UPDATE col = VALUES(col)

type Nullable added in v0.3.0

type Nullable[T any] = sql.Null[T]

Nullable[T] is a generic wrapper for a column value that may be SQL NULL. It is a thin alias of database/sql.Null[T] (Go 1.22+) so a Nullable[T] already implements both database/sql.Scanner and database/sql/driver.Valuer — drivers handle the round-trip through their existing fast paths and Quark's reflect-based scan / write code does not need to special-case it.

Replace the long-standing pointer-as-nullable idiom (e.g. *time.Time, *string) with Nullable[T] when you want explicit "is set" semantics without a heap allocation per field. The migrate layer recognises the type and emits the SQL type for T (no NOT NULL, since the column is nullable by definition).

Example:

type Profile struct {
    ID   int64                `db:"id" pk:"true"`
    Bio  quark.Nullable[string]    `db:"bio"`
    Born quark.Nullable[time.Time] `db:"born"`
}

p := Profile{
    Bio:  quark.SomeOf("hello"),
    Born: quark.NullOf[time.Time](),
}

func NullOf added in v0.3.0

func NullOf[T any]() Nullable[T]

NullOf returns the SQL-NULL value of Nullable[T] — Nullable[T]{} with the generic spelt out at the call site for readability.

func SomeOf added in v0.3.0

func SomeOf[T any](v T) Nullable[T]

SomeOf returns a non-null Nullable[T] wrapping v. Equivalent to Nullable[T]{V: v, Valid: true} — provided as a constructor so callers don't have to spell the struct literal.

type OpAddCheck added in v0.6.0

type OpAddCheck struct {
	Table string
	Check Check
}

OpAddCheck / OpDropCheck — emitted only when both schemas populate Checks (which means neither side is SQLite — see the SQLite Checks=nil contract in Check's godoc). When one side is SQLite, Diff skips the check comparison rather than treating Checks=nil as "no checks" (which would falsely emit DropCheck for every check on the other side).

func (OpAddCheck) String added in v0.6.0

func (o OpAddCheck) String() string

type OpAddColumn added in v0.6.0

type OpAddColumn struct {
	Table  string
	Column Column
}

OpAddColumn is emitted when the desired schema adds a column to a table both sides have. Carries the full Column so the executor can splice the right `<type> [NULL|NOT NULL] [DEFAULT ...]` into the ALTER TABLE ADD COLUMN statement.

func (OpAddColumn) String added in v0.6.0

func (o OpAddColumn) String() string

type OpAddForeignKey added in v0.6.0

type OpAddForeignKey struct {
	Table      string
	ForeignKey ForeignKey
}

OpAddForeignKey / OpDropForeignKey — same model as indexes: name is the match key. SQLite's "" name for inline FKs (see schema.ForeignKey godoc) is handled by Diff matching on (Columns, RefTable, RefColumns) when Name is "" on both sides.

func (OpAddForeignKey) String added in v0.6.0

func (o OpAddForeignKey) String() string

type OpAlterColumn added in v0.6.0

type OpAlterColumn struct {
	Table string
	Old   Column
	New   Column
}

OpAlterColumn is emitted when both sides have a column with the same name but at least one attribute differs (Type, Nullable, or Default). The op carries BOTH the old and the new column so the executor / CLI can render the delta precisely and so resumable migrations (F3-4) can decide whether the alter is safe to retry.

Diff convention: we emit AT MOST ONE OpAlterColumn per (table, column). If both Type and Nullable changed, a single op describes both deltas; the executor decides whether per-attribute ALTERs are needed or a single multi-attribute ALTER is supported.

func (OpAlterColumn) String added in v0.6.0

func (o OpAlterColumn) String() string

type OpCreateIndex added in v0.6.0

type OpCreateIndex struct {
	Table string
	Index Index
}

OpCreateIndex / OpDropIndex are emitted by Diff when the index list of a table both sides have differs. We match indexes by name — there's no fuzzy "same columns, different name" matching in this PR. Renames look like DROP + CREATE, which is what the engines do anyway. If F3-3-execute later wants to detect renames for safety reasons, that's a separate pass.

func (OpCreateIndex) String added in v0.6.0

func (o OpCreateIndex) String() string

type OpCreateTable added in v0.6.0

type OpCreateTable struct {
	Table Table
}

OpCreateTable is emitted when the desired schema has a table that the current schema lacks. The full Table value (columns, indexes, FKs, checks) is carried so the executor can emit CREATE TABLE + CREATE INDEX + ALTER TABLE ADD CONSTRAINT in the right order.

func (OpCreateTable) String added in v0.6.0

func (o OpCreateTable) String() string

type OpDropCheck added in v0.6.0

type OpDropCheck struct {
	Table string
	Check string
}

func (OpDropCheck) String added in v0.6.0

func (o OpDropCheck) String() string

type OpDropColumn added in v0.6.0

type OpDropColumn struct {
	Table  string
	Column string
}

OpDropColumn is emitted when the current schema has a column the desired schema lacks. Destructive — same caveat as OpDropTable.

func (OpDropColumn) String added in v0.6.0

func (o OpDropColumn) String() string

type OpDropForeignKey added in v0.6.0

type OpDropForeignKey struct {
	Table      string
	ForeignKey string // catalog name; "" on SQLite inline FKs
}

func (OpDropForeignKey) String added in v0.6.0

func (o OpDropForeignKey) String() string

type OpDropIndex added in v0.6.0

type OpDropIndex struct {
	Table string
	Index string
}

func (OpDropIndex) String added in v0.6.0

func (o OpDropIndex) String() string

type OpDropTable added in v0.6.0

type OpDropTable struct {
	Table string
}

OpDropTable is emitted when the current schema has a table that the desired schema lacks. Destructive; F3-3 doesn't gate on a "safe mode" flag — that belongs to the executor / CLI (F3-4 + F3-5) which can prompt or refuse to drop without an explicit flag.

func (OpDropTable) String added in v0.6.0

func (o OpDropTable) String() string

type Operation added in v0.6.0

type Operation interface {
	// String returns a stable human-readable description for the
	// plan output and test failure messages. Format is
	// `<VERB> <subject>` — no DDL syntax, since DDL depends on the
	// dialect.
	String() string
	// contains filtered or unexported methods
}

Operation is one unit of schema change emitted by Diff. It's a dialect-neutral plan node: each op carries the identifiers and neutral shape needed to reconstruct a single DDL statement, and the executor (F3-3-execute, follow-up PR) translates it to per-dialect SQL via the existing migrator helpers.

Operation is a sealed interface — the concrete types in this file are the only valid implementations. F3-3 deliberately models ops as values rather than method calls so the diff stays inspectable (the CLI plan command in F3-5 can render each op as text without touching SQL) and testable (unit tests assert on op structure, not on emitted SQL).

func Diff added in v0.6.0

func Diff(desired, current Schema) []Operation

Diff returns the ordered list of [Operation]s that, applied in order, would bring `current` into alignment with `desired`. Both arguments are dialect-neutral Schema values typically produced by Client.IntrospectSchema (for `current`) and by a future models-to-schema pass (for `desired`, F3-3-plan).

Ordering rules:

  1. Tables present in desired but not in current → OpCreateTable first (so subsequent ops can reference them).
  2. Per table that both sides have, in this exact order:
     a) ADD COLUMN, then ALTER COLUMN (so the new shape is in place before in-place alters).
     b) DROP CHECK → DROP FK → DROP INDEX → DROP COLUMN (reverse-dependency order: drop the dependent constraint before the column it references).
     c) CREATE INDEX after all column changes (add/alter/drop), so new indexes can reference new columns and don't trip over dropped ones.
     d) ADD FOREIGN KEY after CREATE INDEX (FKs typically require an index on the referencing column).
     e) ADD CHECK last.
  3. Tables present in current but not in desired → OpDropTable LAST (so FK references from other dropped tables are already gone).

Diff is pure and deterministic: same input always produces the same output, and tables/columns/indexes are sorted by name within each step so the plan is reviewable in tests and CLI output.

Diff intentionally does NOT compare:

  • Column.Type strings across dialects (PG `varchar(255)` vs MSSQL `nvarchar(255)`): F3-2 doesn't normalise types, so F3-3 compares the strings verbatim and emits an OpAlterColumn if they differ. The caller is expected to feed two schemas from the same dialect, or explicitly accept the alter noise.
  • Check.Expression text (each dialect has its own canonical form — see the Check godoc). When both sides have a check by the same name, Diff treats them as equal regardless of expression text. AST-level equivalence is out of scope for this PR.
  • Checks on a side where Checks=nil (the SQLite contract). When desired.Checks=nil OR current.Checks=nil for a table, the check comparison for that table is skipped entirely.

type Option

type Option func(*Client)

Option configures a Client.

func WithCacheStore

func WithCacheStore(s CacheStore) Option

WithCacheStore sets the caching backend for the client.

func WithDialect

func WithDialect(d Dialect) Option

WithDialect sets the SQL dialect for the client. If not set, the dialect will be auto-detected from the database driver.

func WithLimits

func WithLimits(l Limits) Option

WithLimits sets the security and performance limits.

func WithLogger

func WithLogger(l *slog.Logger) Option

WithLogger sets the logger for the client. If not set, a no-op logger will be used.

func WithMiddleware

func WithMiddleware(m Middleware) Option

WithMiddleware adds middleware to the query execution chain. Middleware is applied in the order they are added.

func WithQueryObserver

func WithQueryObserver(o QueryObserver) Option

WithQueryObserver adds a query observer to the client. Multiple observers can be added and will be called in order.

type OracleDialect

type OracleDialect struct {
	// contains filtered or unexported fields
}

OracleDialect implements the Oracle Database dialect.

func (*OracleDialect) AlterTableAddColumn

func (o *OracleDialect) AlterTableAddColumn(table, column, dataType string) string

func (*OracleDialect) AlterTableAlterColumn

func (o *OracleDialect) AlterTableAlterColumn(table, column, newDataType string) string

func (*OracleDialect) AlterTableDropColumn

func (o *OracleDialect) AlterTableDropColumn(table, column string) string

func (*OracleDialect) BuildProcedureCall

func (o *OracleDialect) BuildProcedureCall(procedure string, argCount int) string

func (*OracleDialect) BuildRoutineQuery

func (o *OracleDialect) BuildRoutineQuery(routine string, argCount int) string

func (*OracleDialect) CurrentTimestamp

func (o *OracleDialect) CurrentTimestamp() string

func (*OracleDialect) JSONExtract

func (o *OracleDialect) JSONExtract(column, path string) (string, []any, error)

func (*OracleDialect) LastInsertIDQuery

func (o *OracleDialect) LastInsertIDQuery(table, pkColumn string) string

func (*OracleDialect) LimitOffset

func (o *OracleDialect) LimitOffset(limit, offset int) string

func (*OracleDialect) LockSuffix added in v0.4.0

func (o *OracleDialect) LockSuffix(opts LockOptions) (string, string, error)

Oracle pessimistic locking. SKIP LOCKED supported on 12c+.

FOR UPDATE [NOWAIT|SKIP LOCKED]

Oracle has no FOR SHARE; LockForShare maps to ErrUnsupportedFeature rather than emitting an unsafe approximation.


func (*OracleDialect) Name

func (d *OracleDialect) Name() string

func (*OracleDialect) Placeholder

func (o *OracleDialect) Placeholder(index int) string

func (*OracleDialect) Placeholders

func (o *OracleDialect) Placeholders(n int) []string

func (*OracleDialect) Quote

func (o *OracleDialect) Quote(identifier string) string

func (*OracleDialect) RenameColumn

func (o *OracleDialect) RenameColumn(table, oldName, newName string) string

func (*OracleDialect) RenameTable

func (o *OracleDialect) RenameTable(oldName, newName string) string

func (*OracleDialect) Returning

func (o *OracleDialect) Returning(columns ...string) string

func (*OracleDialect) SupportsLastInsertID

func (o *OracleDialect) SupportsLastInsertID() bool

func (*OracleDialect) SupportsReturning

func (o *OracleDialect) SupportsReturning() bool

func (*OracleDialect) SupportsTransactionalDDL

func (o *OracleDialect) SupportsTransactionalDDL() bool

func (*OracleDialect) UpsertSQL

func (o *OracleDialect) UpsertSQL(conflictCols, updateCols []string, _ int) string

UpsertSQL for Oracle: MERGE syntax — same as MSSQL, built separately.

type Page

type Page[T any] struct {
	Items      []T   // The items for current page
	Total      int64 // Total count (if available)
	Page       int   // Current page number (0-indexed)
	PageSize   int   // Items per page
	TotalPages int64 // Calculated total pages
}

Page represents a paginated result set.

type Plan added in v0.6.0

type Plan struct {
	// Ops is the diff between the desired schema (derived from
	// models) and the current schema (from [Client.IntrospectSchema]).
	// See the [Diff] godoc for ordering guarantees.
	Ops []Operation
}

Plan is the result of Client.PlanMigration: the ordered list of [Operation]s that, applied to the database, would bring it into alignment with the Go-side models.

A Plan is inert: it doesn't execute itself. Execution lives in Client.ApplyPlan (F3-3-execute), which walks Ops and dispatches each to the per-dialect migrator helpers. The CLI plan command (F3-5) renders the Plan via Plan.String without ever touching SQL.

func (Plan) Hash added in v0.6.0

func (p Plan) Hash() string

Hash returns a deterministic SHA-256 hex digest of the Plan's operation sequence. Used by Client.ApplyPlan's resumable path to detect plan drift between runs: if the same plan_hash carries over from a partially-applied run, the resume can pick up where it left off; if the hash differs, the Plan has changed and the resume is unsafe (e.g. the user added a column to their model between runs).

The digest input is the line-joined `op.String()` output of every op. Op.String() formats are documented as stable in the F3-3-core godoc, so the hash is reproducible across processes and binaries on the same Plan value. A change in any op's content — even cosmetic, like a renamed table — produces a new hash, which is the right safety boundary for resume.

Empty plans hash to the sha256 of the empty string (a constant) so two empty plans compare equal.
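The documented contract is simple enough to sketch in stdlib Go. This is an illustration of the algorithm as described, not the actual implementation: `planHash` and `opStrings` are names invented here, with `opStrings` standing in for the stable `Op.String()` output of each operation.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// planHash illustrates the documented Plan.Hash contract:
// a SHA-256 hex digest over the newline-joined op strings.
// An empty plan hashes the empty string, so all empty plans
// compare equal.
func planHash(opStrings []string) string {
	sum := sha256.Sum256([]byte(strings.Join(opStrings, "\n")))
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(planHash(nil)) // constant: sha256 of ""
	fmt.Println(planHash([]string{`ADD COLUMN "users"."age" integer`}))
}
```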

func (Plan) IsEmpty added in v0.6.0

func (p Plan) IsEmpty() bool

IsEmpty reports whether the Plan would be a no-op when applied. Equivalent to `len(p.Ops) == 0` but more readable in user code.

Use this as the "did anything drift?" check in CI / health endpoints — a non-empty Plan means the Go models and the database schema have diverged.
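A drift gate built on IsEmpty might look like the following. The exact PlanMigration signature is an assumption here; only the Plan methods are documented above.

```go
// Fail CI (or a health endpoint) when models and database diverge.
plan, err := client.PlanMigration(ctx) // signature assumed
if err != nil {
    return err
}
if !plan.IsEmpty() {
    return fmt.Errorf("schema drift detected:\n%s", plan.String())
}
```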

func (Plan) String added in v0.6.0

func (p Plan) String() string

String renders the Plan as a multi-line human-readable report. Each Op contributes one line via its own [Operation.String]. Empty plans render as "(no changes)".

The format is intentionally minimal so the F3-5 CLI can wrap it without parsing — table or coloured output is the CLI's responsibility, not the Plan's.

type PoolOption added in v0.3.0

type PoolOption interface {
	// contains filtered or unexported methods
}

PoolOption is a configuration option for the database connection pool. These are applied to the *sql.DB before creating the Client.

func WithConnMaxIdleTime added in v0.3.0

func WithConnMaxIdleTime(d time.Duration) PoolOption

WithConnMaxIdleTime sets the maximum amount of time a connection may be idle.

func WithConnMaxLifetime added in v0.3.0

func WithConnMaxLifetime(d time.Duration) PoolOption

WithConnMaxLifetime sets the maximum amount of time a connection may be reused.

func WithMaxIdleConns added in v0.3.0

func WithMaxIdleConns(n int) PoolOption

WithMaxIdleConns sets the maximum number of idle connections in the pool.

func WithMaxOpenConns added in v0.3.0

func WithMaxOpenConns(n int) PoolOption

WithMaxOpenConns sets the maximum number of open connections to the database.
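Since the doc says these are applied to the *sql.DB before the Client is created, the four PoolOptions map one-to-one onto database/sql's pool setters. The equivalence, with illustrative values:

```go
// quark.WithMaxOpenConns(25)                  ≡
db.SetMaxOpenConns(25)
// quark.WithMaxIdleConns(5)                   ≡
db.SetMaxIdleConns(5)
// quark.WithConnMaxLifetime(30 * time.Minute) ≡
db.SetConnMaxLifetime(30 * time.Minute)
// quark.WithConnMaxIdleTime(5 * time.Minute)  ≡
db.SetConnMaxIdleTime(5 * time.Minute)
```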

type PostgresDialect

type PostgresDialect struct {
	// contains filtered or unexported fields
}

PostgresDialect implements the PostgreSQL dialect.

func (*PostgresDialect) AcquireMigrationLock added in v0.6.0

func (d *PostgresDialect) AcquireMigrationLock(ctx context.Context, db DBConnector, name string, timeout time.Duration) (MigrationLock, error)

AcquireMigrationLock uses `pg_advisory_lock(hashtext(name))` on a dedicated connection. Session-level (not transaction-level), so the caller can run multiple statements under the lock without holding a long transaction open. Released by `pg_advisory_unlock` on Release.

Timeout is honoured via `SET lock_timeout` on the connection — PG's native way to bound advisory-lock waits. A timeout violation surfaces as SQLSTATE `55P03` (`lock_not_available`); we translate it to `ErrLockTimeout`.

func (*PostgresDialect) AlterTableAddColumn

func (p *PostgresDialect) AlterTableAddColumn(table, column, dataType string) string

func (*PostgresDialect) AlterTableAlterColumn

func (p *PostgresDialect) AlterTableAlterColumn(table, column, newDataType string) string

func (*PostgresDialect) AlterTableDropColumn

func (p *PostgresDialect) AlterTableDropColumn(table, column string) string

func (*PostgresDialect) BuildProcedureCall

func (p *PostgresDialect) BuildProcedureCall(procedure string, argCount int) string

func (*PostgresDialect) BuildRoutineQuery

func (p *PostgresDialect) BuildRoutineQuery(routine string, argCount int) string

func (*PostgresDialect) CurrentTimestamp

func (p *PostgresDialect) CurrentTimestamp() string

func (*PostgresDialect) IntrospectSchema added in v0.6.0

func (d *PostgresDialect) IntrospectSchema(ctx context.Context, exec Executor) (Schema, error)

IntrospectSchema reads the PG schema by querying `information_schema` (more portable than `pg_catalog` and sufficient for the column-level surface F3-2 needs). The lookup is scoped to `current_schema()` (typically `public`) so multi-schema setups don't drag in unrelated tables.

PG caveats handled here:

  • The data type returned by `data_type` is the SQL-standard form (`integer`, `bigint`, `character varying`). For native parameter-bearing types (`varchar(255)`, `numeric(10,2)`) we reassemble the precision/scale/length from the adjacent columns so the round-trip vs Go-side schema is comparable.
  • The `column_default` is preserved as-is — including PG's `nextval('seq')` wrappers around SERIAL/IDENTITY columns. The diff layer is responsible for recognising those.

func (*PostgresDialect) JSONExtract

func (p *PostgresDialect) JSONExtract(column, path string) (string, []any, error)

func (*PostgresDialect) LastInsertIDQuery

func (p *PostgresDialect) LastInsertIDQuery(table, pkColumn string) string

func (*PostgresDialect) LimitOffset

func (p *PostgresDialect) LimitOffset(limit, offset int) string

func (*PostgresDialect) LockSuffix added in v0.4.0

func (p *PostgresDialect) LockSuffix(opts LockOptions) (string, string, error)

PostgreSQL pessimistic locking (PG 9.5+ for SKIP LOCKED).

FOR UPDATE [SKIP LOCKED|NOWAIT]
FOR SHARE  [SKIP LOCKED|NOWAIT]

func (*PostgresDialect) Name

func (d *PostgresDialect) Name() string

func (*PostgresDialect) Placeholder

func (p *PostgresDialect) Placeholder(index int) string

func (*PostgresDialect) Placeholders

func (p *PostgresDialect) Placeholders(n int) []string

func (*PostgresDialect) Quote

func (p *PostgresDialect) Quote(identifier string) string

func (*PostgresDialect) RenameColumn

func (p *PostgresDialect) RenameColumn(table, oldName, newName string) string

func (*PostgresDialect) RenameTable

func (p *PostgresDialect) RenameTable(oldName, newName string) string

func (*PostgresDialect) Returning

func (p *PostgresDialect) Returning(columns ...string) string

func (*PostgresDialect) SupportsLastInsertID

func (p *PostgresDialect) SupportsLastInsertID() bool

func (*PostgresDialect) SupportsReturning

func (p *PostgresDialect) SupportsReturning() bool

func (*PostgresDialect) SupportsTransactionalDDL

func (p *PostgresDialect) SupportsTransactionalDDL() bool

func (*PostgresDialect) UpsertSQL

func (p *PostgresDialect) UpsertSQL(conflictCols, updateCols []string, _ int) string

UpsertSQL for PostgreSQL: INSERT … ON CONFLICT (cols) DO UPDATE SET col = EXCLUDED.col

type Query

type Query[T any] struct {
	BaseQuery
}

Query represents a type-safe database query builder for model T. All builder methods return a new Query (immutable/clone pattern) for thread-safety. Execution methods live in query_exec.go and query_crud.go.

func For

func For[T any](ctx context.Context, provider ClientProvider) *Query[T]

For creates a Query builder for the given model type. This is the primary entry point for type-safe database operations.

Example:

type User struct {
    ID   int64  `db:"id"`
    Name string `db:"name"`
}

user, err := quark.For[User](ctx, client).Find(1)
users, err := quark.For[User](ctx, client).Where("active", "=", true).List()

func ForTx

func ForTx[T any](ctx context.Context, tx *Tx) *Query[T]

ForTx creates a Query builder for the given model type bound to a transaction. This is the transactional counterpart of For[T]().

Example:

err := client.Tx(ctx, func(tx *quark.Tx) error {
    return quark.ForTx[User](ctx, tx).Create(&user)
})

func (*Query[T]) Apply

func (q *Query[T]) Apply(scopes ...Scope[T]) *Query[T]

Apply applies one or more Scope functions to the query. Scopes are composable, reusable query fragments.

Example:

activeUsers := func(q *quark.Query[User]) *quark.Query[User] {
    return q.Where("active", "=", true)
}
users, _ := quark.For[User](ctx, client).Apply(activeUsers).List()

func (*Query[T]) AsSubquery added in v0.4.0

func (q *Query[T]) AsSubquery() (*Subquery, error)

AsSubquery captures the current Query[T] as a renderable Subquery. The SELECT is rendered eagerly: identifier validation, soft-delete and tenant predicates, JOINs, GROUP BY, HAVING, ORDER BY, LIMIT all run at AsSubquery time.

The captured SQL uses '?' as the bind marker; the outer query swaps it for the dialect's placeholder when the wrapping Expr is rendered. The SELECT cols can be customised via the standard Select() before capture — typical use is `Select("id")` for an `IN (subquery)` shape.

Pessimistic lock options (`ForUpdate` / `ForShare` / `SkipLocked` / `NoWait`) are explicitly rejected on the inner query — MSSQL emits table hints (`WITH (UPDLOCK, ROWLOCK)`) inline in the FROM clause that are not legal inside a subquery context, and PG / MySQL / Oracle's `FOR UPDATE` suffix on a subquery is technically valid but misleading when the outer caller already drives row locking. Acquire locks on the outer query instead.
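The typical `Select("id")` capture composes with the documented With / Join surface, for example:

```go
// Capture a user_id projection, then reference it as a CTE.
sub, err := quark.For[Order](ctx, client).
    Where("status", "=", "open").
    Select("user_id").
    AsSubquery()
if err != nil {
    return err
}
users, err := quark.For[User](ctx, client).
    With("open_orders", sub).
    Join("open_orders").On("users.id", "=", "open_orders.user_id").
    List()
```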

func (*Query[T]) Avg

func (q *Query[T]) Avg(column string) (float64, error)

Avg returns the average of the given column across all matching rows.

func (*Query[T]) Cache

func (q *Query[T]) Cache(ttl time.Duration, tags ...string) *Query[T]

Cache enables caching of this query's results with the given TTL.
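For example (the tag name is illustrative):

```go
users, err := quark.For[User](ctx, client).
    Where("active", "=", true).
    Cache(5*time.Minute, "users"). // cached for 5 minutes under the "users" tag
    List()
```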

func (*Query[T]) Count

func (q *Query[T]) Count() (int64, error)

Count returns the total number of matching rows.

func (*Query[T]) Create

func (q *Query[T]) Create(entity *T) error

Create inserts a new record and recursively saves associations. The entity must have db tags on the fields to be persisted. Returns with the ID set from the database.

func (*Query[T]) CreateBatch

func (q *Query[T]) CreateBatch(entities []*T) error

CreateBatch inserts multiple records in a single SQL statement using bulk VALUES. Each entity gets its PK populated when the dialect supports RETURNING; otherwise PKs are left at their zero value (callers can re-query if needed).

Example:

users := []*User{{Name: "Alice"}, {Name: "Bob"}}
err := quark.For[User](ctx, client).CreateBatch(users)

func (*Query[T]) Cursor

func (q *Query[T]) Cursor() (*Cursor[T], error)

Cursor returns a Cursor for manual iteration over large result sets. The Cursor must be closed after use (defer cursor.Close()).

Example:

cursor, err := quark.For[User](ctx, client).Where("active", "=", true).Cursor()
if err != nil { log.Fatal(err) }
defer cursor.Close()

for cursor.Next() {
    var user User
    if err := cursor.Scan(&user); err != nil { break }
    process(user)
}

func (*Query[T]) Delete

func (q *Query[T]) Delete(entity *T) (int64, error)

Delete performs a soft delete by setting deleted_at = NOW(). If the model has no deleted_at field, it performs a hard delete. Returns the number of rows affected.

func (*Query[T]) DeleteBatch

func (q *Query[T]) DeleteBatch(ids []any) (int64, error)

DeleteBatch deletes multiple records by their primary key values using DELETE … WHERE pk IN (…) statements, chunked to batchChunkSize to stay within every supported dialect's placeholder limit (Oracle: 1000, MSSQL: ~2100, others: larger).

Example:

affected, err := quark.For[User](ctx, client).DeleteBatch([]any{1, 2, 3})

func (*Query[T]) DeleteBy

func (q *Query[T]) DeleteBy() (int64, error)

DeleteBy performs a hard delete with WHERE conditions. Requires a Where clause for safety.

func (*Query[T]) Distinct

func (q *Query[T]) Distinct() *Query[T]

Distinct adds SELECT DISTINCT to the query.

func (*Query[T]) Except added in v0.4.0

func (q *Query[T]) Except(other *Query[T]) *Query[T]

Except renders `EXCEPT` (or `MINUS` on Oracle). Not supported on MySQL / MariaDB.

func (*Query[T]) Find

func (q *Query[T]) Find(id any) (T, error)

Find retrieves a single row by primary key.

func (*Query[T]) First

func (q *Query[T]) First() (T, error)

First returns the first matching row or ErrNotFound.

func (*Query[T]) ForShare added in v0.4.0

func (q *Query[T]) ForShare() *Query[T]

ForShare marks the query so the emitted SELECT acquires a shared read lock. Composes with SkipLocked / NoWait. Not supported on SQLite; MSSQL approximates with HOLDLOCK; PG / MySQL 8+ / MariaDB support it natively; Oracle has no FOR SHARE and returns ErrUnsupportedFeature.

func (*Query[T]) ForUpdate added in v0.4.0

func (q *Query[T]) ForUpdate() *Query[T]

ForUpdate marks the query so the emitted SELECT acquires a row-level FOR UPDATE lock. Composes with SkipLocked / NoWait. Returns the receiver (no error) so it chains naturally; SQL surface failures (unsupported dialect, invalid combination) surface at execution time.

rows, err := quark.For[Order](ctx, client).
    Where("status", "=", "pending").
    ForUpdate().
    Limit(50).
    List()

func (*Query[T]) GroupBy

func (q *Query[T]) GroupBy(columns ...string) *Query[T]

GroupBy adds a GROUP BY clause.

func (*Query[T]) HardDelete

func (q *Query[T]) HardDelete(entity *T) (int64, error)

HardDelete permanently deletes the entity by its primary key.

func (*Query[T]) Having

func (q *Query[T]) Having(column string, operator string, value any) *Query[T]

Having adds a HAVING condition (used together with GroupBy).

The column argument is validated as a plain identifier — no parentheses, function calls, or expressions. To filter on aggregates such as COUNT(*) or SUM(col), use HavingAggregate instead.

func (*Query[T]) HavingAggregate added in v0.4.0

func (q *Query[T]) HavingAggregate(fn, column, operator string, value any) *Query[T]

HavingAggregate adds a HAVING condition over an aggregate function.

fn must be one of COUNT, SUM, AVG, MIN, MAX (case-insensitive). column is either a regular column name (validated through SQLGuard) or "*" — only accepted with COUNT, since "SUM(*)" / "AVG(*)" / etc. are not valid SQL. operator goes through the same whitelist Where uses (=, !=, <>, <, <=, >, >=, IN, NOT IN, BETWEEN, IS [NOT] NULL, LIKE, ILIKE).

Example:

groups, err := quark.For[Order](ctx, client).
    GroupBy("status").
    HavingAggregate("COUNT", "*", ">", 5).
    List()
// emitted: ... GROUP BY "status" HAVING COUNT(*) > $1

This closes the historic Having(column, op, value) limitation where the column went through ValidateIdentifier and aggregates therefore could not be expressed without RawQuery. The structured-AST form Having(Func("count", Col("*")), ">", 5) arrives with the full Phase 2 AST; HavingAggregate is the focused, type-safe shortcut for the overwhelmingly common case.

func (*Query[T]) HavingExpr added in v0.4.0

func (q *Query[T]) HavingExpr(e Expr) *Query[T]

HavingExpr adds a HAVING condition built from the Expr AST. Same rendering pipeline as WhereExpr; useful for aggregate predicates that need the full composition surface (Func("COUNT", Col("*")) > Lit(5), and so on).
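Using only the Expr constructors documented elsewhere on this page (Gt, Func, Col, Lit), the aggregate predicate from the HavingAggregate example reads:

```go
groups, err := quark.For[Order](ctx, client).
    GroupBy("status").
    HavingExpr(quark.Gt(
        quark.Func("COUNT", quark.Col("*")),
        quark.Lit(5),
    )).
    List()
```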

func (*Query[T]) Intersect added in v0.4.0

func (q *Query[T]) Intersect(other *Query[T]) *Query[T]

Intersect renders `INTERSECT` between the base and the operand. Not supported on MySQL / MariaDB — those return ErrUnsupportedFeature from setOpKeyword at render time.

func (*Query[T]) Iter

func (q *Query[T]) Iter(fn func(T) error) error

Iter executes the query and iterates over results one by one. Uses streaming to handle large datasets without loading all into memory.

Example:

err := quark.For[User](ctx, client).Where("active", "=", true).Iter(func(user User) error {
    process(user)
    return nil
})

func (*Query[T]) Join

func (q *Query[T]) Join(table string) *JoinBuilder[T]

Join opens an INNER JOIN against `table`. Complete the JOIN with `.On(left, op, right)` (typed binary identifier comparison) or `.OnRaw(onClause)` (free-form, validated through the same identifier grammar as `On`).

The structured form replaces the v0.3.x string-raw `Join(table, on)` signature; see `MIGRATION_v0.4.0.md` for the mechanical rewrite.

func (*Query[T]) LeftJoin

func (q *Query[T]) LeftJoin(table string) *JoinBuilder[T]

LeftJoin opens a LEFT JOIN. See Join for ON-clause grammar.

func (*Query[T]) Limit

func (q *Query[T]) Limit(n int) *Query[T]

Limit sets the maximum number of rows to return.

func (*Query[T]) List

func (q *Query[T]) List() ([]T, error)

List executes the query and returns all matching rows. If Limit() is not called, uses a safe default (100) to prevent OOM. Use Iter() for unbounded streaming or Paginate() for large datasets.

func (*Query[T]) Max

func (q *Query[T]) Max(column string) (float64, error)

Max returns the maximum value of the given column across all matching rows.

func (*Query[T]) Min

func (q *Query[T]) Min(column string) (float64, error)

Min returns the minimum value of the given column across all matching rows.

func (*Query[T]) MustAsSubquery added in v0.4.0

func (q *Query[T]) MustAsSubquery() *Subquery

MustAsSubquery is the panic-on-error variant of AsSubquery, for use in expression composition where errors would otherwise have to be threaded through the AST. The panic only triggers when the inner query is malformed (invalid identifier, etc.); for well-formed inputs it never fires.

func (*Query[T]) NoWait added in v0.4.0

func (q *Query[T]) NoWait() *Query[T]

NoWait tells the database to fail immediately if any matching row is already locked by another transaction. Combine with ForUpdate / ForShare. Implementation-defined per dialect.

func (*Query[T]) Offset

func (q *Query[T]) Offset(n int) *Query[T]

Offset sets the number of rows to skip.

func (*Query[T]) OnlyTrashed added in v0.3.0

func (q *Query[T]) OnlyTrashed() *Query[T]

OnlyTrashed returns a query that filters down to soft-deleted rows (deleted_at IS NOT NULL) so callers can list, restore, or hard-delete the trash. A no-op when the model has no deleted_at column.

func (*Query[T]) Or

func (q *Query[T]) Or(fn func(*Query[T]) *Query[T]) *Query[T]

Or adds an OR condition group. The callback receives a fresh Query to build conditions. All conditions within the callback are grouped with AND and joined to the outer query with OR.

Example:

quark.For[User](ctx, client).
    Where("active", "=", true).
    Or(func(q *Query[User]) *Query[User] {
        return q.Where("role", "=", "admin").Where("role", "=", "superadmin")
    }).List()

Generates: WHERE "active" = $1 OR ("role" = $2 AND "role" = $3)

Under the RowLevelSecurity tenant strategy the OR group inherits the parent's tenant_id predicate so it cannot escape isolation via SQL operator precedence.

func (*Query[T]) OrderBy

func (q *Query[T]) OrderBy(column string, direction string) *Query[T]

OrderBy adds an ORDER BY clause.

func (*Query[T]) Paginate

func (q *Query[T]) Paginate(pageSize, page int) (*Page[T], error)

Paginate executes the query with pagination. Returns current page, total count, and error.

Example:

page, err := quark.For[User](ctx, client).Paginate(100, 0) // 100 per page, page 0
page, err := quark.For[User](ctx, client).Where("active", "=", true).Paginate(50, 2)

func (*Query[T]) Preload

func (q *Query[T]) Preload(relations ...string) *Query[T]

Preload specifies relations to load automatically.
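For example, assuming illustrative relation names on User; dotted paths preload nested relations (the nested-Preload support that landed in v0.4):

```go
users, err := quark.For[User](ctx, client).
    Preload("Orders", "Orders.Items"). // "Orders" / "Orders.Items" are placeholders
    List()
```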

func (*Query[T]) Restore added in v0.3.0

func (q *Query[T]) Restore(entity *T) (int64, error)

Restore clears the deleted_at column on the row identified by entity's primary key, "untrashing" it. Returns the number of rows affected.

Restore implicitly scopes to currently-trashed rows (deleted_at IS NOT NULL): a Restore on a row that was never deleted is a 0-row no-op rather than a corrupting NULL-write. Useful as the inverse of Delete in admin flows.

Phase-1 F1-5. Returns ErrInvalidModel when the model has no deleted_at.
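A typical trash round-trip, combining Delete, OnlyTrashed, and Restore:

```go
_, _ = quark.For[User](ctx, client).Delete(&user)             // soft delete: sets deleted_at
trash, _ := quark.For[User](ctx, client).OnlyTrashed().List() // list only trashed rows
rows, err := quark.For[User](ctx, client).Restore(&user)      // clears deleted_at; 0 rows if not trashed
```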

func (*Query[T]) RightJoin

func (q *Query[T]) RightJoin(table string) *JoinBuilder[T]

RightJoin opens a RIGHT JOIN. See Join for ON-clause grammar.

func (*Query[T]) Select

func (q *Query[T]) Select(columns ...string) *Query[T]

Select specifies columns to select. If empty, all columns are selected.

func (*Query[T]) SelectExpr added in v0.4.0

func (q *Query[T]) SelectExpr(alias string, e Expr) *Query[T]

SelectExpr adds an AST projection to the SELECT list, aliased as `alias`. Use it for window functions, scalar subqueries, or any expression the plain `Select(cols...)` API can't model:

q := quark.For[Order](ctx, client).
    SelectExpr("rank", quark.Over(quark.Rank(),
        quark.NewWindow().
            PartitionBy(quark.Col("status")).
            OrderBy(quark.Col("amount"), true))).
    SelectExpr("running_total", quark.Over(
        quark.Func("SUM", quark.Col("amount")),
        quark.NewWindow().OrderBy(quark.Col("id"), false)))

The expression is rendered against a `qmark`-emitting dialect at SelectExpr time, so the inner '?' markers are reindexed to the outer dialect's placeholder syntax when buildSelect runs. The args land in the args slice between any CTE args and the WHERE args — matching the SQL-surface order of the SELECT projection.

Composing SelectExpr with the plain Select(cols...) is allowed: the regular columns render first, the AST projections after, comma-separated. If neither is set, the SELECT defaults to '*'.

func (*Query[T]) SkipLocked added in v0.4.0

func (q *Query[T]) SkipLocked() *Query[T]

SkipLocked tells the database to skip rows that are already locked by another transaction instead of blocking on them. Combine with ForUpdate / ForShare. Implementation-defined per dialect.

func (*Query[T]) Sum

func (q *Query[T]) Sum(column string) (float64, error)

Sum returns the sum of the given column across all matching rows.

func (*Query[T]) Track added in v0.3.0

func (q *Query[T]) Track() *TrackedQuery[T]

Track returns a TrackedQuery whose Find/First/List yield *Tracked[T] values carrying a column-value snapshot. Call Save on the result to emit an UPDATE that touches only the columns whose values changed since load — the permanent fix for the zero-value trap (P0-4).

Track is opt-in. Existing Find/First/List remain unchanged.
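A hedged sketch of the shape this implies; the exact Tracked[T] surface (how the entity is accessed, Save's signature) is assumed from the description above, not taken from the API:

```go
// Load through Track, mutate, then Save writes only the changed columns.
tu, err := quark.For[User](ctx, client).Track().Find(42)
if err != nil {
    return err
}
tu.Entity.Active = false // field access on Tracked[T] is assumed here
rows, err := tu.Save(ctx)
```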

func (*Query[T]) Union added in v0.4.0

func (q *Query[T]) Union(other *Query[T]) *Query[T]

Union appends a UNION (DISTINCT) operand to the query. The combined statement renders flat — `SELECT ... UNION SELECT ...` — without parentheses around the operands, since SQLite's compound-select grammar rejects parenthesised operands. The flat form is the portable shape across all six target dialects.

Identifier validation runs eagerly on the operand so a malformed other-query surfaces at attach time, not at the outer query's exec time. Outer-query `OrderBy` / `Limit` apply to the combined result (the SQL standard binding); the operand cannot have its own ORDER BY / LIMIT (rejected with `ErrUnsupportedFeature`). See attachSetOp for the full operand restriction list.
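For example, with the outer OrderBy binding to the combined result:

```go
admins := quark.For[User](ctx, client).Where("role", "=", "admin")
users, err := quark.For[User](ctx, client).
    Where("flagged", "=", true).
    Union(admins).
    OrderBy("id", "ASC"). // applies to the combined result, not one operand
    List()
```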

func (*Query[T]) UnionAll added in v0.4.0

func (q *Query[T]) UnionAll(other *Query[T]) *Query[T]

UnionAll is the multiset variant: `UNION ALL` keeps duplicate rows.

func (*Query[T]) Unscoped

func (q *Query[T]) Unscoped() *Query[T]

Unscoped ignores soft-delete filters for the query, returning both trashed and non-trashed rows. Equivalent to WithTrashed; kept for backward compatibility.

func (*Query[T]) Update

func (q *Query[T]) Update(entity *T) (int64, error)

Update updates the entity by its primary key with partial-update semantics: only fields whose value is non-zero for their type are written.

CAUTION (zero-value trap, P0-4): because zero values are skipped, calling Update cannot write false to a bool, 0 to an integer, "" to a string, or nil to a pointer/slice/map. To write a zero value explicitly, use UpdateFields or UpdateMap, or load through the opt-in Track dirty-tracking API. When Update detects skipped zero-value fields it logs a WARN line so callers notice the silent skip.

Any Where() conditions are merged into the WHERE clause alongside the PK. Returns the number of rows affected. Recursively saves associations.

func (*Query[T]) UpdateBatch

func (q *Query[T]) UpdateBatch(entities []*T) error

UpdateBatch updates multiple records by their primary keys within a single transaction. Each entity undergoes a partial update: zero-value fields are skipped (same semantics as Update). A transaction is used to guarantee atomicity across all rows.

Example:

err := quark.For[User](ctx, client).UpdateBatch(users)

func (*Query[T]) UpdateFields added in v0.3.0

func (q *Query[T]) UpdateFields(entity *T, fields ...string) (int64, error)

UpdateFields updates only the named fields on the entity, bypassing the zero-value filter that Update applies. This is the recommended API when you need to write false / 0 / "" / nil to a column — values that Update would silently skip.

fields are matched against struct field db tags only — the same identifier resolution as Update and Find. Listing a struct field name without a db tag returns ErrInvalidQuery: there is one canonical name per column and we don't accept aliases here, to keep the resolution unambiguous.

The primary key is never overwritten; listing a PK column returns an error. If the client is configured with the RowLevelSecurity tenant strategy, the tenant column is injected before the SET clause is built; callers do not need to (and should not) list it explicitly.

Example:

user := User{ID: 42, Active: false}
rows, err := quark.For[User](ctx, client).UpdateFields(&user, "active")
// emitted: UPDATE "users" SET "active" = $1 WHERE "id" = $2  args=[false, 42]

Returns the number of rows affected.

func (*Query[T]) UpdateMap

func (q *Query[T]) UpdateMap(data map[string]any) (int64, error)

UpdateMap updates fields using a map (for partial updates without a full entity). Requires a Where clause for safety. Returns the number of rows affected.
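Because the map values are written as given, this is one of the documented escapes from the zero-value trap (the `cutoff` value is a placeholder):

```go
// Deactivate stale users; false is written, not skipped.
rows, err := quark.For[User](ctx, client).
    Where("last_login", "<", cutoff).
    UpdateMap(map[string]any{"active": false})
```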

func (*Query[T]) Upsert

func (q *Query[T]) Upsert(entity *T, conflictCols []string, updateCols []string) error

Upsert inserts or updates a record depending on whether a conflict occurs on conflictCols. updateCols specifies which columns to update on conflict; if empty, all non-conflict columns are updated.

Example:

quark.For[User](ctx, client).Upsert(&user, []string{"email"}, []string{"name", "updated_at"})

func (*Query[T]) UpsertBatch

func (q *Query[T]) UpsertBatch(entities []*T, conflictCols []string, updateCols []string) error

UpsertBatch inserts or updates multiple records in a single batch operation. conflictCols defines uniqueness (e.g. primary key or unique index columns). updateCols defines which columns to update on conflict; empty = all non-conflict columns.

Dialect strategies:

  • Postgres / SQLite / MySQL / MariaDB: multi-row INSERT … ON CONFLICT / ON DUPLICATE KEY
  • MSSQL: single MERGE … USING (VALUES …) AS src(…)
  • Oracle: N individual MERGE statements (Oracle IDENTITY restriction prevents bulk MERGE)

Example:

err := quark.For[User](ctx, client).UpsertBatch(users, []string{"email"}, []string{"name"})

func (*Query[T]) Where

func (q *Query[T]) Where(column string, operator string, value any) *Query[T]

Where adds a WHERE condition with AND logic.

func (*Query[T]) WhereBetween

func (q *Query[T]) WhereBetween(column string, start, end any) *Query[T]

WhereBetween adds a WHERE ... BETWEEN condition.

func (*Query[T]) WhereExpr added in v0.4.0

func (q *Query[T]) WhereExpr(e Expr) *Query[T]

WhereExpr adds a WHERE condition built from a composable Expr AST.

The AST is rendered against the active dialect at call time, producing a fragment with '?' bind markers plus the args. Storage and execution reuse the existing raw-fragment slot in condition: buildWhereClause substitutes each '?' for the dialect placeholder at the correct argIndex, so the AST stays dialect-agnostic at construction time and integrates cleanly with WhereJSON, Or, and the rest of the builder.

Errors raised during ToSQL — unknown function names, invalid identifiers, invalid operators, empty IN lists — are stashed on the query and surface at execution time wrapping ErrInvalidQuery.

Example:

q := quark.For[User](ctx, client).WhereExpr(
    quark.Or(
        quark.Eq(quark.Col("role"), quark.Lit("admin")),
        quark.And(
            quark.Gt(quark.Col("logins"), quark.Lit(10)),
            quark.Eq(quark.Col("verified"), quark.Lit(true)),
        ),
    ),
)

func (*Query[T]) WhereIn

func (q *Query[T]) WhereIn(column string, values []any) *Query[T]

WhereIn adds a WHERE ... IN condition.

func (*Query[T]) WhereJSON

func (q *Query[T]) WhereJSON(column, path, operator string, value any) *Query[T]

WhereJSON adds a WHERE condition for a JSON field. column is the JSON column name, path is a dotted key path within the JSON object (e.g. "user.name"). The path is validated and bound as a parameter — never interpolated into the SQL surface — so it cannot carry SQL injection. See guard.ValidateJSONPath for the accepted grammar.

On invalid path the error is stashed on the query and surfaces at execution time (List, First, etc.), wrapping ErrInvalidJSONPath.
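For example (column and path names are illustrative):

```go
// "prefs.theme" is a dotted path inside the JSON column; it is bound
// as a parameter, never interpolated into the SQL.
users, err := quark.For[User](ctx, client).
    WhereJSON("metadata", "prefs.theme", "=", "dark").
    List()
```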

func (*Query[T]) WhereNot

func (q *Query[T]) WhereNot(column string, operator string, value any) *Query[T]

WhereNot adds a WHERE NOT condition with AND logic.

Example:

quark.For[User](ctx, client).WhereNot("active", "=", false).List()

Generates: WHERE NOT ("active" = $1)

func (*Query[T]) WhereSubquery

func (q *Query[T]) WhereSubquery(column, operator, subquery string) *Query[T]

WhereSubquery adds a WHERE column operator (subquery) condition. The subquery is a raw SQL string. Use this only when AllowRawQueries is enabled.

Example:

sub := "SELECT MAX(id) FROM orders WHERE status = 'open'"
quark.For[User](ctx, client).WhereSubquery("id", "IN", sub).List()

func (*Query[T]) With added in v0.4.0

func (q *Query[T]) With(name string, sub *Subquery) *Query[T]

With attaches a non-recursive CTE to the query. The CTE renders as `WITH <name> AS (<inner>)` before the outer SELECT, and the outer query can reference the CTE by name in JOIN clauses.

Example:

topOrders, _ := quark.For[Order](ctx, client).
    Where("amount", ">", 100).
    Select("user_id", "amount").
    AsSubquery()

users, err := quark.For[User](ctx, client).
    With("top_orders", topOrders).
    Join("top_orders", "users.id = top_orders.user_id").
    Limit(50).
    List()

Multiple With calls compose: the entries render comma-separated in the order they were added. If any entry is recursive, the prefix becomes `WITH RECURSIVE ...`.

func (*Query[T]) WithRecursive added in v0.4.0

func (q *Query[T]) WithRecursive(name string, sub *Subquery) *Query[T]

WithRecursive is the recursive form. Emits `WITH RECURSIVE` (or just promotes the prefix when at least one of the previously-added entries is recursive). The inner Subquery is responsible for shaping the recursive body — typically a `UNION ALL` between a base case and a step that references the CTE name. quark's typed Subquery surface doesn't yet model UNION (F2-set), so practical recursive use today is limited to engines/cases where the Subquery body can be constructed from a single SELECT — full recursive support is the motivating use case for F2-set.

func (*Query[T]) WithTrashed added in v0.3.0

func (q *Query[T]) WithTrashed() *Query[T]

WithTrashed returns a query that includes both soft-deleted (trashed) and live rows — the same effect as Unscoped, named for parity with the scope-driven idiom that other ORMs use. Only meaningful when the model carries a deleted_at column.
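Example (sketch):

```go
// Includes rows whose deleted_at is set as well as live rows.
all, err := quark.For[User](ctx, client).WithTrashed().List()
```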

type QueryEvent

type QueryEvent struct {
	SQL       string
	Args      []any
	Duration  time.Duration
	Rows      int64
	Error     error
	Table     string
	Operation string // "SELECT", "INSERT", "UPDATE", "DELETE"
}

QueryEvent represents an executed query.


type QueryFunc

type QueryFunc func(ctx context.Context, exec Executor, sqlStr string, args []any) (*sql.Rows, error)

QueryFunc is the signature for SQL query functions used by middleware.

type QueryObserver

type QueryObserver interface {
	ObserveQuery(event QueryEvent)
}

QueryObserver is called after each query execution. Use this for logging, metrics, auditing, etc.
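A sketch of a slow-query logger implementing the interface (the registration hook on Client is not shown in this section; wire the observer through whatever registration method your Client exposes):

```go
type slowQueryLogger struct {
	threshold time.Duration
}

func (l slowQueryLogger) ObserveQuery(e quark.QueryEvent) {
	if e.Error != nil || e.Duration > l.threshold {
		log.Printf("%s on %s took %s (rows=%d, err=%v): %s",
			e.Operation, e.Table, e.Duration, e.Rows, e.Error, e.SQL)
	}
}
```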

type QueryRowFunc

type QueryRowFunc func(ctx context.Context, exec Executor, sqlStr string, args []any) *sql.Row

QueryRowFunc is the signature for SQL single-row query functions used by middleware.

type RelationMeta

type RelationMeta = schema.RelationMeta

RelationMeta is the metadata for a model relation.

type Result added in v0.6.0

type Result interface {
	LastInsertId() (int64, error)
	RowsAffected() (int64, error)
}

Result mirrors database/sql.Result for the lock implementations.

type Routine

type Routine[T any] struct {
	// contains filtered or unexported fields
}

Routine is a builder for executing database functions and stored procedures that return results (table-valued functions or scalar functions).

func NewRoutine

func NewRoutine[T any](ctx context.Context, provider ClientProvider, routine string, args ...any) *Routine[T]

NewRoutine creates a new Routine builder for the given procedure/function.
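Example (sketch; count_active_users and monthly_report are hypothetical database routines):

```go
// Scalar function returning a single value.
total, err := quark.NewRoutine[int64](ctx, client, "count_active_users").Scalar()

// Table-valued function mapped onto a slice of a model type.
rows, err := quark.NewRoutine[Report](ctx, client, "monthly_report", 2026, 5).List()
```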

func (*Routine[T]) First

func (r *Routine[T]) First() (T, error)

First executes the routine and returns the first row.

func (*Routine[T]) List

func (r *Routine[T]) List() ([]T, error)

List executes the routine and maps the resulting rows to a slice of T.

func (*Routine[T]) Scalar

func (r *Routine[T]) Scalar() (T, error)

Scalar executes the routine and returns a single scalar value. It is a convenience alias for First() when T is a primitive type.

type Row added in v0.6.0

type Row interface {
	Scan(dest ...any) error
}

Row mirrors database/sql's *Row for the lock implementations (Scan only).

type SQLGuard

type SQLGuard = guard.SQLGuard

SQLGuard re-exports the internal guard.SQLGuard. It provides SQL injection prevention utilities for Quark ORM.

func NewSQLGuard

func NewSQLGuard() *SQLGuard

NewSQLGuard creates a new SQLGuard with default settings.

type SQLiteDialect

type SQLiteDialect struct {
	// contains filtered or unexported fields
}

SQLiteDialect implements the SQLite dialect.

func (*SQLiteDialect) AlterTableAddColumn

func (s *SQLiteDialect) AlterTableAddColumn(table, column, dataType string) string

func (*SQLiteDialect) AlterTableAlterColumn

func (s *SQLiteDialect) AlterTableAlterColumn(table, column, newDataType string) string

func (*SQLiteDialect) AlterTableDropColumn

func (s *SQLiteDialect) AlterTableDropColumn(table, column string) string

func (*SQLiteDialect) BuildProcedureCall

func (s *SQLiteDialect) BuildProcedureCall(procedure string, argCount int) string

func (*SQLiteDialect) BuildRoutineQuery

func (s *SQLiteDialect) BuildRoutineQuery(routine string, argCount int) string

func (*SQLiteDialect) CurrentTimestamp

func (s *SQLiteDialect) CurrentTimestamp() string

func (*SQLiteDialect) IntrospectSchema added in v0.6.0

func (d *SQLiteDialect) IntrospectSchema(ctx context.Context, exec Executor) (Schema, error)

IntrospectSchema reads the SQLite schema using `sqlite_master` for the table list and `PRAGMA table_info(<table>)` for the column metadata of each table. This avoids parsing the CREATE TABLE DDL, which would be brittle.

SQLite caveats handled here:

  • System tables (`sqlite_*`) and quark internal tables (`quark_*`) are filtered out. The diff layer doesn't need to reason about them.
  • SQLite's PRAGMA returns columns in declaration order. We preserve that order (Tables is sorted alphabetically; Columns aren't re-sorted within a table).
  • The `dflt_value` column from PRAGMA table_info comes back as a literal SQL fragment (`'draft'`, `0`, `CURRENT_TIMESTAMP`); we pass it through unchanged in `Column.Default`.

func (*SQLiteDialect) JSONExtract

func (s *SQLiteDialect) JSONExtract(column, path string) (string, []any, error)

func (*SQLiteDialect) LastInsertIDQuery

func (s *SQLiteDialect) LastInsertIDQuery(table, pkColumn string) string

func (*SQLiteDialect) LimitOffset

func (s *SQLiteDialect) LimitOffset(limit, offset int) string

func (*SQLiteDialect) LockSuffix added in v0.4.0

func (s *SQLiteDialect) LockSuffix(opts LockOptions) (string, string, error)

SQLite has no row-level pessimistic-lock primitive — locking is transaction-scoped via BEGIN IMMEDIATE / EXCLUSIVE. LockSuffix returns ErrUnsupportedFeature so callers can branch by dialect or fall back.

func (*SQLiteDialect) Name

func (d *SQLiteDialect) Name() string

func (*SQLiteDialect) Placeholder

func (s *SQLiteDialect) Placeholder(index int) string

func (*SQLiteDialect) Placeholders

func (s *SQLiteDialect) Placeholders(n int) []string

func (*SQLiteDialect) Quote

func (s *SQLiteDialect) Quote(identifier string) string

func (*SQLiteDialect) RenameColumn

func (s *SQLiteDialect) RenameColumn(table, oldName, newName string) string

func (*SQLiteDialect) RenameTable

func (s *SQLiteDialect) RenameTable(oldName, newName string) string

func (*SQLiteDialect) Returning

func (s *SQLiteDialect) Returning(columns ...string) string

func (*SQLiteDialect) SupportsLastInsertID

func (s *SQLiteDialect) SupportsLastInsertID() bool

func (*SQLiteDialect) SupportsReturning

func (s *SQLiteDialect) SupportsReturning() bool

func (*SQLiteDialect) SupportsTransactionalDDL

func (s *SQLiteDialect) SupportsTransactionalDDL() bool

func (*SQLiteDialect) UpsertSQL

func (s *SQLiteDialect) UpsertSQL(conflictCols, updateCols []string, _ int) string

UpsertSQL for SQLite: ON CONFLICT (cols) DO UPDATE SET col = excluded.col

type Schema added in v0.6.0

type Schema struct {
	Tables []Table
}

Schema is the dialect-neutral representation of a database schema. It's the foundation for F3-3 (schema diff) — the diff comparator takes a Schema derived from the Go models and a Schema returned by IntrospectSchema, and emits the operations needed to align the two.

Tables are sorted by Name for deterministic ordering; the diff comparator relies on this to produce stable plans.

type SchemaIntrospector added in v0.6.0

type SchemaIntrospector interface {
	IntrospectSchema(ctx context.Context, exec Executor) (Schema, error)
}

SchemaIntrospector is the optional Dialect interface for retrieving the current schema from the database. The same pattern as MigrationLocker — kept as a stand-alone interface so custom dialects downstream don't have to grow this method to keep compiling.

IntrospectSchema returns the schema of the database the executor is connected to (the current schema / database / "user space", depending on dialect semantics). It does NOT cross schema or database boundaries.

type Scope

type Scope[T any] func(*Query[T]) *Query[T]

Scope is a reusable query modifier — a function that receives and returns a *Query[T]. Scopes can be composed via Apply().
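A sketch of a reusable scope (assumes Apply accepts one or more scopes, per the composition note above):

```go
// Active constrains any query to non-disabled rows.
func Active[T any]() quark.Scope[T] {
	return func(q *quark.Query[T]) *quark.Query[T] {
		return q.Where("active", "=", true)
	}
}

users, err := quark.For[User](ctx, client).Apply(Active[User]()).List()
```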

type Subquery added in v0.4.0

type Subquery struct {
	// contains filtered or unexported fields
}

Subquery is a rendered SELECT that can be embedded inside another query through the Expr AST (Sub, Exists, NotExists, InSub, NotInSub).

A subquery is built like any other query — `For[T](...).Where(...)` — and then captured with `AsSubquery()`. The capture eagerly renders the SELECT using the active dialect's identifier quoting but with `?` as the bind marker, so the outer query's `buildWhereClause` can swap each `?` for the dialect's placeholder syntax at the correct arg index when the AST is rendered.

This contract matches the rest of the AST: leaves emit `?`, `substitutePathMarkers` does the placeholder rewrite, args are threaded through `condition.extraArgs`. So a subquery is just another Expr leaf.
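Example (sketch; the InSub signature shown is assumed from the wrapper list above):

```go
openOrders, _ := quark.For[Order](ctx, client).
	Where("status", "=", "open").
	Select("user_id").
	AsSubquery()

buyers, err := quark.For[User](ctx, client).
	WhereExpr(quark.InSub(quark.Col("id"), openOrders)).
	List()
```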

func (*Subquery) SQL added in v0.4.0

func (s *Subquery) SQL() (string, []any)

SQL returns the captured SELECT fragment with '?' bind markers and the args slice. Exposed for test introspection — production code should use the Expr wrappers (Sub, Exists, etc.) which compose through the AST.

type SyncOptions

type SyncOptions struct {
	DryRun        bool // If true, logs the SQL but doesn't execute it.
	NoTransaction bool // If true, doesn't wrap the sync in a transaction.
}

SyncOptions configures the behavior of the Sync operation.

type Table added in v0.6.0

type Table struct {
	Name        string
	Columns     []Column
	Indexes     []Index
	ForeignKeys []ForeignKey
	Checks      []Check
}

Table represents one table in the schema. The neutral representation stores the raw dialect-native type string per column (Column.Type) and, in a later phase, a normalised form for cross-dialect comparison.

type TenantConfig

type TenantConfig struct {
	Strategy       TenantStrategy
	MaxCachedPools int     // Maximum number of DB connection pools to keep open (for DatabasePerTenant)
	BaseClient     *Client // Used for SchemaPerTenant and RowLevelSecurity
	TenantColumn   string  // Column name for RowLevelSecurity, default is "tenant_id"
}

TenantConfig configures the TenantRouter.

func DefaultTenantConfig

func DefaultTenantConfig() TenantConfig

DefaultTenantConfig provides sensible defaults.

type TenantRouter

type TenantRouter struct {
	// contains filtered or unexported fields
}

TenantRouter manages dynamic database connections or queries for different tenants.

func NewTenantRouter

func NewTenantRouter(
	config TenantConfig,
	resolver func(ctx context.Context) string,
	factory func(tenantID string) (*Client, error),
) *TenantRouter

NewTenantRouter creates a new router for multi-tenant database access.
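A sketch of the wiring (tenantKey and openClientFor are placeholders: the resolver typically reads an ID that HTTP middleware stored in the context, and the factory opens a pool for that tenant's database):

```go
router := quark.NewTenantRouter(
	quark.DefaultTenantConfig(),
	func(ctx context.Context) string {
		id, _ := ctx.Value(tenantKey{}).(string)
		return id
	},
	func(tenantID string) (*quark.Client, error) {
		return openClientFor(tenantID)
	},
)

// TenantRouter implements ClientProvider, so it drops into For[T] directly.
users, err := quark.For[User](ctx, router).List()
```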

func (*TenantRouter) ActiveTenants

func (r *TenantRouter) ActiveTenants() []string

ActiveTenants returns a list of active tenant connections in the cache.

func (*TenantRouter) GetClient

func (r *TenantRouter) GetClient(ctx context.Context) (*Client, error)

GetClient resolves the tenant ID from the context and returns the corresponding Client. It implements the ClientProvider interface so it can be used with For[T].

func (*TenantRouter) ResolveTenant

func (r *TenantRouter) ResolveTenant(ctx context.Context) (string, error)

ResolveTenant returns the tenant ID for the context.

type TenantStrategy

type TenantStrategy int

TenantStrategy defines how multi-tenancy is handled.

const (
	// DatabasePerTenant uses a separate database connection pool per tenant.
	// This requires an LRU cache to prevent connection exhaustion.
	DatabasePerTenant TenantStrategy = iota
	// SchemaPerTenant uses a single database connection pool but prefixes
	// the table name with the tenant ID (e.g. "tenant_acme.users").
	SchemaPerTenant
	// RowLevelSecurity uses a single database connection pool and injects
	// a "WHERE tenant_id = ?" condition to every query.
	RowLevelSecurity
)

type Tracked added in v0.3.0

type Tracked[T any] struct {
	// Entity is the loaded value. Mutate fields on it directly; Save will
	// detect the changes and write only those columns.
	Entity *T
	// contains filtered or unexported fields
}

Tracked wraps a loaded entity with a snapshot of its column values, so a later Save can emit an UPDATE limited to the fields that actually changed.

Active Record with lightweight dirty tracking (Phase 1): the snapshot lives on the wrapper, not in the Client, so there is no shared map to grow or evict. Each Tracked carries the metadata it needs (client, table, dialect, pk, meta) to run Save without the caller threading state.

Tracked is the permanent fix for P0-4: a bool / int / string / pointer can be set to its zero value and Save will write it because the diff is taken against the snapshot, not against "is this field non-zero?".

func (*Tracked[T]) Changed added in v0.3.0

func (t *Tracked[T]) Changed() []string

Changed reports the names (db tags) of columns whose value differs between Entity now and the snapshot taken at load time. Useful for tests and observability; Save uses the same comparison internally.

func (*Tracked[T]) Save added in v0.3.0

func (t *Tracked[T]) Save(ctx context.Context) (int64, error)

Save updates the row identified by Entity's primary key, writing only the columns whose value differs from the load-time snapshot. If nothing changed, Save returns (0, nil) without touching the database.

Returns the number of rows affected. Refreshes the internal snapshot on success so subsequent Save calls diff against the new state.
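The load-mutate-save flow, as a sketch:

```go
tu, err := quark.For[User](ctx, client).Track().Find(42)
if err != nil {
	return err
}
tu.Entity.Email = "new@example.com" // mutate the loaded value directly
// tu.Changed() now reports the email column's db tag.
n, err := tu.Save(ctx) // UPDATE ... SET only the changed column
```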

type TrackedQuery added in v0.3.0

type TrackedQuery[T any] struct {
	// contains filtered or unexported fields
}

TrackedQuery is the lightweight wrapper Query[T].Track() returns. It re-issues Find/First/List on the underlying query and wraps each loaded entity with a snapshot for later dirty-tracked Save.

func (*TrackedQuery[T]) Find added in v0.3.0

func (tq *TrackedQuery[T]) Find(id any) (*Tracked[T], error)

Find loads a single entity by primary key and wraps it in a Tracked.

func (*TrackedQuery[T]) First added in v0.3.0

func (tq *TrackedQuery[T]) First() (*Tracked[T], error)

First returns the first matching entity wrapped in a Tracked.

func (*TrackedQuery[T]) List added in v0.3.0

func (tq *TrackedQuery[T]) List() ([]*Tracked[T], error)

List loads all matching entities wrapped in Tracked values.

type Tx

type Tx struct {
	// contains filtered or unexported fields
}

Tx wraps *sql.Tx and provides transactional query execution. It shares dialect, guard, observers, and limits from the parent Client.

func (*Tx) Commit

func (t *Tx) Commit() error

Commit commits the transaction.

func (*Tx) ReleaseSavepoint

func (t *Tx) ReleaseSavepoint(name string) error

ReleaseSavepoint releases the named savepoint.

func (*Tx) Rollback

func (t *Tx) Rollback() error

Rollback aborts the transaction.

func (*Tx) RollbackTo

func (t *Tx) RollbackTo(name string) error

RollbackTo rolls back to the named savepoint.

func (*Tx) Savepoint

func (t *Tx) Savepoint(name string) error

Savepoint creates a savepoint with the given name.

func (*Tx) Tx

func (t *Tx) Tx(ctx context.Context, fn func(tx *Tx) error) error

type TypeMapper added in v0.3.0

type TypeMapper = migrate.TypeMapper

TypeMapper produces a dialect-specific SQL type for a Go type. The caller supplies the dialect name (lower-case: "postgres", "mysql", "mariadb", "sqlite", "mssql", "oracle") and the sizing hints from the field's tag.

type TypeOptions added in v0.3.0

type TypeOptions = migrate.TypeOptions

TypeOptions carries the SQL-type sizing hints parsed from a struct's db tag — `size=N`, `precision=N`, `scale=N` — plus a flag indicating whether the column is the primary key. A zero value for any field means "use the dialect default for the Go type".

type Window added in v0.4.0

type Window struct {
	// contains filtered or unexported fields
}

Window models the `OVER (...)` clause for a window-function expression. Build with NewWindow() and chain PartitionBy / OrderBy. The chain is immutable: each method returns a fresh copy so a Window definition can be reused across multiple Over() calls without aliasing.

func NewWindow added in v0.4.0

func NewWindow() *Window

NewWindow returns an empty Window. An empty Window renders as `OVER ()` — sometimes legitimate (e.g. `COUNT(*) OVER ()` for a running grand total), so it's not an error.

func (*Window) OrderBy added in v0.4.0

func (w *Window) OrderBy(col Expr, desc bool) *Window

OrderBy adds an order entry. Set desc=true for descending; the second argument is the conventional bool toggle to keep the API tight (no "ASC"/"DESC" stringly-typed argument).

func (*Window) PartitionBy added in v0.4.0

func (w *Window) PartitionBy(cols ...Expr) *Window

PartitionBy adds one or more partition expressions. Identifiers go through SQLGuard at render time — pass `Col("status")`, not the raw column string.
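Putting the chain together (sketch; the resulting Window is then passed to Over() on a window-function expression, per the Window docs):

```go
byDept := quark.NewWindow().
	PartitionBy(quark.Col("department")).
	OrderBy(quark.Col("salary"), true) // true = DESC

// Because the chain is immutable, byDept can feed multiple Over() calls.
```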

Directories

Path Synopsis
cache
examples
migrations command
Command migrations is a minimal example of using the `quarkmigrate` package to wire a plan/verify/apply CLI workflow for a Quark-managed schema.
mssql command
mysql command
oracle command
postgres command
sqlite command
internal
db
guard
Package guard provides SQL injection prevention utilities for Quark ORM.
migrate
Package migrate provides internal utilities for database schema migrations.
schema
Package schema provides struct reflection and model metadata caching for Quark ORM.
Package quarkmigrate is the thin CLI wrapper that turns a quark.Client + a set of Go model values into a plan/verify/apply workflow.
