zdb

package module
v0.0.0-...-bf49d33
Published: Mar 29, 2024 License: MIT Imports: 33 Imported by: 19

README

zdb provides a nice API to interact with SQL databases in Go.

Features:

  • Nice API with debugging and testing features.
  • Easier to wrap databases for logging, metrics, etc.
  • Templating in queries.
  • Run queries from filesystem.
  • Deals with some SQL interoperability issues.
  • Basic migrations.

PostgreSQL, SQLite, and MariaDB are supported. Oracle MySQL is not supported as it's lacking some features.

zdb requires at least the following versions, as it uses some features introduced in them:

  • SQLite 3.35 (go-sqlite3 v1.14.8)
  • PostgreSQL 12.0
  • MariaDB 10.5

To avoid confusion it gives an error when connecting if an older version is used. This also requires Go 1.16 or newer.

Full reference documentation: https://godocs.io/zgo.at/zdb

Usage

Connecting

You first need to register a driver similar to how you register a driver for database/sql; several drivers are available:

  • zgo.at/drivers/zdb-pq
  • zgo.at/drivers/zdb-mysql
  • zgo.at/drivers/zdb-go-sqlite3

Simply importing the driver package is enough; e.g.:

import _ "zgo.at/zdb-pq"

Connect() opens a new connection, runs migrations, and/or creates a database if it doesn't exist yet. A basic example:

db, err := zdb.Connect(zdb.ConnectOptions{
    Connect: "sqlite3+:memory:",
})
if err != nil {
    log.Fatal(err)
}
defer db.Close()

Connect is a connection string prefixed with either the database engine (e.g. "postgresql") or the driver name (e.g. "pq"). Further details of the connection string depend on the driver.

Schema creation and migrations are covered in Schema creation and migrations below.

zdb.DB and context

There are two ways to use zdb:

  • The methods on the zdb.DB interface.
  • The top-level zdb.* functions, which accept a context with a zdb.DB value on it.

Both are identical; there is no real difference. Personally I think that using zdb.Get(ctx, ...) is a lot easier, but some people are against storing the database connection on the context as a matter of religion – I don't really see the problem, and it makes a number of things easier. You can use whatever fits your faith.

You can create a context with zdb.WithDB():

db, _ := zdb.Connect(..)
ctx := zdb.WithDB(context.Background(), db)
zdb.Get(ctx, `select 1`)

But as mentioned, you don't need to store the DB on the context; the following is identical, with the context just used for cancellation:

ctx := context.Background()
db.Get(ctx, `select 1`)

I will mostly use the zdb.* functions in this documentation.

You can use zdb.GetDB() or zdb.MustGetDB() to get the zdb.DB back, but this should rarely be needed.

Running queries

The query methods are:

Get(..)             Run a query and get a single row.
Select(..)          Run a query and get multiple rows.
Exec(..)            Execute a query without returning the result.
NumRows(..)         Run a query and return the number of affected rows.
InsertID(..)        Run a query and return the last insert ID.
Query(..)           Select multiple rows, but don't immediately load them.

Most of these work as you would expect, and similar to database/sql and sqlx. The main difference is that Exec() doesn't return an sql.Result and that NumRows() and InsertID() exist for this use case.

You can use ?, $n, or named parameters; these are all identical and work on any database:

zdb.Exec(ctx, `insert into test (value) values (?)`, "hello")
zdb.Exec(ctx, `insert into test (value) values ($1)`, "hello")
zdb.Exec(ctx, `insert into test (value) values (:value)`, zdb.P{"value": "hello"})

zdb.P is just a map[string]any, except shorter. The "P" is for "Parameters".

You can pass multiple structs and/or maps with named parameters:

err = zdb.Exec(ctx, `insert into test (a, b) values (:a, :b)`,
    zdb.P{"a": "hello"},
    struct{ B int }{42})

Get(), Select(), and Query() can scan the results in to a struct or map; see the function documentation for details on the exact rules.

Simple conditionals

There is a mini template language for conditionals; this only works if you're using named parameters:

var values []string
err := zdb.Select(ctx, &values, `
    select * from test
    where
        value = :val
        {{:val2 and value not like :val2}}
`, zdb.P{
    "val":  "hello",
    "val2": "%world",
})

The text between {{:param ... }} will be omitted if param is the type's zero value. End the parameter name with ! to invert the match: {{:param! ... }}.

I find this is a fairly nice middle ground between writing plain SQL queries and using more complex query builder DSLs.

Templates

For more complex use cases you can use text/template; this only works for queries loaded from the filesystem, and the filenames need to end with .gotxt.

See the documentation on Template() for a list of template functions.

Queries from filesystem

Queries are loaded from the filesystem if the query starts with the load: prefix:

var values []string
err := zdb.Select(ctx, &values, `load:find-site`, zdb.P{
    "name": "hello",
})

The special prefix load:[filename] loads a query from the db/query/ directory. You can use zdb.Load() to only read a query from the filesystem.

Comments (-- at the start of a line only) are stripped. This makes queries a bit less noisy in query logs and the like.

This requires the Files parameter in ConnectOptions to be set; e.g.

zdb.Connect(zdb.ConnectOptions{
    Connect: "...",
    Files:   os.DirFS("db"),
})

You can also use embedded files here.
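For example with go:embed (a sketch; assumes a db/ directory with query files next to the source):

```go
//go:embed db
var dbFiles embed.FS

func connect() (zdb.DB, error) {
    // Strip the leading "db" component so the layout matches the
    // os.DirFS("db") example above.
    files, err := fs.Sub(dbFiles, "db")
    if err != nil {
        return nil, err
    }
    return zdb.Connect(zdb.ConnectOptions{
        Connect: "sqlite3+:memory:",
        Files:   files,
    })
}
```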

Transactions

zdb.TX(func(..) { }) runs the function in a transaction:

err := zdb.TX(ctx, func(ctx context.Context) error {
    err := zdb.Exec(ctx, ..)
    if err != nil {
        return err
    }

    // ... more queries
    return nil
})
if err != nil {
    log.Fatal(err)
}

The transaction is rolled back if the function returns an error, and committed if it returns nil. TX() calls can be nested, but only one transaction is started; it is committed only after the outermost function finishes.
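A nested sketch (the queries are placeholders):

```go
err := zdb.TX(ctx, func(ctx context.Context) error {
    if err := zdb.Exec(ctx, `insert into test (value) values ('a')`); err != nil {
        return err
    }
    // This joins the same transaction; nothing is committed until the
    // outer function returns nil.
    return zdb.TX(ctx, func(ctx context.Context) error {
        return zdb.Exec(ctx, `insert into test (value) values ('b')`)
    })
})
```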

You can also start a transaction with zdb.Begin(), but I find the TX() wrapper more useful in almost all cases:

txctx, tx, err := zdb.Begin(ctx)
if err != nil {
    return err
}
defer tx.Rollback()

// Do stuff with tx...

err = tx.Commit()
if err != nil {
    return err
}

Schema creation and migrations

If Create is set in zdb.Connect(), it will try to:

  1. Create the database if it does not yet exist.
  2. Run the schema setup file if there are no tables yet and the file exists; this requires the Files parameter to be set.

The schemas to set up the database should be in /schema-{dialect}.sql, /schema.gotxt, or /schema.sql. Files are tried in that order. The {dialect} is sqlite3, postgres, or mariadb and only needed if you need to vary something by SQL engine: otherwise it will just load the generic file. You can also use templating with .gotxt files, as documented above.

Migrations are loaded from /migrate/foo-{dialect}.sql, /migrate/foo.gotxt, or /migrate/foo.sql. Migrations are always run in lexical order, so you probably want to prefix them with the date, and maybe a sequence number for the day in case you have multiple migrations on the same day (e.g. 2021-06-18-1-name.sql). It uses a version table to keep track of which migrations were already run (will be created automatically if it doesn't exist).

This isn't really intended to solve every possible use case for database migrations, but it should be enough for many use cases, and for more advanced things you can use one of several dedicated packages.

You can also pass GoMigrations in zdb.Connect() to run Go code as migrations. This is sometimes more convenient if you need to do some complex processing.

It's okay if directories are missing; e.g. no migrate directory simply means that it won't attempt to run migrations – you don't need to use all features.

Bulk insert

BulkInsert makes it easier to bulk insert values:

ins := zdb.NewBulkInsert(ctx, "table", []string{"col1", "col2", "col3"})
for _, v := range listOfValues {
    ins.Values(v.Col1, v.Col2, v.Col3)
}
err := ins.Finish()

This won't naïvely group everything in one query; after more than 998 parameters it will construct an SQL query and send it to the server. 998 was chosen because that's the default SQLite limit, and you probably won't get much benefit from larger inserts anyway.

You get the error(s) back with Finish().

Note this isn't run in a transaction by default; start a transaction yourself if you want it.

Testing and debugging

DumpArgs

zdb has a rather useful "DumpArgs" facility: you can add it to any query method and it will "dump" information to stderr. This is really useful for quick testing/debugging:

TODO: include somewhat realistic example here.

Testing

RunTest()              Create a temporary database and run tests.
TestQueries()          Test queries from fs.

Dump(), DumpString()   Show result of any query.
ApplyParams()          Apply parameters.

zdb.Dump() and zdb.DumpString() are two rather useful helper functions: they get the output of any SQL query in a table, similar to what you get on the sqlite3 or psql commandline. This is rather useful for debugging and tests:

want := `
    site    day                            browser  version  count   count_unique  event
    1       2019-08-31 00:00:00 +0000 UTC  Firefox  68.0     1       0             0`

out := zdb.DumpString(ctx, `select * from table`)

if d := ztest.Diff(out, want, ztest.DiffNormalizeWhitespace); d != "" {
    t.Error(d)
}

This will panic() on errors. Again, it's only intended for debugging and tests, and omitting error returns makes it a bit smoother to use.

The zdb.ApplyParams() function will replace ? and $1 with the actual values. This is intended to make copying long-ish queries to the psql CLI for additional debugging/testing easier. This is not intended for any serious use and is not safe against malicious input.

fmt.Println(zdb.ApplyParams(
    `select * from users where site=$1 and state=$2`,
    1, "active"))

// Output: select * from users where site=1 and state='active'

Database wrapping

Wrapping a database well with database/sql or sqlx is a bit tricky since you need to wrap both the actual database but also the transactions.

LogDB

Wrap the database with zdb.NewLogDB() to automatically dump the query, the results, the query plan, or all of them to a writer:

db, _ := zdb.Connect(...)
logDB := zdb.NewLogDB(db, os.Stdout, zdb.DumpAll, "")

The last parameter is an optional filter:

logDB := zdb.NewLogDB(db, os.Stdout, zdb.DumpAll, "only_if_query_matches_this_text")

This may run queries twice.

MetricsDB

Wrap the database with zdb.NewMetricsDB() to record metrics on query execution times. This takes a "recorder"; its Record() method is called for every query invocation.

There is a MetricsMemory, which records the metrics in the process memory. You can implement your own to send metrics to Datadog or Grafana or whatnot.

    db, _ := zdb.Connect(...)
    metricDB := zdb.NewMetricsDB(db, zdb.NewMetricsMemory(0))

Documentation

Overview

Package zdb provides a nice API to interact with SQL databases in Go.

All query functions (Exec, NumRows, InsertID, Get, Select, Query) use named parameters (:name) if params contains a map or struct; positional parameters (? or $1) are used if it doesn't. You can add multiple structs or maps, but mixing named and positional parameters is not allowed.

Everything between {{:name ..}} is parsed as a conditional; for example {{:foo query}} will only be added if "foo" in params is not the zero value for its type. Conditionals only work with named parameters.

If the query starts with "load:" then it's loaded from the filesystem or embedded files; see Load() for details.

Additional DumpArgs can be added to "dump" information to stderr for testing and debugging:

DumpLocation   Show location of Dump call.
DumpQuery      Show the query.
DumpExplain    Show the query plan (WILL RUN QUERY TWICE!)
DumpResult     Show the query result (WILL RUN QUERY TWICE!)
DumpVertical   Show results in vertical format.
DumpCSV        Print query result as CSV.
DumpJSON       Print query result as JSON.
DumpHTML       Print query result as a HTML table.
DumpAll        Dump all we can.

Running the query twice for a select is usually safe (just slower), but running insert, update, or delete twice may cause problems.

Index

Constants

This section is empty.

Variables

var ErrTransactionStarted = errors.New("transaction already started")

ErrTransactionStarted is returned when a transaction is already started; this can often be treated as a non-fatal error.

var TemplateFuncMap template.FuncMap

TemplateFuncMap are additional template functions for Template(). Existing functions may be overridden.

Functions

func ApplyParams

func ApplyParams(query string, params ...any) string

ApplyParams replaces parameter placeholders in query with the values.

This is ONLY for printf-debugging, and NOT for actual usage. Security was NOT a consideration when writing this. Parameters in SQL are sent separately over the wire and are not interpolated, so it's very different.

This supports ? placeholders and $1 placeholders *in order* ($\d+ is simply replaced with ?).

func BeginIsolation

func BeginIsolation(level sql.IsolationLevel) beginOpt

func BeginReadOnly

func BeginReadOnly() beginOpt

func Create

func Create(db DB, files fs.FS) error

Create tables based on db/schema.{sql,gotxt}

func Diff

func Diff(out, want string) string

Diff two strings, ignoring whitespace at the start of a line.

This is useful in tests in combination with zdb.Dump():

got := DumpString(ctx, `select * from factions`)
want := `
    faction_id  name
    1           Peacekeepers
    2           Moya`
if d := Diff(got, want); d != "" {
   t.Error(d)
}

It normalizes the leading whitespace in want, making "does my database match with what's expected?" fairly easy to test.

func Dump

func Dump(ctx context.Context, out io.Writer, query string, params ...any)

Dump the results of a query to a writer in an aligned table. This is a convenience function intended mostly for testing/debugging.

Combined with ztest.Diff() it can be an easy way to test the database state.

You can add some special sentinel values in the params to control the output (they're not sent as parameters to the DB):

DumpAll        Dump all we can.
DumpLocation   Show location of the Dump() call.
DumpQuery      Show the query with placeholders substituted.
DumpExplain    Show the results of EXPLAIN (or EXPLAIN ANALYZE for PostgreSQL).
DumpResult     Show the query result.
DumpVertical   Show vertical output instead of horizontal columns.
DumpCSV        Show as CSV.
DumpNul        Separate columns with NUL (0x00) bytes; useful to feed output to another printer.
DumpJSON       Show as an array of JSON objects.
DumpHTML       Show as a HTML table.

func DumpString

func DumpString(ctx context.Context, query string, params ...any) string

DumpString is like Dump(), but returns the result as a string.

func ErrMissingField

func ErrMissingField(err error) bool

ErrMissingField reports if this error is because not all columns could be scanned due to missing struct fields.

All other columns are scanned; you can choose to ignore this error safely.

func ErrNoRows

func ErrNoRows(err error) bool

ErrNoRows reports if this error is sql.ErrNoRows.

func ErrUnique

func ErrUnique(err error) bool

ErrUnique reports if this error reports a UNIQUE constraint violation.

func Exec

func Exec(ctx context.Context, query string, params ...any) error

Exec executes a query without returning the result.

func Get

func Get(ctx context.Context, dest any, query string, params ...any) error

Get one row, returning sql.ErrNoRows if there are no rows.

func InsertID

func InsertID(ctx context.Context, idColumn, query string, params ...any) (int64, error)

InsertID runs an INSERT query and returns the ID column idColumn.

If multiple rows are inserted it will return the ID of the last inserted row.

func Load

func Load(db DB, name string) (string, bool, error)

Load a query from the filesystem or embedded files.

Queries are loaded from the db/query/ directory, as db/query/{name}-{driver}.sql or db/query/{name}.sql.

To allow identifying queries in logging and statistics such as pg_stat_statements, the file name is inserted in the first line; for example for "db/query/select-x.sql" the query sent to the database is:

/* select-x */
select x from y;

Typical usage with Query() is to use "load:name", instead of calling this directly:

zdb.QueryGet(ctx, "load:select-x", &foo, zdb.P{
    "param": "foo",
})

func NumRows

func NumRows(ctx context.Context, query string, params ...any) (int64, error)

NumRows executes a query and returns the number of affected rows.

func RunTest

func RunTest(t *testing.T, f func(*testing.T, context.Context), opts ...drivers.TestOptions)

RunTest runs tests against all registered zdb SQL drivers.

func Select

func Select(ctx context.Context, dest any, query string, params ...any) error

Select zero or more rows; dest needs to be a pointer to a slice of:

  • a struct to scan in to
  • map[string]any
  • []any

Returns nil (and no error) if there are no rows.

func TX

func TX(ctx context.Context, fn func(context.Context) error) error

TX runs the given function in a transaction.

The context passed to the callback has the DB replaced with a transaction. The transaction is committed if the fn returns nil, or will be rolled back if it's not.

Multiple TX() calls can be nested; they all run in the same transaction, which is committed only if the outermost function returns nil.

This is just a more convenient wrapper for Begin().

func Template

func Template(dialect Dialect, tpl string, params ...any) ([]byte, error)

Template runs text/template on SQL to make writing compatible schemas a bit easier.

Available template functions:

{{sqlite "str"}}         Include the string only for the given SQL dialect.
{{psql   "str"}}
{{maria  "str"}}

Column types:

{{auto_increment [large]}}      Auto-incrementing column; the optional
                                [large] boolean can be used to make a
                                bigserial/bigint.
{{json}}                        JSON column type (jsonb, json, varchar)
{{blob}}                        Binary column type (bytea, blob, binary)

These only produce output for SQLite:

{{check_timestamp "colname"}}   Check constraint for timestamp
{{check_date "colname"}}        Check constraint for date

You can set additional functions (or override any of the above) by adding functions to TemplateFuncMap.

func TestQueries

func TestQueries(t *testing.T, files fs.FS)

TestQueries tests queries in the db/query directory.

For every .sql file you can create a _test.sql file, similar to how Go's testing works; the following special comments are recognized:

-- params     Parameters for the query.
-- want       Expected result.

Everything before the first special comment is run as a "setup". The "-- params" and "-- want" comments can be repeated for multiple tests.

Example:

db/query/select-sites.sql:
   select * from sites where site_id = :site and created_at > :start

db/query/select-sites_test.sql
  insert into sites () values (...)

  -- params
  site_id:    1
  created_at: 2020-01-01

  -- want
  1

  -- params

  -- want

func WithDB

func WithDB(ctx context.Context, db DB) context.Context

WithDB returns a copy of the context with the DB instance.

Types

type BulkInsert

type BulkInsert struct {
	Limit uint16
	// contains filtered or unexported fields
}

BulkInsert inserts as many rows as possible per query we send to the server.

func NewBulkInsert

func NewBulkInsert(ctx context.Context, table string, columns []string) BulkInsert

NewBulkInsert makes a new BulkInsert builder.

func (BulkInsert) Errors

func (m BulkInsert) Errors() error

Errors returns all errors that have been encountered.

func (*BulkInsert) Finish

func (m *BulkInsert) Finish() error

Finish the operation, returning any errors.

This can be called more than once, in cases where you want to have some fine-grained control over when actual SQL is sent to the server.

func (*BulkInsert) OnConflict

func (m *BulkInsert) OnConflict(c string)

OnConflict sets the "on conflict [..]" part of the query. This needs to include the "on conflict" itself.

func (*BulkInsert) Returned

func (m *BulkInsert) Returned() [][]any

Returned returns any rows that were returned; only useful if [Returning] was set.

This will only return values once, for example:

Values(...)    // Inserts 3 rows
...
Returned()     // Return the 3 rows
Values(..)     // Inserts 1 row
Returned()     // Returns the 1 row

func (*BulkInsert) Returning

func (m *BulkInsert) Returning(columns ...string)

Returning sets a column name in the "returning" part of the query.

The values can be fetched with [Returned].

func (*BulkInsert) Values

func (m *BulkInsert) Values(values ...any)

Values adds a set of values.

type ConnectOptions

type ConnectOptions struct {
	Connect    string            // Connect string.
	Create     bool              // Create database if it doesn't exist yet.
	Migrate    []string          // Migrations to run; nil for none, "all" for all, or a migration name.
	MigrateLog func(name string) // Called for every migration that gets run.

	// Set the maximum number of open and idle connections.
	//
	// The default for MaxOpenConns is 16, and the default for MaxIdleConns is
	// 4, instead of Go's default of 0 and 2. Use a value <0 to skip the
	// default.
	//
	// This can also be changed at runtime with:
	//
	//    db.DBSQL().SetMaxOpenConns(100)
	MaxOpenConns int
	MaxIdleConns int

	// In addition to migrations from .sql files, you can run migrations from Go
	// functions. See the documentation on Migrate for details.
	GoMigrations map[string]func(context.Context) error

	// Database files; the following layout is assumed:
	//
	//   Schema       schema-{dialect}.sql, schema.sql, or schema.gotxt
	//   Migrations   migrate/{name}-{dialect}.sql, migrate/{name}.sql, or migrate/{name}.gotxt
	//   Queries      query/{name}-{dialect}.sql, query/{name}.sql, or query/{name}.gotxt
	//
	// It's okay if files are missing; e.g. no migrate directory simply means
	// that it won't attempt to run migrations.
	Files fs.FS
}

ConnectOptions are options for Connect().

type DB

type DB interface {
	DBSQL() *sql.DB
	SQLDialect() Dialect
	Info(ctx context.Context) (ServerInfo, error)
	Close() error

	Exec(ctx context.Context, query string, params ...any) error
	NumRows(ctx context.Context, query string, params ...any) (int64, error)
	InsertID(ctx context.Context, idColumn, query string, params ...any) (int64, error)
	Get(ctx context.Context, dest any, query string, params ...any) error
	Select(ctx context.Context, dest any, query string, params ...any) error
	Query(ctx context.Context, query string, params ...any) (*Rows, error)

	TX(ctx context.Context, fb func(context.Context) error) error
	Begin(ctx context.Context, opts ...beginOpt) (context.Context, DB, error)
	Rollback() error
	Commit() error
}

DB is an interface to the database; this can be a regular connection, a transaction, or a wrapped connection to add features such as logging.

If this is not a transaction, then Commit() and Rollback() will always return an error. If this is a transaction, then Begin() is a no-op, and Close() will rollback the transaction and close the database connection.

See documentation on the top-level functions for more details on the methods.

func Begin

func Begin(ctx context.Context, opts ...beginOpt) (context.Context, DB, error)

Begin a new transaction.

The returned context is a copy of the original with the DB replaced with a transaction. The same transaction is also returned directly.

Nested transactions return the original transaction together with ErrTransactionStarted (which is not a fatal error).

func Connect

func Connect(ctx context.Context, opt ConnectOptions) (DB, error)

Connect to a database.

To connect to a database you need to register a driver for it first. While zdb uses database/sql, it needs a zdb-specific driver which contains additional information. Several drivers are included in the zgo.at/zdb/drivers package. To register a driver simply import it:

import _ "zgo.at/zdb/drivers/pq"

The connect string has the following layout (minus spaces):

dialect                           Use default connection parameters for this driver.
dialect              + connect    Pass driver-specific connection string.
driverName           + connect    Use a SQL driver name, instead of SQL dialect.
dialect / driverName + connect    Specify both.

The connectString is driver-specific; see the documentation of the driver for details. The dialect is the "SQL dialect"; currently recognized dialects are:

postgresql    aliases: postgres psql pgsql
sqlite        aliases: sqlite3
mysql         aliases: mariadb

For example, "postgresql+dbname=mydb", "pq+dbname=mydb", and "postgresql/pq+dbname=mydb" are all identical, assuming pq is the registered driver.

If multiple drivers are registered for the same dialect then it will use the first one.

If Create is set it will try to automatically create a database if it doesn't exist yet. If Files is given it will also look for the following files to set up the database if it doesn't exist or is empty:

schema.gotxt           Run zdb.Template first.
schema-{dialect}.sql   Schema for this SQL dialect.
schema.sql

Migrate and GoMigrate are migrations to run, see the documentation of Migrate for details.

func GetDB

func GetDB(ctx context.Context) (DB, bool)

GetDB gets the DB from the context.

func MustGetDB

func MustGetDB(ctx context.Context) DB

MustGetDB gets the DB from the context, panicking if there is none.

func NewLogDB

func NewLogDB(db DB, out io.Writer, logWhat DumpArg, filter string) DB

NewLogDB returns a DB wrapper to log queries, query plans, and query results.

If Filter is not an empty string then only queries containing the text are logged. Use an empty string to log everything.

If LogQuery is enabled then the query text will be logged, if LogExplain is enabled then the query plan will be logged, and if LogResult is enabled then the query result will be logged.

Only LogQuery will be set if opts is nil.

WARNING: printing the result means the query will be run twice, which is a significant performance impact and DATA MODIFICATION STATEMENTS ARE ALSO RUN TWICE. Some SQL engines may also run the query on EXPLAIN (e.g. PostgreSQL does).

func NewMetricsDB

func NewMetricsDB(db DB, recorder MetricRecorder) DB

NewMetricsDB returns a DB wrapper which records query performance metrics.

For every query recorder.Record is called.

func Unwrap

func Unwrap(db DB) DB

Unwrap this database, removing all zdb wrappers and returning the underlying database (which may be a transaction).

To wrap a zdb.DB object embed the zdb.DB interface, which contains the parent DB connection. The Unwrap() method is expected to return the parent DB.

Then implement whatever you want; usually you will want to implement the dbImpl interface, which contains the methods that actually interact with the database. All the DB methods call this under the hood. This way you don't have to wrap all the methods on DB, but just five.

In Begin() you will want to return a new wrapped DB instance with the transaction attached.

See logDB and metricDB in log.go and metric.go for examples.

TODO: document wrapping a bit better.

type Dialect

type Dialect uint8

Dialect is an SQL dialect. This can be represented by multiple drivers; for example for PostgreSQL "pq" and "pgx" are both DialectPostgreSQL.

const (
	DialectUnknown Dialect = iota
	DialectSQLite
	DialectPostgreSQL
	DialectMariaDB
)

SQL dialects.

func SQLDialect

func SQLDialect(ctx context.Context) Dialect

SQLDialect gets the SQL dialect.

func (Dialect) String

func (d Dialect) String() string

type DumpArg

type DumpArg int32
const (
	DumpLocation DumpArg // Show location of Dump call.
	DumpQuery            // Show the query with placeholders substituted.
	DumpExplain          // Show the results of EXPLAIN (or EXPLAIN ANALYZE for PostgreSQL).
	DumpResult           // Show the query result.
	DumpVertical         // Print query result in vertical columns instead of horizontal.
	DumpCSV              // Print query result as CSV.
	DumpNul              // Print query result with columns separated by NUL (0x00) bytes.
	DumpJSON             // Print query result as JSON.
	DumpHTML             // Print query result as a HTML table.
	DumpAll              // Dump all we can.
)

type L deprecated

type L []any

L ("list") is an alias for []any.

Deprecated: []any{..} is now a lot shorter and less ugly than []interface{}{..}

type MetricRecorder

type MetricRecorder interface {
	Record(d time.Duration, query string, params []any)
}

type MetricsMemory

type MetricsMemory struct {
	// contains filtered or unexported fields
}

MetricsMemory records metrics in memory.

func NewMetricsMemory

func NewMetricsMemory(max int) *MetricsMemory

NewMetricsMemory creates a new MetricsMemory, up to "max" metrics per query.

func (*MetricsMemory) Queries

func (m *MetricsMemory) Queries() []struct {
	Query string
	Times ztime.Durations
}

Queries gets a list of queries sorted by the total run time.

func (*MetricsMemory) Record

func (m *MetricsMemory) Record(d time.Duration, query string, params []any)

Record this query.

func (*MetricsMemory) Reset

func (m *MetricsMemory) Reset()

Reset the contents.

func (*MetricsMemory) String

func (m *MetricsMemory) String() string

type Migrate

type Migrate struct {
	// contains filtered or unexported fields
}

Migrate allows running database migrations.

func NewMigrate

func NewMigrate(db DB, files fs.FS, gomig map[string]func(context.Context) error) (*Migrate, error)

NewMigrate creates a new migration instance.

Migrations are loaded from the filesystem, as described in ConnectOptions.

You can optionally pass a list of Go functions to run as a "migration".

Every migration is automatically run in a transaction; and an entry in the version table is inserted.

func (Migrate) Check

func (m Migrate) Check() error

Check if there are pending migrations; will return the (non-fatal) PendingMigrationsError if there are.

func (Migrate) List

func (m Migrate) List() (haveMig, ranMig []string, err error)

List all migrations we know about, and all migrations that have already been run.

func (*Migrate) Log

func (m *Migrate) Log(f func(name string))

Log sets a log function for migrations; this gets called for every migration that gets run.

This only gets called if the migration was run successfully.

func (Migrate) Run

func (m Migrate) Run(which ...string) error

Run a migration, or all of them if which contains "all" or "auto".

func (Migrate) Schema

func (m Migrate) Schema(name string) (string, error)

Schema of a migration by name.

func (*Migrate) Show

func (m *Migrate) Show(v bool)

Show sets the "show" flag; it won't run anything, just print the queries it would run to stdout.

func (*Migrate) Test

func (m *Migrate) Test(t bool)

Test sets the "test" flag: it won't commit any transactions.

This will work correctly for SQLite and PostgreSQL, but not MariaDB as most ALTER and CREATE commands will automatically imply COMMIT. See: https://mariadb.com/kb/en/sql-statements-that-cause-an-implicit-commit/

type P deprecated

type P map[string]any

P ("params") is an alias for map[string]any, just because it's less typing and looks less noisy 🙃

Deprecated: map[string]any{..} is now a lot shorter and less ugly than map[string]interface{}{...}

type PendingMigrationsError

type PendingMigrationsError struct{ Pending []string }

PendingMigrationsError is a non-fatal error used to indicate there are migrations that have not yet been run.

func (PendingMigrationsError) Error

func (err PendingMigrationsError) Error() string

type Rows

type Rows struct {
	// contains filtered or unexported fields
}

TODO: document.

func Query

func Query(ctx context.Context, query string, params ...any) (*Rows, error)

Query the database without immediately loading the result.

This gives more flexibility over Select(), and won't load the entire result in memory to allow fetching the result one row at a time.

This won't return an error if there are no rows.

TODO: will it return nil or Rows which just does nothing? Make sure this is tested and documented.
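Typical usage follows the standard Next/Scan loop (a sketch against the hypothetical test table used earlier):

```go
rows, err := zdb.Query(ctx, `select test_id, value from test`)
if err != nil {
    return err
}
defer rows.Close()

for rows.Next() {
    var (
        id    int
        value string
    )
    if err := rows.Scan(&id, &value); err != nil {
        return err
    }
    // ... use id and value.
}
// Err reports any error encountered during iteration.
return rows.Err()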

func (*Rows) Close

func (r *Rows) Close() error

func (*Rows) ColumnTypes

func (r *Rows) ColumnTypes() ([]*sql.ColumnType, error)

func (*Rows) Columns

func (r *Rows) Columns() ([]string, error)

func (*Rows) Err

func (r *Rows) Err() error

func (*Rows) Next

func (r *Rows) Next() bool

func (*Rows) Scan

func (r *Rows) Scan(dest ...any) error

type SQL

type SQL string

SQL represents a safe SQL string that will be directly inserted in the query without any modification, rather than passed as a parameter.

Use with wisdom! Careless use of this can open you to SQL injections. Generally speaking you rarely want to use this, except in some rare cases where 1) parameters won't work, and 2) you're really sure this value is safe.

type ServerInfo

type ServerInfo struct {
	Version    ServerVersion
	DriverName string
	Dialect    Dialect
}

ServerInfo contains information about the SQL server.

func Info

func Info(ctx context.Context) (ServerInfo, error)

Info gets information about the SQL server.

type ServerVersion

type ServerVersion string

ServerVersion represents a database version.

func (ServerVersion) AtLeast

func (v ServerVersion) AtLeast(want ServerVersion) bool

AtLeast reports if this version is at least version want.

Directories

Path Synopsis
go-sqlite3
Package sqlite3 provides a zdb driver for SQLite.
mariadb
Package mariadb provides a zdb driver for MariaDB.
pgx
Package pgx provides a zdb driver for PostgreSQL.
pq
Package pq provides a zdb driver for PostgreSQL.
internal
