README

zdb allows interacting with SQL databases.

Much of this is built on top of sqlx, but that's mostly an implementation detail; in regular usage you shouldn't really have to deal with the sqlx (or database/sql) package interfaces at all. The API/interface of zdb is quite different.

This requires Go 1.16 or newer. It uses the new fs package to load files.

Right now only PostgreSQL and SQLite are supported. Adding MariaDB or other engines wouldn't be hard, but I don't use it myself so didn't bother adding (and testing!) it. Just need someone to write a patch 😅

Full reference documentation: https://pkg.go.dev/zgo.at/zdb#pkg-index

Documentation

Constants

const Date = "2006-01-02 15:04:05"

Date format for SQL.

TODO: deal with this better, inserting a date as a parameter should just work.

Variables

var ErrTransactionStarted = errors.New("transaction already started")

ErrTransactionStarted is returned when a transaction is already started; this can often be treated as a non-fatal error.

Functions

func ApplyParams

func ApplyParams(query string, params ...interface{}) string

ApplyParams replaces parameter placeholders in query with the values.

This is ONLY for printf-debugging, and NOT for actual usage. Security was NOT a consideration when writing this. Real SQL parameters are sent separately over the wire and are never interpolated into the query string, so this is very different from what the database actually runs.

This supports ? placeholders and $1 placeholders *in order* ($\d is simply replaced with ?).
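
A minimal sketch of what such a debug-only substitution might look like (this is an illustrative re-implementation, not zdb's actual code, and just like the real thing it is not injection-safe):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// applyParams sketches a debug-only parameter substitution: $N placeholders
// are first normalized to ?, then each ? is replaced in order with the
// formatted value. For printf-debugging only; NOT safe for real queries.
func applyParams(query string, params ...interface{}) string {
	query = regexp.MustCompile(`\$\d+`).ReplaceAllString(query, "?")
	for _, p := range params {
		var v string
		switch pp := p.(type) {
		case string:
			// Quote strings, doubling single quotes SQL-style.
			v = "'" + strings.ReplaceAll(pp, "'", "''") + "'"
		default:
			v = fmt.Sprintf("%v", pp)
		}
		query = strings.Replace(query, "?", v, 1)
	}
	return query
}

func main() {
	fmt.Println(applyParams(`select * from t where id = $1 and name = $2`, 42, "O'Neill"))
	// → select * from t where id = 42 and name = 'O''Neill'
}
```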

func BeginIsolation

func BeginIsolation(level sql.IsolationLevel) beginOpt

func BeginReadOnly

func BeginReadOnly() beginOpt

func Diff

func Diff(out, want string) string

Diff two strings, ignoring whitespace at the start of a line.

This is useful in tests in combination with zdb.Dump():

got := DumpString(ctx, `select * from factions`)
want := `
    faction_id  name
    1           Peacekeepers
    2           Moya`
if d := Diff(got, want); d != "" {
   t.Error(d)
}

It normalizes the leading whitespace in want, making "does my database match what's expected?" fairly easy to test.
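
The whitespace handling it describes amounts to stripping leading whitespace from every line before comparing (an illustrative sketch; normalizeLeading is a hypothetical helper, not part of zdb):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeLeading strips leading spaces and tabs from every line, which is
// roughly what a diff that "ignores whitespace at the start of a line" needs.
func normalizeLeading(s string) string {
	lines := strings.Split(s, "\n")
	for i, l := range lines {
		lines[i] = strings.TrimLeft(l, " \t")
	}
	return strings.Join(lines, "\n")
}

func main() {
	got := "faction_id  name\n1           Peacekeepers"
	want := "\tfaction_id  name\n\t1           Peacekeepers"
	fmt.Println(normalizeLeading(got) == normalizeLeading(want)) // true
}
```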

func Dump

func Dump(ctx context.Context, out io.Writer, query string, params ...interface{})

Dump the results of a query to a writer in an aligned table. This is a convenience function intended mostly for testing/debugging.

Combined with ztest.Diff() it can be an easy way to test the database state.

You can add some special sentinel values in the params to control the output (they're not sent as parameters to the DB):

DumpVertical   Show vertical output instead of horizontal columns.
DumpQuery      Show the query with placeholders substituted.
DumpExplain    Show the results of EXPLAIN (or EXPLAIN ANALYZE for PostgreSQL).

func DumpString

func DumpString(ctx context.Context, query string, params ...interface{}) string

DumpString is like Dump(), but returns the result as a string.

func ErrNoRows

func ErrNoRows(err error) bool

ErrNoRows reports if this error is sql.ErrNoRows.

func ErrUnique

func ErrUnique(err error) bool

ErrUnique reports if this error reports a UNIQUE constraint violation.

This is the cgo version, which works for PostgreSQL and SQLite.

func Exec

func Exec(ctx context.Context, query string, params ...interface{}) error

Exec executes a query without returning the result.

This uses Prepare(), and all the documentation from there applies here too.

func Get

func Get(ctx context.Context, dest interface{}, query string, params ...interface{}) error

Get one row, returning sql.ErrNoRows if there are no rows.

This uses Prepare(), and all the documentation from there applies here too.

func InsertID

func InsertID(ctx context.Context, idColumn, query string, params ...interface{}) (int64, error)

InsertID runs an INSERT query and returns the value of the ID column idColumn.

If multiple rows are inserted it will return the ID of the last inserted row. This works for both PostgreSQL and SQLite.

This uses Prepare(), and all the documentation from there applies here too.
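
One plausible way a helper like this supports both engines (an assumption about the approach, not zdb's actual implementation): PostgreSQL drivers don't support LastInsertId(), so the ID can be fetched by appending a RETURNING clause, while SQLite can read Result.LastInsertId() after the Exec:

```go
package main

import "fmt"

// insertIDQuery sketches the per-engine difference: PostgreSQL needs a
// RETURNING clause to get the inserted ID back; SQLite can instead use the
// driver's Result.LastInsertId() after executing the unchanged query.
func insertIDQuery(driver, query, idColumn string) string {
	if driver == "postgres" {
		return query + " returning " + idColumn
	}
	return query // SQLite: read Result.LastInsertId() after Exec.
}

func main() {
	fmt.Println(insertIDQuery("postgres", "insert into f (name) values ($1)", "faction_id"))
	// → insert into f (name) values ($1) returning faction_id
}
```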

func ListTables

func ListTables(ctx context.Context) ([]string, error)

ListTables lists all tables in the database.

func Load

func Load(ctx context.Context, name string) (string, error)

Load a query from the filesystem or embedded files.

Queries are loaded from "db/query/{name}-{driver}.sql" or "db/query/{name}.sql". The query name is inserted as a comment on the first line; for example with "db/query/select-x.sql":

select x from y;

Then the actual query will be:

/* select-x */
select x from y;

This allows identifying queries in logging and statistics such as pg_stat_statements.

Typical usage with Query() is to use "load:name" instead of calling this directly:

zdb.QueryGet(ctx, "load:select-x", &foo, zdb.P{
    "param": "foo",
})

func NumRows

func NumRows(ctx context.Context, query string, params ...interface{}) (int64, error)

NumRows executes a query and returns the number of affected rows.

This uses Prepare(), and all the documentation from there applies here too.

func PgSQL

func PgSQL(ctx context.Context) bool

PgSQL reports if this database connection is to PostgreSQL.

func Prepare

func Prepare(ctx context.Context, query string, params ...interface{}) (string, []interface{}, error)

Prepare a query for sendoff to the database.

Named parameters (:name) are used if params contains a map or struct; positional parameters (? or $1) are used if it doesn't. You can pass multiple structs or maps, but mixing named and positional parameters is not allowed.

Everything between {{:name ..}} is parsed as a conditional; for example {{:foo query}} will only add "query" if the "foo" parameter is true or otherwise not the zero value for its type. Conditionals only work with named parameters.
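
A rough sketch of that conditional behaviour (an illustrative re-implementation using a simplified regexp, not zdb's actual parser):

```go
package main

import (
	"fmt"
	"reflect"
	"regexp"
)

// cond matches {{:name fragment}}; group 1 is the parameter name, group 2 the
// SQL fragment to keep or drop.
var cond = regexp.MustCompile(`\{\{:(\w+)\s+([^}]+)\}\}`)

// applyConditionals keeps a fragment when the named parameter exists and is
// not its type's zero value, and drops it otherwise.
func applyConditionals(query string, params map[string]interface{}) string {
	return cond.ReplaceAllStringFunc(query, func(m string) string {
		g := cond.FindStringSubmatch(m)
		v, ok := params[g[1]]
		if !ok || v == nil || reflect.ValueOf(v).IsZero() {
			return ""
		}
		return g[2]
	})
}

func main() {
	q := `select * from t where 1=1 {{:name and name = :name}}`
	fmt.Println(applyConditionals(q, map[string]interface{}{"name": "x"}))
	fmt.Println(applyConditionals(q, map[string]interface{}{"name": ""}))
}
```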

If the query starts with "load:" then it's loaded from the filesystem or embedded files; see Load() for details.

Additional DumpArgs can be added to dump the results of the query to stderr for testing and debugging:

DumpQuery      Show the query.
DumpExplain    Show the query plan (WILL RUN THE QUERY TWICE!)
DumpResult     Show the query result (WILL RUN THE QUERY TWICE!)
DumpVertical   Show results in vertical format.

Running the query twice for a select is usually safe (just slower), but running insert, update, or delete twice may cause problems.

func SQLite

func SQLite(ctx context.Context) bool

SQLite reports if this database connection is to SQLite.

func Select

func Select(ctx context.Context, dest interface{}, query string, params ...interface{}) error

Select one or more rows; dest needs to be a pointer to a slice.

Returns nil if there are no rows.

This uses Prepare(), and all the documentation from there applies here too.

func StartTest

func StartTest(t *testing.T) (context.Context, func())

StartTest starts a new test.

There are two versions of this function: one for SQLite and one for PostgreSQL; by default the SQLite version is used, which uses a :memory: database.

The PostgreSQL version is compiled with the 'testpg' tag; this will use the "zdb_test" database (will be created if it doesn't exist) and will run every test in a new schema.

The standard PG* environment variables (PGHOST, PGPORT) can be used to specify the connection to a PostgreSQL database; see psql(1) for details.

The table (or schema) will be removed when the test ends, but the PostgreSQL zdb_test database is kept.

func TX

func TX(ctx context.Context, fn func(context.Context) error) error

TX runs the given function in a transaction.

The context passed to the callback has the DB replaced with a transaction. The transaction is committed if fn returns nil, and rolled back if it returns an error.

Multiple TX() calls can be nested; they all run in the same transaction, which is committed only if the outermost function returns nil.

This is just a more convenient wrapper for Begin().

func TestQueries

func TestQueries(t *testing.T, files fs.FS)

TestQueries tests queries in the db/query directory.

For every .sql file you can create a _test.sql file, similar to how Go's testing works; the following special comments are recognized:

-- params     Parameters for the query.
-- want       Expected result.

Everything before the first special comment is run as a "setup". The "-- params" and "-- want" comments can be repeated for multiple tests.

Example:

db/query/select-sites.sql:
   select * from sites where site_id = :site and created_at > :start

db/query/select-sites_test.sql
  insert into sites () values (...)

  -- params
  site_id:    1
  created_at: 2020-01-01

  -- want
  1

  -- params

  -- want

func WithDB

func WithDB(ctx context.Context, db DB) context.Context

WithDB returns a copy of the context with the DB instance.

Types

type ConnectOptions

type ConnectOptions struct {
	Connect string   // Connect string.
	Create  bool     // Create database if it doesn't exist yet.
	Migrate []string // Migrations to run; nil for none, "all" for all, or a migration name.

	// Database files; the following layout is assumed:
	//
	//   Schema       schema-{driver}.sql or schema.sql
	//   Migrations   migrate/foo-{schema}.sql or migrate/foo.sql
	//   Queries      query/foo-{schema}.sql or query/foo.sql
	//
	// It's okay if files are missing; e.g. no migrate directory simply means
	// that it won't attempt to run migrations.
	Files fs.FS

	// In addition to migrations from .sql files, you can run migrations from Go
	// functions. See the documentation on Migrate for details.
	GoMigrations map[string]func(context.Context) error

	// ConnectHook for sqlite3.SQLiteDriver; mainly useful to add your own
	// functions:
	//
	//    opt.SQLiteHook = func(c *sqlite3.SQLiteConn) error {
	//        return c.RegisterFunc("percent_diff", func(start, final float64) float64 {
	//            return (final - start) / start * 100
	//        }, true)
	//    }
	//
	// It'll automatically register and connect to a new "sqlite3_zdb_[addr]"
	// driver; note that DriverName() will now return "sqlite3_zdb_[addr]"
	// instead of "sqlite3"; use zdb.SQLite() to test if a connection is a
	// SQLite one.
	SQLiteHook func(*sqlite3.SQLiteConn) error
}

type DB

type DB interface {
	Prepare(ctx context.Context, query string, params ...interface{}) (string, []interface{}, error)
	Load(ctx context.Context, name string) (string, error)

	Exec(ctx context.Context, query string, params ...interface{}) error
	NumRows(ctx context.Context, query string, params ...interface{}) (int64, error)
	InsertID(ctx context.Context, idColumn, query string, params ...interface{}) (int64, error)
	Get(ctx context.Context, dest interface{}, query string, params ...interface{}) error
	Select(ctx context.Context, dest interface{}, query string, params ...interface{}) error
	Query(ctx context.Context, query string, params ...interface{}) (*Rows, error)

	BindNamed(query string, param interface{}) (newquery string, params []interface{}, err error)
	Rebind(query string) string
	DriverName() string
	Close() error

	TX(context.Context, func(context.Context) error) error
	Begin(context.Context, ...beginOpt) (context.Context, DB, error)
	Rollback() error
	Commit() error
}

func Begin

func Begin(ctx context.Context, opts ...beginOpt) (context.Context, DB, error)

Begin a new transaction.

The returned context is a copy of the original with the DB replaced with a transaction. The same transaction is also returned directly.

Nested transactions return the original transaction together with ErrTransactionStarted (which is not a fatal error).

func Connect

func Connect(opt ConnectOptions) (DB, error)

Connect to a database.

The database will be created automatically if it doesn't exist and Create is set in ConnectOptions.

This will set the maximum number of open and idle connections to 25 each for PostgreSQL, and 16 and 4 for SQLite, instead of Go's default of 0 and 2.

To change this, you can use:

db.(*sqlx.DB).SetMaxOpenConns(100)

Several connection parameters are set to different defaults in SQLite:

_journal_mode=wal          Almost always faster with better concurrency,
                           with little drawbacks for most use cases.
                           https://www.sqlite.org/wal.html

_foreign_keys=on           Check FK constraints; by default they're not
                           enforced, which is probably not what you want.

_defer_foreign_keys=on     Delay FK checks until the transaction commit; by
                           default they're checked immediately (if
                           enabled).

_case_sensitive_like=on    LIKE is case-sensitive, like PostgreSQL.

_cache_size=-20000         20M cache size, instead of 2M. Can be a
                           significant performance improvement.

You can still use "?_journal_mode=something_else" in the connection string to set something different.

For details on the connection string, see the documentation for go-sqlite3 and pq: https://github.com/mattn/go-sqlite3/ https://github.com/lib/pq

func GetDB

func GetDB(ctx context.Context) (DB, bool)

GetDB gets the DB from the context.

func MustGetDB

func MustGetDB(ctx context.Context) DB

MustGetDB gets the DB from the context, panicking if there is none.

func NewExplainDB

func NewExplainDB(db DB, out io.Writer, filter string) DB

NewExplainDB returns a DB wrapper that will print all queries with their EXPLAINs to out.

Only queries which contain the string in filter are shown. Use an empty string to show everything.

Because EXPLAIN actually runs the queries, this has quite a significant performance impact. Note that data modification statements are *also* run twice!

TODO: rename this to logDB, and add an option to either only log queries, log + explain, or log + explain + show results, etc.

func Unwrap

func Unwrap(db DB) DB

Unwrap this database, removing any of the zdb wrappers and returning the underlying sqlx.DB or sqlx.Tx.

type DumpArg

type DumpArg int
const (
	DumpVertical DumpArg
	DumpQuery
	DumpExplain
	DumpResult
)

type L

type L []interface{}

L ("list") is an alias for []interface{}.

type Migrate

type Migrate struct {
	// contains filtered or unexported fields
}

Migrate allows running database migrations.

func NewMigrate

func NewMigrate(db DB, files fs.FS, gomig map[string]func(context.Context) error) (*Migrate, error)

NewMigrate creates a new migration instance.

Migrations are loaded from the filesystem, as described in ConnectOptions.

You can optionally pass a list of Go functions to run as a "migration".

Every migration is automatically run in a transaction, and an entry is inserted in the version table.

func (Migrate) Check

func (m Migrate) Check() error

Check if there are pending migrations; will return the (non-fatal) PendingMigrationsError if there are.

func (Migrate) List

func (m Migrate) List() (haveMig, ranMig []string, err error)

List all migrations we know about, and all migrations that have already been run.

func (Migrate) Run

func (m Migrate) Run(which ...string) error

Run a migration, or all of them if which contains "all" or "auto".

func (Migrate) Schema

func (m Migrate) Schema(name string) (string, error)

Schema of a migration by name.

type P

type P map[string]interface{}

P ("params") is an alias for map[string]interface{}, just because it's less typing and looks less noisy 🙃

type PendingMigrationsError

type PendingMigrationsError error

PendingMigrationsError is a non-fatal error used to indicate there are migrations that have not yet been run.

type Rows

type Rows struct {
	// contains filtered or unexported fields
}

func Query

func Query(ctx context.Context, query string, params ...interface{}) (*Rows, error)

Query the database without immediately loading the result.

This gives more flexibility than Select(): it doesn't load the entire result in memory, and allows fetching rows one at a time.

This won't return an error if there are no rows.

This uses Prepare(), and all the documentation from there applies here too.

func (*Rows) Close

func (r *Rows) Close() error

func (*Rows) ColumnTypes

func (r *Rows) ColumnTypes() ([]*sql.ColumnType, error)

func (*Rows) Columns

func (r *Rows) Columns() ([]string, error)

func (*Rows) Err

func (r *Rows) Err() error

func (*Rows) Next

func (r *Rows) Next() bool

func (*Rows) Scan

func (r *Rows) Scan(dest ...interface{}) error

Directories

Path   Synopsis
bulk   Package bulk provides helpers for bulk SQL operations.