dbmigrate

package module
Published: Dec 3, 2023 License: MIT Imports: 12 Imported by: 0

README

dbmigrate

A rails migrate inspired approach to database schema migrations, but with plain SQL files. And much faster.

Getting started with docker-compose

Given a working app

Let's say we have a simple docker-compose project setup with only a docker-compose.yml file

$ tree .
.
└── docker-compose.yml

0 directories, 1 file

That declares a postgres database (mydb) and an app (myapp) that uses the database somehow (in our contrived example, we're just listing the tables in our database with \dt)

version: '3'
services:
    mydb:
        image: "postgres"
        environment:
            - POSTGRES_DB=myapp_development
            - POSTGRES_USER=myuser
            - POSTGRES_PASSWORD=topsecret

    myapp:
        image: "postgres"
        command: >-
            sh -c '
                until pg_isready --host=mydb --port=5432 --timeout=30; do sleep 1; done;
                psql postgres://myuser:topsecret@mydb:5432/myapp_development -c \\dt
            '
        depends_on:
            - mydb

Let's see how our app runs

$ docker-compose up myapp
Creating dbmigrate-example_mydb_1 ... done
Creating dbmigrate-example_myapp_1 ... done
Attaching to dbmigrate-example_myapp_1
myapp_1  | mydb:5432 - no response
myapp_1  | mydb:5432 - no response
myapp_1  | mydb:5432 - accepting connections
myapp_1  | Did not find any relations.
dbmigrate-example_myapp_1 exited with code 0

The output simply shows that

  • our myapp waited until mydb:5432 was ready to accept connections
  • then it listed the tables inside the database and got Did not find any relations (which means no tables)
Add dbmigrate

Now let's add dbmigrate to our docker-compose.yml (note that myapp also gains a new entry under depends_on)

version: '3'
services:
    mydb:
        image: "postgres"
        environment:
            - POSTGRES_DB=myapp_development
            - POSTGRES_USER=myuser
            - POSTGRES_PASSWORD=topsecret

    myapp:
        image: "postgres"
        command: >-
            sh -c '
                until pg_isready --host=mydb --port=5432 --timeout=30; do sleep 1; done;
                psql postgres://myuser:topsecret@mydb:5432/myapp_development -c \\dt
            '
        depends_on:
            - mydb
            - dbmigrate

    # by default apply migrations with `-up` flag
    # we can run different commands by overriding the `DBMIGRATE_CMD` env variable
    # try DBMIGRATE_CMD="-h" to see what other flags dbmigrate can offer
    dbmigrate:
        image: "choonkeat/dbmigrate"
        environment:
            - DATABASE_URL=postgres://myuser:topsecret@mydb:5432/myapp_development?sslmode=disable
        volumes:
            - .:/app
        working_dir: /app
        command: ${DBMIGRATE_CMD:--up -server-ready 60s -create-db}
        depends_on:
            - mydb

And we can start creating a few migration scripts by running docker-compose up dbmigrate with a custom DBMIGRATE_CMD env variable

$ DBMIGRATE_CMD="-create users" docker-compose up dbmigrate
dbmigrate-example_mydb_1 is up-to-date
Creating dbmigrate-example_dbmigrate_1 ... done
Attaching to dbmigrate-example_dbmigrate_1
dbmigrate_1  | 2019/06/01 08:58:05 writing db/migrations/20190601085805_users.up.sql
dbmigrate_1  | 2019/06/01 08:58:05 writing db/migrations/20190601085805_users.down.sql
dbmigrate-example_dbmigrate_1 exited with code 0
$ DBMIGRATE_CMD="-create blogs" docker-compose up dbmigrate
dbmigrate-example_mydb_1 is up-to-date
Recreating dbmigrate-example_dbmigrate_1 ... done
Attaching to dbmigrate-example_dbmigrate_1
dbmigrate_1  | 2019/06/01 08:58:14 writing db/migrations/20190601085814_blogs.up.sql
dbmigrate_1  | 2019/06/01 08:58:14 writing db/migrations/20190601085814_blogs.down.sql
dbmigrate-example_dbmigrate_1 exited with code 0

After running those 2 commands, we see that we've generated 2 pairs of empty *.up.sql and *.down.sql files.

$ tree .
.
├── db
│   └── migrations
│       ├── 20190601085805_users.down.sql
│       ├── 20190601085805_users.up.sql
│       ├── 20190601085814_blogs.down.sql
│       └── 20190601085814_blogs.up.sql
└── docker-compose.yml

2 directories, 5 files

We add our SQL statements to the respective files

-- db/migrations/20190601085805_users.up.sql
CREATE TABLE IF NOT EXISTS users (
    "id" SERIAL primary key
);
-- db/migrations/20190601085805_users.down.sql
DROP TABLE IF EXISTS users;
-- db/migrations/20190601085814_blogs.up.sql
CREATE TABLE IF NOT EXISTS blogs (
    "id" SERIAL primary key
);
-- db/migrations/20190601085814_blogs.down.sql
DROP TABLE IF EXISTS blogs;
Database schema migrations are now managed

Let's see how our app runs after those changes

$ docker-compose up myapp
dbmigrate-example_mydb_1 is up-to-date
Recreating dbmigrate-example_dbmigrate_1 ... done
Recreating dbmigrate-example_myapp_1     ... done
Attaching to dbmigrate-example_myapp_1
myapp_1      | mydb:5432 - accepting connections
myapp_1      |               List of relations
myapp_1      |  Schema |        Name        | Type  | Owner  
myapp_1      | --------+--------------------+-------+--------
myapp_1      |  public | blogs              | table | myuser
myapp_1      |  public | dbmigrate_versions | table | myuser
myapp_1      |  public | users              | table | myuser
myapp_1      | (3 rows)
myapp_1      |
dbmigrate-example_myapp_1 exited with code 0

Hey, looks like we have 3 tables now:

  1. blogs created by our db/migrations/20190601085814_blogs.up.sql
  2. users created by our db/migrations/20190601085805_users.up.sql
  3. dbmigrate_versions created by dbmigrate for itself to track migration history.
    • every time dbmigrate runs, it checks the dbmigrate_versions table to know which files in db/migrations have already been applied (and skips them) and which files have not been seen before (and applies them)

We can look at the logs of the dbmigrate container to see what happened when myapp booted up just now

$ docker-compose logs dbmigrate
Attaching to dbmigrate-example_dbmigrate_1
dbmigrate_1  | 2019/06/01 08:59:32 [up] 20190601085805_users.up.sql
dbmigrate_1  | 2019/06/01 08:59:32 [up] 20190601085814_blogs.up.sql
That's it

Now every time myapp starts up, since it declares depends_on: dbmigrate, our dbmigrate container will run automatically and apply any new migration files in db/migrations. To add new migration files, just run docker-compose up dbmigrate with a custom DBMIGRATE_CMD env variable (see above)


Basic operations

Create a new migration
$ dbmigrate -create describe your change
2018/12/21 16:33:13 writing db/migrations/20181221083313_describe-your-change.up.sql
2018/12/21 16:33:13 writing db/migrations/20181221083313_describe-your-change.down.sql

Generates a pair of blank .up.sql and .down.sql files inside the db/migrations directory. Configure the directory with the -dir command line flag.

The numeric prefix of the filename is the version, i.e. the version of the files above is 20181221083313.

Migrate up
$ dbmigrate -up
2018/12/21 16:37:40 [up] 20181221083313_describe-your-change.up.sql
2018/12/21 16:37:40 [up] 20181221083727_more-changes.up.sql
  1. Connect to database (defaults to the value of DATABASE_URL env; configure with -url)
  2. Start a db transaction
  3. Pick up each .up.sql file in db/migrations and iterate through them in ascending order
    • if the file's version is found in the dbmigrate_versions table, skip it
    • otherwise, execute the SQL statements in the file
    • if it succeeds, insert an entry into the dbmigrate_versions table
      CREATE TABLE dbmigrate_versions (
        version text NOT NULL PRIMARY KEY
      );
      
    • if it fails, roll back the entire transaction and exit 1
  4. Commit db transaction and exit 0
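
For illustration, here is a rough Go sketch of that up-migration loop using database/sql. It is not the actual dbmigrate implementation; it assumes the lib/pq driver, a db/migrations directory, and that the dbmigrate_versions table already exists (the real tool creates it for you).

package main

import (
	"database/sql"
	"log"
	"os"
	"path/filepath"
	"sort"
	"strings"

	_ "github.com/lib/pq"
)

func main() {
	// 1. connect to the database (DATABASE_URL env, as dbmigrate does by default)
	db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// 2. start a db transaction
	tx, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}

	// 3. iterate through *.up.sql files in ascending order
	files, err := filepath.Glob("db/migrations/*.up.sql")
	if err != nil {
		log.Fatal(err)
	}
	sort.Strings(files)

	for _, f := range files {
		version := strings.SplitN(filepath.Base(f), "_", 2)[0]

		// skip files whose version is already recorded in dbmigrate_versions
		var n int
		if err := tx.QueryRow(`SELECT count(*) FROM dbmigrate_versions WHERE version = $1`, version).Scan(&n); err != nil {
			tx.Rollback()
			log.Fatal(err)
		}
		if n > 0 {
			continue
		}

		// otherwise, execute the sql statements in the file
		stmts, err := os.ReadFile(f)
		if err != nil {
			tx.Rollback()
			log.Fatal(err)
		}
		if _, err := tx.Exec(string(stmts)); err != nil {
			tx.Rollback() // if it fails, roll back the entire transaction and exit 1
			log.Fatalf("%s: %v", filepath.Base(f), err)
		}

		// if it succeeds, record the version
		if _, err := tx.Exec(`INSERT INTO dbmigrate_versions (version) VALUES ($1)`, version); err != nil {
			tx.Rollback()
			log.Fatal(err)
		}
		log.Printf("[up] %s", filepath.Base(f))
	}

	// 4. commit db transaction and exit 0
	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}
}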
Migrate down
$ dbmigrate -down 1
2018/12/21 16:42:24 [down] 20181221083727_more-changes.down.sql
  1. Connect to database (defaults to the value of DATABASE_URL env; configure with -url)
  2. Start a db transaction
  3. Pick up each .down.sql file in db/migrations and iterate through them in descending order
    • if the file's version is NOT found in the dbmigrate_versions table, skip it
    • otherwise, execute the SQL statements in the file
    • if it succeeds, remove the entry WHERE version = ? from the dbmigrate_versions table
    • if it fails, roll back the entire transaction and exit 1
  4. Commit db transaction and exit 0

You can migrate "down" more files by using a different number

$ dbmigrate -down 3
2018/12/21 16:46:45 [down] 20181221083313_describe-your-change.down.sql
2018/12/21 16:46:45 [down] 20181221055307_create-users.down.sql
2018/12/21 16:46:45 [down] 20181221055304_create-projects.down.sql
Show versions pending

Prints a sorted list of versions that are found in -dir but do not have a record in the dbmigrate_versions table.

$ dbmigrate -versions-pending
20181222073750
20181222073900
20181222073901
Configuring DATABASE_URL

PostgreSQL

We're using github.com/lib/pq, so the environment variable looks like this

DATABASE_URL=postgres://user:password@localhost:5432/dbmigrate_test?sslmode=disable

or

DATABASE_DRIVER=postgres
DATABASE_URL='user=pqgotest dbname=pqgotest sslmode=verify-full'

NOTE: out of the box, this driver supports having multiple statements in one .sql file.

MySQL

We're using github.com/go-sql-driver/mysql, so the environment variables look like

DATABASE_DRIVER=mysql
DATABASE_URL='user:password@tcp(127.0.0.1:3306)/dbmigrate_test'

NOTE: to have multiple SQL statements in each .sql file, you'd need to add multiStatements=true to the query string of your DATABASE_URL, i.e.

DATABASE_URL='user:password@tcp(127.0.0.1:3306)/dbmigrate_test?multiStatements=true'

See the driver documentation for details and other available options.

Handling failure

When there's an error, we roll back the entire transaction, so you can edit your faulty .sql file and simply re-run

$ dbmigrate -up
2018/12/21 16:55:41 20181221083313_describe-your-change.up.sql: pq: relation "users" already exists
exit status 1
$ vim db/migrations/20181221083313_describe-your-change.up.sql
$ dbmigrate -up
2018/12/21 16:56:05 [up] 20181221083313_describe-your-change.up.sql
2018/12/21 16:56:05 [up] 20181221083727_more-changes.up.sql
$ dbmigrate -up
$

PostgreSQL supports rollback for most data definition language (DDL) statements

one of the more advanced features of PostgreSQL is its ability to perform transactional DDL via its Write-Ahead Log design. This design supports backing out even large changes to DDL, such as table creation. You can't recover from an add/drop on a database or tablespace, but all other catalog operations are reversible. https://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis

MySQL does not support rollback for DDL

Some statements cannot be rolled back. In general, these include data definition language (DDL) statements, such as those that create or drop databases, those that create, drop, or alter tables or stored routines. https://dev.mysql.com/doc/refman/8.0/en/cannot-roll-back.html

If you're using MySQL, make sure to put each DDL statement (e.g. CREATE TABLE ...) in its own *.sql file.

Caveat: -create-db and database names

The SQL command CREATE DATABASE <dbname> does not work well (at least in postgres) if <dbname> contains dashes -. The proper way would be to quote the value when using it, but alas there doesn't seem to be a driver-agnostic way to quote that string in Go.

The workaround: if -create-db is needed, use underscores _ in your dbname instead of dashes -

Documentation

Index

Constants

This section is empty.

Variables

var RequireDriverName = errors.Errorf("Cannot discern db driver. Please set -driver flag or DATABASE_DRIVER environment variable.")

RequireDriverName to indicate explicit driver name

Functions

func ReadyWait

func ReadyWait(ctx context.Context, driverName string, databaseURLs []string, logger func(...interface{})) error

ReadyWait for server to be ready, and try to create db and connect again
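
A minimal usage sketch; the import path github.com/choonkeat/dbmigrate and the "postgres" driver name here are assumptions for illustration:

package main

import (
	"context"
	"log"
	"os"

	"github.com/choonkeat/dbmigrate" // assumed import path
)

func main() {
	// block until the server behind DATABASE_URL is ready (and the db can be created);
	// log.Println satisfies the func(...interface{}) logger parameter
	err := dbmigrate.ReadyWait(context.Background(), "postgres", []string{os.Getenv("DATABASE_URL")}, log.Println)
	if err != nil {
		log.Fatal(err)
	}
}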

func Register

func Register(name string, value Adapter)

Register a new adapter.

NOTE that postgres and mysql are supported out of the box. sqlite3 is supported by including cmd/dbmigrate/sqlite3.go during compilation

func SanitizeDriverNameURL

func SanitizeDriverNameURL(driverName string, databaseURL string) (dbdriver string, dburl string, err error)

SanitizeDriverNameURL sanitizes `driverName` and `databaseURL` values

Types

type Adapter

type Adapter struct {
	CreateVersionsTable    func(*string) string
	SelectExistingVersions func(*string) string
	InsertNewVersion       func(*string) string
	DeleteOldVersion       func(*string) string
	PingQuery              string                                                     // `""` means does NOT support -server-ready
	CreateDatabaseQuery    func(string) string                                        // nil means does NOT support -create-db
	CreateSchemaQuery      func(string) string                                        // nil means does NOT support -schema
	BaseDatabaseURL        func(string) (connString string, dbName string, err error) // nil means does not support -server-ready nor -create-db
	BeginTx                func(ctx context.Context, db *sql.DB, opts *sql.TxOptions) (ExecCommitRollbacker, error)
}

Adapter defines raw sql statements to run for an sql.DB adapter

func AdapterFor

func AdapterFor(driverName string) (Adapter, error)

AdapterFor returns Adapter for given driverName

type Config

type Config struct {
	// contains filtered or unexported fields
}

A Config holds on to an open database to perform dbmigrate

func New

func New(dir fs.FS, driverName string, databaseURL string) (*Config, error)

New returns an instance of &Config

Returns error when
  • database driver is unsupported (try adding support via `dbmigrate.Register`)
  • database fails to connect or retrieve existing versions
  • unable to read the list of files from `dir`

func (*Config) CloseDB

func (c *Config) CloseDB() error

CloseDB should be run when Config is no longer in use; ideally `defer CloseDB` after every `New`

func (*Config) MigrateDown

func (c *Config) MigrateDown(ctx context.Context, txOpts *sql.TxOptions, schema *string, logFilename func(string), downStep int) error

MigrateDown un-applies at most N migrations in descending order, in a transaction

Transaction is committed on success, rolled back on error. Different databases will behave differently, e.g. postgres & sqlite3 can roll back DDL changes but mysql cannot

func (*Config) MigrateUp

func (c *Config) MigrateUp(ctx context.Context, txOpts *sql.TxOptions, schema *string, logFilename func(string)) error

MigrateUp applies pending migrations in ascending order, in a transaction

Transaction is committed on success, rolled back on error. Different databases will behave differently, e.g. postgres & sqlite3 can roll back DDL changes but mysql cannot

func (*Config) PendingVersions

func (c *Config) PendingVersions(ctx context.Context, schema *string) ([]string, error)

PendingVersions returns a slice of version strings that have not yet been applied in the database
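
Putting the pieces together, a hedged end-to-end sketch of the library API; the import path, the nil txOpts, and the nil schema arguments are assumptions, not documented defaults:

package main

import (
	"context"
	"log"
	"os"

	"github.com/choonkeat/dbmigrate" // assumed import path
)

func main() {
	ctx := context.Background()

	// read migration files from db/migrations and connect to the database
	config, err := dbmigrate.New(os.DirFS("db/migrations"), "postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer config.CloseDB() // defer CloseDB after every New, as advised above

	// versions present in the directory but not yet recorded in the database
	pending, err := config.PendingVersions(ctx, nil) // nil schema is an assumption
	if err != nil {
		log.Fatal(err)
	}
	log.Println("pending:", pending)

	// apply all pending migrations in one transaction, logging each filename
	err = config.MigrateUp(ctx, nil, nil, func(filename string) { // nil txOpts/schema are assumptions
		log.Println("[up]", filename)
	})
	if err != nil {
		log.Fatal(err)
	}
}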

type ExecCommitRollbacker

type ExecCommitRollbacker interface {
	ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error)
	Commit() error
	Rollback() error
}

ExecCommitRollbacker interface for sql.Tx

Directories

Path Synopsis
cmd
