migrate

README

sql-migrate

SQL Schema migration tool for Go. Based on gorp and goose.

Using modl? Check out modl-migrate.

Features

  • Usable as a CLI tool or as a library
  • Supports SQLite, PostgreSQL, MySQL, MSSQL and Oracle databases (through gorp)
  • Can embed migrations into your application
  • Migrations are defined with SQL for full flexibility
  • Atomic migrations
  • Up/down migrations to allow rollback
  • Supports multiple database types in one project

Installation

To install the library and command line program, use the following:

go get -v github.com/rubenv/sql-migrate/...

Usage

As a standalone tool
$ sql-migrate --help
usage: sql-migrate [--version] [--help] <command> [<args>]

Available commands are:
    down      Undo a database migration
    new       Create a new migration
    redo      Reapply the last migration
    status    Show migration status
    up        Migrates the database to the most recent version available

Each command requires a configuration file (which defaults to dbconfig.yml, but can be specified with the -config flag). This config file should specify one or more environments:

development:
    dialect: sqlite3
    datasource: test.db
    dir: migrations/sqlite3

production:
    dialect: postgres
    datasource: dbname=myapp sslmode=disable
    dir: migrations/postgres
    table: migrations

The table setting is optional and will default to gorp_migrations.

The environment that will be used can be specified with the -env flag (defaults to development).

Use the --help flag in combination with any of the commands to get an overview of its usage:

$ sql-migrate up --help
Usage: sql-migrate up [options] ...

  Migrates the database to the most recent version available.

Options:

  -config=config.yml   Configuration file to use.
  -env="development"   Environment.
  -limit=0             Limit the number of migrations (0 = unlimited).
  -dryrun              Don't apply migrations, just print them.

The new command creates a new empty migration template using the pattern <current time>-<name>.sql.
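
For example (the generated timestamp is illustrative):

$ sql-migrate new add_people

This creates a file such as migrations/sqlite3/20180905103612-add_people.sql in the directory configured for the environment, ready to be filled in with +migrate Up and +migrate Down sections.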

The up command applies all available migrations. By contrast, down will only apply one migration by default. This behavior can be changed for both by using the -limit parameter.
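
For example (environment selection omitted for brevity):

$ sql-migrate up -limit=1     # apply only the next pending migration
$ sql-migrate down -limit=2   # roll back the two most recently applied migrations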

The redo command will unapply the last migration and reapply it. This is useful during development, when you're writing migrations.

Use the status command to see the state of the applied migrations:

$ sql-migrate status
+---------------+-----------------------------------------+
|   MIGRATION   |                 APPLIED                 |
+---------------+-----------------------------------------+
| 1_initial.sql | 2014-09-13 08:19:06.788354925 +0000 UTC |
| 2_record.sql  | no                                      |
+---------------+-----------------------------------------+

MySQL Caveat

If you are using MySQL, you must append ?parseTime=true to the datasource configuration. For example:

production:
    dialect: mysql
    datasource: root@/dbname?parseTime=true
    dir: migrations/mysql
    table: migrations

See https://github.com/go-sql-driver/mysql#parsetime for more information.

As a library

Import sql-migrate into your application:

import "github.com/rubenv/sql-migrate"

Set up a source of migrations; this can be from memory, from a set of files, or from bindata (more on that later):

// Hardcoded strings in memory:
migrations := &migrate.MemoryMigrationSource{
    Migrations: []*migrate.Migration{
        &migrate.Migration{
            Id:   "123",
            Up:   []string{"CREATE TABLE people (id int)"},
            Down: []string{"DROP TABLE people"},
        },
    },
}

// OR: Read migrations from a folder:
migrations := &migrate.FileMigrationSource{
    Dir: "db/migrations",
}

// OR: Use migrations from a packr box
migrations := &migrate.PackrMigrationSource{
    Box: packr.NewBox("./migrations"),
}

// OR: Use migrations from bindata:
migrations := &migrate.AssetMigrationSource{
    Asset:    Asset,
    AssetDir: AssetDir,
    Dir:      "migrations",
}

Then use the Exec function to upgrade your database:

db, err := sql.Open("sqlite3", filename)
if err != nil {
    // Handle errors!
}

n, err := migrate.Exec(db, "sqlite3", migrations, migrate.Up)
if err != nil {
    // Handle errors!
}
fmt.Printf("Applied %d migrations!\n", n)

Note that n can be greater than 0 even if there is an error: any migration that succeeded will remain applied even if a later one fails.
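
A minimal way to handle this, as a sketch (the logging destination is up to you):

n, err := migrate.Exec(db, "sqlite3", migrations, migrate.Up)
if err != nil {
    // The first n migrations succeeded and stay applied; only the failing
    // migration (and anything after it) still needs to run.
    log.Fatalf("stopped after applying %d migrations: %v", n, err)
}
log.Printf("applied %d migrations", n)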

Check the GoDoc reference for the full documentation.

Writing migrations

Migrations are defined in SQL files, which contain a set of SQL statements. Special comments are used to distinguish up and down migrations.

-- +migrate Up
-- SQL in section 'Up' is executed when this migration is applied
CREATE TABLE people (id int);


-- +migrate Down
-- SQL section 'Down' is executed when this migration is rolled back
DROP TABLE people;

You can put multiple statements in each block, as long as you end them with a semicolon (;).

You can alternatively set a separator string that matches an entire line by setting sqlparse.LineSeparator. This can be used to imitate, for example, MS SQL Query Analyzer behavior, where commands are separated by a line containing only GO. When sqlparse.LineSeparator is matched, the separator line itself is not included in the resulting migration scripts.
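
For example, assuming the parser is exposed as the sqlparse subpackage of this module, GO-separated scripts could be enabled like this:

import "github.com/rubenv/sql-migrate/sqlparse"

func init() {
    // Treat a line consisting solely of "GO" as a statement separator;
    // the separator line is dropped from the parsed statements.
    sqlparse.LineSeparator = "GO"
}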

If you have complex statements which contain semicolons, use StatementBegin and StatementEnd to indicate boundaries:

-- +migrate Up
CREATE TABLE people (id int);

-- +migrate StatementBegin
CREATE OR REPLACE FUNCTION do_something()
returns void AS $$
DECLARE
  create_query text;
BEGIN
  -- Do something here
END;
$$
language plpgsql;
-- +migrate StatementEnd

-- +migrate Down
DROP FUNCTION do_something();
DROP TABLE people;

The order in which migrations are applied is defined through the filename: sql-migrate will sort migrations based on their name. It's recommended to use an increasing version number or a timestamp as the first part of the filename.
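
For example, either of these layouts (file names are illustrative) sorts in the intended order:

migrations/
    1_initial.sql
    2_record.sql

migrations/
    20180905103000-create_people.sql
    20180906091500-add_email.sql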

Normally each migration is run within a transaction in order to guarantee that it is fully atomic. However some SQL commands (for example creating an index concurrently in PostgreSQL) cannot be executed inside a transaction. In order to execute such a command in a migration, the migration can be run using the notransaction option:

-- +migrate Up notransaction
CREATE UNIQUE INDEX CONCURRENTLY people_unique_id_idx ON people (id);

-- +migrate Down
DROP INDEX people_unique_id_idx;

Embedding migrations with packr

If you like your Go applications self-contained (that is, a single binary), use packr (https://github.com/gobuffalo/packr) to embed the migration files.

Just write your migration files as usual, as a set of SQL files in a folder.

Use the PackrMigrationSource in your application to find the migrations:

migrations := &migrate.PackrMigrationSource{
    Box: packr.NewBox("./migrations"),
}

If you already have a box and would like to use a subdirectory:

migrations := &migrate.PackrMigrationSource{
    Box: myBox,
    Dir: "./migrations",
}

Embedding migrations with bindata

As a slightly less maintained alternative, you can use bindata (https://github.com/shuLhan/go-bindata) to embed the migration files.

Just write your migration files as usual, as a set of SQL files in a folder.

Then use bindata to generate a .go file with the migrations embedded:

go-bindata -pkg myapp -o bindata.go db/migrations/

The resulting bindata.go file will contain your migrations. Remember to regenerate your bindata.go file whenever you add or modify a migration (a go:generate directive can help here).
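
For example, a go:generate directive keeps the command next to the code that uses the generated file (package name and path are illustrative):

//go:generate go-bindata -pkg myapp -o bindata.go db/migrations/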

Use the AssetMigrationSource in your application to find the migrations:

migrations := &migrate.AssetMigrationSource{
    Asset:    Asset,
    AssetDir: AssetDir,
    Dir:      "db/migrations",
}

Both Asset and AssetDir are functions provided by bindata.

Then proceed as usual.

Extending

Adding a new migration source means implementing MigrationSource.

type MigrationSource interface {
    FindMigrations() ([]*Migration, error)
}

The resulting slice of migrations will be executed in the given order, so it should usually be sorted by the Id field.
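
As a sketch, a source that serves a fixed list in the library's ordering could look like this (the type name is hypothetical):

// StaticMigrationSource returns a fixed list of migrations, sorted by Id.
type StaticMigrationSource struct {
    Migrations []*migrate.Migration
}

func (s StaticMigrationSource) FindMigrations() ([]*migrate.Migration, error) {
    found := make([]*migrate.Migration, len(s.Migrations))
    copy(found, s.Migrations)
    // Migration.Less implements the ordering used by the library.
    sort.Slice(found, func(i, j int) bool {
        return found[i].Less(found[j])
    })
    return found, nil
}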

License

This library is distributed under the MIT license.

Documentation

Overview

SQL Schema migration tool for Go.

Key features, installation, usage of the sql-migrate command-line tool, library usage, writing migrations, embedding migrations with packr or bindata, and extending migration sources are all covered in the README above. The full API reference follows below.

Index

Constants

This section is empty.

Variables

var MigrationDialects = map[string]gorp.Dialect{
	"sqlite3":    gorp.SqliteDialect{},
	"postgres":   gorp.PostgresDialect{},
	"mysql":      gorp.MySQLDialect{Engine: "InnoDB", Encoding: "UTF8"},
	"mssql":      gorp.SqlServerDialect{},
	"oci8":       gorp.OracleDialect{},
	"clickhouse": dialects.ClickHouseDialect{},
}

Functions

func Exec

func Exec(db *sql.DB, dialect string, m MigrationSource, dir MigrationDirection) (int, error)

Execute a set of migrations

Returns the number of applied migrations.

func ExecMax

func ExecMax(db *sql.DB, dialect string, m MigrationSource, dir MigrationDirection, max int) (int, error)

Execute a set of migrations

Will apply at most `max` migrations. Pass 0 for no limit (or use Exec).

Returns the number of applied migrations.
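
For instance, to apply only the next pending migration (dialect and source are placeholders):

n, err := migrate.ExecMax(db, "postgres", migrations, migrate.Up, 1)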

func SetSchema

func SetSchema(name string)

SetSchema sets the name of the schema in which the migration table is referenced.

func SetTable

func SetTable(name string)

Set the name of the table used to store migration info.

Should be called before any other calls (Exec, ExecMax, ...).
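
For example, to keep migration state in a custom schema and table (names are illustrative):

migrate.SetSchema("myapp")
migrate.SetTable("schema_migrations")
n, err := migrate.Exec(db, "postgres", migrations, migrate.Up)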

func SkipMax

func SkipMax(db *sql.DB, dialect string, m MigrationSource, dir MigrationDirection, max int) (int, error)

Skip a set of migrations

Will skip at most `max` migrations. Pass 0 for no limit.

Returns the number of skipped migrations.

Types

type AssetMigrationSource

type AssetMigrationSource struct {
	// Asset should return content of file in path if exists
	Asset func(path string) ([]byte, error)

	// AssetDir should return list of files in the path
	AssetDir func(path string) ([]string, error)

	// Path in the bindata to use.
	Dir string
}

Migrations from a bindata asset set.

func (AssetMigrationSource) FindMigrations

func (a AssetMigrationSource) FindMigrations() ([]*Migration, error)

type FileMigrationSource

type FileMigrationSource struct {
	Dir string
}

A set of migrations loaded from a directory.

func (FileMigrationSource) FindMigrations

func (f FileMigrationSource) FindMigrations() ([]*Migration, error)

type HttpFileSystemMigrationSource

type HttpFileSystemMigrationSource struct {
	FileSystem http.FileSystem
}

func (HttpFileSystemMigrationSource) FindMigrations

func (f HttpFileSystemMigrationSource) FindMigrations() ([]*Migration, error)
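
Since http.Dir satisfies http.FileSystem, a directory on disk can be wrapped directly (the path is illustrative):

migrations := &migrate.HttpFileSystemMigrationSource{
	FileSystem: http.Dir("db/migrations"),
}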

type MemoryMigrationSource

type MemoryMigrationSource struct {
	Migrations []*Migration
}

A hardcoded set of migrations, in-memory.

func (MemoryMigrationSource) FindMigrations

func (m MemoryMigrationSource) FindMigrations() ([]*Migration, error)

type Migration

type Migration struct {
	Id   string
	Up   []string
	Down []string

	DisableTransactionUp   bool
	DisableTransactionDown bool
}

func ParseMigration

func ParseMigration(id string, r io.ReadSeeker) (*Migration, error)

Migration parsing

func ToApply

func ToApply(migrations []*Migration, current string, direction MigrationDirection) []*Migration

Filter a slice of migrations into ones that should be applied.

func (Migration) Less

func (m Migration) Less(other *Migration) bool

func (Migration) NumberPrefixMatches

func (m Migration) NumberPrefixMatches() []string

func (Migration) VersionInt

func (m Migration) VersionInt() int64

type MigrationDirection

type MigrationDirection int
const (
	Up MigrationDirection = iota
	Down
)

type MigrationRecord

type MigrationRecord struct {
	Id        string    `db:"id"`
	AppliedAt time.Time `db:"applied_at"`
}

func GetMigrationRecords

func GetMigrationRecords(db *sql.DB, dialect string) ([]*MigrationRecord, error)
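
For example, to list what has already been applied (the dialect is a placeholder):

records, err := migrate.GetMigrationRecords(db, "sqlite3")
if err != nil {
	// Handle errors!
}
for _, r := range records {
	fmt.Println(r.Id, r.AppliedAt)
}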

type MigrationSource

type MigrationSource interface {
	// Finds the migrations.
	//
	// The resulting slice of migrations should be sorted by Id.
	FindMigrations() ([]*Migration, error)
}

type PackrBox

type PackrBox interface {
	List() []string
	Bytes(name string) []byte
}

Avoids pulling in the packr library for everyone; mimics the bits of packr.Box that we need.

type PackrMigrationSource

type PackrMigrationSource struct {
	Box PackrBox

	// Path in the box to use.
	Dir string
}

Migrations from a packr box.

func (PackrMigrationSource) FindMigrations

func (p PackrMigrationSource) FindMigrations() ([]*Migration, error)

type PlanError

type PlanError struct {
	Migration   *Migration
	ErrorMessag string
}

PlanError is returned when no migration plan could be created between the set of already applied migrations and those currently found, for example when the database contains a migration which is not among the migrations found for an operation.

func (*PlanError) Error

func (p *PlanError) Error() string

type PlannedMigration

type PlannedMigration struct {
	*Migration

	DisableTransaction bool
	Queries            []string
}

func PlanMigration

func PlanMigration(db *sql.DB, dialect string, m MigrationSource, dir MigrationDirection, max int) ([]*PlannedMigration, *gorp.DbMap, error)

Plan a migration.
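
For example, to inspect what would run without applying anything (a sketch; the limit of 0 is assumed to mean no limit, as with ExecMax):

planned, _, err := migrate.PlanMigration(db, "postgres", migrations, migrate.Up, 0)
if err != nil {
	// Handle errors!
}
for _, p := range planned {
	fmt.Println(p.Id, p.Queries)
}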

func ToCatchup

func ToCatchup(migrations, existingMigrations []*Migration, lastRun *Migration) []*PlannedMigration

type SqlExecutor

type SqlExecutor interface {
	Exec(query string, args ...interface{}) (sql.Result, error)
	Insert(list ...interface{}) error
	Delete(list ...interface{}) (int64, error)
}

type TxError

type TxError struct {
	Migration *Migration
	Err       error
}

TxError is returned when any error is encountered during a database transaction. It contains the relevant *Migration and notes its Id in the Error function output.

func (*TxError) Error

func (e *TxError) Error() string
