Storm


Storm is a simple and powerful toolkit for BoltDB. Basically, Storm provides indexes, a wide range of methods to store and fetch data, an advanced query system, and much more.

In addition to the examples below, see also the examples in the GoDoc.


Getting Started

go get -u github.com/asdine/storm

Import Storm

import "github.com/asdine/storm"

Open a database

Quick way of opening a database

db, err := storm.Open("my.db")

defer db.Close()

Open can receive multiple options to customize the way it behaves. See Options below

Simple CRUD system

Declare your structures
type User struct {
  ID int // primary key
  Group string `storm:"index"` // this field will be indexed
  Email string `storm:"unique"` // this field will be indexed with a unique constraint
  Name string // this field will not be indexed
  Age int `storm:"index"`
}

The primary key can be of any type as long as it is not a zero value. Storm will search for the tag id; if it is not present, Storm will look for a field named ID.

type User struct {
  ThePrimaryKey string `storm:"id"` // primary key
  Group string `storm:"index"` // this field will be indexed
  Email string `storm:"unique"` // this field will be indexed with a unique constraint
  Name string // this field will not be indexed
}

Storm handles tags in nested structures with the inline tag

type Base struct {
  Ident bson.ObjectId `storm:"id"`
}

type User struct {
	Base      `storm:"inline"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	CreatedAt time.Time `storm:"index"`
}
Save your object
user := User{
  ID: 10,
  Group: "staff",
  Email: "john@provider.com",
  Name: "John",
  Age: 21,
  CreatedAt: time.Now(),
}

err := db.Save(&user)
// err == nil

user.ID++
err = db.Save(&user)
// err == storm.ErrAlreadyExists

That's it.

Save creates or updates all the required indexes and buckets, checks the unique constraints and saves the object to the store.

Auto Increment

Storm can auto-increment integer values so you don't have to set them yourself when saving your objects. The generated value is automatically set on the field.


type Product struct {
	Pk                  int `storm:"id,increment"` // primary key with auto increment
	Name                string
	IntegerField        uint64 `storm:"increment"`
	IndexedIntegerField uint32 `storm:"index,increment"`
	UniqueIntegerField  int16  `storm:"unique,increment=100"` // the starting value can be set
}

p := Product{Name: "Vacuum Cleaner"}

fmt.Println(p.Pk)
fmt.Println(p.IntegerField)
fmt.Println(p.IndexedIntegerField)
fmt.Println(p.UniqueIntegerField)
// 0
// 0
// 0
// 0

_ = db.Save(&p)

fmt.Println(p.Pk)
fmt.Println(p.IntegerField)
fmt.Println(p.IndexedIntegerField)
fmt.Println(p.UniqueIntegerField)
// 1
// 1
// 1
// 100

Simple queries

Any object can be fetched, indexed or not. Storm uses indexes when available, otherwise it uses the query system.

Fetch one object
var user User
err := db.One("Email", "john@provider.com", &user)
// err == nil

err = db.One("Name", "John", &user)
// err == nil

err = db.One("Name", "Jack", &user)
// err == storm.ErrNotFound
Fetch multiple objects
var users []User
err := db.Find("Group", "staff", &users)
Fetch all objects
var users []User
err := db.All(&users)
Fetch all objects sorted by index
var users []User
err := db.AllByIndex("CreatedAt", &users)
Fetch a range of objects
var users []User
err := db.Range("Age", 10, 21, &users)
Fetch objects by prefix
var users []User
err := db.Prefix("Name", "Jo", &users)
Skip, Limit and Reverse
var users []User
err := db.Find("Group", "staff", &users, storm.Skip(10))
err = db.Find("Group", "staff", &users, storm.Limit(10))
err = db.Find("Group", "staff", &users, storm.Reverse())
err = db.Find("Group", "staff", &users, storm.Limit(10), storm.Skip(10), storm.Reverse())

err = db.All(&users, storm.Limit(10), storm.Skip(10), storm.Reverse())
err = db.AllByIndex("CreatedAt", &users, storm.Limit(10), storm.Skip(10), storm.Reverse())
err = db.Range("Age", 10, 21, &users, storm.Limit(10), storm.Skip(10), storm.Reverse())
Delete an object
err := db.DeleteStruct(&user)
Update an object
// Update multiple fields
err := db.Update(&User{ID: 10, Name: "Jack", Age: 45})

// Update a single field
err = db.UpdateField(&User{ID: 10}, "Age", 0)
Initialize buckets and indexes before saving an object
err := db.Init(&User{})

Useful when starting your application

Drop a bucket

Using the struct

err := db.Drop(&User{})

Using the bucket name

err := db.Drop("User")
Re-index a bucket
err := db.ReIndex(&User{})

Useful when the structure has changed

Advanced queries

For more complex queries, you can use the Select method. Select takes any number of Matchers from the q package.

Here are some common Matchers:

// Equality
q.Eq("Name", John)

// Strictly greater than
q.Gt("Age", 7)

// Lesser than or equal to
q.Lte("Age", 77)

// Regex with name that starts with the letter D
q.Re("Name", "^D")

// In the given slice of values
q.In("Group", []string{"Staff", "Admin"})

Matchers can also be combined with And, Or and Not:


// Match if all match
q.And(
  q.Gt("Age", 7),
  q.Re("Name", "^D"),
)

// Match if one matches
q.Or(
  q.Re("Name", "^A"),
  q.Not(
    q.Re("Name", "^B"),
  ),
  q.Re("Name", "^C"),
  q.In("Group", []string{"Staff", "Admin"}),
  q.And(
    q.StrictEq("Password", []byte(password)),
    q.Eq("Registered", true),
  ),
)

You can find the complete list in the documentation.

Select takes any number of matchers and wraps them into a q.And() so it's not necessary to specify it. It returns a Query type.

query := db.Select(q.Gte("Age", 7), q.Lte("Age", 77))

The Query type contains methods to filter and order the records.

// Limit
query = query.Limit(10)

// Skip
query = query.Skip(20)

// Calls can also be chained
query = query.Limit(10).Skip(20).OrderBy("Age").Reverse()

It also provides methods to specify how to fetch the records.

var users []User
err = query.Find(&users)

var user User
err = query.First(&user)

Examples with Select:

// Find all users with an ID between 10 and 100
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Find(&users)

// Nested matchers
err = db.Select(q.Or(
  q.Gt("ID", 50),
  q.Lt("Age", 21),
  q.And(
    q.Eq("Group", "admin"),
    q.Gte("Age", 21),
  ),
)).Find(&users)

query := db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name")

// Find multiple records
err = query.Find(&users)
// or
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name").Find(&users)

// Find first record
err = query.First(&user)
// or
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name").First(&user)

// Delete all matching records
err = query.Delete(new(User))

// Fetching records one by one (useful when the bucket contains a lot of records)
query = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).OrderBy("Age", "Name")

err = query.Each(new(User), func(record interface{}) error {
  u := record.(*User)
  ...
  return nil
})

See the documentation for a complete list of methods.

Transactions
tx, err := db.Begin(true)
if err != nil {
  return err
}
defer tx.Rollback()

accountA.Amount -= 100
accountB.Amount += 100

err = tx.Save(accountA)
if err != nil {
  return err
}

err = tx.Save(accountB)
if err != nil {
  return err
}

return tx.Commit()
Options

Storm options are functions that can be passed when constructing your Storm instance. You can pass any number of options.
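
For example, several options can be combined in a single call to Open. A minimal sketch, assuming the gob codec package is imported (see Provided Codecs below):

db, err := storm.Open("my.db", storm.Batch(), storm.Codec(gob.Codec))
if err != nil {
  // handle the error
}
defer db.Close()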

BoltOptions

By default, Storm opens a database with the mode 0600 and a timeout of one second. You can change this behavior by using BoltOptions

db, err := storm.Open("my.db", storm.BoltOptions(0600, &bolt.Options{Timeout: 1 * time.Second}))
MarshalUnmarshaler

To store the data in BoltDB, Storm marshals it in JSON by default. If you wish to change this behavior you can pass a codec that implements codec.MarshalUnmarshaler via the storm.Codec option:

db := storm.Open("my.db", storm.Codec(myCodec))
Provided Codecs

You can easily implement your own MarshalUnmarshaler, but Storm comes with built-in support for JSON (default), GOB, Sereal, Protocol Buffers and MessagePack.

These can be used by importing the relevant package and using that codec to configure Storm. The example below shows all variants (without proper error handling):

import (
	"github.com/asdine/storm"
	"github.com/asdine/storm/codec/gob"
	"github.com/asdine/storm/codec/json"
	"github.com/asdine/storm/codec/sereal"
	"github.com/asdine/storm/codec/protobuf"
	"github.com/asdine/storm/codec/msgpack"
)

var gobDb, _ = storm.Open("gob.db", storm.Codec(gob.Codec))
var jsonDb, _ = storm.Open("json.db", storm.Codec(json.Codec))
var serealDb, _ = storm.Open("sereal.db", storm.Codec(sereal.Codec))
var protobufDb, _ = storm.Open("protobuf.db", storm.Codec(protobuf.Codec))
var msgpackDb, _ = storm.Open("msgpack.db", storm.Codec(msgpack.Codec))

Tip: Adding Storm tags to generated Protobuf files can be tricky. A good solution is to use this tool to inject the tags during compilation.

Use existing Bolt connection

You can use an existing connection and pass it to Storm

bDB, _ := bolt.Open(filepath.Join(dir, "bolt.db"), 0600, &bolt.Options{Timeout: 10 * time.Second})
db := storm.Open("my.db", storm.UseDB(bDB))
Batch mode

Batch mode can be enabled to speed up concurrent writes (see Batch read-write transactions)

db := storm.Open("my.db", storm.Batch())

Nodes and nested buckets

Storm takes advantage of BoltDB's nested buckets feature by using storm.Node. A storm.Node is the underlying object used by storm.DB to manipulate a bucket. To create a nested bucket and use the same API as storm.DB, you can use the DB.From method.

repo := db.From("repo")

err := repo.Save(&Issue{
  Title: "I want more features",
  Author: user.ID,
})

err = repo.Save(newRelease("0.10"))

var issues []Issue
err = repo.Find("Author", user.ID, &issues)

var release Release
err = repo.One("Tag", "0.10", &release)

You can also chain the nodes to create a hierarchy

chars := db.From("characters")
heroes := chars.From("heroes")
enemies := chars.From("enemies")

items := db.From("items")
potions := items.From("consumables").From("medicine").From("potions")

You can even pass the entire hierarchy as arguments to From:

privateNotes := db.From("notes", "private")
workNotes := db.From("notes", "work")
Node options

A Node can also be configured. Activating an option on a Node creates a copy, so a Node is always thread-safe.

n := db.From("my-node")

Give a bolt.Tx transaction to the Node

n = n.WithTransaction(tx)

Enable batch mode

n = n.WithBatch(true)

Use a Codec

n = n.WithCodec(gob.Codec)

Simple Key/Value store

Storm can be used as a simple, robust, key/value store that can store anything. The key and the value can be of any type as long as the key is not a zero value.

Saving data:

db.Set("logs", time.Now(), "I'm eating my breakfast man")
db.Set("sessions", bson.NewObjectId(), &someUser)
db.Set("weird storage", "754-3010", map[string]interface{}{
  "hair": "blonde",
  "likes": []string{"cheese", "star wars"},
})

Fetching data:

user := User{}
db.Get("sessions", someObjectId, &user)

var details map[string]interface{}
db.Get("weird storage", "754-3010", &details)

db.Get("sessions", someObjectId, &details)

Deleting data:

db.Delete("sessions", someObjectId)
db.Delete("weird storage", "754-3010")

BoltDB

BoltDB is still easily accessible and can be used as usual

db.Bolt.View(func(tx *bolt.Tx) error {
  bucket := tx.Bucket([]byte("my bucket"))
  val := bucket.Get([]byte("any id"))
  fmt.Println(string(val))
  return nil
})

A transaction can also be passed to Storm

db.Bolt.Update(func(tx *bolt.Tx) error {
  ...
  dbx := db.WithTransaction(tx)
  err = dbx.Save(&user)
  ...
  return nil
})

Migrations

You can use the migration tool to migrate databases that use older versions of Storm. See this README for more information.

License

MIT

Credits

Documentation


Constants

const Version = "1.0.0"

Version of Storm

Variables

var (
	// ErrNoID is returned when no ID field or id tag is found in the struct.
	ErrNoID = errors.New("missing struct tag id or ID field")

	// ErrZeroID is returned when the ID field is a zero value.
	ErrZeroID = errors.New("id field must not be a zero value")

	// ErrBadType is returned when a method receives an unexpected value type.
	ErrBadType = errors.New("provided data must be a struct or a pointer to struct")

	// ErrAlreadyExists is returned when trying to set an existing value on a field that has a unique index.
	ErrAlreadyExists = errors.New("already exists")

	// ErrNilParam is returned when the specified param is expected to be not nil.
	ErrNilParam = errors.New("param must not be nil")

	// ErrUnknownTag is returned when an unexpected tag is specified.
	ErrUnknownTag = errors.New("unknown tag")

	// ErrIdxNotFound is returned when the specified index is not found.
	ErrIdxNotFound = errors.New("index not found")

	// ErrSlicePtrNeeded is returned when an unexpected value is given, instead of a pointer to slice.
	ErrSlicePtrNeeded = errors.New("provided target must be a pointer to slice")

	// ErrStructPtrNeeded is returned when an unexpected value is given, instead of a pointer to struct.
	ErrStructPtrNeeded = errors.New("provided target must be a pointer to struct")

	// ErrPtrNeeded is returned when an unexpected value is given, instead of a pointer.
	ErrPtrNeeded = errors.New("provided target must be a pointer to a valid variable")

	// ErrNoName is returned when the specified struct has no name.
	ErrNoName = errors.New("provided target must have a name")

	// ErrNotFound is returned when the specified record is not saved in the bucket.
	ErrNotFound = errors.New("not found")

	// ErrNotInTransaction is returned when trying to rollback or commit when not in transaction.
	ErrNotInTransaction = errors.New("not in transaction")

	// ErrUnAddressable is returned when a struct or an exported field of a struct is unaddressable
	ErrUnAddressable = errors.New("unaddressable value")

	// ErrIncompatibleValue is returned when trying to set a value with a different type than the chosen field
	ErrIncompatibleValue = errors.New("incompatible value")

	// ErrDifferentCodec is returned when using a codec different than the first codec used with the bucket.
	ErrDifferentCodec = errors.New("the selected codec is incompatible with this bucket")
)

Errors

Functions

func AutoIncrement

func AutoIncrement() func(*DB) error

AutoIncrement used to enable bolt.NextSequence on empty integer ids. Deprecated: Set the increment tag to the id field instead.

func Batch

func Batch() func(*DB) error

Batch enables the use of batch instead of update for read-write transactions.

func BoltOptions

func BoltOptions(mode os.FileMode, options *bolt.Options) func(*DB) error

BoltOptions used to pass options to BoltDB.

func Codec

func Codec(c codec.MarshalUnmarshaler) func(*DB) error

Codec used to set a custom encoder and decoder. The default is JSON.

func Limit

func Limit(limit int) func(*index.Options)

Limit sets the maximum number of records to return

Example
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/asdine/storm"
)

func main() {
	dir, db := prepareDB()
	defer os.RemoveAll(dir)
	defer db.Close()

	var users []User
	err := db.All(&users, storm.Limit(2))

	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Found", len(users))

}

type User struct {
	ID        int    `storm:"id"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	Age       int       `storm:"index"`
	CreatedAt time.Time `storm:"index"`
}

type Account struct {
	ID     int `storm:"id"`
	Amount int64
}

func prepareDB() (string, *storm.DB) {
	dir, _ := ioutil.TempDir(os.TempDir(), "storm")
	db, _ := storm.Open(filepath.Join(dir, "storm.db"), storm.AutoIncrement())

	for i, name := range []string{"John", "Eric", "Dilbert"} {
		email := strings.ToLower(name + "@provider.com")
		user := User{
			Group:     "staff",
			Email:     email,
			Name:      name,
			Age:       21 + i,
			CreatedAt: time.Now(),
		}
		err := db.Save(&user)

		if err != nil {
			log.Fatal(err)
		}
	}

	for i := int64(0); i < 10; i++ {
		account := Account{Amount: 10000}

		err := db.Save(&account)

		if err != nil {
			log.Fatal(err)
		}
	}

	return dir, db
}
Output:

Found 2

func Reverse

func Reverse() func(*index.Options)

Reverse will return the results in descending order

func Root

func Root(root ...string) func(*DB) error

Root used to set the root bucket. See also the From method.
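
A minimal sketch of the Root option; the root bucket name "myapp" is illustrative. Every subsequent operation on db then runs relative to that bucket:

db, err := storm.Open("my.db", storm.Root("myapp"))
if err != nil {
	log.Fatal(err)
}
defer db.Close()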

func Skip

func Skip(offset int) func(*index.Options)

Skip sets the number of records to skip

Example
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/asdine/storm"
)

func main() {
	dir, db := prepareDB()
	defer os.RemoveAll(dir)
	defer db.Close()

	var users []User
	err := db.All(&users, storm.Skip(1))

	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Found", len(users))

}

type User struct {
	ID        int    `storm:"id"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	Age       int       `storm:"index"`
	CreatedAt time.Time `storm:"index"`
}

type Account struct {
	ID     int `storm:"id"`
	Amount int64
}

func prepareDB() (string, *storm.DB) {
	dir, _ := ioutil.TempDir(os.TempDir(), "storm")
	db, _ := storm.Open(filepath.Join(dir, "storm.db"), storm.AutoIncrement())

	for i, name := range []string{"John", "Eric", "Dilbert"} {
		email := strings.ToLower(name + "@provider.com")
		user := User{
			Group:     "staff",
			Email:     email,
			Name:      name,
			Age:       21 + i,
			CreatedAt: time.Now(),
		}
		err := db.Save(&user)

		if err != nil {
			log.Fatal(err)
		}
	}

	for i := int64(0); i < 10; i++ {
		account := Account{Amount: 10000}

		err := db.Save(&account)

		if err != nil {
			log.Fatal(err)
		}
	}

	return dir, db
}
Output:

Found 2

func UseDB

func UseDB(b *bolt.DB) func(*DB) error

UseDB allows Storm to use an existing open bolt.DB. Warning: storm.DB.Close() will close the bolt.DB instance.

Example
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"time"

	"github.com/asdine/storm"
	"github.com/boltdb/bolt"
)

func main() {
	dir, _ := ioutil.TempDir(os.TempDir(), "storm")
	defer os.RemoveAll(dir)

	bDB, err := bolt.Open(filepath.Join(dir, "bolt.db"), 0600, &bolt.Options{Timeout: 10 * time.Second})
	if err != nil {
		log.Fatal(err)
	}

	db, _ := storm.Open("", storm.UseDB(bDB))
	defer db.Close()

	err = db.Save(&User{ID: 10})
	if err != nil {
		log.Fatal(err)
	}

	var user User
	err = db.One("ID", 10, &user)
	fmt.Println(err)

}

type User struct {
	ID        int    `storm:"id"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	Age       int       `storm:"index"`
	CreatedAt time.Time `storm:"index"`
}
Output:

<nil>

Types

type BucketScanner

type BucketScanner interface {
	// PrefixScan scans the root buckets for keys matching the given prefix.
	PrefixScan(prefix string) []Node
	// RangeScan scans the root buckets over a range such as a sortable time range.
	RangeScan(min, max string) []Node
}

A BucketScanner scans a Node for a list of buckets
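
A minimal sketch of bucket scanning, assuming a few root buckets were previously created with From; the bucket names are illustrative:

// One Node per root bucket whose name starts with "notes".
nodes := db.PrefixScan("notes")
for _, node := range nodes {
	fmt.Println(node.Bucket())
}

// Root buckets whose names sort between the two bounds.
nodes = db.RangeScan("logs-2017-01", "logs-2017-06")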

type DB

type DB struct {
	// Path of the database file
	Path string

	// Bolt is still easily accessible
	Bolt *bolt.DB
	// contains filtered or unexported fields
}

DB is the wrapper around BoltDB. It contains an instance of BoltDB and uses it to perform all the needed operations

func Open

func Open(path string, stormOptions ...func(*DB) error) (*DB, error)

Open opens a database at the given path with optional Storm options.

func (*DB) All

func (s *DB) All(to interface{}, options ...func(*index.Options)) error

All get all the records of a bucket

Example
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/asdine/storm"
)

func main() {
	dir, db := prepareDB()
	defer os.RemoveAll(dir)
	defer db.Close()

	var users []User
	err := db.All(&users)

	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Found", len(users))

}

type User struct {
	ID        int    `storm:"id"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	Age       int       `storm:"index"`
	CreatedAt time.Time `storm:"index"`
}

type Account struct {
	ID     int `storm:"id"`
	Amount int64
}

func prepareDB() (string, *storm.DB) {
	dir, _ := ioutil.TempDir(os.TempDir(), "storm")
	db, _ := storm.Open(filepath.Join(dir, "storm.db"), storm.AutoIncrement())

	for i, name := range []string{"John", "Eric", "Dilbert"} {
		email := strings.ToLower(name + "@provider.com")
		user := User{
			Group:     "staff",
			Email:     email,
			Name:      name,
			Age:       21 + i,
			CreatedAt: time.Now(),
		}
		err := db.Save(&user)

		if err != nil {
			log.Fatal(err)
		}
	}

	for i := int64(0); i < 10; i++ {
		account := Account{Amount: 10000}

		err := db.Save(&account)

		if err != nil {
			log.Fatal(err)
		}
	}

	return dir, db
}
Output:

Found 3

func (*DB) AllByIndex

func (s *DB) AllByIndex(fieldName string, to interface{}, options ...func(*index.Options)) error

AllByIndex gets all the records of a bucket that are indexed in the specified index

Example
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/asdine/storm"
)

func main() {
	dir, db := prepareDB()
	defer os.RemoveAll(dir)
	defer db.Close()

	var users []User
	err := db.AllByIndex("CreatedAt", &users)

	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Found", len(users))

}

type User struct {
	ID        int    `storm:"id"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	Age       int       `storm:"index"`
	CreatedAt time.Time `storm:"index"`
}

type Account struct {
	ID     int `storm:"id"`
	Amount int64
}

func prepareDB() (string, *storm.DB) {
	dir, _ := ioutil.TempDir(os.TempDir(), "storm")
	db, _ := storm.Open(filepath.Join(dir, "storm.db"), storm.AutoIncrement())

	for i, name := range []string{"John", "Eric", "Dilbert"} {
		email := strings.ToLower(name + "@provider.com")
		user := User{
			Group:     "staff",
			Email:     email,
			Name:      name,
			Age:       21 + i,
			CreatedAt: time.Now(),
		}
		err := db.Save(&user)

		if err != nil {
			log.Fatal(err)
		}
	}

	for i := int64(0); i < 10; i++ {
		account := Account{Amount: 10000}

		err := db.Save(&account)

		if err != nil {
			log.Fatal(err)
		}
	}

	return dir, db
}
Output:

Found 3

func (*DB) Begin

func (s *DB) Begin(writable bool) (Node, error)

Begin starts a new transaction.

Example
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/asdine/storm"
)

func main() {
	dir, db := prepareDB()
	defer os.RemoveAll(dir)
	defer db.Close()

	// both start out with a balance of 10000 cents
	var account1, account2 Account

	tx, err := db.Begin(true)

	if err != nil {
		log.Fatal(err)
	}
	defer tx.Rollback()

	err = tx.One("ID", 1, &account1)

	if err != nil {
		log.Fatal(err)
	}

	err = tx.One("ID", 2, &account2)

	if err != nil {
		log.Fatal(err)
	}

	account1.Amount -= 1000
	account2.Amount += 1000

	err = tx.Save(&account1)

	if err != nil {
		log.Fatal(err)
	}

	err = tx.Save(&account2)

	if err != nil {
		log.Fatal(err)
	}

	tx.Commit()

	var account1Reloaded, account2Reloaded Account

	err = db.One("ID", 1, &account1Reloaded)

	if err != nil {
		log.Fatal(err)
	}

	err = db.One("ID", 2, &account2Reloaded)

	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Amount in account 1:", account1Reloaded.Amount)
	fmt.Println("Amount in account 2:", account2Reloaded.Amount)

}

type User struct {
	ID        int    `storm:"id"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	Age       int       `storm:"index"`
	CreatedAt time.Time `storm:"index"`
}

type Account struct {
	ID     int `storm:"id"`
	Amount int64
}

func prepareDB() (string, *storm.DB) {
	dir, _ := ioutil.TempDir(os.TempDir(), "storm")
	db, _ := storm.Open(filepath.Join(dir, "storm.db"), storm.AutoIncrement())

	for i, name := range []string{"John", "Eric", "Dilbert"} {
		email := strings.ToLower(name + "@provider.com")
		user := User{
			Group:     "staff",
			Email:     email,
			Name:      name,
			Age:       21 + i,
			CreatedAt: time.Now(),
		}
		err := db.Save(&user)

		if err != nil {
			log.Fatal(err)
		}
	}

	for i := int64(0); i < 10; i++ {
		account := Account{Amount: 10000}

		err := db.Save(&account)

		if err != nil {
			log.Fatal(err)
		}
	}

	return dir, db
}
Output:

Amount in account 1: 9000
Amount in account 2: 11000

func (*DB) Bucket

func (s *DB) Bucket() []string

Bucket returns the root bucket name as a slice. In the normal, simple case this will be empty.

func (*DB) Close

func (s *DB) Close() error

Close the database

func (*DB) Codec

func (s *DB) Codec() codec.MarshalUnmarshaler

Codec returns the EncodeDecoder used by this instance of Storm

func (*DB) Commit

func (s *DB) Commit() error

Commit writes all changes to disk.

func (*DB) Count

func (s *DB) Count(data interface{}) (int, error)

Count counts all the records of a bucket
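
A minimal sketch, reusing the User type from the examples above; Count infers the bucket from the type of the value it receives:

numUsers, err := db.Count(&User{})
if err != nil {
	log.Fatal(err)
}
fmt.Println("users in bucket:", numUsers)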

func (*DB) CreateBucketIfNotExists

func (s *DB) CreateBucketIfNotExists(tx *bolt.Tx, bucket string) (*bolt.Bucket, error)

CreateBucketIfNotExists creates the bucket below the current node if it doesn't already exist.
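
A minimal sketch, run inside a Bolt read-write transaction; the bucket name "logs" is illustrative:

err := db.Bolt.Update(func(tx *bolt.Tx) error {
	_, err := db.CreateBucketIfNotExists(tx, "logs")
	return err
})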

func (*DB) Delete

func (s *DB) Delete(bucketName string, key interface{}) error

Delete deletes a key from a bucket

func (*DB) DeleteStruct

func (s *DB) DeleteStruct(data interface{}) error

DeleteStruct deletes a structure from the associated bucket

Example
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/asdine/storm"
)

func main() {
	dir, db := prepareDB()
	defer os.RemoveAll(dir)
	defer db.Close()

	var user User

	err := db.One("ID", 1, &user)

	if err != nil {
		log.Fatal(err)
	}

	err = db.DeleteStruct(&user)
	fmt.Println(err)

}

type User struct {
	ID        int    `storm:"id"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	Age       int       `storm:"index"`
	CreatedAt time.Time `storm:"index"`
}

type Account struct {
	ID     int `storm:"id"`
	Amount int64
}

func prepareDB() (string, *storm.DB) {
	dir, _ := ioutil.TempDir(os.TempDir(), "storm")
	db, _ := storm.Open(filepath.Join(dir, "storm.db"), storm.AutoIncrement())

	for i, name := range []string{"John", "Eric", "Dilbert"} {
		email := strings.ToLower(name + "@provider.com")
		user := User{
			Group:     "staff",
			Email:     email,
			Name:      name,
			Age:       21 + i,
			CreatedAt: time.Now(),
		}
		err := db.Save(&user)

		if err != nil {
			log.Fatal(err)
		}
	}

	for i := int64(0); i < 10; i++ {
		account := Account{Amount: 10000}

		err := db.Save(&account)

		if err != nil {
			log.Fatal(err)
		}
	}

	return dir, db
}
Output:

<nil>

func (*DB) Drop

func (s *DB) Drop(data interface{}) error

Drop a bucket

Example
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/asdine/storm"
)

func main() {
	dir, db := prepareDB()
	defer os.RemoveAll(dir)
	defer db.Close()

	var user User

	err := db.One("Email", "john@provider.com", &user)

	if err != nil {
		log.Fatal(err)
	}

	err = db.Drop("User")
	if err != nil {
		log.Fatal(err)
	}

	// The bucket has been dropped, so the record can no longer be found.
	err = db.One("Email", "john@provider.com", &user)
	fmt.Println(err)

}

type User struct {
	ID        int    `storm:"id"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	Age       int       `storm:"index"`
	CreatedAt time.Time `storm:"index"`
}

type Account struct {
	ID     int `storm:"id"`
	Amount int64
}

func prepareDB() (string, *storm.DB) {
	dir, _ := ioutil.TempDir(os.TempDir(), "storm")
	db, _ := storm.Open(filepath.Join(dir, "storm.db"), storm.AutoIncrement())

	for i, name := range []string{"John", "Eric", "Dilbert"} {
		email := strings.ToLower(name + "@provider.com")
		user := User{
			Group:     "staff",
			Email:     email,
			Name:      name,
			Age:       21 + i,
			CreatedAt: time.Now(),
		}
		err := db.Save(&user)

		if err != nil {
			log.Fatal(err)
		}
	}

	for i := int64(0); i < 10; i++ {
		account := Account{Amount: 10000}

		err := db.Save(&account)

		if err != nil {
			log.Fatal(err)
		}
	}

	return dir, db
}
Output:

not found

func (*DB) Find

func (s *DB) Find(fieldName string, value interface{}, to interface{}, options ...func(q *index.Options)) error

Find returns one or more records by the specified index

Example
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/asdine/storm"
)

func main() {
	dir, db := prepareDB()
	defer os.RemoveAll(dir)
	defer db.Close()

	var users []User
	err := db.Find("Group", "staff", &users)

	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Found", len(users))

}

type User struct {
	ID        int    `storm:"id"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	Age       int       `storm:"index"`
	CreatedAt time.Time `storm:"index"`
}

type Account struct {
	ID     int `storm:"id"`
	Amount int64
}

func prepareDB() (string, *storm.DB) {
	dir, _ := ioutil.TempDir(os.TempDir(), "storm")
	db, _ := storm.Open(filepath.Join(dir, "storm.db"), storm.AutoIncrement())

	for i, name := range []string{"John", "Eric", "Dilbert"} {
		email := strings.ToLower(name + "@provider.com")
		user := User{
			Group:     "staff",
			Email:     email,
			Name:      name,
			Age:       21 + i,
			CreatedAt: time.Now(),
		}
		err := db.Save(&user)

		if err != nil {
			log.Fatal(err)
		}
	}

	for i := int64(0); i < 10; i++ {
		account := Account{Amount: 10000}

		err := db.Save(&account)

		if err != nil {
			log.Fatal(err)
		}
	}

	return dir, db
}
Output:

Found 3

func (*DB) From

func (s *DB) From(root ...string) Node

From returns a new Storm node with a new bucket root. All DB operations on the new node will be executed relative to the given bucket.

Example
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/asdine/storm"
)

func main() {
	dir, db := prepareDB()
	defer os.RemoveAll(dir)
	defer db.Close()

	// Create some sub buckets to partition the data.
	privateNotes := db.From("notes", "private")
	workNotes := db.From("notes", "work")

	err := privateNotes.Save(&Note{ID: "private1", Text: "This is some private text."})

	if err != nil {
		log.Fatal(err)
	}

	err = workNotes.Save(&Note{ID: "work1", Text: "Work related."})

	if err != nil {
		log.Fatal(err)
	}

	var privateNote, workNote, personalNote Note

	err = privateNotes.One("ID", "work1", &workNote)

	// Not found: Wrong bucket.
	fmt.Println(err)

	err = workNotes.One("ID", "work1", &workNote)

	if err != nil {
		log.Fatal(err)
	}

	err = privateNotes.One("ID", "private1", &privateNote)

	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(workNote.Text)
	fmt.Println(privateNote.Text)

	// These can be nested further if needed:
	personalNotes := privateNotes.From("personal")
	err = personalNotes.Save(&Note{ID: "personal1", Text: "This is some very personal text."})

	if err != nil {
		log.Fatal(err)
	}

	err = personalNotes.One("ID", "personal1", &personalNote)

	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(personalNote.Text)

}

type User struct {
	ID        int    `storm:"id"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	Age       int       `storm:"index"`
	CreatedAt time.Time `storm:"index"`
}

type Account struct {
	ID     int `storm:"id"`
	Amount int64
}

type Note struct {
	ID   string `storm:"id"`
	Text string
}

func prepareDB() (string, *storm.DB) {
	dir, _ := ioutil.TempDir(os.TempDir(), "storm")
	db, _ := storm.Open(filepath.Join(dir, "storm.db"), storm.AutoIncrement())

	for i, name := range []string{"John", "Eric", "Dilbert"} {
		email := strings.ToLower(name + "@provider.com")
		user := User{
			Group:     "staff",
			Email:     email,
			Name:      name,
			Age:       21 + i,
			CreatedAt: time.Now(),
		}
		err := db.Save(&user)

		if err != nil {
			log.Fatal(err)
		}
	}

	for i := int64(0); i < 10; i++ {
		account := Account{Amount: 10000}

		err := db.Save(&account)

		if err != nil {
			log.Fatal(err)
		}
	}

	return dir, db
}
Output:

not found
Work related.
This is some private text.
This is some very personal text.

func (*DB) Get

func (s *DB) Get(bucketName string, key interface{}, to interface{}) error

Get a value from a bucket

func (*DB) GetBucket

func (s *DB) GetBucket(tx *bolt.Tx, children ...string) *bolt.Bucket

GetBucket returns the given bucket below the current node.

func (*DB) GetBytes

func (s *DB) GetBytes(bucketName string, key interface{}) ([]byte, error)

GetBytes gets a raw value from a bucket.
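
A minimal sketch of a raw round trip with SetBytes (documented below) and GetBytes; the bucket and key names are illustrative:

// Store a raw payload without going through the codec.
err := db.SetBytes("blobs", "avatar-1", []byte("raw bytes"))
if err != nil {
	log.Fatal(err)
}

// Read it back without decoding.
raw, err := db.GetBytes("blobs", "avatar-1")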

func (*DB) Init

func (s *DB) Init(data interface{}) error

Init creates the indexes and buckets for a given structure

func (*DB) One

func (s *DB) One(fieldName string, value interface{}, to interface{}) error

One returns one record by the specified index

Example
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/asdine/storm"
)

func main() {
	dir, db := prepareDB()
	defer os.RemoveAll(dir)
	defer db.Close()

	var user User

	err := db.One("Email", "john@provider.com", &user)

	if err != nil {
		log.Fatal(err)
	}

	// also works on unindexed fields
	err = db.One("Name", "John", &user)

	if err != nil {
		log.Fatal(err)
	}

	err = db.One("Name", "Jack", &user)
	fmt.Println(err)

}

type User struct {
	ID        int    `storm:"id"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	Age       int       `storm:"index"`
	CreatedAt time.Time `storm:"index"`
}

type Account struct {
	ID     int `storm:"id"`
	Amount int64
}

func prepareDB() (string, *storm.DB) {
	dir, _ := ioutil.TempDir(os.TempDir(), "storm")
	db, _ := storm.Open(filepath.Join(dir, "storm.db"), storm.AutoIncrement())

	for i, name := range []string{"John", "Eric", "Dilbert"} {
		email := strings.ToLower(name + "@provider.com")
		user := User{
			Group:     "staff",
			Email:     email,
			Name:      name,
			Age:       21 + i,
			CreatedAt: time.Now(),
		}
		err := db.Save(&user)

		if err != nil {
			log.Fatal(err)
		}
	}

	for i := int64(0); i < 10; i++ {
		account := Account{Amount: 10000}

		err := db.Save(&account)

		if err != nil {
			log.Fatal(err)
		}
	}

	return dir, db
}
Output:

not found

func (*DB) Prefix

func (s *DB) Prefix(fieldName string, prefix string, to interface{}, options ...func(*index.Options)) error

Prefix returns one or more records whose given field starts with the specified prefix.

func (*DB) PrefixScan

func (s *DB) PrefixScan(prefix string) []Node

PrefixScan scans the root buckets for keys matching the given prefix.

func (*DB) Range

func (s *DB) Range(fieldName string, min, max, to interface{}, options ...func(*index.Options)) error

Range returns one or more records by the specified index within the specified range

Example
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/asdine/storm"
)

func main() {
	dir, db := prepareDB()
	defer os.RemoveAll(dir)
	defer db.Close()

	var users []User
	err := db.Range("Age", 21, 22, &users)

	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Found", len(users))

}

type User struct {
	ID        int    `storm:"id"`
	Group     string `storm:"index"`
	Email     string `storm:"unique"`
	Name      string
	Age       int       `storm:"index"`
	CreatedAt time.Time `storm:"index"`
}

type Account struct {
	ID     int `storm:"id"`
	Amount int64
}

func prepareDB() (string, *storm.DB) {
	dir, _ := ioutil.TempDir(os.TempDir(), "storm")
	db, _ := storm.Open(filepath.Join(dir, "storm.db"), storm.AutoIncrement())

	for i, name := range []string{"John", "Eric", "Dilbert"} {
		email := strings.ToLower(name + "@provider.com")
		user := User{
			Group:     "staff",
			Email:     email,
			Name:      name,
			Age:       21 + i,
			CreatedAt: time.Now(),
		}
		err := db.Save(&user)

		if err != nil {
			log.Fatal(err)
		}
	}

	for i := int64(0); i < 10; i++ {
		account := Account{Amount: 10000}

		err := db.Save(&account)

		if err != nil {
			log.Fatal(err)
		}
	}

	return dir, db
}
Output:

Found 2

func (*DB) RangeScan

func (s *DB) RangeScan(min, max string) []Node

RangeScan scans the root buckets over a range such as a sortable time range.

func (*DB) ReIndex

func (s *DB) ReIndex(data interface{}) error

ReIndex rebuilds all the indexes of a bucket

func (*DB) Remove

func (s *DB) Remove(data interface{}) error

Remove deletes a structure from the associated bucket. Deprecated: Use DeleteStruct instead.

func (*DB) Rollback

func (s *DB) Rollback() error

Rollback closes the transaction and ignores all previous updates.

func (*DB) Save

func (s *DB) Save(data interface{}) error

Save a structure

Example
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"time"

	"github.com/asdine/storm"
)

func main() {
	dir, _ := ioutil.TempDir(os.TempDir(), "storm")
	defer os.RemoveAll(dir)

	type User struct {
		ID        int    `storm:"id"`
		Group     string `storm:"index"`
		Email     string `storm:"unique"`
		Name      string
		Age       int       `storm:"index"`
		CreatedAt time.Time `storm:"index"`
	}

	// Open takes an optional list of options as the last argument.
	// AutoIncrement will auto-increment integer IDs without existing values.
	db, _ := storm.Open(filepath.Join(dir, "storm.db"), storm.AutoIncrement())
	defer db.Close()

	user := User{
		Group:     "staff",
		Email:     "john@provider.com",
		Name:      "John",
		Age:       21,
		CreatedAt: time.Now(),
	}

	err := db.Save(&user)

	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(user.ID)

	user2 := user
	user2.ID = 0

	// Save will fail because of the unique constraint on Email
	err = db.Save(&user2)
	fmt.Println(err)

}
Output:

1
already exists

func (*DB) Select

func (s *DB) Select(matchers ...q.Matcher) Query

Select a list of records that match a list of matchers. Doesn't use indexes.

func (*DB) Set

func (s *DB) Set(bucketName string, key interface{}, value interface{}) error

Set a key/value pair into a bucket

func (*DB) SetBytes

func (s *DB) SetBytes(bucketName string, key interface{}, value []byte) error

SetBytes sets a raw value into a bucket.

func (*DB) Update

func (s *DB) Update(data interface{}) error

Update a structure

func (*DB) UpdateField

func (s *DB) UpdateField(data interface{}, fieldName string, value interface{}) error

UpdateField updates a single field

func (*DB) WithBatch

func (s *DB) WithBatch(enabled bool) Node

WithBatch returns a new Storm Node with the batch mode enabled.

func (*DB) WithCodec

func (s *DB) WithCodec(codec codec.MarshalUnmarshaler) Node

WithCodec returns a New Storm Node that will use the given Codec.

func (*DB) WithTransaction

func (s *DB) WithTransaction(tx *bolt.Tx) Node

WithTransaction returns a New Storm node that will use the given transaction.

type Finder

type Finder interface {
	// One returns one record by the specified index
	One(fieldName string, value interface{}, to interface{}) error

	// Find returns one or more records by the specified index
	Find(fieldName string, value interface{}, to interface{}, options ...func(q *index.Options)) error

	// AllByIndex gets all the records of a bucket that are indexed in the specified index
	AllByIndex(fieldName string, to interface{}, options ...func(*index.Options)) error

	// All gets all the records of a bucket.
	// If there are no records it returns no error and the 'to' parameter is set to an empty slice.
	All(to interface{}, options ...func(*index.Options)) error

	// Select a list of records that match a list of matchers. Doesn't use indexes.
	Select(matchers ...q.Matcher) Query

	// Range returns one or more records by the specified index within the specified range
	Range(fieldName string, min, max, to interface{}, options ...func(*index.Options)) error

	// Prefix returns one or more records whose given field starts with the specified prefix.
	Prefix(fieldName string, prefix string, to interface{}, options ...func(*index.Options)) error

	// Count counts all the records of a bucket
	Count(data interface{}) (int, error)
}

A Finder can fetch types from BoltDB

type KeyValueStore

type KeyValueStore interface {
	// Get a value from a bucket
	Get(bucketName string, key interface{}, to interface{}) error
	// Set a key/value pair into a bucket
	Set(bucketName string, key interface{}, value interface{}) error
	// Delete deletes a key from a bucket
	Delete(bucketName string, key interface{}) error
	// GetBytes gets a raw value from a bucket.
	GetBytes(bucketName string, key interface{}) ([]byte, error)
	// SetBytes sets a raw value into a bucket.
	SetBytes(bucketName string, key interface{}, value []byte) error
}

KeyValueStore can store and fetch values by key

type Node

type Node interface {
	Tx
	TypeStore
	KeyValueStore
	BucketScanner
	// From returns a new Storm node with a new bucket root below the current.
	// All DB operations on the new node will be executed relative to this bucket.
	From(addend ...string) Node

	// Bucket returns the bucket name as a slice from the root.
	// In the normal, simple case this will be empty.
	Bucket() []string

	// GetBucket returns the given bucket below the current node.
	GetBucket(tx *bolt.Tx, children ...string) *bolt.Bucket

	// CreateBucketIfNotExists creates the bucket below the current node if it doesn't
	// already exist.
	CreateBucketIfNotExists(tx *bolt.Tx, bucket string) (*bolt.Bucket, error)

	// WithTransaction returns a New Storm node that will use the given transaction.
	WithTransaction(tx *bolt.Tx) Node

	// Begin starts a new transaction.
	Begin(writable bool) (Node, error)

	// Codec used by this instance of Storm
	Codec() codec.MarshalUnmarshaler

	// WithCodec returns a New Storm Node that will use the given Codec.
	WithCodec(codec codec.MarshalUnmarshaler) Node

	// WithBatch returns a new Storm Node with the batch mode enabled.
	WithBatch(enabled bool) Node
}

A Node in Storm represents the API to a BoltDB bucket.

type Query

type Query interface {
	// Skip matching records by the given number
	Skip(int) Query

	// Limit the results by the given number
	Limit(int) Query

	// Order by the given fields, in descending precedence, left-to-right.
	OrderBy(...string) Query

	// Reverse the order of the results
	Reverse() Query

	// Bucket specifies the bucket name
	Bucket(string) Query

	// Find a list of matching records
	Find(interface{}) error

	// First gets the first matching record
	First(interface{}) error

	// Delete all matching records
	Delete(interface{}) error

	// Count all the matching records
	Count(interface{}) (int, error)

	// Returns all the records without decoding them
	Raw() ([][]byte, error)

	// Execute the given function for each raw element
	RawEach(func([]byte, []byte) error) error

	// Execute the given function for each element
	Each(interface{}, func(interface{}) error) error
}

Query is the low level query engine used by Storm. It allows searches to be run across an entire bucket.
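
A minimal sketch of the Query methods that have no example above, reusing the User type from the README; the matcher value and bucket name are illustrative:

query := db.Select(q.Gte("Age", 18)).Bucket("User")

// Count the matching records.
n, err := query.Count(new(User))
fmt.Println("matching records:", n, err)

// Iterate over the raw key/value pairs without decoding them.
err = query.RawEach(func(k, v []byte) error {
	fmt.Println(string(k), len(v))
	return nil
})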

type Tx

type Tx interface {
	// Commit writes all changes to disk.
	Commit() error

	// Rollback closes the transaction and ignores all previous updates.
	Rollback() error
}

Tx is a transaction

type TypeStore

type TypeStore interface {
	Finder
	// Init creates the indexes and buckets for a given structure
	Init(data interface{}) error

	// ReIndex rebuilds all the indexes of a bucket
	ReIndex(data interface{}) error

	// Save a structure
	Save(data interface{}) error

	// Update a structure
	Update(data interface{}) error

	// UpdateField updates a single field
	UpdateField(data interface{}, fieldName string, value interface{}) error

	// Drop a bucket
	Drop(data interface{}) error

	// DeleteStruct deletes a structure from the associated bucket
	DeleteStruct(data interface{}) error

	// Remove deletes a structure from the associated bucket
	// Deprecated: Use DeleteStruct instead.
	Remove(data interface{}) error
}

TypeStore stores user defined types in BoltDB

Directories

Path              Synopsis
codec             Package codec contains sub-packages with different codecs that can be used to encode and decode entities in Storm.
codec/gob         Package gob contains a codec to encode and decode entities in Gob format.
codec/json        Package json contains a codec to encode and decode entities in JSON format.
codec/msgpack     Package msgpack contains a codec to encode and decode entities in msgpack format.
codec/protobuf    Package protobuf contains a codec to encode and decode entities in Protocol Buffer format.
codec/sereal      Package sereal contains a codec to encode and decode entities using Sereal.
index             Package index contains Index engines used to store values and their corresponding IDs.
q                 Package q contains a list of Matchers used to compare struct fields with values.
