dbscan

package
v1.0.5
Published: Oct 28, 2020 License: MIT Imports: 5 Imported by: 0

Documentation

Overview

Package dbscan allows scanning data from abstract database rows into Go structs and more.

dbscan works with the abstract Rows interface and doesn't depend on any specific database or library. If a type implements Rows, it can leverage the full functionality of this package.

Mapping struct field to database column

The main feature of dbscan is the ability to scan rows data into structs.

type User struct {
	ID        string `db:"user_id"`
	FirstName string
	Email     string
}

// Query rows from the database that implement dbscan.Rows interface.
var rows dbscan.Rows

var users []*User
dbscan.ScanAll(&users, rows)
// users variable now contains data from all rows.

By default, dbscan translates the struct field name to snake case to get the corresponding database column. To override this behavior, specify the column name in the `db` field tag. In the example above, the User struct is mapped to the following columns: "user_id", "first_name", "email".

If the selected rows contain a column that doesn't have a corresponding struct field, dbscan returns an error; this forces the application to select only the data it actually needs from the database.
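For instance, a query matching the User struct above could look like the following sketch (the "users" table name is an assumption, not part of dbscan); selecting an extra column such as "last_name" would make dbscan return an error, because the struct has no matching field:

// Hypothetical query whose columns match the User struct exactly.
const selectUsers = `SELECT user_id, first_name, email FROM users`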

Reusing structs

dbscan works recursively: a struct can contain embedded or nested structs as well, which allows reusing models across different queries. Structs can be embedded or nested both by value and by pointer. If you don't specify the `db` tag, dbscan maps fields from nested structs to database columns using the struct field name translated to snake case as the prefix; in contrast, fields from embedded structs are mapped to database columns without any prefix. dbscan uses "." to separate the prefix from the column name. Here is an example:

type UserPost struct {
	*User
	Post Post
}

type User struct {
	UserID string
	Email  string
}

type Post struct {
	ID   string
	Text string
}

The UserPost struct is mapped to the following columns: "user_id", "email", "post.id", "post.text".
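One way to produce such prefixed column names is to alias them in the query; a minimal sketch, with hypothetical table names and quoted aliases for the dotted columns:

// Hypothetical query producing the columns listed above; the prefixed
// columns are created with quoted aliases such as "post.id".
const selectUserPosts = `
	SELECT u.user_id, u.email, p.id AS "post.id", p.text AS "post.text"
	FROM users u
	JOIN posts p ON p.user_id = u.user_id`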

To add a prefix to an embedded struct or to change the prefix of a nested struct, specify it in the `db` field tag. You can also use the empty tag `db:""` to remove the prefix of a nested struct. Here is an example:

type UserPostComment struct {
	*User   `db:"user"`
	Post    Post    `db:"p"`
	Comment Comment `db:""`
}

type User struct {
	UserID string
	Email  string
}

type Post struct {
	ID   string
	Text string
}

type Comment struct {
	CommentBody string
}

The UserPostComment struct is mapped to the following columns: "user.user_id", "user.email", "p.id", "p.text", "comment_body".

NULLs and custom types

dbscan fully supports custom types and NULLs. You can work with them the same way as you would when using your database library directly. Under the hood, dbscan passes all types that you provide to the underlying rows.Scan(), so if the database library supports a type, dbscan supports it automatically, for example:

type User struct {
	OptionalBio  *string
	OptionalAge  CustomNullInt
	Data         CustomData
	OptionalData *CustomData
}

type CustomNullInt struct {
	// Any fields that this custom type needs
}

type CustomData struct {
	// Any fields that this custom type needs
}

The User struct is valid and every field will be scanned properly; the only condition is that your database library can handle the *string, CustomNullInt, CustomData and *CustomData types.
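As a minimal sketch, assuming database/sql as the underlying library (the Account struct here is hypothetical), the standard sql.Null* types work automatically because *sql.Rows can scan into them directly:

// Assumes the "database/sql" import for the sql.Null* types.
type Account struct {
	Nickname sql.NullString // NULL-able text column
	Age      sql.NullInt64  // NULL-able integer column
}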

Ignored struct fields

In order for dbscan to work with a field, it must be exported; unexported fields are ignored. The only exception is embedded structs: the embedded type itself may be unexported.

It's possible to explicitly mark a field as ignored for dbscan. To do this, set the `db:"-"` struct tag. This also works for nested and embedded structs, for example:

type Comment struct {
	Post  `db:"-"`
	ID    string
	Body  string
	Likes int `db:"-"`
}

type Post struct {
	ID   string
	Text string
}

The Comment struct is mapped to the following columns: "id", "body".

Ambiguous struct fields

If a struct contains multiple fields that are mapped to the same database column, dbscan will assign to the outermost and topmost field, for example:

type UserPost struct {
	User
	Post
}

type Post struct {
	PostID string
	Text   string
	UserID string
}

type User struct {
	UserID string
	Email  string
}

The UserPost struct is mapped to the following columns: "user_id", "email", "post_id", "text". Both UserPost.User.UserID and UserPost.Post.UserID are mapped to the "user_id" column; because the User struct is embedded above the Post struct in the UserPost type, UserPost.User.UserID will receive data from the "user_id" column and UserPost.Post.UserID will remain empty. Note that you can't access the field as UserPost.UserID: the ambiguous selector is a compile-time error in Go, so you need to use the full path UserPost.User.UserID.

Scanning into map

Apart from scanning into structs, dbscan can handle maps; in that case, it uses the database column name as the map key and the column data as the map value, for example:

// Query rows from the database that implement dbscan.Rows interface.
var rows dbscan.Rows

var results []map[string]interface{}
dbscan.ScanAll(&results, rows)
// results variable now contains data from all rows.

The map type isn't limited to map[string]interface{}; it can be any map with a string key, e.g. map[string]string or map[string]int, as long as all column values have the same specific type.
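For instance, a minimal sketch of scanning into maps with a concrete value type, assuming every selected column holds text:

// Query rows from the database that implement dbscan.Rows interface.
var rows dbscan.Rows

// Works only if every column value can be scanned as a string.
var results []map[string]string
dbscan.ScanAll(&results, rows)
// results variable now contains data from all rows.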

Scanning into other types

If the destination is neither a struct nor a map, dbscan handles it as a single-column scan: it ensures that the rows contain exactly one column and scans the destination from that column, for example:

// Query rows from the database that implement dbscan.Rows interface.
var rows dbscan.Rows

var results []string
dbscan.ScanAll(&results, rows)
// results variable now contains data from the single column of all rows.

Duplicate columns

Rows must not contain duplicate columns; otherwise dbscan won't be able to decide which column to scan from and will return an error.
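A typical case is a join in which both tables have an "id" column; a minimal sketch of avoiding the duplicate with aliases (table and column names are hypothetical):

// Both "users" and "posts" have an "id" column, so the query aliases
// them to distinct names before the rows are passed to dbscan.
const selectIDs = `
	SELECT u.id AS user_id, p.id AS post_id
	FROM users u
	JOIN posts p ON p.author_id = u.id`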

Rows processing

The ScanAll and ScanOne functions take care of rows processing: they iterate the rows to the end and close them afterwards. Client code doesn't need to bother with that; it just passes the rows to dbscan.

Manual rows iteration

It's possible to control rows iteration manually while still using all scanning features of dbscan; see RowScanner for details.

Implementing Rows interface

dbscan can be used with any database library that has a concept of rows and can implement the dbscan Rows interface. It's pretty likely that your rows type already implements the Rows interface as-is; for example, this is true for the standard *sql.Rows type. Otherwise, you may only need a thin adapter, as was done for pgx.Rows in pgxscan; see pgxscan.RowsAdapter for details.
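As a minimal sketch, assuming database/sql as the driver layer (the package name, table, query, and helper function here are illustrative, not part of dbscan), the result of db.Query can be passed to dbscan directly because *sql.Rows satisfies the Rows interface:

package userstore // hypothetical package name

import (
	"database/sql"

	"github.com/KirksFletcher/dbscan"
)

type User struct {
	UserID string
	Email  string
}

// scanUsers is a hypothetical helper: it runs a query with database/sql
// and lets dbscan scan the resulting *sql.Rows, which already implements
// the dbscan.Rows interface (Close, Err, Next, Columns, Scan).
func scanUsers(db *sql.DB) ([]*User, error) {
	rows, err := db.Query(`SELECT user_id, email FROM users`)
	if err != nil {
		return nil, err
	}

	var users []*User
	// ScanAll iterates the rows to the end and closes them.
	if err := dbscan.ScanAll(&users, rows); err != nil {
		return nil, err
	}
	return users, nil
}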

Index

Examples

Constants

This section is empty.

Variables

This section is empty.

Functions

func NotFound

func NotFound(err error) bool

NotFound returns true if err is a not found error. This error is returned by ScanOne if there were no rows.
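For example, a minimal sketch of telling "no rows" apart from other errors when using ScanOne (the User struct is the one from the examples above):

// Query rows from the database that implement dbscan.Rows interface.
var rows dbscan.Rows

var user User
err := dbscan.ScanOne(&user, rows)
switch {
case dbscan.NotFound(err):
	// The query returned no rows.
case err != nil:
	// Handle any other rows processing error.
}
// Otherwise, user now contains data from the single row.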

func ScanAll

func ScanAll(dst interface{}, rows Rows) error

ScanAll iterates all rows to the end. After iterating, it closes the rows and propagates any errors that occurred. It expects the destination to be a slice. For each row it scans the data and appends it to the destination slice. ScanAll supports both kinds of slices: a slice of struct pointers and a slice of struct values, for example:

type User struct {
    ID    string
    Name  string
    Email string
    Age   int
}

var usersByPtr []*User
var usersByValue []User

Both usersByPtr and usersByValue are valid destinations for the ScanAll function.

Before starting, ScanAll resets the destination slice, so if it's not empty it will overwrite all existing elements.

Example
package main

import (
	"github.com/KirksFletcher/dbscan"
)

func main() {
	type User struct {
		ID    string `db:"user_id"`
		Name  string
		Email string
		Age   int
	}

	// Query rows from the database that implement Rows interface.
	var rows dbscan.Rows

	var users []*User
	if err := dbscan.ScanAll(&users, rows); err != nil {
		// Handle rows processing error
	}
	// users variable now contains data from all rows.
}
Output:

func ScanOne

func ScanOne(dst interface{}, rows Rows) error

ScanOne iterates all rows to the end and makes sure that there was exactly one row; otherwise it returns an error. Use the NotFound function to check whether there were no rows. After iterating, ScanOne closes the rows and propagates any errors that occurred. It scans data from that single row into the destination.

Example
package main

import (
	"github.com/KirksFletcher/dbscan"
)

func main() {
	type User struct {
		ID    string `db:"user_id"`
		Name  string
		Email string
		Age   int
	}

	// Query rows from the database that implement Rows interface.
	var rows dbscan.Rows

	var user User
	if err := dbscan.ScanOne(&user, rows); err != nil {
		// Handle rows processing error.
	}
	// user variable now contains data from the single row.
}
Output:

func ScanRow

func ScanRow(dst interface{}, rows Rows) error

ScanRow creates a new RowScanner and calls RowScanner.Scan, which scans the current row's data into the destination. It's a convenience helper for when efficiency isn't a concern and you don't want to instantiate a RowScanner before iterating the rows so that it can cache the reflection work between Scan calls. See RowScanner for details.

Example
package main

import (
	"github.com/KirksFletcher/dbscan"
)

func main() {
	type User struct {
		ID    string `db:"user_id"`
		Name  string
		Email string
		Age   int
	}

	// Query rows from the database that implement Rows interface.
	// You should also take care of handling the rows error after iteration and of closing the rows.
	var rows dbscan.Rows

	for rows.Next() {

		var user User
		if err := dbscan.ScanRow(&user, rows); err != nil {
			// Handle row scanning error.
		}
		// user variable now contains data from the current row.

	}
}
Output:

Types

type RowScanner

type RowScanner struct {
	// contains filtered or unexported fields
}

RowScanner wraps Rows and exposes the Scan method that scans data from the current row into the destination. The first time Scan is called, it parses the destination type via reflection and caches all required information for further scans. Due to this caching mechanism, it's not allowed to call Scan with destinations of different types; the behavior is undefined in that case. RowScanner doesn't advance to the next row or close the rows; that is the responsibility of the client code.

The main benefit of using this type directly is that you can instantiate a RowScanner, manually iterate over the rows, and control how data is scanned from each row. This can be useful if the result set is large and you don't want to allocate a slice for all rows at once, as ScanAll would.

ScanOne and ScanAll both use RowScanner type internally.

Example
package main

import (
	"github.com/KirksFletcher/dbscan"
)

func main() {
	type User struct {
		ID    string `db:"user_id"`
		Name  string
		Email string
		Age   int
	}

	// Query rows from the database that implement Rows interface.
	// You should also take care of handling the rows error after iteration and of closing the rows.
	var rows dbscan.Rows

	rs := dbscan.NewRowScanner(rows)

	for rows.Next() {

		var user User
		if err := rs.Scan(&user); err != nil {
			// Handle row scanning error.
		}
		// user variable now contains data from the current row.

	}
}
Output:

func NewRowScanner

func NewRowScanner(rows Rows) *RowScanner

NewRowScanner returns a new instance of the RowScanner.

func (*RowScanner) Scan

func (rs *RowScanner) Scan(dst interface{}) error

Scan scans data from the current row into the destination. On the first call it caches the expensive reflection work and reuses it in future calls. See RowScanner for details.

type Rows

type Rows interface {
	Close() error
	Err() error
	Next() bool
	Columns() ([]string, error)
	Scan(dest ...interface{}) error
}

Rows represents abstract database rows that dbscan can iterate over and get data from. This interface is used to decouple dbscan from any particular database library.
