package dbscan
v2.1.3
Published: Apr 7, 2024 License: MIT Imports: 6 Imported by: 17

Documentation

Overview

Package dbscan allows scanning data from abstract database rows into Go structs and more.

dbscan works with the abstract Rows interface and doesn't depend on any specific database or library. If a type implements Rows, it can leverage the full functionality of this package.

Mapping struct field to database column

The main feature of dbscan is the ability to scan row data into structs.

type User struct {
	ID        string `db:"user_id"`
	FirstName string
	Email     string
}

// Query rows from the database that implement the dbscan.Rows interface.
var rows dbscan.Rows

var users []*User
dbscan.ScanAll(&users, rows)
// users variable now contains data from all rows.

By default, to get the corresponding database column, dbscan translates the struct field name to snake case. To override this behavior, specify the column name in the `db` field tag. In the example above, the User struct is mapped to the following columns: "user_id", "first_name", "email".

If selected rows contain a column that doesn't have a corresponding struct field, dbscan returns an error; this forces the application to select only the data it needs from the database.

dbscan supports commas "," in the struct tag name. That makes it compatible with the struct tag formats of other libraries. dbscan splits the tag name by "," and uses the first part as the column name, so the `db:"user_id,other_tag_value"` struct tag is equivalent to `db:"user_id"` for dbscan.
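
For instance, a sketch of a tag shared with another library's options ("omitempty" is a hypothetical other library's tag value, not a dbscan option):

type User struct {
	// dbscan reads only "user_id" and ignores everything after the comma.
	ID string `db:"user_id,omitempty"`
}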

Reusing structs

dbscan works recursively. A struct can contain embedded or nested structs as well, which allows reusing models in different queries. Structs can be embedded or nested both by value and by pointer. If you don't specify the `db` tag, dbscan maps fields from nested structs to database columns using the struct field name translated to snake case as the prefix. In contrast, fields from embedded structs are mapped to database columns without any prefix. dbscan uses "." to separate the prefix. Here is an example:

type UserPost struct {
	*User
	Post Post
}

type User struct {
	UserID string
	Email  string
}

type Post struct {
	ID   string
	Text string
}

UserPost struct is mapped to the following columns: "user_id", "email", "post.id", "post.text".

To add a prefix to an embedded struct or to change the prefix of a nested struct, specify it in the `db` field tag. You can also use the empty tag `db:""` to remove the prefix of a nested struct. Here is an example:

type UserPostComment struct {
	*User   `db:"user"`
	Post    Post    `db:"p"`
	Comment Comment `db:""`
}

type User struct {
	UserID string
	Email  string
}

type Post struct {
	ID   string
	Text string
}

type Comment struct {
	CommentBody string
}

UserPostComment struct is mapped to the following columns: "user.user_id", "user.email", "p.id", "p.text", "comment_body".

NULLs and custom types

dbscan supports custom types and NULLs perfectly. You can work with them the same way as if you were using your database library directly. Under the hood, dbscan passes all types that you provide to the underlying rows.Scan(), and if the database library supports a type, dbscan supports it automatically, for example:

type User struct {
	OptionalBio  *string
	OptionalAge  CustomNullInt
	Data         CustomData
	OptionalData *CustomData
}

type CustomNullInt struct {
	// Any fields that this custom type needs
}

type CustomData struct {
	// Any fields that this custom type needs
}

User struct is valid, and every field will be scanned correctly. The only condition is that your database library can handle the *string, CustomNullInt, CustomData and *CustomData types.
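
For instance, when the rows come from the standard library's *sql.Rows, the database/sql Null types work out of the box (a sketch; import "database/sql" is assumed):

type Account struct {
	// Scanned via the standard library, sql.NullString and
	// sql.NullInt64 handle NULL column values.
	Nickname sql.NullString
	Age      sql.NullInt64
}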

Ignored struct fields

In order for dbscan to work with a field, it must be exported. Unexported fields will be ignored. The only exception is embedded structs. The type that is embedded might be unexported.

It's possible to explicitly mark a field as ignored for dbscan. To do this, set the `db:"-"` struct tag. This works for nested and embedded structs as well, for example:

type Comment struct {
	Post  `db:"-"`
	ID    string
	Body  string
	Likes int `db:"-"`
}

type Post struct {
	ID   string
	Text string
}

Comment struct is mapped to the following columns: "id", "body".

Ambiguous struct fields

If a struct contains multiple fields that are mapped to the same database column, dbscan will assign to the outermost and topmost field, for example:

type UserPost struct {
	User
	Post
}

type Post struct {
	PostID string
	Text   string
	UserID string
}

type User struct {
	UserID string
	Email  string
}

UserPost struct is mapped to the following columns: "user_id", "email", "post_id", "text". But both UserPost.User.UserID and UserPost.Post.UserID are mapped to the "user_id" column. Since the User struct is embedded above the Post struct in the UserPost type, UserPost.User.UserID will receive data from the "user_id" column and UserPost.Post.UserID will remain empty. Note that you can't access it as UserPost.UserID; that's a compile error in Go because the selector is ambiguous, and you need to use the full version: UserPost.User.UserID.

Scanning into map

Apart from scanning into structs, dbscan can handle maps; in that case, it uses the database column name as the map key and the column data as the map value, for example:

// Query rows from the database that implement the dbscan.Rows interface.
var rows dbscan.Rows

var results []map[string]interface{}
dbscan.ScanAll(&results, rows)
// results variable now contains data from all rows.

The map type isn't limited to map[string]interface{}; it can be any map with a string key, e.g., map[string]string or map[string]int, if all column values have the same specific type.
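
For example, a sketch assuming every selected column is textual (e.g. SELECT first_name, last_name FROM users):

// Query rows from the database that implement the dbscan.Rows interface.
var rows dbscan.Rows

var results []map[string]string
dbscan.ScanAll(&results, rows)
// Each map value holds one text column of a row.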

Scanning into other types

If the destination is neither a struct nor a map, dbscan handles it as a single-column scan: it ensures that the rows contain exactly one column and scans the destination from that column, for example:

// Query rows from the database that implement the dbscan.Rows interface.
var rows dbscan.Rows

var results []string
dbscan.ScanAll(&results, rows)
// results variable now contains data from the single column of all rows.

Duplicate columns

Rows must not contain duplicate columns; otherwise, dbscan won't be able to decide which column to scan from and will return an error.
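
For example, when two joined tables both have an "id" column, SQL aliases make the result set's column names unique before the rows reach dbscan (a sketch with hypothetical table names):

// Both tables have an "id" column; aliasing each one makes the
// result set's column names unique.
const query = `
	SELECT u.id AS user_id, p.id AS post_id, p.text
	FROM users u
	JOIN posts p ON p.user_id = u.id
`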

Support for Row type

dbscan doesn't support a single row type like Row, which you might see in many database libraries. This is because the Row type doesn't expose the required information to do the mapping between columns and the Go destination. So dbscan can only work with Rows type, and it provides a convenient function ScanOne to handle a single row case; see ScanOne for details.

Rows processing

The ScanAll and ScanOne functions take care of rows processing: they iterate the rows to the end and close them after that. Client code doesn't need to bother with that; it just passes the rows to dbscan.

Manual rows iteration

It's possible to manually control rows iteration but still use all scanning features of dbscan, see RowScanner for details.

Overriding default settings

dbscan has API type, which you can use to set custom settings, see API for details.

Implementing Rows interface

dbscan can be used with any database library that has a concept of rows and can implement the dbscan Rows interface. It's pretty likely that your rows type already implements the Rows interface as-is. For example, this is true for the standard *sql.Rows type. Or you may just need a thin adapter, as is done for pgx.Rows in pgxscan; see pgxscan.RowsAdapter for details.
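
As an illustration, here is a minimal adapter sketch, assuming a hypothetical driver rows type that provides everything except NextResultSet:

package myadapter

import "github.com/georgysavva/scany/v2/dbscan"

// driverRows stands in for a hypothetical library's rows type that
// matches dbscan.Rows except for NextResultSet.
type driverRows interface {
	Close() error
	Err() error
	Next() bool
	Columns() ([]string, error)
	Scan(dest ...interface{}) error
}

// RowsAdapter fills in the missing method.
type RowsAdapter struct {
	driverRows
}

// NextResultSet reports that no further result sets are available,
// since the hypothetical driver doesn't support them.
func (ra RowsAdapter) NextResultSet() bool {
	return false
}

// Compile-time check that RowsAdapter satisfies dbscan.Rows.
var _ dbscan.Rows = RowsAdapter{}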

Constants

This section is empty.

Variables

var DefaultAPI = mustNewAPI()

DefaultAPI is the default instance of API with all configuration settings set to default.

var ErrNotFound = errors.New("scany: no row was found")

ErrNotFound is returned by ScanOne if there were no rows.

Functions

func NotFound

func NotFound(err error) bool

NotFound returns true if err is a not found error. This error is returned by ScanOne if there were no rows.
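
For example, a sketch of distinguishing the no-rows case from other errors:

package main

import (
	"github.com/georgysavva/scany/v2/dbscan"
)

func main() {
	type User struct {
		ID    string `db:"user_id"`
		Email string
	}

	// Query rows from the database that implement Rows interface.
	var rows dbscan.Rows

	var user User
	err := dbscan.ScanOne(&user, rows)
	switch {
	case dbscan.NotFound(err):
		// No row was found; handle the empty case.
	case err != nil:
		// Handle other rows processing errors.
	}
}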

func ScanAll

func ScanAll(dst interface{}, rows Rows) error

ScanAll is a package-level helper function that uses the DefaultAPI object. See API.ScanAll for details.

Example
package main

import (
	"github.com/georgysavva/scany/v2/dbscan"
)

func main() {
	type User struct {
		ID    string `db:"user_id"`
		Name  string
		Email string
		Age   int
	}

	// Query rows from the database that implement Rows interface.
	var rows dbscan.Rows

	var users []*User
	if err := dbscan.ScanAll(&users, rows); err != nil {
		// Handle rows processing error.
	}
	// users variable now contains data from all rows.
}
Output:

func ScanAllSets (added in v2.1.0)

func ScanAllSets(dsts []interface{}, rows Rows) error

ScanAllSets is a package-level helper function that uses the DefaultAPI object. See API.ScanAllSets for details.

func ScanOne

func ScanOne(dst interface{}, rows Rows) error

ScanOne is a package-level helper function that uses the DefaultAPI object. See API.ScanOne for details.

Example
package main

import (
	"github.com/georgysavva/scany/v2/dbscan"
)

func main() {
	type User struct {
		ID    string `db:"user_id"`
		Name  string
		Email string
		Age   int
	}

	// Query rows from the database that implement Rows interface.
	var rows dbscan.Rows

	var user User
	if err := dbscan.ScanOne(&user, rows); err != nil {
		// Handle rows processing error.
	}
	// user variable now contains data from the single row.
}
Output:

func ScanRow

func ScanRow(dst interface{}, rows Rows) error

ScanRow is a package-level helper function that uses the DefaultAPI object. See API.ScanRow for details.

Example
package main

import (
	"github.com/georgysavva/scany/v2/dbscan"
)

func main() {
	type User struct {
		ID    string `db:"user_id"`
		Name  string
		Email string
		Age   int
	}

	// Query rows from the database that implement Rows interface.
	// You should also take care of handling the rows error after iteration and closing the rows.
	var rows dbscan.Rows

	for rows.Next() {
		var user User
		if err := dbscan.ScanRow(&user, rows); err != nil {
			// Handle row scanning error.
		}
		// user variable now contains data from the current row.
	}
}
Output:

func SnakeCaseMapper

func SnakeCaseMapper(str string) string

SnakeCaseMapper is a NameMapperFunc that maps a struct field name to snake case.
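
For example (a fragment; fmt is assumed to be imported), the translations implied by the overview's mapping rules:

fmt.Println(dbscan.SnakeCaseMapper("FirstName")) // first_name
fmt.Println(dbscan.SnakeCaseMapper("UserID"))    // user_id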

Types

type API

type API struct {
	// contains filtered or unexported fields
}

API is the core type in dbscan. It implements all the logic and exposes the functionality available in the package. With the API type, users can create a custom API instance, override default settings, and hence configure dbscan. API should not be copied after first use.

Example

This example shows how to create and use a custom API instance to override default settings.

package main

import (
	"strings"

	"github.com/georgysavva/scany/v2/dbscan"
)

func main() {
	type User struct {
		ID    string `database:"userid"`
		Name  string
		Email string
		Age   int
	}

	// Instantiate a custom API with overridden settings.
	api, err := dbscan.NewAPI(
		dbscan.WithFieldNameMapper(strings.ToLower),
		dbscan.WithStructTagKey("database"),
	)
	if err != nil {
		// Handle API initialization error.
	}

	// Query rows from the database that implement Rows interface.
	// You should also take care of handling the rows error after iteration and closing the rows.
	var rows dbscan.Rows

	var users []*User
	// Use the custom API instance to access dbscan functionality.
	if err := api.ScanAll(&users, rows); err != nil {
		// Handle rows processing error.
	}
	// users variable now contains data from all rows.
}
Output:

func NewAPI

func NewAPI(opts ...APIOption) (*API, error)

NewAPI creates a new API object with the provided list of options.

func (*API) NewRowScanner

func (api *API) NewRowScanner(rows Rows) *RowScanner

NewRowScanner returns a new instance of the RowScanner.

func (*API) ScanAll

func (api *API) ScanAll(dst interface{}, rows Rows) error

ScanAll iterates all rows to the end. After iterating, it closes the rows and propagates any errors that could pop up. It expects the destination to be a slice. For each row, it scans data and appends it to the destination slice. ScanAll supports both types of slices: a slice of structs by pointer and a slice of structs by value, for example:

type User struct {
    ID    string
    Name  string
    Email string
    Age   int
}

var usersByPtr []*User
var usersByValue []User

Both usersByPtr and usersByValue are valid destinations for ScanAll function.

Before starting, ScanAll resets the destination slice, so if it's not empty it will overwrite all existing elements.

func (*API) ScanAllSets (added in v2.1.0)

func (api *API) ScanAllSets(dsts []interface{}, rows Rows) error

ScanAllSets iterates all rows to the end and scans data into each destination. Multiple destinations are supported via multiple result sets.
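
A minimal sketch, assuming the query produced two result sets, first users, then posts, and that these hypothetical destination types match their columns:

type User struct {
	UserID string
	Email  string
}

type Post struct {
	PostID string
	Text   string
}

// Query rows from the database that implement the dbscan.Rows interface.
var rows dbscan.Rows

var users []*User
var posts []*Post
if err := dbscan.ScanAllSets([]interface{}{&users, &posts}, rows); err != nil {
	// Handle rows processing error.
}
// users holds the first result set, posts the second.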

func (*API) ScanOne

func (api *API) ScanOne(dst interface{}, rows Rows) error

ScanOne iterates all rows to the end and makes sure that there was exactly one row; otherwise, it returns an error. Use the NotFound function to check if there were no rows. After iterating, ScanOne closes the rows and propagates any errors that could pop up. It scans data from that single row into the destination.

func (*API) ScanRow

func (api *API) ScanRow(dst interface{}, rows Rows) error

ScanRow creates a new RowScanner and calls RowScanner.Scan, which scans the current row data into the destination. It's just a helper function for when you aren't concerned with efficiency and don't want to instantiate a new RowScanner before iterating the rows so that it can cache the reflection work between Scan calls. See RowScanner for details.

type APIOption

type APIOption func(api *API)

APIOption is a function type that changes API configuration.

func WithAllowUnknownColumns

func WithAllowUnknownColumns(allowUnknownColumns bool) APIOption

WithAllowUnknownColumns allows the scanner to ignore db columns that don't exist in the destination. The default behavior is to return an error when a db column isn't found in the destination.
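
A sketch of enabling it:

// Allow rows to contain columns with no matching destination field.
api, err := dbscan.NewAPI(dbscan.WithAllowUnknownColumns(true))
if err != nil {
	// Handle API initialization error.
}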

func WithColumnSeparator

func WithColumnSeparator(separator string) APIOption

WithColumnSeparator allows using a custom separator string for column names when combining nested structs. The default separator is the "." character.

func WithFieldNameMapper

func WithFieldNameMapper(mapperFn NameMapperFunc) APIOption

WithFieldNameMapper allows using a custom function to map field names to column names. The default function is SnakeCaseMapper.

func WithScannableTypes

func WithScannableTypes(scannableTypes ...interface{}) APIOption

WithScannableTypes specifies a list of interfaces that the underlying database library can scan into. In case the destination type passed to dbscan implements one of those interfaces, dbscan will handle it as a primitive type case, i.e. simply pass the destination to the database library instead of attempting to map database columns to destination struct fields or map keys. In order for reflection to capture the interface type, you must pass it by pointer.

For example, your database library defines a scanner interface like this:

type Scanner interface {
    Scan(...) error
}

You can pass it to dbscan this way: dbscan.WithScannableTypes((*Scanner)(nil)).
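
For instance, a sketch registering the standard library's sql.Scanner interface (import "database/sql" is assumed):

api, err := dbscan.NewAPI(
	dbscan.WithScannableTypes((*sql.Scanner)(nil)),
)
if err != nil {
	// Handle API initialization error.
}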

func WithStructTagKey

func WithStructTagKey(tagKey string) APIOption

WithStructTagKey allows using a custom struct tag key. The default tag key is `db`.

type NameMapperFunc

type NameMapperFunc func(string) string

NameMapperFunc is a function type that maps a struct field name to the database column name.

type RowScanner

type RowScanner struct {
	// contains filtered or unexported fields
}

RowScanner embraces Rows and exposes the Scan method that allows scanning data from the current row into the destination. The first time Scan is called, it parses the destination type via reflection and caches all the information required for further scans. Due to this caching mechanism, it's not allowed to call Scan for destinations of different types; the behavior is unknown in that case. RowScanner doesn't advance to the next row nor close the rows; that should be done by the client code.

The main benefit of using this type directly is that you can instantiate a RowScanner and manually iterate over the rows and control how data is scanned from each row. This can be beneficial if the result set is large and you don't want to allocate a slice for all rows at once as it would be done in ScanAll.

ScanOne and ScanAll both use RowScanner type internally.

Example
package main

import (
	"github.com/georgysavva/scany/v2/dbscan"
)

func main() {
	type User struct {
		ID    string `db:"user_id"`
		Name  string
		Email string
		Age   int
	}

	// Query rows from the database that implement Rows interface.
	// You should also take care of handling the rows error after iteration and closing the rows.
	var rows dbscan.Rows

	rs := dbscan.NewRowScanner(rows)

	for rows.Next() {
		var user User
		if err := rs.Scan(&user); err != nil {
			// Handle row scanning error.
		}
		// user variable now contains data from the current row.
	}
}
Output:

func NewRowScanner

func NewRowScanner(rows Rows) *RowScanner

NewRowScanner is a package-level helper function that uses the DefaultAPI object. See API.NewRowScanner for details.

func (*RowScanner) Scan

func (rs *RowScanner) Scan(dst interface{}) error

Scan scans data from the current row into the destination. On the first call, it caches expensive reflection work and uses it in future calls. See RowScanner for details.

type Rows

type Rows interface {
	Close() error
	Err() error
	Next() bool
	Columns() ([]string, error)
	Scan(dest ...interface{}) error
	NextResultSet() bool
}

Rows is an abstraction over database rows that dbscan can iterate over and get data from. This interface is used to decouple dbscan from any particular database library.
