pgx

package
v0.0.0-...-15cc3ae
Published: Apr 12, 2015 License: MIT, BSD-3-Clause Imports: 23 Imported by: 0

README

Pgx

Pgx is a pure Go database connection library designed specifically for PostgreSQL. Pgx is different from other drivers such as pq because, while it can operate as a database/sql compatible driver, pgx is primarily intended to be used directly. It offers a native interface similar to database/sql that provides better performance and more features.

Features

Pgx supports many additional features beyond what is available through database/sql.

  • Listen / notify
  • Transaction isolation level control
  • Full TLS connection control
  • Binary format support for custom types (can be much faster)
  • Logging support
  • Configurable connection pool with after connect hooks to do arbitrary connection setup
  • PostgreSQL array to Go slice mapping for integers, floats, and strings
  • Hstore support
  • Large object support

Performance

Pgx performs roughly equivalently to pq and go-pg when selecting a single column from a single row, but it is substantially faster when selecting multiple entire rows (6893 queries/sec for pgx vs. 3968 queries/sec for pq -- 73% faster).

See this gist for the underlying benchmark results or check out go_db_bench to run tests for yourself.

database/sql

Import the github.com/jackc/pgx/stdlib package to use pgx as a driver for database/sql. It is possible to retrieve a pgx connection from database/sql on demand. This allows using the database/sql interface in most places, but using pgx directly when more performance or PostgreSQL specific features are needed.
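
A minimal sketch of opening a database/sql handle backed by pgx (assuming the stdlib package registers its driver under the name "pgx" and accepts a ParseDSN-style connection string; check the stdlib package documentation for the exact registration details):

import (
    "database/sql"

    _ "github.com/jackc/pgx/stdlib"
)

db, err := sql.Open("pgx", "host=127.0.0.1 user=pgx_md5 password=secret dbname=pgx_test")
if err != nil {
    return err
}
defer db.Close()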

Documentation

pgx includes extensive documentation in the godoc format. It is viewable online at godoc.org.

Testing

pgx supports multiple connection and authentication types. Setting up a test environment that can test all of them can be cumbersome. In particular, Windows cannot test Unix domain socket connections. Because of this, pgx will skip tests for connection types that are not configured.

Normal Test Environment

To setup the normal test environment run the following SQL:

create user pgx_md5 password 'secret';
create database pgx_test;

Connect to database pgx_test and run:

create extension hstore;

Next, open connection_settings_test.go.example and make a copy without the .example suffix. If your PostgreSQL server is accepting connections on 127.0.0.1, then you are done.

Connection and Authentication Test Environment

Complete the normal test environment setup and also do the following.

Run the following SQL:

create user pgx_none;
create user pgx_pw password 'secret';

Add the following to your pg_hba.conf:

If you are developing on Unix with domain socket connections:

local  pgx_test  pgx_none  trust
local  pgx_test  pgx_pw    password
local  pgx_test  pgx_md5   md5

If you are developing on Windows with TCP connections:

host  pgx_test  pgx_none  127.0.0.1/32 trust
host  pgx_test  pgx_pw    127.0.0.1/32 password
host  pgx_test  pgx_md5   127.0.0.1/32 md5

Version Policy

pgx follows semantic versioning for the documented public API. master branch tracks the latest stable branch (v2). Consider using import "gopkg.in/jackc/pgx.v2" to lock to the v2 branch or use a vendoring tool such as godep.

Documentation

Overview

Package pgx is a PostgreSQL database driver.

pgx provides lower level access to PostgreSQL than the standard database/sql. It remains as similar to the database/sql interface as possible while providing better speed and access to PostgreSQL specific features. Import github.com/jackc/pgx/stdlib to use pgx as a database/sql compatible driver.

Query Interface

pgx implements Query and Scan in the familiar database/sql style.

var sum int32

// Send the query to the server. The returned rows MUST be closed
// before conn can be used again.
rows, err := conn.Query("select generate_series(1,$1)", 10)
if err != nil {
    return err
}

// rows.Close is called by rows.Next when all rows are read
// or an error occurs in Next or Scan. So it may optionally be
// omitted if nothing in the rows.Next loop can panic. It is
// safe to close rows multiple times.
defer rows.Close()

// Iterate through the result set
for rows.Next() {
    var n int32
    err = rows.Scan(&n)
    if err != nil {
        return err
    }
    sum += n
}

// Any errors encountered by rows.Next or rows.Scan will be returned here
if rows.Err() != nil {
    return rows.Err()
}

// No errors found - do something with sum

pgx also implements QueryRow in the same style as database/sql.

var name string
var weight int64
err := conn.QueryRow("select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
    return err
}

Use Exec to execute a query that does not return a result set.

commandTag, err := conn.Exec("delete from widgets where id=$1", 42)
if err != nil {
    return err
}
if commandTag.RowsAffected() != 1 {
    return errors.New("No row found to delete")
}

Connection Pool

Connection pool usage is explicit and configurable. In pgx, a connection can be created and managed directly, or a connection pool with a configurable maximum number of connections can be used. The connection pool also offers an after connect hook that allows every connection to be automatically set up before being made available in the pool. This is especially useful to ensure all connections have the same prepared statements available or to change any other connection settings.
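
For illustration, a pool with an AfterConnect hook that prepares a statement on every new connection might look roughly like this (a sketch; the connection parameters and the prepared statement are placeholders):

connPoolConfig := pgx.ConnPoolConfig{
    ConnConfig: pgx.ConnConfig{
        Host:     "127.0.0.1",
        User:     "pgx_md5",
        Password: "secret",
        Database: "pgx_test",
    },
    MaxConnections: 5,
    AfterConnect: func(conn *pgx.Conn) error {
        // Ensure every pooled connection has the same prepared statement available.
        _, err := conn.Prepare("getWidget", "select name, weight from widgets where id=$1")
        return err
    },
}

pool, err := pgx.NewConnPool(connPoolConfig)
if err != nil {
    return err
}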

The pool delegates the Query, QueryRow, Exec, and Begin functions to an automatically checked out and released connection, so you can avoid manually acquiring and releasing connections when you do not need that level of control.

var name string
var weight int64
err := pool.QueryRow("select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
    return err
}

Transactions

Transactions are started by calling Begin or BeginIso. The BeginIso variant creates a transaction with a specified isolation level.

tx, err := conn.Begin()
if err != nil {
    return err
}
// Rollback is safe to call even if the tx is already closed, so if
// the tx commits successfully, this is a no-op
defer tx.Rollback()

_, err = tx.Exec("insert into foo(id) values (1)")
if err != nil {
    return err
}

err = tx.Commit()
if err != nil {
    return err
}
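
A transaction at a specific isolation level is started the same way with BeginIso and one of the isolation level constants, for example (a sketch):

tx, err := conn.BeginIso(pgx.Serializable)
if err != nil {
    return err
}
defer tx.Rollback()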

Listen and Notify

pgx can listen to the PostgreSQL notification system with the WaitForNotification function. It takes a maximum time to wait for a notification.

err := conn.Listen("channelname")
if err != nil {
    return err
}

if notification, err := conn.WaitForNotification(time.Second); err == nil {
    // do something with notification
}

Null Mapping

pgx includes Null* types in a similar fashion to database/sql that implement the necessary interfaces to be encoded and scanned.
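
For example, a nullable column can be scanned into a Null* type and a Null* value can be passed as a query argument (a sketch; the widgets.description column is hypothetical):

var description pgx.NullString
err := conn.QueryRow("select description from widgets where id=$1", 42).Scan(&description)
if err != nil {
    return err
}
if description.Valid {
    // use description.String
}

// An invalid Null* value encodes as NULL when used as an argument.
_, err = conn.Exec("update widgets set description=$1 where id=$2", pgx.NullString{Valid: false}, 42)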

Array Mapping

pgx maps between int16, int32, int64, float32, float64, and string Go slices and the equivalent PostgreSQL array type. Go slices of native types do not support nulls, so if a PostgreSQL array that contains a null is read into a native Go slice, an error will occur.
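
For example (a sketch; the samples table is hypothetical):

// Scan a PostgreSQL int4[] into a Go []int32.
var numbers []int32
err := conn.QueryRow("select array[1, 2, 3]::int4[]").Scan(&numbers)
if err != nil {
    return err
}

// Slices of supported types can also be passed as query arguments.
_, err = conn.Exec("insert into samples(readings) values($1)", []float64{1.5, 2.5, 3.5})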

Hstore Mapping

pgx includes an Hstore type and a NullHstore type. Hstore is simply a map[string]string and is preferred when the hstore contains no nulls. NullHstore follows the Null* pattern and supports null values.
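
For example (a sketch; the products table and attrs column are hypothetical):

attrs := pgx.Hstore{"color": "red", "size": "M"}
_, err := conn.Exec("insert into products(attrs) values($1)", attrs)
if err != nil {
    return err
}

var found pgx.Hstore
err = conn.QueryRow("select attrs from products where id=$1", 1).Scan(&found)
if err != nil {
    return err
}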

Custom Type Support

pgx includes support for the common data types like integers, floats, strings, dates, and times that have direct mappings between Go and SQL. Support can be added for additional types like point, hstore, numeric, etc. that do not have direct mappings in Go by the types implementing Scanner and Encoder.

Custom types can support text or binary formats. Binary format can provide a large performance increase. The natural place for deciding the format for a value would be in Scanner as it is responsible for decoding the returned data. However, that is impossible as the query has already been sent by the time the Scanner is invoked. The solution to this is the global DefaultTypeFormats. If a custom type prefers binary format it should register it there.

pgx.DefaultTypeFormats["point"] = pgx.BinaryFormatCode

Note that the type is referred to by name, not by OID. This is because custom PostgreSQL types like hstore will have different OIDs on different servers. When pgx establishes a connection it queries the pg_type table for all types. It then matches the names in DefaultTypeFormats with the returned OIDs and stores it in Conn.PgTypes.

See example_custom_type_test.go for an example of a custom type for the PostgreSQL point type.
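
For illustration only (this is not the library's own example), a text-format point type might be implemented roughly as follows. The "(x,y)" wire text and the length-prefixed write are assumptions based on the Encoder and ValueReader documentation below, and a full implementation would also handle NULL values and verify the parameter oid:

type Point struct {
    X, Y float64
}

func (p *Point) Scan(vr *pgx.ValueReader) error {
    if vr.Type().DataTypeName != "point" {
        vr.Fatal(pgx.ProtocolError("Scan cannot decode " + vr.Type().DataTypeName + " into Point"))
        return vr.Err()
    }
    if vr.Type().FormatCode != pgx.TextFormatCode {
        vr.Fatal(pgx.ProtocolError("unexpected format code for point"))
        return vr.Err()
    }
    // NULL handling is omitted in this sketch.
    s := vr.ReadString(vr.Len())
    _, err := fmt.Sscanf(s, "(%f,%f)", &p.X, &p.Y)
    return err
}

func (p Point) Encode(w *pgx.WriteBuf, oid pgx.Oid) error {
    // A full implementation must check oid for compatibility (omitted here).
    s := fmt.Sprintf("(%v,%v)", p.X, p.Y)
    w.WriteInt32(int32(len(s))) // value length; int32(-1) would indicate NULL
    w.WriteBytes([]byte(s))
    return nil
}

func (p Point) FormatCode() int16 { return pgx.TextFormatCode }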

TLS

The pgx ConnConfig struct has a TLSConfig field. If this field is nil, then TLS will be disabled. If it is present, then it will be used to configure the TLS connection. This allows total configuration of the TLS connection.
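
For example (a sketch; the host and TLS settings are placeholders):

config := pgx.ConnConfig{
    Host:     "db.example.com",
    User:     "pgx_md5",
    Password: "secret",
    Database: "pgx_test",
    TLSConfig: &tls.Config{ // crypto/tls
        ServerName: "db.example.com",
    },
}

conn, err := pgx.Connect(config)
if err != nil {
    return err
}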

Logging

pgx defines a simple logger interface. Connections optionally accept a logger that satisfies this interface. The log15 package (http://gopkg.in/inconshreveable/log15.v2) satisfies this interface and it is simple to define adapters for other loggers.
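
Because the interface is small, it is easy to adapt other loggers. A sketch of an adapter backed by the standard library log package:

type stdlibLogger struct {
    l *log.Logger
}

func (s stdlibLogger) Debug(msg string, ctx ...interface{}) { s.l.Println("debug", msg, ctx) }
func (s stdlibLogger) Info(msg string, ctx ...interface{})  { s.l.Println("info", msg, ctx) }
func (s stdlibLogger) Warn(msg string, ctx ...interface{})  { s.l.Println("warn", msg, ctx) }
func (s stdlibLogger) Error(msg string, ctx ...interface{}) { s.l.Println("error", msg, ctx) }

config := pgx.ConnConfig{
    Host:   "127.0.0.1",
    User:   "pgx_md5",
    Logger: stdlibLogger{l: log.New(os.Stderr, "pgx ", log.LstdFlags)},
}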

Index

Constants

const (
	Serializable    = "serializable"
	RepeatableRead  = "repeatable read"
	ReadCommitted   = "read committed"
	ReadUncommitted = "read uncommitted"
)

Transaction isolation levels

const (
	BoolOid             = 16
	ByteaOid            = 17
	Int8Oid             = 20
	Int2Oid             = 21
	Int4Oid             = 23
	TextOid             = 25
	OidOid              = 26
	Float4Oid           = 700
	Float8Oid           = 701
	BoolArrayOid        = 1000
	Int2ArrayOid        = 1005
	Int4ArrayOid        = 1007
	TextArrayOid        = 1009
	VarcharArrayOid     = 1015
	Int8ArrayOid        = 1016
	Float4ArrayOid      = 1021
	Float8ArrayOid      = 1022
	VarcharOid          = 1043
	DateOid             = 1082
	TimestampOid        = 1114
	TimestampArrayOid   = 1115
	TimestampTzOid      = 1184
	TimestampTzArrayOid = 1185
)

PostgreSQL oids for common types

const (
	TextFormatCode   = 0
	BinaryFormatCode = 1
)

PostgreSQL format codes

Variables

var DefaultTypeFormats map[string]int16

DefaultTypeFormats maps type names to their default requested format (text or binary). In theory the Scanner interface should be the one to determine the format of the returned values. However, the query has already been executed by the time Scan is called so it has no chance to set the format. So for types that should be returned in binary, the format should be set here.

var ErrDeadConn = errors.New("conn is dead")
var ErrNoRows = errors.New("no rows in result set")
var ErrNotificationTimeout = errors.New("notification timeout")
var ErrTxClosed = errors.New("tx is closed")

Functions

This section is empty.

Types

type CommandTag

type CommandTag string

func (CommandTag) RowsAffected

func (ct CommandTag) RowsAffected() int64

RowsAffected returns the number of rows affected. If the CommandTag was not for a row-affecting command (such as "CREATE TABLE") then it returns 0.

type Conn

type Conn struct {
	Pid           int32             // backend pid
	SecretKey     int32             // key to use to send a cancel query message to the server
	RuntimeParams map[string]string // parameters that have been reported by the server
	PgTypes       map[Oid]PgType    // oids to PgTypes

	TxStatus byte
	// contains filtered or unexported fields
}

Conn is a PostgreSQL connection handle. It is not safe for concurrent usage. Use ConnPool to manage access to multiple database connections from multiple goroutines.

func Connect

func Connect(config ConnConfig) (c *Conn, err error)

Connect establishes a connection with a PostgreSQL server using config. config.Host must be specified. config.User will default to the OS user name. Other config fields are optional.

func (*Conn) Begin

func (c *Conn) Begin() (*Tx, error)

Begin starts a transaction with the default isolation level for the current connection. To use a specific isolation level see BeginIso.

func (*Conn) BeginIso

func (c *Conn) BeginIso(isoLevel string) (*Tx, error)

BeginIso starts a transaction with isoLevel as the transaction isolation level.

Valid isolation levels (and their constants) are:

serializable (pgx.Serializable)
repeatable read (pgx.RepeatableRead)
read committed (pgx.ReadCommitted)
read uncommitted (pgx.ReadUncommitted)

func (*Conn) CauseOfDeath

func (c *Conn) CauseOfDeath() error

func (*Conn) Close

func (c *Conn) Close() (err error)

Close closes a connection. It is safe to call Close on an already closed connection.

func (*Conn) Deallocate

func (c *Conn) Deallocate(name string) (err error)

Deallocate releases a prepared statement.

func (*Conn) Exec

func (c *Conn) Exec(sql string, arguments ...interface{}) (commandTag CommandTag, err error)

Exec executes sql. sql can be either a prepared statement name or an SQL string. arguments should be referenced positionally from the sql string as $1, $2, etc.

func (*Conn) IsAlive

func (c *Conn) IsAlive() bool

func (*Conn) Listen

func (c *Conn) Listen(channel string) (err error)

Listen establishes a PostgreSQL listen/notify to channel

func (*Conn) Prepare

func (c *Conn) Prepare(name, sql string) (ps *PreparedStatement, err error)

Prepare creates a prepared statement with name and sql. sql can contain placeholders for bound parameters. These placeholders are referenced positionally as $1, $2, etc.

func (*Conn) Query

func (c *Conn) Query(sql string, args ...interface{}) (*Rows, error)

Query executes sql with args. If there is an error the returned *Rows will be returned in an error state. So it is allowed to ignore the error returned from Query and handle it in *Rows.

func (*Conn) QueryRow

func (c *Conn) QueryRow(sql string, args ...interface{}) *Row

QueryRow is a convenience wrapper over Query. Any error that occurs while querying is deferred until calling Scan on the returned *Row. That *Row will error with ErrNoRows if no rows are returned.

func (*Conn) WaitForNotification

func (c *Conn) WaitForNotification(timeout time.Duration) (*Notification, error)

WaitForNotification waits for a PostgreSQL notification for up to timeout. If the timeout occurs it returns pgx.ErrNotificationTimeout

type ConnConfig

type ConnConfig struct {
	Host      string // host (e.g. localhost) or path to unix domain socket directory (e.g. /private/tmp)
	Port      uint16 // default: 5432
	Database  string
	User      string // default: OS user name
	Password  string
	TLSConfig *tls.Config // config for TLS connection -- nil disables TLS
	Logger    Logger
}

ConnConfig contains all the options used to establish a connection.

func ParseDSN

func ParseDSN(s string) (ConnConfig, error)

ParseDSN parses a database DSN (data source name) into a ConnConfig

e.g. ParseDSN("user=username password=password host=1.2.3.4 port=5432 dbname=mydb")

func ParseURI

func ParseURI(uri string) (ConnConfig, error)

ParseURI parses a database URI into ConnConfig

type ConnPool

type ConnPool struct {
	// contains filtered or unexported fields
}

func NewConnPool

func NewConnPool(config ConnPoolConfig) (p *ConnPool, err error)

NewConnPool creates a new ConnPool. config.ConnConfig is passed through to Connect directly.

func (*ConnPool) Acquire

func (p *ConnPool) Acquire() (c *Conn, err error)

Acquire takes exclusive use of a connection until it is released.

func (*ConnPool) Begin

func (p *ConnPool) Begin() (*Tx, error)

Begin acquires a connection and begins a transaction on it. When the transaction is closed the connection will be automatically released.

func (*ConnPool) BeginIso

func (p *ConnPool) BeginIso(iso string) (*Tx, error)

BeginIso acquires a connection and begins a transaction in isolation mode iso on it. When the transaction is closed the connection will be automatically released.

func (*ConnPool) Close

func (p *ConnPool) Close()

Close ends the use of a connection pool. It prevents any new connections from being acquired, waits until all acquired connections are released, then closes all underlying connections.

func (*ConnPool) Exec

func (p *ConnPool) Exec(sql string, arguments ...interface{}) (commandTag CommandTag, err error)

Exec acquires a connection, delegates the call to that connection, and releases the connection

func (*ConnPool) Query

func (p *ConnPool) Query(sql string, args ...interface{}) (*Rows, error)

Query acquires a connection and delegates the call to that connection. When *Rows are closed, the connection is released automatically.

func (*ConnPool) QueryRow

func (p *ConnPool) QueryRow(sql string, args ...interface{}) *Row

QueryRow acquires a connection and delegates the call to that connection. The connection is released automatically after Scan is called on the returned *Row.

func (*ConnPool) Release

func (p *ConnPool) Release(conn *Conn)

Release gives up use of a connection.

func (*ConnPool) Stat

func (p *ConnPool) Stat() (s ConnPoolStat)

Stat returns connection pool statistics

type ConnPoolConfig

type ConnPoolConfig struct {
	ConnConfig
	MaxConnections int               // max simultaneous connections to use, default 5, must be at least 2
	AfterConnect   func(*Conn) error // function to call on every new connection
}

type ConnPoolStat

type ConnPoolStat struct {
	MaxConnections       int // max simultaneous connections to use
	CurrentConnections   int // current live connections
	AvailableConnections int // unused live connections
}

type Encoder

type Encoder interface {
	// Encode writes the value to w.
	//
	// If the value is NULL an int32(-1) should be written.
	//
	// Encode MUST check oid to see if the parameter data type is compatible. If
	// this is not done, the PostgreSQL server may detect the error if the
	// expected data size or format of the encoded data does not match. But if
	// the encoded data is a valid representation of the data type PostgreSQL
	// expects such as date and int4, incorrect data may be stored.
	Encode(w *WriteBuf, oid Oid) error

	// FormatCode returns the format that the encoder writes the value. It must be
	// either pgx.TextFormatCode or pgx.BinaryFormatCode.
	FormatCode() int16
}

Encoder is an interface used to encode values for transmission to the PostgreSQL server.

type FieldDescription

type FieldDescription struct {
	Name            string
	Table           Oid
	AttributeNumber int16
	DataType        Oid
	DataTypeSize    int16
	DataTypeName    string
	Modifier        int32
	FormatCode      int16
}

type Hstore

type Hstore map[string]string

Hstore represents an hstore column. It does not support a null column or null key values (use NullHstore for this). Hstore implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

func (Hstore) Encode

func (h Hstore) Encode(w *WriteBuf, oid Oid) error

func (Hstore) FormatCode

func (h Hstore) FormatCode() int16

func (*Hstore) Scan

func (h *Hstore) Scan(vr *ValueReader) error

type LargeObject

type LargeObject struct {
	// contains filtered or unexported fields
}

A LargeObject is a large object stored on the server. It is only valid within the transaction that it was initialized in. It implements these interfaces:

io.Writer
io.Reader
io.Seeker
io.Closer

func (*LargeObject) Close

func (o *LargeObject) Close() error

Close closes the large object descriptor.

func (*LargeObject) Read

func (o *LargeObject) Read(p []byte) (int, error)

Read reads up to len(p) bytes into p returning the number of bytes read.

func (*LargeObject) Seek

func (o *LargeObject) Seek(offset int64, whence int) (n int64, err error)

Seek moves the current location pointer to the new location specified by offset.

func (*LargeObject) Tell

func (o *LargeObject) Tell() (n int64, err error)

Tell returns the current read or write location of the large object descriptor.

func (*LargeObject) Truncate

func (o *LargeObject) Truncate(size int64) (err error)

Truncate truncates the large object to size.

func (*LargeObject) Write

func (o *LargeObject) Write(p []byte) (int, error)

Write writes p to the large object and returns the number of bytes written and an error if not all of p was written.

type LargeObjectMode

type LargeObjectMode int32
const (
	LargeObjectModeWrite LargeObjectMode = 0x20000
	LargeObjectModeRead  LargeObjectMode = 0x40000
)

type LargeObjects

type LargeObjects struct {
	// Has64 is true if the server is capable of working with 64-bit numbers
	Has64 bool
	// contains filtered or unexported fields
}

LargeObjects is a structure used to access the large objects API. It is only valid within the transaction where it was created.

For more details see: http://www.postgresql.org/docs/current/static/largeobjects.html
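
A sketch of typical usage within a transaction (the object contents are placeholders):

tx, err := conn.Begin()
if err != nil {
    return err
}
defer tx.Rollback()

lo, err := tx.LargeObjects()
if err != nil {
    return err
}

// Passing a zero Oid lets the server assign an unused one.
oid, err := lo.Create(0)
if err != nil {
    return err
}

obj, err := lo.Open(oid, pgx.LargeObjectModeWrite)
if err != nil {
    return err
}

if _, err := obj.Write([]byte("hello, large object")); err != nil {
    return err
}
if err := obj.Close(); err != nil {
    return err
}

return tx.Commit()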

func (*LargeObjects) Create

func (o *LargeObjects) Create(id Oid) (Oid, error)

Create creates a new large object. If id is zero, the server assigns an unused OID.

func (*LargeObjects) Open

func (o *LargeObjects) Open(oid Oid, mode LargeObjectMode) (*LargeObject, error)

Open opens an existing large object with the given mode.

func (*LargeObjects) Unlink

func (o *LargeObjects) Unlink(oid Oid) error

Unlink removes a large object from the database.

type Logger

type Logger interface {
	// Log a message at the given level with context key/value pairs
	Debug(msg string, ctx ...interface{})
	Info(msg string, ctx ...interface{})
	Warn(msg string, ctx ...interface{})
	Error(msg string, ctx ...interface{})
}

Logger is the interface used to get logging from pgx internals. https://github.com/inconshreveable/log15 is the recommended logging package. This logging interface was extracted from there. However, it should be simple to adapt any logger to this interface.

type Notification

type Notification struct {
	Pid     int32  // backend pid that sent the notification
	Channel string // channel from which notification was received
	Payload string
}

type NullBool

type NullBool struct {
	Bool  bool
	Valid bool // Valid is true if Bool is not NULL
}

NullBool represents a bool that may be null. NullBool implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false then the value is NULL.

func (NullBool) Encode

func (n NullBool) Encode(w *WriteBuf, oid Oid) error

func (NullBool) FormatCode

func (n NullBool) FormatCode() int16

func (*NullBool) Scan

func (n *NullBool) Scan(vr *ValueReader) error

type NullFloat32

type NullFloat32 struct {
	Float32 float32
	Valid   bool // Valid is true if Float32 is not NULL
}

NullFloat32 represents a float4 that may be null. NullFloat32 implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false then the value is NULL.

func (NullFloat32) Encode

func (n NullFloat32) Encode(w *WriteBuf, oid Oid) error

func (NullFloat32) FormatCode

func (n NullFloat32) FormatCode() int16

func (*NullFloat32) Scan

func (n *NullFloat32) Scan(vr *ValueReader) error

type NullFloat64

type NullFloat64 struct {
	Float64 float64
	Valid   bool // Valid is true if Float64 is not NULL
}

NullFloat64 represents a float8 that may be null. NullFloat64 implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false then the value is NULL.

func (NullFloat64) Encode

func (n NullFloat64) Encode(w *WriteBuf, oid Oid) error

func (NullFloat64) FormatCode

func (n NullFloat64) FormatCode() int16

func (*NullFloat64) Scan

func (n *NullFloat64) Scan(vr *ValueReader) error

type NullHstore

type NullHstore struct {
	Hstore map[string]NullString
	Valid  bool
}

NullHstore represents an hstore column that can be null or have null values associated with its keys. NullHstore implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false, then the value of the entire hstore column is NULL. If any of the NullString values in Hstore has Valid set to false, the key appears in the hstore column, but its value is explicitly set to NULL.

func (NullHstore) Encode

func (h NullHstore) Encode(w *WriteBuf, oid Oid) error

func (NullHstore) FormatCode

func (h NullHstore) FormatCode() int16

func (*NullHstore) Scan

func (h *NullHstore) Scan(vr *ValueReader) error

type NullInt16

type NullInt16 struct {
	Int16 int16
	Valid bool // Valid is true if Int16 is not NULL
}

NullInt16 represents a smallint that may be null. NullInt16 implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan for prepared and unprepared queries.

If Valid is false then the value is NULL.

func (NullInt16) Encode

func (n NullInt16) Encode(w *WriteBuf, oid Oid) error

func (NullInt16) FormatCode

func (n NullInt16) FormatCode() int16

func (*NullInt16) Scan

func (n *NullInt16) Scan(vr *ValueReader) error

type NullInt32

type NullInt32 struct {
	Int32 int32
	Valid bool // Valid is true if Int32 is not NULL
}

NullInt32 represents an integer that may be null. NullInt32 implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false then the value is NULL.

func (NullInt32) Encode

func (n NullInt32) Encode(w *WriteBuf, oid Oid) error

func (NullInt32) FormatCode

func (n NullInt32) FormatCode() int16

func (*NullInt32) Scan

func (n *NullInt32) Scan(vr *ValueReader) error

type NullInt64

type NullInt64 struct {
	Int64 int64
	Valid bool // Valid is true if Int64 is not NULL
}

NullInt64 represents a bigint that may be null. NullInt64 implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false then the value is NULL.

func (NullInt64) Encode

func (n NullInt64) Encode(w *WriteBuf, oid Oid) error

func (NullInt64) FormatCode

func (n NullInt64) FormatCode() int16

func (*NullInt64) Scan

func (n *NullInt64) Scan(vr *ValueReader) error

type NullString

type NullString struct {
	String string
	Valid  bool // Valid is true if String is not NULL
}

NullString represents a string that may be null. NullString implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false then the value is NULL.

func ParseHstore

func ParseHstore(s string) (k []string, v []NullString, err error)

ParseHstore parses the string representation of an hstore column (the same you would get from an ordinary SELECT) into two slices of keys and values. It is used internally in the default parsing of hstores, but is exported for use in handling custom data structures backed by an hstore column without the overhead of creating a map[string]string.

func (NullString) Encode

func (s NullString) Encode(w *WriteBuf, oid Oid) error

func (NullString) FormatCode

func (n NullString) FormatCode() int16

func (*NullString) Scan

func (s *NullString) Scan(vr *ValueReader) error

type NullTime

type NullTime struct {
	Time  time.Time
	Valid bool // Valid is true if Time is not NULL
}

NullTime represents a time.Time that may be null. NullTime implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false then the value is NULL.

func (NullTime) Encode

func (n NullTime) Encode(w *WriteBuf, oid Oid) error

func (NullTime) FormatCode

func (n NullTime) FormatCode() int16

func (*NullTime) Scan

func (n *NullTime) Scan(vr *ValueReader) error

type Oid

type Oid int32

type PgError

type PgError struct {
	Severity       string
	Code           string
	Message        string
	Detail         string
	Hint           string
	SchemaName     string
	TableName      string
	ColumnName     string
	DataTypeName   string
	ConstraintName string
}

PgError represents an error reported by the PostgreSQL server. See http://www.postgresql.org/docs/9.3/static/protocol-error-fields.html for detailed field description.

func (PgError) Error

func (self PgError) Error() string

type PgType

type PgType struct {
	Name          string // name of type e.g. int4, text, date
	DefaultFormat int16  // default format (text or binary) this type will be requested in
}

type PreparedStatement

type PreparedStatement struct {
	Name              string
	FieldDescriptions []FieldDescription
	ParameterOids     []Oid
}

type ProtocolError

type ProtocolError string

func (ProtocolError) Error

func (e ProtocolError) Error() string

type QueryArgs

type QueryArgs []interface{}

QueryArgs is a container for arguments to an SQL query. It is helpful when building SQL statements where the number of arguments is variable.

func (*QueryArgs) Append

func (qa *QueryArgs) Append(v interface{}) string

Append adds a value to qa and returns the placeholder value for the argument, e.g. $1, $2, etc.
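
For example, when assembling a query with a variable number of conditions (a sketch; onlyGears is a hypothetical flag):

args := pgx.QueryArgs{}
sql := "select name, weight from widgets where weight > " + args.Append(10)
if onlyGears {
    sql += " and name like " + args.Append("%gear%")
}

rows, err := conn.Query(sql, args...)
if err != nil {
    return err
}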

type Row

type Row Rows

Row is a convenience wrapper over Rows that is returned by QueryRow.

func (*Row) Scan

func (r *Row) Scan(dest ...interface{}) (err error)

Scan reads the values from the row into dest values positionally. dest can include pointers to core types and the Scanner interface. If no rows were found it returns ErrNoRows. If multiple rows are returned it ignores all but the first.

type Rows

type Rows struct {
	// contains filtered or unexported fields
}

Rows is the result set returned from *Conn.Query. Rows must be closed before the *Conn can be used again. Rows are closed by explicitly calling Close(), calling Next() until it returns false, or when a fatal error occurs.

func (*Rows) Close

func (rows *Rows) Close()

Close closes the rows, making the connection ready for use again. It is safe to call Close after rows is already closed.

func (*Rows) Err

func (rows *Rows) Err() error

func (*Rows) Fatal

func (rows *Rows) Fatal(err error)

Fatal signals an error occurred after the query was sent to the server. It closes the rows automatically.

func (*Rows) FieldDescriptions

func (rows *Rows) FieldDescriptions() []FieldDescription

func (*Rows) Next

func (rows *Rows) Next() bool

Next prepares the next row for reading. It returns true if there is another row and false if no more rows are available. It automatically closes rows when all rows are read.

func (*Rows) Scan

func (rows *Rows) Scan(dest ...interface{}) (err error)

Scan reads the values from the current row into dest values positionally. dest can include pointers to core types and the Scanner interface.

func (*Rows) Values

func (rows *Rows) Values() ([]interface{}, error)

Values returns an array of the row values

type Scanner

type Scanner interface {
	// Scan MUST check r.Type().DataType (to check by OID) or
	// r.Type().DataTypeName (to check by name) to ensure that it is scanning an
	// expected column type. It also MUST check r.Type().FormatCode before
	// decoding. It should not assume that it was called on a data type or format
	// that it understands.
	Scan(r *ValueReader) error
}

Scanner is an interface used to decode values from the PostgreSQL server.

type SerializationError

type SerializationError string

func (SerializationError) Error

func (e SerializationError) Error() string

type Tx

type Tx struct {
	// contains filtered or unexported fields
}

Tx represents a database transaction.

All Tx methods return ErrTxClosed if Commit or Rollback has already been called on the Tx.

func (*Tx) Commit

func (tx *Tx) Commit() error

Commit commits the transaction

func (*Tx) Exec

func (tx *Tx) Exec(sql string, arguments ...interface{}) (commandTag CommandTag, err error)

Exec delegates to the underlying *Conn

func (*Tx) LargeObjects

func (tx *Tx) LargeObjects() (*LargeObjects, error)

LargeObjects returns a LargeObjects instance for the transaction.

func (*Tx) Query

func (tx *Tx) Query(sql string, args ...interface{}) (*Rows, error)

Query delegates to the underlying *Conn

func (*Tx) QueryRow

func (tx *Tx) QueryRow(sql string, args ...interface{}) *Row

QueryRow delegates to the underlying *Conn

func (*Tx) Rollback

func (tx *Tx) Rollback() error

Rollback rolls back the transaction. Rollback will return ErrTxClosed if the Tx is already closed, but is otherwise safe to call multiple times. Hence, a defer tx.Rollback() is safe even if tx.Commit() will be called first in a non-error condition.

type ValueReader

type ValueReader struct {
	// contains filtered or unexported fields
}

ValueReader is used by the Scanner interface to decode values.

func (*ValueReader) Err

func (r *ValueReader) Err() error

Err returns any error that the ValueReader has experienced

func (*ValueReader) Fatal

func (r *ValueReader) Fatal(err error)

Fatal tells r that a Fatal error has occurred

func (*ValueReader) Len

func (r *ValueReader) Len() int32

Len returns the number of unread bytes

func (*ValueReader) ReadByte

func (r *ValueReader) ReadByte() byte

func (*ValueReader) ReadBytes

func (r *ValueReader) ReadBytes(count int32) []byte

ReadBytes reads count bytes and returns as []byte

func (*ValueReader) ReadInt16

func (r *ValueReader) ReadInt16() int16

func (*ValueReader) ReadInt32

func (r *ValueReader) ReadInt32() int32

func (*ValueReader) ReadInt64

func (r *ValueReader) ReadInt64() int64

func (*ValueReader) ReadOid

func (r *ValueReader) ReadOid() Oid

func (*ValueReader) ReadString

func (r *ValueReader) ReadString(count int32) string

ReadString reads count bytes and returns as string

func (*ValueReader) Type

func (r *ValueReader) Type() *FieldDescription

Type returns the *FieldDescription of the value

type WriteBuf

type WriteBuf struct {
	// contains filtered or unexported fields
}

WriteBuf is used to build messages to send to the PostgreSQL server. It is used by the Encoder interface when implementing custom encoders.

func (*WriteBuf) WriteByte

func (wb *WriteBuf) WriteByte(b byte)

func (*WriteBuf) WriteBytes

func (wb *WriteBuf) WriteBytes(b []byte)

func (*WriteBuf) WriteCString

func (wb *WriteBuf) WriteCString(s string)

func (*WriteBuf) WriteInt16

func (wb *WriteBuf) WriteInt16(n int16)

func (*WriteBuf) WriteInt32

func (wb *WriteBuf) WriteInt32(n int32)

func (*WriteBuf) WriteInt64

func (wb *WriteBuf) WriteInt64(n int64)

Directories

Path Synopsis
examples
stdlib    Package stdlib is the compatibility layer from pgx to database/sql.
