pgx

package module
v2.8.0+incompatible
Published: Mar 18, 2016 License: MIT Imports: 27 Imported by: 0

README

Pgx

Pgx is a pure Go database connection library designed specifically for PostgreSQL. Pgx is different from other drivers such as pq because, while it can operate as a database/sql compatible driver, pgx is primarily intended to be used directly. It exposes a native interface similar to database/sql that offers better performance and more features.

Features

Pgx supports many additional features beyond what is available through database/sql.

  • Listen / notify
  • Transaction isolation level control
  • Full TLS connection control
  • Binary format support for custom types (can be much faster)
  • Logging support
  • Configurable connection pool with after connect hooks to do arbitrary connection setup
  • PostgreSQL array to Go slice mapping for integers, floats, and strings
  • Hstore support
  • JSON and JSONB support
  • Maps inet and cidr PostgreSQL types to net.IPNet
  • Large object support
  • Null mapping to Null* struct or pointer to pointer.
  • Supports database/sql.Scanner and database/sql/driver.Valuer interfaces for custom types

Performance

Pgx performance is roughly equivalent to pq and go-pg when selecting a single column from a single row, but it is substantially faster when selecting multiple entire rows (6893 queries/sec for pgx vs. 3968 queries/sec for pq -- 73% faster).

See this gist for the underlying benchmark results or check out go_db_bench to run the tests yourself.

database/sql

Import the github.com/jackc/pgx/stdlib package to use pgx as a driver for database/sql. It is possible to retrieve a pgx connection from database/sql on demand. This allows using the database/sql interface in most places while using pgx directly when more performance or PostgreSQL-specific features are needed.
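
A minimal sketch of registering and using the driver (the "pgx" driver name and the URI connection string are assumptions based on the stdlib package; credentials match the test setup below):

package main

import (
    "database/sql"
    "log"

    _ "github.com/jackc/pgx/stdlib" // registers the driver for database/sql
)

func main() {
    // Connection URI is a placeholder; adjust for your environment.
    db, err := sql.Open("pgx", "postgres://pgx_md5:secret@127.0.0.1:5432/pgx_test")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    var greeting string
    if err := db.QueryRow("select 'Hello, world!'").Scan(&greeting); err != nil {
        log.Fatal(err)
    }
    log.Println(greeting)
}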

Documentation

pgx includes extensive documentation in the godoc format. It is viewable online at godoc.org.

Testing

pgx supports multiple connection and authentication types. Setting up a test environment that can test all of them can be cumbersome. In particular, Windows cannot test Unix domain socket connections. Because of this, pgx will skip tests for connection types that are not configured.

Normal Test Environment

To setup the normal test environment run the following SQL:

create user pgx_md5 password 'secret';
create database pgx_test;

Connect to database pgx_test and run:

create extension hstore;

Next open connection_settings_test.go.example and make a copy without the .example. If your PostgreSQL server is accepting connections on 127.0.0.1, then you are done.

Connection and Authentication Test Environment

Complete the normal test environment setup and also do the following.

Run the following SQL:

create user pgx_none;
create user pgx_pw password 'secret';

Add the following to your pg_hba.conf:

If you are developing on Unix with domain socket connections:

local  pgx_test  pgx_none  trust
local  pgx_test  pgx_pw    password
local  pgx_test  pgx_md5   md5

If you are developing on Windows with TCP connections:

host  pgx_test  pgx_none  127.0.0.1/32 trust
host  pgx_test  pgx_pw    127.0.0.1/32 password
host  pgx_test  pgx_md5   127.0.0.1/32 md5

Version Policy

pgx follows semantic versioning for the documented public API. The master branch tracks the latest stable branch (v2). Consider using import "gopkg.in/jackc/pgx.v2" to lock to the v2 branch, or use a vendoring tool such as godep.

Documentation

Overview

Package pgx is a PostgreSQL database driver.

pgx provides lower level access to PostgreSQL than the standard database/sql. It remains as similar to the database/sql interface as possible while providing better speed and access to PostgreSQL-specific features. Import github.com/jackc/pgx/stdlib to use pgx as a database/sql compatible driver.

Query Interface

pgx implements Query and Scan in the familiar database/sql style.

var sum int32

// Send the query to the server. The returned rows MUST be closed
// before conn can be used again.
rows, err := conn.Query("select generate_series(1,$1)", 10)
if err != nil {
    return err
}

// rows.Close is called by rows.Next when all rows are read
// or an error occurs in Next or Scan. So it may optionally be
// omitted if nothing in the rows.Next loop can panic. It is
// safe to close rows multiple times.
defer rows.Close()

// Iterate through the result set
for rows.Next() {
    var n int32
    err = rows.Scan(&n)
    if err != nil {
        return err
    }
    sum += n
}

// Any errors encountered by rows.Next or rows.Scan will be returned here
if rows.Err() != nil {
    return rows.Err()
}

// No errors found - do something with sum

pgx also implements QueryRow in the same style as database/sql.

var name string
var weight int64
err := conn.QueryRow("select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
    return err
}

Use Exec to execute a query that does not return a result set.

commandTag, err := conn.Exec("delete from widgets where id=$1", 42)
if err != nil {
    return err
}
if commandTag.RowsAffected() != 1 {
    return errors.New("No row found to delete")
}

Connection Pool

Connection pool usage is explicit and configurable. In pgx, a connection can be created and managed directly, or a connection pool with a configurable maximum number of connections can be used. The connection pool also offers an after connect hook that allows every connection to be automatically set up before being made available in the pool. This is especially useful to ensure all connections have the same prepared statements available or to change any other connection settings, as shown in the sketch below.
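
A minimal sketch of creating a pool with an after connect hook (host, credentials, and the prepared statement are placeholders):

config := pgx.ConnPoolConfig{
    ConnConfig: pgx.ConnConfig{
        Host:     "127.0.0.1",
        User:     "pgx_md5",
        Password: "secret",
        Database: "pgx_test",
    },
    MaxConnections: 5,
    AfterConnect: func(conn *pgx.Conn) error {
        // Prepare the same statement on every new connection.
        _, err := conn.Prepare("getWidget", "select name, weight from widgets where id=$1")
        return err
    },
}

pool, err := pgx.NewConnPool(config)
if err != nil {
    return err
}
defer pool.Close()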

The connection pool delegates Query, QueryRow, Exec, and Begin to an automatically acquired and released connection, so you can avoid manually acquiring and releasing connections when you do not need that level of control.

var name string
var weight int64
err := pool.QueryRow("select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
    return err
}

Transactions

Transactions are started by calling Begin or BeginIso. The BeginIso variant creates a transaction with a specified isolation level.

tx, err := conn.Begin()
if err != nil {
    return err
}
// Rollback is safe to call even if the tx is already closed, so if
// the tx commits successfully, this is a no-op
defer tx.Rollback()

_, err = tx.Exec("insert into foo(id) values (1)")
if err != nil {
    return err
}

err = tx.Commit()
if err != nil {
    return err
}

Listen and Notify

pgx can listen to the PostgreSQL notification system with the WaitForNotification function. It takes a maximum time to wait for a notification.

err := conn.Listen("channelname")
if err != nil {
    return err
}

if notification, err := conn.WaitForNotification(time.Second); err == nil {
    // do something with notification
}

Null Mapping

pgx can map nulls in two ways. The first is Null* types that have a data field and a valid field. They work in a similar fashion to database/sql. The second is to use a pointer to a pointer.

var foo pgx.NullString
var bar *string
err := conn.QueryRow("select foo, bar from widgets where id=$1", 42).Scan(&a, &b)
if err != nil {
    return err
}

Array Mapping

pgx maps between int16, int32, int64, float32, float64, and string Go slices and the equivalent PostgreSQL array types. Go slices of native types do not support nulls, so if a PostgreSQL array that contains a null is read into a native Go slice, an error will occur.
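
A brief sketch (the batches table and widget_ids column are hypothetical):

ids := []int32{1, 2, 3}
_, err := conn.Exec("insert into batches(widget_ids) values($1)", ids)
if err != nil {
    return err
}

var widgetIDs []int32
err = conn.QueryRow("select widget_ids from batches limit 1").Scan(&widgetIDs)
if err != nil {
    return err
}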

Hstore Mapping

pgx includes an Hstore type and a NullHstore type. Hstore is simply a map[string]string and is preferred when the hstore contains no nulls. NullHstore follows the Null* pattern and supports null values.
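
For example (the products table and attrs column are hypothetical):

attrs := pgx.Hstore{"color": "red", "size": "small"}
_, err := conn.Exec("insert into products(attrs) values($1)", attrs)
if err != nil {
    return err
}

var got pgx.Hstore
err = conn.QueryRow("select attrs from products limit 1").Scan(&got)
if err != nil {
    return err
}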

JSON and JSONB Mapping

pgx includes built-in support to marshal and unmarshal between Go types and the PostgreSQL json and jsonb types.

Inet and Cidr Mapping

pgx maps net.IPNet to and from the PostgreSQL inet and cidr types. In addition, as a convenience, pgx will encode from a net.IP; it will assume a /32 netmask for IPv4 and a /128 netmask for IPv6.

Custom Type Support

pgx includes support for the common data types like integers, floats, strings, dates, and times that have direct mappings between Go and SQL. Support can be added for additional types like point, hstore, numeric, etc. that do not have direct mappings in Go by having the types implement the Scanner and Encoder interfaces.

Custom types can support text or binary formats. Binary format can provide a large performance increase. The natural place for deciding the format for a value would be in Scanner as it is responsible for decoding the returned data. However, that is impossible as the query has already been sent by the time the Scanner is invoked. The solution to this is the global DefaultTypeFormats. If a custom type prefers binary format it should register it there.

pgx.DefaultTypeFormats["point"] = pgx.BinaryFormatCode

Note that the type is referred to by name, not by OID. This is because custom PostgreSQL types like hstore will have different OIDs on different servers. When pgx establishes a connection it queries the pg_type table for all types. It then matches the names in DefaultTypeFormats with the returned OIDs and stores it in Conn.PgTypes.

See example_custom_type_test.go for an example of a custom type for the PostgreSQL point type.
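
A condensed sketch of such a type for point using the text format (NULL handling and the oid check in Encode are omitted for brevity; the length-prefix framing follows the Encoder documentation below):

type Point struct {
    X, Y float64
}

func (p *Point) Scan(vr *pgx.ValueReader) error {
    // Check the column type by name, since custom types have different OIDs on different servers.
    if vr.Type().DataTypeName != "point" {
        vr.Fatal(pgx.SerializationError("Point.Scan called on non-point column"))
        return vr.Err()
    }
    if vr.Type().FormatCode != pgx.TextFormatCode {
        vr.Fatal(pgx.SerializationError("Point.Scan only supports the text format"))
        return vr.Err()
    }
    s := vr.ReadString(vr.Len())
    _, err := fmt.Sscanf(s, "(%f,%f)", &p.X, &p.Y)
    return err
}

func (p Point) FormatCode() int16 { return pgx.TextFormatCode }

func (p Point) Encode(w *pgx.WriteBuf, oid pgx.Oid) error {
    // A full implementation should also verify oid is compatible, as described above.
    s := fmt.Sprintf("(%v,%v)", p.X, p.Y)
    w.WriteInt32(int32(len(s)))
    w.WriteBytes([]byte(s))
    return nil
}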

pgx also includes support for custom types implementing the database/sql.Scanner and database/sql/driver.Valuer interfaces.

Raw Bytes Mapping

[]byte passed as arguments to Query, QueryRow, and Exec are passed unmodified to PostgreSQL. In like manner, a *[]byte passed to Scan will be filled with the raw bytes returned by PostgreSQL. This can be especially useful for reading varchar, text, json, and jsonb values directly into a []byte and avoiding the type conversion from string.
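
For example, to read a jsonb column as raw bytes (the products table and attrs column are hypothetical):

var raw []byte
err := conn.QueryRow("select attrs from products where id=$1", 42).Scan(&raw)
if err != nil {
    return err
}
// raw now holds the value exactly as PostgreSQL returned it.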

TLS

The pgx ConnConfig struct has a TLSConfig field. If this field is nil, then TLS will be disabled. If it is present, then it will be used to configure the TLS connection. This allows total configuration of the TLS connection.
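
A brief sketch (the host name is a placeholder; requires the crypto/tls package):

config := pgx.ConnConfig{
    Host:      "db.example.com",
    User:      "pgx_md5",
    Password:  "secret",
    Database:  "pgx_test",
    TLSConfig: &tls.Config{ServerName: "db.example.com"},
}

conn, err := pgx.Connect(config)
if err != nil {
    return err
}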

Logging

pgx defines a simple logger interface. Connections optionally accept a logger that satisfies this interface. The log15 package (http://gopkg.in/inconshreveable/log15.v2) satisfies this interface and it is simple to define adapters for other loggers. Set LogLevel to control logging verbosity.
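
A sketch of attaching a log15 logger (connection details are placeholders):

// import log "gopkg.in/inconshreveable/log15.v2"
logger := log.New("module", "pgx")

config := pgx.ConnConfig{
    Host:     "127.0.0.1",
    User:     "pgx_md5",
    Password: "secret",
    Database: "pgx_test",
    Logger:   logger,
    LogLevel: pgx.LogLevelInfo,
}

conn, err := pgx.Connect(config)
if err != nil {
    return err
}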

Index

Constants

View Source
const (
	LogLevelTrace = 6
	LogLevelDebug = 5
	LogLevelInfo  = 4
	LogLevelWarn  = 3
	LogLevelError = 2
	LogLevelNone  = 1
)

The values for log levels are chosen such that the zero value means that no log level was specified and we can default to LogLevelDebug to preserve the behavior that existed prior to log level introduction.

View Source
const (
	Serializable    = "serializable"
	RepeatableRead  = "repeatable read"
	ReadCommitted   = "read committed"
	ReadUncommitted = "read uncommitted"
)

Transaction isolation levels

View Source
const (
	TxStatusInProgress      = 0
	TxStatusCommitFailure   = -1
	TxStatusRollbackFailure = -2
	TxStatusCommitSuccess   = 1
	TxStatusRollbackSuccess = 2
)
View Source
const (
	BoolOid             = 16
	ByteaOid            = 17
	Int8Oid             = 20
	Int2Oid             = 21
	Int4Oid             = 23
	TextOid             = 25
	OidOid              = 26
	JsonOid             = 114
	CidrOid             = 650
	CidrArrayOid        = 651
	Float4Oid           = 700
	Float8Oid           = 701
	InetOid             = 869
	BoolArrayOid        = 1000
	Int2ArrayOid        = 1005
	Int4ArrayOid        = 1007
	TextArrayOid        = 1009
	VarcharArrayOid     = 1015
	Int8ArrayOid        = 1016
	Float4ArrayOid      = 1021
	Float8ArrayOid      = 1022
	InetArrayOid        = 1041
	VarcharOid          = 1043
	DateOid             = 1082
	TimestampOid        = 1114
	TimestampArrayOid   = 1115
	TimestampTzOid      = 1184
	TimestampTzArrayOid = 1185
	UuidOid             = 2950
	JsonbOid            = 3802
)

PostgreSQL oids for common types

View Source
const (
	TextFormatCode   = 0
	BinaryFormatCode = 1
)

PostgreSQL format codes

Variables

View Source
var DefaultTypeFormats map[string]int16

DefaultTypeFormats maps type names to their default requested format (text or binary). In theory the Scanner interface should be the one to determine the format of the returned values. However, the query has already been executed by the time Scan is called, so it has no chance to set the format. So types that should be returned in binary format should be registered here.

View Source
var ErrConnBusy = errors.New("conn is busy")
View Source
var ErrDeadConn = errors.New("conn is dead")
View Source
var ErrInvalidLogLevel = errors.New("invalid log level")
View Source
var ErrNoRows = errors.New("no rows in result set")
View Source
var ErrNotificationTimeout = errors.New("notification timeout")
View Source
var ErrTLSRefused = errors.New("server refused TLS connection")
View Source
var ErrTxClosed = errors.New("tx is closed")
View Source
var ErrTxCommitRollback = errors.New("commit unexpectedly resulted in rollback")

ErrTxCommitRollback occurs when an error has occurred in a transaction and Commit() is called. PostgreSQL accepts COMMIT on aborted transactions, but it is treated as ROLLBACK.

Functions

func Decode

func Decode(vr *ValueReader, d interface{}) error

Decode decodes from vr into d. d must be a pointer. This allows implementations of the Decoder interface to delegate the actual work of decoding to the built-in functionality.

func Encode

func Encode(wbuf *WriteBuf, oid Oid, arg interface{}) error

Encode encodes arg into wbuf as the type oid. This allows implementations of the Encoder interface to delegate the actual work of encoding to the built-in functionality.

func LogLevelFromString

func LogLevelFromString(s string) (int, error)

LogLevelFromString converts a log level string to its constant value.

Valid levels:

  trace
  debug
  info
  warn
  error
  none

Types

type CommandTag

type CommandTag string

func (CommandTag) RowsAffected

func (ct CommandTag) RowsAffected() int64

RowsAffected returns the number of rows affected. If the CommandTag was not for a row-affecting command (such as "CREATE TABLE") then it returns 0.

type Conn

type Conn struct {
	Pid           int32             // backend pid
	SecretKey     int32             // key to use to send a cancel query message to the server
	RuntimeParams map[string]string // parameters that have been reported by the server
	PgTypes       map[Oid]PgType    // oids to PgTypes

	TxStatus byte
	// contains filtered or unexported fields
}

Conn is a PostgreSQL connection handle. It is not safe for concurrent usage. Use ConnPool to manage access to multiple database connections from multiple goroutines.

func Connect

func Connect(config ConnConfig) (c *Conn, err error)

Connect establishes a connection with a PostgreSQL server using config. config.Host must be specified. config.User will default to the OS user name. Other config fields are optional.
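
For example (host and credentials are placeholders):

conn, err := pgx.Connect(pgx.ConnConfig{
    Host:     "127.0.0.1",
    User:     "pgx_md5",
    Password: "secret",
    Database: "pgx_test",
})
if err != nil {
    return err
}
defer conn.Close()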

func (*Conn) Begin

func (c *Conn) Begin() (*Tx, error)

Begin starts a transaction with the default isolation level for the current connection. To use a specific isolation level see BeginIso.

func (*Conn) BeginIso

func (c *Conn) BeginIso(isoLevel string) (*Tx, error)

BeginIso starts a transaction with isoLevel as the transaction isolation level.

Valid isolation levels (and their constants) are:

serializable (pgx.Serializable)
repeatable read (pgx.RepeatableRead)
read committed (pgx.ReadCommitted)
read uncommitted (pgx.ReadUncommitted)

func (*Conn) CauseOfDeath

func (c *Conn) CauseOfDeath() error

func (*Conn) Close

func (c *Conn) Close() (err error)

Close closes a connection. It is safe to call Close on an already closed connection.

func (*Conn) Deallocate

func (c *Conn) Deallocate(name string) (err error)

Deallocate releases a prepared statement.

func (*Conn) Exec

func (c *Conn) Exec(sql string, arguments ...interface{}) (commandTag CommandTag, err error)

Exec executes sql. sql can be either a prepared statement name or an SQL string. arguments should be referenced positionally from the sql string as $1, $2, etc.

func (*Conn) IsAlive

func (c *Conn) IsAlive() bool

func (*Conn) Listen

func (c *Conn) Listen(channel string) error

Listen establishes a PostgreSQL listen/notify subscription to channel.

func (*Conn) Prepare

func (c *Conn) Prepare(name, sql string) (ps *PreparedStatement, err error)

Prepare creates a prepared statement with name and sql. sql can contain placeholders for bound parameters. These placeholders are referenced positionally as $1, $2, etc.

Prepare is idempotent; i.e. it is safe to call Prepare multiple times with the same name and sql arguments. This allows a code path to Prepare and Query/Exec without concern for whether the statement has already been prepared.

func (*Conn) Query

func (c *Conn) Query(sql string, args ...interface{}) (*Rows, error)

Query executes sql with args. If there is an error the returned *Rows will be returned in an error state. So it is allowed to ignore the error returned from Query and handle it in *Rows.

func (*Conn) QueryRow

func (c *Conn) QueryRow(sql string, args ...interface{}) *Row

QueryRow is a convenience wrapper over Query. Any error that occurs while querying is deferred until calling Scan on the returned *Row. That *Row will error with ErrNoRows if no rows are returned.

func (*Conn) SetLogLevel

func (c *Conn) SetLogLevel(lvl int) (int, error)

SetLogLevel replaces the current log level and returns the previous log level.

func (*Conn) SetLogger

func (c *Conn) SetLogger(logger Logger) Logger

SetLogger replaces the current logger and returns the previous logger.

func (*Conn) Unlisten

func (c *Conn) Unlisten(channel string) error

Unlisten unsubscribes from a listen channel

func (*Conn) WaitForNotification

func (c *Conn) WaitForNotification(timeout time.Duration) (*Notification, error)

WaitForNotification waits for a PostgreSQL notification for up to timeout. If the timeout occurs it returns pgx.ErrNotificationTimeout

type ConnConfig

type ConnConfig struct {
	Host              string // host (e.g. localhost) or path to unix domain socket directory (e.g. /private/tmp)
	Port              uint16 // default: 5432
	Database          string
	User              string // default: OS user name
	Password          string
	TLSConfig         *tls.Config // config for TLS connection -- nil disables TLS
	UseFallbackTLS    bool        // Try FallbackTLSConfig if connecting with TLSConfig fails. Used for preferring TLS, but allowing unencrypted, or vice-versa
	FallbackTLSConfig *tls.Config // config for fallback TLS connection (only used if UseFallBackTLS is true)-- nil disables TLS
	Logger            Logger
	LogLevel          int
	Dial              DialFunc
	RuntimeParams     map[string]string // Run-time parameters to set on connection as session default values (e.g. search_path or application_name)
}

ConnConfig contains all the options used to establish a connection.

func ParseDSN

func ParseDSN(s string) (ConnConfig, error)

ParseDSN parses a database DSN (data source name) into a ConnConfig

e.g. ParseDSN("user=username password=password host=1.2.3.4 port=5432 dbname=mydb sslmode=disable")

Any options not used by the connection process are parsed into ConnConfig.RuntimeParams.

e.g. ParseDSN("application_name=pgxtest search_path=admin user=username password=password host=1.2.3.4 dbname=mydb")

ParseDSN tries to match libpq behavior with regard to sslmode. See comments for ParseEnvLibpq for more information on the security implications of sslmode options.

func ParseEnvLibpq

func ParseEnvLibpq() (ConnConfig, error)

ParseEnvLibpq parses the environment like libpq does into a ConnConfig

See http://www.postgresql.org/docs/9.4/static/libpq-envars.html for details on the meaning of environment variables.

ParseEnvLibpq currently recognizes the following environment variables: PGHOST PGPORT PGDATABASE PGUSER PGPASSWORD PGSSLMODE PGAPPNAME

Important TLS Security Notes: ParseEnvLibpq tries to match libpq behavior with regard to PGSSLMODE. This includes defaulting to "prefer" behavior if no environment variable is set.

See http://www.postgresql.org/docs/9.4/static/libpq-ssl.html#LIBPQ-SSL-PROTECTION for details on what level of security each sslmode provides.

"require" and "verify-ca" modes currently are treated as "verify-full". e.g. They have stronger security guarantees than they would with libpq. Do not rely on this behavior as it may be possible to match libpq in the future. If you need full security use "verify-full".

Several of the PGSSLMODE options (including the default behavior of "prefer") will set UseFallbackTLS to true and FallbackTLSConfig to a disabled or weakened TLS mode. This means that if ParseEnvLibpq is used, but TLSConfig is later set from a different source, then UseFallbackTLS MUST be set to false to avoid the possibility of falling back to weaker or disabled security.

func ParseURI

func ParseURI(uri string) (ConnConfig, error)

ParseURI parses a database URI into ConnConfig

Query parameters not used by the connection process are parsed into ConnConfig.RuntimeParams.

type ConnPool

type ConnPool struct {
	// contains filtered or unexported fields
}

func NewConnPool

func NewConnPool(config ConnPoolConfig) (p *ConnPool, err error)

NewConnPool creates a new ConnPool. config.ConnConfig is passed through to Connect directly.

func (*ConnPool) Acquire

func (p *ConnPool) Acquire() (*Conn, error)

Acquire takes exclusive use of a connection until it is released.

func (*ConnPool) Begin

func (p *ConnPool) Begin() (*Tx, error)

Begin acquires a connection and begins a transaction on it. When the transaction is closed the connection will be automatically released.

func (*ConnPool) BeginIso

func (p *ConnPool) BeginIso(iso string) (*Tx, error)

BeginIso acquires a connection and begins a transaction in isolation mode iso on it. When the transaction is closed the connection will be automatically released.

func (*ConnPool) Close

func (p *ConnPool) Close()

Close ends the use of a connection pool. It prevents any new connections from being acquired, waits until all acquired connections are released, then closes all underlying connections.

func (*ConnPool) Deallocate

func (p *ConnPool) Deallocate(name string) (err error)

Deallocate releases a prepared statement from all connections in the pool.

func (*ConnPool) Exec

func (p *ConnPool) Exec(sql string, arguments ...interface{}) (commandTag CommandTag, err error)

Exec acquires a connection, delegates the call to that connection, and releases the connection

func (*ConnPool) Prepare

func (p *ConnPool) Prepare(name, sql string) (*PreparedStatement, error)

Prepare creates a prepared statement on a connection in the pool to test the statement is valid. If it succeeds all connections accessed through the pool will have the statement available.

Prepare creates a prepared statement with name and sql. sql can contain placeholders for bound parameters. These placeholders are referenced positionally as $1, $2, etc.

Prepare is idempotent; i.e. it is safe to call Prepare multiple times with the same name and sql arguments. This allows a code path to Prepare and Query/Exec without concern for whether the statement has already been prepared.

func (*ConnPool) Query

func (p *ConnPool) Query(sql string, args ...interface{}) (*Rows, error)

Query acquires a connection and delegates the call to that connection. When *Rows are closed, the connection is released automatically.

func (*ConnPool) QueryRow

func (p *ConnPool) QueryRow(sql string, args ...interface{}) *Row

QueryRow acquires a connection and delegates the call to that connection. The connection is released automatically after Scan is called on the returned *Row.

func (*ConnPool) Release

func (p *ConnPool) Release(conn *Conn)

Release gives up use of a connection.

func (*ConnPool) Reset

func (p *ConnPool) Reset()

Reset closes all open connections, but leaves the pool open. It is intended for use when an error is detected that would disrupt all connections (such as a network interruption or a server state change).

It is safe to reset a pool while connections are checked out. Those connections will be closed when they are returned to the pool.

func (*ConnPool) Stat

func (p *ConnPool) Stat() (s ConnPoolStat)

Stat returns connection pool statistics

type ConnPoolConfig

type ConnPoolConfig struct {
	ConnConfig
	MaxConnections int               // max simultaneous connections to use, default 5, must be at least 2
	AfterConnect   func(*Conn) error // function to call on every new connection
}

type ConnPoolStat

type ConnPoolStat struct {
	MaxConnections       int // max simultaneous connections to use
	CurrentConnections   int // current live connections
	AvailableConnections int // unused live connections
}

type DialFunc

type DialFunc func(network, addr string) (net.Conn, error)

type Encoder

type Encoder interface {
	// Encode writes the value to w.
	//
	// If the value is NULL an int32(-1) should be written.
	//
	// Encode MUST check oid to see if the parameter data type is compatible. If
	// this is not done, the PostgreSQL server may detect the error if the
	// expected data size or format of the encoded data does not match. But if
	// the encoded data is a valid representation of the data type PostgreSQL
	// expects such as date and int4, incorrect data may be stored.
	Encode(w *WriteBuf, oid Oid) error

	// FormatCode returns the format that the encoder writes the value. It must be
	// either pgx.TextFormatCode or pgx.BinaryFormatCode.
	FormatCode() int16
}

Encoder is an interface used to encode values for transmission to the PostgreSQL server.

type FieldDescription

type FieldDescription struct {
	Name            string
	Table           Oid
	AttributeNumber int16
	DataType        Oid
	DataTypeSize    int16
	DataTypeName    string
	Modifier        int32
	FormatCode      int16
}

type Hstore

type Hstore map[string]string

Hstore represents an hstore column. It does not support a null column or null key values (use NullHstore for this). Hstore implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

func (Hstore) Encode

func (h Hstore) Encode(w *WriteBuf, oid Oid) error

func (Hstore) FormatCode

func (h Hstore) FormatCode() int16

func (*Hstore) Scan

func (h *Hstore) Scan(vr *ValueReader) error

type LargeObject

type LargeObject struct {
	// contains filtered or unexported fields
}

A LargeObject is a large object stored on the server. It is only valid within the transaction that it was initialized in. It implements these interfaces:

io.Writer
io.Reader
io.Seeker
io.Closer

func (*LargeObject) Close

func (o *LargeObject) Close() error

Close closes the large object descriptor.

func (*LargeObject) Read

func (o *LargeObject) Read(p []byte) (int, error)

Read reads up to len(p) bytes into p returning the number of bytes read.

func (*LargeObject) Seek

func (o *LargeObject) Seek(offset int64, whence int) (n int64, err error)

Seek moves the current location pointer to the new location specified by offset.

func (*LargeObject) Tell

func (o *LargeObject) Tell() (n int64, err error)

Tell returns the current read or write location of the large object descriptor.

func (*LargeObject) Truncate

func (o *LargeObject) Truncate(size int64) (err error)

Truncate truncates the large object to size.

func (*LargeObject) Write

func (o *LargeObject) Write(p []byte) (int, error)

Write writes p to the large object and returns the number of bytes written and an error if not all of p was written.

type LargeObjectMode

type LargeObjectMode int32
const (
	LargeObjectModeWrite LargeObjectMode = 0x20000
	LargeObjectModeRead  LargeObjectMode = 0x40000
)

type LargeObjects

type LargeObjects struct {
	// Has64 is true if the server is capable of working with 64-bit numbers
	Has64 bool
	// contains filtered or unexported fields
}

LargeObjects is a structure used to access the large objects API. It is only valid within the transaction where it was created.

For more details see: http://www.postgresql.org/docs/current/static/largeobjects.html
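
A brief sketch of creating and writing a large object inside a transaction (the object contents are placeholders):

tx, err := conn.Begin()
if err != nil {
    return err
}
defer tx.Rollback()

lo, err := tx.LargeObjects()
if err != nil {
    return err
}

// Passing 0 lets the server assign an unused OID.
oid, err := lo.Create(0)
if err != nil {
    return err
}

obj, err := lo.Open(oid, pgx.LargeObjectModeWrite)
if err != nil {
    return err
}
if _, err := obj.Write([]byte("large object contents")); err != nil {
    return err
}
if err := obj.Close(); err != nil {
    return err
}

return tx.Commit()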

func (*LargeObjects) Create

func (o *LargeObjects) Create(id Oid) (Oid, error)

Create creates a new large object. If id is zero, the server assigns an unused OID.

func (*LargeObjects) Open

func (o *LargeObjects) Open(oid Oid, mode LargeObjectMode) (*LargeObject, error)

Open opens an existing large object with the given mode.

func (*LargeObjects) Unlink

func (o *LargeObjects) Unlink(oid Oid) error

Unlink removes a large object from the database.

type Logger

type Logger interface {
	// Log a message at the given level with context key/value pairs
	Debug(msg string, ctx ...interface{})
	Info(msg string, ctx ...interface{})
	Warn(msg string, ctx ...interface{})
	Error(msg string, ctx ...interface{})
}

Logger is the interface used to get logging from pgx internals. https://github.com/inconshreveable/log15 is the recommended logging package. This logging interface was extracted from there. However, it should be simple to adapt any logger to this interface.

type Notification

type Notification struct {
	Pid     int32  // backend pid that sent the notification
	Channel string // channel from which notification was received
	Payload string
}

type NullBool

type NullBool struct {
	Bool  bool
	Valid bool // Valid is true if Bool is not NULL
}

NullBool represents a bool that may be null. NullBool implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false then the value is NULL.

func (NullBool) Encode

func (n NullBool) Encode(w *WriteBuf, oid Oid) error

func (NullBool) FormatCode

func (n NullBool) FormatCode() int16

func (*NullBool) Scan

func (n *NullBool) Scan(vr *ValueReader) error

type NullFloat32

type NullFloat32 struct {
	Float32 float32
	Valid   bool // Valid is true if Float32 is not NULL
}

NullFloat32 represents a float4 that may be null. NullFloat32 implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false then the value is NULL.

func (NullFloat32) Encode

func (n NullFloat32) Encode(w *WriteBuf, oid Oid) error

func (NullFloat32) FormatCode

func (n NullFloat32) FormatCode() int16

func (*NullFloat32) Scan

func (n *NullFloat32) Scan(vr *ValueReader) error

type NullFloat64

type NullFloat64 struct {
	Float64 float64
	Valid   bool // Valid is true if Float64 is not NULL
}

NullFloat64 represents a float8 that may be null. NullFloat64 implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false then the value is NULL.

func (NullFloat64) Encode

func (n NullFloat64) Encode(w *WriteBuf, oid Oid) error

func (NullFloat64) FormatCode

func (n NullFloat64) FormatCode() int16

func (*NullFloat64) Scan

func (n *NullFloat64) Scan(vr *ValueReader) error

type NullHstore

type NullHstore struct {
	Hstore map[string]NullString
	Valid  bool
}

NullHstore represents an hstore column that can be null or have null values associated with its keys. NullHstore implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false, then the value of the entire hstore column is NULL. If any of the NullString values in Hstore has Valid set to false, the key appears in the hstore column, but its value is explicitly set to NULL.

func (NullHstore) Encode

func (h NullHstore) Encode(w *WriteBuf, oid Oid) error

func (NullHstore) FormatCode

func (h NullHstore) FormatCode() int16

func (*NullHstore) Scan

func (h *NullHstore) Scan(vr *ValueReader) error

type NullInt16

type NullInt16 struct {
	Int16 int16
	Valid bool // Valid is true if Int16 is not NULL
}

NullInt16 represents a smallint that may be null. NullInt16 implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan for prepared and unprepared queries.

If Valid is false then the value is NULL.

func (NullInt16) Encode

func (n NullInt16) Encode(w *WriteBuf, oid Oid) error

func (NullInt16) FormatCode

func (n NullInt16) FormatCode() int16

func (*NullInt16) Scan

func (n *NullInt16) Scan(vr *ValueReader) error

type NullInt32

type NullInt32 struct {
	Int32 int32
	Valid bool // Valid is true if Int32 is not NULL
}

NullInt32 represents an integer that may be null. NullInt32 implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false then the value is NULL.

func (NullInt32) Encode

func (n NullInt32) Encode(w *WriteBuf, oid Oid) error

func (NullInt32) FormatCode

func (n NullInt32) FormatCode() int16

func (*NullInt32) Scan

func (n *NullInt32) Scan(vr *ValueReader) error

type NullInt64

type NullInt64 struct {
	Int64 int64
	Valid bool // Valid is true if Int64 is not NULL
}

NullInt64 represents a bigint that may be null. NullInt64 implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false then the value is NULL.

func (NullInt64) Encode

func (n NullInt64) Encode(w *WriteBuf, oid Oid) error

func (NullInt64) FormatCode

func (n NullInt64) FormatCode() int16

func (*NullInt64) Scan

func (n *NullInt64) Scan(vr *ValueReader) error

type NullString

type NullString struct {
	String string
	Valid  bool // Valid is true if String is not NULL
}

NullString represents a string that may be null. NullString implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan.

If Valid is false then the value is NULL.

func ParseHstore

func ParseHstore(s string) (k []string, v []NullString, err error)

ParseHstore parses the string representation of an hstore column (the same you would get from an ordinary SELECT) into two slices of keys and values. It is used internally in the default parsing of hstores, but is exported for use in handling custom data structures backed by an hstore column without the overhead of creating a map[string]string.

func (NullString) Encode

func (s NullString) Encode(w *WriteBuf, oid Oid) error

func (NullString) FormatCode

func (n NullString) FormatCode() int16

func (*NullString) Scan

func (s *NullString) Scan(vr *ValueReader) error

type NullTime

type NullTime struct {
	Time  time.Time
	Valid bool // Valid is true if Time is not NULL
}

NullTime represents a time.Time that may be null. NullTime implements the Scanner and Encoder interfaces so it may be used both as an argument to Query[Row] and a destination for Scan. It corresponds with the PostgreSQL types timestamptz, timestamp, and date.

If Valid is false then the value is NULL.

func (NullTime) Encode

func (n NullTime) Encode(w *WriteBuf, oid Oid) error

func (NullTime) FormatCode

func (n NullTime) FormatCode() int16

func (*NullTime) Scan

func (n *NullTime) Scan(vr *ValueReader) error

type Oid

type Oid int32

type PgError

type PgError struct {
	Severity         string
	Code             string
	Message          string
	Detail           string
	Hint             string
	Position         int32
	InternalPosition int32
	InternalQuery    string
	Where            string
	SchemaName       string
	TableName        string
	ColumnName       string
	DataTypeName     string
	ConstraintName   string
	File             string
	Line             int32
	Routine          string
}

PgError represents an error reported by the PostgreSQL server. See http://www.postgresql.org/docs/9.3/static/protocol-error-fields.html for detailed field description.

func (PgError) Error

func (self PgError) Error() string

type PgType

type PgType struct {
	Name          string // name of type e.g. int4, text, date
	DefaultFormat int16  // default format (text or binary) this type will be requested in
}

type PreparedStatement

type PreparedStatement struct {
	Name              string
	SQL               string
	FieldDescriptions []FieldDescription
	ParameterOids     []Oid
}

type ProtocolError

type ProtocolError string

func (ProtocolError) Error

func (e ProtocolError) Error() string

type QueryArgs

type QueryArgs []interface{}

QueryArgs is a container for arguments to an SQL query. It is helpful when building SQL statements where the number of arguments is variable.

func (*QueryArgs) Append

func (qa *QueryArgs) Append(v interface{}) string

Append adds a value to qa and returns the placeholder value for the argument. e.g. $1, $2, etc.
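
A sketch of building a query with a variable number of arguments (the widgets table and the minWeight and color variables are hypothetical):

args := pgx.QueryArgs{}
sql := "select name from widgets where weight > " + args.Append(minWeight)
if color != "" {
    sql += " and color = " + args.Append(color)
}

rows, err := conn.Query(sql, args...)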

type Row

type Row Rows

Row is a convenience wrapper over Rows that is returned by QueryRow.

func (*Row) Scan

func (r *Row) Scan(dest ...interface{}) (err error)

Scan works the same as (*Rows).Scan with the following exceptions. If no rows were found it returns ErrNoRows. If multiple rows are returned it ignores all but the first.

type Rows

type Rows struct {
	// contains filtered or unexported fields
}

Rows is the result set returned from *Conn.Query. Rows must be closed before the *Conn can be used again. Rows are closed by explicitly calling Close(), calling Next() until it returns false, or when a fatal error occurs.

func (*Rows) AfterClose

func (rows *Rows) AfterClose(f func(*Rows))

AfterClose adds f to a LILO queue of functions that will be called when rows is closed.

func (*Rows) Close

func (rows *Rows) Close()

Close closes the rows, making the connection ready for use again. It is safe to call Close after rows is already closed.

func (*Rows) Conn

func (rows *Rows) Conn() *Conn

Conn returns the *Conn this *Rows is using.

func (*Rows) Err

func (rows *Rows) Err() error

func (*Rows) Fatal

func (rows *Rows) Fatal(err error)

Fatal signals an error occurred after the query was sent to the server. It closes the rows automatically.

func (*Rows) FieldDescriptions

func (rows *Rows) FieldDescriptions() []FieldDescription

func (*Rows) Next

func (rows *Rows) Next() bool

Next prepares the next row for reading. It returns true if there is another row and false if no more rows are available. It automatically closes rows when all rows are read.

func (*Rows) Scan

func (rows *Rows) Scan(dest ...interface{}) (err error)

Scan reads the values from the current row into dest values positionally. dest can include pointers to core types, values implementing the Scanner interface, and []byte. []byte will skip the decoding process and directly copy the raw bytes received from PostgreSQL.

func (*Rows) Values

func (rows *Rows) Values() ([]interface{}, error)

Values returns an array of the row values

type Scanner

type Scanner interface {
	// Scan MUST check r.Type().DataType (to check by OID) or
	// r.Type().DataTypeName (to check by name) to ensure that it is scanning an
	// expected column type. It also MUST check r.Type().FormatCode before
	// decoding. It should not assume that it was called on a data type or format
	// that it understands.
	Scan(r *ValueReader) error
}

Scanner is an interface used to decode values from the PostgreSQL server.

type SerializationError

type SerializationError string

func (SerializationError) Error

func (e SerializationError) Error() string

type Tx

type Tx struct {
	// contains filtered or unexported fields
}

Tx represents a database transaction.

All Tx methods return ErrTxClosed if Commit or Rollback has already been called on the Tx.

func (*Tx) AfterClose

func (tx *Tx) AfterClose(f func(*Tx))

AfterClose adds f to a LILO queue of functions that will be called when the transaction is closed (either Commit or Rollback).

func (*Tx) Commit

func (tx *Tx) Commit() error

Commit commits the transaction

func (*Tx) Conn

func (tx *Tx) Conn() *Conn

Conn returns the *Conn this transaction is using.

func (*Tx) Err

func (tx *Tx) Err() error

Err returns the final error state, if any, of calling Commit or Rollback.

func (*Tx) Exec

func (tx *Tx) Exec(sql string, arguments ...interface{}) (commandTag CommandTag, err error)

Exec delegates to the underlying *Conn

func (*Tx) LargeObjects

func (tx *Tx) LargeObjects() (*LargeObjects, error)

LargeObjects returns a LargeObjects instance for the transaction.

func (*Tx) Query

func (tx *Tx) Query(sql string, args ...interface{}) (*Rows, error)

Query delegates to the underlying *Conn

func (*Tx) QueryRow

func (tx *Tx) QueryRow(sql string, args ...interface{}) *Row

QueryRow delegates to the underlying *Conn

func (*Tx) Rollback

func (tx *Tx) Rollback() error

Rollback rolls back the transaction. Rollback will return ErrTxClosed if the Tx is already closed, but is otherwise safe to call multiple times. Hence, a defer tx.Rollback() is safe even if tx.Commit() will be called first in a non-error condition.

func (*Tx) Status

func (tx *Tx) Status() int8

Status returns the status of the transaction from the set of pgx.TxStatus* constants.

type ValueReader

type ValueReader struct {
	// contains filtered or unexported fields
}

ValueReader is used by the Scanner interface to decode values.

func (*ValueReader) Err

func (r *ValueReader) Err() error

Err returns any error that the ValueReader has experienced

func (*ValueReader) Fatal

func (r *ValueReader) Fatal(err error)

Fatal tells r that a Fatal error has occurred

func (*ValueReader) Len

func (r *ValueReader) Len() int32

Len returns the number of unread bytes

func (*ValueReader) ReadByte

func (r *ValueReader) ReadByte() byte

func (*ValueReader) ReadBytes

func (r *ValueReader) ReadBytes(count int32) []byte

ReadBytes reads count bytes and returns as []byte

func (*ValueReader) ReadInt16

func (r *ValueReader) ReadInt16() int16

func (*ValueReader) ReadInt32

func (r *ValueReader) ReadInt32() int32

func (*ValueReader) ReadInt64

func (r *ValueReader) ReadInt64() int64

func (*ValueReader) ReadOid

func (r *ValueReader) ReadOid() Oid

func (*ValueReader) ReadString

func (r *ValueReader) ReadString(count int32) string

ReadString reads count bytes and returns as string

func (*ValueReader) Type

func (r *ValueReader) Type() *FieldDescription

Type returns the *FieldDescription of the value

type WriteBuf

type WriteBuf struct {
	// contains filtered or unexported fields
}

WriteBuf is used to build messages to send to the PostgreSQL server. It is used by the Encoder interface when implementing custom encoders.

func (*WriteBuf) WriteByte

func (wb *WriteBuf) WriteByte(b byte)

func (*WriteBuf) WriteBytes

func (wb *WriteBuf) WriteBytes(b []byte)

func (*WriteBuf) WriteCString

func (wb *WriteBuf) WriteCString(s string)

func (*WriteBuf) WriteInt16

func (wb *WriteBuf) WriteInt16(n int16)

func (*WriteBuf) WriteInt32

func (wb *WriteBuf) WriteInt32(n int32)

func (*WriteBuf) WriteInt64

func (wb *WriteBuf) WriteInt64(n int64)

Directories

Path      Synopsis
examples
stdlib    Package stdlib is the compatibility layer from pgx to database/sql.
