radix

package module
v4.0.0-beta.0

Published: Oct 4, 2020 License: MIT Imports: 23 Imported by: 63

Radix

Radix is a full-featured Redis client for Go. See the GoDoc for documentation and general usage examples.

This is the third revision of this project; the previous one has been deprecated but can still be found here.

Features

  • Standard print-like API which supports all current and future redis commands.

  • Support for using an io.Reader as a command argument and writing responses to an io.Writer, as well as marshaling/unmarshaling command arguments from structs.

  • Connection pooling, which takes advantage of implicit pipelining to reduce system calls.

  • Helpers for EVAL, SCAN, and manual pipelining.

  • Support for pubsub, as well as persistent pubsub wherein if a connection is lost a new one transparently replaces it.

  • Full support for sentinel and cluster.

  • Nearly all important types are interfaces, allowing for custom implementations of nearly anything.

Installation and Usage

Radix always aims to support the most recent two versions of Go, and is likely to support older versions as well.

Module-aware mode:

go get github.com/mediocregopher/radix/v4
// import github.com/mediocregopher/radix/v4

Legacy GOPATH mode:

go get github.com/mediocregopher/radix
// import github.com/mediocregopher/radix

Testing

# requires a redis server running on 127.0.0.1:6379
go test github.com/mediocregopher/radix/v4

Benchmarks

Thanks to a huge amount of work put in by @nussjustin, and inspiration from the redispipe project and @funny-falcon, radix/v4 is significantly faster than most redis drivers, including redigo, for normal parallel workloads, and is pretty comparable for serial workloads.

Benchmarks can be run from the bench folder. The following results were obtained by running the benchmarks with -cpu set to 32 and 64, on a 32 core machine, with the redis server on a separate machine. See this thread for more details.

Some of radix's results are not included below because they use a non-default configuration.

# go get rsc.io/benchstat
# cd bench
# go test -v -run=XXX -bench=ParallelGetSet -cpu 32,64 -benchmem . >/tmp/radix.stat
# benchstat /tmp/radix.stat
name                                   time/op
ParallelGetSet/radix/default-32        2.15µs ± 0% <--- The good stuff
ParallelGetSet/radix/default-64        2.05µs ± 0% <--- The better stuff
ParallelGetSet/redigo-32               27.9µs ± 0%
ParallelGetSet/redigo-64               28.5µs ± 0%
ParallelGetSet/redispipe-32            2.02µs ± 0%
ParallelGetSet/redispipe-64            1.71µs ± 0%

name                                   alloc/op
ParallelGetSet/radix/default-32         72.0B ± 0%
ParallelGetSet/radix/default-64         84.0B ± 0%
ParallelGetSet/redigo-32                 119B ± 0%
ParallelGetSet/redigo-64                 120B ± 0%
ParallelGetSet/redispipe-32              168B ± 0%
ParallelGetSet/redispipe-64              172B ± 0%

name                                   allocs/op
ParallelGetSet/radix/default-32          4.00 ± 0%
ParallelGetSet/radix/default-64          4.00 ± 0%
ParallelGetSet/redigo-32                 6.00 ± 0%
ParallelGetSet/redigo-64                 6.00 ± 0%
ParallelGetSet/redispipe-32              8.00 ± 0%
ParallelGetSet/redispipe-64              8.00 ± 0%

Unless otherwise noted, the source files are distributed under the MIT License found in the LICENSE.txt file.

Documentation

Overview

Package radix implements all functionality needed to work with redis and all things related to it, including redis cluster, pubsub, sentinel, scanning, lua scripting, and more.

Creating a client

For a single node redis instance use NewPool to create a connection pool. The connection pool is thread-safe and will automatically create, reuse, and recreate connections as needed:

ctx := context.Background()
pool, err := radix.NewPool(ctx, "tcp", "127.0.0.1:6379", 10)
if err != nil {
	// handle error
}

If you're using sentinel or cluster you should use NewSentinel or NewCluster (respectively) to create your client instead.

Commands

Any redis command can be performed by passing a Cmd into a Client's Do method. Each Cmd should only be used once. The return from the Cmd can be captured into any appropriate go primitive type, or a slice, map, or struct, if the command returns an array.

err := client.Do(ctx, radix.Cmd(nil, "SET", "foo", "someval"))

var fooVal string
err = client.Do(ctx, radix.Cmd(&fooVal, "GET", "foo"))

var fooValB []byte
err = client.Do(ctx, radix.Cmd(&fooValB, "GET", "foo"))

var barI int
err = client.Do(ctx, radix.Cmd(&barI, "INCR", "bar"))

var bazEls []string
err = client.Do(ctx, radix.Cmd(&bazEls, "LRANGE", "baz", "0", "-1"))

var buzMap map[string]string
err = client.Do(ctx, radix.Cmd(&buzMap, "HGETALL", "buz"))

FlatCmd can also be used if you wish to use non-string arguments like integers, slices, maps, or structs, and have them automatically be flattened into a single string slice.
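
For example, a brief sketch mirroring the FlatCmd example further below (assuming the client and ctx from the snippets above; the keys are illustrative):

// performs "SADD" "fooSet" "a" "b" "c"
err := client.Do(ctx, radix.FlatCmd(nil, "SADD", "fooSet", []string{"a", "b", "c"}))

// performs "HMSET" "fooHash" "a" "1" "b" "2" "c" "3"
m := map[string]int{"a": 1, "b": 2, "c": 3}
err = client.Do(ctx, radix.FlatCmd(nil, "HMSET", "fooHash", m))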

Struct Scanning

Cmd and FlatCmd can unmarshal results into a struct. The results must be a key/value array, such as that returned by HGETALL. Exported field names will be used as keys, unless the fields have the "redis" tag:

type MyType struct {
	Foo string               // Will be populated with the value for key "Foo"
	Bar string `redis:"BAR"` // Will be populated with the value for key "BAR"
	Baz string `redis:"-"`   // Will not be populated
}

Embedded structs will inline that struct's fields into the parent's:

type MyOtherType struct {
	// adds fields "Foo" and "BAR" (from above example) to MyOtherType
	MyType
	Biz int
}

The same rules for field naming apply when a struct is passed into FlatCmd as an argument.
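
For illustration, a brief sketch using MyType from above (the key name is hypothetical; client and ctx as in the earlier snippets):

// writes fields as "Foo" "a" "BAR" "b" (Baz is skipped), then reads them back
if err := client.Do(ctx, radix.FlatCmd(nil, "HMSET", "myTypeKey", MyType{Foo: "a", Bar: "b"})); err != nil {
	// handle error
}

var mt MyType
if err := client.Do(ctx, radix.Cmd(&mt, "HGETALL", "myTypeKey")); err != nil {
	// handle error
}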

Actions

Cmd and FlatCmd both implement the Action interface. Other Actions include Pipeline, WithConn, and EvalScript.Cmd. Any of these may be passed into any Client's Do method.

var fooVal string
p := radix.Pipeline(
	radix.FlatCmd(nil, "SET", "foo", 1),
	radix.Cmd(&fooVal, "GET", "foo"),
)
if err := client.Do(ctx, p); err != nil {
	panic(err)
}
fmt.Printf("fooVal: %q\n", fooVal)

Transactions

There are two ways to perform transactions in redis. The first is with the MULTI/EXEC commands, which can be done using the WithConn Action (see its example). The second is using EVAL with lua scripting, which can be done using the EvalScript Action (again, see its example).

EVAL with lua scripting is recommended in almost all cases. It only requires a single round-trip, it's infinitely more flexible than MULTI/EXEC, it's simpler to code, and for complex transactions, which would otherwise need a WATCH statement with MULTI/EXEC, it's significantly faster.

AUTH and other settings via ConnFunc and ClientFunc

All the client creation functions (e.g. NewPool) take in either a ConnFunc or a ClientFunc via their options. These can be used in order to set up timeouts on connections, perform authentication commands, or even implement custom pools.

// this is a ConnFunc which will set up a connection which is authenticated
// and has a 1 minute timeout on all operations
customConnFunc := func(ctx context.Context, network, addr string) (radix.Conn, error) {
	return radix.Dial(ctx, network, addr,
		radix.DialTimeout(1 * time.Minute),
		radix.DialAuthPass("mySuperSecretPassword"),
	)
}

// this pool will use our ConnFunc for all connections it creates
pool, err := radix.NewPool(ctx, "tcp", redisAddr, 10, radix.PoolConnFunc(customConnFunc))

// this cluster will use the ClientFunc to create a pool to each node in the
// cluster. The pools also use our customConnFunc, but have more connections
poolFunc := func(ctx context.Context, network, addr string) (radix.Client, error) {
	return radix.NewPool(ctx, network, addr, 100, radix.PoolConnFunc(customConnFunc))
}
cluster, err := radix.NewCluster(ctx, []string{redisAddr1, redisAddr2}, radix.ClusterPoolFunc(poolFunc))

Custom implementations

All interfaces in this package were designed such that they could have custom implementations. There is no dependency within radix that demands any interface be implemented by a particular underlying type, so feel free to create your own Pools or Conns or Actions or whatever makes your life easier.

Errors

Errors returned from redis can be explicitly checked for using the resp2.Error type. Note that the errors.As function, introduced in go 1.13, should be used.

var redisErr resp2.Error
err := client.Do(ctx, radix.Cmd(nil, "AUTH", "wrong password"))
if errors.As(err, &redisErr) {
	log.Printf("redis error returned: %s", redisErr.E)
}

Use the golang.org/x/xerrors package if you're using an older version of go.

Constants

This section is empty.

Variables

var DefaultClientFunc = func(ctx context.Context, network, addr string) (Client, error) {
	return NewPool(ctx, network, addr, 4)
}

DefaultClientFunc is a ClientFunc which will return a Client for a redis instance using sane defaults.

var DefaultClusterConnFunc = func(ctx context.Context, network, addr string) (Conn, error) {
	c, err := DefaultConnFunc(ctx, network, addr)
	if err != nil {
		return nil, err
	} else if err := c.Do(ctx, Cmd(nil, "READONLY")); err != nil {
		c.Close()
		return nil, err
	}
	return c, nil
}

DefaultClusterConnFunc is a ConnFunc which will return a Conn for a node in a redis cluster using sane defaults and which has READONLY mode enabled, allowing read-only commands on the connection even if the connected instance is currently a replica, either by explicitly sending commands on the connection or by using the DoSecondary method on the Cluster that owns the connection.

var DefaultConnFunc = func(ctx context.Context, network, addr string) (Conn, error) {
	return Dial(ctx, network, addr)
}

DefaultConnFunc is a ConnFunc which will return a Conn for a redis instance using sane defaults.

var ErrPoolEmpty = errors.New("connection pool is empty")

ErrPoolEmpty is used by Pools created using the PoolOnEmptyErrAfter option

var ScanAllKeys = ScanOpts{
	Command: "SCAN",
}

ScanAllKeys is a shortcut ScanOpts which can be used to scan all keys

Functions

func CRC16

func CRC16(buf []byte) uint16

CRC16 returns the checksum for a given set of bytes, based on the CRC algorithm defined for hashing redis keys in a cluster setup.

func ClusterSlot

func ClusterSlot(key []byte) uint16

ClusterSlot returns the slot number the key belongs to in any redis cluster, taking into account key hash tags
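
For example, a small sketch (key names are illustrative); keys sharing a hash tag land in the same slot:

slotA := ClusterSlot([]byte("{user1000}.following"))
slotB := ClusterSlot([]byte("{user1000}.followers"))
fmt.Println(slotA == slotB) // true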

Types

type Action

type Action interface {
	// Keys returns the keys which will be acted on. Empty slice or nil may be
	// returned if no keys are being acted on. The returned slice must not be
	// modified.
	Keys() []string

	// Perform actually performs the Action using an existing Conn.
	Perform(ctx context.Context, c Conn) error
}

Action performs a task using a Conn.

func Cmd

func Cmd(rcv interface{}, cmd string, args ...string) Action

Cmd is used to perform a redis command and retrieve a result. It should not be passed into Do more than once.

If the receiver value of Cmd is a primitive, a slice/map, or a struct then a pointer must be passed in. It may also be an io.Writer, an encoding.Text/BinaryUnmarshaler, or a resp.Unmarshaler. See the package docs for more on how results are unmarshaled into the receiver.

The Action returned by Cmd also implements resp.Marshaler.

Example
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

client, err := NewPool(ctx, "tcp", "127.0.0.1:6379", 10) // or any other client
if err != nil {
	panic(err)
}

if err := client.Do(ctx, Cmd(nil, "SET", "foo", "bar")); err != nil {
	panic(err)
}

var fooVal string
if err := client.Do(ctx, Cmd(&fooVal, "GET", "foo")); err != nil {
	panic(err)
}
fmt.Println(fooVal)
Output:

bar

func FlatCmd

func FlatCmd(rcv interface{}, cmd, key string, args ...interface{}) Action

FlatCmd is like Cmd, but the arguments can be of almost any type, and FlatCmd will automatically flatten them into a single array of strings. Like Cmd, a FlatCmd should not be passed into Do more than once.

FlatCmd does _not_ work for commands whose first parameter isn't a key, or (generally) for MSET. Use Cmd for those.

FlatCmd supports using a resp.LenReader (an io.Reader with a Len() method) as an argument. *bytes.Buffer is an example of a LenReader, and the resp package has a NewLenReader function which can wrap an existing io.Reader.

FlatCmd also supports encoding.Text/BinaryMarshalers. It does _not_ currently support resp.Marshaler.

The receiver to FlatCmd follows the same rules as for Cmd.

Example
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

client, err := NewPool(ctx, "tcp", "127.0.0.1:6379", 10) // or any other client
if err != nil {
	panic(err)
}

// performs "SET" "foo" "1"
err = client.Do(ctx, FlatCmd(nil, "SET", "foo", 1))
if err != nil {
	panic(err)
}

// performs "SADD" "fooSet" "1" "2" "3"
err = client.Do(ctx, FlatCmd(nil, "SADD", "fooSet", []string{"1", "2", "3"}))
if err != nil {
	panic(err)
}

// performs "HMSET" "foohash" "a" "1" "b" "2" "c" "3"
m := map[string]int{"a": 1, "b": 2, "c": 3}
err = client.Do(ctx, FlatCmd(nil, "HMSET", "fooHash", m))
if err != nil {
	panic(err)
}
Output:

func Pipeline

func Pipeline(actions ...Action) Action

Pipeline returns an Action which first writes multiple commands to a Conn in a single write, then reads their responses in a single read. This reduces network delay into a single round-trip.

NOTE that, while a Pipeline performs all commands on a single Conn, it shouldn't be used by itself for MULTI/EXEC transactions, because if there's an error it won't discard the incomplete transaction. Use WithConn or EvalScript for transactional functionality instead.

Example
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

client, err := NewPool(ctx, "tcp", "127.0.0.1:6379", 10) // or any other client
if err != nil {
	// handle error
}
var fooVal string
p := Pipeline(
	FlatCmd(nil, "SET", "foo", 1),
	Cmd(&fooVal, "GET", "foo"),
)
if err := client.Do(ctx, p); err != nil {
	// handle error
}
fmt.Printf("fooVal: %q\n", fooVal)
Output:

fooVal: "1"

func WithConn

func WithConn(key string, fn func(context.Context, Conn) error) Action

WithConn is used to perform a set of independent Actions on the same Conn.

key should be a key which one or more of the inner Actions is going to act on, or "" if no keys are being acted on or the keys aren't yet known. key is generally only necessary when using Cluster.

The callback function is what should actually carry out the inner actions, and the error it returns will be passed back up immediately.

NOTE that WithConn only ensures all inner Actions are performed on the same Conn, it doesn't make them transactional. Use MULTI/WATCH/EXEC within a WithConn for transactions, or use EvalScript

Example
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

client, err := NewPool(ctx, "tcp", "127.0.0.1:6379", 10) // or any other client
if err != nil {
	// handle error
}

// This example retrieves the current integer value of `key` and sets its
// new value to be the increment of that, all using the same connection
// instance. NOTE that it does not do this atomically like the INCR command
// would.
key := "someKey"
err = client.Do(ctx, WithConn(key, func(ctx context.Context, conn Conn) error {
	var curr int
	if err := conn.Do(ctx, Cmd(&curr, "GET", key)); err != nil {
		return err
	}

	curr++
	return conn.Do(ctx, FlatCmd(nil, "SET", key, curr))
}))
if err != nil {
	// handle error
}
Output:

Example (Transaction)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

client, err := NewPool(ctx, "tcp", "127.0.0.1:6379", 10) // or any other client
if err != nil {
	// handle error
}

// This example retrieves the current value of `key` and then sets a new
// value on it in an atomic transaction.
key := "someKey"
var prevVal string

err = client.Do(ctx, WithConn(key, func(ctx context.Context, c Conn) error {

	// Begin the transaction with a MULTI command
	if err := c.Do(ctx, Cmd(nil, "MULTI")); err != nil {
		return err
	}

	// If any of the calls after the MULTI call error it's important that
	// the transaction is discarded. This isn't strictly necessary if the
	// only possible error is a network error, as the connection would be
	// closed by the client anyway.
	var err error
	defer func() {
		if err != nil {
			// The return from DISCARD doesn't matter. If it's an error then
			// it's a network error and the Conn will be closed by the
			// client.
			c.Do(ctx, Cmd(nil, "DISCARD"))
		}
	}()

	// queue up the transaction's commands
	if err = c.Do(ctx, Cmd(nil, "GET", key)); err != nil {
		return err
	}
	if err = c.Do(ctx, Cmd(nil, "SET", key, "someOtherValue")); err != nil {
		return err
	}

	// execute the transaction, capturing the result in a Tuple. We only
	// care about the first element (the result from GET), so we discard the
	// second by setting nil.
	result := Tuple{&prevVal, nil}
	return c.Do(ctx, Cmd(&result, "EXEC"))
}))
if err != nil {
	// handle error
}

fmt.Printf("the value of key %q was %q\n", key, prevVal)
Output:

type Client

type Client interface {
	// Do performs an Action, returning any error.
	Do(context.Context, Action) error

	// Once Close() is called all future method calls on the Client will return
	// an error
	Close() error
}

Client describes an entity which can carry out Actions, e.g. a connection pool for a single redis instance or the cluster client.

Implementations of Client are expected to be thread-safe.

type ClientFunc

type ClientFunc func(ctx context.Context, network, addr string) (Client, error)

ClientFunc is a function which can be used to create a Client for a single redis instance on the given network/address.

type Cluster

type Cluster struct {
	// contains filtered or unexported fields
}

Cluster contains all information about a redis cluster needed to interact with it, including a set of pools to each of its instances. All methods on Cluster are thread-safe

func NewCluster

func NewCluster(ctx context.Context, clusterAddrs []string, opts ...ClusterOpt) (*Cluster, error)

NewCluster initializes and returns a Cluster instance. It will try every address given until it finds a usable one. From there it uses CLUSTER SLOTS to discover the cluster topology and make all the necessary connections.

NewCluster takes in a number of options which can overwrite its default behavior. The default options NewCluster uses are:

ClusterPoolFunc(DefaultClientFunc)
ClusterSyncEvery(5 * time.Second)
ClusterOnDownDelayActionsBy(100 * time.Millisecond)
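
A minimal usage sketch (the addresses are illustrative):

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

cluster, err := NewCluster(ctx, []string{"127.0.0.1:7000", "127.0.0.1:7001"})
if err != nil {
	// handle error
}
defer cluster.Close()

if err := cluster.Do(ctx, Cmd(nil, "SET", "foo", "bar")); err != nil {
	// handle error
}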

func (*Cluster) Client

func (c *Cluster) Client(addr string) (Client, error)

Client returns a Client for the given address, which could be either the primary or one of the secondaries (see Topo method for retrieving known addresses).

NOTE that if there is a failover while a Client returned by this method is being used the Client may or may not continue to work as expected, depending on the nature of the failover.

NOTE the Client should _not_ be closed.

func (*Cluster) Close

func (c *Cluster) Close() error

Close cleans up all goroutines spawned by Cluster and closes all of its Pools.

func (*Cluster) Do

func (c *Cluster) Do(ctx context.Context, a Action) error

Do performs an Action on a redis instance in the cluster, with the instance being determined by the keys returned from the Action's Keys() method.

This method handles MOVED and ASK errors automatically in most cases, see ClusterCanRetryAction's docs for more.

func (*Cluster) DoSecondary

func (c *Cluster) DoSecondary(ctx context.Context, a Action) error

DoSecondary is like Do but executes the Action on a random secondary for the affected keys.

For DoSecondary to work, all connections must be created in read-only mode, by using a custom ClusterPoolFunc that executes the READONLY command on each new connection.

See ClusterPoolFunc for an example using the global DefaultClusterConnFunc.

If the Action cannot be handled by a secondary, the Action will be sent to the primary instead.

func (*Cluster) NewScanner

func (c *Cluster) NewScanner(o ScanOpts) Scanner

NewScanner will return a Scanner which will scan over every node in the cluster. This will panic if the ScanOpts' Command isn't "SCAN". For scanning operations other than "SCAN" (e.g. "HSCAN", "ZSCAN") use the normal NewScanner function.

If the cluster topology changes during a scan the Scanner may or may not error out due to it, depending on the nature of the change.
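
For example, a brief sketch that scans every key in the cluster (assuming a cluster and ctx as in the sketch above):

s := cluster.NewScanner(ScanAllKeys)
var key string
for s.Next(ctx, &key) {
	log.Printf("key: %q", key)
}
if err := s.Close(); err != nil {
	// handle error
}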

func (*Cluster) Sync

func (c *Cluster) Sync(ctx context.Context) error

Sync will synchronize the Cluster with the actual cluster, making new pools to new instances and removing pools for instances no longer in the cluster. This is called periodically in the background automatically, but it can also be called manually at any time.

func (*Cluster) Topo

func (c *Cluster) Topo() ClusterTopo

Topo returns the Cluster's topology as it currently knows it. See ClusterTopo's docs for more on its default order.

type ClusterCanRetryAction

type ClusterCanRetryAction interface {
	Action
	ClusterCanRetry() bool
}

ClusterCanRetryAction is an Action which is aware of Cluster's retry behavior in the event of a slot migration. If an Action receives an error from a Cluster node which is either MOVED or ASK, and that Action implements ClusterCanRetryAction, and the ClusterCanRetry method returns true, then the Action will be retried on the correct node.

NOTE that the Actions which are returned by Cmd, FlatCmd, and EvalScript.Cmd all implicitly implement this interface.

type ClusterNode

type ClusterNode struct {
	// older versions of redis might not actually send back the id, so it may be
	// blank
	Addr, ID string
	// start is inclusive, end is exclusive
	Slots [][2]uint16
	// address and id this node is the secondary of, if it's a secondary
	SecondaryOfAddr, SecondaryOfID string
}

ClusterNode describes a single node in the cluster at a moment in time.

type ClusterOpt

type ClusterOpt func(*clusterOpts)

ClusterOpt is an optional behavior which can be applied to the NewCluster function to affect a Cluster's behavior.

func ClusterErrCh

func ClusterErrCh(errCh chan<- error) ClusterOpt

ClusterErrCh takes a channel which asynchronous errors encountered by the Cluster can be read off of. If the channel blocks the error will be dropped. The channel will be closed when the Cluster is closed.
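
A short sketch of consuming the channel (assuming ctx and clusterAddrs as in the ClusterPoolFunc example below; the buffer size is illustrative):

errCh := make(chan error, 16)
go func() {
	// the loop exits once the Cluster is closed, since the channel is closed then
	for err := range errCh {
		log.Printf("async cluster error: %v", err)
	}
}()

cluster, err := NewCluster(ctx, clusterAddrs, ClusterErrCh(errCh))
if err != nil {
	// handle error
}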

func ClusterOnDownDelayActionsBy

func ClusterOnDownDelayActionsBy(d time.Duration) ClusterOpt

ClusterOnDownDelayActionsBy tells the Cluster to delay all commands by the given duration while the cluster is seen to be in the CLUSTERDOWN state. This allows fewer actions to be affected by brief outages, e.g. during a failover.

If the given duration is 0 then Cluster will not delay actions during the CLUSTERDOWN state. Note that calls to Sync will not be delayed regardless of this option.

func ClusterPoolFunc

func ClusterPoolFunc(pf ClientFunc) ClusterOpt

ClusterPoolFunc tells the Cluster to use the given ClientFunc when creating pools of connections to cluster members.

This can be used to allow for secondary reads via the Cluster.DoSecondary method by specifying a ClientFunc that internally creates connections using DefaultClusterConnFunc or a custom ConnFunc that enables READONLY mode on each connection.

Example (DefaultClusterConnFunc)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

NewCluster(ctx, clusterAddrs, ClusterPoolFunc(func(ctx context.Context, network, addr string) (Client, error) {
	return NewPool(ctx, network, addr, 4, PoolConnFunc(DefaultClusterConnFunc))
}))
Output:

func ClusterSyncEvery

func ClusterSyncEvery(d time.Duration) ClusterOpt

ClusterSyncEvery tells the Cluster to synchronize itself with the cluster's topology at the given interval. On every synchronization Cluster will ask the cluster for its topology and make/destroy its connections as necessary.

func ClusterWithTrace

func ClusterWithTrace(ct trace.ClusterTrace) ClusterOpt

ClusterWithTrace tells the Cluster to trace itself with the given ClusterTrace. Note that ClusterTrace will block every point that you set to trace.

type ClusterTopo

type ClusterTopo []ClusterNode

ClusterTopo describes the cluster topology at a given moment. It will be sorted first by slot number of each node and then by secondary status, so primaries will come before secondaries.

func (ClusterTopo) Map

func (tt ClusterTopo) Map() map[string]ClusterNode

Map returns the topology as a mapping of node address to its ClusterNode

func (ClusterTopo) MarshalRESP

func (tt ClusterTopo) MarshalRESP(w io.Writer) error

MarshalRESP implements the resp.Marshaler interface, and will marshal the ClusterTopo in the same format as the return from CLUSTER SLOTS

func (ClusterTopo) Primaries

func (tt ClusterTopo) Primaries() ClusterTopo

Primaries returns a ClusterTopo instance containing only the primary nodes from the ClusterTopo being called on

func (*ClusterTopo) UnmarshalRESP

func (tt *ClusterTopo) UnmarshalRESP(br *bufio.Reader) error

UnmarshalRESP implements the resp.Unmarshaler interface, but only supports unmarshaling the return from CLUSTER SLOTS. The unmarshaled nodes will be sorted before they are returned

type Conn

type Conn interface {
	// The Do method merely calls the Action's Perform method with the Conn as
	// the argument.
	Client

	// EncodeDecode will encode the given Marshaler onto the connection, then
	// decode a response into the given Unmarshaler. If either parameter is nil
	// then that step is skipped.
	EncodeDecode(context.Context, resp.Marshaler, resp.Unmarshaler) error

	// Returns the underlying network connection, as-is. Read, Write, and Close
	// should not be called on the returned Conn.
	NetConn() net.Conn
}

Conn is a Client wrapping a single network connection which synchronously reads/writes data using the redis resp protocol.

A Conn can be used directly as a Client, but in general you probably want to use a *Pool instead
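
For example, a minimal sketch using a Conn directly:

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

conn, err := Dial(ctx, "tcp", "127.0.0.1:6379")
if err != nil {
	// handle error
}
defer conn.Close()

if err := conn.Do(ctx, Cmd(nil, "SET", "foo", "bar")); err != nil {
	// handle error
}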

func Dial

func Dial(ctx context.Context, network, addr string, opts ...DialOpt) (Conn, error)

Dial is a ConnFunc which creates a Conn using net.Dial and NewConn. It takes in a number of options which can overwrite its default behavior as well.

In place of a host:port address, Dial also accepts a URI, as per:

https://www.iana.org/assignments/uri-schemes/prov/redis

If the URI has an AUTH password or db specified Dial will attempt to perform the AUTH and/or SELECT as well.

If either DialAuthPass or DialSelectDB is used it overwrites the associated value passed in by the URI.

The default options Dial uses are:

DialTimeout(10 * time.Second)
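
For example, a brief sketch (the password and db index are illustrative):

// performs AUTH and SELECT after connecting; these options take precedence
// over any AUTH/db information in a URI passed as addr
conn, err := Dial(ctx, "tcp", "127.0.0.1:6379",
	DialAuthPass("mySuperSecretPassword"),
	DialSelectDB(1),
)
if err != nil {
	// handle error
}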

func NewConn

func NewConn(netConn net.Conn) Conn

NewConn takes an existing net.Conn and wraps it to support the Conn interface of this package. The Read and Write methods on the original net.Conn should not be used after calling this method.

func NewPipeliningConn

func NewPipeliningConn(conn Conn, opts ...PipeliningConnOpt) Conn

NewPipeliningConn wraps the given Conn such that it will batch concurrent EncodeDecode calls together. Once certain thresholds are met (such as time or buffer size, see PipeliningConnOpt) all Encodes will be performed in a single write, then all Decodes will be performed in a single read.

The Do and EncodeDecode methods of the returned Conn are thread-safe. The Conn given here should not be used after this is called.
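
A brief sketch wrapping an existing Conn (assuming a conn created by Dial; the thresholds are illustrative):

pconn := NewPipeliningConn(conn,
	PipeliningConnBatchSize(20),
	PipeliningConnBatchDuration(500*time.Microsecond),
)
// pconn batches concurrent calls together; the original conn should no longer
// be used directly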

func PubSubStub

func PubSubStub(remoteNetwork, remoteAddr string, fn func([]string) interface{}) (Conn, chan<- PubSubMessage)

PubSubStub returns a (fake) Conn, much like Stub does, which pretends it is a Conn to a real redis instance, but is instead using the given callback to service requests. It is primarily useful for writing tests.

PubSubStub differs from Stub in that Encode calls for (P)SUBSCRIBE, (P)UNSUBSCRIBE, MESSAGE, and PING will be intercepted and handled as per redis' expected pubsub functionality. A PubSubMessage may be written to the returned channel at any time, and if the PubSubStub has had (P)SUBSCRIBE called matching that PubSubMessage it will be written to the PubSubStub's internal buffer as expected.

This is intended to be used so that it can mock services which can perform both normal redis commands and pubsub (e.g. a real redis instance, redis sentinel). Once created this stub can be passed into PubSub and treated like a real connection.

Example
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

// Make a pubsub stub conn which will return nil for everything except
// pubsub commands (which will be handled automatically)
stub, stubCh := PubSubStub("tcp", "127.0.0.1:6379", func([]string) interface{} {
	return nil
})

// These writes shouldn't do anything, initially, since we haven't
// subscribed to anything
go func() {
	for {
		stubCh <- PubSubMessage{
			Channel: "foo",
			Message: []byte("bar"),
		}
		time.Sleep(1 * time.Second)
	}
}()

// Use PubSub to wrap the stub like we would for a normal redis connection
pstub := NewPubSubConn(stub)

// Subscribe msgCh to "foo"
msgCh := make(chan PubSubMessage)
if err := pstub.Subscribe(ctx, msgCh, "foo"); err != nil {
	log.Fatal(err)
}

// now that msgCh is subscribed, the publishes being made by the go-routine
// above will start being written to it
for m := range msgCh {
	log.Printf("read m: %#v", m)
}
Output:

func Stub

func Stub(remoteNetwork, remoteAddr string, fn func([]string) interface{}) Conn

Stub returns a (fake) Conn which pretends it is a Conn to a real redis instance, but is instead using the given callback to service requests. It is primarily useful for writing tests.

When EncodeDecode is called the value to be marshaled is converted into a []string and passed to the callback. The return from the callback is then marshaled into an internal buffer. The value to be decoded is then unmarshaled from that internal buffer. If the internal buffer is empty at this step then the call will block.

remoteNetwork and remoteAddr can be empty, but if given will be used as the return from the RemoteAddr method.

Example
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

m := map[string]string{}
stub := Stub("tcp", "127.0.0.1:6379", func(args []string) interface{} {
	switch args[0] {
	case "GET":
		return m[args[1]]
	case "SET":
		m[args[1]] = args[2]
		return nil
	default:
		return fmt.Errorf("this stub doesn't support command %q", args[0])
	}
})

stub.Do(ctx, Cmd(nil, "SET", "foo", "1"))

var foo int
stub.Do(ctx, Cmd(&foo, "GET", "foo"))
fmt.Printf("foo: %d\n", foo)
Output:

type ConnFunc

type ConnFunc func(ctx context.Context, network, addr string) (Conn, error)

ConnFunc is a function which returns an initialized, ready-to-be-used Conn. Functions like NewPool or NewCluster take in a ConnFunc in order to allow for things like calls to AUTH on each new connection, setting timeouts, custom Conn implementations, etc... See the package docs for more details.

type DialOpt

type DialOpt func(*dialOpts)

DialOpt is an optional behavior which can be applied to the Dial function to affect its behavior, or the behavior of the Conn it creates.

func DialAuthPass

func DialAuthPass(pass string) DialOpt

DialAuthPass will cause Dial to perform an AUTH command once the connection is created, using the given pass.

If this is set and a redis URI is passed to Dial which also has a password set, this takes precedence.

Using DialAuthPass is equivalent to calling DialAuthUser with user "default" and is kept for compatibility with older package versions.

func DialAuthUser

func DialAuthUser(user, pass string) DialOpt

DialAuthUser will cause Dial to perform an AUTH command once the connection is created, using the given user and pass.

If this is set and a redis URI is passed to Dial which also has a username and password set, this takes precedence.

func DialSelectDB

func DialSelectDB(db int) DialOpt

DialSelectDB will cause Dial to perform a SELECT command once the connection is created, using the given database index.

If this is set and a redis URI is passed to Dial which also has a database index set, this takes precedence.

func DialUseTLS

func DialUseTLS(config *tls.Config) DialOpt

DialUseTLS will cause Dial to perform a TLS handshake using the provided config. If config is nil the config is interpreted as equivalent to the zero configuration. See https://golang.org/pkg/crypto/tls/#Config

type EvalScript

type EvalScript struct {
	// contains filtered or unexported fields
}

EvalScript contains the body of a script to be used with redis' EVAL functionality. Call Cmd on an EvalScript to actually create an Action which can be run.

Example
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

// set as a global variable, this script is equivalent to the builtin GETSET
// redis command
var getSet = NewEvalScript(1, `
		local prev = redis.call("GET", KEYS[1])
		redis.call("SET", KEYS[1], ARGV[1])
		return prev
`)

client, err := NewPool(ctx, "tcp", "127.0.0.1:6379", 10) // or any other client
if err != nil {
	// handle error
}

key := "someKey"
var prevVal string
if err := client.Do(ctx, getSet.Cmd(&prevVal, key, "myVal")); err != nil {
	// handle error
}

fmt.Printf("value of key %q used to be %q\n", key, prevVal)
Output:

func NewEvalScript

func NewEvalScript(numKeys int, script string) EvalScript

NewEvalScript initializes an EvalScript instance. numKeys corresponds to the number of arguments which will be keys when Cmd is called.

func (EvalScript) Cmd

func (es EvalScript) Cmd(rcv interface{}, keysAndArgs ...string) Action

Cmd is like the top-level Cmd but it uses the EvalScript to perform an EVALSHA command (and will automatically fallback to EVAL as necessary). keysAndArgs must be at least as long as the numKeys argument of NewEvalScript.

func (EvalScript) FlatCmd

func (es EvalScript) FlatCmd(rcv interface{}, keys []string, args ...interface{}) Action

FlatCmd is like the top level FlatCmd except it uses the EvalScript to perform an EVALSHA command (and will automatically fallback to EVAL as necessary). keys must be as long as the numKeys argument of NewEvalScript.

type MaybeNil

type MaybeNil struct {
	Nil        bool
	EmptyArray bool
	Rcv        interface{}
}

MaybeNil is a type which wraps a receiver. It will first detect if what's being received is a nil RESP type (either bulk string or array), and if so set Nil to true. If not, the return value will be unmarshaled into Rcv normally. If the response being received is an empty array then the EmptyArray field will be set and the response will still be unmarshaled into Rcv normally.

Example
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

client, err := NewPool(ctx, "tcp", "127.0.0.1:6379", 10) // or any other client
if err != nil {
	// handle error
}

var rcv int64
mn := MaybeNil{Rcv: &rcv}
if err := client.Do(ctx, Cmd(&mn, "GET", "foo")); err != nil {
	// handle error
} else if mn.Nil {
	fmt.Println("rcv is nil")
} else {
	fmt.Printf("rcv is %d\n", rcv)
}
Output:

func (*MaybeNil) UnmarshalRESP

func (mn *MaybeNil) UnmarshalRESP(br *bufio.Reader) error

UnmarshalRESP implements the method for the resp.Unmarshaler interface.

type PersistentPubSubOpt

type PersistentPubSubOpt func(*persistentPubSubOpts)

PersistentPubSubOpt is an optional parameter which can be passed into NewPersistentPubSubConn in order to affect its behavior.

func PersistentPubSubAbortAfter

func PersistentPubSubAbortAfter(attempts int) PersistentPubSubOpt

PersistentPubSubAbortAfter changes PersistentPubSub's reconnect behavior. Usually PersistentPubSub will try to reconnect forever upon a disconnect, blocking any methods which have been called until reconnect is successful.

When PersistentPubSubAbortAfter is used, it will give up after that many attempts and return the error to the method which has been blocked the longest. Another method will need to be called in order for PersistentPubSub to resume trying to reconnect.

func PersistentPubSubConnFunc

func PersistentPubSubConnFunc(connFn ConnFunc) PersistentPubSubOpt

PersistentPubSubConnFunc causes PersistentPubSub to use the given ConnFunc when connecting to its destination.

func PersistentPubSubErrCh

func PersistentPubSubErrCh(errCh chan<- error) PersistentPubSubOpt

PersistentPubSubErrCh takes a channel which asynchronous errors encountered by the PersistentPubSub can be read off of. If the channel blocks the error will be dropped. The channel will be closed when PersistentPubSub is closed.

type PipeliningConnOpt

type PipeliningConnOpt func(*pipeliningConnOpts)

PipeliningConnOpt is an option which can be given to NewPipeliningConn to affect how the resulting Conn functions.

func PipeliningConnBatchDuration

func PipeliningConnBatchDuration(d time.Duration) PipeliningConnOpt

PipeliningConnBatchDuration describes the max amount of time EncodeDecode calls will block before the batch they are a part of is executed. A longer batch duration will result in fewer round-trips with the server, but more of a delay in EncodeDecode calls.

A value of 0 indicates that the batch duration should not be considered when determining when to execute the batch.

Defaults to 150 microseconds.

func PipeliningConnBatchSize

func PipeliningConnBatchSize(size int) PipeliningConnOpt

PipeliningConnBatchSize describes the max number of EncodeDecode calls which will be buffered in the PipeliningConn before it executes all at once. A larger batch size will result in fewer round-trips with the server, but more of a delay between the round-trips.

A value of 0 indicates that batch size should not be considered when determining when to execute the batch.

Defaults to 10.

type Pool

type Pool struct {
	// contains filtered or unexported fields
}

Pool is a dynamic connection pool which implements the Client interface. It takes in a number of options which can affect its specific behavior; see the NewPool method.

Pool is dynamic in that it can create more connections on-the-fly to handle increased load. The maximum number of extra connections (if any) can be configured, along with how long they are kept after load has returned to normal.

func NewPool

func NewPool(ctx context.Context, network, addr string, size int, opts ...PoolOpt) (*Pool, error)

NewPool creates a *Pool which will keep open at least the given number of connections to the redis instance at the given address.

NewPool takes in a number of options which can overwrite its default behavior. The default options NewPool uses are:

PoolConnFunc(DefaultConnFunc)
PoolOnEmptyCreateAfter(1 * time.Second)
PoolRefillInterval(1 * time.Second)
PoolOnFullBuffer((size / 3)+1, 1 * time.Second)
PoolPingInterval(5 * time.Second / (size+1))
PoolPipelineConcurrency(size)

The recommended size of the pool depends on many factors, such as the number of concurrent goroutines that will use the pool.

func (*Pool) Close

func (p *Pool) Close() error

Close implements the Close method of the Client

func (*Pool) Do

func (p *Pool) Do(ctx context.Context, a Action) error

Do implements the Do method of the Client interface by retrieving a Conn out of the pool, calling Perform on the given Action with it, and returning the Conn to the pool.

func (*Pool) NumAvailConns

func (p *Pool) NumAvailConns() int

NumAvailConns returns the number of connections currently available in the pool, as well as in the overflow buffer if that option is enabled.

type PoolOpt

type PoolOpt func(*poolOpts)

PoolOpt is an optional behavior which can be applied to the NewPool function to affect a Pool's behavior.

func PoolConnFunc

func PoolConnFunc(cf ConnFunc) PoolOpt

PoolConnFunc tells the Pool to use the given ConnFunc when creating new Conns to its redis instance. The ConnFunc can be used to set timeouts, perform AUTH, or even use custom Conn implementations.

func PoolErrCh

func PoolErrCh(errCh chan<- error) PoolOpt

PoolErrCh takes a channel which asynchronous errors encountered by the Pool can be read off of. If the channel blocks the error will be dropped. The channel will be closed when the Pool is closed.

func PoolOnEmptyCreateAfter

func PoolOnEmptyCreateAfter(wait time.Duration) PoolOpt

PoolOnEmptyCreateAfter affects the Pool's behavior when there are no available connections in the Pool. The effect is to cause actions to block until a connection becomes available or until the given duration has elapsed. If the duration elapses, a new connection is created and used.

If wait is 0 then a new connection is created immediately upon an empty Pool.

func PoolOnEmptyErrAfter

func PoolOnEmptyErrAfter(wait time.Duration) PoolOpt

PoolOnEmptyErrAfter affects the Pool's behavior when there are no available connections in the Pool. The effect is to cause actions to block until a connection becomes available or until the given duration has elapsed. If the duration elapses, ErrPoolEmpty is returned.

If wait is 0 then ErrPoolEmpty is returned immediately upon an empty Pool.

func PoolOnEmptyWait

func PoolOnEmptyWait() PoolOpt

PoolOnEmptyWait affects the Pool's behavior when there are no available connections in the Pool. The effect is to cause actions to block as long as it takes until a connection becomes available.

func PoolOnFullBuffer

func PoolOnFullBuffer(size int, drainInterval time.Duration) PoolOpt

PoolOnFullBuffer affects the Pool's behavior when it is full. The effect is to give the pool an additional buffer for connections, called the overflow. If a connection is being put back into a full pool it will be put into the overflow. If the overflow is also full then the connection will be closed and discarded.

drainInterval specifies the interval at which a drain event happens. On each drain event a connection will be removed from the overflow buffer (if any are present in it), closed, and discarded.

If drainInterval is zero then drain events will never occur.

NOTE that if used with PoolOnEmptyWait or PoolOnEmptyErrAfter this won't have any effect, because there won't be any occasion where more connections than the pool size will be created.

func PoolOnFullClose

func PoolOnFullClose() PoolOpt

PoolOnFullClose affects the Pool's behavior when it is full. The effect is to cause any connection which is being put back into a full pool to be closed and discarded.

func PoolPingInterval

func PoolPingInterval(d time.Duration) PoolOpt

PoolPingInterval specifies the interval at which a ping event happens. On each ping event the Pool calls the PING redis command over one of its available connections.

Since connections are used in LIFO order, the ping interval * pool size is the duration of time it takes to ping every connection once when the pool is idle.

A shorter interval means connections are pinged more frequently, but also means more traffic with the server.

func PoolRefillInterval

func PoolRefillInterval(d time.Duration) PoolOpt

PoolRefillInterval specifies the interval at which a refill event happens. On each refill event the Pool checks to see if it is full, and if it's not a single connection is created and added to it.

func PoolWithTrace

func PoolWithTrace(pt trace.PoolTrace) PoolOpt

PoolWithTrace tells the Pool to trace itself with the given PoolTrace. Note that PoolTrace will block every point that you set to trace.

type PubSubConn

type PubSubConn interface {
	// Subscribe subscribes the PubSubConn to the given set of channels. msgCh
	// will receive a PubSubMessage for every publish written to any of the
	// channels. This may be called multiple times for the same channels and
	// different msgCh's, each msgCh will receive a copy of the PubSubMessage
	// for each publish.
	Subscribe(ctx context.Context, msgCh chan<- PubSubMessage, channels ...string) error

	// Unsubscribe unsubscribes the msgCh from the given set of channels, if it
	// was subscribed at all.
	//
	// NOTE until Unsubscribe has returned it should be assumed that msgCh can
	// still have messages written to it.
	Unsubscribe(ctx context.Context, msgCh chan<- PubSubMessage, channels ...string) error

	// PSubscribe is like Subscribe, but it subscribes msgCh to a set of
	// patterns and not individual channels.
	PSubscribe(ctx context.Context, msgCh chan<- PubSubMessage, patterns ...string) error

	// PUnsubscribe is like Unsubscribe, but it unsubscribes msgCh from a set of
	// patterns and not individual channels.
	PUnsubscribe(ctx context.Context, msgCh chan<- PubSubMessage, patterns ...string) error

	// Ping performs a simple Ping command on the PubSubConn, returning an error
	// if it failed for some reason
	Ping(context.Context) error

	// Close closes the PubSubConn so it can't be used anymore. All subscribed
	// channels will stop receiving PubSubMessages from this Conn (but will not
	// themselves be closed).
	//
	// NOTE until Close returns it should be assumed that all subscribed msgChs
	// can still be written to.
	Close() error
}

PubSubConn wraps an existing Conn to support redis' pubsub system. User-created channels can be subscribed to redis channels to receive PubSubMessages which have been published.

If any methods return an error it means the PubSubConn has been Close'd and subscribed msgCh's will no longer receive PubSubMessages from it. All methods are threadsafe, but should be called in a different go-routine than that which is reading from the PubSubMessage channels.

NOTE the PubSubMessage channels should never block. If any channels block when being written to they will block all other channels from receiving a publish and block methods from returning.

func NewPersistentPubSubConn

func NewPersistentPubSubConn(
	ctx context.Context,
	network, addr string, options ...PersistentPubSubOpt,
) (
	PubSubConn, error,
)

NewPersistentPubSubConn is like NewPubSubConn, but instead of taking in an existing Conn to wrap it will create one on the fly. If the connection is ever terminated then a new one will be created and will be reset to the previous connection's state.

This is effectively a way to have a permanent PubSubConn established which supports subscribing/unsubscribing but without the hassle of implementing reconnect/re-subscribe logic.

With default options, neither this function nor any of the methods on the returned PubSubConn will ever return an error, they will instead block until a connection can be successfully reinstated.

NewPersistentPubSubConn takes in a number of options which can overwrite its default behavior. The default options NewPersistentPubSubConn uses are:

PersistentPubSubConnFunc(DefaultConnFunc)

Example (Cluster)
// Example of how to use PersistentPubSub with a Cluster instance.

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

// Initialize the cluster in any way you see fit
cluster, err := NewCluster(ctx, []string{"127.0.0.1:6379"})
if err != nil {
	panic(err)
}

// Have PersistentPubSub pick a random cluster node every time it wants to
// make a new connection. If the node fails PersistentPubSub will
// automatically pick a new node to connect to.
ps, err := NewPersistentPubSubConn(ctx, "", "",
	PersistentPubSubConnFunc(func(ctx context.Context, _ string, _ string) (Conn, error) {
		topo := cluster.Topo()
		node := topo[rand.Intn(len(topo))]
		return Dial(ctx, "tcp", node.Addr)
	},
	))
if err != nil {
	panic(err)
}

// Use the PubSubConn as normal.
msgCh := make(chan PubSubMessage)
ps.Subscribe(ctx, msgCh, "myChannel")
for msg := range msgCh {
	log.Printf("publish to channel %q received: %q", msg.Channel, msg.Message)
}
Output:

func NewPubSubConn

func NewPubSubConn(rc Conn) PubSubConn

NewPubSubConn wraps the given Conn so that it becomes a PubSubConn. The passed in Conn should not be used after this call.

Example
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

// Create a normal redis connection
conn, err := Dial(ctx, "tcp", "127.0.0.1:6379")
if err != nil {
	panic(err)
}

// Pass that connection into PubSub, conn should never get used after this
ps := NewPubSubConn(conn)
defer ps.Close() // this will close Conn as well

// Subscribe to a channel called "myChannel". All publishes to "myChannel"
// will get sent to msgCh after this
msgCh := make(chan PubSubMessage)
if err := ps.Subscribe(ctx, msgCh, "myChannel"); err != nil {
	panic(err)
}

// It's optional, but generally advisable, to periodically Ping the
// connection to ensure it's still alive. This should be done in a separate
// go-routine from that which is reading from msgCh.
errCh := make(chan error, 1)
go func() {
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		if err := ps.Ping(ctx); err != nil {
			errCh <- err
			return
		}
	}
}()

for {
	select {
	case msg := <-msgCh:
		log.Printf("publish to channel %q received: %q", msg.Channel, msg.Message)
	case err := <-errCh:
		panic(err)
	}
}
Output:

type PubSubMessage

type PubSubMessage struct {
	Type    string // "message" or "pmessage"
	Pattern string // will be set if Type is "pmessage"
	Channel string
	Message []byte
}

PubSubMessage describes a message being published to a subscribed channel

func (PubSubMessage) MarshalRESP

func (m PubSubMessage) MarshalRESP(w io.Writer) error

MarshalRESP implements the Marshaler interface.

func (*PubSubMessage) UnmarshalRESP

func (m *PubSubMessage) UnmarshalRESP(br *bufio.Reader) error

UnmarshalRESP implements the Unmarshaler interface

type ScanOpts

type ScanOpts struct {
	// The scan command to do, e.g. "SCAN", "HSCAN", etc...
	Command string

	// The key to perform the scan on. Only necessary when Command isn't "SCAN"
	Key string

	// An optional pattern to filter returned keys by
	Pattern string

	// An optional count hint to send to redis to indicate number of keys to
	// return per call. This does not affect the actual results of the scan
	// command, but it may be useful for optimizing certain datasets
	Count int

	// An optional type name to filter for values of the given type.
	// The type names are the same as returned by the "TYPE" command.
	// This is only available in Redis 6 or newer and only works with "SCAN".
	// If used with an older version of Redis or with a Command other than
	// "SCAN", scanning will fail.
	Type string
}

ScanOpts are various parameters which can be passed into ScanWithOpts. Some fields are required depending on which type of scan is being done.

type Scanner

type Scanner interface {
	Next(context.Context, *string) bool
	Close() error
}

Scanner is used to iterate through the results of a SCAN call (or HSCAN, SSCAN, etc...)

Once created, repeatedly call Next() on it to fill the passed in string pointer with the next result. Next will return false if there are no more results to retrieve or if an error occurred, at which point Close should be called to retrieve any error.

func NewScanner

func NewScanner(c Client, o ScanOpts) Scanner

NewScanner creates a new Scanner instance which will iterate over the redis instance underlying the given Client, using the ScanOpts.

NOTE if Client is a *Cluster this will not work correctly, use the NewScanner method on Cluster instead.

Example (Hscan)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

client, err := DefaultClientFunc(ctx, "tcp", "126.0.0.1:6379")
if err != nil {
	log.Fatal(err)
}

s := NewScanner(client, ScanOpts{Command: "HSCAN", Key: "somekey"})
var key string
for s.Next(ctx, &key) {
	log.Printf("key: %q", key)
}
if err := s.Close(); err != nil {
	log.Fatal(err)
}
Output:

Example (Scan)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

client, err := DefaultClientFunc(ctx, "tcp", "126.0.0.1:6379")
if err != nil {
	log.Fatal(err)
}

s := NewScanner(client, ScanAllKeys)
var key string
for s.Next(ctx, &key) {
	log.Printf("key: %q", key)
}
if err := s.Close(); err != nil {
	log.Fatal(err)
}
Output:

type Sentinel

type Sentinel struct {
	// contains filtered or unexported fields
}

Sentinel is a Client which, in the background, connects to an available sentinel node and handles all of the following:

* Creates a pool to the current primary instance, as advertised by the sentinel

* Listens for events indicating the primary has changed, and automatically creates a new Client to the new primary

* Keeps track of other sentinels in the cluster, and uses them if the currently connected one becomes unreachable

func NewSentinel

func NewSentinel(ctx context.Context, primaryName string, sentinelAddrs []string, opts ...SentinelOpt) (*Sentinel, error)

NewSentinel creates and returns a *Sentinel instance. NewSentinel takes in a number of options which can overwrite its default behavior. The default options NewSentinel uses are:

SentinelConnFunc(DefaultConnFunc)
SentinelPoolFunc(DefaultClientFunc)
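
A minimal sketch (the primary name and sentinel address are illustrative):

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

sentinel, err := NewSentinel(ctx, "mymaster", []string{"127.0.0.1:26379"})
if err != nil {
	// handle error
}
defer sentinel.Close()

// Do is routed to the current primary
if err := sentinel.Do(ctx, Cmd(nil, "SET", "foo", "bar")); err != nil {
	// handle error
}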

func (*Sentinel) Addrs

func (sc *Sentinel) Addrs() (primAddr string, secAddrs []string)

Addrs returns the currently known network address of the current primary instance and the addresses of the secondaries.

func (*Sentinel) Client

func (sc *Sentinel) Client(ctx context.Context, addr string) (Client, error)

Client returns a Client for the given address, which could be either the primary or one of the secondaries (see Addrs method for retrieving known addresses).

NOTE that if there is a failover while a Client returned by this method is being used the Client may or may not continue to work as expected, depending on the nature of the failover.

NOTE the Client should _not_ be closed.

func (*Sentinel) Close

func (sc *Sentinel) Close() error

Close implements the method for the Client interface.

func (*Sentinel) Do

func (sc *Sentinel) Do(ctx context.Context, a Action) error

Do implements the method for the Client interface. It will pass the given action on to the current primary.

NOTE it's possible that in between Do being called and the Action being actually carried out that there could be a failover event. In that case, the Action will likely fail and return an error.

func (*Sentinel) DoSecondary

func (sc *Sentinel) DoSecondary(ctx context.Context, a Action) error

DoSecondary is like Do but executes the Action on a random replica if possible.

For DoSecondary to work, replicas must be configured with replica-read-only enabled, otherwise calls to DoSecondary may be rejected by the replica.

NOTE it's possible that in between DoSecondary being called and the Action being actually carried out that there could be a failover event. In that case, the Action will likely fail and return an error.

func (*Sentinel) SentinelAddrs

func (sc *Sentinel) SentinelAddrs() (sentAddrs []string)

SentinelAddrs returns the addresses of all known sentinels.

type SentinelOpt

type SentinelOpt func(*sentinelOpts)

SentinelOpt is an optional behavior which can be applied to the NewSentinel function to affect a Sentinel's behavior.

func SentinelConnFunc

func SentinelConnFunc(cf ConnFunc) SentinelOpt

SentinelConnFunc tells the Sentinel to use the given ConnFunc when connecting to sentinel instances.

NOTE that if SentinelConnFunc is not used then Sentinel will attempt to retrieve AUTH and SELECT information from the address provided to NewSentinel, and use that for dialing all Sentinels. If SentinelConnFunc is provided, however, those options must be given through DialAuthPass/DialSelectDB within the ConnFunc.

func SentinelErrCh

func SentinelErrCh(errCh chan<- error) SentinelOpt

SentinelErrCh takes a channel which asynchronous errors encountered by the Sentinel can be read off of. If the channel blocks the error will be dropped. The channel will be closed when the Sentinel is closed.

func SentinelPoolFunc

func SentinelPoolFunc(pf ClientFunc) SentinelOpt

SentinelPoolFunc tells the Sentinel to use the given ClientFunc when creating a pool of connections to the sentinel's primary.

type StreamEntries

type StreamEntries struct {
	Stream  string
	Entries []StreamEntry
}

StreamEntries is a stream name and set of entries as returned by XREAD and XREADGROUP. The results from a call to XREAD(GROUP) can be unmarshaled into a []StreamEntries.
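
For example, a brief sketch (the stream name is illustrative; client and ctx as in the other examples):

var streams []StreamEntries
err := client.Do(ctx, Cmd(&streams, "XREAD", "COUNT", "10", "STREAMS", "myStream", "0"))
if err != nil {
	// handle error
}
for _, se := range streams {
	for _, entry := range se.Entries {
		log.Printf("stream %q entry %s: %v", se.Stream, entry.ID, entry.Fields)
	}
}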

func (*StreamEntries) UnmarshalRESP

func (s *StreamEntries) UnmarshalRESP(br *bufio.Reader) error

UnmarshalRESP implements the resp.Unmarshaler interface.

type StreamEntry

type StreamEntry struct {
	// ID is the ID of the entry in a stream.
	ID StreamEntryID

	// Fields contains the fields and values for the stream entry.
	Fields map[string]string
}

StreamEntry is an entry in a stream as returned by XRANGE, XREAD and XREADGROUP.

func (*StreamEntry) UnmarshalRESP

func (s *StreamEntry) UnmarshalRESP(br *bufio.Reader) error

UnmarshalRESP implements the resp.Unmarshaler interface.

type StreamEntryID

type StreamEntryID struct {
	// Time is the first part of the ID, which is based on the time of the server that Redis runs on.
	Time uint64

	// Seq is the sequence number of the ID for entries with the same Time value.
	Seq uint64
}

StreamEntryID represents an ID used in a Redis stream with the format <time>-<seq>.

func (StreamEntryID) Before

func (s StreamEntryID) Before(o StreamEntryID) bool

Before returns true if s comes before o in a stream (is less than o).

func (*StreamEntryID) MarshalRESP

func (s *StreamEntryID) MarshalRESP(w io.Writer) error

MarshalRESP implements the resp.Marshaler interface.

func (StreamEntryID) Next

func (s StreamEntryID) Next() StreamEntryID

Next returns the next stream entry ID or s if there is no higher id (s is 18446744073709551615-18446744073709551615).

func (StreamEntryID) Prev

func (s StreamEntryID) Prev() StreamEntryID

Prev returns the previous stream entry ID or s if there is no prior id (s is 0-0).

func (StreamEntryID) String

func (s StreamEntryID) String() string

String returns the ID in the format <time>-<seq> (the same format used by Redis).

String implements the fmt.Stringer interface.

func (*StreamEntryID) UnmarshalRESP

func (s *StreamEntryID) UnmarshalRESP(br *bufio.Reader) error

UnmarshalRESP implements the resp.Unmarshaler interface.

type StreamReader

type StreamReader interface {
	// Err returns any error that happened while calling Next or nil if no error happened.
	//
	// Once Err returns a non-nil error, all successive calls will return the same error.
	Err() error

	// Next returns new entries for any of the configured streams.
	//
	// The returned slice is only valid until the next call to Next.
	//
	// If there was an error, ok will be false. Otherwise, even if no entries were read, ok will be true.
	//
	// If there was an error, all future calls to Next will return ok == false.
	Next(context.Context) (stream string, entries []StreamEntry, ok bool)
}

StreamReader allows reading from one or more streams, always returning newer entries.

func NewStreamReader

func NewStreamReader(c Client, opts StreamReaderOpts) StreamReader

NewStreamReader returns a new StreamReader for the given client.

Any changes on opts after calling NewStreamReader will have no effect.

type StreamReaderOpts

type StreamReaderOpts struct {
	// Streams must contain one or more stream names that will be read.
	//
	// The value for each stream can either be nil or an existing ID.
	// If a value is non-nil, only newer stream entries will be returned.
	Streams map[string]*StreamEntryID

	// Group is an optional consumer group name.
	//
	// If Group is not empty reads will use XREADGROUP with the Group as consumer group instead of XREAD.
	Group string

	// Consumer is an optional consumer name for use with Group.
	Consumer string

	// NoAck optionally enables passing the NOACK flag to XREADGROUP.
	NoAck bool

	// Block specifies the duration in milliseconds that reads will wait for new data before returning.
	//
	// If Block is negative, reads will block indefinitely until new entries can be read or there is an error.
	//
	// The default, if Block is 0, is 5 seconds.
	//
	// If Block is non-negative, the Client used for the StreamReader must not have a timeout for commands or
	// the timeout duration must be substantially higher than the Block duration (at least 50% for small Block values,
	// but may be less for higher values).
	Block time.Duration

	// NoBlock disables blocking when no new data is available.
	//
	// If this is true, setting Block will not have any effect.
	NoBlock bool

	// Count can be used to limit the number of entries retrieved by each call to Next.
	//
	// If Count is 0, all available entries will be retrieved.
	Count int
}

StreamReaderOpts contains various options given for NewStreamReader that influence the behaviour.

The only required field is Streams.
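
A minimal sketch of reading with these options (the stream name is illustrative; client and ctx as in the other examples):

r := NewStreamReader(client, StreamReaderOpts{
	// see the Streams field above for how nil vs. an explicit ID behaves
	Streams: map[string]*StreamEntryID{"myStream": nil},
})

for {
	stream, entries, ok := r.Next(ctx)
	if !ok {
		break
	}
	log.Printf("read %d entries from %q", len(entries), stream)
}
if err := r.Err(); err != nil {
	// handle error
}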

type Tuple

type Tuple []interface{}

Tuple is a helper type which can be used when unmarshaling a RESP array. Each element of Tuple should be a pointer receiver which the corresponding element of the RESP array will be unmarshaled into, or nil to skip that element. The length of Tuple must match the length of the RESP array being unmarshaled.

Tuple is useful when unmarshaling the results from commands like EXEC and EVAL.

func (Tuple) UnmarshalRESP

func (t Tuple) UnmarshalRESP(br *bufio.Reader) error

UnmarshalRESP implements the method for the resp.Unmarshaler interface.

Directories

Path        Synopsis
internal
    bytesutil   Package bytesutil provides utility functions for working with bytes and byte streams that are useful when working with the RESP protocol.
resp        Package resp is an umbrella package which covers both the old RESP protocol (resp2) and the new one (resp3), allowing clients to choose which one they care to use.
    resp2       Package resp2 implements the original redis RESP protocol, a plaintext protocol which is also binary safe.
    resp3       Package resp3 implements the upgraded redis RESP3 protocol, a plaintext protocol which is also binary safe and backwards compatible with the original RESP2 protocol.
trace       Package trace contains all the types provided for tracing within the radix package.
