gocql


README

gocql


Package gocql implements a fast and robust Cassandra client for the Go programming language.

Project Website: https://gocql.github.io/
API documentation: https://godoc.org/github.com/gocql/gocql
Discussions: https://groups.google.com/forum/#!forum/gocql

Supported Versions

The following matrix shows the versions of Go and Cassandra that are tested with the integration test suite as part of the CI build:

Go/Cassandra | 4.0.x | 4.1.x
1.19         | yes   | yes
1.20         | yes   | yes

Gocql has been tested in production against many versions of Cassandra. Due to limits in our CI setup we only test against the latest 2 GA releases.

Sunsetting Model

In general, the gocql team will focus on supporting the current and previous versions of Go. gocql may still work with older versions of Go, but official support for those versions has been sunset.

Installation

go get github.com/gocql/gocql

Features

  • Modern Cassandra client using the native transport
  • Automatic type conversions between Cassandra and Go
    • Support for all common types including sets, lists and maps
    • Custom types can implement a Marshaler and Unmarshaler interface
    • Strict type conversions without any loss of precision
    • Built-in support for UUIDs (versions 1 and 4)
  • Support for logged, unlogged and counter batches
  • Cluster management
    • Automatic reconnect on connection failures with exponential backoff
    • Round robin distribution of queries to different hosts
    • Round robin distribution of queries to different connections on a host
    • Each connection can execute up to n concurrent queries (where n is the limit set by the protocol version the client chooses to use)
    • Optional automatic discovery of nodes
    • Policy based connection pool with token aware and round-robin policy implementations
  • Support for password authentication
  • Iteration over paged results with configurable page size
  • Support for TLS/SSL
  • Optional frame compression (using snappy)
  • Automatic query preparation
  • Support for query tracing
  • Support for Cassandra 2.1+ binary protocol version 3
    • Support for up to 32768 streams
    • Support for tuple types
    • Support for client side timestamps by default
    • Support for UDTs via a custom marshaller or struct tags
  • Support for Cassandra 3.0+ binary protocol version 4
  • An API to access the schema metadata of a given keyspace

Performance

While the driver strives to be highly performant, there are cases where it is difficult to test and verify. The driver is built with maintainability and code readability in mind first, and performance and features second, so performance may occasionally degrade; if this occurs, please report an issue and it will be looked at and remedied. The only time the driver copies data from its read buffer is when it unmarshals data into supplied types.

Some tips for getting more performance from the driver:

  • Use the TokenAware policy
  • Use many goroutines when doing inserts; the driver is asynchronous but provides a synchronous API, so it can execute many queries concurrently
  • Tune query page size
  • Reading data from the network to unmarshal will incur a large number of allocations, which can adversely affect the garbage collector; tune GOGC
  • Close iterators after use to recycle byte buffers (see the sketch below)
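
For the last tip, a minimal sketch of closing an iterator explicitly (the tweet table is the one used in the package example below):

iter := session.Query(`SELECT id, text FROM tweet WHERE timeline = ?`, "me").Iter()
var id gocql.UUID
var text string
for iter.Scan(&id, &text) {
	// process the row
}
// Close releases the iterator's resources so its byte buffers can be
// recycled, and returns any error seen during iteration.
if err := iter.Close(); err != nil {
	log.Fatal(err)
}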

Important Default Keyspace Changes

gocql no longer supports executing "use <keyspace>" statements, in order to simplify the library. The user can still define the default keyspace for connections, but the keyspace can only be set before a session is created. Queries can still access other keyspaces by qualifying the table name in the query:

SELECT * FROM example2.table;

Example of correct usage:

	cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
	cluster.Keyspace = "example"
	...
	session, err := cluster.CreateSession()

Example of incorrect usage:

	cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
	cluster.Keyspace = "example"
	...
	session, err := cluster.CreateSession()

	if err = session.Query("use example2").Exec(); err != nil {
		log.Fatal(err)
	}

This will result in an error being returned from the session.Query line, as the user is trying to execute a "use" statement.

Example

See package documentation.

Data Binding

There are various ways to bind application level data structures to CQL statements:

  • You can write the data binding by hand, as outlined in the Tweet example. This provides you with the greatest flexibility, but it does mean that you need to keep your application code in sync with your Cassandra schema.
  • You can dynamically marshal an entire query result into an []map[string]interface{} using the SliceMap() API. This returns a slice of row maps keyed by CQL column names. This method requires no special interaction with the gocql API, but it does require your application to be able to deal with a key-value view of your data.
  • As a refinement on the SliceMap() API you can also call MapScan(), which returns map[string]interface{} instances in a row-by-row fashion. A short sketch of both appears after this list.
  • The Bind() API provides a client app with a low level mechanism to introspect query meta data and extract appropriate field values from application level data structures.
  • The gocqlx package is an idiomatic extension to gocql that provides usability features. With gocqlx you can bind the query parameters from maps and structs, use named query parameters (:identifier) and scan the query results into structs and slices. It comes with a fluent and flexible CQL query builder that supports the full CQL spec, including BATCH statements and custom functions.
  • Building on top of the gocql driver, cqlr adds the ability to auto-bind a CQL iterator to a struct or to bind a struct to an INSERT statement.
  • Another external project that layers on top of gocql is cqlc which generates gocql compliant code from your Cassandra schema so that you can write type safe CQL statements in Go with a natural query syntax.
  • gocassa is an external project that layers on top of gocql to provide convenient query building and data binding.
  • gocqltable provides an ORM-style convenience layer to make CRUD operations with gocql easier.
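
A minimal sketch of the SliceMap() and MapScan() approaches mentioned above, using the tweet table from the package example:

// SliceMap reads the whole result set into a slice of maps keyed by column name.
rows, err := session.Query(`SELECT id, text FROM tweet WHERE timeline = ?`, "me").Iter().SliceMap()
if err != nil {
	log.Fatal(err)
}
for _, row := range rows {
	fmt.Println(row["id"], row["text"])
}

// MapScan reads one row at a time into a map that you supply.
iter := session.Query(`SELECT id, text FROM tweet WHERE timeline = ?`, "me").Iter()
for {
	row := map[string]interface{}{}
	if !iter.MapScan(row) {
		break
	}
	fmt.Println(row["id"], row["text"])
}
if err := iter.Close(); err != nil {
	log.Fatal(err)
}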

Ecosystem

The following community maintained tools are known to integrate with gocql:

  • gocqlx is a gocql extension that automates data binding, adds named queries support, provides flexible query builders and plays well with gocql.
  • journey is a migration tool with Cassandra support.
  • negronicql is gocql middleware for Negroni.
  • cqlr adds the ability to auto-bind a CQL iterator to a struct or to bind a struct to an INSERT statement.
  • cqlc generates gocql compliant code from your Cassandra schema so that you can write type safe CQL statements in Go with a natural query syntax.
  • gocassa provides query building, adds data binding, and provides easy-to-use "recipe" tables for common query use-cases.
  • gocqltable is a wrapper around gocql that aims to simplify common operations.
  • gockle provides simple, mockable interfaces that wrap gocql types.
  • scylladb is a fast Apache Cassandra-compatible NoSQL database.
  • go-cql-driver is a CQL driver conforming to the built-in database/sql interface. It is good for simple use cases where the database/sql interface is wanted. The CQL driver is a wrapper around this project.

Other Projects

  • gocqldriver is the predecessor of gocql based on Go's database/sql package. This project isn't maintained anymore, because Cassandra wasn't a good fit for the traditional database/sql API. Use this package instead.

SEO

For some reason, when you Google golang cassandra, this project doesn't feature very highly in the result list. But if you Google go cassandra, then we're a bit higher up the list. So this is a note to try to convince Google that golang is an alias for Go.

License

Copyright (c) 2012-2016 The gocql Authors. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.

Documentation

Overview

Package gocql implements a fast and robust Cassandra driver for the Go programming language.

Connecting to the cluster

Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration:

cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")

The port can be specified as part of the address; the above is equivalent to:

cluster := gocql.NewCluster("192.168.1.1:9042", "192.168.1.2:9042", "192.168.1.3:9042")

It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, i.e. an IP address rather than a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the specified domain name resolves to more than one IP address, the driver may connect multiple times to the same host and will not correctly mark the node as up or down based on events.

Then you can customize more options (see ClusterConfig):

cluster.Keyspace = "example"
cluster.Consistency = gocql.Quorum
cluster.ProtoVersion = 4

The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during an upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes).

The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead.

When ready, create a session from the configuration. Don't forget to Close the session once you are done with it:

session, err := cluster.CreateSession()
if err != nil {
	return err
}
defer session.Close()

Authentication

CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used.

To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider.

PasswordAuthenticator is provided to use for username/password authentication:

cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
cluster.Authenticator = gocql.PasswordAuthenticator{
	Username: "user",
	Password: "password",
}
session, err := cluster.CreateSession()
if err != nil {
	return err
}
defer session.Close()

Transport layer security

It is possible to secure traffic between the client and server with TLS.

To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files.

Warning: For historical reasons, SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows:

Config.InsecureSkipVerify | EnableHostVerification | Result
Config is nil             | false                  | do not verify host
Config is nil             | true                   | verify host
false                     | false                  | verify host
true                      | false                  | do not verify host
false                     | true                   | verify host
true                      | true                   | verify host

For example:

cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
cluster.SslOpts = &gocql.SslOptions{
	EnableHostVerification: true,
}
session, err := cluster.CreateSession()
if err != nil {
	return err
}
defer session.Close()

Data-center awareness and query routing

To route queries to the local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you want to primarily connect to is called dc1 (as configured in the database):

cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
cluster.PoolConfig.HostSelectionPolicy = gocql.DCAwareRoundRobinPolicy("dc1")

The driver can route queries to nodes that hold data replicas based on partition key (preferring local DC).

cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(gocql.DCAwareRoundRobinPolicy("dc1"))

Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback.

We recommend running with a token aware host policy in production for maximum performance.

The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of

session.Query("select value from mytable where pk1 = 'abc' AND pk2 = ?", "def")

use

session.Query("select value from mytable where pk1 = ? AND pk2 = ?", "abc", "def")

Rack-level awareness

The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack.

Instead of dividing hosts into two tiers (local datacenter and remote datacenters), it divides hosts into three (the local rack, the rest of the local datacenter, and everything else).

RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy.
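
A minimal sketch, assuming the local datacenter is called dc1 and the local rack is called rack1 in the database configuration:

cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
// Prefer replicas in the local rack, then the rest of dc1, then everything else.
cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(
	gocql.RackAwareRoundRobinPolicy("dc1", "rack1"))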

Executing queries

Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query.

To execute a query without reading results, use Query.Exec:

 err := session.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`,
		"me", gocql.TimeUUID(), "hello world").WithContext(ctx).Exec()

A single row can be read by calling Query.Scan:

 err := session.Query(`SELECT id, text FROM tweet WHERE timeline = ? LIMIT 1`,
		"me").WithContext(ctx).Consistency(gocql.One).Scan(&id, &text)

Multiple rows can be read using Iter.Scanner:

 scanner := session.Query(`SELECT id, text FROM tweet WHERE timeline = ?`,
 	"me").WithContext(ctx).Iter().Scanner()
 for scanner.Next() {
 	var (
 		id gocql.UUID
		text string
 	)
 	err = scanner.Scan(&id, &text)
 	if err != nil {
 		log.Fatal(err)
 	}
 	fmt.Println("Tweet:", id, text)
 }
 // scanner.Err() closes the iterator, so neither the scanner nor the iterator should be used afterwards.
 if err := scanner.Err(); err != nil {
 	log.Fatal(err)
 }

See Example for a complete example.

Prepared statements

The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. CQL protocol does not support preparing other query types.

When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement.
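
A minimal sketch with the tweet table from the package example: the text column is left untouched while the rest of the row is written, so the same prepared INSERT can be reused whether or not a text value is available.

// Requires protocol version 4 or later.
err := session.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`,
	"me", gocql.TimeUUID(), gocql.UnsetValue).Exec()
if err != nil {
	log.Fatal(err)
}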

Executing multiple queries concurrently

Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs), while the queries are executed asynchronously at the protocol level.

results := make(chan error, 2)
go func() {
	results <- session.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`,
		"me", gocql.TimeUUID(), "hello world 1").Exec()
}()
go func() {
	results <- session.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`,
		"me", gocql.TimeUUID(), "hello world 2").Exec()
}()
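
The error results can then be collected from the channel, for example:

for i := 0; i < 2; i++ {
	if err := <-results; err != nil {
		log.Fatal(err)
	}
}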

Nulls

Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string variable instead of a string.

var text *string
err := scanner.Scan(&text)
if err != nil {
	// handle error
}
if text != nil {
	// not null
} else {
	// null
}

See Example_nulls for full example.

Reusing slices

The driver reuses the backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices in other data structures.

scanner := session.Query(`SELECT myints FROM table WHERE pk = ?`, "key").WithContext(ctx).Iter().Scanner()
var myInts []int
for scanner.Next() {
	// This scan reuses backing store of myInts for each row.
	err = scanner.Scan(&myInts)
	if err != nil {
		log.Fatal(err)
	}
}

When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop:

scanner := session.Query(`SELECT myints FROM table WHERE pk = ?`, "key").WithContext(ctx).Iter().Scanner()
for scanner.Next() {
	var myInts []int
	// This scan always gets pointer to fresh myInts slice, so does not reuse memory.
	err = scanner.Scan(&myInts)
	if err != nil {
		log.Fatal(err)
	}
}

Paging

The driver supports paging of results with automatic prefetch, see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch.

It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages in a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys.

Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880).

The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate this yourself, as it could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster.

Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred).

Using too low a value for PageSize will negatively affect performance; a value below 100 is probably too low. While Cassandra currently returns exactly PageSize items per page (except for the last page), the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items.

See Example_paging for an example of manual paging.

Dynamic list of columns

There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case.

See Example_dynamicColumns.

Batches

The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill in the details of the individual queries. Then execute the batch with Session.ExecuteBatch.

Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra, so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters.

For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop.

It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how these are executed: a BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement, while Session.ExecuteBatch prepares the individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient.

See Example_batch for an example.

Lightweight transactions

Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See the example for Query.MapScanCAS.
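
A minimal sketch of Query.MapScanCAS with the tweet table from the package example; when the insert is not applied, the supplied map is filled with the existing row's columns:

id := gocql.TimeUUID()
previous := map[string]interface{}{}
applied, err := session.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?) IF NOT EXISTS`,
	"me", id, "hello world").MapScanCAS(previous)
if err != nil {
	log.Fatal(err)
}
if !applied {
	// The row already existed; previous holds its current column values.
	fmt.Println("existing text:", previous["text"])
}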

Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See example for Session.MapExecuteBatchCAS.

Retries and speculative execution

Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying nor speculative execution.

Idempotent queries are retried in case of errors based on the configured RetryPolicy.

Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay, even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still in flight. The two parallel executions of the query race to return a result, and the first result received is returned.
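
A minimal sketch using the SimpleRetryPolicy and SimpleSpeculativeExecution implementations shipped with gocql (id and text are assumed to be declared elsewhere, and the delay value is illustrative):

cluster.RetryPolicy = &gocql.SimpleRetryPolicy{NumRetries: 3}
// ...
err := session.Query(`SELECT text FROM tweet WHERE id = ?`, id).
	Idempotent(true). // safe to retry or run speculatively
	SetSpeculativeExecutionPolicy(&gocql.SimpleSpeculativeExecution{
		NumAttempts:  1,
		TimeoutDelay: 100 * time.Millisecond,
	}).
	Scan(&text)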

User-defined types

UDTs can be marshaled to and unmarshaled from a map[string]interface{} or a Go struct (or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces).

For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field:

type MyUDT struct {
	FieldA int32 `cql:"a"`
	FieldB string `cql:"b"`
}

See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler.

Metrics and tracing

It is possible to provide observer implementations that could be used to gather metrics:

  • QueryObserver for monitoring individual queries.
  • BatchObserver for monitoring batch queries.
  • ConnectObserver for monitoring new connections from the driver to the database.
  • FrameHeaderObserver for monitoring individual protocol frames.

CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing.
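
As a minimal sketch of an observer, the snippet below logs the statement and latency of every query; ObservedQuery carries further details (host, error, and so on), so check its documentation for the full field set:

type loggingQueryObserver struct{}

func (loggingQueryObserver) ObserveQuery(ctx context.Context, q gocql.ObservedQuery) {
	log.Printf("query %q took %s (err: %v)", q.Statement, q.End.Sub(q.Start), q.Err)
}

// ...
cluster.QueryObserver = loggingQueryObserver{}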

Example
package main

import (
	"context"
	"fmt"
	"github.com/gocql/gocql"
	"log"
)

func main() {
	/* The example assumes the following CQL was used to setup the keyspace:
	create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
	create table example.tweet(timeline text, id UUID, text text, PRIMARY KEY(id));
	create index on example.tweet(timeline);
	*/
	cluster := gocql.NewCluster("localhost:9042")
	cluster.Keyspace = "example"
	cluster.Consistency = gocql.Quorum
	// connect to the cluster
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	ctx := context.Background()

	// insert a tweet
	if err := session.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`,
		"me", gocql.TimeUUID(), "hello world").WithContext(ctx).Exec(); err != nil {
		log.Fatal(err)
	}

	var id gocql.UUID
	var text string

	/* Search for a specific set of records whose 'timeline' column matches
	 * the value 'me'. The secondary index that we created earlier will be
	 * used for optimizing the search */
	if err := session.Query(`SELECT id, text FROM tweet WHERE timeline = ? LIMIT 1`,
		"me").WithContext(ctx).Consistency(gocql.One).Scan(&id, &text); err != nil {
		log.Fatal(err)
	}
	fmt.Println("Tweet:", id, text)
	fmt.Println()

	// list all tweets
	scanner := session.Query(`SELECT id, text FROM tweet WHERE timeline = ?`,
		"me").WithContext(ctx).Iter().Scanner()
	for scanner.Next() {
		err = scanner.Scan(&id, &text)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("Tweet:", id, text)
	}
	// scanner.Err() closes the iterator, so neither the scanner nor the iterator should be used afterwards.
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
	// Tweet: cad53821-3731-11eb-971c-708bcdaada84 hello world
	//
	// Tweet: cad53821-3731-11eb-971c-708bcdaada84 hello world
	// Tweet: d577ab85-3731-11eb-81eb-708bcdaada84 hello world
}

Example (Batch)

Example_batch demonstrates how to execute a batch of statements.

package main

import (
	"context"
	"fmt"
	"github.com/gocql/gocql"
	"log"
)

func main() {
	/* The example assumes the following CQL was used to setup the keyspace:
	create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
	create table example.batches(pk int, ck int, description text, PRIMARY KEY(pk, ck));
	*/
	cluster := gocql.NewCluster("localhost:9042")
	cluster.Keyspace = "example"
	cluster.ProtoVersion = 4
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	ctx := context.Background()

	b := session.NewBatch(gocql.UnloggedBatch).WithContext(ctx)
	b.Entries = append(b.Entries, gocql.BatchEntry{
		Stmt:       "INSERT INTO example.batches (pk, ck, description) VALUES (?, ?, ?)",
		Args:       []interface{}{1, 2, "1.2"},
		Idempotent: true,
	})
	b.Entries = append(b.Entries, gocql.BatchEntry{
		Stmt:       "INSERT INTO example.batches (pk, ck, description) VALUES (?, ?, ?)",
		Args:       []interface{}{1, 3, "1.3"},
		Idempotent: true,
	})
	err = session.ExecuteBatch(b)
	if err != nil {
		log.Fatal(err)
	}

	scanner := session.Query("SELECT pk, ck, description FROM example.batches").Iter().Scanner()
	for scanner.Next() {
		var pk, ck int32
		var description string
		err = scanner.Scan(&pk, &ck, &description)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(pk, ck, description)
	}
	// 1 2 1.2
	// 1 3 1.3
}

Example (DynamicColumns)

Example_dynamicColumns demonstrates how to handle dynamic column list.

package main

import (
	"context"
	"fmt"
	"github.com/gocql/gocql"
	"log"
	"os"
	"reflect"
	"text/tabwriter"
)

func main() {
	/* The example assumes the following CQL was used to setup the keyspace:
	create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
	create table example.table1(pk text, ck int, value1 text, value2 int, PRIMARY KEY(pk, ck));
	insert into example.table1 (pk, ck, value1, value2) values ('a', 1, 'b', 2);
	insert into example.table1 (pk, ck, value1, value2) values ('c', 3, 'd', 4);
	insert into example.table1 (pk, ck, value1, value2) values ('c', 5, null, null);
	create table example.table2(pk int, value1 timestamp, PRIMARY KEY(pk));
	insert into example.table2 (pk, value1) values (1, '2020-01-02 03:04:05');
	*/
	cluster := gocql.NewCluster("localhost:9042")
	cluster.Keyspace = "example"
	cluster.ProtoVersion = 4
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	printQuery := func(ctx context.Context, session *gocql.Session, stmt string, values ...interface{}) error {
		iter := session.Query(stmt, values...).WithContext(ctx).Iter()
		fmt.Println(stmt)
		w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ',
			0)
		for i, columnInfo := range iter.Columns() {
			if i > 0 {
				fmt.Fprint(w, "\t| ")
			}
			fmt.Fprintf(w, "%s (%s)", columnInfo.Name, columnInfo.TypeInfo)
		}

		for {
			rd, err := iter.RowData()
			if err != nil {
				return err
			}
			if !iter.Scan(rd.Values...) {
				break
			}
			fmt.Fprint(w, "\n")
			for i, val := range rd.Values {
				if i > 0 {
					fmt.Fprint(w, "\t| ")
				}

				fmt.Fprint(w, reflect.Indirect(reflect.ValueOf(val)).Interface())
			}
		}

		fmt.Fprint(w, "\n")
		w.Flush()
		fmt.Println()

		return iter.Close()
	}

	ctx := context.Background()

	err = printQuery(ctx, session, "SELECT * FROM table1")
	if err != nil {
		log.Fatal(err)
	}

	err = printQuery(ctx, session, "SELECT value2, pk, ck FROM table1")
	if err != nil {
		log.Fatal(err)
	}

	err = printQuery(ctx, session, "SELECT * FROM table2")
	if err != nil {
		log.Fatal(err)
	}
	// SELECT * FROM table1
	// pk (varchar) | ck (int) | value1 (varchar) | value2 (int)
	// a            | 1        | b                | 2
	// c            | 3        | d                | 4
	// c            | 5        |                  | 0
	//
	// SELECT value2, pk, ck FROM table1
	// value2 (int) | pk (varchar) | ck (int)
	// 2            | a            | 1
	// 4            | c            | 3
	// 0            | c            | 5
	//
	// SELECT * FROM table2
	// pk (int) | value1 (timestamp)
	// 1        | 2020-01-02 03:04:05 +0000 UTC
}

Example (MarshalerUnmarshaler)

Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler.

package main

import (
	"context"
	"fmt"
	"github.com/gocql/gocql"
	"log"
	"strconv"
	"strings"
)

// MyMarshaler implements Marshaler and Unmarshaler.
// It represents a version number stored as string.
type MyMarshaler struct {
	major, minor, patch int
}

func (m MyMarshaler) MarshalCQL(info gocql.TypeInfo) ([]byte, error) {
	return gocql.Marshal(info, fmt.Sprintf("%d.%d.%d", m.major, m.minor, m.patch))
}

func (m *MyMarshaler) UnmarshalCQL(info gocql.TypeInfo, data []byte) error {
	var s string
	err := gocql.Unmarshal(info, data, &s)
	if err != nil {
		return err
	}
	parts := strings.SplitN(s, ".", 3)
	if len(parts) != 3 {
		return fmt.Errorf("parse version %q: %d parts instead of 3", s, len(parts))
	}
	major, err := strconv.Atoi(parts[0])
	if err != nil {
		return fmt.Errorf("parse version %q major number: %v", s, err)
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return fmt.Errorf("parse version %q minor number: %v", s, err)
	}
	patch, err := strconv.Atoi(parts[2])
	if err != nil {
		return fmt.Errorf("parse version %q patch number: %v", s, err)
	}
	m.major = major
	m.minor = minor
	m.patch = patch
	return nil
}

// Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler.
func main() {
	/* The example assumes the following CQL was used to setup the keyspace:
	create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
	create table example.my_marshaler_table(pk int, value text, PRIMARY KEY(pk));
	*/
	cluster := gocql.NewCluster("localhost:9042")
	cluster.Keyspace = "example"
	cluster.ProtoVersion = 4
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	ctx := context.Background()

	value := MyMarshaler{
		major: 1,
		minor: 2,
		patch: 3,
	}
	err = session.Query("INSERT INTO example.my_marshaler_table (pk, value) VALUES (?, ?)",
		1, value).WithContext(ctx).Exec()
	if err != nil {
		log.Fatal(err)
	}
	var stringValue string
	err = session.Query("SELECT value FROM example.my_marshaler_table WHERE pk = 1").WithContext(ctx).
		Scan(&stringValue)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(stringValue)
	var unmarshaledValue MyMarshaler
	err = session.Query("SELECT value FROM example.my_marshaler_table WHERE pk = 1").WithContext(ctx).
		Scan(&unmarshaledValue)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(unmarshaledValue)
	// 1.2.3
	// {1 2 3}
}

Example (Nulls)

Example_nulls demonstrates how to distinguish between null and zero value when needed.

Null values are unmarshalled as zero value of the type. If you need to distinguish for example between text column being null and empty string, you can unmarshal into *string field.

package main

import (
	"fmt"
	"github.com/gocql/gocql"
	"log"
)

func main() {
	/* The example assumes the following CQL was used to setup the keyspace:
	create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
	create table example.stringvals(id int, value text, PRIMARY KEY(id));
	insert into example.stringvals (id, value) values (1, null);
	insert into example.stringvals (id, value) values (2, '');
	insert into example.stringvals (id, value) values (3, 'hello');
	*/
	cluster := gocql.NewCluster("localhost:9042")
	cluster.Keyspace = "example"
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	scanner := session.Query(`SELECT id, value FROM stringvals`).Iter().Scanner()
	for scanner.Next() {
		var (
			id  int32
			val *string
		)
		err := scanner.Scan(&id, &val)
		if err != nil {
			log.Fatal(err)
		}
		if val != nil {
			fmt.Printf("Row %d is %q\n", id, *val)
		} else {
			fmt.Printf("Row %d is null\n", id)
		}

	}
	err = scanner.Err()
	if err != nil {
		log.Fatal(err)
	}
	// Row 1 is null
	// Row 2 is ""
	// Row 3 is "hello"
}

Example (Paging)

Example_paging demonstrates how to manually fetch pages and use page state.

See also package documentation about paging.

package main

import (
	"fmt"
	"github.com/gocql/gocql"
	"log"
)

func main() {
	/* The example assumes the following CQL was used to setup the keyspace:
	create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
	create table example.itoa(id int, description text, PRIMARY KEY(id));
	insert into example.itoa (id, description) values (1, 'one');
	insert into example.itoa (id, description) values (2, 'two');
	insert into example.itoa (id, description) values (3, 'three');
	insert into example.itoa (id, description) values (4, 'four');
	insert into example.itoa (id, description) values (5, 'five');
	insert into example.itoa (id, description) values (6, 'six');
	*/
	cluster := gocql.NewCluster("localhost:9042")
	cluster.Keyspace = "example"
	cluster.ProtoVersion = 4
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	var pageState []byte
	for {
		// We use PageSize(2) for the sake of example, use larger values in production (default is 5000) for performance
		// reasons.
		iter := session.Query(`SELECT id, description FROM itoa`).PageSize(2).PageState(pageState).Iter()
		nextPageState := iter.PageState()
		scanner := iter.Scanner()
		for scanner.Next() {
			var (
				id          int
				description string
			)
			err = scanner.Scan(&id, &description)
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println(id, description)
		}
		err = scanner.Err()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("next page state: %+v\n", nextPageState)
		if len(nextPageState) == 0 {
			break
		}
		pageState = nextPageState
	}
	// 5 five
	// 1 one
	// next page state: [4 0 0 0 1 0 240 127 255 255 253 0]
	// 2 two
	// 4 four
	// next page state: [4 0 0 0 4 0 240 127 255 255 251 0]
	// 6 six
	// 3 three
	// next page state: [4 0 0 0 3 0 240 127 255 255 249 0]
	// next page state: []
}

Example (Set)

Example_set demonstrates how to use sets.

package main

import (
	"fmt"
	"github.com/gocql/gocql"
	"log"
	"sort"
)

func main() {
	/* The example assumes the following CQL was used to setup the keyspace:
	create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
	create table example.sets(id int, value set<text>, PRIMARY KEY(id));
	*/
	cluster := gocql.NewCluster("localhost:9042")
	cluster.Keyspace = "example"
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	err = session.Query(`UPDATE sets SET value=? WHERE id=1`, []string{"alpha", "beta", "gamma"}).Exec()
	if err != nil {
		log.Fatal(err)
	}
	err = session.Query(`UPDATE sets SET value=value+? WHERE id=1`, "epsilon").Exec()
	if err != nil {
		// This does not work because the ? expects a set, not a single item.
		fmt.Printf("expected error: %v\n", err)
	}
	err = session.Query(`UPDATE sets SET value=value+? WHERE id=1`, []string{"delta"}).Exec()
	if err != nil {
		log.Fatal(err)
	}
	// map[x]struct{} is supported too.
	toRemove := map[string]struct{}{
		"alpha": {},
		"gamma": {},
	}
	err = session.Query(`UPDATE sets SET value=value-? WHERE id=1`, toRemove).Exec()
	if err != nil {
		log.Fatal(err)
	}
	scanner := session.Query(`SELECT id, value FROM sets`).Iter().Scanner()
	for scanner.Next() {
		var (
			id  int32
			val []string
		)
		err := scanner.Scan(&id, &val)
		if err != nil {
			log.Fatal(err)
		}
		sort.Strings(val)
		fmt.Printf("Row %d is %v\n", id, val)
	}
	err = scanner.Err()
	if err != nil {
		log.Fatal(err)
	}
	// expected error: can not marshal string into set(varchar)
	// Row 1 is [beta delta]
}

Example (UserDefinedTypesMap)

Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs.

package main

import (
	"context"
	"fmt"
	"github.com/gocql/gocql"
	"log"
)

func main() {
	/* The example assumes the following CQL was used to setup the keyspace:
	create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
	create type example.my_udt (field_a text, field_b int);
	create table example.my_udt_table(pk int, value frozen<my_udt>, PRIMARY KEY(pk));
	*/
	cluster := gocql.NewCluster("localhost:9042")
	cluster.Keyspace = "example"
	cluster.ProtoVersion = 4
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	ctx := context.Background()

	value := map[string]interface{}{
		"field_a": "a value",
		"field_b": 42,
	}
	err = session.Query("INSERT INTO example.my_udt_table (pk, value) VALUES (?, ?)",
		1, value).WithContext(ctx).Exec()
	if err != nil {
		log.Fatal(err)
	}

	var readValue map[string]interface{}

	err = session.Query("SELECT value FROM example.my_udt_table WHERE pk = 1").WithContext(ctx).Scan(&readValue)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(readValue["field_a"])
	fmt.Println(readValue["field_b"])
	// a value
	// 42
}

Example (UserDefinedTypesStruct)

Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.

package main

import (
	"context"
	"fmt"
	"github.com/gocql/gocql"
	"log"
)

type MyUDT struct {
	FieldA string `cql:"field_a"`
	FieldB int32  `cql:"field_b"`
}

// Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs.
// See also examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
func main() {
	/* The example assumes the following CQL was used to setup the keyspace:
	create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
	create type example.my_udt (field_a text, field_b int);
	create table example.my_udt_table(pk int, value frozen<my_udt>, PRIMARY KEY(pk));
	*/
	cluster := gocql.NewCluster("localhost:9042")
	cluster.Keyspace = "example"
	cluster.ProtoVersion = 4
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	ctx := context.Background()

	value := MyUDT{
		FieldA: "a value",
		FieldB: 42,
	}
	err = session.Query("INSERT INTO example.my_udt_table (pk, value) VALUES (?, ?)",
		1, value).WithContext(ctx).Exec()
	if err != nil {
		log.Fatal(err)
	}

	var readValue MyUDT

	err = session.Query("SELECT value FROM example.my_udt_table WHERE pk = 1").WithContext(ctx).Scan(&readValue)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(readValue.FieldA)
	fmt.Println(readValue.FieldB)
	// a value
	// 42
}


Constants

const (
	// ErrCodeServer indicates unexpected error on server-side.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1246-L1247
	ErrCodeServer = 0x0000
	// ErrCodeProtocol indicates a protocol violation by some client message.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1248-L1250
	ErrCodeProtocol = 0x000A
	// ErrCodeCredentials indicates missing required authentication.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1251-L1254
	ErrCodeCredentials = 0x0100
	// ErrCodeUnavailable indicates unavailable error.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1255-L1265
	ErrCodeUnavailable = 0x1000
	// ErrCodeOverloaded returned in case of request on overloaded node coordinator.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1266-L1267
	ErrCodeOverloaded = 0x1001
	// ErrCodeBootstrapping returned from the coordinator node in bootstrapping phase.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1268-L1269
	ErrCodeBootstrapping = 0x1002
	// ErrCodeTruncate indicates truncation exception.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1270
	ErrCodeTruncate = 0x1003
	// ErrCodeWriteTimeout returned in case of timeout during the request write.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1271-L1304
	ErrCodeWriteTimeout = 0x1100
	// ErrCodeReadTimeout returned in case of timeout during the request read.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1305-L1321
	ErrCodeReadTimeout = 0x1200
	// ErrCodeReadFailure indicates request read error which is not covered by ErrCodeReadTimeout.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1322-L1340
	ErrCodeReadFailure = 0x1300
	// ErrCodeFunctionFailure indicates an error in user-defined function.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1341-L1347
	ErrCodeFunctionFailure = 0x1400
	// ErrCodeWriteFailure indicates request write error which is not covered by ErrCodeWriteTimeout.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1348-L1385
	ErrCodeWriteFailure = 0x1500
	// ErrCodeCDCWriteFailure is defined, but not yet documented in CQLv5 protocol.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1386
	ErrCodeCDCWriteFailure = 0x1600
	// ErrCodeCASWriteUnknown indicates only partially completed CAS operation.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1387-L1397
	ErrCodeCASWriteUnknown = 0x1700
	// ErrCodeSyntax indicates the syntax error in the query.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1399
	ErrCodeSyntax = 0x2000
	// ErrCodeUnauthorized indicates access rights violation by user on performed operation.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1400-L1401
	ErrCodeUnauthorized = 0x2100
	// ErrCodeInvalid indicates invalid query error which is not covered by ErrCodeSyntax.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1402
	ErrCodeInvalid = 0x2200
	// ErrCodeConfig indicates the configuration error.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1403
	ErrCodeConfig = 0x2300
	// ErrCodeAlreadyExists is returned for the requests creating the existing keyspace/table.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1404-L1413
	ErrCodeAlreadyExists = 0x2400
	// ErrCodeUnprepared returned from the host for prepared statement which is unknown.
	//
	// See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1414-L1417
	ErrCodeUnprepared = 0x2500
)

See CQL Binary Protocol v5, section 8 for more details. https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec

const (
	NodeUp nodeState = iota
	NodeDown
)
const (
	DEFAULT_KEY_ALIAS    = "key"
	DEFAULT_COLUMN_ALIAS = "column"
	DEFAULT_VALUE_ALIAS  = "value"
)

default alias values

const (
	REVERSED_TYPE   = "org.apache.cassandra.db.marshal.ReversedType"
	COMPOSITE_TYPE  = "org.apache.cassandra.db.marshal.CompositeType"
	COLLECTION_TYPE = "org.apache.cassandra.db.marshal.ColumnToCollectionType"
	LIST_TYPE       = "org.apache.cassandra.db.marshal.ListType"
	SET_TYPE        = "org.apache.cassandra.db.marshal.SetType"
	MAP_TYPE        = "org.apache.cassandra.db.marshal.MapType"
)
const (
	VariantNCSCompat = 0
	VariantIETF      = 2
	VariantMicrosoft = 6
	VariantFuture    = 7
)
const BatchSizeMaximum = 65535

BatchSizeMaximum is the maximum number of statements a batch operation can have. This limit is set by cassandra and could change in the future.

Variables

var (
	ErrNoHosts              = errors.New("no hosts provided")
	ErrNoConnectionsStarted = errors.New("no connections were made when creating the session")
	ErrHostQueryFailed      = errors.New("unable to populate Hosts")
)
var (
	ErrQueryArgLength    = errors.New("gocql: query argument length mismatch")
	ErrTimeoutNoResponse = errors.New("gocql: no response received from cassandra within timeout period")
	ErrTooManyTimeouts   = errors.New("gocql: too many query timeouts on the connection")
	ErrConnectionClosed  = errors.New("gocql: connection closed waiting for response")
	ErrNoStreams         = errors.New("gocql: no streams available on connection")
)
var (
	ErrNotFound             = errors.New("not found")
	ErrUnavailable          = errors.New("unavailable")
	ErrUnsupported          = errors.New("feature not supported")
	ErrTooManyStmts         = errors.New("too many statements")
	ErrUseStmt              = errors.New("use statements aren't supported. Please see https://github.com/gocql/gocql for explanation.")
	ErrSessionClosed        = errors.New("session has been closed")
	ErrNoConnections        = errors.New("gocql: no hosts available in the pool")
	ErrNoKeyspace           = errors.New("no keyspace provided")
	ErrKeyspaceDoesNotExist = errors.New("keyspace does not exist")
	ErrNoMetadata           = errors.New("no metadata available")
)
var ErrCannotFindHost = errors.New("cannot find host")
var (
	ErrFrameTooBig = errors.New("frame length is bigger than the maximum allowed")
)
var ErrHostAlreadyExists = errors.New("host already exists")
var ErrUnknownRetryType = errors.New("unknown retry type returned by retry policy")

ErrUnknownRetryType is returned if the retry policy returns a retry type unknown to the query executor.

var (
	ErrorUDTUnavailable = errors.New("UDT are not available on protocols less than 3, please update config")
)
var TimeoutLimit int64 = 0

If not zero, how many timeouts we will allow to occur before the connection is closed and restarted. This is to prevent a single query timeout from killing a connection which may be serving more queries just fine. Default is 0, should not be changed concurrently with queries.

Deprecated.

var UnsetValue = unsetColumn{}

UnsetValue represents a value used in a query binding that will be ignored by Cassandra.

By setting a field to the unset value Cassandra will ignore the write completely. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement.

UnsetValue is only available when using the version 4 of the protocol.

Functions

func JoinHostPort

func JoinHostPort(addr string, port int) string

JoinHostPort is a utility to return an address string that can be used by `gocql.Conn` to form a connection with a host.

func LookupIP

func LookupIP(host string) ([]net.IP, error)

func Marshal

func Marshal(info TypeInfo, value interface{}) ([]byte, error)

Marshal returns the CQL encoding of the value for the Cassandra internal type described by the info parameter.

nil is serialized as CQL null. If value implements Marshaler, its MarshalCQL method is called to marshal the data. If value is a pointer, the pointed-to value is marshaled.

Supported conversions are as follows, other type combinations may be added in the future:

CQL type                    | Go type (value)    | Note
varchar, ascii, blob, text  | string, []byte     |
boolean                     | bool               |
tinyint, smallint, int      | integer types      |
tinyint, smallint, int      | string             | formatted as base 10 number
bigint, counter             | integer types      |
bigint, counter             | big.Int            |
bigint, counter             | string             | formatted as base 10 number
float                       | float32            |
double                      | float64            |
decimal                     | inf.Dec            |
time                        | int64              | nanoseconds since start of day
time                        | time.Duration      | duration since start of day
timestamp                   | int64              | milliseconds since Unix epoch
timestamp                   | time.Time          |
list, set                   | slice, array       |
list, set                   | map[X]struct{}     |
map                         | map[X]Y            |
uuid, timeuuid              | gocql.UUID         |
uuid, timeuuid              | [16]byte           | raw UUID bytes
uuid, timeuuid              | []byte             | raw UUID bytes, length must be 16 bytes
uuid, timeuuid              | string             | hex representation, see ParseUUID
varint                      | integer types      |
varint                      | big.Int            |
varint                      | string             | value of number in decimal notation
inet                        | net.IP             |
inet                        | string             | IPv4 or IPv6 address string
tuple                       | slice, array       |
tuple                       | struct             | fields are marshaled in order of declaration
user-defined type           | gocql.UDTMarshaler | MarshalUDT is called
user-defined type           | map[string]interface{} |
user-defined type           | struct             | struct fields' cql tags are used for column names
date                        | int64              | milliseconds since Unix epoch to start of day (in UTC)
date                        | time.Time          | start of day (in UTC)
date                        | string             | parsed using "2006-01-02" format
duration                    | int64              | duration in nanoseconds
duration                    | time.Duration      |
duration                    | gocql.Duration     |
duration                    | string             | parsed with time.ParseDuration
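
A minimal sketch of calling Marshal and Unmarshal directly; NewNativeType is used here only to construct a TypeInfo for a plain CQL int by hand (in normal use the driver supplies TypeInfo values itself, for example via Iter.Columns):

// Describe a CQL int for protocol version 4.
info := gocql.NewNativeType(4, gocql.TypeInt, "")

b, err := gocql.Marshal(info, 42)
if err != nil {
	log.Fatal(err)
}

var out int
if err := gocql.Unmarshal(info, b, &out); err != nil {
	log.Fatal(err)
}
fmt.Println(out) // 42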

func NamedValue

func NamedValue(name string, value interface{}) interface{}

NamedValue produces a value that will bind to the named parameter in a query.

func NewErrProtocol

func NewErrProtocol(format string, args ...interface{}) error

func NonLocalReplicasFallback

func NonLocalReplicasFallback() func(policy *tokenAwareHostPolicy)

NonLocalReplicasFallback enables fallback to replicas that are not considered local.

TokenAwareHostPolicy used with DCAwareRoundRobinPolicy as the fallback first selects replicas by partition key in the local DC, then falls back to other nodes in the local DC. Enabling NonLocalReplicasFallback causes TokenAwareHostPolicy to first select replicas by partition key in the local DC, then replicas by partition key in remote DCs, and then fall back to other nodes in the local DC.
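
For example, combined with the policies shown in the package overview:

cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(
	gocql.DCAwareRoundRobinPolicy("dc1"),
	gocql.NonLocalReplicasFallback())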

func ShuffleReplicas

func ShuffleReplicas() func(*tokenAwareHostPolicy)

func SingleHostReadyPolicy

func SingleHostReadyPolicy(p HostSelectionPolicy) *singleHostReadyPolicy

SingleHostReadyPolicy wraps a HostSelectionPolicy and returns Ready after a single host has been added via HostUp.

func TupleColumnName

func TupleColumnName(c string, n int) string

TupleColumnName will return the column name of a tuple value in a column named c at index n. It should be used if a specific element within a tuple needs to be extracted from a map returned by SliceMap or MapScan.

func Unmarshal

func Unmarshal(info TypeInfo, data []byte, value interface{}) error

Unmarshal parses the CQL encoded data based on the info parameter that describes the Cassandra internal data type and stores the result in the value pointed by value.

If value implements Unmarshaler, its UnmarshalCQL method is called to unmarshal the data. If value is a pointer to a pointer, it is set to nil if the CQL value is null. Otherwise, nulls are unmarshalled as the zero value.

Supported conversions are as follows, other type combinations may be added in the future:

CQL type                                | Go type (value)         | Note
varchar, ascii, blob, text              | *string                 |
varchar, ascii, blob, text              | *[]byte                 | non-nil buffer is reused
bool                                    | *bool                   |
tinyint, smallint, int, bigint, counter | *integer types          |
tinyint, smallint, int, bigint, counter | *big.Int                |
tinyint, smallint, int, bigint, counter | *string                 | formatted as base 10 number
float                                   | *float32                |
double                                  | *float64                |
decimal                                 | *inf.Dec                |
time                                    | *int64                  | nanoseconds since start of day
time                                    | *time.Duration          |
timestamp                               | *int64                  | milliseconds since Unix epoch
timestamp                               | *time.Time              |
list, set                               | *slice, *array          |
map                                     | *map[X]Y                |
uuid, timeuuid                          | *string                 | see UUID.String
uuid, timeuuid                          | *[]byte                 | raw UUID bytes
uuid, timeuuid                          | *gocql.UUID             |
timeuuid                                | *time.Time              | timestamp of the UUID
inet                                    | *net.IP                 |
inet                                    | *string                 | IPv4 or IPv6 address string
tuple                                   | *slice, *array          |
tuple                                   | *struct                 | struct fields are set in order of declaration
user-defined types                      | gocql.UDTUnmarshaler    | UnmarshalUDT is called
user-defined types                      | *map[string]interface{} |
user-defined types                      | *struct                 | cql tag is used to determine field name
date                                    | *time.Time              | time of beginning of the day (in UTC)
date                                    | *string                 | formatted with 2006-01-02 format
duration                                | *gocql.Duration         |

Types

type AddressTranslator

type AddressTranslator interface {
	// Translate will translate the provided address and/or port to another
	// address and/or port. If no translation is possible, Translate will return the
	// address and port provided to it.
	Translate(addr net.IP, port int) (net.IP, int)
}

AddressTranslator provides a way to translate node addresses (and ports) that are discovered or received as a node event. This can be useful in an ec2 environment, for instance, to translate public IPs to private IPs.

func IdentityTranslator

func IdentityTranslator() AddressTranslator

IdentityTranslator will do nothing but return what it was provided. It is essentially a no-op.

type AddressTranslatorFunc

type AddressTranslatorFunc func(addr net.IP, port int) (net.IP, int)

func (AddressTranslatorFunc) Translate

func (fn AddressTranslatorFunc) Translate(addr net.IP, port int) (net.IP, int)
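
A minimal sketch of plugging an AddressTranslatorFunc into ClusterConfig.AddressTranslator; the public/private address pair is a hypothetical mapping for illustration:

cluster := gocql.NewCluster("203.0.113.10")
cluster.AddressTranslator = gocql.AddressTranslatorFunc(func(addr net.IP, port int) (net.IP, int) {
	// hypothetical mapping of a public address to a private one
	if addr.Equal(net.ParseIP("203.0.113.10")) {
		return net.ParseIP("10.0.0.10"), port
	}
	return addr, port
})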

type AggregateMetadata

type AggregateMetadata struct {
	Keyspace      string
	Name          string
	ArgumentTypes []TypeInfo
	FinalFunc     FunctionMetadata
	InitCond      string
	ReturnType    TypeInfo
	StateFunc     FunctionMetadata
	StateType     TypeInfo
	// contains filtered or unexported fields
}

AggregateMetadata holds metadata for aggregate constructs

type Authenticator

type Authenticator interface {
	Challenge(req []byte) (resp []byte, auth Authenticator, err error)
	Success(data []byte) error
}

type Batch

type Batch struct {
	Type    BatchType
	Entries []BatchEntry
	Cons    Consistency

	CustomPayload map[string][]byte
	// contains filtered or unexported fields
}

func NewBatch deprecated

func NewBatch(typ BatchType) *Batch

NewBatch creates a new batch operation without defaults from the cluster

Deprecated: use session.NewBatch instead

func (*Batch) AddAttempts

func (b *Batch) AddAttempts(i int, host *HostInfo)

func (*Batch) AddLatency

func (b *Batch) AddLatency(l int64, host *HostInfo)

func (*Batch) Attempts

func (b *Batch) Attempts() int

Attempts returns the number of attempts made to execute the batch.

func (*Batch) Bind

func (b *Batch) Bind(stmt string, bind func(q *QueryInfo) ([]interface{}, error))

Bind adds the query to the batch operation and correlates it with a binding callback that will be invoked when the batch is executed. The binding callback allows the application to define which query argument values will be marshalled as part of the batch execution.

func (*Batch) Cancel

func (*Batch) Cancel()

Deprecated: Cancel does nothing; cancel the context passed to WithContext instead.

func (*Batch) Context

func (b *Batch) Context() context.Context

func (*Batch) DefaultTimestamp

func (b *Batch) DefaultTimestamp(enable bool) *Batch

DefaultTimestamp enables the default timestamp flag on the query. If enabled, a client-side timestamp replaces the server-side assigned timestamp as the default timestamp. Note that a timestamp in the query itself will still override this timestamp. This is entirely optional.

Only available on protocol >= 3

func (*Batch) GetConsistency

func (b *Batch) GetConsistency() Consistency

GetConsistency returns the currently configured consistency level for the batch operation.

func (*Batch) GetRoutingKey

func (b *Batch) GetRoutingKey() ([]byte, error)

func (*Batch) IsIdempotent

func (b *Batch) IsIdempotent() bool

func (*Batch) Keyspace

func (b *Batch) Keyspace() string

func (*Batch) Latency

func (b *Batch) Latency() int64

Latency returns the average number of nanoseconds to execute a single attempt of the batch.

func (*Batch) Observer

func (b *Batch) Observer(observer BatchObserver) *Batch

Observer enables batch-level observer on this batch. The provided observer will be called every time this batched query is executed.

func (*Batch) Query

func (b *Batch) Query(stmt string, args ...interface{})

Query adds the query to the batch operation
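
For example, a minimal sketch of building and executing a logged batch; the keyspace and table are assumptions for illustration, and a session is assumed to be in scope:

b := session.NewBatch(gocql.LoggedBatch)
b.Query("INSERT INTO example.events (id, payload) VALUES (?, ?)", gocql.TimeUUID(), "a")
b.Query("INSERT INTO example.events (id, payload) VALUES (?, ?)", gocql.TimeUUID(), "b")
if err := session.ExecuteBatch(b); err != nil {
	log.Fatal(err)
}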

func (*Batch) RetryPolicy

func (b *Batch) RetryPolicy(r RetryPolicy) *Batch

RetryPolicy sets the retry policy to use when executing the batch operation

func (*Batch) SerialConsistency

func (b *Batch) SerialConsistency(cons SerialConsistency) *Batch

SerialConsistency sets the consistency level for the serial phase of conditional updates. That consistency can only be either SERIAL or LOCAL_SERIAL and, if not set, it defaults to SERIAL. This option will be ignored for anything other than a conditional update/insert.

Only available for protocol 3 and above

func (*Batch) SetConsistency

func (b *Batch) SetConsistency(c Consistency)

SetConsistency sets the currently configured consistency level for the batch operation.

func (*Batch) Size

func (b *Batch) Size() int

Size returns the number of batch statements to be executed by the batch operation.

func (*Batch) SpeculativeExecutionPolicy

func (b *Batch) SpeculativeExecutionPolicy(sp SpeculativeExecutionPolicy) *Batch

func (*Batch) Table

func (b *Batch) Table() string

Batch has no reasonable equivalent of Query.Table().

func (*Batch) Trace

func (b *Batch) Trace(trace Tracer) *Batch

Trace enables tracing of this batch. Look at the documentation of the Tracer interface to learn more about tracing.

func (*Batch) WithContext

func (b *Batch) WithContext(ctx context.Context) *Batch

WithContext returns a shallow copy of b with its context set to ctx.

The provided context controls the entire lifetime of executing a query, queries will be canceled and return once the context is canceled.

func (*Batch) WithTimestamp

func (b *Batch) WithTimestamp(timestamp int64) *Batch

WithTimestamp enables the default timestamp flag on the query, like DefaultTimestamp does, but also allows the timestamp value to be defined explicitly. It works the same way as USING TIMESTAMP in the query itself, but should not break prepared query optimization.

Only available on protocol >= 3

type BatchEntry

type BatchEntry struct {
	Stmt       string
	Args       []interface{}
	Idempotent bool
	// contains filtered or unexported fields
}

type BatchObserver

type BatchObserver interface {
	// ObserveBatch gets called on every batch query to cassandra.
	// It also gets called once for each query in a batch.
	// It doesn't get called if there is no query because the session is closed or there are no connections available.
	// The error reported only shows query errors, i.e. if a SELECT is valid but finds no matches it will be nil.
	// Unlike QueryObserver.ObserveQuery it does no reporting on rows read.
	ObserveBatch(context.Context, ObservedBatch)
}

BatchObserver is the interface implemented by batch observers / stat collectors.

type BatchType

type BatchType byte
const (
	LoggedBatch   BatchType = 0
	UnloggedBatch BatchType = 1
	CounterBatch  BatchType = 2
)

type ClusterConfig

type ClusterConfig struct {
	// addresses for the initial connections. It is recommended to use the value set in
	// the Cassandra config for broadcast_address or listen_address, an IP address not
	// a domain name. This is because events from Cassandra will use the configured IP
	// address, which is used to index connected hosts. If the domain name specified
	// resolves to more than 1 IP address then the driver may connect multiple times to
	// the same host, and will not mark the node being down or up from events.
	Hosts []string

	// CQL version (default: 3.0.0)
	CQLVersion string

	// ProtoVersion sets the version of the native protocol to use, this will
	// enable features in the driver for specific protocol versions, generally this
	// should be set to a known version (2,3,4) for the cluster being connected to.
	//
	// If it is 0 or unset (the default) then the driver will attempt to discover the
	// highest supported protocol for the cluster. In clusters with nodes of different
	// versions the protocol selected is not defined (i.e., it can be any of the versions supported in the cluster)
	ProtoVersion int

	// Connection timeout (default: 600ms)
	// ConnectTimeout is used to set up the default dialer and is ignored if Dialer or HostDialer is provided.
	Timeout time.Duration

	// Initial connection timeout, used during initial dial to server (default: 600ms)
	ConnectTimeout time.Duration

	// Timeout for writing a query. Defaults to Timeout if not specified.
	WriteTimeout time.Duration

	// Port used when dialing.
	// Default: 9042
	Port int

	// Initial keyspace. Optional.
	Keyspace string

	// Number of connections per host.
	// Default: 2
	NumConns int

	// Default consistency level.
	// Default: Quorum
	Consistency Consistency

	// Compression algorithm.
	// Default: nil
	Compressor Compressor

	// Default: nil
	Authenticator Authenticator

	// An Authenticator factory. Can be used to create alternative authenticators.
	// Default: nil
	AuthProvider func(h *HostInfo) (Authenticator, error)

	// Default retry policy to use for queries.
	// Default: no retries.
	RetryPolicy RetryPolicy

	// ConvictionPolicy decides whether to mark host as down based on the error and host info.
	// Default: SimpleConvictionPolicy
	ConvictionPolicy ConvictionPolicy

	// Default reconnection policy to use for reconnecting before trying to mark host as down.
	ReconnectionPolicy ReconnectionPolicy

	// The keepalive period to use, enabled if > 0 (default: 0)
	// SocketKeepalive is used to set up the default dialer and is ignored if Dialer or HostDialer is provided.
	SocketKeepalive time.Duration

	// Maximum cache size for prepared statements globally for gocql.
	// Default: 1000
	MaxPreparedStmts int

	// Maximum cache size for query info about statements for each session.
	// Default: 1000
	MaxRoutingKeyInfo int

	// Default page size to use for created sessions.
	// Default: 5000
	PageSize int

	// Consistency for the serial part of queries, values can be either SERIAL or LOCAL_SERIAL.
	// Default: unset
	SerialConsistency SerialConsistency

	// SslOpts configures TLS use when HostDialer is not set.
	// SslOpts is ignored if HostDialer is set.
	SslOpts *SslOptions

	// Sends a client-side timestamp for all requests, overriding the timestamp assigned when the request arrives at the server.
	// Default: true, only enabled for protocol 3 and above.
	DefaultTimestamp bool

	// PoolConfig configures the underlying connection pool, allowing the
	// configuration of host selection and connection selection policies.
	PoolConfig PoolConfig

	// If not zero, gocql attempts to reconnect to known DOWN nodes every ReconnectInterval.
	ReconnectInterval time.Duration

	// The maximum amount of time to wait for schema agreement in a cluster after
	// receiving a schema change frame. (default: 60s)
	MaxWaitSchemaAgreement time.Duration

	// HostFilter will filter all incoming events for host, any which don't pass
	// the filter will be ignored. If set will take precedence over any options set
	// via Discovery
	HostFilter HostFilter

	// AddressTranslator will translate addresses found on peer discovery and/or
	// node change events.
	AddressTranslator AddressTranslator

	// If IgnorePeerAddr is true and the address in system.peers does not match
	// the supplied host by either initial hosts or discovered via events then the
	// host will be replaced with the supplied address.
	//
	// For example if an event comes in with host=10.0.0.1 but when looking up that
	// address in system.local or system.peers returns 127.0.0.1, the peer will be
	// set to 10.0.0.1 which is what will be used to connect to.
	IgnorePeerAddr bool

	// If DisableInitialHostLookup then the driver will not attempt to get host info
	// from the system.peers table, this will mean that the driver will connect to
	// hosts supplied and will not attempt to lookup the hosts information, this will
	// mean that data_centre, rack and token information will not be available and as
	// such host filtering and token aware query routing will not be available.
	DisableInitialHostLookup bool

	// Configure events the driver will register for
	Events struct {
		// disable registering for status events (node up/down)
		DisableNodeStatusEvents bool
		// disable registering for topology events (node added/removed/moved)
		DisableTopologyEvents bool
		// disable registering for schema events (keyspace/table/function removed/created/updated)
		DisableSchemaEvents bool
	}

	// DisableSkipMetadata will override the internal result metadata cache so that the driver does not
	// send skip_metadata for queries, this means that the result will always contain
	// the metadata to parse the rows and will not reuse the metadata from the prepared
	// statement.
	//
	// See https://issues.apache.org/jira/browse/CASSANDRA-10786
	DisableSkipMetadata bool

	// QueryObserver will set the provided query observer on all queries created from this session.
	// Use it to collect metrics / stats from queries by providing an implementation of QueryObserver.
	QueryObserver QueryObserver

	// BatchObserver will set the provided batch observer on all queries created from this session.
	// Use it to collect metrics / stats from batch queries by providing an implementation of BatchObserver.
	BatchObserver BatchObserver

	// ConnectObserver will set the provided connect observer on all queries
	// created from this session.
	ConnectObserver ConnectObserver

	// FrameHeaderObserver will set the provided frame header observer on all frames' headers created from this session.
	// Use it to collect metrics / stats from frames by providing an implementation of FrameHeaderObserver.
	FrameHeaderObserver FrameHeaderObserver

	// StreamObserver will be notified of stream state changes.
	// This can be used to track in-flight protocol requests and responses.
	StreamObserver StreamObserver

	// Default idempotence for queries
	DefaultIdempotence bool

	// The time to wait for frames before flushing the frames connection to Cassandra.
	// Can help reduce syscall overhead by making less calls to write. Set to 0 to
	// disable.
	//
	// (default: 200 microseconds)
	WriteCoalesceWaitTime time.Duration

	// Dialer will be used to establish all connections created for this Cluster.
	// If not provided, a default dialer configured with ConnectTimeout will be used.
	// Dialer is ignored if HostDialer is provided.
	Dialer Dialer

	// HostDialer will be used to establish all connections for this Cluster.
	// If not provided, Dialer will be used instead.
	HostDialer HostDialer

	// Logger for this ClusterConfig.
	// If not specified, defaults to the global gocql.Logger.
	Logger StdLogger
	// contains filtered or unexported fields
}

ClusterConfig is a struct to configure the default cluster implementation of gocql. It has a variety of attributes that can be used to modify the behavior to fit the most common use cases. Applications that require a different setup must implement their own cluster.

func NewCluster

func NewCluster(hosts ...string) *ClusterConfig

NewCluster generates a new config for the default cluster implementation.

The supplied hosts are used to initially connect to the cluster then the rest of the ring will be automatically discovered. It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, an IP address not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than 1 IP address then the driver may connect multiple times to the same host, and will not mark the node being down or up from events.

func (*ClusterConfig) CreateSession

func (cfg *ClusterConfig) CreateSession() (*Session, error)

CreateSession initializes the cluster based on this config and returns a session object that can be used to interact with the database.
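
A minimal sketch of configuring a cluster and opening a session; the host addresses and keyspace are assumptions for illustration:

cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2")
cluster.Keyspace = "example"
cluster.Consistency = gocql.Quorum
cluster.ProtoVersion = 4
session, err := cluster.CreateSession()
if err != nil {
	log.Fatal(err)
}
defer session.Close()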

type CollectionType

type CollectionType struct {
	NativeType
	Key  TypeInfo // only used for TypeMap
	Elem TypeInfo // only used for TypeMap, TypeList and TypeSet
}

func (CollectionType) New

func (t CollectionType) New() interface{}

func (CollectionType) NewWithError

func (t CollectionType) NewWithError() (interface{}, error)

func (CollectionType) String

func (c CollectionType) String() string

type ColumnIndexMetadata

type ColumnIndexMetadata struct {
	Name    string
	Type    string
	Options map[string]interface{}
}

type ColumnInfo

type ColumnInfo struct {
	Keyspace string
	Table    string
	Name     string
	TypeInfo TypeInfo
}

func (ColumnInfo) String

func (c ColumnInfo) String() string

type ColumnKind

type ColumnKind int
const (
	ColumnUnkownKind ColumnKind = iota
	ColumnPartitionKey
	ColumnClusteringKey
	ColumnRegular
	ColumnCompact
	ColumnStatic
)

func (ColumnKind) String

func (c ColumnKind) String() string

func (*ColumnKind) UnmarshalCQL

func (c *ColumnKind) UnmarshalCQL(typ TypeInfo, p []byte) error

type ColumnMetadata

type ColumnMetadata struct {
	Keyspace        string
	Table           string
	Name            string
	ComponentIndex  int
	Kind            ColumnKind
	Validator       string
	Type            TypeInfo
	ClusteringOrder string
	Order           ColumnOrder
	Index           ColumnIndexMetadata
}

schema metadata for a column

type ColumnOrder

type ColumnOrder bool

the ordering of the column with regard to its comparator

const (
	ASC  ColumnOrder = false
	DESC ColumnOrder = true
)

type Compressor

type Compressor interface {
	Name() string
	Encode(data []byte) ([]byte, error)
	Decode(data []byte) ([]byte, error)
}

type Conn

type Conn struct {
	// contains filtered or unexported fields
}

Conn is a single connection to a Cassandra node. It can be used to execute queries, but users are usually advised to use a more reliable, higher level API.

func (*Conn) Address

func (c *Conn) Address() string

func (*Conn) AvailableStreams

func (c *Conn) AvailableStreams() int

func (*Conn) Close

func (c *Conn) Close()

func (*Conn) Closed

func (c *Conn) Closed() bool

func (*Conn) Pick

func (c *Conn) Pick(qry *Query) *Conn

func (*Conn) Read

func (c *Conn) Read(p []byte) (n int, err error)

func (*Conn) UseKeyspace

func (c *Conn) UseKeyspace(keyspace string) error

func (*Conn) Write

func (c *Conn) Write(p []byte) (n int, err error)

type ConnConfig

type ConnConfig struct {
	ProtoVersion   int
	CQLVersion     string
	Timeout        time.Duration
	WriteTimeout   time.Duration
	ConnectTimeout time.Duration
	Dialer         Dialer
	HostDialer     HostDialer
	Compressor     Compressor
	Authenticator  Authenticator
	AuthProvider   func(h *HostInfo) (Authenticator, error)
	Keepalive      time.Duration
	Logger         StdLogger
	// contains filtered or unexported fields
}

type ConnErrorHandler

type ConnErrorHandler interface {
	HandleError(conn *Conn, err error, closed bool)
}

type ConnectObserver

type ConnectObserver interface {
	// ObserveConnect gets called when a new connection to cassandra is made.
	ObserveConnect(ObservedConnect)
}

ConnectObserver is the interface implemented by connect observers / stat collectors.

type Consistency

type Consistency uint16
const (
	Any         Consistency = 0x00
	One         Consistency = 0x01
	Two         Consistency = 0x02
	Three       Consistency = 0x03
	Quorum      Consistency = 0x04
	All         Consistency = 0x05
	LocalQuorum Consistency = 0x06
	EachQuorum  Consistency = 0x07
	LocalOne    Consistency = 0x0A
)

func MustParseConsistency

func MustParseConsistency(s string) (Consistency, error)

MustParseConsistency is the same as ParseConsistency except that it also returns an error value, which is always nil; it is kept for backwards compatibility. Deprecated: use ParseConsistency if you want a panic on parse error.

func ParseConsistency

func ParseConsistency(s string) Consistency

func ParseConsistencyWrapper

func ParseConsistencyWrapper(s string) (consistency Consistency, err error)

ParseConsistencyWrapper wraps gocql.ParseConsistency to provide an err return instead of a panic

func (Consistency) MarshalText

func (c Consistency) MarshalText() (text []byte, err error)

func (Consistency) String

func (c Consistency) String() string

func (*Consistency) UnmarshalText

func (c *Consistency) UnmarshalText(text []byte) error

type ConstantReconnectionPolicy

type ConstantReconnectionPolicy struct {
	MaxRetries int
	Interval   time.Duration
}

ConstantReconnectionPolicy has simple logic for returning a fixed reconnection interval.

Examples of usage:

cluster.ReconnectionPolicy = &gocql.ConstantReconnectionPolicy{MaxRetries: 10, Interval: 8 * time.Second}

func (*ConstantReconnectionPolicy) GetInterval

func (c *ConstantReconnectionPolicy) GetInterval(currentRetry int) time.Duration

func (*ConstantReconnectionPolicy) GetMaxRetries

func (c *ConstantReconnectionPolicy) GetMaxRetries() int

type ConvictionPolicy

type ConvictionPolicy interface {
	// Implementations should return `true` if the host should be convicted, `false` otherwise.
	AddFailure(error error, host *HostInfo) bool
	//Implementations should clear out any convictions or state regarding the host.
	Reset(host *HostInfo)
}

ConvictionPolicy interface is used by gocql to determine if a host should be marked as DOWN based on the error and host info

type DialedHost

type DialedHost struct {
	// Conn used to communicate with the server.
	Conn net.Conn

	// DisableCoalesce disables write coalescing for the Conn.
	// If true, the effect is the same as if WriteCoalesceWaitTime was configured to 0.
	DisableCoalesce bool
}

DialedHost contains information about established connection to a host.

func WrapTLS

func WrapTLS(ctx context.Context, conn net.Conn, addr string, tlsConfig *tls.Config) (*DialedHost, error)

WrapTLS optionally wraps a net.Conn connected to addr with the given tlsConfig. If the tlsConfig is nil, conn is not wrapped into a TLS session, so is insecure. If the tlsConfig does not have server name set, it is updated based on the default gocql rules.

type Dialer

type Dialer interface {
	DialContext(ctx context.Context, network, addr string) (net.Conn, error)
}

type DowngradingConsistencyRetryPolicy

type DowngradingConsistencyRetryPolicy struct {
	ConsistencyLevelsToTry []Consistency
}

func (*DowngradingConsistencyRetryPolicy) Attempt

func (*DowngradingConsistencyRetryPolicy) GetRetryType

func (d *DowngradingConsistencyRetryPolicy) GetRetryType(err error) RetryType

type Duration

type Duration struct {
	Months      int32
	Days        int32
	Nanoseconds int64
}

type ErrProtocol

type ErrProtocol struct {
	// contains filtered or unexported fields
}

type Error

type Error struct {
	Code    int
	Message string
}

func (Error) Error

func (e Error) Error() string

type ErrorMap

type ErrorMap map[string]uint16

type ExecutableQuery

type ExecutableQuery interface {
	GetRoutingKey() ([]byte, error)
	Keyspace() string
	Table() string
	IsIdempotent() bool

	RetryableQuery
	// contains filtered or unexported methods
}

type ExponentialBackoffRetryPolicy

type ExponentialBackoffRetryPolicy struct {
	NumRetries int
	Min, Max   time.Duration
}

ExponentialBackoffRetryPolicy sleeps between attempts

func (*ExponentialBackoffRetryPolicy) Attempt

func (*ExponentialBackoffRetryPolicy) GetRetryType

func (e *ExponentialBackoffRetryPolicy) GetRetryType(err error) RetryType

type ExponentialReconnectionPolicy

type ExponentialReconnectionPolicy struct {
	MaxRetries      int
	InitialInterval time.Duration
	MaxInterval     time.Duration
}

ExponentialReconnectionPolicy returns a growing reconnection interval.
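
Example of usage, mirroring the ConstantReconnectionPolicy example above; the retry count and intervals are arbitrary values for illustration:

cluster.ReconnectionPolicy = &gocql.ExponentialReconnectionPolicy{
	MaxRetries:      10,
	InitialInterval: 1 * time.Second,
	MaxInterval:     30 * time.Second,
}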

func (*ExponentialReconnectionPolicy) GetInterval

func (e *ExponentialReconnectionPolicy) GetInterval(currentRetry int) time.Duration

func (*ExponentialReconnectionPolicy) GetMaxRetries

func (e *ExponentialReconnectionPolicy) GetMaxRetries() int

type FrameHeaderObserver

type FrameHeaderObserver interface {
	// ObserveFrameHeader gets called on every received frame header.
	ObserveFrameHeader(context.Context, ObservedFrameHeader)
}

FrameHeaderObserver is the interface implemented by frame observers / stat collectors.

Experimental, this interface and use may change

type FunctionMetadata

type FunctionMetadata struct {
	Keyspace          string
	Name              string
	ArgumentTypes     []TypeInfo
	ArgumentNames     []string
	Body              string
	CalledOnNullInput bool
	Language          string
	ReturnType        TypeInfo
}

FunctionMetadata holds metadata for function constructs

type HostDialer

type HostDialer interface {
	// DialHost establishes a connection to the host.
	// The returned connection must be directly usable for CQL protocol,
	// specifically DialHost is responsible also for setting up the TLS session if needed.
	// DialHost should disable write coalescing if the returned net.Conn does not support writev.
	// As of Go 1.18, only plain TCP connections support writev, TLS sessions should disable coalescing.
	// You can use WrapTLS helper function if you don't need to override the TLS setup.
	DialHost(ctx context.Context, host *HostInfo) (*DialedHost, error)
}

HostDialer allows customizing connection to cluster nodes.

type HostFilter

type HostFilter interface {
	// Called when a new host is discovered, returning true will cause the host
	// to be added to the pools.
	Accept(host *HostInfo) bool
}

HostFilter interface is used when a host is discovered via server sent events.

func AcceptAllFilter

func AcceptAllFilter() HostFilter

AcceptAllFilter will accept all hosts

func DataCentreHostFilter

func DataCentreHostFilter(dataCentre string) HostFilter

DataCentreHostFilter filters all hosts such that they are in the same data centre as the supplied data centre.

func DenyAllFilter

func DenyAllFilter() HostFilter

func WhiteListHostFilter

func WhiteListHostFilter(hosts ...string) HostFilter

WhiteListHostFilter filters incoming hosts by checking that their address is in the initial hosts whitelist.

type HostFilterFunc

type HostFilterFunc func(host *HostInfo) bool

HostFilterFunc converts a func(host HostInfo) bool into a HostFilter

func (HostFilterFunc) Accept

func (fn HostFilterFunc) Accept(host *HostInfo) bool
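
A minimal sketch of installing host filters on the cluster config; the data centre and rack names are assumptions for illustration:

// Only connect to hosts in a single data centre.
cluster.HostFilter = gocql.DataCentreHostFilter("dc1")

// Or filter hosts programmatically with a HostFilterFunc.
cluster.HostFilter = gocql.HostFilterFunc(func(host *gocql.HostInfo) bool {
	return host.Rack() != "rack-b"
})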

type HostInfo

type HostInfo struct {
	// contains filtered or unexported fields
}

func (*HostInfo) BroadcastAddress

func (h *HostInfo) BroadcastAddress() net.IP

func (*HostInfo) ClusterName

func (h *HostInfo) ClusterName() string

func (*HostInfo) ConnectAddress

func (h *HostInfo) ConnectAddress() net.IP

Returns the address that should be used to connect to the host. If you wish to override this, use an AddressTranslator or use a HostFilter to SetConnectAddress()

func (*HostInfo) ConnectAddressAndPort

func (h *HostInfo) ConnectAddressAndPort() string

func (*HostInfo) DSEVersion

func (h *HostInfo) DSEVersion() string

func (*HostInfo) DataCenter

func (h *HostInfo) DataCenter() string

func (*HostInfo) Equal

func (h *HostInfo) Equal(host *HostInfo) bool

func (*HostInfo) Graph

func (h *HostInfo) Graph() bool

func (*HostInfo) HostID

func (h *HostInfo) HostID() string

func (*HostInfo) HostnameAndPort

func (h *HostInfo) HostnameAndPort() string

func (*HostInfo) IsUp

func (h *HostInfo) IsUp() bool

func (*HostInfo) ListenAddress

func (h *HostInfo) ListenAddress() net.IP

func (*HostInfo) Partitioner

func (h *HostInfo) Partitioner() string

func (*HostInfo) Peer

func (h *HostInfo) Peer() net.IP

func (*HostInfo) Port

func (h *HostInfo) Port() int

func (*HostInfo) PreferredIP

func (h *HostInfo) PreferredIP() net.IP

func (*HostInfo) RPCAddress

func (h *HostInfo) RPCAddress() net.IP

func (*HostInfo) Rack

func (h *HostInfo) Rack() string

func (*HostInfo) SetConnectAddress

func (h *HostInfo) SetConnectAddress(address net.IP) *HostInfo

func (*HostInfo) SetHostID

func (h *HostInfo) SetHostID(hostID string)

func (*HostInfo) State

func (h *HostInfo) State() nodeState

func (*HostInfo) String

func (h *HostInfo) String() string

func (*HostInfo) Tokens

func (h *HostInfo) Tokens() []string

func (*HostInfo) Version

func (h *HostInfo) Version() cassVersion

func (*HostInfo) WorkLoad

func (h *HostInfo) WorkLoad() string

type HostSelectionPolicy

type HostSelectionPolicy interface {
	HostStateNotifier
	SetPartitioner
	KeyspaceChanged(KeyspaceUpdateEvent)
	Init(*Session)
	IsLocal(host *HostInfo) bool
	// Pick returns an iteration function over selected hosts.
	// Multiple attempts of a single query execution won't call the returned NextHost function concurrently,
	// so it's safe to have internal state without additional synchronization as long as every call to Pick returns
	// a different instance of NextHost.
	Pick(ExecutableQuery) NextHost
}

HostSelectionPolicy is an interface for selecting the most appropriate host to execute a given query. HostSelectionPolicy instances cannot be shared between sessions.

func DCAwareRoundRobinPolicy

func DCAwareRoundRobinPolicy(localDC string) HostSelectionPolicy

DCAwareRoundRobinPolicy is a host selection policy which will prioritize and return hosts which are in the local datacentre before returning hosts in all other datacentres.

func HostPoolHostPolicy

func HostPoolHostPolicy(hp hostpool.HostPool) HostSelectionPolicy

HostPoolHostPolicy is a host policy which uses the bitly/go-hostpool library to distribute queries between hosts and prevent sending queries to unresponsive hosts. When creating the host pool that is passed to the policy, use an empty slice of hosts, as the host pool will be populated later by gocql. See below for examples of usage:

// Create host selection policy using a simple host pool
cluster.PoolConfig.HostSelectionPolicy = HostPoolHostPolicy(hostpool.New(nil))

// Create host selection policy using an epsilon greedy pool
cluster.PoolConfig.HostSelectionPolicy = HostPoolHostPolicy(
    hostpool.NewEpsilonGreedy(nil, 0, &hostpool.LinearEpsilonValueCalculator{}),
)

func RackAwareRoundRobinPolicy

func RackAwareRoundRobinPolicy(localDC string, localRack string) HostSelectionPolicy

func RoundRobinHostPolicy

func RoundRobinHostPolicy() HostSelectionPolicy

RoundRobinHostPolicy is a round-robin load balancing policy, where each host is tried sequentially for each query.

func TokenAwareHostPolicy

func TokenAwareHostPolicy(fallback HostSelectionPolicy, opts ...func(*tokenAwareHostPolicy)) HostSelectionPolicy

TokenAwareHostPolicy is a token aware host selection policy, where hosts are selected based on the partition key, so queries are sent to the host which owns the partition. Fallback is used when routing information is not available.
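
A minimal sketch of combining token awareness with a DC-aware fallback; the data centre name is an assumption for illustration:

cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(
	gocql.DCAwareRoundRobinPolicy("dc1"),
)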

type HostStateNotifier

type HostStateNotifier interface {
	AddHost(host *HostInfo)
	RemoveHost(host *HostInfo)
	HostUp(host *HostInfo)
	HostDown(host *HostInfo)
}

type HostTierer

type HostTierer interface {
	// HostTier returns an integer specifying how far a host is from the client.
	// Tier must start at 0.
	// The value is used to prioritize closer hosts during host selection.
	// For example this could be:
	// 0 - local rack, 1 - local DC, 2 - remote DC
	// or:
	// 0 - local DC, 1 - remote DC
	HostTier(host *HostInfo) uint

	// This function returns the maximum possible host tier
	MaxHostTier() uint
}

type Iter

type Iter struct {
	// contains filtered or unexported fields
}

Iter represents an iterator that can be used to iterate over all rows that were returned by a query. The iterator might send additional queries to the database during the iteration if paging was enabled.

func (*Iter) Close

func (iter *Iter) Close() error

Close closes the iterator and returns any errors that happened during the query or the iteration.

func (*Iter) Columns

func (iter *Iter) Columns() []ColumnInfo

Columns returns the name and type of the selected columns.

func (*Iter) GetCustomPayload

func (iter *Iter) GetCustomPayload() map[string][]byte

GetCustomPayload returns any parsed custom payload results if given in the response from Cassandra. Note that the result is not a copy.

This additional feature of CQL Protocol v4 allows additional results and query information to be returned by custom QueryHandlers running in your C* cluster. See https://datastax.github.io/java-driver/manual/custom_payloads/

func (*Iter) Host

func (iter *Iter) Host() *HostInfo

Host returns the host which the query was sent to.

func (*Iter) MapScan

func (iter *Iter) MapScan(m map[string]interface{}) bool

MapScan takes a map[string]interface{} and populates it with a row that is returned from cassandra.

Each call to MapScan() must be called with a new map object. During the call to MapScan() any pointers in the existing map are replaced with non pointer types before the call returns

iter := session.Query(`SELECT * FROM mytable`).Iter()
for {
	// New map each iteration
	row := make(map[string]interface{})
	if !iter.MapScan(row) {
		break
	}
	// Do things with row
	if fullname, ok := row["fullname"]; ok {
		fmt.Printf("Full Name: %s\n", fullname)
	}
}

You can also pass pointers in the map before each call

var fullName FullName // Implements gocql.Unmarshaler and gocql.Marshaler interfaces
var address net.IP
var age int
iter := session.Query(`SELECT * FROM scan_map_table`).Iter()
for {
	// New map each iteration
	row := map[string]interface{}{
		"fullname": &fullName,
		"age":      &age,
		"address":  &address,
	}
	if !iter.MapScan(row) {
		break
	}
	fmt.Printf("First: %s Age: %d Address: %q\n", fullName.FirstName, age, address)
}

func (*Iter) NumRows

func (iter *Iter) NumRows() int

NumRows returns the number of rows in the current page; it is updated as new pages are fetched. It is not the total number of rows this iter will return unless only a single page is returned.

func (*Iter) PageState

func (iter *Iter) PageState() []byte

PageState returns the current paging state for a query, which can be used by subsequent queries to resume paging from this point.

func (*Iter) RowData

func (iter *Iter) RowData() (RowData, error)

func (*Iter) Scan

func (iter *Iter) Scan(dest ...interface{}) bool

Scan consumes the next row of the iterator and copies the columns of the current row into the values pointed at by dest. Use nil as a dest value to skip the corresponding column. Scan might send additional queries to the database to retrieve the next set of rows if paging was enabled.

Scan returns true if the row was successfully unmarshaled or false if the end of the result set was reached or if an error occurred. Close should be called afterwards to retrieve any potential errors.
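
A minimal sketch of the Scan/Close loop described above; the table and column names are assumptions for illustration:

iter := session.Query(`SELECT id, value FROM example.kv`).Iter()
var (
	id    int
	value string
)
for iter.Scan(&id, &value) {
	fmt.Println(id, value)
}
if err := iter.Close(); err != nil {
	log.Fatal(err)
}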

func (*Iter) Scanner

func (iter *Iter) Scanner() Scanner

Scanner returns a row Scanner which provides an interface to scan rows in a manner which is similar to database/sql. The iter should NOT be used again after calling this method.
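
A minimal sketch of the Scanner-based variant of the same loop; table and column names are again assumptions for illustration:

scanner := session.Query(`SELECT id, value FROM example.kv`).Iter().Scanner()
for scanner.Next() {
	var (
		id    int
		value string
	)
	if err := scanner.Scan(&id, &value); err != nil {
		log.Fatal(err)
	}
	fmt.Println(id, value)
}
// Err reports any error encountered during iteration and releases the iterator's resources.
if err := scanner.Err(); err != nil {
	log.Fatal(err)
}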

func (*Iter) SliceMap

func (iter *Iter) SliceMap() ([]map[string]interface{}, error)

SliceMap is a helper function to make the API easier to use. It returns the data from the query in the form of []map[string]interface{}.

func (*Iter) Warnings

func (iter *Iter) Warnings() []string

Warnings returns any warnings generated if given in the response from Cassandra.

This is only available starting with CQL Protocol v4.

func (*Iter) WillSwitchPage

func (iter *Iter) WillSwitchPage() bool

WillSwitchPage detects whether the iterator has reached the end of the current page and a next page is available.

type KeyspaceMetadata

type KeyspaceMetadata struct {
	Name            string
	DurableWrites   bool
	StrategyClass   string
	StrategyOptions map[string]interface{}
	Tables          map[string]*TableMetadata
	Functions       map[string]*FunctionMetadata
	Aggregates      map[string]*AggregateMetadata
	// Deprecated: use the MaterializedViews field for views and UserTypes field for udts instead.
	Views             map[string]*ViewMetadata
	MaterializedViews map[string]*MaterializedViewMetadata
	UserTypes         map[string]*UserTypeMetadata
}

schema metadata for a keyspace

type KeyspaceUpdateEvent

type KeyspaceUpdateEvent struct {
	Keyspace string
	Change   string
}

type MarshalError

type MarshalError string

func (MarshalError) Error

func (m MarshalError) Error() string

type Marshaler

type Marshaler interface {
	MarshalCQL(info TypeInfo) ([]byte, error)
}

Marshaler is the interface implemented by objects that can marshal themselves into values understood by Cassandra.

type MaterializedViewMetadata

type MaterializedViewMetadata struct {
	Keyspace                string
	Name                    string
	BaseTableId             UUID
	BaseTable               *TableMetadata
	BloomFilterFpChance     float64
	Caching                 map[string]string
	Comment                 string
	Compaction              map[string]string
	Compression             map[string]string
	CrcCheckChance          float64
	DcLocalReadRepairChance float64
	DefaultTimeToLive       int
	Extensions              map[string]string
	GcGraceSeconds          int
	Id                      UUID
	IncludeAllColumns       bool
	MaxIndexInterval        int
	MemtableFlushPeriodInMs int
	MinIndexInterval        int
	ReadRepairChance        float64
	SpeculativeRetry        string
	// contains filtered or unexported fields
}

MaterializedViewMetadata holds the metadata for materialized views.

type NativeType

type NativeType struct {
	// contains filtered or unexported fields
}

func NewNativeType

func NewNativeType(proto byte, typ Type, custom string) NativeType

func (NativeType) Custom

func (s NativeType) Custom() string

func (NativeType) New

func (t NativeType) New() interface{}

func (NativeType) NewWithError

func (t NativeType) NewWithError() (interface{}, error)

func (NativeType) String

func (s NativeType) String() string

func (NativeType) Type

func (s NativeType) Type() Type

func (NativeType) Version

func (s NativeType) Version() byte

type NextHost

type NextHost func() SelectedHost

NextHost is an iteration function over picked hosts

type NonSpeculativeExecution

type NonSpeculativeExecution struct{}

func (NonSpeculativeExecution) Attempts

func (sp NonSpeculativeExecution) Attempts() int

func (NonSpeculativeExecution) Delay

type ObservedBatch

type ObservedBatch struct {
	Keyspace   string
	Statements []string

	// Values holds a slice of bound values for each statement.
	// Values[i] are bound values passed to Statements[i].
	// Do not modify the values here, they are shared with multiple goroutines.
	Values [][]interface{}

	Start time.Time // time immediately before the batch query was called
	End   time.Time // time immediately after the batch query returned

	// Host is the information about the host that performed the batch
	Host *HostInfo

	// Err is the error in the batch query.
	// It only tracks network errors or errors of bad cassandra syntax, in particular selects with no match return nil error
	Err error

	// The metrics per this host
	Metrics *hostMetrics

	// Attempt is the index of attempt at executing this query.
	// The first attempt is number zero and any retries have non-zero attempt number.
	Attempt int
}

type ObservedConnect

type ObservedConnect struct {
	// Host is the information about the host about to connect
	Host *HostInfo

	Start time.Time // time immediately before the dial is called
	End   time.Time // time immediately after the dial returned

	// Err is the connection error (if any)
	Err error
}

type ObservedFrameHeader

type ObservedFrameHeader struct {
	Version protoVersion
	Flags   byte
	Stream  int16
	Opcode  frameOp
	Length  int32

	// StartHeader is the time we started reading the frame header off the network connection.
	Start time.Time
	// EndHeader is the time we finished reading the frame header off the network connection.
	End time.Time

	// Host is Host of the connection the frame header was read from.
	Host *HostInfo
}

func (ObservedFrameHeader) String

func (f ObservedFrameHeader) String() string

type ObservedQuery

type ObservedQuery struct {
	Keyspace  string
	Statement string

	// Values holds a slice of bound values for the query.
	// Do not modify the values here, they are shared with multiple goroutines.
	Values []interface{}

	Start time.Time // time immediately before the query was called
	End   time.Time // time immediately after the query returned

	// Rows is the number of rows in the current iter.
	// In paginated queries, rows from previous scans are not counted.
	// Rows is not used in batch queries and remains at the default value
	Rows int

	// Host is the information about the host that performed the query
	Host *HostInfo

	// The metrics per this host
	Metrics *hostMetrics

	// Err is the error in the query.
	// It only tracks network errors or errors of bad cassandra syntax, in particular selects with no match return nil error
	Err error

	// Attempt is the index of attempt at executing this query.
	// The first attempt is number zero and any retries have non-zero attempt number.
	Attempt int
}

type ObservedStream

type ObservedStream struct {
	// Host of the connection used to send the stream.
	Host *HostInfo
}

ObservedStream observes a single request/response stream.

type PasswordAuthenticator

type PasswordAuthenticator struct {
	Username              string
	Password              string
	AllowedAuthenticators []string
}

func (PasswordAuthenticator) Challenge

func (p PasswordAuthenticator) Challenge(req []byte) ([]byte, Authenticator, error)

func (PasswordAuthenticator) Success

func (p PasswordAuthenticator) Success(data []byte) error
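
A minimal sketch of enabling password authentication on the cluster; the credentials are placeholders:

cluster.Authenticator = gocql.PasswordAuthenticator{
	Username: "cassandra",
	Password: "cassandra",
}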

type PoolConfig

type PoolConfig struct {
	// HostSelectionPolicy sets the policy for selecting which host to use for a
	// given query (default: RoundRobinHostPolicy())
	// It is not supported to use a single HostSelectionPolicy in multiple sessions
	// (even if you close the old session before using in a new session).
	HostSelectionPolicy HostSelectionPolicy
}

PoolConfig configures the connection pool used by the driver, it defaults to using a round-robin host selection policy and a round-robin connection selection policy for each host.

type Query

type Query struct {
	// contains filtered or unexported fields
}

Query represents a CQL statement that can be executed.

func (*Query) AddAttempts

func (q *Query) AddAttempts(i int, host *HostInfo)

func (*Query) AddLatency

func (q *Query) AddLatency(l int64, host *HostInfo)

func (*Query) Attempts

func (q *Query) Attempts() int

Attempts returns the number of times the query was executed.

func (*Query) Bind

func (q *Query) Bind(v ...interface{}) *Query

Bind sets query arguments of query. This can also be used to rebind new query arguments to an existing query instance.

func (*Query) Cancel

func (q *Query) Cancel()

Deprecated: Cancel does nothing; cancel the context passed to WithContext instead.

func (*Query) Consistency

func (q *Query) Consistency(c Consistency) *Query

Consistency sets the consistency level for this query. If no consistency level has been set, the default consistency level of the cluster is used.

func (*Query) Context

func (q *Query) Context() context.Context

func (*Query) CustomPayload

func (q *Query) CustomPayload(customPayload map[string][]byte) *Query

CustomPayload sets the custom payload for this query.

func (*Query) DefaultTimestamp

func (q *Query) DefaultTimestamp(enable bool) *Query

DefaultTimestamp enables the default timestamp flag on the query. If enabled, a client-side timestamp replaces the server-side assigned timestamp as the default timestamp. Note that a timestamp in the query itself will still override this timestamp. This is entirely optional.

Only available on protocol >= 3

func (*Query) Exec

func (q *Query) Exec() error

Exec executes the query without returning any rows.
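
A minimal sketch of binding values and executing a statement without reading rows; the table and columns are assumptions for illustration:

if err := session.Query(`INSERT INTO example.kv (id, value) VALUES (?, ?)`,
	1, "a").Exec(); err != nil {
	log.Fatal(err)
}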

func (*Query) GetConsistency

func (q *Query) GetConsistency() Consistency

GetConsistency returns the currently configured consistency level for the query.

func (*Query) GetRoutingKey

func (q *Query) GetRoutingKey() ([]byte, error)

GetRoutingKey gets the routing key to use for routing this query. If a routing key has not been explicitly set, then the routing key will be constructed if possible using the keyspace's schema and the query info for this query statement. If the routing key cannot be determined then nil will be returned with no error. On any error condition, an error description will be returned.

func (*Query) Idempotent

func (q *Query) Idempotent(value bool) *Query

Idempotent marks the query as being idempotent or not depending on the value. Non-idempotent query won't be retried. See "Retries and speculative execution" in package docs for more details.

func (*Query) IsIdempotent

func (q *Query) IsIdempotent() bool

IsIdempotent returns whether the query is marked as idempotent. Non-idempotent query won't be retried. See "Retries and speculative execution" in package docs for more details.

func (*Query) Iter

func (q *Query) Iter() *Iter

Iter executes the query and returns an iterator capable of iterating over all results.

func (*Query) Keyspace

func (q *Query) Keyspace() string

Keyspace returns the keyspace the query will be executed against.

func (*Query) Latency

func (q *Query) Latency() int64

Latency returns the average amount of nanoseconds per attempt of the query.

func (*Query) MapScan

func (q *Query) MapScan(m map[string]interface{}) error

MapScan executes the query, copies the columns of the first selected row into the map pointed at by m and discards the rest. If no rows were selected, ErrNotFound is returned.

func (*Query) MapScanCAS

func (q *Query) MapScanCAS(dest map[string]interface{}) (applied bool, err error)

MapScanCAS executes a lightweight transaction (i.e. an UPDATE or INSERT statement containing an IF clause). If the transaction fails because the existing values did not match, the previous values will be stored in dest map.

For INSERT .. IF NOT EXISTS, previous values are returned as if by SELECT *, so using ScanCAS with INSERT is inherently prone to column mismatching. MapScanCAS was added to capture them safely.

Example

ExampleQuery_MapScanCAS demonstrates how to execute a single-statement lightweight transaction.

package main

import (
	"context"
	"fmt"
	"github.com/gocql/gocql"
	"log"
)

func main() {
	/* The example assumes the following CQL was used to setup the keyspace:
	create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
	create table example.my_lwt_table(pk int, version int, value text, PRIMARY KEY(pk));
	*/
	cluster := gocql.NewCluster("localhost:9042")
	cluster.Keyspace = "example"
	cluster.ProtoVersion = 4
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	ctx := context.Background()

	err = session.Query("INSERT INTO example.my_lwt_table (pk, version, value) VALUES (?, ?, ?)",
		1, 1, "a").WithContext(ctx).Exec()
	if err != nil {
		log.Fatal(err)
	}
	m := make(map[string]interface{})
	applied, err := session.Query("UPDATE example.my_lwt_table SET value = ? WHERE pk = ? IF version = ?",
		"b", 1, 0).WithContext(ctx).MapScanCAS(m)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(applied, m)

	var value string
	err = session.Query("SELECT value FROM example.my_lwt_table WHERE pk = ?", 1).WithContext(ctx).
		Scan(&value)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(value)

	m = make(map[string]interface{})
	applied, err = session.Query("UPDATE example.my_lwt_table SET value = ? WHERE pk = ? IF version = ?",
		"b", 1, 1).WithContext(ctx).MapScanCAS(m)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(applied, m)

	var value2 string
	err = session.Query("SELECT value FROM example.my_lwt_table WHERE pk = ?", 1).WithContext(ctx).
		Scan(&value2)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(value2)
	// false map[version:1]
	// a
	// true map[]
	// b
}
Output:

func (*Query) NoSkipMetadata

func (q *Query) NoSkipMetadata() *Query

NoSkipMetadata will override the internal result metadata cache so that the driver does not send skip_metadata for queries, this means that the result will always contain the metadata to parse the rows and will not reuse the metadata from the prepared statement. This should only be used to work around cassandra bugs, such as when using CAS operations which do not end in Cas.

See https://issues.apache.org/jira/browse/CASSANDRA-11099 https://github.com/gocql/gocql/issues/612

func (*Query) Observer

func (q *Query) Observer(observer QueryObserver) *Query

Observer enables query-level observer on this query. The provided observer will be called every time this query is executed.

func (*Query) PageSize

func (q *Query) PageSize(n int) *Query

PageSize will tell the iterator to fetch the result in pages of size n. This is useful for iterating over large result sets, but setting the page size too low might decrease the performance. This feature is only available in Cassandra 2 and onwards.

func (*Query) PageState

func (q *Query) PageState(state []byte) *Query

PageState sets the paging state for the query to resume paging from a specific point. Setting this disables automatic query paging for this query, and it must be set for all subsequent pages.
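
A minimal sketch of manual paging with PageSize and PageState; the table name and page size are assumptions for illustration:

var pageState []byte
for {
	iter := session.Query(`SELECT id, value FROM example.kv`).
		PageSize(100).PageState(pageState).Iter()
	// PageState is available as soon as the iterator is created.
	nextPageState := iter.PageState()
	var (
		id    int
		value string
	)
	for iter.Scan(&id, &value) {
		fmt.Println(id, value)
	}
	if err := iter.Close(); err != nil {
		log.Fatal(err)
	}
	if len(nextPageState) == 0 {
		break
	}
	pageState = nextPageState
}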

func (*Query) Prefetch

func (q *Query) Prefetch(p float64) *Query

Prefetch sets the default threshold for pre-fetching new pages. If there are only p*pageSize rows remaining, the next page will be requested automatically.

func (*Query) Release

func (q *Query) Release()

Release releases a query back into a pool of queries. Released Queries cannot be reused.

Example:

qry := session.Query("SELECT * FROM my_table")
qry.Exec()
qry.Release()

func (*Query) RetryPolicy

func (q *Query) RetryPolicy(r RetryPolicy) *Query

RetryPolicy sets the policy to use when retrying the query.

func (*Query) RoutingKey

func (q *Query) RoutingKey(routingKey []byte) *Query

RoutingKey sets the routing key to use when a token aware connection pool is used to optimize the routing of this query.

func (*Query) Scan

func (q *Query) Scan(dest ...interface{}) error

Scan executes the query, copies the columns of the first selected row into the values pointed at by dest and discards the rest. If no rows were selected, ErrNotFound is returned.

func (*Query) ScanCAS

func (q *Query) ScanCAS(dest ...interface{}) (applied bool, err error)

ScanCAS executes a lightweight transaction (i.e. an UPDATE or INSERT statement containing an IF clause). If the transaction fails because the existing values did not match, the previous values will be stored in dest.

For INSERT .. IF NOT EXISTS, previous values are returned as if by SELECT *, so using ScanCAS with INSERT is inherently prone to column mismatching. Use MapScanCAS to capture them safely.

func (*Query) SerialConsistency

func (q *Query) SerialConsistency(cons SerialConsistency) *Query

SerialConsistency sets the consistency level for the serial phase of conditional updates. That consistency can only be either SERIAL or LOCAL_SERIAL and, if not set, it defaults to SERIAL. This option will be ignored for anything other than a conditional update/insert.

func (*Query) SetConsistency

func (q *Query) SetConsistency(c Consistency)

Same as Consistency but without a return value

func (*Query) SetSpeculativeExecutionPolicy

func (q *Query) SetSpeculativeExecutionPolicy(sp SpeculativeExecutionPolicy) *Query

SetSpeculativeExecutionPolicy sets the execution policy
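
Speculative execution only applies to queries marked idempotent. A sketch, assuming the SimpleSpeculativeExecution policy shipped with gocql; the table, attempt count and delay are arbitrary values for illustration:

var value string
err := session.Query(`SELECT value FROM example.kv WHERE id = ?`, 1).
	Idempotent(true).
	SetSpeculativeExecutionPolicy(&gocql.SimpleSpeculativeExecution{
		NumAttempts:  1,
		TimeoutDelay: 200 * time.Millisecond,
	}).
	Scan(&value)
if err != nil {
	log.Fatal(err)
}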

func (Query) Statement

func (q Query) Statement() string

Statement returns the statement that was used to generate this query.

func (Query) String

func (q Query) String() string

String implements the stringer interface.

func (*Query) Table

func (q *Query) Table() string

Table returns name of the table the query will be executed against.

func (*Query) Trace

func (q *Query) Trace(trace Tracer) *Query

Trace enables tracing of this query. Look at the documentation of the Tracer interface to learn more about tracing.

func (Query) Values

func (q Query) Values() []interface{}

Values returns the values passed in via Bind. This can be used by a wrapper type that needs to access the bound values.

func (*Query) WithContext

func (q *Query) WithContext(ctx context.Context) *Query

WithContext returns a shallow copy of q with its context set to ctx.

The provided context controls the entire lifetime of executing a query, queries will be canceled and return once the context is canceled.
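
A minimal sketch of bounding a query with a context timeout; the timeout value and statement are assumptions for illustration:

ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel()

var value string
err := session.Query(`SELECT value FROM example.kv WHERE id = ?`, 1).
	WithContext(ctx).Scan(&value)
if err != nil {
	log.Fatal(err)
}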

func (*Query) WithTimestamp

func (q *Query) WithTimestamp(timestamp int64) *Query

WithTimestamp enables the default timestamp flag on the query, like DefaultTimestamp does, but also allows the timestamp value to be defined explicitly. It works the same way as USING TIMESTAMP in the query itself, but should not break prepared query optimization.

Only available on protocol >= 3

type QueryInfo

type QueryInfo struct {
	Id          []byte
	Args        []ColumnInfo
	Rval        []ColumnInfo
	PKeyColumns []int
}

type QueryObserver

type QueryObserver interface {
	// ObserveQuery gets called on every query to cassandra, including all queries in an iterator when paging is enabled.
	// It doesn't get called if there is no query because the session is closed or there are no connections available.
	// The error reported only shows query errors, i.e. if a SELECT is valid but finds no matches it will be nil.
	ObserveQuery(context.Context, ObservedQuery)
}

QueryObserver is the interface implemented by query observers / stat collectors.

Experimental, this interface and use may change

type ReadyPolicy

type ReadyPolicy interface {
	Ready() bool
}

ReadyPolicy defines a policy for when a HostSelectionPolicy can be used. After each host connects during session initialization, the Ready method will be called. If you only need a single Host to be up you can wrap a HostSelectionPolicy policy with SingleHostReadyPolicy.

type ReconnectionPolicy

type ReconnectionPolicy interface {
	GetInterval(currentRetry int) time.Duration
	GetMaxRetries() int
}

ReconnectionPolicy interface is used by gocql to determine if reconnection can be attempted after connection error. The interface allows gocql users to implement their own logic to determine how to attempt reconnection.

type RequestErrAlreadyExists

type RequestErrAlreadyExists struct {
	Keyspace string
	Table    string
	// contains filtered or unexported fields
}

func (RequestErrAlreadyExists) Code

func (e RequestErrAlreadyExists) Code() int

func (RequestErrAlreadyExists) Error

func (e RequestErrAlreadyExists) Error() string

func (RequestErrAlreadyExists) Message

func (e RequestErrAlreadyExists) Message() string

func (RequestErrAlreadyExists) String

func (e RequestErrAlreadyExists) String() string

type RequestErrCASWriteUnknown

type RequestErrCASWriteUnknown struct {
	Consistency Consistency
	Received    int
	BlockFor    int
	// contains filtered or unexported fields
}

RequestErrCASWriteUnknown is distinct error for ErrCodeCasWriteUnknown.

See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1387-L1397

func (RequestErrCASWriteUnknown) Code

func (e RequestErrCASWriteUnknown) Code() int

func (RequestErrCASWriteUnknown) Error

func (e RequestErrCASWriteUnknown) Error() string

func (RequestErrCASWriteUnknown) Message

func (e RequestErrCASWriteUnknown) Message() string

func (RequestErrCASWriteUnknown) String

func (e RequestErrCASWriteUnknown) String() string

type RequestErrCDCWriteFailure

type RequestErrCDCWriteFailure struct {
	// contains filtered or unexported fields
}

func (RequestErrCDCWriteFailure) Code

func (e RequestErrCDCWriteFailure) Code() int

func (RequestErrCDCWriteFailure) Error

func (e RequestErrCDCWriteFailure) Error() string

func (RequestErrCDCWriteFailure) Message

func (e RequestErrCDCWriteFailure) Message() string

func (RequestErrCDCWriteFailure) String

func (e RequestErrCDCWriteFailure) String() string

type RequestErrFunctionFailure

type RequestErrFunctionFailure struct {
	Keyspace string
	Function string
	ArgTypes []string
	// contains filtered or unexported fields
}

func (RequestErrFunctionFailure) Code

func (e RequestErrFunctionFailure) Code() int

func (RequestErrFunctionFailure) Error

func (e RequestErrFunctionFailure) Error() string

func (RequestErrFunctionFailure) Message

func (e RequestErrFunctionFailure) Message() string

func (RequestErrFunctionFailure) String

func (e RequestErrFunctionFailure) String() string

type RequestErrReadFailure

type RequestErrReadFailure struct {
	Consistency Consistency
	Received    int
	BlockFor    int
	NumFailures int
	DataPresent bool
	ErrorMap    ErrorMap
	// contains filtered or unexported fields
}

func (RequestErrReadFailure) Code

func (e RequestErrReadFailure) Code() int

func (RequestErrReadFailure) Error

func (e RequestErrReadFailure) Error() string

func (RequestErrReadFailure) Message

func (e RequestErrReadFailure) Message() string

func (RequestErrReadFailure) String

func (e RequestErrReadFailure) String() string

type RequestErrReadTimeout

type RequestErrReadTimeout struct {
	Consistency Consistency
	Received    int
	BlockFor    int
	DataPresent byte
	// contains filtered or unexported fields
}

func (RequestErrReadTimeout) Code

func (e RequestErrReadTimeout) Code() int

func (RequestErrReadTimeout) Error

func (e RequestErrReadTimeout) Error() string

func (RequestErrReadTimeout) Message

func (e RequestErrReadTimeout) Message() string

func (RequestErrReadTimeout) String

func (e RequestErrReadTimeout) String() string

type RequestErrUnavailable

type RequestErrUnavailable struct {
	Consistency Consistency
	Required    int
	Alive       int
	// contains filtered or unexported fields
}

func (RequestErrUnavailable) Code

func (e RequestErrUnavailable) Code() int

func (RequestErrUnavailable) Error

func (e RequestErrUnavailable) Error() string

func (RequestErrUnavailable) Message

func (e RequestErrUnavailable) Message() string

func (*RequestErrUnavailable) String

func (e *RequestErrUnavailable) String() string

type RequestErrUnprepared

type RequestErrUnprepared struct {
	StatementId []byte
	// contains filtered or unexported fields
}

func (RequestErrUnprepared) Code

func (e RequestErrUnprepared) Code() int

func (RequestErrUnprepared) Error

func (e RequestErrUnprepared) Error() string

func (RequestErrUnprepared) Message

func (e RequestErrUnprepared) Message() string

func (RequestErrUnprepared) String

func (e RequestErrUnprepared) String() string

type RequestErrWriteFailure

type RequestErrWriteFailure struct {
	Consistency Consistency
	Received    int
	BlockFor    int
	NumFailures int
	WriteType   string
	ErrorMap    ErrorMap
	// contains filtered or unexported fields
}

func (RequestErrWriteFailure) Code

func (e RequestErrWriteFailure) Code() int

func (RequestErrWriteFailure) Error

func (e RequestErrWriteFailure) Error() string

func (RequestErrWriteFailure) Message

func (e RequestErrWriteFailure) Message() string

func (RequestErrWriteFailure) String

func (e RequestErrWriteFailure) String() string

type RequestErrWriteTimeout

type RequestErrWriteTimeout struct {
	Consistency Consistency
	Received    int
	BlockFor    int
	WriteType   string
	// contains filtered or unexported fields
}

func (RequestErrWriteTimeout) Code

func (e RequestErrWriteTimeout) Code() int

func (RequestErrWriteTimeout) Error

func (e RequestErrWriteTimeout) Error() string

func (RequestErrWriteTimeout) Message

func (e RequestErrWriteTimeout) Message() string

func (RequestErrWriteTimeout) String

func (e RequestErrWriteTimeout) String() string

type RequestError

type RequestError interface {
	Code() int
	Message() string
	Error() string
}
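
RequestError is implemented by the RequestErr* types above. As a sketch (assuming the standard errors and log packages), a caller can inspect a failed query like this:

var reqErr gocql.RequestError
if errors.As(err, &reqErr) {
	// Code returns the protocol error code, Message the server-provided text.
	log.Printf("request failed with code 0x%x: %s", reqErr.Code(), reqErr.Message())
}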

type RetryPolicy

type RetryPolicy interface {
	Attempt(RetryableQuery) bool
	GetRetryType(error) RetryType
}

RetryPolicy interface is used by gocql to determine if a query can be attempted again after a retryable error has been received. The interface allows gocql users to implement their own logic to determine if a query can be attempted again.

See SimpleRetryPolicy as an example of implementing and using a RetryPolicy interface.
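
For illustration only, a minimal sketch of a custom policy built on the RetryType constants listed below; the type name is hypothetical, and it assumes unavailable errors surface as *gocql.RequestErrUnavailable:

// hostSwitchingRetryPolicy is a hypothetical RetryPolicy that retries a fixed
// number of times and prefers another host when replicas are unavailable.
type hostSwitchingRetryPolicy struct {
	MaxRetries int
}

// Attempt allows another try while the number of attempts so far does not
// exceed MaxRetries.
func (p *hostSwitchingRetryPolicy) Attempt(q gocql.RetryableQuery) bool {
	return q.Attempts() <= p.MaxRetries
}

// GetRetryType picks the next host for unavailable errors and otherwise
// retries on the same connection.
func (p *hostSwitchingRetryPolicy) GetRetryType(err error) gocql.RetryType {
	switch err.(type) {
	case *gocql.RequestErrUnavailable:
		return gocql.RetryNextHost
	default:
		return gocql.Retry
	}
}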

type RetryType

type RetryType uint16
const (
	Retry         RetryType = 0x00 // retry on same connection
	RetryNextHost RetryType = 0x01 // retry on another connection
	Ignore        RetryType = 0x02 // ignore error and return result
	Rethrow       RetryType = 0x03 // raise error and stop retrying
)

type RetryableQuery

type RetryableQuery interface {
	Attempts() int
	SetConsistency(c Consistency)
	GetConsistency() Consistency
	Context() context.Context
}

RetryableQuery is an interface that represents a query or batch statement and exposes the functions the retry policy logic needs to evaluate it.

type RowData

type RowData struct {
	Columns []string
	Values  []interface{}
}

type Scanner

type Scanner interface {
	// Next advances the row pointer to point at the next row, the row is valid until
	// the next call of Next. It returns true if there is a row which is available to be
	// scanned into with Scan.
	// Next must be called before every call to Scan.
	Next() bool

	// Scan copies the current row's columns into dest. If the length of dest does not equal
	// the number of columns returned in the row an error is returned. If an error is encountered
	// when unmarshalling a column into the value in dest an error is returned and the row is invalidated
	// until the next call to Next.
	// Next must be called before calling Scan; if it is not, an error is returned.
	Scan(...interface{}) error

	// Err returns the error, if any, that was encountered during iteration and
	// prevented the iteration from completing.
	// Err also releases resources held by the iterator; the Scanner should not
	// be used after Err is called.
	Err() error
}

type SelectedHost

type SelectedHost interface {
	Info() *HostInfo
	Mark(error)
}

SelectedHost is an interface returned when picking a host from a host selection policy.

type SerialConsistency

type SerialConsistency uint16
const (
	Serial      SerialConsistency = 0x08
	LocalSerial SerialConsistency = 0x09
)

func (SerialConsistency) MarshalText

func (s SerialConsistency) MarshalText() (text []byte, err error)

func (SerialConsistency) String

func (s SerialConsistency) String() string

func (*SerialConsistency) UnmarshalText

func (s *SerialConsistency) UnmarshalText(text []byte) error

type Session

type Session struct {
	// contains filtered or unexported fields
}

Session is the interface used by users to interact with the database.

It's safe for concurrent use by multiple goroutines and a typical usage scenario is to have one global session object to interact with the whole Cassandra cluster.

This type extends the Node interface by adding a convenient query builder and automatically sets a default consistency level on all operations that do not have a consistency level set.

func NewSession

func NewSession(cfg ClusterConfig) (*Session, error)

NewSession wraps an existing Node.

func (*Session) AwaitSchemaAgreement

func (s *Session) AwaitSchemaAgreement(ctx context.Context) error

AwaitSchemaAgreement will wait until schema versions across all nodes in the cluster are the same (as seen from the point of view of the control connection). The maximum amount of time this takes is governed by the MaxWaitSchemaAgreement setting in the configuration (default: 60s). AwaitSchemaAgreement returns an error in case schema versions are not the same after the timeout specified in MaxWaitSchemaAgreement elapses.
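
For example (a sketch, assuming a session created as in the examples below and a hypothetical DDL statement), one might wait for agreement right after a schema change:

if err := session.Query("CREATE TABLE IF NOT EXISTS example.events (id uuid PRIMARY KEY)").Exec(); err != nil {
	log.Fatal(err)
}
if err := session.AwaitSchemaAgreement(context.Background()); err != nil {
	log.Fatal(err)
}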

func (*Session) Bind

func (s *Session) Bind(stmt string, b func(q *QueryInfo) ([]interface{}, error)) *Query

Bind generates a new query object based on the query statement passed in. The query is automatically prepared if it has not previously been executed. The binding callback allows the application to define which query argument values will be marshalled as part of the query execution. During execution, the metadata of the prepared query will be routed to the binding callback, which is responsible for producing the query argument values.
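
A minimal sketch of Bind usage (the statement and values are hypothetical); the callback returns argument values in the order the prepared statement expects:

q := session.Bind("INSERT INTO example.events (id, payload) VALUES (?, ?)",
	func(info *gocql.QueryInfo) ([]interface{}, error) {
		// Produce the values at execution time, e.g. a fresh timeuuid per execution.
		return []interface{}{gocql.TimeUUID(), "hello"}, nil
	})
if err := q.Exec(); err != nil {
	log.Fatal(err)
}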

func (*Session) Close

func (s *Session) Close()

Close closes all connections. The session is unusable after this operation.

func (*Session) Closed

func (s *Session) Closed() bool

func (*Session) ExecuteBatch

func (s *Session) ExecuteBatch(batch *Batch) error

ExecuteBatch executes a batch operation and returns nil if successful otherwise an error is returned describing the failure.

func (*Session) ExecuteBatchCAS

func (s *Session) ExecuteBatchCAS(batch *Batch, dest ...interface{}) (applied bool, iter *Iter, err error)

ExecuteBatchCAS executes a batch operation and returns true if it was applied, along with an iterator that can scan additional rows when the batch contains more than one conditional statement. Further scans on the iterator must also include the applied boolean as the first argument to *Iter.Scan.

func (*Session) KeyspaceMetadata

func (s *Session) KeyspaceMetadata(keyspace string) (*KeyspaceMetadata, error)

KeyspaceMetadata returns the schema metadata for the keyspace specified. Returns an error if the keyspace does not exist.
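
A short sketch (assuming KeyspaceMetadata exposes its tables as a map of *TableMetadata, keyed by table name, as documented earlier; see also the TableMetadata section below):

km, err := session.KeyspaceMetadata("example")
if err != nil {
	log.Fatal(err)
}
for name, table := range km.Tables {
	fmt.Println(name, table.OrderedColumns)
}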

func (*Session) MapExecuteBatchCAS

func (s *Session) MapExecuteBatchCAS(batch *Batch, dest map[string]interface{}) (applied bool, iter *Iter, err error)

MapExecuteBatchCAS executes a batch operation much like ExecuteBatchCAS, however it accepts a map rather than a list of arguments for the initial scan.

Example

ExampleSession_MapExecuteBatchCAS demonstrates how to execute a batch lightweight transaction.

package main

import (
	"context"
	"fmt"
	"github.com/gocql/gocql"
	"log"
)

func main() {
	/* The example assumes the following CQL was used to setup the keyspace:
	create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
	create table example.my_lwt_batch_table(pk text, ck text, version int, value text, PRIMARY KEY(pk, ck));
	*/
	cluster := gocql.NewCluster("localhost:9042")
	cluster.Keyspace = "example"
	cluster.ProtoVersion = 4
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	ctx := context.Background()

	err = session.Query("INSERT INTO example.my_lwt_batch_table (pk, ck, version, value) VALUES (?, ?, ?, ?)",
		"pk1", "ck1", 1, "a").WithContext(ctx).Exec()
	if err != nil {
		log.Fatal(err)
	}

	err = session.Query("INSERT INTO example.my_lwt_batch_table (pk, ck, version, value) VALUES (?, ?, ?, ?)",
		"pk1", "ck2", 1, "A").WithContext(ctx).Exec()
	if err != nil {
		log.Fatal(err)
	}

	executeBatch := func(ck2Version int) {
		b := session.NewBatch(gocql.LoggedBatch)
		b.Entries = append(b.Entries, gocql.BatchEntry{
			Stmt: "UPDATE my_lwt_batch_table SET value=? WHERE pk=? AND ck=? IF version=?",
			Args: []interface{}{"b", "pk1", "ck1", 1},
		})
		b.Entries = append(b.Entries, gocql.BatchEntry{
			Stmt: "UPDATE my_lwt_batch_table SET value=? WHERE pk=? AND ck=? IF version=?",
			Args: []interface{}{"B", "pk1", "ck2", ck2Version},
		})
		m := make(map[string]interface{})
		applied, iter, err := session.MapExecuteBatchCAS(b.WithContext(ctx), m)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(applied, m)

		m = make(map[string]interface{})
		for iter.MapScan(m) {
			fmt.Println(m)
			m = make(map[string]interface{})
		}

		if err := iter.Close(); err != nil {
			log.Fatal(err)
		}
	}

	printState := func() {
		scanner := session.Query("SELECT ck, value FROM example.my_lwt_batch_table WHERE pk = ?", "pk1").
			WithContext(ctx).Iter().Scanner()
		for scanner.Next() {
			var ck, value string
			err = scanner.Scan(&ck, &value)
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println(ck, value)
		}
		if err := scanner.Err(); err != nil {
			log.Fatal(err)
		}
	}

	executeBatch(0)
	printState()
	executeBatch(1)
	printState()

	// false map[ck:ck1 pk:pk1 version:1]
	// map[[applied]:false ck:ck2 pk:pk1 version:1]
	// ck1 a
	// ck2 A
	// true map[]
	// ck1 b
	// ck2 B
}
Output:

func (*Session) NewBatch

func (s *Session) NewBatch(typ BatchType) *Batch

NewBatch creates a new batch operation using defaults defined in the cluster.

func (*Session) Query

func (s *Session) Query(stmt string, values ...interface{}) *Query

Query generates a new query object for interacting with the database. Further details of the query may be tweaked using the resulting query value before the query is executed. Query is automatically prepared if it has not previously been executed.

func (*Session) SetConsistency

func (s *Session) SetConsistency(cons Consistency)

SetConsistency sets the default consistency level for this session. This setting can also be changed on a per-query basis and the default value is Quorum.

func (*Session) SetPageSize

func (s *Session) SetPageSize(n int)

SetPageSize sets the default page size for this session. A value <= 0 will disable paging. This setting can also be changed on a per-query basis.

func (*Session) SetPrefetch

func (s *Session) SetPrefetch(p float64)

SetPrefetch sets the default threshold for pre-fetching new pages. If there are only p*pageSize rows remaining, the next page will be requested automatically. This value can also be changed on a per-query basis and the default value is 0.25.
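
As a sketch, the three session-wide defaults above (consistency, page size and prefetch threshold) can be tuned together right after the session is created; the consistency constants are documented earlier in this package:

session.SetConsistency(gocql.LocalQuorum)
session.SetPageSize(5000)
session.SetPrefetch(0.5) // request the next page once half of the current page remains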

func (*Session) SetTrace

func (s *Session) SetTrace(trace Tracer)

SetTrace sets the default tracer for this session. This setting can also be changed on a per-query basis.

type SetHosts

type SetHosts interface {
	SetHosts(hosts []*HostInfo)
}

SetHosts is the interface to implement to receive the host information.

type SetPartitioner

type SetPartitioner interface {
	SetPartitioner(partitioner string)
}

SetPartitioner is the interface to implement to receive the partitioner value.

type SimpleConvictionPolicy

type SimpleConvictionPolicy struct {
}

SimpleConvictionPolicy implements a ConvictionPolicy which convicts all hosts regardless of error.

func (*SimpleConvictionPolicy) AddFailure

func (e *SimpleConvictionPolicy) AddFailure(error error, host *HostInfo) bool

func (*SimpleConvictionPolicy) Reset

func (e *SimpleConvictionPolicy) Reset(host *HostInfo)

type SimpleRetryPolicy

type SimpleRetryPolicy struct {
	NumRetries int //Number of times to retry a query
}

SimpleRetryPolicy has simple logic for attempting a query a fixed number of times.

See below for examples of usage:

//Assign to the cluster
cluster.RetryPolicy = &gocql.SimpleRetryPolicy{NumRetries: 3}

//Assign to a query
query.RetryPolicy(&gocql.SimpleRetryPolicy{NumRetries: 1})

func (*SimpleRetryPolicy) Attempt

func (s *SimpleRetryPolicy) Attempt(q RetryableQuery) bool

Attempt tells gocql to attempt the query again based on query.Attempts being less than the NumRetries defined in the policy.

func (*SimpleRetryPolicy) GetRetryType

func (s *SimpleRetryPolicy) GetRetryType(err error) RetryType

type SimpleSpeculativeExecution

type SimpleSpeculativeExecution struct {
	NumAttempts  int
	TimeoutDelay time.Duration
}
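
A usage sketch for this policy; it assumes Query.Idempotent and Query.SetSpeculativeExecutionPolicy as documented elsewhere in this package, and the query is marked idempotent because speculative execution is only applied to idempotent queries:

sp := &gocql.SimpleSpeculativeExecution{NumAttempts: 2, TimeoutDelay: 100 * time.Millisecond}
err := session.Query("SELECT value FROM example.events WHERE id = ?", id).
	Idempotent(true).
	SetSpeculativeExecutionPolicy(sp).
	Exec()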

func (*SimpleSpeculativeExecution) Attempts

func (sp *SimpleSpeculativeExecution) Attempts() int

func (*SimpleSpeculativeExecution) Delay

func (sp *SimpleSpeculativeExecution) Delay() time.Duration

type SnappyCompressor

type SnappyCompressor struct{}

SnappyCompressor implements the Compressor interface and can be used to compress incoming and outgoing frames. The snappy compression algorithm aims for very high speeds and reasonable compression.

func (SnappyCompressor) Decode

func (s SnappyCompressor) Decode(data []byte) ([]byte, error)

func (SnappyCompressor) Encode

func (s SnappyCompressor) Encode(data []byte) ([]byte, error)

func (SnappyCompressor) Name

func (s SnappyCompressor) Name() string

type SpeculativeExecutionPolicy

type SpeculativeExecutionPolicy interface {
	Attempts() int
	Delay() time.Duration
}

type SslOptions

type SslOptions struct {
	*tls.Config

	// CertPath and KeyPath are optional depending on server
	// config, but both fields must be omitted to avoid using a
	// client certificate
	CertPath string
	KeyPath  string
	CaPath   string //optional depending on server config
	// If you want to verify the hostname and server cert (like a wildcard for cass cluster) then you should turn this
	// on.
	// This option is basically the inverse of tls.Config.InsecureSkipVerify.
	// See InsecureSkipVerify in http://golang.org/pkg/crypto/tls/ for more info.
	//
	// See SslOptions documentation to see how EnableHostVerification interacts with the provided tls.Config.
	EnableHostVerification bool
}

SslOptions configures TLS use.

Warning: Due to historical reasons, SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows:

Config.InsecureSkipVerify | EnableHostVerification | Result
Config is nil             | false                  | do not verify host
Config is nil             | true                   | verify host
false                     | false                  | verify host
true                      | false                  | do not verify host
false                     | true                   | verify host
true                      | true                   | verify host
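
A configuration sketch along the lines recommended above; the paths and server name are hypothetical placeholders, and it assumes the crypto/tls import and the ClusterConfig.SslOpts field documented earlier:

cluster := gocql.NewCluster("cassandra.example.com:9042")
cluster.SslOpts = &gocql.SslOptions{
	Config: &tls.Config{
		ServerName: "cassandra.example.com",
	},
	CaPath:                 "/etc/gocql/ca.pem",
	EnableHostVerification: true,
}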

type StdLogger

type StdLogger interface {
	Print(v ...interface{})
	Printf(format string, v ...interface{})
	Println(v ...interface{})
}
var Logger StdLogger = &defaultLogger{}

Logger for logging messages. Deprecated: Use ClusterConfig.Logger instead.

type StreamObserver

type StreamObserver interface {
	// StreamContext is called before creating a new stream.
	// ctx is context passed to Session.Query / Session.Batch,
	// but might also be an internal context (for example
	// for internal requests that use control connection).
	// StreamContext might return nil if it is not interested
	// in the details of this stream.
	// StreamContext is called before the stream is created
	// and the returned StreamObserverContext might be discarded
	// without any methods called on the StreamObserverContext if
	// creation of the stream fails.
	// Note that if you don't need to track per-stream data,
	// you can always return the same StreamObserverContext.
	StreamContext(ctx context.Context) StreamObserverContext
}

StreamObserver is notified about request/response pairs. Streams are created for executing queries/batches or internal requests to the database and might live longer than execution of the query - the stream is still tracked until response arrives so that stream IDs are not reused.

type StreamObserverContext

type StreamObserverContext interface {
	// StreamStarted is called when the stream is started.
	// This happens just before a request is written to the wire.
	StreamStarted(observedStream ObservedStream)

	// StreamAbandoned is called when we stop waiting for response.
	// This happens when the underlying network connection is closed.
	// StreamFinished won't be called if StreamAbandoned is.
	StreamAbandoned(observedStream ObservedStream)

	// StreamFinished is called when we receive a response for the stream.
	StreamFinished(observedStream ObservedStream)
}

StreamObserverContext is notified about state of a stream. A stream is started every time a request is written to the server and is finished when a response is received. It is abandoned when the underlying network connection is closed before receiving a response.
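
For illustration, a hypothetical observer that only counts in-flight streams can satisfy both interfaces with a single type; the sketch assumes the sync/atomic import and registration via the ClusterConfig.StreamObserver field documented earlier:

// streamCounter tracks how many streams are currently awaiting a response.
type streamCounter struct {
	inFlight int64
}

// StreamContext returns the counter itself for every stream, since no
// per-stream state is needed.
func (c *streamCounter) StreamContext(ctx context.Context) gocql.StreamObserverContext {
	return c
}

func (c *streamCounter) StreamStarted(s gocql.ObservedStream)   { atomic.AddInt64(&c.inFlight, 1) }
func (c *streamCounter) StreamAbandoned(s gocql.ObservedStream) { atomic.AddInt64(&c.inFlight, -1) }
func (c *streamCounter) StreamFinished(s gocql.ObservedStream)  { atomic.AddInt64(&c.inFlight, -1) }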

type TableMetadata

type TableMetadata struct {
	Keyspace          string
	Name              string
	KeyValidator      string
	Comparator        string
	DefaultValidator  string
	KeyAliases        []string
	ColumnAliases     []string
	ValueAlias        string
	PartitionKey      []*ColumnMetadata
	ClusteringColumns []*ColumnMetadata
	Columns           map[string]*ColumnMetadata
	OrderedColumns    []string
}

TableMetadata holds the schema metadata for a table (a.k.a. column family).

type Tracer

type Tracer interface {
	Trace(traceId []byte)
}

Tracer is the interface implemented by query tracers. Tracers have the ability to obtain a detailed event log of all events that happened during the execution of a query from Cassandra. Gathering this information might be essential for debugging and optimizing queries, but this feature should not be used on production systems with very high load.

func NewTraceWriter

func NewTraceWriter(session *Session, w io.Writer) Tracer

NewTraceWriter returns a simple Tracer implementation that outputs the event log in a textual format.
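
For example (a sketch, assuming Query.Trace as documented earlier and the os import), tracing a single query to standard output:

tracer := gocql.NewTraceWriter(session, os.Stdout)
if err := session.Query("SELECT * FROM example.events LIMIT 1").Trace(tracer).Exec(); err != nil {
	log.Fatal(err)
}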

type TupleTypeInfo

type TupleTypeInfo struct {
	NativeType
	Elems []TypeInfo
}

func (TupleTypeInfo) New

func (t TupleTypeInfo) New() interface{}

func (TupleTypeInfo) NewWithError

func (t TupleTypeInfo) NewWithError() (interface{}, error)

func (TupleTypeInfo) String

func (t TupleTypeInfo) String() string

type Type

type Type int

Type is the identifier of a Cassandra internal datatype. String returns a human readable name for the Cassandra datatype described by t.

const (
	TypeCustom    Type = 0x0000
	TypeAscii     Type = 0x0001
	TypeBigInt    Type = 0x0002
	TypeBlob      Type = 0x0003
	TypeBoolean   Type = 0x0004
	TypeCounter   Type = 0x0005
	TypeDecimal   Type = 0x0006
	TypeDouble    Type = 0x0007
	TypeFloat     Type = 0x0008
	TypeInt       Type = 0x0009
	TypeText      Type = 0x000A
	TypeTimestamp Type = 0x000B
	TypeUUID      Type = 0x000C
	TypeVarchar   Type = 0x000D
	TypeVarint    Type = 0x000E
	TypeTimeUUID  Type = 0x000F
	TypeInet      Type = 0x0010
	TypeDate      Type = 0x0011
	TypeTime      Type = 0x0012
	TypeSmallInt  Type = 0x0013
	TypeTinyInt   Type = 0x0014
	TypeDuration  Type = 0x0015
	TypeList      Type = 0x0020
	TypeMap       Type = 0x0021
	TypeSet       Type = 0x0022
	TypeUDT       Type = 0x0030
	TypeTuple     Type = 0x0031
)

func (Type) String

func (t Type) String() string

String returns the name of the identifier.

type TypeInfo

type TypeInfo interface {
	Type() Type
	Version() byte
	Custom() string

	// New creates a pointer to an empty version of whatever type
	// is referenced by the TypeInfo receiver.
	//
	// If there is no corresponding Go type for the CQL type, New panics.
	//
	// Deprecated: Use NewWithError instead.
	New() interface{}

	// NewWithError creates a pointer to an empty version of whatever type
	// is referenced by the TypeInfo receiver.
	//
	// If there is no corresponding Go type for the CQL type, NewWithError returns an error.
	NewWithError() (interface{}, error)
}

TypeInfo describes a Cassandra specific data type.

type UDTField

type UDTField struct {
	Name string
	Type TypeInfo
}

type UDTMarshaler

type UDTMarshaler interface {
	// MarshalUDT will be called for each field in the UDT returned by Cassandra;
	// the implementor should marshal the value for that field, for example by
	// calling Marshal.
	MarshalUDT(name string, info TypeInfo) ([]byte, error)
}

UDTMarshaler is an interface which should be implemented by users wishing to handle encoding UDT types to be sent to Cassandra. Note: due to the current implementation, methods defined for this interface must be value receivers, not pointer receivers.

Example

ExampleUDTMarshaler demonstrates how to implement a UDTMarshaler.

package main

import (
	"context"
	"github.com/gocql/gocql"
	"log"
)

// MyUDTMarshaler implements UDTMarshaler.
type MyUDTMarshaler struct {
	fieldA string
	fieldB int32
}

// MarshalUDT marshals the selected field to bytes.
func (m MyUDTMarshaler) MarshalUDT(name string, info gocql.TypeInfo) ([]byte, error) {
	switch name {
	case "field_a":
		return gocql.Marshal(info, m.fieldA)
	case "field_b":
		return gocql.Marshal(info, m.fieldB)
	default:
		// If you want to be strict and return an error on unknown fields, you can do so here instead.
		// Returning nil, nil will set the value of unknown fields to null, which might be handy if you want
		// to be forward-compatible when a new field is added to the UDT.
		return nil, nil
	}
}

// ExampleUDTMarshaler demonstrates how to implement a UDTMarshaler.
func main() {
	/* The example assumes the following CQL was used to setup the keyspace:
	create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
	create type example.my_udt (field_a text, field_b int);
	create table example.my_udt_table(pk int, value frozen<my_udt>, PRIMARY KEY(pk));
	*/
	cluster := gocql.NewCluster("localhost:9042")
	cluster.Keyspace = "example"
	cluster.ProtoVersion = 4
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	ctx := context.Background()

	value := MyUDTMarshaler{
		fieldA: "a value",
		fieldB: 42,
	}
	err = session.Query("INSERT INTO example.my_udt_table (pk, value) VALUES (?, ?)",
		1, value).WithContext(ctx).Exec()
	if err != nil {
		log.Fatal(err)
	}
}
Output:

type UDTTypeInfo

type UDTTypeInfo struct {
	NativeType
	KeySpace string
	Name     string
	Elements []UDTField
}

func (UDTTypeInfo) New

func (u UDTTypeInfo) New() interface{}

func (UDTTypeInfo) NewWithError

func (u UDTTypeInfo) NewWithError() (interface{}, error)

func (UDTTypeInfo) String

func (u UDTTypeInfo) String() string

type UDTUnmarshaler

type UDTUnmarshaler interface {
	// UnmarshalUDT will be called for each field in the UDT returned by Cassandra;
	// the implementor should unmarshal the data into the value of their choosing,
	// for example by calling Unmarshal.
	UnmarshalUDT(name string, info TypeInfo, data []byte) error
}

UDTUnmarshaler should be implemented by users wanting to implement custom UDT unmarshaling.

Example

ExampleUDTUnmarshaler demonstrates how to implement a UDTUnmarshaler.

package main

import (
	"context"
	"fmt"
	"github.com/gocql/gocql"
	"log"
)

// MyUDTUnmarshaler implements UDTUnmarshaler.
type MyUDTUnmarshaler struct {
	fieldA string
	fieldB int32
}

// UnmarshalUDT unmarshals the field identified by name into MyUDTUnmarshaler.
func (m *MyUDTUnmarshaler) UnmarshalUDT(name string, info gocql.TypeInfo, data []byte) error {
	switch name {
	case "field_a":
		return gocql.Unmarshal(info, data, &m.fieldA)
	case "field_b":
		return gocql.Unmarshal(info, data, &m.fieldB)
	default:
		// If you want to be strict and return an error on unknown fields, you can do so here instead.
		// Returning nil will ignore unknown fields, which might be handy if you want
		// to be forward-compatible when a new field is added to the UDT.
		return nil
	}
}

// ExampleUDTUnmarshaler demonstrates how to implement a UDTUnmarshaler.
func main() {
	/* The example assumes the following CQL was used to setup the keyspace:
	create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
	create type example.my_udt (field_a text, field_b int);
	create table example.my_udt_table(pk int, value frozen<my_udt>, PRIMARY KEY(pk));
	insert into example.my_udt_table (pk, value) values (1, {field_a: 'a value', field_b: 42});
	*/
	cluster := gocql.NewCluster("localhost:9042")
	cluster.Keyspace = "example"
	cluster.ProtoVersion = 4
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	ctx := context.Background()

	var value MyUDTUnmarshaler
	err = session.Query("SELECT value FROM example.my_udt_table WHERE pk = 1").WithContext(ctx).Scan(&value)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(value.fieldA)
	fmt.Println(value.fieldB)
	// a value
	// 42
}
Output:

type UUID

type UUID [16]byte

func MaxTimeUUID

func MaxTimeUUID(t time.Time) UUID

MaxTimeUUID generates a "fake" time based UUID (version 1) which will be the biggest possible UUID generated for the provided timestamp.

UUIDs generated by this function are not unique and are mostly suitable only in queries to select a time range of a Cassandra TimeUUID column.

func MinTimeUUID

func MinTimeUUID(t time.Time) UUID

MinTimeUUID generates a "fake" time based UUID (version 1) which will be the smallest possible UUID generated for the provided timestamp.

UUIDs generated by this function are not unique and are mostly suitable only in queries to select a time range of a Cassandra TimeUUID column.
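
A typical range-query sketch using MinTimeUUID and MaxTimeUUID; the table and column names are hypothetical:

end := time.Now()
start := end.Add(-1 * time.Hour)
iter := session.Query(
	"SELECT id, payload FROM example.events_by_day WHERE day = ? AND id >= ? AND id <= ?",
	"2023-08-16", gocql.MinTimeUUID(start), gocql.MaxTimeUUID(end),
).Iter()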

func MustRandomUUID

func MustRandomUUID() UUID

func ParseUUID

func ParseUUID(input string) (UUID, error)

ParseUUID parses a 32 digit hexadecimal number (that might contain hyphens) representing a UUID.

func RandomUUID

func RandomUUID() (UUID, error)

RandomUUID generates a totally random UUID (version 4) as described in RFC 4122.

func TimeUUID

func TimeUUID() UUID

TimeUUID generates a new time based UUID (version 1) using the current time as the timestamp.

func TimeUUIDWith

func TimeUUIDWith(t int64, clock uint32, node []byte) UUID

TimeUUIDWith generates a new time based UUID (version 1) as described in RFC 4122 with the given parameters. t is the number of 100-nanosecond intervals since 15 Oct 1582 (60 bits). clock is the clock sequence (14 bits). node is a slice used to guarantee the uniqueness of the UUID (up to 6 bytes). Note: calling this function does not increment the static clock sequence.

func UUIDFromBytes

func UUIDFromBytes(input []byte) (UUID, error)

UUIDFromBytes converts a raw byte slice to a UUID.

func UUIDFromTime

func UUIDFromTime(t time.Time) UUID

UUIDFromTime generates a new time based UUID (version 1) as described in RFC 4122. This UUID contains the MAC address of the node that generated the UUID, the given timestamp and a sequence number.

func (UUID) Bytes

func (u UUID) Bytes() []byte

Bytes returns the raw byte slice for this UUID. A UUID is always 128 bits (16 bytes) long.

func (UUID) Clock

func (u UUID) Clock() uint32

Clock extracts the clock sequence of this UUID. It will return zero if the UUID is not a time based UUID (version 1).

func (UUID) MarshalJSON

func (u UUID) MarshalJSON() ([]byte, error)

Marshaling for JSON

func (UUID) MarshalText

func (u UUID) MarshalText() ([]byte, error)

func (UUID) Node

func (u UUID) Node() []byte

Node extracts the MAC address of the node who generated this UUID. It will return nil if the UUID is not a time based UUID (version 1).

func (UUID) String

func (u UUID) String() string

String returns the UUID in its canonical form, a 32 digit hexadecimal number in the form of xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.

func (UUID) Time

func (u UUID) Time() time.Time

Time is like Timestamp, except that it returns a time.Time.

func (UUID) Timestamp

func (u UUID) Timestamp() int64

Timestamp extracts the timestamp information from a time based UUID (version 1).

func (*UUID) UnmarshalJSON

func (u *UUID) UnmarshalJSON(data []byte) error

Unmarshaling for JSON

func (*UUID) UnmarshalText

func (u *UUID) UnmarshalText(text []byte) (err error)

func (UUID) Variant

func (u UUID) Variant() int

Variant returns the variant of this UUID. This package will only generate UUIDs in the IETF variant.

func (UUID) Version

func (u UUID) Version() int

Version extracts the version of this UUID variant. RFC 4122 describes five kinds of UUIDs.

type UnmarshalError

type UnmarshalError string

func (UnmarshalError) Error

func (m UnmarshalError) Error() string

type Unmarshaler

type Unmarshaler interface {
	UnmarshalCQL(info TypeInfo, data []byte) error
}

Unmarshaler is the interface implemented by objects that can unmarshal a Cassandra specific description of themselves.

type UserTypeMetadata

type UserTypeMetadata struct {
	Keyspace   string
	Name       string
	FieldNames []string
	FieldTypes []TypeInfo
}

type ViewMetadata

type ViewMetadata struct {
	Keyspace   string
	Name       string
	FieldNames []string
	FieldTypes []TypeInfo
}

ViewMetadata holds the metadata for views. Deprecated: kept for backwards compatibility. Use MaterializedViewMetadata instead.

Directories

Path Synopsis
internal
lru
Package lru implements an LRU cache.
lz4 module
