Documentation ¶
Overview ¶
Package gocql implements a fast and robust Cassandra driver for the Go programming language.
Upgrading to a new major version ¶
For detailed migration instructions between major versions, see the upgrade guide.
Connecting to the cluster ¶
Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration:
cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
The port can be specified as part of the address; the above is equivalent to:
cluster := gocql.NewCluster("192.168.1.1:9042", "192.168.1.2:9042", "192.168.1.3:9042")
It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, i.e. an IP address rather than a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the specified domain name resolves to more than one IP address, the driver may connect multiple times to the same host and will not mark the node up or down in response to events.
Then you can customize more options (see ClusterConfig):
cluster.Keyspace = "example"
cluster.Consistency = gocql.Quorum
cluster.ProtoVersion = 4
The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it is not defined which version will be used in certain situations (for example, during an upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes).
Native protocol versions 3, 4, and 5 are supported. For features like per-query keyspace setting and timestamp override, use native protocol version 5.
The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead.
When ready, create a session from the configuration. Don't forget to Session.Close the session once you are done with it:
session, err := cluster.CreateSession()
if err != nil {
return err
}
defer session.Close()
Reconnection and Host Recovery ¶
The driver provides robust reconnection mechanisms to handle network failures and host outages. Two main configuration settings control reconnection behavior:
- ClusterConfig.ReconnectionPolicy: Controls retry behavior for immediate connection failures, query-driven reconnection, and background recovery
- ClusterConfig.ReconnectInterval: Controls background recovery of DOWN hosts
ConstantReconnectionPolicy provides predictable fixed intervals (Default):
cluster.ReconnectionPolicy = &gocql.ConstantReconnectionPolicy{
MaxRetries: 3, // Maximum retry attempts
Interval: 1 * time.Second, // Fixed interval between retries
}
ExponentialReconnectionPolicy provides gentler backoff with capped intervals:
cluster.ReconnectionPolicy = &gocql.ExponentialReconnectionPolicy{
MaxRetries: 5, // 6 total attempts: 0+1+2+4+8+15 = 30s total
InitialInterval: 1 * time.Second, // Initial retry interval
MaxInterval: 15 * time.Second, // Maximum retry interval (prevents excessive delays)
}
Note: Each reconnection attempt sequence starts fresh from InitialInterval. This applies both to immediate connection failures and each ClusterConfig.ReconnectInterval cycle. For example, if ClusterConfig.ReconnectInterval=60s, every 60 seconds the background process starts a new sequence beginning at InitialInterval, not continuing from where the previous 60-second cycle ended.
ClusterConfig.ReconnectInterval controls background recovery of DOWN hosts. When a host is marked DOWN, this process periodically attempts reconnection using the same ReconnectionPolicy settings:
cluster.ReconnectInterval = 60 * time.Second // Check DOWN hosts every 60 seconds (default)
Setting ClusterConfig.ReconnectInterval to 0 disables background reconnection.
The reconnection process involves several components working together in a specific sequence:
- Individual Connection Reconnection - Immediate retry attempts for failed connections within UP hosts
- Host State Management - Marking hosts DOWN when all connections fail and retries are exhausted
- Background Recovery - Periodic reconnection attempts for DOWN hosts via ReconnectInterval
Individual connection reconnection occurs when connections fail within a host's pool, and the driver immediately attempts reconnection using ReconnectionPolicy. For hosts that remain UP (with working connections), failed individual connections are reconnected on a query-driven basis - every query execution triggers asynchronous reconnection attempts for missing connections. Queries proceed immediately using available connections while reconnection happens asynchronously in the background. There is no query latency impact from reconnection attempts. Multiple concurrent queries to the same host will not trigger parallel reconnection attempts - the driver uses a "filling" flag to ensure only one reconnection process runs per host.
Host state management determines when a host is marked DOWN. Only when ALL connections to a host fail and ReconnectionPolicy retries are exhausted does the host get marked DOWN. DOWN hosts are excluded from query routing. Since DOWN hosts don't receive queries, they cannot benefit from query-driven reconnection. This is why the background ClusterConfig.ReconnectInterval process is essential for DOWN host recovery.
Background recovery through ClusterConfig.ReconnectInterval periodically attempts to reconnect DOWN hosts using ReconnectionPolicy settings. Event-driven recovery also triggers immediate reconnection when Cassandra sends STATUS_CHANGE UP events.
The complete recovery process follows these steps:
- Connection fails → ReconnectionPolicy immediate retry attempts
- Query-driven recovery → Each query to partially-failed hosts triggers reconnection attempts
- Host marked DOWN → All connections failed and retries exhausted
- Background recovery → ClusterConfig.ReconnectInterval process attempts reconnection using ReconnectionPolicy
- Event recovery → Cassandra events can trigger immediate reconnection
Here's a practical example showing how the settings work together:
cluster.ReconnectionPolicy = &gocql.ExponentialReconnectionPolicy{
MaxRetries: 8, // 9 total attempts (0s, 1s, 2s, 4s, 8s, 16s, 30s, 30s, 30s)
InitialInterval: 1 * time.Second, // Starts at 1 second
MaxInterval: 30 * time.Second, // Caps exponential growth at 30 seconds
}
cluster.ReconnectInterval = 60 * time.Second // Background checks every 60 seconds
Timeline Example: With this configuration, when a host loses ALL connections:
T=0:00 - Host has 2 connections, both fail
T=0:00 - Immediate reconnection attempt 1: 0s delay
T=0:01 - Immediate reconnection attempt 2: 1s delay
T=0:03 - Immediate reconnection attempt 3: 2s delay
T=0:07 - Immediate reconnection attempt 4: 4s delay
T=0:15 - Immediate reconnection attempt 5: 8s delay
T=0:31 - Immediate reconnection attempt 6: 16s delay
T=1:01 - Immediate reconnection attempt 7: 30s delay (capped by MaxInterval)
T=1:31 - Immediate reconnection attempt 8: 30s delay
T=2:01 - Immediate reconnection attempt 9: 30s delay
T=2:31 - All immediate attempts failed, host marked DOWN
T=3:31 - Background recovery attempt 1 starts (60s after DOWN)
ReconnectionPolicy sequence: 0s, 1s, 2s, 4s, 8s, 16s, 30s, 30s, 30s
T=4:31 - ClusterConfig.ReconnectInterval timer fires, tick buffered (timer channel capacity=1)
T=5:31 - ClusterConfig.ReconnectInterval timer fires again, there is already a tick buffered so ignore
T=5:32 - Background recovery attempt 1 completes (after 2:01), immediately reads buffered tick
T=5:32 - Background recovery attempt 2 starts (buffered timer from T=5:31)
T=6:32 - ClusterConfig.ReconnectInterval timer fires, tick buffered
T=7:32 - ClusterConfig.ReconnectInterval timer fires again, there is already a tick buffered so ignore
T=7:33 - Background recovery attempt 2 completes (after 2:01), immediately reads buffered tick
T=7:33 - Background recovery attempt 3 starts (buffered timer from T=7:32)
Timer Behavior and Predictable Timing:
Note: time.Ticker.C has buffer capacity=1, but Go drops ticks for "slow receivers." The reconnection process is a slow receiver (taking 2+ minutes vs 60s interval). First missed tick gets buffered, subsequent ticks are dropped. When reconnection completes, it immediately reads the buffered tick and starts the next attempt. This causes attempts to run back-to-back at the ReconnectionPolicy duration interval (121s) instead of the intended ClusterConfig.ReconnectInterval (60s), but timing remains predictable.
To avoid this buffering/dropping behavior, ensure ClusterConfig.ReconnectInterval is larger than the total ReconnectionPolicy duration. You can achieve this by either:
- Increasing ClusterConfig.ReconnectInterval (e.g., 150s > 121s sequence duration)
- Reducing ReconnectionPolicy duration (e.g., 30s sequence < 60s ClusterConfig.ReconnectInterval)
This ensures predictable timing with each recovery attempt starting exactly ClusterConfig.ReconnectInterval apart. Approach #2 provides faster recovery while maintaining predictable timing.
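For example, a configuration following approach #2 might look like the following sketch (the values are illustrative, not recommendations):
// Total sequence duration: 0s + 1s + 2s + 4s = 7s, well below the 60s interval,
// so each background recovery attempt starts exactly 60 seconds apart.
cluster.ReconnectionPolicy = &gocql.ExponentialReconnectionPolicy{
    MaxRetries:      3,                // 4 total attempts: 0s, 1s, 2s, 4s
    InitialInterval: 1 * time.Second,
    MaxInterval:     10 * time.Second, // never reached with only 3 retries
}
cluster.ReconnectInterval = 60 * time.Second // 60s > 7s total sequence duration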
Individual failed connections within UP hosts are reconnected asynchronously without affecting query performance.
Best Practices and Configuration Guidelines:
- ReconnectionPolicy: Use ConstantReconnectionPolicy for predictable behavior or ExponentialReconnectionPolicy for gentler recovery. Aggressive settings affect background reconnection frequency but don't impact query latency
- ClusterConfig.ReconnectInterval: Set to 30-60 seconds for most cases. Shorter intervals provide faster recovery but more traffic
- Timing Predictability: For predictable background recovery timing, ensure ClusterConfig.ReconnectInterval exceeds the total ReconnectionPolicy sequence duration. This prevents Go's ticker from buffering/dropping ticks due to "slow receiver" behavior. You can achieve this by either increasing ClusterConfig.ReconnectInterval or reducing ReconnectionPolicy duration (fewer retries/shorter intervals). The latter approach provides faster recovery while maintaining predictable timing
- Monitoring: Enable logging to observe reconnection behavior and tune settings
Compression ¶
The driver supports Snappy and LZ4 compression of protocol frames.
For Snappy compression (via github.com/apache/cassandra-gocql-driver/v2/snappy package):
import "github.com/apache/cassandra-gocql-driver/v2/snappy"
cluster.Compressor = &snappy.SnappyCompressor{}
For LZ4 compression (via github.com/apache/cassandra-gocql-driver/v2/lz4 package):
import "github.com/apache/cassandra-gocql-driver/v2/lz4"
cluster.Compressor = &lz4.LZ4Compressor{}
Both compressors use efficient append-like semantics for optimal performance and memory usage.
Structured Logging ¶
The driver provides structured logging through the StructuredLogger interface. Built-in integrations are available for popular logging libraries:
For Zap logger (via github.com/apache/cassandra-gocql-driver/v2/gocqlzap package):
import "github.com/apache/cassandra-gocql-driver/v2/gocqlzap" zapLogger, _ := zap.NewProduction() cluster.Logger = gocqlzap.NewZapLogger(zapLogger)
For Zerolog (via github.com/apache/cassandra-gocql-driver/v2/gocqlzerolog package):
import "github.com/apache/cassandra-gocql-driver/v2/gocqlzerolog" zerologLogger := zerolog.New(os.Stdout).With().Timestamp().Logger() cluster.Logger = gocqlzerolog.NewZerologLogger(&zerologLogger)
You can also use the built-in standard library logger:
cluster.Logger = gocql.NewLogger(gocql.LogLevelInfo)
Native Protocol Version 5 Features ¶
Native protocol version 5 provides several advanced capabilities:
Set keyspace for individual queries (useful for multi-tenant applications):
err := session.Query("SELECT * FROM table").SetKeyspace("tenant1").Exec()
Target queries to specific nodes (useful for virtual tables in Cassandra 4.0+):
err := session.Query("SELECT * FROM system_views.settings").
SetHostID("host-uuid").Exec()
Use current timestamp override for testing and consistency:
err := session.Query("INSERT INTO table (id, data) VALUES (?, ?)").
WithNowInSeconds(specificTimestamp).
Bind(id, data).Exec()
These features are also available on batch operations, excluding SetHostID():
err := session.Batch(LoggedBatch).
Query("INSERT INTO table (id, data) VALUES (?, ?)", id, data).
SetKeyspace("tenant1").
WithNowInSeconds(specificTimestamp).
Exec()
Authentication ¶
The CQL protocol uses a SASL-based authentication mechanism, which consists of an exchange of server challenge and client response pairs. The details of the exchanged messages depend on the authenticator used.
To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider.
PasswordAuthenticator is provided to use for username/password authentication:
cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
cluster.Authenticator = gocql.PasswordAuthenticator{
Username: "user",
Password: "password"
}
session, err := cluster.CreateSession()
if err != nil {
return err
}
defer session.Close()
By default, PasswordAuthenticator will attempt to authenticate regardless of what authenticator implementation the server returns in its AUTHENTICATE message (e.g. org.apache.cassandra.auth.PasswordAuthenticator). If you wish to restrict this, you may use PasswordAuthenticator.AllowedAuthenticators:
cluster.Authenticator = gocql.PasswordAuthenticator{
    Username: "user",
    Password: "password",
AllowedAuthenticators: []string{"org.apache.cassandra.auth.PasswordAuthenticator"},
}
Transport layer security ¶
It is possible to secure traffic between the client and server with TLS.
To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *crypto/tls.Config so you can set that directly. There are also helpers to load keys/certificates from files.
Warning: For historical reasons, SslOptions is insecure by default, so you need to set SslOptions.EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *crypto/tls.Config. SslOptions and crypto/tls.Config.InsecureSkipVerify interact as follows:
Config.InsecureSkipVerify | EnableHostVerification | Result
Config is nil             | false                  | do not verify host
Config is nil             | true                   | verify host
false                     | false                  | verify host
true                      | false                  | do not verify host
false                     | true                   | verify host
true                      | true                   | verify host
For example:
cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
cluster.SslOpts = &gocql.SslOptions{
EnableHostVerification: true,
}
session, err := cluster.CreateSession()
if err != nil {
return err
}
defer session.Close()
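As a minimal sketch of the recommended approach (setting SslOptions.Config to a *crypto/tls.Config), assuming your CA bundle lives at a placeholder path and that the standard library crypto/tls, crypto/x509, and os packages are imported:
// Build a *tls.Config that trusts a custom CA. With a non-nil Config and
// InsecureSkipVerify left false, the server host is verified (see the table above).
caCert, err := os.ReadFile("/path/to/ca.pem") // placeholder path
if err != nil {
    return err
}
caPool := x509.NewCertPool()
caPool.AppendCertsFromPEM(caCert)
cluster.SslOpts = &gocql.SslOptions{
    Config: &tls.Config{RootCAs: caPool},
}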
Data-center awareness and query routing ¶
To route queries to the local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you want to primarily connect to is called dc1 (as configured in the database):
cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
cluster.PoolConfig.HostSelectionPolicy = gocql.DCAwareRoundRobinPolicy("dc1")
The driver can also route queries to nodes that hold data replicas based on the partition key (preferring the local DC):
cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(gocql.DCAwareRoundRobinPolicy("dc1"))
Note that TokenAwareHostPolicy can take options such as ShuffleReplicas and NonLocalReplicasFallback.
We recommend running with a token aware host policy in production for maximum performance.
The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of
session.Query("select value from mytable where pk1 = 'abc' AND pk2 = ?", "def")
use
session.Query("select value from mytable where pk1 = ? AND pk2 = ?", "abc", "def")
Rack-level awareness ¶
The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack.
Instead of dividing hosts with two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else).
RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy.
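For example, a sketch assuming the local datacenter is named dc1 and the local rack rack1 (as configured in the database):
cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(gocql.RackAwareRoundRobinPolicy("dc1", "rack1"))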
Executing queries ¶
Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query.
To execute a query without reading results, use Query.Exec:
err := session.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`, "me", gocql.TimeUUID(), "hello world").WithContext(ctx).Exec()
A single row can be read by calling Query.Scan:
err := session.Query(`SELECT id, text FROM tweet WHERE timeline = ? LIMIT 1`, "me").WithContext(ctx).Consistency(gocql.One).Scan(&id, &text)
Multiple rows can be read using Iter.Scanner:
scanner := session.Query(`SELECT id, text FROM tweet WHERE timeline = ?`,
"me").WithContext(ctx).Iter().Scanner()
for scanner.Next() {
var (
id gocql.UUID
text string
)
err = scanner.Scan(&id, &text)
if err != nil {
log.Fatal(err)
}
fmt.Println("Tweet:", id, text)
}
// scanner.Err() closes the iterator, so neither the scanner nor the iter should be used afterwards.
if err := scanner.Err(); err != nil {
log.Fatal(err)
}
See Example for a complete example.
Vector types (Cassandra 5.0+) ¶
The driver supports Cassandra 5.0 vector types, enabling powerful vector search capabilities:
// Create a table with vector column
err := session.Query(`CREATE TABLE vectors (
id int PRIMARY KEY,
embedding vector<float, 128>
)`).Exec()
// Insert vector data
embedding := make([]float32, 128)
// ... populate embedding values
err = session.Query("INSERT INTO vectors (id, embedding) VALUES (?, ?)",
1, embedding).Exec()
// Query vector data
var retrievedEmbedding []float32
err = session.Query("SELECT embedding FROM vectors WHERE id = ?", 1).
Scan(&retrievedEmbedding)
Vector types support various element types including basic types, collections, and user-defined types. Vector search requires Cassandra 5.0 or later.
Prepared statements ¶
The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. CQL protocol does not support preparing other query types.
When using native protocol >= 4, it is possible to use UnsetValue as the bound value of a column. This will cause the database to skip writing the column. The main advantage is that you can keep the same prepared statement even when you don't want to update some fields, whereas before you needed to prepare a separate statement.
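For example, a sketch using a hypothetical users table with an optional email column; the same prepared statement is reused whether or not an email value is supplied:
var email interface{} = gocql.UnsetValue // leave the email column untouched
if newEmail != "" {
    email = newEmail // only write email when a new value is provided
}
err := session.Query(`INSERT INTO users (id, name, email) VALUES (?, ?, ?)`,
    id, name, email).Exec()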
Executing multiple queries concurrently ¶
Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as is recommended for Go APIs) and the queries are executed asynchronously at the protocol level.
results := make(chan error, 2)
go func() {
results <- session.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`,
"me", gocql.TimeUUID(), "hello world 1").Exec()
}()
go func() {
results <- session.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`,
"me", gocql.TimeUUID(), "hello world 2").Exec()
}()
Nulls ¶
Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string variable instead of string.
var text *string
err := scanner.Scan(&text)
if err != nil {
// handle error
}
if text != nil {
// not null
} else {
// null
}
See Example_nulls for full example.
Reusing slices ¶
The driver reuses the backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices in other data structures.
scanner := session.Query(`SELECT myints FROM table WHERE pk = ?`, "key").WithContext(ctx).Iter().Scanner()
var myInts []int
for scanner.Next() {
// This scan reuses backing store of myInts for each row.
err = scanner.Scan(&myInts)
if err != nil {
log.Fatal(err)
}
}
When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop:
scanner := session.Query(`SELECT myints FROM table WHERE pk = ?`, "key").WithContext(ctx).Iter().Scanner()
for scanner.Next() {
var myInts []int
// This scan always gets pointer to fresh myInts slice, so does not reuse memory.
err = scanner.Scan(&myInts)
if err != nil {
log.Fatal(err)
}
}
Paging ¶
The driver supports paging of results with automatic prefetch, see ClusterConfig.PageSize, Query.PageSize, and Query.Prefetch.
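For instance, a sketch of tuning automatic paging (the values are illustrative):
cluster.PageSize = 5000 // default page size for queries created from this session

iter := session.Query(`SELECT id, text FROM tweet WHERE timeline = ?`, "me").
    PageSize(1000). // override the page size for this query
    Prefetch(0.5).  // fetch the next page once half of the current page is consumed
    Iter()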
It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL, to allow users to browse pages in a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys.
Paging state is specific to the native protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (see CASSANDRA-10880).
The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate yourself as this could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster.
Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred).
Using too low a value of ClusterConfig.PageSize will negatively affect performance; a value below 100 is probably too low. While Cassandra currently returns exactly ClusterConfig.PageSize items per page (except for the last page), the protocol authors explicitly reserved the right to return a smaller or larger number of items per page for performance reasons, so don't rely on the page having the exact count of items.
See Example_paging for an example of manual paging.
Dynamic list of columns ¶
There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case.
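A minimal sketch using Iter.MapScan, assuming stmt holds a user-supplied SELECT statement:
iter := session.Query(stmt).Iter()
for {
    // MapScan needs a fresh map for every row, otherwise values from previous rows may survive.
    row := map[string]interface{}{}
    if !iter.MapScan(row) {
        break
    }
    for name, value := range row {
        fmt.Println(name, value)
    }
}
if err := iter.Close(); err != nil {
    log.Fatal(err)
}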
See Example_dynamicColumns.
Batches ¶
The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.Batch to create a new batch and then fill in the details of the individual queries. Then execute the batch with Batch.Exec.
Logged batches ensure atomicity, either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra so batches of counter updates have to use CounterBatch type. A counter batch can only contain statements to update counters.
For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop.
It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how these are executed: a BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement, while Batch.Exec prepares the individual statements in the batch. If you have variable-length batches using the same statement, using Batch.Exec is more efficient.
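A sketch of passing a whole batch statement to Query.Exec (using the tweet table from the examples below):
err := session.Query(`BEGIN BATCH
    INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?);
    INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?);
APPLY BATCH`,
    "me", gocql.TimeUUID(), "hello world 1",
    "me", gocql.TimeUUID(), "hello world 2").Exec()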
See Example_batch for an example.
The Batch API provides a fluent interface for building and executing batch operations:
// Create and execute a batch using fluent API
err := session.Batch(LoggedBatch).
Query("INSERT INTO table1 (id, name) VALUES (?, ?)", id1, name1).
Query("INSERT INTO table2 (id, value) VALUES (?, ?)", id2, value2).
Exec()
// Lightweight transactions with batches
applied, iter, err := session.Batch(LoggedBatch).
Query("INSERT INTO users (id, name) VALUES (?, ?) IF NOT EXISTS", id, name).
ExecCAS()
if err != nil {
// handle error
}
if !applied {
// handle conditional failure
}
Lightweight transactions ¶
Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See the example for Query.MapScanCAS.
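A sketch of a single-statement LWT using Query.ScanCAS, assuming a users table with columns (id int PRIMARY KEY, name text); when the insert is not applied, the existing row is scanned into the destination variables:
var existingID int
var existingName string
applied, err := session.Query(`INSERT INTO users (id, name) VALUES (?, ?) IF NOT EXISTS`,
    id, name).ScanCAS(&existingID, &existingName)
if err != nil {
    return err
}
if !applied {
    // The row already existed; existingID and existingName hold its current values.
    fmt.Println("user already exists:", existingID, existingName)
}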
Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Batch.ExecCAS and Batch.MapExecCAS when executing the batch to learn about the result of the LWT. See example for Batch.MapExecCAS.
SERIAL Consistency for Reads ¶
The driver supports SERIAL and LOCAL_SERIAL consistency levels on SELECT statements. These special consistency levels are designed for reading data that may have been written using lightweight transactions (LWT) with IF conditions, providing linearizable consistency guarantees.
When to use SERIAL consistency levels:
Use SERIAL or LOCAL_SERIAL consistency when you need to:
- Read the most recent committed value after lightweight transactions
- Ensure linearizable consistency (stronger than eventual consistency)
- Read data that might have uncommitted lightweight transactions in progress
Important considerations:
- SERIAL reads have higher latency and resource usage than normal reads
- Only use when you specifically need linearizable consistency
- If a SERIAL read finds an uncommitted transaction, it will commit that transaction
- Most applications should use regular consistency levels (ONE, QUORUM, etc.)
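For example, a sketch of a read at LOCAL_SERIAL, using the hypothetical users table from the LWT sketch above:
var name string
err := session.Query(`SELECT name FROM users WHERE id = ?`, id).
    Consistency(gocql.LocalSerial).
    Scan(&name)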
Immutable Execution ¶
Query and Batch objects follow an immutable execution model that enables safe reuse and concurrent execution without object mutation.
Query and Batch Object Reusability:
Query and Batch objects remain unchanged during execution, allowing for safe reuse and concurrent execution:
// Create a query once
query := session.Query("SELECT * FROM users WHERE id = ?", userID)
// Safe to execute multiple times
iter1 := query.Iter()
defer iter1.Close()
iter2 := query.Iter() // Same query, separate execution
defer iter2.Close()
// Safe to use from multiple goroutines
go func() {
iter := query.Iter()
defer iter.Close()
// ... process results
}()
The same applies to Batch objects:
// Create batch once using fluent API
batch := session.Batch(LoggedBatch).
Query("INSERT INTO table1 (id, name) VALUES (?, ?)", id1, name1).
Query("INSERT INTO table2 (id, value) VALUES (?, ?)", id2, value2)
// Safe to execute multiple times
err1 := batch.Exec()
err2 := batch.Exec() // Same batch, separate execution
Execution Metrics and Metadata:
Execution-specific information such as metrics, attempts, and latency are available through the Iter object returned by execution. This provides per-execution metrics while keeping the original objects unchanged:
query := session.Query("SELECT * FROM users WHERE id = ?", userID)
iter := query.Iter()
defer iter.Close()
// Access execution metrics through Iter
attempts := iter.Attempts() // Number of times this execution was attempted
latency := iter.Latency() // Average latency per attempt in nanoseconds
keyspace := iter.Keyspace() // Keyspace the query was executed against
table := iter.Table() // Table the query was executed against (if determinable)
host := iter.Host() // Host that executed the query
For batches, execution methods that return an Iter (like ExecCAS) provide the same metrics:
// Execute CAS operation and get metrics through Iter using fluent API
applied, iter, err := session.Batch(LoggedBatch).
Query("INSERT INTO users (id, name) VALUES (?, ?) IF NOT EXISTS", id, name).
ExecCAS()
defer iter.Close()
if err != nil {
log.Printf("Batch failed after %d attempts with average latency %d ns",
iter.Attempts(), iter.Latency())
}
Retries and speculative execution ¶
Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retries or speculative execution.
Idempotent queries are retried in case of errors based on the configured RetryPolicy.
Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay, even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still running. The parallel executions of the query race to return a result; the first result received is returned.
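A sketch combining these settings with the driver's simple built-in policies (the exact values are illustrative):
err := session.Query(`SELECT id, text FROM tweet WHERE timeline = ?`, "me").
    Idempotent(true). // safe to execute more than once
    RetryPolicy(&gocql.SimpleRetryPolicy{NumRetries: 3}).
    SetSpeculativeExecutionPolicy(&gocql.SimpleSpeculativeExecution{
        NumAttempts:  2,                      // up to 2 additional speculative executions
        TimeoutDelay: 200 * time.Millisecond, // start another execution after 200ms without a response
    }).
    Exec()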
User-defined types ¶
Cassandra User-Defined Types (UDTs) are composite data types that group related fields together. GoCQL provides several ways to work with UDTs in Go, from simple struct mapping to advanced custom marshaling.
Basic UDT Usage with Structs:
The simplest way to work with UDTs is using Go structs with `cql` tags:
// Cassandra UDT definition:
// CREATE TYPE address (street text, city text, zip_code int);
type Address struct {
Street string `cql:"street"`
City string `cql:"city"`
ZipCode int `cql:"zip_code"`
}
// Usage in queries
addr := Address{Street: "123 Main St", City: "Anytown", ZipCode: 12345}
err := session.Query("INSERT INTO users (id, address) VALUES (?, ?)",
userID, addr).Exec()
// Reading UDTs
var readAddr Address
err = session.Query("SELECT address FROM users WHERE id = ?",
userID).Scan(&readAddr)
Field Mapping:
GoCQL maps struct fields to UDT fields using two strategies:
1. CQL tags: Use `cql:"field_name"` to explicitly map fields
2. Name matching: If no tag is present, field names must match exactly (case-sensitive)
type MyUDT struct {
FieldA int32 `cql:"field_a"` // Maps to UDT field "field_a"
FieldB string `cql:"field_b"` // Maps to UDT field "field_b"
FieldC string // Maps to UDT field "FieldC" (exact name match)
}
Working with Maps:
UDTs can also be marshaled to/from `map[string]interface{}`:
// Marshal to map
var udtMap map[string]interface{}
err := session.Query("SELECT address FROM users WHERE id = ?",
userID).Scan(&udtMap)
// Access fields
street := udtMap["street"].(string)
zipCode := udtMap["zip_code"].(int)
Advanced Custom Marshaling:
For complex scenarios, implement the UDTMarshaler and UDTUnmarshaler interfaces:
type CustomUDT struct {
fieldA string
fieldB int32
}
// UDTMarshaler for writing to Cassandra
func (c CustomUDT) MarshalUDT(name string, info TypeInfo) ([]byte, error) {
switch name {
case "field_a":
return Marshal(info, c.fieldA)
case "field_b":
return Marshal(info, c.fieldB)
default:
return nil, nil // Unknown fields set to null
}
}
// UDTUnmarshaler for reading from Cassandra
func (c *CustomUDT) UnmarshalUDT(name string, info TypeInfo, data []byte) error {
switch name {
case "field_a":
return Unmarshal(info, data, &c.fieldA)
case "field_b":
return Unmarshal(info, data, &c.fieldB)
default:
return nil // Ignore unknown fields for forward compatibility
}
}
Nested UDTs and Collections:
UDTs can contain other UDTs and collection types:
// Cassandra definitions:
// CREATE TYPE address (street text, city text);
// CREATE TYPE person (name text, addresses list<frozen<address>>);
type Person struct {
Name string `cql:"name"`
Addresses []Address `cql:"addresses"`
}
See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler.
Metrics and tracing ¶
It is possible to provide observer implementations that could be used to gather metrics:
- QueryObserver for monitoring individual queries.
- BatchObserver for monitoring batch queries.
- ConnectObserver for monitoring new connections from the driver to the database.
- FrameHeaderObserver for monitoring individual protocol frames.
CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing.
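As a sketch, a minimal QueryObserver that logs slow queries, plus per-query tracing written to stderr (only a few ObservedQuery fields are used here; adapt to the fields you need):
type slowQueryObserver struct{}

func (slowQueryObserver) ObserveQuery(ctx context.Context, q gocql.ObservedQuery) {
    if elapsed := q.End.Sub(q.Start); elapsed > 100*time.Millisecond {
        log.Printf("slow query (%s): %s err=%v", elapsed, q.Statement, q.Err)
    }
}

// During cluster setup, register the observer for all queries.
cluster.QueryObserver = slowQueryObserver{}

// Request tracing for a single query and write the trace events to stderr.
tracer := gocql.NewTraceWriter(session, os.Stderr)
err := session.Query(`SELECT id, text FROM tweet WHERE timeline = ?`, "me").
    Trace(tracer).Exec()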
Example ¶
package main
import (
"context"
"fmt"
"log"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create table example.tweet(timeline text, id UUID, text text, PRIMARY KEY(id));
create index on example.tweet(timeline);
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.Keyspace = "example"
cluster.Consistency = gocql.Quorum
// connect to the cluster
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
ctx := context.Background()
// insert a tweet
if err := session.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`,
"me", gocql.TimeUUID(), "hello world").ExecContext(ctx); err != nil {
log.Fatal(err)
}
var id gocql.UUID
var text string
/* Search for a specific set of records whose 'timeline' column matches
* the value 'me'. The secondary index that we created earlier will be
* used for optimizing the search */
if err := session.Query(`SELECT id, text FROM tweet WHERE timeline = ? LIMIT 1`,
"me").Consistency(gocql.One).ScanContext(ctx, &id, &text); err != nil {
log.Fatal(err)
}
fmt.Println("Tweet:", id, text)
fmt.Println()
// list all tweets
scanner := session.Query(`SELECT id, text FROM tweet WHERE timeline = ?`,
"me").IterContext(ctx).Scanner()
for scanner.Next() {
err = scanner.Scan(&id, &text)
if err != nil {
log.Fatal(err)
}
fmt.Println("Tweet:", id, text)
}
// scanner.Err() closes the iterator, so neither the scanner nor the iter should be used afterwards.
if err := scanner.Err(); err != nil {
log.Fatal(err)
}
// Tweet: cad53821-3731-11eb-971c-708bcdaada84 hello world
//
// Tweet: cad53821-3731-11eb-971c-708bcdaada84 hello world
// Tweet: d577ab85-3731-11eb-81eb-708bcdaada84 hello world
}
Example (Batch) ¶
Example_batch demonstrates how to execute a batch of statements.
package main
import (
"context"
"fmt"
"log"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create table example.batches(pk int, ck int, description text, PRIMARY KEY(pk, ck));
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.Keyspace = "example"
cluster.ProtoVersion = 4
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
ctx := context.Background()
// Example 1: Simple batch using the Query() method - recommended approach
batch := session.Batch(gocql.LoggedBatch)
batch.Query("INSERT INTO example.batches (pk, ck, description) VALUES (?, ?, ?)", 1, 2, "1.2")
batch.Query("INSERT INTO example.batches (pk, ck, description) VALUES (?, ?, ?)", 1, 3, "1.3")
err = batch.ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
// Example 2: Advanced batch usage with Entries for more control
b := session.Batch(gocql.UnloggedBatch)
b.Entries = append(b.Entries, gocql.BatchEntry{
Stmt: "INSERT INTO example.batches (pk, ck, description) VALUES (?, ?, ?)",
Args: []interface{}{1, 4, "1.4"},
Idempotent: true,
})
b.Entries = append(b.Entries, gocql.BatchEntry{
Stmt: "INSERT INTO example.batches (pk, ck, description) VALUES (?, ?, ?)",
Args: []interface{}{1, 5, "1.5"},
Idempotent: true,
})
err = b.ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
// Example 3: Fluent style chaining
err = session.Batch(gocql.LoggedBatch).
Query("INSERT INTO example.batches (pk, ck, description) VALUES (?, ?, ?)", 1, 6, "1.6").
Query("INSERT INTO example.batches (pk, ck, description) VALUES (?, ?, ?)", 1, 7, "1.7").
ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
// Verification: Display all inserted data
fmt.Println("All inserted data:")
scanner := session.Query("SELECT pk, ck, description FROM example.batches").IterContext(ctx).Scanner()
for scanner.Next() {
var pk, ck int32
var description string
err = scanner.Scan(&pk, &ck, &description)
if err != nil {
log.Fatal(err)
}
fmt.Println(pk, ck, description)
}
if err := scanner.Err(); err != nil {
log.Fatal(err)
}
// All inserted data:
// 1 2 1.2
// 1 3 1.3
// 1 4 1.4
// 1 5 1.5
// 1 6 1.6
// 1 7 1.7
}
Example (DynamicColumns) ¶
Example_dynamicColumns demonstrates how to handle dynamic column list.
package main
import (
"context"
"fmt"
"log"
"os"
"reflect"
"text/tabwriter"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create table example.table1(pk text, ck int, value1 text, value2 int, PRIMARY KEY(pk, ck));
insert into example.table1 (pk, ck, value1, value2) values ('a', 1, 'b', 2);
insert into example.table1 (pk, ck, value1, value2) values ('c', 3, 'd', 4);
insert into example.table1 (pk, ck, value1, value2) values ('c', 5, null, null);
create table example.table2(pk int, value1 timestamp, PRIMARY KEY(pk));
insert into example.table2 (pk, value1) values (1, '2020-01-02 03:04:05');
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.Keyspace = "example"
cluster.ProtoVersion = 4
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
printQuery := func(ctx context.Context, session *gocql.Session, stmt string, values ...interface{}) error {
iter := session.Query(stmt, values...).IterContext(ctx)
fmt.Println(stmt)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
for i, columnInfo := range iter.Columns() {
if i > 0 {
fmt.Fprint(w, "\t| ")
}
fmt.Fprintf(w, "%s (%s)", columnInfo.Name, columnInfo.TypeInfo)
}
for {
rd, err := iter.RowData()
if err != nil {
return err
}
if !iter.Scan(rd.Values...) {
break
}
fmt.Fprint(w, "\n")
for i, val := range rd.Values {
if i > 0 {
fmt.Fprint(w, "\t| ")
}
fmt.Fprint(w, reflect.Indirect(reflect.ValueOf(val)).Interface())
}
}
fmt.Fprint(w, "\n")
w.Flush()
fmt.Println()
return iter.Close()
}
ctx := context.Background()
err = printQuery(ctx, session, "SELECT * FROM table1")
if err != nil {
log.Fatal(err)
}
err = printQuery(ctx, session, "SELECT value2, pk, ck FROM table1")
if err != nil {
log.Fatal(err)
}
err = printQuery(ctx, session, "SELECT * FROM table2")
if err != nil {
log.Fatal(err)
}
// SELECT * FROM table1
// pk (varchar) | ck (int) | value1 (varchar) | value2 (int)
// a | 1 | b | 2
// c | 3 | d | 4
// c | 5 | | 0
//
// SELECT value2, pk, ck FROM table1
// value2 (int) | pk (varchar) | ck (int)
// 2 | a | 1
// 4 | c | 3
// 0 | c | 5
//
// SELECT * FROM table2
// pk (int) | value1 (timestamp)
// 1 | 2020-01-02 03:04:05 +0000 UTC
}
Example (MarshalerUnmarshaler) ¶
Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler.
package main
import (
"context"
"fmt"
"log"
"strconv"
"strings"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
// MyMarshaler implements Marshaler and Unmarshaler.
// It represents a version number stored as string.
type MyMarshaler struct {
major, minor, patch int
}
func (m MyMarshaler) MarshalCQL(info gocql.TypeInfo) ([]byte, error) {
return gocql.Marshal(info, fmt.Sprintf("%d.%d.%d", m.major, m.minor, m.patch))
}
func (m *MyMarshaler) UnmarshalCQL(info gocql.TypeInfo, data []byte) error {
var s string
err := gocql.Unmarshal(info, data, &s)
if err != nil {
return err
}
parts := strings.SplitN(s, ".", 3)
if len(parts) != 3 {
return fmt.Errorf("parse version %q: %d parts instead of 3", s, len(parts))
}
major, err := strconv.Atoi(parts[0])
if err != nil {
return fmt.Errorf("parse version %q major number: %v", s, err)
}
minor, err := strconv.Atoi(parts[1])
if err != nil {
return fmt.Errorf("parse version %q minor number: %v", s, err)
}
patch, err := strconv.Atoi(parts[2])
if err != nil {
return fmt.Errorf("parse version %q patch number: %v", s, err)
}
m.major = major
m.minor = minor
m.patch = patch
return nil
}
// Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler.
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create table example.my_marshaler_table(pk int, value text, PRIMARY KEY(pk));
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.Keyspace = "example"
cluster.ProtoVersion = 4
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
ctx := context.Background()
value := MyMarshaler{
major: 1,
minor: 2,
patch: 3,
}
err = session.Query("INSERT INTO example.my_marshaler_table (pk, value) VALUES (?, ?)",
1, value).ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
var stringValue string
err = session.Query("SELECT value FROM example.my_marshaler_table WHERE pk = 1").
ScanContext(ctx, &stringValue)
if err != nil {
log.Fatal(err)
}
fmt.Println(stringValue)
var unmarshaledValue MyMarshaler
err = session.Query("SELECT value FROM example.my_marshaler_table WHERE pk = 1").
ScanContext(ctx, &unmarshaledValue)
if err != nil {
log.Fatal(err)
}
fmt.Println(unmarshaledValue)
// 1.2.3
// {1 2 3}
}
Example (Nulls) ¶
Example_nulls demonstrates how to distinguish between null and zero value when needed.
Null values are unmarshalled as zero value of the type. If you need to distinguish for example between text column being null and empty string, you can unmarshal into *string field.
package main
import (
"fmt"
"log"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create table example.stringvals(id int, value text, PRIMARY KEY(id));
insert into example.stringvals (id, value) values (1, null);
insert into example.stringvals (id, value) values (2, '');
insert into example.stringvals (id, value) values (3, 'hello');
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.Keyspace = "example"
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
scanner := session.Query(`SELECT id, value FROM stringvals`).Iter().Scanner()
for scanner.Next() {
var (
id int32
val *string
)
err := scanner.Scan(&id, &val)
if err != nil {
log.Fatal(err)
}
if val != nil {
fmt.Printf("Row %d is %q\n", id, *val)
} else {
fmt.Printf("Row %d is null\n", id)
}
}
err = scanner.Err()
if err != nil {
log.Fatal(err)
}
// Row 1 is null
// Row 2 is ""
// Row 3 is "hello"
}
Example (Paging) ¶
Example_paging demonstrates how to manually fetch pages and use page state.
See also package documentation about paging.
package main
import (
"fmt"
"log"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create table example.itoa(id int, description text, PRIMARY KEY(id));
insert into example.itoa (id, description) values (1, 'one');
insert into example.itoa (id, description) values (2, 'two');
insert into example.itoa (id, description) values (3, 'three');
insert into example.itoa (id, description) values (4, 'four');
insert into example.itoa (id, description) values (5, 'five');
insert into example.itoa (id, description) values (6, 'six');
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.Keyspace = "example"
cluster.ProtoVersion = 4
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
var pageState []byte
for {
// We use PageSize(2) for the sake of example, use larger values in production (default is 5000) for performance
// reasons.
iter := session.Query(`SELECT id, description FROM itoa`).PageSize(2).PageState(pageState).Iter()
nextPageState := iter.PageState()
scanner := iter.Scanner()
for scanner.Next() {
var (
id int
description string
)
err = scanner.Scan(&id, &description)
if err != nil {
log.Fatal(err)
}
fmt.Println(id, description)
}
err = scanner.Err()
if err != nil {
log.Fatal(err)
}
fmt.Printf("next page state: %+v\n", nextPageState)
if len(nextPageState) == 0 {
break
}
pageState = nextPageState
}
// 5 five
// 1 one
// next page state: [4 0 0 0 1 0 240 127 255 255 253 0]
// 2 two
// 4 four
// next page state: [4 0 0 0 4 0 240 127 255 255 251 0]
// 6 six
// 3 three
// next page state: [4 0 0 0 3 0 240 127 255 255 249 0]
// next page state: []
}
Example (Set) ¶
Example_set demonstrates how to use sets.
package main
import (
"fmt"
"log"
"sort"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create table example.sets(id int, value set<text>, PRIMARY KEY(id));
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.Keyspace = "example"
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
err = session.Query(`UPDATE sets SET value=? WHERE id=1`, []string{"alpha", "beta", "gamma"}).Exec()
if err != nil {
log.Fatal(err)
}
err = session.Query(`UPDATE sets SET value=value+? WHERE id=1`, "epsilon").Exec()
if err != nil {
// This does not work because the ? expects a set, not a single item.
fmt.Printf("expected error: %v\n", err)
}
err = session.Query(`UPDATE sets SET value=value+? WHERE id=1`, []string{"delta"}).Exec()
if err != nil {
log.Fatal(err)
}
// map[x]struct{} is supported too.
toRemove := map[string]struct{}{
"alpha": {},
"gamma": {},
}
err = session.Query(`UPDATE sets SET value=value-? WHERE id=1`, toRemove).Exec()
if err != nil {
log.Fatal(err)
}
scanner := session.Query(`SELECT id, value FROM sets`).Iter().Scanner()
for scanner.Next() {
var (
id int32
val []string
)
err := scanner.Scan(&id, &val)
if err != nil {
log.Fatal(err)
}
sort.Strings(val)
fmt.Printf("Row %d is %v\n", id, val)
}
err = scanner.Err()
if err != nil {
log.Fatal(err)
}
// expected error: can not marshal string into set(varchar)
// Row 1 is [beta delta]
}
Example (SetKeyspace) ¶
Example_setKeyspace demonstrates the SetKeyspace method that allows specifying keyspace per query, available with Protocol 5+ (Cassandra 4.0+).
This example shows the complete keyspace precedence hierarchy:
1. Keyspace in CQL query string (keyspace.table) - HIGHEST precedence
2. SetKeyspace() method - MIDDLE precedence
3. Default session keyspace - LOWEST precedence
package main
import (
"context"
"fmt"
"log"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
func main() {
/* The example assumes the following CQL was used to setup the keyspaces:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create keyspace example2 with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create keyspace example3 with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create table example.users(id int, name text, PRIMARY KEY(id));
create table example2.users(id int, name text, PRIMARY KEY(id));
create table example3.users(id int, name text, PRIMARY KEY(id));
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.ProtoVersion = 5 // SetKeyspace requires Protocol 5+, available in Cassandra 4.0+
cluster.Keyspace = "example" // Set a default keyspace
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
ctx := context.Background()
// Example 1: Keyspace Precedence Hierarchy Demonstration
fmt.Println("Demonstrating complete keyspace precedence hierarchy:")
fmt.Println("1. Keyspace in CQL (keyspace.table) - HIGHEST")
fmt.Println("2. SetKeyspace() method - MIDDLE")
fmt.Println("3. Default session keyspace - LOWEST")
fmt.Println()
// Insert test data
// Default keyspace (example) - lowest precedence
err = session.Query("INSERT INTO users (id, name) VALUES (?, ?)").
Bind(1, "Alice").
ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
// SetKeyspace overrides default - middle precedence
err = session.Query("INSERT INTO users (id, name) VALUES (?, ?)").
SetKeyspace("example2").
Bind(1, "Bob").
ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
// Fully qualified table name - highest precedence
err = session.Query("INSERT INTO example3.users (id, name) VALUES (?, ?)").
Bind(1, "Charlie").
ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
// Example 2: Fully qualified table names override SetKeyspace
fmt.Println("Example 2: Fully qualified table names take precedence over SetKeyspace:")
// This query sets keyspace to "example2" via SetKeyspace, but the fully qualified
// table name "example3.users" takes precedence - query will target example3
err = session.Query("INSERT INTO example3.users (id, name) VALUES (?, ?)").
SetKeyspace("example2"). // This is IGNORED because CQL has "example3.users"
Bind(2, "Diana").
ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
fmt.Println("Inserted Diana into example3.users despite SetKeyspace(\"example2\")")
// Verify data went to example3, not example2
var count int
iter := session.Query("SELECT COUNT(*) FROM users").
SetKeyspace("example2").
IterContext(ctx)
if iter.Scan(&count) {
fmt.Printf("Count in example2: %d (only Bob)\n", count)
}
if err := iter.Close(); err != nil {
log.Fatal(err)
}
iter = session.Query("SELECT COUNT(*) FROM users").
SetKeyspace("example3").
IterContext(ctx)
if iter.Scan(&count) {
fmt.Printf("Count in example3: %d (Charlie and Diana)\n", count)
}
if err := iter.Close(); err != nil {
log.Fatal(err)
}
// Example 3: SetKeyspace overrides default keyspace
fmt.Println("\nExample 3: SetKeyspace overrides default keyspace:")
// Query using default keyspace (no SetKeyspace)
var id int
var name string
iter = session.Query("SELECT id, name FROM users WHERE id = ?", 1).
IterContext(ctx) // Uses default keyspace "example"
if iter.Scan(&id, &name) {
fmt.Printf("Default keyspace (example): ID %d, Name %s\n", id, name)
}
if err := iter.Close(); err != nil {
log.Fatal(err)
}
// SetKeyspace overrides default
iter = session.Query("SELECT id, name FROM users WHERE id = ?", 1).
SetKeyspace("example2"). // Override default keyspace
IterContext(ctx)
if iter.Scan(&id, &name) {
fmt.Printf("SetKeyspace override (example2): ID %d, Name %s\n", id, name)
}
if err := iter.Close(); err != nil {
log.Fatal(err)
}
// Example 4: Mixed query patterns in one workflow
fmt.Println("\nExample 4: Using all precedence levels in one workflow:")
// Query from default keyspace
iter = session.Query("SELECT name FROM users WHERE id = 1").IterContext(ctx)
if iter.Scan(&name) {
fmt.Printf("Default (example): %s\n", name)
}
iter.Close()
// Query using SetKeyspace
iter = session.Query("SELECT name FROM users WHERE id = 1").
SetKeyspace("example2").IterContext(ctx)
if iter.Scan(&name) {
fmt.Printf("SetKeyspace (example2): %s\n", name)
}
iter.Close()
// Query using fully qualified table name (ignores both default and SetKeyspace)
iter = session.Query("SELECT name FROM example3.users WHERE id = 1").
SetKeyspace("example2"). // This is ignored due to qualified table name
IterContext(ctx)
if iter.Scan(&name) {
fmt.Printf("Qualified name (example3): %s\n", name)
}
iter.Close()
// Demonstrating complete keyspace precedence hierarchy:
// 1. Keyspace in CQL (keyspace.table) - HIGHEST
// 2. SetKeyspace() method - MIDDLE
// 3. Default session keyspace - LOWEST
//
// Example 2: Fully qualified table names take precedence over SetKeyspace:
// Inserted Diana into example3.users despite SetKeyspace("example2")
// Count in example2: 1 (only Bob)
// Count in example3: 2 (Charlie and Diana)
//
// Example 3: SetKeyspace overrides default keyspace:
// Default keyspace (example): ID 1, Name Alice
// SetKeyspace override (example2): ID 1, Name Bob
//
// Example 4: Using all precedence levels in one workflow:
// Default (example): Alice
// SetKeyspace (example2): Bob
// Qualified name (example3): Charlie
}
Example (StructuredLogging) ¶
Example_structuredLogging demonstrates the new structured logging features introduced in 2.0.0. The driver now supports structured logging with proper log levels and integration with popular logging libraries like Zap and Zerolog. This example shows production-ready configurations for structured logging with proper component separation to distinguish between application and driver logs.
package main
import (
"context"
"log"
"os"
"github.com/rs/zerolog"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
gocql "github.com/apache/cassandra-gocql-driver/v2"
"github.com/apache/cassandra-gocql-driver/v2/gocqlzap"
"github.com/apache/cassandra-gocql-driver/v2/gocqlzerolog"
)
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create table example.log_demo(id int, value text, PRIMARY KEY(id));
*/
ctx := context.Background()
// Example 1: Using Zap logger integration
// Create a production Zap logger with structured JSON output and human-readable timestamps
// Production config uses JSON encoding, info level, and proper error handling
config := zap.NewProductionConfig()
config.EncoderConfig.TimeKey = "timestamp"
config.EncoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder // Human-readable timestamp format
zapLogger, err := config.Build()
if err != nil {
log.Fatal(err)
}
defer zapLogger.Sync()
// Create base logger with service identifier
baseLogger := zapLogger.With(zap.String("service", "gocql-app"))
// Create application and driver loggers with component identifiers
appLogger := baseLogger.With(zap.String("component", "app"))
driverLogger := baseLogger.With(zap.String("component", "gocql-driver"))
appLogger.Info("Starting Zap structured logging example",
zap.String("example", "structured_logging"),
zap.String("logger_type", "zap"))
// Create gocql logger from driver logger
gocqlZapLogger := gocqlzap.NewUnnamedZapLogger(driverLogger)
zapCluster := gocql.NewCluster("localhost:9042")
zapCluster.Keyspace = "example"
zapCluster.Logger = gocqlZapLogger
zapSession, err := zapCluster.CreateSession()
if err != nil {
appLogger.Fatal("Failed to create session", zap.Error(err))
}
defer zapSession.Close()
// Perform some operations that will generate logs
appLogger.Info("Inserting data into database",
zap.String("operation", "insert"),
zap.Int("record_id", 1))
err = zapSession.Query("INSERT INTO example.log_demo (id, value) VALUES (?, ?)").
Bind(1, "zap logging demo").
ExecContext(ctx)
if err != nil {
appLogger.Error("Insert operation failed", zap.Error(err))
log.Fatal(err)
}
appLogger.Info("Querying data from database",
zap.String("operation", "select"),
zap.Int("record_id", 1))
var id int
var value string
iter := zapSession.Query("SELECT id, value FROM example.log_demo WHERE id = ?").
Bind(1).
IterContext(ctx)
if iter.Scan(&id, &value) {
// Successfully scanned the row
}
err = iter.Close()
if err != nil {
appLogger.Error("Select operation failed", zap.Error(err))
log.Fatal(err)
}
appLogger.Info("Database operation completed successfully",
zap.String("operation", "select"),
zap.Int("record_id", id),
zap.String("record_value", value))
// Example 2: Using Zerolog integration
// Create a production Zerolog logger with structured JSON output
// Production config includes timestamps, service info, and appropriate log level
baseZerologLogger := zerolog.New(os.Stdout).
Level(zerolog.InfoLevel).
With().
Timestamp().
Str("service", "gocql-app").
Logger()
// Create application logger with component identifier
appZerologLogger := baseZerologLogger.With().
Str("component", "app").
Logger()
// Create driver logger with component identifier
driverZerologLogger := baseZerologLogger.With().
Str("component", "gocql-driver").
Logger()
appZerologLogger.Info().
Str("example", "structured_logging").
Str("logger_type", "zerolog").
Msg("Starting Zerolog structured logging example")
// Create gocql logger from driver logger
gocqlZerologLogger := gocqlzerolog.NewUnnamedZerologLogger(driverZerologLogger)
zerologCluster := gocql.NewCluster("localhost:9042")
zerologCluster.Keyspace = "example"
zerologCluster.Logger = gocqlZerologLogger
zerologSession, err := zerologCluster.CreateSession()
if err != nil {
appZerologLogger.Fatal().Err(err).Msg("Failed to create session")
}
defer zerologSession.Close()
// Perform operations with Zerolog
appZerologLogger.Info().
Str("operation", "insert").
Int("record_id", 2).
Msg("Inserting data into database")
err = zerologSession.Query("INSERT INTO example.log_demo (id, value) VALUES (?, ?)").
Bind(2, "zerolog logging demo").
ExecContext(ctx)
if err != nil {
appZerologLogger.Error().Err(err).Msg("Insert operation failed")
log.Fatal(err)
}
appZerologLogger.Info().
Str("operation", "select").
Int("record_id", 2).
Msg("Querying data from database")
iter = zerologSession.Query("SELECT id, value FROM example.log_demo WHERE id = ?").
Bind(2).
IterContext(ctx)
if iter.Scan(&id, &value) {
// Successfully scanned the row
}
err = iter.Close()
if err != nil {
appZerologLogger.Error().Err(err).Msg("Select operation failed")
log.Fatal(err)
}
appZerologLogger.Info().
Str("operation", "select").
Int("record_id", id).
Str("record_value", value).
Msg("Database operation completed successfully")
// Example 1 - Zap structured logging output (JSON format):
// {"level":"info","timestamp":"2023-12-31T12:00:00.000Z","msg":"Starting Zap structured logging example","service":"gocql-app","component":"app","example":"structured_logging","logger_type":"zap"}
// {"level":"info","timestamp":"2023-12-31T12:00:00.100Z","msg":"Discovered protocol version.","service":"gocql-app","component":"gocql-driver","protocol_version":5}
// {"level":"info","timestamp":"2023-12-31T12:00:00.200Z","msg":"Control connection connected to host.","service":"gocql-app","component":"gocql-driver","host_addr":"127.0.0.1","host_id":"a1b2c3d4-e5f6-7890-abcd-ef1234567890"}
// {"level":"info","timestamp":"2023-12-31T12:00:00.300Z","msg":"Refreshed ring.","service":"gocql-app","component":"gocql-driver","ring":"[127.0.0.1-a1b2c3d4-e5f6-7890-abcd-ef1234567890:UP]"}
// {"level":"info","timestamp":"2023-12-31T12:00:00.400Z","msg":"Session initialized successfully.","service":"gocql-app","component":"gocql-driver"}
// {"level":"info","timestamp":"2023-12-31T12:00:01.000Z","msg":"Inserting data into database","service":"gocql-app","component":"app","operation":"insert","record_id":1}
// {"level":"info","timestamp":"2023-12-31T12:00:02.000Z","msg":"Querying data from database","service":"gocql-app","component":"app","operation":"select","record_id":1}
// {"level":"info","timestamp":"2023-12-31T12:00:03.000Z","msg":"Database operation completed successfully","service":"gocql-app","component":"app","operation":"select","record_id":1,"record_value":"zap logging demo"}
//
// Example 2 - Zerolog structured logging output (JSON format):
// {"level":"info","service":"gocql-app","component":"app","example":"structured_logging","logger_type":"zerolog","time":"2023-12-31T12:00:10Z","message":"Starting Zerolog structured logging example"}
// {"level":"info","service":"gocql-app","component":"gocql-driver","protocol_version":5,"time":"2023-12-31T12:00:10.1Z","message":"Discovered protocol version."}
// {"level":"info","service":"gocql-app","component":"gocql-driver","host_addr":"127.0.0.1","host_id":"a1b2c3d4-e5f6-7890-abcd-ef1234567890","time":"2023-12-31T12:00:10.2Z","message":"Control connection connected to host."}
// {"level":"info","service":"gocql-app","component":"gocql-driver","ring":"[127.0.0.1-a1b2c3d4-e5f6-7890-abcd-ef1234567890:UP]","time":"2023-12-31T12:00:10.3Z","message":"Refreshed ring."}
// {"level":"info","service":"gocql-app","component":"gocql-driver","time":"2023-12-31T12:00:10.4Z","message":"Session initialized successfully."}
// {"level":"info","service":"gocql-app","component":"app","operation":"insert","record_id":2,"time":"2023-12-31T12:00:11Z","message":"Inserting data into database"}
// {"level":"info","service":"gocql-app","component":"app","operation":"select","record_id":2,"time":"2023-12-31T12:00:12Z","message":"Querying data from database"}
// {"level":"info","service":"gocql-app","component":"app","operation":"select","record_id":2,"record_value":"zerolog logging demo","time":"2023-12-31T12:00:13Z","message":"Database operation completed successfully"}
}
Example (UserDefinedTypesMap) ¶
Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs.
package main
import (
"context"
"fmt"
"log"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create type example.my_udt (field_a text, field_b int);
create table example.my_udt_table(pk int, value frozen<my_udt>, PRIMARY KEY(pk));
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.Keyspace = "example"
cluster.ProtoVersion = 4
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
ctx := context.Background()
value := map[string]interface{}{
"field_a": "a value",
"field_b": 42,
}
err = session.Query("INSERT INTO example.my_udt_table (pk, value) VALUES (?, ?)",
1, value).ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
// Read the UDT value back
var readValue map[string]interface{}
err = session.Query("SELECT value FROM example.my_udt_table WHERE pk = 1").ScanContext(ctx, &readValue)
if err != nil {
log.Fatal(err)
}
fmt.Println(readValue["field_a"])
fmt.Println(readValue["field_b"])
// a value
// 42
}
Example (UserDefinedTypesStruct) ¶
Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
package main
import (
"context"
"fmt"
"log"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
type MyUDT struct {
FieldA string `cql:"field_a"`
FieldB int32 `cql:"field_b"`
}
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create type example.my_udt (field_a text, field_b int);
create table example.my_udt_table(pk int, value frozen<my_udt>, PRIMARY KEY(pk));
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.Keyspace = "example"
cluster.ProtoVersion = 4
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
ctx := context.Background()
value := MyUDT{
FieldA: "a value",
FieldB: 42,
}
err = session.Query("INSERT INTO example.my_udt_table (pk, value) VALUES (?, ?)",
1, value).ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
// Read the UDT value back
var readValue MyUDT
err = session.Query("SELECT value FROM example.my_udt_table WHERE pk = 1").ScanContext(ctx, &readValue)
if err != nil {
log.Fatal(err)
}
fmt.Println(readValue.FieldA)
fmt.Println(readValue.FieldB)
// a value
// 42
}
Example (Vector) ¶
Example_vector demonstrates how to work with vector search in Cassandra 5.0+. This example shows Cassandra's native vector search capabilities using ANN (Approximate Nearest Neighbor) search with ORDER BY ... ANN OF syntax for finding similar vectors. Note: Requires Cassandra 5.0+ and a SAI index on the vector column for ANN search.
package main
import (
"context"
"fmt"
"log"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create table example.vectors(
id int,
item_name text,
embedding vector<float, 5>,
PRIMARY KEY(id)
);
-- Create SAI index for vector search (required for ANN)
CREATE INDEX IF NOT EXISTS ann_index ON example.vectors(embedding) USING 'sai';
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.Keyspace = "example"
cluster.ProtoVersion = 4
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
ctx := context.Background()
// Create the table first (if it doesn't exist)
err = session.Query(`CREATE TABLE IF NOT EXISTS example.vectors(
id int,
item_name text,
embedding vector<float, 5>,
PRIMARY KEY(id)
)`).ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
// Create SAI index for vector search (required for ANN search)
fmt.Println("Creating SAI index for vector search...")
err = session.Query(`CREATE INDEX IF NOT EXISTS ann_index
ON example.vectors(embedding) USING 'sai'`).ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
// Insert sample vectors representing different items
// These could be embeddings from ML models for products, documents, etc.
vectorData := []struct {
id int
name string
vector []float32
}{
{1, "apple", []float32{0.8, 0.2, 0.1, 0.9, 0.3}},
{2, "orange", []float32{0.7, 0.3, 0.2, 0.8, 0.4}},
{3, "banana", []float32{0.6, 0.4, 0.9, 0.2, 0.7}},
{4, "grape", []float32{0.9, 0.1, 0.3, 0.7, 0.5}},
{5, "watermelon", []float32{0.2, 0.8, 0.6, 0.4, 0.9}},
{6, "strawberry", []float32{0.8, 0.3, 0.2, 0.9, 0.4}},
{7, "pineapple", []float32{0.3, 0.7, 0.8, 0.1, 0.6}},
{8, "mango", []float32{0.7, 0.4, 0.5, 0.8, 0.2}},
}
// Insert all vectors
fmt.Println("Inserting sample vectors...")
for _, item := range vectorData {
err = session.Query("INSERT INTO example.vectors (id, item_name, embedding) VALUES (?, ?, ?)",
item.id, item.name, item.vector).ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
}
// Define a query vector (e.g., searching for items similar to "apple-like" characteristics)
queryVector := []float32{0.8, 0.2, 0.1, 0.9, 0.3}
fmt.Printf("Searching for vectors similar to: %v\n\n", queryVector)
// Perform ANN (Approximate Nearest Neighbor) search using ORDER BY ... ANN OF
// This finds the 3 most similar vectors to our query vector
fmt.Println("Top 3 most similar items (using ANN search):")
iter := session.Query(`
SELECT id, item_name, embedding
FROM example.vectors
ORDER BY embedding ANN OF ?
LIMIT 3`,
queryVector,
).IterContext(ctx)
for {
var id int
var itemName string
var embedding []float32
if !iter.Scan(&id, &itemName, &embedding) {
break
}
fmt.Printf(" %s (ID: %d) - Vector: %v\n", itemName, id, embedding)
}
if err := iter.Close(); err != nil {
log.Fatal(err)
}
fmt.Println()
// Perform similarity search with a different query vector
queryVector2 := []float32{0.2, 0.8, 0.6, 0.4, 0.9}
fmt.Printf("Searching for vectors similar to: %v\n", queryVector2)
fmt.Println("Top 4 most similar items:")
scanner := session.Query(`
SELECT id, item_name, embedding
FROM example.vectors
ORDER BY embedding ANN OF ?
LIMIT 4`,
queryVector2,
).IterContext(ctx).Scanner()
for scanner.Next() {
var id int
var itemName string
var embedding []float32
err = scanner.Scan(&id, &itemName, &embedding)
if err != nil {
log.Fatal(err)
}
fmt.Printf(" %s (ID: %d) - Vector: %v\n", itemName, id, embedding)
}
if err := scanner.Err(); err != nil {
log.Fatal(err)
}
fmt.Println()
// Basic vector retrieval (traditional approach)
fmt.Println("Basic vector retrieval by ID:")
var id int
var itemName string
var embedding []float32
iter = session.Query("SELECT id, item_name, embedding FROM example.vectors WHERE id = ?", 1).
IterContext(ctx)
if !iter.Scan(&id, &itemName, &embedding) {
log.Fatal(iter.Close())
}
fmt.Printf(" %s (ID: %d) - Vector: %v\n", itemName, id, embedding)
fmt.Println()
// Show all vectors for comparison
fmt.Println("All vectors in the database:")
allVectors := session.Query("SELECT id, item_name, embedding FROM example.vectors").IterContext(ctx)
for {
var id int
var itemName string
var embedding []float32
if !allVectors.Scan(&id, &itemName, &embedding) {
break
}
fmt.Printf(" %s (ID: %d) - Vector: %v\n", itemName, id, embedding)
}
if err := allVectors.Close(); err != nil {
log.Fatal(err)
}
// Example output:
// Creating SAI index for vector search...
// Inserting sample vectors...
// Searching for vectors similar to: [0.8 0.2 0.1 0.9 0.3]
//
// Top 3 most similar items (using ANN search):
// apple (ID: 1) - Vector: [0.8 0.2 0.1 0.9 0.3]
// strawberry (ID: 6) - Vector: [0.8 0.3 0.2 0.9 0.4]
// orange (ID: 2) - Vector: [0.7 0.3 0.2 0.8 0.4]
}
Index ¶
- Constants
- Variables
- func Crc24(buf []byte) uint32
- func Crc32(b []byte) uint32
- func JoinHostPort(addr string, port int) string
- func LookupIP(host string) ([]net.IP, error)
- func Marshal(info TypeInfo, value interface{}) ([]byte, error)
- func NamedValue(name string, value interface{}) interface{}
- func NewErrProtocol(format string, args ...interface{}) error
- func NonLocalReplicasFallback() func(policy *tokenAwareHostPolicy)
- func ShuffleReplicas() func(*tokenAwareHostPolicy)
- func SingleHostReadyPolicy(p HostSelectionPolicy) *singleHostReadyPolicy
- func TupleColumnName(c string, n int) string
- func Unmarshal(info TypeInfo, data []byte, value interface{}) error
- type AddressTranslator
- type AddressTranslatorFunc
- type AggregateMetadata
- type Authenticator
- type Batch
- func (b *Batch) Bind(stmt string, bind func(q *QueryInfo) ([]interface{}, error))
- func (b *Batch) Consistency(cons Consistency) *Batch
- func (b *Batch) Context() context.Context deprecated
- func (b *Batch) DefaultTimestamp(enable bool) *Batch
- func (b *Batch) Exec() error
- func (b *Batch) ExecCAS(dest ...interface{}) (applied bool, iter *Iter, err error)
- func (b *Batch) ExecCASContext(ctx context.Context, dest ...interface{}) (applied bool, iter *Iter, err error)
- func (b *Batch) ExecContext(ctx context.Context) error
- func (b *Batch) GetConsistency() Consistency
- func (b *Batch) IsIdempotent() bool
- func (b *Batch) Iter() *Iter
- func (b *Batch) IterContext(ctx context.Context) *Iter
- func (b *Batch) Keyspace() string
- func (b *Batch) MapExecCAS(dest map[string]interface{}) (applied bool, iter *Iter, err error)
- func (b *Batch) MapExecCASContext(ctx context.Context, dest map[string]interface{}) (applied bool, iter *Iter, err error)
- func (b *Batch) Observer(observer BatchObserver) *Batch
- func (b *Batch) Query(stmt string, args ...interface{}) *Batch
- func (b *Batch) RetryPolicy(r RetryPolicy) *Batch
- func (b *Batch) SerialConsistency(cons Consistency) *Batch
- func (b *Batch) SetConsistency(c Consistency) deprecated
- func (b *Batch) SetKeyspace(keyspace string) *Batch
- func (b *Batch) Size() int
- func (b *Batch) SpeculativeExecutionPolicy(sp SpeculativeExecutionPolicy) *Batch
- func (b *Batch) Trace(trace Tracer) *Batch
- func (b *Batch) WithContext(ctx context.Context) *Batch deprecated
- func (b *Batch) WithNowInSeconds(now int) *Batch
- func (b *Batch) WithTimestamp(timestamp int64) *Batch
- type BatchEntry
- type BatchObserver
- type BatchType
- type CQLType
- type ClusterConfig
- type CollectionType
- type ColumnIndexMetadata
- type ColumnInfo
- type ColumnKind
- type ColumnMetadata
- type ColumnOrder
- type Compressor
- type Conn
- type ConnConfig
- type ConnErrorHandler
- type ConnReader
- type ConnectObserver
- type Consistency
- type ConstantReconnectionPolicy
- type ConvictionPolicy
- type DialedHost
- type Dialer
- type DowngradingConsistencyRetryPolicy
- type Duration
- type ErrProtocol
- type Error deprecated
- type ErrorMap
- type ExecutableQuery deprecated
- type ExecutableStatement
- type ExponentialBackoffRetryPolicy
- type ExponentialReconnectionPolicy
- type FrameHeaderObserver
- type FunctionMetadata
- type HostDialer
- type HostFilter
- type HostFilterFunc
- type HostInfo
- func (h *HostInfo) BroadcastAddress() net.IP
- func (h *HostInfo) ClusterName() string
- func (h *HostInfo) ConnectAddress() net.IP
- func (h *HostInfo) ConnectAddressAndPort() string
- func (h *HostInfo) DSEVersion() string
- func (h *HostInfo) DataCenter() string
- func (h *HostInfo) Equal(host *HostInfo) bool
- func (h *HostInfo) Graph() bool
- func (h *HostInfo) HostID() string
- func (h *HostInfo) HostnameAndPort() string
- func (h *HostInfo) IsUp() bool
- func (h *HostInfo) ListenAddress() net.IP
- func (h *HostInfo) Partitioner() string
- func (h *HostInfo) Peer() net.IP
- func (h *HostInfo) Port() int
- func (h *HostInfo) PreferredIP() net.IP
- func (h *HostInfo) RPCAddress() net.IP
- func (h *HostInfo) Rack() string
- func (h *HostInfo) State() nodeState
- func (h *HostInfo) String() string
- func (h *HostInfo) Tokens() []string
- func (h *HostInfo) Version() cassVersion
- func (h *HostInfo) WorkLoad() string
- type HostSelectionPolicy
- func DCAwareRoundRobinPolicy(localDC string) HostSelectionPolicy
- func RackAwareRoundRobinPolicy(localDC string, localRack string) HostSelectionPolicy
- func RoundRobinHostPolicy() HostSelectionPolicy
- func TokenAwareHostPolicy(fallback HostSelectionPolicy, opts ...func(*tokenAwareHostPolicy)) HostSelectionPolicy
- type HostStateNotifier
- type HostTierer
- type Iter
- func (iter *Iter) Attempts() int
- func (iter *Iter) Close() error
- func (iter *Iter) Columns() []ColumnInfo
- func (iter *Iter) GetCustomPayload() map[string][]byte
- func (iter *Iter) Host() *HostInfo
- func (iter *Iter) Keyspace() string
- func (iter *Iter) Latency() int64
- func (iter *Iter) MapScan(m map[string]interface{}) bool
- func (iter *Iter) NumRows() int
- func (iter *Iter) PageState() []byte
- func (iter *Iter) RowData() (RowData, error)
- func (iter *Iter) Scan(dest ...interface{}) bool
- func (iter *Iter) Scanner() Scanner
- func (iter *Iter) SliceMap() ([]map[string]interface{}, error)
- func (iter *Iter) Table() string
- func (iter *Iter) Warnings() []string
- func (iter *Iter) WillSwitchPage() bool
- type KeyspaceMetadata
- type KeyspaceUpdateEvent
- type LogField
- type LogFieldValue
- type LogFieldValueType
- type LogLevel
- type MarshalError
- type Marshaler
- type MaterializedViewMetadata
- type NextHost
- type NonSpeculativeExecution
- type ObservedBatch
- type ObservedConnect
- type ObservedFrameHeader
- type ObservedQuery
- type ObservedStream
- type PasswordAuthenticator
- type PoolConfig
- type Query
- func (q *Query) Bind(v ...interface{}) *Query
- func (q *Query) Consistency(c Consistency) *Query
- func (q *Query) Context() context.Context deprecated
- func (q *Query) CustomPayload(customPayload map[string][]byte) *Query
- func (q *Query) DefaultTimestamp(enable bool) *Query
- func (q *Query) Exec() error
- func (q *Query) ExecContext(ctx context.Context) error
- func (q *Query) GetConsistency() Consistency
- func (q *Query) GetHostID() string
- func (q *Query) Idempotent(value bool) *Query
- func (q *Query) IsIdempotent() bool
- func (q *Query) Iter() *Iter
- func (q *Query) IterContext(ctx context.Context) *Iter
- func (q *Query) Keyspace() string
- func (q *Query) MapScan(m map[string]interface{}) error
- func (q *Query) MapScanCAS(dest map[string]interface{}) (applied bool, err error)
- func (q *Query) MapScanCASContext(ctx context.Context, dest map[string]interface{}) (applied bool, err error)
- func (q *Query) MapScanContext(ctx context.Context, m map[string]interface{}) error
- func (q *Query) NoSkipMetadata() *Query
- func (q *Query) Observer(observer QueryObserver) *Query
- func (q *Query) PageSize(n int) *Query
- func (q *Query) PageState(state []byte) *Query
- func (q *Query) Prefetch(p float64) *Query
- func (q *Query) RetryPolicy(r RetryPolicy) *Query
- func (q *Query) RoutingKey(routingKey []byte) *Query
- func (q *Query) Scan(dest ...interface{}) error
- func (q *Query) ScanCAS(dest ...interface{}) (applied bool, err error)
- func (q *Query) ScanCASContext(ctx context.Context, dest ...interface{}) (applied bool, err error)
- func (q *Query) ScanContext(ctx context.Context, dest ...interface{}) error
- func (q *Query) SerialConsistency(cons Consistency) *Query
- func (q *Query) SetConsistency(c Consistency) deprecated
- func (q *Query) SetHostID(hostID string) *Query
- func (q *Query) SetKeyspace(keyspace string) *Query
- func (q *Query) SetSpeculativeExecutionPolicy(sp SpeculativeExecutionPolicy) *Query
- func (q Query) Statement() string
- func (q Query) String() string
- func (q *Query) Trace(trace Tracer) *Query
- func (q Query) Values() []interface{}
- func (q *Query) WithContext(ctx context.Context) *Query deprecated
- func (q *Query) WithNowInSeconds(now int) *Query
- func (q *Query) WithTimestamp(timestamp int64) *Query
- type QueryInfo
- type QueryObserver
- type ReadyPolicy
- type ReconnectionPolicy
- type RegisteredTypes
- type RequestErrAlreadyExists
- type RequestErrCASWriteUnknown
- type RequestErrCDCWriteFailure
- type RequestErrFunctionFailure
- type RequestErrReadFailure
- type RequestErrReadTimeout
- type RequestErrUnavailable
- type RequestErrUnprepared
- type RequestErrWriteFailure
- type RequestErrWriteTimeout
- type RequestError
- type RetryPolicy
- type RetryType
- type RetryableQuery
- type RowData
- type Scanner
- type SelectedHost
- type SerialConsistency
- type Session
- func (s *Session) AwaitSchemaAgreement(ctx context.Context) error
- func (s *Session) Batch(typ BatchType) *Batch
- func (s *Session) Bind(stmt string, b func(q *QueryInfo) ([]interface{}, error)) *Query
- func (s *Session) Close()
- func (s *Session) Closed() bool
- func (s *Session) ExecuteBatch(batch *Batch) error deprecated
- func (s *Session) ExecuteBatchCAS(batch *Batch, dest ...interface{}) (applied bool, iter *Iter, err error) deprecated
- func (s *Session) GetHosts() []*HostInfo
- func (s *Session) KeyspaceMetadata(keyspace string) (*KeyspaceMetadata, error)
- func (s *Session) MapExecuteBatchCAS(batch *Batch, dest map[string]interface{}) (applied bool, iter *Iter, err error) deprecated
- func (s *Session) NewBatch(typ BatchType) *Batch deprecated
- func (s *Session) Query(stmt string, values ...interface{}) *Query
- type SetHosts
- type SetPartitioner
- type SimpleCQLType
- type SimpleConvictionPolicy
- type SimpleRetryPolicy
- type SimpleSpeculativeExecution
- type SpeculativeExecutionPolicy
- type SslOptions
- type Statement
- type StdLogger deprecated
- type StreamObserver
- type StreamObserverContext
- type StructuredLogger
- type TableMetadata
- type Tracer
- type TupleTypeInfo
- type Type
- type TypeInfo
- type UDTField
- type UDTMarshaler
- type UDTTypeInfo
- type UDTUnmarshaler
- type UUID
- func MaxTimeUUID(t time.Time) UUID
- func MinTimeUUID(t time.Time) UUID
- func MustRandomUUID() UUID
- func ParseUUID(input string) (UUID, error)
- func RandomUUID() (UUID, error)
- func TimeUUID() UUID
- func TimeUUIDWith(t int64, clock uint32, node []byte) UUID
- func UUIDFromBytes(input []byte) (UUID, error)
- func UUIDFromTime(t time.Time) UUID
- func (u UUID) Bytes() []byte
- func (u UUID) Clock() uint32
- func (u UUID) MarshalJSON() ([]byte, error)
- func (u UUID) MarshalText() ([]byte, error)
- func (u UUID) Node() []byte
- func (u UUID) String() string
- func (u UUID) Time() time.Time
- func (u UUID) Timestamp() int64
- func (u *UUID) UnmarshalJSON(data []byte) error
- func (u *UUID) UnmarshalText(text []byte) (err error)
- func (u UUID) Variant() int
- func (u UUID) Version() int
- type UnmarshalError
- type Unmarshaler
- type UserTypeMetadata
- type VectorType
Examples ¶
- Package
- Package (Batch)
- Package (DynamicColumns)
- Package (MarshalerUnmarshaler)
- Package (Nulls)
- Package (Paging)
- Package (Set)
- Package (SetKeyspace)
- Package (StructuredLogging)
- Package (UserDefinedTypesMap)
- Package (UserDefinedTypesStruct)
- Package (Vector)
- Batch.MapExecCAS
- Query.MapScanCAS
- UDTMarshaler
- UDTUnmarshaler
Constants ¶
const (
    // ErrCodeServer indicates unexpected error on server-side.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1246-L1247
    ErrCodeServer = 0x0000
    // ErrCodeProtocol indicates a protocol violation by some client message.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1248-L1250
    ErrCodeProtocol = 0x000A
    // ErrCodeCredentials indicates missing required authentication.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1251-L1254
    ErrCodeCredentials = 0x0100
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1255-L1265
    ErrCodeUnavailable = 0x1000
    // ErrCodeOverloaded returned in case of request on overloaded node coordinator.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1266-L1267
    ErrCodeOverloaded = 0x1001
    // ErrCodeBootstrapping returned from the coordinator node in bootstrapping phase.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1268-L1269
    ErrCodeBootstrapping = 0x1002
    // ErrCodeTruncate indicates truncation exception.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1270
    ErrCodeTruncate = 0x1003
    // ErrCodeWriteTimeout returned in case of timeout during the request write.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1271-L1304
    ErrCodeWriteTimeout = 0x1100
    // ErrCodeReadTimeout returned in case of timeout during the request read.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1305-L1321
    ErrCodeReadTimeout = 0x1200
    // ErrCodeReadFailure indicates request read error which is not covered by ErrCodeReadTimeout.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1322-L1340
    ErrCodeReadFailure = 0x1300
    // ErrCodeFunctionFailure indicates an error in user-defined function.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1341-L1347
    ErrCodeFunctionFailure = 0x1400
    // ErrCodeWriteFailure indicates request write error which is not covered by ErrCodeWriteTimeout.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1348-L1385
    ErrCodeWriteFailure = 0x1500
    // ErrCodeCDCWriteFailure is defined, but not yet documented in CQLv5 protocol.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1386
    ErrCodeCDCWriteFailure = 0x1600
    // ErrCodeCASWriteUnknown indicates only partially completed CAS operation.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1387-L1397
    ErrCodeCASWriteUnknown = 0x1700
    // ErrCodeSyntax indicates the syntax error in the query.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1399
    ErrCodeSyntax = 0x2000
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1400-L1401
    ErrCodeUnauthorized = 0x2100
    // ErrCodeInvalid indicates invalid query error which is not covered by ErrCodeSyntax.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1402
    ErrCodeInvalid = 0x2200
    // ErrCodeConfig indicates the configuration error.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1403
    ErrCodeConfig = 0x2300
    // ErrCodeAlreadyExists is returned for the requests creating the existing keyspace/table.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1404-L1413
    ErrCodeAlreadyExists = 0x2400
    // ErrCodeUnprepared returned from the host for prepared statement which is unknown.
    //
    // See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1414-L1417
    ErrCodeUnprepared = 0x2500
)
See CQL Binary Protocol v5, section 8 for more details. https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec
const (
    NodeUp nodeState = iota
    NodeDown
)
const (
    LogLevelDebug = LogLevel(5)
    LogLevelInfo  = LogLevel(4)
    LogLevelWarn  = LogLevel(3)
    LogLevelError = LogLevel(2)
    LogLevelNone  = LogLevel(0)
)
const (
    DEFAULT_KEY_ALIAS    = "key"
    DEFAULT_COLUMN_ALIAS = "column"
    DEFAULT_VALUE_ALIAS  = "value"
)
default alias values
const (
    REVERSED_TYPE   = "org.apache.cassandra.db.marshal.ReversedType"
    COMPOSITE_TYPE  = "org.apache.cassandra.db.marshal.CompositeType"
    COLLECTION_TYPE = "org.apache.cassandra.db.marshal.ColumnToCollectionType"
    VECTOR_TYPE     = "org.apache.cassandra.db.marshal.VectorType"
)
const (
    VariantNCSCompat = 0
    VariantIETF      = 2
    VariantMicrosoft = 6
    VariantFuture    = 7
)
const BatchSizeMaximum = 65535
BatchSizeMaximum is the maximum number of statements a batch operation can have. This limit is set by Cassandra and could change in the future.
Variables ¶
var (
    // ErrNoHosts is returned when no hosts are provided to the cluster configuration.
    ErrNoHosts = errors.New("no hosts provided")
    // ErrNoConnectionsStarted is returned when no connections could be established during session creation.
    ErrNoConnectionsStarted = errors.New("no connections were made when creating the session")
    // Deprecated: Never used or returned by the driver.
    ErrHostQueryFailed = errors.New("unable to populate Hosts")
)
var (
    ErrTimeoutNoResponse = errors.New("gocql: no response received from cassandra within timeout period")
    ErrConnectionClosed  = errors.New("gocql: connection closed waiting for response")
    ErrNoStreams         = errors.New("gocql: no streams available on connection")
    // Deprecated: TimeoutLimit was removed so this is never returned by the driver now
    ErrTooManyTimeouts = errors.New("gocql: too many query timeouts on the connection")
    // Deprecated: Never returned by the driver
    ErrQueryArgLength = errors.New("gocql: query argument length mismatch")
)
var (
    ErrCannotFindHost    = errors.New("cannot find host")
    ErrHostAlreadyExists = errors.New("host already exists")
)
var (
    ErrNotFound             = errors.New("not found")
    ErrUnsupported          = errors.New("feature not supported")
    ErrTooManyStmts         = errors.New("too many statements")
    ErrUseStmt              = errors.New("use statements aren't supported. Please see https://github.com/apache/cassandra-gocql-driver for explanation.")
    ErrSessionClosed        = errors.New("session has been closed")
    ErrNoConnections        = errors.New("gocql: no hosts available in the pool")
    ErrNoKeyspace           = errors.New("no keyspace provided")
    ErrKeyspaceDoesNotExist = errors.New("keyspace does not exist")
    ErrNoMetadata           = errors.New("no metadata available")
)
var (
ErrFrameTooBig = errors.New("frame length is bigger than the maximum allowed")
)
var ErrUnknownRetryType = errors.New("unknown retry type returned by retry policy")
ErrUnknownRetryType is returned if the retry policy returns a retry type unknown to the query executor.
var ( ErrorUDTUnavailable = errors.New("UDT are not available on protocols less than 3, please update config") )
var GlobalTypes = func() *RegisteredTypes {
    r := &RegisteredTypes{}
    r.init()
    r.addDefaultTypes()
    return r
}()
GlobalTypes is the set of types that are registered globally and are copied by all sessions that don't define their own RegisteredTypes in ClusterConfig. Since a new session copies this, you should modify it or create your own before a session is created.
var UnsetValue = unsetColumn{}
UnsetValue represents a value used in a query binding that will be ignored by Cassandra.
By setting a field to the unset value, Cassandra will ignore the write for that column completely. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where previously you would have needed another prepared statement.
UnsetValue is only available when using version 4 of the protocol.
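As a minimal sketch (the example.users table with id, name and email columns is hypothetical; a session and ctx are assumed, as in the examples above), binding UnsetValue leaves the email column untouched:
// email is left unset: Cassandra skips the write for that column entirely.
err := session.Query("INSERT INTO example.users (id, name, email) VALUES (?, ?, ?)",
    1, "Alice", gocql.UnsetValue).ExecContext(ctx)
if err != nil {
    log.Fatal(err)
}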
Functions ¶
func JoinHostPort ¶
JoinHostPort is a utility to return an address string that can be used by `gocql.Conn` to form a connection with a host.
func Marshal ¶
Marshal returns the CQL encoding of the value for the Cassandra internal type described by the info parameter.
nil is serialized as CQL null. If value implements Marshaler, its MarshalCQL method is called to marshal the data. If value is a pointer, the pointed-to value is marshaled.
For supported Go to CQL type conversions, see Session.Query documentation.
func NamedValue ¶
func NamedValue(name string, value interface{}) interface{}
NamedValue produces a value that will bind to the named parameter in a query.
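A sketch of one possible use, assuming a hypothetical example.users(id, name) table, a session and ctx as in the examples above, and a statement written with named bind markers:
// Bind by marker name rather than by position.
err := session.Query("INSERT INTO example.users (id, name) VALUES (:id, :name)",
    gocql.NamedValue("id", 1),
    gocql.NamedValue("name", "Alice"),
).ExecContext(ctx)
if err != nil {
    log.Fatal(err)
}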
func NewErrProtocol ¶
NewErrProtocol creates a new protocol error with the specified format and arguments.
func NonLocalReplicasFallback ¶
func NonLocalReplicasFallback() func(policy *tokenAwareHostPolicy)
NonLocalReplicasFallback enables fallback to replicas that are not considered local.
TokenAwareHostPolicy used with DCAwareHostPolicy fallback first selects replicas by partition key in local DC, then falls back to other nodes in the local DC. Enabling NonLocalReplicasFallback causes TokenAwareHostPolicy to first select replicas by partition key in local DC, then replicas by partition key in remote DCs and fall back to other nodes in local DC.
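For example, a sketch of enabling the fallback when building the host selection policy ("local-dc" is a placeholder for your local datacenter name):
cluster := gocql.NewCluster("192.168.1.1")
cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(
    gocql.DCAwareRoundRobinPolicy("local-dc"),
    gocql.NonLocalReplicasFallback(),
)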
func ShuffleReplicas ¶
func ShuffleReplicas() func(*tokenAwareHostPolicy)
func SingleHostReadyPolicy ¶
func SingleHostReadyPolicy(p HostSelectionPolicy) *singleHostReadyPolicy
SingleHostReadyPolicy wraps a HostSelectionPolicy and returns Ready after a single host has been added via HostUp
func TupleColumnName ¶
TupleColumnName will return the column name of a tuple value in a column named c at index n. It should be used when a specific element within a tuple needs to be extracted from a map returned by SliceMap or MapScan.
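A short sketch (the example.points table with a tuple<float, float> column named location is hypothetical; a session is assumed as in the examples above):
// Look up the individual tuple elements in the map returned by MapScan.
m := map[string]interface{}{}
if err := session.Query("SELECT location FROM example.points WHERE id = ?", 1).MapScan(m); err != nil {
    log.Fatal(err)
}
x := m[gocql.TupleColumnName("location", 0)]
y := m[gocql.TupleColumnName("location", 1)]
fmt.Println(x, y)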
func Unmarshal ¶
Unmarshal parses the CQL encoded data based on the info parameter that describes the Cassandra internal data type and stores the result in the value pointed to by value.
If value implements Unmarshaler, its UnmarshalCQL method is called to unmarshal the data. If value is a pointer to a pointer, it is set to nil if the CQL value is null. Otherwise, nulls are unmarshalled as the zero value.
For supported CQL to Go type conversions, see Iter.Scan documentation.
Types ¶
type AddressTranslator ¶
type AddressTranslator interface {
// Translate will translate the provided address and/or port to another
// address and/or port. If no translation is possible, Translate will return the
// address and port provided to it.
Translate(addr net.IP, port int) (net.IP, int)
}
AddressTranslator provides a way to translate node addresses (and ports) that are discovered or received as a node event. This can be useful in an ec2 environment, for instance, to translate public IPs to private IPs.
func IdentityTranslator ¶
func IdentityTranslator() AddressTranslator
IdentityTranslator will do nothing but return what it was provided. It is essentially a no-op.
type AddressTranslatorFunc ¶
AddressTranslatorFunc is a function type that implements AddressTranslator.
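A sketch of a custom translator set on the cluster configuration (the addresses are placeholders and the net package is assumed to be imported):
// Translate one known public address to its private counterpart; pass everything else through.
cluster.AddressTranslator = gocql.AddressTranslatorFunc(func(addr net.IP, port int) (net.IP, int) {
    if addr.Equal(net.ParseIP("203.0.113.10")) {
        return net.ParseIP("10.0.0.10"), port
    }
    return addr, port
})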
type AggregateMetadata ¶
type AggregateMetadata struct {
Keyspace string
Name string
ArgumentTypes []TypeInfo
FinalFunc FunctionMetadata
InitCond string
ReturnType TypeInfo
StateFunc FunctionMetadata
StateType TypeInfo
// contains filtered or unexported fields
}
AggregateMetadata holds metadata for aggregate constructs
type Authenticator ¶
type Authenticator interface {
Challenge(req []byte) (resp []byte, auth Authenticator, err error)
Success(data []byte) error
}
Authenticator handles authentication challenges and responses during connection setup.
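For example, the built-in PasswordAuthenticator can be assigned to ClusterConfig.Authenticator (the credentials here are placeholders):
cluster.Authenticator = gocql.PasswordAuthenticator{
    Username: "cassandra",
    Password: "cassandra",
}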
type Batch ¶
type Batch struct {
Type BatchType
Entries []BatchEntry
Cons Consistency
CustomPayload map[string][]byte
// contains filtered or unexported fields
}
func (*Batch) Bind ¶
Bind adds the query to the batch operation and correlates it with a binding callback that will be invoked when the batch is executed. The binding callback allows the application to define which query argument values will be marshalled as part of the batch execution.
For supported Go to CQL type conversions for query parameters, see Session.Query documentation.
func (*Batch) Consistency ¶
func (b *Batch) Consistency(cons Consistency) *Batch
Consistency sets the consistency level for this batch. If no consistency level has been set, the default consistency level of the cluster is used.
func (*Batch) DefaultTimestamp ¶
DefaultTimestamp will enable the default timestamp flag on the query. If enabled, this will replace the server-side assigned timestamp as the default timestamp. Note that a timestamp in the query itself will still override this timestamp. This is entirely optional.
Only available on protocol >= 3
func (*Batch) Exec ¶
Exec executes a batch operation and returns nil if successful; otherwise an error is returned describing the failure.
func (*Batch) ExecCAS ¶
ExecCAS executes a batch operation and returns true if the transaction was applied. It also returns an iterator that can be used to scan additional rows when the batch contains more than one conditional statement. Further scans on the iterator must also include the applied boolean as the first argument to *Iter.Scan.
func (*Batch) ExecCASContext ¶
func (b *Batch) ExecCASContext(ctx context.Context, dest ...interface{}) (applied bool, iter *Iter, err error)
ExecCASContext executes a batch operation with the provided context and returns true if the transaction was applied. It also returns an iterator that can be used to scan additional rows when the batch contains more than one conditional statement. Further scans on the iterator must also include the applied boolean as the first argument to *Iter.Scan.
func (*Batch) ExecContext ¶
ExecContext executes a batch operation with the provided context and returns nil if successful; otherwise an error is returned describing the failure.
func (*Batch) GetConsistency ¶
func (b *Batch) GetConsistency() Consistency
GetConsistency returns the currently configured consistency level for the batch operation.
func (*Batch) IsIdempotent ¶
func (*Batch) Iter ¶
Iter executes a batch operation and returns an Iter object that can be used to access properties related to the execution like Iter.Attempts and Iter.Latency
func (*Batch) IterContext ¶
IterContext executes a batch operation with the provided context and returns an Iter object that can be used to access properties related to the execution like Iter.Attempts and Iter.Latency
func (*Batch) MapExecCAS ¶
MapExecCAS executes a batch operation much like ExecuteBatchCAS, however it accepts a map rather than a list of arguments for the initial scan.
Example ¶
ExampleBatch_MapExecCAS demonstrates how to execute a batch lightweight transaction.
package main
import (
"context"
"fmt"
"log"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create table example.my_lwt_batch_table(pk text, ck text, version int, value text, PRIMARY KEY(pk, ck));
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.Keyspace = "example"
cluster.ProtoVersion = 4
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
ctx := context.Background()
err = session.Query("INSERT INTO example.my_lwt_batch_table (pk, ck, version, value) VALUES (?, ?, ?, ?)",
"pk1", "ck1", 1, "a").ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
err = session.Query("INSERT INTO example.my_lwt_batch_table (pk, ck, version, value) VALUES (?, ?, ?, ?)",
"pk1", "ck2", 1, "A").ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
executeBatch := func(ck2Version int) {
b := session.Batch(gocql.LoggedBatch)
b.Entries = append(b.Entries, gocql.BatchEntry{
Stmt: "UPDATE my_lwt_batch_table SET value=? WHERE pk=? AND ck=? IF version=?",
Args: []interface{}{"b", "pk1", "ck1", 1},
})
b.Entries = append(b.Entries, gocql.BatchEntry{
Stmt: "UPDATE my_lwt_batch_table SET value=? WHERE pk=? AND ck=? IF version=?",
Args: []interface{}{"B", "pk1", "ck2", ck2Version},
})
m := make(map[string]interface{})
applied, iter, err := b.MapExecCASContext(ctx, m)
if err != nil {
log.Fatal(err)
}
fmt.Println(applied, m)
m = make(map[string]interface{})
for iter.MapScan(m) {
fmt.Println(m)
m = make(map[string]interface{})
}
if err := iter.Close(); err != nil {
log.Fatal(err)
}
}
printState := func() {
scanner := session.Query("SELECT ck, value FROM example.my_lwt_batch_table WHERE pk = ?", "pk1").
IterContext(ctx).Scanner()
for scanner.Next() {
var ck, value string
err = scanner.Scan(&ck, &value)
if err != nil {
log.Fatal(err)
}
fmt.Println(ck, value)
}
if err := scanner.Err(); err != nil {
log.Fatal(err)
}
}
executeBatch(0)
printState()
executeBatch(1)
printState()
// false map[ck:ck1 pk:pk1 version:1]
// map[[applied]:false ck:ck2 pk:pk1 version:1]
// ck1 a
// ck2 A
// true map[]
// ck1 b
// ck2 B
}
func (*Batch) MapExecCASContext ¶
func (b *Batch) MapExecCASContext(ctx context.Context, dest map[string]interface{}) (applied bool, iter *Iter, err error)
MapExecCASContext executes a batch operation with the provided context much like ExecuteBatchCAS, however it accepts a map rather than a list of arguments for the initial scan.
func (*Batch) Observer ¶
func (b *Batch) Observer(observer BatchObserver) *Batch
Observer enables batch-level observer on this batch. The provided observer will be called every time this batched query is executed.
func (*Batch) Query ¶
Query adds the query to the batch operation.
For supported Go to CQL type conversions for query parameters, see Session.Query documentation.
func (*Batch) RetryPolicy ¶
func (b *Batch) RetryPolicy(r RetryPolicy) *Batch
RetryPolicy sets the retry policy to use when executing the batch operation
func (*Batch) SerialConsistency ¶
func (b *Batch) SerialConsistency(cons Consistency) *Batch
SerialConsistency sets the consistency level for the serial phase of conditional updates. That consistency can only be either SERIAL or LOCAL_SERIAL and if not present, it defaults to SERIAL. This option will be ignored for anything other than a conditional update/insert.
Only available for protocol 3 and above
func (*Batch) SetConsistency
deprecated
func (b *Batch) SetConsistency(c Consistency)
Deprecated: Use Batch.Consistency
func (*Batch) SetKeyspace ¶
SetKeyspace will enable the keyspace flag on the query. It allows specifying the keyspace in which the query should be executed.
Only available on protocol >= 5.
func (*Batch) Size ¶
Size returns the number of batch statements to be executed by the batch operation.
func (*Batch) SpeculativeExecutionPolicy ¶
func (b *Batch) SpeculativeExecutionPolicy(sp SpeculativeExecutionPolicy) *Batch
func (*Batch) Trace ¶
Trace enables tracing of this batch. Look at the documentation of the Tracer interface to learn more about tracing.
func (*Batch) WithContext
deprecated
Deprecated: Use Batch.ExecContext or Batch.IterContext instead. This will be removed in a future major version.
WithContext returns a shallow copy of b with its context set to ctx.
The provided context controls the entire lifetime of executing a query; queries will be canceled and return once the context is canceled.
func (*Batch) WithNowInSeconds ¶
WithNowInSeconds will enable the now_in_seconds flag on the query. It also allows defining the now_in_seconds value.
Only available on protocol >= 5.
func (*Batch) WithTimestamp ¶
WithTimestamp will enable the default timestamp flag on the query, like DefaultTimestamp does, but also allows defining the timestamp value. It works the same way as USING TIMESTAMP in the query itself, but should not break prepared query optimization.
Only available on protocol >= 3
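A short sketch (example.my_table is hypothetical; a session and ctx are assumed as in the examples above, and the time package must be imported):
b := session.Batch(gocql.LoggedBatch)
b.Query("INSERT INTO example.my_table (pk, value) VALUES (?, ?)", 1, "a")
// Client-side timestamp in microseconds since epoch, matching USING TIMESTAMP semantics.
b.WithTimestamp(time.Now().UnixMicro())
if err := b.ExecContext(ctx); err != nil {
    log.Fatal(err)
}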
type BatchEntry ¶
type BatchEntry struct {
Stmt string
Args []interface{}
Idempotent bool
// contains filtered or unexported fields
}
BatchEntry represents a single statement within a batch operation. It contains the statement, arguments, and execution metadata.
type BatchObserver ¶
type BatchObserver interface {
// ObserveBatch gets called on every batch query to cassandra.
// It also gets called once for each query in a batch.
// It doesn't get called if there is no query because the session is closed or there are no connections available.
// The error reported only shows query errors, i.e. if a SELECT is valid but finds no matches it will be nil.
// Unlike QueryObserver.ObserveQuery it does no reporting on rows read.
ObserveBatch(context.Context, ObservedBatch)
}
BatchObserver is the interface implemented by batch observers / stat collectors.
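A minimal sketch of an observer that simply logs each observed batch (the context and log packages are assumed to be imported; ObservedBatch fields are not inspected here, and the wiring is shown as a comment):
type loggingBatchObserver struct{}

func (loggingBatchObserver) ObserveBatch(ctx context.Context, b gocql.ObservedBatch) {
    // Forward to your metrics or logging system of choice.
    log.Printf("observed batch: %+v", b)
}

// Register it on the cluster configuration:
// cluster.BatchObserver = loggingBatchObserver{}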
type BatchType ¶
type BatchType byte
BatchType represents the type of batch. Available types: LoggedBatch, UnloggedBatch, CounterBatch.
type CQLType ¶
type CQLType interface {
// Params should return a new slice of zero values of the closest Go types to
// be filled when parsing a frame. These values associated with the types
// are sent to TypeInfoFromParams after being read from the frame.
//
// The supported types are: Type, TypeInfo, []UDTField, []byte, string, int, uint16, byte.
// Pointers are followed when filling the params but the slice sent to
// TypeInfoFromParams will only contain the underlying values. Since
// TypeInfo(nil) isn't supported, you can send (*TypeInfo)(nil) and
// TypeInfoFromParams will get a TypeInfo (not *TypeInfo since that's not usable).
//
// If no params are needed this can return a nil slice and TypeInfoFromParams
// will be sent nil.
Params(proto int) []interface{}
// TypeInfoFromParams should return a TypeInfo implementation for the type
// with the given filled parameters. See the Params() method for what values
// to expect.
TypeInfoFromParams(proto int, params []interface{}) (TypeInfo, error)
// TypeInfoFromString should return a TypeInfo implementation for the type with
// the given names/classes. Only the portion within the parentheses or arrows
// is passed to this function. For simple types, the name passed might be empty.
TypeInfoFromString(proto int, name string) (TypeInfo, error)
}
CQLType is the interface that must be implemented by all registered types. For simple types, you can just wrap a TypeInfo with SimpleCQLType.
type ClusterConfig ¶
type ClusterConfig struct {
// addresses for the initial connections. It is recommended to use the value set in
// the Cassandra config for broadcast_address or listen_address, an IP address not
// a domain name. This is because events from Cassandra will use the configured IP
// address, which is used to index connected hosts. If the domain name specified
// resolves to more than 1 IP address then the driver may connect multiple times to
// the same host, and will not mark the node being down or up from events.
Hosts []string
// CQL version (default: 3.0.0)
CQLVersion string
// ProtoVersion sets the version of the native protocol to use, this will
// enable features in the driver for specific protocol versions, generally this
// should be set to a known version (2,3,4) for the cluster being connected to.
//
// If it is 0 or unset (the default) then the driver will attempt to discover the
// highest supported protocol for the cluster. In clusters with nodes of different
// versions the protocol selected is not defined (i.e., it can be any of the versions supported in the cluster)
ProtoVersion int
// Timeout limits the time spent on the client side while executing a query.
// Specifically, query or batch execution will return an error if the client does not receive a response
// from the server within the Timeout period.
// Timeout is also used to configure the read timeout on the underlying network connection.
// Client Timeout should always be higher than the request timeouts configured on the server,
// so that retries don't overload the server.
// Timeout has a default value of 11 seconds, which is higher than default server timeout for most query types.
// Timeout is not applied to requests during initial connection setup, see ConnectTimeout.
Timeout time.Duration
// ConnectTimeout limits the time spent during connection setup.
// During initial connection setup, internal queries and AUTH requests will return an error if the client
// does not receive a response within the ConnectTimeout period.
// ConnectTimeout is applied to the connection setup queries independently.
// ConnectTimeout also limits the duration of dialing a new TCP connection
// in case there is no Dialer nor HostDialer configured.
// ConnectTimeout has a default value of 11 seconds.
ConnectTimeout time.Duration
// WriteTimeout limits the time the driver waits to write a request to a network connection.
// WriteTimeout should be lower than or equal to Timeout.
// WriteTimeout defaults to the value of Timeout.
WriteTimeout time.Duration
// Port used when dialing.
// Default: 9042
Port int
// Initial keyspace. Optional.
Keyspace string
// The size of the connection pool for each host.
// The pool filling runs in a separate goroutine during the session initialization phase.
// gocql will always try to get 1 connection on each host pool
// during session initialization AND it will attempt
// to fill each pool afterward asynchronously if NumConns > 1.
// Notice: There is no guarantee that pool filling will be finished in the initialization phase.
// Also, it describes the maximum number of connections open at the same time.
// Default: 2
NumConns int
// Default consistency level.
// Default: Quorum
Consistency Consistency
// Compression algorithm.
// Default: nil
Compressor Compressor
// Default: nil
Authenticator Authenticator
// An Authenticator factory. Can be used to create alternative authenticators.
// Default: nil
AuthProvider func(h *HostInfo) (Authenticator, error)
// Default retry policy to use for queries.
// Default: no retries.
RetryPolicy RetryPolicy
// ConvictionPolicy decides whether to mark host as down based on the error and host info.
// Default: SimpleConvictionPolicy
ConvictionPolicy ConvictionPolicy
// Default reconnection policy to use for reconnecting before trying to mark host as down.
ReconnectionPolicy ReconnectionPolicy
// The keepalive period to use, enabled if > 0 (default: 0)
// SocketKeepalive is used to set up the default dialer and is ignored if Dialer or HostDialer is provided.
SocketKeepalive time.Duration
// Maximum cache size for prepared statements globally for gocql.
// Default: 1000
MaxPreparedStmts int
// Maximum cache size for query info about statements for each session.
// Default: 1000
MaxRoutingKeyInfo int
// Default page size to use for created sessions.
// Default: 5000
PageSize int
// Consistency for the serial part of queries, values can be either SERIAL or LOCAL_SERIAL.
// Default: unset
SerialConsistency Consistency
// SslOpts configures TLS use when HostDialer is not set.
// SslOpts is ignored if HostDialer is set.
SslOpts *SslOptions
// Sends a client side timestamp for all requests which overrides the timestamp at which it arrives at the server.
// Default: true, only enabled for protocol 3 and above.
DefaultTimestamp bool
// PoolConfig configures the underlying connection pool, allowing the
// configuration of host selection and connection selection policies.
PoolConfig PoolConfig
// If not zero, gocql attempts to reconnect known DOWN nodes every ReconnectInterval.
ReconnectInterval time.Duration
// The maximum amount of time to wait for schema agreement in a cluster after
// receiving a schema change frame. (default: 60s)
MaxWaitSchemaAgreement time.Duration
// HostFilter will filter all incoming events for host, any which don't pass
// the filter will be ignored. If set will take precedence over any options set
// via Discovery
HostFilter HostFilter
// AddressTranslator will translate addresses found on peer discovery and/or
// node change events.
AddressTranslator AddressTranslator
// If IgnorePeerAddr is true and the address in system.peers does not match
// the supplied host by either initial hosts or discovered via events then the
// host will be replaced with the supplied address.
//
// For example if an event comes in with host=10.0.0.1 but when looking up that
// address in system.local or system.peers returns 127.0.0.1, the peer will be
// set to 10.0.0.1, which is the address that will be used to connect to the host.
IgnorePeerAddr bool
// If DisableInitialHostLookup then the driver will not attempt to get host info
// from the system.peers table, this will mean that the driver will connect to
// hosts supplied and will not attempt to lookup the hosts information, this will
// mean that data_center, rack and token information will not be available and as
// such host filtering and token aware query routing will not be available.
DisableInitialHostLookup bool
// Configure events the driver will register for
Events struct {
// disable registering for status events (node up/down)
DisableNodeStatusEvents bool
// disable registering for topology events (node added/removed/moved)
DisableTopologyEvents bool
// disable registering for schema events (keyspace/table/function removed/created/updated)
DisableSchemaEvents bool
}
// DisableSkipMetadata will override the internal result metadata cache so that the driver does not
// send skip_metadata for queries, this means that the result will always contain
// the metadata to parse the rows and will not reuse the metadata from the prepared
// statement.
//
// See https://issues.apache.org/jira/browse/CASSANDRA-10786
DisableSkipMetadata bool
// QueryObserver will set the provided query observer on all queries created from this session.
// Use it to collect metrics / stats from queries by providing an implementation of QueryObserver.
QueryObserver QueryObserver
// BatchObserver will set the provided batch observer on all queries created from this session.
// Use it to collect metrics / stats from batch queries by providing an implementation of BatchObserver.
BatchObserver BatchObserver
// ConnectObserver will set the provided connect observer on all queries
// created from this session.
ConnectObserver ConnectObserver
// FrameHeaderObserver will set the provided frame header observer on all frames' headers created from this session.
// Use it to collect metrics / stats from frames by providing an implementation of FrameHeaderObserver.
FrameHeaderObserver FrameHeaderObserver
// StreamObserver will be notified of stream state changes.
// This can be used to track in-flight protocol requests and responses.
StreamObserver StreamObserver
// Default idempotence for queries
DefaultIdempotence bool
// The time to wait for frames before flushing the frames connection to Cassandra.
// Can help reduce syscall overhead by making less calls to write. Set to 0 to
// disable.
//
// (default: 200 microseconds)
WriteCoalesceWaitTime time.Duration
// Dialer will be used to establish all connections created for this Cluster.
// If not provided, a default dialer configured with ConnectTimeout will be used.
// Dialer is ignored if HostDialer is provided.
Dialer Dialer
// HostDialer will be used to establish all connections for this Cluster.
// If not provided, Dialer will be used instead.
HostDialer HostDialer
// StructuredLogger for this ClusterConfig.
//
// There are 3 built in implementations of StructuredLogger:
// - std library "log" package: gocql.NewLogger
// - zerolog: gocqlzerolog.NewZerologLogger
// - zap: gocqlzap.NewZapLogger
//
// You can also provide your own logger implementation of the StructuredLogger interface.
Logger StructuredLogger
// Tracer will be used for all queries. Alternatively it can be set on a
// per-query basis.
// default: nil
Tracer Tracer
// NextPagePrefetch sets the default threshold for pre-fetching new pages. If
// there are only p*pageSize rows remaining, the next page will be requested
// automatically. This value can also be changed on a per-query basis.
// default: 0.25.
NextPagePrefetch float64
// RegisteredTypes will be copied for all sessions created from this Cluster.
// If not provided, a copy of GlobalTypes will be used.
RegisteredTypes *RegisteredTypes
// contains filtered or unexported fields
}
ClusterConfig is a struct to configure the default cluster implementation of gocql. It has a variety of attributes that can be used to modify the behavior to fit the most common use cases. Applications that require a different setup must implement their own cluster.
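For illustration, a configuration might adjust a few of the fields listed above; the values below are arbitrary examples rather than recommended defaults:

cluster := gocql.NewCluster("192.168.1.1")
cluster.Keyspace = "example"
// Skip registering for schema change events if the application never inspects schema metadata.
cluster.Events.DisableSchemaEvents = true
// Treat all queries as idempotent by default so they may be retried.
cluster.DefaultIdempotence = true
// Always request full result metadata (see CASSANDRA-10786).
cluster.DisableSkipMetadata = true
// Wait at most 30 seconds for schema agreement after schema changes.
cluster.MaxWaitSchemaAgreement = 30 * time.Second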
func NewCluster ¶
func NewCluster(hosts ...string) *ClusterConfig
NewCluster generates a new config for the default cluster implementation.
The supplied hosts are used to initially connect to the cluster then the rest of the ring will be automatically discovered. It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, an IP address not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than 1 IP address then the driver may connect multiple times to the same host, and will not mark the node being down or up from events.
func (*ClusterConfig) CreateSession ¶
func (cfg *ClusterConfig) CreateSession() (*Session, error)
CreateSession initializes the cluster based on this config and returns a session object that can be used to interact with the database.
type CollectionType ¶
type CollectionType struct {
Key TypeInfo // only used for TypeMap
Elem TypeInfo // only used for TypeMap, TypeList and TypeSet
// contains filtered or unexported fields
}
CollectionType represents type information for Cassandra collection types (list, set, map). It provides marshaling and unmarshaling for collection types.
func (CollectionType) Marshal ¶
func (c CollectionType) Marshal(value interface{}) ([]byte, error)
Marshal marshals the value into a byte slice.
func (CollectionType) String ¶
func (c CollectionType) String() string
String returns the string representation of the collection.
func (CollectionType) Type ¶
func (c CollectionType) Type() Type
Type returns the type of the collection.
func (CollectionType) Unmarshal ¶
func (c CollectionType) Unmarshal(data []byte, value interface{}) error
Unmarshal unmarshals the byte slice into the value.
func (CollectionType) Zero ¶
func (c CollectionType) Zero() interface{}
Zero returns the zero value for the collection CQL type.
type ColumnIndexMetadata ¶
ColumnIndexMetadata represents metadata for a column index in Cassandra. It contains the index name, type, and configuration options.
type ColumnInfo ¶
ColumnInfo represents metadata about a column in a query result. It contains the keyspace, table, column name, and type information.
func (ColumnInfo) String ¶
func (c ColumnInfo) String() string
type ColumnKind ¶
type ColumnKind int
ColumnKind represents the kind of column in a Cassandra table. It indicates whether the column is part of the partition key, clustering key, or a regular column. Available values: ColumnUnkownKind, ColumnPartitionKey, ColumnClusteringKey, ColumnRegular, ColumnCompact, ColumnStatic.
const (
	ColumnUnkownKind ColumnKind = iota
	ColumnPartitionKey
	ColumnClusteringKey
	ColumnRegular
	ColumnCompact
	ColumnStatic
)
func (ColumnKind) String ¶
func (c ColumnKind) String() string
func (*ColumnKind) UnmarshalCQL ¶
func (c *ColumnKind) UnmarshalCQL(typ TypeInfo, p []byte) error
type ColumnMetadata ¶
type ColumnMetadata struct {
Keyspace string
Table string
Name string
ComponentIndex int
Kind ColumnKind
Validator string
Type TypeInfo
ClusteringOrder string
Order ColumnOrder
Index ColumnIndexMetadata
}
ColumnMetadata holds schema metadata for a column.
type ColumnOrder ¶
type ColumnOrder bool
ColumnOrder represents the ordering of a column with regard to its comparator. It indicates whether the column is sorted in ascending or descending order. Available values: ASC, DESC.
const (
	ASC  ColumnOrder = false
	DESC ColumnOrder = true
)
type Compressor ¶
type Compressor interface {
Name() string
// AppendCompressedWithLength compresses src bytes, appends the length of the compressed bytes to dst
// and then appends the compressed bytes to dst.
// It returns a new byte slice that is the result of the append operation.
AppendCompressedWithLength(dst, src []byte) ([]byte, error)
// AppendDecompressedWithLength reads the length of the decompressed bytes from src,
// decompresses the bytes from src and appends the decompressed bytes to dst.
// It returns a new byte slice that is the result of the append operation.
AppendDecompressedWithLength(dst, src []byte) ([]byte, error)
// AppendCompressed compresses src bytes and appends the compressed bytes to dst.
// It returns a new byte slice that is the result of the append operation.
AppendCompressed(dst, src []byte) ([]byte, error)
// AppendDecompressed decompresses bytes from src and appends the decompressed bytes to dst.
// It returns a new byte slice that is the result of the append operation.
AppendDecompressed(dst, src []byte, decompressedLength uint32) ([]byte, error)
}
Compressor defines the interface for frame compression and decompression. Implementations provide compression algorithms like Snappy and LZ4 that can be used to reduce network traffic between the driver and Cassandra nodes.
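As a rough sketch, a pass-through implementation of this interface could look like the following (imports of encoding/binary and fmt omitted). It performs no real compression, and the 4-byte big-endian length prefix used in the *WithLength methods is an illustrative assumption for the sketch, not a statement about the framing the driver expects:

type nopCompressor struct{}

func (nopCompressor) Name() string { return "nop" }

// AppendCompressedWithLength appends a 4-byte big-endian length followed by src (no compression).
func (nopCompressor) AppendCompressedWithLength(dst, src []byte) ([]byte, error) {
	dst = binary.BigEndian.AppendUint32(dst, uint32(len(src)))
	return append(dst, src...), nil
}

// AppendDecompressedWithLength strips the 4-byte length prefix and appends the payload.
func (nopCompressor) AppendDecompressedWithLength(dst, src []byte) ([]byte, error) {
	if len(src) < 4 {
		return nil, fmt.Errorf("short input: %d bytes", len(src))
	}
	n := binary.BigEndian.Uint32(src)
	if uint32(len(src)-4) < n {
		return nil, fmt.Errorf("truncated payload")
	}
	return append(dst, src[4:4+n]...), nil
}

func (nopCompressor) AppendCompressed(dst, src []byte) ([]byte, error) {
	return append(dst, src...), nil
}

func (nopCompressor) AppendDecompressed(dst, src []byte, decompressedLength uint32) ([]byte, error) {
	// decompressedLength is unused because the data is stored uncompressed.
	return append(dst, src...), nil
}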
type Conn ¶
type Conn struct {
// contains filtered or unexported fields
}
Conn is a single connection to a Cassandra node. It can be used to execute queries, but users are usually advised to use a more reliable, higher level API.
func (*Conn) AvailableStreams ¶
func (*Conn) UseKeyspace ¶
type ConnConfig ¶
type ConnConfig struct {
ProtoVersion int
CQLVersion string
Timeout time.Duration
WriteTimeout time.Duration
ConnectTimeout time.Duration
Dialer Dialer
HostDialer HostDialer
Compressor Compressor
Authenticator Authenticator
AuthProvider func(h *HostInfo) (Authenticator, error)
Keepalive time.Duration
Logger StructuredLogger
// contains filtered or unexported fields
}
ConnConfig contains configuration options for establishing connections to Cassandra nodes.
type ConnErrorHandler ¶
ConnErrorHandler handles connection errors and state changes for connections.
type ConnReader ¶
type ConnReader interface {
net.Conn
// SetTimeout sets the timeout duration for reading data from the conn
SetTimeout(timeout time.Duration)
// GetTimeout returns timeout duration
GetTimeout() time.Duration
}
ConnReader is like net.Conn but also allows setting a timeout duration.
type ConnectObserver ¶
type ConnectObserver interface {
// ObserveConnect gets called when a new connection to cassandra is made.
ObserveConnect(ObservedConnect)
}
ConnectObserver is the interface implemented by connect observers / stat collectors.
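A minimal ConnectObserver might simply count connection attempts; the counter type below is illustrative and not part of the driver (import of sync/atomic omitted):

type connectCounter struct {
	count atomic.Int64
}

// ObserveConnect is called by the driver whenever a new connection to Cassandra is made.
func (c *connectCounter) ObserveConnect(gocql.ObservedConnect) {
	c.count.Add(1)
}

// Wiring it up on the cluster configuration:
cluster.ConnectObserver = &connectCounter{}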
type Consistency ¶
type Consistency uint16
Consistency represents the consistency level for read and write operations. Available levels: Any, One, Two, Three, Quorum, All, LocalQuorum, EachQuorum, Serial, LocalSerial, LocalOne.
const (
	Any         Consistency = 0x00
	One         Consistency = 0x01
	Two         Consistency = 0x02
	Three       Consistency = 0x03
	Quorum      Consistency = 0x04
	All         Consistency = 0x05
	LocalQuorum Consistency = 0x06
	EachQuorum  Consistency = 0x07
	Serial      Consistency = 0x08
	LocalSerial Consistency = 0x09
	LocalOne    Consistency = 0x0A
)
func ParseConsistency ¶
func ParseConsistency(s string) Consistency
func ParseConsistencyWrapper ¶
func ParseConsistencyWrapper(s string) (consistency Consistency, err error)
ParseConsistencyWrapper wraps gocql.ParseConsistency to provide an err return instead of a panic
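For example, a consistency level supplied via configuration can be parsed with the error-returning wrapper. The string form is assumed to match Consistency.String(), e.g. "LOCAL_QUORUM":

c, err := gocql.ParseConsistencyWrapper("LOCAL_QUORUM")
if err != nil {
	return err
}
cluster.Consistency = c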
func (Consistency) MarshalText ¶
func (c Consistency) MarshalText() (text []byte, err error)
func (Consistency) String ¶
func (c Consistency) String() string
func (*Consistency) UnmarshalText ¶
func (c *Consistency) UnmarshalText(text []byte) error
type ConstantReconnectionPolicy ¶
ConstantReconnectionPolicy has simple logic for returning a fixed reconnection interval.
Examples of usage:
cluster.ReconnectionPolicy = &gocql.ConstantReconnectionPolicy{MaxRetries: 10, Interval: 8 * time.Second}
func (*ConstantReconnectionPolicy) GetInterval ¶
func (c *ConstantReconnectionPolicy) GetInterval(currentRetry int) time.Duration
func (*ConstantReconnectionPolicy) GetMaxRetries ¶
func (c *ConstantReconnectionPolicy) GetMaxRetries() int
type ConvictionPolicy ¶
type ConvictionPolicy interface {
// Implementations should return `true` if the host should be convicted, `false` otherwise.
AddFailure(error error, host *HostInfo) bool
// Implementations should clear out any convictions or state regarding the host.
Reset(host *HostInfo)
}
The ConvictionPolicy interface is used by gocql to determine if a host should be marked as DOWN based on the error and host info.
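As an illustration, a policy that convicts a host after a fixed number of consecutive failures could look like the sketch below. The threshold and per-host map are illustrative, ConnectAddress is assumed to return a value with a String method (a net.IP in upstream gocql), and how the policy is plugged into the cluster configuration is not shown here:

type thresholdConvictionPolicy struct {
	mu       sync.Mutex
	failures map[string]int
	limit    int
}

// AddFailure records a failure for the host and reports whether the host should be convicted (marked DOWN).
func (p *thresholdConvictionPolicy) AddFailure(err error, host *gocql.HostInfo) bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.failures == nil {
		p.failures = make(map[string]int)
	}
	key := host.ConnectAddress().String()
	p.failures[key]++
	return p.failures[key] >= p.limit
}

// Reset clears recorded failures for the host, e.g. after a successful reconnect.
func (p *thresholdConvictionPolicy) Reset(host *gocql.HostInfo) {
	p.mu.Lock()
	defer p.mu.Unlock()
	delete(p.failures, host.ConnectAddress().String())
}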
type DialedHost ¶
type DialedHost struct {
// Conn used to communicate with the server.
Conn net.Conn
// DisableCoalesce disables write coalescing for the Conn.
// If true, the effect is the same as if WriteCoalesceWaitTime was configured to 0.
DisableCoalesce bool
}
DialedHost contains information about established connection to a host.
func WrapTLS ¶
func WrapTLS(ctx context.Context, conn net.Conn, addr string, tlsConfig *tls.Config) (*DialedHost, error)
WrapTLS optionally wraps a net.Conn connected to addr with the given tlsConfig. If the tlsConfig is nil, conn is not wrapped into a TLS session, so the connection is insecure. If the tlsConfig does not have the server name set, it is updated based on the default gocql rules.
type Dialer ¶
Dialer is the interface that wraps the DialContext method for establishing network connections to Cassandra nodes.
This interface allows customization of how gocql establishes TCP connections, which is useful for: connecting through proxies or load balancers, custom TLS configurations, custom timeouts/keep-alive settings, service mesh integration, testing with mocked connections, and corporate network routing.
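Because the interface wraps DialContext, a *net.Dialer from the standard library satisfies it; for example, to customize connect timeout and keep-alive:

cluster.Dialer = &net.Dialer{
	Timeout:   5 * time.Second,
	KeepAlive: 30 * time.Second,
}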
type DowngradingConsistencyRetryPolicy ¶
type DowngradingConsistencyRetryPolicy struct {
ConsistencyLevelsToTry []Consistency
}
func (*DowngradingConsistencyRetryPolicy) Attempt ¶
func (d *DowngradingConsistencyRetryPolicy) Attempt(q RetryableQuery) bool
func (*DowngradingConsistencyRetryPolicy) GetRetryType ¶
func (d *DowngradingConsistencyRetryPolicy) GetRetryType(err error) RetryType
type Duration ¶
Duration represents a Cassandra duration type, which consists of months, days, and nanoseconds components.
type ErrProtocol ¶
type ErrProtocol struct {
// contains filtered or unexported fields
}
ErrProtocol represents a protocol-level error.
type ErrorMap ¶
ErrorMap maps node IP addresses to their respective error codes for read/write failure responses. Each entry represents a node that failed during the operation, with the key being the node's IP address as a string and the value being the specific error code returned by that node.
type ExecutableQuery
deprecated
type ExecutableQuery = ExecutableStatement
Deprecated: Will be removed in a future major release. Also Query and Batch no longer implement this interface.
Please use Statement (for Query / Batch objects) or ExecutableStatement (in HostSelectionPolicy implementations) instead.
type ExecutableStatement ¶
type ExecutableStatement interface {
GetRoutingKey() ([]byte, error)
Keyspace() string
Table() string
IsIdempotent() bool
GetHostID() string
Statement() Statement
}
ExecutableStatement is an interface that represents a query or batch statement that exposes the correct functions for the HostSelectionPolicy to operate correctly.
type ExponentialBackoffRetryPolicy ¶
ExponentialBackoffRetryPolicy sleeps between attempts, with exponentially growing intervals.
func (*ExponentialBackoffRetryPolicy) Attempt ¶
func (e *ExponentialBackoffRetryPolicy) Attempt(q RetryableQuery) bool
func (*ExponentialBackoffRetryPolicy) GetRetryType ¶
func (e *ExponentialBackoffRetryPolicy) GetRetryType(err error) RetryType
type ExponentialReconnectionPolicy ¶
type ExponentialReconnectionPolicy struct {
MaxRetries int
InitialInterval time.Duration
MaxInterval time.Duration
}
ExponentialReconnectionPolicy returns a growing reconnection interval.
func (*ExponentialReconnectionPolicy) GetInterval ¶
func (e *ExponentialReconnectionPolicy) GetInterval(currentRetry int) time.Duration
func (*ExponentialReconnectionPolicy) GetMaxRetries ¶
func (e *ExponentialReconnectionPolicy) GetMaxRetries() int
type FrameHeaderObserver ¶
type FrameHeaderObserver interface {
// ObserveFrameHeader gets called on every received frame header.
ObserveFrameHeader(context.Context, ObservedFrameHeader)
}
FrameHeaderObserver is the interface implemented by frame observers / stat collectors.
Experimental, this interface and use may change
type FunctionMetadata ¶
type FunctionMetadata struct {
Keyspace string
Name string
ArgumentTypes []TypeInfo
ArgumentNames []string
Body string
CalledOnNullInput bool
Language string
ReturnType TypeInfo
}
FunctionMetadata holds metadata for function constructs
type HostDialer ¶
type HostDialer interface {
// DialHost establishes a connection to the host.
// The returned connection must be directly usable for CQL protocol,
// specifically DialHost is responsible also for setting up the TLS session if needed.
// DialHost should disable write coalescing if the returned net.Conn does not support writev.
// As of Go 1.18, only plain TCP connections support writev, TLS sessions should disable coalescing.
// You can use WrapTLS helper function if you don't need to override the TLS setup.
DialHost(ctx context.Context, host *HostInfo) (*DialedHost, error)
}
HostDialer allows customizing connection to cluster nodes.
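A custom HostDialer can reuse WrapTLS for the TLS handshake. The sketch below dials over TCP and then wraps the connection; it assumes HostInfo.HostnameAndPort returns a "host:port" string suitable for dialing, as in upstream gocql (imports of context, crypto/tls and net omitted):

type tlsHostDialer struct {
	dialer    *net.Dialer
	tlsConfig *tls.Config
}

// DialHost connects to the host over TCP and optionally wraps the connection in TLS.
func (d *tlsHostDialer) DialHost(ctx context.Context, host *gocql.HostInfo) (*gocql.DialedHost, error) {
	addr := host.HostnameAndPort()
	conn, err := d.dialer.DialContext(ctx, "tcp", addr)
	if err != nil {
		return nil, err
	}
	// WrapTLS sets up the TLS session (and server name) when tlsConfig is non-nil.
	return gocql.WrapTLS(ctx, conn, addr, d.tlsConfig)
}

// Wiring it up:
cluster.HostDialer = &tlsHostDialer{dialer: &net.Dialer{Timeout: 5 * time.Second}, tlsConfig: tlsConfig}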
type HostFilter ¶
type HostFilter interface {
// Called when a new host is discovered, returning true will cause the host
// to be added to the pools.
Accept(host *HostInfo) bool
}
The HostFilter interface is used when a host is discovered via server-sent events.
func DataCenterHostFilter ¶
func DataCenterHostFilter(dataCenter string) HostFilter
DataCenterHostFilter filters all hosts such that they are in the same data center as the supplied data center.
func DataCentreHostFilter
deprecated
func DataCentreHostFilter(dataCenter string) HostFilter
Deprecated: Use DataCenterHostFilter instead. DataCentreHostFilter is an alias that doesn't use the preferred spelling.
func DenyAllFilter ¶
func DenyAllFilter() HostFilter
func WhiteListHostFilter ¶
func WhiteListHostFilter(hosts ...string) HostFilter
WhiteListHostFilter filters incoming hosts by checking that their address is in the initial hosts whitelist.
type HostFilterFunc ¶
HostFilterFunc converts a func(host *HostInfo) bool into a HostFilter
func (HostFilterFunc) Accept ¶
func (fn HostFilterFunc) Accept(host *HostInfo) bool
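Host filters are installed via ClusterConfig.HostFilter. For example, to restrict connections to a single datacenter, or to use HostFilterFunc for ad-hoc logic (the datacenter name is illustrative):

// Only connect to hosts in the local datacenter.
cluster.HostFilter = gocql.DataCenterHostFilter("dc1")

// Or filter with a custom function.
cluster.HostFilter = gocql.HostFilterFunc(func(host *gocql.HostInfo) bool {
	return host.DataCenter() == "dc1"
})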
type HostInfo ¶
type HostInfo struct {
// contains filtered or unexported fields
}
HostInfo represents a server Host/Node. You can create a HostInfo object with either NewHostInfoFromAddrPort or NewTestHostInfoFromRow.
func NewHostInfoFromAddrPort ¶
NewHostInfoFromAddrPort creates HostInfo with provided connectAddress and port. It returns an error if addr is invalid.
If you're looking for a way to create a HostInfo object with more than just an address and port for testing purposes, you can use NewTestHostInfoFromRow.
func NewTestHostInfoFromRow ¶
NewTestHostInfoFromRow creates a new HostInfo object from a system.peers or system.local row. The port defaults to 9042.
You can create a HostInfo object for testing purposes using this function:
Example usage:
row := map[string]interface{}{
"broadcast_address": net.ParseIP("10.0.0.1"),
"listen_address": net.ParseIP("10.0.0.1"),
"rpc_address": net.ParseIP("10.0.0.1"),
"peer": net.ParseIP("10.0.0.1"), // system.peers only
"data_center": "dc1",
"rack": "rack1",
"host_id": MustRandomUUID(), // can also use ParseUUID("550e8400-e29b-41d4-a716-446655440000")
"release_version": "4.0.0",
"native_port": 9042,
}
host, err := NewTestHostInfoFromRow(row)
func (*HostInfo) BroadcastAddress ¶
func (*HostInfo) ClusterName ¶
func (*HostInfo) ConnectAddress ¶
ConnectAddress returns the address that should be used to connect to the host. If you wish to override this, use an AddressTranslator.
func (*HostInfo) ConnectAddressAndPort ¶
func (*HostInfo) DSEVersion ¶
func (*HostInfo) DataCenter ¶
func (*HostInfo) HostnameAndPort ¶
func (*HostInfo) ListenAddress ¶
func (*HostInfo) Partitioner ¶
func (*HostInfo) PreferredIP ¶
func (*HostInfo) RPCAddress ¶
type HostSelectionPolicy ¶
type HostSelectionPolicy interface {
HostStateNotifier
SetPartitioner
KeyspaceChanged(KeyspaceUpdateEvent)
Init(*Session)
IsLocal(host *HostInfo) bool
// Pick returns an iteration function over selected hosts.
// Multiple attempts of a single query execution won't call the returned NextHost function concurrently,
// so it's safe to have internal state without additional synchronization as long as every call to Pick returns
// a different instance of NextHost.
Pick(statement ExecutableStatement) NextHost
}
HostSelectionPolicy is an interface for selecting the most appropriate host to execute a given query. HostSelectionPolicy instances cannot be shared between sessions.
func DCAwareRoundRobinPolicy ¶
func DCAwareRoundRobinPolicy(localDC string) HostSelectionPolicy
DCAwareRoundRobinPolicy is a host selection policy that prioritizes and returns hosts in the local datacenter before returning hosts in all other datacenters.
func RackAwareRoundRobinPolicy ¶
func RackAwareRoundRobinPolicy(localDC string, localRack string) HostSelectionPolicy
func RoundRobinHostPolicy ¶
func RoundRobinHostPolicy() HostSelectionPolicy
RoundRobinHostPolicy is a round-robin load balancing policy, where each host is tried sequentially for each query.
func TokenAwareHostPolicy ¶
func TokenAwareHostPolicy(fallback HostSelectionPolicy, opts ...func(*tokenAwareHostPolicy)) HostSelectionPolicy
TokenAwareHostPolicy is a token aware host selection policy, where hosts are selected based on the partition key, so queries are sent to the host which owns the partition. Fallback is used when routing information is not available.
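A common combination is token-aware routing with a DC-aware fallback, installed through the PoolConfig documented later in this reference (assuming the cluster configuration exposes that PoolConfig, as in upstream gocql):

cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(
	gocql.DCAwareRoundRobinPolicy("dc1"),
)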
type HostStateNotifier ¶
type HostStateNotifier interface {
AddHost(host *HostInfo)
RemoveHost(host *HostInfo)
HostUp(host *HostInfo)
HostDown(host *HostInfo)
}
HostStateNotifier is an interface for notifying about host state changes. It allows host selection policies to be informed when hosts are added, removed, or change their availability status.
type HostTierer ¶
type HostTierer interface {
// HostTier returns an integer specifying how far a host is from the client.
// Tier must start at 0.
// The value is used to prioritize closer hosts during host selection.
// For example this could be:
// 0 - local rack, 1 - local DC, 2 - remote DC
// or:
// 0 - local DC, 1 - remote DC
HostTier(host *HostInfo) uint
// This function returns the maximum possible host tier
MaxHostTier() uint
}
type Iter ¶
type Iter struct {
// contains filtered or unexported fields
}
Iter represents the result that was returned by the execution of a statement.
If the statement is a query then this can be seen as an iterator that can be used to iterate over all rows that were returned by the query. The iterator might send additional queries to the database during the iteration if paging was enabled.
It also contains metadata about the request that can be accessed by Iter.Keyspace(), Iter.Table(), Iter.Attempts(), Iter.Latency().
func (*Iter) Close ¶
Close closes the iterator and returns any errors that happened during the query or the iteration.
func (*Iter) Columns ¶
func (iter *Iter) Columns() []ColumnInfo
Columns returns the name and type of the selected columns.
func (*Iter) GetCustomPayload ¶
GetCustomPayload returns any parsed custom payload results if given in the response from Cassandra. Note that the result is not a copy.
This additional feature of CQL Protocol v4 allows additional results and query information to be returned by custom QueryHandlers running in your C* cluster. See https://datastax.github.io/java-driver/manual/custom_payloads/
func (*Iter) Keyspace ¶
Keyspace returns the keyspace the statement was executed against if the driver could determine it.
func (*Iter) Latency ¶
Latency returns the average number of nanoseconds per attempt of the statement.
func (*Iter) MapScan ¶
MapScan takes a map[string]interface{} and populates it with a row that is returned from Cassandra.
Each call to MapScan() must be made with a new map object. During the call to MapScan() any pointers in the existing map are replaced with non-pointer types before the call returns.
Columns are automatically converted to Go types based on their CQL type. See SliceMap for the complete CQL to Go type mapping table and examples.
Usage Examples:
iter := session.Query(`SELECT * FROM mytable`).Iter()
for {
// New map each iteration
row := make(map[string]interface{})
if !iter.MapScan(row) {
break
}
// Do things with row
if fullname, ok := row["fullname"]; ok {
fmt.Printf("Full Name: %s\n", fullname)
}
}
You can also pass pointers in the map before each call:
var fullName FullName // Implements gocql.Unmarshaler and gocql.Marshaler interfaces
var address net.IP
var age int
iter := session.Query(`SELECT * FROM scan_map_table`).Iter()
for {
// New map each iteration
row := map[string]interface{}{
"fullname": &fullName,
"age": &age,
"address": &address,
}
if !iter.MapScan(row) {
break
}
fmt.Printf("First: %s Age: %d Address: %q\n", fullName.FirstName, age, address)
}
func (*Iter) NumRows ¶
NumRows returns the number of rows in the current page. It is updated when new pages are fetched, so it is not the total number of rows this iter will return unless only a single page is returned.
func (*Iter) PageState ¶
PageState returns the current paging state for a query, which can be used in subsequent queries to resume paging from this point.
func (*Iter) Scan ¶
Scan consumes the next row of the iterator and copies the columns of the current row into the values pointed at by dest. Use nil as a dest value to skip the corresponding column. Scan might send additional queries to the database to retrieve the next set of rows if paging was enabled.
Scan returns true if the row was successfully unmarshaled or false if the end of the result set was reached or if an error occurred. Close should be called afterwards to retrieve any potential errors.
Supported CQL to Go type conversions are as follows, other type combinations may be added in the future:
CQL Type | Go Type (dest) | Note
ascii, text, varchar | *string |
ascii, text, varchar | *[]byte | non-nil buffer is reused
bigint, counter | *int64 |
bigint, counter | *int, *int32, *int16, *int8 | with range checking
bigint, counter | *uint64, *uint32, *uint16 | with range checking
bigint, counter | *big.Int |
bigint, counter | *string | formatted as base 10 number
blob | *[]byte | non-nil buffer is reused
boolean | *bool |
date | *time.Time | start of day in UTC
date | *string | formatted as "2006-01-02"
decimal | *inf.Dec |
double | *float64 |
duration | *gocql.Duration |
duration | *time.Duration | with range checking
float | *float32 |
inet | *net.IP |
inet | *string | IPv4 or IPv6 address string
int | *int |
int | *int32, *int16, *int8 | with range checking
int | *uint32, *uint16, *uint8 | with range checking
list<T>, set<T> | *[]T |
list<T>, set<T> | *[N]T | array with compatible size
map<K,V> | *map[K]V |
smallint | *int16 |
smallint | *int, *int32, *int8 | with range checking
smallint | *uint16, *uint8 | with range checking
time | *time.Duration | nanoseconds since start of day
time | *int64 | nanoseconds since start of day
timestamp | *time.Time |
timestamp | *int64 | milliseconds since Unix epoch
timeuuid | *gocql.UUID |
timeuuid | *time.Time | timestamp of the UUID
timeuuid | *string | hex representation
timeuuid | *[]byte | 16-byte raw UUID
tinyint | *int8 |
tinyint | *int, *int32, *int16 | with range checking
tinyint | *uint8 | with range checking
tuple<T1,T2,...> | *[]interface{} |
tuple<T1,T2,...> | *[N]interface{} | array with compatible size
tuple<T1,T2,...> | *struct | fields unmarshaled in declaration order
user-defined types | gocql.UDTUnmarshaler | UnmarshalUDT is called
user-defined types | *map[string]interface{} |
user-defined types | *struct | cql tag or field name matching
uuid | *gocql.UUID |
uuid | *string | hex representation
uuid | *[]byte | 16-byte raw UUID
varint | *big.Int |
varint | *int64, *int32, *int16, *int8 | with range checking
varint | *string | formatted as base 10 number
vector<T,N> | *[]T |
vector<T,N> | *[N]T | array with exact size match
Important Notes:
- NULL values are unmarshaled as zero values of the destination type
- Use **Type (pointer to pointer) to distinguish NULL from zero values
- Range checking prevents overflow when converting between numeric types
- For SliceMap/MapScan type mappings, see Iter.SliceMap documentation
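A typical Scan loop iterates until Scan returns false and then checks Close for errors (keyspace and table names are illustrative):

var id gocql.UUID
var name string

iter := session.Query(`SELECT id, name FROM example.users`).Iter()
for iter.Scan(&id, &name) {
	fmt.Println(id, name)
}
if err := iter.Close(); err != nil {
	log.Fatal(err)
}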
func (*Iter) Scanner ¶
Scanner returns a row Scanner which provides an interface to scan rows in a manner which is similar to database/sql. The iter should NOT be used again after calling this method.
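For example (table name illustrative):

scanner := session.Query(`SELECT name FROM example.users`).Iter().Scanner()
for scanner.Next() {
	var name string
	if err := scanner.Scan(&name); err != nil {
		log.Fatal(err)
	}
	fmt.Println(name)
}
// Err also releases resources held by the iterator; the scanner must not be used afterwards.
if err := scanner.Err(); err != nil {
	log.Fatal(err)
}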
func (*Iter) SliceMap ¶
SliceMap is a helper function to make the API easier to use. It returns the data from the query in the form of []map[string]interface{}.
Columns are automatically converted to Go types based on their CQL type. The following table shows exactly what Go type to expect when accessing map values:
CQL Type | Go Type (Non-NULL) | Go Value for NULL | Type Assertion Example
ascii | string | "" | row["col"].(string)
bigint | int64 | int64(0) | row["col"].(int64)
blob | []byte | []byte(nil) | row["col"].([]byte)
boolean | bool | false | row["col"].(bool)
counter | int64 | int64(0) | row["col"].(int64)
date | time.Time | time.Time{} | row["col"].(time.Time)
decimal | *inf.Dec | (*inf.Dec)(nil) | row["col"].(*inf.Dec)
double | float64 | float64(0) | row["col"].(float64)
duration | gocql.Duration | gocql.Duration{} | row["col"].(gocql.Duration)
float | float32 | float32(0) | row["col"].(float32)
inet | net.IP | net.IP(nil) | row["col"].(net.IP)
int | int | int(0) | row["col"].(int)
list<T> | []T | []T(nil) | row["col"].([]string)
map<K,V> | map[K]V | map[K]V(nil) | row["col"].(map[string]int)
set<T> | []T | []T(nil) | row["col"].([]int)
smallint | int16 | int16(0) | row["col"].(int16)
text | string | "" | row["col"].(string)
time | time.Duration | time.Duration(0) | row["col"].(time.Duration)
timestamp | time.Time | time.Time{} | row["col"].(time.Time)
timeuuid | gocql.UUID | gocql.UUID{} | row["col"].(gocql.UUID)
tinyint | int8 | int8(0) | row["col"].(int8)
tuple<T1,T2,...> | (see below) | (see below) | (see below)
uuid | gocql.UUID | gocql.UUID{} | row["col"].(gocql.UUID)
varchar | string | "" | row["col"].(string)
varint | *big.Int | (*big.Int)(nil) | row["col"].(*big.Int)
vector<T,N> | []T | []T(nil) | row["col"].([]float32)
Special Cases:
Tuple Types: Tuple elements are split into separate map entries with keys like "column[0]", "column[1]", etc. Use TupleColumnName to generate the correct key:
// For tuple<int, text> column named "my_tuple"
elem0 := row[gocql.TupleColumnName("my_tuple", 0)].(int)
elem1 := row[gocql.TupleColumnName("my_tuple", 1)].(string)
User-Defined Types (UDTs): Returned as map[string]interface{} with field names as keys:
udt := row["my_udt"].(map[string]interface{})
name := udt["name"].(string)
age := udt["age"].(int)
Important Notes:
- Always use type assertions when accessing map values: row["col"].(ExpectedType)
- NULL database values return Go zero values or nil for pointer types
- Collection types (list, set, map, vector) return nil slices/maps for NULL values
- Migration from v1.x: inet columns now return net.IP instead of string values
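A sketch of typical SliceMap usage, assuming it returns ([]map[string]interface{}, error) as described above (keyspace, table and column names are illustrative):

iter := session.Query(`SELECT id, name, age FROM example.users`).Iter()
rows, err := iter.SliceMap()
if err != nil {
	log.Fatal(err)
}
for _, row := range rows {
	// Use type assertions matching the table above.
	fmt.Println(row["id"].(gocql.UUID), row["name"].(string), row["age"].(int))
}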
func (*Iter) Table ¶
Table returns the name of the table the statement was executed against, if the driver could determine it.
func (*Iter) Warnings ¶
Warnings returns any warnings generated if given in the response from Cassandra.
This is only available starting with CQL Protocol v4.
func (*Iter) WillSwitchPage ¶
WillSwitchPage detects whether the iterator has reached the end of the current page and the next page is available.
type KeyspaceMetadata ¶
type KeyspaceMetadata struct {
Name string
DurableWrites bool
StrategyClass string
StrategyOptions map[string]interface{}
Tables map[string]*TableMetadata
Functions map[string]*FunctionMetadata
Aggregates map[string]*AggregateMetadata
MaterializedViews map[string]*MaterializedViewMetadata
UserTypes map[string]*UserTypeMetadata
}
KeyspaceMetadata holds schema metadata for a keyspace.
type KeyspaceUpdateEvent ¶
KeyspaceUpdateEvent represents a keyspace change event. It contains information about which keyspace changed and what type of change occurred.
type LogField ¶
type LogField struct {
Name string
Value LogFieldValue
}
LogField represents a structured log field with a name and value. It is used to provide structured logging information.
type LogFieldValue ¶
type LogFieldValue struct {
// contains filtered or unexported fields
}
A LogFieldValue can represent any Go value, but unlike type any, it can represent most small values without an allocation. The zero LogFieldValue corresponds to nil.
func (LogFieldValue) Any ¶
func (v LogFieldValue) Any() interface{}
Any returns v's value as an interface.
func (LogFieldValue) Bool ¶
func (v LogFieldValue) Bool() bool
Bool returns v's value as a bool. It panics if v is not a bool.
func (LogFieldValue) Int64 ¶
func (v LogFieldValue) Int64() int64
Int64 returns v's value as an int64. It panics if v is not a signed integer.
func (LogFieldValue) LogFieldValueType ¶
func (v LogFieldValue) LogFieldValueType() LogFieldValueType
LogFieldValueType returns v's LogFieldValueType.
func (LogFieldValue) String ¶
func (v LogFieldValue) String() string
String returns LogFieldValue's value as a string, formatted like fmt.Sprint.
Unlike the methods Int64 and Bool, which panic if v is of the wrong LogFieldValueType, String never panics (i.e. it can be called for any LogFieldValueType, not just LogFieldTypeString).
type LogFieldValueType ¶
type LogFieldValueType int
LogFieldValueType represents the type of a LogFieldValue. It is used to determine how to interpret the value stored in LogFieldValue. Available types: LogFieldTypeAny, LogFieldTypeBool, LogFieldTypeInt64, LogFieldTypeString.
const (
	LogFieldTypeAny LogFieldValueType = iota
	LogFieldTypeBool
	LogFieldTypeInt64
	LogFieldTypeString
)
It's important that LogFieldTypeAny is 0 so that a zero Value represents nil.
func (LogFieldValueType) String ¶
func (t LogFieldValueType) String() string
type LogLevel ¶
type LogLevel int
LogLevel represents the level of logging to be performed. Higher values indicate more verbose logging. Available levels: LogLevelDebug, LogLevelInfo, LogLevelWarn, LogLevelError, LogLevelNone.
type MarshalError ¶
type MarshalError string
MarshalError represents an error that occurred during marshaling.
func (MarshalError) Error ¶
func (m MarshalError) Error() string
type Marshaler ¶
Marshaler is the interface implemented by objects that can marshal themselves into values understood by Cassandra.
type MaterializedViewMetadata ¶
type MaterializedViewMetadata struct {
Keyspace string
Name string
AdditionalWritePolicy string
BaseTableId UUID
BaseTable *TableMetadata
BloomFilterFpChance float64
Caching map[string]string
Comment string
Compaction map[string]string
Compression map[string]string
CrcCheckChance float64
DcLocalReadRepairChance float64
DefaultTimeToLive int
Extensions map[string]string
GcGraceSeconds int
Id UUID
IncludeAllColumns bool
MaxIndexInterval int
MemtableFlushPeriodInMs int
MinIndexInterval int
ReadRepair string // Only present in Cassandra 4.0+
ReadRepairChance float64 // Note: Cassandra 4.0 removed ReadRepairChance and added ReadRepair instead
SpeculativeRetry string
// contains filtered or unexported fields
}
MaterializedViewMetadata holds the metadata for materialized views.
type NextHost ¶
type NextHost func() SelectedHost
NextHost is an iteration function over picked hosts. It should return nil eventually to prevent endless query execution.
type NonSpeculativeExecution ¶
type NonSpeculativeExecution struct{}
NonSpeculativeExecution is a policy that disables speculative execution. It implements SpeculativeExecutionPolicy with zero attempts.
func (NonSpeculativeExecution) Attempts ¶
func (sp NonSpeculativeExecution) Attempts() int
func (NonSpeculativeExecution) Delay ¶
func (sp NonSpeculativeExecution) Delay() time.Duration
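Speculative execution can be disabled per query with this policy. A custom policy only needs the Attempts and Delay methods shown above; the exact SpeculativeExecutionPolicy interface definition is assumed to match these two methods, as in upstream gocql, and the attempt count, delay and query below are illustrative. Speculative execution only applies to idempotent queries:

type twoAttemptPolicy struct{}

// Attempts is the number of speculative attempts to allow in addition to the original one.
func (twoAttemptPolicy) Attempts() int { return 2 }

// Delay is how long to wait before launching each speculative attempt.
func (twoAttemptPolicy) Delay() time.Duration { return 100 * time.Millisecond }

// Usage:
q := session.Query(`SELECT value FROM example.kv WHERE key = ?`, "k")
q.Idempotent(true)
q.SetSpeculativeExecutionPolicy(twoAttemptPolicy{})

// Or disable speculative execution explicitly:
q.SetSpeculativeExecutionPolicy(gocql.NonSpeculativeExecution{})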
type ObservedBatch ¶
type ObservedBatch struct {
Keyspace string
Statements []string
// Values holds a slice of bound values for each statement.
// Values[i] are bound values passed to Statements[i].
// Do not modify the values here, they are shared with multiple goroutines.
Values [][]interface{}
Start time.Time // time immediately before the batch query was called
End time.Time // time immediately after the batch query returned
// Host is the information about the host that performed the batch
Host *HostInfo
// Err is the error in the batch query.
// It only tracks network errors or errors of bad Cassandra syntax; in particular, selects with no match return a nil error.
Err error
// The metrics per this host
Metrics *hostMetrics
// Attempt is the index of attempt at executing this query.
// The first attempt is number zero and any retries have non-zero attempt number.
Attempt int
// Batch object associated with this request. Should be used as read only.
Batch *Batch
}
type ObservedConnect ¶
type ObservedFrameHeader ¶
type ObservedFrameHeader struct {
Version protoVersion
Flags byte
Stream int16
Opcode frameOp
Length int32
// Start is the time we started reading the frame header off the network connection.
Start time.Time
// End is the time we finished reading the frame header off the network connection.
End time.Time
// Host is the host of the connection the frame header was read from.
Host *HostInfo
}
func (ObservedFrameHeader) String ¶
func (f ObservedFrameHeader) String() string
type ObservedQuery ¶
type ObservedQuery struct {
Keyspace string
Statement string
// Values holds a slice of bound values for the query.
// Do not modify the values here, they are shared with multiple goroutines.
Values []interface{}
Start time.Time // time immediately before the query was called
End time.Time // time immediately after the query returned
// Rows is the number of rows in the current iter.
// In paginated queries, rows from previous scans are not counted.
// Rows is not used in batch queries and remains at the default value.
Rows int
// Host is the information about the host that performed the query
Host *HostInfo
// The metrics per this host
Metrics *hostMetrics
// Err is the error in the query.
// It only tracks network errors or errors of bad Cassandra syntax; in particular, selects with no match return a nil error.
Err error
// Attempt is the index of attempt at executing this query.
// The first attempt is number zero and any retries have non-zero attempt number.
Attempt int
// Query object associated with this request. Should be used as read only.
Query *Query
}
type ObservedStream ¶
type ObservedStream struct {
// Host of the connection used to send the stream.
Host *HostInfo
}
ObservedStream observes a single request/response stream.
type PasswordAuthenticator ¶
type PasswordAuthenticator struct {
Username string
Password string
// Setting this to nil or empty will allow authenticating with any authenticator
// provided by the server. This is the default behavior of most other driver
// implementations.
AllowedAuthenticators []string
}
PasswordAuthenticator specifies credentials to be used when authenticating. It can be configured with an "allow list" of authenticator class names to avoid attempting to authenticate with Cassandra if it doesn't provide an expected authenticator.
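For example, assuming the cluster configuration exposes an Authenticator field as in upstream gocql (credentials and class name are illustrative):

cluster.Authenticator = gocql.PasswordAuthenticator{
	Username: "cassandra",
	Password: "cassandra",
	// Restrict which server-provided authenticator classes we are willing to answer.
	AllowedAuthenticators: []string{"org.apache.cassandra.auth.PasswordAuthenticator"},
}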
func (PasswordAuthenticator) Challenge ¶
func (p PasswordAuthenticator) Challenge(req []byte) ([]byte, Authenticator, error)
func (PasswordAuthenticator) Success ¶
func (p PasswordAuthenticator) Success(data []byte) error
type PoolConfig ¶
type PoolConfig struct {
// HostSelectionPolicy sets the policy for selecting which host to use for a
// given query (default: RoundRobinHostPolicy())
// It is not supported to use a single HostSelectionPolicy in multiple sessions
// (even if you close the old session before using in a new session).
HostSelectionPolicy HostSelectionPolicy
}
PoolConfig configures the connection pool used by the driver. It defaults to using a round-robin host selection policy and a round-robin connection selection policy for each host.
type Query ¶
type Query struct {
// contains filtered or unexported fields
}
Query represents a CQL statement that can be executed.
func (*Query) Bind ¶
Bind sets the query arguments of the query. This can also be used to rebind new query arguments to an existing query instance.
For supported Go to CQL type conversions for query parameters, see Session.Query documentation.
func (*Query) Consistency ¶
func (q *Query) Consistency(c Consistency) *Query
Consistency sets the consistency level for this query. If no consistency level has been set, the default consistency level of the cluster is used.
func (*Query) CustomPayload ¶
CustomPayload sets the custom payload for this query. The map is not copied internally, so it shouldn't be modified after the query is scheduled for execution.
func (*Query) DefaultTimestamp ¶
DefaultTimestamp will enable the "with default timestamp" flag on the query. If enabled, this will replace the server-side assigned timestamp as the default timestamp. Note that a timestamp in the query itself will still override this timestamp. This is entirely optional.
Only available on protocol >= 3
func (*Query) ExecContext ¶
ExecContext executes the query with the provided context without returning any rows.
func (*Query) GetConsistency ¶
func (q *Query) GetConsistency() Consistency
GetConsistency returns the currently configured consistency level for the query.
func (*Query) Idempotent ¶
Idempotent marks the query as being idempotent or not depending on the value. Non-idempotent query won't be retried. See "Retries and speculative execution" in package docs for more details.
func (*Query) IsIdempotent ¶
IsIdempotent returns whether the query is marked as idempotent. Non-idempotent query won't be retried. See "Retries and speculative execution" in package docs for more details.
func (*Query) Iter ¶
Iter executes the query and returns an iterator capable of iterating over all results.
func (*Query) IterContext ¶
IterContext executes the query with the provided context and returns an iterator capable of iterating over all results.
func (*Query) MapScan ¶
MapScan executes the query, copies the columns of the first selected row into the map pointed at by m and discards the rest. If no rows were selected, ErrNotFound is returned.
Columns are automatically converted to Go types based on their CQL type. See Iter.SliceMap for the complete CQL to Go type mapping table and examples.
func (*Query) MapScanCAS ¶
MapScanCAS executes a lightweight transaction (i.e. an UPDATE or INSERT statement containing an IF clause). If the transaction fails because the existing values did not match, the previous values will be stored in dest map.
As for INSERT .. IF NOT EXISTS, previous values will be returned as if SELECT * FROM. So using ScanCAS with INSERT is inherently prone to column mismatching. MapScanCAS is added to capture them safely.
Example ¶
ExampleQuery_MapScanCAS demonstrates how to execute a single-statement lightweight transaction.
package main
import (
"context"
"fmt"
"log"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create table example.my_lwt_table(pk int, version int, value text, PRIMARY KEY(pk));
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.Keyspace = "example"
cluster.ProtoVersion = 4
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
ctx := context.Background()
err = session.Query("INSERT INTO example.my_lwt_table (pk, version, value) VALUES (?, ?, ?)",
1, 1, "a").ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
m := make(map[string]interface{})
applied, err := session.Query("UPDATE example.my_lwt_table SET value = ? WHERE pk = ? IF version = ?",
"b", 1, 0).MapScanCASContext(ctx, m)
if err != nil {
log.Fatal(err)
}
fmt.Println(applied, m)
var value string
err = session.Query("SELECT value FROM example.my_lwt_table WHERE pk = ?", 1).
ScanContext(ctx, &value)
if err != nil {
log.Fatal(err)
}
fmt.Println(value)
m = make(map[string]interface{})
applied, err = session.Query("UPDATE example.my_lwt_table SET value = ? WHERE pk = ? IF version = ?",
"b", 1, 1).MapScanCASContext(ctx, m)
if err != nil {
log.Fatal(err)
}
fmt.Println(applied, m)
var value2 string
err = session.Query("SELECT value FROM example.my_lwt_table WHERE pk = ?", 1).
ScanContext(ctx, &value2)
if err != nil {
log.Fatal(err)
}
fmt.Println(value2)
// false map[version:1]
// a
// true map[]
// b
}
func (*Query) MapScanCASContext ¶
func (q *Query) MapScanCASContext(ctx context.Context, dest map[string]interface{}) (applied bool, err error)
MapScanCASContext executes a lightweight transaction (i.e. an UPDATE or INSERT statement containing an IF clause) with the provided context. If the transaction fails because the existing values did not match, the previous values will be stored in dest map.
As for INSERT .. IF NOT EXISTS, previous values will be returned as if SELECT * FROM. So using ScanCAS with INSERT is inherently prone to column mismatching. MapScanCAS is added to capture them safely.
func (*Query) MapScanContext ¶
MapScanContext executes the query with the provided context, copies the columns of the first selected row into the map pointed at by m and discards the rest. If no rows were selected, ErrNotFound is returned.
func (*Query) NoSkipMetadata ¶
NoSkipMetadata will override the internal result metadata cache so that the driver does not send skip_metadata for queries, this means that the result will always contain the metadata to parse the rows and will not reuse the metadata from the prepared statement. This should only be used to work around cassandra bugs, such as when using CAS operations which do not end in Cas.
See https://issues.apache.org/jira/browse/CASSANDRA-11099 https://github.com/apache/cassandra-gocql-driver/issues/612
func (*Query) Observer ¶
func (q *Query) Observer(observer QueryObserver) *Query
Observer enables query-level observer on this query. The provided observer will be called every time this query is executed.
func (*Query) PageSize ¶
PageSize will tell the iterator to fetch the result in pages of size n. This is useful for iterating over large result sets, but setting the page size too low might decrease the performance. This feature is only available in Cassandra 2 and onwards.
func (*Query) PageState ¶
PageState sets the paging state for the query to resume paging from a specific point in time. Setting this will disable query paging for this query, and it must be used for all subsequent pages.
func (*Query) Prefetch ¶
Prefetch sets the default threshold for pre-fetching new pages. If there are only p*pageSize rows remaining, the next page will be requested automatically.
func (*Query) RetryPolicy ¶
func (q *Query) RetryPolicy(r RetryPolicy) *Query
RetryPolicy sets the policy to use when retrying the query.
func (*Query) RoutingKey ¶
RoutingKey sets the routing key to use when a token aware connection pool is used to optimize the routing of this query.
func (*Query) Scan ¶
Scan executes the query, copies the columns of the first selected row into the values pointed at by dest and discards the rest. If no rows were selected, ErrNotFound is returned.
For supported CQL to Go type conversions, see Iter.Scan documentation.
func (*Query) ScanCAS ¶
ScanCAS executes a lightweight transaction (i.e. an UPDATE or INSERT statement containing an IF clause). If the transaction fails because the existing values did not match, the previous values will be stored in dest.
As for INSERT .. IF NOT EXISTS, previous values will be returned as if SELECT * FROM. So using ScanCAS with INSERT is inherently prone to column mismatching. Use MapScanCAS to capture them safely.
For supported CQL to Go type conversions, see Iter.Scan documentation.
func (*Query) ScanCASContext ¶
ScanCASContext executes a lightweight transaction (i.e. an UPDATE or INSERT statement containing an IF clause) with the provided context. If the transaction fails because the existing values did not match, the previous values will be stored in dest.
As for INSERT .. IF NOT EXISTS, previous values will be returned as if SELECT * FROM. So using ScanCAS with INSERT is inherently prone to column mismatching. Use MapScanCAS to capture them safely.
For supported CQL to Go type conversions, see Iter.Scan documentation.
func (*Query) ScanContext ¶
ScanContext executes the query with the provided context, copies the columns of the first selected row into the values pointed at by dest and discards the rest. If no rows were selected, ErrNotFound is returned.
For supported CQL to Go type conversions, see Iter.Scan documentation.
func (*Query) SerialConsistency ¶
func (q *Query) SerialConsistency(cons Consistency) *Query
SerialConsistency sets the consistency level for the serial phase of conditional updates. That consistency can only be either SERIAL or LOCAL_SERIAL and, if not present, it defaults to SERIAL. This option will be ignored for anything other than a conditional update/insert.
func (*Query) SetConsistency
deprecated
func (q *Query) SetConsistency(c Consistency)
Deprecated: use Query.Consistency instead
func (*Query) SetHostID ¶
SetHostID allows specifying the host the query should be executed against. If the host was filtered or is otherwise unavailable, the query will error. If an empty string is passed, the default behavior of using the configured HostSelectionPolicy applies. A hostID can be obtained from HostInfo.HostID() after calling GetHosts().
func (*Query) SetKeyspace ¶
SetKeyspace will enable the keyspace flag on the query. It allows specifying the keyspace the query should be executed in.
Only available on protocol >= 5.
func (*Query) SetSpeculativeExecutionPolicy ¶
func (q *Query) SetSpeculativeExecutionPolicy(sp SpeculativeExecutionPolicy) *Query
SetSpeculativeExecutionPolicy sets the speculative execution policy for the query.
func (*Query) Trace ¶
Trace enables tracing of this query. Look at the documentation of the Tracer interface to learn more about tracing.
func (Query) Values ¶
func (q Query) Values() []interface{}
Values returns the values passed in via Bind. This can be used by a wrapper type that needs to access the bound values.
func (*Query) WithContext
deprecated
Deprecated: Use Query.ExecContext or Query.IterContext instead. This will be removed in a future major version.
WithContext returns a shallow copy of q with its context set to ctx.
The provided context controls the entire lifetime of executing a query, queries will be canceled and return once the context is canceled.
func (*Query) WithNowInSeconds ¶
WithNowInSeconds will enable the now_in_seconds flag on the query and allows defining the now_in_seconds value.
Only available on protocol >= 5.
func (*Query) WithTimestamp ¶
WithTimestamp will enable the "with default timestamp" flag on the query, like DefaultTimestamp does, but also allows defining the timestamp value. It works the same way as USING TIMESTAMP in the query itself, but should not break prepared query optimization.
Only available on protocol >= 3
type QueryInfo ¶
type QueryInfo struct {
Id []byte
Args []ColumnInfo
Rval []ColumnInfo
PKeyColumns []int
}
QueryInfo represents metadata information about a prepared query. It contains the query ID, argument information, result information, and primary key columns.
type QueryObserver ¶
type QueryObserver interface {
// ObserveQuery gets called on every query to cassandra, including all queries in an iterator when paging is enabled.
// It doesn't get called if there is no query because the session is closed or there are no connections available.
// The error reported only shows query errors, i.e. if a SELECT is valid but finds no matches it will be nil.
ObserveQuery(context.Context, ObservedQuery)
}
QueryObserver is the interface implemented by query observers / stat collectors.
Experimental, this interface and use may change
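A minimal QueryObserver that logs slow queries might look like the following; the 100ms threshold and the standard-library logger are illustrative:

type slowQueryLogger struct {
	threshold time.Duration
}

// ObserveQuery is called once per query execution, including per-page queries when paging is enabled.
func (o slowQueryLogger) ObserveQuery(ctx context.Context, q gocql.ObservedQuery) {
	if d := q.End.Sub(q.Start); d > o.threshold {
		log.Printf("slow query (%s, attempt %d): %s", d, q.Attempt, q.Statement)
	}
}

// Wiring it up on the cluster configuration:
cluster.QueryObserver = slowQueryLogger{threshold: 100 * time.Millisecond}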
type ReadyPolicy ¶
type ReadyPolicy interface {
Ready() bool
}
ReadyPolicy defines a policy for when a HostSelectionPolicy can be used. After each host connects during session initialization, the Ready method will be called. If you only need a single host to be up, you can wrap a HostSelectionPolicy with SingleHostReadyPolicy.
type ReconnectionPolicy ¶
type ReconnectionPolicy interface {
GetInterval(currentRetry int) time.Duration
GetMaxRetries() int
}
The ReconnectionPolicy interface is used by gocql to determine if reconnection can be attempted after a connection error. The interface allows gocql users to implement their own logic to determine how to attempt reconnection.
type RegisteredTypes ¶
type RegisteredTypes struct {
// contains filtered or unexported fields
}
RegisteredTypes is a collection of CQL types.
func (*RegisteredTypes) AddAlias ¶
func (r *RegisteredTypes) AddAlias(name, as string) error
AddAlias adds an alias for an already registered type. If you expect a type to be referenced as multiple different types or if you need to add the Java marshal class for a type you should call this method. This function must not be called after a session has been created.
func (*RegisteredTypes) Copy ¶
func (r *RegisteredTypes) Copy() *RegisteredTypes
Copy returns a new shallow copy of the RegisteredTypes.
func (*RegisteredTypes) RegisterCustom ¶
func (r *RegisteredTypes) RegisterCustom(name string, t CQLType) error
RegisterCustom registers a new custom CQL type. Name is the name of the type as returned in the metadata for the column. CQLType is the implementation of the type. This function must not be called after a session has been created.
func (*RegisteredTypes) RegisterType ¶
func (r *RegisteredTypes) RegisterType(typ Type, name string, t CQLType) error
RegisterType registers a new CQL data type. Type should be the CQL id for the type. Name is the name of the type as returned in the metadata for the column. CQLType is the implementation of the type. This function must not be called after a session has been created.
type RequestErrAlreadyExists ¶
type RequestErrAlreadyExists struct {
Keyspace string
Table string
// contains filtered or unexported fields
}
RequestErrAlreadyExists represents an "already exists" error returned by Cassandra. This error occurs when attempting to create a keyspace or table that already exists.
type RequestErrCASWriteUnknown ¶
type RequestErrCASWriteUnknown struct {
Consistency Consistency
Received int
BlockFor int
// contains filtered or unexported fields
}
RequestErrCASWriteUnknown is distinct error for ErrCodeCasWriteUnknown.
See https://github.com/apache/cassandra/blob/7337fc0/doc/native_protocol_v5.spec#L1387-L1397
type RequestErrCDCWriteFailure ¶
type RequestErrCDCWriteFailure struct {
// contains filtered or unexported fields
}
RequestErrCDCWriteFailure represents a CDC write failure error returned by Cassandra. This error occurs when a write to the Change Data Capture log fails.
type RequestErrFunctionFailure ¶
type RequestErrFunctionFailure struct {
Keyspace string
Function string
ArgTypes []string
// contains filtered or unexported fields
}
RequestErrFunctionFailure represents a function failure error returned by Cassandra. This error occurs when a user-defined function fails during execution.
type RequestErrReadFailure ¶
type RequestErrReadFailure struct {
Consistency Consistency
Received int
BlockFor int
NumFailures int
DataPresent bool
ErrorMap ErrorMap
// contains filtered or unexported fields
}
RequestErrReadFailure represents a read failure error returned by Cassandra. This error occurs when a read request fails on one or more replicas.
type RequestErrReadTimeout ¶
type RequestErrReadTimeout struct {
Consistency Consistency
Received int
BlockFor int
DataPresent byte
// contains filtered or unexported fields
}
RequestErrReadTimeout represents a read timeout error returned by Cassandra. This error occurs when a read request times out after the coordinator has received some responses but not enough to satisfy the required consistency level.
type RequestErrUnavailable ¶
type RequestErrUnavailable struct {
// contains filtered or unexported fields
}
RequestErrUnavailable represents an unavailable error returned by Cassandra. This error occurs when there are not enough nodes available to fulfill the request.
func (*RequestErrUnavailable) String ¶
func (e *RequestErrUnavailable) String() string
type RequestErrUnprepared ¶
type RequestErrUnprepared struct {
StatementId []byte
// contains filtered or unexported fields
}
RequestErrUnprepared represents an "unprepared" error returned by Cassandra. This error occurs when a prepared statement is no longer available on the server.
type RequestErrWriteFailure ¶
type RequestErrWriteFailure struct {
Consistency Consistency
Received int
BlockFor int
NumFailures int
WriteType string
ErrorMap ErrorMap
// contains filtered or unexported fields
}
RequestErrWriteFailure represents a write failure error returned by Cassandra. This error occurs when a write request fails on one or more replicas.
type RequestErrWriteTimeout ¶
type RequestErrWriteTimeout struct {
Consistency Consistency
Received int
BlockFor int
WriteType string
// contains filtered or unexported fields
}
RequestErrWriteTimeout represents a write timeout error returned by Cassandra. This error occurs when a write request times out after the coordinator has successfully written to some replicas but not enough to satisfy the required consistency level.
type RequestError ¶
RequestError represents errors returned by Cassandra server.
type RetryPolicy ¶
type RetryPolicy interface {
Attempt(RetryableQuery) bool
GetRetryType(error) RetryType
}
RetryPolicy interface is used by gocql to determine if a query can be attempted again after a retryable error has been received. The interface allows gocql users to implement their own logic to determine if a query can be attempted again.
See SimpleRetryPolicy as an example of implementing and using a RetryPolicy interface.
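A sketch of a custom RetryPolicy built only from the interfaces and RetryType values documented here; it allows up to three retries and always retries on the next host (the error inspection is intentionally simplistic, and the query is illustrative):

type upToThreeRetries struct{}

// Attempt reports whether the query may be tried again, based on how many attempts have been made so far.
func (upToThreeRetries) Attempt(q gocql.RetryableQuery) bool {
	return q.Attempts() <= 3
}

// GetRetryType chooses how to retry; here every retryable error is sent to the next host in the selection policy.
func (upToThreeRetries) GetRetryType(err error) gocql.RetryType {
	return gocql.RetryNextHost
}

// Per-query usage:
q := session.Query(`SELECT * FROM example.users`).RetryPolicy(upToThreeRetries{})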
type RetryType ¶
type RetryType uint16
RetryType represents the type of retry that should be performed by the retry policy. Available types: Retry, RetryNextHost, Ignore, Rethrow.
type RetryableQuery ¶
type RetryableQuery interface {
Attempts() int
SetConsistency(c Consistency)
GetConsistency() Consistency
Context() context.Context
}
RetryableQuery is an interface that represents a query or batch statement that exposes the correct functions for the retry policy logic to evaluate correctly.
type RowData ¶
type RowData struct {
Columns []string
Values []interface{}
}
RowData contains the column names and pointers to the default values for each column
type Scanner ¶
type Scanner interface {
// Next advances the row pointer to point at the next row, the row is valid until
// the next call of Next. It returns true if there is a row which is available to be
// scanned into with Scan.
// Next must be called before every call to Scan.
Next() bool
// Scan copies the current row's columns into dest. If the length of dest does not equal
// the number of columns returned in the row an error is returned. If an error is encountered
// when unmarshalling a column into the value in dest an error is returned and the row is invalidated
// until the next call to Next.
// Next must be called before calling Scan; if it is not, an error is returned.
//
// For supported CQL to Go type conversions, see Iter.Scan documentation.
Scan(...interface{}) error
// Err returns the error, if any, that occurred during iteration and prevented it from completing.
// Err will also release resources held by the iterator; the Scanner should not be used after Err is called.
Err() error
}
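A minimal iteration sketch, assuming the Scanner is obtained from Iter.Scanner() and a hypothetical example.users table:
scanner := session.Query("SELECT id, name FROM example.users").Iter().Scanner()
for scanner.Next() {
	var (
		id   gocql.UUID
		name string
	)
	// Scan copies the current row into the destination variables.
	if err := scanner.Scan(&id, &name); err != nil {
		log.Fatal(err)
	}
	fmt.Println(id, name)
}
// Err reports any error seen during iteration and releases the iterator's resources.
if err := scanner.Err(); err != nil {
	log.Fatal(err)
}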
type SelectedHost ¶
SelectedHost is an interface returned when picking a host from a host selection policy.
type SerialConsistency ¶
type SerialConsistency = Consistency
SerialConsistency is deprecated. Use Consistency instead.
type Session ¶
type Session struct {
// contains filtered or unexported fields
}
Session is the interface used by users to interact with the database.
It's safe for concurrent use by multiple goroutines and a typical usage scenario is to have one global session object to interact with the whole Cassandra cluster.
This type extends the Node interface by adding a convenient query builder and automatically sets a default consistency level on all operations that do not have a consistency level set.
func NewSession ¶
func NewSession(cfg ClusterConfig) (*Session, error)
NewSession wraps an existing Node.
func (*Session) AwaitSchemaAgreement ¶
AwaitSchemaAgreement will wait until schema versions across all nodes in the cluster are the same (as seen from the point of view of the control connection). The maximum amount of time this takes is governed by the MaxWaitSchemaAgreement setting in the configuration (default: 60s). AwaitSchemaAgreement returns an error in case schema versions are not the same after the timeout specified in MaxWaitSchemaAgreement elapses.
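For example, after issuing a schema change you might wait for agreement before continuing. A sketch, assuming the method accepts a context.Context and using a hypothetical table:
err := session.Query("CREATE TABLE IF NOT EXISTS example.users (id uuid PRIMARY KEY, name text)").ExecContext(ctx)
if err != nil {
	return err
}
// Block until every node reports the same schema version (or MaxWaitSchemaAgreement elapses).
if err := session.AwaitSchemaAgreement(ctx); err != nil {
	return err
}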
func (*Session) Bind ¶
Bind generates a new query object based on the query statement passed in. The query is automatically prepared if it has not previously been executed. The binding callback allows the application to define which query argument values will be marshalled as part of the query execution. During execution, the metadata of the prepared query will be passed to the binding callback, which is responsible for producing the query argument values.
For supported Go to CQL type conversions for query parameters, see Session.Query documentation.
func (*Session) Close ¶
func (s *Session) Close()
Close closes all connections. The session is unusable after this operation.
func (*Session) ExecuteBatch
deprecated
func (*Session) ExecuteBatchCAS
deprecated
func (s *Session) ExecuteBatchCAS(batch *Batch, dest ...interface{}) (applied bool, iter *Iter, err error)
Deprecated: use Batch.ExecCAS instead.
ExecuteBatchCAS executes a batch operation and returns true if successful, together with an iterator (to scan additional rows if there is more than one conditional statement). Further scans on the iterator must also include the applied boolean as the first argument to *Iter.Scan.
func (*Session) KeyspaceMetadata ¶
func (s *Session) KeyspaceMetadata(keyspace string) (*KeyspaceMetadata, error)
KeyspaceMetadata returns the schema metadata for the keyspace specified. Returns an error if the keyspace does not exist.
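A sketch of inspecting a keyspace, assuming KeyspaceMetadata exposes a Tables map keyed by table name (as in earlier driver versions):
ks, err := session.KeyspaceMetadata("example")
if err != nil {
	return err
}
// Print the tables the driver knows about in this keyspace.
for name := range ks.Tables {
	fmt.Println(name)
}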
func (*Session) MapExecuteBatchCAS
deprecated
func (s *Session) MapExecuteBatchCAS(batch *Batch, dest map[string]interface{}) (applied bool, iter *Iter, err error)
Deprecated: use Batch.MapExecCAS instead.
MapExecuteBatchCAS executes a batch operation much like ExecuteBatchCAS, however it accepts a map rather than a list of arguments for the initial scan.
func (*Session) Query ¶
Query generates a new query object for interacting with the database. Further details of the query may be tweaked using the resulting query value before the query is executed. Query is automatically prepared if it has not previously been executed.
Supported Go to CQL type conversions for query parameters are as follows:
Go type (value) | CQL type | Note
string, []byte | varchar, ascii, blob, text |
bool | boolean |
integer types | tinyint, smallint, int |
string | tinyint, smallint, int | formatted as base 10 number
integer types | bigint, counter |
big.Int | bigint, counter | according to the Cassandra bigint specification, the big.Int value is limited to int64 size (an eight-byte two's complement integer)
string | bigint, counter | formatted as base 10 number
float32 | float |
float64 | double |
inf.Dec | decimal |
int64 | time | nanoseconds since start of day
time.Duration | time | duration since start of day
int64 | timestamp | milliseconds since Unix epoch
time.Time | timestamp |
slice, array | list, set |
map[X]struct{} | list, set |
map[X]Y | map |
gocql.UUID | uuid, timeuuid |
[16]byte | uuid, timeuuid | raw UUID bytes
[]byte | uuid, timeuuid | raw UUID bytes, length must be 16 bytes
string | uuid, timeuuid | hex representation, see ParseUUID
integer types | varint |
big.Int | varint |
string | varint | value of number in decimal notation
net.IP | inet |
string | inet | IPv4 or IPv6 address string
slice, array | tuple |
struct | tuple | fields are marshaled in order of declaration
gocql.UDTMarshaler | user-defined type | MarshalUDT is called
map[string]interface{} | user-defined type |
struct | user-defined type | struct fields' cql tags are used for column names
int64 | date | milliseconds since Unix epoch to start of day (in UTC)
time.Time | date | start of day (in UTC)
string | date | parsed using "2006-01-02" format
int64 | duration | duration in nanoseconds
time.Duration | duration |
gocql.Duration | duration |
string | duration | parsed with time.ParseDuration
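For illustration, a few of these conversions combined in a single insert (the example.events table is hypothetical):
id := gocql.TimeUUID() // gocql.UUID -> timeuuid
err := session.Query(
	"INSERT INTO example.events (id, created, score, tags, meta) VALUES (?, ?, ?, ?, ?)",
	id,
	time.Now(),                         // time.Time -> timestamp
	0.42,                               // float64 -> double
	[]string{"new", "sale"},            // slice -> set<text>
	map[string]string{"source": "web"}, // map -> map<text, text>
).Exec()
if err != nil {
	return err
}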
type SetHosts ¶
type SetHosts interface {
SetHosts(hosts []*HostInfo)
}
SetHosts is the interface to implement to receive the host information.
type SetPartitioner ¶
type SetPartitioner interface {
SetPartitioner(partitioner string)
}
SetPartitioner is the interface to implement to receive the partitioner value.
type SimpleCQLType ¶
type SimpleCQLType struct {
TypeInfo
}
SimpleCQLType is a convenience wrapper around a TypeInfo that implements CQLType by returning nil for Params, and the TypeInfo for TypeInfoFromParams and TypeInfoFromString.
func (SimpleCQLType) TypeInfoFromParams ¶
func (s SimpleCQLType) TypeInfoFromParams(proto int, params []interface{}) (TypeInfo, error)
TypeInfoFromParams returns the wrapped TypeInfo.
func (SimpleCQLType) TypeInfoFromString ¶
func (s SimpleCQLType) TypeInfoFromString(proto int, name string) (TypeInfo, error)
TypeInfoFromString returns the wrapped TypeInfo.
type SimpleConvictionPolicy ¶
type SimpleConvictionPolicy struct{}
SimpleConvictionPolicy implements a ConvictionPolicy which convicts all hosts regardless of error
func (*SimpleConvictionPolicy) AddFailure ¶
func (e *SimpleConvictionPolicy) AddFailure(error error, host *HostInfo) bool
func (*SimpleConvictionPolicy) Reset ¶
func (e *SimpleConvictionPolicy) Reset(host *HostInfo)
type SimpleRetryPolicy ¶
type SimpleRetryPolicy struct {
NumRetries int // Number of times to retry a query
}
SimpleRetryPolicy has simple logic for attempting a query a fixed number of times.
See below for examples of usage:
// Assign to the cluster
cluster.RetryPolicy = &gocql.SimpleRetryPolicy{NumRetries: 3}
// Assign to a query
query.RetryPolicy(&gocql.SimpleRetryPolicy{NumRetries: 1})
func (*SimpleRetryPolicy) Attempt ¶
func (s *SimpleRetryPolicy) Attempt(q RetryableQuery) bool
Attempt tells gocql to attempt the query again based on query.Attempts being less than the NumRetries defined in the policy.
func (*SimpleRetryPolicy) GetRetryType ¶
func (s *SimpleRetryPolicy) GetRetryType(err error) RetryType
type SimpleSpeculativeExecution ¶
SimpleSpeculativeExecution is a policy that enables speculative execution with a fixed number of attempts and delay.
func (*SimpleSpeculativeExecution) Attempts ¶
func (sp *SimpleSpeculativeExecution) Attempts() int
func (*SimpleSpeculativeExecution) Delay ¶
func (sp *SimpleSpeculativeExecution) Delay() time.Duration
type SpeculativeExecutionPolicy ¶
SpeculativeExecutionPolicy defines the interface for speculative execution policies. These policies determine when and how many speculative queries to execute.
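A sketch of enabling speculative execution on an idempotent query. The NumAttempts and TimeoutDelay fields and the Query.Idempotent and Query.SetSpeculativeExecutionPolicy methods are assumed from earlier driver versions and are not shown above:
sp := &gocql.SimpleSpeculativeExecution{
	NumAttempts:  2,                      // assumed field: number of extra speculative attempts
	TimeoutDelay: 100 * time.Millisecond, // assumed field: delay before each speculative attempt
}
// Speculative execution should only be enabled for idempotent queries.
iter := session.Query("SELECT name FROM example.users WHERE id = ?", id).
	Idempotent(true).                  // assumed method
	SetSpeculativeExecutionPolicy(sp). // assumed method
	Iter()
// ... scan rows, then check iter.Close() for errors.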
type SslOptions ¶
type SslOptions struct {
*tls.Config
// CertPath and KeyPath are optional depending on server
// config, but both fields must be omitted to avoid using a
// client certificate
CertPath string
KeyPath string
CaPath string //optional depending on server config
// If you want to verify the hostname and server cert (like a wildcard for cass cluster) then you should turn this
// on.
// This option is basically the inverse of tls.Config.InsecureSkipVerify.
// See InsecureSkipVerify in http://golang.org/pkg/crypto/tls/ for more info.
//
// See SslOptions documentation to see how EnableHostVerification interacts with the provided tls.Config.
EnableHostVerification bool
}
SslOptions configures TLS use.
Warning: for historical reasons, SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows:
| Config.InsecureSkipVerify | EnableHostVerification | Result |
|---|---|---|
| Config is nil | false | do not verify host |
| Config is nil | true | verify host |
| false | false | verify host |
| true | false | do not verify host |
| false | true | verify host |
| true | true | verify host |
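A TLS setup sketch; the SslOpts field name on ClusterConfig, the server name and the CA path are assumptions for illustration:
cluster := gocql.NewCluster("192.168.1.1")
cluster.SslOpts = &gocql.SslOptions{ // assumed ClusterConfig field
	Config: &tls.Config{
		ServerName: "cassandra.example.com", // hypothetical
	},
	CaPath:                 "/etc/cassandra/ca.pem", // hypothetical
	EnableHostVerification: true,
}
session, err := cluster.CreateSession()
if err != nil {
	return err
}
defer session.Close()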
type Statement ¶
type Statement interface {
Iter() *Iter
IterContext(ctx context.Context) *Iter
Exec() error
ExecContext(ctx context.Context) error
}
Statement is an interface that represents a CQL statement that the driver can execute (currently Query and Batch via Session.Query and Session.Batch)
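Because both Query and Batch satisfy Statement, helpers can be written against the interface; a minimal sketch:
// execWithTimeout executes any Statement (a Query or a Batch) under a bounded context.
func execWithTimeout(stmt gocql.Statement, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	return stmt.ExecContext(ctx)
}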
type StdLogger
deprecated
type StdLogger interface{}
Deprecated: use StructuredLogger instead
type StreamObserver ¶
type StreamObserver interface {
// StreamContext is called before creating a new stream.
// ctx is context passed to Session.Query / Session.Batch,
// but might also be an internal context (for example
// for internal requests that use control connection).
// StreamContext might return nil if it is not interested
// in the details of this stream.
// StreamContext is called before the stream is created
// and the returned StreamObserverContext might be discarded
// without any methods called on the StreamObserverContext if
// creation of the stream fails.
// Note that if you don't need to track per-stream data,
// you can always return the same StreamObserverContext.
StreamContext(ctx context.Context) StreamObserverContext
}
StreamObserver is notified about request/response pairs. Streams are created for executing queries/batches or internal requests to the database and might live longer than execution of the query - the stream is still tracked until response arrives so that stream IDs are not reused.
type StreamObserverContext ¶
type StreamObserverContext interface {
// StreamStarted is called when the stream is started.
// This happens just before a request is written to the wire.
StreamStarted(observedStream ObservedStream)
// StreamAbandoned is called when we stop waiting for response.
// This happens when the underlying network connection is closed.
// StreamFinished won't be called if StreamAbandoned is.
StreamAbandoned(observedStream ObservedStream)
// StreamFinished is called when we receive a response for the stream.
StreamFinished(observedStream ObservedStream)
}
StreamObserverContext is notified about state of a stream. A stream is started every time a request is written to the server and is finished when a response is received. It is abandoned when the underlying network connection is closed before receiving a response.
type StructuredLogger ¶
type StructuredLogger interface {
Error(msg string, fields ...LogField)
Warning(msg string, fields ...LogField)
Info(msg string, fields ...LogField)
Debug(msg string, fields ...LogField)
}
func NewLogger ¶
func NewLogger(logLevel LogLevel) StructuredLogger
NewLogger creates a StructuredLogger that uses the standard library log package.
This logger will write log messages in the following format:
<LOG_LEVEL> gocql: <message> <fields[0].Name>=<fields[0].Value> <fields[1].Name>=<fields[1].Value>
LOG_LEVEL is always a 3 letter string:
- DEBUG -> DBG
- INFO -> INF
- WARNING -> WRN
- ERROR -> ERR
Example:
INF gocql: Adding host (session initialization). host_addr=127.0.0.1 host_id=a21dd06e-9e7e-4528-8ad7-039604e25e73
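If this format does not suit you, StructuredLogger can be implemented directly. A minimal sketch (the stdLogger type is illustrative, and assigning it assumes ClusterConfig exposes a Logger field):
// stdLogger routes gocql log messages through a standard library *log.Logger.
type stdLogger struct {
	l *log.Logger
}
func (s stdLogger) write(level, msg string, fields []gocql.LogField) {
	// Fields are appended using their default formatting.
	s.l.Printf("%s gocql: %s %v", level, msg, fields)
}
func (s stdLogger) Error(msg string, fields ...gocql.LogField)   { s.write("ERR", msg, fields) }
func (s stdLogger) Warning(msg string, fields ...gocql.LogField) { s.write("WRN", msg, fields) }
func (s stdLogger) Info(msg string, fields ...gocql.LogField)    { s.write("INF", msg, fields) }
func (s stdLogger) Debug(msg string, fields ...gocql.LogField)   { s.write("DBG", msg, fields) }
Wiring it up (Logger field assumed):
cluster.Logger = stdLogger{l: log.New(os.Stderr, "", log.LstdFlags)}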
type TableMetadata ¶
type TableMetadata struct {
Keyspace string
Name string
KeyValidator string
Comparator string
DefaultValidator string
KeyAliases []string
ColumnAliases []string
ValueAlias string
PartitionKey []*ColumnMetadata
ClusteringColumns []*ColumnMetadata
Columns map[string]*ColumnMetadata
OrderedColumns []string
}
TableMetadata holds the schema metadata for a table (a.k.a. column family).
type Tracer ¶
type Tracer interface {
Trace(traceId []byte)
}
Tracer is the interface implemented by query tracers. Tracers have the ability to obtain a detailed event log of all events that happened during the execution of a query from Cassandra. Gathering this information might be essential for debugging and optimizing queries, but this feature should not be used on production systems with very high load.
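A trivial tracer that only records the trace ID can be sketched as follows; attaching it assumes a Query.Trace method as in earlier driver versions, and retrieving the detailed event log from the system_traces tables is left to the application:
// traceIDLogger logs the server-assigned trace ID of every traced request.
type traceIDLogger struct{}
func (traceIDLogger) Trace(traceId []byte) {
	log.Printf("query traced, trace id: %x", traceId)
}
session.Query("SELECT name FROM example.users").Trace(traceIDLogger{}) // Trace method assumed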
type TupleTypeInfo ¶
type TupleTypeInfo struct {
Elems []TypeInfo
}
TupleTypeInfo represents type information for Cassandra tuple types. It contains information about the element types in the tuple.
func (TupleTypeInfo) Marshal ¶
func (tuple TupleTypeInfo) Marshal(value interface{}) ([]byte, error)
Marshal marshals the value into a byte slice.
func (TupleTypeInfo) Type ¶
func (TupleTypeInfo) Type() Type
func (TupleTypeInfo) Unmarshal ¶
func (tuple TupleTypeInfo) Unmarshal(data []byte, value interface{}) error
Unmarshal unmarshals the byte slice into the value. Currently, unmarshalling is only supported into a list of values; this makes it possible to support tuples without changing the query API. In the future this may be extended to allow unmarshalling into custom tuple types.
func (TupleTypeInfo) Zero ¶
func (t TupleTypeInfo) Zero() interface{}
Zero returns the zero value for the tuple CQL type.
type Type ¶
type Type int
Type is the identifier of a Cassandra internal datatype. Available types include: TypeCustom, TypeAscii, TypeBigInt, TypeBlob, TypeBoolean, TypeCounter, TypeDecimal, TypeDouble, TypeFloat, TypeInt, TypeText, TypeTimestamp, TypeUUID, TypeVarchar, TypeVarint, TypeTimeUUID, TypeInet, TypeDate, TypeTime, TypeSmallInt, TypeTinyInt, TypeDuration, TypeList, TypeMap, TypeSet, TypeUDT, TypeTuple.
const (
	TypeCustom    Type = 0x0000
	TypeAscii     Type = 0x0001
	TypeBigInt    Type = 0x0002
	TypeBlob      Type = 0x0003
	TypeBoolean   Type = 0x0004
	TypeCounter   Type = 0x0005
	TypeDecimal   Type = 0x0006
	TypeDouble    Type = 0x0007
	TypeFloat     Type = 0x0008
	TypeInt       Type = 0x0009
	TypeText      Type = 0x000A
	TypeTimestamp Type = 0x000B
	TypeUUID      Type = 0x000C
	TypeVarchar   Type = 0x000D
	TypeVarint    Type = 0x000E
	TypeTimeUUID  Type = 0x000F
	TypeInet      Type = 0x0010
	TypeDate      Type = 0x0011
	TypeTime      Type = 0x0012
	TypeSmallInt  Type = 0x0013
	TypeTinyInt   Type = 0x0014
	TypeDuration  Type = 0x0015
	TypeList      Type = 0x0020
	TypeMap       Type = 0x0021
	TypeSet       Type = 0x0022
	TypeUDT       Type = 0x0030
	TypeTuple     Type = 0x0031
)
type TypeInfo ¶
type TypeInfo interface {
// Type returns the Type id for the TypeInfo.
Type() Type
// Zero returns the Go zero value. For types that directly map to a Go type like
// list<integer> it should return []int(nil) but for complex types like a
// tuple<integer, boolean> it should be []interface{}{int(0), bool(false)}.
Zero() interface{}
// Marshal should marshal the value for the given TypeInfo into a byte slice
Marshal(value interface{}) ([]byte, error)
// Unmarshal should unmarshal the byte slice into the value for the given
// TypeInfo.
Unmarshal(data []byte, value interface{}) error
}
TypeInfo describes a Cassandra specific data type and handles marshalling and unmarshalling.
type UDTField ¶
UDTField represents a field in a User Defined Type. It contains the field name and its type information.
type UDTMarshaler ¶
type UDTMarshaler interface {
// MarshalUDT will be called for each field in the UDT returned by Cassandra;
// the implementor should marshal the type to return, for example by calling
// Marshal.
MarshalUDT(name string, info TypeInfo) ([]byte, error)
}
UDTMarshaler is an interface which should be implemented by users wishing to handle encoding UDT types to be sent to Cassandra. Note: due to the current implementation, methods defined for this interface must be value receivers, not pointer receivers.
Example ¶
ExampleUDTMarshaler demonstrates how to implement a UDTMarshaler.
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Content before git sha 34fdeebefcbf183ed7f916f931aa0586fdaa1b40
* Copyright (c) 2016, The Gocql authors,
* provided under the BSD-3-Clause License.
* See the NOTICE file distributed with this work for additional information.
*/
package main
import (
"context"
"log"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
// MyUDTMarshaler implements UDTMarshaler.
type MyUDTMarshaler struct {
fieldA string
fieldB int32
}
// MarshalUDT marshals the selected field to bytes.
func (m MyUDTMarshaler) MarshalUDT(name string, info gocql.TypeInfo) ([]byte, error) {
switch name {
case "field_a":
return gocql.Marshal(info, m.fieldA)
case "field_b":
return gocql.Marshal(info, m.fieldB)
default:
// If you want to be strict and return an error on unknown fields, you can do so here instead.
// Returning nil, nil will set the value of unknown fields to null, which might be handy if you want
// to be forward-compatible when a new field is added to the UDT.
return nil, nil
}
}
// ExampleUDTMarshaler demonstrates how to implement a UDTMarshaler.
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create type example.my_udt (field_a text, field_b int);
create table example.my_udt_table(pk int, value frozen<my_udt>, PRIMARY KEY(pk));
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.Keyspace = "example"
cluster.ProtoVersion = 4
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
ctx := context.Background()
value := MyUDTMarshaler{
fieldA: "a value",
fieldB: 42,
}
err = session.Query("INSERT INTO example.my_udt_table (pk, value) VALUES (?, ?)",
1, value).ExecContext(ctx)
if err != nil {
log.Fatal(err)
}
}
type UDTTypeInfo ¶
UDTTypeInfo represents type information for Cassandra User Defined Types (UDT). It contains the keyspace, type name, and field definitions.
func (UDTTypeInfo) Marshal ¶
func (udt UDTTypeInfo) Marshal(value interface{}) ([]byte, error)
Marshal marshals the value into a byte slice.
func (UDTTypeInfo) Type ¶
func (u UDTTypeInfo) Type() Type
func (UDTTypeInfo) Unmarshal ¶
func (udt UDTTypeInfo) Unmarshal(data []byte, value interface{}) error
Unmarshal unmarshals the byte slice into the value.
func (UDTTypeInfo) Zero ¶
func (UDTTypeInfo) Zero() interface{}
Zero returns the zero value for the UDT CQL type.
type UDTUnmarshaler ¶
type UDTUnmarshaler interface {
// UnmarshalUDT will be called for each field in the UDT returned by Cassandra;
// the implementor should unmarshal the data into the value of their choosing,
// for example by calling Unmarshal.
UnmarshalUDT(name string, info TypeInfo, data []byte) error
}
UDTUnmarshaler should be implemented by users wanting to implement custom UDT unmarshaling.
Example ¶
ExampleUDTUnmarshaler demonstrates how to implement a UDTUnmarshaler.
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Content before git sha 34fdeebefcbf183ed7f916f931aa0586fdaa1b40
* Copyright (c) 2016, The Gocql authors,
* provided under the BSD-3-Clause License.
* See the NOTICE file distributed with this work for additional information.
*/
package main
import (
"context"
"fmt"
"log"
gocql "github.com/apache/cassandra-gocql-driver/v2"
)
// MyUDTUnmarshaler implements UDTUnmarshaler.
type MyUDTUnmarshaler struct {
fieldA string
fieldB int32
}
// UnmarshalUDT unmarshals the field identified by name into MyUDTUnmarshaler.
func (m *MyUDTUnmarshaler) UnmarshalUDT(name string, info gocql.TypeInfo, data []byte) error {
switch name {
case "field_a":
return gocql.Unmarshal(info, data, &m.fieldA)
case "field_b":
return gocql.Unmarshal(info, data, &m.fieldB)
default:
// If you want to be strict and return an error on unknown fields, you can do so here instead.
// Returning nil will ignore unknown fields, which might be handy if you want
// to be forward-compatible when a new field is added to the UDT.
return nil
}
}
// ExampleUDTUnmarshaler demonstrates how to implement a UDTUnmarshaler.
func main() {
/* The example assumes the following CQL was used to setup the keyspace:
create keyspace example with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
create type example.my_udt (field_a text, field_b int);
create table example.my_udt_table(pk int, value frozen<my_udt>, PRIMARY KEY(pk));
insert into example.my_udt_table (pk, value) values (1, {field_a: 'a value', field_b: 42});
*/
cluster := gocql.NewCluster("localhost:9042")
cluster.Keyspace = "example"
cluster.ProtoVersion = 4
session, err := cluster.CreateSession()
if err != nil {
log.Fatal(err)
}
defer session.Close()
ctx := context.Background()
// Read the UDT value back
var value MyUDTUnmarshaler
err = session.Query("SELECT value FROM example.my_udt_table WHERE pk = 1").ScanContext(ctx, &value)
if err != nil {
log.Fatal(err)
}
fmt.Println(value.fieldA)
fmt.Println(value.fieldB)
// a value
// 42
}
type UUID ¶
type UUID [16]byte
UUID represents a 16-byte Universally Unique Identifier as defined by RFC 4122. It provides methods for generating, parsing, and manipulating UUIDs, with support for both random (version 4) and time-based (version 1) UUIDs.
func MaxTimeUUID ¶
MaxTimeUUID generates a "fake" time based UUID (version 1) which will be the biggest possible UUID generated for the provided timestamp.
UUIDs generated by this function are not unique and are mostly suitable only in queries to select a time range of a Cassandra TimeUUID column.
func MinTimeUUID ¶
MinTimeUUID generates a "fake" time based UUID (version 1) which will be the smallest possible UUID generated for the provided timestamp.
UUIDs generated by this function are not unique and are mostly suitable only in queries to select a time range of a Cassandra TimeUUID column.
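For example, to select a one-hour range of a timeuuid clustering column (the example.events_by_day table and its partition key value are hypothetical, and both functions are assumed to take a time.Time):
end := time.Now()
start := end.Add(-1 * time.Hour)
iter := session.Query(
	"SELECT id FROM example.events_by_day WHERE day = ? AND id > ? AND id < ?",
	"2024-05-01",             // string -> date
	gocql.MinTimeUUID(start), // lower bound of the timeuuid range
	gocql.MaxTimeUUID(end),   // upper bound of the timeuuid range
).Iter()
// ... scan rows, then check iter.Close() for errors.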
func MustRandomUUID ¶
func MustRandomUUID() UUID
func ParseUUID ¶
ParseUUID parses a 32 digit hexadecimal number (that might contain hyphens) representing a UUID.
func RandomUUID ¶
RandomUUID generates a totally random UUID (version 4) as described in RFC 4122.
func TimeUUID ¶
func TimeUUID() UUID
TimeUUID generates a new time based UUID (version 1) using the current time as the timestamp.
func TimeUUIDWith ¶
TimeUUIDWith generates a new time based UUID (version 1) as described in RFC 4122 with the given parameters. t is the number of 100s of nanoseconds elapsed since 15 Oct 1582 (60 bits). clock is the clock sequence (14 bits). node is a slice used to guarantee the uniqueness of the UUID (up to 6 bytes). Note: calling this function does not increment the static clock sequence.
func UUIDFromBytes ¶
UUIDFromBytes converts a raw byte slice to a UUID.
func UUIDFromTime ¶
UUIDFromTime generates a new time based UUID (version 1) as described in RFC 4122. This UUID contains the MAC address of the node that generated the UUID, the given timestamp and a sequence number.
func (UUID) Bytes ¶
Bytes returns the raw byte slice for this UUID. A UUID is always 128 bits (16 bytes) long.
func (UUID) Clock ¶
Clock extracts the clock sequence of this UUID. It will return zero if the UUID is not a time based UUID (version 1).
func (UUID) MarshalText ¶
func (UUID) Node ¶
Node extracts the MAC address of the node that generated this UUID. It will return nil if the UUID is not a time based UUID (version 1).
func (UUID) String ¶
String returns the UUID in its canonical form, a 32 digit hexadecimal number in the form of xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.
func (UUID) Timestamp ¶
Timestamp extracts the timestamp information from a time based UUID (version 1).
func (*UUID) UnmarshalText ¶
type UnmarshalError ¶
type UnmarshalError string
UnmarshalError represents an error that occurred during unmarshaling.
func (UnmarshalError) Error ¶
func (m UnmarshalError) Error() string
type Unmarshaler ¶
Unmarshaler is the interface implemented by objects that can unmarshal a Cassandra specific description of themselves.
type UserTypeMetadata ¶
type UserTypeMetadata struct {
Keyspace string // The keyspace where the UDT is defined
Name string // The name of the User Defined Type
FieldNames []string // Ordered list of field names in the UDT
FieldTypes []TypeInfo // Corresponding type information for each field
}
UserTypeMetadata represents metadata information about a Cassandra User Defined Type (UDT). This Go struct holds descriptive information about a UDT that exists in the Cassandra schema, including the type name, keyspace, field names, and field types. It is not the UDT itself, but rather a representation of the UDT's schema structure for use within the gocql driver.
A Cassandra User Defined Type is a custom data type that allows you to group related fields together. This metadata struct provides the necessary information to marshal and unmarshal values to and from the corresponding UDT in Cassandra.
For type information used in marshaling/unmarshaling operations, see UDTTypeInfo. Actual UDT values are typically represented as map[string]interface{}, Go structs with cql tags, or types implementing UDTMarshaler/UDTUnmarshaler interfaces.
type VectorType ¶
VectorType represents a Cassandra vector type, which stores an array of values with a fixed dimension. It's commonly used for machine learning applications and similarity searches. The SubType defines the element type and Dimensions specifies the fixed size of the vector.
func (VectorType) Marshal ¶
func (v VectorType) Marshal(value interface{}) ([]byte, error)
Marshal marshals the value into a byte slice.
func (VectorType) String ¶
func (t VectorType) String() string
func (VectorType) Type ¶
func (VectorType) Type() Type
func (VectorType) Unmarshal ¶
func (v VectorType) Unmarshal(data []byte, value interface{}) error
Unmarshal unmarshals the byte slice into the value.
func (VectorType) Zero ¶
func (v VectorType) Zero() interface{}
Zero returns the zero value for the vector CQL type.
Source Files
¶
- address_translators.go
- cluster.go
- compressor.go
- conn.go
- connectionpool.go
- control.go
- cqltypes.go
- crc.go
- dial.go
- doc.go
- errors.go
- events.go
- filters.go
- frame.go
- helpers.go
- host_source.go
- logger.go
- marshal.go
- metadata.go
- policies.go
- prepared_cache.go
- query_executor.go
- ring.go
- session.go
- token.go
- topology.go
- types.go
- uuid.go
- vector.go
- version.go
Directories
¶
- Package gocqlzap provides Zap logger integration for the gocql Cassandra driver.
- Package gocqlzerolog provides Zerolog logger integration for the gocql Cassandra driver.
- Package hostpool provides host selection policies for gocql that integrate with the go-hostpool library for intelligent host pooling and load balancing.
- internal
- Package lz4 provides LZ4 compression for the Cassandra Native Protocol.
- Package snappy provides Snappy compression for the Cassandra Native Protocol.