gopgbase

Published: Mar 23, 2026 License: MIT Imports: 18 Imported by: 0

README

gopgbase


A unified, security-first Go client for PostgreSQL-compatible databases with constructor-injected adaptors, fluent query building, and production-grade observability.

Overview

gopgbase abstracts multiple PostgreSQL-compatible databases behind a single DataStore interface. Inject one or many adaptors at runtime using Constructor Injection — the Client never creates its own connections.

Supported Providers
Provider Adaptor Default Port Default SSL
PostgreSQL (self-hosted) NewPostgresAdaptor 5432 verify-full
AWS RDS for PostgreSQL NewPostgresAdaptor 5432 verify-full
Railway / Render NewPostgresAdaptor 5432 verify-full
Supabase NewSupabaseAdaptor 5432/6543 verify-full
CockroachDB NewCockroachAdaptor 26257 verify-full
Neon NewNeonAdaptor 5432 require
Amazon Redshift NewRedshiftAdaptor 5439 verify-full
TimescaleDB NewTimescaleAdaptor 5432 verify-full

Installation

go get github.com/goozt/gopgbase

Quick Start

Single Adaptor (PostgreSQL)
package main

import (
    "context"
    "database/sql"
    "log"

    "github.com/goozt/gopgbase"
    "github.com/goozt/gopgbase/adaptors"
)

func main() {
    ctx := context.Background()

    // Create an adaptor (manages its own connection pool).
    ds, err := adaptors.NewPostgresAdaptor(adaptors.PostgresConfig{
        BaseConfig: adaptors.BaseConfig{
            Host: "localhost", Port: 5432,
            User: "postgres", Password: "secret", DBName: "mydb",
            Insecure: true, // Local dev only! Disables TLS.
        },
    })
    if err != nil {
        log.Fatal(err)
    }
    defer ds.Close()

    // Inject into Client — all DB access goes through DataStore.
    client := gopgbase.NewClient(ds)

    // Transaction with auto commit/rollback.
    err = client.Transaction(ctx, func(tx *sql.Tx) error {
        _, err := tx.ExecContext(ctx, "INSERT INTO users (name) VALUES ($1)", "Alice")
        return err
    })
    if err != nil {
        log.Fatal(err)
    }

    // Convenience helpers.
    count, _ := client.Count(ctx, "users", "active = $1", true)
    log.Printf("Active users: %d", count)
}
URL-Based Configuration (Railway, Render)
ds, err := adaptors.NewPostgresAdaptor(adaptors.PostgresConfig{
    ConnectionURL: os.Getenv("DATABASE_URL"),
})
Supabase Adaptor
ds, err := adaptors.NewSupabaseAdaptor(adaptors.SupabaseConfig{
    ConnectionURL: os.Getenv("SUPABASE_DB_URL"),
})
client := gopgbase.NewClient(ds)

// Use Supabase companion library for RLS, auth, storage.
sbLib, _ := supabase.NewSupabaseLibrary(client, supabase.Config{
    ProjectURL:     os.Getenv("SUPABASE_URL"),
    APIKey:         os.Getenv("SUPABASE_ANON_KEY"),
    ServiceRoleKey: os.Getenv("SUPABASE_SERVICE_ROLE_KEY"),
})
sbLib.EnableRLS(ctx, "profiles", "policy_name", "USING (auth.uid() = user_id)")
Multiple Adaptors (Supabase + Redshift)
// Each adaptor has its own connection pool.
sbDS, _ := adaptors.NewSupabaseAdaptor(supabaseCfg)
rsDS, _ := adaptors.NewRedshiftAdaptor(redshiftCfg)

// Separate clients for different workloads.
userClient := gopgbase.NewClient(sbDS)     // OLTP: user data
analyticsClient := gopgbase.NewClient(rsDS) // OLAP: analytics

count, _ := userClient.Count(ctx, "users", "")
exists, _ := analyticsClient.Exists(ctx, "SELECT 1 FROM daily_metrics WHERE date = $1", today)

Architecture

┌─────────────┐     ┌──────────────┐
│   Client    │────▶│  DataStore   │ (interface)
└─────────────┘     └──────────────┘
                           │
        ┌──────────────────┼──────────────────┐
        ▼                  ▼                  ▼
  ┌──────────┐      ┌──────────┐      ┌──────────┐
  │ Postgres │      │ Supabase │      │  Custom  │
  │ Adaptor  │      │ Adaptor  │      │  Mock    │
  └──────────┘      └──────────┘      └──────────┘
       │                  │
       ▼                  ▼
    *sql.DB            *sql.DB
DataStore Interface
type DataStore interface {
    QueryRowContext(ctx context.Context, query string, args ...any) *sql.Row
    QueryContext(ctx context.Context, query string, args ...any) (*sql.Rows, error)
    ExecContext(ctx context.Context, query string, args ...any) (sql.Result, error)
    BeginTx(ctx context.Context, opts *sql.TxOptions) (*sql.Tx, error)
    PingContext(ctx context.Context) error
    Close() error
}

All database access flows through this interface. Users can implement their own DataStore for mocking, custom pools, or alternative drivers — no internal types required.

Concrete adaptors also expose Unwrap() *sql.DB for interop with tools like goose.

Client Features

QueryBuilder (Fluent DSL)
rows, err := client.QueryBuilder().
    Select("users").
    Columns("id", "name", "email").
    Join("INNER JOIN orders ON users.id = orders.user_id").
    Where("age > ?", 18).          // ? auto-converts to $N
    Where("active = $2", true).    // $N passed as-is (don't mix!)
    GroupBy("name").
    OrderBy("name ASC").
    Limit(10).Offset(20).
    Query(ctx)
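The ?-to-$N rewrite the builder performs can be pictured with a naive sketch. This is illustrative only: it ignores string literals, and it assumes no $N placeholders are present (the real builder rejects queries that mix the two styles):

```go
package main

import (
	"fmt"
	"strings"
)

// toDollarPlaceholders converts MySQL-style ? placeholders to
// PostgreSQL $N placeholders, numbering them left to right.
func toDollarPlaceholders(q string) string {
	var b strings.Builder
	n := 0
	for _, r := range q {
		if r == '?' {
			n++
			fmt.Fprintf(&b, "$%d", n)
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(toDollarPlaceholders("age > ? AND city = ?")) // → age > $1 AND city = $2
}
```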
Transactions
// Auto commit/rollback with panic recovery.
client.Transaction(ctx, func(tx *sql.Tx) error { ... })

// With isolation level.
client.TransactionWithIsolation(ctx, sql.LevelSerializable, func(tx *sql.Tx) error { ... })

// Read-only transaction.
client.ReadOnlyTransaction(ctx, func(tx *sql.Tx) error { ... })

// Savepoints (nested transactions).
client.Transaction(ctx, func(tx *sql.Tx) error {
    client.Savepoint(ctx, tx, "sp1", func(tx *sql.Tx) error { ... })
    return nil
})

// Batch operations in single transaction.
client.BatchTransaction(ctx, op1, op2, op3)
StructScan
type User struct {
    ID   int    `db:"id"`
    Name string `db:"name"`
}

rows, _ := client.DataStore().QueryContext(ctx, "SELECT id, name FROM users")
defer rows.Close()
for rows.Next() {
    var u User
    client.StructScan(ctx, rows, &u)
}
Bulk Operations
// BulkInsert — parameterized multi-row INSERT.
n, err := client.BulkInsert(ctx, "users", []string{"name", "age"},
    [][]any{{"Alice", 30}, {"Bob", 25}, {"Charlie", 35}})

// BulkCopy — pgx COPY protocol (falls back to BulkInsert for non-pgx).
n, err := client.BulkCopy(ctx, "metrics", []string{"time", "value"}, data)
ForEachRow (Memory-Efficient)
err := client.ForEachRow(ctx, "SELECT id, name FROM users", nil,
    func(row map[string]any) error {
        fmt.Println(row["name"])
        return nil
    })

Security: The Insecure Flag

Insecure Behavior Use Case
false (default) TLS enabled, certificates verified (verify-full or provider equivalent) Production
true TLS disabled (sslmode=disable) Local development only

Insecure only affects TLS/SSL settings. It never bypasses SQL parameterization or other safety measures.

Never use Insecure: true in production.
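Conceptually, the flag selects between two sslmode values in the generated DSN. A sketch of that mapping (illustrative; per the provider table above, some adaptors use a different secure default such as Neon's require):

```go
package main

import "fmt"

// sslMode sketches what the Insecure flag controls: secure-by-default
// verify-full, or sslmode=disable for local development only.
func sslMode(insecure bool) string {
	if insecure {
		return "disable"
	}
	return "verify-full"
}

func main() {
	fmt.Printf("postgres://user@localhost:5432/mydb?sslmode=%s\n", sslMode(false))
}
```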

Companion Libraries

Optional, provider-specific helper libraries in libs/:

Package Provider Key Features
libs/common All Pagination, SoftDelete, AuditTrail, SchemaDiff, Migrations
libs/postgres PostgreSQL Extensions, VacuumAnalyze, IndexAdvisor, ReplicationLag, LockWatcher
libs/timescale TimescaleDB Hypertables, TimeBucket, ContinuousAggs, Compression, LTTB
libs/supabase Supabase RLS, Auth/JWT, EdgeFunctions, Storage, UserManager
libs/redshift Redshift Vacuum, MaterializedViews, WLM, Spectrum
libs/cockroachdb CockroachDB MultiRegion, GlobalTables, DistSQL, Backup, CDC
libs/neon Neon pgvector, VectorIndex, Branching, ConnectionPooler
Migrations (via Goose)
//go:embed migrations/*.sql
var MigrationsFS embed.FS

migrator := common.NewMigrateLibrary(client)
migrator.Up(ctx, MigrationsFS)        // Run pending migrations
migrator.Down(ctx, MigrationsFS, 1)   // Rollback 1
migrator.Version(ctx)                 // Current version
migrator.Status(ctx, MigrationsFS)    // Print status

CLI users can use goose directly:

go install github.com/pressly/goose/v3/cmd/goose@latest
goose -dir migrations postgres "$DATABASE_URL" up

Observability

obs := client.EnableObservability(ctx)

// Prometheus metrics auto-registered: gopgbase_queries_total, gopgbase_query_duration_seconds, etc.
http.Handle("/metrics", promhttp.Handler())
http.HandleFunc("/healthz", client.HealthCheckHandler)

// Import pre-built Grafana dashboard.
client.ImportGrafanaDashboard("http://grafana:3000", apiKey)

// Connection pool tuning.
client.TunePool(ctx, runtime.NumCPU(), 1000)

Development

# Install Task (one-time)
go install github.com/go-task/task/v3/cmd/task@latest

# Setup dev tools
task init

# Daily workflow
task test          # Run tests
task lint          # Pre-commit checks
task testcover     # Coverage report
task bench         # Benchmarks
task gen           # Regenerate mocks
task ci            # Full CI pipeline

Project Structure

gopgbase/
├── datastore.go          # DataStore interface
├── client.go             # Client, QueryBuilder, StructScan, BulkCopy
├── tx.go                 # Transaction helpers
├── observability.go      # Prometheus, OTEL, Grafana
├── adaptors/
│   ├── config.go         # BaseConfig, pgxDataStore
│   ├── postgres.go       # PostgreSQL, RDS, Railway, Render
│   ├── supabase.go
│   ├── cockroach.go
│   ├── neon.go
│   ├── redshift.go
│   └── timescale.go
├── libs/
│   ├── common/           # Pagination, AuditTrail, Migrations, ExplainAnalyze
│   ├── postgres/         # Extensions, IndexAdvisor, LockWatcher
│   ├── timescale/        # Hypertables, ContinuousAggs, LTTB
│   ├── supabase/         # RLS, Auth, EdgeFunctions, Storage
│   ├── redshift/         # Vacuum, Spectrum, WLM
│   ├── cockroachdb/      # MultiRegion, CDC, Backup
│   └── neon/             # pgvector, Branching
├── examples/
├── testdata/
├── .github/workflows/    # CI + Release
├── Taskfile.yml
└── Makefile

License

See LICENSE for details.

Documentation

Overview

Package gopgbase provides a unified PostgreSQL client with constructor injection.

gopgbase abstracts multiple PostgreSQL-compatible databases (PostgreSQL, AWS RDS, Supabase, CockroachDB, Neon, Redshift, TimescaleDB, Railway, Render, and others) behind the DataStore interface.

All database access flows through the DataStore interface — never directly through *sql.DB or *sql.Tx. Users inject a DataStore into NewClient and interact exclusively via Client methods.

Custom DataStore implementations (for mocking, alternative drivers, or custom connection pools) are encouraged and require no internal types.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Client

type Client struct {
	// contains filtered or unexported fields
}

Client is the main entry point for all database operations.

It wraps a DataStore (injected via NewClient) and provides helpers for transactions, queries, bulk operations, struct scanning, and more.

Client never constructs its own database connections — all access flows through the injected DataStore. Users may inject different adaptors, share a single adaptor across multiple Clients, or provide a custom DataStore implementation.

All Client methods are safe for concurrent use.

func NewClient

func NewClient(ds DataStore) *Client

NewClient creates a new Client backed by the given DataStore.

This is the constructor injection point: the caller is responsible for creating and configuring the DataStore (e.g., via an adaptor constructor).

Example:

ds, err := adaptors.NewPostgresAdaptor(cfg)
if err != nil { log.Fatal(err) }
client := gopgbase.NewClient(ds)

func (*Client) BatchTransaction

func (c *Client) BatchTransaction(ctx context.Context, operations ...func(tx *sql.Tx) error) error

BatchTransaction executes multiple operations sequentially within a single transaction.

If any operation returns an error, the entire transaction is rolled back.

func (*Client) BulkCopy

func (c *Client) BulkCopy(ctx context.Context, table string, columns []string, data [][]any) (int64, error)

BulkCopy performs a high-performance COPY operation for bulk data loading.

This method requires a pgx-backed DataStore (one that implements Unwrap). For non-pgx DataStores, it falls back to BulkInsert.

Example:

n, err := client.BulkCopy(ctx, "metrics", []string{"time", "value"},
    [][]any{{time.Now(), 42.0}, {time.Now(), 43.0}})

func (*Client) BulkInsert

func (c *Client) BulkInsert(ctx context.Context, table string, columns []string, values [][]any) (int64, error)

BulkInsert inserts multiple rows into the given table using parameterized queries.

columns lists the column names, and values is a slice of rows where each row is a slice of column values. Returns the number of rows affected.

The insert is performed as a single statement with multiple value groups. For very large inserts (>65535 parameters), consider using BulkCopy instead.

Example:

n, err := client.BulkInsert(ctx, "users", []string{"name", "age"},
    [][]any{{"Alice", 30}, {"Bob", 25}})
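The single-statement, multiple-value-group shape described above can be sketched as plain SQL assembly. This is not the library's implementation, just an illustration; the table and column names are assumed to be trusted identifiers, exactly as the Count documentation requires:

```go
package main

import (
	"fmt"
	"strings"
)

// buildBulkInsert renders one parameterized INSERT covering all rows,
// numbering placeholders $1..$N across the value groups.
func buildBulkInsert(table string, columns []string, rows [][]any) (string, []any) {
	var groups []string
	var args []any
	n := 0
	for _, row := range rows {
		ph := make([]string, len(row))
		for i, v := range row {
			n++
			ph[i] = fmt.Sprintf("$%d", n)
			args = append(args, v)
		}
		groups = append(groups, "("+strings.Join(ph, ", ")+")")
	}
	stmt := fmt.Sprintf("INSERT INTO %s (%s) VALUES %s",
		table, strings.Join(columns, ", "), strings.Join(groups, ", "))
	return stmt, args
}

func main() {
	stmt, args := buildBulkInsert("users", []string{"name", "age"},
		[][]any{{"Alice", 30}, {"Bob", 25}})
	fmt.Println(stmt) // → INSERT INTO users (name, age) VALUES ($1, $2), ($3, $4)
	fmt.Println(len(args))
}
```

The $N-per-value numbering also makes the 65535-parameter ceiling concrete: rows × columns must stay under it, which is why very large loads should move to BulkCopy.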

func (*Client) Count

func (c *Client) Count(ctx context.Context, table string, condition string, args ...any) (int64, error)

Count returns the number of rows matching the condition in the given table.

The table parameter must be a trusted identifier (e.g., a constant or validated name) — it is quoted as an identifier but not parameterized. The condition is placed in a WHERE clause with args passed as placeholders. An empty condition counts all rows.

Example:

n, err := client.Count(ctx, "users", "active = $1", true)
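"Quoted as an identifier but not parameterized" refers to the standard PostgreSQL technique of wrapping a name in double quotes and doubling any embedded quotes. A sketch of the general technique (hypothetical helper, not the library's code):

```go
package main

import (
	"fmt"
	"strings"
)

// quoteIdent quotes a PostgreSQL identifier: wrap in double quotes and
// double any embedded double quotes. This is the usual defense when a
// name must be spliced into SQL text rather than bound as a parameter.
func quoteIdent(name string) string {
	return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}

func main() {
	fmt.Printf("SELECT COUNT(*) FROM %s WHERE active = $1\n", quoteIdent("users"))
}
```

Even with quoting, the docs' advice stands: pass only constant or validated table names.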

func (*Client) DataStore

func (c *Client) DataStore() DataStore

DataStore returns the underlying DataStore for advanced or escape-hatch usage.

func (*Client) EnableObservability

func (c *Client) EnableObservability(_ context.Context) *ObservabilityLibrary

EnableObservability initializes and returns the observability subsystem.

After calling this, Prometheus metrics are registered and available at the standard /metrics endpoint (via promhttp.Handler).

func (*Client) EnablePreparedStatements

func (c *Client) EnablePreparedStatements(_ context.Context) error

EnablePreparedStatements enables prepared statement caching on the underlying connection pool. This requires an Unwrapper-capable DataStore.

func (*Client) Exists

func (c *Client) Exists(ctx context.Context, query string, args ...any) (bool, error)

Exists runs the provided SELECT query and returns true if at least one row is returned, false otherwise.

The query is wrapped in SELECT EXISTS(...). Always use placeholders for values in the query.

Example:

ok, err := client.Exists(ctx, "SELECT 1 FROM users WHERE email = $1", email)

func (*Client) ForEachRow

func (c *Client) ForEachRow(ctx context.Context, query string, args []any, fn func(row map[string]any) error) error

ForEachRow executes query and calls fn for each row. Rows are not buffered in memory — fn is called as each row is scanned.

If fn returns an error, iteration stops and that error is returned. The rows are closed automatically.

Example:

err := client.ForEachRow(ctx, "SELECT id, name FROM users", nil,
    func(row map[string]any) error {
        fmt.Println(row["name"])
        return nil
    })
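The map[string]any each callback receives can be pictured as a column-name-keyed view of one scanned row. A sketch of that shaping step, with cols and vals standing in for what rows.Columns() and rows.Scan would produce (illustrative, not the library's code):

```go
package main

import "fmt"

// rowToMap keys one row's scanned values by column name.
func rowToMap(cols []string, vals []any) map[string]any {
	m := make(map[string]any, len(cols))
	for i, c := range cols {
		m[c] = vals[i]
	}
	return m
}

func main() {
	row := rowToMap([]string{"id", "name"}, []any{1, "Alice"})
	fmt.Println(row["name"]) // → Alice
}
```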

func (*Client) GrafanaDashboardJSON

func (c *Client) GrafanaDashboardJSON() string

GrafanaDashboardJSON returns a pre-built Grafana dashboard JSON for gopgbase metrics. Users can import this into their Grafana instance.

func (*Client) HealthCheckHandler

func (c *Client) HealthCheckHandler(w http.ResponseWriter, r *http.Request)

HealthCheckHandler performs a database health check and writes the result as JSON. Its signature matches http.HandlerFunc, so it can be registered directly (e.g., with http.HandleFunc).

func (*Client) ImportGrafanaDashboard

func (c *Client) ImportGrafanaDashboard(grafanaURL, apiKey string) error

ImportGrafanaDashboard pushes the gopgbase dashboard to a Grafana instance.

grafanaURL is the base URL (e.g., "http://grafana.local:3000"). apiKey is a Grafana API key with dashboard creation permissions.

func (*Client) QueryBuilder

func (c *Client) QueryBuilder() *QueryBuilderDSL

QueryBuilder provides a fluent interface for constructing SQL queries.

It supports dual placeholder modes:

  • MySQL-style ? placeholders: auto-converted to PostgreSQL $N before execution.
  • Native PostgreSQL $N placeholders: passed through as-is.
  • Mixing ? and $N in the same query is an error.

Example:

results, err := client.QueryBuilder().
    Select("users").
    Columns("id", "name", "email").
    Where("age > ?", 18).
    OrderBy("name ASC").
    Limit(10).
    Query(ctx)

func (*Client) ReadOnlyTransaction

func (c *Client) ReadOnlyTransaction(ctx context.Context, fn func(tx *sql.Tx) error) error

ReadOnlyTransaction executes fn within a read-only transaction.

The database will reject any write operations (INSERT, UPDATE, DELETE) inside fn, which is useful for ensuring SELECT-only logic.

func (*Client) Savepoint

func (c *Client) Savepoint(ctx context.Context, tx *sql.Tx, name string, fn func(tx *sql.Tx) error) (err error)

Savepoint executes fn within a named savepoint inside an existing transaction.

If fn returns an error, the savepoint is rolled back (but the outer transaction remains active). If fn succeeds, the savepoint is released.

func (*Client) StructScan

func (c *Client) StructScan(_ context.Context, rows *sql.Rows, dest any) error

StructScan scans the current row of rows into the struct pointed to by dest.

It maps column names to struct fields using the "db" tag, falling back to the lowercase field name. Supports JSONB (scanned as json.RawMessage or any json.Unmarshaler) and PostgreSQL arrays (scanned as slices).

Example:

type User struct {
    ID   int    `db:"id"`
    Name string `db:"name"`
}
rows, _ := client.DataStore().QueryContext(ctx, "SELECT id, name FROM users")
defer rows.Close()
for rows.Next() {
    var u User
    if err := client.StructScan(ctx, rows, &u); err != nil { ... }
}
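The documented mapping rule — match the column to the field's db tag, falling back to the lowercased field name — can be sketched with reflection. A map stands in for the scanned row so the sketch runs without a database (the real StructScan operates on *sql.Rows):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// scanInto copies row values into dest's exported fields, matching each
// column to the field's `db` tag or, if absent, the lowercased field name.
func scanInto(row map[string]any, dest any) {
	v := reflect.ValueOf(dest).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		col := f.Tag.Get("db")
		if col == "" {
			col = strings.ToLower(f.Name)
		}
		if val, ok := row[col]; ok {
			v.Field(i).Set(reflect.ValueOf(val))
		}
	}
}

type User struct {
	ID   int    `db:"id"`
	Name string `db:"name"`
}

func main() {
	var u User
	scanInto(map[string]any{"id": 7, "name": "Alice"}, &u)
	fmt.Println(u.ID, u.Name) // → 7 Alice
}
```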

func (*Client) Transaction

func (c *Client) Transaction(ctx context.Context, fn func(tx *sql.Tx) error) error

Transaction executes fn within a database transaction.

Behavior:

  • Starts a read/write transaction with default isolation via BeginTx.
  • If fn returns nil, the transaction is committed.
  • If fn returns an error, the transaction is rolled back and the error is returned.
  • If fn panics, the transaction is rolled back and the panic is re-raised.
  • Respects ctx cancellation and deadlines throughout.

Transaction is safe for concurrent use — each call gets its own *sql.Tx.
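The commit/rollback/re-panic behavior above can be demonstrated without a database. In this sketch fakeTx stands in for *sql.Tx so the control flow is testable; the real Transaction wraps BeginTx on the injected DataStore:

```go
package main

import "fmt"

// fakeTx records which terminal operation was taken.
type fakeTx struct{ outcome string }

func (t *fakeTx) Commit() error   { t.outcome = "commit"; return nil }
func (t *fakeTx) Rollback() error { t.outcome = "rollback"; return nil }

// runInTx mirrors the documented behavior: commit on nil, roll back on
// error, roll back then re-raise on panic.
func runInTx(tx *fakeTx, fn func(*fakeTx) error) (err error) {
	defer func() {
		if p := recover(); p != nil {
			tx.Rollback()
			panic(p) // re-raise after rollback
		}
	}()
	if err = fn(tx); err != nil {
		tx.Rollback()
		return err
	}
	return tx.Commit()
}

func main() {
	tx := &fakeTx{}
	_ = runInTx(tx, func(*fakeTx) error { return fmt.Errorf("oops") })
	fmt.Println(tx.outcome) // → rollback
}
```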

func (*Client) TransactionWithIsolation

func (c *Client) TransactionWithIsolation(ctx context.Context, level sql.IsolationLevel, fn func(tx *sql.Tx) error) error

TransactionWithIsolation executes fn within a transaction at the given isolation level.

See sql.IsolationLevel constants (e.g., sql.LevelSerializable).

func (*Client) TunePool

func (c *Client) TunePool(_ context.Context, cpuCores, qps int) error

TunePool adjusts the connection pool parameters based on CPU cores and expected QPS.

It applies a heuristic: maxOpen = cpuCores * 2 + 1 (capped by qps/100), maxIdle = cpuCores, idleTimeout = 5 minutes. These are starting-point defaults; users should monitor and adjust as needed.
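Reading "capped by qps/100" as an upper bound on maxOpen, the heuristic works out as below (a sketch of the documented formula, not the library's code; the exact capping rule is an assumption):

```go
package main

import "fmt"

// poolParams applies the documented TunePool heuristic:
// maxOpen = cpuCores*2 + 1, capped by qps/100; maxIdle = cpuCores.
func poolParams(cpuCores, qps int) (maxOpen, maxIdle int) {
	maxOpen = cpuCores*2 + 1
	if limit := qps / 100; limit > 0 && maxOpen > limit {
		maxOpen = limit
	}
	return maxOpen, cpuCores
}

func main() {
	open, idle := poolParams(8, 1000) // 8 cores, ~1000 QPS
	fmt.Println(open, idle)          // → 10 8 (17 capped to 1000/100)
}
```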

func (*Client) WithReadReplica

func (c *Client) WithReadReplica(_ context.Context, replica DataStore) *Client

WithReadReplica returns a new Client that uses the given DataStore as a read replica. The returned client shares the same write DataStore but directs read operations to the replica.

type DataStore

type DataStore interface {
	// QueryRowContext executes a query expected to return at most one row.
	QueryRowContext(ctx context.Context, query string, args ...any) *sql.Row

	// QueryContext executes a query that returns rows, typically a SELECT.
	QueryContext(ctx context.Context, query string, args ...any) (*sql.Rows, error)

	// ExecContext executes a query without returning rows (INSERT, UPDATE, DELETE, etc.).
	ExecContext(ctx context.Context, query string, args ...any) (sql.Result, error)

	// BeginTx starts a transaction with the given options.
	BeginTx(ctx context.Context, opts *sql.TxOptions) (*sql.Tx, error)

	// PingContext verifies the connection to the database is alive.
	PingContext(ctx context.Context) error

	// Close closes the underlying connection pool and releases resources.
	Close() error
}

DataStore defines the minimal database operations required by Client.

It mirrors key parts of *sql.DB but is abstract enough for custom implementations (mocks, alternative pools, or wrapped drivers).

All concrete adaptors in the adaptors package implement this interface. Users may also provide their own implementation for testing or custom setups.

Concrete adaptor types additionally expose Unwrap() *sql.DB for interop with tools (like goose) that require a raw *sql.DB. Unwrap is intentionally NOT part of this interface to keep it driver-agnostic.

type ExplainPlan

type ExplainPlan struct {
	JSONPlan map[string]any `json:"json_plan,omitempty"`
	Query    string         `json:"query"`
	Plan     string         `json:"plan"`
}

ExplainPlan holds the output of EXPLAIN ANALYZE.

type HealthStatus

type HealthStatus struct {
	Metadata map[string]any `json:"metadata,omitempty"`
	Error    string         `json:"error,omitempty"`
	Latency  time.Duration  `json:"latency_ns"`
	Healthy  bool           `json:"healthy"`
}

HealthStatus represents the result of a database health check.

type ObservabilityLibrary

type ObservabilityLibrary struct {
	// contains filtered or unexported fields
}

ObservabilityLibrary provides database monitoring, metrics, and tracing.

func (*ObservabilityLibrary) ConnectionPoolMetrics

func (o *ObservabilityLibrary) ConnectionPoolMetrics() map[string]int

ConnectionPoolMetrics returns connection pool statistics.

func (*ObservabilityLibrary) Enable

func (o *ObservabilityLibrary) Enable(_ context.Context)

Enable activates observability collection.

func (*ObservabilityLibrary) ExplainAnalyze

func (o *ObservabilityLibrary) ExplainAnalyze(ctx context.Context, query string, args ...any) (*ExplainPlan, error)

ExplainAnalyze runs EXPLAIN ANALYZE on the given query and returns the plan.

func (*ObservabilityLibrary) PrometheusExporter

func (o *ObservabilityLibrary) PrometheusExporter(_ context.Context, port int) error

PrometheusExporter starts a Prometheus metrics HTTP server on the given port.

func (*ObservabilityLibrary) QueryMetrics

func (o *ObservabilityLibrary) QueryMetrics() map[string]float64

QueryMetrics returns current query performance metrics as a map. Keys include "qps", "p95_ms", "p99_ms", "error_rate".

func (*ObservabilityLibrary) RecordQuery

func (o *ObservabilityLibrary) RecordQuery(operation, query string, duration time.Duration, err error)

RecordQuery records a query execution for observability purposes.

func (*ObservabilityLibrary) SlowQueryDetector

func (o *ObservabilityLibrary) SlowQueryDetector(_ context.Context, thresholdMS int) []SlowQuery

SlowQueryDetector returns queries that exceeded the given threshold.

func (*ObservabilityLibrary) TraceQueries

func (o *ObservabilityLibrary) TraceQueries(_ context.Context, sampleRate float64)

TraceQueries enables OpenTelemetry tracing for sampled queries.

func (*ObservabilityLibrary) TraceQuery

func (o *ObservabilityLibrary) TraceQuery(ctx context.Context, operation, query string) (context.Context, trace.Span)

TraceQuery creates an OTEL span for a query if sampling allows it.

func (*ObservabilityLibrary) UpdatePoolMetrics

func (o *ObservabilityLibrary) UpdatePoolMetrics()

UpdatePoolMetrics refreshes connection pool gauge metrics.

type QueryBuilderDSL

type QueryBuilderDSL struct {
	// contains filtered or unexported fields
}

QueryBuilderDSL is a fluent SQL query builder.

func (*QueryBuilderDSL) Build

func (qb *QueryBuilderDSL) Build() (string, []any, error)

Build constructs the final SQL query string and arguments. Placeholder conversion (? → $N) is applied here.

func (*QueryBuilderDSL) Columns

func (qb *QueryBuilderDSL) Columns(cols ...string) *QueryBuilderDSL

Columns sets the columns to select. If not called, "*" is used.

func (*QueryBuilderDSL) Exec

func (qb *QueryBuilderDSL) Exec(ctx context.Context) (sql.Result, error)

Exec executes the built query; use it for statements that do not return rows.

func (*QueryBuilderDSL) GroupBy

func (qb *QueryBuilderDSL) GroupBy(group string) *QueryBuilderDSL

GroupBy sets the GROUP BY clause.

func (*QueryBuilderDSL) Having

func (qb *QueryBuilderDSL) Having(having string, args ...any) *QueryBuilderDSL

Having sets the HAVING clause (used with GroupBy).

func (*QueryBuilderDSL) Join

func (qb *QueryBuilderDSL) Join(join string) *QueryBuilderDSL

Join adds a JOIN clause (e.g., "INNER JOIN orders ON users.id = orders.user_id").

func (*QueryBuilderDSL) Limit

func (qb *QueryBuilderDSL) Limit(n int) *QueryBuilderDSL

Limit sets the maximum number of rows to return.

func (*QueryBuilderDSL) Offset

func (qb *QueryBuilderDSL) Offset(n int) *QueryBuilderDSL

Offset sets the number of rows to skip.

func (*QueryBuilderDSL) OrderBy

func (qb *QueryBuilderDSL) OrderBy(order string) *QueryBuilderDSL

OrderBy sets the ORDER BY clause.

func (*QueryBuilderDSL) Query

func (qb *QueryBuilderDSL) Query(ctx context.Context) (*sql.Rows, error)

Query executes the built SELECT query and returns the rows.

func (*QueryBuilderDSL) Select

func (qb *QueryBuilderDSL) Select(table string) *QueryBuilderDSL

Select sets the table for a SELECT query.

func (*QueryBuilderDSL) Where

func (qb *QueryBuilderDSL) Where(condition string, args ...any) *QueryBuilderDSL

Where adds a WHERE condition. Multiple calls are ANDed together. Use ? or $N for placeholders.

type SlowQuery

type SlowQuery struct {
	Time     time.Time     `json:"time"`
	Query    string        `json:"query"`
	Duration time.Duration `json:"duration_ns"`
}

SlowQuery represents a query that exceeded the configured latency threshold.

type Unwrapper

type Unwrapper interface {
	Unwrap() *sql.DB
}

Unwrapper is an optional interface that concrete DataStore implementations may satisfy to expose the underlying *sql.DB. This is useful for interop with libraries (like goose) that require a raw *sql.DB.

Example:

if u, ok := ds.(Unwrapper); ok {
    rawDB := u.Unwrap()
}

Directories

Path Synopsis
adaptors
Package adaptors provides DataStore implementations for various PostgreSQL-compatible databases and services.
libs
cockroachdb
Package cockroachdb provides CockroachDB-specific operations including multi-region management, distributed SQL, backup/restore, and CDC.
common
Package common provides shared utility functions that work across all PostgreSQL-compatible adaptors in gopgbase.
neon
Package neon provides Neon serverless PostgreSQL-specific operations including database branching, compute scaling, connection pooler configuration, and pgvector support.
postgres
Package postgres provides PostgreSQL-specific convenience operations including extension management, maintenance, and monitoring.
redshift
Package redshift provides Amazon Redshift-specific operations including vacuum/analyze, materialized views, WLM queue management, concurrency scaling, and Spectrum external tables.
supabase
Package supabase provides Supabase-specific convenience operations including Row Level Security, auth/JWT helpers, Edge Functions, and storage operations.
timescale
Package timescale provides TimescaleDB-specific operations including hypertable management, continuous aggregates, compression policies, retention policies, and hyperfunctions.
