Documentation ¶
Overview ¶
Package metrics provides production-grade performance monitoring and observability for GoSQLX operations. It enables real-time tracking of tokenization, parsing, and object pool performance with race-free atomic operations.
This package is designed for enterprise production environments requiring detailed performance insights, SLA monitoring, and operational observability. All operations are thread-safe and validated to be race-free under high concurrency.
Core Features ¶
- Tokenization and parsing operation counts and timings
- Error rates and categorization by error type
- Object pool efficiency tracking (AST, tokenizer, statement, expression pools)
- Query size distribution (min, max, average bytes processed)
- Operations per second throughput metrics
- Pool hit rates and memory efficiency statistics
- Zero-overhead when disabled (immediate return from all Record* functions)
Performance Characteristics ¶
GoSQLX v1.6.0 metrics system:
- Thread-Safe: All operations use atomic counters and RWMutex for safe concurrency
- Race-Free: Validated with 20,000+ concurrent operations (go test -race)
- Low Overhead: < 100ns per metric recording operation when enabled
- Lock-Free: Atomic operations for all counters (no contention)
- Zero Cost: When disabled, all Record* functions return immediately
Basic Usage ¶
Enable metrics collection:
import "github.com/ajitpratap0/GoSQLX/pkg/metrics"
// Enable metrics tracking
metrics.Enable()
defer metrics.Disable()
// Perform operations (metrics automatically collected)
// ...
// Retrieve statistics
stats := metrics.GetStats()
fmt.Printf("Operations: %d\n", stats.TokenizeOperations)
fmt.Printf("Error rate: %.2f%%\n", stats.TokenizeErrorRate*100)
fmt.Printf("Avg duration: %v\n", stats.AverageTokenizeDuration)
Tokenization Metrics ¶
Track tokenizer performance:
import "time"

start := time.Now()
tokens, err := tokenizer.Tokenize(sqlBytes)
duration := time.Since(start)

// Record tokenization metrics
metrics.RecordTokenization(duration, len(sqlBytes), err)
Automatic integration with tokenizer:
// The tokenizer package automatically records metrics when enabled
tkz := tokenizer.GetTokenizer()
defer tokenizer.PutTokenizer(tkz)

tokens, err := tkz.Tokenize(sqlBytes)
// Metrics recorded automatically if metrics.Enable() was called
Parser Metrics ¶
Track parser performance:
start := time.Now()
ast, err := parser.Parse(tokens)
duration := time.Since(start)

// Record parser metrics (guard against a nil AST on error)
statementCount := 0
if err == nil {
statementCount = len(ast.Statements)
}
metrics.RecordParse(duration, statementCount, err)
Object Pool Metrics ¶
Track pool efficiency for all pool types:
// Tokenizer pool
tkz := tokenizer.GetTokenizer()
metrics.RecordPoolGet(true) // true = from pool, false = new allocation
defer func() {
tokenizer.PutTokenizer(tkz)
metrics.RecordPoolPut()
}()
// AST pool (use a name that does not shadow the ast package)
tree := ast.NewAST()
metrics.RecordASTPoolGet()
defer func() {
ast.ReleaseAST(tree)
metrics.RecordASTPoolPut()
}()
// Statement pool (SELECT, INSERT, UPDATE, DELETE)
stmt := ast.NewSelectStatement()
metrics.RecordStatementPoolGet()
defer func() {
ast.ReleaseSelectStatement(stmt)
metrics.RecordStatementPoolPut()
}()
// Expression pool (identifiers, literals, binary expressions)
expr := ast.NewIdentifier("column_name")
metrics.RecordExpressionPoolGet()
defer func() {
ast.ReleaseIdentifier(expr)
metrics.RecordExpressionPoolPut()
}()
Retrieving Statistics ¶
Get comprehensive performance statistics:
stats := metrics.GetStats()
// Tokenization performance
fmt.Printf("Tokenize ops/sec: %.0f\n", stats.TokenizeOperationsPerSecond)
fmt.Printf("Avg tokenize time: %v\n", stats.AverageTokenizeDuration)
fmt.Printf("Tokenize error rate: %.2f%%\n", stats.TokenizeErrorRate*100)
// Parser performance
fmt.Printf("Parse ops/sec: %.0f\n", stats.ParseOperationsPerSecond)
fmt.Printf("Avg parse time: %v\n", stats.AverageParseDuration)
fmt.Printf("Statements created: %d\n", stats.StatementsCreated)
// Pool efficiency
poolHitRate := (1 - stats.PoolMissRate) * 100
fmt.Printf("Pool hit rate: %.1f%%\n", poolHitRate)
fmt.Printf("AST pool balance: %d\n", stats.ASTPoolBalance)
// Query size metrics
fmt.Printf("Query size range: %d - %d bytes\n", stats.MinQuerySize, stats.MaxQuerySize)
fmt.Printf("Avg query size: %.0f bytes\n", stats.AverageQuerySize)
fmt.Printf("Total processed: %d bytes\n", stats.TotalBytesProcessed)
Error Tracking ¶
View error breakdown by type:
stats := metrics.GetStats()
if len(stats.ErrorsByType) > 0 {
fmt.Println("Errors by type:")
for errorType, count := range stats.ErrorsByType {
fmt.Printf(" %s: %d\n", errorType, count)
}
}
Record errors with categorization:
// Tokenization error
_, err := tokenizer.Tokenize(sqlBytes)
if err != nil {
metrics.RecordError("E1001") // Error code from pkg/errors
}
// Parser error
ast, err := parser.Parse(tokens)
if err != nil {
metrics.RecordError("E2001")
}
Production Monitoring ¶
Integrate with monitoring systems:
import "time"
// Periodic stats reporting
ticker := time.NewTicker(30 * time.Second)
go func() {
for range ticker.C {
stats := metrics.GetStats()
// Export to Prometheus, DataDog, New Relic, etc.
prometheusGauge.WithLabelValues("tokenize_ops_per_sec").Set(stats.TokenizeOperationsPerSecond)
prometheusGauge.WithLabelValues("pool_miss_rate").Set(stats.PoolMissRate)
prometheusCounter.WithLabelValues("tokenize_total").Add(float64(stats.TokenizeOperations))
// Alert on high error rates
if stats.TokenizeErrorRate > 0.05 {
log.Printf("WARNING: High tokenize error rate: %.2f%%",
stats.TokenizeErrorRate*100)
}
// Monitor pool efficiency
if stats.PoolMissRate > 0.2 {
log.Printf("WARNING: Low pool hit rate: %.1f%%",
(1-stats.PoolMissRate)*100)
}
// Check pool balance (gets should roughly equal puts)
if stats.ASTPoolBalance > 1000 || stats.ASTPoolBalance < -1000 {
log.Printf("WARNING: AST pool imbalance: %d", stats.ASTPoolBalance)
}
}
}()
Pool Efficiency Monitoring ¶
Track all pool types independently:
stats := metrics.GetStats()
// Tokenizer pool (sync.Pool for tokenizer instances)
fmt.Printf("Tokenizer pool gets: %d, puts: %d, balance: %d\n",
stats.PoolGets, stats.PoolPuts, stats.PoolBalance)
fmt.Printf("Tokenizer pool miss rate: %.1f%%\n", stats.PoolMissRate*100)
// AST pool (main AST container objects)
fmt.Printf("AST pool gets: %d, puts: %d, balance: %d\n",
stats.ASTPoolGets, stats.ASTPoolPuts, stats.ASTPoolBalance)
// Statement pool (SELECT/INSERT/UPDATE/DELETE statements)
fmt.Printf("Statement pool gets: %d, puts: %d, balance: %d\n",
stats.StmtPoolGets, stats.StmtPoolPuts, stats.StmtPoolBalance)
// Expression pool (identifiers, binary expressions, literals)
fmt.Printf("Expression pool gets: %d, puts: %d, balance: %d\n",
stats.ExprPoolGets, stats.ExprPoolPuts, stats.ExprPoolBalance)
Pool balance interpretation:
- Balance = 0: Perfect equilibrium (gets == puts)
- Balance > 0: More gets than puts (potential leak or objects still in use)
- Balance < 0: More puts than gets (should never happen - indicates bug)
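The three cases above can be turned into a mechanical health check. This is a standalone sketch: `classifyBalance` is a hypothetical helper, and the threshold of 1000 (matching the monitoring example earlier) is an arbitrary allowance for objects legitimately still in use.

```go
package main

import "fmt"

// classifyBalance interprets a pool's gets-minus-puts counter.
// threshold allows for objects legitimately still in flight.
func classifyBalance(balance, threshold int64) string {
	switch {
	case balance < 0:
		return "BUG: more puts than gets"
	case balance > threshold:
		return "possible leak: objects not returned"
	default:
		return "healthy"
	}
}

func main() {
	fmt.Println(classifyBalance(0, 1000))    // healthy
	fmt.Println(classifyBalance(5000, 1000)) // possible leak: objects not returned
	fmt.Println(classifyBalance(-3, 1000))   // BUG: more puts than gets
}
```

In a monitoring loop, the same check would be applied to each of the four pool balances independently.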
Resetting Metrics ¶
Reset all metrics (useful for testing or service restart):
metrics.Reset()
fmt.Println("All metrics reset to zero")
Note: Reset() preserves the enabled/disabled state but clears all counters. The start time is also reset to the current time.
SLA Monitoring ¶
Track service level objectives:
stats := metrics.GetStats()
// Latency check (uses the average as a rough proxy; percentiles are not tracked)
if stats.AverageTokenizeDuration > 10*time.Millisecond {
log.Printf("WARNING: High tokenize latency: %v", stats.AverageTokenizeDuration)
}
// Throughput SLO
if stats.TokenizeOperationsPerSecond < 100000 {
log.Printf("WARNING: Low throughput: %.0f ops/sec", stats.TokenizeOperationsPerSecond)
}
// Error rate SLO
if stats.TokenizeErrorRate > 0.01 { // 1% error threshold
log.Printf("CRITICAL: Error rate %.2f%% exceeds SLO", stats.TokenizeErrorRate*100)
}
Performance Impact ¶
The metrics package uses atomic operations for lock-free performance tracking.
Overhead measurements (on modern x86_64):
- When disabled: ~1-2ns per Record* call (immediate return)
- When enabled: ~50-100ns per Record* call (atomic increment)
- GetStats(): ~1-2μs (copies all counters with read lock)
For reference, GoSQLX v1.6.0 tokenization takes ~700ns for typical queries, so metrics overhead is < 15% even when enabled.
Thread Safety ¶
All functions in this package are safe for concurrent use from multiple goroutines:
- Enable/Disable: Safe to call from any goroutine
- Record* functions: Use atomic operations for counters
- GetStats: Uses RWMutex to safely copy all metrics
- Reset: Uses write lock to safely clear all metrics
The package has been validated to be race-free under high concurrency with 20,000+ concurrent operations tested using go test -race.
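The guarantee can be demonstrated with a self-contained sketch of the same pattern: many goroutines updating one atomic counter lose no increments, even under go test -race. The counter here is a stand-in for the package's own, and `runConcurrent` is a hypothetical helper.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runConcurrent bumps an atomic counter from many goroutines and
// returns the final value; with atomic operations no updates are lost.
func runConcurrent(goroutines, perGoroutine int) int64 {
	var ops atomic.Int64
	var wg sync.WaitGroup
	for g := 0; g < goroutines; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < perGoroutine; i++ {
				ops.Add(1)
			}
		}()
	}
	wg.Wait()
	return ops.Load()
}

func main() {
	// 200 goroutines x 100 increments = the 20,000 concurrent
	// operations mentioned above.
	fmt.Println(runConcurrent(200, 100)) // 20000
}
```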
JSON Serialization ¶
The Stats struct supports JSON marshaling for easy integration with monitoring and logging systems:
stats := metrics.GetStats()
jsonData, err := json.MarshalIndent(stats, "", " ")
if err != nil {
log.Fatal(err)
}
fmt.Println(string(jsonData))
Example output:
{
"tokenize_operations": 150000,
"tokenize_operations_per_second": 1380000.0,
"average_tokenize_duration": "724ns",
"tokenize_error_rate": 0.002,
"pool_miss_rate": 0.05,
"pool_reuse": 95.0,
"average_query_size": 1024.5
}
Stats Structure ¶
The Stats struct provides comprehensive metrics:
type Stats struct {
// Tokenization metrics
TokenizeOperations int64 // Total tokenization calls
TokenizeErrors int64 // Total tokenization errors
TokenizeOperationsPerSecond float64 // Ops/sec throughput
AverageTokenizeDuration time.Duration // Average tokenization time
TokenizeErrorRate float64 // Error rate (0.0-1.0)
LastTokenizeTime time.Time // Timestamp of last tokenization
// Parser metrics
ParseOperations int64 // Total parse calls
ParseErrors int64 // Total parse errors
ParseOperationsPerSecond float64 // Ops/sec throughput
AverageParseDuration time.Duration // Average parse time
ParseErrorRate float64 // Error rate (0.0-1.0)
StatementsCreated int64 // Total statements parsed
LastParseTime time.Time // Timestamp of last parse
// Pool metrics (tokenizer pool)
PoolGets int64 // Total pool retrievals
PoolPuts int64 // Total pool returns
PoolMisses int64 // Pool misses (new allocations)
PoolBalance int64 // Gets - Puts (should be ~0)
PoolMissRate float64 // Miss rate (0.0-1.0)
PoolReuse float64 // Reuse percentage (0-100)
// AST pool metrics
ASTPoolGets int64 // AST pool retrievals
ASTPoolPuts int64 // AST pool returns
ASTPoolBalance int64 // Gets - Puts
// Statement pool metrics
StmtPoolGets int64 // Statement pool retrievals
StmtPoolPuts int64 // Statement pool returns
StmtPoolBalance int64 // Gets - Puts
// Expression pool metrics
ExprPoolGets int64 // Expression pool retrievals
ExprPoolPuts int64 // Expression pool returns
ExprPoolBalance int64 // Gets - Puts
// Query size metrics
MinQuerySize int64 // Smallest query processed (bytes)
MaxQuerySize int64 // Largest query processed (bytes)
TotalBytesProcessed int64 // Total SQL bytes processed
AverageQuerySize float64 // Average query size (bytes)
// Error tracking
ErrorsByType map[string]int64 // Error counts by error code
// Timing
StartTime time.Time // When metrics were enabled/reset
Uptime time.Duration // Duration since start
}
Integration Examples ¶
Prometheus exporter:
func exportPrometheusMetrics() {
stats := metrics.GetStats()
// Gauges for current rates
tokenizeOpsPerSec.Set(stats.TokenizeOperationsPerSecond)
parseOpsPerSec.Set(stats.ParseOperationsPerSecond)
poolMissRate.Set(stats.PoolMissRate)
// Counters for totals
tokenizeTotal.Add(float64(stats.TokenizeOperations))
parseTotal.Add(float64(stats.ParseOperations))
tokenizeErrors.Add(float64(stats.TokenizeErrors))
parseErrors.Add(float64(stats.ParseErrors))
// Histograms for latencies
tokenizeLatency.Observe(stats.AverageTokenizeDuration.Seconds())
parseLatency.Observe(stats.AverageParseDuration.Seconds())
}
DataDog exporter:
func exportDataDogMetrics() {
stats := metrics.GetStats()
statsd.Gauge("gosqlx.tokenize.ops_per_second", stats.TokenizeOperationsPerSecond, nil, 1)
statsd.Gauge("gosqlx.parse.ops_per_second", stats.ParseOperationsPerSecond, nil, 1)
statsd.Gauge("gosqlx.pool.miss_rate", stats.PoolMissRate, nil, 1)
statsd.Gauge("gosqlx.pool.hit_rate", 1-stats.PoolMissRate, nil, 1)
statsd.Count("gosqlx.tokenize.total", stats.TokenizeOperations, nil, 1)
statsd.Count("gosqlx.parse.total", stats.ParseOperations, nil, 1)
statsd.Histogram("gosqlx.tokenize.duration", float64(stats.AverageTokenizeDuration), nil, 1)
}
Design Principles ¶
The metrics package follows GoSQLX design philosophy:
- Zero Dependencies: Only depends on Go standard library
- Thread-Safe: All operations safe for concurrent use
- Low Overhead: Minimal impact on performance (< 15% when enabled)
- Atomic Operations: Lock-free counters for high concurrency
- Comprehensive: Tracks all major subsystems (tokenizer, parser, pools)
- Production-Ready: Validated race-free under high load
Testing and Quality ¶
The package maintains high quality standards:
- Comprehensive test coverage for all functions
- Race detection validation (go test -race)
- Concurrent access testing (20,000+ operations)
- Performance benchmarks for all operations
- Real-world usage validation in production environments
Version ¶
This package is part of GoSQLX v1.6.0 and is production-ready for enterprise use.
For complete examples and advanced usage, see:
- docs/GETTING_STARTED.md - Quick start guide
- docs/USAGE_GUIDE.md - Comprehensive usage documentation
- examples/ directory - Production-ready examples
Index ¶
- func Disable()
- func Enable()
- func IsEnabled() bool
- func RecordASTPoolGet()
- func RecordASTPoolPut()
- func RecordExpressionPoolGet()
- func RecordExpressionPoolPut()
- func RecordParse(duration time.Duration, statementCount int, err error)
- func RecordPoolGet(fromPool bool)
- func RecordPoolPut()
- func RecordStatementPoolGet()
- func RecordStatementPoolPut()
- func RecordTokenization(duration time.Duration, querySize int, err error)
- func Reset()
- type Metrics
- type Stats
- func GetStats() Stats
- func LogStats() Stats (deprecated)
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func Disable ¶
func Disable()
Disable deactivates metrics collection globally. After calling Disable, all Record* functions become no-ops. Existing metrics data is preserved until Reset() is called.
This function is safe to call multiple times.
Example:
metrics.Disable() // Metrics collection stopped but data preserved

stats := metrics.GetStats() // Still returns last collected stats
func Enable ¶
func Enable()
Enable activates metrics collection globally. After calling Enable, all Record* functions will track operations. The start time is reset when metrics are enabled.
This function is safe to call multiple times.
Example:
metrics.Enable()
defer metrics.Disable()

// All operations are now tracked
func IsEnabled ¶
func IsEnabled() bool
IsEnabled returns whether metrics collection is currently active. Returns true if Enable() has been called, false otherwise.
Example:
if metrics.IsEnabled() {
fmt.Println("Metrics are being collected")
}
func RecordASTPoolGet ¶ added in v1.6.0
func RecordASTPoolGet()
RecordASTPoolGet records an AST pool retrieval. This function is a no-op if metrics are disabled. Use this to track AST pool efficiency.
func RecordASTPoolPut ¶ added in v1.6.0
func RecordASTPoolPut()
RecordASTPoolPut records an AST pool return. This function is a no-op if metrics are disabled. Use this to track AST pool efficiency.
func RecordExpressionPoolGet ¶ added in v1.6.0
func RecordExpressionPoolGet()
RecordExpressionPoolGet records an expression pool retrieval. This function is a no-op if metrics are disabled. Use this to track expression pool efficiency.
func RecordExpressionPoolPut ¶ added in v1.6.0
func RecordExpressionPoolPut()
RecordExpressionPoolPut records an expression pool return. This function is a no-op if metrics are disabled. Use this to track expression pool efficiency.
func RecordParse ¶ added in v1.6.0
func RecordParse(duration time.Duration, statementCount int, err error)
RecordParse records a parse operation with duration, statement count, and error. This function is a no-op if metrics are disabled.
Call this after each parse operation to track performance metrics.
Parameters:
- duration: Time taken to parse the SQL
- statementCount: Number of statements successfully parsed
- err: Error returned from parsing, or nil if successful
Example:
start := time.Now()
ast, err := parser.Parse(tokens)
duration := time.Since(start)

// Guard against a nil AST when parsing fails
statementCount := 0
if err == nil {
statementCount = len(ast.Statements)
}
metrics.RecordParse(duration, statementCount, err)
func RecordPoolGet ¶
func RecordPoolGet(fromPool bool)
RecordPoolGet records a tokenizer pool retrieval operation. This function is a no-op if metrics are disabled.
Call this each time a tokenizer is retrieved from the pool.
Parameters:
- fromPool: true if the tokenizer came from the pool, false if newly allocated
Example:
tkz := tokenizer.GetTokenizer()
metrics.RecordPoolGet(true) // Retrieved from pool
defer func() {
tokenizer.PutTokenizer(tkz)
metrics.RecordPoolPut()
}()
func RecordPoolPut ¶
func RecordPoolPut()
RecordPoolPut records a tokenizer pool return operation. This function is a no-op if metrics are disabled.
Call this each time a tokenizer is returned to the pool.
Example:
defer func() {
tokenizer.PutTokenizer(tkz)
metrics.RecordPoolPut()
}()
func RecordStatementPoolGet ¶ added in v1.6.0
func RecordStatementPoolGet()
RecordStatementPoolGet records a statement pool retrieval. This function is a no-op if metrics are disabled. Use this to track statement pool efficiency.
func RecordStatementPoolPut ¶ added in v1.6.0
func RecordStatementPoolPut()
RecordStatementPoolPut records a statement pool return. This function is a no-op if metrics are disabled. Use this to track statement pool efficiency.
func RecordTokenization ¶
func RecordTokenization(duration time.Duration, querySize int, err error)
RecordTokenization records a tokenization operation with duration, query size, and error. This function is a no-op if metrics are disabled.
Call this after each tokenization operation to track performance metrics.
Parameters:
- duration: Time taken to tokenize the SQL
- querySize: Size of the SQL query in bytes
- err: Error returned from tokenization, or nil if successful
Example:
start := time.Now()
tokens, err := tokenizer.Tokenize(sqlBytes)
duration := time.Since(start)

metrics.RecordTokenization(duration, len(sqlBytes), err)
func Reset ¶
func Reset()
Reset clears all metrics and resets counters to zero. This is useful for testing, benchmarking, or when restarting metric collection.
The function resets:
- All operation counts (tokenization, parsing)
- All timing data
- Pool statistics
- Query size metrics
- Error counts and breakdown
- Start time (reset to current time)
Note: This does not affect the enabled/disabled state. If metrics are enabled before Reset(), they remain enabled after.
Example:
// Reset before benchmark
metrics.Reset()
metrics.Enable()
// Run operations
// ...
// Check clean metrics
stats := metrics.GetStats()
fmt.Printf("Operations: %d\n", stats.TokenizeOperations)
Types ¶
type Metrics ¶
type Metrics struct {
// contains filtered or unexported fields
}
Metrics collects runtime performance data for GoSQLX operations. It uses atomic operations for all counters to ensure thread-safe, race-free metric collection in high-concurrency environments.
This is the internal metrics structure. Use the global functions (Enable, Disable, RecordTokenization, etc.) to interact with metrics.
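The pattern the text describes can be sketched as follows. Field and method names here are hypothetical (the real structure is unexported): atomic counters serve the hot path, while a mutex protects only the map-valued error breakdown and consistent snapshots.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// metricsSketch illustrates the internal split the text describes:
// atomics for hot-path counters, a lock only for map access and snapshots.
type metricsSketch struct {
	tokenizeOps atomic.Int64
	mu          sync.RWMutex
	errsByType  map[string]int64
}

func (m *metricsSketch) recordError(code string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.errsByType[code]++
}

// snapshot copies the state so callers cannot mutate internals,
// mirroring how GetStats returns a Stats value rather than pointers.
func (m *metricsSketch) snapshot() (ops int64, errs map[string]int64) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	errs = make(map[string]int64, len(m.errsByType))
	for k, v := range m.errsByType {
		errs[k] = v
	}
	return m.tokenizeOps.Load(), errs
}

func main() {
	m := &metricsSketch{errsByType: map[string]int64{}}
	m.tokenizeOps.Add(3)
	m.recordError("E1001")
	ops, errs := m.snapshot()
	fmt.Println(ops, errs["E1001"]) // 3 1
}
```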
type Stats ¶
type Stats struct {
// Tokenization counts
TokenizeOperations int64 `json:"tokenize_operations"`
TokenizeErrors int64 `json:"tokenize_errors"`
TokenizeErrorRate float64 `json:"tokenize_error_rate"`
// Parser counts
ParseOperations int64 `json:"parse_operations"`
ParseErrors int64 `json:"parse_errors"`
ParseErrorRate float64 `json:"parse_error_rate"`
StatementsCreated int64 `json:"statements_created"`
// Tokenization performance metrics
AverageTokenizeDuration time.Duration `json:"average_tokenize_duration"`
TokenizeOperationsPerSecond float64 `json:"tokenize_operations_per_second"`
// Parser performance metrics
AverageParseDuration time.Duration `json:"average_parse_duration"`
ParseOperationsPerSecond float64 `json:"parse_operations_per_second"`
// Tokenizer pool metrics
PoolGets int64 `json:"pool_gets"`
PoolPuts int64 `json:"pool_puts"`
PoolBalance int64 `json:"pool_balance"`
PoolMissRate float64 `json:"pool_miss_rate"`
// AST pool metrics
ASTPoolGets int64 `json:"ast_pool_gets"`
ASTPoolPuts int64 `json:"ast_pool_puts"`
ASTPoolBalance int64 `json:"ast_pool_balance"`
// Statement pool metrics
StmtPoolGets int64 `json:"stmt_pool_gets"`
StmtPoolPuts int64 `json:"stmt_pool_puts"`
StmtPoolBalance int64 `json:"stmt_pool_balance"`
// Expression pool metrics
ExprPoolGets int64 `json:"expr_pool_gets"`
ExprPoolPuts int64 `json:"expr_pool_puts"`
ExprPoolBalance int64 `json:"expr_pool_balance"`
// Query size metrics
MinQuerySize int64 `json:"min_query_size"`
MaxQuerySize int64 `json:"max_query_size"`
AverageQuerySize float64 `json:"average_query_size"`
TotalBytesProcessed int64 `json:"total_bytes_processed"`
// Timing
Uptime time.Duration `json:"uptime"`
LastOperationTime time.Time `json:"last_operation_time"`
// Error breakdown
ErrorsByType map[string]int64 `json:"errors_by_type"`
// Legacy field for backwards compatibility
ErrorRate float64 `json:"error_rate"`
}
Stats represents a snapshot of current performance statistics. All fields are populated by GetStats() and provide comprehensive performance and efficiency data for GoSQLX operations.
The struct supports JSON marshaling for easy integration with monitoring systems, logging, and dashboards.
func GetStats ¶
func GetStats() Stats
GetStats returns a snapshot of current performance statistics. This function is safe to call concurrently and can be called whether metrics are enabled or disabled.
When metrics are disabled, returns a Stats struct with zero values.
The returned Stats struct contains comprehensive information including:
- Operation counts and timings (tokenization, parsing)
- Error rates and error breakdown by type
- Pool efficiency metrics (hit rates, balance)
- Query size statistics
- Operations per second throughput
- Uptime since metrics were enabled
Example:
stats := metrics.GetStats()
// Display tokenization performance
fmt.Printf("Tokenize ops/sec: %.0f\n", stats.TokenizeOperationsPerSecond)
fmt.Printf("Avg tokenize time: %v\n", stats.AverageTokenizeDuration)
fmt.Printf("Error rate: %.2f%%\n", stats.TokenizeErrorRate*100)
// Display pool efficiency
fmt.Printf("Pool hit rate: %.1f%%\n", (1-stats.PoolMissRate)*100)
fmt.Printf("Pool balance: %d\n", stats.PoolBalance)
// Export to JSON
jsonData, _ := json.MarshalIndent(stats, "", " ")
fmt.Println(string(jsonData))