Documentation
Overview ¶
Package monitor provides lightweight performance monitoring for GoSQLX operations.
This package is a simpler alternative to pkg/metrics, designed for applications that need basic performance tracking without the full feature set. It focuses on core metrics: tokenizer/parser timings, pool efficiency, and memory statistics.
For comprehensive production monitoring with error tracking, query size distribution, and detailed pool metrics, use pkg/metrics instead.
Tracked Metrics ¶
The monitor package tracks:
- Tokenizer call counts and cumulative duration
- Parser call counts and cumulative duration
- Object pool hit/miss rates and reuse percentages
- Basic memory allocation statistics
- Error counts for tokenizer and parser operations
All operations are thread-safe, using atomic counters and an RWMutex for safe concurrent access from multiple goroutines.
Basic Usage ¶
Enable monitoring:
import "github.com/ajitpratap0/GoSQLX/pkg/sql/monitor"
// Enable metrics collection
monitor.Enable()
defer monitor.Disable()
// Perform operations
// ...
// Get metrics snapshot
metrics := monitor.GetMetrics()
fmt.Printf("Tokenizer calls: %d\n", metrics.TokenizerCalls)
fmt.Printf("Parser calls: %d\n", metrics.ParserCalls)
fmt.Printf("Pool reuse: %.1f%%\n", metrics.PoolReuse)
Recording Operations ¶
Record tokenizer operations:
start := time.Now()
tokens, err := tokenizer.Tokenize(sqlBytes)
duration := time.Since(start)
monitor.RecordTokenizerCall(duration, len(tokens), err)
Record parser operations:
start := time.Now()
ast, err := parser.Parse(tokens)
duration := time.Since(start)
monitor.RecordParserCall(duration, err)
Pool Tracking ¶
Record pool hits and misses:
// Successful pool retrieval
monitor.RecordPoolHit()

// Pool miss (new allocation required)
monitor.RecordPoolMiss()
Example with tokenizer pool:
tkz := tokenizer.GetTokenizer()
if tkz != nil {
	monitor.RecordPoolHit()
	defer tokenizer.PutTokenizer(tkz)
} else {
	monitor.RecordPoolMiss()
}
Metrics Snapshot ¶
Retrieve current metrics:
metrics := monitor.GetMetrics()
// Tokenizer metrics
fmt.Printf("Tokenizer calls: %d\n", metrics.TokenizerCalls)
fmt.Printf("Tokenizer duration: %v\n", metrics.TokenizerDuration)
fmt.Printf("Tokens processed: %d\n", metrics.TokensProcessed)
fmt.Printf("Tokenizer errors: %d\n", metrics.TokenizerErrors)
// Parser metrics
fmt.Printf("Parser calls: %d\n", metrics.ParserCalls)
fmt.Printf("Parser duration: %v\n", metrics.ParserDuration)
fmt.Printf("Statements processed: %d\n", metrics.StatementsProcessed)
fmt.Printf("Parser errors: %d\n", metrics.ParserErrors)
// Pool metrics
fmt.Printf("Pool hits: %d\n", metrics.PoolHits)
fmt.Printf("Pool misses: %d\n", metrics.PoolMisses)
fmt.Printf("Pool reuse rate: %.1f%%\n", metrics.PoolReuse)
// Uptime
fmt.Printf("Monitoring started: %v\n", metrics.StartTime)
Performance Summary ¶
Get aggregated performance summary:
summary := monitor.GetSummary()
fmt.Printf("Uptime: %v\n", summary.Uptime)
fmt.Printf("Total operations: %d\n", summary.TotalOperations)
fmt.Printf("Operations/sec: %.0f\n", summary.OperationsPerSecond)
fmt.Printf("Tokens/sec: %.0f\n", summary.TokensPerSecond)
fmt.Printf("Avg tokenizer latency: %v\n", summary.AvgTokenizerLatency)
fmt.Printf("Avg parser latency: %v\n", summary.AvgParserLatency)
fmt.Printf("Error rate: %.2f%%\n", summary.ErrorRate)
fmt.Printf("Pool efficiency: %.1f%%\n", summary.PoolEfficiency)
Resetting Metrics ¶
Clear all metrics:
monitor.Reset()
fmt.Println("Metrics reset")
Uptime Tracking ¶
Get time since monitoring started or was reset:
uptime := monitor.Uptime()
fmt.Printf("Monitoring for: %v\n", uptime)
Enable/Disable Control ¶
Check if monitoring is enabled:
if monitor.IsEnabled() {
fmt.Println("Monitoring is active")
} else {
fmt.Println("Monitoring is disabled")
}
Enable/disable on demand:
// Enable for specific section
monitor.Enable()
// ... operations to monitor ...
monitor.Disable()
Comparison with pkg/metrics ¶
Use pkg/monitor when:
- You need simple performance tracking
- You want minimal overhead and dependencies
- You don't need error categorization by type
- You don't need query size distribution
- You don't need separate pool tracking (AST, stmt, expr pools)
Use pkg/metrics when:
- You need comprehensive production monitoring
- You want detailed error tracking by error code
- You need query size distribution (min/max/avg)
- You need separate metrics for all pool types
- You want integration with Prometheus/DataDog/etc.
Thread Safety ¶
All functions in this package are safe for concurrent use:
- Enable/Disable: Atomic flag for thread-safe enable/disable
- Record* functions: Use atomic operations for counters
- GetMetrics: Uses RWMutex for safe concurrent reads
- Reset: Uses write lock to safely clear all metrics
The package has been validated to be race-free under concurrent access.
Performance Impact ¶
When disabled:
- All Record* functions check atomic flag and return immediately
- Overhead: ~1-2ns per call (negligible)
When enabled:
- Atomic increment operations for counters
- Mutex-protected duration updates
- Overhead: ~50-100ns per call (minimal)
Production Integration ¶
Example with periodic reporting:
import "time"
ticker := time.NewTicker(60 * time.Second)
go func() {
for range ticker.C {
summary := monitor.GetSummary()
log.Printf("Performance: %.0f ops/sec, %.2f%% errors, %.1f%% pool efficiency",
summary.OperationsPerSecond,
summary.ErrorRate,
summary.PoolEfficiency)
// Alert on performance degradation
if summary.OperationsPerSecond < 100000 {
log.Printf("WARNING: Low throughput detected")
}
if summary.ErrorRate > 5.0 {
log.Printf("WARNING: High error rate detected")
}
if summary.PoolEfficiency < 80.0 {
log.Printf("WARNING: Low pool efficiency")
}
}
}()
Design Principles ¶
The monitor package follows GoSQLX design philosophy:
- Simplicity: Focused on core metrics only
- Low Overhead: Minimal performance impact
- Thread-Safe: Safe for concurrent use
- Zero Dependencies: Only uses Go standard library
Version ¶
This package is part of GoSQLX v1.6.0 and is production-ready.
For complete examples, see:
- docs/USAGE_GUIDE.md - Comprehensive usage documentation
- examples/ directory - Production-ready examples
Package monitor provides performance monitoring and metrics collection for GoSQLX.
Index ¶
- func Disable()
- func Enable()
- func IsEnabled() bool
- func RecordParserCall(duration time.Duration, err error)
- func RecordPoolHit()
- func RecordPoolMiss()
- func RecordTokenizerCall(duration time.Duration, tokens int, err error)
- func Reset()
- func Uptime() time.Duration
- type Metrics
- type MetricsSnapshot
- func GetMetrics() MetricsSnapshot
- type Summary
- func GetSummary() Summary
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func Disable ¶
func Disable()
Disable deactivates metrics collection globally.
After calling Disable, all Record* functions become no-ops. Existing metrics data is preserved until Reset() is called. This function is safe to call multiple times and from multiple goroutines.
Example:
monitor.Disable()

// Metrics collection stopped but data preserved
metrics := monitor.GetMetrics() // Still returns last collected data
func Enable ¶
func Enable()
Enable activates metrics collection globally.
After calling Enable, all Record* functions will track operations. This function is safe to call multiple times and from multiple goroutines.
Example:
monitor.Enable()
defer monitor.Disable()

// All operations are now tracked
func IsEnabled ¶
func IsEnabled() bool
IsEnabled returns whether metrics collection is currently active.
Returns true if Enable() has been called, false otherwise. This function is safe to call from multiple goroutines.
Example:
if monitor.IsEnabled() {
fmt.Println("Metrics are being collected")
}
func RecordParserCall ¶
func RecordParserCall(duration time.Duration, err error)
RecordParserCall records a parse operation with timing and error information.
This function is a no-op if metrics are disabled. Call this after each parse operation to track performance.
Parameters:
- duration: Time taken to parse the SQL
- err: Error returned from parsing, or nil if successful
Thread safety: Safe to call from multiple goroutines concurrently.
Example:
start := time.Now()
ast, err := parser.Parse(tokens)
duration := time.Since(start)
monitor.RecordParserCall(duration, err)
func RecordPoolHit ¶
func RecordPoolHit()
RecordPoolHit records a successful object retrieval from the pool.
Call this when an object is successfully retrieved from sync.Pool (i.e., the pool had an available object to reuse).
This function is a no-op if metrics are disabled. Thread safety: Safe to call from multiple goroutines concurrently.
Example:
obj := pool.Get()
if obj != nil {
monitor.RecordPoolHit()
} else {
monitor.RecordPoolMiss()
}
func RecordPoolMiss ¶
func RecordPoolMiss()
RecordPoolMiss records a pool miss requiring new allocation.
Call this when sync.Pool.Get() returns nil and a new object must be allocated. High pool miss rates indicate insufficient pool warm-up or excessive load.
This function is a no-op if metrics are disabled. Thread safety: Safe to call from multiple goroutines concurrently.
Example:
obj := pool.Get()
if obj == nil {
monitor.RecordPoolMiss()
obj = &NewObject{} // Create new object
}
func RecordTokenizerCall ¶
RecordTokenizerCall records a tokenization operation with timing and error information.
This function is a no-op if metrics are disabled. Call this after each tokenization operation to track performance.
Parameters:
- duration: Time taken to tokenize the SQL
- tokens: Number of tokens generated
- err: Error returned from tokenization, or nil if successful
Thread safety: Safe to call from multiple goroutines concurrently.
Example:
start := time.Now()
tokens, err := tokenizer.Tokenize(sqlBytes)
duration := time.Since(start)
monitor.RecordTokenizerCall(duration, len(tokens), err)
func Reset ¶
func Reset()
Reset clears all metrics and resets the start time.
This function resets all counters to zero and sets the start time to now. The enabled/disabled state is preserved.
Useful for testing, service restart, or when you want to start fresh metrics collection without stopping the service.
Thread safety: Safe to call from multiple goroutines concurrently.
Example:
monitor.Reset()
fmt.Println("All metrics cleared")
func Uptime ¶
func Uptime() time.Duration
Uptime returns the duration since metrics were enabled or reset.
This provides the time window over which current metrics have been collected. Useful for calculating rates (operations per second, etc.).
Thread safety: Safe to call from multiple goroutines concurrently.
Example:
uptime := monitor.Uptime()
metrics := monitor.GetMetrics()
opsPerSec := float64(metrics.TokenizerCalls) / uptime.Seconds()
fmt.Printf("Uptime: %v, Ops/sec: %.0f\n", uptime, opsPerSec)
Types ¶
type Metrics ¶
type Metrics struct {
// Tokenizer metrics
TokenizerCalls int64 // Total tokenization operations (atomic)
TokenizerDuration time.Duration // Cumulative tokenization time
TokensProcessed int64 // Total tokens generated (atomic)
TokenizerErrors int64 // Total tokenization errors (atomic)
// Parser metrics
ParserCalls int64 // Total parse operations (atomic)
ParserDuration time.Duration // Cumulative parse time
StatementsProcessed int64 // Total statements parsed (atomic)
ParserErrors int64 // Total parse errors (atomic)
// Pool metrics
PoolHits int64 // Pool retrieval hits (atomic)
PoolMisses int64 // Pool retrieval misses (atomic)
PoolReuse float64 // Pool reuse percentage (calculated)
// Memory metrics (currently unused - reserved for future use)
AllocBytes uint64 // Memory allocation in bytes
TotalAllocs uint64 // Total allocation count
LastGCPause time.Duration // Last GC pause duration
// contains filtered or unexported fields
}
Metrics holds performance metrics for the tokenizer and parser with thread-safe access.
This is the internal metrics structure protected by a read-write mutex. Do not access this directly; use the global functions (Enable, Disable, RecordTokenizerCall, RecordParserCall, etc.) instead.
The mutex ensures safe concurrent access from multiple goroutines. All metric fields use atomic operations or are protected by the mutex.
type MetricsSnapshot ¶
type MetricsSnapshot struct {
// TokenizerCalls is the total number of tokenization operations performed
TokenizerCalls int64
// TokenizerDuration is the cumulative time spent in tokenization
TokenizerDuration time.Duration
// TokensProcessed is the total number of tokens generated
TokensProcessed int64
// TokenizerErrors is the total number of tokenization failures
TokenizerErrors int64
// ParserCalls is the total number of parse operations performed
ParserCalls int64
// ParserDuration is the cumulative time spent in parsing
ParserDuration time.Duration
// StatementsProcessed is the total number of SQL statements successfully parsed
StatementsProcessed int64
// ParserErrors is the total number of parse failures
ParserErrors int64
// PoolHits is the number of successful pool retrievals (object reused from pool)
PoolHits int64
// PoolMisses is the number of pool misses (new allocation required)
PoolMisses int64
// PoolReuse is the pool reuse percentage (0-100)
PoolReuse float64
// AllocBytes is the current memory allocation in bytes (currently unused)
AllocBytes uint64
// TotalAllocs is the total number of allocations (currently unused)
TotalAllocs uint64
// LastGCPause is the duration of the last garbage collection pause (currently unused)
LastGCPause time.Duration
// StartTime is when metrics collection started or was last reset
StartTime time.Time
}
MetricsSnapshot represents a point-in-time snapshot of performance metrics.
This structure contains all metric data without internal locks, making it safe to pass between goroutines and serialize for monitoring systems.
Use GetMetrics() to obtain a snapshot of current metrics.
Example:
metrics := monitor.GetMetrics()
fmt.Printf("Tokenizer calls: %d\n", metrics.TokenizerCalls)
fmt.Printf("Pool reuse: %.1f%%\n", metrics.PoolReuse)
func GetMetrics ¶
func GetMetrics() MetricsSnapshot
GetMetrics returns a snapshot of current performance metrics.
This function is safe to call concurrently and can be called whether metrics are enabled or disabled. When disabled, returns a snapshot with the last collected values.
The returned MetricsSnapshot is a copy and safe to use across goroutines. The PoolReuse field is calculated as (PoolHits / (PoolHits + PoolMisses)) * 100.
Thread safety: Safe to call from multiple goroutines concurrently.
Example:
metrics := monitor.GetMetrics()
fmt.Printf("Tokenizer calls: %d\n", metrics.TokenizerCalls)
fmt.Printf("Tokenizer errors: %d\n", metrics.TokenizerErrors)
fmt.Printf("Pool reuse: %.1f%%\n", metrics.PoolReuse)
fmt.Printf("Uptime: %v\n", time.Since(metrics.StartTime))
type Summary ¶
type Summary struct {
// Uptime is the duration since metrics were started or reset
Uptime time.Duration
// TotalOperations is the sum of tokenizer and parser operations
TotalOperations int64
// OperationsPerSecond is the average operations per second (total ops / uptime)
OperationsPerSecond float64
// TokensPerSecond is the average tokens generated per second
TokensPerSecond float64
// AvgTokenizerLatency is the average time per tokenization operation
AvgTokenizerLatency time.Duration
// AvgParserLatency is the average time per parse operation
AvgParserLatency time.Duration
// ErrorRate is the percentage of failed operations (0-100)
ErrorRate float64
// PoolEfficiency is the pool reuse percentage (0-100)
PoolEfficiency float64
}
Summary contains aggregated performance statistics and calculated rates.
This structure provides high-level performance metrics derived from the raw MetricsSnapshot data. Use GetSummary() to obtain this information.
All rate calculations are based on the uptime duration.
Example:
summary := monitor.GetSummary()
fmt.Printf("Uptime: %v\n", summary.Uptime)
fmt.Printf("Operations/sec: %.0f\n", summary.OperationsPerSecond)
fmt.Printf("Error rate: %.2f%%\n", summary.ErrorRate)
func GetSummary ¶
func GetSummary() Summary
GetSummary returns an aggregated performance summary with calculated rates.
This function computes derived metrics from the raw counters:
- Operations per second (total operations / uptime)
- Tokens per second (total tokens / uptime)
- Average latencies (total duration / operation count)
- Overall error rate across tokenizer and parser
- Pool efficiency percentage
Returns a Summary struct with all calculated fields populated. Safe to call concurrently from multiple goroutines.
Example:
summary := monitor.GetSummary()
fmt.Printf("Summary:\n")
fmt.Printf(" Uptime: %v\n", summary.Uptime)
fmt.Printf(" Total Operations: %d\n", summary.TotalOperations)
fmt.Printf(" Operations/sec: %.0f\n", summary.OperationsPerSecond)
fmt.Printf(" Tokens/sec: %.0f\n", summary.TokensPerSecond)
fmt.Printf(" Avg Tokenizer Latency: %v\n", summary.AvgTokenizerLatency)
fmt.Printf(" Avg Parser Latency: %v\n", summary.AvgParserLatency)
fmt.Printf(" Error Rate: %.2f%%\n", summary.ErrorRate)
fmt.Printf(" Pool Efficiency: %.1f%%\n", summary.PoolEfficiency)