iris

package module · v1.0.0
Published: Sep 4, 2025 License: MPL-2.0 Imports: 18 Imported by: 6

README

Iris — High-Performance Structured Logging for Go

an AGILira fragment

Iris is an ultra-high performance, zero-allocation structured logging library for Go, built for applications that demand maximum throughput, enterprise security, and production-grade reliability — without compromising developer experience.


Key Features
  • Smart API: Zero-configuration setup with automatic optimization for your environment
  • Intelligent Auto-Scaling: Real-time switching between SingleRing and MPSC modes based on workload
  • Professional Security: Built-in sensitive data redaction and log injection protection
  • OpenTelemetry Integration: Native distributed tracing with automatic correlation and baggage propagation
  • Time Optimization: 121x faster timestamps with intelligent caching (timecache.CachedTime)
  • Hot Reload Configuration: Runtime configuration changes without service restarts
  • Advanced Idle Strategies: Progressive, spinning, sleeping, and hybrid strategies for optimal CPU usage
  • Backpressure Policies: Intelligent handling of high-load scenarios with multiple strategies

Installation

go get github.com/agilira/iris

Quick Start

import (
    "time"

    "github.com/agilira/iris"
)

// Smart API automatically configures everything optimally
logger, err := iris.New(iris.Config{})
if err != nil {
    panic(err)
}
logger.Start()
defer logger.Sync()

// Zero-allocation structured logging
logger.Info("User authenticated",
    iris.Str("user_id", "12345"),
    iris.Dur("response_time", 150*time.Millisecond),
    iris.Secret("api_key", apiKey))  // apiKey is automatically redacted

Complete Quick Start Guide: get running in 2 minutes with detailed examples.

Performance

Iris prioritizes performance without sacrificing developer experience. Through careful engineering of zero-allocation field encoding, intelligent time caching, and lock-free ring buffers, we've achieved consistent sub-50ns logging operations.

Rather than making claims, we invite you to run the benchmarks yourself and see how Iris performs in your specific workloads:

Logging a message and 10 fields:

Package     Time           vs iris     Allocations
iris        31 ns/op       +0%         0 allocs/op
zerolog     60 ns/op       +94%        0 allocs/op
zap         411 ns/op      +1,227%     1 allocs/op
slog        851 ns/op      +2,645%     11 allocs/op
go-kit      2,342 ns/op    +7,461%     36 allocs/op
apex/log    5,719 ns/op    +18,352%    35 allocs/op
log15       7,251 ns/op    +23,290%    42 allocs/op
logrus      8,396 ns/op    +26,990%    52 allocs/op

Logging with accumulated context (10 fields already present):

Package     Time           vs iris     Allocations
iris        11 ns/op       +0%         0 allocs/op
zerolog     25 ns/op       +127%       0 allocs/op
zap         108 ns/op      +882%       0 allocs/op
slog        173 ns/op      +1,473%     0 allocs/op
go-kit      919 ns/op      +8,255%     19 allocs/op
apex/log    2,851 ns/op    +25,827%    13 allocs/op
log15       3,396 ns/op    +30,782%    23 allocs/op
logrus      5,412 ns/op    +49,018%    35 allocs/op

Adding fields at log site:

Package     Time           vs iris     Allocations
iris        37 ns/op       +0%         0 allocs/op
zerolog     68 ns/op       +86%        0 allocs/op
zap         297 ns/op      +703%       1 allocs/op
slog        529 ns/op      +1,330%     7 allocs/op
go-kit      1,529 ns/op    +4,032%     28 allocs/op
apex/log    4,160 ns/op    +11,143%    24 allocs/op
log15       5,428 ns/op    +14,573%    34 allocs/op
logrus      6,256 ns/op    +16,813%    40 allocs/op

Architecture

Iris provides intelligent logging through Smart API optimization and security-first design:

graph TD
    A[Application] --> B[Smart API<br/>Auto-Configuration]
    B --> C[Logger Instance<br/>Zero-Config Setup]
    C --> D[ZephyrosLite MPSC<br/>Ring Buffer + Batching]
    D --> E[Field Processing<br/>Type-Safe + Security]
    E --> F[Encoder Selection<br/>JSON/Text/Console]
    F --> G[Output Writers<br/>File/Stdout/Custom]
    E --> H[Security Layer<br/>Redaction + Injection Protection]
    B --> I[Time Cache<br/>121x Performance Boost]
    
    classDef primary fill:#e1f5fe,stroke:#01579b,stroke-width:2px
    classDef secondary fill:#e8f5e8,stroke:#1b5e20,stroke-width:2px
    classDef security fill:#fce4ec,stroke:#880e4f,stroke-width:2px
    classDef performance fill:#fff3e0,stroke:#e65100,stroke-width:2px
    
    class A,G primary
    class B,C,F secondary
    class E,H security
    class D,I performance

Advanced Features

Auto-Scaling Architecture:

  • SingleRing Mode: 25ns/op for low-contention scenarios
  • MPSC Mode: 35ns/op per thread for high-contention workloads
  • Automatic switching based on write frequency, contention, latency, and goroutine count
  • Real-time metrics for optimal performance monitoring

OpenTelemetry Integration:

  • Automatic trace correlation with trace_id and span_id extraction
  • Baggage propagation for distributed context across services
  • Resource detection for service name, version, and environment
  • Zero-allocation performance using Iris's ContextExtractor pattern

Idle Strategies:

  • Progressive Strategy: Adaptive CPU usage (default, auto-optimized)
  • Spinning Strategy: Ultra-low latency with maximum CPU usage
  • Sleeping Strategy: Minimal CPU usage for low-throughput scenarios
  • Hybrid Strategy: Balanced approach with brief spinning then sleep

Hot Reload Configuration:

  • Runtime updates without service restarts
  • Comprehensive audit trails for compliance and security
  • Multi-format support for flexible configuration management
  • Production-ready with graceful error handling and fallbacks

Core Framework

Smart API - Zero Configuration

Auto-detection and configuration of architecture, capacity, encoder, and logging level without any setup.

Security-First Design
  • Secret Redaction: Automatic masking of sensitive data (passwords, API keys, tokens)
  • Injection Protection: Complete defense against log manipulation attacks
  • Field Sanitization: Safe handling of user input and dynamic keys

Multi-Format Output
  • JSON: Structured logging for production systems and log aggregation
  • Text: Human-readable format for development and debugging
  • Console: Color-coded output with intelligent TTY detection

Field Type System
  • Type-Safe Constructors: Strongly typed field creation (String, Int64, Duration, etc.)
  • Union Storage: Memory-efficient field storage with type indicators
  • Extensible Design: Support for custom types and serialization

// Type-safe field construction with automatic security
logger.Info("Payment processed",
    iris.Str("transaction_id", "tx-123456"),
    iris.Int64("amount_cents", 2499),
    iris.Dur("processing_time", time.Millisecond*45),
    iris.Secret("card_number", cardNumber),  // Automatically redacted
)

// Output (JSON): {"ts":"2025-09-03T10:00:00Z","level":"info","msg":"Payment processed","transaction_id":"tx-123456","amount_cents":2499,"processing_time":"45ms","card_number":"[REDACTED]"}

Performance: 324-537 ns/op encoding with 0 allocations per field
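
The union-storage idea can be illustrated with a minimal tagged struct. All names here are hypothetical; iris's actual Field layout is internal and richer than this sketch:

```go
package main

import "fmt"

type fieldKind uint8

const (
	kindString fieldKind = iota
	kindInt64
)

// field stores one value in a small tagged union: the kind indicator selects
// which member is live, so common scalar types need no interface{} boxing
// (and therefore no heap allocation).
type field struct {
	key  string
	kind fieldKind
	str  string
	i64  int64
}

func Str(key, v string) field         { return field{key: key, kind: kindString, str: v} }
func Int64(key string, v int64) field { return field{key: key, kind: kindInt64, i64: v} }

// String renders the live member according to the kind indicator.
func (f field) String() string {
	switch f.kind {
	case kindString:
		return fmt.Sprintf("%s=%q", f.key, f.str)
	default:
		return fmt.Sprintf("%s=%d", f.key, f.i64)
	}
}

func main() {
	fmt.Println(Str("user", "alice"))  // user="alice"
	fmt.Println(Int64("amount", 2499)) // amount=2499
}
```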

See also: the Auto-Scaling Architecture and OpenTelemetry Integration guides.

The Philosophy Behind Iris

In Greek mythology, Iris was the personification of the rainbow and messenger of the gods, known for her speed in delivering messages across vast distances while maintaining their integrity and beauty. She served as the link between heaven and earth, ensuring that divine communications reached their intended recipients without distortion.

This embodies Iris' design philosophy: lightning-fast delivery of structured log messages while preserving their integrity through security features and maintaining beauty through readable output formats. The library provides a reliable bridge between your application's events and their destinations, ensuring that critical information flows seamlessly without performance degradation or security vulnerabilities.

Iris doesn't just log events—it delivers them with the speed of light while safeguarding their content and maintaining clarity for human and machine consumption alike.


License

Iris is licensed under the Mozilla Public License 2.0.



Documentation

Overview

Package iris provides a high-performance, structured logging library for Go applications.

Iris is designed for production environments where performance, security, and reliability are critical. It offers zero-allocation logging paths, automatic memory management, and comprehensive security features including secure field handling and log injection prevention.

Key Features

  • Smart API with zero-configuration setup and automatic optimization
  • High-performance structured logging with zero-allocation fast paths
  • Automatic memory management with buffer pooling and ring buffer architecture
  • Comprehensive security features including field sanitization and injection prevention
  • Multiple output formats: JSON, text, and console with smart formatting
  • Dynamic configuration with hot-reload capabilities
  • Built-in caller information and stack trace support
  • Backpressure handling and automatic scaling
  • OpenTelemetry integration support
  • Extensive field types with type-safe APIs

Smart API - Zero Configuration

The revolutionary Smart API automatically detects optimal settings for your environment:

// Smart API: Everything auto-configured
logger, err := iris.New(iris.Config{})
logger.Start()
logger.Info("Hello world", iris.String("user", "alice"))

Smart features include:

  • Architecture detection (SingleRing vs ThreadedRings based on CPU count)
  • Capacity optimization (8KB per CPU core, bounded 8KB-64KB)
  • Encoder selection (Text for development, JSON for production)
  • Level detection (from environment or development mode)
  • Time optimization (121x faster cached time)
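
The capacity heuristic above ("8KB per CPU core, bounded 8KB-64KB") can be sketched directly. This is a hypothetical reconstruction of the documented rule; the actual detection code inside iris may differ:

```go
package main

import (
	"fmt"
	"runtime"
)

// autoCapacity mirrors the documented heuristic: 8KB per CPU core,
// clamped to the [8KB, 64KB] range.
func autoCapacity(numCPU int) int {
	const perCore = 8 * 1024
	const lo, hi = 8 * 1024, 64 * 1024
	c := numCPU * perCore
	if c < lo {
		c = lo
	}
	if c > hi {
		c = hi
	}
	return c
}

func main() {
	fmt.Println(autoCapacity(1))  // 8192: lower bound
	fmt.Println(autoCapacity(16)) // 65536: clamped to upper bound
	fmt.Println(autoCapacity(runtime.NumCPU()))
}
```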

Quick Start

Basic usage with Smart API (recommended):

logger, err := iris.New(iris.Config{})
if err != nil {
	panic(err)
}
logger.Start()
defer logger.Sync()

logger.Info("Application started", iris.String("version", "1.0.0"))

Development mode with debug logging:

logger, err := iris.New(iris.Config{}, iris.Development())
logger.Start()
logger.Debug("Debug information visible")

Configuration

While Smart API handles most scenarios, you can override specific settings:

// Override only what you need, rest is auto-detected
config := iris.Config{
	Output: myCustomWriter,  // Custom output
	Level:  iris.ErrorLevel, // Error level only
	// Everything else: auto-optimized
}
logger, err := iris.New(config)

Environment variable support:

export IRIS_LEVEL=debug  # Automatically detected by Smart API

Performance Optimizations

Iris includes several performance optimizations automatically enabled by Smart API:

  • Time caching for high-frequency logging scenarios (121x faster than time.Now())
  • Buffer pooling to minimize garbage collection
  • Ring buffer architecture for lock-free writes
  • Smart idle strategies for CPU optimization
  • Zero-allocation fast paths for common operations
  • Architecture auto-detection based on system resources

Security Features

Security is built into every aspect of Iris:

  • Field sanitization prevents log injection attacks
  • Secret field redaction protects sensitive data
  • Caller verification prevents stack manipulation
  • Safe string handling prevents buffer overflows
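
The redaction guarantee can be illustrated with a redact-at-construction sketch: the sensitive value is replaced before it ever reaches an encoder or output buffer, so no downstream code path can accidentally log it. The names below are hypothetical, not iris's implementation:

```go
package main

import "fmt"

// secretField holds only the redaction placeholder; the real value is
// discarded at construction time.
type secretField struct {
	key   string
	value string
}

// Secret deliberately drops the sensitive value and stores a fixed marker.
func Secret(key, _ string) secretField {
	return secretField{key: key, value: "[REDACTED]"}
}

func main() {
	f := Secret("api_key", "sk-live-123456")
	fmt.Printf("%s=%s\n", f.key, f.value) // api_key=[REDACTED]
}
```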

Field Types

Iris supports a comprehensive set of field types with type-safe constructors:

logger.Info("User operation",
	iris.String("user_id", "12345"),
	iris.Int64("timestamp", time.Now().Unix()),
	iris.Duration("elapsed", time.Since(start)),
	iris.Error("error", err),
	iris.Secret("password", "[REDACTED]"),
)

Advanced Usage

For advanced scenarios, Iris provides:

  • Custom encoders for specialized output formats
  • Hierarchical loggers with inherited fields
  • Sampling for high-volume scenarios
  • Integration with monitoring systems
  • Custom sink implementations
  • Manual configuration overrides when needed

Error Handling

Iris uses non-blocking error handling to maintain performance:

logger, err := iris.New(iris.Config{})
if err != nil {
	// Handle configuration errors
}
logger.Start()

if dropped := logger.Dropped(); dropped > 0 {
	// Handle dropped log entries
}

Performance Comparison

Smart API delivers significant performance improvements:

  • Hot Path Allocations: 1-3 allocs/op (67% reduction)
  • Encoding Performance: 324-537 ns/op (40-60% improvement)
  • Memory per Record: 2.5KB (75% reduction)
  • Configuration: Zero lines vs 15-20 lines manually

Best Practices

  • Use Smart API for all new projects (iris.New(iris.Config{}))
  • Prefer structured fields instead of formatted messages
  • Use typed field constructors (String, Int64, etc.)
  • Leverage environment variables for deployment configuration
  • Monitor dropped log entries in high-load scenarios
  • Use iris.Development() for local development
  • Use iris.Secret() for sensitive data fields

For comprehensive documentation and examples, see: https://github.com/agilira/iris

Index

Constants

const (
	// Core logging errors
	ErrCodeLoggerCreation errors.ErrorCode = "IRIS_LOGGER_CREATION"
	ErrCodeLoggerNotFound errors.ErrorCode = "IRIS_LOGGER_NOT_FOUND"
	ErrCodeLoggerDisabled errors.ErrorCode = "IRIS_LOGGER_DISABLED"
	ErrCodeLoggerClosed   errors.ErrorCode = "IRIS_LOGGER_CLOSED"

	// Configuration errors
	ErrCodeInvalidConfig errors.ErrorCode = "IRIS_INVALID_CONFIG"
	ErrCodeInvalidLevel  errors.ErrorCode = "IRIS_INVALID_LEVEL"
	ErrCodeInvalidFormat errors.ErrorCode = "IRIS_INVALID_FORMAT"
	ErrCodeInvalidOutput errors.ErrorCode = "IRIS_INVALID_OUTPUT"

	// Field and encoding errors
	ErrCodeInvalidField      errors.ErrorCode = "IRIS_INVALID_FIELD"
	ErrCodeEncodingFailed    errors.ErrorCode = "IRIS_ENCODING_FAILED"
	ErrCodeFieldTypeMismatch errors.ErrorCode = "IRIS_FIELD_TYPE_MISMATCH"
	ErrCodeBufferOverflow    errors.ErrorCode = "IRIS_BUFFER_OVERFLOW"

	// Writer and output errors
	ErrCodeWriterNotAvailable errors.ErrorCode = "IRIS_WRITER_NOT_AVAILABLE"
	ErrCodeWriteFailed        errors.ErrorCode = "IRIS_WRITE_FAILED"
	ErrCodeFlushFailed        errors.ErrorCode = "IRIS_FLUSH_FAILED"
	ErrCodeSyncFailed         errors.ErrorCode = "IRIS_SYNC_FAILED"

	// Performance and resource errors
	ErrCodeMemoryAllocation errors.ErrorCode = "IRIS_MEMORY_ALLOCATION"
	ErrCodePoolExhausted    errors.ErrorCode = "IRIS_POOL_EXHAUSTED"
	ErrCodeTimeout          errors.ErrorCode = "IRIS_TIMEOUT"
	ErrCodeResourceLimit    errors.ErrorCode = "IRIS_RESOURCE_LIMIT"

	// Ring buffer errors
	ErrCodeRingInvalidCapacity  errors.ErrorCode = "IRIS_RING_INVALID_CAPACITY"
	ErrCodeRingInvalidBatchSize errors.ErrorCode = "IRIS_RING_INVALID_BATCH_SIZE"
	ErrCodeRingMissingProcessor errors.ErrorCode = "IRIS_RING_MISSING_PROCESSOR"
	ErrCodeRingClosed           errors.ErrorCode = "IRIS_RING_CLOSED"
	ErrCodeRingBuildFailed      errors.ErrorCode = "IRIS_RING_BUILD_FAILED"

	// Hook and middleware errors
	ErrCodeHookExecution   errors.ErrorCode = "IRIS_HOOK_EXECUTION"
	ErrCodeMiddlewareChain errors.ErrorCode = "IRIS_MIDDLEWARE_CHAIN"
	ErrCodeFilterFailed    errors.ErrorCode = "IRIS_FILTER_FAILED"

	// File and rotation errors
	ErrCodeFileOpen         errors.ErrorCode = "IRIS_FILE_OPEN"
	ErrCodeFileWrite        errors.ErrorCode = "IRIS_FILE_WRITE"
	ErrCodeFileRotation     errors.ErrorCode = "IRIS_FILE_ROTATION"
	ErrCodePermissionDenied errors.ErrorCode = "IRIS_PERMISSION_DENIED"
)

LoggerError codes: error codes specific to the iris logging library

const ErrCodeLoggerExecution errors.ErrorCode = "IRIS_LOGGER_EXECUTION"

ErrCodeLoggerExecution represents the error code for logger execution failures

Variables

var (
	// ErrLoggerNotStarted is returned when logging operations are attempted on a non-started logger
	ErrLoggerNotStarted = errors.New(ErrCodeLoggerNotFound, "logger not started - call Start() first")

	// ErrLoggerClosed is returned when logging operations are attempted on a closed logger
	ErrLoggerClosed = errors.New(ErrCodeLoggerClosed, "logger is closed")

	// ErrLoggerCreationFailed is returned when logger creation fails
	ErrLoggerCreationFailed = errors.New(ErrCodeLoggerCreation, "failed to create logger")
)

Logger errors

var BalancedStrategy = NewProgressiveIdleStrategy()

BalancedStrategy provides good performance for most production workloads. Uses progressive strategy that adapts to workload patterns. Equivalent to NewProgressiveIdleStrategy().

var DefaultContextExtractor = &ContextExtractor{
	Keys: map[ContextKey]string{
		RequestIDKey: "request_id",
		TraceIDKey:   "trace_id",
		SpanIDKey:    "span_id",
		UserIDKey:    "user_id",
		SessionIDKey: "session_id",
	},
	MaxDepth: 10,
}

DefaultContextExtractor provides sensible defaults for common use cases.

var EfficientStrategy = NewSleepingIdleStrategy(time.Millisecond, 0)

EfficientStrategy minimizes CPU usage for low-throughput scenarios. Uses 1ms sleep with no initial spinning. Equivalent to NewSleepingIdleStrategy(time.Millisecond, 0).

var HybridStrategy = NewSleepingIdleStrategy(time.Millisecond, 1000)

HybridStrategy provides a good compromise between latency and CPU usage. Spins briefly then sleeps for 1ms. Equivalent to NewSleepingIdleStrategy(time.Millisecond, 1000).

var SpinningStrategy = NewSpinningIdleStrategy()

SpinningStrategy provides ultra-low latency with maximum CPU usage. Equivalent to NewSpinningIdleStrategy().

Functions

func AllLevelNames

func AllLevelNames() []string

AllLevelNames returns a slice of all valid level names. This is useful for generating help text and validation messages.

func CIFriendlyRetryCount

func CIFriendlyRetryCount(normalRetries int) int

CIFriendlyRetryCount returns an appropriate retry count for the given operation. In CI environments, retry counts are increased to account for scheduler variability.

func CIFriendlySleep

func CIFriendlySleep(normalDuration time.Duration)

CIFriendlySleep sleeps for an appropriate duration. In CI environments, sleep durations are increased to allow for slower scheduling.

func CIFriendlyTimeout

func CIFriendlyTimeout(normalTimeout time.Duration) time.Duration

CIFriendlyTimeout returns an appropriate timeout for the given operation. In CI environments, timeouts are increased to account for resource constraints.

func FreeStack

func FreeStack(stack *Stack)

FreeStack returns a Stack to the pool for reuse

func GetErrorCode

func GetErrorCode(err error) errors.ErrorCode

GetErrorCode extracts the error code from an error

func GetUserMessage

func GetUserMessage(err error) string

GetUserMessage extracts a user-friendly message from an error

func IsCIEnvironment

func IsCIEnvironment() bool

IsCIEnvironment returns true if running in a CI environment

func IsFileSyncer

func IsFileSyncer(ws WriteSyncer) bool

IsFileSyncer checks if a WriteSyncer is backed by a file. This can be useful for conditional logic based on the underlying writer type, such as applying different buffering strategies.

func IsLoggerError

func IsLoggerError(err error, code errors.ErrorCode) bool

IsLoggerError checks if an error is an iris logger error

func IsNopSyncer

func IsNopSyncer(ws WriteSyncer) bool

IsNopSyncer checks if a WriteSyncer uses no-op synchronization. This can help optimize write patterns when sync operations are known to be no-ops.

func IsRetryableError

func IsRetryableError(err error) bool

IsRetryableError checks if an error is retryable

func IsValidLevel

func IsValidLevel(level Level) bool

IsValidLevel checks if the given level is a valid predefined level.

func NewAtomicLevelFromConfig

func NewAtomicLevelFromConfig(config *Config) *atomicLevel

NewAtomicLevelFromConfig creates a new atomicLevel initialized with the config's level. This function bridges the gap between static configuration and dynamic level management.

func NewLoggerError

func NewLoggerError(code errors.ErrorCode, message string) *errors.Error

NewLoggerError creates a new logger-specific error with standard context

func NewLoggerErrorWithField

func NewLoggerErrorWithField(code errors.ErrorCode, message, field, value string) *errors.Error

NewLoggerErrorWithField creates a logger error with field and value information

func RecoverWithError

func RecoverWithError(code errors.ErrorCode) *errors.Error

RecoverWithError recovers from a panic and converts it to a logger error

func SafeExecute

func SafeExecute(fn func() error, operation string) error

SafeExecute executes a function safely, handling any panics

func SetErrorHandler

func SetErrorHandler(handler ErrorHandler)

SetErrorHandler sets a custom error handler for the iris logging system. This allows applications to customize how logging errors are handled.

func WrapLoggerError

func WrapLoggerError(originalErr error, code errors.ErrorCode, message string) *errors.Error

WrapLoggerError wraps an existing error with logger-specific context

Types

type Architecture

type Architecture int

Architecture represents the ring buffer architecture type

const (
	// SingleRing uses a single Zephyros ring for maximum single-thread performance
	// Best for: benchmarks, single-producer scenarios, maximum single-thread throughput
	// Performance: ~25ns/op single-thread, limited concurrency scaling
	SingleRing Architecture = iota

	// ThreadedRings uses ThreadedZephyros with multiple rings for multi-producer scaling
	// Best for: production, multi-producer scenarios, high concurrency
	// Performance: ~35ns/op per thread, excellent scaling (4x+ improvement with multiple producers)
	ThreadedRings
)

func ParseArchitecture

func ParseArchitecture(s string) (Architecture, error)

ParseArchitecture parses a string into an Architecture

func (Architecture) String

func (a Architecture) String() string

String returns the string representation of the architecture

type AtomicLevel

type AtomicLevel struct {
	// contains filtered or unexported fields
}

AtomicLevel provides atomic operations on Level values. This is useful for dynamically changing log levels in concurrent environments.

func NewAtomicLevel

func NewAtomicLevel(level Level) *AtomicLevel

NewAtomicLevel creates a new AtomicLevel with the given initial level.

func (*AtomicLevel) Enabled

func (al *AtomicLevel) Enabled(level Level) bool

Enabled checks if the given level is enabled atomically. This is a high-performance method for checking levels in hot paths.
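
The pattern behind this hot-path check is a single atomic load with no locks. A self-contained sketch (hypothetical standalone version, using int32 levels rather than iris's Level type):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// atomicLevel sketches lock-free dynamic level filtering: writers call
// SetLevel at any time, readers do one atomic load per log call.
type atomicLevel struct{ v atomic.Int32 }

func (l *atomicLevel) SetLevel(lv int32)     { l.v.Store(lv) }
func (l *atomicLevel) Enabled(lv int32) bool { return lv >= l.v.Load() }

func main() {
	const (
		debug int32 = iota
		info
		warn
	)
	var l atomicLevel
	l.SetLevel(info)
	fmt.Println(l.Enabled(debug)) // false: below the current threshold
	fmt.Println(l.Enabled(warn))  // true
}
```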

func (*AtomicLevel) Level

func (al *AtomicLevel) Level() Level

Level returns the current level atomically.

func (*AtomicLevel) MarshalText

func (al *AtomicLevel) MarshalText() ([]byte, error)

MarshalText implements encoding.TextMarshaler for AtomicLevel.

func (*AtomicLevel) SetLevel

func (al *AtomicLevel) SetLevel(level Level)

SetLevel sets the level atomically.

func (*AtomicLevel) String

func (al *AtomicLevel) String() string

String returns the string representation of the current level.

func (*AtomicLevel) UnmarshalText

func (al *AtomicLevel) UnmarshalText(b []byte) error

UnmarshalText implements encoding.TextUnmarshaler for AtomicLevel.

type AutoScalingConfig

type AutoScalingConfig struct {
	// Scaling thresholds (inspired by Lethe's shouldScaleToMPSC)
	ScaleToMPSCWriteThreshold   uint64        // Min writes/sec to consider MPSC (e.g., 1000)
	ScaleToMPSCContentionRatio  uint32        // Min contention % to scale to MPSC (e.g., 10 = 10%)
	ScaleToMPSCLatencyThreshold time.Duration // Max latency before scaling to MPSC (e.g., 1ms)
	ScaleToMPSCGoroutineCount   uint32        // Min active goroutines for MPSC (e.g., 3)

	// Scale down thresholds
	ScaleToSingleWriteThreshold  uint64        // Max writes/sec to scale back to Single (e.g., 100)
	ScaleToSingleContentionRatio uint32        // Max contention % for Single mode (e.g., 1%)
	ScaleToSingleLatencyMax      time.Duration // Max latency for Single mode (e.g., 100µs)

	// Measurement and stability
	MeasurementWindow    time.Duration // How often to check metrics (e.g., 100ms)
	ScalingCooldown      time.Duration // Min time between scale operations (e.g., 1s)
	StabilityRequirement int           // Consecutive measurements before scaling (e.g., 3)
}

AutoScalingConfig defines auto-scaling behavior

func DefaultAutoScalingConfig

func DefaultAutoScalingConfig() AutoScalingConfig

DefaultAutoScalingConfig returns production-ready auto-scaling configuration

type AutoScalingLogger

type AutoScalingLogger struct {
	// contains filtered or unexported fields
}

AutoScalingLogger implements an auto-scaling logging architecture

func NewAutoScalingLogger

func NewAutoScalingLogger(cfg Config, scalingConfig AutoScalingConfig, opts ...Option) (*AutoScalingLogger, error)

NewAutoScalingLogger creates an auto-scaling logger

func (*AutoScalingLogger) Close

func (asl *AutoScalingLogger) Close() error

Close gracefully shuts down auto-scaling logger

func (*AutoScalingLogger) Debug

func (asl *AutoScalingLogger) Debug(msg string, fields ...Field)

Debug logs at Debug level with automatic scaling

func (*AutoScalingLogger) Error

func (asl *AutoScalingLogger) Error(msg string, fields ...Field)

Error logs at Error level with automatic scaling

func (*AutoScalingLogger) GetCurrentMode

func (asl *AutoScalingLogger) GetCurrentMode() AutoScalingMode

GetCurrentMode returns the current scaling mode

func (*AutoScalingLogger) GetScalingStats

func (asl *AutoScalingLogger) GetScalingStats() AutoScalingStats

GetScalingStats returns auto-scaling performance statistics

func (*AutoScalingLogger) Info

func (asl *AutoScalingLogger) Info(msg string, fields ...Field)

Info logs at Info level with automatic scaling

func (*AutoScalingLogger) Start

func (asl *AutoScalingLogger) Start() error

Start begins auto-scaling operations

func (*AutoScalingLogger) Warn

func (asl *AutoScalingLogger) Warn(msg string, fields ...Field)

Warn logs at Warn level with automatic scaling

type AutoScalingMetrics

type AutoScalingMetrics struct {
	// contains filtered or unexported fields
}

AutoScalingMetrics tracks performance metrics for scaling decisions

type AutoScalingMode

type AutoScalingMode uint32

AutoScalingMode represents the current scaling mode

const (
	// SingleRingMode represents ultra-fast single-threaded logging (~25ns/op)
	// Best for: Low contention, single producers, benchmarks
	SingleRingMode AutoScalingMode = iota

	// MPSCMode represents multi-producer high-contention mode (~35ns/op per thread)
	// Best for: High contention, multiple goroutines, high throughput
	MPSCMode
)

func (AutoScalingMode) String

func (m AutoScalingMode) String() string

type AutoScalingStats

type AutoScalingStats struct {
	CurrentMode          AutoScalingMode
	TotalScaleOperations uint64
	ScaleToMPSCCount     uint64
	ScaleToSingleCount   uint64
	TotalWrites          uint64
	ContentionCount      uint64
	ActiveGoroutines     uint32
}

AutoScalingStats provides auto-scaling performance insights

type Config

type Config struct {
	// Ring buffer configuration (power-of-two recommended for Capacity)
	// Capacity determines the maximum number of log entries that can be buffered
	// before blocking or dropping occurs. Larger values improve throughput but
	// increase memory usage.
	Capacity int64

	// BatchSize controls how many log entries are processed together.
	// Higher values improve throughput but may increase latency.
	// Optimal values are typically 8-64 depending on workload.
	BatchSize int64

	// Architecture determines the ring buffer architecture type
	// SingleRing: Maximum single-thread performance (~25ns/op) - best for benchmarks
	// ThreadedRings: Multi-producer scaling (~35ns/op per thread) - best for production
	// Default: SingleRing for benchmark compatibility
	Architecture Architecture

	// NumRings specifies the number of rings for ThreadedRings architecture
	// Only used when Architecture = ThreadedRings
	// Higher values provide better parallelism but use more memory
	// Default: 4 (optimal for most multi-core systems)
	NumRings int

	// BackpressurePolicy determines the behavior when the ring buffer is full
	// DropOnFull: Drops new messages for maximum performance (default)
	// BlockOnFull: Blocks caller until space is available (guaranteed delivery)
	BackpressurePolicy zephyroslite.BackpressurePolicy

	// IdleStrategy controls CPU usage when no log records are being processed
	// Different strategies provide various trade-offs between latency and CPU usage:
	// - SpinningIdleStrategy: Ultra-low latency, ~100% CPU usage
	// - SleepingIdleStrategy: Balanced CPU/latency, ~1-10% CPU usage
	// - YieldingIdleStrategy: Moderate reduction, ~10-50% CPU usage
	// - ChannelIdleStrategy: Minimal CPU usage, ~microsecond latency
	// - ProgressiveIdleStrategy: Adaptive strategy for variable workloads (default)
	IdleStrategy zephyroslite.IdleStrategy

	// Output and formatting configuration
	// Output specifies where log entries are written. Must implement WriteSyncer
	// for proper synchronization guarantees.
	Output WriteSyncer

	// Encoder determines the output format (JSON, Console, etc.)
	// The encoder converts log records to their final byte representation
	Encoder Encoder

	// Level sets the minimum logging level. Messages below this level
	// are filtered out early for maximum performance.
	Level Level // default: Info

	// TimeFn allows custom time source for timestamps.
	// Default: time.Now for real-time logging
	// Can be overridden for testing or performance optimization
	TimeFn func() time.Time

	// Optional performance tuning
	// Sampler controls log sampling for high-volume scenarios
	// Can be nil to disable sampling
	Sampler Sampler

	// Name provides a human-readable identifier for this logger instance
	// Useful for debugging and metrics collection
	Name string
}

Config represents the core configuration for an iris logger instance. This structure centralizes all logging parameters with intelligent defaults and performance optimizations. All fields are designed for zero-copy operations and minimal memory allocation.

Performance considerations:

  • Capacity should be a power-of-two for optimal ring buffer performance
  • BatchSize affects throughput vs latency trade-offs
  • TimeFn allows for custom time sources (useful for testing and optimization)

Thread-safety: Config structs are immutable after logger creation
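
The two BackpressurePolicy behaviors can be sketched with a bounded channel. This is illustrative only; iris uses its ring buffer, not a channel, and the function names here are hypothetical:

```go
package main

import "fmt"

// tryPublish models DropOnFull: a non-blocking send that reports whether
// the entry was accepted or dropped.
func tryPublish(buf chan string, msg string) bool {
	select {
	case buf <- msg:
		return true
	default:
		return false // buffer full: drop rather than block the caller
	}
}

func main() {
	buf := make(chan string, 2)
	fmt.Println(tryPublish(buf, "a")) // true
	fmt.Println(tryPublish(buf, "b")) // true
	fmt.Println(tryPublish(buf, "c")) // false: capacity 2 exhausted

	// BlockOnFull would instead be a plain send, buf <- "c", which waits
	// until the consumer frees a slot (guaranteed delivery, added latency).
}
```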

func LoadConfigFromEnv

func LoadConfigFromEnv() (*Config, error)

LoadConfigFromEnv loads logger configuration from environment variables

func LoadConfigFromJSON

func LoadConfigFromJSON(filename string) (*Config, error)

LoadConfigFromJSON loads logger configuration from a JSON file

func LoadConfigMultiSource

func LoadConfigMultiSource(jsonFile string) (*Config, error)

LoadConfigMultiSource loads configuration from multiple sources with precedence:

 1. Environment variables (highest priority)
 2. JSON file
 3. Default values (lowest priority)

func (*Config) Clone

func (c *Config) Clone() *Config

Clone creates a deep copy of the configuration. This is useful for creating derived configurations without affecting the original.

func (*Config) GetStats

func (c *Config) GetStats() *stats

GetStats creates a new stats instance for tracking logger metrics. This factory function ensures proper initialization of all atomic counters.

func (*Config) Validate

func (c *Config) Validate() error

Validate checks the configuration for common errors and returns an error if the configuration is invalid. This helps catch configuration issues early before logger creation.

Performance: Fast validation with early returns for common cases

type ConsoleEncoder

type ConsoleEncoder struct {
	// TimeFormat specifies the time layout for timestamps (default: time.RFC3339Nano)
	TimeFormat string

	// LevelCasing controls level text casing: "upper" (default) or "lower"
	LevelCasing string

	// EnableColor enables ANSI color codes for different log levels (default: false)
	EnableColor bool
}

ConsoleEncoder implements an encoder for human-readable console output. It provides configurable time formatting, level casing, and optional ANSI color support for development and debugging environments.

func NewColorConsoleEncoder

func NewColorConsoleEncoder() *ConsoleEncoder

NewColorConsoleEncoder creates a new console encoder with ANSI colors enabled. This is useful for development environments where color output enhances readability.

func NewConsoleEncoder

func NewConsoleEncoder() *ConsoleEncoder

NewConsoleEncoder creates a new console encoder with default settings. By default, it uses RFC3339Nano time format, uppercase level casing, and no colors.

func (*ConsoleEncoder) Encode

func (e *ConsoleEncoder) Encode(rec *Record, now time.Time, buf *bytes.Buffer)

Encode writes a log record to the buffer in console-friendly format. The output format is: timestamp level message key=value key=value...
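The line shape can be illustrated with a small stand-alone sketch; consoleLine is hypothetical and deliberately simpler than iris's encoder:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// consoleLine builds the format described above:
// timestamp level message key=value key=value...
func consoleLine(ts, level, msg string, kv map[string]string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "%s %s %s", ts, strings.ToUpper(level), msg)
	keys := make([]string, 0, len(kv))
	for k := range kv {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic field order for readability
	for _, k := range keys {
		fmt.Fprintf(&b, " %s=%s", k, kv[k])
	}
	return b.String()
}

func main() {
	fmt.Println(consoleLine("2025-09-04T10:00:00Z", "info", "User authenticated",
		map[string]string{"user_id": "12345"}))
}
```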

type ContextExtractor

type ContextExtractor struct {
	// Keys maps context keys to field names in log output
	Keys map[ContextKey]string

	// MaxDepth limits how deep to search in context chain (default: 10)
	MaxDepth int
}

ContextExtractor defines which context keys should be extracted and logged. This prevents the performance overhead of scanning all context values.

type ContextKey

type ContextKey string

ContextKey represents a key type for context values that should be logged.

const (
	RequestIDKey ContextKey = "request_id"
	TraceIDKey   ContextKey = "trace_id"
	SpanIDKey    ContextKey = "span_id"
	UserIDKey    ContextKey = "user_id"
	SessionIDKey ContextKey = "session_id"
)

Common context keys for standardized logging

type ContextLogger

type ContextLogger struct {
	// contains filtered or unexported fields
}

ContextLogger wraps a Logger with pre-extracted context fields. This avoids context.Value() calls in the hot logging path.

func (*ContextLogger) Debug

func (cl *ContextLogger) Debug(msg string, fields ...Field)

Debug logs a message at debug level with context fields

func (*ContextLogger) Error

func (cl *ContextLogger) Error(msg string, fields ...Field)

Error logs a message at error level with context fields

func (*ContextLogger) Fatal

func (cl *ContextLogger) Fatal(msg string, fields ...Field)

Fatal logs a message at fatal level with context fields and exits

func (*ContextLogger) Info

func (cl *ContextLogger) Info(msg string, fields ...Field)

Info logs a message at info level with context fields

func (*ContextLogger) Warn

func (cl *ContextLogger) Warn(msg string, fields ...Field)

Warn logs a message at warn level with context fields

func (*ContextLogger) With

func (cl *ContextLogger) With(fields ...Field) *ContextLogger

With creates a new ContextLogger with additional fields. This preserves both context fields and manually added fields.

func (*ContextLogger) WithAdditionalContext

func (cl *ContextLogger) WithAdditionalContext(ctx context.Context, extractor *ContextExtractor) *ContextLogger

WithAdditionalContext extracts additional context values without losing existing ones.

type Depth

type Depth int

Depth specifies how deep a stack trace should be captured

const (
	// FirstFrame captures only the first frame (caller info)
	FirstFrame Depth = iota
	// FullStack captures the entire call stack
	FullStack
)

type DynamicConfigWatcher

type DynamicConfigWatcher struct {
	// contains filtered or unexported fields
}

DynamicConfigWatcher manages dynamic configuration changes using Argus. It provides real-time hot reload of the Iris logger configuration with an audit trail.

func EnableDynamicLevel

func EnableDynamicLevel(logger *Logger, configPath string) (*DynamicConfigWatcher, error)

EnableDynamicLevel creates and starts a config watcher for the given logger and config file. This is a convenience function that combines NewDynamicConfigWatcher and Start.

Example:

logger, err := iris.New(config)
if err != nil {
    return err
}

watcher, err := iris.EnableDynamicLevel(logger, "config.json")
if err != nil {
    log.Printf("Dynamic level disabled: %v", err)
} else {
    defer watcher.Stop()
    log.Println("✅ Dynamic level changes enabled!")
}

func NewDynamicConfigWatcher

func NewDynamicConfigWatcher(configPath string, atomicLevel *AtomicLevel) (*DynamicConfigWatcher, error)

NewDynamicConfigWatcher creates a new dynamic config watcher for an iris logger. This enables runtime log level changes by watching the configuration file.

Parameters:

  • configPath: Path to the JSON configuration file to watch
  • atomicLevel: The atomic level instance from iris logger

Example usage:

logger, err := iris.New(config)
if err != nil {
    return err
}

watcher, err := iris.NewDynamicConfigWatcher("config.json", logger.Level())
if err != nil {
    return err
}
defer watcher.Stop()

if err := watcher.Start(); err != nil {
    return err
}

Now, when you modify config.json and change the "level" field, the logger automatically updates its level without a restart.

func (*DynamicConfigWatcher) IsRunning

func (w *DynamicConfigWatcher) IsRunning() bool

IsRunning returns true if the watcher is currently active

func (*DynamicConfigWatcher) Start

func (w *DynamicConfigWatcher) Start() error

Start begins watching the configuration file for changes

func (*DynamicConfigWatcher) Stop

func (w *DynamicConfigWatcher) Stop() error

Stop stops watching the configuration file

type Encoder

type Encoder interface {
	Encode(rec *Record, now time.Time, buf *bytes.Buffer)
}

Encoder is the abstract encoding interface (it also allows future binary encoders).

type ErrorHandler

type ErrorHandler func(err *errors.Error)

ErrorHandler represents a function that handles errors within the logging system

func GetErrorHandler

func GetErrorHandler() ErrorHandler

GetErrorHandler returns the current error handler

type Field

type Field struct {
	// K is the field key/name
	K string
	// T indicates the type of data stored in this field
	T kind
	// I64 stores signed integers, bools (as 0/1), durations, and timestamps
	I64 int64
	// U64 stores unsigned integers
	U64 uint64
	// F64 stores floating-point numbers
	F64 float64
	// Str stores string values
	Str string
	// B stores byte slices
	B []byte
	// Obj stores arbitrary objects (errors, stringers, etc.)
	Obj interface{}
}

Field represents a key-value pair with type information for structured logging. It uses a union-like approach to minimize memory allocation and maximize performance. The T field indicates which of the value fields (I64, U64, F64, Str, B, Obj) contains the actual data.
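The union-like layout can be illustrated with a cut-down sketch; field and boolField here are simplified stand-ins, not the iris types:

```go
package main

import "fmt"

// field shows the idea behind the union-like layout: a bool occupies the
// I64 slot (1 or 0), so no interface boxing or allocation is needed.
type field struct {
	K   string
	I64 int64
}

func boolField(k string, v bool) field {
	var i int64
	if v {
		i = 1
	}
	return field{K: k, I64: i}
}

func (f field) boolValue() bool { return f.I64 == 1 }

func main() {
	f := boolField("authenticated", true)
	fmt.Println(f.K, f.boolValue())
}
```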

func Binary

func Binary(k string, v []byte) Field

Binary creates a byte slice field (alias for Bytes).

func Bool

func Bool(k string, v bool) Field

Bool creates a boolean field. Internally stored as int64 (1 for true, 0 for false) for efficiency.

func Bytes

func Bytes(k string, v []byte) Field

Bytes creates a byte slice field. Useful for binary data, encoded strings, or raw bytes.

func Dur

func Dur(k string, v time.Duration) Field

Dur creates a duration field from time.Duration. Stored as int64 nanoseconds for precision and efficiency.

func Err

func Err(err error) Field

Err creates an error field with key "error". If err is nil, returns a field with empty string (compatible but not elided).

func ErrorField

func ErrorField(err error) Field

ErrorField creates an error field for logging errors. Equivalent to NamedErr("error", err) but uses the proper error type for potential optimization.

func Errors

func Errors(k string, errs []error) Field

Errors creates a field for multiple errors (like Zap's ErrorsField).

func Float32

func Float32(k string, v float32) Field

Float32 creates a field from a float32 value.

func Float64

func Float64(k string, v float64) Field

Float64 creates a 64-bit floating-point field. Suitable for decimal numbers and scientific notation.

func Int

func Int(k string, v int) Field

Int creates a signed integer field from an int value. The int is converted to int64 for consistent storage.

func Int16

func Int16(k string, v int16) Field

Int16 creates a field from an int16 value.

func Int32

func Int32(k string, v int32) Field

Int32 creates a field from an int32 value.

func Int64

func Int64(k string, v int64) Field

Int64 creates a signed 64-bit integer field. Use this for large integers or when you specifically need int64.

func Int8

func Int8(k string, v int8) Field

Int8 creates a field from an int8 value.

func NamedErr

func NamedErr(k string, err error) Field

NamedErr creates an error field with a custom key. If err is nil, returns a field with empty string (compatible but not elided).

func NamedError

func NamedError(k string, err error) Field

NamedError creates an error field with a custom key using proper error type.

func Object

func Object(k string, val interface{}) Field

Object creates an object field for arbitrary data.

func Secret

func Secret(k, v string) Field

Secret creates a field for sensitive data that will be automatically redacted. The actual value is stored but will appear as "[REDACTED]" in log output. Use this for passwords, API keys, tokens, personal data, or any sensitive information.

Example:

logger.Info("User login", iris.Secret("password", userPassword))
// Output: {"level":"info","msg":"User login","password":"[REDACTED]"}

Security: This prevents accidental exposure of sensitive data in logs while maintaining the field structure for debugging purposes.

func Str

func Str(k, v string) Field

Str creates a string field for logging. This is one of the most commonly used field types.

func String

func String(k, v string) Field

String creates a string field (alias for Str for consistency with Go naming).

func Stringer

func Stringer(k string, val interface{ String() string }) Field

Stringer creates a stringer field for objects implementing fmt.Stringer.

func Time

func Time(k string, v time.Time) Field

Time creates a timestamp field from time.Time (alias for TimeField for consistency).

func TimeField

func TimeField(k string, v time.Time) Field

TimeField creates a timestamp field from time.Time. Stored as Unix nanoseconds for high precision and compact representation.

func Uint

func Uint(k string, v uint) Field

Uint creates a field from a uint value.

func Uint16

func Uint16(k string, v uint16) Field

Uint16 creates a field from a uint16 value.

func Uint32

func Uint32(k string, v uint32) Field

Uint32 creates a field from a uint32 value.

func Uint64

func Uint64(k string, v uint64) Field

Uint64 creates an unsigned 64-bit integer field. Use this for non-negative values that may exceed int64 range.

func Uint8

func Uint8(k string, v uint8) Field

Uint8 creates a field from a uint8 value.

func (Field) BoolValue

func (f Field) BoolValue() bool

BoolValue returns the boolean value if the field is a bool, false otherwise.

func (Field) BytesValue

func (f Field) BytesValue() []byte

BytesValue returns the byte slice value if the field is bytes, nil otherwise.

func (Field) DurationValue

func (f Field) DurationValue() time.Duration

DurationValue returns the time.Duration value if the field is a duration, 0 otherwise.

func (Field) FloatValue

func (f Field) FloatValue() float64

FloatValue returns the float64 value if the field is a float, 0.0 otherwise.

func (Field) IntValue

func (f Field) IntValue() int64

IntValue returns the int64 value if the field is an integer, 0 otherwise.

func (Field) IsBool

func (f Field) IsBool() bool

IsBool returns true if the field contains boolean data.

func (Field) IsBytes

func (f Field) IsBytes() bool

IsBytes returns true if the field contains byte slice data.

func (Field) IsDuration

func (f Field) IsDuration() bool

IsDuration returns true if the field contains duration data.

func (Field) IsFloat

func (f Field) IsFloat() bool

IsFloat returns true if the field contains floating-point data.

func (Field) IsInt

func (f Field) IsInt() bool

IsInt returns true if the field contains integer data.

func (Field) IsString

func (f Field) IsString() bool

IsString returns true if the field contains string data.

func (Field) IsTime

func (f Field) IsTime() bool

IsTime returns true if the field contains timestamp data.

func (Field) IsUint

func (f Field) IsUint() bool

IsUint returns true if the field contains unsigned integer data.

func (Field) Key

func (f Field) Key() string

Key returns the field's key name.

func (Field) StringValue

func (f Field) StringValue() string

StringValue returns the string value if the field is a string, empty string otherwise.

func (Field) TimeValue

func (f Field) TimeValue() time.Time

TimeValue returns the time.Time value if the field is a timestamp, zero time otherwise.

func (Field) Type

func (f Field) Type() kind

Type returns the kind of data stored in this field.

func (Field) UintValue

func (f Field) UintValue() uint64

UintValue returns the uint64 value if the field is an unsigned integer, 0 otherwise.

type Hook

type Hook func(rec *Record)

Hook represents a function executed in the consumer thread after log record processing.

Hooks are executed in the consumer thread to avoid contention with producer threads. This design ensures maximum performance for logging operations while still allowing powerful post-processing capabilities.

Hook functions receive the fully populated Record after encoding but before the buffer is returned to the pool. This allows for:

  • Metrics collection
  • Log forwarding to external systems
  • Custom processing based on log content
  • Development-time debugging

Performance Notes:

  • Executed in single consumer thread (no locks needed)
  • Called after encoding is complete
  • Should avoid blocking operations to maintain throughput

Thread Safety: Hooks are called from single consumer thread only
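A minimal metrics hook might look like the following sketch; the Record stand-in here carries only the fields the example needs, not the full iris record:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Record is a stand-in with only the fields this sketch needs.
type Record struct {
	Level string
	Msg   string
}

var errorCount atomic.Int64

// countErrors is a metrics-collecting hook. Hooks run on the single
// consumer goroutine, so plain state would be safe, but an atomic
// counter also lets other goroutines read it concurrently.
func countErrors(rec *Record) {
	if rec.Level == "error" {
		errorCount.Add(1)
	}
}

func main() {
	for _, rec := range []*Record{
		{Level: "error", Msg: "db timeout"},
		{Level: "info", Msg: "ok"},
	} {
		countErrors(rec)
	}
	fmt.Println(errorCount.Load())
}
```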

type IdleStrategy

type IdleStrategy = zephyroslite.IdleStrategy

IdleStrategy defines the interface for consumer idle behavior. This type alias exposes the internal interface for configuration purposes.

func NewChannelIdleStrategy

func NewChannelIdleStrategy(timeout time.Duration) IdleStrategy

NewChannelIdleStrategy creates an efficient blocking wait strategy. This strategy puts the consumer goroutine into an efficient wait state using Go channels, providing near-zero CPU usage when idle.

Parameters:

  • timeout: Maximum time to wait before checking for shutdown (0 = no timeout)

Best for: Minimum CPU usage with acceptable latency in low-throughput scenarios
CPU Usage: Near 0% when idle
Latency: ~microseconds (channel wake-up time)

Note: This strategy works best with lower throughput workloads where the overhead of channel operations is acceptable.

Examples:

// No timeout - maximum efficiency
NewChannelIdleStrategy(0)

// With timeout for responsive shutdown
NewChannelIdleStrategy(100*time.Millisecond)

func NewProgressiveIdleStrategy

func NewProgressiveIdleStrategy() IdleStrategy

NewProgressiveIdleStrategy creates an adaptive idle strategy. This strategy automatically adjusts its behavior based on work patterns, starting with spinning for ultra-low latency and progressively reducing CPU usage as idle time increases.

This is the default strategy, providing good performance for most workloads without requiring manual tuning.

Best for: Variable workload patterns where both low latency and low CPU usage are important
CPU Usage: Adaptive, starting high and reducing over time when idle
Latency: Starts at minimum, increases gradually when idle

Behavior:

  • Hot spin for first 1000 iterations (minimum latency)
  • Occasional yielding up to 10000 iterations
  • Progressive sleep with exponential backoff
  • Resets to hot spin when work is found

Example:

config := &Config{
    IdleStrategy: NewProgressiveIdleStrategy(),
    // ... other config
}

func NewSleepingIdleStrategy

func NewSleepingIdleStrategy(sleepDuration time.Duration, maxSpins int) IdleStrategy

NewSleepingIdleStrategy creates a CPU-efficient idle strategy with controlled latency. This strategy reduces CPU usage by sleeping when no work is available, with optional initial spinning for hybrid behavior.

Parameters:

  • sleepDuration: How long to sleep when no work is found (e.g., time.Millisecond)
  • maxSpins: Number of spin iterations before sleeping (0 = sleep immediately)

Best for: Balanced CPU usage and latency in production environments
CPU Usage: ~1-10% depending on sleep duration and spin count
Latency: ~1-10ms depending on sleep duration

Examples:

// Low CPU usage, higher latency
NewSleepingIdleStrategy(5*time.Millisecond, 0)

// Hybrid: spin briefly then sleep
NewSleepingIdleStrategy(time.Millisecond, 1000)

func NewSpinningIdleStrategy

func NewSpinningIdleStrategy() IdleStrategy

NewSpinningIdleStrategy creates an ultra-low latency idle strategy. This strategy provides the minimum possible latency by continuously checking for work without ever yielding the CPU.

Best for: Ultra-low latency requirements where CPU consumption is not a concern
CPU Usage: ~100% of one core when idle
Latency: Minimum possible (~nanoseconds)

Example:

config := &Config{
    IdleStrategy: NewSpinningIdleStrategy(),
    // ... other config
}

func NewYieldingIdleStrategy

func NewYieldingIdleStrategy(maxSpins int) IdleStrategy

NewYieldingIdleStrategy creates a moderate CPU reduction strategy. This strategy spins for a configurable number of iterations before yielding to the Go scheduler, providing a middle ground between spinning and sleeping approaches.

Parameters:

  • maxSpins: Number of spins before yielding to scheduler

Best for: Moderate CPU reduction while maintaining reasonable latency
CPU Usage: ~10-50% depending on the max spins configuration
Latency: ~microseconds to low milliseconds

Examples:

// More aggressive yielding (lower CPU, higher latency)
NewYieldingIdleStrategy(100)

// Conservative yielding (higher CPU, lower latency)
NewYieldingIdleStrategy(10000)

type JSONEncoder

type JSONEncoder struct {
	TimeKey  string // default "ts"
	LevelKey string // default "level"
	MsgKey   string // default "msg"
	RFC3339  bool   // default true (alternative: UnixNano int64)
}

JSONEncoder implements NDJSON (one line per record) with zero-reflection encoding

func NewJSONEncoder

func NewJSONEncoder() *JSONEncoder

NewJSONEncoder creates a new JSONEncoder with optimal defaults

func (*JSONEncoder) Encode

func (e *JSONEncoder) Encode(rec *Record, now time.Time, buf *bytes.Buffer)

Encode encodes a log record to JSON format

type Level

type Level int32

Level represents the severity level of a log message. Levels are ordered from least to most severe: Debug < Info < Warn < Error < DPanic < Panic < Fatal

Performance Notes:

  • Level is implemented as int32 for fast comparisons
  • Atomic operations are used for thread-safe level changes
  • Zero allocation for level checks via inlined comparisons

const (
	Debug  Level = iota - 1 // Debug information, typically disabled in production
	Info                    // General information messages
	Warn                    // Warning messages for potentially harmful situations
	Error                   // Error messages for failure conditions
	DPanic                  // Development panic - panics in development, errors in production
	Panic                   // Panic level - logs message then panics
	Fatal                   // Fatal level - logs message then calls os.Exit(1)

	// StacktraceDisabled is a sentinel value used to disable stack trace collection
	StacktraceDisabled Level = -999
)

Log levels in order of increasing severity

func AllLevels

func AllLevels() []Level

AllLevels returns a slice of all valid levels in ascending order. This is useful for documentation, validation, and testing.

func ParseLevel

func ParseLevel(s string) (Level, error)

ParseLevel parses a string representation of a level and returns the corresponding Level. It handles common aliases and is case-insensitive. Returns Info level for empty strings as a sensible default.
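The parsing behavior can be sketched as follows; the alias set and numeric values here are illustrative, not iris's exact table:

```go
package main

import (
	"fmt"
	"strings"
)

// parseLevel sketches case-insensitive level parsing with aliases and an
// Info default for empty input, as described above.
func parseLevel(s string) (int, error) {
	switch strings.ToLower(strings.TrimSpace(s)) {
	case "", "info":
		return 0, nil // empty defaults to Info
	case "debug":
		return -1, nil
	case "warn", "warning":
		return 1, nil
	case "error":
		return 2, nil
	default:
		return 0, fmt.Errorf("unknown level %q", s)
	}
}

func main() {
	lvl, err := parseLevel("WARN")
	fmt.Println(lvl, err)
}
```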

func (Level) Enabled

func (l Level) Enabled(min Level) bool

Enabled determines if this level is enabled given a minimum level. This is a critical hot path function optimized for maximum performance.

func (Level) IsDPanic

func (l Level) IsDPanic() bool

IsDPanic returns true if the level is DPanic. Convenience method for checking development panic level.

func (Level) IsDebug

func (l Level) IsDebug() bool

IsDebug returns true if the level is Debug. Convenience method for frequently checked debug level.

func (Level) IsError

func (l Level) IsError() bool

IsError returns true if the level is Error. Convenience method for frequently checked error level.

func (Level) IsFatal

func (l Level) IsFatal() bool

IsFatal returns true if the level is Fatal. Convenience method for checking fatal level.

func (Level) IsInfo

func (l Level) IsInfo() bool

IsInfo returns true if the level is Info. Convenience method for frequently checked info level.

func (Level) IsPanic

func (l Level) IsPanic() bool

IsPanic returns true if the level is Panic. Convenience method for checking panic level.

func (Level) IsWarn

func (l Level) IsWarn() bool

IsWarn returns true if the level is Warn. Convenience method for frequently checked warn level.

func (Level) MarshalText

func (l Level) MarshalText() ([]byte, error)

MarshalText implements encoding.TextMarshaler for JSON/XML serialization. This method is optimized to avoid allocations in the common case.

func (Level) String

func (l Level) String() string

String returns the string representation of the level. This is used for human-readable output and serialization.

func (*Level) UnmarshalText

func (l *Level) UnmarshalText(b []byte) error

UnmarshalText implements encoding.TextUnmarshaler for JSON/XML deserialization. This method provides detailed error information for debugging.

type LevelFlag

type LevelFlag struct {
	// contains filtered or unexported fields
}

LevelFlag is a command-line flag implementation for Level. It implements the flag.Value interface for easy CLI integration.

func NewLevelFlag

func NewLevelFlag(level *Level) *LevelFlag

NewLevelFlag creates a new LevelFlag pointing to the given Level.

func (*LevelFlag) Set

func (lf *LevelFlag) Set(s string) error

Set parses and sets the level from a string. This method is called by the flag package when parsing command-line arguments.

func (*LevelFlag) String

func (lf *LevelFlag) String() string

String returns the string representation of the level.

func (*LevelFlag) Type

func (lf *LevelFlag) Type() string

Type returns the type description for help text.
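The flag.Value wiring works like this simplified stand-in; levelFlag here uses a plain string instead of the iris Level type:

```go
package main

import (
	"flag"
	"fmt"
)

// levelFlag mimics how a flag.Value wrapper plugs into the flag package.
type levelFlag struct{ level *string }

func (lf *levelFlag) Set(s string) error { *lf.level = s; return nil }
func (lf *levelFlag) String() string {
	if lf == nil || lf.level == nil {
		return ""
	}
	return *lf.level
}

func main() {
	level := "info"
	fs := flag.NewFlagSet("app", flag.ContinueOnError)
	fs.Var(&levelFlag{&level}, "log-level", "minimum log level")
	if err := fs.Parse([]string{"-log-level", "debug"}); err != nil {
		panic(err)
	}
	fmt.Println(level)
}
```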

type Logger

type Logger struct {
	// contains filtered or unexported fields
}

Logger provides ultra-high performance logging with zero-allocation structured fields.

The Logger uses a lock-free MPSC (Multiple Producer, Single Consumer) ring buffer for maximum throughput. Multiple goroutines can log concurrently while a single background goroutine processes and outputs the log records.

Thread Safety:

  • All logging methods (Debug, Info, Warn, Error) are thread-safe
  • Multiple goroutines can log concurrently without locks
  • Configuration changes (SetLevel) are atomic and thread-safe

Performance Features:

  • Zero allocations for structured logging with pre-allocated fields
  • Lock-free atomic operations for level checking
  • Intelligent sampling to reduce log volume
  • Efficient buffer pooling to minimize GC pressure
  • Adaptive batching based on log volume
  • Context inheritance with With() for repeated fields

Lifecycle:

  • Create with New() - configures but doesn't start processing
  • Call Start() to begin background processing
  • Use logging methods (Debug, Info, etc.) for actual logging
  • Call Close() for graceful shutdown with guaranteed log processing

func New

func New(cfg Config, opts ...Option) (*Logger, error)

New creates a new high-performance logger with the specified configuration and options.

The logger is created but not started - call Start() to begin processing. This separation allows for configuration verification and testing setup before actual log processing begins.

Parameters:

  • cfg: Logger configuration with output, encoding, and performance settings
  • opts: Optional configuration functions for advanced features

The configuration is validated and enhanced with intelligent defaults:

  • Missing TimeFn defaults to time.Now
  • Zero BatchSize gets auto-sized based on Capacity
  • Nil Output or Encoder will cause an error

Returns:

  • *Logger: Configured logger ready for Start()
  • error: Configuration validation error

Example:

logger, err := iris.New(iris.Config{
    Level:    iris.Info,
    Output:   os.Stdout,
    Encoder:  iris.NewJSONEncoder(),
    Capacity: 8192,
}, iris.WithCaller(), iris.Development())
if err != nil {
    return err
}
logger.Start()

func (*Logger) AtomicLevel

func (l *Logger) AtomicLevel() *AtomicLevel

AtomicLevel returns a pointer to the logger's atomic level.

This method provides access to the underlying atomic level structure, which can be used with dynamic configuration watchers like Argus to enable runtime level changes without logger restarts.

Returns:

  • *AtomicLevel: Pointer to the atomic level instance

Example usage with dynamic config watching:

watcher, err := iris.EnableDynamicLevel(logger, "config.json")
if err != nil {
    log.Printf("Dynamic level disabled: %v", err)
} else {
    defer watcher.Stop()
    log.Println("✅ Dynamic level changes enabled!")
}

Thread Safety: The returned AtomicLevel is thread-safe

func (*Logger) Close

func (l *Logger) Close() error

Close gracefully shuts down the logger.

This method stops the background processing goroutine and ensures all buffered log records are processed before shutdown. The shutdown is deterministic - Close() will not return until all pending logs have been written to the output.

After Close() is called:

  • All subsequent logging operations will fail silently
  • The ring buffer becomes unusable
  • All buffered records are guaranteed to be processed

The method is idempotent - calling Close() multiple times is safe.

Performance Characteristics:

  • Blocks until all pending records are processed
  • Automatically syncs output before closing
  • Cannot be used after Close() is called

Thread Safety: Safe to call from multiple goroutines

func (*Logger) DPanic

func (l *Logger) DPanic(msg string, fields ...Field) bool

DPanic logs a message at a special development panic level.

DPanic (Development Panic) logs at Error level but panics if the logger is in development mode. This allows for aggressive error detection during development while maintaining stability in production.

Behavior:

  • Development mode: Logs and then panics
  • Production mode: Logs only (no panic)

Parameters:

  • msg: Primary log message
  • fields: Structured key-value pairs (zero-allocation)

Performance: Same as Error level logging, with a conditional panic.

Zap compatibility: DPanic, Panic, and Fatal are provided as dedicated levels.

func (*Logger) Debug

func (l *Logger) Debug(msg string, fields ...Field) bool

Debug logs a message at Debug level with structured fields.

Debug level is intended for detailed diagnostic information useful during development and troubleshooting. These messages are typically disabled in production environments.

Parameters:

  • msg: Primary log message
  • fields: Structured key-value pairs (zero-allocation)

Returns:

  • bool: true if successfully logged, false if dropped or filtered

Performance: Optimized for zero allocations with pre-allocated field storage

func (*Logger) Debugf

func (l *Logger) Debugf(format string, args ...any) bool

Debugf logs a message at debug level using printf-style formatting

func (*Logger) Error

func (l *Logger) Error(msg string, fields ...Field) bool

Error logs a message at Error level with structured fields.

Error level is intended for error events that allow the application to continue running. These messages indicate failures that need immediate attention but don't crash the application.

Parameters:

  • msg: Primary log message
  • fields: Structured key-value pairs (zero-allocation)

Returns:

  • bool: true if successfully logged, false if dropped or filtered

Performance: Optimized for zero allocations with pre-allocated field storage

func (*Logger) Errorf

func (l *Logger) Errorf(format string, args ...any) bool

Errorf logs a message at error level using printf-style formatting

func (*Logger) Fatal

func (l *Logger) Fatal(msg string, fields ...Field)

Fatal logs a message at fatal level and exits the program

func (*Logger) Info

func (l *Logger) Info(msg string, fields ...Field) bool

Info logs a message at Info level with structured fields.

Info level is intended for general information about program execution. These messages provide insight into application flow and important events.

Parameters:

  • msg: Primary log message
  • fields: Structured key-value pairs (zero-allocation)

Returns:

  • bool: true if successfully logged, false if dropped or filtered

Performance: Zero allocations for simple messages, optimized fast path for messages with fields

func (*Logger) InfoFields

func (l *Logger) InfoFields(msg string, fields ...Field) bool

InfoFields logs a message at Info level with structured fields.

This method supports structured logging with key-value pairs for detailed context. Use the simpler Info() method for messages without fields to achieve zero allocations.

Performance: Optimized for zero allocations with pre-allocated field storage

func (*Logger) Infof

func (l *Logger) Infof(format string, args ...any) bool

Infof logs a message at info level using printf-style formatting

func (*Logger) Level

func (l *Logger) Level() Level

Level atomically reads the current minimum logging level.

Returns the current minimum level threshold used for filtering log messages. Messages below this level are discarded early for maximum performance.

Returns:

  • Level: Current minimum logging level

Performance Notes:

  • Atomic load operation
  • Zero allocations
  • Sub-nanosecond read performance

Thread Safety: Safe to call from multiple goroutines

func (*Logger) Named

func (l *Logger) Named(name string) *Logger

Named creates a new logger with the specified name.

Named loggers are useful for organizing logs by component, module, or functionality. The name typically appears in log output to help with filtering and analysis.

Parameters:

  • name: Name to assign to the new logger instance

Returns:

  • *Logger: New logger instance with the specified name

Example:

dbLogger := logger.Named("database")
apiLogger := logger.Named("api")
dbLogger.Info("Connection established") // Includes "database" context

Performance Notes:

  • String assignment only (minimal overhead)
  • Name is included in log output by encoder
  • Zero allocations during normal operation

Thread Safety: Safe to call from multiple goroutines

func (*Logger) Panic

func (l *Logger) Panic(msg string, fields ...Field) bool

Panic logs a message at panic level and panics

func (*Logger) SetLevel

func (l *Logger) SetLevel(min Level)

SetLevel atomically changes the minimum logging level.

This method allows dynamic level adjustment during runtime without restarting the logger. Level changes take effect immediately for subsequent log operations.

Parameters:

  • min: New minimum level (Debug, Info, Warn, Error)

Performance Notes:

  • Atomic operation with no locks or allocations
  • Sub-nanosecond level changes
  • Thread-safe concurrent access

Thread Safety: Safe to call from multiple goroutines

func (*Logger) Start

func (l *Logger) Start()

Start begins background processing of log records.

This method starts the consumer goroutine that processes log records from the ring buffer and writes them to the configured output. The method is idempotent - calling Start() multiple times is safe and has no effect after the first call.

The consumer goroutine will continue processing until Close() is called. All logging operations require Start() to be called first, otherwise log records will accumulate in the ring buffer without being processed.

Performance Notes:

  • Uses lock-free atomic operations for state management
  • Single consumer goroutine eliminates lock contention
  • Processing begins immediately after Start() returns

Thread Safety: Safe to call from multiple goroutines

func (*Logger) Stats

func (l *Logger) Stats() map[string]int64

Stats returns comprehensive performance statistics for monitoring.

This method provides real-time metrics about logger performance, buffer utilization, and operational health. The statistics are collected atomically and can be safely called from multiple goroutines.

Returns:

  • map[string]int64: Real-time performance metrics covering ring buffer state (capacity, utilization), dropped message counts, processing throughput, and memory usage indicators
The returned map contains:

  • "dropped": Number of messages dropped due to ring buffer full
  • "writer_position": Current writer position in ring buffer
  • "reader_position": Current reader position in ring buffer
  • "buffer_size": Ring buffer capacity
  • "items_buffered": Number of items waiting to be processed
  • "utilization_percent": Buffer utilization percentage
  • Additional ring buffer specific statistics

Performance: Atomic reads with zero allocations for metric collection

func (*Logger) Sync

func (l *Logger) Sync() error

Sync flushes any buffered log entries.

This method ensures that all buffered log entries are written to their destination. It's useful before program termination or when immediate log delivery is required.

Returns:

  • error: Any error encountered during synchronization

Performance Notes:

  • May block until all buffers are flushed
  • Should be called sparingly in hot paths
  • Automatically called during Close()

Thread Safety: Safe to call from multiple goroutines

func (*Logger) Warn

func (l *Logger) Warn(msg string, fields ...Field) bool

Warn logs a message at Warn level with structured fields.

Warn level is intended for potentially harmful situations that don't prevent the application from continuing. These messages indicate conditions that should be investigated.

Parameters:

  • msg: Primary log message
  • fields: Structured key-value pairs (zero-allocation)

Returns:

  • bool: true if successfully logged, false if dropped or filtered

Performance: Optimized for zero allocations with pre-allocated field storage

func (*Logger) Warnf

func (l *Logger) Warnf(format string, args ...any) bool

Warnf logs a message at Warn level using printf-style formatting.

func (*Logger) With

func (l *Logger) With(fields ...Field) *Logger

With creates a new logger with additional structured fields.

This method creates a new logger instance that automatically includes the specified fields in every log message. This is useful for adding context that applies to multiple log statements, such as request IDs, user IDs, or component names.

Parameters:

  • fields: Structured fields to include in all log messages

Returns:

  • *Logger: New logger instance with pre-populated fields

Implementation Note: The fields are stored in the logger and applied to each log record during the logging operation.

Example:

requestLogger := logger.With(
    iris.String("request_id", reqID),
    iris.String("user_id", userID),
)
requestLogger.Info("Processing request") // Includes request_id and user_id

Performance Notes:

  • Fields are stored once in logger instance
  • Applied during each log operation (small overhead)
  • Zero allocations for field storage in logger

Thread Safety: Safe to call from multiple goroutines

func (*Logger) WithContext

func (l *Logger) WithContext(ctx context.Context) *ContextLogger

WithContext creates a new ContextLogger with fields extracted from context. This is the recommended way to use context integration: extract once, then log many times with the same context.

Performance: O(k) where k is number of configured keys, not context depth.

func (*Logger) WithContextExtractor

func (l *Logger) WithContextExtractor(ctx context.Context, extractor *ContextExtractor) *ContextLogger

WithContextExtractor creates a ContextLogger with custom extraction rules.

func (*Logger) WithContextValue

func (l *Logger) WithContextValue(ctx context.Context, key ContextKey, fieldName string) *ContextLogger

WithContextValue creates a ContextLogger with a single context value. Optimized for cases where you only need one context field.

func (*Logger) WithOptions

func (l *Logger) WithOptions(opts ...Option) *Logger

WithOptions creates a new logger with the specified options applied.

This method clones the current logger and applies additional configuration options. The original logger is unchanged, ensuring immutable configuration and thread safety. The new logger shares the same ring buffer and output configuration but can have different caller, hook, and development settings.

Parameters:

  • opts: Option functions to apply to the new logger instance

Returns:

  • *Logger: New logger instance with applied options

Example:

devLogger := logger.WithOptions(
    iris.WithCaller(),
    iris.AddStacktrace(iris.Error),
    iris.Development(),
)

Performance Notes:

  • Clones logger configuration (minimal allocation)
  • Shares ring buffer and output resources
  • Options are applied once during creation

Thread Safety: Safe to call from multiple goroutines

func (*Logger) WithRequestID

func (l *Logger) WithRequestID(ctx context.Context) *ContextLogger

WithRequestID extracts request ID with minimal allocations. Optimized for the most common use case.

func (*Logger) WithTraceID

func (l *Logger) WithTraceID(ctx context.Context) *ContextLogger

WithTraceID extracts trace ID for distributed tracing.

func (*Logger) WithUserID

func (l *Logger) WithUserID(ctx context.Context) *ContextLogger

WithUserID extracts user ID from context for user-specific logging.

func (*Logger) Write

func (l *Logger) Write(fill func(*Record)) bool

Write provides zero-allocation logging with a fill function.

This is the fastest logging method, allowing direct manipulation of a pre-allocated Record in the ring buffer. The fill function is called with a pointer to a Record that should be populated with log data.

Parameters:

  • fill: Function to populate the log record (zero allocations)

Returns:

  • bool: true if record was successfully queued, false if ring buffer full

Performance Features:

  • Zero heap allocations during normal operation
  • Direct record manipulation in ring buffer
  • Lock-free atomic operations
  • Fastest possible logging path

Example:

success := logger.Write(func(r *Record) {
    r.Level = iris.Error
    r.Msg = "Critical system error"
    r.AddField(iris.String("component", "database"))
})

Thread Safety: Safe to call from multiple goroutines

type Option

type Option func(*loggerOptions)

Option represents a function that modifies logger options during construction.

Options use the functional options pattern to provide a clean, extensible API for logger configuration. Each Option function modifies the options structure in place during logger creation or cloning.

Pattern Benefits:

  • Backward compatible API evolution
  • Clear, self-documenting configuration
  • Composable option sets
  • Type-safe configuration

Usage:

logger := logger.WithOptions(
    iris.WithCaller(),
    iris.AddStacktrace(iris.Error),
    iris.Development(),
)

func AddStacktrace

func AddStacktrace(min Level) Option

AddStacktrace enables stack trace capture for log levels at or above the specified minimum.

Stack traces provide detailed call stack information for debugging complex issues. They are automatically captured for severe log levels (typically Error and above) to aid in troubleshooting.

Parameters:

  • min: Minimum log level for stack trace capture (Debug, Info, Warn, Error)

Performance Impact:

  • Stack trace capture is expensive (runtime.Stack() call)
  • Only enabled for specified log levels to minimize overhead
  • Stack traces are captured in producer thread but processed in consumer

Returns:

  • Option: Configuration function to enable stack trace capture

Example:

// Capture stack traces for Error level and above
logger := logger.WithOptions(iris.AddStacktrace(iris.Error))
logger.Error("critical error") // Will include stack trace
logger.Warn("warning")         // No stack trace

func Development

func Development() Option

Development enables development-specific behaviors for enhanced debugging.

Development mode changes logger behavior to be more suitable for development and testing environments:

  • DPanic level causes panic() in addition to logging
  • Enhanced error reporting and validation
  • More verbose debugging information

This option should typically be disabled in production environments for optimal performance and stability.

Returns:

  • Option: Configuration function to enable development mode

Example:

logger := logger.WithOptions(iris.Development())
logger.DPanic("development panic") // Will panic in dev mode, log in production

func WithCaller

func WithCaller() Option

WithCaller enables caller information capture for log records.

When enabled, the logger will capture the file name, line number, and function name of the calling code for each log record. This information is added to the log output for debugging and troubleshooting.

Performance Impact:

  • Adds runtime.Caller() call per log operation
  • Minimal allocation for caller information
  • Skip level optimization reduces overhead

Returns:

  • Option: Configuration function to enable caller capture

Example:

logger := logger.WithOptions(iris.WithCaller())
logger.Info("message") // Will include caller info

func WithCallerSkip

func WithCallerSkip(n int) Option

WithCallerSkip sets the number of stack frames to skip for caller detection.

This option is useful when the logger is wrapped by helper functions and you want the caller information to point to the actual calling code rather than the wrapper function.

Parameters:

  • n: Number of stack frames to skip (negative values are treated as 0)

Common Skip Values:

  • 0: Direct caller of log method
  • 1: Skip one wrapper function
  • 2+: Skip multiple wrapper layers

Returns:

  • Option: Configuration function to set caller skip level

Example:

// Skip helper function to show actual caller
logger := logger.WithOptions(
    iris.WithCaller(),
    iris.WithCallerSkip(1),
)

func WithHook

func WithHook(h Hook) Option

WithHook adds a post-processing hook to the logger.

Hooks are functions executed in the consumer thread after log records are processed but before buffers are returned to the pool. This design ensures zero contention with producer threads while enabling powerful post-processing.

Hook Use Cases:

  • Metrics collection based on log content
  • Log forwarding to external systems
  • Custom alerting on specific log patterns
  • Development-time debugging and validation

Parameters:

  • h: Hook function to execute (nil hooks are ignored)

Performance Notes:

  • Hooks are executed sequentially in consumer thread
  • Should avoid blocking operations to maintain throughput
  • No allocation overhead in producer threads

Returns:

  • Option: Configuration function to add the hook

Example:

metricHook := func(rec *Record) {
    if rec.Level >= iris.Error {
        errorCounter.Inc()
    }
}
logger := logger.WithOptions(iris.WithHook(metricHook))

type ProcessorFunc

type ProcessorFunc func(record *Record)

ProcessorFunc defines the signature for record processing functions

This function is called for each log record that flows through the ring buffer. It should be efficient and avoid blocking operations to maintain high throughput.

Parameters:

  • record: The log record to process (guaranteed non-nil)

Performance Notes:

  • Called from the consumer thread only (single-threaded)
  • Should avoid allocations and blocking operations
  • Can safely access shared state (no concurrent access)

type Record

type Record struct {
	Level  Level  // Log level
	Msg    string // Log message
	Logger string // Logger name
	Caller string // Caller information (file:line)
	Stack  string // Stack trace
	// contains filtered or unexported fields
}

Record represents a log entry with optimized field storage

func NewRecord

func NewRecord(level Level, msg string) *Record

NewRecord creates a new Record with the specified level and message. Uses pre-allocated field storage to avoid heap allocations during logging.

func (*Record) AddField

func (r *Record) AddField(field Field) bool

AddField adds a structured field to this record. Returns false if the field array is full (32 fields max - optimal for performance).

func (*Record) FieldCount

func (r *Record) FieldCount() int

FieldCount returns the number of fields in this record.

func (*Record) GetField

func (r *Record) GetField(index int) Field

GetField returns the field at the specified index. Panics if index is out of bounds (for test simplicity).

func (*Record) Reset

func (r *Record) Reset()

Reset clears the record for reuse.

type Ring

type Ring struct {
	// contains filtered or unexported fields
}

Ring provides ultra-high performance logging with embedded Zephyros Light

The Ring uses the embedded ZephyrosLight engine to provide optimal performance for logging operations while eliminating external dependencies and maintaining the core features needed for high-performance logging.

Embedded Zephyros Light Features:

  • Single ring architecture optimized for logging
  • ~15-20ns/op performance (vs 9ns commercial, 25ns previous)
  • Zero heap allocations during normal operation
  • Lock-free atomic operations for maximum throughput
  • Fixed batch processing (simplified vs adaptive)

Architecture Simplification:

  • SingleRing only (ThreadedRings removed - commercial feature)
  • Simplified configuration (fewer options, better defaults)
  • Embedded implementation (no external dependencies)

Performance Characteristics:

  • Zero heap allocations during normal operation
  • Lock-free atomic operations for maximum throughput
  • Fixed batching optimized for logging workloads
  • Simplified spinning strategy for low latency

func (*Ring) Close

func (r *Ring) Close()

Close gracefully shuts down the ring buffer

This method signals the consumer to stop processing and ensures all buffered records are processed before shutdown. It is safe to call multiple times and from multiple goroutines.

After Close() is called:

  • Write() will return false for all subsequent calls
  • Loop() will process all remaining records and then exit
  • The ring buffer becomes unusable

Shutdown Guarantees:

  • All buffered records are processed before shutdown
  • Multiple Close() calls are safe (idempotent)
  • Deterministic shutdown behavior for testing

func (*Ring) Flush

func (r *Ring) Flush() error

Flush ensures all pending writes are visible to the consumer

In the embedded ZephyrosLight architecture, this method ensures that all writes from producer threads are visible to the consumer thread. This is primarily useful for testing and ensuring deterministic behavior.

Note: In normal operation, flushing is automatic and this method exists primarily for API compatibility and testing scenarios.

func (*Ring) Loop

func (r *Ring) Loop()

Loop starts the record processing loop (CONSUMER THREAD ONLY)

This method should be called from exactly one goroutine to consume and process log records. The embedded ZephyrosLight implements an optimized spinning strategy for balanced performance and CPU usage.

The loop continues until Close() is called, after which it processes all remaining records before exiting.

Performance Features:

  • Fixed batching optimized for logging workloads
  • Simplified idle strategy to minimize CPU usage
  • Guaranteed processing of all records during shutdown

Warning: Only call this method from one goroutine per ring buffer. Multiple consumers will cause race conditions and data loss.

func (*Ring) ProcessBatch

func (r *Ring) ProcessBatch() int

ProcessBatch processes a single batch of records and returns the count

This method is useful for custom consumer implementations that need fine-grained control over processing timing. It processes up to batchSize records in a single call using the embedded ZephyrosLight engine.

Returns:

  • int: Number of records processed in this batch (0 if no records available)

Note: This is a lower-level method. Most applications should use Loop() which handles the complete consumer lifecycle automatically.

func (*Ring) Stats

func (r *Ring) Stats() map[string]int64

Stats returns detailed performance statistics for monitoring and debugging

The returned map contains real-time metrics about the embedded ZephyrosLight ring buffer's performance and current state. This is useful for monitoring, alerting, and performance optimization.

Returned Statistics:

  • "writer_position": Last claimed sequence number
  • "reader_position": Current reader position
  • "buffer_size": Total ring buffer capacity
  • "items_buffered": Number of records waiting to be processed
  • "items_processed": Total records processed
  • "items_dropped": Total records dropped due to full buffer
  • "closed": Ring buffer closed state (0=open, 1=closed)
  • "capacity": Configured ring capacity
  • "batch_size": Configured batch size
  • "utilization_percent": Buffer utilization percentage
  • "engine": "zephyros_light" (embedded engine identifier)

Returns:

  • map[string]int64: Real-time performance statistics

Example:

stats := ring.Stats()
fmt.Printf("Buffer utilization: %d%%\n", stats["utilization_percent"])
fmt.Printf("Items buffered: %d\n", stats["items_buffered"])

func (*Ring) Write

func (r *Ring) Write(fill func(*Record)) bool

Write adds a log record to the ring buffer using zero-allocation pattern

The fill function is called with a pointer to a pre-allocated Record in the embedded Zephyros Light ring buffer. This avoids any heap allocations during logging operations while providing excellent performance.

The function is thread-safe and can be called concurrently from multiple goroutines. The embedded ZephyrosLight uses atomic operations for lock-free performance.

Performance: Target ~15-20ns/op with embedded Zephyros Light engine

Parameters:

  • fill: Function to populate the log record (called with pre-allocated Record)

Returns:

  • bool: true if record was successfully written, false if ring is full or closed

Performance Notes:

  • Zero heap allocations during normal operation
  • Lock-free atomic operations for maximum throughput
  • Returns false instead of blocking when ring is full
  • Optimized for high-frequency logging scenarios

Example:

success := ring.Write(func(r *Record) {
    r.Level = Error
    r.Msg = "Critical error occurred"
})

type Sampler

type Sampler interface {
	// Allow determines if a log entry at the given level should be processed.
	// Returns true if the entry should be logged, false if it should be dropped.
	Allow(level Level) bool
}

Sampler defines the interface for log sampling strategies. Implementations control which log entries are allowed through to prevent overwhelming downstream systems.

type Stack

type Stack struct {
	// contains filtered or unexported fields
}

Stack represents a captured stack trace with program counters

func CaptureStack

func CaptureStack(skip int, depth Depth) *Stack

CaptureStack captures a stack trace of the specified depth, skipping frames. skip=0 identifies the caller of CaptureStack. The caller must call FreeStack on the returned stack after using it.

func (*Stack) FormatStack

func (s *Stack) FormatStack() string

FormatStack formats the entire stack trace into a string using buffer pooling

func (*Stack) Next

func (s *Stack) Next() (runtime.Frame, bool)

Next returns the next frame in the stack trace

type TextEncoder

type TextEncoder struct {
	// TimeFormat specifies the time format (default: RFC3339)
	TimeFormat string
	// QuoteValues determines if string values should be quoted (default: true for safety)
	QuoteValues bool
	// SanitizeKeys determines if field keys should be sanitized (default: true)
	SanitizeKeys bool
}

TextEncoder provides secure human-readable text encoding for log records. Implements comprehensive log injection protection by sanitizing all field keys and values to prevent malicious log manipulation.

Security Features:

  • Field key sanitization to prevent injection via malformed keys
  • Value sanitization with proper quoting and escaping
  • Control character neutralization
  • Newline injection protection
  • Unicode direction override protection

Output Format:

time=2025-08-22T10:30:00Z level=info msg="User login" user=john_doe ip=192.168.1.1

func NewTextEncoder

func NewTextEncoder() *TextEncoder

NewTextEncoder creates a new secure text encoder with safe defaults.

func (*TextEncoder) Encode

func (e *TextEncoder) Encode(rec *Record, now time.Time, buf *bytes.Buffer)

Encode writes the record to the buffer in secure text format.

type TokenBucketSampler

type TokenBucketSampler struct {
	// contains filtered or unexported fields
}

TokenBucketSampler implements rate limiting using a token bucket algorithm. Provides burst capacity with sustained rate limiting for high-volume logging.

func NewTokenBucketSampler

func NewTokenBucketSampler(capacity, refill int64, every time.Duration) *TokenBucketSampler

NewTokenBucketSampler creates a new token bucket sampler with the specified parameters. Validates inputs and sets reasonable defaults for invalid values.

Parameters:

  • capacity: Maximum number of tokens (burst capacity)
  • refill: Number of tokens added per refill period
  • every: Time duration between refills

Returns a configured sampler ready for concurrent use.

func (*TokenBucketSampler) Allow

func (s *TokenBucketSampler) Allow(_ Level) bool

Allow implements the Sampler interface using token bucket rate limiting. Thread-safe implementation that refills tokens based on elapsed time and consumes tokens for allowed log entries.

Parameters:

  • level: Log level (unused in this implementation, all levels treated equally)

Returns true if logging should proceed, false if rate limited.

type WriteSyncer

type WriteSyncer interface {
	io.Writer
	Sync() error
}

WriteSyncer combines io.Writer with the ability to synchronize written data to persistent storage. This interface is essential for ensuring data durability in logging scenarios where data loss is unacceptable.

Performance considerations:

  • Sync() should be called judiciously as it may involve expensive syscalls
  • Implementations should be thread-safe for concurrent logging scenarios
  • Zero allocations in hot paths for maximum throughput

func AddSync

func AddSync(w io.Writer) WriteSyncer

AddSync is an alias for WrapWriter, provided for familiarity with zap.

func MultiWriteSyncer

func MultiWriteSyncer(writers ...WriteSyncer) WriteSyncer

MultiWriteSyncer creates a WriteSyncer that duplicates writes to multiple writers.

func MultiWriter

func MultiWriter(writers ...io.Writer) WriteSyncer

MultiWriter accepts io.Writer values, wraps them, and combines them into a MultiWriteSyncer.

func NewFileSyncer

func NewFileSyncer(file *os.File) WriteSyncer

NewFileSyncer creates a WriteSyncer specifically for file operations. This function provides explicit file syncing capabilities and should be used when you need guaranteed durability for file-based logging.

Performance: Direct file operations with explicit sync control

func NewNopSyncer

func NewNopSyncer(w io.Writer) WriteSyncer

NewNopSyncer creates a WriteSyncer that performs no synchronization. This is useful for scenarios where sync is handled externally or where the underlying writer doesn't support/need synchronization.

Performance: Zero-cost wrapper with inline no-op sync

func WrapWriter

func WrapWriter(w io.Writer) WriteSyncer

WrapWriter intelligently converts any io.Writer into a WriteSyncer. This function provides automatic detection and wrapping of different writer types to ensure optimal performance and correct synchronization behavior.

Type-specific optimizations:

  • *os.File: Uses fileSyncer for explicit sync() syscalls
  • WriteSyncer: Returned as-is (already implements the interface)
  • Other writers: Wrapped in nopSyncer (no-op sync for non-file writers)

Performance: Zero allocations for WriteSyncer inputs, minimal overhead for type switching in other cases.

Usage patterns:

  • File logging: WrapWriter(file) -> fileSyncer (with sync)
  • Buffer logging: WrapWriter(buffer) -> nopSyncer (no sync needed)
  • Network logging: WrapWriter(conn) -> nopSyncer (sync at protocol level)

Directories

Path Synopsis
cmd
test command
internal
