Documentation ¶
Overview ¶
Package iris provides a high-performance, structured logging library for Go applications.
Iris is designed for production environments where performance, security, and reliability are critical. It offers zero-allocation logging paths, automatic memory management, and comprehensive security features including secure field handling and log injection prevention.
Key Features ¶
- Smart API with zero-configuration setup and automatic optimization
- High-performance structured logging with zero-allocation fast paths
- Automatic memory management with buffer pooling and ring buffer architecture
- Comprehensive security features including field sanitization and injection prevention
- Multiple output formats: JSON, text, and console with smart formatting
- Dynamic configuration with hot-reload capabilities
- Built-in caller information and stack trace support
- Backpressure handling and automatic scaling
- OpenTelemetry integration support
- Extensive field types with type-safe APIs
Smart API - Zero Configuration ¶
The Smart API automatically detects optimal settings for your environment:
// Smart API: everything auto-configured
logger, _ := iris.New(iris.Config{})
logger.Start()
logger.Info("Hello world", iris.String("user", "alice"))
Smart features include:
- Architecture detection (SingleRing vs ThreadedRings based on CPU count)
- Capacity optimization (8KB per CPU core, bounded 8KB-64KB)
- Encoder selection (Text for development, JSON for production)
- Level detection (from environment or development mode)
- Time optimization (121x faster cached time)
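For illustration, the capacity heuristic above can be sketched as a standalone function. The function name and clamping logic here are assumptions based on the description ("8KB per CPU core, bounded 8KB-64KB"), not the library's actual code:

```go
package main

import (
	"fmt"
	"runtime"
)

// smartCapacity mirrors the documented heuristic: 8KB per CPU core,
// clamped to the 8KB-64KB range. Illustrative sketch only.
func smartCapacity(numCPU int) int {
	const perCore = 8 * 1024
	capacity := numCPU * perCore
	if capacity < 8*1024 {
		capacity = 8 * 1024
	}
	if capacity > 64*1024 {
		capacity = 64 * 1024
	}
	return capacity
}

func main() {
	fmt.Println(smartCapacity(runtime.NumCPU()))
}
```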
Quick Start ¶
Basic usage with Smart API (recommended):
logger, err := iris.New(iris.Config{})
if err != nil {
    panic(err)
}
logger.Start()
defer logger.Sync()

logger.Info("Application started", iris.String("version", "1.0.0"))
Development mode with debug logging:
logger, err := iris.New(iris.Config{}, iris.Development())
if err != nil {
    panic(err)
}
logger.Start()
logger.Debug("Debug information visible")
Configuration ¶
While Smart API handles most scenarios, you can override specific settings:
// Override only what you need; the rest is auto-detected
config := iris.Config{
    Output: myCustomWriter,  // custom output
    Level:  iris.ErrorLevel, // error level only
    // Everything else: auto-optimized
}
logger, err := iris.New(config)
if err != nil {
    panic(err)
}
Environment variable support:
export IRIS_LEVEL=debug # Automatically detected by Smart API
Performance Optimizations ¶
Iris includes several performance optimizations automatically enabled by Smart API:
- Time caching for high-frequency logging scenarios (121x faster than time.Now())
- Buffer pooling to minimize garbage collection
- Ring buffer architecture for lock-free writes
- Smart idle strategies for CPU optimization
- Zero-allocation fast paths for common operations
- Architecture auto-detection based on system resources
Security Features ¶
Security is built into every aspect of Iris:
- Field sanitization prevents log injection attacks
- Secret field redaction protects sensitive data
- Caller verification prevents stack manipulation
- Safe string handling prevents buffer overflows
Field Types ¶
Iris supports a comprehensive set of field types with type-safe constructors:
logger.Info("User operation",
    iris.String("user_id", "12345"),
    iris.Int64("timestamp", time.Now().Unix()),
    iris.Dur("elapsed", time.Since(start)),
    iris.NamedError("error", err),
    iris.Secret("password", userPassword), // value is redacted in output
)
Advanced Usage ¶
For advanced scenarios, Iris provides:
- Custom encoders for specialized output formats
- Hierarchical loggers with inherited fields
- Sampling for high-volume scenarios
- Integration with monitoring systems
- Custom sink implementations
- Manual configuration overrides when needed
Error Handling ¶
Iris uses non-blocking error handling to maintain performance:
logger, err := iris.New(iris.Config{})
if err != nil {
    // Handle configuration errors
}
logger.Start()

if dropped := logger.Dropped(); dropped > 0 {
    // Handle dropped log entries
}
Performance Comparison ¶
Smart API delivers significant performance improvements:
- Hot Path Allocations: 1-3 allocs/op (67% reduction)
- Encoding Performance: 324-537 ns/op (40-60% improvement)
- Memory per Record: 2.5KB (75% reduction)
- Configuration: Zero lines vs 15-20 lines manually
Best Practices ¶
- Use Smart API for all new projects (iris.New(iris.Config{}))
- Prefer structured fields over formatted messages
- Use typed field constructors (String, Int64, etc.)
- Leverage environment variables for deployment configuration
- Monitor dropped log entries in high-load scenarios
- Use iris.Development() for local development
- Use iris.Secret() for sensitive data fields
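The dropped-entry monitoring advice above follows from the drop-on-full backpressure policy. A standalone sketch of that pattern (a bounded queue that never blocks the caller and counts discards; illustrative, not iris's ring buffer):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// droppingBuffer never blocks the producer: when the bounded queue is
// full, the entry is discarded and a counter records the loss.
type droppingBuffer struct {
	ch      chan string
	dropped atomic.Int64
}

func newDroppingBuffer(capacity int) *droppingBuffer {
	return &droppingBuffer{ch: make(chan string, capacity)}
}

func (b *droppingBuffer) Write(msg string) {
	select {
	case b.ch <- msg:
	default:
		b.dropped.Add(1) // buffer full: drop instead of blocking
	}
}

func (b *droppingBuffer) Dropped() int64 { return b.dropped.Load() }

func main() {
	b := newDroppingBuffer(2)
	for i := 0; i < 5; i++ {
		b.Write(fmt.Sprintf("entry %d", i))
	}
	fmt.Println(b.Dropped()) // 3 of 5 entries did not fit
}
```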
For comprehensive documentation and examples, see: https://github.com/agilira/iris
Index ¶
- Constants
- Variables
- func AllLevelNames() []string
- func CIFriendlyRetryCount(normalRetries int) int
- func CIFriendlySleep(normalDuration time.Duration)
- func CIFriendlyTimeout(normalTimeout time.Duration) time.Duration
- func FreeStack(stack *Stack)
- func GetErrorCode(err error) errors.ErrorCode
- func GetUserMessage(err error) string
- func IsCIEnvironment() bool
- func IsFileSyncer(ws WriteSyncer) bool
- func IsLoggerError(err error, code errors.ErrorCode) bool
- func IsNopSyncer(ws WriteSyncer) bool
- func IsRetryableError(err error) bool
- func IsValidLevel(level Level) bool
- func NewAtomicLevelFromConfig(config *Config) *atomicLevel
- func NewLoggerError(code errors.ErrorCode, message string) *errors.Error
- func NewLoggerErrorWithField(code errors.ErrorCode, message, field, value string) *errors.Error
- func RecoverWithError(code errors.ErrorCode) *errors.Error
- func SafeExecute(fn func() error, operation string) error
- func SetErrorHandler(handler ErrorHandler)
- func WrapLoggerError(originalErr error, code errors.ErrorCode, message string) *errors.Error
- type Architecture
- type AtomicLevel
- type AutoScalingConfig
- type AutoScalingLogger
- func (asl *AutoScalingLogger) Close() error
- func (asl *AutoScalingLogger) Debug(msg string, fields ...Field)
- func (asl *AutoScalingLogger) Error(msg string, fields ...Field)
- func (asl *AutoScalingLogger) GetCurrentMode() AutoScalingMode
- func (asl *AutoScalingLogger) GetScalingStats() AutoScalingStats
- func (asl *AutoScalingLogger) Info(msg string, fields ...Field)
- func (asl *AutoScalingLogger) Start() error
- func (asl *AutoScalingLogger) Warn(msg string, fields ...Field)
- type AutoScalingMetrics
- type AutoScalingMode
- type AutoScalingStats
- type Config
- type ConsoleEncoder
- type ContextExtractor
- type ContextKey
- type ContextLogger
- func (cl *ContextLogger) Debug(msg string, fields ...Field)
- func (cl *ContextLogger) Error(msg string, fields ...Field)
- func (cl *ContextLogger) Fatal(msg string, fields ...Field)
- func (cl *ContextLogger) Info(msg string, fields ...Field)
- func (cl *ContextLogger) Warn(msg string, fields ...Field)
- func (cl *ContextLogger) With(fields ...Field) *ContextLogger
- func (cl *ContextLogger) WithAdditionalContext(ctx context.Context, extractor *ContextExtractor) *ContextLogger
- type Depth
- type DynamicConfigWatcher
- type Encoder
- type ErrorHandler
- type Field
- func Binary(k string, v []byte) Field
- func Bool(k string, v bool) Field
- func Bytes(k string, v []byte) Field
- func Dur(k string, v time.Duration) Field
- func Err(err error) Field
- func ErrorField(err error) Field
- func Errors(k string, errs []error) Field
- func Float32(k string, v float32) Field
- func Float64(k string, v float64) Field
- func Int(k string, v int) Field
- func Int16(k string, v int16) Field
- func Int32(k string, v int32) Field
- func Int64(k string, v int64) Field
- func Int8(k string, v int8) Field
- func NamedErr(k string, err error) Field
- func NamedError(k string, err error) Field
- func Object(k string, val interface{}) Field
- func Secret(k, v string) Field
- func Str(k, v string) Field
- func String(k, v string) Field
- func Stringer(k string, val interface{ ... }) Field
- func Time(k string, v time.Time) Field
- func TimeField(k string, v time.Time) Field
- func Uint(k string, v uint) Field
- func Uint16(k string, v uint16) Field
- func Uint32(k string, v uint32) Field
- func Uint64(k string, v uint64) Field
- func Uint8(k string, v uint8) Field
- func (f Field) BoolValue() bool
- func (f Field) BytesValue() []byte
- func (f Field) DurationValue() time.Duration
- func (f Field) FloatValue() float64
- func (f Field) IntValue() int64
- func (f Field) IsBool() bool
- func (f Field) IsBytes() bool
- func (f Field) IsDuration() bool
- func (f Field) IsFloat() bool
- func (f Field) IsInt() bool
- func (f Field) IsString() bool
- func (f Field) IsTime() bool
- func (f Field) IsUint() bool
- func (f Field) Key() string
- func (f Field) StringValue() string
- func (f Field) TimeValue() time.Time
- func (f Field) Type() kind
- func (f Field) UintValue() uint64
- type Hook
- type IdleStrategy
- type JSONEncoder
- type Level
- func (l Level) Enabled(min Level) bool
- func (l Level) IsDPanic() bool
- func (l Level) IsDebug() bool
- func (l Level) IsError() bool
- func (l Level) IsFatal() bool
- func (l Level) IsInfo() bool
- func (l Level) IsPanic() bool
- func (l Level) IsWarn() bool
- func (l Level) MarshalText() ([]byte, error)
- func (l Level) String() string
- func (l *Level) UnmarshalText(b []byte) error
- type LevelFlag
- type Logger
- func (l *Logger) AtomicLevel() *AtomicLevel
- func (l *Logger) Close() error
- func (l *Logger) DPanic(msg string, fields ...Field) bool
- func (l *Logger) Debug(msg string, fields ...Field) bool
- func (l *Logger) Debugf(format string, args ...any) bool
- func (l *Logger) Error(msg string, fields ...Field) bool
- func (l *Logger) Errorf(format string, args ...any) bool
- func (l *Logger) Fatal(msg string, fields ...Field)
- func (l *Logger) Info(msg string, fields ...Field) bool
- func (l *Logger) InfoFields(msg string, fields ...Field) bool
- func (l *Logger) Infof(format string, args ...any) bool
- func (l *Logger) Level() Level
- func (l *Logger) Named(name string) *Logger
- func (l *Logger) Panic(msg string, fields ...Field) bool
- func (l *Logger) SetLevel(min Level)
- func (l *Logger) Start()
- func (l *Logger) Stats() map[string]int64
- func (l *Logger) Sync() error
- func (l *Logger) Warn(msg string, fields ...Field) bool
- func (l *Logger) Warnf(format string, args ...any) bool
- func (l *Logger) With(fields ...Field) *Logger
- func (l *Logger) WithContext(ctx context.Context) *ContextLogger
- func (l *Logger) WithContextExtractor(ctx context.Context, extractor *ContextExtractor) *ContextLogger
- func (l *Logger) WithContextValue(ctx context.Context, key ContextKey, fieldName string) *ContextLogger
- func (l *Logger) WithOptions(opts ...Option) *Logger
- func (l *Logger) WithRequestID(ctx context.Context) *ContextLogger
- func (l *Logger) WithTraceID(ctx context.Context) *ContextLogger
- func (l *Logger) WithUserID(ctx context.Context) *ContextLogger
- func (l *Logger) Write(fill func(*Record)) bool
- type Option
- type ProcessorFunc
- type Record
- type Ring
- type Sampler
- type Stack
- type TextEncoder
- type TokenBucketSampler
- type WriteSyncer
Constants ¶
const (
	// Core logging errors
	ErrCodeLoggerCreation errors.ErrorCode = "IRIS_LOGGER_CREATION"
	ErrCodeLoggerNotFound errors.ErrorCode = "IRIS_LOGGER_NOT_FOUND"
	ErrCodeLoggerDisabled errors.ErrorCode = "IRIS_LOGGER_DISABLED"
	ErrCodeLoggerClosed   errors.ErrorCode = "IRIS_LOGGER_CLOSED"

	// Configuration errors
	ErrCodeInvalidConfig errors.ErrorCode = "IRIS_INVALID_CONFIG"
	ErrCodeInvalidLevel  errors.ErrorCode = "IRIS_INVALID_LEVEL"
	ErrCodeInvalidFormat errors.ErrorCode = "IRIS_INVALID_FORMAT"
	ErrCodeInvalidOutput errors.ErrorCode = "IRIS_INVALID_OUTPUT"

	// Field and encoding errors
	ErrCodeInvalidField      errors.ErrorCode = "IRIS_INVALID_FIELD"
	ErrCodeEncodingFailed    errors.ErrorCode = "IRIS_ENCODING_FAILED"
	ErrCodeFieldTypeMismatch errors.ErrorCode = "IRIS_FIELD_TYPE_MISMATCH"
	ErrCodeBufferOverflow    errors.ErrorCode = "IRIS_BUFFER_OVERFLOW"

	// Writer and output errors
	ErrCodeWriterNotAvailable errors.ErrorCode = "IRIS_WRITER_NOT_AVAILABLE"
	ErrCodeWriteFailed        errors.ErrorCode = "IRIS_WRITE_FAILED"
	ErrCodeFlushFailed        errors.ErrorCode = "IRIS_FLUSH_FAILED"
	ErrCodeSyncFailed         errors.ErrorCode = "IRIS_SYNC_FAILED"

	// Performance and resource errors
	ErrCodeMemoryAllocation errors.ErrorCode = "IRIS_MEMORY_ALLOCATION"
	ErrCodePoolExhausted    errors.ErrorCode = "IRIS_POOL_EXHAUSTED"
	ErrCodeTimeout          errors.ErrorCode = "IRIS_TIMEOUT"
	ErrCodeResourceLimit    errors.ErrorCode = "IRIS_RESOURCE_LIMIT"

	// Ring buffer errors
	ErrCodeRingInvalidCapacity  errors.ErrorCode = "IRIS_RING_INVALID_CAPACITY"
	ErrCodeRingInvalidBatchSize errors.ErrorCode = "IRIS_RING_INVALID_BATCH_SIZE"
	ErrCodeRingMissingProcessor errors.ErrorCode = "IRIS_RING_MISSING_PROCESSOR"
	ErrCodeRingClosed           errors.ErrorCode = "IRIS_RING_CLOSED"
	ErrCodeRingBuildFailed      errors.ErrorCode = "IRIS_RING_BUILD_FAILED"

	// Hook and middleware errors
	ErrCodeHookExecution   errors.ErrorCode = "IRIS_HOOK_EXECUTION"
	ErrCodeMiddlewareChain errors.ErrorCode = "IRIS_MIDDLEWARE_CHAIN"
	ErrCodeFilterFailed    errors.ErrorCode = "IRIS_FILTER_FAILED"

	// File and rotation errors
	ErrCodeFileOpen         errors.ErrorCode = "IRIS_FILE_OPEN"
	ErrCodeFileWrite        errors.ErrorCode = "IRIS_FILE_WRITE"
	ErrCodeFileRotation     errors.ErrorCode = "IRIS_FILE_ROTATION"
	ErrCodePermissionDenied errors.ErrorCode = "IRIS_PERMISSION_DENIED"
)
LoggerError codes - specific error codes for the iris logging library
const ErrCodeLoggerExecution errors.ErrorCode = "IRIS_LOGGER_EXECUTION"
ErrCodeLoggerExecution represents the error code for logger execution failures
Variables ¶
var (
	// ErrLoggerNotStarted is returned when logging operations are attempted on a non-started logger
	ErrLoggerNotStarted = errors.New(ErrCodeLoggerNotFound, "logger not started - call Start() first")

	// ErrLoggerClosed is returned when logging operations are attempted on a closed logger
	ErrLoggerClosed = errors.New(ErrCodeLoggerClosed, "logger is closed")

	// ErrLoggerCreationFailed is returned when logger creation fails
	ErrLoggerCreationFailed = errors.New(ErrCodeLoggerCreation, "failed to create logger")
)
Logger errors
var BalancedStrategy = NewProgressiveIdleStrategy()
BalancedStrategy provides good performance for most production workloads. Uses progressive strategy that adapts to workload patterns. Equivalent to NewProgressiveIdleStrategy().
var DefaultContextExtractor = &ContextExtractor{
	Keys: map[ContextKey]string{
		RequestIDKey: "request_id",
		TraceIDKey:   "trace_id",
		SpanIDKey:    "span_id",
		UserIDKey:    "user_id",
		SessionIDKey: "session_id",
	},
	MaxDepth: 10,
}
DefaultContextExtractor provides sensible defaults for common use cases.
var EfficientStrategy = NewSleepingIdleStrategy(time.Millisecond, 0)
EfficientStrategy minimizes CPU usage for low-throughput scenarios. Uses 1ms sleep with no initial spinning. Equivalent to NewSleepingIdleStrategy(time.Millisecond, 0).
var HybridStrategy = NewSleepingIdleStrategy(time.Millisecond, 1000)
HybridStrategy provides a good compromise between latency and CPU usage. Spins briefly then sleeps for 1ms. Equivalent to NewSleepingIdleStrategy(time.Millisecond, 1000).
var SpinningStrategy = NewSpinningIdleStrategy()
SpinningStrategy provides ultra-low latency with maximum CPU usage. Equivalent to NewSpinningIdleStrategy().
Functions ¶
func AllLevelNames ¶
func AllLevelNames() []string
AllLevelNames returns a slice of all valid level names. This is useful for generating help text and validation messages.
func CIFriendlyRetryCount ¶
func CIFriendlyRetryCount(normalRetries int) int
CIFriendlyRetryCount returns an appropriate retry count for the given operation. In CI environments, retry counts are increased to account for scheduler variability.
func CIFriendlySleep ¶
func CIFriendlySleep(normalDuration time.Duration)
CIFriendlySleep sleeps for an appropriate duration. In CI environments, sleep durations are increased to allow for slower scheduling.
func CIFriendlyTimeout ¶
func CIFriendlyTimeout(normalTimeout time.Duration) time.Duration
CIFriendlyTimeout returns an appropriate timeout for the given operation. In CI environments, timeouts are increased to account for resource constraints.
func GetErrorCode ¶
func GetErrorCode(err error) errors.ErrorCode
GetErrorCode extracts the error code from an error
func GetUserMessage ¶
func GetUserMessage(err error) string
GetUserMessage extracts a user-friendly message from an error.
func IsCIEnvironment ¶
func IsCIEnvironment() bool
IsCIEnvironment returns true if running in a CI environment
func IsFileSyncer ¶
func IsFileSyncer(ws WriteSyncer) bool
IsFileSyncer checks if a WriteSyncer is backed by a file. This can be useful for conditional logic based on the underlying writer type, such as applying different buffering strategies.
func IsLoggerError ¶
func IsLoggerError(err error, code errors.ErrorCode) bool
IsLoggerError checks if an error is an iris logger error.
func IsNopSyncer ¶
func IsNopSyncer(ws WriteSyncer) bool
IsNopSyncer checks if a WriteSyncer uses no-op synchronization. This can help optimize write patterns when sync operations are known to be no-ops.
func IsRetryableError ¶
func IsRetryableError(err error) bool
IsRetryableError checks if an error is retryable.
func IsValidLevel ¶
func IsValidLevel(level Level) bool
IsValidLevel checks if the given level is a valid predefined level.
func NewAtomicLevelFromConfig ¶
func NewAtomicLevelFromConfig(config *Config) *atomicLevel
NewAtomicLevelFromConfig creates a new atomicLevel initialized with the config's level. This function bridges the gap between static configuration and dynamic level management.
func NewLoggerError ¶
func NewLoggerError(code errors.ErrorCode, message string) *errors.Error
NewLoggerError creates a new logger-specific error with standard context
func NewLoggerErrorWithField ¶
func NewLoggerErrorWithField(code errors.ErrorCode, message, field, value string) *errors.Error
NewLoggerErrorWithField creates a logger error with field and value information
func RecoverWithError ¶
func RecoverWithError(code errors.ErrorCode) *errors.Error
RecoverWithError recovers from a panic and converts it to a logger error
func SafeExecute ¶
func SafeExecute(fn func() error, operation string) error
SafeExecute executes a function safely, handling any panics.
func SetErrorHandler ¶
func SetErrorHandler(handler ErrorHandler)
SetErrorHandler sets a custom error handler for the iris logging system. This allows applications to customize how logging errors are handled.
func WrapLoggerError ¶
func WrapLoggerError(originalErr error, code errors.ErrorCode, message string) *errors.Error
WrapLoggerError wraps an existing error with logger-specific context.
Types ¶
type Architecture ¶
type Architecture int
Architecture represents the ring buffer architecture type
const (
	// SingleRing uses a single Zephyros ring for maximum single-thread performance
	// Best for: benchmarks, single-producer scenarios, maximum single-thread throughput
	// Performance: ~25ns/op single-thread, limited concurrency scaling
	SingleRing Architecture = iota

	// ThreadedRings uses ThreadedZephyros with multiple rings for multi-producer scaling
	// Best for: production, multi-producer scenarios, high concurrency
	// Performance: ~35ns/op per thread, excellent scaling (4x+ improvement with multiple producers)
	ThreadedRings
)
func ParseArchitecture ¶
func ParseArchitecture(s string) (Architecture, error)
ParseArchitecture parses a string into an Architecture
func (Architecture) String ¶
func (a Architecture) String() string
String returns the string representation of the architecture
type AtomicLevel ¶
type AtomicLevel struct {
// contains filtered or unexported fields
}
AtomicLevel provides atomic operations on Level values. This is useful for dynamically changing log levels in concurrent environments.
func NewAtomicLevel ¶
func NewAtomicLevel(level Level) *AtomicLevel
NewAtomicLevel creates a new AtomicLevel with the given initial level.
func (*AtomicLevel) Enabled ¶
func (al *AtomicLevel) Enabled(level Level) bool
Enabled checks if the given level is enabled atomically. This is a high-performance method for checking levels in hot paths.
func (*AtomicLevel) Level ¶
func (al *AtomicLevel) Level() Level
Level returns the current level atomically.
func (*AtomicLevel) MarshalText ¶
func (al *AtomicLevel) MarshalText() ([]byte, error)
MarshalText implements encoding.TextMarshaler for AtomicLevel.
func (*AtomicLevel) SetLevel ¶
func (al *AtomicLevel) SetLevel(level Level)
SetLevel sets the level atomically.
func (*AtomicLevel) String ¶
func (al *AtomicLevel) String() string
String returns the string representation of the current level.
func (*AtomicLevel) UnmarshalText ¶
func (al *AtomicLevel) UnmarshalText(b []byte) error
UnmarshalText implements encoding.TextUnmarshaler for AtomicLevel.
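The mechanism behind AtomicLevel can be sketched in miniature: the minimum level lives in an atomic integer, so hot-path Enabled checks cost a single load while another goroutine changes the level concurrently. The level numbering below is an assumption for illustration and may not match iris's actual values:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type level int32

const (
	debugLevel level = iota // assumed ordering, lowest to highest severity
	infoLevel
	warnLevel
	errorLevel
)

// atomicLevelSketch holds the minimum level in an atomic integer.
type atomicLevelSketch struct{ v atomic.Int32 }

func (a *atomicLevelSketch) SetLevel(l level) { a.v.Store(int32(l)) }

// Enabled reports whether a message at level l passes the current minimum.
func (a *atomicLevelSketch) Enabled(l level) bool { return int32(l) >= a.v.Load() }

func main() {
	var al atomicLevelSketch
	al.SetLevel(warnLevel)
	fmt.Println(al.Enabled(infoLevel), al.Enabled(errorLevel)) // false true
}
```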
type AutoScalingConfig ¶
type AutoScalingConfig struct {
// Scaling thresholds (inspired by Lethe's shouldScaleToMPSC)
ScaleToMPSCWriteThreshold uint64 // Min writes/sec to consider MPSC (e.g., 1000)
ScaleToMPSCContentionRatio uint32 // Min contention % to scale to MPSC (e.g., 10 = 10%)
ScaleToMPSCLatencyThreshold time.Duration // Max latency before scaling to MPSC (e.g., 1ms)
ScaleToMPSCGoroutineCount uint32 // Min active goroutines for MPSC (e.g., 3)
// Scale down thresholds
ScaleToSingleWriteThreshold uint64 // Max writes/sec to scale back to Single (e.g., 100)
ScaleToSingleContentionRatio uint32 // Max contention % for Single mode (e.g., 1%)
ScaleToSingleLatencyMax time.Duration // Max latency for Single mode (e.g., 100µs)
// Measurement and stability
MeasurementWindow time.Duration // How often to check metrics (e.g., 100ms)
ScalingCooldown time.Duration // Min time between scale operations (e.g., 1s)
StabilityRequirement int // Consecutive measurements before scaling (e.g., 3)
}
AutoScalingConfig defines auto-scaling behavior
func DefaultAutoScalingConfig ¶
func DefaultAutoScalingConfig() AutoScalingConfig
DefaultAutoScalingConfig returns production-ready auto-scaling configuration
type AutoScalingLogger ¶
type AutoScalingLogger struct {
// contains filtered or unexported fields
}
AutoScalingLogger implements an auto-scaling logging architecture
func NewAutoScalingLogger ¶
func NewAutoScalingLogger(cfg Config, scalingConfig AutoScalingConfig, opts ...Option) (*AutoScalingLogger, error)
NewAutoScalingLogger creates an auto-scaling logger
func (*AutoScalingLogger) Close ¶
func (asl *AutoScalingLogger) Close() error
Close gracefully shuts down auto-scaling logger
func (*AutoScalingLogger) Debug ¶
func (asl *AutoScalingLogger) Debug(msg string, fields ...Field)
Debug logs at Debug level with automatic scaling
func (*AutoScalingLogger) Error ¶
func (asl *AutoScalingLogger) Error(msg string, fields ...Field)
Error logs at Error level with automatic scaling
func (*AutoScalingLogger) GetCurrentMode ¶
func (asl *AutoScalingLogger) GetCurrentMode() AutoScalingMode
GetCurrentMode returns the current scaling mode
func (*AutoScalingLogger) GetScalingStats ¶
func (asl *AutoScalingLogger) GetScalingStats() AutoScalingStats
GetScalingStats returns auto-scaling performance statistics
func (*AutoScalingLogger) Info ¶
func (asl *AutoScalingLogger) Info(msg string, fields ...Field)
Info logs at Info level with automatic scaling
func (*AutoScalingLogger) Start ¶
func (asl *AutoScalingLogger) Start() error
Start begins auto-scaling operations
func (*AutoScalingLogger) Warn ¶
func (asl *AutoScalingLogger) Warn(msg string, fields ...Field)
Warn logs at Warn level with automatic scaling
type AutoScalingMetrics ¶
type AutoScalingMetrics struct {
// contains filtered or unexported fields
}
AutoScalingMetrics tracks performance metrics for scaling decisions
type AutoScalingMode ¶
type AutoScalingMode uint32
AutoScalingMode represents the current scaling mode
const (
	// SingleRingMode represents ultra-fast single-threaded logging (~25ns/op)
	// Best for: Low contention, single producers, benchmarks
	SingleRingMode AutoScalingMode = iota

	// MPSCMode represents multi-producer high-contention mode (~35ns/op per thread)
	// Best for: High contention, multiple goroutines, high throughput
	MPSCMode
)
func (AutoScalingMode) String ¶
func (m AutoScalingMode) String() string
type AutoScalingStats ¶
type AutoScalingStats struct {
CurrentMode AutoScalingMode
TotalScaleOperations uint64
ScaleToMPSCCount uint64
ScaleToSingleCount uint64
TotalWrites uint64
ContentionCount uint64
ActiveGoroutines uint32
}
AutoScalingStats provides auto-scaling performance insights
type Config ¶
type Config struct {
// Ring buffer configuration (power-of-two recommended for Capacity)
// Capacity determines the maximum number of log entries that can be buffered
// before blocking or dropping occurs. Larger values improve throughput but
// increase memory usage.
Capacity int64
// BatchSize controls how many log entries are processed together.
// Higher values improve throughput but may increase latency.
// Optimal values are typically 8-64 depending on workload.
BatchSize int64
// Architecture determines the ring buffer architecture type
// SingleRing: Maximum single-thread performance (~25ns/op) - best for benchmarks
// ThreadedRings: Multi-producer scaling (~35ns/op per thread) - best for production
// Default: SingleRing for benchmark compatibility
Architecture Architecture
// NumRings specifies the number of rings for ThreadedRings architecture
// Only used when Architecture = ThreadedRings
// Higher values provide better parallelism but use more memory
// Default: 4 (optimal for most multi-core systems)
NumRings int
// BackpressurePolicy determines the behavior when the ring buffer is full
// DropOnFull: Drops new messages for maximum performance (default)
// BlockOnFull: Blocks caller until space is available (guaranteed delivery)
BackpressurePolicy zephyroslite.BackpressurePolicy
// IdleStrategy controls CPU usage when no log records are being processed
// Different strategies provide various trade-offs between latency and CPU usage:
// - SpinningIdleStrategy: Ultra-low latency, ~100% CPU usage
// - SleepingIdleStrategy: Balanced CPU/latency, ~1-10% CPU usage
// - YieldingIdleStrategy: Moderate reduction, ~10-50% CPU usage
// - ChannelIdleStrategy: Minimal CPU usage, ~microsecond latency
// - ProgressiveIdleStrategy: Adaptive strategy for variable workloads (default)
IdleStrategy zephyroslite.IdleStrategy
// Output and formatting configuration
// Output specifies where log entries are written. Must implement WriteSyncer
// for proper synchronization guarantees.
Output WriteSyncer
// Encoder determines the output format (JSON, Console, etc.)
// The encoder converts log records to their final byte representation
Encoder Encoder
// Level sets the minimum logging level. Messages below this level
// are filtered out early for maximum performance.
Level Level // default: Info
// TimeFn allows custom time source for timestamps.
// Default: time.Now for real-time logging
// Can be overridden for testing or performance optimization
TimeFn func() time.Time
// Optional performance tuning
// Sampler controls log sampling for high-volume scenarios
// Can be nil to disable sampling
Sampler Sampler
// Name provides a human-readable identifier for this logger instance
// Useful for debugging and metrics collection
Name string
}
Config represents the core configuration for an iris logger instance. This structure centralizes all logging parameters with intelligent defaults and performance optimizations. All fields are designed for zero-copy operations and minimal memory allocation.
Performance considerations:
- Capacity should be a power-of-two for optimal ring buffer performance
- BatchSize affects throughput vs latency trade-offs
- TimeFn allows for custom time sources (useful for testing and optimization)
Thread-safety: Config structs are immutable after logger creation
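The power-of-two recommendation for Capacity exists because a ring buffer can then replace a modulo with a bitmask when wrapping indices. A standalone check (the function name is illustrative, not part of the iris API):

```go
package main

import "fmt"

// isPowerOfTwo reports whether n is a positive power of two, using the
// classic n & (n-1) trick: powers of two have exactly one bit set.
func isPowerOfTwo(n int64) bool {
	return n > 0 && n&(n-1) == 0
}

func main() {
	fmt.Println(isPowerOfTwo(8192), isPowerOfTwo(10000)) // true false
}
```

With a power-of-two capacity, `index % capacity` becomes `index & (capacity - 1)`, which avoids a division in the hot path.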
func LoadConfigFromEnv ¶
LoadConfigFromEnv loads logger configuration from environment variables
func LoadConfigFromJSON ¶
LoadConfigFromJSON loads logger configuration from a JSON file
func LoadConfigMultiSource ¶
LoadConfigMultiSource loads configuration from multiple sources with precedence:
1. Environment variables (highest priority)
2. JSON file
3. Default values (lowest priority)
func (*Config) Clone ¶
Clone creates a deep copy of the configuration. This is useful for creating derived configurations without affecting the original.
type ConsoleEncoder ¶
type ConsoleEncoder struct {
// TimeFormat specifies the time layout for timestamps (default: time.RFC3339Nano)
TimeFormat string
// LevelCasing controls level text casing: "upper" (default) or "lower"
LevelCasing string
// EnableColor enables ANSI color codes for different log levels (default: false)
EnableColor bool
}
ConsoleEncoder implements an encoder for human-readable console output. It provides configurable time formatting, level casing, and optional ANSI color support for development and debugging environments.
func NewColorConsoleEncoder ¶
func NewColorConsoleEncoder() *ConsoleEncoder
NewColorConsoleEncoder creates a new console encoder with ANSI colors enabled. This is useful for development environments where color output enhances readability.
func NewConsoleEncoder ¶
func NewConsoleEncoder() *ConsoleEncoder
NewConsoleEncoder creates a new console encoder with default settings. By default, it uses RFC3339Nano time format, uppercase level casing, and no colors.
type ContextExtractor ¶
type ContextExtractor struct {
// Keys maps context keys to field names in log output
Keys map[ContextKey]string
// MaxDepth limits how deep to search in context chain (default: 10)
MaxDepth int
}
ContextExtractor defines which context keys should be extracted and logged. This prevents the performance overhead of scanning all context values.
type ContextKey ¶
type ContextKey string
ContextKey represents a key type for context values that should be logged.
const (
	RequestIDKey ContextKey = "request_id"
	TraceIDKey   ContextKey = "trace_id"
	SpanIDKey    ContextKey = "span_id"
	UserIDKey    ContextKey = "user_id"
	SessionIDKey ContextKey = "session_id"
)
Common context keys for standardized logging
type ContextLogger ¶
type ContextLogger struct {
// contains filtered or unexported fields
}
ContextLogger wraps a Logger with pre-extracted context fields. This avoids context.Value() calls in the hot logging path.
func (*ContextLogger) Debug ¶
func (cl *ContextLogger) Debug(msg string, fields ...Field)
Debug logs a message at debug level with context fields
func (*ContextLogger) Error ¶
func (cl *ContextLogger) Error(msg string, fields ...Field)
Error logs a message at error level with context fields
func (*ContextLogger) Fatal ¶
func (cl *ContextLogger) Fatal(msg string, fields ...Field)
Fatal logs a message at fatal level with context fields and exits
func (*ContextLogger) Info ¶
func (cl *ContextLogger) Info(msg string, fields ...Field)
Info logs a message at info level with context fields
func (*ContextLogger) Warn ¶
func (cl *ContextLogger) Warn(msg string, fields ...Field)
Warn logs a message at warn level with context fields
func (*ContextLogger) With ¶
func (cl *ContextLogger) With(fields ...Field) *ContextLogger
With creates a new ContextLogger with additional fields. This preserves both context fields and manually added fields.
func (*ContextLogger) WithAdditionalContext ¶
func (cl *ContextLogger) WithAdditionalContext(ctx context.Context, extractor *ContextExtractor) *ContextLogger
WithAdditionalContext extracts additional context values without losing existing ones.
type DynamicConfigWatcher ¶
type DynamicConfigWatcher struct {
// contains filtered or unexported fields
}
DynamicConfigWatcher manages dynamic configuration changes using Argus. It provides real-time hot reload of the Iris logger configuration with an audit trail.
func EnableDynamicLevel ¶
func EnableDynamicLevel(logger *Logger, configPath string) (*DynamicConfigWatcher, error)
EnableDynamicLevel creates and starts a config watcher for the given logger and config file. This is a convenience function that combines NewDynamicConfigWatcher and Start.
Example:
logger, err := iris.New(config)
if err != nil {
    return err
}

watcher, err := iris.EnableDynamicLevel(logger, "config.json")
if err != nil {
    log.Printf("Dynamic level disabled: %v", err)
} else {
    defer watcher.Stop()
    log.Println("✅ Dynamic level changes enabled!")
}
func NewDynamicConfigWatcher ¶
func NewDynamicConfigWatcher(configPath string, atomicLevel *AtomicLevel) (*DynamicConfigWatcher, error)
NewDynamicConfigWatcher creates a new dynamic config watcher for the iris logger. It enables runtime log level changes by watching the configuration file.
Parameters:
- configPath: Path to the JSON configuration file to watch
- atomicLevel: The atomic level instance from iris logger
Example usage:
logger, err := iris.New(config)
if err != nil {
return err
}
watcher, err := iris.NewDynamicConfigWatcher("config.json", logger.Level())
if err != nil {
return err
}
defer watcher.Stop()
if err := watcher.Start(); err != nil {
return err
}
Once the watcher is running, modifying the "level" field in config.json updates the logger's level automatically, without a restart.
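The mechanics can be sketched without Argus: when the file changes, re-read the JSON, parse the "level" field, and store the result atomically so producers pick it up lock-free. The names below are illustrative, not the iris API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
	"sync/atomic"
)

// atomicLevel holds the minimum level; log calls read it lock-free.
type atomicLevel struct{ v atomic.Int32 }

func (a *atomicLevel) Set(l int32) { a.v.Store(l) }
func (a *atomicLevel) Get() int32  { return a.v.Load() }

// applyConfig parses {"level": "..."} and updates the atomic level.
// A real watcher would call this from its file-change callback.
func applyConfig(raw []byte, lvl *atomicLevel) error {
	var cfg struct {
		Level string `json:"level"`
	}
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return err
	}
	switch strings.ToLower(cfg.Level) {
	case "debug":
		lvl.Set(-1)
	case "", "info":
		lvl.Set(0)
	case "warn":
		lvl.Set(1)
	case "error":
		lvl.Set(2)
	default:
		return fmt.Errorf("unknown level %q", cfg.Level)
	}
	return nil
}

func main() {
	var lvl atomicLevel
	_ = applyConfig([]byte(`{"level":"warn"}`), &lvl)
	fmt.Println(lvl.Get()) // 1
}
```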
func (*DynamicConfigWatcher) IsRunning ¶
func (w *DynamicConfigWatcher) IsRunning() bool
IsRunning returns true if the watcher is currently active
func (*DynamicConfigWatcher) Start ¶
func (w *DynamicConfigWatcher) Start() error
Start begins watching the configuration file for changes
func (*DynamicConfigWatcher) Stop ¶
func (w *DynamicConfigWatcher) Stop() error
Stop stops watching the configuration file
type ErrorHandler ¶
type ErrorHandler func(err *errors.Error)
ErrorHandler represents a function that handles errors within the logging system
func GetErrorHandler ¶
func GetErrorHandler() ErrorHandler
GetErrorHandler returns the current error handler
type Field ¶
type Field struct {
// K is the field key/name
K string
// T indicates the type of data stored in this field
T kind
// I64 stores signed integers, bools (as 0/1), durations, and timestamps
I64 int64
// U64 stores unsigned integers
U64 uint64
// F64 stores floating-point numbers
F64 float64
// Str stores string values
Str string
// B stores byte slices
B []byte
// Obj stores arbitrary objects (errors, stringers, etc.)
Obj interface{}
}
Field represents a key-value pair with type information for structured logging. It uses a union-like approach to minimize memory allocation and maximize performance. The T field indicates which of the value fields (I64, U64, F64, Str, B, Obj) contains the actual data.
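A condensed sketch of the union idea (illustrative types, not the exact iris internals): the tag selects which slot holds the value, so storing a bool costs an int64 write rather than an interface allocation:

```go
package main

import "fmt"

type kind uint8

const (
	kindBool kind = iota
	kindString
)

// field mirrors the union-like layout: one tag plus typed slots.
type field struct {
	K   string
	T   kind
	I64 int64 // bools live here as 0/1
	Str string
}

// boolField stores a bool in the int64 slot, avoiding boxing.
func boolField(key string, v bool) field {
	var n int64
	if v {
		n = 1
	}
	return field{K: key, T: kindBool, I64: n}
}

// boolValue reads the slot back, returning false for non-bool fields.
func (f field) boolValue() bool { return f.T == kindBool && f.I64 == 1 }

func main() {
	f := boolField("ok", true)
	fmt.Println(f.boolValue()) // true
}
```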
func Bool ¶
Bool creates a boolean field. Internally stored as int64 (1 for true, 0 for false) for efficiency.
func Bytes ¶
Bytes creates a byte slice field. Useful for binary data, encoded strings, or raw bytes.
func Dur ¶
Dur creates a duration field from time.Duration. Stored as int64 nanoseconds for precision and efficiency.
func Err ¶
Err creates an error field with key "error". If err is nil, returns a field with empty string (compatible but not elided).
func ErrorField ¶
ErrorField creates an error field for logging errors. Equivalent to NamedErr("error", err) but uses the proper error type for potential optimization.
func Float64 ¶
Float64 creates a 64-bit floating-point field. Suitable for decimal numbers and scientific notation.
func Int ¶
Int creates a signed integer field from an int value. The int is converted to int64 for consistent storage.
func Int64 ¶
Int64 creates a signed 64-bit integer field. Use this for large integers or when you specifically need int64.
func NamedErr ¶
NamedErr creates an error field with a custom key. If err is nil, returns a field with empty string (compatible but not elided).
func NamedError ¶
NamedError creates an error field with a custom key using proper error type.
func Secret ¶
Secret creates a field for sensitive data that will be automatically redacted. The actual value is stored but will appear as "[REDACTED]" in log output. Use this for passwords, API keys, tokens, personal data, or any sensitive information.
Example:
logger.Info("User login", iris.Secret("password", userPassword))
// Output: {"level":"info","msg":"User login","password":"[REDACTED]"}
Security: This prevents accidental exposure of sensitive data in logs while maintaining the field structure for debugging purposes.
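One way to implement such redaction (a sketch, not the iris encoder): keep the value in the field but have the encoder branch on a secret flag, so the raw value never reaches the output stream:

```go
package main

import "fmt"

type field struct {
	Key    string
	Val    string
	Secret bool
}

// encode renders fields as a JSON-ish line, replacing secret values
// with a fixed marker so the raw value never reaches the output.
func encode(msg string, fields ...field) string {
	out := fmt.Sprintf("{\"msg\":%q", msg)
	for _, f := range fields {
		v := f.Val
		if f.Secret {
			v = "[REDACTED]"
		}
		out += fmt.Sprintf(",%q:%q", f.Key, v)
	}
	return out + "}"
}

func main() {
	fmt.Println(encode("User login", field{Key: "password", Val: "hunter2", Secret: true}))
	// {"msg":"User login","password":"[REDACTED]"}
}
```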
func Str ¶
Str creates a string field for logging. This is one of the most commonly used field types.
func TimeField ¶
TimeField creates a timestamp field from time.Time. Stored as Unix nanoseconds for high precision and compact representation.
func Uint64 ¶
Uint64 creates an unsigned 64-bit integer field. Use this for non-negative values that may exceed int64 range.
func (Field) BoolValue ¶
BoolValue returns the boolean value if the field is a bool, false otherwise.
func (Field) BytesValue ¶
BytesValue returns the byte slice value if the field is bytes, nil otherwise.
func (Field) DurationValue ¶
DurationValue returns the time.Duration value if the field is a duration, 0 otherwise.
func (Field) FloatValue ¶
FloatValue returns the float64 value if the field is a float, 0.0 otherwise.
func (Field) IsDuration ¶
IsDuration returns true if the field contains duration data.
func (Field) StringValue ¶
StringValue returns the string value if the field is a string, empty string otherwise.
type Hook ¶
type Hook func(rec *Record)
Hook represents a function executed in the consumer thread after log record processing.
Hooks are executed in the consumer thread to avoid contention with producer threads. This design ensures maximum performance for logging operations while still allowing powerful post-processing capabilities.
Hook functions receive the fully populated Record after encoding but before the buffer is returned to the pool. This allows for:
- Metrics collection
- Log forwarding to external systems
- Custom processing based on log content
- Development-time debugging
Performance Notes:
- Executed in single consumer thread (no locks needed)
- Called after encoding is complete
- Should avoid blocking operations to maintain throughput
Thread Safety: Hooks are called from single consumer thread only
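The single-consumer design can be sketched as follows (illustrative code, not the iris internals): records flow through one goroutine, which runs every hook in order after processing each record, so hooks never need locks:

```go
package main

import "fmt"

type record struct {
	Level int
	Msg   string
}

type hook func(*record)

// consume drains records in a single goroutine; hooks run
// sequentially after encoding (elided here), so they never
// race with producer threads.
func consume(records []record, hooks []hook) {
	for i := range records {
		// ... encode and write records[i] ...
		for _, h := range hooks {
			h(&records[i])
		}
	}
}

func main() {
	errors := 0
	countErrors := func(r *record) {
		if r.Level >= 2 { // Error and above
			errors++
		}
	}
	consume([]record{{Level: 0, Msg: "ok"}, {Level: 2, Msg: "boom"}}, []hook{countErrors})
	fmt.Println(errors) // 1
}
```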
type IdleStrategy ¶
type IdleStrategy = zephyroslite.IdleStrategy
IdleStrategy defines the interface for consumer idle behavior. This type alias exposes the internal interface for configuration purposes.
func NewChannelIdleStrategy ¶
func NewChannelIdleStrategy(timeout time.Duration) IdleStrategy
NewChannelIdleStrategy creates an efficient blocking wait strategy. This strategy puts the consumer goroutine into an efficient wait state using Go channels, providing near-zero CPU usage when idle.
Parameters:
- timeout: Maximum time to wait before checking for shutdown (0 = no timeout)
Best for: Minimum CPU usage with acceptable latency in low-throughput scenarios
CPU Usage: Near 0% when idle
Latency: ~microseconds (channel wake-up time)
Note: This strategy works best with lower throughput workloads where the overhead of channel operations is acceptable.
Examples:
// No timeout - maximum efficiency
NewChannelIdleStrategy(0)

// With timeout for responsive shutdown
NewChannelIdleStrategy(100*time.Millisecond)
func NewProgressiveIdleStrategy ¶
func NewProgressiveIdleStrategy() IdleStrategy
NewProgressiveIdleStrategy creates an adaptive idle strategy. This strategy automatically adjusts its behavior based on work patterns, starting with spinning for ultra-low latency and progressively reducing CPU usage as idle time increases.
This is the default strategy, providing good performance for most workloads without requiring manual tuning.
Best for: Variable workload patterns where both low latency and low CPU usage are important
CPU Usage: Adaptive - starts high, reduces over time when idle
Latency: Starts at minimum, increases gradually when idle
Behavior:
- Hot spin for first 1000 iterations (minimum latency)
- Occasional yielding up to 10000 iterations
- Progressive sleep with exponential backoff
- Resets to hot spin when work is found
Example:
config := &Config{
IdleStrategy: NewProgressiveIdleStrategy(),
// ... other config
}
func NewSleepingIdleStrategy ¶
func NewSleepingIdleStrategy(sleepDuration time.Duration, maxSpins int) IdleStrategy
NewSleepingIdleStrategy creates a CPU-efficient idle strategy with controlled latency. This strategy reduces CPU usage by sleeping when no work is available, with optional initial spinning for hybrid behavior.
Parameters:
- sleepDuration: How long to sleep when no work is found (e.g., time.Millisecond)
- maxSpins: Number of spin iterations before sleeping (0 = sleep immediately)
Best for: Balanced CPU usage and latency in production environments
CPU Usage: ~1-10% depending on sleep duration and spin count
Latency: ~1-10ms depending on sleep duration
Examples:
// Low CPU usage, higher latency
NewSleepingIdleStrategy(5*time.Millisecond, 0)

// Hybrid: spin briefly then sleep
NewSleepingIdleStrategy(time.Millisecond, 1000)
func NewSpinningIdleStrategy ¶
func NewSpinningIdleStrategy() IdleStrategy
NewSpinningIdleStrategy creates an ultra-low latency idle strategy. This strategy provides the minimum possible latency by continuously checking for work without ever yielding the CPU.
Best for: Ultra-low latency requirements where CPU consumption is not a concern
CPU Usage: ~100% of one core when idle
Latency: Minimum possible (~nanoseconds)
Example:
config := &Config{
IdleStrategy: NewSpinningIdleStrategy(),
// ... other config
}
func NewYieldingIdleStrategy ¶
func NewYieldingIdleStrategy(maxSpins int) IdleStrategy
NewYieldingIdleStrategy creates a moderate CPU reduction strategy. This strategy spins for a configurable number of iterations before yielding to the Go scheduler, providing a middle ground between spinning and sleeping approaches.
Parameters:
- maxSpins: Number of spins before yielding to scheduler
Best for: Moderate CPU reduction while maintaining reasonable latency
CPU Usage: ~10-50% depending on max spins configuration
Latency: ~microseconds to low milliseconds
Examples:
// More aggressive yielding (lower CPU, higher latency)
NewYieldingIdleStrategy(100)

// Conservative yielding (higher CPU, lower latency)
NewYieldingIdleStrategy(10000)
type JSONEncoder ¶
type JSONEncoder struct {
TimeKey string // default "ts"
LevelKey string // default "level"
MsgKey string // default "msg"
RFC3339 bool // default true (alternative: UnixNano int64)
}
JSONEncoder implements NDJSON (one line per record) with zero-reflection encoding
func NewJSONEncoder ¶
func NewJSONEncoder() *JSONEncoder
NewJSONEncoder creates a new JSONEncoder with optimal defaults
type Level ¶
type Level int32
Level represents the severity level of a log message. Levels are ordered from least to most severe: Debug < Info < Warn < Error < DPanic < Panic < Fatal
Performance Notes:
- Level is implemented as int32 for fast comparisons
- Atomic operations are used for thread-safe level changes
- Zero allocation for level checks via inlined comparisons
const (
	Debug Level = iota - 1 // Debug information, typically disabled in production
	Info                   // General information messages
	Warn                   // Warning messages for potentially harmful situations
	Error                  // Error messages for failure conditions
	DPanic                 // Development panic - panics in development, errors in production
	Panic                  // Panic level - logs message then panics
	Fatal                  // Fatal level - logs message then calls os.Exit(1)

	// StacktraceDisabled is a sentinel value used to disable stack trace collection
	StacktraceDisabled Level = -999
)
Log levels in order of increasing severity
func AllLevels ¶
func AllLevels() []Level
AllLevels returns a slice of all valid levels in ascending order. This is useful for documentation, validation, and testing.
func ParseLevel ¶
ParseLevel parses a string representation of a level and returns the corresponding Level. It handles common aliases and is case-insensitive. Returns Info level for empty strings as a sensible default.
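A sketch of such a parser (the aliases and level constants here are illustrative, mirroring the documented ordering):

```go
package main

import (
	"fmt"
	"strings"
)

type level int32

const (
	levelDebug level = iota - 1
	levelInfo
	levelWarn
	levelError
)

// parseLevel is case-insensitive, handles common aliases, and
// defaults to Info for the empty string.
func parseLevel(s string) (level, error) {
	switch strings.ToLower(strings.TrimSpace(s)) {
	case "debug", "dbg":
		return levelDebug, nil
	case "", "info":
		return levelInfo, nil
	case "warn", "warning":
		return levelWarn, nil
	case "error", "err":
		return levelError, nil
	}
	return levelInfo, fmt.Errorf("unknown level %q", s)
}

func main() {
	l, _ := parseLevel("WARN")
	fmt.Println(l) // 1
}
```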
func (Level) Enabled ¶
Enabled determines if this level is enabled given a minimum level. This is a critical hot path function optimized for maximum performance.
func (Level) IsDPanic ¶
IsDPanic returns true if the level is DPanic. Convenience method for checking development panic level.
func (Level) IsDebug ¶
IsDebug returns true if the level is Debug. Convenience method for frequently checked debug level.
func (Level) IsError ¶
IsError returns true if the level is Error. Convenience method for frequently checked error level.
func (Level) IsFatal ¶
IsFatal returns true if the level is Fatal. Convenience method for checking fatal level.
func (Level) IsInfo ¶
IsInfo returns true if the level is Info. Convenience method for frequently checked info level.
func (Level) IsPanic ¶
IsPanic returns true if the level is Panic. Convenience method for checking panic level.
func (Level) IsWarn ¶
IsWarn returns true if the level is Warn. Convenience method for frequently checked warn level.
func (Level) MarshalText ¶
MarshalText implements encoding.TextMarshaler for JSON/XML serialization. This method is optimized to avoid allocations in the common case.
func (Level) String ¶
String returns the string representation of the level. This is used for human-readable output and serialization.
func (*Level) UnmarshalText ¶
UnmarshalText implements encoding.TextUnmarshaler for JSON/XML deserialization. This method provides detailed error information for debugging.
type LevelFlag ¶
type LevelFlag struct {
// contains filtered or unexported fields
}
LevelFlag is a command-line flag implementation for Level. It implements the flag.Value interface for easy CLI integration.
func NewLevelFlag ¶
NewLevelFlag creates a new LevelFlag pointing to the given Level.
func (*LevelFlag) Set ¶
Set parses and sets the level from a string. This method is called by the flag package when parsing command-line arguments.
type Logger ¶
type Logger struct {
// contains filtered or unexported fields
}
Logger provides ultra-high performance logging with zero-allocation structured fields.
The Logger uses a lock-free MPSC (Multiple Producer, Single Consumer) ring buffer for maximum throughput. Multiple goroutines can log concurrently while a single background goroutine processes and outputs the log records.
Thread Safety:
- All logging methods (Debug, Info, Warn, Error) are thread-safe
- Multiple goroutines can log concurrently without locks
- Configuration changes (SetLevel) are atomic and thread-safe
Performance Features:
- Zero allocations for structured logging with pre-allocated fields
- Lock-free atomic operations for level checking
- Intelligent sampling to reduce log volume
- Efficient buffer pooling to minimize GC pressure
- Adaptive batching based on log volume
- Context inheritance with With() for repeated fields
Lifecycle:
- Create with New() - configures but doesn't start processing
- Call Start() to begin background processing
- Use logging methods (Debug, Info, etc.) for actual logging
- Call Close() for graceful shutdown with guaranteed log processing
func New ¶
New creates a new high-performance logger with the specified configuration and options.
The logger is created but not started - call Start() to begin processing. This separation allows for configuration verification and testing setup before actual log processing begins.
Parameters:
- cfg: Logger configuration with output, encoding, and performance settings
- opts: Optional configuration functions for advanced features
The configuration is validated and enhanced with intelligent defaults:
- Missing TimeFn defaults to time.Now
- Zero BatchSize gets auto-sized based on Capacity
- Nil Output or Encoder will cause an error
Returns:
- *Logger: Configured logger ready for Start()
- error: Configuration validation error
Example:
logger, err := iris.New(iris.Config{
Level: iris.Info,
Output: os.Stdout,
Encoder: iris.NewJSONEncoder(),
Capacity: 8192,
}, iris.WithCaller(), iris.Development())
if err != nil {
return err
}
logger.Start()
func (*Logger) AtomicLevel ¶
func (l *Logger) AtomicLevel() *AtomicLevel
AtomicLevel returns a pointer to the logger's atomic level.
This method provides access to the underlying atomic level structure, which can be used with dynamic configuration watchers like Argus to enable runtime level changes without logger restarts.
Returns:
- *AtomicLevel: Pointer to the atomic level instance
Example usage with dynamic config watching:
watcher, err := iris.EnableDynamicLevel(logger, "config.json")
if err != nil {
log.Printf("Dynamic level disabled: %v", err)
} else {
defer watcher.Stop()
log.Println("✅ Dynamic level changes enabled!")
}
Thread Safety: The returned AtomicLevel is thread-safe
func (*Logger) Close ¶
Close gracefully shuts down the logger.
This method stops the background processing goroutine and ensures all buffered log records are processed before shutdown. The shutdown is deterministic - Close() will not return until all pending logs have been written to the output.
After Close() is called:
- All subsequent logging operations will fail silently
- The ring buffer becomes unusable
- All buffered records are guaranteed to be processed
The method is idempotent - calling Close() multiple times is safe.
Close should be called when the logger is no longer needed.
Performance Characteristics:
- Blocks until all pending records are processed
- Automatically syncs output before closing
- Cannot be used after Close() is called
Thread Safety: Safe to call from multiple goroutines
func (*Logger) DPanic ¶
DPanic logs a message at a special development panic level.
DPanic (Development Panic) logs at Error level but panics if the logger is in development mode. This allows for aggressive error detection during development while maintaining stability in production.
Behavior:
- Development mode: Logs and then panics
- Production mode: Logs only (no panic)
Parameters:
- msg: Primary log message
- fields: Structured key-value pairs (zero-allocation)
Performance: Same as Error level logging, with a conditional panic
Zap compatibility: DPanic/Panic/Fatal have dedicated levels
func (*Logger) Debug ¶
Debug logs a message at Debug level with structured fields.
Debug level is intended for detailed diagnostic information useful during development and troubleshooting. These messages are typically disabled in production environments.
Parameters:
- msg: Primary log message
- fields: Structured key-value pairs (zero-allocation)
Returns:
- bool: true if successfully logged, false if dropped or filtered
Performance: Optimized for zero allocations with pre-allocated field storage
func (*Logger) Error ¶
Error logs a message at Error level with structured fields.
Error level is intended for error events that allow the application to continue running. These messages indicate failures that need immediate attention but don't crash the application.
Parameters:
- msg: Primary log message
- fields: Structured key-value pairs (zero-allocation)
Returns:
- bool: true if successfully logged, false if dropped or filtered
Performance: Optimized for zero allocations with pre-allocated field storage
func (*Logger) Info ¶
Info logs a message at Info level with structured fields.
Info level is intended for general information about program execution. These messages provide insight into application flow and important events.
Parameters:
- msg: Primary log message
- fields: Structured key-value pairs (zero-allocation)
Returns:
- bool: true if successfully logged, false if dropped or filtered
Performance: Zero allocations for simple messages, optimized fast path for messages with fields
func (*Logger) InfoFields ¶
InfoFields logs a message at Info level with structured fields.
This method supports structured logging with key-value pairs for detailed context. Use the simpler Info() method for messages without fields to achieve zero allocations.
Performance: Optimized for zero allocations with pre-allocated field storage
func (*Logger) Level ¶
Level atomically reads the current minimum logging level.
Returns the current minimum level threshold used for filtering log messages. Messages below this level are discarded early for maximum performance.
Returns:
- Level: Current minimum logging level
Performance Notes:
- Atomic load operation
- Zero allocations
- Sub-nanosecond read performance
Thread Safety: Safe to call from multiple goroutines
func (*Logger) Named ¶
Named creates a new logger with the specified name.
Named loggers are useful for organizing logs by component, module, or functionality. The name typically appears in log output to help with filtering and analysis.
Parameters:
- name: Name to assign to the new logger instance
Returns:
- *Logger: New logger instance with the specified name
Example:
dbLogger := logger.Named("database")
apiLogger := logger.Named("api")
dbLogger.Info("Connection established") // Includes "database" context
Performance Notes:
- String assignment only (minimal overhead)
- Name is included in log output by encoder
- Zero allocations during normal operation
Thread Safety: Safe to call from multiple goroutines
func (*Logger) SetLevel ¶
SetLevel atomically changes the minimum logging level.
This method allows dynamic level adjustment during runtime without restarting the logger. Level changes take effect immediately for subsequent log operations.
Parameters:
- min: New minimum level (Debug, Info, Warn, Error)
Performance Notes:
- Atomic operation with no locks or allocations
- Sub-nanosecond level changes
- Thread-safe concurrent access
Thread Safety: Safe to call from multiple goroutines
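The lock-free mechanism can be sketched with sync/atomic (illustrative types, not the iris implementation): SetLevel stores the new threshold, and the per-call level check loads it, so readers in other goroutines observe the change on their next check:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type logger struct {
	min atomic.Int32 // minimum level, read on every log call
}

// SetLevel swaps the threshold atomically: no locks, and the
// change takes effect for all subsequent log operations.
func (l *logger) SetLevel(min int32) { l.min.Store(min) }

// enabled is the hot-path check producers run before doing any work.
func (l *logger) enabled(lvl int32) bool { return lvl >= l.min.Load() }

func main() {
	var l logger
	l.SetLevel(1)                           // Warn
	fmt.Println(l.enabled(0), l.enabled(2)) // false true
}
```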
func (*Logger) Start ¶
func (l *Logger) Start()
Start begins background processing of log records.
This method starts the consumer goroutine that processes log records from the ring buffer and writes them to the configured output. The method is idempotent - calling Start() multiple times is safe and has no effect after the first call.
The consumer goroutine will continue processing until Close() is called. All logging operations require Start() to be called first, otherwise log records will accumulate in the ring buffer without being processed.
Performance Notes:
- Uses lock-free atomic operations for state management
- Single consumer goroutine eliminates lock contention
- Processing begins immediately after Start() returns
Thread Safety: Safe to call from multiple goroutines
func (*Logger) Stats ¶
Stats returns comprehensive performance statistics for monitoring.
This method provides real-time metrics about logger performance, buffer utilization, and operational health. The statistics are collected atomically and can be safely called from multiple goroutines.
Returns:
- map[string]int64: Performance metrics including:
- Ring buffer statistics (capacity, utilization, etc.)
- Dropped message count
- Processing throughput metrics
- Memory usage indicators
The returned map contains:
- "dropped": Number of messages dropped due to ring buffer full
- "writer_position": Current writer position in ring buffer
- "reader_position": Current reader position in ring buffer
- "buffer_size": Ring buffer capacity
- "items_buffered": Number of items waiting to be processed
- "utilization_percent": Buffer utilization percentage
- Additional ring buffer specific statistics
Performance: Atomic reads with zero allocations for metric collection
func (*Logger) Sync ¶
Sync flushes any buffered log entries.
This method ensures that all buffered log entries are written to their destination. It's useful before program termination or when immediate log delivery is required.
Returns:
- error: Any error encountered during synchronization
Performance Notes:
- May block until all buffers are flushed
- Should be called sparingly in hot paths
- Automatically called during Close()
Thread Safety: Safe to call from multiple goroutines
func (*Logger) Warn ¶
Warn logs a message at Warn level with structured fields.
Warn level is intended for potentially harmful situations that don't prevent the application from continuing. These messages indicate conditions that should be investigated.
Parameters:
- msg: Primary log message
- fields: Structured key-value pairs (zero-allocation)
Returns:
- bool: true if successfully logged, false if dropped or filtered
Performance: Optimized for zero allocations with pre-allocated field storage
func (*Logger) With ¶
With creates a new logger with additional structured fields.
This method creates a new logger instance that automatically includes the specified fields in every log message. This is useful for adding context that applies to multiple log statements, such as request IDs, user IDs, or component names.
Parameters:
- fields: Structured fields to include in all log messages
Returns:
- *Logger: New logger instance with pre-populated fields
Implementation Note: The fields are stored in the logger and applied to each log record during the logging operation.
Example:
requestLogger := logger.With(
iris.String("request_id", reqID),
iris.String("user_id", userID),
)
requestLogger.Info("Processing request") // Includes request_id and user_id
Performance Notes:
- Fields are stored once in logger instance
- Applied during each log operation (small overhead)
- Zero allocations for field storage in logger
Thread Safety: Safe to call from multiple goroutines
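The context-inheritance idea can be sketched as follows (illustrative, not the iris implementation): the child copies the parent's bound fields once at creation, then prepends them to every record, leaving the parent untouched:

```go
package main

import "fmt"

type field struct{ K, V string }

type logger struct {
	bound []field // fields applied to every record
}

// With returns a child logger whose bound fields are the parent's
// plus the new ones; copying keeps the parent immutable.
func (l *logger) With(fields ...field) *logger {
	child := &logger{bound: make([]field, 0, len(l.bound)+len(fields))}
	child.bound = append(child.bound, l.bound...)
	child.bound = append(child.bound, fields...)
	return child
}

// Info renders bound fields before per-call fields.
func (l *logger) Info(msg string, fields ...field) string {
	out := msg
	for _, f := range append(l.bound, fields...) {
		out += fmt.Sprintf(" %s=%s", f.K, f.V)
	}
	return out
}

func main() {
	base := &logger{}
	req := base.With(field{"request_id", "abc123"})
	fmt.Println(req.Info("Processing request"))
	// Processing request request_id=abc123
}
```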
func (*Logger) WithContext ¶
func (l *Logger) WithContext(ctx context.Context) *ContextLogger
WithContext creates a new ContextLogger with fields extracted from context. This is the recommended way to use context integration - extract once, log many times with the same context.
Performance: O(k) where k is number of configured keys, not context depth.
func (*Logger) WithContextExtractor ¶
func (l *Logger) WithContextExtractor(ctx context.Context, extractor *ContextExtractor) *ContextLogger
WithContextExtractor creates a ContextLogger with custom extraction rules.
func (*Logger) WithContextValue ¶
func (l *Logger) WithContextValue(ctx context.Context, key ContextKey, fieldName string) *ContextLogger
WithContextValue creates a ContextLogger with a single context value. Optimized for cases where you only need one context field.
func (*Logger) WithOptions ¶
WithOptions creates a new logger with the specified options applied.
This method clones the current logger and applies additional configuration options. The original logger is unchanged, ensuring immutable configuration and thread safety. The new logger shares the same ring buffer and output configuration but can have different caller, hook, and development settings.
Parameters:
- opts: Option functions to apply to the new logger instance
Returns:
- *Logger: New logger instance with applied options
Example:
devLogger := logger.WithOptions(
iris.WithCaller(),
iris.AddStacktrace(iris.Error),
iris.Development(),
)
Performance Notes:
- Clones logger configuration (minimal allocation)
- Shares ring buffer and output resources
- Options are applied once during creation
Thread Safety: Safe to call from multiple goroutines
func (*Logger) WithRequestID ¶
func (l *Logger) WithRequestID(ctx context.Context) *ContextLogger
WithRequestID extracts request ID with minimal allocations. Optimized for the most common use case.
func (*Logger) WithTraceID ¶
func (l *Logger) WithTraceID(ctx context.Context) *ContextLogger
WithTraceID extracts trace ID for distributed tracing.
func (*Logger) WithUserID ¶
func (l *Logger) WithUserID(ctx context.Context) *ContextLogger
WithUserID extracts user ID from context for user-specific logging.
func (*Logger) Write ¶
Write provides zero-allocation logging with a fill function.
This is the fastest logging method, allowing direct manipulation of a pre-allocated Record in the ring buffer. The fill function is called with a pointer to a Record that should be populated with log data.
Parameters:
- fill: Function to populate the log record (zero allocations)
Returns:
- bool: true if record was successfully queued, false if ring buffer full
Performance Features:
- Zero heap allocations during normal operation
- Direct record manipulation in ring buffer
- Lock-free atomic operations
- Fastest possible logging path
Example:
success := logger.Write(func(r *Record) {
r.Level = iris.Error
r.Msg = "Critical system error"
r.AddField(iris.String("component", "database"))
})
Thread Safety: Safe to call from multiple goroutines
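The fill-function pattern can be sketched with a pre-allocated buffer (a toy single-threaded ring, not the iris MPSC implementation): the caller mutates a claimed slot in place, so no record is allocated per log call:

```go
package main

import "fmt"

type record struct {
	Level int
	Msg   string
}

type ring struct {
	slots []record // pre-allocated once; reused forever
	w, r  int      // writer and reader positions
}

// write claims the next slot and lets fill populate it in place.
// Returns false when the buffer is full instead of blocking.
func (b *ring) write(fill func(*record)) bool {
	if b.w-b.r >= len(b.slots) {
		return false // full: drop rather than allocate or block
	}
	fill(&b.slots[b.w%len(b.slots)])
	b.w++
	return true
}

func main() {
	b := &ring{slots: make([]record, 2)}
	ok := b.write(func(r *record) {
		r.Level = 2
		r.Msg = "Critical system error"
	})
	fmt.Println(ok, b.slots[0].Msg) // true Critical system error
}
```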
type Option ¶
type Option func(*loggerOptions)
Option represents a function that modifies logger options during construction.
Options use the functional options pattern to provide a clean, extensible API for logger configuration. Each Option function modifies the options structure in place during logger creation or cloning.
Pattern Benefits:
- Backward compatible API evolution
- Clear, self-documenting configuration
- Composable option sets
- Type-safe configuration
Usage:
logger := logger.WithOptions(
iris.WithCaller(),
iris.AddStacktrace(iris.Error),
iris.Development(),
)
func AddStacktrace ¶
AddStacktrace enables stack trace capture for log levels at or above the specified minimum.
Stack traces provide detailed call stack information for debugging complex issues. They are automatically captured for severe log levels (typically Error and above) to aid in troubleshooting.
Parameters:
- min: Minimum log level for stack trace capture (Debug, Info, Warn, Error)
Performance Impact:
- Stack trace capture is expensive (runtime.Stack() call)
- Only enabled for specified log levels to minimize overhead
- Stack traces are captured in producer thread but processed in consumer
Returns:
- Option: Configuration function to enable stack trace capture
Example:
// Capture stack traces for Error level and above
logger := logger.WithOptions(iris.AddStacktrace(iris.Error))
logger.Error("critical error") // Will include stack trace
logger.Warn("warning") // No stack trace
func Development ¶
func Development() Option
Development enables development-specific behaviors for enhanced debugging.
Development mode changes logger behavior to be more suitable for development and testing environments:
- DPanic level causes panic() in addition to logging
- Enhanced error reporting and validation
- More verbose debugging information
This option should typically be disabled in production environments for optimal performance and stability.
Returns:
- Option: Configuration function to enable development mode
Example:
logger := logger.WithOptions(iris.Development())
logger.DPanic("development panic") // Will panic in dev mode, log in production
func WithCaller ¶
func WithCaller() Option
WithCaller enables caller information capture for log records.
When enabled, the logger will capture the file name, line number, and function name of the calling code for each log record. This information is added to the log output for debugging and troubleshooting.
Performance Impact:
- Adds runtime.Caller() call per log operation
- Minimal allocation for caller information
- Skip level optimization reduces overhead
Returns:
- Option: Configuration function to enable caller capture
Example:
logger := logger.WithOptions(iris.WithCaller())
logger.Info("message") // Will include caller info
func WithCallerSkip ¶
WithCallerSkip sets the number of stack frames to skip for caller detection.
This option is useful when the logger is wrapped by helper functions and you want the caller information to point to the actual calling code rather than the wrapper function.
Parameters:
- n: Number of stack frames to skip (negative values are treated as 0)
Common Skip Values:
- 0: Direct caller of log method
- 1: Skip one wrapper function
- 2+: Skip multiple wrapper layers
Returns:
- Option: Configuration function to set caller skip level
Example:
// Skip helper function to show actual caller
logger := logger.WithOptions(
iris.WithCaller(),
iris.WithCallerSkip(1),
)
func WithHook ¶
WithHook adds a post-processing hook to the logger.
Hooks are functions executed in the consumer thread after log records are processed but before buffers are returned to the pool. This design ensures zero contention with producer threads while enabling powerful post-processing.
Hook Use Cases:
- Metrics collection based on log content
- Log forwarding to external systems
- Custom alerting on specific log patterns
- Development-time debugging and validation
Parameters:
- h: Hook function to execute (nil hooks are ignored)
Performance Notes:
- Hooks are executed sequentially in consumer thread
- Should avoid blocking operations to maintain throughput
- No allocation overhead in producer threads
Returns:
- Option: Configuration function to add the hook
Example:
metricHook := func(rec *Record) {
if rec.Level >= iris.Error {
errorCounter.Inc()
}
}
logger := logger.WithOptions(iris.WithHook(metricHook))
type ProcessorFunc ¶
type ProcessorFunc func(record *Record)
ProcessorFunc defines the signature for record processing functions
This function is called for each log record that flows through the ring buffer. It should be efficient and avoid blocking operations to maintain high throughput.
Parameters:
- record: The log record to process (guaranteed non-nil)
Performance Notes:
- Called from the consumer thread only (single-threaded)
- Should avoid allocations and blocking operations
- Can safely access shared state (no concurrent access)
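A minimal sketch of a ProcessorFunc that collects metrics from records. The `Level` and `Record` types below are trimmed stand-ins for the documented shapes (the real iris Record carries more fields), and `countErrors` is a hypothetical driver; note that plain, non-atomic state is safe because the real processor runs only on the single consumer thread:

```go
package main

import "fmt"

// Sketch types mirroring the documented Record shape; the real iris
// Record also carries Logger, Caller, Stack, and field storage.
type Level int

const (
	Info Level = iota
	Warn
	Error
)

type Record struct {
	Level Level
	Msg   string
}

// ProcessorFunc matches the documented signature.
type ProcessorFunc func(record *Record)

// countErrors runs a metrics-style processor over a slice of records.
// Single-threaded consumer semantics make the plain counter safe.
func countErrors(recs []*Record) int {
	count := 0
	var process ProcessorFunc = func(record *Record) {
		if record.Level >= Error {
			count++
		}
	}
	for _, r := range recs {
		process(r)
	}
	return count
}

func main() {
	recs := []*Record{{Info, "ok"}, {Error, "boom"}, {Error, "again"}}
	fmt.Println(countErrors(recs)) // 2
}
```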
type Record ¶
type Record struct {
Level Level // Log level
Msg string // Log message
Logger string // Logger name
Caller string // Caller information (file:line)
Stack string // Stack trace
// contains filtered or unexported fields
}
Record represents a log entry with optimized field storage
func NewRecord ¶
NewRecord creates a new Record with the specified level and message. Uses pre-allocated field storage to avoid heap allocations during logging.
func (*Record) AddField ¶
AddField adds a structured field to this record. Returns false if the field array is full (32 fields max - optimal for performance).
func (*Record) FieldCount ¶
FieldCount returns the number of fields in this record.
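The AddField contract (32 pre-allocated slots, false on overflow) can be sketched with a fixed array. The internal layout below is illustrative only, not the real iris implementation:

```go
package main

import "fmt"

// Field is a simplified stand-in for the iris field type.
type Field struct {
	Key   string
	Value any
}

// Record sketches the documented storage: a fixed array avoids heap
// allocation, at the cost of a hard 32-field cap.
type Record struct {
	Msg    string
	fields [32]Field
	n      int
}

// AddField stores a field in the next free slot; it returns false when
// all 32 slots are in use, so callers can detect dropped fields.
func (r *Record) AddField(f Field) bool {
	if r.n >= len(r.fields) {
		return false
	}
	r.fields[r.n] = f
	r.n++
	return true
}

// FieldCount reports how many fields have been stored.
func (r *Record) FieldCount() int { return r.n }

func main() {
	rec := &Record{Msg: "demo"}
	for i := 0; i < 40; i++ {
		rec.AddField(Field{Key: fmt.Sprintf("k%d", i), Value: i})
	}
	fmt.Println(rec.FieldCount(), rec.AddField(Field{Key: "extra"})) // 32 false
}
```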
type Ring ¶
type Ring struct {
// contains filtered or unexported fields
}
Ring provides ultra-high performance logging with embedded Zephyros Light
The Ring uses the embedded ZephyrosLight engine to provide optimal performance for logging operations while eliminating external dependencies and maintaining the core features needed for high-performance logging.
Embedded Zephyros Light Features:
- Single ring architecture optimized for logging
- ~15-20ns/op performance (vs 9ns commercial, 25ns previous)
- Zero heap allocations during normal operation
- Lock-free atomic operations for maximum throughput
- Fixed batch processing (simplified vs adaptive)
Architecture Simplification:
- SingleRing only (ThreadedRings removed - commercial feature)
- Simplified configuration (fewer options, better defaults)
- Embedded implementation (no external dependencies)
Performance Characteristics:
- Zero heap allocations during normal operation
- Lock-free atomic operations for maximum throughput
- Fixed batching optimized for logging workloads
- Simplified spinning strategy for low latency
func (*Ring) Close ¶
func (r *Ring) Close()
Close gracefully shuts down the ring buffer
This method signals the consumer to stop processing and ensures all buffered records are processed before shutdown. It is safe to call multiple times and from multiple goroutines.
After Close() is called:
- Write() will return false for all subsequent calls
- Loop() will process all remaining records and then exit
- The ring buffer becomes unusable
Shutdown Guarantees:
- All buffered records are processed before shutdown
- Multiple Close() calls are safe (idempotent)
- Deterministic shutdown behavior for testing
func (*Ring) Flush ¶
Flush ensures all pending writes are visible to the consumer
In the embedded ZephyrosLight architecture, this method ensures that all writes from producer threads are visible to the consumer thread. This is primarily useful for testing and ensuring deterministic behavior.
Note: In normal operation, flushing is automatic and this method exists primarily for API compatibility and testing scenarios.
func (*Ring) Loop ¶
func (r *Ring) Loop()
Loop starts the record processing loop (CONSUMER THREAD ONLY)
This method should be called from exactly one goroutine to consume and process log records. The embedded ZephyrosLight implements an optimized spinning strategy for balanced performance and CPU usage.
The loop continues until Close() is called, after which it processes all remaining records before exiting.
Performance Features:
- Fixed batching optimized for logging workloads
- Simplified idle strategy to minimize CPU usage
- Guaranteed processing of all records during shutdown
Warning: Only call this method from one goroutine per ring buffer. Multiple consumers will cause race conditions and data loss.
func (*Ring) ProcessBatch ¶
ProcessBatch processes a single batch of records and returns the count
This method is useful for custom consumer implementations that need fine-grained control over processing timing. It processes up to batchSize records in a single call using the embedded ZephyrosLight engine.
Returns:
- int: Number of records processed in this batch (0 if no records available)
Note: This is a lower-level method. Most applications should use Loop() which handles the complete consumer lifecycle automatically.
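The ProcessBatch contract can be mimicked on a plain buffered channel: process up to batchSize queued records per call and return the number handled, with 0 meaning the queue is empty. This is an analogy for custom-consumer pacing, not the iris ring implementation; `drainBatch` is a hypothetical name:

```go
package main

import "fmt"

// drainBatch handles up to batchSize queued records in one call and
// returns how many it processed (0 when nothing is waiting), echoing
// the documented ProcessBatch semantics.
func drainBatch(queue <-chan string, batchSize int, process func(string)) int {
	n := 0
	for n < batchSize {
		select {
		case rec := <-queue:
			process(rec)
			n++
		default:
			return n // no more records available right now
		}
	}
	return n
}

func main() {
	queue := make(chan string, 8)
	for _, m := range []string{"a", "b", "c", "d", "e"} {
		queue <- m
	}
	noop := func(string) {}
	fmt.Println(drainBatch(queue, 4, noop)) // 4: full batch
	fmt.Println(drainBatch(queue, 4, noop)) // 1: remainder
	fmt.Println(drainBatch(queue, 4, noop)) // 0: empty
}
```

A custom consumer would call this in a loop, sleeping or backing off whenever the return value is 0.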
func (*Ring) Stats ¶
Stats returns detailed performance statistics for monitoring and debugging
The returned map contains real-time metrics about the embedded ZephyrosLight ring buffer's performance and current state. This is useful for monitoring, alerting, and performance optimization.
Returned Statistics:
- "writer_position": Last claimed sequence number
- "reader_position": Current reader position
- "buffer_size": Total ring buffer capacity
- "items_buffered": Number of records waiting to be processed
- "items_processed": Total records processed
- "items_dropped": Total records dropped due to full buffer
- "closed": Ring buffer closed state (0=open, 1=closed)
- "capacity": Configured ring capacity
- "batch_size": Configured batch size
- "utilization_percent": Buffer utilization percentage
- "engine": "zephyros_light" (embedded engine identifier)
Returns:
- map[string]int64: Real-time performance statistics
Example:
stats := ring.Stats()
fmt.Printf("Buffer utilization: %d%%\n", stats["utilization_percent"])
fmt.Printf("Items buffered: %d\n", stats["items_buffered"])
func (*Ring) Write ¶
Write adds a log record to the ring buffer using zero-allocation pattern
The fill function is called with a pointer to a pre-allocated Record in the embedded Zephyros Light ring buffer. This avoids any heap allocations during logging operations while providing excellent performance.
The function is thread-safe and can be called concurrently from multiple goroutines. The embedded ZephyrosLight uses atomic operations for lock-free performance.
Performance: Target ~15-20ns/op with embedded Zephyros Light engine
Parameters:
- fill: Function to populate the log record (called with pre-allocated Record)
Returns:
- bool: true if record was successfully written, false if ring is full or closed
Performance Notes:
- Zero heap allocations during normal operation
- Lock-free atomic operations for maximum throughput
- Returns false instead of blocking when ring is full
- Optimized for high-frequency logging scenarios
Example:
success := ring.Write(func(r *Record) {
	r.Level = Error
	r.Msg = "Critical error occurred"
})
type Sampler ¶
type Sampler interface {
// Allow determines if a log entry at the given level should be processed.
// Returns true if the entry should be logged, false if it should be dropped.
Allow(level Level) bool
}
Sampler defines the interface for log sampling strategies. Implementations control which log entries are allowed through to prevent overwhelming downstream systems.
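A custom Sampler is a single method. The sketch below (hypothetical `nthSampler`, with local stand-ins for the Level constants) always admits warnings and errors and keeps every Nth lower-level entry:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Local stand-ins for the iris level constants.
type Level int

const (
	Debug Level = iota
	Info
	Warn
	Error
)

// Sampler follows the documented interface shape.
type Sampler interface {
	Allow(level Level) bool
}

// nthSampler always admits Warn and above, and admits every Nth
// lower-level entry. The atomic counter keeps it safe for concurrent
// producers.
type nthSampler struct {
	n     int64
	count atomic.Int64
}

func (s *nthSampler) Allow(level Level) bool {
	if level >= Warn {
		return true
	}
	return s.count.Add(1)%s.n == 1
}

func main() {
	s := &nthSampler{n: 10}
	allowed := 0
	for i := 0; i < 100; i++ {
		if s.Allow(Info) {
			allowed++
		}
	}
	fmt.Println(allowed, s.Allow(Error)) // 10 true
}
```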
type Stack ¶
type Stack struct {
// contains filtered or unexported fields
}
Stack represents a captured stack trace with program counters
func CaptureStack ¶
CaptureStack captures a stack trace of the specified depth, skipping frames. skip=0 identifies the caller of CaptureStack. The caller must call FreeStack on the returned stack after using it.
func (*Stack) FormatStack ¶
FormatStack formats the entire stack trace into a string using buffer pooling
type TextEncoder ¶
type TextEncoder struct {
// TimeFormat specifies the time format (default: RFC3339)
TimeFormat string
// QuoteValues determines if string values should be quoted (default: true for safety)
QuoteValues bool
// SanitizeKeys determines if field keys should be sanitized (default: true)
SanitizeKeys bool
}
TextEncoder provides secure human-readable text encoding for log records. Implements comprehensive log injection protection by sanitizing all field keys and values to prevent malicious log manipulation.
Security Features:
- Field key sanitization to prevent injection via malformed keys
- Value sanitization with proper quoting and escaping
- Control character neutralization
- Newline injection protection
- Unicode direction override protection
Output Format:
time=2025-08-22T10:30:00Z level=info msg="User login" user=john_doe ip=192.168.1.1
func NewTextEncoder ¶
func NewTextEncoder() *TextEncoder
NewTextEncoder creates a new secure text encoder with safe defaults.
type TokenBucketSampler ¶
type TokenBucketSampler struct {
// contains filtered or unexported fields
}
TokenBucketSampler implements rate limiting using a token bucket algorithm. Provides burst capacity with sustained rate limiting for high-volume logging.
func NewTokenBucketSampler ¶
func NewTokenBucketSampler(capacity, refill int64, every time.Duration) *TokenBucketSampler
NewTokenBucketSampler creates a new token bucket sampler with the specified parameters. Validates inputs and sets reasonable defaults for invalid values.
Parameters:
- capacity: Maximum number of tokens (burst capacity)
- refill: Number of tokens added per refill period
- every: Time duration between refills
Returns a configured sampler ready for concurrent use.
func (*TokenBucketSampler) Allow ¶
func (s *TokenBucketSampler) Allow(_ Level) bool
Allow implements the Sampler interface using token bucket rate limiting. Thread-safe implementation that refills tokens based on elapsed time and consumes tokens for allowed log entries.
Parameters:
- level: Log level (unused in this implementation, all levels treated equally)
Returns true if logging should proceed, false if rate limited.
type WriteSyncer ¶
WriteSyncer combines io.Writer with the ability to synchronize written data to persistent storage. This interface is essential for ensuring data durability in logging scenarios where data loss is unacceptable.
Performance considerations:
- Sync() should be called judiciously as it may involve expensive syscalls
- Implementations should be thread-safe for concurrent logging scenarios
- Zero allocations in hot paths for maximum throughput
func AddSync ¶
func AddSync(w io.Writer) WriteSyncer
AddSync is an alias for WrapWriter, provided for familiarity with zap's API.
func MultiWriteSyncer ¶
func MultiWriteSyncer(writers ...WriteSyncer) WriteSyncer
MultiWriteSyncer creates a WriteSyncer that duplicates writes to multiple writers
func MultiWriter ¶
func MultiWriter(writers ...io.Writer) WriteSyncer
MultiWriter accepts plain io.Writer values, wraps each one, and combines them into a single MultiWriteSyncer.
func NewFileSyncer ¶
func NewFileSyncer(file *os.File) WriteSyncer
NewFileSyncer creates a WriteSyncer specifically for file operations. This function provides explicit file syncing capabilities and should be used when you need guaranteed durability for file-based logging.
Performance: Direct file operations with explicit sync control
func NewNopSyncer ¶
func NewNopSyncer(w io.Writer) WriteSyncer
NewNopSyncer creates a WriteSyncer that performs no synchronization. This is useful for scenarios where sync is handled externally or where the underlying writer doesn't support/need synchronization.
Performance: Zero-cost wrapper with inline no-op sync
func WrapWriter ¶
func WrapWriter(w io.Writer) WriteSyncer
WrapWriter intelligently converts any io.Writer into a WriteSyncer. This function provides automatic detection and wrapping of different writer types to ensure optimal performance and correct synchronization behavior.
Type-specific optimizations:
- *os.File: Uses fileSyncer for explicit sync() syscalls
- WriteSyncer: Returned as-is (already implements the interface)
- Other writers: Uses nopSyncer (no-op sync for non-file writers)
Performance: Zero allocations for WriteSyncer inputs, minimal overhead for type switching in other cases.
Usage patterns:
- File logging: WrapWriter(file) -> fileSyncer (with sync)
- Buffer logging: WrapWriter(buffer) -> nopSyncer (no sync needed)
- Network logging: WrapWriter(conn) -> nopSyncer (sync at protocol level)