Published: Oct 4, 2025 License: MIT Imports: 19 Imported by: 15


mtlog - Message Template Logging for Go

mtlog is a high-performance structured logging library for Go, inspired by Serilog. It brings message templates and pipeline architecture to the Go ecosystem, achieving zero allocations for simple logging operations while providing powerful features for complex scenarios.

Features

Core Features
  • Zero-allocation logging for simple messages (17.3 ns/op)
  • Message templates with positional property extraction and format specifiers
  • Go template syntax support ({{.Property}}) alongside traditional syntax
  • OpenTelemetry compatibility with support for dotted property names ({http.method}, {service.name})
  • Structured fields via With() method for slog-style key-value pairs
  • Output templates for customizable log formatting
  • Per-message sampling with multiple strategies (counter, rate, time, adaptive) for intelligent log volume control
  • ForType logging with automatic SourceContext from Go types and intelligent caching
  • LogContext scoped properties that flow through operation contexts
  • Source context enrichment with intelligent caching for automatic logger categorization
  • Context deadline awareness with automatic timeout warnings and deadline tracking
  • Pipeline architecture for clean separation of concerns
  • Type-safe generics for better compile-time safety
  • LogValue interface for safe logging of sensitive data
  • SelfLog diagnostics for debugging silent failures and configuration issues
  • Standard library compatibility via slog.Handler adapter (Go 1.21+)
  • Kubernetes ecosystem support via logr.LogSink adapter
HTTP Middleware
  • Multi-framework support for net/http, Gin, Echo, Fiber, and Chi
  • High-performance request/response logging with minimal overhead (~2.3μs per request)
  • Object pooling for zero-allocation paths in high-throughput scenarios
  • Advanced sampling strategies (rate, adaptive, path-based) for log volume control
  • Request body logging with automatic sanitization of sensitive data
  • Distributed tracing with W3C Trace Context, B3, and X-Ray format support
  • Health check handlers with configurable metrics and liveness/readiness probes
  • Panic recovery with detailed stack traces and custom error handling
Sinks & Output
  • Console sink with customizable themes (dark, light, ANSI, Literate)
  • File sink with rolling policies (size, time-based)
  • Seq integration with CLEF format and dynamic level control
  • Elasticsearch sink for centralized log storage and search
  • Splunk sink with HEC (HTTP Event Collector) support
  • OpenTelemetry (OTLP) sink with gRPC/HTTP transport, batching, and trace correlation
  • Sentry integration with error tracking, performance monitoring, and intelligent sampling
  • Conditional sink for predicate-based routing with zero overhead
  • Router sink for multi-destination routing with FirstMatch/AllMatch modes
  • Async sink wrapper for high-throughput scenarios
  • Durable buffering with persistent storage for reliability
Pipeline Components
  • Rich enrichment with built-in and custom enrichers
  • Advanced filtering including rate limiting and sampling
  • Minimum level overrides by source context patterns
  • Type-safe capturing with caching for performance
  • Dynamic level control with runtime adjustments
  • Configuration from JSON for flexible deployment

Installation

go get github.com/willibrandon/mtlog

Quick Start

package main

import (
    "github.com/willibrandon/mtlog"
    "github.com/willibrandon/mtlog/core"
)

func main() {
    // Create a logger with console output
    log := mtlog.New(
        mtlog.WithConsole(),
        mtlog.WithMinimumLevel(core.InformationLevel),
    )

    // Simple logging
    log.Info("Application started")
    
    // Message templates with properties
    userId := 123
    log.Info("User {UserId} logged in", userId)
    
    // Capturing complex types
    order := Order{ID: 456, Total: 99.95}
    log.Info("Processing {@Order}", order)
}

// For libraries that need error handling:
func NewLibraryLogger() (*mtlog.Logger, error) {
    return mtlog.Build(
        mtlog.WithConsoleTemplate("[${Timestamp:HH:mm:ss} ${Level:u3}] ${Message}"),
        mtlog.WithMinimumLevel(core.DebugLevel),
    )
}

Visual Example

Screenshot: mtlog console output rendered with the Literate theme

Message Templates

mtlog uses message templates that preserve structure throughout the logging pipeline:

// Properties are extracted positionally
log.Information("User {UserId} logged in from {IP}", userId, ipAddress)

// Go template syntax is also supported
log.Information("User {{.UserId}} logged in from {{.IP}}", userId, ipAddress)

// OTEL-style dotted properties for compatibility with OpenTelemetry conventions
log.Information("HTTP {http.method} to {http.url} returned {http.status_code}", "GET", "/api", 200)
log.Information("Service {service.name} in {service.namespace}", "api", "production")

// Mix both syntaxes as needed
log.Information("User {UserId} ({{.Username}}) from {IP}", userId, username, ipAddress)

// Capturing hints:
// @ - capture complex types into properties
log.Information("Order {@Order} created", order)

// $ - force scalar rendering (stringify)
log.Information("Error occurred: {$Error}", err)

// Format specifiers
log.Information("Processing time: {Duration:F2}ms", 123.456)
log.Information("Disk usage at {Percentage:P1}", 0.85)  // 85.0%
log.Information("Order {OrderId:000} total: ${Amount:F2}", 42, 99.95)

// String formatting - default is no quotes (Go-idiomatic)
log.Information("User {Name} logged in", "Alice")  // User Alice logged in
log.Information("User {Name:q} logged in", "Alice")  // User "Alice" logged in (explicit quotes)
log.Information("User {Name:l} logged in", "Alice")  // User Alice logged in (same as default)

// JSON formatting - outputs any value as JSON
log.Information("Config: {Settings:j}", map[string]any{"debug": true, "port": 8080})
// Config: {"debug":true,"port":8080}

// Numeric indexing (like string.Format in .NET)
log.Information("Processing {0} of {1} items", 5, 10)
log.Information("The {0} {1} {2} jumped over the {3} {4}", 
    "quick", "brown", "fox", "lazy", "dog")
Numeric Indexing

mtlog supports numeric indexing similar to .NET's string.Format and Serilog:

// Pure numeric indexing uses index values (like string.Format)
log.Information("Processing {0} of {1}", 5, 10)  // Processing 5 of 10
log.Information("Result: {1} before {0}", "first", "second")  // Result: second before first

// Mixed named and numeric uses positional matching (left-to-right)
log.Information("User {UserId} processed {0} of {1}", 123, 50, 100)
// UserId=123 (1st arg), 0=50 (2nd arg), 1=100 (3rd arg)

// Note: Avoid mixing named and numeric properties for clarity

Output Templates

Control how log events are formatted for output with customizable templates. Output templates use ${...} syntax for built-in elements to distinguish them from message template properties:

// Console with custom output template and theme
log := mtlog.New(
    mtlog.WithConsoleTemplateAndTheme(
        "[${Timestamp:HH:mm:ss} ${Level:u3}] {SourceContext}: ${Message}",
        sinks.LiterateTheme(),
    ),
)

// File with detailed template
log := mtlog.New(
    mtlog.WithFileTemplate("app.log", 
        "[${Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} ${Level:u3}] {SourceContext}: ${Message}${NewLine}${Exception}"),
)
Template Properties
  • ${Timestamp} - Event timestamp with optional format
  • ${Level} - Log level with format options (u3, u, l)
  • ${Message} - Rendered message from template
  • {SourceContext} - Logger context/category
  • ${Exception} - Exception details if present
  • ${NewLine} - Platform-specific line separator
  • Custom properties by name: {RequestId}, {UserId}, etc.
Format Specifiers
  • Timestamps: HH:mm:ss, yyyy-MM-dd, HH:mm:ss.fff
  • Levels:
    • u3 - Three-letter uppercase (INF, WRN, ERR)
    • u - Full uppercase (INFORMATION, WARNING, ERROR)
    • w - Full lowercase (information, warning, error)
    • w3 - Three-letter lowercase (inf, wrn, err)
    • l - Three-letter lowercase [deprecated, use w3]
  • Numbers: 000 (zero-pad), F2 (2 decimals), P1 (percentage)
  • Strings: :l removes quotes (literal format)
Design Note: Why ${...} for Built-ins?

Unlike Serilog, which uses {...} for both built-in elements and properties in output templates, mtlog uses ${...} for built-ins. This design choice prevents ambiguity when user properties share names with built-in elements (e.g., a property named "Message" would conflict with the built-in {Message}).

The ${...} syntax provides clear disambiguation:

  • ${Message}, ${Timestamp}, ${Level} - Built-in template elements
  • {UserId}, {OrderId}, {Message} - User properties from your log events

This means you can safely log a property called "Message" without conflicts:

log.Information("Processing {Message} from {Queue}", userMessage, queueName)
// Output template: "[${Timestamp}] ${Level}: ${Message}"
// Result: "[2024-01-15] INF: Processing Hello World from orders"

Pipeline Architecture

The logging pipeline processes events through distinct stages:

Message Template Parser → Enrichment → Filtering → Capturing → Output
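The stages are just functions applied in order. A toy version of that shape (an illustrative sketch only, not mtlog's internal API) looks like:

```go
package main

import (
	"fmt"
	"strings"
)

// Event is a simplified stand-in for a structured log event.
type Event struct {
	Template   string
	Properties map[string]any
}

type Enricher func(*Event)
type Filter func(*Event) bool

// Pipeline applies each stage in order: enrich, then filter, then render.
type Pipeline struct {
	enrichers []Enricher
	filters   []Filter
}

func (p *Pipeline) Process(e *Event) (string, bool) {
	for _, en := range p.enrichers {
		en(e)
	}
	for _, f := range p.filters {
		if !f(e) {
			return "", false // dropped by a filter stage
		}
	}
	// Render: substitute {Name} placeholders from the extracted properties.
	out := e.Template
	for k, v := range e.Properties {
		out = strings.ReplaceAll(out, "{"+k+"}", fmt.Sprint(v))
	}
	return out, true
}

func main() {
	p := &Pipeline{
		enrichers: []Enricher{func(e *Event) { e.Properties["Machine"] = "web-01" }},
		filters:   []Filter{func(e *Event) bool { return !strings.Contains(e.Template, "health") }},
	}
	msg, ok := p.Process(&Event{
		Template:   "User {UserId} logged in",
		Properties: map[string]any{"UserId": 123},
	})
	fmt.Println(msg, ok) // User 123 logged in true
}
```

Because every stage sees the same event, enrichers can add properties that later filters and sinks observe, which is what keeps the concerns cleanly separated.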
Configuration with Functional Options
log := mtlog.New(
    // Output configuration
    mtlog.WithConsoleTheme("dark"),     // Console with dark theme
    mtlog.WithRollingFile("app.log", 10*1024*1024), // Rolling file (10MB)
    mtlog.WithSeq("http://localhost:5341", "api-key"), // Seq integration
    
    // Enrichment
    mtlog.WithTimestamp(),              // Add timestamp
    mtlog.WithMachineName(),            // Add hostname
    mtlog.WithProcessInfo(),            // Add process ID/name
    mtlog.WithCallersInfo(),            // Add file/line info
    
    // Filtering & Level Control
    mtlog.WithMinimumLevel(core.DebugLevel),
    mtlog.WithDynamicLevel(levelSwitch), // Runtime level control
    mtlog.WithFilter(customFilter),
    
    // Capturing
    mtlog.WithCapturing(),          // Enable @ hints
    mtlog.WithCapturingDepth(5),    // Max depth
)

Enrichers

Enrichers add contextual information to all log events:

// Built-in enrichers
log := mtlog.New(
    mtlog.WithTimestamp(),
    mtlog.WithMachineName(),
    mtlog.WithProcessInfo(),
    mtlog.WithEnvironmentVariables("APP_ENV", "VERSION"),
    mtlog.WithThreadId(),
    mtlog.WithCallersInfo(),
    mtlog.WithCorrelationId("RequestId"),
    mtlog.WithSourceContext(), // Auto-detect logger context
)

// Structured fields with With() - slog-style
log.With("service", "auth", "version", "1.0").Info("Service started")
log.With("user_id", 123, "request_id", "abc-123").Info("Processing request")

// Context-based enrichment
ctx := context.WithValue(context.Background(), "RequestId", "abc-123")
log.ForContext("UserId", userId).Information("Processing request")

// Source context for sub-loggers
serviceLog := log.ForSourceContext("MyApp.Services.UserService")
serviceLog.Information("User service started")

// Type-based loggers with automatic SourceContext
userLogger := mtlog.ForType[User](log)
userLogger.Information("User operation") // SourceContext: "User"

orderLogger := mtlog.ForType[OrderService](log)
orderLogger.Information("Processing order") // SourceContext: "OrderService"

Per-Message Sampling

Efficient log volume management through intelligent per-message sampling. mtlog provides comprehensive sampling capabilities to help control log volume in production while preserving important events.

Quick Examples
// Basic sampling methods
logger.Sample(10).Info("Every 10th message")
logger.SampleRate(0.2).Info("20% of messages")
logger.SampleDuration(time.Second).Info("Once per second")
logger.SampleFirst(100).Info("First 100 only")

// Adaptive sampling - maintains target events/second
logger.SampleAdaptive(100).Info("Auto-adjusting rate")

// Use predefined profiles
logger.SampleProfile("HighTrafficAPI").Info("API call")
logger.SampleProfile("ProductionErrors").Error("Error occurred")
Key Features
  • Multiple Strategies: Counter, rate, time-based, first-N, group, conditional, and exponential backoff sampling
  • Adaptive Sampling: Automatically adjusts rates to maintain target throughput with hysteresis and dampening
  • Predefined Profiles: Ready-to-use configurations for common scenarios
  • Custom Profiles: Define and register your own reusable sampling configurations
  • Version Management: Support for versioned profiles with auto-migration
  • Zero Allocations: Optimized for minimal performance impact
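The counter strategy behind Sample(n) can be sketched in a few lines of plain Go. This is an illustration of the idea, not mtlog's implementation; the atomic counter is what makes it goroutine-safe without locks or allocations:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// counterSampler passes every nth event; safe for concurrent use.
type counterSampler struct {
	n     uint64
	count atomic.Uint64
}

func (s *counterSampler) ShouldLog() bool {
	// The 1st, (n+1)th, (2n+1)th... calls pass.
	return (s.count.Add(1)-1)%s.n == 0
}

func main() {
	s := &counterSampler{n: 10}
	passed := 0
	for i := 0; i < 100; i++ {
		if s.ShouldLog() {
			passed++
		}
	}
	fmt.Println(passed) // 10
}
```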
Learn More

For comprehensive documentation including advanced strategies, configuration options, and best practices, see the Sampling Guide.

Structured Fields with With()

The With() method provides a convenient way to add structured fields to log events, following the slog convention of accepting variadic key-value pairs:

// Basic usage with key-value pairs
logger.With("service", "api", "version", "1.0").Info("Service started")

// Chaining With() calls
logger.
    With("environment", "production").
    With("region", "us-west-2").
    Info("Deployment complete")

// Create a base logger with common fields
apiLogger := logger.With(
    "component", "api",
    "host", "api-server-01",
)

// Reuse the base logger for multiple operations
apiLogger.Info("Handling request")
apiLogger.With("endpoint", "/users").Info("GET /users")
apiLogger.With("endpoint", "/products", "method", "POST").Info("POST /products")

// Request-scoped logging
requestLogger := apiLogger.With(
    "request_id", "abc-123",
    "user_id", 456,
)
requestLogger.Info("Request started")
requestLogger.With("duration_ms", 42).Info("Request completed")

// Combine With() and ForContext()
logger.
    With("service", "payment").
    ForContext("transaction_id", "tx-789").
    With("amount", 99.99, "currency", "USD").
    Info("Payment processed")
With() vs ForContext()
  • With(): Accepts variadic key-value pairs (slog-style), convenient for multiple fields
  • ForContext(): Takes a single property name and value, returns a new logger
  • Both methods create a new logger instance with the combined properties
  • Both are safe for concurrent use
Property Precedence

When combining With() and ForContext(), properties follow a precedence order:

  • Properties passed directly to log methods take highest precedence
  • ForContext() properties override With() properties
  • Later With() calls override earlier With() calls in a chain

Example:

logger.With("user", "alice").              // user=alice
    ForContext("user", "bob").             // user=bob (ForContext overrides)
    With("user", "charlie").               // user=charlie (later With overrides)
    Info("User {user} logged in", "david") // user=david (event property overrides all)

LogContext - Scoped Properties

LogContext provides a way to attach properties to a context that will be automatically included in all log events created from loggers using that context. Properties follow a precedence order: event-specific properties (passed directly to log methods) override ForContext properties, which override LogContext properties (set via PushProperty).

// Add properties to context that flow through all operations
ctx := context.Background()
ctx = mtlog.PushProperty(ctx, "RequestId", "req-12345")
ctx = mtlog.PushProperty(ctx, "UserId", userId)
ctx = mtlog.PushProperty(ctx, "TenantId", "acme-corp")

// Create a logger that includes context properties
log := logger.WithContext(ctx)
log.Information("Processing request") // Includes all pushed properties

// Properties are inherited - child contexts get parent properties
func processOrder(ctx context.Context, orderId string) {
    // Add operation-specific properties
    ctx = mtlog.PushProperty(ctx, "OrderId", orderId)
    ctx = mtlog.PushProperty(ctx, "Operation", "ProcessOrder")
    
    log := logger.WithContext(ctx)
    log.Information("Order processing started") // Includes all parent + new properties
}

// Property precedence example
ctx = mtlog.PushProperty(ctx, "UserId", 123)
logger.WithContext(ctx).Information("Test")                          // UserId=123
logger.WithContext(ctx).ForContext("UserId", 456).Information("Test") // UserId=456 (ForContext overrides)
logger.WithContext(ctx).Information("User {UserId}", 789)            // UserId=789 (event property overrides all)

This is particularly useful for:

  • Request tracing in web applications
  • Maintaining context through async operations
  • Multi-tenant applications
  • Batch processing with job-specific context

ForType - Type-Based Logging

ForType provides automatic SourceContext from Go types, making it easy to categorize logs by the types they relate to without manual string constants:

// Automatic SourceContext from type names
userLogger := mtlog.ForType[User](logger)
userLogger.Information("User created") // SourceContext: "User"

productLogger := mtlog.ForType[Product](logger)
productLogger.Information("Product updated") // SourceContext: "Product"

// Works with pointers (automatically dereferenced)
mtlog.ForType[*User](logger).Information("User updated") // SourceContext: "User"

// Service-based logging
type UserService struct {
    logger core.Logger
}

func NewUserService(baseLogger core.Logger) *UserService {
    return &UserService{
        logger: mtlog.ForType[UserService](baseLogger),
    }
}

func (s *UserService) CreateUser(name string) {
    s.logger.Information("Creating user {Name}", name)
    // All logs from this service have SourceContext: "UserService"
}
Advanced Type Naming

For more control over type names, use ExtractTypeName with TypeNameOptions:

// Include package for disambiguation
opts := mtlog.TypeNameOptions{
    IncludePackage: true,
    PackageDepth:   1, // Only immediate package
}
name := mtlog.ExtractTypeName[User](opts) // Result: "myapp.User"
logger := baseLogger.ForContext("SourceContext", name)

// Add prefixes for microservice identification
opts = mtlog.TypeNameOptions{Prefix: "UserAPI."}
name = mtlog.ExtractTypeName[User](opts) // Result: "UserAPI.User"

// Simplify anonymous structs
opts = mtlog.TypeNameOptions{SimplifyAnonymous: true}
name = mtlog.ExtractTypeName[struct{ Name string }](opts) // Result: "AnonymousStruct"

// Disable warnings for production
opts = mtlog.TypeNameOptions{WarnOnUnknown: false}
name = mtlog.ExtractTypeName[interface{}](opts) // Result: "Unknown" (no warning logged)

// Combine multiple options
opts = mtlog.TypeNameOptions{
    IncludePackage:    true,
    PackageDepth:      1,
    Prefix:            "MyApp.",
    Suffix:            ".Handler",
    SimplifyAnonymous: true,
}
Performance & Caching

ForType uses reflection with intelligent caching for optimal performance:

  • ~7% overhead vs manual ForSourceContext (uncached)
  • ~1% overhead with caching enabled (subsequent calls)
  • Thread-safe caching with sync.Map
  • Zero allocations for cached type names
// Performance comparison
ForType[User](logger).Information("User operation")           // ~7% slower than manual
logger.ForSourceContext("User").Information("User operation") // Baseline performance

// But subsequent ForType calls are nearly free due to caching

// Cache statistics for monitoring
stats := mtlog.GetTypeNameCacheStats()
fmt.Printf("Cache hits: %d, misses: %d, evictions: %d, hit ratio: %.1f%%, size: %d/%d", 
    stats.Hits, stats.Misses, stats.Evictions, stats.HitRatio, stats.Size, stats.MaxSize)

// For testing scenarios requiring cache isolation
mtlog.ResetTypeNameCache() // Clears cache and statistics
Multi-Tenant Support

For applications serving multiple tenants, ForType supports tenant-specific cache namespaces:

// Multi-tenant type-based logging with separate cache namespaces
func CreateTenantLogger(baseLogger core.Logger, tenantID string) core.Logger {
    tenantPrefix := fmt.Sprintf("tenant:%s", tenantID)
    return mtlog.ForTypeWithCacheKey[UserService](baseLogger, tenantPrefix)
}

// Each tenant gets separate cache entries
acmeLogger := CreateTenantLogger(logger, "acme-corp")    // Cache key: tenant:acme-corp + UserService
globexLogger := CreateTenantLogger(logger, "globex-inc") // Cache key: tenant:globex-inc + UserService

acmeLogger.Information("Processing acme user")   // SourceContext: "UserService" (acme cache)
globexLogger.Information("Processing globex user") // SourceContext: "UserService" (globex cache)

// Custom type naming per tenant
opts := mtlog.TypeNameOptions{Prefix: "AcmeCorp."}
acmeName := mtlog.ExtractTypeNameWithCacheKey[User](opts, "tenant:acme")
// Result: "AcmeCorp.User" (cached separately per tenant)
Type Name Cache Configuration

The type name cache can be configured via environment variables:

# Set cache size limit (default: 10,000 entries)
export MTLOG_TYPE_NAME_CACHE_SIZE=50000  # For large applications
export MTLOG_TYPE_NAME_CACHE_SIZE=1000   # For memory-constrained environments
export MTLOG_TYPE_NAME_CACHE_SIZE=0      # Disable caching entirely

The cache uses LRU (Least Recently Used) eviction when the size limit is exceeded, ensuring memory usage stays bounded while keeping frequently used type names cached.
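What bounded LRU eviction means for a type-name cache can be shown with a minimal sketch (illustrative only, not mtlog's actual cache, which is also concurrency-safe):

```go
package main

import (
	"container/list"
	"fmt"
)

// lruCache evicts the least recently used entry on overflow,
// so memory stays bounded while hot type names stay cached.
type lruCache struct {
	max   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // each element holds an *entry
}

type entry struct {
	key, val string
}

func newLRU(max int) *lruCache {
	return &lruCache{max: max, order: list.New(), items: map[string]*list.Element{}}
}

func (c *lruCache) Get(key string) (string, bool) {
	el, ok := c.items[key]
	if !ok {
		return "", false
	}
	c.order.MoveToFront(el) // mark as recently used
	return el.Value.(*entry).val, true
}

func (c *lruCache) Put(key, val string) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		el.Value.(*entry).val = val
		return
	}
	c.items[key] = c.order.PushFront(&entry{key, val})
	if c.order.Len() > c.max {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}

func main() {
	c := newLRU(2)
	c.Put("User", "myapp.User")
	c.Put("Order", "myapp.Order")
	c.Get("User")                     // touch User so Order becomes least recent
	c.Put("Product", "myapp.Product") // evicts Order
	_, ok := c.Get("Order")
	fmt.Println(ok) // false
}
```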

This is particularly useful for:

  • Large applications with many service types
  • Type-safe logger categorization
  • Automatic SourceContext without string constants
  • Service-oriented architectures
  • Multi-tenant applications requiring cache isolation

Context Deadline Awareness

mtlog can automatically detect and warn when operations are approaching their context deadlines, helping catch timeout-related issues before they fail:

// Configure deadline awareness
logger := mtlog.New(
    mtlog.WithConsole(),
    mtlog.WithContextDeadlineWarning(100*time.Millisecond), // Warn within 100ms
)

// Use context-aware logging methods
ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancel()

logger.InfoContext(ctx, "Starting operation")
time.Sleep(350 * time.Millisecond)
logger.InfoContext(ctx, "Still processing...") // Warning: approaching deadline!

// Percentage-based thresholds
logger := mtlog.New(
    mtlog.WithDeadlinePercentageThreshold(
        1*time.Millisecond,  // Minimum absolute threshold
        0.2,                 // Warn when 20% of time remains
    ),
)

// HTTP handler example
func handler(w http.ResponseWriter, r *http.Request) {
    ctx, cancel := context.WithTimeout(r.Context(), 200*time.Millisecond)
    defer cancel()
    
    logger.InfoContext(ctx, "Processing request")
    // ... perform operations ...
    logger.InfoContext(ctx, "Response ready") // Warns if close to timeout
}

Features:

  • Zero overhead when no deadline is present (2.7ns, 0 allocations)
  • Automatic level upgrading - Info logs become Warnings when deadline approaches
  • OTEL-style properties - deadline.remaining_ms, deadline.at, deadline.approaching
  • First warning tracking - Marks the first warning for each context
  • Deadline exceeded detection - Tracks operations that continue past deadline
  • LRU cache with TTL - Efficient tracking with automatic cleanup
  • Custom handlers - Add metrics, alerts, or custom logic when deadlines approach

Filters

Control which events are logged with powerful filtering:

// Level filtering
mtlog.WithMinimumLevel(core.WarningLevel)

// Minimum level overrides by source context
mtlog.WithMinimumLevelOverrides(map[string]core.LogEventLevel{
    "github.com/gin-gonic/gin":       core.WarningLevel,    // Suppress Gin info logs
    "github.com/go-redis/redis":      core.ErrorLevel,      // Only Redis errors
    "myapp/internal/services":        core.DebugLevel,      // Debug for internal services
    "myapp/internal/services/auth":   core.VerboseLevel,    // Verbose for auth debugging
})

// Custom predicate
mtlog.WithFilter(filters.NewPredicateFilter(func(e *core.LogEvent) bool {
    return !strings.Contains(e.MessageTemplate.Text, "health-check")
}))

// Rate limiting
mtlog.WithFilter(filters.NewRateLimitFilter(100, time.Minute))

// Statistical sampling
mtlog.WithFilter(filters.NewSamplingFilter(0.1)) // 10% of events

// Property-based filtering
mtlog.WithFilter(filters.NewExpressionFilter("UserId", 123))

Sinks

mtlog supports multiple output destinations with advanced features:

Console Sink with Themes
// Literate theme - beautiful, easy on the eyes
mtlog.WithConsoleTheme(sinks.LiterateTheme())

// Dark theme (default)
mtlog.WithConsoleTheme(sinks.DarkTheme())

// Light theme
mtlog.WithConsoleTheme(sinks.LightTheme()) 

// Plain text (no colors)
mtlog.WithConsoleTheme(sinks.NoColorTheme())
File Sinks
// Simple file output
mtlog.WithFileSink("app.log")

// Rolling file by size
mtlog.WithRollingFile("app.log", 10*1024*1024) // 10MB

// Rolling file by time
mtlog.WithRollingFileTime("app.log", time.Hour) // Every hour
Seq Integration
// Basic Seq integration
mtlog.WithSeq("http://localhost:5341")

// With API key
mtlog.WithSeq("http://localhost:5341", "your-api-key")

// Advanced configuration
mtlog.WithSeqAdvanced("http://localhost:5341",
    sinks.WithSeqBatchSize(100),
    sinks.WithSeqBatchTimeout(5*time.Second),
    sinks.WithSeqCompression(true),
)

// Dynamic level control via Seq
levelOption, levelSwitch, controller := mtlog.WithSeqLevelControl(
    "http://localhost:5341",
    mtlog.SeqLevelControllerOptions{
        CheckInterval: 30*time.Second,
        InitialCheck: true,
    },
)
Elasticsearch Integration
// Basic Elasticsearch
mtlog.WithElasticsearch("http://localhost:9200", "logs")

// With authentication
mtlog.WithElasticsearchAdvanced(
    []string{"http://localhost:9200"},
    elasticsearch.WithIndex("myapp-logs"),
    elasticsearch.WithAPIKey("api-key"),
    elasticsearch.WithBatchSize(100),
)
Splunk Integration
// Splunk HEC integration
mtlog.WithSplunk("http://localhost:8088", "your-hec-token")

// Advanced Splunk configuration
mtlog.WithSplunkAdvanced("http://localhost:8088",
    sinks.WithSplunkToken("hec-token"),
    sinks.WithSplunkIndex("main"),
    sinks.WithSplunkSource("myapp"),
)
Sentry Integration
import (
    "github.com/willibrandon/mtlog"
    "github.com/willibrandon/mtlog/adapters/sentry"
)

// Basic Sentry error tracking
sink, _ := sentry.WithSentry("https://key@sentry.io/project")
log := mtlog.New(mtlog.WithSink(sink))

// With sampling for high-volume applications
sink, _ := sentry.WithSentry("https://key@sentry.io/project",
    sentry.WithFixedSampling(0.1), // 10% sampling
)
log := mtlog.New(mtlog.WithSink(sink))

// Advanced configuration with performance monitoring
sink, _ := sentry.WithSentry("https://key@sentry.io/project",
    sentry.WithEnvironment("production"),
    sentry.WithRelease("v1.2.3"),
    sentry.WithTracesSampleRate(0.2),
    sentry.WithProfilesSampleRate(0.1),
    sentry.WithAdaptiveSampling(0.01, 0.5), // 1% to 50% adaptive
    sentry.WithRetryPolicy(3, time.Second),
    sentry.WithStackTraceCache(1000),
)
log := mtlog.New(mtlog.WithSink(sink))
Async and Durable Sinks
// Wrap any sink for async processing
mtlog.WithAsync(mtlog.WithFileSink("app.log"))

// Durable buffering (survives crashes)
mtlog.WithDurable(
    mtlog.WithSeq("http://localhost:5341"),
    sinks.WithDurableDirectory("./logs/buffer"),
    sinks.WithDurableMaxSize(100*1024*1024), // 100MB buffer
)
Event Routing with Conditional and Router Sinks

Route log events to different destinations based on their properties:

Conditional Sink

Filter events based on predicates with zero overhead for non-matching events:

// Create a conditional sink for critical alerts
alertSink, _ := sinks.NewFileSink("alerts.log")
criticalAlertSink := sinks.NewConditionalSink(
    func(event *core.LogEvent) bool {
        return event.Level >= core.ErrorLevel && 
               event.Properties["Alert"] != nil
    },
    alertSink,
)

// Use built-in predicates
auditSink := sinks.NewConditionalSink(
    sinks.PropertyPredicate("Audit"),
    auditFileSink,
)

// Combine predicates
complexFilter := sinks.NewConditionalSink(
    sinks.AndPredicate(
        sinks.LevelPredicate(core.ErrorLevel),
        sinks.PropertyPredicate("Critical"),
        sinks.PropertyValuePredicate("Environment", "production"),
    ),
    targetSink,
)

logger := mtlog.New(
    mtlog.WithSink(sinks.NewConsoleSink()),
    mtlog.WithSink(criticalAlertSink),
    mtlog.WithSink(auditSink),
)

// Only critical errors with Alert property go to alerts.log
logger.With("Alert", true).Error("Database connection lost")
Router Sink

Advanced routing with multiple destinations and routing modes:

// FirstMatch mode - exclusive routing (stops at first match)
router := sinks.NewRouterSink(sinks.FirstMatch,
    sinks.Route{
        Name:      "errors",
        Predicate: sinks.LevelPredicate(core.ErrorLevel),
        Sink:      errorSink,
    },
    sinks.Route{
        Name:      "warnings",
        Predicate: sinks.LevelPredicate(core.WarningLevel),
        Sink:      warningSink,
    },
)

// AllMatch mode - broadcast to all matching routes
router := sinks.NewRouterSink(sinks.AllMatch,
    sinks.MetricRoute("metrics", metricsSink),
    sinks.AuditRoute("audit", auditSink),
    sinks.ErrorRoute("errors", errorSink),
)

// With default sink for non-matching events
router := sinks.NewRouterSinkWithDefault(
    sinks.FirstMatch,
    defaultSink,
    routes...,
)

// Dynamic route management at runtime
router.AddRoute(sinks.Route{
    Name:      "debug",
    Predicate: func(e *core.LogEvent) bool { 
        return e.Level <= core.DebugLevel 
    },
    Sink:      debugSink,
})
router.RemoveRoute("debug")

// Fluent route builder API
route := sinks.NewRoute("special-events").
    When(func(e *core.LogEvent) bool {
        category, _ := e.Properties["Category"].(string)
        return category == "Special"
    }).
    To(specialSink)

logger := mtlog.New(
    mtlog.WithSink(router),
    mtlog.WithSink(sinks.NewConsoleSink()),
)

Dynamic Level Control

Control logging levels at runtime without restarting your application:

Manual Level Control
// Create a level switch
levelSwitch := mtlog.NewLoggingLevelSwitch(core.InformationLevel)

logger := mtlog.New(
    mtlog.WithLevelSwitch(levelSwitch),
    mtlog.WithConsole(),
)

// Change level at runtime
levelSwitch.SetLevel(core.DebugLevel)

// Fluent interface
levelSwitch.Debug().Information().Warning()

// Check if level is enabled
if levelSwitch.IsEnabled(core.VerboseLevel) {
    // Expensive logging operation
}
Centralized Level Control with Seq
// Automatic level synchronization with Seq server
options := mtlog.SeqLevelControllerOptions{
    CheckInterval: 30 * time.Second,
    InitialCheck:  true,
}

loggerOption, levelSwitch, controller := mtlog.WithSeqLevelControl(
    "http://localhost:5341", options)
defer controller.Close()

logger := mtlog.New(loggerOption)

// Level changes in Seq UI automatically update your application

Configuration from JSON

Configure loggers using JSON for flexible deployments:

// Load from JSON file
config, err := configuration.LoadFromFile("logging.json")
if err != nil {
    log.Fatal(err)
}

logger := config.CreateLogger()

Example logging.json:

{
    "minimumLevel": "Information",
    "sinks": [
        {
            "type": "Console",
            "theme": "dark"
        },
        {
            "type": "RollingFile",
            "path": "logs/app.log",
            "maxSize": "10MB"
        },
        {
            "type": "Seq",
            "serverUrl": "http://localhost:5341",
            "apiKey": "${SEQ_API_KEY}"
        }
    ],
    "enrichers": ["Timestamp", "MachineName", "ProcessInfo"]
}

Safe Logging with LogValue

Protect sensitive data with the LogValue interface:

type User struct {
    ID       int
    Username string
    Password string // Never logged
}

func (u User) LogValue() interface{} {
    return map[string]interface{}{
        "id":       u.ID,
        "username": u.Username,
        // Password intentionally omitted
    }
}

// Password won't appear in logs
user := User{ID: 1, Username: "alice", Password: "secret"}
log.Information("User logged in: {@User}", user)

Performance

Benchmark results on AMD Ryzen 9 9950X:

Operation        mtlog      zap        zerolog    Winner
Simple string    16.82 ns   146.6 ns   36.46 ns   mtlog
Filtered out     1.47 ns    3.57 ns    1.71 ns    mtlog
Two properties   190.6 ns   216.9 ns   49.48 ns   zerolog
With context     205.2 ns   130.8 ns   35.25 ns   zerolog

Examples

See the examples directory and the adapter examples (OTEL, Sentry, middleware) for complete, runnable samples.

Ecosystem Compatibility

HTTP Middleware

mtlog provides HTTP middleware adapters for popular Go web frameworks:

import (
    "github.com/willibrandon/mtlog"
    "github.com/willibrandon/mtlog/adapters/middleware"
)

logger := mtlog.New(mtlog.WithConsole())

// net/http
mw := middleware.Middleware(middleware.DefaultOptions(logger))
handler := mw(yourHandler)

// Gin
router := gin.New()
router.Use(middleware.Gin(logger))

// Echo
e := echo.New()
e.Use(middleware.Echo(logger))

// Fiber
app := fiber.New()
app.Use(middleware.Fiber(logger))

// Chi
r := chi.NewRouter()
r.Use(middleware.Chi(logger))

Features include request/response logging, body capture with sanitization, distributed tracing, health checks, and object pooling for high-performance scenarios. See the HTTP Middleware Guide for detailed documentation.

Standard Library (slog)

mtlog provides full compatibility with Go's standard log/slog package:

// Use mtlog as a backend for slog
slogger := mtlog.NewSlogLogger(
    mtlog.WithSeq("http://localhost:5341"),
    mtlog.WithMinimumLevel(core.InformationLevel),
)

// Use standard slog API
slogger.Info("user logged in", "user_id", 123, "ip", "192.168.1.1")

// Or create a custom slog handler
logger := mtlog.New(mtlog.WithConsole())
slogger = slog.New(logger.AsSlogHandler())
Kubernetes (logr)

mtlog integrates with the Kubernetes ecosystem via logr:

// Use mtlog as a backend for logr
import mtlogr "github.com/willibrandon/mtlog/adapters/logr"

logrLogger := mtlogr.NewLogger(
    mtlog.WithConsole(),
    mtlog.WithMinimumLevel(core.DebugLevel),
)

// Use standard logr API
logrLogger.Info("reconciling", "namespace", "default", "name", "my-app")
logrLogger.Error(err, "failed to update resource")

// Or create a custom logr sink
logger := mtlog.New(mtlog.WithSeq("http://localhost:5341"))
logrLogger = logr.New(logger.AsLogrSink())
OpenTelemetry (OTEL)

mtlog provides comprehensive, bidirectional OpenTelemetry integration: use mtlog as an OTEL logger, or send mtlog events to OTEL collectors:

import "github.com/willibrandon/mtlog/adapters/otel"

// Basic OTLP sink with automatic trace correlation
logger := otel.NewOTELLogger(
    otel.WithOTLPEndpoint("localhost:4317"),
    otel.WithOTLPInsecure(), // For non-TLS connections
)

// Advanced configuration with batching and TLS
logger := mtlog.New(
    otel.WithOTLPSink(
        otel.WithOTLPEndpoint("otel-collector:4317"),
        otel.WithOTLPTransport(otel.OTLPTransportGRPC), // or OTLPTransportHTTP
        otel.WithOTLPBatching(100, 5*time.Second),
        otel.WithOTLPCompression("gzip"),
        otel.WithOTLPClientCert("client.crt", "client.key"),
    ),
)

// Automatic trace context enrichment in HTTP handlers
func handleRequest(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()
    logger := otel.NewRequestLogger(ctx,
        otel.WithOTLPEndpoint("localhost:4317"),
        otel.WithOTLPInsecure(),
    )
    
    // Logs automatically include trace.id, span.id, trace.flags
    logger.Information("Processing request for {Path}", r.URL.Path)
}

// Sampling strategies for high-volume scenarios
logger := mtlog.New(
    otel.WithOTLPSink(
        otel.WithOTLPEndpoint("localhost:4317"),
        otel.WithOTLPSampling(otel.NewRateSampler(0.1)), // Sample 10% of events
        // or: otel.NewLevelSampler(core.WarningLevel)    // Only warnings and above
        // or: otel.NewAdaptiveSampler(1000)              // Target 1000 events/sec
    ),
)

// Prometheus metrics export for monitoring
exporter, _ := otel.NewMetricsExporter(
    otel.WithMetricsPort(9090),
    otel.WithMetricsPath("/metrics"),
)
defer exporter.Close()

// Use mtlog as an OTEL Bridge (mtlog -> OTEL)
otelLogger := otel.NewBridge(logger)
otelLogger.Emit(ctx, record) // Use OTEL log.Logger interface

// Use OTEL as an mtlog sink (OTEL -> mtlog)
handler := otel.NewHandler(otelLogger)
logger := mtlog.New(mtlog.WithSink(handler))

For complete OpenTelemetry integration documentation, see the OTEL adapter README.

Environment Variables

mtlog respects several environment variables for runtime configuration:

Color Control
# Force specific color mode (overrides terminal detection)
export MTLOG_FORCE_COLOR=none     # Disable all colors
export MTLOG_FORCE_COLOR=8        # Force 8-color mode (basic ANSI)
export MTLOG_FORCE_COLOR=256      # Force 256-color mode

# Standard NO_COLOR variable is also respected
export NO_COLOR=1                 # Disable colors (follows no-color.org)
Performance Tuning
# Adjust source context cache size (default: 10000)
export MTLOG_SOURCE_CTX_CACHE=50000  # Increase for large applications
export MTLOG_SOURCE_CTX_CACHE=1000   # Decrease for memory-constrained environments

# Adjust type name cache size (default: 10000)
export MTLOG_TYPE_NAME_CACHE_SIZE=50000  # For applications with many types
export MTLOG_TYPE_NAME_CACHE_SIZE=1000   # For memory-constrained environments
export MTLOG_TYPE_NAME_CACHE_SIZE=0      # Disable type name caching

Tools

mtlog-analyzer

A static analysis tool that catches common mtlog mistakes at compile time:

# Install the analyzer
go install github.com/willibrandon/mtlog/cmd/mtlog-analyzer@latest

# Run with go vet
go vet -vettool=$(which mtlog-analyzer) ./...

The analyzer detects:

  • Template/argument count mismatches
  • Invalid property names (spaces, starting with numbers)
  • Duplicate properties in templates and With() calls
  • Missing capturing hints for complex types
  • Error logging without error values
  • With() method issues (odd arguments, non-string keys, empty keys)
  • Cross-call duplicate detection for property overrides
  • Reserved property name shadowing (opt-in)

Example catches:

// ❌ Template has 2 properties but 1 argument provided
log.Information("User {UserId} logged in from {IP}", userId)

// ❌ Duplicate property 'UserId'
log.Information("User {UserId} did {Action} as {UserId}", id, "login", id)

// ❌ With() requires even number of arguments (MTLOG009)
log.With("key1", "value1", "key2")  // Missing value

// ❌ With() key must be a string (MTLOG010)
log.With(123, "value")

// ❌ Duplicate keys in With() (MTLOG003)
log.With("id", 1, "name", "test", "id", 2)

// ⚠️ Cross-call property override (MTLOG011)
logger := log.With("service", "api")
logger.With("service", "auth")  // Overrides previous 'service'

// ✅ Correct usage
log.Information("User {@User} has {Count} items", user, count)
log.With("userId", 123, "requestId", "abc").Info("Request processed")

See mtlog-analyzer README for detailed documentation and CI integration.

mtlog-lsp

A Language Server Protocol implementation that bundles mtlog-analyzer for editor integrations:

# Install the LSP server
go install github.com/willibrandon/mtlog/cmd/mtlog-lsp@latest

mtlog-lsp provides:

  • Zero-subprocess overhead with bundled analyzer
  • Real-time diagnostics for all MTLOG001-MTLOG013 issues
  • Code actions and quick fixes
  • Workspace configuration support
  • Performance optimized with package caching

Primarily used by the Zed extension. See mtlog-lsp README for detailed documentation.

IDE Extensions
VS Code Extension

For real-time validation in Visual Studio Code, install the mtlog-analyzer extension:

  1. Install mtlog-analyzer: go install github.com/willibrandon/mtlog/cmd/mtlog-analyzer@latest
  2. Install the extension from VS Code Marketplace (search for "mtlog-analyzer")
  3. Get instant feedback on template errors as you type

The extension provides:

  • 🔍 Real-time diagnostics with squiggly underlines
  • 🎯 Precise error locations - click to jump to issues
  • 📊 Three severity levels: errors, warnings, and suggestions
  • 🔧 Quick fixes for common issues (Ctrl+. for PascalCase conversion, argument count fixes)
  • ⚙️ Configurable analyzer path and flags
GoLand Plugin

For real-time validation in GoLand and other JetBrains IDEs with Go support, install the mtlog-analyzer plugin:

  1. Install mtlog-analyzer: go install github.com/willibrandon/mtlog/cmd/mtlog-analyzer@latest
  2. Install the plugin from JetBrains Marketplace (search for "mtlog-analyzer")
  3. Get instant feedback on template errors as you type

The plugin provides:

  • 🔍 Real-time template validation as you type
  • 🎯 Intelligent highlighting (template errors highlight the full string, property warnings highlight just the property)
  • 🔧 Quick fixes for common issues (Alt+Enter for PascalCase conversion, argument count fixes)
  • ⚙️ Configurable analyzer path, flags, and severity levels
  • 🚀 Performance optimized with caching and debouncing
Neovim Plugin

For Neovim users, a comprehensive plugin is included in the repository at neovim-plugin/:

-- Install with lazy.nvim
{
  'willibrandon/mtlog',
  lazy = false,  -- Load immediately to ensure commands are available
  config = function(plugin)
    -- Handle the plugin's subdirectory structure
    vim.opt.rtp:append(plugin.dir .. "/neovim-plugin")
    vim.cmd("runtime plugin/mtlog.vim")
    
    require('mtlog').setup()
  end,
  ft = 'go',
}

The plugin provides:

  • 🔍 Real-time analysis on save with debouncing
  • 🎯 LSP integration for code actions
  • 🔧 Quick fixes and diagnostic suppression
  • 📊 Statusline integration with diagnostic counts
  • ⚡ Advanced features: queue management, context rules, help system
  • 🚀 Performance optimized with caching and async operations

See the plugin README for detailed configuration and usage.

Zed Extension

For real-time validation in Zed editor, install the mtlog-analyzer extension:

  1. Install mtlog-lsp (includes bundled analyzer):
    go install github.com/willibrandon/mtlog/cmd/mtlog-lsp@latest
    
  2. Install the extension from Zed's extension manager (search for "mtlog-analyzer")
  3. Get instant feedback on template errors as you type

The extension provides:

  • 🔍 Real-time diagnostics for all MTLOG001-MTLOG013 issues
  • 🔧 Quick fixes via code actions for common issues
  • 🚀 Automatic binary detection in standard Go paths
  • ⚙️ Configurable analyzer flags and custom paths

Advanced Usage

Custom Sinks

Implement the core.LogEventSink interface for custom outputs:

type CustomSink struct {
    output io.Writer
}

func (s *CustomSink) Emit(event *core.LogEvent) {
    // Use RenderMessage() to properly render the message template
    // This handles format specifiers, capturing operators, and scalar hints
    message := event.RenderMessage()

    // Format and write the log entry
    timestamp := event.Timestamp.Format("15:04:05")
    fmt.Fprintf(s.output, "[%s] %s: %s\n", timestamp, event.Level, message)

    // Optionally include extra properties not in the template
    for key, value := range event.Properties {
        if !strings.Contains(event.MessageTemplate, "{"+key) {
            fmt.Fprintf(s.output, "  %s: %v\n", key, value)
        }
    }
}

func (s *CustomSink) Close() error {
    // Cleanup if needed
    return nil
}

// Use your custom sink
log := mtlog.New(
    mtlog.WithSink(&CustomSink{output: os.Stdout}),
)

// This will properly render with RenderMessage():
log.Info("User {@User} performed {Action} at {Time:HH:mm}", user, "login", time.Now())
Custom Enrichers

Add custom properties to all events:

type UserEnricher struct {
    userID int
}

func (e *UserEnricher) Enrich(event *core.LogEvent, factory core.LogEventPropertyFactory) {
    event.AddPropertyIfAbsent(factory.CreateProperty("UserId", e.userID))
}

log := mtlog.New(
    mtlog.WithEnricher(&UserEnricher{userID: 123}),
)
Type Registration

Register types for special handling during capturing:

capturer := capture.NewDefaultCapturer()
capturer.RegisterScalarType(reflect.TypeOf(uuid.UUID{}))

Documentation

For comprehensive guides and examples, see the docs directory:

Testing

# Run unit tests
go test ./...

# Run integration tests with Docker Compose
docker-compose -f docker/docker-compose.test.yml up -d
go test -tags=integration ./...
docker-compose -f docker/docker-compose.test.yml down

# Run benchmarks (in benchmarks/ folder)
cd benchmarks && go test -bench=. -benchmem

See testing.md for detailed testing guide and manual container setup.

Contributing

Contributions are welcome! Please see our Contributing Guide for details.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Documentation

Index

Constants

This section is empty.

Variables

View Source
var DefaultTypeNameOptions = TypeNameOptions{
	IncludePackage:    false,
	PackageDepth:      1,
	Prefix:            "",
	Suffix:            "",
	SimplifyAnonymous: false,
	WarnOnUnknown:     true,
}

DefaultTypeNameOptions provides sensible defaults for type name extraction.

Functions

func AddCustomProfile added in v0.10.0

func AddCustomProfile(name, description string, factory func() core.LogEventFilter) error

AddCustomProfile allows users to add their own sampling profiles before the registry is frozen.

func AddCustomProfileWithVersion added in v0.10.0

func AddCustomProfileWithVersion(name, description, version string, deprecated bool, replacement string, factory func() core.LogEventFilter) error

AddCustomProfileWithVersion allows users to add versioned sampling profiles before the registry is frozen.

func Build added in v0.2.0

func Build(opts ...Option) (*logger, error)

Build creates a new logger with the specified options. Returns an error if any option fails during configuration.

func DisableSamplingDebug added in v0.10.0

func DisableSamplingDebug()

DisableSamplingDebug disables logging of sampling decisions.

func EnableSamplingDebug added in v0.10.0

func EnableSamplingDebug()

EnableSamplingDebug enables logging of sampling decisions to selflog for debugging. This helps understand why certain events are being sampled or skipped. The debug output includes the template, the sampling decision, and the filter that made it.

func ExtractTypeName added in v0.3.0

func ExtractTypeName[T any](options TypeNameOptions) string

ExtractTypeName extracts a string representation of the type T using the provided options. This is the exported version of extractTypeName for creating custom loggers.

func ExtractTypeNameWithCacheKey added in v0.3.0

func ExtractTypeNameWithCacheKey[T any](options TypeNameOptions, cacheKeyPrefix string) string

ExtractTypeNameWithCacheKey extracts a string representation of the type T using the provided options and a custom cache key prefix for multi-tenant scenarios. The cacheKeyPrefix allows for separate cache namespaces, useful in multi-tenant applications where different tenants might have different type naming requirements.

Example usage:

tenantPrefix := fmt.Sprintf("tenant:%s", tenantID)
name := ExtractTypeNameWithCacheKey[User](options, tenantPrefix)
logger := baseLogger.ForContext("SourceContext", name)

func ForType added in v0.3.0

func ForType[T any](logger core.Logger) core.Logger

ForType creates a logger with SourceContext automatically set from the type name. The type name is extracted using reflection and provides a convenient way to categorize logs by the types they relate to.

By default, only the type name is used (without package). For example:

type User struct { Name string }
ForType[User](logger).Information("User created") // SourceContext: "User"
ForType[*User](logger).Information("User updated") // SourceContext: "User" (pointer dereferenced)

This is equivalent to:

logger.ForContext("SourceContext", "User").Information("User created")

But more convenient and less error-prone for type-specific logging.

Performance: Uses reflection for type name extraction, which incurs a small performance overhead (~7%) compared to manual ForSourceContext. For high-performance scenarios, consider using ForSourceContext with string literals.

Edge Cases:

  • Anonymous structs: Return their full definition (e.g., "struct { Name string }")
  • Interfaces: Return "Unknown" as they have no concrete type name
  • Generic types: Include type parameters (e.g., "GenericType[string]")
  • Built-in types: Return their standard names (e.g., "string", "int", "[]string")
  • Function/channel types: Return their signature (e.g., "func(string) error", "chan string")

For more control over type name formatting, use ExtractTypeName with custom TypeNameOptions. See createCustomLogger in examples/fortype/main.go for a pattern to create custom loggers with TypeNameOptions for specific naming requirements.

func ForTypeWithCacheKey added in v0.3.0

func ForTypeWithCacheKey[T any](logger core.Logger, cacheKeyPrefix string) core.Logger

ForTypeWithCacheKey creates a logger with SourceContext automatically set from the type name using a custom cache key prefix. This is useful in multi-tenant scenarios where different tenants might require separate cache namespaces.

Example usage:

tenantPrefix := fmt.Sprintf("tenant:%s", tenantID)
userLogger := ForTypeWithCacheKey[User](logger, tenantPrefix)
userLogger.Information("User operation") // SourceContext: "User" (cached per tenant)

func FreezeProfiles added in v0.10.0

func FreezeProfiles()

FreezeProfiles makes the profile registry immutable, preventing further modifications. This should be called after all custom profiles have been registered, typically during application initialization to ensure thread-safety and prevent accidental modifications.

func GetAvailableProfileDescriptions added in v0.10.0

func GetAvailableProfileDescriptions() map[string]string

GetAvailableProfileDescriptions returns a map of profile names to their descriptions. This is useful for runtime configuration and discovery of available profiles.

func GetAvailableProfiles added in v0.10.0

func GetAvailableProfiles() []string

GetAvailableProfiles returns a list of available sampling profiles.

func GetProfileDescription added in v0.10.0

func GetProfileDescription(profileName string) (string, bool)

GetProfileDescription returns the description for a given profile.

func GetProfileVersion added in v0.10.0

func GetProfileVersion(profileName string) (string, bool)

GetProfileVersion returns the version of the currently active profile.

func GetProfileVersions added in v0.10.0

func GetProfileVersions(profileName string) []string

GetProfileVersions returns all available versions for a profile.

func GetTypeNameSimple added in v0.3.0

func GetTypeNameSimple[T any]() string

GetTypeNameSimple is the exported version of getTypeNameSimple for testing and benchmarking.

func GetTypeNameWithPackage added in v0.3.0

func GetTypeNameWithPackage[T any]() string

GetTypeNameWithPackage is the exported version of getTypeNameWithPackage for testing and benchmarking.

func IsProfileDeprecated added in v0.10.0

func IsProfileDeprecated(profileName string) (bool, string)

IsProfileDeprecated checks if a profile is marked as deprecated.

func IsProfileRegistryFrozen added in v0.10.0

func IsProfileRegistryFrozen() bool

IsProfileRegistryFrozen returns true if the profile registry has been frozen.

func IsSamplingDebugEnabled added in v0.10.0

func IsSamplingDebugEnabled() bool

IsSamplingDebugEnabled returns whether sampling debug logging is enabled.

func New

func New(opts ...Option) *logger

New creates a new logger with the specified options. If any option returns an error during configuration, New will panic. Use Build() for non-panicking initialization.

func NewSlogLogger

func NewSlogLogger(options ...Option) *slog.Logger

NewSlogLogger creates a new slog.Logger backed by mtlog.

func PushProperty added in v0.3.0

func PushProperty(ctx context.Context, name string, value any) context.Context

PushProperty adds a property to the context that will be included in all log events created from loggers using this context. Properties are inherited - if a context already has properties, the new context will include both the existing and new properties.

Properties set via PushProperty have the lowest precedence and are overridden by ForContext properties and event-specific properties.

Thread Safety: This function is thread-safe. It creates a new context value with a copy of the properties map, ensuring immutability. Multiple goroutines can safely call PushProperty on the same context, and each will receive its own independent context.

Example:

ctx := context.Background()
ctx = mtlog.PushProperty(ctx, "UserId", 123)
ctx = mtlog.PushProperty(ctx, "TenantId", "acme-corp")

// Both UserId and TenantId will be included in this log
logger.WithContext(ctx).Information("Processing user request")

// Properties can be overridden at more specific scopes
logger.WithContext(ctx).ForContext("UserId", 456).Information("Override test")
// Results in UserId=456 (ForContext overrides PushProperty)

func RegisterCustomProfiles added in v0.10.0

func RegisterCustomProfiles(customProfiles map[string]SamplingProfile) error

RegisterCustomProfiles allows bulk registration of custom profiles before freezing. Returns an error if any profile fails to register or if the registry is already frozen.

func ResetTypeNameCache added in v0.3.0

func ResetTypeNameCache()

ResetTypeNameCache clears the type name cache and statistics. Useful for testing and benchmarking.

func Route added in v0.10.0

func Route(name string) *sinks.RouteBuilder

Route creates a new route builder for fluent route configuration.

func SetMigrationPolicy added in v0.10.0

func SetMigrationPolicy(policy MigrationPolicy) error

SetMigrationPolicy updates the global migration policy.

func WarmupSamplingBackoff added in v0.10.0

func WarmupSamplingBackoff(keys []string, defaultFactor float64)

WarmupSamplingBackoff pre-populates the backoff sampling cache with common keys. This helps avoid cold-start allocation spikes in high-traffic applications.

func WarmupSamplingGroups added in v0.10.0

func WarmupSamplingGroups(groupNames []string)

WarmupSamplingGroups pre-populates the sampling group cache with common group names. This helps avoid cold-start allocation spikes in high-traffic applications.

func WithControlledLevel

func WithControlledLevel(initialLevel core.LogEventLevel) (Option, *LoggingLevelSwitch)

WithControlledLevel creates a level switch and applies it to the logger. Returns both the option and the level switch for external control.

func WithSeqLevelControl

func WithSeqLevelControl(serverURL string, options SeqLevelControllerOptions, seqOptions ...sinks.SeqOption) (Option, *LoggingLevelSwitch, *SeqLevelController)

WithSeqLevelControl creates a logger with Seq-controlled dynamic level adjustment. This convenience function sets up both a Seq sink and automatic level control.

Types

type AdaptiveSamplingFilter added in v0.10.0

type AdaptiveSamplingFilter struct {
	// contains filtered or unexported fields
}

AdaptiveSamplingFilter adjusts sampling rates based on system load or event frequency. Features hysteresis, dampening, and configurable aggressiveness for stable production use.

func NewAdaptiveSamplingFilter added in v0.10.0

func NewAdaptiveSamplingFilter(targetEventsPerSecond uint64) *AdaptiveSamplingFilter

NewAdaptiveSamplingFilter creates a filter that adjusts sampling based on target events per second.

func NewAdaptiveSamplingFilterPresetDefaults added in v0.10.0

func NewAdaptiveSamplingFilterPresetDefaults(targetEventsPerSecond uint64, preset DampeningPreset) *AdaptiveSamplingFilter

NewAdaptiveSamplingFilterPresetDefaults creates a filter using a preset with default rate limits.

func NewAdaptiveSamplingFilterWithDampening added in v0.10.0

func NewAdaptiveSamplingFilterWithDampening(targetEventsPerSecond uint64, minRate, maxRate float64, adjustmentInterval time.Duration, hysteresisThreshold, aggressiveness, dampeningFactor float64) *AdaptiveSamplingFilter

NewAdaptiveSamplingFilterWithDampening creates a filter with complete control including dampening factor.

func NewAdaptiveSamplingFilterWithHysteresis added in v0.10.0

func NewAdaptiveSamplingFilterWithHysteresis(targetEventsPerSecond uint64, minRate, maxRate float64, adjustmentInterval time.Duration, hysteresisThreshold, aggressiveness float64) *AdaptiveSamplingFilter

NewAdaptiveSamplingFilterWithHysteresis creates a filter with hysteresis and aggressiveness control for stability.

func NewAdaptiveSamplingFilterWithOptions added in v0.10.0

func NewAdaptiveSamplingFilterWithOptions(targetEventsPerSecond uint64, minRate, maxRate float64, adjustmentInterval time.Duration) *AdaptiveSamplingFilter

NewAdaptiveSamplingFilterWithOptions creates a filter with custom options.

func NewAdaptiveSamplingFilterWithPreset added in v0.10.0

func NewAdaptiveSamplingFilterWithPreset(targetEventsPerSecond uint64, preset DampeningPreset, minRate, maxRate float64) *AdaptiveSamplingFilter

NewAdaptiveSamplingFilterWithPreset creates a filter using a predefined dampening preset.

func (*AdaptiveSamplingFilter) GetCurrentRate added in v0.10.0

func (f *AdaptiveSamplingFilter) GetCurrentRate() float64

GetCurrentRate returns the current sampling rate (0.0 to 1.0).

func (*AdaptiveSamplingFilter) GetStats added in v0.10.0

func (f *AdaptiveSamplingFilter) GetStats() AdaptiveSamplingStats

GetStats returns statistics about the adaptive sampling.

func (*AdaptiveSamplingFilter) IsEnabled added in v0.10.0

func (f *AdaptiveSamplingFilter) IsEnabled(event *core.LogEvent) bool

IsEnabled implements core.LogEventFilter.

func (*AdaptiveSamplingFilter) Reset added in v0.10.0

func (f *AdaptiveSamplingFilter) Reset()

Reset resets the adaptive sampling state.

type AdaptiveSamplingStats added in v0.10.0

type AdaptiveSamplingStats struct {
	CurrentRate             float64
	TargetEventsPerSecond   uint64
	RecentEventCount        uint64
	SmoothedEventsPerSecond float64
	LastAdjustment          time.Time
	HysteresisThreshold     float64
	Aggressiveness          float64
}

AdaptiveSamplingStats provides statistics about adaptive sampling behavior.

type DampeningConfig added in v0.10.0

type DampeningConfig struct {
	Name                string
	Description         string
	HysteresisThreshold float64       // Threshold before making adjustments
	Aggressiveness      float64       // How quickly to adjust rates
	DampeningFactor     float64       // Additional dampening for extreme variations
	AdjustmentInterval  time.Duration // How often to check for adjustments
}

DampeningConfig holds the configuration for a dampening preset.

func GetAvailableDampeningPresets added in v0.10.0

func GetAvailableDampeningPresets() []DampeningConfig

GetAvailableDampeningPresets returns the configurations of all available presets.

func GetDampeningConfig added in v0.10.0

func GetDampeningConfig(preset DampeningPreset) DampeningConfig

GetDampeningConfig returns the configuration for a given preset.

type DampeningPreset added in v0.10.0

type DampeningPreset int

DampeningPreset represents a predefined dampening configuration.

const (
	// DampeningConservative - Heavy dampening for stable, predictable environments
	DampeningConservative DampeningPreset = iota
	// DampeningModerate - Balanced dampening for general use (default)
	DampeningModerate
	// DampeningAggressive - Light dampening for dynamic environments that need quick response
	DampeningAggressive
	// DampeningUltraStable - Maximum dampening for critical systems where stability is paramount
	DampeningUltraStable
	// DampeningResponsive - Minimal dampening for development or testing environments
	DampeningResponsive
)

type LogBuilder

type LogBuilder struct {
	// contains filtered or unexported fields
}

LogBuilder provides a fluent API for building typed log entries.

func (*LogBuilder) Level

func (b *LogBuilder) Level(level core.LogEventLevel) *LogBuilder

Level sets the log level.

func (*LogBuilder) Message

func (b *LogBuilder) Message(template string) *LogBuilder

Message sets the message template.

func (*LogBuilder) Property

func (b *LogBuilder) Property(name string, value any) *LogBuilder

Property adds a property to the entry.

func (*LogBuilder) PropertyTyped

func (b *LogBuilder) PropertyTyped(name string, value any) *LogBuilder

PropertyTyped adds a typed property to the entry.

func (*LogBuilder) Write

func (b *LogBuilder) Write()

Write writes the log entry.

type LoggerG

type LoggerG[T any] interface {
	// Logging methods with type safety
	VerboseT(messageTemplate string, value T, args ...any)
	DebugT(messageTemplate string, value T, args ...any)
	InformationT(messageTemplate string, value T, args ...any)
	WarningT(messageTemplate string, value T, args ...any)
	ErrorT(messageTemplate string, value T, args ...any)
	FatalT(messageTemplate string, value T, args ...any)

	// Short method names
	InfoT(messageTemplate string, value T, args ...any)
	WarnT(messageTemplate string, value T, args ...any)

	// ForContextT adds a typed property
	ForContextT(propertyName string, value T) LoggerG[T]
}

LoggerG is a generic type-safe logger interface.

func NewTyped

func NewTyped[T any](opts ...Option) LoggerG[T]

NewTyped creates a new typed logger.

type LoggingLevelSwitch

type LoggingLevelSwitch struct {
	// contains filtered or unexported fields
}

LoggingLevelSwitch provides thread-safe, runtime control of the minimum log level. It enables dynamic adjustment of logging levels without restarting the application.

func NewLoggingLevelSwitch

func NewLoggingLevelSwitch(initialLevel core.LogEventLevel) *LoggingLevelSwitch

NewLoggingLevelSwitch creates a new logging level switch with the specified initial level.

func (*LoggingLevelSwitch) Debug

func (ls *LoggingLevelSwitch) Debug() *LoggingLevelSwitch

Debug sets the minimum level to Debug.

func (*LoggingLevelSwitch) Error

func (ls *LoggingLevelSwitch) Error() *LoggingLevelSwitch

Error sets the minimum level to Error.

func (*LoggingLevelSwitch) Fatal

func (ls *LoggingLevelSwitch) Fatal() *LoggingLevelSwitch

Fatal sets the minimum level to Fatal.

func (*LoggingLevelSwitch) Information

func (ls *LoggingLevelSwitch) Information() *LoggingLevelSwitch

Information sets the minimum level to Information.

func (*LoggingLevelSwitch) IsEnabled

func (ls *LoggingLevelSwitch) IsEnabled(level core.LogEventLevel) bool

IsEnabled returns true if the specified level would be processed with the current minimum level setting.

func (*LoggingLevelSwitch) Level

func (ls *LoggingLevelSwitch) Level() core.LogEventLevel

Level returns the current minimum log level.

func (*LoggingLevelSwitch) SetLevel

func (ls *LoggingLevelSwitch) SetLevel(level core.LogEventLevel)

SetLevel updates the minimum log level. This operation is thread-safe and takes effect immediately.

func (*LoggingLevelSwitch) Verbose

func (ls *LoggingLevelSwitch) Verbose() *LoggingLevelSwitch

Verbose sets the minimum level to Verbose.

func (*LoggingLevelSwitch) Warning

func (ls *LoggingLevelSwitch) Warning() *LoggingLevelSwitch

Warning sets the minimum level to Warning.

type MigrationConsent added in v0.10.0

type MigrationConsent int

MigrationConsent represents user consent for profile version migration.

const (
	// MigrationDeny - Do not migrate, use requested version or fail
	MigrationDeny MigrationConsent = iota
	// MigrationPrompt - Ask for consent (default, logs warning and uses current)
	MigrationPrompt
	// MigrationAuto - Automatically migrate to latest version
	MigrationAuto
)

type MigrationPolicy added in v0.10.0

type MigrationPolicy struct {
	Consent            MigrationConsent // How to handle missing versions
	PreferStable       bool             // Prefer stable versions over latest
	MaxVersionDistance int              // Maximum version distance for auto-migration (0 = no limit)
}

MigrationPolicy controls how profile version migration is handled.

func GetMigrationPolicy added in v0.10.0

func GetMigrationPolicy() MigrationPolicy

GetMigrationPolicy returns the current migration policy.

type Option

type Option func(*config)

Option is a functional option for configuring a logger.

func Debug

func Debug() Option

Debug sets the minimum level to Debug.

func Error

func Error() Option

Error sets the minimum level to Error.

func Information

func Information() Option

Information sets the minimum level to Information.

func Verbose

func Verbose() Option

Verbose sets the minimum level to Verbose.

func Warning

func Warning() Option

Warning sets the minimum level to Warning.

func WithAutoSourceContext added in v0.2.0

func WithAutoSourceContext() Option

WithAutoSourceContext adds automatic source context detection.

func WithCallers

func WithCallers(skip int) Option

WithCallers adds caller information enrichment.

func WithCallersInfo

func WithCallersInfo() Option

WithCallersInfo adds caller information enrichment with default skip.

func WithCapturer added in v0.6.0

func WithCapturer(capturer core.Capturer) Option

WithCapturer sets the capturer for the pipeline.

func WithCapturing added in v0.6.0

func WithCapturing() Option

WithCapturing adds the cached capturer for better performance.

func WithCapturingDepth added in v0.6.0

func WithCapturingDepth(maxDepth int) Option

WithCapturingDepth adds capturing with a specific max depth.

func WithCommonEnvironment

func WithCommonEnvironment() Option

WithCommonEnvironment adds enrichers for common environment variables.

func WithConditional added in v0.10.0

func WithConditional(predicate func(*core.LogEvent) bool, sink core.LogEventSink) Option

WithConditional adds a conditional sink that only forwards events matching the predicate.

func WithConsole

func WithConsole() Option

WithConsole adds a console sink.

func WithConsoleProperties

func WithConsoleProperties() Option

WithConsoleProperties adds a console sink that displays properties.

func WithConsoleTemplate added in v0.2.0

func WithConsoleTemplate(template string) Option

WithConsoleTemplate adds a console sink with a custom output template.

func WithConsoleTemplateAndTheme added in v0.9.0

func WithConsoleTemplateAndTheme(template string, theme *sinks.ConsoleTheme) Option

WithConsoleTemplateAndTheme adds a console sink with both a custom output template and theme.

func WithConsoleTheme added in v0.2.0

func WithConsoleTheme(theme *sinks.ConsoleTheme) Option

WithConsoleTheme adds a console sink with a custom theme.

func WithContextDeadlineWarning added in v0.10.0

func WithContextDeadlineWarning(threshold time.Duration, opts ...enrichers.DeadlineOption) Option

WithContextDeadlineWarning enables automatic context deadline detection and warning. When a context deadline is approaching (within the specified threshold), the logger will automatically add deadline information to log events and optionally upgrade their level to Warning.

Example:

logger := mtlog.New(
    mtlog.WithConsole(),
    mtlog.WithContextDeadlineWarning(100*time.Millisecond),
)

This will warn when operations are within 100ms of their deadline.

func WithCorrelationId

func WithCorrelationId(correlationId string) Option

WithCorrelationId adds a fixed correlation ID to all log events.

func WithCustomCapturing added in v0.6.0

func WithCustomCapturing(maxDepth, maxStringLength, maxCollectionCount int) Option

WithCustomCapturing adds a capturer with custom limits.

func WithDeadlineOptions added in v0.10.0

func WithDeadlineOptions(opts ...enrichers.DeadlineOption) Option

WithDeadlineOptions applies additional deadline enricher options to an existing configuration. This is useful for fine-tuning deadline behavior without recreating the entire enricher.

func WithDeadlinePercentageOnly added in v0.10.0

func WithDeadlinePercentageOnly(percent float64, opts ...enrichers.DeadlineOption) Option

WithDeadlinePercentageOnly configures deadline warnings based only on percentage of time remaining, without requiring an absolute threshold. For example, 0.2 means warn when 20% of time remains.

Example:

logger := mtlog.New(
    mtlog.WithDeadlinePercentageOnly(0.2), // Warn at 20% remaining
)

func WithDeadlinePercentageThreshold added in v0.10.0

func WithDeadlinePercentageThreshold(threshold time.Duration, percent float64, opts ...enrichers.DeadlineOption) Option

WithDeadlinePercentageThreshold configures deadline warnings based on percentage of time remaining. For example, 0.1 means warn when 10% of the total time remains.

This can be used together with absolute threshold - warnings will trigger when either condition is met.

func WithDefaultSampling added in v0.10.0

func WithDefaultSampling(n uint64) Option

WithDefaultSampling sets a default sampling rate for all messages. This can be overridden by per-message sampling methods.

func WithDurableBuffer

func WithDurableBuffer(wrapped core.LogEventSink, bufferPath string) Option

WithDurableBuffer adds durable buffering to a sink for reliability.

func WithDurableBufferAdvanced

func WithDurableBufferAdvanced(wrapped core.LogEventSink, options sinks.DurableOptions) Option

WithDurableBufferAdvanced adds durable buffering with advanced options.

func WithDurableElasticsearch

func WithDurableElasticsearch(url, bufferPath string) Option

WithDurableElasticsearch adds an Elasticsearch sink with durable buffering.

func WithDurableSeq

func WithDurableSeq(serverURL, bufferPath string) Option

WithDurableSeq adds a Seq sink with durable buffering.

func WithDurableSplunk

func WithDurableSplunk(url, token, bufferPath string) Option

WithDurableSplunk adds a Splunk sink with durable buffering.

func WithDynamicLevel

func WithDynamicLevel(levelSwitch *LoggingLevelSwitch) Option

WithDynamicLevel enables dynamic level control using a level switch. This is an alias for WithLevelSwitch for better readability.

func WithElasticsearch

func WithElasticsearch(url string) Option

WithElasticsearch adds an Elasticsearch sink with default configuration.

func WithElasticsearchAPIKey

func WithElasticsearchAPIKey(url, apiKey string) Option

WithElasticsearchAPIKey adds an Elasticsearch sink with API key authentication.

func WithElasticsearchAdvanced

func WithElasticsearchAdvanced(url string, opts ...sinks.ElasticsearchOption) Option

WithElasticsearchAdvanced adds an Elasticsearch sink with advanced options.

func WithElasticsearchBasicAuth

func WithElasticsearchBasicAuth(url, username, password string) Option

WithElasticsearchBasicAuth adds an Elasticsearch sink with basic authentication.

func WithEnricher

func WithEnricher(enricher core.LogEventEnricher) Option

WithEnricher adds an enricher to the pipeline.

func WithEnvironment

func WithEnvironment(variableName, propertyName string) Option

WithEnvironment adds environment variable enrichment.

func WithEnvironmentVariables

func WithEnvironmentVariables(variables ...string) Option

WithEnvironmentVariables adds enrichers for multiple environment variables.

func WithExcludeFilter

func WithExcludeFilter(predicate func(*core.LogEvent) bool) Option

WithExcludeFilter adds a filter that excludes events matching the predicate.

func WithFile

func WithFile(path string) Option

WithFile adds a file sink.

func WithFileTemplate added in v0.2.0

func WithFileTemplate(path string, template string) Option

WithFileTemplate adds a file sink with a custom output template.

func WithFilter

func WithFilter(filter core.LogEventFilter) Option

WithFilter adds a filter to the pipeline.

func WithHashSampling

func WithHashSampling(propertyName string, rate float32) Option

WithHashSampling adds a hash-based sampling filter.

func WithLevelFilter

func WithLevelFilter(minimumLevel core.LogEventLevel) Option

WithLevelFilter adds a minimum level filter.

func WithLevelSwitch

func WithLevelSwitch(levelSwitch *LoggingLevelSwitch) Option

WithLevelSwitch enables dynamic level control using the specified level switch. When a level switch is provided, it takes precedence over the static minimum level.

func WithMachineName

func WithMachineName() Option

WithMachineName adds machine name enrichment.

func WithMinimumLevel

func WithMinimumLevel(level core.LogEventLevel) Option

WithMinimumLevel sets the minimum log level.

func WithMinimumLevelOverrides added in v0.2.0

func WithMinimumLevelOverrides(defaultLevel core.LogEventLevel, overrides map[string]core.LogEventLevel) Option

WithMinimumLevelOverrides adds source context-based level filtering.

func WithProcess

func WithProcess() Option

WithProcess adds process information enrichment.

func WithProcessInfo

func WithProcessInfo() Option

WithProcessInfo is an alias for WithProcess.

func WithProperties

func WithProperties(properties map[string]any) Option

WithProperties adds multiple global properties.

func WithProperty

func WithProperty(name string, value any) Option

WithProperty adds a global property to all log events.

func WithPropertyFilter

func WithPropertyFilter(propertyName string, expectedValue any) Option

WithPropertyFilter adds a filter that matches a specific property value.

func WithRateLimit

func WithRateLimit(maxEvents int, windowNanos int64) Option

WithRateLimit adds a rate limiting filter.

func WithRouter added in v0.10.0

func WithRouter(routes ...sinks.Route) Option

WithRouter adds a router sink for sophisticated event routing.

func WithRouterDefault added in v0.10.0

func WithRouterDefault(mode sinks.RoutingMode, defaultSink core.LogEventSink, routes ...sinks.Route) Option

WithRouterDefault adds a router sink with a default fallback sink.

func WithRouterMode added in v0.10.0

func WithRouterMode(mode sinks.RoutingMode, routes ...sinks.Route) Option

WithRouterMode adds a router sink with a specific routing mode.

func WithSampling

func WithSampling(rate float32) Option

WithSampling adds a sampling filter.

func WithSamplingMemoryLimit added in v0.10.0

func WithSamplingMemoryLimit(maxKeys int) Option

WithSamplingMemoryLimit sets a maximum number of sampling keys to track. This helps prevent unbounded memory growth from dynamic sampling keys. The limit applies to both group sampling and backoff sampling caches.

func WithSamplingPolicy added in v0.10.0

func WithSamplingPolicy(policy core.SamplingPolicy) Option

WithSamplingPolicy creates an Option from a custom SamplingPolicy.

func WithSeq

func WithSeq(serverURL string) Option

WithSeq adds a Seq sink with default configuration.

func WithSeqAPIKey

func WithSeqAPIKey(serverURL, apiKey string) Option

WithSeqAPIKey adds a Seq sink with API key authentication.

func WithSeqAdvanced

func WithSeqAdvanced(serverURL string, opts ...sinks.SeqOption) Option

WithSeqAdvanced adds a Seq sink with advanced options.

func WithSink

func WithSink(sink core.LogEventSink) Option

WithSink adds a sink to the pipeline.

func WithSourceContext added in v0.2.0

func WithSourceContext(sourceContext string) Option

WithSourceContext adds source context enrichment with the specified context.

func WithSplunk

func WithSplunk(url, token string) Option

WithSplunk adds a Splunk sink to the logger.

func WithSplunkAdvanced

func WithSplunkAdvanced(url, token string, opts ...sinks.SplunkOption) Option

WithSplunkAdvanced adds a Splunk sink with advanced options.

func WithThreadId

func WithThreadId() Option

WithThreadId adds goroutine ID enrichment.

func WithTimestamp

func WithTimestamp() Option

WithTimestamp adds timestamp enrichment.

type Repository added in v0.3.0

type Repository interface {
	Save(any) error
}

Repository is an interface for testing, used across multiple test files.

type SamplingConfigBuilder added in v0.10.0

type SamplingConfigBuilder struct {
	// contains filtered or unexported fields
}

SamplingConfigBuilder provides a fluent interface for building complex sampling configurations.

func Sampling added in v0.10.0

func Sampling() *SamplingConfigBuilder

Sampling creates a new sampling configuration builder.

func (*SamplingConfigBuilder) AsOption added in v0.10.0

func (s *SamplingConfigBuilder) AsOption() Option

AsOption is an alias for Build for convenience.

func (*SamplingConfigBuilder) Backoff added in v0.10.0

func (s *SamplingConfigBuilder) Backoff(key string, factor float64) *SamplingConfigBuilder

Backoff samples with exponential backoff.

func (*SamplingConfigBuilder) Build added in v0.10.0

func (s *SamplingConfigBuilder) Build() Option

Build returns an Option that applies all the configured sampling filters in a pipeline.

Pipeline Mode (Build): Each filter processes the output of the previous filter sequentially.

Event → Every(2) → Rate(0.5) → First(10) → Output
        ↓ 50%      ↓ 25%       ↓ 10 max

In this example:

  - Every(2): passes 50% of events (every 2nd message)
  - Rate(0.5): processes only the events that passed Every(2), passes 50% of those (25% total)
  - First(10): processes only events that passed both previous filters, passes the first 10

func (*SamplingConfigBuilder) CombineAND added in v0.10.0

func (s *SamplingConfigBuilder) CombineAND() Option

CombineAND creates a composite filter that requires all conditions to pass.

Composite AND Mode: All filters evaluate the same event independently. ALL must approve for the event to pass.

Event → [Every(2)]   ⎤
      → [Rate(0.5)]  ⎬ → AND → Output (only if ALL approve)
      → [First(10)]  ⎦

In this example, an event passes only if:

  - Every(2) approves (even-numbered events), AND
  - Rate(0.5) approves (50% random chance), AND
  - First(10) approves (within the first 10 evaluations)

func (*SamplingConfigBuilder) CombineOR added in v0.10.0

func (s *SamplingConfigBuilder) CombineOR() Option

CombineOR creates a composite filter that passes if any condition passes.

Composite OR Mode: All filters evaluate the same event independently. ANY can approve for the event to pass.

Event → [Every(2)]   ⎤
      → [Rate(0.5)]  ⎬ → OR → Output (if ANY approve)
      → [First(10)]  ⎦

In this example, an event passes if:

  - Every(2) approves (even-numbered events), OR
  - Rate(0.5) approves (50% random chance), OR
  - First(10) approves (within the first 10 evaluations)

func (*SamplingConfigBuilder) Duration added in v0.10.0

Duration samples at most once per duration.

func (*SamplingConfigBuilder) Every added in v0.10.0

Every samples every Nth message.

func (*SamplingConfigBuilder) First added in v0.10.0

First logs only the first N occurrences.

func (*SamplingConfigBuilder) Group added in v0.10.0

Group samples within a named group.

func (*SamplingConfigBuilder) Rate added in v0.10.0

Rate samples a percentage of messages (0.0 to 1.0).

func (*SamplingConfigBuilder) When added in v0.10.0

func (s *SamplingConfigBuilder) When(predicate func() bool, n uint64) *SamplingConfigBuilder

When samples conditionally based on a predicate.

type SamplingProfile added in v0.10.0

type SamplingProfile struct {
	// contains filtered or unexported fields
}

SamplingProfile represents a predefined sampling configuration for common scenarios, with versioning support.

func GetProfileWithMigration added in v0.10.0

func GetProfileWithMigration(profileName, requestedVersion string) (SamplingProfile, string, bool)

GetProfileWithMigration retrieves a profile with automatic migration support. It returns the profile, the actual version used, and whether it was found.

func GetProfileWithVersion added in v0.10.0

func GetProfileWithVersion(profileName, version string) (SamplingProfile, bool)

GetProfileWithVersion returns a specific version of a profile.

type SeqLevelController

type SeqLevelController struct {
	// contains filtered or unexported fields
}

SeqLevelController automatically updates a LoggingLevelSwitch based on the minimum level configured in Seq. This enables centralized level control where Seq acts as the source of truth for log levels.

func NewSeqLevelController

func NewSeqLevelController(levelSwitch *LoggingLevelSwitch, seqSink *sinks.SeqSink, options SeqLevelControllerOptions) *SeqLevelController

NewSeqLevelController creates a new controller that synchronizes a level switch with Seq's minimum level setting. The controller will periodically query Seq and update the level switch when changes are detected.

func (*SeqLevelController) Close

func (slc *SeqLevelController) Close()

Close stops the level controller and waits for background operations to complete.

func (*SeqLevelController) ForceCheck

func (slc *SeqLevelController) ForceCheck() error

ForceCheck immediately queries Seq for the current level and updates the level switch if necessary. This is useful for testing or immediate synchronization.

func (*SeqLevelController) GetCurrentLevel

func (slc *SeqLevelController) GetCurrentLevel() core.LogEventLevel

GetCurrentLevel returns the current level from the level switch.

func (*SeqLevelController) GetLastSeqLevel

func (slc *SeqLevelController) GetLastSeqLevel() core.LogEventLevel

GetLastSeqLevel returns the last level retrieved from Seq.

type SeqLevelControllerBuilder

type SeqLevelControllerBuilder struct {
	// contains filtered or unexported fields
}

SeqLevelControllerBuilder provides a fluent interface for building Seq level controllers.

func NewSeqLevelControllerBuilder

func NewSeqLevelControllerBuilder(serverURL string) *SeqLevelControllerBuilder

NewSeqLevelControllerBuilder creates a new builder for Seq level controllers.

func (*SeqLevelControllerBuilder) Build

Build creates the Seq level controller and returns the logger option, level switch, and controller.

func (*SeqLevelControllerBuilder) WithCheckInterval

func (b *SeqLevelControllerBuilder) WithCheckInterval(interval time.Duration) *SeqLevelControllerBuilder

WithCheckInterval sets the interval for querying Seq.

func (*SeqLevelControllerBuilder) WithErrorHandler

func (b *SeqLevelControllerBuilder) WithErrorHandler(onError func(error)) *SeqLevelControllerBuilder

WithErrorHandler sets the error handler for level checking failures.

func (*SeqLevelControllerBuilder) WithInitialCheck

func (b *SeqLevelControllerBuilder) WithInitialCheck(initialCheck bool) *SeqLevelControllerBuilder

WithInitialCheck controls whether to perform an initial level check.

func (*SeqLevelControllerBuilder) WithLevelSwitch

WithLevelSwitch uses an existing level switch instead of creating a new one.

func (*SeqLevelControllerBuilder) WithSeqAPIKey

WithSeqAPIKey adds API key authentication for Seq.

type SeqLevelControllerOptions

type SeqLevelControllerOptions struct {
	// CheckInterval is how often to query Seq for level changes.
	// Default: 30 seconds
	CheckInterval time.Duration

	// OnError is called when an error occurs during level checking.
	// Default: no-op
	OnError func(error)

	// InitialCheck determines whether to perform an initial check immediately.
	// Default: true
	InitialCheck bool
}

SeqLevelControllerOptions configures a Seq level controller.

type StructuredLogger

type StructuredLogger struct {
	// contains filtered or unexported fields
}

StructuredLogger provides type-safe structured logging.

func NewStructured

func NewStructured(opts ...Option) *StructuredLogger

NewStructured creates a new structured logger.

func (*StructuredLogger) Log

func (sl *StructuredLogger) Log(level core.LogEventLevel, messageTemplate string, properties *core.PropertyBag)

Log logs with a typed property bag.

func (*StructuredLogger) LogWith

func (sl *StructuredLogger) LogWith() *LogBuilder

LogWith returns a builder for logging with typed properties.

type TypeNameCacheStats added in v0.3.0

type TypeNameCacheStats struct {
	Hits      int64   // Number of cache hits
	Misses    int64   // Number of cache misses
	Evictions int64   // Number of cache evictions due to LRU policy
	HitRatio  float64 // Hit ratio as a percentage (0-100)
	Size      int64   // Number of entries currently in the cache
	MaxSize   int64   // Maximum cache size before eviction
}

TypeNameCacheStats provides performance statistics for the type name cache.

func GetTypeNameCacheStats added in v0.3.0

func GetTypeNameCacheStats() TypeNameCacheStats

GetTypeNameCacheStats returns current cache performance statistics.

type TypeNameOptions added in v0.3.0

type TypeNameOptions struct {
	// IncludePackage determines whether to include the package path in the type name.
	// Default: false (only type name)
	IncludePackage bool

	// PackageDepth limits how many package path segments to include when IncludePackage is true.
	// 0 means include full package path. Default: 1 (only immediate package)
	PackageDepth int

	// Prefix is prepended to the type name for consistent naming.
	// Example: "MyApp." would result in "MyApp.User" for User type
	Prefix string

	// Suffix is appended to the type name.
	// Example: ".Service" would result in "User.Service" for User type
	Suffix string

	// SimplifyAnonymous controls how anonymous struct types are displayed.
	// When true, anonymous structs return "AnonymousStruct" instead of their full definition.
	// Default: false (shows full struct definition)
	SimplifyAnonymous bool

	// WarnOnUnknown controls whether to log warnings for "Unknown" type names.
	// When true, warnings are logged to help with debugging interface types and unresolvable types.
	// When false, warnings are suppressed to reduce log noise in production.
	// Default: true (warnings enabled)
	WarnOnUnknown bool
}

TypeNameOptions controls how type names are extracted and formatted for SourceContext.

Use with extractTypeName to create custom loggers with more control over type name formatting than the default ForType function. See demonstrateTypeNameOptions in examples/fortype/main.go for practical usage examples.

Example:

opts := TypeNameOptions{IncludePackage: true, Prefix: "MyApp."}
name := extractTypeName[User](opts) // Result: "MyApp.mypackage.User"
logger := baseLogger.ForContext("SourceContext", name)

type TypedLogger

type TypedLogger[T any] struct {
	// contains filtered or unexported fields
}

TypedLogger wraps a regular logger with type-safe methods.

func (*TypedLogger[T]) DebugT

func (l *TypedLogger[T]) DebugT(messageTemplate string, value T, args ...any)

DebugT logs at debug level with a typed value.

func (*TypedLogger[T]) ErrorT

func (l *TypedLogger[T]) ErrorT(messageTemplate string, value T, args ...any)

ErrorT logs at error level with a typed value.

func (*TypedLogger[T]) FatalT

func (l *TypedLogger[T]) FatalT(messageTemplate string, value T, args ...any)

FatalT logs at fatal level with a typed value.

func (*TypedLogger[T]) ForContextT

func (l *TypedLogger[T]) ForContextT(propertyName string, value T) LoggerG[T]

ForContextT adds a typed property to the logger context.

func (*TypedLogger[T]) InfoT

func (l *TypedLogger[T]) InfoT(messageTemplate string, value T, args ...any)

InfoT writes an information-level log event with a typed value (alias for InformationT).

func (*TypedLogger[T]) InformationT

func (l *TypedLogger[T]) InformationT(messageTemplate string, value T, args ...any)

InformationT logs at information level with a typed value.

func (*TypedLogger[T]) VerboseT

func (l *TypedLogger[T]) VerboseT(messageTemplate string, value T, args ...any)

VerboseT logs at verbose level with a typed value.

func (*TypedLogger[T]) WarnT

func (l *TypedLogger[T]) WarnT(messageTemplate string, value T, args ...any)

WarnT writes a warning-level log event with a typed value (alias for WarningT).

func (*TypedLogger[T]) WarningT

func (l *TypedLogger[T]) WarningT(messageTemplate string, value T, args ...any)

WarningT logs at warning level with a typed value.

type UserRepository added in v0.3.0

type UserRepository struct{}

UserRepository implements Repository for testing.

func (*UserRepository) Save added in v0.3.0

func (ur *UserRepository) Save(any) error

Directories

Path    Synopsis
adapters
    otel module
    sentry module
cmd
core    Package core provides the fundamental interfaces and types for mtlog.
examples
    async command
    basic command
    capturing command
    conditional command
    configuration command
    context command
    custom-sink command
    deadline-awareness command    Package main demonstrates mtlog's context deadline awareness feature.
    deadline-awareness-metrics command    Package main demonstrates integrating mtlog deadline awareness with metrics systems.
    durable command
    dynamic-levels command
    elasticsearch command
    enrichers command
    filtering command
    fortype command    Package main demonstrates using ForType for automatic SourceContext from type names.
    generics command
    go-templates command
    logcontext command    Package main demonstrates using LogContext for scoped properties.
    logvalue command
    otel command    Package main demonstrates using OTEL-style dotted property names with mtlog.
    rolling command
    router command
    sampling command
    sampling-debug command
    seq command
    showcase command
    splunk command
    themes command
    with command
goland-plugin
internal
    selflog    Package selflog provides internal diagnostic logging for mtlog.
