logmgr

Ultra-high performance, zero-config logging library for Go that manages everything.

Features

  • 🚀 Ridiculously Fast: Lock-free ring buffer, object pooling, batch processing
  • 📊 JSON Output: ELK/Loki/Grafana compatible structured logging
  • 🔄 Decoupled I/O: Async background workers for non-blocking logging
  • 📋 Deterministic: Consistent field ordering for reliable parsing
  • 🛡️ Production Safe: Race condition protection and bounds checking
  • 🎯 Multiple Outputs: Console, file with rotation, custom sinks
  • 📈 Multiple Levels: Debug, Info, Warn, Error, Fatal with filtering
  • 🔧 Zero Config: Works out of the box with sensible defaults
  • 💾 Memory Efficient: Object pooling reduces GC pressure
  • 🔒 Thread Safe: Concurrent logging from multiple goroutines
  • 🏗️ Structured Fields: Type-safe field API with flattened JSON output

Performance

  • Lock-free logging: Sub-microsecond log calls
  • Batch processing: Efficient I/O with configurable batching
  • Object pooling: Minimal memory allocations
  • Background workers: One worker per CPU core for optimal throughput
  • Buffered I/O: Large buffers reduce system call overhead

Benchmarks

Run benchmarks:

make benchmark
# or
go test -bench=. -benchmem

Local Benchmarking

To run comprehensive benchmarks on your system:

# Run all benchmarks with memory allocation stats
go test -bench=. -benchmem -count=3

# Run specific benchmark multiple times for accuracy
go test -bench=BenchmarkLogInfo -benchmem -count=5

# Run benchmarks with CPU profiling
go test -bench=. -benchmem -cpuprofile=cpu.prof

# Run benchmarks with memory profiling
go test -bench=. -benchmem -memprofile=mem.prof

# Compare with baseline (save results first)
go test -bench=. -benchmem > baseline.txt
# After changes:
go test -bench=. -benchmem > optimized.txt
# Use benchcmp tool to compare

Benchmark Interpretation
  • ns/op: Nanoseconds per operation (lower is better)
  • B/op: Bytes allocated per operation (lower is better)
  • allocs/op: Number of allocations per operation (lower is better)

Performance Tips
  1. Use structured fields sparingly - Each field adds ~50ns overhead
  2. Pre-filter log levels - Use logmgr.GetLevel() to guard expensive operations (see the sketch after this list)
  3. Batch operations - Sinks automatically batch for optimal I/O
  4. Choose appropriate sinks - Console < File < Async File, ordered from slowest to fastest
  5. Monitor allocations - Zero allocations for simple logging is the goal
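
A minimal sketch of tips 1 and 2 in practice; buildDebugSnapshot is a hypothetical, expensive helper used only for illustration:

// Guard expensive debug-only work behind the current level (tip 2).
if logmgr.GetLevel() <= logmgr.DebugLevel {
	logmgr.Debug("cache state",
		logmgr.Field("snapshot", buildDebugSnapshot()), // hypothetical expensive helper
	)
}

// Hot path: keep fields sparse to minimise per-call overhead (tip 1).
logmgr.Info("request served", logmgr.Field("status", 200))
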
Profiling Integration
import (
	"log"
	"net/http"
	_ "net/http/pprof"
)

// Enable pprof endpoint for production profiling (run this inside main)
go func() {
	log.Println(http.ListenAndServe("localhost:6060", nil))
}()

// Profile your logging-heavy code paths
// Visit http://localhost:6060/debug/pprof/

Installation

go get github.com/bxrne/logmgr

Usage

Basic Usage
package main

import (
	"time"
	"github.com/bxrne/logmgr"
)

func main() {
	// Initialize the logger
	logmgr.SetLevel(logmgr.DebugLevel) // Set the desired log level (optional)
	
	// Add sinks for output
	logmgr.AddSink(logmgr.DefaultConsoleSink) // Console output
	
	// Add file sink with rotation (24 hours, 100MB max)
	fileSink, err := logmgr.NewFileSink("app.log", 24*time.Hour, 100*1024*1024)
	if err != nil {
		panic(err)
	}
	logmgr.AddSink(fileSink)

	// Log messages with structured fields
	logmgr.Debug("This is a debug message")
	logmgr.Info("User logged in", 
		logmgr.Field("user_id", 12345),
		logmgr.Field("action", "login"),
		logmgr.Field("ip", "192.168.1.1"),
	)
	logmgr.Warn("High memory usage", 
		logmgr.Field("memory_percent", 85.5),
		logmgr.Field("threshold", 80.0),
	)
	logmgr.Error("Database connection failed", 
		logmgr.Field("error", "connection timeout"),
		logmgr.Field("host", "db.example.com"),
		logmgr.Field("port", 5432),
		logmgr.Field("retries", 3),
	)
	
	// Gracefully shutdown to flush all logs
	logmgr.Shutdown()
	
	// Fatal logs and exits with code 1
	// logmgr.Fatal("Critical system failure", 
	//   logmgr.Field("error", "out of memory"),
	//   logmgr.Field("available_memory", "0MB"),
	// )
}

Advanced Usage

Custom Sinks
// Create your own sink
type CustomSink struct{}

func (cs *CustomSink) Write(entries []*logmgr.Entry) error {
	for _, entry := range entries {
		// Process each entry (send to an external service, etc.)
		_ = entry // placeholder so the example compiles
	}
	return nil
}

func (cs *CustomSink) Close() error {
	return nil
}

// Add custom sink
logmgr.AddSink(&CustomSink{})

High-Performance File Logging
// Async file sink for maximum performance
asyncSink, err := logmgr.NewAsyncFileSink(
	"app.log",           // filename
	24*time.Hour,        // max age
	100*1024*1024,       // max size (100MB)
	1000,                // buffer size
)
if err != nil {
	panic(err)
}
logmgr.AddSink(asyncSink)

Multiple Sinks
// Set multiple sinks at once
logmgr.SetSinks(
	logmgr.DefaultConsoleSink,
	fileSink,
	customSink,
)

Structured Logging
// Rich structured logging with type safety
logmgr.Info("API request processed",
	logmgr.Field("method", "POST"),
	logmgr.Field("path", "/api/users"),
	logmgr.Field("status_code", 201),
	logmgr.Field("duration_ms", 45.67),
	logmgr.Field("user_id", 12345),
	logmgr.Field("request_id", "req-abc-123"),
)

// Conditional logging
if logmgr.GetLevel() <= logmgr.DebugLevel {
	logmgr.Debug("Detailed debug info",
		logmgr.Field("internal_state", complexObject),
		logmgr.Field("memory_usage", getMemoryUsage()),
	)
}

JSON Output Format

Fields are flattened directly into the root JSON object for better performance and easier parsing. Field ordering is deterministic for consistent log parsing:

Field Ordering Convention
  1. level - Always first for easy filtering
  2. timestamp - Always second for chronological sorting
  3. message - Always third for human readability
  4. Custom fields - Sorted alphabetically by key name

This ensures consistent output across all log entries, making parsing and analysis predictable.
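
For reference, the first JSON document below is produced by a call like the one in Basic Usage:

logmgr.Info("User logged in",
	logmgr.Field("user_id", 12345),
	logmgr.Field("action", "login"),
	logmgr.Field("ip", "192.168.1.1"),
)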

{
  "level": "info",
  "timestamp": "2024-01-15T10:30:45.123456789Z",
  "message": "User logged in",
  "action": "login",
  "ip": "192.168.1.1",
  "user_id": 12345
}

Error example with alphabetically sorted custom fields:

{
  "level": "error",
  "timestamp": "2024-01-15T10:30:45.123456789Z", 
  "message": "Database connection failed",
  "error": "connection timeout",
  "host": "db.example.com",
  "port": 5432,
  "retries": 3
}

Safety Features
  • Slice bounds protection: Prevents runtime panics during high-load scenarios
  • Race condition handling: Safe concurrent access to ring buffers and worker pools
  • Nil pointer safety: Defensive programming against edge cases
  • Overflow protection: Handles uint64 to int conversions safely
  • Memory bounds checking: Prevents buffer overruns and out-of-bounds access

API Reference

Core Functions
  • logmgr.SetLevel(level Level) - Set global log level
  • logmgr.GetLevel() Level - Get current log level
  • logmgr.AddSink(sink Sink) - Add output sink
  • logmgr.SetSinks(sinks ...Sink) - Replace all sinks
  • logmgr.Shutdown() - Graceful shutdown with log flushing
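
A typical lifecycle using these functions, condensed from Basic Usage above:

logmgr.SetLevel(logmgr.InfoLevel)
logmgr.AddSink(logmgr.DefaultConsoleSink)
defer logmgr.Shutdown() // flush any buffered log entries on exit
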
Logging Functions
  • logmgr.Debug(message string, fields ...LogField) - Debug level logging
  • logmgr.Info(message string, fields ...LogField) - Info level logging
  • logmgr.Warn(message string, fields ...LogField) - Warning level logging
  • logmgr.Error(message string, fields ...LogField) - Error level logging
  • logmgr.Fatal(message string, fields ...LogField) - Fatal level logging (exits program)

Field Creation
  • logmgr.Field(key string, value interface{}) LogField - Create structured field

Log Levels
  • logmgr.DebugLevel - Detailed debugging information
  • logmgr.InfoLevel - General informational messages
  • logmgr.WarnLevel - Potentially harmful situations
  • logmgr.ErrorLevel - Error events that might allow the application to continue
  • logmgr.FatalLevel - Very severe error events that will lead the application to abort
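
Levels are ordered DebugLevel < InfoLevel < WarnLevel < ErrorLevel < FatalLevel (see the constants under type Level below). A quick sketch of level filtering:

logmgr.SetLevel(logmgr.WarnLevel)

logmgr.Debug("not emitted") // below WarnLevel, filtered out
logmgr.Info("not emitted")  // below WarnLevel, filtered out
logmgr.Warn("emitted")
logmgr.Error("emitted")
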
Sinks

Console Sinks
  • logmgr.DefaultConsoleSink - Pre-configured stdout sink
  • logmgr.NewConsoleSink() - Create new stdout sink
  • logmgr.NewStderrSink() - Create new stderr sink

File Sinks
  • logmgr.NewFileSink(filename, maxAge, maxSize) - Create file sink with rotation
  • logmgr.NewDefaultFileSink(filename, maxAge) - Create file sink with default 100MB size limit
  • logmgr.NewAsyncFileSink(filename, maxAge, maxSize, bufferSize) - Create async file sink

Development

Prerequisites
  • Go 1.21 or later
  • Make (optional, for convenience commands)

Development Commands
# Install development tools
make install-tools

# Run tests
make test

# Run tests with race detection
make test-race

# Run tests with coverage
make test-cover

# Run benchmarks
make benchmark

# Format code
make fmt

# Lint code
make lint

# Run all checks
make check

# Build example application
make build

# Build for all platforms
make build-all

# Clean build artifacts
make clean

# Show all available commands
make help

Manual Commands

If you prefer not to use Make:

# Install tools
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
go install golang.org/x/tools/cmd/goimports@latest

# Test
go test -v ./...
go test -v -race ./...
go test -v -race -coverprofile=coverage.out ./...

# Lint and format
golangci-lint run --timeout=5m
go fmt ./...
goimports -w .
go vet ./...

# Build
go build -o bin/logmgr-example ./example

# Benchmarks
go test -bench=. -benchmem ./...

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes
  4. Run tests and linting (make check)
  5. Commit your changes using Conventional Commits
  6. Push to the branch (git push origin feature/amazing-feature)
  7. Open a Pull Request

Commit Message Format

This project uses Conventional Commits for automated versioning and changelog generation:

  • feat: - New features (minor version bump)
  • fix: - Bug fixes (patch version bump)
  • perf: - Performance improvements (patch version bump)
  • refactor: - Code refactoring (patch version bump)
  • docs: - Documentation changes (no version bump)
  • test: - Test changes (no version bump)
  • chore: - Maintenance tasks (no version bump)
  • BREAKING CHANGE: - Breaking changes (major version bump)

Examples:

feat: add async file sink for high-performance logging
fix: resolve race condition in ring buffer
perf: optimize JSON marshaling for better throughput
docs: update API documentation with examples

License

MIT License - see the LICENSE file for details.

Changelog

See CHANGELOG.md for a detailed history of changes.

Documentation

Overview

Package logmgr provides high-performance structured logging for Go applications.

Constants

This section is empty.

Variables

var DefaultConsoleSink = NewConsoleSink()

DefaultConsoleSink is a pre-configured console sink instance that can be used immediately. This is the most common way to add console logging to your application.

Example:

logmgr.AddSink(logmgr.DefaultConsoleSink)

Functions

func AddSink

func AddSink(sink Sink)

AddSink adds a sink to the logger

Example:

logmgr.AddSink(logmgr.DefaultConsoleSink)
fileSink, _ := logmgr.NewFileSink("app.log", 24*time.Hour, 100*1024*1024)
logmgr.AddSink(fileSink)

func Debug

func Debug(message string, fields ...LogField)

Debug logs a message at debug level with optional structured fields

Example:

logmgr.Debug("Processing request",
  logmgr.Field("request_id", "req-123"),
  logmgr.Field("user_id", 456),
)

func Error

func Error(message string, fields ...LogField)

Error logs a message at error level with optional structured fields

Example:

logmgr.Error("Database connection failed",
  logmgr.Field("error", "connection timeout"),
  logmgr.Field("host", "db.example.com"),
  logmgr.Field("retries", 3),
)

func Fatal

func Fatal(message string, fields ...LogField)

Fatal logs a message at fatal level with optional structured fields and exits the program

Example:

logmgr.Fatal("Critical system failure",
  logmgr.Field("error", "out of memory"),
  logmgr.Field("available_memory", "0MB"),
)

func Info

func Info(message string, fields ...LogField)

Info logs a message at info level with optional structured fields

Example:

logmgr.Info("User logged in",
  logmgr.Field("user_id", 12345),
  logmgr.Field("action", "login"),
)

func SetLevel

func SetLevel(level Level)

SetLevel sets the global log level

Example:

logmgr.SetLevel(logmgr.DebugLevel)

func SetSinks

func SetSinks(sinks ...Sink)

SetSinks replaces all sinks with the provided ones

Example:

logmgr.SetSinks(logmgr.DefaultConsoleSink, fileSink)

func Shutdown

func Shutdown()

Shutdown gracefully shuts down the logger, ensuring all logs are flushed

Example:

defer logmgr.Shutdown()

func Warn

func Warn(message string, fields ...LogField)

Warn logs a message at warn level with optional structured fields

Example:

logmgr.Warn("High memory usage",
  logmgr.Field("memory_percent", 85.5),
  logmgr.Field("threshold", 80.0),
)

Types

type AsyncFileSink

type AsyncFileSink struct {
	*FileSink // Embedded FileSink for actual file operations
	// contains filtered or unexported fields
}

AsyncFileSink is a high-performance asynchronous file sink that writes log entries in a background goroutine. This provides maximum performance by decoupling log writing from the application's critical path.

Features:

  • Non-blocking writes with configurable buffer
  • Background writer goroutine
  • Automatic fallback to synchronous writes when buffer is full
  • All FileSink features (rotation, buffering, etc.)

Example:

asyncSink, err := logmgr.NewAsyncFileSink("app.log", 24*time.Hour, 100*1024*1024, 1000)
if err != nil {
  panic(err)
}
defer asyncSink.Close()
logmgr.AddSink(asyncSink)

func NewAsyncFileSink

func NewAsyncFileSink(filename string, maxAge time.Duration, maxSize int64, bufferSize int) (*AsyncFileSink, error)

NewAsyncFileSink creates a new asynchronous file sink with the specified parameters.

Parameters:

  • filename: Path to the log file
  • maxAge: Maximum age before rotation (0 disables age-based rotation)
  • maxSize: Maximum size in bytes before rotation (0 disables size-based rotation)
  • bufferSize: Size of the internal channel buffer for async writes

The background writer goroutine is started automatically and will process log entries every 100ms or when entries are available.

Example:

// High-performance async sink with 1000-entry buffer
sink, err := logmgr.NewAsyncFileSink("logs/app.log", 24*time.Hour, 100*1024*1024, 1000)
if err != nil {
  return err
}
defer sink.Close()

func (*AsyncFileSink) Close

func (afs *AsyncFileSink) Close() error

Close gracefully shuts down the async file sink by stopping the background writer and ensuring all buffered entries are written to disk.

This method will block until all pending writes are completed, ensuring no log entries are lost during shutdown. It's safe to call Close multiple times.

Example:

defer asyncSink.Close()

func (*AsyncFileSink) Write

func (afs *AsyncFileSink) Write(entries []*Entry) error

Write writes log entries to the async buffer for background processing. This method is non-blocking and returns immediately in most cases.

If the internal buffer is full, the method falls back to synchronous writing to prevent blocking the application. A copy of the entries is made since the original slice may be reused by the caller.

This method is safe for concurrent use.

type ConsoleSink

type ConsoleSink struct {
	// contains filtered or unexported fields
}

ConsoleSink writes log entries to stdout in JSON format with high performance buffering. It implements the Sink interface and is safe for concurrent use.

Example:

sink := logmgr.NewConsoleSink()
logmgr.AddSink(sink)

func NewConsoleSink

func NewConsoleSink() *ConsoleSink

NewConsoleSink creates a new console sink that writes to stdout. The sink uses an 8KB buffer for optimal performance.

Example:

consoleSink := logmgr.NewConsoleSink()
logmgr.AddSink(consoleSink)

func (*ConsoleSink) Close

func (cs *ConsoleSink) Close() error

Close flushes any remaining buffered data and closes the console sink. This method should be called during application shutdown to ensure all log entries are written.

Example:

defer consoleSink.Close()

func (*ConsoleSink) Write

func (cs *ConsoleSink) Write(entries []*Entry) error

Write writes a batch of log entries to stdout in JSON format. Each entry is written as a single line of JSON followed by a newline. This method is safe for concurrent use.

The method will skip any entries that fail to marshal to JSON rather than failing the entire batch.

Example output:

{"level":"info","timestamp":"2024-01-15T10:30:45.123Z","message":"User logged in","user_id":12345}
{"level":"error","timestamp":"2024-01-15T10:30:46.456Z","message":"Database error","error":"connection timeout"}

type Entry

type Entry struct {
	Level     Level                  `json:"level"`
	Timestamp time.Time              `json:"timestamp"`
	Message   string                 `json:"message"`
	Fields    map[string]interface{} `json:"-"` // Don't marshal this directly
	// contains filtered or unexported fields
}

Entry represents a log entry with structured fields

func (*Entry) MarshalJSON

func (e *Entry) MarshalJSON() ([]byte, error)

MarshalJSON implements custom JSON marshaling for Entry with flattened fields. Field ordering convention: level, timestamp, message, then custom fields in alphabetical order.
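
A small sketch of the flattened output, constructing an Entry directly just to show the ordering (normally the logger creates entries for you; assumes the standard encoding/json, fmt, and time imports):

e := &logmgr.Entry{
	Level:     logmgr.InfoLevel,
	Timestamp: time.Now(),
	Message:   "User logged in",
	Fields:    map[string]interface{}{"user_id": 12345, "action": "login"},
}
b, _ := json.Marshal(e) // dispatches to Entry.MarshalJSON
fmt.Println(string(b))
// {"level":"info","timestamp":"...","message":"User logged in","action":"login","user_id":12345}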

type FileSink

type FileSink struct {
	// contains filtered or unexported fields
}

FileSink writes log entries to a file with automatic rotation support. It implements the Sink interface and provides both time-based and size-based rotation. The sink is safe for concurrent use and uses buffered I/O for optimal performance.

Features:

  • Automatic file rotation based on age and/or size
  • Thread-safe concurrent writes
  • Buffered I/O with 16KB buffer
  • Automatic directory creation
  • Timestamped rotated files

Example:

fileSink, err := logmgr.NewFileSink("app.log", 24*time.Hour, 100*1024*1024)
if err != nil {
  panic(err)
}
logmgr.AddSink(fileSink)

func NewDefaultFileSink

func NewDefaultFileSink(filename string, maxAge time.Duration) *FileSink

NewDefaultFileSink creates a file sink with default settings for backward compatibility. Uses a default maximum size of 100MB with the specified age limit.

This function will panic if the file cannot be created, making it suitable for initialization code where errors should be fatal.

Example:

sink := logmgr.NewDefaultFileSink("app.log", 24*time.Hour)
logmgr.AddSink(sink)

func NewFileSink

func NewFileSink(filename string, maxAge time.Duration, maxSize int64) (*FileSink, error)

NewFileSink creates a new file sink with the specified rotation parameters.

Parameters:

  • filename: Path to the log file (directories will be created if needed)
  • maxAge: Maximum age before rotation (0 disables age-based rotation)
  • maxSize: Maximum size in bytes before rotation (0 disables size-based rotation)

The sink will rotate the file when either condition is met. Rotated files are renamed with a timestamp suffix (e.g., "app_2024-01-15_10-30-45.log").

Example:

// Rotate daily or when file reaches 100MB
sink, err := logmgr.NewFileSink("logs/app.log", 24*time.Hour, 100*1024*1024)
if err != nil {
  return err
}
defer sink.Close()

func (*FileSink) Close

func (fs *FileSink) Close() error

Close flushes any remaining buffered data and closes the file sink. This method should be called during application shutdown to ensure all log entries are written and the file is properly closed. It's safe to call Close multiple times.

Example:

defer fileSink.Close()

func (*FileSink) Write

func (fs *FileSink) Write(entries []*Entry) error

Write writes a batch of log entries to the file in JSON format. Each entry is written as a single line of JSON followed by a newline. This method is safe for concurrent use and will automatically rotate the file if rotation conditions are met.

The method will skip any entries that fail to marshal to JSON rather than failing the entire batch. File rotation is checked before writing the batch.

Example output in file:

{"level":"info","timestamp":"2024-01-15T10:30:45.123Z","message":"User logged in","user_id":12345}
{"level":"error","timestamp":"2024-01-15T10:30:46.456Z","message":"Database error","error":"connection timeout"}

type Level

type Level int32

Level represents log severity levels

const (
	// DebugLevel is used for detailed debugging information
	DebugLevel Level = iota
	// InfoLevel is used for general informational messages
	InfoLevel
	// WarnLevel is used for potentially harmful situations
	WarnLevel
	// ErrorLevel is used for error events that might still allow the application to continue
	ErrorLevel
	// FatalLevel is used for very severe error events that will lead the application to abort
	FatalLevel
)

func GetLevel

func GetLevel() Level

GetLevel returns the current log level

func (Level) String

func (l Level) String() string

String returns the string representation of the level
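
For example (assuming String returns the same lowercase names used in the JSON "level" field):

fmt.Println(logmgr.InfoLevel.String())  // info
fmt.Println(logmgr.ErrorLevel.String()) // error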

type LogField

type LogField struct {
	Key   string
	Value interface{}
}

LogField represents a structured logging field

func Field

func Field(key string, value interface{}) LogField

Field creates a new structured logging field

Example:

logmgr.Info("User action",
  logmgr.Field("user_id", 12345),
  logmgr.Field("action", "login"),
)

type Logger

type Logger struct {
	// contains filtered or unexported fields
}

Logger represents the main logger instance with high-performance features

type RingBuffer

type RingBuffer struct {
	// contains filtered or unexported fields
}

RingBuffer implements a lock-free ring buffer for high-performance logging

func NewRingBuffer

func NewRingBuffer(size uint64) *RingBuffer

NewRingBuffer creates a new ring buffer with the given size (must be power of 2)

func (*RingBuffer) Pop

func (rb *RingBuffer) Pop(entries []*Entry) int

Pop removes and returns entries from the ring buffer

func (*RingBuffer) Push

func (rb *RingBuffer) Push(entry *Entry) bool

Push adds an entry to the ring buffer (lock-free)
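
The ring buffer is used internally by the logger, but a standalone sketch of the Push/Pop flow could look like this (the return-value semantics noted in the comments are assumptions based on the signatures; assumes standard fmt and time imports):

rb := logmgr.NewRingBuffer(1024) // size must be a power of 2

if ok := rb.Push(&logmgr.Entry{
	Level:     logmgr.InfoLevel,
	Timestamp: time.Now(),
	Message:   "queued entry",
}); !ok {
	// assumed: Push reports false when the buffer is full
}

batch := make([]*logmgr.Entry, 64)
n := rb.Pop(batch) // assumed: fills batch and returns the number of entries copied
for _, e := range batch[:n] {
	fmt.Println(e.Message)
}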

type Sink

type Sink interface {
	// Write processes a batch of log entries
	Write(entries []*Entry) error
	// Close gracefully shuts down the sink
	Close() error
}

Sink interface for output destinations
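
A minimal in-memory implementation sketch (MemorySink is hypothetical, e.g. for tests, and not part of the package; assumes encoding/json and sync imports):

// MemorySink keeps rendered log lines in memory.
type MemorySink struct {
	mu    sync.Mutex
	lines []string
}

func (ms *MemorySink) Write(entries []*logmgr.Entry) error {
	ms.mu.Lock()
	defer ms.mu.Unlock()
	for _, e := range entries {
		b, err := json.Marshal(e) // flattened output via Entry.MarshalJSON
		if err != nil {
			continue // skip entries that fail to marshal, like the built-in sinks
		}
		// Store rendered lines rather than *Entry pointers, since the logger
		// pools and may reuse entries after Write returns.
		ms.lines = append(ms.lines, string(b))
	}
	return nil
}

func (ms *MemorySink) Close() error { return nil }

// Usage: logmgr.AddSink(&MemorySink{})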

type StderrSink

type StderrSink struct {
	// contains filtered or unexported fields
}

StderrSink writes log entries to stderr in JSON format. This is useful for separating error logs from regular output or when stdout is used for application data.

Example:

stderrSink := logmgr.NewStderrSink()
logmgr.AddSink(stderrSink)

func NewStderrSink

func NewStderrSink() *StderrSink

NewStderrSink creates a new sink that writes to stderr. The sink uses an 8KB buffer for optimal performance.

Example:

stderrSink := logmgr.NewStderrSink()
logmgr.AddSink(stderrSink)

func (*StderrSink) Close

func (ss *StderrSink) Close() error

Close flushes any remaining buffered data and closes the stderr sink. This method should be called during application shutdown to ensure all log entries are written.

func (*StderrSink) Write

func (ss *StderrSink) Write(entries []*Entry) error

Write writes a batch of log entries to stderr in JSON format. Each entry is written as a single line of JSON followed by a newline. This method is safe for concurrent use.

The method will skip any entries that fail to marshal to JSON rather than failing the entire batch.

type Worker

type Worker struct {
	// contains filtered or unexported fields
}

Worker processes log entries in background

func NewWorker

func NewWorker(id int, logger *Logger, batchSize int) *Worker

NewWorker creates a new background worker

func (*Worker) Run

func (w *Worker) Run()

Run starts the worker loop

func (*Worker) Stop

func (w *Worker) Stop()

Stop stops the worker
