Documentation ¶
Overview ¶
Package connwatch provides service-level health monitoring with exponential backoff for external dependencies (Home Assistant, Ollama, etc.).
This is distinct from httpkit's transport-level retry, which handles sub-second transient dial errors (macOS ARP races, issue #53). connwatch handles multi-second to multi-minute outages: service restarts, network partitions, and macOS Local Network permission delays (issue #96).
Each Watcher probes a single service in two phases:
- Startup: exponential backoff (2s, 4s, 8s, ... capped at 60s)
- Background: periodic polling (every 60s) with state-transition callbacks
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type BackoffConfig ¶
type BackoffConfig struct {
// InitialDelay is the delay before the first retry (default: 2s).
InitialDelay time.Duration
// MaxDelay is the ceiling for backoff growth (default: 60s).
MaxDelay time.Duration
// Multiplier scales the delay after each retry (default: 2.0).
Multiplier float64
// MaxRetries is the maximum number of startup probe attempts (default: 10).
MaxRetries int
// PollInterval is the background check interval after startup
// retries are exhausted or after a successful connection (default: 60s).
PollInterval time.Duration
// ProbeTimeout limits how long each individual probe call may take (default: 10s).
ProbeTimeout time.Duration
}
BackoffConfig controls the exponential backoff behavior.
func DefaultBackoffConfig ¶
func DefaultBackoffConfig() BackoffConfig
DefaultBackoffConfig returns the backoff schedule from issue #96: 2s, 4s, 8s, 16s, 32s, 60s (capped), with 10 startup retries and 60-second background polling.
type Manager ¶
type Manager struct {
// contains filtered or unexported fields
}
Manager coordinates multiple service watchers.
func NewManager ¶
NewManager creates a connection watch manager.
func (*Manager) Status ¶
func (m *Manager) Status() map[string]ServiceStatus
Status returns the health status of all watched services.
func (*Manager) Stop ¶
func (m *Manager) Stop()
Stop shuts down all watchers and waits for their goroutines to exit.
func (*Manager) Watch ¶
func (m *Manager) Watch(ctx context.Context, cfg WatcherConfig) *Watcher
Watch registers and starts a new service watcher. The watcher runs in a background goroutine until ctx is cancelled or Stop is called.
Panics if Name is empty or Probe is nil; these are programming errors that should be caught during development, not silently ignored at runtime. Zero-value BackoffConfig fields are replaced with defaults.
type ServiceStatus ¶
type ServiceStatus struct {
Name string `json:"name"`
Ready bool `json:"ready"`
LastCheck time.Time `json:"last_check"`
LastError string `json:"last_error,omitempty"`
}
ServiceStatus is the health status of a watched service, suitable for JSON serialization in health endpoints.
type Watcher ¶
type Watcher struct {
// contains filtered or unexported fields
}
Watcher monitors a single service's health.
func (*Watcher) Status ¶
func (w *Watcher) Status() ServiceStatus
Status returns the current health status.
type WatcherConfig ¶
type WatcherConfig struct {
// Name is a human-readable identifier for logging (e.g., "homeassistant").
Name string
// Probe checks service health. Must be safe for concurrent use.
Probe ProbeFunc
// Backoff controls retry timing. Use DefaultBackoffConfig() as a starting point.
Backoff BackoffConfig
// OnReady is called when the service transitions from not-ready to ready.
// Called in a separate goroutine; must not block indefinitely. Optional.
OnReady func()
// OnDown is called when the service transitions from ready to not-ready.
// Called in a separate goroutine; must not block indefinitely. Optional.
OnDown func(err error)
// Logger for structured logging. Uses slog.Default() if nil.
Logger *slog.Logger
}
WatcherConfig configures a single service watcher.