Documentation ¶
Overview ¶
Package config loads Errata settings from environment variables and .env.
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func ApplyRecipe ¶
ApplyRecipe overlays recipe settings onto cfg.
When a section was declared in the parsed recipe (tracked by SectionsPresent), ALL fields in that section are written atomically — even zero values — so that declaring "## Constraints" with only max_steps clears the default timeout.
When a section is absent (or SectionsPresent is nil, i.e. programmatic recipes), legacy field-by-field merge is used: only non-zero recipe fields override cfg.
func MaskKey ¶
MaskKey returns a masked version of an API key for display. Keys >= 12 chars show first 5 and last 4 chars; shorter keys become "****".
func ProviderConfigured ¶
ProviderConfigured returns true if all required env vars for the named provider have non-empty values in the current process environment.
Types ¶
type Config ¶
type Config struct {
	AnthropicAPIKey  string
	OpenAIAPIKey     string
	GoogleAPIKey     string
	OpenRouterAPIKey string

	// LiteLLMBaseURL is the base URL for a LiteLLM proxy (e.g. "http://localhost:4000/v1").
	// Empty disables the LiteLLM adapter.
	LiteLLMBaseURL string
	// LiteLLMAPIKey is optional; many local LiteLLM deployments don't require auth.
	LiteLLMAPIKey string

	// BedrockRegion is the AWS region for Amazon Bedrock (e.g. "us-east-1").
	// Uses the AWS SDK default credential chain (AWS_ACCESS_KEY_ID + AWS_SECRET_ACCESS_KEY,
	// AWS_PROFILE, or IAM role). Empty disables the Bedrock adapter.
	BedrockRegion string

	// AzureOpenAIAPIKey is the API key for Azure OpenAI Service.
	AzureOpenAIAPIKey string
	// AzureOpenAIEndpoint is the Azure resource endpoint (e.g. "https://myresource.openai.azure.com").
	// Both key and endpoint must be set to enable the Azure OpenAI adapter.
	AzureOpenAIEndpoint string
	// AzureOpenAIAPIVersion is the Azure OpenAI API version (default "2024-10-21").
	AzureOpenAIAPIVersion string

	// VertexAIProject is the GCP project ID for Vertex AI.
	// Uses Application Default Credentials (gcloud auth or GOOGLE_APPLICATION_CREDENTIALS).
	VertexAIProject string
	// VertexAILocation is the GCP region for Vertex AI (e.g. "us-central1").
	// Both project and location must be set to enable the Vertex AI adapter.
	VertexAILocation string

	// ActiveModels is the explicit model list (set via recipe ## Models).
	// Empty means auto-detect one model per available provider.
	// OpenRouter models use "provider/model" format (e.g. "anthropic/claude-sonnet-4-6").
	// LiteLLM models use "litellm/<model>" format (e.g. "litellm/claude-sonnet-4-6").
	ActiveModels []string

	DefaultAnthropicModel string
	DefaultOpenAIModel    string
	DefaultGeminiModel    string
	DefaultBedrockModel   string
	DefaultAzureModel     string
	DefaultVertexModel    string

	// DataDir is the root directory for all persistent data files.
	// Default is "data"; override via ERRATA_DATA_DIR env var.
	DataDir string

	// MCPServers is the serialised MCP server config (set via recipe ## MCP Servers).
	// Format: "name:command arg1 arg2,name2:command2"
	// Empty disables MCP entirely.
	MCPServers string

	// SystemPromptExtra is appended after the built-in tool guidance in every
	// adapter's system prompt. Use for project-specific context, coding conventions,
	// or domain knowledge that should influence all models.
	// Set via recipe ## System Prompt.
	SystemPromptExtra string

	// SubagentModel is the model ID used when spawning sub-agents via spawn_agent.
	// Empty means use the same model as the parent. Set via recipe ## Sub-Agent model:.
	SubagentModel string
	// SubagentMaxDepth is the maximum spawn_agent recursion depth.
	// 1 (default) means sub-agents cannot spawn further sub-agents.
	// 0 disables spawn_agent entirely. Set via recipe ## Sub-Agent max_depth:.
	SubagentMaxDepth int

	// MaxSteps is the maximum number of agentic tool-use turns per adapter.
	// 0 means unlimited. Set via recipe ## Constraints max_steps:.
	MaxSteps int
	// AgentTimeout is the per-adapter wall-clock timeout for a single RunAgent call.
	// 0 means use the runner's built-in default (5 minutes).
	// Set via recipe ## Constraints timeout:.
	AgentTimeout time.Duration

	// CompactThreshold is the context fill fraction that triggers auto-compact.
	// 0 means use the runner's built-in default (0.80).
	// Set via recipe ## Context compact_threshold:.
	CompactThreshold float64
	// MaxHistoryTurns is the maximum number of conversation turns kept per model.
	// Default is 20. Set via recipe ## Context max_history_turns:.
	MaxHistoryTurns int

	// Seed is the pseudorandom seed passed to model APIs for reproducible sampling.
	// nil means not set (provider default); non-nil is passed through even if 0.
	// Set via recipe ## Model Parameters seed: or /seed command.
	Seed *int64
}
Config holds all runtime settings.
func Load ¶
func Load() Config
Load reads .env (if present) then environment variables and returns a Config.
func (Config) ResolvedActiveModels ¶
ResolvedActiveModels returns the explicit model list, or one default per provider whose API key is present.
type ProviderEnv ¶
type ProviderEnv struct {
	Name         string   // shorthand for /keys (e.g. "anthropic")
	EnvVars      []string // env vars (first is the primary key)
	DefaultModel string
}
ProviderEnv describes a provider's required environment variables.
func ProviderEnvInfo ¶
func ProviderEnvInfo() []ProviderEnv
ProviderEnvInfo returns the static list of supported providers and their env vars.