Documentation ¶
Overview ¶
Package config provides configuration loading, merging, and persistence for the markedup CLI. Configuration cascades from global (~/.markedup.yaml) to local ({kbDir}/.markedup.yaml) to environment variables. The package also covers auto-detection of local model-serving endpoints, secure API key storage via OS-native keychains, and model download helpers for Ollama and HuggingFace.
Index ¶
- Constants
- func DeleteKey(name string) error
- func Exists(kbDir string) bool
- func GetKey(name string) (string, error)
- func GlobalPath() string
- func HFAvailable() bool
- func HFPull(ctx context.Context, model string, w io.Writer) error
- func HydrateKeys(cfg *Config) error
- func IsNuExtractModel(id string) bool
- func KeyNameForEndpoint(endpoint string) string
- func KeyringAvailable() bool
- func LocalPath(kbDir string) string
- func NormalizeEndpoint(endpoint string) string
- func OllamaAvailable() bool
- func OllamaPull(ctx context.Context, model string, w io.Writer) error
- func Save(cfg *Config, path string) error
- func StoreKey(name, value string) error
- type Config
- type Endpoint
- type EnrichConfig
- type FallbackConfig
- type MigrationResult
- func MigrateLegacyKeys(cfg *Config) MigrationResult
- type NuExtractConfig
- type ProbeResult
- func ProbeChat(ctx context.Context, endpoint, model, apiKey string) ProbeResult
- func ProbeEmbed(ctx context.Context, endpoint, model, apiKey string) ProbeResult
- func ProbeRerank(ctx context.Context, endpoint, model, apiKey string) ProbeResult
- func ProbeService(ctx context.Context, service, endpoint, model, apiKey string) ProbeResult
- type RerankConfig
- type ServiceConfig
Constants ¶
const ProbeTimeout = 10 * time.Second
ProbeTimeout is the per-request deadline for Test Connection probes. Kept generous because cold-loaded local models can take several seconds to respond to the first request.
Variables ¶
This section is empty.
Functions ¶
func DeleteKey ¶
func DeleteKey(name string) error
DeleteKey removes a named key from the OS keychain. Returns nil if the key did not exist.
func Exists ¶
func Exists(kbDir string) bool
Exists reports whether a local config file ({kbDir}/.markedup.yaml) is present in kbDir.
func GetKey ¶
func GetKey(name string) (string, error)
GetKey retrieves a named key from the OS keychain. Returns ("", nil) if the key is not found.
func GlobalPath ¶
func GlobalPath() string
GlobalPath returns the path to the global config file (~/.markedup.yaml).
func HFAvailable ¶
func HFAvailable() bool
HFAvailable reports whether the hf CLI binary is in PATH.
func HFPull ¶
func HFPull(ctx context.Context, model string, w io.Writer) error
HFPull runs "hf download <model>" and streams output to w. The process is terminated if ctx is cancelled.
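A minimal download sketch, assuming snippet-level imports (context, errors, log, os, time). The model id is a placeholder and the Ollama-before-hf preference is just one possible policy:

ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
defer cancel()
var err error
switch {
case config.OllamaAvailable():
	err = config.OllamaPull(ctx, "some/model", os.Stdout) // placeholder model id
case config.HFAvailable():
	err = config.HFPull(ctx, "some/model", os.Stdout)
default:
	err = errors.New("no downloader CLI found in PATH")
}
if err != nil {
	log.Fatal(err)
}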
func HydrateKeys ¶
func HydrateKeys(cfg *Config) error
HydrateKeys runs the one-time-per-process legacy-key migration and then fills any empty APIKey fields in cfg from the OS keyring, looked up by endpoint. It is safe to call multiple times in one process: migration is gated by sync.Once so it runs at most once, while hydration runs every call (different cfgs may have different endpoints).
Subcommands that make outbound API calls (enrich, serve, embed, tui) or that store credentials (setup) must call this after config.Load(). Local-only subcommands (check, init, search, explore, show, export) must not.
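A sketch of the call order for a networked subcommand. Load is referenced above but not part of this section's index, so the signature used here is an assumption:

cfg, err := config.Load(kbDir) // assumed signature; Load is not documented in this section
if err != nil {
	log.Fatal(err)
}
if err := config.HydrateKeys(cfg); err != nil { // migrate legacy keys, then fill APIKey fields
	log.Fatal(err)
}
// cfg.LLM.APIKey etc. are now populated from the OS keyring where available.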
func IsNuExtractModel ¶
func IsNuExtractModel(id string) bool
IsNuExtractModel reports whether a model id names a NuExtract-2.0 model. Matches the canonical form `nuextract-2.0-*` (anchored on the trailing dash) with any `publisher/` prefix stripped. Case-insensitive. Examples matched: "nuextract-2.0-8b", "numind/NuExtract-2.0-2B", "someorg/nuextract-2.0-4b-gguf". Examples rejected: "nuextract-2.0", "nuextract-2.0foo", "nuextract-1.5-*", "nuextract2.0-8b".
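The examples above, expressed as calls:

fmt.Println(config.IsNuExtractModel("numind/NuExtract-2.0-2B"))       // true: publisher prefix stripped, case-insensitive
fmt.Println(config.IsNuExtractModel("someorg/nuextract-2.0-4b-gguf")) // true
fmt.Println(config.IsNuExtractModel("nuextract-2.0"))                 // false: no trailing dash
fmt.Println(config.IsNuExtractModel("nuextract-1.5-8b"))              // false: wrong version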
func KeyNameForEndpoint ¶
func KeyNameForEndpoint(endpoint string) string
KeyNameForEndpoint returns the keyring entry name used to store the API key for the given endpoint. Format: "apikey-<16 hex chars of sha256(normalized)>". Returns "" if the endpoint is empty after normalization.
func KeyringAvailable ¶
func KeyringAvailable() bool
KeyringAvailable reports whether the OS keyring can be opened successfully. It returns false in headless/CI environments where no keychain backend is available.
func LocalPath ¶
func LocalPath(kbDir string) string
LocalPath returns the path to the local config file ({kbDir}/.markedup.yaml).
func NormalizeEndpoint ¶
func NormalizeEndpoint(endpoint string) string
NormalizeEndpoint canonicalizes an endpoint URL for keyring keying. Lowercases the host, strips default ports (80/443), and removes trailing slashes from the path so trivially equivalent URLs hash identically. If the input cannot be parsed as a URL, the trimmed lowercase string is returned as a stable fallback.
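A sketch of what normalization buys: trivially equivalent spellings resolve to the same keyring entry name:

a := config.KeyNameForEndpoint("http://Localhost:80/v1/")
b := config.KeyNameForEndpoint("http://localhost/v1")
fmt.Println(a == b)                          // true: host lowercased, default port stripped, trailing slash removed
fmt.Println(strings.HasPrefix(a, "apikey-")) // entry names look like "apikey-<16 hex chars>"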
func OllamaAvailable ¶
func OllamaAvailable() bool
OllamaAvailable reports whether the ollama CLI binary is in PATH.
func OllamaPull ¶
func OllamaPull(ctx context.Context, model string, w io.Writer) error
OllamaPull runs "ollama pull <model>" and streams output to w. The process is terminated if ctx is cancelled.
func Save ¶
func Save(cfg *Config, path string) error
Save writes cfg to path as YAML. APIKey fields are tagged yaml:"-" and are never written to disk.
func StoreKey ¶
func StoreKey(name, value string) error
StoreKey stores a named key in the OS keychain.
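A keychain round-trip sketch tying the key helpers together; guarding on KeyringAvailable keeps headless CI runs from failing, and the endpoint and secret values are placeholders:

if config.KeyringAvailable() {
	name := config.KeyNameForEndpoint("https://api.example.com/v1") // illustrative endpoint
	if err := config.StoreKey(name, "sk-placeholder"); err != nil {
		log.Fatal(err)
	}
	key, err := config.GetKey(name) // ("", nil) if the key is not found
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(key != "")
	_ = config.DeleteKey(name) // nil even if the key did not exist
}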
Types ¶
type Config ¶
type Config struct {
Embed ServiceConfig `yaml:"embed"`
Rerank RerankConfig `yaml:"rerank"`
LLM ServiceConfig `yaml:"llm"`
Triplex ServiceConfig `yaml:"triplex,omitempty"`
NuExtract NuExtractConfig `yaml:"nuextract,omitempty"`
Enrich EnrichConfig `yaml:"enrich,omitempty"`
// Format selects the Tier 2 extractor: "triplex", "nuextract", "generic", or "" (auto).
Format string `yaml:"format,omitempty"`
}
Config holds all service configurations for markedup.
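A sketch of building and persisting a config; the endpoint and model values are illustrative only. Note that APIKey stays in memory (yaml:"-"):

cfg := &config.Config{
	Embed:  config.ServiceConfig{Endpoint: "http://localhost:11434", Model: "nomic-embed-text"},
	LLM:    config.ServiceConfig{Endpoint: "http://localhost:1234", Model: "qwen2.5-7b-instruct"},
	Format: "nuextract", // or "triplex", "generic", "" (auto)
}
cfg.LLM.APIKey = "sk-placeholder" // in-memory only; yaml:"-" keeps it out of the file
if err := config.Save(cfg, config.GlobalPath()); err != nil {
	log.Fatal(err)
}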
type Endpoint ¶
type Endpoint struct {
Name string // "Ollama", "LM Studio", "TEI", "vLLM", "llama.cpp"
URL string // e.g. "http://localhost:11434"
Type string // "embed", "llm", "rerank", "multi"
Healthy bool
Models []string // populated when the service exposes a models API
// Formats lists Tier 2 extractors this endpoint can serve (e.g. "nuextract"
// when a NuExtract-2.0 model id is present in Models).
Formats []string
}
Endpoint describes a discovered local model-serving endpoint.
func DetectLocal ¶
DetectLocal probes well-known localhost ports concurrently and returns all discovered endpoints sorted by name. It never returns an error; unreachable services are silently skipped.
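A consumption sketch; DetectLocal's signature is not shown above, so the ctx parameter here is an assumption:

for _, ep := range config.DetectLocal(ctx) { // assumed signature: DetectLocal(ctx) []Endpoint
	if !ep.Healthy {
		continue
	}
	fmt.Printf("%s at %s (%s): %d models\n", ep.Name, ep.URL, ep.Type, len(ep.Models))
}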
type EnrichConfig ¶
type EnrichConfig struct {
Fallback FallbackConfig `yaml:"fallback,omitempty"`
}
EnrichConfig holds enrich-pipeline-wide settings. Currently a single nested Fallback block; left as a struct (not inlined) so future enrich settings — Parallel, Verbose, FailOnError — can land without churning the top-level config shape.
type FallbackConfig ¶
type FallbackConfig struct {
// Enabled is a tri-state pointer: nil = "use the local/cloud default",
// non-nil = explicit user choice. Local endpoints default-on; cloud
// endpoints default-off (cost guard).
Enabled *bool `yaml:"enabled,omitempty"`
// Endpoint overrides cfg.LLM.Endpoint for fallback only. Empty = inherit.
Endpoint string `yaml:"endpoint,omitempty"`
// Model overrides cfg.LLM.Model for fallback only. Empty = inherit.
Model string `yaml:"model,omitempty"`
// APIKey overrides cfg.LLM.APIKey for fallback only.
APIKey string `yaml:"-"` // NEVER written to YAML
// MaxFiles caps the number of files dispatched per enrich run. Nil = use
// the local/cloud default (50 cloud, unbounded local). Explicit 0 = unbounded.
MaxFiles *int `yaml:"max_files,omitempty"`
// Parallel enables concurrent fallback LLM calls (#141). Default false —
// sequential execution remains the safe default. Opt-in for users with
// capable hardware or cloud endpoints with high RPS.
Parallel bool `yaml:"parallel,omitempty"`
// ParallelWorkers caps the worker pool when Parallel=true. <=0 falls back
// to runtime.NumCPU() at dispatch time.
ParallelWorkers int `yaml:"parallel_workers,omitempty"`
}
FallbackConfig configures the LLM fallback dispatcher (issue #140).
Defaults are resolved at call-time, not parse-time: an empty Endpoint inherits cfg.LLM.Endpoint (the same env-var-backed reasoning endpoint the markedup_reason MCP tool uses), and Enabled / MaxFiles defaults depend on whether the resolved endpoint is local or cloud. See enrich.IsLocalEndpoint and the resolution path in internal/cli/enrich.go.
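A sketch of that call-time resolution; fallbackEnabled and maxFiles are hypothetical helpers for illustration, not exported API (the real logic lives in internal/cli/enrich.go):

// fallbackEnabled mirrors the tri-state default described above.
func fallbackEnabled(fb config.FallbackConfig, isLocal bool) bool {
	if fb.Enabled != nil {
		return *fb.Enabled // explicit user choice always wins
	}
	return isLocal // default-on for local endpoints, default-off for cloud
}

// maxFiles mirrors the MaxFiles default: 50 for cloud, unbounded for local;
// an explicit 0 means unbounded.
func maxFiles(fb config.FallbackConfig, isLocal bool) int {
	if fb.MaxFiles != nil {
		return *fb.MaxFiles // 0 = unbounded
	}
	if isLocal {
		return 0 // unbounded
	}
	return 50 // cloud cost guard
}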
type MigrationResult ¶
type MigrationResult struct {
Migrated []string // legacy names successfully copied + removed
Conflicts []string // legacy names left in place because the endpoint-keyed entry already held a different value
Skipped []string // legacy names with no endpoint configured (deferred)
}
MigrationResult summarizes what MigrateLegacyKeys did, primarily for tests and logging. Counts are zero on a no-op (no legacy entries present).
func MigrateLegacyKeys ¶
func MigrateLegacyKeys(cfg *Config) MigrationResult
MigrateLegacyKeys consolidates legacy per-service keyring entries ("embed-api-key", "llm-api-key", etc.) into the new endpoint-keyed format.
Behavior:
- For each legacy entry that exists, look up the configured endpoint for that service. If the endpoint is unknown, leave the legacy entry alone (a later run, with a populated config, can migrate it).
- If no endpoint-keyed entry exists yet, write the legacy value there and delete the legacy entry.
- If an endpoint-keyed entry exists with the SAME value, the legacy entry is a duplicate; delete it.
- If an endpoint-keyed entry exists with a DIFFERENT value, this is a conflict. Preserve BOTH entries (do not delete the legacy one) and emit a warning so the user can reconcile via the setup wizard. Idempotent on re-run: same situation -> same warning, no destructive action.
Idempotent: repeated invocations after a successful migration are no-ops because the legacy entries no longer exist; conflict cases stay stable too.
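A logging sketch around the result (direct calls are mainly useful in tests; HydrateKeys already runs the migration once per process):

res := config.MigrateLegacyKeys(cfg)
if len(res.Conflicts) > 0 {
	log.Printf("keyring conflicts left in place: %v", res.Conflicts)
}
log.Printf("migrated=%d deferred=%d", len(res.Migrated), len(res.Skipped))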
type NuExtractConfig ¶
type NuExtractConfig struct {
ServiceConfig `yaml:",inline"`
Mode string `yaml:"mode,omitempty"`
Transport string `yaml:"transport,omitempty"`
Predicates []string `yaml:"predicates,omitempty"`
EntityTypes []string `yaml:"entity_types,omitempty"`
}
NuExtractConfig configures the NuExtract-2.0 template-based extractor. Mode controls the two-pass runner: "parallel" (default) fires entities and relations calls simultaneously; "single" sends one combined template. Transport selects request shape: "native" uses chat_template_kwargs for vLLM/HF; "manual" renders the prompt client-side for GGUF runtimes. Empty Transport auto-detects from the endpoint URL.
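An illustrative configuration for a GGUF runtime; the endpoint and model values are placeholders:

nu := config.NuExtractConfig{
	ServiceConfig: config.ServiceConfig{
		Endpoint: "http://localhost:8080", // illustrative llama.cpp endpoint
		Model:    "nuextract-2.0-8b",
	},
	Mode:      "single", // one combined template instead of the default two-pass "parallel"
	Transport: "manual", // client-side prompt rendering for GGUF runtimes
}
_ = nu // e.g. assign to cfg.NuExtract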
type ProbeResult ¶
type ProbeResult struct {
	OK     bool
	Detail string
}
ProbeResult is returned by every Probe* function. OK reflects success (HTTP 200 + a structurally valid response). Detail provides a short human-readable summary suitable for inline display in the wizard (either an error reason or a confirmation snippet).
func ProbeChat ¶
func ProbeChat(ctx context.Context, endpoint, model, apiKey string) ProbeResult
ProbeChat sends a tiny chat-completion to <endpoint>/v1/chat/completions and verifies the model returns a non-empty response. Used for LLM, Triplex, and NuExtract endpoints (all OpenAI-compatible chat).
func ProbeEmbed ¶
func ProbeEmbed(ctx context.Context, endpoint, model, apiKey string) ProbeResult
ProbeEmbed sends a tiny embedding request to <endpoint>/v1/embeddings and verifies a non-empty vector is returned.
func ProbeRerank ¶
func ProbeRerank(ctx context.Context, endpoint, model, apiKey string) ProbeResult
ProbeRerank sends a tiny rerank request to <endpoint>/v1/rerank and verifies a 200 response. The shape of rerank responses varies between providers, so we accept any 200 with a parseable JSON body.
func ProbeService ¶
func ProbeService(ctx context.Context, service, endpoint, model, apiKey string) ProbeResult
ProbeService dispatches to the appropriate probe function based on service name. Recognized names: "embed", "llm", "triplex", "nuextract", "rerank". Unknown service names return a failed result.
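A wizard-style sketch that probes each configured service; cfg and ctx are assumed to be in scope:

checks := []struct {
	service string
	sc      config.ServiceConfig
}{
	{"embed", cfg.Embed},
	{"rerank", cfg.Rerank.ServiceConfig},
	{"llm", cfg.LLM},
}
for _, c := range checks {
	res := config.ProbeService(ctx, c.service, c.sc.Endpoint, c.sc.Model, c.sc.APIKey)
	if res.OK {
		fmt.Printf("%s: ok (%s)\n", c.service, res.Detail)
	} else {
		fmt.Printf("%s: FAILED: %s\n", c.service, res.Detail)
	}
}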
type RerankConfig ¶
type RerankConfig struct {
ServiceConfig `yaml:",inline"`
Format string `yaml:"format,omitempty"`
}
RerankConfig extends ServiceConfig with a reranker-specific format field.
type ServiceConfig ¶
type ServiceConfig struct {
Endpoint string `yaml:"endpoint"`
Model string `yaml:"model"`
APIKey string `yaml:"-"` // NEVER written to YAML
}
ServiceConfig holds endpoint, model, and authentication for a single service.