Documentation
Index
Constants
This section is empty.
Variables
This section is empty.
Functions
func NewRootCmd
Types
type ExitCodeError
type ExitCodeError struct {
	Code int
}
ExitCodeError signals to Execute that the CLI should exit with the given non-zero code without printing any additional error text. The subcommand that returns this error is responsible for having already written any user-facing output.
func (*ExitCodeError) Error
func (e *ExitCodeError) Error() string
type LLMCallCounts added in v0.2.0
type LLMCallCounts struct {
	Small, Typical, Large int64
}
LLMCallCounts is a snapshot of how many LLM calls have flowed through each tier of an llmTiering. Reported at the end of an analyze run when --verbose is set.
func (LLMCallCounts) Total added in v0.2.0
func (c LLMCallCounts) Total() int64
Total returns the sum across all tiers.
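The snapshot is small enough to sketch in full; the field names come from the listing above, while the --verbose summary line printed here is an invented illustration of the kind of report the doc comment describes.

```go
package main

import "fmt"

// LLMCallCounts mirrors the documented struct: one counter per tier.
type LLMCallCounts struct {
	Small, Typical, Large int64
}

// Total returns the sum across all tiers, as documented.
func (c LLMCallCounts) Total() int64 {
	return c.Small + c.Typical + c.Large
}

func main() {
	c := LLMCallCounts{Small: 12, Typical: 5, Large: 1}
	// Hypothetical end-of-run summary line for --verbose.
	fmt.Printf("llm calls: %d small, %d typical, %d large (%d total)\n",
		c.Small, c.Typical, c.Large, c.Total())
}
```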
type ModelCapabilities added in v0.4.0
type ModelCapabilities struct {
	Provider string
	Model    string
	ToolUse  bool
	Vision   bool
	// MaxCompletionTokens is the per-model output cap. Zero means "use the
	// BifrostClient default". Set explicitly only for models whose API rejects
	// the default 32k request (e.g., Groq's llama-4-scout caps at 8192).
	MaxCompletionTokens int
	// MaxInputTokens is the per-model input cap including system prompt,
	// tool definitions, and accumulated chat history. Zero disables the
	// budget gate (used for self-hosted ollama/lmstudio "*" rows where the
	// user picks the model). The decorator gates sends at 0.9 × this value;
	// see .plans/2026-05-07-token-budget-design.md.
	MaxInputTokens int
}
ModelCapabilities describes which optional LLM features a (provider, model) pair supports. It is looked up via ResolveCapabilities at tier construction time and travels with the LLMClient, so analysis code can branch on capabilities without hard-coding provider names.
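The 0.9 × MaxInputTokens gate described in the field comment can be sketched as a standalone check. The function name and the estimated-token parameter are hypothetical; the zero-disables-the-gate rule is taken from the struct documentation above.

```go
package main

import "fmt"

// withinInputBudget illustrates the decorator's gate: a send is allowed
// when the estimated prompt size (system prompt + tool definitions +
// accumulated chat history) stays under 90% of the model's input cap.
// A cap of zero disables the gate, matching the "*" rows for
// self-hosted ollama/lmstudio models where the user picks the model.
func withinInputBudget(maxInputTokens, estimatedTokens int) bool {
	if maxInputTokens == 0 {
		return true // no budget gate configured
	}
	return float64(estimatedTokens) < 0.9*float64(maxInputTokens)
}

func main() {
	fmt.Println(withinInputBudget(0, 1_000_000))   // true: gate disabled
	fmt.Println(withinInputBudget(128000, 100000)) // true: under 115200
	fmt.Println(withinInputBudget(128000, 120000)) // false: over 115200
}
```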
func ResolveCapabilities added in v0.4.0
func ResolveCapabilities(provider, model string) (ModelCapabilities, bool)
ResolveCapabilities returns the capability flags for (provider, model). The bool is true when the provider is recognized; for known providers with an unknown model, it returns a zero-value ModelCapabilities and true so the caller can run with no optional features rather than failing the run.
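The three outcomes of that contract (known pair, known provider with an unknown model, unknown provider) can be sketched with a toy lookup table. The registry contents below are invented placeholders, not the package's real capability data.

```go
package main

import "fmt"

type ModelCapabilities struct {
	Provider            string
	Model               string
	ToolUse             bool
	Vision              bool
	MaxCompletionTokens int
	MaxInputTokens      int
}

// Toy registry: provider -> model -> capabilities (placeholder data).
var registry = map[string]map[string]ModelCapabilities{
	"exampleai": {
		"example-model": {Provider: "exampleai", Model: "example-model", ToolUse: true, Vision: true},
	},
}

// resolveCapabilities mimics the documented contract of ResolveCapabilities.
func resolveCapabilities(provider, model string) (ModelCapabilities, bool) {
	models, known := registry[provider]
	if !known {
		return ModelCapabilities{}, false // unknown provider
	}
	caps, ok := models[model]
	if !ok {
		// Known provider, unknown model: zero-value capabilities but
		// still true, so the run proceeds with no optional features
		// instead of failing outright.
		return ModelCapabilities{}, true
	}
	return caps, true
}

func main() {
	_, ok := resolveCapabilities("nobody", "x")
	fmt.Println(ok) // false
	caps, ok := resolveCapabilities("exampleai", "unlisted")
	fmt.Println(ok, caps.ToolUse) // true false
}
```

The design choice worth noting is the asymmetry: an unrecognized provider is a hard failure signal, while an unrecognized model of a known provider degrades gracefully to "no optional features".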
type Project added in v0.2.0
Project is one analyzed repo whose Hugo site lives under cacheDir/<Name>/site.
func ListAnalyzedProjects added in v0.2.0
ListAnalyzedProjects returns every immediate subdirectory of cacheDir that contains a `site` subdirectory. A non-existent cacheDir is treated as "no analyzed projects" (not an error) so the caller can produce one helpful message.
Source Files
- analyze.go
- auto_serve.go
- cachescan.go
- capabilities.go
- codefeatures_cache.go
- docsfeaturemap_cache.go
- doctor.go
- drift_cache.go
- featuremap_cache.go
- llm_client.go
- llm_counter.go
- picker.go
- precheck_hooks.go
- priority_counts.go
- render.go
- root.go
- screenshot_audit.go
- screenshots_cache.go
- serve.go
- server_runner.go
- tier_parse.go
- tier_validate.go
- whydocument_cache.go