Documentation
Index
Constants
This section is empty.
Variables
This section is empty.
Functions
func BuildPrompt
func BuildPrompt(intent string, snap *context.Snapshot, projectHistory []history.Entry, globalHistory []history.Entry) string
BuildPrompt constructs the user prompt from intent, context snapshot, and history.
func SystemPrompt
func SystemPrompt() string
SystemPrompt returns the system prompt that instructs the LLM how to respond.
Types
type Candidate
type Candidate struct {
	Cmd        string  `json:"cmd"`
	Reason     string  `json:"reason"`
	Confidence float64 `json:"confidence"`
	Risk       string  `json:"risk"` // "safe", "moderate", "dangerous"
}
Candidate represents a single command suggestion from the LLM.
type Client
type Client struct {
	// contains filtered or unexported fields
}
Client wraps the Anthropic API for command suggestion.
func (*Client) Generate
func (c *Client) Generate(snap context.Snapshot, intent string, projectHistory []history.Entry, globalHistory []history.Entry) (*Response, error)
Generate calls the fast model with tools, allowing the LLM to explore the project before producing candidates. If the top candidate's confidence is below the escalation threshold, it escalates to the strong model (single-shot).
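The escalation decision described above can be sketched as follows. The threshold value and the helper name are assumptions for illustration; the real package keeps both unexported and its actual logic may differ:

```go
package main

import "fmt"

// Candidate is trimmed to the fields this sketch needs.
type Candidate struct {
	Cmd        string
	Confidence float64
}

// escalationThreshold is an assumed value, not taken from the package.
const escalationThreshold = 0.6

// needsEscalation reports whether the fast model's best candidate falls
// below the threshold, in which case Generate would retry once with the
// strong model. Hypothetical helper; not part of the package's API.
func needsEscalation(cands []Candidate) bool {
	if len(cands) == 0 {
		return true // nothing usable from the fast model: escalate
	}
	top := cands[0]
	for _, c := range cands[1:] {
		if c.Confidence > top.Confidence {
			top = c
		}
	}
	return top.Confidence < escalationThreshold
}

func main() {
	fast := []Candidate{
		{Cmd: "ls", Confidence: 0.40},
		{Cmd: "ls -la", Confidence: 0.55},
	}
	fmt.Println(needsEscalation(fast)) // true: best confidence 0.55 < 0.6
}
```

Keeping the check on the top candidate only (rather than, say, an average) matches the doc comment: it is the top candidate's confidence that triggers escalation.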
type Response
type Response struct {
	Candidates []Candidate `json:"candidates"`
	Rounds     []ToolRound `json:"-"` // tool-use metadata, omitted from JSON output
}
Response holds the list of candidate commands returned by the LLM.