Documentation
Overview ¶
Package loop provides a provider-agnostic conversation loop for LLM-powered agents. It handles message exchange, tool call dispatch, and streaming — with no HTTP, persistence, or UI concerns.
Class: primitive
UseWhen: Any LLM interaction in libraries or HTTP services. For CLI agents, use axon-hand instead, which provides axon-loop automatically. Always paired with axon-talk and axon-tool.
Index ¶
- Constants
- func Stream(ctx context.Context, cfg RunConfig) <-chan Event
- func WithRetry(client talk.LLMClient, opts ...RetryOption) talk.LLMClient
- type Callbacks
- type ContextStrategy
- type ContextStrategyFunc
- type DoneEvent
- type Event
- type LLMClient
- type Message
- type Request
- type Response
- type Result
- type RetryOption
- type Role
- type RunConfig
- type ToolCall
- type ToolUseEvent
- type TrimEvent
- type Usage
Examples ¶
Constants ¶
const (
	RoleSystem    = talk.RoleSystem
	RoleUser      = talk.RoleUser
	RoleAssistant = talk.RoleAssistant
	RoleTool      = talk.RoleTool
)
Variables ¶
This section is empty.
Functions ¶
func Stream ¶ added in v0.4.0
func Stream(ctx context.Context, cfg RunConfig) <-chan Event
Stream executes a conversation loop and returns a channel of Events. The channel is closed when the loop completes or fails. The caller reads events without building a callback-to-channel bridge.
Any Callbacks in cfg are ignored — Stream bridges all events to the returned channel.
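Example ¶

A sketch of consuming the channel. The client is left nil here for brevity; in practice it would be an axon-talk adapter, and the model name is illustrative.

```go
package main

import (
	"context"
	"fmt"

	loop "github.com/benaskins/axon-loop"
)

func main() {
	var client loop.LLMClient // in practice, an axon-talk adapter

	cfg := loop.RunConfig{
		Client: client,
		Request: &loop.Request{
			Model: "claude-sonnet-4-20250514",
			Messages: []loop.Message{
				{Role: loop.RoleUser, Content: "Hello"},
			},
		},
	}

	// Range until Stream closes the channel on completion or failure.
	for ev := range loop.Stream(context.Background(), cfg) {
		switch {
		case ev.Err != nil:
			fmt.Println("error:", ev.Err)
		case ev.Done != nil:
			fmt.Println("\ndone")
		case ev.Token != "":
			fmt.Print(ev.Token)
		}
	}
}
```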
func WithRetry ¶ added in v0.7.1
func WithRetry(client talk.LLMClient, opts ...RetryOption) talk.LLMClient
WithRetry wraps an LLMClient with retry logic. Requests that fail with retryable errors are retried with jittered exponential backoff.
Streaming safety: if the callback fn has been invoked (tokens delivered), the request is not retried to avoid duplicate content.
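Example ¶

A sketch of decorating a client with retry; the axon-talk import path and the option values are assumptions, not package-mandated defaults.

```go
package main

import (
	"time"

	loop "github.com/benaskins/axon-loop"
	talk "github.com/benaskins/axon-talk" // import path assumed
)

func main() {
	var base talk.LLMClient // e.g. an axon-talk adapter such as talk.NewOllamaClient

	// Retry up to 5 times, backing off from 500ms toward a 10s cap
	// (jittered exponential backoff is applied between attempts).
	client := loop.WithRetry(base,
		loop.WithMaxRetries(5),
		loop.WithBackoff(500*time.Millisecond, 10*time.Second),
	)

	_ = client // pass as RunConfig.Client
}
```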
Types ¶
type Callbacks ¶
type Callbacks struct {
	OnToken    func(token string)
	OnThinking func(token string)
	OnToolUse  func(name string, args map[string]any)
	OnTrim     func(dropped []Message) // Called when context trimming drops messages.
	OnDone     func(durationMs int64)
}
Callbacks receives streaming events from the conversation loop. All fields are optional — nil callbacks are skipped.
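Example ¶

A sketch wiring up a subset of the callbacks; fields left nil are skipped.

```go
package main

import (
	"fmt"
	"log"

	loop "github.com/benaskins/axon-loop"
)

func main() {
	cbs := loop.Callbacks{
		OnToken: func(token string) { fmt.Print(token) },
		OnTrim: func(dropped []loop.Message) {
			log.Printf("context trim dropped %d messages", len(dropped))
		},
		OnDone: func(durationMs int64) {
			log.Printf("loop finished in %dms", durationMs)
		},
		// OnThinking and OnToolUse left nil — they are skipped.
	}
	_ = cbs // embed in a RunConfig
}
```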
type ContextStrategy ¶ added in v0.6.1
type ContextStrategy interface {
	// Trim returns the messages to send to the LLM.
	// The input slice must not be modified.
	Trim(messages []Message) []Message
}
ContextStrategy controls how conversation history is trimmed before each LLM request. Implementations receive the full message slice and return the subset to send. The system prompt (if present as the first message) should generally be preserved.
func SlidingWindow ¶ added in v0.6.1
func SlidingWindow(n int) ContextStrategy
SlidingWindow keeps the system prompt plus the last n conversation messages. Useful when you want a fixed-size history regardless of token count.
Example ¶
A sliding window keeps the system prompt plus the last n messages, discarding older history as the conversation grows.
package main

import (
	"fmt"

	loop "github.com/benaskins/axon-loop"
)

func main() {
	strategy := loop.SlidingWindow(10)
	messages := []loop.Message{
		{Role: loop.RoleSystem, Content: "You are a helpful assistant."},
		{Role: loop.RoleUser, Content: "Hello"},
		{Role: loop.RoleAssistant, Content: "Hi there!"},
		{Role: loop.RoleUser, Content: "What is Go?"},
	}
	trimmed := strategy.Trim(messages)
	fmt.Println(trimmed[0].Role, "message preserved")
	fmt.Println(len(trimmed)-1, "conversation messages kept")
}

Output:

system message preserved
3 conversation messages kept
func TokenBudget ¶ added in v0.6.1
func TokenBudget(budget int) ContextStrategy
TokenBudget trims conversation history to fit within an estimated token budget. The system prompt is always preserved; older messages are dropped first. This is the strategy used when Request.MaxTokens is set without an explicit ContextStrategy.
Example ¶
A token budget trims older messages so the total estimated token count stays within the budget. The system prompt is always preserved.
package main

import (
	"fmt"

	loop "github.com/benaskins/axon-loop"
)

func main() {
	strategy := loop.TokenBudget(4096)
	messages := []loop.Message{
		{Role: loop.RoleSystem, Content: "You are a helpful assistant."},
		{Role: loop.RoleUser, Content: "Summarise this long document..."},
	}
	trimmed := strategy.Trim(messages)
	fmt.Println(len(trimmed), "messages after trim")
}

Output:

2 messages after trim
func TokenBudgetWithMinWindow ¶ added in v0.6.1
func TokenBudgetWithMinWindow(budget, minMessages int) ContextStrategy
TokenBudgetWithMinWindow trims to a token budget but guarantees at least minMessages conversation messages are kept (plus the system prompt). Whichever approach retains more messages wins. This prevents a large system prompt from starving the conversation history.
type ContextStrategyFunc ¶ added in v0.6.1
type ContextStrategyFunc func(messages []Message) []Message
ContextStrategyFunc adapts a plain function into a ContextStrategy.
func (ContextStrategyFunc) Trim ¶ added in v0.6.1
func (f ContextStrategyFunc) Trim(messages []Message) []Message
type Event ¶ added in v0.4.0
type Event struct {
	// Exactly one of these is set per event.
	Token    string        // incremental content token
	Thinking string        // incremental thinking token
	ToolUse  *ToolUseEvent // a tool was invoked
	Trim     *TrimEvent    // context was trimmed
	Done     *DoneEvent    // the loop completed
	Err      error         // the loop failed
}
Event is a streaming event emitted by Stream. Consumers receive these on a channel instead of registering callbacks.
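Because exactly one field is set per event, a switch over nil and zero checks dispatches cleanly. A sketch of a handler covering every field:

```go
package main

import (
	"fmt"
	"log"

	loop "github.com/benaskins/axon-loop"
)

// handle routes one event to the appropriate action.
// Exactly one branch matches because exactly one field is set.
func handle(ev loop.Event) {
	switch {
	case ev.Err != nil:
		log.Println("loop failed:", ev.Err)
	case ev.Done != nil:
		log.Println("loop completed")
	case ev.ToolUse != nil:
		log.Println("tool invoked")
	case ev.Trim != nil:
		log.Println("context trimmed")
	case ev.Thinking != "":
		// Thinking tokens — often rendered dimmed, or dropped.
	case ev.Token != "":
		fmt.Print(ev.Token)
	}
}

func main() {
	handle(loop.Event{Token: "hi"})
}
```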
type LLMClient ¶ added in v0.2.0
type LLMClient = talk.LLMClient
Type aliases re-exported from axon-talk for backwards compatibility within this package. Internal code uses these directly; external consumers should migrate to axon-talk types.
type Message ¶
type Message = talk.Message
Type aliases re-exported from axon-talk for backwards compatibility within this package. Internal code uses these directly; external consumers should migrate to axon-talk types.
type Request ¶ added in v0.3.0
type Request = talk.Request
Type aliases re-exported from axon-talk for backwards compatibility within this package. Internal code uses these directly; external consumers should migrate to axon-talk types.
type Response ¶ added in v0.3.0
type Response = talk.Response
Type aliases re-exported from axon-talk for backwards compatibility within this package. Internal code uses these directly; external consumers should migrate to axon-talk types.
type Result ¶
type Result struct {
	Content  string
	Thinking string
	Usage    *Usage // accumulated token usage across all turns; nil if provider doesn't report
}
Result is the final output of a conversation loop run.
type RetryOption ¶ added in v0.7.1
type RetryOption func(*retryConfig)
RetryOption configures the retry decorator.
func WithBackoff ¶ added in v0.7.1
func WithBackoff(initial, max time.Duration) RetryOption
WithBackoff sets the initial and maximum backoff durations. Default is 1s initial, 30s max. Jittered exponential backoff is applied.
func WithMaxRetries ¶ added in v0.7.1
func WithMaxRetries(n int) RetryOption
WithMaxRetries sets the maximum number of retry attempts. Default is 3.
func WithRetryable ¶ added in v0.7.1
func WithRetryable(fn func(error) bool) RetryOption
WithRetryable sets a custom function to determine if an error is retryable. By default, errors with HTTP status 429, 500, 502, or 503 are retried.
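Example ¶

A sketch of a custom predicate; the axon-talk import path is assumed, and the policy below — retry everything except cancellation and deadline errors — is purely illustrative.

```go
package main

import (
	"context"
	"errors"

	loop "github.com/benaskins/axon-loop"
	talk "github.com/benaskins/axon-talk" // import path assumed
)

func main() {
	var base talk.LLMClient // e.g. an axon-talk adapter

	// Replace the default status-code check with a custom predicate.
	client := loop.WithRetry(base, loop.WithRetryable(func(err error) bool {
		return !errors.Is(err, context.Canceled) &&
			!errors.Is(err, context.DeadlineExceeded)
	}))
	_ = client // pass as RunConfig.Client
}
```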
type Role ¶ added in v0.6.0
type Role = talk.Role
Type aliases re-exported from axon-talk for backwards compatibility within this package. Internal code uses these directly; external consumers should migrate to axon-talk types.
type RunConfig ¶ added in v0.6.0
type RunConfig struct {
	Client  LLMClient
	Request *Request
	Tools   map[string]tool.ToolDef
	ToolCtx *tool.ToolContext
	Context ContextStrategy // Optional; trims messages before each LLM call.
	Callbacks
}
RunConfig bundles parameters for Run, keeping the function signature small.
Example ¶
RunConfig assembles everything needed for a conversation loop: an LLM client, the initial request, tool definitions, and a context strategy. This is the primary struct consumers construct.
package main

import (
	"context"
	"fmt"

	loop "github.com/benaskins/axon-loop"
	tool "github.com/benaskins/axon-tool"
)

func main() {
	// In production, use an axon-talk adapter (e.g. talk.NewOllamaClient).
	var client loop.LLMClient

	// Define tools the model can call.
	tools := map[string]tool.ToolDef{
		"get_weather": {
			Name:        "get_weather",
			Description: "Get current weather for a city",
			Parameters: tool.ParameterSchema{
				Type:     "object",
				Required: []string{"city"},
				Properties: map[string]tool.PropertySchema{
					"city": {Type: "string", Description: "City name"},
				},
			},
			Execute: func(ctx *tool.ToolContext, args map[string]any) tool.ToolResult {
				city, _ := args["city"].(string)
				return tool.ToolResult{Content: fmt.Sprintf("22°C and sunny in %s", city)}
			},
		},
	}

	cfg := loop.RunConfig{
		Client: client,
		Request: &loop.Request{
			Model:  "claude-sonnet-4-20250514",
			Stream: true,
			Messages: []loop.Message{
				{Role: loop.RoleSystem, Content: "You are a weather assistant."},
				{Role: loop.RoleUser, Content: "What's the weather in Melbourne?"},
			},
			MaxIterations: 5,
		},
		Tools:   tools,
		Context: loop.SlidingWindow(20),
		Callbacks: loop.Callbacks{
			OnToken: func(token string) {
				fmt.Print(token)
			},
		},
	}

	_, _ = loop.Run(context.Background(), cfg)
}
Output:
type ToolCall ¶
type ToolCall = talk.ToolCall
Type aliases re-exported from axon-talk for backwards compatibility within this package. Internal code uses these directly; external consumers should migrate to axon-talk types.
type ToolUseEvent ¶ added in v0.4.0
ToolUseEvent is emitted when the LLM invokes a tool.