loop

package module
v0.11.2
Published: Apr 21, 2026 License: MIT Imports: 12 Imported by: 3

README

axon-loop

Primitives · Part of the lamina workspace

Provider-agnostic conversation loop for LLM-powered agents. Handles message exchange, tool call dispatch, and streaming with no HTTP, persistence, or UI concerns. Bring your own LLMClient implementation (e.g. Ollama, OpenAI, Anthropic) and axon-loop drives the send-stream-tool-repeat cycle.

Getting started

go get github.com/benaskins/axon-loop@latest

Requires Go 1.26+.

result, err := loop.Run(ctx, loop.RunConfig{
    Client: client,
    Request: &loop.Request{
        Model:    "llama3",
        Messages: []loop.Message{{Role: "user", Content: "Hello"}},
        Stream:   true,
    },
    Tools: tools,
    Callbacks: loop.Callbacks{
        OnToken: func(t string) { fmt.Print(t) },
    },
})

See example/main.go for a runnable sketch.

Key types

  • LLMClient — interface for communicating with any LLM backend
  • Request / Response — provider-agnostic request and streamed response chunk
  • RunConfig — bundles parameters for Run(), including client, request, tools, context strategy, and callbacks
  • Message — a single message in a conversation
  • ToolCall — an LLM's decision to invoke a tool
  • Run() — executes the conversation loop: Run(ctx, RunConfig) (*Result, error)
  • Stream() — executes the loop and returns a channel of Event values instead of using callbacks
  • Event — streaming event emitted by Stream() (token, thinking, tool use, trim, done, or error)
  • ContextStrategy — interface for trimming conversation history before each LLM call (implementations: SlidingWindow, TokenBudget, TokenBudgetWithMinWindow)
  • Callbacks — optional hooks for tokens, thinking, tool use, context trimming, and completion

License

MIT — see LICENSE.

Documentation

Overview

Package loop provides a provider-agnostic conversation loop for LLM-powered agents. It handles message exchange, tool call dispatch, and streaming — with no HTTP, persistence, or UI concerns.

Class: primitive. Use when: any LLM interaction in libraries or HTTP services. For CLI agents, use axon-hand instead, which provides axon-loop automatically. Always paired with axon-talk and axon-tool.

Constants

const (
	RoleSystem    = talk.RoleSystem
	RoleUser      = talk.RoleUser
	RoleAssistant = talk.RoleAssistant
	RoleTool      = talk.RoleTool
)

Variables

This section is empty.

Functions

func Stream added in v0.4.0

func Stream(ctx context.Context, cfg RunConfig) <-chan Event

Stream executes a conversation loop and returns a channel of Events. The channel is closed when the loop completes or fails. The caller reads events without building a callback-to-channel bridge.

Any Callbacks in cfg are ignored — Stream bridges all events to the returned channel.
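A minimal consumption sketch, using a placeholder client as in the RunConfig example below (in production, use an axon-talk adapter such as talk.NewOllamaClient):

```go
package main

import (
	"context"
	"fmt"

	loop "github.com/benaskins/axon-loop"
)

func main() {
	// Placeholder; in production, use an axon-talk adapter.
	var client loop.LLMClient

	events := loop.Stream(context.Background(), loop.RunConfig{
		Client: client,
		Request: &loop.Request{
			Model:    "llama3",
			Stream:   true,
			Messages: []loop.Message{{Role: loop.RoleUser, Content: "Hello"}},
		},
	})

	// Exactly one field is set per event; the channel closes
	// after a Done or Err event.
	for ev := range events {
		switch {
		case ev.Err != nil:
			fmt.Println("error:", ev.Err)
		case ev.Done != nil:
			fmt.Println("\ndone in", ev.Done.DurationMs, "ms")
		case ev.ToolUse != nil:
			fmt.Println("using tool:", ev.ToolUse.Name)
		case ev.Token != "":
			fmt.Print(ev.Token)
		}
	}
}
```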

func WithRetry added in v0.7.1

func WithRetry(client talk.LLMClient, opts ...RetryOption) talk.LLMClient

WithRetry wraps an LLMClient with retry logic. Requests that fail with retryable errors are retried with jittered exponential backoff.

Streaming safety: if the streaming callback has already been invoked (tokens were delivered to the caller), the request is not retried, to avoid emitting duplicate content.
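A usage sketch, with a placeholder base client and illustrative retry settings:

```go
package main

import (
	"time"

	loop "github.com/benaskins/axon-loop"
)

func main() {
	// Placeholder; in production, use an axon-talk adapter.
	var base loop.LLMClient

	// Up to 5 attempts with jittered exponential backoff,
	// starting at 500ms and capped at 10s.
	client := loop.WithRetry(base,
		loop.WithMaxRetries(5),
		loop.WithBackoff(500*time.Millisecond, 10*time.Second),
	)

	// client satisfies LLMClient, so it can be used anywhere one is
	// expected, e.g. as RunConfig.Client.
	_ = client
}
```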

Types

type Callbacks

type Callbacks struct {
	OnToken    func(token string)
	OnThinking func(token string)
	OnToolUse  func(name string, args map[string]any)
	OnTrim     func(dropped []Message) // Called when context trimming drops messages.
	OnDone     func(durationMs int64)
}

Callbacks receives streaming events from the conversation loop. All fields are optional — nil callbacks are skipped.

type ContextStrategy added in v0.6.1

type ContextStrategy interface {
	// Trim returns the messages to send to the LLM.
	// The input slice must not be modified.
	Trim(messages []Message) []Message
}

ContextStrategy controls how conversation history is trimmed before each LLM request. Implementations receive the full message slice and return the subset to send. The system prompt (if present as the first message) should generally be preserved.

func SlidingWindow added in v0.6.1

func SlidingWindow(n int) ContextStrategy

SlidingWindow keeps the system prompt plus the last n conversation messages. Useful when you want a fixed-size history regardless of token count.

Example

A sliding window keeps the system prompt plus the last n messages, discarding older history as the conversation grows.

package main

import (
	"fmt"

	loop "github.com/benaskins/axon-loop"
)

func main() {
	strategy := loop.SlidingWindow(10)

	messages := []loop.Message{
		{Role: loop.RoleSystem, Content: "You are a helpful assistant."},
		{Role: loop.RoleUser, Content: "Hello"},
		{Role: loop.RoleAssistant, Content: "Hi there!"},
		{Role: loop.RoleUser, Content: "What is Go?"},
	}

	trimmed := strategy.Trim(messages)
	fmt.Println(trimmed[0].Role, "message preserved")
	fmt.Println(len(trimmed)-1, "conversation messages kept")
}
Output:
system message preserved
3 conversation messages kept

func TokenBudget added in v0.6.1

func TokenBudget(budget int) ContextStrategy

TokenBudget trims conversation history to fit within an estimated token budget. The system prompt is always preserved; older messages are dropped first. This is the strategy used when Request.MaxTokens is set without an explicit ContextStrategy.

Example

A token budget trims older messages so the total estimated token count stays within the budget. The system prompt is always preserved.

package main

import (
	"fmt"

	loop "github.com/benaskins/axon-loop"
)

func main() {
	strategy := loop.TokenBudget(4096)

	messages := []loop.Message{
		{Role: loop.RoleSystem, Content: "You are a helpful assistant."},
		{Role: loop.RoleUser, Content: "Summarise this long document..."},
	}

	trimmed := strategy.Trim(messages)
	fmt.Println(len(trimmed), "messages after trim")
}
Output:
2 messages after trim

func TokenBudgetWithMinWindow added in v0.6.1

func TokenBudgetWithMinWindow(budget, minMessages int) ContextStrategy

TokenBudgetWithMinWindow trims to a token budget but guarantees at least minMessages conversation messages are kept (plus the system prompt). Whichever approach retains more messages wins. This prevents a large system prompt from starving the conversation history.

type ContextStrategyFunc added in v0.6.1

type ContextStrategyFunc func(messages []Message) []Message

ContextStrategyFunc adapts a plain function into a ContextStrategy.

func (ContextStrategyFunc) Trim added in v0.6.1

func (f ContextStrategyFunc) Trim(messages []Message) []Message

type DoneEvent added in v0.4.0

type DoneEvent struct {
	Content    string
	Thinking   string
	DurationMs int64
}

DoneEvent is emitted when the loop completes successfully.

type Event added in v0.4.0

type Event struct {
	// Exactly one of these is set per event.
	Token    string        // incremental content token
	Thinking string        // incremental thinking token
	ToolUse  *ToolUseEvent // a tool was invoked
	Trim     *TrimEvent    // context was trimmed
	Done     *DoneEvent    // the loop completed
	Err      error         // the loop failed
}

Event is a streaming event emitted by Stream. Consumers receive these on a channel instead of registering callbacks.

type LLMClient added in v0.2.0

type LLMClient = talk.LLMClient

Type aliases re-exported from axon-talk for backwards compatibility within this package. Internal code uses these directly; external consumers should migrate to axon-talk types.

type Message

type Message = talk.Message

Type aliases re-exported from axon-talk for backwards compatibility within this package. Internal code uses these directly; external consumers should migrate to axon-talk types.

type Request added in v0.3.0

type Request = talk.Request

Type aliases re-exported from axon-talk for backwards compatibility within this package. Internal code uses these directly; external consumers should migrate to axon-talk types.

type Response added in v0.3.0

type Response = talk.Response

Type aliases re-exported from axon-talk for backwards compatibility within this package. Internal code uses these directly; external consumers should migrate to axon-talk types.

type Result

type Result struct {
	Content  string
	Thinking string
	Usage    *Usage // accumulated token usage across all turns; nil if provider doesn't report
}

Result is the final output of a conversation loop run.

func Run

func Run(ctx context.Context, cfg RunConfig) (*Result, error)

Run executes a conversation loop: sends messages to the LLM, streams the response, executes tool calls, and repeats until no more tool calls are made.

cfg.Tools and cfg.ToolCtx may be nil for simple chat without tool support.

type RetryOption added in v0.7.1

type RetryOption func(*retryConfig)

RetryOption configures the retry decorator.

func WithBackoff added in v0.7.1

func WithBackoff(initial, max time.Duration) RetryOption

WithBackoff sets the initial and maximum backoff durations. Default is 1s initial, 30s max. Jittered exponential backoff is applied.

func WithMaxRetries added in v0.7.1

func WithMaxRetries(n int) RetryOption

WithMaxRetries sets the maximum number of retry attempts. Default is 3.

func WithRetryable added in v0.7.1

func WithRetryable(fn func(error) bool) RetryOption

WithRetryable sets a custom function to determine if an error is retryable. By default, errors with HTTP status 429, 500, 502, or 503 are retried.
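A sketch of the predicate; note that a custom function replaces the default status-code check entirely. The sentinel error here is illustrative:

```go
package main

import (
	"errors"

	loop "github.com/benaskins/axon-loop"
)

// errOverloaded stands in for a provider-specific transient error.
var errOverloaded = errors.New("temporarily overloaded")

func main() {
	// Placeholder; in production, use an axon-talk adapter.
	var base loop.LLMClient

	// Retry only on our sentinel error, not on HTTP status codes.
	client := loop.WithRetry(base,
		loop.WithRetryable(func(err error) bool {
			return errors.Is(err, errOverloaded)
		}),
	)
	_ = client
}
```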

type Role added in v0.6.0

type Role = talk.Role

Type aliases re-exported from axon-talk for backwards compatibility within this package. Internal code uses these directly; external consumers should migrate to axon-talk types.

type RunConfig added in v0.6.0

type RunConfig struct {
	Client  LLMClient
	Request *Request
	Tools   map[string]tool.ToolDef
	ToolCtx *tool.ToolContext
	Context ContextStrategy // Optional; trims messages before each LLM call.
	Callbacks
}

RunConfig bundles parameters for Run, keeping the function signature small.

Example

RunConfig assembles everything needed for a conversation loop: an LLM client, the initial request, tool definitions, and a context strategy. This is the primary struct consumers construct.

package main

import (
	"context"
	"fmt"

	loop "github.com/benaskins/axon-loop"
	tool "github.com/benaskins/axon-tool"
)

func main() {
	// In production, use an axon-talk adapter (e.g. talk.NewOllamaClient).
	var client loop.LLMClient

	// Define tools the model can call.
	tools := map[string]tool.ToolDef{
		"get_weather": {
			Name:        "get_weather",
			Description: "Get current weather for a city",
			Parameters: tool.ParameterSchema{
				Type:     "object",
				Required: []string{"city"},
				Properties: map[string]tool.PropertySchema{
					"city": {Type: "string", Description: "City name"},
				},
			},
			Execute: func(ctx *tool.ToolContext, args map[string]any) tool.ToolResult {
				city, _ := args["city"].(string)
				return tool.ToolResult{Content: fmt.Sprintf("22°C and sunny in %s", city)}
			},
		},
	}

	cfg := loop.RunConfig{
		Client: client,
		Request: &loop.Request{
			Model:  "claude-sonnet-4-20250514",
			Stream: true,
			Messages: []loop.Message{
				{Role: loop.RoleSystem, Content: "You are a weather assistant."},
				{Role: loop.RoleUser, Content: "What's the weather in Melbourne?"},
			},
			MaxIterations: 5,
		},
		Tools:   tools,
		Context: loop.SlidingWindow(20),
		Callbacks: loop.Callbacks{
			OnToken: func(token string) {
				fmt.Print(token)
			},
		},
	}

	_, _ = loop.Run(context.Background(), cfg)
}

type ToolCall

type ToolCall = talk.ToolCall

Type aliases re-exported from axon-talk for backwards compatibility within this package. Internal code uses these directly; external consumers should migrate to axon-talk types.

type ToolUseEvent added in v0.4.0

type ToolUseEvent struct {
	Name string
	Args map[string]any
}

ToolUseEvent is emitted when the LLM invokes a tool.

type TrimEvent added in v0.6.1

type TrimEvent struct {
	Dropped []Message
}

TrimEvent is emitted when the context strategy drops messages.

type Usage added in v0.8.0

type Usage = talk.Usage

Type aliases re-exported from axon-talk for backwards compatibility within this package. Internal code uses these directly; external consumers should migrate to axon-talk types.
