libagent

package
v0.4.17 Latest
Published: Apr 18, 2026 License: MIT Imports: 38 Imported by: 0

Documentation

Overview

Package libagent provides a stateful agent with tool execution and event streaming. It is built on charm.land/fantasy for LLM access.

Index

Constants

const (
	DefaultMaxTokens     = 32000
	DefaultContextWindow = 200000
	DefaultMaxImages     = 20
)
const CodexProviderID = "openai-codex"

CodexProviderID is the stable identifier for the OpenAI Codex custom provider.

const SyntheticProviderID = "synthetic"

SyntheticProviderID is the stable identifier for the Synthetic custom provider.

const ZenGoProviderID = "opencode-go"

ZenGoProviderID is the stable identifier for the OpenCode Go custom provider.

const ZenProviderID = "opencode"

ZenProviderID is the stable identifier for the OpenCode Zen custom provider.

Variables

var ErrCatalogNotModified = errors.New("catalog not modified")

ErrCatalogNotModified is re-exported so callers can detect 304 responses from Refresh.

var ErrMessageNotFound = errors.New("message not found")

Functions

func AdaptTools

func AdaptTools(tools []Tool) []fantasy.AgentTool

AdaptTools converts a []Tool slice into []fantasy.AgentTool for use with libagent internals.

func AssistantReasoning added in v0.2.11

func AssistantReasoning(msg *AssistantMessage) string

func AssistantText added in v0.2.11

func AssistantText(msg *AssistantMessage) string

func BuildProviderOptions

func BuildProviderOptions(
	providerID string,
	providerType string,
	modelID string,
	systemPrompt string,
	thinkingLevel ThinkingLevel,
	catalogDefaults map[string]any,
	rawOverrides map[string]any,
) fantasy.ProviderOptions

BuildProviderOptions constructs a fantasy.ProviderOptions map for a single LLM call. It merges three sources in priority order (lowest → highest):

  1. catalogDefaults – raw map from the catwalk catalog (e.g. ThinkingConfig)
  2. cfg.ProviderOptions – per-model raw overrides stored in ModelConfig
  3. Derived options – reasoning/thinking derived from cfg.ThinkingLevel

The result is routed through each provider's ParseOptions / ParseResponsesOptions so that fantasy receives strongly-typed ProviderOptionsData values.

Special cases:

  • Codex (providerID == CodexProviderID): injects Instructions = systemPrompt, ParallelToolCalls = true, Include = [reasoning_encrypted_content], and a ReasoningEffort when ThinkingLevel is enabled.
  • OpenAI Responses API models (detected via openai.IsResponsesModel): same reasoning_summary + include treatment as Codex.

func DecodeDataString added in v0.2.6

func DecodeDataString(data string) []byte

func DefaultConvertToLLM

func DefaultConvertToLLM(_ context.Context, messages []Message) ([]fantasy.Message, error)

DefaultConvertToLLM is the default ConvertToLLMFn. It passes through user, assistant, and toolResult messages and drops everything else.

func DefaultLoginCallbacks

func DefaultLoginCallbacks() oauth.LoginCallbacks

DefaultLoginCallbacks returns OAuth callbacks that work out of the box without any configuration:

  • OnAuth: opens the authorization URL in the system browser (falls back to printing it if the browser cannot be launched), then prints any extra instructions (e.g. the device code).
  • OnProgress: prints status messages to stderr.
  • OnPrompt: prints the prompt to stderr and reads a line from stdin.
  • OnManualCodeInput: prints a hint to stderr and reads a line from stdin, racing with the local callback server for providers that start one (Gemini CLI, Antigravity, OpenAI Codex).

func EncodeDataString added in v0.2.6

func EncodeDataString(data []byte) string

func FormatErrorForCLI added in v0.2.6

func FormatErrorForCLI(err error) string

FormatErrorForCLI returns a user-facing error string. For provider errors it includes HTTP status, request id (when available), and the raw provider response body for easier debugging.

func HasBijectiveToolCoupling added in v0.2.8

func HasBijectiveToolCoupling(messages []Message) bool

HasBijectiveToolCoupling returns true when assistant tool calls and tool results are balanced in contiguous turns:

  • every assistant tool-call message is immediately followed by matching tool-result messages before any user/assistant message appears
  • every tool call has a matching tool result with the same ID and tool name
  • all call IDs and tool names are non-empty

func LoginCallbacksWithPrinter

func LoginCallbacksWithPrinter(printer func(string)) oauth.LoginCallbacks

LoginCallbacksWithPrinter returns OAuth callbacks that route status messages to a custom printer function instead of stderr. Useful for TUI applications.

func MarshalAssistantContent added in v0.3.0

func MarshalAssistantContent(content fantasy.ResponseContent) json.RawMessage

MarshalAssistantContent serializes assistant response content in a format that can be safely decoded back into concrete fantasy content types.

func MessageID added in v0.2.6

func MessageID(m Message) string

func NewModelFromProvider

func NewModelFromProvider(ctx context.Context, p fantasy.Provider, modelID string) (fantasy.LanguageModel, error)

NewModelFromProvider creates a fantasy.LanguageModel from any already-constructed fantasy.Provider. Use this when you need full control over provider options.

func ParseJSONInput

func ParseJSONInput(input []byte, dst any) error

ParseJSONInput parses potentially malformed LLM-generated JSON into dst, using schema.ParsePartialJSON to repair the input before unmarshalling.
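For example, tool-call arguments that arrive truncated can still be decoded — a hedged sketch assuming the repair pass closes the unterminated object:

```go
var args struct {
	Path string `json:"path"`
}
// Input truncated mid-object by the model; the repair pass is assumed
// to close it before unmarshalling.
if err := libagent.ParseJSONInput([]byte(`{"path": "main.go"`), &args); err != nil {
	// handle unrecoverable input
}
```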

func PromptWithUserAttachments added in v0.2.6

func PromptWithUserAttachments(prompt string, attachments []FilePart) string

func SetAssistantUsage added in v0.4.1

func SetAssistantUsage(m *AssistantMessage, inputTokens, cacheReadTokens, outputTokens int64)

SetAssistantUsage updates provider-reported token accounting without exposing fantasy types to callers outside libagent.

func SetMessageMeta added in v0.2.6

func SetMessageMeta(m Message, meta MessageMeta)

func SkipMaxOutputTokens

func SkipMaxOutputTokens(providerID string) bool

SkipMaxOutputTokens reports whether the provider should not send max_output_tokens. Codex's ChatGPT backend rejects this field.

func StreamText

func StreamText(ctx context.Context, model fantasy.LanguageModel, systemPrompt, prompt string, maxOutputTokens int64, onDelta func(string)) error

StreamText runs a single non-tool LLM call and collects text deltas via onDelta. It is a convenience helper for compaction and other one-shot generation needs.
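A typical call looks like this (a sketch; model is a fantasy.LanguageModel obtained elsewhere, e.g. from Catalog.NewModel or NewModelFromProvider):

```go
err := libagent.StreamText(ctx, model,
	"You are a concise summarizer.",         // system prompt
	"Summarize the conversation so far.",    // user prompt
	4096,                                    // max output tokens
	func(delta string) { fmt.Print(delta) }, // called for each text chunk
)
```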

func UnixMilliToTime added in v0.2.6

func UnixMilliToTime(ms int64) time.Time

func UnmarshalAssistantContent added in v0.3.0

func UnmarshalAssistantContent(raw json.RawMessage) fantasy.ResponseContent

UnmarshalAssistantContent decodes assistant response content previously serialized by MarshalAssistantContent.

Types

type Agent

type Agent struct {
	// contains filtered or unexported fields
}

Agent is a stateful, event-emitting agent that manages conversation history, tool execution, and streaming LLM responses.

func NewAgent

func NewAgent(opts AgentOptions) *Agent

NewAgent creates a new Agent with the given options.

func (*Agent) Abort

func (a *Agent) Abort()

Abort cancels the current streaming operation.

func (*Agent) AppendMessage

func (a *Agent) AppendMessage(m Message)

AppendMessage appends a single message to the history.

func (*Agent) ClearMessages

func (a *Agent) ClearMessages()

ClearMessages removes all messages.

func (*Agent) Continue

func (a *Agent) Continue(ctx context.Context) error

Continue resumes from the existing context without adding a new message. The last message must not be an assistant message (it must be user or toolResult).

func (*Agent) Prompt

func (a *Agent) Prompt(ctx context.Context, text string, files ...FilePart) error

Prompt sends a text prompt (and optional file attachments) to the agent. Blocks until the agent finishes or the context is cancelled. Returns an error if the agent is already streaming.
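A minimal end-to-end sketch (rm is a RuntimeModel resolved elsewhere, e.g. via the Catalog; myTools is a hypothetical tool list):

```go
agent := libagent.NewAgent(libagent.AgentOptions{
	RuntimeModel: rm,
	SystemPrompt: "You are a helpful assistant.",
	Tools:        myTools, // optional []libagent.Tool
})
// Blocks until the run finishes or ctx is cancelled.
if err := agent.Prompt(ctx, "Hello!"); err != nil {
	log.Fatal(err)
}
```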

func (*Agent) PromptMessages

func (a *Agent) PromptMessages(ctx context.Context, msgs ...Message) error

PromptMessages sends one or more pre-built messages as the next prompt.

func (*Agent) ReplaceMessages

func (a *Agent) ReplaceMessages(ms []Message)

ReplaceMessages replaces the message history with a copy of ms.

func (*Agent) Reset

func (a *Agent) Reset()

Reset clears messages and resets streaming state.

func (*Agent) RuntimeModel

func (a *Agent) RuntimeModel() RuntimeModel

RuntimeModel returns the current runtime model and metadata.

func (*Agent) SessionID

func (a *Agent) SessionID() string

SessionID returns the current session ID.

func (*Agent) SetRuntimeModel

func (a *Agent) SetRuntimeModel(m RuntimeModel)

SetRuntimeModel replaces the runtime model and metadata.

func (*Agent) SetSessionID

func (a *Agent) SetSessionID(id string)

SetSessionID updates the session ID.

func (*Agent) SetSystemPrompt

func (a *Agent) SetSystemPrompt(v string)

SetSystemPrompt replaces the system prompt.

func (*Agent) SetTools

func (a *Agent) SetTools(t []Tool)

SetTools replaces the tool list.

func (*Agent) State

func (a *Agent) State() AgentState

State returns a snapshot of the current agent state.

func (*Agent) Subscribe

func (a *Agent) Subscribe() (<-chan AgentEvent, func())

Subscribe registers a listener for agent events. Returns the event channel and an unsubscribe function. Events are delivered in order and never dropped; the channel is closed when unsubscribed. Subscribe and unsubscribe are safe to call concurrently with emit.
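A typical consumer loop, sketched:

```go
events, unsubscribe := agent.Subscribe()
defer unsubscribe() // stops delivery and closes the channel

go func() {
	for ev := range events {
		switch ev.Type {
		case libagent.AgentEventTypeMessageUpdate:
			// ev.Delta carries the streaming increment
		case libagent.AgentEventTypeAgentEnd:
			// ev.Messages holds every message produced by the run
		}
	}
}()
```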

func (*Agent) WaitForIdle

func (a *Agent) WaitForIdle()

WaitForIdle blocks until the agent is not streaming.

type AgentContext

type AgentContext struct {
	SystemPrompt string
	Messages     []Message
	Tools        []fantasy.AgentTool
}

AgentContext carries the complete conversation state passed to the loop.

type AgentEvent

type AgentEvent struct {
	Type AgentEventType

	// AgentEnd
	Messages []Message

	// TurnEnd
	TurnMessage Message
	ToolResults []*ToolResultMessage

	// MessageStart / MessageUpdate / MessageEnd
	Message Message
	// MessageUpdate only: streaming increment
	Delta *StreamDelta

	// ToolExecutionStart / ToolExecutionUpdate / ToolExecutionEnd
	ToolCallID  string
	ToolName    string
	ToolArgs    string
	ToolResult  string
	ToolIsError bool

	// Retry only: message describing the retry attempt
	RetryMessage string

	// ContextCompaction only
	ContextCompaction *ContextCompactionEvent
}

AgentEvent is emitted by the agent loop to describe what is happening.

type AgentEventType

type AgentEventType string

AgentEventType identifies the type of an AgentEvent.

const (
	// AgentEventTypeAgentStart is emitted when the agent begins processing.
	AgentEventTypeAgentStart AgentEventType = "agent_start"
	// AgentEventTypeAgentEnd is emitted when the agent completes with all new messages.
	AgentEventTypeAgentEnd AgentEventType = "agent_end"
	// AgentEventTypeTurnStart is emitted when a new turn begins (one LLM call + tool executions).
	AgentEventTypeTurnStart AgentEventType = "turn_start"
	// AgentEventTypeTurnEnd is emitted when a turn completes.
	AgentEventTypeTurnEnd AgentEventType = "turn_end"
	// AgentEventTypeMessageStart is emitted when any message begins.
	AgentEventTypeMessageStart AgentEventType = "message_start"
	// AgentEventTypeMessageUpdate is emitted during assistant streaming (text deltas etc.).
	AgentEventTypeMessageUpdate AgentEventType = "message_update"
	// AgentEventTypeMessageEnd is emitted when a message is fully received.
	AgentEventTypeMessageEnd AgentEventType = "message_end"
	// AgentEventTypeToolExecutionStart is emitted when a tool begins executing.
	AgentEventTypeToolExecutionStart AgentEventType = "tool_execution_start"
	// AgentEventTypeToolExecutionUpdate is emitted when a tool streams progress.
	AgentEventTypeToolExecutionUpdate AgentEventType = "tool_execution_update"
	// AgentEventTypeToolExecutionEnd is emitted when a tool completes.
	AgentEventTypeToolExecutionEnd AgentEventType = "tool_execution_end"
	// AgentEventTypeRetry is emitted when retrying a failed stream due to connection error.
	AgentEventTypeRetry AgentEventType = "retry"
	// AgentEventTypeContextCompaction is emitted when the session context is compacted.
	AgentEventTypeContextCompaction AgentEventType = "context_compaction"
)

type AgentLoopConfig

type AgentLoopConfig struct {
	Model fantasy.LanguageModel

	// ConvertToLLM converts AgentMessages to LLM-compatible fantasy.Messages.
	// If nil, only user/assistant/toolResult messages are forwarded.
	ConvertToLLM ConvertToLLMFn

	// TransformContext is an optional pre-pass over messages before ConvertToLLM.
	TransformContext TransformContextFn

	// SystemPromptOverride replaces context.SystemPrompt if set (used internally by Agent).
	SystemPromptOverride *string

	// ProviderOptions are passed to every fantasy.Call made by the loop.
	// Build them with BuildProviderOptions or RuntimeModel.BuildCallProviderOptions.
	ProviderOptions fantasy.ProviderOptions

	// MaxOutputTokens caps each LLM response. When nil no limit is sent.
	MaxOutputTokens *int64

	// OnCompleteHook optionally validates a final assistant response before the
	// loop returns. Returning ok=false injects a user follow-up and continues.
	OnCompleteHook OnCompleteHook
}

AgentLoopConfig is the configuration for agentLoop / agentLoopContinue.

type AgentOptions

type AgentOptions struct {
	// RuntimeModel sets the initial language model and capabilities metadata. Required.
	RuntimeModel RuntimeModel
	// SystemPrompt sets the initial system prompt.
	SystemPrompt string
	// Tools sets the initial tool list using the stdlib-only Tool interface.
	Tools []Tool
	// Messages pre-loads conversation history.
	Messages []Message

	// ConvertToLLM converts agent messages to LLM-compatible messages.
	// Defaults to DefaultConvertToLLM.
	ConvertToLLM ConvertToLLMFn
	// TransformContext optionally transforms messages before ConvertToLLM.
	TransformContext TransformContextFn

	// SessionID is forwarded to providers that support session-based caching.
	SessionID string

	// ProviderOptions are passed to every LLM call the agent makes.
	// Build with BuildProviderOptions or RuntimeModel.BuildCallProviderOptions.
	ProviderOptions fantasy.ProviderOptions

	// MaxOutputTokens caps each LLM response. When 0 no limit is sent,
	// which is required for providers like Codex that reject this field.
	MaxOutputTokens int64

	// OnCompleteHook optionally validates final assistant responses and may
	// inject a user follow-up to continue the same run.
	OnCompleteHook OnCompleteHook
}

AgentOptions configures a new Agent.

type AgentState

type AgentState struct {
	SystemPrompt     string
	Model            fantasy.LanguageModel
	Tools            []fantasy.AgentTool
	Messages         []Message
	IsStreaming      bool
	StreamMessage    Message
	PendingToolCalls map[string]struct{}
	Error            error
}

AgentState holds the full runtime state of an Agent.

type AssistantMessage

type AssistantMessage struct {
	Role            string
	Text            string
	Reasoning       string
	ToolCalls       []ToolCallItem
	Completed       bool
	CompleteReason  string
	CompleteMessage string
	CompleteDetails string
	Content         fantasy.ResponseContent
	FinishReason    fantasy.FinishReason
	Usage           fantasy.Usage
	Error           error
	Timestamp       time.Time
	Meta            MessageMeta
}

AssistantMessage is a completed assistant response.

func NewAssistantMessage

func NewAssistantMessage(text, reasoning string, toolCalls []ToolCallItem, ts time.Time) *AssistantMessage

NewAssistantMessage builds an AssistantMessage from plain-Go data. It constructs the internal fantasy.ResponseContent without requiring callers to import charm.

func (*AssistantMessage) GetRole

func (m *AssistantMessage) GetRole() string

func (*AssistantMessage) GetTimestamp

func (m *AssistantMessage) GetTimestamp() time.Time

type Catalog

type Catalog struct {
	// contains filtered or unexported fields
}

Catalog provides model discovery backed by catwalk's embedded provider list and an optional live catwalk server for up-to-date data.

Use DefaultCatalog() to get the ready-to-use catalog backed by catwalk's embedded data, or NewCatalog() to build one with custom providers.

OAuth: every Catalog is pre-configured with DefaultLoginCallbacks so that NewModel triggers authentication automatically (browser open + stdin/stdout). Override with SetLoginCallbacks for custom UI, and persist credentials between runs via SetOnCredentialsUpdated and SetCredentials.

func DefaultCatalog

func DefaultCatalog() *Catalog

DefaultCatalog returns a catalog pre-loaded with all of catwalk's embedded (offline) providers plus built-in custom providers (e.g. OpenAI Codex). This is the most convenient starting point.

func NewCatalog

func NewCatalog() *Catalog

NewCatalog returns an empty catalog pre-configured with DefaultLoginCallbacks and automatic credential persistence to ~/.config/libagent/oauth_credentials.json. Add providers with AddProvider or AddEmbedded before calling NewModel.

func (*Catalog) AddCustomProvider

func (c *Catalog) AddCustomProvider(p CustomProvider)

AddCustomProvider registers a CustomProvider, overriding any existing entry with the same ID. Custom providers take precedence over catwalk providers when both share the same ID.

func (*Catalog) AddEmbedded

func (c *Catalog) AddEmbedded()

AddEmbedded loads all providers from catwalk's offline embedded snapshot.

func (*Catalog) AddProvider

func (c *Catalog) AddProvider(p catwalk.Provider)

AddProvider registers a catwalk.Provider, overriding any existing entry with the same ID. Use this to add custom or updated providers on top of the embedded catalog.

func (*Catalog) Credentials

func (c *Catalog) Credentials(providerID string) (oauth.Credentials, bool)

Credentials returns the stored credentials for a provider, if any.

func (*Catalog) FindModel

func (c *Catalog) FindModel(providerID, modelID string) (ModelInfo, catwalk.Provider, error)

FindModel returns the ModelInfo for a given provider ID and model ID. It searches custom providers first, then catwalk providers. The second return value is the catwalk.Provider; it is zero for custom providers.

func (*Catalog) FindModelOptions

func (c *Catalog) FindModelOptions(providerID, modelID string) (providerType string, catalogProviderOptions map[string]any)

FindModelOptions returns the per-model provider options from the catalog, plus the provider type string. For custom providers, model-level provider options are currently not defined, so this returns the provider type and nil options.

func (*Catalog) ListProviders

func (c *Catalog) ListProviders() []catwalk.Provider

ListProviders returns all registered providers (catwalk + custom). When both define the same provider ID, the custom provider wins.

func (*Catalog) NewModel

func (c *Catalog) NewModel(ctx context.Context, providerID, modelID, apiKey string) (fantasy.LanguageModel, error)

NewModel resolves providerID and modelID from the catalog, constructs the appropriate fantasy provider using apiKey, and returns a ready-to-use fantasy.LanguageModel.

It supports all provider types known to catwalk (openai, anthropic, google, openrouter, openai-compat) as well as custom providers registered via AddCustomProvider.

OAuth: when apiKey is empty and the provider has a registered OAuth flow, NewModel automatically resolves credentials — refreshing if stale or running the login flow if none exist (requires SetLoginCallbacks to have been called).
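Putting the pieces together (provider and model IDs here are illustrative; substitute IDs from ListProviders):

```go
catalog := libagent.DefaultCatalog()
model, err := catalog.NewModel(ctx,
	"anthropic",      // provider ID
	"claude-sonnet",  // model ID (illustrative)
	os.Getenv("ANTHROPIC_API_KEY"),
)
if err != nil {
	log.Fatal(err)
}
// model is ready to use in a RuntimeModel / AgentOptions.
```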

func (*Catalog) Refresh

func (c *Catalog) Refresh(ctx context.Context, client *catwalk.Client, etag string) error

Refresh fetches the latest provider list from a live catwalk server and merges it into the catalog, overriding stale embedded entries. The etag parameter enables conditional requests; pass "" to always fetch. Returns ErrCatalogNotModified if the server returned 304.

func (*Catalog) SetCredentials

func (c *Catalog) SetCredentials(providerID string, creds oauth.Credentials)

SetCredentials stores pre-loaded credentials for a provider (keyed by catwalk provider ID, e.g. "copilot", "anthropic"). Call this at startup to restore previously persisted credentials.

func (*Catalog) SetLoginCallbacks

func (c *Catalog) SetLoginCallbacks(cb oauth.LoginCallbacks)

SetLoginCallbacks overrides the OAuth login callbacks used when NewModel must trigger an authentication flow. Every Catalog starts with DefaultLoginCallbacks; call this only when custom UI behaviour is needed.

func (*Catalog) SetOnCredentialsUpdated

func (c *Catalog) SetOnCredentialsUpdated(fn func(providerID string, creds oauth.Credentials))

SetOnCredentialsUpdated registers a hook called whenever credentials are created or refreshed. Use it to persist the updated Credentials to disk.
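The persistence round trip, sketched with hypothetical helpers (saveCreds and loadCreds are placeholders for your own storage):

```go
// Persist refreshed credentials as they change.
catalog.SetOnCredentialsUpdated(func(providerID string, creds oauth.Credentials) {
	saveCreds(providerID, creds) // hypothetical disk-persistence helper
})

// Restore previously saved credentials at startup.
for id, creds := range loadCreds() { // hypothetical loader
	catalog.SetCredentials(id, creds)
}
```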

type ContextCompactionEvent added in v0.4.5

type ContextCompactionEvent struct {
	Phase                  ContextCompactionPhase `json:"phase"`
	Mode                   ContextCompactionMode  `json:"mode"`
	TriggerEstimatedTokens int64                  `json:"trigger_estimated_tokens,omitempty"`
	TriggerContextPercent  float64                `json:"trigger_context_percent,omitempty"`
	TokensBefore           int64                  `json:"tokens_before,omitempty"`
	Summarized             int                    `json:"summarized,omitempty"`
	Kept                   int                    `json:"kept,omitempty"`
	ErrorMessage           string                 `json:"error_message,omitempty"`
}

ContextCompactionEvent carries structured metadata for context compaction.

type ContextCompactionMode added in v0.4.5

type ContextCompactionMode string
const (
	ContextCompactionModeAuto   ContextCompactionMode = "auto"
	ContextCompactionModeManual ContextCompactionMode = "manual"
)

type ContextCompactionPhase added in v0.4.5

type ContextCompactionPhase string
const (
	ContextCompactionPhaseStart  ContextCompactionPhase = "start"
	ContextCompactionPhaseEnd    ContextCompactionPhase = "end"
	ContextCompactionPhaseFailed ContextCompactionPhase = "failed"
)

type ConvertToLLMFn

type ConvertToLLMFn func(ctx context.Context, messages []Message) ([]fantasy.Message, error)

ConvertToLLMFn converts AgentMessages to fantasy.Message slices before each LLM call. Custom message types should be converted or filtered here.

type CustomProvider

type CustomProvider struct {
	// ID is the unique provider identifier used in NewModel / FindModel calls.
	ID string
	// Name is the human-readable provider name shown in ListProviders.
	Name string
	// Type is the provider type used for option parsing/routing metadata.
	// Optional; defaults to openai-compat when omitted.
	Type catwalk.Type
	// APIKey is an optional env-var placeholder (for UI prefill), e.g. "$OPENAI_API_KEY".
	APIKey string
	// APIEndpoint is an optional base URL shown in provider listings.
	APIEndpoint string
	// Models is the list of models this provider exposes.
	Models []ModelInfo
	// Build constructs the fantasy.Provider for a given API key.
	// It is called every time NewModel is invoked for this provider.
	Build func(apiKey string) (fantasy.Provider, error)
}

CustomProvider lets callers register providers that are not in catwalk's embedded catalog. The Build function receives the resolved API key and must return a ready fantasy.Provider.
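Registration looks like this (the ID, model, and constructor are hypothetical; Build must return a ready fantasy.Provider):

```go
catalog.AddCustomProvider(libagent.CustomProvider{
	ID:   "my-gateway", // hypothetical provider ID
	Name: "My Gateway",
	Models: []libagent.ModelInfo{{
		ProviderID: "my-gateway",
		ModelID:    "my-model",
		Name:       "My Model",
	}},
	Build: func(apiKey string) (fantasy.Provider, error) {
		return newGatewayProvider(apiKey) // hypothetical constructor
	},
})
```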

func CodexProvider

func CodexProvider() CustomProvider

CodexProvider returns a CustomProvider for OpenAI Codex (ChatGPT OAuth).

Models are sourced from catwalk's embedded OpenAI provider. The ChatGPT OAuth backend exposes only a subset of GPT models, so we keep a tight model-ID filter here instead of importing the whole OpenAI catalog.

Authentication uses the OpenAI Codex OAuth flow from the oauth package. Register it with the Catalog via AddCustomProvider and call SetLoginCallbacks so that NewModel can trigger authentication automatically when no credentials are stored.

func SyntheticProvider

func SyntheticProvider() CustomProvider

SyntheticProvider returns a CustomProvider for Synthetic.

func ZenGoProvider added in v0.2.6

func ZenGoProvider() CustomProvider

ZenGoProvider returns a CustomProvider for OpenCode Go. Go models use a different endpoint: https://opencode.ai/zen/go/v1. MiniMax M2.5 uses the Anthropic API format, while GLM-5 and Kimi K2.5 use the OpenAI format.

func ZenProvider

func ZenProvider() CustomProvider

ZenProvider returns a CustomProvider for OpenCode Zen.

type FilePart

type FilePart struct {
	Filename  string
	MediaType string
	Data      []byte
}

FilePart is a binary file attachment using only stdlib types. It mirrors fantasy.FilePart but avoids the charm import in callers.

func NonTextFiles added in v0.2.6

func NonTextFiles(files []FilePart) []FilePart

type LanguageModel added in v0.4.1

type LanguageModel = fantasy.LanguageModel

LanguageModel re-exports the runtime model interface so packages outside libagent don't need to import fantasy directly.

type MediaSupport

type MediaSupport struct {
	Known bool
	// Enabled reports whether media inputs are supported by the active runtime model.
	Enabled bool
}

MediaSupport configures whether image/video inputs should be included at runtime. This is applied dynamically per run and does not mutate persisted message history.

type Message

type Message interface {
	// GetRole returns the role of the message.
	GetRole() string
	// GetTimestamp returns when the message was created.
	GetTimestamp() time.Time
}

Message represents a conversation message that the agent can work with. It can be a standard LLM message (user, assistant, toolResult) or a custom app-specific message type registered via CustomAgentMessage.

func AgentLoop

func AgentLoop(
	ctx context.Context,
	prompts []Message,
	agentCtx *AgentContext,
	cfg AgentLoopConfig,
	eventCh chan<- AgentEvent,
) ([]Message, error)

AgentLoop runs a new agent turn starting from the given prompt messages. It pushes events to eventCh as the loop progresses and returns once the loop completes; the caller does not need to close anything.

Returns the slice of new messages added during this loop (prompts + assistant + tool outputs + injected attachment user messages).

func AgentLoopContinue

func AgentLoopContinue(
	ctx context.Context,
	agentCtx *AgentContext,
	cfg AgentLoopConfig,
	eventCh chan<- AgentEvent,
) ([]Message, error)

AgentLoopContinue resumes an agent loop from the existing context without adding new messages. The last message in context must not be an assistant message.

func CloneMessage added in v0.2.6

func CloneMessage(m Message) Message

func SanitizeHistory added in v0.2.6

func SanitizeHistory(messages []Message) []Message

type MessageMeta added in v0.2.6

type MessageMeta struct {
	ID        string
	SessionID string
	Model     string
	Provider  string
	CreatedAt int64
	UpdatedAt int64
}

MessageMeta carries persistence metadata so runtime messages can be stored directly without sidecar structs.

func MessageMetaOf added in v0.2.6

func MessageMetaOf(m Message) MessageMeta

type MessageService added in v0.2.6

type MessageService interface {
	Create(ctx context.Context, sessionID string, msg Message) (Message, error)
	Update(ctx context.Context, msg Message) error
	Get(ctx context.Context, id string) (Message, error)
	List(ctx context.Context, sessionID string) ([]Message, error)
}

MessageService defines runtime-message persistence operations.

type ModelCapability

type ModelCapability string

ModelCapability identifies a modality or feature that a model supports.

const (
	// ModelCapabilityText indicates text input/output capability.
	ModelCapabilityText ModelCapability = "text"
	// ModelCapabilityImage indicates image input capability.
	ModelCapabilityImage ModelCapability = "image"
	// ModelCapabilityAudio indicates audio input capability.
	ModelCapabilityAudio ModelCapability = "audio"
)

type ModelConfig

type ModelConfig struct {
	// Name is the display/store identifier (e.g. "anthropic/claude-opus-4-5").
	Name string `json:"name,omitempty" toml:"name,omitempty"`

	Provider string `json:"provider" toml:"provider"`
	Model    string `json:"model" toml:"model"`

	APIKey  string  `json:"api_key,omitempty" toml:"api_key,omitempty"`
	BaseURL *string `json:"base_url,omitempty" toml:"base_url,omitempty"`

	// ThinkingLevel controls reasoning intensity across all providers.
	// Mapped to provider-native options by BuildProviderOptions.
	ThinkingLevel ThinkingLevel `json:"thinking_level,omitempty" toml:"thinking_level,omitempty"`

	// ProviderOptions holds raw provider-specific option overrides.
	// These are merged on top of catalog defaults and ThinkingLevel-derived
	// options when BuildProviderOptions is called.
	ProviderOptions map[string]any `json:"provider_options,omitempty" toml:"provider_options,omitempty"`

	// MaxImages caps how many image attachments from recent history are kept in
	// runtime context for this model. Nil uses the default runtime budget.
	MaxImages *int `json:"max_images,omitempty" toml:"max_images,omitempty"`

	MaxTokens     int64    `json:"max_tokens,omitempty" toml:"max_tokens,omitempty"`
	ContextWindow int64    `json:"context_window,omitempty" toml:"context_window,omitempty"`
	Temperature   *float64 `json:"temperature,omitempty" toml:"temperature,omitempty"`
	TopP          *float64 `json:"top_p,omitempty" toml:"top_p,omitempty"`
	TopK          *int64   `json:"top_k,omitempty" toml:"top_k,omitempty"`
}

ModelConfig stores a selected model configuration for serialization and runtime use. It uses only stdlib types so packages that don't import charm can hold it.

func (ModelConfig) EffectiveMaxImages added in v0.4.10

func (m ModelConfig) EffectiveMaxImages() int

EffectiveMaxImages returns the configured image attachment budget.

func (ModelConfig) Normalize

func (m ModelConfig) Normalize() ModelConfig

Normalize returns a copy of ModelConfig with defaults applied.

type ModelInfo

type ModelInfo struct {
	// ProviderID is the inference provider identifier (e.g. "openai", "anthropic").
	ProviderID string
	// ModelID is the model identifier within the provider (e.g. "gpt-4o").
	ModelID string
	// Name is the human-readable model name.
	Name string
	// ContextWindow is the model's context window in tokens.
	ContextWindow int64
	// DefaultMaxTokens is the model's default maximum output tokens.
	DefaultMaxTokens int64
	// CanReason indicates whether the model supports extended reasoning.
	CanReason bool
	// Capabilities declares model modalities/features (e.g. text, image, audio).
	Capabilities []ModelCapability
	// SupportsImages indicates whether the model accepts image input.
	// Kept for backward compatibility; prefer Capabilities.
	SupportsImages bool
	// CostPer1MIn is the cost per 1M input tokens in USD.
	CostPer1MIn float64
	// CostPer1MOut is the cost per 1M output tokens in USD.
	CostPer1MOut float64
	// CostPer1MInCached is the cost per 1M cached input tokens in USD.
	CostPer1MInCached float64
	// CostPer1MOutCached is the cost per 1M cached output tokens in USD.
	CostPer1MOutCached float64
}

ModelInfo carries metadata about a model as returned by the catalog.

func (ModelInfo) HasCapability

func (m ModelInfo) HasCapability(c ModelCapability) bool

HasCapability reports whether the model advertises a capability.

type OnCompleteHook added in v0.4.8

type OnCompleteHook func(ctx context.Context, final *AssistantMessage, messages []Message) (injectUserMessage string, ok bool, err error)

OnCompleteHook runs after a final assistant response is produced for a turn with no pending tool calls. It can accept the response or inject a user follow-up and request another turn in the same run.
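A sketch of a hook that forces the model to finish with a sentinel token (the "DONE" convention is illustrative):

```go
opts.OnCompleteHook = func(ctx context.Context, final *libagent.AssistantMessage, messages []libagent.Message) (string, bool, error) {
	if !strings.Contains(libagent.AssistantText(final), "DONE") {
		// Reject: inject a user follow-up and run another turn.
		return "Please finish the task and end your reply with DONE.", false, nil
	}
	return "", true, nil // accept the response
}
```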

type RuntimeModel

type RuntimeModel struct {
	Model     fantasy.LanguageModel
	ModelInfo ModelInfo
	ModelCfg  ModelConfig
	// ProviderType is the catwalk provider type string (e.g. "openai", "anthropic").
	// Used by BuildProviderOptions to route to the correct provider option parser.
	ProviderType string
	// CatalogProviderOptions are default provider options from the catwalk catalog entry.
	CatalogProviderOptions map[string]any
}

RuntimeModel bundles a resolved language model with its config and catalog metadata.

func (RuntimeModel) BuildCallProviderOptions

func (r RuntimeModel) BuildCallProviderOptions(systemPrompt string) fantasy.ProviderOptions

BuildCallProviderOptions constructs fantasy.ProviderOptions for a single LLM call using the model's provider type, thinking level, and raw overrides.

func (RuntimeModel) EffectiveContextWindow

func (r RuntimeModel) EffectiveContextWindow() int64

EffectiveContextWindow returns the best-known context window for this model, preferring the catalog value over the stored config value.

func (RuntimeModel) EffectiveMaxImages added in v0.4.10

func (r RuntimeModel) EffectiveMaxImages() int

EffectiveMaxImages returns the configured image attachment budget for runtime context.

func (RuntimeModel) MediaSupport

func (r RuntimeModel) MediaSupport() MediaSupport

MediaSupport reports runtime media capability metadata derived from catalog model info. When model identity is unknown, Known is false and callers should avoid destructive filtering.

type Schema

type Schema = map[string]any

Schema is an alias for a JSON Schema property map, matching the fantasy/schema format.

type StaticTextModel added in v0.4.1

type StaticTextModel struct {
	Response   string
	PromptLen  int
	PromptJSON string
}

StaticTextModel is a small test helper that streams a fixed text response and captures prompt metadata without requiring callers outside libagent to import fantasy.

func (*StaticTextModel) Generate added in v0.4.1

func (*StaticTextModel) GenerateObject added in v0.4.1

func (*StaticTextModel) Model added in v0.4.1

func (m *StaticTextModel) Model() string

func (*StaticTextModel) Provider added in v0.4.1

func (m *StaticTextModel) Provider() string

func (*StaticTextModel) Stream added in v0.4.1

func (*StaticTextModel) StreamObject added in v0.4.1

type StreamDelta

type StreamDelta struct {
	// Type is one of: "text_delta", "reasoning_delta", "tool_input_delta",
	// "text_start", "text_end", "reasoning_start", "reasoning_end",
	// "tool_input_start", "tool_input_end".
	Type string
	// ID is the content block ID.
	ID string
	// Delta is the incremental text (for delta events).
	Delta string
	// ToolName is the tool name (for tool_input_start).
	ToolName string
}

StreamDelta describes a single streaming increment from the assistant.
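
A consumer typically switches on `Type` and accumulates the deltas it cares about. A minimal sketch, with `StreamDelta` mirrored locally so it runs standalone, that folds a delta stream into the final assistant text while ignoring reasoning and tool-input events:

```go
package main

import "fmt"

// StreamDelta mirrored locally from the documented shape.
type StreamDelta struct {
	Type     string
	ID       string
	Delta    string
	ToolName string
}

// collectText concatenates only text_delta events into the final text.
func collectText(deltas []StreamDelta) string {
	var b []byte
	for _, d := range deltas {
		if d.Type == "text_delta" {
			b = append(b, d.Delta...)
		}
	}
	return string(b)
}

func main() {
	ds := []StreamDelta{
		{Type: "text_start", ID: "t1"},
		{Type: "text_delta", ID: "t1", Delta: "Hel"},
		{Type: "reasoning_delta", ID: "r1", Delta: "thinking..."},
		{Type: "text_delta", ID: "t1", Delta: "lo"},
		{Type: "text_end", ID: "t1"},
	}
	fmt.Println(collectText(ds)) // prints "Hello"
}
```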

type StreamingAgentTool

type StreamingAgentTool interface {
	fantasy.AgentTool
	RunStreaming(ctx context.Context, params fantasy.ToolCall, onUpdate ToolUpdateFn) (fantasy.ToolResponse, error)
}

StreamingAgentTool is an optional extension of fantasy.AgentTool for tools that want to stream incremental updates during execution. If a tool implements this interface, libagent calls RunStreaming instead of Run and emits tool_execution_update events for each call to onUpdate.

type StreamingTool added in v0.4.0

type StreamingTool interface {
	Tool
	RunStreaming(ctx context.Context, call ToolCall, onUpdate func(ToolResponse)) (ToolResponse, error)
}

StreamingTool optionally extends Tool with incremental update support.

type ThinkingLevel

type ThinkingLevel string

ThinkingLevel is a provider-agnostic reasoning intensity.

const (
	ThinkingLevelLow    ThinkingLevel = "low"
	ThinkingLevelMedium ThinkingLevel = "medium"
	ThinkingLevelHigh   ThinkingLevel = "high"
	ThinkingLevelMax    ThinkingLevel = "max"
)

func NormalizeThinkingLevel

func NormalizeThinkingLevel(level ThinkingLevel) ThinkingLevel

NormalizeThinkingLevel resolves a ThinkingLevel string into a canonical level.

func (ThinkingLevel) Enabled

func (l ThinkingLevel) Enabled() bool

Enabled reports whether thinking is turned on.

type Tool

type Tool interface {
	Info() ToolInfo
	Run(ctx context.Context, call ToolCall) (ToolResponse, error)
}

Tool is the provider-agnostic tool interface implemented by callers. It uses only stdlib types so that packages outside libagent can implement it without importing charm.

func NewParallelTypedTool

func NewParallelTypedTool[TInput any](
	name, description string,
	fn func(ctx context.Context, input TInput, call ToolCall) (ToolResponse, error),
) Tool

NewParallelTypedTool creates a Tool marked as safe for parallel execution.

func NewTypedTool

func NewTypedTool[TInput any](
	name, description string,
	fn func(ctx context.Context, input TInput, call ToolCall) (ToolResponse, error),
) Tool

NewTypedTool creates a Tool with automatic JSON schema generation from TInput. The handler receives a parsed TInput struct.
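
To illustrate the kind of schema derivation this implies, here is a standalone sketch that turns a struct's exported fields into a JSON-Schema-style property map using reflection and `json` tags. This is illustrative only; libagent's actual generator (via fantasy/schema) is richer.

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// schemaFor maps each exported struct field to a property keyed by its
// json tag, with a coarse Go-kind-to-JSON-type translation.
func schemaFor(v any) map[string]any {
	props := map[string]any{}
	t := reflect.TypeOf(v)
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		name := strings.Split(f.Tag.Get("json"), ",")[0]
		if name == "" {
			name = f.Name
		}
		kind := "string"
		switch f.Type.Kind() {
		case reflect.Int, reflect.Int64, reflect.Float64:
			kind = "number"
		case reflect.Bool:
			kind = "boolean"
		}
		props[name] = map[string]any{"type": kind}
	}
	return props
}

// GrepInput is a hypothetical tool input struct.
type GrepInput struct {
	Pattern string `json:"pattern"`
	Limit   int    `json:"limit"`
}

func main() {
	fmt.Println(schemaFor(GrepInput{}))
}
```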

type ToolCall

type ToolCall struct {
	ID    string
	Name  string
	Input string
}

ToolCall is a tool invocation from the LLM.

type ToolCallItem

type ToolCallItem struct {
	ID               string
	Name             string
	Input            string
	ProviderExecuted bool
}

ToolCallItem carries the data for a single assistant tool call, used when constructing AssistantMessages from stored history.

func AssistantToolCalls added in v0.2.11

func AssistantToolCalls(msg *AssistantMessage) []ToolCallItem

type ToolInfo

type ToolInfo struct {
	Name        string
	Description string
	Parameters  map[string]any
	Required    []string
	Parallel    bool
}

ToolInfo describes a callable tool using only stdlib types.

type ToolResponse

type ToolResponse struct {
	Type      ToolResponseType
	Content   string
	Data      []byte
	MediaType string
	Metadata  string
	IsError   bool
}

ToolResponse is the response returned by a tool using only stdlib types.

func NewMediaResponse

func NewMediaResponse(data []byte, mediaType string) ToolResponse

NewMediaResponse creates a binary media tool response.

func NewTextErrorResponse

func NewTextErrorResponse(content string) ToolResponse

NewTextErrorResponse creates an error text tool response.

func NewTextResponse

func NewTextResponse(content string) ToolResponse

NewTextResponse creates a successful text tool response.

func WithResponseMetadata

func WithResponseMetadata(response ToolResponse, metadata any) ToolResponse

WithResponseMetadata attaches JSON-marshalled metadata to a response.
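
A sketch of what this attachment amounts to, with `ToolResponse` mirrored locally (subset of fields): the metadata value is JSON-marshalled and stored on the response's `Metadata` field. The marshal-failure behavior shown here is an assumption, not libagent's documented semantics.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ToolResponse mirrored locally (subset of the documented fields).
type ToolResponse struct {
	Type     string
	Content  string
	Metadata string
}

// withMetadata JSON-marshals metadata into the Metadata field.
// Assumption: on marshal failure the response is returned unchanged.
func withMetadata(r ToolResponse, metadata any) ToolResponse {
	b, err := json.Marshal(metadata)
	if err != nil {
		return r
	}
	r.Metadata = string(b)
	return r
}

func main() {
	r := withMetadata(ToolResponse{Type: "text", Content: "3 files"}, map[string]int{"matches": 3})
	fmt.Println(r.Metadata) // prints {"matches":3}
}
```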

type ToolResponseType

type ToolResponseType string

ToolResponseType identifies the payload kind returned by tools.

const (
	ToolResponseTypeText  ToolResponseType = "text"
	ToolResponseTypeMedia ToolResponseType = "media"
)

type ToolResult

type ToolResult struct {
	ToolCallID string
	Name       string
	Content    string
	IsError    bool
	Data       []byte
	MIMEType   string
	Metadata   string
}

ToolResult carries the output of a single tool execution, used when constructing ToolResultMessages from stored history.

type ToolResultMessage

type ToolResultMessage struct {
	Role       string
	ToolCallID string
	ToolName   string
	Content    string
	IsError    bool
	Data       []byte
	MIMEType   string
	Metadata   string
	Timestamp  time.Time
	Meta       MessageMeta
}

ToolResultMessage carries the result of a tool execution.

func NewToolResultMessage

func NewToolResultMessage(tr ToolResult, ts time.Time) *ToolResultMessage

NewToolResultMessage builds a ToolResultMessage from plain-Go data.

func (*ToolResultMessage) GetRole

func (m *ToolResultMessage) GetRole() string

func (*ToolResultMessage) GetTimestamp

func (m *ToolResultMessage) GetTimestamp() time.Time

type ToolUpdateFn

type ToolUpdateFn func(partial fantasy.ToolResponse)

ToolUpdateFn is the callback a streaming tool invokes to push partial results. Each partial has the same type as the final ToolResponse.

type TransformContextFn

type TransformContextFn func(ctx context.Context, messages []Message) ([]Message, error)

TransformContextFn optionally transforms messages before ConvertToLLMFn. Use it for context-window pruning or injecting external context.

type UserMessage

type UserMessage struct {
	Role      string
	Content   string
	Files     []FilePart
	Timestamp time.Time
	Meta      MessageMeta
}

UserMessage is a standard user message.

func (*UserMessage) GetRole

func (m *UserMessage) GetRole() string

func (*UserMessage) GetTimestamp

func (m *UserMessage) GetTimestamp() time.Time

Directories

Path Synopsis
cmd
synthetic command
zen command
Package oauth provides OAuth login and token-refresh flows for AI providers that require OAuth rather than plain API keys:
