provider

package
v0.5.4 Latest
Published: Mar 28, 2026 License: MIT Imports: 37 Imported by: 3

Documentation

Overview

Package provider defines the AI provider interface for agent backends.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func DownloadHuggingFaceFile added in v0.5.3

func DownloadHuggingFaceFile(ctx context.Context, repo, filename, outputDir string, progress func(pct float64)) (string, error)

DownloadHuggingFaceFile downloads any file from a HuggingFace model repo. The file is saved to outputDir/<repo-slug>/<filename> where repo-slug replaces "/" with "--". If the file already exists it is returned as-is. Downloads resume from a .part temp file if interrupted. The optional progress callback receives completion percentage (0.0–1.0).
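
The destination layout described above can be sketched as follows. This is an illustrative helper, not a function exported by the package:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// hfLocalPath mirrors the documented layout outputDir/<repo-slug>/<filename>,
// where repo-slug replaces "/" with "--".
func hfLocalPath(outputDir, repo, filename string) string {
	slug := strings.ReplaceAll(repo, "/", "--")
	return filepath.Join(outputDir, slug, filename)
}

func main() {
	fmt.Println(hfLocalPath("/tmp/models", "Qwen/Qwen2.5-7B-Instruct-GGUF", "model-q4.gguf"))
	// /tmp/models/Qwen--Qwen2.5-7B-Instruct-GGUF/model-q4.gguf
}
```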

func EnsureLlamaServer added in v0.5.3

func EnsureLlamaServer(ctx context.Context) (string, error)

EnsureLlamaServer finds or downloads the llama-server binary. Search order: PATH (checked by caller) → cache dir → download from GitHub releases.

func HuggingFaceBaseURL added in v0.5.3

func HuggingFaceBaseURL() string

HuggingFaceBaseURL returns the base URL used for HuggingFace downloads.

func NewAnthropicBedrockProvider

func NewAnthropicBedrockProvider(cfg AnthropicBedrockConfig) (*anthropicBedrockProvider, error)

NewAnthropicBedrockProvider creates a provider that accesses Claude via Amazon Bedrock.

Docs: https://platform.claude.com/docs/en/build-with-claude/claude-on-amazon-bedrock

func NewAnthropicFoundryProvider

func NewAnthropicFoundryProvider(cfg AnthropicFoundryConfig) (*anthropicFoundryProvider, error)

NewAnthropicFoundryProvider creates a provider that accesses Claude via Azure AI Foundry.

Docs: https://platform.claude.com/docs/en/build-with-claude/claude-in-microsoft-foundry

func NewAnthropicVertexProvider

func NewAnthropicVertexProvider(cfg AnthropicVertexConfig) (*anthropicVertexProvider, error)

NewAnthropicVertexProvider creates a provider that accesses Claude via Google Vertex AI.

Docs: https://platform.claude.com/docs/en/build-with-claude/claude-on-vertex-ai

func ParseThinking added in v0.5.3

func ParseThinking(raw string) (thinking, content string)

ParseThinking extracts <think>...</think> blocks from model output. The first block's content becomes thinking; the remainder becomes content. If no <think> block is present, all text is returned as content.
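
A minimal local sketch of the documented behavior (not the package's implementation; whitespace trimming is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// parseThinking illustrates ParseThinking's contract: the first
// <think>...</think> block becomes thinking; the remainder is content.
func parseThinking(raw string) (thinking, content string) {
	start := strings.Index(raw, "<think>")
	if start == -1 {
		return "", raw
	}
	end := strings.Index(raw, "</think>")
	if end == -1 || end < start {
		return "", raw
	}
	thinking = raw[start+len("<think>") : end]
	content = raw[:start] + raw[end+len("</think>"):]
	return strings.TrimSpace(thinking), strings.TrimSpace(content)
}

func main() {
	th, c := parseThinking("<think>plan the answer</think>Hello!")
	fmt.Printf("thinking=%q content=%q\n", th, c)
	// thinking="plan the answer" content="Hello!"
}
```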

func SetHuggingFaceBaseURL added in v0.5.3

func SetHuggingFaceBaseURL(url string)

SetHuggingFaceBaseURL overrides the base URL (used in tests).

func ValidateBaseURL added in v0.5.3

func ValidateBaseURL(rawURL string) error

ValidateBaseURL checks that a caller-supplied base URL is safe to contact:

  • Empty string is always allowed (callers fall back to the default URL).
  • Scheme must be "https".
  • Resolved IP must not be loopback, link-local, or RFC 1918 private.
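
The checks above can be sketched like this. For brevity the sketch only inspects IP-literal hosts; the real function also resolves hostnames before testing the address:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"net/url"
)

// validateBaseURL illustrates the documented rules: empty is allowed, the
// scheme must be https, and the address must not be loopback, link-local,
// or private.
func validateBaseURL(rawURL string) error {
	if rawURL == "" {
		return nil // callers fall back to the default URL
	}
	u, err := url.Parse(rawURL)
	if err != nil {
		return err
	}
	if u.Scheme != "https" {
		return errors.New("scheme must be https")
	}
	if ip := net.ParseIP(u.Hostname()); ip != nil {
		if ip.IsLoopback() || ip.IsLinkLocalUnicast() || ip.IsPrivate() {
			return errors.New("IP is loopback, link-local, or private")
		}
	}
	return nil
}

func main() {
	fmt.Println(validateBaseURL(""))                     // <nil>
	fmt.Println(validateBaseURL("http://example.com"))   // scheme must be https
	fmt.Println(validateBaseURL("https://10.0.0.1/api")) // IP is loopback, link-local, or private
}
```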

Types

type AnthropicBedrockConfig

type AnthropicBedrockConfig struct {
	// Region is the AWS region (e.g. "us-east-1").
	Region string
	// Model is the Bedrock model ID (e.g. "anthropic.claude-sonnet-4-20250514-v1:0").
	Model string
	// MaxTokens limits the response length.
	MaxTokens int
	// AccessKeyID is the AWS access key (required).
	AccessKeyID string
	// SecretAccessKey is the AWS secret key (required).
	SecretAccessKey string
	// SessionToken is the AWS session token for temporary credentials (optional).
	SessionToken string
	// Profile is the AWS config profile name (reserved for future use).
	Profile string
	// HTTPClient is the HTTP client to use (defaults to http.DefaultClient).
	HTTPClient *http.Client
	// BaseURL overrides the endpoint (for testing).
	BaseURL string
}

AnthropicBedrockConfig configures the Anthropic provider for Amazon Bedrock. Uses AWS IAM SigV4 authentication against the Bedrock Runtime API.

type AnthropicConfig

type AnthropicConfig struct {
	APIKey     string
	Model      string
	BaseURL    string
	MaxTokens  int
	HTTPClient *http.Client
}

AnthropicConfig holds configuration for the Anthropic provider.

type AnthropicFoundryConfig

type AnthropicFoundryConfig struct {
	// Resource is the Azure AI Services resource name (forms the URL: {resource}.services.ai.azure.com).
	Resource string
	// Model is the model deployment name.
	Model string
	// MaxTokens limits the response length.
	MaxTokens int
	// APIKey is the Azure API key (use this OR Entra ID token, not both).
	APIKey string
	// EntraToken is a Microsoft Entra ID bearer token (optional, alternative to APIKey).
	EntraToken string
	// HTTPClient is the HTTP client to use (defaults to http.DefaultClient).
	HTTPClient *http.Client
}

AnthropicFoundryConfig configures the Anthropic provider for Microsoft Azure AI Foundry. Uses Azure API keys or Entra ID (formerly Azure AD) tokens.

type AnthropicProvider

type AnthropicProvider struct {
	// contains filtered or unexported fields
}

AnthropicProvider implements Provider using the Anthropic Messages API.

func NewAnthropicProvider

func NewAnthropicProvider(cfg AnthropicConfig) *AnthropicProvider

NewAnthropicProvider creates a new Anthropic provider with the given config.

func (*AnthropicProvider) AuthModeInfo

func (p *AnthropicProvider) AuthModeInfo() AuthModeInfo

func (*AnthropicProvider) Chat

func (p *AnthropicProvider) Chat(ctx context.Context, messages []Message, tools []ToolDef) (*Response, error)

func (*AnthropicProvider) Name

func (p *AnthropicProvider) Name() string

func (*AnthropicProvider) Stream

func (p *AnthropicProvider) Stream(ctx context.Context, messages []Message, tools []ToolDef) (<-chan StreamEvent, error)

type AnthropicVertexConfig

type AnthropicVertexConfig struct {
	// ProjectID is the GCP project ID.
	ProjectID string
	// Region is the GCP region (e.g. "us-east5", "europe-west1").
	Region string
	// Model is the Vertex model ID (e.g. "claude-sonnet-4@20250514").
	Model string
	// MaxTokens limits the response length.
	MaxTokens int
	// CredentialsJSON is the GCP service account JSON (optional if using ADC).
	CredentialsJSON string
	// TokenSource provides OAuth2 tokens (optional, for testing or custom auth).
	// If set, CredentialsJSON and ADC are ignored.
	TokenSource oauth2.TokenSource
	// HTTPClient is the HTTP client to use (defaults to http.DefaultClient).
	HTTPClient *http.Client
}

AnthropicVertexConfig configures the Anthropic provider for Google Vertex AI. Uses GCP Application Default Credentials (ADC) or explicit OAuth2 tokens.

type AuthModeInfo

type AuthModeInfo struct {
	Mode        string // e.g. "personal", "direct", "bedrock"
	DisplayName string // e.g. "GitHub Copilot (Personal/IDE)"
	Description string // What this mode does
	Warning     string // ToS/usage concerns (empty if none)
	DocsURL     string // Link to official documentation
	ServerSafe  bool   // Whether this mode is appropriate for server/service use
}

AuthModeInfo describes an authentication/deployment mode for a provider backend.

func AllAuthModes

func AllAuthModes() []AuthModeInfo

AllAuthModes returns metadata for all known provider authentication modes, including both implemented and scaffolded providers.

func LocalAuthMode added in v0.5.3

func LocalAuthMode(name, displayName string) AuthModeInfo

LocalAuthMode returns an AuthModeInfo for a local (no-API-key) provider.

type ChannelSource

type ChannelSource struct {
	// contains filtered or unexported fields
}

ChannelSource delivers interactions via Go channels, enabling test goroutines to drive the agent loop interactively from within a Go test.

func NewChannelSource

func NewChannelSource() (source *ChannelSource, interactionsCh <-chan Interaction, responsesCh chan<- InteractionResponse)

NewChannelSource creates a ChannelSource and returns the source along with the test-side channels:

  • interactionsCh receives Interactions from the provider (test reads from this)
  • responsesCh accepts InteractionResponses from the test (test writes to this)

func (*ChannelSource) GetResponse

func (cs *ChannelSource) GetResponse(ctx context.Context, interaction Interaction) (*InteractionResponse, error)

GetResponse implements ResponseSource. It sends the interaction on the interactions channel and blocks until a response arrives on the responses channel or the context is cancelled.

type CohereConfig

type CohereConfig struct {
	APIKey     string
	Model      string
	BaseURL    string
	MaxTokens  int
	HTTPClient *http.Client
}

CohereConfig holds configuration for the Cohere provider.

type CohereProvider

type CohereProvider struct {
	// contains filtered or unexported fields
}

CohereProvider implements Provider using the Cohere Chat API v2.

func NewCohereProvider

func NewCohereProvider(cfg CohereConfig) *CohereProvider

NewCohereProvider creates a new Cohere provider with the given config.

func (*CohereProvider) AuthModeInfo

func (p *CohereProvider) AuthModeInfo() AuthModeInfo

func (*CohereProvider) Chat

func (p *CohereProvider) Chat(ctx context.Context, messages []Message, tools []ToolDef) (*Response, error)

func (*CohereProvider) Name

func (p *CohereProvider) Name() string

func (*CohereProvider) Stream

func (p *CohereProvider) Stream(ctx context.Context, messages []Message, tools []ToolDef) (<-chan StreamEvent, error)

type CopilotConfig

type CopilotConfig struct {
	Token      string
	Model      string
	BaseURL    string
	MaxTokens  int
	HTTPClient *http.Client
}

CopilotConfig holds configuration for the GitHub Copilot provider.

type CopilotModelsConfig

type CopilotModelsConfig struct {
	// Token is a GitHub fine-grained PAT with models:read permission.
	Token string
	// Model is the model identifier (e.g. "openai/gpt-4o", "anthropic/claude-sonnet-4").
	Model string
	// BaseURL overrides the default endpoint. Default: "https://models.github.ai/inference".
	BaseURL string
	// MaxTokens limits the response length.
	MaxTokens int
}

CopilotModelsConfig configures the GitHub Models provider. GitHub Models is a separate product from GitHub Copilot, available at models.github.ai. It uses fine-grained Personal Access Tokens with the models:read scope.

type CopilotModelsProvider

type CopilotModelsProvider struct {
	*OpenAIProvider
}

CopilotModelsProvider uses GitHub Models (models.github.ai) for inference. It wraps OpenAIProvider since GitHub Models uses an OpenAI-compatible API.

func NewCopilotModelsProvider

func NewCopilotModelsProvider(cfg CopilotModelsConfig) *CopilotModelsProvider

NewCopilotModelsProvider creates a provider that uses GitHub Models for inference. GitHub Models provides access to various AI models via a fine-grained PAT.

Docs: https://docs.github.com/en/rest/models/inference
Billing: https://docs.github.com/billing/managing-billing-for-your-products/about-billing-for-github-models

func (*CopilotModelsProvider) AuthModeInfo

func (p *CopilotModelsProvider) AuthModeInfo() AuthModeInfo

func (*CopilotModelsProvider) Name

func (p *CopilotModelsProvider) Name() string

type CopilotProvider

type CopilotProvider struct {
	// contains filtered or unexported fields
}

CopilotProvider implements Provider using the GitHub Copilot Chat API. The API follows the OpenAI Chat Completions format.

func NewCopilotProvider

func NewCopilotProvider(cfg CopilotConfig) *CopilotProvider

NewCopilotProvider creates a new Copilot provider with the given config.

func (*CopilotProvider) AuthModeInfo

func (p *CopilotProvider) AuthModeInfo() AuthModeInfo

func (*CopilotProvider) Chat

func (p *CopilotProvider) Chat(ctx context.Context, messages []Message, tools []ToolDef) (*Response, error)

func (*CopilotProvider) Name

func (p *CopilotProvider) Name() string

func (*CopilotProvider) Stream

func (p *CopilotProvider) Stream(ctx context.Context, messages []Message, tools []ToolDef) (<-chan StreamEvent, error)

type Embedder

type Embedder interface {
	Embed(ctx context.Context, text string) ([]float32, error)
}

Embedder is optionally implemented by providers that support text embedding.

func AsEmbedder

func AsEmbedder(p Provider) (Embedder, bool)

AsEmbedder checks if a Provider also implements Embedder.

type EventBroadcaster

type EventBroadcaster interface {
	BroadcastEvent(eventType, data string)
}

EventBroadcaster is an optional interface for pushing SSE notifications when new test interactions arrive.

type GeminiConfig

type GeminiConfig struct {
	APIKey     string
	Model      string
	MaxTokens  int
	HTTPClient *http.Client
}

GeminiConfig holds configuration for the Google Gemini provider.

type GeminiProvider

type GeminiProvider struct {
	// contains filtered or unexported fields
}

GeminiProvider implements Provider using the Google Gemini API.

func NewGeminiProvider

func NewGeminiProvider(cfg GeminiConfig) (*GeminiProvider, error)

NewGeminiProvider creates a new Gemini provider. Returns an error if no API key is provided.

func (*GeminiProvider) AuthModeInfo

func (p *GeminiProvider) AuthModeInfo() AuthModeInfo

func (*GeminiProvider) Chat

func (p *GeminiProvider) Chat(ctx context.Context, messages []Message, tools []ToolDef) (*Response, error)

func (*GeminiProvider) Name

func (p *GeminiProvider) Name() string

func (*GeminiProvider) Stream

func (p *GeminiProvider) Stream(ctx context.Context, messages []Message, tools []ToolDef) (<-chan StreamEvent, error)

type HTTPSource

type HTTPSource struct {
	// contains filtered or unexported fields
}

HTTPSource exposes pending interactions via an API so that humans or QA scripts can act as the LLM. When the agent calls Chat(), the interaction is stored as pending and an event is broadcast. A subsequent API call provides the response, unblocking the waiting goroutine.

func NewHTTPSource

func NewHTTPSource(broadcaster EventBroadcaster) *HTTPSource

NewHTTPSource creates an HTTPSource. The optional EventBroadcaster is used to push notifications when new interactions arrive.

func (*HTTPSource) GetInteraction

func (h *HTTPSource) GetInteraction(id string) (*Interaction, error)

GetInteraction returns the full interaction details for a given ID.

func (*HTTPSource) GetResponse

func (h *HTTPSource) GetResponse(ctx context.Context, interaction Interaction) (*InteractionResponse, error)

GetResponse implements ResponseSource. It adds the interaction to the pending map, broadcasts an event, and blocks until a response is submitted via Respond() or the context is cancelled.

func (*HTTPSource) ListPending

func (h *HTTPSource) ListPending() []InteractionSummary

ListPending returns summaries of all pending interactions.

func (*HTTPSource) PendingCount

func (h *HTTPSource) PendingCount() int

PendingCount returns the number of interactions awaiting responses.

func (*HTTPSource) Respond

func (h *HTTPSource) Respond(id string, resp InteractionResponse) error

Respond submits a response for a pending interaction, unblocking the waiting GetResponse() call.

func (*HTTPSource) SetBroadcaster

func (h *HTTPSource) SetBroadcaster(broadcaster EventBroadcaster)

SetBroadcaster sets or replaces the event broadcaster for push notifications.

type Interaction

type Interaction struct {
	ID        string    `json:"id"`
	Messages  []Message `json:"messages"`
	Tools     []ToolDef `json:"tools"`
	CreatedAt time.Time `json:"created_at"`
}

Interaction represents a single LLM call that needs a response.

type InteractionResponse

type InteractionResponse struct {
	Content   string     `json:"content"`
	ToolCalls []ToolCall `json:"tool_calls,omitempty"`
	Error     string     `json:"error,omitempty"`
	Usage     Usage      `json:"usage,omitempty"`
}

InteractionResponse is the response supplied by a ResponseSource.

type InteractionSummary

type InteractionSummary struct {
	ID        string    `json:"id"`
	MsgCount  int       `json:"msg_count"`
	ToolCount int       `json:"tool_count"`
	CreatedAt time.Time `json:"created_at"`
}

InteractionSummary is a brief view of a pending interaction for list endpoints.

type LlamaCppConfig added in v0.5.3

type LlamaCppConfig struct {
	BaseURL     string // external mode: OpenAI-compatible server URL
	ModelPath   string // managed mode: path to .gguf model file
	ModelName   string // model name sent to server (external mode); defaults to "local"
	BinaryPath  string // override llama-server binary location
	GPULayers   int    // -ngl flag; 0 → default -1 (all layers)
	ContextSize int    // -c flag; default 8192
	Threads     int    // -t flag; default runtime.NumCPU()
	Port        int    // server port; default 8081
	MaxTokens   int    // default 4096
	HTTPClient  *http.Client
}

LlamaCppConfig holds configuration for the LlamaCpp provider. Set BaseURL for external mode (any OpenAI-compatible server). Set ModelPath for managed mode (provider starts llama-server).

type LlamaCppProvider added in v0.5.3

type LlamaCppProvider struct {
	// contains filtered or unexported fields
}

LlamaCppProvider implements Provider using an OpenAI-compatible llama-server.

func NewLlamaCppProvider added in v0.5.3

func NewLlamaCppProvider(cfg LlamaCppConfig) *LlamaCppProvider

NewLlamaCppProvider creates a LlamaCppProvider with the given config. In external mode (BaseURL set), it points at the given URL. In managed mode (ModelPath set), call EnsureServer before use.

func (*LlamaCppProvider) AuthModeInfo added in v0.5.3

func (p *LlamaCppProvider) AuthModeInfo() AuthModeInfo

func (*LlamaCppProvider) Chat added in v0.5.3

func (p *LlamaCppProvider) Chat(ctx context.Context, messages []Message, tools []ToolDef) (*Response, error)

Chat sends a non-streaming request and applies ParseThinking to the response.

func (*LlamaCppProvider) Close added in v0.5.3

func (p *LlamaCppProvider) Close() error

Close kills the managed llama-server process if one was started and waits for it to exit to avoid zombie processes.

func (*LlamaCppProvider) EnsureServer added in v0.5.3

func (p *LlamaCppProvider) EnsureServer(ctx context.Context) error

EnsureServer starts the managed llama-server if ModelPath is configured. No-op in external mode. Must be called before Chat/Stream in managed mode.

func (*LlamaCppProvider) Name added in v0.5.3

func (p *LlamaCppProvider) Name() string

func (*LlamaCppProvider) Stream added in v0.5.3

func (p *LlamaCppProvider) Stream(ctx context.Context, messages []Message, tools []ToolDef) (<-chan StreamEvent, error)

Stream sends a streaming request, applying ThinkingStreamParser to text events.

type Message

type Message struct {
	Role       Role       `json:"role"`
	Content    string     `json:"content"`
	ToolCallID string     `json:"tool_call_id,omitempty"` // for tool results
	ToolCalls  []ToolCall `json:"tool_calls,omitempty"`   // for assistant messages with tool calls
}

Message is a single turn in a conversation.

type ModelInfo

type ModelInfo struct {
	ID            string `json:"id"`
	Name          string `json:"name"`
	ContextWindow int    `json:"context_window,omitempty"`
}

ModelInfo describes an available model from a provider.

func ListModels

func ListModels(ctx context.Context, providerType, apiKey, baseURL string) ([]ModelInfo, error)

ListModels fetches available models from the given provider type. Only requires an API key and optional base URL — no saved provider needed.

type OllamaConfig added in v0.5.3

type OllamaConfig struct {
	Model      string
	BaseURL    string
	MaxTokens  int
	HTTPClient *http.Client
}

OllamaConfig holds configuration for the Ollama provider.

type OllamaProvider added in v0.5.3

type OllamaProvider struct {
	// contains filtered or unexported fields
}

OllamaProvider implements Provider using a local Ollama server.

func NewOllamaProvider added in v0.5.3

func NewOllamaProvider(cfg OllamaConfig) *OllamaProvider

NewOllamaProvider creates a new OllamaProvider with the given config.

func (*OllamaProvider) AuthModeInfo added in v0.5.3

func (p *OllamaProvider) AuthModeInfo() AuthModeInfo

func (*OllamaProvider) Chat added in v0.5.3

func (p *OllamaProvider) Chat(ctx context.Context, messages []Message, tools []ToolDef) (*Response, error)

Chat sends a non-streaming request and returns the complete response.

func (*OllamaProvider) Health added in v0.5.3

func (p *OllamaProvider) Health(ctx context.Context) error

Health checks whether the Ollama server is reachable.

func (*OllamaProvider) ListModels added in v0.5.3

func (p *OllamaProvider) ListModels(ctx context.Context) ([]ModelInfo, error)

ListModels returns the models available on the Ollama server.

func (*OllamaProvider) Name added in v0.5.3

func (p *OllamaProvider) Name() string

func (*OllamaProvider) Pull added in v0.5.3

func (p *OllamaProvider) Pull(ctx context.Context, model string, progressFn func(pct float64)) error

Pull downloads a model via the Ollama server. progressFn is called with percent completion (0–100); may be nil.

func (*OllamaProvider) Stream added in v0.5.3

func (p *OllamaProvider) Stream(ctx context.Context, messages []Message, tools []ToolDef) (<-chan StreamEvent, error)

Stream sends a streaming request and emits events on the returned channel. Thinking tokens extracted via ThinkingStreamParser are emitted as "thinking" events.

type OpenAIAzureConfig

type OpenAIAzureConfig struct {
	// Resource is the Azure OpenAI resource name.
	Resource string
	// DeploymentName is the model deployment name in Azure.
	DeploymentName string
	// APIVersion is the Azure API version (e.g. "2024-10-21").
	APIVersion string
	// MaxTokens limits the response length.
	MaxTokens int
	// APIKey is the Azure API key (use this OR Entra ID token, not both).
	APIKey string
	// EntraToken is a Microsoft Entra ID bearer token (optional, alternative to APIKey).
	EntraToken string
	// HTTPClient overrides the default HTTP client.
	HTTPClient *http.Client
	// BaseURL overrides the computed Azure endpoint (used in tests).
	BaseURL string
}

OpenAIAzureConfig configures the OpenAI provider for Azure OpenAI Service. Uses Azure API keys or Entra ID tokens. URLs follow the pattern: {resource}.openai.azure.com/openai/deployments/{deployment}/chat/completions?api-version={version}
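
The URL pattern above can be built as in this sketch (an illustrative helper; the provider computes the endpoint internally):

```go
package main

import "fmt"

// azureChatURL assembles the documented Azure OpenAI endpoint from the
// resource name, deployment name, and API version.
func azureChatURL(resource, deployment, apiVersion string) string {
	return fmt.Sprintf(
		"https://%s.openai.azure.com/openai/deployments/%s/chat/completions?api-version=%s",
		resource, deployment, apiVersion)
}

func main() {
	fmt.Println(azureChatURL("myresource", "gpt-4o", "2024-10-21"))
	// https://myresource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-10-21
}
```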

type OpenAIAzureProvider

type OpenAIAzureProvider struct {
	// contains filtered or unexported fields
}

OpenAIAzureProvider accesses OpenAI models via Azure OpenAI Service.

func NewOpenAIAzureProvider

func NewOpenAIAzureProvider(cfg OpenAIAzureConfig) (*OpenAIAzureProvider, error)

NewOpenAIAzureProvider creates a provider that accesses OpenAI models via Azure.

Docs: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference

func (*OpenAIAzureProvider) AuthModeInfo

func (p *OpenAIAzureProvider) AuthModeInfo() AuthModeInfo

func (*OpenAIAzureProvider) Chat

func (p *OpenAIAzureProvider) Chat(ctx context.Context, messages []Message, tools []ToolDef) (*Response, error)

func (*OpenAIAzureProvider) Name

func (p *OpenAIAzureProvider) Name() string

func (*OpenAIAzureProvider) Stream

func (p *OpenAIAzureProvider) Stream(ctx context.Context, messages []Message, tools []ToolDef) (<-chan StreamEvent, error)

type OpenAIConfig

type OpenAIConfig struct {
	APIKey     string
	Model      string
	BaseURL    string
	MaxTokens  int
	HTTPClient *http.Client
}

OpenAIConfig holds configuration for the OpenAI provider.

type OpenAIProvider

type OpenAIProvider struct {
	// contains filtered or unexported fields
}

OpenAIProvider implements Provider using the OpenAI Chat Completions API.

func NewOpenAIProvider

func NewOpenAIProvider(cfg OpenAIConfig) *OpenAIProvider

NewOpenAIProvider creates a new OpenAI provider with the given config.

func (*OpenAIProvider) AuthModeInfo

func (p *OpenAIProvider) AuthModeInfo() AuthModeInfo

func (*OpenAIProvider) Chat

func (p *OpenAIProvider) Chat(ctx context.Context, messages []Message, tools []ToolDef) (*Response, error)

func (*OpenAIProvider) Name

func (p *OpenAIProvider) Name() string

func (*OpenAIProvider) Stream

func (p *OpenAIProvider) Stream(ctx context.Context, messages []Message, tools []ToolDef) (<-chan StreamEvent, error)

type OpenRouterConfig

type OpenRouterConfig struct {
	APIKey    string
	Model     string
	BaseURL   string
	MaxTokens int
}

OpenRouterConfig configures the OpenRouter provider.

type OpenRouterProvider

type OpenRouterProvider struct {
	*OpenAIProvider
}

OpenRouterProvider wraps OpenAIProvider with OpenRouter-specific identity and auth info.

func NewOpenRouterProvider

func NewOpenRouterProvider(cfg OpenRouterConfig) *OpenRouterProvider

NewOpenRouterProvider creates a provider that uses OpenRouter's OpenAI-compatible API.

func (*OpenRouterProvider) AuthModeInfo

func (p *OpenRouterProvider) AuthModeInfo() AuthModeInfo

func (*OpenRouterProvider) Name

func (p *OpenRouterProvider) Name() string

type Provider

type Provider interface {
	// Name returns the provider identifier (e.g., "anthropic", "openai", "mock").
	Name() string

	// Chat sends a non-streaming request and returns the complete response.
	Chat(ctx context.Context, messages []Message, tools []ToolDef) (*Response, error)

	// Stream sends a streaming request. Events are delivered on the returned channel.
	// The channel is closed when the response is complete or an error occurs.
	Stream(ctx context.Context, messages []Message, tools []ToolDef) (<-chan StreamEvent, error)

	// AuthModeInfo returns metadata about this provider's authentication mode.
	AuthModeInfo() AuthModeInfo
}

Provider is an AI backend that powers agent reasoning.

type Response

type Response struct {
	Content   string     `json:"content"`
	Thinking  string     `json:"thinking,omitempty"` // reasoning trace (e.g. from <think> tags)
	ToolCalls []ToolCall `json:"tool_calls,omitempty"`
	Usage     Usage      `json:"usage"`
}

Response is a completed (non-streaming) provider response.

type ResponseSource

type ResponseSource interface {
	// GetResponse receives the current interaction (messages + tools) and returns
	// a response. Implementations may block (e.g. waiting for human input).
	GetResponse(ctx context.Context, interaction Interaction) (*InteractionResponse, error)
}

ResponseSource is the interface that pluggable backends implement to supply responses for the TestProvider.

type Role

type Role string

Role identifies the sender of a chat message.

const (
	RoleSystem    Role = "system"
	RoleUser      Role = "user"
	RoleAssistant Role = "assistant"
	RoleTool      Role = "tool"
)

type ScriptedScenario

type ScriptedScenario struct {
	Name        string         `yaml:"name" json:"name"`
	Description string         `yaml:"description,omitempty" json:"description,omitempty"`
	Steps       []ScriptedStep `yaml:"steps" json:"steps"`
	Loop        bool           `yaml:"loop,omitempty" json:"loop,omitempty"`
}

ScriptedScenario is a named sequence of steps loadable from YAML.

func LoadScenario

func LoadScenario(path string) (*ScriptedScenario, error)

LoadScenario reads a ScriptedScenario from a YAML file.

type ScriptedSource

type ScriptedSource struct {
	// contains filtered or unexported fields
}

ScriptedSource returns responses from a pre-defined sequence of steps. It is safe for concurrent use.

func NewScriptedSource

func NewScriptedSource(steps []ScriptedStep, loop bool) *ScriptedSource

NewScriptedSource creates a ScriptedSource from the given steps. If loop is true, steps cycle indefinitely; otherwise GetResponse returns an error when all steps are exhausted.
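
The step-cycling semantics can be sketched with a simplified local type (string steps instead of full ScriptedSteps):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// scriptedSource serves steps in order; with loop set they cycle forever,
// otherwise exhaustion is an error.
type scriptedSource struct {
	mu    sync.Mutex
	steps []string
	loop  bool
	next  int
}

func (s *scriptedSource) getResponse() (string, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.next >= len(s.steps) {
		if !s.loop || len(s.steps) == 0 {
			return "", errors.New("script exhausted")
		}
		s.next = 0
	}
	step := s.steps[s.next]
	s.next++
	return step, nil
}

func main() {
	s := &scriptedSource{steps: []string{"a", "b"}, loop: true}
	for i := 0; i < 3; i++ {
		r, _ := s.getResponse()
		fmt.Print(r)
	}
	fmt.Println() // prints "aba"
}
```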

func NewScriptedSourceFromScenario

func NewScriptedSourceFromScenario(scenario *ScriptedScenario) *ScriptedSource

NewScriptedSourceFromScenario creates a ScriptedSource from a loaded scenario.

func (*ScriptedSource) GetResponse

func (s *ScriptedSource) GetResponse(ctx context.Context, interaction Interaction) (*InteractionResponse, error)

GetResponse implements ResponseSource.

func (*ScriptedSource) Remaining

func (s *ScriptedSource) Remaining() int

Remaining returns how many unconsumed steps remain.

type ScriptedStep

type ScriptedStep struct {
	Content   string        `yaml:"content" json:"content"`
	ToolCalls []ToolCall    `yaml:"tool_calls,omitempty" json:"tool_calls,omitempty"`
	Error     string        `yaml:"error,omitempty" json:"error,omitempty"`
	Delay     time.Duration `yaml:"delay,omitempty" json:"delay,omitempty"`
}

ScriptedStep defines a single scripted response in a test scenario.

type StreamEvent

type StreamEvent struct {
	Type     string    `json:"type"` // "text", "thinking", "tool_call", "done", "error"
	Text     string    `json:"text,omitempty"`
	Thinking string    `json:"thinking,omitempty"`
	Tool     *ToolCall `json:"tool,omitempty"`
	Error    string    `json:"error,omitempty"`
	Usage    *Usage    `json:"usage,omitempty"`
}

StreamEvent is emitted during streaming responses.
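
A typical consumption loop ranges over the channel and switches on Type, as in this sketch (local stand-in struct mirroring the fields above):

```go
package main

import (
	"fmt"
	"strings"
)

type streamEvent struct {
	Type, Text, Thinking, Error string
}

// collect accumulates text events until "done" or "error".
func collect(events <-chan streamEvent) (string, error) {
	var b strings.Builder
	for ev := range events {
		switch ev.Type {
		case "text":
			b.WriteString(ev.Text)
		case "thinking":
			// often surfaced separately in a UI; ignored here
		case "error":
			return b.String(), fmt.Errorf("stream error: %s", ev.Error)
		case "done":
			return b.String(), nil
		}
	}
	return b.String(), nil
}

func main() {
	ch := make(chan streamEvent, 4)
	ch <- streamEvent{Type: "thinking", Thinking: "plan"}
	ch <- streamEvent{Type: "text", Text: "Hello, "}
	ch <- streamEvent{Type: "text", Text: "world"}
	ch <- streamEvent{Type: "done"}
	close(ch)
	text, err := collect(ch)
	fmt.Println(text, err) // Hello, world <nil>
}
```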

type TestProvider

type TestProvider struct {
	// contains filtered or unexported fields
}

TestProvider implements Provider by delegating to a ResponseSource. It enables interactive and scripted E2E testing of the agent execution pipeline.

func NewTestProvider

func NewTestProvider(source ResponseSource, opts ...TestProviderOption) *TestProvider

NewTestProvider creates a TestProvider backed by the given ResponseSource.

func (*TestProvider) AuthModeInfo

func (tp *TestProvider) AuthModeInfo() AuthModeInfo

AuthModeInfo implements Provider.

func (*TestProvider) Chat

func (tp *TestProvider) Chat(ctx context.Context, messages []Message, tools []ToolDef) (*Response, error)

Chat implements Provider.

func (*TestProvider) InteractionCount

func (tp *TestProvider) InteractionCount() int64

InteractionCount returns how many interactions have been processed.

func (*TestProvider) Name

func (tp *TestProvider) Name() string

Name implements Provider.

func (*TestProvider) Source

func (tp *TestProvider) Source() ResponseSource

Source returns the underlying ResponseSource.

func (*TestProvider) Stream

func (tp *TestProvider) Stream(ctx context.Context, messages []Message, tools []ToolDef) (<-chan StreamEvent, error)

Stream implements Provider by wrapping Chat() into stream events.

type TestProviderOption

type TestProviderOption func(*TestProvider)

TestProviderOption configures a TestProvider.

func WithName

func WithName(s string) TestProviderOption

WithName sets the provider name returned by Name().

func WithTimeout

func WithTimeout(d time.Duration) TestProviderOption

WithTimeout sets the maximum time to wait for a response from the source.

type ThinkingStreamParser added in v0.5.3

type ThinkingStreamParser struct {
	// contains filtered or unexported fields
}

ThinkingStreamParser tracks state across streaming chunks to split thinking vs content tokens. It handles <think>/</think> tags split across chunk boundaries.

func (*ThinkingStreamParser) Feed added in v0.5.3

func (p *ThinkingStreamParser) Feed(chunk string) []StreamEvent

Feed processes a streaming chunk and returns zero or more StreamEvents. Events may be of type "thinking" (Thinking field set) or "text" (Text field set).

type ToolCall

type ToolCall struct {
	ID        string         `json:"id"`
	Name      string         `json:"name"`
	Arguments map[string]any `json:"arguments"`
}

ToolCall is a request from the AI to invoke a tool.

type ToolDef

type ToolDef struct {
	Name        string         `json:"name"`
	Description string         `json:"description"`
	Parameters  map[string]any `json:"parameters"` // JSON Schema
}

ToolDef describes a tool the agent can invoke.

type Usage

type Usage struct {
	InputTokens  int `json:"input_tokens"`
	OutputTokens int `json:"output_tokens"`
}

Usage tracks token consumption.
