Documentation ¶
Index ¶
- Constants
- type AnthropicConfig
- type AnthropicProvider
- type ContentBlock
- type ContentBlockType
- type Message
- type MiddlewareState
- type Model
- type OpenAIConfig
- type OpenAIProvider
- type Provider
- type ProviderFunc
- type Request
- type Response
- type StreamHandler
- type StreamOnlyModel
- type StreamOnlyProvider
- type StreamResult
- type ToolCall
- type ToolDefinition
- type Usage
Constants ¶
const (
// MiddlewareStateKey exposes the context key so other packages can attach middleware state.
MiddlewareStateKey = middlewareStateKey
)
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AnthropicConfig ¶
type AnthropicConfig struct {
APIKey string
BaseURL string
Model string
MaxTokens int
MaxRetries int
System string
Temperature *float64
HTTPClient *http.Client
}
AnthropicConfig wires a plain anthropic-sdk-go client into the Model interface.
type AnthropicProvider ¶
type AnthropicProvider struct {
APIKey string
BaseURL string
ModelName string
MaxTokens int
MaxRetries int
System string
Temperature *float64
CacheTTL time.Duration
// contains filtered or unexported fields
}
AnthropicProvider caches Anthropic clients with an optional TTL.
type ContentBlock ¶
type ContentBlock struct {
Type ContentBlockType `json:"type"`
Text string `json:"text,omitempty"`
MediaType string `json:"media_type,omitempty"` // e.g. "image/jpeg", "application/pdf"
Data string `json:"data,omitempty"` // base64-encoded content
URL string `json:"url,omitempty"` // URL source (alternative to Data)
}
ContentBlock represents a single piece of content within a message. For text blocks only Text is populated. For image/document blocks, MediaType and either Data (base64) or URL are set.
type ContentBlockType ¶
type ContentBlockType string
ContentBlockType discriminates the kind of content in a ContentBlock.
const (
	ContentBlockText     ContentBlockType = "text"
	ContentBlockImage    ContentBlockType = "image"
	ContentBlockDocument ContentBlockType = "document"
)
type Message ¶
type Message struct {
Role string
Content string
ContentBlocks []ContentBlock // Multimodal content; takes precedence over Content when non-empty
ToolCalls []ToolCall
ReasoningContent string // For thinking models (e.g. DeepSeek, Kimi k2.5)
}
Message represents a single conversational turn. Tool calls emitted by the assistant are kept on ToolCalls. When ContentBlocks is non-empty it takes precedence over Content for multimodal messages (images, documents). Existing text-only callers continue to use Content unchanged.
func (Message) TextContent ¶
func (m Message) TextContent() string
TextContent returns the text portion of the message. When ContentBlocks is populated it concatenates all text blocks; otherwise it falls back to Content.
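A runnable sketch of the documented fallback behaviour, with simplified local stand-ins for Message and ContentBlock (the concatenation here joins text blocks with no separator, which is an assumption about the real implementation):

```go
package main

import (
	"fmt"
	"strings"
)

type ContentBlockType string

const ContentBlockText ContentBlockType = "text"

type ContentBlock struct {
	Type ContentBlockType
	Text string
}

type Message struct {
	Role          string
	Content       string
	ContentBlocks []ContentBlock
}

// TextContent mirrors the documented behaviour: concatenate text blocks
// when ContentBlocks is populated, otherwise fall back to Content.
func (m Message) TextContent() string {
	if len(m.ContentBlocks) == 0 {
		return m.Content
	}
	var b strings.Builder
	for _, blk := range m.ContentBlocks {
		if blk.Type == ContentBlockText {
			b.WriteString(blk.Text)
		}
	}
	return b.String()
}

func main() {
	plain := Message{Role: "user", Content: "hello"}
	multi := Message{Role: "user", ContentBlocks: []ContentBlock{
		{Type: ContentBlockText, Text: "hello, "},
		{Type: ContentBlockText, Text: "world"},
	}}
	fmt.Println(plain.TextContent()) // hello
	fmt.Println(multi.TextContent()) // hello, world
}
```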
type MiddlewareState ¶
MiddlewareState is the minimal contract required for model providers to surface request/response data to middleware consumers without depending on the middleware package (which would cause an import cycle).
type Model ¶
type Model interface {
Complete(ctx context.Context, req Request) (*Response, error)
CompleteStream(ctx context.Context, req Request, cb StreamHandler) error
}
Model is the provider-agnostic interface used by the agent runtime.
func MustProvider ¶
MustProvider materialises a model immediately and panics on failure.
func NewAnthropic ¶
func NewAnthropic(cfg AnthropicConfig) (Model, error)
NewAnthropic constructs a production-ready Anthropic-backed Model.
func NewOpenAI ¶
func NewOpenAI(cfg OpenAIConfig) (Model, error)
NewOpenAI constructs a production-ready OpenAI-backed Model.
func NewOpenAIResponses ¶
func NewOpenAIResponses(cfg OpenAIConfig) (Model, error)
NewOpenAIResponses constructs an OpenAI model using the Responses API.
type OpenAIConfig ¶
type OpenAIConfig struct {
APIKey string
BaseURL string // Optional: for Azure or proxies
Model string // e.g., "gpt-4o", "gpt-4-turbo"
MaxTokens int
MaxRetries int
System string
Temperature *float64
HTTPClient *http.Client
UseResponses bool // true = /responses API, false = /chat/completions
}
OpenAIConfig configures the OpenAI-backed Model.
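A construction sketch using a local mirror of the struct above. The pointer type on Temperature is the usual Go idiom for distinguishing "unset" (nil, let the provider default apply) from an explicit 0.0; the key value is a placeholder:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Local mirror of the OpenAIConfig struct above.
type OpenAIConfig struct {
	APIKey       string
	BaseURL      string
	Model        string
	MaxTokens    int
	MaxRetries   int
	System       string
	Temperature  *float64
	HTTPClient   *http.Client
	UseResponses bool
}

func main() {
	// Temperature is a pointer so nil ("unset") is distinct from 0.0.
	temp := 0.2
	cfg := OpenAIConfig{
		APIKey:      "sk-...", // placeholder
		Model:       "gpt-4o",
		MaxTokens:   1024,
		Temperature: &temp,
		HTTPClient:  &http.Client{Timeout: 60 * time.Second},
	}
	fmt.Println(*cfg.Temperature) // 0.2
}
```

The same pointer idiom applies to Temperature in AnthropicConfig and Request.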
type OpenAIProvider ¶
type OpenAIProvider struct {
APIKey string
BaseURL string // Optional: for Azure or proxies
ModelName string
MaxTokens int
MaxRetries int
System string
Temperature *float64
CacheTTL time.Duration
// contains filtered or unexported fields
}
OpenAIProvider caches OpenAI clients with optional TTL.
type ProviderFunc ¶
ProviderFunc is an adapter to allow use of ordinary functions as providers.
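This is the same adapter pattern as net/http's HandlerFunc. The Provider method set is not shown in this index, so the single-method shape below is an assumption for illustration:

```go
package main

import "fmt"

// Hypothetical shapes; the real Provider/Model method sets are not
// shown in the index above.
type Model interface{ Name() string }

type Provider interface {
	Provide() (Model, error)
}

// ProviderFunc adapts an ordinary function to the Provider interface,
// the same pattern as http.HandlerFunc.
type ProviderFunc func() (Model, error)

func (f ProviderFunc) Provide() (Model, error) { return f() }

type staticModel struct{ name string }

func (s staticModel) Name() string { return s.name }

func main() {
	// Any closure with the right signature becomes a Provider.
	var p Provider = ProviderFunc(func() (Model, error) {
		return staticModel{name: "stub"}, nil
	})
	m, _ := p.Provide()
	fmt.Println(m.Name()) // stub
}
```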
type Request ¶
type Request struct {
Messages []Message
Tools []ToolDefinition
System string
Model string
SessionID string
MaxTokens int
Temperature *float64
EnablePromptCache bool // Enable prompt caching for system and recent messages
}
Request drives a single model completion.
type StreamHandler ¶
type StreamHandler func(StreamResult) error
StreamHandler consumes streaming updates in order.
type StreamOnlyModel ¶
type StreamOnlyModel struct {
Inner Model
}
StreamOnlyModel wraps a Model so that Complete() internally uses CompleteStream() to collect the response. This works around API proxies that return empty tool_use.input in non-streaming mode but work correctly in streaming mode.
func NewStreamOnlyModel ¶
func NewStreamOnlyModel(inner Model) *StreamOnlyModel
NewStreamOnlyModel returns a wrapper that forces all completions through the streaming path.
func (*StreamOnlyModel) Complete ¶
func (s *StreamOnlyModel) Complete(ctx context.Context, req Request) (*Response, error)
Complete calls CompleteStream internally and assembles the final Response.
func (*StreamOnlyModel) CompleteStream ¶
func (s *StreamOnlyModel) CompleteStream(ctx context.Context, req Request, cb StreamHandler) error
CompleteStream delegates directly to the inner model.
type StreamOnlyProvider ¶
type StreamOnlyProvider struct {
Inner Provider
}
StreamOnlyProvider wraps a Provider so that the Model it returns always routes Complete() through CompleteStream(). Use this when the upstream API proxy only returns correct tool_use.input in streaming mode.
type StreamResult ¶
StreamResult delivers incremental updates during streaming calls.
type ToolCall ¶
type ToolCall struct {
ID string
Name string
Arguments map[string]any
Result string // Result stores the execution result for this specific tool call
}
ToolCall captures a function-style invocation generated by the model.
type ToolDefinition ¶
ToolDefinition describes a callable function exposed to the model.