Documentation ¶
Overview ¶
Package goai provides a unified SDK for interacting with AI language models. GoAI is a Go port of the Vercel AI SDK, providing a consistent API across multiple AI providers (OpenAI, Anthropic, Google, and more).
Error types returned by GoAI functions live in the root package so users can type-assert on them without importing internals.
Core functions:
- GenerateText: non-streaming text generation
- StreamText: streaming text generation with multiple consumption modes
- GenerateObject: structured output with auto-generated JSON Schema
- StreamObject: streaming structured output with partial object emission
- Embed: single text embedding
- EmbedMany: batch text embeddings with auto-chunking
- GenerateImage: image generation from text prompts
Basic usage:
	result, err := goai.GenerateText(ctx, model, goai.WithPrompt("Hello"))

	stream, err := goai.StreamText(ctx, model, goai.WithPrompt("Hello"))
	for text := range stream.TextStream() {
		fmt.Print(text)
	}
Index ¶
- Variables
- func AssistantMessage(text string) provider.Message
- func ClassifyStreamError(body []byte) error
- func IsOverflow(message string) bool
- func ParseHTTPError(providerID string, statusCode int, body []byte) error
- func ParseHTTPErrorWithHeaders(providerID string, statusCode int, body []byte, headers http.Header) error
- func SchemaFrom[T any]() json.RawMessage
- func SystemMessage(text string) provider.Message
- func ToolMessage(toolCallID, toolName, output string) provider.Message
- func UserMessage(text string) provider.Message
- type APIError
- type ContextOverflowError
- type EmbedManyResult
- type EmbedResult
- type ImageOption
- type ImageResult
- type ObjectResult
- type ObjectStream
- type Option
- func WithEmbeddingProviderOptions(opts map[string]any) Option
- func WithExplicitSchema(schema json.RawMessage) Option
- func WithFrequencyPenalty(p float64) Option
- func WithHeaders(h map[string]string) Option
- func WithMaxOutputTokens(n int) Option
- func WithMaxParallelCalls(n int) Option
- func WithMaxRetries(n int) Option
- func WithMaxSteps(n int) Option
- func WithMessages(msgs ...provider.Message) Option
- func WithOnRequest(fn func(RequestInfo)) Option
- func WithOnResponse(fn func(ResponseInfo)) Option
- func WithOnStepFinish(fn func(StepResult)) Option
- func WithOnToolCall(fn func(ToolCallInfo)) Option
- func WithPresencePenalty(p float64) Option
- func WithPrompt(s string) Option
- func WithPromptCaching(b bool) Option
- func WithProviderOptions(opts map[string]any) Option
- func WithSchemaName(name string) Option
- func WithSeed(s int) Option
- func WithStopSequences(seqs ...string) Option
- func WithSystem(s string) Option
- func WithTemperature(t float64) Option
- func WithTimeout(d time.Duration) Option
- func WithToolChoice(tc string) Option
- func WithTools(tools ...Tool) Option
- func WithTopK(k int) Option
- func WithTopP(p float64) Option
- type ParsedStreamError
- type RequestInfo
- type ResponseInfo
- type StepResult
- type StreamErrorType
- type TextResult
- type TextStream
- type Tool
- type ToolCallInfo
Constants ¶
This section is empty.
Variables ¶
var ErrUnknownTool = errors.New("goai: unknown tool")
ErrUnknownTool is returned when a tool call references a tool not in the tool map.
Functions ¶
func AssistantMessage ¶
AssistantMessage creates an assistant message with text content.
func ClassifyStreamError ¶
ClassifyStreamError parses a stream error event and returns the appropriate typed error (*ContextOverflowError or *APIError), or nil if the data is not a recognized error event.
func IsOverflow ¶
IsOverflow checks if an error message indicates a context overflow.
func ParseHTTPError ¶
ParseHTTPError classifies an HTTP error response.
func ParseHTTPErrorWithHeaders ¶
func ParseHTTPErrorWithHeaders(providerID string, statusCode int, body []byte, headers http.Header) error
ParseHTTPErrorWithHeaders parses an HTTP error response, preserving retry-related headers.
func SchemaFrom ¶
func SchemaFrom[T any]() json.RawMessage
SchemaFrom generates a JSON Schema from a Go type using reflection. The schema is compatible with OpenAI strict mode:
- All properties are required (pointer types become nullable)
- additionalProperties: false on all objects
Supports struct tags:
- json:"name" for field naming, json:"-" to skip
- jsonschema:"description=...,enum=a|b|c" for descriptions and enums
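As a sketch of how these tags combine, consider a hypothetical type (the `Recipe` type and its fields are illustrative, not part of the package):

```go
// Recipe is a hypothetical type showing the supported struct tags.
type Recipe struct {
	Name     string `json:"name" jsonschema:"description=Dish name"`
	Cuisine  string `json:"cuisine" jsonschema:"enum=italian|french|thai"`
	Servings *int   `json:"servings"` // pointer type -> nullable, but still listed as required
	Secret   string `json:"-"`        // omitted from the schema entirely
}

schema := goai.SchemaFrom[Recipe]() // json.RawMessage containing the generated schema
```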
func SystemMessage ¶
SystemMessage creates a system message with text content.
func ToolMessage ¶
ToolMessage creates a tool result message.
func UserMessage ¶
UserMessage creates a user message with text content.
Types ¶
type APIError ¶
type APIError struct {
Message string
StatusCode int
IsRetryable bool
ResponseBody string
ResponseHeaders map[string]string
}
APIError represents a non-overflow API error.
type ContextOverflowError ¶
ContextOverflowError indicates the prompt exceeded the model's context window.
func (*ContextOverflowError) Error ¶
func (e *ContextOverflowError) Error() string
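A minimal sketch of branching on these error types with errors.As (this assumes *APIError also satisfies the error interface, which the docs imply but do not show; the retry strategy is illustrative):

```go
res, err := goai.GenerateText(ctx, model, goai.WithPrompt("Hello"))
if err != nil {
	var overflow *goai.ContextOverflowError
	var apiErr *goai.APIError
	switch {
	case errors.As(err, &overflow):
		// Prompt exceeded the context window: shrink the history and retry.
	case errors.As(err, &apiErr) && apiErr.IsRetryable:
		// Transient failure (rate limit, 5xx): retry with backoff.
	default:
		return err
	}
}
_ = res
```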
type EmbedManyResult ¶
type EmbedManyResult struct {
// Embeddings contains the generated vectors (one per input value).
Embeddings [][]float64
// Usage is the aggregated token consumption.
Usage provider.Usage
}
EmbedManyResult is the result of multiple embedding generations.
func EmbedMany ¶
func EmbedMany(ctx context.Context, model provider.EmbeddingModel, values []string, opts ...Option) (*EmbedManyResult, error)
EmbedMany generates embedding vectors for multiple values. Auto-chunks when values exceed the model's MaxValuesPerCall limit and processes chunks in parallel (controlled by WithMaxParallelCalls).
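A sketch of batch embedding under these rules (the input strings and parallelism value are arbitrary):

```go
texts := []string{"first document", "second document", "third document"}
res, err := goai.EmbedMany(ctx, embModel, texts,
	goai.WithMaxParallelCalls(4), // process up to 4 chunks concurrently
)
if err != nil {
	return err
}
// res.Embeddings holds one vector per input value;
// res.Usage aggregates token consumption across all chunks.
fmt.Println(len(res.Embeddings), res.Usage)
```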
type EmbedResult ¶
type EmbedResult struct {
// Embedding is the generated vector.
Embedding []float64
// Usage tracks token consumption.
Usage provider.Usage
}
EmbedResult is the result of a single embedding generation.
func Embed ¶
func Embed(ctx context.Context, model provider.EmbeddingModel, value string, opts ...Option) (*EmbedResult, error)
Embed generates an embedding vector for a single value.
type ImageOption ¶
type ImageOption func(*imageOptions)
ImageOption configures image generation.
func WithAspectRatio ¶
func WithAspectRatio(ratio string) ImageOption
WithAspectRatio sets the aspect ratio (e.g. "16:9", "1:1").
func WithImageCount ¶
func WithImageCount(n int) ImageOption
WithImageCount sets the number of images to generate.
func WithImagePrompt ¶
func WithImagePrompt(prompt string) ImageOption
WithImagePrompt sets the text prompt for image generation.
func WithImageProviderOptions ¶
func WithImageProviderOptions(opts map[string]any) ImageOption
WithImageProviderOptions sets provider-specific options.
func WithImageSize ¶
func WithImageSize(size string) ImageOption
WithImageSize sets the image size (e.g. "1024x1024", "512x512").
type ImageResult ¶
ImageResult contains the generated images.
func GenerateImage ¶
func GenerateImage(ctx context.Context, model provider.ImageModel, opts ...ImageOption) (*ImageResult, error)
GenerateImage generates images from a text prompt.
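Combining the ImageOption functions above, a call might look like this (prompt, size, and count are illustrative):

```go
res, err := goai.GenerateImage(ctx, imgModel,
	goai.WithImagePrompt("a lighthouse at dusk, watercolor"),
	goai.WithImageSize("1024x1024"),
	goai.WithImageCount(2),
)
if err != nil {
	return err
}
_ = res // ImageResult contains the generated images
```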
type ObjectResult ¶
type ObjectResult[T any] struct {
	// Object is the parsed structured output.
	Object T
	// Usage tracks token consumption.
	Usage provider.Usage
	// FinishReason indicates why generation stopped.
	FinishReason provider.FinishReason
	// Response contains provider metadata (ID, Model).
	Response provider.ResponseMetadata
}
ObjectResult is the final result of a structured output generation.
func GenerateObject ¶
func GenerateObject[T any](ctx context.Context, model provider.LanguageModel, opts ...Option) (*ObjectResult[T], error)
GenerateObject performs a non-streaming structured output generation. The schema is auto-generated from T, or can be overridden with WithExplicitSchema.
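A sketch of typed structured output (the `Sentiment` type is hypothetical; its schema is derived automatically via the same mechanism as SchemaFrom):

```go
type Sentiment struct {
	Label string  `json:"label" jsonschema:"enum=positive|negative|neutral"`
	Score float64 `json:"score"`
}

res, err := goai.GenerateObject[Sentiment](ctx, model,
	goai.WithPrompt("Classify: 'I love this library!'"),
)
if err != nil {
	return err
}
fmt.Println(res.Object.Label, res.Object.Score)
```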
type ObjectStream ¶
type ObjectStream[T any] struct {
	// contains filtered or unexported fields
}
ObjectStream is a streaming structured output response.
func StreamObject ¶
func StreamObject[T any](ctx context.Context, model provider.LanguageModel, opts ...Option) (*ObjectStream[T], error)
StreamObject performs a streaming structured output generation. Returns an ObjectStream that emits progressively populated partial objects.
func (*ObjectStream[T]) PartialObjectStream ¶
func (os *ObjectStream[T]) PartialObjectStream() <-chan *T
PartialObjectStream returns a channel that emits partial objects as JSON accumulates. Each emitted value has progressively more fields populated. Mutually exclusive with Result() -- call only one consumption method.
func (*ObjectStream[T]) Result ¶
func (os *ObjectStream[T]) Result() (*ObjectResult[T], error)
Result blocks until the stream completes and returns the final validated object. Returns an error if JSON parsing of the accumulated text fails.
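A sketch of consuming partial objects as they accumulate (the `Story` type is hypothetical; note that per the docs above, PartialObjectStream and Result are mutually exclusive, so this example uses only the former):

```go
type Story struct {
	Title string `json:"title"`
	Body  string `json:"body"`
}

stream, err := goai.StreamObject[Story](ctx, model,
	goai.WithPrompt("Write a short story with a title"),
)
if err != nil {
	return err
}
for partial := range stream.PartialObjectStream() {
	// Each value has progressively more fields populated.
	fmt.Printf("%+v\n", partial)
}
```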
type Option ¶
type Option func(*options)
Option configures a generation call.
func WithEmbeddingProviderOptions ¶
WithEmbeddingProviderOptions sets provider-specific parameters for embedding requests.
func WithExplicitSchema ¶
func WithExplicitSchema(schema json.RawMessage) Option
WithExplicitSchema overrides auto-generated JSON Schema for GenerateObject/StreamObject.
func WithFrequencyPenalty ¶
WithFrequencyPenalty sets the frequency penalty.
func WithHeaders ¶
WithHeaders sets additional HTTP headers.
func WithMaxOutputTokens ¶
WithMaxOutputTokens limits the response length.
func WithMaxParallelCalls ¶
WithMaxParallelCalls sets batch parallelism for EmbedMany.
func WithMaxRetries ¶
WithMaxRetries sets the retry count for transient errors.
func WithMaxSteps ¶
WithMaxSteps sets the maximum auto tool loop iterations.
func WithMessages ¶
WithMessages sets the conversation history.
func WithOnRequest ¶
func WithOnRequest(fn func(RequestInfo)) Option
WithOnRequest sets a callback invoked before each model call.
func WithOnResponse ¶
func WithOnResponse(fn func(ResponseInfo)) Option
WithOnResponse sets a callback invoked after each model call completes.
func WithOnStepFinish ¶
func WithOnStepFinish(fn func(StepResult)) Option
WithOnStepFinish sets a callback invoked after each generation step completes.
func WithOnToolCall ¶
func WithOnToolCall(fn func(ToolCallInfo)) Option
WithOnToolCall sets a callback invoked after each tool execution.
func WithPresencePenalty ¶
WithPresencePenalty sets the presence penalty.
func WithPromptCaching ¶
WithPromptCaching enables provider-specific prompt caching.
func WithProviderOptions ¶
WithProviderOptions sets provider-specific request parameters.
func WithSchemaName ¶
WithSchemaName sets the schema name sent to providers (default "response").
func WithStopSequences ¶
WithStopSequences sets stop sequences.
func WithTimeout ¶
WithTimeout sets the timeout for the entire generation call.
type ParsedStreamError ¶
type ParsedStreamError struct {
Type StreamErrorType
Message string
IsRetryable bool
ResponseBody string
}
ParsedStreamError represents a parsed error from an SSE stream.
func ParseStreamError ¶
func ParseStreamError(body []byte) *ParsedStreamError
ParseStreamError parses a stream error event (used by Anthropic/OpenAI error events).
type RequestInfo ¶
type RequestInfo struct {
// Model is the model ID.
Model string
// MessageCount is the number of messages in the request.
MessageCount int
// ToolCount is the number of tools available.
ToolCount int
// Timestamp is when the request was initiated.
Timestamp time.Time
}
RequestInfo is passed to the OnRequest hook before a generation call.
type ResponseInfo ¶
type ResponseInfo struct {
// Latency is the time from request to response.
Latency time.Duration
// Usage is the token consumption for this call.
Usage provider.Usage
// FinishReason indicates why generation stopped.
FinishReason provider.FinishReason
// Error is non-nil if the call failed.
Error error
// StatusCode is the HTTP status code (0 if not applicable).
StatusCode int
}
ResponseInfo is passed to the OnResponse hook after a generation call completes.
type StepResult ¶
type StepResult struct {
// Number is the 1-based step index.
Number int
// Text generated in this step.
Text string
// ToolCalls requested in this step.
ToolCalls []provider.ToolCall
// FinishReason for this step.
FinishReason provider.FinishReason
// Usage for this step.
Usage provider.Usage
// Response contains provider metadata for this step (ID, Model).
Response provider.ResponseMetadata
// Sources contains citations/references from this step.
Sources []provider.Source
}
StepResult is the result of a single generation step in a tool loop.
type StreamErrorType ¶
type StreamErrorType string
StreamErrorType classifies parsed stream errors.
const (
	StreamErrorContextOverflow StreamErrorType = "context_overflow"
	StreamErrorAPI             StreamErrorType = "api_error"
)
type TextResult ¶
type TextResult struct {
// Text is the accumulated generated text.
Text string
// ToolCalls requested by the model in the final step.
ToolCalls []provider.ToolCall
// Steps contains results from each generation step (for multi-step tool loops).
Steps []StepResult
// TotalUsage is the aggregated token usage across all steps.
TotalUsage provider.Usage
// FinishReason indicates why generation stopped.
FinishReason provider.FinishReason
// Response contains provider metadata from the last step (ID, Model).
Response provider.ResponseMetadata
// Sources contains citations/references extracted from the response.
Sources []provider.Source
}
TextResult is the final result of a text generation call.
func GenerateText ¶
func GenerateText(ctx context.Context, model provider.LanguageModel, opts ...Option) (*TextResult, error)
GenerateText performs a non-streaming text generation. When tools with Execute functions are provided and MaxSteps > 1, it automatically runs a tool loop: generate → execute tools → re-generate.
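The tool loop can be sketched like this (the weather tool, its schema, and the canned result are illustrative; a real Execute would call an external API):

```go
weather := goai.Tool{
	Name:        "get_weather",
	Description: "Look up the current weather for a city",
	InputSchema: goai.SchemaFrom[struct {
		City string `json:"city"`
	}](),
	Execute: func(ctx context.Context, input json.RawMessage) (string, error) {
		// Illustrative canned response; parse input and call a real API here.
		return `{"temp_c": 18, "conditions": "cloudy"}`, nil
	},
}

res, err := goai.GenerateText(ctx, model,
	goai.WithPrompt("What's the weather in Paris?"),
	goai.WithTools(weather),
	goai.WithMaxSteps(3), // allow generate -> execute tools -> re-generate
)
if err != nil {
	return err
}
fmt.Println(res.Text)
```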
type TextStream ¶
type TextStream struct {
// contains filtered or unexported fields
}
TextStream is a streaming text generation response.
It provides three consumption modes (Stream, TextStream, Result). Stream() and TextStream() are mutually exclusive -- only call one. Result() can always be called, including after Stream() or TextStream(), to get the accumulated final result.
func StreamText ¶
func StreamText(ctx context.Context, model provider.LanguageModel, opts ...Option) (*TextStream, error)
StreamText performs a streaming text generation.
func (*TextStream) Result ¶
func (ts *TextStream) Result() *TextResult
Result blocks until the stream completes and returns the final result. Can be called after Stream() or TextStream() to get accumulated data.
func (*TextStream) Stream ¶
func (ts *TextStream) Stream() <-chan provider.StreamChunk
Stream returns a channel that emits raw StreamChunks from the provider. Mutually exclusive with TextStream() -- only call one streaming method.
func (*TextStream) TextStream ¶
func (ts *TextStream) TextStream() <-chan string
TextStream returns a channel that emits only text content strings. Mutually exclusive with Stream() -- only call one streaming method.
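Putting the consumption modes together, a sketch of draining raw chunks and then reading the accumulated result (chunk handling is illustrative):

```go
stream, err := goai.StreamText(ctx, model, goai.WithPrompt("Tell me a story"))
if err != nil {
	return err
}
for chunk := range stream.Stream() { // raw provider.StreamChunk values
	// Inspect chunk for text deltas, tool calls, usage, etc.
	_ = chunk
}
final := stream.Result() // accumulated TextResult, valid after Stream()
fmt.Println(final.FinishReason, final.TotalUsage)
```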
type Tool ¶
type Tool struct {
// Name is the tool's identifier.
Name string
// Description explains what the tool does.
Description string
// InputSchema is the JSON Schema for the tool's input parameters.
InputSchema json.RawMessage
// ProviderDefinedType, when non-empty, marks this as a provider-defined tool
// (e.g. "computer_20250124", "bash_20250124"). Providers emit the correct
// API type instead of "custom".
ProviderDefinedType string
// ProviderDefinedOptions holds provider-specific tool configuration
// (e.g. displayWidthPx for computer use).
ProviderDefinedOptions map[string]any
// Execute runs the tool with the given JSON input and returns the result text.
Execute func(ctx context.Context, input json.RawMessage) (string, error)
}
Tool defines a tool that can be called by the model during generation. Unlike provider.ToolDefinition (wire-level schema), Tool includes an Execute function that GoAI's auto tool loop invokes.
type ToolCallInfo ¶
type ToolCallInfo struct {
// ToolName is the name of the tool that was called.
ToolName string
// InputSize is the byte length of the tool input JSON.
InputSize int
// Duration is how long the tool execution took.
Duration time.Duration
// Error is non-nil if the tool execution failed.
Error error
}
ToolCallInfo is passed to the OnToolCall hook after a tool executes.
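The observability hooks above can be wired together in one call, as in this sketch (log formats are illustrative):

```go
res, err := goai.GenerateText(ctx, model,
	goai.WithPrompt("Summarize this"),
	goai.WithOnRequest(func(ri goai.RequestInfo) {
		log.Printf("-> %s (%d messages, %d tools)", ri.Model, ri.MessageCount, ri.ToolCount)
	}),
	goai.WithOnResponse(func(ri goai.ResponseInfo) {
		log.Printf("<- %s in %s (usage %v)", ri.FinishReason, ri.Latency, ri.Usage)
	}),
	goai.WithOnToolCall(func(tc goai.ToolCallInfo) {
		log.Printf("tool %s took %s (err=%v)", tc.ToolName, tc.Duration, tc.Error)
	}),
)
```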
Source Files ¶
Directories ¶
| Path | Synopsis |
|---|---|
| internal | |
| gemini | Package gemini provides shared utilities for Google Gemini API providers. |
| httpc | Package httpc provides HTTP helper functions for provider implementations. |
| openaicompat | Package openaicompat provides shared request building and response parsing for OpenAI-compatible API providers (OpenAI, OpenRouter, Groq, DeepInfra, etc.). |
| sse | Package sse provides a scanner for Server-Sent Events (SSE) streams. |
| provider | Package provider defines the interfaces and types that AI providers implement. |
| anthropic | Package anthropic provides an Anthropic language model implementation for GoAI. |
| azure | Package azure provides an Azure OpenAI language model implementation for GoAI. |
| bedrock | Package bedrock provides an AWS Bedrock language model implementation for GoAI. |
| cerebras | Package cerebras provides a Cerebras language model implementation for GoAI. |
| cohere | Package cohere provides a Cohere language model and embedding implementation for GoAI. |
| compat | Package compat provides a generic OpenAI-compatible language model for GoAI. |
| deepinfra | Package deepinfra provides a DeepInfra language model implementation for GoAI. |
| deepseek | Package deepseek provides a DeepSeek language model implementation for GoAI. |
| fireworks | Package fireworks provides a Fireworks AI language model implementation for GoAI. |
| google | Package google provides a Google Gemini language model implementation for GoAI. |
| groq | Package groq provides a Groq language model implementation for GoAI. |
| mistral | Package mistral provides a Mistral AI language model implementation for GoAI. |
| ollama | Package ollama provides an Ollama language model implementation for GoAI. |
| openai | Package openai provides an OpenAI language model implementation for GoAI. |
| openrouter | Package openrouter provides an OpenRouter language model implementation for GoAI. |
| perplexity | Package perplexity provides a Perplexity language model implementation for GoAI. |
| together | Package together provides a Together AI language model implementation for GoAI. |
| vertex | Package vertex provides a Google Cloud Vertex AI language model implementation for GoAI. |
| vllm | Package vllm provides a vLLM language model implementation for GoAI. |
| xai | Package xai provides a xAI (Grok) language model implementation for GoAI. |