Documentation ¶
Overview ¶
Package agent provides a high-level interface for LLM interactions. It wraps the transport layer with convenient methods for common operations like chat, vision, tools, and embeddings, with automatic system prompt injection and simplified error handling.
Agent Interface ¶
The Agent interface exposes the agent's identity, its underlying components, and protocol-specific methods:

type Agent interface {
	ID() string
	Client() client.Client
	Provider() providers.Provider
	Model() *model.Model
	Chat(ctx context.Context, prompt string, opts ...map[string]any) (*response.ChatResponse, error)
	ChatStream(ctx context.Context, prompt string, opts ...map[string]any) (<-chan *response.StreamingChunk, error)
	Vision(ctx context.Context, prompt string, images []string, opts ...map[string]any) (*response.ChatResponse, error)
	VisionStream(ctx context.Context, prompt string, images []string, opts ...map[string]any) (<-chan *response.StreamingChunk, error)
	Tools(ctx context.Context, prompt string, tools []Tool, opts ...map[string]any) (*response.ToolsResponse, error)
	Embed(ctx context.Context, input string, opts ...map[string]any) (*response.EmbeddingsResponse, error)
}
Creating an Agent ¶
Agents are created from configuration that includes transport and optional system prompt:
cfg := &config.AgentConfig{
	SystemPrompt: "You are a helpful AI assistant.",
	Transport: &config.TransportConfig{
		Provider: &config.ProviderConfig{
			Name:    "ollama",
			BaseURL: "http://localhost:11434",
			Model: &config.ModelConfig{
				Name: "llama2",
				Capabilities: map[string]config.CapabilityConfig{
					"chat": {
						Format: "openai-chat",
						Options: map[string]any{
							"temperature": 0.7,
						},
					},
				},
			},
		},
		Timeout:            config.Duration(30 * time.Second),
		ConnectionTimeout:  config.Duration(10 * time.Second),
		ConnectionPoolSize: 10,
	},
}

agent, err := agent.New(cfg)
if err != nil {
	log.Fatal(err)
}
Chat Protocol ¶
Simple text-based conversation:
ctx := context.Background()

response, err := agent.Chat(ctx, "What is Go?")
if err != nil {
	log.Fatal(err)
}

fmt.Println(response.Content())
With options:
options := map[string]any{
	"temperature": 0.9,
	"max_tokens":  2000,
}

response, err := agent.Chat(ctx, "Tell me a story", options)
Streaming:
chunks, err := agent.ChatStream(ctx, "Tell me a long story")
if err != nil {
	log.Fatal(err)
}

for chunk := range chunks {
	if chunk.Error != nil {
		log.Printf("Stream error: %v", chunk.Error)
		continue
	}
	fmt.Print(chunk.Content())
}
Vision Protocol ¶
Image understanding with multimodal inputs:
images := []string{
	"https://example.com/image1.jpg",
	"data:image/jpeg;base64,/9j/4AAQ...",
}

response, err := agent.Vision(ctx, "What do you see in these images?", images)
if err != nil {
	log.Fatal(err)
}

fmt.Println(response.Content())
Streaming vision:
chunks, err := agent.VisionStream(ctx, "Describe this image in detail", images)
if err != nil {
	log.Fatal(err)
}

for chunk := range chunks {
	fmt.Print(chunk.Content())
}
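Local images can be supplied as base64 data URIs, following the format shown above. A minimal sketch of encoding a file from disk (the file name is illustrative):

data, err := os.ReadFile("photo.jpg")
if err != nil {
	log.Fatal(err)
}
uri := "data:image/jpeg;base64," + base64.StdEncoding.EncodeToString(data)

response, err := agent.Vision(ctx, "Describe this photo", []string{uri})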
Tools Protocol ¶
Function calling with tool definitions:
tools := []agent.Tool{
	{
		Name:        "get_weather",
		Description: "Get the current weather for a location",
		Parameters: map[string]any{
			"type": "object",
			"properties": map[string]any{
				"location": map[string]any{
					"type":        "string",
					"description": "City name",
				},
				"unit": map[string]any{
					"type":        "string",
					"enum":        []string{"celsius", "fahrenheit"},
					"description": "Temperature unit",
				},
			},
			"required": []string{"location"},
		},
	},
}

response, err := agent.Tools(ctx, "What's the weather in San Francisco?", tools)
if err != nil {
	log.Fatal(err)
}

// Process tool calls
for _, toolCall := range response.ToolCalls() {
	fmt.Printf("Tool: %s\n", toolCall.Name())
	fmt.Printf("Arguments: %s\n", toolCall.Arguments())
}
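Tool call arguments arrive as JSON, so a typical next step is to decode them and dispatch to a local handler. A sketch, assuming Arguments() returns the raw JSON string (as the %s formatting above suggests); getWeather is a hypothetical local function:

for _, toolCall := range response.ToolCalls() {
	switch toolCall.Name() {
	case "get_weather":
		var args struct {
			Location string `json:"location"`
			Unit     string `json:"unit"`
		}
		if err := json.Unmarshal([]byte(toolCall.Arguments()), &args); err != nil {
			log.Printf("invalid arguments: %v", err)
			continue
		}
		// getWeather is a hypothetical application-side handler.
		fmt.Println(getWeather(args.Location, args.Unit))
	}
}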
Embeddings Protocol ¶
Text vectorization for semantic search:
response, err := agent.Embed(ctx, "The quick brown fox")
if err != nil {
	log.Fatal(err)
}

embeddings := response.Embeddings()
fmt.Printf("Vector dimension: %d\n", len(embeddings[0]))
With options:
options := map[string]any{
	"encoding_format": "float",
}

response, err := agent.Embed(ctx, "text to embed", options)
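Semantic search compares embedding vectors by similarity. A minimal sketch of cosine similarity, assuming the vectors are []float64 as the dimension example above suggests:

// cosineSimilarity returns a value in [-1, 1]; higher means more similar.
// Assumes a and b have the same non-zero length.
func cosineSimilarity(a, b []float64) float64 {
	var dot, normA, normB float64
	for i := range a {
		dot += a[i] * b[i]
		normA += a[i] * a[i]
		normB += b[i] * b[i]
	}
	return dot / (math.Sqrt(normA) * math.Sqrt(normB))
}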
System Prompt Injection ¶
When an agent is created with a system prompt, it's automatically prepended to all protocol requests that support messages:
cfg := &config.AgentConfig{
	SystemPrompt: "You are an expert Go programmer.",
	Transport:    transportConfig,
}

agent, _ := agent.New(cfg)

// System prompt is automatically injected before user prompt
response, err := agent.Chat(ctx, "How do I use channels?")
The message sequence becomes:
- System: "You are an expert Go programmer."
- User: "How do I use channels?"
Affects: Chat, ChatStream, Vision, VisionStream, Tools
Does not affect: Embed (the embeddings protocol doesn't use messages)
Options Management ¶
All protocol methods accept optional parameters:
// No options
response, err := agent.Chat(ctx, "Hello")

// With options
options := map[string]any{
	"temperature": 0.9,
	"max_tokens":  2000,
}
response, err = agent.Chat(ctx, "Hello", options)
Options are merged with model defaults, with request options taking precedence.
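For example, with the chat capability configured with temperature 0.7 as in the earlier configuration, a request option overrides that default while other configured options pass through unchanged:

// Configured default: temperature 0.7 (from CapabilityConfig.Options)
response, err := agent.Chat(ctx, "Hello", map[string]any{
	"temperature": 0.9,  // overrides the configured 0.7 for this request
	"max_tokens":  2000, // no configured default; used as provided
})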
Tool Definitions ¶
Tools follow the OpenAI function calling schema:
type Tool struct {
	Name        string         // Function name
	Description string         // What the function does
	Parameters  map[string]any // JSON Schema for parameters
}
The Parameters field uses JSON Schema format:
Parameters: map[string]any{
	"type": "object",
	"properties": map[string]any{
		"param_name": map[string]any{
			"type":        "string",
			"description": "Parameter description",
		},
	},
	"required": []string{"param_name"},
}
Error Handling ¶
All methods return standard Go errors:
response, err := agent.Chat(ctx, "Hello")
if err != nil {
	// Handle error
	log.Printf("Chat failed: %v", err)
	return
}
For more detailed error information, the package provides AgentError:
err := agent.NewAgentLLMError(
	"Request failed",
	agent.WithCode("LLM500"),
	agent.WithCause(underlyingError),
)
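Because AgentError implements Unwrap (see the Types section below), it composes with the standard errors package. A sketch of inspecting a returned error that is, or wraps, an *AgentError:

var agentErr *agent.AgentError
if errors.As(err, &agentErr) {
	log.Printf("agent error [%s]: %s", agentErr.Code, agentErr.Message)
}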
Error types:
- ErrorTypeInit: Initialization errors
- ErrorTypeLLM: LLM interaction errors
Error options:
- WithCode: Error code for categorization
- WithCause: Underlying error
- WithName: Agent name
- WithClient: Client identification
- WithAgent: Client identification derived from agent configuration
- WithID: Unique error ID
Context Cancellation ¶
All protocol methods respect context cancellation:
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

response, err := agent.Chat(ctx, "Hello")
if err != nil {
	if ctx.Err() == context.DeadlineExceeded {
		log.Println("Request timed out")
	}
}
For streaming:
ctx, cancel := context.WithCancel(context.Background())
defer cancel()

chunks, err := agent.ChatStream(ctx, "Tell me a story")
if err != nil {
	log.Fatal(err)
}

// Cancel after 5 seconds
go func() {
	time.Sleep(5 * time.Second)
	cancel()
}()

for chunk := range chunks {
	fmt.Print(chunk.Content())
}
Accessing Lower Layers ¶
The agent provides access to underlying components:
// Transport client
client := agent.Client()

// Provider
provider := agent.Provider()
fmt.Println("Provider:", provider.Name())

// Model
model := agent.Model()
fmt.Println("Model:", model.Name())
This allows advanced usage while maintaining the convenience of agent methods.
Thread Safety ¶
Agents are safe for concurrent use. Multiple goroutines can call protocol methods simultaneously on the same agent instance.
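For example, a sketch that fans several prompts out over a single agent:

var wg sync.WaitGroup
for _, prompt := range []string{"What is Go?", "What are goroutines?"} {
	wg.Add(1)
	go func(p string) {
		defer wg.Done()
		response, err := agent.Chat(ctx, p)
		if err != nil {
			log.Printf("chat failed: %v", err)
			return
		}
		fmt.Println(response.Content())
	}(prompt)
}
wg.Wait()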
Complete Example ¶
Comprehensive agent usage:
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/JaimeStill/go-agents/pkg/agent"
	"github.com/JaimeStill/go-agents/pkg/config"
)

func main() {
	cfg := &config.AgentConfig{
		SystemPrompt: "You are a helpful assistant.",
		Transport: &config.TransportConfig{
			Provider: &config.ProviderConfig{
				Name:    "ollama",
				BaseURL: "http://localhost:11434",
				Model: &config.ModelConfig{
					Name: "llama2",
					Capabilities: map[string]config.CapabilityConfig{
						"chat": {
							Format: "openai-chat",
							Options: map[string]any{
								"temperature": 0.7,
							},
						},
					},
				},
			},
			Timeout:            config.Duration(30 * time.Second),
			ConnectionTimeout:  config.Duration(10 * time.Second),
			ConnectionPoolSize: 10,
		},
	}

	a, err := agent.New(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()

	// Simple chat
	response, err := a.Chat(ctx, "What is Go?")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(response.Content())

	// Streaming chat
	chunks, err := a.ChatStream(ctx, "Tell me about channels")
	if err != nil {
		log.Fatal(err)
	}
	for chunk := range chunks {
		if chunk.Error != nil {
			log.Printf("Error: %v", chunk.Error)
			continue
		}
		fmt.Print(chunk.Content())
	}
}
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Agent ¶
type Agent interface {
	// ID returns the unique identifier for the agent.
	// The ID is assigned at creation time using UUIDv7 and never changes.
	// Thread-safe for concurrent access and safe to use as map keys.
	ID() string

	// Client returns the underlying HTTP client.
	Client() client.Client

	// Provider returns the provider instance.
	Provider() providers.Provider

	// Model returns the model instance.
	Model() *model.Model

	// Chat executes a chat protocol request with optional system prompt injection.
	// Returns the parsed chat response or an error.
	Chat(ctx context.Context, prompt string, opts ...map[string]any) (*response.ChatResponse, error)

	// ChatStream executes a streaming chat protocol request.
	// Automatically sets stream: true in options.
	// Returns a channel of streaming chunks or an error.
	ChatStream(ctx context.Context, prompt string, opts ...map[string]any) (<-chan *response.StreamingChunk, error)

	// Vision executes a vision protocol request with images.
	// Images can be URLs or base64-encoded data URIs.
	// Returns the parsed chat response or an error.
	Vision(ctx context.Context, prompt string, images []string, opts ...map[string]any) (*response.ChatResponse, error)

	// VisionStream executes a streaming vision protocol request with images.
	// Returns a channel of streaming chunks or an error.
	VisionStream(ctx context.Context, prompt string, images []string, opts ...map[string]any) (<-chan *response.StreamingChunk, error)

	// Tools executes a tools protocol request with function definitions.
	// Returns the parsed tools response with tool calls or an error.
	Tools(ctx context.Context, prompt string, tools []Tool, opts ...map[string]any) (*response.ToolsResponse, error)

	// Embed executes an embeddings protocol request.
	// Returns the parsed embeddings response or an error.
	Embed(ctx context.Context, input string, opts ...map[string]any) (*response.EmbeddingsResponse, error)
}
Agent provides a high-level interface for LLM interactions. Methods are protocol-specific and handle message initialization, system prompt injection, and response type assertions.
Each agent has a unique identifier that remains stable across its lifetime. The ID is used for orchestration scenarios including hub registration, message routing, lifecycle tracking, and distributed tracing. IDs are guaranteed to be unique, stable, and thread-safe.
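For example, an orchestrator might key a registry of agents by ID. A sketch, assuming agents and targetID are supplied by the caller:

// Register agents by their stable IDs.
registry := make(map[string]agent.Agent)
for _, a := range agents {
	registry[a.ID()] = a
}

// Route a request to a specific agent by ID.
if target, ok := registry[targetID]; ok {
	response, err := target.Chat(ctx, "status report")
	// ...
}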
type AgentError ¶
type AgentError struct {
	// Type categorizes the error (init or llm).
	Type ErrorType `json:"type"`

	// ID is a unique identifier for this error instance.
	ID uuid.UUID `json:"uuid,omitempty"`

	// Name identifies the agent that generated the error.
	Name string `json:"name,omitempty"`

	// Code is an application-specific error code.
	Code string `json:"code,omitempty"`

	// Message describes what went wrong.
	Message string `json:"message"`

	// Cause is the underlying error that caused this error.
	Cause error `json:"-"`

	// Client identifies the provider/model combination.
	Client string `json:"client,omitempty"`

	// Timestamp records when the error occurred.
	Timestamp time.Time `json:"timestamp"`
}
AgentError provides detailed error information for agent operations. Includes error categorization, unique identification, and contextual metadata.
func NewAgentError ¶
func NewAgentError(errorType ErrorType, message string, options ...ErrorOption) *AgentError
NewAgentError creates a new AgentError with the specified type and message. Optional ErrorOption functions can be provided to set additional fields.
func NewAgentInitError ¶
func NewAgentInitError(message string, options ...ErrorOption) *AgentError
NewAgentInitError creates an initialization error. Shorthand for NewAgentError(ErrorTypeInit, message, options...).
func NewAgentLLMError ¶
func NewAgentLLMError(message string, options ...ErrorOption) *AgentError
NewAgentLLMError creates an LLM interaction error. Shorthand for NewAgentError(ErrorTypeLLM, message, options...).
func (*AgentError) Error ¶
func (e *AgentError) Error() string
Error returns a formatted error message. Format varies based on available context (client, name).
func (*AgentError) Unwrap ¶
func (e *AgentError) Unwrap() error
Unwrap returns the underlying cause error. Implements the error unwrapping interface for errors.Is and errors.As.
type ErrorOption ¶
type ErrorOption func(*AgentError)
ErrorOption is a function that modifies an AgentError. Used with NewAgentError to set optional fields.
func WithAgent ¶ added in v0.3.0
func WithAgent(cfg *config.AgentConfig) ErrorOption
WithAgent extracts identification from agent configuration. Creates a string in the format "provider/model", "provider", or "model" depending on available information.
func WithID ¶
func WithID(id uuid.UUID) ErrorOption
WithID sets a unique identifier for this error instance.
func WithName ¶
func WithName(name string) ErrorOption
WithName sets the agent name that generated the error.
type Tool ¶
type Tool struct {
	// Name is the function name that the LLM will call.
	Name string `json:"name"`

	// Description explains what the function does.
	// Should be clear and detailed to help the LLM decide when to use it.
	Description string `json:"description"`

	// Parameters is a JSON Schema defining the function's parameters.
	// Uses the format: {"type": "object", "properties": {...}, "required": [...]}
	Parameters map[string]any `json:"parameters"`
}
Tool defines a function that can be called by the LLM. Used with the Tools protocol for function calling capabilities.