Documentation ¶
Overview ¶
Package llm provides a unified interface for interacting with various Large Language Model providers.
The llm package abstracts away the specific API details of each LLM provider, making it easy to switch between different models and providers. It serves as the foundation for the agent system that communicates with UIs via JSONRPC over Unix sockets.
Features ¶
- Provider-agnostic interface for LLMs
- Automatic model-to-provider routing with pluggable router system
- Model aliases with optional whitelist mode for access control
- Support for both regular and streaming completions
- Function/tool calling support with a registry system
- Multi-modal (text + image) content support
- Configurable with functional options pattern
- Easy to extend with new providers
Usage ¶
Basic usage:
import ( "context" "log/slog" "codeberg.org/MadsRC/gollm" "codeberg.org/MadsRC/gollm/providers/openai" ) // Create an LLM client client := llm.New() // Create and register a provider (OpenAI in this example) provider := openai.New( openai.WithAPIKey("your-api-key-here"), ) err := client.AddProvider(provider) if err != nil { // Handle error } // Create a generate request req := llm.ResponseRequest{ ModelID: "gpt-4", Instructions: "You are a helpful assistant.", Input: llm.TextInput{Text: "What's the weather like today?"}, Temperature: 0.7, } // Send the generate request resp, err := client.Generate(context.Background(), req) if err != nil { // Handle error } // Use the response for _, output := range resp.Output { if textOutput, ok := output.(llm.TextOutput); ok { fmt.Println(textOutput.Text) } }
Streaming API ¶
// Create a streaming generate request
req := llm.ResponseRequest{
    ModelID:      "gpt-4",
    Instructions: "You are a helpful assistant.",
    Input:        llm.TextInput{Text: "Write a short poem about clouds."},
    Stream:       true,
}

// Send the streaming generate request
stream, err := client.GenerateStream(context.Background(), req)
if err != nil {
    // Handle error
}
defer stream.Close()

// Process the stream
for {
    chunk, err := stream.Next()
    if err == io.EOF {
        break
    }
    if err != nil {
        // Handle error
        break
    }

    // Process the chunk
    fmt.Print(chunk.Delta.Text)
}
Using OpenRouter ¶
OpenRouter provides access to 100+ models from multiple providers through a single API:
import ( "codeberg.org/MadsRC/gollm/providers/openrouter" ) // Create OpenRouter provider provider := openrouter.New( openrouter.WithAPIKey("your-openrouter-api-key"), openrouter.WithHTTPReferer("https://yoursite.com"), openrouter.WithSiteName("Your App Name"), ) client := llm.New() err := client.AddProvider(provider) if err != nil { // Handle error } // Use any model available on OpenRouter req := llm.ResponseRequest{ ModelID: "anthropic/claude-3-sonnet", // OpenRouter model format Instructions: "You are a helpful assistant.", Input: llm.TextInput{Text: "Hello from OpenRouter!"}, } resp, err := client.Generate(context.Background(), req)
Model Routing ¶
GoLLM automatically routes model requests to the appropriate provider using a pluggable router system. By default, it uses StandardModelRouter which maps models to the first provider that claims to support them:
// Use default routing (StandardModelRouter)
client := llm.New()
You can also provide a custom router that implements your own routing logic:
// Custom router example
type CustomRouter struct {
    // Your custom routing logic
}

func (r *CustomRouter) RegisterProvider(ctx context.Context, provider ProviderClient) error {
    // Register provider with your custom logic
}

func (r *CustomRouter) RouteModel(ctx context.Context, modelID string) (ProviderClient, error) {
    // Route model requests with your custom logic
}

func (r *CustomRouter) ListAvailableModels(ctx context.Context) ([]Model, error) {
    // List models with your custom logic
}

// A complete ModelRouter also implements the alias-management methods:
// AddModelAlias, RemoveModelAlias, ListModelAliases, SetAliasOnlyMode and IsAliasOnlyMode.

// Use custom router
customRouter := &CustomRouter{}
client := llm.New(llm.WithModelRouter(customRouter))
Model Aliases and Whitelist Mode ¶
Model aliases provide friendly names for models and can be used to implement access control:
// Basic aliases - models can be accessed by alias or actual ID
client := llm.New(
    llm.WithModelAliases(map[string]string{
        "fast":   "openai/gpt-3.5-turbo",
        "smart":  "openai/gpt-4",
        "claude": "anthropic/claude-3-sonnet",
    }),
)

// Use alias in requests
req := llm.ResponseRequest{
    ModelID: "fast", // Resolves to "openai/gpt-3.5-turbo"
    Input:   llm.TextInput{Text: "Hello!"},
}
Alias-Only Mode (Whitelist) ¶
Enable alias-only mode to restrict access to only models that have aliases:
// Whitelist mode - ONLY aliased models can be accessed
client := llm.New(
    llm.WithModelAliases(map[string]string{
        "approved": "openai/gpt-4",
        "budget":   "openai/gpt-3.5-turbo",
    }),
    llm.WithAliasOnlyMode(true), // Enable whitelist mode
)

// This works - using an alias
resp, err := client.Generate(ctx, llm.ResponseRequest{
    ModelID: "approved", // ✅ Allowed
    Input:   llm.TextInput{Text: "Hello"},
})

// This fails - direct model ID not allowed
resp, err := client.Generate(ctx, llm.ResponseRequest{
    ModelID: "openai/gpt-4", // ❌ Rejected in alias-only mode
    Input:   llm.TextInput{Text: "Hello"},
})

// Dynamic control
router := client.GetModelRouter()
router.SetAliasOnlyMode(false)           // Disable whitelist mode
router.AddModelAlias("new", "new-model") // Add new alias
Tool Calling ¶
// Define a tool
tools := []llm.Tool{
    {
        Type: "function",
        Function: llm.Function{
            Name:        "get_weather",
            Description: "Get the current weather for a location",
            Parameters: map[string]interface{}{
                "type": "object",
                "properties": map[string]interface{}{
                    "location": map[string]interface{}{
                        "type":        "string",
                        "description": "City name",
                    },
                },
                "required": []string{"location"},
            },
        },
    },
}

// Create a request with tools
req := llm.ResponseRequest{
    ModelID:      "gpt-4",
    Instructions: "You are a helpful assistant with access to weather data.",
    Input:        llm.TextInput{Text: "What's the weather like in Paris?"},
    Tools:        tools,
}
Tool Registry ¶
The package includes a ToolRegistry for managing tools and their executors:
// Create a tool registry
registry := llm.NewToolRegistry(
    llm.WithToolRegistryLogger(logger),
    llm.WithToolRegistryTool(timeTool, timeExecutor),
)

// Execute a tool call
result, err := registry.ExecuteTool(toolCall)
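A ToolExecutor is any type whose Execute(toolCall ToolCall) (string, error) method produces the tool's result. A minimal sketch of an executor for the timeTool used above (the "get_time" name and its behaviour are illustrative, and the snippet assumes the llm and time packages are imported):

// timeExecutor is a hypothetical ToolExecutor that returns the current time.
type timeExecutor struct{}

func (e timeExecutor) Execute(toolCall llm.ToolCall) (string, error) {
    // toolCall.Function.Arguments carries the JSON-encoded arguments;
    // this example ignores them and simply returns the current time.
    return time.Now().Format(time.RFC3339), nil
}

timeTool := llm.Tool{
    Type: "function",
    Function: llm.Function{
        Name:        "get_time",
        Description: "Get the current time",
    },
}

registry := llm.NewToolRegistry(
    llm.WithToolRegistryTool(timeTool, timeExecutor{}),
)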
Multi-modal Input ¶
// Create a multi-modal request
req := llm.ResponseRequest{
    ModelID:      "gpt-4-vision-preview", // Must be a vision model
    Instructions: "You are a helpful assistant that can analyze images.",
    Input: llm.MultiInput{
        Items: []llm.InputItem{
            llm.TextInputItem{
                Text: "What's in this image?",
            },
            llm.ImageInputItem{
                URL: "https://example.com/image.jpg",
            },
        },
    },
}
Available Providers ¶
Currently implemented providers:
- openai: OpenAI's API (GPT-3.5, GPT-4, etc.) using the official openai-go SDK
- openrouter: OpenRouter's unified API providing access to multiple LLM providers through an OpenAI-compatible interface
Adding New Providers ¶
To add a new provider, implement the llm.ProviderClient interface:
type ProviderClient interface {
    ID() string
    Name() string
    ListModels(ctx context.Context) ([]Model, error)
    GetModel(ctx context.Context, modelID string) (*Model, error)
    Generate(ctx context.Context, req ResponseRequest) (*Response, error)
    GenerateStream(ctx context.Context, req ResponseRequest) (ResponseStream, error)
}
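A rough skeleton of a custom provider, shown as a sketch rather than a complete implementation (the myProvider type, its model IDs, and the canned response are illustrative; the snippet assumes the context, errors, and llm packages are imported):

type myProvider struct {
    apiKey string
}

func (p *myProvider) ID() string   { return "myprovider" }
func (p *myProvider) Name() string { return "My Provider" }

func (p *myProvider) ListModels(ctx context.Context) ([]llm.Model, error) {
    return []llm.Model{{ID: "my-model", Name: "My Model", Provider: p.ID()}}, nil
}

func (p *myProvider) GetModel(ctx context.Context, modelID string) (*llm.Model, error) {
    if modelID != "my-model" {
        return nil, errors.New("model not found")
    }
    return &llm.Model{ID: modelID, Name: "My Model", Provider: p.ID()}, nil
}

func (p *myProvider) Generate(ctx context.Context, req llm.ResponseRequest) (*llm.Response, error) {
    // A real provider would call its upstream API here; this sketch returns a canned response.
    return &llm.Response{
        ID:      "resp-1",
        ModelID: req.ModelID,
        Status:  llm.StatusCompleted,
        Output:  []llm.OutputItem{llm.TextOutput{Text: "stub response"}},
    }, nil
}

func (p *myProvider) GenerateStream(ctx context.Context, req llm.ResponseRequest) (llm.ResponseStream, error) {
    return nil, errors.New("streaming not implemented")
}

Once implemented, the provider is registered with client.AddProvider just like the built-in ones.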
Integration with Persona System ¶
The llm package works closely with the persona system, which defines different roles for the LLM agent. Each persona can specify its own set of tools and system prompts, and the llm package provides the underlying communication with the LLM providers.
Package llm provides an abstraction layer for interacting with various LLM providers.
Index ¶
- Variables
- type Annotation
- type Client
- func (c *Client) AddProvider(provider ProviderClient) error
- func (c *Client) Generate(ctx context.Context, req ResponseRequest) (*Response, error)
- func (c *Client) GenerateFromItems(ctx context.Context, modelID, instructions string, items []InputItem) (*Response, error)
- func (c *Client) GenerateStream(ctx context.Context, req ResponseRequest) (ResponseStream, error)
- func (c *Client) GenerateText(ctx context.Context, modelID, instructions, input string) (*Response, error)
- func (c *Client) GetModelRouter() ModelRouter
- func (c *Client) GetProvider(providerID string) (ProviderClient, error)
- func (c *Client) ListModels(ctx context.Context) ([]Model, error)
- func (c *Client) ListProviders() []ProviderClient
- type ClientOption
- type FileInputItem
- type FinishReason
- type Function
- type ImageContent
- type ImageInputItem
- type Input
- type InputItem
- type MCPToolExecutor
- type Message
- type MessageContent
- type Model
- type ModelCapabilities
- type ModelNotFoundError
- type ModelRouter
- type MultiInput
- type OutputDelta
- type OutputItem
- type ProviderClient
- type Response
- type ResponseChunk
- type ResponseError
- type ResponseFormat
- type ResponseRequest
- type ResponseStatus
- type ResponseStream
- type Role
- type StandardModelRouter
- func (r *StandardModelRouter) AddModelAlias(alias, actualModelID string)
- func (r *StandardModelRouter) IsAliasOnlyMode() bool
- func (r *StandardModelRouter) ListAvailableModels(ctx context.Context) ([]Model, error)
- func (r *StandardModelRouter) ListModelAliases() map[string]string
- func (r *StandardModelRouter) RegisterProvider(ctx context.Context, provider ProviderClient) error
- func (r *StandardModelRouter) RemoveModelAlias(alias string)
- func (r *StandardModelRouter) RouteModel(ctx context.Context, modelID string) (ProviderClient, error)
- func (r *StandardModelRouter) SetAliasOnlyMode(enabled bool)
- type StandardToolExecutor
- type TextContent
- type TextInput
- type TextInputItem
- type TextOutput
- type TokenUsage
- type Tool
- type ToolCall
- type ToolCallDelta
- type ToolCallFunction
- type ToolCallOutput
- type ToolChoice
- type ToolChoiceFunction
- type ToolExecutor
- type ToolRegistry
- type ToolRegistryOption
- type ToolResult
Examples ¶
Constants ¶
This section is empty.
Variables ¶
var ErrProviderNotFound = errors.New("provider not found")
ErrProviderNotFound is returned when a requested provider is not registered.
var GlobalClientOptions []ClientOption
GlobalClientOptions is a list of options that are applied to all clients created.
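A sketch of how these variables are typically used; note the assumption that GetProvider returns (or wraps) ErrProviderNotFound for unknown provider IDs:

// Apply a logger to every client created in this process.
llm.GlobalClientOptions = append(llm.GlobalClientOptions, llm.WithClientLogger(slog.Default()))

client := llm.New()

// Assumed: GetProvider reports missing providers via ErrProviderNotFound.
if _, err := client.GetProvider("openai"); errors.Is(err, llm.ErrProviderNotFound) {
    // Register the provider before using it.
}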
Functions ¶
This section is empty.
Types ¶
type Annotation ¶
type Annotation struct {
    Type     string
    Text     string
    StartPos int
    EndPos   int
    // Additional annotation data can be stored here
    Data map[string]any
}
Annotation represents annotations on text output.
type Client ¶
type Client struct {
// contains filtered or unexported fields
}
Client provides a unified way to interact with various LLM providers.
func New ¶
func New(options ...ClientOption) *Client
New creates a new LLM client with the given options.
func (*Client) AddProvider ¶
func (c *Client) AddProvider(provider ProviderClient) error
AddProvider registers a new provider with this client and the model router.
func (*Client) Generate ¶
func (c *Client) Generate(ctx context.Context, req ResponseRequest) (*Response, error)
Generate sends a response generation request, automatically discovering the provider for the model.
Example ¶
Example demonstrates basic usage of GoLLM with OpenAI provider.
package main import ( "context" "errors" "fmt" "log" "strings" "time" llm "codeberg.org/MadsRC/gollm" ) // mockProvider is a mock implementation of ProviderClient for testing. type mockProvider struct { id string name string } func (m *mockProvider) ID() string { return m.id } func (m *mockProvider) Name() string { return m.name } func (m *mockProvider) ListModels(ctx context.Context) ([]llm.Model, error) { models := []llm.Model{ { ID: "model1", Name: "Test Model 1", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, }, }, { ID: "model2", Name: "Test Model 2", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsJSON: true, }, }, } if m.id == "openai" { models = append(models, llm.Model{ ID: "gpt-4", Name: "GPT-4", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: false, }, }, llm.Model{ ID: "gpt-4-vision-preview", Name: "GPT-4 Vision Preview", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: true, }, }, ) } return models, nil } func (m *mockProvider) GetModel(ctx context.Context, modelID string) (*llm.Model, error) { if modelID == "model1" { return &llm.Model{ ID: "model1", Name: "Test Model 1", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, }, }, nil } if modelID == "model2" { return &llm.Model{ ID: "model2", Name: "Test Model 2", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsJSON: true, }, }, nil } if m.id == "openai" && (modelID == "gpt-4" || modelID == "gpt-4-vision-preview" || modelID == "openai/gpt-4" || modelID == "openai/gpt-4-vision-preview" || modelID == "openai/gpt-3.5-turbo") { baseModel := modelID if strings.HasPrefix(modelID, "openai/") { baseModel = strings.TrimPrefix(modelID, "openai/") } vision := baseModel == "gpt-4-vision-preview" return &llm.Model{ ID: modelID, Name: baseModel, Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: vision, }, }, nil } return nil, errors.New("model not found") } func (m *mockProvider) Generate(ctx context.Context, req llm.ResponseRequest) (*llm.Response, error) { return &llm.Response{ ID: "resp1", ModelID: req.ModelID, Status: llm.StatusCompleted, Output: []llm.OutputItem{ llm.TextOutput{ Text: "Hello from " + m.id + " (new interface)", }, }, Usage: &llm.TokenUsage{ PromptTokens: 10, CompletionTokens: 5, TotalTokens: 15, }, CreatedAt: time.Now(), }, nil } func (m *mockProvider) GenerateStream(ctx context.Context, req llm.ResponseRequest) (llm.ResponseStream, error) { return nil, errors.New("streaming not implemented in mock") } func main() { // Create an LLM client client := llm.New() // Add a provider (OpenAI in this example) // Note: Replace "your-api-key-here" with your actual API key provider := &mockProvider{id: "openai", name: "OpenAI"} _ = client.AddProvider(provider) // Create a generate request req := llm.ResponseRequest{ ModelID: "gpt-4", Instructions: "You are a helpful assistant.", Input: llm.TextInput{Text: "What's the weather like today?"}, Temperature: 0.7, } // Send the generate request resp, err := client.Generate(context.Background(), req) if err != nil { log.Fatal(err) } // Use the response for _, output := range resp.Output { if textOutput, ok := output.(llm.TextOutput); ok { fmt.Println(textOutput.Text) } } }
Output: Hello from openai (new interface)
Example (MultiModal) ¶
Example demonstrates multi-modal input with text and images.
package main import ( "context" "errors" "fmt" "log" "strings" "time" llm "codeberg.org/MadsRC/gollm" ) // mockProvider is a mock implementation of ProviderClient for testing. type mockProvider struct { id string name string } func (m *mockProvider) ID() string { return m.id } func (m *mockProvider) Name() string { return m.name } func (m *mockProvider) ListModels(ctx context.Context) ([]llm.Model, error) { models := []llm.Model{ { ID: "model1", Name: "Test Model 1", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, }, }, { ID: "model2", Name: "Test Model 2", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsJSON: true, }, }, } if m.id == "openai" { models = append(models, llm.Model{ ID: "gpt-4", Name: "GPT-4", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: false, }, }, llm.Model{ ID: "gpt-4-vision-preview", Name: "GPT-4 Vision Preview", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: true, }, }, ) } return models, nil } func (m *mockProvider) GetModel(ctx context.Context, modelID string) (*llm.Model, error) { if modelID == "model1" { return &llm.Model{ ID: "model1", Name: "Test Model 1", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, }, }, nil } if modelID == "model2" { return &llm.Model{ ID: "model2", Name: "Test Model 2", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsJSON: true, }, }, nil } if m.id == "openai" && (modelID == "gpt-4" || modelID == "gpt-4-vision-preview" || modelID == "openai/gpt-4" || modelID == "openai/gpt-4-vision-preview" || modelID == "openai/gpt-3.5-turbo") { baseModel := modelID if strings.HasPrefix(modelID, "openai/") { baseModel = strings.TrimPrefix(modelID, "openai/") } vision := baseModel == "gpt-4-vision-preview" return &llm.Model{ ID: modelID, Name: baseModel, Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: vision, }, }, nil } return nil, errors.New("model not found") } func (m *mockProvider) Generate(ctx context.Context, req llm.ResponseRequest) (*llm.Response, error) { return &llm.Response{ ID: "resp1", ModelID: req.ModelID, Status: llm.StatusCompleted, Output: []llm.OutputItem{ llm.TextOutput{ Text: "Hello from " + m.id + " (new interface)", }, }, Usage: &llm.TokenUsage{ PromptTokens: 10, CompletionTokens: 5, TotalTokens: 15, }, CreatedAt: time.Now(), }, nil } func (m *mockProvider) GenerateStream(ctx context.Context, req llm.ResponseRequest) (llm.ResponseStream, error) { return nil, errors.New("streaming not implemented in mock") } func main() { client := llm.New() provider := &mockProvider{id: "openai", name: "OpenAI"} _ = client.AddProvider(provider) // Create a multi-modal request req := llm.ResponseRequest{ ModelID: "gpt-4-vision-preview", // Must be a vision model Instructions: "You are a helpful assistant that can analyze images.", Input: llm.MultiInput{ Items: []llm.InputItem{ llm.TextInputItem{ Text: "What's in this image?", }, llm.ImageInputItem{ URL: "https://example.com/image.jpg", }, }, }, } resp, err := client.Generate(context.Background(), req) if err != nil { log.Fatal(err) } for _, output := range resp.Output { if textOutput, ok := output.(llm.TextOutput); ok { fmt.Println(textOutput.Text) } } }
Output: Hello from openai (new interface)
Example (WithTools) ¶
Example demonstrates tool calling functionality.
package main import ( "context" "errors" "fmt" "log" "strings" "time" llm "codeberg.org/MadsRC/gollm" ) // mockProvider is a mock implementation of ProviderClient for testing. type mockProvider struct { id string name string } func (m *mockProvider) ID() string { return m.id } func (m *mockProvider) Name() string { return m.name } func (m *mockProvider) ListModels(ctx context.Context) ([]llm.Model, error) { models := []llm.Model{ { ID: "model1", Name: "Test Model 1", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, }, }, { ID: "model2", Name: "Test Model 2", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsJSON: true, }, }, } if m.id == "openai" { models = append(models, llm.Model{ ID: "gpt-4", Name: "GPT-4", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: false, }, }, llm.Model{ ID: "gpt-4-vision-preview", Name: "GPT-4 Vision Preview", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: true, }, }, ) } return models, nil } func (m *mockProvider) GetModel(ctx context.Context, modelID string) (*llm.Model, error) { if modelID == "model1" { return &llm.Model{ ID: "model1", Name: "Test Model 1", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, }, }, nil } if modelID == "model2" { return &llm.Model{ ID: "model2", Name: "Test Model 2", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsJSON: true, }, }, nil } if m.id == "openai" && (modelID == "gpt-4" || modelID == "gpt-4-vision-preview" || modelID == "openai/gpt-4" || modelID == "openai/gpt-4-vision-preview" || modelID == "openai/gpt-3.5-turbo") { baseModel := modelID if strings.HasPrefix(modelID, "openai/") { baseModel = strings.TrimPrefix(modelID, "openai/") } vision := baseModel == "gpt-4-vision-preview" return &llm.Model{ ID: modelID, Name: baseModel, Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: vision, }, }, nil } return nil, errors.New("model not found") } func (m *mockProvider) Generate(ctx context.Context, req llm.ResponseRequest) (*llm.Response, error) { return &llm.Response{ ID: "resp1", ModelID: req.ModelID, Status: llm.StatusCompleted, Output: []llm.OutputItem{ llm.TextOutput{ Text: "Hello from " + m.id + " (new interface)", }, }, Usage: &llm.TokenUsage{ PromptTokens: 10, CompletionTokens: 5, TotalTokens: 15, }, CreatedAt: time.Now(), }, nil } func (m *mockProvider) GenerateStream(ctx context.Context, req llm.ResponseRequest) (llm.ResponseStream, error) { return nil, errors.New("streaming not implemented in mock") } func main() { client := llm.New() provider := &mockProvider{id: "openai", name: "OpenAI"} _ = client.AddProvider(provider) // Define a tool tools := []llm.Tool{ { Type: "function", Function: llm.Function{ Name: "get_weather", Description: "Get the current weather for a location", Parameters: map[string]any{ "type": "object", "properties": map[string]any{ "location": map[string]any{ "type": "string", "description": "City name", }, }, "required": []string{"location"}, }, }, }, } // Create a request with tools req := llm.ResponseRequest{ ModelID: "gpt-4", Instructions: "You are a helpful assistant with access to weather data.", Input: llm.TextInput{Text: "What's the weather like in Paris?"}, Tools: tools, } resp, err := client.Generate(context.Background(), req) if err != nil { 
log.Fatal(err) } for _, output := range resp.Output { if textOutput, ok := output.(llm.TextOutput); ok { fmt.Println(textOutput.Text) } } }
Output: Hello from openai (new interface)
func (*Client) GenerateFromItems ¶
func (c *Client) GenerateFromItems(ctx context.Context, modelID, instructions string, items []InputItem) (*Response, error)
GenerateFromItems is a convenience method for multi-modal generation using the new interface.
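A short usage sketch (the model ID and image URL are placeholders):

resp, err := client.GenerateFromItems(ctx, "gpt-4-vision-preview",
    "You are a helpful assistant that can analyze images.",
    []llm.InputItem{
        llm.TextInputItem{Text: "What's in this image?"},
        llm.ImageInputItem{URL: "https://example.com/image.jpg"},
    },
)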
func (*Client) GenerateStream ¶
func (c *Client) GenerateStream(ctx context.Context, req ResponseRequest) (ResponseStream, error)
GenerateStream sends a streaming response generation request, automatically discovering the provider for the model.
Example ¶
Example demonstrates streaming API usage.
package main import ( "context" "errors" "fmt" "strings" "time" llm "codeberg.org/MadsRC/gollm" ) // mockProvider is a mock implementation of ProviderClient for testing. type mockProvider struct { id string name string } func (m *mockProvider) ID() string { return m.id } func (m *mockProvider) Name() string { return m.name } func (m *mockProvider) ListModels(ctx context.Context) ([]llm.Model, error) { models := []llm.Model{ { ID: "model1", Name: "Test Model 1", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, }, }, { ID: "model2", Name: "Test Model 2", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsJSON: true, }, }, } if m.id == "openai" { models = append(models, llm.Model{ ID: "gpt-4", Name: "GPT-4", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: false, }, }, llm.Model{ ID: "gpt-4-vision-preview", Name: "GPT-4 Vision Preview", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: true, }, }, ) } return models, nil } func (m *mockProvider) GetModel(ctx context.Context, modelID string) (*llm.Model, error) { if modelID == "model1" { return &llm.Model{ ID: "model1", Name: "Test Model 1", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, }, }, nil } if modelID == "model2" { return &llm.Model{ ID: "model2", Name: "Test Model 2", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsJSON: true, }, }, nil } if m.id == "openai" && (modelID == "gpt-4" || modelID == "gpt-4-vision-preview" || modelID == "openai/gpt-4" || modelID == "openai/gpt-4-vision-preview" || modelID == "openai/gpt-3.5-turbo") { baseModel := modelID if strings.HasPrefix(modelID, "openai/") { baseModel = strings.TrimPrefix(modelID, "openai/") } vision := baseModel == "gpt-4-vision-preview" return &llm.Model{ ID: modelID, Name: baseModel, Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: vision, }, }, nil } return nil, errors.New("model not found") } func (m *mockProvider) Generate(ctx context.Context, req llm.ResponseRequest) (*llm.Response, error) { return &llm.Response{ ID: "resp1", ModelID: req.ModelID, Status: llm.StatusCompleted, Output: []llm.OutputItem{ llm.TextOutput{ Text: "Hello from " + m.id + " (new interface)", }, }, Usage: &llm.TokenUsage{ PromptTokens: 10, CompletionTokens: 5, TotalTokens: 15, }, CreatedAt: time.Now(), }, nil } func (m *mockProvider) GenerateStream(ctx context.Context, req llm.ResponseRequest) (llm.ResponseStream, error) { return nil, errors.New("streaming not implemented in mock") } func main() { // Create an LLM client client := llm.New() // Add a provider provider := &mockProvider{id: "openai", name: "OpenAI"} _ = client.AddProvider(provider) // Create a streaming generate request req := llm.ResponseRequest{ ModelID: "gpt-4", Instructions: "You are a helpful assistant.", Input: llm.TextInput{Text: "Write a short poem about clouds."}, Stream: true, } // Note: This example shows the API structure // Real streaming implementation would require a proper provider _, err := client.GenerateStream(context.Background(), req) if err != nil { fmt.Println("Streaming not implemented in mock provider") } }
Output: Streaming not implemented in mock provider
func (*Client) GenerateText ¶
func (c *Client) GenerateText(ctx context.Context, modelID, instructions, input string) (*Response, error)
GenerateText is a convenience method for simple text generation using the new interface.
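A short usage sketch; the convenience method takes the model ID, instructions, and input text directly instead of a full ResponseRequest:

resp, err := client.GenerateText(ctx, "gpt-4",
    "You are a helpful assistant.",
    "What's the weather like today?",
)
if err != nil {
    // Handle error
}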
func (*Client) GetModelRouter ¶
func (c *Client) GetModelRouter() ModelRouter
GetModelRouter returns the model router used by this client. This allows access to router-specific features like adding aliases.
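For example, aliases can be managed at runtime through the returned router:

router := client.GetModelRouter()
router.AddModelAlias("default", "openai/gpt-4")
aliases := router.ListModelAliases()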
func (*Client) GetProvider ¶
func (c *Client) GetProvider(providerID string) (ProviderClient, error)
GetProvider returns a provider by its ID.
func (*Client) ListModels ¶
func (c *Client) ListModels(ctx context.Context) ([]Model, error)
ListModels returns all available models across all providers.
func (*Client) ListProviders ¶
func (c *Client) ListProviders() []ProviderClient
ListProviders returns all registered providers.
type ClientOption ¶
type ClientOption interface {
// contains filtered or unexported methods
}
ClientOption configures a Client.
func WithAliasOnlyMode ¶
func WithAliasOnlyMode(enabled bool) ClientOption
WithAliasOnlyMode enables alias-only mode where only models with aliases can be accessed. This provides a whitelist approach for controlling which models are available.
Example ¶
Example demonstrates alias-only mode for access control.
package main import ( "context" "errors" "fmt" "strings" "time" llm "codeberg.org/MadsRC/gollm" ) // mockProvider is a mock implementation of ProviderClient for testing. type mockProvider struct { id string name string } func (m *mockProvider) ID() string { return m.id } func (m *mockProvider) Name() string { return m.name } func (m *mockProvider) ListModels(ctx context.Context) ([]llm.Model, error) { models := []llm.Model{ { ID: "model1", Name: "Test Model 1", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, }, }, { ID: "model2", Name: "Test Model 2", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsJSON: true, }, }, } if m.id == "openai" { models = append(models, llm.Model{ ID: "gpt-4", Name: "GPT-4", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: false, }, }, llm.Model{ ID: "gpt-4-vision-preview", Name: "GPT-4 Vision Preview", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: true, }, }, ) } return models, nil } func (m *mockProvider) GetModel(ctx context.Context, modelID string) (*llm.Model, error) { if modelID == "model1" { return &llm.Model{ ID: "model1", Name: "Test Model 1", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, }, }, nil } if modelID == "model2" { return &llm.Model{ ID: "model2", Name: "Test Model 2", Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsJSON: true, }, }, nil } if m.id == "openai" && (modelID == "gpt-4" || modelID == "gpt-4-vision-preview" || modelID == "openai/gpt-4" || modelID == "openai/gpt-4-vision-preview" || modelID == "openai/gpt-3.5-turbo") { baseModel := modelID if strings.HasPrefix(modelID, "openai/") { baseModel = strings.TrimPrefix(modelID, "openai/") } vision := baseModel == "gpt-4-vision-preview" return &llm.Model{ ID: modelID, Name: baseModel, Provider: m.id, Capabilities: llm.ModelCapabilities{ SupportsStreaming: true, SupportsJSON: true, SupportsFunctions: true, SupportsVision: vision, }, }, nil } return nil, errors.New("model not found") } func (m *mockProvider) Generate(ctx context.Context, req llm.ResponseRequest) (*llm.Response, error) { return &llm.Response{ ID: "resp1", ModelID: req.ModelID, Status: llm.StatusCompleted, Output: []llm.OutputItem{ llm.TextOutput{ Text: "Hello from " + m.id + " (new interface)", }, }, Usage: &llm.TokenUsage{ PromptTokens: 10, CompletionTokens: 5, TotalTokens: 15, }, CreatedAt: time.Now(), }, nil } func (m *mockProvider) GenerateStream(ctx context.Context, req llm.ResponseRequest) (llm.ResponseStream, error) { return nil, errors.New("streaming not implemented in mock") } func main() { // Create client with whitelist mode enabled client := llm.New( llm.WithModelAliases(map[string]string{ "fast": "openai/gpt-3.5-turbo", "smart": "openai/gpt-4", "vision": "openai/gpt-4-vision-preview", }), llm.WithAliasOnlyMode(true), // Only aliased models allowed ) provider := &mockProvider{id: "openai", name: "OpenAI"} _ = client.AddProvider(provider) // This works - using an alias req := llm.ResponseRequest{ ModelID: "smart", // Resolves to "openai/gpt-4" Input: llm.TextInput{Text: "Hello!"}, } resp, err := client.Generate(context.Background(), req) if err == nil && resp != nil { fmt.Println("Alias request succeeded") } // This would fail - direct model ID not allowed in alias-only mode req.ModelID = "openai/gpt-4" // Direct ID rejected _, err = 
client.Generate(context.Background(), req) if err != nil { fmt.Println("Direct model ID rejected in alias-only mode") } }
Output: Alias request succeeded Direct model ID rejected in alias-only mode
func WithClientLogger ¶
func WithClientLogger(logger *slog.Logger) ClientOption
WithClientLogger sets the logger for the client.
func WithModelAliases ¶
func WithModelAliases(aliases map[string]string) ClientOption
WithModelAliases sets up model aliases on the default StandardModelRouter. If a custom router is already set, this option is ignored. aliases should be a map of alias name to actual model ID. Example: WithModelAliases(map[string]string{"gpt4": "openai/gpt-4", "claude": "anthropic/claude-3-sonnet"})
func WithModelRouter ¶
func WithModelRouter(router ModelRouter) ClientOption
WithModelRouter sets a custom model router for the client.
type FileInputItem ¶
FileInputItem represents file content in multi-modal input.
func (FileInputItem) ItemType ¶
func (f FileInputItem) ItemType() string
ItemType implements the InputItem interface.
type FinishReason ¶
type FinishReason string
FinishReason defines why a language model's generation turn ended.
const (
    // FinishReasonUnspecified indicates an unspecified reason.
    FinishReasonUnspecified FinishReason = ""
    // FinishReasonStop indicates generation stopped due to a stop sequence.
    FinishReasonStop FinishReason = "stop"
    // FinishReasonLength indicates generation stopped due to reaching max length.
    FinishReasonLength FinishReason = "length"
    // FinishReasonToolCalls indicates generation stopped to make tool calls.
    FinishReasonToolCalls FinishReason = "tool_calls"
    // FinishReasonError indicates generation stopped due to an error.
    FinishReasonError FinishReason = "error"
    // FinishReasonContentFilter indicates generation stopped due to content filtering.
    FinishReasonContentFilter FinishReason = "content_filter"
)
type Function ¶
type Function struct {
    // Name is the name of the function.
    Name string `json:"name"`
    // Description is a human-readable description of the function.
    Description string `json:"description,omitempty"`
    // Parameters represents the function's parameters in JSON schema format.
    Parameters map[string]any `json:"parameters,omitempty"`
}
Function represents a callable function definition.
type ImageContent ¶
type ImageContent struct {
URL string `json:"url"`
}
ImageContent represents image content in a message.
func (ImageContent) MessageContentType ¶
func (i ImageContent) MessageContentType() string
MessageContentType returns the type of message content.
type ImageInputItem ¶
ImageInputItem represents image content in multi-modal input.
func (ImageInputItem) ItemType ¶
func (i ImageInputItem) ItemType() string
ItemType implements the InputItem interface.
type Input ¶
type Input interface {
InputType() string
}
Input represents the input content for a response request.
type InputItem ¶
type InputItem interface {
ItemType() string
}
InputItem represents different types of input content.
type MCPToolExecutor ¶
type MCPToolExecutor struct {
// contains filtered or unexported fields
}
MCPToolExecutor executes tools via the MCP protocol.
func NewMCPToolExecutor ¶
func NewMCPToolExecutor(client *mcp.Client, logger *slog.Logger) *MCPToolExecutor
NewMCPToolExecutor creates a new MCP tool executor.
type Message ¶
type Message struct {
    // Role is who sent this message (user, assistant, system, etc).
    Role Role `json:"role"`
    // Content is the content of the message, which can be either a string
    // for simple text messages or a slice of MessageContent for multi-modal content.
    Content any `json:"content"`
    // Name is an optional identifier for the sender, used in some providers.
    Name string `json:"name,omitempty"`
    // ToolCalls contains any tool calls made by the assistant in this message.
    ToolCalls []ToolCall `json:"tool_calls,omitempty"`
    // ToolCallID is the ID of the tool call this message is responding to.
    ToolCallID string `json:"tool_call_id,omitempty"`
}
Message represents a single message in a chat conversation.
type MessageContent ¶
type MessageContent interface {
MessageContentType() string
}
MessageContent represents a part of a message's content.
type Model ¶
type Model struct {
    // ID is the unique identifier for this model within its provider.
    ID string
    // Name is a human-readable name for the model.
    Name string
    // Provider is the provider ID this model belongs to.
    Provider string
    // Capabilities describe what the model can do.
    Capabilities ModelCapabilities
}
Model represents a specific model from a provider.
type ModelCapabilities ¶
type ModelCapabilities struct {
    // SupportsStreaming indicates if the model can stream responses.
    SupportsStreaming bool
    // SupportsJSON indicates if the model supports JSON mode responses.
    SupportsJSON bool
    // SupportsFunctions indicates if the model supports function calling.
    SupportsFunctions bool
    // SupportsVision indicates if the model supports analyzing images.
    SupportsVision bool
}
ModelCapabilities describes the capabilities of a model.
type ModelNotFoundError ¶
type ModelNotFoundError struct {
ModelID string
}
ModelNotFoundError is returned when no provider supports the requested model.
func (*ModelNotFoundError) Error ¶
func (e *ModelNotFoundError) Error() string
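A sketch of detecting this error with errors.As, assuming the error is returned (possibly wrapped) from routing or generation:

var notFound *llm.ModelNotFoundError
if _, err := client.Generate(ctx, req); errors.As(err, &notFound) {
    fmt.Printf("no provider supports model %q\n", notFound.ModelID)
}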
type ModelRouter ¶
type ModelRouter interface {
    // RegisterProvider registers a provider with the router for the models it supports.
    RegisterProvider(ctx context.Context, provider ProviderClient) error
    // RouteModel returns the provider that should handle requests for the given model.
    RouteModel(ctx context.Context, modelID string) (ProviderClient, error)
    // ListAvailableModels returns all models available across all registered providers.
    ListAvailableModels(ctx context.Context) ([]Model, error)
    // AddModelAlias adds an alias that maps a friendly name to an actual model ID.
    // For example: AddModelAlias("gpt4", "openai/gpt-4") or AddModelAlias("claude", "bedrock/claude-3-sonnet")
    AddModelAlias(alias, actualModelID string)
    // RemoveModelAlias removes a model alias.
    RemoveModelAlias(alias string)
    // ListModelAliases returns all currently configured model aliases.
    ListModelAliases() map[string]string
    // SetAliasOnlyMode enables or disables alias-only mode.
    // When enabled, only models that have aliases can be accessed.
    SetAliasOnlyMode(enabled bool)
    // IsAliasOnlyMode returns whether alias-only mode is currently enabled.
    IsAliasOnlyMode() bool
}
ModelRouter is responsible for routing model requests to the appropriate provider. This interface allows for custom routing logic to be implemented.
type MultiInput ¶
type MultiInput struct {
Items []InputItem
}
MultiInput represents complex multi-modal content input.
func (MultiInput) InputType ¶
func (m MultiInput) InputType() string
InputType implements the Input interface.
type OutputDelta ¶
type OutputDelta struct {
    // Text contains new text content, if any.
    Text string
    // ToolCall contains tool call information, if any.
    ToolCall *ToolCallDelta
}
OutputDelta represents the content delta in a streaming response chunk.
type OutputItem ¶
type OutputItem interface {
OutputType() string
}
OutputItem represents different types of output content.
type ProviderClient ¶
type ProviderClient interface {
    // ID returns the unique identifier for this provider.
    ID() string
    // Name returns a human-readable name for this provider.
    Name() string
    // ListModels returns all available models from this provider.
    ListModels(ctx context.Context) ([]Model, error)
    // GetModel returns information for a specific model.
    GetModel(ctx context.Context, modelID string) (*Model, error)
    // Generate sends a response generation request.
    Generate(ctx context.Context, req ResponseRequest) (*Response, error)
    // GenerateStream sends a streaming response generation request.
    GenerateStream(ctx context.Context, req ResponseRequest) (ResponseStream, error)
}
ProviderClient is an interface that must be implemented by all LLM provider clients.
type Response ¶
type Response struct {
    // ID uniquely identifies this response.
    ID string
    // ModelID is the model that generated this response.
    ModelID string
    // Status indicates the current status of the response.
    Status ResponseStatus
    // Error contains error information if Status is StatusFailed.
    Error *ResponseError
    // IncompleteReason explains why the response is incomplete if Status is StatusIncomplete.
    IncompleteReason *string
    // Output contains the generated content.
    Output []OutputItem
    // Usage contains token usage information.
    Usage *TokenUsage
    // CreatedAt is when this response was created.
    CreatedAt time.Time
}
Response represents a model response.
type ResponseChunk ¶
type ResponseChunk struct {
    // ID uniquely identifies the response this chunk belongs to.
    ID string
    // Delta contains the new content in this chunk.
    Delta OutputDelta
    // Finished indicates if this is the final chunk in the stream.
    Finished bool
    // Status indicates the current status of the response.
    Status ResponseStatus
    // Usage contains token usage information for this chunk.
    Usage *TokenUsage
}
ResponseChunk represents a single chunk of a streaming response.
type ResponseError ¶
type ResponseError struct {
    // Code is the error code.
    Code string
    // Message is the error message.
    Message string
    // Details contains additional error details.
    Details map[string]any
}
ResponseError represents an error in a response.
type ResponseFormat ¶
type ResponseFormat struct {
    // Type is the response format type, e.g., "json_object".
    Type string `json:"type"`
}
ResponseFormat specifies the format of the response.
type ResponseRequest ¶
type ResponseRequest struct {
    // ModelID is the model to use for generation.
    ModelID string
    // Instructions (system/developer message) - replaces system messages in the messages array.
    Instructions string
    // Input - the actual content to process (replaces user messages).
    Input Input
    // MaxOutputTokens is the maximum number of tokens to generate.
    MaxOutputTokens int
    // Temperature controls randomness in generation (0-2).
    Temperature float32
    // TopP controls diversity of generation (0-1).
    TopP float32
    // Tools is a list of tools the model can call.
    Tools []Tool
    // ToolChoice controls how the model uses tools.
    ToolChoice *ToolChoice
    // ResponseFormat specifies the format of the response.
    ResponseFormat *ResponseFormat
    // Stream indicates if responses should be streamed.
    Stream bool
}
ResponseRequest represents a request to generate a model response.
type ResponseStatus ¶
type ResponseStatus string
ResponseStatus represents the status of a response.
const (
    StatusCompleted  ResponseStatus = "completed"
    StatusFailed     ResponseStatus = "failed"
    StatusInProgress ResponseStatus = "in_progress"
    StatusIncomplete ResponseStatus = "incomplete"
)
Response status constants.
type ResponseStream ¶
type ResponseStream interface {
    // Next returns the next chunk in the stream.
    Next() (*ResponseChunk, error)
    // Close closes the stream.
    Close() error
    // Err returns any error that occurred during streaming, other than io.EOF.
    Err() error
}
ResponseStream represents a streaming response using the new interface.
type StandardModelRouter ¶
type StandardModelRouter struct {
// contains filtered or unexported fields
}
StandardModelRouter is the default implementation of ModelRouter. It routes models to the first provider that supports them and supports model aliases.
func NewStandardModelRouter ¶
func NewStandardModelRouter(logger *slog.Logger) *StandardModelRouter
NewStandardModelRouter creates a new standard model router.
func (*StandardModelRouter) AddModelAlias ¶
func (r *StandardModelRouter) AddModelAlias(alias, actualModelID string)
AddModelAlias adds an alias that maps a friendly name to an actual model ID.
func (*StandardModelRouter) IsAliasOnlyMode ¶
func (r *StandardModelRouter) IsAliasOnlyMode() bool
IsAliasOnlyMode returns whether alias-only mode is currently enabled.
func (*StandardModelRouter) ListAvailableModels ¶
func (r *StandardModelRouter) ListAvailableModels(ctx context.Context) ([]Model, error)
ListAvailableModels returns all models available across all providers. In alias-only mode, only returns models that have aliases (using alias names as model IDs).
func (*StandardModelRouter) ListModelAliases ¶
func (r *StandardModelRouter) ListModelAliases() map[string]string
ListModelAliases returns all currently configured model aliases.
func (*StandardModelRouter) RegisterProvider ¶
func (r *StandardModelRouter) RegisterProvider(ctx context.Context, provider ProviderClient) error
RegisterProvider registers a provider and maps all its supported models.
func (*StandardModelRouter) RemoveModelAlias ¶
func (r *StandardModelRouter) RemoveModelAlias(alias string)
RemoveModelAlias removes a model alias.
func (*StandardModelRouter) RouteModel ¶
func (r *StandardModelRouter) RouteModel(ctx context.Context, modelID string) (ProviderClient, error)
RouteModel returns the provider that should handle the given model.
func (*StandardModelRouter) SetAliasOnlyMode ¶
func (r *StandardModelRouter) SetAliasOnlyMode(enabled bool)
SetAliasOnlyMode enables or disables alias-only mode.
type StandardToolExecutor ¶
type StandardToolExecutor struct {
// contains filtered or unexported fields
}
StandardToolExecutor is a simple implementation of ToolExecutor for standard tools.
type TextContent ¶
type TextContent struct {
Text string `json:"text"`
}
TextContent represents plain text content in a message.
func (TextContent) MessageContentType ¶
func (t TextContent) MessageContentType() string
MessageContentType returns the type of message content.
type TextInputItem ¶
type TextInputItem struct {
Text string
}
TextInputItem represents text content in multi-modal input.
func (TextInputItem) ItemType ¶
func (t TextInputItem) ItemType() string
ItemType implements the InputItem interface.
type TextOutput ¶
type TextOutput struct {
    Text        string
    Annotations []Annotation
}
TextOutput represents text content in the response.
func (TextOutput) OutputType ¶
func (t TextOutput) OutputType() string
OutputType implements the OutputItem interface.
type TokenUsage ¶
type TokenUsage struct {
    // PromptTokens is the number of tokens in the prompt.
    PromptTokens int
    // CompletionTokens is the number of tokens in the completion.
    CompletionTokens int
    // TotalTokens is the total number of tokens used.
    TotalTokens int
}
TokenUsage provides token count information.
type Tool ¶
type Tool struct {
    // Type is the type of the tool.
    Type string `json:"type"`
    // Function is the function definition if Type is "function".
    Function Function `json:"function"`
}
Tool represents a tool that can be called by the model.
type ToolCall ¶
type ToolCall struct {
    // ID uniquely identifies this tool call.
    ID string `json:"id"`
    // Type is the type of tool being called.
    Type string `json:"type"`
    // Function contains details about the function call if Type is "function".
    Function ToolCallFunction `json:"function"`
}
ToolCall represents a call to a tool by the model.
type ToolCallDelta ¶
ToolCallDelta represents tool call information in a response delta.
type ToolCallFunction ¶
type ToolCallFunction struct {
    // Name is the name of the function to call.
    Name string `json:"name"`
    // Arguments is a JSON string containing the function arguments.
    Arguments string `json:"arguments"`
    // IsComplete indicates if the arguments field contains complete valid JSON.
    // This is particularly important during streaming where arguments may come in fragments.
    IsComplete bool `json:"-"`
}
ToolCallFunction represents a function call made by the model.
type ToolCallOutput ¶
ToolCallOutput represents a tool call in the response.
func (ToolCallOutput) OutputType ¶
func (t ToolCallOutput) OutputType() string
OutputType implements the OutputItem interface.
type ToolChoice ¶
type ToolChoice struct {
    // Type is the tool choice type ("none", "auto", or "function").
    Type string `json:"type"`
    // Function is required if Type is "function".
    Function *ToolChoiceFunction `json:"function,omitempty"`
}
ToolChoice controls how the model chooses to call tools.
type ToolChoiceFunction ¶
type ToolChoiceFunction struct {
    // Name is the name of the function.
    Name string `json:"name"`
}
ToolChoiceFunction specifies a function the model should use.
type ToolExecutor ¶
type ToolExecutor interface {
    // Execute executes a tool call and returns the result
    Execute(toolCall ToolCall) (string, error)
}
ToolExecutor is responsible for executing tools and returning their results.
type ToolRegistry ¶
type ToolRegistry struct {
// contains filtered or unexported fields
}
ToolRegistry maintains a registry of available tools and their implementations.
func NewToolRegistry ¶
func NewToolRegistry(opts ...ToolRegistryOption) *ToolRegistry
NewToolRegistry creates a new tool registry with optional configuration.
func (*ToolRegistry) ExecuteTool ¶
func (tr *ToolRegistry) ExecuteTool(toolCall ToolCall) (string, error)
ExecuteTool executes a tool call and returns the result.
func (*ToolRegistry) GetTools ¶
func (tr *ToolRegistry) GetTools() []Tool
GetTools returns all registered tools.
func (*ToolRegistry) RegisterTool ¶
func (tr *ToolRegistry) RegisterTool(tool Tool, executor ToolExecutor)
RegisterTool adds a tool to the registry.
type ToolRegistryOption ¶
type ToolRegistryOption func(*ToolRegistry)
ToolRegistryOption defines functional options for configuring ToolRegistry.
func WithToolRegistryLogger ¶
func WithToolRegistryLogger(logger *slog.Logger) ToolRegistryOption
WithToolRegistryLogger sets the logger for the ToolRegistry.
func WithToolRegistryTool ¶
func WithToolRegistryTool(tool Tool, executor ToolExecutor) ToolRegistryOption
WithToolRegistryTool registers a tool during initialization.
type ToolResult ¶
type ToolResult struct {
    // ID is the unique identifier of the tool call this result corresponds to.
    ID string `json:"id"`
    // ToolName is the name of the tool that was called.
    // While the LLM knows this from the original ToolCall, including it can be useful for logging and consistency.
    ToolName string `json:"tool_name"`
    // Content is the result of the tool's execution, typically a string (e.g., JSON output).
    Content string `json:"content"`
    // IsError indicates whether the tool execution resulted in an error.
    // If true, Content might contain an error message or a serialized error object.
    IsError bool `json:"is_error,omitempty"`
}
ToolResult represents the result of a tool's execution. This is sent back to the LLM to inform it of the outcome of a tool call.
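A sketch of assembling a ToolResult from a registry execution; how the result is then delivered back to the model is provider-specific and not shown here:

content, execErr := registry.ExecuteTool(toolCall)
result := llm.ToolResult{
    ID:       toolCall.ID,
    ToolName: toolCall.Function.Name,
    Content:  content,
    IsError:  execErr != nil,
}
if execErr != nil {
    result.Content = execErr.Error()
}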