Documentation ¶
Overview ¶
Package chatdelta provides a unified interface for interacting with multiple AI APIs including OpenAI, Anthropic Claude, and Google Gemini.
The package supports both synchronous and streaming responses, conversation handling, parallel execution across multiple providers, comprehensive error handling, and configurable retry logic with exponential backoff.
Basic usage:
client, err := chatdelta.CreateClient("openai", "your-api-key", "gpt-3.5-turbo", nil)
if err != nil {
log.Fatal(err)
}
response, err := client.SendPrompt(context.Background(), "Hello, how are you?")
if err != nil {
log.Fatal(err)
}
fmt.Println(response)
Advanced usage with configuration:
config := chatdelta.NewClientConfig().
SetTimeout(60 * time.Second).
SetTemperature(0.7).
SetMaxTokens(2048).
SetSystemMessage("You are a helpful assistant.")
client, err := chatdelta.CreateClient("claude", "your-api-key", "claude-3-haiku-20240307", config)
if err != nil {
log.Fatal(err)
}
Conversation handling:
conversation := chatdelta.NewConversation()
conversation.AddSystemMessage("You are a helpful math tutor.")
conversation.AddUserMessage("What is 2 + 2?")
conversation.AddAssistantMessage("2 + 2 equals 4.")
conversation.AddUserMessage("What about 3 + 3?")
response, err := client.SendConversation(context.Background(), conversation)
Streaming responses:
if client.SupportsStreaming() {
chunks, err := client.StreamPrompt(context.Background(), "Write a poem")
if err != nil {
log.Fatal(err)
}
for chunk := range chunks {
fmt.Print(chunk.Content)
if chunk.Finished {
break
}
}
}
Parallel execution:
clients := []chatdelta.AIClient{client1, client2, client3}
results := chatdelta.ExecuteParallel(context.Background(), clients, "What is the meaning of life?")
for _, result := range results {
fmt.Printf("%s: %s\n", result.ClientName, result.Result)
}
Environment variables: The library automatically detects API keys from environment variables:
- OpenAI: OPENAI_API_KEY or CHATGPT_API_KEY
- Anthropic: ANTHROPIC_API_KEY or CLAUDE_API_KEY
- Google: GOOGLE_API_KEY or GEMINI_API_KEY
Error handling: The library provides comprehensive error handling with specific error types and helper functions for error classification:
_, err := client.SendPrompt(ctx, "Hello")
if err != nil {
if chatdelta.IsAuthenticationError(err) {
// Handle authentication error
} else if chatdelta.IsNetworkError(err) {
// Handle network error
} else if chatdelta.IsRetryableError(err) {
// Library will automatically retry retryable errors
}
}
Package chatdelta provides a unified interface for interacting with multiple AI APIs. metrics.go implements thread-safe performance metrics collection for AI client interactions. Counters are backed by sync/atomic operations so ClientMetrics can be shared across goroutines without additional locking.
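ClientMetrics' fields are unexported, but the atomic-counter pattern the file describes can be sketched in a few lines. The type and field names below are illustrative, not the package's actual ones:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// metrics sketches the pattern metrics.go describes: counters updated via
// sync/atomic so the struct can be shared across goroutines without a mutex.
type metrics struct {
	requests atomic.Int64
	errors   atomic.Int64
}

func (m *metrics) RecordRequest(failed bool) {
	m.requests.Add(1)
	if failed {
		m.errors.Add(1)
	}
}

func main() {
	var m metrics // zero value is ready to use
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			m.RecordRequest(i%10 == 0) // every tenth request "fails"
		}(i)
	}
	wg.Wait()
	fmt.Println(m.requests.Load(), m.errors.Load()) // 100 10
}
```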
middleware.go provides composable request-interceptor middleware for AIClient, plus JSON response validation and API-error extraction helpers shared across provider implementations.
The Middleware / MiddlewareClient types model the same interceptor chain pattern as the Rust MiddlewareClient, adapted to Go idioms: instead of async traits, each Middleware is a plain function that calls next() to continue the chain. The lower-level retry loop from utils.go is still available; MiddlewareClient adds an orthogonal, higher-level interception layer (logging, per-call timeouts, etc.).
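The package's exact Middleware signature is not shown on this page, but the next()-calling chain it describes can be sketched with hypothetical local types:

```go
package main

import "fmt"

// handler is the terminal operation; middleware wraps it. These type names
// are illustrative — the package's actual Middleware signature may differ.
type handler func(prompt string) (string, error)
type middleware func(prompt string, next handler) (string, error)

// chain composes middlewares so the first one registered runs outermost,
// mirroring the interceptor-chain pattern described above.
func chain(h handler, mws ...middleware) handler {
	for i := len(mws) - 1; i >= 0; i-- {
		mw, next := mws[i], h
		h = func(p string) (string, error) { return mw(p, next) }
	}
	return h
}

func main() {
	logging := func(p string, next handler) (string, error) {
		fmt.Println("-> sending:", p)
		out, err := next(p)
		fmt.Println("<- received:", out)
		return out, err
	}
	base := handler(func(p string) (string, error) { return "echo: " + p, nil })
	wrapped := chain(base, logging)
	resp, _ := wrapped("hello")
	fmt.Println(resp) // echo: hello
}
```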
mock.go implements a mock AIClient for use in unit tests. Responses are pre-loaded into a queue and dequeued in order; when the queue is exhausted a default fallback response is returned.
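The queue-then-fallback behavior can be sketched locally (MockClient's own internals are not shown here, so the names below are illustrative):

```go
package main

import "fmt"

// mockQueue sketches mock.go's described behavior: responses are dequeued
// in order, and a fallback is returned once the queue is empty.
type mockQueue struct {
	queue    []string
	fallback string
}

func (m *mockQueue) Queue(s string) { m.queue = append(m.queue, s) }

func (m *mockQueue) Next() string {
	if len(m.queue) == 0 {
		return m.fallback
	}
	s := m.queue[0]
	m.queue = m.queue[1:]
	return s
}

func main() {
	m := &mockQueue{fallback: "default response"}
	m.Queue("first")
	m.Queue("second")
	fmt.Println(m.Next()) // first
	fmt.Println(m.Next()) // second
	fmt.Println(m.Next()) // default response
}
```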
observability.go provides metrics export and structured logging infrastructure for monitoring AI client performance.
Two MetricsExporter implementations are provided:
- TextExporter: always available, writes human-readable key=value lines to stderr.
- PrometheusExporter: exports metrics in Prometheus format via a dedicated registry. It mirrors the seven metrics tracked by the Rust observability module.
orchestration.go implements multi-model AI orchestration with seven coordination strategies, confidence-based response fusion, consensus analysis, task-type routing, and a TTL-based in-memory response cache backed by sync.Map. No external libraries beyond the standard library are used.
prompt_optimizer.go implements a context-aware prompt engineering engine that analyses incoming prompts, classifies their task type and target expertise level, then applies a set of configurable enhancement techniques to improve response quality.
sse.go implements a Server-Sent Events (SSE) parser for reading streaming HTTP responses. It uses only the standard library (bufio, strings) — no external SSE dependency is required.
Usage:
reader := chatdelta.NewSseReader(httpResp.Body)
for {
event, err := reader.Next()
if event == nil || err != nil {
break
}
fmt.Println(event.Data)
}
Example (ErrorHandling) ¶
Example_errorHandling demonstrates error handling
// Try to create a client with invalid API key
client, err := chatdelta.CreateClient("openai", "invalid-key", "", nil)
if err != nil {
fmt.Printf("Error creating client: %v\n", err)
return
}
ctx := context.Background()
_, err = client.SendPrompt(ctx, "Hello")
if err != nil {
// Check error type
if chatdelta.IsAuthenticationError(err) {
fmt.Println("Authentication error - check your API key")
} else if chatdelta.IsNetworkError(err) {
fmt.Println("Network error - check your connection")
} else if chatdelta.IsRetryableError(err) {
fmt.Println("Retryable error - the library will automatically retry")
} else {
fmt.Printf("Other error: %v\n", err)
}
}
Index ¶
- Constants
- Variables
- func ExecuteWithExponentialBackoff(ctx context.Context, retries int, baseDelay time.Duration, ...) error
- func ExecuteWithRetry(ctx context.Context, retries int, operation func() error) error
- func GetAvailableProviders() []string
- func IsAuthenticationError(err error) bool
- func IsNetworkError(err error) bool
- func IsRetryableError(err error) bool
- func MergeStreamChunks(chunks <-chan StreamChunk) (string, error)
- func ParseSseData(line string) (string, bool)
- func QuickPrompt(provider, prompt string) (string, error)
- func StreamConversationToString(ctx context.Context, client AIClient, conversation *Conversation) (string, error)
- func StreamToChannel(ctx context.Context, src <-chan StreamChunk, ...) <-chan StreamChunk
- func StreamToString(ctx context.Context, client AIClient, prompt string) (string, error)
- func ValidateConfig(config *ClientConfig) error
- type AIClient
- type AiOrchestrator
- type AiResponse
- type ChatSession
- func (s *ChatSession) AddMessage(message Message)
- func (s *ChatSession) Clear()
- func (s *ChatSession) History() *Conversation
- func (s *ChatSession) IsEmpty() bool
- func (s *ChatSession) Len() int
- func (s *ChatSession) ResetWithSystem(message string)
- func (s *ChatSession) Send(ctx context.Context, message string) (string, error)
- func (s *ChatSession) SendWithMetadata(ctx context.Context, message string) (*AiResponse, error)
- func (s *ChatSession) Stream(ctx context.Context, message string) (<-chan StreamChunk, error)
- type ClaudeClient
- func (c *ClaudeClient) Model() string
- func (c *ClaudeClient) Name() string
- func (c *ClaudeClient) SendConversation(ctx context.Context, conversation *Conversation) (string, error)
- func (c *ClaudeClient) SendConversationWithMetadata(ctx context.Context, conversation *Conversation) (*AiResponse, error)
- func (c *ClaudeClient) SendPrompt(ctx context.Context, prompt string) (string, error)
- func (c *ClaudeClient) SendPromptWithMetadata(ctx context.Context, prompt string) (*AiResponse, error)
- func (c *ClaudeClient) StreamConversation(ctx context.Context, conversation *Conversation) (<-chan StreamChunk, error)
- func (c *ClaudeClient) StreamPrompt(ctx context.Context, prompt string) (<-chan StreamChunk, error)
- func (c *ClaudeClient) SupportsConversations() bool
- func (c *ClaudeClient) SupportsStreaming() bool
- type ClientConfig
- func (c *ClientConfig) SetBaseURL(url string) *ClientConfig
- func (c *ClientConfig) SetFrequencyPenalty(penalty float64) *ClientConfig
- func (c *ClientConfig) SetMaxTokens(maxTokens int) *ClientConfig
- func (c *ClientConfig) SetPresencePenalty(penalty float64) *ClientConfig
- func (c *ClientConfig) SetRetries(retries int) *ClientConfig
- func (c *ClientConfig) SetRetryStrategy(strategy RetryStrategy) *ClientConfig
- func (c *ClientConfig) SetSystemMessage(message string) *ClientConfig
- func (c *ClientConfig) SetTemperature(temperature float64) *ClientConfig
- func (c *ClientConfig) SetTimeout(timeout time.Duration) *ClientConfig
- func (c *ClientConfig) SetTopP(topP float64) *ClientConfig
- type ClientError
- func NewBadRequestError(message string) *ClientError
- func NewConfigError(message string) *ClientError
- func NewConnectionError(err error) *ClientError
- func NewDNSError(hostname string, err error) *ClientError
- func NewExpiredTokenError() *ClientError
- func NewInvalidAPIKeyError() *ClientError
- func NewInvalidModelError(model string) *ClientError
- func NewInvalidParameterError(parameter, value string) *ClientError
- func NewJSONParseError(err error) *ClientError
- func NewMissingConfigError(config string) *ClientError
- func NewMissingFieldError(field string) *ClientError
- func NewPermissionDeniedError(resource string) *ClientError
- func NewQuotaExceededError() *ClientError
- func NewRateLimitError(retryAfter *time.Duration) *ClientError
- func NewServerError(statusCode int, message string) *ClientError
- func NewStreamClosedError() *ClientError
- func NewStreamReadError(err error) *ClientError
- func NewTimeoutError(timeout time.Duration) *ClientError
- type ClientInfo
- type ClientMetrics
- type ConsensusAnalysis
- type Conversation
- type ErrorType
- type ExpertiseLevel
- type FactCheck
- type FusedResponse
- type GeminiClient
- func (c *GeminiClient) Model() string
- func (c *GeminiClient) Name() string
- func (c *GeminiClient) SendConversation(ctx context.Context, conversation *Conversation) (string, error)
- func (c *GeminiClient) SendConversationWithMetadata(ctx context.Context, conversation *Conversation) (*AiResponse, error)
- func (c *GeminiClient) SendPrompt(ctx context.Context, prompt string) (string, error)
- func (c *GeminiClient) SendPromptWithMetadata(ctx context.Context, prompt string) (*AiResponse, error)
- func (c *GeminiClient) StreamConversation(ctx context.Context, conversation *Conversation) (<-chan StreamChunk, error)
- func (c *GeminiClient) StreamPrompt(ctx context.Context, prompt string) (<-chan StreamChunk, error)
- func (c *GeminiClient) SupportsConversations() bool
- func (c *GeminiClient) SupportsStreaming() bool
- type LogLevel
- type Message
- type MetricsExporter
- type MetricsSnapshot
- type Middleware
- type MiddlewareClient
- func (m *MiddlewareClient) Model() string
- func (m *MiddlewareClient) Name() string
- func (m *MiddlewareClient) SendConversation(ctx context.Context, conv *Conversation) (string, error)
- func (m *MiddlewareClient) SendConversationWithMetadata(ctx context.Context, conv *Conversation) (*AiResponse, error)
- func (m *MiddlewareClient) SendPrompt(ctx context.Context, prompt string) (string, error)
- func (m *MiddlewareClient) SendPromptWithMetadata(ctx context.Context, prompt string) (*AiResponse, error)
- func (m *MiddlewareClient) StreamConversation(ctx context.Context, conv *Conversation) (<-chan StreamChunk, error)
- func (m *MiddlewareClient) StreamPrompt(ctx context.Context, prompt string) (<-chan StreamChunk, error)
- func (m *MiddlewareClient) SupportsConversations() bool
- func (m *MiddlewareClient) SupportsStreaming() bool
- func (m *MiddlewareClient) Use(mw ...Middleware)
- type MockClient
- func (m *MockClient) Model() string
- func (m *MockClient) Name() string
- func (m *MockClient) QueueError(err error)
- func (m *MockClient) QueueResponse(content string)
- func (m *MockClient) SendConversation(_ context.Context, _ *Conversation) (string, error)
- func (m *MockClient) SendConversationWithMetadata(_ context.Context, _ *Conversation) (*AiResponse, error)
- func (m *MockClient) SendPrompt(_ context.Context, _ string) (string, error)
- func (m *MockClient) SendPromptWithMetadata(_ context.Context, _ string) (*AiResponse, error)
- func (m *MockClient) StreamConversation(ctx context.Context, _ *Conversation) (<-chan StreamChunk, error)
- func (m *MockClient) StreamPrompt(_ context.Context, _ string) (<-chan StreamChunk, error)
- func (m *MockClient) SupportsConversations() bool
- func (m *MockClient) SupportsStreaming() bool
- type MockResponse
- type ModelCapabilities
- type ModelContribution
- type ObservabilityContext
- type OpenAIClient
- func (c *OpenAIClient) Model() string
- func (c *OpenAIClient) Name() string
- func (c *OpenAIClient) SendConversation(ctx context.Context, conversation *Conversation) (string, error)
- func (c *OpenAIClient) SendConversationWithMetadata(ctx context.Context, conversation *Conversation) (*AiResponse, error)
- func (c *OpenAIClient) SendPrompt(ctx context.Context, prompt string) (string, error)
- func (c *OpenAIClient) SendPromptWithMetadata(ctx context.Context, prompt string) (*AiResponse, error)
- func (c *OpenAIClient) StreamConversation(ctx context.Context, conversation *Conversation) (<-chan StreamChunk, error)
- func (c *OpenAIClient) StreamPrompt(ctx context.Context, prompt string) (<-chan StreamChunk, error)
- func (c *OpenAIClient) SupportsConversations() bool
- func (c *OpenAIClient) SupportsStreaming() bool
- type OptimizationTechnique
- type OptimizedPrompt
- type OrchestrationMetrics
- type OrchestrationStrategy
- type OrchestratorTaskType
- type ParallelResult
- type PrometheusExporter
- type PromptContext
- type PromptOptimizer
- type PromptTaskType
- type PromptVariation
- type RequestTimer
- type ResponseCache
- type ResponseMetadata
- type RetryStrategy
- type SseEvent
- type SseReader
- type StreamChunk
- type Strength
- type TextExporter
Examples ¶
Constants ¶
const (
	// Version of the chatdelta-go library
	Version = "1.1.0"

	// DefaultTimeout is the default timeout for HTTP requests
	DefaultTimeout = 30

	// DefaultRetries is the default number of retry attempts
	DefaultRetries = 3
)
Variables ¶
var SupportedProviders = []string{"openai", "anthropic", "claude", "google", "gemini"}
SupportedProviders lists all supported AI providers
Functions ¶
func ExecuteWithExponentialBackoff ¶
func ExecuteWithExponentialBackoff(ctx context.Context, retries int, baseDelay time.Duration, operation func() error) error
ExecuteWithExponentialBackoff executes a function with exponential backoff
func ExecuteWithRetry ¶
func ExecuteWithRetry(ctx context.Context, retries int, operation func() error) error
ExecuteWithRetry executes a function with retry logic and exponential backoff
func GetAvailableProviders ¶
func GetAvailableProviders() []string
GetAvailableProviders returns a list of providers with available API keys
Example ¶
ExampleGetAvailableProviders demonstrates checking for available API keys
available := chatdelta.GetAvailableProviders()
if len(available) == 0 {
fmt.Println("No AI providers available (no API keys found in environment)")
fmt.Println("Set one of these environment variables:")
fmt.Println(" OPENAI_API_KEY or CHATGPT_API_KEY")
fmt.Println(" ANTHROPIC_API_KEY or CLAUDE_API_KEY")
fmt.Println(" GOOGLE_API_KEY or GEMINI_API_KEY")
return
}
fmt.Printf("Available providers: %v\n", available)
// Create clients for all available providers
for _, provider := range available {
client, err := chatdelta.CreateClient(provider, "", "", nil)
if err != nil {
fmt.Printf("Error creating %s client: %v\n", provider, err)
continue
}
info := chatdelta.GetClientInfo(client)
fmt.Printf(" %s: model=%s, streaming=%t, conversations=%t\n",
info.Name, info.Model, info.SupportsStreaming, info.SupportsConversations)
}
func IsAuthenticationError ¶
func IsAuthenticationError(err error) bool
IsAuthenticationError checks if the error is authentication-related
func IsNetworkError ¶
func IsNetworkError(err error) bool
IsNetworkError checks if the error is a network-related error
func IsRetryableError ¶
func IsRetryableError(err error) bool
IsRetryableError checks if the error is retryable
func MergeStreamChunks ¶
func MergeStreamChunks(chunks <-chan StreamChunk) (string, error)
MergeStreamChunks combines multiple stream chunks into a single string
func ParseSseData ¶ added in v1.2.0
func ParseSseData(line string) (string, bool)
ParseSseData extracts the payload from a raw "data: …" SSE line. Returns the trimmed data string and true if the line is a data line; otherwise returns an empty string and false.
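The documented behaviour can be sketched with a local equivalent (illustrative only; the package's implementation may differ in edge cases):

```go
package main

import (
	"fmt"
	"strings"
)

// parseSseData sketches the documented behavior: extract the payload of a
// "data: ..." line, reporting false for any other line.
func parseSseData(line string) (string, bool) {
	const prefix = "data:"
	if !strings.HasPrefix(line, prefix) {
		return "", false
	}
	return strings.TrimSpace(strings.TrimPrefix(line, prefix)), true
}

func main() {
	if data, ok := parseSseData(`data: {"delta": "Hi"}`); ok {
		fmt.Println(data) // {"delta": "Hi"}
	}
	_, ok := parseSseData("event: message")
	fmt.Println(ok) // false
}
```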
func QuickPrompt ¶
func QuickPrompt(provider, prompt string) (string, error)
QuickPrompt is a convenience function for sending a quick prompt to a provider without needing to manage client instances. It uses environment variables for API keys and default configurations.
func StreamConversationToString ¶
func StreamConversationToString(ctx context.Context, client AIClient, conversation *Conversation) (string, error)
StreamConversationToString converts a streaming conversation response to a string
func StreamToChannel ¶ added in v1.2.0
func StreamToChannel(ctx context.Context, src <-chan StreamChunk, transform func(StreamChunk) StreamChunk) <-chan StreamChunk
StreamToChannel reads chunks from src, applies transform to each one, and forwards the result to the returned channel. The returned channel is closed when src closes or a Finished chunk is forwarded, or when ctx is cancelled.
This is the Go equivalent of the Rust stream_to_channel utility: it converts one channel into another, allowing callers to post-process streaming chunks without consuming the source directly.
Pass nil transform for a transparent passthrough.
func StreamToString ¶
func StreamToString(ctx context.Context, client AIClient, prompt string) (string, error)
StreamToString converts a streaming response to a string
func ValidateConfig ¶
func ValidateConfig(config *ClientConfig) error
ValidateConfig validates a ClientConfig
Types ¶
type AIClient ¶
type AIClient interface {
// SendPrompt sends a single prompt and returns the response
SendPrompt(ctx context.Context, prompt string) (string, error)
// SendPromptWithMetadata sends a prompt and returns response with metadata
SendPromptWithMetadata(ctx context.Context, prompt string) (*AiResponse, error)
// SendConversation sends a conversation and returns the response
SendConversation(ctx context.Context, conversation *Conversation) (string, error)
// SendConversationWithMetadata sends a conversation and returns response with metadata
SendConversationWithMetadata(ctx context.Context, conversation *Conversation) (*AiResponse, error)
// StreamPrompt sends a prompt and returns a channel for streaming chunks
StreamPrompt(ctx context.Context, prompt string) (<-chan StreamChunk, error)
// StreamConversation sends a conversation and returns a channel for streaming chunks
StreamConversation(ctx context.Context, conversation *Conversation) (<-chan StreamChunk, error)
// SupportsStreaming returns true if the client supports streaming
SupportsStreaming() bool
// SupportsConversations returns true if the client supports conversations
SupportsConversations() bool
// Name returns the name of the client
Name() string
// Model returns the model identifier
Model() string
}
AIClient defines the interface for all AI clients
func CreateClient ¶
func CreateClient(provider, apiKey, model string, config *ClientConfig) (AIClient, error)
CreateClient creates a new AI client based on the provider string
Example ¶
ExampleCreateClient demonstrates how to create different AI clients
// Create an OpenAI client
openaiClient, err := chatdelta.CreateClient("openai", "your-api-key", "gpt-3.5-turbo", nil)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Created %s client with model %s\n", openaiClient.Name(), openaiClient.Model())
// Create a Claude client with custom configuration
config := chatdelta.NewClientConfig().
SetTimeout(45 * time.Second).
SetTemperature(0.7).
SetMaxTokens(2048).
SetSystemMessage("You are a helpful AI assistant.")
claudeClient, err := chatdelta.CreateClient("claude", "your-api-key", "claude-3-haiku-20240307", config)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Created %s client with model %s\n", claudeClient.Name(), claudeClient.Model())
type AiOrchestrator ¶ added in v1.2.0
type AiOrchestrator struct {
// contains filtered or unexported fields
}
AiOrchestrator coordinates multiple AI clients using a configurable strategy. Use NewAiOrchestrator to create an instance.
func NewAiOrchestrator ¶ added in v1.2.0
func NewAiOrchestrator(clients []AIClient, strategy OrchestrationStrategy) *AiOrchestrator
NewAiOrchestrator creates an orchestrator wrapping the given clients. Basic ModelCapabilities are inferred from each client at construction time. A five-minute response cache is created automatically.
func (*AiOrchestrator) OrchestratorMetrics ¶ added in v1.2.0
func (o *AiOrchestrator) OrchestratorMetrics() MetricsSnapshot
OrchestratorMetrics returns a snapshot of the orchestrator's accumulated metrics.
func (*AiOrchestrator) Query ¶ added in v1.2.0
func (o *AiOrchestrator) Query(ctx context.Context, prompt string) (*FusedResponse, error)
Query sends prompt to the configured clients using the orchestration strategy. Successful responses are cached by prompt text for the configured TTL.
func (*AiOrchestrator) SetCapabilities ¶ added in v1.2.0
func (o *AiOrchestrator) SetCapabilities(name string, caps ModelCapabilities)
SetCapabilities registers detailed capability metadata for a named model. Call this after NewAiOrchestrator to enable strength-based routing.
type AiResponse ¶ added in v1.1.0
type AiResponse struct {
// Content is the actual text response from the AI
Content string `json:"content"`
// Metadata contains additional information about the response
Metadata ResponseMetadata `json:"metadata"`
}
AiResponse combines the text content with response metadata. Use this when you need detailed information about token usage and performance.
type ChatSession ¶ added in v1.1.0
type ChatSession struct {
// contains filtered or unexported fields
}
ChatSession manages multi-turn conversations with an AI client. It automatically maintains conversation history and handles context.
Example:
session := NewChatSessionWithSystemMessage(client, "You are a helpful assistant.")
response1, err := session.Send(ctx, "What is Go?")
response2, err := session.Send(ctx, "What are its benefits?") // Remembers context
func NewChatSession ¶ added in v1.1.0
func NewChatSession(client AIClient) *ChatSession
NewChatSession creates a new chat session with the given client. The conversation starts empty with no system message.
func NewChatSessionWithSystemMessage ¶ added in v1.1.0
func NewChatSessionWithSystemMessage(client AIClient, message string) *ChatSession
NewChatSessionWithSystemMessage creates a new chat session with a system message. The system message sets the context and behavior for the AI assistant.
func (*ChatSession) AddMessage ¶ added in v1.1.0
func (s *ChatSession) AddMessage(message Message)
AddMessage adds a message to the conversation without sending it. Use this to manually construct conversation history.
func (*ChatSession) Clear ¶ added in v1.1.0
func (s *ChatSession) Clear()
Clear removes all messages from the conversation history.
func (*ChatSession) History ¶ added in v1.1.0
func (s *ChatSession) History() *Conversation
History returns the conversation history. The returned conversation can be modified directly if needed.
func (*ChatSession) IsEmpty ¶ added in v1.1.0
func (s *ChatSession) IsEmpty() bool
IsEmpty returns true if the conversation has no messages.
func (*ChatSession) Len ¶ added in v1.1.0
func (s *ChatSession) Len() int
Len returns the number of messages in the conversation.
func (*ChatSession) ResetWithSystem ¶ added in v1.1.0
func (s *ChatSession) ResetWithSystem(message string)
ResetWithSystem clears the conversation and sets a new system message. This is useful for changing the AI's behavior mid-session.
func (*ChatSession) Send ¶ added in v1.1.0
func (s *ChatSession) Send(ctx context.Context, message string) (string, error)
Send sends a message and gets a response. The message is added to the conversation history as a user message, and the response is added as an assistant message. If an error occurs, the user message is removed from history.
func (*ChatSession) SendWithMetadata ¶ added in v1.1.0
func (s *ChatSession) SendWithMetadata(ctx context.Context, message string) (*AiResponse, error)
SendWithMetadata sends a message and gets a response with metadata. This includes token counts, latency, and other provider-specific information. The conversation history is updated the same as Send.
func (*ChatSession) Stream ¶ added in v1.1.0
func (s *ChatSession) Stream(ctx context.Context, message string) (<-chan StreamChunk, error)
Stream sends a message and returns a channel for streaming chunks. The complete response is assembled and added to history when streaming completes. The returned channel is buffered and will be closed when streaming ends.
type ClaudeClient ¶
type ClaudeClient struct {
// contains filtered or unexported fields
}
ClaudeClient implements the AIClient interface for Anthropic's Claude API
func NewClaudeClient ¶
func NewClaudeClient(apiKey, model string, config *ClientConfig) (*ClaudeClient, error)
NewClaudeClient creates a new Claude client
func (*ClaudeClient) Model ¶
func (c *ClaudeClient) Model() string
Model returns the model identifier
func (*ClaudeClient) SendConversation ¶
func (c *ClaudeClient) SendConversation(ctx context.Context, conversation *Conversation) (string, error)
SendConversation sends a conversation to Claude
func (*ClaudeClient) SendConversationWithMetadata ¶ added in v1.2.0
func (c *ClaudeClient) SendConversationWithMetadata(ctx context.Context, conversation *Conversation) (*AiResponse, error)
SendConversationWithMetadata sends a conversation and returns the response with metadata.
func (*ClaudeClient) SendPrompt ¶
func (c *ClaudeClient) SendPrompt(ctx context.Context, prompt string) (string, error)
SendPrompt sends a single prompt to Claude
func (*ClaudeClient) SendPromptWithMetadata ¶ added in v1.2.0
func (c *ClaudeClient) SendPromptWithMetadata(ctx context.Context, prompt string) (*AiResponse, error)
SendPromptWithMetadata sends a prompt and returns the response with metadata.
func (*ClaudeClient) StreamConversation ¶
func (c *ClaudeClient) StreamConversation(ctx context.Context, conversation *Conversation) (<-chan StreamChunk, error)
StreamConversation streams a response for a conversation
func (*ClaudeClient) StreamPrompt ¶
func (c *ClaudeClient) StreamPrompt(ctx context.Context, prompt string) (<-chan StreamChunk, error)
StreamPrompt streams a response for a single prompt
func (*ClaudeClient) SupportsConversations ¶
func (c *ClaudeClient) SupportsConversations() bool
SupportsConversations returns true (Claude supports conversations)
func (*ClaudeClient) SupportsStreaming ¶
func (c *ClaudeClient) SupportsStreaming() bool
SupportsStreaming returns true (Claude supports streaming)
type ClientConfig ¶
type ClientConfig struct {
// Timeout for HTTP requests
Timeout time.Duration
// Retries is the number of retry attempts for failed requests
Retries int
// Temperature controls randomness (0.0-2.0), higher = more random
Temperature *float64
// MaxTokens limits the response length
MaxTokens *int
// TopP is nucleus sampling parameter (0.0-1.0)
TopP *float64
// FrequencyPenalty reduces repetition of token sequences (-2.0 to 2.0)
FrequencyPenalty *float64
// PresencePenalty reduces repetition of any tokens that have appeared (-2.0 to 2.0)
PresencePenalty *float64
// SystemMessage sets context for the AI assistant
SystemMessage *string
// BaseURL allows custom endpoints (e.g., Azure OpenAI, local models)
BaseURL *string
// RetryStrategy determines how delays are calculated between retries
RetryStrategy RetryStrategy
}
ClientConfig holds configuration options for AI clients. Use NewClientConfig to create a config with sensible defaults, then use the Set* methods to customize.
Example ¶
ExampleClientConfig demonstrates configuration options
config := chatdelta.NewClientConfig().
SetTimeout(60 * time.Second). // 60 second timeout
SetRetries(5). // 5 retry attempts
SetTemperature(0.8). // Creative temperature
SetMaxTokens(1024). // Limit output length
SetTopP(0.9). // Nucleus sampling
SetFrequencyPenalty(0.1). // Reduce repetition
SetPresencePenalty(0.1). // Encourage topic diversity
SetSystemMessage("You are a creative writing assistant.")
client, err := chatdelta.CreateClient("openai", "your-api-key", "gpt-4", config)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Created %s client with custom configuration\n", client.Name())
func NewClientConfig ¶
func NewClientConfig() *ClientConfig
NewClientConfig creates a new ClientConfig with default values
func (*ClientConfig) SetBaseURL ¶ added in v1.1.0
func (c *ClientConfig) SetBaseURL(url string) *ClientConfig
SetBaseURL sets the custom base URL for API endpoint
func (*ClientConfig) SetFrequencyPenalty ¶
func (c *ClientConfig) SetFrequencyPenalty(penalty float64) *ClientConfig
SetFrequencyPenalty sets the frequency penalty parameter
func (*ClientConfig) SetMaxTokens ¶
func (c *ClientConfig) SetMaxTokens(maxTokens int) *ClientConfig
SetMaxTokens sets the maximum number of tokens
func (*ClientConfig) SetPresencePenalty ¶
func (c *ClientConfig) SetPresencePenalty(penalty float64) *ClientConfig
SetPresencePenalty sets the presence penalty parameter
func (*ClientConfig) SetRetries ¶
func (c *ClientConfig) SetRetries(retries int) *ClientConfig
SetRetries sets the number of retries
func (*ClientConfig) SetRetryStrategy ¶ added in v1.1.0
func (c *ClientConfig) SetRetryStrategy(strategy RetryStrategy) *ClientConfig
SetRetryStrategy sets the retry strategy
func (*ClientConfig) SetSystemMessage ¶
func (c *ClientConfig) SetSystemMessage(message string) *ClientConfig
SetSystemMessage sets the system message
func (*ClientConfig) SetTemperature ¶
func (c *ClientConfig) SetTemperature(temperature float64) *ClientConfig
SetTemperature sets the temperature parameter
func (*ClientConfig) SetTimeout ¶
func (c *ClientConfig) SetTimeout(timeout time.Duration) *ClientConfig
SetTimeout sets the timeout duration
func (*ClientConfig) SetTopP ¶
func (c *ClientConfig) SetTopP(topP float64) *ClientConfig
SetTopP sets the top-p parameter
type ClientError ¶
type ClientError struct {
Type ErrorType `json:"type"`
Code string `json:"code,omitempty"`
Message string `json:"message"`
Cause error `json:"-"`
}
ClientError represents an error that occurred during client operations
func NewBadRequestError ¶
func NewBadRequestError(message string) *ClientError
NewBadRequestError creates a new bad request error
func NewConfigError ¶
func NewConfigError(message string) *ClientError
NewConfigError creates a configuration error (helper for ExecuteParallelConversation)
func NewConnectionError ¶
func NewConnectionError(err error) *ClientError
NewConnectionError creates a new connection error
func NewDNSError ¶
func NewDNSError(hostname string, err error) *ClientError
NewDNSError creates a new DNS resolution error
func NewExpiredTokenError ¶
func NewExpiredTokenError() *ClientError
NewExpiredTokenError creates a new expired token error
func NewInvalidAPIKeyError ¶
func NewInvalidAPIKeyError() *ClientError
NewInvalidAPIKeyError creates a new invalid API key error
func NewInvalidModelError ¶
func NewInvalidModelError(model string) *ClientError
NewInvalidModelError creates a new invalid model error
func NewInvalidParameterError ¶
func NewInvalidParameterError(parameter, value string) *ClientError
NewInvalidParameterError creates a new invalid parameter error
func NewJSONParseError ¶
func NewJSONParseError(err error) *ClientError
NewJSONParseError creates a new JSON parsing error
func NewMissingConfigError ¶
func NewMissingConfigError(config string) *ClientError
NewMissingConfigError creates a new missing configuration error
func NewMissingFieldError ¶
func NewMissingFieldError(field string) *ClientError
NewMissingFieldError creates a new missing field error
func NewPermissionDeniedError ¶
func NewPermissionDeniedError(resource string) *ClientError
NewPermissionDeniedError creates a new permission denied error
func NewQuotaExceededError ¶
func NewQuotaExceededError() *ClientError
NewQuotaExceededError creates a new quota exceeded error
func NewRateLimitError ¶
func NewRateLimitError(retryAfter *time.Duration) *ClientError
NewRateLimitError creates a new rate limit error
func NewServerError ¶
func NewServerError(statusCode int, message string) *ClientError
NewServerError creates a new server error
func NewStreamClosedError ¶
func NewStreamClosedError() *ClientError
NewStreamClosedError creates a new stream closed error
func NewStreamReadError ¶
func NewStreamReadError(err error) *ClientError
NewStreamReadError creates a new stream read error
func NewTimeoutError ¶
func NewTimeoutError(timeout time.Duration) *ClientError
NewTimeoutError creates a new timeout error
func (*ClientError) Error ¶
func (e *ClientError) Error() string
Error implements the error interface
func (*ClientError) Is ¶
func (e *ClientError) Is(target error) bool
Is implements error matching for error types
func (*ClientError) Unwrap ¶
func (e *ClientError) Unwrap() error
Unwrap returns the underlying error
type ClientInfo ¶
type ClientInfo struct {
Name string `json:"name"`
Model string `json:"model"`
SupportsStreaming bool `json:"supports_streaming"`
SupportsConversations bool `json:"supports_conversations"`
}
ClientInfo holds information about a client
func GetClientInfo ¶
func GetClientInfo(client AIClient) ClientInfo
GetClientInfo returns information about a client
type ClientMetrics ¶ added in v1.2.0
type ClientMetrics struct {
// contains filtered or unexported fields
}
ClientMetrics tracks cumulative performance statistics for an AI client. All fields are updated atomically; the zero value is ready to use.
func NewClientMetrics ¶ added in v1.2.0
func NewClientMetrics() *ClientMetrics
NewClientMetrics creates a new, zeroed ClientMetrics.
func (*ClientMetrics) RecordCacheHit ¶ added in v1.2.0
func (m *ClientMetrics) RecordCacheHit()
RecordCacheHit increments the cache-hit counter.
func (*ClientMetrics) RecordCacheMiss ¶ added in v1.2.0
func (m *ClientMetrics) RecordCacheMiss()
RecordCacheMiss increments the cache-miss counter.
func (*ClientMetrics) RecordRequest ¶ added in v1.2.0
func (m *ClientMetrics) RecordRequest(success bool, latencyMs int64, tokensUsed int)
RecordRequest records the outcome of a single request. latencyMs should be the wall-clock time in milliseconds; tokensUsed is the total token count (prompt + completion). Negative values are ignored.
func (*ClientMetrics) Reset ¶ added in v1.2.0
func (m *ClientMetrics) Reset()
Reset clears all counters to zero.
func (*ClientMetrics) Snapshot ¶ added in v1.2.0
func (m *ClientMetrics) Snapshot() MetricsSnapshot
Snapshot returns a consistent, serialisable point-in-time copy of the metrics. Derived statistics (success rate, average latency, cache hit ratio) are computed at snapshot time.
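The derived statistics can be illustrated with a small calculation. This sketch reproduces MetricsSnapshot's counters locally and computes the derived fields exactly as their doc comments describe; the real computation happens inside ClientMetrics.Snapshot, whose internals may differ (the zero-division guards here are an assumption).

```go
package main

import "fmt"

// Local mirror of the documented MetricsSnapshot counters and derived fields.
type MetricsSnapshot struct {
	RequestsTotal      uint64
	RequestsSuccessful uint64
	TotalLatencyMs     uint64
	CacheHits          uint64
	CacheMisses        uint64

	SuccessRate   float64 // percentage (0–100)
	AvgLatencyMs  float64 // per successful request
	CacheHitRatio float64 // fraction (0–1)
}

// derive fills in the derived statistics from the raw counters.
func derive(s MetricsSnapshot) MetricsSnapshot {
	if s.RequestsTotal > 0 {
		s.SuccessRate = float64(s.RequestsSuccessful) / float64(s.RequestsTotal) * 100
	}
	if s.RequestsSuccessful > 0 {
		s.AvgLatencyMs = float64(s.TotalLatencyMs) / float64(s.RequestsSuccessful)
	}
	if lookups := s.CacheHits + s.CacheMisses; lookups > 0 {
		s.CacheHitRatio = float64(s.CacheHits) / float64(lookups)
	}
	return s
}

func main() {
	s := derive(MetricsSnapshot{
		RequestsTotal: 10, RequestsSuccessful: 8,
		TotalLatencyMs: 1600, CacheHits: 3, CacheMisses: 1,
	})
	fmt.Println(s.SuccessRate, s.AvgLatencyMs, s.CacheHitRatio) // 80 200 0.75
}
```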
type ConsensusAnalysis ¶ added in v1.2.0
type ConsensusAnalysis struct {
// AgreementScore is the fraction of models that produced similar responses (0–1).
AgreementScore float64
// CommonThemes are terms appearing in a majority of responses.
CommonThemes []string
// FactChecks contains consistency verdicts for notable claims.
FactChecks []FactCheck
}
ConsensusAnalysis summarises agreement across model responses.
type Conversation ¶
type Conversation struct {
// Messages in chronological order
Messages []Message `json:"messages"`
}
Conversation represents a collection of messages forming a dialogue. Messages are ordered chronologically with the oldest first.
Example ¶
ExampleConversation demonstrates conversation building
conversation := chatdelta.NewConversation()
// Add messages to build a conversation
conversation.AddSystemMessage("You are a helpful math tutor.")
conversation.AddUserMessage("I need help with algebra.")
conversation.AddAssistantMessage("I'd be happy to help you with algebra! What specific topic are you working on?")
conversation.AddUserMessage("How do I solve linear equations?")
fmt.Printf("Conversation has %d messages:\n", len(conversation.Messages))
for i, msg := range conversation.Messages {
fmt.Printf("%d. %s: %s\n", i+1, msg.Role, msg.Content)
}
func NewConversation ¶
func NewConversation() *Conversation
NewConversation creates a new conversation
func (*Conversation) AddAssistantMessage ¶
func (c *Conversation) AddAssistantMessage(content string)
AddAssistantMessage adds an assistant message to the conversation
func (*Conversation) AddMessage ¶
func (c *Conversation) AddMessage(role, content string)
AddMessage adds a message to the conversation
func (*Conversation) AddSystemMessage ¶
func (c *Conversation) AddSystemMessage(content string)
AddSystemMessage adds a system message to the conversation
func (*Conversation) AddUserMessage ¶
func (c *Conversation) AddUserMessage(content string)
AddUserMessage adds a user message to the conversation
type ExpertiseLevel ¶ added in v1.2.0
type ExpertiseLevel string
ExpertiseLevel represents the estimated expertise of the intended audience.
const (
	// ExpertiseBeginner targets users with minimal domain knowledge.
	ExpertiseBeginner ExpertiseLevel = "beginner"
	// ExpertiseIntermediate targets users with moderate domain knowledge.
	ExpertiseIntermediate ExpertiseLevel = "intermediate"
	// ExpertiseExpert targets domain experts expecting technical depth.
	ExpertiseExpert ExpertiseLevel = "expert"
)
type FactCheck ¶ added in v1.2.0
type FactCheck struct {
// Claim is the assertion being verified.
Claim string
// Consistent is true when a majority of models agree on the claim.
Consistent bool
// SupportCount is the number of models that expressed the claim.
SupportCount int
}
FactCheck is a simple consistency verdict for a single claim.
type FusedResponse ¶ added in v1.2.0
type FusedResponse struct {
// Content is the final merged or selected response text.
Content string
// Contributions holds each model's individual result.
Contributions []ModelContribution
// Confidence is the overall confidence in the fused answer (0–1).
Confidence float64
// Consensus contains agreement analysis when applicable.
Consensus *ConsensusAnalysis
// Metrics records performance information for this call.
Metrics OrchestrationMetrics
}
FusedResponse is the aggregated output of an orchestration call.
type GeminiClient ¶
type GeminiClient struct {
// contains filtered or unexported fields
}
GeminiClient implements the AIClient interface for Google's Gemini API
func NewGeminiClient ¶
func NewGeminiClient(apiKey, model string, config *ClientConfig) (*GeminiClient, error)
NewGeminiClient creates a new Gemini client
func (*GeminiClient) Model ¶
func (c *GeminiClient) Model() string
Model returns the model identifier
func (*GeminiClient) SendConversation ¶
func (c *GeminiClient) SendConversation(ctx context.Context, conversation *Conversation) (string, error)
SendConversation sends a conversation to Gemini
func (*GeminiClient) SendConversationWithMetadata ¶ added in v1.2.0
func (c *GeminiClient) SendConversationWithMetadata(ctx context.Context, conversation *Conversation) (*AiResponse, error)
SendConversationWithMetadata sends a conversation and returns the response with metadata.
func (*GeminiClient) SendPrompt ¶
func (c *GeminiClient) SendPrompt(ctx context.Context, prompt string) (string, error)
SendPrompt sends a single prompt to Gemini
func (*GeminiClient) SendPromptWithMetadata ¶ added in v1.2.0
func (c *GeminiClient) SendPromptWithMetadata(ctx context.Context, prompt string) (*AiResponse, error)
SendPromptWithMetadata sends a prompt and returns the response with metadata.
func (*GeminiClient) StreamConversation ¶
func (c *GeminiClient) StreamConversation(ctx context.Context, conversation *Conversation) (<-chan StreamChunk, error)
StreamConversation streams a response for a conversation (not implemented for Gemini yet)
func (*GeminiClient) StreamPrompt ¶
func (c *GeminiClient) StreamPrompt(ctx context.Context, prompt string) (<-chan StreamChunk, error)
StreamPrompt streams a response for a single prompt (not implemented for Gemini yet)
func (*GeminiClient) SupportsConversations ¶
func (c *GeminiClient) SupportsConversations() bool
SupportsConversations returns true (Gemini supports conversations)
func (*GeminiClient) SupportsStreaming ¶
func (c *GeminiClient) SupportsStreaming() bool
SupportsStreaming returns false (Gemini streaming not implemented yet)
type LogLevel ¶ added in v1.2.0
type LogLevel string
LogLevel controls the verbosity of chatdelta's internal log output.
const (
	// LogLevelError only logs errors.
	LogLevelError LogLevel = "error"
	// LogLevelWarn logs warnings and errors.
	LogLevelWarn LogLevel = "warn"
	// LogLevelInfo logs informational messages and above.
	LogLevelInfo LogLevel = "info"
	// LogLevelDebug logs all messages including diagnostic details.
	LogLevelDebug LogLevel = "debug"
)
func InitTracing ¶ added in v1.2.0
func InitTracing() LogLevel
InitTracing reads the CHATDELTA_LOG environment variable and returns the corresponding LogLevel. Valid values are "error", "warn", "info", "debug". Unrecognised or empty values default to LogLevelInfo.
type Message ¶
type Message struct {
// Role of the message sender ("system", "user", or "assistant")
Role string `json:"role"`
// Content of the message
Content string `json:"content"`
}
Message represents a single message in a conversation. Role should be one of "system", "user", or "assistant".
type MetricsExporter ¶ added in v1.2.0
type MetricsExporter interface {
// Export sends the snapshot to the configured backend.
// Implementations should be non-blocking where possible.
Export(snapshot MetricsSnapshot) error
// Name returns a short identifier for the exporter (e.g. "text", "prometheus").
Name() string
}
MetricsExporter is implemented by backends that receive periodic metrics snapshots.
type MetricsSnapshot ¶ added in v1.2.0
type MetricsSnapshot struct {
// Raw counters.
RequestsTotal uint64 `json:"requests_total"`
RequestsSuccessful uint64 `json:"requests_successful"`
RequestsFailed uint64 `json:"requests_failed"`
TotalLatencyMs uint64 `json:"total_latency_ms"`
TotalTokensUsed uint64 `json:"total_tokens_used"`
CacheHits uint64 `json:"cache_hits"`
CacheMisses uint64 `json:"cache_misses"`
// Derived statistics computed at snapshot time.
SuccessRate float64 `json:"success_rate"` // percentage (0–100)
AvgLatencyMs float64 `json:"avg_latency_ms"` // per successful request
CacheHitRatio float64 `json:"cache_hit_ratio"` // fraction (0–1)
}
MetricsSnapshot is a serialisable, point-in-time view of ClientMetrics.
func (MetricsSnapshot) Summary ¶ added in v1.2.0
func (s MetricsSnapshot) Summary() string
Summary returns a human-readable one-line summary of the snapshot.
type Middleware ¶ added in v1.2.0
type Middleware func(ctx context.Context, prompt string, next func(context.Context, string) (string, error)) (string, error)
Middleware is an interceptor around a prompt-based AIClient call.
ctx is the request context. prompt is the current prompt string (may have been modified by an earlier interceptor). next invokes the next middleware in the chain, or the inner AIClient when at the end.
A Middleware can:
- Inspect or transform prompt before calling next.
- Inspect or transform the response returned by next.
- Short-circuit by returning an error without calling next.
- Add retry, logging, timeout, or other cross-cutting behaviour.
func LoggingMiddleware ¶ added in v1.2.0
func LoggingMiddleware(logger *log.Logger) Middleware
LoggingMiddleware returns a Middleware that logs each request prompt and the corresponding response (or error) using logger. Pass nil to use the default stdlib logger writing to stderr.
func RetryMiddleware ¶ added in v1.2.0
func RetryMiddleware(maxRetries int) Middleware
RetryMiddleware returns a Middleware that retries retryable errors up to maxRetries additional times with exponential backoff. This is the interceptor-level complement to utils.go's ExecuteWithRetry loop: use this when you want retries applied by a composable layer rather than baked into a single operation.
func TimeoutMiddleware ¶ added in v1.2.0
func TimeoutMiddleware(timeout time.Duration) Middleware
TimeoutMiddleware returns a Middleware that enforces a per-call timeout by deriving a child context with the given deadline. If the inner call exceeds the deadline the context's error is returned.
type MiddlewareClient ¶ added in v1.2.0
type MiddlewareClient struct {
// contains filtered or unexported fields
}
MiddlewareClient wraps any AIClient with an ordered chain of Middleware interceptors. Execution order is middleware[0] → middleware[1] → … → inner client.
For SendPrompt, the full chain is applied. For SendConversation, the last user message in the conversation is extracted as the "prompt" passed to middleware. The inner AIClient always receives the unmodified Conversation, so prompt-mutating middleware affects the returned string but not the conversation history. Logging and validation middleware work transparently for both. For streaming methods the middleware chain is bypassed and calls are forwarded to the inner client directly; use StreamToChannel to post-process chunks.
func NewMiddlewareClient ¶ added in v1.2.0
func NewMiddlewareClient(inner AIClient, mw ...Middleware) *MiddlewareClient
NewMiddlewareClient creates a MiddlewareClient wrapping inner with the given middleware chain. Additional middleware can be appended later via Use.
func (*MiddlewareClient) Model ¶ added in v1.2.0
func (m *MiddlewareClient) Model() string
Model delegates to the inner client.
func (*MiddlewareClient) Name ¶ added in v1.2.0
func (m *MiddlewareClient) Name() string
Name delegates to the inner client.
func (*MiddlewareClient) SendConversation ¶ added in v1.2.0
func (m *MiddlewareClient) SendConversation(ctx context.Context, conv *Conversation) (string, error)
SendConversation applies the middleware chain using the last user message as the interception point, then delegates the full conversation to the inner client.
func (*MiddlewareClient) SendConversationWithMetadata ¶ added in v1.2.0
func (m *MiddlewareClient) SendConversationWithMetadata(ctx context.Context, conv *Conversation) (*AiResponse, error)
SendConversationWithMetadata applies the middleware chain then delegates to the inner client.
func (*MiddlewareClient) SendPrompt ¶ added in v1.2.0
func (m *MiddlewareClient) SendPrompt(ctx context.Context, prompt string) (string, error)
SendPrompt applies the middleware chain then delegates to the inner client.
func (*MiddlewareClient) SendPromptWithMetadata ¶ added in v1.2.0
func (m *MiddlewareClient) SendPromptWithMetadata(ctx context.Context, prompt string) (*AiResponse, error)
SendPromptWithMetadata applies the middleware chain and wraps the result with metadata.
func (*MiddlewareClient) StreamConversation ¶ added in v1.2.0
func (m *MiddlewareClient) StreamConversation(ctx context.Context, conv *Conversation) (<-chan StreamChunk, error)
StreamConversation forwards directly to the inner client. Use StreamToChannel to post-process.
func (*MiddlewareClient) StreamPrompt ¶ added in v1.2.0
func (m *MiddlewareClient) StreamPrompt(ctx context.Context, prompt string) (<-chan StreamChunk, error)
StreamPrompt forwards directly to the inner client. Use StreamToChannel to post-process.
func (*MiddlewareClient) SupportsConversations ¶ added in v1.2.0
func (m *MiddlewareClient) SupportsConversations() bool
SupportsConversations delegates to the inner client.
func (*MiddlewareClient) SupportsStreaming ¶ added in v1.2.0
func (m *MiddlewareClient) SupportsStreaming() bool
SupportsStreaming delegates to the inner client.
func (*MiddlewareClient) Use ¶ added in v1.2.0
func (m *MiddlewareClient) Use(mw ...Middleware)
Use appends one or more middleware to the end of the chain.
type MockClient ¶ added in v1.2.0
type MockClient struct {
// contains filtered or unexported fields
}
MockClient implements AIClient using a pre-loaded response queue. It is safe for concurrent use.
func NewMockClient ¶ added in v1.2.0
func NewMockClient(name, model string) *MockClient
NewMockClient creates a new MockClient with the given name and model. If model is empty it defaults to "mock-model".
func (*MockClient) Model ¶ added in v1.2.0
func (m *MockClient) Model() string
Model returns the model identifier.
func (*MockClient) Name ¶ added in v1.2.0
func (m *MockClient) Name() string
Name returns the client name.
func (*MockClient) QueueError ¶ added in v1.2.0
func (m *MockClient) QueueError(err error)
QueueError enqueues an error response.
func (*MockClient) QueueResponse ¶ added in v1.2.0
func (m *MockClient) QueueResponse(content string)
QueueResponse enqueues a successful text response.
func (*MockClient) SendConversation ¶ added in v1.2.0
func (m *MockClient) SendConversation(_ context.Context, _ *Conversation) (string, error)
SendConversation returns the next queued response.
func (*MockClient) SendConversationWithMetadata ¶ added in v1.2.0
func (m *MockClient) SendConversationWithMetadata(_ context.Context, _ *Conversation) (*AiResponse, error)
SendConversationWithMetadata returns the next queued response with basic metadata.
func (*MockClient) SendPrompt ¶ added in v1.2.0
func (m *MockClient) SendPrompt(_ context.Context, _ string) (string, error)
SendPrompt returns the next queued response.
func (*MockClient) SendPromptWithMetadata ¶ added in v1.2.0
func (m *MockClient) SendPromptWithMetadata(_ context.Context, _ string) (*AiResponse, error)
SendPromptWithMetadata returns the next queued response with basic metadata.
func (*MockClient) StreamConversation ¶ added in v1.2.0
func (m *MockClient) StreamConversation(ctx context.Context, _ *Conversation) (<-chan StreamChunk, error)
StreamConversation delegates to StreamPrompt.
func (*MockClient) StreamPrompt ¶ added in v1.2.0
func (m *MockClient) StreamPrompt(_ context.Context, _ string) (<-chan StreamChunk, error)
StreamPrompt dequeues a response and delivers it as a two-chunk stream. If the dequeued item is an error it is returned immediately.
func (*MockClient) SupportsConversations ¶ added in v1.2.0
func (m *MockClient) SupportsConversations() bool
SupportsConversations returns true.
func (*MockClient) SupportsStreaming ¶ added in v1.2.0
func (m *MockClient) SupportsStreaming() bool
SupportsStreaming returns true.
type MockResponse ¶ added in v1.2.0
type MockResponse struct {
// Content is the text to return on success.
Content string
// Error, if non-nil, is returned instead of Content.
Error error
}
MockResponse is a pre-configured response held in a MockClient's queue.
type ModelCapabilities ¶ added in v1.2.0
type ModelCapabilities struct {
// Name is the client/model identifier.
Name string
// Strengths lists areas where this model performs best.
Strengths []Strength
// AvgLatencyMs is the historical average latency in milliseconds.
AvgLatencyMs int64
// CostPer1kTokens is the approximate cost in USD per 1,000 tokens.
CostPer1kTokens float32
// MaxContextLength is the maximum number of tokens the model accepts.
MaxContextLength int
// SupportsStreaming indicates streaming capability.
SupportsStreaming bool
// SupportsVision indicates multi-modal image understanding.
SupportsVision bool
// SupportsFunctionCalling indicates function/tool-call capability.
SupportsFunctionCalling bool
}
ModelCapabilities describes a client's characteristics and areas of strength.
type ModelContribution ¶ added in v1.2.0
type ModelContribution struct {
// ClientName identifies the contributing client.
ClientName string
// Response is the raw text returned by the client.
Response string
// Confidence is a 0–1 quality estimate computed from response characteristics.
Confidence float64
// LatencyMs is the wall-clock time to receive this response.
LatencyMs int64
// Error holds any error from this client, if applicable.
Error error
}
ModelContribution holds one client's response within a FusedResponse.
type ObservabilityContext ¶ added in v1.2.0
type ObservabilityContext struct {
// RequestID is a unique identifier for this request.
RequestID string
// Provider is the AI provider name (e.g. "openai", "claude").
Provider string
// Model is the model identifier used for this request.
Model string
}
ObservabilityContext associates an in-flight request with a unique ID and provider/model information for structured logging and distributed tracing.
func NewObservabilityContext ¶ added in v1.2.0
func NewObservabilityContext(requestID, provider, model string) *ObservabilityContext
NewObservabilityContext creates a new ObservabilityContext.
func (*ObservabilityContext) LogFields ¶ added in v1.2.0
func (o *ObservabilityContext) LogFields() map[string]string
LogFields returns a map of key-value pairs suitable for structured logging.
type OpenAIClient ¶
type OpenAIClient struct {
// contains filtered or unexported fields
}
OpenAIClient implements the AIClient interface for OpenAI's API
func NewOpenAIClient ¶
func NewOpenAIClient(apiKey, model string, config *ClientConfig) (*OpenAIClient, error)
NewOpenAIClient creates a new OpenAI client
func (*OpenAIClient) Model ¶
func (c *OpenAIClient) Model() string
Model returns the model identifier
func (*OpenAIClient) SendConversation ¶
func (c *OpenAIClient) SendConversation(ctx context.Context, conversation *Conversation) (string, error)
SendConversation sends a conversation to OpenAI
func (*OpenAIClient) SendConversationWithMetadata ¶ added in v1.2.0
func (c *OpenAIClient) SendConversationWithMetadata(ctx context.Context, conversation *Conversation) (*AiResponse, error)
SendConversationWithMetadata sends a conversation and returns the response with metadata.
func (*OpenAIClient) SendPrompt ¶
func (c *OpenAIClient) SendPrompt(ctx context.Context, prompt string) (string, error)
SendPrompt sends a single prompt to OpenAI
func (*OpenAIClient) SendPromptWithMetadata ¶ added in v1.2.0
func (c *OpenAIClient) SendPromptWithMetadata(ctx context.Context, prompt string) (*AiResponse, error)
SendPromptWithMetadata sends a prompt and returns the response with metadata.
func (*OpenAIClient) StreamConversation ¶
func (c *OpenAIClient) StreamConversation(ctx context.Context, conversation *Conversation) (<-chan StreamChunk, error)
StreamConversation streams a response for a conversation
func (*OpenAIClient) StreamPrompt ¶
func (c *OpenAIClient) StreamPrompt(ctx context.Context, prompt string) (<-chan StreamChunk, error)
StreamPrompt streams a response for a single prompt
func (*OpenAIClient) SupportsConversations ¶
func (c *OpenAIClient) SupportsConversations() bool
SupportsConversations returns true (OpenAI supports conversations)
func (*OpenAIClient) SupportsStreaming ¶
func (c *OpenAIClient) SupportsStreaming() bool
SupportsStreaming returns true (OpenAI supports streaming)
type OptimizationTechnique ¶ added in v1.2.0
type OptimizationTechnique string
OptimizationTechnique names a single enhancement applied to a prompt.
const (
	// TechniqueClarity adds punctuation and direct phrasing.
	TechniqueClarity OptimizationTechnique = "clarity_enhancement"
	// TechniqueContextInjection prepends task-specific framing.
	TechniqueContextInjection OptimizationTechnique = "context_injection"
	// TechniqueChainOfThought introduces step-by-step reasoning language.
	TechniqueChainOfThought OptimizationTechnique = "chain_of_thought"
	// TechniqueFewShot appends a request for illustrative examples.
	TechniqueFewShot OptimizationTechnique = "few_shot_learning"
	// TechniqueRoleSpecification prefixes a persona assignment.
	TechniqueRoleSpecification OptimizationTechnique = "role_specification"
)
type OptimizedPrompt ¶ added in v1.2.0
type OptimizedPrompt struct {
// Original is the unmodified input.
Original string
// Optimized is the enhanced prompt.
Optimized string
// Techniques lists each enhancement applied, in order.
Techniques []OptimizationTechnique
// Context holds the detected characteristics of the input.
Context PromptContext
// Confidence is a 0–1 quality estimate for the optimized prompt.
Confidence float64
// Variations are alternative phrasings worth considering.
Variations []PromptVariation
}
OptimizedPrompt is the result of running PromptOptimizer.Optimize.
type OrchestrationMetrics ¶ added in v1.2.0
type OrchestrationMetrics struct {
// TotalLatencyMs is the wall-clock time from call start to result.
TotalLatencyMs int64
// ModelsConsulted is the number of clients that were queried.
ModelsConsulted int
// SuccessfulResponses is the number of clients that returned without error.
SuccessfulResponses int
// Strategy is the strategy that was ultimately applied.
Strategy OrchestrationStrategy
}
OrchestrationMetrics records performance data for one orchestration call.
type OrchestrationStrategy ¶ added in v1.2.0
type OrchestrationStrategy int
OrchestrationStrategy selects how multiple AI clients are coordinated.
const (
	// StrategyParallel queries all clients simultaneously and picks the
	// highest-confidence response as the primary content.
	StrategyParallel OrchestrationStrategy = iota
	// StrategySequential passes output through each client in order, with
	// each client refining the previous response.
	StrategySequential
	// StrategySpecialized routes the prompt to the single best-suited client
	// based on task type and declared model strengths.
	StrategySpecialized
	// StrategyConsensus queries all clients and adds agreement analysis.
	StrategyConsensus
	// StrategyWeightedFusion merges all responses weighted by confidence scores.
	StrategyWeightedFusion
	// StrategyTournament scores every response and returns the winner.
	StrategyTournament
	// StrategyAdaptive selects a strategy at runtime based on the number of
	// configured clients and prompt characteristics.
	StrategyAdaptive
)
type OrchestratorTaskType ¶ added in v1.2.0
type OrchestratorTaskType string
OrchestratorTaskType classifies a prompt for routing decisions.
const (
	OrchestratorTaskCode        OrchestratorTaskType = "code"
	OrchestratorTaskCreative    OrchestratorTaskType = "creative"
	OrchestratorTaskAnalysis    OrchestratorTaskType = "analysis"
	OrchestratorTaskMathematics OrchestratorTaskType = "mathematics"
	OrchestratorTaskGeneral     OrchestratorTaskType = "general"
)
type ParallelResult ¶
type ParallelResult struct {
// ClientName identifies which client produced this result
ClientName string
// Result contains the successful response text
Result string
// Error contains any error that occurred
Error error
}
ParallelResult represents the result of a parallel execution across multiple clients. Either Result or Error will be populated, not both.
func ExecuteParallel ¶
func ExecuteParallel(ctx context.Context, clients []AIClient, prompt string) []ParallelResult
ExecuteParallel executes multiple AI clients in parallel with the same prompt
Example ¶
ExampleExecuteParallel demonstrates parallel execution across multiple providers
// Create multiple clients
var clients []chatdelta.AIClient
if openaiClient, err := chatdelta.CreateClient("openai", os.Getenv("OPENAI_API_KEY"), "", nil); err == nil {
clients = append(clients, openaiClient)
}
if claudeClient, err := chatdelta.CreateClient("claude", os.Getenv("CLAUDE_API_KEY"), "", nil); err == nil {
clients = append(clients, claudeClient)
}
if geminiClient, err := chatdelta.CreateClient("gemini", os.Getenv("GEMINI_API_KEY"), "", nil); err == nil {
clients = append(clients, geminiClient)
}
if len(clients) == 0 {
fmt.Println("No clients available (check API keys)")
return
}
// Execute the same prompt across all providers
ctx := context.Background()
prompt := "What is the meaning of life?"
results := chatdelta.ExecuteParallel(ctx, clients, prompt)
fmt.Printf("Results from %d providers:\n", len(results))
for _, result := range results {
fmt.Printf("\n=== %s ===\n", result.ClientName)
if result.Error != nil {
fmt.Printf("Error: %v\n", result.Error)
} else {
fmt.Printf("Response: %s\n", result.Result)
}
}
func ExecuteParallelConversation ¶
func ExecuteParallelConversation(ctx context.Context, clients []AIClient, conversation *Conversation) []ParallelResult
ExecuteParallelConversation executes multiple AI clients in parallel with the same conversation
type PrometheusExporter ¶ added in v1.2.0
type PrometheusExporter struct {
// contains filtered or unexported fields
}
PrometheusExporter exports ChatDelta metrics in Prometheus format.
It mirrors the seven metrics exported by the Rust observability module:
- chatdelta_requests_total (gauge — cumulative total requests)
- chatdelta_requests_successful_total (gauge)
- chatdelta_requests_failed_total (gauge)
- chatdelta_request_duration_ms (gauge — average latency per request)
- chatdelta_tokens_total (gauge — cumulative tokens consumed)
- chatdelta_cache_hits_total (gauge)
- chatdelta_cache_misses_total (gauge)
All counters are modelled as Gauges because Export receives a full MetricsSnapshot with absolute cumulative values (not per-call deltas). This is the idiomatic Go approach when syncing from an external metrics source.
The Rust implementation uses a Histogram for request duration with nine buckets (10 ms – 10 s). Because the snapshot only carries an average, that information is surfaced here as a Gauge. If you need the full histogram, record individual observations with a prometheus.Histogram directly.
Use NewPrometheusExporter to construct an instance, Export to push a snapshot, and Handler to serve a /metrics endpoint.
func NewPrometheusExporter ¶ added in v1.2.0
func NewPrometheusExporter() *PrometheusExporter
NewPrometheusExporter creates a PrometheusExporter with its own isolated prometheus.Registry (not the global default). Metrics are registered once at construction time; call Export to update their values.
func (*PrometheusExporter) Export ¶ added in v1.2.0
func (e *PrometheusExporter) Export(snapshot MetricsSnapshot) error
Export synchronises the Prometheus gauges with the given snapshot. It implements MetricsExporter and is safe for concurrent use.
func (*PrometheusExporter) Handler ¶ added in v1.2.0
func (e *PrometheusExporter) Handler() http.Handler
Handler returns an http.Handler that serves Prometheus metrics on the standard /metrics path. Mount it on your HTTP mux:
http.Handle("/metrics", exporter.Handler())
func (*PrometheusExporter) Name ¶ added in v1.2.0
func (e *PrometheusExporter) Name() string
Name returns "prometheus".
func (*PrometheusExporter) Registry ¶ added in v1.2.0
func (e *PrometheusExporter) Registry() *prometheus.Registry
Registry returns the underlying prometheus.Registry so callers can register additional metrics or integrate with existing instrumentation.
type PromptContext ¶ added in v1.2.0
type PromptContext struct {
// TaskType is the detected task category.
TaskType PromptTaskType
// ExpertiseLevel is the estimated target expertise.
ExpertiseLevel ExpertiseLevel
// Keywords are notable terms found in the prompt.
Keywords []string
}
PromptContext holds characteristics detected in the input prompt.
type PromptOptimizer ¶ added in v1.2.0
type PromptOptimizer struct {
// contains filtered or unexported fields
}
PromptOptimizer analyses prompts and applies context-appropriate enhancements.
func NewPromptOptimizer ¶ added in v1.2.0
func NewPromptOptimizer() *PromptOptimizer
NewPromptOptimizer creates a PromptOptimizer with default settings.
func (*PromptOptimizer) Optimize ¶ added in v1.2.0
func (o *PromptOptimizer) Optimize(prompt string) *OptimizedPrompt
Optimize analyses prompt, applies enhancement techniques, and returns an OptimizedPrompt containing the improved text and diagnostic metadata.
type PromptTaskType ¶ added in v1.2.0
type PromptTaskType string
PromptTaskType classifies the nature of a prompt for optimization routing.
const (
	// PromptTaskAnalysis covers evaluation, comparison, and data-analysis tasks.
	PromptTaskAnalysis PromptTaskType = "analysis"
	// PromptTaskGeneration covers text generation tasks.
	PromptTaskGeneration PromptTaskType = "generation"
	// PromptTaskSummarization covers condensing or abstracting existing content.
	PromptTaskSummarization PromptTaskType = "summarization"
	// PromptTaskTranslation covers language translation tasks.
	PromptTaskTranslation PromptTaskType = "translation"
	// PromptTaskQA covers question-answering and explanation tasks.
	PromptTaskQA PromptTaskType = "qa"
	// PromptTaskReasoning covers logical inference and multi-step reasoning.
	PromptTaskReasoning PromptTaskType = "reasoning"
	// PromptTaskCreative covers open-ended creative writing tasks.
	PromptTaskCreative PromptTaskType = "creative"
	// PromptTaskTechnical covers code, algorithms, and technical documentation.
	PromptTaskTechnical PromptTaskType = "technical"
)
type PromptVariation ¶ added in v1.2.0
type PromptVariation struct {
// Prompt is the alternative phrasing.
Prompt string
// EstimatedImprovement is the estimated percentage gain over the original.
EstimatedImprovement float64
}
PromptVariation is an alternative phrasing of the prompt with an estimated improvement.
type RequestTimer ¶ added in v1.2.0
type RequestTimer struct {
// contains filtered or unexported fields
}
RequestTimer measures elapsed wall-clock time for a single operation.
func NewRequestTimer ¶ added in v1.2.0
func NewRequestTimer() *RequestTimer
NewRequestTimer creates and immediately starts a RequestTimer.
func (*RequestTimer) ElapsedMs ¶ added in v1.2.0
func (t *RequestTimer) ElapsedMs() int64
ElapsedMs returns milliseconds elapsed since the timer was started.
type ResponseCache ¶ added in v1.2.0
type ResponseCache struct {
// contains filtered or unexported fields
}
ResponseCache is a TTL-based in-memory cache backed by sync.Map. It is safe for concurrent use. Expired entries are removed lazily on Get.
func NewResponseCache ¶ added in v1.2.0
func NewResponseCache(ttl time.Duration, maxItems int) *ResponseCache
NewResponseCache creates a ResponseCache with the specified TTL. maxItems is accepted for API compatibility but not enforced (entries expire via TTL).
func (*ResponseCache) Get ¶ added in v1.2.0
func (c *ResponseCache) Get(key string) *FusedResponse
Get retrieves a cached response. Returns nil if absent or expired.
func (*ResponseCache) Set ¶ added in v1.2.0
func (c *ResponseCache) Set(key string, resp *FusedResponse)
Set stores a response in the cache under key.
type ResponseMetadata ¶ added in v1.1.0
type ResponseMetadata struct {
// ModelUsed is the actual model version used (may differ from requested)
ModelUsed string `json:"model_used,omitempty"`
// PromptTokens is the number of tokens in the prompt
PromptTokens int `json:"prompt_tokens,omitempty"`
// CompletionTokens is the number of tokens in the completion
CompletionTokens int `json:"completion_tokens,omitempty"`
// TotalTokens is the sum of prompt and completion tokens
TotalTokens int `json:"total_tokens,omitempty"`
// FinishReason indicates why generation ended (e.g., "stop", "length", "content_filter")
FinishReason string `json:"finish_reason,omitempty"`
// SafetyRatings contains provider-specific safety or content filter results
SafetyRatings interface{} `json:"safety_ratings,omitempty"`
// RequestID for debugging and tracking
RequestID string `json:"request_id,omitempty"`
// LatencyMs is the time taken to generate response in milliseconds
LatencyMs int64 `json:"latency_ms,omitempty"`
}
ResponseMetadata contains additional information from the AI provider. Not all fields are populated by all providers.
type RetryStrategy ¶ added in v1.1.0
type RetryStrategy string
RetryStrategy defines the retry behavior for failed requests.
const (
	// RetryStrategyFixed uses a fixed delay between retries.
	RetryStrategyFixed RetryStrategy = "fixed"
	// RetryStrategyLinear increases delay linearly with each attempt.
	RetryStrategyLinear RetryStrategy = "linear"
	// RetryStrategyExponentialBackoff doubles the delay with each attempt.
	RetryStrategyExponentialBackoff RetryStrategy = "exponential"
	// RetryStrategyExponentialWithJitter adds random jitter to prevent thundering herd.
	RetryStrategyExponentialWithJitter RetryStrategy = "exponential_with_jitter"
)
type SseEvent ¶ added in v1.2.0
type SseEvent struct {
// Event is the optional event-type field (defaults to "message" per spec).
Event string
// Data is the event payload; multiple consecutive data lines are joined with "\n".
Data string
// ID is the optional last-event-ID value.
ID string
// Retry, when non-nil, is the reconnection time hint in milliseconds.
Retry *int
}
SseEvent represents a single parsed Server-Sent Event. Fields correspond directly to the SSE wire format defined in the WHATWG specification (https://html.spec.whatwg.org/multipage/server-sent-events.html).
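The wire format SseEvent models can be sketched with a minimal parser: field lines of the form "name: value", consecutive data lines joined with "\n", and a blank line terminating each event. The sketch below ignores the id and retry fields for brevity and is not the package's parser.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseSSE illustrates the SSE framing rules: accumulate field lines
// until a blank line, then dispatch the event. Events with no data
// lines are discarded, and the event type defaults to "message".
func parseSSE(stream string) []map[string]string {
	var events []map[string]string
	event, data := "message", []string(nil)
	scanner := bufio.NewScanner(strings.NewReader(stream))
	for scanner.Scan() {
		line := scanner.Text()
		switch {
		case line == "": // blank line: event boundary
			if len(data) > 0 {
				events = append(events, map[string]string{
					"event": event,
					"data":  strings.Join(data, "\n"),
				})
			}
			event, data = "message", nil
		case strings.HasPrefix(line, "event: "):
			event = strings.TrimPrefix(line, "event: ")
		case strings.HasPrefix(line, "data: "):
			data = append(data, strings.TrimPrefix(line, "data: "))
		}
	}
	return events
}

func main() {
	raw := "event: delta\ndata: Hello\ndata: world\n\n"
	for _, e := range parseSSE(raw) {
		fmt.Printf("%s: %q\n", e["event"], e["data"])
	}
	// prints: delta: "Hello\nworld"
}
```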
type SseReader ¶ added in v1.2.0
type SseReader struct {
// contains filtered or unexported fields
}
SseReader reads and parses SSE events from an io.Reader. It buffers the underlying reader and is NOT safe for concurrent use.
func NewSseReader ¶ added in v1.2.0
func NewSseReader(r io.Reader) *SseReader
NewSseReader wraps r in an SseReader.
func (*SseReader) Next ¶ added in v1.2.0
func (r *SseReader) Next() (*SseEvent, error)
Next reads the next complete SSE event from the stream. It returns (nil, nil) when the stream is exhausted, (nil, err) on I/O errors, and a populated *SseEvent when an event boundary (blank line or end-of-stream) is reached and at least one data line was seen.
type StreamChunk ¶
type StreamChunk struct {
// Content of this chunk
Content string `json:"content"`
// Finished indicates if this is the final chunk
Finished bool `json:"finished"`
// Metadata is only populated on the final chunk
Metadata *ResponseMetadata `json:"metadata,omitempty"`
}
StreamChunk represents a chunk of streaming response. When Finished is true, this is the final chunk and Metadata may be populated.
type Strength ¶ added in v1.2.0
type Strength string
Strength describes a capability area in which a model excels.
const (
	StrengthReasoning      Strength = "reasoning"
	StrengthCreativity     Strength = "creativity"
	StrengthCodeGeneration Strength = "code_generation"
	StrengthMathematics    Strength = "mathematics"
	StrengthLanguage       Strength = "language"
	StrengthAnalysis       Strength = "analysis"
	StrengthVision         Strength = "vision"
	StrengthSpeed          Strength = "speed"
)
type TextExporter ¶ added in v1.2.0
type TextExporter struct {
// contains filtered or unexported fields
}
TextExporter writes human-readable metrics to a standard logger. It is always available and has no external dependencies.
func NewTextExporter ¶ added in v1.2.0
func NewTextExporter() *TextExporter
NewTextExporter creates a TextExporter that writes to stderr.
func (*TextExporter) Export ¶ added in v1.2.0
func (e *TextExporter) Export(snapshot MetricsSnapshot) error
Export logs a formatted summary of the snapshot.
func (*TextExporter) Format ¶ added in v1.2.0
func (e *TextExporter) Format(snapshot MetricsSnapshot) string
Format returns a key=value formatted metrics string without logging it.
func (*TextExporter) Name ¶ added in v1.2.0
func (e *TextExporter) Name() string
Name returns "text".