Documentation ¶
Index ¶
- Constants
- func ExponentialBackoff(attempt int, baseDelay time.Duration) time.Duration
- func GetAPIKey(key string) string
- func GetEnv(key, defaultValue string) string
- func IsRetryableError(err error) bool
- func ValidateTools(tools []Tool) error
- type Attachment
- type AuthManager
- type Client
- type ContentBlock
- type ContentType
- type Error
- func NewAPIError(message string, cause error) *Error
- func NewError(errType ErrorType, message string, cause error) *Error
- func NewNetworkError(message string, cause error) *Error
- func NewTimeoutError(message string, cause error) *Error
- func NewUnknownError(message string, cause error) *Error
- func NewValidationError(message string, cause error) *Error
- type ErrorType
- type FileTokenStore
- type Message
- type OAuthToken
- type Option
- func WithAttachments(attachments ...Attachment) Option
- func WithEnableSearch(enabled bool) Option
- func WithMaxTokens(n int) Option
- func WithModel(model string) Option
- func WithStop(stops ...string) Option
- func WithSystemPrompt(prompt string) Option
- func WithTemperature(t float64) Option
- func WithThinking(budget int) Option
- func WithTools(tools ...Tool) Option
- func WithTopP(p float64) Option
- func WithUsageCallback(fn func(Usage)) Option
- type Options
- type PKCEHelper
- type Response
- type Stream
- type StreamEvent
- type StreamEventType
- type TokenHelper
- type TokenStore
- type Tool
- type ToolCall
- type Usage
Constants ¶
const (
	// RoleSystem is for system instructions that guide the model's behavior.
	// System messages set the context, personality, or constraints.
	// Example: "You are a helpful coding assistant."
	RoleSystem = "system"

	// RoleUser is for messages from the human user.
	// These are the questions or prompts you send to the model.
	RoleUser = "user"

	// RoleAssistant is for messages from the AI model.
	// These are the model's responses.
	RoleAssistant = "assistant"

	// RoleTool is for tool execution results.
	// Use this when responding to a model's tool call request.
	RoleTool = "tool"
)
Message roles define who is speaking in a conversation.
Variables ¶
This section is empty.
Functions ¶
func ExponentialBackoff ¶
ExponentialBackoff calculates the delay before the next retry attempt. The delay grows exponentially with the attempt number and is perturbed by random jitter so that many clients retrying at once do not synchronize.
func IsRetryableError ¶
IsRetryableError reports whether an error is worth retrying, based on common patterns such as transient network failures, timeouts, and rate limits.
func ValidateTools ¶ added in v0.1.10
ValidateTools validates a list of tools for use in a request. This is a convenience function that validates each tool and ensures there are no duplicate names.
Use this before sending tools to the API to catch configuration errors early.
Parameters:
- tools: Slice of Tool objects to validate
Returns nil if all tools are valid, or an error describing the first failure
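The duplicate-name check described above can be sketched with a minimal local Tool type. The per-tool validation here only rejects empty names; the real library also validates the JSON schema.

```go
package main

import "fmt"

// Tool mirrors only the field relevant to this sketch.
type Tool struct {
	Name string
}

// validateTools is a sketch of the documented behavior: check each tool
// and reject duplicate names, returning the first failure found.
func validateTools(tools []Tool) error {
	seen := make(map[string]bool)
	for i, t := range tools {
		if t.Name == "" {
			return fmt.Errorf("tool %d: empty name", i)
		}
		if seen[t.Name] {
			return fmt.Errorf("duplicate tool name: %q", t.Name)
		}
		seen[t.Name] = true
	}
	return nil
}

func main() {
	err := validateTools([]Tool{{Name: "search"}, {Name: "search"}})
	fmt.Println(err)
}
```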
Types ¶
type Attachment ¶ added in v0.1.7
type Attachment struct {
// Name is the filename or identifier for this attachment.
Name string
// MediaType is the MIME type of the content.
// Examples: "image/png", "image/jpeg", "application/pdf", "text/plain"
MediaType string
// Data is the raw content bytes.
// For images, this should be the raw image data (not base64 encoded in the struct itself).
Data []byte
// URL is an alternative to Data for referring to remote resources.
// If URL is set, Data should be empty.
URL string
// IsTextBased indicates whether this is a text format (true)
// or binary format (false). Text-based attachments may be handled
// differently by some providers.
IsTextBased bool
}
Attachment represents a file or media item that can be attached to a message. Attachments allow sending images, documents, or other files along with text prompts to models that support multimodal inputs.
Example:
imageData, _ := os.ReadFile("photo.jpg")
attachment := core.NewImageAttachment("photo.jpg", "image/jpeg", imageData)
response, err := client.Chat(ctx, messages, core.WithAttachments(attachment))
func NewAttachment ¶ added in v0.1.7
func NewAttachment(name, mediaType string, data []byte, isText bool) Attachment
NewAttachment creates a new attachment with the specified properties.
Parameters:
- name: The filename or identifier
- mediaType: The MIME type (e.g., "image/png")
- data: The raw content bytes
- isText: Whether this is a text-based format
Returns a configured Attachment
func NewImageAttachment ¶ added in v0.1.7
func NewImageAttachment(name, mediaType string, data []byte) Attachment
NewImageAttachment creates a new attachment for image content. This is a convenience function that sets IsTextBased to false.
Parameters:
- name: The filename
- mediaType: The MIME type (e.g., "image/jpeg", "image/png")
- data: The raw image bytes
Returns a configured Attachment for images
type AuthManager ¶
type AuthManager struct {
// contains filtered or unexported fields
}
AuthManager handles authentication and token lifecycle management for AI provider clients. It supports automatic token refresh and persistent token storage.
AuthManager is safe for concurrent use. Multiple goroutines can call GetToken simultaneously; proper locking ensures tokens are refreshed only when necessary.
Example:
manager := core.NewAuthManager(provider, "token.json")
if err := manager.Login(); err != nil {
log.Fatal(err)
}
token, err := manager.GetToken()
// use token for API requests
func NewAuthManager ¶
func NewAuthManager(provider interface {
GetProviderName() string
Authenticate() (*OAuthToken, error)
RefreshToken(refreshToken string) (*OAuthToken, error)
}, filename string) *AuthManager
NewAuthManager creates a new AuthManager with file-based token storage. The filename specifies where tokens should be persisted. If the file exists, tokens will be loaded from it on first access.
Parameters:
- provider: The authentication provider implementing GetProviderName, Authenticate, and RefreshToken
- filename: Path to the file where tokens are stored
Returns a configured AuthManager
func NewAuthManagerWithStore ¶
func NewAuthManagerWithStore(provider interface {
GetProviderName() string
Authenticate() (*OAuthToken, error)
RefreshToken(refreshToken string) (*OAuthToken, error)
}, store TokenStore) *AuthManager
NewAuthManagerWithStore creates a new AuthManager with custom token storage. Use this when you need to store tokens in a different location or format.
Parameters:
- provider: The authentication provider
- store: Custom implementation of TokenStore
Returns a configured AuthManager
func (*AuthManager) GetToken ¶
func (m *AuthManager) GetToken() (*OAuthToken, error)
GetToken returns a valid authentication token, refreshing it if necessary. This is the main method used by API clients to obtain valid credentials.
The method first checks if a token exists. If not, it attempts to load from the store. If the token is expired or will expire soon, it attempts to refresh the token using the provider's RefreshToken method.
Returns a valid OAuthToken and nil on success. Returns nil and an error if no token is available, loading fails, or refresh fails.
func (*AuthManager) LoadToken ¶
func (m *AuthManager) LoadToken() error
LoadToken loads a previously stored token from the token store. This is used to restore session state without requiring re-authentication. If no token store is configured or the store is empty, an error is returned.
Returns an error if loading fails (file not found, corrupted data, etc.)
func (*AuthManager) Login ¶
func (m *AuthManager) Login() error
Login authenticates the user and stores the resulting token. This method should be called once when the user first logs in. The token is stored persistently and will be reused on subsequent program starts via LoadToken.
Returns an error if authentication fails
func (*AuthManager) SetExpiryBuffer ¶ added in v0.1.10
func (m *AuthManager) SetExpiryBuffer(buffer time.Duration)
SetExpiryBuffer changes the buffer time used to determine when to proactively refresh tokens. The default is 60 seconds.
A larger buffer gives more time for network delays but increases the frequency of token refreshes. A smaller buffer may cause requests to fail if refresh takes too long.
Parameters:
- buffer: The duration before expiration to trigger refresh
type Client ¶
type Client interface {
// Chat sends messages and returns a complete response
//
// Parameters:
// - ctx: Context for cancellation and timeout
// - messages: Conversation messages
// - opts: Optional parameters (temperature, max tokens, tools, etc.)
//
// Returns:
// - *Response: Complete response with content, usage, and tool calls
// - error: Error if the request fails
Chat(ctx context.Context, messages []Message, opts ...Option) (*Response, error)
// ChatStream sends messages and returns a stream of events
//
// Parameters:
// - ctx: Context for cancellation and timeout
// - messages: Conversation messages
// - opts: Optional parameters (temperature, max tokens, tools, etc.)
//
// Returns:
// - *Stream: Stream of events (content, usage, errors)
// - error: Error if the request fails to start
ChatStream(ctx context.Context, messages []Message, opts ...Option) (*Stream, error)
}
Client is the primary interface for LLM interactions. All providers implement this interface.
Example usage:
messages := []core.Message{
core.NewUserMessage("Hello, who are you?"),
}
response, err := client.Chat(ctx, messages)
if err != nil {
log.Fatal(err)
}
fmt.Println(response.Content)
With options:
response, err := client.Chat(ctx, messages,
core.WithTemperature(0.8),
core.WithMaxTokens(1000),
)
type ContentBlock ¶
type ContentBlock struct {
// Type identifies what kind of content this block contains.
Type ContentType `json:"type"`
// Text is the text content (only for ContentTypeText).
Text string `json:"text,omitempty"`
// MediaType is the MIME type (only for ContentTypeImage/ContentTypeFile).
// Examples: "image/png", "image/jpeg", "application/pdf"
MediaType string `json:"media_type,omitempty"`
// Data is the base64-encoded content (only for ContentTypeImage/ContentTypeFile).
Data string `json:"data,omitempty"`
	// URL is a remote reference to the content (only for ContentTypeImageURL).
	URL string `json:"url,omitempty"`
// FileName is the original filename (only for ContentTypeFile).
FileName string `json:"file_name,omitempty"`
}
ContentBlock is one piece of content within a message.
Messages can contain multiple content blocks of different types, enabling multimodal interactions (text + images + files).
For simple text messages, use the NewUserMessage helper which creates a message with a single text content block.
Example (text):
block := ContentBlock{
Type: ContentTypeText,
Text: "Hello, world!",
}
Example (image):
imageData := base64.StdEncoding.EncodeToString(imageBytes)
block := ContentBlock{
Type: ContentTypeImage,
MediaType: "image/png",
Data: imageData,
}
type ContentType ¶
type ContentType string
ContentType identifies the type of content in a ContentBlock.
const (
	// ContentTypeText is for plain text content.
	ContentTypeText ContentType = "text"

	// ContentTypeImage is for image content (base64-encoded).
	ContentTypeImage ContentType = "image"

	// ContentTypeImageURL is for images referenced by URL rather than
	// embedded as base64 data.
	ContentTypeImageURL ContentType = "image_url"

	// ContentTypeFile is for file attachments.
	ContentTypeFile ContentType = "file"

	// ContentTypeThinking is for thinking/reasoning content from the model.
	ContentTypeThinking ContentType = "thinking"
)
type Error ¶
type Error struct {
// Type categorizes the error for programmatic handling.
Type ErrorType
// Message is a human-readable description of what went wrong.
Message string
// Cause is the underlying error that triggered this one, if available.
// This may be nil if the error originated without a specific cause.
Cause error
}
Error represents a structured error with type classification, a human-readable message, and an optional underlying cause. This design enables both programmatic error handling and human-readable error reporting.
Example:
if err != nil {
if apiErr, ok := err.(*core.Error); ok {
if apiErr.Type == core.ErrorTypeRateLimit {
// handle rate limit specifically
}
}
}
func NewAPIError ¶
NewAPIError creates a new error for API-level failures such as invalid requests, authentication errors, or provider-side issues.
Parameters:
- message: A description of the API error
- cause: The underlying error, or nil if not applicable
Returns an Error with Type set to ErrorTypeAPI
func NewError ¶
NewError creates a new structured error with the specified type, message, and optional cause. This is the low-level constructor for creating Error instances; most callers should use one of the specialized constructors like NewAPIError or NewNetworkError.
Parameters:
- errType: The classification of the error
- message: A human-readable description of the error
- cause: The underlying error that triggered this one, or nil
Returns a pointer to the newly created Error
func NewNetworkError ¶
NewNetworkError creates a new error for network-related failures such as connection refused, DNS resolution failures, or connectivity issues that prevent communication with the API server.
Parameters:
- message: A description of the network error
- cause: The underlying error, or nil if not applicable
Returns an Error with Type set to ErrorTypeNetwork
func NewTimeoutError ¶
NewTimeoutError creates a new error for operations that exceed their configured time limit. This includes both connect timeouts and read/write timeouts.
Parameters:
- message: A description of what timed out
- cause: The underlying error, or nil if not applicable
Returns an Error with Type set to ErrorTypeTimeout
func NewUnknownError ¶
NewUnknownError creates a new error for unexpected failures that don't fit into other categories. Use this as a fallback when the error source doesn't provide more specific information.
Parameters:
- message: A description of the unexpected error
- cause: The underlying error, or nil if not applicable
Returns an Error with Type set to ErrorTypeUnknown
func NewValidationError ¶
NewValidationError creates a new error for input validation failures. This includes invalid request parameters, malformed data, or values that fail business logic validation.
Parameters:
- message: A description of the validation failure
- cause: The underlying error, or nil if not applicable
Returns an Error with Type set to ErrorTypeValidation
type ErrorType ¶
type ErrorType string
ErrorType defines the classification of errors that can occur during API operations. Use these types to programmatically handle specific error conditions.
const (
	// ErrorTypeAPI indicates an error returned by the AI provider's API.
	// This includes invalid requests, authentication failures, or rate limits.
	ErrorTypeAPI ErrorType = "api_error"

	// ErrorTypeNetwork indicates a network connectivity issue.
	// This includes connection timeouts, DNS failures, or refused connections.
	ErrorTypeNetwork ErrorType = "network_error"

	// ErrorTypeTimeout indicates an operation exceeded its time limit.
	// This occurs when a request takes longer than the configured timeout.
	ErrorTypeTimeout ErrorType = "timeout_error"

	// ErrorTypeValidation indicates invalid input parameters.
	// This occurs when request data fails validation checks.
	ErrorTypeValidation ErrorType = "validation_error"

	// ErrorTypeUnknown indicates an unexpected error without classification.
	// This is used as a fallback for uncategorized failures.
	ErrorTypeUnknown ErrorType = "unknown_error"
)
Error classification constants for structured error handling. These values are used by the Error struct to categorize failures.
type FileTokenStore ¶
type FileTokenStore struct {
Filename string
}
FileTokenStore is a simple file-based implementation of TokenStore.
func NewFileTokenStore ¶
func NewFileTokenStore(filename string) *FileTokenStore
NewFileTokenStore creates a new FileTokenStore.
func (*FileTokenStore) Load ¶
func (s *FileTokenStore) Load() (*OAuthToken, error)
Load reads the token from a file.
func (*FileTokenStore) Save ¶
func (s *FileTokenStore) Save(token *OAuthToken) error
Save writes the token to a file as JSON.
type Message ¶
type Message struct {
Role string `json:"role"`
Content []ContentBlock `json:"content,omitempty"`
ToolCalls []ToolCall `json:"tool_calls,omitempty"` // assistant requesting tool use
ToolCallID string `json:"tool_call_id,omitempty"` // for role=tool responses
}
Message represents a single message in a conversation
func NewSystemMessage ¶
NewSystemMessage creates a system message
func NewTextMessage ¶
NewTextMessage creates a message with text content
func NewUserMessage ¶
NewUserMessage creates a user message with text content
func ProcessAttachments ¶ added in v0.1.7
func ProcessAttachments(messages []Message, attachments []Attachment) []Message
ProcessAttachments injects provided attachments into the message list.
func (Message) TextContent ¶
TextContent returns the concatenated text of all text blocks. Convenience for the common single-text-block case.
type OAuthToken ¶
type OAuthToken struct {
// Access is the bearer token used to authenticate API requests.
// Include this as a Bearer token in the Authorization header.
Access string `json:"access"`
// Refresh is the token used to obtain a new access token when it expires.
// Store this securely for refreshing expired tokens.
Refresh string `json:"refresh"`
// Expires is the Unix timestamp in milliseconds when this token expires.
// Use this to determine when to refresh the token proactively.
Expires int64 `json:"expires"`
// ResourceUrl is an optional endpoint for accessing the resource.
// This may be provided by some OAuth providers.
ResourceUrl *string `json:"resourceUrl,omitempty"`
// NotificationMessage may contain a message from the OAuth provider.
// This could include warnings or important notices.
NotificationMessage *string `json:"notification_message,omitempty"`
}
OAuthToken represents an OAuth token received from an AI provider. It contains the credentials needed to authenticate API requests, along with expiration information for token lifecycle management.
type Option ¶
type Option func(*Options)
Option is a functional option for Chat/ChatStream
func WithAttachments ¶ added in v0.1.7
func WithAttachments(attachments ...Attachment) Option
WithAttachments attaches files or media to the message context
func WithEnableSearch ¶
WithEnableSearch enables web search for models that support it (e.g. Qwen)
func WithMaxTokens ¶
WithMaxTokens sets the maximum tokens to generate
func WithSystemPrompt ¶
WithSystemPrompt sets a system prompt (prepended as system message)
func WithTemperature ¶
WithTemperature sets the temperature for generation
func WithThinking ¶
WithThinking enables extended thinking/reasoning with optional token budget
func WithUsageCallback ¶
WithUsageCallback sets a callback to be called when usage info is available
type Options ¶
type Options struct {
Model string
Temperature *float64 // pointer so zero-value is distinguishable from "not set"
MaxTokens *int
TopP *float64
Stop []string
Tools []Tool
SystemPrompt string // prepended as system message if set
Thinking bool // enables extended thinking/reasoning (provider-dependent)
ThinkingBudget int // max tokens for thinking (0 = provider default)
EnableSearch bool // Qwen/compatible-mode search
UsageCallback func(Usage)
Attachments []Attachment
}
Options holds per-request parameters
func ApplyOptions ¶
ApplyOptions builds Options from a list of Option funcs
type PKCEHelper ¶
type PKCEHelper struct{}
PKCEHelper provides helpers for the OAuth PKCE (Proof Key for Code Exchange) flow.
func (*PKCEHelper) GeneratePKCE ¶
func (h *PKCEHelper) GeneratePKCE() (verifier, challenge string, err error)
GeneratePKCE generates a PKCE verifier and challenge pair.
func (*PKCEHelper) GenerateState ¶
func (h *PKCEHelper) GenerateState() (string, error)
GenerateState generates a random state string for OAuth authorization requests.
func (*PKCEHelper) GenerateUUID ¶
func (h *PKCEHelper) GenerateUUID() (string, error)
GenerateUUID generates a random UUID.
type Response ¶
type Response struct {
// ID is the unique identifier for this completion.
// Provider-specific format (e.g., "chatcmpl-abc123" for OpenAI).
ID string `json:"id,omitempty"`
// Model is the name of the model that generated this response.
// May differ from the requested model if the provider substituted it.
Model string `json:"model,omitempty"`
// Content is the generated text content.
// This is a convenience field that concatenates all text blocks from Message.
// For simple text responses, this is all you need.
Content string `json:"content"`
// ReasoningContent is the model's thinking/reasoning process output.
ReasoningContent string `json:"reasoning_content,omitempty"`
// Message is the full structured message from the model.
// Use this when you need access to multimodal content or tool calls.
Message Message `json:"message"`
// FinishReason indicates why the model stopped generating.
// Common values: "stop" (natural end), "length" (max tokens reached),
// "tool_calls" (model wants to call tools), "content_filter" (filtered).
FinishReason string `json:"finish_reason"`
// Usage contains token consumption information.
// Use this to track costs and monitor usage.
Usage *Usage `json:"usage,omitempty"`
// ToolCalls is a convenience field extracted from Message.ToolCalls.
// Non-empty when the model wants to invoke tools.
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
}
Response is the complete result from a Chat call.
It contains the model's generated content, metadata about the request, token usage information, and any tool calls the model wants to make.
Example:
response, err := client.Chat(ctx, messages)
if err != nil {
log.Fatal(err)
}
// Simple text response
fmt.Println(response.Content)
// Check token usage
fmt.Printf("Used %d tokens\n", response.Usage.TotalTokens)
// Handle tool calls
if len(response.ToolCalls) > 0 {
for _, tc := range response.ToolCalls {
fmt.Printf("Model wants to call: %s\n", tc.Name)
}
}
type Stream ¶
type Stream struct {
// contains filtered or unexported fields
}
Stream provides a thread-safe interface for consuming streaming responses. It wraps a channel of StreamEvents and provides methods for iterating through the stream, collecting text, and proper resource cleanup.
Stream is safe for concurrent use from multiple goroutines. The Close method should be called when the stream is no longer needed to ensure proper cleanup of underlying resources.
Example:
stream, err := client.ChatStream(ctx, messages)
if err != nil {
	log.Fatal(err)
}
defer stream.Close()
for stream.Next() {
event := stream.Event()
fmt.Print(event.Content)
}
if err := stream.Err(); err != nil {
// handle error
}
func NewStream ¶
func NewStream(ch <-chan StreamEvent, closer io.Closer) *Stream
NewStream creates a new Stream wrapping the provided event channel. The closer is used to close underlying resources (like the HTTP response) when Close is called. The closer may be nil if no cleanup is needed.
Parameters:
- ch: The channel to read stream events from
- closer: An optional io.Closer to call when the stream is closed
Returns a new Stream instance
func (*Stream) Close ¶
Close releases resources associated with the stream. It is safe to call multiple times. After Close is called, Next will always return false.
Close will wait up to 5 seconds for the event channel to drain. If the producer goroutine has exited without closing the channel, Close will return after the timeout rather than blocking forever.
Returns an error if closing the underlying closer fails
func (*Stream) Err ¶
Err returns any error that occurred during streaming. If the stream ended normally, nil is returned.
Returns the error from the stream, or nil if no error occurred
func (*Stream) Event ¶
func (s *Stream) Event() StreamEvent
Event returns the most recent event received from the last call to Next. The returned event is valid until the next call to Next or Close.
Returns the current StreamEvent
func (*Stream) Next ¶
Next advances the stream to the next event. Returns false if the stream has ended (either normally or due to an error) or if Close was called.
Next is not safe for concurrent use with itself: only one goroutine should call Next at a time. Close, however, may be called concurrently with Next.
Returns true if there is a new event available to process
func (*Stream) ReasoningText ¶
ReasoningText collects and returns all thinking events as a single string. This is used for models that provide extended thinking/reasoning output separately from the main content.
The method consumes the entire stream until it ends or an error occurs. After ReasoningText returns, the stream is fully consumed.
Returns the concatenated thinking content and any error that occurred
func (*Stream) Text ¶
Text collects and returns all content events as a single string. This is a convenience method for when you only care about the text output and not the individual events.
The method consumes the entire stream until it ends or an error occurs. After Text returns, the stream is fully consumed and should not be used further.
Returns the concatenated text content and any error that occurred
type StreamEvent ¶
type StreamEvent struct {
// Type indicates what kind of event this is (content, thinking, done, error).
Type StreamEventType
// Content contains the text data for content or thinking events.
// This is empty for done or error events.
Content string
// Usage contains token usage statistics when available.
// This is typically populated when the stream completes.
Usage *Usage
// Err is non-nil if an error occurred during streaming.
// When this is set, the stream has terminated abnormally.
Err error
}
StreamEvent represents a single unit of data received during streaming. Events are sent through a channel as they arrive from the server.
type StreamEventType ¶
type StreamEventType string
StreamEventType classifies the type of event received during streaming.
const (
	// EventContent indicates a content chunk was received.
	// This is the main type for text output from the model.
	EventContent StreamEventType = "content"

	// EventThinking indicates a reasoning/thinking chunk was received.
	// This is used by models that support extended thinking.
	EventThinking StreamEventType = "thinking"

	// EventDone indicates the stream has completed successfully.
	// This signals that all data has been received.
	EventDone StreamEventType = "done"

	// EventError indicates an error occurred during streaming.
	// The error details are in the Err field of StreamEvent.
	EventError StreamEventType = "error"
)
Stream event type constants indicating what kind of data was received in a streaming response.
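Dispatching on these event types usually takes the form of a switch. The sketch below defines minimal local stand-ins for StreamEvent and StreamEventType so it runs without the library, and consumes a slice rather than a live stream.

```go
package main

import (
	"fmt"
	"strings"
)

// Minimal local stand-ins for the documented types.
type eventType string

const (
	eventContent  eventType = "content"
	eventThinking eventType = "thinking"
	eventDone     eventType = "done"
	eventError    eventType = "error"
)

type streamEvent struct {
	Type    eventType
	Content string
	Err     error
}

// collect shows the typical dispatch: content is accumulated, thinking
// is skipped, done ends the loop, error aborts with the stream's error.
func collect(events []streamEvent) (string, error) {
	var b strings.Builder
	for _, ev := range events {
		switch ev.Type {
		case eventContent:
			b.WriteString(ev.Content)
		case eventThinking:
			// reasoning output; ignored in this sketch
		case eventDone:
			return b.String(), nil
		case eventError:
			return b.String(), ev.Err
		}
	}
	return b.String(), nil
}

func main() {
	text, err := collect([]streamEvent{
		{Type: eventThinking, Content: "hmm"},
		{Type: eventContent, Content: "Hello, "},
		{Type: eventContent, Content: "world"},
		{Type: eventDone},
	})
	fmt.Println(text, err)
}
```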
type TokenHelper ¶
type TokenHelper struct{}
TokenHelper provides utilities for token persistence and expiry checks.
func (*TokenHelper) IsTokenExpired ¶
func (h *TokenHelper) IsTokenExpired(token *OAuthToken) bool
IsTokenExpired reports whether the token has expired.
func (*TokenHelper) LoadToken ¶
func (h *TokenHelper) LoadToken(filename string) (*OAuthToken, error)
LoadToken loads a token from the named file.
func (*TokenHelper) SaveToken ¶
func (h *TokenHelper) SaveToken(token *OAuthToken, filename string) error
SaveToken saves the token to the named file.
type TokenStore ¶
type TokenStore interface {
Save(token *OAuthToken) error
Load() (*OAuthToken, error)
}
TokenStore defines the interface for persisting and retrieving OAuth tokens. This allows developers to use custom storage like databases, Redis, or memory.
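A custom store only needs the Save/Load pair. As a sketch, here is an in-memory implementation with a minimal local OAuthToken; the mutex makes it safe to share with AuthManager's concurrent GetToken callers.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// OAuthToken holds just the fields needed for this sketch.
type OAuthToken struct {
	Access  string
	Refresh string
	Expires int64
}

// MemoryTokenStore sketches a custom TokenStore backed by memory
// instead of a file — the same Save/Load shape the interface requires.
type MemoryTokenStore struct {
	mu    sync.Mutex
	token *OAuthToken
}

func (s *MemoryTokenStore) Save(token *OAuthToken) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.token = token
	return nil
}

func (s *MemoryTokenStore) Load() (*OAuthToken, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.token == nil {
		return nil, errors.New("no token stored")
	}
	return s.token, nil
}

func main() {
	store := &MemoryTokenStore{}
	_ = store.Save(&OAuthToken{Access: "abc"})
	tok, _ := store.Load()
	fmt.Println(tok.Access)
}
```

A store like this would be passed to NewAuthManagerWithStore in place of the default FileTokenStore.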
type Tool ¶
type Tool struct {
// Name is a unique identifier for this tool.
// Must be a valid identifier: starts with letter/underscore,
// contains only letters, digits, and underscores.
Name string `json:"name"`
// Description explains what the tool does and when to use it.
// This is used by the model to decide whether to call the tool.
Description string `json:"description"`
// Parameters is a JSON Schema defining the tool's input parameters.
// Must be a valid JSON object with "type": "object".
// Each property in "properties" defines one parameter.
Parameters json.RawMessage `json:"parameters"`
}
Tool represents a callable function that an LLM can invoke during generation. Tools extend the model's capabilities by allowing it to perform actions like searching the web, executing code, or accessing external APIs.
A Tool consists of a name, description, and JSON schema that defines the parameters the tool accepts. The model uses the description to determine when to call the tool, and the schema to generate valid arguments.
Example:
tool := core.Tool{
Name: "get_weather",
Description: "Get the current weather for a location",
Parameters: json.RawMessage(`{
"type": "object",
"properties": {
"location": {"type": "string"}
}
}`),
}
func (*Tool) Validate ¶ added in v0.1.10
Validate checks if the tool is properly configured and can be used in requests. Validation ensures the tool name is valid and the parameters schema is well-formed.
Returns nil if the tool is valid, or an error describing the validation failure. Common validation errors include:
- Empty tool name
- Invalid tool name (starts with number, contains special characters)
- Invalid JSON in Parameters
- Parameters missing required "type" field
- Parameters with type other than "object"
- Invalid property names in the schema
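The naming rule stated for Tool.Name (starts with a letter or underscore, then letters, digits, or underscores) can be expressed as a regular expression. This sketch checks only the name, not the schema checks listed above.

```go
package main

import (
	"fmt"
	"regexp"
)

// toolNameRe encodes the documented rule: a leading letter or
// underscore, followed by letters, digits, or underscores.
var toolNameRe = regexp.MustCompile(`^[A-Za-z_][A-Za-z0-9_]*$`)

func validToolName(name string) bool {
	return toolNameRe.MatchString(name)
}

func main() {
	fmt.Println(validToolName("get_weather")) // valid
	fmt.Println(validToolName("2fast"))       // starts with a digit
}
```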
type ToolCall ¶
type ToolCall struct {
// ID uniquely identifies this tool call request.
// Use this when sending the tool result back to the model.
ID string `json:"id"`
// Name is the name of the tool to call.
// Must match the Name field of a Tool provided in the request.
Name string `json:"name"`
// Arguments is a JSON string containing the parameters to pass to the tool.
// Parse this with json.Unmarshal to get a map or struct.
Arguments string `json:"arguments"`
}
ToolCall represents a request from the model to invoke a tool. This is returned when the model decides to call a tool during generation. It contains the tool name and the arguments to pass to the tool.
type Usage ¶
type Usage struct {
// PromptTokens is the number of tokens in the input (your messages).
PromptTokens int `json:"prompt_tokens"`
// CompletionTokens is the number of tokens in the output (model's response).
CompletionTokens int `json:"completion_tokens"`
// TotalTokens is the sum of PromptTokens and CompletionTokens.
// This is typically what you're billed for.
TotalTokens int `json:"total_tokens"`
}
Usage tracks token consumption for a request.
Tokens are the basic units that LLM providers use for billing. Different providers may count tokens differently, but the general principle is: more tokens = higher cost.
Example:
response, _ := client.Chat(ctx, messages)
if response.Usage != nil {
fmt.Printf("Prompt: %d tokens\n", response.Usage.PromptTokens)
fmt.Printf("Completion: %d tokens\n", response.Usage.CompletionTokens)
fmt.Printf("Total: %d tokens\n", response.Usage.TotalTokens)
}
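A common pattern with WithUsageCallback is accumulating per-request usage into a session total for cost tracking. The sketch below defines its own Usage mirror and a hypothetical add helper, since the package does not document one.

```go
package main

import "fmt"

// Usage mirrors the struct documented above.
type Usage struct {
	PromptTokens     int
	CompletionTokens int
	TotalTokens      int
}

// add accumulates one request's usage into a running session total —
// the kind of bookkeeping a WithUsageCallback handler would do.
func (u *Usage) add(other Usage) {
	u.PromptTokens += other.PromptTokens
	u.CompletionTokens += other.CompletionTokens
	u.TotalTokens += other.TotalTokens
}

func main() {
	var session Usage
	session.add(Usage{PromptTokens: 12, CompletionTokens: 40, TotalTokens: 52})
	session.add(Usage{PromptTokens: 30, CompletionTokens: 10, TotalTokens: 40})
	fmt.Printf("session total: %d tokens\n", session.TotalTokens)
}
```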