fm

package module
v0.1.5
Published: Jul 12, 2025 License: MIT Imports: 11 Imported by: 0

README

go-foundationmodels

🚀 Pure Go wrapper for Apple's Foundation Models


Why? 🤔

Apple's Foundation Models provides powerful on-device AI capabilities in macOS 26 Tahoe, but it's only accessible through Swift/Objective-C APIs. This package bridges that gap, offering:

  • 🔒 Privacy-focused: All AI processing happens on-device, no data leaves your Mac
  • ⚡ High performance: Optimized for Apple Silicon with no network latency
  • 🚀 Streaming-first: Simulated real-time response streaming with typing indicators for modern UX
  • 🛠️ Rich tooling: Advanced features like input validation, context cancellation, and generation control
  • 📦 Self-contained: Embedded Swift shim library - no external dependencies
  • 🎯 Production-ready: Comprehensive error handling, memory management, and structured logging

Features

Generation Control

  • Temperature control: Deterministic (0.0) to creative (1.0) output
  • Token limiting: Control response length with max tokens
  • Helper functions: WithDeterministic(), WithCreative(), WithBalanced() (see the sketch after this list)
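
A quick sketch of these controls in use (assuming a session created as in Basic Usage below; imports and error handling are omitted, as in the other snippets):

sess := fm.NewSession()
defer sess.Release()

// Preset helpers
fmt.Println(sess.Respond("What is 2+2?", fm.WithDeterministic()))
fmt.Println(sess.Respond("Write a haiku about Go", fm.WithCreative()))

// Explicit response-length and temperature control
fmt.Println(sess.Respond("Describe Go in one sentence", fm.WithMaxTokens(50)))
fmt.Println(sess.Respond("Brainstorm project names", fm.WithTemperature(0.7)))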

Advanced Tool System

  • Custom tool creation: Define tools that Foundation Models can call autonomously
  • Real-time data access: Via custom integrations
  • Input validation: Type checking, required fields, enum constraints, regex patterns
  • Automatic error handling: Comprehensive validation before execution
  • Swift-Go bridge: Seamless callback mechanism between Foundation Models and Go tools

Context Management

  • Timeout support: Cancel long-running requests automatically
  • Manual cancellation: User-controlled request cancellation
  • Context tracking: 4096-token window with usage monitoring
  • Session refresh: Seamless context window management

Robust Architecture

  • Pure Go implementation: No CGO dependencies, uses purego for Swift bridge
  • Memory safety: Automatic C string cleanup and proper resource management
  • Error resilience: Graceful initialization failure handling
  • Self-contained: Embedded Swift shim library with automatic extraction
  • Structured logging: Go slog integration with debug logging for both Go and Swift layers

[!WARNING] Apple's model is quite finicky and overly cautious; it may refuse to answer some questions.

Requirements

  • macOS 26 Tahoe (beta) or later
  • Apple Intelligence enabled on your device
  • Apple Silicon Mac (M1/M2/M3/M4 series)
  • Go 1.24+ (uses latest Go features)
  • Xcode 15.x or later (for Swift shim compilation if needed)

Getting Started

go get github.com/blacktop/go-foundationmodels

Basic Usage

package main

import (
    "fmt"
    "log"
    fm "github.com/blacktop/go-foundationmodels"
)

func main() {
    // Check availability
    if fm.CheckModelAvailability() != fm.ModelAvailable {
        log.Fatal("Foundation Models not available")
    }

    // Create session
    sess := fm.NewSession()
    defer sess.Release()

    // Generate text
    response := sess.Respond("What is artificial intelligence?", nil)
    fmt.Println(response)

    // Use generation options
    creative := sess.Respond("Write a story", fm.WithCreative())
    fmt.Println(creative)
}

Tool Calling Example

package main

import (
    "fmt"
    "log"
    fm "github.com/blacktop/go-foundationmodels"
)

// Simple calculator tool
type CalculatorTool struct{}

func (c *CalculatorTool) Name() string { return "calculate" }
func (c *CalculatorTool) Description() string {
    return "Calculate mathematical expressions with add, subtract, multiply, or divide operations"
}
func (c *CalculatorTool) GetParameters() []fm.ToolArgument {
    return []fm.ToolArgument{{
        Name: "arguments", Type: "string", Required: true,
        Description: "Mathematical expression with two numbers and one operation",
    }}
}
func (c *CalculatorTool) Execute(args map[string]any) (fm.ToolResult, error) {
    expr := args["arguments"].(string)
    // ... implement expression parsing and calculation
    return fm.ToolResult{Content: "42.00"}, nil
}

func main() {
    sess := fm.NewSessionWithInstructions("You are a helpful calculator assistant.")
    defer sess.Release()

    // Register tool (RegisterTool returns an error)
    calculator := &CalculatorTool{}
    if err := sess.RegisterTool(calculator); err != nil {
        log.Fatal(err)
    }

    // AI will autonomously call the tool when needed
    response := sess.RespondWithTools("What is 15 plus 27?")
    fmt.Println(response) // "The result is 42.00"
}

CLI Tool: found

Install with Homebrew:

brew install blacktop/tap/found

Install with Go:

go install github.com/blacktop/go-foundationmodels/cmd/found@latest

Or download from the latest release

CLI Usage

Use found --help or found [command] --help to see all available commands and examples.

Available commands:

  • found info - Display model availability and system information
  • found quest - Interactive chat with streaming support, system instructions and JSON output
  • found stream - Real-time streaming text generation with optional tools ✅
  • found tool calc - Mathematical calculations with real arithmetic ✅
  • found tool weather - Real-time weather data with geocoding ✅


Working Examples

Tool Calling Success Stories ✅

Weather Tool: Get real-time weather data

found tool weather "New York"
# Returns actual weather from OpenMeteo API with temperature, conditions, humidity, etc.

Calculator Tool: Perform mathematical operations

found tool calc "add 15 plus 27"
# Returns: The result of "15 + 27" is **42.00**.

Debug Mode: See comprehensive logging in action

found tool weather --verbose "Paris"
# Shows both Go debug logs (slog) and Swift logs with detailed execution flow

Foundation Models Behavior

While tool calling is functional, Foundation Models exhibits some variability:

  • Tool execution works: When called, tools successfully return real data
  • Callback mechanism fixed: Swift ↔ Go communication is reliable
  • ⚠️ Inconsistent invocation: Foundation Models sometimes refuses to call tools due to safety restrictions
  • Error handling: Graceful failures with helpful explanations

Known Limitations

  • Foundation Models Safety: Some queries may be blocked by built-in safety guardrails
  • Context Window: 4096 token limit requires session refresh for long conversations
  • Tool Parameter Mapping: Complex expressions may not parse correctly into tool parameters
  • Streaming Implementation: Currently uses simulated streaming (post-processing chunks) as Foundation Models doesn't yet provide native streaming APIs

Roadmap

  • Fix tool calling reliability - ✅ COMPLETED - Tools now work with real data
  • Swift-Go callback mechanism - ✅ COMPLETED - Reliable bidirectional communication
  • Tool debugging capabilities - ✅ COMPLETED - --verbose flag for comprehensive debug logs
  • Direct tool testing - ✅ COMPLETED - --direct flag bypasses Foundation Models
  • Streaming responses - ✅ COMPLETED - Simulated streaming with word/sentence chunks (native streaming pending Foundation Models API)
  • Structured logging - ✅ COMPLETED - Go slog integration with consolidated debug logging
  • Advanced tool schemas with OpenAPI-style definitions
  • Multi-modal support (images, audio) when available
  • Performance optimizations for large contexts
  • Enhanced error handling with detailed diagnostics
  • Plugin system for extensible tool management
  • Native streaming support - Upgrade to Foundation Models native streaming API when available
  • Improve Foundation Models consistency - Research better prompting strategies

License

MIT Copyright (c) 2025 blacktop

Documentation

Overview

Package fm provides a pure Go wrapper around the macOS Foundation Models framework.

Foundation Models is Apple's on-device large language model framework introduced in macOS 26 Tahoe, providing privacy-focused AI capabilities without requiring internet connectivity.

Features

  • Streaming-first text generation with LanguageModelSession
  • Simulated real-time response streaming with word/sentence chunks
  • Dynamic tool calling with custom Go tools and input validation
  • Structured output generation with JSON formatting
  • Context window management (4096 token limit)
  • Context cancellation and timeout support
  • Session lifecycle management with proper memory handling
  • System instructions support
  • Generation options for temperature, max tokens, and other parameters
  • Structured logging with Go slog integration for comprehensive debugging

Requirements

  • macOS 26 Tahoe or later
  • Apple Intelligence enabled
  • Compatible Apple Silicon device

Basic Usage

Create a session and generate text:

sess := fm.NewSession()
defer sess.Release()

response := sess.Respond("Tell me about artificial intelligence", nil)
fmt.Println(response)

Generation Options

Control output with GenerationOptions:

// Deterministic output
response := sess.Respond("What is 2+2?", fm.WithDeterministic())

// Creative output
response = sess.Respond("Write a story", fm.WithCreative())

// Custom options
options := &fm.GenerationOptions{
	Temperature: &[]float32{0.3}[0],
	MaxTokens:   &[]int{100}[0],
}
response = sess.Respond("Explain AI", options)

System Instructions

Create a session with specific behavior:

instructions := "You are a helpful assistant that responds concisely."
sess := fm.NewSessionWithInstructions(instructions)
defer sess.Release()

response := sess.Respond("What is machine learning?", nil)
fmt.Println(response)

Context Management

Foundation Models has a strict 4096 token context window. Monitor usage:

fmt.Printf("Context: %d/%d tokens (%.1f%% used)\n",
	sess.GetContextSize(), sess.GetMaxContextSize(), sess.GetContextUsagePercent())

if sess.IsContextNearLimit() {
	// Refresh session when approaching limit
	newSess := sess.RefreshSession()
	sess.Release()
	sess = newSess
}

Tool Calling

Define custom tools that the model can call:

type CalculatorTool struct{}

func (c *CalculatorTool) Name() string {
	return "calculate"
}

func (c *CalculatorTool) Description() string {
	return "Calculate mathematical expressions with add, subtract, multiply, or divide operations"
}

// Implement SchematizedTool for parameter definitions
func (c *CalculatorTool) GetParameters() []fm.ToolArgument {
	return []fm.ToolArgument{{
		Name: "arguments", Type: "string", Required: true,
		Description: "Mathematical expression with two numbers and one operation",
	}}
}

func (c *CalculatorTool) Execute(args map[string]any) (fm.ToolResult, error) {
	expr := args["arguments"].(string)
	// Parse and evaluate expression (implementation details omitted)
	result := evaluateExpression(expr)

	return fm.ToolResult{
		Content: fmt.Sprintf("%.2f", result),
	}, nil
}

Tool Input Validation

Add validation to your tools for better error handling:

// Define validation rules
var calculatorArgDefs = []fm.ToolArgument{
	{
		Name:     "a",
		Type:     "number",
		Required: true,
	},
	{
		Name:     "b",
		Type:     "number",
		Required: true,
	},
	{
		Name:     "operation",
		Type:     "string",
		Required: true,
		Enum:     []any{"add", "subtract", "multiply", "divide"},
	},
}

// Implement ValidatedTool interface
func (c *CalculatorTool) ValidateArguments(args map[string]any) error {
	return fm.ValidateToolArguments(args, calculatorArgDefs)
}

Register and use tools:

sess := fm.NewSessionWithInstructions("You are a helpful calculator assistant.")
defer sess.Release()

calculator := &CalculatorTool{}
sess.RegisterTool(calculator)

// Foundation Models will autonomously call the tool when needed
response := sess.RespondWithTools("What is 15 + 27?")
fmt.Println(response) // "The result is 42.00"

Structured Output

Generate structured JSON responses:

response := sess.RespondWithStructuredOutput("Analyze this text: 'Hello world'")
fmt.Println(response) // Returns formatted JSON

Context Cancellation

Cancel long-running requests with context support:

import (
	"context"
	"time"
)

// Timeout cancellation
response, err := sess.RespondWithTimeout(5*time.Second, "Long prompt", nil)
if err != nil {
	fmt.Printf("Request timed out: %v\n", err)
}

// Manual cancellation
ctx, cancel := context.WithCancel(context.Background())
go func() {
	time.Sleep(2 * time.Second)
	cancel()
}()

response, err = sess.RespondWithContext(ctx, "Prompt", fm.WithCreative())
if err != nil {
	fmt.Printf("Request cancelled: %v\n", err)
}

// Tool calling with timeout
response, err = sess.RespondWithToolsTimeout(10*time.Second, "What is 25 times 4?")
if err != nil {
	fmt.Printf("Tool request timed out: %v\n", err)
}

Streaming Responses

Generate responses with simulated real-time streaming output:

// Simple streaming (simulated - post-processes complete response into chunks)
callback := func(chunk string, isLast bool) {
	fmt.Print(chunk)
	if isLast {
		fmt.Println() // Final newline
	}
}
sess.RespondWithStreaming("Write a story", callback)

// Streaming with tools
sess.RespondWithToolsStreaming("What's the weather and calculate 2+2?", callback)

// Basic streaming
sess.RespondWithStreaming("Tell me a joke", callback)

Note: Current streaming implementation is simulated (breaks complete response into chunks). Native streaming will be implemented when Foundation Models provides streaming APIs.

Model Availability

Check if Foundation Models is available:

availability := fm.CheckModelAvailability()
switch availability {
case fm.ModelAvailable:
	fmt.Println("✅ Foundation Models available")
case fm.ModelUnavailableAINotEnabled:
	fmt.Println("❌ Apple Intelligence not enabled")
case fm.ModelUnavailableDeviceNotEligible:
	fmt.Println("❌ Device not eligible")
default:
	fmt.Println("❌ Unknown availability status")
}

Error Handling

The package provides comprehensive error handling:

if err := sess.RegisterTool(myTool); err != nil {
	log.Fatalf("Failed to register tool: %v", err)
}

// Context validation
response := sess.Respond(veryLongPrompt, nil)
if strings.HasPrefix(response, "Error:") {
	fmt.Printf("Request failed: %s\n", response)
}

// Context-aware error handling
import "errors"

response, err := sess.RespondWithTimeout(30*time.Second, prompt, nil)
if err != nil {
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("Request timed out")
	} else if errors.Is(err, context.Canceled) {
		fmt.Println("Request was cancelled")
	}
}

Memory Management

Always release sessions to prevent memory leaks:

sess := fm.NewSession()
defer sess.Release() // Important: release session

// Use session...

Performance Considerations

  • Foundation Models runs entirely on-device
  • No internet connection required
  • Processing time depends on prompt complexity and device capabilities
  • Context window is limited to 4096 tokens
  • Token estimation is approximate (4 chars per token); see the sketch after this list
  • Use context cancellation for long-running requests
  • Input validation prevents runtime errors and improves performance
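
As a rough illustration of the 4-characters-per-token estimate, a caller might pre-check a long prompt before sending it. This is only a sketch; buildLongPrompt stands in for your own prompt construction, and the package's internal estimator is not shown:

prompt := buildLongPrompt() // hypothetical helper that assembles a large prompt
estimatedTokens := len(prompt) / 4 // ~4 characters per token

if estimatedTokens > sess.GetRemainingContextTokens() {
	// Refresh before sending a prompt that likely will not fit
	newSess := sess.RefreshSession()
	sess.Release()
	sess = newSess
}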

Threading

The package is not thread-safe. Use appropriate synchronization when accessing sessions from multiple goroutines. Context cancellation is goroutine-safe and can be used from any goroutine.
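
Because sessions are not safe for concurrent use, one simple pattern (a sketch, not part of the package; assumes the standard sync package is imported) is to guard a shared session with a mutex:

type SafeSession struct {
	mu   sync.Mutex
	sess *fm.Session
}

func (s *SafeSession) Respond(prompt string, opts *fm.GenerationOptions) string {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.sess.Respond(prompt, opts)
}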

Swift Shim

This package automatically manages the Swift shim library (libFMShim.dylib) that bridges Foundation Models APIs to C functions callable from Go via purego.

The library search strategy:

1. Look for an existing libFMShim.dylib in the current directory and common paths
2. If not found, automatically extract the embedded library to a temp directory
3. Load the library and initialize the Foundation Models interface

No manual setup required - the package is fully self-contained!

Limitations

  • Foundation Models API is still evolving
  • Some advanced GenerationOptions may not be fully supported yet
  • Foundation Models tool invocation can be inconsistent due to safety restrictions
  • Context cancellation cannot interrupt actual model computation
  • Streaming is currently simulated (post-processing); native streaming pending Apple API support
  • macOS 26 Tahoe only

Tool Calling Status

✅ **What Works:**

  • Tool registration and parameter definition
  • Swift ↔ Go callback mechanism
  • Real data fetching (weather, calculations, etc.)
  • Error handling and validation
  • Structured logging with Go slog integration

⚠️ **Foundation Models Behavior:**

  • Tool calling works but can be inconsistent
  • Some queries may be blocked by safety guardrails
  • Success rate varies by tool complexity and phrasing

Debug Logging

The package provides comprehensive debug logging through Go's slog package:

import "log/slog"

// Enable debug logging (typically done by CLI with --verbose flag)
handler := slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{
	Level: slog.LevelDebug,
})
slog.SetDefault(slog.New(handler))

// All fm operations will now log detailed debug information
sess := fm.NewSession()
// Logs: session creation, tool registration, response processing, etc.

Debug logs include:

  • Session creation and configuration details
  • Tool registration and parameter validation
  • Request/response processing with timing
  • Context usage and memory management
  • Swift shim layer interaction details

License

See LICENSE file for details.

Package fm provides a pure Go wrapper around the macOS Foundation Models framework, using purego to call a Swift shim library that exports C functions.

Foundation Models (macOS 26 Tahoe) provides on-device LLM capabilities including:

  • Text generation with LanguageModelSession
  • Streaming responses via delegates or async sequences
  • Tool calling with requestToolInvocation:with:
  • Structured outputs with LanguageModelRequestOptions

IMPORTANT: Foundation Models has a strict 4096 token context window limit. This package automatically tracks context usage and validates requests to prevent exceeding the limit. Use GetContextSize(), IsContextNearLimit(), and RefreshSession() to manage long conversations.

This implementation uses a Swift shim (libFMShim.dylib) that exports C functions using @_cdecl to bridge Swift async methods to synchronous C calls.

Index

Constants

const MAX_CONTEXT_SIZE = 4096 // Foundation Models context limit

Variables

This section is empty.

Functions

func GetLogs added in v0.1.4

func GetLogs() string

GetLogs returns accumulated logs from the Swift shim and clears them
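
A minimal usage sketch (assuming an active session sess):

response := sess.Respond("Hello", nil)
fmt.Println(response)
fmt.Print(fm.GetLogs()) // prints and clears logs buffered by the Swift shim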

func GetModelInfo

func GetModelInfo() string

GetModelInfo returns information about the current language model

func ValidateToolArguments

func ValidateToolArguments(args map[string]any, argDefs []ToolArgument) error

ValidateToolArguments validates tool arguments against argument definitions

Types

type GenerationOptions

type GenerationOptions struct {
	// MaxTokens is the maximum number of tokens to generate (default: no limit)
	MaxTokens *int `json:"maxTokens,omitempty"`

	// Temperature controls randomness (0.0 = deterministic, 1.0 = very random)
	Temperature *float32 `json:"temperature,omitempty"`

	// TopP controls nucleus sampling probability threshold (0.0-1.0)
	TopP *float32 `json:"topP,omitempty"`

	// TopK controls top-K sampling limit (positive integer)
	TopK *int `json:"topK,omitempty"`

	// PresencePenalty penalizes tokens based on their presence in the text so far
	PresencePenalty *float32 `json:"presencePenalty,omitempty"`

	// FrequencyPenalty penalizes tokens based on their frequency in the text so far
	FrequencyPenalty *float32 `json:"frequencyPenalty,omitempty"`

	// StopSequences is an array of sequences that will stop generation
	StopSequences []string `json:"stopSequences,omitempty"`

	// Seed for reproducible generation (when temperature is 0.0)
	Seed *int `json:"seed,omitempty"`
}

GenerationOptions represents options for controlling text generation

func WithBalanced

func WithBalanced() *GenerationOptions

WithBalanced creates GenerationOptions for balanced creativity

func WithCreative

func WithCreative() *GenerationOptions

WithCreative creates GenerationOptions for creative output

func WithDeterministic

func WithDeterministic() *GenerationOptions

WithDeterministic creates GenerationOptions for deterministic output

func WithMaxTokens

func WithMaxTokens(maxTokens int) *GenerationOptions

WithMaxTokens creates GenerationOptions with specified max tokens

func WithTemperature

func WithTemperature(temp float32) *GenerationOptions

WithTemperature creates GenerationOptions with specified temperature

type ModelAvailability

type ModelAvailability int

ModelAvailability represents the availability status of the language model

const (
	ModelAvailable ModelAvailability = iota
	ModelUnavailableAINotEnabled
	ModelUnavailableNotReady
	ModelUnavailableDeviceNotEligible
	ModelUnavailableUnknown = -1
)

func CheckModelAvailability

func CheckModelAvailability() ModelAvailability

CheckModelAvailability checks if the Foundation Models are available on this device

type ParameterDefinition added in v0.1.3

type ParameterDefinition struct {
	Type        string   `json:"type"`
	Description string   `json:"description"`
	Required    bool     `json:"required"`
	Enum        []string `json:"enum,omitempty"`
}

ParameterDefinition represents a tool parameter definition

type SchematizedTool added in v0.1.3

type SchematizedTool interface {
	Tool
	// GetParameters returns the parameter definitions for this tool
	GetParameters() []ToolArgument
}

SchematizedTool extends Tool with parameter schema definition capabilities

type Session

type Session struct {
	// contains filtered or unexported fields
}

Session represents a LanguageModelSession with context tracking

func NewSession

func NewSession() *Session

NewSession creates a new LanguageModelSession using the Swift shim

func NewSessionWithInstructions

func NewSessionWithInstructions(instructions string) *Session

NewSessionWithInstructions creates a new LanguageModelSession with system instructions

func (*Session) ClearTools

func (s *Session) ClearTools() error

ClearTools clears all registered tools from the session

func (*Session) GetContextSize

func (s *Session) GetContextSize() int

GetContextSize returns the current estimated context size

func (*Session) GetContextUsagePercent

func (s *Session) GetContextUsagePercent() float64

GetContextUsagePercent returns the percentage of context used

func (*Session) GetMaxContextSize

func (s *Session) GetMaxContextSize() int

GetMaxContextSize returns the maximum allowed context size

func (*Session) GetRegisteredTools

func (s *Session) GetRegisteredTools() []string

GetRegisteredTools returns a list of registered tool names

func (*Session) GetRemainingContextTokens

func (s *Session) GetRemainingContextTokens() int

GetRemainingContextTokens returns the number of tokens remaining in context

func (*Session) GetSystemInstructions

func (s *Session) GetSystemInstructions() string

GetSystemInstructions returns the system instructions for this session

func (*Session) IsContextNearLimit

func (s *Session) IsContextNearLimit() bool

IsContextNearLimit returns true if context usage is above 80%

func (*Session) RefreshSession

func (s *Session) RefreshSession() *Session

RefreshSession creates a new session with the same system instructions and tools. This is useful when the context is near the limit and you want to continue the conversation.

func (*Session) RegisterTool

func (s *Session) RegisterTool(tool Tool) error

RegisterTool registers a tool with the session

func (*Session) Release

func (s *Session) Release()

Release releases the session memory

func (*Session) Respond

func (s *Session) Respond(prompt string, options *GenerationOptions) string

Respond sends a prompt to the language model and returns the response. If options is nil, default generation settings are used.

func (*Session) RespondWithContext

func (s *Session) RespondWithContext(ctx context.Context, prompt string, options *GenerationOptions) (string, error)

RespondWithContext sends a prompt with context cancellation support

func (*Session) RespondWithOptions

func (s *Session) RespondWithOptions(prompt string, maxTokens int, temperature float32) string

RespondWithOptions sends a prompt with specific generation options

func (*Session) RespondWithStreaming added in v0.1.5

func (s *Session) RespondWithStreaming(prompt string, callback StreamingCallback)

RespondWithStreaming generates a response with streaming output

func (*Session) RespondWithStructuredOutput

func (s *Session) RespondWithStructuredOutput(prompt string) string

RespondWithStructuredOutput sends a prompt and returns structured JSON output

func (*Session) RespondWithTimeout

func (s *Session) RespondWithTimeout(timeout time.Duration, prompt string, options *GenerationOptions) (string, error)

RespondWithTimeout is a convenience method that creates a context with timeout

func (*Session) RespondWithTools

func (s *Session) RespondWithTools(prompt string) string

RespondWithTools sends a prompt with tool calling enabled

func (*Session) RespondWithToolsContext

func (s *Session) RespondWithToolsContext(ctx context.Context, prompt string) (string, error)

RespondWithToolsContext sends a prompt with tool calling enabled and context cancellation support

func (*Session) RespondWithToolsStreaming added in v0.1.5

func (s *Session) RespondWithToolsStreaming(prompt string, callback StreamingCallback)

RespondWithToolsStreaming generates a response with tools using streaming output

func (*Session) RespondWithToolsTimeout

func (s *Session) RespondWithToolsTimeout(timeout time.Duration, prompt string) (string, error)

RespondWithToolsTimeout is a convenience method for tool calling with timeout

type StreamingCallback added in v0.1.5

type StreamingCallback func(chunk string, isLast bool)

StreamingCallback is called for each chunk of streaming response

type Tool

type Tool interface {
	// Name returns the name of the tool
	Name() string
	// Description returns a description of what the tool does
	Description() string
	// Execute executes the tool with the given arguments and returns the result
	Execute(arguments map[string]any) (ToolResult, error)
}

Tool represents a tool that can be called by the Foundation Models

type ToolArgument

type ToolArgument struct {
	Name        string   `json:"name"`
	Type        string   `json:"type"` // "string", "number", "integer", "boolean", "array", "object"
	Description string   `json:"description"`
	Required    bool     `json:"required"`
	MinLength   *int     `json:"minLength,omitempty"` // For strings
	MaxLength   *int     `json:"maxLength,omitempty"` // For strings
	Minimum     *float64 `json:"minimum,omitempty"`   // For numbers
	Maximum     *float64 `json:"maximum,omitempty"`   // For numbers
	Pattern     *string  `json:"pattern,omitempty"`   // Regex pattern for strings
	Enum        []any    `json:"enum,omitempty"`      // Allowed values
}

ToolArgument represents a tool argument definition for validation
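
As an illustrative sketch of the validation fields (the "city" argument and its constraints are made up for this example):

minLen := 2
maxLen := 64
pattern := `^[A-Za-z ,.'-]+$`

cityArg := fm.ToolArgument{
	Name:        "city",
	Type:        "string",
	Description: "City name to look up",
	Required:    true,
	MinLength:   &minLen,
	MaxLength:   &maxLen,
	Pattern:     &pattern,
}

if err := fm.ValidateToolArguments(map[string]any{"city": "Paris"}, []fm.ToolArgument{cityArg}); err != nil {
	log.Fatalf("invalid arguments: %v", err)
}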

type ToolDefinition

type ToolDefinition struct {
	Name        string                         `json:"name"`
	Description string                         `json:"description"`
	Parameters  map[string]ParameterDefinition `json:"parameters"`
}

ToolDefinition represents a tool definition for the Swift shim

type ToolResult

type ToolResult struct {
	Content string `json:"content"`
	Error   string `json:"error,omitempty"`
}

ToolResult represents the result of a tool execution

type ValidatedTool

type ValidatedTool interface {
	Tool
	// ValidateArguments validates the tool arguments before execution
	ValidateArguments(arguments map[string]any) error
}

ValidatedTool extends Tool with input validation capabilities
