Documentation ¶
Overview ¶
Package dive provides a Go library for building AI agents and integrating with leading LLMs. It takes a library-first approach, providing a clean API for embedding AI capabilities into Go applications.
The core types are:
- Agent orchestrates LLM interactions with tool execution and conversation management.
- Tool and TypedTool define callable tools that an LLM can invoke.
- Response captures the output from an agent's response generation.
- Hooks groups hook slices for customizing agent behavior at key points. Hook types: PreGenerationHook, PostGenerationHook, PreToolUseHook, PostToolUseHook, PostToolUseFailureHook, StopHook, PreIterationHook.
Quick Start ¶
agent, _ := dive.NewAgent(dive.AgentOptions{
Name: "Assistant",
SystemPrompt: "You are a helpful assistant.",
Model: anthropic.New(),
})
response, _ := agent.CreateResponse(ctx, dive.WithInput("Hello!"))
fmt.Println(response.OutputText())
Built-in tools are available in the github.com/deepnoodle-ai/dive/toolkit package. LLM providers are in the github.com/deepnoodle-ai/dive/providers subpackages.
Index ¶
- Constants
- Variables
- func AbortGeneration(reason string) error
- func AbortGenerationWithCause(reason string, cause error) error
- func DateTimeString(t time.Time) string
- func IsUserFeedback(err error) (string, bool)
- func NewUserFeedback(feedback string) error
- func Ptr[T any](t T) *T
- type Agent
- type AgentOptions
- type AutoApproveDialog
- type CreateResponseOption
- type CreateResponseOptions
- type DenyAllDialog
- type Dialog
- type DialogInput
- type DialogOption
- type DialogOutput
- type EventCallback
- type FuncToolOption
- type HookAbortError
- type HookContext
- type Hooks
- type ModelSettings
- type PostGenerationHook
- type PostToolUseFailureHook
- type PostToolUseHook
- type PreGenerationHook
- type PreIterationHook
- type PreToolUseHook
- type Response
- type ResponseItem
- type ResponseItemType
- type Schema
- type SchemaProperty
- type SchemaType
- type Session
- type StopDecision
- type StopHook
- type TerminalDialog
- type TerminalDialogOptions
- type Tool
- type ToolAnnotations
- type ToolCallPreview
- type ToolCallResult
- type ToolPreviewer
- type ToolResult
- type ToolResultContent
- type ToolResultContentType
- type Toolset
- type ToolsetFunc
- type TypedTool
- type TypedToolAdapter
- func (t *TypedToolAdapter[T]) Annotations() *ToolAnnotations
- func (t *TypedToolAdapter[T]) Call(ctx context.Context, input any) (*ToolResult, error)
- func (t *TypedToolAdapter[T]) Description() string
- func (t *TypedToolAdapter[T]) Name() string
- func (t *TypedToolAdapter[T]) PreviewCall(ctx context.Context, input any) *ToolCallPreview
- func (t *TypedToolAdapter[T]) Schema() *Schema
- func (t *TypedToolAdapter[T]) ToolConfiguration(providerName string) map[string]any
- func (t *TypedToolAdapter[T]) Unwrap() TypedTool[T]
- type TypedToolPreviewer
- type UserFeedbackError
Constants ¶
const (
// StateKeyCompactionEvent stores the compaction event for post-generation hooks.
StateKeyCompactionEvent = "compaction_event"
)
Well-known keys for HookContext.Values used by experimental packages. Using these constants instead of raw strings prevents typos and makes cross-package dependencies explicit.
Variables ¶
var (
	ErrLLMNoResponse = errors.New("llm did not return a response")
	ErrNoLLM         = errors.New("no llm provided")
)
var NewSchema = schema.NewSchema
NewSchema creates a new Schema with the given properties and required fields.
Functions ¶
func AbortGeneration ¶ added in v1.0.0
func AbortGeneration(reason string) error
AbortGeneration creates a HookAbortError to abort generation. Use this in hooks when a critical failure occurs that should stop generation entirely.
func AbortGenerationWithCause ¶ added in v1.0.0
func AbortGenerationWithCause(reason string, cause error) error
AbortGenerationWithCause creates a HookAbortError with an underlying cause.
func DateTimeString ¶ added in v1.0.0
func DateTimeString(t time.Time) string
DateTimeString returns a human-readable string describing the given time, including the date, time of day, and day of the week.
func IsUserFeedback ¶ added in v1.0.0
func IsUserFeedback(err error) (string, bool)
IsUserFeedback checks if an error is user feedback and returns the feedback text. Returns the feedback string and true if it's user feedback, empty string and false otherwise.
func NewUserFeedback ¶ added in v1.0.0
func NewUserFeedback(feedback string) error
NewUserFeedback creates a UserFeedbackError with the given feedback.
func Ptr ¶
func Ptr[T any](t T) *T
Ptr returns a pointer to the given value.
Types ¶
type Agent ¶
type Agent struct {
// contains filtered or unexported fields
}
Agent represents an intelligent AI entity that can autonomously use tools to process information while responding to chat messages.
func NewAgent ¶ added in v0.0.10
func NewAgent(opts AgentOptions) (*Agent, error)
NewAgent returns a new Agent configured with the given options.
func (*Agent) CreateResponse ¶
func (a *Agent) CreateResponse(ctx context.Context, opts ...CreateResponseOption) (*Response, error)
CreateResponse runs the agent's generation loop and returns the resulting Response.
type AgentOptions ¶ added in v0.0.10
type AgentOptions struct {
// SystemPrompt is the system prompt sent to the LLM.
SystemPrompt string
// Model is the LLM to use for generation.
Model llm.LLM
// Tools available to the agent (static).
Tools []Tool
// Toolsets provide dynamic tool resolution. Each toolset's Tools() method
// is called before each LLM request, enabling context-dependent tool
// availability. Tools from toolsets are merged with static Tools.
Toolsets []Toolset
// Hooks groups all agent-level hooks.
Hooks Hooks
// Infrastructure
Logger llm.Logger
ModelSettings *ModelSettings
// LLMHooks are provider-level hooks passed to the LLM on each generation.
// These are distinct from agent-level hooks which control the agent's
// generation loop.
LLMHooks llm.Hooks
// Optional name for logging
Name string
// Session enables persistent conversation state. When set, the agent
// automatically loads history before generation and saves new messages
// after generation. Can be overridden per-call with WithSession.
Session Session
// Timeouts and limits
ResponseTimeout time.Duration
ToolIterationLimit int
}
AgentOptions are used to configure an Agent.
type AutoApproveDialog ¶ added in v1.0.0
type AutoApproveDialog struct{}
AutoApproveDialog automatically approves confirmations and selects first/default options.
func (*AutoApproveDialog) Show ¶ added in v1.0.0
func (d *AutoApproveDialog) Show(ctx context.Context, in *DialogInput) (*DialogOutput, error)
type CreateResponseOption ¶ added in v0.0.10
type CreateResponseOption func(*CreateResponseOptions)
CreateResponseOption is a function type used to configure options for a single response generation.
func WithEventCallback ¶
func WithEventCallback(callback EventCallback) CreateResponseOption
WithEventCallback specifies a callback function that will be invoked for each item generated during response creation.
func WithInput ¶
func WithInput(input string) CreateResponseOption
WithInput specifies a simple text input string to be used in the generation. This is a convenience wrapper that creates a single user message.
func WithMessages ¶
func WithMessages(messages ...*llm.Message) CreateResponseOption
WithMessages specifies the messages to be used in the generation.
func WithSession ¶ added in v1.0.0
func WithSession(s Session) CreateResponseOption
WithSession overrides the agent's default session for a single call. This is useful in server scenarios where one agent handles multiple sessions.
func WithValue ¶ added in v1.0.0
func WithValue(key string, value any) CreateResponseOption
WithValue sets a single key-value pair that will be available in HookContext.Values during generation. Multiple WithValue calls accumulate.
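A sketch of the round trip: the value is supplied at call time and read back inside a hook via HookContext.Values. The "tenant_id" key is illustrative, not part of the API.
```go
agent, _ := dive.NewAgent(dive.AgentOptions{
	Model: model,
	Hooks: dive.Hooks{
		PreGeneration: []dive.PreGenerationHook{
			func(ctx context.Context, hctx *dive.HookContext) error {
				// "tenant_id" is an illustrative key chosen by the caller.
				if id, ok := hctx.Values["tenant_id"].(string); ok {
					log.Printf("generating for tenant %s", id)
				}
				return nil
			},
		},
	},
})

// Supply the value for this call only.
response, _ := agent.CreateResponse(ctx,
	dive.WithInput("Hello!"),
	dive.WithValue("tenant_id", "acme"),
)
```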
type CreateResponseOptions ¶ added in v0.0.10
type CreateResponseOptions struct {
// Messages contains the input messages for this generation. These are
// appended to any existing session messages before sending to the LLM.
Messages []*llm.Message
// EventCallback is invoked for each response item during generation.
// Items delivered include messages, tool calls, and tool results.
EventCallback EventCallback
// Values contains arbitrary key-value pairs that are copied into
// HookContext.Values before hooks run. This allows callers to pass
// data to hooks (e.g. session IDs) through CreateResponse options.
Values map[string]any
// Session overrides AgentOptions.Session for this call.
// Useful in server scenarios where one agent serves multiple sessions.
Session Session
}
CreateResponseOptions contains configuration for LLM generations.
This struct holds all the options that can be passed to Agent.CreateResponse. Options are typically set using the With* functions rather than directly modifying this struct.
func (*CreateResponseOptions) Apply ¶ added in v0.0.10
func (o *CreateResponseOptions) Apply(opts []CreateResponseOption)
Apply invokes any supplied options. Used internally in Dive.
type DenyAllDialog ¶ added in v1.0.0
type DenyAllDialog struct{}
DenyAllDialog denies all confirmations and cancels all other dialogs.
func (*DenyAllDialog) Show ¶ added in v1.0.0
func (d *DenyAllDialog) Show(ctx context.Context, in *DialogInput) (*DialogOutput, error)
type Dialog ¶ added in v1.0.0
type Dialog interface {
// Show presents a dialog to the user and waits for their response.
Show(ctx context.Context, in *DialogInput) (*DialogOutput, error)
}
Dialog handles user interaction prompts during agent execution.
Implementations of this interface provide the UI layer for confirmations, selections, and text input. The prompt type is determined by which fields are set in DialogInput:
- Confirm mode: set Confirm=true (yes/no question)
- Select mode: set Options (pick one)
- MultiSelect mode: set Options and MultiSelect=true (pick many)
- Input mode: none of the above (free-form text)
Example implementation for auto-approve:
type AutoApproveDialog struct{}
func (d *AutoApproveDialog) Show(ctx context.Context, in *DialogInput) (*DialogOutput, error) {
if in.Confirm {
return &DialogOutput{Confirmed: true}, nil
}
if len(in.Options) > 0 {
return &DialogOutput{Values: []string{in.Options[0].Value}}, nil
}
return &DialogOutput{Text: in.Default}, nil
}
type DialogInput ¶ added in v1.0.0
type DialogInput struct {
// Title is a short heading for the dialog.
Title string
// Message provides additional context or instructions.
Message string
// Confirm indicates this is a yes/no confirmation dialog.
// When true, the response's Confirmed field contains the answer.
Confirm bool
// Options provides choices for selection dialogs.
// When non-empty, the response's Values field contains selected option values.
Options []DialogOption
// MultiSelect allows selecting multiple options.
// Only applies when Options is non-empty.
MultiSelect bool
// Default is the pre-selected value.
// Type depends on mode: bool (confirm), string (select/input), []string (multi-select).
Default string
// Validate is an optional validation function for text input.
// Return an error to reject the input with a message.
Validate func(string) error
// Tool is the tool requesting this dialog (optional, for context).
Tool Tool
// Call is the specific tool invocation (optional, for context).
Call *llm.ToolUseContent
}
DialogInput describes what to present to the user.
type DialogOption ¶ added in v1.0.0
type DialogOption struct {
// Value is the machine-readable identifier returned when selected.
Value string
// Label is the human-readable text displayed to the user.
Label string
// Description provides additional context about this option.
Description string
}
DialogOption represents a selectable choice.
type DialogOutput ¶ added in v1.0.0
type DialogOutput struct {
// Confirmed is the answer for Confirm mode dialogs.
Confirmed bool
// Values contains selected option values.
// For single-select, this has one element.
// For multi-select, this may have zero or more elements.
Values []string
// Text is the entered text for Input mode dialogs.
Text string
// Canceled indicates the user dismissed the dialog without responding.
Canceled bool
// AllowSession indicates the user wants to allow all similar actions this session.
AllowSession bool
// Feedback contains user-provided text when denying (e.g., "try a different approach").
Feedback string
}
DialogOutput contains the user's response.
type EventCallback ¶
type EventCallback func(ctx context.Context, item *ResponseItem) error
EventCallback is a function called with each item produced while an agent is using tools or generating a response.
type FuncToolOption ¶ added in v1.0.0
type FuncToolOption interface {
// contains filtered or unexported methods
}
FuncToolOption configures a FuncTool.
func WithFuncToolAnnotations ¶ added in v1.0.0
func WithFuncToolAnnotations(a *ToolAnnotations) FuncToolOption
WithFuncToolAnnotations sets annotations on a FuncTool.
func WithFuncToolSchema ¶ added in v1.0.0
func WithFuncToolSchema(s *Schema) FuncToolOption
WithFuncToolSchema overrides the auto-generated schema.
type HookAbortError ¶ added in v1.0.0
type HookAbortError struct {
Reason string
HookType string // "PreGeneration", "PostGeneration", "PreToolUse", "PostToolUse", "PostToolUseFailure"
HookName string // Optional: name/description of the hook that aborted
Cause error // Optional: underlying error
}
HookAbortError signals that a hook wants to abort generation entirely. When returned from any hook, CreateResponse will abort and return this error. Use this for safety violations, compliance issues, or critical failures.
Regular errors (non-HookAbortError) are handled gracefully:
- PreGeneration: aborts (setup is required)
- PostGeneration: logged only
- PreToolUse: converted to Deny message
- PostToolUse: logged only
- PostToolUseFailure: logged only
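As a sketch, a PreToolUse hook can escalate from a per-call denial to a full abort; the "shell" tool name check here is illustrative:
```go
guard := func(ctx context.Context, hctx *dive.HookContext) error {
	if hctx.Tool.Name() == "shell" {
		// A plain error would only deny this one tool call;
		// AbortGeneration stops the entire generation.
		return dive.AbortGeneration("shell access is not permitted")
	}
	return nil
}

agent, _ := dive.NewAgent(dive.AgentOptions{
	Model: model,
	Hooks: dive.Hooks{PreToolUse: []dive.PreToolUseHook{guard}},
})
```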
func (*HookAbortError) Error ¶ added in v1.0.0
func (e *HookAbortError) Error() string
func (*HookAbortError) Unwrap ¶ added in v1.0.0
func (e *HookAbortError) Unwrap() error
type HookContext ¶ added in v1.0.0
type HookContext struct {
// Agent is the agent running the generation.
Agent *Agent
// Values provides arbitrary storage for hooks to communicate.
// Persists across all phases within one CreateResponse call.
Values map[string]any
// SystemPrompt is the system prompt that will be sent to the LLM.
SystemPrompt string
// Messages contains the conversation history plus new input messages.
Messages []*llm.Message
// Response is the complete Response object returned by CreateResponse.
Response *Response
// OutputMessages contains the messages generated during this response.
OutputMessages []*llm.Message
// Usage contains token usage statistics for this generation.
Usage *llm.Usage
// Tool is the tool being executed.
Tool Tool
// Call contains the tool invocation details including input.
Call *llm.ToolUseContent
// Result contains the tool execution result (PostToolUse/PostToolUseFailure only).
Result *ToolCallResult
// UpdatedInput, when set by a PreToolUse hook, replaces Call.Input before
// the tool is executed. Only the last hook's UpdatedInput takes effect.
UpdatedInput []byte
// AdditionalContext, when set by a hook, is appended as a text content
// block to the tool result message sent to the LLM. This lets hooks
// provide guidance without modifying the tool result itself.
AdditionalContext string
// StopHookActive is true when this stop check was triggered by a
// previous stop hook continuation. Check this to prevent infinite loops.
StopHookActive bool
// Iteration is the zero-based iteration number within the generation loop.
Iteration int
}
HookContext provides mutable access to the generation context. All hook types receive a *HookContext. Fields are populated based on the hook phase:
- PreGeneration: Agent, Values, SystemPrompt, Messages
- PostGeneration: Agent, Values, SystemPrompt, Messages, Response, OutputMessages, Usage
- PreToolUse: Agent, Values, Tool, Call
- PostToolUse: Agent, Values, Tool, Call, Result
- PostToolUseFailure: Agent, Values, Tool, Call, Result
- Stop: Agent, Values, Response, OutputMessages, Usage, StopHookActive
- PreIteration: Agent, Values, SystemPrompt, Messages, Iteration
The Values map allows hooks to communicate with each other by storing arbitrary data that persists across the hook chain within a single CreateResponse call.
func NewHookContext ¶ added in v1.0.0
func NewHookContext() *HookContext
NewHookContext creates a new HookContext with initialized Values map.
type Hooks ¶ added in v1.0.0
type Hooks struct {
// PreGeneration hooks are called before the LLM generation loop.
PreGeneration []PreGenerationHook
// PostGeneration hooks are called after the LLM generation loop completes.
PostGeneration []PostGenerationHook
// PreToolUse hooks are called before each tool execution.
PreToolUse []PreToolUseHook
// PostToolUse hooks are called after each successful tool execution.
PostToolUse []PostToolUseHook
// PostToolUseFailure hooks are called after each failed tool execution.
PostToolUseFailure []PostToolUseFailureHook
// Stop hooks run when the agent is about to finish responding.
// A hook can prevent stopping by returning a StopDecision with Continue: true.
Stop []StopHook
// PreIteration hooks run before each LLM call within the generation loop.
PreIteration []PreIterationHook
}
Hooks groups all agent hook slices.
type ModelSettings ¶ added in v0.0.10
type ModelSettings struct {
Temperature *float64
PresencePenalty *float64
FrequencyPenalty *float64
ParallelToolCalls *bool
Caching *bool
MaxTokens *int
ReasoningBudget *int
ReasoningEffort llm.ReasoningEffort
ToolChoice *llm.ToolChoice
Features []string
RequestHeaders http.Header
MCPServers []llm.MCPServerConfig
}
ModelSettings are used to configure details of the LLM for an Agent.
func (*ModelSettings) Options ¶ added in v1.0.0
func (m *ModelSettings) Options() []llm.Option
Options returns the LLM options corresponding to the model settings.
type PostGenerationHook ¶ added in v1.0.0
type PostGenerationHook func(ctx context.Context, hctx *HookContext) error
PostGenerationHook is called after the LLM generation loop completes.
PostGeneration hooks run in order and can:
- Read hctx.Response to access the complete response
- Read hctx.OutputMessages to access generated messages
- Read hctx.Usage to access token usage statistics
- Read data from hctx.Values stored by earlier hooks
- Perform side effects like logging, saving, or notifications
PostGeneration hook errors are logged but do NOT affect the returned Response. This design ensures that generation results are not lost due to post-processing failures (e.g., if saving to a database fails).
func UsageLogger ¶ added in v1.0.0
func UsageLogger(logFunc func(usage *llm.Usage)) PostGenerationHook
UsageLogger returns a PostGenerationHook that logs token usage after each generation using the provided callback function.
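A sketch wiring UsageLogger into an agent. The field names on llm.Usage are assumed for illustration:
```go
agent, _ := dive.NewAgent(dive.AgentOptions{
	Model: model,
	Hooks: dive.Hooks{
		PostGeneration: []dive.PostGenerationHook{
			dive.UsageLogger(func(usage *llm.Usage) {
				// InputTokens/OutputTokens are illustrative field names.
				log.Printf("tokens: input=%d output=%d", usage.InputTokens, usage.OutputTokens)
			}),
		},
	},
})
```
Because PostGeneration hook errors are only logged, a failure inside the logging callback cannot lose the generation result.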
func UsageLoggerWithSlog ¶ added in v1.0.0
func UsageLoggerWithSlog(logger llm.Logger) PostGenerationHook
UsageLoggerWithSlog returns a PostGenerationHook that logs token usage using an slog.Logger.
type PostToolUseFailureHook ¶ added in v1.0.0
type PostToolUseFailureHook func(ctx context.Context, hctx *HookContext) error
PostToolUseFailureHook is called after a tool call fails.
The hook receives the same context as PostToolUseHook, but fires only when the tool execution returned an error or the result has IsError set. This mirrors Claude Code's separate PostToolUseFailure event.
Hooks can set hctx.AdditionalContext to inject context into the tool result message.
Hook errors are logged but do not affect the tool result.
func MatchToolPostFailure ¶ added in v1.0.0
func MatchToolPostFailure(pattern string, hook PostToolUseFailureHook) PostToolUseFailureHook
MatchToolPostFailure returns a PostToolUseFailureHook that only runs when the tool name matches the given pattern. The pattern is a Go regexp. If the pattern fails to compile, the returned hook always returns the compilation error.
type PostToolUseHook ¶ added in v0.0.12
type PostToolUseHook func(ctx context.Context, hctx *HookContext) error
PostToolUseHook is called after a tool call succeeds.
The hook receives context about the completed tool call including the result. Hooks can modify hctx.Result to transform the tool output before it's sent to the LLM in the next generation iteration.
Hooks can set hctx.AdditionalContext to inject context into the tool result message.
Hook errors are logged but do not affect the tool result.
func MatchToolPost ¶ added in v1.0.0
func MatchToolPost(pattern string, hook PostToolUseHook) PostToolUseHook
MatchToolPost returns a PostToolUseHook that only runs when the tool name matches the given pattern. The pattern is a Go regexp. If the pattern fails to compile, the returned hook always returns the compilation error.
type PreGenerationHook ¶ added in v1.0.0
type PreGenerationHook func(ctx context.Context, hctx *HookContext) error
PreGenerationHook is called before the LLM generation loop begins.
PreGeneration hooks run in order and can:
- Modify hctx.SystemPrompt to customize the system prompt
- Modify hctx.Messages to inject context or load session history
- Store data in hctx.Values for use by later hooks
- Return an error to abort generation entirely
If any PreGeneration hook returns an error, generation is aborted and CreateResponse returns that error. No subsequent hooks are called.
func CompactionHook ¶ added in v1.0.0
func CompactionHook(messageThreshold int, summarizer func(context.Context, []*llm.Message) ([]*llm.Message, error)) PreGenerationHook
CompactionHook returns a PreGenerationHook that triggers context compaction when the message count exceeds the given threshold.
The summarizer function is called when compaction is triggered. It receives the current messages and should return compacted messages. If the summarizer returns an error, the hook returns that error (aborting generation).
func InjectContext ¶ added in v1.0.0
func InjectContext(content ...llm.Content) PreGenerationHook
InjectContext returns a PreGenerationHook that prepends the given content to the conversation as a user message.
This is useful for injecting context that should appear before the user's actual input, such as:
- Relevant documentation or code snippets
- Previous conversation summaries
- Environment information or system state
Example:
agent, _ := dive.NewAgent(dive.AgentOptions{
SystemPrompt: "You are a coding assistant.",
Model: model,
Hooks: dive.Hooks{
PreGeneration: []dive.PreGenerationHook{
dive.InjectContext(
llm.NewTextContent("Current working directory: /home/user/project"),
llm.NewTextContent("Git branch: main"),
),
},
},
})
type PreIterationHook ¶ added in v1.0.0
type PreIterationHook func(ctx context.Context, hctx *HookContext) error
PreIterationHook is called before each LLM call within the generation loop. Use it to modify the system prompt or messages between iterations.
hctx.Iteration provides the zero-based iteration number. Errors abort generation (same as PreGeneration).
type PreToolUseHook ¶ added in v0.0.12
type PreToolUseHook func(ctx context.Context, hctx *HookContext) error
PreToolUseHook is called before a tool is executed.
All hooks run in order. If any hook returns an error, the tool is denied and the error message is sent to the LLM. If all hooks return nil, the tool is executed.
Hooks can set hctx.UpdatedInput to rewrite tool arguments before execution, and hctx.AdditionalContext to inject context into the tool result message.
Error handling:
- nil: no objection (tool runs if all hooks return nil)
- error: deny the tool (error message sent to LLM)
- *HookAbortError: abort generation entirely
func MatchTool ¶ added in v1.0.0
func MatchTool(pattern string, hook PreToolUseHook) PreToolUseHook
MatchTool returns a PreToolUseHook that only runs when the tool name matches the given pattern. The pattern is a Go regexp. If the pattern fails to compile, the returned hook always denies tool execution with the compilation error.
type Response ¶
type Response struct {
// Model represents the model that generated the response
Model string `json:"model,omitempty"`
// Items contains the individual response items including
// messages, tool calls, and tool results.
Items []*ResponseItem `json:"items,omitempty"`
// OutputMessages contains the messages generated during this response.
// This includes assistant messages and tool result messages in the order
// they were produced. Use these messages to continue a multi-turn
// conversation: pass the original input messages plus OutputMessages
// plus a new user message on the next call.
OutputMessages []*llm.Message `json:"output_messages,omitempty"`
// Usage contains token usage information
Usage *llm.Usage `json:"usage,omitempty"`
// CreatedAt is the timestamp when this response was created
CreatedAt time.Time `json:"created_at,omitempty"`
// FinishedAt is the timestamp when this response was completed
FinishedAt *time.Time `json:"finished_at,omitempty"`
}
Response represents the output from an Agent's response generation.
func (*Response) OutputText ¶
func (r *Response) OutputText() string
OutputText returns the text content from the last message in the response. If there are no messages or no text content, returns an empty string.
func (*Response) ToolCallResults ¶
func (r *Response) ToolCallResults() []*ToolCallResult
ToolCallResults returns all tool call results from the response.
type ResponseItem ¶
type ResponseItem struct {
// Type of the response item
Type ResponseItemType `json:"type,omitempty"`
// Event is set if the response item is an event
Event *llm.Event `json:"event,omitempty"`
// Message is set if the response item is a message
Message *llm.Message `json:"message,omitempty"`
// ToolCall is set if the response item is a tool call
ToolCall *llm.ToolUseContent `json:"tool_call,omitempty"`
// ToolCallResult is set if the response item is a tool call result
ToolCallResult *ToolCallResult `json:"tool_call_result,omitempty"`
// Extension holds optional data from experimental packages.
// The concrete type depends on the ResponseItemType.
Extension any `json:"extension,omitempty"`
// Usage contains token usage information, if applicable
Usage *llm.Usage `json:"usage,omitempty"`
}
ResponseItem contains either a message, tool call, tool result, or LLM event. Multiple items may be generated in response to a single prompt.
type ResponseItemType ¶
type ResponseItemType string
ResponseItemType represents the type of response item emitted during response generation.
Response items are delivered via the EventCallback during CreateResponse. They provide real-time visibility into the agent's activity including initialization, messages, tool calls, and streaming events.
const (
	// ResponseItemTypeMessage indicates a complete message is available from the agent.
	// The Message field contains the full assistant message including any tool calls.
	ResponseItemTypeMessage ResponseItemType = "message"
	// ResponseItemTypeToolCall indicates a tool call is about to be executed.
	// The ToolCall field contains the tool name and input parameters.
	ResponseItemTypeToolCall ResponseItemType = "tool_call"
	// ResponseItemTypeToolCallResult indicates a tool call has completed.
	// The ToolCallResult field contains the tool output or error.
	ResponseItemTypeToolCallResult ResponseItemType = "tool_call_result"
	// ResponseItemTypeModelEvent indicates a streaming event from the LLM.
	// The Event field contains the raw LLM event for real-time UI updates.
	ResponseItemTypeModelEvent ResponseItemType = "model_event"
)
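A sketch of an EventCallback that switches on these constants; the Name field on llm.ToolUseContent is assumed for illustration:
```go
callback := func(ctx context.Context, item *dive.ResponseItem) error {
	switch item.Type {
	case dive.ResponseItemTypeToolCall:
		fmt.Println("calling tool:", item.ToolCall.Name) // Name field is assumed
	case dive.ResponseItemTypeToolCallResult:
		fmt.Println("tool call finished")
	case dive.ResponseItemTypeMessage:
		fmt.Println("message complete")
	}
	return nil
}

response, _ := agent.CreateResponse(ctx,
	dive.WithInput("Hello!"),
	dive.WithEventCallback(callback),
)
```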
type SchemaProperty ¶ added in v0.0.12
Type aliases for easy access to schema types used in tool definitions
type SchemaType ¶ added in v0.0.12
type SchemaType = schema.SchemaType
Type aliases for easy access to schema types used in tool definitions
const (
	Object  SchemaType = schema.Object
	Array   SchemaType = schema.Array
	String  SchemaType = schema.String
	Integer SchemaType = schema.Integer
	Number  SchemaType = schema.Number
	Boolean SchemaType = schema.Boolean
	Null    SchemaType = schema.Null
)
SchemaType constants for JSON Schema types
type Session ¶ added in v1.0.0
type Session interface {
// ID returns a unique identifier for this session.
ID() string
// Messages returns the conversation history.
Messages(ctx context.Context) ([]*llm.Message, error)
// SaveTurn persists messages from a single turn.
SaveTurn(ctx context.Context, messages []*llm.Message, usage *llm.Usage) error
}
Session provides persistent conversation state across multiple turns. The agent calls Messages before generation to load history, and SaveTurn after generation to persist new messages.
Agents are stateless by default. Setting Session on AgentOptions or passing WithSession per-call enables automatic history loading and saving.
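A minimal in-memory implementation sketch of the Session interface (MemorySession is illustrative, not part of the package; a real implementation would add locking and durable storage):
```go
type MemorySession struct {
	id       string
	messages []*llm.Message
}

func (s *MemorySession) ID() string { return s.id }

// Messages returns the full conversation history loaded before generation.
func (s *MemorySession) Messages(ctx context.Context) ([]*llm.Message, error) {
	return s.messages, nil
}

// SaveTurn appends the new messages from a completed turn.
func (s *MemorySession) SaveTurn(ctx context.Context, messages []*llm.Message, usage *llm.Usage) error {
	s.messages = append(s.messages, messages...)
	return nil
}
```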
type StopDecision ¶ added in v1.0.0
type StopDecision struct {
// Continue, when true, prevents the agent from stopping.
// The Reason is injected as a user message so the LLM knows
// why it should keep going.
Continue bool
// Reason is required when Continue is true. It's added to the
// conversation as context for the next LLM iteration.
Reason string
}
StopDecision tells the agent what to do after a stop hook runs.
type StopHook ¶ added in v1.0.0
type StopHook func(ctx context.Context, hctx *HookContext) (*StopDecision, error)
StopHook is called when the agent is about to stop responding. A hook can prevent stopping by returning a StopDecision with Continue: true, which injects the Reason as a user message and re-enters the generation loop.
hctx.StopHookActive is true when this stop check was triggered by a previous stop hook continuation. Check this to prevent infinite loops.
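A sketch of a stop hook that asks for one round of self-review before finishing, using StopHookActive to avoid an infinite continuation loop:
```go
review := func(ctx context.Context, hctx *dive.HookContext) (*dive.StopDecision, error) {
	if hctx.StopHookActive {
		// We already continued once; allow the agent to stop now.
		return &dive.StopDecision{}, nil
	}
	return &dive.StopDecision{
		Continue: true,
		Reason:   "Before finishing, review your answer and correct any errors.",
	}, nil
}
```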
type TerminalDialog ¶ added in v1.0.0
type TerminalDialog struct {
// contains filtered or unexported fields
}
TerminalDialog implements Dialog using stdin/stdout.
func NewTerminalDialog ¶ added in v1.0.0
func NewTerminalDialog() *TerminalDialog
NewTerminalDialog creates a Dialog that prompts via stdin/stdout.
func NewTerminalDialogWithOptions ¶ added in v1.0.0
func NewTerminalDialogWithOptions(opts TerminalDialogOptions) *TerminalDialog
NewTerminalDialogWithOptions creates a Dialog with custom input/output.
func (*TerminalDialog) Show ¶ added in v1.0.0
func (d *TerminalDialog) Show(ctx context.Context, in *DialogInput) (*DialogOutput, error)
type TerminalDialogOptions ¶ added in v1.0.0
TerminalDialogOptions configures a TerminalDialog.
type Tool ¶
type Tool interface {
// Name of the tool.
Name() string
// Description of the tool.
Description() string
// Schema describes the parameters used to call the tool.
Schema() *Schema
// Annotations returns optional properties that describe tool behavior.
Annotations() *ToolAnnotations
// Call is the function that is called to use the tool.
Call(ctx context.Context, input any) (*ToolResult, error)
}
Tool is an interface for a tool that can be called by an LLM.
func FuncTool ¶ added in v1.0.0
func FuncTool[T any](name, description string, fn func(ctx context.Context, input T) (*ToolResult, error), opts ...FuncToolOption) Tool
FuncTool creates a Tool from a function with an auto-generated schema.
The schema is generated from the input type T using struct tags. Use json tags for field names, description tags for parameter descriptions, and omitempty to mark optional fields. See schema.Generate for all supported tags.
Example:
type WeatherInput struct {
City string `json:"city" description:"City name"`
Units string `json:"units,omitempty" description:"Temperature units" enum:"celsius,fahrenheit"`
}
weatherTool := dive.FuncTool("get_weather", "Get current weather",
func(ctx context.Context, input *WeatherInput) (*dive.ToolResult, error) {
return dive.NewToolResultText("72°F"), nil
},
)
type ToolAnnotations ¶
type ToolAnnotations struct {
Title string `json:"title,omitempty"`
ReadOnlyHint bool `json:"readOnlyHint,omitempty"`
DestructiveHint bool `json:"destructiveHint,omitempty"`
IdempotentHint bool `json:"idempotentHint,omitempty"`
OpenWorldHint bool `json:"openWorldHint,omitempty"`
EditHint bool `json:"editHint,omitempty"` // Indicates file edit operations for acceptEdits mode
Extra map[string]any `json:"extra,omitempty"`
}
ToolAnnotations contains optional metadata hints that describe a tool's behavior. These hints help agents and permission systems make decisions about tool usage.
func (*ToolAnnotations) MarshalJSON ¶
func (a *ToolAnnotations) MarshalJSON() ([]byte, error)
func (*ToolAnnotations) UnmarshalJSON ¶
func (a *ToolAnnotations) UnmarshalJSON(data []byte) error
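As a sketch, annotations for a hypothetical web-fetch tool might be populated like this (the tool and its hints are illustrative):

```go
// Hints for a hypothetical fetch tool: it reads external data but
// never modifies local state.
annotations := &dive.ToolAnnotations{
	Title:          "Fetch URL",
	ReadOnlyHint:   true, // no local mutations
	IdempotentHint: true, // repeated calls with the same input are safe
	OpenWorldHint:  true, // talks to external systems
}
```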
type ToolCallPreview ¶ added in v0.0.12
type ToolCallPreview struct {
	// Summary is a short description of what the tool will do, e.g., "Fetch https://example.com"
	Summary string `json:"summary"`
	// Details is optional longer markdown with more context about the operation.
	Details string `json:"details,omitempty"`
}
ToolCallPreview contains human-readable information about a pending tool call.
type ToolCallResult ¶
type ToolCallResult struct {
	ID                string
	Name              string
	Input             any
	Preview           *ToolCallPreview // Preview generated before execution (if tool implements ToolPreviewer)
	Result            *ToolResult      // Protocol-level result sent to the LLM
	Error             error            // Go error if tool.Call() itself failed
	AdditionalContext string           // Context injected by hooks, appended to the tool result message
}
ToolCallResult is a tool call that has been made. This is used to understand what calls have happened during an LLM interaction.
Error and Result.IsError track different failure modes:
- Error is a Go error from tool.Call() — the tool itself crashed or failed unexpectedly.
- Result.IsError is a protocol-level flag — the tool ran but reported a failure to the LLM (e.g. via NewToolResultError). Both are surfaced to the LLM as an error result.
type ToolPreviewer ¶ added in v0.0.12
type ToolPreviewer interface {
	// PreviewCall returns a markdown description of what the tool will do
	// given the input. The input is the same type passed to Call().
	PreviewCall(ctx context.Context, input any) *ToolCallPreview
}
ToolPreviewer is an optional interface that tools can implement to provide human-readable previews of what they will do before execution.
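A sketch of implementing ToolPreviewer on a hypothetical fetch tool (FetchTool and its input shape are assumptions):

```go
// PreviewCall describes the pending operation before execution.
func (t *FetchTool) PreviewCall(ctx context.Context, input any) *dive.ToolCallPreview {
	params, _ := input.(map[string]any)
	url, _ := params["url"].(string)
	return &dive.ToolCallPreview{
		Summary: fmt.Sprintf("Fetch %s", url),
		Details: "Performs an HTTP GET and returns the response body.",
	}
}
```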
type ToolResult ¶
type ToolResult struct {
	// Content is the tool output sent to the LLM.
	Content []*ToolResultContent `json:"content"`
	// Display is an optional human-readable markdown summary of the result.
	// If empty, consumers should fall back to Content for display.
	Display string `json:"display,omitempty"`
	// IsError indicates whether the tool call resulted in an error.
	IsError bool `json:"isError,omitempty"`
}
ToolResult is the output from a tool call.
func NewToolResult ¶
func NewToolResult(content ...*ToolResultContent) *ToolResult
NewToolResult creates a new ToolResult with the given content.
func NewToolResultError ¶
func NewToolResultError(text string) *ToolResult
NewToolResultError creates a new ToolResult containing an error message.
func NewToolResultText ¶
func NewToolResultText(text string) *ToolResult
NewToolResultText creates a new ToolResult with the given text content.
func (*ToolResult) WithDisplay ¶ added in v0.0.12
func (r *ToolResult) WithDisplay(display string) *ToolResult
WithDisplay sets the Display field and returns the receiver for chaining.
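A sketch of the constructors in combination (the content strings are illustrative):

```go
// Success: protocol-level text for the LLM plus a markdown display string.
result := dive.NewToolResultText("status=200 bytes=5321").
	WithDisplay("Fetched **https://example.com** (200 OK)")

// Failure reported to the LLM (IsError is set). No Go error is returned,
// so the agent loop continues and the model can react to the message.
failure := dive.NewToolResultError("request timed out after 30s")
```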
type ToolResultContent ¶
type ToolResultContent struct {
	Type        ToolResultContentType `json:"type"`
	Text        string                `json:"text,omitempty"`
	Data        string                `json:"data,omitempty"`
	MimeType    string                `json:"mimeType,omitempty"`
	Annotations map[string]any        `json:"annotations,omitempty"`
}
ToolResultContent is a single content block within a tool result, such as text output, an image, or audio data.
type ToolResultContentType ¶
type ToolResultContentType string
ToolResultContentType indicates the media type of a tool result content block.
const (
	ToolResultContentTypeText  ToolResultContentType = "text"
	ToolResultContentTypeImage ToolResultContentType = "image"
	ToolResultContentTypeAudio ToolResultContentType = "audio"
)
func (ToolResultContentType) String ¶
func (t ToolResultContentType) String() string
type Toolset ¶ added in v1.0.0
type Toolset interface {
	// Name identifies this toolset for logging and debugging.
	Name() string
	// Tools returns the tools available in the current context.
	// Called before each LLM request. Implementations should cache tool
	// instances and avoid re-creating them on every call.
	Tools(ctx context.Context) ([]Tool, error)
}
Toolset provides dynamic tool resolution. Tools() is called before each LLM request, allowing the available tools to vary based on runtime context. Use toolsets for MCP servers, permission-filtered tools, or context-dependent tool availability.
type ToolsetFunc ¶ added in v1.0.0
type ToolsetFunc struct {
	// ToolsetName identifies this toolset.
	ToolsetName string
	// Resolve returns the tools for the current context.
	Resolve func(ctx context.Context) ([]Tool, error)
}
ToolsetFunc adapts a function into a Toolset.
func (*ToolsetFunc) Name ¶ added in v1.0.0
func (f *ToolsetFunc) Name() string
Name returns the toolset name.
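A sketch of ToolsetFunc for permission-filtered tools; readTool, writeTool, and isAdmin are hypothetical:

```go
toolset := &dive.ToolsetFunc{
	ToolsetName: "filesystem",
	Resolve: func(ctx context.Context) ([]dive.Tool, error) {
		// Resolve is consulted before each LLM request, so the
		// available tools can change between iterations.
		tools := []dive.Tool{readTool}
		if isAdmin(ctx) {
			tools = append(tools, writeTool)
		}
		return tools, nil
	},
}
```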
type TypedTool ¶
type TypedTool[T any] interface {
	// Name of the tool.
	Name() string
	// Description of the tool.
	Description() string
	// Schema describes the parameters used to call the tool.
	Schema() *Schema
	// Annotations returns optional properties that describe tool behavior.
	Annotations() *ToolAnnotations
	// Call is the function that is called to use the tool.
	Call(ctx context.Context, input T) (*ToolResult, error)
}
TypedTool is a tool that can be called with a specific type of input.
type TypedToolAdapter ¶
type TypedToolAdapter[T any] struct {
	// contains filtered or unexported fields
}
TypedToolAdapter is an adapter that allows a TypedTool to be used as a regular Tool. Specifically, its Call method accepts `input any`, internally unmarshals the input to the concrete type T, and passes it to the underlying TypedTool.
func ToolAdapter ¶
func ToolAdapter[T any](tool TypedTool[T]) *TypedToolAdapter[T]
ToolAdapter creates a new TypedToolAdapter for the given tool.
func (*TypedToolAdapter[T]) Annotations ¶
func (t *TypedToolAdapter[T]) Annotations() *ToolAnnotations
func (*TypedToolAdapter[T]) Call ¶
func (t *TypedToolAdapter[T]) Call(ctx context.Context, input any) (*ToolResult, error)
func (*TypedToolAdapter[T]) Description ¶
func (t *TypedToolAdapter[T]) Description() string
func (*TypedToolAdapter[T]) Name ¶
func (t *TypedToolAdapter[T]) Name() string
func (*TypedToolAdapter[T]) PreviewCall ¶ added in v0.0.12
func (t *TypedToolAdapter[T]) PreviewCall(ctx context.Context, input any) *ToolCallPreview
PreviewCall implements ToolPreviewer by delegating to the underlying TypedTool if it implements TypedToolPreviewer[T].
func (*TypedToolAdapter[T]) Schema ¶
func (t *TypedToolAdapter[T]) Schema() *Schema
func (*TypedToolAdapter[T]) ToolConfiguration ¶
func (t *TypedToolAdapter[T]) ToolConfiguration(providerName string) map[string]any
ToolConfiguration delegates to the underlying tool's ToolConfiguration method if it implements the llm.ToolConfiguration interface.
func (*TypedToolAdapter[T]) Unwrap ¶
func (t *TypedToolAdapter[T]) Unwrap() TypedTool[T]
Unwrap returns the underlying TypedTool.
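A sketch of wrapping a typed tool so it satisfies the plain Tool interface; SearchTool is a hypothetical TypedTool implementation:

```go
type SearchInput struct {
	Query string `json:"query" description:"Search query"`
}

// SearchTool is a hypothetical TypedTool[*SearchInput] implementation.
var typed dive.TypedTool[*SearchInput] = &SearchTool{}

// The adapter unmarshals each incoming `input any` into *SearchInput
// before delegating, so the result can be used anywhere a Tool is expected.
var tool dive.Tool = dive.ToolAdapter(typed)
```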
type TypedToolPreviewer ¶ added in v0.0.12
type TypedToolPreviewer[T any] interface { // PreviewCall returns a markdown description of what the tool will do. PreviewCall(ctx context.Context, input T) *ToolCallPreview }
TypedToolPreviewer is an optional interface that typed tools can implement to provide human-readable previews with typed input.
type UserFeedbackError ¶ added in v1.0.0
type UserFeedbackError struct {
	Feedback string
}
UserFeedbackError wraps user-provided feedback when they deny a tool call. This allows distinguishing user feedback from actual errors.
func (*UserFeedbackError) Error ¶ added in v1.0.0
func (e *UserFeedbackError) Error() string
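A sketch of distinguishing denial feedback from real failures; err is assumed to come from the agent loop or a post-tool-use hook:

```go
var fb *dive.UserFeedbackError
if errors.As(err, &fb) {
	// The user denied the tool call and left feedback; treat it as
	// guidance for the LLM rather than a failure.
	log.Printf("denied: %s", fb.Feedback)
}

// Equivalent package helper:
if feedback, ok := dive.IsUserFeedback(err); ok {
	log.Printf("denied: %s", feedback)
}
```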
Source Files
¶
Directories
¶
| Path | Synopsis |
|---|---|
| experimental | |
| sandbox/proxy | Package proxy provides a simple HTTP/HTTPS proxy that enforces a domain allowlist. |
| skill | Package skill provides support for Claude-compatible Agent Skills. |
| slashcmd | Package slashcmd provides support for Claude-compatible slash commands. |
| subagent | Package subagent provides subagent management for Dive agents. |
| toolkit/extended | Package extended provides extended tools for AI agents. |
| toolkit/firecrawl | Package firecrawl provides a client for interacting with the Firecrawl API. |
| llm | Package llm defines the unified abstraction layer over different LLM providers. |
| permission | Package permission provides tool permission management for Dive agents. |
| providers | Package providers contains the LLM provider registry and shared error types. |
| google | (module) |
| openai | (module) |
| session | Package session provides persistent conversation state for Dive agents. |
| toolkit | Package toolkit provides a collection of tools for AI agents to interact with the local filesystem, execute shell commands, search the web, and communicate with users. |