Documentation ¶
Index ¶
- Variables
- func BuildTools(cfg *config.Config, translator *i18n.Translator) []openrouter.Tool
- func FindMatchingBrace(text string, startIdx int) int
- func SanitizeLLMResponse(text string) (string, bool)
- type ContextData
- type Laplace
- func (l *Laplace) BuildMessages(ctx context.Context, contextData *ContextData, currentMessageContent string, ...) []openrouter.Message
- func (l *Laplace) Execute(ctx context.Context, req *Request, toolHandler ToolHandler) (*Response, error)
- func (l *Laplace) LoadContextData(ctx context.Context, userID int64, rawQuery string, ...) (*ContextData, error)
- func (l *Laplace) LogExecution(ctx context.Context, userID int64, resp *Response, cost float64)
- func (l *Laplace) SetAgentLogger(logger *agentlog.Logger)
- func (l *Laplace) Type() agent.AgentType
- type Request
- type Response
- type ToolCallContext
- type ToolHandler
- type ToolResult
Constants ¶
This section is empty.
Variables ¶
var HallucinationTags = []string{
	"</tool_code>",
	"</tool_call>",
	"</s>",
	"<|endoftext|>",
	"<|end|>",
	"default_api:",
}
HallucinationTags contains known hallucination markers that should be removed from LLM output.
Functions ¶
func BuildTools ¶
func BuildTools(cfg *config.Config, translator *i18n.Translator) []openrouter.Tool
BuildTools creates OpenRouter tool definitions from config.
func FindMatchingBrace ¶
func FindMatchingBrace(text string, startIdx int) int
FindMatchingBrace finds the index of the closing brace matching the opening brace at startIdx. Returns -1 if no matching brace is found.
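The package's actual implementation is not shown; a minimal sketch of depth-counted brace matching follows. The handling of braces inside JSON string literals is an assumption, not documented behavior.

```go
package main

import "fmt"

// findMatchingBrace is a sketch of depth-counted brace matching; the real
// FindMatchingBrace may differ (e.g. in how it treats braces inside strings).
func findMatchingBrace(text string, startIdx int) int {
	if startIdx < 0 || startIdx >= len(text) || text[startIdx] != '{' {
		return -1
	}
	depth := 0
	inString := false
	for i := startIdx; i < len(text); i++ {
		c := text[i]
		if inString {
			if c == '\\' {
				i++ // skip the escaped character
			} else if c == '"' {
				inString = false
			}
			continue
		}
		switch c {
		case '"':
			inString = true
		case '{':
			depth++
		case '}':
			depth--
			if depth == 0 {
				return i
			}
		}
	}
	return -1 // unbalanced input: no matching close
}

func main() {
	fmt.Println(findMatchingBrace(`{"a":{"b":1}}`, 0)) // 12
	fmt.Println(findMatchingBrace(`{"open": true`, 0)) // -1
}
```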
func SanitizeLLMResponse ¶
func SanitizeLLMResponse(text string) (string, bool)
SanitizeLLMResponse removes hallucination artifacts from an LLM response. It returns the sanitized text and a boolean indicating whether any sanitization occurred. An empty result is a legitimate outcome: the caller substitutes the localized bot.empty_response string when this function returns "".
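The cleanup can be approximated with a short sketch that strips every HallucinationTags marker and trims the result. This is one plausible behavior under that assumption; the package's real function may apply additional rules, and note the empty-string case the doc comment calls out.

```go
package main

import (
	"fmt"
	"strings"
)

// Same markers as the package's HallucinationTags variable.
var hallucinationTags = []string{
	"</tool_code>", "</tool_call>", "</s>", "<|endoftext|>", "<|end|>", "default_api:",
}

// sanitizeLLMResponse is a minimal sketch: strip every known marker, then
// trim whitespace. The returned bool reports whether the text changed.
func sanitizeLLMResponse(text string) (string, bool) {
	out := text
	for _, tag := range hallucinationTags {
		out = strings.ReplaceAll(out, tag, "")
	}
	out = strings.TrimSpace(out)
	return out, out != text
}

func main() {
	clean, changed := sanitizeLLMResponse("Hello there.</s><|endoftext|>")
	fmt.Printf("%q %v\n", clean, changed) // "Hello there." true

	// A response that was nothing but a stop-token leak sanitizes to "":
	// the caller would then fall back to the localized bot.empty_response.
	empty, _ := sanitizeLLMResponse("</s>")
	fmt.Printf("%q\n", empty) // ""
}
```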
Types ¶
type ContextData ¶
type ContextData struct {
	// User identification
	UserID int64 // v0.6.0: User ID for artifact loading

	// System prompt components
	BaseSystemPrompt string
	ProfileFacts     string
	RecentTopics     string
	InnerCircle      string // v0.5.1: People from Work_Inner + Family circles

	// Session history
	RecentHistory []storage.Message

	// RAG results
	RAGResults          []rag.TopicSearchResult
	ArtifactResults     []rag.ArtifactResult // v0.6.0: Artifact summary matches
	SelectedArtifactIDs []int64              // v0.6.0: Artifact IDs selected by reranker for full content loading
	RAGInfo             *rag.RetrievalDebugInfo
	RelevantPeople      []storage.Person // v0.5.1: People selected by reranker
}
ContextData contains the pre-built context for the LLM.
type Laplace ¶
type Laplace struct {
	// contains filtered or unexported fields
}
Laplace is the main chat agent that handles user conversations.
func New ¶
func New(
	cfg *config.Config,
	orClient openrouter.Client,
	ragService rag.Retriever,
	msgRepo storage.MessageRepository,
	factRepo storage.FactRepository,
	artifactRepo storage.ArtifactRepository,
	translator *i18n.Translator,
	logger *slog.Logger,
) *Laplace
New creates a new Laplace agent.
func (*Laplace) BuildMessages ¶
func (l *Laplace) BuildMessages(
	ctx context.Context,
	contextData *ContextData,
	currentMessageContent string,
	currentMessageParts []interface{},
	enrichedQuery string,
) []openrouter.Message
BuildMessages assembles OpenRouter messages from context data.
func (*Laplace) Execute ¶
func (l *Laplace) Execute(ctx context.Context, req *Request, toolHandler ToolHandler) (*Response, error)
Execute runs the main chat loop with tool calls.
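Execute's source is not shown here; the sketch below illustrates only the general shape of such a loop (ask the model, run any requested tools, feed results back, stop on a plain answer or a turn cap). The local types and the turn cap are hypothetical stand-ins, not the real openrouter or Request/Response types.

```go
package main

import "fmt"

// Hypothetical stand-ins for the real types, for illustration only.
type toolCall struct{ Name, Arguments string }
type llmTurn struct {
	Content   string
	ToolCalls []toolCall
}
type toolFunc func(name, args string) (string, error)

// runChatLoop queries the model, executes requested tools, appends their
// results to the history, and stops when the model answers in plain text
// or the turn cap is reached.
func runChatLoop(model func(history []string) llmTurn, tools toolFunc, maxTurns int) (string, int) {
	var history []string
	for turn := 1; turn <= maxTurns; turn++ {
		reply := model(history)
		if len(reply.ToolCalls) == 0 {
			return reply.Content, turn // final answer: no more tool calls
		}
		for _, tc := range reply.ToolCalls {
			result, err := tools(tc.Name, tc.Arguments)
			if err != nil {
				result = "tool error: " + err.Error() // errors are fed back, not fatal
			}
			history = append(history, result) // tool output becomes context for the next turn
		}
	}
	return "", maxTurns
}

func main() {
	// Fake model: first turn requests a tool, second turn answers.
	model := func(history []string) llmTurn {
		if len(history) == 0 {
			return llmTurn{ToolCalls: []toolCall{{Name: "get_time", Arguments: "{}"}}}
		}
		return llmTurn{Content: "It is noon."}
	}
	tools := func(name, args string) (string, error) { return "12:00", nil }
	answer, turns := runChatLoop(model, tools, 5)
	fmt.Println(answer, turns) // It is noon. 2
}
```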
func (*Laplace) LoadContextData ¶
func (l *Laplace) LoadContextData(
	ctx context.Context,
	userID int64,
	rawQuery string,
	currentMessageParts []interface{},
) (*ContextData, error)
LoadContextData loads all context data needed for LLM generation.
func (*Laplace) LogExecution ¶
func (l *Laplace) LogExecution(ctx context.Context, userID int64, resp *Response, cost float64)
LogExecution logs the execution to the agent logger.
func (*Laplace) SetAgentLogger ¶
func (l *Laplace) SetAgentLogger(logger *agentlog.Logger)
SetAgentLogger sets the agent logger for debug logging.
func (*Laplace) Type ¶
func (l *Laplace) Type() agent.AgentType
Type returns the agent type.
type Request ¶
type Request struct {
	UserID int64

	// Message content
	HistoryContent      string        // Full message content for history storage
	RawQuery            string        // Raw text for RAG query
	CurrentMessageParts []interface{} // Multimodal parts (text, images, audio)

	// Telegram context (for intermediate messages)
	ChatID          int64
	MessageThreadID int
	ReplyToMsgID    int

	// Callbacks for Telegram actions
	OnIntermediateMessage func(text string) // Called when tool call has intermediate text
	OnTypingAction        func()            // Called before tool execution
}
Request contains all inputs for Laplace agent execution.
type Response ¶
type Response struct {
	// Main response
	Content string

	// GeneratedArtifactIDs accumulates artifact IDs produced by tools during
	// this Execute call (e.g. generate_image). The orchestrator uses them to
	// attach photos to the Telegram reply. Empty for text-only responses.
	GeneratedArtifactIDs []int64

	// Error (non-nil if execution failed after partial completion).
	// When set, Content may be empty or partial, but other fields
	// (tokens, timing, turns) are valid.
	Error error

	// Token usage
	PromptTokens     int
	CompletionTokens int
	TotalCost        *float64

	// Timing
	LLMDuration  time.Duration
	ToolDuration time.Duration
	TotalTurns   int

	// RAG info for logging
	RAGInfo *rag.RetrievalDebugInfo

	// Debug info
	Messages          []openrouter.Message // Full conversation for logging
	ConversationTurns *agentlog.ConversationTurns
}
Response contains the result of Laplace agent execution.
type ToolCallContext ¶ added in v0.8.0
type ToolCallContext struct {
	UserID               int64
	CurrentMessageImages []openrouter.FilePart
}
ToolCallContext carries execution context for a tool call: the owning user and any image parts from the current user message (so tools that edit/combine images — e.g. generate_image — can access attached photos automatically).
type ToolHandler ¶
type ToolHandler interface {
	// ExecuteToolCall executes a single tool call and returns the result.
	ExecuteToolCall(ctx context.Context, tcc ToolCallContext, toolName, arguments string) (*ToolResult, error)
}
ToolHandler defines the interface for executing tool calls. This allows the bot package to provide Telegram-aware implementations.
type ToolResult ¶ added in v0.8.0
type ToolResult struct {
	Content              string
	GeneratedArtifactIDs []int64
}
ToolResult is the richer return type for tool execution. Content is what gets fed back to the LLM; GeneratedArtifactIDs is a side-channel carrying artifact IDs produced during the call (e.g. generated images), which the orchestrator collects to attach to the final user reply.