Documentation
Index
- func CompactionConvertToLLM(msgs []agentcore.AgentMessage) []agentcore.Message
- func ContextEstimateAdapter(msgs []agentcore.AgentMessage) (tokens, usageTokens, trailingTokens int)
- func EstimateTokens(msg agentcore.AgentMessage) int
- func EstimateTotal(msgs []agentcore.AgentMessage) int
- func NewCompaction(cfg CompactionConfig) ...
- type CompactionConfig
- type CompactionInfo
- type CompactionSummary
- type ContextUsageEstimate
Constants
This section is empty.
Variables
This section is empty.
Functions
func CompactionConvertToLLM
func CompactionConvertToLLM(msgs []agentcore.AgentMessage) []agentcore.Message
CompactionConvertToLLM converts AgentMessages to LLM Messages, handling CompactionSummary by wrapping it as a user message with XML tags. For all other message types, it delegates to DefaultConvertToLLM behavior.
func ContextEstimateAdapter
func ContextEstimateAdapter(msgs []agentcore.AgentMessage) (tokens, usageTokens, trailingTokens int)
ContextEstimateAdapter adapts EstimateContextTokens to the agentcore.ContextEstimateFn signature.
func EstimateTokens
func EstimateTokens(msg agentcore.AgentMessage) int
EstimateTokens estimates the token count for a single message. Uses chars/4 approximation (conservative overestimate).
func EstimateTotal
func EstimateTotal(msgs []agentcore.AgentMessage) int
EstimateTotal estimates the total token count for a message list.
func NewCompaction
func NewCompaction(cfg CompactionConfig) func(context.Context, []agentcore.AgentMessage) ([]agentcore.AgentMessage, error)
NewCompaction returns a TransformContext function that automatically compacts the message history when context tokens approach the window limit.
Usage:

	agent := agentcore.NewAgent(
		agentcore.WithTransformContext(memory.NewCompaction(memory.CompactionConfig{
			Model:         model,
			ContextWindow: 128000,
		})),
		agentcore.WithConvertToLLM(memory.CompactionConvertToLLM),
	)
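The point at which compaction fires follows from ContextWindow and ReserveTokens. A minimal sketch of the implied trigger arithmetic (the function name and exact condition here are assumptions for illustration, not the package's API):

```go
package main

import "fmt"

// shouldCompact sketches the trigger implied by CompactionConfig:
// compaction is warranted once the estimated context tokens exceed the
// window minus the headroom reserved for the LLM response. The package's
// real condition may differ.
func shouldCompact(estimatedTokens, contextWindow, reserveTokens int) bool {
	return estimatedTokens > contextWindow-reserveTokens
}

func main() {
	// A 128000-token window with the default 16384 reserve gives an
	// effective threshold of 111616 tokens.
	fmt.Println(shouldCompact(100000, 128000, 16384)) // false: under threshold
	fmt.Println(shouldCompact(120000, 128000, 16384)) // true: over threshold
}
```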
Types
type CompactionConfig
type CompactionConfig struct {
	// Model is the ChatModel used for generating summaries.
	// Typically the same model the agent uses.
	Model agentcore.ChatModel
	// ContextWindow is the model's context window size in tokens.
	// Required; there is no default.
	ContextWindow int
	// ReserveTokens is the token headroom reserved for the LLM response.
	// Default: 16384.
	ReserveTokens int
	// KeepRecentTokens is the minimum number of recent tokens to always keep.
	// Default: 20000.
	KeepRecentTokens int
	// OnCompaction is called after a successful compaction with details.
	// Optional; nil means no callback.
	OnCompaction func(CompactionInfo)
}
CompactionConfig configures automatic context compaction.
type CompactionInfo (added in v1.5.7)
type CompactionInfo struct {
	TokensBefore   int
	TokensAfter    int
	MessagesBefore int
	MessagesAfter  int
	CompactedCount int           // number of messages summarized
	KeptCount      int           // number of messages retained verbatim
	IsSplitTurn    bool
	IsIncremental  bool          // updated an existing summary
	SummaryLen     int           // summary length in runes
	Duration       time.Duration // wall time including LLM calls
}
CompactionInfo holds details about a completed compaction for observability.
type CompactionSummary
type CompactionSummary struct {
	Summary       string
	TokensBefore  int
	ReadFiles     []string
	ModifiedFiles []string
	Timestamp     time.Time
}
CompactionSummary is a compacted context summary message. It implements AgentMessage but is NOT a Message, so DefaultConvertToLLM will filter it out. Use CompactionConvertToLLM to handle it.
func (CompactionSummary) GetRole
func (c CompactionSummary) GetRole() agentcore.Role
func (CompactionSummary) GetTimestamp
func (c CompactionSummary) GetTimestamp() time.Time
func (CompactionSummary) HasToolCalls
func (c CompactionSummary) HasToolCalls() bool
func (CompactionSummary) TextContent
func (c CompactionSummary) TextContent() string
func (CompactionSummary) ThinkingContent
func (c CompactionSummary) ThinkingContent() string
type ContextUsageEstimate
type ContextUsageEstimate struct {
	// Tokens is the total estimated context tokens (UsageTokens + TrailingTokens).
	Tokens int
	// UsageTokens is the token count derived from the last LLM-reported Usage.
	UsageTokens int
	// TrailingTokens is the chars/4 estimate for messages after the last Usage.
	TrailingTokens int
	// LastUsageIndex is the index of the last assistant message with Usage, or -1 if none.
	LastUsageIndex int
}
ContextUsageEstimate holds the hybrid token estimation result. It combines LLM-reported Usage data with chars/4 estimation for trailing messages.
func EstimateContextTokens
func EstimateContextTokens(msgs []agentcore.AgentMessage) ContextUsageEstimate
EstimateContextTokens uses a hybrid approach: actual Usage from the last non-aborted assistant message, plus chars/4 estimates for trailing messages. This approximates the current context window occupancy.