Documentation ¶
Overview ¶
Package compaction provides context window management with token estimation, compaction triggers, and LLM-based conversation summarization.
The algorithm preserves recent messages up to a token budget (KeepRecentTokens, default 20,000) rather than a fixed message count. Auto-compaction fires when estimated context usage exceeds contextWindow − ReserveTokens.
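The trigger condition can be sketched as a one-line comparison. This is an illustrative stand-alone version, not the package's source; the function and parameter names here are local stand-ins:

```go
package main

import "fmt"

// shouldCompact mirrors the documented trigger: compaction fires once
// estimated usage exceeds contextWindow - reserveTokens.
func shouldCompact(estimatedTokens, contextWindow, reserveTokens int) bool {
	return estimatedTokens > contextWindow-reserveTokens
}

func main() {
	// With a 131072-token window and 16384 reserved, the threshold is 114688.
	fmt.Println(shouldCompact(100000, 131072, 16384)) // false
	fmt.Println(shouldCompact(120000, 131072, 16384)) // true
}
```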
Index ¶
- func EstimateMessageTokens(messages []fantasy.Message) int
- func EstimateTokens(text string) int
- func FindCutPoint(messages []fantasy.Message, keepRecentTokens int) int
- func ShouldCompact(messages []fantasy.Message, contextWindow int, reserveTokens int) bool
- type CompactionOptions
- type CompactionResult
-   - func Compact(ctx context.Context, model fantasy.LanguageModel, messages []fantasy.Message, opts CompactionOptions, customInstructions string) (*CompactionResult, []fantasy.Message, error)
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func EstimateMessageTokens ¶
EstimateMessageTokens estimates total tokens across a slice of fantasy messages by summing the estimated tokens for every text part.
func EstimateTokens ¶
EstimateTokens provides a rough token count for text, assuming roughly 4 characters per token.
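A plausible implementation of the 4-characters-per-token heuristic is a single division. This is a sketch; the real function's rounding behaviour (floor vs. ceiling) is an assumption:

```go
package main

import "fmt"

// estimateTokens approximates token count at ~4 characters per token,
// rounding up so short non-empty strings count as at least one token.
func estimateTokens(text string) int {
	return (len(text) + 3) / 4 // ceiling division
}

func main() {
	fmt.Println(estimateTokens("hello world")) // 11 chars -> 3
}
```

Character-based estimates like this over-count for dense text and under-count for code-heavy text, which is why the package treats them as estimates for budgeting rather than exact counts.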
func FindCutPoint ¶
FindCutPoint walks backward from the end of messages, accumulating tokens until the keepRecentTokens budget is filled. Returns the index that separates "old" messages (0..cutPoint-1, to be summarised) from "recent" messages (cutPoint..end, to be preserved).
Returns 0 if there are fewer than 2 messages or if all messages fit within the keep budget.
func ShouldCompact ¶
ShouldCompact reports whether the conversation should be compacted: it returns true when the estimated token count of messages exceeds contextWindow − reserveTokens.
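The backward walk in FindCutPoint can be sketched as follows. For brevity this version takes per-message token counts directly instead of fantasy.Message values, and the exact boundary comparison (strictly greater vs. greater-or-equal) is an assumption:

```go
package main

import "fmt"

// findCutPoint walks backward from the newest message, accumulating
// estimated tokens until keepRecentTokens is filled, and returns the index
// of the first "recent" message. tokens[i] is the estimated size of message i.
func findCutPoint(tokens []int, keepRecentTokens int) int {
	if len(tokens) < 2 {
		return 0
	}
	budget := 0
	for i := len(tokens) - 1; i >= 0; i-- {
		budget += tokens[i]
		if budget > keepRecentTokens {
			return i + 1 // message i overflows the budget; keep i+1..end
		}
	}
	return 0 // everything fits within the keep budget
}

func main() {
	tokens := []int{5000, 8000, 6000, 4000}
	// The last two messages (6000+4000 = 10000) fit the budget exactly,
	// so the cut point lands at index 2.
	fmt.Println(findCutPoint(tokens, 10000)) // 2
}
```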
Types ¶
type CompactionOptions ¶
type CompactionOptions struct {
ContextWindow int // Model's context window size (tokens)
ReserveTokens int // Tokens to reserve for LLM response, default 16384
KeepRecentTokens int // Recent tokens to preserve (not summarised), default 20000
SummaryPrompt string // Custom summary prompt (empty = use default)
}
CompactionOptions configures compaction behaviour. Token-based defaults are applied for zero-value fields.
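The zero-value defaulting can be illustrated with a small helper. The struct mirrors the documented fields; applyDefaults is a hypothetical name, and a default ContextWindow (if any) is not documented, so it is left untouched here:

```go
package main

import "fmt"

// CompactionOptions mirrors the documented struct.
type CompactionOptions struct {
	ContextWindow    int
	ReserveTokens    int
	KeepRecentTokens int
	SummaryPrompt    string
}

// applyDefaults fills the documented token-based defaults for zero-value
// fields: ReserveTokens 16384 and KeepRecentTokens 20000.
func applyDefaults(o CompactionOptions) CompactionOptions {
	if o.ReserveTokens == 0 {
		o.ReserveTokens = 16384
	}
	if o.KeepRecentTokens == 0 {
		o.KeepRecentTokens = 20000
	}
	return o
}

func main() {
	o := applyDefaults(CompactionOptions{ContextWindow: 131072})
	fmt.Println(o.ReserveTokens, o.KeepRecentTokens) // 16384 20000
}
```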
type CompactionResult ¶
type CompactionResult struct {
Summary string // LLM-generated summary of compacted messages
OriginalTokens int // Estimated token count before compaction
CompactedTokens int // Estimated token count after compaction
MessagesRemoved int // Number of messages replaced by the summary
}
CompactionResult contains statistics from a compaction operation.
func Compact ¶
func Compact(ctx context.Context, model fantasy.LanguageModel, messages []fantasy.Message, opts CompactionOptions, customInstructions string) (*CompactionResult, []fantasy.Message, error)
Compact summarises older messages using the LLM, returning the compaction result and a new message slice (summary message + preserved recent messages).
The model parameter is the same fantasy.LanguageModel used for regular generation — compaction creates a disposable fantasy agent with no tools to produce the summary.
customInstructions is optional text appended to the summary prompt (e.g. "Focus on the API design decisions"). Pass "" to use the default prompt only.
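The shape of the returned message slice can be sketched without the fantasy package. Message here is an illustrative stand-in for fantasy.Message, rebuild is a hypothetical helper, and the role assigned to the summary message is an assumption not specified by the documentation:

```go
package main

import "fmt"

// Message is a stand-in for fantasy.Message.
type Message struct{ Role, Text string }

// rebuild sketches the slice Compact is documented to return: one summary
// message followed by the preserved recent messages.
func rebuild(summary string, recent []Message) []Message {
	out := make([]Message, 0, len(recent)+1)
	out = append(out, Message{Role: "user", Text: summary})
	return append(out, recent...)
}

func main() {
	recent := []Message{
		{"user", "latest question"},
		{"assistant", "latest answer"},
	}
	msgs := rebuild("Summary of earlier conversation: ...", recent)
	fmt.Println(len(msgs)) // 3
}
```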