Documentation ¶
Index ¶
- Constants
- func Sentence(s string) string
- type Answer
- type Blob
- type Chatter
- type Cmd
- type Content
- type Context
- type Example
- type Feedback
- type Guide
- type Input
- type Invoke
- type Json
- type LLM
- type MaxTokens
- type Message
- type Opt
- type Prompt
- func (*Prompt) HKT1(Message)
- func (prompt Prompt) String() string
- func (prompt *Prompt) ToSeq() []Message
- func (prompt *Prompt) With(block Content) *Prompt
- func (prompt *Prompt) WithBlob(note string, text string) *Prompt
- func (prompt *Prompt) WithContext(note string, text ...string) *Prompt
- func (prompt *Prompt) WithExample(input, reply string) *Prompt
- func (prompt *Prompt) WithFeedback(note string, text ...string) *Prompt
- func (prompt *Prompt) WithGuide(note string, text ...string) *Prompt
- func (prompt *Prompt) WithInput(note string, text ...string) *Prompt
- func (prompt *Prompt) WithRules(note string, text ...string) *Prompt
- func (prompt *Prompt) WithTask(task string, args ...any) *Prompt
- type Registry
- type Reply
- type Rules
- type Stage
- type StopSequences
- type Stratum
- type Task
- type Temperature
- type Text
- type TopK
- type TopP
- type Usage
- type Vector
Constants ¶
const (
    // LLM has a result to return
    LLM_RETURN = Stage("return")

    // LLM has a result to return, but it was truncated (e.g. max tokens, stop sequence)
    LLM_INCOMPLETE = Stage("incomplete")

    // LLM requires invocation of external commands/tools
    LLM_INVOKE = Stage("invoke")

    // LLM has aborted execution due to an error
    LLM_ERROR = Stage("error")
)
const Version = "v0.10.0"
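For illustration, a minimal sketch of dispatching on the reply stage; it assumes a Reply value already obtained from a Decode call (session plumbing elided):

switch reply.Stage {
case LLM_RETURN:
    // complete result; reply.Content is safe to consume
case LLM_INCOMPLETE:
    // truncated result (e.g. max tokens, stop sequence); consider continuing or re-prompting
case LLM_INVOKE:
    // the model requests execution of external commands/tools
case LLM_ERROR:
    // execution aborted due to an error
}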
Variables ¶
This section is empty.
Functions ¶
Types ¶
type Answer ¶ added in v0.6.0
type Answer struct {
Yield []Json `json:"yield,omitempty"`
}
Answer from external tools
type Blob ¶ added in v0.4.0
Blob is part of the Prompt that provides unformatted input data required to complete the task.
type Cmd ¶ added in v0.6.0
type Cmd struct {
    // [Required] A unique name for the command, used as a reference by LLMs (e.g., "bash").
    Cmd string `json:"cmd"`

    // [Required] A detailed, multi-line description to educate the LLM on command usage.
    // Provides contextual information on how and when to use the command.
    About string `json:"about"`

    // [Required] JSON Schema specifying arguments, types, and additional context
    // to guide the LLM on command invocation.
    Schema json.RawMessage `json:"schema"`
}
Command descriptor
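For example, a descriptor for a hypothetical "echo" command might look as follows (the command and its schema are illustrative, not part of the package):

cmd := Cmd{
    Cmd:   "echo",
    About: "Echoes the given text back to the caller. Use it to verify tool invocation.",
    Schema: json.RawMessage(`{
        "type": "object",
        "properties": {
            "text": {"type": "string", "description": "text to echo"}
        },
        "required": ["text"]
    }`),
}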
type Content ¶ added in v0.6.0
Content is the core building block for I/O with LLMs. It defines either an input prompt or the result of LLM execution. For example:
- A prompt is content that is either simple plain Text or a semi-structured Prompt.
- The LLM replies with generated Text, a Vector, or Invoke instructions.
- Invocation of external tools is orchestrated using Json content.
- etc.
The content itself is encapsulated in a sequence of Message values, forming a conversation.
type Context ¶ added in v0.3.0
type Context struct {
    Note string   `json:"note,omitempty"`
    Text []string `json:"context,omitempty"`
}
Context is part of the Prompt that provides additional information required to complete the task.
type Example ¶ added in v0.0.5
Example is part of the Prompt that gives examples of how to complete the task.
type Feedback ¶ added in v0.3.0
type Feedback struct {
    Note string   `json:"note,omitempty"`
    Text []string `json:"feedback,omitempty"`
}
Feedback is part of the Prompt that gives feedback to the LLM on a previous completion of the task (e.g. errors).
type Guide ¶ added in v0.3.0
Guide is part of the Prompt that guides the LLM on how to complete the task.
type Input ¶ added in v0.3.0
Input is part of the Prompt that provides input data required to complete the task.
type Invoke ¶ added in v0.6.0
type Invoke struct {
    // Unique identifier of the tool the model wants to use.
    // The name is used to look up the tool in the registry.
    Cmd string `json:"name"`

    // Arguments to the tool, passed as a JSON object.
    Args Json `json:"args"`

    // Original LLM message that triggered the invocation, as defined by the provider's API.
    // The message is used to maintain the conversation history and context.
    Message any `json:"-"`
}
Invoke is a special content block defining interaction with external functions. It is generated by LLMs when execution of external tools is required.

It is expected that client code uses Reply.Invoke to process the invocation and call the function with the given name and arguments. An Answer is returned with the results of the function call.
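A minimal sketch of that loop, assuming Invoke blocks can be matched from Reply.Content with a type switch and that runTool is an application-defined dispatcher returning json.RawMessage:

for _, block := range reply.Content {
    inv, ok := block.(Invoke)
    if !ok {
        continue
    }
    out := runTool(inv.Cmd, inv.Args.Value) // hypothetical tool dispatcher
    answer := Answer{
        Yield: []Json{{ID: inv.Args.ID, Source: inv.Cmd, Value: out}},
    }
    // feed the answer back to the LLM as the next message of the conversation
    _ = answer
}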
func (Invoke) RawMessage ¶ added in v0.6.0
type Json ¶ added in v0.7.0
type Json struct {
    // Unique identifier of Json objects, used for tracking in the conversation
    // and correlating input with output (invocations with answers).
    ID string `json:"id,omitempty"`

    // Unique identifier of the source of the Json object.
    // For example, it can be the name of the tool that produced the output.
    Source string `json:"source,omitempty"`

    // Value of the JSON object
    Value json.RawMessage `json:"bag,omitempty"`
}
Json is a structured object (JSON object) that can be used as input to LLMs or as a reply from LLMs.
Json is a key abstraction for LLM integration with external tools. It is used to pass structured data from the LLM to the tool and vice versa, supporting invocations and answering with the results.
type LLM ¶ added in v0.2.0
type LLM interface {
    // Model ID as defined by the vendor
    ModelID() string

    // Encode prompt to bytes:
    //  - encoding the prompt as prompt markup supported by the LLM
    //  - encoding the prompt into the envelope supported by the LLM's hosting platform
    Encode([]Message, ...Opt) ([]byte, error)

    // Decode LLM's reply into pure text
    Decode([]byte) (Reply, error)
}
Foundational identity of LLMs
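A sketch of the encode/transport/decode cycle; model is some LLM implementation, and send is an application-defined call to the vendor's API (both are assumptions, not part of this package):

seq := []Message{Text("Extract keywords from the following text ...")}

req, err := model.Encode(seq)
if err != nil {
    return err
}

raw, err := send(req) // hypothetical transport to the hosting platform
if err != nil {
    return err
}

reply, err := model.Decode(raw)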
type MaxTokens ¶ added in v0.10.0
type MaxTokens int
Token quota for the reply; the model limits its response to the given number of tokens.
func (MaxTokens) ChatterOpt ¶ added in v0.10.0
func (MaxTokens) ChatterOpt()
type Message ¶
Message is an element of the conversation with LLMs. A sequence of messages forms a conversation, memory, or history.

Messages are composed of different Content blocks. We distinguish between input messages (prompts) and output messages (replies).
type Prompt ¶
type Prompt struct {
    Task    Task      `json:"task,omitempty"`
    Content []Content `json:"content,omitempty"`
}
Prompt standardizes taxonomy of prompts for LLMs to solve complex tasks. See https://aclanthology.org/2023.findings-emnlp.946.pdf
The container allows an application to maintain semi-structured prompts while enabling efficient serialization into a textual prompt (aiming for quality). At a glance, the prompt is structured as:
{task}. {guidelines}.
1. {requirements}
2. ...
{feedback}
{examples}
{context}
...
{input}
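For example, such a prompt could be assembled with the fluent With* helpers (the content strings are illustrative only):

prompt := (&Prompt{}).
    WithTask("Translate the given text to English.").
    WithGuide("Style", "Keep the translation concise.").
    WithRules("Requirements", "Do not add commentary.").
    WithExample("Bonjour le monde", "Hello world").
    WithInput("Text to translate", "Bonjour, comment ça va ?")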
func (*Prompt) WithBlob ¶ added in v0.7.0
Unformatted input data required to complete the task.
prompt.WithBlob(...)
func (*Prompt) WithContext ¶ added in v0.0.4
Additional information required to complete the task.
prompt.WithContext(...)
func (*Prompt) WithExample ¶ added in v0.0.5
Give the LLM examples of input data and expected outcomes.
prompt.WithExample(...)
func (*Prompt) WithFeedback ¶ added in v0.7.0
Give feedback to the LLM on a previous completion of the task.
prompt.WithFeedback(...)
func (*Prompt) WithGuide ¶ added in v0.7.0
Guide the LLM on how to complete the task.
prompt.WithGuide(...)
func (*Prompt) WithInput ¶ added in v0.0.4
Input data required to complete the task.
prompt.WithInput(...)
type Registry ¶ added in v0.6.1
type Registry []Cmd
Command registry is a sequence of tools available for LLM usage.
func (Registry) ChatterOpt ¶ added in v0.6.1
func (Registry) ChatterOpt()
type Reply ¶ added in v0.2.0
type Reply struct {
    Stage   Stage     `json:"stage"`
    Usage   Usage     `json:"usage"`
    Content []Content `json:"content"`
}
The reply from LLMs
type Rules ¶ added in v0.3.0
Rules is part of the Prompt that defines the rules and requirements to be followed by the LLM. Use it to give as much information as possible to ensure the response does not rely on any incorrect assumptions.
type StopSequences ¶ added in v0.10.0
type StopSequences []string
The stop sequence prevents LLMs from generating more text after a specific string appears. Stop sequences make it easy to guarantee concise, controlled responses from models.
func (StopSequences) ChatterOpt ¶ added in v0.10.0
func (StopSequences) ChatterOpt()
type Stratum ¶ added in v0.3.0
type Stratum string
Ground-level constraint on the model behavior. From the Latin for "something that has been laid down". Think of it as a cornerstone of the model behavior: "Act as <role>". Setting a specific role for a given prompt increases the likelihood of more accurate information, when done appropriately.
type Task ¶ added in v0.7.0
type Task string
Task is part of the Prompt that defines the task to be solved by the LLM.
type Temperature ¶ added in v0.4.0
type Temperature float64
A critical LLM parameter influencing the balance between predictability and creativity in generated text. Lower temperatures prioritize exploiting learned patterns, yielding more deterministic outputs, while higher temperatures encourage exploration, fostering diversity and innovation.
func (Temperature) ChatterOpt ¶ added in v0.4.0
func (Temperature) ChatterOpt()
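Options of this kind are passed alongside messages when encoding a request. A sketch, assuming the option types (Temperature, TopP, MaxTokens, StopSequences) satisfy Opt via their ChatterOpt method:

req, err := model.Encode(seq,
    Temperature(0.2),
    TopP(0.9),
    MaxTokens(512),
    StopSequences{"###"},
)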
type Text ¶ added in v0.3.0
type Text string
Text is plain text, either part of a prompt or the LLM's reply. For simplicity of the library's API, Text also represents a Message (HKT1(Message)), allowing it to be used directly as input (prompt) to the LLM.
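For example, a plain string can serve as a complete prompt sequence:

seq := []Message{Text("What is the capital of France?")}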
func (Text) MarshalJSON ¶ added in v0.10.0
type TopK ¶ added in v0.10.0
type TopK float64
func (TopK) ChatterOpt ¶ added in v0.10.0
func (TopK) ChatterOpt()
type TopP ¶ added in v0.4.0
type TopP float64
Nucleus Sampling, a parameter used in LLMs, impacts token selection by considering only the most likely tokens that together represent a cumulative probability mass (e.g., top-p tokens). This limits the number of choices to avoid overly diverse or nonsensical outputs while maintaining diversity within the top-ranked options.
func (TopP) ChatterOpt ¶ added in v0.4.0
func (TopP) ChatterOpt()
Directories ¶

- bedrock (module)
- bedrockbatch (module)
- cache (module)
- examples (module)
- llm
  - autoconfig (module)
  - bedrock (module)
  - bedrockbatch (module)
  - converse (module)
  - openai (module)
- llms (module)
- openai (module)
- provider
  - autoconfig (module)
  - bedrock (module)
  - openai (module)