chatter



a universal toolkit for working with LLMs

The repository is organized into sub-modules:

  • github.com/kshard/chatter — core types and helper utilities
  • provider/bedrock — AWS Bedrock AI models
  • provider/openai — OpenAI models (+ compatible API)


Abstract LLMs. Switch backends. Stay consistent.

Large Language Model (LLM) APIs are clunky and tightly coupled to a specific provider. There is no consistent, extensible interface that works across all models.

chatter is an adapter that integrates popular Large Language Models (LLMs) and hosting solutions under the umbrella of a unified interface. Portability is the primary problem addressed by this library.

Quick Start

package main

import (
  "context"
  "fmt"

  "github.com/kshard/chatter"
  "github.com/kshard/chatter/provider/openai" // provider options (WithSecret, WithHost)
  "github.com/kshard/chatter/provider/openai/llm/gpt"
)

func main() {
  llm := gpt.Must(gpt.New("gpt-4o",
    openai.WithSecret("sk-your-access-key"),
  ))

  reply, err := llm.Prompt(context.Background(),
    []chatter.Message{
      chatter.Text("Enumerate rainbow colors.")
    },
  )
  if err != nil {
    panic(err)
  }

  fmt.Printf("==> (%+v)\n%s\n", llm.Usage(), reply)
}

What is the library about?

From an application's perspective, Large Language Models (LLMs) are non-deterministic functions ƒ: Prompt ⟼ Output, which generate output based on an input prompt. Originally, LLMs were created as human-centric assistants for working with unstructured text.

Recently, they have evolved to support rich content (images, video, audio) and, more importantly, to enable machine-to-machine interaction — for example in RAG pipelines and agent systems. These use cases require more structured formats for both prompts and outputs.

However, the fast-moving AI landscape has created a fragmented ecosystem: OpenAI, Anthropic, Meta, Google and others each provide models with different, often incompatible APIs and formats. This makes integration and switching between providers difficult in real applications.

This library (chatter) introduces a high-level abstraction over non-deterministic LLM functions to standardize access to popular models in Go (Golang). It allows developers to switch providers, run A/B testing, or refactor pipelines with minimal changes to application code.

The library abstracts LLMs as an I/O protocol, with a prompt Encoder and a reply Decoder. This way, implementing a new provider becomes a mechanical task — just following the specification.

%%{init: {'theme':'neutral'}}%%
graph LR
    A[chatter.Prompt]
    B[chatter.Reply]
    A --> E
    D --> B
    subgraph adapter
    E[Encoder]
    D[Decoder]
    G((LLM I/O))
    E --> G
    G --> D
    end
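Concretely, each adapter implements the LLM interface defined by the root module (see the API reference below): the Encoder half serializes messages into the provider's wire format, and the Decoder half parses the raw response back into a Reply.

type LLM interface {
	// Model ID as defined by the vendor
	ModelID() string

	// Encode the prompt into the wire format supported by the LLM
	// and its hosting platform
	Encode([]Message, ...Opt) ([]byte, error)

	// Decode the LLM's raw reply into a structured Reply
	Decode([]byte) (Reply, error)
}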

To be fair, the AWS Bedrock Converse API is the first semi-standardized effort to unify access to multiple LLMs. But it only works inside the AWS ecosystem; you cannot easily combine models from OpenAI, Google, and others.

This library provides that missing flexibility.

Please note, this library is not about Agents, RAGs, or similar high-level concepts. It is a pure low-level interface to use LLMs as non-deterministic functions.

Getting Started

The latest version of the library is available on the main branch of this repository. All development, including new features and bug fixes, takes place on the main branch using forking and pull requests as described in the contribution guidelines. The stable version is available via Go modules.

The library is organized into multiple submodules for better dependency control and natural development process.

Core data types are defined in the root module: github.com/kshard/chatter. This module defines how prompts are structured and how results are passed back to the application.

All LLM adapters follow the structure:

import "github.com/kshard/chatter/provider/{provider}/{capability}/{family}"

For a list of supported providers, see the provider folder.

Each provider module encapsulates access to various capabilities — distinct categories of AI services such as:

  • embedding — vector embedding service, which transforms text into numerical representations for search, clustering, or similarity tasks.
  • foundation — interface for general-purpose large language model capabilities, including chat and text completion.

Within each capability, implementations are further organized by model families, which group models that share related API characteristics.

Thus, the overall module hierarchy reflects this layered design:

provider/           # AI service provider (e.g., openai, mistral)
  ├─ capability/    # Service category (e.g., embedding, llm)
  │    └─ family/   # Model family (e.g., gpt, claude, text2vec)

For example:
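import (
	"github.com/kshard/chatter/provider/bedrock/llm/converse" // AWS Bedrock, Converse API
	"github.com/kshard/chatter/provider/openai/llm/gpt"       // OpenAI, GPT family
)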

In addition to model adapters, the library includes composable utilities (in github.com/kshard/chatter/aio) for common tasks like caching, rate limiting, and more, helping to build efficient and scalable AI applications.

LLM I/O

Chatter is the main interface for interacting with all supported models. It takes a list of messages representing the conversation history and returns the LLM's reply.

Both Message and Reply are built from Content blocks — this allows flexible structure for text, images, or other modalities.

type Chatter interface {
	Usage() Usage
	Prompt(context.Context, []Message, ...Opt) (*Reply, error)
}
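A minimal sketch, reusing the llm client from the Quick Start: the conversation below combines a role-setting Stratum message with a plain Text prompt (both message types are documented in the API reference).

history := []chatter.Message{
	chatter.Stratum("Act as a professional translator."),
	chatter.Text("Translate 'Hello, how are you?' to Spanish."),
}

reply, err := llm.Prompt(context.Background(), history)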

Prompt

A good prompt has 4 key elements: Role, Task, Requirements, Instructions. ("Are You AI Ready? Investigating AI Tools in Higher Education – Student Guide")

In the research community, there has been an attempt to define a standardized taxonomy of prompts for large language models (LLMs) to solve complex tasks. It encourages the community to adopt the TELeR taxonomy to achieve meaningful comparisons among LLMs, facilitating more accurate conclusions and helping the community reach consensus on state-of-the-art LLM performance more efficiently.

Package chatter provides utilities for creating and managing structured prompts for language models.

The Prompt type allows you to create a structured prompt with various sections such as task, rules, feedback, examples, context, and input. This helps in maintaining semi-structured prompts while enabling efficient serialization into textual prompts.

{task}. {guidelines}.
1. {requirements}
2. {requirements}
3. ...
{feedback}
{examples}
{context}
{context}
...
{input}

Example usage:

var prompt chatter.Prompt

prompt.WithTask("Translate the following text")

// Creates a guide section with the given note and text.
// It is a complementary paragraph to the task.
prompt.WithGuide("Please translate the text accurately")

// Creates a rules / requirements section with the given note and text.
prompt.WithRules(
  "Strictly adhere to the following requirements when generating a response.",
  "Do not use any slang or informal language",
  "Do not invent new, unkown words",
)

// Creates a feedback section with the given note and text.
prompt.WithFeedback(
  "Improve the response based on feedback",
  "Previous translations were too literal.",
)

// Create example of input and expected output.
prompt.WithExample(`["Hello"]`, `["Hola"]`)

// Creates a context section with the given note and text.
prompt.WithContext(
  "Below are additional context relevant to your goal task.",
  "The text is a formal letter",
)

// Creates an input section with the given note and text.
prompt.WithInput(
  "Translate the following sentence",
  "Hello, how are you?",
)
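Prompt itself implements Message, so the assembled prompt can be sent directly; the ToSeq helper (documented below) wraps it into a single-message conversation. Reusing the llm client from the Quick Start:

reply, err := llm.Prompt(context.Background(), prompt.ToSeq())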

Reply

TBD.

Advanced Usage

Autoconfig: Model Initialization from External Configuration

This library includes an autoconfig provider that offers a simple interface for creating models from external configuration. It is particularly useful when developing command-line applications or scripts where hardcoding model details is undesirable.

By default, autoconfig reads configuration from your ~/.netrc file, allowing you to specify the provider, model, and any provider- or model-specific options in a centralized, reusable way.

import (
	"github.com/kshard/chatter/provider/autoconfig"
)

model, err := autoconfig.FromNetRC("myservice")

.netrc Format

Your ~/.netrc file must include at least the provider and model fields under a named service entry. For example:

machine myservice
  provider provider:bedrock/foundation/converse
  model us.anthropic.claude-3-7-sonnet-20250219-v1:0

  • provider specifies the full path to the provider's capability (e.g., provider:bedrock/foundation/converse). The path resembles the import path of the providers implemented by this library.
  • model specifies the exact model name as recognized by the provider.

Each provider and model family may support additional options. These can also be added under the same machine entry and will be passed into the corresponding provider implementation.

region     // used by Bedrock providers
host       // used by OpenAI providers
secret     // used by OpenAI providers
timeout    // used by OpenAI providers
dimensions // used by embedding families
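For example, a Bedrock entry can pin the AWS region alongside the provider and model; the region value below is illustrative:

machine myservice
  provider provider:bedrock/foundation/converse
  model us.anthropic.claude-3-7-sonnet-20250219-v1:0
  region us-east-1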

LM Studio

The openai provider supports any service with an OpenAI-compatible API, for example LM Studio. You need to set the model host address manually in the configuration.

import (
	"github.com/kshard/chatter/provider/openai" // provider options
	"github.com/kshard/chatter/provider/openai/llm/gpt"
)

assistant, err := gpt.New("gemma-3-27b-it",
  openai.WithHost("http://localhost:1234"),
)

AWS Bedrock Inference Profile

See the explanation about the usage of models with an inference profile.

import (
	"github.com/kshard/chatter/provider/bedrock/llm/converse"
)

assistant, err := converse.New("us.anthropic.claude-3-7-sonnet-20250219-v1:0")

How To Contribute

The library is MIT licensed and accepts contributions via GitHub pull requests:

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Added some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request

The build and testing process requires Go version 1.23 or later.

Build and test the library:

git clone https://github.com/kshard/chatter
cd chatter
go test ./...

API documentation

commit message

The commit message helps us write good release notes and speeds up the review process. The message should address two questions: what changed and why. The project follows the template defined in the chapter Contributing to a Project of the Git book.

bugs

If you experience any issues with the library, please let us know via GitHub issues. We appreciate detailed and accurate reports that help us identify and reproduce the issue.

License

See LICENSE

Reference

  1. Use a tool to complete an Amazon Bedrock model response
  2. Converse API tool use examples
  3. Call a tool with the Converse API

Documentation


Constants

const (
	// LLM has a result to return
	LLM_RETURN = Stage("return")

	// LLM has a result to return but it was truncated (e.g. max tokens, stop sequence)
	LLM_INCOMPLETE = Stage("incomplete")

	// LLM requires the invocation of external commands/tools
	LLM_INVOKE = Stage("invoke")

	// LLM has aborted execution due to error
	LLM_ERROR = Stage("error")
)
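From client code, a typical way to branch on the stage of a reply (a sketch; reply is the value returned by Chatter.Prompt):

switch reply.Stage {
case chatter.LLM_RETURN:
	// the final answer is available in reply.Content
case chatter.LLM_INCOMPLETE:
	// the reply was truncated; consider raising MaxTokens
case chatter.LLM_INVOKE:
	// the model requested a tool call; see Reply.Invoke
case chatter.LLM_ERROR:
	// execution was aborted due to an error
}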
const Version = "v0.10.0"

Variables

This section is empty.

Functions

func Sentence added in v0.3.0

func Sentence(s string) string

Helper function for completing sentences/phrases.

Types

type Answer added in v0.6.0

type Answer struct {
	Yield []Json `json:"yield,omitempty"`
}

Answer from external tools

func (*Answer) HKT1 added in v0.7.0

func (*Answer) HKT1(Message)

func (Answer) String added in v0.6.0

func (Answer) String() string

type Blob added in v0.4.0

type Blob struct {
	Note string `json:"text,omitempty"`
	Text string `json:"blob,omitempty"`
}

Blob is part of the Prompt that provides unformatted input data required to complete the task.

func (Blob) String added in v0.7.0

func (b Blob) String() string

type Chatter

type Chatter interface {
	Usage() Usage
	Prompt(context.Context, []Message, ...Opt) (*Reply, error)
}

The generic trait to "interact" with LLMs.

type Cmd added in v0.6.0

type Cmd struct {
	// [Required] A unique name for the command, used as a reference by LLMs (e.g., "bash").
	Cmd string `json:"cmd"`

	// [Required] A detailed, multi-line description to educate the LLM on command usage.
	// Provides contextual information on how and when to use the command.
	About string `json:"about"`

	// [Required] JSON Schema specifies arguments, types, and additional context
	// to guide the LLM on command invocation.
	Schema json.RawMessage `json:"schema"`
}

Command descriptor

type Content added in v0.6.0

type Content interface{ fmt.Stringer }

Content is the core building block for I/O with LLMs. It defines either the input prompt or the result of LLM execution. For example,

  • A prompt is either simple plain Text or a semi-structured Prompt.
  • The LLM replies with generated Text, a Vector, or Invoke instructions.
  • Invocation of external tools is orchestrated using Json content.
  • etc.

The content itself is encapsulated in a sequence of Message values forming a conversation.

type Context added in v0.3.0

type Context struct {
	Note string   `json:"note,omitempty"`
	Text []string `json:"context,omitempty"`
}

Context is part of the Prompt that provides additional information required to complete the task.

func (Context) String added in v0.7.0

func (c Context) String() string

type Example added in v0.0.5

type Example struct {
	Input string `json:"input,omitempty"`
	Reply string `json:"reply,omitempty"`
}

Example is part of the Prompt that gives examples how to complete the task.

func (Example) String added in v0.3.0

func (e Example) String() string

type Feedback added in v0.3.0

type Feedback struct {
	Note string   `json:"note,omitempty"`
	Text []string `json:"feedback,omitempty"`
}

Feedback is part of the Prompt that gives feedback to LLM on previous completion of the task (e.g. errors).

func (Feedback) Error added in v0.7.0

func (f Feedback) Error() string

func (Feedback) String added in v0.7.0

func (f Feedback) String() string

type Guide added in v0.3.0

type Guide struct {
	Note string   `json:"note,omitempty"`
	Text []string `json:"guide,omitempty"`
}

Guide is part of the Prompt that guides LLM on how to complete the task.

func (Guide) String added in v0.7.0

func (g Guide) String() string

type Input added in v0.3.0

type Input struct {
	Note string   `json:"note,omitempty"`
	Text []string `json:"input,omitempty"`
}

Input is part of the Prompt that provides input data required to complete the task.

func (Input) String added in v0.7.0

func (i Input) String() string

type Invoke added in v0.6.0

type Invoke struct {
	// Unique identifier of the tool the model wants to use.
	// The name is used to look up the tool in the registry.
	Cmd string `json:"name"`

	// Arguments to the tool, which are passed as a JSON object.
	Args Json `json:"args"`

	// Original LLM message that triggered the invocation, as defined by the provider's API.
	// The message is used to maintain the conversation history and context.
	Message any `json:"-"`
}

Invoke is a special content block defining interaction with external functions. Invoke is generated by LLMs when execution of external tools is required.

It is expected that client code will use Reply.Invoke to process the invocation and call the function with the name and arguments.

Answer is returned with the results of the function call.

func (Invoke) RawMessage added in v0.6.0

func (inv Invoke) RawMessage() any

func (Invoke) String added in v0.7.0

func (inv Invoke) String() string

type Json added in v0.7.0

type Json struct {
	// Unique identifier of Json objects, used for tracking in the conversation
	// and correlating the input with output (invocations with answers).
	ID string `json:"id,omitempty"`

	// Unique identifier of the source of the Json object.
	// For example, it can be a name of the tool that produced the output.
	Source string `json:"source,omitempty"`

	// Value of JSON Object
	Value json.RawMessage `json:"bag,omitempty"`
}

Json is a structured object (JSON object) that can be used as input to LLMs or as a reply from LLMs.

Json is a key abstraction for integrating LLMs with external tools. It is used to pass structured data from the LLM to the tool and vice versa, supporting invocations and answering with the results.

func (Json) String added in v0.7.0

func (j Json) String() string

type LLM added in v0.2.0

type LLM interface {
	// Model ID as defined by the vendor
	ModelID() string

	// Encode prompt to bytes:
	// - encoding prompt as prompt markup supported by LLM
	// - encoding prompt to the envelope supported by LLM's hosting platform
	Encode([]Message, ...Opt) ([]byte, error)

	// Decode LLM's reply into pure text
	Decode([]byte) (Reply, error)
}

Foundational identity of LLMs

type MaxTokens added in v0.10.0

type MaxTokens int

Token quota for the reply; the model limits its response to the given number of tokens.

func (MaxTokens) ChatterOpt added in v0.10.0

func (MaxTokens) ChatterOpt()

type Message

type Message interface {
	fmt.Stringer
	HKT1(Message)
}

Message is an element of the conversation with LLMs. A sequence of messages forms a conversation, memory, or history.

Messages are composed from different Content blocks. We distinguish between input messages (prompts) and output messages (replies).

type Opt added in v0.4.0

type Opt = interface{ ChatterOpt() }
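Types such as Temperature, TopP, TopK, MaxTokens, StopSequences, and Registry (all defined in this package) satisfy this interface and are passed variadically to Prompt. A brief sketch with illustrative values, assuming ctx, messages, and llm from the surrounding application:

reply, err := llm.Prompt(ctx, messages,
	chatter.Temperature(0.7),
	chatter.MaxTokens(512),
	chatter.StopSequences{"\n\n"},
)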

type Prompt

type Prompt struct {
	Task    Task      `json:"task,omitempty"`
	Content []Content `json:"content,omitempty"`
}

Prompt standardizes taxonomy of prompts for LLMs to solve complex tasks. See https://aclanthology.org/2023.findings-emnlp.946.pdf

The container allows the application to maintain semi-structured prompts while enabling efficient serialization into the textual prompt (aiming for quality). At a glance, the prompt is structured as:

 {task}. {guidelines}.
	1. {requirements}
	2. ...
 {feedback}
 {examples}
 {context}
 ...
 {input}

func (*Prompt) HKT1 added in v0.7.0

func (*Prompt) HKT1(Message)

Prompt is LLM Message

func (Prompt) String added in v0.3.0

func (prompt Prompt) String() string

Converts prompt to structured string

func (*Prompt) ToSeq added in v0.3.0

func (prompt *Prompt) ToSeq() []Message

Helper function to make a sequence from a single prompt

func (*Prompt) With added in v0.3.0

func (prompt *Prompt) With(block Content) *Prompt

Add Content block into LLM's prompt

func (*Prompt) WithBlob added in v0.7.0

func (prompt *Prompt) WithBlob(note string, text string) *Prompt

Unformatted input data required to complete the task.

prompt.WithBlob(...)

func (*Prompt) WithContext added in v0.0.4

func (prompt *Prompt) WithContext(note string, text ...string) *Prompt

Additional information required to complete the task.

prompt.WithContext(...)

func (*Prompt) WithExample added in v0.0.5

func (prompt *Prompt) WithExample(input, reply string) *Prompt

Give examples to LLM about input data and expected outcomes.

prompt.WithExample(...)

func (*Prompt) WithFeedback added in v0.7.0

func (prompt *Prompt) WithFeedback(note string, text ...string) *Prompt

Give feedback to the LLM on a previous completion of the task.

prompt.WithFeedback(...)

func (*Prompt) WithGuide added in v0.7.0

func (prompt *Prompt) WithGuide(note string, text ...string) *Prompt

Guide LLM on how to complete the task.

prompt.WithGuide(...)

func (*Prompt) WithInput added in v0.0.4

func (prompt *Prompt) WithInput(note string, text ...string) *Prompt

Input data required to complete the task.

prompt.WithInput(...)

func (*Prompt) WithRules added in v0.7.0

func (prompt *Prompt) WithRules(note string, text ...string) *Prompt

Requirements are all about giving as much information as possible to ensure the response does not rely on incorrect assumptions.

prompt.WithRules(...)

func (*Prompt) WithTask added in v0.0.4

func (prompt *Prompt) WithTask(task string, args ...any) *Prompt

The task is a summary of what you want the prompt to do.

prompt.WithTask(...)

type Registry added in v0.6.1

type Registry []Cmd

Command registry is a sequence of tools available for LLM usage.

func (Registry) ChatterOpt added in v0.6.1

func (Registry) ChatterOpt()

type Reply added in v0.2.0

type Reply struct {
	Stage   Stage     `json:"stage"`
	Usage   Usage     `json:"usage"`
	Content []Content `json:"content"`
}

The reply from LLMs

func (*Reply) HKT1 added in v0.7.0

func (*Reply) HKT1(Message)

func (Reply) Invoke added in v0.6.0

func (reply Reply) Invoke(f func(string, json.RawMessage) (json.RawMessage, error)) (Answer, error)

Helper function to invoke external tools
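An illustrative callback matching the documented signature: it receives the tool name and its JSON arguments, and returns the tool's JSON result. The runBash helper here is hypothetical.

answer, err := reply.Invoke(func(cmd string, args json.RawMessage) (json.RawMessage, error) {
	switch cmd {
	case "bash": // a tool previously announced via Cmd/Registry
		return runBash(args) // hypothetical tool runner
	default:
		return nil, fmt.Errorf("unknown tool: %s", cmd)
	}
})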

func (Reply) String added in v0.4.0

func (reply Reply) String() string

type Rules added in v0.3.0

type Rules struct {
	Note string   `json:"note,omitempty"`
	Text []string `json:"rules,omitempty"`
}

Rules is part of the Prompt that defines the rules and requirements to be followed by the LLM. Use it to give as much information as possible to ensure the response does not rely on incorrect assumptions.

func (Rules) String added in v0.7.0

func (r Rules) String() string

type Stage added in v0.6.0

type Stage string

Stage of the interaction with LLM

type StopSequences added in v0.10.0

type StopSequences []string

The stop sequence prevents LLMs from generating more text after a specific string appears. Stop sequences make it easy to guarantee concise, controlled responses from models.

func (StopSequences) ChatterOpt added in v0.10.0

func (StopSequences) ChatterOpt()

type Stratum added in v0.3.0

type Stratum string

Ground-level constraint of the model behavior. The Latin meaning is "something that has been laid down". Think of it as a cornerstone of the model's behavior: "Act as <role>". Setting a specific role for a given prompt increases the likelihood of more accurate information, when done appropriately.

func (Stratum) HKT1 added in v0.7.0

func (Stratum) HKT1(Message)

Stratum is LLM Message

func (Stratum) String added in v0.3.0

func (s Stratum) String() string

type Task added in v0.7.0

type Task string

Task is part of the Prompt that defines the task to be solved by LLM.

func (Task) HKT1 added in v0.7.0

func (t Task) HKT1(Message)

func (Task) String added in v0.7.0

func (t Task) String() string

type Temperature added in v0.4.0

type Temperature float64

LLMs' critical parameter influencing the balance between predictability and creativity in generated text. Lower temperatures prioritize exploiting learned patterns, yielding more deterministic outputs, while higher temperatures encourage exploration, fostering diversity and innovation.

func (Temperature) ChatterOpt added in v0.4.0

func (Temperature) ChatterOpt()

type Text added in v0.3.0

type Text string

Text is plain text, either part of a prompt or the LLM's reply. For simplicity of the library's API, Text also represents a Message (HKT1(Message)), allowing it to be used directly as input (prompt) to the LLM.

func (Text) HKT1 added in v0.7.0

func (t Text) HKT1(Message)

func (Text) MarshalJSON added in v0.10.0

func (t Text) MarshalJSON() ([]byte, error)

func (Text) String added in v0.3.0

func (t Text) String() string

type TopK added in v0.10.0

type TopK float64
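Top-K sampling restricts token selection to the K most likely candidates at each step, truncating the low-probability tail of the distribution.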

func (TopK) ChatterOpt added in v0.10.0

func (TopK) ChatterOpt()

type TopP added in v0.4.0

type TopP float64

Nucleus Sampling, a parameter used in LLMs, impacts token selection by considering only the most likely tokens that together represent a cumulative probability mass (e.g., top-p tokens). This limits the number of choices to avoid overly diverse or nonsensical outputs while maintaining diversity within the top-ranked options.

func (TopP) ChatterOpt added in v0.4.0

func (TopP) ChatterOpt()

type Usage added in v0.6.0

type Usage struct {
	InputTokens int `json:"inputTokens"`
	ReplyTokens int `json:"replyTokens"`
}

LLM Usage stats

type Vector added in v0.10.0

type Vector []float32

Vector is a sequence of float32 numbers representing the embedding vector.

func (Vector) MarshalJSON added in v0.10.0

func (v Vector) MarshalJSON() ([]byte, error)

func (Vector) String added in v0.10.0

func (v Vector) String() string

Directories

Path
aio
  ├─ bedrock module
  ├─ bedrockbatch module
  └─ cache module
examples module
llm
  ├─ autoconfig module
  ├─ bedrock module
  ├─ bedrockbatch module
  ├─ converse module
  └─ openai module
llms module
  └─ openai module
provider
  ├─ autoconfig module
  ├─ bedrock module
  └─ openai module
