ollamatea

package module
v0.0.2
Published: Nov 15, 2024 License: MIT Imports: 19 Imported by: 1

README

OllamaTea - BubbleTea Component for Ollama

[Image: OllamaTea's mascot, a llama with a mohawk and gold chains, wearing a T-shirt while Bubbles float around them]
[Animation: ot-simplegen demo]
I pity the fool without local terminal inferencing... -OT

This is an experimental project.

ollamatea is a Bubble Tea component for integrating terminal experiences with an Ollama LLM server. It offers a base ollamatea.Session component, a simple out-of-the-box ollamatea.ChatPanelModel widget, an ollamatea.ModelChooser widget, and some example tools which demonstrate them.

You must have access to an Ollama server to use these tools. Follow Ollama's instructions to install a service locally.

To import the ollamatea library, use this Go module statement:

import (
    "github.com/NimbleMarkets/ollamatea"
)

To install OllamaTea's various ot- tools, see the Tools section below.
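Assuming the tools live under this module's cmd/ directory (as the Directories listing at the bottom of this page shows), the standard Go pattern to install all of them would be:

```shell
# Install every ot- tool from this module's cmd/ directory
# (path per the Directories listing; requires Go 1.17+)
go install github.com/NimbleMarkets/ollamatea/cmd/...@latest
```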

Components

ollamatea.Session

The main OllamaTea component is ollamatea.Session, which exposes the Ollama API using the BubbleTea framework. The ollamatea.Session struct holds Ollama session data for generating completions, such as its host and prompt, and handles server responses. Each ollamatea.Session corresponds to the idea of an ollama run session.

ollamatea.Session implements the BubbleTea Model interface and exposes it via the BubbleTea Command system. Note that a BubbleTea Model is unrelated to an Ollama LLM model. ollamatea.Session implements a BubbleTea tea.Model, while ollamatea.Session.Model holds the Ollama model name for generation.

One uses ollamatea.Session like any other BubbleTea component: add it to your model and connect it with your model's Init/Update/View methods. To spawn a generation request, send it an ollamatea.StartGenerateMsg; to stop the current request, send it an ollamatea.StopGenerateMsg. Currently, only one generation is managed per ollamatea.Session; a new StartGenerateMsg will cancel the current request. The streaming responses are sent via GenerateResponseMsg, which builds the Response() data. Parent models may also intercept and process this message. Once the generation is complete, a GenerateDoneMsg is sent.

It is critical to properly initialize ollamatea.Session. In a parent model's Init method, the session's Init call must also be invoked and its resultant command dispatched:

type model struct {
    session *ollamatea.Session
}

func (m model) Init() tea.Cmd {
    return m.session.Init()
}

Also note that ollamatea.Session methods take pointer receivers, rather than value receivers. This is a little different from most BubbleTea components, but it eases internal state management.

To see an example of using ollamatea.Session, see the implementation of the ollamatea.ChatPanelModel component described in the next section.

ollamatea.ChatPanelModel

ollamatea.ChatPanelModel is a simple BubbleTea TUI component built on ollamatea.Session. It presents a TextArea for prompt input and a Viewport for generation output.

The ot-simplegen tool is a minimal example using this component.

TODO: ollamatea.ChatPanelModel features are currently in flux; the hope is to add a bit more to make it a minimal, but very useful, component. TODO: picture here

ollamatea.ModelChooser

ollamatea.ModelChooser is a simple BubbleTea TUI Model which can be incorporated into your own TUI. The ot-model-chooser is a minimal example using it. There is also bare FetchModelList machinery to create custom experiences.

Configuration

The OllamaTea component defaults can be controlled with environment variables:

Variable          Default                    Description
OLLAMATEA_NOENV   ""                         If true, yes, or 1, then defaults are not loaded from the environment.
OLLAMATEA_HOST    "http://localhost:11434"   The default Ollama server URL.
OLLAMATEA_MODEL   "llama3.2-vision:11b"      The default Ollama model name.
OLLAMATEA_PROMPT  ""                         The default Ollama prompt.
OLLAMATEA_SYSTEM  ""                         The default Ollama system prompt.
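For example, to point every OllamaTea program at a different server and model (the hostname and model name here are illustrative):

```shell
export OLLAMATEA_HOST="http://gpu-box:11434"
export OLLAMATEA_MODEL="llama3.2"

# Or opt out of environment-based defaults entirely:
export OLLAMATEA_NOENV=1
```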

Tools

To exercise the library, there are some CLI tools:

ot-ansi-to-png

ot-ansi-to-png converts ANSI-encoded text into a PNG image:

usage:  ot-ansi-to-png [--help] [--in <ansitext-filename>] --out <png-filename>

Converts input ANSI terminal text from stdin (or a file with --in)
and renders it visually as a PNG image file saved to --out.

If --in is '-' then stdin is used. If --out is '-' then stdout is used.

Example:  $ echo -e "\033[31mHello\033[0m World" | ot-ansi-to-png --out hello.png

      --help         show help
  -i, --in string    Input text filename (default: stdin)
  -o, --out string   Output PNG filename ('-' is stdout)
ot-model-chooser

ot-model-chooser is a minimal example using the ollamatea.ModelChooser BubbleTea component. See above.

Model Chooser Demo
ot-png-prompt

ot-png-prompt generates an Ollama response from a PNG image and prompt:

usage:  ot-png-prompt [--help] [options] --in <input-png-filename>

Generates an Ollama response from a given PNG image.

The prompt may be specified with  --prompt or the OLLAMATEA_PROMPT envvar.
The default prompt is:
  Describe this image for a visually impaired person.

Example:  $ ot-png-prompt --in hello.png -m llava

      --help            show help
  -h, --host string     Host for Ollama (also OLLAMATEA_HOST env) (default "http://localhost:11434")
  -i, --in string       Input PNG filename ('-' is stdin)
  -m, --model string    Model for Ollama (also OLLAMATEA_MODEL env) (default "llava")
  -o, --out string      Output PNG filename
  -p, --prompt string   Prompt for Ollama (see --help for default)
  -v, --verbose         verbose output

For example, here it describes this Hello World image:

$ ./bin/ot-png-prompt -i tests/hello.png
The image displays a simple text message in the top-left corner, with the words "Hello World" written in red and white. The background of the image is solid black.

*   In the top-left corner, there are two lines of text:
    *   The first line reads "Hello" in red.
    *   The second line reads "World" in white.
*   The background of the image is a solid black color.


$ ./bin/ot-png-prompt -i tests/hello.png -v --prompt "make a poem about this image"
INFO: ohost=http://localhost:11434 omodel=llama3.2-vision:11b oprompt="make a poem about this image"
In pixelated simplicity, a sight to see,
A message whispers "Hello" from the digital tree.
A single line of code, a world apart,
A greeting in the void, a gentle start.

The screen glows dimly, a pale blue hue,
A canvas waiting for the stories anew.
But for now, it's just a simple phrase,
A hello to the world, in digital daze.
ot-simplegen

ot-simplegen is a minimal simple chat generation example using little more than the ollamatea.ChatPanelModel BubbleTea component.

ot-simplegen demo
ot-timechart

ot-timechart reads time,value data from a CSV file, displays it as an ntcharts timeserieslinechart, renders that to a PNG image, and then feeds that to Ollama with a prompt.

Of course, one might feed the chart data directly to Ollama, or perhaps render it to images with higher fidelity (e.g. headless HTML charting). But abstractly, this workflow could work with any ANSI text display or BubbleTea component View(). It might be an interesting avenue to explore for some interfaces.

This work expands on some of the ideas in this ntcharts accessibility issue. You can see an example with market data on the ot-timechart README.

usage:  ./bin/ot-timechart [--help] [options] --in <input-csv-filename>

A mini-TUI for generating an Ollama response from a simple CSV file.
The CSV file should have a header row with the first column being the time.

The prompt may be specified with  --prompt or the OLLAMATEA_PROMPT envvar.
The default prompt is:
  Describe this image for a visually impaired person.

See https://github.com/NimbleMarkets/ollamatea/tree/main/cmd/ot-timechart

      --braille         use braille lines (default: arc lines)
      --help            show help
  -h, --host string     Host for Ollama (also OLLAMATEA_HOST env) (default "http://localhost:11434")
  -i, --in string       Input CSV filename ('-' is stdin)
  -m, --model string    Model for Ollama (also OLLAMATEA_MODEL env) (default "llama3.2-vision:11b")
  -p, --prompt string   Prompt for Ollama (see --help for default)
  -v, --verbose         verbose output
  -z, --zstd            Input is ZSTD compressed (otherwise uses filename ending in .zst or zstd)
ot-timechart demo

Open Collaboration

We welcome contributions and feedback. Please adhere to our Code of Conduct when engaging our community.

Acknowledgements

Thanks to Charm.sh for making the command line glamorous and sharing Bubble Tea, and thanks to Ollama for making local inferencing so easy.

License

Released under the MIT License, see LICENSE.txt. Authored by Evan Wies.

Copyright (c) 2024 Neomantra Corp.


Made with ❤ and 🔥 by the team behind Nimble.Markets.

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func Cmdize

func Cmdize[T any](t T) tea.Cmd

Cmdize is a utility function to convert a given value into a `tea.Cmd`. See https://github.com/KevM/bubbleo/blob/main/utils/utils.go

func ConvertTerminalTextToImage

func ConvertTerminalTextToImage(terminalText string, convertConfig *ansitoimage.Config) ([]byte, error)

ConvertTerminalTextToImage converts the [terminalText] to a PNG image returned as a []byte. Returns nil and an error on failure. Uses the passed [go-ansi-to-image Config](https://github.com/pavelpatrin/go-ansi-to-image/blob/main/config.go#L4) or otherwise the [DefaultConfig](https://github.com/pavelpatrin/go-ansi-to-image/blob/main/config.go#L28).

func DefaultHost

func DefaultHost() string

func DefaultModel

func DefaultModel() string

func DefaultPrompt

func DefaultPrompt() string

func DefaultSystemPrompt

func DefaultSystemPrompt() string

func FetchModelList

func FetchModelList(ollamaHost string, id int64) tea.Msg

FetchModelList fetches a list of models from the Ollama server and returns a [FetchModelListResponseMsg]. If there is an error, a [FetchModelListErrorMsg] is returned.

It is independent of any Model, so can be used as an independent tea.Msg generator to implement one's own model selection interfaces.

func GetNextModelChooserID

func GetNextModelChooserID() int64

GetNextModelChooserID atomically returns the next ModelChooser ID. Call this to get a unique ID for a FetchModelList request.

Types

type ChatPanelModel

type ChatPanelModel struct {
	Width       int  // Width is the width of the ollamatea.ChatPanelModel
	Height      int  // Height is the height of the ollamatea.ChatPanelModel
	InputHeight int  // Height of the Input Box, other heights derive from this
	InputOnTop  bool // InputOnTop indicates whether the input box is at the top of screen

	Session *Session
	// contains filtered or unexported fields
}

ollamatea.ChatPanelModel holds a simple Panel TUI for an Ollama chat

func NewChatPanel

func NewChatPanel(session Session) ChatPanelModel

func (ChatPanelModel) GetInputHeight

func (m ChatPanelModel) GetInputHeight() int

func (ChatPanelModel) GetPlaceholder

func (m ChatPanelModel) GetPlaceholder() string

func (ChatPanelModel) Init

func (m ChatPanelModel) Init() tea.Cmd

Init handles the initialization of a ChatPanelModel.

func (ChatPanelModel) SetHeight

func (m ChatPanelModel) SetHeight(height int) ChatPanelModel

func (ChatPanelModel) SetInputHeight

func (m ChatPanelModel) SetInputHeight(inputHeight int) ChatPanelModel

SetInputHeight sets the height of the input window. This is clamped to [0,Height)

func (ChatPanelModel) SetPlaceholder

func (m ChatPanelModel) SetPlaceholder(s string) ChatPanelModel

func (ChatPanelModel) SetWidth

func (m ChatPanelModel) SetWidth(w int) ChatPanelModel

func (ChatPanelModel) Update

func (m ChatPanelModel) Update(msg tea.Msg) (ChatPanelModel, tea.Cmd)

Update handles BubbleTea messages for the ChatPanelModel

func (ChatPanelModel) View

func (m ChatPanelModel) View() string

View renders the ChatPanelModel's view.

type FetchModelListErrorMsg

type FetchModelListErrorMsg struct {
	ID         int64  // ID of the original request
	OllamaHost string // Ollama Host generating the error
	Error      error  // Error returned
}

FetchModelListErrorMsg is sent when a FetchModelList fails.

type FetchModelListResponseMsg

type FetchModelListResponseMsg struct {
	ID         int64               // ID of the original request
	OllamaHost string              // Ollama Host generating the response
	Models     []ListModelResponse // Models delivered
}

FetchModelListResponseMsg is sent when a FetchModelList succeeds.

type GenerateDoneMsg

type GenerateDoneMsg struct {
	ID         int64     // ID is the generation session ID corresponding to the Response
	Response   string    // Full response from the Ollama generation
	CreatedAt  time.Time // CreatedAt is the timestamp of the response.
	DoneReason string    // DoneReason is the reason the model stopped generating text.
	// Context is an encoding of the conversation used in this response; this
	// can be sent in the next request to keep a conversational memory.
	Context []int
}

GenerateDoneMsg is the message generated when the generation is complete. It contains the complete response along with [Context], which may be set on [Session.Context] to carry on the conversation.

type GenerateResponseMsg

type GenerateResponseMsg struct {
	ID        int64     // ID is the generation session ID corresponding to the Response
	CreatedAt time.Time // CreatedAt is the timestamp of the response.

	// Response is the textual response in this specific call.
	// Use [GenerateDoneMsg] or [Session.Response] for fuller responses.
	Response string
}

GenerateResponseMsg is the message generated each time there is a reply from Ollama. The information contained is only partial. To check what has been received so far in the request, check [Session.Response]. To focus solely on full responses, listen for GenerateDoneMsg.

type ImageData

type ImageData = ollama.ImageData

Type alias in this package for convenience

type ListModelResponse

type ListModelResponse = ollama.ListModelResponse

Type alias in this package for convenience

type ModelChooser

type ModelChooser struct {
	Waiting    string // Waiting to load message (default is "Loading models..")
	MenuPrompt string // Menu prompt (default is "Select Ollama model")
	// contains filtered or unexported fields
}

ModelChooser is a Terminal UX for selecting a local LLM model from Ollama.

func NewModelChooser

func NewModelChooser(ollamaHost string) ModelChooser

NewModelChooser returns a new ModelChooser for the given Ollama Host.

func (ModelChooser) GetHost

func (m ModelChooser) GetHost() string

GetHost returns the Ollama Host URL for the ModelChooser.

func (ModelChooser) GetLastError

func (m ModelChooser) GetLastError() error

GetLastError returns the last error encountered from fetching the model list. Returns nil if there is no error.

func (ModelChooser) GetSelectedModel

func (m ModelChooser) GetSelectedModel() *ollama.ListModelResponse

GetSelectedModel returns the selected model from the ModelChooser. Returns nil if there is no selected model.

func (ModelChooser) GetStyles

func (m ModelChooser) GetStyles() list.Styles

GetStyles returns the list.Styles for the ModelChooser.

func (ModelChooser) ID

func (m ModelChooser) ID() int64

ID returns the ModelChooser unique ID.

func (ModelChooser) Init

func (m ModelChooser) Init() tea.Cmd

Init handles the initialization of a ModelChooser.

func (ModelChooser) IsFetching

func (m ModelChooser) IsFetching() bool

IsFetching returns true if the ModelChooser is fetching the model list.

func (ModelChooser) SetStyles

func (m ModelChooser) SetStyles(styles list.Styles) ModelChooser

SetStyles sets a list.Styles for the TUI. The Spinner style is set to the list.Styles.Spinner.

func (ModelChooser) Update

func (m ModelChooser) Update(msg tea.Msg) (ModelChooser, tea.Cmd)

Update handles BubbleTea messages for the ModelChooser.

func (ModelChooser) View

func (m ModelChooser) View() string

View renders the ModelChooser's view.

type ModelChooserAbortedMsg

type ModelChooserAbortedMsg struct {
	ID    int64 // ID of the original request
	Error error // Error that caused the exit, if any
}

type ModelChooserSelectedMsg

type ModelChooserSelectedMsg struct {
	ID         int64  // ID of the original request
	OllamaHost string // Ollama Host generating the list
	Selection  ollama.ListModelResponse
}

type Session

type Session struct {
	Host     string // Ollama Host -- really the service's URL
	Model    string // Ollama LLM model.  See https://ollama.com/library
	System   string // Ollama System prompt
	Template string // Ollama prompt template
	Context  []int  // Ollama Context

	Prompt  string                 // Ollama Prompt
	Suffix  string                 // Ollama Prompt Suffix
	Images  []ImageData            // List of base64-encoded images
	Options map[string]interface{} // Options lists model-specific options
	// contains filtered or unexported fields
}

Session holds the data for an OllamaTea generation, both its request and built response. See https://github.com/ollama/ollama/blob/main/api/types.go#L42

func NewSession

func NewSession() Session

NewSession returns a new Session with the default values.

func (*Session) ClearError

func (s *Session) ClearError()

func (*Session) ClearResponse

func (s *Session) ClearResponse()

func (*Session) Error

func (s *Session) Error() error

func (*Session) ID

func (s *Session) ID() int64

func (*Session) Init

func (m *Session) Init() tea.Cmd

Init handles the initialization of a Session.

func (*Session) IsGenerating

func (s *Session) IsGenerating() bool

func (*Session) Response

func (s *Session) Response() string

func (*Session) StartGenerateMsg

func (s *Session) StartGenerateMsg() tea.Msg

func (*Session) Update

func (m *Session) Update(msg tea.Msg) (tea.Model, tea.Cmd)

Update handles BubbleTea messages for the Session. This is for starting/stopping/updating generation.

func (*Session) View

func (m *Session) View() string

View renders the Session's view. This will either be an error message, a "..." waiting string, or the Ollama response. We often set up other components for the TUI chrome and ignore this View.

type StartGenerateMsg

type StartGenerateMsg struct {
	ID int64 // ID is the session ID to start
}

type StopGenerateMsg

type StopGenerateMsg struct {
	ID int64 // ID is the session ID to stop
}

Directories

Path Synopsis
cmd
ot-ansi-to-png command
ot-png-prompt command
ot-simplegen command
ot-timechart command
