ollama

package module
v1.1.1
Published: Mar 27, 2025 License: MIT Imports: 9 Imported by: 1

README

go-ollama

An ollama API library for Go.

Usage

Install

To add go-ollama to your project, run:

go get github.com/JexSrs/go-ollama
Ollama instance
LLM := ollama.New("http://localhost:11434")

// Optionally set headers for each request
LLM.SetHeaders("Authorization", []string{"Bearer xyz"})
Generate a completion
res, err := LLM.Generate(
    LLM.Generate.WithModel("llama3"), // Default value
    LLM.Generate.WithPrompt("Why is the sky blue?"),
)

By default, stream is set to false. To enable it:

res, err := LLM.Generate(
    LLM.Generate.WithModel("llama3"), // Default value
    LLM.Generate.WithPrompt("Why is the sky blue?"),
    LLM.Generate.WithStream(true, 512000, func(r *GenerateResponse, err error) {
        // ...
    }),
)

The function blocks until streaming has finished and returns the latest response, with the messages of all previous responses concatenated into it.
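
For example, a sketch that prints each chunk as it arrives and then the final concatenated text (assuming "fmt" is imported):

res, err := LLM.Generate(
    LLM.Generate.WithModel("llama3"),
    LLM.Generate.WithPrompt("Why is the sky blue?"),
    LLM.Generate.WithStream(true, 512000, func(r *GenerateResponse, err error) {
        if err != nil {
            return // handle the streaming error
        }
        fmt.Print(r.Response) // partial chunk
    }),
)
if err != nil {
    // handle the request error
}
fmt.Println(res.Response) // the full, concatenated response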

To apply a format to the model's response:

res, err := LLM.Generate(
    LLM.Generate.WithModel("llama3"), // Default value
    LLM.Generate.WithPrompt("What color is the sky at different times of the day? Respond using JSON"), // Important: instruct the model to respond in JSON
    LLM.Generate.WithFormat("json"),
)

To append an image to the request:

res, err := LLM.Generate(
    LLM.Generate.WithModel("llama3"), // Default value
    LLM.Generate.WithPrompt("What is in this picture?"),
    LLM.Generate.WithImage("iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSETsE
PGqZOndr2uuuR5rF169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC"),
)

To set temperature and seed:

res, err := LLM.Generate(
    LLM.Generate.WithModel("llama3"), // Default value
    LLM.Generate.WithPrompt("Hello, how are you?"),
    LLM.Generate.WithTemperature(0.4),
    LLM.Generate.WithSeed(123),
)
Generate a chat completion
chatId := "gyhztyd"
message := Message{
    Role:    Pointer("user"),
    Content: Pointer("Hello, how are you?"),
    Images:  nil,
}

res, err := LLM.Chat(
    &chatId,
    LLM.Chat.WithModel("llama3"), // Default value
    LLM.Chat.WithMessage(message),
)

By passing a chat ID, the request will include all the messages sent and received before the current message. If the chatId does not exist, a new chat will be created. If you don't want to keep history, pass nil as the chatId.
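
For example, a minimal one-off request with no history (a sketch; ChatFunc, as documented below, takes the chat ID as a *string, so nil is allowed):

res, err := LLM.Chat(
    nil, // no chat ID, so no history is kept
    LLM.Chat.WithModel("llama3"),
    LLM.Chat.WithMessage(message),
)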

All chats are stored in memory; a restart of the application will delete them all.

To handle the chats:

chat := LLM.GetChat(chatId) // Returns the chat, or nil if it does not exist
LLM.DeleteChat(chatId)      // Deletes the specified chat
LLM.DeleteAllChats()        // Deletes all chats

LLM.PreloadChat(chat)       // Preloads a Chat instance into the client

To edit a chat's messages:

chat := LLM.GetChat(chatId)

chat.AddMessage(Message{...})      // Adds a new message at the end of the list
chat.AddMessageTo(2, Message{...}) // Adds a new message at the specified index
chat.DeleteMessage(2)              // Deletes the message at the specified index
chat.DeleteAllMessages()           // Deletes all messages
Blobs functions

To create a blob:

err := LLM.Blobs.Create("sha256:...", []byte{...})

To check whether a blob exists:

err := LLM.Blobs.Check("sha256:...")
Models functions

Create a model:

res, err := LLM.Models.Create(
    LLM.Models.Create.WithFrom("llama3"),
    LLM.Models.Create.WithParameter(Parameter{Key: "num_keep", Value: "24"}),
    LLM.Models.Create.WithStream(true, 512000, func(r *StatusResponse, err error) {
        // ...
    }),
    LLM.Models.Create.WithLicense("<license>"),
)

Get local models:

res, err := LLM.Models.List()

Show a model's information:

res, err := LLM.Models.ShowInfo(LLM.Models.ShowInfo.WithModel("llama3"))

Clone a model:

err := LLM.Models.Copy("llama3", "llama3-copy")

Delete a model:

err := LLM.Models.Delete("llama3")

Pull a model from the Ollama library:

res, err := LLM.Models.Pull(
    LLM.Models.Pull.WithModel("llama3"),
    LLM.Models.Pull.WithInsecure(false),
    LLM.Models.Pull.WithStream(true, 512000, func(r *PushPullModelResponse, err error) {
        // ...
    }),
)

Push a model to the Ollama library:

res, err := LLM.Models.Push(
    LLM.Models.Push.WithModel("llama3"),
    LLM.Models.Push.WithInsecure(false),
    LLM.Models.Push.WithStream(true, 512000, func(r *PushPullModelResponse, err error) {
        // ...
    }),
)

Generate embeddings:

res, err := LLM.GenerateEmbeddings(
    LLM.GenerateEmbeddings.WithModel("llama3"),
    LLM.GenerateEmbeddings.WithPrompt("This is the prompt!"),
    LLM.GenerateEmbeddings.WithKeepAlive("5m"),
)

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type BlobCheckFunc

type BlobCheckFunc func(digest string) error

BlobCheckFunc performs a request to the Ollama API to check if a blob file exists.

For more information about the request, see the API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md

type BlobCreateFunc

type BlobCreateFunc func(digest string, data []byte) error

BlobCreateFunc performs a request to the Ollama API to create a new blob with the provided blob file.

For more information about the request, see the API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md

type Chat

type Chat struct {
	ID       string
	Messages []Message
}

Chat stores the messages sent from the user and received from the assistant.
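
For instance, a sketch of constructing a Chat by hand to restore a saved conversation (the Pointer helper from the README examples above is assumed):

restored := Chat{
    ID: "gyhztyd",
    Messages: []Message{
        {Role: Pointer("user"), Content: Pointer("Hello, how are you?")},
    },
}
LLM.PreloadChat(restored) // register the chat under its ID again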

func (*Chat) AddMessage

func (c *Chat) AddMessage(m Message)

AddMessage adds a new message to the end of the chat.

Parameters:

  • m: The message to add.

func (*Chat) AddMessageTo

func (c *Chat) AddMessageTo(index int, m Message)

AddMessageTo adds a new message at the specified index.

Parameters:

  • index: The index at which to add the new message.
  • m: The message to add.

func (*Chat) DeleteAllMessages

func (c *Chat) DeleteAllMessages()

DeleteAllMessages deletes all messages in the chat.

func (*Chat) DeleteMessage

func (c *Chat) DeleteMessage(index int)

DeleteMessage deletes a message at the specified index.

Parameters:

  • index: The index of the message to delete.

type ChatFunc

type ChatFunc func(chatId *string, builder ...func(reqBuilder *ChatRequestBuilder)) (*ChatResponse, error)

ChatFunc performs a request to the Ollama API with the provided instructions. If chatId is set, it will append the messages from previous requests to the current request. If chatId is not found, a new chat will be generated.

For more information about the request, see the API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md

func (*ChatFunc) WithFormat

func (f *ChatFunc) WithFormat(v string) func(*ChatRequestBuilder)

WithFormat sets the format to return a response in. Currently, the only accepted value is "json".

Parameters:

  • v: The format string.

func (*ChatFunc) WithKeepAlive added in v1.0.4

func (f *ChatFunc) WithKeepAlive(v string) func(*ChatRequestBuilder)

WithKeepAlive controls how long the model will stay loaded into memory following the request.

Parameters:

  • v: The keep alive duration.

func (*ChatFunc) WithMessage

func (f *ChatFunc) WithMessage(v Message) func(*ChatRequestBuilder)

WithMessage appends a new message to the request.

Parameters:

  • v: The message to append.

func (*ChatFunc) WithModel

func (f *ChatFunc) WithModel(v string) func(*ChatRequestBuilder)

WithModel sets the model used for this request.

Parameters:

  • v: The model name.

func (*ChatFunc) WithOptions

func (f *ChatFunc) WithOptions(v Options) func(*ChatRequestBuilder)

WithOptions sets the options for this request. It will override any settings set before, such as temperature and seed.

Parameters:

  • v: The options to set.
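
A sketch of overriding sampling settings through WithOptions, reusing the chatId and message from the README example above (Options fields are pointers, so local variables are used):

temp := 0.2
seed := 123
res, err := LLM.Chat(
    &chatId,
    LLM.Chat.WithModel("llama3"),
    LLM.Chat.WithMessage(message),
    LLM.Chat.WithOptions(Options{
        Temperature: &temp,
        Seed:        &seed,
    }),
)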

func (*ChatFunc) WithRaw

func (f *ChatFunc) WithRaw(v bool) func(*ChatRequestBuilder)

WithRaw bypasses the templating system and provides a full prompt.

Parameters:

  • v: A boolean indicating whether to use raw mode.

func (*ChatFunc) WithSeed

func (f *ChatFunc) WithSeed(v int) func(*ChatRequestBuilder)

WithSeed sets the seed for this request.

Parameters:

  • v: The seed value.

func (*ChatFunc) WithStream

func (f *ChatFunc) WithStream(v bool, bufferSize int, fn func(r *ChatResponse, err error)) func(*ChatRequestBuilder)

WithStream passes a function to allow reading the streamed responses.

Parameters:

  • v: A boolean indicating whether to use streaming.
  • bufferSize: The size of the stream buffer.
  • fn: The function that handles the streamed responses.

func (*ChatFunc) WithTemperature

func (f *ChatFunc) WithTemperature(v float64) func(*ChatRequestBuilder)

WithTemperature sets the temperature for this request.

Parameters:

  • v: The temperature value.

type ChatRequestBuilder

type ChatRequestBuilder struct {
	Model     *string   `json:"model"`
	Format    *string   `json:"format"`
	Raw       *bool     `json:"raw"`
	Messages  []Message `json:"messages"`
	KeepAlive *string   `json:"keep_alive,omitempty"`
	Options   *Options  `json:"options"`

	Stream           *bool                            `json:"stream"`
	StreamBufferSize *int                             `json:"-"`
	StreamFunc       func(r *ChatResponse, err error) `json:"-"`
}

ChatRequestBuilder represents the chat API request.

type ChatResponse added in v1.0.4

type ChatResponse struct {
	Model      string  `json:"model"`      // Is the model name that generated the response.
	CreatedAt  string  `json:"created_at"` // Is the timestamp of the response.
	Message    Message `json:"message"`
	Done       bool    `json:"done"`        // Specifies if the response is complete.
	DoneReason string  `json:"done_reason"` // The reason the model stopped generating text.
	Context    []int   `json:"context"`     // Is an encoding of the conversation used in this response; this can be sent in the next request to keep a conversational memory.

	Metrics
}

ChatResponse represents the API response for the "chat" endpoint.

type CopyModelFunc

type CopyModelFunc func(source, destination string) error

CopyModelFunc performs a request to the Ollama API to copy an existing model under a different name.

For more information about the request, see the API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md

type CreateModelFunc

type CreateModelFunc func(builder ...func(modelFileBuilder *ModelFileRequestBuilder)) (*StatusResponse, error)

CreateModelFunc performs a request to the Ollama API to create a new model with the provided model file. Canceled pulls are resumed from where they left off, and multiple calls will share the same download progress.

For more information about the request, see the API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md

func (*CreateModelFunc) WithAdapter

func (f *CreateModelFunc) WithAdapter(v string) func(*ModelFileRequestBuilder)

WithAdapter defines the (Q)LoRA adapters to apply to the model.

Parameters:

  • v: The adapter string.

func (*CreateModelFunc) WithChat

func (f *CreateModelFunc) WithChat(chat *Chat) func(*ModelFileRequestBuilder)

WithChat appends all the messages from a chat to the message history.

Parameters:

  • chat: The chat whose messages to append.

func (*CreateModelFunc) WithFrom

func (f *CreateModelFunc) WithFrom(v string) func(*ModelFileRequestBuilder)

WithFrom defines the base model to use.

Parameters:

  • v: The base model string.

func (*CreateModelFunc) WithLicense

func (f *CreateModelFunc) WithLicense(v string) func(*ModelFileRequestBuilder)

WithLicense specifies the legal license.

Parameters:

  • v: The license string.

func (*CreateModelFunc) WithMessage

func (f *CreateModelFunc) WithMessage(v Message) func(*ModelFileRequestBuilder)

WithMessage appends a new message to the message history.

Parameters:

  • v: The message to append.

func (*CreateModelFunc) WithModel added in v1.0.4

func (f *CreateModelFunc) WithModel(v string) func(*ModelFileRequestBuilder)

WithModel sets the new model's name for this request.

Parameters:

  • v: The model name.

func (*CreateModelFunc) WithParameter

func (f *CreateModelFunc) WithParameter(v Parameter) func(*ModelFileRequestBuilder)

WithParameter appends a new parameter for how Ollama will run the model.

Parameters:

  • v: The parameter to append.

func (*CreateModelFunc) WithPath added in v1.0.4

func (f *CreateModelFunc) WithPath(v string) func(*ModelFileRequestBuilder)

WithPath sets the path for this request.

Parameters:

  • v: The path.

func (*CreateModelFunc) WithQuantize added in v1.0.4

func (f *CreateModelFunc) WithQuantize(v string) func(*ModelFileRequestBuilder)

WithQuantize sets the quantization level for this request.

Parameters:

  • v: The quantization level.

func (*CreateModelFunc) WithStream

func (f *CreateModelFunc) WithStream(v bool, bufferSize int, fc func(r *StatusResponse, err error)) func(*ModelFileRequestBuilder)

WithStream passes a function to allow reading the streamed responses.

Parameters:

  • v: A boolean indicating whether to use streaming.
  • bufferSize: The size of the stream buffer.
  • fc: The function that handles the streamed responses.

func (*CreateModelFunc) WithSystem

func (f *CreateModelFunc) WithSystem(v string) func(*ModelFileRequestBuilder)

WithSystem specifies the system message that will be set in the template.

Parameters:

  • v: The system message string.

func (*CreateModelFunc) WithTemplate

func (f *CreateModelFunc) WithTemplate(v string) func(*ModelFileRequestBuilder)

WithTemplate sets the full prompt template to be sent to the model.

Parameters:

  • v: The template string.

type DeleteModelFunc

type DeleteModelFunc func(name string) error

DeleteModelFunc performs a request to the Ollama API to delete a model.

For more information about the request, see the API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md

type GenerateEmbeddingsFunc

type GenerateEmbeddingsFunc func(...func(modelFileBuilder *GenerateEmbeddingsRequestBuilder)) (*GenerateEmbeddingsResponse, error)

GenerateEmbeddingsFunc performs a request to the Ollama API to generate embeddings from a model.

For more information about the request, see the API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md

func (GenerateEmbeddingsFunc) WithKeepAlive

func (f GenerateEmbeddingsFunc) WithKeepAlive(v string) func(*GenerateEmbeddingsRequestBuilder)

WithKeepAlive controls how long the model will stay loaded in memory following the request (default: 5m).

Parameters:

  • v: The keep alive string.

func (GenerateEmbeddingsFunc) WithModel

func (f GenerateEmbeddingsFunc) WithModel(v string) func(*GenerateEmbeddingsRequestBuilder)

WithModel sets the model used for this request.

Parameters:

  • v: The model name.

func (GenerateEmbeddingsFunc) WithOptions

func (f GenerateEmbeddingsFunc) WithOptions(v Options) func(*GenerateEmbeddingsRequestBuilder)

WithOptions sets the options for this request. It will override any settings set before, such as temperature and seed.

Parameters:

  • v: The options to set.

func (GenerateEmbeddingsFunc) WithPrompt

func (f GenerateEmbeddingsFunc) WithPrompt(v string) func(*GenerateEmbeddingsRequestBuilder)

WithPrompt sets the prompt for this request.

Parameters:

  • v: The prompt string.

type GenerateEmbeddingsRequestBuilder added in v1.0.4

type GenerateEmbeddingsRequestBuilder struct {
	Model     *string  `json:"model"`
	Prompt    *string  `json:"prompt"`
	KeepAlive *string  `json:"keep_alive"`
	Options   *Options `json:"options"`
}

GenerateEmbeddingsRequestBuilder represents the generate embeddings API request.

type GenerateEmbeddingsResponse

type GenerateEmbeddingsResponse struct {
	Embedding []float64 `json:"embedding"`
}

GenerateEmbeddingsResponse represents the API response for the "generate embeddings" endpoint.

type GenerateFunc

type GenerateFunc func(builder ...func(reqBuilder *GenerateRequestBuilder)) (*GenerateResponse, error)

GenerateFunc performs a request to the Ollama API with the provided instructions. If the prompt is not set, the model will be loaded into memory.

For more information about the request, see the API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md
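
Given that behavior, a sketch that only preloads a model into memory without generating anything:

// Load "llama3" into memory so that a later request responds faster.
_, err := LLM.Generate(
    LLM.Generate.WithModel("llama3"),
)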

func (GenerateFunc) WithContext added in v1.0.4

func (c GenerateFunc) WithContext(v []int) func(*GenerateRequestBuilder)

WithContext sets the context returned from a previous request; it can be sent to keep a short conversational memory.

Parameters:

  • v: The context int array.

func (GenerateFunc) WithFormat

func (c GenerateFunc) WithFormat(v string) func(*GenerateRequestBuilder)

WithFormat sets the format to return a response in. Currently, the only accepted value is "json".

Parameters:

  • v: The format string.

func (GenerateFunc) WithImage

func (c GenerateFunc) WithImage(v string) func(*GenerateRequestBuilder)

WithImage appends an image to the message sent to Ollama. The image must be base64 encoded.

Parameters:

  • v: The base64 encoded image string.
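
A sketch of producing that base64 string from a local file (the file name is hypothetical, a multimodal model such as "llava" is assumed, and "encoding/base64" and "os" must be imported):

data, err := os.ReadFile("picture.png") // hypothetical image file
if err != nil {
    // handle the read error
}
res, err := LLM.Generate(
    LLM.Generate.WithModel("llava"),
    LLM.Generate.WithPrompt("What is in this picture?"),
    LLM.Generate.WithImage(base64.StdEncoding.EncodeToString(data)),
)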

func (GenerateFunc) WithKeepAlive added in v1.0.4

func (c GenerateFunc) WithKeepAlive(v string) func(*GenerateRequestBuilder)

WithKeepAlive controls how long the model will stay loaded in memory following this request.

Parameters:

  • v: The keep alive duration.

func (GenerateFunc) WithModel

func (c GenerateFunc) WithModel(v string) func(*GenerateRequestBuilder)

WithModel sets the model used for this request.

Parameters:

  • v: The model name.

func (GenerateFunc) WithOptions

func (c GenerateFunc) WithOptions(v Options) func(*GenerateRequestBuilder)

WithOptions sets the options for this request. It will override any settings set before, such as temperature and seed.

Parameters:

  • v: The options to set.

func (GenerateFunc) WithPrompt

func (c GenerateFunc) WithPrompt(v string) func(*GenerateRequestBuilder)

WithPrompt sets the prompt for this request.

Parameters:

  • v: The prompt string.

func (GenerateFunc) WithRaw

func (c GenerateFunc) WithRaw(v bool) func(*GenerateRequestBuilder)

WithRaw bypasses the templating system and provides a full prompt.

Parameters:

  • v: A boolean indicating whether to use raw mode.

func (GenerateFunc) WithSeed

func (c GenerateFunc) WithSeed(v int) func(*GenerateRequestBuilder)

WithSeed sets the seed for this request.

Parameters:

  • v: The seed value.

func (GenerateFunc) WithStream

func (c GenerateFunc) WithStream(v bool, bufferSize int, f func(r *GenerateResponse, err error)) func(*GenerateRequestBuilder)

WithStream passes a function to allow reading the streamed responses.

Parameters:

  • v: A boolean indicating whether to use streaming.
  • bufferSize: The size of the stream buffer.
  • f: The function that handles the streamed responses.

func (GenerateFunc) WithSystem added in v1.0.4

func (c GenerateFunc) WithSystem(v string) func(*GenerateRequestBuilder)

WithSystem overrides the model's default system message/prompt.

Parameters:

  • v: The system string.

func (GenerateFunc) WithTemperature

func (c GenerateFunc) WithTemperature(v float64) func(*GenerateRequestBuilder)

WithTemperature sets the temperature for this request.

Parameters:

  • v: The temperature value.

func (GenerateFunc) WithTemplate added in v1.0.4

func (c GenerateFunc) WithTemplate(v string) func(*GenerateRequestBuilder)

WithTemplate overrides the model's default prompt template.

Parameters:

  • v: The template string.

type GenerateRequestBuilder

type GenerateRequestBuilder struct {
	Model     *string  `json:"model"`
	Prompt    *string  `json:"prompt"`
	System    *string  `json:"system"`
	Template  *string  `json:"template"`
	Format    *string  `json:"format"`
	Images    []string `json:"images"`
	Raw       *bool    `json:"raw"`
	Context   []int    `json:"context,omitempty"`
	KeepAlive *string  `json:"keep_alive,omitempty"`
	Options   *Options `json:"options"`

	Stream           *bool                                `json:"stream"`
	StreamBufferSize *int                                 `json:"-"`
	StreamFunc       func(r *GenerateResponse, err error) `json:"-"`
}

GenerateRequestBuilder represents the generate API request.

type GenerateResponse added in v1.0.4

type GenerateResponse struct {
	Model      string `json:"model"`       // Is the model name that generated the response.
	CreatedAt  string `json:"created_at"`  // Is the timestamp of the response.
	Response   string `json:"response"`    // Is the textual response itself.
	Done       bool   `json:"done"`        // Specifies if the response is complete.
	DoneReason string `json:"done_reason"` // The reason the model stopped generating text.
	Context    []int  `json:"context"`     // Is an encoding of the conversation used in this response; this can be sent in the next request to keep a conversational memory.

	Metrics
}

GenerateResponse represents the API response for the "generate" endpoint.
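
Because Context can be passed back through WithContext, a sketch of keeping a conversational memory across two calls:

first, err := LLM.Generate(
    LLM.Generate.WithModel("llama3"),
    LLM.Generate.WithPrompt("My name is Ada. Please remember that."),
)
if err != nil {
    // handle the error
}
second, err := LLM.Generate(
    LLM.Generate.WithModel("llama3"),
    LLM.Generate.WithPrompt("What is my name?"),
    LLM.Generate.WithContext(first.Context), // reuse the previous conversation encoding
)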

type ListLocalModelsFunc

type ListLocalModelsFunc func() (*ListLocalModelsResponse, error)

ListLocalModelsFunc performs a request to the Ollama API to retrieve the local models.

For more information about the request, see the API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md

type ListLocalModelsResponse

type ListLocalModelsResponse struct {
	Models []ModelResponse `json:"models"`
}

ListLocalModelsResponse represents the response for listing local models.

type Message

type Message struct {
	Role    *string  `json:"role"`    // Role of the message, either system, user, or assistant.
	Content *string  `json:"content"` // Content of the message.
	Images  []string `json:"images"`  // Images associated with the message.
}

Message represents a message sent/received from the API.

type Metrics added in v1.0.4

type Metrics struct {
	TotalDuration      time.Duration `json:"total_duration"`
	LoadDuration       time.Duration `json:"load_duration"`
	PromptEvalCount    int           `json:"prompt_eval_count"`
	PromptEvalDuration time.Duration `json:"prompt_eval_duration"`
	EvalCount          int           `json:"eval_count"`
	EvalDuration       time.Duration `json:"eval_duration"`
}

type ModelDetails

type ModelDetails struct {
	Format            string   `json:"format"`
	Family            string   `json:"family"`
	Families          []string `json:"families"`
	ParameterSize     string   `json:"parameter_size"`
	QuantizationLevel string   `json:"quantization_level"`
}

ModelDetails represents detailed information about a model.

type ModelFileRequestBuilder added in v1.0.4

type ModelFileRequestBuilder struct {
	Model     *string `json:"model"`
	Path      *string `json:"path"`
	Modelfile *string `json:"modelfile"`
	Quantize  *string `json:"quantize"`

	Stream           *bool                              `json:"stream"`
	StreamBufferSize *int                               `json:"-"`
	StreamFunc       func(r *StatusResponse, err error) `json:"-"`
	// contains filtered or unexported fields
}

ModelFileRequestBuilder represents the model creation API request.

func (*ModelFileRequestBuilder) Build added in v1.0.4

func (m *ModelFileRequestBuilder) Build() string

Build generates the Modelfile contents for the request.

type ModelResponse added in v1.0.4

type ModelResponse struct {
	Name       string       `json:"name"`
	Model      string       `json:"model"`
	ModifiedAt string       `json:"modified_at"`
	Size       int64        `json:"size"`
	Digest     string       `json:"digest"`
	Details    ModelDetails `json:"details"`
	ExpiresAt  time.Time    `json:"expires_at"`
	SizeVRAM   int64        `json:"size_vram"`
}

ModelResponse represents a model's metadata.

type Ollama

type Ollama struct {
	Http *http.Client

	Chat     ChatFunc
	Generate GenerateFunc

	Blobs struct {
		Check  BlobCheckFunc
		Create BlobCreateFunc
	}

	Models struct {
		Create   CreateModelFunc
		List     ListLocalModelsFunc
		ShowInfo ShowModelInfoFunc
		Copy     CopyModelFunc
		Delete   DeleteModelFunc
		Pull     PullModelFunc
		Push     PushModelFunc
	}

	GenerateEmbeddings GenerateEmbeddingsFunc
	// contains filtered or unexported fields
}

Ollama represents a client for interacting with the Ollama API.

func New

func New(v url.URL) *Ollama

New creates a new Ollama client that points to the specified URL. It initializes the client with default settings and available API functions.

Example:

llm := New("http://api.ollama.com")

func (*Ollama) DeleteAllChats

func (o *Ollama) DeleteAllChats()

DeleteAllChats removes all chats from the client's chat map.

func (*Ollama) DeleteChat

func (o *Ollama) DeleteChat(id string)

DeleteChat removes a chat by its ID.

Parameters:

  • id: The ID of the chat to remove.

func (*Ollama) GetChat

func (o *Ollama) GetChat(id string) *Chat

GetChat retrieves a chat by its ID.

Parameters:

  • id: The ID of the chat.

Returns:

  • A pointer to the Chat if found, or nil if not found.
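
For example, a small sketch that guards against a missing chat (assuming "fmt" is imported):

if chat := LLM.GetChat("gyhztyd"); chat != nil {
    fmt.Println(len(chat.Messages)) // number of stored messages
}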

func (*Ollama) PreloadChat

func (o *Ollama) PreloadChat(chat Chat)

PreloadChat preloads a chat into the client's chat map.

Parameters:

  • chat: The chat to preload.

func (*Ollama) SetHeaders added in v1.1.0

func (o *Ollama) SetHeaders(key string, value []string)

SetHeaders sets a header that will be included in all requests.

type Options

type Options struct {
	NumKeep          *int     `json:"num_keep"`
	NumPredict       *int     `json:"num_predict"`       // Max number of tokens to predict.
	TopK             *int     `json:"top_k"`             // Reduces the probability of generating nonsense.
	TopP             *float64 `json:"top_p"`             // Controls diversity of text.
	TfsZ             *float64 `json:"tfs_z"`             // Tail free sampling.
	TypicalP         *float64 `json:"typical_p"`         // Typical probability.
	RepeatLastN      *int     `json:"repeat_last_n"`     // Prevents repetition.
	PenalizeNewLine  *bool    `json:"penalize_newline"`  // Penalizes new lines.
	RepeatPenalty    *float64 `json:"repeat_penalty"`    // Penalizes repetitions.
	PresencePenalty  *float64 `json:"presence_penalty"`  // Penalizes presence of tokens.
	FrequencyPenalty *float64 `json:"frequency_penalty"` // Penalizes frequency of tokens.
	Mirostat         *int     `json:"mirostat"`          // Enables Mirostat sampling.
	MirostatEta      *float64 `json:"mirostat_eta"`      // Learning rate for Mirostat.
	MirostatTau      *float64 `json:"mirostat_tau"`      // Balance between coherence and diversity.
	Stop             []string `json:"stop"`              // Stop sequences.
	Numa             *bool    `json:"numa"`              // NUMA support.
	NumCtx           *int     `json:"num_ctx"`           // Context window size.
	NumBatch         *int     `json:"num_batch"`         // Batch size.
	NumGPU           *int     `json:"num_gpu"`           // Number of GPUs.
	LowVRam          *bool    `json:"low_vram"`          // Low VRAM mode.
	F16KV            *bool    `json:"f16_kv"`            // 16-bit key-value pairs.
	VocabOnly        *bool    `json:"vocab_only"`        // Vocab only mode.
	NumThreads       *int     `json:"num_threads"`       // Number of threads.
	UseMMap          *bool    `json:"use_mmap"`          // Use memory-mapped files.
	UseMLock         *bool    `json:"use_mlock"`         // Use memory locking.
	Seed             *int     `json:"seed"`              // Random seed.
	Temperature      *float64 `json:"temperature"`       // Temperature for generation.
}

Options represents the options that will be sent to the API.
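
Every scalar field is a pointer, so a field is set by taking the address of a local variable; a sketch of sending a custom context window, temperature, and stop sequence:

numCtx := 4096
temp := 0.7
res, err := LLM.Generate(
    LLM.Generate.WithModel("llama3"),
    LLM.Generate.WithPrompt("Hello!"),
    LLM.Generate.WithOptions(Options{
        NumCtx:      &numCtx,
        Temperature: &temp,
        Stop:        []string{"User:"},
    }),
)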

type Parameter

type Parameter struct {
	Key   string
	Value string
}

Parameter represents a parameter sent to the API.

type PullModelFunc

type PullModelFunc func(...func(modelFileBuilder *PullModelRequestBuilder)) (*PushPullModelResponse, error)

PullModelFunc performs a request to the Ollama API to pull a model from the Ollama library.

For more information about the request, see the API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md

func (*PullModelFunc) WithInsecure

func (f *PullModelFunc) WithInsecure(v bool) func(*PullModelRequestBuilder)

WithInsecure allows insecure connections to the library. Only use this if you are pulling from your own library during development.

Parameters:

  • v: A boolean indicating whether to use insecure mode.

func (*PullModelFunc) WithModel added in v1.0.4

func (f *PullModelFunc) WithModel(v string) func(*PullModelRequestBuilder)

WithModel sets the model used for this request.

Parameters:

  • v: The model name.

func (*PullModelFunc) WithPassword added in v1.0.4

func (f *PullModelFunc) WithPassword(v string) func(*PullModelRequestBuilder)

WithPassword sets the password used for this request.

Parameters:

  • v: The password.

func (*PullModelFunc) WithStream

func (f *PullModelFunc) WithStream(v bool, bufferSize int, fc func(r *PushPullModelResponse, err error)) func(*PullModelRequestBuilder)

WithStream passes a function to allow reading the streamed responses.

Parameters:

  • v: A boolean indicating whether to use streaming.
  • bufferSize: The size of the stream buffer.
  • fc: The function that handles the streamed responses.

func (*PullModelFunc) WithUsername added in v1.0.4

func (f *PullModelFunc) WithUsername(v string) func(*PullModelRequestBuilder)

WithUsername sets the username used for this request.

Parameters:

  • v: The username.

type PullModelRequestBuilder added in v1.0.4

type PullModelRequestBuilder struct {
	Model    *string `json:"model"`
	Insecure *bool   `json:"insecure"`
	Username *string `json:"username"`
	Password *string `json:"password"`

	Stream           *bool                                     `json:"stream"`
	StreamBufferSize *int                                      `json:"-"`
	StreamFunc       func(r *PushPullModelResponse, err error) `json:"-"`
}

PullModelRequestBuilder represents the pull model API request.

type PushModelFunc

type PushModelFunc func(...func(modelFileBuilder *PushModelRequestBuilder)) (*PushPullModelResponse, error)

PushModelFunc performs a request to the Ollama API to push a model to the Ollama library. This requires registering for ollama.ai and adding a public key first.

For more information about the request, see the API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md

func (*PushModelFunc) WithInsecure

func (f *PushModelFunc) WithInsecure(v bool) func(*PushModelRequestBuilder)

WithInsecure allows insecure connections to the library. Only use this if you are pushing to your own library during development.

Parameters:

  • v: A boolean indicating whether to use insecure mode.

func (*PushModelFunc) WithModel added in v1.0.4

func (f *PushModelFunc) WithModel(v string) func(*PushModelRequestBuilder)

WithModel sets the model used for this request.

Parameters:

  • v: The model name.

func (*PushModelFunc) WithPassword added in v1.0.4

func (f *PushModelFunc) WithPassword(v string) func(*PushModelRequestBuilder)

WithPassword sets the password used for this request.

Parameters:

  • v: The password.

func (*PushModelFunc) WithStream

func (f *PushModelFunc) WithStream(v bool, bufferSize int, fc func(r *PushPullModelResponse, err error)) func(*PushModelRequestBuilder)

WithStream passes a function to allow reading the streamed responses.

Parameters:

  • v: A boolean indicating whether to use streaming.
  • bufferSize: The size of the stream buffer.
  • fc: The function that handles the streamed responses.

func (*PushModelFunc) WithUsername added in v1.0.4

func (f *PushModelFunc) WithUsername(v string) func(*PushModelRequestBuilder)

WithUsername sets the username used for this request.

Parameters:

  • v: The username.

type PushModelRequestBuilder added in v1.0.4

type PushModelRequestBuilder struct {
	Model    *string `json:"model"`
	Insecure *bool   `json:"insecure"`
	Username *string `json:"username"`
	Password *string `json:"password"`

	Stream           *bool                                     `json:"stream"`
	StreamBufferSize *int                                      `json:"-"`
	StreamFunc       func(r *PushPullModelResponse, err error) `json:"-"`
}

PushModelRequestBuilder represents the push model API request.

type PushPullModelResponse added in v1.0.4

type PushPullModelResponse struct {
	Status    string `json:"status"`
	Error     string `json:"error"`
	Digest    string `json:"digest"`
	Total     int64  `json:"total"`
	Completed int64  `json:"completed"`
}

PushPullModelResponse represents the API response for the "model pull" and "model push" endpoints.
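
With streaming enabled, the Completed and Total fields can report download progress; a sketch (assuming "fmt" is imported):

res, err := LLM.Models.Pull(
    LLM.Models.Pull.WithModel("llama3"),
    LLM.Models.Pull.WithStream(true, 512000, func(r *PushPullModelResponse, err error) {
        if err != nil || r.Total == 0 {
            return
        }
        fmt.Printf("%s: %d%%\n", r.Status, 100*r.Completed/r.Total)
    }),
)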

type ShowModelInfoFunc

type ShowModelInfoFunc func(builder ...func(reqBuilder *ShowModelRequestBuilder)) (*ShowModelInfoResponse, error)

ShowModelInfoFunc performs a request to the Ollama API to retrieve the information of a model.

For more information about the request, see the API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md

func (*ShowModelInfoFunc) WithModel added in v1.0.4

func (f *ShowModelInfoFunc) WithModel(v string) func(*ShowModelRequestBuilder)

WithModel sets the model name for this request.

Parameters:

  • v: The model name.

func (*ShowModelInfoFunc) WithOptions added in v1.0.4

func (f *ShowModelInfoFunc) WithOptions(v Options) func(*ShowModelRequestBuilder)

WithOptions sets the options for this request. It will override any settings set before, such as temperature and seed.

Parameters:

  • v: The options to set.

func (*ShowModelInfoFunc) WithSystem added in v1.0.4

func (f *ShowModelInfoFunc) WithSystem(v string) func(*ShowModelRequestBuilder)

WithSystem sets the system for this request.

Parameters:

  • v: The system message string.

func (*ShowModelInfoFunc) WithTemplate added in v1.0.4

func (f *ShowModelInfoFunc) WithTemplate(v string) func(*ShowModelRequestBuilder)

WithTemplate sets the template for this request.

Parameters:

  • v: The template string.

type ShowModelInfoResponse

type ShowModelInfoResponse struct {
	License    string       `json:"license"`
	Modelfile  string       `json:"modelfile"`
	Parameters string       `json:"parameters"`
	Template   string       `json:"template"`
	System     string       `json:"system"`
	Details    ModelDetails `json:"details"`
	Messages   []Message    `json:"messages"`
}

ShowModelInfoResponse represents the response for showing model information.

type ShowModelRequestBuilder added in v1.0.4

type ShowModelRequestBuilder struct {
	Model    *string  `json:"model"`
	System   *string  `json:"path"`
	Template *string  `json:"modelfile"`
	Options  *Options `json:"options"`
}

ShowModelRequestBuilder represents the show model information API request.

type StatusResponse

type StatusResponse struct {
	Status string `json:"status"`
	Error  string `json:"error"`
}

StatusResponse represents the API response for endpoints that return status updates.

type VersionFunc added in v1.0.4

type VersionFunc func() (*VersionResponse, error)

VersionFunc performs a request to the Ollama API and returns the Ollama server version.

For more information about the request, see the API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md

type VersionResponse added in v1.0.4

type VersionResponse struct {
	Version string `json:"version"`
}

VersionResponse represents the API response for the version endpoint.
