Documentation ¶
Overview ¶
Package ollama implements an API client for ollama (https://github.com/ollama/ollama/blob/main/docs/api.md).
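A minimal usage sketch, assuming an existing *Client (no constructor is documented in this section); the import path and the model name "llama3" are illustrative assumptions:

package ollamaexample

import (
	"context"
	"fmt"

	// Assumed import path; adjust to this package's actual module path.
	"github.com/mutablelogic/go-client/pkg/ollama"
)

// run pulls a model and generates a single response.
func run(ctx context.Context, c *ollama.Client) error {
	// Pull the model locally before first use ("llama3" is illustrative).
	if err := c.PullModel(ctx, "llama3"); err != nil {
		return err
	}
	status, err := c.ChatGenerate(ctx, "llama3", "Why is the sky blue?")
	if err != nil {
		return err
	}
	fmt.Println(status.Response)
	return nil
}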
Index ¶
- type ChatDuration
- type ChatOpt
- type ChatStatus
- type Client
- func (c *Client) ChatGenerate(ctx context.Context, model, prompt string, opts ...ChatOpt) (ChatStatus, error)
- func (c *Client) CopyModel(source, destination string) error
- func (c *Client) CreateModel(ctx context.Context, name, modelfile string) error
- func (c *Client) DeleteModel(name string) error
- func (o *Client) Generate(ctx context.Context, model agent.Model, context []agent.Context, ...) (*agent.Response, error)
- func (c *Client) ListModels() ([]Model, error)
- func (o *Client) Models(context.Context) ([]agent.Model, error)
- func (*Client) Name() string
- func (c *Client) PullModel(ctx context.Context, name string) error
- func (c *Client) ShowModel(name string) (ModelShow, error)
- func (o *Client) UserPrompt(v string) agent.Context
- type Model
- type ModelDetails
- type ModelShow
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type ChatDuration ¶
type ChatOpt ¶
type ChatOpt func(*reqChatCompletion) error
func OptFormatJSON ¶
func OptFormatJSON() ChatOpt
OptFormatJSON sets the output format to JSON. It is also important to instruct the model to use JSON in the prompt.
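A sketch of using OptFormatJSON together with a prompt that asks for JSON, as the doc comment advises; the import path and model name are assumptions:

package ollamaexample

import (
	"context"
	"encoding/json"

	"github.com/mutablelogic/go-client/pkg/ollama" // assumed import path
)

// jsonColors asks for structured output. OptFormatJSON constrains the
// response format, but the prompt must also tell the model to answer in JSON.
func jsonColors(ctx context.Context, c *ollama.Client) ([]string, error) {
	status, err := c.ChatGenerate(ctx, "llama3",
		"List three primary colors. Respond only with a JSON array of strings.",
		ollama.OptFormatJSON(),
	)
	if err != nil {
		return nil, err
	}
	var colors []string
	if err := json.Unmarshal([]byte(status.Response), &colors); err != nil {
		return nil, err
	}
	return colors, nil
}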
type ChatStatus ¶
type ChatStatus struct {
	Response           string `json:"response,omitempty"`
	Context            []int  `json:"context,omitempty"`
	PromptTokens       int    `json:"prompt_eval_count,omitempty"`
	ResponseTokens     int    `json:"total_eval_count,omitempty"`
	LoadDurationNs     int64  `json:"load_duration,omitempty"`
	PromptDurationNs   int64  `json:"prompt_eval_duration,omitempty"`
	ResponseDurationNs int64  `json:"response_eval_duration,omitempty"`
	TotalDurationNs    int64  `json:"total_duration,omitempty"`
}
func (ChatStatus) String ¶
func (m ChatStatus) String() string
type Client ¶
type Client struct {
*client.Client
}
func (*Client) ChatGenerate ¶
func (c *Client) ChatGenerate(ctx context.Context, model, prompt string, opts ...ChatOpt) (ChatStatus, error)
Generate a response, given a model and a prompt.
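A sketch of reading the token and timing statistics that ChatGenerate returns; the fields follow the ChatStatus definition above, and the import path and model name are assumptions:

package ollamaexample

import (
	"context"
	"fmt"

	"github.com/mutablelogic/go-client/pkg/ollama" // assumed import path
)

// report generates once and prints token and timing statistics
// taken from the ChatStatus result.
func report(ctx context.Context, c *ollama.Client) error {
	status, err := c.ChatGenerate(ctx, "llama3", "Say hello in French.")
	if err != nil {
		return err
	}
	fmt.Printf("%s\n(prompt tokens: %d, response tokens: %d, total: %dms)\n",
		status.Response, status.PromptTokens, status.ResponseTokens,
		status.TotalDurationNs/1e6)
	return nil
}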
func (*Client) CreateModel ¶
func (c *Client) CreateModel(ctx context.Context, name, modelfile string) error
Create a new model with a name and the contents of a Modelfile.
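A sketch of creating a derived model from a Modelfile passed as a string; the Modelfile contents, model names, and import path are illustrative assumptions:

package ollamaexample

import (
	"context"

	"github.com/mutablelogic/go-client/pkg/ollama" // assumed import path
)

// The Modelfile is passed as a plain string; the syntax is ollama's.
const terseModelfile = "FROM llama3\nSYSTEM You are a terse assistant."

// createTerse creates a derived model named "terse-llama" (illustrative).
func createTerse(ctx context.Context, c *ollama.Client) error {
	return c.CreateModel(ctx, "terse-llama", terseModelfile)
}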
func (*Client) DeleteModel ¶
func (c *Client) DeleteModel(name string) error
Delete a local model by name.
func (*Client) Generate ¶ added in v1.0.10
func (o *Client) Generate(ctx context.Context, model agent.Model, context []agent.Context, opts ...agent.Opt) (*agent.Response, error)
Generate a response from a text message.
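A sketch of the agent-style interface built from the Models, UserPrompt, and Generate signatures listed in the index; the agent package's import path is an assumption and its Response fields are not documented here:

package ollamaexample

import (
	"context"
	"errors"
	"fmt"

	"github.com/mutablelogic/go-client/pkg/agent"  // assumed import path for the agent types
	"github.com/mutablelogic/go-client/pkg/ollama" // assumed import path
)

// agentGenerate lists the available models, wraps a prompt with
// UserPrompt, and calls Generate on the first model.
func agentGenerate(ctx context.Context, c *ollama.Client) error {
	models, err := c.Models(ctx)
	if err != nil {
		return err
	}
	if len(models) == 0 {
		return errors.New("no models available")
	}
	resp, err := c.Generate(ctx, models[0], []agent.Context{c.UserPrompt("Why is the sky blue?")})
	if err != nil {
		return err
	}
	fmt.Println(resp) // *agent.Response fields are not documented in this section
	return nil
}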
type Model ¶
type Model struct {
	Name       string       `json:"name"`
	Model      string       `json:"model"`
	ModifiedAt time.Time    `json:"modified_at"`
	Size       int64        `json:"size"`
	Digest     string       `json:"digest"`
	Details    ModelDetails `json:"details"`
}
Model is a Docker image of an ollama model.
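A sketch of enumerating local models with ListModels and printing fields from the Model and ModelDetails structs; the import path is an assumption:

package ollamaexample

import (
	"fmt"

	"github.com/mutablelogic/go-client/pkg/ollama" // assumed import path
)

// listLocal prints the name, size, and quantization of each local model.
func listLocal(c *ollama.Client) error {
	models, err := c.ListModels()
	if err != nil {
		return err
	}
	for _, m := range models {
		fmt.Printf("%-30s %6d MB  %s\n", m.Name, m.Size/(1<<20), m.Details.QuantizationLevel)
	}
	return nil
}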
type ModelDetails ¶
type ModelDetails struct {
	ParentModel       string   `json:"parent_model,omitempty"`
	Format            string   `json:"format"`
	Family            string   `json:"family"`
	Families          []string `json:"families"`
	ParameterSize     string   `json:"parameter_size"`
	QuantizationLevel string   `json:"quantization_level"`
}
ModelDetails are the details of the model.
func (ModelDetails) String ¶
func (m ModelDetails) String() string