provider

package
v0.0.0-...-e74ad27

Published: May 14, 2026 License: AGPL-3.0 Imports: 12 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Config

type Config struct {
	APIBase string `json:"apiBase"`          // Base URL (e.g. "https://api.groq.com/openai/v1")
	APIKey  string `json:"apiKey,omitempty"` // API key or "$ENV_VAR" reference
	Type    string `json:"type"`             // "openai" or "anthropic"
}

Config defines the settings for a provider loaded from configuration.
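For illustration, one Config entry as it might appear in a JSON configuration file (the values here are hypothetical; the "$GROQ_API_KEY" value uses the "$ENV_VAR" reference form described above):

```json
{
  "apiBase": "https://api.groq.com/openai/v1",
  "apiKey": "$GROQ_API_KEY",
  "type": "openai"
}
```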

func (Config) ResolveAPIKey

func (c Config) ResolveAPIKey() string

ResolveAPIKey resolves the API key, dereferencing env var references.

type Ollama

type Ollama struct {
	// contains filtered or unexported fields
}

Ollama implements the Provider interface for Ollama's native /api/chat endpoint.

We can't use the OpenAI-compatible /v1/chat/completions endpoint for Ollama because it silently drops the `options` field — meaning num_ctx can't be set, so Ollama defaults to 2048 tokens regardless of the model and silently truncates conversation history. The native /api/chat endpoint accepts options properly.

func NewOllama

func NewOllama(name, baseURL string) *Ollama

NewOllama creates a new native Ollama provider. The baseURL should NOT include "/v1" — e.g. "http://localhost:11434" (we hit /api/chat directly).

func (*Ollama) Name

func (o *Ollama) Name() string

func (*Ollama) SendMessage

func (o *Ollama) SendMessage(ctx context.Context, httpClient *http.Client, req *api.MessagesRequest) (*api.MessageResp, error)

func (*Ollama) StreamMessages

func (o *Ollama) StreamMessages(ctx context.Context, httpClient *http.Client, req *api.MessagesRequest) (<-chan api.StreamEvent, <-chan error)

func (*Ollama) WithNoToolsModels

func (o *Ollama) WithNoToolsModels(patterns []string) *Ollama

WithNoToolsModels sets the list of model patterns that don't support tools.

func (*Ollama) WithNumCtx

func (o *Ollama) WithNumCtx(n int) *Ollama

WithNumCtx sets the num_ctx context window passed in every request.

type OpenAI

type OpenAI struct {
	// contains filtered or unexported fields
}

OpenAI implements the Provider interface for any OpenAI-compatible API (OpenAI, Groq, Ollama, Together, vLLM, etc.).

func NewOpenAI

func NewOpenAI(name, baseURL, apiKey string) *OpenAI

NewOpenAI creates a new OpenAI-compatible provider. When baseURL points to api.openai.com, max_completion_tokens is used instead of max_tokens (required by GPT-4.1, GPT-5, o-series, and all modern OpenAI models).
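That field selection can be sketched as a small helper (the name `tokenLimitField` is hypothetical; the provider's internals aren't shown here):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// tokenLimitField returns the JSON field name used to carry the
// output-token limit. api.openai.com requires max_completion_tokens
// on newer models (GPT-4.1, GPT-5, o-series), while most other
// OpenAI-compatible servers still expect max_tokens.
func tokenLimitField(baseURL string) string {
	u, err := url.Parse(baseURL)
	if err == nil && strings.HasSuffix(u.Hostname(), "api.openai.com") {
		return "max_completion_tokens"
	}
	return "max_tokens"
}

func main() {
	fmt.Println(tokenLimitField("https://api.openai.com/v1"))      // max_completion_tokens
	fmt.Println(tokenLimitField("https://api.groq.com/openai/v1")) // max_tokens
}
```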

func (*OpenAI) ListModels

func (o *OpenAI) ListModels(ctx context.Context, httpClient *http.Client) ([]api.ModelInfo, error)

ListModels queries the /v1/models endpoint to discover available models. This works with OpenAI, Groq, Ollama, vLLM, and other OpenAI-compatible APIs. It returns a slice of api.ModelInfo; the method satisfies the api.ModelLister interface.

func (*OpenAI) Name

func (o *OpenAI) Name() string

func (*OpenAI) SendMessage

func (o *OpenAI) SendMessage(ctx context.Context, httpClient *http.Client, req *api.MessagesRequest) (*api.MessageResp, error)

func (*OpenAI) StreamMessages

func (o *OpenAI) StreamMessages(ctx context.Context, httpClient *http.Client, req *api.MessagesRequest) (<-chan api.StreamEvent, <-chan error)

func (*OpenAI) WithNumCtx

func (o *OpenAI) WithNumCtx(n int) *OpenAI

WithNumCtx sets the num_ctx (context window size) sent in every request. Ollama defaults to 2048 tokens regardless of the model's actual context length, silently truncating conversation history once the system prompt + history exceeds that limit. Set this to e.g. 32768 for local models.
