llm

package
v0.24.0
Published: Jul 31, 2025 License: AGPL-3.0 Imports: 8 Imported by: 0

README

LLM Package

Go package for Large Language Model (LLM) operations.

Installation

go get github.com/dracory/base/llm

Usage

Basic Usage
package main

import (
	"context"
	"fmt"
	"github.com/dracory/base/llm"
)

func main() {
	// Create a model with OpenAI provider for text output
	model, err := llm.TextModel(llm.ProviderOpenAI)
	if err != nil {
		fmt.Printf("Error creating model: %v\n", err)
		return
	}

	// Generate a completion
	response, err := model.Complete(context.Background(), llm.CompletionRequest{
		UserPrompt:  "Once upon a time",
		MaxTokens:   100,
		Temperature: 0.7,
	})
	if err != nil {
		fmt.Printf("Error generating completion: %v\n", err)
		return
	}

	fmt.Println("Generated text:", response.Text)
	fmt.Println("Tokens used:", response.TokensUsed)
}
Creating Models with Different Output Formats

The package provides convenience functions for creating models with specific output formats:

// For text output
textModel, err := llm.TextModel(llm.ProviderOpenAI)

// For JSON output
jsonModel, err := llm.JSONModel(llm.ProviderOpenAI)

// For image output
imageModel, err := llm.ImageModel(llm.ProviderOpenAI)
Using Different Providers

The package supports multiple LLM providers:

// OpenAI
openaiModel, err := llm.TextModel(llm.ProviderOpenAI)

// Google Gemini
geminiModel, err := llm.TextModel(llm.ProviderGemini)

// Google Vertex AI
vertexModel, err := llm.TextModel(llm.ProviderVertex)

// Anthropic (Claude)
anthropicModel, err := llm.TextModel(llm.ProviderAnthropic)

// Mock model for testing
mockModel, err := llm.TextModel(llm.ProviderMock)
Advanced Configuration

You can create a model with custom configuration options:

options := llm.ModelOptions{
	Provider:     llm.ProviderOpenAI,
	OutputFormat: llm.OutputFormatJSON,
	ApiKey:       "your-api-key",
	Model:        "gpt-4",
	MaxTokens:    2048,
	Temperature:  0.5,
	Verbose:      true,
}

model, err := llm.NewModel(options)
if err != nil {
	// Handle error
}
ModelInterface

The package defines a ModelInterface that all models implement (abbreviated here; the full interface is listed under Types in the documentation below):

type ModelInterface interface {
	// Complete generates a completion for the provided prompt
	Complete(ctx context.Context, request CompletionRequest) (CompletionResponse, error)
}

Available Output Formats

The package supports the following output formats:

  • OutputFormatText - Plain text output
  • OutputFormatJSON - JSON formatted output
  • OutputFormatXML - XML formatted output
  • OutputFormatYAML - YAML formatted output
  • OutputFormatEnum - Enumeration values
  • OutputFormatImagePNG - PNG image output
  • OutputFormatImageJPG - JPEG image output

License

This package is part of the Dracory/Base project and is subject to the same licensing terms.

Documentation

Overview

Package llm provides a lightweight interface for Large Language Model operations.

Index

Constants

const (
	GeminiModel1Pro   = "gemini-pro"
	GeminiModel1Flash = "gemini-pro-flash"
	GeminiModel2Pro   = "gemini-2-pro"
	GeminiModel2Flash = "gemini-2-flash"
)

Gemini model constants

const (
	OpenAIModelGPT35Turbo = "gpt-3.5-turbo"
	OpenAIModelGPT4       = "gpt-4"
	OpenAIModelGPT4Turbo  = "gpt-4-turbo-preview"
	OpenAIModelGPT4Vision = "gpt-4-vision-preview"
	OpenAIModelGPT4OMini  = "gpt-4o-mini"
	OpenAIModelGPT4O      = "gpt-4o"
)

OpenAI model constants

const (
	VertexModelGemini20Flash         = "gemini-2.0-flash-001"
	VertexModelGemini20FlashLite     = "gemini-2.0-flash-lite-001"
	VertexModelGemini20FlashImageGen = "gemini-2.0-flash-exp-image-generation"
	VertexModelGemini25ProPreview    = "gemini-2.5-pro-preview-03-25"
	VertexModelGemini15Pro           = "gemini-1.5-pro"   // supported but older
	VertexModelGemini15Flash         = "gemini-1.5-flash" // supported but older
)

Vertex AI model constants

Variables

var ErrInvalidRequest = errors.New("invalid request")

ErrInvalidRequest is returned when a request is invalid

var ErrServiceUnavailable = errors.New("service unavailable")

ErrServiceUnavailable is returned when the LLM service is unavailable

Functions

func CountTokens

func CountTokens(text string) int

CountTokens provides a simple approximation of token counting. Note: this is a basic implementation and is not accurate for all models; production code should use a model-specific tokenizer.
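The documentation does not specify the heuristic CountTokens uses. A common rule of thumb for English text is roughly four characters per token, so a sketch of that kind of estimate might look like the following (countTokensApprox is a hypothetical stand-in, not the package's actual implementation):

```go
package main

import "fmt"

// countTokensApprox is a hypothetical stand-in for llm.CountTokens.
// It assumes roughly four characters per token, a common rule of thumb
// for English text; real tokenizers are model-specific.
func countTokensApprox(text string) int {
	if len(text) == 0 {
		return 0
	}
	n := len(text) / 4
	if n == 0 {
		return 1 // any non-empty text costs at least one token
	}
	return n
}

func main() {
	// "Once upon a time" is 16 characters, so the estimate is 4 tokens.
	fmt.Println(countTokensApprox("Once upon a time"))
}
```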

func EstimateMaxTokens

func EstimateMaxTokens(promptTokens, contextWindowSize int) int

EstimateMaxTokens estimates the maximum number of tokens that could be generated, given the model's context window size and the prompt length.
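The calculation the doc comment describes is the remaining budget after the prompt: whatever part of the context window the prompt does not consume is available for generation. A local sketch of that arithmetic (an illustration of the documented behaviour, not the package's code):

```go
package main

import "fmt"

// estimateMaxTokens sketches the calculation EstimateMaxTokens's doc
// comment describes: the generation budget is whatever remains of the
// context window after the prompt, clamped at zero.
func estimateMaxTokens(promptTokens, contextWindowSize int) int {
	remaining := contextWindowSize - promptTokens
	if remaining < 0 {
		return 0 // the prompt already fills (or overflows) the window
	}
	return remaining
}

func main() {
	// A 4096-token window with a 1000-token prompt leaves 3096 tokens.
	fmt.Println(estimateMaxTokens(1000, 4096))
}
```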

func Version

func Version() string

Version returns the current version of the package

Types

type CompletionRequest

type CompletionRequest struct {
	// SystemPrompt contains instructions for the LLM
	SystemPrompt string `json:"system_prompt"`

	// UserPrompt contains the actual query or content to process
	UserPrompt string `json:"user_prompt"`

	// MaxTokens is the maximum number of tokens to generate
	MaxTokens int `json:"max_tokens"`

	// Temperature controls randomness in generation (0.0-1.0)
	Temperature float64 `json:"temperature"`
}

CompletionRequest represents a request to generate a completion

type CompletionResponse

type CompletionResponse struct {
	// Text is the generated completion text
	Text string `json:"text"`

	// TokensUsed is the number of tokens used for this request
	TokensUsed int `json:"tokens_used"`
}

CompletionResponse represents a response from a completion request

type MockModel

type MockModel struct {
	// Response is the predefined response to return
	Response CompletionResponse

	// Error is the predefined error to return
	Error error
	// contains filtered or unexported fields
}

MockModel implements the ModelInterface for testing purposes

func NewMockModel

func NewMockModel() *MockModel

NewMockModel creates a new mock model with a default response

func NewMockModelWithOptions

func NewMockModelWithOptions(options ModelOptions) *MockModel

NewMockModelWithOptions creates a new mock model with the specified options

func (*MockModel) Complete

func (m *MockModel) Complete(ctx context.Context, request CompletionRequest) (CompletionResponse, error)

Complete implements the ModelInterface

func (*MockModel) GetApiKey

func (m *MockModel) GetApiKey() string

GetApiKey implements the ModelInterface

func (*MockModel) GetMaxTokens

func (m *MockModel) GetMaxTokens() int

GetMaxTokens implements the ModelInterface

func (*MockModel) GetModel

func (m *MockModel) GetModel() string

GetModel implements the ModelInterface

func (*MockModel) GetOutputFormat

func (m *MockModel) GetOutputFormat() OutputFormat

GetOutputFormat implements the ModelInterface

func (*MockModel) GetProjectID

func (m *MockModel) GetProjectID() string

GetProjectID implements the ModelInterface

func (*MockModel) GetProvider

func (m *MockModel) GetProvider() Provider

GetProvider implements the ModelInterface

func (*MockModel) GetRegion

func (m *MockModel) GetRegion() string

GetRegion implements the ModelInterface

func (*MockModel) GetTemperature

func (m *MockModel) GetTemperature() float64

GetTemperature implements the ModelInterface

func (*MockModel) GetVerbose

func (m *MockModel) GetVerbose() bool

GetVerbose implements the ModelInterface

type ModelInterface

type ModelInterface interface {
	// Complete generates a completion for the provided prompt
	Complete(ctx context.Context, request CompletionRequest) (CompletionResponse, error)

	// GetProvider returns the provider of the model
	GetProvider() Provider

	// GetOutputFormat returns the output format of the model
	GetOutputFormat() OutputFormat

	// GetApiKey returns the API key of the model
	GetApiKey() string

	// GetModel returns the model name
	GetModel() string

	// GetMaxTokens returns the maximum number of tokens of the model
	GetMaxTokens() int

	// GetTemperature returns the temperature of the model
	GetTemperature() float64

	// GetProjectID returns the project ID of the model
	GetProjectID() string

	// GetRegion returns the region of the model
	GetRegion() string

	// GetVerbose reports whether verbose logging is enabled
	GetVerbose() bool
}

ModelInterface defines the interface for interacting with Large Language Models

func ImageModel

func ImageModel(provider Provider) (ModelInterface, error)

ImageModel creates an LLM model for image output

func JSONModel

func JSONModel(provider Provider) (ModelInterface, error)

JSONModel creates an LLM model for JSON output

func NewModel

func NewModel(options ModelOptions) (ModelInterface, error)

NewModel creates a new LLM model based on the provided options

func TextModel

func TextModel(provider Provider) (ModelInterface, error)

TextModel creates an LLM model for text output

type ModelOptions

type ModelOptions struct {
	Provider     Provider
	OutputFormat OutputFormat
	ApiKey       string
	Model        string
	MaxTokens    int
	Temperature  float64
	ProjectID    string
	Region       string
	Verbose      bool
}

ModelOptions contains configuration options for creating an LLM model

type OutputFormat

type OutputFormat string

OutputFormat specifies the desired output format from the LLM

const (
	OutputFormatText     OutputFormat = "text"
	OutputFormatJSON     OutputFormat = "json"
	OutputFormatXML      OutputFormat = "xml"
	OutputFormatYAML     OutputFormat = "yaml"
	OutputFormatEnum     OutputFormat = "enum"
	OutputFormatImagePNG OutputFormat = "image/png"
	OutputFormatImageJPG OutputFormat = "image/jpeg"
)

type Provider

type Provider string

Provider represents an LLM provider type

const (
	ProviderOpenAI    Provider = "openai"
	ProviderGemini    Provider = "gemini"
	ProviderVertex    Provider = "vertex"
	ProviderMock      Provider = "mock"
	ProviderAnthropic Provider = "anthropic"
	ProviderCustom    Provider = "custom"
)

Supported LLM providers
