redlama

package
v0.0.0-...-a240560
Published: Aug 3, 2024 License: MIT Imports: 9 Imported by: 0

Documentation

Overview

The ollama package is for interacting with a local Ollama server.

A local Redis instance running on WSL 2 serves cached responses.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func CheckLocalConnetion

func CheckLocalConnetion() (string, int, error)

CheckLocalConnetion returns a string describing the Ollama server status.

Ollama must be running (hosted locally); otherwise this function returns an error.

Returns:

  • string: response describing the Ollama server status
  • int: HTTP status code
  • error: error message, if any

func RedisClient

func RedisClient(ctx context.Context, db_num int) (*redis.Client, error)

RedisClient connects to the local Redis server, hosted on WSL 2, and returns a Redis client.

Pass the returned client to functions that use Redis to cache data.

Parameters:

  • db_num: int, Redis database number [0-15]

Returns:

  • *redis.Client: connected Redis client
  • error: error message, if any

Types

type OllamaOutput

type OllamaOutput struct {
	Model             string `json:"model"`
	CreatedAt         string `json:"created_at"`
	Response          string `json:"response"`
	Done              bool   `json:"done"`
	DoneReason        string `json:"done_reason"`
	Context           []int  `json:"context"`
	TotalDuration     int    `json:"total_duration"`
	LoadDuration      int    `json:"load_duration"`
	PromptValCount    int    `json:"prompt_val_count"`
	PromptValDuration int    `json:"prompt_eval_duration"`
	EvalCount         int    `json:"eval_count"`
	EvalDuration      int    `json:"eval_duration"`
}

OllamaOutput is the struct for decoding the JSON response returned from the Ollama server.

func PromptOllama

func PromptOllama(ctx context.Context, prompt string, model string, cache bool, redisClient *redis.Client) (*OllamaOutput, int, error)

PromptOllama returns the model's response, decoded from JSON, and an HTTP status code.

Ollama must be running (hosted locally); otherwise this function returns an error.

Parameters:

  • prompt: string, the message or question to ask
  • model: string, the model Ollama will use for the prompt
  • cache: bool, set to true to use the cached response from Redis, or false to reset the cached response
  • redisClient: *redis.Client, pass the output of the RedisClient function as this parameter

Returns:

  • *OllamaOutput: response decoded from JSON into a struct
  • int: HTTP status code
  • error: error message, if any
