openai

package module
v0.7.7
Published: Feb 13, 2024 License: Apache-2.0 Imports: 18 Imported by: 2

README

openai


Zero-dependency (unofficial) Go client for the OpenAI API endpoints. Built upon the great work done here.

Goals

Why did we bother to refactor the original library? We have 5 main goals:

  1. Use more idiomatic Go style.
  2. Better documentation.
  3. Make request parameters whose Go default value is a valid parameter value (and differs from OpenAI's defaults) pointers (See here for more).
  4. Have a consistent style throughout.
  5. Implement all endpoints.

We hope that, by doing the above, future maintenance will also be trivial. Read more on the original refactoring PR here.

Installation
go get github.com/fabiustech/openai
Example Usage
package main

import (
	"context"
	"fmt"
	"os"
	
	"github.com/fabiustech/openai"
	"github.com/fabiustech/openai/models"
	"github.com/fabiustech/openai/params"
)

func main() {
	var key, ok = os.LookupEnv("OPENAI_API_KEY")
	if !ok {
		panic("env variable OPENAI_API_KEY not set")
	}
	var c = openai.NewClient(key)

	var resp, err = c.CreateCompletion(context.Background(), &openai.CompletionRequest[models.Completion]{
		Model:       models.TextDavinci003,
		MaxTokens:   100,
		Prompt:      "Lorem ipsum",
		Temperature: params.Optional(0.0),
	})
	if err != nil {
		fmt.Println(err)
		return
	}

	fmt.Println(resp.Choices[0].Text)
}
Contributing

Contributions are welcome and encouraged! Feel free to report any bugs / feature requests as issues.

Documentation

Overview

Package openai is a client library for interacting with the OpenAI API. It supports all non-deprecated endpoints (as well as the Engines endpoint).

Index

Constants

This section is empty.

Variables

View Source
var ErrBadPrefix = errors.New("unexpected event received")

ErrBadPrefix is returned if we attempt to parse a read from the response body that doesn't begin with "data: ".

Functions

This section is empty.

Types

type AudioTranscriptionRequest added in v0.7.4

type AudioTranscriptionRequest struct {
	// File is the audio file object (not file name) to transcribe, in one of these formats:
	// mp3, mp4, mpeg, mpga, m4a, wav, or webm.
	File *os.File
	// Model is the ID of the model to use. Only whisper-1 is currently available.
	Model models.Audio
	// Prompt is optional text to guide the model's style or continue a previous audio segment. The prompt should match
	// the audio language.
	Prompt *string
	// ResponseFormat is the format of the transcript output, in one of these options:
	// json, text, srt, verbose_json, or vtt.
	ResponseFormat *audio.Format
	// Temperature is the sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random,
	// while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log
	// probability to automatically increase the temperature until certain thresholds are hit.
	Temperature *float64
	// Language is the language of the input audio. Supplying the input language in ISO-639-1 format will improve
	// accuracy and latency.
	Language *string
}

AudioTranscriptionRequest is the request body for the audio/transcriptions endpoint.

type ChatCompletionChoice added in v0.5.0

type ChatCompletionChoice struct {
	Index        int          `json:"index"`
	Message      *ChatMessage `json:"message"`
	FinishReason string       `json:"finish_reason"`
}

ChatCompletionChoice represents one of possible chat completions.

type ChatCompletionRequest added in v0.5.0

type ChatCompletionRequest struct {
	// Model specifies the ID of the model to use.
	// See more here: https://platform.openai.com/docs/models/overview.
	Model models.ChatCompletion `json:"model"`
	// Messages are the messages to generate chat completions for, in the chat format.
	Messages []*ChatMessage `json:"messages"`
	// Functions are a list of functions the model may generate JSON inputs for.
	Functions []*Function `json:"functions,omitempty"`
	// FunctionCall controls how the model responds to function calls. "none" means the model does not call a function,
	// and responds to the end-user. "auto" means the model can pick between responding to the end-user or calling a
	// function. Specifying a particular function via {"name": "my_function"} forces the model to call that function.
	// "none" is the default when no functions are present. "auto" is the default if functions are present.
	// Use FunctionCallByName to generate a FunctionCall value which will explicitly call a function named |name|.
	FunctionCall *FunctionCall `json:"function_call,omitempty"`
	// Temperature specifies what sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the
	// output more random, while lower values like 0.2 will make it more focused and deterministic. OpenAI generally
	// recommends altering this or top_p but not both.
	// Defaults to 1.
	Temperature *float64 `json:"temperature,omitempty"`
	// TopP specifies an alternative to sampling with temperature, called nucleus sampling, where the model considers
	// the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10%
	// probability mass are considered. OpenAI generally recommends altering this or temperature but not both.
	// Defaults to 1.
	TopP *float64 `json:"top_p,omitempty"`
	// N specifies how many chat completion choices to generate for each input message.
	// Defaults to 1.
	N int `json:"n,omitempty"`
	// Stop specifies up to 4 sequences where the API will stop generating further tokens.
	Stop []string `json:"stop,omitempty"`
	// MaxTokens specifies the maximum number of tokens to generate in the chat completion. The total length of input
	// tokens and generated tokens is limited by the model's context length.
	// Defaults to Infinity.
	MaxTokens int `json:"max_tokens,omitempty"`
	// PresencePenalty can be a number between -2.0 and 2.0. Positive values penalize new tokens based on whether they
	// appear in the text so far, increasing the model's likelihood to talk about new topics.
	// Defaults to 0.
	PresencePenalty float32 `json:"presence_penalty,omitempty"`
	// FrequencyPenalty can be a number between -2.0 and 2.0. Positive values penalize new tokens based on their
	// existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
	// Defaults to 0.
	FrequencyPenalty float32 `json:"frequency_penalty,omitempty"`
	// LogitBias modifies the likelihood of specified tokens appearing in the completion. Accepts a json object that
	// maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100.
	// Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will
	// vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100
	// or 100 should result in a ban or exclusive selection of the relevant token.
	// Defaults to null.
	LogitBias map[string]int `json:"logit_bias,omitempty"`
	// User is a unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
	// See more here: https://beta.openai.com/docs/guides/safety-best-practices/end-user-ids
	User string `json:"user,omitempty"`
}

ChatCompletionRequest contains all relevant fields for requests to the chat completions endpoint.
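A minimal chat request can be sketched as follows. Note that the models constant name GPT3Dot5Turbo is an assumption for illustration; use whichever models.ChatCompletion value your version of the package exposes.

```go
package main

import (
	"context"
	"fmt"

	"github.com/fabiustech/openai"
	"github.com/fabiustech/openai/models"
)

func main() {
	var c = openai.NewClient("your-api-key")

	// models.GPT3Dot5Turbo is an assumed constant name; substitute the
	// models.ChatCompletion value exposed by your version of the package.
	var resp, err = c.CreateChatCompletion(context.Background(), &openai.ChatCompletionRequest{
		Model: models.GPT3Dot5Turbo,
		Messages: []*openai.ChatMessage{
			{Role: openai.System, Content: "You are a helpful assistant."},
			{Role: openai.User, Content: "Say hello."},
		},
	})
	if err != nil {
		panic(err)
	}

	fmt.Println(resp.Choices[0].Message.Content)
}
```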

type ChatCompletionResponse added in v0.5.0

type ChatCompletionResponse struct {
	ID      string                  `json:"id"`
	Object  objects.Object          `json:"object"`
	Created uint64                  `json:"created"`
	Choices []*ChatCompletionChoice `json:"choices"`
	Usage   *Usage                  `json:"usage"`
}

ChatCompletionResponse is the response from the chat completions endpoint.

type ChatMessage added in v0.5.0

type ChatMessage struct {
	Role         ChatRole              `json:"role"`
	Content      string                `json:"content,omitempty"`
	FunctionCall *FunctionCallResponse `json:"function_call,omitempty"`
}

ChatMessage represents a message in a chat completion.

type ChatRole added in v0.5.0

type ChatRole string

ChatRole is an enum of the various message roles.

const (
	// System represents a system message, which helps set the behavior of the assistant.
	System ChatRole = "system"
	// User represents a message from a user, which helps instruct the assistant. They can be generated by the end users
	// of an application, or set by a developer as an instruction.
	User ChatRole = "user"
	// Assistant represents a message from the assistant. Assistant messages help store prior responses. They can also
	// be written by a developer to help give examples of desired behavior.
	Assistant ChatRole = "assistant"
	// RoleFunction represents a call to a function.
	RoleFunction ChatRole = "function"
)

type Client

type Client struct {
	// contains filtered or unexported fields
}

Client is an OpenAI API client.

func NewClient

func NewClient(token string) *Client

NewClient creates a new OpenAI API client.

func NewClientWithOrg

func NewClientWithOrg(token, org string) *Client

NewClientWithOrg creates a new OpenAI API client for the specified Organization ID.

func (*Client) CancelFineTune

func (c *Client) CancelFineTune(ctx context.Context, id string) (*FineTuneResponse, error)

CancelFineTune immediately cancels a fine-tune job.

func (*Client) CreateChatCompletion added in v0.5.0

func (c *Client) CreateChatCompletion(ctx context.Context, cr *ChatCompletionRequest) (*ChatCompletionResponse, error)

CreateChatCompletion creates a chat completion for the provided prompt and parameters.

func (*Client) CreateCompletion

func (c *Client) CreateCompletion(ctx context.Context, cr *CompletionRequest[models.Completion]) (*CompletionResponse[models.Completion], error)

CreateCompletion creates a completion for the provided prompt and parameters.

func (*Client) CreateEdit

func (c *Client) CreateEdit(ctx context.Context, er *EditsRequest) (*EditsResponse, error)

CreateEdit creates a new edit for the provided input, instruction, and parameters.

func (*Client) CreateEmbeddings

func (c *Client) CreateEmbeddings(ctx context.Context, request *EmbeddingRequest) (*EmbeddingResponse, error)

CreateEmbeddings creates an embedding vector representing the input text.

func (*Client) CreateFineTune

func (c *Client) CreateFineTune(ctx context.Context, ftr *FineTuneRequest) (*FineTuneResponse, error)

CreateFineTune creates a job that fine-tunes a specified model from a given dataset. *FineTuneResponse includes details of the enqueued job including job status and the name of the fine-tuned models once complete.

func (*Client) CreateFineTunedCompletion

func (c *Client) CreateFineTunedCompletion(ctx context.Context, cr *CompletionRequest[models.FineTunedModel]) (*CompletionResponse[models.FineTunedModel], error)

CreateFineTunedCompletion creates a completion for the provided prompt and parameters, using a fine-tuned model.

func (*Client) CreateImage

func (c *Client) CreateImage(ctx context.Context, ir *CreateImageRequest) (*ImageResponse, error)

CreateImage creates an image (or images) given a prompt.

func (*Client) CreateModeration

func (c *Client) CreateModeration(ctx context.Context, mr *ModerationRequest) (*ModerationResponse, error)

CreateModeration classifies if text violates OpenAI's Content Policy.

func (*Client) CreateStreamingCompletion added in v0.2.0

func (c *Client) CreateStreamingCompletion(ctx context.Context, cr *CompletionRequest[models.Completion]) (<-chan *CompletionResponse[models.Completion], <-chan error, error)

CreateStreamingCompletion returns two channels: the first is sent *CompletionResponse[models.Completion] values as they are received from the API, and the second is sent any error(s) encountered while receiving / parsing responses. Both channels are closed on receipt of the "[DONE]" event or upon the first encountered error. An error is returned directly if anything fails prior to receiving an initial response from the API.
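The two channels are intended to be drained until they close. A minimal consumption sketch, using string as a stand-in for *CompletionResponse[models.Completion] so the pattern is visible on its own:

```go
package main

import "fmt"

// consume drains a response channel and an error channel of the shape
// CreateStreamingCompletion returns, stopping when both are closed.
// string stands in for *CompletionResponse[models.Completion].
func consume(respCh <-chan string, errCh <-chan error) ([]string, error) {
	var out []string
	for respCh != nil || errCh != nil {
		select {
		case r, ok := <-respCh:
			if !ok {
				respCh = nil // channel closed; stop selecting on it
				continue
			}
			out = append(out, r)
		case err, ok := <-errCh:
			if !ok {
				errCh = nil
				continue
			}
			return out, err // both channels close after the first error
		}
	}
	return out, nil
}

func main() {
	respCh := make(chan string, 2)
	errCh := make(chan error)
	respCh <- "Hello"
	respCh <- ", world"
	close(respCh)
	close(errCh)

	chunks, err := consume(respCh, errCh)
	fmt.Println(chunks, err)
}
```

Against the real client, you would pass the two channels returned by CreateStreamingCompletion in place of the stand-ins.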

func (*Client) DeleteFile

func (c *Client) DeleteFile(ctx context.Context, id string) error

DeleteFile deletes a file.

func (*Client) DeleteFineTune

func (c *Client) DeleteFineTune(ctx context.Context, id string) (*FineTuneDeletionResponse, error)

DeleteFineTune deletes a fine-tuned model. You must have the Owner role in your organization.

func (*Client) EditImage

func (c *Client) EditImage(ctx context.Context, eir *EditImageRequest) (*ImageResponse, error)

EditImage creates an edited or extended image (or images) given an original image and a prompt.

func (*Client) GetEngine deprecated

func (c *Client) GetEngine(ctx context.Context, id string) (*Engine, error)

GetEngine retrieves a model instance, providing basic information about it such as the owner and availability.

Deprecated: Please use their replacement, Models, instead. https://beta.openai.com/docs/api-reference/models

func (*Client) ImageVariation

func (c *Client) ImageVariation(ctx context.Context, vir *VariationImageRequest) (*ImageResponse, error)

ImageVariation creates a variation (or variations) of a given image.

func (*Client) ListEngines deprecated

func (c *Client) ListEngines(ctx context.Context) (*List[*Engine], error)

ListEngines lists the currently available engines, and provides basic information about each option such as the owner and availability.

Deprecated: Please use their replacement, Models, instead. https://beta.openai.com/docs/api-reference/models

func (*Client) ListFiles

func (c *Client) ListFiles(ctx context.Context) (*List[*File], error)

ListFiles returns a list of files that belong to the user's organization.

func (*Client) ListFineTuneEvents

func (c *Client) ListFineTuneEvents(ctx context.Context, id string) (*List[*Event], error)

ListFineTuneEvents returns fine-grained status updates for a fine-tune job. TODO: Support streaming (in a different method).

func (*Client) ListFineTunes

func (c *Client) ListFineTunes(ctx context.Context) (*List[*FineTuneResponse], error)

ListFineTunes lists your organization's fine-tuning jobs.

func (*Client) RetrieveFile

func (c *Client) RetrieveFile(ctx context.Context, id string) (*File, error)

RetrieveFile returns information about a specific file.

func (*Client) RetrieveFineTune

func (c *Client) RetrieveFineTune(ctx context.Context, id string) (*FineTuneResponse, error)

RetrieveFineTune gets info about the fine-tune job.

func (*Client) SetBaseURL added in v0.4.0

func (c *Client) SetBaseURL(u string) error

SetBaseURL configures the client to make requests to a different base URL. If configuring a call to an Azure hosted endpoint, include the `api-version` parameter in the passed URL string. E.g.

https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/?api-version=2022-12-01

func (*Client) TranscribeAudioFile added in v0.7.4

func (c *Client) TranscribeAudioFile(ctx context.Context, ar *AudioTranscriptionRequest) ([]byte, error)

TranscribeAudioFile creates a new audio file transcription request. File uploads are currently limited to 25 MB, and the following input file types are supported: mp3, mp4, mpeg, mpga, m4a, wav, and webm. The returned []byte is the raw response from the API (as the response format changes depending on the contents of the request).
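A sketch of a transcription request follows. The file name and the models constant Whisper1 are assumptions for illustration; use whichever models.Audio value your version of the package exposes.

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/fabiustech/openai"
	"github.com/fabiustech/openai/models"
)

func main() {
	f, err := os.Open("meeting.mp3")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var c = openai.NewClient("your-api-key")

	// models.Whisper1 is an assumed constant name for the whisper-1 model.
	raw, err := c.TranscribeAudioFile(context.Background(), &openai.AudioTranscriptionRequest{
		File:  f,
		Model: models.Whisper1,
	})
	if err != nil {
		panic(err)
	}

	// The response is returned raw, since its shape depends on ResponseFormat.
	fmt.Println(string(raw))
}
```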

func (*Client) UploadFile

func (c *Client) UploadFile(ctx context.Context, fr *FileRequest) (*File, error)

UploadFile uploads a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB.

type CompletionChoice

type CompletionChoice struct {
	Text         string         `json:"text"`
	Index        int            `json:"index"`
	FinishReason *string        `json:"finish_reason,omitempty"`
	LogProbs     *LogprobResult `json:"logprobs"`
}

CompletionChoice represents one of possible completions.

type CompletionRequest

type CompletionRequest[T models.Completion | models.FineTunedModel] struct {
	// Model specifies the ID of the model to use.
	// See more here: https://beta.openai.com/docs/models/overview
	Model T `json:"model"`
	// Prompt specifies the prompt(s) to generate completions for, encoded as a string, array of strings, array of
	// tokens, or array of token arrays. Note that <|endoftext|> is the document separator that the model sees during
	// training, so if a prompt is not specified the model will generate as if from the beginning of a new document.
	// Defaults to <|endoftext|>.
	Prompt string `json:"prompt,omitempty"`
	// Suffix specifies the suffix that comes after a completion of inserted text.
	// Defaults to null.
	Suffix string `json:"suffix,omitempty"`
	// MaxTokens specifies the maximum number of tokens to generate in the completion. The token count of your prompt
	// plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens
	// (except for the newest models, which support 4096).
	// Defaults to 16.
	MaxTokens int `json:"max_tokens,omitempty"`
	// Temperature specifies what sampling temperature to use. Higher values mean the model will take more risks. Try
	// 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. OpenAI generally
	// recommends altering this or top_p but not both.
	//
	// More on sampling temperature: https://towardsdatascience.com/how-to-sample-from-language-models-682bceb97277
	//
	// Defaults to 1.
	Temperature *float64 `json:"temperature,omitempty"`
	// TopP specifies an alternative to sampling with temperature, called nucleus sampling, where the model considers
	// the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10%
	// probability mass are considered. OpenAI generally recommends altering this or temperature but not both.
	// Defaults to 1.
	TopP *float64 `json:"top_p,omitempty"`
	// N specifies how many completions to generate for each prompt.
	// Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully
	// and ensure that you have reasonable settings for max_tokens and stop.
	// Defaults to 1.
	N int `json:"n,omitempty"`
	// LogProbs specifies to include the log probabilities on the logprobs most likely tokens, as well as the chosen
	// tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will
	// always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.
	// The maximum value for logprobs is 5.
	// Defaults to null.
	LogProbs *int `json:"logprobs,omitempty"`
	// Echo specifies to echo back the prompt in addition to the completion.
	// Defaults to false.
	Echo bool `json:"echo,omitempty"`
	// Stop specifies up to 4 sequences where the API will stop generating further tokens. The returned text will not
	// contain the stop sequence.
	Stop []string `json:"stop,omitempty"`
	// PresencePenalty can be a number between -2.0 and 2.0. Positive values penalize new tokens based on whether they
	// appear in the text so far, increasing the model's likelihood to talk about new topics.
	// Defaults to 0.
	PresencePenalty float32 `json:"presence_penalty,omitempty"`
	// FrequencyPenalty can be a number between -2.0 and 2.0. Positive values penalize new tokens based on their
	// existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
	// Defaults to 0.
	FrequencyPenalty float32 `json:"frequency_penalty,omitempty"`
	// BestOf generates best_of completions server-side and returns the "best" (the one with the highest log probability per
	// token). Results cannot be streamed. When used with n, best_of controls the number of candidate completions and n
	// specifies how many to return – best_of must be greater than n. Note: Because this parameter generates many
	// completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings
	// for max_tokens and stop.
	// Defaults to 1.
	BestOf int `json:"best_of,omitempty"`
	// LogitBias modifies the likelihood of specified tokens appearing in the completion. Accepts a json object that
	// maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100.
	// Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will
	// vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like
	// -100 or 100 should result in a ban or exclusive selection of the relevant token.
	// As an example, you can pass {"50256": -100} to prevent the <|endoftext|> token from being generated.
	//
	// You can use this tokenizer tool to convert text to token IDs:
	// https://beta.openai.com/tokenizer
	//
	// Defaults to null.
	LogitBias map[string]int `json:"logit_bias,omitempty"`
	// User is a unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
	// See more here: https://beta.openai.com/docs/guides/safety-best-practices/end-user-ids
	User string `json:"user,omitempty"`
}

CompletionRequest contains all relevant fields for requests to the completions endpoint.

type CompletionResponse

type CompletionResponse[T models.Completion | models.FineTunedModel] struct {
	ID      string              `json:"id"`
	Object  objects.Object      `json:"object"`
	Created uint64              `json:"created"`
	Model   T                   `json:"model"`
	Choices []*CompletionChoice `json:"choices"`
	Usage   *Usage              `json:"usage,omitempty"`
}

CompletionResponse is the response from the completions endpoint.

type CreateImageRequest

type CreateImageRequest struct {
	// Prompt is a text description of the desired image(s). The maximum length is 1000 characters.
	Prompt string `json:"prompt"`
	// N specifies the number of images to generate. Must be between 1 and 10.
	// Defaults to 1.
	N int `json:"n,omitempty"`
	// Size specifies the size of the generated images. Must be one of images.Size256x256, images.Size512x512, or
	// images.Size1024x1024.
	// Defaults to images.Size1024x1024.
	Size images.Size `json:"size,omitempty"`
	// ResponseFormat specifies the format in which the generated images are returned. Must be one of images.FormatURL
	// or images.FormatB64JSON.
	// Defaults to images.FormatURL.
	ResponseFormat images.Format `json:"response_format,omitempty"`
	// User specifies a unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse:
	// https://beta.openai.com/docs/guides/safety-best-practices/end-user-ids.
	User string `json:"user,omitempty"`
}

CreateImageRequest contains all relevant fields for requests to the images/generations endpoint.
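A sketch of an image generation request, assuming the same client setup as the earlier examples. Since the ImageResponse type is defined elsewhere in the package, the response is printed wholesale rather than assuming its field layout:

```go
package main

import (
	"context"
	"fmt"

	"github.com/fabiustech/openai"
	"github.com/fabiustech/openai/images"
)

func main() {
	var c = openai.NewClient("your-api-key")

	resp, err := c.CreateImage(context.Background(), &openai.CreateImageRequest{
		Prompt:         "a watercolor painting of a lighthouse at dusk",
		N:              1,
		Size:           images.Size512x512,
		ResponseFormat: images.FormatURL,
	})
	if err != nil {
		panic(err)
	}

	// Print the response as-is rather than assume ImageResponse's fields.
	fmt.Printf("%+v\n", resp)
}
```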

type EditImageRequest

type EditImageRequest struct {
	// Image is the image to edit. Must be a valid PNG file, less than 4MB, and square. If Mask is not provided, image
	// must have transparency, which will be used as the mask.
	Image string `json:"image"`
	// Mask is an additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should
	// be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as Image.
	Mask string `json:"mask,omitempty"`
	// Prompt is a text description of the desired image(s). The maximum length is 1000 characters.
	Prompt string `json:"prompt"`
	// N specifies the number of images to generate. Must be between 1 and 10.
	// Defaults to 1.
	N int `json:"n,omitempty"`
	// Size specifies the size of the generated images. Must be one of images.Size256x256, images.Size512x512, or
	// images.Size1024x1024.
	// Defaults to images.Size1024x1024.
	Size images.Size `json:"size,omitempty"`
	// ResponseFormat specifies the format in which the generated images are returned. Must be one of images.FormatURL
	// or images.FormatB64JSON.
	// Defaults to images.FormatURL.
	ResponseFormat images.Format `json:"response_format,omitempty"`
	// User specifies a unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse:
	// https://beta.openai.com/docs/guides/safety-best-practices/end-user-ids.
	User string `json:"user,omitempty"`
}

EditImageRequest contains all relevant fields for requests to the images/edits endpoint.

type EditsChoice

type EditsChoice struct {
	Text  string `json:"text"`
	Index int    `json:"index"`
}

EditsChoice represents one of possible edits.

type EditsRequest

type EditsRequest struct {
	Model models.Edit `json:"model"`
	// Input is the input text to use as a starting point for the edit.
	// Defaults to "".
	Input string `json:"input,omitempty"`
	// Instruction is the instruction that tells the model how to edit the prompt.
	Instruction string `json:"instruction,omitempty"`
	// N specifies how many edits to generate for the input and instruction.
	// Defaults to 1.
	N int `json:"n,omitempty"`
	// Temperature specifies what sampling temperature to use. Higher values mean the model will take more risks.
	// Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. OpenAI
	// generally recommends altering this or top_p but not both.
	//
	// More on sampling temperature: https://towardsdatascience.com/how-to-sample-from-language-models-682bceb97277
	//
	// Defaults to 1.
	Temperature *float64 `json:"temperature,omitempty"`
	// TopP specifies an alternative to sampling with temperature, called nucleus sampling, where the model considers
	// the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10%
	// probability mass are considered. OpenAI generally recommends altering this or temperature but not both.
	// Defaults to 1.
	TopP *float64 `json:"top_p,omitempty"`
}

EditsRequest contains all relevant fields for requests to the edits endpoint.

type EditsResponse

type EditsResponse struct {
	Object  objects.Object `json:"object"` // "edit"
	Created uint64         `json:"created"`
	Usage   *Usage         `json:"usage"`
	Choices []*EditsChoice `json:"choices"`
}

EditsResponse represents a response structure for Edits API.

type Embedding

type Embedding struct {
	Object    objects.Object `json:"object"`
	Embedding []float64      `json:"embedding"`
	Index     int            `json:"index"`
}

Embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating point numbers, such that the distance between two embeddings in the vector space is correlated with semantic similarity between two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar.

type EmbeddingRequest

type EmbeddingRequest struct {
	// Input represents input text to get embeddings for, encoded as strings. To get embeddings for multiple inputs in
	// a single request, pass a slice of length > 1. Each input string must not exceed 8192 tokens in length.
	Input []string `json:"input"`
	// Model is the ID of the model to use.
	Model models.Embedding `json:"model"`
	// User is a unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse.
	User string `json:"user"`
}

EmbeddingRequest contains all relevant fields for requests to the embeddings endpoint.

type EmbeddingResponse

type EmbeddingResponse struct {
	*List[*Embedding]
	Model models.Embedding
	Usage *Usage
}

EmbeddingResponse is the response from a Create embeddings request.
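A sketch of an embeddings request follows. The models constant AdaEmbeddingV2 and the Data field on the embedded List type are assumptions for illustration; use whichever models.Embedding value your version of the package exposes.

```go
package main

import (
	"context"
	"fmt"

	"github.com/fabiustech/openai"
	"github.com/fabiustech/openai/models"
)

func main() {
	var c = openai.NewClient("your-api-key")

	// models.AdaEmbeddingV2 is an assumed constant name; substitute the
	// models.Embedding value exposed by your version of the package.
	resp, err := c.CreateEmbeddings(context.Background(), &openai.EmbeddingRequest{
		Input: []string{"The food was delicious.", "The service was slow."},
		Model: models.AdaEmbeddingV2,
	})
	if err != nil {
		panic(err)
	}

	// EmbeddingResponse embeds *List[*Embedding]; the Data field name is an
	// assumption about the List type's layout.
	for _, e := range resp.Data {
		fmt.Println(e.Index, len(e.Embedding))
	}
}
```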

type Engine

type Engine struct {
	ID     string `json:"id"`
	Object string `json:"object"`
	Owner  string `json:"owner"`
	Ready  bool   `json:"ready"`
}

Engine contains all relevant fields for requests to the engines endpoint.

type Error

type Error struct {
	StatusCode int     `json:"statusCode"`
	Code       string  `json:"code"`
	Message    string  `json:"message"`
	Param      *string `json:"param,omitempty"`
	Type       string  `json:"type"`
}

Error represents an error response from the API.

func (*Error) Error

func (e *Error) Error() string

Error implements the error interface.

func (*Error) Retryable

func (e *Error) Retryable() bool

Retryable returns true if the error is retryable.
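Retryable invites a simple retry loop. A sketch against a minimal interface that *Error satisfies, with a stand-in error type in place of *openai.Error so the loop is testable on its own:

```go
package main

import (
	"errors"
	"fmt"
)

// retryableErr matches the error-plus-Retryable shape *openai.Error satisfies.
type retryableErr interface {
	error
	Retryable() bool
}

// withRetries calls fn up to attempts times, stopping early on success
// or on an error that is not retryable.
func withRetries(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		err = fn()
		if err == nil {
			return nil
		}
		var re retryableErr
		if !errors.As(err, &re) || !re.Retryable() {
			return err // non-API error, or a permanent API error
		}
	}
	return err
}

// fakeErr stands in for *openai.Error in this sketch.
type fakeErr struct{ retryable bool }

func (e *fakeErr) Error() string   { return "api error" }
func (e *fakeErr) Retryable() bool { return e.retryable }

func main() {
	calls := 0
	err := withRetries(3, func() error {
		calls++
		if calls < 3 {
			return &fakeErr{retryable: true}
		}
		return nil
	})
	fmt.Println(calls, err)
}
```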

type Event

type Event struct {
	Object    objects.Object `json:"object"`
	CreatedAt uint64         `json:"created_at"`
	Level     string         `json:"level"`
	Message   string         `json:"message"`
}

Event represents an event related to a fine-tune request.

type File

type File struct {
	ID        string         `json:"id"`
	Object    objects.Object `json:"object"`
	Bytes     int            `json:"bytes"`
	CreatedAt int            `json:"created_at"`
	Filename  string         `json:"filename"`
	Purpose   string         `json:"purpose"`
}

File represents an OpenAI file.

type FileRequest

type FileRequest struct {
	// File is the JSON Lines file to be uploaded. If the purpose is set to "fine-tune", each line is a JSON record
	// with "prompt" and "completion" fields representing your training examples:
	// https://beta.openai.com/docs/guides/fine-tuning/prepare-training-data.
	File *os.File
	// Purpose is the intended purpose of the uploaded documents. Use "fine-tune" for Fine-tuning.
	// This allows OpenAI to validate the format of the uploaded file.
	Purpose string
}

FileRequest contains all relevant data for upload requests to the files endpoint.

func NewFineTuneFileRequest

func NewFineTuneFileRequest(path string) (*FileRequest, error)

NewFineTuneFileRequest returns a |*FileRequest| with File opened from |path| and Purpose set to "fine-tune".
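The typical fine-tune flow is: open the training file, upload it, then create the job from the returned file ID. A sketch (the file path is illustrative, and since FineTuneResponse's fields are documented elsewhere, the response is printed wholesale):

```go
package main

import (
	"context"
	"fmt"

	"github.com/fabiustech/openai"
)

func main() {
	var c = openai.NewClient("your-api-key")
	ctx := context.Background()

	// Open a JSONL file of {"prompt": ..., "completion": ...} records.
	fr, err := openai.NewFineTuneFileRequest("training_data.jsonl")
	if err != nil {
		panic(err)
	}
	f, err := c.UploadFile(ctx, fr)
	if err != nil {
		panic(err)
	}

	// Create the fine-tune job from the uploaded file's ID.
	ft, err := c.CreateFineTune(ctx, &openai.FineTuneRequest{
		TrainingFile: f.ID,
	})
	if err != nil {
		panic(err)
	}

	// Print the enqueued job details without assuming the response's fields.
	fmt.Printf("%+v\n", ft)
}
```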

type FineTuneDeletionResponse

type FineTuneDeletionResponse struct {
	ID      string         `json:"id"`
	Object  objects.Object `json:"object"`
	Deleted bool           `json:"deleted"`
}

FineTuneDeletionResponse is the response from the fine-tunes/delete endpoint.

type FineTuneRequest

type FineTuneRequest struct {
	// TrainingFile specifies the ID of an uploaded file that contains training data. See upload file for how to upload
	// a file.
	//
	// https://beta.openai.com/docs/api-reference/files/upload
	//
	// Your dataset must be formatted as a JSONL file, where each training example is a JSON object with the keys
	// "prompt" and "completion". Additionally, you must upload your file with the purpose fine-tune. See the
	// fine-tuning guide for more details:
	//
	// https://beta.openai.com/docs/guides/fine-tuning/creating-training-data
	TrainingFile string `json:"training_file"`
	// ValidationFile specifies the ID of an uploaded file that contains validation data. If you provide this file, the
	// data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the
	// fine-tuning results file.
	//
	// https://beta.openai.com/docs/guides/fine-tuning/analyzing-your-fine-tuned-model
	//
	// Your train and validation data should be mutually exclusive. Your dataset must be formatted as a JSONL file,
	// where each validation example is a JSON object with the keys "prompt" and "completion". Additionally, you must
	// upload your file with the purpose fine-tune. See the fine-tuning guide for more details:
	//
	// https://beta.openai.com/docs/guides/fine-tuning/creating-training-data
	ValidationFile *string `json:"validation_file,omitempty"`
	// Model specifies the name of the base model to fine-tune. You can select one of "ada", "babbage", "curie",
	// "davinci", or a fine-tuned model created after 2022-04-21. To learn more about these models, see the Models
	// documentation.
	// Defaults to "curie".
	Model *models.FineTune `json:"model,omitempty"`
	// NEpochs specifies the number of epochs to train the model for. An epoch refers to one full cycle through
	// the training dataset.
	// Defaults to 4.
	NEpochs *int `json:"n_epochs,omitempty"`
	// BatchSize specifies the batch size to use for training. The batch size is the number of training examples used
	// to train a single forward and backward pass. By default, the batch size will be dynamically configured to be
	// ~0.2% of the number of examples in the training set, capped at 256 - in general, we've found that larger batch
	// sizes tend to work better for larger datasets.
	// Defaults to null.
	BatchSize *int `json:"batch_size,omitempty"`
	// LearningRateMultiplier specifies the learning rate multiplier to use for training. The fine-tuning learning rate
	// is the original learning rate used for pretraining multiplied by this value. By default, the learning rate
	// multiplier is 0.05, 0.1, or 0.2 depending on the final batch_size (larger learning rates tend to perform better
	// with larger batch sizes). We recommend experimenting with values in the range 0.02 to 0.2 to see what produces
	// the best results.
	// Defaults to null.
	LearningRateMultiplier *int `json:"learning_rate_multiplier,omitempty"`
	// PromptLossWeight specifies the weight to use for loss on the prompt tokens. This controls how much the model
	// tries to learn to generate the prompt (as compared to the completion which always has a weight of 1.0), and can
	// add a stabilizing effect to training when completions are short. If prompts are extremely long (relative to
	// completions), it may make sense to reduce this weight so as to avoid over-prioritizing learning the prompt.
	// Defaults to 0.01.
	PromptLossWeight *int `json:"prompt_loss_weight,omitempty"`
	// ComputeClassificationMetrics calculates classification-specific metrics such as accuracy and F-1 score using the
	// validation set at the end of every epoch if set to true. These metrics can be viewed in the results file.
	//
	// https://beta.openai.com/docs/guides/fine-tuning/analyzing-your-fine-tuned-model
	//
	// In order to compute classification metrics, you must provide a ValidationFile. Additionally, you must specify
	// ClassificationNClasses for multiclass classification or ClassificationPositiveClass for binary classification.
	ComputeClassificationMetrics bool `json:"compute_classification_metrics,omitempty"`
	// ClassificationNClasses specifies the number of classes in a classification task. This parameter is required for
	// multiclass classification.
	// Defaults to null.
	ClassificationNClasses *int `json:"classification_n_classes,omitempty"`
	// ClassificationPositiveClass specifies the positive class in binary classification. This parameter is needed to
	// generate precision, recall, and F1 metrics when doing binary classification.
	// Defaults to null.
	ClassificationPositiveClass *string `json:"classification_positive_class,omitempty"`
	// ClassificationBetas specifies a list of beta values; if provided, F-beta scores are calculated at each of the
	// specified values. The F-beta score is a generalization of the F-1 score, and is only used for binary
	// classification. With a beta of 1 (i.e. the F-1 score), precision and recall are given the same weight. A larger
	// beta puts more weight on recall and less on precision; a smaller beta puts more weight on precision and less on
	// recall.
	// Defaults to null.
	ClassificationBetas []float32 `json:"classification_betas,omitempty"`
	// Suffix specifies a string of up to 40 characters that will be added to your fine-tuned model name. For example,
	// a suffix of "custom-model-name" would produce a model name like
	// ada:ft-your-org:custom-model-name-2022-02-15-04-21-04.
	Suffix string `json:"suffix,omitempty"`
}

FineTuneRequest contains all relevant fields for requests to the fine-tunes endpoints.

type FineTuneResponse

type FineTuneResponse struct {
	ID             string                 `json:"id"`
	Object         objects.Object         `json:"object"`
	Model          models.FineTune        `json:"model"`
	CreatedAt      uint64                 `json:"created_at"`
	Events         []*Event               `json:"events,omitempty"`
	FineTunedModel *models.FineTunedModel `json:"fine_tuned_model"`
	Hyperparams    struct {
		BatchSize              int     `json:"batch_size"`
		LearningRateMultiplier float64 `json:"learning_rate_multiplier"`
		NEpochs                int     `json:"n_epochs"`
		PromptLossWeight       float64 `json:"prompt_loss_weight"`
	} `json:"hyperparams"`
	OrganizationID  string   `json:"organization_id"`
	ResultFiles     []string `json:"result_files"`
	Status          string   `json:"status"`
	ValidationFiles []string `json:"validation_files"`
	TrainingFiles   []struct {
		ID        string         `json:"id"`
		Object    objects.Object `json:"object"`
		Bytes     int            `json:"bytes"`
		CreatedAt uint64         `json:"created_at"`
		Filename  string         `json:"filename"`
		Purpose   string         `json:"purpose"`
	} `json:"training_files"`
	UpdatedAt uint64 `json:"updated_at"`
}

FineTuneResponse is the response from fine-tunes endpoints.

type Function added in v0.7.3

type Function struct {
	// Name is the name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a
	// maximum length of 64.
	Name string `json:"name"`
	// Description is the description of what the function does.
	Description string `json:"description"`
	// Parameters are the parameters the functions accepts, described as a JSON Schema object. See the guide
	// (https://platform.openai.com/docs/guides/gpt/function-calling) for examples, and the JSON Schema reference
	// (https://json-schema.org/understanding-json-schema/) for documentation about the format.
	// TODO: Is there a stronger typed representation of this w/o adding a dependency?
	Parameters json.RawMessage `json:"parameters"`
}

Function describes a single function the model may generate JSON inputs for.

type FunctionCall added in v0.7.3

type FunctionCall struct {
	Name string `json:"name"`
	// contains filtered or unexported fields
}

FunctionCall specifies that the model should explicitly call the |Name|d function. To specify that the model should not call a function, use FunctionCallNone. To specify the default behavior (the model picks between generating a message for the end-user or calling a function), use FunctionCallAuto.

func FunctionCallAuto added in v0.7.3

func FunctionCallAuto() *FunctionCall

FunctionCallAuto returns a FunctionCall which will specify that the model should pick between generating a message for the end-user or calling a function.

func FunctionCallNone added in v0.7.3

func FunctionCallNone() *FunctionCall

FunctionCallNone returns a FunctionCall which will specify that no function should be called.

func (*FunctionCall) MarshalJSON added in v0.7.3

func (f *FunctionCall) MarshalJSON() ([]byte, error)

MarshalJSON implements json.Marshaler.

type FunctionCallResponse added in v0.7.3

type FunctionCallResponse struct {
	// Name is the name of the function to be called.
	Name string `json:"name"`
	// Arguments are the arguments to the function call, encoded as a JSON string.
	Arguments string `json:"arguments"`
}

FunctionCallResponse represents a response from a function call.

func (*FunctionCallResponse) Unmarshal added in v0.7.3

func (c *FunctionCallResponse) Unmarshal(val any) error

Unmarshal unmarshals the content of the message into the provided value.

type ImageData

type ImageData struct {
	URL     *string `json:"url,omitempty"`
	B64JSON *string `json:"b64_json,omitempty"`
}

ImageData represents a response data structure for image API. Only one field will be non-nil.

type ImageResponse

type ImageResponse struct {
	Created uint64       `json:"created,omitempty"`
	Data    []*ImageData `json:"data,omitempty"`
}

ImageResponse represents a response structure for image API.

type List

type List[T any] struct {
	// Object specifies the object type (e.g. Model).
	Object objects.Object `json:"object"`
	// Data contains the list of objects.
	Data []T `json:"data"`
}

List represents a generic form of list of objects returned from many get endpoints.

type LogprobResult

type LogprobResult struct {
	Tokens        []string             `json:"tokens"`
	TokenLogprobs []float32            `json:"token_logprobs"`
	TopLogprobs   []map[string]float32 `json:"top_logprobs"`
	TextOffset    []int                `json:"text_offset"`
}

LogprobResult represents logprob result of Choice.

type ModerationRequest

type ModerationRequest struct {
	// Input is the input text to classify.
	Input string `json:"input,omitempty"`
	// Model specifies the model to use for moderation.
	// Defaults to models.TextModerationLatest.
	Model models.Moderation `json:"model,omitempty"`
}

ModerationRequest contains all relevant fields for requests to the moderations endpoint.

type ModerationResponse

type ModerationResponse struct {
	ID      string   `json:"id"`
	Model   string   `json:"model"`
	Results []Result `json:"results"`
}

ModerationResponse represents a response structure for moderation API.

type Result

type Result struct {
	Categories     *ResultCategories     `json:"categories"`
	CategoryScores *ResultCategoryScores `json:"category_scores"`
	Flagged        bool                  `json:"flagged"`
}

Result represents one of possible moderation results.

type ResultCategories

type ResultCategories struct {
	Hate            bool `json:"hate"`
	HateThreatening bool `json:"hate/threatening"`
	SelfHarm        bool `json:"self-harm"`
	Sexual          bool `json:"sexual"`
	SexualMinors    bool `json:"sexual/minors"`
	Violence        bool `json:"violence"`
	ViolenceGraphic bool `json:"violence/graphic"`
}

ResultCategories represents Categories of Result.

type ResultCategoryScores

type ResultCategoryScores struct {
	Hate            float32 `json:"hate"`
	HateThreatening float32 `json:"hate/threatening"`
	SelfHarm        float32 `json:"self-harm"`
	Sexual          float32 `json:"sexual"`
	SexualMinors    float32 `json:"sexual/minors"`
	Violence        float32 `json:"violence"`
	ViolenceGraphic float32 `json:"violence/graphic"`
}

ResultCategoryScores represents CategoryScores of Result.

type Usage

type Usage struct {
	// PromptTokens is the number of tokens in the request's prompt.
	PromptTokens int `json:"prompt_tokens"`
	// CompletionTokens is the number of tokens in the completion response.
	// Will not be set for requests to the embeddings endpoint.
	CompletionTokens int `json:"completion_tokens,omitempty"`
	// Total tokens is the sum of PromptTokens and CompletionTokens.
	TotalTokens int `json:"total_tokens"`
}

Usage represents the total token usage per request to OpenAI.

type VariationImageRequest

type VariationImageRequest struct {
	// Image is the image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square.
	Image string `json:"image"`
	// N specifies the number of images to generate. Must be between 1 and 10.
	// Defaults to 1.
	N int `json:"n,omitempty"`
	// Size specifies the size of the generated images. Must be one of images.Size256x256, images.Size512x512, or
	// images.Size1024x1024.
	// Defaults to images.Size1024x1024.
	Size images.Size `json:"size,omitempty"`
	// ResponseFormat specifies the format in which the generated images are returned. Must be one of images.FormatURL
	// or images.FormatB64JSON.
	// Defaults to images.FormatURL.
	ResponseFormat images.Format `json:"response_format,omitempty"`
	// User specifies a unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse:
	// https://beta.openai.com/docs/guides/safety-best-practices/end-user-ids.
	User string `json:"user,omitempty"`
}

VariationImageRequest contains all relevant fields for requests to the images/variations endpoint.

Directories

Path Synopsis
audio Package audio contains the enum values which represent the output formats returned by the OpenAI transcription endpoint.
images Package images contains the enum values which represent the various image formats and sizes returned by the OpenAI image endpoints.
models Package models contains the enum values which represent the various models used by all OpenAI endpoints.
objects Package objects contains the enum values which represent the various objects returned by all OpenAI endpoints.
params Package params provides a helper function to simplify setting optional parameters in struct literals.
routes Package routes contains constants for all OpenAI endpoint routes.
