Documentation ¶
Index ¶
- Constants
- func Chat(ctx context.Context, messages []Msg, opt Options, client *http.Client, ...) ([]Msg, Usage, error)
- func ChatTokenCount(msgs []Msg, model string) int
- func Complete(ctx context.Context, prompt string, opt Options, client *http.Client, ...) ([]Completion, Usage, error)
- func Decode(tokens []int, model string) string
- func Encode(text, model string) []int
- func EncodeEnum(text, model string, f func(int))
- func MaxTokens(model string) int
- func MsgTokenCount(msg Msg, model string) int
- func TokenCount(text, model string) int
- type Completion
- type Credentials
- type Error
- type FinishReason
- type ForceFunctionCall
- type FunctionCall
- type Msg
- func AssistantMsg(content string) Msg
- func DropChatHistoryIfNeeded(chat []Msg, fixedSuffixLen int, maxTokens int, model string) ([]Msg, int)
- func FitChatContext(candidates []Msg, maxTokenCount int, model string) ([]Msg, int)
- func StreamChat(ctx context.Context, messages []Msg, opt Options, client *http.Client, ...) (Msg, error)
- func SystemMsg(content string) Msg
- func UserMsg(content string) Msg
- type Options
- type Price
- type Role
- type Usage
Constants ¶
const (
	// ModelChatGPT4o is the current best chat model, gpt-4o, with 128k context.
	ModelChatGPT4o = "gpt-4o"

	// ModelChatGPT4oMini is the current best low-cost model, gpt-4o-mini, with 128k context.
	ModelChatGPT4oMini = "gpt-4o-mini"

	// ModelChatGPT4Turbo is the first 128k-context model, gpt-4-turbo.
	ModelChatGPT4Turbo = "gpt-4-turbo"

	// ModelChatGPT4TurboPreview is the preview of GPT-4 Turbo, with 128k context.
	ModelChatGPT4TurboPreview = "gpt-4-turbo-preview"

	// ModelChatGPT4 is the original GPT-4 chat model, gpt-4, with 8k context.
	ModelChatGPT4 = "gpt-4"

	// ModelChatGPT4With32k is a version of ModelChatGPT4 with a 32k context.
	ModelChatGPT4With32k = "gpt-4-32k"

	// ModelChatGPT35Turbo is the cheapest and universally available ChatGPT 3.5 model.
	ModelChatGPT35Turbo = "gpt-3.5-turbo"

	// ModelDefaultChat is the chat model used by default. This is going to be set to whatever
	// default choice the author of this library feels appropriate going forward, but really,
	// you should be specifying a specific model like ModelChatGPT4 or ModelChatGPT35Turbo.
	ModelDefaultChat = ModelChatGPT4Turbo

	// ModelDefaultCompletion is the default instruction-following model for text completion.
	// Not recommended for basically anything anymore, because gpt-3.5-turbo is 10x cheaper and just as good.
	ModelDefaultCompletion = ModelTextDavinci003

	// ModelTextDavinci003 is an instruction-following text completion model.
	// Not recommended for basically anything anymore, because gpt-3.5-turbo is 10x cheaper and just as good.
	ModelTextDavinci003 = "text-davinci-003"

	// ModelBaseDavinci is an older GPT-3 (not GPT-3.5) family base model. Only useful for fine-tuning.
	ModelBaseDavinci = "davinci"

	// ModelEmbedding3Large is the best embedding model so far.
	ModelEmbedding3Large = "text-embedding-3-large"

	// ModelEmbedding3Small is a better version of ModelEmbeddingAda002.
	ModelEmbedding3Small = "text-embedding-3-small"

	// ModelEmbeddingAda002 is the original embedding model; its use is no longer recommended.
	ModelEmbeddingAda002 = "text-embedding-ada-002"
)
Variables ¶
This section is empty.
Functions ¶
func Chat ¶
func Chat(ctx context.Context, messages []Msg, opt Options, client *http.Client, creds Credentials) ([]Msg, Usage, error)
Chat suggests the next assistant message for the given prompt via ChatGPT. When successful, it always returns at least one Msg, more if you set opt.N (these are multiple alternative choices for the next message, not multiple consecutive messages). Options should originate from DefaultChatOptions, not DefaultCompleteOptions.
func ChatTokenCount ¶
func Complete ¶
func Complete(ctx context.Context, prompt string, opt Options, client *http.Client, creds Credentials) ([]Completion, Usage, error)
Complete generates a completion for the given prompt using a non-chat model. Nowadays this is mainly of interest when using fine-tuned models. When successful, it always returns at least one Completion, more if you set opt.N. Options should originate from DefaultCompleteOptions, not DefaultChatOptions.
func EncodeEnum ¶
func MaxTokens ¶
MaxTokens returns the maximum number of tokens the given model supports. This is a sum of prompt and completion tokens.
func MsgTokenCount ¶
func TokenCount ¶
TokenCount counts GPT-3 tokens in the given text for the given model.
Types ¶
type Completion ¶
type Completion struct {
	Text         string       `json:"text"`
	FinishReason FinishReason `json:"finish_reason"`
}
type Credentials ¶
Credentials are used to authenticate with OpenAI.
type Error ¶
type FinishReason ¶
type FinishReason string
const (
	FinishReasonStop   FinishReason = "stop"
	FinishReasonLength FinishReason = "length"
)
type ForceFunctionCall ¶
type ForceFunctionCall struct {
Name string `json:"name"`
}
ForceFunctionCall is a value to use in Options.FunctionCallMode.
type FunctionCall ¶
func (*FunctionCall) UnmarshalArguments ¶
func (call *FunctionCall) UnmarshalArguments(out any) error
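The page does not show the struct's fields or the method body. Below is a minimal runnable sketch of how such a helper typically works, using a local stand-in FunctionCall type; the Arguments field holding the model's raw JSON string is an assumption about the real type's layout, and get_weather is a made-up function name:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// FunctionCall is a local stand-in; the Arguments field (raw JSON emitted by
// the model) is an assumption about the real package's type.
type FunctionCall struct {
	Name      string `json:"name"`
	Arguments string `json:"arguments"`
}

// UnmarshalArguments decodes the JSON-encoded arguments into out.
func (call *FunctionCall) UnmarshalArguments(out any) error {
	return json.Unmarshal([]byte(call.Arguments), out)
}

func main() {
	call := FunctionCall{Name: "get_weather", Arguments: `{"city":"Paris","unit":"C"}`}
	var args struct {
		City string `json:"city"`
		Unit string `json:"unit"`
	}
	if err := call.UnmarshalArguments(&args); err != nil {
		panic(err)
	}
	fmt.Println(args.City, args.Unit)
}
```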
type Msg ¶
type Msg struct {
	Role         Role          `json:"role"`
	Content      string        `json:"content"`
	FunctionCall *FunctionCall `json:"function_call,omitempty"`
}
Msg is a single chat message.
func AssistantMsg ¶
AssistantMsg makes an Msg with an Assistant role.
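The AssistantMsg, SystemMsg and UserMsg helpers presumably just fill in the Role on a Msg. A runnable stand-in sketch, where the Role constant names and their wire values ("system", "user", "assistant" per the OpenAI chat API) are assumptions, since this page does not list them:

```go
package main

import "fmt"

type Role string

// These constant names are guesses; the string values follow the OpenAI
// chat API wire format.
const (
	RoleSystem    Role = "system"
	RoleUser      Role = "user"
	RoleAssistant Role = "assistant"
)

type Msg struct {
	Role    Role   `json:"role"`
	Content string `json:"content"`
}

func SystemMsg(content string) Msg    { return Msg{Role: RoleSystem, Content: content} }
func UserMsg(content string) Msg      { return Msg{Role: RoleUser, Content: content} }
func AssistantMsg(content string) Msg { return Msg{Role: RoleAssistant, Content: content} }

func main() {
	chat := []Msg{
		SystemMsg("You are a terse assistant."),
		UserMsg("What is the capital of France?"),
	}
	fmt.Println(chat[0].Role, chat[1].Role)
}
```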
func DropChatHistoryIfNeeded ¶
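No description is given for this function, so the following is only a guess from its signature: the last fixedSuffixLen messages are always kept, and the oldest of the remaining messages are dropped until the chat fits into maxTokens. The second return value is assumed here to be the number of dropped messages; the real function may return something else (e.g. a token count). A stub counter stands in for real tokenization:

```go
package main

import "fmt"

type Msg struct {
	Role    string
	Content string
}

// tokens is a stub stand-in for MsgTokenCount: real counting needs the
// model's tokenizer, so one "token" per character keeps the sketch runnable.
func tokens(m Msg) int { return len(m.Content) }

// dropHistoryIfNeeded drops the oldest messages (never touching the last
// fixedSuffixLen ones) until the total fits into maxTokens.
func dropHistoryIfNeeded(chat []Msg, fixedSuffixLen, maxTokens int) ([]Msg, int) {
	total := 0
	for _, m := range chat {
		total += tokens(m)
	}
	dropped := 0
	for total > maxTokens && len(chat) > fixedSuffixLen {
		total -= tokens(chat[0])
		chat = chat[1:]
		dropped++
	}
	return chat, dropped
}

func main() {
	chat := []Msg{
		{"user", "old question old question"},
		{"assistant", "old answer"},
		{"user", "latest question"},
	}
	kept, dropped := dropHistoryIfNeeded(chat, 1, 30)
	fmt.Println(len(kept), dropped)
}
```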
func FitChatContext ¶
FitChatContext returns messages that fit into maxTokenCount, skipping those that don't fit. This is meant for including knowledge base context into ChatGPT prompts.
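The described behavior can be sketched as a greedy walk over the candidates: keep each message that still fits into the budget, skip the ones that don't. This is a runnable stand-in, not the package's implementation; the stub token counter and the assumption that the second return value is the kept messages' token total are both mine:

```go
package main

import "fmt"

type Msg struct{ Content string }

// tokens is a stub for MsgTokenCount; one "token" per character keeps it runnable.
func tokens(m Msg) int { return len(m.Content) }

// fitChatContext keeps candidates in order while they fit into maxTokenCount,
// skipping any candidate that would exceed the budget.
func fitChatContext(candidates []Msg, maxTokenCount int) ([]Msg, int) {
	var kept []Msg
	used := 0
	for _, m := range candidates {
		if n := tokens(m); used+n <= maxTokenCount {
			kept = append(kept, m)
			used += n
		}
	}
	return kept, used
}

func main() {
	kb := []Msg{{"short fact"}, {"a very long knowledge base article ..."}, {"tiny"}}
	kept, used := fitChatContext(kb, 20)
	fmt.Println(len(kept), used)
}
```

Note that a skipped message does not stop the walk: a later, smaller message may still fit, which matches the doc's "skipping those that don't fit".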
func StreamChat ¶
func StreamChat(ctx context.Context, messages []Msg, opt Options, client *http.Client, creds Credentials, f func(msg *Msg, delta string) error) (Msg, error)
StreamChat suggests the next assistant's message for the given prompt via ChatGPT, streaming the response. Options should originate from DefaultChatOptions, not DefaultCompleteOptions. Options.N must be 0 or 1.
func (*Msg) UnmarshalCallArguments ¶
type Options ¶
type Options struct {
	// Model is the OpenAI model to use, see https://platform.openai.com/docs/models/.
	Model string `json:"model"`

	// MaxTokens is the upper limit on completion length. In the chat API, use 0 to allow
	// the maximum possible length (4096 minus prompt length).
	MaxTokens int `json:"max_tokens,omitempty"`

	Functions        []any `json:"functions,omitempty"`
	FunctionCallMode any   `json:"function_call,omitempty"`
	Tools            []any `json:"tools,omitempty"`
	ToolChoice       any   `json:"tool_choice,omitempty"`

	Temperature float64 `json:"temperature"`
	TopP        float64 `json:"top_p"`

	// N determines how many choices to return for each prompt. Defaults to 1.
	// Must be less than or equal to BestOf if both are specified.
	N int `json:"n,omitempty"`

	// BestOf determines how many choices to create for each prompt. Defaults to 1.
	// Must be greater than or equal to N if both are specified.
	BestOf int `json:"best_of,omitempty"`

	// Stop is up to 4 sequences where the API will stop generating tokens.
	// The response will not contain the stop sequence.
	Stop []string `json:"stop,omitempty"`

	// PresencePenalty is a number between 0 and 1 that penalizes tokens that have
	// already appeared in the text so far.
	PresencePenalty float64 `json:"presence_penalty"`

	// FrequencyPenalty is a number between 0 and 1 that penalizes tokens based on
	// their existing frequency in the text so far.
	FrequencyPenalty float64 `json:"frequency_penalty"`
}
Options adjust details of how Chat and Complete calls behave.
func DefaultChatOptions ¶
func DefaultChatOptions() Options
DefaultChatOptions provides a safe and conservative starting point for Chat call options. Note that it sets Temperature to 0 and enables unlimited MaxTokens.
func DefaultCompleteOptions ¶
func DefaultCompleteOptions() Options
DefaultCompleteOptions provides a safe and conservative starting point for Complete call options. Note that it sets Temperature to 0 and MaxTokens to 256.
type Price ¶
type Price int64
Price is an amount in 1/1_000_000 of a cent. I.e. $2 per 1M tokens = $0.002 per 1K tokens = Price(200) per token. Max price is thus $92_233_720_368 ($92 billion). But it looks like we'll have to scale it soon!
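The unit math above can be checked with plain int64 arithmetic. A small self-contained sketch using the doc's own example rate ($2 per 1M tokens = Price(200) per token); the local Price type mirrors the one defined here:

```go
package main

import "fmt"

// Price is an amount in 1/1_000_000 of a cent, mirroring the type above.
type Price int64

func main() {
	perToken := Price(200) // $2 per 1M tokens, per the doc's example
	tokens := int64(1_500_000)

	total := Price(int64(perToken) * tokens) // 300_000_000 micro-cents
	cents := float64(total) / 1_000_000      // 300 cents
	fmt.Printf("$%.2f\n", cents/100)         // 1.5M tokens at $2/1M = $3.00
}
```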
func Cost ¶
Cost estimates the cost of processing the given number of prompt & completion tokens with the given model.
func FineTuningCost ¶
FineTuningCost estimates the cost of fine-tuning the given model using the given number of tokens of sample data.