Documentation ¶
Overview ¶
Package chatgptgo provides a client for communicating with OpenAI's GPT-3.5 (ChatGPT) API.
Index ¶
Constants ¶
const (
    // BaseUrl is the base URL for the OpenAI API.
    BaseUrl = "https://api.openai.com/v1"

    // ChatUrl is the URL for the chat endpoint.
    ChatUrl = BaseUrl + "/chat/completions"
)
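The Api client below wraps these endpoints for you. As a minimal sketch of what a raw call to ChatUrl looks like using only the standard library (the JSON body and the OPENAI_API_KEY environment variable are illustrative assumptions, not part of this package):

package main

import (
    "fmt"
    "net/http"
    "os"
    "strings"
)

func main() {
    // Hypothetical request body for illustration; the Request type below
    // builds this JSON for you.
    body := strings.NewReader(`{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"Hello"}]}`)

    // ChatUrl = BaseUrl + "/chat/completions" resolves to this URL.
    req, err := http.NewRequest(http.MethodPost, "https://api.openai.com/v1/chat/completions", body)
    if err != nil {
        panic(err)
    }

    // The API authenticates with a bearer token; OPENAI_API_KEY is an
    // assumed environment variable, not part of this package.
    req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status)
}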
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Api ¶
type Api struct {
// contains filtered or unexported fields
}
Api is the client for communicating with the OpenAI API.
func (*Api) WithClient ¶
WithClient sets your custom HTTP client to use for requests.
func (*Api) WithOrganizationId ¶
WithOrganizationId sets your organization ID to use for requests.
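A minimal configuration sketch, assuming a NewApi constructor, a placeholder import path, and that the With... methods return *Api for chaining; none of these are confirmed by this page:

package main

import (
    "net/http"
    "os"
    "time"

    chatgptgo "example.com/chatgptgo" // placeholder import path; use the real module path
)

func main() {
    // NewApi is an assumed constructor name, and the chaining below assumes
    // the With... methods return *Api; check the package source for the
    // actual shapes.
    api := chatgptgo.NewApi(os.Getenv("OPENAI_API_KEY")).
        WithClient(&http.Client{Timeout: 30 * time.Second}).
        WithOrganizationId("org-example") // placeholder organization ID

    _ = api // ready to send requests
}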
type ApiError ¶
type ApiError struct {
    StatusCode   int           `json:"-"`
    ErrorDetails *ErrorDetails `json:"error"`
}
ApiError is the error returned from OpenAI's API.
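A sketch of inspecting a failed request, assuming *ApiError implements the error interface and is returned (possibly wrapped) by the client's request methods; the import path is a placeholder:

package main

import (
    "errors"
    "fmt"

    chatgptgo "example.com/chatgptgo" // placeholder import path
)

// inspect reports the status code and details of an API failure. It assumes
// *ApiError satisfies the error interface and may be wrapped by the caller.
func inspect(err error) {
    var apiErr *chatgptgo.ApiError
    if errors.As(err, &apiErr) {
        fmt.Println("HTTP status:", apiErr.StatusCode)
        if apiErr.ErrorDetails != nil {
            fmt.Printf("details: %+v\n", apiErr.ErrorDetails)
        }
    }
}

func main() {
    inspect(nil) // no-op demonstration call
}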
type Choice ¶
type Choice struct {
    Index        int      `json:"index"`
    Message      *Message `json:"message"`
    FinishReason string   `json:"finish_reason"`
}
Choice is a single choice (message) returned from the chat endpoint.
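A sketch of consuming choices, assuming the response exposes a slice of *Choice (the response type is not shown on this page); "stop" and "length" are the finish reasons the OpenAI API documents for complete and truncated outputs:

package main

import (
    "fmt"

    chatgptgo "example.com/chatgptgo" // placeholder import path
)

// printChoices walks the choices of a chat response. The slice type is an
// assumption; only the Choice fields themselves are documented above.
func printChoices(choices []*chatgptgo.Choice) {
    for _, choice := range choices {
        if choice.FinishReason == "length" {
            fmt.Printf("choice %d was cut off by the token limit\n", choice.Index)
            continue
        }
        if choice.Message != nil {
            fmt.Printf("%+v\n", choice.Message) // Message fields follow the chat format (role/content)
        }
    }
}

func main() {
    printChoices(nil) // no-op demonstration call
}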
type ErrorDetails ¶
type Request ¶
type Request struct {
    // Model is the ID of the model to use. Currently, only `gpt-3.5-turbo` and `gpt-3.5-turbo-0301` are supported.
    Model Model `json:"model"`

    // Messages is the list of messages to generate chat completions for, in the chat format.
    Messages []*Message `json:"messages"`

    // Temperature is the sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
    Temperature float64 `json:"temperature,omitempty"`

    // TopP is an alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
    TopP float64 `json:"top_p,omitempty"`

    // N is how many chat completion choices to generate for each input message.
    N int `json:"n,omitempty"`

    // Stop is up to 4 sequences where the API will stop generating further tokens.
    Stop []string `json:"stop,omitempty"`

    // MaxTokens is the maximum number of tokens to generate for each chat completion choice.
    MaxTokens int `json:"max_tokens,omitempty"`

    // PresencePenalty is a number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
    PresencePenalty float64 `json:"presence_penalty,omitempty"`

    // FrequencyPenalty is a number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
    FrequencyPenalty float64 `json:"frequency_penalty,omitempty"`

    // User is a unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
    User string `json:"user,omitempty"`
}
Request is the request body for the chat endpoint.
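A minimal end-to-end sketch. Only the Request fields above are documented on this page; NewApi, the Chat method, the Message fields, the response's Choices field, and the import path are assumptions added to make the example complete:

package main

import (
    "fmt"
    "os"

    chatgptgo "example.com/chatgptgo" // placeholder import path
)

func main() {
    // NewApi and Chat are assumed names; check the package for the actual
    // constructor and request method.
    api := chatgptgo.NewApi(os.Getenv("OPENAI_API_KEY"))
    resp, err := api.Chat(&chatgptgo.Request{
        Model: "gpt-3.5-turbo",
        Messages: []*chatgptgo.Message{
            // Role and Content follow the chat format; the Message type's
            // fields are not shown on this page.
            {Role: "user", Content: "Summarize nucleus sampling in one sentence."},
        },
        Temperature: 0.2, // favor focused, deterministic output
        MaxTokens:   64,  // cap the completion length
    })
    if err != nil {
        fmt.Println("request failed:", err)
        return
    }
    for _, choice := range resp.Choices { // Choices is an assumed response field
        fmt.Printf("%+v\n", choice.Message)
    }
}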