Documentation ¶
Overview ¶
Package gpt provides a client for the OpenAI GPT-3 API
Index ¶
- Constants
- func Float32Ptr(f float32) *float32
- func IntPtr(i int) *int
- type APIError
- type APIErrorResponse
- type ChatCompletionRequest
- type ChatCompletionRequestMessage
- type ChatCompletionResponse
- type ChatCompletionResponseChoice
- type ChatCompletionResponseMessage
- type ChatCompletionStreamResponse
- type ChatCompletionStreamResponseChoice
- type ChatCompletionsResponseUsage
- type Client
- type ClientOption
- type CompletionRequest
- type CompletionResponse
- type CompletionResponseChoice
- type CompletionResponseUsage
- type EditsRequest
- type EditsResponse
- type EditsResponseChoice
- type EditsResponseUsage
- type EmbeddingsRequest
- type EmbeddingsResponse
- type EmbeddingsResult
- type EmbeddingsUsage
- type EngineObject
- type EnginesResponse
- type ImageRequest
- type ImageResponse
- type ImageResponseDataInner
- type LogProbResult
- type Option
- type SearchData
- type SearchRequest
- type SearchResponse
Constants ¶
const (
	TextAda001Engine     = "text-ada-001"     // TextAda001Engine Text Ada 001
	TextBabbage001Engine = "text-babbage-001" // TextBabbage001Engine Text Babbage 001
	TextCurie001Engine   = "text-curie-001"   // TextCurie001Engine Text Curie 001
	TextDavinci001Engine = "text-davinci-001" // TextDavinci001Engine Text Davinci 001
	TextDavinci002Engine = "text-davinci-002" // TextDavinci002Engine Text Davinci 002
	TextDavinci003Engine = "text-davinci-003" // TextDavinci003Engine Text Davinci 003
	AdaEngine            = "ada"              // AdaEngine Ada
	BabbageEngine        = "babbage"          // BabbageEngine Babbage
	CurieEngine          = "curie"            // CurieEngine Curie
	DavinciEngine        = "davinci"          // DavinciEngine Davinci
	DefaultEngine        = DavinciEngine      // DefaultEngine Default Engine
)
Define GPT-3 Engine Types
const (
	GPT4                      = "gpt-4"                         // GPT4 GPT-4
	GPT3Dot5Turbo             = "gpt-3.5-turbo"                 // GPT3Dot5Turbo GPT-3.5 Turbo
	GPT3Dot5Turbo0301         = "gpt-3.5-turbo-0301"            // GPT3Dot5Turbo0301 GPT-3.5 Turbo 0301
	TextSimilarityAda001      = "text-similarity-ada-001"       // TextSimilarityAda001 Text Similarity Ada 001
	TextSimilarityBabbage001  = "text-similarity-babbage-001"   // TextSimilarityBabbage001 Text Similarity Babbage 001
	TextSimilarityCurie001    = "text-similarity-curie-001"     // TextSimilarityCurie001 Text Similarity Curie 001
	TextSimilarityDavinci001  = "text-similarity-davinci-001"   // TextSimilarityDavinci001 Text Similarity Davinci 001
	TextSearchAdaDoc001       = "text-search-ada-doc-001"       // TextSearchAdaDoc001 Text Search Ada Doc 001
	TextSearchAdaQuery001     = "text-search-ada-query-001"     // TextSearchAdaQuery001 Text Search Ada Query 001
	TextSearchBabbageDoc001   = "text-search-babbage-doc-001"   // TextSearchBabbageDoc001 Text Search Babbage Doc 001
	TextSearchBabbageQuery001 = "text-search-babbage-query-001" // TextSearchBabbageQuery001 Text Search Babbage Query 001
	TextSearchCurieDoc001     = "text-search-curie-doc-001"     // TextSearchCurieDoc001 Text Search Curie Doc 001
	TextSearchCurieQuery001   = "text-search-curie-query-001"   // TextSearchCurieQuery001 Text Search Curie Query 001
	TextSearchDavinciDoc001   = "text-search-davinci-doc-001"   // TextSearchDavinciDoc001 Text Search Davinci Doc 001
	TextSearchDavinciQuery001 = "text-search-davinci-query-001" // TextSearchDavinciQuery001 Text Search Davinci Query 001
	CodeSearchAdaCode001      = "code-search-ada-code-001"      // CodeSearchAdaCode001 Code Search Ada Code 001
	CodeSearchAdaText001      = "code-search-ada-text-001"      // CodeSearchAdaText001 Code Search Ada Text 001
	CodeSearchBabbageCode001  = "code-search-babbage-code-001"  // CodeSearchBabbageCode001 Code Search Babbage Code 001
	CodeSearchBabbageText001  = "code-search-babbage-text-001"  // CodeSearchBabbageText001 Code Search Babbage Text 001
	TextEmbeddingAda002       = "text-embedding-ada-002"        // TextEmbeddingAda002 Text Embedding Ada 002
)
const (
	CreateImageSize256x256           = "256x256"   // CreateImageSize256x256 256x256
	CreateImageSize512x512           = "512x512"   // CreateImageSize512x512 512x512
	CreateImageSize1024x1024         = "1024x1024" // CreateImageSize1024x1024 1024x1024
	CreateImageResponseFormatURL     = "url"       // CreateImageResponseFormatURL URL
	CreateImageResponseFormatB64JSON = "b64_json"  // CreateImageResponseFormatB64JSON B64 JSON
)
Image sizes and response formats defined by the OpenAI API.
Variables ¶
This section is empty.
Functions ¶
func Float32Ptr ¶
func Float32Ptr(f float32) *float32
Float32Ptr converts a float32 to a *float32 as a convenience
func IntPtr ¶
func IntPtr(i int) *int
IntPtr converts an int to an *int as a convenience
Types ¶
type APIError ¶
type APIError struct {
	StatusCode int    `json:"status_code"`
	Message    string `json:"message"`
	Type       string `json:"type"`
}
APIError represents an error returned by the API
type APIErrorResponse ¶
type APIErrorResponse struct {
Error APIError `json:"error"`
}
APIErrorResponse is the full error response that has been returned by an API.
type ChatCompletionRequest ¶
type ChatCompletionRequest struct {
	// Model is the name of the model to use. If not specified, defaults to gpt-3.5-turbo.
	Model string `json:"model"`
	// Messages is a list of messages to use as the context for the chat completion.
	Messages []ChatCompletionRequestMessage `json:"messages"`
	// Temperature is the sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the
	// output more random, while lower values like 0.2 will make it more focused and deterministic.
	Temperature float32 `json:"temperature,omitempty"`
	// TopP is an alternative to sampling with temperature, called nucleus sampling, where the model considers
	// the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top
	// 10% probability mass are considered.
	TopP float32 `json:"top_p,omitempty"`
	// N is the number of responses to generate.
	N int `json:"n,omitempty"`
	// Stream is whether to stream responses back as they are generated.
	Stream bool `json:"stream,omitempty"`
	// Stop is up to 4 sequences where the API will stop generating further tokens.
	Stop []string `json:"stop,omitempty"`
	// MaxTokens is the maximum number of tokens to return.
	MaxTokens int `json:"max_tokens,omitempty"`
	// PresencePenalty (-2, 2) penalizes tokens that haven't appeared yet in the history.
	PresencePenalty float32 `json:"presence_penalty,omitempty"`
	// FrequencyPenalty (-2, 2) penalizes tokens that appear too frequently in the history.
	FrequencyPenalty float32 `json:"frequency_penalty,omitempty"`
	// LogitBias modifies the probability of specific tokens appearing in the completion.
	LogitBias map[string]float32 `json:"logit_bias,omitempty"`
	// User can be used to identify an end-user.
	User string `json:"user,omitempty"`
}
ChatCompletionRequest is a request for the chat completion API
type ChatCompletionRequestMessage ¶
type ChatCompletionRequestMessage struct {
	// Role is the role of the message. Can be "system", "user", or "assistant".
	Role string `json:"role"`
	// Content is the content of the message.
	Content string `json:"content"`
}
ChatCompletionRequestMessage is a message to use as the context for the chat completion API
type ChatCompletionResponse ¶
type ChatCompletionResponse struct {
	ID      string                         `json:"id"`
	Object  string                         `json:"object"`
	Created int                            `json:"created"`
	Model   string                         `json:"model"`
	Choices []ChatCompletionResponseChoice `json:"choices"`
	Usage   ChatCompletionsResponseUsage   `json:"usage"`
}
ChatCompletionResponse is the full response from a request to the Chat Completions API
type ChatCompletionResponseChoice ¶
type ChatCompletionResponseChoice struct {
	Index        int                           `json:"index"`
	FinishReason string                        `json:"finish_reason"`
	Message      ChatCompletionResponseMessage `json:"message"`
}
ChatCompletionResponseChoice is one of the choices returned in the response to the Chat Completions API
type ChatCompletionResponseMessage ¶
type ChatCompletionResponseMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}
ChatCompletionResponseMessage is a message returned in the response to the Chat Completions API
type ChatCompletionStreamResponse ¶
type ChatCompletionStreamResponse struct {
	ID      string                               `json:"id"`
	Object  string                               `json:"object"`
	Created int                                  `json:"created"`
	Model   string                               `json:"model"`
	Choices []ChatCompletionStreamResponseChoice `json:"choices"`
	Usage   ChatCompletionsResponseUsage         `json:"usage"`
}
type ChatCompletionStreamResponseChoice ¶
type ChatCompletionStreamResponseChoice struct {
	Index        int                           `json:"index"`
	FinishReason string                        `json:"finish_reason"`
	Delta        ChatCompletionResponseMessage `json:"delta"`
}
ChatCompletionStreamResponseChoice is one of the choices returned in the response to the Chat Completions API
type ChatCompletionsResponseUsage ¶
type ChatCompletionsResponseUsage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}
ChatCompletionsResponseUsage is the object that returns how many tokens the completion's request used
type Client ¶
type Client interface {
	// Engines lists the currently available engines, and provides basic information about each
	// option such as the owner and availability.
	Engines(ctx context.Context) (*EnginesResponse, error)

	// Engine retrieves an engine instance, providing basic information about the engine such
	// as the owner and availability.
	Engine(ctx context.Context, engine string) (*EngineObject, error)

	// ChatCompletion creates a completion with the Chat completion endpoint which
	// is what powers the ChatGPT experience.
	ChatCompletion(ctx context.Context, request *ChatCompletionRequest) (*ChatCompletionResponse, error)

	// ChatCompletionStream creates a completion with the Chat completion endpoint and streams
	// the results through multiple calls to onData.
	ChatCompletionStream(ctx context.Context, request *ChatCompletionRequest, onData func(*ChatCompletionStreamResponse)) error

	// Completion creates a completion with the default engine. This is the main endpoint of the API
	// which auto-completes based on the given prompt.
	Completion(ctx context.Context, request *CompletionRequest) (*CompletionResponse, error)

	// CompletionStream creates a completion with the default engine and streams the results through
	// multiple calls to onData.
	CompletionStream(ctx context.Context, request *CompletionRequest, onData func(*CompletionResponse)) error

	// CompletionWithEngine is the same as Completion except it allows overriding the default engine on the client.
	CompletionWithEngine(ctx context.Context, engine string, request *CompletionRequest) (*CompletionResponse, error)

	// CompletionStreamWithEngine is the same as CompletionStream except it allows overriding the default engine on the client.
	CompletionStreamWithEngine(ctx context.Context, engine string, request *CompletionRequest, onData func(*CompletionResponse)) error

	// Edits, given a prompt and an instruction, returns an edited version of the prompt.
	Edits(ctx context.Context, request *EditsRequest) (*EditsResponse, error)

	// Search performs a semantic search over a list of documents with the default engine.
	Search(ctx context.Context, request *SearchRequest) (*SearchResponse, error)

	// SearchWithEngine performs a semantic search over a list of documents with the specified engine.
	SearchWithEngine(ctx context.Context, engine string, request *SearchRequest) (*SearchResponse, error)

	// Embeddings returns an embedding using the provided request.
	Embeddings(ctx context.Context, request *EmbeddingsRequest) (*EmbeddingsResponse, error)

	// Image returns an image using the provided request.
	Image(ctx context.Context, request *ImageRequest) (*ImageResponse, error)
}
Client is an API client to communicate with the OpenAI GPT-3 APIs
func NewClient ¶
func NewClient(apiKey string, options ...ClientOption) Client
NewClient returns a new OpenAI GPT-3 API client. An API key is required to use the client.
type ClientOption ¶
type ClientOption func(*client) *client
ClientOption are options that can be passed when creating a new client
func WithBaseURL ¶
func WithBaseURL(baseURL string) ClientOption
WithBaseURL is a client option that allows you to override the default base url of the client. The default base url is "https://api.openai.com/v1"
func WithDefaultEngine ¶
func WithDefaultEngine(engine string) ClientOption
WithDefaultEngine is a client option that allows you to override the default engine of the client
func WithHTTPClient ¶
func WithHTTPClient(httpClient *http.Client) ClientOption
WithHTTPClient allows you to override the internal http.Client used
func WithOrg ¶
func WithOrg(id string) ClientOption
WithOrg is a client option that allows you to override the organization ID
func WithTimeout ¶
func WithTimeout(timeout time.Duration) ClientOption
WithTimeout is a client option that allows you to override the default timeout duration of requests for the client. The default is 30 seconds. If you are overriding the http client as well, just include the timeout there.
func WithUserAgent ¶
func WithUserAgent(userAgent string) ClientOption
WithUserAgent is a client option that allows you to override the default user agent of the client
type CompletionRequest ¶
type CompletionRequest struct {
	Model string `json:"model"`
	// Prompt sets a list of string prompts to use.
	Prompt []string `json:"prompt,omitempty"`
	// Suffix comes after a completion of inserted text.
	Suffix string `json:"suffix,omitempty"`
	// MaxTokens sets how many tokens to complete up to. Max of 512.
	MaxTokens int `json:"max_tokens,omitempty"`
	// Temperature sets the sampling temperature to use.
	Temperature float32 `json:"temperature,omitempty"`
	// TopP sets an alternative to temperature for nucleus sampling.
	TopP *float32 `json:"top_p,omitempty"`
	// N sets how many choices to create for each prompt.
	N *int `json:"n"`
	// Stream sets whether to stream back results or not. Don't set this value in the request yourself
	// as it will be overridden depending on whether you use the CompletionStream or Completion methods.
	Stream bool `json:"stream,omitempty"`
	// LogProbs sets whether to include the probabilities of the most likely tokens.
	LogProbs *int `json:"logprobs"`
	// Echo sets whether to echo back the prompt in addition to the completion.
	Echo bool `json:"echo"`
	// Stop sets up to 4 sequences where the API will stop generating tokens. The response will not contain the stop sequence.
	Stop []string `json:"stop,omitempty"`
	// PresencePenalty sets a number between 0 and 1 that penalizes tokens that have already appeared in the text so far.
	PresencePenalty float32 `json:"presence_penalty"`
	// FrequencyPenalty sets a number between 0 and 1 that penalizes tokens based on their existing frequency in the text so far.
	FrequencyPenalty float32 `json:"frequency_penalty"`
	// BestOf sets how many of the n best completions to return. Defaults to 1.
	BestOf int `json:"best_of,omitempty"`
	// LogitBias modifies the probability of specific tokens appearing in the completion.
	LogitBias map[string]float32 `json:"logit_bias,omitempty"`
	// User sets an end-user identifier. Can be used to associate completions generated by a specific user.
	User string `json:"user,omitempty"`
}
CompletionRequest is a request for the completions API
type CompletionResponse ¶
type CompletionResponse struct {
	ID      string                     `json:"id"`
	Object  string                     `json:"object"`
	Created int                        `json:"created"`
	Model   string                     `json:"model"`
	Choices []CompletionResponseChoice `json:"choices"`
	Usage   CompletionResponseUsage    `json:"usage"`
}
CompletionResponse is the full response from a request to the completions API
type CompletionResponseChoice ¶
type CompletionResponseChoice struct {
	Text         string        `json:"text"`
	Index        int           `json:"index"`
	LogProbs     LogProbResult `json:"logprobs"`
	FinishReason string        `json:"finish_reason"`
}
CompletionResponseChoice is one of the choices returned in the response to the Completions API
type CompletionResponseUsage ¶
type CompletionResponseUsage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}
CompletionResponseUsage is the object that returns how many tokens the completion's request used
type EditsRequest ¶
type EditsRequest struct {
	// Model is the ID of the model to use. You can use the List models API to see all of your available models,
	// or see our Model overview for descriptions of them.
	Model string `json:"model"`
	// Input is the input text to use as a starting point for the edit.
	Input string `json:"input,omitempty"`
	// Instruction is the instruction that tells the model how to edit the prompt.
	Instruction string `json:"instruction"`
	// N is how many edits to generate for the input and instruction. Defaults to 1.
	N *int `json:"n,omitempty"`
	// Temperature is the sampling temperature to use.
	Temperature *float32 `json:"temperature,omitempty"`
	// TopP is an alternative to temperature for nucleus sampling.
	TopP *float32 `json:"top_p,omitempty"`
}
EditsRequest is a request for the edits API
type EditsResponse ¶
type EditsResponse struct {
	Object  string                `json:"object"`
	Created int                   `json:"created"`
	Choices []EditsResponseChoice `json:"choices"`
	Usage   EditsResponseUsage    `json:"usage"`
}
EditsResponse is the full response from a request to the edits API
type EditsResponseChoice ¶
type EditsResponseChoice struct {
	Text  string `json:"text"`
	Index int    `json:"index"`
}
EditsResponseChoice is one of the choices returned in the response to the Edits API
type EditsResponseUsage ¶
type EditsResponseUsage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}
EditsResponseUsage is a structure used in the response from a request to the edits API
type EmbeddingsRequest ¶
type EmbeddingsRequest struct {
	// Input is the text to get embeddings for, encoded as a string or array of tokens. To get embeddings
	// for multiple inputs in a single request, pass an array of strings or array of token arrays.
	// Each input must not exceed 2048 tokens in length.
	Input []string `json:"input"`
	// Model is the ID of the model to use.
	Model string `json:"model"`
	// User is an optional parameter meant to be used to trace abusive requests
	// back to the originating user. OpenAI states:
	// "The [user] IDs should be a string that uniquely identifies each user. We recommend hashing
	// their username or email address, in order to avoid sending us any identifying information.
	// If you offer a preview of your product to non-logged in users, you can send a session ID
	// instead."
	User string `json:"user,omitempty"`
}
EmbeddingsRequest is a request for the Embeddings API
type EmbeddingsResponse ¶
type EmbeddingsResponse struct {
	Object string             `json:"object"`
	Data   []EmbeddingsResult `json:"data"`
	Usage  EmbeddingsUsage    `json:"usage"`
}
EmbeddingsResponse is the response from a create embeddings request. See: https://beta.openai.com/docs/api-reference/embeddings/create
type EmbeddingsResult ¶
type EmbeddingsResult struct {
	// The type of object returned (e.g., "list", "object")
	Object string `json:"object"`
	// The embedding data for the input
	Embedding []float64 `json:"embedding"`
	Index     int       `json:"index"`
}
EmbeddingsResult is the inner result of a create embeddings request, containing the embeddings for a single input.
type EmbeddingsUsage ¶
type EmbeddingsUsage struct {
	// The number of tokens used by the prompt
	PromptTokens int `json:"prompt_tokens"`
	// The total tokens used
	TotalTokens int `json:"total_tokens"`
}
EmbeddingsUsage is the usage stats for an embeddings response
type EngineObject ¶
type EngineObject struct {
	ID     string `json:"id"`
	Object string `json:"object"`
	Owner  string `json:"owner"`
	Ready  bool   `json:"ready"`
}
EngineObject is contained in an engine response
type EnginesResponse ¶
type EnginesResponse struct {
	Data   []EngineObject `json:"data"`
	Object string         `json:"object"`
}
EnginesResponse is returned from the Engines API
type ImageRequest ¶ added in v0.0.2
type ImageRequest struct {
	Prompt         string `json:"prompt,omitempty"`
	N              int    `json:"n,omitempty"`
	Size           string `json:"size,omitempty"`
	ResponseFormat string `json:"response_format,omitempty"`
	User           string `json:"user,omitempty"`
}
ImageRequest represents the request structure for the image API.
type ImageResponse ¶ added in v0.0.2
type ImageResponse struct {
	Created int64                    `json:"created,omitempty"`
	Data    []ImageResponseDataInner `json:"data,omitempty"`
}
ImageResponse represents a response structure for image API.
type ImageResponseDataInner ¶ added in v0.0.2
type ImageResponseDataInner struct {
	URL     string `json:"url,omitempty"`
	B64JSON string `json:"b64_json,omitempty"`
}
ImageResponseDataInner represents a response data structure for image API.
type LogProbResult ¶
type LogProbResult struct {
	Tokens        []string             `json:"tokens"`
	TokenLogProbs []float32            `json:"token_logprobs"`
	TopLogProbs   []map[string]float32 `json:"top_logprobs"`
	TextOffset    []int                `json:"text_offset"`
}
LogProbResult represents the logprob result of a Choice
type Option ¶
type Option interface {
// contains filtered or unexported methods
}
Option sets gpt-3 Client option values.
type SearchData ¶
type SearchData struct {
	Document int     `json:"document"`
	Object   string  `json:"object"`
	Score    float64 `json:"score"`
}
SearchData is a single search result from the document search API
type SearchRequest ¶
type SearchRequest struct {
	Documents []string `json:"documents"`
	Query     string   `json:"query"`
}
SearchRequest is a request for the document search API
type SearchResponse ¶
type SearchResponse struct {
	Data   []SearchData `json:"data"`
	Object string       `json:"object"`
}
SearchResponse is the full response from a request to the document search API