Documentation ¶
Index ¶
- func ToLLMSFromResponses(resp *ResponsesResponse) *llm.GenerateResponse
- func ToLLMSResponse(resp *Response) *llm.GenerateResponse
- type APIKeyProvider
- type AuthDiagnosticsProvider
- type ChatGPTAccountIDProvider
- type ChatGPTBackendResponsesPayload
- type Choice
- type Client
- func (c *Client) Generate(ctx context.Context, request *llm.GenerateRequest) (*llm.GenerateResponse, error)
- func (c *Client) Implements(feature string) bool
- func (c *Client) Stream(ctx context.Context, request *llm.GenerateRequest) (<-chan llm.StreamEvent, error)
- func (c *Client) SupportsAnchorContinuation() bool
- func (c *Client) ToRequest(request *llm.GenerateRequest) (*Request, error)
- type ClientOption
- func WithAPIKeyProvider(provider APIKeyProvider) ClientOption
- func WithAuthDiagnosticsProvider(provider AuthDiagnosticsProvider) ClientOption
- func WithAuthSource(source string) ClientOption
- func WithBaseURL(baseURL string) ClientOption
- func WithChatGPTAccountID(accountID string) ClientOption
- func WithChatGPTAccountIDProvider(provider ChatGPTAccountIDProvider) ClientOption
- func WithCodexBetaFeatures(features string) ClientOption
- func WithContextContinuation(enabled *bool) ClientOption
- func WithHTTPClient(httpClient *http.Client) ClientOption
- func WithLoggingEnabled(enabled bool) ClientOption
- func WithMaxTokens(max int) ClientOption
- func WithModel(model string) ClientOption
- func WithOriginator(originator string) ClientOption
- func WithTemperature(temp float64) ClientOption
- func WithTimeout(timeoutSeconds int) ClientOption
- func WithUsageListener(l basecfg.UsageListener) ClientOption
- func WithUserAgent(userAgent string) ClientOption
- type Container
- type ContentItem
- type DeltaMessage
- type File
- type FunctionCall
- type FunctionCallDelta
- type ImageURL
- type InputItem
- type Message
- type Request
- type Response
- type ResponsesContentItem
- type ResponsesImageURL
- type ResponsesOutputItem
- type ResponsesPayload
- type ResponsesResponse
- type ResponsesTool
- type ResponsesUsage
- type StreamChoice
- type StreamOptions
- type StreamResponse
- type TextControls
- type TextFormat
- type Tool
- type ToolCall
- type ToolCallDelta
- type ToolDefinition
- type Usage
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func ToLLMSFromResponses ¶
func ToLLMSFromResponses(resp *ResponsesResponse) *llm.GenerateResponse
ToLLMSFromResponses converts a Responses API response to llm.GenerateResponse.
func ToLLMSResponse ¶
func ToLLMSResponse(resp *Response) *llm.GenerateResponse
ToLLMSResponse converts a Response to an llm.GenerateResponse.
Types ¶
type AuthDiagnosticsProvider ¶
type ChatGPTBackendResponsesPayload ¶
type ChatGPTBackendResponsesPayload struct {
Model string `json:"model"`
Instructions string `json:"instructions"`
Input []InputItem `json:"input"`
Tools []ResponsesTool `json:"tools,omitempty"`
ToolChoice interface{} `json:"tool_choice,omitempty"`
ParallelToolCalls bool `json:"parallel_tool_calls,omitempty"`
Reasoning interface{} `json:"reasoning,omitempty"`
Store bool `json:"store"`
Stream bool `json:"stream"`
Include []string `json:"include,omitempty"`
PromptCacheKey string `json:"prompt_cache_key,omitempty"`
Text *TextControls `json:"text,omitempty"`
}
ChatGPTBackendResponsesPayload is the dedicated request contract for https://chatgpt.com/backend-api/codex/responses.
Do not add v1/responses-only fields here (e.g. temperature, top_p, n, max_output_tokens); the backend rejects several of them.
func ToChatGPTBackendResponsesPayload ¶
func ToChatGPTBackendResponsesPayload(req *Request) *ChatGPTBackendResponsesPayload
ToChatGPTBackendResponsesPayload builds a backend-specific payload. It intentionally applies backend constraints and role adaptations that differ from OpenAI v1/responses.
type Choice ¶
type Choice struct {
Index int `json:"index"`
Message Message `json:"message"`
FinishReason string `json:"finish_reason"`
}
Choice represents a choice in the OpenAI API response.
type Client ¶
type Client struct {
basecfg.Config
APIKey string
// APIKeyProvider resolves the API key at call time (e.g., from OAuth token exchange).
// When set, it is used only if APIKey is empty.
APIKeyProvider APIKeyProvider
// UserAgent overrides the default User-Agent header when specified and allowed.
UserAgent string
// EnableLogging toggles provider runtime logs (auth/ws diagnostics).
EnableLogging bool
// Originator mirrors Codex default header style for backend-api compatibility.
Originator string
// CodexBetaFeatures maps to x-codex-beta-features header when set.
CodexBetaFeatures string
// AuthSource is a redacted label indicating where auth keys are resolved from.
AuthSource string
// AuthDiagnosticsProvider returns redacted runtime diagnostics for auth decisions.
AuthDiagnosticsProvider AuthDiagnosticsProvider
// ChatGPTAccountIDProvider resolves workspace/account id for ChatGPT backend requests.
ChatGPTAccountIDProvider ChatGPTAccountIDProvider
// ChatGPTAccountID is an optional static workspace/account id.
ChatGPTAccountID string
// Defaults applied when GenerateRequest.Options is nil or leaves the
// respective field unset.
MaxTokens int
Temperature *float64
// ContextContinuation controls whether this client should use
// response continuation by response_id when supported. When nil,
// continuation is considered enabled.
ContextContinuation *bool
// contains filtered or unexported fields
}
Client represents an OpenAI API client.
func NewClient ¶
func NewClient(apiKey, model string, options ...ClientOption) *Client
NewClient creates a new OpenAI client with the given API key and model.
func (*Client) Generate ¶
func (c *Client) Generate(ctx context.Context, request *llm.GenerateRequest) (*llm.GenerateResponse, error)
Generate sends a chat request to the OpenAI API and returns the response.
func (*Client) Implements ¶
func (c *Client) Implements(feature string) bool
func (*Client) Stream ¶
func (c *Client) Stream(ctx context.Context, request *llm.GenerateRequest) (<-chan llm.StreamEvent, error)
Stream sends a chat request to the OpenAI API with streaming enabled and returns a channel of partial responses.
func (*Client) SupportsAnchorContinuation ¶
func (c *Client) SupportsAnchorContinuation() bool
SupportsAnchorContinuation reports whether previous_response_id-style continuation should be attempted for this client.
type ClientOption ¶
type ClientOption func(*Client)
ClientOption mutates an OpenAI Client instance.
func WithAPIKeyProvider ¶
func WithAPIKeyProvider(provider APIKeyProvider) ClientOption
WithAPIKeyProvider configures a resolver used to obtain an API key at call time. This is intended for auth flows that mint or refresh API keys dynamically.
func WithAuthDiagnosticsProvider ¶
func WithAuthDiagnosticsProvider(provider AuthDiagnosticsProvider) ClientOption
WithAuthDiagnosticsProvider sets the runtime auth diagnostics producer.
func WithAuthSource ¶
func WithAuthSource(source string) ClientOption
WithAuthSource sets a redacted label describing auth source selection.
func WithBaseURL ¶
func WithBaseURL(baseURL string) ClientOption
func WithChatGPTAccountID ¶
func WithChatGPTAccountID(accountID string) ClientOption
WithChatGPTAccountID sets a static ChatGPT workspace/account ID header value.
func WithChatGPTAccountIDProvider ¶
func WithChatGPTAccountIDProvider(provider ChatGPTAccountIDProvider) ClientOption
WithChatGPTAccountIDProvider sets a runtime resolver for the ChatGPT workspace/account ID.
func WithCodexBetaFeatures ¶
func WithCodexBetaFeatures(features string) ClientOption
WithCodexBetaFeatures sets the x-codex-beta-features header value.
func WithContextContinuation ¶
func WithContextContinuation(enabled *bool) ClientOption
WithContextContinuation sets a client-level toggle for server-side context continuation (continuation by response_id) when supported by the provider.
func WithHTTPClient ¶
func WithHTTPClient(httpClient *http.Client) ClientOption
func WithLoggingEnabled ¶
func WithLoggingEnabled(enabled bool) ClientOption
WithLoggingEnabled toggles provider runtime logs.
func WithMaxTokens ¶
func WithMaxTokens(max int) ClientOption
WithMaxTokens sets a default max_tokens that will be applied to any Generate request that does not explicitly specify MaxTokens in the options.
func WithModel ¶
func WithModel(model string) ClientOption
func WithOriginator ¶
func WithOriginator(originator string) ClientOption
WithOriginator sets an explicit originator header value.
func WithTemperature ¶
func WithTemperature(temp float64) ClientOption
WithTemperature sets a default temperature applied when a Generate request does not specify it.
func WithTimeout ¶
func WithTimeout(timeoutSeconds int) ClientOption
func WithUsageListener ¶
func WithUsageListener(l basecfg.UsageListener) ClientOption
WithUsageListener assigns a token usage listener to the client.
func WithUserAgent ¶
func WithUserAgent(userAgent string) ClientOption
WithUserAgent sets a User-Agent override for OpenAI requests. The override is applied only when the value starts with "openai" (case-insensitive).
type ContentItem ¶
type ContentItem struct {
Type string `json:"type"`
Text string `json:"text,omitempty"`
ImageURL *ImageURL `json:"image_url,omitempty"`
File *File `json:"file,omitempty"`
}
ContentItem represents a single content item in a message for the OpenAI API.
type DeltaMessage ¶
type DeltaMessage struct {
Role string `json:"role,omitempty"`
Content *string `json:"content,omitempty"`
ToolCalls []ToolCallDelta `json:"tool_calls,omitempty"`
}
type FunctionCall ¶
FunctionCall represents a function call in the OpenAI API.
type FunctionCallDelta ¶
type InputItem ¶
type InputItem struct {
Type string `json:"type"`
// Message fields.
Role string `json:"role,omitempty"`
// Name is not supported by Responses API and must be omitted.
Content []ResponsesContentItem `json:"content,omitempty"`
// ToolCallID is required by OpenAI when role == "tool" to associate
// the tool result with a prior assistant tool_call request.
ToolCallID string `json:"tool_call_id,omitempty"`
// function_call_output fields.
CallID string `json:"call_id,omitempty"`
Output string `json:"output,omitempty"`
}
type Message ¶
type Message struct {
Role string `json:"role"`
Content interface{} `json:"content,omitempty"` // Can be string or []ContentItem
Name string `json:"name,omitempty"`
FunctionCall *FunctionCall `json:"function_call,omitempty"`
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
ToolCallId string `json:"tool_call_id,omitempty"`
}
Message represents a message in the OpenAI API request.
type Request ¶
type Request struct {
Tools []Tool `json:"tools,omitempty"`
Model string `json:"model"`
Messages []Message `json:"messages"`
Temperature *float64 `json:"temperature,omitempty"`
MaxTokens int `json:"max_completion_tokens,omitempty"`
TopP float64 `json:"top_p,omitempty"`
N int `json:"n,omitempty"`
Stream bool `json:"stream,omitempty"`
StreamOptions *StreamOptions `json:"stream_options,omitempty"`
// Reasoning enables configuration of internal chain-of-thought reasoning features.
Reasoning *llm.Reasoning `json:"reasoning,omitempty"`
// Instructions provides system guidance for the Responses API.
Instructions string `json:"instructions,omitempty"`
// PromptCacheKey enables provider-side prompt caching when supported.
PromptCacheKey string `json:"prompt_cache_key,omitempty"`
// Text controls output formatting/verbosity for the Responses API.
Text *TextControls `json:"text,omitempty"`
ToolChoice interface{} `json:"tool_choice,omitempty"`
ParallelToolCalls bool `json:"parallel_tool_calls,omitempty"`
// PreviousResponseID allows continuing a prior Responses API call.
PreviousResponseID string `json:"previous_response_id,omitempty"`
// EnableCodeInterpreter controls stream-only injection of a default
// code_interpreter tool in Responses API payloads.
EnableCodeInterpreter bool `json:"-"`
// EnableImageGeneration controls stream-only injection of the built-in
// image_generation tool in Responses API payloads.
EnableImageGeneration bool `json:"-"`
}
Request represents the request structure for the OpenAI API.
func ToRequest ¶
func ToRequest(request *llm.GenerateRequest) *Request
ToRequest is a convenience wrapper retained for backward-compatible tests. It constructs a default client and adapts an llm.GenerateRequest to a provider Request. Errors are ignored in this wrapper; callers that require error handling should use Client.ToRequest.
type Response ¶
type Response struct {
ID string `json:"id"`
Object string `json:"object"`
Created int64 `json:"created"`
Model string `json:"model"`
Choices []Choice `json:"choices"`
Usage Usage `json:"usage"`
}
Response represents the response structure from the OpenAI API.
type ResponsesContentItem ¶
type ResponsesContentItem struct {
Type string `json:"type"`
Text string `json:"text,omitempty"`
ImageURL string `json:"image_url,omitempty"`
FileData string `json:"file_data,omitempty"`
FileName string `json:"filename,omitempty"`
Detail string `json:"detail,omitempty"`
FileID string `json:"file_id,omitempty"`
// Function call output back to the model
CallID string `json:"call_id,omitempty"`
Output string `json:"output,omitempty"`
// Allow passthrough for future shapes (e.g., input_audio)
Extra map[string]interface{} `json:"-"`
}
type ResponsesImageURL ¶
type ResponsesOutputItem ¶
type ResponsesOutputItem struct {
// Type is typically "message".
Type string `json:"type"`
Role string `json:"role,omitempty"`
Content []ResponsesContentItem `json:"content,omitempty"`
// Tool calls when assistant requests tools
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
// Function-call item shape (when Type == "function_call")
CallID string `json:"call_id,omitempty"`
Name string `json:"name,omitempty"`
Status string `json:"status,omitempty"`
Arguments string `json:"arguments,omitempty"`
}
type ResponsesPayload ¶
type ResponsesPayload struct {
Model string `json:"model"`
Instructions string `json:"instructions,omitempty"`
Input []InputItem `json:"input"`
Tools []ResponsesTool `json:"tools,omitempty"`
Temperature *float64 `json:"temperature,omitempty"`
MaxOutputTokens int `json:"max_output_tokens,omitempty"`
TopP float64 `json:"top_p,omitempty"`
N int `json:"n,omitempty"`
Stream bool `json:"stream,omitempty"`
ToolChoice interface{} `json:"tool_choice,omitempty"`
ParallelToolCalls bool `json:"parallel_tool_calls,omitempty"`
Reasoning *llm.Reasoning `json:"reasoning,omitempty"`
PreviousResponseID string `json:"previous_response_id,omitempty"`
Store *bool `json:"store,omitempty"`
Include []string `json:"include,omitempty"`
PromptCacheKey string `json:"prompt_cache_key,omitempty"`
Text *TextControls `json:"text,omitempty"`
// Provider-specific metadata passthrough if needed in future
Extra map[string]interface{} `json:"-"`
}
ResponsesPayload is the request body for the /v1/responses API.
func ToResponsesPayload ¶
func ToResponsesPayload(req *Request) *ResponsesPayload
ToResponsesPayload converts a legacy Request into a Responses API payload.
type ResponsesResponse ¶
type ResponsesResponse struct {
ID string `json:"id"`
Status string `json:"status,omitempty"`
Model string `json:"model"`
Output []ResponsesOutputItem `json:"output"`
Usage ResponsesUsage `json:"usage"`
}
ResponsesResponse represents a non-streaming reply from the /v1/responses API or the final object emitted in the streaming completed event.
type ResponsesTool ¶
type ResponsesTool struct {
Type string `json:"type"`
Name string `json:"name,omitempty"`
Description string `json:"description,omitempty"`
Parameters map[string]interface{} `json:"parameters,omitempty"`
Required []string `json:"required,omitempty"`
Strict *bool `json:"strict,omitempty"`
Container *Container `json:"container,omitempty"`
OutputFormat string `json:"output_format,omitempty"`
Background string `json:"background,omitempty"`
Size string `json:"size,omitempty"`
}
ResponsesTool is the Responses API tool schema. For function tools, the name/description/parameters are top-level (not nested under "function").
type ResponsesUsage ¶
type StreamChoice ¶
type StreamChoice struct {
Index int `json:"index"`
Delta DeltaMessage `json:"delta"`
FinishReason *string `json:"finish_reason"`
}
type StreamOptions ¶
type StreamOptions struct {
IncludeUsage bool `json:"include_usage,omitempty"`
}
StreamOptions controls additional streaming behavior.
type StreamResponse ¶
type StreamResponse struct {
ID string `json:"id"`
Object string `json:"object"`
Created int64 `json:"created"`
Model string `json:"model"`
Choices []StreamChoice `json:"choices"`
}
StreamResponse represents a single Server-Sent Event chunk from the OpenAI chat/completions endpoint when stream=true. The payload places partial deltas under choices[i].delta instead of choices[i].message.
type TextControls ¶
type TextControls struct {
Verbosity string `json:"verbosity,omitempty"`
Format *TextFormat `json:"format,omitempty"`
}
TextControls configures response formatting on the Responses API.
type TextFormat ¶
type TextFormat struct {
Type string `json:"type"`
Strict bool `json:"strict"`
Schema map[string]interface{} `json:"schema"`
Name string `json:"name,omitempty"`
}
TextFormat configures structured text output on the Responses API.
type Tool ¶
type Tool struct {
Type string `json:"type"`
Function ToolDefinition `json:"function"`
}
Tool represents a tool in the OpenAI API.
type ToolCall ¶
type ToolCall struct {
ID string `json:"id"`
Type string `json:"type"`
Function FunctionCall `json:"function"`
}
ToolCall represents a tool call in the OpenAI API.
type ToolCallDelta ¶
type ToolCallDelta struct {
Index int `json:"index"`
ID string `json:"id,omitempty"`
Type string `json:"type,omitempty"`
Function FunctionCallDelta `json:"function,omitempty"`
}
ToolCallDelta mirrors the incremental tool call fields included in streaming deltas. Arguments are delivered as a concatenated string across multiple events.
type ToolDefinition ¶
type ToolDefinition struct {
Name string `json:"name"`
Description string `json:"description,omitempty"`
Parameters map[string]interface{} `json:"parameters,omitempty"`
Required []string `json:"required,omitempty"`
Strict bool `json:"strict,omitempty"`
}
ToolDefinition represents a tool definition in the OpenAI API.
type Usage ¶
type Usage struct {
PromptTokens int `json:"prompt_tokens"`
CompletionTokens int `json:"completion_tokens"`
TotalTokens int `json:"total_tokens"`
// Some OpenAI responses provide flattened fields in addition to details.
// Support both shapes to ensure robust parsing across models/endpoints.
PromptCachedTokens int `json:"prompt_cached_tokens,omitempty"`
ReasoningTokens int `json:"reasoning_tokens,omitempty"`
CompletionReasoningTokens int `json:"completion_reasoning_tokens,omitempty"`
PromptTokensDetails struct {
CachedTokens int `json:"cached_tokens"`
AudioTokens int `json:"audio_tokens"`
} `json:"prompt_tokens_details"`
CompletionTokensDetails struct {
ReasoningTokens int `json:"reasoning_tokens"`
AudioTokens int `json:"audio_tokens"`
AcceptedPredictionTokens int `json:"accepted_prediction_tokens"`
RejectedPredictionTokens int `json:"rejected_prediction_tokens"`
} `json:"completion_tokens_details"`
}
Usage represents token usage information in the OpenAI API response.