Documentation
¶
Overview ¶
Package responses provides the service layer for the OpenAI Responses API.
Index ¶
- Constants
- func ForceFunction(name string) json.RawMessage
- func ForceToolChoice(toolType string, name string) json.RawMessage
- type Content
- type ContextConfig
- type Conversation
- type ConversationCli
- type ConversationItemList
- type ConversationItemsInclude
- type ConversationListOptions
- type Prompt
- type ReasoningConfig
- type Request
- type Response
- func (r *Response) ApplyPatchCalls() []output.ApplyPatchCall
- func (r *Response) CustomToolCalls() []output.CustomToolCall
- func (r *Response) FirstText() string
- func (r *Response) FunctionCalls() []output.FunctionCall
- func (r *Response) JoinedReasoningSummaries() string
- func (r *Response) JoinedTexts() string
- func (r *Response) LastText() string
- func (r *Response) MCPApprovalRequests() []output.MCPApprovalRequest
- func (r *Response) Parse() error
- func (r *Response) ReasoningSummaries() []string
- func (r *Response) Reasonings() []output.Reasoning
- func (r *Response) Refusals() []string
- func (r *Response) ShellCalls() []output.ShellCall
- func (r *Response) Texts() []string
- type Service
- type StreamOptions
- type TextFormatType
- type TextOptions
- type WSConn
Constants ¶
const (
	// Text format types
	TextFormatTypeText       = "text"
	TextFormatTypeJSONObject = "json_object"
	TextFormatTypeJSONSchema = "json_schema"

	// Service tiers
	ServiceTierAuto     = "auto"     // service tier configured in the Project settings
	ServiceTierDefault  = "default"  // standard pricing and performance for the selected model
	ServiceTierFlex     = "flex"     // slower but cheaper
	ServiceTierPriority = "priority" // faster but more expensive
)
Variables ¶
This section is empty.
Functions ¶
func ForceFunction ¶
func ForceFunction(name string) json.RawMessage
ForceFunction is a convenience function to force the use of a specific function tool. This is a backward-compatible wrapper around ForceToolChoice.
func ForceToolChoice ¶
func ForceToolChoice(toolType string, name string) json.RawMessage
ForceToolChoice generates a parameter value for the Request.ToolChoice field that forces the use of the specified tool.
Types ¶
type Content ¶
type Content interface {
string |
input.InputText |
input.InputImage |
input.InputFile |
output.OutputText |
output.Refusal |
output.FileSearchCall |
output.ComputerCall |
output.ComputerCallOutput |
output.WebSearchCall |
output.FunctionCall |
output.FunctionCallOutput |
output.CustomToolCall |
output.CustomToolCallOutput |
output.Reasoning |
output.Compaction |
output.ApplyPatchCall |
output.ApplyPatchCallOutput |
output.ShellCall |
output.ShellCallOutput |
input.ItemReference
}
Content is a type-constraint interface enumerating all types that can be used as content in the Responses API. These types may appear in the `Request.Input`, `Response.Outputs`, and `output.Message.Content` fields.
type ContextConfig ¶ added in v1.3.1
type ContextConfig struct {
// The context management entry type. Currently only "compaction" is supported.
Type string `json:"type"`
// Token threshold at which compaction should be triggered for this entry.
// Minimum 1000.
CompactThreshold int `json:"compact_threshold,omitempty"`
}
ContextConfig represents configuration options for context management.
func (ContextConfig) MarshalJSON ¶ added in v1.3.1
func (c ContextConfig) MarshalJSON() ([]byte, error)
MarshalJSON implements the json.Marshaler interface. Fills the "type" field with "compaction" if empty.
type Conversation ¶ added in v1.2.0
type Conversation struct {
ConversationCli `json:"-"` // implements API methods of the Conversation object
ID string `json:"id"`
Object string `json:"object"`
CreatedAt int `json:"created_at"`
Metadata map[string]string `json:"metadata,omitempty"`
}
Conversation represents a persisted conversation container on the server. It embeds the ConversationCli interface, which provides the API methods of the Conversation object.
type ConversationCli ¶ added in v1.2.0
type ConversationCli interface {
// Update sends the current state of the conversation to the API.
// Effectively saves changes in the metadata.
Update() error
// Delete removes the conversation from the API.
Delete() error
// ListItems retrieves items stored in the conversation.
ListItems(opts *ConversationListOptions) (*ConversationItemList, error)
// AppendItems adds new items to the conversation.
AppendItems(include *ConversationItemsInclude, items ...any) (*ConversationItemList, error)
// Item retrieves a single item from the conversation.
Item(include *ConversationItemsInclude, itemID string) (any, error)
// DeleteItem removes a single item from the conversation.
DeleteItem(itemID string) error
}
ConversationCli is a client that implements API methods of the Conversation object.
type ConversationItemList ¶ added in v1.2.0
type ConversationItemList struct {
Object string `json:"object"` // always "list"
Data []output.Any `json:"data"`
FirstID string `json:"first_id"`
LastID string `json:"last_id"`
HasMore bool `json:"has_more"`
ParsedData []any `json:"-"` // parsed data from the Data field
}
ConversationItemList is the paginated response returned when listing conversation items.
func (*ConversationItemList) Parse ¶ added in v1.2.0
func (l *ConversationItemList) Parse() error
Parse parses the []output.Any and places the parsed objects in ParsedData.
type ConversationItemsInclude ¶ added in v1.2.0
type ConversationItemsInclude struct {
// Include the sources of the web search tool call.
WebSearchCallActionSources bool
// Include the outputs of python code execution in code interpreter tool call items.
CodeInterpreterCallOutputs bool
// Include image urls from the computer call output.
ComputerCallOutputImageURL bool
// Include the search results of the file search tool call.
FileSearchCallResults bool
// Include image urls from the input message.
MessageInputImageURL bool
// Include logprobs with assistant messages.
MessageOutputTextLogprobs bool
// Include an encrypted version of reasoning tokens in reasoning item outputs.
// This enables reasoning items to be used in multi-turn conversations when using the Responses
// API statelessly (like when the store parameter is set to false, or when an organization is
// enrolled in the zero data retention program).
ReasoningEncryptedContent bool
}
ConversationItemsInclude holds boolean flags selecting which additional data to include in a ConversationItemList response. Flags set to true are converted into URL query values.
func (ConversationItemsInclude) Values ¶ added in v1.2.0
func (i ConversationItemsInclude) Values() []string
Values returns a slice of strings for flags set to true.
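The flag-to-string mapping can be sketched with a local mirror of a few fields. The query strings below are assumptions modeled on the Responses API "include" values (compare the Request.Include comment); the real mapping is defined by this package's Values method:

```go
package main

import "fmt"

// include mirrors a subset of ConversationItemsInclude for illustration.
type include struct {
	FileSearchCallResults     bool
	MessageOutputTextLogprobs bool
	ReasoningEncryptedContent bool
}

// values returns the query strings for flags set to true,
// in the spirit of ConversationItemsInclude.Values.
func (i include) values() []string {
	var out []string
	if i.FileSearchCallResults {
		out = append(out, "file_search_call.results")
	}
	if i.MessageOutputTextLogprobs {
		out = append(out, "message.output_text.logprobs")
	}
	if i.ReasoningEncryptedContent {
		out = append(out, "reasoning.encrypted_content")
	}
	return out
}

func main() {
	inc := include{FileSearchCallResults: true, ReasoningEncryptedContent: true}
	fmt.Println(inc.values())
}
```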
type ConversationListOptions ¶ added in v1.2.0
type ConversationListOptions struct {
Limit int
FirstID string
LastID string
Include *ConversationItemsInclude
}
ConversationListOptions configures pagination when listing conversation items.
type Prompt ¶ added in v1.0.1
type Prompt struct {
ID string `json:"id"`
Variables map[string]json.RawMessage `json:"variables"`
Version *string `json:"version"`
}
Prompt is a reference to a prompt template and its variables.
type ReasoningConfig ¶
type ReasoningConfig struct {
Effort string `json:"effort,omitempty"` // "none", "minimal", "low", "medium", or "high"
GenerateSummary string `json:"generate_summary,omitempty"` // "concise" or "detailed"
}
ReasoningConfig represents configuration options for reasoning models.
type Request ¶
type Request struct {
// Required
Model string `json:"model"`
Input any `json:"input"` // string or []Any
// Optional
Include []string `json:"include,omitempty"` // Additional data to include in response: "file_search_call.results", "message.input_image.image_url", "computer_call_output.output.image_url"
Instructions string `json:"instructions,omitempty"` // System message for context
Conversation any `json:"conversation,omitempty"` // ID or a Conversation object containing an ID
ContextManagement []ContextConfig `json:"context_management,omitempty"` // Compaction configuration
MaxOutputTokens int `json:"max_output_tokens,omitempty"` // Max tokens to generate
Metadata map[string]string `json:"metadata,omitempty"` // Key-value pairs
ParallelToolCalls *bool `json:"parallel_tool_calls,omitempty"` // Allow parallel tool calls, default true
PreviousResponseID string `json:"previous_response_id,omitempty"` // ID of previous response
Prompt *Prompt `json:"prompt,omitempty"` // Reference to a prompt template and its variables
PromptCacheKey string `json:"prompt_cache_key,omitempty"` // Used for matching similar requests with cached input
PromptCacheRetention string `json:"prompt_cache_retention,omitempty"` // Prompt cache retention policy: "in_memory" (default) or "24h"
Reasoning *ReasoningConfig `json:"reasoning,omitempty"` // Reasoning configuration
SafetyIdentifier string `json:"safety_identifier,omitempty"` // Stable unique identifier for end user, preferably anonymized
ServiceTier string `json:"service_tier,omitempty"` // Service tier to use, default "auto"
Store *bool `json:"store,omitempty"` // Whether to store the response, default true
Stream bool `json:"stream,omitempty"` // Stream the response, default false
StreamOptions *StreamOptions `json:"stream_options,omitempty"` // Streaming configuration
Temperature float64 `json:"temperature,omitempty"` // default 1
Text *TextOptions `json:"text,omitempty"` // Text format configuration
ToolChoice json.RawMessage `json:"tool_choice,omitempty"` // default "auto", can be "none", "required", or an object
TopP float64 `json:"top_p,omitempty"` // default 1
Truncation string `json:"truncation,omitempty"` // "auto" or "disabled"
User string `json:"user,omitempty"` // Deprecated: use SafetyIdentifier and PromptCacheKey instead
Background bool `json:"background,omitempty"` // if true, the API returns immediately with only a response ID
Generate *bool `json:"generate,omitempty"` // if false, warm up request state without generating model output
// names of tools/functions to include, will be marshaled as their full structs from tools registry
Tools []string `json:"-"`
// Custom (not part of the API)
// If set, tool calls will be returned instead of executed.
ReturnToolCalls bool `json:"-"` // default false
// If set, will be called on messages received alongside other outputs (e.g., tool calls)
// that would otherwise be returned in the response but can be handled sooner with this handler.
IntermediateMessageHandler func(output.Message) `json:"-"`
}
Request is the request body for the Responses API.
type Response ¶
Response is a wrapper for outputs returned from the Responses API.
func (*Response) ApplyPatchCalls ¶ added in v1.0.3
func (r *Response) ApplyPatchCalls() []output.ApplyPatchCall
ApplyPatchCalls returns a slice of ApplyPatchCall objects from the response.
func (*Response) CustomToolCalls ¶ added in v1.0.1
func (r *Response) CustomToolCalls() []output.CustomToolCall
CustomToolCalls returns a slice of CustomToolCall objects from the response.
func (*Response) FirstText ¶
FirstText returns the first text output in the response, or an empty string.
func (*Response) FunctionCalls ¶
func (r *Response) FunctionCalls() []output.FunctionCall
FunctionCalls returns a slice of FunctionCall objects from the response.
func (*Response) JoinedReasoningSummaries ¶
JoinedReasoningSummaries returns all reasoning summaries joined by newlines.
func (*Response) JoinedTexts ¶
JoinedTexts returns a single string joined from all text outputs in the response with newlines. Normally there's only one text output.
func (*Response) LastText ¶
LastText returns the last text output in the response, or an empty string.
func (*Response) MCPApprovalRequests ¶
func (r *Response) MCPApprovalRequests() []output.MCPApprovalRequest
MCPApprovalRequests returns a slice of MCPApprovalRequest objects from the response.
func (*Response) Parse ¶
Parse parses the []output.Any and places the parsed objects in ParsedOutputs.
func (*Response) ReasoningSummaries ¶
ReasoningSummaries returns a slice of summary texts from reasoning outputs.
func (*Response) Reasonings ¶
Reasonings returns a slice of Reasoning objects from the response.
func (*Response) ShellCalls ¶ added in v1.0.3
ShellCalls returns a slice of ShellCall objects from the response.
type Service ¶
type Service interface {
// Send sends a request to the Responses API.
Send(req *Request) (response *Response, err error)
// Stream sends a request with parameter "stream":true and returns a streaming iterator.
Stream(ctx context.Context, req *Request) (*streaming.StreamIterator, error)
// WebSocket opens a persistent WebSocket connection for response.create events.
WebSocket(ctx context.Context) (WSConn, error)
// NewMessage creates a new empty message.
NewMessage() *output.Message
// NewRequest creates a new empty request.
NewRequest() *Request
// Poll continuously fetches a background response until completion or failure.
// ctx controls cancellation; interval is time to wait between subsequent polls.
Poll(ctx context.Context, responseID string, interval time.Duration) (*Response, error)
// CreateConversation creates a new persistent conversation container.
CreateConversation(metadata map[string]string, items ...any) (*Conversation, error)
// Conversation retrieves a conversation by ID.
Conversation(id string) (*Conversation, error)
}
Service is the service layer for the OpenAI Responses API.
type StreamOptions ¶ added in v1.0.1
type StreamOptions struct {
// Stream obfuscation adds random characters to an obfuscation field on streaming delta events
to normalize payload sizes as a mitigation against certain side-channel attacks.
// These obfuscation fields are included by default, but add a small amount of overhead
// to the data stream.
// You can set include_obfuscation to false to optimize for bandwidth if you trust the network
// links between your application and the OpenAI API.
IncludeObfuscation bool `json:"include_obfuscation,omitempty"`
}
StreamOptions is a set of options for streaming responses.
type TextFormatType ¶
type TextFormatType struct {
Type string `json:"type"` // "text", "json_object", or "json_schema"
Schema json.RawMessage `json:"schema,omitempty"` // Schema for json_schema type
Name string `json:"name,omitempty"` // Name for json_schema type
Description string `json:"description,omitempty"` // Description for json_schema type
Strict bool `json:"strict,omitempty"` // Whether to enforce strict schema validation
}
TextFormatType specifies the output text format: plain text, a free-form JSON object, or JSON constrained by a schema.
type TextOptions ¶ added in v1.0.1
type TextOptions struct {
Format TextFormatType `json:"format"`
Verbosity string `json:"verbosity,omitempty"` // "low", "medium", or "high", default "medium"
}
TextOptions represents the format configuration for text responses.
type WSConn ¶ added in v1.3.0
type WSConn interface {
// Send sends one response.create event and returns a streaming iterator for
// the resulting server events.
Send(ctx context.Context, req *Request) (*streaming.StreamIterator, error)
// Warmup sends one response.create event with generate=false and returns the
// response ID that can be used as PreviousResponseID.
Warmup(ctx context.Context, req *Request) (string, error)
// Close closes the WebSocket connection.
Close() error
}
WSConn is a persistent WebSocket connection to the Responses API. It is created by Service.WebSocket.