Documentation ¶
Overview ¶
Package sdk provides primitives to interact with the openapi HTTP API.
Code generated by github.com/oapi-codegen/oapi-codegen/v2 version v2.4.1 DO NOT EDIT.
Index ¶
- Constants
- type BadRequest
- type ChatCompletionChoice
- type ChatCompletionChoiceFinishReason
- type ChatCompletionMessageToolCall
- type ChatCompletionMessageToolCallChunk
- type ChatCompletionMessageToolCallFunction
- type ChatCompletionStreamChoice
- type ChatCompletionStreamOptions
- type ChatCompletionStreamResponseDelta
- type ChatCompletionTokenLogprob
- type ChatCompletionTool
- type ChatCompletionToolType
- type Client
- type ClientOptions
- type CompletionUsage
- type CreateChatCompletionJSONRequestBody
- type CreateChatCompletionParams
- type CreateChatCompletionRequest
- type CreateChatCompletionResponse
- type CreateChatCompletionStreamResponse
- type Error
- type FunctionObject
- type FunctionParameters
- type InternalError
- type ListModelsParams
- type ListModelsResponse
- type ListToolsResponse
- type MCPNotExposed
- type MCPTool
- type Message
- type MessageRole
- type Model
- type Provider
- type ProviderRequest
- type ProviderResponse
- type ProviderSpecificResponse
- type ProxyPatchJSONBody
- type ProxyPatchJSONRequestBody
- type ProxyPostJSONBody
- type ProxyPostJSONRequestBody
- type ProxyPutJSONBody
- type ProxyPutJSONRequestBody
- type SSEvent
- type SSEventEvent
- type Unauthorized
Constants ¶
const (
BearerAuthScopes = "bearerAuth.Scopes"
)
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type ChatCompletionChoice ¶ added in v1.5.0
type ChatCompletionChoice struct {
	// FinishReason The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence,
	// `length` if the maximum number of tokens specified in the request was reached,
	// `content_filter` if content was omitted due to a flag from our content filters,
	// `tool_calls` if the model called a tool.
	FinishReason ChatCompletionChoiceFinishReason `json:"finish_reason"`

	// Index The index of the choice in the list of choices.
	Index int `json:"index"`

	// Message Message structure for provider requests
	Message Message `json:"message"`
}
ChatCompletionChoice defines model for ChatCompletionChoice.
type ChatCompletionChoiceFinishReason ¶ added in v1.5.0
type ChatCompletionChoiceFinishReason string
ChatCompletionChoiceFinishReason The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool.
const (
	ContentFilter ChatCompletionChoiceFinishReason = "content_filter"
	FunctionCall  ChatCompletionChoiceFinishReason = "function_call"
	Length        ChatCompletionChoiceFinishReason = "length"
	Stop          ChatCompletionChoiceFinishReason = "stop"
	ToolCalls     ChatCompletionChoiceFinishReason = "tool_calls"
)
Defines values for ChatCompletionChoiceFinishReason.
type ChatCompletionMessageToolCall ¶ added in v1.5.0
type ChatCompletionMessageToolCall struct {
	// Function The function that the model called.
	Function ChatCompletionMessageToolCallFunction `json:"function"`

	// Id The ID of the tool call.
	Id string `json:"id"`

	// Type The type of the tool. Currently, only `function` is supported.
	Type ChatCompletionToolType `json:"type"`
}
ChatCompletionMessageToolCall defines model for ChatCompletionMessageToolCall.
type ChatCompletionMessageToolCallChunk ¶ added in v1.5.0
type ChatCompletionMessageToolCallChunk struct {
	Index    int    `json:"index"`
	ID       string `json:"id,omitempty"`
	Type     string `json:"type,omitempty"`
	Function struct {
		Name      string `json:"name,omitempty"`
		Arguments string `json:"arguments,omitempty"`
	} `json:"function,omitempty"`
}
ChatCompletionMessageToolCallChunk represents a chunk of a tool call in a stream response.
type ChatCompletionMessageToolCallFunction ¶ added in v1.5.0
type ChatCompletionMessageToolCallFunction struct {
	// Arguments The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
	Arguments string `json:"arguments"`

	// Name The name of the function to call.
	Name string `json:"name"`
}
ChatCompletionMessageToolCallFunction The function that the model called.
type ChatCompletionStreamChoice ¶ added in v1.5.0
type ChatCompletionStreamChoice struct {
	Delta        ChatCompletionStreamResponseDelta `json:"delta"`
	Index        int                               `json:"index"`
	FinishReason string                            `json:"finish_reason"`
}
ChatCompletionStreamChoice represents a choice in a streaming chat completion response.
type ChatCompletionStreamOptions ¶ added in v1.5.0
type ChatCompletionStreamOptions struct {
	// IncludeUsage If set, an additional chunk will be streamed before the `data: [DONE]` message. The `usage` field on this chunk shows the token usage statistics for the entire request, and the `choices` field will always be an empty array. All other chunks will also include a `usage` field, but with a null value.
	IncludeUsage bool `json:"include_usage"`
}
ChatCompletionStreamOptions Options for streaming response. Only set this when you set `stream: true`.
type ChatCompletionStreamResponseDelta ¶ added in v1.5.0
type ChatCompletionStreamResponseDelta struct {
	Content          string                               `json:"content,omitempty"`
	ToolCalls        []ChatCompletionMessageToolCallChunk `json:"tool_calls,omitempty"`
	Role             string                               `json:"role,omitempty"`
	Reasoning        *string                              `json:"reasoning,omitempty"`
	ReasoningContent *string                              `json:"reasoning_content,omitempty"`
	Refusal          string                               `json:"refusal,omitempty"`
}
ChatCompletionStreamResponseDelta represents a chat completion delta generated by streamed model responses.
type ChatCompletionTokenLogprob ¶ added in v1.5.0
type ChatCompletionTokenLogprob struct {
	Token   string  `json:"token"`
	Logprob float64 `json:"logprob"`
	Bytes   []int   `json:"bytes"`
}
ChatCompletionTokenLogprob represents token log probability information.
type ChatCompletionTool ¶ added in v1.5.0
type ChatCompletionTool struct {
	Function FunctionObject `json:"function"`

	// Type The type of the tool. Currently, only `function` is supported.
	Type ChatCompletionToolType `json:"type"`
}
ChatCompletionTool defines model for ChatCompletionTool.
type ChatCompletionToolType ¶ added in v1.5.0
type ChatCompletionToolType string
ChatCompletionToolType The type of the tool. Currently, only `function` is supported.
const (
Function ChatCompletionToolType = "function"
)
Defines values for ChatCompletionToolType.
type Client ¶
type Client interface {
	WithAuthToken(token string) *clientImpl
	WithTools(tools *[]ChatCompletionTool) *clientImpl
	WithOptions(options *CreateChatCompletionRequest) *clientImpl
	WithHeaders(headers map[string]string) *clientImpl
	WithHeader(name, value string) *clientImpl
	ListModels(ctx context.Context) (*ListModelsResponse, error)
	ListProviderModels(ctx context.Context, provider Provider) (*ListModelsResponse, error)
	ListTools(ctx context.Context) (*ListToolsResponse, error)
	GenerateContent(ctx context.Context, provider Provider, model string, messages []Message) (*CreateChatCompletionResponse, error)
	GenerateContentStream(ctx context.Context, provider Provider, model string, messages []Message) (<-chan SSEvent, error)
	HealthCheck(ctx context.Context) error
}
Client represents the SDK client interface.
func NewClient ¶
func NewClient(options *ClientOptions) Client
NewClient creates a new SDK client with the specified options.
Example:
client := sdk.NewClient(&sdk.ClientOptions{
	BaseURL: "http://localhost:8080/v1",
	APIKey:  "your-api-key",
	Timeout: 30 * time.Second,
	Tools:   nil,
	Headers: map[string]string{
		"X-Custom-Header": "custom-value",
		"User-Agent":      "my-app/1.0",
	},
})
type ClientOptions ¶ added in v1.5.1
type ClientOptions struct {
	// APIKey is the API key to use for the client.
	APIKey string

	// BaseURL is the base URL to use for the client.
	BaseURL string

	// Timeout is the timeout to use for the client.
	Timeout time.Duration

	// Tools is the list of tools to use for the client.
	Tools *[]ChatCompletionTool

	// Headers is a map of custom headers to include with all requests.
	Headers map[string]string
}
ClientOptions represents the options that can be passed to the client.
type CompletionUsage ¶ added in v1.5.0
type CompletionUsage struct {
	// CompletionTokens Number of tokens in the generated completion.
	CompletionTokens int64 `json:"completion_tokens"`

	// PromptTokens Number of tokens in the prompt.
	PromptTokens int64 `json:"prompt_tokens"`

	// TotalTokens Total number of tokens used in the request (prompt + completion).
	TotalTokens int64 `json:"total_tokens"`
}
CompletionUsage Usage statistics for the completion request.
type CreateChatCompletionJSONRequestBody ¶ added in v1.5.0
type CreateChatCompletionJSONRequestBody = CreateChatCompletionRequest
CreateChatCompletionJSONRequestBody defines body for CreateChatCompletion for application/json ContentType.
type CreateChatCompletionParams ¶ added in v1.5.0
type CreateChatCompletionParams struct {
	// Provider Specific provider to use (default determined by model)
	Provider *Provider `form:"provider,omitempty" json:"provider,omitempty"`
}
CreateChatCompletionParams defines parameters for CreateChatCompletion.
type CreateChatCompletionRequest ¶ added in v1.5.0
type CreateChatCompletionRequest struct {
	// MaxTokens An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
	MaxTokens *int `json:"max_tokens,omitempty"`

	// Messages A list of messages comprising the conversation so far.
	Messages []Message `json:"messages"`

	// Model Model ID to use
	Model string `json:"model"`

	// ReasoningFormat The format of the reasoning content. Can be `raw` or `parsed`.
	// When specified as raw some reasoning models will output <think /> tags. When specified as parsed the model will output the reasoning under `reasoning` or `reasoning_content` attribute.
	ReasoningFormat *string `json:"reasoning_format,omitempty"`

	// Stream If set to true, the model response data will be streamed to the client as it is generated using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format).
	Stream *bool `json:"stream,omitempty"`

	// StreamOptions Options for streaming response. Only set this when you set `stream: true`.
	StreamOptions *ChatCompletionStreamOptions `json:"stream_options,omitempty"`

	// Tools A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.
	Tools *[]ChatCompletionTool `json:"tools,omitempty"`
}
CreateChatCompletionRequest defines model for CreateChatCompletionRequest.
type CreateChatCompletionResponse ¶ added in v1.5.0
type CreateChatCompletionResponse struct {
	// Choices A list of chat completion choices. Can be more than one if `n` is greater than 1.
	Choices []ChatCompletionChoice `json:"choices"`

	// Created The Unix timestamp (in seconds) of when the chat completion was created.
	Created int `json:"created"`

	// Id A unique identifier for the chat completion.
	Id string `json:"id"`

	// Model The model used for the chat completion.
	Model string `json:"model"`

	// Object The object type, which is always `chat.completion`.
	Object string `json:"object"`

	// Usage Usage statistics for the completion request.
	Usage *CompletionUsage `json:"usage,omitempty"`
}
CreateChatCompletionResponse Represents a chat completion response returned by the model, based on the provided input.
type CreateChatCompletionStreamResponse ¶ added in v1.5.0
type CreateChatCompletionStreamResponse struct {
	ID                string                       `json:"id"`
	Choices           []ChatCompletionStreamChoice `json:"choices"`
	Created           int                          `json:"created"`
	Model             string                       `json:"model"`
	SystemFingerprint string                       `json:"system_fingerprint,omitempty"`
	Object            string                       `json:"object"`
	Usage             *CompletionUsage             `json:"usage,omitempty"`
}
CreateChatCompletionStreamResponse represents a streamed chunk of a chat completion response.
type Error ¶ added in v1.5.0
type Error struct {
Error *string `json:"error,omitempty"`
}
Error defines model for Error.
type FunctionObject ¶ added in v1.5.0
type FunctionObject struct {
	// Description A description of what the function does, used by the model to choose when and how to call the function.
	Description *string `json:"description,omitempty"`

	// Name The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
	Name string `json:"name"`

	// Parameters The parameters the function accepts, described as a JSON Schema object. See the [guide](/docs/guides/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.
	// Omitting `parameters` defines a function with an empty parameter list.
	Parameters *FunctionParameters `json:"parameters,omitempty"`

	// Strict Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the `parameters` field. Only a subset of JSON Schema is supported when `strict` is `true`. Learn more about Structured Outputs in the [function calling guide](docs/guides/function-calling).
	Strict *bool `json:"strict,omitempty"`
}
FunctionObject defines model for FunctionObject.
type FunctionParameters ¶ added in v1.5.0
type FunctionParameters map[string]interface{}
FunctionParameters The parameters the function accepts, described as a JSON Schema object. See the [guide](/docs/guides/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format. Omitting `parameters` defines a function with an empty parameter list.
type InternalError ¶ added in v1.5.0
type InternalError = Error
InternalError defines model for InternalError.
type ListModelsParams ¶ added in v1.5.0
type ListModelsParams struct {
	// Provider Specific provider to query (optional)
	Provider *Provider `form:"provider,omitempty" json:"provider,omitempty"`
}
ListModelsParams defines parameters for ListModels.
type ListModelsResponse ¶ added in v1.3.0
type ListModelsResponse struct {
	Data     []Model   `json:"data"`
	Object   string    `json:"object"`
	Provider *Provider `json:"provider,omitempty"`
}
ListModelsResponse Response structure for listing models
type ListToolsResponse ¶ added in v1.8.0
type ListToolsResponse struct {
	// Data Array of available MCP tools
	Data []MCPTool `json:"data"`

	// Object Always "list"
	Object string `json:"object"`
}
ListToolsResponse Response structure for listing MCP tools
type MCPNotExposed ¶ added in v1.8.0
type MCPNotExposed = Error
MCPNotExposed defines model for MCPNotExposed.
type MCPTool ¶ added in v1.8.0
type MCPTool struct {
	// Description A description of what the tool does
	Description string `json:"description"`

	// InputSchema JSON schema for the tool's input parameters
	InputSchema *map[string]interface{} `json:"input_schema,omitempty"`

	// Name The name of the tool
	Name string `json:"name"`

	// Server The MCP server that provides this tool
	Server string `json:"server"`
}
MCPTool An MCP tool definition
type Message ¶
type Message struct {
	Content string `json:"content"`

	// Reasoning The reasoning of the chunk message. Same as reasoning_content.
	Reasoning *string `json:"reasoning,omitempty"`

	// ReasoningContent The reasoning content of the chunk message.
	ReasoningContent *string `json:"reasoning_content,omitempty"`

	// Role Role of the message sender
	Role MessageRole `json:"role"`

	ToolCallId *string                          `json:"tool_call_id,omitempty"`
	ToolCalls  *[]ChatCompletionMessageToolCall `json:"tool_calls,omitempty"`
}
Message Message structure for provider requests
type MessageRole ¶ added in v1.5.0
type MessageRole string
MessageRole Role of the message sender
const (
	Assistant MessageRole = "assistant"
	System    MessageRole = "system"
	Tool      MessageRole = "tool"
	User      MessageRole = "user"
)
Defines values for MessageRole.
type Model ¶
type Model struct {
	Created  int64    `json:"created"`
	Id       string   `json:"id"`
	Object   string   `json:"object"`
	OwnedBy  string   `json:"owned_by"`
	ServedBy Provider `json:"served_by"`
}
Model Common model information
type ProviderRequest ¶ added in v1.5.0
type ProviderRequest struct {
	Messages *[]struct {
		Content *string `json:"content,omitempty"`
		Role    *string `json:"role,omitempty"`
	} `json:"messages,omitempty"`
	Model       *string  `json:"model,omitempty"`
	Temperature *float32 `json:"temperature,omitempty"`
}
ProviderRequest defines model for ProviderRequest.
type ProviderResponse ¶ added in v1.5.0
type ProviderResponse = ProviderSpecificResponse
ProviderResponse Provider-specific response format. Examples:
OpenAI GET /v1/models?provider=openai response:

```json
{
  "provider": "openai",
  "object": "list",
  "data": [
    {
      "id": "gpt-4",
      "object": "model",
      "created": 1687882410,
      "owned_by": "openai",
      "served_by": "openai"
    }
  ]
}
```
Anthropic GET /v1/models?provider=anthropic response:

```json
{
  "provider": "anthropic",
  "object": "list",
  "data": [
    {
      "id": "claude-3-opus-20240229",
      "object": "model",
      "created": 1687882410,
      "owned_by": "anthropic",
      "served_by": "anthropic"
    }
  ]
}
```
type ProviderSpecificResponse ¶ added in v1.5.0
type ProviderSpecificResponse = map[string]interface{}
ProviderSpecificResponse Provider-specific response format. Examples:
OpenAI GET /v1/models?provider=openai response:

```json
{
  "provider": "openai",
  "object": "list",
  "data": [
    {
      "id": "gpt-4",
      "object": "model",
      "created": 1687882410,
      "owned_by": "openai",
      "served_by": "openai"
    }
  ]
}
```
Anthropic GET /v1/models?provider=anthropic response:

```json
{
  "provider": "anthropic",
  "object": "list",
  "data": [
    {
      "id": "claude-3-opus-20240229",
      "object": "model",
      "created": 1687882410,
      "owned_by": "anthropic",
      "served_by": "anthropic"
    }
  ]
}
```
type ProxyPatchJSONBody ¶ added in v1.5.0
type ProxyPatchJSONBody struct {
	Messages *[]struct {
		Content *string `json:"content,omitempty"`
		Role    *string `json:"role,omitempty"`
	} `json:"messages,omitempty"`
	Model       *string  `json:"model,omitempty"`
	Temperature *float32 `json:"temperature,omitempty"`
}
ProxyPatchJSONBody defines parameters for ProxyPatch.
type ProxyPatchJSONRequestBody ¶ added in v1.5.0
type ProxyPatchJSONRequestBody ProxyPatchJSONBody
ProxyPatchJSONRequestBody defines body for ProxyPatch for application/json ContentType.
type ProxyPostJSONBody ¶ added in v1.5.0
type ProxyPostJSONBody struct {
	Messages *[]struct {
		Content *string `json:"content,omitempty"`
		Role    *string `json:"role,omitempty"`
	} `json:"messages,omitempty"`
	Model       *string  `json:"model,omitempty"`
	Temperature *float32 `json:"temperature,omitempty"`
}
ProxyPostJSONBody defines parameters for ProxyPost.
type ProxyPostJSONRequestBody ¶ added in v1.5.0
type ProxyPostJSONRequestBody ProxyPostJSONBody
ProxyPostJSONRequestBody defines body for ProxyPost for application/json ContentType.
type ProxyPutJSONBody ¶ added in v1.5.0
type ProxyPutJSONBody struct {
	Messages *[]struct {
		Content *string `json:"content,omitempty"`
		Role    *string `json:"role,omitempty"`
	} `json:"messages,omitempty"`
	Model       *string  `json:"model,omitempty"`
	Temperature *float32 `json:"temperature,omitempty"`
}
ProxyPutJSONBody defines parameters for ProxyPut.
type ProxyPutJSONRequestBody ¶ added in v1.5.0
type ProxyPutJSONRequestBody ProxyPutJSONBody
ProxyPutJSONRequestBody defines body for ProxyPut for application/json ContentType.
type SSEvent ¶ added in v1.4.0
type SSEvent struct {
	Data  *[]byte       `json:"data,omitempty"`
	Event *SSEventEvent `json:"event,omitempty"`
	Retry *int          `json:"retry,omitempty"`
}
SSEvent defines model for SSEvent.
type SSEventEvent ¶ added in v1.5.0
type SSEventEvent string
SSEventEvent defines model for SSEvent.Event.
const (
	ContentDelta SSEventEvent = "content-delta"
	ContentEnd   SSEventEvent = "content-end"
	ContentStart SSEventEvent = "content-start"
	MessageEnd   SSEventEvent = "message-end"
	MessageStart SSEventEvent = "message-start"
	StreamEnd    SSEventEvent = "stream-end"
	StreamStart  SSEventEvent = "stream-start"
)
Defines values for SSEventEvent.
type Unauthorized ¶ added in v1.5.0
type Unauthorized = Error
Unauthorized defines model for Unauthorized.