Documentation
Overview ¶
Create an Azure OpenAI inference endpoint.
Create an inference endpoint to perform an inference task with the `azureopenai` service.
The list of chat completion models that you can choose from in your Azure OpenAI deployment includes:
* [GPT-4 and GPT-4 Turbo models](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#gpt-4-and-gpt-4-turbo-models)
* [GPT-3.5](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#gpt-35)
The list of embeddings models that you can choose from in your deployment can be found in the [Azure models documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#embeddings).
Index ¶
- Variables
- type NewPutAzureopenai
- type PutAzureopenai
- func (r *PutAzureopenai) ChunkingSettings(chunkingsettings *types.InferenceChunkingSettings) *PutAzureopenai
- func (r PutAzureopenai) Do(providedCtx context.Context) (*Response, error)
- func (r *PutAzureopenai) ErrorTrace(errortrace bool) *PutAzureopenai
- func (r *PutAzureopenai) FilterPath(filterpaths ...string) *PutAzureopenai
- func (r *PutAzureopenai) Header(key, value string) *PutAzureopenai
- func (r *PutAzureopenai) HttpRequest(ctx context.Context) (*http.Request, error)
- func (r *PutAzureopenai) Human(human bool) *PutAzureopenai
- func (r PutAzureopenai) Perform(providedCtx context.Context) (*http.Response, error)
- func (r *PutAzureopenai) Pretty(pretty bool) *PutAzureopenai
- func (r *PutAzureopenai) Raw(raw io.Reader) *PutAzureopenai
- func (r *PutAzureopenai) Request(req *Request) *PutAzureopenai
- func (r *PutAzureopenai) Service(service azureopenaiservicetype.AzureOpenAIServiceType) *PutAzureopenai
- func (r *PutAzureopenai) ServiceSettings(servicesettings *types.AzureOpenAIServiceSettings) *PutAzureopenai
- func (r *PutAzureopenai) TaskSettings(tasksettings *types.AzureOpenAITaskSettings) *PutAzureopenai
- type Request
- type Response
Constants ¶
This section is empty.
Variables ¶
var ErrBuildPath = errors.New("cannot build path, check for missing path parameters")
ErrBuildPath is returned when required path parameters are missing while building the request.
Functions ¶
This section is empty.
Types ¶
type NewPutAzureopenai ¶
type NewPutAzureopenai func(tasktype, azureopenaiinferenceid string) *PutAzureopenai
NewPutAzureopenai is a function type alias used in the library index.
func NewPutAzureopenaiFunc ¶
func NewPutAzureopenaiFunc(tp elastictransport.Interface) NewPutAzureopenai
NewPutAzureopenaiFunc returns a new instance of PutAzureopenai with the provided transport. Used in the index of the library, this allows retrieving every API in one place.
type PutAzureopenai ¶
type PutAzureopenai struct {
// contains filtered or unexported fields
}
func New ¶
func New(tp elastictransport.Interface) *PutAzureopenai
Create an Azure OpenAI inference endpoint.
Create an inference endpoint to perform an inference task with the `azureopenai` service.
The list of chat completion models that you can choose from in your Azure OpenAI deployment includes:
* [GPT-4 and GPT-4 Turbo models](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#gpt-4-and-gpt-4-turbo-models)
* [GPT-3.5](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#gpt-35)
The list of embeddings models that you can choose from in your deployment can be found in the [Azure models documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#embeddings).
https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-service-azure-openai.html
func (*PutAzureopenai) ChunkingSettings ¶
func (r *PutAzureopenai) ChunkingSettings(chunkingsettings *types.InferenceChunkingSettings) *PutAzureopenai
ChunkingSettings The chunking configuration object. API name: chunking_settings
func (PutAzureopenai) Do ¶
func (r PutAzureopenai) Do(providedCtx context.Context) (*Response, error)
Do runs the request through the transport, handles the response, and returns a putazureopenai.Response.
func (*PutAzureopenai) ErrorTrace ¶
func (r *PutAzureopenai) ErrorTrace(errortrace bool) *PutAzureopenai
ErrorTrace When set to `true` Elasticsearch will include the full stack trace of errors when they occur. API name: error_trace
func (*PutAzureopenai) FilterPath ¶
func (r *PutAzureopenai) FilterPath(filterpaths ...string) *PutAzureopenai
FilterPath Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch. API name: filter_path
func (*PutAzureopenai) Header ¶
func (r *PutAzureopenai) Header(key, value string) *PutAzureopenai
Header set a key, value pair in the PutAzureopenai headers map.
func (*PutAzureopenai) HttpRequest ¶
HttpRequest returns the http.Request object built from the given parameters.
func (*PutAzureopenai) Human ¶
func (r *PutAzureopenai) Human(human bool) *PutAzureopenai
Human When set to `true` will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines. API name: human
func (PutAzureopenai) Perform ¶
Perform runs the http.Request through the provided transport and returns an http.Response.
func (*PutAzureopenai) Pretty ¶
func (r *PutAzureopenai) Pretty(pretty bool) *PutAzureopenai
Pretty If set to `true` the returned JSON will be "pretty-formatted". Use this option for debugging only. API name: pretty
func (*PutAzureopenai) Raw ¶
func (r *PutAzureopenai) Raw(raw io.Reader) *PutAzureopenai
Raw takes a JSON payload as input, which is then passed to the http.Request. If specified, Raw takes precedence over the Request method.
func (*PutAzureopenai) Request ¶
func (r *PutAzureopenai) Request(req *Request) *PutAzureopenai
Request allows setting the request property with the appropriate payload.
func (*PutAzureopenai) Service ¶
func (r *PutAzureopenai) Service(service azureopenaiservicetype.AzureOpenAIServiceType) *PutAzureopenai
Service The type of service supported for the specified task type. In this case, `azureopenai`. API name: service
func (*PutAzureopenai) ServiceSettings ¶
func (r *PutAzureopenai) ServiceSettings(servicesettings *types.AzureOpenAIServiceSettings) *PutAzureopenai
ServiceSettings Settings used to install the inference model. These settings are specific to the `azureopenai` service. API name: service_settings
func (*PutAzureopenai) TaskSettings ¶
func (r *PutAzureopenai) TaskSettings(tasksettings *types.AzureOpenAITaskSettings) *PutAzureopenai
TaskSettings Settings to configure the inference task. These settings are specific to the task type you specified. API name: task_settings
type Request ¶
type Request struct {
	// ChunkingSettings The chunking configuration object.
	ChunkingSettings *types.InferenceChunkingSettings `json:"chunking_settings,omitempty"`
	// Service The type of service supported for the specified task type. In this case,
	// `azureopenai`.
	Service azureopenaiservicetype.AzureOpenAIServiceType `json:"service"`
	// ServiceSettings Settings used to install the inference model. These settings are specific to
	// the `azureopenai` service.
	ServiceSettings types.AzureOpenAIServiceSettings `json:"service_settings"`
	// TaskSettings Settings to configure the inference task.
	// These settings are specific to the task type you specified.
	TaskSettings *types.AzureOpenAITaskSettings `json:"task_settings,omitempty"`
}
Request holds the request body struct for the package putazureopenai
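When serialized, the Request struct above becomes the JSON body of the PUT inference call. A minimal sketch of what that body might look like for a text-embedding endpoint (the resource name, deployment ID, API version, and key are placeholders, not values taken from this documentation):

```json
{
  "service": "azureopenai",
  "service_settings": {
    "api_key": "<azure-openai-api-key>",
    "resource_name": "<your-resource-name>",
    "deployment_id": "<your-deployment-id>",
    "api_version": "2024-02-01"
  }
}
```

The optional `chunking_settings` and `task_settings` objects are omitted here; with their `omitempty` tags they are simply left out of the serialized body when unset.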
type Response ¶
type Response struct {
	// ChunkingSettings Chunking configuration object
	ChunkingSettings *types.InferenceChunkingSettings `json:"chunking_settings,omitempty"`
	// InferenceId The inference Id
	InferenceId string `json:"inference_id"`
	// Service The service type
	Service string `json:"service"`
	// ServiceSettings Settings specific to the service
	ServiceSettings json.RawMessage `json:"service_settings"`
	// TaskSettings Task settings specific to the service and task type
	TaskSettings json.RawMessage `json:"task_settings,omitempty"`
	// TaskType The task type
	TaskType tasktypeazureopenai.TaskTypeAzureOpenAI `json:"task_type"`
}
Response holds the response body struct for the package putazureopenai