Documentation ¶
Overview ¶
Create an Azure AI studio inference endpoint.
Create an inference endpoint to perform an inference task with the `azureaistudio` service.
Index ¶
- Variables
- type NewPutAzureaistudio
- type PutAzureaistudio
- func (r *PutAzureaistudio) ChunkingSettings(chunkingsettings *types.InferenceChunkingSettings) *PutAzureaistudio
- func (r PutAzureaistudio) Do(providedCtx context.Context) (*Response, error)
- func (r *PutAzureaistudio) ErrorTrace(errortrace bool) *PutAzureaistudio
- func (r *PutAzureaistudio) FilterPath(filterpaths ...string) *PutAzureaistudio
- func (r *PutAzureaistudio) Header(key, value string) *PutAzureaistudio
- func (r *PutAzureaistudio) HttpRequest(ctx context.Context) (*http.Request, error)
- func (r *PutAzureaistudio) Human(human bool) *PutAzureaistudio
- func (r PutAzureaistudio) Perform(providedCtx context.Context) (*http.Response, error)
- func (r *PutAzureaistudio) Pretty(pretty bool) *PutAzureaistudio
- func (r *PutAzureaistudio) Raw(raw io.Reader) *PutAzureaistudio
- func (r *PutAzureaistudio) Request(req *Request) *PutAzureaistudio
- func (r *PutAzureaistudio) Service(service azureaistudioservicetype.AzureAiStudioServiceType) *PutAzureaistudio
- func (r *PutAzureaistudio) ServiceSettings(servicesettings *types.AzureAiStudioServiceSettings) *PutAzureaistudio
- func (r *PutAzureaistudio) TaskSettings(tasksettings *types.AzureAiStudioTaskSettings) *PutAzureaistudio
- type Request
- type Response
Constants ¶
This section is empty.
Variables ¶
var ErrBuildPath = errors.New("cannot build path, check for missing path parameters")
ErrBuildPath is returned in case of missing parameters within the build of the request.
Functions ¶
This section is empty.
Types ¶
type NewPutAzureaistudio ¶
type NewPutAzureaistudio func(tasktype, azureaistudioinferenceid string) *PutAzureaistudio
NewPutAzureaistudio is a type alias for the index's constructor function.
func NewPutAzureaistudioFunc ¶
func NewPutAzureaistudioFunc(tp elastictransport.Interface) NewPutAzureaistudio
NewPutAzureaistudioFunc returns a new instance of PutAzureaistudio with the provided transport. Used in the index of the library, this allows retrieval of every API in one place.
type PutAzureaistudio ¶
type PutAzureaistudio struct {
// contains filtered or unexported fields
}
func New ¶
func New(tp elastictransport.Interface) *PutAzureaistudio
Create an Azure AI studio inference endpoint.
Create an inference endpoint to perform an inference task with the `azureaistudio` service.
https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-service-azure-ai-studio.html
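As a sketch of what this endpoint's request body might look like when serialized, the snippet below builds a minimal `azureaistudio` payload with `encoding/json`. The credential and target values are placeholders, and the `provider` and `endpoint_type` values are illustrative assumptions, not a complete or authoritative settings list.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildBody assembles an illustrative request body for
// PUT _inference/{task_type}/{inference_id} with the `azureaistudio` service.
// All values are placeholders, not real settings.
func buildBody() string {
	body := map[string]any{
		"service": "azureaistudio",
		"service_settings": map[string]any{
			"api_key":       "API_KEY",
			"endpoint_type": "token",
			"provider":      "openai",
			"target":        "TARGET_URI",
		},
	}
	out, err := json.Marshal(body)
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	fmt.Println(buildBody())
}
```

Note that `json.Marshal` sorts map keys alphabetically, so the output is deterministic.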
func (*PutAzureaistudio) ChunkingSettings ¶
func (r *PutAzureaistudio) ChunkingSettings(chunkingsettings *types.InferenceChunkingSettings) *PutAzureaistudio
ChunkingSettings The chunking configuration object. API name: chunking_settings
func (PutAzureaistudio) Do ¶
func (r PutAzureaistudio) Do(providedCtx context.Context) (*Response, error)
Do runs the request through the transport, handles the response, and returns a putazureaistudio.Response
func (*PutAzureaistudio) ErrorTrace ¶
func (r *PutAzureaistudio) ErrorTrace(errortrace bool) *PutAzureaistudio
ErrorTrace When set to `true`, Elasticsearch will include the full stack trace of errors when they occur. API name: error_trace
func (*PutAzureaistudio) FilterPath ¶
func (r *PutAzureaistudio) FilterPath(filterpaths ...string) *PutAzureaistudio
FilterPath Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch. API name: filter_path
func (*PutAzureaistudio) Header ¶
func (r *PutAzureaistudio) Header(key, value string) *PutAzureaistudio
Header set a key, value pair in the PutAzureaistudio headers map.
func (*PutAzureaistudio) HttpRequest ¶
func (r *PutAzureaistudio) HttpRequest(ctx context.Context) (*http.Request, error)
HttpRequest returns the http.Request object built from the given parameters.
func (*PutAzureaistudio) Human ¶
func (r *PutAzureaistudio) Human(human bool) *PutAzureaistudio
Human When set to `true` will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses consumed only by machines. API name: human
func (PutAzureaistudio) Perform ¶
func (r PutAzureaistudio) Perform(providedCtx context.Context) (*http.Response, error)
Perform runs the http.Request through the provided transport and returns an http.Response.
func (*PutAzureaistudio) Pretty ¶
func (r *PutAzureaistudio) Pretty(pretty bool) *PutAzureaistudio
Pretty If set to `true`, the returned JSON will be "pretty-formatted". Use this option for debugging only. API name: pretty
func (*PutAzureaistudio) Raw ¶
func (r *PutAzureaistudio) Raw(raw io.Reader) *PutAzureaistudio
Raw takes a JSON payload as input, which is then passed to the http.Request. If specified, Raw takes precedence over the Request method.
func (*PutAzureaistudio) Request ¶
func (r *PutAzureaistudio) Request(req *Request) *PutAzureaistudio
Request allows setting the request property with the appropriate payload.
func (*PutAzureaistudio) Service ¶
func (r *PutAzureaistudio) Service(service azureaistudioservicetype.AzureAiStudioServiceType) *PutAzureaistudio
Service The type of service supported for the specified task type. In this case, `azureaistudio`. API name: service
func (*PutAzureaistudio) ServiceSettings ¶
func (r *PutAzureaistudio) ServiceSettings(servicesettings *types.AzureAiStudioServiceSettings) *PutAzureaistudio
ServiceSettings Settings used to install the inference model. These settings are specific to the `azureaistudio` service. API name: service_settings
func (*PutAzureaistudio) TaskSettings ¶
func (r *PutAzureaistudio) TaskSettings(tasksettings *types.AzureAiStudioTaskSettings) *PutAzureaistudio
TaskSettings Settings to configure the inference task. These settings are specific to the task type you specified. API name: task_settings
type Request ¶
type Request struct {
	// ChunkingSettings The chunking configuration object.
	ChunkingSettings *types.InferenceChunkingSettings `json:"chunking_settings,omitempty"`
	// Service The type of service supported for the specified task type. In this case,
	// `azureaistudio`.
	Service azureaistudioservicetype.AzureAiStudioServiceType `json:"service"`
	// ServiceSettings Settings used to install the inference model. These settings are
	// specific to the `azureaistudio` service.
	ServiceSettings types.AzureAiStudioServiceSettings `json:"service_settings"`
	// TaskSettings Settings to configure the inference task.
	// These settings are specific to the task type you specified.
	TaskSettings *types.AzureAiStudioTaskSettings `json:"task_settings,omitempty"`
}
Request holds the request body struct for the package putazureaistudio
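To illustrate how the `omitempty` tags shape the serialized body, here is a self-contained sketch using local stand-in types (the real generated types live in the library's types package and are richer than these):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// serviceSettings is a stand-in for the generated service-settings type,
// trimmed to two placeholder fields for illustration.
type serviceSettings struct {
	APIKey string `json:"api_key"`
	Target string `json:"target"`
}

// request mirrors the shape of the generated Request struct: optional
// fields are tagged omitempty so they vanish from the JSON when unset.
type request struct {
	Service         string          `json:"service"`
	ServiceSettings serviceSettings `json:"service_settings"`
	TaskSettings    map[string]any  `json:"task_settings,omitempty"`
}

func encode(r request) string {
	out, err := json.Marshal(r)
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	// task_settings is absent from the output because the map is nil
	// and the field is tagged omitempty.
	fmt.Println(encode(request{
		Service:         "azureaistudio",
		ServiceSettings: serviceSettings{APIKey: "API_KEY", Target: "TARGET_URI"},
	}))
}
```

This matches the pattern in the generated struct, where `ChunkingSettings` and `TaskSettings` are optional pointers with `omitempty` while `Service` and `ServiceSettings` are always emitted.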
type Response ¶
type Response struct {
	// ChunkingSettings Chunking configuration object
	ChunkingSettings *types.InferenceChunkingSettings `json:"chunking_settings,omitempty"`
	// InferenceId The inference Id
	InferenceId string `json:"inference_id"`
	// Service The service type
	Service string `json:"service"`
	// ServiceSettings Settings specific to the service
	ServiceSettings json.RawMessage `json:"service_settings"`
	// TaskSettings Task settings specific to the service and task type
	TaskSettings json.RawMessage `json:"task_settings,omitempty"`
	// TaskType The task type
	TaskType tasktypeazureaistudio.TaskTypeAzureAIStudio `json:"task_type"`
}
Response holds the response body struct for the package putazureaistudio
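Because `service_settings` and `task_settings` vary by service, the Response keeps them as `json.RawMessage`. The sketch below decodes a hypothetical response body into a trimmed stand-in type (the field names follow the struct above; the JSON values are invented for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// response is a stand-in mirroring part of the generated Response:
// service_settings stays as raw JSON because its shape depends on the service.
type response struct {
	InferenceId     string          `json:"inference_id"`
	Service         string          `json:"service"`
	ServiceSettings json.RawMessage `json:"service_settings"`
	TaskType        string          `json:"task_type"`
}

func decode(data []byte) (response, error) {
	var r response
	err := json.Unmarshal(data, &r)
	return r, err
}

func main() {
	// A hypothetical response body; field values are invented.
	raw := []byte(`{"inference_id":"my-endpoint","service":"azureaistudio","service_settings":{"provider":"openai"},"task_type":"completion"}`)
	r, err := decode(raw)
	if err != nil {
		panic(err)
	}
	// ServiceSettings holds the untouched bytes, ready for a second,
	// service-specific unmarshal if the caller needs it.
	fmt.Println(r.InferenceId, r.Service, r.TaskType, string(r.ServiceSettings))
}
```

Keeping the settings raw defers interpretation: callers can unmarshal `ServiceSettings` into a service-specific type once they have inspected `Service`.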