putwatsonx

package
v8.18.0
Published: Apr 17, 2025 License: Apache-2.0 Imports: 14 Imported by: 0

Documentation

Overview

Create a Watsonx inference endpoint.

Create an inference endpoint to perform an inference task with the `watsonxai` service. You need an IBM Cloud Databases for Elasticsearch deployment to use the `watsonxai` inference service. You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for `"state": "fully_allocated"` in the response and ensure that the `"allocation_count"` matches the `"target_allocation_count"`. Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
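As an illustration, the sketch below creates such an endpoint with the Go typed client. The client configuration, task type, endpoint ID, and all Watsonx credential values are placeholders; the `es.Inference.PutWatsonx` accessor is assumed from this package's constructor signature, and the JSON keys follow the Watsonx service settings documented by Elasticsearch, so verify both against your client version.

package main

import (
	"context"
	"log"
	"strings"

	"github.com/elastic/go-elasticsearch/v8"
)

func main() {
	// Placeholder configuration: point this at your IBM Cloud Databases for
	// Elasticsearch deployment and supply its API key.
	es, err := elasticsearch.NewTypedClient(elasticsearch.Config{
		Addresses: []string{"https://localhost:9200"},
		APIKey:    "<elasticsearch-api-key>",
	})
	if err != nil {
		log.Fatal(err)
	}

	// PutWatsonx(tasktype, watsonxinferenceid) mirrors the NewPutWatsonx
	// constructor documented below; the body is sent as raw JSON via Raw.
	res, err := es.Inference.PutWatsonx("text_embedding", "watsonx-embeddings").
		Raw(strings.NewReader(`{
		  "service": "watsonxai",
		  "service_settings": {
		    "api_key": "<watsonx-api-key>",
		    "url": "<watsonx-url>",
		    "model_id": "<model-id>",
		    "project_id": "<project-id>",
		    "api_version": "2024-03-14"
		  }
		}`)).
		Do(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created inference endpoint %q", res.InferenceId)
}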

Index

Constants

This section is empty.

Variables

View Source
var ErrBuildPath = errors.New("cannot build path, check for missing path parameters")

ErrBuildPath is returned in case of missing parameters within the build of the request.
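For illustration, a caller can distinguish this condition from transport or server errors with errors.Is. The doPut helper below is purely hypothetical, and the context, errors, and fmt imports are assumed.

// doPut runs a prepared PutWatsonx request and reports a missing path
// parameter (task type or inference ID) separately from other failures.
func doPut(ctx context.Context, r *putwatsonx.PutWatsonx) (*putwatsonx.Response, error) {
	res, err := r.Do(ctx)
	if errors.Is(err, putwatsonx.ErrBuildPath) {
		return nil, fmt.Errorf("task type and inference ID must both be set: %w", err)
	}
	return res, err
}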

Functions

This section is empty.

Types

type NewPutWatsonx

type NewPutWatsonx func(tasktype, watsonxinferenceid string) *PutWatsonx

NewPutWatsonx is a type alias for the constructor used by the client's API index.

func NewPutWatsonxFunc

func NewPutWatsonxFunc(tp elastictransport.Interface) NewPutWatsonx

NewPutWatsonxFunc returns a new instance of PutWatsonx with the provided transport. Used in the index of the library, this allows every API to be retrieved in one place.
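A short sketch of using the constructor directly, where tp is assumed to be an already configured elastictransport.Interface:

// Build the constructor from the transport, as the client's API index does.
newPutWatsonx := putwatsonx.NewPutWatsonxFunc(tp)

// The constructor takes the task type and the inference endpoint ID,
// matching the NewPutWatsonx type alias above.
req := newPutWatsonx("text_embedding", "watsonx-embeddings")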

type PutWatsonx

type PutWatsonx struct {
	// contains filtered or unexported fields
}

func New

func New(tp elastictransport.Interface) *PutWatsonx

Create a Watsonx inference endpoint.

Create an inference endpoint to perform an inference task with the `watsonxai` service. You need an IBM Cloud Databases for Elasticsearch deployment to use the `watsonxai` inference service. You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for `"state": "fully_allocated"` in the response and ensure that the `"allocation_count"` matches the `"target_allocation_count"`. Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-service-watsonx-ai.html

func (PutWatsonx) Do

func (r PutWatsonx) Do(providedCtx context.Context) (*Response, error)

Do runs the request through the transport, handles the response and returns a putwatsonx.Response.
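For example, running the request built above with a bounded context (a sketch; the context, time, and log imports are assumed):

ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

res, err := req.Do(ctx)
if err != nil {
	log.Fatalf("put watsonx inference endpoint: %v", err)
}
log.Printf("registered inference endpoint %q", res.InferenceId)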

func (*PutWatsonx) ErrorTrace

func (r *PutWatsonx) ErrorTrace(errortrace bool) *PutWatsonx

ErrorTrace When set to `true` Elasticsearch will include the full stack trace of errors when they occur. API name: error_trace

func (*PutWatsonx) FilterPath

func (r *PutWatsonx) FilterPath(filterpaths ...string) *PutWatsonx

FilterPath Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch. API name: filter_path

func (*PutWatsonx) Header

func (r *PutWatsonx) Header(key, value string) *PutWatsonx

Header set a key, value pair in the PutWatsonx headers map.

func (*PutWatsonx) HttpRequest

func (r *PutWatsonx) HttpRequest(ctx context.Context) (*http.Request, error)

HttpRequest returns the http.Request object built from the given parameters.
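This is useful for inspecting the request before sending it through your own HTTP machinery; a brief sketch:

httpReq, err := req.HttpRequest(context.Background())
if err != nil {
	log.Fatal(err)
}
// The path has the form /_inference/{tasktype}/{watsonxinferenceid}.
log.Printf("%s %s", httpReq.Method, httpReq.URL.Path)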

func (*PutWatsonx) Human

func (r *PutWatsonx) Human(human bool) *PutWatsonx

Human When set to `true` will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human-readable values will be omitted. This makes sense for responses being consumed only by machines. API name: human

func (PutWatsonx) Perform

func (r PutWatsonx) Perform(providedCtx context.Context) (*http.Response, error)

Perform runs the http.Request through the provided transport and returns an http.Response.

func (*PutWatsonx) Pretty

func (r *PutWatsonx) Pretty(pretty bool) *PutWatsonx

Pretty If set to `true` the returned JSON will be "pretty-formatted". Use this option only for debugging. API name: pretty
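Because each of these setters returns the receiver, the common query parameters can be chained fluently; for example:

req = req.
	ErrorTrace(true).           // include full stack traces in error responses
	FilterPath("inference_id"). // trim the response to the fields of interest
	Human(false).               // keep machine-readable values
	Pretty(true)                // pretty-print the JSON while debugging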

func (*PutWatsonx) Raw

func (r *PutWatsonx) Raw(raw io.Reader) *PutWatsonx

Raw takes a JSON payload as input which is then passed to the http.Request. If specified, Raw takes precedence over the Request method.

func (*PutWatsonx) Request

func (r *PutWatsonx) Request(req *Request) *PutWatsonx

Request allows setting the request property with the appropriate payload.

func (*PutWatsonx) Service

func (r *PutWatsonx) Service(service watsonxservicetype.WatsonxServiceType) *PutWatsonx

The type of service supported for the specified task type. In this case, `watsonxai`. API name: service

func (*PutWatsonx) ServiceSettings

func (r *PutWatsonx) ServiceSettings(servicesettings types.WatsonxServiceSettingsVariant) *PutWatsonx

Settings used to install the inference model. These settings are specific to the `watsonxai` service. API name: service_settings
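As an alternative to Raw, the body can be set through the typed setters. The watsonxservicetype.Watsonxai constant and the WatsonxServiceSettings field names below are assumptions that mirror the JSON keys of the Watsonx service settings; verify them against the types package of your client version.

req = req.
	Service(watsonxservicetype.Watsonxai).
	ServiceSettings(&types.WatsonxServiceSettings{
		// Assumed field names; all values are placeholders.
		ApiKey:     "<watsonx-api-key>",
		Url:        "<watsonx-url>",
		ModelId:    "<model-id>",
		ProjectId:  "<project-id>",
		ApiVersion: "2024-03-14",
	})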

type Request

type Request struct {

	// Service The type of service supported for the specified task type. In this case,
	// `watsonxai`.
	Service watsonxservicetype.WatsonxServiceType `json:"service"`
	// ServiceSettings Settings used to install the inference model. These settings are specific to
	// the `watsonxai` service.
	ServiceSettings types.WatsonxServiceSettings `json:"service_settings"`
}

Request holds the request body struct for the package putwatsonx

https://github.com/elastic/elasticsearch-specification/blob/f6a370d0fba975752c644fc730f7c45610e28f36/specification/inference/put_watsonx/PutWatsonxRequest.ts#L28-L74

func NewRequest

func NewRequest() *Request

NewRequest returns a Request

func (*Request) FromJSON

func (r *Request) FromJSON(data string) (*Request, error)

FromJSON allows loading an arbitrary JSON payload into the request structure.
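For example, loading the body from a JSON document and handing it to the builder (a sketch; the settings values are placeholders and the log import is assumed):

body, err := putwatsonx.NewRequest().FromJSON(`{
  "service": "watsonxai",
  "service_settings": {
    "api_key": "<watsonx-api-key>",
    "url": "<watsonx-url>",
    "model_id": "<model-id>",
    "project_id": "<project-id>",
    "api_version": "2024-03-14"
  }
}`)
if err != nil {
	log.Fatal(err)
}

// Pass the populated struct to the builder via its Request method.
req = req.Request(body)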

type Response

type Response struct {

	// ChunkingSettings Chunking configuration object
	ChunkingSettings *types.InferenceChunkingSettings `json:"chunking_settings,omitempty"`
	// InferenceId The inference Id
	InferenceId string `json:"inference_id"`
	// Service The service type
	Service string `json:"service"`
	// ServiceSettings Settings specific to the service
	ServiceSettings json.RawMessage `json:"service_settings"`
	// TaskSettings Task settings specific to the service and task type
	TaskSettings json.RawMessage `json:"task_settings,omitempty"`
	// TaskType The task type
	TaskType tasktype.TaskType `json:"task_type"`
}

Response holds the response body struct for the package putwatsonx

https://github.com/elastic/elasticsearch-specification/blob/f6a370d0fba975752c644fc730f7c45610e28f36/specification/inference/put_watsonx/PutWatsonxResponse.ts#L22-L24
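A sketch of reading the returned fields, where res is the *Response obtained from Do and the encoding/json and log imports are assumed:

log.Printf("inference_id=%s service=%s task_type=%s",
	res.InferenceId, res.Service, res.TaskType)

// ServiceSettings is returned as raw JSON; decode it only if individual
// settings need to be inspected.
var settings map[string]any
if err := json.Unmarshal(res.ServiceSettings, &settings); err != nil {
	log.Fatal(err)
}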

func NewResponse

func NewResponse() *Response

NewResponse returns a Response
