inference

package
v9.0.0
Warning

This package is not in the latest version of its module.

Published: Apr 17, 2025 License: Apache-2.0 Imports: 12 Imported by: 0

Documentation

Overview

Perform inference on the service.

This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform one specific task that has been defined when the endpoint was created with the create inference API.

For details about using this API with a service, such as Amazon Bedrock, Anthropic, or Hugging Face, refer to the service-specific documentation.

> info > The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
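Concretely, the request body this API expects can be sketched with the standard library alone. The struct below is a stdlib-only mirror of the Request type documented later on this page; the input text is made up:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// inferenceBody is a stdlib-only mirror of the request body this API
// accepts; the field names follow the documented Request struct.
type inferenceBody struct {
	Input        []string        `json:"input"`
	Query        *string         `json:"query,omitempty"`
	TaskSettings json.RawMessage `json:"task_settings,omitempty"`
}

// encodeBody marshals the body as it would be sent on the wire.
func encodeBody(inputs ...string) []byte {
	b, err := json.Marshal(inferenceBody{Input: inputs})
	if err != nil {
		panic(err)
	}
	return b
}

func main() {
	fmt.Println(string(encodeBody("The quick brown fox")))
	// {"input":["The quick brown fox"]}
}
```

With `omitempty`, the optional `query` and `task_settings` fields drop out of the payload entirely when unset.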

Index

Constants

This section is empty.

Variables

var ErrBuildPath = errors.New("cannot build path, check for missing path parameters")

ErrBuildPath is returned in case of missing parameters within the build of the request.

Functions

This section is empty.

Types

type Inference

type Inference struct {
	// contains filtered or unexported fields
}

func New

Perform inference on the service.

This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform one specific task that has been defined when the endpoint was created with the create inference API.

For details about using this API with a service, such as Amazon Bedrock, Anthropic, or Hugging Face, refer to the service-specific documentation.

> info > The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-inference-inference

func (Inference) Do

func (r Inference) Do(providedCtx context.Context) (*Response, error)

Do runs the request through the transport, handles the response and returns an inference.Response

func (*Inference) ErrorTrace

func (r *Inference) ErrorTrace(errortrace bool) *Inference

ErrorTrace When set to `true` Elasticsearch will include the full stack trace of errors when they occur. API name: error_trace

func (*Inference) FilterPath

func (r *Inference) FilterPath(filterpaths ...string) *Inference

FilterPath Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch. API name: filter_path

func (*Inference) Header

func (r *Inference) Header(key, value string) *Inference

Header set a key, value pair in the Inference headers map.

func (*Inference) HttpRequest

func (r *Inference) HttpRequest(ctx context.Context) (*http.Request, error)

HttpRequest returns the http.Request object built from the given parameters.

func (*Inference) Human

func (r *Inference) Human(human bool) *Inference

Human When set to `true` will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines. API name: human

func (*Inference) Input

func (r *Inference) Input(inputs ...string) *Inference

The text on which you want to perform the inference task. It can be a single string or an array.

> info > Inference endpoints for the `completion` task type currently only support a single string as input. API name: input

func (Inference) Perform

func (r Inference) Perform(providedCtx context.Context) (*http.Response, error)

Perform runs the http.Request through the provided transport and returns an http.Response.
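The request Perform ultimately sends can be approximated with the standard library: a POST to `/_inference/{task_type}/{inference_id}` carrying a JSON body. The host and endpoint name below are placeholders, and the path layout is assumed from the Elasticsearch inference API:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// newInferenceHTTPRequest sketches the shape of the outgoing request:
// a POST to /_inference/{task_type}/{inference_id} with a JSON body.
func newInferenceHTTPRequest(taskType, inferenceID string, body []byte) (*http.Request, error) {
	url := "http://localhost:9200/_inference/" + taskType + "/" + inferenceID
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := newInferenceHTTPRequest("text_embedding", "my-endpoint", []byte(`{"input":["some text"]}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path) // POST /_inference/text_embedding/my-endpoint
}
```

In practice you would not build this request yourself; HttpRequest and Perform do it from the builder's parameters.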

func (*Inference) Pretty

func (r *Inference) Pretty(pretty bool) *Inference

Pretty If set to `true` the returned JSON will be "pretty-formatted". Use this option only for debugging. API name: pretty

func (*Inference) Query

func (r *Inference) Query(query string) *Inference

The query input, which is required only for the `rerank` task. It is not required for other tasks. API name: query
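For a `rerank` request, the documents to be ranked go in `input` and the search text in `query`. A stdlib-only sketch of that body, with made-up document and query text:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rerankRequestBody builds the body of a rerank request: the candidate
// documents in `input`, the search text in `query`. Field names follow
// the documented Request struct.
func rerankRequestBody(query string, docs ...string) ([]byte, error) {
	body := struct {
		Input []string `json:"input"`
		Query string   `json:"query"`
	}{Input: docs, Query: query}
	return json.Marshal(body)
}

func main() {
	b, err := rerankRequestBody(
		"What is the capital of France?",
		"Go is a statically typed language.",
		"Paris is the capital of France.",
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```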

func (*Inference) Raw

func (r *Inference) Raw(raw io.Reader) *Inference

Raw takes a JSON payload as input, which is then passed to the http.Request. If specified, Raw takes precedence over the Request method.

func (*Inference) Request

func (r *Inference) Request(req *Request) *Inference

Request allows setting the request property with the appropriate payload.

func (*Inference) TaskSettings

func (r *Inference) TaskSettings(tasksettings json.RawMessage) *Inference

Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service. API name: task_settings

func (*Inference) TaskType

func (r *Inference) TaskType(tasktype string) *Inference

TaskType The type of inference task that the model performs. API name: tasktype

func (*Inference) Timeout

func (r *Inference) Timeout(duration string) *Inference

Timeout The amount of time to wait for the inference request to complete. API name: timeout

type NewInference

type NewInference func(inferenceid string) *Inference

NewInference is a type alias for a function that returns a new Inference for the given inference ID.

func NewInferenceFunc

func NewInferenceFunc(tp elastictransport.Interface) NewInference

NewInferenceFunc returns a new instance of Inference with the provided transport. Used in the index of the library, this allows every API to be retrieved in one place.
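The factory shape here is a closure that binds the transport once and then mints one request value per inference ID. A sketch with stub types standing in for elastictransport.Interface and Inference:

```go
package main

import "fmt"

// transport and inferenceReq are stubs standing in for
// elastictransport.Interface and Inference, for illustration only.
type transport struct{ addr string }

type inferenceReq struct {
	tp *transport
	id string
}

// newInference mirrors the NewInference type: a function keyed by
// inference ID.
type newInference func(inferenceID string) *inferenceReq

// newInferenceFunc mirrors NewInferenceFunc: capture the transport in a
// closure so every request it creates shares the same connection.
func newInferenceFunc(tp *transport) newInference {
	return func(inferenceID string) *inferenceReq {
		return &inferenceReq{tp: tp, id: inferenceID}
	}
}

func main() {
	newReq := newInferenceFunc(&transport{addr: "http://localhost:9200"})
	r := newReq("my-endpoint")
	fmt.Println(r.id) // my-endpoint
}
```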

type Request

type Request struct {

	// Input The text on which you want to perform the inference task.
	// It can be a single string or an array.
	//
	// > info
	// > Inference endpoints for the `completion` task type currently only support a
	// single string as input.
	Input []string `json:"input"`
	// Query The query input, which is required only for the `rerank` task.
	// It is not required for other tasks.
	Query *string `json:"query,omitempty"`
	// TaskSettings Task settings for the individual inference request.
	// These settings are specific to the task type you specified and override the
	// task settings specified when initializing the service.
	TaskSettings json.RawMessage `json:"task_settings,omitempty"`
}

Request holds the request body struct for the package inference

https://github.com/elastic/elasticsearch-specification/blob/52c473efb1fb5320a5bac12572d0b285882862fb/specification/inference/inference/InferenceRequest.ts#L26-L91

func NewRequest

func NewRequest() *Request

NewRequest returns a Request

func (*Request) FromJSON

func (r *Request) FromJSON(data string) (*Request, error)

FromJSON loads an arbitrary JSON payload into the request structure.

func (*Request) UnmarshalJSON

func (s *Request) UnmarshalJSON(data []byte) error

type Response

type Response struct {
	AdditionalInferenceResultProperty map[string]json.RawMessage      `json:"-"`
	Completion                        []types.CompletionResult        `json:"completion,omitempty"`
	Rerank                            []types.RankedDocument          `json:"rerank,omitempty"`
	SparseEmbedding                   []types.SparseEmbeddingResult   `json:"sparse_embedding,omitempty"`
	TextEmbedding                     []types.TextEmbeddingResult     `json:"text_embedding,omitempty"`
	TextEmbeddingBits                 []types.TextEmbeddingByteResult `json:"text_embedding_bits,omitempty"`
	TextEmbeddingBytes                []types.TextEmbeddingByteResult `json:"text_embedding_bytes,omitempty"`
}

Response holds the response body struct for the package inference

https://github.com/elastic/elasticsearch-specification/blob/52c473efb1fb5320a5bac12572d0b285882862fb/specification/inference/inference/InferenceResponse.ts#L22-L25
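Decoding a serialized response follows ordinary encoding/json rules; only the field matching the endpoint's task type is populated. A stdlib-only mirror of the text_embedding slice, with a made-up payload:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// textEmbeddingResult mirrors just enough of types.TextEmbeddingResult
// to pull the vectors out of a serialized response.
type textEmbeddingResult struct {
	Embedding []float32 `json:"embedding"`
}

type inferenceResponse struct {
	TextEmbedding []textEmbeddingResult `json:"text_embedding,omitempty"`
}

func decodeTextEmbedding(raw []byte) (inferenceResponse, error) {
	var resp inferenceResponse
	err := json.Unmarshal(raw, &resp)
	return resp, err
}

func main() {
	raw := []byte(`{"text_embedding":[{"embedding":[0.1,-0.2,0.3]}]}`)
	resp, err := decodeTextEmbedding(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(resp.TextEmbedding), len(resp.TextEmbedding[0].Embedding)) // 1 3
}
```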

func NewResponse

func NewResponse() *Response

NewResponse returns a Response
