Documentation ¶
Overview ¶
Perform inference on the service.
This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform one specific task that has been defined when the endpoint was created with the create inference API.
For details about using this API with a service, such as Amazon Bedrock, Anthropic, or Hugging Face, refer to the service-specific documentation.
> info: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs with these models, or if you want to use non-NLP models, use the machine learning trained model APIs.
Index ¶
- Variables
- type Inference
- func (r Inference) Do(providedCtx context.Context) (*Response, error)
- func (r *Inference) ErrorTrace(errortrace bool) *Inference
- func (r *Inference) FilterPath(filterpaths ...string) *Inference
- func (r *Inference) Header(key, value string) *Inference
- func (r *Inference) HttpRequest(ctx context.Context) (*http.Request, error)
- func (r *Inference) Human(human bool) *Inference
- func (r *Inference) Input(inputs ...string) *Inference
- func (r Inference) Perform(providedCtx context.Context) (*http.Response, error)
- func (r *Inference) Pretty(pretty bool) *Inference
- func (r *Inference) Query(query string) *Inference
- func (r *Inference) Raw(raw io.Reader) *Inference
- func (r *Inference) Request(req *Request) *Inference
- func (r *Inference) TaskSettings(tasksettings json.RawMessage) *Inference
- func (r *Inference) TaskType(tasktype string) *Inference
- func (r *Inference) Timeout(duration string) *Inference
- type NewInference
- type Request
- type Response
Constants ¶
This section is empty.
Variables ¶
var ErrBuildPath = errors.New("cannot build path, check for missing path parameters")
ErrBuildPath is returned in case of missing parameters within the build of the request.
Functions ¶
This section is empty.
Types ¶
type Inference ¶
type Inference struct {
// contains filtered or unexported fields
}
func New ¶
func New(tp elastictransport.Interface) *Inference
Perform inference on the service.
This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform one specific task that has been defined when the endpoint was created with the create inference API.
For details about using this API with a service, such as Amazon Bedrock, Anthropic, or Hugging Face, refer to the service-specific documentation.
> info: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs with these models, or if you want to use non-NLP models, use the machine learning trained model APIs.
https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-inference-inference
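As a minimal usage sketch, the typed client in github.com/elastic/go-elasticsearch/v9 wires a transport into New for you and exposes this call as es.Inference.Inference. The address and the endpoint ID "my-elser-endpoint" below are placeholders, assuming an endpoint was created beforehand with the create inference API.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/elastic/go-elasticsearch/v9"
)

func main() {
	// Placeholder configuration; set the address and credentials for your cluster.
	es, err := elasticsearch.NewTypedClient(elasticsearch.Config{
		Addresses: []string{"https://localhost:9200"},
	})
	if err != nil {
		log.Fatal(err)
	}

	// "my-elser-endpoint" is a hypothetical endpoint ID; Input populates the
	// request body's `input` field.
	res, err := es.Inference.Inference("my-elser-endpoint").
		Input("The quick brown fox jumps over the lazy dog").
		Do(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	// For a sparse_embedding endpoint, results arrive in res.SparseEmbedding.
	fmt.Printf("received %d sparse embedding results\n", len(res.SparseEmbedding))
}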
func (Inference) Do ¶
Do runs the request through the transport, handles the response, and returns an inference.Response.
func (*Inference) ErrorTrace ¶
ErrorTrace When set to `true`, Elasticsearch will include the full stack trace of errors when they occur. API name: error_trace
func (*Inference) FilterPath ¶
FilterPath A comma-separated list of filters in dot notation that reduces the response returned by Elasticsearch. API name: filter_path
func (*Inference) HttpRequest ¶
HttpRequest returns the http.Request object built from the given parameters.
func (*Inference) Human ¶
Human When set to `true`, statistics will be returned in a format suitable for humans. For example, `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses consumed only by machines. API name: human
func (*Inference) Input ¶
The text on which you want to perform the inference task. It can be a single string or an array.
> info: Inference endpoints for the `completion` task type currently only support a single string as input. API name: input
func (Inference) Perform ¶
Perform runs the http.Request through the provided transport and returns an http.Response.
func (*Inference) Pretty ¶
Pretty If set to `true`, the returned JSON will be "pretty-formatted". Use this option only for debugging. API name: pretty
func (*Inference) Query ¶
The query input, which is required only for the `rerank` task. It is not required for other tasks. API name: query
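For instance, a sketch of a rerank call, assuming the typed client es from the sketch under New and a hypothetical endpoint ID:

// Query is paired with the candidate documents passed via Input.
res, err := es.Inference.Inference("my-rerank-endpoint").
	Query("which document covers licensing?").
	Input("Doc one: licensing terms.", "Doc two: installation steps.").
	Do(context.Background())
if err != nil {
	log.Fatal(err)
}
for _, doc := range res.Rerank {
	fmt.Printf("%+v\n", doc) // each entry is a types.RankedDocument
}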
func (*Inference) Raw ¶
Raw takes a JSON payload as input, which is then passed to the http.Request. If specified, Raw takes precedence over the Request method.
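A sketch of sending the body verbatim, again assuming the typed client es from the sketch under New; the payload mirrors the Request struct documented below:

// Raw bypasses the typed Request body; the reader's contents are sent as-is,
// so they must already be valid JSON for this API. Requires "strings" in imports.
body := strings.NewReader(`{"input": ["some text to embed"]}`)
res, err := es.Inference.Inference("my-elser-endpoint"). // hypothetical endpoint ID
	Raw(body).
	Do(context.Background())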
func (*Inference) Request ¶
Request allows setting the request property with the appropriate payload.
func (*Inference) TaskSettings ¶
func (r *Inference) TaskSettings(tasksettings json.RawMessage) *Inference
Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service. API name: task_settings
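As a sketch, assuming the typed client es from the sketch under New; the temperature option here is a hypothetical service-specific setting, not something this package defines:

// TaskSettings is forwarded verbatim as the `task_settings` JSON object.
// Requires "encoding/json" in imports.
settings := json.RawMessage(`{"temperature": 0.2}`)
res, err := es.Inference.Inference("my-completion-endpoint"). // hypothetical endpoint ID
	Input("Summarize the release notes in one sentence.").
	TaskSettings(settings).
	Do(context.Background())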
type NewInference ¶
NewInference is a type alias for the constructor, used in the index of the library.
func NewInferenceFunc ¶
func NewInferenceFunc(tp elastictransport.Interface) NewInference
NewInferenceFunc returns a new instance of Inference with the provided transport. Used in the index of the library, this allows every API to be retrieved in one place.
type Request ¶
type Request struct {
	// Input The text on which you want to perform the inference task.
	// It can be a single string or an array.
	//
	// > info
	// > Inference endpoints for the `completion` task type currently only support a
	// > single string as input.
	Input []string `json:"input"`
	// Query The query input, which is required only for the `rerank` task.
	// It is not required for other tasks.
	Query *string `json:"query,omitempty"`
	// TaskSettings Task settings for the individual inference request.
	// These settings are specific to the task type you specified and override the
	// task settings specified when initializing the service.
	TaskSettings json.RawMessage `json:"task_settings,omitempty"`
}
Request holds the request body struct for the package inference
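Equivalently, a sketch of populating the body through Request instead of the fluent setters, assuming the typed client es from the sketch under New:

req := &inference.Request{
	Input: []string{"first passage", "second passage"},
}
res, err := es.Inference.Inference("my-elser-endpoint"). // hypothetical endpoint ID
	Request(req).
	Do(context.Background())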
func (*Request) UnmarshalJSON ¶
type Response ¶
type Response struct {
	AdditionalInferenceResultProperty map[string]json.RawMessage      `json:"-"`
	Completion                        []types.CompletionResult        `json:"completion,omitempty"`
	Rerank                            []types.RankedDocument          `json:"rerank,omitempty"`
	SparseEmbedding                   []types.SparseEmbeddingResult   `json:"sparse_embedding,omitempty"`
	TextEmbedding                     []types.TextEmbeddingResult     `json:"text_embedding,omitempty"`
	TextEmbeddingBits                 []types.TextEmbeddingByteResult `json:"text_embedding_bits,omitempty"`
	TextEmbeddingBytes                []types.TextEmbeddingByteResult `json:"text_embedding_bytes,omitempty"`
}
Response holds the response body struct for the package inference
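Only the field matching the endpoint's task type is populated, so a consuming sketch can branch on whichever slice is non-nil, using the fields shown above:

// res is the *inference.Response returned by Do in the earlier sketches.
switch {
case res.Completion != nil:
	fmt.Printf("%d completion results\n", len(res.Completion))
case res.Rerank != nil:
	fmt.Printf("%d reranked documents\n", len(res.Rerank))
case res.SparseEmbedding != nil:
	fmt.Printf("%d sparse embeddings\n", len(res.SparseEmbedding))
case res.TextEmbedding != nil:
	fmt.Printf("%d dense embeddings\n", len(res.TextEmbedding))
}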