Documentation ¶
Overview ¶
Create an inference endpoint. When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for `"state": "fully_allocated"` in the response and ensure that the `"allocation_count"` matches the `"target_allocation_count"`. Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
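A minimal, hedged sketch of the create flow with the Go typed client. The endpoint name, service choice, and service_settings keys below are illustrative assumptions, as is the typed client exposing this package as es.Inference.Put:

package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/elastic/go-elasticsearch/v8"
)

func main() {
	es, err := elasticsearch.NewTypedClient(elasticsearch.Config{
		Addresses: []string{"https://localhost:9200"}, // adjust to your cluster
	})
	if err != nil {
		log.Fatal(err)
	}

	// service_settings is raw JSON and entirely service-specific.
	res, err := es.Inference.Put("my-embeddings").
		TaskType("text_embedding").
		Service("openai").
		ServiceSettings(json.RawMessage(`{
			"api_key": "<OPENAI_API_KEY>",
			"model_id": "text-embedding-3-small"
		}`)).
		Do(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created %s (service: %s)", res.InferenceId, res.Service)
}

The shorter sketches further down this page reuse the es client constructed here.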
Index ¶
- Variables
- type NewPut
- type Put
- func (r *Put) ChunkingSettings(chunkingsettings types.InferenceChunkingSettingsVariant) *Put
- func (r Put) Do(providedCtx context.Context) (*Response, error)
- func (r *Put) ErrorTrace(errortrace bool) *Put
- func (r *Put) FilterPath(filterpaths ...string) *Put
- func (r *Put) Header(key, value string) *Put
- func (r *Put) HttpRequest(ctx context.Context) (*http.Request, error)
- func (r *Put) Human(human bool) *Put
- func (r Put) Perform(providedCtx context.Context) (*http.Response, error)
- func (r *Put) Pretty(pretty bool) *Put
- func (r *Put) Raw(raw io.Reader) *Put
- func (r *Put) Request(req *Request) *Put
- func (r *Put) Service(service string) *Put
- func (r *Put) ServiceSettings(servicesettings json.RawMessage) *Put
- func (r *Put) TaskSettings(tasksettings json.RawMessage) *Put
- func (r *Put) TaskType(tasktype string) *Put
- type Request
- type Response
Constants ¶
This section is empty.
Variables ¶
var ErrBuildPath = errors.New("cannot build path, check for missing path parameters")
ErrBuildPath is returned when required path parameters are missing while building the request.
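For example, a Put built without its required inference id (bypassing NewPut) cannot form a URL; a sketch, assuming tp is an elastictransport.Interface:

r := put.New(tp) // inference id never set
if _, err := r.Do(context.Background()); errors.Is(err, put.ErrBuildPath) {
	// handle the missing path parameter
}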
Functions ¶
This section is empty.
Types ¶
type NewPut ¶
NewPut type alias for index.
func NewPutFunc ¶
func NewPutFunc(tp elastictransport.Interface) NewPut
NewPutFunc returns a new instance of Put with the provided transport. Used in the index of the library, this allows retrieving every API in one place.
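A sketch of this wiring done by hand; in normal use the typed client performs it for you, and the URL below is a placeholder:

// imports: log, net/url,
// github.com/elastic/elastic-transport-go/v8/elastictransport,
// and this package (typedapi/inference/put)
tp, err := elastictransport.New(elastictransport.Config{
	URLs: []*url.URL{{Scheme: "http", Host: "localhost:9200"}},
})
if err != nil {
	log.Fatal(err)
}
newPut := put.NewPutFunc(tp)
r := newPut("my-endpoint") // NewPut binds the required inference id
_ = r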
type Put ¶
type Put struct {
// contains filtered or unexported fields
}
func New ¶
func New(tp elastictransport.Interface) *Put
Create an inference endpoint. When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for `"state": "fully_allocated"` in the response and ensure that the `"allocation_count"` matches the `"target_allocation_count"`. Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
https://www.elastic.co/guide/en/elasticsearch/reference/current/put-inference-api.html
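The verification step described above can be sketched against the get trained model statistics API. To avoid depending on exact typed field shapes, this deliberately crude sketch re-encodes the response and scans it; es.Ml.GetTrainedModelsStats, its ModelId option, and the model id itself are assumptions:

// imports: bytes, context, encoding/json, log
stats, err := es.Ml.GetTrainedModelsStats().
	ModelId(".elser_model_2"). // placeholder model id
	Do(context.Background())
if err != nil {
	log.Fatal(err)
}
raw, _ := json.Marshal(stats)
if bytes.Contains(raw, []byte(`"fully_allocated"`)) {
	log.Println("deployment looks fully allocated; also compare allocation_count to target_allocation_count")
}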
func (*Put) ChunkingSettings ¶ added in v8.18.0
func (r *Put) ChunkingSettings(chunkingsettings types.InferenceChunkingSettingsVariant) *Put
Chunking configuration object. API name: chunking_settings
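A hedged sketch of setting it; the struct field names mirror the chunking_settings JSON object (strategy, max_chunk_size, sentence_overlap), and the types and some helper packages from typedapi are assumptions:

cs := &types.InferenceChunkingSettings{
	Strategy:        some.String("sentence"),
	MaxChunkSize:    some.Int(250),
	SentenceOverlap: some.Int(1),
}
r := es.Inference.Put("my-endpoint").ChunkingSettings(cs)
_ = r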
func (Put) Do ¶
func (r Put) Do(providedCtx context.Context) (*Response, error)
Do runs the request through the transport, handles the response, and returns a put.Response.
func (*Put) ErrorTrace ¶
func (r *Put) ErrorTrace(errortrace bool) *Put
ErrorTrace When set to `true`, Elasticsearch will include the full stack trace of errors when they occur. API name: error_trace
func (*Put) FilterPath ¶
func (r *Put) FilterPath(filterpaths ...string) *Put
FilterPath Comma-separated list of filters in dot notation, which reduce the response returned by Elasticsearch. API name: filter_path
func (*Put) HttpRequest ¶
func (r *Put) HttpRequest(ctx context.Context) (*http.Request, error)
HttpRequest returns the http.Request object built from the given parameters.
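This is useful for inspecting the outgoing request without sending it; a small sketch:

httpReq, err := es.Inference.Put("my-endpoint").
	Service("elser").
	HttpRequest(context.Background())
if err != nil {
	log.Fatal(err)
}
log.Println(httpReq.Method, httpReq.URL.Path) // expected: PUT /_inference/my-endpoint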
func (*Put) Human ¶
func (r *Put) Human(human bool) *Put
Human When set to `true`, statistics are returned in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values are omitted. This makes sense for responses being consumed only by machines. API name: human
func (Put) Perform ¶
func (r Put) Perform(providedCtx context.Context) (*http.Response, error)
Perform runs the http.Request through the provided transport and returns an http.Response.
func (*Put) Pretty ¶
func (r *Put) Pretty(pretty bool) *Put
Pretty If set to `true`, the returned JSON will be "pretty-formatted". Use this option for debugging only. API name: pretty
func (*Put) Raw ¶
func (r *Put) Raw(raw io.Reader) *Put
Raw takes a JSON payload as input, which is then passed to the http.Request. If specified, Raw takes precedence over the Request method.
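A sketch of supplying the body by hand; the ELSER settings shown are placeholders (imports: strings):

body := strings.NewReader(`{
	"service": "elser",
	"service_settings": {"num_allocations": 1, "num_threads": 1}
}`)
res, err := es.Inference.Put("my-elser").Raw(body).Do(context.Background())
if err != nil {
	log.Fatal(err)
}
_ = res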
func (*Put) ServiceSettings ¶
func (r *Put) ServiceSettings(servicesettings json.RawMessage) *Put
Settings specific to the service. API name: service_settings
func (*Put) TaskSettings ¶
func (r *Put) TaskSettings(tasksettings json.RawMessage) *Put
Task settings specific to the service and task type. API name: task_settings
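Both settings are raw JSON whose keys depend entirely on the service; a sketch for a hypothetical OpenAI completion endpoint (key names are assumptions):

r := es.Inference.Put("my-openai-completion").
	TaskType("completion").
	Service("openai").
	ServiceSettings(json.RawMessage(`{"api_key": "<KEY>", "model_id": "gpt-4o-mini"}`)).
	TaskSettings(json.RawMessage(`{"user": "my-application"}`))
_ = r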
type Request ¶
type Request = types.InferenceEndpoint
Request holds the request body struct for the package put
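Because Request aliases types.InferenceEndpoint, it can be populated from raw JSON rather than by spelling out the Go field shapes; a sketch:

var req put.Request
if err := json.Unmarshal([]byte(`{
	"service": "elser",
	"service_settings": {"num_allocations": 1, "num_threads": 1}
}`), &req); err != nil {
	log.Fatal(err)
}
res, err := es.Inference.Put("my-elser-endpoint").Request(&req).Do(context.Background())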
type Response ¶
type Response struct {
	// ChunkingSettings Chunking configuration object
	ChunkingSettings *types.InferenceChunkingSettings `json:"chunking_settings,omitempty"`
	// InferenceId The inference Id
	InferenceId string `json:"inference_id"`
	// Service The service type
	Service string `json:"service"`
	// ServiceSettings Settings specific to the service
	ServiceSettings json.RawMessage `json:"service_settings"`
	// TaskSettings Task settings specific to the service and task type
	TaskSettings json.RawMessage `json:"task_settings,omitempty"`
	// TaskType The task type
	TaskType tasktype.TaskType `json:"task_type"`
}
Response holds the response body struct for the package put
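ServiceSettings and TaskSettings come back as raw JSON whose shape depends on the service, so decode them generically; a sketch continuing from the create example above:

var ss map[string]any
if err := json.Unmarshal(res.ServiceSettings, &ss); err != nil {
	log.Fatal(err)
}
log.Println("task type:", res.TaskType, "service setting keys:", len(ss))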