putgoogleaistudio

package v8.18.0
Published: Apr 17, 2025 License: Apache-2.0 Imports: 14 Imported by: 0

Documentation

Overview

Create a Google AI Studio inference endpoint.

Create an inference endpoint to perform an inference task with the `googleaistudio` service.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for `"state": "fully_allocated"` in the response and ensure that the `"allocation_count"` matches the `"target_allocation_count"`. Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
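A minimal sketch of creating such an endpoint with the typed Go client follows. The es.Inference.PutGoogleaistudio accessor, the `completion` task type, and the model ID are assumptions for illustration; only NewRequest, FromJSON, Request, and Do are documented on this page.

package main

import (
	"context"
	"log"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/typedapi/inference/putgoogleaistudio"
)

func main() {
	// Connect with the typed client (the address is illustrative).
	es, err := elasticsearch.NewTypedClient(elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Build the request body from JSON so no generated field names are guessed here.
	req, err := putgoogleaistudio.NewRequest().FromJSON(`{
	  "service": "googleaistudio",
	  "service_settings": { "api_key": "<google-api-key>", "model_id": "gemini-1.5-flash" }
	}`)
	if err != nil {
		log.Fatal(err)
	}

	// Create the endpoint; the accessor is assumed to mirror NewPutGoogleaistudio(tasktype, inferenceid).
	res, err := es.Inference.PutGoogleaistudio("completion", "google-ai-studio-completion").
		Request(req).
		Do(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created inference endpoint %s", res.InferenceId)
}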

Index

Constants

This section is empty.

Variables

View Source
var ErrBuildPath = errors.New("cannot build path, check for missing path parameters")

ErrBuildPath is returned when required path parameters are missing while building the request.

Functions

This section is empty.

Types

type NewPutGoogleaistudio

type NewPutGoogleaistudio func(tasktype, googleaistudioinferenceid string) *PutGoogleaistudio

NewPutGoogleaistudio is the constructor function type used in the library's API index.

func NewPutGoogleaistudioFunc

func NewPutGoogleaistudioFunc(tp elastictransport.Interface) NewPutGoogleaistudio

NewPutGoogleaistudioFunc returns a NewPutGoogleaistudio constructor bound to the provided transport. Used in the index of the library, this allows every API to be retrieved in one place.
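A hypothetical sketch of wiring the constructor to a transport directly, without the high-level client. elastictransport.New and its Config are assumed from github.com/elastic/elastic-transport-go/v8/elastictransport; the task type and inference ID are illustrative. Requires net/url, log, and the elastictransport and putgoogleaistudio imports.

u, _ := url.Parse("http://localhost:9200")
tp, err := elastictransport.New(elastictransport.Config{URLs: []*url.URL{u}})
if err != nil {
	log.Fatal(err)
}

// The returned function matches the NewPutGoogleaistudio alias: it takes the
// task type and the inference ID and yields a request builder.
newPut := putgoogleaistudio.NewPutGoogleaistudioFunc(tp)
builder := newPut("completion", "google-ai-studio-completion")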

type PutGoogleaistudio

type PutGoogleaistudio struct {
	// contains filtered or unexported fields
}

func New

Create a Google AI Studio inference endpoint.

Create an inference endpoint to perform an inference task with the `googleaistudio` service.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for `"state": "fully_allocated"` in the response and ensure that the `"allocation_count"` matches the `"target_allocation_count"`. Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-service-google-ai-studio.html

func (*PutGoogleaistudio) ChunkingSettings

func (r *PutGoogleaistudio) ChunkingSettings(chunkingsettings types.InferenceChunkingSettingsVariant) *PutGoogleaistudio

The chunking configuration object. API name: chunking_settings

func (PutGoogleaistudio) Do

func (r PutGoogleaistudio) Do(providedCtx context.Context) (*Response, error)

Do runs the request through the transport, handles the response, and returns a putgoogleaistudio.Response.
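A sketch of executing the request with a bounded context, continuing from the illustrative `builder` and `req` shown above (requires context, time, and log):

ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

res, err := builder.Request(req).Do(ctx)
if err != nil {
	log.Fatalf("putgoogleaistudio: %v", err)
}
log.Printf("endpoint %q created for task type %v", res.InferenceId, res.TaskType)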

func (*PutGoogleaistudio) ErrorTrace

func (r *PutGoogleaistudio) ErrorTrace(errortrace bool) *PutGoogleaistudio

ErrorTrace When set to `true` Elasticsearch will include the full stack trace of errors when they occur. API name: error_trace

func (*PutGoogleaistudio) FilterPath

func (r *PutGoogleaistudio) FilterPath(filterpaths ...string) *PutGoogleaistudio

FilterPath Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch. API name: filter_path

func (*PutGoogleaistudio) Header

func (r *PutGoogleaistudio) Header(key, value string) *PutGoogleaistudio

Header sets a key, value pair in the PutGoogleaistudio headers map.

func (*PutGoogleaistudio) HttpRequest

func (r *PutGoogleaistudio) HttpRequest(ctx context.Context) (*http.Request, error)

HttpRequest returns the http.Request object built from the given parameters.
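A sketch of building the http.Request without sending it, for example to inspect the target URL while debugging; `builder` and `req` are the illustrative values from the earlier sketches.

httpReq, err := builder.Request(req).HttpRequest(context.Background())
if err != nil {
	log.Fatal(err)
}
log.Println(httpReq.Method, httpReq.URL.String())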

func (*PutGoogleaistudio) Human

func (r *PutGoogleaistudio) Human(human bool) *PutGoogleaistudio

Human When set to `true`, statistics are returned in a format suitable for humans, for example `"exists_time": "1h"` instead of `"exists_time_in_millis": 3600000`. When disabled, the human-readable values are omitted, which makes sense for responses consumed only by machines. API name: human

func (PutGoogleaistudio) Perform

func (r PutGoogleaistudio) Perform(providedCtx context.Context) (*http.Response, error)

Perform runs the http.Request through the provided transport and returns an http.Response.
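A sketch using Perform to obtain the raw *http.Response instead of a decoded Response, useful when handling the status code or body manually; `builder` and `req` are illustrative, and io and log are assumed imported.

httpRes, err := builder.Request(req).Perform(context.Background())
if err != nil {
	log.Fatal(err)
}
defer httpRes.Body.Close()

body, _ := io.ReadAll(httpRes.Body)
log.Printf("status=%d body=%s", httpRes.StatusCode, body)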

func (*PutGoogleaistudio) Pretty

func (r *PutGoogleaistudio) Pretty(pretty bool) *PutGoogleaistudio

Pretty If set to `true` the returned JSON will be "pretty-formatted". Use this option only for debugging. API name: pretty

func (*PutGoogleaistudio) Raw

Raw takes a JSON payload as input, which is then passed to the http.Request. If specified, Raw takes precedence over the Request method.
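A sketch of sending a hand-written JSON body. Raw's parameter is assumed here to be an io.Reader, since its signature is not shown above; the settings values are illustrative and strings, context, and log are assumed imported.

payload := strings.NewReader(`{
  "service": "googleaistudio",
  "service_settings": { "api_key": "<google-api-key>", "model_id": "gemini-1.5-flash" }
}`)

res, err := builder.Raw(payload).Do(context.Background())
if err != nil {
	log.Fatal(err)
}
log.Println("created:", res.InferenceId)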

func (*PutGoogleaistudio) Request

func (r *PutGoogleaistudio) Request(req *Request) *PutGoogleaistudio

Request allows setting the request property with the appropriate payload.

func (*PutGoogleaistudio) Service

The type of service supported for the specified task type. In this case, `googleaistudio`. API name: service

func (*PutGoogleaistudio) ServiceSettings

Settings used to install the inference model. These settings are specific to the `googleaistudio` service. API name: service_settings

type Request

type Request struct {

	// ChunkingSettings The chunking configuration object.
	ChunkingSettings *types.InferenceChunkingSettings `json:"chunking_settings,omitempty"`
	// Service The type of service supported for the specified task type. In this case,
	// `googleaistudio`.
	Service googleaiservicetype.GoogleAiServiceType `json:"service"`
	// ServiceSettings Settings used to install the inference model. These settings are specific to
	// the `googleaistudio` service.
	ServiceSettings types.GoogleAiStudioServiceSettings `json:"service_settings"`
}

Request holds the request body struct for the package putgoogleaistudio

https://github.com/elastic/elasticsearch-specification/blob/f6a370d0fba975752c644fc730f7c45610e28f36/specification/inference/put_googleaistudio/PutGoogleAiStudioRequest.ts#L29-L77
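A sketch of populating the typed Request directly. The googleaiservicetype.Googleaistudio constant and the ApiKey/ModelId fields on types.GoogleAiStudioServiceSettings are assumed from the API specification rather than copied from the generated code; the values are illustrative.

req := putgoogleaistudio.NewRequest()
req.Service = googleaiservicetype.Googleaistudio // assumed enum constant
req.ServiceSettings = types.GoogleAiStudioServiceSettings{
	ApiKey:  "<google-api-key>",   // assumed field name for service_settings.api_key
	ModelId: "gemini-1.5-flash",   // assumed field name for service_settings.model_id
}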

func NewRequest

func NewRequest() *Request

NewRequest returns a Request

func (*Request) FromJSON

func (r *Request) FromJSON(data string) (*Request, error)

FromJSON allows loading arbitrary JSON into the request structure.

type Response

type Response struct {

	// ChunkingSettings Chunking configuration object
	ChunkingSettings *types.InferenceChunkingSettings `json:"chunking_settings,omitempty"`
	// InferenceId The inference Id
	InferenceId string `json:"inference_id"`
	// Service The service type
	Service string `json:"service"`
	// ServiceSettings Settings specific to the service
	ServiceSettings json.RawMessage `json:"service_settings"`
	// TaskSettings Task settings specific to the service and task type
	TaskSettings json.RawMessage `json:"task_settings,omitempty"`
	// TaskType The task type
	TaskType tasktype.TaskType `json:"task_type"`
}

Response holds the response body struct for the package putgoogleaistudio

https://github.com/elastic/elasticsearch-specification/blob/f6a370d0fba975752c644fc730f7c45610e28f36/specification/inference/put_googleaistudio/PutGoogleAiStudioResponse.ts#L22-L24
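Because ServiceSettings and TaskSettings are returned as json.RawMessage, decode them yourself when you need to inspect them. A small sketch, assuming `res` is the Response returned by Do and encoding/json and log are imported:

var settings map[string]any
if err := json.Unmarshal(res.ServiceSettings, &settings); err != nil {
	log.Fatal(err)
}
log.Printf("task_type=%v service=%s settings=%v", res.TaskType, res.Service, settings)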

func NewResponse

func NewResponse() *Response

NewResponse returns a Response
