machinelearning

package module
v1.24.4
Published: Mar 29, 2024 License: Apache-2.0 Imports: 43 Imported by: 17

Documentation

Overview

Package machinelearning provides the API client, operations, and parameter types for Amazon Machine Learning.

Definition of the public APIs exposed by Amazon Machine Learning
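A minimal usage sketch, assuming the standard AWS SDK for Go v2 module layout (the config loading shown is the usual pattern, not taken from this page):

package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
)

func main() {
	// Load shared AWS configuration (region, credentials, and so on).
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	// Construct the Amazon Machine Learning client.
	client := machinelearning.NewFromConfig(cfg)

	// List batch predictions; an empty input returns the first page of results.
	out, err := client.DescribeBatchPredictions(context.TODO(), &machinelearning.DescribeBatchPredictionsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, bp := range out.Results {
		log.Println(aws.ToString(bp.BatchPredictionId))
	}
}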

Index

Constants

const ServiceAPIVersion = "2014-12-12"

const ServiceID = "Machine Learning"

Variables

This section is empty.

Functions

func NewDefaultEndpointResolver

func NewDefaultEndpointResolver() *internalendpoints.Resolver

NewDefaultEndpointResolver constructs a new service endpoint resolver

func WithAPIOptions added in v1.0.0

func WithAPIOptions(optFns ...func(*middleware.Stack) error) func(*Options)

WithAPIOptions returns a functional option for setting the Client's APIOptions option.
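A sketch that registers a simple build-step middleware on every operation, assuming the smithy-go middleware API; the middleware itself is illustrative:

package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
	"github.com/aws/smithy-go/middleware"
)

// newClientWithLogging attaches an illustrative middleware that prints a line
// before each request is built.
func newClientWithLogging(cfg aws.Config) *machinelearning.Client {
	logBuild := middleware.BuildMiddlewareFunc("ExampleLogger",
		func(ctx context.Context, in middleware.BuildInput, next middleware.BuildHandler) (
			middleware.BuildOutput, middleware.Metadata, error,
		) {
			fmt.Println("building Amazon Machine Learning request")
			return next.HandleBuild(ctx, in)
		},
	)
	return machinelearning.NewFromConfig(cfg, machinelearning.WithAPIOptions(
		func(stack *middleware.Stack) error {
			return stack.Build.Add(logBuild, middleware.After)
		},
	))
}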

func WithEndpointResolver deprecated

func WithEndpointResolver(v EndpointResolver) func(*Options)

Deprecated: EndpointResolver and WithEndpointResolver. Providing a value for this field will likely prevent you from using any endpoint-related service features released after the introduction of EndpointResolverV2 and BaseEndpoint. To migrate an EndpointResolver implementation that uses a custom endpoint, set the client option BaseEndpoint instead.
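A sketch of that migration; the URL is a placeholder for whatever endpoint your resolver used to return:

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
)

// newClientWithCustomEndpoint sets BaseEndpoint instead of supplying the
// deprecated EndpointResolver.
func newClientWithCustomEndpoint(cfg aws.Config) *machinelearning.Client {
	return machinelearning.NewFromConfig(cfg, func(o *machinelearning.Options) {
		// Placeholder endpoint; substitute the one you previously resolved.
		o.BaseEndpoint = aws.String("https://machinelearning.us-east-1.amazonaws.com")
	})
}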

func WithEndpointResolverV2 added in v1.16.0

func WithEndpointResolverV2(v EndpointResolverV2) func(*Options)

WithEndpointResolverV2 returns a functional option for setting the Client's EndpointResolverV2 option.

func WithSigV4SigningName added in v1.20.2

func WithSigV4SigningName(name string) func(*Options)

WithSigV4SigningName applies an override to the authentication workflow to use the given signing name for SigV4-authenticated operations.

This is an advanced setting. The value here is FINAL, taking precedence over the resolved signing name from both auth scheme resolution and endpoint resolution.

func WithSigV4SigningRegion added in v1.20.2

func WithSigV4SigningRegion(region string) func(*Options)

WithSigV4SigningRegion applies an override to the authentication workflow to use the given signing region for SigV4-authenticated operations.

This is an advanced setting. The value here is FINAL, taking precedence over the resolved signing region from both auth scheme resolution and endpoint resolution.
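A sketch combining both overrides. The values are placeholders; because the overrides are FINAL, set them only when you are certain the service expects them:

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
)

// newClientWithSigningOverrides pins the SigV4 signing name and region for
// every SigV4-authenticated operation made by this client.
func newClientWithSigningOverrides(cfg aws.Config) *machinelearning.Client {
	return machinelearning.NewFromConfig(cfg,
		machinelearning.WithSigV4SigningName("machinelearning"),
		machinelearning.WithSigV4SigningRegion("us-east-1"),
	)
}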

Types

type AddTagsInput

type AddTagsInput struct {

	// The ID of the ML object to tag. For example, exampleModelId .
	//
	// This member is required.
	ResourceId *string

	// The type of the ML object to tag.
	//
	// This member is required.
	ResourceType types.TaggableResourceType

	// The key-value pairs to use to create tags. If you specify a key without
	// specifying a value, Amazon ML creates a tag with the specified key and a value
	// of null.
	//
	// This member is required.
	Tags []types.Tag
	// contains filtered or unexported fields
}

type AddTagsOutput

type AddTagsOutput struct {

	// The ID of the ML object that was tagged.
	ResourceId *string

	// The type of the ML object that was tagged.
	ResourceType types.TaggableResourceType

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Amazon ML returns the following elements.

type AuthResolverParameters added in v1.20.2

type AuthResolverParameters struct {
	// The name of the operation being invoked.
	Operation string

	// The region in which the operation is being invoked.
	Region string
}

AuthResolverParameters contains the set of inputs necessary for auth scheme resolution.

type AuthSchemeResolver added in v1.20.2

type AuthSchemeResolver interface {
	ResolveAuthSchemes(context.Context, *AuthResolverParameters) ([]*smithyauth.Option, error)
}

AuthSchemeResolver returns a set of possible authentication options for an operation.

type BatchPredictionAvailableWaiter added in v1.3.0

type BatchPredictionAvailableWaiter struct {
	// contains filtered or unexported fields
}

BatchPredictionAvailableWaiter defines the waiters for BatchPredictionAvailable

func NewBatchPredictionAvailableWaiter added in v1.3.0

func NewBatchPredictionAvailableWaiter(client DescribeBatchPredictionsAPIClient, optFns ...func(*BatchPredictionAvailableWaiterOptions)) *BatchPredictionAvailableWaiter

NewBatchPredictionAvailableWaiter constructs a BatchPredictionAvailableWaiter.

func (*BatchPredictionAvailableWaiter) Wait added in v1.3.0

func (w *BatchPredictionAvailableWaiter) Wait(ctx context.Context, params *DescribeBatchPredictionsInput, maxWaitDur time.Duration, optFns ...func(*BatchPredictionAvailableWaiterOptions)) error

Wait calls the waiter function for BatchPredictionAvailable waiter. The maxWaitDur is the maximum wait duration the waiter will wait. The maxWaitDur is required and must be greater than zero.

func (*BatchPredictionAvailableWaiter) WaitForOutput added in v1.9.0

func (w *BatchPredictionAvailableWaiter) WaitForOutput(ctx context.Context, params *DescribeBatchPredictionsInput, maxWaitDur time.Duration, optFns ...func(*BatchPredictionAvailableWaiterOptions)) (*DescribeBatchPredictionsOutput, error)

WaitForOutput calls the waiter function for BatchPredictionAvailable waiter and returns the output of the successful operation. The maxWaitDur is the maximum wait duration the waiter will wait. The maxWaitDur is required and must be greater than zero.
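A usage sketch, assuming the Wait signature follows the standard SDK v2 waiter pattern (ctx, describe input, maxWaitDur, options):

package main

import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
)

// waitForBatchPredictions polls DescribeBatchPredictions until the matched
// batch predictions become available or 30 minutes elapse.
func waitForBatchPredictions(ctx context.Context, client *machinelearning.Client) error {
	waiter := machinelearning.NewBatchPredictionAvailableWaiter(client,
		func(o *machinelearning.BatchPredictionAvailableWaiterOptions) {
			o.MinDelay = 30 * time.Second
			o.LogWaitAttempts = true
		})
	// In practice you would narrow the input with FilterVariable and EQ; an
	// empty input waits on every batch prediction the service returns.
	return waiter.Wait(ctx, &machinelearning.DescribeBatchPredictionsInput{}, 30*time.Minute)
}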

type BatchPredictionAvailableWaiterOptions added in v1.3.0

type BatchPredictionAvailableWaiterOptions struct {

	// Set of options to modify how an operation is invoked. These apply to all
	// operations invoked for this client. Use functional options on operation call to
	// modify this list for per operation behavior.
	//
	// Passing options here is functionally equivalent to passing values to this
	// config's ClientOptions field that extend the inner client's APIOptions directly.
	APIOptions []func(*middleware.Stack) error

	// Functional options to be passed to all operations invoked by this client.
	//
	// Function values that modify the inner APIOptions are applied after the waiter
	// config's own APIOptions modifiers.
	ClientOptions []func(*Options)

	// MinDelay is the minimum amount of time to delay between retries. If unset,
	// BatchPredictionAvailableWaiter will use default minimum delay of 30 seconds.
	// Note that MinDelay must resolve to a value lesser than or equal to the MaxDelay.
	MinDelay time.Duration

	// MaxDelay is the maximum amount of time to delay between retries. If unset or
	// set to zero, BatchPredictionAvailableWaiter will use default max delay of 120
	// seconds. Note that MaxDelay must resolve to value greater than or equal to the
	// MinDelay.
	MaxDelay time.Duration

	// LogWaitAttempts is used to enable logging for waiter retry attempts
	LogWaitAttempts bool

	// Retryable is a function that can be used to override the service-defined
	// waiter behavior based on operation output or returned error. The waiter uses
	// this function to decide whether a state is retryable or terminal. By default,
	// service-modeled logic populates this option, so it can be used to define a
	// custom waiter state with fallback to the service-modeled waiter state
	// mutators. The function returns an error for a failure state, true and a nil
	// error for a retry state, and false and a nil error for a success state.
	Retryable func(context.Context, *DescribeBatchPredictionsInput, *DescribeBatchPredictionsOutput, error) (bool, error)
}

BatchPredictionAvailableWaiterOptions are waiter options for BatchPredictionAvailableWaiter

type Client

type Client struct {
	// contains filtered or unexported fields
}

Client provides the API client to make operations call for Amazon Machine Learning.

func New

func New(options Options, optFns ...func(*Options)) *Client

New returns an initialized Client based on the functional options. Provide additional functional options to further configure the behavior of the client, such as changing the client's endpoint or adding custom middleware behavior.

func NewFromConfig

func NewFromConfig(cfg aws.Config, optFns ...func(*Options)) *Client

NewFromConfig returns a new client from the provided config.

func (*Client) AddTags

func (c *Client) AddTags(ctx context.Context, params *AddTagsInput, optFns ...func(*Options)) (*AddTagsOutput, error)

Adds one or more tags to an object, up to a limit of 10. Each tag consists of a key and an optional value. If you add a tag using a key that is already associated with the ML object, AddTags updates the tag's value.
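A sketch of tagging a model. The resource ID is the placeholder from the field documentation, and the resource-type string conversion stands in for the corresponding types constant:

package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning/types"
)

// tagModel attaches a single key/value tag to an ML model.
func tagModel(ctx context.Context, client *machinelearning.Client) error {
	_, err := client.AddTags(ctx, &machinelearning.AddTagsInput{
		ResourceId:   aws.String("exampleModelId"),
		ResourceType: types.TaggableResourceType("MLModel"), // or the corresponding types constant
		Tags: []types.Tag{
			{Key: aws.String("project"), Value: aws.String("demo")},
		},
	})
	if err != nil {
		log.Printf("AddTags failed: %v", err)
	}
	return err
}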

func (*Client) CreateBatchPrediction

func (c *Client) CreateBatchPrediction(ctx context.Context, params *CreateBatchPredictionInput, optFns ...func(*Options)) (*CreateBatchPredictionOutput, error)

Generates predictions for a group of observations. The observations to process exist in one or more data files referenced by a DataSource . This operation creates a new BatchPrediction , and uses an MLModel and the data files referenced by the DataSource as information sources.

CreateBatchPrediction is an asynchronous operation. In response to CreateBatchPrediction , Amazon Machine Learning (Amazon ML) immediately returns and sets the BatchPrediction status to PENDING . After the BatchPrediction completes, Amazon ML sets the status to COMPLETED .

You can poll for status updates by using the GetBatchPrediction operation and checking the Status parameter of the result. After the COMPLETED status appears, the results are available in the location specified by the OutputUri parameter.
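A sketch of the create-then-poll flow described above; all IDs and the S3 URI are placeholders:

package main

import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
)

// createAndPollBatchPrediction starts a batch prediction, then polls
// GetBatchPrediction until the job reaches a terminal status.
func createAndPollBatchPrediction(ctx context.Context, client *machinelearning.Client) error {
	_, err := client.CreateBatchPrediction(ctx, &machinelearning.CreateBatchPredictionInput{
		BatchPredictionId:           aws.String("example-bp-id"),
		BatchPredictionDataSourceId: aws.String("example-ds-id"),
		MLModelId:                   aws.String("exampleModelId"),
		OutputUri:                   aws.String("s3://example-bucket/batch-output/"),
	})
	if err != nil {
		return err
	}
	for {
		out, err := client.GetBatchPrediction(ctx, &machinelearning.GetBatchPredictionInput{
			BatchPredictionId: aws.String("example-bp-id"),
		})
		if err != nil {
			return err
		}
		// Status values come from the service: PENDING, INPROGRESS, COMPLETED, FAILED, DELETED.
		if s := string(out.Status); s == "COMPLETED" || s == "FAILED" {
			return nil
		}
		time.Sleep(30 * time.Second)
	}
}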

func (*Client) CreateDataSourceFromRDS

func (c *Client) CreateDataSourceFromRDS(ctx context.Context, params *CreateDataSourceFromRDSInput, optFns ...func(*Options)) (*CreateDataSourceFromRDSOutput, error)

Creates a DataSource object from an Amazon Relational Database Service (http://aws.amazon.com/rds/) (Amazon RDS). A DataSource references data that can be used to perform CreateMLModel , CreateEvaluation , or CreateBatchPrediction operations.

CreateDataSourceFromRDS is an asynchronous operation. In response to CreateDataSourceFromRDS , Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING . After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED . DataSource in the COMPLETED or PENDING state can be used only to perform CreateMLModel , CreateEvaluation , or CreateBatchPrediction operations.

If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

func (*Client) CreateDataSourceFromRedshift

func (c *Client) CreateDataSourceFromRedshift(ctx context.Context, params *CreateDataSourceFromRedshiftInput, optFns ...func(*Options)) (*CreateDataSourceFromRedshiftOutput, error)

Creates a DataSource from a database hosted on an Amazon Redshift cluster. A DataSource references data that can be used to perform either CreateMLModel , CreateEvaluation , or CreateBatchPrediction operations.

CreateDataSourceFromRedshift is an asynchronous operation. In response to CreateDataSourceFromRedshift , Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING . After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED . DataSource in COMPLETED or PENDING states can be used to perform only CreateMLModel , CreateEvaluation , or CreateBatchPrediction operations.

If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

The observations should be contained in the database hosted on an Amazon Redshift cluster and should be specified by a SelectSqlQuery query. Amazon ML executes an Unload command in Amazon Redshift to transfer the result set of the SelectSqlQuery query to S3StagingLocation .

After the DataSource has been created, it's ready for use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel , the DataSource also requires a recipe. A recipe describes how each input variable will be used in training an MLModel . Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it be combined with another variable or will it be split apart into word combinations? The recipe provides answers to these questions.

You can't change an existing datasource, but you can copy and modify the settings from an existing Amazon Redshift datasource to create a new datasource. To do so, call GetDataSource for an existing datasource and copy the values to a CreateDataSource call. Change the settings that you want to change and make sure that all required fields have the appropriate values.

func (*Client) CreateDataSourceFromS3

func (c *Client) CreateDataSourceFromS3(ctx context.Context, params *CreateDataSourceFromS3Input, optFns ...func(*Options)) (*CreateDataSourceFromS3Output, error)

Creates a DataSource object. A DataSource references data that can be used to perform CreateMLModel , CreateEvaluation , or CreateBatchPrediction operations.

CreateDataSourceFromS3 is an asynchronous operation. In response to CreateDataSourceFromS3 , Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING . After the DataSource has been created and is ready for use, Amazon ML sets the Status parameter to COMPLETED . DataSource in the COMPLETED or PENDING state can be used to perform only CreateMLModel , CreateEvaluation or CreateBatchPrediction operations.

If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

The observation data used in a DataSource should be ready to use; that is, it should have a consistent structure, and missing data values should be kept to a minimum. The observation data must reside in one or more .csv files in an Amazon Simple Storage Service (Amazon S3) location, along with a schema that describes the data items by name and type. The same schema must be used for all of the data files referenced by the DataSource .

After the DataSource has been created, it's ready to use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel , the DataSource also needs a recipe. A recipe describes how each input variable will be used in training an MLModel . Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it be combined with another variable or will it be split apart into word combinations? The recipe provides answers to these questions.
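A sketch that registers a CSV file and its schema as a DataSource with ComputeStatistics enabled; bucket names and IDs are placeholders:

package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning/types"
)

// createS3DataSource registers observation data in S3 as a DataSource that can
// later be used to train an MLModel.
func createS3DataSource(ctx context.Context, client *machinelearning.Client) error {
	_, err := client.CreateDataSourceFromS3(ctx, &machinelearning.CreateDataSourceFromS3Input{
		DataSourceId: aws.String("example-ds-id"),
		DataSpec: &types.S3DataSpec{
			DataLocationS3:       aws.String("s3://example-bucket/observations.csv"),
			DataSchemaLocationS3: aws.String("s3://example-bucket/observations.csv.schema"),
		},
		ComputeStatistics: true,
		DataSourceName:    aws.String("example S3 data source"),
	})
	return err
}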

func (*Client) CreateEvaluation

func (c *Client) CreateEvaluation(ctx context.Context, params *CreateEvaluationInput, optFns ...func(*Options)) (*CreateEvaluationOutput, error)

Creates a new Evaluation of an MLModel . An MLModel is evaluated on a set of observations associated to a DataSource . Like a DataSource for an MLModel , the DataSource for an Evaluation contains values for the Target Variable . The Evaluation compares the predicted result for each observation to the actual outcome and provides a summary so that you know how effective the MLModel functions on the test data. Evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE or MulticlassAvgFScore based on the corresponding MLModelType : BINARY , REGRESSION or MULTICLASS .

CreateEvaluation is an asynchronous operation. In response to CreateEvaluation , Amazon Machine Learning (Amazon ML) immediately returns and sets the evaluation status to PENDING . After the Evaluation is created and ready for use, Amazon ML sets the status to COMPLETED . You can use the GetEvaluation operation to check progress of the evaluation during the creation operation.

func (*Client) CreateMLModel

func (c *Client) CreateMLModel(ctx context.Context, params *CreateMLModelInput, optFns ...func(*Options)) (*CreateMLModelOutput, error)

Creates a new MLModel using the DataSource and the recipe as information sources. An MLModel is nearly immutable. Users can update only the MLModelName and the ScoreThreshold in an MLModel without creating a new MLModel .

CreateMLModel is an asynchronous operation. In response to CreateMLModel , Amazon Machine Learning (Amazon ML) immediately returns and sets the MLModel status to PENDING . After the MLModel has been created and is ready for use, Amazon ML sets the status to COMPLETED . You can use the GetMLModel operation to check the progress of the MLModel during the creation operation.

CreateMLModel requires a DataSource with computed statistics, which can be created by setting ComputeStatistics to true in CreateDataSourceFromRDS , CreateDataSourceFromS3 , or CreateDataSourceFromRedshift operations.
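A sketch that trains a binary model from such a DataSource. IDs are placeholders; the sgd.* keys are the training parameters listed under CreateMLModelInput below, and the enum string conversion stands in for the corresponding types constant:

package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning/types"
)

// createBinaryModel trains a binary model from a DataSource that was created
// with ComputeStatistics set to true.
func createBinaryModel(ctx context.Context, client *machinelearning.Client) error {
	_, err := client.CreateMLModel(ctx, &machinelearning.CreateMLModelInput{
		MLModelId:            aws.String("exampleModelId"),
		MLModelType:          types.MLModelType("BINARY"), // or the corresponding types constant
		TrainingDataSourceId: aws.String("example-training-ds-id"),
		MLModelName:          aws.String("example binary model"),
		Parameters: map[string]string{
			"sgd.maxPasses":   "10",
			"sgd.shuffleType": "auto",
		},
	})
	return err
}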

func (*Client) CreateRealtimeEndpoint

func (c *Client) CreateRealtimeEndpoint(ctx context.Context, params *CreateRealtimeEndpointInput, optFns ...func(*Options)) (*CreateRealtimeEndpointOutput, error)

Creates a real-time endpoint for the MLModel . The endpoint contains the URI of the MLModel ; that is, the location to send real-time prediction requests for the specified MLModel .

func (*Client) DeleteBatchPrediction

func (c *Client) DeleteBatchPrediction(ctx context.Context, params *DeleteBatchPredictionInput, optFns ...func(*Options)) (*DeleteBatchPredictionOutput, error)

Assigns the DELETED status to a BatchPrediction , rendering it unusable. After using the DeleteBatchPrediction operation, you can use the GetBatchPrediction operation to verify that the status of the BatchPrediction changed to DELETED. Caution: The result of the DeleteBatchPrediction operation is irreversible.

func (*Client) DeleteDataSource

func (c *Client) DeleteDataSource(ctx context.Context, params *DeleteDataSourceInput, optFns ...func(*Options)) (*DeleteDataSourceOutput, error)

Assigns the DELETED status to a DataSource , rendering it unusable. After using the DeleteDataSource operation, you can use the GetDataSource operation to verify that the status of the DataSource changed to DELETED. Caution: The results of the DeleteDataSource operation are irreversible.

func (*Client) DeleteEvaluation

func (c *Client) DeleteEvaluation(ctx context.Context, params *DeleteEvaluationInput, optFns ...func(*Options)) (*DeleteEvaluationOutput, error)

Assigns the DELETED status to an Evaluation , rendering it unusable. After invoking the DeleteEvaluation operation, you can use the GetEvaluation operation to verify that the status of the Evaluation changed to DELETED . Caution: The results of the DeleteEvaluation operation are irreversible.

func (*Client) DeleteMLModel

func (c *Client) DeleteMLModel(ctx context.Context, params *DeleteMLModelInput, optFns ...func(*Options)) (*DeleteMLModelOutput, error)

Assigns the DELETED status to an MLModel , rendering it unusable. After using the DeleteMLModel operation, you can use the GetMLModel operation to verify that the status of the MLModel changed to DELETED. Caution: The result of the DeleteMLModel operation is irreversible.

func (*Client) DeleteRealtimeEndpoint

func (c *Client) DeleteRealtimeEndpoint(ctx context.Context, params *DeleteRealtimeEndpointInput, optFns ...func(*Options)) (*DeleteRealtimeEndpointOutput, error)

Deletes a real-time endpoint of an MLModel .

func (*Client) DeleteTags

func (c *Client) DeleteTags(ctx context.Context, params *DeleteTagsInput, optFns ...func(*Options)) (*DeleteTagsOutput, error)

Deletes the specified tags associated with an ML object. After this operation is complete, you can't recover deleted tags. If you specify a tag that doesn't exist, Amazon ML ignores it.

func (*Client) DescribeBatchPredictions

func (c *Client) DescribeBatchPredictions(ctx context.Context, params *DescribeBatchPredictionsInput, optFns ...func(*Options)) (*DescribeBatchPredictionsOutput, error)

Returns a list of BatchPrediction operations that match the search criteria in the request.

func (*Client) DescribeDataSources

func (c *Client) DescribeDataSources(ctx context.Context, params *DescribeDataSourcesInput, optFns ...func(*Options)) (*DescribeDataSourcesOutput, error)

Returns a list of DataSource that match the search criteria in the request.

func (*Client) DescribeEvaluations

func (c *Client) DescribeEvaluations(ctx context.Context, params *DescribeEvaluationsInput, optFns ...func(*Options)) (*DescribeEvaluationsOutput, error)

Returns a list of Evaluation that match the search criteria in the request.

func (*Client) DescribeMLModels

func (c *Client) DescribeMLModels(ctx context.Context, params *DescribeMLModelsInput, optFns ...func(*Options)) (*DescribeMLModelsOutput, error)

Returns a list of MLModel that match the search criteria in the request.

func (*Client) DescribeTags

func (c *Client) DescribeTags(ctx context.Context, params *DescribeTagsInput, optFns ...func(*Options)) (*DescribeTagsOutput, error)

Describes one or more of the tags for your Amazon ML object.

func (*Client) GetBatchPrediction

func (c *Client) GetBatchPrediction(ctx context.Context, params *GetBatchPredictionInput, optFns ...func(*Options)) (*GetBatchPredictionOutput, error)

Returns a BatchPrediction that includes detailed metadata, status, and data file information for a Batch Prediction request.

func (*Client) GetDataSource

func (c *Client) GetDataSource(ctx context.Context, params *GetDataSourceInput, optFns ...func(*Options)) (*GetDataSourceOutput, error)

Returns a DataSource that includes metadata and data file information, as well as the current status of the DataSource . GetDataSource provides results in normal or verbose format. The verbose format adds the schema description and the list of files pointed to by the DataSource to the normal format.

func (*Client) GetEvaluation

func (c *Client) GetEvaluation(ctx context.Context, params *GetEvaluationInput, optFns ...func(*Options)) (*GetEvaluationOutput, error)

Returns an Evaluation that includes metadata as well as the current status of the Evaluation .

func (*Client) GetMLModel

func (c *Client) GetMLModel(ctx context.Context, params *GetMLModelInput, optFns ...func(*Options)) (*GetMLModelOutput, error)

Returns an MLModel that includes detailed metadata, data source information, and the current status of the MLModel . GetMLModel provides results in normal or verbose format.

func (*Client) Options added in v1.21.0

func (c *Client) Options() Options

Options returns a copy of the client configuration.

Callers SHOULD NOT perform mutations on any inner structures within client config. Config overrides should instead be made on a per-operation basis through functional options.
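A sketch of a per-operation override via a functional option; the region value is illustrative:

package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
)

// describeInOtherRegion shows a per-operation override: rather than mutating
// the value returned by client.Options(), pass a functional option to the call.
func describeInOtherRegion(ctx context.Context, client *machinelearning.Client) (*machinelearning.DescribeMLModelsOutput, error) {
	return client.DescribeMLModels(ctx, &machinelearning.DescribeMLModelsInput{},
		func(o *machinelearning.Options) {
			o.Region = "us-west-2" // applies to this call only
		})
}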

func (*Client) Predict

func (c *Client) Predict(ctx context.Context, params *PredictInput, optFns ...func(*Options)) (*PredictOutput, error)

Generates a prediction for the observation using the specified ML Model . Note: Not all response parameters will be populated. Whether a response parameter is populated depends on the type of model requested.
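A sketch of a real-time prediction call. The PredictInput fields used here (MLModelId, PredictEndpoint, Record) follow the service's request shape but are not shown in this excerpt; the endpoint URL and record values are placeholders:

package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
)

// predictOne sends a single real-time prediction request. The endpoint is the
// one returned by CreateRealtimeEndpoint, and the record keys must match the
// variable names in the model's schema.
func predictOne(ctx context.Context, client *machinelearning.Client) (*machinelearning.PredictOutput, error) {
	return client.Predict(ctx, &machinelearning.PredictInput{
		MLModelId:       aws.String("exampleModelId"),
		PredictEndpoint: aws.String("https://realtime.machinelearning.us-east-1.amazonaws.com"),
		Record: map[string]string{
			"feature1": "42",
			"feature2": "blue",
		},
	})
}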

func (*Client) UpdateBatchPrediction

func (c *Client) UpdateBatchPrediction(ctx context.Context, params *UpdateBatchPredictionInput, optFns ...func(*Options)) (*UpdateBatchPredictionOutput, error)

Updates the BatchPredictionName of a BatchPrediction . You can use the GetBatchPrediction operation to view the contents of the updated data element.

func (*Client) UpdateDataSource

func (c *Client) UpdateDataSource(ctx context.Context, params *UpdateDataSourceInput, optFns ...func(*Options)) (*UpdateDataSourceOutput, error)

Updates the DataSourceName of a DataSource . You can use the GetDataSource operation to view the contents of the updated data element.

func (*Client) UpdateEvaluation

func (c *Client) UpdateEvaluation(ctx context.Context, params *UpdateEvaluationInput, optFns ...func(*Options)) (*UpdateEvaluationOutput, error)

Updates the EvaluationName of an Evaluation . You can use the GetEvaluation operation to view the contents of the updated data element.

func (*Client) UpdateMLModel

func (c *Client) UpdateMLModel(ctx context.Context, params *UpdateMLModelInput, optFns ...func(*Options)) (*UpdateMLModelOutput, error)

Updates the MLModelName and the ScoreThreshold of an MLModel . You can use the GetMLModel operation to view the contents of the updated data element.

type CreateBatchPredictionInput

type CreateBatchPredictionInput struct {

	// The ID of the DataSource that points to the group of observations to predict.
	//
	// This member is required.
	BatchPredictionDataSourceId *string

	// A user-supplied ID that uniquely identifies the BatchPrediction .
	//
	// This member is required.
	BatchPredictionId *string

	// The ID of the MLModel that will generate predictions for the group of
	// observations.
	//
	// This member is required.
	MLModelId *string

	// The location of an Amazon Simple Storage Service (Amazon S3) bucket or
	// directory to store the batch prediction results. The following substrings are
	// not allowed in the s3 key portion of the outputURI field: ':', '//', '/./',
	// '/../'. Amazon ML needs permissions to store and retrieve the logs on your
	// behalf. For information about how to set permissions, see the Amazon Machine
	// Learning Developer Guide (https://docs.aws.amazon.com/machine-learning/latest/dg)
	// .
	//
	// This member is required.
	OutputUri *string

	// A user-supplied name or description of the BatchPrediction . BatchPredictionName
	// can only use the UTF-8 character set.
	BatchPredictionName *string
	// contains filtered or unexported fields
}

type CreateBatchPredictionOutput

type CreateBatchPredictionOutput struct {

	// A user-supplied ID that uniquely identifies the BatchPrediction . This value is
	// identical to the value of the BatchPredictionId in the request.
	BatchPredictionId *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a CreateBatchPrediction operation, and is an acknowledgement that Amazon ML received the request. The CreateBatchPrediction operation is asynchronous. You can poll for status updates by using the GetBatchPrediction operation and checking the Status parameter of the result.

type CreateDataSourceFromRDSInput

type CreateDataSourceFromRDSInput struct {

	// A user-supplied ID that uniquely identifies the DataSource . Typically, an
	// Amazon Resource Number (ARN) becomes the ID for a DataSource .
	//
	// This member is required.
	DataSourceId *string

	// The data specification of an Amazon RDS DataSource :
	//   - DatabaseInformation -
	//   - DatabaseName - The name of the Amazon RDS database.
	//   - InstanceIdentifier - A unique identifier for the Amazon RDS database
	//   instance.
	//   - DatabaseCredentials - AWS Identity and Access Management (IAM) credentials
	//   that are used to connect to the Amazon RDS database.
	//   - ResourceRole - A role (DataPipelineDefaultResourceRole) assumed by an EC2
	//   instance to carry out the copy task from Amazon RDS to Amazon Simple Storage
	//   Service (Amazon S3). For more information, see Role templates (https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html)
	//   for data pipelines.
	//   - ServiceRole - A role (DataPipelineDefaultRole) assumed by the AWS Data
	//   Pipeline service to monitor the progress of the copy task from Amazon RDS to
	//   Amazon S3. For more information, see Role templates (https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html)
	//   for data pipelines.
	//   - SecurityInfo - The security information to use to access an RDS DB
	//   instance. You need to set up appropriate ingress rules for the security entity
	//   IDs provided to allow access to the Amazon RDS instance. Specify a [ SubnetId
	//   , SecurityGroupIds ] pair for a VPC-based RDS DB instance.
	//   - SelectSqlQuery - A query that is used to retrieve the observation data for
	//   the Datasource .
	//   - S3StagingLocation - The Amazon S3 location for staging Amazon RDS data. The
	//   data retrieved from Amazon RDS using SelectSqlQuery is stored in this
	//   location.
	//   - DataSchemaUri - The Amazon S3 location of the DataSchema .
	//   - DataSchema - A JSON string representing the schema. This is not required if
	//   DataSchemaUri is specified.
	//   - DataRearrangement - A JSON string that represents the splitting and
	//   rearrangement requirements for the Datasource . Sample -
	//   "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"
	//
	// This member is required.
	RDSData *types.RDSDataSpec

	// The role that Amazon ML assumes on behalf of the user to create and activate a
	// data pipeline in the user's account and copy data using the SelectSqlQuery
	// query from Amazon RDS to Amazon S3.
	//
	// This member is required.
	RoleARN *string

	// The compute statistics for a DataSource . The statistics are generated from the
	// observation data referenced by a DataSource . Amazon ML uses the statistics
	// internally during MLModel training. This parameter must be set to true if the
	// DataSource needs to be used for MLModel training.
	ComputeStatistics bool

	// A user-supplied name or description of the DataSource .
	DataSourceName *string
	// contains filtered or unexported fields
}

type CreateDataSourceFromRDSOutput

type CreateDataSourceFromRDSOutput struct {

	// A user-supplied ID that uniquely identifies the datasource. This value should
	// be identical to the value of the DataSourceID in the request.
	DataSourceId *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a CreateDataSourceFromRDS operation, and is an acknowledgement that Amazon ML received the request. The CreateDataSourceFromRDS operation is asynchronous. You can poll for updates by using the GetDataSource operation and checking the Status parameter. You can inspect the Message when Status shows up as FAILED . You can also check the progress of the copy operation by going to the DataPipeline console and looking up the pipeline using the pipelineId from the describe call.

type CreateDataSourceFromRedshiftInput

type CreateDataSourceFromRedshiftInput struct {

	// A user-supplied ID that uniquely identifies the DataSource .
	//
	// This member is required.
	DataSourceId *string

	// The data specification of an Amazon Redshift DataSource :
	//   - DatabaseInformation -
	//   - DatabaseName - The name of the Amazon Redshift database.
	//   - ClusterIdentifier - The unique ID for the Amazon Redshift cluster.
	//   - DatabaseCredentials - The AWS Identity and Access Management (IAM)
	//   credentials that are used to connect to the Amazon Redshift database.
	//   - SelectSqlQuery - The query that is used to retrieve the observation data
	//   for the Datasource .
	//   - S3StagingLocation - The Amazon Simple Storage Service (Amazon S3) location
	//   for staging Amazon Redshift data. The data retrieved from Amazon Redshift using
	//   the SelectSqlQuery query is stored in this location.
	//   - DataSchemaUri - The Amazon S3 location of the DataSchema .
	//   - DataSchema - A JSON string representing the schema. This is not required if
	//   DataSchemaUri is specified.
	//   - DataRearrangement - A JSON string that represents the splitting and
	//   rearrangement requirements for the DataSource . Sample -
	//   "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"
	//
	// This member is required.
	DataSpec *types.RedshiftDataSpec

	// A fully specified role Amazon Resource Name (ARN). Amazon ML assumes the role
	// on behalf of the user to create the following:
	//   - A security group to allow Amazon ML to execute the SelectSqlQuery query on
	//   an Amazon Redshift cluster
	//   - An Amazon S3 bucket policy to grant Amazon ML read/write permissions on the
	//   S3StagingLocation
	//
	// This member is required.
	RoleARN *string

	// The compute statistics for a DataSource . The statistics are generated from the
	// observation data referenced by a DataSource . Amazon ML uses the statistics
	// internally during MLModel training. This parameter must be set to true if the
	// DataSource needs to be used for MLModel training.
	ComputeStatistics bool

	// A user-supplied name or description of the DataSource .
	DataSourceName *string
	// contains filtered or unexported fields
}

type CreateDataSourceFromRedshiftOutput

type CreateDataSourceFromRedshiftOutput struct {

	// A user-supplied ID that uniquely identifies the datasource. This value should
	// be identical to the value of the DataSourceID in the request.
	DataSourceId *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a CreateDataSourceFromRedshift operation, and is an acknowledgement that Amazon ML received the request. The CreateDataSourceFromRedshift operation is asynchronous. You can poll for updates by using the GetDataSource operation and checking the Status parameter.

type CreateDataSourceFromS3Input

type CreateDataSourceFromS3Input struct {

	// A user-supplied identifier that uniquely identifies the DataSource .
	//
	// This member is required.
	DataSourceId *string

	// The data specification of a DataSource :
	//   - DataLocationS3 - The Amazon S3 location of the observation data.
	//   - DataSchemaLocationS3 - The Amazon S3 location of the DataSchema .
	//   - DataSchema - A JSON string representing the schema. This is not required if
	//   DataSchemaUri is specified.
	//   - DataRearrangement - A JSON string that represents the splitting and
	//   rearrangement requirements for the Datasource . Sample -
	//   "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"
	//
	// This member is required.
	DataSpec *types.S3DataSpec

	// The compute statistics for a DataSource . The statistics are generated from the
	// observation data referenced by a DataSource . Amazon ML uses the statistics
	// internally during MLModel training. This parameter must be set to true if the
	// DataSource needs to be used for MLModel training.
	ComputeStatistics bool

	// A user-supplied name or description of the DataSource .
	DataSourceName *string
	// contains filtered or unexported fields
}

type CreateDataSourceFromS3Output

type CreateDataSourceFromS3Output struct {

	// A user-supplied ID that uniquely identifies the DataSource . This value should
	// be identical to the value of the DataSourceID in the request.
	DataSourceId *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a CreateDataSourceFromS3 operation, and is an acknowledgement that Amazon ML received the request. The CreateDataSourceFromS3 operation is asynchronous. You can poll for updates by using the GetDataSource operation and checking the Status parameter.

type CreateEvaluationInput

type CreateEvaluationInput struct {

	// The ID of the DataSource for the evaluation. The schema of the DataSource must
	// match the schema used to create the MLModel .
	//
	// This member is required.
	EvaluationDataSourceId *string

	// A user-supplied ID that uniquely identifies the Evaluation .
	//
	// This member is required.
	EvaluationId *string

	// The ID of the MLModel to evaluate. The schema used in creating the MLModel must
	// match the schema of the DataSource used in the Evaluation .
	//
	// This member is required.
	MLModelId *string

	// A user-supplied name or description of the Evaluation .
	EvaluationName *string
	// contains filtered or unexported fields
}

type CreateEvaluationOutput

type CreateEvaluationOutput struct {

	// The user-supplied ID that uniquely identifies the Evaluation . This value should
	// be identical to the value of the EvaluationId in the request.
	EvaluationId *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a CreateEvaluation operation, and is an acknowledgement that Amazon ML received the request. The CreateEvaluation operation is asynchronous. You can poll for status updates by using the GetEvaluation operation and checking the Status parameter.

type CreateMLModelInput

type CreateMLModelInput struct {

	// A user-supplied ID that uniquely identifies the MLModel .
	//
	// This member is required.
	MLModelId *string

	// The category of supervised learning that this MLModel will address. Choose from
	// the following types:
	//   - Choose REGRESSION if the MLModel will be used to predict a numeric value.
	//   - Choose BINARY if the MLModel result has two possible values.
	//   - Choose MULTICLASS if the MLModel result has a limited number of values.
	// For more information, see the Amazon Machine Learning Developer Guide (https://docs.aws.amazon.com/machine-learning/latest/dg)
	// .
	//
	// This member is required.
	MLModelType types.MLModelType

	// The DataSource that points to the training data.
	//
	// This member is required.
	TrainingDataSourceId *string

	// A user-supplied name or description of the MLModel .
	MLModelName *string

	// A list of the training parameters in the MLModel . The list is implemented as a
	// map of key-value pairs. The following is the current set of training parameters:
	//
	//   - sgd.maxMLModelSizeInBytes - The maximum allowed size of the model. Depending
	//   on the input data, the size of the model might affect its performance. The value
	//   is an integer that ranges from 100000 to 2147483648 . The default value is
	//   33554432 .
	//   - sgd.maxPasses - The number of times that the training process traverses the
	//   observations to build the MLModel . The value is an integer that ranges from 1
	//   to 10000 . The default value is 10 .
	//   - sgd.shuffleType - Whether Amazon ML shuffles the training data. Shuffling
	//   the data improves a model's ability to find the optimal solution for a variety
	//   of data types. The valid values are auto and none . The default value is none
	//   . We strongly recommend that you shuffle your data.
	//   - sgd.l1RegularizationAmount - The coefficient regularization L1 norm. It
	//   controls overfitting the data by penalizing large coefficients. This tends to
	//   drive coefficients to zero, resulting in a sparse feature set. If you use this
	//   parameter, start by specifying a small value, such as 1.0E-08 . The value is a
	//   double that ranges from 0 to MAX_DOUBLE . The default is to not use L1
	//   normalization. This parameter can't be used when L2 is specified. Use this
	//   parameter sparingly.
	//   - sgd.l2RegularizationAmount - The coefficient regularization L2 norm. It
	//   controls overfitting the data by penalizing large coefficients. This tends to
	//   drive coefficients to small, nonzero values. If you use this parameter, start by
	//   specifying a small value, such as 1.0E-08 . The value is a double that ranges
	//   from 0 to MAX_DOUBLE . The default is to not use L2 normalization. This
	//   parameter can't be used when L1 is specified. Use this parameter sparingly.
	Parameters map[string]string

	// The data recipe for creating the MLModel . You must specify either the recipe or
	// its URI. If you don't specify a recipe or its URI, Amazon ML creates a default.
	Recipe *string

	// The Amazon Simple Storage Service (Amazon S3) location and file name that
	// contains the MLModel recipe. You must specify either the recipe or its URI. If
	// you don't specify a recipe or its URI, Amazon ML creates a default.
	RecipeUri *string
	// contains filtered or unexported fields
}

type CreateMLModelOutput

type CreateMLModelOutput struct {

	// A user-supplied ID that uniquely identifies the MLModel . This value should be
	// identical to the value of the MLModelId in the request.
	MLModelId *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a CreateMLModel operation, and is an acknowledgement that Amazon ML received the request. The CreateMLModel operation is asynchronous. You can poll for status updates by using the GetMLModel operation and checking the Status parameter.

type CreateRealtimeEndpointInput

type CreateRealtimeEndpointInput struct {

	// The ID assigned to the MLModel during creation.
	//
	// This member is required.
	MLModelId *string
	// contains filtered or unexported fields
}

type CreateRealtimeEndpointOutput

type CreateRealtimeEndpointOutput struct {

	// A user-supplied ID that uniquely identifies the MLModel . This value should be
	// identical to the value of the MLModelId in the request.
	MLModelId *string

	// The endpoint information of the MLModel
	RealtimeEndpointInfo *types.RealtimeEndpointInfo

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a CreateRealtimeEndpoint operation. The result contains the MLModelId and the endpoint information for the MLModel . Note: The endpoint information includes the URI of the MLModel ; that is, the location to send online prediction requests for the specified MLModel .

type DataSourceAvailableWaiter added in v1.3.0

type DataSourceAvailableWaiter struct {
	// contains filtered or unexported fields
}

DataSourceAvailableWaiter defines the waiters for DataSourceAvailable

func NewDataSourceAvailableWaiter added in v1.3.0

func NewDataSourceAvailableWaiter(client DescribeDataSourcesAPIClient, optFns ...func(*DataSourceAvailableWaiterOptions)) *DataSourceAvailableWaiter

NewDataSourceAvailableWaiter constructs a DataSourceAvailableWaiter.

func (*DataSourceAvailableWaiter) Wait added in v1.3.0

func (w *DataSourceAvailableWaiter) Wait(ctx context.Context, params *DescribeDataSourcesInput, maxWaitDur time.Duration, optFns ...func(*DataSourceAvailableWaiterOptions)) error

Wait calls the waiter function for DataSourceAvailable waiter. The maxWaitDur is the maximum wait duration the waiter will wait. The maxWaitDur is required and must be greater than zero.

func (*DataSourceAvailableWaiter) WaitForOutput added in v1.9.0

func (w *DataSourceAvailableWaiter) WaitForOutput(ctx context.Context, params *DescribeDataSourcesInput, maxWaitDur time.Duration, optFns ...func(*DataSourceAvailableWaiterOptions)) (*DescribeDataSourcesOutput, error)

WaitForOutput calls the waiter function for DataSourceAvailable waiter and returns the output of the successful operation. The maxWaitDur is the maximum wait duration the waiter will wait. The maxWaitDur is required and must be greater than zero.

type DataSourceAvailableWaiterOptions added in v1.3.0

type DataSourceAvailableWaiterOptions struct {

	// Set of options to modify how an operation is invoked. These apply to all
	// operations invoked for this client. Use functional options on operation call to
	// modify this list for per operation behavior.
	//
	// Passing options here is functionally equivalent to passing values to this
	// config's ClientOptions field that extend the inner client's APIOptions directly.
	APIOptions []func(*middleware.Stack) error

	// Functional options to be passed to all operations invoked by this client.
	//
	// Function values that modify the inner APIOptions are applied after the waiter
	// config's own APIOptions modifiers.
	ClientOptions []func(*Options)

	// MinDelay is the minimum amount of time to delay between retries. If unset,
	// DataSourceAvailableWaiter will use default minimum delay of 30 seconds. Note
	// that MinDelay must resolve to a value lesser than or equal to the MaxDelay.
	MinDelay time.Duration

	// MaxDelay is the maximum amount of time to delay between retries. If unset or
	// set to zero, DataSourceAvailableWaiter will use default max delay of 120
	// seconds. Note that MaxDelay must resolve to value greater than or equal to the
	// MinDelay.
	MaxDelay time.Duration

	// LogWaitAttempts is used to enable logging for waiter retry attempts
	LogWaitAttempts bool

	// Retryable is a function that can be used to override the service-defined
	// waiter behavior based on operation output or returned error. The waiter uses
	// this function to decide whether a state is retryable or terminal. By default,
	// service-modeled logic populates this option, so it can be used to define a
	// custom waiter state with fallback to the service-modeled waiter state
	// mutators. The function returns an error for a failure state, true and a nil
	// error for a retry state, and false and a nil error for a success state.
	Retryable func(context.Context, *DescribeDataSourcesInput, *DescribeDataSourcesOutput, error) (bool, error)
}

DataSourceAvailableWaiterOptions are waiter options for DataSourceAvailableWaiter

type DeleteBatchPredictionInput

type DeleteBatchPredictionInput struct {

	// A user-supplied ID that uniquely identifies the BatchPrediction .
	//
	// This member is required.
	BatchPredictionId *string
	// contains filtered or unexported fields
}

type DeleteBatchPredictionOutput

type DeleteBatchPredictionOutput struct {

	// A user-supplied ID that uniquely identifies the BatchPrediction . This value
	// should be identical to the value of the BatchPredictionID in the request.
	BatchPredictionId *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a DeleteBatchPrediction operation. You can use the GetBatchPrediction operation and check the value of the Status parameter to see whether a BatchPrediction is marked as DELETED .

type DeleteDataSourceInput

type DeleteDataSourceInput struct {

	// A user-supplied ID that uniquely identifies the DataSource .
	//
	// This member is required.
	DataSourceId *string
	// contains filtered or unexported fields
}

type DeleteDataSourceOutput

type DeleteDataSourceOutput struct {

	// A user-supplied ID that uniquely identifies the DataSource . This value should
	// be identical to the value of the DataSourceID in the request.
	DataSourceId *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a DeleteDataSource operation.

type DeleteEvaluationInput

type DeleteEvaluationInput struct {

	// A user-supplied ID that uniquely identifies the Evaluation to delete.
	//
	// This member is required.
	EvaluationId *string
	// contains filtered or unexported fields
}

type DeleteEvaluationOutput

type DeleteEvaluationOutput struct {

	// A user-supplied ID that uniquely identifies the Evaluation . This value should
	// be identical to the value of the EvaluationId in the request.
	EvaluationId *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a DeleteEvaluation operation. The output indicates that Amazon Machine Learning (Amazon ML) received the request. You can use the GetEvaluation operation and check the value of the Status parameter to see whether an Evaluation is marked as DELETED .

type DeleteMLModelInput

type DeleteMLModelInput struct {

	// A user-supplied ID that uniquely identifies the MLModel .
	//
	// This member is required.
	MLModelId *string
	// contains filtered or unexported fields
}

type DeleteMLModelOutput

type DeleteMLModelOutput struct {

	// A user-supplied ID that uniquely identifies the MLModel . This value should be
	// identical to the value of the MLModelID in the request.
	MLModelId *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a DeleteMLModel operation. You can use the GetMLModel operation and check the value of the Status parameter to see whether an MLModel is marked as DELETED .

type DeleteRealtimeEndpointInput

type DeleteRealtimeEndpointInput struct {

	// The ID assigned to the MLModel during creation.
	//
	// This member is required.
	MLModelId *string
	// contains filtered or unexported fields
}

type DeleteRealtimeEndpointOutput

type DeleteRealtimeEndpointOutput struct {

	// A user-supplied ID that uniquely identifies the MLModel . This value should be
	// identical to the value of the MLModelId in the request.
	MLModelId *string

	// The endpoint information of the MLModel
	RealtimeEndpointInfo *types.RealtimeEndpointInfo

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a DeleteRealtimeEndpoint operation. The result contains the MLModelId and the endpoint information for the MLModel .

type DeleteTagsInput

type DeleteTagsInput struct {

	// The ID of the tagged ML object. For example, exampleModelId .
	//
	// This member is required.
	ResourceId *string

	// The type of the tagged ML object.
	//
	// This member is required.
	ResourceType types.TaggableResourceType

	// One or more tags to delete.
	//
	// This member is required.
	TagKeys []string
	// contains filtered or unexported fields
}

type DeleteTagsOutput

type DeleteTagsOutput struct {

	// The ID of the ML object from which tags were deleted.
	ResourceId *string

	// The type of the ML object from which tags were deleted.
	ResourceType types.TaggableResourceType

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Amazon ML returns the following elements.

type DescribeBatchPredictionsAPIClient added in v0.30.0

type DescribeBatchPredictionsAPIClient interface {
	DescribeBatchPredictions(context.Context, *DescribeBatchPredictionsInput, ...func(*Options)) (*DescribeBatchPredictionsOutput, error)
}

DescribeBatchPredictionsAPIClient is a client that implements the DescribeBatchPredictions operation.

type DescribeBatchPredictionsInput

type DescribeBatchPredictionsInput struct {

	// The equal to operator. The BatchPrediction results will have FilterVariable
	// values that exactly match the value specified with EQ .
	EQ *string

	// Use one of the following variables to filter a list of BatchPrediction :
	//   - CreatedAt - Sets the search criteria to the BatchPrediction creation date.
	//   - Status - Sets the search criteria to the BatchPrediction status.
	//   - Name - Sets the search criteria to the contents of the BatchPrediction Name
	//   .
	//   - IAMUser - Sets the search criteria to the user account that invoked the
	//   BatchPrediction creation.
	//   - MLModelId - Sets the search criteria to the MLModel used in the
	//   BatchPrediction .
	//   - DataSourceId - Sets the search criteria to the DataSource used in the
	//   BatchPrediction .
	//   - DataURI - Sets the search criteria to the data file(s) used in the
	//   BatchPrediction . The URL can identify either a file or an Amazon Simple
	//   Storage Service (Amazon S3) bucket or directory.
	FilterVariable types.BatchPredictionFilterVariable

	// The greater than or equal to operator. The BatchPrediction results will have
	// FilterVariable values that are greater than or equal to the value specified with
	// GE .
	GE *string

	// The greater than operator. The BatchPrediction results will have FilterVariable
	// values that are greater than the value specified with GT .
	GT *string

	// The less than or equal to operator. The BatchPrediction results will have
	// FilterVariable values that are less than or equal to the value specified with LE
	// .
	LE *string

	// The less than operator. The BatchPrediction results will have FilterVariable
	// values that are less than the value specified with LT .
	LT *string

	// The number of pages of information to include in the result. The range of
	// acceptable values is 1 through 100 . The default value is 100 .
	Limit *int32

	// The not equal to operator. The BatchPrediction results will have FilterVariable
	// values not equal to the value specified with NE .
	NE *string

	// An ID of the page in the paginated results.
	NextToken *string

	// A string that is found at the beginning of a variable, such as Name or Id . For
	// example, a Batch Prediction operation could have the Name
	// 2014-09-09-HolidayGiftMailer . To search for this BatchPrediction , select Name
	// for the FilterVariable and any of the following strings for the Prefix :
	//   - 2014-09
	//   - 2014-09-09
	//   - 2014-09-09-Holiday
	Prefix *string

	// A two-value parameter that determines the sequence of the resulting list of
	// BatchPrediction s.
	//   - asc - Arranges the list in ascending order (A-Z, 0-9).
	//   - dsc - Arranges the list in descending order (Z-A, 9-0).
	// Results are sorted by FilterVariable .
	SortOrder types.SortOrder
	// contains filtered or unexported fields
}

type DescribeBatchPredictionsOutput

type DescribeBatchPredictionsOutput struct {

	// The ID of the next page in the paginated results that indicates at least one
	// more page follows.
	NextToken *string

	// A list of BatchPrediction objects that meet the search criteria.
	Results []types.BatchPrediction

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a DescribeBatchPredictions operation. The content is essentially a list of BatchPrediction s.

type DescribeBatchPredictionsPaginator added in v0.30.0

type DescribeBatchPredictionsPaginator struct {
	// contains filtered or unexported fields
}

DescribeBatchPredictionsPaginator is a paginator for DescribeBatchPredictions

func NewDescribeBatchPredictionsPaginator added in v0.30.0

func NewDescribeBatchPredictionsPaginator(client DescribeBatchPredictionsAPIClient, params *DescribeBatchPredictionsInput, optFns ...func(*DescribeBatchPredictionsPaginatorOptions)) *DescribeBatchPredictionsPaginator

NewDescribeBatchPredictionsPaginator returns a new DescribeBatchPredictionsPaginator

func (*DescribeBatchPredictionsPaginator) HasMorePages added in v0.30.0

func (p *DescribeBatchPredictionsPaginator) HasMorePages() bool

HasMorePages returns a boolean indicating whether more pages are available

func (*DescribeBatchPredictionsPaginator) NextPage added in v0.30.0

func (p *DescribeBatchPredictionsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*DescribeBatchPredictionsOutput, error)

NextPage retrieves the next DescribeBatchPredictions page.

type DescribeBatchPredictionsPaginatorOptions added in v0.30.0

type DescribeBatchPredictionsPaginatorOptions struct {
	// The number of pages of information to include in the result. The range of
	// acceptable values is 1 through 100 . The default value is 100 .
	Limit int32

	// Set to true if pagination should stop if the service returns a pagination token
	// that matches the most recent token provided to the service.
	StopOnDuplicateToken bool
}

DescribeBatchPredictionsPaginatorOptions is the paginator options for DescribeBatchPredictions
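A paginator usage sketch; the constructor signature mirrors NewDescribeDataSourcesPaginator below:

package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
)

// listAllBatchPredictions walks every page of DescribeBatchPredictions results
// and prints each batch prediction ID.
func listAllBatchPredictions(ctx context.Context, client *machinelearning.Client) error {
	paginator := machinelearning.NewDescribeBatchPredictionsPaginator(client,
		&machinelearning.DescribeBatchPredictionsInput{},
		func(o *machinelearning.DescribeBatchPredictionsPaginatorOptions) {
			o.Limit = 100 // page size; acceptable values are 1 through 100
		})
	for paginator.HasMorePages() {
		page, err := paginator.NextPage(ctx)
		if err != nil {
			return err
		}
		for _, bp := range page.Results {
			fmt.Println(aws.ToString(bp.BatchPredictionId))
		}
	}
	return nil
}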

type DescribeDataSourcesAPIClient added in v0.30.0

type DescribeDataSourcesAPIClient interface {
	DescribeDataSources(context.Context, *DescribeDataSourcesInput, ...func(*Options)) (*DescribeDataSourcesOutput, error)
}

DescribeDataSourcesAPIClient is a client that implements the DescribeDataSources operation.

type DescribeDataSourcesInput

type DescribeDataSourcesInput struct {

	// The equal to operator. The DataSource results will have FilterVariable values
	// that exactly match the value specified with EQ .
	EQ *string

	// Use one of the following variables to filter a list of DataSource :
	//   - CreatedAt - Sets the search criteria to DataSource creation dates.
	//   - Status - Sets the search criteria to DataSource statuses.
	//   - Name - Sets the search criteria to the contents of DataSource Name .
	//   - DataUri - Sets the search criteria to the URI of data files used to create
	//   the DataSource . The URI can identify either a file or an Amazon Simple
	//   Storage Service (Amazon S3) bucket or directory.
	//   - IAMUser - Sets the search criteria to the user account that invoked the
	//   DataSource creation.
	FilterVariable types.DataSourceFilterVariable

	// The greater than or equal to operator. The DataSource results will have
	// FilterVariable values that are greater than or equal to the value specified with
	// GE .
	GE *string

	// The greater than operator. The DataSource results will have FilterVariable
	// values that are greater than the value specified with GT .
	GT *string

	// The less than or equal to operator. The DataSource results will have
	// FilterVariable values that are less than or equal to the value specified with LE
	// .
	LE *string

	// The less than operator. The DataSource results will have FilterVariable values
	// that are less than the value specified with LT .
	LT *string

	// The maximum number of DataSource to include in the result.
	Limit *int32

	// The not equal to operator. The DataSource results will have FilterVariable
	// values not equal to the value specified with NE .
	NE *string

	// The ID of the page in the paginated results.
	NextToken *string

	// A string that is found at the beginning of a variable, such as Name or Id . For
	// example, a DataSource could have the Name 2014-09-09-HolidayGiftMailer . To
	// search for this DataSource , select Name for the FilterVariable and any of the
	// following strings for the Prefix :
	//   - 2014-09
	//   - 2014-09-09
	//   - 2014-09-09-Holiday
	Prefix *string

	// A two-value parameter that determines the sequence of the resulting list of
	// DataSource .
	//   - asc - Arranges the list in ascending order (A-Z, 0-9).
	//   - dsc - Arranges the list in descending order (Z-A, 9-0).
	// Results are sorted by FilterVariable .
	SortOrder types.SortOrder
	// contains filtered or unexported fields
}
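A hedged sketch of building a filtered DescribeDataSources request from the fields above, assuming an existing client and context; the prefix and limit are illustrative, and the string-backed enum value for the filter variable is written as a cast ("Name") rather than relying on a specific constant name.

package mlexamples

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning/types"
)

// listDataSourcesByPrefix lists DataSource entries whose Name begins with the
// given prefix, sorted in descending order on the filter variable.
func listDataSourcesByPrefix(ctx context.Context, client *machinelearning.Client, prefix string) error {
	out, err := client.DescribeDataSources(ctx, &machinelearning.DescribeDataSourcesInput{
		// DataSourceFilterVariable is a string-backed enum; "Name" selects the
		// Name filter (the package also exports named constants for these values).
		FilterVariable: types.DataSourceFilterVariable("Name"),
		Prefix:         aws.String(prefix),
		SortOrder:      types.SortOrderDsc,
		Limit:          aws.Int32(25),
	})
	if err != nil {
		return err
	}
	for _, ds := range out.Results {
		fmt.Println(aws.ToString(ds.Name), ds.Status)
	}
	return nil
}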

type DescribeDataSourcesOutput

type DescribeDataSourcesOutput struct {

	// An ID of the next page in the paginated results that indicates at least one
	// more page follows.
	NextToken *string

	// A list of DataSource that meet the search criteria.
	Results []types.DataSource

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the query results from a DescribeDataSources operation. The content is essentially a list of DataSource .

type DescribeDataSourcesPaginator added in v0.30.0

type DescribeDataSourcesPaginator struct {
	// contains filtered or unexported fields
}

DescribeDataSourcesPaginator is a paginator for DescribeDataSources

func NewDescribeDataSourcesPaginator added in v0.30.0

func NewDescribeDataSourcesPaginator(client DescribeDataSourcesAPIClient, params *DescribeDataSourcesInput, optFns ...func(*DescribeDataSourcesPaginatorOptions)) *DescribeDataSourcesPaginator

NewDescribeDataSourcesPaginator returns a new DescribeDataSourcesPaginator

func (*DescribeDataSourcesPaginator) HasMorePages added in v0.30.0

func (p *DescribeDataSourcesPaginator) HasMorePages() bool

HasMorePages returns a boolean indicating whether more pages are available

func (*DescribeDataSourcesPaginator) NextPage added in v0.30.0

func (p *DescribeDataSourcesPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*DescribeDataSourcesOutput, error)

NextPage retrieves the next DescribeDataSources page.

type DescribeDataSourcesPaginatorOptions added in v0.30.0

type DescribeDataSourcesPaginatorOptions struct {
	// The maximum number of DataSource to include in the result.
	Limit int32

	// Set to true if pagination should stop if the service returns a pagination token
	// that matches the most recent token provided to the service.
	StopOnDuplicateToken bool
}

DescribeDataSourcesPaginatorOptions is the paginator options for DescribeDataSources

type DescribeEvaluationsAPIClient added in v0.30.0

type DescribeEvaluationsAPIClient interface {
	DescribeEvaluations(context.Context, *DescribeEvaluationsInput, ...func(*Options)) (*DescribeEvaluationsOutput, error)
}

DescribeEvaluationsAPIClient is a client that implements the DescribeEvaluations operation.

type DescribeEvaluationsInput

type DescribeEvaluationsInput struct {

	// The equal to operator. The Evaluation results will have FilterVariable values
	// that exactly match the value specified with EQ .
	EQ *string

	// Use one of the following variables to filter a list of Evaluation objects:
	//   - CreatedAt - Sets the search criteria to the Evaluation creation date.
	//   - Status - Sets the search criteria to the Evaluation status.
	//   - Name - Sets the search criteria to the contents of Evaluation Name .
	//   - IAMUser - Sets the search criteria to the user account that invoked an
	//   Evaluation .
	//   - MLModelId - Sets the search criteria to the MLModel that was evaluated.
	//   - DataSourceId - Sets the search criteria to the DataSource used in
	//   Evaluation .
	//   - DataUri - Sets the search criteria to the data file(s) used in Evaluation .
	//   The URL can identify either a file or an Amazon Simple Storage Service (Amazon
	//   S3) bucket or directory.
	FilterVariable types.EvaluationFilterVariable

	// The greater than or equal to operator. The Evaluation results will have
	// FilterVariable values that are greater than or equal to the value specified with
	// GE .
	GE *string

	// The greater than operator. The Evaluation results will have FilterVariable
	// values that are greater than the value specified with GT .
	GT *string

	// The less than or equal to operator. The Evaluation results will have
	// FilterVariable values that are less than or equal to the value specified
	// with LE .
	LE *string

	// The less than operator. The Evaluation results will have FilterVariable values
	// that are less than the value specified with LT .
	LT *string

	// The maximum number of Evaluation to include in the result.
	Limit *int32

	// The not equal to operator. The Evaluation results will have FilterVariable
	// values not equal to the value specified with NE .
	NE *string

	// The ID of the page in the paginated results.
	NextToken *string

	// A string that is found at the beginning of a variable, such as Name or Id . For
	// example, an Evaluation could have the Name 2014-09-09-HolidayGiftMailer . To
	// search for this Evaluation , select Name for the FilterVariable and any of the
	// following strings for the Prefix :
	//   - 2014-09
	//   - 2014-09-09
	//   - 2014-09-09-Holiday
	Prefix *string

	// A two-value parameter that determines the sequence of the resulting list of
	// Evaluation .
	//   - asc - Arranges the list in ascending order (A-Z, 0-9).
	//   - dsc - Arranges the list in descending order (Z-A, 9-0).
	// Results are sorted by FilterVariable .
	SortOrder types.SortOrder
	// contains filtered or unexported fields
}

type DescribeEvaluationsOutput

type DescribeEvaluationsOutput struct {

	// The ID of the next page in the paginated results that indicates at least one
	// more page follows.
	NextToken *string

	// A list of Evaluation that meet the search criteria.
	Results []types.Evaluation

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the query results from a DescribeEvaluations operation. The content is essentially a list of Evaluation .

type DescribeEvaluationsPaginator added in v0.30.0

type DescribeEvaluationsPaginator struct {
	// contains filtered or unexported fields
}

DescribeEvaluationsPaginator is a paginator for DescribeEvaluations

func NewDescribeEvaluationsPaginator added in v0.30.0

func NewDescribeEvaluationsPaginator(client DescribeEvaluationsAPIClient, params *DescribeEvaluationsInput, optFns ...func(*DescribeEvaluationsPaginatorOptions)) *DescribeEvaluationsPaginator

NewDescribeEvaluationsPaginator returns a new DescribeEvaluationsPaginator

func (*DescribeEvaluationsPaginator) HasMorePages added in v0.30.0

func (p *DescribeEvaluationsPaginator) HasMorePages() bool

HasMorePages returns a boolean indicating whether more pages are available

func (*DescribeEvaluationsPaginator) NextPage added in v0.30.0

func (p *DescribeEvaluationsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*DescribeEvaluationsOutput, error)

NextPage retrieves the next DescribeEvaluations page.

type DescribeEvaluationsPaginatorOptions added in v0.30.0

type DescribeEvaluationsPaginatorOptions struct {
	// The maximum number of Evaluation to include in the result.
	Limit int32

	// Set to true if pagination should stop if the service returns a pagination token
	// that matches the most recent token provided to the service.
	StopOnDuplicateToken bool
}

DescribeEvaluationsPaginatorOptions is the paginator options for DescribeEvaluations

type DescribeMLModelsAPIClient added in v0.30.0

type DescribeMLModelsAPIClient interface {
	DescribeMLModels(context.Context, *DescribeMLModelsInput, ...func(*Options)) (*DescribeMLModelsOutput, error)
}

DescribeMLModelsAPIClient is a client that implements the DescribeMLModels operation.

type DescribeMLModelsInput

type DescribeMLModelsInput struct {

	// The equal to operator. The MLModel results will have FilterVariable values that
	// exactly match the value specified with EQ .
	EQ *string

	// Use one of the following variables to filter a list of MLModel :
	//   - CreatedAt - Sets the search criteria to MLModel creation date.
	//   - Status - Sets the search criteria to MLModel status.
	//   - Name - Sets the search criteria to the contents of MLModel Name .
	//   - IAMUser - Sets the search criteria to the user account that invoked the
	//   MLModel creation.
	//   - TrainingDataSourceId - Sets the search criteria to the DataSource used to
	//   train one or more MLModel .
	//   - RealtimeEndpointStatus - Sets the search criteria to the MLModel real-time
	//   endpoint status.
	//   - MLModelType - Sets the search criteria to MLModel type: binary, regression,
	//   or multi-class.
	//   - Algorithm - Sets the search criteria to the algorithm that the MLModel uses.
	//   - TrainingDataURI - Sets the search criteria to the data file(s) used in
	//   training a MLModel . The URL can identify either a file or an Amazon Simple
	//   Storage Service (Amazon S3) bucket or directory.
	FilterVariable types.MLModelFilterVariable

	// The greater than or equal to operator. The MLModel results will have
	// FilterVariable values that are greater than or equal to the value specified with
	// GE .
	GE *string

	// The greater than operator. The MLModel results will have FilterVariable values
	// that are greater than the value specified with GT .
	GT *string

	// The less than or equal to operator. The MLModel results will have FilterVariable
	// values that are less than or equal to the value specified with LE .
	LE *string

	// The less than operator. The MLModel results will have FilterVariable values
	// that are less than the value specified with LT .
	LT *string

	// The number of pages of information to include in the result. The range of
	// acceptable values is 1 through 100 . The default value is 100 .
	Limit *int32

	// The not equal to operator. The MLModel results will have FilterVariable values
	// not equal to the value specified with NE .
	NE *string

	// The ID of the page in the paginated results.
	NextToken *string

	// A string that is found at the beginning of a variable, such as Name or Id . For
	// example, an MLModel could have the Name 2014-09-09-HolidayGiftMailer . To search
	// for this MLModel , select Name for the FilterVariable and any of the following
	// strings for the Prefix :
	//   - 2014-09
	//   - 2014-09-09
	//   - 2014-09-09-Holiday
	Prefix *string

	// A two-value parameter that determines the sequence of the resulting list of
	// MLModel .
	//   - asc - Arranges the list in ascending order (A-Z, 0-9).
	//   - dsc - Arranges the list in descending order (Z-A, 9-0).
	// Results are sorted by FilterVariable .
	SortOrder types.SortOrder
	// contains filtered or unexported fields
}

type DescribeMLModelsOutput

type DescribeMLModelsOutput struct {

	// The ID of the next page in the paginated results that indicates at least one
	// more page follows.
	NextToken *string

	// A list of MLModel that meet the search criteria.
	Results []types.MLModel

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a DescribeMLModels operation. The content is essentially a list of MLModel .

type DescribeMLModelsPaginator added in v0.30.0

type DescribeMLModelsPaginator struct {
	// contains filtered or unexported fields
}

DescribeMLModelsPaginator is a paginator for DescribeMLModels

func NewDescribeMLModelsPaginator added in v0.30.0

func NewDescribeMLModelsPaginator(client DescribeMLModelsAPIClient, params *DescribeMLModelsInput, optFns ...func(*DescribeMLModelsPaginatorOptions)) *DescribeMLModelsPaginator

NewDescribeMLModelsPaginator returns a new DescribeMLModelsPaginator

func (*DescribeMLModelsPaginator) HasMorePages added in v0.30.0

func (p *DescribeMLModelsPaginator) HasMorePages() bool

HasMorePages returns a boolean indicating whether more pages are available

func (*DescribeMLModelsPaginator) NextPage added in v0.30.0

func (p *DescribeMLModelsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*DescribeMLModelsOutput, error)

NextPage retrieves the next DescribeMLModels page.

type DescribeMLModelsPaginatorOptions added in v0.30.0

type DescribeMLModelsPaginatorOptions struct {
	// The number of pages of information to include in the result. The range of
	// acceptable values is 1 through 100 . The default value is 100 .
	Limit int32

	// Set to true if pagination should stop if the service returns a pagination token
	// that matches the most recent token provided to the service.
	StopOnDuplicateToken bool
}

DescribeMLModelsPaginatorOptions is the paginator options for DescribeMLModels

type DescribeTagsInput

type DescribeTagsInput struct {

	// The ID of the ML object. For example, exampleModelId .
	//
	// This member is required.
	ResourceId *string

	// The type of the ML object.
	//
	// This member is required.
	ResourceType types.TaggableResourceType
	// contains filtered or unexported fields
}

type DescribeTagsOutput

type DescribeTagsOutput struct {

	// The ID of the tagged ML object.
	ResourceId *string

	// The type of the tagged ML object.
	ResourceType types.TaggableResourceType

	// A list of tags associated with the ML object.
	Tags []types.Tag

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Amazon ML returns the following elements.
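A minimal sketch of listing the tags on an ML object with DescribeTags, assuming an existing client; the resource ID is a placeholder, and the resource type is written as a typed string value rather than a named constant.

package mlexamples

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning/types"
)

// printModelTags prints every tag attached to the given MLModel.
func printModelTags(ctx context.Context, client *machinelearning.Client, modelID string) error {
	out, err := client.DescribeTags(ctx, &machinelearning.DescribeTagsInput{
		ResourceId: aws.String(modelID),
		// TaggableResourceType is a string-backed enum; "MLModel" is the value
		// for ML model resources (named constants exist for these values too).
		ResourceType: types.TaggableResourceType("MLModel"),
	})
	if err != nil {
		return err
	}
	for _, t := range out.Tags {
		fmt.Printf("%s=%s\n", aws.ToString(t.Key), aws.ToString(t.Value))
	}
	return nil
}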

type EndpointParameters added in v1.16.0

type EndpointParameters struct {
	// The AWS region used to dispatch the request.
	//
	// Parameter is required.
	//
	// AWS::Region
	Region *string

	// When true, use the dual-stack endpoint. If the configured endpoint does not
	// support dual-stack, dispatching the request MAY return an error.
	//
	// Defaults to false if no value is provided.
	//
	// AWS::UseDualStack
	UseDualStack *bool

	// When true, send this request to the FIPS-compliant regional endpoint. If the
	// configured endpoint does not have a FIPS compliant endpoint, dispatching the
	// request will return an error.
	//
	// Defaults to false if no value is provided.
	//
	// AWS::UseFIPS
	UseFIPS *bool

	// Override the endpoint used to send this request
	//
	// Parameter is required.
	//
	// SDK::Endpoint
	Endpoint *string
}

EndpointParameters provides the parameters that influence how endpoints are resolved.

func (EndpointParameters) ValidateRequired added in v1.16.0

func (p EndpointParameters) ValidateRequired() error

ValidateRequired validates required parameters are set.

func (EndpointParameters) WithDefaults added in v1.16.0

func (p EndpointParameters) WithDefaults() EndpointParameters

WithDefaults returns a shallow copy of EndpointParameters with default values applied to members where applicable.

type EndpointResolver

type EndpointResolver interface {
	ResolveEndpoint(region string, options EndpointResolverOptions) (aws.Endpoint, error)
}

EndpointResolver interface for resolving service endpoints.

func EndpointResolverFromURL added in v1.1.0

func EndpointResolverFromURL(url string, optFns ...func(*aws.Endpoint)) EndpointResolver

EndpointResolverFromURL returns an EndpointResolver configured using the provided endpoint URL. By default, the resolved endpoint uses the client region as the signing region, and the endpoint source is set to EndpointSourceCustom. You can provide functional options to configure endpoint values for the resolved endpoint.

type EndpointResolverFunc

type EndpointResolverFunc func(region string, options EndpointResolverOptions) (aws.Endpoint, error)

EndpointResolverFunc is a helper utility that wraps a function so it satisfies the EndpointResolver interface. This is useful when you want to add additional endpoint resolving logic, or stub out specific endpoints with custom values.

func (EndpointResolverFunc) ResolveEndpoint

func (fn EndpointResolverFunc) ResolveEndpoint(region string, options EndpointResolverOptions) (endpoint aws.Endpoint, err error)

type EndpointResolverOptions added in v0.29.0

type EndpointResolverOptions = internalendpoints.Options

EndpointResolverOptions is the service endpoint resolver options

type EndpointResolverV2 added in v1.16.0

type EndpointResolverV2 interface {
	// ResolveEndpoint attempts to resolve the endpoint with the provided options,
	// returning the endpoint if found. Otherwise an error is returned.
	ResolveEndpoint(ctx context.Context, params EndpointParameters) (
		smithyendpoints.Endpoint, error,
	)
}

EndpointResolverV2 provides the interface for resolving service endpoints.

func NewDefaultEndpointResolverV2 added in v1.16.0

func NewDefaultEndpointResolverV2() EndpointResolverV2
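A hedged sketch of a custom EndpointResolverV2 that sends every request to one fixed URL (for example, a local test proxy) and installs it with WithEndpointResolverV2; the resolver type, URL, and package layout are illustrative.

package main

import (
	"context"
	"log"
	"net/url"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
	smithyendpoints "github.com/aws/smithy-go/endpoints"
)

// staticResolver is a hypothetical EndpointResolverV2 that routes every
// request to one fixed URL.
type staticResolver struct {
	endpoint string
}

func (r *staticResolver) ResolveEndpoint(ctx context.Context, params machinelearning.EndpointParameters) (
	smithyendpoints.Endpoint, error,
) {
	u, err := url.Parse(r.endpoint)
	if err != nil {
		return smithyendpoints.Endpoint{}, err
	}
	return smithyendpoints.Endpoint{URI: *u}, nil
}

func main() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	// WithEndpointResolverV2 installs the custom resolver on the client.
	_ = machinelearning.NewFromConfig(cfg,
		machinelearning.WithEndpointResolverV2(&staticResolver{endpoint: "http://localhost:8080"}))
}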

type EvaluationAvailableWaiter added in v1.3.0

type EvaluationAvailableWaiter struct {
	// contains filtered or unexported fields
}

EvaluationAvailableWaiter defines the waiters for EvaluationAvailable

func NewEvaluationAvailableWaiter added in v1.3.0

func NewEvaluationAvailableWaiter(client DescribeEvaluationsAPIClient, optFns ...func(*EvaluationAvailableWaiterOptions)) *EvaluationAvailableWaiter

NewEvaluationAvailableWaiter constructs a EvaluationAvailableWaiter.

func (*EvaluationAvailableWaiter) Wait added in v1.3.0

func (w *EvaluationAvailableWaiter) Wait(ctx context.Context, params *DescribeEvaluationsInput, maxWaitDur time.Duration, optFns ...func(*EvaluationAvailableWaiterOptions)) error

Wait calls the waiter function for EvaluationAvailable waiter. The maxWaitDur is the maximum wait duration the waiter will wait. The maxWaitDur is required and must be greater than zero.

func (*EvaluationAvailableWaiter) WaitForOutput added in v1.9.0

func (w *EvaluationAvailableWaiter) WaitForOutput(ctx context.Context, params *DescribeEvaluationsInput, maxWaitDur time.Duration, optFns ...func(*EvaluationAvailableWaiterOptions)) (*DescribeEvaluationsOutput, error)

WaitForOutput calls the waiter function for EvaluationAvailable waiter and returns the output of the successful operation. The maxWaitDur is the maximum wait duration the waiter will wait. The maxWaitDur is required and must be greater than zero.

type EvaluationAvailableWaiterOptions added in v1.3.0

type EvaluationAvailableWaiterOptions struct {

	// Set of options to modify how an operation is invoked. These apply to all
	// operations invoked for this client. Use functional options on operation call to
	// modify this list for per operation behavior.
	//
	// Passing options here is functionally equivalent to passing values to this
	// config's ClientOptions field that extend the inner client's APIOptions directly.
	APIOptions []func(*middleware.Stack) error

	// Functional options to be passed to all operations invoked by this client.
	//
	// Function values that modify the inner APIOptions are applied after the waiter
	// config's own APIOptions modifiers.
	ClientOptions []func(*Options)

	// MinDelay is the minimum amount of time to delay between retries. If unset,
	// EvaluationAvailableWaiter will use default minimum delay of 30 seconds. Note
	// that MinDelay must resolve to a value lesser than or equal to the MaxDelay.
	MinDelay time.Duration

	// MaxDelay is the maximum amount of time to delay between retries. If unset or
	// set to zero, EvaluationAvailableWaiter will use default max delay of 120
	// seconds. Note that MaxDelay must resolve to value greater than or equal to the
	// MinDelay.
	MaxDelay time.Duration

	// LogWaitAttempts is used to enable logging for waiter retry attempts
	LogWaitAttempts bool

	// Retryable is a function that can be used to override the service-defined
	// waiter behavior based on operation output or the returned error. The waiter
	// uses this function to decide whether a state is retryable or terminal. By
	// default, service-modeled logic populates this option, so it can be used to
	// define a custom waiter state with fall-back to the service-modeled waiter
	// state mutators. The function returns an error for a failure state; for a
	// retry state it returns true and a nil error, and for a success state it
	// returns false and a nil error.
	Retryable func(context.Context, *DescribeEvaluationsInput, *DescribeEvaluationsOutput, error) (bool, error)
}

EvaluationAvailableWaiterOptions are waiter options for EvaluationAvailableWaiter
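A minimal sketch of waiting for an evaluation to become available, assuming Wait follows the waiter shape shown above (context, DescribeEvaluations input, maxWaitDur, options); the evaluation name and the 15-minute cap are illustrative, and the filter variable is written as a typed string value.

package mlexamples

import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning/types"
)

// waitForEvaluation blocks until the named Evaluation reaches an available
// state, or until maxWaitDur elapses.
func waitForEvaluation(ctx context.Context, client *machinelearning.Client, name string) error {
	waiter := machinelearning.NewEvaluationAvailableWaiter(client)

	return waiter.Wait(ctx,
		&machinelearning.DescribeEvaluationsInput{
			// EvaluationFilterVariable is a string-backed enum; "Name" filters on
			// the Evaluation name.
			FilterVariable: types.EvaluationFilterVariable("Name"),
			EQ:             aws.String(name),
		},
		15*time.Minute, // maxWaitDur: required and must be greater than zero
	)
}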

type GetBatchPredictionInput

type GetBatchPredictionInput struct {

	// An ID assigned to the BatchPrediction at creation.
	//
	// This member is required.
	BatchPredictionId *string
	// contains filtered or unexported fields
}

type GetBatchPredictionOutput

type GetBatchPredictionOutput struct {

	// The ID of the DataSource that was used to create the BatchPrediction .
	BatchPredictionDataSourceId *string

	// An ID assigned to the BatchPrediction at creation. This value should be
	// identical to the value of the BatchPredictionID in the request.
	BatchPredictionId *string

	// The approximate CPU time in milliseconds that Amazon Machine Learning spent
	// processing the BatchPrediction , normalized and scaled on computation resources.
	// ComputeTime is only available if the BatchPrediction is in the COMPLETED state.
	ComputeTime *int64

	// The time when the BatchPrediction was created. The time is expressed in epoch
	// time.
	CreatedAt *time.Time

	// The AWS user account that invoked the BatchPrediction . The account type can be
	// either an AWS root account or an AWS Identity and Access Management (IAM) user
	// account.
	CreatedByIamUser *string

	// The epoch time when Amazon Machine Learning marked the BatchPrediction as
	// COMPLETED or FAILED . FinishedAt is only available when the BatchPrediction is
	// in the COMPLETED or FAILED state.
	FinishedAt *time.Time

	// The location of the data file or directory in Amazon Simple Storage Service
	// (Amazon S3).
	InputDataLocationS3 *string

	// The number of invalid records that Amazon Machine Learning saw while processing
	// the BatchPrediction .
	InvalidRecordCount *int64

	// The time of the most recent edit to BatchPrediction . The time is expressed in
	// epoch time.
	LastUpdatedAt *time.Time

	// A link to the file that contains logs of the CreateBatchPrediction operation.
	LogUri *string

	// The ID of the MLModel that generated predictions for the BatchPrediction
	// request.
	MLModelId *string

	// A description of the most recent details about processing the batch prediction
	// request.
	Message *string

	// A user-supplied name or description of the BatchPrediction .
	Name *string

	// The location of an Amazon S3 bucket or directory to receive the operation
	// results.
	OutputUri *string

	// The epoch time when Amazon Machine Learning marked the BatchPrediction as
	// INPROGRESS . StartedAt isn't available if the BatchPrediction is in the PENDING
	// state.
	StartedAt *time.Time

	// The status of the BatchPrediction , which can be one of the following values:
	//   - PENDING - Amazon Machine Learning (Amazon ML) submitted a request to
	//   generate batch predictions.
	//   - INPROGRESS - The batch predictions are in progress.
	//   - FAILED - The request to perform a batch prediction did not run to
	//   completion. It is not usable.
	//   - COMPLETED - The batch prediction process completed successfully.
	//   - DELETED - The BatchPrediction is marked as deleted. It is not usable.
	Status types.EntityStatus

	// The number of total records that Amazon Machine Learning saw while processing
	// the BatchPrediction .
	TotalRecordCount *int64

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a GetBatchPrediction operation and describes a BatchPrediction .

type GetDataSourceInput

type GetDataSourceInput struct {

	// The ID assigned to the DataSource at creation.
	//
	// This member is required.
	DataSourceId *string

	// Specifies whether the GetDataSource operation should return DataSourceSchema .
	// If true, DataSourceSchema is returned. If false, DataSourceSchema is not
	// returned.
	Verbose bool
	// contains filtered or unexported fields
}

type GetDataSourceOutput

type GetDataSourceOutput struct {

	// The parameter is true if statistics need to be generated from the observation
	// data.
	ComputeStatistics bool

	// The approximate CPU time in milliseconds that Amazon Machine Learning spent
	// processing the DataSource , normalized and scaled on computation resources.
	// ComputeTime is only available if the DataSource is in the COMPLETED state and
	// the ComputeStatistics is set to true.
	ComputeTime *int64

	// The time that the DataSource was created. The time is expressed in epoch time.
	CreatedAt *time.Time

	// The AWS user account from which the DataSource was created. The account type
	// can be either an AWS root account or an AWS Identity and Access Management (IAM)
	// user account.
	CreatedByIamUser *string

	// The location of the data file or directory in Amazon Simple Storage Service
	// (Amazon S3).
	DataLocationS3 *string

	// A JSON string that represents the splitting and rearrangement requirement used
	// when this DataSource was created.
	DataRearrangement *string

	// The total size of observations in the data files.
	DataSizeInBytes *int64

	// The ID assigned to the DataSource at creation. This value should be identical
	// to the value of the DataSourceId in the request.
	DataSourceId *string

	// The schema used by all of the data files of this DataSource . Note: This
	// parameter is provided as part of the verbose format.
	DataSourceSchema *string

	// The epoch time when Amazon Machine Learning marked the DataSource as COMPLETED
	// or FAILED . FinishedAt is only available when the DataSource is in the COMPLETED
	// or FAILED state.
	FinishedAt *time.Time

	// The time of the most recent edit to the DataSource . The time is expressed in
	// epoch time.
	LastUpdatedAt *time.Time

	// A link to the file containing logs of CreateDataSourceFrom* operations.
	LogUri *string

	// The user-supplied description of the most recent details about creating the
	// DataSource .
	Message *string

	// A user-supplied name or description of the DataSource .
	Name *string

	// The number of data files referenced by the DataSource .
	NumberOfFiles *int64

	// The datasource details that are specific to Amazon RDS.
	RDSMetadata *types.RDSMetadata

	// Describes the DataSource details specific to Amazon Redshift.
	RedshiftMetadata *types.RedshiftMetadata

	// The Amazon Resource Name (ARN) of an AWS IAM Role (https://docs.aws.amazon.com/IAM/latest/UserGuide/roles-toplevel.html#roles-about-termsandconcepts)
	// , such as the following: arn:aws:iam::account:role/rolename.
	RoleARN *string

	// The epoch time when Amazon Machine Learning marked the DataSource as
	// INPROGRESS . StartedAt isn't available if the DataSource is in the PENDING state.
	StartedAt *time.Time

	// The current status of the DataSource . This element can have one of the
	// following values:
	//   - PENDING - Amazon ML submitted a request to create a DataSource .
	//   - INPROGRESS - The creation process is underway.
	//   - FAILED - The request to create a DataSource did not run to completion. It is
	//   not usable.
	//   - COMPLETED - The creation process completed successfully.
	//   - DELETED - The DataSource is marked as deleted. It is not usable.
	Status types.EntityStatus

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a GetDataSource operation and describes a DataSource .
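A minimal sketch of fetching a DataSource together with its schema, assuming an existing client; the data source ID is a placeholder, and Verbose must be true for DataSourceSchema to be populated.

package mlexamples

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
)

// showDataSourceSchema fetches a DataSource and prints its status and schema.
func showDataSourceSchema(ctx context.Context, client *machinelearning.Client, id string) error {
	out, err := client.GetDataSource(ctx, &machinelearning.GetDataSourceInput{
		DataSourceId: aws.String(id),
		Verbose:      true, // required for DataSourceSchema to be returned
	})
	if err != nil {
		return err
	}
	fmt.Println("status:", out.Status)
	fmt.Println("schema:", aws.ToString(out.DataSourceSchema))
	return nil
}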

type GetEvaluationInput

type GetEvaluationInput struct {

	// The ID of the Evaluation to retrieve. The evaluation of each MLModel is
	// recorded and cataloged. The ID provides the means to access the information.
	//
	// This member is required.
	EvaluationId *string
	// contains filtered or unexported fields
}

type GetEvaluationOutput

type GetEvaluationOutput struct {

	// The approximate CPU time in milliseconds that Amazon Machine Learning spent
	// processing the Evaluation , normalized and scaled on computation resources.
	// ComputeTime is only available if the Evaluation is in the COMPLETED state.
	ComputeTime *int64

	// The time that the Evaluation was created. The time is expressed in epoch time.
	CreatedAt *time.Time

	// The AWS user account that invoked the evaluation. The account type can be
	// either an AWS root account or an AWS Identity and Access Management (IAM) user
	// account.
	CreatedByIamUser *string

	// The DataSource used for this evaluation.
	EvaluationDataSourceId *string

	// The evaluation ID, which is the same as the EvaluationId in the request.
	EvaluationId *string

	// The epoch time when Amazon Machine Learning marked the Evaluation as COMPLETED
	// or FAILED . FinishedAt is only available when the Evaluation is in the COMPLETED
	// or FAILED state.
	FinishedAt *time.Time

	// The location of the data file or directory in Amazon Simple Storage Service
	// (Amazon S3).
	InputDataLocationS3 *string

	// The time of the most recent edit to the Evaluation . The time is expressed in
	// epoch time.
	LastUpdatedAt *time.Time

	// A link to the file that contains logs of the CreateEvaluation operation.
	LogUri *string

	// The ID of the MLModel that was the focus of the evaluation.
	MLModelId *string

	// A description of the most recent details about evaluating the MLModel .
	Message *string

	// A user-supplied name or description of the Evaluation .
	Name *string

	// Measurements of how well the MLModel performed using observations referenced by
	// the DataSource . One of the following metric is returned based on the type of
	// the MLModel :
	//   - BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to
	//   measure performance.
	//   - RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE)
	//   technique to measure performance. RMSE measures the difference between predicted
	//   and actual values for a single variable.
	//   - MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to
	//   measure performance.
	// For more information about performance metrics, please see the Amazon Machine
	// Learning Developer Guide (https://docs.aws.amazon.com/machine-learning/latest/dg) .
	PerformanceMetrics *types.PerformanceMetrics

	// The epoch time when Amazon Machine Learning marked the Evaluation as
	// INPROGRESS . StartedAt isn't available if the Evaluation is in the PENDING state.
	StartedAt *time.Time

	// The status of the evaluation. This element can have one of the following
	// values:
	//   - PENDING - Amazon Machine Learning (Amazon ML) submitted a request to
	//   evaluate an MLModel .
	//   - INPROGRESS - The evaluation is underway.
	//   - FAILED - The request to evaluate an MLModel did not run to completion. It is
	//   not usable.
	//   - COMPLETED - The evaluation process completed successfully.
	//   - DELETED - The Evaluation is marked as deleted. It is not usable.
	Status types.EntityStatus

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a GetEvaluation operation and describes an Evaluation .
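A hedged sketch of reading the performance metric from a finished evaluation, assuming types.PerformanceMetrics exposes its metric name/value pairs through a Properties map of strings; the evaluation ID is a placeholder.

package mlexamples

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
)

// printEvaluationMetrics prints the metric (BinaryAUC, RegressionRMSE, or
// MulticlassAvgFScore) recorded for a completed Evaluation.
func printEvaluationMetrics(ctx context.Context, client *machinelearning.Client, evaluationID string) error {
	out, err := client.GetEvaluation(ctx, &machinelearning.GetEvaluationInput{
		EvaluationId: aws.String(evaluationID),
	})
	if err != nil {
		return err
	}
	if out.PerformanceMetrics == nil {
		fmt.Println("no metrics yet; status:", out.Status)
		return nil
	}
	// Properties is assumed to map the metric name to its value as a string.
	for name, value := range out.PerformanceMetrics.Properties {
		fmt.Printf("%s = %s\n", name, value)
	}
	return nil
}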

type GetMLModelInput

type GetMLModelInput struct {

	// The ID assigned to the MLModel at creation.
	//
	// This member is required.
	MLModelId *string

	// Specifies whether the GetMLModel operation should return Recipe . If true,
	// Recipe is returned. If false, Recipe is not returned.
	Verbose bool
	// contains filtered or unexported fields
}

type GetMLModelOutput

type GetMLModelOutput struct {

	// The approximate CPU time in milliseconds that Amazon Machine Learning spent
	// processing the MLModel , normalized and scaled on computation resources.
	// ComputeTime is only available if the MLModel is in the COMPLETED state.
	ComputeTime *int64

	// The time that the MLModel was created. The time is expressed in epoch time.
	CreatedAt *time.Time

	// The AWS user account from which the MLModel was created. The account type can
	// be either an AWS root account or an AWS Identity and Access Management (IAM)
	// user account.
	CreatedByIamUser *string

	// The current endpoint of the MLModel
	EndpointInfo *types.RealtimeEndpointInfo

	// The epoch time when Amazon Machine Learning marked the MLModel as COMPLETED or
	// FAILED . FinishedAt is only available when the MLModel is in the COMPLETED or
	// FAILED state.
	FinishedAt *time.Time

	// The location of the data file or directory in Amazon Simple Storage Service
	// (Amazon S3).
	InputDataLocationS3 *string

	// The time of the most recent edit to the MLModel . The time is expressed in epoch
	// time.
	LastUpdatedAt *time.Time

	// A link to the file that contains logs of the CreateMLModel operation.
	LogUri *string

	// The MLModel ID, which is the same as the MLModelId in the request.
	MLModelId *string

	// Identifies the MLModel category. The following are the available types:
	//   - REGRESSION -- Produces a numeric result. For example, "What price should a
	//   house be listed at?"
	//   - BINARY -- Produces one of two possible results. For example, "Is this an
	//   e-commerce website?"
	//   - MULTICLASS -- Produces one of several possible results. For example, "Is
	//   this a HIGH, LOW or MEDIUM risk trade?"
	MLModelType types.MLModelType

	// A description of the most recent details about accessing the MLModel .
	Message *string

	// A user-supplied name or description of the MLModel .
	Name *string

	// The recipe to use when training the MLModel . The Recipe provides detailed
	// information about the observation data to use during training, and manipulations
	// to perform on the observation data during training. Note: This parameter is
	// provided as part of the verbose format.
	Recipe *string

	// The schema used by all of the data files referenced by the DataSource . Note:
	// This parameter is provided as part of the verbose format.
	Schema *string

	// The scoring threshold is used in binary classification MLModel models. It marks
	// the boundary between a positive prediction and a negative prediction. Output
	// values greater than or equal to the threshold receive a positive result from the
	// MLModel, such as true . Output values less than the threshold receive a negative
	// response from the MLModel, such as false .
	ScoreThreshold *float32

	// The time of the most recent edit to the ScoreThreshold . The time is expressed
	// in epoch time.
	ScoreThresholdLastUpdatedAt *time.Time

	// The size of the MLModel , in bytes (a 64-bit signed integer).
	SizeInBytes *int64

	// The epoch time when Amazon Machine Learning marked the MLModel as INPROGRESS .
	// StartedAt isn't available if the MLModel is in the PENDING state.
	StartedAt *time.Time

	// The current status of the MLModel . This element can have one of the following
	// values:
	//   - PENDING - Amazon Machine Learning (Amazon ML) submitted a request to
	//   describe a MLModel .
	//   - INPROGRESS - The request is processing.
	//   - FAILED - The request did not run to completion. The ML model isn't usable.
	//   - COMPLETED - The request completed successfully.
	//   - DELETED - The MLModel is marked as deleted. It isn't usable.
	Status types.EntityStatus

	// The ID of the training DataSource .
	TrainingDataSourceId *string

	// A list of the training parameters in the MLModel . The list is implemented as a
	// map of key-value pairs. The following is the current set of training parameters:
	//
	//   - sgd.maxMLModelSizeInBytes - The maximum allowed size of the model. Depending
	//   on the input data, the size of the model might affect its performance. The value
	//   is an integer that ranges from 100000 to 2147483648 . The default value is
	//   33554432 .
	//   - sgd.maxPasses - The number of times that the training process traverses the
	//   observations to build the MLModel . The value is an integer that ranges from 1
	//   to 10000 . The default value is 10 .
	//   - sgd.shuffleType - Whether Amazon ML shuffles the training data. Shuffling
	//   data improves a model's ability to find the optimal solution for a variety of
	//   data types. The valid values are auto and none . The default value is none .
	//   We strongly recommend that you shuffle your data.
	//   - sgd.l1RegularizationAmount - The coefficient regularization L1 norm. It
	//   controls overfitting the data by penalizing large coefficients. This tends to
	//   drive coefficients to zero, resulting in a sparse feature set. If you use this
	//   parameter, start by specifying a small value, such as 1.0E-08 . The value is a
	//   double that ranges from 0 to MAX_DOUBLE . The default is to not use L1
	//   normalization. This parameter can't be used when L2 is specified. Use this
	//   parameter sparingly.
	//   - sgd.l2RegularizationAmount - The coefficient regularization L2 norm. It
	//   controls overfitting the data by penalizing large coefficients. This tends to
	//   drive coefficients to small, nonzero values. If you use this parameter, start by
	//   specifying a small value, such as 1.0E-08 . The value is a double that ranges
	//   from 0 to MAX_DOUBLE . The default is to not use L2 normalization. This
	//   parameter can't be used when L1 is specified. Use this parameter sparingly.
	TrainingParameters map[string]string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of a GetMLModel operation, and provides detailed information about a MLModel .

type HTTPClient

type HTTPClient interface {
	Do(*http.Request) (*http.Response, error)
}

type HTTPSignerV4

type HTTPSignerV4 interface {
	SignHTTP(ctx context.Context, credentials aws.Credentials, r *http.Request, payloadHash string, service string, region string, signingTime time.Time, optFns ...func(*v4.SignerOptions)) error
}

type MLModelAvailableWaiter added in v1.3.0

type MLModelAvailableWaiter struct {
	// contains filtered or unexported fields
}

MLModelAvailableWaiter defines the waiters for MLModelAvailable

func NewMLModelAvailableWaiter added in v1.3.0

func NewMLModelAvailableWaiter(client DescribeMLModelsAPIClient, optFns ...func(*MLModelAvailableWaiterOptions)) *MLModelAvailableWaiter

NewMLModelAvailableWaiter constructs a MLModelAvailableWaiter.

func (*MLModelAvailableWaiter) Wait added in v1.3.0

func (w *MLModelAvailableWaiter) Wait(ctx context.Context, params *DescribeMLModelsInput, maxWaitDur time.Duration, optFns ...func(*MLModelAvailableWaiterOptions)) error

Wait calls the waiter function for MLModelAvailable waiter. The maxWaitDur is the maximum wait duration the waiter will wait. The maxWaitDur is required and must be greater than zero.

func (*MLModelAvailableWaiter) WaitForOutput added in v1.9.0

func (w *MLModelAvailableWaiter) WaitForOutput(ctx context.Context, params *DescribeMLModelsInput, maxWaitDur time.Duration, optFns ...func(*MLModelAvailableWaiterOptions)) (*DescribeMLModelsOutput, error)

WaitForOutput calls the waiter function for MLModelAvailable waiter and returns the output of the successful operation. The maxWaitDur is the maximum wait duration the waiter will wait. The maxWaitDur is required and must be greater than zero.

type MLModelAvailableWaiterOptions added in v1.3.0

type MLModelAvailableWaiterOptions struct {

	// Set of options to modify how an operation is invoked. These apply to all
	// operations invoked for this client. Use functional options on operation call to
	// modify this list for per operation behavior.
	//
	// Passing options here is functionally equivalent to passing values to this
	// config's ClientOptions field that extend the inner client's APIOptions directly.
	APIOptions []func(*middleware.Stack) error

	// Functional options to be passed to all operations invoked by this client.
	//
	// Function values that modify the inner APIOptions are applied after the waiter
	// config's own APIOptions modifiers.
	ClientOptions []func(*Options)

	// MinDelay is the minimum amount of time to delay between retries. If unset,
	// MLModelAvailableWaiter will use default minimum delay of 30 seconds. Note that
	// MinDelay must resolve to a value lesser than or equal to the MaxDelay.
	MinDelay time.Duration

	// MaxDelay is the maximum amount of time to delay between retries. If unset or
	// set to zero, MLModelAvailableWaiter will use default max delay of 120 seconds.
	// Note that MaxDelay must resolve to value greater than or equal to the MinDelay.
	MaxDelay time.Duration

	// LogWaitAttempts is used to enable logging for waiter retry attempts
	LogWaitAttempts bool

	// Retryable is a function that can be used to override the service-defined
	// waiter behavior based on operation output or the returned error. The waiter
	// uses this function to decide whether a state is retryable or terminal. By
	// default, service-modeled logic populates this option, so it can be used to
	// define a custom waiter state with fall-back to the service-modeled waiter
	// state mutators. The function returns an error for a failure state; for a
	// retry state it returns true and a nil error, and for a success state it
	// returns false and a nil error.
	Retryable func(context.Context, *DescribeMLModelsInput, *DescribeMLModelsOutput, error) (bool, error)
}

MLModelAvailableWaiterOptions are waiter options for MLModelAvailableWaiter
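A hedged sketch of tuning the waiter through these options and using WaitForOutput to get the final DescribeMLModels page back; the delays and the 30-minute cap are illustrative.

package mlexamples

import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
)

// waitForModels waits until the MLModels matched by params are available and
// returns the final DescribeMLModels page, with waiter polling tuned via options.
func waitForModels(ctx context.Context, client *machinelearning.Client,
	params *machinelearning.DescribeMLModelsInput) (*machinelearning.DescribeMLModelsOutput, error) {

	waiter := machinelearning.NewMLModelAvailableWaiter(client,
		func(o *machinelearning.MLModelAvailableWaiterOptions) {
			o.MinDelay = 15 * time.Second // poll no more often than this
			o.MaxDelay = 2 * time.Minute  // back off no further than this
			o.LogWaitAttempts = true      // log each retry attempt
		})

	return waiter.WaitForOutput(ctx, params, 30*time.Minute)
}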

type Options

type Options struct {
	// Set of options to modify how an operation is invoked. These apply to all
	// operations invoked for this client. Use functional options on operation call to
	// modify this list for per operation behavior.
	APIOptions []func(*middleware.Stack) error

	// The optional application specific identifier appended to the User-Agent header.
	AppID string

	// This endpoint will be given as input to an EndpointResolverV2. It is used for
	// providing a custom base endpoint that is subject to modifications by the
	// processing EndpointResolverV2.
	BaseEndpoint *string

	// Configures the events that will be sent to the configured logger.
	ClientLogMode aws.ClientLogMode

	// The credentials object to use when signing requests.
	Credentials aws.CredentialsProvider

	// The configuration DefaultsMode that the SDK should use when constructing the
	// clients initial default settings.
	DefaultsMode aws.DefaultsMode

	// The endpoint options to be used when attempting to resolve an endpoint.
	EndpointOptions EndpointResolverOptions

	// The service endpoint resolver.
	//
	// Deprecated: EndpointResolver and WithEndpointResolver. Providing a
	// value for this field will likely prevent you from using any endpoint-related
	// service features released after the introduction of EndpointResolverV2 and
	// BaseEndpoint. To migrate an EndpointResolver implementation that uses a custom
	// endpoint, set the client option BaseEndpoint instead.
	EndpointResolver EndpointResolver

	// Resolves the endpoint used for a particular service operation. This should be
	// used over the deprecated EndpointResolver.
	EndpointResolverV2 EndpointResolverV2

	// Signature Version 4 (SigV4) Signer
	HTTPSignerV4 HTTPSignerV4

	// The logger writer interface to write logging messages to.
	Logger logging.Logger

	// The region to send requests to. (Required)
	Region string

	// RetryMaxAttempts specifies the maximum number attempts an API client will call
	// an operation that fails with a retryable error. A value of 0 is ignored, and
	// will not be used to configure the API client created default retryer, or modify
	// per operation call's retry max attempts. If specified in an operation call's
	// functional options with a value that is different than the constructed client's
	// Options, the Client's Retryer will be wrapped to use the operation's specific
	// RetryMaxAttempts value.
	RetryMaxAttempts int

	// RetryMode specifies the retry mode the API client will be created with, if
	// Retryer option is not also specified. When creating a new API Clients this
	// member will only be used if the Retryer Options member is nil. This value will
	// be ignored if Retryer is not nil. Currently does not support per operation call
	// overrides, may in the future.
	RetryMode aws.RetryMode

	// Retryer guides how HTTP requests should be retried in case of recoverable
	// failures. When nil the API client will use a default retryer. The kind of
	// default retry created by the API client can be changed with the RetryMode
	// option.
	Retryer aws.Retryer

	// The RuntimeEnvironment configuration, only populated if the DefaultsMode is set
	// to DefaultsModeAuto and is initialized using config.LoadDefaultConfig . You
	// should not populate this structure programmatically, or rely on the values here
	// within your applications.
	RuntimeEnvironment aws.RuntimeEnvironment

	// The HTTP client to invoke API calls with. Defaults to client's default HTTP
	// implementation if nil.
	HTTPClient HTTPClient

	// The auth scheme resolver which determines how to authenticate for each
	// operation.
	AuthSchemeResolver AuthSchemeResolver

	// The list of auth schemes supported by the client.
	AuthSchemes []smithyhttp.AuthScheme
	// contains filtered or unexported fields
}

func (Options) Copy

func (o Options) Copy() Options

Copy creates a clone where the APIOptions list is deep copied.

func (Options) GetIdentityResolver added in v1.20.2

func (o Options) GetIdentityResolver(schemeID string) smithyauth.IdentityResolver
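A minimal sketch of overriding a few of these Options at client construction, assuming machinelearning.NewFromConfig as the constructor; the region, retry settings, and application ID are illustrative.

package mlexamples

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
)

// newTunedClient builds a client with a few Options overridden per client.
func newTunedClient(ctx context.Context) *machinelearning.Client {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	return machinelearning.NewFromConfig(cfg, func(o *machinelearning.Options) {
		o.Region = "us-east-1"              // required if not already set in cfg
		o.RetryMaxAttempts = 5              // retry retryable failures up to 5 times
		o.RetryMode = aws.RetryModeAdaptive // adaptive client-side rate limiting
		o.AppID = "my-app"                  // appended to the User-Agent header
	})
}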

type PredictInput

type PredictInput struct {

	// A unique identifier of the MLModel .
	//
	// This member is required.
	MLModelId *string

	// This member is required.
	PredictEndpoint *string

	// A map of variable name-value pairs that represent an observation.
	//
	// This member is required.
	Record map[string]string
	// contains filtered or unexported fields
}

type PredictOutput

type PredictOutput struct {

	// The output from a Predict operation:
	//   - Details - Contains the following attributes:
	//   DetailsAttributes.PREDICTIVE_MODEL_TYPE - REGRESSION | BINARY | MULTICLASS
	//   DetailsAttributes.ALGORITHM - SGD
	//   - PredictedLabel - Present for either a BINARY or MULTICLASS MLModel request.
	//   - PredictedScores - Contains the raw classification score corresponding to
	//   each label.
	//   - PredictedValue - Present for a REGRESSION MLModel request.
	Prediction *types.Prediction

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}
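A hedged sketch of a real-time Predict call, assuming the PredictedLabel and PredictedValue fields on types.Prediction; the model ID, the endpoint URL (the model's real-time endpoint), and the record keys are placeholders.

package mlexamples

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
)

// predictOne sends a single observation to a model's real-time endpoint.
// endpointURL is the real-time endpoint reported for the MLModel.
func predictOne(ctx context.Context, client *machinelearning.Client, modelID, endpointURL string) error {
	out, err := client.Predict(ctx, &machinelearning.PredictInput{
		MLModelId:       aws.String(modelID),
		PredictEndpoint: aws.String(endpointURL),
		Record: map[string]string{ // one observation: variable name -> value
			"feature1": "3.5",
			"feature2": "red",
		},
	})
	if err != nil {
		return err
	}
	p := out.Prediction
	if p.PredictedLabel != nil { // BINARY / MULTICLASS models
		fmt.Println("label:", aws.ToString(p.PredictedLabel))
	}
	if p.PredictedValue != nil { // REGRESSION models
		fmt.Println("value:", *p.PredictedValue)
	}
	return nil
}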

type ResolveEndpoint

type ResolveEndpoint struct {
	Resolver EndpointResolver
	Options  EndpointResolverOptions
}

func (*ResolveEndpoint) HandleSerialize

func (*ResolveEndpoint) ID

func (*ResolveEndpoint) ID() string

type UpdateBatchPredictionInput

type UpdateBatchPredictionInput struct {

	// The ID assigned to the BatchPrediction during creation.
	//
	// This member is required.
	BatchPredictionId *string

	// A new user-supplied name or description of the BatchPrediction .
	//
	// This member is required.
	BatchPredictionName *string
	// contains filtered or unexported fields
}

type UpdateBatchPredictionOutput

type UpdateBatchPredictionOutput struct {

	// The ID assigned to the BatchPrediction during creation. This value should be
	// identical to the value of the BatchPredictionId in the request.
	BatchPredictionId *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of an UpdateBatchPrediction operation. You can see the updated content by using the GetBatchPrediction operation.

type UpdateDataSourceInput

type UpdateDataSourceInput struct {

	// The ID assigned to the DataSource during creation.
	//
	// This member is required.
	DataSourceId *string

	// A new user-supplied name or description of the DataSource that will replace the
	// current description.
	//
	// This member is required.
	DataSourceName *string
	// contains filtered or unexported fields
}

type UpdateDataSourceOutput

type UpdateDataSourceOutput struct {

	// The ID assigned to the DataSource during creation. This value should be
	// identical to the value of the DataSourceID in the request.
	DataSourceId *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of an UpdateDataSource operation. You can see the updated content by using the GetDataSource operation.

type UpdateEvaluationInput

type UpdateEvaluationInput struct {

	// The ID assigned to the Evaluation during creation.
	//
	// This member is required.
	EvaluationId *string

	// A new user-supplied name or description of the Evaluation that will replace the
	// current content.
	//
	// This member is required.
	EvaluationName *string
	// contains filtered or unexported fields
}

type UpdateEvaluationOutput

type UpdateEvaluationOutput struct {

	// The ID assigned to the Evaluation during creation. This value should be
	// identical to the value of the EvaluationId in the request.
	EvaluationId *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of an UpdateEvaluation operation. You can see the updated content by using the GetEvaluation operation.

type UpdateMLModelInput

type UpdateMLModelInput struct {

	// The ID assigned to the MLModel during creation.
	//
	// This member is required.
	MLModelId *string

	// A user-supplied name or description of the MLModel .
	MLModelName *string

	// The ScoreThreshold used in binary classification MLModel that marks the
	// boundary between a positive prediction and a negative prediction. Output values
	// greater than or equal to the ScoreThreshold receive a positive result from the
	// MLModel , such as true . Output values less than the ScoreThreshold receive a
	// negative response from the MLModel , such as false .
	ScoreThreshold *float32
	// contains filtered or unexported fields
}

type UpdateMLModelOutput

type UpdateMLModelOutput struct {

	// The ID assigned to the MLModel during creation. This value should be identical
	// to the value of the MLModelID in the request.
	MLModelId *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata
	// contains filtered or unexported fields
}

Represents the output of an UpdateMLModel operation. You can see the updated content by using the GetMLModel operation.
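A minimal sketch of moving a binary model's score threshold with UpdateMLModel, assuming an existing client; the model ID and the 0.75 threshold are illustrative.

package mlexamples

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/machinelearning"
)

// raiseScoreThreshold moves the positive/negative cut-off of a binary MLModel.
func raiseScoreThreshold(ctx context.Context, client *machinelearning.Client, modelID string) error {
	_, err := client.UpdateMLModel(ctx, &machinelearning.UpdateMLModelInput{
		MLModelId:      aws.String(modelID),
		ScoreThreshold: aws.Float32(0.75), // scores >= 0.75 are now labeled positive
	})
	return err
}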

Directories

Path Synopsis
internal/customizations
Package customizations provides customizations for the Machine Learning API client.
