timestream

package module
v0.9.17
Published: Apr 5, 2024 License: AGPL-3.0 Imports: 16 Imported by: 0

README

xk6-output-timestream

Output k6 results to AWS Timestream so that you can run a performant, low-cost load test.

Why?

If you're here, you've probably already chosen to use k6 and are interested in using an AWS serverless service. Together these give you the benefits of:

  • Performance at scale
  • Low cost
  • Great developer experience

For more information see the alternatives.

Using this extension lets you hook K6 up to AWS Timestream - plus you get a nice-looking Grafana dashboard 😉 based on the K6 Load Testing Results dashboard.

Example Grafana dashboard

Usage

This output is written as a K6 extension using xk6.

You can use this extension by either:

Configuration

Include the argument --out timestream when using the k6 run command - see the K6 docs.

For all configuration specific to this extension see the Config struct in config.go.

The key bits of config you'll need to set up are the following environment variables:

K6_TIMESTREAM_DATABASE_NAME
K6_TIMESTREAM_TABLE_NAME

You'll also need to set up your AWS credentials - see the guide on how to do this.

Tags Usage and Requirement

The Timestream record dimensions (see Timestream concepts) for each metric emitted by k6 are taken from any k6 tags that have non-empty values.

Every Timestream record requires at least one dimension when written, and k6 applies some default tags to metrics emitted by many core k6 JavaScript API objects such as HTTP requests, groups and checks. However, some metrics emitted in the global/test scope may not carry any default tags. If you do not define at least one custom tag at the topmost scope of your script to cover them - for example in an options object export - you will likely see the error "At least one dimension is required for a record." logged from Timestream. More information can be found in the k6 documentation, and an example of setting up tags can be found in the integration test script.
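
As a rough illustration of the mapping described above, here is a sketch of turning non-empty k6 tags into Timestream dimensions (dimensionsFromTags is a hypothetical helper, not the extension's actual code):

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/timestreamwrite/types"
)

// dimensionsFromTags turns each k6 tag with a non-empty value into one
// Timestream dimension; empty values are skipped because Timestream
// rejects them.
func dimensionsFromTags(tags map[string]string) []types.Dimension {
	dims := make([]types.Dimension, 0, len(tags))
	for name, value := range tags {
		if value == "" {
			continue
		}
		dims = append(dims, types.Dimension{
			Name:  aws.String(name),
			Value: aws.String(value),
		})
	}
	return dims
}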

Grafana Dashboard

An example dashboard is provided. You can use this dashboard by running make grafana-build grafana-run.

Development

I use VSCode for development, so this will be the best-supported editor. However, you should be able to use other IDEs. If you are using another IDE:

  1. The devcontainer Dockerfile ci target shows all the tools you need for a dev environment (e.g. for linting).
  2. There are suggested tools you can also use.

VSCode

The preferred way to develop using VSCode is to use the dev container feature. This will mean you have all the tools required and suggested for development.

If you do want to use different tools (e.g. you don't like the shell setup), create .devcontainer/tools.override.sh and base it off .devcontainer/tools.default.sh.

If you don't want to use dev containers, you'll need to make sure you install the tools from the devcontainer Dockerfile and the packages in suggested tools that are needed for the VSCode extensions.

Where to start for development

output.go contains the logic for converting from K6 metric samples to AWS Timestream records and then saving those records.

There are targets for different development tasks in the Makefile.

Architecture

Metric samples are passed from each of the K6 VUs to metricSamplesHandler. This converts them to the format that the Timestream SDK expects and holds on to them until it has 100 records to save (the max batch size for Timestream). It will then save these asynchronously by kicking off a new goroutine to perform the save.

The channel for receiving metric samples is closed at the end of the test and the left-over records are saved.

  graph TD;

    K6-VU1.AddMetricSamples--metric samples-->metricSamplesHandler
    K6-VU2.AddMetricSamples--metric samples-->metricSamplesHandler
    K6-VUN.AddMetricSamples--metric samples-->metricSamplesHandler

    metricSamplesHandler--have 100 samples?-->writeRecordsAsync
    metricSamplesHandler--shutting down?-->writeRecordsAsync

    writeRecordsAsync--new go routine-->writeRecords
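
A minimal sketch of this batching pattern, assuming the Config and WriteClient types documented below (the batcher type and its method names are illustrative, not the extension's actual implementation):

import (
	"context"
	"log"
	"sync"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/timestreamwrite"
	"github.com/aws/aws-sdk-go-v2/service/timestreamwrite/types"
)

// batcher buffers converted records and writes them in batches of 100,
// the maximum batch size Timestream accepts per WriteRecords call.
type batcher struct {
	client  WriteClient
	conf    Config
	pending []types.Record
	wg      sync.WaitGroup
}

// add appends converted records and flushes as soon as a full batch of
// 100 records has accumulated.
func (b *batcher) add(recs ...types.Record) {
	b.pending = append(b.pending, recs...)
	for len(b.pending) >= 100 {
		batch := append([]types.Record(nil), b.pending[:100]...)
		b.pending = b.pending[100:]
		b.writeRecordsAsync(batch)
	}
}

// writeRecordsAsync saves one batch on a new goroutine, as in the diagram above.
func (b *batcher) writeRecordsAsync(batch []types.Record) {
	b.wg.Add(1)
	go func() {
		defer b.wg.Done()
		_, err := b.client.WriteRecords(context.TODO(), &timestreamwrite.WriteRecordsInput{
			DatabaseName: aws.String(b.conf.DatabaseName),
			TableName:    aws.String(b.conf.TableName),
			Records:      batch,
		})
		if err != nil {
			log.Printf("writing records to Timestream: %v", err)
		}
	}()
}

// stop flushes any left-over records once the metric samples channel is
// closed at the end of the test, then waits for in-flight writes.
func (b *batcher) stop() {
	if len(b.pending) > 0 {
		b.writeRecordsAsync(b.pending)
		b.pending = nil
	}
	b.wg.Wait()
}
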
Testing
Integration

The integration tests work by creating a Timestream database and table, running a load test (with a built-in test script) and then checking the results.

  graph LR;
    Client--deploy-->Timestream;
    Client--build-->k6;
    Client--run-->k6;
    k6-->nginx-fake-api
    k6--write-->Timestream;
    Client--build-->Tests;
    Client--run-->Tests;
    Tests--query-->Timestream;
    Client--destroy-->Timestream;
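
As a sketch of what the query step can look like, the tests could count the written samples via the Timestream query API (the database, table and SQL string below are placeholders, not the actual assertions):

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/timestreamquery"
)

// queryHTTPReqCount is a hypothetical helper that counts the http_reqs
// samples written by the load test.
func queryHTTPReqCount(ctx context.Context) (string, error) {
	// Credentials and region come from the standard AWS config chain.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return "", err
	}
	client := timestreamquery.NewFromConfig(cfg)

	// COUNT(*) comes back as a single row with a single scalar value.
	out, err := client.Query(ctx, &timestreamquery.QueryInput{
		QueryString: aws.String(`SELECT COUNT(*) FROM "my-database"."my-table" WHERE measure_name = 'http_reqs'`),
	})
	if err != nil {
		return "", err
	}
	return aws.ToString(out.Rows[0].Data[0].ScalarValue), nil
}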

To run the integration tests you'll need to set up AWS credentials - see the guide on how to do this.

To deploy the Timestream database run make deploy-infra.

To run the tests (build, run and query steps above) run make test-integration. Note that you will need to build the k6 image first with make build-image.

To destroy the Timestream database run make destroy-infra.

Grafana

Testing of the Grafana dashboard is manual:

  1. export K6_ITERATIONS=40000 - set the number of iterations to a large value so you get a reasonable number of results.
  2. make deploy-infra - to deploy the infrastructure.
  3. make test-integration - to run the tests. These will likely fail as the number of iterations is not what the tests expect.
  4. make grafana-build grafana-run and browse to http://localhost:3000. From the dashboard you'll see the results come in. It should look like the dashboard near the top.
  5. make destroy-infra - to destroy the infrastructure once you're done testing.

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func New

func New(params output.Params) (output.Output, error)
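
New is the constructor that k6 resolves when you pass --out timestream. In an xk6 output extension this is typically wired up by registering it with k6's output registry; a sketch (the init function shown here is illustrative):

import "go.k6.io/k6/output"

func init() {
	// Makes `k6 run --out timestream ...` resolve to this package's New constructor.
	output.RegisterExtension("timestream", New)
}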

Types

type Config

type Config struct {
	Region       string `envconfig:"K6_TIMESTREAM_REGION"        json:"region"`
	DatabaseName string `envconfig:"K6_TIMESTREAM_DATABASE_NAME" json:"databaseName"`
	TableName    string `envconfig:"K6_TIMESTREAM_TABLE_NAME"    json:"tableName"`
}

func GetConsolidatedConfig

func GetConsolidatedConfig(
	jsonRawConf json.RawMessage,
	env map[string]string,
) (Config, error)

GetConsolidatedConfig combines {default config values + JSON config + environment vars config values}, and returns the final result.
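
A sketch of resolving the final configuration, assuming the environment values (listed last above) take precedence over the JSON config (the values shown are illustrative):

conf, err := GetConsolidatedConfig(
	json.RawMessage(`{"region": "eu-west-1", "databaseName": "from-json"}`),
	map[string]string{
		"K6_TIMESTREAM_DATABASE_NAME": "k6-results", // assumed to override the JSON value
		"K6_TIMESTREAM_TABLE_NAME":    "load-test",
	},
)
if err != nil {
	log.Fatalf("resolving config: %v", err)
}
// conf.Region, conf.DatabaseName and conf.TableName now hold the combined values.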

func NewConfig

func NewConfig() Config

type Output

type Output struct {
	// contains filtered or unexported fields
}

func (*Output) AddMetricSamples added in v0.7.0

func (o *Output) AddMetricSamples(samples []metrics.SampleContainer)

func (*Output) Description

func (o *Output) Description() string

func (*Output) Start

func (o *Output) Start() error

func (*Output) Stop

func (o *Output) Stop() error

type WriteClient added in v0.8.46

type WriteClient interface {
	WriteRecords(
		ctx context.Context,
		params *timestreamwrite.WriteRecordsInput,
		optFns ...func(*timestreamwrite.Options),
	) (*timestreamwrite.WriteRecordsOutput, error)
}
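
Because Output depends only on this interface, tests can swap in a stub instead of calling AWS; a minimal sketch (stubWriteClient is a hypothetical test double, not part of the package):

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/service/timestreamwrite"
)

// stubWriteClient records every batch it is asked to write instead of
// sending it to AWS Timestream.
type stubWriteClient struct {
	inputs []*timestreamwrite.WriteRecordsInput
}

func (s *stubWriteClient) WriteRecords(
	ctx context.Context,
	params *timestreamwrite.WriteRecordsInput,
	optFns ...func(*timestreamwrite.Options),
) (*timestreamwrite.WriteRecordsOutput, error) {
	s.inputs = append(s.inputs, params)
	return &timestreamwrite.WriteRecordsOutput{}, nil
}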
