Documentation ¶
Overview ¶
Package poplar provides a set of tools for running benchmarks and managing their results in Go.
Tools and Infrastructure ¶
The Report type defines infrastructure that any project can use to create and upload test results to a cedar data service, regardless of the execution model, without needing to write client code that interacts with cedar's RPC protocol.
Additionally, poplar defines some local-only RPC interfaces on top of the github.com/deciduosity/birch/ftdc and github.com/mongodb/ftdc/events packages to support generating data payloads in real time.
Report ¶
A Report structure, rendered as YAML, would generally resemble the following:
```yaml
project: <evergreen-project>
version: <evergreen-version>
variant: <evergreen-buildvariant>
task_name: <short evergreen task display name>
task_id: <unique evergreen task_id>
execution_number: <evergreen execution number>
bucket:
  api_key: <aws api key>
  api_secret: <aws api secret>
  api_token: <aws api token>
  region: <aws-region>
  name: <bucket name>
  prefix: <key prefix>
tests:
  - info:
      test_name: <local test name>
      trial: <integer for repeated execution>
      tags: [ "canary", <arbitrary>, <string>, <metadata> ]
      arguments:
        count: 1
        iterations: 1000
        # arbitrary test settings
    created_at: <timestamp>
    completed_at: <timestamp>
    artifacts:
      - bucket: <name>
        path: <test>/local_data/client_data.ftdc
        tags: [ "combined", "client" ]
        is_ftdc: true
        events_raw: true
        created_at: <timestamp>
        local_file: path/to/client_data.ftdc
      - bucket: <name>
        path: <test>/local_data/server_data.ftdc
        tags: [ "combined", "server" ]
        is_ftdc: true
        created_at: <timestamp>
        local_file: path/to/server_data.ftdc
    sub_tests: [] # subtests with the same test structure
```
See the documentation of the Report, Test, TestInfo, TestArtifact, and TestMetrics types for more information.
Benchmarks ¶
Poplar contains a toolkit for running benchmark tests and collecting rich intra-run data from those tests. Consider the following example:
```go
suite := BenchmarkSuite{
    {
        CaseName: "HelloWorld",
        Bench: func(ctx context.Context, r events.Recorder, count int) error {
            out := []string{}
            for i := 0; i < count; i++ {
                startAt := time.Now()
                r.Begin()
                val := "Hello World"
                out = append(out, val)
                r.End(time.Since(startAt))
                r.IncOps(1)
                r.IncSize(len(val))
            }
            return nil
        },
        MinRuntime:    5 * time.Second,
        MaxRuntime:    time.Minute,
        MinIterations: 1000000,
        Count:         1000,
        Recorder:      poplar.RecorderPerf,
    },
}

results, err := suite.Run(ctx, "poplar_raw")
if err != nil {
    grip.Warning(err)
}
grip.Info(results.Composer()) // log results

// Optionally send results to cedar.
report := &Report{
    Project:  "poplar test",
    Version:  "<evg-version>",
    Variant:  "arch-linux",
    TaskName: "perf",
    TaskID:   "<evg task>",
    BucketConf: BucketConfiguration{
        APIKey:    "<key>",
        APISecret: "<secret>",
        APIToken:  "<token>",
        Name:      "poplarresults",
        Prefix:    "perf",
    },
    Tests: results.Export(),
}

var cc *grpc.ClientConn // set up service connection to cedar

if err := rpc.Upload(ctx, report, cc); err != nil {
    grip.EmergencyFatal(err) // exit
}
```
You can also run a benchmark suite using go's standard library, as in:
```go
var registry = NewRegistry() // create recorder infrastructure

func BenchmarkHelloWorldSuite(b *testing.B) {
    suite.Standard(registry)(b)
}
```
Each test in the suite is reported as a separate sub-benchmark.
Workloads ¶
In addition to suites, workloads provide a way to define concurrent and parallel workloads that execute multiple instances of a single test at a time. Workloads function like an extended case of suites.
```go
workload := &BenchmarkWorkload{
    WorkloadName: "HelloWorld",
    Instances:    10,
    Recorder:     poplar.RecorderPerf,
    Case:         &BenchmarkCase{},
}
```
You must specify either a single case or a list of sub-workloads. The executors for workloads run the groups of these tests in parallel, with the degree of parallelism controlled by the Instances value. When you specify a list of sub-workloads, poplar will execute a group of these workloads in parallel, but the workloads themselves are run sequentially, potentially with their own parallelism.
Benchmark suites and workloads both report data in the same format. You can also execute workloads using the Standard method, as with suites, using default Go methods for running benchmarks.
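As with suites, a workload can be driven by Go's standard benchmark harness. This sketch assumes the workload value from the example above:

```go
var registry = poplar.NewRegistry()

func BenchmarkHelloWorldWorkload(b *testing.B) {
    // Standard performs pre-flight validation and runs each
    // constituent case as a sub-benchmark.
    workload.Standard(registry)(b)
}
```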
Index ¶
- Constants
- func DialCedar(ctx context.Context, client *http.Client, opts DialCedarOptions) (*grpc.ClientConn, error)
- type Benchmark
- type BenchmarkCase
- func (c *BenchmarkCase) Name() string
- func (c *BenchmarkCase) Run(ctx context.Context, recorder events.Recorder) BenchmarkResult
- func (c *BenchmarkCase) SetBench(b Benchmark) *BenchmarkCase
- func (c *BenchmarkCase) SetCount(v int) *BenchmarkCase
- func (c *BenchmarkCase) SetDuration(dur time.Duration) *BenchmarkCase
- func (c *BenchmarkCase) SetIterationTimeout(dur time.Duration) *BenchmarkCase
- func (c *BenchmarkCase) SetIterations(v int) *BenchmarkCase
- func (c *BenchmarkCase) SetMaxDuration(dur time.Duration) *BenchmarkCase
- func (c *BenchmarkCase) SetMaxIterations(v int) *BenchmarkCase
- func (c *BenchmarkCase) SetName(n string) *BenchmarkCase
- func (c *BenchmarkCase) SetRecorder(r RecorderType) *BenchmarkCase
- func (c *BenchmarkCase) SetTimeout(dur time.Duration) *BenchmarkCase
- func (c *BenchmarkCase) Standard(registry *RecorderRegistry) func(*testing.B)
- func (c *BenchmarkCase) String() string
- func (c *BenchmarkCase) Validate() error
- type BenchmarkResult
- type BenchmarkResultGroup
- type BenchmarkSuite
- type BenchmarkWorkload
- func (w *BenchmarkWorkload) Add() *BenchmarkWorkload
- func (w *BenchmarkWorkload) Name() string
- func (w *BenchmarkWorkload) Run(ctx context.Context, prefix string) (BenchmarkResultGroup, error)
- func (w *BenchmarkWorkload) SetCase() *BenchmarkCase
- func (w *BenchmarkWorkload) SetInstances(i int) *BenchmarkWorkload
- func (w *BenchmarkWorkload) SetName(n string) *BenchmarkWorkload
- func (w *BenchmarkWorkload) SetRecorder(r RecorderType) *BenchmarkWorkload
- func (w *BenchmarkWorkload) SetTimeout(d time.Duration) *BenchmarkWorkload
- func (w *BenchmarkWorkload) Standard(registry *RecorderRegistry) func(*testing.B)
- func (w *BenchmarkWorkload) Timeout() time.Duration
- func (w *BenchmarkWorkload) Validate() error
- type BucketConfiguration
- type CreateOptions
- type CustomMetricsCollector
- type DialCedarOptions
- type EventsCollectorType
- type Recorder
- type RecorderRegistry
- func (r *RecorderRegistry) Close(key string) error
- func (r *RecorderRegistry) Create(key string, collOpts CreateOptions) (events.Recorder, error)
- func (r *RecorderRegistry) GetCollector(key string) (ftdc.Collector, bool)
- func (r *RecorderRegistry) GetCustomCollector(key string) (CustomMetricsCollector, bool)
- func (r *RecorderRegistry) GetEventsCollector(key string) (events.Collector, bool)
- func (r *RecorderRegistry) GetRecorder(key string) (events.Recorder, bool)
- func (r *RecorderRegistry) MakeBenchmark(bench *BenchmarkCase) func(*testing.B)
- func (r *RecorderRegistry) SetBenchRecorderPrefix(prefix string)
- type RecorderType
- type Report
- type ReportType
- type Test
- type TestArtifact
- type TestInfo
- type TestMetrics
Constants ¶
```go
const (
    RecorderPerf            RecorderType = "perf"
    RecorderPerfSingle                   = "perf-single"
    RecorderPerf100ms                    = "perf-grouped-100ms"
    RecorderPerf1s                       = "perf-grouped-1s"
    RecorderHistogramSingle              = "histogram-single"
    RecorderHistogram100ms               = "histogram-grouped-100ms"
    RecorderHistogram1s                  = "histogram-grouped-1s"
    CustomMetrics                        = "custom"
)
```
```go
const (
    EventsCollectorBasic            EventsCollectorType = "basic"
    EventsCollectorPassthrough                          = "passthrough"
    EventsCollectorSampling100                          = "sampling-100"
    EventsCollectorSampling1k                           = "sampling-1k"
    EventsCollectorSampling10k                          = "sampling-10k"
    EventsCollectorSampling100k                         = "sampling-100k"
    EventsCollectorRandomSampling50                     = "rand-sampling-50"
    EventsCollectorRandomSampling25                     = "rand-sampling-25"
    EventsCollectorRandomSampling10                     = "rand-sampling-10"
    EventsCollectorInterval100ms                        = "interval-100ms"
    EventsCollectorInterval1s                           = "interval-1s"
)
```
```go
const (
    ProjectEnv      = "project"
    VersionEnv      = "version_id"
    OrderEnv        = "revision_order_id"
    VariantEnv      = "build_variant"
    TaskNameEnv     = "task_name"
    ExecutionEnv    = "execution"
    MainlineEnv     = "is_patch"
    APIKeyEnv       = "API_KEY"
    APISecretEnv    = "API_SECRET"
    APITokenEnv     = "API_TOKEN"
    BucketNameEnv   = "BUCKET_NAME"
    BucketPrefixEnv = "BUCKET_PREFIX"
    BucketRegionEnv = "BUCKET_REGION"
)
```
Variables ¶
This section is empty.
Functions ¶
func DialCedar ¶
func DialCedar(ctx context.Context, client *http.Client, opts DialCedarOptions) (*grpc.ClientConn, error)
DialCedar is a convenience function for creating an RPC client connection with cedar via gRPC. It wraps the same function in aviation so that users do not have to vendor aviation.
Types ¶
type Benchmark ¶
Benchmark defines a function signature for running a benchmark test.
These functions take a context to support coarse timeouts, a Recorder instance to capture intra-test data, and a count that tells the test how many times the operation should run.
In general, functions should resemble the following:
```go
func(ctx context.Context, r events.Recorder, count int) error {
    ticks := 0
    for i := 0; i < count; i++ {
        startAt := time.Now()
        r.Begin()
        ticks += 4
        r.IncOps(4)
        r.End(time.Since(startAt))
    }
    return nil
}
```
type BenchmarkCase ¶
```go
type BenchmarkCase struct {
    CaseName         string
    Bench            Benchmark
    MinRuntime       time.Duration
    MaxRuntime       time.Duration
    Timeout          time.Duration
    IterationTimeout time.Duration
    Count            int
    MinIterations    int
    MaxIterations    int
    Recorder         RecorderType
}
```
BenchmarkCase describes a single benchmark and how to run it, including minimum and maximum runtimes and iteration counts.
With poplar's execution, via the Run method, cases execute until both the minimum runtime and iteration count are reached, and end as soon as either the maximum iteration count or maximum runtime is exceeded.
You can also use the Standard() function to convert the BenchmarkCase into a more conventional Go standard library Benchmark function.
You can construct BenchmarkCases either directly or using a fluent interface. The Validate method ensures that the case is well formed and sets default values in place of unhelpful zero values.
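For example, a case might be configured entirely through the fluent interface (a sketch; the benchmark body is illustrative):

```go
suite := BenchmarkSuite{}
suite.Add().
    SetName("FluentExample").
    SetBench(func(ctx context.Context, r events.Recorder, count int) error {
        for i := 0; i < count; i++ {
            startAt := time.Now()
            r.Begin()
            // ... operation under test ...
            r.End(time.Since(startAt))
            r.IncOps(1)
        }
        return nil
    }).
    SetDuration(10 * time.Second).
    SetIterations(1000)

// Validate fills in defaults and reports configuration problems.
if err := suite.Validate(); err != nil {
    grip.EmergencyFatal(err)
}
```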
func (*BenchmarkCase) Name ¶
func (c *BenchmarkCase) Name() string
Name returns either the CaseName value or the name of the symbol for the benchmark function. Use the CaseName field or the SetName method when you define the case as a function literal, or to override the function name.
func (*BenchmarkCase) Run ¶
func (c *BenchmarkCase) Run(ctx context.Context, recorder events.Recorder) BenchmarkResult
Run executes the benchmark, recording intra-run data to the recorder, and returns the populated result.
The benchmark will be run as many times as needed until both the minimum iteration and runtime requirements are satisfied, or until either the maximum runtime or the maximum iteration limit is exceeded.
If the test case returns an error, the runtime and iteration counts are not incremented and the test returns early. Similarly, if the context is canceled the test returns early. The tests themselves are passed a context that reflects the execution timeout, and they can choose to propagate that cancellation error.
func (*BenchmarkCase) SetBench ¶
func (c *BenchmarkCase) SetBench(b Benchmark) *BenchmarkCase
SetBench allows you to set the benchmark function, and is part of the BenchmarkCase's fluent interface.
func (*BenchmarkCase) SetCount ¶
func (c *BenchmarkCase) SetCount(v int) *BenchmarkCase
SetCount allows you to set the count value passed to the benchmark function, which should control the number of internal iterations, and is part of the BenchmarkCase's fluent interface.
If running as a standard library test, this value is ignored.
func (*BenchmarkCase) SetDuration ¶
func (c *BenchmarkCase) SetDuration(dur time.Duration) *BenchmarkCase
SetDuration sets the minimum runtime for the case, as part of the fluent interface. This method also sets a maximum runtime: 10x this value when the duration is under a minute, 2x this value when the duration is under ten minutes, and 1 minute greater than this value otherwise.
You can override the maximum duration separately, if these defaults do not make sense for your case.
The minimum and maximum runtime values are optional.
func (*BenchmarkCase) SetIterationTimeout ¶
func (c *BenchmarkCase) SetIterationTimeout(dur time.Duration) *BenchmarkCase
SetIterationTimeout describes the timeout set on the context passed to each individual iteration, and is part of the BenchmarkCase's fluent interface. It must be less than the total timeout.
See the validation function for information on the default value.
func (*BenchmarkCase) SetIterations ¶
func (c *BenchmarkCase) SetIterations(v int) *BenchmarkCase
SetIterations sets the minimum iterations for the case. It also sets the maximum number of iterations to 10x this value, which you can override with SetMaxIterations.
func (*BenchmarkCase) SetMaxDuration ¶
func (c *BenchmarkCase) SetMaxDuration(dur time.Duration) *BenchmarkCase
SetMaxDuration allows you to specify a maximum duration for the test, and is part of the BenchmarkCase's fluent interface. If the test has been running for more than this period of time, then it will stop running. If you do not specify a timeout, poplar execution will use some factor of the max duration.
func (*BenchmarkCase) SetMaxIterations ¶
func (c *BenchmarkCase) SetMaxIterations(v int) *BenchmarkCase
SetMaxIterations allows you to specify a maximum number of times that the test will execute, and is part of the BenchmarkCase's fluent interface. This setting is optional.
The number of iterations refers to the number of times that the test case executes; it is not passed to the benchmark function (i.e. it is distinct from the count).

The maximum number of iterations is ignored if the minimum runtime is not yet satisfied.
func (*BenchmarkCase) SetName ¶
func (c *BenchmarkCase) SetName(n string) *BenchmarkCase
SetName sets the case's name, overriding the symbol name if needed, and is part of the BenchmarkCase's fluent interface.
func (*BenchmarkCase) SetRecorder ¶
func (c *BenchmarkCase) SetRecorder(r RecorderType) *BenchmarkCase
SetRecorder overrides the default event recorder type, which changes the way that intra-run data is collected (for example, to use histogram data for longer runs), and is part of the BenchmarkCase's fluent interface.
func (*BenchmarkCase) SetTimeout ¶
func (c *BenchmarkCase) SetTimeout(dur time.Duration) *BenchmarkCase
SetTimeout sets the total timeout for the entire case, and is part of the BenchmarkCase's fluent interface. It must be greater than the iteration timeout.
See the validation function for information on the default value.
func (*BenchmarkCase) Standard ¶
func (c *BenchmarkCase) Standard(registry *RecorderRegistry) func(*testing.B)
Standard produces a standard library test function, and configures a recorder from the registry.
func (*BenchmarkCase) String ¶
func (c *BenchmarkCase) String() string
String satisfies fmt.Stringer and prints a string representation of the case and its values.
func (*BenchmarkCase) Validate ¶
func (c *BenchmarkCase) Validate() error
Validate checks the values of the case, setting default values when possible and returning errors if the settings would lead to an impossible execution.
If not set, Validate imposes the following defaults: Count is set to 1; Timeout is set to three times the maximum runtime (if set) or 10 minutes; IterationTimeout is set to twice the maximum runtime (if set) or five minutes.
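Concretely, a minimally configured case comes out of Validate with these derived values (a sketch; myBench is a placeholder Benchmark function and all other fields are assumed to be zero):

```go
c := new(BenchmarkCase).
    SetBench(myBench).
    SetDuration(30 * time.Second) // also sets MaxRuntime to 10x this (5 minutes)

if err := c.Validate(); err == nil {
    // Per the defaults above, Count is now 1, Timeout is
    // 3 * MaxRuntime (15 minutes), and IterationTimeout is
    // 2 * MaxRuntime (10 minutes).
}
```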
type BenchmarkResult ¶
```go
type BenchmarkResult struct {
    Name         string        `bson:"name" json:"name" yaml:"name"`
    Runtime      time.Duration `bson:"duration" json:"duration" yaml:"duration"`
    Count        int           `bson:"count" json:"count" yaml:"count"`
    Iterations   int           `bson:"iterations" json:"iterations" yaml:"iterations"`
    Workload     bool          `bson:"workload" json:"workload" yaml:"workload"`
    Instances    int           `bson:"instances,omitempty" json:"instances,omitempty" yaml:"instances,omitempty"`
    ArtifactPath string        `bson:"path" json:"path" yaml:"path"`
    StartAt      time.Time     `bson:"start_at" json:"start_at" yaml:"start_at"`
    CompletedAt  time.Time     `bson:"compleated_at" json:"compleated_at" yaml:"compleated_at"`
    Error        error         `bson:"-" json:"-" yaml:"-"`
}
```
BenchmarkResult contains data about the run of a specific test. The ArtifactPath is populated with a link to the intra-run data collected during the execution of the test.
func (*BenchmarkResult) Composer ¶
func (res *BenchmarkResult) Composer() message.Composer
Composer produces a grip/message.Composer implementation that allows for easy logging of a results object. The message's string form is the same as Report, but also includes a structured raw format.
func (*BenchmarkResult) Export ¶
func (res *BenchmarkResult) Export() Test
Export converts a benchmark result into a test structure to support integration with cedar.
func (*BenchmarkResult) Report ¶
func (res *BenchmarkResult) Report() string
Report returns a multi-line string, in the format of go test's verbose output, communicating the test's outcome.
type BenchmarkResultGroup ¶
type BenchmarkResultGroup []BenchmarkResult
BenchmarkResultGroup holds the result of a single suite of benchmarks, and provides several helper methods.
func (BenchmarkResultGroup) Composer ¶
func (res BenchmarkResultGroup) Composer() message.Composer
Composer returns a grip/message.Composer implementation that aggregates Composers from all of the results.
func (BenchmarkResultGroup) Export ¶
func (res BenchmarkResultGroup) Export() []Test
Export converts a group of test results into a slice of tests in preparation for uploading those results.
func (BenchmarkResultGroup) Report ¶
func (res BenchmarkResultGroup) Report() string
Report returns an aggregated report for all results.
type BenchmarkSuite ¶
type BenchmarkSuite []*BenchmarkCase
BenchmarkSuite is a convenience wrapper around a group of benchmark cases.
You can use the Standard() function to convert these benchmarks to a standard go library benchmark function.
func (*BenchmarkSuite) Add ¶
func (s *BenchmarkSuite) Add() *BenchmarkCase
Add creates and adds a new case to an existing suite and allows access to a fluent-style API for declaring and modifying cases.
func (BenchmarkSuite) Run ¶
func (s BenchmarkSuite) Run(ctx context.Context, prefix string) (BenchmarkResultGroup, error)
Run executes a suite of benchmarks, writing intra-run data to an FTDC-compressed events stream prefixed with the benchmark name. Run has continue-on-error semantics for test failures, but not for problems configuring the data collection infrastructure, which are likely to be file-system errors. Run continues running tests after a failure and aggregates all error messages.
The results data structure will always be populated even in the event of an error.
func (BenchmarkSuite) Standard ¶
func (s BenchmarkSuite) Standard(registry *RecorderRegistry) func(*testing.B)
Standard returns a go standard library benchmark function that you can use to run an entire suite. The same recorder instance is passed to each test case, which is run as a subtest.
func (BenchmarkSuite) Validate ¶
func (s BenchmarkSuite) Validate() error
Validate aggregates the validation of all constituent tests. The validation method for each case may also modify the case to set better defaults. Validate has continue-on-error semantics.
type BenchmarkWorkload ¶
```go
type BenchmarkWorkload struct {
    WorkloadName    string
    WorkloadTimeout *time.Duration
    Case            *BenchmarkCase
    Group           []*BenchmarkWorkload
    Instances       int
    Recorder        RecorderType
}
```
BenchmarkWorkload provides a way to express a more complex performance test that involves multiple instances of a benchmark running at the same time.
You can specify the workload as either a single benchmark case or an ordered list of sub-workloads; it is not valid to do both in the same workload instance.
If you specify a group of sub-workloads, poplar executes each sub-workload sequentially (each with however many instances of that workload are specified), with no inter-workload synchronization.
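A sub-workload group might look like the following sketch (readBench and writeBench are placeholder Benchmark functions):

```go
w := new(BenchmarkWorkload).SetName("MixedLoad")

readers := w.Add() // creates a sub-workload and unsets any case
readers.SetInstances(8).SetCase().SetName("ReadOp").SetBench(readBench)

writers := w.Add()
writers.SetInstances(2).SetCase().SetName("WriteOp").SetBench(writeBench)

// The two sub-workloads run sequentially; each runs its own
// instances in parallel.
results, err := w.Run(ctx, "workload_data")
```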
func (*BenchmarkWorkload) Add ¶
func (w *BenchmarkWorkload) Add() *BenchmarkWorkload
Add creates a new sub-workload, and adds it to the workload's group. Add also unsets the case, if set.
func (*BenchmarkWorkload) Name ¶
func (w *BenchmarkWorkload) Name() string
Name returns the name of the workload as defined or the name of the case if no name is defined.
func (*BenchmarkWorkload) Run ¶
func (w *BenchmarkWorkload) Run(ctx context.Context, prefix string) (BenchmarkResultGroup, error)
Run executes the workload, and has similar semantics to the BenchmarkSuite implementation.
func (*BenchmarkWorkload) SetCase ¶
func (w *BenchmarkWorkload) SetCase() *BenchmarkCase
SetCase creates a case for the workload to run, returning it for the caller to manipulate. This method also unsets the group.
func (*BenchmarkWorkload) SetInstances ¶
func (w *BenchmarkWorkload) SetInstances(i int) *BenchmarkWorkload
SetInstances makes it possible to set the Instances value of the workload in a chained context.
func (*BenchmarkWorkload) SetName ¶
func (w *BenchmarkWorkload) SetName(n string) *BenchmarkWorkload
SetName makes it possible to set the name of the workload in a chainable context.
func (*BenchmarkWorkload) SetRecorder ¶
func (w *BenchmarkWorkload) SetRecorder(r RecorderType) *BenchmarkWorkload
SetRecorder overrides the default event recorder type, which changes the way that intra-run data is collected (for example, to use histogram data for longer runs), and is part of the BenchmarkWorkload's fluent interface.
func (*BenchmarkWorkload) SetTimeout ¶
func (w *BenchmarkWorkload) SetTimeout(d time.Duration) *BenchmarkWorkload
SetTimeout allows you to define a timeout for the workload as a whole. Timeouts are not required, and the timeouts of sub-cases and sub-workloads are still respected. Additionally, the validation method requires that the timeout be greater than 1 millisecond.
func (*BenchmarkWorkload) Standard ¶
func (w *BenchmarkWorkload) Standard(registry *RecorderRegistry) func(*testing.B)
Standard produces a standard Go benchmark function from a poplar workload.
These invocations are not able to respect the top-level workload timeout, and *do* perform pre-flight workload validation.
func (*BenchmarkWorkload) Timeout ¶
func (w *BenchmarkWorkload) Timeout() time.Duration
Timeout returns the timeout for the workload, returning -1 when the timeout is unset, and the value otherwise.
func (*BenchmarkWorkload) Validate ¶
func (w *BenchmarkWorkload) Validate() error
Validate ensures that the workload is well formed. Additionally, it ensures that all cases and workload groups are valid and have the same recorder type defined.
type BucketConfiguration ¶
```go
type BucketConfiguration struct {
    APIKey    string `bson:"api_key" json:"api_key" yaml:"api_key"`
    APISecret string `bson:"api_secret" json:"api_secret" yaml:"api_secret"`
    APIToken  string `bson:"api_token" json:"api_token" yaml:"api_token"`
    Name      string `bson:"name" json:"name" yaml:"name"`
    Prefix    string `bson:"prefix" json:"prefix" yaml:"prefix"`
    Region    string `bson:"region" json:"region" yaml:"region"`
}
```
BucketConfiguration describes the configuration information for an AWS S3 bucket for uploading test artifacts for this report.
type CreateOptions ¶
```go
type CreateOptions struct {
    Path      string
    ChunkSize int
    Streaming bool
    Dynamic   bool
    Buffered  bool
    Recorder  RecorderType
    Events    EventsCollectorType
}
```
CreateOptions supports the use and creation of a collector.
type CustomMetricsCollector ¶
```go
type CustomMetricsCollector interface {
    Add(string, interface{}) error
    Dump() events.Custom
    Reset()
}
```
CustomMetricsCollector defines an interface for collecting metrics.
type DialCedarOptions ¶
type DialCedarOptions services.DialCedarOptions
DialCedarOptions describes the options for the DialCedar function. The base address defaults to `cedar.mongodb.com` and the RPC port to 7070. If a base address is provided the RPC port must also be provided. Username and password must always be provided. This aliases the same type in aviation in order to avoid users having to vendor aviation.
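A minimal connection sketch follows. The option field names shown here (BaseAddress, RPCPort, Username, Password) are assumptions based on the description above; check the aviation services.DialCedarOptions type for the exact fields in your version:

```go
// Hypothetical field names; verify against services.DialCedarOptions.
opts := poplar.DialCedarOptions{
    BaseAddress: "cedar.example.net", // omit to use the default, cedar.mongodb.com
    RPCPort:     "7070",              // required whenever a base address is set
    Username:    "<username>",        // always required
    Password:    "<password>",        // always required
}

conn, err := poplar.DialCedar(ctx, http.DefaultClient, opts)
if err != nil {
    grip.EmergencyFatal(err)
}
defer conn.Close()
```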
type EventsCollectorType ¶
type EventsCollectorType string
EventsCollectorType represents the collector strategy for events collector.
func (EventsCollectorType) Validate ¶
func (t EventsCollectorType) Validate() error
Validate the underlying events collector type.
type Recorder ¶
Recorder is an alias for events.Recorder, eliminating any dependency on ftdc for external users of poplar.
type RecorderRegistry ¶
type RecorderRegistry struct {
// contains filtered or unexported fields
}
RecorderRegistry caches instances of recorders.
func NewRegistry ¶
func NewRegistry() *RecorderRegistry
NewRegistry returns a new (empty) RecorderRegistry.
func (*RecorderRegistry) Close ¶
func (r *RecorderRegistry) Close(key string) error
Close flushes and closes the underlying recorder and collector and then removes it from the cache.
func (*RecorderRegistry) Create ¶
func (r *RecorderRegistry) Create(key string, collOpts CreateOptions) (events.Recorder, error)
Create builds a new collector of the given name, with the specified options controlling the collector type and configuration.
If the options specify a filename that already exists, then Create will return an error.
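A typical registry lifecycle, as a sketch:

```go
registry := poplar.NewRegistry()

// The path must not already exist, or Create returns an error.
recorder, err := registry.Create("my-test", poplar.CreateOptions{
    Path:     "my_test.ftdc",
    Recorder: poplar.RecorderPerf,
    Events:   poplar.EventsCollectorBasic,
})
if err != nil {
    grip.EmergencyFatal(err)
}

// ... pass recorder to the benchmark while it runs ...
_ = recorder

// Close flushes the recorder and collector and drops them from the cache.
if err := registry.Close("my-test"); err != nil {
    grip.EmergencyFatal(err)
}
```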
func (*RecorderRegistry) GetCollector ¶
func (r *RecorderRegistry) GetCollector(key string) (ftdc.Collector, bool)
GetCollector returns the collector instance for this key. It returns false when the collector does not exist or when the collector is not dynamic.
func (*RecorderRegistry) GetCustomCollector ¶
func (r *RecorderRegistry) GetCustomCollector(key string) (CustomMetricsCollector, bool)
GetCustomCollector returns the CustomMetricsCollector instance for this key. Returns false when the collector does not exist.
func (*RecorderRegistry) GetEventsCollector ¶
func (r *RecorderRegistry) GetEventsCollector(key string) (events.Collector, bool)
GetEventsCollector returns the events.Collector instance for this key. It returns false when the collector does not exist or when the collector is not an events.Collector.
func (*RecorderRegistry) GetRecorder ¶
func (r *RecorderRegistry) GetRecorder(key string) (events.Recorder, bool)
GetRecorder returns the Recorder instance for this key. Returns false when the recorder does not exist.
func (*RecorderRegistry) MakeBenchmark ¶
func (r *RecorderRegistry) MakeBenchmark(bench *BenchmarkCase) func(*testing.B)
MakeBenchmark configures a recorder to support executing a BenchmarkCase in the form of a standard library benchmarking format.
func (*RecorderRegistry) SetBenchRecorderPrefix ¶
func (r *RecorderRegistry) SetBenchRecorderPrefix(prefix string)
SetBenchRecorderPrefix sets the bench prefix for this registry.
type RecorderType ¶
type RecorderType string
RecorderType represents the underlying recorder type.
func (RecorderType) Validate ¶
func (t RecorderType) Validate() error
Validate the underlying recorder type.
type Report ¶
```go
type Report struct {
    // These settings are at the top level to provide a DRY
    // location for the data; in the DB they're part of the
    // test-info, but we're going to assume that these tasks run
    // in Evergreen conventionally.
    Project    string              `bson:"project" json:"project" yaml:"project"`
    Version    string              `bson:"version" json:"version" yaml:"version"`
    Order      int                 `bson:"order" json:"order" yaml:"order"`
    Variant    string              `bson:"variant" json:"variant" yaml:"variant"`
    TaskName   string              `bson:"task_name" json:"task_name" yaml:"task_name"`
    TaskID     string              `bson:"task_id" json:"task_id" yaml:"task_id"`
    Execution  int                 `bson:"execution_number" json:"execution_number" yaml:"execution_number"`
    Mainline   bool                `bson:"mainline" json:"mainline" yaml:"mainline"`
    BucketConf BucketConfiguration `bson:"bucket" json:"bucket" yaml:"bucket"`

    // Tests holds all of the test data.
    Tests []Test `bson:"tests" json:"tests" yaml:"tests"`
}
```
Report is the top-level object representing a suite of performance tests, and is used to feed data to a cedar instance. All of the test data is in the Tests field, with metadata common to all tests in the top-level fields of the Report structure.
func LoadReport ¶
LoadReport reads the content of the specified file and attempts to create a Report structure from it. The file can be BSON, JSON, or YAML; LoadReport examines the file's extension to determine the data format.
func ReportSetup ¶
func ReportSetup(reportType ReportType, filename string) (*Report, error)
ReportSetup sets up a Report struct with the given ReportType and filename. Note that not all ReportTypes require a filename (such as ReportTypeEnv); in that case, pass an empty string for the filename.
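For example (a sketch):

```go
// Build a report from Evergreen-style environment variables;
// ReportTypeEnv does not read a file, so the filename is empty.
report, err := poplar.ReportSetup(poplar.ReportTypeEnv, "")
if err != nil {
    grip.EmergencyFatal(err)
}

// Alternatively, seed the report from an on-disk YAML document.
report, err = poplar.ReportSetup(poplar.ReportTypeYAML, "report.yaml")
```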
type ReportType ¶
type ReportType string
ReportType describes the marshalled report type.
```go
const (
    ReportTypeJSON ReportType = "JSON"
    ReportTypeBSON ReportType = "BSON"
    ReportTypeYAML ReportType = "YAML"
    ReportTypeEnv  ReportType = "ENV"
)
```
type Test ¶
```go
type Test struct {
    ID          string         `bson:"_id" json:"id" yaml:"id"`
    Info        TestInfo       `bson:"info" json:"info" yaml:"info"`
    CreatedAt   time.Time      `bson:"created_at" json:"created_at" yaml:"created_at"`
    CompletedAt time.Time      `bson:"completed_at" json:"completed_at" yaml:"completed_at"`
    Artifacts   []TestArtifact `bson:"artifacts" json:"artifacts" yaml:"artifacts"`
    Metrics     []TestMetrics  `bson:"metrics" json:"metrics" yaml:"metrics"`
    SubTests    []Test         `bson:"sub_tests" json:"sub_tests" yaml:"sub_tests"`
}
```
Test holds data about a specific test and its subtests. You should not populate the ID field, and instead populate the entire Info structure. ID fields are populated by the server by hashing the Info document along with high level metadata that is, in this representation, stored in the report structure.
type TestArtifact ¶
```go
type TestArtifact struct {
    Bucket      string    `bson:"bucket" json:"bucket" yaml:"bucket"`
    Prefix      string    `bson:"prefix" json:"prefix" yaml:"prefix"`
    Permissions string    `bson:"permissions" json:"permissions" yaml:"permissions"`
    Path        string    `bson:"path" json:"path" yaml:"path"`
    Tags        []string  `bson:"tags" json:"tags" yaml:"tags"`
    CreatedAt   time.Time `bson:"created_at" json:"created_at" yaml:"created_at"`
    LocalFile   string    `bson:"local_path,omitempty" json:"local_path,omitempty" yaml:"local_path,omitempty"`

    PayloadTEXT bool `bson:"is_text,omitempty" json:"is_text,omitempty" yaml:"is_text,omitempty"`
    PayloadFTDC bool `bson:"is_ftdc,omitempty" json:"is_ftdc,omitempty" yaml:"is_ftdc,omitempty"`
    PayloadBSON bool `bson:"is_bson,omitempty" json:"is_bson,omitempty" yaml:"is_bson,omitempty"`
    PayloadJSON bool `bson:"is_json,omitempty" json:"is_json,omitempty" yaml:"is_json,omitempty"`
    PayloadCSV  bool `bson:"is_csv,omitempty" json:"is_csv,omitempty" yaml:"is_csv,omitempty"`

    DataUncompressed bool `bson:"is_uncompressed" json:"is_uncompressed" yaml:"is_uncompressed"`
    DataGzipped      bool `bson:"is_gzip,omitempty" json:"is_gzip,omitempty" yaml:"is_gzip,omitempty"`
    DataTarball      bool `bson:"is_tarball,omitempty" json:"is_tarball,omitempty" yaml:"is_tarball,omitempty"`

    EventsRaw             bool `bson:"events_raw,omitempty" json:"events_raw,omitempty" yaml:"events_raw,omitempty"`
    EventsHistogram       bool `bson:"events_histogram,omitempty" json:"events_histogram,omitempty" yaml:"events_histogram,omitempty"`
    EventsIntervalSummary bool `bson:"events_interval_summary,omitempty" json:"events_interval_summary,omitempty" yaml:"events_interval_summary,omitempty"`
    EventsCollapsed       bool `bson:"events_collapsed,omitempty" json:"events_collapsed,omitempty" yaml:"events_collapsed,omitempty"`

    ConvertGzip      bool `bson:"convert_gzip,omitempty" json:"convert_gzip,omitempty" yaml:"convert_gzip,omitempty"`
    ConvertBSON2FTDC bool `bson:"convert_bson_to_ftdc,omitempty" json:"convert_bson_to_ftdc,omitempty" yaml:"convert_bson_to_ftdc,omitempty"`
    ConvertJSON2FTDC bool `bson:"convert_json_to_ftdc" json:"convert_json_to_ftdc" yaml:"convert_json_to_ftdc"`
    ConvertCSV2FTDC  bool `bson:"convert_csv_to_ftdc" json:"convert_csv_to_ftdc" yaml:"convert_csv_to_ftdc"`
}
```
TestArtifact is an optional structure to allow you to upload and attach metadata to results files.
func (*TestArtifact) Convert ¶
func (a *TestArtifact) Convert(ctx context.Context) error
Convert translates the artifact into a different format, typically by converting JSON, BSON, or CSV to FTDC, optionally gzipping the results.
func (*TestArtifact) SetBucketInfo ¶
func (a *TestArtifact) SetBucketInfo(conf BucketConfiguration) error
SetBucketInfo sets any missing fields related to uploading an artifact using the passed in `BucketConfiguration`. An error is returned if any required fields are blank. This method should be used before calling `Upload`.
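An upload flow might look like this sketch, assuming a populated Report value named report:

```go
artifact := &poplar.TestArtifact{
    LocalFile:   "path/to/client_data.ftdc",
    Path:        "mytest/client_data.ftdc",
    PayloadFTDC: true,
    EventsRaw:   true,
}

// Fill in bucket name, credentials, and other missing fields from the
// report-level configuration before uploading.
if err := artifact.SetBucketInfo(report.BucketConf); err != nil {
    grip.EmergencyFatal(err)
}

// The final argument disables dry-run mode, so the upload is performed.
if err := artifact.Upload(ctx, report.BucketConf, false); err != nil {
    grip.EmergencyFatal(err)
}
```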
func (*TestArtifact) Upload ¶
func (a *TestArtifact) Upload(ctx context.Context, conf BucketConfiguration, dryRun bool) error
Upload provides a way to upload an artifact using a bucket configuration.
func (*TestArtifact) Validate ¶
func (a *TestArtifact) Validate() error
Validate examines an entire artifact structure and reports if there are any logical inconsistencies with the data.
type TestInfo ¶
```go
type TestInfo struct {
    TestName  string           `bson:"test_name" json:"test_name" yaml:"test_name"`
    Trial     int              `bson:"trial" json:"trial" yaml:"trial"`
    Parent    string           `bson:"parent" json:"parent" yaml:"parent"`
    Tags      []string         `bson:"tags" json:"tags" yaml:"tags"`
    Arguments map[string]int32 `bson:"args" json:"args" yaml:"args"`
}
```
TestInfo holds metadata about the test configuration and execution. The parent field holds the content of the ID field of the parent test for sub tests, and should be populated automatically by the client when uploading results.
type TestMetrics ¶
```go
type TestMetrics struct {
    Name    string      `bson:"name" json:"name" yaml:"name"`
    Version int         `bson:"version,omitempty" json:"version,omitempty" yaml:"version,omitempty"`
    Type    string      `bson:"type" json:"type" yaml:"type"`
    Value   interface{} `bson:"value" json:"value" yaml:"value"`
}
```
TestMetrics is a structure that holds computed metrics for an entire test in the case that test harnesses need or want to report their own test outcomes.