loadgen

package module
v0.0.24
Published: Jul 15, 2020 License: Apache-2.0 Imports: 57 Imported by: 2

README

Load testing library & CLI

Created for load tests that use (generated) clients in Go to communicate with services (written in any supported language). By providing the Attack interface, any client and protocol can potentially be tested with this package.

Compared to existing HTTP load testing tools (e.g. tsenart/vegeta) that send raw HTTP requests, this package requires client code to send requests and receive responses.

This tool is heavily based on https://github.com/emicklei/hazana, with added functionality:

  • multiple generators in one runtime
  • generate grafana dashboard for all attackers
  • load and store data for attackers
  • performance degradation checks
  • dump transport for debug
  • automatic generation of load profile from logs

Install lib

go get github.com/skudasov/loadgen

Install cli

go install ~/go/pkg/mod/github.com/skudasov/loadgen\@${version}/cmd/loadcli.go

Bootstrap Grafana + Graphite

docker run -d -p 8181:80 -p 8125:8125/udp -p 8126:8126 --publish=2003:2003 --name kamon-grafana-dashboard kamon/grafana_graphite

Create a default generator config in your home dir, ~/generator.yaml:

host: # host data for monitoring
  name: local_generator # used as graphite metrics prefix
  network_iface: en0
generator:
  target: https://ya.ru # default target of attack
  responseTimeoutSec: 20
  rampUpStrategy: linear # linear | exp2
  verbose: true
execution_mode: parallel # generator execution mode, run attackers in parallel | sequence modes
grafana: # grafana configuration
  url: http://0.0.0.0:8181
  login: "admin"
  password: "admin"
graphite:
  url: 0.0.0.0:2003
  flushDurationSec: 1
  loadGeneratorPrefix: observer # prefix for graphite metrics
checks:
  handle_threshold_percent: 1.20
root_package_name: loadgen # your root package name
load_scripts_dir: load # where all attackers and suite configs will be stored
timezone: Europe/Moscow

Creating tests

Create a new test:

loadcli new first_test

Open load/first_test_attack.go and implement the Setup and Do methods:

// Attack must be implemented by a service client.
type Attack interface {
	// Setup should establish the connection to the service
	// It may want to access the Config of the Runner.
	Setup(c RunnerConfig) error
	// Do performs one request and is executed in a separate goroutine.
	// The context is used to cancel the request on timeout.
	Do(ctx context.Context) DoResult
	// Teardown can be used to close the connection to the service
	Teardown() error
	// Clone should return a fresh new Attack
	// Make sure the new Attack has values for shared struct fields initialized at Setup.
	Clone(r *Runner) Attack
	// StoreData should report whether this scenario saves data that is needed later by another scenario or for verification
	StoreData() bool
}
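
A minimal sketch of what load/first_test_attack.go might look like, assuming a plain HTTP GET against the target from generator.yaml; the FirstTestAttack and FirstTestLabel names follow the codegen naming pattern but are hypothetical here, and Teardown comes from the embedded loadgen.WithRunner:

package load

import (
	"context"
	"net/http"

	"github.com/skudasov/loadgen"
)

type FirstTestAttack struct {
	loadgen.WithRunner
	client *http.Client
	target string
}

func (a *FirstTestAttack) Setup(hc loadgen.RunnerConfig) error {
	// One client per attacker goroutine; the target comes from the generator config.
	a.client = &http.Client{}
	a.target = a.GetManager().GeneratorConfig.Generator.Target
	return nil
}

func (a *FirstTestAttack) Do(ctx context.Context) loadgen.DoResult {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, a.target, nil)
	if err != nil {
		return loadgen.DoResult{RequestLabel: FirstTestLabel, Error: err}
	}
	res, err := a.client.Do(req)
	if err != nil {
		return loadgen.DoResult{RequestLabel: FirstTestLabel, Error: err}
	}
	defer res.Body.Close()
	return loadgen.DoResult{RequestLabel: FirstTestLabel, StatusCode: res.StatusCode}
}

func (a *FirstTestAttack) Clone(r *loadgen.Runner) loadgen.Attack {
	return &FirstTestAttack{WithRunner: loadgen.WithRunner{R: r}}
}

func (a *FirstTestAttack) StoreData() bool {
	return false
}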

If you need to put or get data for the test, also implement:

	// PutData writes object representation to handle file
	PutData(mo interface{}) error
	// GetData reads object from handle file
	GetData() (interface{}, error)
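
Continuing the hypothetical FirstTestAttack above, a sketch of backing these with the bundled CSV helpers (DefaultWriteCSV/DefaultReadCSV write and read records for the handle's csv_write/csv_read files):

// PutData writes one record for this handle, assuming it is handed a []string.
func (a *FirstTestAttack) PutData(mo interface{}) error {
	loadgen.DefaultWriteCSV(a, mo.([]string))
	return nil
}

// GetData reads the next record for this handle, recycling if recycle_data is set.
func (a *FirstTestAttack) GetData() (interface{}, error) {
	return loadgen.DefaultReadCSV(a), nil
}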

A default run config was also created in run_configs/first_test.yaml:

dumptransport: true
http_timeout: 20
handles:
- name: first_test
  rps: 1
  attack_time_sec: 30
  ramp_up_sec: 1
  ramp_up_strategy: exp2
  max_attackers: 1
  verbose: true
  do_timeout_sec: 40
  store_data: false
  recycle_data: true
execution_mode: sequence

Now it's time to generate and upload a Grafana dashboard for your test:

loadcli dashboard

Then build and run the test; build targets are linux|darwin for now:

loadcli build darwin
./load_suite -config load/run_configs/first_test.yaml

If you have a remote VM for running tests, upload the suite (you must have SSH keys copied to the remote host):

loadcli upload myuser@102.37.13.83:/home/myuser/loadtest

CI Run

If the handle threshold percent is exceeded (by default, 20% degradation of p50 for any handle), or there are errors in any handle, the pipeline will fail.

checks:
    handle_threshold_percent: 1.2

All reports for each handle are stored in the reports dir.

For sequence_validate mode, use the scaling report:

loadcli scaling_report scaling.csv report.png

Debug

Bootstrap a local kamon container for debugging metrics, then export the dashboard from the dir:

docker run -d -p 8181:80 -p 8125:8125/udp -p 8126:8126 --publish=2003:2003 --name kamon-grafana-dashboard kamon/grafana_graphite

Turn on dumptransport in the run config:

dumptransport: true

Documentation

Index

Constants

const (
	DefaultMetricsInstanceType = "t2.micro"
	DefaultGrafanaPort         = "8181"
	DefaultVMRootUser          = "ec2-user"

	MetricsContainerCommand = "" /* 128-byte string literal not displayed */
)
const (
	RoleMetrics   = "metrics"
	RoleGenerator = "generator"
)
const (
	RequestHeader      = "========== REQUEST ==========\n%s\n"
	RequestHeaderBody  = "========== REQUEST ==========\n%s\n%s\n"
	ResponseHeaderBody = "========== RESPONSE ==========\n%s\n%s\n"
	ResponseHeader     = "========== RESPONSE ==========\n%s\n"
	HTTPBodyDelimiter  = "\r\n\r\n"
)
const (
	ReportFileTmpl       = "%s-%d.json"
	ParallelMode         = "parallel"
	SequenceMode         = "sequence"
	SequenceValidateMode = "sequence_validate"
)

Variables

This section is empty.

Functions

func BuildSuiteCommand

func BuildSuiteCommand(testDir string, platform string)

func CodegenAttackerFile

func CodegenAttackerFile(packageName string, label string)

CodegenAttackerFile generates load test code:

package generated_loadtest

import (
	"context"

	"github.com/skudasov/loadgen"
)

type GeneratedAttack struct {
	loadgen.WithRunner
}

func (a *GeneratedAttack) Setup(hc loadgen.RunnerConfig) error {
	return nil
}

func (a *GeneratedAttack) Do(ctx context.Context) loadgen.DoResult {
	return loadgen.DoResult{
		Error:        nil,
		RequestLabel: GeneratedLabel,
	}
}

func (a *GeneratedAttack) Clone(r *loadgen.Runner) loadgen.Attack {
	return &GeneratedAttack{WithRunner: loadgen.WithRunner{R: r}}
}

func CodegenAttackersFile

func CodegenAttackersFile(packageName string, labels []LabelKV)

CodegenAttackersFile generates attacker factory code for every label found in labels.go; struct names are camel-cased with an Attack suffix:

package load

import (
	"log"

	"github.com/skudasov/loadgen"
)

func AttackerFromName(name string) loadgen.Attack {
	switch name {
	case "user_create":
		return loadgen.WithMonitor(new(UserCreateAttack))
	// ...
	default:
		log.Fatalf("unknown attacker type: %s", name)
		return nil
	}
}

func CodegenChecksFile

func CodegenChecksFile(packageName string, labels []LabelKV)

func CodegenLabelsFile

func CodegenLabelsFile(packageName string, labels []LabelKV)

CodegenLabelsFile generates the labels file, adding a new label for the created test:

package load

const (
	UserCreateLabel = "user_create"
	// ...
)

func CodegenMainFile

func CodegenMainFile(packageName string)

CodegenMainFile generates the loadtest entry point:

package main

import (
	"github.com/skudasov/loadgen"

	"github.com/insolar/example_loadtest"
	"github.com/insolar/example_loadtest/config"
)

func main() {
	loadgen.Run(example_loadtest.AttackerFromName, example_loadtest.CheckFromName)
}

func CollectYamlLabels

func CollectYamlLabels() []string

func CreateOrReplaceFile

func CreateOrReplaceFile(fname string) *os.File

func DecodeECDSAPair

func DecodeECDSAPair(pemEncoded string, pemEncodedPub string) (*ecdsa.PrivateKey, *ecdsa.PublicKey)

func DefaultReadCSV

func DefaultReadCSV(a Attack) []string

func DefaultWriteCSV

func DefaultWriteCSV(a Attack, data []string)

func DescribeInstances

func DescribeInstances(svc *ec2.EC2, input *ec2.DescribeInstancesInput) *ec2.DescribeInstancesOutput

func EncodeECDSAPair

func EncodeECDSAPair(privateKey *ecdsa.PrivateKey, publicKey *ecdsa.PublicKey) (string, string)

EncodeECDSAPair encodes private and public keys so that created members can be reused in later tests

func ErrorPercentCheck

func ErrorPercentCheck(r *Runner, percent float64) bool

func GenerateNewTestCommand

func GenerateNewTestCommand(testDir string, label string)

func GenerateSingleRunConfig

func GenerateSingleRunConfig(testDir string, label string)

GenerateSingleRunConfig generates a simple config to run a test for debugging

func HumanReadableTestInterval

func HumanReadableTestInterval(from string, to string)

func MaxRPS

func MaxRPS(array []float64) float64

func NewAttackerStructName

func NewAttackerStructName(label string) string

func NewCheckFuncName

func NewCheckFuncName(label string) string

func NewLabelName

func NewLabelName(label string) string

func NewLoggingHTTPClient

func NewLoggingHTTPClient(debug bool, transportTimeout int) *http.Client

NewLoggingHTTPClient creates a new HTTP client with optional debug transport logging
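
A sketch of plugging it into a hypothetical attacker's Setup, driving it from the suite's dumptransport and http_timeout settings (reaching them through the embedded WithRunner is an assumption here):

func (a *FirstTestAttack) Setup(hc loadgen.RunnerConfig) error {
	suiteCfg := a.GetManager().SuiteConfig
	// Dumps requests/responses when dumptransport: true in the run config.
	a.client = loadgen.NewLoggingHTTPClient(suiteCfg.DumpTransport, suiteCfg.HttpTimeout)
	return nil
}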

func PrintReport

func PrintReport(r RunReport)

PrintReport writes the JSON report to a file or stdout, depending on the configuration.

func PromBooleanQuery

func PromBooleanQuery(r *Runner) bool

PromBooleanQuery executes a Prometheus boolean query

func RandInt

func RandInt() int

func ReadCsvFile

func ReadCsvFile(path string) (map[string]ChartLine, error)

func RegisterGauge

func RegisterGauge(name string) metrics.Gauge

RegisterGauge registers a gauge metric in Graphite

func RenderChart

func RenderChart(requests map[string]ChartLine, fileName string) error

func ReportScaling

func ReportScaling(inputCsv, outputPng string)

func Run

func Run(factory attackerFactory, checksFactory attackerChecksFactory, beforeSuite BeforeSuite, afterSuite AfterSuite)

Run is the default run mode for a suite, with degradation checks
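
A minimal sketch of an entry point using the full signature, with no-op suite hooks; the load package path and the AttackerFromName/CheckFromName factories generated by loadcli are assumptions:

package main

import (
	"github.com/skudasov/loadgen"

	"yourproject/load" // hypothetical package holding the generated factories
)

func main() {
	loadgen.Run(
		load.AttackerFromName,
		load.CheckFromName,
		func(cfg *loadgen.GeneratorConfig) error { return nil }, // BeforeSuite hook
		func(cfg *loadgen.GeneratorConfig) error { return nil }, // AfterSuite hook
	)
}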

func RunSuiteCommand

func RunSuiteCommand(cfgPath string)

func StartGraphiteSender

func StartGraphiteSender(prefix string, flushDuration time.Duration, url string)

func TimerangeUrl

func TimerangeUrl(fromEpoch int64, toEpoch int64)

func UploadGrafanaDashboard

func UploadGrafanaDashboard()

func UploadSuiteCommand

func UploadSuiteCommand(testDir string, remoteRootDir string, keyPath string)

func WithRqId

func WithRqId(ctx context.Context, rqId string) context.Context

WithRqId returns a context which knows its request ID

func WithSessionId

func WithSessionId(ctx context.Context, sessionId string) context.Context

WithSessionId returns a context which knows its session ID
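
A tiny sketch of tagging a context before calling an attacker's Do, presumably so Logger.FromCtx can enrich log lines (the attack value and IDs are made up):

ctx := loadgen.WithRqId(context.Background(), "rq-123")
ctx = loadgen.WithSessionId(ctx, "sess-42")
res := attack.Do(ctx)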

Types

type AfterRunner

type AfterRunner interface {
	AfterRun(r *RunReport) error
}

AfterRunner can be implemented by an Attacker and its method is called after a test or Run. The report is passed to compute the Failed field and/or store values in Output.

type AfterSuite

type AfterSuite func(config *GeneratorConfig) error

type Annotations

type Annotations struct {
	List []interface{} `json:"list"`
}

type AtomicBool added in v0.0.23

type AtomicBool struct {
	// contains filtered or unexported fields
}

func (*AtomicBool) Get added in v0.0.23

func (b *AtomicBool) Get() bool

func (*AtomicBool) Set added in v0.0.23

func (b *AtomicBool) Set(value bool)

type Attack

type Attack interface {
	Runnable
	// Setup should establish the connection to the service
	// It may want to access the Config of the Runner.
	Setup(c RunnerConfig) error
	// Do performs one request and is executed in a separate goroutine.
	// The context is used to cancel the request on timeout.
	Do(ctx context.Context) DoResult
	// Teardown can be used to close the connection to the service
	Teardown() error
	// Clone should return a fresh new Attack
	// Make sure the new Attack has values for shared struct fields initialized at Setup.
	Clone(r *Runner) Attack
}

Attack must be implemented by a service client.

type AttackerWithLabel

type AttackerWithLabel struct {
	Name  string
	Label string
}

type BeforeRunner

type BeforeRunner interface {
	BeforeRun(c RunnerConfig) error
}

BeforeRunner can be implemented by an Attacker and its method is called before a test or Run.

type BeforeSuite

type BeforeSuite func(config *GeneratorConfig) error

type CSVData

type CSVData struct {
	Mu *sync.Mutex

	CsvWriter *csv.Writer
	CsvReader *csv.Reader
	Recycle   bool
	// contains filtered or unexported fields
}

func NewCSVData

func NewCSVData(f *os.File, recycle bool) *CSVData

func (*CSVData) Flush

func (m *CSVData) Flush()

func (*CSVData) Lock

func (m *CSVData) Lock()

func (*CSVData) Read

func (m *CSVData) Read() ([]string, error)

Read reads a record from the CSV, recycling from the beginning on EOF

func (*CSVData) RecycleData

func (m *CSVData) RecycleData() error

RecycleData reads file from the beginning

func (*CSVData) Unlock

func (m *CSVData) Unlock()

func (*CSVData) Write

func (m *CSVData) Write(rec []string) error

Write writes a CSV record
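
A minimal sketch of driving CSVData directly (normally the LoadManager wires one per handle); the file name and records are made up:

f := loadgen.CreateOrReplaceFile("users.csv")
data := loadgen.NewCSVData(f, true) // recycle: start again from the beginning on EOF

data.Lock()
if err := data.Write([]string{"user_1", "token_abc"}); err != nil {
	log.Fatal(err)
}
data.Flush()
data.Unlock()

rec, err := data.Read() // next record, e.g. ["user_1", "token_abc"]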

type CSVMonitored

type CSVMonitored struct {
	Attack
}

func WithCSVMonitor

func WithCSVMonitor(a Attack) CSVMonitored

func (CSVMonitored) Clone

func (m CSVMonitored) Clone(r *Runner) Attack

func (CSVMonitored) Do

func (CSVMonitored) Setup

func (m CSVMonitored) Setup(c RunnerConfig) error

type ChartLine

type ChartLine struct {
	XValues []float64
	YValues []float64
}

type Checks

type Checks struct {
	// Type error check mode, ex.: error | prometheus
	Type string
	// Query prometheus bool query
	Query string
	// Threshold fail threshold, from 0 to 1, float
	Threshold float64
	// Interval check interval in seconds
	Interval int
}

Checks stop criteria checks

type ClusterSpec

type ClusterSpec struct {
	Region    string
	Instances []InstanceSpec
}

func CreateSpec

func CreateSpec(region string, nodes int, instanceType string) ClusterSpec

type Dashboard

type Dashboard struct {
	Inputs        Inputs        `json:"__inputs"`
	Requires      Requires      `json:"__requires"`
	Annotations   Annotations   `json:"annotations"`
	Editable      bool          `json:"editable"`
	GnetID        interface{}   `json:"gnetId"`
	GraphTooltip  int           `json:"graphTooltip"`
	HideControls  bool          `json:"hideControls"`
	ID            interface{}   `json:"id"`
	Links         []interface{} `json:"links"`
	Refresh       string        `json:"refresh"`
	Rows          []Row         `json:"rows"`
	SchemaVersion int           `json:"schemaVersion"`
	Style         string        `json:"style"`
	Tags          []interface{} `json:"tags"`
	Templating    Templating    `json:"templating"`
	Time          Time          `json:"time"`
	Timepicker    TimePicker    `json:"timepicker"`
	Timezone      string        `json:"timezone"`
	Title         string        `json:"title"`
	Version       int           `json:"version"`
}

func DefaultDSDashboard

func DefaultDSDashboard(title string, rows []Row) Dashboard

func GrafanaGeneratorNodeDashboard

func GrafanaGeneratorNodeDashboard(title string, labels []string, projectMetricPrefix string) Dashboard

func GrafanaGeneratorsSummaryDashboard

func GrafanaGeneratorsSummaryDashboard(title string, labels []string) Dashboard

type Datable

type Datable interface {
	// PutData writes object representation to handle file
	PutData(mo interface{}) error
	// GetData reads object from handle file
	GetData() (interface{}, error)
}

type DoResult

type DoResult struct {
	// Label identifying the request that was sent, used only for reporting the Metrics.
	RequestLabel string
	// The error that happened when sending the request or receiving the response.
	Error error
	// The HTTP status code.
	StatusCode int
	// Number of bytes transferred when sending the request.
	BytesIn int64
	// Number of bytes transferred when receiving the response.
	BytesOut int64
}

DoResult is the return value of a Do call on an Attack.

type DumpTransport

type DumpTransport struct {
	// contains filtered or unexported fields
}

DumpTransport logs HTTP requests/responses and pretty-prints bodies

func (*DumpTransport) RoundTrip

func (d *DumpTransport) RoundTrip(h *http.Request) (*http.Response, error)

type GeneratorConfig

type GeneratorConfig struct {
	// Host current vm host configuration
	Host struct {
		// Name used in grafana metrics as prefix
		Name string `mapstructure:"name"`
		// NetworkIface default network interface to collect metrics from
		NetworkIface string `mapstructure:"network_iface"`
		// CollectMetrics collect host metrics flag
		CollectMetrics bool `mapstructure:"collect_metrics"`
	} `mapstructure:"host"`
	// Remotes contains remote generator vm data
	Remotes []struct {
		// Name hostname of remote generator
		Name string `mapstructure:"name"`
		// RemoteRootDir remote root dir of a test
		RemoteRootDir string `mapstructure:"remote_root_dir"`
		// KeyPath path to ssh pub key
		KeyPath string `mapstructure:"key_path"`
	}
	// Generator generator specific config
	Generator struct {
		// Target base url to attack
		Target string `mapstructure:"target"`
		// ResponseTimeoutSec response timeout in seconds
		ResponseTimeoutSec int `mapstructure:"responseTimeoutSec"`
		// RampUpStrategy ramp up strategy: linear | exp2
		RampUpStrategy string `mapstructure:"ramp_up_strategy"`
		// Verbose allows to print debug generator logs
		Verbose bool `mapstructure:"verbose"`
	} `mapstructure:"generator"`
	// ExecutionMode step execution mode: sequence, sequence_validate, parallel
	ExecutionMode string `mapstructure:"execution_mode"`
	// Grafana related config
	Grafana struct {
		// URL base url of grafana, ex.: http://0.0.0.0:8181
		URL string `mapstructure:"url"`
		// Login login
		Login string `mapstructure:"login"`
		// Password password
		Password string `mapstructure:"password"`
	} `mapstructure:"grafana"`
	// Graphite related config
	Graphite struct {
		// URL graphite base url, ex.: 0.0.0.0:2003
		URL string `mapstructure:"url"`
		// FlushIntervalSec flush interval in seconds
		FlushIntervalSec int `mapstructure:"flushDurationSec"`
		// LoadGeneratorPrefix prefix to be used in graphite metrics
		LoadGeneratorPrefix string `mapstructure:"loadGeneratorPrefix"`
	} `mapstructure:"graphite"`
	Prometheus *Prometheus `mapstructure:"prometheus"`
	// LoadScriptsDir relative from cwd load dir path, ex.: load
	LoadScriptsDir string `mapstructure:"load_scripts_dir"`
	// Timezone timezone used for grafana url, ex.: Europe/Moscow
	Timezone string `mapstructure:"timezone"`
	// Logging logging related config
	Logging struct {
		// Level level of allowed log messages,ex.: debug | info
		Level string `mapstructure:"level"`
		// Encoding encoding of logs, ex.: console | json
		Encoding string `mapstructure:"encoding"`
	} `mapstructure:"logging"`
}

func LoadDefaultGeneratorConfig

func LoadDefaultGeneratorConfig(cfgPath string) *GeneratorConfig

func (*GeneratorConfig) Validate

func (c *GeneratorConfig) Validate() (list []string)

type HostMetrics

type HostMetrics struct {
	// contains filtered or unexported fields
}

func NewHostOSMetrics

func NewHostOSMetrics(hostPrefix string, graphiteUrl string, flushDurationSec int, networkInterface string) *HostMetrics

NewHostOSMetrics creates a host OS metrics collector that reports to Graphite

func (*HostMetrics) GetCPU

func (m *HostMetrics) GetCPU() int64

GetCPU returns user + system CPU used

func (*HostMetrics) GetMem

func (m *HostMetrics) GetMem() *memory.Stats

GetMem returns all memory and swap used/free/total stats

func (*HostMetrics) GetNetwork

func (m *HostMetrics) GetNetwork() (int64, int64)

GetNetwork returns rx/tx bytes for the selected network interface

func (*HostMetrics) SelectNetworkInterface

func (m *HostMetrics) SelectNetworkInterface(stats []network.Stats) *network.Stats

func (*HostMetrics) Watch

func (m *HostMetrics) Watch(intervalSec int)

Watch updates generator host Metrics

type ImportPayload

type ImportPayload struct {
	Dashboard Dashboard     `json:"dashboard"`
	Overwrite bool          `json:"overwrite"`
	Inputs    []UploadInput `json:"inputs"`
}

type InfrastructureProviderAWS

type InfrastructureProviderAWS struct {
	ClusterSpec      ClusterSpec
	RunningInstances map[string]*RunningInstance
	// contains filtered or unexported fields
}

func NewInfrastructureProviderAWS

func NewInfrastructureProviderAWS(spec ClusterSpec) *InfrastructureProviderAWS

func (*InfrastructureProviderAWS) Bootstrap

func (m *InfrastructureProviderAWS) Bootstrap()

Bootstrap creates VMs according to the spec and waits until all VMs are in the "running" state

func (*InfrastructureProviderAWS) Exec

func (m *InfrastructureProviderAWS) Exec(vmName string, cmd string)

type Inputs

type Inputs []struct {
	Name        string `json:"name"`
	Label       string `json:"label"`
	Description string `json:"description"`
	Type        string `json:"type"`
	PluginID    string `json:"pluginId"`
	PluginName  string `json:"pluginName"`
}

type InstanceSpec

type InstanceSpec struct {
	Role  string
	Name  string
	Image string
	Type  string
}

type LabelKV

type LabelKV struct {
	Label     string
	LabelName string
}

func CollectLabels

func CollectLabels(path string) []LabelKV

CollectLabels reads all labels from labels.go

type LatencyMetrics

type LatencyMetrics struct {
	// Total is the total latency sum of all requests in an attack.
	Total time.Duration `json:"total"`
	// Mean is the mean request latency.
	Mean time.Duration `json:"mean"`
	// P50 is the 50th percentile request latency.
	P50 time.Duration `json:"50th"`
	// P95 is the 95th percentile request latency.
	P95 time.Duration `json:"95th"`
	// P99 is the 99th percentile request latency.
	P99 time.Duration `json:"99th"`
	// Max is the maximum observed request latency.
	Max time.Duration `json:"max"`
}

LatencyMetrics holds computed request latency Metrics.

type Legend

type Legend struct {
	Avg     bool `json:"avg"`
	Current bool `json:"current"`
	Max     bool `json:"max"`
	Min     bool `json:"min"`
	Show    bool `json:"show"`
	Total   bool `json:"total"`
	Values  bool `json:"values"`
}

type LoadManager

type LoadManager struct {
	RootMemberPrivateKey *ecdsa.PrivateKey
	RootMemberPublicKey  *ecdsa.PublicKey
	// SuiteConfig holds data common for all groups
	SuiteConfig *SuiteConfig
	// GeneratorConfig holds generator data
	GeneratorConfig *GeneratorConfig
	// Steps runner objects that fires .Do()
	Steps []RunStep
	// AttackerConfigs attacker configs
	AttackerConfigs map[string]RunnerConfig
	// Reports run reports for every handle
	Reports map[string]*RunReport
	// CsvStore stores data for all attackers
	CsvMu    *sync.Mutex
	CsvStore map[string]*CSVData
	// all handles csv logs
	CSVLogMu      *sync.Mutex
	CSVLog        *csv.Writer
	RPSScalingLog *csv.Writer
	ReportDir     string
	// When degradation threshold is reached for any handle, see default Config
	Degradation bool
	// When there are Errors in any handle
	Failed bool
	// When max rps validation failed
	ValidationFailed bool
}

LoadManager manages data and finish criteria

func NewLoadManager

func NewLoadManager(suiteCfg *SuiteConfig, genCfg *GeneratorConfig) *LoadManager

NewLoadManager creates a loadtest manager with data files

func SuiteFromSteps

func SuiteFromSteps(factory attackerFactory, checksFactory attackerChecksFactory, cfgPath string, genCfg *GeneratorConfig) *LoadManager

SuiteFromSteps creates runners for every step
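
A sketch of wiring a LoadManager by hand instead of calling loadgen.Run, assuming the generated load.AttackerFromName/load.CheckFromName factories and the config paths from the README:

genCfg := loadgen.LoadDefaultGeneratorConfig("generator.yaml")
lm := loadgen.SuiteFromSteps(load.AttackerFromName, load.CheckFromName, "load/run_configs/first_test.yaml", genCfg)
lm.RunSuite()
lm.StoreHandleReports()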

func (*LoadManager) CheckDegradation

func (m *LoadManager) CheckDegradation()

CheckDegradation checks handle performance degradation against the last successful run stored in the *handle_name*_last file

func (*LoadManager) CheckErrors

func (m *LoadManager) CheckErrors()

CheckErrors runs the stop-if error checking logic

func (*LoadManager) CsvForHandle

func (m *LoadManager) CsvForHandle(name string) *CSVData

func (*LoadManager) HandleShutdownSignal

func (m *LoadManager) HandleShutdownSignal()

func (*LoadManager) LastSuccessReportForHandle

func (m *LoadManager) LastSuccessReportForHandle(handleName string) (*RunReport, error)

LastSuccessReportForHandle gets last successful report for a handle

func (*LoadManager) RunSuite

func (m *LoadManager) RunSuite()

RunSuite starts the suite and waits for all generators to shut down

func (*LoadManager) SetupHandleStore

func (m *LoadManager) SetupHandleStore(handle RunnerConfig)

func (*LoadManager) Shutdown

func (m *LoadManager) Shutdown()

func (*LoadManager) StoreHandleReports

func (m *LoadManager) StoreHandleReports()

StoreHandleReports stores a report for every handle in the suite

func (*LoadManager) WriteLastSuccess

func (m *LoadManager) WriteLastSuccess(handleName string, ts int64)

WriteLastSuccess writes the timestamp of the last successful run for a handle

type Logger

type Logger struct {
	*zap.SugaredLogger
}

func NewLogger

func NewLogger() *Logger

func (*Logger) FromCtx

func (m *Logger) FromCtx(ctx context.Context) *Logger

FromCtx returns a zap logger with as much context as possible

type Metrics

type Metrics struct {
	// Latencies holds computed request latency Metrics.
	Latencies LatencyMetrics `json:"latencies"`
	// First is the earliest timestamp in a Result set.
	Earliest time.Time `json:"earliest"`
	// Latest is the latest timestamp in a Result set.
	Latest time.Time `json:"latest"`
	// End is the latest timestamp in a Result set plus its latency.
	End time.Time `json:"end"`
	// Duration is the duration of the attack.
	Duration time.Duration `json:"duration"`
	// Wait is the extra time waiting for responses from targets.
	Wait time.Duration `json:"wait"`
	// Requests is the total number of requests executed.
	Requests uint64 `json:"requests"`
	// Rate is the rate of requests per second.
	Rate float64 `json:"rate"`
	// Success is the percentage of non-error responses.
	Success float64 `json:"success"`
	// StatusCodes is a histogram of the responses' status codes.
	StatusCodes map[string]int `json:"status_codes"`
	// Errors is a set of unique Errors returned by the targets during the attack.
	Errors []string `json:"Errors"`
	// contains filtered or unexported fields
}

Metrics holds Metrics computed from a slice of Results, which are used in some of the Reporters

type Monitored

type Monitored struct {
	Attack
}

func WithMonitor

func WithMonitor(a Attack) Monitored

func (Monitored) Clone

func (m Monitored) Clone(r *Runner) Attack

func (Monitored) Do

func (m Monitored) Do(ctx context.Context) DoResult

func (Monitored) Setup

func (m Monitored) Setup(c RunnerConfig) error

type Panel

type Panel struct {
	AliasColors     struct{}      `json:"aliasColors"`
	Bars            bool          `json:"bars"`
	DashLength      int           `json:"dashLength"`
	Dashes          bool          `json:"dashes"`
	Datasource      string        `json:"datasource"`
	Fill            int           `json:"fill"`
	ID              int           `json:"id"`
	Legend          Legend        `json:"legend"`
	Lines           bool          `json:"lines"`
	Linewidth       int           `json:"linewidth"`
	Links           []interface{} `json:"links"`
	NullPointMode   string        `json:"nullPointMode"`
	Percentage      bool          `json:"percentage"`
	Pointradius     int           `json:"pointradius"`
	Points          bool          `json:"points"`
	Renderer        string        `json:"renderer"`
	SeriesOverrides []interface{} `json:"seriesOverrides"`
	SpaceLength     int           `json:"spaceLength"`
	Span            int           `json:"span"`
	Stack           bool          `json:"stack"`
	SteppedLine     bool          `json:"steppedLine"`
	Targets         []Target      `json:"targets"`
	Thresholds      []interface{} `json:"thresholds"`
	TimeFrom        interface{}   `json:"timeFrom"`
	TimeShift       interface{}   `json:"timeShift"`
	Title           string        `json:"title"`
	Tooltip         Tooltip       `json:"tooltip"`
	Type            string        `json:"type"`
	Xaxis           Xaxe          `json:"xaxis"`
	Yaxes           []Yaxe        `json:"yaxes"`
}

func GenerateXTimePanel

func GenerateXTimePanel(title string, targets []Target, xSpan int, yAxisFormat string) Panel

type Prometheus

type Prometheus struct {
	// URL prometheus base url
	URL string `mapstructure:"url"`
	// EnvLabel prometheus environment label
	EnvLabel string `mapstructure:"env_label"`
	// Namespace prometheus namespace
	Namespace string `mapstructure:"namespace"`
}

Prometheus prometheus config

type Requires

type Requires []struct {
	Type    string `json:"type"`
	ID      string `json:"id"`
	Name    string `json:"name"`
	Version string `json:"version"`
}

type Row

type Row struct {
	Collapse        bool        `json:"collapse"`
	Height          int         `json:"height"`
	Panels          []Panel     `json:"panels"`
	Repeat          interface{} `json:"repeat"`
	RepeatIteration interface{} `json:"repeatIteration"`
	RepeatRowID     interface{} `json:"repeatRowId"`
	ShowTitle       bool        `json:"showTitle"`
	Title           string      `json:"title"`
	TitleSize       string      `json:"titleSize"`
}

func GenerateNodeGeneratorRows

func GenerateNodeGeneratorRows(labels []string, projectGeneratorNodePrefix string) []Row

func GenerateRow

func GenerateRow(title string, panel ...Panel) Row

func GenerateSummaryRows

func GenerateSummaryRows(labels []string) []Row

type RunReport

type RunReport struct {
	StartedAt     time.Time    `json:"startedAt"`
	FinishedAt    time.Time    `json:"finishedAt"`
	Configuration RunnerConfig `json:"configuration"`
	// RunError is set when a Run could not be called or executed.
	RunError string              `json:"runError"`
	Metrics  map[string]*Metrics `json:"Metrics"`
	// Failed can be set by your loadtest test program to indicate that the results are not acceptable.
	Failed bool `json:"failed"`
	// Output is used to publish any custom output in the report.
	Output map[string]interface{} `json:"output"`
}

RunReport is a composition of configuration, measurements and custom output from a loadtest Run.

func NewErrorReport

func NewErrorReport(err error, config RunnerConfig) RunReport

NewErrorReport returns a report when a Run could not be called or executed.

type RunStep

type RunStep struct {
	Name          string
	ExecutionMode string
	Runners       []*Runner
}

type Runnable

type Runnable interface {
	// GetManager get test manager with all required data files/readers/writers
	GetManager() *LoadManager
	// GetRunner get current runner
	GetRunner() *Runner
}

Runnable contains default generator/suite configs and methods to access them

type Runner

type Runner struct {
	TestStage    int
	ReadCsvName  string
	WriteCsvName string
	RecycleData  bool
	Manager      *LoadManager
	Config       RunnerConfig

	CheckData []Checks

	// Other clients for checks
	PromClient v1.API

	RateLog []float64
	MaxRPS  float64
	// RampUpMetrics store only rampup interval metrics, cleared every interval
	RampUpMetrics map[string]*Metrics
	// Metrics store full attack metrics
	Metrics map[string]*Metrics

	Errors map[string]metrics.Counter

	L *Logger
	// contains filtered or unexported fields
}

func NewRunner

func NewRunner(name string, lm *LoadManager, a Attack, ch RuntimeCheckFunc, c RunnerConfig) *Runner

func (*Runner) ReportMaxRPS

func (r *Runner) ReportMaxRPS()

func (*Runner) Run

func (r *Runner) Run(wg *sync.WaitGroup, lm *LoadManager)

Run offers the complete flow of a test.

func (*Runner) SetValidationParams

func (r *Runner) SetValidationParams()

func (*Runner) SetupHandleStore

func (r *Runner) SetupHandleStore(m *LoadManager)

func (*Runner) Shutdown

func (r *Runner) Shutdown()

type RunnerConfig

type RunnerConfig struct {
	// WaitBeforeSec debug sleep before starting runner when checking condition is impossible
	WaitBeforeSec int `mapstructure:"wait_before_sec" yaml:"wait_before_sec"`
	// HandleName name of a handle, must be the same as test label in labels.go
	HandleName string `mapstructure:"name" yaml:"name"`
	// RPS max requests per second limit, load profile depends on AttackTimeSec and RampUpTimeSec
	RPS int `mapstructure:"rps" yaml:"rps"`
	// AttackTimeSec time of the test in seconds
	AttackTimeSec int `mapstructure:"attack_time_sec" yaml:"attack_time_sec"`
	// RampUpTimeSec ramp up period in seconds, in which RPS will be increased to max of RPS parameter
	RampUpTimeSec int `mapstructure:"ramp_up_sec" yaml:"ramp_up_sec"`
	// RampUpStrategy ramp up strategy: linear | exp2
	RampUpStrategy string `mapstructure:"ramp_up_strategy" yaml:"ramp_up_strategy"`
	// MaxAttackers max amount of goroutines to attack
	MaxAttackers int `mapstructure:"max_attackers" yaml:"max_attackers"`
	// OutputFilename report filename
	OutputFilename string `mapstructure:"outputFilename,omitempty" yaml:"outputFilename,omitempty"`
	// Verbose allows to print generator debug info
	Verbose bool `mapstructure:"verbose" yaml:"verbose"`
	// Metadata load run metadata
	Metadata map[string]string `mapstructure:"metadata,omitempty" yaml:"metadata,omitempty"`
	// DoTimeoutSec attacker.Do() func timeout
	DoTimeoutSec int `mapstructure:"do_timeout_sec" yaml:"do_timeout_sec"`
	// StoreData flag to check if test must put some data in csv for later validation
	StoreData bool `mapstructure:"store_data" yaml:"store_data"`
	// RecycleData flag to allow recycling data from csv when it ends
	RecycleData bool `mapstructure:"recycle_data" yaml:"recycle_data"`
	// ReadFromCsvName path to csv file to get data for test, use DefaultReadCSV/DefaultWriteCSV to read/write data for test
	ReadFromCsvName string `mapstructure:"csv_read,omitempty" yaml:"csv_read,omitempty"`
	// WriteToCsvName path to csv file to write data from test, use DefaultReadCSV/DefaultWriteCSV to read/write data for test
	WriteToCsvName string `mapstructure:"csv_write,omitempty" yaml:"csv_write,omitempty"`
	// HandleParams handle params metadata, ex. limit=100
	HandleParams map[string]string `mapstructure:"handle_params,omitempty" yaml:"handle_params,omitempty"`
	// IsValidationRun flag to know it's test run that validates max rps
	IsValidationRun bool `mapstructure:"validation_run" yaml:"validation_run"`
	// StopIf describes stop test criteria
	StopIf []Checks `mapstructure:"stop_if" yaml:"stop_if"`
	// Validation validation config
	Validation Validation `mapstructure:"validation" yaml:"validation"`

	// DebugSleep used as a crutch to not affect response time when one need to run test < 1 rps
	DebugSleep int `mapstructure:"debug_sleep"`
}

RunnerConfig runner config

func ConfigFromFile

func ConfigFromFile(named string) RunnerConfig

ConfigFromFile loads a RunnerConfig for use in a Runner.

func ConfigFromFlags

func ConfigFromFlags() RunnerConfig

ConfigFromFlags creates a RunnerConfig for use in a Runner.

func (RunnerConfig) Validate

func (c RunnerConfig) Validate() (list []string)

Validate checks all settings and returns a list of strings with problems.

type RunningInstance

type RunningInstance struct {
	Id              string
	Name            string
	KeyFileName     string
	Role            string
	PrivateKeyPem   string
	PublicDNSName   string
	PublicIPAddress string
}

type RuntimeCheckFunc

type RuntimeCheckFunc func(r *Runner) bool

type Step

type Step struct {
	// Name loadtest step name
	Name string `mapstructure:"name" yaml:"name"`
	// ExecutionMode handles execution mode: sequence, sequence_validate, parallel
	ExecutionMode string `mapstructure:"execution_mode" yaml:"execution_mode"`
	// Handles handle configs
	Handles []RunnerConfig `mapstructure:"handles" yaml:"handles"`
}

Step loadtest step config

type SuiteConfig

type SuiteConfig struct {
	RootKeys string `mapstructure:"rootkeys,omitempty" yaml:"rootkeys,omitempty"`
	RootRef  string `mapstructure:"rootref,omitempty" yaml:"rootref,omitempty"`
	// DumpTransport dumps request/response in stdout
	DumpTransport bool `mapstructure:"dumptransport" yaml:"dumptransport"`
	// GoroutinesDump dump goroutines when SIGHUP
	GoroutinesDump bool `mapstructure:"goroutines_dump" yaml:"goroutines_dump"`
	// HttpTimeout default http client timeout
	HttpTimeout int `mapstructure:"http_timeout" yaml:"http_timeout"`
	// Steps load test steps
	Steps []Step `mapstructure:"steps" yaml:"steps"`
}

SuiteConfig suite config
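
Based on the yaml tags above, a suite config with explicit steps might look like this (all values are illustrative):

dumptransport: false
http_timeout: 20
steps:
  - name: create_users
    execution_mode: parallel
    handles:
      - name: user_create
        rps: 10
        attack_time_sec: 60
        ramp_up_sec: 10
        ramp_up_strategy: linear
        max_attackers: 10
        do_timeout_sec: 40
        store_data: true
        recycle_data: false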

func LoadSuiteConfig

func LoadSuiteConfig(cfgPath string) *SuiteConfig

LoadSuiteConfig loads the YAML loadtest profile config

type Target

type Target struct {
	RefID  string `json:"refId"`
	Target string `json:"target"`
}

func GenerateGoroutinesTotalTarget

func GenerateGoroutinesTotalTarget(labels []string, projectMetricPrefix string) []Target

func GenerateHostMetricTargets

func GenerateHostMetricTargets(tmpl string, hostPrefix string, scale string, labels []string) []Target

func GenerateNetworkSummary

func GenerateNetworkSummary() []Target

func GeneratePercentileTargets

func GeneratePercentileTargets(labels []string, projectMetricPrefix string) []Target

func GenerateRPSTargets

func GenerateRPSTargets(labels []string, projectMetricPrefix string) []Target

func GenerateSummaryPercentileTargets

func GenerateSummaryPercentileTargets(labels []string) []Target

func GenerateSummaryRPSTargets

func GenerateSummaryRPSTargets(labels []string) []Target

type Templating

type Templating struct {
	List []interface{} `json:"list"`
}

type Time

type Time struct {
	From string `json:"from"`
	To   string `json:"to"`
}

type TimePicker

type TimePicker struct {
	RefreshIntervals []string `json:"refresh_intervals"`
	TimeOptions      []string `json:"time_options"`
}

type Tooltip

type Tooltip struct {
	Shared    bool   `json:"shared"`
	Sort      int    `json:"sort"`
	ValueType string `json:"value_type"`
}

type UploadInput

type UploadInput struct {
	Name     string `json:"name"`
	Type     string `json:"type"`
	PluginID string `json:"pluginId"`
	Value    string `json:"value"`
}

type Validation

type Validation struct {
	// AttackTimeSec validation attack time sec
	AttackTimeSec int `mapstructure:"attack_time_sec" yaml:"attack_time_sec"`
	// Threshold percent of max rps to validate, ex.: 0.7 means 70% of max rps
	Threshold float64 `mapstructure:"threshold" yaml:"threshold"`
}

Validation validation config

type WithData

type WithData struct {
}

type WithRunner

type WithRunner struct {
	R *Runner
}

WithRunner embeds a Runner so that all configs are accessible to the attacker

func (*WithRunner) GetManager

func (a *WithRunner) GetManager() *LoadManager

func (*WithRunner) GetRunner

func (a *WithRunner) GetRunner() *Runner

func (*WithRunner) Teardown

func (a *WithRunner) Teardown() error

type Xaxe

type Xaxe struct {
	Buckets interface{}   `json:"buckets"`
	Mode    string        `json:"mode"`
	Name    interface{}   `json:"name"`
	Show    bool          `json:"show"`
	Values  []interface{} `json:"values"`
}

type Yaxe

type Yaxe struct {
	Format  string      `json:"format"`
	Label   interface{} `json:"label"`
	LogBase int         `json:"logBase"`
	Max     interface{} `json:"max"`
	Min     interface{} `json:"min"`
	Show    bool        `json:"show"`
}

Rows data

Directories

Path Synopsis
