yabf

package module
v0.0.0-...-c74e22e
Published: Jan 22, 2023 License: Apache-2.0 Imports: 22 Imported by: 0

README

Yet Another Benchmark Framework (YABF).

YABF is a benchmark framework similar to YCSB.

YCSB is a great framework for benchmarking databases. Its implementation uses synchronized interfaces as the database interface layer and system threads as execution units. This puts an inherent limit on concurrency, which matters when benchmarking high-throughput, high-latency systems. YABF combines the design of YCSB with the concurrency facilities provided by Go: it implements the main features of YCSB and adds optimizations for such high-throughput/high-latency systems.

Features

YABF supports all the main features and options of YCSB, including:

  1. a test client with a variety of database bindings
  2. workloads with various workload distributions

Build

Get the source code and build it with these commands:

$ git clone https://github.com/hhkbp2/yabf.git
$ cd yabf && make

When the build finishes, the YABF binary is located at main/yabf.

To clean up the build, use this command:

$ make clean

Usage

Copy the yabf binary into your PATH and use it as in the following scenarios:

Example 1: Run as interactive shell

YABF ships with two dummy database bindings. The simple binding does nothing at all, which makes it useful as a silent target for verifying YABF's own logic and workload loading. The basic binding does nothing but echo every operation, which is handy in the interactive shell.

You can enter the interactive shell and operate on any supported database binding. Take basic as an example:

$ ./yabf shell basic
YABF Command Line Client
Type "help" for command line help
Connected.
> help
Commands
  read key [field1 field2 ...] - Read a record
  scan key recordcount [field1 field2 ...] - Scan starting at key
  insert key name1=value1 [name2=value2 ...] - Insert a new record
  update key name1=value1 [name2=value2 ...] - Update a record
  delete key - Delete a record
  table [tablename] - Get or [set] the name of the table
  quit - Quit
> table test
Using table "test"
0 ms
> insert k c1=v1 c2=v2
Result: OK
1 ms
> read k
Return code: OK
0 ms
> 
Example 2: Run a specified workload

YABF supports various properties to customize the workload as needed. A typical process for testing a database is:

  1. load a data set into the database
$ yabf load [database binding] [host, port, user, password and other parameters]
  2. run the specified workload to test the performance
$ yabf run [database binding] [host, port, user, password and other parameters]

A variety of properties are supported to customize the workload, e.g.:

yabf load mysql \
  -s \
  -p workload=CoreWorkload \
  -p recordcount=100000000 \
  -p threadcount=2000000 \
  -p operationcount=30000000 \
  -p insertcount=4000000 \
  -p readproportion=0.2 \
  -p updateproportion=0.65 \
  -p insertproportion=0.15 \
  -p core_workload_insertion_retry_limit=1 \
  -p mysql.host=localhost \
  -p mysql.port=2000 \
  -p mysql.db=test \
  -p table=test.test \
  -p mysql.user=user \
  -p mysql.password=password \
  -p mysql.primarykey=testkey

This command sets the total record count to 100,000,000, the number of concurrent operation goroutines to 2,000,000, the maximum operation count to 30,000,000, the maximum insertion count to 4,000,000, and the read/update/insert proportions to 0.2/0.65/0.15, along with other properties, to test the performance of the MySQL binding.

After the test process is finished, YABF would output a summary report of the whole test.

Documentation

Index

Constants

const (
	// DBWrapper
	PropertyReportLatencyForEachError        = "reportlatencyforeacherror"
	PropertyReportLatencyForEachErrorDefault = "false"
	PropertyLatencyTrackedErrors             = "latencytrackederrors"

	// GoodBadUglyDB
	SimulateDelay        = "gbudb.delays"
	SimulateDelayDefault = "200,1000,10000,50000,100000"

	// BasicDB
	ConfigSimulateDelay         = "basicdb.simulatedelay"
	ConfigSimulateDelayDefault  = "0"
	ConfigRandomizeDelay        = "basicdb.randomizedelay"
	ConfigRandomizeDelayDefault = "true"

	// Client
	// The number of records to load into the database initially.
	PropertyRecordCount = "recordcount"
	// The default value of `PropertyRecordCount`
	PropertyRecordCountDefault = "0"
	// The target number of operations to perform.
	PropertyOperationCount        = "operationcount"
	PropertyOperationCountDefault = "0"
	// The workload class to be loaded.
	PropertyWorkload = "workload"
	// The database class to be used.
	PropertyDB        = "db"
	PropertyDBDefault = "basic"
	// The exporter class to be used. The default is TextMeasurementExporter.
	PropertyExporter        = "exporter"
	PropertyExporterDefault = "TextMeasurementExporter"
	// If set to the path of a file, this file will be written instead of stdout.
	PropertyExportFile = "exportfile"
	// The number of client goroutines to run.
	PropertyThreadCount        = "threadcount"
	PropertyThreadCountDefault = "1"
	// Indicates how many inserts to do, if less than `recordcount`.
	// Useful for partitioning the load among multiple servers, if the Client
	// is the bottleneck. Additionally, workloads should support the
	// "insertstart" property, which tells them which record to start at.
	PropertyInsertCount = "insertcount"
	// Target number of operations per second
	PropertyTarget        = "target"
	PropertyTargetDefault = "0"
	// The maximum amount of time (in seconds) for which the benchmark will be run.
	PropertyMaxExecutionTime        = "maxexecutiontime"
	PropertyMaxExecutionTimeDefault = "0"
	// Whether this is the transaction phase (run, true) or the load phase (load, false).
	PropertyTransactions          = "dotransactions"
	PropertyStatusInterval        = "status.interval"
	PropertyStatusIntervalDefault = "10"

	// workload
	PropertyInsertStart        = "insertstart"
	PropertyInsertStartDefault = "0"

	// The name of the database table to run queries against.
	PropertyTableName = "table"
	// The default value of `PropertyTableName`
	PropertyTableNameDefault = "usertable"
	// The name of property for the number of fields in a record
	PropertyFieldCount = "fieldcount"
	// The default value of `PropertyFieldCount`.
	PropertyFieldCountDefault  = "10"
	PropertyFieldPrefix        = "fieldprefix"
	PropertyFieldPrefixDefault = "field"
	// The name of the property for the field length distribution.
	// Options are "uniform", "zipfian"(favoring short records), "constant",
	// and "histogram".
	// If "uniform", "zipfian" or "constant", the maximum field length will
	// be that specified by the fieldlength property. If "histogram", then
	// the histogram will be read from the filename specified in the
	// "fieldlengthhistogram" property.
	PropertyFieldLengthDistribution = "fieldlengthdistribution"
	// The default value of `PropertyFieldLengthDistribution`
	PropertyFieldLengthDistributionDefault = "constant"
	// The name of the property for the length of a field in bytes.
	PropertyFieldLength = "fieldlength"
	// The default value of `PropertyFieldLength`
	PropertyFieldLengthDefault = "100"
	// The name of a property that specifies the filename containing the field
	// length histogram (only used if fieldlengthdistribution is "histogram").
	PropertyFieldLengthHistogramFile = "fieldlengthhistogram"
	// The default value of `PropertyFieldLengthHistogramFile`
	PropertyFieldLengthHistogramFileDefault = "hist.txt"
	// The prefix of key
	PropertyKeyPrefix = "keyprefix"
	// The default value of `PropertyKeyPrefix`
	PropertyKeyPrefixDefault = "user"
	// The name of the property for deciding whether to read one field (false)
	// or all fields (true) of a record.
	PropertyReadAllFields = "readallfields"
	// The default value of `PropertyReadAllFields`
	PropertyReadAllFieldsDefault = "true"
	// The name of the property for deciding whether to write one field (false)
	// or all fields (true) of a record.
	PropertyWriteAllFields = "writeallfields"
	// The default value of `PropertyWriteAllFields`
	PropertyWriteAllFieldsDefault = "false"
	// The name of the property for deciding whether to check all returned
	// data against the formation template to ensure data integrity.
	PropertyDataIntegrity = "dataintegrity"
	// The default value of `PropertyDataIntegrity`
	PropertyDataIntegrityDefault = "false"
	// The name of the property for the proportion of transactions
	// that are reads.
	PropertyReadProportion = "readproportion"
	// The default value of `PropertyReadProportion`
	PropertyReadProportionDefault = "0.95"
	// The name of the property for proportion of transactions
	// that are updates.
	PropertyUpdateProportion = "updateproportion"
	// The default value of `PropertyUpdateProportion`
	PropertyUpdateProportionDefault = "0.05"
	// The name of the property for proportion of transactions
	// that are inserts.
	PropertyInsertProportion = "insertproportion"
	// The default value of `PropertyInsertProportion`
	PropertyInsertProportionDefault = "0.0"
	// The name of the property for proportion of transactions
	// that are scans.
	PropertyScanProportion = "scanproportion"
	// The default value of `PropertyScanProportion`
	PropertyScanProportionDefault = "0.0"
	// The name of the property for proportion of transactions
	// that are read-modify-write.
	PropertyReadModifyWriteProportion = "readmodifywriteproportion"
	// The default value of `PropertyReadModifyWriteProportion`
	PropertyReadModifyWriteProportionDefault = "0.0"
	// The name of the property for the distribution of requests
	// across the keyspace. Options are "uniform", "zipfian" and "latest"
	PropertyRequestDistribution = "requestdistribution"
	// The default value of `PropertyRequestDistribution`
	PropertyRequestDistributionDefault = "uniform"
	// The name of the property for the max scan length (number of records)
	PropertyMaxScanLength = "maxscanlength"
	// The default max scan length
	PropertyMaxScanLengthDefault = "1000"
	// The name of the property for the scan length distribution.
	// Options are "uniform" and "zipfian" (favoring short scans)
	PropertyScanLengthDistribution = "scanlengthdistribution"
	// The default value of `PropertyScanLengthDistribution`
	PropertyScanLengthDistributionDefault = "uniform"
	// The name of the property for the order to insert records.
	// Options are "ordered" or "hashed"
	PropertyInsertOrder = "insertorder"
	// The default value of `PropertyInsertOrder`
	PropertyInsertOrderDefault = "hashed"
	// Percentage data items that constitute the hot set.
	HotspotDataFraction = "hotspotdatafraction"
	// The default value of `HotspotDataFraction`
	HotspotDataFractionDefault = "0.2"
	// Percentage operations that access the hot set.
	HotspotOpnFraction = "hotspotopnfraction"
	// The default value of `HotspotOpnFraction`
	HotspotOpnFractionDefault = "0.8"
	// How many times to retry when insertion of a single item to a DB fails.
	InsertionRetryLimit = "core_workload_insertion_retry_limit"
	// The default value of `InsertionRetryLimit`
	InsertionRetryLimitDefault = "0"
	// On average, how long to wait between the retries, in seconds.
	InsertionRetryInterval = "core_workload_insertion_retry_interval"
	// The default value of `InsertionRetryInterval`
	InsertionRetryIntervalDefault = "3"

	PropertyStorageAge        = "storageages"
	PropertyStorageAgeDefault = "10"
	PropertyDiskSize          = "disksize"
	PropertyDiskSizeDefault   = "100000000"
	PropertyOccupancy         = "occupancy"
	PropertyOccupancyDefault  = "0.9"

	// measurement
	PropertyMeasurementType            = "measurementtype"
	PropertyMeasurementTypeDefault     = "hdrhistogram"
	PropertyMeasurementInterval        = "measurement.interval"
	PropertyMeasurementIntervalDefault = "op"

	// Granularity for time series; measurements will be averaged in chunks of
	// this granularity. Units are milliseconds.
	PropertyGranularity        = "timeseries.granularity"
	PropertyGranularityDefault = "1000"

	// Optionally, user can configure an output file to save the raw
	// data points. Default is none, raw results will be written to stdout.
	OutputFilePath        = "measurement.raw.output_file"
	OutputFilePathDefault = ""
	// Optionally, user can request to not output summary stats. This is
	// useful if the user chains the raw measurement type behind the
	// HdrHistogram type which already outputs summary stats. But even in
	// that case, the user may still want this class to compute summary stats
	// for them, especially if they want accurate computation of percentiles
	// (because percentiles computed by histogram classes are still
	// approximations).
	NoSummaryStats        = "measurement.raw.no_summary"
	NoSummaryStatsDefault = "false"

	Buckets        = "histogram.buckets"
	BucketsDefault = "1000"

	// The name of the property for deciding what percentile values to output.
	PropertyPercentiles = "hdrhistogram.percentiles"
	// The default value of `PropertyPercentiles`
	PropertyPercentilesDefault = "95,99"
	// The name of the property that specifies the output filename of hdrhistogram
	PropertyHdrHistogramOutput = "hdrhistogram.fileoutput"
	// The default value of `PropertyHdrHistogramOutput`
	PropertyHdrHistogramOutputDefault = "false"
	PropertyHdrHistogramOutputPath    = "hdrhistogram.output.path"
	// The default value of `PropertyHdrHistogramOutputPath`
	PropertyHdrHistogramOutputPathDefault = ""
	// The max value of hdrhistogram
	PropertyHdrHistogramMax = "hdrhistogram.max"
	// The default value of `PropertyHdrHistogramMax`, which is 2 seconds, expressed in microseconds.
	PropertyHdrHistogramMaxDefault = "2000000"
	// The number of significant value digits in hdrhistogram
	PropertyHdrHistogramSig = "hdrhistogram.sig"
	// The default value of `PropertyHdrHistogramSig`
	PropertyHdrHistogramSigDefault = "3"

	// generator
	// What percentage of the readings should be within the most recent
	// exponential.frac portion of the dataset?
	PropertyExponentialPercentile        = "exponential.percentile"
	PropertyExponentialPercentileDefault = "95"
	// What fraction of the dataset should be accessed exponential.percentile
	// of the time?
	PropertyExponentialFraction        = "exponential.frac"
	PropertyExponentialFractionDefault = "0.8571428571" // 1/7
)
const (
	RandomBytesLength = 6
)

Variables

var (
	Commands = map[string]bool{
		"load":  true,
		"run":   true,
		"shell": true,
	}
	Databases = map[string]MakeDBFunc{
		"basic": func() DB {
			return NewBasicDB()
		},
		"simple": func() DB {
			return NewGoodBadUglyDB()
		},
	}
	OptionPrefixes = []string{"--", "-"}
	OptionList     = []*Option{
		&Option{
			Name:            "P",
			HasArgument:     true,
			HasDefaultValue: false,
			Doc:             "specify workload file",
			Operation: func(context interface{}, value string) {
				props, _ := context.(Properties)
				propsFromFile, err := LoadProperties(value)
				if err != nil {
					ExitOnError(err.Error())
				}
				props.Merge(propsFromFile)
			},
		},
		&Option{
			Name:            "p",
			HasArgument:     true,
			HasDefaultValue: false,
			Doc:             "specify a property value",
			Operation: func(context interface{}, value string) {
				props, _ := context.(Properties)

				// SplitN with n=2 keeps any '=' characters inside the value intact.
				parts := strings.SplitN(value, "=", 2)
				if len(parts) != 2 {
					ExitOnError("invalid property: %s", value)
				}
				props.Add(parts[0], parts[1])
			},
		},
		&Option{
			Name:            "s",
			HasArgument:     false,
			HasDefaultValue: false,
			Doc:             "show status (default: no status)",
		},
		&Option{
			Name:            "l",
			HasArgument:     true,
			HasDefaultValue: false,
			Doc:             "use label for status (e.g. to label one experiment out of a whole batch)",
		},
		&Option{
			Name:            "db",
			HasArgument:     true,
			HasDefaultValue: false,
			Doc:             "use a specified DB class (can also set the \"db\" property)",
			Operation: func(context interface{}, value string) {
				props, _ := context.(Properties)
				props.Add(PropertyDB, value)
			},
		},
		&Option{
			Name:            "table",
			HasArgument:     true,
			HasDefaultValue: true,
			DefaultValue:    PropertyTableNameDefault,
			Doc:             "use the table name instead of the default %s",
			Operation: func(context interface{}, value string) {
				props, _ := context.(Properties)
				props.Add(PropertyTableName, value)
			},
		},
		&Option{
			Name:            "x",
			HasArgument:     true,
			HasDefaultValue: true,
			DefaultValue:    "info",
			Doc:             "specify the level name of log output (default: info)",
			Operation: func(context interface{}, value string) {
				levelName := strings.ToLower(value)
				level, ok := nameToLevels[levelName]
				if !ok {
					ExitOnError("invalid log level name: %s", value)
				}
				logLevel = level
			},
		},
		&Option{
			Name:            "h",
			HasArgument:     false,
			HasDefaultValue: false,
			Doc:             "show this help message and exit",
			Operation: func(context interface{}, value string) {
				Usage()
				os.Exit(0)
			},
		},
		&Option{
			Name:            "help",
			HasArgument:     false,
			HasDefaultValue: false,
			Doc:             "show this help message and exit",
			Operation: func(context interface{}, value string) {
				Usage()
				os.Exit(0)
			},
		},
		&Option{
			Name:            "v",
			HasArgument:     false,
			HasDefaultValue: false,
			Doc:             "show the version number and exit",
			Operation: func(context interface{}, value string) {
				Version()
				os.Exit(0)
			},
		},
		&Option{
			Name:            "version",
			HasArgument:     false,
			HasDefaultValue: false,
			Doc:             "show the version number and exit",
			Operation: func(context interface{}, value string) {
				Version()
				os.Exit(0)
			},
		},
	}
	Options = make(map[string]*Option)

	ProgramName = ""
	MainVersion = "0.1.0"
)
var (
	Error              = errors.New("The operation failed.")
	NotFound           = errors.New("The requested record was not found.")
	NotImplemented     = errors.New("The operation is not implemented for the current binding.")
	UnexpectedState    = errors.New("The operation reported success, but the result was not as expected.")
	BadRequest         = errors.New("The request was not valid.")
	Forbidden          = errors.New("The operation was forbidden.")
	ServiceUnavailable = errors.New("Dependent service for the current binding is not available.")
)
var (
	MeasurementExporters map[string]MakeMeasurementExporterFunc
)
var (
	Suffixes = []string{"th", "st", "nd", "rd", "th", "th", "th", "th", "th", "th"}
)
var (
	Workloads map[string]MakeWorkloadFunc
)

Functions

func ConcatFieldsStr

func ConcatFieldsStr(fields []string) string

func ConcatKVStr

func ConcatKVStr(values KVMap) string

func Debugf

func Debugf(format string, args ...interface{})

func EPrintf

func EPrintf(format string, args ...interface{})

func Errorf

func Errorf(format string, args ...interface{})

func ExitOnError

func ExitOnError(format string, args ...interface{})

func Flogf

func Flogf(w io.Writer, level LogLevelType, format string, args ...interface{})

func Infof

func Infof(format string, args ...interface{})

func Log

func Log(level LogLevelType, format string, args ...interface{})

func LogProperties

func LogProperties(p Properties)

func Main

func Main()

func MillisecondToNanosecond

func MillisecondToNanosecond(millis int64) int64

func MillisecondToSecond

func MillisecondToSecond(millis int64) int64

func NanosecondToMicrosecond

func NanosecondToMicrosecond(nano int64) int64

func NanosecondToMillisecond

func NanosecondToMillisecond(nano int64) int64

func NowMS

func NowMS() int64

func NowNS

func NowNS() int64

func Printf

func Printf(format string, args ...interface{})

func PromptPrintf

func PromptPrintf(format string, args ...interface{})

func RandomBytes

func RandomBytes(length int64) []byte

func SecondToNanosecond

func SecondToNanosecond(second int64) int64

func SetMeasurementProperties

func SetMeasurementProperties(props Properties)

func Usage

func Usage()

func Verbosef

func Verbosef(format string, args ...interface{})

func Version

func Version()

func Warnf

func Warnf(format string, args ...interface{})

Types

type Arguemnts

type Arguemnts struct {
	Command  string
	Database string
	Options  map[string]string
	Properties
}

func ParseArgs

func ParseArgs() *Arguemnts

type BasicDB

type BasicDB struct {
	*DBBase
	// contains filtered or unexported fields
}

A simple DB implementation that just prints out the requested operations, instead of doing them against a real database.

func NewBasicDB

func NewBasicDB() *BasicDB

func (*BasicDB) Cleanup

func (self *BasicDB) Cleanup() error

func (*BasicDB) Delay

func (self *BasicDB) Delay()

func (*BasicDB) Delete

func (self *BasicDB) Delete(table string, key string) StatusType

Delete a record from the database.

func (*BasicDB) Init

func (self *BasicDB) Init() error

Initialize any state for this DB.

func (*BasicDB) Insert

func (self *BasicDB) Insert(table string, key string, values KVMap) StatusType

Insert a record in the database.

func (*BasicDB) Read

func (self *BasicDB) Read(table string, key string, fields []string) (KVMap, StatusType)

Read a record from the database.

func (*BasicDB) Scan

func (self *BasicDB) Scan(table string, startKey string, recordCount int64, fields []string) ([]KVMap, StatusType)

Perform a range scan for a set of records in the database.

func (*BasicDB) Update

func (self *BasicDB) Update(table string, key string, values KVMap) StatusType

Update a record in the database.

type Binary

type Binary []byte

Binary represents an arbitrary binary value (byte array).

func (Binary) Value

func (self Binary) Value() (driver.Value, error)

type Client

type Client interface {
	Main()
}

type ClientBase

type ClientBase struct {
	Args           *Arguemnts
	DoTransactions bool
}

func NewClientBase

func NewClientBase(args *Arguemnts) *ClientBase

func (*ClientBase) CheckProperties

func (self *ClientBase) CheckProperties()

func (*ClientBase) Main

func (self *ClientBase) Main()

type ConstantOccupancyWorkload

type ConstantOccupancyWorkload struct {
	*CoreWorkload
	// contains filtered or unexported fields
}

A disk-fragmenting workload. Properties to control the client:

disksize: how many bytes of storage can the disk store? (default: 100,000,000)
occupancy: what fraction of the available storage should be used? (default: 0.9)
requestdistribution: what distribution should be used to select the records
                     to operate on - uniform, zipfian or latest
                     (default: histogram)

See also: Russell Sears, Catharine van Ingen. Fragmentation in Large Object Repositories (https://database.cs.wisc.edu/cidr/cidr2007/papers/cidr07p34.pdf), CIDR 2007. [Presentation (https://database.cs.wisc.edu/cidr/cidr2007/slides/p34-sears.ppt)]

func NewConstantOccupancyWorkload

func NewConstantOccupancyWorkload() *ConstantOccupancyWorkload

func (*ConstantOccupancyWorkload) Init

func (self *ConstantOccupancyWorkload) Init(p Properties) (err error)

type CoreWorkload

type CoreWorkload struct {
	// contains filtered or unexported fields
}

CoreWorkload represents the core benchmark scenario. It's a set of clients doing simple CRUD operations. The relative proportion of different kinds of operations, and other properties of the workload, are controlled by parameters specified at runtime. Properties to control the client:

fieldcount: the number of fields in a record (default: 10)
fieldlength: the size of each field (default: 100)
readallfields: should reads read all fields (true) or just one (false)
               (default: true)
writeallfields: should updates and read/modify/writes update all fields
                (true) or just one (false) (default: false)
readproportion: what proportion of operations should be reads
                (default: 0.95)
updateproportion: what proportion of operations should be updates
                  (default: 0.05)
insertproportion: what proportion of operations should be inserts
                  (default: 0)
scanproportion: what proportion of operations should be scans (default: 0)
readmodifywriteproportion: what proportion of operations should be read a
                           record, modify it, write it back (default: 0)
requestdistribution: what distribution should be used to select the records
                     to operate on - uniform, zipfian, hotspot or latest
                     (default: uniform)
maxscanlength: for scans, what is the maximum number of records to scan
               (default: 1000)
scanlengthdistribution: for scans, what distribution should be used to
                        choose the number of records to scan, for
                        each scan, between 1 and maxscanlength
                        (default: uniform)
insertorder: should records be inserted in order by key ("ordered"), or in
             hashed order ("hashed") (default: hashed)

func NewCoreWorkload

func NewCoreWorkload() *CoreWorkload

func (*CoreWorkload) Cleanup

func (self *CoreWorkload) Cleanup() error

func (*CoreWorkload) DoInsert

func (self *CoreWorkload) DoInsert(db DB, object interface{}) bool

Do one insert operation. Because it will be called concurrently from multiple client goroutines, this function must be safe for concurrent use. Avoid locking, however, or the goroutines will block waiting for each other and it will be difficult to reach the target throughput. Ideally, this function would have no side effects other than DB operations.

func (*CoreWorkload) DoTransaction

func (self *CoreWorkload) DoTransaction(db DB, object interface{}) bool

Do one transaction operation. Because it will be called concurrently from multiple client goroutines, this function must be safe for concurrent use. Avoid locking, however, or the goroutines will block waiting for each other and it will be difficult to reach the target throughput. Ideally, this function would have no side effects other than DB operations.

func (*CoreWorkload) DoTransactionInsert

func (self *CoreWorkload) DoTransactionInsert(db DB)

func (*CoreWorkload) DoTransactionRead

func (self *CoreWorkload) DoTransactionRead(db DB)

func (*CoreWorkload) DoTransactionReadModifyWrite

func (self *CoreWorkload) DoTransactionReadModifyWrite(db DB)

func (*CoreWorkload) DoTransactionScan

func (self *CoreWorkload) DoTransactionScan(db DB)

func (*CoreWorkload) DoTransactionUpdate

func (self *CoreWorkload) DoTransactionUpdate(db DB)

func (*CoreWorkload) Init

func (self *CoreWorkload) Init(p Properties) error

Initialize the scenario. Called once, in the main routine, before any operations are started.

func (*CoreWorkload) InitRoutine

func (self *CoreWorkload) InitRoutine(p Properties) (interface{}, error)

type DB

type DB interface {
	// Set the properties for this DB.
	SetProperties(p Properties)

	// Get the properties for this DB.
	GetProperties() Properties

	// Initialize any state for this DB.
	// Called once per DB instance; there is one DB instance per client routine.
	Init() error

	// Cleanup any state for this DB.
	// Called once per DB instance; there is one DB instance per client routine.
	Cleanup() error

	// Read a record from the database.
	// Each field/value pair from the result will be returned.
	Read(table string, key string, fields []string) (KVMap, StatusType)

	// Perform a range scan for a set of records in the database.
	// Each field/value pair from the result will be returned.
	Scan(table string, startKey string, recordCount int64, fields []string) ([]KVMap, StatusType)

	// Update a record in the database.
	// Any field/value pairs in the specified values will be written into
	// the record with the specified record key, overwriting any existing
	// values with the same field name.
	Update(table string, key string, values KVMap) StatusType

	// Insert a record in the database. Any field/value pairs in the specified
	// values will be written into the record with the specified record key.
	Insert(table string, key string, values KVMap) StatusType

	// Delete a record from the database.
	Delete(table string, key string) StatusType
}

DB is a layer for accessing a database to be benchmarked. Each routine in the client is given its own instance of whatever DB type is to be used in the test. This type should be constructed using a no-argument constructor so that it can be loaded dynamically; any argument-based initialization should be done in Init().

Note that YABF does not make any use of the return codes returned by this class. Instead, it keeps a count of the return values and presents them to the user.

The semantics of methods such as Insert, Update and Delete vary from database to database. In particular, operations may or may not be durable once these methods commit, and some systems may return 'success' regardless of whether or not a tuple with a matching key existed before the call. Rather than dictate the exact semantics of these methods, we recommend you either implement them to match the database's default semantics, or the semantics of your target application. For the sake of comparison between experiments we also recommend you explain the semantics you chose when presenting performance results.

func NewDB

func NewDB(database string, props Properties) (DB, error)

type DBBase

type DBBase struct {
	// contains filtered or unexported fields
}

func NewDBBase

func NewDBBase() *DBBase

func (*DBBase) GetProperties

func (self *DBBase) GetProperties() Properties

func (*DBBase) SetProperties

func (self *DBBase) SetProperties(p Properties)

type DBWrapper

type DBWrapper struct {
	DB
	// contains filtered or unexported fields
}

Wrapper around a "real" DB that measures latencies and counts return codes. Also reports latency separately for successful and failed operations.

func NewDBWrapper

func NewDBWrapper(db DB) *DBWrapper

func (*DBWrapper) Cleanup

func (self *DBWrapper) Cleanup() error

Cleanup any state for this DB.

func (*DBWrapper) Delete

func (self *DBWrapper) Delete(table string, key string) StatusType

Delete a record from the database.

func (*DBWrapper) Init

func (self *DBWrapper) Init() (err error)

func (*DBWrapper) Insert

func (self *DBWrapper) Insert(table string, key string, values KVMap) StatusType

Insert a record in the database.

func (*DBWrapper) Read

func (self *DBWrapper) Read(table string, key string, fields []string) (KVMap, StatusType)

Read a record from the database.

func (*DBWrapper) Scan

func (self *DBWrapper) Scan(table string, startKey string, recordCount int64, fields []string) ([]KVMap, StatusType)

Perform a range scan for a set of records in the database.

func (*DBWrapper) Update

func (self *DBWrapper) Update(table string, key string, values KVMap) StatusType

Update a record in the database.

type DefaultMeasurements

type DefaultMeasurements struct {
	// contains filtered or unexported fields
}

func NewDefaultMeasurements

func NewDefaultMeasurements(props Properties) *DefaultMeasurements

func (*DefaultMeasurements) ExportMeasurements

func (self *DefaultMeasurements) ExportMeasurements(exporter MeasurementExporter) (err error)

func (*DefaultMeasurements) GetSummary

func (self *DefaultMeasurements) GetSummary() string

func (*DefaultMeasurements) Measure

func (self *DefaultMeasurements) Measure(operation string, latency int64)

Report a single value of a single metric. E.g. for read latency, operation="READ" and latency is the measured value.

func (*DefaultMeasurements) ReportStatus

func (self *DefaultMeasurements) ReportStatus(operation string, status StatusType)

type GoodBadUglyDB

type GoodBadUglyDB struct {
	*DBBase
	// contains filtered or unexported fields
}

A dummy DB implementation that simulates good, bad, and ugly operation delays (configurable via the gbudb.delays property) instead of running operations against a real database.

func NewGoodBadUglyDB

func NewGoodBadUglyDB() *GoodBadUglyDB

func (*GoodBadUglyDB) Cleanup

func (self *GoodBadUglyDB) Cleanup() error

func (*GoodBadUglyDB) Delete

func (self *GoodBadUglyDB) Delete(table string, key string) StatusType

Delete a record from the database.

func (*GoodBadUglyDB) Init

func (self *GoodBadUglyDB) Init() error

Initialize any state for this DB.

func (*GoodBadUglyDB) Insert

func (self *GoodBadUglyDB) Insert(table string, key string, value KVMap) StatusType

Insert a record in the database.

func (*GoodBadUglyDB) Read

func (self *GoodBadUglyDB) Read(table string, key string, fields []string) (KVMap, StatusType)

Read a record from the database.

func (*GoodBadUglyDB) Scan

func (self *GoodBadUglyDB) Scan(table string, startKey string, recordCount int64, fields []string) ([]KVMap, StatusType)

Perform a range scan for a set of records in the database.

func (*GoodBadUglyDB) Update

func (self *GoodBadUglyDB) Update(table string, key string, values KVMap) StatusType

Update a record in the database.

type HdrHistogramLogReader

type HdrHistogramLogReader struct {
	// contains filtered or unexported fields
}

func NewHdrHistogramLogReader

func NewHdrHistogramLogReader(r io.Reader) *HdrHistogramLogReader

func (*HdrHistogramLogReader) NextHistogram

func (self *HdrHistogramLogReader) NextHistogram() (*hdrhistogram.Histogram, error)

type HdrHistogramLogWriter

type HdrHistogramLogWriter struct {
	// contains filtered or unexported fields
}

func NewHdrHistogramLogWriter

func NewHdrHistogramLogWriter(w io.Writer) *HdrHistogramLogWriter

func (*HdrHistogramLogWriter) OutputHistogram

func (self *HdrHistogramLogWriter) OutputHistogram(h *hdrhistogram.Histogram) error

type JSONArrayMeasurementExporter

type JSONArrayMeasurementExporter struct {
	io.WriteCloser
	// contains filtered or unexported fields
}

Export measurements into a machine-readable JSON array of measurement objects.

func NewJSONArrayMeasurementExporter

func NewJSONArrayMeasurementExporter(w io.WriteCloser) *JSONArrayMeasurementExporter

func (*JSONArrayMeasurementExporter) Close

func (self *JSONArrayMeasurementExporter) Close() error

func (*JSONArrayMeasurementExporter) Write

func (self *JSONArrayMeasurementExporter) Write(metric string, measurement string, v interface{}) error

type JSONMeasurementExporter

type JSONMeasurementExporter struct {
	io.WriteCloser
	// contains filtered or unexported fields
}

Export measurements into a machine-readable JSON file.

func NewJSONMeasurementExporter

func NewJSONMeasurementExporter(w io.WriteCloser) *JSONMeasurementExporter

func (*JSONMeasurementExporter) Close

func (self *JSONMeasurementExporter) Close() error

func (*JSONMeasurementExporter) Write

func (self *JSONMeasurementExporter) Write(metric string, measurement string, v interface{}) error

type KVMap

type KVMap map[string]Binary

KVMap represents the key-value data of a record, used as the result and input type of db operations.

type Loader

type Loader struct {
	*ClientBase
}

func NewLoader

func NewLoader(args *Arguemnts) *Loader

type LogLevelType

type LogLevelType uint8
const (
	LevelVerbose LogLevelType = 50
	LevelDebug   LogLevelType = 40
	LevelInfo    LogLevelType = 30
	LevelWarn    LogLevelType = 20
	LevelError   LogLevelType = 10
	LevelQuiet   LogLevelType = 0
)

type MakeDBFunc

type MakeDBFunc func() DB

type MakeMeasurementExporterFunc

type MakeMeasurementExporterFunc func(w io.WriteCloser) MeasurementExporter

type MakeWorkloadFunc

type MakeWorkloadFunc func() Workload

type MeasurementExporter

type MeasurementExporter interface {
	// Write a measurement to the exported format. v should be int64 or float64
	Write(metric string, measurement string, v interface{}) error
	io.Closer
}

Used to export the collected measurements into a useful format, for example human-readable text or machine-readable JSON.

func NewMeasurementExporter

func NewMeasurementExporter(className string, w io.WriteCloser) (MeasurementExporter, error)

type MeasurementType

type MeasurementType uint8
const (
	MeasurementHistogram MeasurementType = 1 + iota
	MeasurementHDRHistogram
	MeasurementHDRHistogramAndHistogram
	MeasurementHDRHistogramAndRaw
	MeasurementTimeSeries
	MeasurementRaw
)

type Measurements

type Measurements interface {
	// Report a single value of a single metric. E.g. for read latency,
	// operation="READ" and latency is the measured value.
	Measure(operation string, latency int64)

	// Return a one line summary of the measurements.
	GetSummary() string

	// Report a return code for a single DB operation.
	ReportStatus(operation string, status StatusType)

	// Export the current measurements to a suitable format.
	ExportMeasurements(exporter MeasurementExporter) error
}

Collects latency measurements, and reports them when requested.

func GetMeasurements

func GetMeasurements() Measurements
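The Measurements contract above can be sketched with a toy stand-in for DefaultMeasurements (the type and field names below are illustrative, not the package's actual implementation): it tracks the count and running sum of latencies per operation, guarded by a mutex since Measure is called concurrently from every worker routine.

```go
package main

import (
	"fmt"
	"sync"
)

// simpleMeasurements is a minimal stand-in for a Measurements implementation.
type simpleMeasurements struct {
	mu    sync.Mutex
	count map[string]int64
	sum   map[string]int64
}

func newSimpleMeasurements() *simpleMeasurements {
	return &simpleMeasurements{count: map[string]int64{}, sum: map[string]int64{}}
}

// Measure records one latency sample (microseconds) for an operation,
// matching the Measure(operation, latency) contract above.
func (m *simpleMeasurements) Measure(operation string, latency int64) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.count[operation]++
	m.sum[operation] += latency
}

// GetSummary returns a one-line summary, as the interface requires.
func (m *simpleMeasurements) GetSummary() string {
	m.mu.Lock()
	defer m.mu.Unlock()
	s := ""
	for op, n := range m.count {
		s += fmt.Sprintf("[%s: %d ops, avg %d us] ", op, n, m.sum[op]/n)
	}
	return s
}

func main() {
	m := newSimpleMeasurements()
	m.Measure("READ", 100)
	m.Measure("READ", 300)
	fmt.Println(m.GetSummary()) // prints the READ count and average latency
}
```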

type OneMeasurement

type OneMeasurement interface {
	Measure(latency int64)
	GetName() string
	GetSummary() string
	// Report a return code.
	ReportStatus(status StatusType)
	// Exports the current measurements to a suitable format.
	ExportMeasurements(exporter MeasurementExporter) error
}

A single measured metric (such as READ LATENCY)

func MustNewMeasurement

func MustNewMeasurement(m OneMeasurement, err error) OneMeasurement

type OneMeasurementBase

type OneMeasurementBase struct {
	Name            string
	MeasureLock     *sync.Mutex
	ReturnCodes     map[StatusType]uint32
	ReturnCodesLock *sync.Mutex
}

func NewOneMeasurementBase

func NewOneMeasurementBase(name string) *OneMeasurementBase

func (*OneMeasurementBase) ExportStatusCounts

func (self *OneMeasurementBase) ExportStatusCounts(exporter MeasurementExporter) error

func (*OneMeasurementBase) GetName

func (self *OneMeasurementBase) GetName() string

func (*OneMeasurementBase) ReportStatus

func (self *OneMeasurementBase) ReportStatus(status StatusType)

type OneMeasurementHdrHistogram

type OneMeasurementHdrHistogram struct {
	*OneMeasurementBase
	// contains filtered or unexported fields
}

Take measurements and maintain a HdrHistogram of a given metric, such as READ LATENCY.

func NewOneMeasurementHdrHistogram

func NewOneMeasurementHdrHistogram(name string, props Properties) (*OneMeasurementHdrHistogram, error)

func (*OneMeasurementHdrHistogram) ExportMeasurements

func (self *OneMeasurementHdrHistogram) ExportMeasurements(exporter MeasurementExporter) (err error)

This is called from the main goroutine, on orderly termination.

func (*OneMeasurementHdrHistogram) GetSummary

func (self *OneMeasurementHdrHistogram) GetSummary() string

This is called periodically from the status goroutine. There's a single status goroutine per client process. We optionally serialize the interval to log on this opportunity.

func (*OneMeasurementHdrHistogram) Measure

func (self *OneMeasurementHdrHistogram) Measure(latency int64)

It appears latency is reported in microseconds.

type OneMeasurementHistogram

type OneMeasurementHistogram struct {
	*OneMeasurementBase
	// contains filtered or unexported fields
}

Take measurements and maintain a histogram of a given metric, such as READ LATENCY.

func NewOneMeasurementHistogram

func NewOneMeasurementHistogram(name string, props Properties) (*OneMeasurementHistogram, error)

func (*OneMeasurementHistogram) ExportMeasurements

func (self *OneMeasurementHistogram) ExportMeasurements(exporter MeasurementExporter) (err error)

func (*OneMeasurementHistogram) GetSummary

func (self *OneMeasurementHistogram) GetSummary() string

func (*OneMeasurementHistogram) Measure

func (self *OneMeasurementHistogram) Measure(latency int64)

type OneMeasurementRaw

type OneMeasurementRaw struct {
	*OneMeasurementBase
	// contains filtered or unexported fields
}

Record a series of measurements as raw data points without down-sampling, optionally writing them to an output file when configured.

func NewOneMeasurementRaw

func NewOneMeasurementRaw(name string, props Properties) (*OneMeasurementRaw, error)

func (*OneMeasurementRaw) ExportMeasurements

func (self *OneMeasurementRaw) ExportMeasurements(exporter MeasurementExporter) (err error)

func (*OneMeasurementRaw) GetSummary

func (self *OneMeasurementRaw) GetSummary() string

func (*OneMeasurementRaw) Measure

func (self *OneMeasurementRaw) Measure(latency int64)

type OneMeasurementTimeSeries

type OneMeasurementTimeSeries struct {
	*OneMeasurementBase
	// contains filtered or unexported fields
}

A time series measurement of a metric, such as READ LATENCY.

func NewOneMeasurementTimeSeries

func NewOneMeasurementTimeSeries(name string, props Properties) (*OneMeasurementTimeSeries, error)

func (*OneMeasurementTimeSeries) CheckEndOfUnit

func (self *OneMeasurementTimeSeries) CheckEndOfUnit(forceEnd bool)

func (*OneMeasurementTimeSeries) ExportMeasurements

func (self *OneMeasurementTimeSeries) ExportMeasurements(exporter MeasurementExporter) (err error)

func (*OneMeasurementTimeSeries) GetSummary

func (self *OneMeasurementTimeSeries) GetSummary() string

func (*OneMeasurementTimeSeries) Measure

func (self *OneMeasurementTimeSeries) Measure(latency int64)

type Option

type Option struct {
	Name            string
	HasArgument     bool
	HasDefaultValue bool
	DefaultValue    string
	Doc             string
	Operation       OptionOperationFunc
}

type OptionOperationFunc

type OptionOperationFunc func(context interface{}, value string)

type Properties

type Properties map[string]string

func GetMeasurementProperties

func GetMeasurementProperties() Properties

func LoadProperties

func LoadProperties(fileName string) (Properties, error)

func NewProperties

func NewProperties() Properties

func (Properties) Add

func (self Properties) Add(key, value string)

func (Properties) Get

func (self Properties) Get(key string) string

func (Properties) GetDefault

func (self Properties) GetDefault(key string, defaultValue string) string

func (Properties) Merge

func (self Properties) Merge(other Properties) Properties
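Properties is a plain string-to-string map, so its behavior is easy to mirror. The following is a self-contained sketch re-implementing the documented signatures for illustration (the method bodies are assumptions about the semantics, not the package's actual source); the Merge call at the end shows the typical overlay of command-line properties onto file-loaded ones.

```go
package main

import "fmt"

// Properties mirrors yabf's Properties type: a plain string-to-string map.
type Properties map[string]string

func NewProperties() Properties { return make(Properties) }

func (self Properties) Add(key, value string) { self[key] = value }

func (self Properties) Get(key string) string { return self[key] }

// GetDefault returns the stored value, or defaultValue when the key is absent.
func (self Properties) GetDefault(key string, defaultValue string) string {
	if v, ok := self[key]; ok {
		return v
	}
	return defaultValue
}

// Merge overlays other onto self and returns self, so calls can chain.
func (self Properties) Merge(other Properties) Properties {
	for k, v := range other {
		self[k] = v
	}
	return self
}

func main() {
	base := NewProperties()
	base.Add("recordcount", "1000")
	cli := Properties{"recordcount": "5000", "threads": "8"}
	merged := base.Merge(cli) // later properties override earlier ones
	fmt.Println(merged.Get("recordcount"), merged.GetDefault("target", "0"))
}
```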

type RawDataPoint

type RawDataPoint struct {
	// contains filtered or unexported fields
}

One raw data point, with two fields: the timestamp (ms) at which the data point was inserted, and the value.

func NewRawDataPoint

func NewRawDataPoint(value int64) *RawDataPoint

type RawDataPointSlice

type RawDataPointSlice []*RawDataPoint

func (RawDataPointSlice) Len

func (self RawDataPointSlice) Len() int

func (RawDataPointSlice) Less

func (self RawDataPointSlice) Less(i, j int) bool

func (RawDataPointSlice) Swap

func (self RawDataPointSlice) Swap(i, j int)

type Runner

type Runner struct {
	*ClientBase
}

func NewRunner

func NewRunner(args *Arguemnts) *Runner

type SeriesUnit

type SeriesUnit struct {
	Time    int64
	Average float64
}

func NewSeriesUnit

func NewSeriesUnit(t int64, average float64) *SeriesUnit

type Shell

type Shell struct {
	// contains filtered or unexported fields
}

A simple command line client to a database, using the appropriate DB implementation.

func NewShell

func NewShell(args *Arguemnts) *Shell

func (*Shell) Main

func (self *Shell) Main()

type StatusReporter

type StatusReporter struct {
	// contains filtered or unexported fields
}

A routine to periodically show the status of the experiment, to reassure you that progress is being made.

func NewStatusReporter

func NewStatusReporter(workers []*Worker, stopCh chan int, waitGroup *sync.WaitGroup, standardStatus bool, intervalSeconds int64, label string) *StatusReporter

type StatusType

type StatusType uint8
const (
	StatusOK StatusType = 1 + iota
	StatusError
	StatusNotFound
	StatusNotImplemented
	StatusUnexpectedState
	StatusBadRequest
	StatusForbidden
	StatusServiceUnavailable
)

func (StatusType) String

func (self StatusType) String() string

type TextMeasurementExporter

type TextMeasurementExporter struct {
	io.WriteCloser
	// contains filtered or unexported fields
}

Write human-readable text. Tries to emulate the previous print report method.

func NewTextMeasurementExporter

func NewTextMeasurementExporter(w io.WriteCloser) *TextMeasurementExporter

func (*TextMeasurementExporter) Close

func (self *TextMeasurementExporter) Close() error

func (*TextMeasurementExporter) Write

func (self *TextMeasurementExporter) Write(metric string, measurement string, v interface{}) error
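A minimal stand-in for TextMeasurementExporter shows how the MeasurementExporter interface composes: embed an io.WriteCloser and add the three-argument Write. The `[METRIC], Measurement, Value` line format below is an assumption modeled on YCSB's text report, not necessarily the package's exact output.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// MeasurementExporter mirrors the interface documented above.
type MeasurementExporter interface {
	Write(metric string, measurement string, v interface{}) error
	io.Closer
}

// formatMeasurement renders one report line (format is an assumption).
func formatMeasurement(metric, measurement string, v interface{}) string {
	return fmt.Sprintf("[%s], %s, %v\n", metric, measurement, v)
}

// textExporter is a minimal stand-in for TextMeasurementExporter: it
// embeds io.WriteCloser, so Close is inherited from the wrapped writer.
type textExporter struct {
	io.WriteCloser
}

func (e *textExporter) Write(metric, measurement string, v interface{}) error {
	_, err := io.WriteString(e.WriteCloser, formatMeasurement(metric, measurement, v))
	return err
}

func main() {
	var exporter MeasurementExporter = &textExporter{os.Stdout}
	exporter.Write("READ", "AverageLatency(us)", 312.5) // v is int64 or float64
	exporter.Write("READ", "Operations", int64(1000))
	exporter.Close()
}
```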

type TwoInOneMeasurement

type TwoInOneMeasurement struct {
	*OneMeasurementBase
	// contains filtered or unexported fields
}

Delegates to 2 measurement instances.

func NewTwoInOneMeasurement

func NewTwoInOneMeasurement(name string, thing1, thing2 OneMeasurement) *TwoInOneMeasurement

func (*TwoInOneMeasurement) ExportMeasurements

func (self *TwoInOneMeasurement) ExportMeasurements(exporter MeasurementExporter) (err error)

This is called from a main goroutine, on orderly termination.

func (*TwoInOneMeasurement) GetSummary

func (self *TwoInOneMeasurement) GetSummary() string

This is called periodically from the status goroutine. There's a single status goroutine per client process. We optionally serialize the interval to log on this opportunity.

func (*TwoInOneMeasurement) Measure

func (self *TwoInOneMeasurement) Measure(latency int64)

It appears latency is reported in microseconds.

type Worker

type Worker struct {
	// contains filtered or unexported fields
}

A routine for executing transactions or data inserts against the database.

func NewWorker

func NewWorker(db DB, workload Workload, props Properties, doTransactions bool, opCount int64, targetPerThreadPerMS float64, stopCh chan int, resultCh chan int64) *Worker

type Workload

type Workload interface {
	// Initialize the scenario. Create any generators and other shared
	// objects here.
	// Called once in the main client routine, before any operations
	// are started.
	Init(p Properties) error

	// Initialize any state for a particular client routine.
	// Since the scenario object will be shared among all threads,
	// this is the place to create any state that is specific to one routine.
	// To be clear, this means the returned object should be created anew
	// on each call to InitRoutine(); do not return the same object multiple
	// times. The returned object will be passed to invocations of DoInsert()
	// and DoTransaction() for this routine. There should be no side effects
	// from this call; all state should be encapsulated in the returned object.
	// If you have no state to retain for this routine, return nil.
	// (But if you have no state to retain for this routine, probably
	// you don't need to override this function.)
	InitRoutine(p Properties) (interface{}, error)

	// Cleanup the scenario.
	// Called once, in the main client routine, after all operations
	// have completed.
	Cleanup() error

	// Do one insert operation. Because it will be called concurrently from
	// multiple routines, this function must be routine safe.
	// However, avoid synchronized, or the routines will block waiting for
	// each other, and it will be difficult to reach the target throughput.
	// Ideally, this function would have no side effects other than
	// DB operations and mutations on object. Mutations to object do not need
	// to be synchronized, since each routine has its own object instance.
	DoInsert(db DB, object interface{}) bool

	// Do one transaction operation. Because it will be called concurrently
	// from multiple client routines, this function must be routine safe.
	// However, avoid synchronized, or the routines will block waiting for
	// each other, and it will be difficult to reach the target throughput.
	// Ideally, this function would have no side effects other than
	// DB operations and mutations on object. Mutations to object do not need
	// to be synchronized, since each routine has its own object instance.
	DoTransaction(db DB, object interface{}) bool
}

Workload represents one experiment scenario. One object of this type will be instantiated and shared among all client routines. This type should be constructed using a no-argument constructor, so we can load it dynamically. Any argument-based initialization should be done by Init().

func NewWorkload

func NewWorkload(className string) (Workload, error)
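The per-routine state contract of InitRoutine can be sketched as follows. Every type here except the Workload method set (the DB stub, the workload and state names, the key format) is a hypothetical stand-in for illustration; the point is that InitRoutine returns a fresh object per call, so DoInsert mutates it without any locking.

```go
package main

import (
	"fmt"
	"sync"
)

// Minimal stand-ins for the yabf types a Workload touches (assumed shapes).
type StatusType uint8

const StatusOK StatusType = 1

type KVMap map[string][]byte
type Properties map[string]string

type DB interface {
	Insert(table string, key string, values KVMap) StatusType
}

// keyedWorkload hands each client routine its own key counter.
type keyedWorkload struct{}

type routineState struct{ nextKey int }

func (w *keyedWorkload) Init(p Properties) error { return nil }

// InitRoutine returns a new object on every call, as the contract requires.
func (w *keyedWorkload) InitRoutine(p Properties) (interface{}, error) {
	return &routineState{}, nil
}

func (w *keyedWorkload) Cleanup() error { return nil }

func (w *keyedWorkload) DoInsert(db DB, object interface{}) bool {
	state := object.(*routineState)
	key := fmt.Sprintf("user%d", state.nextKey)
	state.nextKey++ // per-routine mutation: no synchronization needed
	return db.Insert("usertable", key, KVMap{"field0": []byte("v")}) == StatusOK
}

// countingDB is a trivial DB stub that counts inserts.
type countingDB struct {
	mu sync.Mutex
	n  int
}

func (d *countingDB) Insert(table, key string, values KVMap) StatusType {
	d.mu.Lock()
	d.n++
	d.mu.Unlock()
	return StatusOK
}

func main() {
	w := &keyedWorkload{}
	db := &countingDB{}
	var wg sync.WaitGroup
	for r := 0; r < 2; r++ { // two client routines, each with its own state
		wg.Add(1)
		go func() {
			defer wg.Done()
			state, _ := w.InitRoutine(nil)
			for i := 0; i < 3; i++ {
				w.DoInsert(db, state)
			}
		}()
	}
	wg.Wait()
	fmt.Println(db.n) // 6 inserts in total
}
```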
