Published: Jan 12, 2024 License: Apache-2.0

loadshed

A load shedding tool kit for Go.

Overview

Load shedding is the practice of reducing work in a system that is near overload in order to protect the system from failure. This project presents a tool kit, or reference implementation, of load shedding for Go. I call this a tool kit because it contains common and re-usable components that can be configured or arranged in a system rather than a monolithic tool. Load shedding must always be tailored to a specific system and there are no safe defaults. This project aims to offer a structured approach to implementing your own load shedding.

Load shedding is usually determined by a collection of rules or policies rather than a single factor. The decision making can be separated into two categories: deterministic and probabilistic. Deterministic rules may be static or dynamic in nature and always result in a true or false indicator for shedding load. Probabilistic policies use estimations or predictions along with an aspect of chance or randomness to select requests to shed. Both deterministic and probabilistic rules are often used together to create a functioning system.

This tool kit comes with structured support for deterministic rules, probabilistic policies, and request categorization that can be used to prioritize traffic.

The Shedder Helper

The main interface of the project is a type called Shedder that arranges load shedding rules into a single, easier to use container:

import (
    "errors"
    "fmt"

    "github.com/kevinconway/loadshed/v2"
)

shedder := loadshed.NewShedder()
err := shedder.Do(ctx, myAction)
var rejectionInfo loadshed.ErrRejection
if err != nil && errors.As(err, &rejectionInfo) {
    fmt.Println(rejectionInfo)
}

The Shedder can be configured with any number of deterministic rules and probabilistic policies. If the Shedder determines that it should shed load then the action given to Do() is not performed and an error is returned that details the reasons for the rejection.

Shedder can be used directly or see the stdlib/net/http package for an example of how it can be more seamlessly integrated into a system as middleware.

Probabilistic Load Shedding

Probabilistic policies result in load shedding based on a target rate. For example, a system may be in a state where it needs to reject 50% of requests or 80% of requests to prevent failure. The most challenging aspect of probabilistic load shedding is determining the rejection rate, which must be tailored to each specific system and operation.

This project takes an opinionated and structured approach to defining rejection rates. Rejection rates are calculated as a function of the system's likelihood of failure. The likelihood of failure is calculated as a function of a specific resource's capacity utilization.

Capacity Usage Metrics

Inherent in the concept of load shedding is the concept of capacity, or the finite availability of a resource that is consumed when doing work. A capacity in this tool kit is modeled as:

type Capacity interface {
	Name(ctx context.Context) string
	Usage(ctx context.Context) float32
}

All metrics used in probabilistic load shedding policies must be mapped to some upper limit so that their usage can be reported as a value between 0 and 1, representing the percent utilization. Some metrics have a natural capacity, such as CPU or memory, that can be used directly. Others such as max concurrency or queue depth involve an arbitrarily defined limit based on the system's constraints. For example, a queue depth capacity might look like:

type CapacityQueueDepth[T any] struct {
    Queue []T
    Limit int
}
func (self *CapacityQueueDepth[T]) Name(ctx context.Context) string {
    return "QUEUE DEPTH"
}
func (self *CapacityQueueDepth[T]) Usage(ctx context.Context) float32 {
    return float32(len(self.Queue)) / float32(self.Limit)
}

The limit value would be set based on either the design of the system or a value discovered through testing.

This project contains some pre-built capacity implementations for max concurrency, error rate, landing rate, and latency or execution time.

Failure Probability From Capacity

Once a metric is reported as a percent utilization then the next step is to calculate the likelihood that the system or next request will fail due to resource exhaustion. Failure probabilities are modeled as:

type FailureProbability interface {
	Capacity
	Likelihood(ctx context.Context) float32
}

The likelihood calculation is highly dependent on the specific details of a system and should be determined through extensive testing. For example, if a test demonstrates that a system has an increasing risk of catastrophic failure once the CPU usage exceeds 80% and that risk increases with each point of utilization beyond 80% then a potential calculation could be:

type CPUFailureProbability struct {
    *CPUCapacity
}
func (self *CPUFailureProbability) Likelihood(ctx context.Context) float32 {
    usage := self.CPUCapacity.Usage(ctx)
    if usage < .8 {
        // No chance of failure due to CPU when under 80% utilization
        return 0.0
    }
    return (usage - .8) / .2 // Linear interpolation between .8 and 1.0
}

The example calculates a zero percent chance of failure for any usage value below 80% and a linearly increasing risk proportional to how close the usage is to 100% when over 80%. Note that this type of calculation does not fit all systems. For systems where a linear progression is desirable, this formula can be applied to capacities using:

loadshed.NewFailureProbabilityCurveLinear(cap Capacity, lowerThreshold float32, upperThreshold float32, exponent float32)

For any calculation, the goal is to convert a percent utilization into a percent chance of failure.

Rejection Rate From Failure Probability

The final piece of probabilistic policies is to calculate a rejection rate based on the likelihood of failure. Rejection rates are modeled as:

type RejectionRate interface {
	FailureProbability
	Rate(ctx context.Context) float32
}

A rate calculation requires detailed knowledge of the system and there is no safe default formula. In systems where resources are consumed equally by all requests, it may be possible to use the failure probability directly as the rejection rate. This use case is covered by the NewRejectionRateCurveIdentity method that wraps any FailureProbability in a RejectionRate that simply returns the Likelihood() value.
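The mechanics of applying a rejection rate can be sketched independently of this library: draw a uniform random value and shed when it falls under the current rate. The function below is a hypothetical illustration of that idea, not the Shedder's actual implementation.

```go
package main

import (
	"fmt"
	"math/rand"
)

// shouldShed sheds a request when a uniform random draw falls below the
// current rejection rate. A rate of 0 never sheds; a rate of 1 always does.
func shouldShed(rate float32, random func() float32) bool {
	return random() < rate
}

func main() {
	// With a 50% rejection rate roughly half of the draws are shed.
	shed := 0
	for i := 0; i < 1000; i++ {
		if shouldShed(0.5, rand.Float32) {
			shed++
		}
	}
	fmt.Println("shed roughly half:", shed > 350 && shed < 650)
}
```

Injecting the random source, as OptionShedderRandom also allows, makes the decision testable with deterministic draws.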

Deterministic Load Shedding

Deterministic rules shed traffic based on binary decision making. This decision making is not necessarily simple or static and may include any amount of complexity, reference to external state, or even dynamically adjusted values within a system. The primary difference between deterministic rules and probabilistic policies is that deterministic rules come to a true/false conclusion rather than a rate or chance value. Deterministic rules are modeled as:

type Rule interface {
	Name(ctx context.Context) string
	Reject(ctx context.Context) bool
}

The Shedder applies all deterministic rules before applying any rejection rates. There are no pre-built or included rules in the project. Examples of deterministic policies to integrate include rate limiting, advanced queue management, and quota enforcement.

Request Priority Classification

A common practice for load shedding is to consider the priority, or classification, of a request and apply different rules based on that priority. The Shedder can be given an optional classifier in the form of:

type Classification string
type Classifier interface {
	Classify(ctx context.Context) Classification
}

A classification is a string identifier of the class or priority of the request. This project does not include a standard set of classifications so that they can be tailored to different use cases.

Note that the classifier is only given the request context and not a specific request type, such as *http.Request, so that the tool kit is more broadly applicable to different kinds of systems. As a consequence of this choice, any identifying information of a request that is required to determine the classification must be set in the context. For example, here is how priority might be set based on a target HTTP URL:

type ctxKeyType struct{}

var ctxKey = ctxKeyType{}

func URLExtractingMiddleware(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := r.Context()
        ctx = context.WithValue(ctx, ctxKey, r.URL.Path)
        h.ServeHTTP(w, r.WithContext(ctx))
    })
}
const (
    PriorityHigh loadshed.Classification = "HIGH"
    PriorityLow  loadshed.Classification = "LOW"
)
type URLClassifier struct{}
func (*URLClassifier) Classify(ctx context.Context) loadshed.Classification {
    path := ctx.Value(ctxKey)
    if path == nil {
        return PriorityLow
    }
    if strings.HasPrefix(path.(string), "/v2/important/endpoint") {
        return PriorityHigh
    }
    return PriorityLow
}

shedder := loadshed.NewShedder(
    loadshed.OptionShedderClassifier(&URLClassifier{}),
)
var FinalHandler = URLExtractingMiddleware(
    loadshedhttp.NewMiddleware(shedder)( // from the stdlib/net/http sub-package
        originalHandler,
    ),
)

Using Classification In Rejection Rates

The primary purpose of classifying requests is to create a priority hierarchy of requests that are shed at different rates. For example, a system may have LOW, NORMAL, HIGH, and CRITICAL classifications. As the system approaches a likelihood of failure it will start by rejecting only LOW priority requests. If the failure probability continues to increase then it will begin to reject NORMAL requests, and so on.

To support this use case the project includes a NewRejectionRateCurveByClassification constructor that converts a failure probability into a rejection rate using a unique converting function for each classification.
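The effect of per-classification curves can be illustrated without the library. The sketch below assumes each class gets its own lower threshold, so LOW traffic starts shedding at a lower failure probability than HIGH traffic; the thresholds and the rateFor helper are hypothetical.

```go
package main

import "fmt"

type Classification string

// rateFor maps a failure likelihood to a rejection rate using a
// per-class lower threshold: below the threshold nothing is shed, and
// above it the rate rises linearly to 1.
func rateFor(class Classification, likelihood float32) float32 {
	lower := map[Classification]float32{
		"LOW":  0.2,
		"HIGH": 0.8,
	}[class]
	if likelihood <= lower {
		return 0
	}
	if likelihood >= 1 {
		return 1
	}
	return (likelihood - lower) / (1 - lower)
}

func main() {
	// At a 50% failure probability LOW requests are shed, HIGH are not.
	fmt.Println(rateFor("LOW", 0.5) > 0, rateFor("HIGH", 0.5) == 0) // true true
}
```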

Standard Library HTTP Integration

As both an example integration and a helper for a common case, this project includes pre-built Shedder integrations for the standard library HTTP handler and client interfaces.

HTTP Server

The server integration operates as a middleware for http.Handler types and should work with any HTTP mux or framework that targets the standard library types.

import (
    "github.com/kevinconway/loadshed/v2"
    loadshedhttp "github.com/kevinconway/loadshed/v2/stdlib/net/http"
)

shedder := loadshed.NewShedder()
handler = loadshedhttp.NewMiddleware(shedder)(
    handler,
)

The default behavior is to respond with a 503 Service Unavailable response code when a request is rejected due to load shedding. This can be modified using HandlerOptionCallback to set a callback that is executed when a request is rejected. The callback takes the form of an http.Handler and is allowed to perform any logic and respond with any code. Callbacks can also use FromHandlerContext to get the rejection details to, for example, log the reason why the request was rejected.

If one of the capacities used for load shedding is an error rate then HandlerOptionErrCodes must be used when constructing the middleware. This option defines which HTTP status codes are considered errors from the perspective of an error rate. For example, a set of error codes might include 5xx but exclude 4xx.
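A predicate of the kind described, counting 5xx as errors while excluding 4xx, could be as simple as the following sketch (the function name is illustrative, not part of the package API):

```go
package main

import "fmt"

// isErrorCode reports whether a status code should count as an error for
// an error-rate capacity: 5xx responses do, 4xx responses do not.
func isErrorCode(status int) bool {
	return status >= 500 && status <= 599
}

func main() {
	fmt.Println(isErrorCode(503), isErrorCode(404)) // true false
}
```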

HTTP Client

The client integration works very similarly to the server integration except that it targets the http.RoundTripper interface used by the http.Client.

import (
    "github.com/kevinconway/loadshed/v2"
    loadshedhttp "github.com/kevinconway/loadshed/v2/stdlib/net/http"
)

shedder := loadshed.NewShedder()
transport := http.DefaultTransport.Clone()
transport = loadshedhttp.NewTransportMiddleware(shedder)(transport)
client := &http.Client{
    Transport: transport,
}

When a request is rejected due to load shedding the client calls will return an instance of loadshed.ErrRejection which contains the details behind why it was rejected. Similar to the server, a custom callback can be installed to handle the rejection error using TransportOptionCallback.

If one of the capacities used for load shedding is an error rate then TransportOptionErrorCodes must be used when constructing the middleware. This option defines which HTTP status codes are considered errors from the perspective of an error rate.

Installing

go get github.com/kevinconway/loadshed/v2

Development

This project has no hard dependencies on any build tools other than Go. You should be able to run go test for any changes and see the results.

If you prefer, the project includes a Makefile with the following rules:

  • update - Update dependencies in the go.mod file.
  • bin - Download all optional build and test tools.
  • fmt - Run goimports on all Go source files.
  • test - Run all tests and create a test coverage record.
  • coverage - Generate a series of coverage reports from test records.

Contributors

For bugs or performance improvements, I welcome pull requests, issues, or comments. If you make a pull request then please be sure to add tests and run make fmt.

For new features, please start a discussion first by creating an issue and explaining the intended change.

This project includes an integration for the standard library HTTP tools. If there are other, obvious standard library tools that would benefit from load shedding then I'd accept an addition to the stdlib sub-package. However, I won't maintain integrations with 3rd party tools in this repository.

Fork Of github.com/asecurityteam/loadshed

I was part of the original team that built this library for a previous employer. The project was originally published as bitbucket.org/stride/loadshed and was later transferred to github.com/asecurityteam/loadshed. Since then, the company and team priorities have changed and github.com/asecurityteam/loadshed has been archived.

I have new use cases for this library so I'm maintaining this fork.

Drop-in Replacement Of github.com/asecurityteam/loadshed

For convenience, I have created a v1.2.0 tag that matches the last published release of github.com/asecurityteam/loadshed. The only difference is that I have updated the module path to github.com/kevinconway/loadshed and replaced the github.com/asecurityteam/rolling dependency with github.com/kevinconway/rolling, which I have also forked for the same reasons as stated above. You should be able to replace github.com/asecurityteam/loadshed with github.com/kevinconway/loadshed in either your source code or your go.mod using a replace directive to pull from here instead.

There is no particular advantage to doing this, today, unless it's part of a gradual migration to v2 (the version documented in this file). I do not plan on porting any bug fixes or performance improvements to v1.

License

This project is licensed under the Apache 2.0 license. See LICENSE.txt or http://www.apache.org/licenses/LICENSE-2.0 for the full terms.

This project is forked from https://github.com/asecurityteam/loadshed, though the majority of the current content is actually a modified form of a public, but unmerged, branch of the project from https://bitbucket.org/atlassian/loadshed/src/dev-2.0/. The original project's copyright attribution and license terms are:

Copyright @ 2017 Atlassian Pty Ltd

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

All files are marked with SPDX tags that both attribute the original copyright as well as identify the author(s) of any significant changes to those files.

Documentation

Index

Constants

const RuleProbabilistic string = "PROBABILISTIC"

RuleProbabilistic is the name of the rejection rule that covers all cases of using a rejection rate rather than a deterministic rule.

Variables

This section is empty.

Functions

func ClassificationToContext

func ClassificationToContext(ctx context.Context, class Classification) context.Context

Types

type Capacity

type Capacity interface {
	// Name of the capacity or metric being tracked.
	Name(ctx context.Context) string
	// Usage as a percent value. This should report a value between 0 and 1,
	// but some implementations may intentionally report negative or
	// greater-than-100% values if needed.
	Usage(ctx context.Context) float32
}

Capacity represents usage of some finite resource.

type CapacityConcurrency

type CapacityConcurrency struct {
	// contains filtered or unexported fields
}

CapacityConcurrency tracks the number of concurrent calls to a method.

func NewCapacityConcurrency

func NewCapacityConcurrency(limit int32, options ...OptionConcurrency) *CapacityConcurrency

func (*CapacityConcurrency) Add

func (self *CapacityConcurrency) Add(count int32)

func (*CapacityConcurrency) Done

func (self *CapacityConcurrency) Done(count int32)

func (*CapacityConcurrency) Name

func (self *CapacityConcurrency) Name(ctx context.Context) string

func (*CapacityConcurrency) Usage

func (self *CapacityConcurrency) Usage(ctx context.Context) float32

Usage returns the current concurrency value.

func (*CapacityConcurrency) Wrap

func (self *CapacityConcurrency) Wrap(fn Fn) Fn

Wrap a function in concurrency tracking.

type CapacityErrorRate

type CapacityErrorRate struct {
	// contains filtered or unexported fields
}

CapacityErrorRate calculates the percent error of invocations within a window of time. Returned errors and panics are both considered in the rate calculation.

Attempts and errors are always recorded within the same bucket of the window. The rate is then calculated as (errors / attempts) within the window. The current rate is given as the current capacity usage value.

The rate calculation is based on a rolling window. The default size of the window is 1s with each bucket representing 10ms. Both of these values can be modified using constructor options.

func NewCapacityErrorRate

func NewCapacityErrorRate(options ...OptionErrorRate) *CapacityErrorRate

func (*CapacityErrorRate) Name

func (*CapacityErrorRate) Usage

func (self *CapacityErrorRate) Usage(ctx context.Context) float32

func (*CapacityErrorRate) Wrap

func (self *CapacityErrorRate) Wrap(fn Fn) Fn

type CapacityLandingRate

type CapacityLandingRate struct {
	// contains filtered or unexported fields
}

CapacityLandingRate considers the number of method invocations within a window of time that have begun. Note that this counts all attempts to invoke a method and does not distinguish success or failure.

The rate calculation is based on a rolling window. The default size of the window is 1s with each bucket representing 10ms. Both of these values can be modified using constructor options.

func NewCapacityLandingRate

func NewCapacityLandingRate(limit int, options ...OptionLandingRate) *CapacityLandingRate

func (*CapacityLandingRate) Name

func (*CapacityLandingRate) Usage

func (self *CapacityLandingRate) Usage(ctx context.Context) float32

func (*CapacityLandingRate) Wrap

func (self *CapacityLandingRate) Wrap(fn Fn) Fn

type CapacityLatency

type CapacityLatency struct {
	// contains filtered or unexported fields
}

CapacityLatency calculates the execution time of method invocations within a window of time.

By default, latency is calculated by taking an average of method invocation time within the window. You can provide an alternative calculation using the OptionLatencyReduction option.

The latency calculation is based on a rolling window. The default size of the window is 1s with each bucket representing 10ms. Both of these values can be modified using constructor options.

func NewCapacityLatency

func NewCapacityLatency(limit time.Duration, options ...OptionLatency) *CapacityLatency

NewCapacityLatency defaults to installing an average function for the window reduction. Use the OptionLatencyReduction modifier to set a different reduction.

func (*CapacityLatency) Append

func (self *CapacityLatency) Append(ctx context.Context, v time.Duration)

Append adds a latency measure to the underlying window.

func (*CapacityLatency) Name

func (self *CapacityLatency) Name(context.Context) string

func (*CapacityLatency) Usage

func (self *CapacityLatency) Usage(ctx context.Context) float32

func (*CapacityLatency) Wrap

func (self *CapacityLatency) Wrap(fn Fn) Fn

type CapacityThrottle

type CapacityThrottle struct {
	Capacity
	// contains filtered or unexported fields
}

CapacityThrottle is a wrapper for other Capacity implementations that limits the number of times the underlying usage is calculated within a period of time. This exists to help amortize the cost of expensive capacity calculations when it is safe or desirable to do so.

func NewCapacityThrottle

func NewCapacityThrottle(wrapped Capacity, duration time.Duration) *CapacityThrottle

func (*CapacityThrottle) Usage

func (self *CapacityThrottle) Usage(ctx context.Context) float32

Usage returns from an internal cache until a duration has expired at which point it calls the wrapped Capacity to get a new value.

func (*CapacityThrottle) Wrap

func (self *CapacityThrottle) Wrap(fn Fn) Fn

type CapacityWaitGroup

type CapacityWaitGroup struct {
	*CapacityConcurrency
	// contains filtered or unexported fields
}

CapacityWaitGroup combines a CapacityConcurrency and a sync.WaitGroup. This type may be used in place of a wait group and can satisfy an interface matching the wait group's methods. Each delta given to Add() increases the reported concurrency count and each call to Done() decreases the count.
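A minimal sketch of the same combination, assuming only what the paragraph above describes (the type and field names here are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// waitGroupCapacity behaves like a sync.WaitGroup while also reporting
// the in-flight count as a fraction of a configured limit.
type waitGroupCapacity struct {
	wg    sync.WaitGroup
	count atomic.Int32
	limit int32
}

func (w *waitGroupCapacity) Add(delta int) {
	w.count.Add(int32(delta))
	w.wg.Add(delta)
}

func (w *waitGroupCapacity) Done() {
	w.count.Add(-1)
	w.wg.Done()
}

func (w *waitGroupCapacity) Wait() { w.wg.Wait() }

// Usage reports in-flight work as a percent of the limit.
func (w *waitGroupCapacity) Usage() float32 {
	return float32(w.count.Load()) / float32(w.limit)
}

func main() {
	w := &waitGroupCapacity{limit: 4}
	w.Add(2)
	fmt.Println(w.Usage()) // 0.5
	w.Done()
	w.Done()
	w.Wait()
	fmt.Println(w.Usage()) // 0
}
```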

func NewCapacityWaitGroup

func NewCapacityWaitGroup(limit int32, options ...OptionConcurrency) *CapacityWaitGroup

func (*CapacityWaitGroup) Add

func (self *CapacityWaitGroup) Add(delta int)

func (*CapacityWaitGroup) Done

func (self *CapacityWaitGroup) Done()

func (*CapacityWaitGroup) Wait

func (self *CapacityWaitGroup) Wait()

type Classification

type Classification string

Classification is an arbitrary class or category assigned to method invocations. The most common use of this type is to define priorities that can then be mapped to specific load shedding policies. For example, you might define LOW, NORMAL, HIGH, and CRITICAL as classifications and have LOW shed at a higher rate than NORMAL, etc.

func ClassificationFromContext

func ClassificationFromContext(ctx context.Context) Classification

type Classifier

type Classifier interface {
	Classify(ctx context.Context) Classification
}

Classifier determines the classification of a method invocation.

type ClassifierFN

type ClassifierFN func(ctx context.Context) Classification

ClassifierFN is an adapter for simple classification functions. For example:

ClassifierFN(func(ctx context.Context) Classification {
	requestPath := PathFromContext(ctx)
	if requestPath == "/important/api" {
		return Classification("HIGH")
	}
	return Classification("NORMAL")
})

func (ClassifierFN) Classify

func (self ClassifierFN) Classify(ctx context.Context) Classification

type Curve

type Curve interface {
	Curve(ctx context.Context, value float32) float32
}

Curve is a function used to scale or plot a value. The primary use cases for a curving function are to translate a capacity usage to a failure probability or to translate a failure probability to a rejection rate. The general expectation is that most inputs and outputs are between the values 0 and 1, though specialized use cases may handle arbitrary values.

type CurveFN

type CurveFN func(ctx context.Context, value float32) float32

CurveFN is an adapter for simple curving functions. For example:

CurveFN(func(ctx context.Context, value float32) float32 { return 0.0 })

func (CurveFN) Curve

func (self CurveFN) Curve(ctx context.Context, value float32) float32

type CurveLinear

type CurveLinear struct {
	Upper    float32
	Lower    float32
	Exponent float32
}

CurveLinear shifts the input based on the following formula:

f(x) = x < LOWER ? 0 : x > UPPER ? 1 : ((x - LOWER) / (UPPER - LOWER) )^EXPONENT

The result is a linear interpolation of the usage value between the upper and lower limits, optionally modified by some exponent.
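The formula above transcribes directly into Go. This standalone sketch reproduces the behavior for illustration (the function name is mine, not the package's):

```go
package main

import (
	"fmt"
	"math"
)

// curveLinear is a direct transcription of the CurveLinear formula:
// 0 below the lower bound, 1 above the upper bound, and an exponent-
// modified linear interpolation in between.
func curveLinear(x, lower, upper, exponent float32) float32 {
	if x < lower {
		return 0
	}
	if x > upper {
		return 1
	}
	base := (x - lower) / (upper - lower)
	return float32(math.Pow(float64(base), float64(exponent)))
}

func main() {
	// With lower 0, upper 1, and exponent 1 the curve is the identity.
	fmt.Println(curveLinear(0.5, 0, 1, 1)) // 0.5
}
```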

func (*CurveLinear) Curve

func (self *CurveLinear) Curve(ctx context.Context, value float32) float32

type ErrRejection

type ErrRejection struct {
	// Rule matches the name of any deterministic rule or the value of
	// RuleProbabilistic if a rejection rate was used.
	Rule string
	// Classification optionally contains the invocation's classification
	// value if one is set. It is otherwise empty.
	Classification Classification
	// Name is present when Rule matches RuleProbabilistic and contains the
	// name of the RejectionRate/FailureProbability/Capacity used to make the
	// decision.
	Name string
	// Usage is present when Rule matches RuleProbabilistic and contains the
	// current capacity utilization.
	Usage float32
	// Likelihood is present when Rule matches RuleProbabilistic and contains
	// the current likelihood of failure due to the capacity utilization.
	Likelihood float32
	// Rate is present when Rule matches RuleProbabilistic and contains the
	// current rejection rate as derived from the probability of failure.
	Rate float32
}

ErrRejection provides the details on why an invocation was rejected.

func (ErrRejection) Error

func (self ErrRejection) Error() string

type FailureProbability

type FailureProbability interface {
	Capacity
	// Likelihood computes a chance of either system or action failure based on
	// the current capacity usage. Values are percentages and should be bounded
	// between 0 and 1. Greater than 100% probability of failure is not
	// particularly meaningful but may have use in some specific scenarios.
	Likelihood(ctx context.Context) float32
}

FailureProbability represents the chance of failure based on capacity usage.

Implementations of FailureProbability that wrap or otherwise do not directly implement Capacity must account for the wrapped Capacity's optional Wrapper interface.

type FailureProbabilityCurve

type FailureProbabilityCurve struct {
	Capacity
	Curve Curve
}

func NewFailureProbabilityCurveIdentity

func NewFailureProbabilityCurveIdentity(cap Capacity) *FailureProbabilityCurve

NewFailureProbabilityCurveIdentity generates a probability that returns the underlying capacity value without modifying it.

func NewFailureProbabilityCurveLinear

func NewFailureProbabilityCurveLinear(cap Capacity, lower float32, upper float32, exponent float32) *FailureProbabilityCurve

NewFailureProbabilityCurveLinear generates a curving probability that uses the linear interpolation curve from CurveLinear.

func (*FailureProbabilityCurve) Likelihood

func (self *FailureProbabilityCurve) Likelihood(ctx context.Context) float32

func (*FailureProbabilityCurve) Wrap

func (self *FailureProbabilityCurve) Wrap(fn Fn) Fn

type Fn

type Fn func(context.Context) error

Fn is the basic unit of execution and represents an action that may be shed under load.

type LatencyReduction

type LatencyReduction = rolling.Reduction[time.Duration]

type OptionConcurrency

type OptionConcurrency func(*CapacityConcurrency)

func OptionConcurrencyName

func OptionConcurrencyName(name string) OptionConcurrency

type OptionErrorRate

type OptionErrorRate func(*CapacityErrorRate)

func OptionErrorRateBucketDuration

func OptionErrorRateBucketDuration(d time.Duration) OptionErrorRate

func OptionErrorRateBucketSizeHint

func OptionErrorRateBucketSizeHint(size int) OptionErrorRate

func OptionErrorRateMinimumPoints

func OptionErrorRateMinimumPoints(min int) OptionErrorRate

func OptionErrorRateName

func OptionErrorRateName(name string) OptionErrorRate

func OptionErrorRateWindowBuckets

func OptionErrorRateWindowBuckets(count int) OptionErrorRate

type OptionLandingRate

type OptionLandingRate func(*CapacityLandingRate)

func OptionLandingRateBucketDuration

func OptionLandingRateBucketDuration(d time.Duration) OptionLandingRate

func OptionLandingRateBucketSizeHint

func OptionLandingRateBucketSizeHint(size int) OptionLandingRate

func OptionLandingRateWindowBuckets

func OptionLandingRateWindowBuckets(count int) OptionLandingRate

func OptionLandingrateName

func OptionLandingrateName(name string) OptionLandingRate

type OptionLatency

type OptionLatency func(*CapacityLatency)

func OptionLatencyBucketDuration

func OptionLatencyBucketDuration(d time.Duration) OptionLatency

func OptionLatencyBucketSizeHint

func OptionLatencyBucketSizeHint(size int) OptionLatency

func OptionLatencyMeasurePanics

func OptionLatencyMeasurePanics(v bool) OptionLatency

OptionLatencyMeasurePanics modifies the capacity to capture latency for executions that resulted in a panic in addition to executions that exit normally. The default value is false.

func OptionLatencyMinimumPoints

func OptionLatencyMinimumPoints(min int) OptionLatency

func OptionLatencyName

func OptionLatencyName(name string) OptionLatency

func OptionLatencyReduction

func OptionLatencyReduction(r LatencyReduction) OptionLatency

OptionLatencyReduction sets the reduction method used when calculating usage value of the window of data. The default value is an average function.

func OptionLatencyWindowBuckets

func OptionLatencyWindowBuckets(count int) OptionLatency

type OptionShedder

type OptionShedder func(*Shedder)

func OptionShedderClassifier

func OptionShedderClassifier(c Classifier) OptionShedder

func OptionShedderRandom

func OptionShedderRandom(r func() float32) OptionShedder

func OptionShedderRejectionRate

func OptionShedderRejectionRate(r RejectionRate) OptionShedder

func OptionShedderRule

func OptionShedderRule(r Rule) OptionShedder

type RejectionRate

type RejectionRate interface {
	FailureProbability
	// Rate computes the percentage of load to shed based on the current failure
	// probability. Outputs are expected to be percentage values between 0 and
	// 1. Values outside of this range may result in unexpected behavior.
	Rate(ctx context.Context) float32
}

RejectionRate represents the amount of load that should be shed based on the current failure probability.

Implementations of RejectionRate that wrap or otherwise do not directly implement FailureProbability must account for the wrapped FailureProbability's optional Wrapper interface.

type RejectionRateCurve

type RejectionRateCurve struct {
	FailureProbability
	// contains filtered or unexported fields
}

func NewRejectionRateCurveByClassification

func NewRejectionRateCurveByClassification(probability FailureProbability, defaultCurve Curve, classes map[Classification]Curve) *RejectionRateCurve

NewRejectionRateCurveByClassification allows for the failure probability to be translated to a rejection rate based on the classification of an invocation. For example, LOW priority requests can have a higher rejection rate for the same failure probability compared to HIGH priority.

func NewRejectionRateCurveIdentity

func NewRejectionRateCurveIdentity(probability FailureProbability) *RejectionRateCurve

NewRejectionRateCurveIdentity generates a rejection rate that returns the underlying failure probability value without modifying it.

func NewRejectionRateCurveLinear

func NewRejectionRateCurveLinear(probability FailureProbability, lower float32, upper float32, exponent float32) *RejectionRateCurve

NewRejectionRateCurveLinear generates a curving rejection rate that uses the linear interpolation curve from CurveLinear.

func (*RejectionRateCurve) Rate

func (self *RejectionRateCurve) Rate(ctx context.Context) float32

func (*RejectionRateCurve) Wrap

func (self *RejectionRateCurve) Wrap(fn Fn) Fn

type Rule

type Rule interface {
	Name(ctx context.Context) string
	Reject(ctx context.Context) bool
}

Rule represents a deterministic load shedding decision. Unlike RejectionRate, a Rule does not incorporate randomness or probability.

Rules can represent virtually any kind of deterministic behavior. For example, rules may be used to integrate rate limiting or quota management policies into the load shedding framework. Rules also do not have to be static. They may reference dynamic variables and consult external systems.

type Shedder

type Shedder struct {
	// contains filtered or unexported fields
}

Shedder encapsulates a series of load shedding policies and applies them to method invocations.

The primary usage of the Shedder is intended to be the Do method which applies all load shedding rules and rejection rates.

func NewShedder

func NewShedder(options ...OptionShedder) *Shedder

func (*Shedder) Do

func (self *Shedder) Do(ctx context.Context, fn Fn) error

Do optionally runs the function based on the current state of the load shedding policy configured for the Shedder. In the event that the function is not executed the Shedder will return an ErrRejection.

All invocation monitoring wrappers are applied before the Fn is executed. If a Classifier is provided then the current classification is added to the context before any other action.

func (*Shedder) Select

func (self *Shedder) Select(ctx context.Context) error

Select performs the decision making process for the load shedder and optionally returns an error indicating that an action should be rejected. The returned error, if not nil, is always an ErrRejection instance. This may be used in custom load shedding integrations.

Note that the context given must be the same context that would otherwise be given to Do. Also note that Select does not apply any Fn wrapping or classification so any invocation monitoring, metrics management, and classification must be performed externally.

func (*Shedder) WrapSelect

func (self *Shedder) WrapSelect(fn Fn) Fn

WrapSelect returns a wrapped version of Fn that both applies any invocation monitoring required by rejection rate calculators and applies load shedding rules.

This differs from the Do method by returning a re-usable Fn. Most usage of the shedder should be through the Do method but this method is provided for specialized cases where the input parameters for the Fn do not change with each invocation. This allows Fn to be called repeatedly without needing to be re-wrapped on each invocation.

type Wrapper

type Wrapper interface {
	Wrap(Fn) Fn
}

Wrapper is an optional interface that any Capacity or FailureProbability may implement if it needs to collect data from the functions being executed. For example, if a Capacity needs to record the execution duration of all functions executed within a load shedding policy then it can implement this interface by returning a wrapped copy of the passed-in function that tracks the start and end times.

Implementing this behaviour is optional and this interface is only exposed for documentation purposes.

Directories

stdlib
