racing

package
v1.1.4

Published: Aug 20, 2021 License: MIT Imports: 4 Imported by: 1

Documentation

Overview

Package racing provides flexible policies for running multiple HTTP requests simultaneously in a "race" to improve performance.

While racing is a powerful feature for smoothing over pockets of bad server response times, it introduces risks which must be mitigated by thoughtful policy design and tuning with real world data. In particular, naively running multiple parallel requests may raise your costs, raise the risk of browning out the remote service, waste resources, or cause data consistency problems when making mutating requests.

The default racing policy used by the robust HTTP client is Disabled. This policy disables racing and ensures that all HTTP request attempts made while executing the request plan are serialized. Thus the default behavior of the robust HTTP client matches the expectations of a typical user.

The main concepts involved in racing are:

  • A group of concurrent or racing request attempts is called a wave. Every wave starts with one request attempt and may grow as new attempts are started according to the Policy. A wave ends either when the first non-retryable attempt within the wave is detected, or when all concurrent attempts permitted by the policy have ended. A new wave begins if all attempts within the previous wave ended in a retryable state.
  • A racing Policy decides when to add a new concurrent request attempt to the current wave. Policy decisions are broken down into two steps, scheduling and starting. Each time a new request attempt is added to the wave, the Policy is invoked to schedule the next request attempt. When the scheduled time occurs, the Policy is again invoked to decide whether the scheduled request attempt should really start, as circumstances may have changed in the meantime.
  • Although each racing request executes on its own goroutine, the robust HTTP client dispatches every racing request's events to the event handler on the main goroutine (the one that invoked the client). Thus even if multiple attempts are racing within a single request execution, the events for that execution are serialized and do not race.
  • Attempt-level events (BeforeAttempt, BeforeReadBody, AfterAttempt, AfterAttemptTimeout) always occur in the correct order for a particular request attempt. The events of different attempts in the same wave may be interleaved, but the BeforeAttempt event always occurs for attempt `i` before it occurs for attempt `i+1`. All events for a request attempt in wave `j` occur before any events for an attempt in wave `j+1`.
  • As soon as one request attempt ends, every other concurrent request attempt in the wave is cancelled as redundant. The AfterAttempt event handler is fired for every request that is cancelled as redundant, and the execution error during the event is set to Redundant. If the ended request is retryable, a new wave is started. Otherwise, the ended request's response becomes the result of the execution.

Besides the Disabled policy, this package provides built-in constructors for a scheduler and a starter. Use NewStaticScheduler to create a scheduler based on a static offset schedule. Use NewThrottleStarter to create a starter which can throttle racing if too many parallel request attempts are being scheduled. Use NewPolicy to compose any scheduler and any starter into a racing policy.
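
Purely as an illustrative sketch of how these constructors compose: the offsets and limit values below are arbitrary examples, and the closing comment about installing the policy on the client is an assumption, since the client's configuration fields are not documented on this page.

package main

import (
	"time"

	"github.com/gogama/httpx/racing"
)

func main() {
	// Allow each wave to grow to three attempts: the second starts
	// 200ms after the first, the third 400ms after the second.
	sc := racing.NewStaticScheduler(200*time.Millisecond, 400*time.Millisecond)

	// Pause racing if more than 20 parallel attempts were started in
	// the trailing second.
	st := racing.NewThrottleStarter(racing.Limit{MaxAttempts: 20, Period: time.Second})

	// Compose scheduler and starter into a racing policy.
	p := racing.NewPolicy(sc, st)

	_ = p // assumption: install p as the robust HTTP client's racing policy (client fields not documented here)
}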

Index

Examples

Constants

This section is empty.

Variables

var AlwaysStart = alwaysStarter(0)

AlwaysStart is a starter that starts every scheduled request.
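
For example, composing AlwaysStart with a static scheduler gives a policy that races on schedule with no throttling; the offset below is an arbitrary illustrative value.

// Race a second attempt 300ms after the first, with no further throttling.
p := racing.NewPolicy(racing.NewStaticScheduler(300*time.Millisecond), racing.AlwaysStart)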

var Disabled = disabled{}

Disabled is a policy that disables racing, causing the robust HTTP client to send all requests serially, and never in parallel.

var Redundant = errors.New("httpx/racing: redundant attempt")

Redundant is the root cause error on Execution.Err when the request attempt is cancelled as redundant. (Redundant will be wrapped in a *url.Error to comply with the contract for Execution.Err.)

Once any request attempt has reached a final (non-retryable) outcome, all other outstanding concurrent attempts racing in the same wave are cancelled as redundant.
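
A minimal sketch of how event-handling code might recognize redundant cancellations follows; the hook function name is hypothetical and not part of this package, and only Execution.Err, the *url.Error wrapping, and the Redundant sentinel are taken from this page.

import (
	"errors"

	"github.com/gogama/httpx/racing"
	"github.com/gogama/httpx/request"
)

// onAfterAttempt is a hypothetical AfterAttempt hook. It ignores
// attempts that were cancelled only because a sibling attempt in the
// same wave ended first.
func onAfterAttempt(e *request.Execution) {
	// Redundant is the root cause wrapped in a *url.Error, so test
	// with errors.Is rather than direct comparison.
	if e.Err != nil && errors.Is(e.Err, racing.Redundant) {
		return // this attempt lost the race; not a genuine failure
	}
	// ...handle genuine successes and failures here...
}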

Functions

This section is empty.

Types

type Limit

type Limit struct {
	MaxAttempts int
	Period      time.Duration
}

A Limit specifies the maximum number of request attempts allowed per unit time.

type Policy

type Policy interface {
	Scheduler
	Starter
}

A Policy controls if and how concurrent requests may be raced against each other. In particular, after every request attempt is started, a Policy schedules the start of the next attempt in the wave and, when the scheduled time arrives, confirms whether the attempt should start.

Implementations of Policy must be safe for concurrent use by multiple goroutines.

A Policy is composed of the Scheduler and Starter interfaces. Use NewPolicy to construct a Policy given existing Scheduler and Starter implementations.
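
Purely for illustration, the sketch below hand-rolls a tiny policy that lets each wave grow to at most two attempts, the second starting after a fixed delay. The type and field names are invented; only the Schedule and Start method signatures come from the Scheduler and Starter interfaces documented below, and Execution.Racing is read as in the NewStaticScheduler example.

import (
	"time"

	"github.com/gogama/httpx/request"
)

// twoAttemptPolicy is an illustrative Policy: each wave may grow to
// at most two racing attempts, the second starting after Delay. It
// holds no mutable state, so it is safe for concurrent use.
type twoAttemptPolicy struct {
	Delay time.Duration
}

// Schedule returns the wait before adding the next attempt, or zero
// to stop growing the wave.
func (p twoAttemptPolicy) Schedule(e *request.Execution) time.Duration {
	if e.Racing >= 2 {
		return 0 // the wave already has two attempts: halt the race
	}
	return p.Delay
}

// Start always confirms the scheduled attempt.
func (p twoAttemptPolicy) Start(_ *request.Execution) bool {
	return true
}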

func NewPolicy

func NewPolicy(sc Scheduler, st Starter) Policy

NewPolicy composes a Scheduler and a Starter into a racing Policy.

type Scheduler

type Scheduler interface {
	// Schedule returns the duration to wait before starting the next
	// parallel request attempt in the current wave. A zero return value
	// halts the race for the wave, meaning no new parallel requests
	// will be started until the next wave.
	Schedule(*request.Execution) time.Duration
}

A Scheduler schedules when the next concurrent request attempt should join the wave.

Implementations of Scheduler must be safe for concurrent use by multiple goroutines.

Use NewPolicy to compose a Scheduler with a Starter to make a racing policy.
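
A custom scheduler only needs to map the current execution state to a wait duration. The following sketch is illustrative and not part of this package: it doubles the offset before each additional attempt and halts the wave at a configurable size, reading Execution.Racing as in the NewStaticScheduler example below.

import (
	"time"

	"github.com/gogama/httpx/request"
)

// growthScheduler is an illustrative Scheduler: with base = 100ms, it
// starts the 2nd attempt 100ms after the 1st, the 3rd 200ms after the
// 2nd, and so on, never letting the wave exceed maxRacing attempts.
type growthScheduler struct {
	base      time.Duration
	maxRacing int
}

func (s growthScheduler) Schedule(e *request.Execution) time.Duration {
	if e.Racing >= s.maxRacing {
		return 0 // halt the race for this wave
	}
	// Double the offset for each attempt already racing.
	return s.base * time.Duration(int64(1)<<uint(e.Racing-1))
}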

func NewStaticScheduler

func NewStaticScheduler(offset ...time.Duration) Scheduler

NewStaticScheduler constructs a scheduler producing a fixed-size wave of request attempts, each starting at a fixed offset from the previous attempt.

Each entry in offset specifies the delay to wait, after starting the previous request in the wave, before starting the next one. So if offset[0] is 250ms, then the second attempt in the wave will be started 250ms after the first attempt; if offset[1] is 500ms, the third attempt will be started 500ms after the second one (750ms after the first one); and so on.

Example
package main

import (
	"fmt"
	"time"

	"github.com/gogama/httpx/request"

	"github.com/gogama/httpx/racing"
)

func main() {
	sc := racing.NewStaticScheduler(250*time.Millisecond, 500*time.Millisecond, 1500*time.Millisecond)
	// Simulate running scheduler with 0, 1, 2, and 3 request attempts
	// already racing.
	var e request.Execution
	for i := 0; i <= 3; i++ {
		e.Racing = i + 1
		fmt.Println(sc.Schedule(&e))
	}
}
Output:

250ms
500ms
1.5s
0s

type Starter

type Starter interface {
	// Start returns true if a previously scheduled request attempt
	// should be added to the race and false if it should be discarded.
	//
	// If the scheduled request attempt is discarded, the wave is closed
	// and no new request attempts will be scheduled until the next
	// wave.
	//
	// The execution contains the plan execution state existing at the
	// current time, which will usually be changed from the state
	// existing at the time the attempt was scheduled.
	Start(*request.Execution) bool
}

A Starter starts or discards a previously scheduled parallel request.

Implementations of Starter must be safe for concurrent use by multiple goroutines.

Use NewPolicy to compose a Scheduler with a Starter to make a racing policy.
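
Because the starter runs at the scheduled start time, it is a natural place for a last-moment check. The sketch below is illustrative and not part of this package: a starter the application can use to pause racing globally, say while a dependency is degraded; only the Start signature comes from the interface above.

import (
	"sync/atomic"

	"github.com/gogama/httpx/request"
)

// gateStarter is an illustrative Starter that lets the application
// switch racing off globally, for example while a dependency is known
// to be degraded. The atomic load keeps it safe for concurrent use.
type gateStarter struct {
	open *int32 // 1 = racing allowed, 0 = racing paused
}

func (g gateStarter) Start(_ *request.Execution) bool {
	return atomic.LoadInt32(g.open) == 1
}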

func NewThrottleStarter

func NewThrottleStarter(limits ...Limit) Starter

NewThrottleStarter constructs a starter which throttles new request attempts based on one or more limits.

For example, the following starter would block starting any new parallel request attempts if more than 10 parallel request attempts have been started in the last half second, or more than 15 have been started in the last second:

s := racing.NewThrottleStarter(
	racing.Limit{MaxAttempts: 10, Period: 500*time.Millisecond},
	racing.Limit{MaxAttempts: 15, Period: 1*time.Second},
)

Note that as with all starters, the constructed throttling starter only affects the starting of concurrent request attempts. Every wave starts with one request attempt.
