async

package module
v0.1.2 Latest
Published: Oct 11, 2022 License: MIT Imports: 6 Imported by: 0

README

async

Tiny futures library for Go. 🔮

  • Supports eager and lazy evaluation, both synchronous and asynchronous
  • Generics-based
  • Propagates panics
  • Designed to interoperate with context

GoDoc

Usage

Let's start with a simple example:

f := NewFuture(func() ([]byte, error) {
    res, err := http.Get("http://www.example.com")
    if err != nil {
        return nil, err
    }
    defer res.Body.Close()
    return io.ReadAll(res.Body)
})

We have just created a Future f that can be used to wait for and obtain the result of the function passed to NewFuture. At this point, execution of the function wrapped in the Future has not started yet.

There are multiple ways to start evaluation, the simplest being calling Result, as this will start evaluation and wait until either the result is available or the context is cancelled.

Note that if multiple calls to Result are made concurrently, only the first one starts execution of the wrapped function; when the wrapped function completes, the same result is returned to all current (and future) callers of Result:

go func() {
    buf, err := f.Result(ctx1)
    // use buf and err
}()
go func() {
    buf, err := f.Result(ctx2)
    // use buf and err
}()

A call to Result returns immediately if the context is cancelled. This does not cancel execution of the wrapped function (to cancel the wrapped function, use a context or other cancellation mechanism inside the wrapped function itself). So, for example, in the code above, if ctx1 is cancelled the call to Result in the first goroutine returns immediately, but the call in the second goroutine keeps waiting until the wrapped function returns (or ctx2 is cancelled).
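
For instance, a rough sketch of that behaviour (the slow body and the 10ms timeout are made up for illustration):

f := NewFuture(func() ([]byte, error) {
    time.Sleep(time.Second) // stands in for a slow request
    return []byte("payload"), nil
})

ctx1, cancel1 := context.WithTimeout(context.Background(), 10*time.Millisecond)
defer cancel1()
ctx2 := context.Background()

_, err := f.Result(ctx1)   // returns once ctx1 expires; the wrapped function keeps running
fmt.Println(err)           // context.DeadlineExceeded

buf, err := f.Result(ctx2) // keeps waiting and returns the real result
fmt.Println(string(buf), err)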

An important feature of the futures provided by this library is that they propagate panics: in the example above, if the function wrapped by the Future panicked, the panic would be caught and each call to Result would panic in turn. (If Result is never called and the wrapped function panics, the panic is delivered to the Go runtime instead, crashing the process as if it had not been recovered.)
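
As a rough sketch (the panicking body is made up), the panic resurfaces at the call site of Result and can be recovered there like any other panic:

f := NewFuture(func() (int, error) {
    panic("boom") // never returns normally
})

func() {
    defer func() {
        if r := recover(); r != nil {
            fmt.Println("recovered:", r) // the panic re-surfaces in the caller of Result
        }
    }()
    _, _ = f.Result(ctx)
}()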

A more complex example

The real power of this library lies in its ability to quickly build lazy evaluation trees that allow efficient, concurrent evaluation of the desired results.

As an example, let's consider the case in which we need to construct a response based on three subrequests (foo, bar, baz) whose results are used to construct the two fields in the response (x and y).

ctx, cancel := context.WithCancel(ctx)
defer cancel()

foo := NewFuture(func() (Foo, error) {
    return /* ... */
})
bar := NewFuture(func() (Bar, error) {
    return /* ... */
})
baz := NewFuture(func() (Baz, error) {
    return /* ... */
})

x := NewFuture(func() (string, error) {
    bar.Eager() // start eager evaluation of bar
    res, err := foo.Result(ctx)
    if err != nil {
        return "", err
    }
    res2, err := bar.Result(ctx)
    if err != nil {
        return "", err
    }
    return fmt.Sprintf("%v,%v", res, res2), nil
})
y := NewFuture(func() (string, error) {
    baz.Eager() // start eager evaluation of baz
    res, err := foo.Result(ctx) // note: result will be reused
    if err != nil {
        return "", err
    }
    res2, err := baz.Result(ctx)
    if err != nil {
        return "", err
    }
    return fmt.Sprintf("%v,%v", res, res2), nil
})

We have now built the evaluation trees. Instead of using futures, we could have simply started eager evaluation of all these functions, and this would work in simple cases.

Now consider, though, what would happen if you did not always need both x and y to be populated in the response, but instead needed to populate them only when requested (or only under some other dynamic condition).

Executing all the functions anyway, just in case they are needed, would be extremely resource-inefficient, even if you could prune unneeded functions by selectively cancelling their respective contexts. Another option would be to start only the goroutines that are actually needed, but this would require spreading the control logic across multiple places.

Alternatively, you could execute each function serially, once you are certain it needs to be executed, but this could be very slow (e.g. if the functions perform network requests).

Using this library you can instead do:

if req.needY {
    y.Eager()
}

res := &Response{}
if req.needX {
    r, err := x.Result(ctx)
    if err != nil {
        return nil, err
    }
    res.x = r
}
if req.needY {
    r, err := y.Result(ctx)
    if err != nil {
        return nil, err
    }
    res.y = r
}
return res, nil

This will concurrently execute all functions required to satisfy the request, and none of the functions that are not required, while maximizing readability and separation of concerns: the resulting code is linear, as all synchronization happens behind the scenes, regardless of how complex the logic to be implemented is.

Specifically, in the case above:

  • if both needX and needY are true, all futures defined above are started and execute concurrently
  • if only needX is true, only x, foo and bar are executed
  • if only needY is true, only y, foo and baz are executed

Note that, thanks to the context and the deferred cancel defined above, as soon as any future returns an error (or panics) the enclosing function returns, which runs cancel and makes every future still waiting on that context return as well. As such, this is an effective replacement for errgroup when errgroup is used to coordinate the execution of multiple parts of a request.

Examples

Future plans

Some potential ideas:

  • Support also promises
  • Adapters for common patterns

Documentation

Index

Examples

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Future

type Future[T any] struct {
	// contains filtered or unexported fields
}

Future is a future for the specified type T.

It supports both lazy and eager evaluation, both synchronous and asynchronous. It is especially useful when the results of one or more asynchronous operations may need to be consumed by multiple dependent asynchronous operations that themselves may or may not be executed.

Example
ctx := context.Background()
foo := true
bar := true

f1 := NewFuture(func() (int, error) {
	return 42, nil
}).NonBlocking()

f2 := NewFuture(func() (string, error) {
	res, err := f1.Result(ctx)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("n=%d", res), nil
})

f3 := NewFuture(func() (string, error) {
	return "hello", nil
}).NonBlocking()

f4 := NewFuture(func() ([]string, error) {
	if foo && bar {
		f2.Eager()
	}

	var s []string
	if foo {
		r, err := f3.Result(ctx)
		if err != nil {
			return nil, err
		}
		s = append(s, r)
	}
	if bar {
		r, err := f2.Result(ctx)
		if err != nil {
			return nil, err
		}
		s = append(s, r)
	}
	return s, nil
})

f4.Result(ctx)
Output:

func NewFuture

func NewFuture[T any](fn func() (T, error)) *Future[T]

NewFuture wraps the provided function into a Future handle that can be used to asynchronously execute the function and obtain its results.

The wrapped function is not invoked immediately by NewFuture. It is invoked at most once, on the first call to Result, Eager or Done (regardless of how many times these functions are called).

If the provided function panics, the panic is caught and forwarded to all callers of Result.
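
A minimal sketch of the lifecycle (the wrapped function here is made up):

f := NewFuture(func() (int, error) {
	return 42, nil // not executed yet
})
v, err := f.Result(context.Background()) // first call: starts the function and waits for it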

func (*Future[T]) Done

func (w *Future[T]) Done() <-chan struct{}

Done returns a channel that is closed once the wrapped function has completed execution. Once this happens, calls to Result are guaranteed not to block. If the wrapped function has not yet been started by a previous call to Eager or Result, it is started.

If your code calls Done, it MUST eventually call Result as well: failure to do so will cause any panic deriving from the execution of the wrapped function to be delivered to the Go runtime, terminating the process.
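
For example, a sketch of using Done to wait for whichever of two hypothetical futures f1 and f2 finishes first:

select {
case <-f1.Done():
	// f1 finished first; f1.Result(ctx) will not block here
case <-f2.Done():
	// f2 finished first; f2.Result(ctx) will not block here
}
// Both Done calls started execution, so per the rule above both
// f1.Result and f2.Result must still be called eventually.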

func (*Future[T]) Eager

func (w *Future[T]) Eager()

Eager signals to the Future runtime that execution of the wrapped function should be started now (if it has not been started yet).

If your code calls Eager, it MUST eventually call Result as well: failure to do so will cause any panic deriving from the execution of the wrapped function to be delivered to the Go runtime, terminating the process.
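
For example, a sketch of starting work early and collecting the result later:

f.Eager() // kick off the wrapped function in the background
// ... do other, unrelated work while it runs ...
v, err := f.Result(ctx) // later: wait for (or immediately obtain) the result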

func (*Future[T]) NonBlocking

func (w *Future[T]) NonBlocking() *Future[T]

NonBlocking can be used to signal that the function wrapped by the Future is expected to execute quickly (no more than a few µs) and to not block (e.g. waiting for I/O). This allows the Future runtime to avoid executing the function in a separate goroutine and reduces the amount of synchronization needed - potentially yielding higher performance.

NonBlocking, if used, should be called before any call to Eager, Done, Result, or Resolve.

After a call to NonBlocking, calls to Eager and Done will also block until the wrapped function has completed execution.

If used inappropriately (e.g. for wrapped functions that block, or that take longer than a few µs) this will slow down your code by inhibiting concurrent execution: in case of doubt, avoid using it.
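
As a sketch of the intended use, wrapping a cheap computation that never blocks (the body is made up):

f := NewFuture(func() (int, error) {
	return len("hello"), nil // cheap, CPU-only, never blocks
}).NonBlocking()

n, err := f.Result(ctx) // may run the function inline, without a separate goroutine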

func (*Future[T]) Result

func (w *Future[T]) Result(ctx context.Context) (T, error)

Result returns the results returned by the wrapped function, once execution of the function has completed.

Multiple Result calls will always return the same result, and the wrapped function will be invoked at most once.

If the context is cancelled before the results become available, Result returns immediately (without waiting for the function to complete) with the error from the context.

If the wrapped function panicked, Result propagates that panic to each caller of Result.
