pool

package module
v3.1.1
Published: Aug 23, 2016 License: MIT Imports: 5 Imported by: 109

README

Package pool

Package pool implements a limited consumer goroutine or unlimited goroutine pool for easier goroutine handling and cancellation.

Features:

  • Dead simple to use and makes no assumptions about how you will use it.
  • Automatic recovery of panics in consumer goroutines, returned as an error in the results

Pool v2 advantages over Pool v1:

  • Up to 300% faster due to lower contention (BenchmarkSmallRun used to take 3 seconds, now 1 second)
  • Cancels are much faster
  • Easier to use: no longer need to know the # of Work Units to be processed
  • Pool can now be used as a long-running/globally defined pool if desired (the v1 Pool was only good for one run)
  • Supports single units of work as well as batching
  • Pool can easily be reset after a Close() or Cancel() for reuse
  • Multiple Batches can be run and even cancelled on the same Pool
  • Supports individual Work Unit cancellation (see the sketch after this list)
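
To illustrate individual Work Unit cancellation, here is a minimal sketch; the fetchThing WorkFunc is hypothetical and used only for illustration.

package main

import (
	"fmt"
	"time"

	"gopkg.in/go-playground/pool.v3"
)

// fetchThing is a hypothetical WorkFunc used only for illustration.
func fetchThing(id int) pool.WorkFunc {
	return func(wu pool.WorkUnit) (interface{}, error) {

		// simulate blocking work
		time.Sleep(100 * time.Millisecond)

		if wu.IsCancelled() {
			// return values not used
			return nil, nil
		}

		return fmt.Sprintf("thing %d", id), nil
	}
}

func main() {

	// a single worker so the second unit stays pending
	p := pool.NewLimited(1)
	defer p.Close()

	wu1 := p.Queue(fetchThing(1))
	wu2 := p.Queue(fetchThing(2))

	// cancel only the second unit; the first keeps running
	wu2.Cancel()

	wu1.Wait()
	if err := wu1.Error(); err == nil {
		fmt.Println(wu1.Value().(string))
	}

	wu2.Wait()
	fmt.Println("wu2 error:", wu2.Error()) // ErrCancelled if it had not yet started processing
}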

Pool v3 advantages over Pool v2:

  • Objects are now interfaces, allowing for fewer breaking changes going forward.
  • There are now 2 Pool types, both completely interchangeable: a limited worker pool and an unlimited pool.
  • Simpler usage of Work Units: instead of <-work.Done you can now do work.Wait()

Installation

Use go get.

go get gopkg.in/go-playground/pool.v3

Then import the pool package into your own code.

import "gopkg.in/go-playground/pool.v3"

Important Information READ THIS!

  • It is recommended that you cancel a pool or batch from the calling function rather than from inside the Unit of Work. Cancelling from inside will still work, but because of goroutine scheduling and context switching it may not cancel as quickly as a cancel issued from outside (see the sketch after this list).
  • When batching, DO NOT FORGET to call batch.QueueComplete(); if you forget, the Batch WILL deadlock.
  • It is your responsibility to call WorkUnit.IsCancelled() to check whether the unit has been cancelled after a blocking operation, such as waiting for a connection from a pool. (optional)
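
A minimal sketch of the recommended pattern from the first point above: cancel the batch from the calling function once an error is seen. The flakyJob WorkFunc is hypothetical and used only for illustration.

package main

import (
	"errors"
	"fmt"
	"time"

	"gopkg.in/go-playground/pool.v3"
)

// flakyJob is a hypothetical WorkFunc used only for illustration;
// odd ids fail so the caller has something to cancel on.
func flakyJob(id int) pool.WorkFunc {
	return func(wu pool.WorkUnit) (interface{}, error) {

		// simulate a blocking operation
		time.Sleep(50 * time.Millisecond)

		// optional: check for cancellation after the blocking operation
		if wu.IsCancelled() {
			return nil, nil
		}

		if id%2 == 1 {
			return nil, errors.New("boom")
		}

		return id, nil
	}
}

func main() {

	p := pool.NewLimited(5)
	defer p.Close()

	batch := p.Batch()

	go func() {
		for i := 0; i < 20; i++ {
			batch.Queue(flakyJob(i))
		}
		batch.QueueComplete() // required, or the Batch will deadlock
	}()

	// cancel from the calling function, not from inside the WorkFunc
	var cancelled bool
	for result := range batch.Results() {

		if err := result.Error(); err != nil {
			if !cancelled {
				cancelled = true
				batch.Cancel() // Cancel() also calls QueueComplete() internally
			}
			continue
		}

		if v, ok := result.Value().(int); ok {
			fmt.Println("processed:", v)
		}
	}
}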

Usage and documentation

Please see http://godoc.org/gopkg.in/go-playground/pool.v3 for detailed usage docs.

Examples:

Both Limited Pool and Unlimited Pool have the same signatures and are completely interchangeable.

Per Unit Work

package main

import (
	"fmt"
	"time"

	"gopkg.in/go-playground/pool.v3"
)

func main() {

	p := pool.NewLimited(10)
	defer p.Close()

	user := p.Queue(getUser(13))
	other := p.Queue(getOtherInfo(13))

	user.Wait()
	if err := user.Error(); err != nil {
		// handle error
	}

	// do stuff with user
	username := user.Value().(string)
	fmt.Println(username)

	other.Wait()
	if err := other.Error(); err != nil {
		// handle error
	}

	// do stuff with other
	otherInfo := other.Value().(string)
	fmt.Println(otherInfo)
}

func getUser(id int) pool.WorkFunc {

	return func(wu pool.WorkUnit) (interface{}, error) {

		// simulate waiting for something, like TCP connection to be established
		// or connection from pool grabbed
		time.Sleep(time.Second * 1)

		if wu.IsCancelled() {
			// return values not used
			return nil, nil
		}

		// ready for processing...

		return "Joeybloggs", nil
	}
}

func getOtherInfo(id int) pool.WorkFunc {

	return func(wu pool.WorkUnit) (interface{}, error) {

		// simulate waiting for something, like TCP connection to be established
		// or connection from pool grabbed
		time.Sleep(time.Second * 1)

		if wu.IsCancelled() {
			// return values not used
			return nil, nil
		}

		// ready for processing...

		return "Other Info", nil
	}
}

Batch Work

package main

import (
	"fmt"
	"time"

	"gopkg.in/go-playground/pool.v3"
)

func main() {

	p := pool.NewLimited(10)
	defer p.Close()

	batch := p.Batch()

	// for max speed Queue in another goroutine
	// but it is not required, just can't start reading results
	// until all items are Queued.

	go func() {
		for i := 0; i < 10; i++ {
			batch.Queue(sendEmail("email content"))
		}

		// DO NOT FORGET THIS OR GOROUTINES WILL DEADLOCK
		// if calling Cancel() it calls QueueComplete() internally
		batch.QueueComplete()
	}()

	for email := range batch.Results() {

		if err := email.Error(); err != nil {
			// handle error
			// maybe call batch.Cancel()
		}

		// use return value
		fmt.Println(email.Value().(bool))
	}
}

func sendEmail(email string) pool.WorkFunc {
	return func(wu pool.WorkUnit) (interface{}, error) {

		// simulate waiting for something, like TCP connection to be established
		// or connection from pool grabbed
		time.Sleep(time.Second * 1)

		if wu.IsCancelled() {
			// return values not used
			return nil, nil
		}

		// ready for processing...

		return true, nil // everything ok; return a non-nil error if not
	}
}

Benchmarks

Run on a MacBook Pro (Retina, 15-inch, Late 2013), 2.6 GHz Intel Core i7, 16 GB 1600 MHz DDR3, using Go 1.6.2.

Run with 1, 2, 4, 8 and 16 CPUs to show that it scales well; 16 is double the number of logical cores on this machine.

NOTE: Cancellation times CAN vary depending on how busy your system is and how the goroutine scheduler behaves, but the worst case I've seen is 1s to cancel instead of 0ns.

go test -cpu=1,2,4,8,16 -bench=. -benchmem=true
PASS
BenchmarkLimitedSmallRun              	       1	1002492008 ns/op	    3552 B/op	      55 allocs/op
BenchmarkLimitedSmallRun-2            	       1	1002347196 ns/op	    3568 B/op	      55 allocs/op
BenchmarkLimitedSmallRun-4            	       1	1010533571 ns/op	    4720 B/op	      73 allocs/op
BenchmarkLimitedSmallRun-8            	       1	1008883324 ns/op	    4080 B/op	      63 allocs/op
BenchmarkLimitedSmallRun-16           	       1	1002317677 ns/op	    3632 B/op	      56 allocs/op
BenchmarkLimitedSmallCancel           	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedSmallCancel-2         	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedSmallCancel-4         	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedSmallCancel-8         	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedSmallCancel-16        	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedLargeCancel           	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedLargeCancel-2         	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedLargeCancel-4         	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedLargeCancel-8         	 1000000	      1006 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedLargeCancel-16        	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedOverconsumeLargeRun   	       1	4027153081 ns/op	   36176 B/op	     572 allocs/op
BenchmarkLimitedOverconsumeLargeRun-2 	       1	4003489261 ns/op	   32336 B/op	     512 allocs/op
BenchmarkLimitedOverconsumeLargeRun-4 	       1	4005579847 ns/op	   34128 B/op	     540 allocs/op
BenchmarkLimitedOverconsumeLargeRun-8 	       1	4004639857 ns/op	   34992 B/op	     553 allocs/op
BenchmarkLimitedOverconsumeLargeRun-16	       1	4022695297 ns/op	   36864 B/op	     532 allocs/op
BenchmarkLimitedBatchSmallRun         	       1	1000785511 ns/op	    6336 B/op	      94 allocs/op
BenchmarkLimitedBatchSmallRun-2       	       1	1001459945 ns/op	    4480 B/op	      65 allocs/op
BenchmarkLimitedBatchSmallRun-4       	       1	1002475371 ns/op	    6672 B/op	      99 allocs/op
BenchmarkLimitedBatchSmallRun-8       	       1	1002498902 ns/op	    4624 B/op	      67 allocs/op
BenchmarkLimitedBatchSmallRun-16      	       1	1002202273 ns/op	    5344 B/op	      78 allocs/op
BenchmarkUnlimitedSmallRun            	       1	1002361538 ns/op	    3696 B/op	      59 allocs/op
BenchmarkUnlimitedSmallRun-2          	       1	1002230293 ns/op	    3776 B/op	      60 allocs/op
BenchmarkUnlimitedSmallRun-4          	       1	1002148953 ns/op	    3776 B/op	      60 allocs/op
BenchmarkUnlimitedSmallRun-8          	       1	1002120679 ns/op	    3584 B/op	      57 allocs/op
BenchmarkUnlimitedSmallRun-16         	       1	1001698519 ns/op	    3968 B/op	      63 allocs/op
BenchmarkUnlimitedSmallCancel         	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedSmallCancel-2       	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedSmallCancel-4       	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedSmallCancel-8       	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedSmallCancel-16      	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedLargeCancel         	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedLargeCancel-2       	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedLargeCancel-4       	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedLargeCancel-8       	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedLargeCancel-16      	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedLargeRun            	       1	1001631711 ns/op	   40352 B/op	     603 allocs/op
BenchmarkUnlimitedLargeRun-2          	       1	1002603908 ns/op	   38304 B/op	     586 allocs/op
BenchmarkUnlimitedLargeRun-4          	       1	1001452975 ns/op	   38192 B/op	     584 allocs/op
BenchmarkUnlimitedLargeRun-8          	       1	1005382882 ns/op	   35200 B/op	     537 allocs/op
BenchmarkUnlimitedLargeRun-16         	       1	1001818482 ns/op	   37056 B/op	     566 allocs/op
BenchmarkUnlimitedBatchSmallRun       	       1	1002391247 ns/op	    4240 B/op	      63 allocs/op
BenchmarkUnlimitedBatchSmallRun-2     	       1	1010313222 ns/op	    4688 B/op	      70 allocs/op
BenchmarkUnlimitedBatchSmallRun-4     	       1	1008364651 ns/op	    4304 B/op	      64 allocs/op
BenchmarkUnlimitedBatchSmallRun-8     	       1	1001858192 ns/op	    4448 B/op	      66 allocs/op
BenchmarkUnlimitedBatchSmallRun-16    	       1	1001228000 ns/op	    4320 B/op	      64 allocs/op

To put some of these benchmarks in perspective:

  • BenchmarkLimitedSmallRun did 10 seconds worth of processing in 1.002492008s
  • BenchmarkLimitedSmallCancel ran 20 jobs, cancelled on job 6, and ran in 0s
  • BenchmarkLimitedLargeCancel ran 1000 jobs, cancelled on job 6, and ran in 0s
  • BenchmarkLimitedOverconsumeLargeRun ran 100 jobs using 25 workers in 4.027153081s

License

Distributed under the MIT License; please see the license file in the code for more details.

Documentation

Overview

Package pool implements a limited consumer goroutine or unlimited goroutine pool for easier goroutine handling and cancellation.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Batch

type Batch interface {

	// Queue queues the work to be run in the pool and starts processing immediately
	// and also retains a reference for Cancellation and outputting to results.
	// WARNING be sure to call QueueComplete() once all work has been Queued.
	Queue(fn WorkFunc)

	// QueueComplete lets the batch know that there will be no more Work Units Queued
	// so that it may close the results channels once all work is completed.
	// WARNING: if this function is not called the results channel will never exhaust,
	// but block forever listening for more results.
	QueueComplete()

	// Cancel cancels the Work Units belonging to this Batch
	Cancel()

	// Results returns a Work Unit result channel that will output all
	// completed units of work.
	Results() <-chan WorkUnit

	// WaitAll is an alternative to Results() where you
	// may want/need to wait until all work has been
	// processed, but don't need to check results.
	// eg. individual units of work may handle their own
	// errors, logging...
	WaitAll()
}

Batch contains all information for a batch run of WorkUnits
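
For example, here is a minimal sketch of using WaitAll() instead of Results(), assuming a hypothetical logAndSend WorkFunc that handles (logs) its own errors so the caller never inspects results.

package main

import (
	"fmt"
	"log"

	"gopkg.in/go-playground/pool.v3"
)

// logAndSend is a hypothetical WorkFunc that handles its own errors,
// so the caller does not need to read individual results.
func logAndSend(msg string) pool.WorkFunc {
	return func(wu pool.WorkUnit) (interface{}, error) {

		if wu.IsCancelled() {
			return nil, nil
		}

		if err := send(msg); err != nil {
			log.Println("send failed:", err)
		}

		return nil, nil
	}
}

// send is a hypothetical helper standing in for real work.
func send(msg string) error {
	fmt.Println("sending:", msg)
	return nil
}

func main() {

	p := pool.NewLimited(4)
	defer p.Close()

	batch := p.Batch()

	go func() {
		for i := 0; i < 5; i++ {
			batch.Queue(logAndSend(fmt.Sprintf("message %d", i)))
		}
		batch.QueueComplete() // still required, or the Batch will deadlock
	}()

	// block until every queued Work Unit has been processed,
	// without reading individual results
	batch.WaitAll()
}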

type ErrCancelled

type ErrCancelled struct {
	// contains filtered or unexported fields
}

ErrCancelled is the error returned to a Work Unit when it has been cancelled.

func (*ErrCancelled) Error

func (e *ErrCancelled) Error() string

Error prints Work Unit Cancellation error

type ErrPoolClosed

type ErrPoolClosed struct {
	// contains filtered or unexported fields
}

ErrPoolClosed is the error returned to all work units that may have been in, or added to, the pool after its closing.

func (*ErrPoolClosed) Error

func (e *ErrPoolClosed) Error() string

Error prints Work Unit Close error

type ErrRecovery

type ErrRecovery struct {
	// contains filtered or unexported fields
}

ErrRecovery contains the error when a consumer goroutine needed to be recovered.

func (*ErrRecovery) Error

func (e *ErrRecovery) Error() string

Error prints recovery error

type Pool

type Pool interface {

	// Queue queues the work to be run, and starts processing immediately
	Queue(fn WorkFunc) WorkUnit

	// Reset reinitializes a pool that has been closed/cancelled back to a working
	// state. If the pool has not been closed/cancelled, nothing happens as the pool
	// is still in a valid running state
	Reset()

	// Cancel cancels any pending work still not committed to processing.
	// Call Reset() to reinitialize the pool for use.
	Cancel()

	// Close cleans up pool data and cancels any pending work still not committed
	// to processing. Call Reset() to reinitialize the pool for use.
	Close()

	// Batch creates a new Batch object for queueing Work Units separate from any
	// others that may be running on the pool. Grouping these Work Units together
	// allows for individual Cancellation of the Batch Work Units without affecting
	// anything else running on the pool as well as outputting the results on a
	// channel as they complete. NOTE: Batch is not reusable, once QueueComplete()
	// has been called its lifetime has been sealed to completing the Queued items.
	Batch() Batch
}

Pool contains all information for a pool instance.
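
As a brief illustration of the Reset() behaviour described above, a cancelled (or closed) pool can be reinitialized and reused. This is a minimal sketch; the noop WorkFunc is hypothetical.

package main

import (
	"fmt"

	"gopkg.in/go-playground/pool.v3"
)

// noop is a hypothetical WorkFunc used only for illustration.
func noop(n int) pool.WorkFunc {
	return func(wu pool.WorkUnit) (interface{}, error) {

		if wu.IsCancelled() {
			return nil, nil
		}

		return n, nil
	}
}

func main() {

	p := pool.NewLimited(4)

	wu := p.Queue(noop(1))
	wu.Wait()
	fmt.Println(wu.Value(), wu.Error())

	// Cancel (or Close) discards any pending work and leaves the pool
	// in a non-running state...
	p.Cancel()

	// ...and Reset() reinitializes it so it can be reused
	p.Reset()

	wu = p.Queue(noop(2))
	wu.Wait()
	fmt.Println(wu.Value(), wu.Error())

	p.Close()
}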

func New

func New() Pool

New returns a new unlimited pool instance

func NewLimited

func NewLimited(workers uint) Pool

NewLimited returns a new limited pool instance
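
A short sketch of choosing between the two constructors; the worker count of 25 is arbitrary.

package main

import "gopkg.in/go-playground/pool.v3"

func main() {

	// limited pool: at most 25 consumer goroutines run at once
	limited := pool.NewLimited(25)
	defer limited.Close()

	// unlimited pool: no fixed cap on consumer goroutines
	unlimited := pool.New()
	defer unlimited.Close()

	// both satisfy the same Pool interface and are completely interchangeable
}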

type WorkFunc

type WorkFunc func(wu WorkUnit) (interface{}, error)

WorkFunc is the function type needed by the pool for execution

type WorkUnit

type WorkUnit interface {

	// Wait blocks until WorkUnit has been processed or cancelled
	Wait()

	// Value returns the work unit's return value
	Value() interface{}

	// Error returns the Work Unit's error
	Error() error

	// Cancel cancels this specific unit of work, if not already committed
	// to processing.
	Cancel()

	// IsCancelled returns if the Work Unit has been cancelled.
	// NOTE: After checking IsCancelled(), if it returns false the
	// Work Unit can no longer be cancelled and will use your returned values.
	IsCancelled() bool
}

WorkUnit contains a single unit of work's values.
