dhammer

Published: Aug 10, 2021 License: Apache-2.0 Imports: 5 Imported by: 0

README

dfuse Hammer Library


This is a helper library for serializing concurrent and grouped task results. It is useful when you want to debounce network calls that may or may not be grouped in a batch call. It is used as part of dfuse.

Usage

See example usage in dfuse for EOSIO.

Contributing

Issues and PRs in this repo should relate strictly to the dhammer library.

Report any protocol-specific issues in their respective repositories.

If you wish to contribute to this code base, please first refer to the general dfuse contribution guide.

License

Apache 2.0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Hammer

type Hammer struct {
	*shutter.Shutter
	In  chan interface{}
	Out chan interface{}
	// contains filtered or unexported fields
}

Hammer is a tool that batches and parallelizes tasks from the 'In' channel and writes results to the 'Out' channel. It can optimize performance in two ways:

  1. It calls your HammerFunc with a maximum of `batchSize` values taken from the 'In' channel (batching)
  2. It calls your HammerFunc a maximum of `maxConcurrency` times in parallel (debouncing)

Both approaches give good results, but combining them gives the best results, especially a large batch size paired with a small degree of debouncing.

Cancelling the context shuts down the batcher immediately. Calling "Close" closes the `In` chan and finishes processing; the Hammer then closes the `Out` chan and shuts down.
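
For orientation, here is a minimal end-to-end sketch of the flow described above. It assumes the module's import path is github.com/dfuse-io/dhammer and that the HammerFunc returns one output per input; both are assumptions to adjust to your own setup, not guarantees made by this documentation.

package main

import (
	"context"
	"fmt"

	"github.com/dfuse-io/dhammer" // assumed import path
)

func main() {
	ctx := context.Background()

	// Hypothetical HammerFunc: receives up to `batchSize` inputs at once and
	// returns one result per input (assumed contract).
	doubleBatch := func(ctx context.Context, ins []interface{}) ([]interface{}, error) {
		outs := make([]interface{}, 0, len(ins))
		for _, in := range ins {
			outs = append(outs, in.(int)*2)
		}
		return outs, nil
	}

	h := dhammer.NewHammer(10, 2, doubleBatch) // batchSize=10, maxConcurrency=2
	h.Start(ctx)

	go func() {
		for i := 0; i < 25; i++ {
			h.In <- i
		}
		h.Close() // closes In; Out is closed once all batches are processed
	}()

	// Consume results until the Hammer closes Out.
	for out := range h.Out {
		fmt.Println(out)
	}
}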

func NewHammer

func NewHammer(batchSize, maxConcurrency int, hammerFunc HammerFunc, options ...HammerOption) *Hammer

NewHammer returns a single-use batcher. The internal `startSingle` flag (set via the `FirstBatchUnitary` option) forces the batcher to run the first batch with a single object in it.

func (*Hammer) Close

func (h *Hammer) Close()

func (*Hammer) Start

func (h *Hammer) Start(ctx context.Context)

type HammerFunc

type HammerFunc func(context.Context, []interface{}) ([]interface{}, error)

type HammerOption

type HammerOption = func(h *Hammer)

func FirstBatchUnitary

func FirstBatchUnitary() HammerOption

func HammerLogger

func HammerLogger(logger *zap.Logger) HammerOption

func SetInChanSize

func SetInChanSize(size int) HammerOption
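
To show how the options above compose, here is a hedged sketch. The import path and the exact semantics of SetInChanSize (taken here to be the buffer size of the In channel) are assumptions.

package main

import (
	"context"

	"go.uber.org/zap"

	"github.com/dfuse-io/dhammer" // assumed import path
)

func main() {
	logger, _ := zap.NewDevelopment()

	passthrough := func(ctx context.Context, ins []interface{}) ([]interface{}, error) {
		return ins, nil // forward inputs unchanged
	}

	h := dhammer.NewHammer(
		50, // batchSize
		8,  // maxConcurrency
		passthrough,
		dhammer.FirstBatchUnitary(),  // run the first batch with a single object
		dhammer.HammerLogger(logger), // attach a zap logger
		dhammer.SetInChanSize(1024),  // assumed: buffer size of the In channel
	)
	h.Start(context.Background())
	h.Close()

	// Drain until the Hammer closes Out after Close.
	for range h.Out {
	}
}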

type Nailer

type Nailer struct {
	*shutter.Shutter

	In  chan interface{}
	Out chan interface{}
	// contains filtered or unexported fields
}
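
Judging from the signatures below, the Nailer parallelizes single-item work (no batching). A minimal sketch follows, assuming the github.com/dfuse-io/dhammer import path and that each successful call produces exactly one value on the Out channel.

package main

import (
	"context"
	"fmt"

	"github.com/dfuse-io/dhammer" // assumed import path
)

func main() {
	ctx := context.Background()

	// Hypothetical NailerFunc: processes one input, returns one output.
	double := func(ctx context.Context, in interface{}) (interface{}, error) {
		return in.(int) * 2, nil
	}

	n := dhammer.NewNailer(4, double) // up to 4 concurrent calls
	n.Start(ctx)

	go func() {
		for i := 0; i < 10; i++ {
			n.Push(ctx, i)
		}
	}()

	// Read back one output per pushed input (completion order not assumed).
	for i := 0; i < 10; i++ {
		fmt.Println(<-n.Out)
	}
	n.Close()
}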

func NewNailer

func NewNailer(maxConcurrency int, nailerFunc NailerFunc, options ...NailerOption) *Nailer

func (*Nailer) Close

func (n *Nailer) Close()

func (*Nailer) Drain

func (n *Nailer) Drain()

func (*Nailer) Push

func (n *Nailer) Push(ctx context.Context, in interface{})

func (*Nailer) PushAll

func (n *Nailer) PushAll(ctx context.Context, ins []interface{})

func (*Nailer) Start

func (n *Nailer) Start(ctx context.Context)

func (*Nailer) WaitUntilEmpty

func (n *Nailer) WaitUntilEmpty(ctx context.Context)

WaitUntilEmpty waits until there is no more input nor any active in-flight operation, blocking the current goroutine along the way. The output must be consumed for this method to work. You should use the `NailerDiscardAll()` option if you don't care about the output.

**Important** You are responsible for ensuring that no new inputs are pushed while waiting. This method does not protect against such a case right now and could unblock just before a new input is pushed, which would make the instance non-empty again.
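
Putting WaitUntilEmpty and NailerDiscardAll together, here is a hedged fire-and-forget sketch; the import path is assumed.

package main

import (
	"context"
	"log"

	"github.com/dfuse-io/dhammer" // assumed import path
)

func main() {
	ctx := context.Background()

	notify := func(ctx context.Context, in interface{}) (interface{}, error) {
		log.Printf("processing %v", in) // hypothetical per-item work
		return nil, nil
	}

	// NailerDiscardAll drops results, so no consumer on Out is needed.
	n := dhammer.NewNailer(4, notify, dhammer.NailerDiscardAll())
	n.Start(ctx)

	n.PushAll(ctx, []interface{}{"a", "b", "c"})
	n.WaitUntilEmpty(ctx) // blocks until all in-flight work has completed
	n.Close()
}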

type NailerFunc

type NailerFunc func(context.Context, interface{}) (interface{}, error)

type NailerOption

type NailerOption = func(h *Nailer)

func NailerDiscardAll

func NailerDiscardAll() NailerOption

func NailerLogger

func NailerLogger(logger *zap.Logger) NailerOption
