Documentation ¶
Index ¶
- Constants
- Variables
- func AddToSlice[T any](b []T, item T) []T
- func CountMapKeys[V any, K comparable]() func(map[K]V, int64) int64
- func InitMap[K comparable, V any](i int64) map[K]V
- func InitSlice[T any](i int64) []T
- func InitType[T any](_ int64) T
- func LoggingErrorHandler[B any](_ B, count int64, err error) error
- func NewErrorWithRemaining[B any](err error, remainBatch B, count int64) error
- type CombineFn
- type Error
- type ExtractFn
- type Future
- type IFuture
- type ILoader
- type IProcessor
- type InitBatchFn
- type KeyVal
- type LoadBatchFn
- type LoadKeys
- type Loader
- func (l *Loader[K, T]) Close() error
- func (l *Loader[K, T]) CloseContext(ctx context.Context) error
- func (l *Loader[K, T]) Flush()
- func (l *Loader[K, T]) FlushContext(ctx context.Context) error
- func (l *Loader[K, T]) Get(key K) (T, error)
- func (l *Loader[K, T]) GetAll(keys []K) (map[K]T, error)
- func (l *Loader[K, T]) GetAllContext(ctx context.Context, keys []K) (map[K]T, error)
- func (l *Loader[K, T]) GetContext(ctx context.Context, key K) (T, error)
- func (l *Loader[K, T]) Load(key K) *Future[T]
- func (l *Loader[K, T]) LoadAll(keys []K) map[K]*Future[T]
- func (l *Loader[K, T]) LoadAllContext(ctx context.Context, keys []K) map[K]*Future[T]
- func (l *Loader[K, T]) LoadContext(ctx context.Context, key K) *Future[T]
- func (l *Loader[K, T]) StopContext(ctx context.Context) error
- type LoaderSetup
- type MergeToBatchFn
- type Option
- func WithAggressiveMode() Option
- func WithBlockWhileProcessing() Option
- func WithDisabledDefaultProcessErrorLog() Option
- func WithHardMaxWait(wait time.Duration) Option
- func WithMaxCloseWait(wait time.Duration) Option
- func WithMaxConcurrency[I size](concurrency I) Option
- func WithMaxItem[I size](maxItem I) Option
- func WithMaxWait(wait time.Duration) Option
- type ProcessBatchFn
- type Processor
- func (p *Processor[T, B]) ApproxItemCount() int64
- func (p *Processor[T, B]) Close() error
- func (p *Processor[T, B]) CloseContext(ctx context.Context) error
- func (p *Processor[T, B]) DrainContext(ctx context.Context) error
- func (p *Processor[T, B]) Flush()
- func (p *Processor[T, B]) FlushContext(ctx context.Context) error
- func (p *Processor[T, B]) IsDisabled() bool
- func (p *Processor[T, B]) ItemCount() int64
- func (p *Processor[T, B]) ItemCountContext(ctx context.Context) (int64, bool)
- func (p *Processor[T, B]) Merge(item T, merge MergeToBatchFn[B, T])
- func (p *Processor[T, B]) MergeAll(items []T, merge MergeToBatchFn[B, T])
- func (p *Processor[T, B]) MergeAllContext(ctx context.Context, items []T, merge MergeToBatchFn[B, T]) int
- func (p *Processor[T, B]) MergeContext(ctx context.Context, item T, merge MergeToBatchFn[B, T]) bool
- func (p *Processor[T, B]) Peek(reader ProcessBatchFn[B]) error
- func (p *Processor[T, B]) PeekContext(ctx context.Context, reader ProcessBatchFn[B]) error
- func (p *Processor[T, B]) Put(item T)
- func (p *Processor[T, B]) PutAll(items []T)
- func (p *Processor[T, B]) PutAllContext(ctx context.Context, items []T) int
- func (p *Processor[T, B]) PutContext(ctx context.Context, item T) bool
- func (p *Processor[T, B]) StopContext(ctx context.Context) error
- type ProcessorSetup
- func NewMapProcessor[T any, K comparable, V any](extractor ExtractFn[T, KeyVal[K, V]], combiner CombineFn[V]) ProcessorSetup[T, map[K]V]
- func NewProcessor[T any, B any](init InitBatchFn[B], merge MergeToBatchFn[B, T]) ProcessorSetup[T, B]
- func NewSelfMapProcessor[T any, K comparable](keyExtractor ExtractFn[T, K], combiner CombineFn[T]) ProcessorSetup[T, map[K]T]
- func NewSliceProcessor[T any]() ProcessorSetup[T, []T]
- type RecoverBatchFn
- type RunOption
- type SplitBatchFn
Constants ¶
const Unset = -1
Unset is a special value for various Option functions, usually meaning unrestricted, unlimited, or disabled. Read the documentation of the corresponding function to learn what this value does.
Variables ¶
var ErrLoadMissingResult = errors.New("empty missing result for key")
ErrLoadMissingResult is the default error for keys that are missing from the result. It can be configured via LoaderSetup.WithMissingResultError.
Functions ¶
func AddToSlice ¶
func AddToSlice[T any](b []T, item T) []T
AddToSlice is a MergeToBatchFn that appends an item to a slice.
func CountMapKeys ¶ added in v2.1.0
func CountMapKeys[V any, K comparable]() func(map[K]V, int64) int64
CountMapKeys creates a counter that counts the keys in a map.
func InitMap ¶
func InitMap[K comparable, V any](i int64) map[K]V
InitMap is an InitBatchFn that allocates a map. It uses the default map size, as the number of input items may be much larger than the size of the map after merging. However, if you have properly configured WithBatchCounter to count the size of the map and WithMaxItem to a reasonable value, you may benefit from pre-sizing the map using your own InitBatchFn.
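As a sketch of the tradeoff described above, a custom InitBatchFn can pre-size the map when a useful limit is known. The helper below is illustrative only, not part of this package:

```go
package main

import "fmt"

// initSizedMap is a sketch of a custom InitBatchFn: pre-size the map when a
// positive max item limit is known, otherwise fall back to the default size
// (the limit may be non-positive, e.g. -1 for unlimited).
func initSizedMap[K comparable, V any](maxItem int64) map[K]V {
	if maxItem > 0 {
		return make(map[K]V, maxItem)
	}
	return make(map[K]V)
}

func main() {
	m := initSizedMap[string, int](128)
	m["x"] = 1
	fmt.Println(len(m))
}
```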
func LoggingErrorHandler ¶
LoggingErrorHandler is the default error handler, always included in the RecoverBatchFn chain unless disabled.
Types ¶
type CombineFn ¶
type CombineFn[T any] func(T, T) T
CombineFn is a function to combine two values into one.
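As a self-contained illustration (the CombineFn type is re-declared here to match the documented signature), a combiner that keeps the larger of two values:

```go
package main

import "fmt"

// CombineFn mirrors the documented signature: func(T, T) T.
type CombineFn[T any] func(T, T) T

// maxInt keeps the larger of two values; per the map-merge helpers in this
// package, the original value arrives as the 1st argument.
var maxInt CombineFn[int] = func(old, incoming int) int {
	if incoming > old {
		return incoming
	}
	return old
}

func main() {
	fmt.Println(maxInt(3, 7), maxInt(9, 2)) // 7 9
}
```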
type Error ¶
type Error[B any] struct {
	// Cause the error cause. If not specified, then nil will be passed to the next error handler.
	Cause error
	// RemainingBatch the batch to pass to the next handler. The RemainingCount must be specified.
	RemainingBatch B
	// RemainingCount number of items to pass to the next handler.
	// If RemainingCount = 0 and Cause != nil then pass the original batch and count to the next handler.
	RemainingCount int64
}
Error is an error wrapper that supports passing remaining items to the RecoverBatchFn.
type Future ¶
type Future[T any] struct {
	// contains filtered or unexported fields
}
Future implements IFuture. It can be used to get the result of a load request.
func (*Future[T]) Get ¶
Get waits until the result is available and returns it. This method can block indefinitely. It is recommended to use Future.GetContext instead.
func (*Future[T]) GetContext ¶
GetContext waits until the result is available and returns it. The context can be used to cancel the wait (not the load request).
type IFuture ¶ added in v2.3.0
type IFuture[T any] interface {
	// Get waits until the result is available.
	Get() (T, error)
	// GetContext waits until the result is available.
	// The context can be used to cancel the wait (not the task).
	GetContext(ctx context.Context) (T, error)
}
IFuture is a future that can be used to get the result of a task.
type ILoader ¶ added in v2.3.0
type ILoader[K comparable, T any] interface {
	// Get registers a key to be loaded and waits for it to be loaded.
	// This method can block until the loader is available for loading a new batch.
	// It is recommended to use [ILoader.GetContext] instead.
	Get(key K) (T, error)

	// GetContext registers a key to be loaded and waits for it to be loaded.
	//
	// The context can be used to provide a deadline for this method.
	// Cancelling the context may stop the loader from loading the key.
	GetContext(ctx context.Context, key K) (T, error)

	// GetAll registers keys to be loaded and waits for all of them to be loaded.
	// This method can block until the loader is available for loading a new batch,
	// and may block indefinitely.
	// It is recommended to use [ILoader.GetAllContext] instead.
	GetAll(keys []K) (map[K]T, error)

	// GetAllContext registers keys to be loaded and waits for all of them to be loaded.
	// The context can be used to provide a deadline for this method.
	//
	// If the context times out, the keys that were loaded are returned along with the error.
	// Cancelling the context may stop the loader from loading the keys.
	GetAllContext(ctx context.Context, keys []K) (map[K]T, error)

	// Load registers a key to be loaded and returns a [Future] for waiting for the result.
	// This method can block until the loader is available for loading a new batch.
	// It is recommended to use [ILoader.LoadContext] instead.
	Load(key K) *Future[T]

	// LoadContext registers a key to be loaded and returns a [Future] for waiting for the result.
	//
	// The context can be used to provide a deadline for this method.
	// Cancelling the context may stop the loader from loading the key.
	LoadContext(ctx context.Context, key K) *Future[T]

	// LoadAll registers keys to be loaded and returns a map of [Future] for waiting for the results.
	// This method can block until the loader is available for loading a new batch,
	// and may block indefinitely.
	// It is recommended to use [ILoader.LoadAllContext] instead.
	LoadAll(keys []K) map[K]*Future[T]

	// LoadAllContext registers keys to be loaded and returns a map of [Future] for waiting for the results.
	//
	// The context can be used to provide a deadline for this method.
	// Cancelling the context may stop the loader from loading the keys.
	LoadAllContext(ctx context.Context, keys []K) map[K]*Future[T]

	// Close stops the loader.
	// This method may load the leftover batch on the caller thread.
	// The implementation of this method may vary, but it must never wait indefinitely.
	Close() error

	// CloseContext stops the loader.
	// This method may load the leftover batch on the caller thread.
	// The context can be used to provide a deadline for this method.
	CloseContext(ctx context.Context) error

	// StopContext stops the loader.
	// This method does not load the leftover batch.
	StopContext(ctx context.Context) error

	// FlushContext forces loading of the current batch.
	// This method may load the batch on the caller thread.
	// The context can be used to provide a deadline for this method.
	FlushContext(ctx context.Context) error

	// Flush forces loading of the current batch.
	// This method may load the batch on the caller thread.
	// It is recommended to use [ILoader.FlushContext] instead.
	Flush()
}
type IProcessor ¶ added in v2.2.0
type IProcessor[T any, B any] interface {
	// Put adds an item to the processor.
	// This method can block until the processor is available for processing a new batch,
	// and may block indefinitely.
	// It is recommended to use [IProcessor.PutContext] instead.
	Put(item T)

	// PutAll adds all the specified items to the processor.
	// This method can block until the processor is available for processing a new batch,
	// and may block indefinitely.
	// It is recommended to use [IProcessor.PutAllContext] instead.
	PutAll(items []T)

	// PutContext adds an item to the processor.
	// If the context is canceled and the item was not added, this method returns false.
	// The context only controls the put step; once the item is added to the processor,
	// the processing will not be canceled by this context.
	PutContext(ctx context.Context, item T) bool

	// PutAllContext adds all items to the processor.
	// If the context is canceled, this method returns the number of items added to the processor.
	PutAllContext(ctx context.Context, items []T) int

	// Merge adds an item to the processor using the merge function.
	// This method can block until the processor is available for processing a new batch,
	// and may block indefinitely.
	// It is recommended to use [IProcessor.MergeContext] instead.
	Merge(item T, merge MergeToBatchFn[B, T])

	// MergeAll adds all items to the processor using the merge function.
	// This method can block until the processor is available for processing a new batch,
	// and may block indefinitely.
	// It is recommended to use [IProcessor.MergeAllContext] instead.
	MergeAll(items []T, merge MergeToBatchFn[B, T])

	// MergeContext adds an item to the processor using the merge function.
	// If the context is canceled and the item was not added, this method returns false.
	// The context only controls the put step; once the item is added to the processor,
	// the processing will not be canceled by this context.
	MergeContext(ctx context.Context, item T, merge MergeToBatchFn[B, T]) bool

	// MergeAllContext adds all items to the processor using the merge function.
	// If the context is canceled, this method returns the number of items added to the processor.
	MergeAllContext(ctx context.Context, items []T, merge MergeToBatchFn[B, T]) int

	// Peek accesses the current batch using the provided function.
	// This method can block until the processor is available.
	// It is recommended to use [IProcessor.PeekContext] instead.
	// This method does not count as processing the batch; the batch will still be processed.
	Peek(reader ProcessBatchFn[B]) error

	// PeekContext accesses the current batch using the provided function.
	// This method does not count as processing the batch; the batch will still be processed.
	PeekContext(ctx context.Context, reader ProcessBatchFn[B]) error

	// ApproxItemCount returns the approximate number of items currently in the processor.
	ApproxItemCount() int64

	// ItemCount returns the number of items currently in the processor.
	ItemCount() int64

	// ItemCountContext returns the number of items currently in the processor.
	// If the context is canceled, this method returns the approximate item count and false.
	ItemCountContext(ctx context.Context) (int64, bool)

	// Close stops the processor.
	// This method may process the leftover batch on the caller thread.
	// The implementation of this method may vary, but it must never wait indefinitely.
	Close() error

	// CloseContext stops the processor.
	// This method may process the leftover batch on the caller thread.
	// The context can be used to provide a deadline for this method.
	CloseContext(ctx context.Context) error

	// StopContext stops the processor.
	// This method does not process the leftover batch.
	StopContext(ctx context.Context) error

	// DrainContext forces processing of batches until the batch is empty.
	// This method may process the batch on the caller thread.
	// The context can be used to provide a deadline for this method.
	DrainContext(ctx context.Context) error

	// FlushContext forces processing of the current batch.
	// This method may process the batch on the caller thread.
	// The context can be used to provide a deadline for this method.
	FlushContext(ctx context.Context) error

	// Flush forces processing of the current batch.
	// This method may process the batch on the caller thread.
	// It is recommended to use [IProcessor.FlushContext] instead.
	Flush()
}
IProcessor provides common methods of a Processor.
type InitBatchFn ¶
InitBatchFn is a function to create an empty batch. It accepts the max item limit, which can be 1 (disabled), -1 (unlimited), or any positive number.
type LoadBatchFn ¶ added in v2.3.0
type LoadBatchFn[K comparable, T any] func(batch LoadKeys[K], count int64) (map[K]T, error)
LoadBatchFn is a function to load a batch of keys. It returns a map of keys to values and an error; if both are non-nil, the error is set for all missing keys. If the result does not contain all keys and the error is nil, the error for the missing keys is set to ErrLoadMissingResult or any configured error.
The LoadBatchFn should not modify the key content.
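A simplified, self-contained sketch of this contract (a plain key slice stands in for the LoadKeys[K] batch here, purely for illustration):

```go
package main

import "fmt"

// loadSquares sketches the LoadBatchFn contract: return a map of results for
// the keys you could load. Keys left out of the result map get
// ErrLoadMissingResult (or the configured missing-result error) from the
// loader; you do not have to fill them in yourself.
func loadSquares(keys []int, count int64) (map[int]int, error) {
	out := make(map[int]int, count)
	for _, k := range keys {
		if k < 0 {
			continue // simulate a missing key: the loader supplies the error
		}
		out[k] = k * k
	}
	return out, nil
}

func main() {
	res, err := loadSquares([]int{2, 3, -1}, 3)
	fmt.Println(len(res), res[2], res[3], err) // 2 4 9 <nil>
}
```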
type Loader ¶ added in v2.3.0
type Loader[K comparable, T any] struct {
	I int32
	// contains filtered or unexported fields
}
Loader is an ILoader that is running and can load items.
func (*Loader[K, T]) Close ¶ added in v2.3.0
Close stops the loader. This method will load the leftover batch on the caller thread.
func (*Loader[K, T]) CloseContext ¶ added in v2.3.0
CloseContext stops the loader. This method may load the leftover batch on the caller thread. The context can be used to provide a deadline for this method.
func (*Loader[K, T]) Flush ¶ added in v2.3.0
func (l *Loader[K, T]) Flush()
Flush forces loading of the current batch. This method may load the batch on the caller thread. It is recommended to use Loader.FlushContext instead.
func (*Loader[K, T]) FlushContext ¶ added in v2.3.0
FlushContext forces loading of the current batch. This method may load the batch on the caller thread. The context can be used to provide a deadline for this method.
func (*Loader[K, T]) Get ¶ added in v2.3.0
Get registers a key to be loaded and waits for it to be loaded. This method can block until the loader is available for loading a new batch. It is recommended to use Loader.GetContext instead.
func (*Loader[K, T]) GetAll ¶ added in v2.3.0
GetAll registers keys to be loaded and waits for all of them to be loaded. This method can block until the loader is available for loading a new batch, and may block indefinitely. It is recommended to use Loader.GetAllContext instead.
func (*Loader[K, T]) GetAllContext ¶ added in v2.3.0
GetAllContext registers keys to be loaded and waits for all of them to be loaded. The context can be used to provide a deadline for this method.
If the context times out, the keys that were loaded are returned along with the error. Cancelling the context may stop the loader from loading the keys.
func (*Loader[K, T]) GetContext ¶ added in v2.3.0
GetContext registers a key to be loaded and waits for it to be loaded.
The context can be used to provide a deadline for this method. Cancelling the context may stop the loader from loading the key.
func (*Loader[K, T]) Load ¶ added in v2.3.0
Load registers a key to be loaded and returns a Future for waiting for the result. This method can block until the loader is available for loading a new batch. It is recommended to use Loader.LoadContext instead.
func (*Loader[K, T]) LoadAll ¶ added in v2.3.0
LoadAll registers keys to be loaded and returns a map of Future for waiting for the results. This method can block until the loader is available for loading a new batch, and may block indefinitely. It is recommended to use Loader.LoadAllContext instead.
func (*Loader[K, T]) LoadAllContext ¶ added in v2.3.0
LoadAllContext registers keys to be loaded and returns a map of Future for waiting for the results.
The context can be used to provide a deadline for this method. Cancelling the context may stop the loader from loading the keys.
func (*Loader[K, T]) LoadContext ¶ added in v2.3.0
LoadContext registers a key to be loaded and returns a Future for waiting for the result.
The context can be used to provide a deadline for this method. Cancelling the context may stop the loader from loading the key.
type LoaderSetup ¶ added in v2.3.0
type LoaderSetup[K comparable, T any] struct {
	// contains filtered or unexported fields
}
LoaderSetup is a batch loader that is in the setup phase (not running). You cannot load items using this loader yet; use LoaderSetup.Run to create a Loader that can load items. See Option for available options.
func NewLoader ¶ added in v2.3.0
func NewLoader[K comparable, T any]() LoaderSetup[K, T]
NewLoader creates a LoaderSetup. See LoaderSetup.Configure and Option for available configuration. The resulting LoaderSetup is in the setup state.
Call LoaderSetup.Run with a handler to create a Loader that can load items. By default, the loader operates with the following configuration:
- WithMaxConcurrency: Unset (unlimited)
- WithMaxItem: 1000 (count keys)
- WithMaxWait: 16ms
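A hypothetical setup sketch wiring these defaults explicitly. The import alias `batch` and the helper `fetchFromDB` are assumptions for illustration, and the LoadBatchFn shape is the one documented above; adjust to your module:

```go
// Hypothetical sketch; `batch` is an assumed alias for this package's import path.
loader := batch.NewLoader[int, string]().
	Configure(batch.WithMaxItem(1000), batch.WithMaxWait(16*time.Millisecond)).
	Run(func(keys batch.LoadKeys[int], count int64) (map[int]string, error) {
		// One round-trip for up to 1000 pending keys.
		// fetchFromDB is a hypothetical helper, not part of this package.
		return fetchFromDB(keys, count)
	})
defer loader.Close()

// Individual Get calls are batched together by the loader.
value, err := loader.GetContext(ctx, 42)
```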
func (LoaderSetup[K, T]) Configure ¶ added in v2.3.0
func (p LoaderSetup[K, T]) Configure(options ...Option) LoaderSetup[K, T]
Configure applies Option to this loader setup.
func (LoaderSetup[K, T]) Run ¶ added in v2.3.0
func (p LoaderSetup[K, T]) Run(loadFn LoadBatchFn[K, T], options ...RunOption[LoadKeys[K]]) *Loader[K, T]
Run creates a Loader that can accept items. It accepts a LoadBatchFn and a list of RunOption of type LoadKeys.
func (LoaderSetup[K, T]) WithMissingResultError ¶ added in v2.3.0
func (p LoaderSetup[K, T]) WithMissingResultError(err error) LoaderSetup[K, T]
WithMissingResultError sets the default error for keys that are missing from the result.
type MergeToBatchFn ¶
MergeToBatchFn is a function to add an item to a batch.
func AddSelfToMapUsing ¶ added in v2.1.0
func AddSelfToMapUsing[T any, K comparable](keyExtractor ExtractFn[T, K], combiner CombineFn[T]) MergeToBatchFn[map[K]T, T]
AddSelfToMapUsing creates a MergeToBatchFn that adds the item itself to a map using the key ExtractFn, applying the CombineFn if the key is duplicated. The original value is passed as the 1st parameter to the CombineFn. If the CombineFn is nil, the value of a duplicated key is replaced.
func AddToMapUsing ¶ added in v2.1.0
func AddToMapUsing[T any, K comparable, V any](extractor ExtractFn[T, KeyVal[K, V]], combiner CombineFn[V]) MergeToBatchFn[map[K]V, T]
AddToMapUsing creates a MergeToBatchFn that adds an item to a map using the KeyVal ExtractFn, applying the CombineFn if the key is duplicated. The original value is passed as the 1st parameter to the CombineFn. If the CombineFn is nil, the value of a duplicated key is replaced.
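The duplicate-key semantics of these helpers can be illustrated with a small self-contained re-implementation (illustrative only, not this package's code):

```go
package main

import "fmt"

// addSelfToMap sketches the AddSelfToMapUsing semantics: extract a key from
// the item, and on a duplicate key apply the combiner with the original value
// as the 1st parameter. A nil combiner means the duplicated key is replaced.
func addSelfToMap[T any, K comparable](
	batch map[K]T, item T,
	key func(T) K, combine func(T, T) T,
) map[K]T {
	k := key(item)
	if old, ok := batch[k]; ok && combine != nil {
		batch[k] = combine(old, item) // original value first
		return batch
	}
	batch[k] = item // new key, or nil combiner: replace
	return batch
}

func main() {
	type hit struct {
		url   string
		count int
	}
	m := map[string]hit{}
	merge := func(old, incoming hit) hit { old.count += incoming.count; return old }
	for _, h := range []hit{{"/a", 1}, {"/b", 2}, {"/a", 3}} {
		m = addSelfToMap(m, h, func(h hit) string { return h.url }, merge)
	}
	fmt.Println(m["/a"].count, m["/b"].count) // 4 2
}
```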
type Option ¶
type Option func(*processorConfig)
Option is a general option for the batch processor.
func WithAggressiveMode ¶
func WithAggressiveMode() Option
WithAggressiveMode enables the aggressive mode. In this mode, the processor does not wait for maxWait or maxItems to be reached; it continues processing items and only merges into a batch when needed (for example, when the concurrency limit is reached, or the dispatcher thread is busy). The maxItems configured by WithMaxItem still controls the maximum number of items the processor can hold before blocking. WithBlockWhileProcessing is ignored in this mode.
func WithBlockWhileProcessing ¶
func WithBlockWhileProcessing() Option
WithBlockWhileProcessing makes the processor block while processing items. If concurrency is enabled, the processor only blocks when max concurrency is reached. This option has no effect if the processor is in aggressive mode.
func WithDisabledDefaultProcessErrorLog ¶
func WithDisabledDefaultProcessErrorLog() Option
WithDisabledDefaultProcessErrorLog disables the default error logging when a batch processing error occurs.
func WithHardMaxWait ¶
WithHardMaxWait sets the max waiting time before the processor handles the batch anyway. Unlike WithMaxWait, the batch is processed even if it is empty, which is preferable if the processor must perform some periodic task. You should ONLY configure WithMaxWait OR WithHardMaxWait, NOT BOTH.
func WithMaxCloseWait ¶
WithMaxCloseWait sets the max waiting time when closing the processor.
func WithMaxConcurrency ¶
func WithMaxConcurrency[I size](concurrency I) Option
WithMaxConcurrency sets the max number of goroutines this processor can create when processing items. It supports 0 (run on the dispatcher goroutine) and any fixed number. Passing -1 Unset (unlimited) to this function has the same effect as passing math.MaxInt64.
func WithMaxItem ¶
func WithMaxItem[I size](maxItem I) Option
WithMaxItem sets the max number of items this processor can hold before blocking. It supports a fixed number and -1 Unset (unlimited). When set to unlimited, the processor never blocks, and the batch handling behavior depends on WithMaxWait. When set to 0, the processor is DISABLED and items are processed directly on the caller thread without batching.
func WithMaxWait ¶
WithMaxWait sets the max waiting time before the processor handles the batch anyway. If the batch is empty, it is skipped. The max wait starts counting from the last processed time; it is not a fixed period. It accepts 0 (no wait), -1 Unset (wait until maxItems is reached), or a time.Duration. If set to -1 Unset while maxItems is unlimited, the processor keeps processing whenever possible without waiting for anything.
type ProcessBatchFn ¶
ProcessBatchFn is a function to process a batch. It accepts the current batch and the input count.
type Processor ¶
type Processor[T any, B any] struct {
	ProcessorSetup[T, B]
	// contains filtered or unexported fields
}
Processor is a processor that is running and can process items.
func (*Processor[T, B]) ApproxItemCount ¶
ApproxItemCount returns the number of items currently in the processor. This method does not block, so the counter may not be accurate.
func (*Processor[T, B]) Close ¶
Close stops the processor. This method will process the leftover batch on the caller thread. It returns an error if maxCloseWait has passed. The timeout can be configured by WithMaxCloseWait. See getCloseMaxWait for details.
func (*Processor[T, B]) CloseContext ¶
CloseContext stops the processor. This method will process the leftover batch on the caller thread. The context can be used to provide a deadline for this method.
func (*Processor[T, B]) DrainContext ¶
DrainContext forces processing of batches until the batch is empty. This method always processes the batch on the caller thread. The context can be used to provide a deadline for this method.
func (*Processor[T, B]) Flush ¶
func (p *Processor[T, B]) Flush()
Flush forces processing of the current batch. This method may process the batch on the caller thread, depending on the concurrency and blocking settings. It is recommended to use Processor.FlushContext instead.
func (*Processor[T, B]) FlushContext ¶
FlushContext forces processing of the current batch. This method may process the batch on the caller thread, depending on the concurrency and blocking settings. The context can be used to provide a deadline for this method.
func (*Processor[T, B]) IsDisabled ¶ added in v2.2.0
IsDisabled reports whether the processor is disabled. A disabled processor does no batching; instead, processing is executed on the caller thread. All other settings are ignored when the processor is disabled.
func (*Processor[T, B]) ItemCount ¶
ItemCount returns the number of items currently in the processor. This method blocks the processor for accurate counting. It is recommended to use Processor.ItemCountContext instead.
func (*Processor[T, B]) ItemCountContext ¶
ItemCountContext returns the number of items currently in the processor. If the context is canceled, this method returns the approximate item count and false.
func (*Processor[T, B]) Merge ¶
func (p *Processor[T, B]) Merge(item T, merge MergeToBatchFn[B, T])
Merge adds an item to the processor using the merge function. This method can block until the processor is available for processing a new batch. It is recommended to use Processor.MergeContext instead.
func (*Processor[T, B]) MergeAll ¶
func (p *Processor[T, B]) MergeAll(items []T, merge MergeToBatchFn[B, T])
MergeAll adds all items to the processor using the merge function. This method blocks until all items have been put into the processor. It is recommended to use Processor.MergeAllContext instead.
func (*Processor[T, B]) MergeAllContext ¶
func (p *Processor[T, B]) MergeAllContext(ctx context.Context, items []T, merge MergeToBatchFn[B, T]) int
MergeAllContext adds all items to the processor using the merge function. If the context is canceled, this method returns the number of items added to the processor. The processing order is the same as the input list, so the return value can also be used to determine the next item to process if you want to retry or continue.
func (*Processor[T, B]) MergeContext ¶
func (p *Processor[T, B]) MergeContext(ctx context.Context, item T, merge MergeToBatchFn[B, T]) bool
MergeContext adds an item to the processor using the merge function. If the context is canceled and the item was not added, this method returns false.
func (*Processor[T, B]) Peek ¶
func (p *Processor[T, B]) Peek(reader ProcessBatchFn[B]) error
Peek accesses the current batch using the provided function. This method can block until the processor is available. It is recommended to use Processor.PeekContext instead. This method does not count as processing the batch; the batch will still be processed.
func (*Processor[T, B]) PeekContext ¶
func (p *Processor[T, B]) PeekContext(ctx context.Context, reader ProcessBatchFn[B]) error
PeekContext accesses the current batch using the provided function. This method does not count as processing the batch; the batch will still be processed.
func (*Processor[T, B]) Put ¶
func (p *Processor[T, B]) Put(item T)
Put adds an item to the processor. This method can block until the processor is available for processing a new batch. It is recommended to use Processor.PutContext instead.
func (*Processor[T, B]) PutAll ¶
func (p *Processor[T, B]) PutAll(items []T)
PutAll adds all items to the processor. This method blocks until all items have been put into the processor. It is recommended to use Processor.PutAllContext instead.
func (*Processor[T, B]) PutAllContext ¶
PutAllContext adds all items to the processor. If the context is canceled, this method returns the number of items added to the processor. The processing order is the same as the input list, so the return value can also be used to determine the next item to process if you want to retry or continue.
func (*Processor[T, B]) PutContext ¶
PutContext adds an item to the processor. If the context is canceled and the item was not added, this method returns false.
type ProcessorSetup ¶
ProcessorSetup is a batch processor that is in the setup phase (not running). You cannot put items into this processor yet; use ProcessorSetup.Run to create a Processor that can accept items. See Option for available options.
func NewMapProcessor ¶
func NewMapProcessor[T any, K comparable, V any](extractor ExtractFn[T, KeyVal[K, V]], combiner CombineFn[V]) ProcessorSetup[T, map[K]V]
NewMapProcessor prepares a processor backed by a map. If the CombineFn is nil, the value of a duplicated key is replaced.
func NewProcessor ¶
func NewProcessor[T any, B any](init InitBatchFn[B], merge MergeToBatchFn[B, T]) ProcessorSetup[T, B]
NewProcessor creates a ProcessorSetup using the specified functions. See ProcessorSetup.Configure and Option for available configuration. The resulting ProcessorSetup is in the setup state. Call ProcessorSetup.Run with a handler to create a Processor that can accept items. It is recommended to set at least maxWait via WithMaxWait or maxItem via WithMaxItem. By default, the processor operates similarly to aggressive mode; use Configure to change its behavior.
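A hypothetical end-to-end setup sketch. The import alias `batch` is an assumption for illustration, and the handler signature (batch, count) returning error follows the ProcessBatchFn description above; adjust to your module:

```go
// Hypothetical sketch; `batch` is an assumed alias for this package's import path.
p := batch.NewSliceProcessor[string]().
	Configure(batch.WithMaxItem(500), batch.WithMaxWait(50*time.Millisecond)).
	Run(func(b []string, count int64) error {
		fmt.Printf("flushing %d items\n", count)
		return nil // a non-nil error would trigger the RecoverBatchFn chain
	})
defer p.Close()

p.Put("hello") // batched; flushed at 500 items or after 50ms, whichever first
```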
func NewSelfMapProcessor ¶ added in v2.1.0
func NewSelfMapProcessor[T any, K comparable](keyExtractor ExtractFn[T, K], combiner CombineFn[T]) ProcessorSetup[T, map[K]T]
NewSelfMapProcessor prepares a processor backed by a map, using the item as the value without extracting. If the CombineFn is nil, the value of a duplicated key is replaced.
func NewSliceProcessor ¶
func NewSliceProcessor[T any]() ProcessorSetup[T, []T]
NewSliceProcessor prepares a processor backed by a slice.
func (ProcessorSetup[T, B]) Configure ¶
func (p ProcessorSetup[T, B]) Configure(options ...Option) ProcessorSetup[T, B]
Configure applies Option to this processor setup.
func (ProcessorSetup[T, B]) Run ¶
func (p ProcessorSetup[T, B]) Run(process ProcessBatchFn[B], options ...RunOption[B]) *Processor[T, B]
Run creates a Processor that can accept items. It accepts a ProcessBatchFn and a list of RunOption.
type RecoverBatchFn ¶
RecoverBatchFn is a function to handle an errored batch. Each RecoverBatchFn can in turn return an error to enable the next RecoverBatchFn in the chain. A RecoverBatchFn must never panic.
It accepts the current batch and a count along with the previous error. The count and batch can be controlled by returning an Error; otherwise the next handler receives the same arguments as the ProcessBatchFn.
type RunOption ¶
type RunOption[B any] func(*runConfig[B])
RunOption is an option for batch processing.
func WithBatchCounter ¶
WithBatchCounter provides an alternative function to count the number of items in a batch. The function receives the current batch and the total count of input items for the current batch.
func WithBatchErrorHandlers ¶
func WithBatchErrorHandlers[B any](handlers ...RecoverBatchFn[B]) RunOption[B]
WithBatchErrorHandlers provides a RecoverBatchFn chain to run on error. Each RecoverBatchFn can in turn return an error to enable the next RecoverBatchFn in the chain. A RecoverBatchFn must never panic.
func WithBatchLoaderCountInput ¶ added in v2.3.0
func WithBatchLoaderCountInput[K comparable]() RunOption[LoadKeys[K]]
WithBatchLoaderCountInput unsets the current count function. It is typically used with a Loader to specify that the limit should use the number of pending load requests instead of the number of pending keys.
func WithBatchSplitter ¶
func WithBatchSplitter[B any](split SplitBatchFn[B]) RunOption[B]
WithBatchSplitter splits the batch into multiple smaller batches. When concurrency > 0 and a SplitBatchFn is set, the processor splits the batch and processes it across multiple threads; otherwise the batch is processed on a single thread, blocking when the concurrency limit is reached. This configuration may be beneficial if you have a very large batch that can be split into smaller batches and processed in parallel.
type SplitBatchFn ¶
SplitBatchFn is a function to split a batch into multiple smaller batches. It accepts the current batch and the input count. A SplitBatchFn must never panic.
func SplitSliceEqually ¶
func SplitSliceEqually[T any, I size](numberOfChunk I) SplitBatchFn[[]T]
SplitSliceEqually creates a SplitBatchFn that splits a slice into multiple equal chunks.
func SplitSliceSizeLimit ¶
func SplitSliceSizeLimit[T any, I size](maxSizeOfChunk I) SplitBatchFn[[]T]
SplitSliceSizeLimit creates a SplitBatchFn that splits a slice into multiple chunks of limited size.
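An illustrative re-implementation of the size-limited variant (not this package's code) shows the expected chunking:

```go
package main

import "fmt"

// splitSizeLimit sketches the SplitSliceSizeLimit behavior: a SplitBatchFn-like
// function that cuts a slice into chunks of at most maxSize items. The chunks
// share the original backing array; no items are copied.
func splitSizeLimit[T any](maxSize int) func([]T, int64) [][]T {
	return func(batch []T, _ int64) [][]T {
		var chunks [][]T
		for len(batch) > maxSize {
			chunks = append(chunks, batch[:maxSize])
			batch = batch[maxSize:]
		}
		return append(chunks, batch)
	}
}

func main() {
	split := splitSizeLimit[int](2)
	fmt.Println(split([]int{1, 2, 3, 4, 5}, 5)) // [[1 2] [3 4] [5]]
}
```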