Documentation
Overview
Package muster provides a framework for writing libraries that internally batch operations.
It will be useful to you if you're building an API that benefits from performing work in batches. A batch is dispatched when it reaches a maximum number of items, and/or when it has been waiting longer than a timeout. For example, if you're willing to wait at most one minute, set BatchTimeout to 1m and keep adding items. If you want batches of 50 items, set MaxBatchSize and a batch will fire only once it is full. For best results, set both.
To avoid unnecessary coupling, it is in your best interest to hide this library from your users. You will typically achieve this by keeping your implementation of muster.Batch and your use of muster.Client private.
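A minimal sketch of that hiding pattern, with hypothetical names (ShoppingClient, shoppingBatch): the batching machinery stays unexported, and callers only ever see a plain Add method. In real code shoppingBatch would implement muster.Batch and ShoppingClient would wrap a private muster.Client; this self-contained version omits the muster types so it compiles on its own.

```go
package main

import "fmt"

// shoppingBatch is private: in real code it would implement
// muster.Batch, but callers never see it either way.
type shoppingBatch struct {
	items []string
}

func (b *shoppingBatch) add(item string) { b.items = append(b.items, item) }

// ShoppingClient is the entire public surface: no batches, channels,
// or muster types appear in its signatures.
type ShoppingClient struct {
	pending shoppingBatch
}

// Add queues one item; internally it would send on the hidden
// muster.Client's Work channel.
func (c *ShoppingClient) Add(item string) {
	c.pending.add(item)
}

func main() {
	c := &ShoppingClient{}
	c.Add("milk")
	c.Add("eggs")
	fmt.Println("queued:", len(c.pending.items))
}
```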
Types
type Batch
type Batch interface {
	// This should add the given single item to the Batch. This is the "other
	// end" of the Client.Work channel where your application will send items.
	Add(item interface{})

	// Fire off the Batch. It should call Notifier.Done() when it has finished
	// processing the Batch.
	Fire(notifier Notifier)
}
Batch collects added items. Fire will be called exactly once. The Batch does not need to be safe for concurrent access; synchronization will be handled by the Client.
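A hypothetical Batch implementation might look like the sketch below. The Notifier interface is inlined here so the example compiles on its own; in real code you would implement muster.Batch and receive a muster.Notifier. The key obligation is that Fire calls Notifier.Done() exactly once when processing finishes.

```go
package main

import "fmt"

// Notifier is inlined so this sketch is self-contained; the real
// definition lives in the muster package.
type Notifier interface{ Done() }

// sliceBatch is a hypothetical Batch: it collects items and "sends"
// them when fired.
type sliceBatch struct {
	items []interface{}
}

func (b *sliceBatch) Add(item interface{}) {
	b.items = append(b.items, item)
}

func (b *sliceBatch) Fire(notifier Notifier) {
	// Always signal completion, even if sending fails, so the Client
	// can account for the finished batch.
	defer notifier.Done()
	fmt.Printf("firing batch of %d items\n", len(b.items))
}

// testNotifier records that Done was called.
type testNotifier struct{ done bool }

func (n *testNotifier) Done() { n.done = true }

func main() {
	b := &sliceBatch{}
	b.Add("milk")
	b.Add("eggs")
	n := &testNotifier{}
	b.Fire(n)
	fmt.Println("notifier done:", n.done)
}
```

No synchronization appears in sliceBatch because, as noted above, the Client serializes all access to a Batch.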
type Client
type Client struct {
	// Maximum number of items in a batch. If this is zero batches will only be
	// dispatched upon hitting the BatchTimeout. It is an error for both this and
	// the BatchTimeout to be zero.
	MaxBatchSize uint

	// Duration after which to send a pending batch. If this is zero batches will
	// only be dispatched upon hitting the MaxBatchSize. It is an error for both
	// this and the MaxBatchSize to be zero.
	BatchTimeout time.Duration

	// MaxConcurrentBatches determines how many parallel batches we'll allow to
	// be "in flight" concurrently. Once these many batches are in flight, the
	// PendingWorkCapacity determines when sending to the Work channel will start
	// blocking. In other words, once MaxConcurrentBatches hits, the system
	// starts blocking. This allows for tighter control over memory utilization.
	// If not set, the number of parallel batches in-flight will not be limited.
	MaxConcurrentBatches uint

	// Capacity of work channel. If this is zero, the Work channel will be
	// blocking.
	PendingWorkCapacity uint

	// This function should create a new empty Batch on each invocation.
	BatchMaker func() Batch

	// Once this Client has been started, send work items here to add to batch.
	Work chan interface{}
	// contains filtered or unexported fields
}
The Client manages the background process that makes, populates & fires Batches.