Documentation
Constants
This section is empty.
Variables
This section is empty.
Functions
Types
type Client
type Client struct {
	ConsumerConfig  *cluster.Config
	ConsumerBrokers []string
	ConsumerTopics  []string
	ConsumerGroup   string

	ProducerConfig  *sarama.Config
	ProducerBrokers []string
	ProducerTopics  []string

	// Logger is the configurable logger instance to log messages from this
	// streamclient. If left undefined, a noop logger will be used.
	Logger *zap.Logger
}
Client provides access to the streaming capabilities.
func (*Client) NewConsumer
NewConsumer returns a consumer that can iterate over messages on a stream.
func (*Client) NewConsumerAndProducer
NewConsumerAndProducer is a convenience method that returns both a consumer and a producer with a single function call.
func (*Client) NewProducer
NewProducer returns a producer that produces messages on a Kafka stream.
type Consumer
type Consumer struct {
// contains filtered or unexported fields
}
Consumer implements the stream.Consumer interface for standardstream.
type Producer
type Producer struct {
// contains filtered or unexported fields
}
Producer represents the object that will produce messages to a stream.
func (*Producer) Messages
Messages returns the read channel for the messages returned by the stream.
func (*Producer) PartitionKey
PartitionKey can be used to define the key to use for partitioning messages.
You pass in a function that accepts the currently processed message as its single argument and returns a byte slice representing the partition key.
You can either set the key to a fixed byte slice, or derive the partition key from the message's value or other properties.