franz-go

module v0.6.4
Published: Feb 10, 2021 License: BSD-3-Clause

franz-go - Apache Kafka client written in Go

Franz-go is an all-encompassing Apache Kafka client fully written in Go. This library aims to provide every Kafka feature from Apache Kafka v0.8.0 onward. It has support for transactions, regex topic consuming, the latest partitioning strategies, data loss detection, closest replica fetching, and more. If a client KIP exists, this library aims to support it.

This library attempts to provide an intuitive API while interacting with Kafka the way Kafka expects (timeouts, etc.).

Features

  • Feature complete client (up to Kafka v2.7.0+)
  • Supported compression types: snappy, gzip, lz4 and zstd
  • SSL/TLS Support
  • Exactly once semantics / idempotent producing
  • Transactions support
  • All SASL mechanisms are supported (OAuthBearer, GSSAPI/Kerberos, SCRAM-SHA-256/512 and plain); a SCRAM sketch is shown after this list
  • Supported Kafka versions >=0.8
  • Provides low level functionality (such as sending API requests) as well as high level functionality (e.g. consuming in groups)
  • Utilizes modern & idiomatic Go (support for contexts, variadic configuration options, ...)
  • Highly performant, see Performance (benchmarks will be added)
  • Written in pure Go (no wrapper lib for a C library or other bindings)
  • Ability to add detailed log messages or metrics using hooks
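
As a rough sketch of the SASL support mentioned above, a SCRAM-SHA-256 mechanism can be built from the sasl/scram package and passed to the client with the kgo.SASL option. The credential values below are placeholders, and the exact constructor names are taken from the kgo and sasl/scram documentation; verify them against the release you are using.

// import "github.com/twmb/franz-go/pkg/kgo"
// import "github.com/twmb/franz-go/pkg/sasl/scram"

// Build a SCRAM-SHA-256 mechanism from static credentials (placeholders here)
// and hand it to the client. scram.Auth and AsSha256Mechanism are assumed from
// the sasl/scram docs; check the franz-go version you use.
client, err := kgo.NewClient(
    kgo.SeedBrokers("localhost:9092"),
    kgo.SASL(scram.Auth{
        User: "myuser",
        Pass: "mypassword",
    }.AsSha256Mechanism()),
)
if err != nil {
    panic(err)
}
defer client.Close()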

Getting started

Basic usage for producing and consuming Kafka messages looks like this:

import (
    "context"
    "fmt"
    "sync"

    "github.com/twmb/franz-go/pkg/kgo"
)

seeds := []string{"localhost:9092"}
client, err := kgo.NewClient(kgo.SeedBrokers(seeds...))
if err != nil {
    panic(err)
}
defer client.Close()

ctx := context.Background()

// 1.) Producing a message
// All record production goes through Produce, and the callback can be used
// to allow for synchronous or asynchronous production.
var wg sync.WaitGroup
wg.Add(1)
record := &kgo.Record{Topic: "foo", Value: []byte("bar")}
err = client.Produce(ctx, record, func(_ *kgo.Record, err error) {
    defer wg.Done()
    if err != nil {
        fmt.Printf("record had a produce error: %v\n", err)
    }
})
if err != nil {
    panic("we are unable to produce if the context is canceled, we have hit max buffered, " +
        "or if we are transactional and not in a transaction")
}
wg.Wait()

// 2.) Consuming messages from a topic
// Consuming can either be direct (no consumer group), or through a group. Below, we use a group.
client.AssignGroup("my-group-identifier", kgo.GroupTopics("foo"))
for {
    fetches := client.PollFetches(ctx)
    iter := fetches.RecordIter()
    for !iter.Done() {
        record := iter.Next()
        fmt.Println(string(record.Value))
    }
}

Version Pinning

By default, the client issues an ApiVersions request when it connects to a broker and, for each request type, uses the maximum version that both it and that broker support.

Kafka 0.10.0 introduced the ApiVersions request; if you are working with brokers older than that, you must use the kversion package and pin the client's maximum versions with the MaxVersions option.

As well, it is recommended to set MaxVersions to the version of your broker cluster. Until KIP-584 is implemented, if you do not pin a max version, this client may use newer features when talking to one broker but not another while you are in the middle of a rolling broker upgrade.
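
For example, to pin the client to what a 2.7.0 cluster supports, something like the following sketch can be used. kversion.V2_7_0 and the kgo.MaxVersions option are taken from the kversion and kgo documentation; confirm the exact names for the release you are using.

// import "github.com/twmb/franz-go/pkg/kgo"
// import "github.com/twmb/franz-go/pkg/kversion"

// Cap all requests at the versions a Kafka 2.7.0 broker understands.
client, err := kgo.NewClient(
    kgo.SeedBrokers("localhost:9092"),
    kgo.MaxVersions(kversion.V2_7_0()),
)
if err != nil {
    panic(err)
}
defer client.Close()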

Metrics

Using hooks, you can attach to events happening within franz-go. This allows you to use your favorite metrics library and collect whichever metrics you are interested in.
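
As a rough sketch, a hook is any type implementing one or more of the kgo hook interfaces, passed to the client with the kgo.WithHooks option. The interface and method names below (HookBrokerConnect / OnBrokerConnect) are assumptions based on the kgo documentation and may differ between releases; the counter itself is purely illustrative.

// import (
//     "net"
//     "sync/atomic"
//     "time"
//
//     "github.com/twmb/franz-go/pkg/kgo"
// )

// connectCounter is a hypothetical hook that counts successful broker dials.
type connectCounter struct{ connects int64 }

// OnBrokerConnect is assumed from the kgo.HookBrokerConnect interface; verify
// the name and signature against the kgo docs for your version.
func (c *connectCounter) OnBrokerConnect(_ kgo.BrokerMetadata, _ time.Duration, _ net.Conn, err error) {
    if err == nil {
        atomic.AddInt64(&c.connects, 1)
    }
}

// Attach the hook when constructing the client; from here you could export
// the counter with your metrics library of choice.
client, err := kgo.NewClient(
    kgo.SeedBrokers("localhost:9092"),
    kgo.WithHooks(&connectCounter{}),
)
if err != nil {
    panic(err)
}
defer client.Close()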

Supported KIPs

Theoretically, this library supports every (non-Java-specific) client-facing KIP. Most are tested, some still need testing. Any KIP that simply adds or modifies a protocol is supported via code generation.
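
For instance, the generated kmsg types can be used directly with the client's Request method to issue any protocol request by hand. The sketch below sends a metadata request for all topics; the MetadataRequest/MetadataResponse field names are taken from the kmsg documentation and should be checked against the release you are using (client here is the *kgo.Client from the Getting started example).

// import (
//     "context"
//     "fmt"
//
//     "github.com/twmb/franz-go/pkg/kmsg"
// )

// A nil Topics field asks for metadata about all topics.
req := &kmsg.MetadataRequest{}
resp, err := client.Request(context.Background(), req)
if err != nil {
    panic(err)
}
meta := resp.(*kmsg.MetadataResponse)
for _, broker := range meta.Brokers {
    fmt.Printf("broker %d at %s:%d\n", broker.NodeID, broker.Host, broker.Port)
}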

  • KIP-12 (sasl & ssl; 0.9.0)
  • KIP-13 (throttling; supported but not obeyed)
  • KIP-31 (relative offsets in message set; 0.10.0)
  • KIP-32 (timestamps in message set v1; 0.10.0)
  • KIP-35 (adds ApiVersion; 0.10.0)
  • KIP-36 (rack aware replica assignment; 0.10.0)
  • KIP-40 (ListGroups and DescribeGroup v0; 0.9.0)
  • KIP-43 (sasl enhancements & handshake; 0.10.0)
  • KIP-54 (sticky group assignment)
  • KIP-62 (join group rebalance timeout, background thread heartbeats; 0.10.1)
  • KIP-74 (fetch response size limit; 0.10.1)
  • KIP-78 (cluster id in metadata; 0.10.1)
  • KIP-79 (list offset req/resp timestamp field; 0.10.1)
  • KIP-84 (sasl scram; 0.10.2)
  • KIP-98 (EOS; 0.11.0)
  • KIP-101 (offset for leader epoch introduced; broker usage only as of yet; 0.11.0)
  • KIP-107 (delete records; 0.11.0)
  • KIP-108 (validate create topic; 0.10.2)
  • KIP-110 (zstd; 2.1.0)
  • KIP-112 (JBOD disk failure, protocol changes; 1.0.0)
  • KIP-113 (JBOD log dir movement, protocol additions; 1.0.0)
  • KIP-124 (request rate quotas; 0.11.0)
  • KIP-133 (describe & alter configs; 0.11.0)
  • KIP-152 (more sasl, introduce sasl authenticate; 1.0.0)
  • KIP-183 (elect preferred leaders; 2.2.0)
  • KIP-185 (idempotent is default; 1.0.0)
  • KIP-195 (create partitions request; 1.0.0)
  • KIP-207 (new error in list offset request; 2.2.0)
  • KIP-219 (throttling happens after response; 2.0.0)
  • KIP-226 (describe configs v1; 1.1.0)
  • KIP-227 (incremental fetch requests; 1.1.0)
  • KIP-229 (delete groups request; 1.1.0)
  • KIP-255 (oauth via sasl/oauthbearer; 2.0.0)
  • KIP-279 (leader / follower failover; changed offsets for leader epoch; 2.0.0)
  • KIP-320 (fetcher log truncation detection; 2.1.0)
  • KIP-322 (new error when delete topics is disabled; 2.1.0)
  • KIP-339 (incremental alter configs; 2.3.0)
  • KIP-341 (sticky group bug fix)
  • KIP-342 (oauth extensions; 2.1.0)
  • KIP-345 (static group membership, see KAFKA-8224)
  • KIP-360 (safe epoch bumping for UNKNOWN_PRODUCER_ID; 2.5.0)
  • KIP-368 (periodically reauth sasl; 2.2.0)
  • KIP-369 (always round robin produce partitioner; 2.4.0)
  • KIP-380 (inter-broker command changes; 2.2.0)
  • KIP-392 (fetch request from closest replica w/ rack; 2.2.0)
  • KIP-394 (require member.id for initial join; 2.2.0)
  • KIP-412 (dynamic log levels with incremental alter configs; 2.4.0)
  • KIP-429 (incremental rebalance, see KAFKA-8179; 2.4.0)
  • KIP-430 (include authorized ops in describe groups; 2.3.0)
  • KIP-447 (transaction changes to better support group changes; 2.5.0)
  • KIP-455 (admin replica reassignment; 2.4.0)
  • KIP-460 (admin leader election; 2.4.0)
  • KIP-464 (defaults for create topic; 2.4.0)
  • KIP-467 (produce response error change for per-record errors; 2.4.0)
  • KIP-480 (sticky partition producing; 2.4.0)
  • KIP-482 (tagged fields; KAFKA-8885; 2.4.0)
  • KIP-496 (offset delete admin command; 2.4.0)
  • KIP-497 (new API to alter ISR; 2.7.0)
  • KIP-498 (add max bound on reads; unimplemented in Kafka)
  • KIP-511 (add client name / version in apiversions req; 2.4.0)
  • KIP-518 (list groups by state; 2.6.0)
  • KIP-525 (create topics v5 returns configs; 2.4.0)
  • KIP-526 (reduce metadata lookups; done minus part 2, which we won't do)
  • KIP-546 (client quota APIs; 2.5.0)
  • KIP-554 (broker side SCRAM API; 2.7.0)
  • KIP-559 (protocol info in sync / join; 2.5.0)
  • KIP-569 (doc/type in describe configs; 2.6.0)
  • KIP-570 (leader epoch in stop replica; 2.6.0)
  • KIP-580 (exponential backoff; 2.6.0)
  • KIP-588 (producer recovery from txn timeout; 2.7.0)
  • KIP-590 (support for forwarding admin requests; 2.7.0)
  • KIP-595 (new APIs for raft protocol; 2.7.0)
  • KIP-599 (throttle create/delete topic/partition; 2.7.0)
  • KIP-700 (describe cluster; 2.8.0)

Directories

Path Synopsis
examples
pkg
  kbin
    Package kbin contains Kafka primitive reading and writing functions.
  kerr
    Package kerr contains Kafka errors.
  kgo
    Package kgo provides a pure Go efficient Kafka client for Kafka 0.8.0+ with support for transactions, regex topic consuming, the latest partition strategies, and more.
  kgo/internal/sticky
    Package sticky provides sticky partitioning strategy for Kafka, with a complete overhaul to be faster, more understandable, and optimal.
  kmsg
    Package kmsg contains Kafka request and response types and autogenerated serialization and deserialization functions.
  kversion
    Package kversion specifies versions for Kafka request keys.
  sasl
    Package sasl specifies interfaces that any sasl authentication must provide to interop with Kafka SASL.
  sasl/kerberos
    Package kerberos provides Kerberos v5 sasl authentication.
  sasl/oauth
    Package oauth provides OAUTHBEARER sasl authentication as specified in RFC7628.
  sasl/plain
    Package plain provides PLAIN sasl authentication as specified in RFC4616.
  sasl/scram
    Package scram provides SCRAM-SHA-256 and SCRAM-SHA-512 sasl authentication as specified in RFC5802.
  kadm Module
  kfake Module
  sr Module
plugin
  kgmetrics Module
  klogr Module
  klogrus Module
  kotel Module
  kphuslog Module
  kprom Module
  kslog Module
  kzap Module
  kzerolog Module
