fix(deps): update module github.com/twmb/franz-go to v1.20.6 (main) #12
anaconda-renovate wants to merge 1 commit into main from deps-update/main-github.comtwmbfranz-go
Conversation
ℹ️ Artifact update notice
File name: go.mod
In order to perform the update(s) described in the table above, Renovate ran the `go get` command.
This PR contains the following updates:
github.com/twmb/franz-go: v1.18.1 → v1.20.6

Release Notes

twmb/franz-go (github.com/twmb/franz-go)

v1.20.6 Compare Source
===
This patch release has two improvements.
Previously, you could not call poll functions multiple times when using
BlockRebalanceOnPoll, because rebalancing had a higher lock priority than
polling and would block all further poll calls. This has been changed: you can
now poll as much as you want until you call AllowRebalance. Thanks @KiKoS0!
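For illustration, here is a minimal sketch of the poll pattern this fix affects (broker address, group, and topic names are placeholders):

```go
package main

import (
	"context"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	client, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.ConsumerGroup("my-group"),
		kgo.ConsumeTopics("my-topic"),
		kgo.BlockRebalanceOnPoll(), // rebalances wait until AllowRebalance
	)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	for {
		// As of v1.20.6, polling repeatedly here is safe; previously a
		// pending rebalance could block any second poll.
		fetches := client.PollFetches(context.Background())
		fetches.EachRecord(func(r *kgo.Record) {
			_ = r // process the record
		})
		client.AllowRebalance() // let any pending rebalance proceed
	}
}
```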
If brokers indicated they supported epochs but then used -1 everywhere for
that epoch, the Mark functions would ignore records being marked and you would
never commit progress. This was due to the client defaulting to a 0 epoch
internally (and not using it if the broker did not support it), meaning -1
would be ignored. Brokers that indicate support but use -1 are now
supported. This was only found to be a problem against Azure Event Hubs.
7cd5ea65 kgo: fix mark <=> epoch interaction, make epoch handling more resilient
94fd8622 kgo: fix deadlock when polling multiple times while blocked from a rebalance

v1.20.5 Compare Source
===
This fixes a commit in 1.20.4 that accidentally broke client metrics (KIP-714)
and inadvertently made a log spammy. In addition to the fix, a few logs around
client metrics have been reduced in severity.
The new-as-of-1.20 OnPartitionsCallbackBlocked is now called in a goroutine,
reducing the chance that you accidentally run into a deadlock based on how you
structure handling the hook.
Deps have been bumped to eliminate any security scanners that flag on CVEs
(even though this is a library and you can bump the dep in your own binary).
The kgo.Fetches.Errors doc has been expanded to account for previously
undocumented errors and to update guidance on what is retryable vs. what is not.
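As a small usage sketch (not from the release itself), checking per-partition errors after a poll looks like this; which errors to treat as fatal is application-specific:

```go
package main

import (
	"context"
	"log"

	"github.com/twmb/franz-go/pkg/kgo"
)

// logFetchErrors polls once and logs every per-partition fetch error.
// Which of these are retryable is what the expanded docs now clarify.
func logFetchErrors(ctx context.Context, client *kgo.Client) {
	fetches := client.PollFetches(ctx)
	for _, fe := range fetches.Errors() {
		log.Printf("fetch error: topic %s partition %d: %v", fe.Topic, fe.Partition, fe.Err)
	}
}
```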
e86bb6c9 kgo: info => debug for a few logs in client metrics
7c7ca2b4 kgo: call OnPartitionsCallbackBlocked concurrently
ebf29a4a all: bump deps
97b4a1d4 kgo.Fetches.Errors doc: clarify && expand for two undoc'd errors
13ea38e3 bug kgo: fix remaining usage of kgo.maxVers/kgo.maxVersion (thanks @vincentbernat!)

v1.20.4 Compare Source
===
This patch release contains fixes for two data races: one new one introduced in
1.20.3 with sharded requests (a super obvious oversight in retrospect...) and a
fix for a hard-to-encounter race that has existed for years when using
preferred read replicas (via setting the kgo.Rack option while consuming).
There are also improvements by @carsonip to log context errors less and to
return earlier when pinging if the context is expired, and an improvement by
@rockwotj to support arbitrary Kafka messages (i.e., custom requests).
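For context, follower fetching (the setup under which the old race could appear) is opted into with the kgo.Rack option; the rack ID and broker address below are placeholders:

```go
package main

import "github.com/twmb/franz-go/pkg/kgo"

// newRackAwareClient prefers fetching from replicas in the given rack.
// The data race fixed in this release could only occur with this option set.
func newRackAwareClient() (*kgo.Client, error) {
	return kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.ConsumeTopics("my-topic"),
		kgo.Rack("us-east-1a"), // must match broker.rack to take effect
	)
}
```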
cdd0316a kgo: source.fetch stop fetching again when context is done
d0350d13 bugfix kgo: fix sharded-request "minor" data race introduced in 1.20.3
566201c3 bugfix kgo: fix data race with replica fetching / moving; kfake: add test
4e4ff88f kgo: Do not try next broker in Client.Ping if context is done
60b8d5b5 kgo: support arbitrary Kafka RPCs

v1.20.3 Compare Source
===
This patch release fixes one bug that has existed since 1.19.0 and improves
retry behavior on dial failures.
The bug: 1.19 introduced code to, when follower fetching, re-check the
preferred replica from the leader every 30 minutes (every RecheckPreferredReplicaInterval).
The logic switched back to the leader after handling a fetch response, using
the offset that the fetch request was issued with. The client would give you
data that it just fetched, go back to the leader, and then get redirected back
to the follower using the offset from before the fetch it just gave you.
You would then receive a bit of duplicate data as the pre-fetch offset is
re-fetched. This is no longer the case.
The improvement: on sharded requests (certain requests that may need to be
split and sent to many brokers), dial errors were not retried. They are now
retried.
d5085e90 kgo: retry dial errors on sharded requests if possible
70c81779 kgo source: expired old preferred replicas while creating req, not handling resp

v1.20.2 Compare Source
===
This patch release fixes a field-access data race that has been around forever.
Specifically, if a partition was moving from one broker to another via a
metadata update at the same time a linger timer fired, there was a data race
reading a pointer that was being written. Most 64-bit systems don't experience
corruption with this type of race, so the code would execute fine, but the old
sink could start draining when the new sink should have.
This also further improves some linger logic.
73c16c1d kgo: do not trigger draining early if a partition moves sinks while lingering
d5066143 kgo: fix data race when the linger timer fires

v1.20.1 Compare Source
===
This small patch release fixes a longstanding bug in RequestCachedMetadata,
which became a problem now that kadm uses it by default: if no metadata was
cached and you requested all topics, no metadata request would be issued and
you'd get no valid response. Thank you @countableSet for the find and fix.
This also adds the two new 1.20 config options to OptValues, and a big doc
comment hinting to add new config opts going forward.
NOTE: Follow-up testing showed there are still more long-standing bugs with
RequestCachedMetadata. Usage of that function has been reverted from kadm
for the time being (which is, in the open source ecosystem, the only place this
function was ever used). All users of kadm v1.17.0 should bump to v1.17.1.
1087d3c7 kgo: add new opts to OptValues && big doc to do so going forward...
cad283f0 bugfix kgo: fix for empty fetch mapped metadata (#1143)

v1.20.0 Compare Source
===
This is a comparatively small minor release that adds support for Kafka 4.1,
adds three new APIs, fixes four bugs (read below to gauge importance), has a
few improvements, and switches the client from a default of 0ms linger to a
default of 10ms linger.
Also of note: a new srfake package has been created so you can run a fake
Schema Registry server in your CI tests (thank you @weeco). This complements
the existing kfake package, which allows you to run a fake in-memory Kafka
"cluster" for unit testing. If you did not know of either of these, check them
out! kfake supports many Kafka features, but transactions are still WIP.
All franz-go tests except transaction-based tests pass against a kfake
"cluster", so odds are it'll work for you.
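A rough sketch of wiring a test client to an in-memory kfake cluster follows; the kfake names used here (NewCluster, SeedTopics, ListenAddrs) are from the package docs as I recall them, so verify on pkg.go.dev:

```go
package main

import (
	"fmt"

	"github.com/twmb/franz-go/pkg/kfake"
	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	cluster, err := kfake.NewCluster(
		kfake.SeedTopics(1, "test-topic"), // pre-create a one-partition topic
	)
	if err != nil {
		panic(err)
	}
	defer cluster.Close()

	client, err := kgo.NewClient(
		kgo.SeedBrokers(cluster.ListenAddrs()...), // point kgo at the fake brokers
	)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected to kfake at", cluster.ListenAddrs())
}
```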
A few external contributors added features, docs, bug fixes, and internal
improvements this release. If I do not call you out below directly, please know I'm
thankful for your contributions!
Behavior changes
The default produce linger has changed from 0ms to 10ms. You can restore the
previous behavior of no lingering by adding kgo.ProducerLinger(0) to your options
when initializing the client. The original theory for 0ms linger was more of
a theory; years of practice have shown that even a tiny linger can be
beneficial to the throughput and batching of clients.
See #1072 for more details.
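A minimal sketch of pinning the linger explicitly (placeholder broker address); pass 0 to keep the old behavior, or any duration to tune it:

```go
package main

import "github.com/twmb/franz-go/pkg/kgo"

// newProducer opts back out of the new 10ms default linger.
func newProducer() (*kgo.Client, error) {
	return kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.ProducerLinger(0), // restore the pre-1.20 no-linger behavior
	)
}
```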
Bug fixes
Metadata refreshes could panic if a very specific flow of events happened,
specifically only on a cluster that is transitioning from not using topic IDs
to using topic IDs, and only if the transition is not implemented 100% correctly.
This bug has existed for years and was only encountered during the recent addition
of topic IDs to Redpanda. See 645f1126 for more details.
The loop that determines whether more batches exist to be produced had its
conditional backwards. This was hidden forever due to other minor logic flaws
that caused the "do more batches exist?" check to occur more than it should
have, so the bug caused no problems. The "do more batches exist?" checks have
been improved and the conditional has been fixed.
The internal linger timers fired way more than they needed to, causing
batches to be cut WAY more frequently than they needed to when using
lingering. The logic here has been fixed, so lingering should actually run
its full time now and batches should be bigger.
Azure resets connections when speaking ApiVersions v4. v1.19 of this library
detects this resetting and, after 3 attempts, downgrades to ApiVersions v3.
However, the connection reset error is different when running on Windows.
The code has been improved to detect the proper syscall when this library
is running on Windows. Thanks @axw!
Improvements
Previously, produce requests were capped at 5 in flight per broker, even if
you disabled idempotence and bumped the number. The inflight limit is now
unbounded. Thanks @pracucci!
Features
OnPartitionsCallbackBlocked now exists so that, if you are using
BlockRebalanceOnPoll, you can be notified that a rebalance is desired. If
your record processing function is slow, this allows you to interrupt your
batch processing (if possible), wrap up committing, and allow a rebalance
before your client is kicked from the group.
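A hedged sketch of wiring the hook; the callback signature below is an assumption for illustration (check pkg.go.dev for the real one):

```go
package main

import "github.com/twmb/franz-go/pkg/kgo"

// newBlockedAwareClient signals a channel when a rebalance is desired, so a
// slow processing loop can wrap up and call AllowRebalance sooner.
func newBlockedAwareClient(rebalanceWanted chan<- struct{}) (*kgo.Client, error) {
	return kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.ConsumerGroup("my-group"),
		kgo.ConsumeTopics("my-topic"),
		kgo.BlockRebalanceOnPoll(),
		kgo.OnPartitionsCallbackBlocked(func() { // assumed signature
			select {
			case rebalanceWanted <- struct{}{}:
			default: // already signaled
			}
		}),
	)
}
```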
ConsumeExcludeTopics, if you are using regex consuming, allows you to have
a higher-priority set of regular expressions to exclude topics from being
consumed. This is useful if you want to consume everything except a set of
topics (for example, if you are replicating topics from one cluster to
another). Thanks @mmatczuk!
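A sketch of regex consuming with exclusions; ConsumeExcludeTopics is new here, and the usage shown (exclusion patterns treated as regexes alongside ConsumeRegex, like ConsumeTopics) is my reading of the release notes rather than verified API docs:

```go
package main

import "github.com/twmb/franz-go/pkg/kgo"

// newReplicator consumes every topic except internal ones.
func newReplicator() (*kgo.Client, error) {
	return kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.ConsumeRegex(),                // treat topic arguments as regexes
		kgo.ConsumeTopics(".*"),           // consume everything...
		kgo.ConsumeExcludeTopics("^__.*"), // ...except internal topics
	)
}
```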
Fetches.RecordsAll now exists to return a Go iterator for use in range loops. Thanks @narqo!
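A short usage sketch (requires Go 1.23+ range-over-func support):

```go
package main

import (
	"context"
	"fmt"

	"github.com/twmb/franz-go/pkg/kgo"
)

// printAll ranges over every fetched record via the new iterator.
func printAll(ctx context.Context, client *kgo.Client) {
	fetches := client.PollFetches(ctx)
	for rec := range fetches.RecordsAll() {
		fmt.Println(string(rec.Value))
	}
}
```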
Relevant commits
1844d216 feature kgo: add OnPartitionsCallbackBlocked
157580fd kgo.RequestSharded: support ConsumerGroupDescribe, ShareGroupDescribe
f176953e behavior change kgo lingering: default to 10ms
679f7c3d kgo: add support for produce v13
f7f61420 generate: new definitions for share requests
32997347 generate: new non-share protocols for kafka 4.1
be947c20 bench: add -batch-recs, -psync, -pgoros
0b1dbf0c bugfix kgo: multiple linger fixes
2ea3251d bugfix sink: fix old bug determining whether more batches should be produced
195bed84 feature kgo: add ConsumeExcludeTopics
645f9d4b bugfix Check errno for (WSA)ECONNRESET
645f1126 bugfix kgo: fix panic in metadata updates from inconsistent broker state
612f26b6 feature kgo: add Fetches.RecordsAll to return a Go native iterator
ce2bcd18 improvement kgo: use unlimited ring buffers in the Produce path, allowing >5 inflight requests

v1.19.5 Compare Source
===
Fixes a bug introduced in 1.19.3 that caused batched FindCoordinator requests
to no longer work against older brokers (Kafka brokers before 2.4, or all
Redpanda versions).
All credit to @douglasbouttell for exactly diagnosing the bug.
06272c66 bugfix kgo: bugfix batched FindCoordinator requests against older brokers

v1.19.4 Compare Source
===
Fixes one bug introduced from the prior release (an obvious data race in
retrospect), and one data race introduced in 1.19.0. I've looped the tests more
in this release and am not seeing further races. I don't mean to downplay the
severity here, but these are races on pointer-sized variables where reading the
before or after state makes little difference. One of the read/write races is
on a context.Context, so there are actually two pointer-sized reads & writes --
but reading the (effectively) type vtable for the new context and then the data
pointer for the old context doesn't really break things here. Anyway, you
should upgrade.
This also adds a workaround for Azure EventHubs, which does not handle
ApiVersions correctly when the broker does not recognize the version we are
sending. The broker should reply with an
UNSUPPORTED_VERSION error and indicate the version the broker can handle.
Instead, Azure resets the connection. To work around this, we detect a
connection reset twice and then downgrade the request we send client side to v0.
7910f6b6 kgo: retry "connection reset by peer" from ApiVersions to work around EventHubs
d310cabd kgo: fix data read/write race on ctx variable
7a5ddcec kgo bugfix: guard sink batch field access more

v1.19.3 Compare Source
===
This release fully fixes (and has a positive field report) the KIP-890 problem
that was meant to be fixed in v1.19.2. See the commit description for more
details.
a13f633b kgo: remove pinReq wrapping request

v1.19.2 Compare Source
===
This release fixes two bugs, a data race and a misunderstanding in some of the
implementation of KIP-890.
The data race has existed for years and has only been caught once. It could
only be encountered in a specific section of decoding a fetch response WHILE a
metadata response was concurrently being handled, and the metadata response
indicated a partition changed leaders. The race was benign; it was a read race,
and the decoded response is always discarded because a metadata change
happened. Regardless, metadata handling and fetch response decoding are no
longer concurrent.
For KIP-890, some things were not called out all too clearly (imo) in the KIP.
If your 4.0 cluster had not yet enabled the transaction.version feature v2+,
then transactions would not work in this client. As it turns out, Kafka 4
finally started using a "features" field (introduced in v2.6) in a way that is
important to clients. In short: I opted into KIP-890 behavior based on whether
a broker could handle requests (produce v12+, end txn v5+, etc.). I also needed
to check if "transaction.version" was v2+. Features handling is now supported
in the client, and this single client-relevant feature is now implemented.
See the commits for more details.
dda08fd9 kgo: fix KIP-890 handling of the transaction.version feature
8a364819 kgo: fix data race in fetch response handling

v1.19.1 Compare Source
===
This release fixes a very old bug that finally became possible to hit in
v1.19.0. The v1.19.0 release does not work for Kafka versions pre-4.0. This
release fixes that (by fixing the bug that has existed since Kafka 2.4) and
adds a GH action to test against Kafka 3.8 to help prevent regressions against
older brokers as this library marches forward.
50aa74f1 kgo bugfix: ApiVersions replies only with key 18, not all keys

v1.19.0 Compare Source
===
This is the largest release of franz-go yet. The last patch release was Jan 20, '25.
The last minor release was Oct 14, '24.
A big reason for the delays over the past few months has been spin-looping tests
and investigating any issue that popped up. Another big delay is that Kafka has
a full company adding features -- some questionable -- and I'm one person that
spent a significant amount of time catching this library up with the latest
Kafka release. Lastly, Kafka released Kafka v3.9 three weeks after my last
major release, and simultaneously, a few requests came in for new features in
this library that required a lot of time. I wanted a bit of a break and only
resumed development more seriously in late Feb. This release is likely >100hrs
of work over the last ~4mo, from understanding new features and implementing
them, reviewing PRs, and debugging rare test failures.
The next Kafka release is poised to implement more large features (share
groups), which unfortunately will mean even more heads down time trying to bolt
in yet another feature to an already large library. I hope that Confluent
chills with introducing massive client-impacting changes; they've introduced
more in the past year than has been introduced from 2019-2023.
Bug fixes / changes / deprecations
The BasicLogger will no longer panic if only a single key (no val) is used. Thanks @vicluq!
An internal coding error around managing fetch concurrency was fixed. Thanks @iimos!
Some off-by-ones with retries were fixed (tl;dr: we retried one fewer time than configured).
AllowAutoTopicCreation and ConsumeRegex can now be used together.
Previously, topics would not be created if you were producing and consuming
from the same client AND if you used the ConsumeRegex option.
A data race in the consumer code path has been fixed. The race is hard to
encounter (which is why it never came up even in my weeks of spin-looping
tests with -race). See PR #984 for more details.
EndBeginTxnUnsafe is deprecated and unused. EndAndBeginTransaction now
flushes, and you cannot produce while the function happens (the function will
just be stuck flushing). As of KIP-890, the behavior the library relied on
is now completely unsupported. Trying to produce while ending & beginning a
transaction very occasionally leads to duplicate messages. The function is now
just a shortcut for flush, end, begin.
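Conceptually, the function now behaves like this explicit sequence (a sketch assuming a client configured with kgo.TransactionalID):

```go
package main

import (
	"context"

	"github.com/twmb/franz-go/pkg/kgo"
)

// endAndBegin mirrors what EndAndBeginTransaction now shortcuts:
// flush, end, begin.
func endAndBegin(ctx context.Context, client *kgo.Client) error {
	if err := client.Flush(ctx); err != nil { // wait out buffered records
		return err
	}
	if err := client.EndTransaction(ctx, kgo.TryCommit); err != nil {
		return err
	}
	return client.BeginTransaction()
}
```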
The kversion package guts have been entirely reimplemented; version guessing
should be more reliable.
OnBrokerConnect now encompasses the entire SASL flow (if using SASL) rather
than just connection dialing. This allows you more visibility into successful
or broken connections, as well as visibility into how long it actually takes
to initialize a connection. The dialDur arg has been renamed to initDur.
You may see the duration increase in your metrics. If enough feedback comes
in that this is confusing or unacceptable, I may issue a patch to revert
the change and instead introduce a separate hook in the next minor release.
I do not aim to create another minor release for a while.
Features / improvements
This release adds support for user-configurable memory pooling in a few select
locations. See any "Pool"-suffixed interface type in the documentation. You can
use this to add bucketed pooling (or whatever strategy you choose) to cut down
on memory waste in a few areas. As well, a few allocations that were previously
many tiny allocs have been converted to slab allocations (slice backed). Lastly,
if you opt into kgo.Record pooling, the Record type has a new Recycle method
to send it and all other pooled slices back to their pools.
You can now completely override how compression or decompression is done via
the new WithCompressor and WithDecompressor options. This allows you to
use libraries or options that franz-go does not automatically support, perhaps
opting for higher-performance libraries or using more memory pooling behind
the scenes.
ConsumeResetOffset has been split into two options, ConsumeResetOffset and
ConsumeStartOffset. The documentation has been cleaned up. I personally always
found it confusing to use the reset offset for both what to start consuming from
and what to reset to when the client sees an offset-out-of-range error. The start
offset defaults to the reset offset (and vice versa) if you only set one.
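A sketch of the split in practice: start at the earliest offset, but reset to the latest on an out-of-range error (broker and topic names are placeholders; ConsumeStartOffset is the new option, while ConsumeResetOffset and kgo.NewOffset are long-standing APIs):

```go
package main

import "github.com/twmb/franz-go/pkg/kgo"

// newConsumer sets the start and reset offsets independently.
func newConsumer() (*kgo.Client, error) {
	return kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.ConsumeTopics("my-topic"),
		kgo.ConsumeStartOffset(kgo.NewOffset().AtStart()), // where to begin
		kgo.ConsumeResetOffset(kgo.NewOffset().AtEnd()),   // on out-of-range
	)
}
```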
For users that produce infrequently but want low latency when producing,
the client now has an EnsureProduceConnectionIsOpen method. You can call this
before producing to force connections to be open.
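A hedged usage sketch; the exact signature (context argument, error return) is an assumption from the release notes, so verify on pkg.go.dev:

```go
package main

import (
	"context"
	"log"

	"github.com/twmb/franz-go/pkg/kgo"
)

// warmThenProduce opens produce connections ahead of a latency-sensitive send.
func warmThenProduce(ctx context.Context, client *kgo.Client) {
	if err := client.EnsureProduceConnectionIsOpen(ctx); err != nil { // assumed signature
		log.Printf("warmup failed; producing will dial lazily: %v", err)
	}
	client.Produce(ctx, &kgo.Record{Topic: "my-topic", Value: []byte("hi")}, nil)
}
```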
The client now has a RequestCachedMetadata function, which can be used to
request metadata only if the information you're requesting is not cached,
or is cached but too stale. This can be very useful for admin packages that
need metadata to do anything else -- rather than requesting metadata for every
single admin operation, you can have metadata requested once and used
repeatedly. Notably, I'll be switching kadm to using this function.
KIP-714 support: the client now internally aggregates a small set of metrics
and sends them to the broker by default. This client implements all required
metrics and a subset of recommended metrics (the ones that make more sense).
To opt out of metrics collection & sending to the broker, you can use the new
DisableClientMetrics option. You can also provide your own metrics to send to
the broker via the new UserMetricsFn option. The client does not attempt to
sanitize any user-provided metric names; be sure you provide the names in the
correct format (see docs).
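Opting out is a single option at client construction; the option name follows the release notes (assumed spelling: DisableClientMetrics):

```go
package main

import "github.com/twmb/franz-go/pkg/kgo"

// newQuietClient disables KIP-714 client-metrics aggregation and sending.
func newQuietClient() (*kgo.Client, error) {
	return kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.DisableClientMetrics(), // option name per the release notes
	)
}
```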
KIP-848 support: this exists but is hidden. You must explicitly opt in by using
the new WithContext option, and the context must have a special string key,
opt_in_kafka_next_gen_balancer_beta. I noticed while testing that if you
repeat ConsumerGroupHeartbeat requests (i.e., what can happen when clients
are on unreliable networks), group members repeatedly get fenced. This is
recoverable, but it happens way way more than it should and I don't believe
the broker implementation to be great at the moment. Confluent historically
ignores any bug reports I create on the KAFKA issue tracker, but if you
would like to follow along or perhaps nudge to help get a reply, please
chime in on KAFKA-19222, KAFKA-19233, and KAFKA-19235.
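A heavily hedged sketch of the opt-in; the release notes only say the context must carry the string key opt_in_kafka_next_gen_balancer_beta, so the value used below ("true") is an assumption:

```go
package main

import (
	"context"

	"github.com/twmb/franz-go/pkg/kgo"
)

// newNextGenGroupClient opts into the hidden KIP-848 balancer beta.
func newNextGenGroupClient() (*kgo.Client, error) {
	//nolint:staticcheck // the opt-in key is a plain string by design
	ctx := context.WithValue(context.Background(),
		"opt_in_kafka_next_gen_balancer_beta", "true") // value is an assumption
	return kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.ConsumerGroup("my-group"),
		kgo.ConsumeTopics("my-topic"),
		kgo.WithContext(ctx), // new in v1.19.0
	)
}
```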
A few other more niche APIs have been added. See the full breadth of new APIs
below and check pkg.go.dev for docs for any API you're curious about.
API additions
This section contains all net-new APIs in this release. See the documentation
on pkg.go.dev.
Relevant commits
This is a small selection of what I think are the most pertinent commits in
this release. This release is very large, though. Many commits and PRs have
been left out that introduce or change smaller things.
07e57d3e kgo: remove all EndAndBeginTransaction internal "optimizations"
a54ffa96 kgo: add ConsumeStartOffset, expand offset docs, update readme KIPs
PR #988 kgo: add support for KIP-714 (client metrics)
7a17a03c kgo: fix data race in consumer code path
ae96af1d kgo: expose IsRetryableBrokerErr
1eb82fee kgo: add EnsureProduceConnectionIsOpen
fc778ba8 kgo: fix AllowAutoTopicCreation && ConsumeRegex when used together
ae7eea7c kgo: add DisableFetchCRCValidation option
6af90823 kgo: add the ability to pool memory in a few places while consuming
8c7a36db kgo: export utilities for decompressing and parsing partition fetch responses
33400303 kgo: do a slab allocation for Record's when processing a batch
39c2157a kgo: add WithCompressor and WithDecompressor options
9252a6b6 kgo: export Compressor and Decompressor
be15c285 kgo: add Client.RequestCachedMetadata
fc040bc0 kgo: add OnRebootstrapRequired
c8aec00a kversion: document changes through 4.0
718c5606 kgo: remove all code handling EndBeginTxnUnsafe, make it a no-op
5494c59e kversions: entirely reimplement internals
9d266fcd kgo: allow outstanding produce requests to be context canceled if the user disables idempotency
c60bf4c2 kgo: add DefaultProduceTopicAlways ProducerOpt
50cfe060 kgo: fix off-by-one with retries accounting
e9ba83a6, 05099ba0 kgo: add WithContext, Client.Context()
ddb0c0c3 kgo: fix cancellation of a fetch in manageFetchConcurrency
83843a53 kgo: fixed panic when keyvals len equals 1

Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR has been generated by Renovate Bot.