
Conversation


@anaconda-renovate anaconda-renovate bot commented Jul 6, 2025

This PR contains the following updates:

Package Change
github.com/twmb/franz-go v1.18.1 -> v1.20.6

Release Notes

twmb/franz-go (github.com/twmb/franz-go)

v1.20.6

Compare Source

===

This patch release has two improvements.

Previously, you could not use poll functions multiple times if using
BlockRebalanceOnPoll, because rebalancing had a higher lock priority than
polling and would block all further poll calls. This has been changed to allow
you to call poll as much as you want until you AllowRebalance. Thanks
@​KiKoS0!
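
For context, a minimal sketch of the poll pattern this fix unblocks, assuming
placeholder broker, group, and topic names:

package main

import (
    "context"
    "log"

    "github.com/twmb/franz-go/pkg/kgo"
)

func main() {
    cl, err := kgo.NewClient(
        kgo.SeedBrokers("localhost:9092"), // placeholder broker
        kgo.ConsumerGroup("my-group"),     // placeholder group
        kgo.ConsumeTopics("my-topic"),     // placeholder topic
        kgo.BlockRebalanceOnPoll(),        // rebalances wait until AllowRebalance
    )
    if err != nil {
        log.Fatal(err)
    }
    defer cl.Close()

    ctx := context.Background()
    for {
        fetches := cl.PollFetches(ctx)
        fetches.EachRecord(func(r *kgo.Record) {
            _ = r // process the record
        })
        // As of v1.20.6, polling again here before AllowRebalance no longer
        // deadlocks against a pending rebalance.
        more := cl.PollFetches(ctx)
        more.EachRecord(func(r *kgo.Record) {
            _ = r // process the record
        })
        cl.AllowRebalance() // let any pending rebalance proceed
    }
}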

If brokers indicated they supported epochs, but then used -1 everywhere for
that epoch, Mark functions would ignore records being marked and you would
never commit progress. This was due to the client defaulting to a 0 epoch
internally (and not using it if the broker did not support it), meaning -1
would be ignored. Brokers that indicate support but then use -1 are now
supported. This was only found to be a problem against Azure Event Hubs.

  • 7cd5ea65 kgo: fix mark <=> epoch interaction, make epoch handling more resilient
  • 94fd8622 kgo: fix deadlock when polling multiple times while blocked from a rebalance

v1.20.5

Compare Source

===

This fixes a commit in 1.20.4 that accidentally broke client metrics (KIP-714)
and inadvertently made a log spammy. In addition to the fix, a few logs around
client metrics have been reduced in severity.

The new-as-of-1.20 OnPartitionsCallbackBlocked is now called in a goroutine,
reducing the chance that you accidentally run into a deadlock based on how you
structure handling the hook.

Deps have been bumped to quiet security scanners that flag CVEs in dependencies
(even though this is a library and you can bump the dep in your own binary).

The kgo.Fetches.Errors doc has been expanded to account for previously
undocumented errors, and updates guidance on what's retryable vs what is not.

  • e86bb6c9 kgo: info=>debug for a few logs in client metrics
  • 7c7ca2b4 kgo: call OnPartitionsCallbackBlocked concurrently
  • ebf29a4a all: bump deps
  • 97b4a1d4 kgo.Fetches.Errors doc: clarify && expand for two undoc'd errors
  • 13ea38e3 bug kgo: fix remaining usage of kgo.maxVers/kgo.maxVersion (thanks @​vincentbernat!)

v1.20.4

Compare Source

===

This patch release contains fixes for two data races: one introduced in 1.20.3
with sharded requests (an obvious oversight in retrospect), and one
hard-to-encounter race that has existed for years when using preferred read
replicas (via setting the kgo.Rack option while consuming).

There are also improvements by @​carsonip to log
context errors less and to return earlier when pinging if the context is
expired, and an improvement by @​rockwotj to
support arbitrary Kafka messages (i.e., custom requests).

  • cdd0316a kgo: source.fetch stop fetching again when context is done
  • d0350d13 bugfix kgo: fix sharded-request "minor" data race introduced in 1.20.3
  • 566201c3 bugfix kgo: fix data race with replica fetching / moving; kfake: add test
  • 4e4ff88f kgo: Do not try next broker in Client.Ping if context is done
  • 60b8d5b5 kgo: support arbitrary Kafka RPCs

v1.20.3

Compare Source

===

This patch release fixes one bug that has existed since 1.19.0 and improves
retry behavior on dial failures.

The bug: 1.19 introduced code to, when follower fetching, re-check the
preferred replica from the leader every 30 minutes (every RecheckPreferredReplicaInterval).
The logic switched back to a leader after handling a fetch response, using
the offset that the fetch request was issued with. The client would give you
data that it just fetched, go back to the leader, and then get redirected back
to the follower using the offset from before the fetch it just gave you.
You would then receive a bit of duplicate data as the pre-fetch offset is
re-fetched. This is no longer the case.

The improvement: on sharded requests (certain requests that may need to be
split and sent to many brokers), dial errors were not retried. They are now
retried.

  • d5085e90 kgo: retry dial errors on sharded requests if possible
  • 70c81779 kgo source: expired old preferred replicas while creating req, not handling resp

v1.20.2

Compare Source

===

This patch release fixes a field-access data race that has been around forever.
Specifically, if a partition was moving from one broker to another via a
metadata update at the same time a linger timer fired, there was a data race
reading a pointer that was being written. Most 64-bit systems don't experience
corruption with this type of race, so the code would execute fine, but the old
sink could start draining when the new sink should have.

This also further improves some linger logic.

  • 73c16c1d kgo: do not trigger draining early if a partition moves sinks while lingering
  • d5066143 kgo: fix data race when the linger timer fires

v1.20.1

Compare Source

===

This small patchfix release fixes a longstanding bug in RequestCachedMetadata,
which became a problem now that kadm is using it by default: if no metadata was
cached and you requested all topics, no metadata request would be issued and
you'd get no valid response. Thank you @​countableSet
for the find and fix.

This also adds the two new 1.20 config options to OptValues, and a big doc
comment hinting to add new config opts going forward.

NOTE Follow up testing showed there are still more long-standing bugs with
RequestCachedMetadata. Usage of that function has been reverted from kadm
for the time being (which is, in the open source ecosystem, the only place this
function was ever used). All users of kadm v1.17.0 should bump to v1.17.1.

  • 1087d3c7 kgo: add new opts to OptValues && big doc to do so going forward...
  • cad283f0 bugfix kgo: fix for empty fetch mapped metadata (#​1143)

v1.20.0

Compare Source

===

This is a comparatively small minor release that adds support for Kafka 4.1,
adds three new APIs, fixes four bugs (read below to gauge importance), has a
few improvements, and switches the client from a default of 0ms linger to a
default of 10ms linger.

Also of note: a new srfake package has been created so you can run a fake
Schema Registry server in your CI tests (thank you @​weeco).
This complements the existing kfake package that allows you to run a fake
in-memory Kafka "cluster" for unit testing. If you did not know of either of these,
check them out! kfake supports many Kafka features, but transactions are still WIP.
All franz-go tests except transaction based tests pass against a kfake "cluster",
so odds are, it'll work for you.

A few external contributors added features, docs, bug fixes, and internal
improvements this release. If I do not call you out below directly, please know
I'm thankful for your contributions!

Behavior changes

  • This library now lingers by default for 10ms. You can switch back to 0ms
    lingering by adding kgo.ProducerLinger(0) to your options
    when initializing the client. The original rationale for 0ms linger was
    largely theoretical, and years of practice have shown that even a tiny
    linger can benefit client throughput and batching.
    See #1072 for more details.
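
As a minimal sketch (the broker address is a placeholder), opting back into the
old no-linger behavior looks like this:

package main

import (
    "log"

    "github.com/twmb/franz-go/pkg/kgo"
)

func main() {
    // The client now defaults to a 10ms linger; pass 0 to restore the
    // previous no-linger behavior.
    cl, err := kgo.NewClient(
        kgo.SeedBrokers("localhost:9092"), // placeholder broker
        kgo.ProducerLinger(0),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer cl.Close()
}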

Bug fixes

  • Metadata refreshes could panic if a very specific flow of events happened,
    specifically only on a cluster that is transitioning from not using topic IDs
    to using topic IDs, and only if the transition is not implemented 100% correctly.
    This bug has existed for years and was only encountered during the recent addition
    of topic IDs to Redpanda. See 645f1126 for more details.

  • The loop that determines whether more batches exist to be produced had its
    conditional backwards. This was hidden forever due to other minor logic flaws
    that caused the "do more batches exist?" check to occur more than it should
    have, so the bug caused no problems. The "do more batches exist?" checks have
    been improved and the conditional has been fixed.

  • The internal linger timers fired way more than they needed to, causing
    batches to be cut WAY more frequently than they needed to when using
    lingering. The logic here has been fixed, so lingering should actually run
    its full time now and batches should be bigger.

  • Azure resets connections when speaking ApiVersions v4. v1.19 of this library
    detects this reset and, after 3 attempts, downgrades to ApiVersions v3.
    However, the connection reset error is different when running on Windows.
    The code has been improved to detect the proper syscall error when this
    library is running on Windows. Thanks @axw!

Improvements

  • Previously, producing was limited to 5 inflight requests per broker even if
    you disabled idempotence and raised the configured maximum. The inflight
    limit is now unbounded in that case. Thanks @pracucci!
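
A hedged sketch of the configuration this improvement targets, using the
existing DisableIdempotentWrite and MaxProduceRequestsInflightPerBroker options
(the broker address is a placeholder):

package main

import (
    "log"

    "github.com/twmb/franz-go/pkg/kgo"
)

func main() {
    cl, err := kgo.NewClient(
        kgo.SeedBrokers("localhost:9092"),           // placeholder broker
        kgo.DisableIdempotentWrite(),                // inflight > 5 requires idempotence off
        kgo.MaxProduceRequestsInflightPerBroker(20), // no longer silently capped at 5
    )
    if err != nil {
        log.Fatal(err)
    }
    defer cl.Close()
}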

Features

  • OnPartitionsCallbackBlocked now exists so that, if you are using
    BlockRebalanceOnPoll, you can be notified that a rebalance is desired. If
    your record processing function is slow, this allows you to interrupt your
    batch processing (if possible), wrap up committing, and allow a rebalance
    before your client is kicked from the group.

  • ConsumeExcludeTopics, if you are using regex consuming, allows you to have
    a higher-priority set of regular expressions to exclude topics from being
    consumed. This is useful if you want to consume everything except a set of
    topics (for example, if you are replicating topics from one cluster to
    another). Thanks @​mmatczuk!

  • Fetches.RecordsAll now exists to return a Go iterator for use in range loops.
    Thanks @​narqo!
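
A rough sketch combining the two consumer-side additions above. The exact
signatures of ConsumeExcludeTopics (assumed here to take regex strings, like
ConsumeTopics with ConsumeRegex) and Fetches.RecordsAll (assumed to return a Go
iterator) should be checked on pkg.go.dev; broker, group, and topic patterns
are placeholders:

package main

import (
    "context"
    "log"

    "github.com/twmb/franz-go/pkg/kgo"
)

func main() {
    cl, err := kgo.NewClient(
        kgo.SeedBrokers("localhost:9092"), // placeholder broker
        kgo.ConsumerGroup("replicator"),   // placeholder group
        kgo.ConsumeRegex(),
        kgo.ConsumeTopics("^.*$"),                // consume everything...
        kgo.ConsumeExcludeTopics("^internal[.]"), // ...except these (signature assumed)
    )
    if err != nil {
        log.Fatal(err)
    }
    defer cl.Close()

    fetches := cl.PollFetches(context.Background())
    // RecordsAll is assumed to return an iterator usable directly in a range loop.
    for rec := range fetches.RecordsAll() {
        log.Printf("%s[%d]@%d", rec.Topic, rec.Partition, rec.Offset)
    }
}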

Relevant commits

  • 1844d216 feature kgo: add OnPartitionsCallbackBlocked
  • 157580fd kgo.RequestSharded: support ConsumerGroupDescribe, ShareGroupDescribe
  • f176953e behavior change kgo lingering: default to 10ms
  • 679f7c3d kgo: add support for produce v13
  • f7f61420 generate: new definitions for share requests
  • 32997347 generate: new non-share protocols for kafka 4.1
  • be947c20 bench: add -batch-recs, -psync, -pgoros
  • 0b1dbf0c bugfix kgo: multiple linger fixes
  • 2ea3251d bugfix sink: fix old bug determining whether more batches should be produced
  • 195bed84 feature kgo: add ConsumeExcludeTopics
  • 645f9d4b bugfix Check errno for (WSA)ECONNRESET
  • 645f1126 bugfix kgo: fix panic in metadata updates from inconsistent broker state
  • 612f26b6 feature kgo: add Fetches.RecordsAll to return a Go native iterator
  • ce2bcd18 improvement kgo: use unlimited ring buffers in the Produce path, allowing >5 inflight requests

v1.19.5

Compare Source

===

Fixes a bug introduced in 1.19.3 that caused batched FindCoordinator requests
to no longer work against older brokers (Kafka brokers before 2.4, or any
Redpanda version).

All credit to @​douglasbouttell for exactly diagnosing the bug.

  • 06272c66 bugfix kgo: bugfix batched FindCoordinator requests against older brokers

v1.19.4

Compare Source

===

Fixes one bug introduced in the prior release (an obvious data race in
retrospect) and one data race introduced in 1.19.0. I've looped the tests more
in this release and am not seeing further races. I don't mean to downplay the
severity here, but these are races on pointer-sized variables where reading the
before or after state makes little difference. One of the read/write races is
on a context.Context, so there are actually two pointer-sized reads & writes --
but reading the (effectively) type vtable for the new context and then the data
pointer for the old context doesn't really break things here. Anyway, you
should upgrade.

This also adds a workaround for Azure EventHubs, which does not handle
ApiVersions correctly when the broker does not recognize the version we are
sending. The broker should reply with an UNSUPPORTED_VERSION error and indicate
the version it can handle. Instead, Azure resets the connection. To work around
this, we detect a connection reset twice and then downgrade the ApiVersions
request we send to v0 client side.

  • 7910f6b6 kgo: retry connection reset by peer from ApiVersions to work around EventHubs
  • d310cabd kgo: fix data read/write race on ctx variable
  • 7a5ddcec kgo bugfix: guard sink batch field access more

v1.19.3

Compare Source

===

This release fully fixes (and has a positive field report) the KIP-890 problem
that was meant to be fixed in v1.19.2. See the commit description for more
details.

  • a13f633b kgo: remove pinReq wrapping request

v1.19.2

Compare Source

===

This release fixes two bugs, a data race and a misunderstanding in some of the
implementation of KIP-890.

The data race has existed for years and has only been caught once. It could
only be encountered in a specific section of decoding a fetch response WHILE a
metadata response was concurrently being handled, and the metadata response
indicated a partition changed leaders. The race was benign; it was a read race,
and the decoded response is always discarded because a metadata change
happened. Regardless, metadata handling and fetch response decoding are no
longer concurrent.

For KIP-890, some things were not called out all that clearly (imo) in the KIP.
If your 4.0 cluster had not yet enabled the transaction.version feature v2+,
then transactions would not work in this client. As it turns out, Kafka 4
finally started using a "features" field (introduced in v2.6) in a way that is
important to clients. In short: I opted into KIP-890 behavior based on whether
a broker could handle requests (produce v12+, end txn v5+, etc). I also needed
to check that "transaction.version" was v2+. Features handling is now supported
in the client, and this single client-relevant feature is now implemented.

See the commits for more details.

  • dda08fd9 kgo: fix KIP-890 handling of the transaction.version feature
  • 8a364819 kgo: fix data race in fetch response handling

v1.19.1

Compare Source

===

This release fixes a very old bug that finally started being possible to hit in
v1.19.0. The v1.19.0 release does not work for Kafka versions pre-4.0. This
release fixes that (by fixing the bug that has existed since Kafka 2.4) and
adds a GH action to test against Kafka 3.8 to help prevent regressions against
older brokers as this library marches forward.

  • 50aa74f1 kgo bugfix: ApiVersions replies only with key 18, not all keys

v1.19.0

Compare Source

===

This is the largest release of franz-go yet. The last patch release was Jan 20, '25.
The last minor release was Oct 14, '24.

A big reason for the delay over the past few months has been spin-looping tests
and investigating any issue that popped up. Another big delay is that Kafka has
a full company adding features -- some questionable -- and I'm one person who
spent a significant amount of time catching this library up with the latest
Kafka release. Lastly, Kafka v3.9 was released three weeks after my last
release, and simultaneously, a few requests came in for new features in this
library that required a lot of time. I wanted a bit of a break and only resumed
development more seriously in late Feb. This release is likely >100hrs of work
over the last ~4mo: understanding and implementing new features, reviewing PRs,
and debugging rare test failures.

The next Kafka release is poised to implement more large features (share
groups), which unfortunately will mean even more heads down time trying to bolt
in yet another feature to an already large library. I hope that Confluent
chills with introducing massive client-impacting changes; they've introduced
more in the past year than has been introduced from 2019-2023.

Bug fixes / changes / deprecations

  • The BasicLogger will no longer panic if only a single key (no val) is used. Thanks @​vicluq!

  • An internal coding error around managing fetch concurrency was fixed. Thanks @​iimos!

  • Some off-by-ones with retries were fixed (tldr: we retried one fewer time than configured)

  • AllowAutoTopicCreation and ConsumeRegex can now be used together.
    Previously, topics would not be created if you were producing and consuming
    from the same client AND if you used the ConsumeRegex option.

  • A data race in the consumer code path has been fixed. The race is hard to
    encounter (which is why it never came up even in my weeks of spin-looping
    tests with -race). See PR #​984
    for more details.

  • EndBeginTxnUnsafe is deprecated and unused. EndAndBeginTransaction now
    flushes, and you cannot produce while the function runs (the function will
    just be stuck flushing). As of KIP-890, the behavior the library relied on
    is now completely unsupported: trying to produce while ending & beginning a
    transaction very occasionally leads to duplicate messages. The function is
    now just a shortcut for flush, end, begin.

  • The kversion package guts have been entirely reimplemented; version guessing
    should be more reliable.

  • OnBrokerConnect now encompasses the entire SASL flow (if using SASL) rather
    than just connection dialing. This allows you more visibility into successful
    or broken connections, as well as visibility into how long it actually takes
    to initialize a connection. The dialDur arg has been renamed to initDur.
    You may see the duration increase in your metrics. If enough feedback comes
    in that this is confusing or unacceptable, I may issue a patch to revert
    the change and instead introduce a separate hook in the next minor release.
    I do not aim to create another minor release for a while.

Features / improvements

  • This release adds support for user-configurable memory pooling to a few select
    locations. See any "Pool" suffixed interface type in the documentation. You can
    use this to add bucketed pooling (or whatever strategy you choose) to cut down
    on memory waste in a few areas. As well, a few code paths that previously made
    many tiny allocations have been converted to slab allocations (slice backed). Lastly,
    if you opt into kgo.Record pooling, the Record type has a new Recycle
    method to send it and all other pooled slices back to their pools.

  • You can now completely override how compression or decompression is done via
    the new WithCompressor and WithDecompressor options. This allows you to
    use libraries or options that franz-go does not automatically support, perhaps
    opting for higher performance libraries, or using more memory pooling behind
    the scenes.

  • ConsumeResetOffset has been split into two options, ConsumeResetOffset and
    ConsumeStartOffset. The documentation has been cleaned up. I personally always
    found it confusing to use the reset offset for both what to start consuming from
    and what to reset to when the client sees an offset out of range error. The start
    offset defaults to the reset offset (and vice versa) if you only set one.

  • For users that produce infrequently but want the latency to be low when producing,
    the client now has an EnsureProduceConnectionIsOpen method. You can call this
    before producing to force connections to be open.

  • The client now has a RequestCachedMetadata function, which can be used to
    request metadata only if the information you're requesting is not cached,
    or is cached but is too stale. This can be very useful for admin packages that
    need metadata to do anything else -- rather than requesting metadata for every
    single admin operation, you can have metadata requested once and use that
    repeatedly. Notably, I'll be switching kadm to using this function.

  • KIP-714 support: the client now internally aggregates a small set of metrics
    and sends them to the broker by default. This client implements all required
    metrics and a subset of recommended metrics (the ones that make more sense).
    To opt out of collecting metrics and sending them to the broker, you
    can use the new DisableClientMetrics option. You can also provide your own
    metrics to send to the broker via the new UserMetricsFn option (sketched
    after this list). The client does not attempt to sanitize any user provided
    metric names; be sure you provide the names in the correct format (see docs).

  • KIP-848 support: this exists but is hidden. You must explicitly opt in by using
    the new WithContext option, and the context must have a special string key,
    opt_in_kafka_next_gen_balancer_beta. I noticed while testing that if you
    repeat ConsumerGroupHeartbeat requests (i.e. what can happen when clients
    are on unreliable networks), group members repeatedly get fenced. This is
    recoverable, but it happens way way more than it should and I don't believe
    the broker implementation to be great at the moment. Confluent historically
    ignores any bug reports I create on the KAFKA issue tracker, but if you
    would like to follow along or perhaps nudge to help get a reply, please
    chime in on KAFKA-19222, KAFKA-19233, and KAFKA-19235.

  • A few other more niche APIs have been added. See the full breadth of new APIs
    below and check pkg.go.dev for docs for any API you're curious about.
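
As a sketch of the KIP-714 additions mentioned above, using the UserMetricsFn
and Metric signatures listed in the next section. The broker address and metric
name are placeholders, and the required metric-name format should be taken from
the docs:

package main

import (
    "iter"
    "log"

    "github.com/twmb/franz-go/pkg/kgo"
)

func main() {
    cl, err := kgo.NewClient(
        kgo.SeedBrokers("localhost:9092"), // placeholder broker
        // kgo.DisableClientMetrics() could be passed instead to opt out of
        // sending KIP-714 metrics entirely.
        kgo.UserMetricsFn(func() iter.Seq[kgo.Metric] {
            return func(yield func(kgo.Metric) bool) {
                yield(kgo.Metric{
                    Name:     "org.example.app.records.processed", // placeholder metric name
                    Type:     kgo.MetricTypeSum,
                    ValueInt: 123,
                })
            }
        }),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer cl.Close()
}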

API additions

This section contains all net-new APIs in this release. See the documentation
on pkg.go.dev.

const (
        CodecNone CompressionCodecType = iota
        CodecGzip
        CodecSnappy
        CodecLz4
        CodecZstd
        CodecError = -1
)
const CompressDisableZstd CompressFlag = 1 + iota
const (
    MetricTypeSum = 1 + iota
    MetricTypeGauge
)

type CompressFlag uint16
type CompressionCodecType int8
type Compressor interface {
    Compress(dst *bytes.Buffer, src []byte, flags ...CompressFlag) ([]byte, CompressionCodecType)
}
type Decompressor interface {
    Decompress(src []byte, codecType CompressionCodecType) ([]byte, error)
}
type Metric struct {
        Name string
        Type MetricType
        ValueInt int64
        ValueFloat float64
        Attrs map[string]any
}
type MetricType uint8
type Pool any
type PoolDecompressBytes interface {
        GetDecompressBytes(compressed []byte, codec CompressionCodecType) []byte
        PutDecompressBytes([]byte)
}
type PoolKRecords interface {
        GetKRecords(n int) []kmsg.Record
        PutKRecords([]kmsg.Record)
}
type PoolRecords interface {
        GetRecords(n int) []Record
        PutRecords([]Record)
}
type ProcessFetchPartitionOpts struct {
        KeepControlRecords bool
        DisableCRCValidation bool
        Offset int64
        IsolationLevel IsolationLevel
        Topic string
        Partition int32
        Pools []Pool
}

func DefaultCompressor(...CompressionCodec) (Compressor, error)
func DefaultDecompressor(...Pool) Decompressor
func IsRetryableBrokerErr(error) bool
func ProcessFetchPartition(ProcessFetchPartitionOpts, *kmsg.FetchResponseTopicPartition, Decompressor, func(FetchBatchMetrics)) (FetchPartition, int64)

func DisableClientMetrics() Opt
func OnRebootstrapRequired(func() ([]string, error)) Opt
func UserMetricsFn(fn func() iter.Seq[Metric]) Opt
func WithContext(ctx context.Context) Opt
func WithPools(pools ...Pool) Opt

func ConsumeStartOffset(Offset) ConsumerOpt
func DisableFetchCRCValidation() ConsumerOpt
func RecheckPreferredReplicaInterval(time.Duration) ConsumerOpt
func WithDecompressor(decompressor Decompressor) ConsumerOpt

func DefaultProduceTopicAlways() ProducerOpt
func WithCompressor(Compressor) ProducerOpt

func (*Client) Context() context.Context
func (*Client) EnsureProduceConnectionIsOpen(context.Context, ...int32) error
func (*Client) RequestCachedMetadata(context.Context, *kmsg.MetadataRequest, time.Duration) (*kmsg.MetadataResponse, error)

func (*Record) Recycle()
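
As a brief usage sketch of two of the client methods above (signatures as
listed; the broker address is a placeholder, and note the v1.20.1 caveat above
about remaining RequestCachedMetadata bugs):

package main

import (
    "context"
    "log"
    "time"

    "github.com/twmb/franz-go/pkg/kgo"
    "github.com/twmb/franz-go/pkg/kmsg"
)

func main() {
    cl, err := kgo.NewClient(
        kgo.SeedBrokers("localhost:9092"), // placeholder broker
    )
    if err != nil {
        log.Fatal(err)
    }
    defer cl.Close()

    ctx := context.Background()

    // Warm produce connections before the first produce so latency is not
    // spent on connection setup (broker IDs are variadic; none passed here).
    if err := cl.EnsureProduceConnectionIsOpen(ctx); err != nil {
        log.Printf("warming produce connections: %v", err)
    }

    // Request metadata, reusing the cached response if it is younger than a minute.
    req := kmsg.NewPtrMetadataRequest()
    resp, err := cl.RequestCachedMetadata(ctx, req, time.Minute)
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("cluster has %d brokers", len(resp.Brokers))
}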

Relevant commits

This is a small selection of what I think are the most pertinent commits in
this release. This release is very large, though. Many commits and PRs have
been left out that introduce or change smaller things.

  • 07e57d3e kgo: remove all EndAndBeginTransaction internal "optimizations"
  • a54ffa96 kgo: add ConsumeStartOffset, expand offset docs, update readme KIPs
  • PR #988 kgo: add support for KIP-714 (client metrics)
  • 7a17a03c kgo: fix data race in consumer code path
  • ae96af1d kgo: expose IsRetryableBrokerErr
  • 1eb82fee kgo: add EnsureProduceConnectionIsOpen
  • fc778ba8 kgo: fix AllowAutoTopicCreation && ConsumeRegex when used together
  • ae7eea7c kgo: add DisableFetchCRCValidation option
  • 6af90823 kgo: add the ability to pool memory in a few places while consuming
  • 8c7a36db kgo: export utilities for decompressing and parsing partition fetch responses
  • 33400303 kgo: do a slab allocation for Record's when processing a batch
  • 39c2157a kgo: add WithCompressor and WithDecompressor options
  • 9252a6b6 kgo: export Compressor and Decompressor
  • be15c285 kgo: add Client.RequestCachedMetadata
  • fc040bc0 kgo: add OnRebootstrapRequired
  • c8aec00a kversion: document changes through 4.0
  • 718c5606 kgo: remove all code handling EndBeginTxnUnsafe, make it a no-op
  • 5494c59e kversions: entirely reimplement internals
  • 9d266fcd kgo: allow outstanding produce requests to be context canceled if the user disables idempotency
  • c60bf4c2 kgo: add DefaultProduceTopicAlways ProducerOpt
  • 50cfe060 kgo: fix off-by-one with retries accounting
  • e9ba83a6, 05099ba0 kgo: add WithContext, Client.Context()
  • ddb0c0c3 kgo: fix cancellation of a fetch in manageFetchConcurrency
  • 83843a53 kgo: fixed panic when keyvals len equals 1

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

anaconda-renovate bot force-pushed the deps-update/main-github.comtwmbfranz-go branch 2 times, most recently from 25d4081 to 2c90554 on July 15, 2025 09:14
anaconda-renovate bot force-pushed the deps-update/main-github.comtwmbfranz-go branch from 2c90554 to a1e651b on October 17, 2025 16:23
anaconda-renovate bot changed the title from "fix(deps): update module github.com/twmb/franz-go to v1.19.5 (main)" to "fix(deps): update module github.com/twmb/franz-go to v1.20.0 (main)" on Oct 17, 2025

anaconda-renovate bot commented Oct 17, 2025

ℹ Artifact update notice

File name: go.mod

In order to perform the update(s) described in the table above, Renovate ran the go get command, which resulted in the following additional change(s):

  • 10 additional dependencies were updated

Details:

Package Change
github.com/klauspost/compress v1.18.0 -> v1.18.1
golang.org/x/crypto v0.40.0 -> v0.45.0
golang.org/x/net v0.42.0 -> v0.47.0
golang.org/x/sync v0.16.0 -> v0.18.0
golang.org/x/sys v0.34.0 -> v0.38.0
github.com/twmb/franz-go/pkg/kmsg v1.11.2 -> v1.12.0
golang.org/x/text v0.27.0 -> v0.31.0
golang.org/x/mod v0.25.0 -> v0.29.0
golang.org/x/term v0.33.0 -> v0.37.0
golang.org/x/tools v0.34.0 -> v0.38.0

anaconda-renovate bot force-pushed the deps-update/main-github.comtwmbfranz-go branch from a1e651b to d2cbcda on October 22, 2025 17:44
anaconda-renovate bot changed the title from "fix(deps): update module github.com/twmb/franz-go to v1.20.0 (main)" to "fix(deps): update module github.com/twmb/franz-go to v1.20.1 (main)" on Oct 22, 2025
anaconda-renovate bot force-pushed the deps-update/main-github.comtwmbfranz-go branch from d2cbcda to 619b013 on October 25, 2025 08:53
anaconda-renovate bot changed the title from "fix(deps): update module github.com/twmb/franz-go to v1.20.1 (main)" to "fix(deps): update module github.com/twmb/franz-go to v1.20.2 (main)" on Oct 25, 2025
anaconda-renovate bot force-pushed the deps-update/main-github.comtwmbfranz-go branch from 619b013 to a13f43c on November 7, 2025 12:58
anaconda-renovate bot changed the title from "fix(deps): update module github.com/twmb/franz-go to v1.20.2 (main)" to "fix(deps): update module github.com/twmb/franz-go to v1.20.3 (main)" on Nov 7, 2025
anaconda-renovate bot force-pushed the deps-update/main-github.comtwmbfranz-go branch from a13f43c to b030b86 on November 15, 2025 13:20
anaconda-renovate bot changed the title from "fix(deps): update module github.com/twmb/franz-go to v1.20.3 (main)" to "fix(deps): update module github.com/twmb/franz-go to v1.20.4 (main)" on Nov 15, 2025
anaconda-renovate bot force-pushed the deps-update/main-github.comtwmbfranz-go branch from b030b86 to d556e41 on November 24, 2025 14:49
anaconda-renovate bot changed the title from "fix(deps): update module github.com/twmb/franz-go to v1.20.4 (main)" to "fix(deps): update module github.com/twmb/franz-go to v1.20.5 (main)" on Nov 24, 2025
anaconda-renovate bot force-pushed the deps-update/main-github.comtwmbfranz-go branch from d556e41 to ec6c09f on December 21, 2025 14:07
anaconda-renovate bot changed the title from "fix(deps): update module github.com/twmb/franz-go to v1.20.5 (main)" to "fix(deps): update module github.com/twmb/franz-go to v1.20.6 (main)" on Dec 21, 2025
anaconda-renovate bot (Author) commented:

ℹ️ Artifact update notice

File name: go.mod

In order to perform the update(s) described in the table above, Renovate ran the go get command, which resulted in the following additional change(s):

  • 10 additional dependencies were updated

Details:

Package Change
github.com/klauspost/compress v1.18.0 -> v1.18.2
golang.org/x/crypto v0.40.0 -> v0.45.0
golang.org/x/net v0.42.0 -> v0.47.0
golang.org/x/sync v0.16.0 -> v0.18.0
golang.org/x/sys v0.34.0 -> v0.38.0
github.com/twmb/franz-go/pkg/kmsg v1.11.2 -> v1.12.0
golang.org/x/text v0.27.0 -> v0.31.0
golang.org/x/mod v0.25.0 -> v0.29.0
golang.org/x/term v0.33.0 -> v0.37.0
golang.org/x/tools v0.34.0 -> v0.38.0
