
Conversation

@sjvanrossum
Contributor

Replace the restriction trackers in ReadFromKafkaDoFn with UnsplittableRestrictionTracker<OffsetRange, Long> to prevent non-checkpointing splits. Consumer groups must not consume from the same TopicPartition simultaneously; this design detail was used to limit the creation of Consumer instances in ReadFromKafkaDoFn, but the invariant was not upheld by its split results.
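For readers less familiar with the splittable DoFn APIs involved, below is a minimal, hedged sketch of the idea: a delegating tracker that permits checkpoint splits (fraction 0) but rejects all dynamic splits. The class name and shape are illustrative and specialized to OffsetRange; the PR's generic UnsplittableRestrictionTracker is the authoritative version.

```java
import org.apache.beam.sdk.io.range.OffsetRange;
import org.apache.beam.sdk.transforms.splittabledofn.RestrictionTracker;
import org.apache.beam.sdk.transforms.splittabledofn.SplitResult;

// Illustrative sketch only: delegates everything to the wrapped tracker,
// but refuses any split that is not a checkpoint (fractionOfRemainder == 0).
class UnsplittableOffsetRangeTracker extends RestrictionTracker<OffsetRange, Long> {
  private final RestrictionTracker<OffsetRange, Long> delegate;

  UnsplittableOffsetRangeTracker(RestrictionTracker<OffsetRange, Long> delegate) {
    this.delegate = delegate;
  }

  @Override
  public boolean tryClaim(Long position) {
    return delegate.tryClaim(position);
  }

  @Override
  public OffsetRange currentRestriction() {
    return delegate.currentRestriction();
  }

  @Override
  public SplitResult<OffsetRange> trySplit(double fractionOfRemainder) {
    // Only checkpoints may split; all other split requests are rejected so the
    // TopicPartition is never handed to a second reader mid-bundle.
    return fractionOfRemainder == 0 ? delegate.trySplit(0) : null;
  }

  @Override
  public void checkDone() throws IllegalStateException {
    delegate.checkDone();
  }

  @Override
  public IsBounded isBounded() {
    return delegate.isBounded();
  }
}
```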


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

@gemini-code-assist
Contributor

Summary of Changes

Hello @sjvanrossum, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue in Apache Beam's KafkaIO connector where ReadFromKafkaDoFn could inadvertently allow non-checkpointing splits for Kafka topic partitions. Such splits could lead to multiple consumers processing the same partition simultaneously, resulting in incorrect committed offsets and potential data loss or inconsistencies. By introducing an UnsplittableRestrictionTracker, the PR ensures that Kafka topic partitions are processed exclusively, safeguarding the integrity of consumer group offsets and improving the reliability of Kafka data ingestion.

Highlights

  • New Restriction Tracker: Introduced UnsplittableRestrictionTracker to enforce that certain restrictions, like those for Kafka topic partitions, cannot be split during processing except for checkpointing (see the sketch after this list).
  • KafkaIO Integration: ReadFromKafkaDoFn now uses UnsplittableRestrictionTracker to wrap its internal OffsetRangeTracker and GrowableOffsetRangeTracker, preventing unintended concurrent consumption of Kafka topic partitions.
  • Javadoc Update: Clarified documentation in GrowableOffsetRangeTracker and ReadFromKafkaDoFn regarding the handling of Kafka topic partition offsets and the purpose of unsplittable restrictions.
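The behavioral change can be illustrated with a short example built on the UnsplittableOffsetRangeTracker sketched in the description above (again an illustration, assuming that class is on the classpath, not the PR's actual generic UnsplittableRestrictionTracker): dynamic split requests come back null, while checkpoints still succeed.

```java
import org.apache.beam.sdk.io.range.OffsetRange;
import org.apache.beam.sdk.transforms.splittabledofn.OffsetRangeTracker;
import org.apache.beam.sdk.transforms.splittabledofn.RestrictionTracker;
import org.apache.beam.sdk.transforms.splittabledofn.SplitResult;

public class UnsplittableTrackerDemo {
  public static void main(String[] args) {
    // Wrap a plain offset range tracker so that only checkpoint splits are allowed.
    RestrictionTracker<OffsetRange, Long> tracker =
        new UnsplittableOffsetRangeTracker(new OffsetRangeTracker(new OffsetRange(0, 1000)));
    tracker.tryClaim(0L);

    // A dynamic split request (non-zero fraction of the remainder) is rejected...
    SplitResult<OffsetRange> dynamicSplit = tracker.trySplit(0.5);
    System.out.println(dynamicSplit); // null

    // ...while a checkpoint (fraction 0) still yields a primary/residual pair,
    // so progress can be persisted without handing the partition to another reader.
    SplitResult<OffsetRange> checkpoint = tracker.trySplit(0);
    System.out.println(checkpoint.getPrimary() + " / " + checkpoint.getResidual());
  }
}
```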

Contributor

@scwhittle left a comment


Could we add some batch Kafka pipeline integration test to validate this (and catch other possible issues)?

@github-actions
Contributor

Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment assign set of reviewers

* committed offset for a {@link TopicPartition}. Restriction trackers for a {@link
* KafkaSourceDescriptor} are wrapped as {@link UnsplittableRestrictionTracker<OffsetRange, Long>}
* and will only return a non-null {@link org.apache.beam.sdk.transforms.splittabledofn.SplitResult}
* for a checkpoint. This ensures consistent behavior when {@code enable.auto.commit} is set and
Contributor


nit: "ensures consistent" is too strong I think? There may still be parallel scheduling on different VMs during scaling etc.

Contributor Author


Fixed. The intent wasn't to comment on consistency guarantees, but the phrasing is off.
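To ground the {@code enable.auto.commit} discussion above: when the Kafka client auto-commits offsets for its group, two readers sharing a TopicPartition can overwrite each other's commits, so the group's committed offset may move backwards. A hedged sketch of a pipeline configured that way is shown below; the broker address, topic, and group id are placeholders.

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AutoCommitReadSketch {
  public static void main(String[] args) {
    // Auto-committing consumer group: offsets are committed by the Kafka client itself,
    // which is why overlapping consumption of the same TopicPartition corrupts them.
    Map<String, Object> consumerConfig = new HashMap<>();
    consumerConfig.put(ConsumerConfig.GROUP_ID_CONFIG, "my-consumer-group"); // placeholder
    consumerConfig.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);

    KafkaIO.Read<String, String> read =
        KafkaIO.<String, String>read()
            .withBootstrapServers("localhost:9092") // placeholder broker
            .withTopic("my-topic")                  // placeholder topic
            .withKeyDeserializer(StringDeserializer.class)
            .withValueDeserializer(StringDeserializer.class)
            .withConsumerConfigUpdates(consumerConfig);
  }
}
```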

@sjvanrossum
Contributor Author

> Could we add some batch Kafka pipeline integration test to validate this (and catch other possible issues)?

Added (as you may have seen), but the integration test passes with and without these changes. I'm hoping to validate this patch on the Prism runner later today.
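For reference, the kind of bounded ("batch") Kafka read such an integration test could exercise might look roughly like the sketch below, assuming KafkaIO.Read#withMaxNumRecords to bound the source; the class name, broker, topic, and record count are placeholders, not the test added in this PR.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.testing.PAssert;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.kafka.common.serialization.StringDeserializer;

public class BoundedKafkaReadSketch {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create();

    // withMaxNumRecords bounds the otherwise unbounded Kafka source, so the pipeline
    // runs as a batch job -- the mode where dynamic splits of per-partition
    // restrictions could previously let two readers share a TopicPartition.
    PCollection<KV<String, String>> records =
        pipeline.apply(
            KafkaIO.<String, String>read()
                .withBootstrapServers("localhost:9092") // placeholder broker
                .withTopic("kafka-read-it-topic")       // placeholder topic
                .withKeyDeserializer(StringDeserializer.class)
                .withValueDeserializer(StringDeserializer.class)
                .withMaxNumRecords(1000)
                .withoutMetadata());

    // Expect exactly the pre-published number of records, with none dropped or duplicated.
    PAssert.thatSingleton(records.apply(Count.globally())).isEqualTo(1000L);

    pipeline.run().waitUntilFinish();
  }
}
```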
