
Conversation

@sjvanrossum
Contributor

Replace the restriction trackers in ReadFromKafkaDoFn with UnsplittableRestrictionTracker<OffsetRange, Long> to prevent non-checkpointing splits. Consumer groups must not consume from the same TopicPartition simultaneously; this design detail was relied on to limit the creation of Consumer instances in ReadFromKafkaDoFn, but the invariant was not upheld by its split results.



@gemini-code-assist
Contributor

Summary of Changes

Hello @sjvanrossum, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue in Apache Beam's KafkaIO connector where ReadFromKafkaDoFn could inadvertently allow non-checkpointing splits for Kafka topic partitions. Such splits could lead to multiple consumers processing the same partition simultaneously, resulting in incorrect committed offsets and potential data loss or inconsistencies. By introducing an UnsplittableRestrictionTracker, the PR ensures that Kafka topic partitions are processed exclusively, safeguarding the integrity of consumer group offsets and improving the reliability of Kafka data ingestion.

Highlights

  • New Restriction Tracker: Introduced UnsplittableRestrictionTracker to enforce that certain restrictions, like those for Kafka topic partitions, cannot be split during processing except for checkpointing.
  • KafkaIO Integration: ReadFromKafkaDoFn now uses UnsplittableRestrictionTracker to wrap its internal OffsetRangeTracker and GrowableOffsetRangeTracker, preventing unintended concurrent consumption of Kafka topic partitions.
  • Javadoc Update: Clarified documentation in GrowableOffsetRangeTracker and ReadFromKafkaDoFn regarding the handling of Kafka topic partition offsets and the purpose of unsplittable restrictions.
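The wrapper described in the highlights can be sketched as a thin delegating tracker. This is an illustrative reconstruction, not the PR's actual code: `Tracker` and `SplitResult` below are simplified stand-ins for Beam's `RestrictionTracker` and `SplitResult` types, and the real class carries more methods (`tryClaim`, `checkDone`, `isBounded`).

```java
// Illustrative sketch only: Tracker and SplitResult are simplified stand-ins
// for the org.apache.beam.sdk.transforms.splittabledofn types.
class UnsplittableTrackerSketch {
  record SplitResult<R>(R primary, R residual) {}

  interface Tracker<R> {
    R currentRestriction();
    SplitResult<R> trySplit(double fractionOfRemainder);
  }

  /** Delegates everything except proportional (non-checkpointing) splits. */
  static class UnsplittableTracker<R> implements Tracker<R> {
    private final Tracker<R> delegate;

    UnsplittableTracker(Tracker<R> delegate) {
      this.delegate = delegate;
    }

    @Override
    public R currentRestriction() {
      return delegate.currentRestriction();
    }

    @Override
    public SplitResult<R> trySplit(double fractionOfRemainder) {
      // A fraction in (0, 1) would hand part of the TopicPartition's range to
      // a second consumer, so it is refused; checkpoints (0) and trivial
      // splits (>= 1) are passed through to the wrapped tracker.
      if (fractionOfRemainder > 0.0 && fractionOfRemainder < 1.0) {
        return null;
      }
      return delegate.trySplit(fractionOfRemainder);
    }
  }
}
```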

Contributor

@scwhittle scwhittle left a comment


Could we add some batch Kafka pipeline integration test to validate this (and catch other possible issues)?

}

@Override
public SplitResult<RestrictionT> trySplit(double fractionOfRemainder) {
Contributor


The base method is @Nullable; if you mark this nullable here, can you remove the suppression above? If for some reason it still doesn't work, can the suppression be limited to this function?

Contributor Author


Fixed in UnsplittableRestrictionTracker. I had a look at doing the same for OffsetRangeTracker and GrowableOffsetRangeTracker and it looks like there's a synchronization issue between tryClaim and getProgress. FnApiDoFnRunner calls tryClaim when splitLock is not held (during processElement) and modifies lastAttemptedOffset. FnApiDoFnRunner also calls getProgress when splitLock is held so this prevents the observation of different values for range, but it does not prevent the observation of different values for lastAttemptedOffset. It looks benign since the lastAttemptedOffset is only read to perform a null check and then read again to use its value, but the null checker complains about it nonetheless. I'll patch that in a different PR.
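The double-read pattern described here can be shown in isolation. The field and method names below are made up for this sketch and are not Beam's actual code; the point is that null-checking a nullable field and then dereferencing it are two separate reads, which is what the null checker flags when the writer does not hold the same lock.

```java
// Hypothetical illustration of the benign race discussed above; names are
// invented for this sketch and do not come from FnApiDoFnRunner.
class ProgressSketch {
  // Written by the claim path without holding the split lock, read by the
  // progress path while holding it.
  private volatile Long lastAttemptedOffset;

  void claim(long offset) {
    lastAttemptedOffset = offset;
  }

  // Flagged by the null checker: the field is read once for the null check
  // and again for its value, so a concurrent write could land in between.
  long workRemainingFlagged(long endOffset) {
    if (lastAttemptedOffset == null) {
      return endOffset;
    }
    return endOffset - lastAttemptedOffset;
  }

  // The usual fix: copy to a local so there is exactly one read of the field.
  long workRemainingSafe(long endOffset) {
    Long attempted = lastAttemptedOffset;
    return attempted == null ? endOffset : endOffset - attempted;
  }
}
```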


@Override
public SplitResult<RestrictionT> trySplit(double fractionOfRemainder) {
return fractionOfRemainder < 1.0 ? null : tracker.trySplit(fractionOfRemainder);
Contributor


it seems like 0 should also be handled specially, from base comment:

   * @return a {@link SplitResult} if a split was possible, otherwise returns {@code null}. If the
   *     {@code fractionOfRemainder == 0}, a {@code null} result MUST imply that the restriction
   *     tracker is done and there is no more work left to do.

0 also seems ok to delegate since we shouldn't be doing parallel processing in that case either

Contributor Author


Fixed.
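The before/after of this fix can be captured as a pure predicate. This is an illustrative reconstruction, not the PR's diff: the original condition `fractionOfRemainder < 1.0 ? null : …` would also swallow checkpoints at `fractionOfRemainder == 0`, while the fixed version only refuses proportional splits in (0, 1).

```java
// Illustrative reconstruction of the gating condition before and after the fix.
class SplitGate {
  // Original: refuses everything below 1.0, including checkpoints at 0.
  static boolean delegatesBefore(double fractionOfRemainder) {
    return fractionOfRemainder >= 1.0;
  }

  // Fixed: checkpoints (0) and trivial splits (>= 1) delegate to the wrapped
  // tracker; only proportional splits in (0, 1) are refused.
  static boolean delegatesAfter(double fractionOfRemainder) {
    return fractionOfRemainder <= 0.0 || fractionOfRemainder >= 1.0;
  }
}
```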

* <h4>Splitting</h4>
*
* <p>Consumer groups must not consume from the same {@link TopicPartition} simultaneously. Doing so
* may arbitrarily overwrite a consumer group's committed offset for a {@link TopicPartition}.
Contributor


this committed offset is mostly for external monitoring, correct? Beam maintains its own offsets for reading for correctness reasons, and I think it could handle reading the same partition in parallel if it wasn't sharing a consumer.

If offsets are committed in bundle finalization, the offset being committed could be out-of-order or stale anyway.

So I think mainly we are preventing the splitting because the current caching strategy doesn't work well with it. I'm also not sure if it would actually improve throughput to read from the same partition from different threads but if we are reading fixed non-overlapping ranges it seems like it possibly could.

Contributor


So I think that this could work, but would we rather support processing distinct portions of the same partition by caching more than 1 consumer? I think we'd have to change the code to more explicitly acquire and release a consumer.

Though batch isn't particularly common given we haven't hit this before, this could maybe be useful even for streaming for processing large backlogs if we made the initial splits per partition something like [s, (s+f)/2) [(s+f)/2, f) [f, streaming tail] where s is start position, f is current position.

This is assuming that consumers are independent and that this won't screw up caches or something on the server and kill performance.
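The initial split scheme proposed above can be sketched as follows. This is a hypothetical sketch of the idea, not code from the PR: `Long.MAX_VALUE` stands in for the unbounded streaming tail (real code would use something like GrowableOffsetRangeTracker), and the midpoint is written as `s + (f - s) / 2` to avoid the overflow risk of `(s + f) / 2`.

```java
// Hypothetical sketch of the [s, (s+f)/2), [(s+f)/2, f), [f, streaming tail]
// split scheme proposed above; not code from this PR.
class InitialSplits {
  record Range(long from, long to) {}

  static java.util.List<Range> split(long s, long f) {
    long mid = s + (f - s) / 2; // midpoint, written to avoid long overflow
    return java.util.List.of(
        new Range(s, mid), // first half of the known backlog
        new Range(mid, f), // second half of the known backlog
        new Range(f, Long.MAX_VALUE)); // streaming tail, unbounded in practice
  }
}
```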

Contributor Author


I'd prefer that over UnsplittableRestrictionTracker, but we'd still need to prevent splitting when enable.auto.commit is set. If runners split too aggressively during processing then there's the risk of overwhelming a broker with connections, but I'm not too concerned about that.

I've asked the release manager to weigh in on whether they want to cherry-pick this change into 2.70.0 since it would unblock a user. If not, I'll add those changes to this PR.

@github-actions
Contributor

Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment assign set of reviewers
