[KafkaIO] Make ReadFromKafkaDoFn restriction trackers unsplittable #36935
Conversation
scwhittle left a comment:
Could we add a batch Kafka pipeline integration test to validate this (and catch other possible issues)?
```java
@Override
public SplitResult<RestrictionT> trySplit(double fractionOfRemainder) {
```
The base method is @Nullable; if you mark this @Nullable here, can you remove the suppression above? If for some reason it still doesn't work, can the suppression be limited to this function?
Fixed in UnsplittableRestrictionTracker. I had a look at doing the same for OffsetRangeTracker and GrowableOffsetRangeTracker, and it looks like there's a synchronization issue between tryClaim and getProgress.

FnApiDoFnRunner calls tryClaim when splitLock is not held (during processElement) and modifies lastAttemptedOffset. FnApiDoFnRunner also calls getProgress while splitLock is held, which prevents the observation of different values for range, but it does not prevent the observation of different values for lastAttemptedOffset.

It looks benign, since lastAttemptedOffset is only read to perform a null check and then read again to use its value, but the null checker complains about it nonetheless. I'll patch that in a different PR.
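For illustration only, here is a minimal hypothetical sketch of the double-read pattern described above (TrackerSketch, workCompleted, and the field shapes are stand-ins, not Beam's actual classes): a field mutated by tryClaim outside the lock is null-checked and then read again, which the null checker flags; reading it once into a local avoids both the warning and the racy second read.

```java
// Hypothetical stand-in for the tracker state; not Beam's actual code.
class TrackerSketch {
  private volatile Long lastAttemptedOffset; // written by tryClaim() outside splitLock

  void tryClaim(long offset) {
    lastAttemptedOffset = offset;
  }

  long workCompleted(long rangeFrom) {
    // Racy variant the checker complains about (two reads of the field):
    //   if (lastAttemptedOffset == null) return 0L;
    //   return lastAttemptedOffset - rangeFrom + 1;
    // A single read into a local avoids the warning and the race:
    Long attempted = lastAttemptedOffset;
    return attempted == null ? 0L : attempted - rangeFrom + 1;
  }
}
```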
```java
@Override
public SplitResult<RestrictionT> trySplit(double fractionOfRemainder) {
  return fractionOfRemainder < 1.0 ? null : tracker.trySplit(fractionOfRemainder);
```
It seems like 0 should also be handled specially; from the base method's comment:

```java
 * @return a {@link SplitResult} if a split was possible, otherwise returns {@code null}. If the
 *     {@code fractionOfRemainder == 0}, a {@code null} result MUST imply that the restriction
 *     tracker is done and there is no more work left to do.
```

0 also seems OK to delegate, since we shouldn't be doing parallel processing in that case either.
Fixed.
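For illustration, the adjusted decision discussed above can be factored out as a predicate. This is a hedged sketch, not the PR's actual code (SplitPolicy and shouldDelegate are hypothetical names): checkpoints (fraction == 0) and whole-restriction splits (fraction >= 1) pass through to the wrapped tracker, and only the intermediate, parallel-split fractions are refused.

```java
// Hypothetical helper, not the PR's actual code.
class SplitPolicy {
  // Delegate checkpoints (0) and whole-restriction splits (>= 1);
  // refuse only the parallel splits in between.
  static boolean shouldDelegate(double fractionOfRemainder) {
    return fractionOfRemainder == 0.0 || fractionOfRemainder >= 1.0;
  }
}
```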
```java
 * <h4>Splitting</h4>
 *
 * <p>Consumer groups must not consume from the same {@link TopicPartition} simultaneously. Doing so
 * may arbitrarily overwrite a consumer group's committed offset for a {@link TopicPartition}.
```
This committed offset is mostly for external monitoring, correct? Beam maintains its own offsets for reading for correctness reasons, and I think it could handle reading the same partition in parallel if it wasn't sharing a consumer.

If the mode is committing the offset in bundle finalization, the committed offset could be out-of-order or stale anyway.

So I think we are mainly preventing the splitting because the current caching strategy doesn't work well with it. I'm also not sure whether reading the same partition from different threads would actually improve throughput, but if we are reading fixed non-overlapping ranges it seems like it possibly could.
So I think this could work, but would we rather support processing distinct portions of the same partition by caching more than one consumer? I think we'd have to change the code to acquire and release a consumer more explicitly.

Though batch isn't particularly common, given we haven't hit this before, this could be useful even for streaming when processing large backlogs, if we made the initial splits per partition something like [s, (s+f)/2), [(s+f)/2, f), [f, streaming tail), where s is the start position and f is the current position.

This assumes that consumers are independent and that this won't screw up caches or something on the server and kill performance.
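The initial-split scheme above reduces to plain offset arithmetic; the sketch below is a hypothetical illustration (PartitionSplits is not Beam code, and Long.MAX_VALUE stands in for the unbounded streaming tail):

```java
// Hypothetical sketch of the proposed initial splits for one partition:
// [s, m), [m, f), [f, streaming tail), where m is the midpoint of [s, f).
class PartitionSplits {
  static long[][] initialSplits(long s, long f) {
    long m = s + (f - s) / 2; // overflow-safe midpoint of [s, f)
    return new long[][] {
      {s, m},              // first half of the backlog
      {m, f},              // second half of the backlog
      {f, Long.MAX_VALUE}  // stand-in for the unbounded streaming tail
    };
  }
}
```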
I'd prefer that over UnsplittableRestrictionTracker, but we'd still need to prevent splitting when enable.auto.commit is set. If runners split too aggressively during processing, there's a risk of overwhelming a broker with connections, but I'm not too concerned about that.

I've asked the release manager to weigh in on whether they want to cherry-pick this change into 2.70.0, since it would unblock a user. If not, I'll add those changes to this PR.
Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment.
Replace the restriction trackers in ReadFromKafkaDoFn with UnsplittableRestrictionTracker<OffsetRange, Long> to prevent non-checkpointing splits. Consumer groups must not consume from the same TopicPartition simultaneously, and this design detail was used to limit the creation of Consumer instances in ReadFromKafkaDoFn, but it failed to be upheld by its split results.