feat: enhance WecomAIBotAdapter and WecomAIBotMessageEvent for improved streaming message handling #5000
Merged
**suggestion (performance):** Consider the lifecycle and potential unbounded growth of `_stream_plain_cache`.

Right now entries are only removed when a stream finishes and `has_back_queue(stream_id)` is false. If a stream never reaches completion and its back queue is never removed (e.g., producer crash or future logic changes), this dict can grow without bound in long-running processes. Consider tying cache cleanup directly to queue lifecycle (e.g., in `remove_queues` or a shared teardown path), and/or adding a safeguard such as a TTL, a max size, or debug assertions to detect leaks.

Suggested implementation:

```python
)
# Cache plain text for streaming responses; guarded by a max size and lifecycle-based cleanup
self._stream_plain_cache: dict[str, str] = {}
# Hard cap to avoid unbounded growth in long-running processes
self._stream_plain_cache_max_size: int = 10_000
self.webhook_client: WecomAIBotWebhookClient | None = None
```

To fully align cache lifecycle with queue lifecycle across the codebase, consider:

- In the method that removes the queues for a `stream_id` (possibly outside this file), e.g. `queue_mgr.remove_queues(stream_id)` or similar, also removing the corresponding cache entry.
- Ensuring all writes to `_stream_plain_cache` reside in this class and use a consistent pattern, so that future lifecycle changes (like a more sophisticated TTL or LRU strategy) can be centralized. If writes are scattered across the codebase, refactor them into a helper like `_set_stream_plain_cache(stream_id, value)` that can enforce limits and logging in one place.
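The two points above can be combined into a small sketch. Note this is a hypothetical illustration, not the adapter's actual code: the class name `StreamCacheMixin`, the eviction policy, and the `remove_queues` signature are assumptions; only the `_stream_plain_cache` / `_stream_plain_cache_max_size` attribute names and the `_set_stream_plain_cache` helper idea come from the review comment.

```python
class StreamCacheMixin:
    """Hypothetical sketch of the reviewer's suggestion: a size-capped cache
    with a single write path, and cleanup tied to queue removal."""

    def __init__(self) -> None:
        # Cache plain text for streaming responses; guarded by a max size
        self._stream_plain_cache: dict[str, str] = {}
        # Hard cap to avoid unbounded growth in long-running processes
        self._stream_plain_cache_max_size: int = 10_000

    def _set_stream_plain_cache(self, stream_id: str, value: str) -> None:
        # Centralized write point so the size limit (and any future TTL/LRU
        # policy or logging) is enforced in one place.
        if (
            stream_id not in self._stream_plain_cache
            and len(self._stream_plain_cache) >= self._stream_plain_cache_max_size
        ):
            # Evict the oldest entry (dicts preserve insertion order);
            # a real implementation might prefer a proper LRU or TTL cache.
            oldest = next(iter(self._stream_plain_cache))
            self._stream_plain_cache.pop(oldest, None)
        self._stream_plain_cache[stream_id] = value

    def remove_queues(self, stream_id: str) -> None:
        # Tie cache cleanup to queue lifecycle: tearing down a stream's
        # queues also drops its cached plain text, so an abandoned stream
        # cannot leak a cache entry.
        self._stream_plain_cache.pop(stream_id, None)
```

With this shape, the happy-path removal in the stream-finish handler becomes redundant but harmless, and the hard cap acts as a last-resort guard if a teardown path is ever missed.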