Feat: round-level device resource reuse for multirun #335
doraemonmj wants to merge 1 commit into hw-native-sys:main
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request improves the efficiency of multi-round kernel executions by introducing a device resource reuse mechanism. Instead of re-allocating and de-allocating resources for each round, the system now maintains and re-initializes device memory and other critical components, reducing overhead and improving performance for iterative workloads. This yields a more streamlined runtime, particularly for scenarios involving repeated computations with varying input data.
Code Review
The pull request introduces a performance optimization for repeated execution rounds by implementing lightweight runtime re-initialization and round-level finalization. This involves moving runtime initialization outside the loop in code_runner.py and conditionally calling finalize_round() for intermediate rounds. New C API functions (reinit_runtime, finalize_runtime_round) and corresponding Python bindings are added, along with an _initialized flag in the Runtime class to manage state. While tensormap_and_ringbuffer runtimes gain full support for these new operations, aicpu_build_graph and host_build_graph runtimes provide stub implementations indicating lack of support. The review comments highlight code duplication in the validate_runtime_round_impl and validate_runtime_impl functions across a2a3 and a5 tensormap_and_ringbuffer runtimes, suggesting extraction into a shared helper function for improved maintainability.
src/a2a3/runtime/tensormap_and_ringbuffer/host/runtime_maker.cpp
Force-pushed from ebaea11 to 5da863d
Introduce init_runtime_round and finalize_runtime_round to enable reusing device memory allocations across multiple execution rounds, avoiding repeated device_malloc/device_free per round.

- Platform API: add init_runtime_round() and finalize_runtime_round() for both a2a3 and a5, onboard and sim backends
- Runtime makers: split init into one-time allocation (init_runtime_impl) and per-round data copy (init_runtime_round_impl); split validate into per-round copy-back (validate_runtime_round_impl) and final cleanup (validate_runtime_impl)
- Python bindings: add Runtime.initialize_round() and finalize_round() with shared _convert_orch_params helper
- CodeRunner: restructure round loop to initialize once, then use per-round init/finalize within the repeat loop
- benchmark_rounds.sh: add HOST_TIMING extraction and --show-host flag
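The restructured round loop described in the commit message can be sketched as follows. This is a minimal illustration, not the real CodeRunner: the `Runtime` class and its method bodies here are stand-ins that only model the state transitions, with names mirroring the PR's API.

```python
# Hypothetical sketch of the restructured round loop: heavy setup
# (device allocation) happens once; per-round work is limited to
# data copies. The Runtime class here is a stand-in for the binding.

class Runtime:
    def __init__(self):
        self._initialized = False

    def initialize(self):              # one-time: allocate device buffers
        self._initialized = True

    def initialize_round(self, data):  # per-round: copy inputs to device
        assert self._initialized

    def launch(self):                  # run the kernel for this round
        assert self._initialized

    def finalize_round(self):          # per-round: copy outputs back
        assert self._initialized

    def finalize(self):                # one-time: free device buffers
        self._initialized = False


def run_rounds(runtime, inputs):
    runtime.initialize()               # init-once
    try:
        for data in inputs:            # per-round repeat loop
            runtime.initialize_round(data)
            runtime.launch()
            runtime.finalize_round()
    finally:
        runtime.finalize()             # finalize-once
```

The `try/finally` ensures the one-time teardown runs even if a round fails, which the PR's flat loop structure would otherwise leave to the caller.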
for (int i = 0; i < tensor_pair_count; i++) {
    const TensorPair& pair = tensor_pairs[i];
    int copy_rc = runtime->host_api.copy_from_device(pair.host_ptr, pair.dev_ptr, pair.size);
pair.dev_ptr, pair.host_ptr, and pair.size are used here without being checked first; the other two runtimes both perform these checks.
void* pto2_sm = runtime->get_pto2_gm_sm_ptr();
uint64_t graph_out_ptr = 0;
uint64_t graph_out_size = 0;
void* graph_out_src = nullptr;
Variables serving the same purpose across different runtimes should use the same names, to avoid adding maintenance risk; please check all the modified files.
rc = self.lib.finalize_runtime_round(self._handle)
if rc != 0:
    # Not supported by this runtime, fallback to full finalize
    self.finalize()
Falling back to a full finalize() here releases all resources; if subsequent rounds keep executing afterwards, this is a latent bug.
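One way to address this review comment is to surface the unsupported case to the caller instead of silently tearing everything down mid-loop. The sketch below is a hedged illustration, not the PR's code: the exception type and the `FakeLib`-style interface are hypothetical, and only `lib`, `_handle`, `finalize`, and `_initialized` echo names from the PR.

```python
# Hypothetical alternative to the silent finalize() fallback: raise,
# so later rounds cannot run against freed device resources.
# RoundFinalizeUnsupported and the lib interface are stand-ins.

class RoundFinalizeUnsupported(RuntimeError):
    """Raised when the runtime lacks finalize_runtime_round support."""

class Runtime:
    def __init__(self, lib, handle):
        self.lib = lib
        self._handle = handle
        self._initialized = True

    def finalize(self):
        # Full teardown: frees all device memory.
        self._initialized = False

    def finalize_round(self):
        rc = self.lib.finalize_runtime_round(self._handle)
        if rc != 0:
            # Do NOT fall back to finalize() mid-loop: that would free
            # resources still needed by the remaining rounds.
            raise RoundFinalizeUnsupported(
                f"finalize_runtime_round returned {rc}")
```

The caller can then decide whether to abort the remaining rounds or switch to the full init/finalize-per-round path, rather than discovering freed resources on the next launch.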
Summary
- Introduce init_runtime_round() and finalize_runtime_round() platform APIs that enable reusing device memory allocations across multiple execution rounds, avoiding repeated device_malloc/device_free per round
- Split runtime init/validate into one-time setup (allocation) and per-round operations (data copy / copy-back), applied to all three runtimes on both a2a3 and a5
- Restructure the CodeRunner round loop from create-init-launch-finalize-per-round to init-once → per-round (init_round → launch → finalize_round) → finalize-once
Changes
Platform API (src/{a2a3,a5}/platform/)
Runtime makers (src/{a2a3,a5}/runtime/*/host/runtime_maker.cpp)
- tensormap_and_ringbuffer: move INPUT/INOUT copy_to_device from init_runtime_impl to the new init_runtime_round_impl; move output copy-back to validate_runtime_round_impl, leaving validate_runtime_impl for final cleanup only
- aicpu_build_graph: same split pattern, with phase-specific data copies
- host_build_graph: add stub round functions (no device memory to reuse)
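The maker-side split listed above can be illustrated in Python pseudo-form (the real implementations are C++ in the runtime_maker.cpp files; the class, tensor kinds, and buffer model here are purely illustrative):

```python
# Illustrative sketch of the init/validate split: one-time methods
# allocate and free simulated device buffers, per-round methods only
# copy data. Not the real C++ runtime maker.

class TensorMapRuntimeMaker:
    def __init__(self, tensors):
        # tensors: name -> (kind, host bytearray), kind in
        # {"INPUT", "OUTPUT", "INOUT"}
        self.tensors = tensors
        self.device = {}  # name -> simulated device buffer

    def init_runtime_impl(self):
        # One-time: allocate a device buffer per tensor, no data copies.
        for name, (kind, host) in self.tensors.items():
            self.device[name] = bytearray(len(host))

    def init_runtime_round_impl(self):
        # Per-round: copy INPUT/INOUT tensors into the reused buffers.
        for name, (kind, host) in self.tensors.items():
            if kind in ("INPUT", "INOUT"):
                self.device[name][:] = host

    def validate_runtime_round_impl(self):
        # Per-round: copy OUTPUT/INOUT results back to host memory.
        for name, (kind, host) in self.tensors.items():
            if kind in ("OUTPUT", "INOUT"):
                host[:] = self.device[name]

    def validate_runtime_impl(self):
        # Final cleanup only: release the device buffers.
        self.device.clear()
```

Because allocation lives only in init_runtime_impl and release only in validate_runtime_impl, the two round-level methods can be called any number of times in between without touching the allocator, which is the point of the PR.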
Python bindings (python/bindings.py)
CodeRunner (examples/scripts/code_runner.py)