Feat: round-level device resource reuse for multirun #335

Open
doraemonmj wants to merge 1 commit into hw-native-sys:main from doraemonmj:reuse

Conversation

Contributor

@doraemonmj doraemonmj commented Mar 20, 2026

Summary

  • Introduce init_runtime_round() and finalize_runtime_round() platform APIs that enable reusing device memory allocations across multiple execution rounds, avoiding repeated device_malloc/device_free per round

  • Split runtime init/validate into one-time setup (allocation) and per-round operations (data copy / copy-back), applied to all three runtimes on both a2a3 and a5

  • Restructure CodeRunner round loop from create-init-launch-finalize-per-round to init-once → per-round (init_round → launch → finalize_round) → finalize-once

Changes

Platform API (src/{a2a3,a5}/platform/)

  • Add init_runtime_round() and finalize_runtime_round() declarations and implementations for onboard and sim backends

Runtime makers (src/{a2a3,a5}/runtime/*/host/runtime_maker.cpp)

  • tensormap_and_ringbuffer: move INPUT/INOUT copy_to_device from init_runtime_impl to new init_runtime_round_impl; move output copy-back to validate_runtime_round_impl, leaving validate_runtime_impl for final cleanup only

  • aicpu_build_graph: same split pattern with phase-specific data copy

  • host_build_graph: add stub round functions (no device memory to reuse)

Python bindings (python/bindings.py)

  • Add Runtime.initialize_round() and Runtime.finalize_round() with _initialized state tracking
  • Extract _convert_orch_params() helper to reduce duplication between initialize() and initialize_round()
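The binding pattern above can be sketched as follows. This is a minimal illustration only, not the PR's code: the real bindings wrap a C library (presumably via ctypes), so the `lib` object, its call signatures, and the error-handling details here are assumptions; only the names `Runtime`, `initialize_round`, `finalize_round`, `_initialized`, and `_convert_orch_params` come from the PR description.

```python
class Runtime:
    """Sketch of the described binding: one-time init plus per-round reuse."""

    def __init__(self, lib):
        self.lib = lib              # handle to the loaded C library (assumed shape)
        self._initialized = False   # tracks whether one-time allocation has run

    def _convert_orch_params(self, params):
        # Shared helper so initialize() and initialize_round() do not
        # duplicate the orchestration-parameter conversion logic.
        return {k: str(v) for k, v in params.items()}

    def initialize(self, params):
        converted = self._convert_orch_params(params)
        rc = self.lib.init_runtime(converted)        # one-time device allocation
        if rc != 0:
            raise RuntimeError(f"init_runtime failed: {rc}")
        self._initialized = True

    def initialize_round(self, params):
        if not self._initialized:
            # First call must perform the full, allocating initialization.
            self.initialize(params)
            return
        converted = self._convert_orch_params(params)
        rc = self.lib.init_runtime_round(converted)  # per-round input copy only
        if rc != 0:
            raise RuntimeError(f"init_runtime_round failed: {rc}")

    def finalize_round(self):
        rc = self.lib.finalize_runtime_round()       # copy results back, keep memory
        if rc != 0:
            raise RuntimeError(f"finalize_runtime_round failed: {rc}")

    def finalize(self):
        self.lib.finalize_runtime()                  # free device resources
        self._initialized = False
```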

CodeRunner (examples/scripts/code_runner.py)

  • Restructure round loop: one initialize() + N × (initialize_round → launch → finalize_round) + one finalize()
  • Benchmark tooling (tools/benchmark_rounds.sh): add HOST_TIMING extraction and a --show-host flag for host-side latency visibility
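The restructured round loop described above can be sketched as below. This is an illustrative outline, not the CodeRunner code itself: `runtime` stands in for the PR's Runtime binding, and `run_rounds`, `launch`, and `repeat_rounds` are placeholder names.

```python
def run_rounds(runtime, params, repeat_rounds):
    """One initialize() + N x (initialize_round -> launch -> finalize_round)
    + one finalize(), as described in the PR summary."""
    runtime.initialize(params)                # one-time: allocate device memory
    try:
        for _ in range(repeat_rounds):
            runtime.initialize_round(params)  # per-round: copy inputs to device
            runtime.launch()                  # execute the kernel for this round
            runtime.finalize_round()          # per-round: copy outputs back
    finally:
        runtime.finalize()                    # one-time: free device memory
```

Compared with the old create-init-launch-finalize-per-round shape, the allocation and free each happen exactly once regardless of how many rounds run.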

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the efficiency of multi-round kernel executions by introducing a robust device resource reuse mechanism. Instead of re-allocating and de-allocating resources for each round, the system now intelligently maintains and re-initializes device memory and other critical components, leading to reduced overhead and improved performance for iterative workloads. This change provides a more streamlined and optimized runtime experience, particularly for scenarios involving repeated computations with varying input data.

Highlights

  • Resource Reuse for Multi-Round Executions: Implemented a mechanism to allocate device resources once for the first round of execution and reuse them across subsequent rounds when repeat_rounds > 1, freeing them only at the very end. This optimizes performance by avoiding redundant resource allocation and deallocation.
  • New C APIs for Runtime Management: Introduced two new C APIs: reinit_runtime for lightweight re-initialization (re-copying input data to existing device addresses) and finalize_runtime_round for copying results back without freeing device resources.
  • Runtime Specific Implementations and Fallbacks: The tensormap_and_ringbuffer runtime now fully supports true resource reuse. Other runtimes (like aicpu_build_graph and host_build_graph) automatically fall back to full initialization and finalization per round if reinit_runtime or finalize_runtime_round are not explicitly supported.
  • Python API Enhancements: The Python Runtime.initialize() method has been updated to automatically detect whether it's the first or a subsequent call, intelligently delegating to the appropriate C API (init_runtime or reinit_runtime) without requiring explicit branching from the caller.
  • Reused Resources: The resources that are now reused across rounds include kernel binaries, GM Heap memory (approximately 1GB), Shared Memory, Orchestration SO device copies, and tensor buffers.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The pull request introduces a performance optimization for repeated execution rounds by implementing lightweight runtime re-initialization and round-level finalization. This involves moving runtime initialization outside the loop in code_runner.py and conditionally calling finalize_round() for intermediate rounds. New C API functions (reinit_runtime, finalize_runtime_round) and corresponding Python bindings are added, along with an _initialized flag in the Runtime class to manage state. While tensormap_and_ringbuffer runtimes gain full support for these new operations, aicpu_build_graph and host_build_graph runtimes provide stub implementations indicating lack of support. The review comments highlight code duplication in the validate_runtime_round_impl and validate_runtime_impl functions across a2a3 and a5 tensormap_and_ringbuffer runtimes, suggesting extraction into a shared helper function for improved maintainability.

@doraemonmj doraemonmj force-pushed the reuse branch 3 times, most recently from ebaea11 to 5da863d on March 27, 2026 06:11
Introduce init_runtime_round and finalize_runtime_round to enable
reusing device memory allocations across multiple execution rounds,
avoiding repeated device_malloc/device_free per round.

- Platform API: add init_runtime_round() and finalize_runtime_round()
  for both a2a3 and a5, onboard and sim backends
- Runtime makers: split init into one-time allocation (init_runtime_impl)
  and per-round data copy (init_runtime_round_impl); split validate into
  per-round copy-back (validate_runtime_round_impl) and final cleanup
  (validate_runtime_impl)
- Python bindings: add Runtime.initialize_round() and finalize_round()
  with shared _convert_orch_params helper
- CodeRunner: restructure round loop to initialize once, then use
  per-round init/finalize within the repeat loop
- benchmark_rounds.sh: add HOST_TIMING extraction and --show-host flag
@doraemonmj doraemonmj changed the title from "[WIP] Feat: round-level device resource reuse for multirun" to "Feat: round-level device resource reuse for multirun" on Mar 27, 2026

for (int i = 0; i < tensor_pair_count; i++) {
    const TensorPair& pair = tensor_pairs[i];
    int copy_rc = runtime->host_api.copy_from_device(pair.host_ptr, pair.dev_ptr, pair.size);
Collaborator

pair.dev_ptr, pair.host_ptr, and pair.size are used here without any validation; the other two runtimes both perform this check.

void* pto2_sm = runtime->get_pto2_gm_sm_ptr();
uint64_t graph_out_ptr = 0;
uint64_t graph_out_size = 0;
void* graph_out_src = nullptr;
Collaborator

Variables that serve the same purpose in different runtimes should use the same name, otherwise maintenance risk increases; please review all the modified files for this.

rc = self.lib.finalize_runtime_round(self._handle)
if rc != 0:
    # Not supported by this runtime, fallback to full finalize
    self.finalize()
Collaborator

This fallback frees all resources, so any remaining rounds that continue executing afterwards will hit a latent bug.
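One possible way to address this, sketched below under assumed names (`finalize_round_safe` and `is_last_round` are hypothetical, not part of the PR): only fall back to the full finalize when no further rounds remain, and fail loudly otherwise instead of silently freeing resources mid-loop.

```python
def finalize_round_safe(runtime, is_last_round):
    """Per-round finalize that never frees device resources mid-loop.

    Sketch only: assumes the Runtime binding exposes `lib`, `_handle`,
    and `finalize()` as in the snippet under review.
    """
    rc = runtime.lib.finalize_runtime_round(runtime._handle)
    if rc != 0:
        if is_last_round:
            runtime.finalize()  # safe: nothing executes after this round
        else:
            # Freeing everything here would break the remaining rounds,
            # so surface the lack of support instead of finalizing.
            raise RuntimeError(
                "finalize_runtime_round unsupported by this runtime; "
                "cannot safely continue with per-round reuse")
```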
