Conversation

@xiezhq-hermann
Collaborator

Motivation

The current schedule_policy contains quite a bit of legacy code that complicates the logic. This PR aims to simplify the structure and make future refactoring of the mem-cache easier.

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @xiezhq-hermann, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the schedule_policy implementation by simplifying its internal state management and token calculation logic. The changes aim to reduce code complexity, improve the accuracy of token usage estimation, and lay the groundwork for easier future refactoring of the memory cache system.

Highlights

  • Consolidated Token Reservation: Replaced rem_total_token_offset and cur_rem_token_offset with a single reserved_tokens attribute, simplifying token budget management within the scheduling policy.
  • Improved Token Usage Estimation: Introduced _decode_token_usage_estimation to provide a more accurate and unified way to calculate token usage, especially considering ignore_eos and max_new_tokens.
  • Removed Redundant Logic: Eliminated the cur_rem_tokens property and the add_one_req_ignore_eos method, streamlining the scheduling policy's internal state and request handling.
  • Simplified Budget Checks: Modified the budget_state method to rely solely on rem_total_tokens, removing a redundant check and simplifying the logic for determining if new requests can be added.
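
The highlights above can be made concrete with a minimal standalone sketch of how a `_decode_token_usage_estimation`-style helper could work. This is an illustration only: the scalar signature, the `CLIP_MAX_NEW_TOKENS` value, and the example numbers are assumptions, not the PR's exact code, which operates on request objects.

```python
# Hypothetical sketch of per-request decode token estimation.
# CLIP_MAX_NEW_TOKENS and the field names mirror the PR, but the
# value and signature here are assumptions for illustration.
CLIP_MAX_NEW_TOKENS = 4096

def decode_token_usage_estimation(max_new_tokens, num_output_ids, new_token_ratio):
    # Tokens the request may still generate, floored at zero,
    # clipped to a cap, then scaled and truncated to an int budget.
    remaining = max(max_new_tokens - num_output_ids, 0)
    return int(min(remaining, CLIP_MAX_NEW_TOKENS) * new_token_ratio)

# A request allowed 128 new tokens that has emitted 28 so far,
# with a ratio of 0.5, reserves an estimated 50 tokens.
print(decode_token_usage_estimation(128, 28, 0.5))  # → 50
```

Clipping to `CLIP_MAX_NEW_TOKENS` keeps a single long-running request from reserving the entire budget, and the ratio presumably lets the scheduler admit more requests when few of them actually reach their `max_new_tokens`.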
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request effectively simplifies the schedule policy implementation by refactoring token reservation logic and removing the complex add_one_req_ignore_eos method. The introduction of _decode_token_usage_estimation is a good move to centralize token usage estimation, which enhances code clarity and maintainability. The changes align well with the stated goal of making the code easier to understand and refactor in the future. I have a couple of suggestions to further improve code quality.

Comment on lines +354 to 356
```python
self.reserved_tokens += sum(
    [self._decode_token_usage_estimation(r) for r in running_batch.reqs]
)
```
Severity: medium

For better memory efficiency, you can use a generator expression instead of a list comprehension within sum(). This avoids creating an intermediate list in memory, which is more efficient, especially if running_batch.reqs contains a large number of requests.

```python
self.reserved_tokens += sum(
    self._decode_token_usage_estimation(r) for r in running_batch.reqs
)
```
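
To illustrate the memory point, the two forms below produce the same total; only the list comprehension materializes an intermediate list before `sum()` runs. The doubled range is a toy stand-in for the per-request estimates, not code from the PR.

```python
# Toy stand-in for per-request token estimates (not code from the PR).
reqs = range(10_000)

# List comprehension: builds a 10,000-element list before summing.
list_total = sum([r * 2 for r in reqs])

# Generator expression: feeds sum() one value at a time, no intermediate list.
gen_total = sum(r * 2 for r in reqs)

assert list_total == gen_total
print(gen_total)  # → 99990000
```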

Comment on lines 372 to 378

```diff
         return (
             min(
-                (req.sampling_params.max_new_tokens - len(req.output_ids)),
+                max(req.sampling_params.max_new_tokens - len(req.output_ids), 0),
                 CLIP_MAX_NEW_TOKENS,
             )
-            * self.new_token_ratio
+            * new_token_ratio
         )
```
Severity: medium

The return expression multiplies an integer with a float (new_token_ratio), resulting in a float value. However, the function's type hint is -> int. To ensure type consistency and make the truncation behavior explicit, it's better to cast the final result to an integer using int().

```python
return int(
    min(
        max(req.sampling_params.max_new_tokens - len(req.output_ids), 0),
        CLIP_MAX_NEW_TOKENS,
    )
    * new_token_ratio
)
```
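
A quick note on the `int()` cast: for positive values it truncates toward zero rather than rounding, so the reserved budget is always the floor of the scaled estimate. The numbers below are illustrative only, not values from the PR.

```python
# int() on a float truncates toward zero; it does not round.
scaled = 63 * 0.7      # roughly 44.1 (slightly less, due to binary floats)
print(int(scaled))     # → 44
print(int(44.9))       # → 44 (truncated, not rounded)
print(round(44.9))     # → 45 (rounding, shown for contrast)
```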

@xiezhq-hermann
Collaborator Author

@hnyls2002 please help take a look and I will keep polishing this PR.
