Copilot AI commented Nov 7, 2025

Adds a context usage percentage indicator to chat request messages, showing how much of the model's context window the request consumes. The indicator is shown on hover for normal usage (≤80%) and as an always-visible warning badge for high utilization (>80%).

Changes

View Model

  • Extended IChatRequestViewModel with contextUsagePercentage?: number to cache calculated percentages
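A minimal sketch of what the view model extension looks like. The interface here is a stripped-down stand-in for the real `IChatRequestViewModel` (only `contextUsagePercentage` comes from this PR; the other members are illustrative):

```typescript
// Stand-in for the chat request view model; only contextUsagePercentage
// is the property added by this PR. It stays undefined until the async
// token calculation resolves, then caches the result across re-renders.
interface IChatRequestViewModel {
	readonly id: string;
	readonly messageText: string;
	contextUsagePercentage?: number;
}

const vm: IChatRequestViewModel = { id: 'req-1', messageText: 'hello' };
console.log(vm.contextUsagePercentage === undefined); // true until computed
vm.contextUsagePercentage = 42; // cached after computeTokenLength resolves
```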

Renderer

  • Injected ILanguageModelsService for token calculation
  • Asynchronously computes token count via computeTokenLength() on first render
  • Triggers re-render via onDidChangeItemHeight when calculation completes (with template validity check)
  • Sets up delayed hover showing "⭕ X% context used"
  • Renders warning badge directly for >80% utilization
  • Clears existing badges before adding new ones to prevent duplicates
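The renderer flow above can be sketched as follows, with `ILanguageModelsService` mocked down to the one method this PR uses. The shapes of `computeTokenLength` and `maxInputTokens` are taken from the PR text; everything else (property names, the callback signature) is an assumption for illustration:

```typescript
// Minimal stand-in for the injected service; the real one lives in the
// chat contrib and does model-specific tokenization.
interface ILanguageModelsService {
	computeTokenLength(modelId: string, text: string): Promise<number>;
}

interface RequestViewModel {
	modelId?: string;
	maxInputTokens: number;
	messageText: string;
	contextUsagePercentage?: number;
}

async function computeContextUsage(
	svc: ILanguageModelsService,
	vm: RequestViewModel,
	onDidChangeItemHeight: () => void,
): Promise<void> {
	// Edge cases from the PR: missing modelId or zero maxInputTokens
	// mean no indicator is rendered at all.
	if (!vm.modelId || vm.maxInputTokens <= 0) { return; }
	try {
		const tokens = await svc.computeTokenLength(vm.modelId, vm.messageText);
		// Cache on the view model so later renders skip the async call.
		vm.contextUsagePercentage = (tokens / vm.maxInputTokens) * 100;
		// Ask the list to re-render this item with the cached value.
		onDidChangeItemHeight();
	} catch {
		// Calculation failures are swallowed; the indicator is simply omitted.
	}
}
```

In the real renderer the `onDidChangeItemHeight` call would also need the template validity check mentioned above, since the list recycles templates between items.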

Styling

  • Added .context-usage-badge with warning validation colors
  • Added .request-hover .context-usage-hover for tooltip styling

Implementation Notes

Token calculation is async and cached in the view model. The percentage is calculated as (tokenCount / model.maxInputTokens) × 100. Only the message text is included in the calculation (not variables/attachments).

Edge cases handled: missing modelId, zero maxInputTokens, template recycling, calculation failures.
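The hover-vs-badge decision implied by the notes above can be reduced to a small pure function. The 80% threshold comes from the PR; the mode names are illustrative:

```typescript
// Decide how to present the indicator for a given cached percentage.
// undefined = calculation pending or failed → render nothing;
// ≤80 → show only in the delayed hover; >80 → always-visible warning badge.
function renderMode(percentage: number | undefined): 'none' | 'hover' | 'badge' {
	if (percentage === undefined) { return 'none'; }
	return percentage > 80 ? 'badge' : 'hover';
}
```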

Original prompt

Plan: Add context utilization indicator to chat requests

Add a context usage percentage indicator to chat request messages that shows on hover (similar to model/multiplier info). When context utilization exceeds 80%, display the indicator automatically without requiring hover.

Steps

  1. Extend IChatRequestViewModel in src/vs/workbench/contrib/chat/common/chatViewModel.ts to include a contextUsagePercentage?: number property that stores the calculated percentage.

  2. Calculate context utilization in renderChatRequest() method of ChatListItemRenderer (src/vs/workbench/contrib/chat/browser/chatListRenderer.ts) by calling ILanguageModelsService.computeTokenLength() for the request message and dividing by the model's maxInputTokens.

  3. Add hover display in the requestHover element to show context percentage (e.g., "⭕ 1% context used") using the existing hoverService.setupDelayedHover() pattern similar to how chatAgentHover is implemented.

  4. Add conditional badge rendering that displays the context indicator directly on the message (without hover) when contextUsagePercentage > 80 (the percentage is stored on a 0–100 scale), following the pattern from attachment warnings in src/vs/workbench/contrib/chat/browser/chatAttachmentModel.ts.

  5. Add CSS styling in src/vs/workbench/contrib/chat/browser/media/chatListRenderer.css for .context-usage-badge (always-visible >80% warning) and .context-usage-hover (hover tooltip) classes.
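Steps 4–5, together with the duplicate-clearing behavior noted in the Changes section, can be sketched without the DOM by modeling badges as plain data. The `Badge` shape is hypothetical; the class name matches the CSS added in this PR:

```typescript
// Hypothetical badge model for sketching the clear-then-add behavior.
interface Badge { className: string; text: string; }

function updateBadges(existing: Badge[], percentage: number): Badge[] {
	// Clear any stale context-usage badge left over from template
	// recycling, so re-renders never produce duplicates.
	const kept = existing.filter(b => b.className !== 'context-usage-badge');
	// Only high utilization gets an always-visible warning badge.
	if (percentage > 80) {
		kept.push({
			className: 'context-usage-badge',
			text: `⭕ ${Math.round(percentage)}% context used`,
		});
	}
	return kept;
}
```

Applying `updateBadges` twice with the same percentage yields the same single badge, which is the invariant the duplicate-clearing is there to protect.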

Further Considerations

  1. Performance optimization: Should token counting happen during request creation (in chat service) or lazily during render? Computing upfront and caching in the view model would prevent async delays during rendering.

  2. Progress indicator style: Should this use a circular progress icon (similar to loading spinners), a simple text badge, or leverage existing codicon circular progress? What color scheme for <80% vs >80% thresholds?

  3. Token calculation scope: Should this count only the message text, or include variables/attachments? The current computeTokenLength() API supports both string and IChatMessage types.



Copilot AI changed the title [WIP] Add context utilization indicator to chat requests Add context utilization indicator to chat request messages Nov 7, 2025
Copilot AI requested a review from pierceboggan November 7, 2025 04:55
Copilot finished work on behalf of pierceboggan November 7, 2025 04:55
@TylerLeonhardt TylerLeonhardt added the ~copilot-video-please ⚠️WIP🏗️ Use in PRs. Runs Copilot CLI agent to record a video of "trying out the change" label Nov 7, 2025
@vs-code-engineering vs-code-engineering bot removed the ~copilot-video-please ⚠️WIP🏗️ Use in PRs. Runs Copilot CLI agent to record a video of "trying out the change" label Nov 7, 2025
@vs-code-engineering

🤖 Copilot has tried out this change! Members of the team can view the recording here or download the Playwright trace here (run)
