
Conversation

@double-fault

Fixes #1813, #1896.

A lot of the code already existed but was incomplete: tracking of multiple leaders was already there, but there was no extrapolation based on time running.

There were multiple groups in #1896, and the later groups always returned zero because the first group was pinned.
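For context: when the kernel multiplexes event groups, it also reports how long each event was enabled versus how long it actually ran on the PMU, and the raw count has to be scaled up by that ratio. Below is a minimal sketch of that extrapolation, assuming the event fd was opened with `PERF_FORMAT_TOTAL_TIME_ENABLED | PERF_FORMAT_TOTAL_TIME_RUNNING`; the `ReadFormat`/`ScaleCounter` names are illustrative, not the names used in this patch.

```cpp
#include <cstdint>

// Layout returned by read(2) on a perf event fd opened with
// PERF_FORMAT_TOTAL_TIME_ENABLED | PERF_FORMAT_TOTAL_TIME_RUNNING.
// (Illustrative struct, not the one used in the patch.)
struct ReadFormat {
  uint64_t value;         // raw counter value
  uint64_t time_enabled;  // ns the event was enabled
  uint64_t time_running;  // ns the event was actually scheduled on the PMU
};

// Extrapolate a multiplexed counter to the full enabled window.
// When the kernel multiplexes groups, time_running < time_enabled and the
// raw value only covers part of the run, so it must be scaled up.
inline uint64_t ScaleCounter(const ReadFormat& r) {
  if (r.time_running == 0) return 0;                     // never scheduled
  if (r.time_running >= r.time_enabled) return r.value;  // no multiplexing
  double ratio = static_cast<double>(r.time_enabled) /
                 static_cast<double>(r.time_running);
  return static_cast<uint64_t>(static_cast<double>(r.value) * ratio);
}
```

Without this scaling, a group that never gets scheduled because another group is pinned just reports its raw value of zero, which is exactly what #1896 was seeing.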

@google-cla

google-cla bot commented Nov 28, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@double-fault
Author

@dmah42 @LebedevRI (or whoever the maintainer is) ping for review. I think this is a pretty important bug, especially since there is no warning that multiplexing is not happening, which leads to issues like #1896, where I remember spending a lot of time on an unrelated program confused about why the counters were returning zero.

If this patch is not suitable, or there is no one available to review it, it would be nice if a warning about the missing multiplexing could at least be added somewhere.

@dmah42
Member

dmah42 commented Jan 10, 2026

I'm not an expert in the perf counter stuff but I'm going to take a look a bit later today

@dmah42
Member

dmah42 commented Jan 11, 2026

i think this looks good. i've approved but maybe @LebedevRI wants to also weigh in.

@LebedevRI
Collaborator

I'm sorry, I'm unfamiliar with this code. The change looks fairly simple.
Presumably the author has tested this extensively and found no issues?

@double-fault
Author

@LebedevRI Yes, the changes are fairly simple and I have tested them to some extent: I have run the patch on programs with a lot of counters (such as the program I described in #1896) and the values seemed to make sense, but I have not checked the accuracy of the reported values in any objective way. I am not aware of an easy method to do this; if there is one, let me know and I can try it.

From what I remember, none of the unit tests check for the accuracy of the actual values either (as I said, I'm not sure if there is a proper way to do this).

@double-fault
Author

@LebedevRI ping

@LebedevRI
Collaborator

> @LebedevRI ping

I've already provided all the feedback that i have.



Development

Successfully merging this pull request may close these issues.

[BUG] Perf counters not multiplexed
