Description
What version were you using?
prometheus-nats-exporter 0.16.0
nats 2.10.25-alpine
What environment was the server running in?
Kubernetes, via the NATS Helm chart.
Is this defect reproducible?
Yes
Given the capability you are leveraging, describe your expectation?
I'd expect the varz metrics to be correctly categorized as counters/gauges.
Given the expectation, what is the defect you are observing?
The scrape output for nats_varz_slow_consumers shows that the slow_consumers value from varz is exposed as a gauge:
# HELP nats_varz_slow_consumers slow_consumers
# TYPE nats_varz_slow_consumers gauge
nats_varz_slow_consumers{server_id="<id>"} 0
Looking at the upstream nats-server code, this value is only ever incremented, never decremented or reset, so it behaves like a counter.
Looking at the code here, I don't see any special handling for varz like there is for accstatz with newAccstatzCollector; for accstatz, explicit metrics are defined and parsed. From what I can tell, inside NewCollector the varz case falls through to the newNatsCollector function, which creates the Prometheus metrics as gauges inside objectToMetrics.
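For illustration, a minimal sketch of that generic pattern (an assumption on my part, not the exporter's actual objectToMetrics code): without per-field type information, every numeric field from the polled endpoint ends up as a gauge.

```go
// Sketch only: genericCollect and its signature are illustrative assumptions,
// not the exporter's existing code.
package collector

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// genericCollect emits every numeric field as a gauge. Since there is no
// per-field type information, monotonically increasing fields such as
// slow_consumers get typed as gauges as well.
func genericCollect(ch chan<- prometheus.Metric, serverID string, fields map[string]float64) {
	for name, value := range fields {
		desc := prometheus.NewDesc(
			fmt.Sprintf("nats_varz_%s", name),
			name, // help text mirrors the field name, as in the scrape output above
			[]string{"server_id"},
			nil,
		)
		ch <- prometheus.MustNewConstMetric(desc, prometheus.GaugeValue, value, serverID)
	}
}
```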
It seems the solution could be to create a varzCollector analogous to the accstatz one, with explicit metric definitions and types; a rough sketch of what that could look like is below. I would be happy to help contribute this. Thanks!
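All names and signatures in this sketch are assumptions modeled on the accstatz approach, not existing exporter code; the point is simply that an explicit collector can declare slow_consumers with prometheus.CounterValue.

```go
// Sketch only: varzCollector, newVarzCollector, and collectVarz are assumed
// names, not the exporter's existing API.
package collector

import "github.com/prometheus/client_golang/prometheus"

type varzCollector struct {
	slowConsumers *prometheus.Desc
}

func newVarzCollector() *varzCollector {
	return &varzCollector{
		slowConsumers: prometheus.NewDesc(
			"nats_varz_slow_consumers",
			"Total number of slow consumers detected by the server",
			[]string{"server_id"},
			nil,
		),
	}
}

func (c *varzCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.slowConsumers
}

// A real collector would implement prometheus.Collector and read the polled
// /varz payload; the value is passed directly here for brevity. The key change
// from the generic path is the explicit prometheus.CounterValue type.
func (c *varzCollector) collectVarz(ch chan<- prometheus.Metric, serverID string, slowConsumers float64) {
	ch <- prometheus.MustNewConstMetric(c.slowConsumers, prometheus.CounterValue, slowConsumers, serverID)
}
```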