Component(s)
No response
What happened?
Description
When the Service is configured with an ipFamilyPolicy of PreferDualStack or RequireDualStack and both an IPv4 and an IPv6 address are assigned, the Target Allocator generates two targets, because there are two distinct EndpointSlices (one per address family) even though both reference the same Service.
As a result, every metric is scraped twice, duplicating metrics and wasting collector resources.
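To illustrate the mechanism: for a dual-stack Service, Kubernetes creates one EndpointSlice per address family, so any discovery that turns every slice endpoint into a target sees the same pod twice. A minimal sketch of the two slices (names are placeholders; addresses taken from the log output below):

    apiVersion: discovery.k8s.io/v1
    kind: EndpointSlice
    metadata:
      name: kube-state-metrics-ipv4-xxxxx   # placeholder name
      labels:
        kubernetes.io/service-name: opentelemetry-kube-stack-kube-state-metrics
    addressType: IPv4
    endpoints:
      - addresses: ["10.233.12.197"]
    ports:
      - name: http
        port: 8080
    ---
    apiVersion: discovery.k8s.io/v1
    kind: EndpointSlice
    metadata:
      name: kube-state-metrics-ipv6-xxxxx   # placeholder name
      labels:
        kubernetes.io/service-name: opentelemetry-kube-stack-kube-state-metrics
    addressType: IPv6
    endpoints:
      - addresses: ["2600:xxxx:xxxx:b307:48::8a7b"]
    ports:
      - name: http
        port: 8080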
Steps to Reproduce
Use the following values.yaml configuration for the opentelemetry-kube-stack chart:
collectors:
  daemon:
    scrape_configs_file: ""
    targetAllocator:
      enabled: true
      image: ghcr.io/open-telemetry/opentelemetry-operator/target-allocator:main
      allocationStrategy: per-node
      prometheusCR:
        enabled: true
        podMonitorSelector: {}
        scrapeInterval: "30s"
        serviceMonitorSelector: {}
    presets:
      logsCollection:
        enabled: true
      kubeletMetrics:
        enabled: true
      hostMetrics:
        enabled: false
      kubernetesAttributes:
        enabled: true
      kubernetesEvents:
        enabled: false
      clusterMetrics:
        enabled: false
    config:
      receivers:
        prometheus: {}
      exporters:
        otlp/tempo:
          endpoint: lgtm-tempo-gateway.monitoring.svc.cluster.local:4317
          tls:
            insecure: true
        otlphttp/mimir:
          endpoint: http://lgtm-mimir-gateway.monitoring.svc.cluster.local/otlp
          headers:
            X-Scope-OrgID: homelab-k8s
          tls:
            insecure: true
        otlphttp/loki:
          endpoint: http://lgtm-loki-gateway.monitoring.svc/otlp
          headers:
            X-Scope-OrgID: homelab-k8s
          tls:
            insecure: true
      service:
        pipelines:
          traces:
            exporters:
              - otlp/tempo
          metrics:
            receivers:
              - otlp
              - prometheus
            exporters:
              - otlphttp/mimir
          logs:
            exporters:
              - otlphttp/loki

kubeStateMetrics:
  enabled: true

kube-state-metrics:
  service:
    ipDualStack:
      enabled: true
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: karpenter.sh/nodepool
                operator: In
                values: ["system"]
  tolerations:
    - key: node-role.kubernetes.io/system
      operator: Exists
      effect: NoSchedule
  resources:
    requests:
      cpu: 10m
      memory: 100Mi
    limits:
      memory: 100Mi
  priorityClassName: platform-critical
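To confirm that the duplication maps to the address-family split, the slices behind the Service can be listed by their owning-service label (standard kubectl; the namespace is taken from the describe output below):

    kubectl get endpointslices -n opentelemetry-operator-system \
      -l kubernetes.io/service-name=opentelemetry-kube-stack-kube-state-metrics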
Expected Result
The job serviceMonitor/opentelemetry-operator-system/opentelemetry-kube-stack-kube-state-metrics/0 should show only one target for the service, even when both IPv4 and IPv6 endpoint slices exist.
Actual Result
The job shows two targets for the service, one per EndpointSlice, so the same pod is scraped twice.
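A possible interim workaround, assuming the generated scrape config uses the Prometheus endpointslice discovery role (which exposes __meta_kubernetes_endpointslice_address_type), is to keep only one address family via relabeling on the ServiceMonitor; a sketch with placeholder names, not the chart's actual ServiceMonitor:

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: kube-state-metrics-ipv4-only   # placeholder; the real ServiceMonitor is chart-managed
      namespace: opentelemetry-operator-system
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/name: kube-state-metrics
      endpoints:
        - port: http
          relabelings:
            # Keep only IPv4 endpoints so each pod yields a single target.
            - sourceLabels: [__meta_kubernetes_endpointslice_address_type]
              regex: IPv4
              action: keep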
Kubernetes Version
v1.34.0
Operator version
0.136.0
Collector version
0.136.0
Environment information
Environment
OS: 6.12.52-talos
opentelemetry-kube-stack chart version: 0.11.1
Log output
NAME                                          TYPE        CLUSTER-IP                  EXTERNAL-IP   PORT(S)    AGE
opentelemetry-kube-stack-kube-state-metrics   ClusterIP   fd85:ee78:d8a6:8607::1eee   <none>        8080/TCP   5h52m

NAME                                                ADDRESSTYPE   PORTS   ENDPOINTS                      AGE
opentelemetry-kube-stack-kube-state-metrics-tl2q7   IPv6          8080    2600:xxxx:xxxx:b307:48::8a7b   5h52m
opentelemetry-kube-stack-kube-state-metrics-tzk4w   IPv4          8080    10.233.12.197                  5h52m
Name: opentelemetry-kube-stack-kube-state-metrics
Namespace: opentelemetry-operator-system
Labels: app.kubernetes.io/component=metrics
app.kubernetes.io/instance=opentelemetry-kube-stack
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=kube-state-metrics
app.kubernetes.io/part-of=kube-state-metrics
app.kubernetes.io/version=2.17.0
helm.sh/chart=kube-state-metrics-6.3.0
release=opentelemetry-kube-stack
Annotations: argocd.argoproj.io/tracking-id: opentelemetry-kube-stack:/Service:opentelemetry-operator-system/opentelemetry-kube-stack-kube-state-metrics
prometheus.io/scrape: true
Selector: app.kubernetes.io/instance=opentelemetry-kube-stack,app.kubernetes.io/name=kube-state-metrics
Type: ClusterIP
IP Family Policy: PreferDualStack
IP Families: IPv6,IPv4
IP: fd85:ee78:d8a6:8607::1eee
IPs: fd85:ee78:d8a6:8607::1eee,10.233.118.252
Port: http 8080/TCP
TargetPort: http/TCP
Endpoints: [2600:xxxx:xxxx:b307:48::8a7b]:8080,10.233.12.197:8080
Session Affinity: None
Internal Traffic Policy: Cluster
Events: <none>
Additional context
No response