Description
Hey folks!
Context
We have a use case where we are running CAPI v1.10 from within a workload cluster. In this case, CAPI serves the purpose of reconciling MachinePool objects.
We're getting 2 errors that I believe are related to one another:
```
E1021 04:49:00.092149       1 cluster_accessor.go:262] "Connect failed" err="error creating REST config: error getting kubeconfig secret: Secret \"<cluster-name>-kubeconfig\" not found" controller="clustercache" controllerGroup="cluster.x-k8s.io" controllerKind="Cluster"
E1021 04:49:17.574100       1 controller.go:347] "Reconciler error" err="[error getting client: connection to the workload cluster is down, error creating watch machinepool-watchNodes for *v1.Node: connection to the workload cluster is down]" controller="machinepool" controllerGroup="cluster.x-k8s.io" controllerKind="MachinePool"
```
Question
The issue is that we aren't creating a kubeconfig secret inside the workload cluster.
Ideally, we'd like CAPI to use the in-cluster configuration (i.e. the local service account) rather than the kubeconfig secret.
From what I understand, even when the controller is running in-cluster, it always fetches the kubeconfig secret and establishes a connection from it first; only then does it switch to the in-cluster config:
- Fetching Secret: https://github.com/kubernetes-sigs/cluster-api/blob/release-1.10/controllers/clustercache/cluster_accessor_client.go#L56
- Switching to in-cluster config: https://github.com/kubernetes-sigs/cluster-api/blob/release-1.10/controllers/clustercache/cluster_accessor_client.go#L82
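For anyone skimming, the order of operations linked above can be sketched roughly like this (function names and signatures are illustrative stand-ins, not the actual CAPI code): the secret lookup always happens first, so when the secret is missing the connect fails before the in-cluster switch is ever reached.

```go
package main

import (
	"errors"
	"fmt"
)

var errSecretNotFound = errors.New(`Secret "<cluster-name>-kubeconfig" not found`)

// restConfigFromSecret stands in for the lookup at
// cluster_accessor_client.go#L56: the kubeconfig Secret is always tried first.
func restConfigFromSecret(secretExists bool) (string, error) {
	if !secretExists {
		return "", errSecretNotFound
	}
	return "rest-config-from-secret", nil
}

// connect mirrors the flow we observe: without the Secret, the accessor
// errors out before the in-cluster switch (#L82) is reached.
func connect(secretExists, runningOnWorkloadCluster bool) (string, error) {
	cfg, err := restConfigFromSecret(secretExists)
	if err != nil {
		return "", fmt.Errorf("error creating REST config: error getting kubeconfig secret: %w", err)
	}
	if runningOnWorkloadCluster {
		// Only reachable once the secret-based config succeeded.
		return "in-cluster-config", nil
	}
	return cfg, nil
}

func main() {
	if _, err := connect(false, true); err != nil {
		fmt.Println("without secret:", err)
	}
	cfg, _ := connect(true, true)
	fmt.Println("with secret:", cfg)
}
```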
From what I can see, there's no flag or argument that controls this behavior. We were wondering whether we're missing something, or how others may have worked around this.