Network policy fix for HyperShift #2157
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
OlivierCazade
left a comment
LGTM
Codecov Report
✅ All modified and coverable lines are covered by tests.
Additional details and impacted files:
@@            Coverage Diff             @@
##             main    #2157      +/-   ##
==========================================
+ Coverage   73.02%   73.05%   +0.02%
==========================================
  Files          80       80
  Lines        9180     9177       -3
==========================================
  Hits         6704     6704
+ Misses       2062     2060       -2
+ Partials      414      413       -1
==========================================
Flags with carried forward coverage won't be shown.
Hmm, this weakens the egress policy considerably.
I will investigate this issue. I think we may want to watch the endpoint IP and add it to the network policy, but that would be more complex. I don't think that IP would change, so maybe we can read it once; although if it does change, that won't work, so we may want to watch that endpoint.
@leandroberetta yes, I asked the OVN folks: restricting on 172.20.0.1:6443 should work, and as you said, we can learn this address by watching the endpoint.
New changes are detected. LGTM label has been removed.
Hi @jotak, good to know. Yes, I implemented a version that first looks for an EndpointSlice and falls back to Endpoints in case the new API is not available.
It's working fine for HyperShift, although I'm not sure how to properly test the case where the IP changes (I'm not sure it's a common case, or even possible). The same applies to IPv6 clusters: the network policy needs a CIDR, so the /32 or /128 range needs to be provided accordingly.
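For illustration, here is a minimal sketch of that lookup, assuming client-go and the default kubernetes Service in the default namespace; the helper name getAPIServerIP is hypothetical and not taken from this PR.

```go
import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getAPIServerIP returns the API server endpoint IP of the "kubernetes"
// Service, preferring the discovery.k8s.io/v1 EndpointSlice API and falling
// back to the legacy core/v1 Endpoints API when EndpointSlices are not
// available. Sketch only; error handling and multi-endpoint cases are simplified.
func getAPIServerIP(ctx context.Context, cl kubernetes.Interface) (string, error) {
	// EndpointSlices of a Service are labeled with kubernetes.io/service-name.
	slices, err := cl.DiscoveryV1().EndpointSlices("default").List(ctx, metav1.ListOptions{
		LabelSelector: "kubernetes.io/service-name=kubernetes",
	})
	if err == nil {
		for _, slice := range slices.Items {
			for _, ep := range slice.Endpoints {
				if len(ep.Addresses) > 0 {
					return ep.Addresses[0], nil
				}
			}
		}
	}
	// Fallback: the legacy Endpoints object is named after the Service.
	eps, err := cl.CoreV1().Endpoints("default").Get(ctx, "kubernetes", metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	for _, subset := range eps.Subsets {
		if len(subset.Addresses) > 0 {
			return subset.Addresses[0].IP, nil
		}
	}
	return "", fmt.Errorf("no API server endpoint address found")
}
```

Watching for changes instead of reading once would follow the same pattern with an informer on the same resources, as discussed above.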
|
@leandroberetta: The following test failed.
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Description
In HyperShift deployments and other external control plane configurations, the flowlogs-pipeline pods were unable to connect to the Kubernetes API server, resulting in timeout errors:

Cannot get dynamic config: Get "https://172.30.0.1:443/api/v1/namespaces/netobserv/configmaps/flowlogs-pipeline-config-dynamic": dial tcp 172.30.0.1:443: i/o timeout
Failed to watch err="failed to list *v1.Pod: Get \"https://172.30.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 172.30.0.1:443: i/o timeout"

In HyperShift deployments, the control plane (API server, etcd, controllers) runs in a separate management cluster, while worker nodes run in a different cluster. This is evident from:
- the kubernetes service endpoint has the label hypershift.openshift.io/managed: "true"
- the default namespace contains no pods serving the API server endpoint

The previous NetworkPolicy only allowed egress traffic to port 6443 when targeting specific in-cluster namespaces (openshift-apiserver, openshift-kube-apiserver) using namespaceSelector with podSelector. However, external control plane endpoints are not represented as pods within the worker cluster, so the namespace selector rules don't apply and traffic is blocked.

NB: this issue is specific to the main branch; release-1.10 is unaffected.
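To make the fix direction concrete, here is a sketch under assumptions rather than the code from this PR: an egress rule that targets the resolved API server IP directly through an ipBlock peer instead of a namespaceSelector/podSelector pair, using the Kubernetes Go types. buildAPIServerEgressRule is a hypothetical helper, and the /32 vs /128 handling reflects the IPv6 point raised in the conversation.

```go
import (
	"strings"

	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// buildAPIServerEgressRule allows TCP 6443 to the resolved API server endpoint
// IP, expressed as a /32 CIDR (or /128 for an IPv6 address). Sketch only.
func buildAPIServerEgressRule(apiServerIP string) networkingv1.NetworkPolicyEgressRule {
	port := intstr.FromInt(6443)
	proto := corev1.ProtocolTCP
	cidr := apiServerIP + "/32"
	if strings.Contains(apiServerIP, ":") {
		cidr = apiServerIP + "/128" // IPv6 address
	}
	return networkingv1.NetworkPolicyEgressRule{
		To: []networkingv1.NetworkPolicyPeer{
			{IPBlock: &networkingv1.IPBlock{CIDR: cidr}},
		},
		Ports: []networkingv1.NetworkPolicyPort{
			{Protocol: &proto, Port: &port},
		},
	}
}
```

An ipBlock peer applies regardless of whether the destination is backed by in-cluster pods, which is what makes it usable for an external control plane.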
Dependencies
n/a
Checklist
If you are not familiar with our processes or don't know what to answer in the list below, let us know in a comment: the maintainers will take care of that.