Commit 986037a

athavr and Raj Athavale authored
update: YAML file explanations: Networking labs (#1586)
Co-authored-by: Raj Athavale <[email protected]>
1 parent 6173a41 commit 986037a

10 files changed (+118 / -89 lines)

manifests/modules/networking/network-policies/apply-network-policies/example-network-policy.yaml

Lines changed: 34 additions & 0 deletions

````diff
@@ -0,0 +1,34 @@
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: test-network-policy
+  namespace: default
+spec:
+  podSelector:
+    matchLabels:
+      role: db
+  policyTypes:
+    - Ingress
+    - Egress
+  ingress:
+    - from:
+        - ipBlock:
+            cidr: 172.17.0.0/16
+            except:
+              - 172.17.1.0/24
+        - namespaceSelector:
+            matchLabels:
+              project: myproject
+        - podSelector:
+            matchLabels:
+              role: frontend
+      ports:
+        - protocol: TCP
+          port: 6379
+  egress:
+    - to:
+        - ipBlock:
+            cidr: 10.0.0.0/24
+      ports:
+        - protocol: TCP
+          port: 5978
````

website/docs/networking/eks-hybrid-nodes/connect-hybrid-node.md

Lines changed: 20 additions & 8 deletions

````diff
@@ -22,12 +22,13 @@ $ export ACTIVATION_ID=$(echo $ACTIVATION_JSON | jq -r ".ActivationId")
 $ export ACTIVATION_CODE=$(echo $ACTIVATION_JSON | jq -r ".ActivationCode")
 ```
 
-With our activation created, we can now create a `nodeconfig.yaml` which will be
-referenced when we join our instance to the cluster. This utilizes the SSM
-`ACTIVATION_CODE` and `ACTIVATION_ID` created in the previous step as well as
-the `EKS_CLUSTER_NAME` name and `AWS_REGION` environment variables.
+With our activation created, we can now create a `NodeConfig` which will be
+referenced when we join our instance to the cluster.
 
-::yaml{file="manifests/modules/networking/eks-hybrid-nodes/nodeconfig.yaml"}
+::yaml{file="manifests/modules/networking/eks-hybrid-nodes/nodeconfig.yaml" paths="spec.cluster,spec.hybrid.ssm"}
+
+1. Specify the target EKS cluster `name` and `region` using the `$EKS_CLUSTER_NAME` and `$AWS_REGION` environment variables
+2. Specify the SSM `activationCode` and `activationId` by using the `$ACTIVATION_CODE` and `$ACTIVATION_ID` environment variables created in the previous step
 
 ```bash
 $ cat ~/environment/eks-workshop/modules/networking/eks-hybrid-nodes/nodeconfig.yaml \
@@ -73,7 +74,20 @@ Great! The node appears but with a `NotReady` status. This is because we must in
 $ helm repo add cilium https://helm.cilium.io/
 ```
 
-With the repo added, we can install Cilium using the configuration provided below.
+Next, let us look at the configuration values we will provide as input to the Cilium helm chart:
+
+::yaml{file="manifests/modules/networking/eks-hybrid-nodes/cilium-values.yaml" paths="affinity.nodeAffinity,ipam.mode,ipam.operator.clusterPoolIPv4MaskSize,ipam.operator.clusterPoolIPv4PodCIDRList,operator.replicas,operator.affinity,operator.unmanagedPodWatcher.restart,envoy.enabled"}
+
+1. This `affinity.nodeAffinity` configuration targets nodes by `eks.amazonaws.com/compute-type` and ensures that the main CNI daemonset pods that handle networking on each node only run on `hybrid` nodes
+2. Set `ipam.mode` to `cluster-pool` to use cluster-wide IP pool for pod IP allocation
+3. Set `clusterPoolIPv4MaskSize: 25` to specify `/25` subnets allocated per node (128 IP addresses)
+4. Set `clusterPoolIPv4PodCIDRList` to `10.53.0.0/16` to specify the dedicated CIDR for the hybrid node pods
+5. Set `replicas: 1` to specify a single instance of the operator will run
+6. This `affinity.nodeAffinity` configuration targets nodes by `eks.amazonaws.com/compute-type` and ensures that the main CNI operator pods that manage the CNI configuration on each node only run on `hybrid` nodes
+7. Set `unmanagedPodWatcher.restart: false` to disable pod restart watching
+8. Set `envoy.enabled: false` to disable Envoy proxy integration
+
+Let us install Cilium using this configuration.
 
 ```bash timeout=300 wait=30
 $ helm install cilium cilium/cilium \
@@ -83,8 +97,6 @@ $ helm install cilium cilium/cilium \
   --values ~/environment/eks-workshop/modules/networking/eks-hybrid-nodes/cilium-values.yaml
 ```
 
-::yaml{file="manifests/modules/networking/eks-hybrid-nodes/cilium-values.yaml"}
-
 After installing Cilium our Hybrid Node should come up, happy and healthy.
 
 ```bash timeout=300 wait=30
````
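For orientation, the two `NodeConfig` annotations above correspond to a manifest along these lines — a minimal sketch assuming the nodeadm `node.eks.aws/v1alpha1` API; the actual `nodeconfig.yaml` in the module is authoritative and may contain additional settings:

```yaml
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: $EKS_CLUSTER_NAME # target EKS cluster name
    region: $AWS_REGION # AWS region of the cluster
  hybrid:
    ssm:
      activationCode: $ACTIVATION_CODE # from the SSM activation created earlier
      activationId: $ACTIVATION_ID
```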

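Similarly, the eight Cilium annotations suggest a `cilium-values.yaml` roughly like the sketch below; the node-affinity expression is inferred from the annotation text, and the real file may structure these values differently:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: eks.amazonaws.com/compute-type
              operator: In
              values:
                - hybrid # run the Cilium agent only on hybrid nodes
ipam:
  mode: cluster-pool # cluster-wide pool for pod IP allocation
  operator:
    clusterPoolIPv4MaskSize: 25 # /25 per node = 128 addresses
    clusterPoolIPv4PodCIDRList:
      - 10.53.0.0/16 # dedicated CIDR for hybrid node pods
operator:
  replicas: 1 # single operator instance
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: eks.amazonaws.com/compute-type
                operator: In
                values:
                  - hybrid # run the operator only on hybrid nodes
  unmanagedPodWatcher:
    restart: false # do not restart pods Cilium does not manage
envoy:
  enabled: false # no Envoy proxy integration
```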
website/docs/networking/vpc-cni/network-policies/debug.md

Lines changed: 5 additions & 4 deletions

````diff
@@ -7,11 +7,12 @@ Till now, we were able to apply network policies without issues or errors. But w
 
 Amazon VPC CNI provides logs that can be used to debug issues while implementing networking policies. In addition, you can monitor these logs through services such as Amazon CloudWatch, where you can leverage CloudWatch Container Insights that can help you provide insights on your usage related to NetworkPolicy.
 
-Now, let us try implementing an ingress network policy that will restrict access to the orders' service component from 'ui' component only, similar to what we did earlier with the 'catalog' service component..
+Now, let us try implementing an ingress network policy that will restrict access to the orders' service component from 'ui' component only, similar to what we did earlier with the 'catalog' service component.
 
-```file
-manifests/modules/networking/network-policies/apply-network-policies/allow-order-ingress-fail-debug.yaml
-```
+::yaml{file="manifests/modules/networking/network-policies/apply-network-policies/allow-order-ingress-fail-debug.yaml" paths="spec.podSelector,spec.ingress.0.from.0"}
+
+1. The `podSelector` targets pods with labels `app.kubernetes.io/name: orders` and `app.kubernetes.io/component: service`
+2. The `ingress.from` allows inbound connections only from pods with the label `app.kubernetes.io/name: ui`
 
 Lets apply this policy:
````
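Interpreting the two annotations, the referenced policy is roughly the sketch below. The `metadata` values are assumptions, and since this manifest is deliberately written to misbehave (hence `fail-debug` in its name), the real file likely scopes its selectors differently:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-order-ingress # hypothetical name
  namespace: orders # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: orders
      app.kubernetes.io/component: service
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: ui
```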

website/docs/networking/vpc-cni/network-policies/egress.md

Lines changed: 8 additions & 9 deletions

````diff
@@ -7,9 +7,10 @@ sidebar_position: 70
 
 As shown in the above architecture diagram, the 'ui' component is the front-facing app. So we can start implementing our network controls for the 'ui' component by defining a network policy that will block all egress traffic from the 'ui' namespace.
 
-```file
-manifests/modules/networking/network-policies/apply-network-policies/default-deny.yaml
-```
+::yaml{file="manifests/modules/networking/network-policies/apply-network-policies/default-deny.yaml" paths="spec.podSelector,spec.policyTypes"}
+
+1. The empty selector `{}` matches all pods
+2. The `Egress` policy type controls outbound traffic from pods
 
 > **Note** : There is no namespace specified in the network policy, as it is a generic policy that can potentially be applied to any namespace in our cluster.
 
@@ -38,14 +39,12 @@ Implementing the above policy will also cause the sample application to no longe
 
 In the case of the 'ui' component, it needs to communicate with all the other service components, such as 'catalog', 'orders, etc. Apart from this, 'ui' will also need to be able to communicate with components in the cluster system namespaces. For example, for the 'ui' component to work, it needs to be able to perform DNS lookups, which requires it to communicate with the CoreDNS service in the `kube-system` namespace.
 
-The network policy below was designed with the above requirements in mind. It has two key sections:
+The network policy below was designed with the above requirements in mind.
 
-- The first section focuses on allowing egress traffic to all service components such as 'catalog', 'orders' etc. without providing access to the database components through a combination of namespaceSelector, which allows for egress traffic to any namespace as long as the pod labels match "app.kubernetes.io/component: service".
-- The second section focuses on allowing egress traffic to all components in the kube-system namespace, which enables DNS lookups and other key communications with the components in the system namespace.
+::yaml{file="manifests/modules/networking/network-policies/apply-network-policies/allow-ui-egress.yaml" paths="spec.egress.0.to.0,spec.egress.0.to.1"}
 
-```file
-manifests/modules/networking/network-policies/apply-network-policies/allow-ui-egress.yaml
-```
+1. The first egress rule focuses on allowing egress traffic to all `service` components such as 'catalog', 'orders' etc. (without providing access to the database components), along with the `namespaceSelector` which allows for egress traffic to any namespace as long as the pod labels match `app.kubernetes.io/component: service`
+2. The second egress rule focuses on allowing egress traffic to all components in the `kube-system` namespace, which enables DNS lookups and other key communications with the components in the system namespace
 
 Lets apply this additional policy:
````
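The `default-deny.yaml` annotations describe the canonical deny-all-egress policy; a minimal sketch (the `metadata.name` is an assumption, and per the note in the diff the policy carries no namespace):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny # hypothetical name; no namespace, so it can be applied anywhere
spec:
  podSelector: {} # empty selector matches every pod in the target namespace
  policyTypes:
    - Egress # declaring Egress with no egress rules denies all outbound traffic
```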

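The two `allow-ui-egress.yaml` annotations (both `to` entries live under a single egress rule, per the `spec.egress.0.to.*` paths) point to something like this sketch; the name and namespace are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ui-egress # hypothetical name
  namespace: ui # hypothetical namespace
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        # to[0]: pods labeled as a 'service' component, in any namespace
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              app.kubernetes.io/component: service
        # to[1]: everything in kube-system, which allows CoreDNS lookups
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
```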
website/docs/networking/vpc-cni/network-policies/index.md

Lines changed: 7 additions & 43 deletions

````diff
@@ -18,51 +18,15 @@ $ prepare-environment networking/network-policies
 
 By default, Kubernetes allows all pods to freely communicate with each other with no restrictions. Kubernetes Network Policies enable you to define and enforce rules on the flow of traffic between pods, namespaces, and IP blocks (CIDR ranges). They act as a virtual firewall, allowing you to segment and secure your cluster by specifying ingress (incoming) and egress (outgoing) network traffic rules based on various criteria such as pod labels, namespaces, IP addresses, and ports.
 
-Below is an example of a network policy,
+Below is an example network policy with an explanation of some key elements:
 
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: test-network-policy
-  namespace: default
-spec:
-  podSelector:
-    matchLabels:
-      role: db
-  policyTypes:
-    - Ingress
-    - Egress
-  ingress:
-    - from:
-        - ipBlock:
-            cidr: 172.17.0.0/16
-            except:
-              - 172.17.1.0/24
-        - namespaceSelector:
-            matchLabels:
-              project: myproject
-        - podSelector:
-            matchLabels:
-              role: frontend
-      ports:
-        - protocol: TCP
-          port: 6379
-  egress:
-    - to:
-        - ipBlock:
-            cidr: 10.0.0.0/24
-      ports:
-        - protocol: TCP
-          port: 5978
-```
-
-The network policy specification contains the following key segments:
+::yaml{file="manifests/modules/networking/network-policies/apply-network-policies/example-network-policy.yaml" paths="metadata,spec.podSelector,spec.policyTypes,spec.ingress,spec.egress" title="example-network-policy.yaml"}
 
-- **metadata**: similar to other Kubernetes objects, it allows you to specify the name and namespace for the given network policy.
-- **spec.podSelector**: allows for the selection of specific pods based on their labels within the namespace to which the given network policy will be applied. If an empty pod selector or matchLabels is specified in the specification, then the policy will be applied to all the pods within the namespace.
-- **spec.policyTypes**: specifies whether the policy will be applied to ingress traffic, egress traffic, or both for the selected pods. If you do not specify this field, then the default behavior is to apply the network policy to ingress traffic only, unless the network policy has an egress section, in which case the network policy will be applied to both ingress and egress traffic.
-- **ingress**: allows for ingress rules to be configured that specify from which pods (podSelector), namespace (namespaceSelector), or CIDR range (ipBlock) traffic is allowed to the selected pods and which port or port range can be used. If a port or port range is not specified, any port can be used for communication.
+1. Similar to other Kubernetes objects, `metadata` allows you to specify the name and namespace for the given network policy
+2. `spec.podSelector` allows for the selection of specific pods based on their labels within the namespace to which the given network policy will be applied. If an empty pod selector or matchLabels is specified in the specification, then the policy will be applied to all the pods within the namespace.
+3. `spec.policyTypes` specifies whether the policy will be applied to ingress traffic, egress traffic, or both for the selected pods. If you do not specify this field, then the default behavior is to apply the network policy to ingress traffic only, unless the network policy has an egress section, in which case the network policy will be applied to both ingress and egress traffic.
+4. `ingress` allows for ingress rules to be configured that specify from which pods (`podSelector`), namespace (`namespaceSelector`), or CIDR range (`ipBlock`) traffic is allowed to the selected pods and which port or port range can be used. If a port or port range is not specified, any port can be used for communication.
+5. `egress` allows for egress rules to be configured that specify to which pods (`podSelector`), namespace (`namespaceSelector`), or CIDR range (`ipBlock`) traffic is allowed from the selected pods and which port or port range can be used. If a port or port range is not specified, any port can be used for communication.
 
 For more information about what capabilities are allowed or restricted for Kubernetes network policies, refer to the [Kubernetes docs](https://kubernetes.io/docs/concepts/services-networking/network-policies/).
````

website/docs/networking/vpc-cni/network-policies/ingress.md

Lines changed: 8 additions & 6 deletions

````diff
@@ -41,9 +41,10 @@ $ kubectl exec deployment/orders -n orders -- curl -v catalog.catalog/health --c
 
 Now, we'll define a network policy that will allow traffic to the 'catalog' service component only from the 'ui' component:
 
-```file
-manifests/modules/networking/network-policies/apply-network-policies/allow-catalog-ingress-webservice.yaml
-```
+::yaml{file="manifests/modules/networking/network-policies/apply-network-policies/allow-catalog-ingress-webservice.yaml" paths="spec.podSelector,spec.ingress.0.from.0"}
+
+1. The `podSelector` targets pods with labels `app.kubernetes.io/name: catalog` and `app.kubernetes.io/component: service`
+2. This `ingress.from` configuration allows inbound connections only from pods running in the `ui` namespace identified by `kubernetes.io/metadata.name: ui` with label `app.kubernetes.io/name: ui`
 
 Lets apply the policy:
 
@@ -82,9 +83,10 @@ As you could see from the above outputs, only the 'ui' component is able to comm
 
 But this still leaves the 'catalog' database component open, so let us implement a network policy to ensure only the 'catalog' service component alone can communicate with the 'catalog' database component.
 
-```file
-manifests/modules/networking/network-policies/apply-network-policies/allow-catalog-ingress-db.yaml
-```
+::yaml{file="manifests/modules/networking/network-policies/apply-network-policies/allow-catalog-ingress-db.yaml" paths="spec.podSelector,spec.ingress.0.from.0"}
+
+1. The `podSelector` targets pods with labels `app.kubernetes.io/name: catalog` and `app.kubernetes.io/component: mysql`
+2. The `ingress.from` allows inbound connections only from pods with labels `app.kubernetes.io/name: catalog` and `app.kubernetes.io/component: service`
 
 Lets apply the policy:
````
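Read together, the first pair of annotations implies a policy like this sketch (the metadata values are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-catalog-ingress-webservice # hypothetical name
  namespace: catalog # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: catalog
      app.kubernetes.io/component: service
  ingress:
    - from:
        # only 'ui' pods, and only from the 'ui' namespace
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ui
          podSelector:
            matchLabels:
              app.kubernetes.io/name: ui
```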

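The second pair of annotations, for `allow-catalog-ingress-db.yaml`, implies a sketch along these lines (metadata values again assumed):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-catalog-ingress-db # hypothetical name
  namespace: catalog # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: catalog
      app.kubernetes.io/component: mysql
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: catalog
              app.kubernetes.io/component: service
```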
website/docs/networking/vpc-cni/prefix/consume.md

Lines changed: 5 additions & 4 deletions

````diff
@@ -5,11 +5,12 @@ sidebar_position: 40
 
 To demonstrate VPC CNI behavior of adding additional prefixes to our worker nodes, we'll deploy pause pods to utilize more IP addresses than are currently assigned. We're utilizing a large number of these pods to simulate the addition of application pods in to the cluster either through deployments or scaling operations.
 
-```file
-manifests/modules/networking/prefix/deployment-pause.yaml
-```
+::yaml{file="manifests/modules/networking/prefix/deployment-pause.yaml" paths="spec.replicas,spec.template.spec.containers.0.image"}
+
+1. Creates 150 identical pods
+2. Set the image to `registry.k8s.io/pause` which provides a lightweight container that consumes minimal resources
 
-This will spin up `150 pods` and may take some time:
+Apply the pause pod deployment and wait for it to be ready. It may take some time to spin up the `150 pods`:
 
 ```bash
 $ kubectl apply -k ~/environment/eks-workshop/modules/networking/prefix
````
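The two annotations describe a deployment along the lines of this sketch; the name, labels, and selector are assumptions, and the real `deployment-pause.yaml` may pin a specific `pause` image tag:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pause-pods # hypothetical name
spec:
  replicas: 150 # enough pods to force additional prefixes onto the nodes
  selector:
    matchLabels:
      app: pause-pods
  template:
    metadata:
      labels:
        app: pause-pods
    spec:
      containers:
        - name: pause
          image: registry.k8s.io/pause # minimal container that simply sleeps
```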

website/docs/networking/vpc-cni/security-groups-for-pods/add-sg.md

Lines changed: 4 additions & 3 deletions

````diff
@@ -65,9 +65,10 @@ This security group:
 
 In order for our Pod to use this security group, we need to use the `SecurityGroupPolicy` CRD to tell EKS which security group is to be mapped to a specific set of Pods. This is what we'll configure:
 
-```file
-manifests/modules/networking/securitygroups-for-pods/sg/policy.yaml
-```
+::yaml{file="manifests/modules/networking/securitygroups-for-pods/sg/policy.yaml" paths="spec.podSelector,spec.securityGroups.groupIds"}
+
+1. The `podSelector` targets pods with label `app.kubernetes.io/component: service`
+2. The `CATALOG_SG_ID` environment variable we exported above contains the security group ID that will be mapped to the matching pods
 
 Apply this to the cluster then recycle the catalog Pods once again:
````
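The `SecurityGroupPolicy` annotations correspond to a manifest roughly like this sketch, assuming the `vpcresources.k8s.aws/v1beta1` API; the name and namespace are assumptions:

```yaml
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: catalog-rds-access # hypothetical name
  namespace: catalog # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: service
  securityGroups:
    groupIds:
      - ${CATALOG_SG_ID} # security group ID exported earlier
```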

website/docs/networking/vpc-lattice/configuring-routes.md

Lines changed: 18 additions & 6 deletions

````diff
@@ -22,9 +22,19 @@ checkout-854cd7cd66-s2blp 1/1 Running 0 26s
 
 Now let's demonstrate how weighted routing works by creating `HTTPRoute` resources. First we'll create a `TargetGroupPolicy` that tells Lattice how to properly perform health checks on our checkout service:
 
-```file
-manifests/modules/networking/vpc-lattice/target-group-policy/target-group-policy.yaml
-```
+::yaml{file="manifests/modules/networking/vpc-lattice/target-group-policy/target-group-policy.yaml" paths="spec.targetRef,spec.healthCheck,spec.healthCheck.intervalSeconds,spec.healthCheck.timeoutSeconds,spec.healthCheck.healthyThresholdCount,spec.healthCheck.unhealthyThresholdCount,spec.healthCheck.path,spec.healthCheck.port,spec.healthCheck.protocol,spec.healthCheck.statusMatch"}
+
+1. `targetRef` applies this policy to the `checkout` Service
+2. The settings in the `healthCheck` section defines how VPC Lattice monitors service health
+3. `intervalSeconds: 10` : Check every 10 seconds
+4. `timeoutSeconds: 1` : 1-second timeout per check
+5. `healthyThresholdCount: 3` : 3 consecutive successes = healthy
+6. `unhealthyThresholdCount: 2` : 2 consecutive failures = unhealthy
+7. `path: "/health"`: Health check endpoint path
+8. `port: 8080` : Health check endpoint port
+9. `protocol: HTTP` : Health check endpoint protocol
+10. `statusMatch: "200"` : Expects HTTP 200 response
+
 
 Apply this resource:
 
@@ -34,9 +44,11 @@ $ kubectl apply -k ~/environment/eks-workshop/modules/networking/vpc-lattice/tar
 
 Now create the Kubernetes `HTTPRoute` route that distributes 75% traffic to `checkoutv2` and remaining 25% traffic to `checkout`:
 
-```file
-manifests/modules/networking/vpc-lattice/routes/checkout-route.yaml
-```
+::yaml{file="manifests/modules/networking/vpc-lattice/routes/checkout-route.yaml" paths="spec.parentRefs.0,spec.rules.0.backendRefs.0,spec.rules.0.backendRefs.1"}
+
+1. `parentRefs` attaches this `HTTPRoute` route to the `http` listener on the gateway named `${EKS_CLUSTER_NAME}`
+2. This `backendRefs` rule sends `25%` of the traffic to the `checkout` Service in the `checkout` namespace on port `80`
+3. This `backendRefs` rule sends `75%` of the traffic to the `checkout` Service in the `checkoutv2` namespace on port `80`
 
 Apply this resource:
````
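The ten health-check annotations map onto a `TargetGroupPolicy` like the sketch below, assuming the `application-networking.k8s.aws/v1alpha1` API; the metadata values are assumptions:

```yaml
apiVersion: application-networking.k8s.aws/v1alpha1
kind: TargetGroupPolicy
metadata:
  name: checkout-policy # hypothetical name
  namespace: checkout # hypothetical namespace
spec:
  targetRef:
    group: "" # core API group
    kind: Service
    name: checkout # apply the policy to the checkout Service
  healthCheck:
    enabled: true
    intervalSeconds: 10 # check every 10 seconds
    timeoutSeconds: 1 # 1-second timeout per check
    healthyThresholdCount: 3 # 3 consecutive successes = healthy
    unhealthyThresholdCount: 2 # 2 consecutive failures = unhealthy
    path: "/health" # health check endpoint path
    port: 8080
    protocol: HTTP
    statusMatch: "200" # expect HTTP 200
```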

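The weighted `HTTPRoute` described by the three annotations would look roughly like this sketch; the route name, the Gateway API version, and the use of `sectionName` to pick the `http` listener are assumptions:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkoutroute # hypothetical name
  namespace: checkout # hypothetical namespace
spec:
  parentRefs:
    - name: ${EKS_CLUSTER_NAME} # the Gateway created earlier
      sectionName: http # attach to its HTTP listener
  rules:
    - backendRefs:
        - name: checkout
          namespace: checkout
          kind: Service
          port: 80
          weight: 25 # 25% of traffic
        - name: checkout
          namespace: checkoutv2
          kind: Service
          port: 80
          weight: 75 # 75% of traffic
```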
website/docs/networking/vpc-lattice/service-network.md

Lines changed: 9 additions & 6 deletions

````diff
@@ -7,9 +7,10 @@ The Gateway API controller has been configured to create a VPC Lattice service n
 
 Before creating a `Gateway`, we need to formalize the types of load balancing implementations that are available via the Kubernetes resource model with a [GatewayClass](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gatewayclass). The controller that listens to the Gateway API relies on an associated `GatewayClass` resource that the user can reference from their `Gateway`:
 
-```file
-manifests/modules/networking/vpc-lattice/controller/gatewayclass.yaml
-```
+::yaml{file="manifests/modules/networking/vpc-lattice/controller/gatewayclass.yaml" paths="metadata.name,spec.controllerName"}
+
+1. Set `amazon-vpc-lattice` as the `GatewayClass` name for reference by `Gateway` resources
+2. Set `application-networking.k8s.aws/gateway-api-controller` as the `controllerName` to specify the AWS Gateway API controller that manages gateways of this class
 
 Lets create the `GatewayClass`:
 
@@ -19,9 +20,11 @@ $ kubectl apply -f ~/environment/eks-workshop/modules/networking/vpc-lattice/con
 
 The following YAML will create a Kubernetes `Gateway` resource which is associated with a VPC Lattice **Service Network**.
 
-```file
-manifests/modules/networking/vpc-lattice/controller/eks-workshop-gw.yaml
-```
+::yaml{file="manifests/modules/networking/vpc-lattice/controller/eks-workshop-gw.yaml" paths="metadata.name,spec.gatewayClassName,spec.listeners.0"}
+
+1. Set the Gateway identifier as the EKS Cluster name by setting `metadata.name` to the `EKS_CLUSTER_NAME` environment variable
+2. Set `amazon-vpc-lattice` as the `gatewayClassName` to refer to the VPC Lattice GatewayClass defined earlier
+3. This configuration specifies that the `listener` will accept `HTTP` traffic on port `80`
 
 Apply it with the following command:
````
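The two `GatewayClass` annotations pin down almost the whole manifest; a sketch (the Gateway API version is an assumption):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: amazon-vpc-lattice # referenced by Gateway resources
spec:
  controllerName: application-networking.k8s.aws/gateway-api-controller # AWS Gateway API controller
```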

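And the `Gateway` annotations imply a resource like this sketch (API version and listener name are assumptions):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: ${EKS_CLUSTER_NAME} # Gateway named after the EKS cluster
spec:
  gatewayClassName: amazon-vpc-lattice # the GatewayClass defined earlier
  listeners:
    - name: http
      protocol: HTTP
      port: 80 # accept HTTP traffic on port 80
```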