Commit 8b77ebb

feat: add pod readiness gate ingress submodule
1 parent 1559a3d commit 8b77ebb

Lines changed: 123 additions & 0 deletions
---
title: "Pod Readiness Gate"
sidebar_position: 40
---

The AWS Load Balancer Controller supports the [Pod readiness gate](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/deploy/pod_readiness_gate/) feature to indicate that a pod is registered to the ALB/NLB and healthy to receive traffic. The controller automatically injects the necessary readiness gate configuration into the pod spec via a mutating webhook during pod creation.

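If you want to confirm that this webhook is in place, you can list the mutating webhook configurations in the cluster. The name searched for below assumes the controller was installed with its Helm chart defaults, so it may differ in your environment:

```bash
# Assumes a standard Helm installation of the AWS Load Balancer Controller;
# the webhook configuration name may differ in other setups.
$ kubectl get mutatingwebhookconfigurations | grep aws-load-balancer
```
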
:::info
Note that this only works with `target-type: ip`. With `target-type: instance` the nodes are used as backends, so the ALB is not aware of individual pods or their readiness in that case.
:::

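You can check which target type the ui workload uses. This assumes the Ingress created in the earlier ingress section is named `ui`:

```bash
# Assumes the Ingress from the previous section is named "ui";
# expect alb.ingress.kubernetes.io/target-type: ip for readiness gates to apply.
$ kubectl -n ui get ingress ui -o yaml | grep target-type
```
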
The current ui service is not using readiness gates (the last column is set to `<none>`). This information is only visible in the wide output:

```bash
$ kubectl -n ui get pods --output wide
NAME                  READY   STATUS    RESTARTS   AGE     IP             NODE                                          NOMINATED NODE   READINESS GATES
ui-5989474687-swm27   1/1     Running   0          2m24s   10.42.181.33   ip-10-42-176-252.us-west-2.compute.internal   <none>           <none>
```

We will observe the current behavior by performing a rollout of the deployment.
You'll notice that the old pod is terminated immediately after the new pod becomes `Ready`, even though the new pod is not yet registered in the ALB target group.
If you're quick, you can observe the health check status of the new pod in the ALB target group:

```bash
$ kubectl -n ui get pods --output wide
NAME                  READY   STATUS    RESTARTS   AGE   IP              NODE                                          NOMINATED NODE   READINESS GATES
ui-6dbf768d69-vx2cz   1/1     Running   0          20s   10.42.142.144   ip-10-42-137-174.us-west-2.compute.internal   <none>           <none>
$ kubectl -n ui rollout restart deployment ui
deployment.apps/ui restarted
$ kubectl -n ui get pods --output wide
NAME                  READY   STATUS        RESTARTS   AGE   IP              NODE                                          NOMINATED NODE   READINESS GATES
ui-5d5c6b587d-x5pgz   1/1     Running       0          2s    10.42.181.37    ip-10-42-176-252.us-west-2.compute.internal   <none>           <none>
ui-6dbf768d69-vx2cz   1/1     Terminating   0          30s   10.42.142.144   ip-10-42-137-174.us-west-2.compute.internal   <none>           <none>
$ kubectl -n ui get pods --output wide
NAME                  READY   STATUS    RESTARTS   AGE   IP             NODE                                          NOMINATED NODE   READINESS GATES
ui-5d5c6b587d-x5pgz   1/1     Running   0          6s    10.42.181.37   ip-10-42-176-252.us-west-2.compute.internal   <none>           <none>
$ TG_ARN=$(aws elbv2 describe-target-groups --query "TargetGroups[?contains(TargetGroupName, 'k8s-ui-ui')].TargetGroupArn" --output text)
$ aws elbv2 describe-target-health --target-group-arn $TG_ARN --query "TargetHealthDescriptions[].TargetHealth"
```

The output initially shows:

```json
{
    "State": "initial",
    "Reason": "Elb.RegistrationInProgress",
    "Description": "Target registration is in progress"
}
```

and after a few seconds:

```json
{
    "State": "healthy"
}
```

During this delay our ui application will be unreachable, and clients will receive 502 errors from the ALB.

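If you want to see this for yourself, a simple probe loop against the ingress endpoint will surface the 502 responses during the next rollout. The snippet below assumes the Ingress created earlier in this module is named `ui`:

```bash
# Assumes the ui workload is exposed through an Ingress named "ui" in the ui namespace.
ADDRESS=$(kubectl -n ui get ingress ui -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# Print the HTTP status code once per second; expect intermittent 502s during the rollout.
while true; do
  curl -s -o /dev/null -w "%{http_code}\n" "http://${ADDRESS}"
  sleep 1
done
```
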
In order to avoid this situation, the AWS Load Balancer Controller can set the readiness condition on the pods that constitute your ingress or service backend. The condition status on a pod will be set to `True` only when the corresponding target in the ALB/NLB target group shows a health state of `Healthy`. This prevents the rolling update of a deployment from terminating old pods until the newly created pods are `Healthy` in the ALB/NLB target group and ready to take traffic.

For the readiness gate configuration to be injected into the pod spec, you need to apply the label `elbv2.k8s.aws/pod-readiness-gate-inject: enabled` to the pod's namespace:

```bash
$ kubectl label namespace ui elbv2.k8s.aws/pod-readiness-gate-inject=enabled
namespace/ui labeled
```

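If you want to double-check, the label should now be visible on the namespace:

```bash
# Shows all labels on the ui namespace, including elbv2.k8s.aws/pod-readiness-gate-inject=enabled.
$ kubectl get namespace ui --show-labels
```
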
Since the readiness gate is only injected at pod creation, we need to roll out the deployment for it to take effect:

```bash
$ kubectl -n ui rollout restart deployment ui
```

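Once the replacement pod is created, the injected readiness gate shows up in its spec. The condition type is derived from the generated target group binding, so the exact suffix will differ in your cluster:

```bash
# Lists the readiness gate condition types injected by the controller's webhook;
# expect something like target-health.elbv2.k8s.aws/k8s-ui-ui-xxxxxxxxxx.
$ kubectl -n ui get pods -l app.kubernetes.io/name=ui \
  -o jsonpath='{.items[*].spec.readinessGates[*].conditionType}'
```
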
You can observe that the pod's `Ready` condition stays `False` while the target health condition is `False`:
```bash
$ kubectl describe pod -n ui -l app.kubernetes.io/name=ui | grep --after-context=10 "Conditions:"
Conditions:
  Type                                               Status
  target-health.elbv2.k8s.aws/k8s-ui-ui-b21a807597   False
  PodReadyToStartContainers                          True
  Initialized                                        True
  Ready                                              False
  ContainersReady                                    True
  PodScheduled                                       True
```

Once the target health check passes, the condition switches to `True` and the pod becomes `Ready`:

```bash
$ kubectl describe pod -n ui -l app.kubernetes.io/name=ui | grep --after-context=10 "Conditions:"
Conditions:
  Type                                               Status
  target-health.elbv2.k8s.aws/k8s-ui-ui-b21a807597   True
  PodReadyToStartContainers                          True
  Initialized                                        True
  Ready                                              True
  ContainersReady                                    True
  PodScheduled                                       True
```

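Rather than describing the pod repeatedly, you can also wait for the gate to pass, since the `Ready` condition only becomes `True` once the target is healthy:

```bash
# Blocks until the Ready condition (which now includes the readiness gate) is True,
# or gives up after the timeout.
$ kubectl -n ui wait pod -l app.kubernetes.io/name=ui --for=condition=Ready --timeout=180s
```
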
Now that the pod has the readiness gate enabled, we can do another rollout of the deployment and observe that the old pod is not terminated until the readiness gate succeeds on the new pod:

```bash
$ kubectl -n ui rollout restart deployment ui
deployment.apps/ui restarted
$ kubectl -n ui get pods --output wide
NAME                 READY   STATUS    RESTARTS   AGE    IP              NODE                                          NOMINATED NODE   READINESS GATES
ui-886bb64b8-cxmsx   1/1     Running   0          103s   10.42.158.114   ip-10-42-137-174.us-west-2.compute.internal   <none>           1/1
[...]
$ kubectl -n ui get pods --output wide
NAME                  READY   STATUS    RESTARTS   AGE    IP              NODE                                          NOMINATED NODE   READINESS GATES
ui-886bb64b8-cxmsx    1/1     Running   0          112s   10.42.158.114   ip-10-42-137-174.us-west-2.compute.internal   <none>           1/1
ui-6fd4c6cc49-f8tqm   1/1     Running   0          3s     10.42.181.33    ip-10-42-176-252.us-west-2.compute.internal   <none>           0/1
[...]
$ kubectl -n ui get pods --output wide
NAME                  READY   STATUS    RESTARTS   AGE   IP             NODE                                          NOMINATED NODE   READINESS GATES
ui-6fd4c6cc49-f8tqm   1/1     Running   0          66s   10.42.181.33   ip-10-42-176-252.us-west-2.compute.internal   <none>           1/1
```

This keeps the ui application reachable throughout the rollout.
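
To confirm the rollout completed cleanly, you can wait on its status; the Deployment only counts a pod as available once its readiness gate has passed and the pod is `Ready`:

```bash
# Waits for the rollout to finish; fails if it does not complete within the timeout.
$ kubectl -n ui rollout status deployment/ui --timeout=240s
```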
