Installation method
Own AWS account
What happened?
I deployed my lab environment a few weeks ago and tried to reuse it today by creating a new cluster.
ec2-user:~/environment:$ export EKS_CLUSTER_NAME=eks-workshop
curl -fsSL https://raw.githubusercontent.com/aws-samples/eks-workshop-v2/stable/cluster/eksctl/cluster.yaml | \
envsubst | eksctl create cluster -f -
2025-05-06 21:04:45 [ℹ] eksctl version 0.205.0
2025-05-06 21:04:45 [ℹ] using region eu-west-1
2025-05-06 21:04:45 [ℹ] subnets for eu-west-1a - public:10.42.0.0/19 private:10.42.96.0/19
2025-05-06 21:04:45 [ℹ] subnets for eu-west-1b - public:10.42.32.0/19 private:10.42.128.0/19
2025-05-06 21:04:45 [ℹ] subnets for eu-west-1c - public:10.42.64.0/19 private:10.42.160.0/19
2025-05-06 21:04:45 [ℹ] nodegroup "default" will use "" [AmazonLinux2023/1.31]
2025-05-06 21:04:45 [ℹ] using Kubernetes version 1.31
2025-05-06 21:04:45 [ℹ] creating EKS cluster "eks-workshop" in "eu-west-1" region with managed nodes
2025-05-06 21:04:45 [ℹ] 1 nodegroup (default) was included (based on the include/exclude rules)
2025-05-06 21:04:45 [ℹ] will create a CloudFormation stack for cluster itself and 1 managed nodegroup stack(s)
2025-05-06 21:04:45 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --cluster=eks-workshop'
2025-05-06 21:04:45 [ℹ] Kubernetes API endpoint access will use provided values {publicAccess=true, privateAccess=true} for cluster "eks-workshop" in "eu-west-1"
2025-05-06 21:04:45 [ℹ] CloudWatch logging will not be enabled for cluster "eks-workshop" in "eu-west-1"
2025-05-06 21:04:45 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=eu-west-1 --cluster=eks-workshop'
2025-05-06 21:04:45 [ℹ] default addons coredns, metrics-server, kube-proxy were not specified, will install them as EKS addons
2025-05-06 21:04:45 [ℹ]
2 sequential tasks: { create cluster control plane "eks-workshop",
    2 sequential sub-tasks: {
        5 sequential sub-tasks: {
            1 task: { create addons },
            wait for control plane to become ready,
            associate IAM OIDC provider,
            no tasks,
            update VPC CNI to use IRSA if required,
        },
        create managed nodegroup "default",
    }
}
2025-05-06 21:04:45 [ℹ] building cluster stack "eksctl-eks-workshop-cluster"
2025-05-06 21:04:45 [!] a TGW or VGW was not provided for hybrid nodes connectivity, hence eksctl won't configure any related routes and gateway attachments for your VPC
2025-05-06 21:04:45 [ℹ] deploying stack "eksctl-eks-workshop-cluster"
2025-05-06 21:05:15 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-cluster"
2025-05-06 21:05:45 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-cluster"
2025-05-06 21:06:45 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-cluster"
2025-05-06 21:07:45 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-cluster"
2025-05-06 21:08:45 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-cluster"
2025-05-06 21:09:45 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-cluster"
2025-05-06 21:10:45 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-cluster"
2025-05-06 21:11:45 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-cluster"
2025-05-06 21:12:45 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-cluster"
2025-05-06 21:13:45 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-cluster"
2025-05-06 21:13:47 [!] recommended policies were found for "vpc-cni" addon, but since OIDC is disabled on the cluster, eksctl cannot configure the requested permissions; the recommended way to provide IAM permissions for "vpc-cni" addon is via pod identity associations; after addon creation is completed, add all recommended policies to the config file, under `addon.PodIdentityAssociations`, and run `eksctl update addon`
2025-05-06 21:13:47 [ℹ] creating addon: vpc-cni
2025-05-06 21:13:47 [ℹ] successfully created addon: vpc-cni
2025-05-06 21:13:47 [ℹ] creating addon: coredns
2025-05-06 21:13:48 [ℹ] successfully created addon: coredns
2025-05-06 21:13:48 [ℹ] creating addon: metrics-server
2025-05-06 21:13:48 [ℹ] successfully created addon: metrics-server
2025-05-06 21:13:48 [ℹ] creating addon: kube-proxy
2025-05-06 21:13:49 [ℹ] successfully created addon: kube-proxy
2025-05-06 21:16:42 [ℹ] addon "vpc-cni" active
2025-05-06 21:16:42 [ℹ] deploying stack "eksctl-eks-workshop-addon-vpc-cni"
2025-05-06 21:16:42 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-addon-vpc-cni"
2025-05-06 21:17:12 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-addon-vpc-cni"
2025-05-06 21:17:12 [ℹ] updating addon
2025-05-06 21:17:23 [ℹ] addon "vpc-cni" active
2025-05-06 21:17:23 [ℹ] building managed nodegroup stack "eksctl-eks-workshop-nodegroup-default"
2025-05-06 21:17:23 [ℹ] deploying stack "eksctl-eks-workshop-nodegroup-default"
2025-05-06 21:17:23 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-nodegroup-default"
2025-05-06 21:17:53 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-nodegroup-default"
2025-05-06 21:18:52 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-nodegroup-default"
2025-05-06 21:20:29 [ℹ] waiting for CloudFormation stack "eksctl-eks-workshop-nodegroup-default"
2025-05-06 21:20:29 [ℹ] waiting for the control plane to become ready
2025-05-06 21:20:30 [✔] saved kubeconfig as "/home/ec2-user/.kube/config"
2025-05-06 21:20:30 [ℹ] no tasks
2025-05-06 21:20:30 [✔] all EKS cluster resources for "eks-workshop" have been created
2025-05-06 21:20:30 [ℹ] nodegroup "default" has 3 node(s)
2025-05-06 21:20:30 [ℹ] node "ip-10-42-107-153.eu-west-1.compute.internal" is ready
2025-05-06 21:20:30 [ℹ] node "ip-10-42-156-202.eu-west-1.compute.internal" is ready
2025-05-06 21:20:30 [ℹ] node "ip-10-42-170-254.eu-west-1.compute.internal" is ready
2025-05-06 21:20:30 [ℹ] waiting for at least 3 node(s) to become ready in "default"
2025-05-06 21:20:30 [ℹ] nodegroup "default" has 3 node(s)
2025-05-06 21:20:30 [ℹ] node "ip-10-42-107-153.eu-west-1.compute.internal" is ready
2025-05-06 21:20:30 [ℹ] node "ip-10-42-156-202.eu-west-1.compute.internal" is ready
2025-05-06 21:20:30 [ℹ] node "ip-10-42-170-254.eu-west-1.compute.internal" is ready
2025-05-06 21:20:30 [✔] created 1 managed nodegroup(s) in cluster "eks-workshop"
2025-05-06 21:20:31 [ℹ] kubectl command should work with "/home/ec2-user/.kube/config", try 'kubectl get nodes'
2025-05-06 21:20:31 [✔] EKS cluster "eks-workshop" in "eu-west-1" region is ready
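Both variables the template substitutes appear to have been set at this point: the subnet lines above rendered as eu-west-1a/b/c and the cluster came up named eks-workshop. For completeness, a sanity check along these lines (my own addition, not a workshop step) would confirm it:

# Confirm the two variables envsubst substitutes into cluster.yaml are set
echo "cluster=${EKS_CLUSTER_NAME:-<unset>} region=${AWS_REGION:-<unset>}"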
ec2-user:~/environment:$ prepare-environment introduction/getting-started
Refreshing copy of workshop repository from GitHub...
ec2-user:~/environment:$ prepare-environment fundamentals/storage/efs
Refreshing copy of workshop repository from GitHub...
Resetting the environment...
Tip: Read the rest of the lab introduction while you wait!
error: no objects passed to scale
An error occurred, please contact your workshop proctor or raise an issue at https://github.com/aws-samples/eks-workshop-v2/issues
The full log can be found here: /eks-workshop/logs/action-1746566994.log
ec2-user:~/environment:$ prepare-environment introduction/helm
Refreshing copy of workshop repository from GitHub...
Resetting the environment...
Tip: Read the rest of the lab introduction while you wait!
error: no objects passed to scale
An error occurred, please contact your workshop proctor or raise an issue at https://github.com/aws-samples/eks-workshop-v2/issues
The full log can be found here: /eks-workshop/logs/action-1746567393.log
The "full log" is a bit disappointing...
cat /eks-workshop/logs/action-1746566994.log
Added new context default to /home/ec2-user/.kube/config
Refreshing copy of workshop repository from GitHub...
Resetting the environment...
Tip: Read the rest of the lab introduction while you wait!
error: no objects passed to scale
An error occurred, please contact your workshop proctor or raise an issue at https://github.com/aws-samples/eks-workshop-v2/issues
The full log can be found here: /eks-workshop/logs/action-1746566994.log
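For context (my own digging, not anything from the workshop docs): "no objects passed to scale" appears to be the message kubectl scale prints when whatever it was asked to scale resolves to zero objects, for example a label selector that matches nothing. Something along these lines should reproduce the same message; the label is made up:

# Hypothetical: a scale request whose selector matches nothing fails with
# "error: no objects passed to scale"
kubectl scale deployment --replicas=0 --selector=made-up-label=does-not-exist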
I checked the cluster and everything looks fine:
ec2-user:~/environment:$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-10-42-107-153.eu-west-1.compute.internal   Ready    <none>   15m   v1.31.3-eks-59bf375
ip-10-42-156-202.eu-west-1.compute.internal   Ready    <none>   15m   v1.31.3-eks-59bf375
ip-10-42-170-254.eu-west-1.compute.internal   Ready    <none>   15m   v1.31.3-eks-59bf375
ec2-user:~/environment:$ kubectl get pods -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   aws-node-2j2nd                    2/2     Running   0          15m
kube-system   aws-node-92cs5                    2/2     Running   0          15m
kube-system   aws-node-94s2b                    2/2     Running   0          15m
kube-system   coredns-844dbb9f6f-jtvgl          1/1     Running   0          17m
kube-system   coredns-844dbb9f6f-mlllj          1/1     Running   0          17m
kube-system   kube-proxy-g4bgc                  1/1     Running   0          15m
kube-system   kube-proxy-mlhrc                  1/1     Running   0          15m
kube-system   kube-proxy-rprz5                  1/1     Running   0          15m
kube-system   metrics-server-5794744d5f-ftf5k   1/1     Running   0          17m
kube-system   metrics-server-5794744d5f-tqkcb   1/1     Running   0          17m
ec2-user:~/environment:$ kubectl get deployments
No resources found in default namespace.
ec2-user:~/environment:$ kubectl get deployments -A
NAMESPACE     NAME             READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns          2/2     2            2           17m
kube-system   metrics-server   2/2     2            2           17m
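My working theory (an assumption on my part, I have not read the reset script) is that the reset step scales deployments belonging to the workshop sample application, and on a freshly created cluster there are none to scale, which would explain the error above. Listing everything with labels would show nothing beyond the kube-system deployments already pasted:

# Show every deployment together with its labels; on this fresh cluster only
# the kube-system deployments listed above are present
kubectl get deployments -A --show-labels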
What did you expect to happen?
Labs to be deployed!
How can we reproduce it?
Follow the basic setup instructions to create the workshop cluster, then start a lab.
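Condensed, the exact commands were:

# Same commands as in the transcript above
export EKS_CLUSTER_NAME=eks-workshop
curl -fsSL https://raw.githubusercontent.com/aws-samples/eks-workshop-v2/stable/cluster/eksctl/cluster.yaml | \
  envsubst | eksctl create cluster -f -
prepare-environment introduction/getting-started
prepare-environment fundamentals/storage/efs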
Anything else we need to know?
No response
EKS version
ec2-user:~/environment:$ cat cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
availabilityZones:
  - ${AWS_REGION}a
  - ${AWS_REGION}b
  - ${AWS_REGION}c
metadata:
  name: ${EKS_CLUSTER_NAME}
  region: ${AWS_REGION}
  version: "1.31"
....
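If it helps, the substitution step can be reproduced on its own, without creating anything, by rendering the template locally (the region value here is taken from the eksctl log above):

# Render the same template the create command used, without piping it to eksctl
export EKS_CLUSTER_NAME=eks-workshop
export AWS_REGION=eu-west-1
curl -fsSL https://raw.githubusercontent.com/aws-samples/eks-workshop-v2/stable/cluster/eksctl/cluster.yaml | envsubst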