This guide provisions an Amazon Elastic Kubernetes Service (EKS) cluster using Terraform, including configurations for accessing and managing the cluster.
Ensure the following are installed and configured before starting:
- AWS CLI: Version 2.x. Install guide.
- Kubectl: Compatible with your EKS cluster version (e.g., 1.30). Install guide.
- Terraform CLI: Version 1.x (recommended: 1.5 or later). Install guide.
- AWS IAM User: With programmatic access and sufficient permissions (e.g., a custom policy with EKS, VPC, IAM, and EC2 permissions).
- AWS Credentials: Configured via `~/.aws/credentials` or environment variables:

```shell
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-east-1"
```
Configure your AWS CLI with the appropriate credentials:
```shell
aws configure
```

Provide:
- AWS Access Key ID
- AWS Secret Access Key
- Default region (e.g., `us-east-1`)
- Output format (e.g., `json`)
Verify authentication:
```shell
aws sts get-caller-identity
```

Navigate to your Terraform project directory containing the EKS configuration (e.g., `main.tf`, `variables.tf`, `outputs.tf`). If you don’t have one, use or adapt the AWS EKS Terraform module.
Example directory setup:
```shell
cd IAC-EKS-Cluster
```

Ensure your Terraform configuration includes:
- VPC setup (subnets, route tables, NAT gateway, etc.)
- EKS cluster with managed node groups or Fargate profiles
- IAM roles for EKS and node groups
- Security group rules for cluster communication
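The checklist above can be sketched with the community `terraform-aws-modules` VPC and EKS modules. This is a minimal sketch, not a complete configuration: the version pins, CIDRs, availability zones, cluster name, and node group sizing below are all placeholder assumptions to adapt to your environment.

```hcl
# main.tf – minimal sketch using community modules (all names/values are placeholders)

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name            = "eks-vpc"
  cidr            = "10.0.0.0/16"
  azs             = ["ap-south-1a", "ap-south-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  # Nodes in private subnets reach the internet through a NAT gateway
  enable_nat_gateway = true
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "cluster-name"
  cluster_version = "1.30"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  # A single managed node group; size to your workload
  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }
  }
}
```

The EKS module provisions the cluster and node IAM roles and the cluster security group rules for you; define them by hand only if you need tighter control than the module's defaults.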
Initialize the Terraform working directory to download providers and modules:
```shell
terraform init
```

Review the resources Terraform will create or modify:
```shell
terraform plan
```

Check the output for correctness (e.g., cluster name, region, node group size).
Apply the Terraform configuration to provision the EKS cluster:
```shell
terraform apply
```

Type `yes` when prompted to confirm. This may take 10–15 minutes.
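If your configuration doesn't already expose the values you'll need next, a small `outputs.tf` sketch helps; this assumes your EKS module is named `eks`, and the output names here are arbitrary:

```hcl
# outputs.tf – sketch, assuming an EKS module block named "eks"

output "cluster_name" {
  value = module.eks.cluster_name
}

output "cluster_endpoint" {
  value = module.eks.cluster_endpoint
}
```

After a successful apply, `terraform output cluster_name` prints the value to plug into the kubeconfig command below.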
Update your kubectl configuration to connect to the EKS cluster. Replace `ap-south-1` and `cluster-name` with your region and EKS cluster name (as defined in your Terraform config):

```shell
aws eks --region ap-south-1 update-kubeconfig --name cluster-name
```

Verify cluster access:

```shell
kubectl get nodes
```

You should see the nodes in your EKS cluster.
To destroy the EKS cluster and associated resources, run:
```shell
terraform destroy
```

Type `yes` when prompted. Ensure you’ve backed up any critical data before destroying.