Creating an AWS EKS cluster
Creating an AWS EKS cluster is complex and requires provisioning many AWS EC2 resources. The Anjuna Nitro Kubernetes tools provide a Terraform script that makes it easy to create an AWS Nitro-capable EKS cluster. The script is based on the Terraform EKS module (see https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest for a complete reference on this module).
Configure the cluster
First, create a file named terraform.tfvars. This file will contain the configurable variables specific to your cluster: optional values that override the defaults in variables.tf, and mandatory values for variables defined in variables.tf that have no default.
You can create a starter terraform.tfvars file with the following command:
$ cd terraform
$ ./gen-variables.sh
jq is required to run gen-variables.sh. If you have not installed jq yet, follow the instructions here: Download jq
Now edit terraform.tfvars to customize your cluster. Commented-out values show the default that is used when they are not defined. If you are unsure what a variable controls, see its description in variables.tf.
The default value for cluster_version is "1.27". If you wish to use an earlier version, make sure that you also install the kubectl version that matches your cluster_version.
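As an illustration, a minimal terraform.tfvars might look like the following sketch. Only cluster_version and worker_node_count are referenced in this guide; check variables.tf for the full list of variable names and their defaults before relying on it.
# Example terraform.tfvars (illustrative only; verify names against variables.tf)
cluster_version   = "1.27"   # Kubernetes version of the EKS control plane
worker_node_count = 1        # number of AWS Nitro-capable worker nodes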
Create the cluster
Before running other Terraform commands, you must initialize the directory with this one-off command. All remaining commands assume that you are in the terraform subdirectory.
$ terraform init
Next, you will create the AWS infrastructure. If you wish to review and approve the AWS resources before they are created, omit the optional -auto-approve flag.
$ terraform apply -auto-approve
Creating an AWS EKS cluster is a complex operation and will take some time to complete.
The Terraform scripts create EC2 instances that are publicly accessible. This makes it easy to connect to the EC2 hosts for debugging, but it should be used only for testing. A production AWS EKS cluster should not configure its EC2 instances this way. Inspect the Terraform scripts and update them so that the cluster you create and deploy is consistent with the security best practices for deploying AWS EKS clusters.
Congratulations, you should have a running EKS cluster with a single AWS Nitro-capable node. To update the cluster and add or remove nodes, update terraform.tfvars with the proper parameters (for example, update the worker_node_count value) and apply the changes again.
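For example, to scale the cluster to two worker nodes, you could change worker_node_count in terraform.tfvars and re-run terraform apply, which updates the existing infrastructure in place:
# In terraform.tfvars:
worker_node_count = 2

$ terraform apply -auto-approve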
The final step is to configure kubectl so that it can manage your AWS EKS cluster:
$ aws eks --region "$(terraform output -raw region)" update-kubeconfig --name "$(terraform output -raw cluster_name)"
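If you prefer to see these values before using them, you can print the Terraform outputs yourself and pass them to the AWS CLI explicitly. This is standard Terraform and AWS CLI usage rather than anything specific to the Anjuna scripts; replace the placeholders with the printed values:
$ terraform output -raw region
$ terraform output -raw cluster_name
$ aws eks --region <region> update-kubeconfig --name <cluster-name>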
If the cluster was created properly and kubectl is configured correctly, the following command should show a single node in your cluster:
$ kubectl get nodes
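To see more detail about that node, such as its internal and external IP addresses, OS image, and container runtime, you can use the standard -o wide flag of kubectl (not specific to the Anjuna tooling):
$ kubectl get nodes -o wide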