Creating an AWS EKS cluster

The creation of an AWS EKS cluster is complex, and requires provisioning many AWS resources. The Anjuna Nitro Kubernetes tools provide a Terraform script that makes it easy to create a Nitro-capable AWS EKS cluster. The script is based on the Terraform eks module (see https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest for a complete reference on this module).

Install Terraform

Make sure you have a recent version of Terraform installed (see https://www.terraform.io/downloads.html). For example, to install version 0.15.3 on Linux:

$ wget https://releases.hashicorp.com/terraform/0.15.3/terraform_0.15.3_linux_amd64.zip
$ sudo unzip -d /usr/local/bin terraform_0.15.3_linux_amd64.zip
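
You can verify the installation by checking the version now on your PATH:

$ terraform version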

Initialize Terraform

You can find the Terraform scripts in the directory eks-terraform:

$ cd eks-terraform
$ terraform init
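
If initialization succeeds, Terraform downloads the required providers and modules, and the output ends with a message similar to:

Terraform has been initialized successfully!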

Define environment variables

The scripts and commands shown in this guide require a small number of environment variables. Make sure you run the following commands to specify:

  • the region where the AWS EKS cluster will be created,

  • the name of the EC2 SSH key pair used to access the worker nodes,

  • the name prefix the script should use to create resources in your cluster,

  • the version number of the Anjuna Nitro Kubernetes tools that should be used.

export EKS_REGION=<your-region>
export AWS_KEYNAME=<your-ec2-ssh-key-name>
export PROJECT_NAME=anjuna-eks
export ANJUNA_VERSION=1.15.0002
  • ANJUNA_VERSION MUST be set to 1.15.0002

  • PROJECT_NAME will be used to name the resources created for the cluster. You can pick any name that helps you describe the cluster.
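
For example, the two placeholders above might be filled in as follows (the region and key-pair name are illustrative; substitute your own values):

export EKS_REGION=us-east-2
export AWS_KEYNAME=my-ec2-key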

Configure the cluster

Before creating the cluster, update the file variables.tf with parameters that make sense for your environment. More specifically, the default values of the following entries should be updated:

  • region: the AWS region where the resources will be created (EC2 instances, KMS keys, the AWS Virtual Private Cloud, AWS EKS, etc.).

  • cluster_prefix: when creating resources, this Terraform script will name them with the specified prefix to make it easy to identify the resources associated with the cluster.

  • worker_node_instance_type: the EC2 instance type for the EKS nodes.

  • worker_node_count: the number of EKS nodes to create for this cluster.

  • nitro_reserved_cpu: the number of CPUs to reserve for Nitro on the EKS nodes.

  • nitro_reserved_mem_mb: the amount of memory to reserve for Nitro on the EKS nodes.

If you do not change the default values for the variables above, you can override them when you execute terraform apply (in the next section) by using the -var "variable=value" Terraform command-line argument.
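
For example, the apply command shown in the next section could be extended with overrides like the following (the instance type and reservation values are illustrative only):

$ terraform apply -var "worker_node_instance_type=m5.2xlarge" -var "nitro_reserved_cpu=2" -var "nitro_reserved_mem_mb=4096"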

Create the cluster

$ terraform apply -var "region=$EKS_REGION" -var "deployer_key_name=$AWS_KEYNAME" -var "cluster_prefix=$PROJECT_NAME"

If you edited the file variables.tf with the correct default values for your use case, you can run the following command:

$ terraform apply -var "deployer_key_name=$AWS_KEYNAME"

This command can take a while since creating an AWS EKS cluster is a complex operation.
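
Terraform first shows an execution plan and asks for confirmation; type yes to proceed. If you prefer a non-interactive run, you can add Terraform's -auto-approve flag to either of the commands above, for example:

$ terraform apply -auto-approve -var "deployer_key_name=$AWS_KEYNAME"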

The Terraform scripts will create EC2 instances that are publicly accessible. This makes it easy to connect to the EC2 hosts for debugging, but it should be used only for testing purposes.

A production AWS EKS cluster would not configure its EC2 instances in this way. Please inspect the Terraform scripts and update them to create and deploy a cluster that is consistent with security best practices for deploying AWS EKS clusters.

Congratulations, you should now have a running AWS EKS cluster with a single Nitro-capable node. To update the cluster and add or remove nodes, update the file variables.tf with the proper parameters (for example, change the worker_node_count value) and run terraform apply again.
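
For example, to scale the cluster to two nodes without editing variables.tf (the node count is illustrative), run:

$ terraform apply -var "deployer_key_name=$AWS_KEYNAME" -var "worker_node_count=2"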

The final step is to configure kubectl so that it can manage your AWS EKS cluster:

$ aws eks --region $(terraform output region | tr -d \") update-kubeconfig --name $(terraform output cluster_name | tr -d \")

If the cluster was created properly and kubectl is configured correctly, the following command should show a single node in your cluster:

$ kubectl get nodes
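
The output should look similar to the following (the node name, age, and version will differ in your environment):

NAME                                      STATUS   ROLES    AGE   VERSION
ip-10-0-1-23.us-east-2.compute.internal   Ready    <none>   3m    v1.19.6-eks-49a6c0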