Prerequisites

The instructions in this section assume that you will run the commands to create and configure your AWS EKS cluster from a host that has the tools described below installed and configured.

AWS CLI tools

Install AWS CLI 2.7.1 or above for full compatibility with the tools and versions mentioned below. Do not use the apt, yum, or snap packages of the AWS CLI, which install version 1.x. You must use the official AWS CLI installer, as shown below:

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
$ sudo yum install git procps
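
You can verify the installation by checking the AWS CLI version (the exact version string will vary depending on when you install):

$ aws --version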

Once you have the AWS CLI installed, you must configure it with your AWS credentials. When it asks for the “Default output format”, enter json:

$ aws configure
AWS Access Key ID [None]: [AWS Access Key ID]
AWS Secret Access Key [None]: [AWS Secret Access Key]
Default region name [None]: us-east-2
Default output format [None]: json
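
To confirm that the credentials are configured correctly, you can optionally query your caller identity (this command only reads your identity and does not modify any resources):

$ aws sts get-caller-identity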

Docker

Follow the official Docker documentation to install Docker for your platform.
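
After installing Docker, you can optionally confirm that your user can run containers:

$ docker run --rm hello-world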

jq

Install the jq tool by following the instructions on the official jq download page.
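
To confirm that jq is installed and working, you can run a quick sanity check (the JSON below is arbitrary example data):

$ echo '{"region": "us-east-2"}' | jq -r '.region'
us-east-2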

EKS supported versions

As of the Anjuna Nitro Runtime v1.36, Anjuna supports EKS versions 1.22 through 1.26.

The following table matches the currently supported EKS versions to the Anjuna Nitro Runtime versions:

Supported EKS version    Anjuna Nitro Runtime versions
1.26                     v1.35 - v1.36
1.25                     v1.33 - v1.36
1.24                     v1.31 - v1.36
1.23                     v1.27 - v1.36
1.22                     v1.23 - v1.36

EKS resource requirements

The recommended resource allocation for Anjuna Nitro on EKS per enclave instance is:

  • Trusted vCPUs (Enclave): At least 2

  • Trusted memory (Enclave): At least 1 Gi

  • Untrusted vCPUs (Launcher): 0.5 per trusted vCPU

  • Untrusted memory (Launcher): 1 Gi per enclave

This means a minimum of 3 vCPUs (2 trusted, 1 untrusted) and 2 Gi of memory (1 Gi trusted and 1 Gi untrusted) are recommended per enclave.

AWS Nitro Enclaves support up to four enclaves per EKS Node.

kubectl

The official instructions for installing kubectl are here: https://kubernetes.io/docs/tasks/tools/install-kubectl/

Ideally, you should install a version of kubectl that exactly matches the EKS version you wish to use, but using up to one minor version lower is supported. Using a kubectl version newer than your EKS version is not supported. For more details, see this page: https://kubernetes.io/releases/version-skew-policy/#kube-controller-manager-kube-scheduler-and-cloud-controller-manager

The following commands will set up your Linux host with kubectl v1.26:

$ curl -LO "https://dl.k8s.io/release/v1.26.4/bin/linux/amd64/kubectl"
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# If /usr/local/bin is not in your PATH, and bash is your shell, you can add it like this:
$ export PATH=$PATH:/usr/local/bin
$ echo 'export PATH=$PATH:/usr/local/bin' >> ~/.bashrc

# Verify that kubectl reports the version you installed
$ kubectl version --client

Install Terraform

The Terraform configuration for the Anjuna Nitro K8s Toolset has been tested with Terraform v1.3.1, but it should work with v1.1 or higher.
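
As an example, the following commands install a Terraform release from the official HashiCorp binary distribution on a Linux x86_64 host (the version shown is illustrative; any v1.1 or higher release should work):

$ curl -LO "https://releases.hashicorp.com/terraform/1.3.1/terraform_1.3.1_linux_amd64.zip"
$ unzip terraform_1.3.1_linux_amd64.zip
$ sudo install -o root -g root -m 0755 terraform /usr/local/bin/terraform

# Verify that terraform reports the version you installed
$ terraform version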

Install Helm

Before you install Helm, it is important to know that each version of Helm supports a specific range of Kubernetes versions (and therefore EKS versions), as noted in the Supported Version Skew section of Helm’s documentation.

Anjuna supports Helm versions 3.7.x-3.9.x, but only for the EKS versions shown in the EKS supported versions section above.

Install Helm using the instructions on the Installing Helm page of Helm’s documentation.
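
As an example, the following commands install a Helm 3.9.x release from the official binary distribution on a Linux x86_64 host (the exact patch version is illustrative):

$ curl -LO "https://get.helm.sh/helm-v3.9.4-linux-amd64.tar.gz"
$ tar -zxf helm-v3.9.4-linux-amd64.tar.gz
$ sudo install -o root -g root -m 0755 linux-amd64/helm /usr/local/bin/helm

# Verify that helm reports the version you installed
$ helm version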