Add a new Node Group

The Anjuna Nitro Runtime Toolset can only be installed into a Kubernetes cluster whose Nodes are properly configured and have the AWS Nitro Enclaves option enabled.

In this section, you will add a new Node Group to your EKS Cluster that is capable of running AWS Nitro Enclaves with the Anjuna Runtime.

Launch Template

In order to add an Anjuna Runtime-capable Node Group to your EKS Cluster, you will first need to define a new Launch Template.

There are several factors to consider beforehand:

  1. The amount of the Node’s memory to reserve for enclaves

  2. The number of the Node’s vCPUs to reserve for enclaves

  3. The hugepage size needed

  4. The Instance Type and Size (e.g., c5.2xlarge)

For simplicity, assume that the applications are not large (less than 4 GB in image size), and that you want to run at least 2 Enclaves on each Node.

  1. 8 GB of memory should be enough

  2. 4 vCPUs are enough

  3. A 2 MiB hugepage size is sufficient

  4. A c5.2xlarge Node is ideal

You will use these values to set up the Launch Template.
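As a quick sanity check on those numbers: 8192 MB of reserved memory divided into 2 MiB hugepages yields 4096 hugepages per Node, enough to back two 4 GB enclaves. A minimal sketch of the arithmetic in the shell:

$ echo $((8192 / 2))
4096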

User Data

From the anjuna-tools folder, define the environment variables for the settings above, along with some additional information needed in the next steps:

$ export nitro_reserved_mem_mb=8192
$ export nitro_reserved_cpu=4
$ export nitro_huge_page_size=2Mi
$ export nitro_instance_type=c5.2xlarge

$ export nitro_allocator_gz_b64=$(wget https://raw.githubusercontent.com/aws/aws-nitro-enclaves-cli/v1.2.0/bootstrap/nitro-enclaves-allocator -O- | gzip | base64 -w0)
$ export nitro_allocator_service_b64=$(wget https://raw.githubusercontent.com/aws/aws-nitro-enclaves-cli/v1.2.0/bootstrap/nitro-enclaves-allocator.service -O- | sed "s|usr/bin|usr/local/sbin|" | base64 -w0)
$ export vars='$nitro_reserved_cpu $nitro_reserved_mem_mb $nitro_huge_page_size $nitro_allocator_gz_b64 $nitro_allocator_service_b64'
$ export user_data=$(envsubst "$vars" < terraform/enclave-node-userdata.sh.tpl | sed 's/\$\$/\$/')

$ export CLUSTER_ARN=$(kubectl config view --minify -o jsonpath='{.contexts[0].context.cluster}')
$ export CLUSTER_NAME=$(echo $CLUSTER_ARN | awk -F/ '{print $NF}')
$ export CLUSTER_REGION=$(echo $CLUSTER_ARN | awk -F: '{print $4}')
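Before proceeding, you can optionally confirm that the variables resolved as expected; this sketch simply prints the cluster details and the first lines of the rendered User Data:

$ echo "Cluster: $CLUSTER_NAME (region: $CLUSTER_REGION)"
$ echo "$user_data" | head -n 5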

Then, run the commands below to prepare the Launch Template’s User Data in a MIME multi-part file:

For instructions on setting up pre-existing clusters with Red Hat Enterprise Linux, contact support@anjuna.io.

The file contents differ by AMI type. For Amazon Linux 2:

$ cat <<EOF > user_data.mime
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

${user_data}

--==MYBOUNDARY==--
EOF
For Amazon Linux 2023:

$ cat <<EOF > user_data.mime
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

${user_data}

--==MYBOUNDARY==
Content-Type: text/yaml; charset="us-ascii"

apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: ${CLUSTER_NAME}
    apiServerEndpoint: $(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.endpoint" --output text --region $CLUSTER_REGION)
    certificateAuthority: $(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.certificateAuthority.data" --output text --region $CLUSTER_REGION)
    cidr: $(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.kubernetesNetworkConfig.serviceIpv4Cidr" --output text --region $CLUSTER_REGION)
    ipFamily: $(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.kubernetesNetworkConfig.ipFamily" --output text --region $CLUSTER_REGION)

--==MYBOUNDARY==--
EOF
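Regardless of which variant you generated, you can sanity-check the result by printing the top of the file; it should begin with the MIME header and the first boundary:

$ head -n 6 user_data.mime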

Note that Amazon Linux 2023 introduced the nodeadm initialization process. It is configured by a NodeConfig YAML object, which requires some additional information about the cluster to ensure that the Node can communicate with the correct EKS cluster and adhere to its network settings. More information is available in the AWS documentation on nodeadm.

Launch Template Config

Run the command below to prepare the Launch Template Config JSON file:

$ cat <<EOF > launch-template-config.json
{
    "InstanceType": "c5.2xlarge",
    "EnclaveOptions": {"Enabled": true},
    "UserData": "$(cat user_data.mime | base64 -w0)",
    "MetadataOptions": {
        "HttpEndpoint": "enabled",
        "HttpPutResponseHopLimit": 2,
        "HttpTokens": "optional"
    }
}
EOF
  • InstanceType: Specifies the EC2 instance type, expanded from $nitro_instance_type (c5.2xlarge in this case)

  • EnclaveOptions: Enables Nitro Enclaves for instances created with this Launch Template

  • UserData: Contains the base64-encoded MIME file you created earlier

  • MetadataOptions:

    • HttpEndpoint: Set to "enabled"; this ensures the Instance Metadata Service is accessible.

    • HttpPutResponseHopLimit: Set to 2, which allows the Enclave to communicate with IMDSv2 on AL2023.

    • HttpTokens: Set to "optional"; this allows both IMDSv1 and IMDSv2.
      To disable IMDSv1 and allow only IMDSv2, set the value to "required". IMDSv2 is AWS's recommended method, and is the default in most environments.
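Before creating the Launch Template, it can help to confirm that the file parses as valid JSON. A minimal check using Python's built-in JSON tool:

$ python3 -m json.tool launch-template-config.json > /dev/null && echo "OK"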

Create the Launch Template

Run the command below to create the Launch Template. Make sure to create it in the same region as your EKS cluster.

Use the template name that matches the AMI type you prepared. For Amazon Linux 2:

$ aws ec2 create-launch-template \
    --launch-template-name anjuna-eks-al2 \
    --launch-template-data file://launch-template-config.json \
    --region "$CLUSTER_REGION"
For Amazon Linux 2023:

$ aws ec2 create-launch-template \
    --launch-template-name anjuna-eks-al2023 \
    --launch-template-data file://launch-template-config.json \
    --region "$CLUSTER_REGION"
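You can confirm that the Launch Template was created by describing it (substitute the template name you used above):

$ aws ec2 describe-launch-templates \
    --launch-template-names anjuna-eks-al2023 \
    --region "$CLUSTER_REGION"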

Node Group

With a Launch Template that supports both AWS Nitro Enclaves and the Anjuna Runtime, you can now configure a new Node Group via the AWS Web Console (an equivalent AWS CLI sketch follows these steps):

  1. Access your EKS cluster page:

    EKS Cluster Page
  2. On the Compute tab, find the Node Groups panel, and click on the Add node group button

  3. On the Node group configuration panel:

    1. Assign a unique Name to the Node group

    2. Select the IAM role that will be used by the Nodes

  4. On the Launch template panel, select the option to use a launch template, and then select the Anjuna Runtime launch template that you created above:

    Launch Template Panel
  5. On the Kubernetes labels panel:

    1. Click on the Add label button

    2. Configure the new label Key with anjuna-nitro-device-manager

    3. Configure the new label Value with enabled:

      Kubernetes Labels Panel

      The anjuna-nitro-device-manager label will be used by the Helm Chart on the next page to install the Anjuna Device Manager on these Nodes, providing access to AWS Nitro Enclave devices.

  6. Move to the next page

  7. Select AL2 or AL2023 as the AMI type, depending on which launch template you created

  8. Define the scaling and update configuration that best fits your needs, and move to the next page

  9. Select the subnets in which to place the Node Group, and move to the next page

  10. Review the Node Group configuration, and click on the Create button
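If you prefer to script this step instead of using the Web Console, the same Node Group can be created with the AWS CLI. The sketch below assumes the AL2023 Launch Template; the Node Group name, IAM role ARN, subnet IDs, and scaling values are placeholders that you must replace with your own:

$ aws eks create-nodegroup \
    --cluster-name "$CLUSTER_NAME" \
    --nodegroup-name anjuna-nitro-nodes \
    --node-role arn:aws:iam::123456789012:role/my-eks-node-role \
    --subnets subnet-aaaa1111 subnet-bbbb2222 \
    --launch-template name=anjuna-eks-al2023 \
    --ami-type AL2023_x86_64_STANDARD \
    --labels anjuna-nitro-device-manager=enabled \
    --scaling-config minSize=1,maxSize=2,desiredSize=2 \
    --region "$CLUSTER_REGION"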

Notice that because of the Launch Template configuration, no EC2 Key Pair will be assigned to the Nodes. If you want to configure a Key Pair, edit the launch-template-config.json file, and add a new JSON field called KeyName with the name of the Key Pair that you want to use, as sketched below.
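For example, assuming an existing Key Pair named my-key-pair (a placeholder), jq can produce an updated config file:

$ jq '. + {KeyName: "my-key-pair"}' launch-template-config.json > launch-template-config-with-key.json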

Now, wait for the Node Group to be up and running.
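Provisioning typically takes a few minutes. You can watch for the new Nodes to register and become Ready:

$ kubectl get nodes -l anjuna-nitro-device-manager=enabled --watch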

Once the Node Group is up and running, you should be able to use kubectl to check that it has been correctly configured:

$ kubectl describe nodes -l anjuna-nitro-device-manager=enabled | grep -m1 "hugepages-2Mi:"

This command should produce the following output:

hugepages-2Mi:                   8Gi

The output, hugepages-2Mi: 8Gi, confirms that the Nodes labeled with anjuna-nitro-device-manager=enabled have been correctly configured with 8Gi of 2Mi hugepages as determined above.
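That value matches the earlier arithmetic: 4096 hugepages × 2 Mi each = 8 Gi. If you also want to spot-check the CPU reservation, a similar grep over the same Nodes works; the exact value reported depends on how the allocator offlines the reserved vCPUs, so treat this as an informal check rather than a guarantee:

$ kubectl describe nodes -l anjuna-nitro-device-manager=enabled | grep -m1 "cpu:"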