Add a Node Group for Anjuna

In this section, you will add to your EKS Cluster a new Node Group capable of running AWS Nitro Enclaves and Anjuna.

Launch Template

To add an Anjuna-capable Node Group to your EKS Cluster, you will first need to define a new Launch Template.

A few questions are worth considering beforehand:

  1. How much of each Node's memory should be reserved for Enclaves?

  2. How many of each Node's vCPUs should be reserved for Enclaves?

  3. What hugepage size is needed?

  4. And finally, what Instance Type and Size are required? E.g. c5.2xlarge

For simplicity, let's assume that Applications are not large (less than 4GB in image size) and you want to run at least 2 Enclaves on each node. Then:

  1. 8GB of memory should be enough;

  2. 4 vCPUs are enough;

  3. A 2 MiB hugepage size is sufficient; then

  4. A c5.2xlarge Node is ideal.
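
As a quick sanity check on these numbers (a sketch based on the example sizing above, not a required step):

# 8192MB reserved for Enclaves, backed by 2MiB hugepages:
echo $(( 8192 / 2 ))   # 4096 hugepages, i.e. 8GB, enough for two ~4GB Enclaves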

In the steps that follow, you will use these values to set up the Launch Template.

User Data

From the anjuna-tools folder, define the environment variables for the settings above, along with some additional information needed in the next steps:

# Enclave resource settings chosen above
export nitro_reserved_mem_mb=8192
export nitro_reserved_cpu=4
export nitro_huge_page_size=2Mi
export nitro_instance_type=c5.2xlarge

# Fetch the Nitro Enclaves allocator script (gzipped) and its systemd unit
# (with its binary path rewritten to /usr/local/sbin), base64-encoded for
# embedding into the node's user data
export nitro_allocator_gz_b64=$(wget https://raw.githubusercontent.com/aws/aws-nitro-enclaves-cli/v1.2.0/bootstrap/nitro-enclaves-allocator -O- | gzip | base64 -w0)
export nitro_allocator_service_b64=$(wget https://raw.githubusercontent.com/aws/aws-nitro-enclaves-cli/v1.2.0/bootstrap/nitro-enclaves-allocator.service -O- | sed "s|usr/bin|usr/local/sbin|" | base64 -w0)
export vars='$nitro_reserved_cpu $nitro_reserved_mem_mb $nitro_huge_page_size $nitro_allocator_gz_b64 $nitro_allocator_service_b64'
export user_data=$(envsubst "$vars" < terraform/enclave-node-userdata.sh.tpl | sed 's/\$\$/\$/')

# Derive the cluster name and region from the current kubectl context
export CLUSTER_ARN=$(kubectl config view --minify -o jsonpath='{.contexts[0].context.cluster}')
export CLUSTER_NAME=$(echo $CLUSTER_ARN | awk -F/ '{print $NF}')
export CLUSTER_REGION=$(echo $CLUSTER_ARN | awk -F: '{print $4}')
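
Optionally, echo the derived values to confirm everything was resolved correctly before continuing (a quick sanity check, not a required step):

echo "Cluster: $CLUSTER_NAME (region: $CLUSTER_REGION)"
echo "Reserving $nitro_reserved_cpu vCPUs and ${nitro_reserved_mem_mb}MB on $nitro_instance_type nodes"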

Then, run the commands below to prepare the Launch Template's User Data as a MIME multi-part file:

For instructions to set up pre-existing clusters with Red Hat Enterprise Linux, please contact support@anjuna.io.

Amazon Linux 2:

cat <<EOF > user_data.mime
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

${user_data}

--==MYBOUNDARY==--
EOF

Amazon Linux 2023:

cat <<EOF > user_data.mime
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

${user_data}

--==MYBOUNDARY==
Content-Type: text/yaml; charset="us-ascii"

apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: ${CLUSTER_NAME}
    apiServerEndpoint: $(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.endpoint" --output text --region $CLUSTER_REGION)
    certificateAuthority: $(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.certificateAuthority.data" --output text --region $CLUSTER_REGION)
    cidr: $(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.kubernetesNetworkConfig.serviceIpv4Cidr" --output text --region $CLUSTER_REGION)
    ipFamily: $(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.kubernetesNetworkConfig.ipFamily" --output text --region $CLUSTER_REGION)

--==MYBOUNDARY==--
EOF

Amazon Linux 2023 introduced the nodeadm initialization process, which is configured by a NodeConfig YAML object. The NodeConfig requires some additional information about the cluster so that the node can join the correct EKS cluster and adhere to its network settings. More information is available in the Amazon EKS nodeadm documentation.
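
If you would like to sanity-check the generated file before moving on, a quick inspection is enough; for example (the contents will differ between AL2 and AL2023):

head -n 8 user_data.mime                     # MIME headers and the start of the shell script part
grep -- "--==MYBOUNDARY==" user_data.mime    # the part boundaries should all be present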

Launch Template Config

Run the command below to prepare the Launch Template Config JSON file:

cat <<EOF > launch-template-config.json
{
    "InstanceType": "c5.2xlarge",
    "EnclaveOptions": {"Enabled": true},
    "UserData": "$(cat user_data.mime | base64 -w0)",
    "MetadataOptions": {
        "HttpEndpoint": "enabled",
        "HttpPutResponseHopLimit": 2,
        "HttpTokens": "optional"
    }
}
EOF
  • InstanceType: Specifies the EC2 instance type, taken from the $nitro_instance_type variable you defined earlier (c5.2xlarge in this case).

  • EnclaveOptions: Enables Nitro Enclaves for instances created with this Launch Template.

  • UserData: Contains the base64-encoded MIME file you created earlier.

  • MetadataOptions:

    • HttpEndpoint: Set to "enabled", this ensures the Instance Metadata Service is accessible.

    • HttpPutResponseHopLimit: Set to 2, which allows the Enclave to communicate with IMDSv2 on AL2023.

    • HttpTokens: Set to "optional", which allows both IMDSv1 and IMDSv2.
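
Since the file is generated via shell substitution, it can be worth validating that the result is well-formed JSON before using it; a minimal check (assuming python3 is available):

python3 -m json.tool launch-template-config.json > /dev/null && echo "launch-template-config.json is valid JSON"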

Create the Launch Template

Run the command below to create the Launch Template. Make sure to create it in the same region as your EKS cluster.

Amazon Linux 2:

aws ec2 create-launch-template \
    --launch-template-name anjuna-eks-al2 \
    --launch-template-data file://launch-template-config.json \
    --region $CLUSTER_REGION

Amazon Linux 2023:

aws ec2 create-launch-template \
    --launch-template-name anjuna-eks-al2023 \
    --launch-template-data file://launch-template-config.json \
    --region $CLUSTER_REGION
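
You can confirm that the Launch Template was created successfully (adjust the name to the variant you created, anjuna-eks-al2 or anjuna-eks-al2023):

aws ec2 describe-launch-templates \
    --launch-template-names anjuna-eks-al2023 \
    --region $CLUSTER_REGION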

Node Group

With a Launch Template that supports both AWS Nitro Enclaves and Anjuna, you can now configure a new Node Group via the AWS Web Console:

  1. Access your EKS cluster page:

    [Screenshot: EKS Cluster Page]
  2. On the Compute tab, find the Node Groups panel and click the Add node group button;

  3. On the Node group configuration panel:

    1. Assign a unique Name to the node group;

    2. Select the IAM role that will be used by the Nodes;

  4. On the Launch template panel, select the option to use a launch template, and then select the Anjuna launch template you created above:

    [Screenshot: Launch Template Panel]
  5. On the Kubernetes labels panel:

    1. Click on the Add label button;

    2. Configure the new label Key with anjuna-nitro-device-manager;

    3. Configure the new label Value with enabled;

      [Screenshot: Kubernetes Labels Panel]

      The anjuna-nitro-device-manager label identifies the nodes that have been properly configured to have the Anjuna Toolset installed.

  6. Move to the next page;

  7. Select AL2 or AL2023 as the AMI type, depending on which launch template you created;

  8. Define the scaling and update configuration that best fits your needs and move to the next page;

  9. Select the subnets where to best place the Node Group and move to the next page;

  10. Review the Node Group configuration and click the Create button.

Notice that, due to how the Launch Template is configured, no EC2 Key Pair will be assigned to the Nodes. If you wish to configure a Key Pair, edit the launch-template-config.json file and add a new top-level JSON field called KeyName with the name of the Key Pair that you want to use, as shown below.
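
For example, with a hypothetical Key Pair named my-key-pair, you would add the following field alongside the existing top-level fields, and then recreate the Launch Template (or create a new version of it) so the change takes effect:

    "KeyName": "my-key-pair",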

Now, wait for the Node Group to be up and running. Once it is, you can use kubectl to check that it has been correctly configured:

$ kubectl describe nodes -l anjuna-nitro-device-manager=enabled | grep -m1 "hugepages-2Mi:"
hugepages-2Mi:                   8Gi

This output (hugepages-2Mi: 8Gi) confirms that the Nodes labelled with anjuna-nitro-device-manager=enabled have been correctly configured with 8Gi of 2Mi hugepages, as determined above.
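
If you prefer a per-node view, the same information can be pulled with a JSONPath query (a convenience sketch, equivalent to the grep above):

kubectl get nodes -l anjuna-nitro-device-manager=enabled \
    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.allocatable.hugepages-2Mi}{"\n"}{end}'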