Installing the Anjuna Kubernetes Toolset to your cluster
Configure your environment
Before installing the Anjuna Kubernetes Toolset to your cluster, make sure that the environment is correctly configured, as shown in previous sections.
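As a quick sanity check (assuming the WORKSPACE variable and the Azure CLI login from the previous sections), you can confirm that the installer bundle location and the active Azure subscription are what you expect:
$ echo "${WORKSPACE:?WORKSPACE is not set}"
$ az account show --query name --output tsv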
Configure the environment with your cluster information. You can either use an existing AKS cluster or create a new one through Terraform. Select one of the tabs below according to your use case.
- Create a new cluster
- Use an existing cluster
If you do not have a cluster already, you can use the cluster module included in the Anjuna Kubernetes Toolset installer bundle to create a new one with the following commands.
First, create a new terraform.tfvars file with your desired configuration, such as the AKS version, Azure region, and network prefix:
$ cat <<EOF > ${WORKSPACE}/iac/azure/cluster/terraform.tfvars
# Kubernetes version to use for the cluster
k8s_version = "1.30"
# Location where resources should be provisioned in Azure
location = "eastus"
# Network address prefix for installing AKS in. E.g: 10.1 or 10.2
# (It is a best practice to ensure that this range does not overlap
# with other subnets within the same Azure region)
base_network_address_prefix = "10.1"
# Prefix to use for all relevant resources created by this configuration
prefix = "anjunaakssev"
# The VM size for the cluster's system agent nodes
agents_size = "standard_d4lds_v5"
# Initial number of k8s system agent nodes
agents_count = 1
# Tags to use on the resources deployed with this configuration
tags = {}
EOF
To create your cluster, run:
$ terraform -chdir=${WORKSPACE}/iac/azure/cluster init
$ terraform -chdir=${WORKSPACE}/iac/azure/cluster apply
Note that creating an AKS cluster usually takes between 5 and 15 minutes.
After Terraform runs successfully, configure your environment to use the newly created cluster:
$ source ${WORKSPACE}/iac/azure/cluster/env.sh
$ az aks get-credentials \
--resource-group "${AZURE_AKS_RG}" \
--name "${AZURE_CLUSTER_NAME}"
If you already have a cluster, skip the cluster creation entirely and configure the environment to point to the existing cluster by running the following commands:
$ export AZURE_CLUSTER_NAME="<cluster name>"
$ export AZURE_AKS_RG="<resource group>"
$ az aks get-credentials \
--resource-group "${AZURE_AKS_RG}" \
--name "${AZURE_CLUSTER_NAME}"
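Whichever option you chose, you can confirm that kubectl now points at the intended cluster by listing its Nodes:
$ kubectl get nodes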
Retrieve the name of the virtual network and subnet used by the AKS cluster so that the Confidential Pods can be part of the same network:
$ export AZURE_VNET_NAME=$(az network vnet list \
--resource-group "${AZURE_AKS_RG}" \
--query "[0].name" \
--output tsv)
$ export AZURE_SUBNET_ID=$(az network vnet subnet list \
--resource-group "${AZURE_AKS_RG}" \
--vnet-name "${AZURE_VNET_NAME}" \
--query "[0].id" \
--output tsv)
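To confirm that both values were retrieved correctly, you can print them:
$ echo "VNet: ${AZURE_VNET_NAME}"
$ echo "Subnet ID: ${AZURE_SUBNET_ID}"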
Load and push the Anjuna Kubernetes Toolset container images
In this section, you will push the Anjuna Kubernetes Toolset images to a container registry, where they can be accessed by your Kubernetes cluster:
- The anjuna-k8s-sev-tools image holds the binaries for the Anjuna Cloud Adaptor, Webhook, and Operator, which run as a DaemonSet, a Mutating Admission Controller, and a Deployment, respectively.
- The anjuna-k8s-sev-runtime image holds the Anjuna Shim and other relevant binaries to extend the capabilities of the cluster’s container runtime.
To continue, first specify the target names of the Anjuna Kubernetes Toolset container images:
$ export REGISTRY="${AZURE_REGISTRY_NAME}.azurecr.io"
$ export ANJUNA_K8S_TOOLS_IMAGE="${REGISTRY}/anjuna-k8s-cc-toolset:tools-1.7.0002"
$ export ANJUNA_K8S_RUNTIME_IMAGE="${REGISTRY}/anjuna-k8s-cc-toolset:runtime-1.7.0002"
Make sure that you are authenticated and have permission to push to the chosen container registry. For example, if you are using Azure Container Registry (ACR), run the following command:
$ az acr login -n "${AZURE_REGISTRY_NAME}"
You should see output like Login Succeeded
when the command completes.
The following commands load the images locally from the Anjuna Kubernetes Toolset installer bundle, and then push them to your chosen container registry. This might take a few seconds depending on the size of the images.
$ cd ${WORKSPACE}
$ docker load -i ${WORKSPACE}/anjuna-k8s-sev-tools-image.tar
$ docker tag anjuna-k8s-sev-tools:1.7.0002 ${ANJUNA_K8S_TOOLS_IMAGE}
$ docker push ${ANJUNA_K8S_TOOLS_IMAGE}
$ docker load -i ${WORKSPACE}/anjuna-k8s-sev-runtime-image.tar
$ docker tag anjuna-k8s-sev-runtime:1.7.0002 ${ANJUNA_K8S_RUNTIME_IMAGE}
$ docker push ${ANJUNA_K8S_RUNTIME_IMAGE}
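To confirm that both image tags were pushed, you can list the tags of the repository. The following assumes you pushed to ACR:
$ az acr repository show-tags \
    --name "${AZURE_REGISTRY_NAME}" \
    --repository anjuna-k8s-cc-toolset \
    --output table
The output should include the tools-1.7.0002 and runtime-1.7.0002 tags.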
Then, you must ensure that the cluster is able to pull the images from your container registry.
For example, if you are pushing the images to an Azure Container Registry (ACR), you can generate an authentication token as follows:
The az acr token create command will print a warning about storing your credentials safely.
This warning is expected whenever you create a new token.
$ export ACR_TOKEN_NAME="acr-token-${RANDOM}"
$ export ACR_TOKEN=$(az acr token create -n ${ACR_TOKEN_NAME} \
-r ${AZURE_REGISTRY_NAME} \
--repository anjuna-k8s-cc-toolset content/read \
| jq -r '.credentials.passwords[] | select(.name == "password1") | .value')
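Because the jq filter silently produces an empty string if the expected credential is missing, it is worth checking that the token was actually captured:
$ [ -n "${ACR_TOKEN}" ] && echo "ACR token retrieved" || echo "ACR token is empty"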
As part of the Anjuna Kubernetes Toolset installation, you can create a new image pull secret from the authentication token generated above, so that the cluster can pull the toolset images.
Alternatively, if you do not want to use registry tokens as illustrated above, you can attach the ACR to the AKS cluster instead (requires Owner permission).
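If you prefer that approach, attaching the registry typically looks like the following sketch; it grants the cluster pull access to the ACR and requires Owner permission on the registry:
$ az aks update \
    --name "${AZURE_CLUSTER_NAME}" \
    --resource-group "${AZURE_AKS_RG}" \
    --attach-acr "${AZURE_REGISTRY_NAME}"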
Install the Anjuna Kubernetes Toolset
The Anjuna Kubernetes Toolset relies on Node labels to select the Nodes of your cluster on which it will be installed. It can be installed on all Nodes, on a set of Nodes, or even on a single Node.
To query the existing Node labels of your cluster, run kubectl get nodes --show-labels.
To add a new label to a specific Node, run kubectl label nodes <node-name> <label-key>=<label-value>.
In this guide, the Anjuna Kubernetes Toolset selects the label kubernetes.azure.com/role=agent, which is applied by default to all AKS Nodes.
Run the following commands to configure the Node label to be used. If you want to install the Anjuna Kubernetes Toolset to a different set of Nodes, change the values below to a different label selector.
$ export ANJUNA_NODE_LABEL_KEY=kubernetes.azure.com/role
$ export ANJUNA_NODE_LABEL_VALUE=agent
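To preview which Nodes the Anjuna Kubernetes Toolset will be installed on, you can list the Nodes matching the selected label:
$ kubectl get nodes -l "${ANJUNA_NODE_LABEL_KEY}=${ANJUNA_NODE_LABEL_VALUE}"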
With the label key and value defined, move to the iac
folder,
and install the needed Custom Resource Definition (CRD) for the Anjuna Kubernetes Toolset to your cluster:
$ cd ${WORKSPACE}/iac
$ kubectl apply -f k8s/crd.yaml
$ envsubst < k8s/anjunaruntime.template.yaml > k8s/anjunaruntime.yaml
$ kubectl apply -f k8s/anjunaruntime.yaml
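To confirm that the CRD and the custom resource were applied, you can query them back from the cluster (the exact CRD name may differ from this sketch):
$ kubectl get crd | grep -i anjuna
$ kubectl get -f k8s/anjunaruntime.yaml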
For regular Pods,
the resources.requests
and resources.limits
fields of the Pod specification help the Kubernetes scheduler make placement decisions
based on the capacity and constraints of each worker Node.
Anjuna Confidential Pods are deployed as standalone Confidential VMs
and not as containers on the same worker Node.
Therefore, an Anjuna Confidential Pod’s spec.resources
could mislead the scheduler
regarding the actual capacity of the worker Nodes.
To address this issue, the Anjuna Kubernetes Toolset includes a mutating webhook and a controller that adjust the resource requests and limits of an Anjuna Confidential Pod, so that the scheduler accounts for Node resource allocation more accurately.
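For reference, these are the standard resources.requests and resources.limits fields of a container spec. A generic Kubernetes example (not specific to Anjuna) looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: "500m"
          memory: "512Mi"
        limits:
          cpu: "1"
          memory: "1Gi"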
cert-manager is required by the mutating webhook and can be installed to the cluster as follows:
$ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.9.1/cert-manager.yaml
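Before continuing, you may want to wait until the cert-manager Deployments report as available, for example:
$ kubectl wait --for=condition=Available deployment --all \
    --namespace cert-manager --timeout=180s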
Since each Anjuna Confidential Pod runs in its own Confidential VM,
the Anjuna Kubernetes Toolset limits the number of simultaneous Anjuna Confidential Pods to 20 per cluster.
If you want to change this number, set maxConfidentialPodVMs to the desired value in the values.yaml file (see below).
The Anjuna Kubernetes Toolset accepts a comma-separated list of potential VM sizes in the instanceSizes field. Unless a Pod spec requests a specific VM size, the Anjuna Kubernetes Toolset chooses from that list the best VM size that fits the Pod resource requests. When a Pod spec requests neither resources nor a specific VM size, the Anjuna Kubernetes Toolset defaults to the VM size defined in the instanceSize field.
Currently, the following virtual machine families support Confidential Computing:
- DCasv5/DCadsv5 family
- ECasv5/ECadsv5 family
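To check which of these sizes are offered in your Azure region, you can query the available VM SKUs. This assumes AZURE_LOCATION was set in the previous sections; substitute your region otherwise:
$ az vm list-skus --location "${AZURE_LOCATION}" --size Standard_DC --output table
$ az vm list-skus --location "${AZURE_LOCATION}" --size Standard_EC --output table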
To learn more about VM sizes, refer to Configuring Anjuna Confidential Pod VM Sizes.
You will use a Helm Chart to install the Anjuna Kubernetes Toolset to your cluster.
To configure it, create a values.yaml
file with the following content:
$ cat <<EOF > values.yaml
cloud: azure
image: ${ANJUNA_K8S_TOOLS_IMAGE}
nodeSelector:
  key: "${ANJUNA_NODE_LABEL_KEY}"
  value: "${ANJUNA_NODE_LABEL_VALUE}"
imagePullSecret:
  name: anjuna-kubernetes-toolset
  registry: ${AZURE_REGISTRY_NAME}.azurecr.io
  username: ${ACR_TOKEN_NAME}
  password: ${ACR_TOKEN}
maxConfidentialPodVMs: 20
azure:
  resourceGroup: ${AZURE_RESOURCE_GROUP}
  subnetId: ${AZURE_SUBNET_ID}
  subscriptionId: ${AZURE_SUBSCRIPTION_ID}
  instanceSize: "Standard_DC2as_v5"
  instanceSizes: "Standard_DC2as_v5,Standard_DC4as_v5,Standard_DC8as_v5,Standard_DC16as_v5,Standard_DC32as_v5"
  location: ${AZURE_LOCATION}
  storageAccount: ${AZURE_STORAGE_ACC_NAME}
  credentials:
    clientId: ${AZURE_CLIENT_ID}
    clientSecret: ${AZURE_CLIENT_SECRET}
    tenantId: ${AZURE_TENANT_ID}
EOF
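Because an unset environment variable silently expands to an empty string in the heredoc above, it is worth reviewing the generated file before installing. Any line printed by the following command indicates a field whose environment variable was empty:
$ grep -nE ':[[:space:]]+$' values.yaml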
Deploy the Anjuna Kubernetes Toolset to your cluster:
$ helm install anjuna-cc k8s/chart --values values.yaml --wait
Verify the installation
All resources are created in the anjuna-system
namespace.
To ensure that all components are running, check the Pods in the anjuna-system namespace:
$ kubectl get pods -n anjuna-system
The output should include the following Pods:
NAME READY STATUS RESTARTS AGE
anjuna-operator-controller-manager-68ff8494b7-bwcx9 2/2 Running 0 19s
anjuna-operator-daemon-install-w4t82 1/1 Running 0 19s
anjuna-operator-pre-install-daemon-f4f96 1/1 Running 0 16s
anjuna-cloud-adaptor-daemonset-z4kmg 1/1 Running 0 18s
ext-res-updater-7bszc 1/1 Running 0 19s
peer-pods-webhook-controller-manager-5d4675fc4b-6kjnn 2/2 Running 0 19s
Check that the Runtime Class anjuna-remote
was added to the cluster:
$ kubectl get runtimeclass
The output should resemble the following. Note that the Runtime Class might take a couple of minutes to be created.
NAME HANDLER AGE
anjuna-remote anjuna-remote 1m
Your Kubernetes cluster is now ready to deploy applications as Anjuna Confidential Pods. Refer to Deploying Pods as Anjuna Confidential Pods in AKS for examples on how to deploy applications.
Upgrade the Anjuna Kubernetes Toolset
To upgrade the installed version of the Anjuna Kubernetes Toolset, first uninstall it (see Uninstall the Anjuna Kubernetes Toolset), and then follow the installation instructions again, starting from Load and push the Anjuna Kubernetes Toolset container images, to install the new version.
You do not need to recreate the Shared Resources or the cluster in order to upgrade.
All Anjuna Confidential Pods must be stopped before an upgrade. After the upgrade,
the Anjuna Confidential Pod images need to be rebuilt with an anjuna-k8s-cli that matches the
new Anjuna Kubernetes Toolset version.
Uninstall the Anjuna Kubernetes Toolset
To uninstall the Anjuna Kubernetes Toolset, first stop all Anjuna Confidential Pods.
Then, run the following commands from the ${WORKSPACE}/iac
folder:
$ cd "${WORKSPACE}/iac"
$ kubectl delete -f k8s/crd.yaml
$ helm uninstall anjuna-cc --wait
$ kubectl delete runtimeclass anjuna-remote --ignore-not-found
This operation might take about a minute to fully complete.
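Once the commands complete, you can confirm that nothing from the toolset is left running:
$ kubectl get pods -n anjuna-system
Depending on how far the cleanup has progressed, this should eventually report no resources (or that the namespace no longer exists).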
To delete cert-manager
, run the following command:
$ kubectl delete -f https://github.com/jetstack/cert-manager/releases/download/v1.9.1/cert-manager.yaml
Refer to the next section, Cleaning up resources, to see how to destroy the resources created by Terraform in your Azure subscription.