Using Anjuna with Azure Kubernetes Service
Kubernetes is an open-source container orchestration platform. Azure Kubernetes Service (AKS) is Microsoft Azure’s managed Kubernetes service; it supports Intel® SGX Nodes using a confidential computing add-on called confcom. In this guide, you will deploy your Anjuna Docker container into AKS.
Create an AKS cluster
This guide requires an up-to-date Azure CLI. Follow the Azure documentation to install the Azure CLI: How to install the Azure CLI.
Then, follow the Azure documentation to create an AKS cluster that supports Intel® SGX: Quickstart: Deploy an AKS cluster with confidential computing Intel SGX agent nodes.
It is recommended to use DCsv3- or DCdsv3-series VMs because they use Ice Lake processors, which offer significantly better Intel® SGX performance.
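For example, cluster creation following that quickstart looks roughly like the commands below. The resource group myResourceGroup and node pool name confcompool1 are example values; adjust them for your environment.
$ az aks create --resource-group myResourceGroup --name myAKSCluster --generate-ssh-keys --enable-addons confcom
$ az aks nodepool add --cluster-name myAKSCluster --resource-group myResourceGroup --name confcompool1 --node-vm-size Standard_DC4s_v3 --node-count 1 # example node pool name and size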
After creating the AKS cluster, set the name of your AKS cluster as an environment variable:
$ export AKS_CLUSTER_NAME="myAKSCluster" # replace with your cluster name
Upload your Docker image to your Azure container registry
If you do not have an existing Azure container registry (ACR), follow the Azure documentation to Create a private container registry using the Azure CLI.
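If you need to create a registry, the command looks roughly like this (myResourceGroup and the Basic SKU are example values):
$ az acr create --resource-group myResourceGroup --name "<your_acr_registry_name_here>" --sku Basic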
Then, from the machine where your Docker images are stored, log in to your Azure container registry:
$ export ACR_REGISTRY_NAME="<your_acr_registry_name_here>"
$ export ACR_REGISTRY_DOMAIN="${ACR_REGISTRY_NAME}.azurecr.io"
$ az login
$ az acr login --name ${ACR_REGISTRY_NAME}
From the previous pages in this section, you should have a Dockerfile with the Anjuna SGX Runtime. Build it and tag it with the fully-qualified registry name:
$ export DOCKER_IMAGE_NAME="myname/my-anjuna-runtime"
$ docker build . --tag "${ACR_REGISTRY_DOMAIN}/${DOCKER_IMAGE_NAME}"
Push it to your Azure container registry:
$ docker push "${ACR_REGISTRY_DOMAIN}/${DOCKER_IMAGE_NAME}"
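Optionally, confirm that the image was pushed to the registry:
$ az acr repository list --name ${ACR_REGISTRY_NAME} --output table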
Provide access to the ACR
When the ACR is ready, follow the Azure documentation to provide access to an existing AKS using a managed identity: Configure ACR integration for existing AKS clusters.
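With a managed identity, that integration typically reduces to a single command along these lines (myResourceGroup is an example value):
$ az aks update --name ${AKS_CLUSTER_NAME} --resource-group myResourceGroup --attach-acr ${ACR_REGISTRY_NAME}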
Use the cluster
Once the AKS cluster and its Nodes are ready, you can start running Pods with Intel® SGX applications.
Get the AKS credentials to enable using kubectl:
$ az aks get-credentials --name ${AKS_CLUSTER_NAME} --resource-group "<your_resource_group_name>"
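To confirm that kubectl can reach the cluster, list its Nodes; the Intel® SGX node pool should appear:
$ kubectl get nodes -o wide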
Use a DaemonSet to configure allocation of the zero page
Some programs, like certain builds of Python, are position-dependent and will need to allocate the first (zero) page.
You will configure this for each Node using a temporary DaemonSet.
Create a DaemonSet configuration YAML at enclave-daemonset.yaml with the following contents:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: anjuna-config-daemonset
spec:
  template:
    metadata:
      labels:
        name: anjuna-config-daemonset
    spec:
      nodeSelector:
        # Change this to your correct VM size
        node.kubernetes.io/instance-type: "Standard_DC4s_v3"
      containers:
        - name: anjuna-config
          image: ubuntu:20.04
          command: ["sysctl", "vm.mmap_min_addr=0"]
          resources:
            limits:
              sgx.intel.com/epc: 1
          securityContext:
            privileged: true
  selector:
    matchLabels:
      name: anjuna-config-daemonset
Do not forget to change the node.kubernetes.io/instance-type value to the VM size you are using.
Run the DaemonSet:
$ kubectl apply -f enclave-daemonset.yaml
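You can check that one DaemonSet Pod was scheduled on each Intel® SGX Node:
$ kubectl get pods -l name=anjuna-config-daemonset -o wide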
After about 30 seconds, the DaemonSet should have finished making its configuration changes. You can then delete it to free up its resources.
$ kubectl delete -f enclave-daemonset.yaml
If you want to add more Intel® SGX Nodes to your cluster, you will need to deploy and delete the DaemonSet again. This is because Kubernetes does not have a construct for running one-time setup jobs on every Node.
Create a Kubernetes secret for the license file
A license is required for the Anjuna SGX Runtime. In this example, you will put the license file in a Kubernetes secret. Then, you will mount the secret into the Pod.
See the Licensing page for instructions on how to download the license from the Anjuna Resource Center.
Once you have downloaded the anjuna-license.yaml file from the Anjuna Resource Center, run the following command to create the secret:
$ kubectl create secret generic anjuna-license --from-file=license.yaml=anjuna-license.yaml
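You can verify that the secret was created (this shows the key names and sizes, not the contents):
$ kubectl describe secret anjuna-license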
Run a Pod
Create a Kubernetes file named pod.yaml, replacing fields with the appropriate values:
apiVersion: batch/v1
kind: Job
metadata:
  name: some-sgx-app
  labels:
    app: some-sgx-app
spec:
  template:
    metadata:
      labels:
        app: some-sgx-app
    spec:
      containers:
        - name: some-sgx-app
          # Change the image to the correct one
          image: ${ACR_REGISTRY_DOMAIN}/${DOCKER_IMAGE_NAME}:latest
          imagePullPolicy: Always
          # This resource limit will schedule the Pod on a confidential computing Node.
          # If the EPC limit is higher than the VM's EPC size, the Pod will never get scheduled.
          resources:
            limits:
              sgx.intel.com/epc: 20Mi
          env:
            - name: EXAMPLE_ENV_VAR
              value: "foo"
          volumeMounts:
            - name: anjuna-license
              mountPath: "/opt/anjuna"
              readOnly: true
          command: ["anjuna-sgxrun"]
          args: ["ls", "-al", "/"]
      restartPolicy: Never
      volumes:
        - name: anjuna-license
          secret: # Prereq: user should run `kubectl create secret generic anjuna-license --from-file=license.yaml=anjuna-license.yaml`
            secretName: anjuna-license
  backoffLimit: 0
You can run the following commands to substitute the environment variables into the manifest:
$ envsubst '$ACR_REGISTRY_DOMAIN,$DOCKER_IMAGE_NAME' < pod.yaml > pod.yaml.tmp
$ mv pod.yaml.tmp pod.yaml
Apply the Kubernetes manifest:
$ kubectl apply -f pod.yaml
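Instead of polling, you can also block until the Job finishes (the 120-second timeout is an arbitrary example value):
$ kubectl wait --for=condition=complete job/some-sgx-app --timeout=120s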
You can now run the following commands to view the status of the Pod and its Job:
$ kubectl get pods -l app=some-sgx-app
$ kubectl get jobs -l app=some-sgx-app
You should see output like this:
NAME READY STATUS RESTARTS AGE
some-sgx-app-<uid> 0/1 Completed 0 8s
NAME COMPLETIONS DURATION AGE
some-sgx-app 1/1 8s 8s
You can also use kubectl logs to print the Pod’s logs:
$ kubectl logs -l app=some-sgx-app --tail=-1
"ls.manifest.template.yaml" created
Compiled manifest written to ls.manifest.sgx
"ls.manifest.sgx" created
"ls.sig" created
Starting "/bin/ls" in Anjuna Runtime
+ exec Runtime/anjuna-runtime --dev /bin/ls -al /
[ 1] Anjuna Runtime version release-1.51.0002, Copyright (C) Anjuna Security, Inc. All rights reserved.
[ 1] Enclave initialized:
[ 1] Enclave base address: 0x0000000800000000
[ 1] Enclave size: 2GB
<removed some output...>
drwx------ 1 root root 4096 Sep 14 23:22 root
drwxr-xr-x 1 root root 4096 Sep 14 23:22 run
drwxr-xr-x 2 root root 4096 May 31 11:55 sbin
drwxr-xr-x 2 root root 4096 May 31 11:54 srv
dr-xr-xr-x 12 root root 0 Sep 14 23:22 sys
drwxrwxrwt 1 root root 4096 Sep 14 23:22 tmp
drwxrwxr-x 1 1009 1009 4096 Jul 18 10:37 usr
drwxr-xr-x 11 root root 4096 May 31 11:55 var