Deploying a Pod as an AWS Nitro Enclave
The previous sections were all about setting up an AWS EKS cluster with the Anjuna Nitro Kubernetes tools installed in the cluster. You are now able to start a container in an AWS Nitro Enclave without changing the container, and verify that it is in fact running in an enclave. In this section you will run a simple nginx container in an enclave.
Refer to the Kubernetes Pod specification in the Configuration reference section for more details, including using pre-built Enclave Image Files (EIFs).
First, download the license from the Anjuna Resource Center. This license file will be mounted to your nginx Pod as a Kubernetes secret. Run the following command to create the secret:
$ kubectl create secret generic anjuna-license --from-file=license.yaml=license.yaml
The Anjuna Nitro Webhook will automatically mount the license secret to the new Pod’s filesystem.
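If you want to sanity-check that the secret was created with the expected key before deploying, you can list it (this is standard kubectl usage, not an Anjuna-specific step):

$ kubectl get secret anjuna-license
$ kubectl describe secret anjuna-license

The output of the second command should show a license.yaml entry under Data.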
Run the following command to install the Helm chart:
$ helm install nitro-nginx helm-charts/nitro-nginx
Wait for the Pod to start by running the following command repeatedly until the Pod's status is Running:
$ kubectl get pods
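Instead of polling manually, you can block until the Pod reports Ready. The generous timeout below is an arbitrary choice to accommodate the EIF build, which can take several minutes:

$ kubectl wait --for=condition=Ready pod/nitro-nginx-pod --timeout=600s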
When the Pod is running, run the following command to see what the Pod did:
$ kubectl logs nitro-nginx-pod
Inspecting the logs, you will see that the Pod nitro-nginx-pod is:
- downloading the nginx container,
- converting it into an EIF using the Anjuna Nitro Runtime,
- configuring the networking settings using the Anjuna Nitro Runtime,
- starting the enclave in debug mode,
- showing the AWS Nitro console output, which indicates that nginx should have started.
To confirm that nginx is running, you can use kubectl exec to run a command on the Pod's parent instance. The following command uses curl to verify that nginx is responding to requests on localhost:
$ kubectl exec -it nitro-nginx-pod -- curl http://localhost:80
The output should display the nginx welcome page.
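You can also reach nginx from your workstation by forwarding a local port to the Pod (the local port 8080 here is an arbitrary choice):

$ kubectl port-forward pod/nitro-nginx-pod 8080:80

Then, in another terminal, curl http://localhost:8080 should return the same welcome page.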
How this works
To understand how the Anjuna Nitro Kubernetes tools are told to create an enclave, inspect the Pod specification used for nginx. Open the file helm-charts/nitro-nginx/templates/nitro-nginx.yaml.
---
apiVersion: v1
kind: Service
metadata:
  name: nitro-nginx
spec:
  selector:
    name: nitro-nginx-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: nitro-nginx-pod
  labels:
    name: nitro-nginx-pod
    nitro.k8s.anjuna.io/managed: "yes"
spec:
  containers:
    - name: nitro-nginx-pod
      image: nginx:latest
      imagePullPolicy: Always
      resources:
        limits:
          memory: "2048Mi"
          cpu: "2"
      ports:
        - containerPort: 80
- Lines 14-19: Declare that a Pod nitro-nginx-pod will be created.
- Line 20: Declares that this Pod should run in an AWS Nitro Enclave by using the nitro.k8s.anjuna.io/managed label.
- Line 24: The Pod should launch the container nginx:latest in the AWS Nitro Enclave.
- Lines 26-29: Declare the resources that should be allocated to the enclave: the number of vCPUs (which must be even due to hyperthreading) and the RAM. If these resource limits are not defined, the webhook defaults to 2 GB of memory and 2 vCPUs.
All volumes configured on the Pod are automatically mounted into the enclave using a bind mount.
Using Deployments and other Workload Resources
You can also run AWS Nitro Enclaves in Deployments and other Workload Resources such as StatefulSets or DaemonSets. The Anjuna-specific information moves to the spec.template field, which describes the Pods that the workload will create. Here is an example of running nginx in a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nitro-nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nitro-nginx-pod
  template:
    metadata:
      labels:
        name: nitro-nginx-pod
        nitro.k8s.anjuna.io/managed: "yes"
    spec:
      containers:
        - name: nitro-nginx-pod
          image: nginx:latest
          imagePullPolicy: Always
          resources:
            limits:
              memory: "2048Mi"
              cpu: "2"
          ports:
            - containerPort: 80
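Because the Pod template carries the nitro.k8s.anjuna.io/managed label, scaling the Deployment creates additional enclaves. For example, standard kubectl scaling applies:

$ kubectl scale deployment nitro-nginx-deployment --replicas=4
$ kubectl rollout status deployment nitro-nginx-deployment

Keep the per-Node enclave limit in mind when choosing a replica count: replicas beyond what the cluster's Nodes can host will stay Pending.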
K8s probes with the Anjuna Nitro Runtime
The Anjuna Nitro Runtime supports liveness, readiness, and startup probes for network-based applications that export the appropriate ports.
Command-based liveness, readiness, and startup probes might not work since the cluster executes the commands on the launcher Pod, and not inside the AWS Nitro Enclave.
AWS Nitro Enclave Pods first build the EIF (when not using a pre-built EIF) and then run the AWS Nitro Enclave (regardless of the EIF build strategy), and therefore require a significantly longer startup period before the application starts running. Anjuna suggests setting your probes' initialDelaySeconds to 180 to allow the AWS Nitro Enclave to start before probing the application.
The larger the enclave, the longer the initialDelaySeconds value should be. Large enclaves may require more than 180 seconds to start.
Example of a Pod spec file with a Liveness Probe:
apiVersion: v1
kind: Pod
metadata:
  name: nitro-nginx-pod
  labels:
    name: nitro-nginx-pod
    nitro.k8s.anjuna.io/managed: "yes"
spec:
  containers:
    - name: nitro-nginx-pod
      image: nginx:latest
      imagePullPolicy: Always
      resources:
        limits:
          memory: "2048Mi"
          cpu: "2"
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          path: /index.html
          port: 80
        initialDelaySeconds: 180
        periodSeconds: 3
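If a probe fails (for example, because initialDelaySeconds was too short for the enclave to start), the failures appear as standard Kubernetes events. You can inspect them with:

$ kubectl describe pod nitro-nginx-pod

Look at the Events section at the end of the output for "Liveness probe failed" entries and container restarts.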
Deploying multiple enclaves per node
The Anjuna Nitro K8s Toolset supports up to four Pods in separate enclaves (the current AWS Nitro limitation) on a single Node. When there are multiple Nodes in a cluster, it may be desirable to control how enclave Pods are scheduled. For example, you can use nodeSelector to choose a particular set of Nodes for a given Pod, or set podAntiAffinity to ensure that Pods of the same Deployment are placed on different Nodes.
For more information on managing Node labels and assigning Pods to Nodes in multi-node scenarios, please consult the Assigning Pods to Nodes Kubernetes documentation.
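As an illustrative sketch (not Anjuna-specific), the following fragment could be added to the nitro-nginx-deployment spec to force each replica onto a different Node, assuming the Pods carry the name: nitro-nginx-pod label used earlier:

spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  name: nitro-nginx-pod
              topologyKey: kubernetes.io/hostname

With requiredDuringSchedulingIgnoredDuringExecution, replicas that cannot be placed on a distinct Node remain Pending; use preferredDuringSchedulingIgnoredDuringExecution if best-effort spreading is acceptable.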
Deleting a Pod
When deleting a Pod, the Anjuna Runtime first sends a SIGTERM signal to the main process inside the enclave, allowing it to perform any cleanup needed for a graceful termination before exiting. The main process has a grace period of 30 seconds for any termination activities. After the main process exits or the grace period expires, the Anjuna Runtime performs any remaining termination activities and then destroys the enclave. If the grace period expires before the enclave finishes its activities, a message is written to the Pod logs indicating that the enclave was forcefully terminated because the grace period expired.
You can also define a specific grace period other than 30 seconds, or destroy the enclave immediately. The Anjuna webhook sets the termination grace period for the enclave based on the terminationGracePeriodSeconds value in the Pod specification. To change the grace period, change the value of terminationGracePeriodSeconds in the Pod specification. A grace period of 0 (zero) seconds destroys the enclave immediately.
For example, to specify a grace period of 45 seconds, add the following to your Pod specification:
apiVersion: v1
kind: Pod
metadata:
  name: nitro-nginx-pod
  labels:
    name: nitro-nginx-pod
    nitro.k8s.anjuna.io/managed: "yes"
spec:
  containers:
    - name: nitro-nginx-pod
      image: nginx:latest
      imagePullPolicy: Always
      resources:
        limits:
          memory: "2048Mi"
          cpu: "2"
      ports:
        - containerPort: 80
  terminationGracePeriodSeconds: 45
To destroy an enclave immediately with no grace period, set terminationGracePeriodSeconds to zero:
spec:
  terminationGracePeriodSeconds: 0
Anjuna does not support the --grace-period <seconds> option for kubectl delete; you must use terminationGracePeriodSeconds in the Pod specification instead.
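A plain kubectl delete then honors whatever grace period the Pod specification defines:

$ kubectl delete pod nitro-nginx-pod

The command returns once the Pod object is removed; the enclave teardown described above happens during the termination grace period.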