Set up required cloud resources

This guide will proceed step by step, with conceptual explanations for each step.

Deployment directory structure

The following commands create a top-level directory for this deployment named apm-on-gcp. All subsequent commands in this guide should be executed from this directory.

$ mkdir -p apm-on-gcp
$ cd apm-on-gcp

Set environment variables

There are several settings that are defined using environment variables. These values determine the names of Google Cloud resources and must be unique for the Google Cloud account, project, or VPC (depending on the resource).

Some Google Cloud resource names persist even after the resources are deleted, such as KMS keyring and key names. For this reason, change the value of the following PREFIX environment variable so that it is unique across executions of this deployment guide. The default value anjuna can be used only once per Google Cloud account.

Run the following command to set the PREFIX environment variable used by additional environment variable settings below. Be sure to change this value to a new, unique value for subsequent executions of this guide.

$ # If you change this value, it should be alphanumeric only,
$ # with no spaces, dashes, underscores, or other characters
$ export PREFIX="anjuna"

Next, define the Google Cloud project name, region, and zone that you are using for deployment. Set the following:

  • GCP_PROJECT environment variable to the name of your project

  • GCP_REGION to the region

  • GCP_ZONE to the zone

The following example commands set the project to the PREFIX defined in the previous step followed by -apm-project, the region to us-central1, and the zone to us-central1-a:

$ export GCP_PROJECT="${PREFIX}-apm-project"
$ export GCP_REGION="us-central1"
$ export GCP_ZONE="us-central1-a"
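
Optionally, you can also point the gcloud CLI at this project, region, and zone so that commands which omit the --project, --region, or --zone flags use the same values by default (a minimal sketch; skip this if you manage gcloud configurations differently):

$ # Optional: set gcloud defaults to match the variables above
$ gcloud config set project "${GCP_PROJECT}"
$ gcloud config set compute/region "${GCP_REGION}"
$ gcloud config set compute/zone "${GCP_ZONE}"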

The following commands define additional values used for the names of various resources that will be created in the subsequent sections of this guide. You may change them according to your preferences.

$ export NETWORK_NAME="${PREFIX}-apm-network"
$ export SUBNET_NAME="${NETWORK_NAME}-subnet"

$ export VAULT_SERVICE_ACCOUNT_NAME="${PREFIX}-vault-sa"
$ export VAULT_SERVICE_ACCOUNT_DESCR="Vault Server with Anjuna Policy Manager"
$ export VAULT_SERVICE_ACCOUNT_EMAIL="${VAULT_SERVICE_ACCOUNT_NAME}@${GCP_PROJECT}.iam.gserviceaccount.com"

$ export CLIENT_SERVICE_ACCOUNT_NAME="${PREFIX}-client-sa"
$ export CLIENT_SERVICE_ACCOUNT_DESCR="Client App Service Account"
$ export CLIENT_SERVICE_ACCOUNT_EMAIL="${CLIENT_SERVICE_ACCOUNT_NAME}@${GCP_PROJECT}.iam.gserviceaccount.com"

$ export ATTESTATION_ROLE_NAME="${PREFIX}_attestation_role"

$ export KMS_UNSEAL_ROLE_NAME="${PREFIX}_kms_unseal_role"
$ export KMS_LOCATION="global"
$ export KMS_KEYRING="${PREFIX}-apm-keyring"
$ export KMS_KEY="${PREFIX}-apm-key"

$ export VAULT_SERVER_BUCKET="${PREFIX}-vault-server"
$ export VAULT_SERVER_IMAGE="${PREFIX}-vault-server-image"
$ export VAULT_SERVER_INSTANCE="${PREFIX}-vault-server-instance"
$ export VAULT_SERVER_STORAGE="${PREFIX}-vault-storage"
$ export VAULT_SERVER_TLS_KEY_SECRET="${PREFIX}-vault-tls-key"
$ export VAULT_SERVER_TLS_CERT_SECRET="${PREFIX}-vault-tls-cert"
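
If you open a new shell later, these variables will be lost, so it can be useful to confirm that they are all set before continuing (a minimal sketch using printenv and grep):

$ # Optional: list the deployment-related environment variables
$ printenv | grep -E '^(PREFIX|GCP_|NETWORK_|SUBNET_|VAULT_|CLIENT_|ATTESTATION_|KMS_)' | sort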

Virtual Private Cloud network

Instances must be deployed in a Virtual Private Cloud (VPC) network. In this tutorial, you will create a new VPC network. If you have an existing one, you may use it instead.

To create a VPC network, run the following commands:

$ gcloud compute networks create \
    "${NETWORK_NAME}" \
    --subnet-mode "custom"
$ gcloud compute networks subnets create \
    "${SUBNET_NAME}" \
    --network "${NETWORK_NAME}" \
    --range "10.128.0.0/20" \
    --region "${GCP_REGION}"

Then, create firewall rules to allow traffic between instances on the network and to allow external access to the Vault server port (TCP 8200).

$ # Allow any TCP and ICMP within the VPC
$ gcloud compute firewall-rules create \
    "${NETWORK_NAME}-allow-internal" \
    --network "${NETWORK_NAME}" \
    --allow tcp,icmp \
    --source-ranges "10.128.0.0/20"

$ # Allow TCP port 8200 from anywhere
$ gcloud compute firewall-rules create \
    "${NETWORK_NAME}-firewall-vault" \
    --network "${NETWORK_NAME}" \
    --allow tcp:8200 \
    --source-ranges "0.0.0.0/0"

Service account for the Vault server

The Vault server will access various Google Cloud services. Authorization requires a service account, which will be associated with the Vault server instance.

Run the following command to create the service account:

$ gcloud iam service-accounts create \
    "${VAULT_SERVICE_ACCOUNT_NAME}" \
    --description "${VAULT_SERVICE_ACCOUNT_DESCR}"

Anyone with access to the service account will be able to access sensitive data used by the Vault server, including secrets and the TLS key and certificate. See Google Cloud’s Best practices for using service accounts to restrict usage.
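
To review who can use or impersonate this service account, you can inspect its IAM policy (a minimal sketch):

$ # Optional: review who has access to the Vault service account
$ gcloud iam service-accounts get-iam-policy "${VAULT_SERVICE_ACCOUNT_EMAIL}"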

Logging

Using Google Cloud Logging is recommended for improved performance and more granular access control. For more information, see How to configure Google Cloud Logging for the Anjuna Confidential Container.

Run the following command to grant write access for logging:

$ gcloud projects add-iam-policy-binding \
    "${GCP_PROJECT}" \
    --member="serviceAccount:${VAULT_SERVICE_ACCOUNT_EMAIL}" \
    --role="roles/logging.logWriter"

Attestation

To verify attestation reports from client enclaves, the APM needs access to the Google Cloud API call getShieldedInstanceIdentity.

To grant access to this API call, create an IAM role and bind it to the service account with the following commands:

$ gcloud iam roles create \
    "${ATTESTATION_ROLE_NAME}" \
    --project "${GCP_PROJECT}" \
    --title "Anjuna Attestation Role" \
    --permissions "compute.instances.getShieldedInstanceIdentity"
$ gcloud projects add-iam-policy-binding \
    "${GCP_PROJECT}" \
    --member "serviceAccount:${VAULT_SERVICE_ACCOUNT_EMAIL}" \
    --role "projects/${GCP_PROJECT}/roles/${ATTESTATION_ROLE_NAME}"

Service account for the client enclave

The client enclave that accesses the Anjuna Policy Manager also needs to access Google Cloud services. Authorization requires a service account, which will be associated with the client enclave instance.

Run the following command to create the service account:

$ gcloud iam service-accounts create \
      "${CLIENT_SERVICE_ACCOUNT_NAME}" \
      --description "${CLIENT_SERVICE_ACCOUNT_DESCR}"

Logging

Using Google Cloud Logging is recommended for improved performance and more granular access control. For more information, see How to configure Google Cloud Logging for the Anjuna Confidential Container.

Run the following command to grant write access for logging:

$ gcloud projects add-iam-policy-binding \
      "${GCP_PROJECT}" \
      --member="serviceAccount:${CLIENT_SERVICE_ACCOUNT_EMAIL}" \
      --role="roles/logging.logWriter"

Cloud Storage bucket for persistent storage

The Vault server uses Google Cloud Storage as a persistent storage backend. This ensures that secrets are not lost if the Vault instance is deleted.

Data stored in Google Cloud Storage is encrypted in transit and at rest, but anyone with Google Cloud access to the KMS key and the bucket can decrypt and read it. Therefore, grant read and write permissions for the storage bucket only to the Vault service account.

The following commands create a storage bucket for the Vault storage backend and grant access to the service account:

$ gcloud storage buckets create \
    "gs://${VAULT_SERVER_STORAGE}" \
    --location "${GCP_REGION}"
$ gcloud storage buckets add-iam-policy-binding \
    "gs://${VAULT_SERVER_STORAGE}" \
    --member "serviceAccount:${VAULT_SERVICE_ACCOUNT_EMAIL}" \
    --role=roles/storage.objectAdmin
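
To confirm that the bucket exists and that only the intended service account was granted object access, you can describe the bucket and review its IAM policy (a minimal sketch):

$ # Optional: confirm the bucket exists and review its IAM bindings
$ gcloud storage buckets describe "gs://${VAULT_SERVER_STORAGE}"
$ gcloud storage buckets get-iam-policy "gs://${VAULT_SERVER_STORAGE}"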

TLS configuration

A TLS certificate is used so that clients can verify the authenticity of the Vault server and the APM. The certificate and certificate private key are stored in the Google Cloud Secret Manager for access by the Vault server.

Server hostname

The hostname is part of the TLS certificate. This means that clients must access the Vault/APM server using the same hostname, or TLS certificate verification will fail. Clients of the server include the administration computer and any client enclaves.

If you have obtained a fully-qualified domain name (FQDN) for your Vault server, such as anjuna-policy-manager.example.com, run the following command, replacing <fully-qualified domain name> with the full domain name of the server:

$ export VAULT_SERVER_HOST="<fully-qualified domain name>"

If you do not have a fully-qualified domain name, you can use the Internal DNS feature of Google Cloud. When a VM instance is created, it is automatically assigned an internal DNS hostname based on the instance name, Google Cloud zone, and project.

The internal DNS name of the Vault server is automatically available to clients only within the same Google Cloud VPC Network. Therefore, when using this feature, client enclaves should be deployed on the same VPC network.

To use the Internal DNS name, run the following command instead:

$ export VAULT_SERVER_HOST="${VAULT_SERVER_INSTANCE}.${GCP_ZONE}.c.${GCP_PROJECT}.internal"

TLS key and certificate

For production, create a TLS certificate and private key, and sign the certificate with your organization’s certificate authority (CA). Use the value of the VAULT_SERVER_HOST environment variable that you assigned in the Server hostname section as the certificate hostname. To view the value of this variable, run the following command:

$ echo "${VAULT_SERVER_HOST}"
If you do not have a TLS certificate already, you can generate a self-signed certificate for development purposes. However, this is not recommended in production environments.

Creating a self-signed certificate

For testing, you can use the following procedure to generate a self-signed certificate:

$ # Create root CA & Private key
$ openssl req -x509 \
            -sha256 -days 365 \
            -nodes \
            -newkey rsa:2048 \
            -subj "/CN=dev.anjuna.io/C=US/L=Palo Alto" \
            -keyout rootCA.key -out rootCA.crt

$ # Generate the server private key
$ openssl genrsa -out tls-key.pem 2048

$ # Create the CSR config
$ cat > csr.conf <<EOF
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn

[ dn ]
C = US
ST = California
L = Palo Alto
O = Anjuna Security
OU = Anjuna Security Dev
CN = dev.anjuna.io

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = ${VAULT_SERVER_HOST}
EOF

$ # Create a CSR using the private key
$ openssl req -new -key tls-key.pem -out server.csr -config csr.conf

$ # Create an external config file for the certificate
$ cat > cert.conf <<EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = ${VAULT_SERVER_HOST}
EOF

$ # Sign the server certificate with the self-signed CA
$ openssl x509 -req \
    -in server.csr \
    -CA rootCA.crt -CAkey rootCA.key \
    -CAcreateserial -out tls-cert.pem \
    -days 365 \
    -sha256 -extfile cert.conf
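
If you generated the certificate using the procedure above, you can optionally confirm that it chains to the root CA and that the Subject Alternative Name matches the value of VAULT_SERVER_HOST before uploading it (a minimal check using openssl):

$ # Optional: verify the certificate chain and the SAN entry
$ openssl verify -CAfile rootCA.crt tls-cert.pem
$ openssl x509 -in tls-cert.pem -noout -text | grep -A1 "Subject Alternative Name"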

Upload TLS key and certificate to Google Cloud Secret Manager

The following commands will store the private key and certificate in Google Cloud Secret Manager and grant access to the Vault service account. These commands assume that the TLS key and certificate are named tls-key.pem and tls-cert.pem respectively:

$ gcloud secrets create \
    "${VAULT_SERVER_TLS_KEY_SECRET}" \
    --project "${GCP_PROJECT}" \
    --data-file "tls-key.pem"
$ gcloud secrets create \
    "${VAULT_SERVER_TLS_CERT_SECRET}" \
    --project "${GCP_PROJECT}" \
    --data-file "tls-cert.pem"
$ gcloud secrets add-iam-policy-binding \
    "${VAULT_SERVER_TLS_KEY_SECRET}" \
    --project "${GCP_PROJECT}" \
    --member "serviceAccount:${VAULT_SERVICE_ACCOUNT_EMAIL}" \
    --role "roles/secretmanager.secretAccessor"
$ gcloud secrets add-iam-policy-binding \
    "${VAULT_SERVER_TLS_CERT_SECRET}" \
    --project "${GCP_PROJECT}" \
    --member "serviceAccount:${VAULT_SERVICE_ACCOUNT_EMAIL}" \
    --role="roles/secretmanager.secretAccessor"

Auto-unseal

The Vault server provides a way to auto-unseal the encrypted storage backend using a Google Cloud KMS key bound to the Vault service account. With this feature, the server is accessible upon enclave boot without having to manually unseal the encrypted storage.

The Vault server requires some permissions to be able to unseal using the Google Cloud KMS keys:

  • cloudkms.cryptoKeyVersions.useToEncrypt and cloudkms.cryptoKeyVersions.useToDecrypt to perform cryptographic operations with this key

  • cloudkms.cryptoKeys.get to obtain some metadata about this key

The following commands create a KMS keyring and key for auto-unsealing, a role to access this key, and bind that role to the service account:

$ gcloud kms keyrings create \
    "${KMS_KEYRING}" \
    --project "${GCP_PROJECT}" \
    --location "${KMS_LOCATION}"
$ gcloud kms keys create \
    "${KMS_KEY}" \
    --keyring "${KMS_KEYRING}" \
    --project "${GCP_PROJECT}" \
    --location "${KMS_LOCATION}" \
    --purpose "encryption"
$ gcloud iam roles create \
    "${KMS_UNSEAL_ROLE_NAME}" \
    --project "${GCP_PROJECT}" \
    --title "Vault KMS Unseal Role" \
    --permissions "cloudkms.cryptoKeyVersions.useToEncrypt,cloudkms.cryptoKeyVersions.useToDecrypt,cloudkms.cryptoKeys.get"
$ gcloud kms keys add-iam-policy-binding \
    "${KMS_KEY}" \
    --keyring "${KMS_KEYRING}" \
    --location "${KMS_LOCATION}" \
    --project "${GCP_PROJECT}" \
    --member "serviceAccount:${VAULT_SERVICE_ACCOUNT_EMAIL}" \
    --role "projects/${GCP_PROJECT}/roles/${KMS_UNSEAL_ROLE_NAME}"