Vault installation to Amazon Elastic Kubernetes Service via Helm
Amazon Elastic Kubernetes Service (EKS) can run and scale Vault in the Amazon Web Services (AWS) cloud or on-premises. Creating a Kubernetes cluster and launching Vault via the Helm chart can be accomplished all from the command-line.
In this tutorial, you create a cluster in AWS, deploy a MySQL server, install Vault in high-availability (HA) mode via the Helm chart and then configure the authentication between Vault and the cluster. Then you deploy a web application with deployment annotations so the application's secrets are installed via the Vault Agent injector service.
Prerequisites
This tutorial requires an AWS account, AWS command-line interface (CLI), Amazon EKS CLI, Kubernetes CLI and the Helm CLI.
First, create an AWS account.
Next, install AWS CLI, Amazon EKS CLI, kubectl CLI and helm CLI.
Install aws with Homebrew.

$ brew install awscli

Install eksctl with Homebrew.

$ brew install eksctl

Install kubectl with Homebrew.

$ brew install kubernetes-cli

Install helm with Homebrew.

$ brew install helm
Next, configure the aws CLI with credentials.
$ aws configure
This command prompts you to enter an AWS access key ID, AWS secret access key, and default region name.
Tip
The above example uses IAM user authentication. You can use any authentication method described in the AWS provider documentation.
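As a reference, aws configure writes the values you enter to two files in your home directory. A sketch of the result, using AWS's documented placeholder credentials and the region from this tutorial:

```ini
; ~/.aws/credentials (placeholder values from the AWS documentation)
[default]
aws_access_key_id     = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

; ~/.aws/config
[default]
region = us-west-1
```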
Next, create a key pair so that you can SSH into the created nodes.
$ aws ec2 create-key-pair --key-name learn-vault
Start cluster
A Vault cluster launched in high-availability mode requires a Kubernetes cluster with three nodes.
Provision with Terraform
An alternative way to manage the lifecycle of the cluster is with Terraform. Learn more in the Provision an EKS Cluster (AWS) tutorial.
Create a three-node cluster named learn-vault.

$ eksctl create cluster \
    --name learn-vault \
    --nodes 3 \
    --with-oidc \
    --ssh-access \
    --ssh-public-key learn-vault \
    --managed
Example output:
[ℹ]  eksctl version 0.97.0
[ℹ]  using region us-west-1
...snip...
[ℹ]  node "ip-192-168-26-181.us-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-34-73.us-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-35-238.us-west-1.compute.internal" is ready
[ℹ]  kubectl command should work with "/Users/yoko/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "learn-vault" in "us-west-1" region is ready
The cluster is created, deployed and then health-checked. When the cluster is ready, the command modifies the kubectl configuration so that the commands you issue are performed against that cluster.

Managing multiple clusters

kubectl enables you to manage multiple clusters through the context configuration. Display the available contexts with kubectl config get-contexts and set the context by name with kubectl config use-context NAME.

Display the nodes of the cluster.
$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-26-181.us-west-1.compute.internal   Ready    <none>   29m   v1.22.6-eks-7d68063
ip-192-168-34-73.us-west-1.compute.internal    Ready    <none>   29m   v1.22.6-eks-7d68063
ip-192-168-35-238.us-west-1.compute.internal   Ready    <none>   29m   v1.22.6-eks-7d68063
Enable volume support with the EBS CSI driver add-on.
$ eksctl create iamserviceaccount \
    --name ebs-csi-controller-sa \
    --namespace kube-system \
    --cluster learn-vault \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
    --approve \
    --role-only \
    --role-name AmazonEKS_EBS_CSI_DriverRole
$ eksctl create addon \
    --name aws-ebs-csi-driver \
    --cluster learn-vault \
    --service-account-role-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):role/AmazonEKS_EBS_CSI_DriverRole
The cluster is ready.
Install the MySQL Helm chart
MySQL is a fast, reliable, scalable, and easy to use open-source relational database system. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.
Add the Bitnami Helm repository.
$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
Install the latest version of the MySQL Helm chart.
$ helm install mysql bitnami/mysql
Output:
NAME: mysql
LAST DEPLOYED: Thu May 19 10:37:43 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mysql
CHART VERSION: 9.0.2
APP VERSION: 8.0.29

** Please be patient while the chart is being deployed **

Tip:

  Watch the deployment status using the command: kubectl get pods -w --namespace default

Services:

  echo Primary: mysql.default.svc.cluster.local:3306

Execute the following to get the administrator credentials:

  echo Username: root
  MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)

To connect to your database:

  1. Run a pod that you can use as a client:

      kubectl run mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.29-debian-10-r21 --namespace default --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD --command -- bash

  2. To connect to primary service (read/write):

      mysql -h mysql.default.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"
By default the MySQL Helm chart deploys a single pod and a service.
Get all the pods within the default namespace.
$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   1/1     Running   0          2m58s
Wait until the mysql-0 pod is running and ready (1/1).

The mysql-0 pod runs a MySQL server.

Demonstration Only

MySQL should be run with additional pods to ensure reliability when used in production. Refer to the MySQL Helm chart to override the default parameters.
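As a sketch of what a more production-oriented install might look like, the Bitnami chart exposes parameters such as architecture and secondary.replicaCount; verify the parameter names against the chart's values reference for the version you deploy:

```yaml
# mysql-values.yml -- illustrative override, not used in this tutorial
architecture: replication   # primary plus read replicas instead of standalone
secondary:
  replicaCount: 2           # two read replicas
auth:
  database: my_database     # create an application database at install time
```

You would pass this file at install time with helm install mysql bitnami/mysql --values mysql-values.yml.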
Get all the services within the default namespace.
$ kubectl get services
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes       ClusterIP   10.100.0.1      <none>        443/TCP    3h24m
mysql            ClusterIP   10.100.68.110   <none>        3306/TCP   15m
mysql-headless   ClusterIP   None            <none>        3306/TCP   15m
The mysql service directs requests to the mysql-0 pod. Pods within the cluster may address the MySQL server with the address mysql.default.svc.cluster.local.

The MySQL root password is stored as a Kubernetes secret. This password is required by Vault to create credentials for the application pod deployed later.

Create a variable named ROOT_PASSWORD that stores the MySQL root user password.

$ ROOT_PASSWORD=$(kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
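Kubernetes stores secret data base64-encoded rather than encrypted, which is why the command above pipes the value through base64 --decode. A self-contained sketch of the round trip (the sample value is made up):

```shell
# Encode a sample value the way Kubernetes stores secret data...
encoded=$(printf 'sup3rs3cret' | base64)
# ...then decode it, as done when reading the mysql-root-password key.
printf '%s' "$encoded" | base64 --decode   # prints sup3rs3cret
```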
The MySQL server, addressed through the service, is ready.
Install the Vault Helm chart
The recommended way to run Vault on Kubernetes is via the Helm chart.
Add the HashiCorp Helm repository.
$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
Update all the repositories to ensure helm is aware of the latest versions.

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "hashicorp" chart repository
Update Complete. ⎈Happy Helming!⎈
Search for all the Vault Helm chart versions.
$ helm search repo vault --versions
NAME              CHART VERSION   APP VERSION   DESCRIPTION
hashicorp/vault   0.20.0          1.10.3        Official HashiCorp Vault Chart
hashicorp/vault   0.19.0          1.9.2         Official HashiCorp Vault Chart
hashicorp/vault   0.18.0          1.9.0         Official HashiCorp Vault Chart
## ...
The Vault Helm chart contains all the necessary components to run Vault in several different modes.
Default behavior
By default, Vault is launched on a single pod in standalone mode with a file storage backend. Enabling high-availability with Integrated Storage requires that you override these defaults.
Create a file named helm-vault-raft-values.yml with the following contents:

$ cat > helm-vault-raft-values.yml <<EOF
server:
  affinity: ""
  ha:
    enabled: true
    raft:
      enabled: true
      setNodeId: true
      config: |
        cluster_name = "vault-integrated-storage"
        storage "raft" {
          path = "/vault/data/"
        }
        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_disable = "true"
        }
        service_registration "kubernetes" {}
EOF
Recommendation

If you are using Prometheus for monitoring and alerting, we recommend setting the cluster_name in the HCL configuration. With the Vault Helm chart, this is accomplished with the config parameter.

Install the latest version of the Vault Helm chart with Integrated Storage.
$ helm install vault hashicorp/vault --values helm-vault-raft-values.yml
Example output:
NAME: vault
LAST DEPLOYED: Wed May 18 20:19:15 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing HashiCorp Vault!

Now that you have deployed Vault, you should look over the docs on using
Vault with Kubernetes available here:

https://www.vaultproject.io/docs/

Your release is named vault. To learn more about the release, try:

  $ helm status vault
  $ helm get manifest vault
This creates three Vault server instances with an Integrated Storage (Raft) backend. The Vault pods and the Vault Agent Injector pod are deployed in the default namespace.
Get all the pods within the default namespace.
$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
vault-0                                 0/1     Running   0          30s
vault-1                                 0/1     Running   0          30s
vault-2                                 0/1     Running   0          30s
vault-agent-injector-56bf46695f-crqqn   1/1     Running   0          30s
The vault-0, vault-1, and vault-2 pods deployed run a Vault server and report that they are Running but that they are not ready (0/1). This is because the status check defined in a readinessProbe returns a non-zero exit code.

The vault-agent-injector pod deployed is a Kubernetes Mutation Webhook Controller. The controller intercepts pod events and applies mutations to the pod if specific annotations exist within the request.

Retrieve the status of Vault on the vault-0 pod.

$ kubectl exec vault-0 -- vault status
Example output:
The status command reports that Vault is not initialized and that it is sealed. Before Vault can authenticate with Kubernetes and manage secrets, it must be initialized and unsealed.

Key                Value
---                -----
Seal Type          shamir
Initialized        false
Sealed             true
Total Shares       0
Threshold          0
Unseal Progress    0/0
Unseal Nonce       n/a
Version            1.10.3
Storage Type       raft
HA Enabled         true
command terminated with exit code 2
Initialize and unseal one Vault pod
Vault starts uninitialized and in the sealed state. Prior to initialization the Integrated Storage backend is not prepared to receive data.
Initialize Vault with one key share and one key threshold.
$ kubectl exec vault-0 -- vault operator init \
    -key-shares=1 \
    -key-threshold=1 \
    -format=json > cluster-keys.json
The operator init command generates a root key that it disassembles into key shares (-key-shares=1) and then sets the number of key shares required to unseal Vault (-key-threshold=1). These key shares are written to the output as unseal keys in JSON format (-format=json). Here the output is redirected to a file named cluster-keys.json.

Display the unseal key found in cluster-keys.json.

$ cat cluster-keys.json | jq -r ".unseal_keys_b64[]"
rrUtT32GztRy/pVWmcH0ZQLCCXon/TxCgi40FL1Zzus=
Insecure operation
Do not run an unsealed Vault in production with a single key share and a single key threshold. This approach is only used here to simplify the unsealing process for this demonstration.
Create a variable named VAULT_UNSEAL_KEY to capture the Vault unseal key.

$ VAULT_UNSEAL_KEY=$(cat cluster-keys.json | jq -r ".unseal_keys_b64[]")
After initialization, Vault is configured to know where and how to access the storage, but does not know how to decrypt any of it. Unsealing is the process of constructing the root key necessary to read the decryption key to decrypt the data, allowing access to the Vault.
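Vault splits the root key with Shamir's Secret Sharing. As a loose illustration only (Vault's actual implementation is polynomial-based, not a simple XOR), a 2-of-2 split has the same flavor as XOR-ing a key with a random share: neither share alone reveals the key, but combining them reconstructs it.

```shell
# Illustration only: a toy 2-of-2 split, NOT Vault's Shamir implementation.
key=42                          # the "root key"
share1=177                      # stand-in for a randomly generated share
share2=$((key ^ share1))        # the second share
echo $((share1 ^ share2))       # combining both shares recovers 42
```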
Unseal Vault running on the vault-0 pod.

$ kubectl exec vault-0 -- vault operator unseal $VAULT_UNSEAL_KEY
Example output: The operator unseal command reports that Vault is initialized and unsealed.

Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  false
Total Shares            1
Threshold               1
Version                 1.10.3
Storage Type            raft
Cluster Name            vault-cluster-16efc511
Cluster ID              649c814a-a505-421d-e4bb-d9175c7e6b38
HA Enabled              true
HA Cluster              n/a
HA Mode                 standby
Active Node Address     <none>
Raft Committed Index    31
Raft Applied Index      31
Insecure operation
Providing the unseal key with the command writes the key to your shell's history. This approach is only used here to simplify the unsealing process for this demonstration.
Retrieve the status of Vault on the vault-0 pod.

$ kubectl exec vault-0 -- vault status
Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  false
Total Shares            1
Threshold               1
Version                 1.10.3
Storage Type            raft
Cluster Name            vault-cluster-16efc511
Cluster ID              649c814a-a505-421d-e4bb-d9175c7e6b38
HA Enabled              true
HA Cluster              https://vault-0.vault-internal:8201
HA Mode                 active
Active Since            2022-05-19T17:41:07.226862254Z
Raft Committed Index    36
Raft Applied Index      36
The Vault server is initialized and unsealed.
Join the other Vaults to the Vault cluster
The Vault server running on the vault-0 pod is a Vault HA cluster with a single node. Displaying the list of nodes requires that you log in with the root token.

Display the root token found in cluster-keys.json.

$ cat cluster-keys.json | jq -r ".root_token"
hvs.3VYhJODbhlQPeW5zspVvBCzD
Create a variable named CLUSTER_ROOT_TOKEN to capture the Vault root token.

$ CLUSTER_ROOT_TOKEN=$(cat cluster-keys.json | jq -r ".root_token")
Log in with the root token on the vault-0 pod.

$ kubectl exec vault-0 -- vault login $CLUSTER_ROOT_TOKEN
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                hvs.3VYhJODbhlQPeW5zspVvBCzD
token_accessor       5sy3tZm3qCQ1ai7wTDOS97XG
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
Insecure operation
The login command stores the root token in a file for the container user. Subsequent commands are executed with that token. This approach is only used here to simplify the cluster configuration demonstration.
List all the nodes within the Vault cluster for the vault-0 pod.

$ kubectl exec vault-0 -- vault operator raft list-peers
Node                                    Address                        State     Voter
----                                    -------                        -----     -----
09d9b35d-0336-7de7-cc94-90a1f3a0aff8    vault-0.vault-internal:8201    leader    true
This displays the one node within the Vault cluster. This cluster is addressable through the Kubernetes service vault-0.vault-internal created by the Helm chart. The Vault servers on the other pods need to join this cluster and be unsealed.

Join the Vault server on vault-1 to the Vault cluster.

$ kubectl exec vault-1 -- vault operator raft join http://vault-0.vault-internal:8200
Key       Value
---       -----
Joined    true
This Vault server joins the cluster sealed. Unsealing it requires the same unseal key, VAULT_UNSEAL_KEY, provided to the first Vault server.

Unseal the Vault server on vault-1 with the unseal key.

$ kubectl exec vault-1 -- vault operator unseal $VAULT_UNSEAL_KEY
Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  false
Total Shares            1
Threshold               1
Version                 1.10.3
Storage Type            raft
Cluster Name            vault-cluster-16efc511
Cluster ID              649c814a-a505-421d-e4bb-d9175c7e6b38
HA Enabled              true
HA Cluster              https://vault-0.vault-internal:8201
HA Mode                 standby
Active Node Address     http://192.168.58.131:8200
Raft Committed Index    76
Raft Applied Index      76
The Vault server on vault-1 is now a functional node within the Vault cluster.

Join the Vault server on vault-2 to the Vault cluster.

$ kubectl exec vault-2 -- vault operator raft join http://vault-0.vault-internal:8200
Key       Value
---       -----
Joined    true
Unseal the Vault server on vault-2 with the unseal key.

$ kubectl exec vault-2 -- vault operator unseal $VAULT_UNSEAL_KEY
Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  false
Total Shares            1
Threshold               1
Version                 1.10.3
Storage Type            raft
Cluster Name            vault-cluster-16efc511
Cluster ID              649c814a-a505-421d-e4bb-d9175c7e6b38
HA Enabled              true
HA Cluster              https://vault-0.vault-internal:8201
HA Mode                 standby
Active Node Address     http://192.168.58.131:8200
Raft Committed Index    76
Raft Applied Index      76
The Vault server on vault-2 is now a functional node within the Vault cluster.

List all the nodes within the Vault cluster for the vault-0 pod.

$ kubectl exec vault-0 -- vault operator raft list-peers
Node                                    Address                        State       Voter
----                                    -------                        -----       -----
09d9b35d-0336-7de7-cc94-90a1f3a0aff8    vault-0.vault-internal:8201    leader      true
7078a8b7-7948-c224-a97f-af64771ad999    vault-1.vault-internal:8201    follower    true
aaf46893-0a93-17ce-115e-f57033d7f41d    vault-2.vault-internal:8201    follower    true
This displays all three nodes within the Vault cluster.
Voter status
It may take additional time for each node's voter status to return true.
Get all the pods within the default namespace.
$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
vault-0                                 1/1     Running   0          5m49s
vault-1                                 1/1     Running   0          5m48s
vault-2                                 1/1     Running   0          5m47s
vault-agent-injector-5945fb98b5-vzbqv   1/1     Running   0          5m50s
The vault-0, vault-1, and vault-2 pods report that they are Running and ready (1/1).
Create a Vault database role
The web application that you deploy in the Launch a web application section reads MySQL credentials that Vault generates at the path database/creds/readonly. Creating these credentials requires that you enable the database secrets engine, configure it with the connection details for the MySQL server, and define a role that generates the database users.
Enable database secrets at the path database.

$ kubectl exec vault-0 -- vault secrets enable database
Success! Enabled the database secrets engine at: database/
Configure the database secrets engine with the connection credentials for the MySQL database.
$ kubectl exec vault-0 -- vault write database/config/mysql \
    plugin_name=mysql-database-plugin \
    connection_url="{{username}}:{{password}}@tcp(mysql.default.svc.cluster.local:3306)/" \
    allowed_roles="readonly" \
    username="root" \
    password="$ROOT_PASSWORD"
Output:
Success! Data written to: database/config/mysql
Create a database secrets engine role named readonly.

$ kubectl exec vault-0 -- vault write database/roles/readonly \
    db_name=mysql \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT SELECT ON *.* TO '{{name}}'@'%';" \
    default_ttl="1h" \
    max_ttl="24h"
The readonly role generates credentials that are able to perform queries for any table in the database.

Output:
Success! Data written to: database/roles/readonly
Note
Important: when you define the role in a production deployment, you must provide creation_statements and revocation_statements that are valid for the database you've configured. If you do not specify statements appropriate to creating, revoking, or rotating users, Vault inserts generic statements that can be unsuitable for your deployment.
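For example, a production role might scope the grant to a single schema and define an explicit revocation statement. The schema name app_db below is hypothetical:

```sql
-- creation_statements (hypothetical, scoped to one schema):
CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';
GRANT SELECT ON app_db.* TO '{{name}}'@'%';

-- revocation_statements:
DROP USER IF EXISTS '{{name}}'@'%';
```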
Read credentials from the readonly database role.

$ kubectl exec vault-0 -- vault read database/creds/readonly
Key                Value
---                -----
lease_id           database/creds/readonly/qtWlgBT1YTQEPKiXe7CrotsT
lease_duration     1h
lease_renewable    true
password           WLESe5T-RLkTj-h-lDbT
username           v-root-readonly-pk168KvLS8sc80Of
Learn more
For more information refer to the Database Secrets Engine tutorial.
Vault is able to generate credentials within the MySQL database.
Configure Kubernetes authentication
The initial root token is a privileged user that can perform any operation at any path. The web application only requires the ability to read secrets defined at a single path. This application should authenticate and be granted a token with limited access.
Best practice
We recommend that root tokens are used only for initial setup of an authentication method and policies. Afterwards they should be revoked. This tutorial does not show you how to revoke the root token.
Vault provides a Kubernetes authentication method that enables clients to authenticate with a Kubernetes Service Account Token.
Start an interactive shell session on the vault-0 pod.

$ kubectl exec --stdin=true --tty=true vault-0 -- /bin/sh
/ $
Your system prompt is replaced with a new prompt / $.

Note

The prompt within this section is shown as $ but the commands are intended to be executed within this interactive shell on the vault-0 container.

Enable the Kubernetes authentication method.
$ vault auth enable kubernetes
Success! Enabled kubernetes auth method at: kubernetes/
Vault accepts a service account token from any client within the Kubernetes cluster. During authentication, Vault verifies that the service account token is valid by querying the Kubernetes token review endpoint.
Configure the Kubernetes authentication method to use the location of the Kubernetes API.
For the best compatibility with recent Kubernetes versions, ensure you are using Vault v1.9.3 or greater.
$ vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"
Output:
Success! Data written to: auth/kubernetes/config
The environment variable KUBERNETES_PORT_443_TCP_ADDR is defined and references the internal network address of the Kubernetes host.

For a client of the Vault server to read the credentials defined in the Create a Vault database role step requires that the read capability be granted for the path database/creds/readonly.

Write out the policy named devwebapp that enables the read capability for secrets at path database/creds/readonly.

$ vault policy write devwebapp - <<EOF
path "database/creds/readonly" {
  capabilities = ["read"]
}
EOF
Create a Kubernetes authentication role named devweb-app.

$ vault write auth/kubernetes/role/devweb-app \
    bound_service_account_names=internal-app \
    bound_service_account_namespaces=default \
    policies=devwebapp \
    ttl=24h
Output:
Success! Data written to: auth/kubernetes/role/devweb-app
The role connects a Kubernetes service account, internal-app (created in the next step), and namespace, default, with the Vault policy, devwebapp. The tokens returned after authentication are valid for 24 hours.

Exit the vault-0 pod.

$ exit
Launch a web application
The web application pod requires the creation of the internal-app Kubernetes service account specified in the Vault Kubernetes authentication role created in the Configure Kubernetes authentication step.
Define a Kubernetes service account named internal-app.

$ cat > internal-app.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-app
EOF
Create the internal-app service account.

$ kubectl apply --filename internal-app.yaml
serviceaccount/internal-app created
Define a pod named devwebapp with the web application.

$ cat > devwebapp.yaml <<EOF
---
apiVersion: v1
kind: Pod
metadata:
  name: devwebapp
  labels:
    app: devwebapp
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/agent-cache-enable: "true"
    vault.hashicorp.com/role: "devweb-app"
    vault.hashicorp.com/agent-inject-secret-database-connect.sh: "database/creds/readonly"
    vault.hashicorp.com/agent-inject-template-database-connect.sh: |
      {{- with secret "database/creds/readonly" -}}
      mysql -h my-release-mysql.default.svc.cluster.local --user={{ .Data.username }} --password={{ .Data.password }} my_database
      {{- end -}}
spec:
  serviceAccountName: internal-app
  containers:
    - name: devwebapp
      image: jweissig/app:0.0.1
EOF
Create the devwebapp pod.

$ kubectl apply --filename devwebapp.yaml
pod/devwebapp created
This definition creates a pod with the specified container running with the internal-app Kubernetes service account. The container within the pod is unaware of the Vault cluster. The Vault Injector service reads the annotations and determines that it should take action (vault.hashicorp.com/agent-inject). The credentials, read from Vault at database/creds/readonly with the devweb-app Vault role, are written to the file location /vault/secrets/database-connect.sh and mounted on the pod.

The credentials are requested first by the vault-agent-init container to ensure they are present when the application pod initializes. After the application pod initializes, the injector service creates a vault-agent container that assists the application in maintaining the credentials. The credentials requested by the vault-agent-init container are cached (vault.hashicorp.com/agent-cache-enable: "true") and used by the vault-agent container.

Agent Cache
Prior to Vault 1.7 and Vault-K8s 0.9.0, the vault.hashicorp.com/agent-cache-enable parameter was not available. The credentials requested by the vault-agent-init container were requested again by the vault-agent container, resulting in multiple credentials issued for the same pod.

Learn more
For more information about annotations refer to the Injecting Secrets into Kubernetes Pods via Vault Agent Injector tutorial and the Annotations documentation.
Get all the pods within the default namespace.
$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
devwebapp                               2/2     Running   0          36s
mysql-0                                 1/1     Running   0          7m32s
vault-0                                 1/1     Running   0          5m40s
vault-1                                 1/1     Running   0          5m40s
vault-2                                 1/1     Running   0          5m40s
vault-agent-injector-76fff8f7c6-lk6gz   1/1     Running   0          5m40s
Wait until the devwebapp pod reports that it is running and ready (2/2).

Display the secrets written to the file /vault/secrets/database-connect.sh on the devwebapp pod.

$ kubectl exec --stdin=true \
    --tty=true devwebapp \
    --container devwebapp \
    -- cat /vault/secrets/database-connect.sh
The result displays a mysql command with the credentials generated for this pod.

mysql -h my-release-mysql.default.svc.cluster.local --user=v-kubernetes-readonly-zpqRzAee2b --password=Jb4epAXSirS2s-pnrI9- my_database
Clean up
Destroy the cluster.
$ eksctl delete cluster --name learn-vault
The cluster is destroyed.
Next steps
You launched Vault in high-availability mode with a Helm chart. Learn more about the Vault Helm chart by reading the documentation or exploring the project source code.
The pod you deployed used annotations to inject the secret into the file system. Explore how pods can retrieve secrets through the Vault Injector service via annotations, or secrets mounted on ephemeral volumes.