Deploy HCP Consul Dedicated with EKS using Terraform
In the previous tutorial you got an overview of the available Terraform resources for deploying your application on AWS using HCP Consul Dedicated as service mesh.
In this tutorial you will deploy a demo application on EKS using Terraform code that you can generate from the HCP UI. The Terraform code deploys HCP Consul Dedicated and peers your HashiCorp Virtual Network (HVN) with your VPC.
Prerequisites
To complete this tutorial you will need the following.
Basic command line access
Terraform v1.0.0+ CLI installed
Git installed
Admin access to the HashiCorp Cloud Platform (HCP) Consul portal
Note

HCP Admin access is necessary to create the Service Principal credentials used by Terraform to interact with HCP. If you already have a Service Principal key and client ID provided by your admin, you only require Contributor access. If you are an Admin and would like to create a Service Principal, check the Deploy HCP Consul Dedicated with Terraform tutorial for instructions on how to create one.

An AWS account and AWS Access Credentials configured locally.
You can configure the AWS credentials using environment variables.
export AWS_ACCESS_KEY_ID=<your AWS access key ID>
export AWS_SECRET_ACCESS_KEY=<your AWS secret access key>
export AWS_SESSION_TOKEN=<your AWS session token>
Generate Terraform template
You can generate a Terraform template for this example directly from the Overview page in your HCP organization.
To authenticate Terraform to HCP you need a Service Principal with Contributor permissions. If you are logged in with an Admin account, you can create one during this step.
In the Authenticate Terraform to HCP section, click Generate Service Principal and Key.
HCP will generate a new set of credentials. Copy them using the Copy code button and export them in your terminal.
export HCP_CLIENT_ID=<your client id>
export HCP_CLIENT_SECRET=<the key generated>
Note

If you are not an Admin in your HCP account, contact your administrator to obtain valid Service Principal credentials before proceeding with the tutorial.
Get Terraform code
Once you have filled in all the options at the bottom of the page, you will find the generated Terraform code.
Click on Copy code to copy it to your clipboard and save it in a file named main.tf.
Note
Content should resemble the example below. This example is not guaranteed to be up to date. Always refer to the template provided by the HCP UI after you complete the configuration.
main.tf
locals {
  vpc_region             = "{{ .VPCRegion }}"
  hvn_region             = "{{ .HVNRegion }}"
  cluster_id             = "{{ .ClusterID }}"
  hvn_id                 = "{{ .ClusterID }}-hvn"
  vpc_id                 = "{{ .VPCID }}"
  private_route_table_id = "{{ .PrivateRouteTableID }}"
  private_subnet1        = "{{ .PrivateSubnet1 }}"
  private_subnet2        = "{{ .PrivateSubnet2 }}"
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.43"
    }
    hcp = {
      source  = "hashicorp/hcp"
      version = ">= 0.18.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.4.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.3.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.11.3"
    }
  }
}

provider "aws" {
  region = local.vpc_region
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "kubectl" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "17.24.0"

  kubeconfig_api_version = "client.authentication.k8s.io/v1beta1"
  cluster_name           = "${local.cluster_id}-eks"
  cluster_version        = "1.21"
  subnets                = [local.private_subnet1, local.private_subnet2]
  vpc_id                 = local.vpc_id

  node_groups = {
    application = {
      name_prefix      = "hashicups"
      instance_types   = ["t3a.medium"]
      desired_capacity = 3
      max_capacity     = 3
      min_capacity     = 3
    }
  }
}

# The HVN created in HCP
resource "hcp_hvn" "main" {
  hvn_id         = local.hvn_id
  cloud_provider = "aws"
  region         = local.hvn_region
  cidr_block     = "172.25.32.0/20"
}

module "aws_hcp_consul" {
  source  = "hashicorp/hcp-consul/aws"
  version = "~> 0.7.0"

  hvn                = hcp_hvn.main
  vpc_id             = local.vpc_id
  subnet_ids         = [local.private_subnet1, local.private_subnet2]
  route_table_ids    = [local.private_route_table_id]
  security_group_ids = [module.eks.cluster_primary_security_group_id]
}

resource "hcp_consul_cluster" "main" {
  cluster_id      = local.cluster_id
  hvn_id          = hcp_hvn.main.hvn_id
  public_endpoint = true
  tier            = "development"
}

resource "hcp_consul_cluster_root_token" "token" {
  cluster_id = hcp_consul_cluster.main.id
}

module "eks_consul_client" {
  source  = "hashicorp/hcp-consul/aws//modules/hcp-eks-client"
  version = "~> 0.7.0"

  cluster_id       = hcp_consul_cluster.main.cluster_id
  consul_hosts     = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["retry_join"]
  k8s_api_endpoint = module.eks.cluster_endpoint
  consul_version   = hcp_consul_cluster.main.consul_version

  boostrap_acl_token    = hcp_consul_cluster_root_token.token.secret_id
  consul_ca_file        = base64decode(hcp_consul_cluster.main.consul_ca_file)
  datacenter            = hcp_consul_cluster.main.datacenter
  gossip_encryption_key = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["encrypt"]

  # The EKS node group will fail to create if the clients are
  # created at the same time. This forces the client to wait until
  # the node group is successfully created.
  depends_on = [module.eks]
}

module "demo_app" {
  source  = "hashicorp/hcp-consul/aws//modules/k8s-demo-app"
  version = "~> 0.7.0"

  depends_on = [module.eks_consul_client]
}

output "consul_root_token" {
  value     = hcp_consul_cluster_root_token.token.secret_id
  sensitive = true
}

output "consul_url" {
  value = hcp_consul_cluster.main.public_endpoint ? (
    hcp_consul_cluster.main.consul_public_endpoint_url
    ) : (
    hcp_consul_cluster.main.consul_private_endpoint_url
  )
}

output "kubeconfig_filename" {
  value = abspath(module.eks.kubeconfig_filename)
}

output "hashicups_url" {
  value = module.demo_app.hashicups_url
}

output "next_steps" {
  value = "Hashicups Application will be ready in ~2 minutes. Use 'terraform output consul_root_token' to retrieve the root token."
}
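Notice that the eks_consul_client module derives its join addresses and gossip key by decoding the cluster's consul_config_file attribute with jsondecode(base64decode(...)). A minimal shell sketch of that decoding step, using a fabricated sample config rather than real cluster data:

```shell
# Fabricated sample of what the base64-encoded consul_config_file attribute holds.
# A real config contains more fields; only the ones the module reads are shown.
CONFIG_B64=$(printf '{"retry_join":["192.0.2.10"],"encrypt":"fake-gossip-key"}' | base64)

# Equivalent of Terraform's base64decode(): recover the JSON document
CONFIG_JSON=$(printf '%s' "$CONFIG_B64" | base64 -d)

# Equivalent of jsondecode(...)["retry_join"]: extract the first join address
RETRY_JOIN=$(printf '%s' "$CONFIG_JSON" | python3 -c 'import json,sys; print(json.load(sys.stdin)["retry_join"][0])')
echo "$RETRY_JOIN"
```

This is why the module needs no separate inputs for the join addresses or gossip key: both are carried inside the single base64-encoded config file attribute.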
Locals
The values you provided in the UI during creation are used as local variables in the generated Terraform code.
- vpc_region - The region where you deployed your VPC.
- hvn_region - The HashiCorp Virtual Network (HVN) region.
- cluster_id - The HCP Consul Dedicated cluster ID. Use a unique name to identify your HCP Consul Dedicated cluster. HCP will pre-populate it with a name following the pattern consul-quickstart-<unique-ID>.
- vpc_id - Because you are using an existing VPC, you need to provide Terraform with your VPC ID.
- private_route_table_id - A route table contains a set of rules, called routes, that determine where network traffic from your subnet or gateway is directed.
- private_subnet1 and private_subnet2 - A subnet is a range of IP addresses in your VPC. You can launch AWS compute resources into a specific subnet.
Run terraform
If you have not done so already, click on Copy code to copy it to your clipboard and save it in a file named main.tf.
Refer to the prerequisites section if you have not installed Git and Terraform.
With the Terraform manifest files and your custom credentials file, you are now ready to deploy your infrastructure.
Check that the following setup is complete before executing the terraform init step:
- Your AWS credentials are populated as environment variables and Terraform install is complete (refer to prerequisites)
- You have exported the HCP credentials from the UI as environment variables
- If you are deploying in an existing VPC: ensure the two public subnets have internet connectivity and are in different availability zones.
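The credential checks above can be sketched as a small pre-flight script; the check_env helper name is illustrative, not part of the tutorial's tooling:

```shell
# Sketch: print the name of any required variable that is unset or empty,
# so a missing credential is caught before terraform init runs.
check_env() {
  missing=0
  for v in "$@"; do
    # Indirect lookup of the variable named in $v
    if [ -z "$(eval "printf '%s' \"\${$v}\"")" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  return "$missing"
}

# AWS_SESSION_TOKEN may also be needed, depending on how your AWS access is set up.
check_env AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY \
  HCP_CLIENT_ID HCP_CLIENT_SECRET \
  || echo "Export the variables listed above before running terraform init."
```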
Issue the terraform init command from your working directory to download the necessary providers and initialize the backend.
$ terraform init

Initializing the backend...

Initializing provider plugins...
...

Terraform has been successfully initialized!
...
Once Terraform has been initialized, you can verify the resources that will be created using the plan command.
$ terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:
...
Finally, you can deploy the resources using the apply command.
$ terraform apply

...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
Remember to confirm the run by entering yes.
Once you confirm, the deployment will take a few minutes to complete. Terraform will print the following output if the deployment is successful.
Apply complete! Resources: xx added, 0 changed, 0 destroyed.
...
Troubleshooting Terraform run
In case you receive the following error during terraform apply:
...
Error: Unauthorized
│
│   with module.eks_consul_client.kubernetes_secret.consul_secrets,
│   on .terraform/modules/eks_consul_client/modules/hcp-eks-client/main.tf line 1, in resource "kubernetes_secret" "consul_secrets":
│    1: resource "kubernetes_secret" "consul_secrets" {
...
This is probably due to an internal EKS issue.
You can try solving the issue using the following steps:
Locate the kubeconfig file in the folder you ran Terraform from and use it to configure your kubectl command to point to the EKS cluster.
$ export KUBECONFIG=<kubeconfig file path>
Use Helm to remove the client workload.
$ helm delete consul
Apply the changes again using Terraform
$ terraform apply
Examine Terraform output
At the end of the execution Terraform will output the following lines:
Outputs:

consul_root_token = <sensitive>
consul_url = "https://consul-quickstart-1637764803819.consul.11eb5071-85f5-1eb2-992c-0242ac110003.aws.hashicorp.cloud"
hashicups_url = "http://ad92a8e7735db46068b0319334199899-340959324.eu-west-2.elb.amazonaws.com:8080"
kubeconfig_filename = "<redacted>/kubeconfig_consul-quickstart-1637764803819-eks"
As you can notice, the consul_root_token is not shown since it is a sensitive value.
You can retrieve it using:
$ terraform output consul_root_token
Verify created resources
Consul UI
Visit the Consul UI using the consul_url link in the output values.
Sign in to the Consul UI using the token retrieved in the previous step, then click on the Services button to verify all registered services are present.
Consul CLI configuration
Using the Terraform output values, you can set up your Consul CLI to connect to the datacenter you created.
Set up the environment variables:
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_root_token)
$ export CONSUL_HTTP_ADDR=$(terraform output -raw consul_url)
Verify the Consul CLI can connect with the Consul datacenter.
$ consul members
Example output:
Node                                      Address             Status  Type    Build       Protocol  DC                               Segment
ip-172-25-37-170                          172.25.37.170:8301  alive   server  1.10.4+ent  2         consul-quickstart-1637764803819  <all>
ip-10-0-1-105.eu-west-2.compute.internal  10.0.1.51:8301      alive   client  1.10.3+ent  2         consul-quickstart-1637764803819  <default>
ip-10-0-2-52.eu-west-2.compute.internal   10.0.2.254:8301     alive   client  1.10.3+ent  2         consul-quickstart-1637764803819  <default>
ip-10-0-3-133.eu-west-2.compute.internal  10.0.3.75:8301      alive   client  1.10.3+ent  2         consul-quickstart-1637764803819  <default>
HashiCups application
The Terraform code deployed an application that exposes a web UI accessible using the hashicups_url URL.
You can review the configuration of the deployed HashiCups application services in the HashiCups documentation.
Kubectl configuration
Use the file located at the kubeconfig_filename path to configure your kubectl.
$ export KUBECONFIG=`terraform output -raw kubeconfig_filename`
Finally, verify you can connect to your EKS cluster using kubectl.
$ kubectl get pods
Example output:
NAME                                                          READY   STATUS    RESTARTS   AGE
consul-connect-injector-webhook-deployment-57b9bb9cc7-5zw99   1/1     Running   0          36m
consul-connect-injector-webhook-deployment-57b9bb9cc7-9p89c   1/1     Running   0          36m
consul-controller-6ffbc4fdd-dvp5r                             1/1     Running   0          36m
consul-cxwf7                                                  1/1     Running   0          36m
consul-ingress-gateway-647f47fbf9-ctjjd                       2/2     Running   0          36m
consul-ingress-gateway-647f47fbf9-p8xgj                       2/2     Running   0          36m
consul-jh8q6                                                  1/1     Running   0          36m
consul-s87jx                                                  1/1     Running   0          36m
consul-webhook-cert-manager-65b8bb9785-7ql2q                  1/1     Running   0          36m
frontend-77d67cf7f8-8mzsc                                     2/2     Running   0          34m
payments-544f94bb7-dglpd                                      2/2     Running   0          34m
postgres-7bcc78cb6b-6ll89                                     2/2     Running   0          34m
product-api-78b86ff5db-h69hf                                  2/2     Running   0          34m
public-api-7f67d79fb6-8rkdz                                   2/2     Running   0          34m
Cleanup environment
Use the terraform destroy command to clean up the resources you created.
$ terraform destroy

...

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value:
Remember to confirm by entering yes.
Once you confirm, it will take a few minutes to complete the removal. Terraform will print the following output if the command is successful.
Destroy complete! Resources: xx destroyed.
Next steps
In this tutorial you learned how to use Terraform to deploy a demo application on AWS EKS using HCP Consul Dedicated as your service mesh.
In the next tutorial you will use Terraform to deploy a demo application on AWS EC2 instances using HCP Consul Dedicated as your service mesh.
If you encounter any issues, please contact the HCP team at support.hashicorp.com.