Create an HCP Consul Dedicated cluster for an existing EKS runtime
In a previous tutorial you learned how to deploy a new HCP Consul Dedicated cluster and to deploy your workload in an EKS runtime created in the same operation with Terraform.
In this tutorial you will learn how to adapt that Terraform code to deploy the same scenario starting from an existing EKS runtime.
Prerequisites
To complete this tutorial you will need the following.
Basic command line access
Terraform v1.0.0+ CLI installed
Git installed
Admin access to the HashiCorp Cloud Platform (HCP) Consul portal
Note: HCP Admin access is necessary to create the Service Principal credentials used by Terraform to interact with HCP. If you already have a Service Principal key and client ID provided by your admin, you only require Contributor access. If you are an Admin and would like to create a Service Principal, check the Deploy HCP Consul Dedicated with Terraform tutorial for instructions on how to create one.

An AWS account and AWS Access Credentials configured locally.
You can configure the AWS credentials using environment variables.
```shell
export AWS_ACCESS_KEY_ID=<your AWS access key ID>
export AWS_SECRET_ACCESS_KEY=<your AWS secret access key>
export AWS_SESSION_TOKEN=<your AWS session token>
```
An existing EKS cluster. You can use the Provision an EKS Cluster (AWS) tutorial to deploy an EKS cluster with Terraform or refer to AWS documentation.
Get the Terraform file
You can retrieve the Terraform file either from the HCP UI or from a GitHub repository.
From the overview page click on Deploy with Terraform.
Make sure you select Use existing VPC.
Authenticate Terraform to HCP
To authenticate Terraform to HCP you need a Service Principal with Contributor permissions. If you are logged in with an Admin account, you can create one during this step.
In the Authenticate Terraform to HCP section, click Generate Service Principal and Key.
HCP will generate a new set of credentials. Copy them using the Copy code button and export them in your terminal.

```shell
export HCP_CLIENT_ID=<your client id>
export HCP_CLIENT_SECRET=<the key generated>
```
Note: If you are not an Admin on HCP, you should contact your administrator and obtain valid Service Principal credentials before proceeding with the tutorial.
Download Terraform code
For this tutorial you can skip filling in the fields in the UI, since you are going to enter the values manually in the code. Go to the Terraform configuration section.
Click on Copy code to copy it to your clipboard and save it in a file named main.tf.
Review the file
Once you have created the file, open the file with your favorite editor.
./hcp-ui-templates/ec2-existing-vpc/main.tf
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.43"
    }
    hcp = {
      source  = "hashicorp/hcp"
      version = "~> 0.19"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.4"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.3"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "~> 1.11"
    }
  }
}

locals {
  vpc_region     = "{{ .VPCRegion }}"
  hvn_region     = "{{ .HVNRegion }}"
  cluster_id     = "{{ .ClusterID }}"
  vpc_id         = "{{ .VPCID }}"
  route_table_id = "{{ .RouteTableID }}"
  subnet1        = "{{ .Subnet1 }}"
  subnet2        = "{{ .Subnet2 }}"
}

provider "aws" {
  region = local.vpc_region
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "kubectl" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "17.22.0"

  cluster_name    = "${local.cluster_id}-eks"
  cluster_version = "1.21"
  subnets         = [local.subnet1, local.subnet2]
  vpc_id          = local.vpc_id

  node_groups = {
    application = {
      name_prefix      = "hashicups"
      instance_types   = ["t3a.medium"]
      desired_capacity = 3
      max_capacity     = 3
      min_capacity     = 3
    }
  }
}

resource "hcp_hvn" "main" {
  hvn_id         = "${local.cluster_id}-hvn"
  cloud_provider = "aws"
  region         = local.hvn_region
  cidr_block     = "172.25.32.0/20"
}

module "aws_hcp_consul" {
  source  = "hashicorp/hcp-consul/aws"
  version = "0.3.0"

  hvn                = hcp_hvn.main
  vpc_id             = local.vpc_id
  subnet_ids         = [local.subnet1, local.subnet2]
  route_table_ids    = [local.route_table_id]
  security_group_ids = [module.eks.cluster_primary_security_group_id]
}

resource "hcp_consul_cluster" "main" {
  cluster_id      = local.cluster_id
  hvn_id          = hcp_hvn.main.hvn_id
  public_endpoint = true
  tier            = "development"
}

resource "hcp_consul_cluster_root_token" "token" {
  cluster_id = hcp_consul_cluster.main.id
}

module "eks_consul_client" {
  source  = "hashicorp/hcp-consul/aws//modules/hcp-eks-client"
  version = "0.3.0"

  cluster_id            = hcp_consul_cluster.main.cluster_id
  consul_hosts          = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["retry_join"]
  k8s_api_endpoint      = module.eks.cluster_endpoint
  boostrap_acl_token    = hcp_consul_cluster_root_token.token.secret_id
  consul_ca_file        = base64decode(hcp_consul_cluster.main.consul_ca_file)
  datacenter            = hcp_consul_cluster.main.datacenter
  gossip_encryption_key = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["encrypt"]

  depends_on = [module.eks]
}

module "demo_app" {
  source  = "hashicorp/hcp-consul/aws//modules/k8s-demo-app"
  version = "0.3.0"

  depends_on = [module.eks_consul_client]
}

output "consul_root_token" {
  value     = hcp_consul_cluster_root_token.token.secret_id
  sensitive = true
}

output "consul_url" {
  value = hcp_consul_cluster.main.consul_public_endpoint_url
}

output "kubeconfig_filename" {
  value = abspath(module.eks.kubeconfig_filename)
}

output "hashicups_url" {
  value = module.demo_app.hashicups_url
}
```
You are now going to edit the file to configure Terraform to locate and manage your existing EKS runtime.
Fill the locals
The main.tf file contains some variables that are used during resource creation and need to be filled in for the deployment.
```hcl
// ...

locals {
  vpc_region     = "{{ .VPCRegion }}"
  vpc_id         = "{{ .VPCID }}"
  route_table_id = "{{ .RouteTableID }}"
  subnet1        = "{{ .Subnet1 }}"
  subnet2        = "{{ .Subnet2 }}"
  hvn_region     = "{{ .HVNRegion }}"
  cluster_id     = "{{ .ClusterID }}"
}

// ...
```
AWS related fields
vpc_region - The region where you deployed your VPC. For this tutorial we used eu-west-2.

vpc_id - Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you've defined.

route_table_id - A route table contains a set of rules, called routes, that are used to determine where network traffic from your subnet or gateway is directed.

subnet1 and subnet2 - A subnet is a range of IP addresses in your VPC. You can launch AWS resources into a specific subnet.
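If you would rather not copy the route table and subnet IDs by hand, Terraform data sources can look them up from the VPC ID. Below is a minimal sketch, not part of the generated template: it assumes the AWS provider 3.x data sources and that every subnet and route table in the VPC is in scope, and the data source names are illustrative.

```hcl
// Illustrative sketch: derive the route table and subnet IDs from the VPC
// instead of hardcoding them in locals.
data "aws_subnet_ids" "selected" {
  vpc_id = local.vpc_id
}

data "aws_route_tables" "selected" {
  vpc_id = local.vpc_id
}
```

You could then reference data.aws_subnet_ids.selected.ids and data.aws_route_tables.selected.ids where the template uses the hardcoded locals; verify the attribute names against the provider version pinned in your main.tf.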
HCP related fields
hvn_region - The HashiCorp Virtual Network (HVN) region.

Note: Not all regions are available for HCP to deploy. Pick one among: us-west-2, us-east-1, eu-west-1, eu-west-2, eu-central-1, ap-southeast-1, ap-southeast-2.

cluster_id - The HCP Consul Dedicated cluster ID. Use a unique name to identify your HCP Consul Dedicated cluster. For this tutorial the default value is consul-quickstart-existing-eks.
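One constraint worth keeping in mind: the HVN cidr_block used by the template (172.25.32.0/20) must not overlap your VPC's CIDR range, because the two networks get peered. A quick, illustrative way to check for overlap before applying (the VPC CIDR below is a placeholder; python3 is assumed to be available):

```shell
# Illustrative overlap check between the HVN CIDR and your VPC CIDR.
# Replace 10.0.0.0/16 with your VPC's actual CIDR block.
python3 - <<'EOF'
import ipaddress

hvn = ipaddress.ip_network("172.25.32.0/20")   # HVN CIDR from the template
vpc = ipaddress.ip_network("10.0.0.0/16")      # placeholder VPC CIDR

print("overlap" if hvn.overlaps(vpc) else "no overlap")
EOF
```

If the check reports an overlap, pick a different cidr_block for the hcp_hvn resource before running Terraform.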
Remove the eks module
The template is made to use an existing VPC to deploy an EKS cluster and then deploy an application on it. Since you already have an EKS cluster deployed, you need to identify all parts of the Terraform code responsible for the EKS cluster creation and either comment them out or replace them with pointers to the existing cluster.
The module used for the EKS cluster creation is the AWS EKS Terraform module, configured in the code by the module "eks" block.
Because you do not need to create the EKS cluster, comment out the whole module block.
./hcp-ui-templates/ec2-existing-eks/main.tf
```hcl
// ...

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "17.22.0"

  cluster_name    = "${local.cluster_id}-eks"
  cluster_version = "1.21"
  subnets         = [local.subnet1, local.subnet2]
  vpc_id          = local.vpc_id

  node_groups = {
    application = {
      name_prefix      = "hashicups"
      instance_types   = ["t3a.medium"]
      desired_capacity = 3
      max_capacity     = 3
      min_capacity     = 3
    }
  }
}

// ...
```
Replace module dependencies
Once you have commented out the module "eks"
block, a few parts of the Terraform code
will need to be changed to provide the information that was previously being
retrieved from the module execution.
The idea is to replace all elements that refer to module.eks
with static values
retrieved from your EKS instance.
Element module.eks.cluster_id
You can comment out the occurrences of module.eks.cluster_id and replace them with a new local variable, local.eks_cluster_id.
./hcp-ui-templates/ec2-existing-eks/main.tf
```hcl
// ...

data "aws_eks_cluster" "cluster" {
  // name = module.eks.cluster_id
  name = local.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  // name = module.eks.cluster_id
  name = local.eks_cluster_id
}

// ...
```
Element module.eks.cluster_primary_security_group_id
Module aws_hcp_consul requires the EKS cluster's primary security group ID. Comment out the module.eks reference and replace it with a new local variable, local.eks_primary_security_group_id.
./hcp-ui-templates/ec2-existing-eks/main.tf
```hcl
// ...

module "aws_hcp_consul" {
  source  = "hashicorp/hcp-consul/aws"
  version = "0.3.0"

  hvn             = hcp_hvn.main
  vpc_id          = local.vpc_id
  subnet_ids      = [local.subnet1, local.subnet2]
  route_table_ids = [local.route_table_id]
  // security_group_ids = [module.eks.cluster_primary_security_group_id]
  security_group_ids = [local.eks_primary_security_group_id]
}

// ...
```
Element module.eks.cluster_endpoint
Module eks_consul_client requires the EKS cluster endpoint to connect to the cluster and deploy services. The module also used depends_on to wait for module.eks to complete before acting on the EKS cluster; since that module is now commented out, comment out the depends_on line as well.
./hcp-ui-templates/ec2-existing-eks/main.tf
```hcl
// ...

module "eks_consul_client" {
  source  = "hashicorp/hcp-consul/aws//modules/hcp-eks-client"
  version = "0.3.0"

  cluster_id   = hcp_consul_cluster.main.cluster_id
  consul_hosts = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["retry_join"]
  // k8s_api_endpoint = module.eks.cluster_endpoint
  k8s_api_endpoint = local.eks_cluster_endpoint

  boostrap_acl_token    = hcp_consul_cluster_root_token.token.secret_id
  consul_ca_file        = base64decode(hcp_consul_cluster.main.consul_ca_file)
  datacenter            = hcp_consul_cluster.main.datacenter
  gossip_encryption_key = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["encrypt"]

  // depends_on = [module.eks]
}

// ...
```
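A side note on the jsondecode(base64decode(...)) expressions above: HCP returns the Consul client configuration as a base64-encoded JSON document, and the module extracts the join addresses and gossip key from it. You can reproduce the same extraction in your shell against a fabricated sample config (jq is assumed to be available; the real value comes from the hcp_consul_cluster.main.consul_config_file attribute):

```shell
# Fabricated sample standing in for hcp_consul_cluster.main.consul_config_file
CONFIG_B64=$(printf '%s' '{"retry_join":["demo.private.consul.example"],"encrypt":"5t4WPminOJK0="}' | base64)

# consul_hosts: the retry_join list
printf '%s' "$CONFIG_B64" | base64 --decode | jq -r '.retry_join[0]'

# gossip_encryption_key: the encrypt value
printf '%s' "$CONFIG_B64" | base64 --decode | jq -r '.encrypt'
```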
Element module.eks.kubeconfig_filename
Because the eks module is not used, a kubeconfig file will not be generated at the end of the Terraform execution. You can remove or comment out the output stanza.
./hcp-ui-templates/ec2-existing-eks/main.tf
```hcl
// ...

// output "kubeconfig_filename" {
//   value = abspath(module.eks.kubeconfig_filename)
// }

// ...
```
Add new locals
In the previous sections, you replaced some of the values generated at runtime with local variables. Add them to the locals block.
./hcp-ui-templates/ec2-existing-eks/main.tf
```hcl
// ...

locals {
  vpc_region     = "eu-west-2"
  hvn_region     = "eu-west-2"
  cluster_id     = "consul-quickstart-existing-eks"
  vpc_id         = "vpc-00d21f8d2b7b98e4c"
  route_table_id = "rtb-0c04acb0449dc0237"
  subnet1        = "subnet-00980d4c01ec150d0"
  subnet2        = "subnet-0c24eea90a7bf7840"

  // New locals
  eks_cluster_id                = "education-eks-Ol56i68j"
  eks_cluster_endpoint          = "https://45920CC13C3D09EACB1CC9E32EA15CE7.gr7.eu-west-2.eks.amazonaws.com"
  eks_primary_security_group_id = "sg-0591fbae1a083e6c6"
}

// ...
```
eks_cluster_id - The EKS cluster ID.

eks_cluster_endpoint - The EKS cluster endpoint used to deploy the services.

eks_primary_security_group_id - The cluster security group is a unified security group used to control communication between the Kubernetes control plane and the compute resources in the cluster.

Note: From this same view you can also verify you picked the correct subnets for your configuration. If not, make sure to correct them before running Terraform.
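As an alternative to hardcoding the endpoint and security group ID, the aws_eks_cluster data source already present in the file exposes both values, so strictly only the cluster name needs to be supplied. A sketch of that variant, not part of the generated template (verify the attribute names against the AWS provider version pinned in your file):

```hcl
// Sketch: reuse the existing data source instead of hardcoded locals.
// Only the EKS cluster name still has to be provided.
locals {
  eks_cluster_id = "education-eks-Ol56i68j"
}

// Elsewhere in the file you could then reference:
//   k8s_api_endpoint   = data.aws_eks_cluster.cluster.endpoint
//   security_group_ids = [data.aws_eks_cluster.cluster.vpc_config[0].cluster_security_group_id]
```

This keeps the configuration valid even if the cluster endpoint changes, at the cost of an extra lookup at plan time.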
Review modified Terraform file
Once you have completed all the changes, give the main.tf file one final review to make sure everything is configured properly.
./hcp-ui-templates/ec2-existing-eks/main.tf
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.43"
    }
    hcp = {
      source  = "hashicorp/hcp"
      version = "~> 0.19"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.4"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.3"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "~> 1.11"
    }
  }
}

locals {
  vpc_region     = "eu-west-2"
  hvn_region     = "eu-west-2"
  cluster_id     = "consul-quickstart-existing-eks"
  vpc_id         = "vpc-00d21f8d2b7b98e4c"
  route_table_id = "rtb-0c04acb0449dc0237"
  subnet1        = "subnet-00980d4c01ec150d0"
  subnet2        = "subnet-0c24eea90a7bf7840"

  // New locals
  eks_cluster_id                = "education-eks-Ol56i68j"
  eks_cluster_endpoint          = "https://45920CC13C3D09EACB1CC9E32EA15CE7.gr7.eu-west-2.eks.amazonaws.com"
  eks_primary_security_group_id = "sg-0591fbae1a083e6c6"
}

provider "aws" {
  region = local.vpc_region
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "kubectl" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}

data "aws_eks_cluster" "cluster" {
  // name = module.eks.cluster_id
  name = local.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  // name = module.eks.cluster_id
  name = local.eks_cluster_id
}

// module "eks" {
//   source  = "terraform-aws-modules/eks/aws"
//   version = "17.22.0"

//   cluster_name    = "${local.cluster_id}-eks"
//   cluster_version = "1.21"
//   subnets         = [local.subnet1, local.subnet2]
//   vpc_id          = local.vpc_id

//   node_groups = {
//     application = {
//       name_prefix      = "hashicups"
//       instance_types   = ["t3a.medium"]
//       desired_capacity = 3
//       max_capacity     = 3
//       min_capacity     = 3
//     }
//   }
// }

resource "hcp_hvn" "main" {
  hvn_id         = "${local.cluster_id}-hvn"
  cloud_provider = "aws"
  region         = local.hvn_region
  cidr_block     = "172.25.32.0/20"
}

module "aws_hcp_consul" {
  source  = "hashicorp/hcp-consul/aws"
  version = "0.3.0"

  hvn             = hcp_hvn.main
  vpc_id          = local.vpc_id
  subnet_ids      = [local.subnet1, local.subnet2]
  route_table_ids = [local.route_table_id]
  // security_group_ids = [module.eks.cluster_primary_security_group_id]
  security_group_ids = ["${local.eks_primary_security_group_id}"]
}

resource "hcp_consul_cluster" "main" {
  cluster_id      = local.cluster_id
  hvn_id          = hcp_hvn.main.hvn_id
  public_endpoint = true
  tier            = "development"
}

resource "hcp_consul_cluster_root_token" "token" {
  cluster_id = hcp_consul_cluster.main.id
}

module "eks_consul_client" {
  source  = "hashicorp/hcp-consul/aws//modules/hcp-eks-client"
  version = "0.3.0"

  cluster_id   = hcp_consul_cluster.main.cluster_id
  consul_hosts = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["retry_join"]
  // k8s_api_endpoint = module.eks.cluster_endpoint
  k8s_api_endpoint = local.eks_cluster_endpoint

  boostrap_acl_token    = hcp_consul_cluster_root_token.token.secret_id
  consul_ca_file        = base64decode(hcp_consul_cluster.main.consul_ca_file)
  datacenter            = hcp_consul_cluster.main.datacenter
  gossip_encryption_key = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["encrypt"]

  // depends_on = [module.eks]
}

module "demo_app" {
  source  = "hashicorp/hcp-consul/aws//modules/k8s-demo-app"
  version = "0.3.0"

  depends_on = [module.eks_consul_client]
}

output "consul_root_token" {
  value     = hcp_consul_cluster_root_token.token.secret_id
  sensitive = true
}

output "consul_url" {
  value = hcp_consul_cluster.main.consul_public_endpoint_url
}

// output "kubeconfig_filename" {
//   value = abspath(module.eks.kubeconfig_filename)
// }

output "hashicups_url" {
  value = module.demo_app.hashicups_url
}
```
Run Terraform
With the updated Terraform file and your credentials configured, you are now ready to deploy your infrastructure.
Issue the terraform init
command from your working directory to download the
necessary providers and initialize the backend.
```shell
$ terraform init

Initializing the backend...

Initializing provider plugins...
...

Terraform has been successfully initialized!
...
```
Once Terraform has been initialized, you can verify the resources that will
be created using the plan
command.
```shell
$ terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:
...
```
Finally, you can deploy the resources using the apply
command.
```shell
$ terraform apply

...
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
```
Remember to confirm the run by entering yes.
Once you confirm, it will take a few minutes to complete the deployment. Terraform will print the following output if the deployment is successful.
Apply complete! Resources: xx added, 0 changed, 0 destroyed.
Examine Terraform output
At the end of the execution Terraform will output the following lines:
```shell
Outputs:

consul_root_token = <sensitive>
consul_url = "https://consul-quickstart-existing-eks.consul.11eb5071-85f5-1eb2-992c-0242ac110003.aws.hashicorp.cloud"
hashicups_url = "http://aabb6936daa284ba385b2b0a9ed7667e-1572630217.eu-west-2.elb.amazonaws.com:8080"
```
Notice that the consul_root_token is not shown because it is a sensitive value.
You can retrieve the Consul token using the command below:
$ terraform output consul_root_token
Verify created resources
Consul UI
Visit the Consul UI using the consul_url
link in the output values.
Log in to Consul using the token retrieved in the previous step and verify the services are all present in your Consul datacenter UI.
Consul CLI configuration
Using the Terraform output values, you can set up your Consul CLI to connect to the datacenter you created.
Set up the environment variables:
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_root_token)
$ export CONSUL_HTTP_ADDR=$(terraform output -raw consul_url)
Verify Consul can connect to the datacenter:
$ consul members
Example output:
```shell
Node                                      Address            Status  Type    Build       Protocol  DC                              Segment
ip-172-25-42-77                           172.25.42.77:8301  alive   server  1.10.4+ent  2         consul-quickstart-existing-eks  <all>
ip-10-0-1-108.eu-west-2.compute.internal  10.0.1.90:8301     failed  client  1.10.3+ent  2         consul-quickstart-existing-eks  <default>
ip-10-0-1-188.eu-west-2.compute.internal  10.0.1.99:8301     failed  client  1.10.3+ent  2         consul-quickstart-existing-eks  <default>
ip-10-0-2-239.eu-west-2.compute.internal  10.0.2.17:8301     failed  client  1.10.3+ent  2         consul-quickstart-existing-eks  <default>
```
HashiCups application
The Terraform code deployed an application that exposes a web UI accessible
using the hashicups_url
URL.
Cleanup environment
Use the terraform destroy
command to clean up the resources you created.
```shell
$ terraform destroy

...
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value:
```
Remember to confirm by entering yes.
Once you confirm, it will take a few minutes to complete the removal. Terraform will print the following output if the command is successful.
Destroy complete! Resources: xx destroyed.
Note: The cleanup process does not take into account the existing resources you used to deploy the EKS cluster. Remember to de-provision your EKS cluster if you no longer need it after this tutorial.
Next steps
In this tutorial you learned how to modify the Terraform template generated from the HCP UI to create an HCP Consul Dedicated cluster, connect it to an existing EKS runtime, and deploy a demo application.
This is the first step toward customizing the Terraform code to deploy your scenario.
To learn more about the Terraform resources used in this collection and the functionality they provide, check the next tutorial.
If you encounter any issues, please contact the HCP team at support.hashicorp.com.