Configure EC2 as a Consul client for HCP Consul Dedicated
HashiCorp Cloud Platform (HCP) Consul is a fully managed Service Mesh as a Service (SMaaS) version of Consul. After you deploy an HCP Consul Dedicated server cluster, you must deploy Consul clients into your network so you can leverage Consul's full feature set including service mesh and service discovery. HCP Consul supports Consul clients running on EKS, EC2, and ECS workloads.
In this tutorial, you will deploy and provision a Consul client running on an EC2 instance that connects to your HCP Consul Dedicated cluster. In the process, you will review the provisioning script to better understand the steps required to properly configure an EC2 instance to connect and interact with an HCP Consul Dedicated cluster.
Prerequisites
For this tutorial, you will need:
- The Terraform 0.14+ CLI installed locally.
- An HCP account configured for use with Terraform.
- An AWS account with AWS credentials configured for use with Terraform.
Clone example repository
In your terminal, clone the example repository. This repository contains Terraform configuration to deploy different types of Consul clusters, including the one you will need in this tutorial.
$ git clone https://github.com/hashicorp/learn-consul-terraform
Navigate to the project directory in the cloned repository.
$ cd learn-consul-terraform/datacenter-deploy-ec2-hcp
Review configuration
The project directory contains two subdirectories.

The 1-vpc-hcp subdirectory contains Terraform configuration to deploy an AWS VPC and underlying networking resources, an HCP HashiCorp Virtual Network (HVN), and an HCP Consul Dedicated cluster. In addition, it uses the hashicorp/hcp-consul/aws Terraform module to set up all networking rules that allow a Consul client to communicate with the HCP Consul Dedicated servers. This includes setting up the peering connection between the HVN and your VPC, setting up the HCP routes, and creating AWS ingress rules.

datacenter-deploy-ec2-hcp/1-vpc-hcp/main.tf
module "aws_hcp_consul" { source = "hashicorp/hcp-consul/aws" version = "~> 0.7.0" hvn = hcp_hvn.main vpc_id = module.vpc.vpc_id subnet_ids = module.vpc.public_subnets route_table_ids = module.vpc.public_route_table_ids}
Note: The hashicorp/hcp-consul/aws Terraform module creates a security group that allows TCP/UDP ingress traffic on port 8301 and allows all egress. The egress security rule lets the EC2 instance download dependencies required for the Consul client, including the Consul binary and Docker.

The 2-ec2-consul-client subdirectory contains Terraform configuration that creates an AWS key pair and deploys an EC2 instance. The EC2 instance uses a cloud-init script to automate the Consul client configuration. In the Review Consul client configuration for EC2 section, you will review the automation scripts in more detail.
This tutorial intentionally separates the Terraform configuration into two discrete steps. This process reflects Terraform best practices. By dividing the HCP Consul Dedicated cluster management from the Consul client management, you can separate the duties and reduce the blast radius.
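If you keep state in a shared backend, the client workspace could alternatively read these outputs with a terraform_remote_state data source instead of copying them by hand. The following is a minimal sketch assuming the default local state file in the sibling 1-vpc-hcp directory; this tutorial instead passes the outputs through a terraform.tfvars file, which you will create shortly.

# Hypothetical alternative: read outputs from the 1-vpc-hcp workspace's state.
data "terraform_remote_state" "vpc_hcp" {
  backend = "local"

  config = {
    path = "../1-vpc-hcp/terraform.tfstate"
  }
}

# Outputs are then available as, for example:
#   data.terraform_remote_state.vpc_hcp.outputs.vpc_id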
Deploy HCP Consul Dedicated
Navigate to the 1-vpc-hcp directory.
$ cd 1-vpc-hcp
Initialize the Terraform configuration.
$ terraform init

Initializing modules...
Downloading registry.terraform.io/hashicorp/hcp-consul/aws 0.7.1 for aws_hcp_consul...
- aws_hcp_consul in .terraform/modules/aws_hcp_consul
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 3.10.0 for vpc...
- vpc in .terraform/modules/vpc

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/hcp from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Installing hashicorp/hcp v0.29.0...
- Installed hashicorp/hcp v0.29.0 (signed by HashiCorp)
- Installing hashicorp/aws v3.75.2...
- Installed hashicorp/aws v3.75.2 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Next, apply the configuration. Respond yes to the prompt to confirm.
$ terraform apply

## ...

Plan: 23 to add, 0 to change, 0 to destroy.

Outputs:

hcp_consul_cluster_id = "learn-hcp-consul-ec2-client"
hcp_consul_security_group = "sg-007d371f114a2f553"
subnet_id = "subnet-0a23cb052f4960d79"
vpc_cidr_block = "10.0.0.0/16"
vpc_id = "vpc-0b1d246078d615afc"
Notice that Terraform displays the outputs created from the apply.
Create terraform.tfvars file for Consul client directory
Since you created the underlying infrastructure with Terraform, you can use the outputs to help you deploy the Consul clients on an EC2 instance.
Create a terraform.tfvars file in the 2-ec2-consul-client directory with the Terraform outputs from this project.
$ echo "vpc_id=\"$(terraform output -raw vpc_id)\"vpc_cidr_block=\"$(terraform output -raw vpc_cidr_block)\"subnet_id=\"$(terraform output -raw subnet_id)\"cluster_id=\"$(terraform output -raw hcp_consul_cluster_id)\"hcp_consul_security_group_id=\"$(terraform output -raw hcp_consul_security_group)\"" > ../2-ec2-consul-client/terraform.tfvars
Review Consul client configuration for EC2
Navigate to the 2-ec2-consul-client directory.
$ cd 2-ec2-consul-client
Review Terraform configuration
Open main.tf. This Terraform configuration creates an AWS key pair, a security group, and an EC2 instance. The EC2 instance uses a cloud-init script to automate the Consul client configuration so it can connect to your HCP Consul Dedicated cluster. The AWS key pair and security group let you SSH into your EC2 instance.
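The tutorial does not reproduce the SSH security group, but the following is a minimal sketch of what the allow_ssh resource referenced below might look like; the exact rules in the repository may differ.

resource "aws_security_group" "allow_ssh" {
  name_prefix = "allow-ssh"
  vpc_id      = var.vpc_id

  # SSH from anywhere for tutorial purposes; a production deployment
  # would restrict the source CIDR.
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}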
Notice that the Terraform configuration uses data sources to retrieve information about your AWS and HCP resources.
datacenter-deploy-ec2-hcp/2-ec2-consul-client/main.tf
data "aws_vpc" "selected" { id = var.vpc_id} data "aws_subnet" "selected" { id = var.subnet_id} data "hcp_hvn" "selected" { hvn_id = data.hcp_consul_cluster.selected.hvn_id} data "hcp_consul_cluster" "selected" { cluster_id = var.cluster_id}
The aws_instance.consul_client resource defines the EC2 instance that will serve as a Consul client. Notice the following attributes:
- The count attribute lets you easily scale the number of Consul clients running on EC2 instances. The cloud-init script lets you automatically configure each EC2 instance to connect to your HCP Consul Dedicated cluster.
- The vpc_security_group_ids attribute references a security group that allows TCP/UDP ingress traffic on port 8301 and allows all egress. The ingress traffic lets the HCP Consul Dedicated server cluster communicate with your Consul clients. The egress traffic lets you download the dependencies required for a Consul client, including the Consul binary.
- The key_name attribute references a key pair that will let you SSH into the EC2 instance.
- The user_data attribute references the scripts/user_data.sh and scripts/setup.sh automation scripts that configure and set up a Consul client on your EC2 instance. Notice that the automation scripts reference the HCP Consul Dedicated cluster's CA certificate, configuration, and ACL tokens. These values are crucial for the Consul client to securely connect to your HCP Consul Dedicated cluster.
datacenter-deploy-ec2-hcp/2-ec2-consul-client/main.tf
resource "aws_instance" "consul_client" { count = 1 ami = data.aws_ami.ubuntu.id instance_type = "t2.small" associate_public_ip_address = true subnet_id = var.subnet_id vpc_security_group_ids = [ var.hcp_consul_security_group_id, aws_security_group.allow_ssh.id ] key_name = aws_key_pair.consul_client.key_name user_data = templatefile("${path.module}/scripts/user_data.sh", { setup = base64gzip(templatefile("${path.module}/scripts/setup.sh", { consul_ca = data.hcp_consul_cluster.selected.consul_ca_file consul_config = data.hcp_consul_cluster.selected.consul_config_file consul_acl_token = hcp_consul_cluster_root_token.token.secret_id, consul_version = data.hcp_consul_cluster.selected.consul_version, consul_service = base64encode(templatefile("${path.module}/scripts/service", { service_name = "consul", service_cmd = "/usr/bin/consul agent -data-dir /var/consul -config-dir=/etc/consul.d/", })), vpc_cidr = var.vpc_cidr_block })), }) tags = { Name = "hcp-consul-client-${count.index}" }}
Review client configuration files
The client configuration file contains information that lets your Consul client connect to your specific HCP Consul Dedicated cluster.
The Terraform configuration file retrieves the values directly from the HCP Consul Dedicated data source.
datacenter-deploy-ec2-hcp/2-ec2-consul-client/main.tf
resource "aws_instance" "consul_client" { ## ... user_data = templatefile("${path.module}/scripts/user_data.sh", { setup = base64gzip(templatefile("${path.module}/scripts/setup.sh", { consul_ca = data.hcp_consul_cluster.selected.consul_ca_file consul_config = data.hcp_consul_cluster.selected.consul_config_file consul_acl_token = hcp_consul_cluster_root_token.token.secret_id, ## ... })), }) ## ...}
The following is a sample client configuration file.
client_config.json
{ "acl": { "enabled": true, "down_policy": "async-cache", "default_policy": "deny" }, "ca_file": "./ca.pem", "verify_outgoing": true, "datacenter": "dc1", "encrypt": "GOSSIP_ENCRYPTION_KEY", "server": false, "log_level": "INFO", "ui": true, "retry_join": ["CONSUL_CLUSTER_PRIVATE_ENDPOINT"], "auto_encrypt": { "tls": true }}
Notice these attributes in the client configuration file:
- The acl.enabled setting is set to true, which ensures that only requests with a valid token will be able to access resources in the datacenter. To add your client, you will need to configure an agent token; the automation script configures this for you (see the sketch after this list).
- The ca_file setting references the ca.pem file. The automation script will update this path to point to /etc/consul.d.
- The encrypt setting is set to your Consul cluster's gossip encryption key. Do not modify the encryption key that is provided for you in this file.
- The retry_join setting is configured with the private endpoint address of your HCP Consul Dedicated cluster's API. This is the address that your client will use to interact with the servers running in the HCP Consul Dedicated cluster. Do not modify the value that is provided for you in this file.
- The auto_encrypt.tls setting is set to true to ensure transport layer security is enforced on all traffic with and between Consul agents.
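For reference, once the automation script has run, the ACL stanza in the deployed configuration gains an agent token, roughly like the following sketch. The YOUR_AGENT_TOKEN value is a placeholder; the setup script injects the real token.

"acl": {
  "enabled": true,
  "down_policy": "async-cache",
  "default_policy": "deny",
  "tokens": {
    "agent": "YOUR_AGENT_TOKEN"
  }
}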
Review provisioning scripts
The 2-ec2-consul-client/scripts directory contains all the automation scripts.
- The user_data.sh file serves as an entrypoint. It loads, configures, and runs setup.sh.
- The service file is a template for a systemd service. This lets the Consul client run as a daemon (background) service on the EC2 instance and automatically restarts the Consul client if it fails. A sketch of what this template might look like follows this list.
- The setup.sh file contains the core logic to configure the Consul client. First, the script sets up container networking (setup_networking), then downloads the Consul binary and Docker (setup_deps).
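The service template itself is not reproduced in this tutorial. The following is a minimal sketch of what such a templated systemd unit might look like; the ${service_name} and ${service_cmd} placeholders are filled in by Terraform's templatefile function, and the actual unit in the repository may differ.

[Unit]
Description=${service_name} agent
Requires=network-online.target
After=network-online.target

[Service]
# Restart the Consul client automatically if it fails.
Restart=on-failure
ExecStart=${service_cmd}
KillSignal=SIGINT

[Install]
WantedBy=multi-user.target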
The setup_consul function creates the /etc/consul.d and /var/consul directories, the Consul configuration and data directories respectively.
datacenter-deploy-ec2-hcp/2-ec2-consul-client/scripts/setup.sh
setup_consul() {
  mkdir --parents /etc/consul.d /var/consul
  chown --recursive consul:consul /etc/consul.d
  chown --recursive consul:consul /var/consul
  ## …
}
The Terraform configuration for the EC2 instance defines the Consul configuration and data directory paths.
datacenter-deploy-ec2-hcp/2-ec2-consul-client/main.tf
resource "aws_instance" "consul_client" { ## ... user_data = templatefile("${path.module}/scripts/user_data.sh", { setup = base64gzip(templatefile("${path.module}/scripts/setup.sh", { ## ... consul_service = base64encode(templatefile("${path.module}/scripts/service", { service_name = "consul", service_cmd = "/usr/bin/consul agent -data-dir /var/consul -config-dir=/etc/consul.d/", })), ## ... })), }) ## ...}
Next, the setup_consul function configures and moves the CA file and client configuration file to their respective destinations in /etc/consul.d. Notice that the script updates the client configuration's ca_file path, ACL token, ports, and bind address.
datacenter-deploy-ec2-hcp/2-ec2-consul-client/scripts/setup.sh
setup_consul() {
  ## …
  echo "${consul_ca}" | base64 -d >/etc/consul.d/ca.pem
  echo "${consul_config}" | base64 -d >client.temp.0

  ip=$(hostname -I | awk '{print $1}')

  jq '.ca_file = "/etc/consul.d/ca.pem"' client.temp.0 >client.temp.1
  jq --arg token "${consul_acl_token}" '.acl += {"tokens":{"agent":"\($token)"}}' client.temp.1 >client.temp.2
  jq '.ports = {"grpc":8502}' client.temp.2 >client.temp.3
  jq '.bind_addr = "{{ GetPrivateInterfaces | include \"network\" \"'${vpc_cidr}'\" | attr \"address\" }}"' client.temp.3 >/etc/consul.d/client.json
}
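Applied to the sample configuration shown earlier, the rendered /etc/consul.d/client.json would look roughly like the following. The token, gossip key, and retry_join values here are placeholders, and 10.0.0.0/16 stands in for your VPC CIDR.

{
  "acl": {
    "enabled": true,
    "down_policy": "async-cache",
    "default_policy": "deny",
    "tokens": {
      "agent": "YOUR_AGENT_TOKEN"
    }
  },
  "ca_file": "/etc/consul.d/ca.pem",
  "verify_outgoing": true,
  "datacenter": "dc1",
  "encrypt": "GOSSIP_ENCRYPTION_KEY",
  "server": false,
  "log_level": "INFO",
  "ui": true,
  "retry_join": ["CONSUL_CLUSTER_PRIVATE_ENDPOINT"],
  "auto_encrypt": {
    "tls": true
  },
  "ports": {
    "grpc": 8502
  },
  "bind_addr": "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/16\" | attr \"address\" }}"
}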
Finally, the setup.sh file enables and starts the Consul service.
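The exact commands live in setup.sh and are not reproduced in this tutorial; with a unit installed as consul.service, enabling and starting it typically looks like this:

# Sketch: register the unit and start the Consul client
# (setup.sh runs as root under cloud-init, so no sudo is needed).
systemctl daemon-reload
systemctl enable consul.service
systemctl start consul.service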
Create SSH key
The configuration scripts included in the AMIs rely on a user named consul-client. Create an SSH key to pair with the user so that you can securely connect to your instances.
Generate a new SSH key named consul-client. The argument provided with the -f flag creates the key in the current directory in two files called consul-client and consul-client.pub. Change the placeholder email address to your email address.
$ ssh-keygen -t rsa -C "your_email@example.com" -f ./consul-client
When prompted, press enter to leave the passphrase blank on this key.
Deploy Consul client on EC2
Find the terraform.tfvars file. This file contains information about your VPC and HCP deployment and should look like the following.
datacenter-deploy-ec2-hcp/2-ec2-consul-client/terraform.tfvars
vpc_id="vpc-0b1d246078d615afc"vpc_cidr_block="10.0.0.0/16"subnet_id="subnet-0a23cb052f4960d79"cluster_id="learn-hcp-consul-ec2-client"hcp_consul_security_group_id="sg-007d371f114a2f553"
If you do not have this file, return to the Create terraform.tfvars file for Consul client directory step to create it.
Initialize the Terraform configuration.
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/hcp from the dependency lock file
- Installing hashicorp/aws v3.75.2...
- Installed hashicorp/aws v3.75.2 (signed by HashiCorp)
- Installing hashicorp/hcp v0.29.0...
- Installed hashicorp/hcp v0.29.0 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Next, apply the configuration. Respond yes to the prompt to confirm.
$ terraform apply

## ...

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Outputs:

consul_root_token = <sensitive>
consul_url = "https://learn-hcp-consul-ec2-client.consul.98a0dcc3-5473-4e4d-a28e-6c343c498530.aws.hashicorp.cloud"
ec2_client = "34.211.17.208"
next_steps = "Hashicups Application will be ready in ~2 minutes. Use 'terraform output consul_root_token' to retrieve the root token."
Verify Consul client
Now that you have deployed the Consul clients on an EC2 instance, you will verify that you have a Consul deployment with at least 1 server and 1 client.
Retrieve your HCP Consul Dedicated dashboard URL and open it in your browser.
$ terraform output -raw consul_url
https://learn-hcp-consul-ec2-client.consul.98a0dcc3-5473-4e4d-a28e-6c343c498530.aws.hashicorp.cloud
Next, retrieve your Consul root token. You will use this token to authenticate to your Consul dashboard.
$ terraform output -raw consul_root_token
00000000-0000-0000-0000-000000000000
In your HCP Consul Dedicated dashboard, sign in with the root token you just retrieved. After you sign in, click on Nodes to find the Consul client.
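If you prefer the command line, you can run the same check from your terminal with the consul binary installed locally. This is a sketch and assumes your cluster's endpoint is reachable from your workstation:

$ export CONSUL_HTTP_ADDR=$(terraform output -raw consul_url)
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_root_token)
$ consul members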
Note: If your Consul client is unable to connect to your HCP Consul Dedicated server cluster, verify that your VPC, HVN, peering connection, and routes are configured correctly. Refer to the example repository for each resource's configuration.
You can also SSH into your EC2 instance to verify that it is running the Consul client and connected to your HCP Consul Dedicated cluster.
First, SSH into your EC2 instance.
$ ssh ubuntu@$(terraform output -raw ec2_client) -i ./consul-client
Then, view the members in your Consul datacenter. Replace ACL_TOKEN with the Consul root token (the consul_root_token output). Notice that the command returns both the HCP Consul Dedicated server nodes and client nodes.
$ consul members -token ACL_TOKEN
Node             Address            Status  Type    Build       Protocol  DC                           Partition  Segment
ip-172-25-33-11  172.25.33.11:8301  alive   server  1.11.5+ent  2         learn-hcp-consul-ec2-client  default    <all>
ip-10-0-1-114    10.0.1.114:8301    alive   client  1.11.5+ent  2         learn-hcp-consul-ec2-client  default    <default>
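If the client does not appear in the member list, you can inspect the client's systemd service and the provisioning log while still connected over SSH. The cloud-init output log is the standard Ubuntu location for errors emitted by the setup script:

$ systemctl status consul
$ sudo tail -n 50 /var/log/cloud-init-output.log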
Next steps
In this tutorial, you deployed a Consul client and connected it to your HCP Consul Dedicated cluster. To learn more about Consul's features, and for step-by-step examples of how to perform common Consul tasks, complete one of the Get Started with Consul tutorials.
- Register a Service with Consul Service Discovery
- Secure Applications with Service Sidecar Proxies
- Explore the Consul UI
- Create a Consul service mesh on HCP using Envoy as a sidecar proxy
If you encounter any issues, please contact the HCP team at support.hashicorp.com.