Deploy HCP Consul Dedicated with AKS using Terraform
HashiCorp Cloud Platform (HCP) Consul is a fully managed Service Mesh as a Service (SMaaS) version of Consul. The HCP Portal has a quickstart template that deploys an end-to-end development environment so you can see HCP Consul Dedicated in action. This Terraform configuration:
- Creates a new HashiCorp Virtual Network (HVN) and a single-node Consul development server
- Connects the HVN with your Azure virtual network (VNet)
- Provisions an Azure Kubernetes Service (AKS) cluster and installs a Consul client
- Deploys HashiCups, a demo application that uses Consul service mesh
In this tutorial, you will use the HCP Consul Dedicated Terraform automation workflow to deploy an end-to-end deployment environment. In the process, you will review the Terraform configuration to better understand how the various components of the development environment interact with each other. This will equip you with the skills to deploy and adopt HCP Consul Dedicated for your own workloads.
Prerequisites
To complete this tutorial, you will need the following:
- Terraform v1.0.0+ CLI installed
- An HCP account configured for use with Terraform
- An Azure account
- The Azure CLI installed
In order for Terraform to run operations on your behalf, log in to Azure.
$ az login
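Terraform also needs HCP credentials. The generated configuration declares an empty hcp provider block, which reads the HCP_CLIENT_ID and HCP_CLIENT_SECRET environment variables by default. If you prefer to pass a service principal's credentials explicitly, a minimal sketch looks like the following; the variable names are illustrative and are not part of the generated configuration.

# Hypothetical input variables for an HCP service principal; the quickstart
# configuration relies on the HCP_CLIENT_ID and HCP_CLIENT_SECRET
# environment variables instead.
variable "hcp_client_id" {
  type      = string
  sensitive = true
}

variable "hcp_client_secret" {
  type      = string
  sensitive = true
}

provider "hcp" {
  client_id     = var.hcp_client_id
  client_secret = var.hcp_client_secret
}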
Generate Terraform configuration
You can generate a Terraform configuration for the end-to-end deployment directly from the Overview page in your HCP organization.
Click through the tabs below to go through each step of selecting the Terraform automation deployment method.
Once you have selected the Terraform automation workflow, the HCP Portal presents two options:
- Use an existing virtual network (VNet)
- Create a new virtual network (VNet)
Select the tab for your preferred deployment method.
Fill in all the fields. The HCP region must be the same as your VNet region to reduce latency between the HCP Consul Dedicated server cluster and the Consul client running on the AKS cluster.
The wizard will use this information to customize your Terraform configuration so it can deploy an HVN and peer it with your existing VNet.
Tip
Click on the Where can I find this? links to get help locating the right values for each field.
Once you have filled in all the fields, scroll down to the Terraform Configuration section to find the generated Terraform configuration. Click on Copy code to copy it to your clipboard and save it in a file named main.tf.
Click on the accordion to find an example Terraform configuration. This example is not guaranteed to be up-to-date. Always refer to and use the configuration provided by the HCP UI.
main.tf
locals {
  hvn_region      = "westus2"
  hvn_id          = "consul-quickstart-1658469789875-hvn"
  cluster_id      = "consul-quickstart-1658469789875"
  subscription_id = "{{ .SubscriptionID }}"
  vnet_rg_name    = "{{ .VnetRgName }}"
  vnet_id         = "/subscriptions/{{ .SubscriptionID }}/resourceGroups/{{ .VnetRgName }}/providers/Microsoft.Network/virtualNetworks/{{ .VnetName }}"
  subnet1_id      = "/subscriptions/{{ .SubscriptionID }}/resourceGroups/{{ .VnetRgName }}/providers/Microsoft.Network/virtualNetworks/{{ .VnetName }}/subnets/{{ .Subnet1Name }}"
  subnet2_id      = "/subscriptions/{{ .SubscriptionID }}/resourceGroups/{{ .VnetRgName }}/providers/Microsoft.Network/virtualNetworks/{{ .VnetName }}/subnets/{{ .Subnet2Name }}"
  vnet_subnets = {
    "subnet1" = local.subnet1_id,
    "subnet2" = local.subnet2_id,
  }
}

terraform {
  required_providers {
    azurerm = {
      source                = "hashicorp/azurerm"
      version               = "~> 2.65"
      configuration_aliases = [azurerm.azure]
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "~> 2.14"
    }
    hcp = {
      source  = "hashicorp/hcp"
      version = ">= 0.23.1"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.4.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.3.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.11.3"
    }
  }
  required_version = ">= 1.0.11"
}

# Configure providers to use the credentials from the AKS cluster.
provider "helm" {
  kubernetes {
    client_certificate     = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.cluster_ca_certificate)
    host                   = azurerm_kubernetes_cluster.k8.kube_config.0.host
    password               = azurerm_kubernetes_cluster.k8.kube_config.0.password
    username               = azurerm_kubernetes_cluster.k8.kube_config.0.username
  }
}

provider "kubernetes" {
  client_certificate     = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.cluster_ca_certificate)
  host                   = azurerm_kubernetes_cluster.k8.kube_config.0.host
  password               = azurerm_kubernetes_cluster.k8.kube_config.0.password
  username               = azurerm_kubernetes_cluster.k8.kube_config.0.username
}

provider "kubectl" {
  client_certificate     = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.cluster_ca_certificate)
  host                   = azurerm_kubernetes_cluster.k8.kube_config.0.host
  load_config_file       = false
  password               = azurerm_kubernetes_cluster.k8.kube_config.0.password
  username               = azurerm_kubernetes_cluster.k8.kube_config.0.username
}

provider "azurerm" {
  subscription_id = local.subscription_id
  features {}
}

provider "azuread" {}

provider "hcp" {}

provider "consul" {
  address    = hcp_consul_cluster.main.consul_public_endpoint_url
  datacenter = hcp_consul_cluster.main.datacenter
  token      = hcp_consul_cluster_root_token.token.secret_id
}

data "azurerm_subscription" "current" {}

data "azurerm_resource_group" "rg" {
  name = local.vnet_rg_name
}

resource "azurerm_route_table" "rt" {
  name                = "${local.cluster_id}-rt"
  resource_group_name = data.azurerm_resource_group.rg.name
  location            = data.azurerm_resource_group.rg.location
}

resource "azurerm_network_security_group" "nsg" {
  name                = "${local.cluster_id}-nsg"
  location            = data.azurerm_resource_group.rg.location
  resource_group_name = data.azurerm_resource_group.rg.name
}

# Create an HCP HVN.
resource "hcp_hvn" "hvn" {
  cidr_block     = "172.25.32.0/20"
  cloud_provider = "azure"
  hvn_id         = local.hvn_id
  region         = local.hvn_region
}

# Peer the HVN to the vnet.
module "hcp_peering" {
  source  = "hashicorp/hcp-consul/azurerm"
  version = "~> 0.2.5"

  hvn                  = hcp_hvn.hvn
  prefix               = local.cluster_id
  security_group_names = [azurerm_network_security_group.nsg.name]
  subscription_id      = data.azurerm_subscription.current.subscription_id
  tenant_id            = data.azurerm_subscription.current.tenant_id

  subnet_ids = [local.subnet1_id, local.subnet2_id]
  vnet_id    = local.vnet_id
  vnet_rg    = data.azurerm_resource_group.rg.name
}

# Create the Consul cluster.
resource "hcp_consul_cluster" "main" {
  cluster_id      = local.cluster_id
  hvn_id          = hcp_hvn.hvn.hvn_id
  public_endpoint = true
  tier            = "development"
}

resource "hcp_consul_cluster_root_token" "token" {
  cluster_id = hcp_consul_cluster.main.id
}

# Create a user assigned identity (required for UserAssigned identity in combination with bringing our own subnet/nsg/etc).
resource "azurerm_user_assigned_identity" "identity" {
  name                = "aks-identity"
  location            = data.azurerm_resource_group.rg.location
  resource_group_name = data.azurerm_resource_group.rg.name
}

# Create the AKS cluster.
resource "azurerm_kubernetes_cluster" "k8" {
  name                    = local.cluster_id
  dns_prefix              = local.cluster_id
  location                = data.azurerm_resource_group.rg.location
  private_cluster_enabled = false
  resource_group_name     = data.azurerm_resource_group.rg.name

  network_profile {
    network_plugin     = "azure"
    service_cidr       = "10.30.0.0/16"
    dns_service_ip     = "10.30.0.10"
    docker_bridge_cidr = "172.17.0.1/16"
  }

  default_node_pool {
    name            = "default"
    node_count      = 3
    vm_size         = "Standard_D2_v2"
    os_disk_size_gb = 30
    pod_subnet_id   = local.subnet1_id
    vnet_subnet_id  = local.subnet2_id
  }

  identity {
    type                      = "UserAssigned"
    user_assigned_identity_id = azurerm_user_assigned_identity.identity.id
  }
}

# Create a Kubernetes client that deploys Consul and its secrets.
module "aks_consul_client" {
  source  = "hashicorp/hcp-consul/azurerm//modules/hcp-aks-client"
  version = "~> 0.2.5"

  cluster_id       = hcp_consul_cluster.main.cluster_id
  consul_hosts     = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["retry_join"]
  consul_version   = hcp_consul_cluster.main.consul_version
  k8s_api_endpoint = azurerm_kubernetes_cluster.k8.kube_config.0.host

  boostrap_acl_token    = hcp_consul_cluster_root_token.token.secret_id
  consul_ca_file        = base64decode(hcp_consul_cluster.main.consul_ca_file)
  datacenter            = hcp_consul_cluster.main.datacenter
  gossip_encryption_key = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["encrypt"]

  # The AKS node group will fail to create if the clients are
  # created at the same time. This forces the client to wait until
  # the node group is successfully created.
  depends_on = [azurerm_kubernetes_cluster.k8]
}

# Deploy Hashicups.
module "demo_app" {
  source  = "hashicorp/hcp-consul/azurerm//modules/k8s-demo-app"
  version = "~> 0.2.5"

  depends_on = [module.aks_consul_client]
}

# Authorize HTTP ingress to the load balancer.
resource "azurerm_network_security_rule" "ingress" {
  name                        = "http-ingress"
  priority                    = 301
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "80"
  source_address_prefix       = "*"
  destination_address_prefix  = module.demo_app.load_balancer_ip
  resource_group_name         = data.azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.nsg.name

  depends_on = [module.demo_app]
}

output "consul_root_token" {
  value     = hcp_consul_cluster_root_token.token.secret_id
  sensitive = true
}

output "consul_url" {
  value = hcp_consul_cluster.main.consul_public_endpoint_url
}

output "hashicups_url" {
  value = module.demo_app.hashicups_url
}

output "next_steps" {
  value = "Hashicups Application will be ready in ~2 minutes. Use 'terraform output consul_root_token' to retrieve the root token."
}

output "kube_config_raw" {
  value     = azurerm_kubernetes_cluster.k8.kube_config_raw
  sensitive = true
}
The locals block reflects the values of your existing VNet and resource group, in addition to pre-populated fields with reasonable defaults.
- The hvn_region defines the HashiCorp Virtual Network (HVN) region.
- The hvn_id defines your HVN ID. HCP will pre-populate this with a unique name that uses this pattern: consul-quickstart-UNIQUE_ID-hvn.
- The cluster_id defines your HCP Consul Dedicated cluster ID. HCP will pre-populate this with a unique name that uses this pattern: consul-quickstart-UNIQUE_ID.
- The subscription_id defines your Azure subscription ID.
- The vnet_rg_name defines the resource group your VNet is in.
- The vnet_id defines your VNet ID. Terraform will use this to set up a peering connection between the HVN and your VNet.
- The vnet_subnets defines your subnet IDs. Terraform will use this to set up a peering connection between the HVN and your subnets. In addition, it will deploy the AKS cluster into these subnets.
Tip
The hvn_id and cluster_id must be unique within your HCP organization.
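If you would rather not commit values such as the subscription ID directly in main.tf, one option is to promote them to Terraform input variables and have the locals reference those variables. The following is an optional sketch with illustrative variable names; it is not part of the generated configuration.

# Hypothetical input variables that would replace the hard-coded entries
# in the generated locals block.
variable "subscription_id" {
  type        = string
  description = "Azure subscription that contains the existing VNet"
}

variable "vnet_rg_name" {
  type        = string
  description = "Name of the resource group that contains the existing VNet"
}

locals {
  subscription_id = var.subscription_id
  vnet_rg_name    = var.vnet_rg_name
}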
Deploy resources
Now that you have the Terraform configuration saved in a main.tf file, you are ready to deploy the HVN, HCP Consul Dedicated cluster, and end-to-end development environment.
Verify that you have completed all the steps listed in the Prerequisites.
Note
If you are deploying into an existing VNet, ensure the subnet has internet connectivity.
Initialize your Terraform configuration to download the necessary Terraform providers and modules.
$ terraform init
Deploy the resources. Enter yes when prompted to accept your changes.
$ terraform apply

## ...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

## ...

Apply complete! Resources: 60 added, 0 changed, 0 destroyed.

Outputs:

consul_root_token = <sensitive>
consul_url = "https://servers-public-consul-5574a0fd.1cf93619.z1.hashicorp.cloud"
hashicups_url = "http://20.120.191.137"
kube_config_raw = <sensitive>
next_steps = "Hashicups Application will be ready in ~2 minutes. Use 'terraform output consul_root_token' to retrieve the root token."
Once you confirm, it will take a few minutes for Terraform to set up your end-to-end development environment. While you wait for Terraform to complete, proceed to the next section to review the Terraform configuration in more detail and better understand how to set up HCP Consul Dedicated for your workloads.
Review Terraform configuration
The Terraform configuration deploys an end-to-end development environment by:
- Creating a new HashiCorp Virtual Network (HVN) and a single-node Consul development server
- Connecting the HVN with your Azure virtual network (VNet)
- Provisioning an AKS cluster and installing a Consul client
- Deploying HashiCups, a demo application that uses Consul service mesh
Before starting these steps, Terraform retrieves information about your Azure environment.
Terraform uses data sources to retrieve information about your current Azure subscription and your existing resource group.
main.tf
data "azurerm_subscription" "current" {} data "azurerm_resource_group" "rg" { name = local.vnet_rg_name}
The Terraform configuration also defines an Azure network security group. When Terraform configures the peering connection, it will add Consul-specific rules to this network security group.
main.tf
resource "azurerm_network_security_group" "nsg" { name = "${local.cluster_id}-nsg" resource_group_name = azurerm_resource_group.rg.name location = azurerm_resource_group.rg.location}
Create HVN and HCP Consul Dedicated
This Terraform configuration defines hcp_hvn and hcp_consul_cluster resources to deploy your HVN and HCP Consul Dedicated cluster.
The HVN resource references the hvn_id and hvn_region local values. The resource also uses 172.25.32.0/20 as the default for its CIDR block. Your HVN's CIDR block should not conflict with your VNet's CIDR block.
main.tf
resource "hcp_hvn" "hvn" { cidr_block = "172.25.32.0/20" cloud_provider = "azure" hvn_id = local.hvn_id region = local.hvn_region}
The HCP Consul Dedicated resource references the HVN's ID because HashiCorp deploys the HCP Consul Dedicated cluster into the HVN. The HCP Consul Dedicated cluster has a public endpoint and is in the development cluster tier. Development tier HCP Consul Dedicated clusters only have one server agent. For production workloads, we do not recommend public endpoints for HCP Consul Dedicated.
Note
HCP Consul Dedicated on Azure only supports the development cluster tier during the public beta.
main.tf
resource "hcp_consul_cluster" "main" { cluster_id = local.cluster_id hvn_id = hcp_hvn.hvn.hvn_id public_endpoint = true tier = "development"}
Connect HVN with VNet configuration
This Terraform configuration uses the hashicorp/hcp-consul/azurerm Terraform module to connect the HVN with your VNet. This module:
- creates and accepts a peering connection between the HVN and the VNet
- creates HVN routes that direct HCP traffic to the subnets' CIDR ranges
- creates the Azure ingress rules that HCP Consul Dedicated needs to communicate with the Consul clients
Notice that the module references the HVN and network security group, in addition to your existing resource group, VNet, and subnets.
main.tf
module "hcp_peering" { source = "hashicorp/hcp-consul/azurerm" version = "~> 0.2.5" hvn = hcp_hvn.hvn prefix = local.cluster_id security_group_names = [azurerm_network_security_group.nsg.name] subscription_id = data.azurerm_subscription.current.subscription_id tenant_id = data.azurerm_subscription.current.tenant_id subnet_ids = [local.subnet1_id, local.subnet2_id] vnet_id = local.vnet_id vnet_rg = data.azurerm_resource_group.rg.name}
Provision Azure AKS and install Consul client configuration
The quickstart configuration defines an AKS cluster with three nodes.
main.tf
resource "azurerm_kubernetes_cluster" "k8" { name = local.cluster_id dns_prefix = local.cluster_id location = azurerm_resource_group.rg.location private_cluster_enabled = false resource_group_name = azurerm_resource_group.rg.name network_profile { network_plugin = "azure" service_cidr = "10.30.0.0/16" dns_service_ip = "10.30.0.10" docker_bridge_cidr = "172.17.0.1/16" } default_node_pool { name = "default" node_count = 3 vm_size = "Standard_D2_v2" os_disk_size_gb = 30 pod_subnet_id = module.network.vnet_subnets[0] vnet_subnet_id = module.network.vnet_subnets[1] } identity { type = "UserAssigned" user_assigned_identity_id = azurerm_user_assigned_identity.identity.id } depends_on = [module.network]}
This Terraform configuration uses the hashicorp/hcp-consul/azurerm//modules/hcp-aks-client Terraform module to install the Consul client on the AKS cluster.
In this tutorial, you will apply HCP Consul Dedicated's secure-by-default design with Terraform by configuring your AKS cluster with the gossip encryption key, the Consul CA certificate, and a permissive ACL token. As a result, the hcp-aks-client module requires the HCP Consul Dedicated cluster token (root ACL token) and the HCP Consul Dedicated client configuration (CA certificate and gossip encryption key).
The HCP Consul Dedicated cluster token bootstraps the cluster's ACL system. The configuration uses hcp_consul_cluster_root_token to generate a cluster token.
Note
The resource will generate a cluster token, which is a sensitive value. For production workloads, refer to the list of recommendations for storing sensitive information in Terraform.
main.tf
resource "hcp_consul_cluster_root_token" "token" { cluster_id = hcp_consul_cluster.main.id}
The hcp_consul_cluster resource has attributes that store the cluster's CA certificate, gossip encryption key, private CA file, private HCP Consul Dedicated URL, and more.
main.tf
module "aks_consul_client" { source = "hashicorp/hcp-consul/azurerm//modules/hcp-aks-client" version = "~> 0.2.5" ## ... boostrap_acl_token = hcp_consul_cluster_root_token.token.secret_id consul_ca_file = base64decode(hcp_consul_cluster.main.consul_ca_file) datacenter = hcp_consul_cluster.main.datacenter gossip_encryption_key = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["encrypt"] ## ...}
The hcp-aks-client module deploys a Consul client onto the AKS cluster by acting as a wrapper for the Consul Helm chart. Refer to the module source for a complete list of resources deployed by the module.
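To give a sense of what that wrapper does, the following hypothetical helm_release installs the Consul Helm chart configured to join external servers rather than run servers inside AKS. It is a simplified sketch under those assumptions, not the module's actual code; the real values come from the HCP cluster attributes shown above.

# Simplified, hypothetical sketch of installing the Consul Helm chart
# against external HCP Consul Dedicated servers.
resource "helm_release" "consul_sketch" {
  name       = "consul"
  repository = "https://helm.releases.hashicorp.com"
  chart      = "consul"

  # Run clients only; the servers live in HCP.
  set {
    name  = "server.enabled"
    value = "false"
  }
  set {
    name  = "externalServers.enabled"
    value = "true"
  }

  # Match HCP Consul Dedicated's secure-by-default settings.
  set {
    name  = "global.acls.manageSystemACLs"
    value = "true"
  }
  set {
    name  = "global.tls.enabled"
    value = "true"
  }
}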
Deploy HashiCups configuration
The hashicorp/hcp-consul/azurerm//modules/k8s-demo-app Terraform module deploys the HashiCups demo app. The module source has a complete list of YAML files that define the HashiCups services, intention CRDs, and ingress gateway.
Since HCP Consul Dedicated on Azure is secure by default, the datacenter is created with a "default deny" intention in place. This means that, by default, no services can interact with each other until an operator explicitly allows them to do so by creating intentions for each inter-service operation they wish to allow. The intentions.yaml file defines service intentions between the HashiCups services through the ServiceIntentions CRD, enabling them to communicate with each other.
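For illustration, a ServiceIntentions resource in that file looks roughly like the following sketch, applied here through the kubectl provider that the configuration already includes. The service names are examples and may not match the module's manifests exactly.

# Hypothetical sketch of a ServiceIntentions CRD that allows one HashiCups
# service to call another.
resource "kubectl_manifest" "example_intention" {
  yaml_body = <<-EOT
    apiVersion: consul.hashicorp.com/v1alpha1
    kind: ServiceIntentions
    metadata:
      name: public-api
    spec:
      destination:
        name: public-api
      sources:
        - name: frontend
          action: allow
  EOT
}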
Verify created resources
Once Terraform completes, you can verify the resources using the HCP Consul Dedicated UI or through the Consul CLI.
Consul UI
Retrieve your HCP Consul Dedicated dashboard URL and open it in your browser.
$ terraform output -raw consul_url
https://servers-public-consul-5574a0fd.1cf93619.z1.hashicorp.cloud
Next, retrieve your Consul root token. You will use this token to authenticate your Consul dashboard.
$ terraform output -raw consul_root_token
00000000-0000-0000-0000-000000000000
In your HCP Consul Dedicated dashboard, sign in with the root token you just retrieved.
You should find a list of services that includes consul and your HashiCups services.
Consul CLI configuration
In order to use the CLI, you must set environment variables that store your ACL token and HCP Consul Dedicated cluster address.
First, set your CONSUL_HTTP_ADDR environment variable.
$ export CONSUL_HTTP_ADDR=$(terraform output -raw consul_url)
Then, set your CONSUL_HTTP_TOKEN environment variable.
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_root_token)
Retrieve a list of members in your datacenter to verify your Consul CLI is set up properly.
$ consul members
Node                                  Address           Status  Type    Build       Protocol  DC                               Segment
0b835929-f8b7-5781-ba7e-89d8e5d5ed40  172.25.32.4:8301  alive   server  1.11.6+ent  2         consul-quickstart-1658423089961  <all>
aks-default-63161318-vmss000000       10.0.1.29:8301    alive   client  1.11.6+ent  2         consul-quickstart-1658423089961  <default>
aks-default-63161318-vmss000001       10.0.1.48:8301    alive   client  1.11.6+ent  2         consul-quickstart-1658423089961  <default>
aks-default-63161318-vmss000002       10.0.1.5:8301     alive   client  1.11.6+ent  2         consul-quickstart-1658423089961  <default>
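Because main.tf already configures the Consul provider with the cluster address and root token, you could also confirm service registration from Terraform itself. This is an optional sketch, not part of the generated configuration.

# Optional, hypothetical check: list the services registered in the datacenter
# through the Consul provider that main.tf configures.
data "consul_services" "all" {}

output "registered_services" {
  value = data.consul_services.all.names
}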
HashiCups application
The end-to-end development environment deploys HashiCups. Visit the hashicups_url value to verify that Terraform deployed HashiCups successfully and that its services can communicate with each other.
Retrieve your HashiCups URL and open it in your browser.
$ terraform output -raw hashicups_url
http://20.120.191.137
Clean up resources
Now that you have completed the tutorial, destroy the resources you created with Terraform. Enter yes to confirm the destruction process.
$ terraform destroy

## ...

Destroy complete! Resources: 60 destroyed.
Next steps
In this tutorial, you deployed an end-to-end development environment and reviewed the Terraform configuration that defines it.
If you encounter any issues, please contact the HCP team at support.hashicorp.com.