HCP Terraform Driver
This feature requires Consul-Terraform-Sync Enterprise, which is available with Consul Enterprise.
Consul-Terraform-Sync (CTS) is more powerful when you integrate it with HCP Terraform. Integrating with HCP Terraform provides features such as enhanced workspaces and insight into Terraform operations as CTS dynamically updates your network infrastructure. CTS is compatible with both the self-hosted and managed service versions of HCP Terraform, and it supports all tiers of the HCP Terraform managed service.
This page describes how the HCP Terraform driver operates within CTS.
Terraform Workspace Automation
CTS manages Terraform runs following the API-driven run workflow for workspaces in HCP Terraform.
On startup, CTS:
- Creates or discovers HCP Terraform workspaces corresponding to the configured tasks.
- Prepares the local environment and generates Terraform configuration files that make up the root module for each task.
- Packages the generated files and uploads them as a configuration version for the task's workspace on HCP Terraform.
Once all workspaces are set up, CTS monitors the Consul catalog for service changes. When relevant changes are detected, the HCP Terraform driver dynamically updates input variables for that task directly as workspace variables using the HCP Terraform API. The driver then queues a run on the workspace, with auto-apply enabled, to update your network infrastructure.
Note: Although workspaces for tasks are executed in isolated environments, this does not guarantee the infrastructure changes from concurrent task executions are independent. Ensure that modules across all tasks are not modifying the same resource objects or have overlapping changes that may result in race conditions during automation.
Remote Workspaces
CTS will discover or create new workspaces based on your configured tasks. The task configuration `name` and `description` are used to set the workspace name and description. The task configuration `terraform_cloud_workspace` is used to set options like the Terraform version, execution mode, and agent pool, if relevant. CTS will also use any globally set workspace configurations specified in the driver configuration `workspaces`.
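For reference, a minimal sketch of a task that sets these workspace options might look like the following; the module source, provider name, agent pool ID, and Terraform version are placeholder assumptions, and the `terraform_cloud_workspace` attribute names should be checked against the configuration reference for your CTS version:

```hcl
task {
  name        = "cts-example"
  description = "Example task automated through HCP Terraform"
  module      = "path/to/example-module" # placeholder module source
  providers   = ["myprovider"]           # placeholder provider name

  condition "services" {
    names = ["web", "api"]
  }

  # Optional per-task workspace settings used by the HCP Terraform driver
  terraform_cloud_workspace {
    execution_mode    = "agent"        # "remote" (default) or "agent"
    agent_pool_id     = "apool-123456" # placeholder; only relevant for agent execution mode
    terraform_version = "1.5.7"        # placeholder version
  }
}
```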
Workspace automation requirements are in place to avoid unintentionally overriding other workspaces. A workspace automated by CTS:
- Must be set to remote or agent execution mode
- Cannot be connected to a VCS
- Cannot have an existing configuration version uploaded by another application
- Must satisfy workspace tag requirements and tag restrictions set by the CTS operator
Workspaces created by CTS will be configured with the following settings:
| Setting | Value |
|---------|-------|
| Workspace name | CTS task name |
| Description | CTS task description |
| Execution mode | `task.terraform_cloud_workspace.execution_mode`, or remote by default |
| Apply method | Auto apply |
| Terraform version | `task.terraform_cloud_workspace.terraform_version`, `task.terraform_version` (deprecated), or the latest Terraform version compatible with CTS available for the organization |
| Tags | `source:cts` and additional tags set by the CTS operator |
Other workspace settings can be pre-configured or updated, such as setting the workspace to manual apply or adding a run notification to send messages to a Slack channel when CTS updates your network infrastructure.
Manual Apply
CTS can automate remote workspaces with either auto apply or manual apply configured. Having CTS manage workspaces with manual apply is useful for adding an approval stage to CTS automation. Operators can manually inspect and approve or discard runs that CTS has queued based on the task run condition.
When CTS detects new changes for a workspace that already has a run pending on approval, CTS will discard the stale run and queue a new run with the latest values. The new run will go through plan and then again wait on an operator to approve it. Only once the run is approved will the infrastructure be updated with the latest Consul changes.
There are two approaches to set up manual apply for a workspace managed by CTS, depending on how the workspace is created.
- For CTS-created workspaces, update the apply method from auto to manual via the HCP Terraform web application or API.
- For pre-configured workspaces, create the workspace prior to CTS task automation via the HCP Terraform web application or API:
  1. Create a workspace with the same name as the desired task.
  2. Set the workspace to the API-driven run workflow and the execution mode to remote.
  3. Ensure that the apply method for the workspace is set to manual apply.
  4. Configure the task for the workspace and run CTS.
Tip: Set up run notifications for workspaces with manual apply so you do not miss runs automatically queued by CTS. Consider setting the buffer period or a schedule condition to group changes together and reduce the number of runs requiring approval.
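As a rough illustration of this tip, the sketch below shows a task that uses a buffer period to batch bursts of Consul changes before a run is queued; the task values and timing settings are placeholders, not recommendations:

```hcl
task {
  name      = "cts-example-buffered"
  module    = "path/to/example-module" # placeholder module source
  providers = ["myprovider"]           # placeholder provider name

  condition "services" {
    names = ["web", "api"]
  }

  # Group rapid service changes together before queueing a run,
  # reducing the number of runs that need manual approval.
  buffer_period {
    enabled = true
    min     = "20s"
    max     = "60s"
  }
}
```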
Configuration Version
An example configuration version for a task named "cts-example" would have the folder structure below when running with the HCP Terraform driver and using the default working directory.
    $ tree sync-tasks/
    sync-tasks/
    └── cts-example/
        ├── main.tf
        └── variables.tf
`main.tf` - The main file contains the `terraform` block, `provider` blocks, and a `module` block calling the module configured for the task.
- `terraform` block - The corresponding provider source and versions for the task from the configuration files are placed into this block for the root module.
- `provider` blocks - The provider blocks generated in the root module resemble the `terraform_provider` blocks from the CTS configuration. They have identical arguments and are set from the intermediate variables created per provider.
- `module` block - The module block is where the task's module is called as a child module. The child module contains the core logic for automation. Required and optional input variables are passed as arguments to the module.
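As a rough illustration, a generated `main.tf` for the "cts-example" task might resemble the sketch below; the provider name, source, version, and module source are placeholder assumptions, and the file CTS actually generates contains additional detail:

```hcl
# Illustrative sketch of a generated main.tf root module
terraform {
  required_providers {
    # Placeholder provider; CTS fills in the source and version
    # from the terraform_provider blocks in its configuration.
    myprovider = {
      source  = "example/myprovider"
      version = "~> 1.0"
    }
  }
}

provider "myprovider" {
  # Arguments mirror the terraform_provider block in the CTS configuration
  # and are supplied through an intermediate variable created per provider.
  address = var.myprovider.address
  token   = var.myprovider.token
}

module "cts-example" {
  source = "path/to/example-module" # placeholder module source

  # Dynamic service information maintained by CTS
  services = var.services
}
```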
`variables.tf` - This file contains three types of variable declarations:
- The `services` input variable (required), which determines module compatibility with Consul-Terraform-Sync (see compatible Terraform modules for more details).
- Any additional optional input variables provided by CTS that the module may use.
- Various intermediate variables used to configure providers. Intermediate provider variables are interpolated from the provider blocks and arguments configured in the CTS configuration.
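For illustration, a trimmed-down sketch of the required `services` declaration is shown below; the declaration CTS actually generates defines more service attributes than the ones listed here:

```hcl
# Illustrative, abbreviated sketch of variables.tf
variable "services" {
  description = "Consul services monitored by Consul-Terraform-Sync"
  type = map(
    object({
      # Abbreviated: the generated declaration includes additional
      # attributes such as node and metadata fields.
      id      = string
      name    = string
      address = string
      port    = number
      status  = string
      tags    = list(string)
    })
  )
}
```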
`variables.module.tf` - This file is created if there are variables configured for the task and contains the interpolated variable declarations that match the variables from the configuration. These are then used to proxy the configured variables to the module through explicit assignment in the module block.
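A minimal sketch of this proxying is shown below, assuming a hypothetical configured variable named `example_count`:

```hcl
# Illustrative sketch of variables.module.tf for a hypothetical variable
variable "example_count" {
  type    = number
  default = 1
}
```

The generated module block then assigns `example_count = var.example_count` so the configured value reaches the task's module.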
Variables
CTS uses Terraform input variables to reflect the latest Consul service information. They are used as parameters for your Terraform module. Input variables are dynamic and are updated by the driver throughout the runtime of CTS.
You can view the latest service information in the HCP Terraform UI by navigating to the workspace and clicking the "Variables" tab in the workspace navigation.
Caution: Dynamic variables maintained by CTS are formatted for automation. Unexpected manual changes to these variables may result in automation errors.
Setting Up HCP Terraform Driver
Deployment
Because a CTS instance can only be configured with one driver, an instance can only be associated with either a Terraform driver or an HCP Terraform driver. If you need to run both types of drivers, deploy a separate CTS instance for each type of driver. Similarly, if you need to run CTS across multiple HCP Terraform organizations, deploy a separate instance for each organization.
Required Setup
This section captures the requirements for setting up CTS to integrate with your HCP Terraform solution:
- Hostname of your HCP Terraform self-hosted distribution
- Name of your organization
- Team API token used for authentication with HCP Terraform
Prior to running CTS with an HCP Terraform driver, you will need an account and organization set up, as well as a dedicated token. We recommend using a team token that is restricted to Manage Workspaces-level permissions. Below are the steps for the recommended setup.
The first step is to create an account with your HCP Terraform service. After creating an account, create a new organization or select an existing organization. The address of your HCP Terraform service will be used to configure the `hostname`, and the organization name will be used to configure the `organization` on the HCP Terraform driver.
Once you have an account and organization, the next step is to create a team. We recommend using a dedicated team and team token to run and authenticate CTS. Using a team token has the benefits of restricting organization permissions as well as associating CTS automated actions with the team rather than an individual.
After creating a dedicated team, update the team's permissions with the "Manage Workspaces" organization access level. CTS's main work revolves around creating and managing workspaces. Therefore, restricting the dedicated team's permissions to the Manage Workspaces level is sufficient and reduces security risk.
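If you prefer to manage this setup as code instead of through the web application, the sketch below shows one way to create the dedicated team and its token with the HashiCorp `tfe` provider; the organization and team names are placeholders, and the permissions you grant should match your own security requirements:

```hcl
# Illustrative sketch using the hashicorp/tfe provider
terraform {
  required_providers {
    tfe = {
      source = "hashicorp/tfe"
    }
  }
}

provider "tfe" {
  # hostname defaults to app.terraform.io; set it for a self-hosted distribution
}

resource "tfe_team" "cts" {
  name         = "consul-terraform-sync" # placeholder team name
  organization = "my-org"                # placeholder organization

  organization_access {
    manage_workspaces = true
  }
}

# Team token used by CTS for API authentication
resource "tfe_team_token" "cts" {
  team_id = tfe_team.cts.id
}
```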
After setting the team's permissions, the final setup step is to generate the associated team token, which can be done on the same team management page. This token will be used by CTS for API authentication and will be used to configure the `token` on the HCP Terraform driver.
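Putting the hostname, organization, and token together, a CTS driver configuration might look like the following sketch; the `terraform-cloud` driver block name and the values shown are assumptions and placeholders, so check the CTS configuration reference for the attributes supported by your version:

```hcl
# Illustrative CTS configuration sketch for the HCP Terraform driver
driver "terraform-cloud" {
  hostname     = "https://app.terraform.io" # or your self-hosted distribution
  organization = "my-org"                   # placeholder organization name
  token        = "<dedicated team token>"   # placeholder; prefer loading from a secure source

  # Optional global workspace settings; the attribute name below is an assumption
  workspaces {
    tags = ["networking"] # placeholder additional tag
  }
}
```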
Recommendations
We recommend configuring workspaces managed by CTS with run notifications through the HCP Terraform web application. Run notifications notify external systems about the progress of runs and can help alert users to CTS events, particularly errored runs.
In order to configure a run notification, users can manually create a notification configuration for workspaces automated by CTS. A workspace may already exist for a task if the workspace name is identical to the configured task's `name`. This may occur if CTS has already run and created the workspace for the task. This may also occur if the workspace was manually created for the task prior to running CTS.
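As an example of wiring this up as code, the sketch below uses the `tfe` provider's notification configuration resource to send Slack messages for runs on a CTS-managed workspace; the workspace name, webhook URL, and chosen triggers are placeholder assumptions:

```hcl
# Illustrative sketch: Slack run notification for a CTS-managed workspace
data "tfe_workspace" "cts_example" {
  name         = "cts-example" # placeholder; matches the CTS task name
  organization = "my-org"      # placeholder organization
}

resource "tfe_notification_configuration" "cts_slack" {
  name             = "cts-run-notifications"
  enabled          = true
  destination_type = "slack"
  url              = "https://hooks.slack.com/services/EXAMPLE" # placeholder webhook URL
  triggers         = ["run:errored", "run:needs_attention"]
  workspace_id     = data.tfe_workspace.cts_example.id
}
```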