Migrate state from S3 to HCP Terraform
Terraform backends define how to perform Terraform operations and where to store Terraform state. Terraform supports multiple remote backends, including AWS S3 for state storage and DynamoDB for state locking to prevent concurrent operations. HCP Terraform also offers both of these features, as well as built-in version control system (VCS) integration, remote execution environments for safe and uninterrupted runs, and a private module and provider registry.
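As an illustration, a backend is declared inside a configuration's `terraform` block. The following is a minimal sketch with placeholder bucket, key, and table names, not values from this tutorial:

```hcl
# Minimal sketch of an S3 backend configuration.
# All values below are placeholders for illustration only.
terraform {
  backend "s3" {
    bucket         = "my-state-bucket"           # placeholder bucket name
    key            = "path/to/terraform.tfstate" # object key for the state file
    region         = "us-west-1"
    dynamodb_table = "my-lock-table"             # placeholder table for state locking
  }
}
```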
In this tutorial, you will migrate state from S3 to HCP Terraform. To do so, you will initially use an AWS S3 bucket to store your state, and then migrate your state to HCP Terraform. Finally, you will trigger a plan and apply through HCP Terraform.
Prerequisites
This tutorial assumes that you are familiar with the Terraform and HCP Terraform plan and apply workflows. If you are new to Terraform itself, refer first to the Getting Started tutorials. If you are new to HCP Terraform, refer to the Get Started - HCP Terraform tutorials.
For this tutorial, you will need:
- the Terraform v1.1.0+ CLI installed locally.
- an HCP Terraform account.
- an AWS account with AWS Credentials configured for use with Terraform.
Create variable set for AWS credentials
You will need to configure your HCP Terraform workspace with AWS provider credentials for provider authentication. The most convenient way to do so is with an HCP Terraform variable set, which allows you to centrally manage provider credentials and reuse them across workspaces.
If you do not have a variable set containing your AWS credentials, follow the steps in the Create and Use a Variable Set tutorial to create one. Select Apply to specific workspaces for the variable set scope.
Tip
If you have temporary AWS credentials, you must also add your `AWS_SESSION_TOKEN` as an environment variable to the variable set.
Create resources for S3 remote backend
In your terminal, clone the example repository. This repository contains Terraform configuration to deploy an S3 bucket and a DynamoDB table, which you will use as a remote backend for your EC2 instance configuration before you migrate your state to HCP Terraform.
```shell-session
$ git clone https://github.com/hashicorp/learn-terraform-s3-remote-state
```
Navigate to the cloned repository.
```shell-session
$ cd learn-terraform-s3-remote-state
```
Initialize the configuration.
```shell-session
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 3.65.0"...
- Installing hashicorp/aws v3.65.0...
- Installed hashicorp/aws v3.65.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```
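Before applying, it can help to skim the repository's `main.tf`. The backend resources resemble the following sketch; the resource names and attributes here are assumptions for illustration, so refer to the cloned configuration for the authoritative version:

```hcl
# Sketch of the backend resources this repository provisions.
# Resource and attribute names are assumptions for illustration.
resource "aws_s3_bucket" "terraform_state" {
  # A prefix lets AWS append a unique suffix to the bucket name.
  bucket_prefix = "learn-s3-remote-backend-"

  # Keep old state versions so you can recover from accidental changes.
  versioning {
    enabled = true
  }
}

resource "aws_dynamodb_table" "terraform_state_lock" {
  name         = "terraform-state-lock-dynamo"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  # The S3 backend requires a string attribute named LockID for locking.
  attribute {
    name = "LockID"
    type = "S"
  }
}
```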
Next, apply the configuration. Respond `yes` to the prompt to confirm.
```shell-session
$ terraform apply

## ...

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + dynamodb_endpoint = "terraform-state-lock-dynamo"
  + s3_bucket_name    = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_dynamodb_table.terraform_state_lock: Creating...
aws_s3_bucket.terraform_state: Creating...
aws_s3_bucket.terraform_state: Creation complete after 3s [id=learn-s3-remote-backend-20210720075828379700000001]
aws_dynamodb_table.terraform_state_lock: Creation complete after 8s [id=terraform-state-lock-dynamo]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

dynamodb_endpoint = "terraform-state-lock-dynamo"
s3_bucket_name = "learn-s3-remote-backend-20210720075828379700000001"
```
Warning
Do not use this configuration for non-educational purposes. The S3 bucket objects are not properly configured with IAM and your state file may be publicly accessible.
Now that you have an S3 bucket and DynamoDB table, you are ready to use them to store your Terraform state.
Notice that Terraform displays the `dynamodb_endpoint` and `s3_bucket_name` outputs. You will use these values to configure your remote backend in the next section.
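These outputs correspond to `output` blocks in the configuration along these lines; the exact resource references are assumptions for illustration:

```hcl
# Hypothetical output definitions matching the values printed above.
output "s3_bucket_name" {
  value = aws_s3_bucket.terraform_state.id
}

output "dynamodb_endpoint" {
  value = aws_dynamodb_table.terraform_state_lock.name
}
```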
Clone example repository
Navigate out of your first repository directory.
```shell-session
$ cd ..
```
Clone the second example repository from GitHub. This repository contains the configuration to deploy an Ubuntu EC2 instance in the US West 1 region.
```shell-session
$ git clone https://github.com/hashicorp/learn-terraform-migrate-s3-tfc
```
Navigate to the cloned repository directory.
```shell-session
$ cd learn-terraform-migrate-s3-tfc
```
Update and review Terraform configuration
Open `main.tf`. The `terraform` block defines the S3 remote backend configuration, instructing Terraform to store your state in the S3 bucket you provisioned in the last step. The configuration also uses the DynamoDB table for state locking.
Update the bucket name to the `s3_bucket_name` output value from the previous step.
learn-terraform-migrate-s3-tfc/main.tf
```hcl
terraform {
  backend "s3" {
    encrypt        = true
    bucket         = "learn-s3-remote-backend-20210720075828379700000001"
    dynamodb_table = "terraform-state-lock-dynamo"
    key            = "learn-terraform-s3-migrate-tfc/terraform.tfstate"
    region         = "us-west-1"
  }

  ## ...
}
```
Find the `aws_instance.web` resource, which defines an Ubuntu EC2 instance. Notice that the instance's `workspace` tag is set to `terraform.workspace`, a special value that resolves to the configuration's workspace name.
learn-terraform-migrate-s3-tfc/main.tf
```hcl
resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"

  tags = {
    Name      = "HelloWorld"
    workspace = terraform.workspace
  }
}
```
Deploy EC2 instance
Initialize the Terraform configuration.
```shell-session
$ terraform init

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 3.65.0"...
- Installing hashicorp/aws v3.65.0...
- Installed hashicorp/aws v3.65.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```
Next, apply the configuration. Respond `yes` to the prompt to confirm.
```shell-session
$ terraform apply

## ...

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_instance.web: Creating...
aws_instance.web: Still creating... [10s elapsed]
aws_instance.web: Still creating... [20s elapsed]
aws_instance.web: Still creating... [30s elapsed]
aws_instance.web: Creation complete after 33s [id=i-0497f64b5bb340b1b]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
Visit the AWS S3 Dashboard and select the S3 bucket containing your remote state. You will find a Terraform state file in the bucket.
Verify EC2 instance
Open the AWS EC2 dashboard and find your recently deployed EC2 instance.
Notice that the EC2 instance's `workspace` tag resolves to `default`. Since you ran Terraform in the default workspace, the `terraform.workspace` value resolves to `default`.
Deploy development EC2 instance
Terraform CLI workspaces enable you to associate multiple states with a single configuration. Some organizations use CLI workspaces to provision similar infrastructure across multiple environments (for example: development, test, production). Though modules are now the best practice way to reuse configuration, many Terraform configurations still use CLI workspaces.
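For example, a configuration can branch on the workspace name to vary settings per environment. The following is a hypothetical sketch, not part of the example repository:

```hcl
# Hypothetical: pick a larger instance type only in the "prod" workspace.
locals {
  instance_type = terraform.workspace == "prod" ? "t2.large" : "t2.micro"
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = local.instance_type
}
```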
HCP Terraform workspaces are different from Terraform CLI workspaces. HCP Terraform workspaces manage collections of infrastructure; separate workspaces function like completely separate working directories.
This tutorial guides you through setting up Terraform CLI workspaces to show you how to effectively migrate configuration organized in workspaces to HCP Terraform.
Create a new Terraform workspace named `dev` to deploy another EC2 instance using the same Terraform configuration.
```shell-session
$ terraform workspace new dev
Created and switched to workspace "dev"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
```
Next, apply the configuration. Respond `yes` to the prompt to confirm.
```shell-session
$ terraform apply

## ...

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions in workspace "dev"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_instance.web: Creating...
aws_instance.web: Still creating... [10s elapsed]
aws_instance.web: Still creating... [20s elapsed]
aws_instance.web: Still creating... [30s elapsed]
aws_instance.web: Creation complete after 33s [id=i-005787460b2252e42]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
Navigate to the S3 bucket containing your remote state. Find the directory named `env:/`. Select the directory, then select `dev/`. This directory contains your Terraform state file for the `dev` workspace.
Replace remote backend with HCP Terraform
Using AWS services as a remote backend requires managing disparate services to handle your Terraform state. You need an S3 bucket for state storage, and a DynamoDB instance for state locking. You must manage the security and lifecycle of these resources for each of your Terraform projects. Instead, you can use HCP Terraform for state storage, locking, and remote execution for all of your Terraform projects.
To migrate your state from S3 to HCP Terraform, you need to replace the backend configuration.
Still working in your `learn-terraform-migrate-s3-tfc` directory, replace the `backend "s3"` block in `main.tf` with the following `cloud` block, replacing `<YOUR-ORG-NAME>` with your HCP Terraform organization's name.
learn-terraform-migrate-s3-tfc/main.tf
```hcl
cloud {
  hostname     = "app.terraform.io"
  organization = "<YOUR-ORG-NAME>"

  workspaces {
    tags = ["learnterraform"]
  }
}
```
Your final `terraform` block will be similar to the following, except with your HCP Terraform organization name in place of `hashicorp-learn`.
learn-terraform-migrate-s3-tfc/main.tf
```hcl
terraform {
  cloud {
    hostname     = "app.terraform.io"
    organization = "hashicorp-learn"

    workspaces {
      tags = ["learnterraform"]
    }
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.65.0"
    }
  }
}
```
Since you are migrating multiple Terraform workspaces to HCP Terraform, the `workspaces` block expects `tags`. When migrating a single workspace, use the `name` attribute with a string value instead.
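For reference, a single-workspace migration would use a `cloud` block like the following, with `name` instead of `tags`; the workspace name here is a placeholder:

```hcl
cloud {
  hostname     = "app.terraform.io"
  organization = "<YOUR-ORG-NAME>"

  workspaces {
    # Use a single, explicit workspace name instead of tags.
    name = "my-single-workspace" # placeholder name
  }
}
```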
Log in to HCP Terraform
Log into your HCP Terraform account in your terminal.
```shell-session
$ terraform login
Terraform will request an API token for app.terraform.io using your browser.

If login is successful, Terraform will store the token in plain text in
the following file for use by subsequent commands:

    /Users/<USER>/.terraform.d/credentials.tfrc.json

Do you want to proceed?
  Only 'yes' will be accepted to confirm.

  Enter a value:
```
Confirm with `yes` and follow the workflow in the browser window that automatically opens. Paste the generated API token into your terminal when prompted. For more detailed instructions on logging in, review the Authenticate the CLI with HCP Terraform tutorial.
Migrate existing state to HCP Terraform
Run `terraform init` to migrate your Terraform state file from S3 to HCP Terraform.
Tip
In production, you should stop all existing runs for the migrating workspaces before moving them to a multi-user environment like HCP Terraform. Stop all plans and applies or wait for them to complete before continuing.
```shell-session
$ terraform init

Initializing HCP Terraform...
Migrating from backend "s3" to HCP Terraform.
```
When prompted, enter `main` to name your default workspace.
Note
HCP Terraform returns an error if you attempt to name a workspace `default`.
```shell-session
  HCP Terraform requires all workspaces to be given an explicit name. Please
  provide a new workspace name (e.g. dev, test) that will be used to migrate
  the existing default workspace.

  Enter a value: main
```
When prompted, enter `1` to rename your workspaces according to a pattern you will provide at the next prompt.
```shell-session
  Would you like to rename your workspaces?

  Unlike typical Terraform workspaces representing an environment associated
  with a particular configuration (e.g. production, staging, development),
  HCP Terraform workspaces are named uniquely across all configurations used
  within an organization. A typical strategy to start with is
  <COMPONENT>-<ENVIRONMENT>-<REGION> (e.g. networking-prod-us-east,
  networking-staging-us-east).

  ## ...

  When migrating existing workspaces from the backend "s3" to HCP Terraform,
  would you like to rename your workspaces? Enter 1 or 2.

  1. Yes, I'd like to rename all workspaces according to a pattern I will
     provide.
  2. No, I would not like to rename my workspaces. Migrate them as currently
     named.

  Enter a value: 1
```
When prompted, enter `learn-terraform-s3-migrate-tfc-*` to specify the pattern for the HCP Terraform workspace names.
```shell-session
  How would you like to rename your workspaces?

  Enter a pattern with an asterisk (*) to rename all workspaces based on their
  previous names. The asterisk represents the current workspace name. For
  example, if a workspace is currently named 'prod', the pattern 'app-*' would
  yield 'app-prod' for a new workspace name; 'app-*-region1' would yield
  'app-prod-region1'.

  Enter a value: learn-terraform-s3-migrate-tfc-*

Acquiring state lock. This may take a few moments...
Acquiring state lock. This may take a few moments...
Migration complete! Your workspaces are as follows:
* learn-terraform-s3-migrate-tfc-dev
  learn-terraform-s3-migrate-tfc-main

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v3.65.0

HCP Terraform has been successfully initialized!

You may now begin working with HCP Terraform. Try running "terraform plan" to
see any changes that are required for your infrastructure.

If you ever set or change modules or Terraform Settings, run "terraform init"
again to reinitialize your working directory.
```
Verify HCP Terraform workspaces
In your HCP Terraform dashboard, you will find the two workspaces you migrated from the S3 backend.
Assign variable sets to HCP Terraform workspaces
Go to the Variable sets page and open the AWS Credentials variable set you created earlier.
Under Workspaces, select `learn-terraform-s3-migrate-tfc-main` and `learn-terraform-s3-migrate-tfc-dev` to assign the AWS credentials variable set to the workspaces. Click Save variable set.
Go to your `learn-terraform-s3-migrate-tfc-main` workspace and select the Variables tab. You will find the AWS Credentials variable set.
Start and apply plan in main workspace
Go to the `learn-terraform-s3-migrate-tfc-main` workspace. Click on Actions from the top navigation bar, then Start new plan to start a new remote HCP Terraform run.
Click Start plan to confirm the plan run.
Once the plan completes, Terraform will propose to change one resource. Click on `aws_instance.web` to view the changes to it. Terraform updates the instance's `workspace` tag from `default` to `learn-terraform-s3-migrate-tfc-main`.
This is because the `terraform.workspace` value the example configuration uses to define the `workspace` tag resolves to the HCP Terraform workspace name instead of the Terraform CLI workspace name.
learn-terraform-migrate-s3-tfc/main.tf
```hcl
resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"

  tags = {
    Name      = "HelloWorld"
    workspace = terraform.workspace
  }
}
```
Click Confirm & Apply, then click Confirm run to apply the changes.
Start and apply plan in dev workspace
Open the `learn-terraform-s3-migrate-tfc-dev` workspace and follow the same steps to start and apply a plan run.
As with the `learn-terraform-s3-migrate-tfc-main` workspace, HCP Terraform will update the instance's `workspace` tag to reflect the HCP Terraform workspace name: `learn-terraform-s3-migrate-tfc-dev`.
Clean up resources
Now that you have migrated your workspaces to HCP Terraform and used HCP Terraform to manage your infrastructure, clean up the resources created in this tutorial to avoid unexpected AWS charges.
First, visit the `learn-terraform-s3-migrate-tfc-main` workspace, and from the Settings menu, select Destruction and Deletion. Ensure that the Allow destroy plans checkbox is checked. Next, click the Queue destroy plan button, and follow the steps to queue and confirm a destroy plan.
Once HCP Terraform has destroyed your infrastructure, delete the workspace by clicking on Delete from HCP Terraform and following the steps in the prompt.
Follow the same steps to destroy the EC2 instance provisioned in the `learn-terraform-s3-migrate-tfc-dev` workspace, then delete that workspace as well.
Optionally, delete your AWS credentials HCP Terraform variable set.
Delete S3 remote state resources
Navigate to the directory containing the configuration for the S3 bucket and DynamoDB table used for the remote S3 backend.
```shell-session
$ cd ../learn-terraform-s3-remote-state
```
Destroy these resources. Respond `yes` to the prompt to confirm.
```shell-session
$ terraform destroy

## ...

Plan: 0 to add, 0 to change, 2 to destroy.

Changes to Outputs:
  - dynamodb_endpoint = "terraform-state-lock-dynamo" -> null
  - s3_bucket_name    = "learn-s3-remote-backend-20211111183654999200000001" -> null

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_dynamodb_table.terraform_state_lock: Destroying... [id=terraform-state-lock-dynamo]
aws_s3_bucket.terraform_state: Destroying... [id=learn-s3-remote-backend-20211111183654999200000001]
aws_s3_bucket.terraform_state: Destruction complete after 3s
aws_dynamodb_table.terraform_state_lock: Destruction complete after 4s

Destroy complete! Resources: 2 destroyed.
```
Next steps
Over the course of this tutorial, you migrated Terraform state from a remote S3 backend to HCP Terraform. Then, you created a variable set and assigned it to your migrated workspaces. Finally, you triggered and applied a plan in HCP Terraform.
For more information on topics covered in this tutorial, check out the following resources:
- Complete the Connect Workspaces with Run Triggers tutorial to learn how you can leverage run triggers in your HCP Terraform organization
- Complete the Automate HCP Terraform Workflows tutorial to use the TFE provider to manage your HCP Terraform workspaces
- Read more about the HCP Terraform CLI Integration in the documentation
- The `tf-helper` CLI tool provides commands to interact with both HCP Terraform and Terraform Enterprise, including setting workspace variables, viewing runs, and more.