Revoke an artifact and its descendants using inherited revocation
If one of your artifacts has a vulnerability, you may need to revoke it to prevent infrastructure deployments from using it. If Packer builds use the artifact as a parent or source artifact, you may need to revoke its descendants too. HashiCorp Cloud Platform (HCP) Packer lets you revoke an artifact version and, optionally, all of its descendant artifacts.
In this tutorial you will build parent and child artifacts, and store metadata about their relationship in HCP Packer. You will then deploy the child artifacts to AWS using HCP Terraform, revoke a parent artifact and all its descendants, and observe the downstream impact to the HCP Terraform workflow, enforced by an HCP Packer run task.
Note
HCP Terraform Free Edition includes one run task integration that you can apply to up to ten workspaces. Refer to HCP Terraform pricing for details.
In an ideal world, you would rarely revoke an artifact version, and instead "fail forward" by building a new artifact and launching new infrastructure from it. However, building new artifacts takes time, and often the first priority in a security incident is to reduce the impact of a vulnerable artifact. In those situations, you may want to revoke an artifact to prevent new deployments while you work on a resolution.
Prerequisites
This tutorial assumes that you are familiar with the workflows for Packer and HCP Packer. If you are new to Packer, complete the Get Started tutorials first. If you are new to HCP Packer, complete the Get Started HCP Packer tutorials first.
This tutorial also assumes that you are familiar with the workflows for Terraform and HCP Terraform. If you are new to Terraform, complete the Get Started tutorials first. If you are new to HCP Terraform, complete the HCP Terraform Get Started tutorials.
For this tutorial, you will need:
- Packer 1.10.1+ installed locally
- An HCP account with an HCP Packer Registry
- Terraform v1.3+ installed locally
- An HCP Terraform account and organization
- A local environment authenticated to HCP Terraform
Now, create a new HCP service principal and set the following environment variables locally.
Environment Variable | Description |
---|---|
HCP_CLIENT_ID | The client ID generated by HCP when you created the HCP Service Principal |
HCP_CLIENT_SECRET | The client secret generated by HCP when you created the HCP Service Principal |
HCP_ORGANIZATION_ID | Find this in the URL of the HCP Overview page, https://portal.cloud.hashicorp.com/orgs/ORGANIZATION_ID/projects/xxxx |
HCP_PROJECT_ID | Find this in the URL of the HCP Overview page, https://portal.cloud.hashicorp.com/orgs/xxxx/projects/PROJECT_ID |
You will also need an AWS account with credentials set as local environment variables.
Environment Variable | Description |
---|---|
AWS_ACCESS_KEY_ID | The access key ID from your AWS key pair |
AWS_SECRET_ACCESS_KEY | The secret access key from your AWS key pair |
Set your HCP Terraform organization name as an environment variable too.
Environment Variable | Description |
---|---|
TF_CLOUD_ORGANIZATION | The name of your HCP Terraform organization |
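With the service principal created, you can export all of these values in your shell before continuing. The values below are placeholders for illustration; substitute your own credentials and identifiers.

```shell
# Placeholder values for illustration only; replace each value with your own
# credentials from HCP and AWS, and your HCP Terraform organization name.
export HCP_CLIENT_ID="your-client-id"
export HCP_CLIENT_SECRET="your-client-secret"
export HCP_ORGANIZATION_ID="your-organization-id"
export HCP_PROJECT_ID="your-project-id"
export AWS_ACCESS_KEY_ID="your-aws-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-aws-secret-access-key"
export TF_CLOUD_ORGANIZATION="your-hcp-terraform-org"
```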
Clone example repository
Clone the example repository, which contains the Packer templates and Terraform configuration used in this tutorial.
```shell-session
$ git clone https://github.com/hashicorp-education/learn-hcp-packer-revocation.git
```
Change into the repository directory.
```shell-session
$ cd learn-hcp-packer-revocation
```
Build artifacts
To save time, this tutorial uses a shell script to build several Packer artifacts and assign them to HCP Packer channels.
Open `packer/build-and-assign.sh` in your editor. The script first checks that you have set the necessary environment variables.
packer/build-and-assign.sh
```shell
#!/bin/bash
set -eEo pipefail

# Usage check
if [[ -z "$HCP_CLIENT_ID" || -z "$HCP_CLIENT_SECRET" || -z "$HCP_ORGANIZATION_ID" || -z "$HCP_PROJECT_ID" ]]; then
  cat <<EOF
This script requires the following environment variables to be set:
 - HCP_CLIENT_ID
 - HCP_CLIENT_SECRET
 - HCP_ORGANIZATION_ID
 - HCP_PROJECT_ID
EOF
  exit 1
fi

## ...
```
The script then declares a function that assigns the latest version in a bucket to the specified HCP Packer channel. The function creates the channel if it does not exist.
packer/build-and-assign.sh
```shell
## ...

update_channel() {
  bucket_slug=$1
  channel_name=$2
  base_url="https://api.cloud.hashicorp.com/packer/2023-01-01/organizations/$HCP_ORGANIZATION_ID/projects/$HCP_PROJECT_ID"

  response=$(curl --request GET --silent \
    --url "$base_url/buckets/$bucket_slug/versions" \
    --header "authorization: Bearer $bearer")
  api_error=$(echo "$response" | jq -r '.message')
  if [ "$api_error" != null ]; then
    # Failed to list versions
    echo "Failed to list versions: $api_error"
    exit 1
  else
    version_fingerprint=$(echo "$response" | jq -r '.versions[0].fingerprint')
  fi

  response=$(curl --request GET --silent \
    --url "$base_url/buckets/$bucket_slug/channels/$channel_name" \
    --header "authorization: Bearer $bearer")
  api_error=$(echo "$response" | jq -r '.message')
  if [ "$api_error" != null ]; then
    # Channel likely doesn't exist, create it
    api_error=$(curl --request POST --silent \
      --url "$base_url/buckets/$bucket_slug/channels" \
      --data-raw '{"name":"'"$channel_name"'"}' \
      --header "authorization: Bearer $bearer" | jq -r '.error')
    if [ "$api_error" != null ]; then
      echo "Error creating channel: $api_error"
      exit 1
    fi
  fi

  # Update channel to point to version
  api_error=$(curl --request PATCH --silent \
    --url "$base_url/buckets/$bucket_slug/channels/$channel_name" \
    --data-raw '{"version_fingerprint": "'$version_fingerprint'", "update_mask": "versionFingerprint"}' \
    --header "authorization: Bearer $bearer" | jq -r '.message')
  if [ "$api_error" != null ]; then
    echo "Error updating channel: $api_error"
    exit 1
  fi
}

## ...
```
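The function relies on a small error-detection idiom: `jq -r '.message'` prints the literal string `null` when the field is absent, so comparing the result against the word `null` distinguishes success from failure. A minimal sketch with mocked responses (the payload shapes are illustrative assumptions, not captured API output):

```shell
# Mocked API responses for illustration; real payloads come from the HCP
# Packer API endpoints that build-and-assign.sh calls.
ok_response='{"versions":[{"fingerprint":"1673448026"}]}'
err_response='{"message":"bucket not found"}'

# When .message is absent, `jq -r` prints the literal string "null", so the
# comparison against the word null detects success.
api_error=$(echo "$ok_response" | jq -r '.message')
if [ "$api_error" != null ]; then
  result_ok="error: $api_error"
else
  result_ok=$(echo "$ok_response" | jq -r '.versions[0].fingerprint')
fi

# When .message is present, the script would report it and exit 1.
api_error=$(echo "$err_response" | jq -r '.message')
if [ "$api_error" != null ]; then
  result_err="error: $api_error"
fi

echo "$result_ok"
echo "$result_err"
```

Note that this treats any response carrying a `message` field as a failure, which matches how the script distinguishes the two cases.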
Next, the script authenticates with the HCP API using the `HCP_CLIENT_ID` and `HCP_CLIENT_SECRET` environment variables, and stores the returned bearer token in the `bearer` variable for future API calls.
packer/build-and-assign.sh
```shell
## ...

# Authenticate and get bearer token for subsequent API calls
response=$(curl --request POST --silent \
  --url 'https://auth.idp.hashicorp.com/oauth/token' \
  --data grant_type=client_credentials \
  --data client_id="$HCP_CLIENT_ID" \
  --data client_secret="$HCP_CLIENT_SECRET" \
  --data audience="https://api.hashicorp.cloud")
api_error=$(echo "$response" | jq -r '.error')
if [ "$api_error" != null ]; then
  echo "Failed to get access token: $api_error"
  exit 1
fi
bearer=$(echo "$response" | jq -r '.access_token')

## ...
```
Then, it initializes Packer, builds both parent artifacts in parallel, and waits for both to finish before proceeding.
packer/build-and-assign.sh
```shell
## ...

packer init plugins.pkr.hcl

echo "Building parent artifacts"
export HCP_PACKER_BUILD_FINGERPRINT=$(date +%s)
packer build parent-east.pkr.hcl &
packer build parent-west.pkr.hcl &
wait

## ...
```
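The `&` suffix starts each `packer build` as a background job, and `wait` blocks until both jobs exit, so the two parent builds run concurrently. The same shell idiom in miniature, using short `sleep` tasks in place of the builds:

```shell
# Start two tasks in the background; each writes its result to a file,
# standing in for the two parallel packer build processes.
(sleep 1; echo "east done") > /tmp/learn-east.log &
(sleep 1; echo "west done") > /tmp/learn-west.log &

# Block until every background job has finished.
wait

cat /tmp/learn-east.log /tmp/learn-west.log
```

One caveat of this idiom: `wait` with no arguments returns zero regardless of the background jobs' exit statuses, so a failed background task does not stop the script on its own.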
After Packer finishes building the parent artifacts, the script assigns them to their respective `production` channels. Then it builds the child artifact and assigns the version to the child bucket's `production` channel.
packer/build-and-assign.sh
```shell
## ...

echo "SETTING US-EAST-2 PARENT CHANNEL"
bucket_slug="learn-revocation-parent-us-east-2"
update_channel $bucket_slug production

echo "SETTING US-WEST-2 PARENT CHANNEL"
bucket_slug="learn-revocation-parent-us-west-2"
update_channel $bucket_slug production

echo "BUILDING CHILD ARTIFACT"
export HCP_PACKER_BUILD_FINGERPRINT=$(date +%s)
packer build child.pkr.hcl

echo "SETTING CHILD CHANNEL"
bucket_slug="learn-revocation-child"
update_channel $bucket_slug production
```
Change into the `packer` directory.
```shell-session
$ cd packer
```
Run the script. It may take up to 20 minutes to finish building. Continue with the tutorial while it runs.
```shell-session
$ ./build-and-assign.sh
Building parent artifacts
amazon-ebs.west: output will be in this color.
amazon-ebs.east: output will be in this color.

==> amazon-ebs.east: Prevalidating any provided VPC information
==> amazon-ebs.east: Prevalidating AMI Name: learn-revocation-parent-1673448026
==> amazon-ebs.west: Prevalidating any provided VPC information
==> amazon-ebs.west: Prevalidating AMI Name: learn-revocation-parent-1673448026

## ...

Build 'amazon-ebs.east' finished after 3 minutes 6 seconds.

==> Wait completed after 3 minutes 6 seconds

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs.east: AMIs were created:
us-east-2: ami-03be0b2e1c6dc27d4

--> amazon-ebs.east: Published metadata to HCP Packer registry
packer/learn-revocation-parent-us-east-2/versions/01HMPE48RCTTXS1ENM99CMPZP1

## ...

--> amazon-ebs.west: Published metadata to HCP Packer registry
packer/learn-revocation-parent-us-west-2/versions/01HMPE48WEY2QKBRFJYCN3CJ4D
SETTING US-EAST-2 PARENT CHANNEL
SETTING US-WEST-2 PARENT CHANNEL
BUILDING CHILD ARTIFACT
amazon-ebs.child-east: output will be in this color.
amazon-ebs.child-west: output will be in this color.

==> amazon-ebs.child-east: Prevalidating any provided VPC information
==> amazon-ebs.child-east: Prevalidating AMI Name: learn-revocation-child-1673448644
==> amazon-ebs.child-west: Prevalidating any provided VPC information
==> amazon-ebs.child-west: Prevalidating AMI Name: learn-revocation-child-1673448644

## ...

Build 'amazon-ebs.child-east' finished after 4 minutes 39 seconds.

==> Wait completed after 3 minutes 24 seconds

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs.child-east: AMIs were created:
us-east-2: ami-0dcd206ee83643466

--> amazon-ebs.child-east: Published metadata to HCP Packer registry
packer/learn-revocation-child/versions/01HMPEDDZ3YHBWWY5Z4YS580RN
--> amazon-ebs.child-west: AMIs were created:
us-west-2: ami-0d18be0c7fee98b89

--> amazon-ebs.child-west: Published metadata to HCP Packer registry
packer/learn-revocation-child/versions/01HMPEDDZ3YHBWWY5Z4YS580RN
SETTING CHILD CHANNEL
DONE
```
Review Packer templates
While the artifacts build, open `packer/parent-east.pkr.hcl` in your editor to review the first parent artifact template.
packer/parent-east.pkr.hcl
```hcl
data "amazon-ami" "ubuntu" {
  region = "us-east-2"
  filters = {
    name = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
  }
  most_recent = true
  owners      = ["099720109477"]
}
```
The template declares an `amazon-ami` data source which returns the latest Ubuntu 22.04 AMI in the `us-east-2` region.
packer/parent-east.pkr.hcl
```hcl
## ...

source "amazon-ebs" "east" {
  ami_name       = "learn-revocation-parent-{{timestamp}}"
  region         = "us-east-2"
  source_ami     = data.amazon-ami.ubuntu.id
  instance_type  = "t2.small"
  ssh_username   = "ubuntu"
  ssh_agent_auth = false

  tags = {
    Name = "learn-revocation-parent"
  }
}
```
The `amazon-ebs` source block uses the Ubuntu AMI as the source for the build.
packer/parent-east.pkr.hcl
```hcl
## ...

build {
  hcp_packer_registry {
    bucket_name = "learn-revocation-parent-us-east-2"
  }

  sources = [
    "source.amazon-ebs.east"
  ]
}
```
Packer then records the artifact's metadata in the `learn-revocation-parent-us-east-2` HCP Packer bucket.
The `parent-west.pkr.hcl` file follows the same pattern, but for the `us-west-2` region.
This tutorial uses separate Packer templates for each of the parent artifact regions. However, the child template builds artifacts in each region using a single template. While you could define both parent artifacts in a shared template as well, this tutorial defines them separately for the purposes of demonstration. Later in the tutorial you will review how revoking just one of the parent artifacts cascades to the child artifact.
Now, open and review `packer/child.pkr.hcl`.
packer/child.pkr.hcl
```hcl
data "hcp-packer-version" "parent-east" {
  bucket_name  = "learn-revocation-parent-us-east-2"
  channel_name = "production"
}

data "hcp-packer-artifact" "parent-east" {
  bucket_name         = data.hcp-packer-version.parent-east.bucket_name
  version_fingerprint = data.hcp-packer-version.parent-east.fingerprint
  platform            = "aws"
  region              = "us-east-2"
}

data "hcp-packer-version" "parent-west" {
  bucket_name  = "learn-revocation-parent-us-west-2"
  channel_name = "production"
}

data "hcp-packer-artifact" "parent-west" {
  bucket_name         = data.hcp-packer-version.parent-west.bucket_name
  version_fingerprint = data.hcp-packer-version.parent-west.fingerprint
  platform            = "aws"
  region              = "us-west-2"
}

## ...
```
First, the template uses the `hcp-packer-version` and `hcp-packer-artifact` data sources to fetch metadata about both parent artifacts from HCP Packer. The metadata includes AWS AMI IDs for the `us-east-2` and `us-west-2` regions.
packer/child.pkr.hcl
```hcl
## ...

source "amazon-ebs" "child-east" {
  ami_name       = "learn-revocation-child-{{timestamp}}"
  region         = "us-east-2"
  source_ami     = data.hcp-packer-artifact.parent-east.external_identifier
  instance_type  = "t2.small"
  ssh_username   = "ubuntu"
  ssh_agent_auth = false

  tags = {
    Name = "learn-revocation-child"
  }
}

source "amazon-ebs" "child-west" {
  ami_name       = "learn-revocation-child-{{timestamp}}"
  region         = "us-west-2"
  source_ami     = data.hcp-packer-artifact.parent-west.external_identifier
  instance_type  = "t2.small"
  ssh_username   = "ubuntu"
  ssh_agent_auth = false

  tags = {
    Name = "learn-revocation-child"
  }
}

## ...
```
The `amazon-ebs` source blocks use the respective AMIs as the sources for the build.
packer/child.pkr.hcl
```hcl
## ...

build {
  hcp_packer_registry {
    bucket_name = "learn-revocation-child"
  }

  sources = [
    "source.amazon-ebs.child-east",
    "source.amazon-ebs.child-west"
  ]
}
```
Finally, Packer records the child artifact metadata in the `learn-revocation-child` bucket.
Review Terraform configuration
Now, open `terraform/main.tf`. This configuration creates two virtual machines using your child artifact.
terraform/main.tf
```hcl
data "hcp_packer_version" "child" {
  bucket_name  = "learn-revocation-child"
  channel_name = "production"
}

## ...
```
First, the configuration uses the `hcp_packer_version` data source to fetch artifact metadata from the `learn-revocation-child` bucket's `production` channel.
terraform/main.tf
```hcl
## ...

# us-east-2 region resources
data "hcp_packer_artifact" "aws_east" {
  bucket_name         = data.hcp_packer_version.child.bucket_name
  version_fingerprint = data.hcp_packer_version.child.fingerprint
  platform            = "aws"
  region              = "us-east-2"
}

## ...
```
It then uses the `hcp_packer_artifact` data source to query the version for the `us-east-2` AMI ID.
terraform/main.tf
```hcl
## ...

module "vpc_east" {
  source = "terraform-aws-modules/vpc/aws"

  providers = {
    aws = aws.east
  }

  name            = "learn-revocation-east"
  cidr            = "10.1.0.0/16"
  azs             = ["us-east-2a"]
  private_subnets = ["10.1.1.0/24", "10.1.2.0/24"]
}

## ...
```
Next, it uses the AWS VPC module to create a VPC, subnets, and a route table in the `us-east-2` region.
terraform/main.tf
```hcl
## ...

resource "aws_instance" "east" {
  provider                    = aws.east
  ami                         = data.hcp_packer_artifact.aws_east.external_identifier
  instance_type               = "t3.micro"
  subnet_id                   = module.vpc_east.private_subnets[0]
  vpc_security_group_ids      = [module.vpc_east.default_security_group_id]
  associate_public_ip_address = false

  tags = {
    Name = "learn-revocation-us-east-2"
  }
}

## ...
```
Finally, the configuration creates a virtual machine from the `us-east-2` child AMI. The configuration then defines the same resources for the `us-west-2` region.
Review builds and channel assignments
After Packer finishes building the artifacts, visit the HCP Packer Overview page in HCP.
Review the Versions pages for the `learn-revocation-parent-us-east-2`, `learn-revocation-parent-us-west-2`, and `learn-revocation-child` buckets you created. Notice that the script assigned the latest version to the `production` channel of each bucket.
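You can also confirm an assignment programmatically by parsing the channel response the same way `update_channel` does. The payload shape below is an illustrative assumption, not captured API output; the real response comes from a GET request to the bucket's channel endpoint with your bearer token.

```shell
# Hypothetical channel payload for illustration; fetch the real one with:
#   curl "$base_url/buckets/learn-revocation-child/channels/production" \
#     --header "authorization: Bearer $bearer"
channel_response='{"channel":{"name":"production","version":{"fingerprint":"1705939200"}}}'

# Extract the assigned version fingerprint with jq, as the script does.
assigned=$(echo "$channel_response" | jq -r '.channel.version.fingerprint')
echo "production channel fingerprint: $assigned"
```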
Configure HCP Terraform
Now, prepare your HCP Terraform workspace to deploy infrastructure.
First, change to the `terraform` directory.
```shell-session
$ cd ../terraform
```
Set the `TF_CLOUD_ORGANIZATION` environment variable to your HCP Terraform organization name.

```shell-session
$ export TF_CLOUD_ORGANIZATION=
```
Now, initialize your HCP Terraform workspace.
```shell-session
$ terraform init

## ...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```
Create HCP Packer run task
Now, create an HCP Terraform run task for HCP Packer, which blocks `terraform apply` operations that would create new infrastructure using revoked artifacts.
Navigate to the HCP Packer dashboard, open the Integrate with HCP Terraform menu, and copy the Endpoint URL and HMAC Key values.
In your HCP Terraform organization's Settings, create a run task named `HCP-Packer`. Configure it with the Endpoint URL and HMAC key values from the HCP Packer dashboard. For more detailed instructions, refer to our run task tutorial.
After you create the run task, associate it with your workspace. Go to Settings for the `learn-hcp-packer-revocation` workspace, then select Run Tasks. Select HCP-Packer from the list of Available Run Tasks, then choose the Post-plan stage and the Mandatory enforcement level. Click Create.
Configure workspace variables
Next, navigate to the `learn-hcp-packer-revocation` workspace's Variables page.
Set the following workspace-specific variables. Be sure to use the environment variable type and mark the secrets as sensitive.
Type | Variable name | Description | Sensitive |
---|---|---|---|
Environment variable | AWS_ACCESS_KEY_ID | The access key ID from your AWS key pair | No |
Environment variable | AWS_SECRET_ACCESS_KEY | The secret access key from your AWS key pair | Yes |
Environment variable | HCP_CLIENT_ID | The client ID generated by HCP when you created the HCP Service Principal | No |
Environment variable | HCP_CLIENT_SECRET | The client secret generated by HCP when you created the HCP Service Principal | Yes |
Environment variable | HCP_PROJECT_ID | The ID of your HCP project | No |
Deploy infrastructure
Apply the Terraform configuration to create infrastructure that uses the child artifacts. Respond `yes` to the prompt to confirm the operation.
```shell-session
$ terraform apply
Running apply in HCP Terraform. Output will stream here. Pressing Ctrl-C
will cancel the remote apply if it's still pending. If the apply started it
will stop streaming the logs, but will not stop the apply running remotely.

Preparing the remote apply...

To view this run in a browser, visit:
https://app.terraform.io/app/hashicorp-learn/learn-packer-multicloud/runs/run-000

Waiting for the plan to start...

Terraform v1.1.6
on linux_amd64
Initializing plugins and modules...
data.hcp_packer_version.child: Reading...
data.hcp_packer_version.child: Read complete after 0s [id=01GPC1VWC8YAN9T85TXK4TS06S]
data.hcp_packer_artifact.aws_west: Reading...
data.hcp_packer_artifact.aws_east: Reading...
data.hcp_packer_artifact.aws_east: Read complete after 0s [id=01GPC21QKAYBVG6V54NMKGKKAC]
data.hcp_packer_artifact.aws_west: Read complete after 0s [id=01GPC251E05RS8S86C4JY1R8MW]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

## ...

Plan: 22 to add, 0 to change, 0 to destroy.

All tasks completed! 1 passed, 0 failed        (4s elapsed)

│ HCP-Packer ⸺ Passed
│ 2 images scanned.
│
│
│ Overall Result: Passed

------------------------------------------------------------------------

Do you want to perform these actions in workspace "learn-hcp-packer-revocation"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

## ...

Apply complete! Resources: 22 added, 0 changed, 0 destroyed.
```
Revoke parent artifact and descendants
Imagine that you find a vulnerability in one of the parent artifacts. If you have time, you could avoid disrupting downstream teams by reassigning the channels to secure artifacts before revoking the vulnerable ones. Those secure artifacts could be newly patched builds or previous known-good versions, depending on your workflow. However, since resolving vulnerabilities can take a long time and an incident might require an immediate response, you may need to revoke vulnerable artifacts before you have safe replacements in order to prevent new deployments of the artifact.
In this tutorial, you will revoke an artifact without building a replacement or reassigning channels to a known-good version, to explore how revocation can impact downstream teams' Terraform workflows.
Warning
Revoking an artifact does not modify or replace infrastructure that uses it.
To revoke an artifact version, you must first remove it from all channels.
In HCP Packer, navigate to the `learn-revocation-parent-us-east-2` bucket, then to its Channels.
Hover over the production channel and click the ... menu that appears, then click Change assigned version.
Select Choose a version from the dropdown menu to unassign the version. Click Update channel.
Now navigate to Versions, open the latest version's ... menu, and click Revoke Version.
Select Revoke immediately if your organization purchased the HCP Packer Plus tier. Then, enter a reason for the revocation. Under Revoke descendants, select Yes, revoke all descendants. Next, under Rollback channels, select No, do not rollback channel, then click Revoke. HCP Packer revokes this artifact immediately, but it may take a few minutes for the revocation to cascade to its descendants.
Open the HCP Packer dashboard. Locate the `learn-revocation-parent-us-east-2` and `learn-revocation-child` artifacts. After a few minutes, the Status column for each shows Revoked.
Open the `learn-revocation-child` bucket. Under Bucket details, HCP Packer lists the newest version as Revoked.
Navigate to the bucket's Versions, then click the ID of the revoked version. The banner lists the revocation date, reason, and which user triggered it.
Modify infrastructure using the revoked artifact
The child artifact's `production` channel still references a revoked version, which will prevent Terraform from replacing or creating new EC2 instances.
Open `terraform/main.tf` and change the `instance_type` of the `aws_instance.west` resource to `t3.small`.
terraform/main.tf
```hcl
## ...

resource "aws_instance" "west" {
  provider                    = aws.west
  ami                         = data.hcp_packer_artifact.aws_west.external_identifier
  instance_type               = "t3.small"
  subnet_id                   = module.vpc_west.private_subnets[0]
  vpc_security_group_ids      = [module.vpc_west.default_security_group_id]
  associate_public_ip_address = false

  tags = {
    Name = "learn-revocation-us-west-2"
  }
}
```
Apply the configuration change. Respond `yes` to the prompt to confirm the operation.
```shell-session
$ terraform apply

## ...

Plan: 0 to add, 1 to change, 0 to destroy.

Post-plan Tasks:

All tasks completed! 1 passed, 0 failed        (4s elapsed)

│ HCP-Packer ⸺ Passed
│ 2 images scanned. 2 warnings.
│
│
│ Overall Result: Passed

------------------------------------------------------------------------

Do you want to perform these actions in workspace "learn-hcp-packer-revocation"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

## ...

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
```
The HCP Packer run task identified two resources using revoked artifacts but did not block the apply. The run task prevents the creation of new resources, but will not block operations on existing infrastructure.
Now, modify the configuration to trigger a resource replacement.
In `terraform/main.tf`, change the subnet of the EC2 instance, which is a destructive change.
terraform/main.tf
```hcl
## ...

resource "aws_instance" "west" {
  provider                    = aws.west
  ami                         = data.hcp_packer_artifact.aws_west.external_identifier
  instance_type               = "t3.small"
  subnet_id                   = module.vpc_west.private_subnets[1]
  vpc_security_group_ids      = [module.vpc_west.default_security_group_id]
  associate_public_ip_address = false

  tags = {
    Name = "learn-revocation-us-west-2"
  }
}
```
Now, attempt to apply the configuration change. The HCP Packer run task blocks the change because it creates a new resource using a revoked artifact.
```shell-session
$ terraform apply

## ...
Plan: 1 to add, 0 to change, 1 to destroy.

Post-plan Tasks:

All tasks completed! 0 passed, 1 failed        (4s elapsed)

│ HCP-Packer ⸺ Failed (Mandatory)
│ 2 images scanned. 1 failure. 1 warning.
│
│ Error: the run failed because the run task, HCP-Packer, is required to succeed
│
│ Overall Result: Failed

------------------------------------------------------------------------

╷
│ Error: Task Stage failed.
```
You could also use custom conditions in your Terraform configuration to check the `revoke_at` attribute of the `hcp_packer_version` or `hcp_packer_artifact` data sources. However, custom conditions are easy to remove and require that all configuration authors use them. The HCP Packer run task is a more robust way to prevent using revoked artifacts, since it automatically applies to all configuration in the workspace.
Destroy infrastructure and clean up
In the `terraform` directory, destroy the infrastructure you created in this tutorial. Respond `yes` to the prompt to confirm the operation.
```shell-session
$ terraform destroy
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:
##...
Plan: 0 to add, 0 to change, 16 to destroy.
##...
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes
##...
Destroy complete! Resources: 16 destroyed.
```
Clean up HCP Terraform resources
Navigate to your `learn-hcp-packer-revocation` workspace in HCP Terraform and delete the workspace.
Clean up HCP Packer
Navigate to the HCP Packer dashboard.
Locate the `learn-revocation-child` artifact and open the ... menu. Select Delete bucket.

Repeat for the `learn-revocation-parent-us-east-2` and `learn-revocation-parent-us-west-2` buckets.
Delete AWS AMIs
Your AWS account still has the AMIs and their associated snapshots, for which you may incur charges.
In the `us-east-2` AWS region, deregister the learn-revocation AMIs by selecting them, clicking the Actions button, then the Deregister AMI option. Finally, confirm by clicking the Deregister AMI button in the confirmation dialog.
Delete the learn-revocation snapshots by selecting the snapshots, clicking on the Actions button, then the Delete snapshot option, and finally confirm by clicking the Delete button in the confirmation dialog.
Repeat the above steps for the `us-west-2` region.
Next Steps
In this tutorial you used HCP Packer to revoke a parent artifact and all descendant artifacts built from it. You also used the HCP Packer run task to prevent new deployments of revoked artifacts and reviewed how revocation can impact team workflows.
For more information on topics covered in this tutorial, check out the following resources.
- Complete the Build a Golden Image Pipeline with HCP Packer tutorial to build a sample application with a golden image pipeline, and deploy it to AWS using Terraform.
- Complete the Schedule artifact version revocation for compliance tutorial to learn how to schedule artifact revocations and use preconditions to prevent use of revoked artifacts outside of HCP Terraform.
- Review the Standardize Machine Images Across Multiple Cloud Providers tutorial to learn how to build consistent machine images across cloud providers.