Migrate Consul Dedicated cluster to self-managed Enterprise
This page describes the process to migrate operations from an HCP Consul Dedicated cluster to a self-managed Consul Enterprise cluster. HashiCorp plans to retire HCP Consul Dedicated on November 12, 2025.
HCP Consul Dedicated End of Life
On November 12, 2025, HashiCorp will end operations and support for HCP Consul Dedicated clusters. After this date, you will no longer be able to deploy new Dedicated clusters, nor will you be able to access, update, or manage existing Dedicated clusters.
We recommend migrating HCP Consul Dedicated deployments to self-managed server clusters running Consul Enterprise. On virtual machines, this migration requires some downtime for the server cluster but enables continuity between existing configurations and operations. Downtime is not required on Kubernetes, although we suggest scheduling downtime to ensure the migration is successful.
Migration workflows
The process to migrate a Dedicated cluster to a self-managed environment consists of the following steps, which change depending on whether your cluster runs on virtual machines (VMs) or Kubernetes.
VMs
To migrate on VMs, complete the following steps:
- Take a snapshot of the HCP Consul Dedicated cluster.
- Transfer the snapshot to a self-managed cluster.
- Use the snapshot to restore the cluster in your self-managed environment.
- Update the client configuration file to point to the new server.
- Restart the client agent and verify that the migration was successful.
- Disconnect and decommission the HCP Consul Dedicated cluster and its supporting resources.
Kubernetes
To migrate on Kubernetes, complete the following steps:
- Take a snapshot of the HCP Consul Dedicated cluster.
- Transfer the snapshot to a self-managed cluster.
- Use the snapshot to restore the cluster in your self-managed environment.
- Update the CoreDNS configuration.
- Update the `values.yaml` file.
- Upgrade the cluster.
- Redeploy workload applications.
- Switch the CoreDNS entry.
- Verify that the migration was successful.
- Disconnect and decommission the HCP Consul Dedicated cluster and its supporting resources.
Recommendations and best practices
On VMs, the migration process requires a temporary outage that lasts from the time when you restore the snapshot on the self-managed cluster until the time when you restart client agents after updating their configuration. Downtime is not required on Kubernetes, although we suggest scheduling downtime to ensure the migration is successful.
In addition, data written to the Dedicated server after the snapshot is created cannot be restored.
To limit the duration of outages, we recommend using a dev environment to test the migration before fully migrating production workloads. The length of the outage depends on the number of clients, the self-managed environment, and the automated processes involved.
Regardless of whether you use VMs or Kubernetes, we also recommend using Consul maintenance mode to schedule a period of inactivity to address unforeseen data loss or data sync issues that result from the migration.
Prerequisites
The migration instructions on this page make the following assumptions about your existing infrastructure:
- You already deployed an HCP Consul Dedicated server cluster and a self-managed server cluster with matching configurations. These configurations should include the following settings:
- Both clusters have 3 nodes.
- ACLs, TLS, and gossip encryption are enabled.
- You have command line access to both the Dedicated cluster and your self-managed cluster.
- You generated an admin token for the Dedicated cluster and exported it to the `CONSUL_HTTP_TOKEN` environment variable. Alternatively, add the `-token=` flag to CLI commands.
- The clusters have an existing VPC peering connection or other network connectivity.
- You already identified the client nodes affected by the migration.
If you are migrating clusters on Kubernetes, refer to the version compatibility matrix to ensure that you are using compatible versions of `consul` and `consul-k8s`.
In addition, you must migrate to an Enterprise cluster, which requires an Enterprise license. Migrating to Community edition clusters is not possible. If you do not have access to a Consul Enterprise license, file a support request to let us know. A member of the account team will reach out to assist you.
Migrate to self-managed on VMs
To migrate to a self-managed Consul Enterprise cluster on VMs, connect to the Dedicated cluster's current leader node and then complete the following steps.
Take a snapshot of the HCP Consul Dedicated cluster
A snapshot is a backup of your HCP Consul cluster’s state. Consul uses this snapshot to restore its previous state in the new self-managed environment.
Run the following command to create a snapshot.
```shell-session
$ consul snapshot save /home/backup/hcp-cluster.snapshot
Saved and verified snapshot to index 4749
```
For more information on this command, refer to the Consul CLI documentation.
Transfer the snapshot to a self-managed cluster
Use a secure copy (SCP) command to move the snapshot file to the self-managed Consul cluster.
```shell-session
$ scp /home/backup/hcp-cluster.snapshot <user>@<self-managed-node>:/home/backup
```
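A truncated or corrupted copy of the snapshot can fail to restore with confusing errors. As an optional sanity check that is not part of the official workflow, you can compare SHA-256 checksums of the snapshot file on both machines. The sketch below demonstrates the digest helper on a throwaway file; in practice you would point it at the snapshot path on each host and compare the output.

```python
import hashlib
import os
import tempfile

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Demo on a throwaway file; in practice, run this against
# /home/backup/hcp-cluster.snapshot on both machines and compare digests.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"example snapshot bytes")
    demo_path = f.name

print(sha256_of(demo_path))
os.unlink(demo_path)
```

If the digests differ, repeat the transfer before attempting the restore.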
Use the snapshot to restore the cluster in your self-managed environment
After you transfer the snapshot file to the self-managed node, restore the cluster’s state from the snapshot in your self-managed environment.
Export the `CONSUL_HTTP_TOKEN` environment variable in your self-managed environment and then run the following command.

```shell-session
$ consul snapshot restore /home/backup/hcp-cluster.snapshot
Restored snapshot
```

If you cannot use environment variables, add the `-token=` flag to the command:

```shell-session
$ consul snapshot restore /home/backup/hcp-cluster.snapshot -token="<token-value>"
Restored snapshot
```
For more information on this command, refer to the Consul CLI documentation.
Update the client configuration file to point to the new server
Modify the agent configuration on your Consul clients. You must update the following configuration values:
- `retry_join` IP address
- TLS encryption
- ACL token

You can use an existing certificate authority or create a new one in your self-managed cluster. For more information, refer to Service mesh certificate authority overview in the Consul documentation.
The following example demonstrates a modified client configuration.
```hcl
retry_join = ["<new.server.IP.address>"]

auto_encrypt {
  tls = true
}

tls {
  defaults {
    verify_incoming = true
    verify_outgoing = true
  }
}

acl {
  enabled = true
  default_policy = "deny"
  enable_token_persistence = true

  tokens {
    agent = "<Token-Value>"
  }
}
```
For more information about configuring these fields, refer to the agent configuration reference in the Consul documentation.
Restart the client agent and verify that the migration was successful
Restart the client to apply the updated configuration and reconnect it to the new cluster.
```shell-session
$ sudo systemctl restart consul
```
After you update and restart all of the client agents, check the catalog to ensure that clients migrated successfully. You can check the Consul UI or run the following CLI command.
```shell-session
$ consul members
```
Run `consul members` on the Dedicated cluster as well. Ensure that all clients appear as `left` or `failed`.
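With many clients, comparing two `consul members` listings by eye is error-prone. The following sketch is an illustration only, not part of the Consul CLI: it parses saved `consul members` output and reports any expected client nodes that are not yet `alive` on the new cluster. The node names and sample output are hypothetical.

```python
def members_status(output: str) -> dict[str, str]:
    """Parse `consul members` table output into {node_name: status}.

    Assumes the default column order: Node, Address, Status, ...
    """
    result = {}
    for line in output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 3:
            result[fields[0]] = fields[2]
    return result

# Sample output for illustration only; node names are hypothetical.
new_cluster = """\
Node      Address         Status  Type    Build       Protocol  DC
server-1  10.0.0.10:8301  alive   server  1.18.1+ent  2         dc1
client-1  10.0.0.21:8301  alive   client  1.18.1+ent  2         dc1
client-2  10.0.0.22:8301  alive   client  1.18.1+ent  2         dc1
"""

expected_clients = {"client-1", "client-2"}
status = members_status(new_cluster)
missing = {n for n in expected_clients if status.get(n) != "alive"}
print("all clients migrated" if not missing else f"still waiting on: {missing}")
```

The same parsing can be run against the Dedicated cluster's output to confirm every client reports `left` or `failed` there.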
Disconnect supporting resources and decommission the HCP Consul Dedicated cluster
After you confirm that your client agents successfully connected to the self-managed cluster, delete VPC peering connections and any other unused resources, such as HVNs. If you use other HCP services, ensure that these resources are not currently in use. After you delete a peering connection or an HVN, it cannot be used by any HCP product.
Then delete the HCP Consul Dedicated cluster. For more information, refer to Delete an HCP Consul Dedicated cluster.
Migrate to self-managed on Kubernetes
To migrate to a self-managed Consul Enterprise cluster on Kubernetes, connect to the Dedicated cluster's current leader node and then complete the following steps.
Take a snapshot of the HCP Consul Dedicated cluster
A snapshot is a backup of your HCP Consul cluster’s state. Consul uses this snapshot to restore its previous state in the new self-managed environment.
Connect to the HCP Consul Dedicated cluster and then run the following command to create a snapshot.
```shell-session
$ consul snapshot save /home/backup/hcp-cluster.snapshot
Saved and verified snapshot to index 4749
```
For more information on this command, refer to the Consul CLI documentation.
Transfer the snapshot to a self-managed cluster
Use a secure copy (SCP) command to move the snapshot file to the self-managed Consul cluster.
```shell-session
$ scp /home/backup/hcp-cluster.snapshot <user>@<self-managed-node>:/home/backup
```
Use the snapshot to restore the cluster in your self-managed environment
After you transfer the snapshot file to the self-managed node, use the `kubectl exec` command to restore the cluster's state in your self-managed Kubernetes environment.

```shell-session
$ kubectl exec consul-server-0 -- consul snapshot restore /home/backup/hcp-cluster.snapshot
Restored snapshot
```
For more information on this command, refer to the Consul CLI documentation.
Update the CoreDNS configuration
Update the CoreDNS configuration on your Kubernetes cluster to point to the Dedicated cluster's IP address. Make sure the configured hostname resolves correctly to the cluster's IP address from inside a deployed pod.
```yaml
Corefile: |-
  .:53 {
      errors
      health {
          lameduck 5s
      }
      ready
      kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
          ttl 30
      }
      hosts {
          35.91.49.134 server.hcp-managed.consul
          fallthrough
      }
      prometheus 0.0.0.0:9153
      forward . 8.8.8.8 8.8.4.4 /etc/resolv.conf
      cache 30
      loop
      reload
      loadbalance
  }
```
If there are issues when you attempt to resolve the hostname, check whether the nameserver resolves to the `CLUSTER-IP` inside the pod. Run the following command to return the `CLUSTER-IP`.

```shell-session
$ kubectl -n kube-system get svc
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
coredns   ClusterIP   10.100.224.88   <none>        53/UDP,53/TCP   4h24m
```
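To confirm programmatically that a pod's resolver actually points at CoreDNS, compare the `nameserver` entries in the pod's `/etc/resolv.conf` with the service's `CLUSTER-IP`. The following sketch is illustrative only and works on sample file contents rather than a live cluster; the IP values are placeholders.

```python
def resolv_nameservers(resolv_conf: str) -> list[str]:
    """Extract nameserver IPs from resolv.conf-style text."""
    servers = []
    for line in resolv_conf.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers

# Contents as they might appear inside a pod (sample values).
pod_resolv_conf = """\
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.100.224.88
options ndots:5
"""

coredns_cluster_ip = "10.100.224.88"  # from `kubectl -n kube-system get svc`
if coredns_cluster_ip in resolv_nameservers(pod_resolv_conf):
    print("pod resolver points at CoreDNS")
else:
    print("mismatch: update the kubelet clusterDNS setting")
```

A mismatch here usually points to the kubelet `clusterDNS` issue described in the Troubleshooting section.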
Update the `values.yaml` file
Update the Helm configuration or `values.yaml` file for your self-managed cluster. You should update the following fields:
- Update the server host value. Use the hostname you added when you updated the CoreDNS configuration.
- Create a Kubernetes secret in the `consul` namespace with a new CA file created by concatenating the contents of the existing CA files. Add the CA file contents of the new self-managed server at the end.
- Update the `tlsServerName` field to the appropriate value. It is usually the hostname of the managed cluster. If the value is not known, TLS verification fails when you apply this configuration and the error log lists possible values.
- Set `useSystemRoots` to `false` to use the new CA certs.
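The CA bundle for the secret is a plain concatenation of PEM blocks with the new self-managed server's CA appended last. The sketch below illustrates that assembly with placeholder certificate bodies; the secret name and key in the comment are examples, not required values.

```python
def build_ca_bundle(existing_cas: list[str], new_ca: str) -> str:
    """Concatenate PEM-encoded CA certs, appending the new CA last."""
    parts = [ca.strip() for ca in existing_cas + [new_ca]]
    return "\n".join(parts) + "\n"

# Placeholder PEM bodies for illustration; use your real CA files.
hcp_ca = "-----BEGIN CERTIFICATE-----\n<hcp-ca-body>\n-----END CERTIFICATE-----"
new_ca = "-----BEGIN CERTIFICATE-----\n<self-managed-ca-body>\n-----END CERTIFICATE-----"

bundle = build_ca_bundle([hcp_ca], new_ca)

# Write `bundle` to a file such as ca-bundle.pem, then create the secret,
# for example:
#   kubectl create secret generic consul-ca-cert -n consul \
#     --from-file=tls.crt=./ca-bundle.pem
print(bundle.count("BEGIN CERTIFICATE"))  # → 2
```

Ordering matters: keeping the existing CAs first lets workloads trust both clusters while the migration is in flight.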
For more information about configuring these fields, refer to the Consul on Kubernetes Helm chart reference.
Upgrade the cluster
After you update the `values.yaml` file, run the following command to update the self-managed Kubernetes cluster.

```shell-session
$ consul-k8s upgrade -config-file=values.yaml
```
This command redeploys the Consul pods with the updated configurations. Although the CoreDNS installation still points to the Dedicated cluster, the pods have access to the new CA file.
Redeploy workload applications
Redeploy all the workload applications so that the `init` containers run again and fetch the new CA file. After you redeploy the applications, run a `kubectl describe pod` command on any workload pod and verify that the output resembles the following example.

```shell-session
$ kubectl describe pod -l name="product-api-8cf8c8ccc-kvkk8"
Environment:
  POD_NAME:            product-api-8cf8c8ccc-kvkk8 (v1:metadata.name)
  POD_NAMESPACE:       default (v1:metadata.namespace)
  NODE_NAME:           (v1:spec.nodeName)
  CONSUL_ADDRESSES:    server.consul.one
  CONSUL_GRPC_PORT:    8502
  CONSUL_HTTP_PORT:    443
  CONSUL_API_TIMEOUT:  5m0s
  CONSUL_NODE_NAME:    $(NODE_NAME)-virtual
  CONSUL_USE_TLS:      true
  CONSUL_CACERT_PEM:   -----BEGIN CERTIFICATE-----
                       MIIFYDCCBEigAwIBAgIQQAF3ITfU6UK47naqPGQKtzANBgkqhkiG9w0BAQsFADA/
                       MSQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4xFzAVBgNVBAMT
                       DkRTVCBSb290IENBIFgzMB4XDTIxMDEyMDE5MTQwM1oXDTI0MDkzMDE4MTQwM1ow
                       TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
                       cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwggIiMA0GCSqGSIb3DQEB
                       AQUAA4ICDwAwggIKAoICAQCt6CRz9BQ385ueK1coHIe+3LffOJCMbjzmV6B493XC
```
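Rather than scanning the `describe` output by eye, you can extract the environment values that matter for the migration. This sketch is illustrative only: it parses a simplified `Environment:` block from sample output (a real block also carries multi-line values such as `CONSUL_CACERT_PEM`, which this naive parser does not handle).

```python
def parse_env_block(describe_output: str) -> dict[str, str]:
    """Parse single-line `NAME: value` pairs from a kubectl describe
    Environment block, stopping at the first unindented line."""
    env = {}
    in_block = False
    for line in describe_output.splitlines():
        if line.strip() == "Environment:":
            in_block = True
            continue
        if in_block:
            stripped = line.strip()
            if not stripped or not line.startswith((" ", "\t")):
                break  # end of the indented block
            key, _, value = stripped.partition(":")
            env[key] = value.strip()
    return env

# Simplified sample output; values are illustrative.
sample = """\
Environment:
  CONSUL_ADDRESSES:   server.consul.one
  CONSUL_GRPC_PORT:   8502
  CONSUL_HTTP_PORT:   443
  CONSUL_USE_TLS:     true
Events:  <none>
"""

env = parse_env_block(sample)
print(env["CONSUL_USE_TLS"], env["CONSUL_ADDRESSES"])
```

Checking `CONSUL_USE_TLS` and `CONSUL_ADDRESSES` this way confirms the redeployed pods picked up the updated Helm values.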
Switch the CoreDNS entry
Update the CoreDNS configuration with the self-managed server's IP address.
If the `tlsServerName` of the self-managed cluster is different from the `tlsServerName` on the Dedicated cluster, you must update the field and re-run the `consul-k8s upgrade` command. For self-managed clusters, the `tlsServerName` usually takes the form `server.<datacenter-name>.consul`.
Verify that the migration was successful
After you update the CoreDNS entry, check the Consul catalog to ensure that the migration was successful. You can check the Consul UI or run the following CLI command.
```shell-session
$ kubectl exec consul-server-0 -- consul members
```
Run `consul members` on the Dedicated cluster as well. Ensure that all service nodes appear as `left` or `failed`.
Disconnect and decommission the HCP Consul Dedicated cluster and its supporting resources
After you confirm that your services successfully connected to the self-managed cluster, delete VPC peering connections and any other unused resources, such as HVNs. If you use other HCP services, ensure that these resources are not currently in use. After you delete a peering connection or an HVN, it cannot be used by any HCP product.
Then delete the HCP Consul Dedicated cluster. For more information, refer to Delete an HCP Consul Dedicated cluster.
Troubleshooting
You might encounter errors when migrating from an HCP Consul Dedicated cluster to a self-managed Consul Enterprise cluster.
Troubleshoot on VMs
If you encounter a `403 Permission Denied` error when you attempt to generate a new ACL bootstrap token, or if you misplace the bootstrap token, you can update the Raft index to reset the ACL system. Use the Raft index number included in the error output to write the reset index into the bootstrap reset file. You must run this command on the leader node.

The following example uses `13` as its Raft index:

```shell-session
$ echo 13 >> consul.d/acl-bootstrap-reset
```
Troubleshoot on Kubernetes
If you encounter issues resolving the hostname, check whether the nameserver matches the `CLUSTER-IP`. One possible issue is that the `clusterDNS` field in the kubelet configuration points to an IP address that differs from the one the Kubernetes worker nodes use. Change the kubelet configuration to use the `CLUSTER-IP` and then restart the kubelet process on all nodes.
Support
If you have questions or need additional help when migrating to a self-managed Consul Enterprise cluster, submit a request to our support team.