Manage workers with HCP Boundary
HCP Boundary allows organizations to register their own self-managed workers. Self-managed workers can be deployed in private networks while still communicating with an upstream HCP Boundary cluster.
Note
Deploying self-managed workers with HCP Boundary requires the Boundary Enterprise binary, available for Linux, macOS, Windows, BSD, and Solaris. Workers must also stay up to date with the HCP control plane; otherwise, new features will not work. You can check the control plane version in the HCP Boundary portal. This tutorial was tested using Boundary 0.13.2.
HCP Boundary is an identity-aware proxy that sits between users and the infrastructure they want to connect to. The proxy has two components:
- A control plane that manages state around users under management, targets, and access policies.
- Worker nodes, assigned by the control plane once a user authenticates into HCP Boundary and selects a target.
Self-managing your workers allows Boundary users to securely connect to private endpoints (such as SSH services on hosts, databases, or HashiCorp Vault) without exposing a private network to the public or HashiCorp-managed resources.
This tutorial demonstrates the basics of how to register and manage workers using HCP Boundary.
Prerequisites
This tutorial assumes you have:
- Access to an HCP Boundary instance
- Completed the previous HCP Administration tutorials
- A publicly accessible Ubuntu instance configured as a target (see the Manage Targets tutorial)
Self-managed HCP worker binaries exist for Linux, macOS, Windows, BSD, and Solaris. This tutorial provides two options for configuring the worker instance:
- A publicly accessible Ubuntu instance to be used as a worker OR
- Deploy a worker locally
Regardless of the method used, workers must run the Boundary Enterprise binary to be registered with HCP. If using the first option, you can follow this guide to create a publicly accessible Amazon EC2 instance to use for this tutorial.
Configure the worker
To configure a self-managed worker, the following details are required:
- HCP Cluster URL (Boundary address)
- Auth Method ID (from the Admin Console)
- Admin login name and password
Visit the Getting Started on HCP tutorial if you need to locate any of these values.
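If you plan to use the Boundary CLI later in this tutorial, it helps to export the cluster URL and auth method ID now. A quick sketch with placeholder values (substitute your own):

$ export BOUNDARY_ADDR=https://<cluster_id>.boundary.hashicorp.cloud
$ export BOUNDARY_AUTH_METHOD_ID=<auth_method_id>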
Warning
For the purposes of this tutorial, the security group policy for the AWS worker instance must accept incoming TCP connections on port 9202 to allow Boundary client connections. To learn more about creating this security group and attaching it to your instance, check the AWS EC2 security group documentation.
Log in and download Boundary Enterprise
Log in to the Ubuntu instance that will be configured as a worker.
For example, using SSH:
$ ssh ubuntu@198.51.100.1 -i /path/my-key-pair.pem
The authenticity of host 'ec2-198-51-100-1.compute-1.amazonaws.com (198.51.100.1)' can't be established.
ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY.
Are you sure you want to continue connecting (yes/no)? yes

ubuntu@ip-172-31-88-177:~
Note
The above example is for demonstration purposes. You will need to supply your Ubuntu instance's username, public IP address, and private key to connect. If using AWS EC2, check this article to learn more about connecting to a Linux instance using SSH.
Create a new folder to store your Boundary config file. This tutorial creates the boundary/ directory in the user's home directory to store the worker config. If you do not have permission to create this directory, create the folder elsewhere.
$ mkdir /home/ubuntu/boundary/ && cd /home/ubuntu/boundary/
Next, download and install the Boundary Enterprise binary.
Note
The binary version should match the version of the HCP control plane. Check the control plane's version in the HCP Boundary portal, and download the appropriate version using wget. The example below installs the 0.13.2 version of the boundary binary, versioned as 0.13.2+ent.
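If you need to pin the exact version with wget as described above, a sketch (this assumes the standard releases.hashicorp.com layout and a Linux AMD64 host; verify the URL for your platform and version):

$ wget https://releases.hashicorp.com/boundary/0.13.2+ent/boundary_0.13.2+ent_linux_amd64.zip
$ unzip boundary_0.13.2+ent_linux_amd64.zip   # requires unzip (sudo apt install unzip -y)
$ sudo mv boundary /usr/local/bin/boundary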
Enter the following command to install the latest version of the Boundary Enterprise binary on Ubuntu.
$ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg ;\
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list ;\
sudo apt update && sudo apt install boundary-enterprise -y
Once installed, verify the version of the boundary binary.
$ boundary version

Version information:
  Build Date:          2023-06-07T16:41:10Z
  Git Revision:        b1f75f5c731c843f5c987feae310d86e635806c7
  Metadata:            ent
  Version Number:      0.13.2+ent
Ensure the Version Number matches the version of the HCP Boundary control plane; matching versions are required to get the latest HCP Boundary features.
Write the worker config
Next, create a new file named /home/ubuntu/boundary/worker.hcl.
$ touch /home/ubuntu/boundary/worker.hcl
Open the file with a text editor, such as Vi.
Paste the following configuration into the worker config file:
/home/ubuntu/boundary/worker.hcl
disable_mlock = true

hcp_boundary_cluster_id = "<cluster_id>"

listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  public_addr = "<worker_public_addr>"
  auth_storage_path = "/home/ubuntu/boundary/worker1"
  tags {
    type = ["worker1", "upstream"]
  }
}
Update the following values in the worker.hcl file:

- <cluster_id> on line 3 should be replaced with the HCP Boundary Cluster ID, such as c3a7a20a-f663-40f3-a8e3-1b2f69b36254
- <worker_public_addr> on line 11 should be replaced with the public IP address of the Ubuntu worker, such as 107.22.128.152

The <cluster_id> on line 3 can be determined from the UUID in the HCP Boundary Cluster URL. For example, if your Cluster URL is:

https://c3a7a20a-f663-40f3-a8e3-1b2f69b36254.boundary.hashicorp.cloud

then the cluster ID is c3a7a20a-f663-40f3-a8e3-1b2f69b36254.
The public_addr should match the public IP or DNS name of your Ubuntu instance.
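If you are unsure of the instance's public IP, one way to look it up from inside an EC2 instance is the instance metadata service. A sketch using IMDSv2 (the token step is required when IMDSv2 is enforced):

$ TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
$ curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/public-ipv4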
Note the listener "tcp" stanza:

listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}
The listener address is set to 0.0.0.0:9202. The AWS security group for this instance should already be configured to accept inbound TCP connections on this port. If a custom listener port is desired, define it here.
Save this file.
Workers have three configuration fields that can be specified:

- auth_storage_path is a local path where a worker will store its credentials. Storage should not be shared between workers.
- hcp_boundary_cluster_id accepts a Boundary cluster ID and will be used by a worker when initially connecting to HCP Boundary. This field is set outside the worker stanza. Your cluster ID is the UUID in the controller URL. For example, if your controller URL is https://abcd1234-e567-f890-1ab2-cde345f6g789.boundary.hashicorp.cloud, then your cluster ID is abcd1234-e567-f890-1ab2-cde345f6g789.
- initial_upstreams indicates the address or addresses a worker will use when initially connecting to Boundary. This is an alternative to setting the HCP cluster ID, and is set within the worker stanza. Unless utilizing multi-hop sessions, this field should be left unset, as setting hcp_boundary_cluster_id is sufficient.

The worker configured with the hcp_boundary_cluster_id is known as the ingress worker, which provides access to the HCP Boundary cluster. Ingress workers ensure that connectivity is always available even if the HCP-managed upstream workers change.
Note
In the above example, both the auth_storage_path and hcp_boundary_cluster_id are specified. If initial_upstreams were configured instead, the hcp_boundary_cluster_id would be omitted. Do not set both hcp_boundary_cluster_id and initial_upstreams together, as the HCP cluster ID will take precedence.
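For reference, a worker configured with initial_upstreams instead of the cluster ID would look something like the following sketch. The upstream address is a placeholder, and this pattern is only needed for the multi-hop setups covered in the next tutorial:

worker {
  public_addr = "<worker_public_addr>"
  auth_storage_path = "/home/ubuntu/boundary/worker2"
  initial_upstreams = ["<upstream_worker_address>:9202"]
  tags {
    type = ["worker2"]
  }
}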
To see all valid config options, refer to the worker configuration docs.
Start the worker
With the worker config defined, start the worker server. Provide the full path to the worker config file (such as /home/ubuntu/boundary/worker.hcl).
$ boundary server -config="/home/ubuntu/boundary/worker.hcl"

==> Boundary server configuration:

  Cgo: disabled
  Listener 1: tcp (addr: "0.0.0.0:9202", max_request_duration: "1m30s", purpose: "proxy")
  Log Level: info
  Mlock: supported: true, enabled: false
  Version: Boundary v0.13.2+ent
  Version Sha: b1f75f5c731c843f5c987feae310d86e635806c7
  Worker Auth Current Key Id: knoll-unengaged-twisting-kite-envelope-dock-liftoff-legend
  Worker Auth Registration Request: GzusqckarbczHoLGQ4UA25uSR7RQJqCjDfxGSJZvEpwQpE7HzYvpDJ88a4QMP3cUUeBXhS5oTgck3ZvZ3nrZWD3HxXzgq4wNScpy7WE7JmNrrGNLNEFeqqMcyhjqGJVvg2PqiZA6arL6zYLNLNCEFtRhcvG5LLMeHc3bthkrbwLg7R7TNswTjDJWmwh4peYpnKuQ9qHEuTK9fapmw4fdvRTiTbrq78ju4asvLByFTCTR3nbk62Tc15iANYsUAn9JLSxjgRXTsuTBkp4QoqBqz89pEi258Wd1ywcACBHRT3
  Worker Auth Storage Path: /home/ubuntu/boundary/worker1
  Worker Public Proxy Addr: 52.90.177.171:9202

==> Boundary server started! Log data will stream in below:

{"id":"l0UQKrAg7b","source":"https://hashicorp.com/boundary/ip-172-31-86-85/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).StartControllerConnections","data":{"msg":"Setting HCP Boundary cluster address 6f40d99c-ed7a-4f22-ae52-931a5bc79c03.proxy.boundary.hashicorp.cloud:9202 as upstream address"}},"datacontentype":"application/cloudevents","time":"2023-01-10T04:34:52.616180263Z"}
The worker will start and begin attempting to connect to the upstream controller, printing the log message "worker is not authenticated to an upstream, not sending status" until it is registered.
The worker also outputs its authorization request as the Worker Auth Registration Request. This value is also saved to a file, auth_request_token, in the directory defined by auth_storage_path in the worker config.
Note the Worker Auth Registration Request: value in the server output above. This value can also be found in the /home/ubuntu/boundary/worker1/auth_request_token file. Copy this value.
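If the registration request has scrolled out of view, you can read it from the worker's auth storage directory instead (using the path configured in this tutorial):

$ cat /home/ubuntu/boundary/worker1/auth_request_token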
Exit the Ubuntu worker.
Open a terminal session on your local machine, where Boundary 0.9.0 or greater is installed.
Register the worker with HCP
HCP workers can be registered using the Boundary CLI or Admin Console Web UI.
Authenticate to HCP Boundary as the admin user.
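If you prefer to register the worker with the CLI, a sketch (this assumes BOUNDARY_ADDR and BOUNDARY_AUTH_METHOD_ID are exported as shown earlier; the token is the Worker Auth Registration Request value you copied, and the worker name is illustrative):

$ boundary authenticate password \
    -auth-method-id=$BOUNDARY_AUTH_METHOD_ID \
    -login-name=admin

$ boundary workers create worker-led \
    -worker-generated-auth-token=<worker_auth_registration_request> \
    -name=worker1

This tutorial proceeds using the Admin Console Web UI.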
Log in to the HCP portal.
From the HCP Portal's Boundary page, click Open Admin UI - a new page will open.
Enter the admin username and password you created when you deployed the new instance and click Authenticate.
Once logged in, navigate to the Workers page.
Notice that only HCP workers are listed.
Click New.
The new workers page can be used to construct the contents of the worker.hcl file.
Do not fill in any of the worker fields. If you were configuring a new worker, providing the following details would construct the worker config file contents for you:
- Boundary Cluster ID
- Worker Public Address
- Config file path
- Worker Tags
The instructions on this page provide details for installing the Boundary Enterprise binary and deploying the constructed config file.
Because the worker has already been deployed, only the Worker Auth Registration Request key needs to be provided on this page.
Scroll down to the bottom of the New Worker page and paste the Worker Auth Registration Request key you copied earlier.
Click Register Worker.
Click Done and notice the new worker on the Workers page.
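You can also confirm the registration from the CLI. A quick check (output will vary for your cluster):

$ boundary workers list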
Worker-aware targets
From the Manage Targets tutorial you should already have a configured target.
List the available targets:
$ boundary targets list -recursive

Target information:
  ID:                    ttcp_xIxdzx3f68
    Scope ID:            p_A3yaexUoKn
    Version:             2
    Type:                tcp
    Name:                ubuntu-target
    Description:         Ubuntu target
    Authorized Actions:
      no-op
      read
      update
      delete
      add-host-sources
      set-host-sources
      remove-host-sources
      add-credential-sources
      set-credential-sources
      remove-credential-sources
      authorize-session
Export the target ID as an environment variable:
$ export TARGET_ID=<ubuntu-target-ID>
Boundary workers can be assigned tags, key-value pairs that targets use to determine where their connections should be routed.
A simple tag was included in the worker.hcl file from before:

worker {
  tags {
    type = ["worker1", "upstream"]
  }
}
This config creates the resulting tags on the worker:
Tags:
  Worker Configuration:
    type: ["worker1" "upstream"]
  Canonical:
    type: ["worker1" "upstream"]
In this scenario, only one worker is allowed to handle connections to the Ubuntu target. This worker functions as both the "ingress" worker, which handles initial connections from clients, and an "egress" worker, which establishes the final connection to the target.
In a "multi-hop" worker scenario the egress worker is the last worker in a
series of "hops" to reach targets in private network enclaves. Multi-hop workers
are explored in the next tutorial. The upstream
worker tag will be used in
next tutorial to set up multi-hop.
The Tags or Name of the worker (worker1) can be used to create a worker filter for the target.
Update this target to add a worker tag filter that searches for workers that have the worker1 tag. Boundary will consider any worker with this tag assigned to it an acceptable proxy for this target.
$ boundary targets update tcp -id $TARGET_ID -egress-worker-filter='"worker1" in "/tags/type"'

Target information:
  Created Time:             Wed, 08 Feb 2023 13:58:43 MST
  Description:              Ubuntu target
  Egress Worker Filter:     "worker1" in "/tags/type"
  ID:                       ttcp_EcoBxVwg0Y
  Name:                     ubuntu-target
  Session Connection Limit: -1
  Session Max Seconds:      28800
  Type:                     tcp
  Updated Time:             Thu, 09 Feb 2023 15:30:52 MST
  Version:                  19

  Scope:
    ID:                     p_p7smJxUmK4
    Name:                   QA_Tests
    Parent Scope ID:        o_cJBJF1PUmd
    Type:                   project

  Authorized Actions:
    no-op
    read
    update
    delete
    add-host-sources
    set-host-sources
    remove-host-sources
    add-credential-sources
    set-credential-sources
    remove-credential-sources
    authorize-session

  Host Sources:
    Host Catalog ID:        hcst_HKuvpaMhiV
    ID:                     hsst_iYZzbdLHYw

  Attributes:
    Default Port:           22
Note
The type: "upstream" tag could also have been used, or a filter that searches for the name of the worker directly ("/name" == "worker1").
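For example, a name-based filter would be applied with the same update command, swapping the filter expression (a sketch):

$ boundary targets update tcp -id $TARGET_ID -egress-worker-filter='"/name" == "worker1"'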
With the filter assigned, any connections to this target will be forced to proxy through the worker.
Finally, establish a connection to the target. Enter your instance's login name after the -l option and the path to your instance's private key after the -i option.
$ boundary connect ssh -target-id $TARGET_ID -- -l ubuntu -i /path/to/key.pem

Welcome to Ubuntu 22.04 LTS (GNU/Linux 5.15.0-1011-aws x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Tue Sep 20 17:54:00 UTC 2022

  System load:  0.0               Processes:             98
  Usage of /:   22.7% of 7.58GB   Users logged in:       0
  Memory usage: 25%               IPv4 address for eth0: 172.31.93.237
  Swap usage:   0%

 * Ubuntu Pro delivers the most comprehensive open source security and
   compliance features.

   https://ubuntu.com/aws/pro

0 updates can be applied immediately.

The list of available updates is more than a week old.
To check for new updates run: sudo apt update

Last login: Tue Sep 20 17:41:48 2022 from 44.194.155.74
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@ip-172-31-93-237:~$
Sessions can be managed using the same methods discussed in the Manage Sessions tutorial.
When finished, the session can be terminated manually, or canceled via another authenticated Boundary command. Sessions can also be managed using the Admin Console UI.
Note
To cancel this session using the CLI, you will need to open a new terminal window and re-export the BOUNDARY_ADDR and BOUNDARY_AUTH_METHOD_ID environment variables. Then log back into Boundary using boundary authenticate.
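For example (placeholder values; use your own cluster URL and auth method ID):

$ export BOUNDARY_ADDR=https://<cluster_id>.boundary.hashicorp.cloud
$ export BOUNDARY_AUTH_METHOD_ID=<auth_method_id>
$ boundary authenticate password -auth-method-id=$BOUNDARY_AUTH_METHOD_ID -login-name=admin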
$ boundary sessions list -recursive

Session information:
  ID:                    s_Ks3FDSv6Yk
    Scope ID:            p_A3yaexUoKn
    Status:              active
    Created Time:        Tue, 20 Sep 2022 12:08:37 MDT
    Expiration Time:     Tue, 20 Sep 2022 20:08:37 MDT
    Updated Time:        Tue, 20 Sep 2022 12:08:37 MDT
    User ID:             u_UL0xXn8gj6
    Target ID:           ttcp_zoyuyn7ZWR
    Authorized Actions:
      no-op
      read
      read:self
      cancel
      cancel:self
Cancel the existing session.
$ boundary sessions cancel -id=s_Ks3FDSv6Yk

Session information:
  Auth Token ID:       at_FszYgaQHJk
  Created Time:        Tue, 20 Sep 2022 12:08:37 MDT
  Endpoint:            tcp://3.87.143.34:22
  Expiration Time:     Tue, 20 Sep 2022 20:08:37 MDT
  Host ID:             hst_6kM0snziuh
  Host Set ID:         hsst_Hr8NXSBNzt
  ID:                  s_Ks3FDSv6Yk
  Status:              canceling
  Target ID:           ttcp_zoyuyn7ZWR
  Type:                tcp
  Updated Time:        Tue, 20 Sep 2022 12:09:32 MDT
  User ID:             u_UL0xXn8gj6
  Version:             3

  Scope:
    ID:                p_A3yaexUoKn
    Name:              QA_Tests
    Parent Scope ID:   o_tzz4IxP11N
    Type:              project

  Authorized Actions:
    no-op
    read
    read:self
    cancel
    cancel:self

  States:
    Start Time:        Tue, 20 Sep 2022 12:09:32 MDT
    Status:            canceling

    End Time:          Tue, 20 Sep 2022 12:09:32 MDT
    Start Time:        Tue, 20 Sep 2022 12:08:37 MDT
    Status:            active

    End Time:          Tue, 20 Sep 2022 12:08:37 MDT
    Start Time:        Tue, 20 Sep 2022 12:08:37 MDT
    Status:            pending
Summary
This tutorial demonstrated self-managed worker registration with HCP Boundary and discussed worker management. You deployed a self-managed worker, registered the worker with HCP Boundary, and tested the proxy connection to a target.
To continue learning about Boundary, check out the Multi-Hop Sessions tutorial.