Managing external traffic with application load balancing
Application load balancing is a concept that focuses on routing and balancing traffic based on the seventh layer of the OSI model, known as the application layer. In practice, for web applications this usually means routing based on the URL path or query parameters. For example, a request to `/app/account/balance` will be routed to a node running the application, which serves the account balance, while `/html/about-us` will be routed to a static GitHub Pages website.
These destinations are referred to as targets and are part of a target group, which can contain one or many targets. Targets can be the final destination for a request and have an application running on the node directly serving the response. In cases where orchestration tools like Nomad or Kubernetes are used, these targets can be nodes running several services, each on a different port.
The Load Balancing with NGINX tutorial shows you how to configure one instance of Nginx to balance traffic between three web application instances, each running on a different node. This tutorial extends that knowledge and teaches you how to add an external application load balancer (ALB) to both allow Internet traffic to your internal services and further balance traffic to different instances of Nginx. In this way, the ALB is responsible for forwarding traffic based on which application service is being requested, and Nginx is responsible for balancing traffic between the multiple instances of the same application service.
Though Nginx was chosen as the internal load balancer for this scenario, other load balancing applications like Fabio, HAProxy, and Traefik can also be used.
In this tutorial, you will create a Nomad cluster, deploy an example web application, deploy Nginx to balance requests to the webapp, and create an external ALB to forward traffic to Nginx. You will then add custom rules to the ALB allowing it to forward requests to the different webapp services based on the URL path parameter.
Architecture overview
Infrastructure diagram
The cluster created in this tutorial consists of three Nomad server nodes and five Nomad client nodes. The Nomad clients are split into two logical datacenters: three of them are in `dc1` and two are in `dc2`. Additionally, the Nomad clients contain custom metadata identifying which hypothetical services should run on them: the clients in `dc1` have a metadata tag of `api` for the API service, while those in `dc2` have a `payments` tag for the payments service. A real-world reason for splitting services based on client attributes like this might be that clients with the `payments` tag have faster storage and higher memory resources, enabling quicker processing of payments.
```hcl
datacenter = "dc1"

client {
  meta {
    node-name      = "nomad-client-1"
    service-client = "api"
  }
}
```
Application diagram
The ALB listens for user requests on port `80` and forwards them to the Nomad clients on port `8080`, which is where the Nginx service is listening. Nginx then forwards the request to an instance of the web application running on one of the clients in the same Nomad datacenter. The port for the web application is a dynamic port in the range of `20000` to `32000` and is allocated by the Nomad scheduler.
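In a Nomad job spec, a dynamic port like this is requested by declaring a port label with no static value. The sketch below is illustrative only and is not copied from the repository's job files.

```hcl
# Illustrative sketch: a network block that asks the Nomad scheduler for a
# dynamically allocated port (from the default 20000-32000 range) and exposes
# it to the task under the label "http".
group "api-demo" {
  network {
    port "http" {} # no "static" attribute, so Nomad picks the port
  }
}
```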
Prerequisites
For this tutorial, you will need:
- Packer 1.7.7 or later installed locally
- Terraform 1.0.5 or later installed locally
- Nomad 1.1.5 or later installed locally
- An AWS account with credentials set as local environment variables and an AWS keypair
Note
This tutorial creates AWS resources that may not qualify as part of the AWS free tier. Be sure to follow the Cleanup process at the end so you don't incur any additional unnecessary charges.
Clone the example repository
The example repository contains configuration files for creating a Nomad cluster on AWS. It uses Consul for the initial setup of the Nomad servers and clients, enables Access Control Lists for Consul and Nomad, and creates an elastic load balancer for easier access to Consul and Nomad.
Clone the example repository.
$ git clone https://github.com/hashicorp/learn-nomad-external-alb
Navigate to the cloned repository folder.
$ cd learn-nomad-external-alb
Check out the `v0.1` tag of the repository as a local branch named `nomad-alb`.
$ git checkout v0.1 -b nomad-alb
Review repository contents
The `shared` top-level directory contains configuration files and scripts for building the Amazon Machine Image (AMI), the ACL policy files for Consul and Nomad, and the agent configuration files for running Consul and Nomad on the AWS instances.

The `shared/config` directory contains configuration files for the Consul and Nomad agents. These files contain capitalized placeholder strings that are replaced with real values during the provisioning process. For example, in the `nomad_client.hcl` agent file, this includes the datacenter, the Consul token, and other custom metadata attributes.
shared/config/nomad_client.hcl
```hcl
data_dir   = "/opt/nomad/data"
bind_addr  = "0.0.0.0"
datacenter = "DATACENTER"

# Enable the client
client {
  enabled = true

  options {
    "driver.raw_exec.enable"    = "1"
    "docker.privileged.enabled" = "true"
  }

  meta {
    node-name      = "SERVER_NAME"
    service-client = "SERVICE_CLIENT"
  }
}

acl {
  enabled = true
}

consul {
  address = "127.0.0.1:8500"
  token   = "CONSUL_TOKEN"
}

## …
```
The `aws` top-level directory contains the Packer build file used to create and publish the AMI to AWS, as well as the Terraform configurations and additional files necessary for the infrastructure provisioning process. The `post-setup.sh` script here retrieves the Nomad token from the Consul KV store once the cluster is up and running. Lastly, the `nomad` folder contains the Nomad job spec files for the demo web application and Nginx.
The `user-data-client.sh` script replaces the placeholder strings in the Nomad client agent file referenced above with actual values based on the AWS metadata tags for the instance.

Note that the `nomad_consul_token_secret` value is placed there by Terraform during the provisioning process as it renders the file.
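Under common Terraform provisioning patterns, this rendering happens with the built-in `templatefile()` function, which substitutes `${retry_join}`, `${nomad_binary}`, and `${nomad_consul_token_secret}` before the script runs. The sketch below illustrates the idea; the resource and variable names are assumptions, not copied from the repository.

```hcl
# Assumed sketch of how the user-data script could be rendered.
# Names here are illustrative, not taken from the repository.
resource "aws_instance" "client" {
  # ...
  user_data = templatefile("${path.module}/data-scripts/user-data-client.sh", {
    retry_join                = var.retry_join
    nomad_binary              = var.nomad_binary
    nomad_consul_token_secret = var.nomad_consul_token_secret
  })
}
```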
aws/data-scripts/user-data-client.sh
```shell
#!/bin/bash

set -e

exec > >(sudo tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

sudo bash /ops/shared/scripts/client.sh "aws" "${retry_join}" "${nomad_binary}"

NOMAD_HCL_PATH="/etc/nomad.d/nomad.hcl"

sed -i "s/CONSUL_TOKEN/${nomad_consul_token_secret}/g" $NOMAD_HCL_PATH

# Place the AWS instance name as metadata on the
# client for targeting workloads
AWS_SERVER_TAG_NAME=$(curl http://169.254.169.254/latest/meta-data/tags/instance/Name)
sed -i "s/SERVER_NAME/$AWS_SERVER_TAG_NAME/g" $NOMAD_HCL_PATH

# Put targeted nodes in a different datacenter
# and add service_client meta tag
if [[ $AWS_SERVER_TAG_NAME =~ "targeted" ]]; then
  sed -i "s/DATACENTER/dc2/g" $NOMAD_HCL_PATH
  sed -i "s/SERVICE_CLIENT/payments/g" $NOMAD_HCL_PATH
else
  sed -i "s/DATACENTER/dc1/g" $NOMAD_HCL_PATH
  sed -i "s/SERVICE_CLIENT/api/g" $NOMAD_HCL_PATH
fi

sudo systemctl restart nomad
```
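The branching at the end of the script is worth calling out: in bash, `=~` with a quoted right-hand side performs a plain substring match rather than a regular-expression match, so any instance whose Name tag contains the word "targeted" lands in `dc2` with the `payments` tag. A standalone sketch of that logic (the `classify_client` function name is invented here for illustration):

```shell
# classify_client is a hypothetical helper mirroring the script's branching.
classify_client() {
  local tag_name="$1"
  # A quoted pattern with =~ means "substring match", not a regex match
  if [[ $tag_name =~ "targeted" ]]; then
    echo "dc2 payments"
  else
    echo "dc1 api"
  fi
}

classify_client "nomad-client-targeted-1"   # prints: dc2 payments
classify_client "nomad-client-2"            # prints: dc1 api
```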
The `user-data-server.sh` script handles the bootstrapping of the ACL systems for both Consul and Nomad and saves the Nomad user token to the Consul KV store for temporary storage. The `post-setup.sh` script deletes the token from Consul KV once it's been retrieved and saved locally.
aws/data-scripts/user-data-server.sh
```shell
#!/bin/bash

## …

ACL_DIRECTORY="/ops/shared/config"
CONSUL_BOOTSTRAP_TOKEN="/tmp/consul_bootstrap"
NOMAD_BOOTSTRAP_TOKEN="/tmp/nomad_bootstrap"
NOMAD_USER_TOKEN="/tmp/nomad_user_token"

## …

# Bootstrap consul ACLs
consul acl bootstrap | grep -i secretid | awk '{print $2}' > $CONSUL_BOOTSTRAP_TOKEN

if [ $? -eq 0 ]; then

  consul acl policy create \
    -name 'nomad-auto-join' \
    -rules="@$ACL_DIRECTORY/consul-acl-nomad-auto-join.hcl" \
    -token-file=$CONSUL_BOOTSTRAP_TOKEN

  consul acl role create \
    -name "nomad-auto-join" \
    -description "Role with policies necessary for nomad servers and clients to auto-join via Consul." \
    -policy-name "nomad-auto-join" \
    -token-file=$CONSUL_BOOTSTRAP_TOKEN

  consul acl token create \
    -accessor=${nomad_consul_token_id} \
    -secret=${nomad_consul_token_secret} \
    -description "Nomad server/client auto-join token" \
    -role-name nomad-auto-join \
    -token-file=$CONSUL_BOOTSTRAP_TOKEN

  # Wait for nomad servers to come up
  sleep 30

  # Bootstrap nomad ACLs
  nomad acl bootstrap | grep -i secret | awk -F '=' '{print $2}' | xargs > $NOMAD_BOOTSTRAP_TOKEN

  nomad acl policy apply -token $(cat $NOMAD_BOOTSTRAP_TOKEN) \
    -description "Policy to allow reading of agents and nodes and listing and submitting jobs in all namespaces." \
    node-read-job-submit $ACL_DIRECTORY/nomad-acl-user.hcl

  nomad acl token create -token $(cat $NOMAD_BOOTSTRAP_TOKEN) \
    -name "read-token" \
    -policy node-read-job-submit | grep -i secret | awk -F "=" '{print $2}' | xargs > $NOMAD_USER_TOKEN

  # Write user token to kv
  consul kv put -token-file=$CONSUL_BOOTSTRAP_TOKEN nomad_user_token $(cat $NOMAD_USER_TOKEN)
fi
```
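The `grep`/`awk` pipelines in this script scrape the token values out of the human-readable bootstrap output. The sample below (with made-up token values, shaped like typical `consul acl bootstrap` and `nomad acl bootstrap` output) shows what each pipeline extracts:

```shell
# Made-up sample shaped like `consul acl bootstrap` output ("SecretID:  <uuid>")
consul_out='AccessorID:       6a1cc0aa-example
SecretID:         b3a9c0de-1111-2222-3333-444455556666'

# Take the second whitespace-separated field of the SecretID line
echo "$consul_out" | grep -i secretid | awk '{print $2}'
# prints: b3a9c0de-1111-2222-3333-444455556666

# Made-up sample shaped like `nomad acl bootstrap` output ("Secret ID  =  <uuid>")
nomad_out='Accessor ID  = 7f00d1ab-example
Secret ID    = 0a1b2c3d-aaaa-bbbb-cccc-ddddeeeeffff'

# Split on "=", take the value, and let xargs trim the surrounding whitespace
echo "$nomad_out" | grep -i secret | awk -F '=' '{print $2}' | xargs
# prints: 0a1b2c3d-aaaa-bbbb-cccc-ddddeeeeffff
```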
The `aws/nomad/webapp.nomad` job spec file runs two services of the demo web application: one acting as the api service and the other as the payments service. These services are configured to run only on the instances that have the corresponding `meta.service-client` attribute on the Nomad client, which was placed in the agent file by the `user-data-client.sh` script mentioned above. This is specified in the `constraint` block.
aws/nomad/webapp.nomad
```hcl
job "demo-webapp" {
  datacenters = ["dc1", "dc2"]

  group "api-demo" {
    constraint {
      attribute = "${meta.service-client}"
      operator  = "="
      value     = "api"
    }

    count = 3

    ## …

    service {
      name = "api-service"
      port = "http"
      ## …
    }

    ## …
  }

  group "payments-demo" {
    constraint {
      attribute = "${meta.service-client}"
      operator  = "="
      value     = "payments"
    }

    count = 2

    ## …

    service {
      name = "payments-service"
      port = "http"
      ## …
    }

    ## …
  }
}
```
Finally, the `aws/nomad/nginx.nomad` job spec file runs two instances of Nginx to balance traffic between the different clients associated with the simulated api and payments services, one Nginx instance for each respective service. It retrieves the IP addresses of the clients running the service by querying Consul. The service name used in the templated Nginx configuration file matches the name of the service defined in the `aws/nomad/webapp.nomad` file.

The Nginx services also use the `constraint` block mentioned above to run on specific clients.
aws/nomad/nginx.nomad
```hcl
job "nginx" {
  datacenters = ["dc1", "dc2"]

  group "nginx-api" {
    constraint {
      attribute = "${meta.service-client}"
      operator  = "="
      value     = "api"
    }

    count = 1

    ## …

    task "nginx" {
      ## …
      template {
        data = <<EOF
upstream backend {
{{ range service "api-service" }}
  server {{ .Address }}:{{ .Port }};
{{ else }}
  server 127.0.0.1:65535; # force a 502
{{ end }}
}

server {
  listen 8080;

  location / {
    proxy_pass http://backend;
  }
}
EOF
        ## …
      }
    }
  }

  group "nginx-payments" {
    constraint {
      attribute = "${meta.service-client}"
      operator  = "="
      value     = "payments"
    }

    count = 1

    ## …

    task "nginx" {
      ## …
      template {
        data = <<EOF
upstream backend {
{{ range service "payments-service" }}
  server {{ .Address }}:{{ .Port }};
{{ else }}
  server 127.0.0.1:65535; # force a 502
{{ end }}
}

server {
  listen 8080;

  location / {
    proxy_pass http://backend;
  }
}
EOF
        ## …
      }
    }
  }
}
```
Create the Nomad cluster
Build the Amazon Machine Image (AMI)
Navigate to the `aws` directory.
$ cd aws
Be sure to set your AWS environment variables, as Packer uses them to build the image and register the AMI in AWS.
$ export AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY> && export AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_KEY>
Initialize Packer to have it retrieve the required plugins.
$ packer init image.pkr.hcl
Then build the image.
```shell
$ packer build image.pkr.hcl

Build 'amazon-ebs' finished after 14 minutes 32 seconds.

==> Wait completed after 14 minutes 32 seconds

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
us-east-1: ami-0445eeea5e1406960
```
Provision the Nomad cluster
Open the `aws/terraform.tfvars` file in your text editor and update the `key_name` variable with the name of your AWS keypair and the `ami` variable with the AMI ID output by the Packer build command above. These are the only variables that must be updated, but you can modify the other values if you want to provision in a different region or change the cluster size, for example. Save the file.
Open your terminal and use the built-in `uuid()` function of the Terraform console to generate the required UUIDs for the token credentials. Generate and save the UUIDs as Terraform-specific environment variables.

First, the token ID.
$ export TF_VAR_nomad_consul_token_id=$(echo 'uuid()' | terraform console | tr -d '"')
Then, the token secret.
$ export TF_VAR_nomad_consul_token_secret=$(echo 'uuid()' | terraform console | tr -d '"')
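Terraform only checks these values at apply time, so a quick sanity check can confirm the exported credentials are well-formed before provisioning. The `is_uuid` helper below is an assumption for illustration and is not part of the repository.

```shell
# is_uuid is a hypothetical helper: match the standard 8-4-4-4-12 hex layout
is_uuid() {
  [[ $1 =~ ^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$ ]]
}

if is_uuid "$TF_VAR_nomad_consul_token_id" && is_uuid "$TF_VAR_nomad_consul_token_secret"; then
  echo "token credentials look valid"
else
  echo "set TF_VAR_nomad_consul_token_id and TF_VAR_nomad_consul_token_secret first"
fi
```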
Initialize Terraform to have it retrieve any required plugins and set up the workspace.
$ terraform init
Provision the resources. Respond `yes` to the prompt to confirm the operation, and then press Enter to start the process. Provisioning will take a few minutes.
```shell
$ terraform apply

Apply complete! Resources: 25 added, 0 changed, 0 destroyed.

Outputs:

IP_Addresses = <<EOT

Client public IPs: 44.202.254.116, 54.89.162.70, 18.234.36.69

Targeted client public IPs: 54.87.30.59, 3.89.255.114

Server public IPs: 3.95.186.208, 18.206.252.210, 3.86.187.77

The Consul UI can be accessed at http://nomad-server-lb-741030279.us-east-1.elb.amazonaws.com:8500/ui
with the bootstrap token: 233c8af2-65ad-236c-0f1a-b3e2903b61a3
```
Once Terraform finishes provisioning the resources, verify that the services are healthy by navigating to the Consul UI in your web browser with the link in the Terraform output.
Click on the Log in button and use the bootstrap token secret from the Terraform output to log in.
Click on the Nodes page from the left navigation. Note that there are 8 healthy nodes, which include the 3 servers and 5 clients created by Terraform.
Next, run the `post-setup.sh` script. This script retrieves the Nomad bootstrap token from the Consul KV store, saves it locally to `nomad.token`, and then deletes the token from the Consul KV store.
Warning

If the `nomad.token` file already exists, the script won't work until it has been deleted. Delete the file manually and re-run the script, or use `rm nomad.token && ./post-setup.sh` instead.
```shell
$ ./post-setup.sh

The Nomad user token has been saved locally to nomad.token and deleted from the Consul KV store.

Set the following environment variables to access your Nomad cluster with the user token created during setup:

export NOMAD_ADDR=$(terraform output -raw lb_address_consul_nomad):4646
export NOMAD_TOKEN=$(cat nomad.token)

The Nomad UI can be accessed at http://nomad-server-lb-741030279.us-east-1.elb.amazonaws.com:4646/ui
with the bootstrap token: 1cd623c1-c935-1aa2-b80c-bd61e72bfac9
```
Copy the `export` commands from the output, paste them into your terminal, and press Enter.
$ export NOMAD_ADDR=$(terraform output -raw lb_address_consul_nomad):4646 && export NOMAD_TOKEN=$(cat nomad.token)
Finally, verify connectivity to the cluster by running a Nomad command.
```shell
$ nomad node status
ID        DC   Name              Class   Drain  Eligibility  Status
44664b5d  dc2  ip-172-31-30-145  <none>  false  eligible     ready
5c5caa5c  dc1  ip-172-31-30-28   <none>  false  eligible     ready
5705660c  dc1  ip-172-31-24-223  <none>  false  eligible     ready
a7830be9  dc2  ip-172-31-25-227  <none>  false  eligible     ready
16aac29f  dc1  ip-172-31-86-58   <none>  false  eligible     ready
```
You can also navigate to the Nomad UI in your web browser with the link in the `post-setup.sh` script output. Log in by setting the Secret ID to the bootstrap token's value, and then click on the Clients page in the left navigation to view the client nodes.
Run the application job
Open your terminal and submit the demo web application job.
```shell
$ nomad job run nomad/webapp.nomad
## …
ID          = 13941ecc
Job ID      = demo-webapp
Job Version = 0
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group     Desired  Placed  Healthy  Unhealthy  Progress Deadline
api-demo       3        3       3        0          2022-02-16T14:58:51Z
payments-demo  2        2       2        0          2022-02-16T14:58:52Z
```
View information about the web application job with the `status` command.
```shell
$ nomad job status demo-webapp
ID            = demo-webapp
Name          = demo-webapp
Submit Date   = 2022-02-16T09:48:06-05:00
Type          = service
Priority      = 50
Datacenters   = dc1,dc2
Namespace     = default
Status        = running
Periodic      = false
Parameterized = false

## …

Latest Deployment
ID          = 13941ecc
Status      = successful
Description = Deployment completed successfully

## …

Allocations
ID        Node ID   Task Group     Version  Desired  Status   Created    Modified
12994155  4a9a3645  api-demo       0        run      running  1m51s ago  1m8s ago
8aa28bbc  4056a16a  payments-demo  0        run      running  1m51s ago  1m5s ago
8d8a96e3  dee493d5  api-demo       0        run      running  1m51s ago  1m6s ago
ba2dbd14  b97489a2  api-demo       0        run      running  1m51s ago  1m6s ago
e757a6b8  d53ce7ce  payments-demo  0        run      running  1m51s ago  1m6s ago
```
Navigate back to the Nomad UI in your web browser, click on the Jobs link in the left navigation, and then the demo-webapp job to see similar information.
Note
The Nomad UI topology page displays a great visualization of the cluster, the resources available and in use on each of the nodes, and which jobs are using those resources. You can find it by clicking on the Topology link in the left navigation.
The application instances are ready to handle requests and in the next section you'll use Nginx to balance those incoming requests between the instances.
Run the internal Nginx load balancer job
The Nginx job runs one instance of Nginx on one node in each datacenter. Nginx listens on port `8080` and balances traffic to the web application running on the clients in the same datacenter. Nginx uses consul-template to retrieve the client IPs and ports from the `api-service` and `payments-service` services registered by the demo-webapp job.
Open your terminal and submit the Nginx job.
```shell
$ nomad job run nomad/nginx.nomad
## …
ID          = 5ce769a0
Job ID      = nginx
Job Version = 0
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group      Desired  Placed  Healthy  Unhealthy  Progress Deadline
nginx-api       1        1       1        0          2022-02-16T15:11:47Z
nginx-payments  1        1       1        0          2022-02-16T15:11:48Z
```
The Nginx instances fill the role of internal load balancer for the webapp services running within the cluster, and they become accessible externally through the application load balancer you'll create in the next section.
Create the application load balancer
Create a new file named `alb.tf` in the `aws` directory alongside the other Terraform configuration files. Copy and paste the contents below into the file and save it.
aws/alb.tf
```hcl
data "aws_subnet_ids" "default_subnets" {
  vpc_id = data.aws_vpc.default.id
}

resource "aws_lb" "nomad_clients_ingress" {
  name               = "nomad-ingress-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.clients_ingress_sg.id]
  subnets            = data.aws_subnet_ids.default_subnets.ids
}

resource "aws_lb_listener" "nomad_listener" {
  load_balancer_arn = aws_lb.nomad_clients_ingress.id
  port              = 80

  default_action {
    type = "forward"

    forward {
      target_group {
        arn = aws_lb_target_group.nomad_clients.arn
      }

      target_group {
        arn = aws_lb_target_group.nomad_clients_targeted.arn
      }
    }
  }
}

# nomad clients for api
resource "aws_lb_target_group" "nomad_clients" {
  name = "nomad-clients"

  # App listener port
  port     = 8080
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default.id

  health_check {
    port = 8080
    path = "/"

    # Mark healthy if redirected
    matcher = "200,301,302"
  }
}

resource "aws_lb_target_group_attachment" "nomad_clients" {
  count            = var.client_count
  target_group_arn = aws_lb_target_group.nomad_clients.arn
  target_id        = element(split(",", join(",", aws_instance.client.*.id)), count.index)
  port             = 8080
}

# nomad clients for payments
resource "aws_lb_target_group" "nomad_clients_targeted" {
  name = "nomad-clients-targeted"

  # App listener port
  port     = 8080
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default.id

  health_check {
    port = 8080
    path = "/"

    # Mark healthy if redirected
    matcher = "200,301,302"
  }
}

resource "aws_lb_target_group_attachment" "nomad_clients_targeted" {
  count            = var.targeted_client_count
  target_group_arn = aws_lb_target_group.nomad_clients_targeted.arn
  target_id        = element(split(",", join(",", aws_instance.targeted_client.*.id)), count.index)
  port             = 8080
}

output "alb_address" {
  value = "http://${aws_lb.nomad_clients_ingress.dns_name}:80"
}
```
This creates an application load balancer (ALB) with two target groups containing the Nomad client nodes. The first group contains the clients in the `dc1` datacenter that have the `api` meta tag, while the second group contains the clients in `dc2` with the `payments` tag. The ALB listens on port `80` and forwards requests to the Nomad clients on port `8080`, where the Nginx service is listening. The target groups have equal weights, which means incoming requests alternate between the first group and the second.
Apply the changes with Terraform and respond `yes` to the prompt to confirm the operation. This will take a few minutes.
```shell
$ terraform apply

Apply complete! Resources: 9 added, 0 changed, 0 destroyed.

Outputs:

IP_Addresses = <<EOT

Client public IPs: 3.80.134.141, 52.207.228.149, 54.225.34.175

Targeted client public IPs: 3.88.132.1, 54.89.250.120

Server public IPs: 54.158.105.32, 34.227.116.136, 34.230.47.119

The Consul UI can be accessed at http://nomad-server-lb-227540418.us-east-1.elb.amazonaws.com:8500/ui
with the bootstrap token: 5978db4b-40e7-da10-60aa-cd9463d93d24

EOT
alb_address = "http://nomad-ingress-alb-307671706.us-east-1.elb.amazonaws.com:80"
consul_bootstrap_token_secret = "5978db4b-40e7-da10-60aa-cd9463d93d24"
lb_address_consul_nomad = "http://nomad-server-lb-227540418.us-east-1.elb.amazonaws.com"
```
Next, verify that the application load balancer is working correctly.
Note
It may take a few minutes for DNS to propagate and the ALB to become available.
Run the `curl` command below. Notice that each response comes from one of the five Nomad client nodes. Press `ctrl+c` to stop the command when you're ready.
```shell
$ while true; do curl $(terraform output -raw alb_address); done
Welcome! You are on node 172.31.23.186:30489
Welcome! You are on node 172.31.80.235:24980
Welcome! You are on node 172.31.93.179:26052
Welcome! You are on node 172.31.30.145:26789
Welcome! You are on node 172.31.23.186:30489
Welcome! You are on node 172.31.93.179:26052
Welcome! You are on node 172.31.30.145:26789
Welcome! You are on node 172.31.23.186:30489
```
Target specific clients
You may want to target a specific client node or group of nodes based on attributes like physical resource configuration (presence of GPUs, higher memory, larger or faster storage) or software configuration (presence of a certain part of an application like a DB layer). In these cases, an application load balancer can help direct traffic to the appropriate nodes based on layer 7 properties like the URL path.
To illustrate this type of scenario, the cluster has been set up with nodes separated into two logical datacenters, each containing a specific piece of an application: the `api` and `payments` services.

Currently, the ALB directs traffic to both groups of clients evenly. In the next section, each group will be mapped to a specific URL path related to its part of the application.
Update the ALB configuration
Copy the contents below, add them to the end of the `alb.tf` file, and save the file.
aws/alb.tf
```hcl
resource "aws_lb_listener_rule" "nomad_clients" {
  listener_arn = aws_lb_listener.nomad_listener.arn

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.nomad_clients.arn
  }

  condition {
    path_pattern {
      values = ["/api"]
    }
  }
}

resource "aws_lb_listener_rule" "nomad_clients_targeted" {
  listener_arn = aws_lb_listener.nomad_listener.arn

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.nomad_clients_targeted.arn
  }

  condition {
    path_pattern {
      values = ["/payments"]
    }
  }
}
```
This maps the `/api` path to the Nomad clients in `dc1` with the `api` metadata tag and `/payments` to the ones in `dc2` with the `payments` tag.
Open your terminal and apply the changes with Terraform. Respond `yes` to the prompt to confirm the operation.
$ terraform apply
With these changes, any request to the ALB on the `/api` path will be forwarded to the Nginx service running in the `dc1` datacenter and served by the web application service running in the same datacenter. Requests to the `/payments` path will be forwarded to the Nginx and web application services running in `dc2`. Any other request will be split evenly between both Nginx services and their respective web application services; this behavior is used here only as an illustration and may not make sense for your application, depending on its configuration.
Run the `curl` commands below, which specify the service path, and note that now only certain clients respond based on the path in the request. Verify this by checking the node addresses in the output against the client list in the Nomad UI.

Query the `/api` path. Note the three unique addresses and how they match up to the Nomad clients in `dc1`.
```shell
$ while true; do curl $(terraform output -raw alb_address)/api; done
Welcome! You are on node 172.31.86.134:21854
Welcome! You are on node 172.31.23.186:30489
Welcome! You are on node 172.31.30.145:26789
Welcome! You are on node 172.31.86.134:21854
Welcome! You are on node 172.31.23.186:30489
```
Query the `/payments` path. Note the two unique addresses and how they match up to the Nomad clients in `dc2`.
```shell
$ while true; do curl $(terraform output -raw alb_address)/payments; done
Welcome! You are on node 172.31.80.235:24980
Welcome! You are on node 172.31.93.179:26052
Welcome! You are on node 172.31.80.235:24980
Welcome! You are on node 172.31.93.179:26052
Welcome! You are on node 172.31.80.235:24980
Welcome! You are on node 172.31.93.179:26052
```
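If you capture the loop output, a small post-processing helper (invented here for illustration, not part of the tutorial repository) can confirm the fan-out, that is, how many distinct client nodes answered on a given path:

```shell
# unique_nodes is a hypothetical helper: extract the node address from each
# "Welcome!" line, de-duplicate, and count the distinct backends.
unique_nodes() {
  sed -n 's/^Welcome! You are on node \([0-9.:]*\).*$/\1/p' | sort -u | wc -l
}

# Sample responses captured from the /api loop above
api_responses='Welcome! You are on node 172.31.86.134:21854
Welcome! You are on node 172.31.23.186:30489
Welcome! You are on node 172.31.30.145:26789
Welcome! You are on node 172.31.86.134:21854'

echo "$api_responses" | unique_nodes   # 3 distinct nodes for the sample above
```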
Cleanup
Run `terraform destroy` to clean up your provisioned infrastructure. Respond `yes` to the prompt to confirm the operation.
```shell
$ terraform destroy
## …
aws_instance.server[0]: Destruction complete after 31s
aws_instance.server[2]: Destruction complete after 31s
aws_instance.server[1]: Still destroying... [id=i-0af70516a51d3fe56, 40s elapsed]
aws_instance.server[1]: Destruction complete after 41s
aws_iam_instance_profile.instance_profile: Destroying... [id=nomad20220216173959430200000002]
aws_security_group.primary: Destroying... [id=sg-03f80cb564451a27b]
aws_iam_instance_profile.instance_profile: Destruction complete after 0s
aws_iam_role.instance_role: Destroying... [id=nomad20220216173958373800000001]
aws_security_group.primary: Destruction complete after 0s
aws_security_group.server_lb: Destroying... [id=sg-0f0ede8ee1f14e9db]
aws_iam_role.instance_role: Destruction complete after 1s
aws_security_group.server_lb: Destruction complete after 1s

Destroy complete! Resources: 27 destroyed.
```
Your AWS account still retains the AMI and its associated snapshots, which you may be charged for depending on your other usage. Delete both the AMI and its snapshots.
Note

Remember to delete the AMI images and snapshots in the region where you created them. If you didn't update the `region` variable in the `terraform.tfvars` file, they will be in the `us-east-1` region.
In the `us-east-1` region of your AWS account, deregister the AMI by selecting it, clicking on the Actions button, then the Deregister AMI option, and finally confirming by clicking the Deregister AMI button in the confirmation dialog.

Delete the snapshots by selecting them, clicking on the Actions button, then the Delete snapshot option, and finally confirming by clicking the Delete button in the confirmation dialog.
Next steps
In this tutorial you created a Nomad cluster, deployed an example web application, deployed Nginx to balance requests to the web application, created an external ALB to forward traffic to Nginx, and modified the ALB to forward requests to different instances of Nginx based on the request path in the URL.
For more information, check out the following resources.
- Learn more about the benefits of an ALB
- Read more about the integration between Consul and Nomad
- Try swapping out Nginx for another internal load balancing application