Load balancing with NGINX
You can use Nomad's template stanza to configure NGINX so that it can dynamically update its load balancer configuration to scale along with your services.
The main use case for NGINX in this scenario is to distribute incoming HTTP(S) and TCP requests from the Internet to front-end services that can handle these requests. This tutorial shows you one such example using a demo web application.
Prerequisites
To perform the tasks described in this tutorial, you need a Nomad environment with Consul installed. You can use this Terraform configuration to provision a sandbox environment. This tutorial uses a cluster with one server node and three client nodes.
Note
This tutorial is for demo purposes and only uses a single server node. Please consult the reference architecture for production configuration.
Create and run a demo web app job
Create a job for a demo web application and name the file webapp.nomad.hcl:
```hcl
job "demo-webapp" {
  datacenters = ["dc1"]

  group "demo" {
    count = 3

    network {
      port "http" {
        to = -1
      }
    }

    service {
      name = "demo-webapp"
      port = "http"

      check {
        type     = "http"
        path     = "/"
        interval = "2s"
        timeout  = "2s"
      }
    }

    task "server" {
      env {
        PORT    = "${NOMAD_PORT_http}"
        NODE_IP = "${NOMAD_IP_http}"
      }

      driver = "docker"

      config {
        image = "hashicorp/demo-webapp-lb-guide"
        ports = ["http"]
      }
    }
  }
}
```
This job specification creates three instances of the demo web application for you to target in your NGINX configuration.
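Each instance is a small web server that binds to the dynamically assigned port and reports which instance answered. The actual implementation inside the hashicorp/demo-webapp-lb-guide image is not shown in this tutorial; the following Python sketch is an illustrative stand-in that reads the same PORT and NODE_IP environment variables Nomad injects via the env stanza:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Nomad injects the dynamically assigned port and host IP through the
# env stanza in the job specification (PORT and NODE_IP).
PORT = int(os.environ.get("PORT", 8000))
NODE_IP = os.environ.get("NODE_IP", "127.0.0.1")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Echo back which instance served the request, mirroring the
        # response format shown later in this tutorial.
        body = f"Welcome! You are on node {NODE_IP}:{PORT}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run the server:
# HTTPServer(("0.0.0.0", PORT), Handler).serve_forever()
```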
Now, deploy the demo web application.
```shell-session
$ nomad run webapp.nomad.hcl
==> Monitoring evaluation "ea1e8528"
    Evaluation triggered by job "demo-webapp"
    Allocation "9b4bac9f" created: node "e4637e03", group "demo"
    Allocation "c386de2d" created: node "983a64df", group "demo"
    Allocation "082653f0" created: node "f5fdf017", group "demo"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "ea1e8528" finished with status "complete"
```
Create and run an NGINX job
Create a job for NGINX and name it nginx.nomad.hcl. This NGINX instance balances requests across the deployed instances of the web application.
```hcl
job "nginx" {
  datacenters = ["dc1"]

  group "nginx" {
    count = 1

    network {
      port "http" {
        static = 8080
      }
    }

    service {
      name = "nginx"
      port = "http"
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx"
        ports = ["http"]

        volumes = [
          "local:/etc/nginx/conf.d",
        ]
      }

      template {
        data = <<EOF
upstream backend {
{{ range service "demo-webapp" }}
  server {{ .Address }}:{{ .Port }};
{{ else }}server 127.0.0.1:65535; # force a 502
{{ end }}
}

server {
   listen 8080;

   location / {
      proxy_pass http://backend;
   }
}
EOF

        destination   = "local/load-balancer.conf"
        change_mode   = "signal"
        change_signal = "SIGHUP"
      }
    }
  }
}
```
This job uses Nomad's template stanza, which is powered by Consul Template, to populate the load balancer configuration for NGINX. Consult the Consul Template documentation to learn more about the syntax for querying Consul. In this case, the template asks Consul for the address and port of each service instance named demo-webapp, which the demo web application's job specification registers.

The job specification uses a static port of 8080 for the load balancer, so you can query nginx.service.consul:8080 from anywhere inside your cluster to reach the web application.

Note that although this job contains an inline template, you could instead use the template stanza in conjunction with the artifact stanza to download an input template from a remote source such as an S3 bucket.
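A sketch of that remote-template alternative, assuming a template stored at a hypothetical S3 URL (the bucket name and object path are placeholders, not part of this tutorial's environment):

```hcl
task "nginx" {
  driver = "docker"

  # Fetch the input template from a remote source. The URL below is a
  # hypothetical placeholder.
  artifact {
    source      = "https://my-bucket.s3.amazonaws.com/load-balancer.conf.tpl"
    destination = "local/"
  }

  template {
    # Render the downloaded template instead of an inline data block.
    source        = "local/load-balancer.conf.tpl"
    destination   = "local/load-balancer.conf"
    change_mode   = "signal"
    change_signal = "SIGHUP"
  }
}
```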
Now, run the NGINX job.
```shell-session
$ nomad run nginx.nomad.hcl
==> Monitoring evaluation "45da5a89"
    Evaluation triggered by job "nginx"
    Allocation "c7f8af51" created: node "983a64df", group "nginx"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "45da5a89" finished with status "complete"
```
Verify load balancer configuration
Consul Template supports blocking queries. This means your NGINX deployment (which uses the template stanza) is notified immediately when the health of one of the service endpoints changes, and re-renders a load balancer configuration file that includes only healthy service instances.
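By default, Consul Template's service function returns only instances whose health checks are passing, which is why unhealthy endpoints drop out of the upstream automatically. You can make this filter explicit using Consul Template's health-state filter syntax; a sketch of the template's upstream block with the filter spelled out:

```
upstream backend {
{{ range service "demo-webapp|passing" }}
  server {{ .Address }}:{{ .Port }};
{{ else }}server 127.0.0.1:65535; # force a 502
{{ end }}
}
```

The `|passing` suffix restricts the query to instances passing all health checks; `|any` would include instances in any state.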
You can use the nomad alloc fs command on your NGINX allocation to read the rendered load balancer configuration file.
First, obtain the allocation ID of your NGINX deployment (the output below is abbreviated). Keep in mind that allocation IDs are environment specific, so yours will differ:
```shell-session
$ nomad status nginx
ID            = nginx
Name          = nginx
...

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
nginx       0       0         1        0       0         0

Allocations
ID        Node ID   Task Group  Version  Desired  Status   Created     Modified
76692834  f5fdf017  nginx       0        run      running  17m40s ago  17m25s ago
```
Next, use the alloc fs command to read the load balancer configuration:
```shell-session
$ nomad alloc fs 766 nginx/local/load-balancer.conf
upstream backend {
  server 172.31.48.118:21354;
  server 172.31.52.52:25958;
  server 172.31.52.7:29728;
}

server {
   listen 8080;

   location / {
      proxy_pass http://backend;
   }
}
```
At this point, you can change the count of your demo-webapp job and repeat the previous command to verify that the load balancer configuration changes dynamically.
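For example, edit webapp.nomad.hcl to raise the group's count and resubmit the job with nomad run; the new allocations register in Consul and appear in the rendered upstream block. A sketch of the change:

```hcl
  group "demo" {
    # Raising count from 3 to 5 starts two more allocations. Consul
    # Template sees the new service instances and re-renders the NGINX
    # configuration, which is reloaded via the SIGHUP change_signal.
    count = 5

    # ... rest of the group stanza unchanged ...
  }
```

Alternatively, recent Nomad versions support `nomad job scale demo-webapp 5`, which adjusts the count without editing the job file.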
Make a request to the load balancer
If you query the NGINX load balancer, you should be able to see a response similar to the one shown below (this command should be run from a node inside your cluster):
```shell-session
$ curl nginx.service.consul:8080
Welcome! You are on node 172.31.48.118:21354
```
Note that your request was forwarded to one of the deployed instances of the demo web application, which are spread across three Nomad clients. The output shows the address and port of the instance that served the request. If you repeat your requests, the address changes based on which backend web server instance received the request.
Note
If you would like to access NGINX from outside your cluster, you can set up a load balancer in your environment that maps to port 8080 on your clients (or whichever port you have configured for NGINX to listen on). You can then send your requests directly to your external load balancer.