Make abstract job specs with Levant
Nomad Pack is a new package manager and templating tool that can be used instead of Levant. Nomad Pack is currently in Tech Preview and may change during development.
In this tutorial, you will iteratively modify the Zookeeper template created in the DRY Nomad Job Specs with Levant tutorial to create an abstract job that you can deploy with Levant. When combined with sensible defaults and the ability to override these default values, you can create flexible deployments of Zookeeper clusters without modifying the base template.
Challenge
Every Nomad job running in a cluster requires its own job configuration; typically, these are maintained as one file per job. While this provides archival-quality job specifications suitable for source control, it precludes certain patterns that operators might desire for job specification: composition, granularity, and abstraction.
Solution
Levant, as a Nomad-aware template engine, allows you to create abstract job specifications which, when combined with user-supplied values, can be rendered and deployed as a composed job specification to your Nomad cluster.
This tutorial uses the DRY Zookeeper job template created in the DRY Nomad Job Specs with Levant tutorial to:
Export key configuration elements to a configuration file, so that they can be edited independently of the job specification
Allow for variable node counts, so that you can create highly available clusters
Allow for instance specific values, so that you can support more than one cluster
You will enhance the previous Zookeeper template to create multiple Zookeeper clusters of different node count and configuration to support each of your internal use cases with one template.
Prerequisites
You need a Nomad cluster with:
Consul integrated
Docker installed and available as a task driver on your clients
One or more distinct client instances. This tutorial describes a case using three clients.
A Nomad host volume named zk«ID» configured on each client you would like to deploy ZK to, where ID is the index of the ZK node that should run there. These host volumes can be collocated on a client for the purposes of this tutorial.
You should be:
Familiar with Go's text/template syntax. You can learn more about it in the Learn Go Template Syntax tutorial.
Comfortable in your shell of choice, specifically adding executables to the path, editing text files, and managing directories.
Download Levant
If you haven't already, install Levant using the instructions found in the README of its GitHub repository. Use one of the methods that provides you with a binary, rather than the Docker image. Verify that you have installed it to your executable path by running levant version.
$ levant version
Levant v0.3.0-dev (d7d77077+CHANGES)
Get started
If you just completed the DRY Nomad Job Specs with Levant tutorial, you have the starting Zookeeper job specification files. If not, you need to create a working directory, change into it, and create these three files. Click each section to reveal each file's contents.
Create a text file named zookeeper.nomad with the following contents.
job "zookeeper" {
  datacenters = ["dc1"]
  type        = "service"

  update {
    max_parallel = 1
  }

  group "zk1" {
    volume "zk" {
      type      = "host"
      read_only = false
      source    = "zk1"
    }

    count = 1

    restart {
      attempts = 10
      interval = "5m"
      delay    = "25s"
      mode     = "delay"
    }

    [[- $Protocols := list "client" "peer" "election" "admin" ]]
    network {
      [[- range $I, $Protocol := $Protocols -]]
      [[- $To := -1]]
      [[- if eq $Protocol "admin" -]]
      [[- $To = 8080 -]]
      [[- end ]]
      [[- if ne $I 0 -]][[- println "" -]][[- end ]]
      port "[[$Protocol]]" {
        to = [[$To]]
      }
      [[- end ]]
    }

    [[- range $I, $Protocol := $Protocols -]]
    [[- $Tags := list $Protocol -]]
    [[ if eq $Protocol "client" ]][[- $Tags = append $Tags "zk1" -]][[- end -]]
    [[ if eq $Protocol "admin" ]][[- $Tags = list "zk1-admin" -]][[- end -]]
    [[- println "" ]]
    service {
      tags = [[ $Tags | toJson ]]
      name = "zookeeper"
      port = "[[ $Protocol ]]"

      meta {
        ZK_ID = "1"
      }

      address_mode = "host"
    }
    [[- end ]]

    task "zookeeper" {
      driver = "docker"

      template {
        destination = "config/zoo.cfg"
        data        = <<EOH
[[ fileContents "zoo.cfg" ]]
EOH
      }

      template {
        data        = <<EOF
[[ fileContents "template.go.tmpl" ]]
EOF
        destination = "config/zoo.cfg.dynamic"
        change_mode = "noop"
      }

      env {
        ZOO_MY_ID = 1
      }

      volume_mount {
        volume      = "zk"
        destination = "/data"
        read_only   = false
      }

      config {
        image = "zookeeper:3.6.1"
        ports = ["client", "peer", "election", "admin"]

        volumes = [
          "config:/config",
          "config/zoo.cfg:/conf/zoo.cfg"
        ]
      }

      resources {
        cpu    = 300
        memory = 256
      }
    }
  }
}
Create a text file named zoo.cfg with the following contents.
tickTime=2000
initLimit=30
syncLimit=2
reconfigEnabled=true
dynamicConfigFile=/config/zoo.cfg.dynamic
dataDir=/data
standaloneEnabled=false
quorumListenOnAllIPs=true
Create a text file named template.go.tmpl with the following contents.
{{- range $tag, $services := service "zookeeper" | byTag -}}
  {{- range $services -}}
    {{- $ID := split "-" .ID -}}
    {{- $ALLOC := join "-" (slice $ID 0 (subtract 1 (len $ID))) -}}
    {{- if .ServiceMeta.ZK_ID -}}
      {{- scratch.MapSet "allocs" $ALLOC $ALLOC -}}
      {{- scratch.MapSet "tags" $tag $tag -}}
      {{- scratch.MapSet $ALLOC "ZK_ID" .ServiceMeta.ZK_ID -}}
      {{- scratch.MapSet $ALLOC (printf "%s_%s" $tag "address") .Address -}}
      {{- scratch.MapSet $ALLOC (printf "%s_%s" $tag "port") .Port -}}
    {{- end -}}
  {{- end -}}
{{- end -}}
{{- range $ai, $a := scratch.MapValues "allocs" -}}
  {{- $alloc := scratch.Get $a -}}
  {{- with $alloc -}}
server.{{ .ZK_ID }} = {{ .peer_address }}:{{ .peer_port }}:{{ .election_port }};{{ .client_port }}{{ println "" }}
  {{- end -}}
{{- end -}}
Render the job to test
Use the levant render command to validate that everything is working properly.
$ levant render
If there aren't any errors, Levant renders the template to the screen with the zoo.cfg and dynamic template content embedded in the rendered output. If there are any issues, Levant logs an error to the screen.
Create configurable elements
Levant templates can reference values provided by configuration files, command-line flags, and environment variables. This allows you to create abstract job templates that, when combined with additional configuration, create complete Nomad job specs.
When creating abstract jobs, consider the elements that need to be overridden by your users and the elements that they should not be able to override. Take time to consider the names of the variables and the structure of the variable file itself. For example, is YAML more suitable to your use case than JSON?
To make this Zookeeper job an abstract job, there are several elements you need to replace with variables. The next steps guide you through this process.
Create defaults file
For this tutorial, use this default configuration. Create a file called defaults.json with the following content.
{
  "zookeeper": {
    "job_name": "zookeeper",
    "service": {
      "name": "zookeeper"
    },
    "node_count": 3,
    "resources": {
      "cpu": 300,
      "memory": 256
    },
    "volume": {
      "source_prefix": "zk"
    },
    "datacenters": ["dc1"],
    "image": "zookeeper:3.6.1"
  }
}
You might recognize several values from the original job specification. Now, add an action to load the default values from the defaults.json file.
Paste this at the top of the zookeeper.nomad file.
[[- /* Template defaults as json */ -]]
[[- $Values := (fileContents "defaults.json" | parseJSON ) -]]
You can run levant render to validate that the template still renders; however, none of this new configuration is wired into the template yet.
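Because Levant templates are Go text/template with `[[ ]]` delimiters, you can sketch what the `fileContents "defaults.json" | parseJSON` pipeline does in plain Go. The `renderJobLine` helper below is an illustrative stand-in, not Levant's implementation: it parses JSON into a generic map and executes a template against the resulting values.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
	"text/template"
)

// renderJobLine sketches Levant's behavior: parse JSON defaults into a
// generic map, then execute a template (with [[ ]] delimiters) against it.
func renderJobLine(defaultsJSON string) string {
	var values map[string]interface{}
	if err := json.Unmarshal([]byte(defaultsJSON), &values); err != nil {
		panic(err)
	}

	t := template.Must(template.New("job").
		Delims("[[", "]]").
		Parse(`job "[[ .zookeeper.job_name ]]" {`))

	var out strings.Builder
	if err := t.Execute(&out, values); err != nil {
		panic(err)
	}
	return out.String()
}

func main() {
	fmt.Println(renderJobLine(`{"zookeeper": {"job_name": "zookeeper"}}`))
	// Prints: job "zookeeper" {
}
```

Note how the dotted path `.zookeeper.job_name` in the template mirrors the nesting of the JSON file, which is why the structure of defaults.json matters.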
Convert static values to variables
Switching the static job values out for dynamic ones supplied by the defaults.json file enables you to easily update the job specification programmatically by rewriting the defaults.json file with your new desired values. In the next few steps, there are specific examples for making these updates, and you also have an opportunity to do some of these on your own.
Update job_name
Change the following line:
job "zookeeper" {
to:
job "[[ $Values.zookeeper.job_name ]]" {
Update datacenters
The datacenters value is a list; using the toJson function properly formats the value and allows multiple values to be specified.
Change the line:
datacenters = [ "dc1" ]
to:
datacenters = [[ $Values.zookeeper.datacenters | toJson ]]
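A JSON array of strings happens to also be valid HCL list syntax, which is why piping the list through toJson works here. This Go sketch uses a local `toJson` helper as a stand-in for Levant's function to show the effect:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
	"text/template"
)

// renderDatacenters renders the datacenters line using a local toJson
// stand-in: marshal the Go slice to JSON, which reads as an HCL list.
func renderDatacenters(dcs []string) string {
	funcs := template.FuncMap{
		"toJson": func(v interface{}) (string, error) {
			b, err := json.Marshal(v)
			return string(b), err
		},
	}
	t := template.Must(template.New("dc").
		Delims("[[", "]]").
		Funcs(funcs).
		Parse(`datacenters = [[ .datacenters | toJson ]]`))

	var out strings.Builder
	if err := t.Execute(&out, map[string]interface{}{"datacenters": dcs}); err != nil {
		panic(err)
	}
	return out.String()
}

func main() {
	fmt.Println(renderDatacenters([]string{"dc1", "dc2"}))
	// Prints: datacenters = ["dc1","dc2"]
}
```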
Allow variable count
This job specification was designed to allow multiple Zookeepers to join and participate in a highly available configuration. The default configuration that you are using includes a node_count element that can be used with the loop function to create an iterable slice of numbers suitable for Zookeeper IDs.
This also enables you to associate specific Zookeeper IDs with the host volumes where they persist their data.
Wrap the entire group block in the following template actions.
[[- range $ID := loop 1 ( int $Values.zookeeper.node_count | add 1) ]]
## original `group` block - This is quite long, approximately 100 lines.
[[- end ]]
This now provides an ID index that you can consult for elements that should be unique per node.
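The half-open range produced by `loop 1 (node_count + 1)` can be sketched in Go. The `loop` function below is an assumption about the helper's semantics based on how the template uses it: it yields the integers from start (inclusive) to stop (exclusive), so node_count of 3 produces IDs 1, 2, and 3.

```go
package main

import "fmt"

// loop mimics the `loop start stop` helper used in the template above
// (an assumed reimplementation): it returns the integers [start, stop).
func loop(start, stop int) []int {
	ids := make([]int, 0, stop-start)
	for i := start; i < stop; i++ {
		ids = append(ids, i)
	}
	return ids
}

func main() {
	nodeCount := 3
	// `loop 1 (node_count + 1)` yields 1, 2, 3: one group per ZK node,
	// starting at 1 as Zookeeper server IDs conventionally do.
	for _, id := range loop(1, nodeCount+1) {
		fmt.Printf("group \"zk%d\" { ... }\n", id)
	}
}
```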
Switch to dynamic IDs
Now that your job specification iterates over these generated IDs, you need to update several items to take advantage of them:
- Nomad group name
- host volume source
- protocol tags
- Consul service metadata
- ZOO_MY_ID environment variable
Nomad group name
Change the line:
group "zk1" {
to:
group "zk[[ $ID ]]" {
Host volume source
Change the line:
source = "zk1"
to:
source = "zk[[ $ID ]]"
Protocol tags
Since the protocol tags are inserted into a list, you need to concatenate the tag prefix zk to the ID using the printf function. Notice that you need to use %d as your formatting placeholder because the ID is an integer.
Change the lines:
[[ if eq $Protocol "client" ]][[- $Tags = append $Tags "zk1" -]][[- end -]]
[[ if eq $Protocol "admin" ]][[- $Tags = list "zk1-admin" -]][[- end -]]
to:
[[ if eq $Protocol "client" ]][[- $Tags = append $Tags ( printf "zk%d" $ID ) -]][[- end -]]
[[ if eq $Protocol "admin" ]][[- $Tags = list ( printf "zk%d-admin" $ID ) -]][[- end -]]
Note: If you accidentally use the wrong format verb, Levant outputs an in-place indicator similar to the following:
tags = ["client","zk%!s(int64=1)"]
The error in this example indicates that the value passed was not a string (!s). Levant then outputs the actual value and its datatype (int64=1).
Consul service metadata
Change the line:
meta { ZK_ID = "1" }
to:
meta { ZK_ID = "[[ $ID ]]" }
ZOO_MY_ID
Change the line:
ZOO_MY_ID = 1
to:
ZOO_MY_ID = [[ $ID ]]
Render and test the template
Use the levant render command to validate that your updates to the job spec are working as you expect.
$ levant render
Barring any errors, the template renders to the screen with the zoo.cfg and dynamic template content embedded in the rendered output. You should also have three groups (zk1, zk2, and zk3) with the proper IDs for each one in all of the required locations.
If you have configured your three client nodes with appropriate host volumes, you can use the levant deploy command to deploy your multi-host Zookeeper cluster.
2020-12-07T12:11:43-05:00 |INFO| levant/plan: job is a new addition to the cluster
2020-12-07T12:11:43-05:00 |INFO| levant/deploy: job is not running, using template file group counts job_id=zookeeper
2020-12-07T12:11:43-05:00 |INFO| levant/deploy: triggering a deployment job_id=zookeeper
2020-12-07T12:11:44-05:00 |INFO| levant/deploy: evaluation f36f713e-0544-2771-ca1d-6cfcd2a28a5e finished successfully job_id=zookeeper
2020-12-07T12:11:44-05:00 |INFO| levant/deploy: beginning deployment watcher for job job_id=zookeeper
2020-12-07T12:11:57-05:00 |INFO| levant/deploy: deployment 246eda12-0740-0fae-ad4b-37c95458412e has completed successfully job_id=zookeeper
2020-12-07T12:11:57-05:00 |INFO| levant/deploy: job deployment successful job_id=zookeeper
Once deployed, you can experiment with your three-node Zookeeper cluster.
Stop and purge the zookeeper job
Stop and remove your Zookeeper instances from the cluster by running:
$ nomad job stop -purge zookeeper
Allow for multiple clusters
Now that your job specification can create a multi-node cluster, you can apply the same techniques to make a job template that can be used to create more than one Zookeeper cluster in your Nomad cluster.
Update the volume source prefix
In order to eventually allow more than one instance of this job to run, you need to allow each instance to specify a host volume prefix for the job to use when connecting to its own persistent data.
Change the line:
source = "zk[[ $ID ]]"
to:
source = "[[ $Values.zookeeper.volume.source_prefix ]][[$ID]]"
Note: If you want to actually deploy multiple Zookeeper clusters, you must create and configure appropriate host volumes that match the host_volume blocks created by the Levant template. This tutorial reuses the zk1, zk2, and zk3 host volumes configured for the earlier version of the job, so you can only have one cluster running at a time.
Update the service name
Each unique Zookeeper cluster needs to register a unique service name in Consul so that the nodes can discover each other without seeing other unrelated clusters.
Inside the service block, change the line:
name = "zookeeper"
to:
name = "[[ $Values.zookeeper.service.name ]]"
Allow the defaults to be overridden
For users to create their own clusters, they need to be able to override several of the configurations provided by the defaults.json file, like service name and host volume prefix. Configure the job to allow Levant variables to override the default values provided.
Change the lines:
[[- /* Template defaults as json */ -]]
[[- $Values := (fileContents "defaults.json" | parseJSON ) -]]
to:
[[- /* Template defaults as json */ -]]
[[- $Defaults := (fileContents "defaults.json" | parseJSON ) -]]
[[- /* Load variables over the defaults. */ -]]
[[- $Values := mergeOverwrite $Defaults . -]]
Hint: Note that the variable name in the line that loads defaults.json changes from $Values to $Defaults.
These new template actions parse passed-in Levant variables and use them preferentially over the values in the defaults.json file.
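The override semantics of mergeOverwrite can be sketched in Go. This is a minimal assumed reimplementation, not Levant's code: values in the override map recursively overwrite matching keys in the defaults, while keys absent from the overrides keep their default values.

```go
package main

import "fmt"

// mergeOverwrite is a minimal sketch of the merge used above: src values
// recursively overwrite matching keys in dst; missing keys keep defaults.
func mergeOverwrite(dst, src map[string]interface{}) map[string]interface{} {
	for k, v := range src {
		if sv, ok := v.(map[string]interface{}); ok {
			if dv, ok := dst[k].(map[string]interface{}); ok {
				dst[k] = mergeOverwrite(dv, sv)
				continue
			}
		}
		dst[k] = v
	}
	return dst
}

func main() {
	defaults := map[string]interface{}{
		"zookeeper": map[string]interface{}{
			"job_name":   "zookeeper",
			"node_count": 3,
		},
	}
	// Only job_name is overridden; node_count falls through from defaults.
	overrides := map[string]interface{}{
		"zookeeper": map[string]interface{}{"job_name": "custom-zk"},
	}

	zk := mergeOverwrite(defaults, overrides)["zookeeper"].(map[string]interface{})
	fmt.Println(zk["job_name"], zk["node_count"]) // custom-zk 3
}
```

This fall-through behavior is what lets a user's variable file stay small: it only needs the keys that differ from the defaults.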
Deploy with a variable file
To simulate your end-user's experience with the abstract template, create a file named custom.json with the following content.
{
  "zookeeper": {
    "job_name": "custom-zk",
    "node_count": 1,
    "image": "zookeeper:latest",
    "service": {
      "name": "zk-latest"
    }
  }
}
When run, you should expect a job specification to be rendered that creates a Nomad job named custom-zk. The custom-zk job creates a single-node Zookeeper from the latest Docker image and registers it in Consul with the name zk-latest.
Test this by running levant render with your custom.json as a variable file.
$ levant render -var-file=custom.json
Levant then renders a single-node Zookeeper job named "custom-zk" that uses the latest Zookeeper image. You can deploy it to your Nomad cluster using the levant deploy command.
$ levant deploy -var-file=custom.json
Complete job specification
[[- /* Template defaults as json */ -]]
[[- $Defaults := (fileContents "defaults.json" | parseJSON ) -]]
[[- /* Load variables over the defaults. */ -]]
[[- $Values := mergeOverwrite $Defaults . -]]
job "[[ $Values.zookeeper.job_name ]]" {
  datacenters = [[ $Values.zookeeper.datacenters | toJson ]]
  type        = "service"

  update {
    max_parallel = 1
  }

  [[- range $ID := loop 1 ( int $Values.zookeeper.node_count | add 1) ]]
  group "zk[[ $ID ]]" {
    volume "zk" {
      type      = "host"
      read_only = false
      source    = "[[ $Values.zookeeper.volume.source_prefix ]][[ $ID ]]"
    }

    count = 1

    restart {
      attempts = 10
      interval = "5m"
      delay    = "25s"
      mode     = "delay"
    }

    [[- $Protocols := list "client" "peer" "election" "admin" ]]
    network {
      [[- range $I, $Protocol := $Protocols -]]
      [[- $To := -1]]
      [[- if eq $Protocol "admin" -]]
      [[- $To = 8080 -]]
      [[- end ]]
      [[- if ne $I 0 -]][[- println "" -]][[- end ]]
      port "[[$Protocol]]" {
        to = [[$To]]
      }
      [[- end ]]
    }

    [[- range $I, $Protocol := $Protocols -]]
    [[- $Tags := list $Protocol -]]
    [[ if eq $Protocol "client" ]][[- $Tags = append $Tags ( printf "zk%d" $ID ) -]][[- end -]]
    [[ if eq $Protocol "admin" ]][[- $Tags = list ( printf "zk%d-admin" $ID ) -]][[- end -]]
    [[- println "" ]]
    service {
      tags = [[ $Tags | toJson ]]
      name = "[[ $Values.zookeeper.service.name ]]"
      port = "[[ $Protocol ]]"

      meta {
        ZK_ID = "[[ $ID ]]"
      }

      address_mode = "host"
    }
    [[- end ]]

    task "zookeeper" {
      driver = "docker"

      template {
        destination = "config/zoo.cfg"
        data        = <<EOH
[[ fileContents "zoo.cfg" ]]
EOH
      }

      template {
        data        = <<EOF
[[ fileContents "template.go.tmpl" ]]
EOF
        destination = "config/zoo.cfg.dynamic"
        change_mode = "noop"
      }

      env {
        ZOO_MY_ID = [[ $ID ]]
      }

      volume_mount {
        volume      = "zk"
        destination = "/data"
        read_only   = false
      }

      config {
        image = "[[ $Values.zookeeper.image ]]"
        ports = ["client", "peer", "election", "admin"]

        volumes = [
          "config:/config",
          "config/zoo.cfg:/conf/zoo.cfg"
        ]
      }

      resources {
        cpu    = [[ $Values.zookeeper.resources.cpu ]]
        memory = [[ $Values.zookeeper.resources.memory ]]
      }
    }
  }
  [[- end ]]
}
Learn more
This tutorial demonstrates techniques that you can use to build abstract Nomad jobs using Levant as a templating and deployment engine. For more about Levant, consult the project documentation.