Manage Kubernetes resources via Terraform
Kubernetes (K8s) is an open-source workload scheduler focused on containerized applications. You can use the Terraform Kubernetes provider to interact with resources supported by Kubernetes.
In this tutorial, you will learn how to interact with Kubernetes using Terraform by scheduling and exposing an NGINX deployment on a Kubernetes cluster. You will also manage custom resources using Terraform.
The final Terraform configuration files used in this tutorial can be found in the Deploy NGINX on Kubernetes via Terraform GitHub repository.
Why deploy with Terraform?
While you could use kubectl or similar CLI-based tools to manage your Kubernetes resources, using Terraform has the following benefits:
Unified Workflow - If you are already provisioning Kubernetes clusters with Terraform, use the same configuration language to deploy your applications into your cluster.
Full Lifecycle Management - Terraform doesn't only create resources; it also updates and deletes tracked resources without requiring you to inspect the API to identify those resources.
Graph of Relationships - Terraform understands dependency relationships between resources. For example, if a Persistent Volume Claim claims space from a particular Persistent Volume, Terraform won't attempt to create the claim if it fails to create the volume. The sketch below shows this relationship in configuration.
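To make the relationship concrete, here is a minimal sketch of that example (the resource names and the host-path volume source are hypothetical, chosen only for illustration). The reference in volume_name is what lets Terraform order the two operations:

```hcl
resource "kubernetes_persistent_volume" "example" {
  metadata {
    name = "example-pv"
  }
  spec {
    capacity = {
      storage = "1Gi"
    }
    access_modes = ["ReadWriteOnce"]
    persistent_volume_source {
      host_path {
        path = "/tmp/example" # illustrative volume source only
      }
    }
  }
}

resource "kubernetes_persistent_volume_claim" "example" {
  metadata {
    name = "example-pvc"
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "1Gi"
      }
    }
    # Referencing the volume by name creates an implicit dependency, so
    # Terraform only attempts the claim after the volume is created.
    volume_name = kubernetes_persistent_volume.example.metadata[0].name
  }
}
```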
Prerequisites
The tutorial assumes some basic familiarity with Kubernetes and kubectl.
It also assumes that you are familiar with the usual Terraform plan/apply workflow. If you're new to Terraform itself, refer first to the Getting Started tutorial.
For this tutorial, you will need an existing Kubernetes cluster. If you don't have a Kubernetes cluster, you can use kind to provision a local Kubernetes cluster or provision one on a cloud provider.
To provision a GKE Kubernetes cluster on Google Cloud, refer to the Provision a GKE Cluster tutorial.
Configure the provider
Before you can schedule any Kubernetes services using Terraform, you need to configure the Terraform Kubernetes provider.
There are many ways to configure the Kubernetes provider. We recommend them in the following order (most recommended first, least recommended last):
- Use cloud-specific auth plugins (for example, eks get-token, az get-token, gcloud config)
- Use an OAuth 2.0 token
- Use TLS certificate credentials
- Use a kubeconfig file by setting both config_path and config_context
- Use username and password (HTTP Basic Authorization)
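For example, a kubeconfig-based provider configuration can be as small as the following sketch. The context name assumes a kind cluster named terraform-learn, like the one referenced at the end of this tutorial; adjust both values to match your environment.

```hcl
provider "kubernetes" {
  # Both attributes should be set when authenticating via kubeconfig.
  config_path    = "~/.kube/config"
  config_context = "kind-terraform-learn" # assumed kind cluster name
}
```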
Follow the instructions in the kind or cloud provider tabs to configure the provider to target a specific Kubernetes cluster. The cloud provider tabs will configure the Kubernetes provider using cloud-specific auth tokens.
Create a directory named learn-terraform-deploy-nginx-kubernetes.
$ mkdir learn-terraform-deploy-nginx-kubernetes
Then, navigate into it.
$ cd learn-terraform-deploy-nginx-kubernetes
Note
This directory is only used for managing Kubernetes cluster resources with Terraform. By keeping the Terraform configuration for provisioning a Kubernetes cluster separate from the configuration for managing Kubernetes resources, changes in one repository do not affect the other. In addition, the modularity makes the configuration more readable and enables you to scope different permissions to each workspace.
Create a new file named kubernetes.tf and add the following sample configuration to it. You can also find this configuration on the gke branch of the Deploy NGINX on Kubernetes repository.
kubernetes.tf
```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.52.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.1"
    }
  }
}

data "terraform_remote_state" "gke" {
  backend = "local"

  config = {
    path = "../learn-terraform-provision-gke-cluster/terraform.tfstate"
  }
}

# Retrieve GKE cluster information
provider "google" {
  project = data.terraform_remote_state.gke.outputs.project_id
  region  = data.terraform_remote_state.gke.outputs.region
}

# Configure kubernetes provider with Oauth2 access token.
# https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/client_config
# This fetches a new token, which will expire in 1 hour.
data "google_client_config" "default" {}

data "google_container_cluster" "my_cluster" {
  name     = data.terraform_remote_state.gke.outputs.kubernetes_cluster_name
  location = data.terraform_remote_state.gke.outputs.region
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.gke.outputs.kubernetes_cluster_host
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
}
```
Notice this configuration uses terraform_remote_state to retrieve outputs from your GKE cluster's workspace. If you followed the previous tutorial to provision your GKE cluster, this configuration targets your Terraform resources. If not, change the values so they point to your GKE Terraform resources.
Tip
We recommend using provider-specific data sources when convenient. terraform_remote_state is more flexible, but requires access to the whole Terraform state.
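As a hedged sketch of the provider-specific approach, you could look the cluster up directly with the google_container_cluster data source instead of reading another workspace's state. The cluster name and location below are placeholders you would replace with your own values:

```hcl
# Alternative to the terraform_remote_state lookup used above.
data "google_container_cluster" "my_cluster" {
  name     = "my-gke-cluster" # placeholder
  location = "us-central1"    # placeholder
}

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.my_cluster.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
}
```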
To learn more about provisioning and configuring your GKE provider, refer to the Provision a GKE Cluster tutorial.
After configuring the provider, run terraform init to download the specified provider versions and initialize your Terraform workspace.
$ terraform init
Schedule a deployment
Add the following to your kubernetes.tf file. This Terraform configuration will schedule an NGINX deployment with two replicas on your Kubernetes cluster, internally exposing port 80 (HTTP).
kubernetes.tf
resource "kubernetes_deployment" "nginx" { metadata { name = "scalable-nginx-example" labels = { App = "ScalableNginxExample" } } spec { replicas = 2 selector { match_labels = { App = "ScalableNginxExample" } } template { metadata { labels = { App = "ScalableNginxExample" } } spec { container { image = "nginx:1.7.8" name = "example" port { container_port = 80 } resources { limits = { cpu = "0.5" memory = "512Mi" } requests = { cpu = "250m" memory = "50Mi" } } } } } }}
You may notice the similarities between the Terraform configuration and the equivalent Kubernetes configuration YAML file.
Apply the configuration to schedule the NGINX deployment. Confirm your apply with a yes.
```
$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # kubernetes_deployment.nginx will be created
  + resource "kubernetes_deployment" "nginx" {
      ## ...
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

kubernetes_deployment.nginx: Creating...
kubernetes_deployment.nginx: Still creating... [10s elapsed]
kubernetes_deployment.nginx: Still creating... [20s elapsed]
kubernetes_deployment.nginx: Creation complete after 26s [id=default/scalable-nginx-example]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
Once the apply is complete, verify the NGINX deployment is running.
```
$ kubectl get deployments
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
scalable-nginx-example   2/2     2            2           15s
```
Schedule a service
There are multiple Kubernetes service types you can use to expose your NGINX deployment to users.
If your Kubernetes cluster is hosted locally on kind, you will expose your NGINX instance via NodePort. This exposes the service on each node's IP at a static port, allowing you to access the service from outside the cluster at <NodeIP>:<NodePort>, as shown in the sketch below.
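A hedged sketch of that NodePort service follows; you would define it instead of the LoadBalancer service shown later in this section. The node_port value of 30201 is only an example from Kubernetes' default NodePort range (30000-32767):

```hcl
resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx-example"
  }
  spec {
    selector = {
      App = kubernetes_deployment.nginx.spec[0].template[0].metadata[0].labels.App
    }
    port {
      node_port   = 30201 # example port in the 30000-32767 range
      port        = 80
      target_port = 80
    }
    type = "NodePort"
  }
}
```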
If your Kubernetes cluster is hosted on a cloud provider, you will expose your NGINX instance via LoadBalancer. This exposes the service externally using the cloud provider's load balancer.
Notice how the Kubernetes Service resource block dynamically assigns the selector to the Deployment's label. This avoids common bugs due to mismatched service label selectors.
Add the following configuration to your kubernetes.tf file. This creates a LoadBalancer, which routes traffic from the external load balancer to pods with the matching selector.
kubernetes.tf
resource "kubernetes_service" "nginx" { metadata { name = "nginx-example" } spec { selector = { App = kubernetes_deployment.nginx.spec.0.template.0.metadata[0].labels.App } port { port = 80 target_port = 80 } type = "LoadBalancer" }}
Next, create an output which will display the IP address you can use to access the service. Hostname-based (AWS) and IP-based (Azure, Google Cloud) load balancers reference different values.
Add the following configuration to your kubernetes.tf file. This will set lb_ip to your Google Cloud ingress' IP address.
kubernetes.tf
output "lb_ip" { value = kubernetes_service.nginx.status.0.load_balancer.0.ingress.0.ip}
Apply the configuration to schedule the LoadBalancer service. Confirm your apply with a yes.
```
$ terraform apply
kubernetes_deployment.nginx: Refreshing state... [id=default/scalable-nginx-example]

## ...

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

kubernetes_service.nginx: Creating...
kubernetes_service.nginx: Creation complete after 0s [id=default/nginx-example]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

lb_ip = ...
```
Once the apply is complete, verify the NGINX service is running.
$ kubectl get services
You can access the NGINX instance by navigating to the lb_ip output.
Scale the deployment
You can scale your deployment by increasing the replicas field in your configuration. Change the number of replicas in your Kubernetes deployment from 2 to 4.
kubernetes.tf
resource "kubernetes_deployment" "nginx" { ## ... spec { replicas = 4 ## ... } ## ...}
Apply the change to scale your deployment. Confirm your apply with a yes.
```
$ terraform apply
kubernetes_deployment.nginx: Refreshing state... [id=default/scalable-nginx-example]
kubernetes_service.nginx: Refreshing state... [id=default/nginx-example]

## ...

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

kubernetes_deployment.nginx: Modifying... [id=default/scalable-nginx-example]
kubernetes_deployment.nginx: Modifications complete after 0s [id=default/scalable-nginx-example]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
```
Once the apply is complete, verify the NGINX deployment has four replicas.
```
$ kubectl get deployments
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
scalable-nginx-example   4/4     4            4           4m48s
```
Manage custom resources
In addition to built-in resources and data sources, the Terraform Kubernetes provider also includes a kubernetes_manifest resource that lets you manage custom resource definitions (CRDs), custom resources, or any resource that is not built into the Terraform provider.
You will use Terraform to apply a CRD and then manage custom resources. You have to do this in two steps:
- Apply the required CRD to the cluster
- Apply the Custom Resources to the cluster
You need two apply steps because at plan time Terraform queries the Kubernetes API to verify the schema for the kind of object specified in the manifest field. If Terraform doesn't find the CRD for the resource defined in the manifest, the plan will return an error.
Note
To make this tutorial faster, we have included the CRD in the same workspace as the Kubernetes resources it manages. In production, create a new workspace for the CRD.
Create a custom resource definition
Create a new file named crontab_crd.tf and paste in the following configuration for a CRD that extends Kubernetes to store cron data as a resource called CronTab.
crontab_crd.tf
resource "kubernetes_manifest" "crontab_crd" { manifest = { "apiVersion" = "apiextensions.k8s.io/v1" "kind" = "CustomResourceDefinition" "metadata" = { "name" = "crontabs.stable.example.com" } "spec" = { "group" = "stable.example.com" "names" = { "kind" = "CronTab" "plural" = "crontabs" "shortNames" = [ "ct", ] "singular" = "crontab" } "scope" = "Namespaced" "versions" = [ { "name" = "v1" "schema" = { "openAPIV3Schema" = { "properties" = { "spec" = { "properties" = { "cronSpec" = { "type" = "string" } "image" = { "type" = "string" } } "type" = "object" } } "type" = "object" } } "served" = true "storage" = true }, ] } }}
The resource has two configurable fields: cronSpec and image. Apply the configuration to create the CRD. Confirm your apply with yes.
```
$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # kubernetes_manifest.crontab_crd will be created
  + resource "kubernetes_manifest" "crontab_crd" {
      + manifest = {
          # ...
        }
      + object   = {
          # ...
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

kubernetes_manifest.crontab_crd: Creating...
kubernetes_manifest.crontab_crd: Creation complete after 0s

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
Note that in the plan, Terraform created a resource with two attributes: manifest and object.
- The manifest attribute is your desired configuration, and object is the end state returned by the Kubernetes API server after Terraform created the resource.
- The object attribute contains many more fields than you specified in manifest because Terraform generated a schema containing all of the possible resource attributes that the Kubernetes API server could add. When referencing the kubernetes_manifest resource from outputs or other resources, always use the object attribute, as in the sketch below.
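For example, a sketch of an output that reads from the live object (the output name is hypothetical; metadata.uid is populated by the API server once the object exists):

```hcl
output "crontab_crd_uid" {
  # Read from object (live state), not manifest (desired state).
  value = kubernetes_manifest.crontab_crd.object.metadata.uid
}
```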
Confirm that Terraform created the CRD using kubectl.
```
$ kubectl get crds crontabs.stable.example.com
NAME                          CREATED AT
crontabs.stable.example.com   2022-04-11T15:53:41Z
```
The crontabs resource definition now exists in Kubernetes, but you have not used it to define any Kubernetes resources yet. Check for resources of the new type with kubectl, which would return error: the server doesn't have a resource type "crontab" if the CRD didn't exist.
```
$ kubectl get crontabs
No resources found in default namespace.
```
Create a custom resource
Now, create a new file named my_new_crontab.tf and paste in the following configuration, which creates a custom resource based on your newly created CronTab CRD.
my_new_crontab.tf
resource "kubernetes_manifest" "my_new_crontab" { manifest = { "apiVersion" = "stable.example.com/v1" "kind" = "CronTab" "metadata" = { "name" = "my-new-cron-object" "namespace" = "default" } "spec" = { "cronSpec" = "* * * * */5" "image" = "my-awesome-cron-image" } }}
Apply the configuration to create the custom resource. Confirm the apply with yes.
```
$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # kubernetes_manifest.my_new_crontab will be created
  + resource "kubernetes_manifest" "my_new_crontab" {
      + manifest = {
          + apiVersion = "stable.example.com/v1"
          + kind       = "CronTab"
          + metadata   = {
              + name      = "my-new-cron-object"
              + namespace = "default"
            }
          + spec       = {
              + cronSpec = "* * * * */5"
              + image    = "my-awesome-cron-image"
            }
        }
      + object   = {
          + apiVersion = "stable.example.com/v1"
          + kind       = "CronTab"
          + metadata   = {
              # ...
            }
          + spec       = {
              + cronSpec = "* * * * */5"
              + image    = "my-awesome-cron-image"
            }
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

kubernetes_manifest.my_new_crontab: Creating...
kubernetes_manifest.my_new_crontab: Creation complete after 0s

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
Confirm that Terraform created the custom resource.
```
$ kubectl get crontabs
NAME                 AGE
my-new-cron-object   5m37s
```
View the new custom resource.
```
$ kubectl describe crontab my-new-cron-object
Name:         my-new-cron-object
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  stable.example.com/v1
Kind:         CronTab
Metadata:
  Creation Timestamp:  2022-04-11T16:07:40Z
  Generation:          1
  Managed Fields:
    API Version:  stable.example.com/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        f:cronSpec:
        f:image:
    Manager:         Terraform
    Operation:       Apply
    Time:            2022-04-11T16:07:40Z
  Resource Version:  2432053
  UID:               6dd859fc-8665-44ae-91f7-959cff8712b1
Spec:
  Cron Spec:  * * * * */5
  Image:      my-awesome-cron-image
Events:       <none>
```
Clean up your workspace
Destroy any resources you created once you're done with this tutorial.
Running terraform destroy will de-provision the NGINX deployment and service you created in this tutorial. Confirm your destroy with a yes.
```
$ terraform destroy
kubernetes_deployment.nginx: Refreshing state... [id=default/scalable-nginx-example]
kubernetes_service.nginx: Refreshing state... [id=default/nginx-example]

## ...

Plan: 0 to add, 0 to change, 2 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

kubernetes_service.nginx: Destroying... [id=default/nginx-example]
kubernetes_service.nginx: Destruction complete after 0s
kubernetes_deployment.nginx: Destroying... [id=default/scalable-nginx-example]
kubernetes_deployment.nginx: Destruction complete after 0s

Destroy complete! Resources: 2 destroyed.
```
If you are using a kind Kubernetes cluster, run the following command to delete it.
$ kind delete cluster --name terraform-learn
If you followed a previous tutorial to set up a Kubernetes cluster, refer to the "Cleaning up your workspace" section of the tutorial to remove those resources as well. Otherwise, your Kubernetes cluster will remain running.
Next steps
In this tutorial, you configured the Terraform Kubernetes provider and used it to schedule, expose, and scale an NGINX instance. You also used Terraform to create a custom resource definition and manage a custom resource.
To discover additional capabilities, visit the Terraform Kubernetes Provider Registry Documentation Page.
For more in-depth Kubernetes examples, complete the Deploy Consul and Vault on a Kubernetes Cluster using Run Triggers (runs on Google Cloud Platform) and Manage Kubernetes Custom Resources tutorials.