Book Review: The Terraform Book

Recently, I finished reading The Terraform Book by James Turnbull.

When I started reading this book, I had very little (but at least some) experience with Terraform. And so, I was looking for a book that would provide a good foundation.

I found chapter 2 (“Installing Terraform”) and chapter 3 (“Building Our First Application”) particularly valuable, because they built on what little I already knew and gave me not only a good foundation to start from, but also a real-world example of building an application’s infrastructure through code.

The only thing I wish this book (or similar resources like it) had is examples specific to Microsoft Azure (since that’s the environment I’m working in). Most Terraform tutorials, books, videos, and courses seem to focus on Amazon Web Services (AWS).

I’ve decided to share my highlights from this book, in case the points I found noteworthy are of some benefit to someone else. So, here are my highlights, by chapter. Note that not every chapter has highlights (depending on its content and the main focus of my work).

Chapter 1: An Introduction to Terraform

  • None (just an introduction)

Chapter 2: Installing Terraform

  • When Terraform runs inside a directory it will load any Terraform configuration files. Any non-configuration files are ignored and Terraform will not recurse into any sub-directories. Each file is loaded in alphabetical order, and the contents of each configuration file are appended into one configuration.
  • The configuration loading model allows us to treat individual directories, like base, as standalone configurations or environments.
  • Each directory could represent an environment, stack, or application in our organization.
  • Terraform configuration files are normal text files. They are suffixed with either .tf or .tf.json. Files suffixed with .tf are in Terraform’s native file format, and .tf.json files are JSON-formatted.
  • The two configuration file formats are for two different types of audiences:
    • Humans
    • Machines
  • The .tf format, also called the HashiCorp Configuration Language or HCL, is broadly human-readable, allows inline comments, and is generally recommended if humans are crafting your configuration.
  • TIP You can specify a mix of the Terraform file formats in a directory.
  • Since Terraform 0.10, providers are no longer shipped with Terraform itself. To download the providers used in your environment, you need to run the terraform init command, which installs any required providers.
  • TIP Instead of hard-coding the AWS credentials you can also use environment variables or AWS shared credentials.
  • The name of the resource is specified next. This name is defined by you—here we’ve named this resource base. The name of the resource should generally describe what the resource is or does.
  • NOTE Your configuration is defined as the scope of what configuration Terraform loads when it runs. You can have a resource with a duplicate name in another configuration—for example, another directory of Terraform files.
  • NOTE Remember Terraform commands load all the Terraform configuration in the current directory.
  • There is a series of similar indicators in the plan output:
    • +: A resource that will be added.
    • -: A resource that will be destroyed.
    • -/+: A resource that will be destroyed and then added again.
    • ~: A resource that will be changed.
  • A computed value is one whose value Terraform does not know yet; the value of the configuration item will only be known when the resource is actually created.
  • TIP Since Terraform 0.11 the terraform apply command shows the proposed plan and prompts interactively for approval. You can override this behavior with the -auto-approve flag.
  • After creating our resource, Terraform has saved the current state of our infrastructure into a file called terraform.tfstate in our base directory. This is called a state file. The state file contains a map of resources and their data to resource IDs.
  • The state is the canonical record of what Terraform is managing for you. This file is important because it is canonical. If you delete the file Terraform will not know what resources you are managing, and it will attempt to apply all configuration from scratch. This is bad. You should ensure you preserve this file.
  • Some Terraform documentation recommends putting this file into version control. We do not. The state file contains everything in your configuration, including any secrets you might have defined in them. We recommend instead adding this file to your .gitignore configuration.
  • As this file is the source of truth for the infrastructure being managed, it’s critical to only use Terraform to manage that infrastructure. If you make a change to your infrastructure manually, or if you use another tool, it can be easy for this state to get out of sync with reality. You can then lose track of the state of your infrastructure and its configuration, or have Terraform reset your infrastructure back to a potentially non-functioning configuration when it runs.
  • TIP Have existing infrastructure? Terraform can import it. You can read about how in the Terraform import documentation. In some cases, it’s also possible to recreate an accidentally deleted state file by importing resources.
  • Attribute references are variables and are very useful. They allow us to use values from one resource in another resource (see the first sketch after this list).
  • Two useful commands we might run before planning our configuration are terraform validate and terraform fmt. The validate command checks the syntax and validates your Terraform configuration files and returns any errors. The fmt command neatly formats your configuration files.
  • You could even specify both as a pre-commit hook in your Git repository. There’s an example of a hook like this in this gist.
  • Every Terraform plan or apply follows the same process:
    • We query the current state of the resources, if they exist.
    • We compare that state against any proposed changes to be made, building the graph of resources and their relationships. As a result of the graph, Terraform will only propose the set of required changes.
    • If they are not the same, Terraform either shows the proposed change (in the plan phase) or makes the change (in the apply phase).
  • If any other changes had been made to our infrastructure, outside of Terraform, then Terraform would also show us what would be needed to bring the infrastructure back in line with our Terraform configuration.
  • Where possible, Terraform will aim to perform the smallest incremental change rather than rebuilding every resource. In some cases, however, changing a resource requires recreating it. Since this is a destructive action, you should always carefully read the proposed actions in a terraform apply before saying yes, or run terraform plan first to understand the impact of executing the change.
  • Terraform has an approach for trying to limit the risk of large-scale destructive changes to our environment while allowing us to make incremental changes. To do this, Terraform captures the proposed changes by outputting the plan it intends to run to a file.
  • Terraform calls this a plan output. We capture the plan by specifying the -out flag on a terraform plan command. This will capture the proposed changes in a file we specify. The plan output means we can make small, incremental changes to our infrastructure (see the plan-output sketch after this list).
  • So this looks like we made the same change as if we’d run terraform apply alone. Why is this useful? First, we don’t have to apply this change immediately. The plan output can be kept and stored as a potential incremental change. 
  • This is also the way we’d typically run automated Terraform actions, for example in a script or continuous integration tool. This avoids terraform apply’s interactive mode because obviously in most scripts you can’t answer yes.
  • WARNING The generated plan file will contain all your variable values, potentially including any credentials or secrets. It is not encrypted or otherwise protected. Handle this file with appropriate caution! 
  • To help with the systematic and incremental rollout of resources, Terraform has another useful flag: -target. You can use the -target flag on both the terraform plan and terraform apply commands. It allows you to target a resource—or more if you specify multiple -target flags—to be managed in an execution plan. 
  • If our execution plan had failed, then Terraform would not roll back the resources. It’ll instead mark the failed resource as tainted. The tainted state is Terraform’s way of saying, “This resource may not be right.” Why tainting instead of rolling back? Terraform always holds to the execution plan: it was asked to create a resource, not delete one. If you run the execution plan again, Terraform will attempt to destroy and recreate any tainted resources.
  • Terraform configurations do not depend on the order in which they are defined. 
  • Terraform is a declarative system; you specify the proposed state of your resources rather than the steps needed to create those resources. When you specify resources, Terraform builds a dependency graph of your configuration. The dependency graph represents the relationships between the resources in your configuration. When you plan or apply that configuration, Terraform walks that graph, works out which resources are related, and hence knows the order in which to apply them. 
  • TIP As a result of the dependency graph, Terraform tries to perform as many operations in parallel as it can. It can do this because it knows what it has to sequence and what it can create stand-alone. 
  • NOTE Generally you always want the graph to dictate resource ordering. But sometimes we do need to force order in our resources. If we need to do this we can use a special attribute called depends_on (see the depends_on sketch after this list).
  • The graph command outputs our dependency graph in the DOT graph format. That output can be piped to a file so we can visualize the graph. We can then view this graph in an application like Graphviz. If you don’t want to install Graphviz then you can use the online WebGraphviz tool. 
  • The terraform destroy command, without any options, destroys everything! 
  • WARNING If the above paragraph isn’t already sending warning signals… be very, very careful with terraform destroy. You can easily destroy your entire infrastructure. 
  • If you only want to destroy a specific resource then you can use the -target flag. 
  • The -target flag will also destroy any dependencies of the resource specified. 
  • You can also plan the terraform destroy process by passing the -destroy flag to the terraform plan command and saving a plan file.
  • TIP Terraform also has the concept of tainting and untainting resources. Tainting resources marks a single resource to be destroyed and recreated on the next apply. It doesn’t change the resource but rather the current state of the resource. Untainting reverses the marking.
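
To ground a few of these highlights, here is a minimal sketch (in Terraform 0.11-era HCL) of a resource definition and an attribute reference. The provider, names, and values are my own illustrations, not the book’s:

    # base.tf -- names and values here are illustrative
    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_vpc" "base" {
      cidr_block = "10.0.0.0/16"
    }

    resource "aws_subnet" "public" {
      # An attribute reference: pull the VPC's ID into another resource.
      vpc_id     = "${aws_vpc.base.id}"
      cidr_block = "10.0.1.0/24"
    }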
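
The plan-output workflow described above looks roughly like this (the plan file name is my own):

    # Capture the proposed changes in a file, review them, then apply exactly that plan.
    terraform plan -out base-plan
    terraform apply base-plan

    # Or constrain a run to a specific resource (and its dependencies) with -target:
    terraform plan -target=aws_subnet.public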
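
And a sketch of forcing explicit ordering with depends_on (again with illustrative resources; in the HCL of this era, depends_on takes a list of resource names as strings):

    resource "aws_s3_bucket" "assets" {
      bucket = "example-assets-bucket"   # illustrative name
    }

    resource "aws_instance" "web" {
      ami           = "ami-0abcdef1234567890"   # illustrative AMI ID
      instance_type = "t2.micro"

      # Force an ordering the dependency graph can't infer from references.
      depends_on = ["aws_s3_bucket.assets"]
    }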

Chapter 3: Building Our First Application

  • DRY is an abbreviation for “Don’t Repeat Yourself,” a software principle that recommends reducing the repetition of information. 
  • We start by creating a file, called variables.tf, to hold our variables. We create the file in the ~/terraform/base directory.
  • TIP The file can be called anything. We’ve just named it variables.tf for convenience and identification. Remember all files that end in .tf will be loaded by Terraform.
  • Terraform variables are created with a variable block. They have a name and an optional type, default, and description (see the variables sketch after this list).
  • If you omit the type attribute then Terraform assumes your variable is a string, unless the default is in the format of another variable type. 
  • TIP We recommend you always add variable descriptions. You never know who’ll be using your code, and it’ll make their (and your) life a lot easier if every variable has a clear description. Comments are fun too. 
  • Each variable is identified as a variable by the var. prefix.
  • TIP Since Terraform 0.8 there is a command called terraform console. The console is a Terraform REPL that allows you to work with interpolations and other logic. It’s a good way to explore working with Terraform syntax. You can read about it in the console command documentation.
  • NOTE We don’t need to specify the variable type if the variable’s default is in the form of a map. In that case Terraform will automatically assume you’ve defined a map. 
  • Terraform has a set of built-in functions to make it easier to work with variables and values.
  • NOTE You can find a full list of functions in the Terraform documentation.
  • NOTE You can also use the element function to retrieve a value from a list.
  • Variables with and without defaults behave differently. A defined, but empty, variable is a required value for an execution plan.
  • Terraform has a variety of methods by which you can populate variables. Those ways, in order of descending resolution, are:
    • Loading variables from command line flags.
    • Loading variables from a file.
    • Loading variables from environment variables.
    • Variable defaults.
  • When Terraform runs it will look for a file called terraform.tfvars. We can populate this file with variable values that will be loaded when Terraform runs. 
  • You can also name the terraform.tfvars file something else—for example, we could have a variable file named base.tfvars. If you do specify a different file name, you will need to tell Terraform where the file is with the -var-file command line flag (see the tfvars sketch after this list).
  • TIP You can use more than one -var-file flag to specify more than one file. If you specify more than one file, the files are evaluated from first to last, in the order specified on the command line. If a variable value is specified multiple times, the last value defined is used.
  • TIP Variable files and environment variables are a good way of protecting passwords and secrets. This avoids storing them in our configuration files, where they might end up in version control. A better way is obviously some sort of secrets store. Since Terraform 0.8 there is support for integration with Vault for secrets management.
  • Variable defaults are specified with the default attribute. If nothing in the above list of variable population methods resolves the variable then Terraform will use the default.
  • TIP Terraform also has an “override” file construct. When Terraform loads configuration files it appends them. With an override the files are instead merged. This allows you to override resources and variables. 
  • Terraform configurations in individual directories are isolated. Our new configuration in the web directory will, by default, not be able to refer to, or indeed know about, any of the configuration in the base directory. 
  • Modules are defined with the module block. Modules are a way of constructing reusable bundles of resources. They allow you to organize collections of Terraform code that you can share across configurations. 
  • You can configure inputs and outputs for modules: an API interface to your modules. This allows you to customize them for specific requirements, while your code remains as DRY and reusable as possible.
  • TIP HashiCorp makes available a collection of verified and community modules in the Terraform Module Registry. These include modules for a large number of purposes and are a good place to start if you need a module. You can learn more about the Terraform Module Registry in the documentation.
  • To Terraform, every directory containing configuration is automatically a module. Using modules just means referencing that configuration explicitly. References to modules are created with the module block. 
  • Modules look just like resources only without a type. Each module requires a name. The module name must be unique in the configuration. 
  • Modules have only one required attribute: the module’s source. The source tells Terraform where to find the module’s source code (see the module sketch after this list).
  • TIP This path manipulation in Terraform is often tricky. To help with this, Terraform provides a built-in variable called path. You can read about how to use the path variable in the interpolation path variable documentation.
  • The namespace is like an organization or source of the module. The name is the module’s name and the provider is the specific provider it uses. The module’s homepage will contain full documentation on how to use it, including any required inputs and any outputs.
  • NOTE Modules with a blue tick on the Terraform Registry are verified and from a HashiCorp partner. These modules should be more resilient and tested than others. You can also publish your own modules on the Registry.
  • TIP You can read more about module provider inheritance in the modules documentation.
  • The output construct can be used in any Terraform configuration, not just in modules. It is a way to highlight specific information from the attributes of resources we’re creating. This allows us to selectively return critical information to the user or to another application rather than returning all the possible attributes of all resources and having to filter the information down. 
  • You can see that, like a variable, an output is configured as a block with a name. Each output has a value, usually an interpolated attribute from a resource being configured.
  • TIP Since Terraform 0.8, you can also add a description attribute to your outputs, much like you can for your variables. 
  • Outputs can also be marked as containing sensitive material by setting the sensitive attribute (see the module sketch after this list).
  • When outputs are displayed—for instance, at the end of the application of a plan—sensitive outputs are redacted, with <sensitive> displayed instead of their value.
  • NOTE This is purely a visual change. The outputs are not encrypted or protected.
  • NOTE We recommend using a naming convention for Terraform files inside modules. This isn’t required but it makes code organization and comprehension easier.
  • TIP Since Terraform 0.8, you can also specify the depends_on meta-parameter to explicitly create a dependency on a module. You can reference a module via name, for example module.vpc.
  • TIP The fine folks at Segment.io have released an excellent tool called terraform-docs. The terraform-docs tool reads modules and produces Markdown or JSON documentation for the module based on its variables and outputs.
  • Sometimes we want to refer to the whole set of resources created via a count. To do this Terraform has a splat syntax: *. This allows us to refer to all of those resources in a variable (see the count sketch after this list).
  • The format function formats strings according to a specified format. 
  • The format function is essentially a sprintf and is a wrapper around Go’s fmt library syntax. So %03d is constructed from 0, indicating that you want to pad the number to the specified width with leading zeros. Then 3 indicates the width that you want, and d specifies a base-10 integer. Together these flags pad numbers shorter than three digits with leading zeros, but leave numbers three or more digits long unchanged.
  • We can, however, cause Terraform to wrap the list using the element function. The element function pulls an element from a list using the given index and wraps when it reaches the end of the list.
  • The condition can be any interpolation: a variable, a function, or even chaining another conditional. The true or false values can also return any interpolation or valid value. The true and false values must return the same type though.
  • TIP You can read more about conditionals in their documentation.
  • Terraform also has the concept of local value configuration. Local values assign a name to an expression, essentially allowing you to create repeatable function-like values (see the locals sketch after this list).
  • TIP A local is only available in the context of the module it is defined in. It will not work cross-module. 
  • You can specify one or many locals blocks in a module. We’d recommend grouping them together for maintainability. If you use more than one locals block in a module then the names of the locals defined must be unique across the module.
  • If we want to see these outputs again, rather than applying the configuration again, we can run the terraform output command.
  • TIP Remember if you want to see the full list of all our resources and their attributes you can run the terraform show command. 
  • TIP In addition to building a stack from your configuration, you can do the reverse and import existing infrastructure. You can read more about the import process in the Terraform documentation.
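
A few sketches to make this chapter’s highlights concrete. First, variables (Terraform 0.11-era syntax; all names and values are my own):

    # variables.tf
    variable "region" {
      type        = "string"
      default     = "us-east-1"
      description = "The AWS region to build in."
    }

    variable "ami" {
      description = "A map of AMIs, keyed by region."

      # No type needed: a map default makes Terraform assume a map.
      default = {
        "us-east-1" = "ami-0abcdef1234567890"   # illustrative IDs
        "us-west-2" = "ami-0fedcba0987654321"
      }
    }

    # Elsewhere, variables are referenced with the var. prefix:
    #   region = "${var.region}"
    #   ami    = "${lookup(var.ami, var.region)}"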
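
Populating those variables from a file (the file name and contents are illustrative):

    # terraform.tfvars is loaded automatically when Terraform runs.
    region = "us-west-2"

    # A differently named file must be passed explicitly:
    #   terraform plan -var-file=base.tfvars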
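
Next, a sketch of a module reference and of outputs, including a sensitive one. The module path and its output name are assumptions; a real module would define its own inputs and outputs:

    variable "db_password" {
      description = "An illustrative secret, populated outside the configuration."
    }

    module "vpc" {
      source = "./modules/vpc"   # the one required attribute
    }

    output "vpc_id" {
      description = "ID of the VPC the module created."
      value       = "${module.vpc.vpc_id}"   # assumes the module defines this output
    }

    output "db_password" {
      sensitive = true   # redacted when displayed; not encrypted in the state
      value     = "${var.db_password}"
    }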
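
Count, the splat syntax, and the format and element functions, combined in one illustrative resource:

    variable "instance_ips" {
      type    = "list"
      default = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]
    }

    resource "aws_instance" "web" {
      count         = 3
      ami           = "ami-0abcdef1234567890"   # illustrative
      instance_type = "t2.micro"

      # element wraps around the list when the index reaches its end.
      private_ip = "${element(var.instance_ips, count.index)}"

      tags {
        # format pads the index: web-001, web-002, web-003.
        Name = "${format("web-%03d", count.index + 1)}"
      }
    }

    # The splat syntax refers to all resources created by the count.
    output "all_ids" {
      value = ["${aws_instance.web.*.id}"]
    }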
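
And finally, a conditional plus locals (my own names again):

    variable "environment" {
      default = "development"
    }

    locals {
      # A conditional: the true and false branches must return the same type.
      instance_type = "${var.environment == "production" ? "m4.large" : "t2.micro"}"

      # A reusable, function-like value, available only within this module.
      common_name = "web-${var.environment}"
    }

    # Locals are referenced with the local. prefix, e.g. "${local.instance_type}".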

Chapter 4: Provisioning and Terraform

  • Terraform isn’t designed to replace your configuration management tool—rather, it’s made to integrate with it. Terraform’s sweet spot is the management of your infrastructure components. 
  • TIP Terraform is not a configuration management tool. Really, it isn’t. Use it to set up your configuration management tool; do not use it to build your hosts. 
  • The most important piece of information here is that provisioners only run when a resource is created or destroyed. They do not run when a resource is changed or updated. This means that if you want to trigger provisioning again, you will need to destroy and recreate the resource, which is often not convenient or practical. Again, Terraform’s provisioning is not a replacement for configuration management. 
  • NOTE To provision you’ll need to be able to make an appropriate WinRM or SSH connection from the machine running Terraform to the resource being provisioned.  
  • TIP Another approach is to bake your provisioning or configuration management tool into your compute images—for example, by using a tool like Packer to create virtual machine images or AMIs. This reduces the requirement to do provisioning with Terraform.
  • You can find a full list of connection options in the connection block documentation.
  • Data sources provide read-only data that can be used in your configuration. Data sources are linked to providers. Not every provider has data sources—generally they exist if there are sources of information that are useful in the configuration managed by the provider. 
  • TIP The template provider only has two resources: template_file, which creates template files, and template_cloudinit_config, which allows you to template cloud-init configurations.  
  • TIP The file provisioner uses the permissions of the user we connect to the instance with. For us, this is the ubuntu user we specified in the connection block. This user must have permission to write to the chosen destination. If it does not have permission, the provisioner will fail. 
  • TIP Another useful place for the template_file data source is as the value of the user_data attribute. We can render a template file that will be executed to provision our hosts, along with any variables we might find useful. 
  • You can only use self variables in provisioners. They do not work anywhere else.
  • The file provisioner loosely follows the rules of the rsync tool and uses the presence or absence of a trailing / to determine its upload behavior (see the provisioning sketch after this list).
  • If the source directory has a trailing /, the contents of the directory will be uploaded into the destination directory. So a source of files/ will upload the contents of the files directory to the destination.
  • If the source directory doesn’t have a trailing /, a new directory will be created inside the destination directory. So a source of files will upload the files directory to the destination, creating, for example, /root/files. 
  • The remote-exec provisioner runs scripts or commands on a remote instance. It can run in three modes:
    • Run a single script.
    • Run a list of scripts in the order specified.
    • Run a list of commands in the order specified.
  • TIP The remote-exec provisioner has a counterpart: local-exec. The local-exec provisioner runs commands locally on the host running Terraform. You can read about the local-exec provisioner in its provisioner documentation.
  • There is one shortcoming with both the single- and multiple-script execution modes: you can’t pass any arguments to the scripts being run. 
  • If you do want to run a script with an argument, there’s a workaround available. We use the file provisioner to first upload the script, and then run it with remote-exec in its final mode, running inline commands (see the remote-exec sketch after this list).
  • TIP There is a workaround for decoupling the destroy/recreate life cycle from provisioning. It involves using the null_resource, which allows you to create centralized provisioning configuration tied to triggers (see the null_resource sketch after this list).
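
A sketch of provisioning from this chapter: a connection block plus the file provisioner’s trailing-slash behavior. The resource, paths, and user are my own illustrations:

    resource "aws_instance" "web" {
      ami           = "ami-0abcdef1234567890"   # illustrative
      instance_type = "t2.micro"

      connection {
        type = "ssh"
        user = "ubuntu"   # must be able to write to the destination
      }

      # Trailing slash: uploads the *contents* of files/ into /srv/app.
      provisioner "file" {
        source      = "files/"
        destination = "/srv/app"
      }

      # No trailing slash: creates /srv/app/files on the remote host.
      provisioner "file" {
        source      = "files"
        destination = "/srv/app"
      }
    }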
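
The workaround for passing arguments to a script: upload it with the file provisioner, then run it via remote-exec’s inline mode. The paths and variable are assumptions:

    variable "environment" {
      default = "development"
    }

    resource "aws_instance" "app" {
      ami           = "ami-0abcdef1234567890"   # illustrative
      instance_type = "t2.micro"

      connection {
        type = "ssh"
        user = "ubuntu"
      }

      # The script modes can't take arguments, so upload the script first...
      provisioner "file" {
        source      = "scripts/bootstrap.sh"   # illustrative path
        destination = "/tmp/bootstrap.sh"
      }

      # ...then run inline commands, which can pass arguments.
      provisioner "remote-exec" {
        inline = [
          "chmod +x /tmp/bootstrap.sh",
          "/tmp/bootstrap.sh ${var.environment}",
        ]
      }
    }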
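
And the null_resource trigger pattern, continuing the illustrative instance above (shown with local-exec to keep the sketch simple):

    resource "null_resource" "provision" {
      # Changing the trigger value re-runs the provisioner on the next apply,
      # without destroying and recreating the instance itself.
      triggers {
        instance_id = "${aws_instance.app.id}"
      }

      provisioner "local-exec" {
        command = "echo provisioning ${aws_instance.app.id}"
      }
    }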

Chapter 5: Collaborating with Terraform

  • TIP Terraform has another command that can be useful here: refresh. The refresh command reconciles the state file with the real state of your infrastructure. It modifies your state file but does not change any infrastructure. Any proposed change will take place during the next plan or apply.
  • TIP Terraform does come with a command line tool for editing the state. It’s called terraform state. You can list the contents of the current state using the terraform state list command. You can use the terraform state show command to show the state of a specific resource. You can move items in the state or to another state. You can also remove items from the state. 
  • The lifecycle meta-parameter provides the ability to control the life cycle of a resource (see the lifecycle sketch after this list). It has several options you can configure:
    • create_before_destroy — If a resource is going to be recreated, then the new resource is created before the old resource is deleted. This is useful for creating resources that replace others, such as creating a new DNS record before you delete an old one.
    • ignore_changes — Allows you to specify a list of attributes that will be ignored by Terraform.
    • prevent_destroy — Does not delete the resource, even if a plan requires it. Any plan or execution that proposes destroying this resource will exit with an error.
  • TIP Your backend configuration cannot contain interpolated variables, because this configuration is initialized prior to Terraform parsing these variables (see the backend sketch after this list).
  • NOTE Remember Terraform stores modules in the .terraform directory. Your local cache of the remote state file is also stored there. As we’ve discussed elsewhere, you’ll want to add this directory and the terraform.tfstate file to your .gitignore file to ensure neither are committed to version control. 
  • To use Terraform in a shared environment you’ll need to develop a workflow and process for collaborating. 
  • TIP We can query more than one remote state by specifying the terraform_remote_state data source multiple times (see the remote state sketch after this list). Remember that each data source, like our resources, needs to be named uniquely.
  • NOTE For a data source to be read, the remote state needs to exist. If the configuration you’re fetching doesn’t exist—for example, if it’s been destroyed or not yet applied—then your remote state will be returned empty. Any variables you are populating will return an error. 
  • NOTE Only the root-level outputs from the remote state are available. Outputs from modules within the state cannot be accessed. If you want a module output to be accessible via a remote state, you must expose the output in the top-level configuration.
  • Remote state best lends itself to provisioning use. There are variables or data you want to make use of when you build your stack. They’re often only used once or twice during that process. They don’t require you to regularly query that data source while your application or service is being run. Service discovery tends to be used at runtime and exercised regularly when applications and services run—for example, by querying the address of a required service. It generally requires a more resilient and faster service with higher availability than our remote state.
  • TIP Rather than specify the access_token you could use the CONSUL_HTTP_TOKEN environment variable or specify it via the -backend-config command line flag. This keeps your token out of your local state.
  • There are some other tools available to help manage Terraform’s state:
    • Terrahelp: Go utility that provides Vault-based encryption and decryption of state files.
    • Terragrunt: Go tool for managing locking and state that can be used as glue in a multi-environment setup.
    • Terraform_exec: Go wrapper that allows Terraform projects to have multiple environments synced to S3.
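
A sketch of the lifecycle meta-parameter’s options in place (an illustrative resource; you would rarely set all three at once):

    resource "aws_instance" "web" {
      ami           = "ami-0abcdef1234567890"   # illustrative
      instance_type = "t2.micro"

      lifecycle {
        create_before_destroy = true
        prevent_destroy       = false      # true would make any destroy plan error
        ignore_changes        = ["tags"]   # attributes Terraform should not act on
      }
    }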
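
A remote backend configuration (the bucket and key are my own; note there are no interpolated variables, which backends do not allow):

    terraform {
      backend "s3" {
        bucket = "example-terraform-state"   # illustrative bucket
        key    = "base/terraform.tfstate"
        region = "us-east-1"
      }
    }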
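
And reading another configuration’s outputs via the terraform_remote_state data source (same illustrative bucket; this assumes the base configuration exposes a root-level vpc_id output):

    data "terraform_remote_state" "base" {
      backend = "s3"

      config {
        bucket = "example-terraform-state"   # illustrative
        key    = "base/terraform.tfstate"
        region = "us-east-1"
      }
    }

    # Only root-level outputs are available, e.g.:
    #   vpc_id = "${data.terraform_remote_state.base.vpc_id}"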

Chapter 6: Building a Multi-Environment Architecture

  • Terraform users will tell you that working out how to organize and lay out your code is crucial to having a usable Terraform environment.
  • We should ensure our code is clean, simple, and well documented. There are some good guidelines to work within to help with this:
    • All code should be in version control.
    • Always comment code when it requires explanation.
    • Add description fields to all variables.
    • Include README files or documentation for your modules and their interfaces.
    • Running terraform fmt and terraform validate prior to committing or as a commit hook is strongly recommended to ensure your code is well formatted and valid.
  • Before you apply your configuration in your production environment, apply it in your development environment. Apply it using plan output files in iterative pieces to confirm it is working correctly. 
  • You should have your code in version control so it becomes easy to pass changes through an appropriate workflow. Many people use the GitHub Flow to review and promote code. Broadly, this entails:
    • Creating a branch.
    • Developing your changes.
    • Creating a pull request with your changes.
    • Reviewing your code changes.
    • Merging your changes to master and deploying them.
  • The path variable can be suffixed with a variety of methods to select specific paths. For example:
    • path.cwd for current working directory.
    • path.root for the root directory of the root module.
    • path.module for the root directory of the current module.
  • NOTE The Terraform interpolation documentation explains the path variable in more detail. 
  • Another useful feature when thinking about workflow is Terraform state environments. State environments were introduced in Terraform 0.9. You can think of a state environment as branching version control for your Terraform resources (see the sketch after this list).
  • A state environment is a namespace, much like a version control branch. They allow a single folder of Terraform configuration to manage multiple states of resources. They are useful for isolating a set of resources to test changes during development. Unlike version control, though, state environments do not allow merging; any changes you make in a state environment need to be re-applied to any other environments.
  • There are also some tools, blog posts, and resources designed to help run Terraform in a multi-environment setup.
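
For reference, the state environment commands from the Terraform 0.9 era (later releases renamed the feature to workspaces, under terraform workspace):

    terraform env new staging      # create and switch to a new environment
    terraform env list             # list environments, marking the current one
    terraform env select default   # switch back to the default environment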

Chapter 7: Infrastructure Testing

  • Software tests validate that your software does what it is supposed to do. Loosely, they’re a combination of quality measures and correctness measures. We’re going to apply some of the principles of software testing to our infrastructure.
  • A Terraform resource is a unit of isolated code about which we can reason and write tests to ensure the combination of the inputs and execution result in the correct outputs. With Terraform this is made even easier by the declarative nature of resources.
  • Sadly, testing on Terraform is still in the early stages and has limitations. At the moment there are a limited set of testing frameworks and harnesses that support Terraform. We’re going to see what we can achieve now by looking at a tool called Test Kitchen. 
  • Test Kitchen is a test harness to execute infrastructure and configuration management code on isolated platforms. It builds your infrastructure, configuration, or environment, and then validates it against a series of tests. 
  • We’re going to use InSpec with Test Kitchen to test our Terraform-built infrastructure. InSpec is an infrastructure-testing framework built around the concept of compliance controls. You write a series of “controls”—compliance statements backed with individual tests. 
  • Test Kitchen works by creating the infrastructure we want to test, connecting to it, and running a series of tests to validate the right infrastructure has been built. 
  • Test Kitchen stores all of its information about state in a special directory called .kitchen at the root of our environment. Test Kitchen also uses a special YAML configuration file, .kitchen.yml, that tells Test Kitchen how and what to test. 
  • NOTE You’ll need to ensure the host running Test Kitchen can connect via SSH to the hosts upon which you wish to run tests. 
  • You can populate the inspec.yml file with a variety of metadata to identify the suite of tests. The name setting is the only required setting, but other settings help to describe the purpose of your suite.
  • We can test whether the resulting suite of tests is valid using the inspec binary.
  • Test Kitchen’s InSpec controls are expressed in a Ruby DSL (Domain Specific Language) that will be familiar to anyone who has used RSpec, as it’s built on top of it. 
  • The control block wraps a collection of describe blocks, and each describe block wraps individual tests. A describe block must contain at least one test, and a control block must contain at least one describe block, but may contain as many as needed (see the InSpec sketch after this list).
  • Each control is made up of resources and matchers that are combined into tests. Resources are components that execute checks of some kind for a test: run a command, check a configuration setting, check the state of a service, and so on.
  • InSpec has a long list of built-in resources and has the ability for you to write your own custom resources. Matchers are a series of methods that check, by various logic, if output from a resource matches the output you expect. So a matcher might test equality, presence, or a regular expression.
  • You can also decorate control blocks with metadata. This adds metadata to the test to help folks understand what the test does and, importantly, why the test failing matters. In this example we’ve decorated our control with a title and a description—a plain-English explanation of what it is and how it works. We’ve also added a couple of tags to the control.
  • The kitchen test command runs all the steps in our workflow: create, converge, and verify. We’re also going to pass in a command line flag: --destroy passing. The --destroy flag potentially destroys the instance after our tests run. The passing option constrains it to only destroy the infrastructure if all the tests pass. An alternative is the --destroy always flag, which always destroys the instance.
  • TIP There’s a useful, detailed Test Kitchen example including fixtures and more complex configuration in the kitchen-terraform GitHub repository. 
  • As it’s the early days for Terraform, there aren’t a lot of alternatives for testing. The current Test Kitchen solution requires direct SSH access to connect, which is an unfortunate limitation.
  • There is, however, an alternative using another framework called ServerSpec. John Vincent has provided a Gist showing how you might integrate ServerSpec with Terraform.
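
To illustrate the control/describe structure, here is a small InSpec control of my own devising (Ruby DSL; the service and port are assumptions, not the book’s example):

    control "web-01" do
      impact 1.0
      title "Web server is running"
      desc "The nginx service should be installed, running, and listening on port 80."
      tag "web"

      describe service("nginx") do
        it { should be_installed }
        it { should be_running }
      end

      describe port(80) do
        it { should be_listening }
      end
    end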
