Book Review: Terraform Up & Running: Writing Infrastructure as Code

Recently, I finished reading Terraform: Up and Running: Writing Infrastructure as Code by Yevgeniy Brikman.

Note: There is an updated version of this book scheduled to be released in June 2019, which I also plan on purchasing and reviewing.

When I started reading this book, I had very little (but at least some) experience with Terraform. And so, I was looking for a book that would provide a good foundation.

I found Chapter 3 (“How to Manage Terraform State”), Chapter 4 (“How to Create Reusable Infrastructure with Terraform Modules”), Chapter 5 (“Terraform Tips and Tricks: Loops, If-Statements, Deployment, and Gotchas”), and Chapter 6 (“How to Use Terraform as a Team”) particularly valuable.

I especially liked Chapter 4, since it directly relates to using Infrastructure as Code (IaC) in a repeatable and manageable way via modules. That ties in directly with Chapter 6 on working as a team.

The only thing I wish this book (or similar resources) had was examples specific to Microsoft Azure, since that’s the environment I’m working in. Most Terraform tutorials, books, videos, and courses seem to focus on Amazon Web Services (AWS).

I’ve decided to share my highlights from this book, in case the points I found noteworthy are of benefit to someone else. So, here are my highlights, by chapter. Note that not every chapter has highlights, depending on its content and the main focus of my work.

Chapter 1: Why Terraform?

  • DevOps isn’t the name of a team or a job title or a particular technology. Instead, it’s a set of processes, ideas, and techniques.
  • The goal of DevOps is to make software delivery vastly more efficient.
  • There are four core values in the DevOps movement: Culture, Automation, Measurement, and Sharing
  • A key insight of DevOps is that you can manage almost everything in code, including servers, databases, networks, log files, application configuration, documentation, automated tests, deployment processes, and so on.
  • There are four broad categories of IAC tools:
    • Ad hoc scripts
    • Configuration management tools
    • Server templating tools
    • Server provisioning tools
  • The great thing about ad-hoc scripts is that you can use popular, general-purpose programming languages and you can write the code however you want. The terrible thing about ad-hoc scripts is that you can use popular, general-purpose programming languages and you can write the code however you want.
  • Tools designed for IAC usually enforce a particular structure for your code, whereas with a general-purpose programming language, each developer will use his or her own style and do something different.
  • Chef, Puppet, Ansible, and SaltStack are all configuration management tools, which means they are designed to install and manage software on existing servers.
  • An alternative to configuration management that has been growing in popularity recently are server templating tools such as Docker, Packer, and Vagrant. Instead of launching a bunch of servers and configuring them by running the same code on each one, the idea behind server templating tools is to create an image of a server that captures a fully self-contained “snapshot” of the operating system, the software, the files, and all other relevant details. You can then use some other IAC tool to install that image on all of your servers
  • Packer is typically used to create images that you run directly on top of production servers
  • Vagrant is typically used to create images that you run on your development computers
  • Docker is typically used to create images of individual applications.
  • Server templating is a key component of the shift to immutable infrastructure. This idea is inspired by functional programming, where variables are immutable, so once you’ve set a variable to a value, you can never change that variable again. If you need to update something, you create a new variable.
  • The idea behind immutable infrastructure is similar: once you’ve deployed a server, you never make changes to it again. If you need to update something (e.g., deploy a new version of your code), you create a new image from your server template and you deploy it on a new server.
  • Server provisioning tools such as Terraform, CloudFormation, and OpenStack Heat are responsible for creating the servers themselves.
  • Organizations that use DevOps practices, such as IAC, deploy 200 times more frequently, recover from failures 24 times faster, and have lead times that are 2,555 times lower. When your infrastructure is defined as code, you are able to use a wide variety of software engineering practices to dramatically improve your software delivery process
  • You can store your IAC source files in version control, which means the entire history of your infrastructure is now captured in the commit log. This becomes a powerful tool for debugging issues, as any time a problem pops up, your first step will be to check the commit log and find out what changed in your infrastructure, and your second step may be to resolve the problem by simply reverting back to a previous, known-good version of your IAC code.
  • Terraform’s approach is to allow you to write code that is specific to each provider, taking advantage of that provider’s unique functionality, but to use the same language, toolset, and infrastructure as code practices under the hood for all providers.
  • Chef, Puppet, Ansible, and SaltStack are all configuration management tools, whereas CloudFormation, Terraform, and OpenStack Heat are all provisioning tools.
  • Configuration management tools such as Chef, Puppet, Ansible, and SaltStack typically default to a mutable infrastructure paradigm.
  • It’s possible to force configuration management tools to do immutable deployments too, but it’s not the idiomatic approach for those tools, whereas it’s a natural way to use provisioning tools.
  • Chef and Ansible encourage a procedural style where you write code that specifies, step by step, how to achieve some desired end state. Terraform, CloudFormation, SaltStack, Puppet, and OpenStack Heat all encourage a more declarative style where you write code that specifies your desired end state, and the IAC tool itself is responsible for figuring out how to achieve that state.
  • With declarative code, since all you do is declare the end state you want, and Terraform figures out how to get to that end state, Terraform will also be aware of any state it created in the past.
  • This highlights two major problems with procedural IAC tools:
    • Procedural code does not fully capture the state of the infrastructure.
    • Procedural code limits reusability.
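
To make the procedural-versus-declarative contrast concrete, here is a minimal sketch of the declarative style in Terraform’s HCL (pre-0.12 interpolation syntax, matching the edition reviewed); the resource name and AMI ID are placeholders rather than anything from the book. You declare only the end state, and Terraform works out how to get there from whatever state it recorded previously.

```hcl
# Declarative: state the desired end count; Terraform computes the diff.
# The AMI ID below is a placeholder.
resource "aws_instance" "example" {
  count         = 15
  ami           = "ami-0fb653ca2d3203ac1"
  instance_type = "t2.micro"
}
```

Changing count from 15 to, say, 20 and rerunning terraform apply would create only the five missing instances, whereas procedural code would need a new “add five servers” script.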

Chapter 2: Getting Started with Terraform

  • The only thing you should use the root user for is to create other user accounts with more limited permissions, and switch to one of those accounts immediately.
  • When you use interpolation syntax to have one resource reference another resource, you create an implicit dependency. Terraform parses these dependencies, builds a dependency graph from them, and uses that to automatically figure out in what order it should create resources.
  • The output is in a graph description language called DOT, which you can turn into an image by using a desktop app such as Graphviz or a web app such as GraphvizOnline.
  • When Terraform walks your dependency tree, it will create as many resources in parallel as it can.
  • That’s the beauty of a declarative language: you just specify what you want and Terraform figures out the most efficient way to make it happen.
  • The body of the variable declaration can contain three parameters, all of them optional:
    • description: It’s always a good idea to use this parameter to document how a variable is used. Your teammates will not only be able to see this description while reading the code, but also when running the plan or apply commands.
    • default: There are a number of ways to provide a value for the variable, including passing it in at the command line (using the -var option), via a file (using the -var-file option), or via an environment variable (Terraform looks for environment variables of the name TF_VAR_<variable_name>). If no value is passed in, the variable will fall back to this default value. If there is no default value, Terraform will interactively prompt the user for one.
    • type: Must be one of “string”, “list”, or “map”. If you don’t specify a type, Terraform will try to guess the type from the default value. If there is no default, then Terraform will assume the variable is a string.
  • If you don’t want to deal with remembering a command-line flag every time you run plan or apply, you’re better off specifying a default value.
  • You can also use the terraform output command to list outputs without applying any changes and terraform output OUTPUT_NAME to see the value of a specific output.
  • You can add a lifecycle block to any resource to configure how that resource should be created, updated, or destroyed. One of the available lifecycle settings is create_before_destroy, which, if set to true, tells Terraform to always create a replacement resource before destroying the original resource.
  • The catch with the create_before_destroy parameter is that if you set it to true on resource X, you also have to set it to true on every resource that X depends on (if you forget, you’ll get errors about cyclical dependencies).
  • A data source represents a piece of read-only information that is fetched from the provider (in this case, AWS) every time you run Terraform. Adding a data source to your Terraform configurations does not create anything new; it’s just a way to query the provider’s APIs for data.
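
As a rough illustration of the Chapter 2 highlights above (variable parameters, outputs, data sources, and the create_before_destroy lifecycle setting), here is a hedged sketch in pre-0.12 HCL; the names, port, and AMI ID are placeholders rather than the book’s exact example.

```hcl
# A variable with all three optional parameters: description, default, type.
variable "server_port" {
  description = "The port the server will listen on for HTTP requests"
  default     = 8080
  type        = "string"
}

# A read-only data source: it queries the provider, it creates nothing.
data "aws_availability_zones" "all" {}

resource "aws_launch_configuration" "example" {
  image_id      = "ami-0fb653ca2d3203ac1"   # placeholder AMI ID
  instance_type = "t2.micro"

  # Create the replacement before destroying the original.
  lifecycle {
    create_before_destroy = true
  }
}

# Visible via `terraform output server_port` without applying any changes.
output "server_port" {
  value = "${var.server_port}"
}
```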

Chapter 3: How to Manage Terraform State

  • The state file format is a private API that changes with every release and is meant only for internal use within Terraform. You should never edit the Terraform state files by hand or write code that reads them directly. If for some reason you need to manipulate the state file — which should be a relatively rare occurrence — use the terraform import command (you’ll see an example of this in Chapter 5) or the terraform state command (this is only for advanced use cases).
  • Instead of using version control, the best way to manage shared storage for state files is to use Terraform’s built-in support for Remote State Storage. Using the terraform remote config command, you can configure Terraform to fetch and store state data from a remote store every time it runs.
  • When you set prevent_destroy to true on a resource, any attempt to delete that resource (e.g., by running terraform destroy) will cause Terraform to exit with an error. This is a good way to prevent the accidental deletion of an important resource.
  • With remote state enabled, Terraform will automatically pull the latest state from this S3 bucket before running a command, and automatically push the latest state to the S3 bucket after running a command.
  • Using a build server to automate deployments is a good idea regardless of the locking strategy you use, as it allows you to catch bugs and enforce compliance rules by running automated tests before applying any change.
  • Put the Terraform configuration files for each environment into a separate folder. For example, all the configurations for the staging environment can be in a folder called stage and all the configurations for the production environment can be in a folder called prod. That way, Terraform will use a separate state file for each environment, which makes it significantly less likely that a screw up in one environment can have any impact on another.
  • I recommend using separate Terraform folders (and therefore separate state files) for each environment (staging, production, etc.) and each component (vpc, services, databases).
  • If your Terraform configurations are becoming massive, it’s OK to break out certain functionality into separate files (e.g., iam.tf, s3.tf, database.tf), but that may also be a sign that you should break your code into smaller modules instead.
  • There is another data source that is particularly useful when working with state: terraform_remote_state. You can use this data source to fetch the Terraform state file stored by another set of Terraform configurations in a completely read-only manner.
  • In general, embedding one programming language (Bash) inside another (Terraform) makes it harder to maintain each one, so it’s a good idea to externalize the Bash script. To do that, you can use the file interpolation function and the template_file data source.
  • A great way to experiment with interpolation functions is to run the terraform console command to get an interactive console where you can try out different Terraform syntax, query the state of your infrastructure, and see the results instantly.
  • One of the benefits of extracting the User Data script into its own file is that you can write unit tests for it. The test code can even fill in the interpolated variables by using environment variables, since the Bash syntax for looking up environment variables is the same as Terraform’s interpolation syntax.
  • The reason you need to put so much thought into isolation, locking, and state is that infrastructure as code (IAC) has different trade-offs than normal coding.
  • When you’re writing code that controls your infrastructure, bugs tend to be more severe, as they can break all of your apps — and all of your data stores and your entire network topology and just about everything else. Therefore, I recommend including more “safety mechanisms” when working on IAC than with typical code.
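
As a sketch of the prevent_destroy highlight above, one common pattern is to protect the S3 bucket that holds remote state; the bucket name here is a placeholder.

```hcl
resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-terraform-state"   # placeholder; bucket names must be globally unique

  # Keep a full history of state files.
  versioning {
    enabled = true
  }

  # Any attempt to destroy this bucket makes Terraform exit with an error.
  lifecycle {
    prevent_destroy = true
  }
}
```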
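
Similarly, a hedged sketch of the terraform_remote_state and template_file highlights: the first reads another configuration’s state in a read-only way, the second externalizes a Bash User Data script. The bucket, key, script name, and the db outputs referenced are illustrative assumptions.

```hcl
# Read-only access to the state written by a separate database configuration.
data "terraform_remote_state" "db" {
  backend = "s3"

  config {
    bucket = "my-terraform-state"                          # placeholder
    key    = "stage/data-stores/mysql/terraform.tfstate"   # placeholder
    region = "us-east-1"
  }
}

# Externalized Bash script, filled in with interpolated variables.
data "template_file" "user_data" {
  template = "${file("user-data.sh")}"   # placeholder script path

  vars {
    db_address = "${data.terraform_remote_state.db.address}"   # assumes the db config exposes an "address" output
    db_port    = "${data.terraform_remote_state.db.port}"      # assumes a "port" output
  }
}
```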

Chapter 4: How to Create Reusable Infrastructure with Terraform Modules

  • A Terraform module is very simple: any set of Terraform configuration files in a folder is a module.
  • Note that whenever you add a module to your Terraform configurations or modify the source parameter of a module, you need to run the get command before you run plan or apply.
  • In Terraform, modules can have input parameters, too. To define them, you use a mechanism you’re already familiar with: input variables.
  • In Terraform, a module can also return values. Again, this is done using a mechanism you already know: output variables.
  • When creating modules, watch out for these gotchas:
    • File paths
    • Inline blocks
  • The catch with the file function is that the file path you use has to be relative (since you could run Terraform on many different computers)
  • By default, Terraform interprets the path relative to the current working directory. That works if you’re using the file function in a Terraform configuration file that’s in the same directory as where you’re running terraform apply (that is, if you’re using the file function in the root module), but that won’t work when you’re using file in a module that’s defined in a separate folder. To solve this issue, you can use path.module to convert to a path that is relative to the module folder.
  • The configuration for some Terraform resources can be defined either as inline blocks or as separate resources. When creating a module, you should always prefer using a separate resource.
  • If you try to use a mix of both inline blocks and separate resources, you will get errors where routing rules conflict and overwrite each other. Therefore, you must use one or the other. Because of this limitation, when creating a module, you should always try to use a separate resource instead of the inline block. Otherwise, your module will be less flexible and configurable.
  • If both your staging and production environment are pointing to the same module folder, then as soon as you make a change in that folder, it will affect both environments on the very next deployment. This sort of coupling makes it harder to test a change in staging without any chance of affecting production. A better approach is to create versioned modules so that you can use one version in staging (e.g., v0.0.2) and a different version in production (e.g., v0.0.1).
  • The easiest way to create a versioned module is to put the code for the module in a separate Git repository and to set the source parameter to that repository’s URL.
  • You can also add a tag to the modules repo to use as a version number. If you’re using GitHub, you can use the GitHub UI to create a release, which will create a tag under the hood. If you’re not using GitHub, you can use the Git CLI.
  • The ref parameter allows you to specify a specific Git commit via its sha1 hash, a branch name, or, as in this example, a specific Git tag. I generally recommend using Git tags as version numbers for modules. Branch names are not stable, as you always get the latest commit on a branch, which may change every time you run the get command, and the sha1 hashes are not very human friendly. Git tags are as stable as a commit (in fact, a tag is just a pointer to a commit) but they allow you to use any name you want.
  • A particularly useful naming scheme for tags is semantic versioning. This is a versioning scheme of the format MAJOR.MINOR.PATCH (e.g., 1.0.4) with specific rules on when you should increment each part of the version number. In particular, you should increment the…
    • MAJOR version when you make incompatible API changes,
    • MINOR version when you add functionality in a backward-compatible manner, and
    • PATCH version when you make backward-compatible bug fixes.
  • Semantic versioning gives you a way to communicate to users of your module what kind of changes you’ve made and the implications of upgrading.
  • If your Terraform module is in a private Git repository, you will need to ensure the computer you’re using has SSH keys configured correctly that allow Terraform to access that repository. In other words, before using the URL ssh://git@github.com/foo/modules.git in the source parameter of your module, make sure you can git clone that URL in your terminal.
  • Versioned modules are great when you’re deploying to a shared environment (e.g., staging or production), but when you’re just testing on your own computer, you’ll want to use local file paths. This allows you to iterate faster, as you’ll be able to make a change in the module folders and rerun the plan or apply command in the live folders immediately, rather than having to commit your code and publish a new version each time.
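
To illustrate the module highlights above, here is a hedged sketch of calling a versioned module pinned to a Git tag, plus the path.module fix for file paths inside a module; the repository URL, tag, and variable names are hypothetical.

```hcl
# In the live repo: consume a module from a separate Git repo, pinned to a tag.
module "webserver_cluster" {
  source = "git::ssh://git@github.com/acme/modules.git//webserver-cluster?ref=v0.0.2"

  cluster_name = "webservers-stage"
  min_size     = 2
  max_size     = 2
}

# Inside the module: resolve the script relative to the module folder,
# not the directory you happen to run terraform from.
data "template_file" "user_data" {
  template = "${file("${path.module}/user-data.sh")}"
}
```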

Chapter 5: Terraform Tips and Tricks: Loops, If-Statements, Deployment, and Gotchas

  • Terraform provides a few primitives — namely, a meta-parameter called count, a lifecycle block called create_before_destroy, a ternary operator, plus a large number of interpolation functions — that allow you to do certain types of loops, if-statements, and zero-downtime deployments.
  • Almost every Terraform resource has a meta-parameter you can use called count. This parameter defines how many copies of the resource to create.
  • In Terraform, you can use count.index to get the index of each “iteration” in the “loop”:
  • The element function returns the item located at INDEX in the given LIST. The length function returns the number of items in LIST (it also works with strings and maps).
  • When you use the splat character, you get back a list, so you need to wrap the output variable with brackets.
  • In Terraform, if you set a variable to a boolean true (that is, the word true without any quotes around it), it will be coerced into a 1, and if you set it to a boolean false, it will be coerced into a 0.
  • If you set count to 1 on a resource, you get one copy of that resource; if you set count to 0, that resource is not created at all.
  • Using count and interpolation functions to simulate if-else-statements is a bit of a hack, but it’s one that works fairly well, and as you can see from the code, it allows you to conceal lots of complexity from your users so that they get to work with a clean and simple API.
  • What you want to do instead is a zero-downtime deployment. The way to accomplish that is to create the replacement ASG first and then destroy the original one. As it turns out, this is exactly what the create_before_destroy lifecycle setting does!
  • There is a significant limitation: you cannot use dynamic data in the count parameter. By “dynamic data,” I mean any data that is fetched from a provider (e.g., from a data source) or is only available after a resource has been created (e.g., an output attribute of a resource).
  • The key realization is that terraform plan only looks at resources in its Terraform state file. If you create resources out-of-band — such as by manually clicking around the AWS console — they will not be in Terraform’s state file, and therefore, Terraform will not take them into account when you run the plan command. As a result, a valid-looking plan may still fail.
  • There are two main lessons to take away from this:
    • Once you start using Terraform, you should only use Terraform
      • Once a part of your infrastructure is managed by Terraform, you should never make changes manually to it. Otherwise, you not only set yourself up for weird Terraform errors, but you also void many of the benefits of using infrastructure as code in the first place, as that code will no longer be an accurate representation of your infrastructure.
    • If you have existing infrastructure, use the import command
      • If you created infrastructure before you started using Terraform, you can use the terraform import command to add that infrastructure to Terraform’s state file, so Terraform is aware of and can manage that infrastructure.
  • Note that if you have a lot of existing resources that you want to import into Terraform, writing the Terraform code for them from scratch and importing them one at a time can be painful, so you may want to look into a tool such as Terraforming, which can import both code and state from an AWS account automatically.
  • Refactoring is an essential coding practice that you should do regularly. However, when it comes to Terraform, or any infrastructure as code tool, you have to be careful about what defines the “external behavior” of a piece of code, or you will run into unexpected problems.
  • There are four main lessons you should take away from this discussion:
    • Always use the plan command
      • All of these gotchas can be caught by running the plan command, carefully scanning the output, and noticing that Terraform plans to delete a resource that you probably don’t want deleted.
    • Create before destroy
      • If you do want to replace a resource, then think carefully about whether its replacement should be created before you delete the original. If so, then you may be able to use create_before_destroy to make that happen. Alternatively, you can also accomplish the same effect through two manual steps: first, add the new resource to your configurations and run the apply command; second, remove the old resource from your configurations and run the apply command again.
    • All identifiers are immutable
      • Treat the identifiers you associate with each resource as immutable. If you change an identifier, Terraform will delete the old resource and create a new one to replace it. Therefore, don’t rename identifiers unless absolutely necessary, and even then, use the plan command, and consider whether you should use a create-before-destroy strategy.
    • Some parameters are immutable
      • The parameters of many resources are immutable, so if you change them, Terraform will delete the old resource and create a new one to replace it. The documentation for each resource often specifies what happens if you change a parameter, so RTFM. And, once again, make sure to always use the plan command, and consider whether you should use a create-before-destroy strategy.
  • Whenever you use an asynchronous and eventually consistent API, you are supposed to wait and retry for a while until that action has completed and propagated. Unfortunately, Terraform does not do a great job of this. As of version 0.8.x, Terraform still has a number of eventual consistency bugs that you will hit from time to time after running terraform apply.
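
A minimal sketch of the count-based “loop” described in the highlights above, using count.index, element, length, and the splat syntax in pre-0.12 HCL; the IAM user names are illustrative.

```hcl
variable "user_names" {
  description = "Create IAM users with these names"
  type        = "list"
  default     = ["neo", "trinity", "morpheus"]
}

# One aws_iam_user per entry in the list.
resource "aws_iam_user" "example" {
  count = "${length(var.user_names)}"
  name  = "${element(var.user_names, count.index)}"
}

# The splat expression returns a list, so the output is wrapped in brackets.
output "all_arns" {
  value = ["${aws_iam_user.example.*.arn}"]
}
```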
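
A sketch of the count-as-if-statement trick described above: because a boolean is coerced to 1 or 0, setting count from a boolean variable creates the resource only when the flag is true. The variable and resource here are illustrative, not the book’s example.

```hcl
variable "create_eip" {
  description = "If true, allocate an Elastic IP"
  default     = false
}

resource "aws_eip" "example" {
  # true coerces to 1 (one copy created), false to 0 (resource skipped).
  count = "${var.create_eip}"
  vpc   = true
}
```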

Chapter 6: How to Use Terraform as a Team

  • If your team is used to managing all of its infrastructure by hand, switching to infrastructure as code (IAC) requires more than just introducing a new tool or technology. It also requires changing the culture and processes of the team. In particular, your team will need to shift from a mindset of making changes directly to infrastructure (e.g., by SSHing to a server and running commands) to a mindset where you make those changes indirectly (e.g., by updating Terraform code) and allowing automated processes to do all the actual work.
  • This up-front investment in learning has a massive payoff. Doing things by hand may feel simpler and faster for a few servers, but once you have tens, hundreds, or thousands of servers, proper IAC processes are the only options that work.
  • All of your code should be in version control. No exceptions.
  • Not only should the code that defines your infrastructure be stored in version control, but you may want to have at least two separate version control repositories: one for modules, and one for live infrastructure.
  • Your team should have one or more separate repositories where you define versioned, reusable modules. Think of each module as a “blueprint” that defines a specific part of your infrastructure. The beauty of this arrangement is that you could have an infrastructure team that specializes in creating reusable, best-practices definitions of pieces of infrastructure within the modules repo.
  • There should be a separate repository that defines the live infrastructure you’re running in each environment (stage, prod, mgmt, etc). Think of this as the “houses” you build from the “blueprints” in the modules repository.
  • You should be able to reason about your infrastructure just by looking at the live repository. If you can scan the code of that repository and get an accurate understanding of what’s deployed, then you’ll find it easy to maintain your infrastructure.
  • The Golden Rule of Terraform: The master branch of the live repository should be a 1:1 representation of what’s actually deployed in production.
  • The only way to ensure that the Terraform code in the live repository is an up-to-date representation of what’s actually deployed is to never make out-of-band changes. Once you start using Terraform, do not make changes via a web UI, or manual API calls, or any other mechanism. Out-of-band changes not only lead to complicated bugs, but they also void many of the benefits you get from using infrastructure as code in the first place.
  • Every resource you have deployed should have a corresponding line of code in your live repository.
  • The better way to get this kind of reuse is to create a module, write explicit code that uses that module 10 times, and run terraform apply once.
  • You should only have to look at a single branch to understand what’s actually deployed in production. Typically, that branch will be master. That means all changes that affect the production environment should go directly into master (you can create a separate branch, but only to create a pull request with the intention of merging that branch into master) and you should only run terraform apply for the production environment against the master branch.
  • If you manage your infrastructure through code, you have a better way to mitigate risk: automated tests. The idea is to write code that verifies that your infrastructure code works as expected. You should run these tests after every commit and revert any commits that fail. This way, every change that makes it into your codebase is proven to work and most issues will be found at build time rather than during a nerve-wracking deployment.
  • Most automated tests for Terraform simply run terraform apply and then try to verify that the deployed resources behave as expected. That means that automated tests for infrastructure are a bit slower to run and a bit more fragile than other types of automated tests. However, this is a small price to pay for the ability to validate all your infrastructure changes before those changes can cause problems in production.
  • You need to make it possible to deploy your Terraform configurations into an isolated test environment.
  • The first step to making Terraform code testable is to make the various aspects of the environment pluggable.
  • You may find some of the existing infrastructure testing tools handy, such as kitchen-terraform and serverspec.
  • There are several different types of automated tests you may write for your Terraform code, including unit tests, integration tests, and smoke tests. Most teams should use a combination of all three types of tests, as each type can help prevent different types of bugs.
  • Unit tests verify the functionality of a single, small unit of code. The definition of unit varies, but in a general-purpose programming language, it’s typically a single function or class. The equivalent in Terraform is to test a single module.
  • Integration tests verify that multiple units work together correctly. In a general-purpose programming language, you might test that several functions or classes work together correctly. The equivalent in Terraform is to test that several modules work together.
  • Smoke tests run as part of the deployment process, rather than after each commit. You typically have a set of smoke tests that run each time you deploy to staging and production that do a sanity check that the code is working as expected.
  • Whenever you’re writing code as a team, regardless of what type of code you’re writing, you should define guidelines for everyone to follow.
  • If I look at a single file and it’s written by 10 different engineers, it should be almost indistinguishable which part was written by which person. To me, that is clean code.
  • Most Terraform modules should have a Readme that explains what the module does, why it exists, how to use it, and how to modify it. In fact, you may want to write the Readme first, before any of the actual Terraform code, as that will force you to consider what you’re building and why you’re building it before you dive into the code and get lost in the details of how to build it.
  • You may also want to have tutorials, API documentation, wiki pages, and design documents that go deeper into how the code works and why it was built this way.
  • Don’t use comments to explain what the code does; the code should do that itself. Only include comments to offer information that can’t be expressed in code, such as how the code is meant to be used or why the code uses a particular design choice. Terraform also allows every input variable to declare a description parameter, which is a great place to describe how that variable should be used.
  • When creating Terraform modules, you may also want to create example code that shows how that module is meant to be used. This is a great way to highlight proper usage patterns as well as a way for users to try your module without having to write code.
  • Your team should define conventions for where Terraform code is stored and the file layout you use. Since the file layout for Terraform also determines the way Terraform state is stored, you should be especially mindful of how file layout impacts your ability to provide isolation guarantees.
  • Every team should enforce a set of conventions about code style, including the use of whitespace, newlines, indentation, curly braces, variable naming, and so on.
  • What really matters is that you are consistent throughout your codebase.
  • Terraform even has a built-in fmt command that can reformat code to a consistent style automatically. You could run this command as part of a commit hook to ensure that all code committed to version control automatically gets a consistent style.
  • The workflow I recommend for most teams consists of the following:
    • Plan
    • Staging
    • Code review
    • Production
  • I recommend that every team maintains at least two environments:
    • Production: An environment for production workloads (i.e., user-facing apps).
    • Staging: An environment for nonproduction workloads (i.e., testing).
  • Since everything is automated with Terraform anyway, it doesn’t cost you much extra effort to try a change in staging before production, but it will catch a huge number of errors.
  • Testing in staging is especially important because Terraform does not roll back changes in the case of errors. If you run terraform apply and something goes wrong, you have to fix it yourself.
  • As always, run the plan command before apply, and make sure the plan matches up with what you saw in staging.
  • Whenever you run the plan or apply command, Terraform automatically looks for a terraform.tfvars file, and if it finds one, it uses any variables defined within it to set the variables in your configurations. The .tfvars file format is fairly easy to generate from an automated deployment script, although if you don’t want to deal with HCL syntax, Terraform also allows you to use JSON in a terraform.tfvars.json file.
  • The gold standard is to allow a developer to spin up their own personal testing environment on demand whenever they are making infrastructure changes, and to tear those environments down when they are done.
  • The general idea is to define the Terraform code in a single place and to create a pipeline that allows you to promote a single, immutable version of that definition through each of your environments. Here’s one way to implement this idea: in your modules repository, define all of your Terraform code for a single environment just as if you were defining it in the live repo
  • Each of the components in the modules repo contains standard Terraform code, ready to be deployed with a call to terraform apply, except for one thing: anything that needs to vary between environments is exposed as an input variable.
  • In your live repository, you can deploy each component by creating a .tfvars file that sets those input variables to the appropriate values for each environment.
  • The benefit of this approach is that the code it takes to define an environment is reduced to just a handful of .tfvars files, each of which specifies solely the variables that are different for each environment. This is about as DRY as you can get, which helps reduce the maintenance overhead and copy/paste errors of maintaining multiple environments.
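
As a rough sketch of the promotion-pipeline idea above: the component’s Terraform code lives once in the modules repo, and each environment in the live repo is reduced to a small variables file. The file path, variable names, and values below are illustrative assumptions.

```hcl
# live/stage/webserver-cluster/terraform.tfvars
# Auto-loaded by plan/apply; only the values that differ per environment live here.
cluster_name  = "webservers-stage"
instance_type = "t2.micro"
min_size      = 2
max_size      = 2
```

The production folder would hold its own terraform.tfvars with, say, larger instance sizes, while both environments point at the same versioned component code.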
