Recently, I finished reading Terraform Cookbook: Efficiently define, launch, and manage Infrastructure as Code across various cloud platforms by Mikael Krief.
I am already experienced with Terraform, and have read 3 other Terraform books, along with many other articles, blogs, and videos. Not to mention my own blog posts about Terraform, my 3-part video series on “Infrastructure-as-Code (IaC) Using Terraform” available on my YouTube channel, and my own personal curated list of Terraform Learning Resources (which I plan to add this book to).
Out of the many books, blogs, articles, etc. I’ve read, I found this one to be very clear and concise. It is a great resource if you are just getting started with Terraform. It doesn’t overload you with a lot of text/material, and has a great format of… Getting Ready, How To Do It, and How It Works.
There isn’t one specific chapter that stands out for me, but I appreciated all of the extra links shared for additional information. In Chapter 4 (“Using the Terraform CLI”), Chapter 5 (“Sharing Terraform Configuration with Modules”), and Chapter 6 (“Provisioning Azure Infrastructure with Terraform”), there are some good links to tools that generate Terraform configuration files, and testing frameworks.
I’ve decided to share my highlights from reading this specific publication, in case the points that I found of note/interest will be of some benefit to someone else. So, here are my highlights (by chapter). Note that not every chapter will have highlights (depending on the content and the main focus of my work).
If my highlights pique your interest, I strongly recommend that you pick up a copy for yourself.
Chapter 1: Setting Up the Terraform Environment
- It is important to know that if your Terraform configuration is in version 0.11, it is not possible to migrate it directly to 0.13. You will first have to upgrade to 0.12 and then migrate to 0.13.
- It is also recommended by HashiCorp, before performing the migration process, to commit its code in a source code manager (for example, Git) in order to be able to visualize the code changes brought by the migration.
Chapter 2: Writing Terraform Configuration
- With regard to the specification of the provider version, when executing the terraform init command, if no version is specified, Terraform downloads the latest version of the provider; otherwise, it downloads the specified version.
- It is also important to mention that the version of the Terraform binary that’s used is specified in the Terraform state file. This is to ensure that nobody applies this Terraform configuration with a lower version of the Terraform binary, thus ensuring that the format of the Terraform state file conforms with the correct version of the Terraform binary.
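The two points above can be expressed in a terraform block. The following is a minimal sketch; the azurerm provider and the version constraints are my own illustrative choices:

```hcl
terraform {
  # Refuse to run with an older Terraform binary
  required_version = ">= 0.13"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0" # terraform init downloads a 2.x version
    }
  }
}
```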
- In addition, with the 0.13 version of Terraform released in August 2020, we can now create custom validation rules for variables which makes it possible for us to verify a value during the terraform plan execution.
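A minimal sketch of such a validation rule follows; the variable name and allowed values are my own example:

```hcl
variable "environment" {
  type        = string
  description = "Name of the target environment"

  validation {
    # Evaluated during terraform plan; fails fast on an invalid value
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "The environment must be dev, staging, or prod."
  }
}
```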
- Note that using the -var option or the TF_VAR_&lt;name of the variable&gt; environment variable doesn’t hardcode these variables’ values inside the Terraform configuration. They make it possible for us to provide variable values on the fly. But be careful – these options can have consequences if the same code is executed with values other than those initially provided in parameters and the plan’s output isn’t reviewed carefully.
- Optionally, we can add a description that describes what the output returns, which can also be very useful for autogenerated documentation or in the use of modules.
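For example, a described output might look like this (the output name and resource reference are hypothetical):

```hcl
output "webapp_hostname" {
  description = "Default hostname of the provisioned web app"
  value       = azurerm_app_service.app.default_site_hostname
}
```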
- An article explaining the best practices surrounding Terraform configuration can be found at https://www.terraform-best-practices.com/code-structure.
- The following blog post explains the folder structure for production Terraform configuration: https://www.hashicorp.com/blog/structuring-hashicorp-terraform-configuration-for-production
- Prefer data blocks over IDs written in clear text in the code: hardcoded IDs can change over time, whereas a data block retrieves the information dynamically.
- Separating the Terraform configuration is a good practice because it allows better control and maintainability of the Terraform configuration. It also allows us to provision each part separately, without it impacting the rest of the infrastructure.
- To know when to use a data block or a terraform_remote_state block, the following recommendations must be kept in mind:
- The data block is used in the following cases:
- When external resources have not been provisioned with Terraform configuration (it has been built manually or with a script)
- When the user providing the resources of our Terraform configuration does not have access to another remote backend
- The terraform_remote_state block is used in the following cases:
- When external resources have been provisioned with Terraform configuration
- When the user providing the resources of our Terraform configuration has read access to the other remote backend
- When the external Terraform state file contains the output of the property we need in our Terraform configuration
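As an illustration of the terraform_remote_state case, a block reading another configuration’s Azure backend might look like this (all names here are placeholders of my own):

```hcl
data "terraform_remote_state" "network" {
  backend = "azurerm"

  config = {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstate"
    container_name       = "tfstate"
    key                  = "network.tfstate"
  }
}

# The other configuration must expose the value as an output, e.g.:
# data.terraform_remote_state.network.outputs.subnet_id
```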
- There is an external resource in Terraform that allows you to call an external program and retrieve its output data so that it can be used in the Terraform configuration.
- This external resource contains specifics about the protocol, the format of the parameters, and its output. I advise that you read its documentation to learn more: https://www.terraform.io/docs/providers/external/data_source.html
- The following are some example articles regarding how to use the Terraform external resource:
- https://dzone.com/articles/lets-play-with-terraform-external-provider
- https://thegrayzone.co.uk/blog/2017/03/external-terraform-provider-powershell/
- In this function, we used the %s verb to indicate that it is a character string that will be replaced, in order, by the name of the application and the name of the environment.
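A quick sketch of that format call (the variable names are my own):

```hcl
locals {
  # The %s placeholders are replaced, in order, by the app name and environment,
  # e.g. "myapp-dev"
  resource_name = format("%s-%s", var.app_name, var.environment)
}
```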
- It is important to know that the local-exec provisioner, once executed, is recorded in the Terraform state file and will not be executed a second time by the terraform apply command.
- To be able to execute the local-exec command based on a trigger element, such as a resource that has been modified, it is necessary to add a triggers map inside the null_resource that will act as the trigger element of the local-exec provisioner.
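A minimal sketch of that pattern (the trigger value and command are illustrative):

```hcl
resource "null_resource" "configure_vm" {
  # Re-run the provisioner whenever the VM's ID changes
  triggers = {
    vm_id = azurerm_linux_virtual_machine.vm.id
  }

  provisioner "local-exec" {
    command = "echo 'VM was (re)created, run configuration here'"
  }
}
```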
- Please note that the fact a property is sensitive in Terraform means that it will not be displayed in the console output of the terraform plan and apply commands. On the other hand, it will be present in clear text in the Terraform state file.
Chapter 3: Building Dynamic Environments with Terraform
- Regarding the lookup and element functions, they can be used, but it is preferable to use the native syntax instead (such as var_name[42] and var_map[“key”]) to access elements of a map, list, or set.
Chapter 4: Using the Terraform CLI
- Among the other options of this command, there is also the -check option, which can be added and allows you to preview the files that would be reformatted, without applying the changes to the file(s).
- With the Terraform extension for Visual Studio Code, we can have every Terraform file formatted with the terraform fmt command when it is saved. For more information, read the pertinent documentation: https://marketplace.visualstudio.com/items?itemName=HashiCorp.terraform.
- For Git commits, it’s possible to automate the execution of the terraform fmt command before each commit by using pre-commits that are hooks to Git: https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks.
- To use pre-commits with Terraform, refer to this list of hooks provided by Gruntwork: https://github.com/gruntwork-io/pre-commit.
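A .pre-commit-config.yaml using those Gruntwork hooks might look like the following sketch; the rev value is a placeholder that you should pin to a real release tag, and the two hook IDs are the fmt/validate hooks from that repository:

```yaml
repos:
  - repo: https://github.com/gruntwork-io/pre-commit
    rev: <pinned-release-tag>
    hooks:
      - id: terraform-fmt
      - id: terraform-validate
```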
- If the Terraform configuration contains a backend block, then, for this validation of the configuration, we don’t need to connect to this state file. We can add the -backend=false option to the terraform init command.
- Finally, if the execution of this Terraform configuration requires variables passed with the -var argument, or with the -var-file option, you cannot use this command. Instead, use the terraform plan command, which performs validation during its execution.
- Since the terraform destroy command deletes all the resources tracked in the Terraform state file, it is important to split the Terraform configuration into multiple state files to reduce the room for error when changing the infrastructure.
- If you need to destroy a single resource and not all the resources tracked in the state file, you can add the -target option to the terraform destroy command, which allows you to target the resource to be deleted.
- Note that the targeting mechanism should only be used as a last resort. In an ideal scenario, the configuration stays in sync with the state file (as applied without any extra target flags). The risk of executing a targeted apply or destroy operation is that other contributors may miss the context and, more importantly, it becomes much more difficult to apply further changes after changing the configuration.
- Be careful: deleting a workspace does not delete the associated resources. That’s why, in order to delete a workspace, you must first delete the resources provisioned by that workspace using the terraform destroy command. Otherwise, it will no longer be possible to manage these resources with Terraform, because the Terraform state file of this workspace will have been deleted.
- If you are provisioning resources in Azure, there are rather interesting tools that generate the Terraform configuration and the corresponding Terraform state file from Azure resources that have already been created. One open source tool, Az2Tf, is available at https://github.com/andyt530/py-az2tf. Alternatively, there is TerraCognita, which is available at https://github.com/cycloidio/terracognita/blob/master/README.md.
- Moreover, in order to cancel the taint flag applied with the terraform taint command, we can execute the inverse command, terraform untaint.
Chapter 5: Sharing Terraform Configuration with Modules
- Documentation on the use of the generator is available at https://docs.microsoft.com/en-us/azure/developer/terraform/create-a-base-template-using-yeoman.
- Yeoman documentation is available at https://yeoman.io/.
- Among all of the tools in the Terraform toolbox, there is terraform-docs, an open source, cross-platform tool that allows the documentation of a Terraform module to be generated automatically.
- We execute terraform-docs specifying in the first argument the type of format of the documentation. In our case, we want it in markdown format. Then, in the second argument, we specify the path of the modules directory.
- But to go further, we added the > Modules/webapp/Readme.md redirection, which indicates that the content of the generated documentation will be written to the Readme.md file that will be created in the module directory.
- In our recipe, we chose to generate markdown documentation, but it is also possible to generate it in JSON, XML, YAML, or text (pretty) format. To do so, you have to add the format option to the terraform-docs command. To know more about the available generation formats, read the documentation here: https://github.com/terraform-docs/terraform-docs/blob/master/docs/FORMATS_GUIDE.md.
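Putting the pieces described above together, the full invocation would look something like this (the module path is illustrative):

```shell
terraform-docs markdown ./Modules/webapp > ./Modules/webapp/Readme.md
```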
- The mechanism of the Terrafile pattern is that instead of using the Git sources directly in module calls, we reference them in a YAML file called Terrafile. In the Terraform configuration, in the module call, we instead use a local path relative to the modules folder.
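A sketch of such a Terrafile entry (the module name, URL, and version are hypothetical):

```yaml
# Terrafile
terraform-azurerm-webapp:
  source: "https://github.com/example-org/terraform-azurerm-webapp.git"
  version: "v1.0.0"
```

The Terraform configuration then calls the module with a local path such as source = "./modules/terraform-azurerm-webapp", and the Terrafile tooling populates that folder before terraform init runs.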
- Among the Terraform framework and testing tools is the Terratest framework, created by the Gruntwork community (https://gruntwork.io/static/), which is popular and allows testing on code written in the Go language.
- If Terraform modules provide resources in cloud providers, the authentication parameters must be set before running tests.
- Read this blog post about Terratest and GitHub Actions, provided by HashiCorp: https://www.hashicorp.com/blog/continuous-integration-for-terraform-modules-with-github-actions/.
Chapter 6: Provisioning Azure Infrastructure with Terraform
- A tutorial that shows how to use and configure locally installed Visual Studio Code to execute a Terraform configuration in Azure Cloud Shell: https://docs.microsoft.com/en-us/azure/developer/terraform/configure-vs-code-extension-for-terraform.
- This method allows you to provision elements in Azure that are not available in the azurerm provider, but it is important to know that Terraform does not know the resources described in this ARM template when it is executed.
- That is to say that these resources (here, in our resource, it is the extension) do not follow the life cycle of the Terraform workflow and are not registered in the Terraform state file. The only thing that is written in the Terraform state file is the configuration of the resource, azurerm_template_deployment. As a consequence, for example, if you run the terraform destroy command on the Terraform configuration, the resources provided by the ARM template will not be destroyed. Instead, only the azurerm_template_deployment resource will be removed from the Terraform state file. For this reason, it is advisable that you use this type of deployment only to complement resources that have been provisioned with Terraform HCL code.
- Documentation pertaining to the when property of provisioner is available here: https://www.terraform.io/docs/provisioners/index.html#destroy-time-provisioners.
- If you want to keep a real IaC, it is preferable to use an as-code configuration tool, such as Ansible, Puppet, Chef, or PowerShell DSC.
- Warning: There can only be one custom script extension per VM. Therefore, you have to put all the configuration operations in a single script.
- One of the Terraform configuration generation tools is Terraformer, which is hosted in the GitHub repo of Google Cloud Platform, at https://github.com/GoogleCloudPlatform/terraformer.
- We generate the Terraform configuration by executing the following Terraformer command:
- terraformer import azure --resources=resource_group --compact --path-pattern {output}/{provider}/
- Terraformer also contains an option that allows a dry run to preview the code that will be generated. To do this, we will execute the following command that generates a plan.json file, along with a description of the resources that will be generated:
- terraformer plan azure --resources=resource_group --compact --path-pattern {output}/{provider}/
- We visualize the content of this created JSON file to check its conformity and then, in order to carry out the generation, we execute the following command:
- terraformer import plan generated/azurerm/plan.json
- Moreover, before using Terraformer, it is necessary to check that the resources to be generated are well supported. For example, in the case of Azure, the list of resources is available here: https://github.com/GoogleCloudPlatform/terraformer#use-with-azure.
- Finally, among the other Terraform configuration generation tools, there is a very good tool called az2tf (https://github.com/andyt530/py-az2tf) that worked on the same principle as Terraformer, but unfortunately, this tool is no longer maintained. There is also TerraCognita (https://github.com/cycloidio/terracognita/), which still integrates a number of resources for Azure, and Terraforming (https://github.com/dtan4/terraforming), which is only operational for AWS.
Chapter 7: Deep Diving into Terraform
- One of the advantages of Ansible is that it’s agentless, which means you don’t need to install an agent on the VMs you want to configure. Thus, to know which VMs to configure, Ansible uses a file called inventory, which contains the list of VMs that need configuring.
- For more details on this templating format, read the documentation at https://www.terraform.io/docs/configuration/expressions.html#string-templates.
- We use the built-in Terraform zipmap function that allows us to build a map from two lists, one being the keys list and the other the values list.
- Documentation on the zipmap function is available at https://www.terraform.io/docs/configuration/functions/zipmap.html.
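A quick sketch of zipmap (the values are my own):

```hcl
locals {
  vm_names = ["vm-web", "vm-db"]
  vm_ips   = ["10.0.1.4", "10.0.1.5"]

  # Builds { "vm-web" = "10.0.1.4", "vm-db" = "10.0.1.5" }
  vm_map = zipmap(local.vm_names, local.vm_ips)
}
```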
- Here is a list of web articles that deal with the same subject of Ansible inventories generated by Terraform by proposing different solutions:
- https://hooks.technology/2020/02/using-terraform-and-ansible-together/
- https://www.linkbynet.com/produce-an-ansible-inventory-with-terraform
- https://gist.github.com/hectorcanto/71f732dc02541e265888e924047d47ed
- https://stackoverflow.com/questions/45489534/best-way-currently-to-create-an-ansible-inventory-from-terraform
- Finally, concerning the writing of the tests, we will use Inspec, which is a test framework based on Rspec. Inspec allows you to test local systems or even infrastructures in the cloud. For more information about Inspec, I suggest you read its documentation at https://www.inspec.io/.
- To learn more about Inspec profiles, refer to the documentation at https://www.inspec.io/docs/reference/profiles/.
- In the case of integration tests in which, after executing the tests, we don’t want to destroy the resources that have been built with Terraform, we can execute the kitchen verify command.
- You can find tutorials on kitchen-terraform at https://newcontext-oss.github.io/kitchen-terraform/tutorials/.
- For more information about the kitchen test command, see the documentation at https://kitchen.ci/docs/getting-started/running-test/.
- However, it should be noted that if a resource in the Terraform configuration contains this property, and a change would require that resource to be destroyed when executing the terraform apply command, then the prevent_destroy property causes the apply to fail, which prevents changes from being applied to any of the resources described in the Terraform configuration.
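For reference, the property is set inside a lifecycle block (the resource type and values here are illustrative):

```hcl
resource "azurerm_resource_group" "rg" {
  name     = "rg-prod"
  location = "westeurope"

  lifecycle {
    # terraform apply fails if a change would destroy this resource
    prevent_destroy = true
  }
}
```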
- An interesting article on the HashiCorp blog about drift management can be found at https://www.hashicorp.com/blog/detecting-and-managing-drift-with-terraform/.
- Read this article from HashiCorp about feature toggles, blue-green deployments, and canary testing using Terraform, available at https://www.hashicorp.com/blog/terraform-feature-toggles-blue-green-deployments-canary-test/.
- In this block, we added the create_before_destroy property with its value set to true. This property makes the regeneration of a resource possible in the event of destruction by indicating to Terraform to first recreate the resource, and only then to delete the original resource.
- However, before using create_before_destroy, there are some things to consider, as follows:
- The create_before_destroy property only works when a configuration change requires the deletion and then regeneration of resources. It only works when executing the terraform apply command; it does not work when executing the terraform destroy command.
- You must be careful that the resources that will be created have different names than the ones that will be destroyed afterward. Otherwise, if the names are identical, the resource may not be created.
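The lifecycle setting discussed above looks like this inside a resource (the resource type is illustrative, and the other required arguments are elided):

```hcl
resource "azurerm_linux_virtual_machine" "vm" {
  # ... other required arguments ...

  lifecycle {
    # Create the replacement resource before destroying the original
    create_before_destroy = true
  }
}
```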
- To implement zero downtime in Azure with Packer and Terraform, read the tutorial at https://docs.microsoft.com/en-us/azure/developer/terraform/create-vm-scaleset-network-disks-using-packer-hcl.
- A good article on zero downtime can be found at https://dzone.com/articles/zero-downtime-deployment.
- There are also other tools for parsing and processing the plan generated by the terraform plan command. Among these tools, there are npm packages such as terraform-plan-parser, available at https://github.com/lifeomic/terraform-plan-parser, or Open Policy Agent for Terraform at https://www.openpolicyagent.org/docs/latest/terraform/.
- One of the best practices regarding the structure of the configuration is to separate the Terraform configuration into infrastructure and application components, as explained in the article at https://www.cloudreach.com/en/resources/blog/how-to-simplify-your-terraform-code-structure/.
- Learn about externalizing the configuration that is redundant between environments by reading the documentation available at https://terragrunt.gruntwork.io/docs/features/execute-terraform-commands-on-multiple-modules-at-once/.
- A useful blog article on the architecture of the Terraform configuration can be found at https://www.hashicorp.com/blog/structuring-hashicorp-terraform-configuration-for-production/.
- Detailed CLI configuration documentation is available at https://terragrunt.gruntwork.io/docs/features/keep-your-cli-flags-dry/.
- Before automating Terraform in any CI/CD pipeline, it is recommended to read HashiCorp’s automation guides, which contain recommendations for Terraform.
- In Terraform’s vision, workspaces make it possible to manage several environments by creating several Terraform state files for the same Terraform configuration.
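In practice, that workflow is driven by a few CLI commands, sketched below; the workspace name and the dev.tfvars file are my own examples:

```shell
terraform workspace new dev        # creates a separate state file for dev
terraform workspace select dev     # switches the active workspace
terraform apply -var-file=dev.tfvars
```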
Chapter 8: Using Terraform Cloud to Improve Collaboration
- If you already have Terraform configurations with state files that are stored in other types of backends and you would like to migrate them to Terraform Cloud, here is the migration documentation: https://www.terraform.io/docs/cloud/migrate/index.html.
- In addition, if your modules have been published in this private registry, you can generate the Terraform configuration that calls them using the design configuration feature of Terraform Cloud. You can find out more about this at https://www.terraform.io/docs/cloud/registry/design.html.
- In order to ensure that the changes are applied in one place, you can’t run the terraform apply command on a workspace that is connected to a VCS. However, if your workspace is not connected to a VCS, then you can execute the apply command from your local CLI.
- For more information about additional third-party tools in the execution of Terraform Cloud, I recommend reading the documentation available at https://www.terraform.io/docs/cloud/run/install-software.html.
- It is still a good practice to put these policies in a separate repository so that you don’t mix policy commits with Terraform configuration commits. Another reason to do this is that this separate repository could be managed by another team (such as ops or security).
- As far as blocking the application is concerned, this is configured in the sentinel.hcl file with the enforcement_level = "hard-mandatory" property for each policy. To find out more about the values of this property and their implication, read the documentation at https://docs.hashicorp.com/sentinel/concepts/enforcement-levels/ and at https://www.terraform.io/docs/cloud/sentinel/manage-policies.html.
- The guide to writing and installing policies is available here: https://www.hashicorp.com/resources/writing-and-testing-sentinel-policies-for-terraform/.
- The basic learning guide for policies is available here: https://learn.hashicorp.com/terraform/cloud-getting-started/enforce-policies.
- There are other tools we can use to write and execute Terraform compliance configuration, such as terraform-compliance (https://github.com/eerkunt/terraform-compliance) and Open Policy Agent (https://www.openpolicyagent.org/docs/latest/terraform/). They are both free and open source, but beware: they can’t be used in a Terraform Cloud execution.
- You can also write policies with Sentinel (which we studied in the previous recipe) to integrate compliance rules for estimated costs. For more information, please read the documentation at https://www.terraform.io/docs/cloud/cost-estimation/index.html#verifying-costs-in-policies.