
Book Review: Terraform Cookbook: Efficiently define, launch, and manage Infrastructure as Code across various cloud platforms

Recently, I finished reading Terraform Cookbook: Efficiently define, launch, and manage Infrastructure as Code across various cloud platforms by Mikael Krief.

I am already experienced with Terraform, and have read 3 other Terraform books, along with many other articles, blogs, and videos. Not to mention my own blog posts about Terraform, my 3-part video series on “Infrastructure-as-Code (IaC) Using Terraform” available on my YouTube channel, and my own personal curated list of Terraform Learning Resources (which I plan to add this book to).

Out of the many books, blogs, articles, etc. I’ve read, I found this one to be very clear and concise. It is a great resource if you are just getting started with Terraform. It doesn’t overload you with a lot of text/material, and has a great format of… Getting Ready, How To Do It, and How It Works.

There isn’t one specific chapter that stands out for me, but I appreciated all of the extra links shared for additional information. In Chapter 4 (“Using the Terraform CLI”), Chapter 5 (“Sharing Terraform Configuration with Modules”), and Chapter 6 (“Provisioning Azure Infrastructure with Terraform”), there are some good links to tools that generate Terraform configuration files, as well as to testing frameworks.

I’ve decided to share my highlights from reading this specific publication, in case the points that I found of note/interest will be of some benefit to someone else. So, here are my highlights (by chapter). Note that not every chapter will have highlights (depending on the content and the main focus of my work).

If my highlights pique your interest, I strongly recommend that you pick up a copy for yourself.

Chapter 1: Setting Up the Terraform Environment

  • It is important to know that if your Terraform configuration is in version 0.11, it is not possible to migrate it directly to 0.13. You will first have to upgrade to 0.12 and then migrate to 0.13.
  • It is also recommended by HashiCorp that, before performing the migration process, you commit your code to a source control manager (for example, Git) so that you can visualize the code changes introduced by the migration (see the sketch after these highlights).
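
To make the staged upgrade path above a little more concrete, here is a minimal, hedged sketch of pinning the Terraform binary version with required_version during a 0.11 -> 0.12 -> 0.13 migration. The constraint value and the upgrade-helper workflow shown in the comments are illustrative, not prescriptions from the book.

    terraform {
      # Pin the binary while upgrading in stages (0.11 -> 0.12 -> 0.13)
      required_version = "~> 0.12"
    }

    # Commit the configuration to Git first, then run the upgrade helper
    # shipped with each release, using the matching binary:
    #   terraform 0.12upgrade
    #   terraform 0.13upgrade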

Chapter 2: Writing Terraform Configuration

  • With regard to specifying the provider version: when executing the terraform init command, if no version is specified, Terraform downloads the latest version of the provider; otherwise, it downloads the specified version (a sketch of a version constraint appears after this chapter’s highlights).
  • It is also important to mention that the version of the Terraform binary that’s used is specified in the Terraform state file. This is to ensure that nobody applies this Terraform configuration with a lower version of the Terraform binary, thus ensuring that the format of the Terraform state file conforms with the correct version of the Terraform binary.
  • In addition, with the 0.13 version of Terraform released in August 2020, we can now create custom validation rules for variables which makes it possible for us to verify a value during the terraform plan execution.
  • Note that using the -var option or the TF_VAR_<name of the variable> environment variable doesn’t hardcode these variables’ values inside the Terraform configuration. They make it possible for us to provide variable values on the fly. But be careful: these options can have consequences if the same code is executed with other values provided in parameters and the plan’s output isn’t reviewed carefully.
  • Optionally, we can add a description that describes what the output returns, which can also be very useful for autogenerated documentation or in the use of modules.
  • An article explaining the best practices surrounding Terraform configuration can be found at https://www.terraform-best-practices.com/code-structure.
  • The following blog post explains the folder structure for production Terraform configuration: https://www.hashicorp.com/blog/structuring-hashicorp-terraform-configuration-for-production
  • The use of data blocks is preferable to the use of IDs written in clear text in the code, which can change, because the data block retrieves the information dynamically.
  • Separating the Terraform configuration is a good practice because it allows better control and maintainability of the Terraform configuration. It also allows us to provision each part separately, without it impacting the rest of the infrastructure.
  • To know when to use a data block or a terraform_remote_state block (a sketch contrasting the two appears after this chapter’s highlights), the following recommendations must be kept in mind:
    • The data block is used in the following cases:
      • When the external resources have not been provisioned with a Terraform configuration (they were built manually or with a script)
      • When the user provisioning the resources of our Terraform configuration does not have access to the other remote backend
    • The terraform_remote_state block is used in the following cases:
      • When the external resources have been provisioned with a Terraform configuration
      • When the user provisioning the resources of our Terraform configuration has read access to the other remote backend
      • When the external Terraform state file contains, as an output, the property we need in our Terraform configuration
  • There is an external resource in Terraform that allows you to call an external program and retrieve its output data so that it can be used in the Terraform configuration (a sketch appears after this chapter’s highlights).
  • This external resource contains specifics about the protocol, the format of the parameters, and its output. I advise that you read its documentation to learn more: https://www.terraform.io/docs/providers/external/data_source.html
  • The following are some example articles regarding how to use the Terraform external resource:
    • https://dzone.com/articles/lets-play-with-terraform-external-provider
    • https://thegrayzone.co.uk/blog/2017/03/external-terraform-provider-powershell/
  • In this function, we used the %s verb to indicate a character string that will be replaced, in order, by the name of the application and the name of the environment (see the sketch after this chapter’s highlights).
  • It is important to know that the local-exec provisioner, once executed, will not be executed a second time by a subsequent terraform apply command, because the resource it belongs to is already recorded in the Terraform state file.
  • To be able to execute the local-exec command based on a trigger element, such as a resource that has been modified, it is necessary to add a triggers object inside the null_resource that will act as the trigger for the local-exec provisioner.
  • Please note that a property being sensitive in Terraform means that it will not be displayed in the console output of the terraform plan and apply commands. On the other hand, it will still be present in clear text in the Terraform state file.
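
To make a few of the highlights above concrete (provider version constraints, custom validation rules, variable values passed on the fly, and output descriptions), here is a minimal, hedged sketch. The provider, variable, and output names are illustrative and not taken from the book.

    terraform {
      required_providers {
        # If no version constraint is given, terraform init downloads the latest provider
        azurerm = {
          source  = "hashicorp/azurerm"
          version = "~> 2.0"
        }
      }
    }

    variable "environment" {
      type        = string
      description = "Name of the target environment"

      # Custom validation rule (Terraform >= 0.13), checked during terraform plan
      validation {
        condition     = contains(["dev", "test", "prod"], var.environment)
        error_message = "The environment must be dev, test, or prod."
      }
    }

    # The value can be supplied without hardcoding it in the configuration:
    #   terraform plan -var "environment=dev"
    #   TF_VAR_environment=dev terraform plan

    output "environment_name" {
      description = "Environment the resources were provisioned for"
      value       = var.environment
    }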
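
The next hedged sketch contrasts a data block with a terraform_remote_state block, as summarized in the list above. The resource group name, the backend settings, and the vnet_id output are assumptions made for illustration only.

    # data block: reads an existing resource directly from the provider
    data "azurerm_resource_group" "existing" {
      name = "rg-network"
    }

    # terraform_remote_state: reads an output exposed by another configuration's state file
    data "terraform_remote_state" "network" {
      backend = "azurerm"

      config = {
        resource_group_name  = "rg-tfstate"
        storage_account_name = "sttfstate"
        container_name       = "tfstate"
        key                  = "network.tfstate"
      }
    }

    locals {
      rg_location = data.azurerm_resource_group.existing.location
      vnet_id     = data.terraform_remote_state.network.outputs.vnet_id
    }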
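
Here is a short, hedged sketch of the external data source mentioned above: the program it calls must accept a JSON object on stdin and return a JSON object on stdout. The script path and the query key are hypothetical.

    data "external" "app_info" {
      program = ["python3", "${path.module}/scripts/get_app_info.py"]

      query = {
        environment = var.environment
      }
    }

    output "app_info" {
      value = data.external.app_info.result
    }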
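
Finally, a hedged sketch combining the format() function with a null_resource whose triggers map re-runs the local-exec provisioner whenever the computed value changes. The variable names and the command are illustrative.

    resource "null_resource" "run_script" {
      triggers = {
        # format() replaces each %s, in order, with the application and environment names
        webapp_name = format("%s-%s", var.app_name, var.environment)
      }

      provisioner "local-exec" {
        command = "echo 'Deploying ${self.triggers.webapp_name}'"
      }
    }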

Chapter 3: Building Dynamic Environments with Terraform

  • Regarding the lookup and element functions: they can be used, but it is preferable to use the native syntax instead (such as var_name[42] and var_map["key"]) to access elements of a map, list, or set (see the sketch below).
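
A brief, hedged illustration of the native index syntax preferred over lookup and element; the variable names and values are examples.

    variable "locations" {
      type    = list(string)
      default = ["westeurope", "eastus"]
    }

    variable "tags" {
      type    = map(string)
      default = { env = "dev" }
    }

    locals {
      first_location = var.locations[0]   # rather than element(var.locations, 0)
      env_tag        = var.tags["env"]    # rather than lookup(var.tags, "env")
    }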

Chapter 4: Using the Terraform CLI

  • Among the other options of the terraform fmt command, there is also the -check option, which can be added and allows you to preview the files that would be reformatted, without applying the changes to the file(s).
  • With the Terraform extension for Visual Studio Code, we can have every Terraform file formatted with the terraform fmt command when it is saved. For more information, read the pertinent documentation: https://marketplace.visualstudio.com/items?itemName=HashiCorp.terraform.
  • For Git commits, it’s possible to automate the execution of the terraform fmt command before each commit by using pre-commits that are hooks to Git: https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks.
  • To use pre-commits with Terraform, refer to this list of hooks provided by Gruntwork: https://github.com/gruntwork-io/pre-commit.
  • If the Terraform configuration contains a backend block then, for this validation of the configuration, we don’t need to connect to the state file: we can add the -backend=false option to the terraform init command (see the sketch after this chapter’s highlights).
  • Finally, if the execution of this Terraform configuration requires variables passed with the -var argument or with the -var-file option, you cannot use this command. Instead, use the terraform plan command, which performs validation during its execution.
  • Since the terraform destroy command deletes all the resources tracked in the Terraform state file, it is important to split the Terraform configuration into multiple state files to reduce the room for error when changing the infrastructure.
  • If you need to destroy a single resource and not all the resources tracked in the state file, you can add the -target option to the terraform destroy command, which allows you to target the resource to be deleted.
  • Note that the targeting mechanism should only be used as a last resort. In an ideal scenario, the configuration stays in sync with the state file (as applied without any extra target flags). The risk of executing a targeted apply or destroy operation is that other contributors may miss the context and, more importantly, it becomes much more difficult to apply further changes after changing the configuration.
  • Be careful: deleting a workspace does not delete the associated resources. That’s why, in order to delete a workspace, you must first delete the resources provisioned in that workspace using the terraform destroy command. If this operation is not carried out, it will no longer be possible to manage these resources with Terraform, because the Terraform state file of this workspace will have been deleted.
  • If you are provisioning resources in Azure, there are rather interesting tools that generate the Terraform configuration and the corresponding Terraform state file from Azure resources that have already been created. One such open source tool, Az2tf, is available at https://github.com/andyt530/py-az2tf. Alternatively, there is TerraCognita, which is available at https://github.com/cycloidio/terracognita/blob/master/README.md.
  • Moreover, in order to cancel the taint flag applied with the terraform taint command, we can execute the inverse command, terraform untaint.
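
As referenced in the highlight about -backend=false above, here is a hedged sketch of validating a configuration that declares a remote backend without connecting to its state file. The azurerm backend settings are placeholders, not values from the book.

    terraform {
      backend "azurerm" {
        resource_group_name  = "rg-tfstate"
        storage_account_name = "sttfstate"
        container_name       = "tfstate"
        key                  = "app.tfstate"
      }
    }

    # Initialize without configuring the backend, then validate:
    #   terraform init -backend=false
    #   terraform validate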

Chapter 5: Sharing Terraform Configuration with Modules

  • Documentation on the use of the generator is available at https://docs.microsoft.com/en-us/azure/developer/terraform/create-a-base-template-using-yeoman.
  • Yeoman documentation is available at https://yeoman.io/.
  • Among all of the tools in the Terraform toolbox, there is terraform-docs, an open source, cross-platform tool that allows the documentation of a Terraform module to be generated automatically.
  • We execute terraform-docs, specifying the documentation format as the first argument (in our case, markdown format) and the path of the modules directory as the second argument.
  • To go further, we added the > Modules/webapp/Readme.md redirection, which writes the content of the generated documentation to the Readme.md file that will be created in the module directory.
  • In our recipe, we chose to generate markdown documentation, but it is also possible to generate it in JSON, XML, YAML, or text (pretty) format. To do so, you have to add the format option to the terraform-docs command. To know more about the available generation formats, read the documentation here: https://github.com/terraform-docs/terraform-docs/blob/master/docs/FORMATS_GUIDE.md.
  • The mechanism of the Terrafile pattern is that, instead of using the Git sources directly in module calls, we reference them in a Terrafile (a file in YAML format). In the Terraform configuration’s module call, we instead use a local path relative to the modules folder (see the sketch after this chapter’s highlights).
  • Among the Terraform testing frameworks and tools is Terratest, created by the Gruntwork community (https://gruntwork.io/), which is popular and allows tests to be written in the Go language.
  • If Terraform modules provision resources in cloud providers, the authentication parameters must be set before running the tests.
  • Read this blog post about Terratest and GitHub Actions, provided by HashiCorp: https://www.hashicorp.com/blog/continuous-integration-for-terraform-modules-with-github-actions/.
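
A minimal, hedged sketch of the module call used with the Terrafile pattern described above: the source is a local path under a modules folder (populated from the Git sources listed in the Terrafile) rather than a Git URL. The module name, path, and inputs are illustrative.

    module "webapp" {
      # Local path filled by the Terrafile tooling instead of a git:: source
      source = "./modules/webapp"

      app_name    = var.app_name
      environment = var.environment
    }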

Chapter 6: Provisioning Azure Infrastructure with Terraform

  • A tutorial that shows how to use and configure locally installed Visual Studio Code to execute a Terraform configuration in Azure Cloud Shell: https://docs.microsoft.com/en-us/azure/developer/terraform/configure-vs-code-extension-for-terraform.
  • This method allows you to provision elements in Azure that are not available in the azurerm provider, but it is important to know that Terraform does not manage the resources described in this ARM template when it is executed (a sketch appears after this chapter’s highlights).
  • That is to say, these resources (in our case, the extension) do not follow the life cycle of the Terraform workflow and are not registered in the Terraform state file. The only thing that is written in the Terraform state file is the configuration of the azurerm_template_deployment resource. As a consequence, if you run the terraform destroy command on the Terraform configuration, the resources provided by the ARM template will not be destroyed; only the azurerm_template_deployment resource will be removed from the Terraform state file. For this reason, it is advisable that you use this type of deployment only to complement resources that have been provisioned with Terraform HCL code.
  • Documentation pertaining to the when property of provisioners is available here: https://www.terraform.io/docs/provisioners/index.html#destroy-time-provisioners (a sketch of a destroy-time provisioner appears after this chapter’s highlights).
  • If you want to keep a real IaC, it is preferable to use an as-code configuration tool, such as Ansible, Puppet, Chef, or PowerShell DSC.
  • Warning: There can only be one custom script extension per VM. Therefore, you have to put all the configuration operations in a single script.
  • One of the Terraform configuration generation tools is Terraformer, which is hosted in the Google Cloud Platform GitHub organization at https://github.com/GoogleCloudPlatform/terraformer.
  • We generate the Terraform configuration by executing the following Terraformer command:
    • terraformer import azure --resources=resource_group --compact --path-pattern {output}/{provider}/
  • Terraformer also contains an option that allows a dry run to preview the code that will be generated. To do this, we will execute the following command that generates a plan.json file, along with a description of the resources that will be generated:
    • terraformer plan azure --resources=resource_group --compact --path-pattern {output}/{provider}/
  • We visualize the content of this created JSON file to check its conformity and then, in order to carry out the generation, we execute the following command:
    • terraformer import plan generated/azurerm/plan.json
  • Moreover, before using Terraformer, it is necessary to check that the resources to be generated are well supported. For example, in the case of Azure, the list of resources is available here: https://github.com/GoogleCloudPlatform/terraformer#use-with-azure.
  • Finally, among the other Terraform configuration generation tools, there is a very good tool called az2tf (https://github.com/andyt530/py-az2tf) that worked on the same principle as Terraformer, but unfortunately, this tool is no longer maintained. There is also TerraCognita (https://github.com/cycloidio/terracognita/), which still integrates a number of resources for Azure, and Terraforming (https://github.com/dtan4/terraforming), which is only operational for AWS.
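
Hedged sketch of deploying an ARM template from Terraform, as discussed above. It assumes the azurerm_template_deployment resource with an external JSON template; the resource names and file path are illustrative. As noted in the highlights, the resources created by the template itself are not tracked in the Terraform state file.

    resource "azurerm_template_deployment" "extension" {
      name                = "vm-extension-deployment"
      resource_group_name = azurerm_resource_group.rg.name
      deployment_mode     = "Incremental"

      # Resources declared inside this ARM template are not tracked in the state file
      template_body = file("${path.module}/arm/vm-extension.json")
    }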
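
And a short, hedged sketch of the when property on a provisioner (a destroy-time provisioner): the command runs only when the resource is destroyed. The null_resource and the command are examples.

    resource "null_resource" "cleanup" {
      provisioner "local-exec" {
        when    = destroy
        command = "echo 'running cleanup before this resource is destroyed'"
      }
    }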

Chapter 7: Deep Diving into Terraform

Chapter 8: Using Terraform Cloud to Improve Collaboration
