So you’ve just been tasked with creating a Terraform Provider (or maybe upgrading an existing one). As you do your research to prepare for the project, you slowly begin to realize, “well, this looks like I’ll just be developing a Terraform-specific wrapper for a client of my API.”
It’s not particularly difficult, but it seems tedious.
Is there a better way to build this? Maybe something that can be modular and automated? A way that’s actually stimulating to implement?
If that speaks to you, then you’re not all too different from my coworkers and me. We wanted something better. Something interesting. Something easy to maintain and quick to extend.
In this article, we cover some basics, explain why we chose to generate our provider rather than hand-write it, walk through the structure of the project, build and test the provider, and leave you with exercises for extending it further.
To get the most out of this article, keep the following in mind.
Now, let’s cover some basics.
Terraform is an open-source Infrastructure as Code tool developed by HashiCorp. It is used to build, change, and version cloud and on-prem resources safely and efficiently using a declarative language. It’s similar in nature to AWS CloudFormation, but isn’t limited to just AWS.
You can learn more from this introduction to Terraform. Then, try one of the Getting Started Projects (it defaults to AWS, but you can select other options, like Docker, from the left column of the page).
Not sure what a Provider is? Well, you better figure it out before you go building one!
To put it simply, Providers are Terraform’s version of plugins. You can learn more about them from HashiCorp.
Resources and Data Sources are fundamental components of Terraform and are very important to understand before building a provider. Learn about resources here. Learn about data sources here.
You can learn all this and more on HashiCorp’s Plugin Development page.
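To make that a little more concrete before we start generating anything, here is a minimal, hand-written sketch of roughly what a resource definition looks like in a Go provider built on HashiCorp’s terraform-plugin-sdk/v2. Everything here (the resource name, the field names, the stub bodies) is illustrative rather than lifted from our generated code; it just shows the shape of what our templates will eventually produce.

package provider

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// resourceDevice sketches the shape of a resource: a schema describing the
// configurable fields plus CRUD functions that would call an API client.
func resourceDevice() *schema.Resource {
	return &schema.Resource{
		CreateContext: resourceDeviceCreate,
		ReadContext:   resourceDeviceRead,
		DeleteContext: resourceDeviceDelete,
		Schema: map[string]*schema.Schema{
			"display_name": {Type: schema.TypeString, Required: true, ForceNew: true},
			"name":         {Type: schema.TypeString, Required: true, ForceNew: true},
		},
	}
}

func resourceDeviceCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	// In a real provider, the API client is called here and the new object's
	// ID is recorded so Terraform can track the resource.
	d.SetId("example-id")
	return resourceDeviceRead(ctx, d, m)
}

func resourceDeviceRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	// Fetch the object from the API and copy its fields back into state here.
	return nil
}

func resourceDeviceDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	// Delete the object via the API; clearing the ID removes it from state.
	d.SetId("")
	return nil
}

Notice how much of this is boilerplate field plumbing; that is exactly the kind of code we want to generate rather than write by hand.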
Formerly known as a Swagger Specification (you may see us reference it as such), an OpenAPI Specification is a formal way to describe a REST API. You can learn more from the Swagger team here. If you’re so inclined, you can also sneak a peek at the Specification itself here. Finally, you can check out LogicMonitor’s API Specification right here.
Go-Swagger is an awesome Golang package that provides the Go community with a Swagger 2.0 implementation. It does this mainly using the Go Template package. This package does a whole lot, but we’re primarily interested in its ability to generate API Clients and its support of custom code generation (again, through Go Templates!).
You can learn more about Go-Swagger here. Here are some sections we think are particularly important to review: client generation from spec, custom generation, and custom templates. For extra credit, take a look at the schema generation rules. Finally, the repository can be found here.
The template package in Go allows for the data-driven generation of textual output (in our case, code). It can be very difficult to digest a Go template if you don’t understand the notation, so we highly recommend checking out the documentation here. If you want to get your hands dirty, gopheracademy.com has a great introduction to Go Templates here.
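To give you a taste of that notation, here is a tiny, self-contained Go program (the model type and its fields are made up purely for illustration) that uses text/template to emit a Go struct, which is essentially what go-swagger’s model templates do at a much larger scale:

package main

import (
	"os"
	"text/template"
)

// Model is a toy stand-in for the data go-swagger hands to its templates:
// just a type name and a list of field names.
type Model struct {
	Name   string
	Fields []string
}

// structTmpl renders a Go struct from a Model. The {{range}} / {{.}} notation
// is exactly what you'll see (in far more elaborate form) in real templates.
const structTmpl = `type {{.Name}} struct {
{{- range .Fields}}
	{{.}} string
{{- end}}
}
`

func main() {
	t := template.Must(template.New("struct").Parse(structTmpl))
	_ = t.Execute(os.Stdout, Model{Name: "Device", Fields: []string{"DisplayName", "Name"}})
}

Running it prints a small Device struct; swap in a different Model and the same template happily produces different code, which is the whole trick behind this project.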
That depends on your goals and whether or not you trust us 🙂 If you want to fully test out the provider we’re building in this article, then you’ll need an LM portal. The provider will need to communicate with a specific portal to carry out the processes we’re implementing. Luckily, LogicMonitor provides a free trial!
Once you have your new portal set up (or if you already have one), you’ll need to generate some API tokens. You can learn how to do that here.
Now, if you don’t necessarily have to test this provider (maybe you just want to understand the process so you can apply it to your own API), then you won’t need a LogicMonitor portal. You’ll still be able to learn a lot while going through this article, the provider build process, and the code in the accompanying GitHub repository.
In order to start working on this project, you’ll need to have a few tools installed and set up:
Ah yes, one of the biggest questions of all.
Why?
When we were tasked with putting together a Terraform Provider for the LogicMonitor API, we initially considered a more standard approach: manual implementation. It was during our due diligence that we discovered Go-Swagger.
We knew we wouldn’t want to manually develop an SDK/client for the LogicMonitor API, so Go-Swagger was the perfect solution.
As we continued our research, something rather obvious became apparent to us: a Provider is essentially just a Terraform-specific wrapper for an API client written in Go.
That raised the question: if we can automate the creation of that API client, why not automate the creation of the Terraform wrapper as well? Since Go-Swagger supports custom code generation, we decided to take the leap.
The primary benefit of this approach is automation.
Going into this project, we knew we would want to start small and incrementally add functionality to our provider. Additionally, after looking over existing provider implementations, we recognized that there tends to be a lot of boilerplate code in them (mostly mapping fields from one object to another).
As you’ll see later, this approach allows us to extend our provider very easily by leveraging our API’s OpenAPI specification. We can break down this (rather large) specification into individual features/components. Then, whenever we need to add functionality, we simply add the corresponding component to Go-Swagger’s input specification. Don’t worry if this doesn’t make complete sense yet, it will become clear as we move forward.
As you may have already gathered from the prerequisites section, this approach is complicated! There are a significant number of technologies to become familiar with and, on top of that, working with generated code isn’t a walk in the park.
Not to sound dismissive, but we didn’t see this as much of a negative. In fact, it was quite positive in our minds. We had been looking for a new challenge and an opportunity to learn something new, and this approach was the perfect solution.
Below you will find summaries of the various directories and files that make up the project. This is just to give you a high-level understanding of the roles and responsibilities of each of these components. You can find more detailed information in the files themselves; we’ve added helpful comments to the code in an effort to explain parts of it that are either complex or whose purpose may be difficult to intuit.
Bold files/directories = items found at the root level of the repo
Gray background rows = child items of those directories
* = templates that correspond to base Go-Swagger templates. These have been slightly altered to better fit our API. If you apply this framework to your own API, you’ll want to alter config.yml to point to the original templates instead of ours, at least to start.
You can find this same information in the project repository’s README, but we’ve included the following for convenience. If it doesn’t work, check the repository in case it has been updated since this article was published.
$ git clone [email protected]:logicmonitor/automated-terraform-provider.git
$ cd automated-terraform-provider/
$ make
Assuming you didn’t run into any build errors, you’re all set to start testing!
After running the Makefile, your provider should be installed. That means we just need to write up a Terraform configuration file using our provider, initialize the provider, and run it.
To help, we’ve already included a test/ directory that contains a test.tf Terraform configuration file you can use for testing. Be sure to edit the test.tf file by entering values for api_id, api_key, and company within the LogicMonitor provider definition. See the Prerequisites section for more information about generating the API Key and ID.
To actually run the tests, we’ve included a Makefile to provide convenience commands that we thought were helpful during the development process.
You must initialize any new provider after installing it in your Terraform plugin directory; you can do that by running terraform init. However, Terraform locks the version of a provider once it has been initialized. If you then modify the provider without changing its version number, Terraform will refuse to initialize the changed build and display an error.
During development, we often need to make incremental changes without actually changing the version number so this sort of behavior can become a problem. To fix this, we added the clean target. It deletes the lock file and re-initializes the provider using the most recent build in your repo.
$ make clean
rm .terraform.lock.hcl
terraform init

Initializing the backend...

Initializing provider plugins...
- Finding logicmonitor.com/com/logicmonitor versions matching "0.1.0"...
- Installing logicmonitor.com/com/logicmonitor v0.1.0...
- Installed logicmonitor.com/com/logicmonitor v0.1.0 (unauthenticated)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
These two targets, plan and apply, don’t need much introduction. They run the terraform plan and terraform apply steps respectively, using a file, lm.tfstate, to save the generated plan for reference if need be.
$ make plan
terraform plan -out=lm.tfstate

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # data.logicmonitor_device.my_devices will be read during apply
  # (config refers to values not yet known)
 <= data "logicmonitor_device" "my_devices" {
      + filter = "displayName~\"Cisco Router\""
      + id     = (known after apply)
    }

  # logicmonitor_device.my_device will be created
  + resource "logicmonitor_device" "my_device" {
      + auto_properties = (known after apply)
      ...
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + devices = {
      + auto_balanced_collector_group_id = null
      ...
    }

------------------------------------------------------------------------

This plan was saved to: lm.tfstate

To perform exactly these actions, run the following command to apply:
    terraform apply "lm.tfstate"
$ make apply
terraform apply lm.tfstate

logicmonitor_device.my_device: Creating...
logicmonitor_device.my_device: Creation complete after 1s [id=6858]
data.logicmonitor_device.my_devices: Reading...
data.logicmonitor_device.my_devices: Read complete after 0s [id=6858]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path below. This state
is required to modify and destroy your infrastructure, so keep it safe. To
inspect the complete state use the `terraform show` command.

State path: terraform.tfstate

Outputs:

devices = {
  "auto_balanced_collector_group_id" = 0
  ...
}
By default, the Makefile will run clean, plan, and then apply.
After you run the tests, you should see the new device in your LogicMonitor portal.
You can now edit the test.tf file to make changes to your new device. Alternatively, you can remove the device by running the terraform destroy command.
Now that you have (1) examined and understood the templates used to generate the provider, (2) generated the provider code, and (3) tested the provider, you can confidently work toward making your own changes (more on that in the next section!).
Before you do that, we have a tip that helped us immensely during the development of the LogicMonitor Provider.
We recommend first making changes to the generated source code directly, instead of trying to tweak the templates straight away. Get things working and test them (here is where the make nogen target of the root directory Makefile is very useful) and THEN alter the templates to replicate the manual changes you’ve made. Better yet, you can make manual changes and then commit them to the Git branch you’re working on. This will keep the changes you made to the generated code safe to reference later.
Once your changes are tested, working, and saved to your Git branch, you can modify the template files. Because you committed your changes to the generated code, making changes to the template files and building the code (using the make build target of the root directory Makefile) is much safer. Make your template file changes, generate the code, and run a diff command like git diff; you can now precisely identify the changes your new template files are producing.
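Concretely, the loop we are describing looks something like this (the commit message is just an example):

$ git add . && git commit -m "manual changes to generated code"   # snapshot your hand-edits
$ make build   # regenerate the code from your updated templates
$ git diff     # compare the regenerated code against your hand-edits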
Whew! That’s a lot of information! Let’s take a step back and review the high-level process of adding functionality to the provider.
Now that you’ve got a handle on all of this, let’s really push your understanding and show you what the development cycle looks like through some additional exercises!
Importing is an important feature for any halfway serious Terraform Provider. We’ve left it unimplemented so that you can add it yourself. Learn more about developing the import feature here: https://www.terraform.io/plugin/sdkv2/resources/import
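As a starting point, here is a minimal sketch of the simplest possible wiring, assuming the provider is built on terraform-plugin-sdk/v2 and using its passthrough importer (the function and resource names are illustrative):

package provider

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// resourceDeviceWithImport shows where an Importer hangs off a resource
// definition; the generated CRUD functions and Schema would sit alongside it.
func resourceDeviceWithImport() *schema.Resource {
	return &schema.Resource{
		// ... CreateContext, ReadContext, DeleteContext, Schema ...
		Importer: &schema.ResourceImporter{
			// Hands the ID passed to `terraform import` straight to the Read function.
			StateContext: schema.ImportStatePassthroughContext,
		},
	}
}

Once that is wired into the templates and the provider is rebuilt, importing an existing device should look something like terraform import logicmonitor_device.my_device <device-id>.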
We’ve included a ready-made dashboards component spec file in the GitHub repository, but its values weren’t included in the current.json file. Try adding it by running the createSpecFile.py script in the spec_files directory and then rebuilding the provider.
Does it build successfully? When you test the addition of dashboards, are you able to create, edit, and delete dashboards using the dashboard resource? Is the dashboard data source functioning properly?
If you’re having trouble, we have a branch in the automated-terraform-provider repo called “adding-dashboards” that will show you how we got it to work.
Like we’ve mentioned, one of the major advantages of our approach is that it’s easy to add new components to the provider as needed. We covered an easy example with the dashboards exercise above, but now the choice is yours!
Look through the various definitions in the reference LM OpenAPI spec file and see if there’s one that interests you. Some options: Admin (LM’s user object), AlertAck, Alert Rule, Chain, Collector, Device Group, Dashboard Group, Role, SDT, and WebCheck.
The overall process for creating a component spec file is to identify the object that will be your resource (e.g., Device), copy all definitions related to that object (e.g., Device, DevicePaginationResponse, ErrorResponse, and NameAndValue), copy all the paths you’ll need (the CRUD and getList paths, e.g., GET/POST /device/devices and GET/DELETE/PUT /device/devices/{id}), and finally copy the appropriate tag. You can see all of these items in the device.json and dashboard.json files in the spec_files directory.
One final note: you will need to add a marker field to the definition of the object that will map to your Terraform resource. The field to add is “example” with the value “isResource”, and it should only be added to the definition of the object that represents the resource you want to add.
You can look at the existing component spec files for examples (compare, for instance, the Device definition with the DevicePaginationResponse definition). Objects that are resources need to be treated differently from the standard data objects; search the repository for “isResource” to see how the two cases differ.
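For reference, here is a heavily abbreviated, illustrative fragment of what that marker looks like inside a component spec’s definitions section (the property lists are trimmed down; the “example”: “isResource” field on the resource object is the important part):

{
  "definitions": {
    "Device": {
      "type": "object",
      "example": "isResource",
      "properties": {
        "displayName": { "type": "string" }
      }
    },
    "DevicePaginationResponse": {
      "type": "object",
      "properties": {
        "items": { "type": "array", "items": { "$ref": "#/definitions/Device" } }
      }
    }
  }
}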
When a provider is added to the Terraform Registry, it is expected to ship with documentation that helps users get the most out of it. This process can be “automated” as well. Within our framework, that typically means adding the documentation to the spec file in the appropriate fields and then accessing those fields from a new template (or two). To summarize, this exercise requires changing the spec files (adding your documentation), adding new templates for the documentation, and updating the config.yml file to include these new templates. You can learn more about what documentation Terraform expects here: https://www.terraform.io/registry/providers/docs
Now that you have a great understanding of the end-to-end process, try applying it to your own product’s API. All you need to do is construct a new current.json that uses your API’s OpenAPI specification, switch to the original Go-Swagger client templates, and generate the code! Add in some tweaks to the templates to better handle your own objects and you’ll be well on your way to having a full-fledged, custom-built Terraform provider.
Surely, once you’ve completed each of these exercises you will be a master of automating the generation of a Terraform provider for your particular API.
This blog was written by Adam Johnson with contributions from Carlos Alvarenga and Ned Imming.
Adam Johnson is an employee at LogicMonitor.