Over the last 10 years, I’ve had the opportunity to work at several different organizations. At each of them, my development workflow had to adapt to the processes in place. This made the working experience rather unique - as if it had its own flavour. And just like with food, I started noticing which flavours I enjoyed and which I didn’t. Compiling a binary and direct messaging it over to Dave so he could deploy it to production left a bad taste in my mouth. So did spending half a day tracking down the permission wizard who could set up a GitHub repository for the new project I had to ship.
There was a pattern - I disliked the parts of the process that drastically limited my productivity. And more often than not, that drop in productivity came from a lack of visibility and autonomy over resources owned by the organization. The issue stems from a simple but important reality: most of these resources come with a price tag. Whether it’s a database instance, a storage bucket, a key-value store, or just a GitHub repository with advanced permissions — they all contribute to the organization’s bill. And if these resources are created ad hoc, duplicated, or left running without oversight, that bill can ramp up quickly and silently. Yet these are the resources I rely on to do my job. I’ve come to believe that organizations that build digital products on the cloud and fail to address this tension cannot compete. The goal is to enable speed and self-service for engineering, while giving finance and operations the oversight and predictability they need. The solution lies in balance, not in centralising resource management in the hands of a single, omnipotent individual.
Fortunately, there are tools and practices that can alleviate this tension. The industry has converged on one in particular: Terraform. We’ll focus on its open-source fork, OpenTofu. OpenTofu allows you to describe the resources you need with declarative code. Think of it like placing an order at a restaurant — the code is your order, and OpenTofu is the chef. It figures out how to assemble what you asked for, following the rules of the restaurant (in this case, the organization’s policies and constraints). Because the resources are defined in code, they can be version controlled just like application code. That means teams get a clear, shared view of what resources exist, how they’ve changed over time, and who made those changes — all without relying on tribal knowledge or ad hoc requests.
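To make that concrete, here is a minimal sketch of what such an "order" can look like. It declares a single Google Cloud storage bucket; the project ID, region, and bucket name are placeholders, and your organization will have its own providers and naming conventions.

```hcl
# The provider tells OpenTofu which "kitchen" to cook in.
# Project ID and region below are placeholder values.
provider "google" {
  project = "my-org-project"
  region  = "europe-west1"
}

# The order itself: a single storage bucket, described declaratively.
# OpenTofu works out whether it needs to create, update, or leave it alone.
resource "google_storage_bucket" "assets" {
  name     = "my-org-static-assets" # bucket names must be globally unique
  location = "EU"

  uniform_bucket_level_access = true
}
```

Running `tofu plan` previews the changes OpenTofu intends to make, and `tofu apply` carries them out, so nothing reaches the table that hasn’t been reviewed first.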
Because OpenTofu is code, collections of related resources can be packaged into modules — reusable units of infrastructure that follow a consistent pattern. Sticking with the restaurant analogy: if writing OpenTofu is like placing an order, then a module is the set menu. Instead of ordering each individual item à la carte — an appetizer, a main course, a drink, and dessert — you just ask for the “Static Website Combo” or the “Database & Cache Special.” Behind the scenes, the module knows exactly what ingredients (resources) to put together, how they should be prepared (configured), and how they should be served (deployed). This makes life easier for everyone. Engineers don’t have to figure out how to configure every piece from scratch. And the organization benefits from consistency — the same set menu is used across teams, ensuring standards for security, networking, monitoring, and cost controls are followed by default.
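Here is a hypothetical sketch of what ordering a set menu looks like in practice. The module source, repository, and every input variable below are made up for illustration; a real module would define its own inputs and defaults.

```hcl
# Ordering the "Static Website Combo" in one go. Internally, the module
# declares the bucket, CDN, DNS records, and certificates it needs.
# The source URL and all inputs are hypothetical placeholders.
module "docs_site" {
  source = "git::https://github.com/my-org/tofu-modules.git//static-website?ref=v1.2.0"

  project_id  = "my-org-project"
  domain_name = "docs.example.com"

  # Labels the organization relies on for cost attribution.
  labels = {
    team        = "docs"
    cost_center = "engineering"
  }
}
```

Because every team orders from the same menu, bumping the pinned version (`ref=v1.2.0`) is how the organization rolls out changes to the recipe consistently across all of its kitchens.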
In the articles that follow, I’ll dive deeper into how OpenTofu actually works in practice. I’ll walk through setting it up for an organization from scratch, explain how to structure environments and teams, and demonstrate real-world integrations with tools like Google Cloud and Firebase. Whether you’re new to OpenTofu or looking to bring more order to your existing cloud workflows, you’ll get a practical, hands-on guide to making infrastructure work for your teams — not against them.