A meditation on competence and irreversible systems in an era where AI makes it easy to act without understanding.
For most of the last decade, we have told engineers a comforting story: “Abstraction makes systems safer.” We have moved up the stack, hidden complexity behind better tooling, and trusted that higher-level interfaces would protect us from the sharp edges underneath.
AI challenges that story, not because it introduces more abstraction, but because it collapses the distance between intent and action faster than it collapses the distance between ignorance and understanding. That distinction matters, especially in infrastructure.
The Phrasebook Problem
Years ago, while traveling in Portugal, I watched a familiar pattern play out. Armed with a Portuguese phrasebook, we asked an elderly local for directions to a pharmacy. The question was flawless. The pronunciation was careful. The response was generous, detailed, and completely unintelligible to us! We had succeeded at speaking without succeeding at understanding.
AI has given organizations a kind of phrasebook for software and infrastructure. We can now ask sophisticated questions and receive plausible, well-structured answers without possessing the fluency required to evaluate them.
The question platform leaders should be asking is not whether AI can generate code. It is this:
What is the value of generating artifacts you do not understand and cannot safely reason about?
What’s Actually New About the AI Era
Despite the headlines, the most important change in AI is not the models themselves. Year over year, their capabilities improve incrementally. What has changed dramatically is everything around them: the tooling, the workflows, the feedback loops, and the places where AI is allowed to act. The real innovation is how we’re harnessing AI.
We have become extraordinarily good at channeling AI output directly into real systems. CI pipelines, deployment tools, and infrastructure provisioning workflows now accept machine-generated intent with minimal friction. That leverage explains why so many skeptics have flipped. The results feel immediate, the productivity gains appear real, and in many contexts they are. But leverage without understanding introduces a new kind of risk: it creates the ability to look competent without being competent.
When Writing Detaches From Reading
Until recently, reading and writing code were inseparable skills. If you could write it, you could usually read it. If you understood what a system did, you could modify it responsibly. AI breaks that symmetry.
Today, it is possible to generate code you cannot meaningfully read, evaluate, or debug. The output may compile, and it may even work, at least temporarily. When it fails, the person who initiated the change may have no idea why.
This is not a theoretical concern. We have already seen examples where AI-generated infrastructure code looked reasonable, passed casual inspection, and was completely wrong. Not because of bad intent or incompetence, but because pattern completion is not the same thing as systems understanding. The danger is not sloppiness; the danger is misplaced confidence.
Why Infrastructure Is Different
In application development, mistakes are often cheap. Tests fail. Features can be rolled back. A bad deployment can be reverted.
Infrastructure does not work that way. Infrastructure is stateful. When it changes, the world changes with it. For example, you can delete a database and recreate it, but you do not get the data back. You can remove a DNS record and restore it, but caches, timeouts, and downstream failures do not rewind. You can fix the code that caused corruption, but the corruption remains.
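One way to take irreversibility seriously is to detect it before a change is applied. The sketch below assumes Terraform's JSON plan representation (the output of `terraform show -json <planfile>`), which lists each resource change with its planned actions; the sample plan itself is hypothetical. It simply flags any change that would destroy state, since those are the ones that cannot be rewound.

```python
import json

# Actions that destroy state; re-running the code does not undo them.
IRREVERSIBLE_ACTIONS = {"delete"}

def destructive_changes(plan_json: str) -> list[str]:
    """Return addresses of resources a plan would destroy.

    Assumes the JSON emitted by `terraform show -json <planfile>`;
    a replacement typically appears as the action pair
    ["delete", "create"], so it is flagged as well.
    """
    plan = json.loads(plan_json)
    flagged = []
    for rc in plan.get("resource_changes", []):
        actions = set(rc.get("change", {}).get("actions", []))
        if actions & IRREVERSIBLE_ACTIONS:
            flagged.append(rc["address"])
    return flagged

# A trimmed, hypothetical plan: one in-place update, one replacement.
sample = json.dumps({
    "resource_changes": [
        {"address": "aws_instance.web",
         "change": {"actions": ["update"]}},
        {"address": "aws_db_instance.main",
         "change": {"actions": ["delete", "create"]}},
    ]
})

print(destructive_changes(sample))  # → ['aws_db_instance.main']
```

A gate like this does not require anyone to understand the whole generated diff; it only requires the platform team to have decided, once, which actions demand a closer look.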
This is why infrastructure errors behave less like software bugs and more like aviation incidents. They happen mid-flight, with real systems and real users already in motion. AI does not make this better or worse by default; it simply makes it faster.
Automation Is Not Optional, but Naïve Automation Is Dangerous
Let’s be clear: At modern scale, manual infrastructure management is a fantasy. The volume of change is too high. The pace of software delivery is too fast. Automation, including AI-assisted automation, is a necessity. But there is a critical difference between automating execution and automating judgment.
Organizations that blur that line tend to fail in predictable ways. Platform teams become bottlenecks and get bypassed. Guardrails are removed in the name of speed. “Just this once” becomes standard practice. Eventually, something irreversible happens: a data leak, a corruption event, or an outage that cannot be cleanly undone. At that point, AI is not the root cause. It is simply the force multiplier that made the failure arrive sooner.
The Real Role of Humans in the Loop
Keeping humans in the loop does not mean asking people to approve ever-larger changes they cannot possibly understand. No human can responsibly review thousands of lines of generated infrastructure code. Faced with that volume, a reviewer's honest answer is "I cannot evaluate this"; the common answer is false confidence.
The real human role sits upstream. Platform teams exist to encode judgment, not to manually exercise it every time. Their job is to define golden paths, constrain degrees of freedom, and turn deep expertise into rules that machines can enforce consistently.
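What "encoding judgment" can look like in practice is a small, explicit rule set that every proposed change is checked against, whether it came from a person or a model. The rules below are hypothetical examples of the kind a platform team might define (approved regions, required tags, forbidden resource types); the point is the shape, not the specific values.

```python
# Hypothetical golden-path rules a platform team encodes once,
# then enforces on every change, human- or machine-generated.
ALLOWED_REGIONS = {"us-east-1", "eu-west-1"}
REQUIRED_TAGS = {"owner", "cost-center"}
FORBIDDEN_TYPES = {"aws_iam_user"}  # e.g., no long-lived IAM users

def violations(resource: dict) -> list[str]:
    """Check one proposed resource against the paved-road rules."""
    problems = []
    if resource["type"] in FORBIDDEN_TYPES:
        problems.append(f"type {resource['type']} is off the golden path")
    if resource.get("region") not in ALLOWED_REGIONS:
        problems.append(f"region {resource.get('region')} is not approved")
    missing = REQUIRED_TAGS - resource.get("tags", {}).keys()
    if missing:
        problems.append(f"missing required tags: {sorted(missing)}")
    return problems

# A proposed resource that drifts off the paved road in two ways.
proposed = {
    "type": "aws_s3_bucket",
    "region": "ap-south-2",
    "tags": {"owner": "ml-team"},
}
for problem in violations(proposed):
    print(problem)
```

The expertise lives in choosing the rules; the machine's job is only to apply them consistently, at the speed changes now arrive.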
This is not about removing autonomy. It is about recognizing a hard truth: there are infinitely many ways for infrastructure to go wrong and very few ways for it to go right. AI works best when it is forced to travel paved roads, whether the operator is a junior engineer or another machine.
Craft Doesn’t Scale
There will always be room for deep experts and bespoke solutions. But organizations cannot depend on artisanal correctness to run critical systems at scale. It’s like craft beer: it exists, but most people do not drink it every day. Infrastructure is no different.
AI accelerates this reality. It rewards organizations that have invested in clear abstractions, well-defined constraints, and institutional knowledge. It punishes those that rely on heroics and tribal memory.
A Sober Conclusion
AI raises the stakes in infrastructure: it makes the need for expertise more acute, not less.
The winners in this next phase will not be the teams that generate the most code the fastest. The winners will be the ones who understand where automation belongs, where it does not, and how to harness it without mistaking fluency for comprehension.
A phrasebook can get you started. It can even make you sound confident. But someone still needs to know the language. Especially when the directions matter.