
Platform engineering is eating the world. That’s not my phrase — that’s how the State of AI in Platform Engineering report puts it. Granted, the report comes from the platformengineering.org community, so a bit of enthusiasm is to be expected. Still, it’s clear that over the past few years platform engineering has evolved from a niche practice to a central operating model, and now, inevitably, it’s colliding head-on with the most disruptive force of our era: AI.

If you’ve been around DevOps long enough, you’ve seen this play before. We chase the shiny thing — containers, microservices, serverless, cloud-native — and somewhere between the conference keynotes and the boardroom mandates, adoption becomes “table stakes.” AI is no different. In fact, according to this latest research, AI has already crossed the adoption chasm in platform engineering: A full 89% of platform professionals use it daily. Let that sink in. Not since the dotcom boom have we seen a new technology reach this level of saturation so quickly.

But as this report makes clear, ubiquity is not the same as maturity. And hype is not the same as trust.

Everyone’s Using It — But To What End?

The headline numbers are staggering. Three-quarters of respondents use AI for code generation, and 70% for documentation. Tools like Cursor and GitHub Copilot pump out billions of lines of code. On paper, productivity has never looked better.

Yet the dirty secret is this: Most usage is tactical. Individual engineers experiment on their own, plugging in tools to save time on boilerplate, crank out user guides, or autocomplete tricky syntax. That’s useful, sure, but does it move the needle for organizations? Does it justify the hype and the C-suite mandates demanding ROI?

The survey suggests not yet. AI has become so accessible — browser-based, IDE plugins, zero infrastructure overhead — that it’s spread like wildfire. But grassroots adoption without strategy has created what I’ll call Shadow AI: Productivity wins in pockets, but no measurable enterprise impact. It’s the same lesson we learned from “Shadow IT” a decade ago. Without governance, integration and alignment, we’re just automating chaos.

The Great Divide: Hype vs. Reality

Perhaps the most counterintuitive finding: Platform engineers themselves are split down the middle on whether AI is overhyped. Forty-seven percent say yes, 45% say it’s appropriately hyped. Think about that. The very people tasked with building the foundations for AI can’t agree on whether it’s living up to expectations.

Why? Because we’ve hit what the report calls the “AI implementation plateau.” Early excitement delivered quick wins. But now? Organizations struggle to quantify ROI, to integrate AI into pipelines, and to validate outputs that often hallucinate. Executives are frustrated. Developers are exhausted by “prompt fatigue.” And platform teams are stuck in the middle.

It’s déjà vu. In the early DevOps days, we saw the same story: Great tools, clear potential, but culture, process and governance lagged far behind. AI today is following that exact curve.

From AI Users to AI Enablers

Here’s where platform engineering comes in. The report highlights a fascinating shift: A full 75% of platform teams are already hosting or preparing to host AI workloads. That’s more than just using Copilot in your IDE. It means becoming the enabler — the builder of platforms for AI, not just AI-powered platforms.

Think about it: Platform engineers are uniquely positioned to break through the plateau. They sit at the intersection of developers, data scientists, ML engineers and security. They know how to standardize, govern and scale. They’re already in the business of making the right path the easy path. Now, they have to do it for AI.

This “dual mandate” is no small thing. On one hand, platform engineers must integrate AI into internal developer platforms (IDPs) — golden paths, CI/CD pipelines, observability tools — all supercharged with intelligence. On the other hand, they must build Platforms for AI — GPU-enabled infrastructure, MLOps pipelines, secure data governance — for the data scientists training and deploying models.
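
To make that dual mandate concrete, here’s a minimal sketch in Python of how the two halves might be expressed as self-service platform definitions. Every name in it is hypothetical (it is not the API of any particular IDP, and nothing here comes from the report); the point is the shape of the work: one golden path that wires an AI review step into an ordinary CI/CD pipeline, and one environment definition that hands out GPU capacity with governance attached.

```python
# Hypothetical, illustrative definitions only -- not the API of any real IDP.
from dataclasses import dataclass, field

@dataclass
class GoldenPath:
    """A paved-road template the platform team publishes for developers."""
    name: str
    ci_steps: list = field(default_factory=list)
    guardrails: list = field(default_factory=list)

@dataclass
class AIWorkloadEnvironment:
    """A self-service environment for teams training or serving models."""
    name: str
    gpu_count: int
    data_classification: str            # controls which datasets may be mounted
    observability: list = field(default_factory=list)

# Half one of the mandate: AI woven into the developer-facing platform.
web_service_path = GoldenPath(
    name="web-service",
    ci_steps=["build", "unit-test", "ai-code-review", "deploy-canary"],
    guardrails=["human-approval-required-for-ai-suggested-changes"],
)

# Half two: a platform built *for* AI workloads.
training_env = AIWorkloadEnvironment(
    name="recsys-training",
    gpu_count=4,
    data_classification="internal-only",
    observability=["gpu-utilization", "token-cost", "model-drift"],
)

print(web_service_path.ci_steps)
print(training_env.gpu_count, training_env.data_classification)
```

The syntax doesn’t matter; what matters is that both halves become products the platform team owns, versions and supports.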

It’s a heavier lift than the cloud-native shift ever was. And the stakes are higher.

Trust, Governance, and Human Challenges

The technical hurdles are significant — legacy systems, tool sprawl and performance bottlenecks. But the real blockers? People.

The report shows 57% of teams face skill gaps, 56% wrestle with hallucinations, and a surprising 35% admit confusion about what “agentic AI” even means. Add in job security fears — junior engineers wondering if AI will replace them, mid-level engineers fearing their hard-won skills are now commoditized — and you see the picture.

Security isn’t making life easier either. Nearly 17% of platform teams report security functions outright blocking AI use. Others enforce deterministic-era controls on probabilistic systems, and the result is friction.

So yes, AI is helping. Yes, it’s working in pockets. But trusted? Not yet. And without trust, adoption hits walls.

The Road Ahead: AI-Native Platforms

Still, I come away from this report more optimistic than not. We are at the dawn of what is called the AI-native era. Just as cloud-native redefined delivery pipelines, AI-native will redefine platforms. We’re talking about:

  • Agentic AI: Autonomous systems that don’t just assist, but decide, adapt, and optimize (a toy sketch of such a loop follows this list).
  • Composable GPU infrastructure: Supporting workloads from cloud to edge.
  • Self-evolving platforms: Systems that learn from patterns and improve themselves without human babysitting.
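
Since the report finds more than a third of teams unsure what “agentic” even means, here’s a deliberately tiny Python sketch of the shape of an agentic loop: observe, decide, act, and record the outcome so the next decision can improve. Every function in it is a stand-in invented for illustration; a real system would put an actual model and real platform APIs behind these stubs.

```python
# Toy illustration of an agentic control loop -- every function is a stub.
import random

def observe() -> dict:
    """Stand-in for pulling live signals from the platform's observability stack."""
    return {"p95_latency_ms": random.uniform(80, 400),
            "error_rate": random.uniform(0.0, 0.05)}

def decide(signals: dict, history: list) -> str:
    """Stand-in for a model-driven policy; here, just a threshold rule."""
    if signals["p95_latency_ms"] > 300 or signals["error_rate"] > 0.02:
        return "scale-out"
    return "no-op"

def act(decision: str) -> bool:
    """Stand-in for calling a real platform API; reports whether anything changed."""
    print(f"executing: {decision}")
    return decision != "no-op"

history = []
for _ in range(3):   # a real agent runs continuously, not three times
    signals = observe()
    decision = decide(signals, history)
    changed = act(decision)
    history.append((signals, decision, changed))   # feedback a learning loop would use
```

Strip away the stubs and that loop, not any one line of code, is what the agentic and self-evolving bullets above describe.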

This isn’t science fiction. Early movers are already embedding these capabilities. The challenge isn’t the tech; it’s whether organizations will invest in the human capital, governance, and cultural shifts needed to wield it responsibly.

Shimmy’s Take

The State of AI in Platform Engineering paints a picture of both inevitability and immaturity. AI is here, it’s everywhere, but we’re still learning how to trust it, govern it, and scale it.

If you’re a platform engineer, here’s my advice:

  • Don’t chase vanity metrics like “lines of code generated.” Measure usefulness — defects reduced, time-to-restore, developer satisfaction (a small sketch of what that measurement might look like follows this list).
  • Build the foundations before the intelligence. AI amplifies existing problems; it doesn’t fix them.
  • Embrace the dual mandate. You’re not just building platforms with AI — you’re building platforms for AI.
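
As promised above, here’s a minimal sketch of what “measure usefulness” might look like in practice. The sample data is invented for illustration (these are not figures from the report); the idea is simply to compute change failure rate and mean time to restore from your own deployment and incident records, then watch those numbers before and after an AI rollout.

```python
# Illustrative only: invented sample data, not figures from the report.
from datetime import datetime, timedelta

deployments = [
    {"id": "d1", "caused_incident": False},
    {"id": "d2", "caused_incident": True},
    {"id": "d3", "caused_incident": False},
    {"id": "d4", "caused_incident": False},
]

incidents = [
    {"opened": datetime(2025, 6, 1, 10, 0), "restored": datetime(2025, 6, 1, 10, 42)},
    {"opened": datetime(2025, 6, 9, 14, 5), "restored": datetime(2025, 6, 9, 15, 20)},
]

# Change failure rate: share of deployments that triggered an incident.
change_failure_rate = sum(d["caused_incident"] for d in deployments) / len(deployments)

# Mean time to restore: average gap between an incident opening and service restoration.
mttr = sum(((i["restored"] - i["opened"]) for i in incidents), timedelta()) / len(incidents)

print(f"change failure rate: {change_failure_rate:.0%}")
print(f"mean time to restore: {mttr}")
```

If those numbers don’t move, the billions of generated lines are a vanity metric.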

We’re standing where we stood a decade ago with DevOps: Potential everywhere, results uneven, culture lagging. But this time the wave is bigger, faster, and less forgiving. AI won’t wait for us to catch up.

Trusted? Not yet. Working? Sometimes. Helping? Absolutely — but only if we get serious about how we use it.

The future belongs to platform teams who can harness AI not as a shiny tool, but as a transformative force — balancing innovation with discipline, and hype with reality. Those who can’t? They’ll be left behind on the plateau.
