
A full 89% of platform teams are already hosting or planning to host AI workloads, according to new survey data from Vultr and Platform Engineering.
The findings, published in The State of AI in Platform Engineering report, show that while AI adoption is now firmly mainstream, most organizations remain mired in experimentation, with significant gaps in infrastructure and collaboration slowing progress toward enterprise-scale deployment.
The survey reveals AI is now integral to platform engineering practices. Three-quarters (75%) of respondents report using AI daily for coding tasks, while 70% rely on it for documentation.
The findings show how deeply AI has embedded itself into everyday developer workflows. Yet despite this momentum, the report highlights that many organizations are still pursuing quick wins rather than developing long-term strategies.
One major roadblock is ownership of AI platforms. The survey found 39% of organizations assign AI platform responsibilities to platform engineering teams, but 13% admit they have no clear ownership model at all.
This lack of accountability has left many enterprises without a consistent path for scaling AI workloads.
Kevin Cochrane, CMO of Vultr, says that to establish clear ownership and accountability for AI infrastructure, organizations need to treat it like a product, not a project.
“This means defining clear boundaries, standards, and guardrails — while enabling other groups to build within them,” he explains.
Cochrane says that by setting ownership up front and enforcing best practices like CI/CD, observability and incident response, organizations can ensure AI amplifies value rather than technical debt.
He points out that infrastructure remains another sticking point, with 16% of organizations using hybrid environments for AI workloads and only 9% running GPU workloads fully on-premises.
The cloud remains dominant, but the data suggests that many teams are still experimenting with deployment models rather than executing well-defined hybrid or multicloud strategies.
Kubernetes adoption for AI orchestration tells a similar story. While 40% of respondents leverage Kubernetes to manage GPUs, 35% admit they don’t orchestrate AI workloads at all. Without orchestration, enterprises risk bottlenecks, inefficiencies and escalating costs as workloads increase in complexity.
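To illustrate what GPU-aware orchestration means in practice, the sketch below builds a minimal Kubernetes pod manifest that requests a dedicated GPU through the standard `nvidia.com/gpu` extended resource, which is how Kubernetes schedulers place GPU workloads on suitable nodes. The function name, pod name and image are hypothetical, and a real cluster would need the NVIDIA device plugin installed for the resource to be schedulable.

```python
import json

def gpu_pod_manifest(name: str, image: str, gpus: int = 1) -> dict:
    """Build a minimal Kubernetes Pod manifest that asks the scheduler
    for dedicated GPUs via the `nvidia.com/gpu` extended resource."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [
                {
                    "name": name,
                    "image": image,
                    # The scheduler will only place this pod on a node
                    # with at least `gpus` unallocated GPUs.
                    "resources": {"limits": {"nvidia.com/gpu": str(gpus)}},
                }
            ],
        },
    }

# Example: a single-GPU training pod (image name is hypothetical).
manifest = gpu_pod_manifest("train-job", "registry.example.com/trainer:latest")
print(json.dumps(manifest, indent=2))
```

Without a resource request like this, GPU pods land on arbitrary nodes and contend for hardware, which is exactly the kind of inefficiency the report warns about.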
Collaboration across technical roles is another challenge, with nearly a third (31%) of platform engineers reporting limited collaboration with data science teams and 16% reporting no collaboration at all.
These silos prevent organizations from fully capitalizing on AI investments, as disconnected teams struggle to align models, pipelines and infrastructure.
Outdated processes also weigh heavily on AI adoption. A full 41% of survey respondents said their organizations have not updated CI/CD or DevSecOps pipelines to accommodate AI workloads. This gap makes it difficult to operationalize AI in a secure, scalable and repeatable way.
At the same time, platform engineers are clear about what they need. More than half of respondents want infrastructure templates and blueprints to help streamline deployment and reduce duplication of effort.
That foundation, according to the survey, includes GPU-ready instances that can deploy in minutes, global orchestration built in from the start, composable architectures tailored for AI and machine learning pipelines, advanced MLOps governance and flexible deployment across hybrid and multicloud environments.
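One way to picture such a template: a parameterized blueprint that teams fill in rather than hand-writing each deployment. The sketch below is a minimal, hypothetical example; the field names, region slugs and GPU labels are illustrative, not any vendor's actual API, and the rendered spec stands in for whatever Terraform or Kubernetes tooling would actually consume it.

```python
from dataclasses import dataclass

@dataclass
class GpuBlueprint:
    """A reusable infrastructure blueprint: teams supply a few
    parameters instead of hand-writing each deployment spec."""
    name: str
    region: str = "ewr"      # region slug (illustrative)
    gpu_type: str = "l40s"   # GPU model label (illustrative)
    gpu_count: int = 1
    autoscale_max: int = 4

    def render(self) -> dict:
        # Expand the template into a concrete deployment spec that
        # downstream provisioning tooling could consume.
        return {
            "deployment": self.name,
            "region": self.region,
            "resources": {"gpu": {"type": self.gpu_type, "count": self.gpu_count}},
            "autoscaling": {"min": 1, "max": self.autoscale_max},
        }

# Example: a two-GPU inference deployment in a European region.
spec = GpuBlueprint(name="inference-eu", region="ams", gpu_count=2).render()
print(spec)
```

The point of the pattern is that the blueprint, not each team, encodes the defaults and guardrails, which is how templates reduce the duplicated effort respondents complain about.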
With these capabilities, teams can move beyond proofs of concept and pilots into enterprise-wide AI production.
Cochrane cautions that running AI workloads without orchestration risks inefficiency, inconsistent resource utilization and uncontrolled technical debt.
“Platform engineers can help by implementing GPU and AI-aware orchestration frameworks and defining clear paths for deployment,” he says. “They will play a critical role in ensuring workloads are scalable, reproducible, and secure.”
Despite today’s challenges, the report makes clear that the future of AI in platform engineering is not in doubt.
Adoption is widespread, use cases are multiplying, and demand for scalable, secure AI infrastructure is only intensifying. The current hurdles — ownership, orchestration, collaboration and governance — are less about willingness and more about execution.
The survey findings indicate platform engineers are being positioned at the center of enterprise AI strategy, responsible not only for enabling experimentation but for building the infrastructure and processes that will allow organizations to scale AI with confidence.
Cochrane says that to foster cross-functional trust, leaders can establish joint governance structures in which both platform engineering and data science teams have a voice in AI infrastructure decisions, and define shared ownership of workloads.
“Transparent communication channels, including regular check-ins, dashboards and documentation, will be key in helping teams stay aligned on goals, progress and challenges,” he says.