
Silos between platform engineering teams and data scientists are slowing AI adoption, and closing that gap is critical for moving beyond experimentation.
Many factors complicate the software release lifecycle in any organization, including the gap between an academic “ideal” state and a business’s actual implementation or customer needs.
Other stumbling blocks include decentralized data, inconsistency between product lifecycle environments, the cost of providing production-level datasets at scale and, ultimately, a skills gap and the lack of a shared vocabulary across disparate roles.
Some businesses also suffer from a pattern of separation between platform teams and data science teams, which can quickly lead to a scenario where the models needed for core functionality never make it into production.
Steve Touw, co-founder and CTO of Immuta, says platform teams are focused on enforcing security, compliance, and scalability, while data scientists are measured on speed and experimentation.
“While those goals are not inherently in conflict, the workflows and tooling used to connect them often are,” he says.
Touw explains that most organizations still rely on static access controls and manual approvals, which were not built to support the speed or scale that modern AI projects demand.
“AI adoption still sits largely in a silo, though not from the perspective of AI’s actual availability, since most people now interface with or actively use AI in their day-to-day interactions,” says Brandon Baguley, director of DevOps at Pluralsight.
The silos stem from not understanding which tools should be brought to bear in each situation, leading to what he calls the perennial “square peg, round hole” problem.
He recommends starting with a clear definition of where the strategy should focus, whether that is developer enablement, office staff enhancement (e.g., finance, support, sales), or product integrations.
“Creating a clear initial strategy will allow organizations to then cover the next set of blockers, such as data protection, legal terms and conditions, and being socially responsible about how and where AI tooling is used,” Baguley says.
Focus on Federated Governance
From Touw’s perspective, federated governance is key: instead of forcing every decision through a central team, you enable data domains to manage their own policies, with oversight.
“This allows platform teams to define controls while giving data scientists more autonomy,” he says.
When federated governance is paired with dynamic access controls and automated approvals, the bottlenecks start to fall away and data access can scale in a sustainable way.
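As a rough illustration of the difference between a static allow-list and this kind of dynamic, domain-owned policy model, the sketch below assumes a hypothetical policy engine: each data domain registers its own attribute-based rule while the platform team enforces a global baseline. The function, attribute, and domain names are illustrative only and are not tied to Immuta or any specific product.

```python
# Minimal sketch of federated, attribute-based access control (illustrative only).
# Each data domain owns its policy; the platform layer enforces a baseline check,
# so requests are decided automatically rather than queued for manual approval.

from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class AccessRequest:
    user_role: str                      # e.g. "data_scientist"
    purpose: str                        # e.g. "model_training"
    dataset: str                        # e.g. "claims_2024"
    domain: str                         # owning data domain, e.g. "insurance"
    attributes: Dict[str, str] = field(default_factory=dict)


def platform_baseline(req: AccessRequest) -> bool:
    """Platform-wide control applied to every request, regardless of domain."""
    return req.attributes.get("training_completed") == "yes"


# Domain-owned policies: each data domain defines and maintains its own rule.
domain_policies: Dict[str, Callable[[AccessRequest], bool]] = {
    "insurance": lambda req: req.purpose in {"model_training", "reporting"}
                             and req.user_role == "data_scientist",
    "finance":   lambda req: req.attributes.get("region") == "EU"
                             and req.purpose != "ad_hoc_export",
}


def is_access_allowed(req: AccessRequest) -> bool:
    """Grant access only if both the platform baseline and the owning
    domain's policy allow it; unknown domains are denied by default."""
    domain_policy = domain_policies.get(req.domain)
    if domain_policy is None:
        return False
    return platform_baseline(req) and domain_policy(req)


if __name__ == "__main__":
    req = AccessRequest(
        user_role="data_scientist",
        purpose="model_training",
        dataset="claims_2024",
        domain="insurance",
        attributes={"training_completed": "yes"},
    )
    print(is_access_allowed(req))  # True under these illustrative rules
```

The point of the sketch is the division of responsibility: the platform team owns the baseline check, each domain owns and can change its own rule, and no request waits on a central review queue.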
Internally, Baguley says, companies should facilitate sharing and cross-collaboration through lunch-and-learns, team demos, and other social interactions to foster growth.
“It is important first to set a culture of AI and agentic tooling, and to have a common definition within your organization of what aspects of AI your company is focusing on,” he says.
Right now, these terms are so overloaded that organizations are whiplashing as they try to meet the desired outcomes of their leadership teams.
“The most common misdirection comes from not having a clear set of metrics, tools and a story about where you are investing in the enablement of AI practices and patterns,” Baguley says.
Cross-Functional Squads, Strong Leadership
Steve Malko, head of software engineering at tech consultancy Searce, says multidisciplinary, cross-functional squads have worked best.
“Aligning teams by particular product, feature, or journeys allows seamless collaboration, as the team members are working together towards the same goal with a unified direction,” he says.
Malko explains that leadership’s role should be to break down barriers and roadblocks and to be the voice of the organization, sending clear top-down communications that unify delivery teams and enable collaboration across tech teams.
“Assign change agents and champions to execute necessary changes, and reward teams that adopt the change,” he adds.
Touw says leadership needs to treat secure data access as a core enabler of AI strategy.
“That includes investing in the right governance infrastructure, setting clear expectations for collaboration between data, platform, and compliance teams, and removing organizational roadblocks,” he says.
He believes the most effective organizations are those where leadership understands that scaling AI depends on getting data access and governance right from the beginning.