
When developers choose the platform because it makes delivery faster and safer, the business sees measurable gains.  

Mature teams track deployment frequency, onboarding time, and voluntary adoption, not ticket counts. For engineering teams, productivity is best represented by metrics that monitor and analyze the speed of real-time decision-making.

Lenses.io’s head of AI, Tun Shwe, explains that these measurements are superior to ticket counts, which commonly fail to capture the value of an organization’s adaptive systems that process information instantly at the edge.

“True productivity is reflected in the speed and ability to modernize legacy code and deploy real-time data streams into their autonomous systems,” he says.

Dmitry Chuyko, performance architect at BellSoft, says “DORA” metrics – deployment frequency, lead time, change failure rate, mean time to recovery – tell you whether your software delivery is improving. 

“These are the standards, and they matter,” he says. “But platforms affect peopleware, not just software.”

From Chuyko’s perspective, the real productivity question is: Are you reducing cognitive load?  

“Developers spend time thinking, learning, context-switching, troubleshooting,” he says. “When the platform removes friction from these activities, productivity improves even if DORA metrics stay flat temporarily.” 
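To make that concrete, the DORA numbers Chuyko cites can be derived from basic deployment and incident records. The following is a minimal sketch, assuming a hypothetical Deployment record with commit, deploy, and recovery timestamps; the field names and the 30-day window are illustrative rather than taken from any particular tool.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Deployment:
    committed_at: datetime                 # when the change was committed
    deployed_at: datetime                  # when it reached production
    failed: bool = False                   # did this deploy cause a production incident?
    restored_at: datetime | None = None    # when service was restored, if it failed

def dora_metrics(deploys: list[Deployment], window_days: int = 30) -> dict:
    """Compute the four DORA metrics over a reporting window (illustrative only)."""
    frequency = len(deploys) / window_days                                    # deploys per day
    lead = sum((d.deployed_at - d.committed_at for d in deploys), timedelta()) / len(deploys)
    failures = [d for d in deploys if d.failed]
    cfr = len(failures) / len(deploys)                                        # change failure rate
    recoveries = [d.restored_at - d.deployed_at for d in failures if d.restored_at]
    mttr = sum(recoveries, timedelta()) / len(recoveries) if recoveries else timedelta(0)
    return {
        "deployment_frequency_per_day": frequency,
        "lead_time_for_changes": lead,
        "change_failure_rate": cfr,
        "mean_time_to_recovery": mttr,
    }
```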

Measuring Voluntary Adoption  

A best practice for measuring voluntary adoption is to monitor how frequently developers use their self-service tools to access data streams independently.

Shwe explains that higher adoption rates indicate that self-service platforms are successfully removing friction and enabling engineers to focus on high-level tasks rather than minor administrative work.

“If developers are actively modernizing the platform through their own data inputs, it proves the tools are intuitive enough to replace previously inefficient workflows,” he says.  

Chuyko points out that voluntary adoption is binary: Teams either choose your platform or build workarounds. 

“If you’re mandating adoption through policy rather than earning it through value, you’ve already lost,” he cautions. “Developers vote with their time and attention. When they voluntarily invest both, you know you’re solving real problems.” 
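One way to put a number on voluntary adoption, offered here as a sketch rather than a prescribed method, is to count the share of teams that reach for the self-service path in a given period. The usage-event records below (team, period, action) are hypothetical.

```python
def adoption_rate(usage_events: list[dict], all_teams: set[str], period: str) -> float:
    """Share of teams that used the self-service platform at least once in a period.

    usage_events are hypothetical records such as
    {"team": "payments", "period": "2024-06", "action": "created_stream"}.
    """
    active = {e["team"] for e in usage_events if e["period"] == period}
    return len(active & all_teams) / len(all_teams)

def adoption_trend(usage_events: list[dict], all_teams: set[str], periods: list[str]) -> dict:
    """A rising trend suggests teams are choosing the platform rather than building workarounds."""
    return {p: adoption_rate(usage_events, all_teams, p) for p in periods}
```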

Tie Outcomes to Business Value  

Faster delivery is straightforward to measure: shorter release cycles and higher deployment frequency. Safety shows up in SRE metrics: fewer critical CVEs in production, reduced time to patch vulnerabilities, and improved uptime.

“The key relationship most teams miss is that shorter release cycles directly improve security posture,” Chuyko says. “When you deploy multiple times per day instead of monthly, vulnerabilities spend less time exposed.” 

The average number of open CVEs at any moment drops simply because the window of exposure shrinks. 
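A back-of-the-envelope way to see why, framed here with Little’s law as an assumption rather than a calculation from either source: the average number of open CVEs is roughly the rate at which they arrive multiplied by the mean time each one stays unpatched.

```python
def avg_open_cves(cves_per_week: float, mean_days_to_patch: float) -> float:
    """Little's law sketch: average CVEs open at any instant = arrival rate x exposure time."""
    return (cves_per_week / 7) * mean_days_to_patch

# Same arrival rate; only the patch window changes with deploy cadence (numbers are illustrative):
monthly_cadence = avg_open_cves(cves_per_week=2, mean_days_to_patch=30)  # ~8.6 open on average
daily_cadence = avg_open_cves(cves_per_week=2, mean_days_to_patch=3)     # ~0.9 open on average
```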

“Tie this to business value through impact analysis,” he says. “Calculate the cost of downtime. Measure time-to-market for revenue-generating features. Track how quickly you can respond to competitive threats or regulatory requirements.” 
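The impact analysis Chuyko describes can stay simple. The sketch below illustrates two of the figures he names, cost of downtime and time-to-market delay; the inputs and formulas are assumptions for illustration, not his methodology.

```python
def downtime_cost(minutes_down: float, revenue_per_minute: float,
                  recovery_labor_cost: float = 0.0) -> float:
    """Illustrative cost of an outage: lost revenue plus the effort spent recovering."""
    return minutes_down * revenue_per_minute + recovery_labor_cost

def delayed_feature_cost(days_late: float, expected_daily_revenue: float) -> float:
    """Rough cost of slower time-to-market for a revenue-generating feature."""
    return days_late * expected_daily_revenue

# Example: a 45-minute outage at $500/minute plus $4,000 of engineering recovery time.
print(downtime_cost(45, 500, 4_000))  # 26500.0
```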

Shwe says AI return on investment (ROI) is a strong metric that measures not only faster and safer delivery but also the stability of autonomous deployments.

“It’s essential that organizations tie their AI ROI back to business value to emphasize an IT team’s ability to keep their AI models adaptive and relevant despite ongoing technological changes,” he says.  

Tracking Health, Diagnosing Stagnation  

Platform teams can monitor self-service system health by tracking the integrity of the data streams themselves, ensuring that the root causes of system failures are always easy to identify.  

“By combining human effort with AI-driven analysis, IT teams can maintain accuracy and security without being overwhelmed by alerts,” Shwe says. 

He adds that it’s vital for IT teams to ensure their data connections with AI models remain intact and performant.
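What tracking the integrity of the data streams themselves might look like in practice is sketched below, with invented stream names and thresholds; real streaming platforms expose richer signals, and the fields here are assumptions.

```python
import time

# Hypothetical per-stream snapshot: time of the last event seen and current consumer lag.
streams = {
    "orders":   {"last_event_ts": time.time() - 12,  "consumer_lag": 40},
    "payments": {"last_event_ts": time.time() - 900, "consumer_lag": 125_000},
}

def stream_health(snapshot: dict, max_staleness_s: float = 300, max_lag: int = 50_000) -> dict:
    """Flag streams whose freshness or lag breaches a threshold, so a failure points
    to a specific stream instead of surfacing as a flood of generic alerts."""
    now = time.time()
    report = {}
    for name, s in snapshot.items():
        stale = (now - s["last_event_ts"]) > max_staleness_s
        lagging = s["consumer_lag"] > max_lag
        report[name] = "unhealthy" if (stale or lagging) else "healthy"
    return report

print(stream_health(streams))  # {'orders': 'healthy', 'payments': 'unhealthy'}
```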

Chuyko says when metrics show stagnation—flat adoption, slow deployments, longer onboarding—there are diagnostic steps to help uncover whether the issue is tooling, process, or culture. 

“Address the elephant in the room,” he says. “Are there known critical issues you’ve been deferring?”

He points out that platform teams often tolerate bugs or missing features because “we’re working on bigger things.”

“Meanwhile, those issues quietly kill adoption,” Chuyko says. “Review your backlog honestly.” 

Next, check infrastructure constraints. Do you have sufficient hardware resources? Budget for scaling? Acceptable access times and response rates? 

“These limitations are often invisible to the platform team but painfully obvious to users,” he explains. 

The next step is to gather systematic feedback by surveying current users and, critically, interviewing teams who tried the platform and stopped using it. 

“They’ll tell you what your metrics can’t: Whether the tooling is too complex, whether processes add friction, whether organizational politics are blocking adoption,” Chuyko says.  
