Sometimes a research report lands at just the right moment. The newly released “Navigating Digital Resilience 2026” study from SUSE is one of those reports. Based on a survey of more than 300 IT leaders across the United States, Europe and Asia, the findings paint a picture of an industry that has quietly moved into a new phase of thinking about infrastructure.
The numbers tell the story. Ninety-eight percent of IT leaders say digital sovereignty is now a priority for their organizations. Ninety-three percent say they either already have a sovereignty strategy or are developing one. And perhaps most telling for those of us who have spent years in the open source and DevOps communities, 94% say open source is critical to achieving digital resilience.
Vendor research should always be read carefully, but sometimes the patterns inside the data line up with what many of us are already seeing in the field. This is one of those cases. The conversation in enterprise IT has shifted. For years, the dominant focus was reliability and availability. We built entire disciplines around keeping systems online and running smoothly. DevOps, SRE and platform engineering all grew out of that mission.
Today, the conversation is broader. Organizations are no longer asking only how to prevent outages. They are asking how their systems will continue operating when disruptions occur. That difference may sound subtle, but it reflects a major shift in mindset.
The reality is that modern infrastructure now operates in an environment that is far more complex and volatile than it was even five years ago. Cyberattacks have become more sophisticated. Data flows constantly across national boundaries. AI systems depend on massive datasets and infrastructure pipelines that stretch across clouds, providers and jurisdictions. At the same time, governments around the world are introducing new regulations around data sovereignty, AI governance and digital control.
All of this means enterprises are increasingly confronting questions that used to sit outside the traditional scope of IT operations. Where does our data live? Who controls the infrastructure we depend on? What happens if a supplier, a cloud region or an entire jurisdiction becomes unavailable?
That is where digital resilience enters the picture.
One of the more striking findings in the SUSE report is that 51% of organizations say a foreign entity has violated their company’s privacy or data regulations, and nearly a quarter describe the incident as severe. Numbers like that explain why sovereignty has moved from policy discussions into operational planning.
When organizations begin experiencing cross-border incidents, the abstract concept of sovereignty becomes a practical infrastructure concern. It becomes about control of data, control of systems and the ability to adapt when conditions change.
AI is also accelerating the shift. The report found that, given an additional 20% of IT budget, 70% of organizations would invest it in AI initiatives, followed by security and resilience. That is not surprising. AI is rapidly becoming a foundational capability for many businesses. But it also expands the operational and governance challenges infrastructure teams must manage.
In fact, 64% of IT leaders surveyed say AI transparency requirements are now pushing them to strengthen their resilience strategies. The more organizations rely on AI systems, the more they need to understand the pipelines, data flows and dependencies that feed those systems. Infrastructure architecture suddenly becomes part of AI governance.
This is one reason open infrastructure is gaining renewed attention. The SUSE report’s finding that 94% of IT leaders see open source as essential to resilience reflects something many practitioners have long understood. Open technologies offer transparency and flexibility that proprietary systems often cannot. They allow organizations to understand how their infrastructure works and adapt it when circumstances demand.
Resilience ultimately depends on that adaptability. Systems must be able to absorb disruption, reroute workloads and recover quickly without requiring an entirely new architecture.
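That kind of adaptability can be sketched in code. The following minimal Python example, not taken from the report, shows the basic failover pattern behind "reroute workloads and recover quickly": try each backend in priority order and fall through to the next when one is unavailable. The backend names here are hypothetical stand-ins for cloud regions or providers.

```python
# Minimal failover sketch: attempt a task on each backend in priority
# order, moving to the next when one is unavailable.

class BackendUnavailable(Exception):
    """Raised when a backend cannot serve the request."""

def run_workload(backends, task):
    """Run `task` on the first available backend; collect failures."""
    errors = {}
    for backend in backends:
        try:
            return backend(task)
        except BackendUnavailable as exc:
            # Record the failure and reroute to the next backend.
            errors[backend.__name__] = str(exc)
    raise RuntimeError(f"all backends failed: {errors}")

# Hypothetical backends standing in for regions or providers.
def primary_region(task):
    raise BackendUnavailable("region offline")

def secondary_region(task):
    return f"completed {task} on secondary"

result = run_workload([primary_region, secondary_region], "nightly-batch")
print(result)  # completed nightly-batch on secondary
```

Real platforms layer health checks, retries and data-residency constraints on top of this, but the core idea is the same: the routing logic, not the individual backend, is what keeps the workload running.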
Platform engineering is playing a major role here as well. Internal developer platforms are becoming the control layer that standardizes infrastructure across environments, enforces security policies and enables developers to ship software consistently. In many organizations, platform teams are now responsible for embedding resilience into the software delivery process itself.
Instead of treating resilience as a separate operational concern, it becomes part of the platform that developers use every day.
Seen from that perspective, the themes in the SUSE report make sense. Sovereignty, open infrastructure and resilience are not isolated topics. They are different angles on the same underlying question: how much control an organization has over the systems that run its business.
That question is only becoming more important as enterprises deepen their dependence on AI, cloud platforms and distributed software supply chains.
Shimmy’s Take
For a long time, we treated reliability as the gold standard of infrastructure. If systems stayed online and performed well, we considered the job done.
But the world our systems operate in has changed. Cyber threats are constant. Regulations are evolving. AI is introducing new dependencies and new risks. Cloud architectures span continents and jurisdictions.
In that environment, resilience becomes the real objective.
The organizations that succeed in the coming decade will not just be the ones that build the fastest or most scalable systems. They will be the ones that understand their infrastructure deeply enough to keep operating even when parts of it fail.
Digital resilience is not just another buzzword. It is the next stage in how we think about modern infrastructure.
