Zero Trust

As enterprise platforms grow in complexity, securing them requires a shift from traditional perimeter-based security to a zero trust model, where every access request is authenticated, authorized and continuously validated.

This approach mitigates risks like lateral movement within networks and ensures sensitive data remains protected.

However, implementing zero trust at scale presents challenges, including balancing security with performance, maintaining a seamless user experience and integrating with legacy systems.

Stephen Christiansen, principal security consultant at Stratascale, explained that one of the primary concerns when adopting zero trust is avoiding excessive latency that could degrade platform performance.

He emphasized microsegmentation, which divides networks into smaller, isolated segments, as a foundational step that enhances security while optimizing traffic flow.

“You strive for your technologies to be efficient,” he said. “Microsegmentation helps by dividing your network into smaller, isolated segments that both improve your security posture and network performance.”
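Conceptually, a microsegmentation policy is a default-deny table of explicitly allowed flows between segments. The sketch below illustrates that idea; the segment names, ports and rule format are illustrative, not taken from any specific product.

```python
# Minimal sketch of a default-deny microsegmentation policy table.
# Segment names and rules here are hypothetical examples.

ALLOWED_FLOWS = {
    # (source segment, destination segment, destination port)
    ("web", "app", 8443),
    ("app", "db", 5432),
}

def is_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default-deny: traffic passes only if an explicit rule permits it."""
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS

assert is_allowed("web", "app", 8443)
assert not is_allowed("web", "db", 5432)  # no direct web-to-db path
```

Because anything not explicitly listed is dropped, a compromised web tier cannot open arbitrary connections into the database tier.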

He noted the importance of Identity and Access Management (IAM) as a next step in layering security controls without impacting speed.

Garrett Weber, field CTO for enterprise security at Akamai, agreed but stressed the importance of selecting vendors with purpose-built zero trust solutions to ensure performance is maintained.

“Opt for agent-based approaches that do not rely on the existing operating system firewall because it was never built for scalability,” he said.

He recommended leveraging cloud-native zero trust solutions that integrate seamlessly with cloud infrastructure and choosing vendors with points of presence close to users and environments to enhance both security and performance.

Addressing Security Gaps with Zero Trust

Both experts pointed to IAM failures and lateral movement as the most significant security gaps in existing platform architectures.

Christiansen explained that traditional perimeter-based models rely on simplistic access controls, which leave resources vulnerable if an attacker breaches the outer defenses.

“Zero trust requires that every access request is authenticated, authorized, and encrypted continuously,” he said.

Enforcing strict verification for every request significantly reduces unauthorized access.

Weber added that least-privilege access is crucial in reducing exposure across large platforms.

“Platforms are large and complex, and, in most cases, no one needs full access to all services that it offers,” he said.

By restricting access to only what is necessary based on roles and responsibilities, organizations can limit the potential damage of a breach.
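Least-privilege access of the kind Weber describes is often enforced with role-based checks that deny by default. The sketch below uses hypothetical role and action names to show the pattern.

```python
# Illustrative least-privilege check: each role maps to the minimal set of
# platform actions it needs; anything else, including unknown roles, is
# denied. Role and action names are hypothetical.

ROLE_PERMISSIONS = {
    "developer": {"deploy:staging", "read:logs"},
    "sre":       {"deploy:staging", "deploy:prod", "read:logs", "read:metrics"},
    "analyst":   {"read:metrics"},
}

def authorize(role: str, action: str) -> bool:
    # Unknown roles resolve to an empty permission set, i.e. deny by default.
    return action in ROLE_PERMISSIONS.get(role, set())

assert authorize("sre", "deploy:prod")
assert not authorize("developer", "deploy:prod")  # blast radius limited
```

If a developer account is compromised, the attacker inherits only that role's narrow permissions rather than full platform access.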

“The biggest threat from an attacker accessing a platform or service is their ability to move sideways and reach other systems,” he said.

Using zero trust network segmentation ensures that a breach in one area does not automatically expose the entire system.

Balancing Security with User Experience

A common challenge in zero trust implementation is balancing strict authentication controls with a seamless user experience (UX).

Christiansen outlined three key strategies: risk-based policies, single sign-on (SSO) and multi-factor authentication (MFA).

“Leveraging policies that adjust the user’s risk profile based on their role, device and location allows for flexible controls,” he said.

Single Sign-On simplifies access by reducing the number of login prompts users encounter, while MFA enhances security using biometric authentication, app-based verification or hardware tokens.
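A risk-based policy of this kind can be thought of as a scoring function over the request's context that selects the authentication requirement. The sketch below is a simplified illustration; the signals, weights and thresholds are assumptions, not a real product's policy.

```python
# Hedged sketch of a risk-based (adaptive) authentication decision:
# contextual signals raise the score, and the score selects the control.
# Thresholds and signal weights here are illustrative.

def required_auth(role: str, device_trusted: bool, known_location: bool) -> str:
    score = 0
    if role == "admin":
        score += 2          # privileged roles carry more risk
    if not device_trusted:
        score += 2          # unmanaged device
    if not known_location:
        score += 1          # unusual location

    if score >= 4:
        return "deny"       # too risky even with step-up auth
    if score >= 2:
        return "mfa"        # step-up: app-based code, biometric or token
    return "sso"            # low risk: existing SSO session suffices

assert required_auth("analyst", device_trusted=True, known_location=True) == "sso"
assert required_auth("admin", device_trusted=True, known_location=False) == "mfa"
```

Low-risk requests ride the existing SSO session, so most users never see extra prompts; step-up authentication appears only when the context warrants it.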

Weber added that effective communication is key to user adoption, noting that most users are open to using the security controls put in place if they understand why they are being asked to do so.

He recommended evaluating platform services and workloads to determine which require stricter authentication measures.

In cases where compensating security controls such as microsegmentation exist, access policies can be adjusted to reduce friction without compromising security.

He also suggested that integrating MFA directly with browsers or administrator tools can make authentication more seamless.

Leveraging Automation for Continuous Verification

Automation plays a crucial role in maintaining continuous verification and enforcing least-privilege access across large-scale platforms.

Christiansen said automated systems can monitor user sessions, detect anomalies and validate requests in real time.

They can also dynamically adjust authentication methods based on user risk profiles and automate access provisioning based on activity.

“Automated systems continuously monitor user sessions, picking up on abnormal activities and validating requests in real time,” he said.
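One simple form of the continuous verification Christiansen describes is re-checking each request against the context established when the session began, and invalidating the session on a mismatch. The sketch below is a minimal illustration with hypothetical field names.

```python
# Illustrative continuous-verification check: each request is compared to
# the session's established device and location, and a mismatch forces
# re-authentication. Signals and field names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Session:
    user: str
    device_id: str
    country: str
    verified: bool = True

def verify_request(session: Session, device_id: str, country: str) -> bool:
    # A new device or an abrupt location change invalidates the session;
    # once invalidated, it stays invalid until the user re-authenticates.
    if device_id != session.device_id or country != session.country:
        session.verified = False
    return session.verified

s = Session(user="alice", device_id="laptop-1", country="US")
assert verify_request(s, "laptop-1", "US")
assert not verify_request(s, "laptop-1", "FR")  # anomaly: re-auth required
```

Real deployments weigh many more signals (time of day, behavioral baselines, device posture), but the principle is the same: trust is re-evaluated on every request rather than granted once at login.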

Weber noted the importance of integrating zero-trust policies into automation and Continuous Integration/Continuous Delivery (CI/CD) pipelines.

“For newer platforms, implementing zero trust policies into the automation and CI/CD pipeline as the platform is being built is becoming more common,” he said.

He cited Firewall-as-Code as an example, where microsegmentation policies are automatically applied when new services come online.

“As platforms scale up during peak times, leveraging automation ensures zero-trust policies are applied in real time,” he added.

Both experts stressed that scalability is a critical advantage of automation in zero trust: If a Kubernetes cluster scales up with new nodes, automation ensures that security policies are automatically assigned, reducing the risk of misconfigurations.
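In a Kubernetes setting, the Firewall-as-Code pattern Weber describes might look like a pipeline step that emits a default-deny NetworkPolicy whenever a new service is registered. The sketch below generates such a manifest as a Python dict; the service names, labels and port are hypothetical.

```python
# Sketch of a "Firewall-as-Code" step: when a service is registered in the
# pipeline, a segmentation policy is generated for it automatically. The
# manifest shape follows the Kubernetes NetworkPolicy API; the service
# names and labels here are illustrative.

def network_policy_for(service: str, allowed_clients: list[str], port: int) -> dict:
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": f"{service}-ingress"},
        "spec": {
            "podSelector": {"matchLabels": {"app": service}},
            "policyTypes": ["Ingress"],  # ingress not listed below is denied
            "ingress": [{
                "from": [{"podSelector": {"matchLabels": {"app": c}}}
                         for c in allowed_clients],
                "ports": [{"protocol": "TCP", "port": port}],
            }],
        },
    }

policy = network_policy_for("payments", allowed_clients=["checkout"], port=8443)
assert policy["spec"]["ingress"][0]["ports"][0]["port"] == 8443
```

Because the policy is produced by code at deploy time, new pods picked up by the `podSelector` labels inherit the segmentation rules automatically as the cluster scales, which is exactly the misconfiguration risk the automation is meant to remove.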

Transitioning Legacy Platforms to Zero Trust

Migrating legacy platforms to a zero trust model presents additional challenges, but both Christiansen and Weber emphasized careful planning and phased implementation.

Christiansen cautioned that operational continuity risks increase for organizations less mature in zero trust adoption.

“You need to start with a comprehensive assessment of your organization’s legacy platforms (systems, applications, and data) that need to be protected,” he said.

Organizations should prioritize low-risk systems for initial implementation before gradually expanding to mission-critical workloads.

“Mission-critical systems should be run in both legacy mode and zero trust to ensure issues can be addressed without disruption,” he said.

Weber suggested identifying zero trust solutions that can integrate with legacy platforms without requiring complex architecture changes.

Since legacy systems often lack documentation and institutional knowledge, profiling their traffic is essential for identifying dependencies and anomalies.

“It’s unlikely that any documentation is accurate or that the team that implemented the platform is even still around, so being able to profile these legacy, brownfield environments is critical to retrofitting them for a zero trust model,” he said.
