The burger analogy for cloud stacks is a familiar one. Between the application and infrastructure layers, there are several pieces that need to come together as a cohesive stack to allow an application to function optimally.  

However, with the complexity surrounding Kubernetes and cloud infrastructure in general, it often feels like engineering teams are unable to take advantage of the benefits that cloud native provides.  

While all the ‘juicy’ benefits of cloud native are quite apparent, inherent complexity keeps engineering teams from taking advantage of its deployment and scaling capabilities.

This complexity is spread across two layers. The first is the infrastructure — not just compute, which IaaS providers and hyperscalers alike have commoditized, but also the components that sit beyond this traditional threshold. The second is the applications, which have expanded beyond their orthodox boundaries to mean much more than just ‘business logic’.

Infrastructure Needs Introspection and Inquisitive Exploration 

Let’s ask ourselves a question — or three. 

  1. How can our platform team manage hundreds of Kubernetes clusters across multiple clouds and on-premises locations, preferably using a single control plane? 
  2. What tool allows us to eliminate configuration drift by deploying all our clusters from standardized, version-controlled templates? 
  3. How can we fully automate the life cycle of our Kubernetes clusters — from provisioning and upgrading to decommissioning — using a declarative API? 

The answer to all these is k0rdent. It is purpose-built to transform multi-cluster Kubernetes management from a complex, error-prone manual task into a standardized, automated and policy-driven operation. It provides the necessary abstractions to treat your entire fleet of clusters as a single, cohesive system. It is an open-source project, born out of the burgeoning need for managing large fleets of Kubernetes clusters. 

Are Frictionless Deployments Real? 

That borders on clickbait, but yes — there is a path to low-friction deployments. There is a way to achieve deployments that are easier than having to write, correct and maintain reams of YAML. It goes by the name Cloud Foundry Korifi. 

Cloud Foundry Korifi is an open-source project that reimagines the classic Cloud Foundry developer experience, making it native to Kubernetes. In essence, it provides the well-loved, high-productivity cf push workflow on top of any standard Kubernetes cluster. 

For years, Cloud Foundry offered developers a powerful, streamlined way to deploy applications without worrying about the underlying infrastructure. A developer could take their source code, and with a simple cf push command, get a scalable, publicly accessible URL for their running application. Cloud Foundry handled the entire process: Building the code into a container, scheduling it, managing networking and aggregating logs.

Korifi’s goal is to preserve this highly abstracted, developer-centric experience but swap out Cloud Foundry’s VM-based components with the now-ubiquitous industry standard — Kubernetes. It acts as an adapter, converting familiar Cloud Foundry API commands (cf push, cf bind-service, etc.) into native Kubernetes resources such as deployments, services and custom resources. 
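To make that concrete, here is a minimal sketch of the workflow Korifi preserves. The API endpoint, app name and service name are placeholders:

# Target a Korifi (or classic Cloud Foundry) API endpoint -- placeholder URL
cf login -a https://api.example.com

# Build and deploy the source code in the current directory
cf push my-app

# Tail the aggregated logs for the app
cf logs my-app --recent

# Inject credentials for a backing service into the app
cf bind-service my-app my-database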

What Is the ‘Golden’ Path for Kubernetes Clusters?

1. Centralized Control and Standardization by Eliminating Configuration Drift 

Instead of manually configuring each cluster and hoping they stay in sync, define cluster configurations declaratively using templates (ClusterDeployment objects). This pattern enforces ‘Golden Paths’ for the underlying pieces. Platform teams can create pre-approved, secure and compliant cluster templates. When a development team needs a new cluster, they get one that already includes the necessary security policies, monitoring agents and networking configurations out of the box. 
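As an illustrative sketch (the exact apiVersion, template names and config fields depend on your k0rdent version and target provider), a ClusterDeployment for AWS might look like this:

apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: dev-cluster-1                 # hypothetical cluster name
  namespace: kcm-system
spec:
  template: aws-standalone-cp-0-0-5   # a pre-approved, version-controlled template
  credential: aws-cluster-identity-cred
  config:
    region: us-east-2
    controlPlane:
      instanceType: t3.small
    worker:
      instanceType: t3.small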

2. Fully Automated Cluster Lifecycle Management

k0rdent addresses the massive operational burden of managing the entire cluster life cycle — from creation to deletion — through a declarative provisioning model. Platform teams can define the desired state of their clusters in a YAML file, and k0rdent — leveraging Cluster API (CAPI) under the hood — continuously works to make that state a reality. This declarative method dramatically simplifies operations: Upgrading the Kubernetes version across the entire fleet is done by changing a single line in the cluster template, prompting k0rdent to orchestrate a controlled, rolling upgrade. Effortless decommissioning is just as simple, as deleting the cluster’s manifest from the management cluster triggers k0rdent to automatically handle the complete teardown of all associated cloud resources. 
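For example, assuming the hypothetical dev-cluster-1 deployment sketched above, a fleet upgrade could be a one-line patch to its template reference, after which k0rdent orchestrates the rolling upgrade:

# Point the deployment at a newer template revision (template names are hypothetical)
kubectl patch clusterdeployment dev-cluster-1 -n kcm-system --type=merge \
  -p '{"spec":{"template":"aws-standalone-cp-0-0-6"}}'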

3. Define Once, Deploy Everywhere 

k0rdent offers simplified application and service delivery across clusters by streamlining how applications run in a multi-cluster environment. Using ServiceTemplates, you can package an application, such as a Prometheus monitoring agent or logging forwarder, a single time. From there, a MultiClusterService object enables targeted deployments to groups of clusters identified by labels, such as env: production or region: apac, providing a powerful method to ensure all necessary baseline services are consistently rolled out. 
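A sketch of that pattern, assuming a monitoring ServiceTemplate has already been registered (names and versions here are illustrative, not official template identifiers):

apiVersion: k0rdent.mirantis.com/v1alpha1
kind: MultiClusterService
metadata:
  name: fleet-monitoring
spec:
  clusterSelector:
    matchLabels:
      env: production        # targets every cluster labeled env: production
  serviceSpec:
    services:
      - template: kube-prometheus-stack-62-3-1   # illustrative ServiceTemplate reference
        name: monitoring
        namespace: monitoring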

Where k0rdent Ends, Korifi Begins

The best way to test this idea is to try it on a local kind Kubernetes cluster. Begin by installing k0rdent on a kind (Kubernetes in Docker) cluster to create a local management environment. The process involves setting up required command-line tools, creating a specially configured kind cluster and then using the k0rdenctl utility to bootstrap the k0rdent components. 

Section 1: Prerequisites 

Before you begin, install a few command-line tools on your local machine. 

  • Docker: The container runtime that kind uses to run the Kubernetes cluster 
  • kubectl: The standard Kubernetes command-line tool for interacting with clusters 
  • kind: The tool for creating local Kubernetes clusters using Docker 
  • k0rdenctl: The official CLI for bootstrapping and managing k0rdent 
  • Helm: The package manager for installing Korifi using a Helm chart 
  • cf CLI: The official CLI for working with the Cloud Foundry API 

Section 2: Install k0rdent 

You can install the latest version of k0rdenctl with the following command: 

curl -sSL https://get.k0rdenctl.sh | sudo sh 

 

Step 1: Create the kind cluster. 

k0rdent’s installation includes an ingress controller to expose services. For this to work locally, you must create a kind cluster with a specific port mapping configuration. 

Create a kind configuration file. Save the following content as kind-config.yaml: 

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP

This configuration maps ports 80 and 443 from your local machine to the kind cluster’s ingress controller, allowing you to access services running inside the cluster. 

Step 2: Create the cluster using the configuration file. We’ll name the cluster k0rdent-management to be explicit. 

kind create cluster --config kind-config.yaml --name k0rdent-management

Step 3: Verify that the cluster is running and that kubectl is configured to use its context. 

kubectl cluster-info --context kind-k0rdent-management

You should see output confirming the control plane is running. 

Step 4: Install k0rdent using k0rdenctl. 

Now that you have a running Kubernetes cluster, you can use the k0rdenctl tool to install k0rdent and all its dependencies. Run the bootstrap command. This single command installs Flux CD (for GitOps), CAPI (for cluster lifecycle management) and the core k0rdent controllers onto your kind cluster. 

k0rdenctl bootstrap --context kind-k0rdent-management

This process will take several minutes as it pulls container images and applies numerous Kubernetes manifests. It will create several namespaces, including k0rdent-system, kcm-system and flux-system. 

Step 5: Verify the installation. 

Once the bootstrap command finishes, verify that all the components are running correctly. All pods must eventually be in the running state. It may take a few minutes for all containers to start up and become healthy. Look for pods in namespaces such as k0rdent-system, kcm-system, capi-system, cert-manager and flux-system.  

Alternatively, you can check that the k0rdent CRDs are registered by listing resources such as servicetemplates, clusterdeployments and multiclusterservices. 

kubectl get pods -A 

kubectl api-resources | grep k0rdent 

Your kind cluster is now a fully functional k0rdent management cluster, ready for you to define ServiceTemplates and provision new workload clusters. 

Section 3: Install Korifi’s Dependencies 

Korifi is installed on Kubernetes using Helm, but it has several dependencies that need to be installed first. As a reminder, the Korifi-specific prerequisites from Section 1 are kubectl, helm and the cf CLI. 

Set some environment variables to be used for installation and operation.  

export ROOT_NAMESPACE="cf"
export KORIFI_NAMESPACE="korifi"
export ADMIN_USERNAME="cf-admin"
export BASE_DOMAIN="127.0.0.1.nip.io"  # Or your actual domain
export GATEWAY_CLASS_NAME="contour"    # Name for the Gateway API GatewayClass
export KORIFI_VERSION="v0.13.0"        # Replace with the latest version

Next, install Korifi’s dependencies. Korifi relies on several open-source projects; install the following:
 

1. cert-manager: For automatic certificate management
 

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.5/cert-manager.yaml

kubectl wait --for=condition=ready pod -l app.kubernetes.io/component=webhook -n cert-manager --timeout=300s

 

2. Kpack: For building runnable application images from source code
 

kubectl apply -f https://github.com/pivotal/kpack/releases/download/v0.12.0/release-0.12.0.yaml

# Wait for kpack pods to be ready (optional, but good practice)
kubectl wait --for=condition=ready pod -l app=kpack-controller -n kpack --timeout=300s

 

3. Contour: The ingress controller for Korifi, implementing Gateway API
 

kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

kubectl wait --for=condition=ready pod -l app=contour -n projectcontour --timeout=300s

 

# Create the GatewayClass for Contour
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: ${GATEWAY_CLASS_NAME}
spec:
  controllerName: projectcontour.io/gateway-controller
EOF

 

4. Metrics Server: Used for process stats 

 

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml 
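
Note that on a local kind cluster, metrics-server typically cannot verify the kubelet’s self-signed certificate. A common workaround (an assumption here; suitable only for local experiments, not production) is to add the --kubelet-insecure-tls flag:

kubectl patch deployment metrics-server -n kube-system --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'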

 

5. Service Binding Runtime 

 

kubectl apply -f https://github.com/servicebinding/runtime/releases/download/v1.0.0/servicebinding-runtime-v1.0.0.yaml 

 

Section 4: Prepare the Kubernetes Cluster 

Korifi uses specific namespaces and needs credentials for a container registry to store built application images. Create namespaces using the following commands: 

# Create root and Korifi namespaces
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: ${ROOT_NAMESPACE}
  labels:
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/enforce: restricted
EOF

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: ${KORIFI_NAMESPACE}
  labels:
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/enforce: restricted
EOF

 

Next, create a secret to be used to access the container registry. This example uses Docker Hub, but any other container registry works as well. 

# This example uses placeholder values for demonstration.
# Replace with your actual credentials.
export DOCKER_USERNAME="<your-dockerhub-username>"
export DOCKER_PASSWORD="<your-dockerhub-password>"  # Or a personal access token
export DOCKER_SERVER="https://index.docker.io/v1/"  # Or your registry host

 

Next, create the Kubernetes Secret object in the root namespace. This kubectl command creates a Secret of type docker-registry, which stores the credentials Korifi needs to access the container registry where built application images are stored. 

kubectl --namespace "$ROOT_NAMESPACE" create secret docker-registry image-registry-credentials \
  --docker-username="$DOCKER_USERNAME" \
  --docker-password="$DOCKER_PASSWORD" \
  --docker-server="$DOCKER_SERVER"
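
As a quick sanity check, confirm the secret landed in the root namespace:

kubectl get secret image-registry-credentials -n "$ROOT_NAMESPACE"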

 

Section 5: Install Korifi Using Helm 

 

# Note: image repository references use the registry host (index.docker.io for
# Docker Hub), not the v1 auth URL stored in DOCKER_SERVER above.
helm install korifi https://github.com/cloudfoundry/korifi/releases/download/${KORIFI_VERSION}/korifi-${KORIFI_VERSION}.tgz \
  --namespace="${KORIFI_NAMESPACE}" \
  --set=generateIngressCertificates=true \
  --set=rootNamespace="${ROOT_NAMESPACE}" \
  --set=adminUserName="${ADMIN_USERNAME}" \
  --set=api.apiServer.url="api.${BASE_DOMAIN}" \
  --set=defaultAppDomainName="apps.${BASE_DOMAIN}" \
  --set=containerRepositoryPrefix="index.docker.io/${DOCKER_USERNAME}/" \
  --set=kpackImageBuilder.builderRepository="index.docker.io/${DOCKER_USERNAME}/kpack-builder" \
  --set=networking.gatewayClass=${GATEWAY_CLASS_NAME} \
  --wait

 

There are no kind-specific configurations needed for Korifi itself beyond ensuring the cluster is running correctly and kubectl is configured to interact with it. 
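
With the chart installed, a minimal smoke test looks something like this, assuming the BASE_DOMAIN and cf-admin values set earlier (the org, space and app names are placeholders):

cf api https://api.${BASE_DOMAIN} --skip-ssl-validation  # locally generated certs are self-signed
cf login                                                 # select the cf-admin identity when prompted
cf create-org demo && cf target -o demo
cf create-space dev && cf target -o demo -s dev
cf push my-app                                           # run from your application's source directory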

Outcomes You Can Now Unlock 

Both k0rdent and Cloud Foundry Korifi operate on Kubernetes, but they do so at different levels of abstraction. k0rdent focuses on the infrastructure (the cluster itself), while Korifi focuses on the developer experience and application workloads running on that cluster. 

 

Here is how the two compare, capability by capability, with k0rdent covering the Kubernetes cluster life cycle and Cloud Foundry Korifi covering the application life cycle.

Provisioning and Deployment 
  • k0rdent: Provisions the entire Kubernetes cluster. It automates the creation of the control plane, worker nodes (node pools) and foundational networking and storage configurations on a cloud provider (such as AWS, GCP and Azure). 
  • Korifi: Deploys application source code. It takes developer code, uses buildpacks to automatically create a container image and deploys it as a running workload (e.g., a Kubernetes deployment and pods) on an existing cluster. 

Configuration and Management 
  • k0rdent: Manages the cluster’s configuration. This includes defining Kubernetes versions, instance types for nodes, network CIDRs and integrating core add-ons such as a CNI (networking) or CSI (storage) driver. 
  • Korifi: Manages the application’s configuration. This includes setting environment variables, defining resource requests (memory, CPU) and managing application-specific settings through a simple command-line interface. 

Scaling 
  • k0rdent: Scales the cluster infrastructure. It handles scaling at the node level by adding or removing worker nodes from a node pool (horizontal scaling) or changing the instance size of the nodes (vertical scaling). 
  • Korifi: Scales the application workload. It handles scaling at the application instance level by increasing or decreasing the number of running pods for an application, either manually or based on load. 

Life Cycle and Upgrades 
  • k0rdent: Manages the cluster’s life cycle. It automates complex operations such as upgrading the Kubernetes version of the control plane and worker nodes or updating cluster-wide add-ons (e.g., an ingress controller). 
  • Korifi: Manages the application’s life cycle. It automates application updates by handling rolling deployments of new code versions with zero downtime. The developer simply pushes a new version of their code. 

Abstraction Layer 
  • k0rdent: Abstracts the underlying cloud provider. It provides a unified way to manage clusters across different clouds, so the operator doesn’t need to be an expert in the specifics of EKS, GKE or AKS APIs. 
  • Korifi: Abstracts the underlying Kubernetes complexity. It provides a high-level, developer-friendly experience (cf push) so developers don’t need to write or manage YAML files for deployments, services, ingress, etc. 

Networking and Routing 
  • k0rdent: Manages cluster-level ingress and networking. It is responsible for installing and configuring components such as an ingress controller that manages how external traffic enters the cluster as a whole. 
  • Korifi: Manages application-specific routing. It creates and manages routes (URLs) that map to specific applications, abstracting the creation of Kubernetes Service and Ingress or Gateway API objects for the developer. 

Service Integration and Binding 
  • k0rdent: Integrates the cluster with infrastructure services. It configures the necessary drivers and permissions for the cluster to use cloud services such as block storage (e.g., EBS, Persistent Disk) or IAM roles. 
  • Korifi: Binds applications to backing services. It manages the connection between a running application and services such as databases or message queues by injecting credentials and connection details as Kubernetes Secrets. 

 

This demonstrates how the two tools complement each other; in the hands of platform engineering teams, they can help unlock a ‘golden’ path for both application workloads and platform projects. 

When used in tandem, k0rdent and Cloud Foundry Korifi create a comprehensive and streamlined cloud-native platform that effectively serves both platform operators and application developers. k0rdent empowers the platform or SRE team to automate and standardize the management of the underlying infrastructure, allowing them to efficiently provision, upgrade and scale secure and reliable Kubernetes clusters. With this robust foundation in place, Korifi provides a high-productivity PaaS experience on top, abstracting away Kubernetes complexity for application developers. This enables developers to focus entirely on writing code and shipping features using simple commands like cf push, without needing to manage complex YAML files or understand the intricacies of the cluster. This powerful combination establishes a clear separation of concerns, boosting productivity for both teams and enabling an organization to accelerate its application delivery life cycle while maintaining operational excellence. 
