ClusterClass
The ClusterClass feature introduces a new way to create clusters which reduces boilerplate and enables flexible and powerful customization of clusters. ClusterClass is a powerful abstraction implemented on top of existing interfaces, offering a set of tools and operations to streamline cluster lifecycle management while maintaining the same underlying API.
This tutorial covers using Cluster API and ClusterClass to create a Kubernetes cluster and to perform a one-touch Kubernetes version upgrade.
Installation
Prerequisites
Install tools
This guide requires that the following tools be installed:
- Install and setup kubectl and clusterctl
- Install Kind and Docker
Enable experimental features
ClusterClass is currently behind a feature gate that needs to be enabled. This tutorial also uses another experimental gated feature, Cluster Resource Set. It is not required for ClusterClass to work, but it is used in this tutorial to set up networking.
To enable these features, set the respective environment variables by running:
export EXP_CLUSTER_RESOURCE_SET=true
export CLUSTER_TOPOLOGY=true
This ensures that the Cluster Topology and Cluster Resource Set features are enabled when the providers are initialized.
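If you prefer not to rely on shell environment variables, clusterctl can also read these values from its configuration file. A minimal sketch, assuming the default config location:

# ~/.cluster-api/clusterctl.yaml
CLUSTER_TOPOLOGY: "true"
EXP_CLUSTER_RESOURCE_SET: "true"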
Create a CAPI management cluster
A script to set up a Kind cluster pre-configured for CAPD (the docker infrastructure provider) can be found in the hack folder of the core repo.
To set up the cluster from the root of the repo run:
./hack/kind-install-for-capd.sh
clusterctl init --infrastructure docker
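To confirm that the providers came up and that the feature gates took effect, one optional check is to inspect the controller pods and the core manager's arguments; capi-system and capi-controller-manager are the default names used by clusterctl:

# Check that the core and Docker provider controllers are running.
kubectl get pods --all-namespaces | grep -E 'capi|capd'

# The --feature-gates argument should include ClusterTopology=true
# and ClusterResourceSet=true.
kubectl -n capi-system get deployment capi-controller-manager \
  -o jsonpath='{.spec.template.spec.containers[0].args}'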
Create a new Cluster using ClusterClass
Create the ClusterClass and templates
With a CAPD management cluster initialized and the Cluster Topology feature gate enabled, the next step is to create the ClusterClass and its referenced templates. The ClusterClass, which comes first in the YAML below, contains references to the templates needed to build a full cluster, defining a shape that can be reused for any number of clusters.
- ClusterClass
- For the InfrastructureCluster:
- DockerClusterTemplate
- For the ControlPlane:
- KubeadmControlPlaneTemplate
- DockerMachineTemplate
- For the worker nodes:
- DockerMachineTemplate
- KubeadmConfigTemplate
The full ClusterClass definition can also be found in the CAPI repo.
ClusterClass
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  name: clusterclass
  namespace: default
spec:
  controlPlane:
    metadata: {}
    ref:
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: KubeadmControlPlaneTemplate
      name: clusterclass-control-plane
      namespace: default
    machineInfrastructure:
      ref:
        kind: DockerMachineTemplate
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        name: "clusterclass-control-plane"
        namespace: default
  infrastructure:
    ref:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerClusterTemplate
      name: clusterclass-infrastructure
      namespace: default
  workers:
    machineDeployments:
    - class: linux-worker
      template:
        metadata: {}
        bootstrap:
          ref:
            kind: KubeadmConfigTemplate
            apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
            name: "clusterclass-md-1"
        infrastructure:
          ref:
            kind: DockerMachineTemplate
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
            name: "clusterclass-md-1"
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerClusterTemplate
metadata:
  name: clusterclass-infrastructure
  namespace: default
---
kind: KubeadmControlPlaneTemplate
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
  name: "clusterclass-control-plane"
  namespace: default
spec:
  template:
    spec:
      replicas: 1
      machineTemplate:
        infrastructureRef:
          kind: DockerMachineTemplate
          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
          name: "clusterclass-control-plane"
          namespace: default
      kubeadmConfigSpec:
        clusterConfiguration:
          controllerManager:
            extraArgs: { enable-hostpath-provisioner: 'true' }
          apiServer:
            certSANs: [ localhost, 127.0.0.1 ]
        initConfiguration:
          nodeRegistration:
            criSocket: /var/run/containerd/containerd.sock
            kubeletExtraArgs:
              cgroup-driver: cgroupfs
              eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
        joinConfiguration:
          nodeRegistration:
            criSocket: /var/run/containerd/containerd.sock
            kubeletExtraArgs:
              cgroup-driver: cgroupfs
              eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
      version: v1.21.2
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: "clusterclass-control-plane"
  namespace: default
spec:
  template:
    spec:
      extraMounts:
      - containerPath: "/var/run/docker.sock"
        hostPath: "/var/run/docker.sock"
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: "clusterclass-md-1"
  namespace: default
spec:
  template:
    spec: {}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: "clusterclass-md-1"
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cgroup-driver: cgroupfs
            eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
To create the objects on your local cluster run:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api/main/docs/book/src/tasks/yamls/clusterclass.yaml
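To confirm the objects were created, you can list them by kind; the resource names below follow directly from the YAML above:

kubectl get clusterclass,dockerclustertemplate,kubeadmcontrolplanetemplate,dockermachinetemplate,kubeadmconfigtemplate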
Enable networking for workload clusters
To make sure workload clusters come up with a functioning network, a kindnet ConfigMap and an accompanying ClusterResourceSet are required. Kindnet only offers networking for clusters built with Kind and CAPD; it can be substituted with any other networking solution for Kubernetes, e.g. Calico as used in the Quick Start guide.
The kindnet configuration file can be found in the CAPI repo.
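For reference, the ClusterResourceSet in that file pairs a cluster label selector with the ConfigMap carrying the kindnet manifest. A minimal sketch of the shape (the object names here are illustrative, not necessarily those used in the repo file):

apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
  name: kindnet-crs             # illustrative name
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      cni: kindnet              # matches the label set on the Cluster below
  resources:
  - kind: ConfigMap
    name: cni-kindnet           # illustrative; holds the kindnet manifest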
To create the resources run:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api/main/docs/book/src/tasks/yamls/kindnet-clusterresourceset.yaml
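You can verify both objects exist before any workload cluster is created; a ClusterResourceSetBinding recording the applied resources will only appear later, once a cluster with the matching cni: kindnet label exists:

kubectl get clusterresourceset,configmap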
Create the workload cluster
This is a Cluster definition that leverages the ClusterClass created above to define its shape.
Cluster
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: "clusterclass-quickstart"
  namespace: default
  labels:
    cni: kindnet
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["10.128.0.0/12"]
    pods:
      cidrBlocks: ["192.168.0.0/16"]
    serviceDomain: "cluster.local"
  topology:
    class: clusterclass
    version: v1.21.2
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
      - class: linux-worker
        name: linux-workers
        replicas: 1
Create the Cluster object from the file in the CAPI repo with:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api/main/docs/book/src/tasks/yamls/clusterclass-quickstart.yaml
Verify the workload cluster is running
The cluster will now start provisioning. You can check status with:
kubectl get cluster
Get a view of the cluster and its resources as they are created by running:
clusterctl describe cluster clusterclass-quickstart
To verify the control plane is up:
kubectl get kubeadmcontrolplane
The output should be similar to:
NAME                      INITIALIZED   API SERVER AVAILABLE   VERSION   REPLICAS   READY   UPDATED   UNAVAILABLE
clusterclass-quickstart   true                                 v1.21.2   1                  1         1
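Once INITIALIZED is true you can also talk to the workload cluster directly, for example by fetching its kubeconfig with clusterctl and listing nodes (the output path here is just an example):

clusterctl get kubeconfig clusterclass-quickstart > /tmp/clusterclass-quickstart.kubeconfig
kubectl --kubeconfig /tmp/clusterclass-quickstart.kubeconfig get nodes

The nodes will report Ready once kindnet, deployed by the ClusterResourceSet above, is running.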
Upgrade a Cluster using Managed Topology
The spec.topology field added to the Cluster object as part of ClusterClass allows changes made on the Cluster to be propagated across all relevant objects. This turns a Kubernetes cluster upgrade into a one-touch operation.
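Because the desired version lives in that single field, you can read it back directly, for example:

kubectl get cluster clusterclass-quickstart -o jsonpath='{.spec.topology.version}'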
Looking at the newly-created cluster, the version of the control plane and the machine deployments is v1.21.2.
> kubectl get kubeadmcontrolplane,machinedeployments
NAME                                                                              CLUSTER                   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE     VERSION
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/clusterclass-quickstart-XXXX    clusterclass-quickstart   true          true                   1          1       1         0             2m21s   v1.21.2

NAME                                                                              CLUSTER                   REPLICAS   READY   UPDATED   UNAVAILABLE   PHASE     AGE     VERSION
machinedeployment.cluster.x-k8s.io/clusterclass-quickstart-linux-workers-XXXX     clusterclass-quickstart   1          1       1         0             Running   2m21s   v1.21.2
To update the Cluster, the only change needed is to the version field under spec.topology in the Cluster object. Change v1.21.2 to v1.22.0 as below.
kubectl patch cluster clusterclass-quickstart --type json --patch '[{"op": "replace", "path": "/spec/topology/version", "value": "v1.22.0"}]'
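Equivalently, you can run kubectl edit cluster clusterclass-quickstart and make the change by hand; the only part of the object that needs to change is:

spec:
  topology:
    version: v1.22.0   # previously v1.21.2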
The upgrade will take some time to roll out, as it proceeds machine by machine, with machines running the older version only being removed after healthy machines at the newer version come online.
To watch the update progress run:
watch kubectl get kubeadmcontrolplane,machinedeployments
After a few minutes the upgrade will be complete and the output will be similar to:
NAME                                                                              CLUSTER                   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE     VERSION
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/clusterclass-quickstart-XXXX    clusterclass-quickstart   true          true                   1          1       1         0             7m29s   v1.22.0

NAME                                                                              CLUSTER                   REPLICAS   READY   UPDATED   UNAVAILABLE   PHASE     AGE     VERSION
machinedeployment.cluster.x-k8s.io/clusterclass-quickstart-linux-workers-XXXX     clusterclass-quickstart   1          1       1         0             Running   7m29s   v1.22.0
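As a final check, the nodes of the workload cluster should now report the new version. Reusing the kubeconfig fetched earlier (the path is the one chosen above):

kubectl --kubeconfig /tmp/clusterclass-quickstart.kubeconfig get nodes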
Clean Up
Delete the workload cluster:
kubectl delete cluster clusterclass-quickstart
Delete the management cluster:
kind delete clusters capi-test
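To confirm the management cluster is gone, kind should no longer list it:

kind get clusters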