Cluster API Introduction
Cluster API is a platform-agnostic way of managing infrastructure resources (e.g. VMs) using Kubernetes.
This provides a number of advantages including:
Familiarity for experienced K8s admins, as nodes act similarly to pods, etc.
Tooling which directly supports OpenStack, including resource creation and management.
Support and fixes for newer Kubernetes versions without waiting for OpenStack upgrades.
The Kubernetes image provided is based on upstream’s Ubuntu image, which has been forked to comply with UKRI security policy here.
Unlike Magnum, the supported Kubernetes version is completely decoupled from OpenStack, with upstream typically supporting the N-1 minor version (where N is the latest release).
Set up a new cluster using Cluster API
Deployment Considerations
Management Machine
For production workloads, it’s recommended to create a dedicated cluster management machine. This should only be accessible to Cluster Administrators and holds a copy of the kubeconfig, which (unlike Magnum) cannot be recovered after generation. Additionally, a known-working dedicated machine can be used to quickly gain access and perform recovery or upgrades as required.
Account Security
A clouds.yaml file with the application credentials is also required. The credentials should not be unrestricted, as a compromised cluster would allow an attacker to create additional credentials. This file should be removed or restricted on shared machines to prevent unauthorized access.
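For example, on a shared machine the file might be restricted to your own user, or removed once it is no longer needed (a minimal illustration; adapt to local policy):
chmod 600 clouds.yaml   # owner read/write only
# or, once the credentials are no longer needed on this machine:
shred -u clouds.yaml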
Automatic Bootstrap
An automatic bootstrap script is recommended. This script will:
Update your VM and install kubectl, Helm and MicroK8s (to bootstrap)
Verify your application credentials, and populate the project ID automatically
Create a bootstrap cluster, install the OpenStack Provider and clusterctl
Requirements
An Ubuntu VM
Ensure a dedicated floating IP exists. If required, allocate an IP to the project from the External pool.
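If you prefer the OpenStack CLI over Horizon, allocating a floating IP from the External pool might look like the following (assumes the CLI is installed and configured; either method works):
openstack floating ip create External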
# SSH into your VM
git clone https://github.com/stfc/cloud-capi-values management_cluster
Copy your application credentials, clouds.yaml, into the management_cluster dir. You will need to add /v3 to the end of the auth URL in the clouds.yaml file.
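For example, the edited auth URL line should end in /v3:
auth_url: https://openstack.stfc.ac.uk:5000/v3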
Add your allocated floating IP to user-values.yaml.
Run the bootstrap script
cd management_cluster && ./bootstrap.sh
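Once the script completes, a quick sanity check (illustrative; assumes the OpenStack provider installs into capo-system, the namespace used later in this guide):
kubectl get nodes                 # the bootstrap (MicroK8s) node should be Ready
kubectl get pods -n capo-system   # the CAPO controller should be Running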
Configuring the management cluster
YAML Files
The configuration is spread across multiple YAML files to make it easier to manage. These are as follows:
values.yaml contains the default values for the cluster using the STFC Cloud service. These should not be changed.
user-values.yaml contains values that must be set by the user. There are also optional values that can be changed by advanced users.
flavors.yaml contains the OpenStack flavors to use for worker nodes. Common flavors are provided and can be uncommented and changed as required. The cluster will use l3.nano workers by default if unspecified.
clouds.yaml contains the OpenStack application credentials. This file should be in the same directory as the other YAML files.
The cloud team will periodically update flavors.yaml, values.yaml, and user-values.yaml to reflect changes in the STFC Cloud service. These include new versions of Kubernetes or machine images, best practices, new flavors, etc. Users can pull these changes in the future by running git pull in the cloned cloud-capi-values directory.
Description of values
The Floating IP in user-values.yaml must be set. Optional values may also be changed as required.
Check that the Kubernetes version in machineImage: matches the kubernetesVersion:.
The flavors.yaml file contains the OpenStack flavors to use for worker nodes. These can be changed as required but will use l3.nano by default if unspecified.
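As a minimal sketch of the values described above (key names and values here are illustrative assumptions; the authoritative keys are documented by the comments inside user-values.yaml itself):
# Illustrative sketch only -- key names are assumptions, check user-values.yaml itself
apiServerFloatingIP: "130.246.x.x"       # hypothetical key: the dedicated floating IP allocated earlier
kubernetesVersion: "1.30.2"              # hypothetical value: must correspond to machineImage below
machineImage: ubuntu-2204-kube-v1.30.2   # hypothetical image name: must match kubernetesVersion above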
Deploying Cluster
This assumes you have completed the bootstrap steps and customised your cluster as described above, if required.
Deploy your management cluster through Helm:
export CLUSTER_NAME="demo-cluster" # or your cluster name
helm upgrade $CLUSTER_NAME capi/openstack-cluster --install -f values.yaml -f clouds.yaml -f user-values.yaml -f flavors.yaml -n clusters
When the deployment is complete, clusterctl will report the cluster as Ready: True:
clusterctl describe cluster $CLUSTER_NAME -n clusters
Progress can be monitored with the following command in a separate terminal:
kubectl logs deploy/capo-controller-manager -n capo-system -f
Once this is deployed you can validate your cluster is up with:
clusterctl get kubeconfig $CLUSTER_NAME -n clusters > $CLUSTER_NAME.kubeconfig
KUBECONFIG=$CLUSTER_NAME.kubeconfig kubectl get nodes
Moving the control plane
At this point the control plane is still on the bootstrap cluster (minikube or MicroK8s). This is not recommended for long-lived or production workloads. We can pivot the cluster to self-manage:
After moving the control plane the kubeconfig cannot be retrieved if lost. Ensure a copy of the kubeconfig is placed into secure storage for production clusters.
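For example (the destination path is a placeholder for your secure storage):
cp -v $CLUSTER_NAME.kubeconfig /path/to/secure/storage/$CLUSTER_NAME.kubeconfig
chmod 600 /path/to/secure/storage/$CLUSTER_NAME.kubeconfig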
Moving to a self-managed cluster
Install clusterctl into the new cluster and move the control plane
clusterctl init --infrastructure=openstack:v0.10.4 --kubeconfig=$CLUSTER_NAME.kubeconfig
clusterctl move --to-kubeconfig $CLUSTER_NAME.kubeconfig -n clusters
Ensure the control plane is now running on the new cluster:
kubectl get kubeadmcontrolplane --kubeconfig=$CLUSTER_NAME.kubeconfig -n clusters
Using the new control plane by default
Replace the existing kubeconfig with the new cluster’s kubeconfig
cp -v $CLUSTER_NAME.kubeconfig ~/.kube/config
# Ensure kubectl now uses the new kubeconfig and displays the correct nodes:
kubectl get nodes
Ensure that it does not say either minikube or microk8s (i.e. your local machine).
# Update the cluster to ensure everything lines up with your helm chart
helm upgrade cluster-api-addon-provider capi-addons/cluster-api-addon-provider --install --wait --version 0.6.1 -n clusters
helm upgrade $CLUSTER_NAME capi/openstack-cluster --install -f values.yaml -f clouds.yaml -f user-values.yaml -f flavors.yaml --wait -n clusters
Check the cluster status
clusterctl describe cluster $CLUSTER_NAME -n clusters
Creating workload cluster(s)
For production workloads we recommend a management cluster which then controls one or more child clusters. These child clusters would include prod, staging, developer areas, etc.
Copy or clone the cloud values to a new directory. E.g. for a prod-cluster:
cp -rv management_cluster prod-cluster-values
Modify the user-values and flavors as required.
Change the floating IP to a new one; each cluster must have its own FIP and load balancer.
Deploy this cluster; its nodes are managed by the management cluster:
cd prod-cluster-values
export CLUSTER_NAME=prod-cluster
helm upgrade $CLUSTER_NAME capi/openstack-cluster --install -f values.yaml -f clouds.yaml -f user-values.yaml -f flavors.yaml -n clusters
Unlike the previous deployment, don’t move the control plane. This child cluster should be managed by the parent cluster to promote simple disaster recovery, upgrades, replication, etc.
clusterctl get kubeconfig $CLUSTER_NAME -n clusters > $CLUSTER_NAME.kubeconfig
export KUBECONFIG=$(pwd)/$CLUSTER_NAME.kubeconfig
kubectl get nodes
Distribute / store the kubeconfig as required for each cluster
kubectl can interact with multiple clusters from the same ~/.kube/config file; however, we recommend single-purpose management VMs for each cluster:
A production cluster management machine would have prod-cluster.kubeconfig stored in ~/.kube/config.
A developer would take dev-cluster.kubeconfig and store it in their own ~/.kube/config.
This creates role-based access and prevents kubectl delete --all moments wiping out an entire tree of clusters.
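As a minimal sketch of this single-purpose pattern (paths are illustrative):
# On a developer's machine, only the dev cluster's kubeconfig is present
mkdir -p ~/.kube
cp dev-cluster.kubeconfig ~/.kube/config
kubectl config current-context   # should reference only the dev cluster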
Manual Deployment
Background
A Kubernetes cluster is required in order to create a cluster. To break this “chicken-and-egg” problem, a temporary minikube cluster is created to bootstrap the main cluster. It is not recommended to use this cluster for any production workloads.
Bootstrap Machine Prep
An Ubuntu machine is used to provide the minikube cluster. This should use the normal cloud Ubuntu image, not the stripped-down CAPI image designed for nodes.
The following packages are required; install and configure them with the commands below:
Docker is used to run a Kubernetes staging cluster locally
# Docker
sudo apt update && sudo apt install -y docker.io
sudo usermod -aG docker $USER && newgrp docker
You will need to exit and log in again if you have added yourself to the docker group (usermod). This is to pick up the new group membership.
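To confirm the group change has taken effect (an illustrative check):
groups      # 'docker' should appear in the list
docker ps   # should succeed without sudo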
Snap is used to install kubectl, Helm and yq
sudo apt-get update && sudo apt-get install -y snapd
export PATH=$PATH:/snap/bin
sudo snap install kubectl --classic
sudo snap install helm --classic
sudo snap install yq
Install minikube and start a cluster:
wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube-linux-amd64
sudo mv minikube-linux-amd64 /usr/local/bin/minikube
minikube start --driver=docker
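Once started, you can verify the bootstrap cluster is healthy (an optional check):
minikube status
kubectl get nodes   # should show a single Ready minikube node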
Install clusterctl and the OpenStack provider into your minikube cluster:
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.7.4/clusterctl-linux-amd64 -o clusterctl
chmod +x ./clusterctl
sudo mv ./clusterctl /usr/local/bin/clusterctl
clusterctl init --infrastructure=openstack:v0.10.4
If you run into GitHub rate limiting you will have to generate a personal API token as described here. This only requires the repo scope, and is set on the CLI as follows:
export GITHUB_TOKEN=<your token>
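A quick sanity check that clusterctl is installed correctly:
clusterctl version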
These setup steps only have to be completed once per management machine.
OpenStack Preparation
Ensure a dedicated floating IP exists. If required, allocate an IP to the project from the External pool.
Clone https://github.com/stfc/cloud-capi-values into a directory called management_cluster:
git clone https://github.com/stfc/cloud-capi-values management_cluster
clouds.yaml Prep
Generate your application credentials. It is recommended you use Horizon (the web interface) to download the clouds.yaml file.
The clouds.yaml file should have the following format:
clouds:
  openstack:
    auth:
      auth_url: https://openstack.stfc.ac.uk:5000/v3
      application_credential_id: ""
      application_credential_secret: ""
    region_name: "RegionOne"
    interface: "public"
    identity_api_version: 3
    auth_type: "v3applicationcredential"
Add the UUID of the project you want to create the cluster in. This is the project ID under the OpenStack section, which is omitted by default. It can be found here.
Your clouds.yaml should now look like:
clouds:
  openstack:
    auth:
      auth_url: https://openstack.stfc.ac.uk:5000/v3
      application_credential_id: ""
      application_credential_secret: ""
      project_id: ""
    region_name: "RegionOne"
    interface: "public"
    identity_api_version: 3
    auth_type: "v3applicationcredential"
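As an optional check that the file parses and the project ID is present, you can use the yq installed earlier:
yq '.clouds.openstack.auth.project_id' clouds.yaml   # should print your project UUID, not null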
Place this file in the management_cluster directory you cloned earlier.
Deploy the cluster requirements:
helm repo add capi https://azimuth-cloud.github.io/capi-helm-charts
helm repo add capi-addons https://azimuth-cloud.github.io/cluster-api-addon-provider
helm repo update
kubectl create namespace clusters
helm upgrade cluster-api-addon-provider capi-addons/cluster-api-addon-provider --install --wait -n clusters --version 0.5.9
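After the chart installs, a quick check that the addon provider is running (illustrative):
helm list -n clusters
kubectl get pods -n clusters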