Magnum
- 1 OpenStack Magnum: Container-as-a-Service
- 2 Create a Kubernetes Cluster
- 3 Submitting jobs to a Kubernetes Cluster
- 3.1 Submitting jobs
- 3.2 Parallel execution
- 3.3 Scheduling jobs
- 4 JupyterHub on Kubernetes
- 4.1 Creating a Kubernetes Cluster
- 4.2 Helm v3
- 4.3 JupyterHub
- 5 Autoscaling Clusters
- 6 OpenStack Magnum Users
- 7 References:
OpenStack Magnum: Container-as-a-Service
Deprecated
Magnum is deprecated and will be replaced in the future with Cluster API.
Magnum is the Container-as-a-Service for OpenStack and can be used to create and launch clusters.
The clusters Magnum supports are:
Kubernetes
Swarm
Mesos
Magnum uses the following OpenStack Components:
Keystone: for multi-tenancy
Nova: compute service
Heat: orchestration service used to deploy the cluster
Neutron: networking
Glance: virtual machine images
Cinder: volume service
Magnum uses Cluster Templates to define the desired cluster, and passes the template for the cluster to Heat to create the user’s cluster.
python-magnumclient
To create or manage clusters with Magnum from the command line, you will need to install the Magnum CLI. This can be done using pip:
pip install python-magnumclient
To test that we can run OpenStack commands for container orchestration engines (coe), we can run the following command:
openstack coe cluster list #list clusters in the project
This should return an empty line if there are no clusters in the project, or a table similar to the following:
+--------+---------------------------+-----------+------------+--------------+-----------------+---------------+
| uuid   | name                      | keypair   | node_count | master_count | status          | health_status |
+--------+---------------------------+-----------+------------+--------------+-----------------+---------------+
| UUID_1 | test-1                    | mykeypair | 1          | 1            | CREATE_COMPLETE | UNKNOWN       |
| UUID_2 | kubernetes-cluster-test-2 | mykeypair | 1          | 1            | CREATE_COMPLETE | UNKNOWN       |
+--------+---------------------------+-----------+------------+--------------+-----------------+---------------+
Other commands available from python-magnumclient include:
To view details of a cluster:
To view the list of cluster templates:
To view the details of a specific template:
To delete a cluster or cluster template:
Note: Cluster Templates can only be deleted if there are no clusters using the template.
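For example, where <cluster> and <template> are the name or UUID of a cluster or cluster template:
openstack coe cluster show <cluster>              #view details of a cluster
openstack coe cluster template list               #list cluster templates in the project
openstack coe cluster template show <template>    #view details of a specific template
openstack coe cluster delete <cluster>            #delete a cluster
openstack coe cluster template delete <template>  #delete a cluster template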
Creating Clusters
Clusters can be created using:
OpenStack CLI
Horizon Web UI
Heat Templates: using the resources OS::Magnum::ClusterTemplate and OS::Magnum::Cluster
The documentation Create A Kubernetes Cluster has examples for handling cluster templates and creating a Kubernetes cluster in the command line.
Create a Cluster using OpenStack CLI
Create A Cluster Template
To create a cluster template, we can use the following command:
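The general form is shown below; the options are described in the rest of this section, and only the options you need have to be supplied:
openstack coe cluster template create <name> --coe <coe> --image <image> --external-network <external-network> [other options]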
<name>: Name of the ClusterTemplate to create. The name does not have to be unique but the template UUID should be used to select a ClusterTemplate if more than one template has the same name.
--coe <coe>: Container Orchestration Engine to use. Supported drivers are: kubernetes, swarm, mesos.
--image <image>: Name or UUID of the base image to boot servers for the clusters.
Images which OpenStack Magnum supports:
| COE | os_distro |
|---|---|
| Kubernetes | fedora-atomic, coreos |
| Swarm | fedora-atomic |
| Mesos | ubuntu |
--keypair <keypair>
: SSH keypair to configure in servers for ssh access. The login name is specific to the cluster driver.
fedora-atomic:
ssh -i <private-key> fedora@<ip-address>
coreos:
ssh -i <private-key> core@<ip-address>
--external-network <external-network>
: name or ID of a Neutron network to provide connectivity to the external internet.
--public
Access to a ClusterTemplate is, by default, limited to the admin, the owner and users within the same tenant as the owner. Using this flag makes the template accessible to other users. The default is not public.
--server-type <server-type>
: Servers can be VM or bare metal (bm). The default is vm.
--network-driver <network-driver>
Name of a network driver for providing networks for the containers. This is separate from the Neutron network for the cluster. The drivers Magnum supports are:
| COE | Network Driver | Default |
|---|---|---|
| Kubernetes | flannel, calico | flannel |
| Swarm | docker, flannel | flannel |
| Mesos | docker | docker |
Note: For Kubernetes clusters, we are using the flannel network driver.
--dns-nameserver <dns-nameserver>
: The DNS nameserver for the servers and containers in the cluster to use. The default is 8.8.8.8.
--flavor <flavor>
: The flavor to use for worker nodes. The default is m1.small. Can be overridden at cluster creation.
--master-flavor <master-flavor>
: The flavor to use for master nodes. The default is m1.small. Can be overridden at cluster creation.
--http-proxy <http-proxy>
: The IP address for a proxy to use when direct http access from the servers to sites on the external internet is blocked. The format is a URL including a port number. The default is None.
--https-proxy <https-proxy>
: The IP address for a proxy to use when direct https access from the servers to sites on the external internet is blocked. The format is a URL including a port number. The default is None.
--no-proxy <no-proxy>
: When a proxy server is used, some sites should not go through the proxy and should be accessed normally. In this case, you can specify these sites as a comma separated list of IPs. The default is None.
--docker-volume-size <docker-volume-size>
: If specified, container images will be stored in a Cinder volume of the specified size in GB. Each cluster node will have a volume of this size attached. If not specified, images will be stored in the compute instance's local disk. For the devicemapper storage driver, a volume must be specified and the minimum size is 3 GB. For the overlay and overlay2 storage drivers, the minimum is 1 GB or None (no volume). This value can be overridden at cluster creation.
--docker-storage-driver <docker-storage-driver>
: The name of the driver used to manage the storage for the images and the containers' writable layer. The default is devicemapper.
--labels <KEY1=VALUE1,KEY2=VALUE2;KEY3=VALUE3>
: Arbitrary labels in the form of key=value pairs. The accepted keys and valid values are defined in the cluster drivers. They are used as a way to pass additional parameters that are specific to a cluster driver. The value can be overridden at cluster creation.
--tls-disabled
Transport Layer Security (TLS) is normally enabled to secure the cluster. The default is TLS enabled.
--registry-enabled
Docker images by default are pulled from the public Docker registry, but in some cases, users may want to use a private registry. This option provides an alternative registry based on the Registry V2: Magnum will create a local registry in the cluster backed by swift to host the images. Refer to Docker Registry 2.0 for more details. The default is to use the public registry.
--master-lb-enabled
Since multiple masters may exist in a cluster, a load balancer is created to provide the API endpoint for the cluster and to direct requests to the masters. As we have Octavia enabled, Octavia creates these load balancers. The default is that master load balancers are created.
Create a Cluster
We can create clusters using a cluster template from our template list. To create a cluster, we use the command:
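The general form is:
openstack coe cluster create <name> --cluster-template <template> --master-count <master-count> --node-count <node-count> --keypair <keypair>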
Note: To have master load balancers enabled, it is recommended to use the kubernetes-ha-master-v1_14_3 template, or to create a new cluster template that includes the flag --master-lb-enabled.
Labels
Labels are used by OpenStack Magnum to define a range of parameters, such as the Kubernetes version, whether autoscaling and autohealing are enabled, the version of draino to use, and so on. Any labels included at cluster creation override the labels in the cluster template. A table of all the labels which Magnum uses can be found here:
https://docs.openstack.org/magnum/train/user/
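For example, labels can be supplied with the --labels flag when creating a template or a cluster (the label values here are illustrative):
openstack coe cluster create mycluster --cluster-template <template> --labels kube_tag=v1.15.7,auto_scaling_enabled=true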
Note: For the OpenStack Train release, Magnum only offers labels for installing Helm 2 and Tiller. However, Helm 3 can be installed onto the master node after the cluster has been created.
Horizon Web Interface
Clusters can also be created using the Horizon Web Interface. Clusters and their templates can be found under the Container Infra section.
There are a few differences between the parameters which can be defined when creating a cluster using the CLI and the Horizon Web UI. If you are using the Horizon Web UI to create clusters, the fixed network, fixed subnet, and floating IP enabled options can only be defined in the cluster template.
Heat Templates
Clusters can also be created using a Heat template with the resources OS::Magnum::ClusterTemplate and OS::Magnum::Cluster.
OS::Magnum::ClusterTemplate
OS::Magnum::Cluster
Example Template
For example, we could have the template example.yaml, which outlines the template for a Kubernetes cluster and instructs Heat to create a cluster using this template:
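A minimal sketch, assuming the property names listed in the Heat resource documentation in the references (the image, network, keypair and flavor values are placeholders):

heat_template_version: 2018-08-31

resources:
  cluster_template:
    type: OS::Magnum::ClusterTemplate
    properties:
      name: example-k8s-template
      coe: kubernetes
      image: fedora-atomic-latest
      external_network: public
      keypair: mykeypair
      flavor: m1.small
      master_flavor: m1.small
      network_driver: flannel
      dns_nameserver: 8.8.8.8

  cluster:
    type: OS::Magnum::Cluster
    properties:
      name: example-k8s-cluster
      cluster_template: { get_resource: cluster_template }
      master_count: 1
      node_count: 1
      keypair: mykeypair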
Then we can launch this stack using:
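For example (the stack name is illustrative):
openstack stack create -t example.yaml example-cluster-stack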
To delete a cluster created using example.yaml, delete the stack that was built from it:
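For example, using the stack name from above:
openstack stack delete example-cluster-stack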
Accessing the Cluster
To access the cluster, add a floating IP to the master node and ssh using:
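For a cluster built from a fedora-atomic image:
ssh -i <private-key> fedora@<floating-ip>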
Upgrading Clusters
Rolling upgrades can be applied to Kubernetes clusters and can be used to upgrade the Kubernetes version or the node operating system version. The command is:
openstack coe cluster upgrade <cluster-id> <new-template-id>
Example
This example will go through how to upgrade an existing cluster to use Kubernetes v1.15.7.
The cluster we will update has the following features:
To upgrade the Kubernetes version for our cluster, we create a new template where we change the value of the label kube_tag from v1.14.3 to v1.15.7.
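For example (only the template name and the kube_tag label differ; the other options should match the original template):
openstack coe cluster template create kubernetes-v1-15-7 --coe kubernetes --image <image> --external-network <external-network> --keypair <keypair> --network-driver flannel --labels kube_tag=v1.15.7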
Then we apply the cluster upgrade to this cluster:
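For example, using the new template created above:
openstack coe cluster upgrade <cluster-name-or-id> kubernetes-v1-15-7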
The cluster will then move into the UPDATE_IN_PROGRESS state while it updates the Kubernetes version, and will move to the UPDATE_COMPLETE status when the upgrade is complete. We can verify that our cluster is using a different version of Kubernetes by using SSH to connect to the master node and running the following command:
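For example, kubectl reports the kubelet and container runtime version of each node:
kubectl get nodes -o wide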
We can see that the Kubernetes and Docker version have been upgraded for our cluster.
Updating Clusters
Clusters can be modified using the command:
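The general form is shown below, where <op> is one of add, replace or remove:
openstack coe cluster update <cluster> <op> <attribute>=<value> [<attribute>=<value> ...]
For example, to change the number of worker nodes:
openstack coe cluster update mycluster replace node_count=2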
The following table summarizes the possible changes that can be applied to the cluster.
| Attribute | add | replace | remove |
|---|---|---|---|
| node_count | no | add/remove nodes | reset to default of 1 |
| master_count | no | no | no |
| name | no | no | no |
| discovery_url | no | no | no |
Resize a Cluster
The size of a cluster can be changed by using the following command:
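The general form is:
openstack coe cluster resize <cluster> <node_count>
For example, to resize the worker nodegroup of mycluster to 2 nodes:
openstack coe cluster resize mycluster 2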
Create a Kubernetes Cluster
Clusters are groups of resources (nova instances, neutron networks, security groups etc.) combined to function as one system. To do this, Magnum uses Heat to orchestrate and create a stack which contains the cluster.
This documentation will focus on how to create Kubernetes clusters using OpenStack Magnum.
Magnum CLI
Any commands for creating clusters using OpenStack Magnum begin with:
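openstack coe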
In order to have the openstack commands for Magnum available to use through the CLI, you will need to install the python client for Magnum. This can be done using pip:
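pip install python-magnumclient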
Now the commands relating to the container orchestration engine, clusters, and cluster templates are available on the command line.
Cluster Templates
Clusters can be created from templates which are passed through Heat. To view the list of cluster templates which are in your project, you can use the following command:
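openstack coe cluster template list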
Templates can be created using the following command:
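openstack coe cluster template create <name> [options]   #see the options described earlier in this document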
Kubernetes Cluster Template:
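For example, a template along these lines (the image, network and flavor names are placeholders to adapt to your project):
openstack coe cluster template create kubernetes-cluster-template --coe kubernetes --image fedora-atomic-latest --external-network public --keypair mykeypair --flavor m1.small --master-flavor m1.small --network-driver flannel --docker-volume-size 3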
Create a Kubernetes Cluster
We can create a Kubernetes cluster using one of the cluster templates that are available. To create a cluster, we use the command:
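As before, the general form is:
openstack coe cluster create <name> --cluster-template <template> --master-count <master-count> --node-count <node-count> --keypair <keypair>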
For example, consider a user that wants to create a cluster using the Kubernetes cluster template. They want the cluster to have:
one master node
one worker node
their keypair mykeypair
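Such a cluster could be created with a command similar to the following (the cluster and template names are examples):
openstack coe cluster create kubernetes-cluster-test --cluster-template kubernetes-cluster-template --master-count 1 --node-count 1 --keypair mykeypair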
A cluster containing one master node and one worker node takes approximately 14 minutes to build. By default, cluster creation times out at 60 minutes.
After the cluster has been created, you can associate a floating IP to the master node and SSH into the node using:
ssh -i <mykeypair-private-key> fedora@<floating_ip>
Submitting jobs to a Kubernetes Cluster
A Kubernetes Job creates one or more pods on a cluster, with the added benefit that pods are retried until a specified number of them terminate successfully. Jobs are described by YAML and can be executed using kubectl.
Submitting jobs
Jobs are defined by a YAML config with a kind parameter of Job. Below is an example job config for computing π to 2000 places.
job.yaml
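A config along the lines of the standard example from the Kubernetes Job documentation:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4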
To run this job use:
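kubectl apply -f job.yaml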
This will result in the creation of a pod with a single container named pi that is based on the perl image. The specified command, equivalent to perl -Mbignum=bpi -wle "print bpi(2000)", will then be executed in the container. You can check on the status of the job using
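kubectl describe jobs/pi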
Then the output from the container can be obtained through
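kubectl logs job/pi    #or find the pod with kubectl get pods --selector=job-name=pi and read its logs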
Important
Parallel execution
The above example runs a single pod until completion or 4 successive failures. It is also possible to execute multiple instances of pods in parallel. For a simple example, we can require a fixed completion count by setting .spec.completions in the YAML file so that more than one pod must complete successfully before the job is considered complete. We can also set .spec.parallelism to increase the number of pods that can be running at any one time. For example, the configuration below will run up to 2 pods in parallel until 8 of them finish successfully.
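For example, adding the two fields to the spec of the pi job from above (the job name is changed to avoid a clash with the earlier job):

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-parallel
spec:
  completions: 8    # the job is complete once 8 pods have succeeded
  parallelism: 2    # run at most 2 pods at any one time
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4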
If one pod fails a new pod will be created to take its place and the job will continue.
You can also use a work queue for parallel jobs by not specifying .spec.completions at all. In this case the pods should coordinate amongst themselves, or via an external service, to determine when they have finished: as soon as any one of them exits successfully, the job is considered complete. Each pod should therefore exit only when there is no more work left for any of the pods.
Scheduling jobs
To run jobs on a schedule you can use CronJobs. These are also described using a YAML file, for example:
cronjob.yaml
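A config along the lines of the example in the Kubernetes CronJob tutorial (on Kubernetes v1.21+ use apiVersion: batch/v1 instead of batch/v1beta1):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure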
This will run a job every minute that prints “Hello from the Kubernetes cluster”. You can create the CronJob using:
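kubectl apply -f cronjob.yaml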
You may check on the status of the CronJob using kubectl get cronjob hello and watch the jobs it creates in real time using kubectl get jobs --watch. From the latter you will see that the job names appear as hello- followed by some numbers, e.g. hello-27474266. This can be used to view the output of the job using
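kubectl logs job/hello-27474266    #substitute the job name reported by kubectl get jobs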
The schedule parameter, which in this case causes the job to run every minute, is in the following format.
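minute hour day-of-month month day-of-week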
Where the day of the week is 0 for Sunday and 6 for Saturday. In the example the asterisk is used to indicate "any". You may find tools such as https://crontab.guru/ helpful in writing these. For example 5 4 * * 2 will run at 04:05 on Tuesdays. The time specified is the local time of the machine.
You may delete the CronJob, along with any of its existing jobs and pods using
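kubectl delete cronjob hello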
JupyterHub on Kubernetes
This documentation assumes that you will be installing JupyterHub on a Kubernetes cluster that has been created using OpenStack Magnum.
In this tutorial we will break the installation down into the following:
Create a cluster template and launch a Kubernetes cluster using OpenStack Magnum
Install Helm v3 and define persistent volume for the cluster
Install JupyterHub
Creating a Kubernetes Cluster
The template for the cluster:
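For example, a template along the following lines (the image, network and flavor values are placeholders to adapt to your project; the tutorials linked in the references show the exact templates they used):
openstack coe cluster template create jupyterhub-cluster-template --coe kubernetes --image fedora-atomic-latest --external-network public --keypair mykeypair --flavor m1.medium --master-flavor m1.small --network-driver flannel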
Create a cluster:
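For example:
openstack coe cluster create jupyterhub --cluster-template jupyterhub-cluster-template --master-count 1 --node-count 1 --keypair mykeypair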
Once the cluster has been created successfully, we can associate a floating IP to the master node VM and then SSH into the cluster:
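For a fedora-atomic based cluster:
ssh -i <private-key> fedora@<floating-ip>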
Configure Storage
Magnum does not automatically configure cinder storage for clusters.
The storage class can be defined using a YAML file. For example we could define the storage class to be:
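A sketch along the lines of the file linked below, assuming the in-tree Cinder provisioner; check the linked file for the exact contents used in that deployment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/cinder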
YAML File from: https://github.com/zonca/jupyterhub-deploy-kubernetes-jetstream/blob/master/kubernetes_magnum/storageclass.yaml
Then we create the storage class:
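kubectl apply -f storageclass.yaml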
Helm v3
The Train release supports installing Helm v2 charts and provides labels for installing Tiller.
However, it is possible to install and run charts with Helm v3.
Note: Helm v2 reached end of support in November 2020.
To install Helm 3:
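One option is the installer script from the Helm documentation (the script URL below is the one documented there; see the install page in the references if it has moved):
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh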
Other methods for installing Helm v3 can be found here: https://helm.sh/docs/intro/install/
Now that Helm v3 has been installed, we can install JupyterHub.
JupyterHub
The following is based on the tutorial from the Zero to JupyterHub with Kubernetes installation documentation.
Then create a file called config.yaml and write the following:
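For the chart versions that were current around the Train release, the minimal config.yaml set a proxy secret token (newer chart versions generate this automatically, so check the Zero to JupyterHub docs for your chart version); the token can be generated with openssl rand -hex 32:

proxy:
  secretToken: "<output of openssl rand -hex 32>"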
Next, add the JupyterHub Helm chart repository and install the chart.
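The release name jhub, the namespace jhub and the chart version are example choices; pin --version to a chart version compatible with your cluster:
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
helm upgrade --cleanup-on-fail --install jhub jupyterhub/jupyterhub --namespace jhub --create-namespace --values config.yaml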
When installation is complete it should return a message similar to the following:
Autoscaling Clusters
The Cluster Autoscaler (CA) is a feature in OpenStack Magnum that can be enabled in order for the cluster to scale the worker nodegroup up or down. The default version which the Train release uses is v1.0; the version of CA to use can be changed at cluster creation by using the label autoscaler_tag.
This feature can be enabled by using the label auto_scaling_enabled=true in a cluster template or at cluster creation.
Machine IDs
On nodes in a Kubernetes cluster, the system UUID matches the ID of the VM hosting that node. However, the Cluster Autoscaler uses the machine ID to refer to the node when the cluster needs to be scaled down and a node removed. Kubernetes reads this ID from the file /etc/machine-id on each VM in the cluster, and these IDs may not match the IDs of the VMs. If the machine ID and system UUID (VM ID) on a node do not match, the following errors may be present in the CA pod's log:
To update the machine ID to match the VM ID, the file can be edited directly using:
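For example, one approach is to write the VM's UUID (taken from openstack server list) into the file without dashes, so that it matches the machine-id format; this is an assumption to verify against your deployment:
sudo bash -c 'echo <vm-uuid-without-dashes> > /etc/machine-id'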
After a few minutes Kubernetes will have updated the IDs for the nodes on those VMs. The system UUID and the machine ID can be seen using kubectl describe node <node-name>.
For example:
This shows that this node had the machine ID updated so that it now matches the System UUID and will refer to the VM by the correct ID if the Cluster AutoScaler attempts to remove the node when scaling the cluster.
The Cluster Autoscaler will begin to successfully scale down nodes once machine IDs match VM IDs. To prevent a node being scaled down, the following annotation needs to be added to the node:
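This is the standard Cluster Autoscaler annotation:
kubectl annotate node <node-name> cluster-autoscaler.kubernetes.io/scale-down-disabled=true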
This will indicate to CA that this node cannot be removed from the cluster when scaling down.
Cluster Autoscaler Deployment
The deployment of the CA on the cluster will be similar to the following:
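A trimmed sketch of the relevant part of the Deployment (the image tag, nodegroup bounds and flag values are illustrative, not the exact manifest Magnum generates):

containers:
- name: cluster-autoscaler
  image: docker.io/openstackmagnum/cluster-autoscaler:v1.0
  command:
  - ./cluster-autoscaler
  - --cloud-provider=magnum
  - --nodes=1:5:default-worker              # min:max:nodegroup
  - --scale-down-unneeded-time=10m          # how long a node must be unneeded before it is removed
  - --scale-down-delay-after-add=10m        # wait after a scale-up before considering scale-down
  - --scale-down-delay-after-failure=3m     # wait after a failed scale-down before retrying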
We can see in the Command section that we can change the time the autoscaler waits before determining that a node is unneeded and should be scaled down. We can also change the delay between adding nodes during scale-up and the next scale-down evaluation, and the amount of time to wait after a scale-down fails.
Example: A Cluster Scaling Up
Consider a cluster that has CA enabled and consists of one master node and one worker node. If the worker node is cordoned and nginx pods still need to be scheduled, the CA will send an OpenStack request to resize the cluster and increase the node count from 1 to 2 in order to have a node available on which to schedule the pods. This can be seen in the container or pod logs for the CA:
You should see the stack for the cluster being updated on OpenStack and see the node visible in the cluster:
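For example:
openstack coe cluster show <cluster>   #node_count should now be 2
kubectl get nodes                      #the new node appears once it has joined the cluster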
OpenStack Magnum Users
When a cluster is created, Magnum creates unique credentials for each cluster. This allows the cluster to make changes to its structure (e.g. create load balancers for specific services, create and attach cinder volumes, update the stack, etc.) without exposing the user’s cloud credentials.
How to find the Magnum User Credentials
We can obtain the cluster credentials directly from the VM which the master node is on. First, SSH into the master node’s VM and then:
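The exact location depends on the Magnum driver and release; on the Fedora Atomic Kubernetes driver the OpenStack cloud provider configuration is typically under /etc/kubernetes/, for example (the path is an assumption to verify on your deployment):
sudo cat /etc/kubernetes/cloud-config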
This will return the cloud-config file containing the cluster’s credentials similar to:
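An illustrative, redacted sketch of the [Global] section of such a cloud-config (the key names follow the OpenStack cloud provider format; the actual file on your cluster may differ):

[Global]
auth-url=https://<keystone-endpoint>/v3
user-id=<trustee-user-id>
password=<REDACTED>
trust-id=<trust-id>
region=<region>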
References:
https://docs.openstack.org/magnum/train/user/
https://docs.openstack.org/heat/train/template_guide/openstack.html
https://www.openstack.org/videos/summits/austin-2016/intro-to-openstack-magnum-with-kubernetes
https://clouddocs.web.cern.ch/containers/quickstart.html
https://kubernetes.io/docs/concepts/workloads/controllers/job/
https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/
https://github.com/zonca/jupyterhub-deploy-kubernetes-jetstream
https://www.zonca.dev/posts/2020-05-21-jetstream_kubernetes_magnum.html
https://zero-to-jupyterhub.readthedocs.io/en/latest/
https://helm.sh/docs/intro/install/
https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/magnum