diff --git a/docs/background/gui-vs-api.md b/docs/background/gui-vs-api.md index e4d785954..cafa808fd 100644 --- a/docs/background/gui-vs-api.md +++ b/docs/background/gui-vs-api.md @@ -9,7 +9,7 @@ What are the reasons, if any, for choosing one method over the other? ## The case for the {{gui}} Ease of use is probably the main reason for choosing the {{gui}} over the OpenStack API. -Provided you have an [account in {{brand}}](../howto/getting-started/create-account.md), you can simply log in and then follow step-by-step guides to create entities such as [networks](../howto/openstack/neutron/new-network.md), [servers](../howto/openstack/nova/new-server.md), or even [Kubernetes clusters](../howto/openstack/magnum/new-k8s-cluster.md). +Provided you have an [account in {{brand}}](../howto/getting-started/create-account.md), you can simply log in and then follow step-by-step guides to create entities such as [networks](../howto/openstack/neutron/new-network.md) or [servers](../howto/openstack/nova/new-server.md). You can just as easily perform administrative tasks like diff --git a/docs/background/kubernetes/index.md b/docs/background/kubernetes/index.md index 2ed9b190d..8c9c3bcbd 100644 --- a/docs/background/kubernetes/index.md +++ b/docs/background/kubernetes/index.md @@ -1,59 +1,54 @@ # Kubernetes in Cleura Cloud -{{brand}} has two management facilities for Kubernetes clusters: +{{brand}} has a management facility for Kubernetes clusters, based on [{{k8s_management_service}}](../../howto/kubernetes/gardener/index.md). -* [{{brand_container_orchestration}}](../../howto/kubernetes/gardener/index.md), -* [OpenStack Magnum](../../howto/kubernetes/magnum/index.md). +{{k8s_management_service}} supports recent Kubernetes versions and offers you a great degree of "hands-off" management. -In most scenarios, {{k8s_management_service}} is the preferred option, since it supports more recent Kubernetes versions and offers you a greater degree of "hands-off" management. - -For more details on the relative merits of both options, refer to the sections below. +For more details on the merits of {{k8s_management_service}}, refer to the sections below. 
## General characteristics

-| | {{k8s_management_service}} | Magnum |
-| ------------- | ---------------- | ---------------- |
-| Kubernetes Cloud Provider | OpenStack | OpenStack |
-| Base operating system for nodes | Garden Linux | Fedora CoreOS |
-| Latest installable Kubernetes minor release | 1.33 | 1.27 |
+| | {{k8s_management_service}} |
+| ------------- | ---------------- |
+| Kubernetes Cloud Provider | OpenStack |
+| Base operating system for nodes | Garden Linux |
+| Latest installable Kubernetes minor release | 1.33 |

## API and CLI support

-| | {{k8s_management_service}} | Magnum |
-| ------------- | ---------------- | ---------------- |
-| Manageable via Cleura Cloud REST API | :material-check: | :material-check: |
-| Manageable via OpenStack REST API | :material-close: | :material-check: |
-| Manageable via OpenStack CLI | :material-close: | :material-check: |
+| | {{k8s_management_service}} |
+| ------------- | ---------------- |
+| Manageable via Cleura Cloud REST API | :material-check: |
+| Manageable via OpenStack REST API | :material-close: |
+| Manageable via OpenStack CLI | :material-close: |

## Updates and upgrades

-| | {{k8s_management_service}} | Magnum |
-| ------------- | ---------------- | ---------------- |
-| Automatic update to new Kubernetes patch release | :material-check: | :material-close: |
-| Rolling upgrade to new Kubernetes minor release | :material-check: | :material-check: |
-| Automatic upgrade to new Kubernetes minor release | :material-close: | :material-close: |
-| Rolling upgrade to new base operating system release | :material-check: | :material-check: |
-| Automatic upgrade to new base operating system release | :material-check: | :material-close: |
+| | {{k8s_management_service}} |
+| ------------- | ---------------- |
+| Automatic update to new Kubernetes patch release | :material-check: |
+| Rolling upgrade to new Kubernetes minor release | :material-check: |
+| Automatic upgrade to new Kubernetes minor release | :material-close: |
+| Rolling upgrade to new base operating system release | :material-check: |
+| Automatic upgrade to new base operating system release | :material-check: |

## Functional features

-| | {{k8s_management_service}} | Magnum |
-| ------------- | ---------------- | ---------------- |
-| Built-in private registry for container images | :material-close: | :material-check: |
-| [Hibernation](gardener/hibernation.md) | :material-check: | :material-close: |
-| Manual vertical scaling (bigger/smaller worker nodes) | :material-check:[^vertical-scaling] | :material-check: |
-| Vertical autoscaling | :material-close: | :material-close: |
-| Manual horizontal scaling (more/fewer worker nodes) | :material-check: | :material-check: |
-| Horizontal [autoscaling](gardener/autoscaling.md) | :material-check: | :material-check:[^cluster-autoscaler] |
-| Kubernetes dashboard | :material-check:[^dashboard] | :material-check: |
+| | {{k8s_management_service}} |
+| ------------- | ---------------- |
+| Built-in private registry for container images | :material-close: |
+| [Hibernation](gardener/hibernation.md) | :material-check: |
+| Manual vertical scaling (bigger/smaller worker nodes) | :material-check:[^vertical-scaling] |
+| Vertical autoscaling | :material-close: |
+| Manual horizontal scaling (more/fewer worker nodes) | :material-check: |
+| Horizontal [autoscaling](gardener/autoscaling.md) | :material-check: |
+| Kubernetes dashboard | :material-check:[^dashboard] |

[^vertical-scaling]: Vertical scaling is only supported via
defining additional worker node groups.

-[^cluster-autoscaler]: You must deploy [Magnum Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/magnum/README.md) to use horizontal autoscaling.
-
[^dashboard]: You must separately [deploy](https://github.com/kubernetes/dashboard/#install) the Kubernetes Dashboard.

## Charges and billing

-| | {{k8s_management_service}} | Magnum |
-| ------------- | ---------------- | ---------------- |
-| Monthly subscription fee | :material-check: | :material-close: |
-| {{brand}} charges for Kubernetes control plane nodes | :material-close: | :material-check: |
-| {{brand}} charges for Kubernetes worker nodes | :material-check: | :material-check: |
+| | {{k8s_management_service}} |
+| ------------- | ---------------- |
+| Monthly subscription fee | :material-check: |
+| {{brand}} charges for Kubernetes control plane nodes | :material-close: |
+| {{brand}} charges for Kubernetes worker nodes | :material-check: |
diff --git a/docs/howto/kubernetes/gardener/kubectl.md b/docs/howto/kubernetes/gardener/kubectl.md
index 4a67c3204..1f857b8f9 100644
--- a/docs/howto/kubernetes/gardener/kubectl.md
+++ b/docs/howto/kubernetes/gardener/kubectl.md
@@ -117,7 +117,7 @@ shoot--pqrxyz--myshoot-cwncz1-z1-7656c-nvgmc Ready 23h v1.30.5
shoot--pqrxyz--myshoot-cwncz1-z1-7656c-rrkmz Ready 23h v1.30.5
```

-> Please note that in contrast to an [OpenStack Magnum-managed Kubernetes cluster](../../openstack/magnum/new-k8s-cluster.md), where the output of `kubectl get nodes` includes control plane *and* worker nodes, in a {{k8s_management_service}} cluster the same command *only* lists the worker nodes.
+> Please note that in contrast to other Kubernetes platforms, where the output of `kubectl get nodes` includes control plane *and* worker nodes, in a {{k8s_management_service}} cluster the same command *only* lists the worker nodes.

## Deploying an application

diff --git a/docs/howto/kubernetes/index.md b/docs/howto/kubernetes/index.md
index eec585e05..6d775006a 100644
--- a/docs/howto/kubernetes/index.md
+++ b/docs/howto/kubernetes/index.md
@@ -1,8 +1,7 @@
# Kubernetes in Cleura Cloud

-In {{brand}}, you have several options for deploying and managing Kubernetes clusters.
+In {{brand}}, you can deploy and manage Kubernetes clusters.

-[{{gui}}](https://{{gui_domain}}) includes management interfaces for [{{k8s_management_service}}](gardener/index.md) and [OpenStack Magnum](magnum/index.md).
-To manage Magnum clusters, you can also use the [OpenStack command-line interface](../getting-started/enable-openstack-cli.md).
+More specifically, the [{{gui}}](https://{{gui_domain}}) includes a management interface for [{{k8s_management_service}}](gardener/index.md).

-To assess which facility is more suitable for your specific deployment scenario and use case, refer to [this summary](../../background/kubernetes/index.md).
+Please refer to [this summary](../../background/kubernetes/index.md) for an overview of {{k8s_management_service}}'s characteristics and features.
diff --git a/docs/howto/kubernetes/magnum/index.md b/docs/howto/kubernetes/magnum/index.md
deleted file mode 100644
index 6897674b2..000000000
--- a/docs/howto/kubernetes/magnum/index.md
+++ /dev/null
@@ -1,10 +0,0 @@
-# Magnum
-
-Magnum lets you create clusters via the OpenStack APIs.
-To do that, you base your configuration on a Cluster Template.
-The template defines parameters, such as worker flavors, which describe how the cluster will be constructed.
-In each region, we offer predefined public Cluster Templates with ready-to-use configurations. -To learn more about Cluster Templates, check out the [Magnum documentation](https://docs.openstack.org/magnum/latest/user/#clustertemplate). - -Once you have chosen your Cluster Template, you move on to [create a cluster based on that template](../../openstack/magnum/new-k8s-cluster.md). -When you create the cluster, you can define the number of nodes, ask for multiple masters behind a load balancer, etc. diff --git a/docs/howto/openstack/.pages b/docs/howto/openstack/.pages index be949873e..8d093a2e0 100644 --- a/docs/howto/openstack/.pages +++ b/docs/howto/openstack/.pages @@ -7,6 +7,5 @@ nav: - octavia - cinder - glance - - magnum - keystone - barbican diff --git a/docs/howto/openstack/index.md b/docs/howto/openstack/index.md index 9a1d36e20..c728cb5d6 100644 --- a/docs/howto/openstack/index.md +++ b/docs/howto/openstack/index.md @@ -9,8 +9,6 @@ Thus, most {{brand}} components correspond to OpenStack services: * [Load balancing (Octavia)](octavia/index.md) is for managing load balancers supporting the TCP and HTTP(S) protocols. * [Block storage (Cinder)](cinder/index.md) provides persistent block storage for virtual servers. * [Image management (Glance)](glance/index.md) provides ready-to-launch, preconfigured operating system images for virtual servers. -* [Kubernetes management (Magnum)](magnum/index.md) enables you to launch and manage Kubernetes clusters. - This is one of two ways you can manage Kubernetes clusters in {{brand}}; the other is [Gardener](../kubernetes/gardener/index.md). * [Identity (Keystone)](keystone/index.md) is for managing credentials and authentication. * [Secret storage (Barbican)](barbican/index.md) provides key and secret management. diff --git a/docs/howto/openstack/keystone/app-creds.md b/docs/howto/openstack/keystone/app-creds.md index 304216436..26e9d32ed 100644 --- a/docs/howto/openstack/keystone/app-creds.md +++ b/docs/howto/openstack/keystone/app-creds.md @@ -180,7 +180,7 @@ No matter how you choose to list or view application credentials, secrets are ne By default, application credentials are created as being *restricted*. That is why, in the `openstack application credential show` output above, the `unrestricted` parameter is set to `False`. -You cannot use restricted application credentials for Heat or Magnum, or for managing other application credentials or [trusts](https://docs.openstack.org/keystone/latest/user/trusts.html). +You cannot use restricted application credentials for Heat, or for managing other application credentials or [trusts](https://docs.openstack.org/keystone/latest/user/trusts.html). This restricted-by-default policy acts as a safeguard, so compromised application credentials cannot be used for creating other sets of application credentials. 
If your application **has** to be able to perform such actions, and you accept the risks involved, you may create *unrestricted* application credentials like this: diff --git a/docs/howto/openstack/magnum/.pages b/docs/howto/openstack/magnum/.pages deleted file mode 100644 index 8c9c00044..000000000 --- a/docs/howto/openstack/magnum/.pages +++ /dev/null @@ -1,5 +0,0 @@ -title: "Kubernetes management (Magnum)" -nav: - - index.md - - new-k8s-cluster.md - - kubectl.md diff --git a/docs/howto/openstack/magnum/assets/new-k8s/shot-01.png b/docs/howto/openstack/magnum/assets/new-k8s/shot-01.png deleted file mode 100644 index 5de2a265a..000000000 Binary files a/docs/howto/openstack/magnum/assets/new-k8s/shot-01.png and /dev/null differ diff --git a/docs/howto/openstack/magnum/assets/new-k8s/shot-02.png b/docs/howto/openstack/magnum/assets/new-k8s/shot-02.png deleted file mode 100644 index 185c374fa..000000000 Binary files a/docs/howto/openstack/magnum/assets/new-k8s/shot-02.png and /dev/null differ diff --git a/docs/howto/openstack/magnum/assets/new-k8s/shot-03.png b/docs/howto/openstack/magnum/assets/new-k8s/shot-03.png deleted file mode 100644 index 937edeffa..000000000 Binary files a/docs/howto/openstack/magnum/assets/new-k8s/shot-03.png and /dev/null differ diff --git a/docs/howto/openstack/magnum/assets/new-k8s/shot-04.png b/docs/howto/openstack/magnum/assets/new-k8s/shot-04.png deleted file mode 100644 index 5d1ecf182..000000000 Binary files a/docs/howto/openstack/magnum/assets/new-k8s/shot-04.png and /dev/null differ diff --git a/docs/howto/openstack/magnum/assets/new-k8s/shot-05.png b/docs/howto/openstack/magnum/assets/new-k8s/shot-05.png deleted file mode 100644 index 2c63bb3fa..000000000 Binary files a/docs/howto/openstack/magnum/assets/new-k8s/shot-05.png and /dev/null differ diff --git a/docs/howto/openstack/magnum/assets/shot-01.png b/docs/howto/openstack/magnum/assets/shot-01.png deleted file mode 100644 index 08b6c61de..000000000 Binary files a/docs/howto/openstack/magnum/assets/shot-01.png and /dev/null differ diff --git a/docs/howto/openstack/magnum/assets/shot-02.png b/docs/howto/openstack/magnum/assets/shot-02.png deleted file mode 100644 index 7ecddb9ff..000000000 Binary files a/docs/howto/openstack/magnum/assets/shot-02.png and /dev/null differ diff --git a/docs/howto/openstack/magnum/assets/shot-03.1.png b/docs/howto/openstack/magnum/assets/shot-03.1.png deleted file mode 100644 index bd064004b..000000000 Binary files a/docs/howto/openstack/magnum/assets/shot-03.1.png and /dev/null differ diff --git a/docs/howto/openstack/magnum/assets/shot-03.png b/docs/howto/openstack/magnum/assets/shot-03.png deleted file mode 100644 index 24da4ab8a..000000000 Binary files a/docs/howto/openstack/magnum/assets/shot-03.png and /dev/null differ diff --git a/docs/howto/openstack/magnum/assets/shot-04.png b/docs/howto/openstack/magnum/assets/shot-04.png deleted file mode 100644 index d00723316..000000000 Binary files a/docs/howto/openstack/magnum/assets/shot-04.png and /dev/null differ diff --git a/docs/howto/openstack/magnum/assets/shot-05.png b/docs/howto/openstack/magnum/assets/shot-05.png deleted file mode 100644 index a9d6b6a6a..000000000 Binary files a/docs/howto/openstack/magnum/assets/shot-05.png and /dev/null differ diff --git a/docs/howto/openstack/magnum/assets/shot-06.png b/docs/howto/openstack/magnum/assets/shot-06.png deleted file mode 100644 index 53d103e05..000000000 Binary files a/docs/howto/openstack/magnum/assets/shot-06.png and /dev/null differ diff --git 
a/docs/howto/openstack/magnum/assets/shot-07.png b/docs/howto/openstack/magnum/assets/shot-07.png deleted file mode 100644 index d64a603fc..000000000 Binary files a/docs/howto/openstack/magnum/assets/shot-07.png and /dev/null differ diff --git a/docs/howto/openstack/magnum/index.md b/docs/howto/openstack/magnum/index.md deleted file mode 100644 index a23f447ce..000000000 --- a/docs/howto/openstack/magnum/index.md +++ /dev/null @@ -1,7 +0,0 @@ -# Kubernetes management (Magnum) - -[Magnum](https://docs.openstack.org/magnum/) is OpenStack's native service for container orchestration. -It enables you to launch and manage Kubernetes clusters. - -Magnum is one of two ways you can manage Kubernetes clusters in {{brand}}; the other is [Gardener](../../kubernetes/gardener/index.md). - diff --git a/docs/howto/openstack/magnum/kubectl.md b/docs/howto/openstack/magnum/kubectl.md deleted file mode 100644 index f2bda65b5..000000000 --- a/docs/howto/openstack/magnum/kubectl.md +++ /dev/null @@ -1,111 +0,0 @@ ---- -description: How to fetch, verify, and use your kubeconfig with kubectl in a Magnum-managed Kubernetes cluster. ---- -# Managing a Kubernetes cluster - -Once you [have launched a new cluster](new-k8s-cluster.md), you can interact with it using `kubectl` and a [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file. - -## Prerequisites - -You must install the Kubernetes command line tool, `kubectl`, on your local computer, and run commands against your cluster. -To install `kubectl`, follow [the relevant Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/#kubectl). - -## Extracting the kubeconfig file - -Due to Magnum's security policy configuration, you cannot use the OpenStack CLI for downloading the kubeconfig of a cluster that was created with {{gui}}, or vice versa. - -To fetch your kubeconfig, you must always use the same facility that you used to deploy the cluster. - -=== "{{gui}}" - In the left-hand side pane of the {{gui}}, select *Magnum* → *Clusters*. - Click on the cluster row to expand the details view, then click the *KubeConfig* tab. - In a second or two, you will see the contents of the kubeconfig file. - Click the blue *Download KubeConfig* button to download it locally. - - ![Kubeconfig view](assets/shot-07.png) - - The kubeconfig file you get has a name similar to this one: - - ```plain - kubeconfig------.yaml - ``` - - Feel free to rename it to something simpler, like `config`. -=== "OpenStack CLI" - To download the kubeconfig file for your Kubernetes cluster, type the following: - - ```bash - openstack coe cluster config --dir=${PWD} - ``` - -After saving the kubeconfig file locally, set the value of variable `KUBECONFIG` to the full path of the file. -Type, for example: - -```bash -export KUBECONFIG=${PWD}/config -``` - -If you are currently managing only one cluster, and you already have its kubeconfig file stored as `~/.kube/config`, then you do not need to set the `KUBECONFIG` variable. - -## Accessing the Kubernetes cluster with kubectl - -You may now use `kubectl` to run commands against your cluster. -See, for instance, all cluster nodes... 
- -```console -$ kubectl get nodes - -NAME STATUS ROLES AGE VERSION -bangor-id6nijycp2wy-master-0 Ready master 113m v1.18.6 -bangor-id6nijycp2wy-node-0 Ready 111m v1.18.6 -``` - -...or all running pods in every namespace: - -```console -$ kubectl get pods --all-namespaces - -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system coredns-786ffb7797-tw2hg 1/1 Running 0 167m -kube-system coredns-786ffb7797-vbqwn 1/1 Running 0 167m -kube-system csi-cinder-controllerplugin-0 5/5 Running 0 167m -kube-system csi-cinder-nodeplugin-4nr69 2/2 Running 0 166m -kube-system csi-cinder-nodeplugin-vtwqf 2/2 Running 0 167m -kube-system dashboard-metrics-scraper-6b4884c9d5-4mlrg 1/1 Running 0 167m -kube-system k8s-keystone-auth-wk5v2 1/1 Running 0 167m -kube-system kube-dns-autoscaler-75859754fd-2wsd9 1/1 Running 0 167m -kube-system kube-flannel-ds-7z9dp 1/1 Running 0 167m -kube-system kube-flannel-ds-dmvk6 1/1 Running 0 166m -kube-system kubernetes-dashboard-c98496485-stn42 1/1 Running 0 167m -kube-system magnum-metrics-server-79556d6999-xdlpm 1/1 Running 0 167m -kube-system npd-5p6gk 1/1 Running 0 165m -kube-system openstack-cloud-controller-manager-44rz9 1/1 Running 0 167m -``` - -## Defining a default storage class - -An OpenStack Magnum-managed cluster does not automatically define a default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) for [dynamic volume provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/). -You should define one immediately upon cluster creation. - -To do so, create a file named `storageclass.yaml` with the following content: - -```yaml -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: csi-sc-cinderplugin - annotations: - "storageclass.kubernetes.io/is-default-class": "true" -provisioner: cinder.csi.openstack.org -``` - -You can use an alternate `name` if you prefer. - -Then, apply the storage class definition: - -```console -$ kubectl apply -f storageclass.yaml -storageclass.storage.k8s.io/csi-sc-cinderplugin created -``` - -Subsequently, any persistent volume claims will default to using this storage class, unless you choose to [override](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/#using-dynamic-provisioning) the default by setting the `spec.storageClassName` property. diff --git a/docs/howto/openstack/magnum/new-k8s-cluster.md b/docs/howto/openstack/magnum/new-k8s-cluster.md deleted file mode 100644 index 05fd161be..000000000 --- a/docs/howto/openstack/magnum/new-k8s-cluster.md +++ /dev/null @@ -1,260 +0,0 @@ -# Creating a Kubernetes cluster - -By employing OpenStack [Magnum](https://docs.openstack.org/magnum) you can create Kubernetes clusters via OpenStack, using the {{gui}} or the OpenStack CLI. - -## Prerequisites - -First and foremost, you need an [account in {{brand}}](../../getting-started/create-account.md). -Should you choose to work from your terminal, you will also need to [enable the OpenStack CLI](../../getting-started/enable-openstack-cli.md). -In that case, in addition to the Python `openstackclient` module, make sure you also install the corresponding plugin module for Magnum. -Use either the package manager of your operating system or `pip`: - -=== "Debian/Ubuntu" - ```bash - apt install python3-magnumclient - ``` -=== "Mac OS X with Homebrew" - This Python module is unavailable via `brew`, but you can install it via `pip`. 
-=== "Python Package" - ```bash - pip install python-magnumclient - ``` - -## Creating a Kubernetes cluster - -=== "{{gui}}" - Fire up your favorite web browser, navigate to the [{{gui}}](https://{{gui_domain}}) start page, and log into your {{brand}} account. - On the top right-hand side of the {{gui}}, click the *Create* button. - A new pane titled *Create* slides into view. - - ![Create new object](assets/shot-01.png) - - You will notice several rounded boxes on that pane, each for defining, configuring, and instantiating a different {{brand}} object. - Go ahead and click the *Magnum Cluster* box. - A new vertical pane titled *Create a Magnum Cluster* slides over. - At the top, type in a name for the new cluster and select one of the available regions. - - ![Name and region](assets/shot-02.png) - - A bit further below, use the drop-down menus to select a template and a keypair for the cluster nodes. - Notice the *Docker volume size* option which, by default, is set to 50 GiB. - This pertains to the size of the extra block device each cluster node will have. - That whole storage capacity will be used for saving persistent data. - If you believe the default size is too much or too little, change it accordingly. - Unless you want to define the number of master nodes or the number of worker nodes, click the green *Create* button now. - - ![Select template and keypair](assets/shot-03.png) - - If you do want to play with the aforementioned parameters, then before clicking the *Create* button go ahead and expand the *Advanced Options* section. - Since the new cluster templates come with the master load balancer enabled by default, you might want to equip your new cluster with 3 master nodes. - Optionally, increase the number of worker nodes from 1 (the default) to, say, 3. - Contrary to the number of master nodes, you may change the number of worker nodes *after* the cluster is created. - - ![Cluster creation in progress](assets/shot-03.1.png) - - When the cluster creation process begins, please keep in mind that it takes some time to complete. - While waiting, bring the vertical pane on the left-hand side of the {{gui}} in full view, select *Magnum* → *Clusters*, and in the main pane, take a look at the creation progress. - You can tell when the whole process is complete by the animated icon at the left of the cluster row. 
- - ![Magnum cluster ready](assets/shot-04.png) -=== "OpenStack CLI" - A simple, general command for creating a new Kubernetes cluster with Magnum looks like this: - - ```bash - openstack coe cluster create \ - --cluster-template $CLUSTER_TMPL \ - --keypair $KEYPAIR \ - --docker-volume-size $PERSIST_VOL_SIZE \ - $CLUSTER_NAME - ``` - - First, list all available templates in the region: - - ```console - $ openstack coe cluster template list - +--------------------------------------+---------------------------------------------+------+ - | uuid | name | tags | - +--------------------------------------+---------------------------------------------+------+ - | 3f476f01-b3de-4687-a188-6829ed947db0 | Kubernetes 1.15.5 on Fedora-atomic 29 | None | - | | 4C-8GB-20GB No Master LB | | - | c458f02d-54b0-4ef8-abbc-e1c25b61165a | Kubernetes 1.15.5 on Fedora-atomic 29 | None | - | | 2C-4GB-20GB No Master LB | | - | f9e1a2ea-b1ff-43e7-8d1e-6dd5861b82cf | Kubernetes 1.18.6 on Fedora-coreos 33 | None | - | | 2C-4GB-20GB No Master LB | | - | 59bd894b-0f5f-4a6e-98d3-a3eb7040faab | Kubernetes v1.23.3 on Fedora-coreos 35 | None | - | 9ca03308-996e-4eaa-b507-5730dcc19fcc | Kubernetes v1.24.16 on Fedora-coreos 37 | None | - +--------------------------------------+---------------------------------------------+------+ - ``` - - Select the template you want by setting the corresponding `name` value to the `CLUSTER_TMPL` variable: - - ```console - $ CLUSTER_TMPL="Kubernetes v1.24.16 on Fedora-coreos 37" - ``` - - Then, list all available keypairs... - - ```console - $ openstack keypair list - +----------+-------------------------------------------------+------+ - | Name | Fingerprint | Type | - +----------+-------------------------------------------------+------+ - | lefkanti | e7:e9:c5:95:ee:7b:72:37:3c:89:c5:fc:6e:8c:a1:72 | ssh | - +----------+-------------------------------------------------+------+ - ``` - - ...and set the `KEYPAIR` variable to the `Name` of the keypair you want: - - ```console - $ KEYPAIR="lefkanti" # this is just an example - ``` - - Besides the OS disk, all cluster nodes have an extra disk for permanently storing application data. - The size of this extra disk is specified during cluster creation via the `--docker-volume-size` parameter. - Staying loyal to the theme of setting parameters via shell variables, decide on the size of this extra volume like so: - - ```console - $ PERSIST_VOL_SIZE=50 - ``` - - The size is expressed in Gibibytes, and in the example above, we decided to go with a 50GiB extra volume. - Finally, decide on a name for your new Kubernetes cluster: - - ```console - $ CLUSTER_NAME="bangor" - ``` - - With everything in place, go ahead and create your new Kubernetes cluster like so: - - ```console - $ openstack coe cluster create \ - --cluster-template "$CLUSTER_TMPL" \ - --keypair $KEYPAIR \ - --docker-volume-size $PERSIST_VOL_SIZE \ - $CLUSTER_NAME - ``` - - New Magnum clusters start with 1 master node and 1 worker node by default. - Since the new cluster templates have the master load balancer enabled, you might want to give your cluster 3 master nodes from the get-go. - Optionally, you can increase the number of worker nodes from 1 to 3 or to 5. - Contrary to the number of master nodes, you can always change the number of worker nodes *after* the cluster is created. 
- So, to start the new cluster with 3 master nodes and 5 worker nodes, type the following: - - ```console - $ openstack coe cluster create \ - --cluster-template "$CLUSTER_TMPL" \ - --keypair $KEYPAIR \ - --docker-volume-size $PERSIST_VOL_SIZE \ - --master-count 3 \ - --node-count 3 \ - $CLUSTER_NAME - ``` - - In any case, if everything went well with your request for a new cluster, on the terminal, you will see a message like the following: - - ```plain - Request to create cluster 7ca7838a-aa33-4259-8784-02e5941a2cf0 accepted - ``` - - The cluster creation process takes some time to complete, and while you are waiting, you can check if everything is progressing smoothly: - - ```console - $ openstack coe cluster list -c status - +--------------------+ - | status | - +--------------------+ - | CREATE_IN_PROGRESS | - +--------------------+ - ``` - - If everything is going well, the message you will get will be `CREATE_IN_PROGRESS`. - When Magnum has finished creating the cluster, the message will be `CREATE_COMPLETE`. - - ```console - $ openstack coe cluster list -c status - +-----------------+ - | status | - +-----------------+ - | CREATE_COMPLETE | - +-----------------+ - ``` - -## Viewing the Kubernetes cluster - -After the Kubernetes cluster is ready, you may at any time view it and get detailed information about it. - -=== "{{gui}}" - Bring the vertical pane on the left-hand side of the {{gui}} in full view, then select *Magnum* → *Clusters*. - In the main pane, take a look at the row of the cluster you are interested in. - In our example, there is only one cluster, hence only one row. - - ![Cluster row](assets/shot-05.png) - - To see more of the cluster, just click on its row. - Then, all relative information will appear below. - - ![Cluster info](assets/shot-06.png) -=== "OpenStack CLI" - To list all available Kubernetes clusters, type: - - ```console - $ openstack coe cluster list - +---------------+--------+----------+------------+--------------+---------------+---------------+ - | uuid | name | keypair | node_count | master_count | status | health_status | - +---------------+--------+----------+------------+--------------+---------------+---------------+ - | 7ca7838a- | bangor | lefkanti | 1 | 1 | CREATE_COMPLE | HEALTHY | - | aa33-4259- | | | | | TE | | - | 8784- | | | | | | | - | 02e5941a2cf0 | | | | | | | - +---------------+--------+----------+------------+--------------+---------------+---------------+ - ``` - - For many more details on a specific cluster, issue a command like this: - - ```console - $ openstack coe cluster show $CLUSTER_NAME - +----------------------+---------------------------------------------------------------------------+ - | Field | Value | - +----------------------+---------------------------------------------------------------------------+ - | status | CREATE_COMPLETE | - | health_status | HEALTHY | - | cluster_template_id | 9ca03308-996e-4eaa-b507-5730dcc19fcc | - | node_addresses | ['198.51.100.5'] | - | uuid | 7ca7838a-aa33-4259-8784-02e5941a2cf0 | - | stack_id | 923c938a-81cd-4a0d-b645-0681ff507bc5 | - | status_reason | None | - | created_at | 2024-07-05T12:00:41+00:00 | - | updated_at | 2024-07-05T12:08:52+00:00 | - | coe_version | v1.24.16-rancher1 | - | labels | {'container_runtime': 'containerd', 'cinder_csi_enabled': 'True', | - | | 'cloud_provider_enabled': 'True', 'docker_volume_size': '20', 'kube_tag': | - | | 'v1.24.16-rancher1', 'hyperkube_prefix': 'docker.io/rancher/'} | - | labels_overridden | {} | - | labels_skipped | {} | - | 
labels_added | {} | - | fixed_network | None | - | fixed_subnet | None | - | floating_ip_enabled | True | - | faults | | - | keypair | lefkanti | - | api_address | https://203.0.113.129:6443 | - | master_addresses | ['203.0.113.144'] | - | master_lb_enabled | True | - | create_timeout | 60 | - | node_count | 1 | - | discovery_url | https://discovery.etcd.io/f1473529bd109bea2fb02ac936497b95 | - | docker_volume_size | 50 | - | master_count | 1 | - | container_version | 1.12.6 | - | name | bangor | - | master_flavor_id | b.2c4gb | - | flavor_id | b.4c24gb | - | health_status_reason | {'bangor-xh3fuqin3arw-master-0.Ready': 'True', 'bangor-xh3fuqin3arw- | - | | node-0.Ready': 'True', 'api': 'ok'} | - | project_id | dfc700467396428bacba4376e72cc3e9 | - +----------------------+---------------------------------------------------------------------------+ - ``` - -## Interacting with your cluster - -Once your new Magnum-managed Kubernetes cluster is operational, you can [start interacting with it](kubectl.md). diff --git a/docs/reference/api/openstack/index.md b/docs/reference/api/openstack/index.md index 3572f3e54..0b0432f5a 100644 --- a/docs/reference/api/openstack/index.md +++ b/docs/reference/api/openstack/index.md @@ -28,7 +28,6 @@ The {{brand}} {{api_region}} region exposes the following OpenStack API endpoint | designate | dns | | | nova | compute | | | glance | image | | -| magnum | container-infra | | ## OpenStack SDKs diff --git a/docs/reference/features/compliant.md b/docs/reference/features/compliant.md index 1467385cc..32e9e92fa 100644 --- a/docs/reference/features/compliant.md +++ b/docs/reference/features/compliant.md @@ -63,5 +63,4 @@ ## Kubernetes management | | Sto1HS | Sto2HS | Sto-Com | | ----------------- | ---------------- | ---------------- | ---------------- | -| OpenStack Magnum | :material-check: | :material-check: | :material-close: | | {{k8s_management_service}} | :material-check: | :material-check: | :material-check: | diff --git a/docs/reference/features/public.md b/docs/reference/features/public.md index ae661629e..4ed12b81a 100644 --- a/docs/reference/features/public.md +++ b/docs/reference/features/public.md @@ -63,5 +63,4 @@ ## Kubernetes management | | Kna1 | Sto2 | Fra1 | | ----------------- | ---------------- | ---------------- | ---------------- | -| OpenStack Magnum | :material-check: | :material-check: | :material-check: | | {{k8s_management_service}} | :material-check: | :material-check: | :material-check: | diff --git a/docs/reference/limitations/kubernetes.md b/docs/reference/limitations/kubernetes.md index 36362ea46..7ce282a58 100644 --- a/docs/reference/limitations/kubernetes.md +++ b/docs/reference/limitations/kubernetes.md @@ -1,34 +1,6 @@ # Kubernetes service limitations -## OpenStack Magnum - -### Container Orchestration Engines - -In {{brand}}, Magnum only supports the `kubernetes` Container Orchestration Engine (COE). -The legacy `swarm` and `mesos` COEs are not supported. - -### Kubernetes version - -The latest Kubernetes version you can install in {{brand}} with OpenStack Magnum is 1.27. - -### IP version - -In {{brand}}, you can use OpenStack Magnum to deploy Kubernetes clusters that use either IPv4 or IPv6. -Dual-stack clusters or services are not supported. - -### Cluster networking - -The only supported Magnum network driver in {{brand}} is `calico`. -We do not support the `flannel` network driver. 
- -### Persistent volumes (PVs) - -In Magnum-managed Kubernetes clusters in {{brand}}, the only supported PV [access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) is `ReadWriteOnce` (`RWO`). -Note that this still enables multiple Pods to access the same volume, as long as they are [configured to run on the same node](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/). - -You cannot use `ReadWriteOncePod` (`RWOP`), `ReadWriteMany` (`RWX`), or `ReadOnlyMany` (`ROX`) PVs. - ## {{k8s_management_service}} ### Kubernetes version diff --git a/docs/reference/limitations/openstack.md b/docs/reference/limitations/openstack.md index 5f65f9400..306eb2323 100644 --- a/docs/reference/limitations/openstack.md +++ b/docs/reference/limitations/openstack.md @@ -77,3 +77,9 @@ The [OpenStack Designate](../../howto/openstack/designate/index.md) DNS-as-a-ser The [OpenStack Manila](https://docs.openstack.org/manila/) filesystem-as-a-service (FSaaS) facility is currently not available in {{brand}}. If you require multiple servers to be able to access the same files, [create a server](../../howto/openstack/nova/new-server.md) that exposes an internal NFS or CIFS service, backed by a Cinder volume. + +## Magnum + +The [OpenStack Magnum](https://docs.openstack.org/magnum/) container orchestration service facility, previously available in {{brand}}, is no longer supported. + +To manage Kubernetes clusters, you can use [{{brand_container_orchestration}}](../../howto/kubernetes/gardener/index.md) instead. diff --git a/docs/reference/versions/compliant.md b/docs/reference/versions/compliant.md index 09a15481a..dca1dbf80 100644 --- a/docs/reference/versions/compliant.md +++ b/docs/reference/versions/compliant.md @@ -9,7 +9,6 @@ | Glance (image management) | Epoxy | Epoxy | Epoxy | | Heat (orchestration) | Epoxy | Epoxy | Epoxy | | Keystone (identity management) | Epoxy | Epoxy | Epoxy | -| Magnum (container management) | Epoxy | Epoxy | :material-close: | | Neutron (networking) | Epoxy | Epoxy | Epoxy | | Nova (server virtualization) | Epoxy | Epoxy | Epoxy | | Octavia (load balancing) | Epoxy | Epoxy | Epoxy | diff --git a/docs/reference/versions/public.md b/docs/reference/versions/public.md index a76cc81c7..7103e2623 100644 --- a/docs/reference/versions/public.md +++ b/docs/reference/versions/public.md @@ -9,7 +9,6 @@ | Glance (image management) | Epoxy | Epoxy | Epoxy | | Heat (orchestration) | Epoxy | Epoxy | Epoxy | | Keystone (identity management) | Epoxy | Epoxy | Epoxy | -| Magnum (container management) | Epoxy | Epoxy | Epoxy | | Neutron (networking) | Epoxy | Epoxy | Epoxy | | Nova (server virtualization) | Epoxy | Epoxy | Epoxy | | Octavia (load balancing) | Epoxy | Epoxy | Epoxy |