CHANGELOG.md (+9 −3)
@@ -1,5 +1,11 @@
 # Changelog
 
+## 26.0.1+1.31.5
+
+- **OTHER CHANGES**
+  - add flags `client-ca-file`, `tls-cert-file` and `tls-private-key-file` to `k8s_controller_manager_settings` (contribution by @hajowieland). Fixes [#69](https://github.com/githubixx/ansible-role-kubernetes-controller/issues/69)
+  - add flags `client-ca-file`, `tls-cert-file` and `tls-private-key-file` to `k8s_scheduler_settings`
+
 ## 26.0.0+1.31.5
 
 - **UPDATE**
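In the role these flags become keys of the respective settings dictionaries. A minimal sketch of what that could look like in `group_vars` — the `k8s_ctl_conf_dir` variable and the certificate file names are illustrative assumptions, not verified defaults of the role:

```yaml
# Hypothetical group_vars excerpt. The keys are the kube-controller-manager /
# kube-scheduler CLI flags (without the leading "--"); the paths are placeholders.
k8s_controller_manager_settings:
  client-ca-file: "{{ k8s_ctl_conf_dir }}/ca-k8s-apiserver.pem"
  tls-cert-file: "{{ k8s_ctl_conf_dir }}/cert-kube-controller-manager.pem"
  tls-private-key-file: "{{ k8s_ctl_conf_dir }}/cert-kube-controller-manager-key.pem"

k8s_scheduler_settings:
  client-ca-file: "{{ k8s_ctl_conf_dir }}/ca-k8s-apiserver.pem"
  tls-cert-file: "{{ k8s_ctl_conf_dir }}/cert-kube-scheduler.pem"
  tls-private-key-file: "{{ k8s_ctl_conf_dir }}/cert-kube-scheduler-key.pem"
```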
@@ -116,8 +122,8 @@
 - Rename `k8s_ca_conf_directory` to `k8s_ctl_ca_conf_directory`
 
 - **FEATURE**
-  - Introduce `k8s_run_as_user` variable. Previously all control plane services like `kube-apiserver`, `kube-scheduler` and `kube-controller-manager` ran as user `root`. Securitywise that's not optimal. There is just no need to run them as `root` as long as they use a listening port > `1024`, which they do by default. In this version all these services run as the user specified with `k8s_run_as_user`, which is `k8s` by default. Related to this variable are the new variables `k8s_run_as_user_shell`, `k8s_run_as_user_system`, `k8s_run_as_group` and `k8s_run_as_group_system`. See the [README](https://github.com/githubixx/ansible-role-kubernetes-controller/blob/master/README.md) for further information about these variables. The defaults should be just fine even when upgrading from a previous version of this role.
-  - Introduce `k8s_ctl_api_endpoint_host` and `k8s_ctl_api_endpoint_port` variables. Previously `kube-scheduler` and `kube-controller-manager` were configured to connect to the first host in the Ansible `k8s_controller` group and communicate with the `kube-apiserver` running there. This was hard-coded and couldn't be changed. If that host was down, the K8s worker nodes didn't receive any updates. Now one can install a load balancer, e.g. `haproxy`, that distributes requests across all `kube-apiserver` instances and takes a `kube-apiserver` out of rotation if it is down (also see my Ansible [haproxy role](https://github.com/githubixx/ansible-role-haproxy) for that use case). The default is still to use the first host/kube-apiserver in the Ansible `k8s_controller` group, so behaviorwise basically nothing changed.
+  - Introduce `k8s_run_as_user` variable. Previously all control plane services like `kube-apiserver`, `kube-scheduler` and `kube-controller-manager` ran as user `root`. Security-wise that's not optimal. There is just no need to run them as `root` as long as they use a listening port > `1024`, which they do by default. In this version all these services run as the user specified with `k8s_run_as_user`, which is `k8s` by default. Related to this variable are the new variables `k8s_run_as_user_shell`, `k8s_run_as_user_system`, `k8s_run_as_group` and `k8s_run_as_group_system`. See the [README](https://github.com/githubixx/ansible-role-kubernetes-controller/blob/master/README.md) for further information about these variables. The defaults should be just fine even when upgrading from a previous version of this role.
+  - Introduce `k8s_ctl_api_endpoint_host` and `k8s_ctl_api_endpoint_port` variables. Previously `kube-scheduler` and `kube-controller-manager` were configured to connect to the first host in the Ansible `k8s_controller` group and communicate with the `kube-apiserver` running there. This was hard-coded and couldn't be changed. If that host was down, the K8s worker nodes didn't receive any updates. Now one can install a load balancer, e.g. `haproxy`, that distributes requests across all `kube-apiserver` instances and takes a `kube-apiserver` out of rotation if it is down (also see my Ansible [haproxy role](https://github.com/githubixx/ansible-role-haproxy) for that use case). The default is still to use the first host/kube-apiserver in the Ansible `k8s_controller` group, so behavior-wise basically nothing changed.
   - Introduce `k8s_admin_api_endpoint_host` and `k8s_admin_api_endpoint_port` variables. For these two variables basically the same is true as for `k8s_ctl_api_endpoint_host` and `k8s_ctl_api_endpoint_port` above, but these settings are meant for the `admin` user that this role creates by default and are written into `admin.kubeconfig`. So it's possible to configure a different host/load balancer for the `admin` user than for the K8s control plane services mentioned in the previous paragraph.
   - Introduce `k8s_ctl_log_base_dir` and `k8s_ctl_log_base_dir_mode`. Normally `kube-apiserver`, `kube-controller-manager` and `kube-scheduler` log to `journald`, but there are exceptions like the audit log. For this kind of log file this directory is used as the base path.
   - Introduce `k8s_apiserver_audit_log_dir`. Directory to store kube-apiserver audit logs.
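A condensed sketch of the FEATURE variables from the hunk above, with the defaults the entries describe; every value not spelled out in the changelog (shell, "system account" flags, port, log paths) is an assumption for illustration:

```yaml
# User/group the control plane services run as (user/group names per the
# changelog; shell and system-account values are assumed).
k8s_run_as_user: "k8s"
k8s_run_as_user_shell: "/bin/false"
k8s_run_as_user_system: true
k8s_run_as_group: "k8s"
k8s_run_as_group_system: true

# Where kube-scheduler / kube-controller-manager reach the kube-apiserver.
# Default: first host of the Ansible "k8s_controller" group; point this at a
# load balancer (e.g. haproxy) to remove the single-apiserver dependency.
k8s_ctl_api_endpoint_host: "{{ groups['k8s_controller'] | first }}"
k8s_ctl_api_endpoint_port: 6443   # assumed: the usual kube-apiserver port

# Same idea, but for the "admin" user's admin.kubeconfig.
k8s_admin_api_endpoint_host: "{{ groups['k8s_controller'] | first }}"
k8s_admin_api_endpoint_port: 6443

# Base directory for logs that don't go to journald (e.g. the audit log);
# both values below are assumed, not quoted from the role.
k8s_ctl_log_base_dir: "/var/log/kubernetes"
k8s_ctl_log_base_dir_mode: "0770"
k8s_apiserver_audit_log_dir: "{{ k8s_ctl_log_base_dir }}/kube-apiserver"
```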
@@ -328,7 +334,7 @@ v1.2.0_v1.8.4
 
 - update `k8s_release` to `1.14.2`
 - add all admission plugins to `enable-admission-plugins` option that are enabled by default in K8s 1.14
-- remove `Initializers`addmission plugin (no longer available in 1.14)
+- remove `Initializers` admission plugin (no longer available in 1.14)
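For reference, the apiserver side of that change would look something like the following — the `k8s_apiserver_settings` variable name and the (abbreviated) plugin list are assumptions based on the K8s 1.14 defaults, not a quote from the role:

```yaml
# Hypothetical excerpt: `Initializers` must no longer appear here in 1.14;
# the plugins listed are among those enabled by default in that release.
k8s_apiserver_settings:
  enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
```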
README.md (+14 −2)
@@ -4,7 +4,7 @@ This role is used in [Kubernetes the not so hard way with Ansible - Control plane]
 
 ## Versions
 
-I tag every release and try to stay with [semantic versioning](http://semver.org). If you want to use the role I recommend checking out the latest tag. The master branch is basically for development while the tags mark stable releases, but in general I try to keep master in good shape too. A tag `26.0.0+1.31.5` means this is release `26.0.0` of this role and it's meant to be used with Kubernetes version `1.31.5` (but it should work with any K8s `1.31.x` release of course). If the role itself changes, `X.Y.Z` before the `+` increases. If the Kubernetes version changes, `X.Y.Z` after the `+` increases too. This allows tagging bugfixes and new major versions of the role while it's still being developed for a specific Kubernetes release. That's especially useful for Kubernetes major releases with breaking changes.
+I tag every release and try to stay with [semantic versioning](http://semver.org). If you want to use the role I recommend checking out the latest tag. The master branch is basically for development while the tags mark stable releases, but in general I try to keep master in good shape too. A tag `26.0.1+1.31.5` means this is release `26.0.1` of this role and it's meant to be used with Kubernetes version `1.31.5` (but it should work with any K8s `1.31.x` release of course). If the role itself changes, `X.Y.Z` before the `+` increases. If the Kubernetes version changes, `X.Y.Z` after the `+` increases too. This allows tagging bugfixes and new major versions of the role while it's still being developed for a specific Kubernetes release. That's especially useful for Kubernetes major releases with breaking changes.
 
 ## Requirements
 
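One way to pin such a tag is an `ansible-galaxy` requirements file; the Galaxy role name below is an assumption, only the repository URL is taken from this document:

```yaml
# requirements.yml — pin the role to a release tag (role name assumed)
roles:
  - name: githubixx.kubernetes_controller
    src: https://github.com/githubixx/ansible-role-kubernetes-controller.git
    version: 26.0.1+1.31.5
```

Installed with `ansible-galaxy install -r requirements.yml`.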
@@ -30,6 +30,12 @@ See full [CHANGELOG.md](https://github.com/githubixx/ansible-role-kubernetes-controller/blob/master/CHANGELOG.md)
 
 **Recent changes:**
 
+## 26.0.1+1.31.5
+
+- **OTHER CHANGES**
+  - add flags `client-ca-file`, `tls-cert-file` and `tls-private-key-file` to `k8s_controller_manager_settings` (contribution by @hajowieland). Fixes [#69](https://github.com/githubixx/ansible-role-kubernetes-controller/issues/69)
+  - add flags `client-ca-file`, `tls-cert-file` and `tls-private-key-file` to `k8s_scheduler_settings`
+
 ## 26.0.0+1.31.5
 
 - **UPDATE**
@@ -75,7 +81,7 @@ See full [CHANGELOG.md](https://github.com/githubixx/ansible-role-kubernetes-controller/blob/master/CHANGELOG.md)