
Commit 2775def

26.0.1+1.31.5 (#73)
- added settings to `k8s_controller_manager_settings` and `k8s_scheduler_settings`
- update `.gitignore`
- fix
1 parent 8633b57 commit 2775def
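In short, the commit appends three TLS-related keys to the default `k8s_controller_manager_settings` and `k8s_scheduler_settings` dictionaries (see the diffs below). If your certificate files use different names than the role defaults, those keys can be overridden via `group_vars`/`host_vars`. A minimal sketch, assuming a group variables file for the `k8s_controller` group and placeholder certificate names that are not part of the role:

```yaml
# Hypothetical group_vars/k8s_controller.yml -- the file name and the
# certificate file names below are placeholders, not role defaults.
#
# Note: Ansible replaces dictionary variables instead of deep-merging them
# (unless hash merging is enabled), so in practice you would copy the full
# k8s_scheduler_settings dictionary from defaults/main.yml and adjust only
# the keys shown here.
k8s_scheduler_settings:
  "client-ca-file": "{{ k8s_ctl_pki_dir }}/my-apiserver-ca.pem"
  "tls-cert-file": "{{ k8s_ctl_pki_dir }}/my-scheduler.pem"
  "tls-private-key-file": "{{ k8s_ctl_pki_dir }}/my-scheduler-key.pem"
```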

File tree: 4 files changed, +27 −5 lines


.gitignore

+1

@@ -1,3 +1,4 @@
 *.swp
 *.retry
 .ansible
+.vscode

CHANGELOG.md

+9 −3

@@ -1,5 +1,11 @@
 # Changelog

+## 26.0.1+1.31.5
+
+- **OTHER CHANGES**
+  - add flags `client-ca-file`, `tls-cert-file` and `tls-private-key-file` to `k8s_controller_manager_settings` (contribution by @hajowieland). Fixes [#69](https://github.com/githubixx/ansible-role-kubernetes-controller/issues/69)
+  - add flags `client-ca-file`, `tls-cert-file` and `tls-private-key-file` to `k8s_scheduler_settings`
+
 ## 26.0.0+1.31.5

 - **UPDATE**
@@ -116,8 +122,8 @@
 - Rename `k8s_ca_conf_directory` to `k8s_ctl_ca_conf_directory`

 - **FEATURE**
-  - Introduce `k8s_run_as_user` variable. Previously all control plane services like `kube-apiserver`, `kube-scheduler` and `kube-controller-manager` run as user `root`. Securitywise that's not optimal. There is just no need to run them as `root` as long they use a listening port > `1024` which they do by default. In this version all these services will run as the user specified with `k8s_run_as_user` which is `k8s` by default. Related to this variable are the new variables `k8s_run_as_user_shell`, `k8s_run_as_user_system`, `k8s_run_as_group` and `k8s_run_as_group_system`. See [README](https://github.com/githubixx/ansible-role-kubernetes-controller/blob/master/README.md) for further information about this variables. The defaults should be just fine even for upgrading from a previous version of this role.
-  - Introduce `k8s_ctl_api_endpoint_host` and `k8s_ctl_api_endpoint_port` variables. Previously `kube-scheduler` and `kube-controller-manager` where configured to connect to the first host in the Ansible `k8s_controller` group and communicate with the `kube-apiserver` that was running there. This was hard-coded and couldn't be changed. If that host was down the K8s worker nodes didn't receive any updates. Now one can install and use a load balancer like `haproxy` e.g. that distributes requests between all `kube-apiserver`'s and takes a `kube-apiserver` out of rotation if that one is down (also see my Ansible [haproxy role](https://github.com/githubixx/ansible-role-haproxy) for that use case). The default is still to use the first host/kube-apiserver in the Ansible `k8s_controller` group. So behaviorwise nothing changed basically.
+  - Introduce `k8s_run_as_user` variable. Previously all control plane services like `kube-apiserver`, `kube-scheduler` and `kube-controller-manager` ran as user `root`. Security-wise that's not optimal. There is just no need to run them as `root` as long as they use a listening port > `1024`, which they do by default. In this version all these services run as the user specified with `k8s_run_as_user`, which is `k8s` by default. Related to this variable are the new variables `k8s_run_as_user_shell`, `k8s_run_as_user_system`, `k8s_run_as_group` and `k8s_run_as_group_system`. See the [README](https://github.com/githubixx/ansible-role-kubernetes-controller/blob/master/README.md) for further information about these variables. The defaults should be just fine even when upgrading from a previous version of this role.
+  - Introduce `k8s_ctl_api_endpoint_host` and `k8s_ctl_api_endpoint_port` variables. Previously `kube-scheduler` and `kube-controller-manager` were configured to connect to the first host in the Ansible `k8s_controller` group and communicate with the `kube-apiserver` running there. This was hard-coded and couldn't be changed. If that host was down, the K8s worker nodes didn't receive any updates. Now one can install and use a load balancer like `haproxy`, for example, that distributes requests across all `kube-apiserver` instances and takes a `kube-apiserver` out of rotation if it is down (also see my Ansible [haproxy role](https://github.com/githubixx/ansible-role-haproxy) for that use case). The default is still to use the first host/kube-apiserver in the Ansible `k8s_controller` group, so behavior-wise basically nothing changed.
  - Introduce `k8s_admin_api_endpoint_host` and `k8s_admin_api_endpoint_port` variables. For these two variables basically the same is true as for the `k8s_ctl_api_endpoint_host` and `k8s_ctl_api_endpoint_port` variables above. But these settings are meant to be used by the `admin` user that this role creates by default. These settings are written into `admin.kubeconfig`. So it's possible to configure a different host/load balancer for the `admin` user than for the K8s control plane services mentioned in the previous paragraph.
  - Introduce `k8s_ctl_log_base_dir` and `k8s_ctl_log_base_dir_mode`. Normally `kube-apiserver`, `kube-controller-manager` and `kube-scheduler` log to `journald`. But there are exceptions like the audit log. For these kinds of log files this directory will be used as a base path.
  - Introduce `k8s_apiserver_audit_log_dir`. Directory to store kube-apiserver audit logs.
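As a rough illustration of how the feature variables above could be combined, here is a minimal sketch of a group variables file that keeps the unprivileged default user and points `kube-scheduler`/`kube-controller-manager` at a load balancer instead of the first `kube-apiserver` (the file name, host name and port are assumptions, not role defaults):

```yaml
# Hypothetical group_vars/k8s_controller.yml
# Run the control plane services as the unprivileged "k8s" user
# (this is already the role default and only shown for clarity).
k8s_run_as_user: "k8s"
# Point kube-scheduler and kube-controller-manager at a load balancer in
# front of all kube-apiserver instances instead of the first host in the
# "k8s_controller" group. Host name and port are placeholders.
k8s_ctl_api_endpoint_host: "k8s-api.example.internal"
k8s_ctl_api_endpoint_port: "16443"
```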
@@ -328,7 +334,7 @@ v1.2.0_v1.8.4

 - update `k8s_release` to `1.14.2`
 - add all admission plugins to `enable-admission-plugins` option that are enabled by default in K8s 1.14
-- remove `Initializers` addmission plugin (no longer available in 1.14)
+- remove `Initializers` admission plugin (no longer available in 1.14)

 ## 7.0.0+1.13.5


README.md

+14 −2

@@ -4,7 +4,7 @@ This role is used in [Kubernetes the not so hard way with Ansible - Control plan

 ## Versions

-I tag every release and try to stay with [semantic versioning](http://semver.org). If you want to use the role I recommend to checkout the latest tag. The master branch is basically development while the tags mark stable releases. But in general I try to keep master in good shape too. A tag `26.0.0+1.31.5` means this is release `26.0.0` of this role and it's meant to be used with Kubernetes version `1.31.5` (but should work with any K8s 1.31.x release of course). If the role itself changes `X.Y.Z` before `+` will increase. If the Kubernetes version changes `X.Y.Z` after `+` will increase too. This allows to tag bugfixes and new major versions of the role while it's still developed for a specific Kubernetes release. That's especially useful for Kubernetes major releases with breaking changes.
+I tag every release and try to stay with [semantic versioning](http://semver.org). If you want to use the role I recommend checking out the latest tag. The master branch is basically development while the tags mark stable releases. But in general I try to keep master in good shape too. A tag `26.0.1+1.31.5` means this is release `26.0.1` of this role and it's meant to be used with Kubernetes version `1.31.5` (but should work with any K8s 1.31.x release of course). If the role itself changes, the `X.Y.Z` before the `+` will increase. If the Kubernetes version changes, the `X.Y.Z` after the `+` will increase too. This allows tagging bugfixes and new major versions of the role while it's still being developed for a specific Kubernetes release. That's especially useful for Kubernetes major releases with breaking changes.

 ## Requirements

@@ -30,6 +30,12 @@ See full [CHANGELOG.md](https://github.com/githubixx/ansible-role-kubernetes-con

 **Recent changes:**

+## 26.0.1+1.31.5
+
+- **OTHER CHANGES**
+  - add flags `client-ca-file`, `tls-cert-file` and `tls-private-key-file` to `k8s_controller_manager_settings` (contribution by @hajowieland). Fixes [#69](https://github.com/githubixx/ansible-role-kubernetes-controller/issues/69)
+  - add flags `client-ca-file`, `tls-cert-file` and `tls-private-key-file` to `k8s_scheduler_settings`
+
 ## 26.0.0+1.31.5

 - **UPDATE**
@@ -75,7 +81,7 @@ See full [CHANGELOG.md](https://github.com/githubixx/ansible-role-kubernetes-con
 roles:
   - name: githubixx.kubernetes_controller
     src: https://github.com/githubixx/ansible-role-kubernetes-controller.git
-    version: 26.0.0+1.31.5
+    version: 26.0.1+1.31.5
 ```

 ## Role (default) variables
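Once the role is pinned in `requirements.yml` as above, a minimal playbook sketch for applying it to the controller nodes might look like this (the playbook is not part of the diff; `become: true` and the file name are assumptions, while the `k8s_controller` host group is the one used throughout this role):

```yaml
# Hypothetical playbook (e.g. k8s-controller.yml)
- hosts: k8s_controller
  become: true
  roles:
    - githubixx.kubernetes_controller
```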
@@ -370,6 +376,9 @@ k8s_controller_manager_settings:
   "requestheader-client-ca-file": "{{ k8s_ctl_pki_dir }}/ca-k8s-apiserver.pem"
   "service-account-private-key-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-controller-manager-sa-key.pem"
   "use-service-account-credentials": "true"
+  "client-ca-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-apiserver.pem"
+  "tls-cert-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-controller-manager.pem"
+  "tls-private-key-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-controller-manager-key.pem"

 # The directory to store scheduler configuration.
 k8s_scheduler_conf_dir: "{{ k8s_ctl_conf_dir }}/kube-scheduler"
@@ -381,6 +390,9 @@ k8s_scheduler_settings:
   "authentication-kubeconfig": "{{ k8s_scheduler_conf_dir }}/kubeconfig"
   "authorization-kubeconfig": "{{ k8s_scheduler_conf_dir }}/kubeconfig"
   "requestheader-client-ca-file": "{{ k8s_ctl_pki_dir }}/ca-k8s-apiserver.pem"
+  "client-ca-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-apiserver.pem"
+  "tls-cert-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-scheduler.pem"
+  "tls-private-key-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-scheduler-key.pem"

 # These security/sandbox related settings will be used for
 # "kube-apiserver", "kube-scheduler" and "kube-controller-manager"

defaults/main.yml

+3

@@ -302,6 +302,9 @@ k8s_scheduler_settings:
   "authentication-kubeconfig": "{{ k8s_scheduler_conf_dir }}/kubeconfig"
   "authorization-kubeconfig": "{{ k8s_scheduler_conf_dir }}/kubeconfig"
   "requestheader-client-ca-file": "{{ k8s_ctl_pki_dir }}/ca-k8s-apiserver.pem"
+  "client-ca-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-apiserver.pem"
+  "tls-cert-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-scheduler.pem"
+  "tls-private-key-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-scheduler-key.pem"

 # These security/sandbox related settings will be used for
 # "kube-apiserver", "kube-scheduler" and "kube-controller-manager"
