From d5a17907b86b8be7607b82004081bccb1dca4f9a Mon Sep 17 00:00:00 2001
From: neogopher
Date: Thu, 5 Jun 2025 17:44:31 +0530
Subject: [PATCH 1/2] add main page layout and content

---
 .github/styles/Google/Headings.yml            |   2 +
 .../learn-how-to/hardening-guide/README.mdx   | 262 ++++++++++++++++++
 .../hardening-guide/_category_.json           |   5 +
 .../control-plane-components.mdx              |   5 +
 .../hardening-guide/control-plane.mdx         |   6 +
 .../learn-how-to/hardening-guide/etcd.mdx     |   5 +
 .../learn-how-to/hardening-guide/policies.mdx |   6 +
 .../hardening-guide/worker-node.mdx           |   5 +
 8 files changed, 296 insertions(+)
 create mode 100644 vcluster/learn-how-to/hardening-guide/README.mdx
 create mode 100644 vcluster/learn-how-to/hardening-guide/_category_.json
 create mode 100644 vcluster/learn-how-to/hardening-guide/control-plane-components.mdx
 create mode 100644 vcluster/learn-how-to/hardening-guide/control-plane.mdx
 create mode 100644 vcluster/learn-how-to/hardening-guide/etcd.mdx
 create mode 100644 vcluster/learn-how-to/hardening-guide/policies.mdx
 create mode 100644 vcluster/learn-how-to/hardening-guide/worker-node.mdx

diff --git a/.github/styles/Google/Headings.yml b/.github/styles/Google/Headings.yml
index 035b526f0..42c5b20d1 100644
--- a/.github/styles/Google/Headings.yml
+++ b/.github/styles/Google/Headings.yml
@@ -8,6 +8,8 @@ indicators:
   - ":"
 exceptions:
   - Azure
+  - API Server
+  - CIS Hardening
   - CLI
   - Cosmos
   - Docker
diff --git a/vcluster/learn-how-to/hardening-guide/README.mdx b/vcluster/learn-how-to/hardening-guide/README.mdx
new file mode 100644
index 000000000..5911a23dc
--- /dev/null
+++ b/vcluster/learn-how-to/hardening-guide/README.mdx
@@ -0,0 +1,262 @@
---
title: CIS Benchmark
sidebar_label: Harden your vCluster with CIS Benchmark
description: How to harden your vCluster with CIS Benchmark
---

# CIS Hardening guide

This document outlines a set of hardening guidelines for securing vCluster deployments in alignment with the [Center for Internet Security (CIS) Kubernetes Benchmark](https://www.cisecurity.org/benchmark/kubernetes). It is intended for platform engineers, DevOps, and security teams responsible for managing and securing virtual Kubernetes clusters deployed within a multi-tenant or production environment. It is also meant to help you evaluate the security level of the hardened cluster against each control in the benchmark.

vCluster provides a lightweight, namespaced Kubernetes control plane that runs within a host Kubernetes cluster. This unique architecture introduces specific security considerations, as some components are virtualized while others remain dependent on the underlying host infrastructure. As such, certain CIS benchmark controls may apply differently or require alternative implementations within the context of vCluster.

The objectives of this guide are to:
- Identify applicable CIS controls and best practices for vCluster environments,
- Highlight configuration options that improve the security posture of deployed vClusters,
- Distinguish between the responsibilities of the host cluster and vCluster with respect to benchmark compliance,
- Recommend alternative techniques for evaluating and enforcing hardening policies.

By following the practices outlined in this guide, organizations can improve the security and compliance readiness of their vCluster deployments while maintaining flexibility and operational efficiency.

:::note
This guide assumes that the host Kubernetes cluster is already hardened in accordance with the CIS benchmark.
The focus here is on securing the vCluster environment itself, including its API server, control plane components, and access controls. +::: + +This guide is intended to be used with the following versions of CIS Kubernetes Benchmark, Kubernetes, and vCluster: +
| vCluster Version | CIS Benchmark Version | Kubernetes Version |
| --- | --- | --- |
| v0.26.0 | v1.10.0 | v1.31 |

:::important
This guide currently covers the K8s distribution for the vCluster API Server with embedded etcd as the backing store.
Other Kubernetes distributions or backing store setups are **not covered** and may require different hardening approaches.
:::

## vCluster runtime requirements

### Configure `default` service account

**Set `automountServiceAccountToken: false` for 'default' service accounts in all namespaces**

Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod.
Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account.
The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.

Hence, the default service account in every namespace must include this value:
```yaml
automountServiceAccountToken: false
```

To apply this, run the following command in the virtual cluster context:
```bash
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  for sa in $(kubectl get sa -n "$ns" -o jsonpath='{.items[*].metadata.name}'); do
    kubectl patch serviceaccount "$sa" \
      -p '{"automountServiceAccountToken": false}' \
      -n "$ns"
  done
done
```

### API Server audit configuration

**Enable auditing on the Kubernetes API Server and set the desired audit log path**

Control 1.2.16 of the benchmark recommends enabling auditing on the Kubernetes API Server. Auditing the Kubernetes API Server provides a security-relevant, chronological set of records documenting the sequence of activities that have affected the system, whether triggered by individual users, administrators, or other components of the system.
Additionally, a minimal audit policy must be in place for the auditing to be carried out.

Create a config map with a minimal audit policy in the vCluster namespace.
```yaml title="audit-config-configmap.yaml"
apiVersion: v1
kind: ConfigMap
metadata:
  name: audit-config
  namespace: <vcluster-namespace>
data:
  audit-policy.yaml: |
    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
    - level: Metadata
```

Pass the audit settings as arguments to the API Server and mount the policy while creating the vCluster:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
        - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
        - --audit-log-path=/var/log/audit.log
        - --audit-log-maxage=30
        - --audit-log-maxbackup=10
        - --audit-log-maxsize=100
  statefulSet:
    persistence:
      addVolumes:
      - name: audit-log
        emptyDir: {}
      - name: audit-policy
        configMap:
          name: audit-config
      addVolumeMounts:
      - name: audit-log
        mountPath: /var/log
      - name: audit-policy
        mountPath: /etc/kubernetes
```

### API Server encryption configuration

**Ensure that encryption providers are appropriately configured**

Where etcd encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, aescbc, kms, and secretbox are likely to be appropriate options.

Generate a 32-byte key using the command below:
```bash
head -c 32 /dev/urandom | base64
```
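If you are scripting these steps, you can capture the generated key in a shell variable and substitute it into the configuration file created in the next step. This is a minimal sketch; the variable name and the `<base64-encoded-key>` placeholder are illustrative, and the `sed -i` invocation assumes GNU sed (on macOS, use `sed -i ''`):

```bash
# Generate a 32-byte random key and keep it in a shell variable.
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# Substitute the key into the encryption configuration file shown below.
# The '|' delimiter is safe because base64 output never contains '|'.
sed -i "s|<base64-encoded-key>|${ENCRYPTION_KEY}|" encryption-config.yaml
```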
Create an encryption configuration file containing the base64-encoded key generated previously.

```yaml title="encryption-config.yaml"
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-key>
      - identity: {}
```

Create a secret in the vCluster namespace from the configuration file.
```bash
kubectl create secret generic encryption-config --from-file=encryption-config.yaml -n <vcluster-namespace>
```

Finally, create the vCluster referencing the secret:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
        - --encryption-provider-config=/etc/encryption/encryption-config.yaml
  statefulSet:
    persistence:
      addVolumes:
      - name: encryption-config
        secret:
          secretName: encryption-config
      addVolumeMounts:
      - name: encryption-config
        mountPath: /etc/encryption
        readOnly: true
```

## Reference hardened vCluster configuration

Below is a reference vCluster values file with the minimum configuration required for a hardened installation. Use it in combination with the runtime-level settings above and other measures to achieve full compliance.

```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
        - --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,NodeRestriction
        - --request-timeout=300s
        - --encryption-provider-config=/etc/encryption/encryption-config.yaml
        - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
        - --audit-log-path=/var/log/audit.log
        - --audit-log-maxage=30
        - --audit-log-maxbackup=10
        - --audit-log-maxsize=100
      controllerManager:
        extraArgs:
        - --terminated-pod-gc-threshold=12500
        - --profiling=false
        - --use-service-account-credentials=true
      scheduler:
        extraArgs:
        - --profiling=false
  advanced:
    virtualScheduler:
      enabled: true
  backingStore:
    etcd:
      embedded:
        enabled: true
  statefulSet:
    persistence:
      addVolumes:
      - name: audit-log
        emptyDir: {}
      - name: audit-policy
        configMap:
          name: audit-config
      - name: encryption-config
        secret:
          secretName: encryption-config
      addVolumeMounts:
      - name: audit-log
        mountPath: /var/log
      - name: audit-policy
        mountPath: /etc/kubernetes
      - name: encryption-config
        mountPath: /etc/encryption
        readOnly: true
sync:
  fromHost:
    nodes:
      enabled: true
```

## Known failures & remediation
TODO

## Assessment guides
If you have followed this guide, your vCluster should be configured to pass the CIS Kubernetes Benchmark. Review the guides below to understand what each benchmark check expects and how to audit it on your own cluster.
While not every control applies directly due to the namespaced and virtualized nature of vCluster, the recommended practices in this document address the subset of controls that are applicable or adaptable.

:::important Note on Validation Tools
While tools like [kube-bench](https://github.com/aquasecurity/kube-bench) are commonly used to validate CIS benchmark compliance in traditional Kubernetes deployments, they cannot be directly applied to vCluster environments due to the fundamental differences in how vCluster operates. vCluster's virtualized control plane architecture means that many components run as containers within the host cluster rather than as traditional system processes, making standard automated assessment tools incompatible with this deployment model.
+ +For vCluster environments, manual verification of controls and custom assessment approaches are necessary to ensure compliance with the benchmark requirements. +::: + +The following sections provide detailed assessment guidance for each area of the CIS Kubernetes Benchmark: + +- [**Section 1: Control Plane Security Configuration**
](./control-plane-components) + +- [**Section 2: Etcd Node Configuration**
](./etcd) + +- [**Section 3: Control Plane Configuration**
](./control-plane) + +- [**Section 4: Worker Node Security Configuration**
](./worker-node) + +- [**Section 5: Kubernetes Policies**
](./policies)

### Test controls methodology

Each control in the CIS Kubernetes Benchmark was evaluated against a vCluster that was configured according to this hardening guide.

Where control audits differ from the original CIS benchmark, the audit commands specific to vCluster are provided for testing.

These are the possible results for each control:

- PASS - The vCluster under test passed the audit outlined in the benchmark.
- NOT APPLICABLE - The control is not applicable to vCluster because of how it is designed to operate. The remediation section contains an explanation of why this is so.
- WARN - The control is manual in the CIS benchmark and depends on the cluster's use case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure vCluster does not prevent their implementation, but no further configuration or auditing of the cluster under test has been performed.

By reviewing these assessment sections and mapping them to your vCluster configuration, you can ensure that your hardened deployment not only follows security best practices but is also auditable and defensible in a compliance context.
\ No newline at end of file
diff --git a/vcluster/learn-how-to/hardening-guide/_category_.json b/vcluster/learn-how-to/hardening-guide/_category_.json
new file mode 100644
index 000000000..54d551e96
--- /dev/null
+++ b/vcluster/learn-how-to/hardening-guide/_category_.json
@@ -0,0 +1,5 @@
+{
+  "label": "CIS Hardening guide",
+  "collapsible": true,
+  "collapsed": true
+}
diff --git a/vcluster/learn-how-to/hardening-guide/control-plane-components.mdx b/vcluster/learn-how-to/hardening-guide/control-plane-components.mdx
new file mode 100644
index 000000000..18b1c3f3d
--- /dev/null
+++ b/vcluster/learn-how-to/hardening-guide/control-plane-components.mdx
@@ -0,0 +1,5 @@
+---
+title: Self assessment guide to validate control plane components
+sidebar_label: Control Plane Components
+description: Self assessment guide to validate control plane components
+---
\ No newline at end of file
diff --git a/vcluster/learn-how-to/hardening-guide/control-plane.mdx b/vcluster/learn-how-to/hardening-guide/control-plane.mdx
new file mode 100644
index 000000000..bbff94d98
--- /dev/null
+++ b/vcluster/learn-how-to/hardening-guide/control-plane.mdx
@@ -0,0 +1,6 @@
+---
+title: Self assessment guide to validate control plane configuration
+sidebar_label: Control Plane Configuration
+description: Self assessment guide to validate control plane configuration
+---
+
diff --git a/vcluster/learn-how-to/hardening-guide/etcd.mdx b/vcluster/learn-how-to/hardening-guide/etcd.mdx
new file mode 100644
index 000000000..f32e78e49
--- /dev/null
+++ b/vcluster/learn-how-to/hardening-guide/etcd.mdx
@@ -0,0 +1,5 @@
+---
+title: Self assessment guide to validate ETCD configuration
+sidebar_label: Etcd
+description: Self assessment guide to validate ETCD configuration
+---
\ No newline at end of file
diff --git a/vcluster/learn-how-to/hardening-guide/policies.mdx b/vcluster/learn-how-to/hardening-guide/policies.mdx
new file mode 100644
index 000000000..33e3277d2
--- /dev/null
+++ b/vcluster/learn-how-to/hardening-guide/policies.mdx
@@ -0,0 +1,6 @@
+---
+title: Self assessment guide to validate Policies
+sidebar_label: Policies
+description: Self assessment guide to validate Policies
+---
+
diff --git a/vcluster/learn-how-to/hardening-guide/worker-node.mdx b/vcluster/learn-how-to/hardening-guide/worker-node.mdx
new file mode 100644
index 000000000..a9042bf08
--- /dev/null
+++ b/vcluster/learn-how-to/hardening-guide/worker-node.mdx
@@ -0,0 +1,5 @@
+---
+title: Self assessment guide to validate worker node configuration
+sidebar_label: Worker Node Configuration
+description: Self assessment guide to validate worker node configuration
+---
\ No newline at end of file

From f0dcc03320d1ef27ec21d601a0b03b53e5a617f3 Mon Sep 17 00:00:00 2001
From: neogopher
Date: Tue, 17 Jun 2025 13:35:28 +0530
Subject: [PATCH 2/2] add control plane security page

---
 .../control-plane-components.mdx | 1666 ++++++++++++++++-
 1 file changed, 1662 insertions(+), 4 deletions(-)

diff --git a/vcluster/learn-how-to/hardening-guide/control-plane-components.mdx b/vcluster/learn-how-to/hardening-guide/control-plane-components.mdx
index 18b1c3f3d..5d9e60c29 100644
--- a/vcluster/learn-how-to/hardening-guide/control-plane-components.mdx
+++ b/vcluster/learn-how-to/hardening-guide/control-plane-components.mdx
@@ -1,5 +1,1663 @@
 ---
-title: Self assessment guide to validate control plane components
-sidebar_label: Control Plane Components
-description: Self assessment guide to validate control plane components
----
\ No newline at end of file
+title: Self assessment guide - Control Plane Security Configuration
+sidebar_label: Control Plane Security Configuration
+description: Self assessment guide to validate control plane security configuration
+---

This section covers security recommendations for the direct configuration of Kubernetes control plane processes, including:
- API Server configuration and security settings
- Controller Manager security parameters
- Scheduler security configurations
- General control plane security practices

_Assessment focus for vCluster_: Since vCluster virtualizes the control plane components, verification involves checking the `extraArgs` configuration in your `vcluster.yaml` file and ensuring proper security parameters are set for the virtualized API Server, controller manager, and scheduler.

:::note
For auditing each control, create the vCluster using default values as shown below, unless specified otherwise.
```bash
vcluster create my-vcluster --connect=false
```
:::

## 1.1 Master Node Configuration Files

### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)

**Result:** NOT APPLICABLE

**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-apiserver.yaml) as in kubeadm-based clusters. Instead, the vCluster API server runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file permissions check defined in this control is not applicable in the context of vCluster.

### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)

**Result:** NOT APPLICABLE

**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-apiserver.yaml) as in kubeadm-based clusters. Instead, the vCluster API server runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file ownership check defined in this control is not applicable in the context of vCluster.
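Although these controls are not applicable, you can confirm the virtualized architecture they refer to with a quick spot check. The following command, which assumes the default `my-vcluster` deployment used throughout this guide, shows the control plane components running as processes inside the syncer container rather than as static pods:

```bash
# List the virtualized control plane processes inside the syncer container;
# no static pod manifests on the host are involved.
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler'
```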
+ +### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated) + +**Result:** NOT APPLICABLE + +**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-controller-manager.yaml) as in kubeadm-based clusters. Instead, the vCluster controller manager runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file permissions check defined in this control is not applicable in the context of vCluster. + +### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated) + +**Result:** NOT APPLICABLE + +**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-controller-manager.yaml) as in kubeadm-based clusters. Instead, the vCluster controller manager runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file ownership check defined in this control is not applicable in the context of vCluster. + +### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated) + +**Result:** NOT APPLICABLE + +**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-scheduler.yaml) as in kubeadm-based clusters. Instead, the vCluster scheduler runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file permissions check defined in this control is not applicable in the context of vCluster. + +### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated) + +**Result:** NOT APPLICABLE + +**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-scheduler.yaml) as in kubeadm-based clusters. Instead, the vCluster scheduler runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file ownership check defined in this control is not applicable in the context of vCluster. + +### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated) + +**Result:** NOT APPLICABLE + +**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/etcd.yaml) as in kubeadm-based clusters. In case of embedded etcd, the etcd runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file permissions check defined in this control is not applicable in the context of vCluster. + +### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated) + +**Result:** NOT APPLICABLE + +**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/etcd.yaml) as in kubeadm-based clusters. In case of embedded etcd, the etcd runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file ownership check defined in this control is not applicable in the context of vCluster. 
+ + +### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Automated) + + +**Result:** NOT APPLICABLE + +**Remediation:** vCluster does not configure or manage Container Network Interface (CNI) settings. Networking is handled entirely by the host (parent) cluster’s CNI plugin. As a result, there are no CNI configuration files (e.g., /etc/cni/net.d/*.conf) present within the vCluster container. This control should be evaluated on the underlying host cluster and is not applicable in vCluster environments. + + +### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Automated) + + +**Result:** NOT APPLICABLE + +**Remediation:** vCluster does not configure or manage Container Network Interface (CNI) settings. Networking is handled entirely by the host (parent) cluster’s CNI plugin. As a result, there are no CNI configuration files (e.g., /etc/cni/net.d/*.conf) present within the vCluster container. This control should be evaluated on the underlying host cluster and is not applicable in vCluster environments. + +### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated) + +**Result:** PASS + +**Remediation:** Get the etcd data directory, passed as an argument to `--data-dir`, from the command +``` +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep etcd +``` + +Run the audit command to verify the permissions on the data directory. If they do not match the expected result only then run the below command to set the appropriate permissions. +``` +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- chmod 700 /data/etcd +``` + +**Audit:** +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/etcd +``` +Verify that the etcd data directory permissions are set to 700 or more restrictive. + +**Expected Result:** +``` +permissions has value 700, expected 700 or more restrictive +``` + +**Returned Value:** +``` +permissions=700 +``` + +### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) + +**Result:** NOT APPLICABLE + +**Remediation:** This control recommends that the etcd data directory be owned by the etcd user and group (etcd:etcd) to follow least privilege principles. +However, in vCluster, etcd is embedded and runs as `root` within the syncer container. There is no separate etcd user present in the container. Thus the directory ownership check defined in this control is not applicable in the context of vCluster. + +### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) + +**Result:** PASS + +**Audit:** +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/pki/admin.conf +``` +Verify that the admin.conf file permissions are 600 or more restrictive. + +**Expected Result:** +``` +permissions has value 600, expected 600 or more restrictive +``` + +**Returned Value:** +``` +permissions=600 +``` + +### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) + +**Result:** PASS + +**Audit:** +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c %U:%G /data/pki/admin.conf +``` +Verify that the admin.conf file ownership is set to root:root. 
+ +**Expected Result:** +``` +root:root +``` + +**Returned Value:** +``` +root:root +``` + +### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) + +**Result:** PASS + +**Remediation:** Get the scheduler kubeconfig file, passed as an argument to `--kubeconfig`, from the command +``` +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler +``` + +**Audit:** +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/pki/scheduler.conf +``` +Verify that the scheduler kubeconfig file permissions are set to 600 or more restrictive. + +**Expected Result:** +``` +permissions has value 600, expected 600 or more restrictive +``` + +**Returned Value:** +``` +permissions=600 +``` + +### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) + +**Result:** PASS + +**Remediation:** Get the scheduler kubeconfig file, passed as an argument to `--kubeconfig`, from the command +``` +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler +``` + +**Audit:** +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c %U:%G /data/pki/scheduler.conf +``` +Verify that the scheduler kubeconfig file ownership is set to root:root. + +**Expected Result:** +``` +root:root +``` + +**Returned Value:** +``` +root:root +``` + +### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) + +**Result:** PASS + +**Remediation:** Get the controller-manager kubeconfig file, passed as an argument to `--kubeconfig`, from the command +``` +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager +``` + +**Audit:** +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/pki/controller-manager.conf +``` +Verify that the controller-manager kubeconfig file permissions are set to 600 or more restrictive. + +**Expected Result:** +``` +permissions has value 600, expected 600 or more restrictive +``` + +**Returned Value:** +``` +permissions=600 +``` + +### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) + +**Result:** PASS + +**Remediation:** Get the controller-manager kubeconfig file, passed as an argument to `--kubeconfig`, from the command +``` +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager +``` + +**Audit:** +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c %U:%G /data/pki/controller-manager.conf +``` +Verify that the controller-manager kubeconfig file ownership is set to root:root. 
**Expected Result:**
```
root:root
```

**Returned Value:**
```
root:root
```

### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)

**Result:** PASS

**Audit:**
Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- find /data/pki -not -user root -o -not -group root | wc -l | grep -q '^0$' && echo "All files owned by root" || echo "Some files not owned by root"
```
Verify that the ownership of all files and directories in this hierarchy is set to root:root.

**Expected Result:**
```
All files owned by root
```

**Returned Value:**
```
All files owned by root
```

### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Automated)

**Result:** PASS

**Remediation:** Run the audit command to verify the permissions on the certificate files. If they do not match the expected result, run the command below to set the appropriate permissions.
```
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.crt' -exec chmod 600 {} \;"
```

**Audit:**
Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.crt' -exec stat -c permissions=%a {} \;"
```
Verify that the permissions on all the certificate files are 600 or more restrictive.

**Expected Result:**
```
permissions on all the certificate files are 600 or more restrictive
```

**Returned Value:**
```
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
```

### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated)

**Result:** PASS

**Remediation:** Run the audit command to verify the permissions on the key files. If they do not match the expected result, run the command below to set the appropriate permissions.
```
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.key' -exec chmod 600 {} \;"
```

**Audit:**
Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.key' -exec stat -c permissions=%a {} \;"
```
Verify that the permissions on all the key files are 600 or more restrictive.

**Expected Result:**
```
permissions on all the key files are 600 or more restrictive
```

**Returned Value:**
```
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
```

## 1.2 API Server
### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)

**Result:** PASS

**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
        - --anonymous-auth=false
```

**Audit:**
Create the vCluster using the above values file.
+```bash +vcluster create my-vcluster -f vcluster.yaml --connect=false +``` + +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -- ps -ef | grep kube-apiserver +``` +Verify that the --anonymous-auth argument is set to false. + +**Expected Result:** +``` +'--anonymous-auth' is equal to 'false' +``` + +**Returned Value:** +```bash +41 root 0:07 /binaries/kube-apiserver --service-cluster-ip-range=10.96.0.0/16 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --profiling=false --advertise-address=127.0.0.1 --endpoint-reconciler-type=none --anonymous-auth=false +``` + +### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) + +**Result:** PASS + +**Audit:** +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver +``` +Verify that the --token-auth-file parameter is not set. + +**Expected Result:** +``` +'--token-auth-file' is not present +``` + +**Returned Value:** +``` +45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,NodeRestriction --request-timeout=300s --encryption-provider-config=/etc/encryption/encryption-config.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 +``` + + +### 1.2.3 Ensure that the --DenyServiceExternalIPs is set (Automated) + + +**Result:** PASS + +**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster as: +```yaml title="vcluster.yaml" +controlPlane: + distro: + k8s: + enabled: true + apiServer: + extraArgs: + - --enable-admission-plugins=DenyServiceExternalIPs +``` + 
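For context, once this plugin is active, the API server rejects Service objects that set the `spec.externalIPs` field. A hypothetical manifest like the following would be denied inside the virtual cluster; the name is illustrative and 203.0.113.10 is a documentation-range address:

```yaml
# Hypothetical Service that the DenyServiceExternalIPs admission plugin
# rejects, because it sets spec.externalIPs directly.
apiVersion: v1
kind: Service
metadata:
  name: demo-external-ip
spec:
  selector:
    app: demo
  ports:
    - port: 80
  externalIPs:
    - 203.0.113.10
```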
**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -- ps -ef | grep kube-apiserver
```
Verify that the 'DenyServiceExternalIPs' argument exists as a string value in --enable-admission-plugins.

**Expected Result:**
```
'DenyServiceExternalIPs' argument exists as a string value in the --enable-admission-plugins list.
```

**Returned Value:**
```bash
45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,NodeRestriction --request-timeout=300s --encryption-provider-config=/etc/encryption/encryption-config.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100
```

### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)

**Result:** NOT APPLICABLE

**Remediation:** This control recommends setting up certificate-based kubelet authentication so that the API server authenticates itself to kubelets when submitting requests. However, since vCluster does not interact directly with kubelets running on the host cluster, these client certificates are not required. Thus the check defined in this control is not applicable in the context of vCluster.

### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)

**Result:** NOT APPLICABLE

**Remediation:** This control recommends ensuring that the kubelet certificate authority is set appropriately. However, since vCluster does not interact directly with kubelets running on the host cluster, verifying kubelet certificates against a certificate authority is not required. Thus the check defined in this control is not applicable in the context of vCluster.

### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)

**Result:** PASS

**Audit:**
Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --authorization-mode argument exists and is not set to AlwaysAllow.

**Expected Result:**
```
'AlwaysAllow' argument does not exist as a string value in the --authorization-mode list.
+``` + +**Returned Value:** +``` +45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,NodeRestriction --request-timeout=300s --encryption-provider-config=/etc/encryption/encryption-config.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 +``` + + +### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) + + +**Result:** PASS + +**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster as: +```yaml title="vcluster.yaml" +controlPlane: + distro: + k8s: + enabled: true + apiServer: + extraArgs: + - --authorization-mode=Node +``` + +**Audit:** +Create the vCluster using the above values file. +```bash +vcluster create my-vcluster -f vcluster.yaml --connect=false +``` + +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -- ps -ef | grep kube-apiserver +``` +Verify that the --authorization-mode argument exists and is set to a value to include Node. + +**Expected Result:** +``` +'Node' argument exists as a string value in the --authorization-mode list. 
+``` + +**Returned Value:** +```bash +47 root 0:10 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --authorization-mode=Node +``` + + +### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) + + +**Result:** PASS + +**Audit:** +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver +``` +Verify that the --authorization-mode argument exists and is set to a value to include RBAC. + +**Expected Result:** +``` +'RBAC' argument exists as a string value in the --authorization-mode list. +``` + +**Returned Value:** +``` +47 root 0:10 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --authorization-mode=Node +``` + + +### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) + + +**Result:** PASS + +**Remediation:** Follow the Kubernetes documentation and set the desired limits in a configuration file. Create a config map in the vCluster namespace that contains the configuration file. 
+```yaml title="admission-control.yaml" +apiVersion: v1 +kind: ConfigMap +metadata: + name: admission-control + namespace: vcluster-my-vcluster +data: + admission-control.yaml: | + apiVersion: apiserver.config.k8s.io/v1 + kind: AdmissionConfiguration + plugins: + - name: EventRateLimit + configuration: + apiVersion: eventratelimit.admission.k8s.io/v1alpha1 + kind: Configuration + limits: + - type: Server + qps: 50 + burst: 100 +``` + +Pass the below configuration as arguments to the API Server while creating the vCluster as: +```yaml title="vcluster.yaml" +controlPlane: + distro: + k8s: + enabled: true + apiServer: + extraArgs: + - --enable-admission-plugins=EventRateLimit + - --admission-control-config-file=/etc/kubernetes/admission-control.yaml + statefulSet: + persistence: + addVolumes: + - name: admission-control + configMap: + name: admission-control + addVolumeMounts: + - name: admission-control + mountPath: /etc/kubernetes +``` + +**Audit:** +Create the vCluster using the above values file. +```bash +vcluster create my-vcluster -f vcluster.yaml --connect=false +``` + +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver +``` +Verify that the --enable-admission-plugins argument is set to a value that includes EventRateLimit. + +**Expected Result:** +``` +'EventRateLimit' argument exist as a string value in the --enable-admission-plugins list. +``` + +**Returned Value:** +```bash +45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=EventRateLimit --admission-control-config-file=/etc/kubernetes/admission-control.yaml +``` + + +### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) + + +**Result:** PASS + +**Audit:** +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver +``` +Verify that if the --enable-admission-plugins argument is set, its value does not include AlwaysAdmit. + +**Expected Result:** +``` +'AlwaysAdmit' argument does not exist as a string value in the --enable-admission-plugins list. 
+``` + +**Returned Value:** +``` +45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=EventRateLimit --admission-control-config-file=/etc/kubernetes/admission-control.yaml +``` + +### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) + +**Result:** PASS + +**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster as: +```yaml title="vcluster.yaml" +controlPlane: + distro: + k8s: + enabled: true + apiServer: + extraArgs: + - --enable-admission-plugins=AlwaysPullImages +``` + +**Audit:** +Create the vCluster using the above values file. +```bash +vcluster create my-vcluster -f vcluster.yaml --connect=false +``` + +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver +``` +Verify that the --enable-admission-plugins argument is set to a value that includes AlwaysPullImages. + +**Expected Result:** +``` +'AlwaysPullImages' argument exist as a string value in the --enable-admission-plugins list. +``` + +**Returned Value:** +```bash +45 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages +``` + +### 1.2.12 Ensure that the admission control plugin ServiceAccount is set (Automated) + +**Result:** PASS + +**Audit:** +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver +``` +Verify that the --disable-admission-plugins argument is set to a value that does not includes ServiceAccount. 
### 1.2.12 Ensure that the admission control plugin ServiceAccount is set (Automated)

**Result:** PASS

**Audit:**
Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --disable-admission-plugins argument is set to a value that does not include ServiceAccount.

**Expected Result:**
```
--disable-admission-plugins is not set.
```

**Returned Value:**
```
45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
```

### 1.2.13 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)

**Result:** PASS

**Audit:**
Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --disable-admission-plugins argument is set to a value that does not include NamespaceLifecycle.

**Expected Result:**
```
--disable-admission-plugins is not set.
```

**Returned Value:**
```
45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
```

### 1.2.14 Ensure that the admission control plugin NodeRestriction is set (Automated)

**Result:** PASS

**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
        - --enable-admission-plugins=NodeRestriction
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --enable-admission-plugins argument is set to a value that includes NodeRestriction.

**Expected Result:**
```
'NodeRestriction' argument exists as a string value in the --enable-admission-plugins list.
+``` + +**Returned Value:** +```bash +44 root 0:03 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=NodeRestriction +``` + +### 1.2.15 Ensure that the --profiling argument is set to false (Automated) + +**Result:** PASS + +**Audit:** +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver +``` +Verify that the --profiling argument is set to false. + +**Expected Result:** +``` +'--profiling' is equal to 'false' +``` + +**Returned Value:** +``` +45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false +``` + +### 1.2.16 Ensure that the --audit-log-path argument is set (Automated) + +**Result:** PASS + +**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster as: +```yaml title="vcluster.yaml" +controlPlane: + distro: + k8s: + enabled: true + apiServer: + extraArgs: + - --audit-log-path=/var/log/audit.log + statefulSet: + persistence: + addVolumes: + - name: audit-log + emptyDir: {} + addVolumeMounts: + - name: audit-log + mountPath: /var/log +``` + +**Audit:** +Create the vCluster using the above values file. +```bash +vcluster create my-vcluster -f vcluster.yaml --connect=false +``` + +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver +``` +Verify that the --audit-log-path argument is set as appropriate. 

**Expected Result:**
```
'--audit-log-path' is present
```

**Returned Value:**
```
45 root 0:03 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log
```

### 1.2.17 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)

**Result:** PASS

**Remediation:** Pass the following configuration to the API Server when creating the vCluster:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --audit-log-path=/var/log/audit.log
          - --audit-log-maxage=30
  statefulSet:
    persistence:
      addVolumes:
        - name: audit-log
          emptyDir: {}
      addVolumeMounts:
        - name: audit-log
          mountPath: /var/log
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --audit-log-maxage argument is set to 30 or as appropriate.
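
To isolate just the flag value instead of scanning the full command line, you can narrow the same audit output with `grep -o`; as in the audit commands above, the `grep` runs on your local shell against the `kubectl exec` output. The same pattern applies to the `--audit-log-maxbackup` and `--audit-log-maxsize` checks that follow:

```bash
# `--` ends grep's option parsing so the pattern's leading dashes
# are not mistaken for grep flags.
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep -o -- '--audit-log-maxage=[0-9]*'
```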

**Expected Result:**
```
'--audit-log-maxage' is greater or equal to 30
```

**Returned Value:**
```
45 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log --audit-log-maxage=30
```

### 1.2.18 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)

**Result:** PASS

**Remediation:** Pass the following configuration to the API Server when creating the vCluster:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --audit-log-path=/var/log/audit.log
          - --audit-log-maxbackup=10
  statefulSet:
    persistence:
      addVolumes:
        - name: audit-log
          emptyDir: {}
      addVolumeMounts:
        - name: audit-log
          mountPath: /var/log
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --audit-log-maxbackup argument is set to 10 or as appropriate.

**Expected Result:**
```
'--audit-log-maxbackup' is greater or equal to 10
```

**Returned Value:**
```
44 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log --audit-log-maxbackup=10
```

### 1.2.19 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)

**Result:** PASS

**Remediation:** Pass the following configuration to the API Server when creating the vCluster:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --audit-log-path=/var/log/audit.log
          - --audit-log-maxsize=100
  statefulSet:
    persistence:
      addVolumes:
        - name: audit-log
          emptyDir: {}
      addVolumeMounts:
        - name: audit-log
          mountPath: /var/log
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --audit-log-maxsize argument is set to 100 or as appropriate.

**Expected Result:**
```
'--audit-log-maxsize' is greater or equal to 100
```

**Returned Value:**
```
43 root 0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log --audit-log-maxsize=100
```

### 1.2.20 Ensure that the --request-timeout argument is set as appropriate (Automated)

**Result:** PASS

**Remediation:** Pass the following configuration to the API Server when creating the vCluster:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --request-timeout=300s
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --request-timeout argument is either not set or set to an appropriate value.

**Expected Result:**
```
'--request-timeout' is set to 300s
```

**Returned Value:**
```
43 root 0:03 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --request-timeout=300s
```

### 1.2.21 Ensure that the --service-account-lookup argument is set to true (Automated)

**Result:** PASS

**Remediation:** Pass the following configuration to the API Server when creating the vCluster:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --service-account-lookup=true
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that, if the --service-account-lookup argument exists, it is set to true.

**Expected Result:**
```
'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
```

**Returned Value:**
```
43 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --service-account-lookup=true
```

### 1.2.22 Ensure that the --service-account-key-file argument is set as appropriate (Automated)

**Result:** PASS

**Audit:**
Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --service-account-key-file argument exists and is set as appropriate.

**Expected Result:**
```
'--service-account-key-file' argument exists and is set appropriately
```

**Returned Value:**
```
45 root 0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
```

### 1.2.23 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)

**Result:** PASS

**Remediation:** Pass the following configuration when creating the vCluster to enable embedded etcd as the backing store:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
  backingStore:
    etcd:
      embedded:
        enabled: true
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --etcd-certfile and --etcd-keyfile arguments exist and are set as appropriate.

**Expected Result:**
```
'--etcd-certfile' is present AND '--etcd-keyfile' is present
```

**Returned Value:**
```
47 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
```

### 1.2.24 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)

**Result:** PASS

**Audit:**
Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --tls-cert-file and --tls-private-key-file arguments exist and are set as appropriate.

**Expected Result:**
```
'--tls-cert-file' is present AND '--tls-private-key-file' is present
```

**Returned Value:**
```
45 root 0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
```

### 1.2.25 Ensure that the --client-ca-file argument is set as appropriate (Automated)

**Result:** PASS

**Audit:**
Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --client-ca-file argument exists and is set as appropriate.
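
Beyond checking the flag, you can also inspect the referenced CA certificate itself. A quick sketch, assuming `openssl` is available on your local machine; the path comes from the process arguments shown below:

```bash
# Print the subject and validity window of the client CA that the
# API server trusts for client certificate authentication.
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- cat /data/pki/client-ca.crt | openssl x509 -noout -subject -dates
```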

**Expected Result:**
```
'--client-ca-file' is present
```

**Returned Value:**
```
45 root 0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
```

### 1.2.26 Ensure that the --etcd-cafile argument is set as appropriate (Automated)

**Result:** PASS

**Remediation:** Pass the following configuration when creating the vCluster to enable embedded etcd as the backing store:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
  backingStore:
    etcd:
      embedded:
        enabled: true
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --etcd-cafile argument exists and is set as appropriate.

**Expected Result:**
```
'--etcd-cafile' is present
```

**Returned Value:**
```
47 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
```

### 1.2.27 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)

**Result:** PASS

**Remediation:** Follow the Kubernetes documentation and configure an EncryptionConfig file.
Generate a 32-byte key using the following command:
```bash
head -c 32 /dev/urandom | base64
```

Create an encryption configuration file with the base64-encoded key created previously.

```yaml title="encryption-config.yaml"
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64_ENCODED_KEY> # placeholder: the key generated above
      - identity: {}
```
Create a secret in the vCluster namespace from the configuration file.
```bash
# The namespace matches the example vCluster used throughout this guide.
kubectl create secret generic encryption-config --from-file=encryption-config.yaml -n vcluster-my-vcluster
```

Finally, create the vCluster referencing the secret:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --encryption-provider-config=/etc/encryption/encryption-config.yaml
  statefulSet:
    persistence:
      addVolumes:
        - name: encryption-config
          secret:
            secretName: encryption-config
      addVolumeMounts:
        - name: encryption-config
          mountPath: /etc/encryption
          readOnly: true
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --encryption-provider-config argument is set to an EncryptionConfig file. Additionally, ensure that the EncryptionConfig file covers all the desired resources, especially secrets.

**Expected Result:**
```
'--encryption-provider-config' is present
```

**Returned Value:**
```
45 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --encryption-provider-config=/etc/encryption/encryption-config.yaml
```

### 1.2.28 Ensure that encryption providers are appropriately configured (Automated)

**Result:** PASS

**Remediation:** Follow the Kubernetes documentation and configure an EncryptionConfig file.
Generate a 32-byte key using the following command:
```bash
head -c 32 /dev/urandom | base64
```

Create an encryption configuration file with the base64-encoded key created previously.

```yaml title="encryption-config.yaml"
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64_ENCODED_KEY> # placeholder: the key generated above
      - identity: {}
```
Create a secret in the vCluster namespace from the configuration file.
```bash
# The namespace matches the example vCluster used throughout this guide.
kubectl create secret generic encryption-config --from-file=encryption-config.yaml -n vcluster-my-vcluster
```

Finally, create the vCluster referencing the secret:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --encryption-provider-config=/etc/encryption/encryption-config.yaml
  statefulSet:
    persistence:
      addVolumes:
        - name: encryption-config
          secret:
            secretName: encryption-config
      addVolumeMounts:
        - name: encryption-config
          mountPath: /etc/encryption
          readOnly: true
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- cat /etc/encryption/encryption-config.yaml
```
Verify that aescbc, kms, or secretbox is set as the encryption provider for all the desired resources.

**Expected Result:**
```
aescbc is set as the encryption provider for the configured resources
```

**Returned Value:**
```
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64_ENCODED_KEY> # value redacted
      - identity: {}
```

### 1.2.29 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated)

**Result:** PASS

**Remediation:** Pass the following configuration to the API Server when creating the vCluster:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
```
Verify that the --tls-cipher-suites argument is set to one or more of the cipher suites listed below:
```
TLS_AES_128_GCM_SHA256
TLS_AES_256_GCM_SHA384
TLS_CHACHA20_POLY1305_SHA256
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
TLS_RSA_WITH_3DES_EDE_CBC_SHA
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_AES_256_GCM_SHA384
```

**Expected Result:**
```
'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
```

**Returned Value:**
```
43 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
```

## 1.3 Controller Manager

### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)

**Result:** PASS

**Remediation:** Pass the following configuration to the controller manager when creating the vCluster:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      controllerManager:
        extraArgs:
          - --terminated-pod-gc-threshold=12500
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager
```
Verify that the --terminated-pod-gc-threshold argument is set as appropriate.

**Expected Result:**
```
'--terminated-pod-gc-threshold' is present
```

**Returned Value:**
```
98 root 0:01 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl --terminated-pod-gc-threshold=12500
```

### 1.3.2 Ensure that the --profiling argument is set to false (Automated)

**Result:** PASS

**Remediation:** Pass the following configuration to the controller manager when creating the vCluster:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      controllerManager:
        extraArgs:
          - --profiling=false
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager
```
Verify that the --profiling argument is set to false.

**Expected Result:**
```
'--profiling' is equal to 'false'
```

**Returned Value:**
```
98 root 0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl --profiling=false
```

### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)

**Result:** PASS

**Audit:**
Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager
```
Verify that the --use-service-account-credentials argument is set to true.
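
For context, with `--use-service-account-credentials=true` each controller loop authenticates with its own ServiceAccount in `kube-system` and is authorized through the default `system:controller:*` RBAC bindings. As a sketch, assuming the vCluster CLI can connect to `my-vcluster`, you can list those bindings from inside the vCluster:

```bash
# Each binding corresponds to one controller identity, for example
# system:controller:replicaset-controller.
vcluster connect my-vcluster -- kubectl get clusterrolebindings -o name | grep system:controller
```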
+ +**Expected Result:** +``` +'--use-service-account-credentials' is not equal to 'false' +``` + +**Returned Value:** +``` +102 root 0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl +``` + +### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) + +**Result:** PASS + +**Audit:** +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager +``` +Verify that the --service-account-private-key-file argument is set as appropriate. + +**Expected Result:** +``` +'--service-account-private-key-file' is present +``` + +**Returned Value:** +``` +102 root 0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl +``` + +### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) + +**Result:** PASS + +**Audit:** +Run the following command against the vCluster pod: +```bash +kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager +``` +Verify that the --root-ca-file argument exists and is set to a certificate bundle file containing the root certificate for the API server's serving certificate. 
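
To confirm the bundle really contains the root of the API server's serving certificate, you can verify the chain locally. A sketch, assuming `openssl` is available on your machine; the paths come from the returned process arguments in this guide:

```bash
# Copy both certificates out of the pod, then verify the serving
# certificate against the root CA bundle given to the controller manager.
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- cat /data/pki/server-ca.crt > server-ca.crt
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- cat /data/pki/apiserver.crt > apiserver.crt
openssl verify -CAfile server-ca.crt apiserver.crt
```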

**Expected Result:**
```
'--root-ca-file' is present
```

**Returned Value:**
```
102 root 0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl
```

### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)

**Result:** NOT APPLICABLE

**Remediation:** This control recommends enabling the `RotateKubeletServerCertificate` feature gate, which ensures that kubelet server certificates are automatically rotated by the Kubernetes control plane. However, vCluster does not run real kubelets; it operates entirely within the host cluster and abstracts away node-level operations. Since vCluster has no control over kubelet configuration or certificate rotation, this setting must be enforced at the host cluster level, where the actual kubelets are running. Thus, the check defined in this control is not applicable in the context of vCluster.

### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)

**Result:** PASS

**Audit:**
Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager
```
Verify that the --bind-address argument is set to 127.0.0.1

**Expected Result:**
```
'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
```

**Returned Value:**
```
102 root 0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl
```

## 1.4 Scheduler

### 1.4.1 Ensure that the --profiling argument is set to false (Automated)

**Result:** PASS

**Remediation:** Pass the following configuration to the scheduler when creating the vCluster:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
      scheduler:
        extraArgs:
          - --profiling=false
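  # The virtual scheduler and synced real nodes configured below are what
  # make the vCluster run its own kube-scheduler; without them there is no
  # kube-scheduler process inside the vCluster pod for this check to audit.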
  advanced:
    virtualScheduler:
      enabled: true
sync:
  fromHost:
    nodes:
      enabled: true
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler
```
Verify that the --profiling argument is set to false.

**Expected Result:**
```
'--profiling' is equal to 'false'
```

**Returned Value:**
```
98 root 0:01 /binaries/kube-scheduler --authentication-kubeconfig=/data/pki/scheduler.conf --authorization-kubeconfig=/data/pki/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/data/pki/scheduler.conf --leader-elect=false --profiling=false
```

### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)

**Result:** PASS

**Remediation:** Pass the following configuration when creating the vCluster to enable the virtual scheduler:
```yaml title="vcluster.yaml"
controlPlane:
  distro:
    k8s:
      enabled: true
  advanced:
    virtualScheduler:
      enabled: true
sync:
  fromHost:
    nodes:
      enabled: true
```

**Audit:**
Create the vCluster using the above values file.
```bash
vcluster create my-vcluster -f vcluster.yaml --connect=false
```

Run the following command against the vCluster pod:
```bash
kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler
```
Verify that the --bind-address argument is set to 127.0.0.1

**Expected Result:**
```
'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
```

**Returned Value:**
```
88 root 0:00 /binaries/kube-scheduler --authentication-kubeconfig=/data/pki/scheduler.conf --authorization-kubeconfig=/data/pki/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/data/pki/scheduler.conf --leader-elect=false
```
\ No newline at end of file