diff --git a/.github/styles/Google/Headings.yml b/.github/styles/Google/Headings.yml
index 035b526f0..c8ea62e90 100644
--- a/.github/styles/Google/Headings.yml
+++ b/.github/styles/Google/Headings.yml
@@ -8,6 +8,9 @@ indicators:
   - ":"
 exceptions:
   - Azure
+  - API Server
+  - Authorization
+  - CIS Hardening
   - CLI
   - Cosmos
   - Docker
diff --git a/vcluster/learn-how-to/hardening-guide/1-control-plane-components.mdx b/vcluster/learn-how-to/hardening-guide/1-control-plane-components.mdx
new file mode 100644
index 000000000..249b8956c
--- /dev/null
+++ b/vcluster/learn-how-to/hardening-guide/1-control-plane-components.mdx
@@ -0,0 +1,1663 @@
+---
+title: Self-assessment guide - Control Plane Security Configuration
+sidebar_label: Control Plane Security Configuration
+description: Self-assessment guide to validate control plane security configuration
+---
+
+This section covers security recommendations for the direct configuration of Kubernetes control plane processes, including:
+- API Server configuration and security settings
+- Controller Manager security parameters
+- Scheduler security configurations
+- General control plane security practices
+
+_Assessment focus for vCluster_: Since vCluster virtualizes the control plane components, verification involves checking the extraArgs configurations in your values.yaml file and ensuring proper security parameters are set for the virtualized API server, controller manager, and scheduler.
+
+:::note
+For auditing each control, create the vCluster using default values as shown below, unless specified otherwise.
+```bash
+vcluster create my-vcluster --connect=false
+```
+:::
+
+
+## 1.1 Master Node Configuration Files
+### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
+
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-apiserver.yaml) as in kubeadm-based clusters. Instead, the vCluster API server runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file permissions check defined in this control is not applicable in the context of vCluster.
+
+
+### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-apiserver.yaml) as in kubeadm-based clusters. Instead, the vCluster API server runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file ownership check defined in this control is not applicable in the context of vCluster.
+
+### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-controller-manager.yaml) as in kubeadm-based clusters. Instead, the vCluster controller manager runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file permissions check defined in this control is not applicable in the context of vCluster.
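+
+As a quick sanity check (not itself a CIS audit step), you can confirm that the control plane components run as regular processes inside the syncer container rather than as static pods, using the same pattern as the audit commands in this guide:
+```bash
+# List the control plane processes running inside the syncer container
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler'
+```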
+
+### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-controller-manager.yaml) as in kubeadm-based clusters. Instead, the vCluster controller manager runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file ownership check defined in this control is not applicable in the context of vCluster.
+
+### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-scheduler.yaml) as in kubeadm-based clusters. Instead, the vCluster scheduler runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file permissions check defined in this control is not applicable in the context of vCluster.
+
+### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-scheduler.yaml) as in kubeadm-based clusters. Instead, the vCluster scheduler runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file ownership check defined in this control is not applicable in the context of vCluster.
+
+### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/etcd.yaml) as in kubeadm-based clusters. When embedded etcd is used, etcd runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file permissions check defined in this control is not applicable in the context of vCluster.
+
+### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/etcd.yaml) as in kubeadm-based clusters. When embedded etcd is used, etcd runs as a separate binary embedded within the same container as the syncer pod. This architecture does not use the static pod mechanism, and thus the file ownership check defined in this control is not applicable in the context of vCluster.
+
+
+### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Automated)
+
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** vCluster does not configure or manage Container Network Interface (CNI) settings. Networking is handled entirely by the host (parent) cluster’s CNI plugin. As a result, there are no CNI configuration files (e.g., /etc/cni/net.d/*.conf) present within the vCluster container. This control should be evaluated on the underlying host cluster and is not applicable in vCluster environments.
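+
+If you want to confirm this directly, the following check (an illustrative sketch, not a CIS audit step) verifies that no CNI configuration files are present inside the syncer container:
+```bash
+# Expected to report that no CNI configuration files exist, since networking
+# is handled by the host cluster's CNI plugin
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c 'ls /etc/cni/net.d/*.conf 2>/dev/null || echo "No CNI configuration files present"'
+```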
+
+
+### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Automated)
+
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** vCluster does not configure or manage Container Network Interface (CNI) settings. Networking is handled entirely by the host (parent) cluster’s CNI plugin. As a result, there are no CNI configuration files (e.g., /etc/cni/net.d/*.conf) present within the vCluster container. This control should be evaluated on the underlying host cluster and is not applicable in vCluster environments.
+
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Remediation:** Get the etcd data directory, passed as an argument to `--data-dir`, from the command
+```
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep etcd
+```
+
+Run the audit command to verify the permissions on the data directory. If they do not match the expected result, run the following command to set the appropriate permissions.
+```
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- chmod 700 /data/etcd
+```
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/etcd
+```
+Verify that the etcd data directory permissions are set to 700 or more restrictive.
+
+**Expected Result:**
+```
+permissions has value 700, expected 700 or more restrictive
+```
+
+**Returned Value:**
+```
+permissions=700
+```
+
+### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** This control recommends that the etcd data directory be owned by the etcd user and group (etcd:etcd) to follow least privilege principles.
+However, in vCluster, etcd is embedded and runs as `root` within the syncer container. There is no separate etcd user present in the container. Thus, the directory ownership check defined in this control is not applicable in the context of vCluster.
+
+### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/pki/admin.conf
+```
+Verify that the admin.conf file permissions are 600 or more restrictive.
+
+**Expected Result:**
+```
+permissions has value 600, expected 600 or more restrictive
+```
+
+**Returned Value:**
+```
+permissions=600
+```
+
+### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c %U:%G /data/pki/admin.conf
+```
+Verify that the admin.conf file ownership is set to root:root.
+
+**Expected Result:**
+```
+root:root
+```
+
+**Returned Value:**
+```
+root:root
+```
+
+### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Remediation:** Get the scheduler kubeconfig file, passed as an argument to `--kubeconfig`, from the command
+```
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler
+```
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/pki/scheduler.conf
+```
+Verify that the scheduler kubeconfig file permissions are set to 600 or more restrictive.
+
+**Expected Result:**
+```
+permissions has value 600, expected 600 or more restrictive
+```
+
+**Returned Value:**
+```
+permissions=600
+```
+
+### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Remediation:** Get the scheduler kubeconfig file, passed as an argument to `--kubeconfig`, from the command
+```
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler
+```
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c %U:%G /data/pki/scheduler.conf
+```
+Verify that the scheduler kubeconfig file ownership is set to root:root.
+
+**Expected Result:**
+```
+root:root
+```
+
+**Returned Value:**
+```
+root:root
+```
+
+### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Remediation:** Get the controller-manager kubeconfig file, passed as an argument to `--kubeconfig`, from the command
+```
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager
+```
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/pki/controller-manager.conf
+```
+Verify that the controller-manager kubeconfig file permissions are set to 600 or more restrictive.
+
+**Expected Result:**
+```
+permissions has value 600, expected 600 or more restrictive
+```
+
+**Returned Value:**
+```
+permissions=600
+```
+
+### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Remediation:** Get the controller-manager kubeconfig file, passed as an argument to `--kubeconfig`, from the command
+```
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager
+```
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c %U:%G /data/pki/controller-manager.conf
+```
+Verify that the controller-manager kubeconfig file ownership is set to root:root.
+
+**Expected Result:**
+```
+root:root
+```
+
+**Returned Value:**
+```
+root:root
+```
+
+### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- find /data/pki -not -user root -o -not -group root | wc -l | grep -q '^0$' && echo "All files owned by root" || echo "Some files not owned by root"
+```
+Verify that the ownership of all files and directories in this hierarchy is set to root:root.
+
+**Expected Result:**
+```
+All files owned by root
+```
+
+**Returned Value:**
+```
+All files owned by root
+```
+
+### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** PASS
+
+**Remediation:** Run the audit command to verify the permissions on the certificate files. If they do not match the expected result, run the following command to set the appropriate permissions.
+```
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.crt' -exec chmod 600 {} \;"
+```
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.crt' -exec stat -c permissions=%a {} \;"
+```
+Verify that the permissions on all the certificate files are 600 or more restrictive.
+
+**Expected Result:**
+```
+permissions on all the certificate files are 600 or more restrictive
+```
+
+**Returned Value:**
+```
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+```
+
+### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated)
+
+**Result:** PASS
+
+**Remediation:** Run the audit command to verify the permissions on the key files. If they do not match the expected result, run the following command to set the appropriate permissions.
+```
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.key' -exec chmod 600 {} \;"
+```
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.key' -exec stat -c permissions=%a {} \;"
+```
+Verify that the permissions on all the key files are 600 or more restrictive.
+
+**Expected Result:**
+```
+permissions on all the key files are 600 or more restrictive
+```
+
+**Returned Value:**
+```
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+permissions=600
+```
+
+## 1.2 API Server
+### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --anonymous-auth=false
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --anonymous-auth argument is set to false.
+
+**Expected Result:**
+```
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value:**
+```bash
+41 root 0:07 /binaries/kube-apiserver --service-cluster-ip-range=10.96.0.0/16 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --profiling=false --advertise-address=127.0.0.1 --endpoint-reconciler-type=none --anonymous-auth=false
+```
+
+### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --token-auth-file parameter is not set.
+
+**Expected Result:**
+```
+'--token-auth-file' is not present
+```
+
+**Returned Value:**
+```
+45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,NodeRestriction --request-timeout=300s --encryption-provider-config=/etc/encryption/encryption-config.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100
+```
+
+
+### 1.2.3 Ensure that the --DenyServiceExternalIPs is set (Automated)
+
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --enable-admission-plugins=DenyServiceExternalIPs
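+          # Multiple admission plugins can be combined in a single flag when
+          # several are required, as in the returned value below, for example:
+          # - --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,NodeRestriction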
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the 'DenyServiceExternalIPs' argument exists as a string value in the --enable-admission-plugins list.
+
+**Expected Result:**
+```
+'DenyServiceExternalIPs' argument exists as a string value in the --enable-admission-plugins list.
+```
+
+**Returned Value:**
+```bash
+45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,NodeRestriction --request-timeout=300s --encryption-provider-config=/etc/encryption/encryption-config.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100
+```
+
+### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** This control recommends setting up certificate-based kubelet authentication to ensure that the apiserver authenticates itself to kubelets when submitting requests. However, since vCluster does not interact directly with kubelets running on the host cluster, these client certificates are not required. Thus, the check defined in this control is not applicable in the context of vCluster.
+
+### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** This control recommends ensuring that the kubelet certificate authority is set appropriately. However, since vCluster does not interact directly with kubelets running on the host cluster, verifying kubelet certificates against a certificate authority is not required. Thus, the check defined in this control is not applicable in the context of vCluster.
+
+### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --authorization-mode argument exists and is not set to AlwaysAllow.
+
+**Expected Result:**
+```
+'AlwaysAllow' argument does not exist as a string value in the --authorization-mode list.
+```
+
+**Returned Value:**
+```
+45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,NodeRestriction --request-timeout=300s --encryption-provider-config=/etc/encryption/encryption-config.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100
+```
+
+
+### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated)
+
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --authorization-mode=Node
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --authorization-mode argument exists and is set to a value that includes Node.
+
+**Expected Result:**
+```
+'Node' argument exists as a string value in the --authorization-mode list.
+```
+
+**Returned Value:**
+```bash
+47 root 0:10 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --authorization-mode=Node
+```
+
+
+### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated)
+
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --authorization-mode argument exists and is set to a value that includes RBAC.
+
+**Expected Result:**
+```
+'RBAC' argument exists as a string value in the --authorization-mode list.
+```
+
+**Returned Value:**
+```
+47 root 0:10 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --authorization-mode=Node
+```
+
+
+### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual)
+
+
+**Result:** PASS
+
+**Remediation:** Follow the Kubernetes documentation and set the desired limits in a configuration file. Create a config map in the vCluster namespace that contains the configuration file.
+```yaml title="admission-control.yaml"
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: admission-control
+  namespace: vcluster-my-vcluster
+data:
+  admission-control.yaml: |
+    apiVersion: apiserver.config.k8s.io/v1
+    kind: AdmissionConfiguration
+    plugins:
+      - name: EventRateLimit
+        configuration:
+          apiVersion: eventratelimit.admission.k8s.io/v1alpha1
+          kind: Configuration
+          limits:
+            - type: Server
+              qps: 50
+              burst: 100
+```
+
+Pass the below configuration as arguments to the API Server while creating the vCluster:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --enable-admission-plugins=EventRateLimit
+          - --admission-control-config-file=/etc/kubernetes/admission-control.yaml
+  statefulSet:
+    persistence:
+      addVolumes:
+        - name: admission-control
+          configMap:
+            name: admission-control
+      addVolumeMounts:
+        - name: admission-control
+          mountPath: /etc/kubernetes
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --enable-admission-plugins argument is set to a value that includes EventRateLimit.
+
+**Expected Result:**
+```
+'EventRateLimit' argument exists as a string value in the --enable-admission-plugins list.
+```
+
+**Returned Value:**
+```bash
+45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=EventRateLimit --admission-control-config-file=/etc/kubernetes/admission-control.yaml
+```
+
+
+### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
+
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that if the --enable-admission-plugins argument is set, its value does not include AlwaysAdmit.
+
+**Expected Result:**
+```
+'AlwaysAdmit' argument does not exist as a string value in the --enable-admission-plugins list.
+```
+
+**Returned Value:**
+```
+45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=EventRateLimit --admission-control-config-file=/etc/kubernetes/admission-control.yaml
+```
+
+### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --enable-admission-plugins=AlwaysPullImages
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --enable-admission-plugins argument is set to a value that includes AlwaysPullImages.
+
+**Expected Result:**
+```
+'AlwaysPullImages' argument exists as a string value in the --enable-admission-plugins list.
+```
+
+**Returned Value:**
+```bash
+45 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages
+```
+
+### 1.2.12 Ensure that the admission control plugin ServiceAccount is set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --disable-admission-plugins argument is set to a value that does not include ServiceAccount.
+
+**Expected Result:**
+```
+--disable-admission-plugins is not set.
+```
+
+**Returned Value:**
+```
+45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
+```
+
+### 1.2.13 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --disable-admission-plugins argument is set to a value that does not include NamespaceLifecycle.
+
+**Expected Result:**
+```
+--disable-admission-plugins is not set.
+```
+
+**Returned Value:**
+```
+45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
+```
+
+### 1.2.14 Ensure that the admission control plugin NodeRestriction is set (Automated)
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --enable-admission-plugins=NodeRestriction
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --enable-admission-plugins argument is set to a value that includes NodeRestriction.
+
+**Expected Result:**
+```
+'NodeRestriction' argument exists as a string value in the --enable-admission-plugins list.
+```
+
+**Returned Value:**
+```bash
+44 root 0:03 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=NodeRestriction
+```
+
+### 1.2.15 Ensure that the --profiling argument is set to false (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --profiling argument is set to false.
+
+**Expected Result:**
+```
+'--profiling' is equal to 'false'
+```
+
+**Returned Value:**
+```
+45 root 0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
+```
+
+### 1.2.16 Ensure that the --audit-log-path argument is set (Automated)
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --audit-log-path=/var/log/audit.log
+  statefulSet:
+    persistence:
+      addVolumes:
+        - name: audit-log
+          emptyDir: {}
+      addVolumeMounts:
+        - name: audit-log
+          mountPath: /var/log
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --audit-log-path argument is set as appropriate.
+
+**Expected Result:**
+```
+'--audit-log-path' is present
+```
+
+**Returned Value:**
+```bash
+45 root 0:03 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log
+```
+
+### 1.2.17 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --audit-log-path=/var/log/audit.log
+          - --audit-log-maxage=30
+  statefulSet:
+    persistence:
+      addVolumes:
+        - name: audit-log
+          emptyDir: {}
+      addVolumeMounts:
+        - name: audit-log
+          mountPath: /var/log
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --audit-log-maxage argument is set to 30 or as appropriate.
+
+**Expected Result:**
+```
+'--audit-log-maxage' is greater or equal to 30
+```
+
+**Returned Value:**
+```bash
+45 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log --audit-log-maxage=30
+```
+
+### 1.2.18 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --audit-log-path=/var/log/audit.log
+          - --audit-log-maxbackup=10
+  statefulSet:
+    persistence:
+      addVolumes:
+        - name: audit-log
+          emptyDir: {}
+      addVolumeMounts:
+        - name: audit-log
+          mountPath: /var/log
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --audit-log-maxbackup argument is set to 10 or as appropriate.
+
+**Expected Result:**
+```
+'--audit-log-maxbackup' is greater or equal to 10
+```
+
+**Returned Value:**
+```bash
+44 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log --audit-log-maxbackup=10
+```
+
+### 1.2.19 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --audit-log-path=/var/log/audit.log
+          - --audit-log-maxsize=100
+  statefulSet:
+    persistence:
+      addVolumes:
+        - name: audit-log
+          emptyDir: {}
+      addVolumeMounts:
+        - name: audit-log
+          mountPath: /var/log
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --audit-log-maxsize argument is set to 100 or as appropriate.
+
+**Expected Result:**
+```
+'--audit-log-maxsize' is greater or equal to 100
+```
+
+**Returned Value:**
+```bash
+43 root 0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log --audit-log-maxsize=100
+```
+
+### 1.2.20 Ensure that the --request-timeout argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --request-timeout=300s
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --request-timeout argument is either not set or set to an appropriate value.
+
+**Expected Result:**
+```
+'--request-timeout' is set to 300s
+```
+
+**Returned Value:**
+```bash
+43 root 0:03 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --request-timeout=300s
+```
+
+### 1.2.21 Ensure that the --service-account-lookup argument is set to true (Automated)
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --service-account-lookup=true
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that if the --service-account-lookup argument exists, it is set to true.
+
+**Expected Result:**
+```
+'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
+```
+
+**Returned Value:**
+```bash
+43 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --service-account-lookup=true
+```
+
+### 1.2.22 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --service-account-key-file argument exists and is set as appropriate.
+
+**Expected Result:**
+```
+'--service-account-key-file' argument exists and is set appropriately
+```
+
+**Returned Value:**
+```
+45 root 0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
+```
+
+### 1.2.23 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration while creating the vCluster to enable embedded etcd:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+  backingStore:
+    etcd:
+      embedded:
+        enabled: true
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --etcd-certfile and --etcd-keyfile arguments exist and they are set as appropriate.
+
+**Expected Result:**
+```
+'--etcd-certfile' is present AND '--etcd-keyfile' is present
+```
+
+**Returned Value:**
+```bash
+47 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
+```
+
+### 1.2.24 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --tls-cert-file and --tls-private-key-file arguments exist and they are set as appropriate.
+
+**Expected Result:**
+```
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value:**
+```
+45 root 0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
+```
+
+### 1.2.25 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --client-ca-file argument exists and it is set as appropriate.
+
+**Expected Result:**
+```
+'--client-ca-file' is present
+```
+
+**Returned Value:**
+```
+45 root 0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
+```
+
+### 1.2.26 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Remediation:** Use embedded etcd as the backing store; vCluster then configures the API server with the etcd CA file automatically. Pass the below configuration while creating the vCluster:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+  backingStore:
+    etcd:
+      embedded:
+        enabled: true
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --etcd-cafile argument exists and it is set as appropriate.
+
+**Expected Result:**
+```
+'--etcd-cafile' is present
+```
+
+**Returned Value:**
+```bash
+47 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
+```
+
+### 1.2.27 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
+
+**Result:** PASS
+
+**Remediation:** Follow the Kubernetes documentation and configure an EncryptionConfig file.
+Generate a 32-byte (256-bit) key using the below command:
+```bash
+head -c 32 /dev/urandom | base64
+```
+
+Create an encryption configuration file with the base64-encoded key created previously.
+
+```yaml title="encryption-config.yaml"
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key1
+              secret: <base64-encoded key>
+      - identity: {}
+```
+Create a secret in the vCluster namespace from the configuration file.
+```bash
+kubectl create secret generic encryption-config --from-file=encryption-config.yaml -n vcluster-my-vcluster
+```
+
+Finally, create the vCluster referencing the secret:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --encryption-provider-config=/etc/encryption/encryption-config.yaml
+  statefulSet:
+    persistence:
+      addVolumes:
+        - name: encryption-config
+          secret:
+            secretName: encryption-config
+      addVolumeMounts:
+        - name: encryption-config
+          mountPath: /etc/encryption
+          readOnly: true
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --encryption-provider-config argument is set to an EncryptionConfig file. Additionally, ensure that the EncryptionConfig file has all the desired resources covered, especially any secrets.
+
+**Expected Result:**
+```
+'--encryption-provider-config' is present
+```
+
+**Returned Value:**
+```bash
+45 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --encryption-provider-config=/etc/encryption/encryption-config.yaml
+```
+
+### 1.2.28 Ensure that encryption providers are appropriately configured (Automated)
+
+**Result:** PASS
+
+**Remediation:** Follow the Kubernetes documentation and configure an EncryptionConfig file.
+Generate a 32-byte (256-bit) key using the below command:
+```bash
+head -c 32 /dev/urandom | base64
+```
+
+Create an encryption configuration file with the base64-encoded key created previously.
+
+```yaml title="encryption-config.yaml"
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key1
+              secret: <base64-encoded key>
+      - identity: {}
+```
+Create a secret in the vCluster namespace from the configuration file.
+```bash
+kubectl create secret generic encryption-config --from-file=encryption-config.yaml -n vcluster-my-vcluster
+```
+
+Finally, create the vCluster referencing the secret:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --encryption-provider-config=/etc/encryption/encryption-config.yaml
+  statefulSet:
+    persistence:
+      addVolumes:
+        - name: encryption-config
+          secret:
+            secretName: encryption-config
+      addVolumeMounts:
+        - name: encryption-config
+          mountPath: /etc/encryption
+          readOnly: true
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- cat /etc/encryption/encryption-config.yaml
+```
+Verify that aescbc, kms, or secretbox is set as the encryption provider for all the desired resources.
+
+**Expected Result:**
+```
+aescbc is set as the encryption provider for the configured resources
+```
+
+**Returned Value:**
+```bash
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key1
+              secret: <base64-encoded key>
+      - identity: {}
+```
+
+
+### 1.2.29 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated)
+
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration as arguments to the API Server while creating the vCluster as:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --tls-cipher-suites argument is set to one or more of the cipher suites listed below:
+```
+TLS_AES_128_GCM_SHA256
+TLS_AES_256_GCM_SHA384
+TLS_CHACHA20_POLY1305_SHA256
+TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
+TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
+TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
+TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
+TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
+TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
+TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
+TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
+TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
+TLS_RSA_WITH_3DES_EDE_CBC_SHA
+TLS_RSA_WITH_AES_128_CBC_SHA
+TLS_RSA_WITH_AES_128_GCM_SHA256
+TLS_RSA_WITH_AES_256_CBC_SHA
+TLS_RSA_WITH_AES_256_GCM_SHA384
+```
+
+**Expected Result:**
+```
+'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
+```
+
+**Returned Value:**
+```bash
+43 root 0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+```
+
+
+## 1.3 Controller Manager
+
+### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration while creating the vCluster as:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      controllerManager:
+        extraArgs:
+          - --terminated-pod-gc-threshold=12500
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager
+```
+Verify that the --terminated-pod-gc-threshold argument is set as appropriate.
+
+**Expected Result:**
+```
+'--terminated-pod-gc-threshold' is present
+```
+
+**Returned Value:**
+```bash
+98 root 0:01 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl --terminated-pod-gc-threshold=12500
+```
+
+### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration while creating the vCluster as:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      controllerManager:
+        extraArgs:
+          - --profiling=false
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager
+```
+Verify that the --profiling argument is set to false.
+
+**Expected Result:**
+```
+'--profiling' is equal to 'false'
+```
+
+**Returned Value:**
+```bash
+98 root 0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl --profiling=false
+```
+
+### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager
+```
+Verify that the --use-service-account-credentials argument is set to true.
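+
+To check just this flag and its value rather than scanning the whole process line, a `grep -o` variant can be used (a sketch that assumes a POSIX shell on your workstation):
+```bash
+# Print only the flag=value pair for --use-service-account-credentials
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef \
+  | grep -o -- '--use-service-account-credentials=[^ ]*'
+```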
+
+**Expected Result:**
+```
+'--use-service-account-credentials' is not equal to 'false'
+```
+
+**Returned Value:**
+```
+102 root 0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl
+```
+
+### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager
+```
+Verify that the --service-account-private-key-file argument is set as appropriate.
+
+**Expected Result:**
+```
+'--service-account-private-key-file' is present
+```
+
+**Returned Value:**
+```
+102 root 0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl
+```
+
+### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager
+```
+Verify that the --root-ca-file argument exists and is set to a certificate bundle file containing the root certificate for the API server's serving certificate.
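+
+To confirm that the referenced file really is a CA certificate, you can inspect it with openssl. This is a sketch that assumes `openssl` is installed on your workstation and that the default vCluster PKI path shown in the output below is in use:
+```bash
+# Dump the subject and validity of the root CA used by the controller manager
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- cat /data/pki/server-ca.crt \
+  | openssl x509 -noout -subject -enddate
+```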
+
+**Expected Result:**
+```
+'--root-ca-file' is present
+```
+
+**Returned Value:**
+```
+102 root 0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl
+```
+
+
+### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** This control recommends enabling the `RotateKubeletServerCertificate` feature gate, which ensures that kubelet server certificates are automatically rotated by the Kubernetes control plane. However, vCluster does not run real kubelets; it operates entirely within the host cluster and abstracts away node-level operations. Since vCluster has no control over kubelet configuration or certificate rotation, enforcing this setting must be done at the host cluster level, where the actual kubelets are running. Thus the check defined in this control is not applicable in the context of vCluster.
+
+### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager
+```
+Verify that the --bind-address argument is set to 127.0.0.1
+
+**Expected Result:**
+```
+'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+```
+
+**Returned Value:**
+```
+102 root 0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl
+```
+
+## 1.4 Scheduler
+### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration as arguments to the scheduler while creating the vCluster as:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      scheduler:
+        extraArgs:
+          - --profiling=false
+  advanced:
+    virtualScheduler:
+      enabled: true
+sync:
+  fromHost:
+    nodes:
+      enabled: true
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler
+```
+Verify that the --profiling argument is set to false.
+
+**Expected Result:**
+```
+'--profiling' is equal to 'false'
+```
+
+**Returned Value:**
+```
+ 98 root 0:01 /binaries/kube-scheduler --authentication-kubeconfig=/data/pki/scheduler.conf --authorization-kubeconfig=/data/pki/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/data/pki/scheduler.conf --leader-elect=false --profiling=false
+```
+
+### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+**Result:** PASS
+
+**Remediation:** Pass the below configuration while creating the vCluster as:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+  advanced:
+    virtualScheduler:
+      enabled: true
+sync:
+  fromHost:
+    nodes:
+      enabled: true
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler
+```
+Verify that the --bind-address argument is set to 127.0.0.1
+
+**Expected Result:**
+```
+'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+```
+
+**Returned Value:**
+```
+88 root 0:00 /binaries/kube-scheduler --authentication-kubeconfig=/data/pki/scheduler.conf --authorization-kubeconfig=/data/pki/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/data/pki/scheduler.conf --leader-elect=false
+```
\ No newline at end of file
diff --git a/vcluster/learn-how-to/hardening-guide/2-etcd.mdx b/vcluster/learn-how-to/hardening-guide/2-etcd.mdx
new file mode 100644
index 000000000..c9f7ee101
--- /dev/null
+++ b/vcluster/learn-how-to/hardening-guide/2-etcd.mdx
@@ -0,0 +1,170 @@
+---
+title: Self assessment guide - ETCD configuration
+sidebar_label: Etcd
+description: Self assessment guide to validate ETCD configuration
+---
+
+This section covers security areas related to etcd configuration, including:
+- Encryption of sensitive data at rest and in transit.
+
+_Assessment focus for vCluster_: Key areas include verifying that correct authentication mechanisms are used and that data is safeguarded at rest and in transit via TLS encryption.
+
+## 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep etcd
+```
+Verify that the --cert-file and the --key-file arguments are set as appropriate.
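+
+You can additionally confirm that the referenced certificate and key exist inside the pod. This sketch assumes the default embedded etcd PKI paths shown in the output below:
+```bash
+# Confirm the etcd serving certificate and key are present
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ls -l /data/pki/etcd/server.crt /data/pki/etcd/server.key
+```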
+
+**Expected Result:**
+```
+'--cert-file' and '--key-file' arguments are appropriately set
+```
+
+**Returned Value:**
+```bash
+31 root 2:41 etcd --data-dir=/data/etcd --advertise-client-urls=https://emb-0.emb-headless.emb:2379 --initial-advertise-peer-urls=https://emb-0.emb-headless.emb:2380 --initial-cluster-token=vcluster --listen-client-urls=https://0.0.0.0:2379 --listen-metrics-urls=http://0.0.0.0:2381 --listen-peer-urls=https://0.0.0.0:2380 --name=emb-0 --heartbeat-interval=500 --election-timeout=5000 --experimental-watch-progress-notify-interval=5s --experimental-peer-skip-client-san-verification --log-level=info --snapshot-count=10000 --log-outputs=stderr --logger=zap --client-cert-auth=true --cert-file=/data/pki/etcd/server.crt --key-file=/data/pki/etcd/server.key --peer-client-cert-auth=true --peer-key-file=/data/pki/etcd/peer.key --peer-cert-file=/data/pki/etcd/peer.crt --peer-trusted-ca-file=/data/pki/etcd/ca.crt --trusted-ca-file=/data/pki/etcd/ca.crt --initial-cluster=emb-0=https://emb-0.emb-headless.emb:2380 --initial-cluster-state=new --force-new-cluster
+```
+
+## 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep etcd
+```
+Verify that the --client-cert-auth argument is set to true.
+
+**Expected Result:**
+```
+'--client-cert-auth' is set to 'true'
+```
+
+**Returned Value:**
+```bash
+31 root 2:41 etcd --data-dir=/data/etcd --advertise-client-urls=https://emb-0.emb-headless.emb:2379 --initial-advertise-peer-urls=https://emb-0.emb-headless.emb:2380 --initial-cluster-token=vcluster --listen-client-urls=https://0.0.0.0:2379 --listen-metrics-urls=http://0.0.0.0:2381 --listen-peer-urls=https://0.0.0.0:2380 --name=emb-0 --heartbeat-interval=500 --election-timeout=5000 --experimental-watch-progress-notify-interval=5s --experimental-peer-skip-client-san-verification --log-level=info --snapshot-count=10000 --log-outputs=stderr --logger=zap --client-cert-auth=true --cert-file=/data/pki/etcd/server.crt --key-file=/data/pki/etcd/server.key --peer-client-cert-auth=true --peer-key-file=/data/pki/etcd/peer.key --peer-cert-file=/data/pki/etcd/peer.crt --peer-trusted-ca-file=/data/pki/etcd/ca.crt --trusted-ca-file=/data/pki/etcd/ca.crt --initial-cluster=emb-0=https://emb-0.emb-headless.emb:2380 --initial-cluster-state=new --force-new-cluster
+```
+
+## 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep etcd
+```
+Verify that if the --auto-tls argument exists, it is not set to true.
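+
+As a quick pass/fail helper, you can grep for the insecure value directly. This is a sketch assuming a POSIX shell on your workstation; it prints FAIL only when the flag is explicitly set to true on the etcd command line:
+```bash
+# Fails only when --auto-tls=true is present on the etcd command line
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef \
+  | grep '[e]tcd' | grep -q -- '--auto-tls=true' && echo "FAIL" || echo "PASS"
+```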
+
+**Expected Result:**
+```
+'--auto-tls' argument does not exist
+```
+
+**Returned Value:**
+```bash
+31 root 2:41 etcd --data-dir=/data/etcd --advertise-client-urls=https://emb-0.emb-headless.emb:2379 --initial-advertise-peer-urls=https://emb-0.emb-headless.emb:2380 --initial-cluster-token=vcluster --listen-client-urls=https://0.0.0.0:2379 --listen-metrics-urls=http://0.0.0.0:2381 --listen-peer-urls=https://0.0.0.0:2380 --name=emb-0 --heartbeat-interval=500 --election-timeout=5000 --experimental-watch-progress-notify-interval=5s --experimental-peer-skip-client-san-verification --log-level=info --snapshot-count=10000 --log-outputs=stderr --logger=zap --client-cert-auth=true --cert-file=/data/pki/etcd/server.crt --key-file=/data/pki/etcd/server.key --peer-client-cert-auth=true --peer-key-file=/data/pki/etcd/peer.key --peer-cert-file=/data/pki/etcd/peer.crt --peer-trusted-ca-file=/data/pki/etcd/ca.crt --trusted-ca-file=/data/pki/etcd/ca.crt --initial-cluster=emb-0=https://emb-0.emb-headless.emb:2380 --initial-cluster-state=new --force-new-cluster
+```
+
+## 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep etcd
+```
+Verify that the --peer-cert-file and --peer-key-file arguments are set as appropriate.
+
+**Expected Result:**
+```
+'--peer-cert-file' and '--peer-key-file' arguments are appropriately set
+```
+
+**Returned Value:**
+```bash
+31 root 2:41 etcd --data-dir=/data/etcd --advertise-client-urls=https://emb-0.emb-headless.emb:2379 --initial-advertise-peer-urls=https://emb-0.emb-headless.emb:2380 --initial-cluster-token=vcluster --listen-client-urls=https://0.0.0.0:2379 --listen-metrics-urls=http://0.0.0.0:2381 --listen-peer-urls=https://0.0.0.0:2380 --name=emb-0 --heartbeat-interval=500 --election-timeout=5000 --experimental-watch-progress-notify-interval=5s --experimental-peer-skip-client-san-verification --log-level=info --snapshot-count=10000 --log-outputs=stderr --logger=zap --client-cert-auth=true --cert-file=/data/pki/etcd/server.crt --key-file=/data/pki/etcd/server.key --peer-client-cert-auth=true --peer-key-file=/data/pki/etcd/peer.key --peer-cert-file=/data/pki/etcd/peer.crt --peer-trusted-ca-file=/data/pki/etcd/ca.crt --trusted-ca-file=/data/pki/etcd/ca.crt --initial-cluster=emb-0=https://emb-0.emb-headless.emb:2380 --initial-cluster-state=new --force-new-cluster
+```
+
+## 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep etcd
+```
+Verify that the --peer-client-cert-auth argument is set to true.
+
+**Expected Result:**
+```
+'--peer-client-cert-auth' is set to 'true'
+```
+
+**Returned Value:**
+```bash
+31 root 2:41 etcd --data-dir=/data/etcd --advertise-client-urls=https://emb-0.emb-headless.emb:2379 --initial-advertise-peer-urls=https://emb-0.emb-headless.emb:2380 --initial-cluster-token=vcluster --listen-client-urls=https://0.0.0.0:2379 --listen-metrics-urls=http://0.0.0.0:2381 --listen-peer-urls=https://0.0.0.0:2380 --name=emb-0 --heartbeat-interval=500 --election-timeout=5000 --experimental-watch-progress-notify-interval=5s --experimental-peer-skip-client-san-verification --log-level=info --snapshot-count=10000 --log-outputs=stderr --logger=zap --client-cert-auth=true --cert-file=/data/pki/etcd/server.crt --key-file=/data/pki/etcd/server.key --peer-client-cert-auth=true --peer-key-file=/data/pki/etcd/peer.key --peer-cert-file=/data/pki/etcd/peer.crt --peer-trusted-ca-file=/data/pki/etcd/ca.crt --trusted-ca-file=/data/pki/etcd/ca.crt --initial-cluster=emb-0=https://emb-0.emb-headless.emb:2380 --initial-cluster-state=new --force-new-cluster
+```
+
+## 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep etcd
+```
+Verify that if the --peer-auto-tls argument exists, it is not set to true.
+
+**Expected Result:**
+```
+'--peer-auto-tls' argument does not exist
+```
+
+**Returned Value:**
+```bash
+31 root 2:41 etcd --data-dir=/data/etcd --advertise-client-urls=https://emb-0.emb-headless.emb:2379 --initial-advertise-peer-urls=https://emb-0.emb-headless.emb:2380 --initial-cluster-token=vcluster --listen-client-urls=https://0.0.0.0:2379 --listen-metrics-urls=http://0.0.0.0:2381 --listen-peer-urls=https://0.0.0.0:2380 --name=emb-0 --heartbeat-interval=500 --election-timeout=5000 --experimental-watch-progress-notify-interval=5s --experimental-peer-skip-client-san-verification --log-level=info --snapshot-count=10000 --log-outputs=stderr --logger=zap --client-cert-auth=true --cert-file=/data/pki/etcd/server.crt --key-file=/data/pki/etcd/server.key --peer-client-cert-auth=true --peer-key-file=/data/pki/etcd/peer.key --peer-cert-file=/data/pki/etcd/peer.crt --peer-trusted-ca-file=/data/pki/etcd/ca.crt --trusted-ca-file=/data/pki/etcd/ca.crt --initial-cluster=emb-0=https://emb-0.emb-headless.emb:2380 --initial-cluster-state=new --force-new-cluster
+```
+
+
+## 2.7 Ensure that a unique Certificate Authority is used for etcd (Manual)
+
+
+**Result:** PASS
+
+**Audit:**
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep etcd
+```
+Note the file referenced by the --trusted-ca-file argument.
+
+Run the following command and note the file referenced by '--client-ca-file'
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the file referenced by the --client-ca-file for apiserver is different from the --trusted-ca-file used by etcd.
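+
+One way to compare the two files is by certificate fingerprint. This sketch assumes `openssl` is available on your workstation and that the default PKI paths shown in the outputs below are in use; the two fingerprints must differ:
+```bash
+# Fingerprint of the CA trusted by etcd
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- cat /data/pki/etcd/ca.crt \
+  | openssl x509 -noout -fingerprint
+# Fingerprint of the client CA used by the API server
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- cat /data/pki/client-ca.crt \
+  | openssl x509 -noout -fingerprint
+```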
+
+**Expected Result:**
+```
+The file referenced by the --client-ca-file for api-server is different from the --trusted-ca-file
+```
+
+**Returned Value:**
+```bash
+31 root 2:41 etcd --data-dir=/data/etcd --advertise-client-urls=https://emb-0.emb-headless.emb:2379 --initial-advertise-peer-urls=https://emb-0.emb-headless.emb:2380 --initial-cluster-token=vcluster --listen-client-urls=https://0.0.0.0:2379 --listen-metrics-urls=http://0.0.0.0:2381 --listen-peer-urls=https://0.0.0.0:2380 --name=emb-0 --heartbeat-interval=500 --election-timeout=5000 --experimental-watch-progress-notify-interval=5s --experimental-peer-skip-client-san-verification --log-level=info --snapshot-count=10000 --log-outputs=stderr --logger=zap --client-cert-auth=true --cert-file=/data/pki/etcd/server.crt --key-file=/data/pki/etcd/server.key --peer-client-cert-auth=true --peer-key-file=/data/pki/etcd/peer.key --peer-cert-file=/data/pki/etcd/peer.crt --peer-trusted-ca-file=/data/pki/etcd/ca.crt --trusted-ca-file=/data/pki/etcd/ca.crt --initial-cluster=emb-0=https://emb-0.emb-headless.emb:2380 --initial-cluster-state=new --force-new-cluster
+```
+
+```bash
+47 root 6:43 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/16 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
+```
\ No newline at end of file
diff --git a/vcluster/learn-how-to/hardening-guide/3-control-plane.mdx b/vcluster/learn-how-to/hardening-guide/3-control-plane.mdx
new file mode 100644
index 000000000..889a92cd9
--- /dev/null
+++ b/vcluster/learn-how-to/hardening-guide/3-control-plane.mdx
@@ -0,0 +1,137 @@
+---
+title: Self assessment guide - Control Plane Configuration
+sidebar_label: Control Plane Configuration
+description: Self assessment guide to validate control plane configuration
+---
+
+This section covers cluster-wide security areas, including:
+- Authentication and authorization mechanisms
+- Audit logging configuration
+
+_Assessment focus for vCluster_: Key areas include verifying that audit logging is enabled and that alternative authentication mechanisms such as OIDC are used.
+
+
+## 3.1 Authentication and Authorization
+
+### 3.1.1 Client certificate authentication should not be used for users (Manual)
+**Result:** WARN
+
+**Remediation:** Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented in place of client certificates.
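+
+For example, OIDC can be enabled on the virtual API server via extraArgs. The following is a minimal sketch; the issuer URL, client ID, and claim names are placeholders that must be replaced with your identity provider's values:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          # Placeholder values - replace with your identity provider's settings
+          - --oidc-issuer-url=https://idp.example.com
+          - --oidc-client-id=vcluster
+          - --oidc-username-claim=email
+          - --oidc-groups-claim=groups
+```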
+
+### 3.1.2 Service account token authentication should not be used for users (Manual)
+**Result:** WARN
+
+**Remediation:** Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented in place of service account tokens.
+
+### 3.1.3 Bootstrap token authentication should not be used for users (Manual)
+**Result:** WARN
+
+**Remediation:** Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented in place of bootstrap tokens.
+
+## 3.2 Logging
+
+### 3.2.1 Ensure that a minimal audit policy is created (Automated)
+**Result:** PASS
+
+**Remediation:**
+Follow the [Kubernetes Documentation](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/) and create a config map with a minimal audit policy:
+```yaml title="audit-config-configmap.yaml"
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: audit-config
+  namespace: vcluster-my-vcluster
+data:
+  audit-policy.yaml: |
+    apiVersion: audit.k8s.io/v1
+    kind: Policy
+    rules:
+      - level: Metadata
+```
+
+Pass the below configuration as arguments to the API Server while creating the vCluster as:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+  statefulSet:
+    persistence:
+      addVolumes:
+        - name: audit-policy
+          configMap:
+            name: audit-config
+      addVolumeMounts:
+        - name: audit-policy
+          mountPath: /etc/kubernetes
+```
+
+**Audit:**
+Create the vCluster using the above values file.
+```bash
+vcluster create my-vcluster -f vcluster.yaml --connect=false
+```
+
+Run the following command against the vCluster pod:
+```bash
+kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
+```
+Verify that the --audit-policy-file is set.
+
+**Expected Result:**
+```
+'--audit-policy-file' is present
+```
+
+**Returned Value:**
+```bash
+26 root 0:08 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+**Result:** WARN
+
+**Remediation:** Review the audit policy provided for the cluster and ensure that it covers at least the following areas:
+- Access to Secrets managed by the cluster. Care should be taken to only log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, to avoid the risk of logging sensitive data.
+- Modification of pod and deployment objects.
+- Use of pods/exec, pods/portforward, pods/proxy, and services/proxy.
+
+For most requests, logging at the Metadata level (the most basic level of logging) is recommended.
+Consider modification of the audit policy in use on the cluster to include these items, at a minimum. A sample policy that satisfies these criteria is shown below.
+
+```yaml title="audit-config-configmap.yaml"
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: audit-config
+  namespace: vcluster-my-vcluster
+data:
+  audit-policy.yaml: |
+    apiVersion: audit.k8s.io/v1
+    kind: Policy
+    rules:
+      - level: Metadata
+        resources:
+          - group: ""
+            resources: ["secrets", "configmaps"]
+          - group: "authentication.k8s.io"
+            resources: ["tokenreviews"]
+      - level: RequestResponse
+        resources:
+          - group: ""
+            resources: ["pods"]
+          - group: "apps"
+            resources: ["deployments", "replicasets"]
+        verbs: ["create", "update", "patch", "delete"]
+      - level: RequestResponse
+        resources:
+          - group: ""
+            resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
+      - level: Metadata
+        omitStages:
+          - RequestReceived
+```
+
diff --git a/vcluster/learn-how-to/hardening-guide/4-worker-node.mdx b/vcluster/learn-how-to/hardening-guide/4-worker-node.mdx
new file mode 100644
index 000000000..dda64f818
--- /dev/null
+++ b/vcluster/learn-how-to/hardening-guide/4-worker-node.mdx
@@ -0,0 +1,306 @@
+---
+title: Self assessment guide - Worker Node Configuration
+sidebar_label: Worker Node Configuration
+description: Self assessment guide to validate Worker Node configuration
+---
+
+This section provides security recommendations for components running on Kubernetes worker nodes:
+- Kubelet configuration and security
+- File system permissions
+
+_Assessment focus for vCluster_: Since vCluster uses the host cluster's nodes, this section's requirements are primarily inherited from the host cluster's configuration. Verification should focus on ensuring the host cluster meets these requirements.
+
+
+## 4.1 Worker Node Configuration Files
+
+### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** Run the below command (based on the file location on your system) on each worker node. For example,
+```bash
+chmod 600 /etc/systemd/system/kubelet.service.d/kubeadm.conf
+```
+
+### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** Run the below command (based on the file location on your system) on each worker node. For example,
+```bash
+chown root:root /etc/systemd/system/kubelet.service.d/kubeadm.conf
+```
+
+### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** Run the below command (based on the file location on your system) on each worker node. For example,
+```bash
+chmod 600 <proxy kubeconfig file>
+```
+
+### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** Run the below command (based on the file location on your system) on each worker node. For example,
+```bash
+chown root:root <proxy kubeconfig file>
+```
+
+### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** Run the below command (based on the file location on your system) on each worker node. For example,
+```bash
+chmod 600 /etc/kubernetes/kubelet.conf
+```
+
+### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** Run the below command (based on the file location on your system) on each worker node. For example,
+```bash
+chown root:root /etc/kubernetes/kubelet.conf
+```
+
+### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)
+
+**Result:** NOT APPLICABLE
+
+
+**Remediation:** Run the following command to modify the file permissions of the `--client-ca-file`:
+```bash
+chmod 600 <filename>
+```
+
+### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** Run the following command to modify the ownership of the `--client-ca-file`.
+```bash
+chown root:root <filename>
+```
+
+### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 600 or more restrictive (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** Run the following command (using the config file location identified in the Audit step)
+```bash
+chmod 600 /var/lib/kubelet/config.yaml
+```
+
+### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** Run the following command (using the config file location identified in the Audit step)
+```bash
+chown root:root /var/lib/kubelet/config.yaml
+```
+
+## 4.2 Kubelet
+### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to `false`. If using executable arguments, edit the kubelet service file `/etc/kubernetes/kubelet.conf` on each worker node and set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable.
+```bash
+--anonymous-auth=false
+```
+Based on your system, restart the kubelet service. For example:
+```bash
+systemctl daemon-reload
+systemctl restart kubelet.service
+```
+
+### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** If using a Kubelet config file, edit the file to set `authorization: mode` to `Webhook`.
+If using executable arguments, edit the kubelet service file `/etc/kubernetes/kubelet.conf` on each worker node and set the below parameter in `KUBELET_AUTHZ_ARGS` variable.
+```bash
+--authorization-mode=Webhook
+```
+Based on your system, restart the `kubelet` service. For example:
+```bash
+systemctl daemon-reload
+systemctl restart kubelet.service
+```
+
+### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** If using a Kubelet config file, edit the file to set `authentication: x509: clientCAFile` to the location of the client CA file.
+If using command line arguments, edit the kubelet service file `/etc/kubernetes/kubelet.conf` on each worker node and set the below parameter in `KUBELET_AUTHZ_ARGS` variable.
+```bash
+--client-ca-file=<path/to/client-ca-file>
+```
+Based on your system, restart the kubelet service.
For example:
+```bash
+systemctl daemon-reload
+systemctl restart kubelet.service
+```
+
+
+### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Automated)
+
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** If using a Kubelet config file, edit the file to set `readOnlyPort` to `0`.
+If using command line arguments, edit the kubelet service file `/etc/kubernetes/kubelet.conf` on each worker node and set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable.
+```bash
+--read-only-port=0
+```
+Based on your system, restart the `kubelet` service. For example:
+```bash
+systemctl daemon-reload
+systemctl restart kubelet.service
+```
+### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a value other than 0.
+If using command line arguments, edit the kubelet service file `/etc/kubernetes/kubelet.conf` on each worker node and set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable.
+```bash
+--streaming-connection-idle-timeout=5m
+```
+Based on your system, restart the `kubelet` service. For example:
+```bash
+systemctl daemon-reload
+systemctl restart kubelet.service
+```
+
+### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains: true`.
+If using command line arguments, edit the kubelet service file `/etc/kubernetes/kubelet.conf` on each worker node and remove the `--make-iptables-util-chains` argument from the `KUBELET_SYSTEM_PODS_ARGS` variable.
+
+Based on your system, restart the `kubelet` service. For example:
+```bash
+systemctl daemon-reload
+systemctl restart kubelet.service
+```
+
+### 4.2.7 Ensure that the --hostname-override argument is not set (Manual)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** Edit the kubelet service file `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and remove the `--hostname-override` argument from the `KUBELET_SYSTEM_PODS_ARGS` variable.
+Based on your system, restart the `kubelet` service. For example:
+```bash
+systemctl daemon-reload
+systemctl restart kubelet.service
+```
+
+### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** If using a Kubelet config file, edit the file to set `eventRecordQPS:` to an appropriate level.
+If using command line arguments, edit the kubelet service file `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and set the `--event-qps` parameter in the `KUBELET_ARGS` variable to an appropriate level.
+Based on your system, restart the `kubelet` service. For example:
+```bash
+systemctl daemon-reload
+systemctl restart kubelet.service
+```
+
+### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** If using a Kubelet config file, edit the file to set `tlsCertFile` to the location of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` to the location of the corresponding private key file.
+If using command line arguments, edit the kubelet service file `/etc/kubernetes/kubelet.conf` on each worker node and set the below parameters in `KUBELET_CERTIFICATE_ARGS` variable.
+```bash
+--tls-cert-file=<path/to/tls-certificate-file> --tls-private-key-file=<path/to/tls-key-file>
+```
+Based on your system, restart the kubelet service. For example:
+```bash
+systemctl daemon-reload
+systemctl restart kubelet.service
+```
+
+### 4.2.10 Ensure that the --rotate-certificates argument is not set to false (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** If using a Kubelet config file, edit the file to add the line `rotateCertificates: true` or remove it altogether to use the default value.
+If using command line arguments, edit the kubelet service file `/etc/kubernetes/kubelet.conf` on each worker node and remove the `--rotate-certificates=false` argument from the `KUBELET_CERTIFICATE_ARGS` variable or set `--rotate-certificates=true`.
+Based on your system, restart the `kubelet` service. For example:
+```bash
+systemctl daemon-reload
+systemctl restart kubelet.service
+```
+
+
+### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** Edit the kubelet service file `/etc/kubernetes/kubelet.conf` on each worker node and set the below parameter in `KUBELET_CERTIFICATE_ARGS` variable.
+```bash
+--feature-gates=RotateKubeletServerCertificate=true
+```
+Based on your system, restart the `kubelet` service. For example:
+```bash
+systemctl daemon-reload
+systemctl restart kubelet.service
+```
+
+
+### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
+
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** If using a Kubelet config file, edit the file to set `TLSCipherSuites:` to
+```
+TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+```
+or to a subset of these values.
+If using executable arguments, edit the kubelet service file `/etc/kubernetes/kubelet.conf` on each worker node and set the `--tls-cipher-suites` parameter as follows, or to a subset of these values.
+```bash
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+```
+Based on your system, restart the `kubelet` service. For example:
+```bash
+systemctl daemon-reload
+systemctl restart kubelet.service
+```
+
+
+### 4.2.13 Ensure that a limit is set on pod PIDs (Manual)
+
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** Decide on an appropriate level for this parameter and set it, either via the `--pod-max-pids` command line parameter or the `PodPidsLimit` configuration file setting.
+
+
+## 4.3 kube-proxy
+
+### 4.3.1 Ensure that the kube-proxy metrics service is bound to localhost (Automated)
+
+**Result:** NOT APPLICABLE
+
+**Remediation:** Modify or remove any values which bind the metrics service to a non-localhost address.
diff --git a/vcluster/learn-how-to/hardening-guide/5-policies.mdx b/vcluster/learn-how-to/hardening-guide/5-policies.mdx
new file mode 100644
index 000000000..858e2e778
--- /dev/null
+++ b/vcluster/learn-how-to/hardening-guide/5-policies.mdx
@@ -0,0 +1,391 @@
+---
+title: Self assessment guide - Kubernetes Policies
+sidebar_label: Kubernetes Policies
+description: Self assessment guide to validate Kubernetes policies
+---
+
+This section covers specific security policies for Kubernetes elements:
+- RBAC configuration and least privilege access
+- Pod Security Standards and security contexts
+- Network policies and CNI security
+
+_Assessment focus for vCluster_: This section is directly applicable to vCluster environments. Verification involves ensuring proper RBAC configurations are in place, Pod Security Standards are enforced, and network policies are appropriately configured within the virtual cluster.
+
+## Validation using kube-bench
+
+While most CIS Kubernetes Benchmark checks are not directly verifiable via kube-bench due to vCluster’s virtualized control plane architecture, the Policies section (Section 5) includes controls that are relevant and testable within the virtual cluster environment. These include namespace-level restrictions, pod security standards, and security contexts that can be enforced from within vCluster.
+
+To validate your vCluster’s compliance with this section, you can run kube-bench with a customized kubeconfig pointing to the vCluster API server. The following steps outline the procedure to perform the compliance check:
+
+**Step 1:** Create the vCluster but don't connect to it yet.
+```bash
+vcluster create my-vcluster --connect=false
+```
+
+**Step 2:** Wait for the vCluster to be ready and get its kubeconfig; then connect to it.
+```bash
+vcluster connect my-vcluster --server `kubectl get svc -n vcluster-my-vcluster my-vcluster -o jsonpath='{.spec.clusterIP}'` --print > kubeconfig.yaml && vcluster connect my-vcluster
+```
+
+**Step 3:** Create a dedicated namespace for the kube-bench job.
+```bash
+kubectl create namespace kube-bench
+```
+
+**Step 4:** Load the vCluster kubeconfig into a secret.
+```bash
+kubectl create secret generic my-kubeconfig-secret \
+  --from-file=kubeconfig=./kubeconfig.yaml \
+  -n kube-bench
+```
+
+**Step 5:** Harden service accounts (recommended)
+
+To maintain compliance with service account token mounting policies, disable automatic token mounting for all service accounts in the cluster:
+```bash
+for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
+  for sa in $(kubectl get sa -n $ns -o jsonpath='{.items[*].metadata.name}'); do
+    kubectl patch serviceaccount "$sa" \
+      -p '{"automountServiceAccountToken": false}' \
+      -n "$ns"
+  done
+done
+```
+
+This is aligned with CIS 5.1.5 & 5.1.6 and ensures that tokens are not unnecessarily exposed.
+
+**Step 6:** Run the kube-bench job.
+
+Deploy a short-lived job to run kube-bench using the kubeconfig you mounted. The manifest below is a minimal sketch; the image, the `policies` target, and the mount path are assumptions you should adapt to your environment.
+```bash
+kubectl apply -f - <<EOF
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: kube-bench
+  namespace: kube-bench
+spec:
+  backoffLimit: 0
+  template:
+    spec:
+      restartPolicy: Never
+      containers:
+        - name: kube-bench
+          # Assumes the upstream kube-bench image; pin a specific tag in production
+          image: docker.io/aquasec/kube-bench:latest
+          command: ["kube-bench", "run", "--targets", "policies"]
+          env:
+            # kube-bench reaches the vCluster API server through the mounted kubeconfig
+            - name: KUBECONFIG
+              value: /kubeconfig/kubeconfig
+          volumeMounts:
+            - name: kubeconfig
+              mountPath: /kubeconfig
+              readOnly: true
+      volumes:
+        - name: kubeconfig
+          secret:
+            secretName: my-kubeconfig-secret
+EOF
+```
+
+Once the job completes, review the results with `kubectl logs -n kube-bench job/kube-bench`.
+
+## 5.1 RBAC and Service Accounts
+
+### 5.1.2 Minimize access to secrets (Manual)
+
+**Result:** PASS
+
+**Remediation:** Where possible, remove `get`, `list` and `watch` access to Secret objects in the cluster.
+
+### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
+
+**Result:** PASS
+
+**Remediation:** Where possible, replace any use of wildcards in clusterroles and roles with specific objects or actions.
+
+### 5.1.4 Minimize access to create pods (Manual)
+
+**Result:** PASS
+
+**Remediation:** Where possible, remove `create` access to `pod` objects in the cluster.
+
+### 5.1.5 Ensure that default service accounts are not actively used. (Manual)
+
+**Result:** PASS
+
+**Remediation:** Create explicit service accounts wherever a Kubernetes workload requires specific access to the Kubernetes API server. Modify the configuration of each default service account to include this value:
+```
+automountServiceAccountToken: false
+```
+
+### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
+
+**Result:** PASS
+
+**Remediation:** Modify the definition of pods and service accounts which do not need to mount service account tokens to disable it.
+
+### 5.1.7 Avoid use of system:masters group (Manual)
+
+**Result:** WARN
+
+**Remediation:** Remove the `system:masters` group from all users in the cluster.
+
+### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
+
+**Result:** WARN
+
+**Remediation:** Where possible, remove the impersonate, bind and escalate rights from subjects.
+
+### 5.1.9 Minimize access to create persistent volumes (Manual)
+
+**Result:** WARN
+
+**Remediation:** Where possible, remove `create` access to PersistentVolume objects in the cluster.
+
+### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual)
+
+**Result:** WARN
+
+**Remediation:** Where possible, remove access to the `proxy` sub-resource of `node` objects.
+
+### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)
+
+**Result:** WARN
+
+**Remediation:** Where possible, remove access to the `approval` sub-resource of `certificatesigningrequest` objects.
+
+### 5.1.12 Minimize access to webhook configuration objects (Manual)
+
+**Result:** WARN
+
+**Remediation:** Where possible, remove access to the `validatingwebhookconfigurations` or `mutatingwebhookconfigurations` objects.
+
+### 5.1.13 Minimize access to the service account token creation (Manual)
+
+**Result:** WARN
+
+**Remediation:** Where possible, remove access to the `token` sub-resource of `serviceaccount` objects.
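+
+For the 5.1.x access checks above, `kubectl auth can-i` is a convenient way to spot-check what a given subject can do inside the vCluster. A sketch (the service account name is only an example):
+```bash
+# Can the default service account read secrets? (5.1.2)
+kubectl auth can-i get secrets --as=system:serviceaccount:default:default
+# Can it create service account tokens? (5.1.13)
+kubectl auth can-i create serviceaccounts/token --as=system:serviceaccount:default:default
+```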
+
+## 5.2 Pod Security Standards
+
+### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
+
+**Result:** WARN
+
+**Remediation:** Ensure that either Pod Security Admission or an external policy control system is in place for every namespace which contains user workloads.
+
+### 5.2.2 Minimize the admission of privileged containers (Manual)
+
+**Result:** PASS
+
+**Remediation:** Add policies to each namespace in the cluster which has user workloads to restrict the admission of privileged containers.
+
+### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
+
+**Result:** PASS
+
+**Remediation:** Add policies to each namespace in the cluster which has user workloads to restrict the admission of `hostPID` containers.
+
+### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
+
+**Result:** PASS
+
+**Remediation:** Add policies to each namespace in the cluster which has user workloads to restrict the admission of `hostIPC` containers.
+
+### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
+
+**Result:** PASS
+
+**Remediation:** Add policies to each namespace in the cluster which has user workloads to restrict the admission of `hostNetwork` containers.
+
+### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
+
+**Result:** PASS
+
+**Remediation:** Add policies to each namespace in the cluster which has user workloads to restrict the admission of containers with `.spec.allowPrivilegeEscalation` set to true.
+
+### 5.2.7 Minimize the admission of root containers (Automated)
+
+**Result:** WARN
+
+**Remediation:** Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot` or `MustRunAs` with the range of UIDs not including 0, is set.
+
+### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)
+
+**Result:** WARN
+
+**Remediation:** Add policies to each namespace in the cluster which has user workloads to restrict the admission of containers with the `NET_RAW` capability.
+
+### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
+
+**Result:** WARN
+
+**Remediation:** Ensure that `allowedCapabilities` is not present in policies for the cluster unless it is set to an empty array.
+
+### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+**Result:** WARN
+
+**Remediation:** Review the use of capabilities in applications running on your cluster. Where a namespace contains applications which do not require any Linux capabilities to operate, consider adding a policy which forbids the admission of containers which do not drop all capabilities.
+
+### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+**Result:** WARN
+
+**Remediation:** Add policies to each namespace in the cluster which has user workloads to restrict the admission of containers that have `.securityContext.windowsOptions.hostProcess` set to true.
+
+### 5.2.12 Minimize the admission of HostPath volumes (Manual)
+
+**Result:** WARN
+
+**Remediation:** Add policies to each namespace in the cluster which has user workloads to restrict the admission of containers with `hostPath` volumes.
+
+### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
+
+**Result:** WARN
+
+**Remediation:** Add policies to each namespace in the cluster which has user workloads to restrict the admission of containers which use `hostPort` sections.
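+
+All of the 5.2.x remediations above call for per-namespace admission policies. With the built-in Pod Security Admission controller, one way to express them is through namespace labels. The sketch below (the namespace name is hypothetical) enforces the `restricted` profile, which covers privileged containers, host namespaces, privilege escalation, and most of the other checks in this subsection:
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: team-a   # hypothetical namespace
+  labels:
+    # Reject pods that violate the restricted profile.
+    pod-security.kubernetes.io/enforce: restricted
+    # Also surface violations through warnings and audit annotations.
+    pod-security.kubernetes.io/warn: restricted
+    pod-security.kubernetes.io/audit: restricted
+```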
+
+## 5.3 Network Policies and CNI
+
+### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
+
+**Result:** WARN
+
+**Remediation:** If the CNI plugin in use does not support network policies, consideration should be given to making use of a different plugin, or finding an alternate mechanism for restricting traffic in the Kubernetes cluster.
+
+### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
+
+**Result:** WARN
+
+**Remediation:** Follow the documentation and create `NetworkPolicy` objects as you need them.
+
+## 5.4 Secrets Management
+
+### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
+
+**Result:** WARN
+
+**Remediation:** If possible, rewrite application code to read Secrets from mounted secret files, rather than from environment variables.
+
+### 5.4.2 Consider external secret storage (Manual)
+
+**Result:** WARN
+
+**Remediation:** Refer to the Secrets management options offered by your cloud provider or a third-party secrets management solution.
+
+## 5.5 Extensible Admission Control
+
+### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+**Result:** WARN
+
+**Remediation:** Follow the Kubernetes documentation and set up image provenance.
+
+## 5.7 General Policies
+
+### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
+
+**Result:** WARN
+
+**Remediation:** Follow the documentation and create namespaces for objects in your deployment as you need them.
+
+### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
+
+**Result:** WARN
+
+**Remediation:** Use securityContext to enable the docker/default seccomp profile in your pod definitions. An example is shown below:
+```yaml
+securityContext:
+  seccompProfile:
+    type: RuntimeDefault
+```
+
+### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
+
+**Result:** WARN
+
+**Remediation:** Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker Containers.
+
+### 5.7.4 The default namespace should not be used (Manual)
+
+**Result:** WARN
+
+**Remediation:** Ensure that namespaces are created to allow for appropriate segregation of Kubernetes resources and that all new resources are created in a specific namespace.
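+
+Several of the controls above converge on the same workflow: segregate workloads into dedicated namespaces (5.7.1, 5.7.4) and give each namespace a restrictive network default (5.3.2). A minimal sketch, assuming a hypothetical `team-a` namespace and a CNI that supports NetworkPolicies (5.3.1):
+```yaml
+# Default-deny policy: the empty podSelector matches every pod in the
+# namespace, and listing both policy types with no rules denies all
+# ingress and egress traffic unless a more specific policy allows it.
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-all
+  namespace: team-a
+spec:
+  podSelector: {}
+  policyTypes:
+    - Ingress
+    - Egress
+```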
diff --git a/vcluster/learn-how-to/hardening-guide/README.mdx b/vcluster/learn-how-to/hardening-guide/README.mdx
new file mode 100644
index 000000000..8da73c55b
--- /dev/null
+++ b/vcluster/learn-how-to/hardening-guide/README.mdx
@@ -0,0 +1,324 @@
+---
+title: CIS Hardening Guide
+sidebar_label: Harden your vCluster for the CIS Benchmark
+description: How to harden your vCluster for the CIS Benchmark
+---
+
+# CIS Hardening Guide
+
+This document outlines a set of hardening guidelines for securing vCluster deployments in alignment with the [Center for Internet Security (CIS) Kubernetes Benchmark](https://www.cisecurity.org/benchmark/kubernetes). It is intended for platform engineers, DevOps, and security teams responsible for managing and securing virtual Kubernetes clusters deployed in a multi-tenant or production environment. It is also meant to help you evaluate the security level of the hardened cluster against each control in the benchmark.
+
+vCluster provides a lightweight, namespaced Kubernetes control plane that runs within a host Kubernetes cluster. This architecture introduces specific security considerations, as some components are virtualized while others remain dependent on the underlying host infrastructure. As such, certain CIS benchmark controls may apply differently or require alternative implementations in the context of vCluster.
+
+The objectives of this guide are to:
+- Identify applicable CIS controls and best practices for vCluster environments,
+- Highlight configuration options that improve the security posture of deployed vClusters,
+- Distinguish between the responsibilities of the host cluster and vCluster with respect to benchmark compliance,
+- Recommend alternative techniques for evaluating and enforcing hardening policies.
+
+By following the practices outlined in this guide, organizations can improve the security and compliance readiness of their vCluster deployments while maintaining flexibility and operational efficiency.
+
+:::note
+This guide assumes that the host Kubernetes cluster is already hardened in accordance with the CIS benchmark. The focus here is on securing the vCluster environment itself, including its API server, control plane components, and access controls.
+:::
+
+This guide is intended to be used with the following versions of the CIS Kubernetes Benchmark, Kubernetes, and vCluster:
+
+| vCluster Version | CIS Benchmark Version | Kubernetes Version |
+| --- | --- | --- |
+| v0.26.0 | v1.10.0 | v1.31 |
+
+:::important
+This guide currently supports the K8S distribution for the vCluster API Server and Embedded Etcd as the backing store.
+Other Kubernetes distributions or backing store setups are **not covered** and may require different hardening approaches.
+:::
+
+## vCluster runtime requirements
+
+### Configure `default` service account
+
+**Set `automountServiceAccountToken: false` for `default` service accounts in all namespaces**
+
+Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.
+
+Hence, the default service account in every namespace must include this value:
+```yaml
+automountServiceAccountToken: false
+```
+
+The following command disables token automounting for every service account in the virtual cluster, which covers the default service accounts. Run it in the virtual cluster context:
+```bash
+for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
+  for sa in $(kubectl get sa -n $ns -o jsonpath='{.items[*].metadata.name}'); do
+    kubectl patch serviceaccount "$sa" \
+      -p '{"automountServiceAccountToken": false}' \
+      -n "$ns"
+  done
+done
+```
+
+### API Server audit configuration
+
+**Enable auditing on the Kubernetes API Server and set the desired audit log path**
+
+Control 1.2.16 of the benchmark recommends enabling auditing on the Kubernetes API server. Auditing provides a security-relevant, chronological set of records documenting the sequence of activities that have affected the system, whether initiated by individual users, administrators, or other components of the system. A minimal audit policy must also be in place for auditing to be carried out.
+
+Create a ConfigMap with a minimal audit policy.
+```yaml title="audit-config-configmap.yaml"
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: audit-config
+  namespace: vcluster-my-vcluster
+data:
+  audit-policy.yaml: |
+    apiVersion: audit.k8s.io/v1
+    kind: Policy
+    rules:
+      - level: Metadata
+```
+
+Pass the audit policy and log settings as arguments to the API server when creating the vCluster:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+          - --audit-log-path=/var/log/audit.log
+          - --audit-log-maxage=30
+          - --audit-log-maxbackup=10
+          - --audit-log-maxsize=100
+  statefulSet:
+    persistence:
+      addVolumes:
+        - name: audit-log
+          emptyDir: {}
+        - name: audit-policy
+          configMap:
+            name: audit-config
+      addVolumeMounts:
+        - name: audit-log
+          mountPath: /var/log
+        - name: audit-policy
+          mountPath: /etc/kubernetes/audit-policy.yaml
+          subPath: audit-policy.yaml
+```
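+
+Since a typo in `extraArgs` usually prevents the virtual API server from starting, it is worth confirming that audit records are actually being written. One possible spot check from the host cluster context (the namespace and StatefulSet name follow the examples in this guide; adjust them to your deployment):
+```bash
+# Tail the audit log inside the vCluster control plane pod.
+kubectl exec -n vcluster-my-vcluster sts/my-vcluster -- tail -n 5 /var/log/audit.log
+```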
+
+### API Server encryption configuration
+
+**Ensure that encryption providers are appropriately configured**
+
+Where etcd encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, `aescbc`, `kms`, and `secretbox` are likely to be appropriate options.
+
+Generate a 32-byte (256-bit) key using the command below:
+```bash
+head -c 32 /dev/urandom | base64
+```
+
+Create an encryption configuration file containing the base64-encoded key generated previously:
+
+```yaml title="encryption-config.yaml"
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key1
+              secret: <BASE64_ENCODED_KEY> # replace with the key generated above
+      - identity: {}
+```
+
+Create a secret in the vCluster namespace from the configuration file.
+```bash
+kubectl create secret generic encryption-config --from-file=encryption-config.yaml -n vcluster-my-vcluster
+```
+
+Finally, create the vCluster, referencing the secret:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --encryption-provider-config=/etc/encryption/encryption-config.yaml
+  statefulSet:
+    persistence:
+      addVolumes:
+        - name: encryption-config
+          secret:
+            secretName: encryption-config
+      addVolumeMounts:
+        - name: encryption-config
+          mountPath: /etc/encryption
+          readOnly: true
+```
+
+### API Server setting EventRateLimit
+
+**Ensure that the admission control plugin EventRateLimit is set**
+
+The EventRateLimit admission controller enforces a limit on the number of events that the API server will accept in a given time slice. A misbehaving workload could otherwise overwhelm and DoS the API server, making it unavailable. This particularly applies to multi-tenant clusters, where a small percentage of misbehaving tenants could have a significant impact on the performance of the cluster overall. Hence, it is recommended to limit the rate of events that the API server will accept.
+
+Create a ConfigMap in the vCluster namespace that contains the configuration file.
+```yaml title="admission-control.yaml"
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: admission-control
+  namespace: vcluster-my-vcluster
+data:
+  admission-control.yaml: |
+    apiVersion: apiserver.config.k8s.io/v1
+    kind: AdmissionConfiguration
+    plugins:
+      - name: EventRateLimit
+        configuration:
+          apiVersion: eventratelimit.admission.k8s.io/v1alpha1
+          kind: Configuration
+          limits:
+            - type: Server
+              qps: 50
+              burst: 100
+```
+
+Finally, create the vCluster, referencing the ConfigMap:
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --enable-admission-plugins=EventRateLimit
+          - --admission-control-config-file=/etc/kubernetes/admission-control.yaml
+  statefulSet:
+    persistence:
+      addVolumes:
+        - name: admission-control
+          configMap:
+            name: admission-control
+      addVolumeMounts:
+        - name: admission-control
+          mountPath: /etc/kubernetes/admission-control.yaml
+          subPath: admission-control.yaml
+```
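+
+As with the audit configuration, a quick health check after recreating the vCluster helps verify that the API server accepted the new flags (assuming the vCluster name used throughout this guide):
+```bash
+# Probe the virtual API server's readiness endpoint through the vCluster CLI.
+vcluster connect my-vcluster -- kubectl get --raw='/readyz?verbose'
+```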
+
+## Reference hardened vCluster configuration
+
+Below is a reference vCluster values file with the minimum configuration required for a hardened installation. Use it in combination with the runtime-level settings above and the other measures described in this guide to achieve full compliance.
+
+```yaml title="vcluster.yaml"
+controlPlane:
+  distro:
+    k8s:
+      enabled: true
+      apiServer:
+        extraArgs:
+          - --admission-control-config-file=/etc/kubernetes/admission-control.yaml
+          - --anonymous-auth=false
+          - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+          - --audit-log-path=/var/log/audit.log
+          - --audit-log-maxage=30
+          - --audit-log-maxbackup=10
+          - --audit-log-maxsize=100
+          - --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,EventRateLimit,NodeRestriction
+          - --encryption-provider-config=/etc/encryption/encryption-config.yaml
+          - --request-timeout=300s
+          - --service-account-lookup=true
+          - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+      controllerManager:
+        extraArgs:
+          - --terminated-pod-gc-threshold=12500
+          - --profiling=false
+      scheduler:
+        extraArgs:
+          - --profiling=false
+  advanced:
+    virtualScheduler:
+      enabled: true
+  backingStore:
+    etcd:
+      embedded:
+        enabled: true
+  statefulSet:
+    persistence:
+      addVolumes:
+        - name: audit-log
+          emptyDir: {}
+        - name: audit-policy
+          configMap:
+            name: audit-config
+        - name: encryption-config
+          secret:
+            secretName: encryption-config
+        - name: admission-control
+          configMap:
+            name: admission-control
+      addVolumeMounts:
+        - name: audit-log
+          mountPath: /var/log
+        - name: audit-policy
+          mountPath: /etc/kubernetes/audit-policy.yaml
+          subPath: audit-policy.yaml
+        - name: encryption-config
+          mountPath: /etc/encryption
+          readOnly: true
+        - name: admission-control
+          mountPath: /etc/kubernetes/admission-control.yaml
+          subPath: admission-control.yaml
+sync:
+  fromHost:
+    nodes:
+      enabled: true
+```
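+
+To deploy a vCluster with this configuration, after creating the `audit-config` ConfigMap, `encryption-config` Secret, and `admission-control` ConfigMap described above, pass the file to the CLI:
+```bash
+vcluster create my-vcluster --connect=false -f vcluster.yaml
+```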
+
+## Assessment guides
+
+If you have followed this guide, your vCluster should be configured to pass the applicable checks of the CIS Kubernetes Benchmark. Review the guides below to understand what each benchmark check expects and how to evaluate it on your own cluster.
+While not every control applies directly, due to the namespaced and virtualized nature of vCluster, the recommended practices in this document address the subset of controls that are applicable or adaptable.
+
+:::important Note on Validation Tools
+While tools like [kube-bench](https://github.com/aquasecurity/kube-bench) are commonly used to validate CIS benchmark compliance in traditional Kubernetes deployments, they cannot be applied wholesale to vCluster environments due to fundamental differences in how vCluster operates. vCluster's virtualized control plane architecture means that many components run as containers within the host cluster rather than as traditional system processes, making standard automated assessment tools incompatible with this deployment model.
+
+For vCluster environments, manual verification of controls and custom assessment approaches are necessary to ensure compliance with the benchmark requirements.
+:::
+
+The following sections provide detailed assessment guidance for each area of the CIS Kubernetes Benchmark:
+
+- [**Section 1: Control Plane Security Configuration**](./control-plane-components)
+
+- [**Section 2: Etcd Node Configuration**](./etcd)
+
+- [**Section 3: Control Plane Configuration**](./control-plane)
+
+- [**Section 4: Worker Node Security Configuration**](./worker-node)
+
+- [**Section 5: Kubernetes Policies**](./policies)
+
+### Test controls methodology
+
+Each control in the CIS Kubernetes Benchmark was evaluated against a vCluster configured according to this hardening guide.
+
+Where control audits differ from the original CIS benchmark, the audit commands specific to vCluster are provided for testing.
+
+These are the possible results for each control:
+
+- PASS - The vCluster under test passed the audit outlined in the benchmark.
+- NOT APPLICABLE - The control is not applicable to vCluster because of how it is designed to operate. The remediation section contains an explanation of why this is so.
+- WARN - The control is manual in the CIS benchmark and depends on the cluster's use case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure vCluster does not prevent their implementation, but no further configuration or auditing of the cluster under test has been performed.
+
+By reviewing these assessment sections and mapping them to your vCluster configuration, you can ensure that your hardened deployment not only follows security best practices but is also auditable and defensible in a compliance context.
\ No newline at end of file
diff --git a/vcluster/learn-how-to/hardening-guide/_category_.json b/vcluster/learn-how-to/hardening-guide/_category_.json
new file mode 100644
index 000000000..9c6610636
--- /dev/null
+++ b/vcluster/learn-how-to/hardening-guide/_category_.json
@@ -0,0 +1,5 @@
+{
+  "label": "CIS Hardening Guide",
+  "collapsible": true,
+  "collapsed": true
+}