k8s-conformance-test is missing securityContext for pod controller-manager, container manager #50

@mtulio

Description of problem:

The container manager of the pod controller-manager causes failures in the tests test_kubernetes_configuration_helm_operator and test_kubernetes_configuration_flux_operator.

Version-Release number of selected component (if applicable):

  • Kubernetes distribution:
    - Upstream Kubernetes Version: v1.22.3+e790d7f
    - K8s Distribution Version: OpenShift Container Platform 4.9.17
  • arck8sconformance.azurecr.io/arck8sconformance/clusterconnect:0.1.5
  • 1.5.2
  • 1.4.0
  • 1.3.8
  • sonobuoy
$ sonobuoy version
Sonobuoy Version: v0.56.0
MinimumKubeVersion: 1.17.0
MaximumKubeVersion: 1.99.99
GitSHA: 0665cd322b11bb40c2774776de765c38d8104bed

How reproducible:

Always

Steps to Reproduce:

  1. Run the k8s conformance test script: bash -x k8s-conformance-test-suite.sh
  2. Check the logs of the config-agent-XXX pod, container config-agent. It returns an error while waiting for the CRD [1]:
{"Message":"2022/02/21 19:26:00 Started Polling for local CRD Changes that needs to be reported to Azure",
..
{"Message":"error: Unable to get the status from the local CRD with the error : {Error : Retry for given duration didn't get any results with err {status not populated}}","LogType":"ConfigAgentTrace","LogLevel":"Error","Environment":"prod","Role":"ClusterConfigAgent","
  3. When looking at the controller logs, we can see that permissions are missing to run the fluxctl binary:
{"Message":"2022/02/21 19:34:38 open /data/fluxctl: permission denied"
  4. When securityContext.privileged: true is added to the container manager of the pod controller-manager-xyz, it works as expected:
 kubectl patch deployment.apps/controller-manager -n azure-arc \
    --type='json' \
    -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/securityContext", "value":{"privileged": true}}]'
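For debugging, the failing logs and the container's current securityContext can be inspected with standard kubectl commands. This is a sketch that assumes config-agent is backed by a Deployment of the same name; container index 0 is the manager container, as in the patch above:

```shell
# Tail the config-agent container logs to see the CRD polling error
kubectl logs -n azure-arc deployment/config-agent -c config-agent --tail=50

# Show the securityContext currently set on the manager container
# (empty output means no securityContext is set, which triggers the failure)
kubectl get deployment controller-manager -n azure-arc \
    -o jsonpath='{.spec.template.spec.containers[0].securityContext}'
```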

Actual results:

Expected results:

  • The pod is fixed so that the conformance tests run successfully

Desired:

  • The container does not need to run as privileged
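Once a proper fix ships, the privileged workaround above can be reverted with the inverse JSON patch. This is a sketch that assumes the securityContext on the manager container was only set by the workaround patch:

```shell
kubectl patch deployment.apps/controller-manager -n azure-arc \
    --type='json' \
    -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/securityContext"}]'
```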

Additional info:
