Openshift Updates #112

171 changes: 168 additions & 3 deletions admin_guide/install/fragments/install_defender_twistcli_export_oc.adoc
== Install Defender

Defender is installed as a DaemonSet, which ensures that an instance of Defender runs on every node in the cluster.
Use _twistcli_ to generate a YAML configuration file or Helm chart for the Defender DaemonSet, then deploy it using _oc_ or _kubectl_.
You can use the same method to deploy Defender DaemonSets from both macOS and Linux kubectl-enabled cluster controllers.

The benefit of declarative object management, where you work directly with YAML configuration files, is that you get the full "source code" for the objects you create in your cluster.
You can use a version control tool to manage and track modifications to config files so that you can delete and reliably recreate DaemonSets in your environment.
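That workflow can be sketched end to end (illustrative only; the stub _defender.yaml_ and the git identity flags are stand-ins, not real _twistcli_ output):

```shell
# Track a generated Defender DaemonSet config in version control (sketch).
set -e
cd "$(mktemp -d)"
git init -q
# Stand-in for the file twistcli would generate:
cat > defender.yaml <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: twistlock-defender-ds
  namespace: twistlock
EOF
git add defender.yaml
git -c user.name=docs -c user.email=docs@example.com \
    commit -qm "Add Defender DaemonSet config"
# Every change to the config is now recorded and reversible:
git log --oneline
```

With the file under version control, deleting and recreating the DaemonSet is just `oc delete -f defender.yaml` followed by `oc create -f defender.yaml` against a known-good revision.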

If you don't have kubectl access to your cluster (or oc access for OpenShift), you can deploy Defender DaemonSets directly from the xref:../install/install_defender/install_cluster_container_defender.adoc[Console UI].

NOTE: The following procedure shows you how to deploy Defender DaemonSets with twistcli using declarative object management.
Alternatively, you can generate Defender DaemonSet install commands in the Console UI under *Manage > Defenders > Deploy > DaemonSet*.
It is simply the host part of the URL.

.. Copy the address from *1* (*The name that clients and Defenders use to access this Console*).

=== Deployment via Kubernetes YAML files

Use the _twistcli defender export_ command to generate native Kubernetes YAML files for deploying Defender as a DaemonSet.
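
For orientation, the generated _defender.yaml_ centers on a DaemonSet object along the following lines (abridged, illustrative sketch; the names reflect the typical _twistlock_ defaults, and the real file also contains the full container spec, secrets, and tolerations):

[source,yaml]
----
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: twistlock-defender-ds
  namespace: twistlock
spec:
  selector:
    matchLabels:
      app: twistlock-defender
  template:
    metadata:
      labels:
        app: twistlock-defender
    spec:
      # One Defender pod is scheduled on every node in the cluster.
      containers:
      - name: twistlock-defender
        # The image reference is filled in by twistcli at generation time.
        image: <DEFENDER_IMAGE>
----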

==== OpenShift 3.9

. Generate a _defender.yaml_ file, where:
+
The following command connects to Console's API (specified in _--address_) as user <ADMIN> (specified in _--user_), and generates a Defender DaemonSet YAML config file according to the configuration options passed to _twistcli_.
The _--cluster-address_ option specifies the address Defender uses to connect to Console, or Console's service address.

$ oc create -f ./defender.yaml

==== OpenShift 4

. Generate a _defender.yaml_ file, where:
+
The following command connects to Console's API (specified in _--address_) as user <ADMIN> (specified in _--user_), and generates a Defender DaemonSet YAML config file according to the configuration options passed to _twistcli_.
The _--cluster-address_ option specifies the address Defender uses to connect to Console, or Console's service address.
+
$ <PLATFORM>/twistcli defender export openshift \
--address <PRISMA_CLOUD_COMPUTE_CONSOLE_API_ADDR> \
--user <ADMIN_USER> \
--cluster-address <PRISMA_CLOUD_COMPUTE_SVC_ADDR> \
--cri
+
* <PLATFORM> can be linux, osx, or windows.
* <ADMIN_USER> is the name of a Prisma Cloud user with the System Admin role.

. Deploy the Defender DaemonSet.

$ oc create -f ./defender.yaml
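
After the DaemonSet is created, you can confirm from the command line that a Defender pod is scheduled on every node (sketch; assumes the default _twistlock_ namespace and DaemonSet name used by the generated file):

 $ oc get ds -n twistlock
 $ oc get pods -n twistlock -o wide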

=== Deployment via Helm chart

. Generate the Defender DaemonSet Helm chart.
A number of command variations are provided; use them as a basis for constructing your own working command.
Each of the following commands connects to Console's API (specified in _--address_) as user <ADMIN> (specified in _--user_) and generates a Defender DaemonSet Helm chart according to the configuration options passed to _twistcli_.
The _--cluster-address_ option specifies the address Defender uses to connect to Console, or Console's service address.
+
*OpenShift 3.9: Outside the OpenShift cluster + pull the Defender image from the Prisma Cloud cloud registry.*
Use the OpenShift external route for your Prisma Cloud Console, _--address \https://twistlock-console.apps.ose.example.com_.
Designate Prisma Cloud's cloud registry by omitting the _--image-name_ flag.

$ <PLATFORM>/twistcli defender export openshift \
--address <PRISMA_CLOUD_COMPUTE_CONSOLE_API_ADDR> \
--user <ADMIN_USER> \
--cluster-address <PRISMA_CLOUD_COMPUTE_SVC_ADDR> \
--helm
+
*OpenShift 4: Outside the OpenShift cluster + pull the Defender image from the Prisma Cloud cloud registry.*
Use the OpenShift external route for your Prisma Cloud Console, _--address \https://twistlock-console.apps.ose.example.com_.
Designate Prisma Cloud's cloud registry by omitting the _--image-name_ flag.
Define CRI-O as the default container engine by using the _--cri_ flag.

$ <PLATFORM>/twistcli defender export openshift \
--address <PRISMA_CLOUD_COMPUTE_CONSOLE_API_ADDR> \
--user <ADMIN_USER> \
--cluster-address <PRISMA_CLOUD_COMPUTE_SVC_ADDR> \
--helm \
--cri
+
*OpenShift 3.9: Outside the OpenShift cluster + pull the Defender image from the OpenShift internal registry.*
Use the _--image-name_ flag to designate an image from the OpenShift internal registry.

$ <PLATFORM>/twistcli defender export openshift \
--address <PRISMA_CLOUD_COMPUTE_CONSOLE_API_ADDR> \
--user <ADMIN_USER> \
--cluster-address <PRISMA_CLOUD_COMPUTE_SVC_ADDR> \
--image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
--helm
+
*OpenShift 4: Outside the OpenShift cluster + pull the Defender image from the OpenShift internal registry.*
Use the _--image-name_ flag to designate an image from the OpenShift internal registry.
Define CRI-O as the default container engine by using the _--cri_ flag.

$ <PLATFORM>/twistcli defender export openshift \
--address <PRISMA_CLOUD_COMPUTE_CONSOLE_API_ADDR> \
--user <ADMIN_USER> \
--cluster-address <PRISMA_CLOUD_COMPUTE_SVC_ADDR> \
--image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
--helm \
--cri
+
*OpenShift 3.9: Inside the OpenShift cluster + pull the Defender image from the Prisma Cloud cloud registry.*
When generating the Defender DaemonSet YAML with twistcli from a node inside the cluster, use Console's service name (twistlock-console) or cluster IP in the _--cluster-address_ flag.
This flag specifies the endpoint for the Prisma Cloud Compute API and must include the port number.

$ <PLATFORM>/twistcli defender export openshift \
--address <PRISMA_CLOUD_COMPUTE_CONSOLE_API_ADDR> \
--user <ADMIN_USER> \
--cluster-address <PRISMA_CLOUD_COMPUTE_SVC_ADDR> \
--helm
+
*OpenShift 4: Inside the OpenShift cluster + pull the Defender image from the Prisma Cloud cloud registry.*
When generating the Defender DaemonSet YAML with twistcli from a node inside the cluster, use Console's service name (twistlock-console) or cluster IP in the _--cluster-address_ flag.
This flag specifies the endpoint for the Prisma Cloud Compute API and must include the port number.
Define CRI-O as the default container engine by using the _--cri_ flag.

$ <PLATFORM>/twistcli defender export openshift \
--address <PRISMA_CLOUD_COMPUTE_CONSOLE_API_ADDR> \
--user <ADMIN_USER> \
--cluster-address <PRISMA_CLOUD_COMPUTE_SVC_ADDR> \
--helm \
--cri
+
*OpenShift 3.9: Inside the OpenShift cluster + pull the Defender image from the OpenShift internal registry.*
Use the _--image-name_ flag to designate an image in the OpenShift internal registry.

$ <PLATFORM>/twistcli defender export openshift \
--address <PRISMA_CLOUD_COMPUTE_CONSOLE_API_ADDR> \
--user <ADMIN_USER> \
--cluster-address <PRISMA_CLOUD_COMPUTE_SVC_ADDR> \
--image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
--helm
+
*OpenShift 4: Inside the OpenShift cluster + pull the Defender image from the OpenShift internal registry.*
Use the _--image-name_ flag to designate an image in the OpenShift internal registry.
Define CRI-O as the default container engine by using the _--cri_ flag.

$ <PLATFORM>/twistcli defender export openshift \
--address <PRISMA_CLOUD_COMPUTE_CONSOLE_API_ADDR> \
--user <ADMIN_USER> \
--cluster-address <PRISMA_CLOUD_COMPUTE_SVC_ADDR> \
--image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
--helm \
--cri

==== OpenShift 3.9

Deploy the Helm chart using the _helm_ command.

$ helm install --namespace=twistlock twistlock-defender-helm.tar.gz

==== OpenShift 4
// https://github.com/twistlock/twistlock/issues/13333

Prisma Cloud Defenders Helm charts fail to install on OpenShift 4 clusters due to a Helm bug.
If you generate a Helm chart, and try to install it in an OpenShift 4 cluster, you'll get the following error:

Error: unable to recognize "": no matches for kind "SecurityContextConstraints" in version "v1"

To work around the issue, modify the generated Helm chart.

[.procedure]

. Unpack the chart into a temporary directory.

$ mkdir helm-defender
$ tar xvzf twistlock-defender-helm.tar.gz -C helm-defender/

. Open _helm-defender/twistlock-defender/templates/securitycontextconstraints.yaml_ for editing.

. Change `apiVersion` from `v1` to `security.openshift.io/v1`.
+
[source,yaml]
----
{{- if .Values.openshift }}
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: twistlock-console
...
----

. Repack the Helm chart.

$ cd helm-defender/
$ tar cvzf twistlock-defender-helm.tar.gz twistlock-defender/

. Install the new Helm chart using the _helm_ command.

$ helm install --namespace=twistlock -g twistlock-defender-helm.tar.gz
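
The unpack, edit, and repack steps above can also be scripted. A minimal sketch (the heredoc is a stand-in for the real template shipped in the generated chart; only the _apiVersion_ substitution mirrors the manual edit):

```shell
# Patch the SecurityContextConstraints apiVersion in a chart template (sketch).
set -e
cd "$(mktemp -d)"
mkdir -p twistlock-defender/templates
# Stand-in for the generated template file:
cat > twistlock-defender/templates/securitycontextconstraints.yaml <<'EOF'
{{- if .Values.openshift }}
apiVersion: v1
kind: SecurityContextConstraints
EOF
# Same change as the manual edit: v1 -> security.openshift.io/v1
sed -i 's|^apiVersion: v1$|apiVersion: security.openshift.io/v1|' \
  twistlock-defender/templates/securitycontextconstraints.yaml
# Repack the chart
tar czf twistlock-defender-helm.tar.gz twistlock-defender/
grep '^apiVersion:' twistlock-defender/templates/securitycontextconstraints.yaml
```

The final `grep` should show the patched `security.openshift.io/v1` apiVersion before you install the repacked archive.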



=== Confirm the Defenders were deployed

. In Prisma Cloud Console, go to *Compute > Manage > Defenders > Manage* to see a list of deployed Defenders.
+
78 changes: 54 additions & 24 deletions admin_guide/install/install_kubernetes.adoc
The _kubectl apply_ command lets you make https://kubernetes.io/docs/concepts/cl
=== Troubleshooting

[.section]
==== Pod Security Policy

If Pod Security Policy is enabled in your cluster, you might get the following error when trying to create a Defender DaemonSet.

Error creating: pods "twistlock-defender-ds-" is forbidden: unable to validate against any pod security policy ..Privileged containers are not allowed

If you get this error, then you must create a PodSecurityPolicy for Defender, plus the ClusterRole and ClusterRoleBinding needed to use it, so that Defender can run with the xref:system_requirements.adoc#kernel[privileges] it needs.
You can use the following example PodSecurityPolicy, ClusterRole, and ClusterRoleBinding:

.PodSecurityPolicy
[source,yaml]
----
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: prismacloudcompute-service
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  allowedCapabilities:
  - AUDIT_CONTROL
  - NET_ADMIN
  - SYS_ADMIN
  - SYS_PTRACE
  - MKNOD
  - SETFCAP
  volumes:
  - "hostPath"
  - "secret"
  allowedHostPaths:
  - pathPrefix: "/etc"
  - pathPrefix: "/var"
  - pathPrefix: "/run"
  - pathPrefix: "/dev/log"
  - pathPrefix: "/"
  hostNetwork: true
  hostPID: true
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
----

.ClusterRole
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prismacloudcompute-defender-role
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - prismacloudcompute-service
----

.ClusterRoleBinding
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prismacloudcompute-defender-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prismacloudcompute-defender-role
subjects:
- kind: ServiceAccount
  name: twistlock-service
----