Version: | 1.3.0 |
---|---|
Revision: | Tuesday Sep 08, 2020 |
The following table lists the resolved issues for HPE CSI Driver for Kubernetes v1.3.0.
The following table lists the known issues for HPE CSI Driver for Kubernetes v1.3.0. Please note that known issues from previous releases still apply, along with their suggested workarounds, unless they are listed in the resolved issues above.
ID | Component | Title | Description | Workaround |
---|---|---|---|---|
CON-1107 | csi.k8s | Restoring NFS access after an extended backend outage | If PVCs are created with NFS access using the HPE CSI Driver, an NFS Server Provisioner instance is created for each PVC. Since these servers are backed by underlying FC/iSCSI disks, an extended outage (> 150s) on the backend causes the NFS servers to encounter I/O failures and stop serving NFS shares. Because all file handles are stale, the NFS servers do not automatically recover when access is restored later, and application pods can hang due to hard mounts | In these cases, restart the affected NFS Server Provisioner pods, or the node, to restore iSCSI/FC disk access and serve NFS mounts again |
CON-1122 | csi.k8s | iSCSI CHAP credentials applied are not reflected by the iscsiadm command | When iSCSI CHAP settings are modified with the HPE CSI Driver, credentials are updated using the iscsiadm command when the next pod is scheduled onto the node. However, these credentials are not displayed by the iscsiadm command for already existing sessions | This is cosmetic only; the current CHAP settings are updated in each iSCSI node entry under the /etc/iscsi/nodes or /var/lib/iscsi/nodes directories. When iSCSI login is attempted again, the latest values are used by iscsid |
CON-1260 | csi.k8s | iSCSI CHAP username/password is not updated in the HPENodeInfos custom resource | The HPE CSI driver maintains iSCSI CHAP credentials in each HPENodeInfo custom resource instance. When the driver is re-deployed with changes to the iSCSI CHAP username or password, the HPENodeInfo instances are sometimes not updated or re-created and therefore contain stale data. This causes failures when an application pod is scheduled onto a node with a stale iSCSI CHAP username or password | Possible workarounds: 1. Delete the HPENodeInfos instances manually and re-deploy the CSI driver to re-create them with updated values. 2. Edit each HPENodeInfos instance to update the iSCSI CHAP username/password manually; the password has to be a base64-encoded value |
CON-1266 | csi.k8s | Failure to edit PVC size and other volume properties at the same time | With the introduction of the HPE Volume Mutator in v1.3.0, users can edit volume parameters through PVC annotations. However, when volume properties are edited together with size, both the CSI resizer and the HPE Volume Mutator might attempt PVC updates (i.e. status and conditions). This can cause either of them to fail the update with an error saying the resource version has changed externally | Avoid editing volume properties through annotations at the same time as size. Resize is a different operation that involves controller/node expansion triggered through the CSI external resizer. Perform resize (volume size edit) and edits to the rest of the volume properties, such as limitIops, limitMbps, etc., as separate operations |
CON-1099 | csi.k8s | Editing PVC with allowed mutations but missing overrides fails | Editing a PVC fails with an incorrect error if a field is not allowed in "Overrides" but is allowed in "mutation" and present in the PVC definition | Fields allowed for mutation must also be present in the storage class overrides to prevent this error |
CON-1476 | csi.k8s | PVC parameters that are allowed in mutations but not in overrides are updated automatically | A PVC with annotations for parameters that are allowed in "mutation" but not in "Overrides" gets updated automatically after about 2 minutes, and volumes are removed from the volume collection as well | Fields allowed for mutation must also be present in the storage class overrides to prevent this error (see the StorageClass sketch after this table) |
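
The workaround for CON-1099 and CON-1476 amounts to keeping the mutation and override lists in sync in the StorageClass. Below is a minimal sketch, assuming the `allowOverrides`/`allowMutations` StorageClass parameters and the `csi.hpe.com/` PVC annotation prefix described in the HPE CSI Driver documentation; the object names and values are hypothetical, so verify the exact parameter names against your deployed driver version.

```yaml
# Sketch of a StorageClass where every field allowed for mutation is also
# listed under allowOverrides, so PVC annotation edits neither fail
# (CON-1099) nor get silently reverted (CON-1476).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-scod                                    # hypothetical name
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  limitIops: "8000"
  allowOverrides: limitIops,limitMbps
  allowMutations: limitIops,limitMbps               # same fields as allowOverrides
  # CSP Secret references and other required parameters omitted for brevity
---
# Per CON-1266, mutate volume properties and resize in separate steps, e.g.
# (hypothetical PVC name and annotation value):
#   kubectl annotate pvc my-pvc csi.hpe.com/limitIops="10000" --overwrite
# then edit spec.resources for the resize as a separate operation.
```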
The following table lists the known issues for the HPE Primera/3PAR CSP for HPE CSI Driver release 1.3.0.
Component | Title | Description | Workaround |
---|---|---|---|
Replication | Pod mount fails when the primary array is down | If the primary array is down in a replicated array setup and a pod mount is attempted, the mount fails | Edit the backend in the primary array Secret (using kubectl apply -f <secret>) to point it to the secondary array IP (see the Secret sketch after this table) |
Replication | Cannot add volume to the RCG (Remote Copy Group) when it is in the primary role on the secondary array | When the RCG is active on the secondary array, adding a new volume (PV) to it is not allowed | None |
Replication | Autocompute of CPG is disallowed for replicated volumes | When using replication, the cpg parameter must be specified in the storage class (SC) | None |
Replication | Mandatory RCG policies required | If an existing RCG is used for the remoteCopyGroup parameter in the SC, ensure the auto_failover and path_management policies are set to avoid failures during writes and during the switchover process | None |
Snapshot | Restore of a snapshot created using the virtualCopyOf parameter | A snapshot created using the virtualCopyOf option cannot be restored using the CSI driver; this has to be done using the CLI command promotesv | None |
cloneOf, virtualCopyOf, importVolAsClone | I/O restrictions | Source volumes used with the cloneOf, virtualCopyOf, and importVolAsClone options should not have any I/O activity taking place while the snapshot/clone operation is performed via the CSI driver; otherwise this might lead to data consistency issues due to the hidden snapshots taken by the array | None |
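
For the replication workaround above (repointing the primary backend Secret at the secondary array), a minimal sketch of the re-applied Secret is shown below. The Secret name, namespace, service values, credentials, and IP address are hypothetical, and the key layout is assumed from the HPE CSI Driver backend Secret format; verify it against your installed CSP before use.

```yaml
# Sketch: primary backend Secret with the backend key repointed to the
# secondary array management IP. Re-apply with:
#   kubectl apply -f primera3par-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: primera3par-secret             # hypothetical; use your existing primary Secret name
  namespace: kube-system
stringData:
  serviceName: primera3par-csp-svc     # assumed CSP service name
  servicePort: "8080"
  backend: 192.0.2.20                  # secondary array IP (previously the primary array IP)
  username: 3paradm                    # example credentials only
  password: 3pardata
```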