
failed to provision volume with StorageClass "hpe-standard": rpc error: code = DeadlineExceeded desc = context deadline exceeded #370

Open
arupdevops opened this issue Dec 14, 2023 · 7 comments

@arupdevops

Hi All,

With HPE CSI Driver 2.4.0 we are unable to create a PVC.

Below is the error:

failed to provision volume with StorageClass "hpe-standard": rpc error: code = DeadlineExceeded desc = context deadline exceeded
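
For anyone triaging a timeout like this, the PVC's events and the provisioner usually show where the call stalls; a couple of standard kubectl checks (PVC name and namespace are placeholders, not taken from this issue):

kubectl describe pvc <pvc-name> -n <namespace>
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp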

regards
Arup

@datamattsson
Collaborator

Can you show the steps you performed to get here?

@arupdevops
Author

@datamattsson: After the HPE CSI driver is successfully deployed, perform the steps below:

  1. Create the secret.

kind: Secret
apiVersion: v1
metadata:
  name: custom-secret
  namespace: hpe-storage
  uid: 211f5a3d-0c23-4fb6-a462-1abcc72e7b61
  resourceVersion: '21922279'
  creationTimestamp: '2023-12-13T17:37:37Z'
  managedFields:
    - manager: kubectl-create
      operation: Update
      apiVersion: v1
      time: '2023-12-13T17:37:37Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:data':
          .: {}
          'f:backend': {}
          'f:password': {}
          'f:serviceName': {}
          'f:username': {}
        'f:type': {}
    - manager: Mozilla
      operation: Update
      apiVersion: v1
      time: '2023-12-14T06:49:35Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:data':
          'f:servicePort': {}
data:
  backend: IP of the Storage
  password: Password of the storage
  serviceName: primera3par-csp-svc
  servicePort: 8080
  username: username to login to the storage
type: Opaque
2. Then the storage class (a hedged sketch follows this list).

YAML attached: storageclass-hpe-standard (1).yaml.zip

3. Finally the PVC (also sketched below).
   persistentvolumeclaim-aspire-ocp-claim (1).yaml.zip
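
Since both attachments are zipped, here is a minimal sketch of what an hpe-standard StorageClass and a matching PVC typically look like with the HPE CSI Driver. All parameter values (CPG, provisioning type, filesystem, size) are illustrative assumptions and not the contents of the attached files; the PVC name is only inferred from the attachment's filename.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/provisioner-secret-name: custom-secret
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: custom-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: custom-secret
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: custom-secret
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  cpg: SSD_r6                # assumed CPG name on the array
  provisioning_type: tpvv    # thin-provisioned virtual volume
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aspire-ocp-claim     # inferred from the attachment filename
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi          # illustrative size
  storageClassName: hpe-standard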

The cluster is communicating well with the HPE Primera storage.
Ports 443 and 22 are open for inbound traffic to the storage.
regards
Arup

@datamattsson
Collaborator

You need to pull the logs from the hpe-csi-node and hpe-csi-controller and the 3PAR CSP and figure out what is timing out where. There's something not connecting somewhere.
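
A minimal way to pull those logs, assuming the default hpe-storage namespace and the standard workload names from the chart/operator (container names can differ between releases):

kubectl logs -n hpe-storage deploy/hpe-csi-controller -c hpe-csi-driver
kubectl logs -n hpe-storage daemonset/hpe-csi-node -c hpe-csi-driver
kubectl logs -n hpe-storage deploy/primera3par-csp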

@arupdevops
Author

@datamattsson:
Below are the logs from the "primera3par-csp-54cdcb7c65-k8x69" pod.

I don't understand this, as from the cluster we are able to SSH to the storage.

goroutine 244 [running]:
net/http.(*conn).serve.func1()
/usr/lib/golang/src/net/http/server.go:1850 +0xbf
panic({0x14e96e0, 0xc000510330})
/usr/lib/golang/src/runtime/panic.go:890 +0x262
github.hpe.com/hpe/hpe_3par_primera_csp/api.(*CspManager).doExecute.func1()
/usr/src/main/api/csp_manager.go:52 +0x19c
panic({0x14e96e0, 0xc000510330})
/usr/lib/golang/src/runtime/panic.go:884 +0x212
github.hpe.com/hpe/hpe3parprimera_common_libs/hpessh.ConnectUsingPassword({0xc000410110, 0xf}, {0xc0004100f8, 0x7}, {0xc000410100, 0x8})
/usr/src/main/vendor/github.hpe.com/hpe/hpe3parprimera_common_libs/hpessh/hpe_ssh.go:93 +0x345
github.hpe.com/hpe/hpe3parprimera_common_libs/hpessh.GetPortNumber({0xc000410110?, 0x20?}, {0xc0004100f8?, 0x1?}, {0xc000410100?, 0x18?})
/usr/src/main/vendor/github.hpe.com/hpe/hpe3parprimera_common_libs/hpessh/hpe_ssh.go:37 +0x5d
github.hpe.com/hpe/hpe_3par_primera_go_client/v1/rest.getArrayPort({0xc000410110, 0xf}, {0xc0004100f8, 0x7}, {0xc000410100, 0x8})
/usr/src/main/vendor/github.hpe.com/hpe/hpe_3par_primera_go_client/v1/rest/managed_client.go:110 +0x1ba
github.hpe.com/hpe/hpe_3par_primera_go_client/v1/rest.NewManagedArrayClient({0xc000410110, 0xf}, {0xc0004100f8, 0x7}, {0xc000410100, 0x8})
/usr/src/main/vendor/github.hpe.com/hpe/hpe_3par_primera_go_client/v1/rest/managed_client.go:127 +0x3b
github.hpe.com/hpe/hpe_3par_primera_csp/factory.(*ArrayClientFactory).NewArrayClient(0xc0006364c0?, {0xc000410110?, 0x18?}, {0xc0004100f8?, 0x0?}, {0xc000410100?, 0x71353e?})
/usr/src/main/factory/array_client_factory.go:18 +0x3a
github.hpe.com/hpe/hpe_3par_primera_csp/cmds/v1.(*CreateSessionCmd).Execute(0xc0003161c0)
/usr/src/main/cmds/v1/create_session_cmd.go:56 +0x15c
github.hpe.com/hpe/hpe_3par_primera_csp/api.(*CspManager).doExecute(0x15e4d00?, {0x1a34ff8?, 0xc0003161c0?}, {0x1a447f0?, 0xc000560000?})
/usr/src/main/api/csp_manager.go:56 +0x8c
github.hpe.com/hpe/hpe_3par_primera_csp/api.(*CspManager).CreateSession(0xc000627a00, 0x31?, 0xc0005103f0, {0x1a447f0, 0xc000560000})
/usr/src/main/api/csp_manager.go:91 +0xfa
github.hpe.com/hpe/hpe_3par_primera_csp/api.(*CspRestInterface).createArraySession(0x1a451e0?, {0x1a447f0, 0xc000560000}, 0x1a2be70?)
/usr/src/main/api/request_handler.go:259 +0x1fe
net/http.HandlerFunc.ServeHTTP(0xc0004c4200?, {0x1a447f0?, 0xc000560000?}, 0x19c58afaf38a?)
/usr/lib/golang/src/net/http/server.go:2109 +0x2f
github.com/gorilla/mux.(*Router).ServeHTTP(0xc0002d8900, {0x1a447f0, 0xc000560000}, 0xc0004c4000)
/usr/src/main/vendor/github.com/gorilla/mux/mux.go:210 +0x1cf
net/http.serverHandler.ServeHTTP({0x1a38bd0?}, {0x1a447f0, 0xc000560000}, 0xc0004c4000)
/usr/lib/golang/src/net/http/server.go:2947 +0x30c
net/http.(*conn).serve(0xc0004be0a0, {0x1a451e0, 0xc0002421b0})
/usr/lib/golang/src/net/http/server.go:1991 +0x607
created by net/http.(*Server).Serve
/usr/lib/golang/src/net/http/server.go:3102 +0x4db
time="2023-12-14T01:04:07Z" level=info msg="[ REQUEST-ID 100006 ] -- >>>> createArraySession /containers/v1/tokens" file="request_handler.go:250"
time="2023-12-14T01:04:07Z" level=info msg="[ REQUEST-ID 100006 ] -- >>>>> Create Session Cmd" file="create_session_cmd.go:51"
time="2023-12-14T01:04:07Z" level=info msg="Port Map Contains: map[string]string{}" file="managed_client.go:105"
time="2023-12-14T01:04:07Z" level=info msg="Getting WSAPI port for array: 192.168.236.192" file="managed_client.go:109"
time="2023-12-14T01:06:18Z" level=error msg="unable to connect to 192.168.236.192: dial tcp 192.168.236.192:22: connect: connection timed out\n" file="hpe_ssh.go:87"
time="2023-12-14T01:06:18Z" level=info msg="[ REQUEST-ID 100006 ] -- <<<<< Create Session Cmd" file="panic.go:884"
time="2023-12-14T01:06:18Z" level=error msg="Non-CSP panic received: &hpessh.HpeSshErrorContext{RespBody:[]uint8(nil), ErrCode:1000, Err:(*errors.errorString)(0xc0002f6020)}\n" file="csp_manager.go:50"
time="2023-12-14T01:06:18Z" level=error msg="Allowing panic to escape" file="csp_manager.go:51"
time="2023-12-14T01:06:18Z" level=info msg="[ REQUEST-ID 100006 ] -- <<<<< createArraySession" file="panic.go:884"
2023/12/14 01:06:18 http: panic serving 100.64.0.4:39574: &{[] 1000 unable to connect to 192.168.236.192: dial tcp 192.168.236.192:22: connect: connection timed out

@arupdevops
Author

[screenshot attached: error]
This is the reply from Red Hat.

@datamattsson
Collaborator

It looks like the CSP Pod doesn't have connectivity to the external array. This could happen for a number of different reasons, and I've lost count of how many times a simple Pod restart has resolved the issue.

kubectl rollout restart -n hpe-storage deploy/primera3par-csp
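
To confirm the rollout finished and the new CSP Pod is up (same default-namespace assumption):

kubectl rollout status -n hpe-storage deploy/primera3par-csp
kubectl get pods -n hpe-storage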

@arupdevops
Author

The issue is resolved now. Port 22 was allowed for the CSP Pod IP range, and after that the PVC was provisioned along with the PV.
The cluster host IP range (192.168.33.*) was already allowed for port 22.
The Pod IP range (10.x.x.x) was not allowed for port 22.
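
For anyone hitting the same symptom, a quick way to verify that the array's SSH port is reachable from the Pod network (not just from the hosts) is a throwaway debug Pod; the nicolaka/netshoot image and the array IP (taken from the logs above) are illustrative assumptions:

kubectl run csp-net-test -n hpe-storage --rm -it --restart=Never \
  --image=nicolaka/netshoot -- nc -zv -w 5 192.168.236.192 22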
