
Conversation

Contributor

@emy commented Sep 24, 2025

- What I did
Restarting NetworkManager on a node will cause the node to lose connection if the node's br-ex interface is managed by nmstate. This happens because the ofport dispatcher script does not take into account that the br-ex bridge ID is br-ex-br instead of br-ex. This PR adds a fallback check for an nmstate-managed br-ex when no bridge ID can be found.

- How to verify it
Deploy a cluster with an nmstate-managed br-ex.
Restart NetworkManager using systemctl restart NetworkManager.
The node will lose connection if the fix was unsuccessful.
The node will retain connection if the fix was successful. (A rough command sketch follows.)
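
A rough command sketch of these steps; the target address is a placeholder, not from the PR:

# on the affected node, after applying the fix
systemctl restart NetworkManager
# from another host: the node should stay reachable (placeholder address)
ping -c 3 <node-address>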

- Description for the changelog

Nodes with a br-ex interface managed by nmstate will no longer lose connection when NetworkManager is restarted on the node.

@openshift-ci-robot added the jira/severity-critical (Referenced Jira bug's severity is critical for the branch this PR is targeting.), jira/valid-reference (Indicates that this PR references a valid Jira ticket of any type.), and jira/invalid-bug (Indicates that a referenced Jira bug is invalid for the branch this PR is targeting.) labels Sep 24, 2025
@openshift-ci-robot
Contributor

@emy: This pull request references Jira Issue OCPBUGS-54682, which is invalid:

  • expected the bug to target the "4.21.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

- What I did
Restarting NetworkManager on a node will cause the node to lose connection if the node's br-ex interface is managed by nmstate. This happens because the ofport dispatcher script does not take into account that the br-ex bridge ID is br-ex-br instead of br-ex. This PR adds a fallback check for an nmstate-managed br-ex when no bridge ID can be found.

- How to verify it
Deploy a cluster with an nmstate-managed br-ex.
Restart NetworkManager using systemctl restart NetworkManager.
The node will lose connection if the fix was unsuccessful.
The node will retain connection if the fix was successful.

- Description for the changelog

Nodes with a br-ex interface managed by nmstate will no longer lose connection when NetworkManager is restarted on the node.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@emy
Contributor Author

emy commented Sep 24, 2025

cc: @mkowalski

@mkowalski
Contributor

/jira backport release-4.16,release-4.17,release-4.18,release-4.19,release-4.20

@openshift-ci-robot
Contributor

@mkowalski: The following backport issues have been created:

Queuing cherrypicks to the requested branches to be created after this PR merges:
/cherrypick release-4.16
/cherrypick release-4.17
/cherrypick release-4.18
/cherrypick release-4.19
/cherrypick release-4.20

In response to this:

/jira backport release-4.16,release-4.17,release-4.18,release-4.19,release-4.20

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-cherrypick-robot

@openshift-ci-robot: once the present PR merges, I will cherry-pick it on top of release-4.16, release-4.17, release-4.18, release-4.19, release-4.20 in new PRs and assign them to you.

In response to this:

@mkowalski: The following backport issues have been created:

Queuing cherrypicks to the requested branches to be created after this PR merges:
/cherrypick release-4.16
/cherrypick release-4.17
/cherrypick release-4.18
/cherrypick release-4.19
/cherrypick release-4.20

In response to this:

/jira backport release-4.16,release-4.17,release-4.18,release-4.19,release-4.20

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@mkowalski
Contributor

/lgtm

BRIDGE_NAME=$(nmcli -t -f connection.interface-name conn show "${BRIDGE_ID}" | awk -F ':' '{print $NF}') || true
if [ "${BRIDGE_NAME}" == "" ]; then
    # Check if br-ex is managed by nmstate (br-ex-br)
    PORT_CONNECTION_UUID=$(nmcli -t -f device,type,uuid conn | awk -F ':' '{if( $1=="br-ex" && $2~/^ovs-bridge/) print $NF}')
Member

I'm curious how this works. If the connection uuid was incorrectly detected, do we not exit on line 46? Are we still getting a uuid there but it's not the correct one?

Contributor

If there is no connection that matches both "br-ex" and "ovs-bridge", then PORT_CONNECTION_UUID will be empty; the next line then fails and the process exits with code 10 or some other big number.

If there is only one connection that matches both "br-ex" and "ovs-bridge", then PORT_CONNECTION_UUID will get exactly one value, and it is the correct one. This is the scenario where everything works okay.

If there are two or more connections that match both "br-ex" and "ovs-bridge", then PORT_CONNECTION_UUID will get a random value out of the 2 or more possible ones. This scenario is bad and undesired. How could we end up in it? We would need to have 2 connections with the same device and type. Can that ever happen?
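
A minimal illustration of that multi-match case, feeding fabricated nmcli-style lines through the awk filter from the diff (the UUIDs are made up):

# hypothetical: two connections share device "br-ex" and type "ovs-bridge"
printf 'br-ex:ovs-bridge:uuid-aaaa\nbr-ex:ovs-bridge:uuid-bbbb\n' \
    | awk -F ':' '{if( $1=="br-ex" && $2~/^ovs-bridge/) print $NF}'
# prints uuid-aaaa and uuid-bbbb, so the variable captures more than one value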

Contributor

Actually, I realize now that you are asking about L46, which uses the output of

PORT_CONNECTION_UUID=$(nmcli -t -f device,type,uuid conn | awk -F ':' '{if( ($1=="'${PORT}'" || $3=="'${PORT}'") && $2~/^ovs*/) print $NF}')

but I do not see how we can easily get an incorrect one there, at least not in more scenarios than we do today. Let's start with

[root@master-0 ~]# nmcli -t -f device,type,uuid conn
br-ex:ovs-interface:2225810d-187f-432f-9c09-67bf9727ae88
br-ex:ovs-bridge:ce9f4f14-e059-4d8e-bf2e-2379fc3fed8a
enp2s0:802-3-ethernet:8d1041e5-9992-450e-954f-8e4f981ae9d2
br-ex:ovs-port:bf2011c7-24f6-4e35-85cd-ca639f4769e5
enp2s0:ovs-port:fb1ae9eb-d718-4a72-9b42-f12c1f2c9942
enp1s0:802-3-ethernet:d4a98e92-8232-40d5-9a2c-c69796bbd40c
enp3s0:802-3-ethernet:d4a98e92-8232-40d5-9a2c-c69796bbd40c
enp4s0:802-3-ethernet:d4a98e92-8232-40d5-9a2c-c69796bbd40c
lo:loopback:ec98b9a0-abe4-409a-86c7-ffc9e3fb3ae0
:802-3-ethernet:e5bf500e-35e8-4888-b4eb-74314c6473e5

From that we get

[root@master-0 ~]# export INTERFACE_NAME=enp2s0

[root@master-0 ~]# INTERFACE_CONNECTION_UUID=$(nmcli -t -f device,type,uuid conn | awk -F ':' '{if($1=="'${INTERFACE_NAME}'" && $2!~/^ovs*/) print $NF}')
[root@master-0 ~]# echo $INTERFACE_CONNECTION_UUID
8d1041e5-9992-450e-954f-8e4f981ae9d2

[root@master-0 ~]# INTERFACE_OVS_SLAVE_TYPE=$(nmcli -t -f connection.slave-type conn show "${INTERFACE_CONNECTION_UUID}" | awk -F ':' '{print $NF}')
[root@master-0 ~]# echo $INTERFACE_OVS_SLAVE_TYPE
ovs-port

[root@master-0 ~]# PORT=$(nmcli -t -f connection.master conn show "${INTERFACE_CONNECTION_UUID}" | awk -F ':' '{print $NF}')
[root@master-0 ~]# echo $PORT
fb1ae9eb-d718-4a72-9b42-f12c1f2c9942

[root@master-0 ~]# PORT_CONNECTION_UUID=$(nmcli -t -f device,type,uuid conn | awk -F ':' '{if( ($1=="'${PORT}'" || $3=="'${PORT}'") && $2~/^ovs*/) print $NF}')
[root@master-0 ~]# echo $PORT_CONNECTION_UUID
fb1ae9eb-d718-4a72-9b42-f12c1f2c9942

So it seems PORT_CONNECTION_UUID is trying to find a connection of type ovs* whose name is your interface name (e.g. eth0). For those it's okay, because we will not have multiple ovs* connections with such a name.

It could get tricky on the run with INTERFACE_NAME=br-ex, but that one finishes very quickly, i.e.

[root@master-0 ~]# export INTERFACE_NAME=br-ex

[root@master-0 ~]# INTERFACE_CONNECTION_UUID=$(nmcli -t -f device,type,uuid conn | awk -F ':' '{if($1=="'${INTERFACE_NAME}'" && $2!~/^ovs*/) print $NF}')
[root@master-0 ~]# echo $INTERFACE_CONNECTION_UUID

[root@master-0 ~]#

So, are we actually missing something?

Member

I'm not sure. The reason I'm asking is that we're recalculating the port UUID here with a slightly different command than the one above, but if the one above hadn't found any UUID, the script would have exited before now. That leads me to believe that either we get a different UUID from this command for some reason, or we don't need to recalculate it at all.

The latter would make this second call unnecessary, but it would still work fine, so I'm mostly making sure I understand the logic correctly.

Contributor

That's fair.

  1. If we had no PORT_CONNECTION_UUID previously, we can't reach this code here.
  2. If we had the correct PORT_CONNECTION_UUID previously, we recalculate it here, but we don't need to.
  3. If we had the wrong PORT_CONNECTION_UUID previously, it's actually bad.

I have a gut feeling we are in scenario (3). Look at the previous PORT_CONNECTION_UUID: it only matches ovs and not ovs-bridge. Given that for br-ex* we have more than one entry matching ovs*, the way of calculating it here is more robust than the old way.

Maybe we should just move PORT_CONNECTION_UUID=$( from here up to L44? As I read it, this should work for both the old and the new way of defining br-ex. L44 works correctly for the old method and may(?) race for the new one.
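
One detail worth noting for the robustness comparison: in awk, the pattern /^ovs*/ means "ov" followed by zero or more "s" characters, so it matches every ovs-* type alike; it is not an "ovs-" prefix check. A quick demonstration with hypothetical type strings:

printf 'ovs-port\novs-bridge\novs-interface\n' | awk '$0~/^ovs*/'
# prints all three lines: the pattern cannot distinguish the ovs types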

Contributor Author

Pretty sure I ended up in scenario (3). I'll check back; I agree that we could/should make this a little more solid for cases where a wrong selection could happen.

@openshift-ci bot added the lgtm (Indicates that a PR is ready to be merged.) label Sep 24, 2025
@mkowalski
Contributor

/hold

@openshift-ci bot added the do-not-merge/hold (Indicates that a PR should not merge because someone has issued a /hold command.) label Sep 26, 2025
@openshift-ci bot removed the lgtm (Indicates that a PR is ready to be merged.) label Sep 30, 2025
# Get the port's master. If it doesn't have any, assume it's not our bridge
BRIDGE_ID=$(nmcli -t -f connection.master conn show "${PORT_CONNECTION_UUID}" | awk -F ':' '{print $NF}')
BRIDGE_ID=$(nmcli -t -f general.name conn show "${PORT_CONNECTION_UUID}" | awk -F ':' '{print $NF}')
Contributor

This change alters the logic: general.name returns the connection's own name rather than its master bridge

[root@worker-1 ~]# nmcli -t -f general.name conn show "${PORT_CONNECTION_UUID}" | awk -F ':' '{print $NF}'
ovs-port-phys0
[root@worker-1 ~]# nmcli -t -f connection.master conn show "${PORT_CONNECTION_UUID}" | awk -F ':' '{print $NF}'
br-ex

@mkowalski
Contributor

/lgtm

@openshift-ci bot added the lgtm (Indicates that a PR is ready to be merged.) label Oct 10, 2025
@mkowalski
Contributor

/hold cancel

@openshift-ci bot removed the do-not-merge/hold (Indicates that a PR should not merge because someone has issued a /hold command.) label Oct 10, 2025
@emy
Contributor Author

emy commented Oct 10, 2025

/jira refresh

@openshift-ci-robot added the jira/valid-bug (Indicates that a referenced Jira bug is valid for the branch this PR is targeting.) label Oct 10, 2025
@openshift-ci-robot
Contributor

@emy: This pull request references Jira Issue OCPBUGS-54682, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.21.0) matches configured target version for branch (4.21.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @rbbratta

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot removed the jira/invalid-bug (Indicates that a referenced Jira bug is invalid for the branch this PR is targeting.) label Oct 10, 2025
@openshift-ci bot requested a review from rbbratta October 10, 2025 10:36
@mkowalski
Contributor

/retest-required

Contributor

openshift-ci bot commented Oct 10, 2025

@emy: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-gcp-mco-disruptive 28a1a75 link false /test e2e-gcp-mco-disruptive
ci/prow/e2e-azure-ovn-upgrade-out-of-change 28a1a75 link false /test e2e-azure-ovn-upgrade-out-of-change
ci/prow/e2e-gcp-op-ocl 28a1a75 link false /test e2e-gcp-op-ocl
ci/prow/e2e-aws-mco-disruptive 28a1a75 link false /test e2e-aws-mco-disruptive
ci/prow/okd-scos-e2e-aws-ovn 0a6dfa5 link false /test okd-scos-e2e-aws-ovn
ci/prow/bootstrap-unit 0a6dfa5 link false /test bootstrap-unit

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@djoshy
Contributor

djoshy commented Oct 13, 2025

/approve

Deferring to existing reviews by folks better versed in networking than I am.

Contributor

openshift-ci bot commented Oct 13, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: djoshy, emy, mkowalski

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci bot added the approved (Indicates a PR has been approved by an approver from all required OWNERS files.) label Oct 13, 2025
@mkowalski
Contributor

/verified @mkowalski

I have run this code against a cluster with an MC that creates an nmstate configuration that builds a bond out of 2 interfaces and plugs it into br-ex. Restarting NM on such a system produces the following log:

Oct 15 16:52:46 master-1 nm-dispatcher[332335]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex
Oct 15 16:52:46 master-1 nm-dispatcher[332335]: + '[' -f /run/ofport_requests.br-ex ']'
Oct 15 16:52:46 master-1 nm-dispatcher[332405]: ++ get_interface_ofport_request
Oct 15 16:52:46 master-1 nm-dispatcher[332405]: ++ declare -A ofport_requests
Oct 15 16:52:46 master-1 nm-dispatcher[332406]: +++ ovs-vsctl get Interface bond0 ofport
Oct 15 16:52:46 master-1 nm-dispatcher[332405]: ++ local current_ofport=1
Oct 15 16:52:46 master-1 nm-dispatcher[332335]: + ovs-vsctl set Interface bond0 ofport_request=1
Oct 15 16:52:46 master-1 ovs-vsctl[332407]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface bond0 ofport_request=1
Oct 15 16:52:48 master-1 nm-dispatcher[332599]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex
Oct 15 16:52:48 master-1 nm-dispatcher[332599]: + '[' -f /run/ofport_requests.br-ex ']'
Oct 15 16:52:48 master-1 nm-dispatcher[332599]: + echo 'Sourcing configuration file '\''/run/ofport_requests.br-ex'\'' with contents:'
Oct 15 16:52:48 master-1 nm-dispatcher[332599]: Sourcing configuration file '/run/ofport_requests.br-ex' with contents:
Oct 15 16:52:48 master-1 nm-dispatcher[332599]: + cat /run/ofport_requests.br-ex
Oct 15 16:52:48 master-1 nm-dispatcher[332599]: + source /run/ofport_requests.br-ex
Oct 15 16:52:48 master-1 nm-dispatcher[332599]: + ovs-vsctl set Interface bond0 ofport_request=1
Oct 15 16:52:48 master-1 ovs-vsctl[332693]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface bond0 ofport_request=1

On the same system without this patch, the log when restarting NM looks as follows:

Oct 15 16:42:45 master-1 nm-dispatcher[313789]: req:14 'pre-up' [bond0], "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh": complete: process failed with Script '/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh' exited with status 10
Oct 15 16:42:45 master-1 NetworkManager[313869]: <warn>  [1760546565.6173] dispatcher: (4) /etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh failed (failed): Script '/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh' exited with status 10
Oct 15 16:42:46 master-1 nm-dispatcher[313789]: req:30 'pre-up' [bond0], "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh": complete: process failed with Script '/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh' exited with status 10
Oct 15 16:42:46 master-1 NetworkManager[313869]: <warn>  [1760546566.9737] dispatcher: (20) /etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh failed (failed): Script '/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh' exited with status 10
Oct 15 16:42:47 master-1 nm-dispatcher[313789]: req:34 'pre-up' [bond0], "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh": complete: process failed with Script '/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh' exited with status 10
Oct 15 16:42:47 master-1 NetworkManager[313869]: <warn>  [1760546567.6208] dispatcher: (24) /etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh failed (failed): Script '/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh' exited with status 10

@openshift-ci-robot
Contributor

@mkowalski: The /verified command must be used with one of the following actions: by, later, remove, or bypass. See https://docs.ci.openshift.org/docs/architecture/jira/#premerge-verification for more information.

In response to this:

/verified @mkowalski


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@mkowalski
Contributor

/verified by @mkowalski

@openshift-ci-robot added the verified (Signifies that the PR passed pre-merge verification criteria) label Oct 15, 2025
@openshift-ci-robot
Contributor

@mkowalski: This PR has been marked as verified by @mkowalski.

In response to this:

/verified by @mkowalski

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-merge-bot bot merged commit b5688e7 into openshift:main Oct 15, 2025
13 of 15 checks passed
@openshift-ci-robot
Contributor

@emy: Jira Issue Verification Checks: Jira Issue OCPBUGS-54682
✔️ This pull request was pre-merge verified.
✔️ All associated pull requests have merged.
✔️ All associated, merged pull requests were pre-merge verified.

Jira Issue OCPBUGS-54682 has been moved to the MODIFIED state and will move to the VERIFIED state when the change is available in an accepted nightly payload. 🕓


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-cherrypick-robot

@openshift-ci-robot: new pull request created: #5352


@openshift-cherrypick-robot

@openshift-ci-robot: new pull request created: #5353


@openshift-cherrypick-robot

@openshift-ci-robot: new pull request created: #5354


@openshift-cherrypick-robot

@openshift-ci-robot: new pull request created: #5355


@openshift-cherrypick-robot

@openshift-ci-robot: new pull request created: #5356


@openshift-merge-robot
Contributor

Fix included in accepted release 4.21.0-0.nightly-2025-10-17-012128
