
Use new API in the Devices Widget #267


Merged
merged 4 commits into QubesOS:main from device_widget_upgrade on Jul 20, 2025

Conversation

@marmarta marmarta (Member) commented Jun 24, 2025

  • icons appropriate for category
  • show/hide child devices
  • connect with mic
  • open global config

fixes QubesOS/qubes-issues#6811
fixes QubesOS/qubes-issues#8537
fixes QubesOS/qubes-issues#7612

@marmarta (Member Author)

tests fail because of QubesOS/qubes-core-admin-client#362

@marmarta (Member Author)

(will fix pylint promptly)

@marmarta marmarta changed the title from "Use new API in the Devices Widget" to "WIP: Use new API in the Devices Widget" on Jun 25, 2025
@marmarta

This comment was marked as resolved.

@marmarta marmarta force-pushed the device_widget_upgrade branch from 053f63f to 2703b5b on June 25, 2025 12:43
@marmarta marmarta changed the title from "WIP: Use new API in the Devices Widget" to "Use new API in the Devices Widget" on Jun 25, 2025
@qubesos-bot qubesos-bot commented Jun 27, 2025

OpenQA test summary

Complete test suite and dependencies: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025071903-4.3&flavor=pull-requests

Test run included the following:

New failures, excluding unstable

Compared to: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025061004-4.3&flavor=update

  • system_tests_pvgrub_salt_storage

  • system_tests_dispvm

    • TC_20_DispVM_whonix-workstation-17: test_030_edit_file (failure + cleanup)
      AssertionError: Timeout while waiting for disp[0-9]* window to show

    • TC_20_DispVM_whonix-workstation-17: test_100_open_in_dispvm (failure + cleanup)
      AssertionError: Timeout while waiting for disp[0-9]* window to show

  • system_tests_devices

    • TC_00_List_whonix-gateway-17: test_011_list_dm_mounted (failure)
      AssertionError: 'test-dm' == 'test-dm' : Device test-inst-vm:dm-0::...
  • system_tests_audio@hw1

  • system_tests_qwt_win10_seamless@hw13

    • windows_install: Failed (test died)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...
  • system_tests_qwt_win11@hw13

    • windows_install: Failed (test died)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

Failed tests

10 failures
  • system_tests_pvgrub_salt_storage

  • system_tests_extra

    • TC_00_QVCTest_whonix-workstation-17: test_010_screenshare (failure)
      AssertionError: 1 != 0 : Timeout waiting for /dev/video0 in test-in...
  • system_tests_dispvm

    • TC_20_DispVM_whonix-workstation-17: test_030_edit_file (failure + cleanup)
      AssertionError: Timeout while waiting for disp[0-9]* window to show

    • TC_20_DispVM_whonix-workstation-17: test_100_open_in_dispvm (failure + cleanup)
      AssertionError: Timeout while waiting for disp[0-9]* window to show

  • system_tests_devices

    • TC_00_List_whonix-gateway-17: test_011_list_dm_mounted (failure)
      AssertionError: 'test-dm' == 'test-dm' : Device test-inst-vm:dm-0::...
  • system_tests_kde_gui_interactive

    • gui_keyboard_layout: wait_serial (wait serial expected)
      # wait_serial expected: "echo -e '[Layout]\nLayoutList=us,de' | sud...

    • gui_keyboard_layout: Failed (test died)
      # Test died: command 'test "$(cd ~user;ls e1*)" = "$(qvm-run -p wor...

  • system_tests_audio@hw1

  • system_tests_qwt_win10_seamless@hw13

    • windows_install: Failed (test died)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...
  • system_tests_qwt_win11@hw13

    • windows_install: Failed (test died)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

Fixed failures

Compared to: https://openqa.qubes-os.org/tests/142375#dependencies

10 fixed

Unstable tests

Performance Tests

Performance degradation:

9 performance degradations
  • debian-12-xfce_exec-data-simplex: 72.42 🔺 ( previous job: 65.51, degradation: 110.54%)
  • debian-12-xfce_exec-data-duplex-root: 82.28 🔺 ( previous job: 70.01, degradation: 117.53%)
  • whonix-gateway-17_exec-root: 43.81 🔺 ( previous job: 39.57, degradation: 110.71%)
  • whonix-gateway-17_socket: 9.83 🔺 ( previous job: 7.85, degradation: 125.16%)
  • whonix-gateway-17_socket-root: 8.70 🔺 ( previous job: 7.89, degradation: 110.24%)
  • whonix-gateway-17_exec-data-duplex-root: 101.47 🔺 ( previous job: 90.74, degradation: 111.83%)
  • dom0_root_seq1m_q8t1_read 3:read_bandwidth_kb: 253646.00 🔺 ( previous job: 289982.00, degradation: 87.47%)
  • dom0_root_rnd4k_q32t1_read 3:read_bandwidth_kb: 14119.00 🔺 ( previous job: 17102.00, degradation: 82.56%)
  • dom0_varlibqubes_seq1m_q8t1_write 3:write_bandwidth_kb: 105280.00 🔺 ( previous job: 122848.00, degradation: 85.70%)

Remaining performance tests:

63 tests
  • debian-12-xfce_exec: 7.29 🟢 ( previous job: 8.63, improvement: 84.48%)
  • debian-12-xfce_exec-root: 29.21 🟢 ( previous job: 29.44, improvement: 99.24%)
  • debian-12-xfce_socket: 8.96 🔺 ( previous job: 8.50, degradation: 105.43%)
  • debian-12-xfce_socket-root: 8.56 🔺 ( previous job: 8.31, degradation: 102.94%)
  • debian-12-xfce_exec-data-duplex: 67.72 🟢 ( previous job: 73.55, improvement: 92.08%)
  • debian-12-xfce_socket-data-duplex: 160.98 🟢 ( previous job: 161.35, improvement: 99.77%)
  • fedora-42-xfce_exec: 9.10
  • fedora-42-xfce_exec-root: 58.04
  • fedora-42-xfce_socket: 8.05
  • fedora-42-xfce_socket-root: 8.53
  • fedora-42-xfce_exec-data-simplex: 69.44
  • fedora-42-xfce_exec-data-duplex: 71.77
  • fedora-42-xfce_exec-data-duplex-root: 98.53
  • fedora-42-xfce_socket-data-duplex: 156.15
  • whonix-gateway-17_exec: 6.91 🟢 ( previous job: 7.34, improvement: 94.14%)
  • whonix-gateway-17_exec-data-simplex: 78.89 🔺 ( previous job: 77.76, degradation: 101.45%)
  • whonix-gateway-17_exec-data-duplex: 82.13 🔺 ( previous job: 78.39, degradation: 104.78%)
  • whonix-gateway-17_socket-data-duplex: 169.74 🔺 ( previous job: 161.95, degradation: 104.81%)
  • whonix-workstation-17_exec: 7.73 🟢 ( previous job: 8.27, improvement: 93.38%)
  • whonix-workstation-17_exec-root: 58.83 🔺 ( previous job: 57.61, degradation: 102.11%)
  • whonix-workstation-17_socket: 8.78 🟢 ( previous job: 8.97, improvement: 97.90%)
  • whonix-workstation-17_socket-root: 10.34 🔺 ( previous job: 9.46, degradation: 109.33%)
  • whonix-workstation-17_exec-data-simplex: 62.25 🟢 ( previous job: 74.54, improvement: 83.51%)
  • whonix-workstation-17_exec-data-duplex: 81.62 🔺 ( previous job: 74.84, degradation: 109.07%)
  • whonix-workstation-17_exec-data-duplex-root: 86.93 🔺 ( previous job: 86.00, degradation: 101.08%)
  • whonix-workstation-17_socket-data-duplex: 169.35 🔺 ( previous job: 160.20, degradation: 105.71%)
  • dom0_root_seq1m_q8t1_write 3:write_bandwidth_kb: 135279.00 🟢 ( previous job: 101988.00, improvement: 132.64%)
  • dom0_root_seq1m_q1t1_read 3:read_bandwidth_kb: 56932.00 🟢 ( previous job: 14284.00, improvement: 398.57%)
  • dom0_root_seq1m_q1t1_write 3:write_bandwidth_kb: 38763.00 🟢 ( previous job: 32696.00, improvement: 118.56%)
  • dom0_root_rnd4k_q32t1_write 3:write_bandwidth_kb: 1354.00 🟢 ( previous job: 1091.00, improvement: 124.11%)
  • dom0_root_rnd4k_q1t1_read 3:read_bandwidth_kb: 11835.00 🟢 ( previous job: 11086.00, improvement: 106.76%)
  • dom0_root_rnd4k_q1t1_write 3:write_bandwidth_kb: 4559.00 🟢 ( previous job: 1840.00, improvement: 247.77%)
  • dom0_varlibqubes_seq1m_q8t1_read 3:read_bandwidth_kb: 500991.00 🟢 ( previous job: 289182.00, improvement: 173.24%)
  • dom0_varlibqubes_seq1m_q1t1_read 3:read_bandwidth_kb: 439286.00 🟢 ( previous job: 433654.00, improvement: 101.30%)
  • dom0_varlibqubes_seq1m_q1t1_write 3:write_bandwidth_kb: 157711.00 🔺 ( previous job: 167872.00, degradation: 93.95%)
  • dom0_varlibqubes_rnd4k_q32t1_read 3:read_bandwidth_kb: 103257.00 🔺 ( previous job: 108760.00, degradation: 94.94%)
  • dom0_varlibqubes_rnd4k_q32t1_write 3:write_bandwidth_kb: 9261.00 🟢 ( previous job: 8874.00, improvement: 104.36%)
  • dom0_varlibqubes_rnd4k_q1t1_read 3:read_bandwidth_kb: 7920.00 🟢 ( previous job: 6356.00, improvement: 124.61%)
  • dom0_varlibqubes_rnd4k_q1t1_write 3:write_bandwidth_kb: 4954.00 🟢 ( previous job: 4420.00, improvement: 112.08%)
  • fedora-42-xfce_root_seq1m_q8t1_read 3:read_bandwidth_kb: 387786.00
  • fedora-42-xfce_root_seq1m_q8t1_write 3:write_bandwidth_kb: 274280.00
  • fedora-42-xfce_root_seq1m_q1t1_read 3:read_bandwidth_kb: 308223.00
  • fedora-42-xfce_root_seq1m_q1t1_write 3:write_bandwidth_kb: 139265.00
  • fedora-42-xfce_root_rnd4k_q32t1_read 3:read_bandwidth_kb: 83540.00
  • fedora-42-xfce_root_rnd4k_q32t1_write 3:write_bandwidth_kb: 5449.00
  • fedora-42-xfce_root_rnd4k_q1t1_read 3:read_bandwidth_kb: 8108.00
  • fedora-42-xfce_root_rnd4k_q1t1_write 3:write_bandwidth_kb: 2626.00
  • fedora-42-xfce_private_seq1m_q8t1_read 3:read_bandwidth_kb: 334154.00
  • fedora-42-xfce_private_seq1m_q8t1_write 3:write_bandwidth_kb: 225791.00
  • fedora-42-xfce_private_seq1m_q1t1_read 3:read_bandwidth_kb: 281044.00
  • fedora-42-xfce_private_seq1m_q1t1_write 3:write_bandwidth_kb: 125146.00
  • fedora-42-xfce_private_rnd4k_q32t1_read 3:read_bandwidth_kb: 36961.00
  • fedora-42-xfce_private_rnd4k_q32t1_write 3:write_bandwidth_kb: 2612.00
  • fedora-42-xfce_private_rnd4k_q1t1_read 3:read_bandwidth_kb: 8504.00
  • fedora-42-xfce_private_rnd4k_q1t1_write 3:write_bandwidth_kb: 1051.00
  • fedora-42-xfce_volatile_seq1m_q8t1_read 3:read_bandwidth_kb: 346293.00
  • fedora-42-xfce_volatile_seq1m_q8t1_write 3:write_bandwidth_kb: 208754.00
  • fedora-42-xfce_volatile_seq1m_q1t1_read 3:read_bandwidth_kb: 292489.00
  • fedora-42-xfce_volatile_seq1m_q1t1_write 3:write_bandwidth_kb: 106325.00
  • fedora-42-xfce_volatile_rnd4k_q32t1_read 3:read_bandwidth_kb: 46029.00
  • fedora-42-xfce_volatile_rnd4k_q32t1_write 3:write_bandwidth_kb: 2084.00
  • fedora-42-xfce_volatile_rnd4k_q1t1_read 3:read_bandwidth_kb: 8014.00
  • fedora-42-xfce_volatile_rnd4k_q1t1_write 3:write_bandwidth_kb: 1975.00

@marmarek marmarek (Member) left a comment


When attaching a device, it issues a call with the full port_id:device_id, but when detaching it issues a call with * as the device_id - why the difference? This is especially problematic when using sys-gui, as * is not allowed in the qrexec call argument. When issuing a call without a device_id, simply use just the port_id (but that's probably in the core-admin-client repo, not here).

If it is not, add it.
"""
feature = self._vm.features.get(feature_name, "")
all_devs: List[str] = [f for f in feature.split(",") if f]
Member

It would be safer to use a space as the separator. While a comma currently isn't part of a device ID, it may be part of some other identifier (or maybe the device ID of some future class?). Space is safe, as it is also forbidden in a qrexec argument, and device IDs need to be compatible with that.
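
A minimal sketch of the suggested change, assuming the feature value is rewritten to be space-separated; the standalone function form and its name are illustrative, not the widget's actual code:

```python
from typing import List

def parse_device_feature(vm, feature_name: str) -> List[str]:
    """Sketch only: parse a space-separated device list from a VM feature.

    Space is a safe separator because it cannot appear in a qrexec argument,
    and therefore cannot appear in a device ID either.
    """
    feature = vm.features.get(feature_name, "") or ""
    # split() without an argument splits on any whitespace and drops empty
    # strings, so no extra filtering is needed
    return feature.split()
```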

Comment on lines +275 to +412
for dev in self.devices.values():
    dev.devices_to_attach_with_me = []
    dev.hide_this_device = False
    dev.show_children = True
Member

This looks like it clears info related to all domains, and yet with a non-None vm it will re-populate it only for that given VM. So changing a feature on one VM will cause the widget to forget the settings for other VMs...
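
A rough sketch of one way to address this, with hypothetical names (self.qapp.domains, _apply_vm_features) standing in for whatever the widget actually uses: reset the per-device state once, then re-apply the features of every VM rather than only the VM whose feature changed.

```python
def _rebuild_feature_state(self):
    # Sketch only: clear the state derived from VM features for every device...
    for dev in self.devices.values():
        dev.devices_to_attach_with_me = []
        dev.hide_this_device = False
        dev.show_children = True
    # ...then re-populate it from all VMs, not just the one that changed,
    # so settings contributed by other VMs are not forgotten
    for vm in self.qapp.domains:
        self._apply_vm_features(vm)  # hypothetical helper re-reading one VM's features
```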

[dev for dev in mic_feature.split(",") if dev]
)

microphone.devices_to_attach_with_me = []
Member

And same here.

self.devices[dev].devices_to_attach_with_me = [microphone]
microphone.devices_to_attach_with_me.append(self.devices[dev])

self.parent_ids_to_hide = []
Member

And here.

@marmarta marmarta (Member Author) commented Jul 2, 2025

> When attaching a device, it issues a call with full port_id:device_id, but when detaches - it issues a call with * as device_id - why the difference? This is especially problematic when using sys-gui, as * is not allowed in the qrexec call argument. When issuing a call without device_id, simply use just the port_id (but that's probably in core-admin-client repo, not here).

I think this is how the API works, see https://github.com/QubesOS/qubes-core-admin-client/blob/main/qubesadmin/tests/devices.py#L169

I don't think I should change it here, and I don't think I could without some painful acrobatics and working around the API...

@marmarta marmarta force-pushed the device_widget_upgrade branch from 2703b5b to 96ddadd on July 2, 2025 15:06
@marmarta marmarta (Member Author) commented Jul 2, 2025

I think I fixed all the problems, but this has not yet been tested for hiding child devices (the last time we attached a USB stick to the test laptop, it stopped booting).

@marmarta marmarta (Member Author) commented Jul 9, 2025

Rewrote some code to be faster and to batch some slow operations together. The tests fail due to QubesOS/qubes-core-admin-client#362

@marmarta marmarta force-pushed the device_widget_upgrade branch from 783753c to 3346912 on July 15, 2025 12:47
@marmarta (Member Author)

(not a real change, added some issue references to commit messages)

def device_list_update(self, vm, _event, **_kwargs):

changed_devices: Dict[str, backend.Device] = {}
def _update_queue(self, vm, device, **_kwargs):
Member

that "device" parameter is actually an event name (device-list-change:usb for example), not the device.

asyncio.create_task(self.update_parents(vm))
if device not in self.dev_update_queue:
    self.dev_update_queue.add(device)
    asyncio.create_task(self.update_assignments())
Member

Shouldn't this pass the device class to the function (taken from the event name)? Then you could refresh only devices of that type.
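
A hedged sketch of what this might look like, assuming the handler's second positional argument is the raw event name (e.g. "device-list-change:usb") and that update_assignments could accept an optional device class; the parameter is illustrative, not the existing signature:

```python
import asyncio

def _update_queue(self, vm, event, **_kwargs):
    # Sketch only: "event" carries the event name, e.g. "device-list-change:usb";
    # everything after the ":" is the device class
    devclass = event.partition(":")[2] or None
    if devclass not in self.dev_update_queue:
        self.dev_update_queue.add(devclass)
        # hypothetical: refresh only devices of the given class
        asyncio.create_task(self.update_assignments(devclass))
```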

@marmarek (Member)

Note to self: update qrexec policy for sys-gui to allow new features.

All relevant information should be available in device-added
and device-removed events.
@marmarta marmarta force-pushed the device_widget_upgrade branch from 098d1d4 to 3710003 on July 18, 2025 18:11
@marmarta (Member Author)

requested changes done, ghost exorcised (hopefully)


codecov bot commented Jul 20, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 92.94%. Comparing base (3a85bee) to head (3710003).
Report is 15 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #267      +/-   ##
==========================================
- Coverage   92.99%   92.94%   -0.06%     
==========================================
  Files          64       64              
  Lines       13034    13063      +29     
==========================================
+ Hits        12121    12141      +20     
- Misses        913      922       +9     


@marmarek marmarek merged commit e4ef31b into QubesOS:main Jul 20, 2025
3 of 5 checks passed