Do not use grant tables for allocating framebuffer #233

Merged
3 commits merged on Jun 12, 2025

Conversation

marmarek (Member) commented Jun 8, 2025

It seems like the framebuffer is never shared with the GUI daemon directly
(at least without GPU passthrough), so it doesn't need to be backed by grant tables. Allocating it as normal userspace memory has several benefits:

  • it can be swapped if needed
  • the kernel has more options for allocating it (it's no longer allocated from kernel memory)
  • it can be resized later, allowing dynamic videoram changes (no need to restart VMs after connecting external display anymore!)

There is one corner case I'm not sure about: when an actual GPU is available and
glamor is used. In that case I keep the old behavior.
See commit messages for more details.
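
To make the idea concrete, here is a minimal sketch of the allocation choice, assuming a hypothetical compile-time switch; none of these identifiers are the driver's actual names:

```c
#include <stdlib.h>

/* Hypothetical switch: define it to restore the old grant-backed path,
 * kept only for the glamor corner case mentioned above. */
/* #define QUBES_FB_USE_GRANTS 1 */

void *alloc_fb_grants(size_t size);   /* stand-in for the grant-backed allocator */

/* Allocate the framebuffer. By default this is plain userspace memory:
 * swappable, not carved out of kernel memory, and resizable later. */
void *alloc_fb(size_t size)
{
#ifdef QUBES_FB_USE_GRANTS
    return alloc_fb_grants(size);     /* old behavior: grant-table-backed pages */
#else
    return calloc(1, size);           /* zero-filled, ordinary anonymous memory */
#endif
}
```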

@HW42 can you sanity-check this? You added the grant table support initially, so
maybe you remember why it was done this way...

Fixes QubesOS/qubes-issues#7448
Fixes QubesOS/qubes-issues#9992

marmarek (Member, Author) commented Jun 8, 2025

Fixes QubesOS/qubes-issues#9992

I can confirm it does indeed fix this issue. On a system with a 4K screen, without this change Xorg takes over 90MB RSS, and in a 300MB sys-net (HVM, with wired+wireless devices) that results in the OOM killer killing it. With this change, nothing gets killed and Xorg takes below 1MB RSS (with some swap usage). The virtual size of Xorg is similar in both cases, about 270MB (as expected).

Note that this will not only fix systems with huge monitors, but will also reduce RAM usage in all VMs.
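
For a rough sense of scale (a back-of-the-envelope estimate, not a measurement from this PR): a single 3840x2160 framebuffer at 32 bits per pixel is already about 32 MiB, and the driver allocates for a virtual screen that can be larger than one physical display:

```c
#include <stdio.h>

int main(void)
{
    unsigned long w = 3840, h = 2160, bytes_per_pixel = 4;
    unsigned long bytes = w * h * bytes_per_pixel;

    /* 3840 * 2160 * 4 = 33177600 bytes, roughly 31.6 MiB */
    printf("4K framebuffer: %lu bytes (~%.1f MiB)\n",
           bytes, bytes / (1024.0 * 1024.0));
    return 0;
}
```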

marmarek (Member, Author) commented Jun 8, 2025

Fixes QubesOS/qubes-issues#7448

And I tested this too: after connecting an external display, one can interact with windows anywhere on the screen, and nothing appears to explode (at least after a short test).

qubesos-bot commented Jun 9, 2025

OpenQA test summary

Complete test suite and dependencies: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025061202-4.3&flavor=pull-requests

Test run included the following:

New failures, excluding unstable

Compared to: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025061004-4.3&flavor=update

  • system_tests_qwt_win10@hw13

    • windows_install: wait_serial (wait serial expected)
      # wait_serial expected: qr/LDWI1-\d+-/...

    • windows_install: Failed (test died + timed out)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

  • system_tests_qwt_win10_seamless@hw13

    • windows_clipboard_and_filecopy: unnamed test (unknown)
    • windows_clipboard_and_filecopy: Failed (test died)
      # Test died: no candidate needle with tag(s) 'windows-Edge-address-...
  • system_tests_dispvm

Failed tests

13 failures
  • system_tests_qwt_win10@hw13

    • windows_install: wait_serial (wait serial expected)
      # wait_serial expected: qr/LDWI1-\d+-/...

    • windows_install: Failed (test died + timed out)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

  • system_tests_qwt_win10_seamless@hw13

    • windows_clipboard_and_filecopy: unnamed test (unknown)
    • windows_clipboard_and_filecopy: Failed (test died)
      # Test died: no candidate needle with tag(s) 'windows-Edge-address-...
  • system_tests_extra

    • TC_00_QVCTest_whonix-workstation-17: test_010_screenshare (failure)
      AssertionError: 1 != 0 : Timeout waiting for /dev/video0 in test-in...
  • system_tests_dispvm

  • system_tests_kde_gui_interactive

    • gui_keyboard_layout: wait_serial (wait serial expected)
      # wait_serial expected: "echo -e '[Layout]\nLayoutList=us,de' | sud...

    • gui_keyboard_layout: Failed (test died)
      # Test died: command 'test "$(cd ~user;ls e1*)" = "$(qvm-run -p wor...

Fixed failures

Compared to: https://openqa.qubes-os.org/tests/142375#dependencies

10 fixed

Unstable tests

Performance Tests

Performance degradation:

8 performance degradations
  • debian-12-xfce_exec-data-duplex-root: 81.91 🔺 ( previous job: 70.01, degradation: 117.00%)
  • fedora-41-xfce_exec-data-duplex: 76.19 🔺 ( previous job: 68.34, degradation: 111.49%)
  • dom0_varlibqubes_rnd4k_q32t1_write 3:write_bandwidth_kb: 5986.00 :small_red_triangle: ( previous job: 8874.00, degradation: 67.46%)
  • fedora-41-xfce_root_seq1m_q8t1_write 3:write_bandwidth_kb: 90052.00 :small_red_triangle: ( previous job: 138008.00, degradation: 65.25%)
  • fedora-41-xfce_root_seq1m_q1t1_write 3:write_bandwidth_kb: 34038.00 :small_red_triangle: ( previous job: 75708.00, degradation: 44.96%)
  • fedora-41-xfce_root_rnd4k_q32t1_write 3:write_bandwidth_kb: 1462.00 :small_red_triangle: ( previous job: 3547.00, degradation: 41.22%)
  • fedora-41-xfce_private_seq1m_q1t1_write 3:write_bandwidth_kb: 36658.00 :small_red_triangle: ( previous job: 43760.00, degradation: 83.77%)
  • fedora-41-xfce_volatile_rnd4k_q1t1_write 3:write_bandwidth_kb: 1295.00 :small_red_triangle: ( previous job: 1713.00, degradation: 75.60%)

Remaining performance tests:

64 tests
  • debian-12-xfce_exec: 6.75 🟢 ( previous job: 8.63, improvement: 78.23%)
  • debian-12-xfce_exec-root: 29.39 🟢 ( previous job: 29.44, improvement: 99.83%)
  • debian-12-xfce_socket: 7.83 🟢 ( previous job: 8.50, improvement: 92.13%)
  • debian-12-xfce_socket-root: 8.39 🔺 ( previous job: 8.31, degradation: 100.96%)
  • debian-12-xfce_exec-data-simplex: 67.50 🔺 ( previous job: 65.51, degradation: 103.03%)
  • debian-12-xfce_exec-data-duplex: 74.27 🔺 ( previous job: 73.55, degradation: 100.99%)
  • debian-12-xfce_socket-data-duplex: 165.15 🔺 ( previous job: 161.35, degradation: 102.35%)
  • fedora-41-xfce_exec: 9.07 🟢 ( previous job: 9.30, improvement: 97.47%)
  • fedora-41-xfce_exec-root: 60.60 🔺 ( previous job: 60.59, degradation: 100.01%)
  • fedora-41-xfce_socket: 8.70 🔺 ( previous job: 8.48, degradation: 102.62%)
  • fedora-41-xfce_socket-root: 8.38 🟢 ( previous job: 8.81, improvement: 95.03%)
  • fedora-41-xfce_exec-data-simplex: 65.50 🟢 ( previous job: 76.90, improvement: 85.18%)
  • fedora-41-xfce_exec-data-duplex-root: 111.42 🔺 ( previous job: 109.83, degradation: 101.45%)
  • fedora-41-xfce_socket-data-duplex: 139.80 🟢 ( previous job: 156.23, improvement: 89.49%)
  • whonix-gateway-17_exec: 7.68 🔺 ( previous job: 7.34, degradation: 104.57%)
  • whonix-gateway-17_exec-root: 37.51 🟢 ( previous job: 39.57, improvement: 94.78%)
  • whonix-gateway-17_socket: 7.64 🟢 ( previous job: 7.85, improvement: 97.26%)
  • whonix-gateway-17_socket-root: 7.17 🟢 ( previous job: 7.89, improvement: 90.85%)
  • whonix-gateway-17_exec-data-simplex: 64.82 🟢 ( previous job: 77.76, improvement: 83.35%)
  • whonix-gateway-17_exec-data-duplex: 81.66 🔺 ( previous job: 78.39, degradation: 104.18%)
  • whonix-gateway-17_exec-data-duplex-root: 97.93 🔺 ( previous job: 90.74, degradation: 107.92%)
  • whonix-gateway-17_socket-data-duplex: 175.20 🔺 ( previous job: 161.95, degradation: 108.18%)
  • whonix-workstation-17_exec: 8.28 🔺 ( previous job: 8.27, degradation: 100.01%)
  • whonix-workstation-17_exec-root: 52.60 🟢 ( previous job: 57.61, improvement: 91.31%)
  • whonix-workstation-17_socket: 8.33 🟢 ( previous job: 8.97, improvement: 92.90%)
  • whonix-workstation-17_socket-root: 9.44 🟢 ( previous job: 9.46, improvement: 99.83%)
  • whonix-workstation-17_exec-data-simplex: 74.51 🟢 ( previous job: 74.54, improvement: 99.96%)
  • whonix-workstation-17_exec-data-duplex: 73.05 🟢 ( previous job: 74.84, improvement: 97.60%)
  • whonix-workstation-17_exec-data-duplex-root: 86.09 🔺 ( previous job: 86.00, degradation: 100.10%)
  • whonix-workstation-17_socket-data-duplex: 144.13 🟢 ( previous job: 160.20, improvement: 89.97%)
  • dom0_root_seq1m_q8t1_read 3:read_bandwidth_kb: 336621.00 :green_circle: ( previous job: 289982.00, improvement: 116.08%)
  • dom0_root_seq1m_q8t1_write 3:write_bandwidth_kb: 110816.00 :green_circle: ( previous job: 101988.00, improvement: 108.66%)
  • dom0_root_seq1m_q1t1_read 3:read_bandwidth_kb: 218180.00 :green_circle: ( previous job: 14284.00, improvement: 1527.44%)
  • dom0_root_seq1m_q1t1_write 3:write_bandwidth_kb: 99386.00 :green_circle: ( previous job: 32696.00, improvement: 303.97%)
  • dom0_root_rnd4k_q32t1_read 3:read_bandwidth_kb: 17430.00 :green_circle: ( previous job: 17102.00, improvement: 101.92%)
  • dom0_root_rnd4k_q32t1_write 3:write_bandwidth_kb: 6053.00 :green_circle: ( previous job: 1091.00, improvement: 554.81%)
  • dom0_root_rnd4k_q1t1_read 3:read_bandwidth_kb: 11668.00 :green_circle: ( previous job: 11086.00, improvement: 105.25%)
  • dom0_root_rnd4k_q1t1_write 3:write_bandwidth_kb: 3570.00 :green_circle: ( previous job: 1840.00, improvement: 194.02%)
  • dom0_varlibqubes_seq1m_q8t1_read 3:read_bandwidth_kb: 464177.00 :green_circle: ( previous job: 289182.00, improvement: 160.51%)
  • dom0_varlibqubes_seq1m_q8t1_write 3:write_bandwidth_kb: 131957.00 :green_circle: ( previous job: 122848.00, improvement: 107.41%)
  • dom0_varlibqubes_seq1m_q1t1_read 3:read_bandwidth_kb: 435998.00 :green_circle: ( previous job: 433654.00, improvement: 100.54%)
  • dom0_varlibqubes_seq1m_q1t1_write 3:write_bandwidth_kb: 167467.00 :small_red_triangle: ( previous job: 167872.00, degradation: 99.76%)
  • dom0_varlibqubes_rnd4k_q32t1_read 3:read_bandwidth_kb: 108460.00 :small_red_triangle: ( previous job: 108760.00, degradation: 99.72%)
  • dom0_varlibqubes_rnd4k_q1t1_read 3:read_bandwidth_kb: 7682.00 :green_circle: ( previous job: 6356.00, improvement: 120.86%)
  • dom0_varlibqubes_rnd4k_q1t1_write 3:write_bandwidth_kb: 4847.00 :green_circle: ( previous job: 4420.00, improvement: 109.66%)
  • fedora-41-xfce_root_seq1m_q8t1_read 3:read_bandwidth_kb: 372231.00 :small_red_triangle: ( previous job: 401292.00, degradation: 92.76%)
  • fedora-41-xfce_root_seq1m_q1t1_read 3:read_bandwidth_kb: 325442.00 :green_circle: ( previous job: 306332.00, improvement: 106.24%)
  • fedora-41-xfce_root_rnd4k_q32t1_read 3:read_bandwidth_kb: 92539.00 :green_circle: ( previous job: 88110.00, improvement: 105.03%)
  • fedora-41-xfce_root_rnd4k_q1t1_read 3:read_bandwidth_kb: 8263.00 :green_circle: ( previous job: 7675.00, improvement: 107.66%)
  • fedora-41-xfce_root_rnd4k_q1t1_write 3:write_bandwidth_kb: 1796.00 :green_circle: ( previous job: 950.00, improvement: 189.05%)
  • fedora-41-xfce_private_seq1m_q8t1_read 3:read_bandwidth_kb: 399762.00 :small_red_triangle: ( previous job: 404699.00, degradation: 98.78%)
  • fedora-41-xfce_private_seq1m_q8t1_write 3:write_bandwidth_kb: 122704.00 :green_circle: ( previous job: 99783.00, improvement: 122.97%)
  • fedora-41-xfce_private_seq1m_q1t1_read 3:read_bandwidth_kb: 353770.00 :green_circle: ( previous job: 330572.00, improvement: 107.02%)
  • fedora-41-xfce_private_rnd4k_q32t1_read 3:read_bandwidth_kb: 93675.00 :green_circle: ( previous job: 86107.00, improvement: 108.79%)
  • fedora-41-xfce_private_rnd4k_q32t1_write 3:write_bandwidth_kb: 4410.00 :green_circle: ( previous job: 1209.00, improvement: 364.76%)
  • fedora-41-xfce_private_rnd4k_q1t1_read 3:read_bandwidth_kb: 8497.00 :small_red_triangle: ( previous job: 8908.00, degradation: 95.39%)
  • fedora-41-xfce_private_rnd4k_q1t1_write 3:write_bandwidth_kb: 1063.00 :green_circle: ( previous job: 653.00, improvement: 162.79%)
  • fedora-41-xfce_volatile_seq1m_q8t1_read 3:read_bandwidth_kb: 371967.00 :green_circle: ( previous job: 335115.00, improvement: 111.00%)
  • fedora-41-xfce_volatile_seq1m_q8t1_write 3:write_bandwidth_kb: 111738.00 :green_circle: ( previous job: 88088.00, improvement: 126.85%)
  • fedora-41-xfce_volatile_seq1m_q1t1_read 3:read_bandwidth_kb: 352225.00 :green_circle: ( previous job: 323135.00, improvement: 109.00%)
  • fedora-41-xfce_volatile_seq1m_q1t1_write 3:write_bandwidth_kb: 60728.00 :small_red_triangle: ( previous job: 62556.00, degradation: 97.08%)
  • fedora-41-xfce_volatile_rnd4k_q32t1_read 3:read_bandwidth_kb: 83576.00 :small_red_triangle: ( previous job: 86131.00, degradation: 97.03%)
  • fedora-41-xfce_volatile_rnd4k_q32t1_write 3:write_bandwidth_kb: 4934.00 :green_circle: ( previous job: 2636.00, improvement: 187.18%)
  • fedora-41-xfce_volatile_rnd4k_q1t1_read 3:read_bandwidth_kb: 8129.00 :green_circle: ( previous job: 8052.00, improvement: 100.96%)

HW42 (Contributor) commented Jun 11, 2025

@HW42 can you sanity-check this? You added the grant table support initially, so
maybe you remember why it was done this way...

I can't recall a specific reason. Most likely I was just making sure that all pixmaps are gnt-backed.

Based on your testing and thinking about it again, I think your change makes sense. Unlike the old method, the gnt-based protocol doesn't even have a way to specify an offset, so you would have to share the complete pixmap (or, in theory, some page-aligned part of it). The X server would then need to hand the framebuffer pixmap to a client, and that client would need to map it. Based on that, I would even go so far as to throw it out completely.
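
For context on the "page aligned part" remark: Xen grants cover whole pages, so a sub-region of a pixmap could only be shared after rounding its bounds out to page boundaries. A small illustration, assuming 4 KiB pages (the helpers are hypothetical):

```c
#include <stddef.h>

#define PAGE_SIZE 4096u

/* Round an offset down/up to a page boundary; only the enclosing
 * page-aligned range could be covered by grant references. */
size_t page_align_down(size_t off)
{
    return off & ~((size_t)PAGE_SIZE - 1);
}

size_t page_align_up(size_t off)
{
    return (off + PAGE_SIZE - 1) & ~((size_t)PAGE_SIZE - 1);
}
```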

Do we have someone who uses the glamor setup (I see it was added around 2021)? Then we could let them test-drive it, to be really sure.

A Contributor commented on this hunk:

    if (!newFBBase) {
        xf86DrvMsg(pScrn->scrnIndex, X_ERROR,
                   "Unable to set up a virtual screen size of %dx%d, "
                   "cannot allocate memory (%lu bytes)\n",

Suggested change:

    -              "cannot allocate memory (%lu bytes)\n",
    +              "cannot allocate memory (%zu bytes)\n",

nitpick: %zu for size_t
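
Background on the nitpick: the byte count is a size_t, and %lu promises unsigned long; the two happen to match on x86_64 Linux, but on ABIs where they differ %lu is undefined behavior and trips -Wformat. C99's %zu is the specifier dedicated to size_t. A standalone illustration:

```c
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    size_t fb_size = (size_t)3840 * 2160 * 4;

    /* %zu matches size_t on every ABI, unlike %lu. */
    printf("cannot allocate memory (%zu bytes)\n", fb_size);
    return 0;
}
```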

HW42 (Contributor) commented Jun 11, 2025

Based on your testing [...]

Hmm, on openQA the guivm tests failed due to qubes-menu dying with "Request refused" (likely unrelated). So this should be fixed and re-run.

marmarek (Member, Author):

Do we have someone who uses the glamor setup (I see it was added around 2021)? Then we could let them test-drive it, to be really sure.

It was added together with (and for) GVT-g support, and the most important part (a dom0 kernel patch) is not in our repos, as it was too risky. That technology is basically dead. Theoretically glamor should also work with any GPU passthrough, and I did manage to get it initialized (at least according to the X server logs), but I'm not really sure how to test it properly. I can tell it didn't crash (even without limiting this change to the non-glamor case), but I can't be sure it actually exercised all possible paths.

marmarek (Member, Author):

Hmm, on openQA the guivm tests failed due to qubes-menu dying with "Request refused" (likely unrelated). So this should be fixed and re-run.

The most recent run, yes, but earlier runs did pass (see the edit history of that report, especially the initial comment).

HW42 (Contributor) commented Jun 11, 2025

Do we have someone who uses the glamor setup (I see it was added around 2021)? Then we could let them test-drive it, to be really sure.

It was added together with (and for) GVT-g support, and the most important part (a dom0 kernel patch) is not in our repos, as it was too risky. That technology is basically dead. Theoretically glamor should also work with any GPU passthrough, and I did manage to get it initialized (at least according to the X server logs), but I'm not really sure how to test it properly. I can tell it didn't crash (even without limiting this change to the non-glamor case), but I can't be sure it actually exercised all possible paths.

Given that state, and that it's not likely needed, it sounds like it would be fine to remove that case; if someone works on it again in the future, they can re-add it, should it actually turn out to be necessary. OTOH, keeping that if also doesn't cost much. I'm fine either way.

Hmm, on openQA the guivm tests failed due to qubes-menu dying with "Request refused" (likely unrelated). So this should be fixed and re-run.

The most recent run, yes, but earlier runs did pass (see the edit history of that report, especially the initial comment).

I see, great.

marmarek (Member, Author):

Given that state, and that it's not likely needed, it sounds like it would be fine to remove that case; if someone works on it again in the future, they can re-add it, should it actually turn out to be necessary. OTOH, keeping that if also doesn't cost much. I'm fine either way.

What I'm worried about is that FBBase is given out in DUMMY_OpenFramebuffer() and I have no idea how (and for how long) it is used afterwards. If it's used for a longer time, with the realloc() change it could lead to a use-after-free. Logically, a resolution change should force re-opening the framebuffer, but I found no evidence that this is actually guaranteed...
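
To spell out the concern with a minimal sketch (hypothetical helper names; the real path goes through DUMMY_OpenFramebuffer()): if whoever received the framebuffer pointer keeps it across a resize, a moving realloc() leaves them with a dangling pointer:

```c
#include <stdlib.h>

static void *fb;   /* the driver's current framebuffer (illustrative) */

/* Hands the framebuffer out, e.g. to a DGA client. */
void *open_framebuffer(size_t size)
{
    fb = calloc(1, size);
    return fb;
}

/* Grows the framebuffer after a resolution change. */
int resize_framebuffer(size_t new_size)
{
    void *new_fb = realloc(fb, new_size);
    if (!new_fb)
        return -1;
    /* realloc() may have moved the buffer: any pointer returned earlier
     * by open_framebuffer() is now potentially dangling. */
    fb = new_fb;
    return 0;
}
```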

HW42 (Contributor) commented Jun 11, 2025

That's a good point. I have to take another look, but I'm wondering if DGA ever worked; for example, AdjustFrame doesn't look functional...

HW42 (Contributor) commented Jun 11, 2025

Sorry, I meant SetViewport (which calls DUMMYAdjustFrame).

marmarek (Member, Author):

but I'm wondering if DGA ever worked

No idea, especially since, as you pointed out, there appears to be no way to actually see the framebuffer from dom0/the GUI domain... DGA has been there for quite a bit longer than glamor (git blame points at the original import of the dummy driver in 2013, unmodified since then). If it was always broken and nobody complained, I guess we can simply remove it completely. OTOH, with GPU passthrough there may be use cases where something would like to push rendered results via DGA directly (some games?), in which case it might be better to keep it for now (disabled? we have a configure option for it) and fix it later.

HW42 (Contributor) commented Jun 11, 2025

If it was always broken and nobody complained, I guess we can simply remove it completely. OTOH, with GPU passthrough there may be use cases where something would like to push rendered results via DGA directly (some games?), in which case it might be better to keep it for now (disabled? we have a configure option for it) and fix it later.

So having looked at it:

  • It seems to have been broken all the time
  • Looking at reverse dependencies of libxxf86dga in Fedora/Debian there's only a moderate list of software that might make use of it
  • Adding (actual) support would be significant work
  • An X client using it needs to run as root

I would vote for dropping it. But if you prefer to just disable it, fine with me too.

marmarek (Member, Author):

I would vote for dropping it.

OK, let's do it then.

marmarek added a commit to marmarek/qubes-gui-agent-linux that referenced this pull request Jun 11, 2025

It appears to have been broken forever. Since nobody complained, it looks like
nobody actually needs it. Drop it.

See discussion at QubesOS#233

marmarek added 3 commits June 11, 2025 20:45
If glamor is not used (which is the default), FBBasePriv seems to be unused:
the GUI agent doesn't share the root window with the GUI daemon. If glamor is
used, FBBasePriv is probably unused in practice too (the GUI agent doesn't
support sharing part of a pixmap anyway), but I'm not 100% sure about it; see
qubes_create_screen_resources(): the "pixmap" of the root window is passed to
glamor, and it isn't clear whether it could end up used in some window that is
handled by the GUI agent.

Based on this observation, use normal memory for the framebuffer. The main
benefit is not having the framebuffer mlock()-ed, which helps especially with
small VMs. Allocating it as normal userspace memory also gives the kernel more
flexibility in finding memory for it (it doesn't need to be physically
contiguous, etc.).
Due to the glamor case, leave the grant-based FBBase allocation in the code,
but disabled by default, and leave a #define to re-enable it, with an
appropriate comment.

Fixes QubesOS/qubes-issues#9992

It appears to have been broken forever. Since nobody complained, it looks like
nobody actually needs it. Drop it.

See discussion at QubesOS#233

If FBBase is not backed by grant tables, it's now possible to reallocate it.
This is especially useful when a new external display is connected and the
overall resolution increases.

Note that the realloc() call may change the FBBase address. In practice,
FBBase is given to fbScreenInit, which does save it at some point
(fbScreenInit->fbFinishScreenInit->miScreenInit->miScreenDevPrivateInit), but
then it's used only to attach it to the screen pixmap, which is updated via a
ModifyPixmapHeader() call a few lines below anyway.

This change also adjusts pScrn->videoRam, but fortunately the X server seems
to use it only at startup, and only to validate the initial mode list.

Fixes QubesOS/qubes-issues#7448
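
A sketch of the resize path this commit message describes, assuming the Xorg driver SDK headers are available; the function name and exact call placement are illustrative, not the literal patch:

```c
#include <stdlib.h>
#include "xf86.h"
#include "scrnintstr.h"
#include "pixmapstr.h"

static void *FBBase;   /* illustrative module-level framebuffer pointer */

static Bool
qubes_resize_fb(ScrnInfoPtr pScrn, int width, int height)
{
    ScreenPtr pScreen = xf86ScrnToScreen(pScrn);
    int stride = width * (pScrn->bitsPerPixel / 8);
    void *newFBBase = realloc(FBBase, (size_t)stride * height);

    if (!newFBBase)
        return FALSE;
    FBBase = newFBBase;   /* realloc() may have moved the buffer */

    /* Re-point the screen pixmap at the (possibly moved) buffer;
     * -1 keeps depth and bits-per-pixel unchanged. */
    return (*pScreen->ModifyPixmapHeader)((*pScreen->GetScreenPixmap)(pScreen),
                                          width, height, -1, -1, stride,
                                          FBBase);
}
```
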
marmarek (Member, Author):

Then the last question is about xf86_qubes_pixmap_set_private() being called for the root window (in qubes_create_screen_resources()). As you noted, there is currently no way to share part of that buffer, so it's unlikely to need grant tables. So I disabled grant-based FBBase even for the glamor case, but left the code in place just in case (it's just a few lines, so no big deal).

HW42 (Contributor) left a review:

LGTM. (Assuming tests pass. I didn't test locally, since I think this code path should be exercised well enough by our openQA tests.)

marmarek merged commit d771e9e into QubesOS:main on Jun 12, 2025
2 of 3 checks passed
marmarek (Member, Author):

Ugh, now I found this in one of the VMs:

[     9.511] (EE) can't dump window without grant table allocation
[     9.517] (EE) can't dump window without grant table allocation
[ 55848.651] (EE) can't dump window without grant table allocation

I haven't noticed any breakage, like missing content in some window, but nevertheless it tried to dump the content of a window without grants. OTOH, it appears that at least one of those messages is related to this window:

     0x160001b "[redacted]: 451x576+734+1752  +734+1752

which is definitely not the root window. Maybe an unrelated issue? Some race condition or something?

HW42 (Contributor) commented Jun 20, 2025

Hmm ...

Did you notice any effects, like missing window content?

Do you have a reproducer? If not, one idea would be to make that error fatal (for testing), let openQA run, and see if we ever hit it there.

Maybe an unrelated issue? Some race condition or something?

Possibly. Or given the window size, likely. But we should still figure out where that pixmap is coming from.

marmarek (Member, Author):

Did you notice any effects, like missing window content?

Not really, everything looks normal.

Do you have a reproducer?

Yes. It appears that starting chromium triggers this. I tried to capture that window's content in the VM using xwd, but instead I got an area of the window that was underneath it (a terminal in my case). So maybe it's simply a window that doesn't have any pixmap (so the X server falls back to the root window) and isn't really displayed?

HW42 (Contributor) commented Jun 21, 2025

So it turns out to be a race: you get the error when dump_window_grant_refs executes before the window gets mapped.

I could easily trigger it with other programs too. zenity --info test hit it most of the time for me.

For details see #236.

HW42 (Contributor) commented Jun 21, 2025

Let's see which corner cases that PR will uncover... ;]

Successfully merging this pull request may close these issues.

  • sys-net has too little memory if 4K screen is connected
  • Mouse events only in limited area after connecting display