Use direct I/O for loop devices #127
Conversation
This statement could use some benchmark results attached. If it is "huge", it should be fairly easy to observe.
This is fine, but should be a separate patch.
Indeed it should be. I plan on using fio in a VM with and without this patch. Any suggestions for benchmark workloads?
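One possible workload, sketched with fio from inside the VM (the device path /dev/xvdb is hypothetical; substitute whichever block device is backed by the dom0 loop device):

```sh
# Random 4k reads at queue depth 32; --direct=1 bypasses the VM's own page
# cache so the dom0 loop-device path dominates the measurement. Run once
# with and once without the patch and compare IOPS.
fio --name=randread --filename=/dev/xvdb --direct=1 --rw=randread \
    --bs=4k --iodepth=32 --ioengine=libaio --runtime=60 --time_based \
    --group_reporting
```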
Force-pushed from e879ec2 to 2044fa8
Isn't
Maybe something like
Currently no: https://github.com/QubesOS/qubes-vmm-xen/blob/xen-4.14/xen.spec.in#L156
Gah, I always forget to check that the patch file is actually applied. Good riddance, I guess. But going through the repo commits I can't find any rationale for commenting it out. Is the script not a performance issue anymore?
@rustybird It still is, and I have plans for fixing that.
This patch wasn't welcome upstream (this part of libxl is rather generic, while the patch supports only the Linux case), so I haven't spent time updating it to Xen 4.14. Upstream devs suggested improving block script performance by using a binary instead, which I think may be easier to maintain than a patch that needs to be rebased on updates.
And BTW, this patch as-is may have a memory leak.
I agree. Some future changes involving Linux 5.15+ diskseq will require a binary.
Makes sense. Thanks for the explanation.
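For anyone unfamiliar: diskseq is a per-device sequence number that changes on every (re)creation of a block device, which is what makes it useful for detecting loop-device reuse races. A minimal illustration, assuming Linux 5.15+ and an attached loop0:

```sh
# Each attachment gets a fresh, monotonically increasing sequence number;
# if loop0 is detached and the name is reused, the value changes.
cat /sys/block/loop0/diskseq
```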
PipelineRefresh
The build fails.
Force-pushed from 2044fa8 to cced625
PipelineRetry
Arch build fails, I think you need to add the patch to |
Force-pushed from 3f25f85 to bfe5ffd
OpenQA test summary

Complete test suite and dependencies: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.1&build=2022082619-4.1&flavor=pull-requests

New failures, excluding unstable (compared to: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.1&build=2022071906-4.1&flavor=update)

Failed tests: 38 failures

Fixed failures (compared to: https://openqa.qubes-os.org/tests/44309#dependencies): 22 fixed

Unstable tests
I think it's worth sending this patch upstream.
Worth waiting for upstream patch review, or is this obvious enough to keep downstream for now?
I'm fine with merging it once the patch is sent upstream (not necessarily committed yet).
Makes sense. The kernel patch for grant table handling is much trickier and I would prefer to wait until I at least get a “this is good”.
@DemiMarie have you sent the patch upstream?
@marmarek I don’t think so, sorry. I forgot.
Force-pushed from bfe5ffd to 23d5b18
Isn't this on by default?

My understanding of the above is that it isn't: direct I/O stays off unless --direct-io is passed, and "on" is only the default value of the flag's optional argument.

Yes, I think you're right, I was confused by the off option.
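Concretely, with util-linux losetup (the image path is hypothetical):

```sh
# Direct I/O must be requested explicitly; a plain attach leaves it off.
losetup --find --show --direct-io=on disk.img

# The DIO column shows whether direct I/O is active for each loop device.
losetup --list --output NAME,BACK-FILE,DIO
```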
This is a huge performance improvement for two reasons:
1. It uses the filesystem’s asynchronous I/O support, rather than using synchronous I/O.
2. It bypasses the page cache, removing a redundant layer of caching and associated overhead.

Fixes QubesOS/qubes-issues#7332.
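As a rough illustration of point 2, observable from a shell (the file name is hypothetical; this is not part of the patch):

```sh
# Buffered reads populate the page cache, so a repeated read is served
# from RAM; with iflag=direct the cache is bypassed on every run.
dd if=disk.img of=/dev/null bs=1M count=512
dd if=disk.img of=/dev/null bs=1M count=512               # fast: cache hit
dd if=disk.img of=/dev/null bs=1M count=512 iflag=direct  # cache bypassed
```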
Force-pushed from 23d5b18 to b43e630
For the record: sent upstream here: https://lore.kernel.org/xen-devel/[email protected]/T/#u. There is a pending request to handle old losetup versions (which don't support the option). That should be done at some point, but I'm not going to block merging just on this.
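A possible shape for that fallback, assuming the block script probes losetup first ($file stands in for the backing image; this is a sketch, not the actual follow-up patch):

```sh
# Use --direct-io only when this losetup build advertises it, so old
# util-linux versions still get a working (if slower) attach.
if losetup --help 2>&1 | grep -q -- '--direct-io'; then
    dev=$(losetup --find --show --direct-io=on "$file")
else
    dev=$(losetup --find --show "$file")
fi
```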
This is a huge performance improvement for two reasons:
1. It uses the filesystem’s asynchronous I/O support, rather than using synchronous I/O.
2. It bypasses the page cache, removing a redundant layer of caching and associated overhead.

I also took the opportunity to rip out some cruft related to old losetup versions, which Qubes OS doesn't need to support anymore.

Fixes QubesOS/qubes-issues#7332.
Marking as draft because I have not tested this yet, and there is the possibility that it could break something.