Use direct I/O for loop devices #7332
Could this be causing #5086?
Possibly
This is a huge performance improvement for two reasons:
1. It uses the filesystem's asynchronous I/O support, rather than synchronous I/O.
2. It bypasses the page cache, removing a redundant layer of caching and its associated overhead.

I also took the opportunity to rip out some cruft related to old losetup versions, which Qubes OS doesn't need to support anymore. Fixes QubesOS/qubes-issues#7332.
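For readers following along, here is a minimal sketch (not the code from this pull request) of what the change amounts to: attaching a backing file to a loop device with direct I/O enabled, assuming a util-linux recent enough to accept `--direct-io` at attach time. The file path in the example is only a placeholder.

```python
import subprocess

def attach_loop_direct_io(backing_file):
    """Attach backing_file to a free loop device with direct I/O enabled
    and return the device path (e.g. "/dev/loop0")."""
    result = subprocess.run(
        ["losetup", "--find", "--show", "--direct-io=on", backing_file],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()

# Example (hypothetical path):
# device = attach_loop_direct_io("/var/lib/qubes/test-volume.img")
```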
Reopening as QubesOS/qubes-vmm-xen#127 had to be reverted (#7828).
For the record: for the next iteration of this patch (if it comes), I'm going to block on actually proving that it helps with performance. We've seen that it's a riskier change than it seems, so let's be sure it's actually worth doing.
I agree. Someone on the forum seems to have a benchmark setup that could be used.
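For a quick sanity check before a proper benchmark, something like the following rough sketch could be used (this is not the forum setup mentioned above). It compares buffered and O_DIRECT sequential reads from a loop device; the device path and sizes are assumptions, it needs root, and for a fair comparison the page cache should be dropped between runs.

```python
import mmap
import os
import time

DEVICE = "/dev/loop0"   # assumed test device backed by a file on the pool
CHUNK = 1 << 20         # 1 MiB per read (a multiple of the logical block size)
TOTAL = 1 << 30         # read up to 1 GiB in total

def read_throughput(extra_flags):
    """Return sequential read throughput in MiB/s for the given open() flags."""
    buf = mmap.mmap(-1, CHUNK)  # page-aligned buffer, required for O_DIRECT
    fd = os.open(DEVICE, os.O_RDONLY | extra_flags)
    done = 0
    start = time.monotonic()
    try:
        while done < TOTAL:
            n = os.readv(fd, [buf])
            if n == 0:  # end of device
                break
            done += n
    finally:
        os.close(fd)
    return done / (1 << 20) / (time.monotonic() - start)

print("buffered: %6.0f MiB/s" % read_throughput(0))
print("O_DIRECT: %6.0f MiB/s" % read_throughput(os.O_DIRECT))
```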
If someone wants to play around with this on file(-reflink) without having to build a bleeding-edge util-linux:
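The original snippet isn't reproduced here, but one way to experiment without a newer util-linux is to toggle direct I/O on an already-attached loop device through the `LOOP_SET_DIRECT_IO` ioctl. A sketch, with the ioctl number taken from `<linux/loop.h>` and the device path only as an example:

```python
import fcntl
import os

LOOP_SET_DIRECT_IO = 0x4C08  # from <linux/loop.h>

def set_direct_io(loop_device, enable=True):
    """Enable or disable direct I/O on an already-attached loop device.

    Raises OSError (typically EINVAL) if the backing file or filesystem
    cannot support direct I/O with the current block size.
    """
    fd = os.open(loop_device, os.O_RDWR)
    try:
        fcntl.ioctl(fd, LOOP_SET_DIRECT_IO, 1 if enable else 0)
    finally:
        os.close(fd)

# Example (hypothetical device): set_direct_io("/dev/loop0")
```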
Looks like #7828 now includes a clarification that the failure of the first iteration of the patch (on 4Kn drives) applied to LVM thick, not LVM thin. For the record.
In the LVM Thin installation layout, it would still have affected the legacy 'file' driver.
The problem you're addressing (if any)
Qubes OS currently does not use direct I/O for the pools that use loop devices. This leads to double caching and poor I/O performance. I believe it is the cause of the anomalies in this benchmark.
The solution you'd like
Use O_DIRECT for loop devices.
The value to a user, and who that user might be
Users of the reflink pool will experience improved and more consistent performance.
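For reference, a quick way to check whether attached loop devices actually ended up in direct I/O mode, assuming the usual sysfs layout exposed by the Linux loop driver (this is only an illustrative sketch, not part of the proposal):

```python
import glob
import os

def loop_devices_dio():
    """Yield (device name, backing file, direct I/O enabled) for attached loop devices."""
    for sys_dir in sorted(glob.glob("/sys/block/loop*/loop")):
        name = os.path.basename(os.path.dirname(sys_dir))
        with open(os.path.join(sys_dir, "backing_file")) as f:
            backing = f.read().strip()
        with open(os.path.join(sys_dir, "dio")) as f:
            dio = f.read().strip() == "1"
        yield name, backing, dio

for name, backing, dio in loop_devices_dio():
    print(name, backing, "dio=on" if dio else "dio=off")
```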