nvme_driver: support multi-page allocations for queues #2061
Conversation
Pull Request Overview
This PR enhances the NVMe driver to support multi-page queue allocations, enabling higher queue depths and better I/O performance. The driver now attempts to allocate multi-page contiguous memory for queues while maintaining backward compatibility by falling back to single-page allocations when contiguous memory is unavailable.
Key changes:
- Expanded queue size support from single-page to multi-page allocations
- Added fallback mechanism for non-contiguous memory scenarios (see the sketch after this list)
- Enhanced DMA allocation with private pool fallback support
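To make the fallback concrete, here is a minimal Rust sketch of the allocation strategy described above: attempt a multi-page allocation first, accept it only if the backing pages are contiguous, and otherwise drop back to a single page so the driver can still make forward progress. `DmaAlloc`, `Buffer`, and `alloc_queue` are illustrative stand-ins, not the driver's actual types.

```rust
const PAGE_SIZE: usize = 4096;

/// Stand-in for a DMA-mapped memory block.
struct Buffer {
    /// Page frame numbers backing the allocation.
    pfns: Vec<u64>,
}

impl Buffer {
    /// True when the backing pages form one physically contiguous run.
    fn is_contiguous(&self) -> bool {
        self.pfns.windows(2).all(|w| w[1] == w[0] + 1)
    }
}

/// Stand-in for the DMA client used by the driver.
trait DmaAlloc {
    fn alloc(&self, len: usize) -> Option<Buffer>;
}

/// Prefer `pages` worth of queue memory; fall back to a single page
/// (trivially contiguous) when contiguity cannot be obtained.
fn alloc_queue(alloc: &impl DmaAlloc, pages: usize) -> Option<(Buffer, usize)> {
    if let Some(buf) = alloc.alloc(pages * PAGE_SIZE) {
        if buf.is_contiguous() {
            return Some((buf, pages));
        }
    }
    alloc.alloc(PAGE_SIZE).map(|buf| (buf, 1))
}
```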
Reviewed Changes
Copilot reviewed 10 out of 11 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| vm/devices/storage/disk_nvme/nvme_driver/src/queue_pair.rs | Core queue pair implementation with multi-page support and contiguous memory detection |
| vm/devices/storage/disk_nvme/nvme_driver/src/driver.rs | Driver initialization updates to handle variable queue sizes |
| vm/devices/storage/disk_nvme/nvme_driver/src/queues.rs | Queue structure updates with debug logging and completion read optimization |
| vm/devices/user_driver/src/memory.rs | Added contiguous PFN detection utility method |
| openhcl/openhcl_dma_manager/src/lib.rs | Enhanced DMA manager with private pool fallback allocation strategy |
| vmm_tests/vmm_tests/tests/tests/x86_64/openhcl_linux_direct.rs | Test updates to enable VTL2 GPA pool and add cached I/O testing |
| openhcl/openhcl_boot/src/main.rs | Boot parameter handling for NVMe keep-alive control |
| openhcl/openhcl_boot/src/cmdline.rs | Command line parsing for NVMe keep-alive disable option |
| openhcl/openhcl_dma_manager/Cargo.toml | Added tracing dependency for logging support |
| vm/devices/user_driver/src/lib.rs | API documentation updates for DMA allocation |
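The vm/devices/user_driver/src/memory.rs entry above mentions a contiguous-PFN detection utility. A minimal sketch of such a check, operating on a slice of page frame numbers, might look like the following; the function name and signature are assumptions, not the actual API.

```rust
/// Returns true when the page frame numbers describe one physically
/// contiguous run (each PFN is exactly one greater than its predecessor).
/// Illustrative helper; the real utility method may differ.
fn pfns_are_contiguous(pfns: &[u64]) -> bool {
    pfns.windows(2).all(|pair| pair[1] == pair[0] + 1)
}

fn main() {
    assert!(pfns_are_contiguous(&[10, 11, 12, 13]));
    assert!(!pfns_are_contiguous(&[10, 11, 13]));
    // A single-page allocation is trivially contiguous.
    assert!(pfns_are_contiguous(&[42]));
}
```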
cq_addr: u64,
}

impl Inspect for QueuePair {
I think this can move to the inspect derive with the new send helper John added?
Sure, but his change isn't in. #2001 is the one you're thinking of, right?
Yes it is, I thought it was merged already, my bad.
Force-pushed from fecc9ab to 7cf0691
The vast majority of this work was done by @chris-oo, many thanks to him!
The outstanding queue depth from the nvme driver can be limited by the number of entries in the IO submission queue. Prior to this change, OpenHCL's nvme driver supported only one page's worth of submission queue entries. After this change, the driver supports as many queue entries as the device allows.
Contiguous allocations are likely but not strictly guaranteed. This code opts to allow forward progress by falling back to single-page allocations. That fallback path is testable (and has been validated manually and with some of the existing CI tests) by not configuring a private pool.
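For a sense of scale, here is a back-of-the-envelope calculation using the standard NVMe entry sizes (64-byte submission queue entries, 16-byte completion queue entries). The 16-page figure is only an example, not necessarily what the driver allocates.

```rust
const PAGE_SIZE: u64 = 4096;
const SQE_SIZE: u64 = 64; // NVMe submission queue entry size
const CQE_SIZE: u64 = 16; // NVMe completion queue entry size

fn main() {
    // A single page caps the submission queue at 64 entries.
    assert_eq!(PAGE_SIZE / SQE_SIZE, 64);

    // With, say, a 16-page contiguous allocation, the queue can hold
    // 1024 submission entries (still subject to the device's own limit).
    let pages = 16;
    println!("sq entries: {}", pages * PAGE_SIZE / SQE_SIZE);
    println!("cq entries: {}", pages * PAGE_SIZE / CQE_SIZE);
}
```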
There are some other design tradeoffs, which are noted as comments in the change.