Add batch_lookup_and_delete method to PerCpuHashMap #1434
arthur-zhang wants to merge 1 commit into aya-rs:main from
Conversation
Pull request overview
This PR adds batch lookup and delete functionality to PerCpuHashMap, enabling efficient retrieval and removal of multiple key-value pairs in a single system call. The feature requires kernel 5.6+ and uses cursor-based iteration to process large datasets.
Key Changes:
- Implemented `batch_lookup_and_delete()` method on `PerCpuHashMap` that returns keys, per-CPU values, and a cursor for iterating through batches
- Added low-level `bpf_map_lookup_and_delete_batch_per_cpu()` syscall wrapper that handles per-CPU value alignment and memory layout
- Included comprehensive tests for empty maps and maps with multiple entries
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| aya/src/sys/bpf.rs | Implements the low-level BPF syscall wrapper for batch lookup and delete operations on per-CPU maps, including proper memory allocation and per-CPU value alignment |
| aya/src/maps/hash_map/per_cpu_hash_map.rs | Adds public API method batch_lookup_and_delete() with documentation and examples, plus comprehensive unit tests |
```rust
let attr_mut = &mut *(attr as *const _ as *mut aya_obj::generated::bpf_attr);
attr_mut.batch.count = 2;

// Set out_batch (next cursor)
let out_batch_ptr = attr.batch.out_batch as *mut u32;
*out_batch_ptr = 30;
```
The unnecessary unsafe cast can be removed. Since `attr` is already a mutable reference to `bpf_attr` (from the pattern match on line 319), you can set the count directly with `attr.batch.count = 2;` instead of creating `attr_mut`.
Suggested change:
```diff
-let attr_mut = &mut *(attr as *const _ as *mut aya_obj::generated::bpf_attr);
-attr_mut.batch.count = 2;
+attr.batch.count = 2;
 // Set out_batch (next cursor)
 let out_batch_ptr = attr.batch.out_batch as *mut u32;
 *out_batch_ptr = 30;
```
aya/src/sys/bpf.rs
Outdated
```rust
// SAFETY:
// 1. `values_buffer` is allocated with size `batch_size * nr_cpus * value_size`.
// 2. The loop bounds ensure `i < actual_count` (<= batch_size) and `cpu < nr_cpus`.
// 3. Therefore, `value_offset` is always within the bounds of `kernel_mem`.
```
The comment refers to `kernel_mem`, but the actual variable name is `values_buffer`. This should be corrected for accuracy.
Suggested change:
```diff
-// 3. Therefore, `value_offset` is always within the bounds of `kernel_mem`.
+// 3. Therefore, `value_offset` is always within the bounds of `values_buffer`.
```
aya/src/sys/bpf.rs
Outdated
```rust
let mut out_batch = MaybeUninit::<K>::uninit();
let mut keys = vec![unsafe { mem::zeroed() }; batch_size];

let value_size = mem::size_of::<V>().next_multiple_of(8);
```
For consistency with the existing codebase, the alignment calculation should use the pattern `(mem::size_of::<V>() + 7) & !7` instead of `mem::size_of::<V>().next_multiple_of(8)`. While these are functionally equivalent, the codebase uses the bitwise pattern throughout (see aya/src/maps/mod.rs:922, 930, 946).
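As a side note, the equivalence of the two spellings is easy to check with a standalone snippet (illustrative only, not part of the PR):

```rust
fn main() {
    // Round a value size up to an 8-byte boundary, two equivalent ways.
    for size in 0usize..=64 {
        let bitwise = (size + 7) & !7; // bitwise pattern used in aya/src/maps/mod.rs
        let builtin = size.next_multiple_of(8); // std method, stable since Rust 1.73
        assert_eq!(bitwise, builtin);
    }
}
```

The bitwise form works because `!7` clears the low three bits after rounding up, which is exactly "next multiple of 8" for power-of-two alignments.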
Force-pushed from 8bde463 to 5e08a9d.
tamird left a comment
Hello, thanks for the PR! You'll need to run `AYA_BPF_TARGET_ARCH=x86_64 cargo +nightly xtask public-api --bless` to update the API files.
@codex review
@tamird reviewed all commit messages and made 5 comments.
Reviewable status: 0 of 2 files reviewed, 7 unresolved discussions (waiting on @arthur-zhang).
-- commits line 2 at r2:
Please put all the relevant information (which you have in the PR description) in this commit message.
aya/src/sys/bpf.rs line 260 at r2 (raw file):
```rust
}

/// Batch lookup and delete elements from a per-cpu map.
```
This is describing the function but it is anchored on the type. That seems wrong.
aya/src/sys/bpf.rs line 277 at r2 (raw file):
```rust
///
/// # Introduced in kernel v5.6
type PerCpuBatchResult<K, V> = io::Result<(Vec<K>, Vec<PerCpuValues<V>>, Option<K>)>;
```
Can you make the inner type a struct with named fields? This is not very readable.
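A struct with named fields along these lines might look like the following sketch (entirely hypothetical; the field names are illustrative, and the `Pod` bounds and aya's `PerCpuValues<V>` type are replaced with plain types so the snippet stands alone):

```rust
/// Result of one batch lookup-and-delete call on a per-CPU map (sketch only).
struct PerCpuBatch<K, V> {
    /// Keys retrieved in this batch.
    keys: Vec<K>,
    /// One vector of per-CPU values per key (stand-in for aya's `PerCpuValues<V>`).
    values: Vec<Vec<V>>,
    /// Cursor to pass as `in_batch` on the next call; `None` once the map is drained.
    next: Option<K>,
}

fn main() {
    let batch = PerCpuBatch {
        keys: vec![1u32, 2],
        values: vec![vec![10u64, 11], vec![20, 21]],
        next: None,
    };
    // The invariant that made the bare tuple awkward: keys and values
    // always have the same length, one entry per retrieved key.
    assert_eq!(batch.keys.len(), batch.values.len());
    assert!(batch.next.is_none());
}
```

Named fields make the call sites self-documenting, whereas a three-element tuple forces every caller to remember the position of the cursor.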
aya/src/sys/bpf.rs line 310 at r2 (raw file):
```rust
if let Err(e) = unit_sys_bpf(bpf_cmd::BPF_MAP_LOOKUP_AND_DELETE_BATCH, &mut attr) {
    if e.raw_os_error() != Some(ENOENT) {
```
what does ENOENT mean?
ENOENT in the batch API means: no more data, but there may still be items in the current result. @tamird I see similar logic in ebpf-go.
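That convention (ENOENT terminates iteration, but the final call may still return a partial batch that must be kept) can be sketched with a mocked syscall. Everything below is illustrative and not the PR's code:

```rust
use std::io;

const ENOENT: i32 = 2;

// Mocked "batch syscall": returns how many entries were copied out, plus an
// optional error. Err(ENOENT) alongside a nonzero count models the kernel
// returning the last partial batch.
fn fake_batch_syscall(remaining: &mut usize, batch_size: usize) -> (usize, Option<io::Error>) {
    let n = (*remaining).min(batch_size);
    *remaining -= n;
    if *remaining == 0 {
        (n, Some(io::Error::from_raw_os_error(ENOENT)))
    } else {
        (n, None)
    }
}

fn drain_all(mut total: usize, batch_size: usize) -> usize {
    let mut collected = 0;
    loop {
        let (count, err) = fake_batch_syscall(&mut total, batch_size);
        collected += count; // keep the partial batch even when ENOENT is reported
        match err {
            Some(e) if e.raw_os_error() == Some(ENOENT) => break, // no more data
            Some(e) => panic!("real error: {e}"),
            None => {} // more batches remain; continue with the out_batch cursor
        }
    }
    collected
}

fn main() {
    assert_eq!(drain_all(25, 10), 25); // two full batches plus a partial final one
}
```

The key point is that `count` must be consumed before the error is inspected, otherwise the last partial batch is silently dropped.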
tamird left a comment
@tamird reviewed 1 file, made 6 comments, and resolved 3 discussions.
Reviewable status: 1 of 3 files reviewed, 9 unresolved discussions (waiting on @arthur-zhang).
aya/src/sys/bpf.rs line 310 at r2 (raw file):
Previously, tamird (Tamir Duberstein) wrote…
what does ENOENT mean?
shouldn't we be signaling this out to the caller somehow?
-- commits line 1 at r4:
The PR description mentions testing against an old kernel, but this commit doesn't have any integration tests. Could you add some?
-- commits line 2 at r4:
Please follow the advice in https://cbea.ms/git-commit/
aya/src/maps/hash_map/per_cpu_hash_map.rs line 19 at r4 (raw file):
```rust
};

type BatchResult<K, V> = (Vec<K>, Vec<PerCpuValues<V>>, Option<K>);
```
This type seems very raw; I don't think this is the API we want -- for a start, the first two vectors being always the same size is odd. Can we instead have a single `Vec<(K, PerCpuValues<V>)>`?
I also think exposing the "next key" doesn't make for a great API. This should be exposed as a sort of iterator -- the next key should be private.
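An iterator-shaped API along those lines might look like the following sketch (entirely hypothetical: the batch fetch is mocked with fixed data instead of the BPF syscall, concrete `u32`/`u64` types stand in for the generic key and per-CPU value types, and the cursor stays private):

```rust
/// Sketch of a batch iterator that hides the "next key" cursor from callers.
struct BatchIter {
    cursor: Option<u32>,           // private in_batch cursor, never exposed
    pending: Vec<(u32, Vec<u64>)>, // (key, per-CPU values) pairs from the current batch
    done: bool,
}

impl BatchIter {
    fn new() -> Self {
        Self { cursor: None, pending: Vec::new(), done: false }
    }

    // Mocked batch fetch standing in for the BPF syscall wrapper.
    fn fetch_batch(&mut self) {
        match self.cursor {
            None => {
                // First call: pretend the kernel returned one batch of two entries.
                self.pending = vec![(1, vec![10, 11]), (2, vec![20, 21])];
                self.cursor = Some(2);
            }
            Some(_) => self.done = true, // kernel reported ENOENT: map exhausted
        }
    }
}

impl Iterator for BatchIter {
    type Item = (u32, Vec<u64>);

    fn next(&mut self) -> Option<Self::Item> {
        if self.pending.is_empty() && !self.done {
            self.fetch_batch();
        }
        if self.pending.is_empty() { None } else { Some(self.pending.remove(0)) }
    }
}

fn main() {
    let items: Vec<_> = BatchIter::new().collect();
    assert_eq!(items.len(), 2);
}
```

Yielding `(key, values)` pairs also removes the parallel-vectors problem, since each key travels with its own per-CPU values.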
aya/src/sys/bpf.rs line 262 at r4 (raw file):
```rust
/// Result of a batch lookup operation on a per-CPU map.
pub(crate) struct PerCpuBatch<K: Pod, V: Pod> {
    /// Vector of retrieved keys
```
please add a period after each of these
aya/src/sys/bpf.rs line 288 at r4 (raw file):
```rust
/// - out_batch: Optional cursor for the next batch
///
/// # Introduced in kernel v5.6
```
can you please add a citation?
What Changed
New API (aya/src/maps/hash_map/per_cpu_hash_map.rs:141):
System Call Implementation (aya/src/sys/bpf.rs:260):
Testing:
Test plan