Conversation
Pull request overview
This PR adds a batch API for ring buffer consumption that amortizes atomic writes of the consumer position. Instead of committing the position after each item, the batch API commits once when the batch is dropped, improving performance when consuming multiple items.
Changes:
- Added `RingBuf::batch()` method to create a batch reader
- Added `RingBufBatch` struct that defers consumer position commits
- Refactored internal logic to support both immediate and batched commits
- Updated all tests and aya-log to use the new batch API
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| aya/src/maps/ring_buf.rs | Core implementation of batch API: new RingBufBatch struct, refactored ProducerData::next() to support deferred commits, split ConsumerPos operations into consume() and commit() |
| aya-log/src/lib.rs | Updated flush() method to use batch API for improved performance |
| test/integration-test/src/tests/uprobe_cookie.rs | Updated test to use batch API, simplified error handling |
| test/integration-test/src/tests/ring_buf.rs | Updated all ring buffer tests to use batch API, removed unused anyhow import, simplified error handling |
| test/integration-test/src/tests/load.rs | Updated test to use batch API with proper scoping |
| xtask/public-api/aya.txt | Added new public API entries for RingBuf::batch() and RingBufBatch |
This makes it easier to reuse later in new experimental APIs.
@codex review
Pull request overview
Copilot reviewed 14 out of 14 changed files in this pull request and generated no new comments.
Codex Review: Didn't find any major issues. Can't wait for the next one!
This implementation should have lower overhead than `RingBuffer::next` by avoiding the cost associated with `RingBufItem`.
ajwerner left a comment
Did something concrete motivate this change? When I first saw the title I was expecting a different sort of API, one that might enable the consumer to do vectorized processing of items and then selectively choose where in the returned batch to commit. None of the existing APIs let the client examine multiple items with a shared lifetime, so you can't really make a tight loop over a batch. Maybe that doesn't matter.
I can see how this patch helps reduce contention between the producer and consumer on the consumer position. I'm okay with this change but would love to see a motivating benchmark or something.
```rust
Item::Discard { len } => consume(consumer, advanced, len),
Item::Data(data) => {
    // This must be deferred in case `f` panics.
    scopeguard::defer! { consume(consumer, advanced, data.len()) };
```
If `f` panics, do you definitely want to consume the item? I think it's a reasonable contract for the use case, because panicking in a loop is bad, but it is worth documenting. I could also see it being pushed into `f` if needed (e.g. the caller of this function tracks in some way whether it panicked).
It was sort of like that initially, but it ended up having worse performance than the existing API. You're right that there's no API for iterating in batches. I'm also not exactly sure how you'd implement such an API: if you yield 2 items and the later item is dropped, what do you do? How do you express this ordering in the type system?
In addition to reducing contention it also optimizes the consumer; the consumer code path is up to 75% faster. I need to figure out how to properly integrate the benchmarks into the integration test harness.