
maps: add ring buffer batch API #1479

Open
tamird wants to merge 3 commits into main from ring-buf-consume-batch

Conversation

@tamird
Member

@tamird tamird commented Feb 17, 2026

This implementation should have lower overhead than `RingBuffer::next` by
avoiding the overhead associated with `RingBufItem`.



Copilot AI review requested due to automatic review settings February 17, 2026 21:48
@netlify

netlify bot commented Feb 17, 2026

Deploy Preview for aya-rs-docs ready!

Name Link
🔨 Latest commit 403a44f
🔍 Latest deploy log https://app.netlify.com/projects/aya-rs-docs/deploys/699747d7298111000961b681
😎 Deploy Preview https://deploy-preview-1479--aya-rs-docs.netlify.app


Copilot AI left a comment


Pull request overview

This PR adds a batch API for ring buffer consumption that amortizes atomic writes of the consumer position. Instead of committing the position after each item, the batch API commits once when the batch is dropped, improving performance when consuming multiple items.

Changes:

  • Added RingBuf::batch() method to create a batch reader
  • Added RingBufBatch struct that defers consumer position commits
  • Refactored internal logic to support both immediate and batched commits
  • Updated all tests and aya-log to use the new batch API

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated no comments.

Summary per file:

  • aya/src/maps/ring_buf.rs — Core implementation of the batch API: new RingBufBatch struct, refactored ProducerData::next() to support deferred commits, split ConsumerPos operations into consume() and commit()
  • aya-log/src/lib.rs — Updated flush() method to use the batch API for improved performance
  • test/integration-test/src/tests/uprobe_cookie.rs — Updated test to use the batch API, simplified error handling
  • test/integration-test/src/tests/ring_buf.rs — Updated all ring buffer tests to use the batch API, removed unused anyhow import, simplified error handling
  • test/integration-test/src/tests/load.rs — Updated test to use the batch API with proper scoping
  • xtask/public-api/aya.txt — Added public API entries for RingBuf::batch() and RingBufBatch


@tamird tamird force-pushed the ring-buf-consume-batch branch 5 times, most recently from fca1329 to 5c4aec4 Compare February 18, 2026 18:08
This makes it easier to reuse later in new experimental APIs.
@tamird tamird force-pushed the ring-buf-consume-batch branch 3 times, most recently from 582a40e to 3e58029 Compare February 19, 2026 00:23
@tamird tamird requested a review from Copilot February 19, 2026 11:09
@tamird
Copy link
Member Author

tamird commented Feb 19, 2026

@codex review


Copilot AI left a comment


Pull request overview

Copilot reviewed 14 out of 14 changed files in this pull request and generated no new comments.



@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Can't wait for the next one!


@vadorovsky vadorovsky self-requested a review February 19, 2026 15:55
@tamird tamird force-pushed the ring-buf-consume-batch branch 2 times, most recently from 5634fe9 to 6104cba Compare February 19, 2026 17:11
This implementation should have lower overhead than `RingBuffer::next` by
avoiding the overhead associated with `RingBufItem`.
@tamird tamird force-pushed the ring-buf-consume-batch branch from 6104cba to 403a44f Compare February 19, 2026 17:26
Member

@ajwerner ajwerner left a comment


Did something concrete motivate this change? When I first saw the title I was expecting a different sort of API, one that might enable the consumer to do vectorized processing of items and then to selectively choose where to commit within the returned batch. None of the existing APIs let the consumer examine multiple items with a shared lifetime, so you can't really make a tight loop over a batch. Maybe that doesn't matter.

I can see how this patch helps reduce contention between the producer and consumer on the consumer position. I'm okay with this change but would love to see a motivating benchmark or something.

```rust
Item::Discard { len } => consume(consumer, advanced, len),
Item::Data(data) => {
    // This must be deferred in case `f` panics.
    scopeguard::defer! { consume(consumer, advanced, data.len()) };
```

If `f` panics, do you definitely want to consume the item? I think it's a reasonable contract for this use case, because panicking in a loop is bad, but it is worth documenting, and I could see it being pushed into `f` if needed (e.g. the caller of this function tracks in some way whether it panicked).

@tamird
Member Author

tamird commented Feb 20, 2026

Did something concrete motivate this change? When I first saw the title I was expecting a different sort of API, one that might enable the consumer to do vectorized processing of items and then to selectively choose where to commit within the returned batch. None of the existing APIs let the consumer examine multiple items with a shared lifetime, so you can't really make a tight loop over a batch. Maybe that doesn't matter.

It was sort of like that initially, but it ended up having worse performance than the existing API. You're right that there's no API for iterating in batches. I'm also not exactly sure how you'd implement such an API, because if you yield 2 items and the later item is dropped first, what do you do? How do you express this ordering in the type system?

I can see how this patch helps reduce contention between the producer and consumer on the consumer position. I'm okay with this change but would love to see a motivating benchmark or something.

In addition to reducing contention it also optimizes the consumer; the consumer code path is up to 75% faster. I need to figure out how to properly integrate the benchmarks into the integration test harness.
