[Bifrost] Introduce bifrost.record-size-limit (hidden) #4139
Conversation
Force-pushed from ece5fd3 to a9d3771.
tillrohrmann left a comment:
Thanks a lot for adding the record-size-limit guardrails to bifrost @AhmedSoliman. LGTM. +1 for merging :-)
muhamadazmy left a comment:
Thank you so much for this PR! The changes look good to me. I left a couple of very minor comments below. +1 for merging
crates/bifrost/src/error.rs (outdated)
    MetadataStoreError(#[from] Arc<ReadWriteError>),
    #[error("record batch too large: {record_size} bytes exceeds limit of {limit} bytes")]
    BatchTooLarge {
        record_size: usize,
I assume this should be batch_size?
Suggested change:
    -    record_size: usize,
    +    batch_size: usize,
crates/bifrost/src/appender.rs (outdated)
    let record_size = body.estimated_encode_size();
    let limit = self.record_size_limit();
    if body.estimated_encode_size() > self.record_size_limit().get() {
        return Err(Error::BatchTooLarge { record_size, limit });
Wondering if this should be a RecordTooLarge error instead?
Suggested change:
    -    return Err(Error::BatchTooLarge { record_size, limit });
    +    return Err(Error::RecordTooLarge { record_size, limit });
Not really, the appender only operates on batches.
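To make the batch-vs-record distinction concrete, here is a minimal sketch of how the two checks can sit side by side, assuming the `estimated_encode_size()` helper and a `NonZeroUsize` limit as in the diff above; the free functions, field types, and message wording are illustrative, not the actual Bifrost code.

```rust
use std::num::NonZeroUsize;

// Illustrative error type; only the variant names mirror the PR.
#[derive(Debug, thiserror::Error)]
pub enum AppendError {
    #[error("record batch too large: {record_size} bytes exceeds limit of {limit} bytes")]
    BatchTooLarge { record_size: usize, limit: NonZeroUsize },
    #[error("record too large: {record_size} bytes exceeds limit of {limit} bytes")]
    RecordTooLarge { record_size: usize, limit: NonZeroUsize },
}

// Appender path (sketch): the whole batch is validated in one shot,
// hence BatchTooLarge.
fn check_batch(batch_size: usize, limit: NonZeroUsize) -> Result<(), AppendError> {
    if batch_size > limit.get() {
        return Err(AppendError::BatchTooLarge { record_size: batch_size, limit });
    }
    Ok(())
}

// LogSender enqueue path in BackgroundAppender (sketch): each record is
// validated at enqueue time so oversized records fail fast, hence RecordTooLarge.
fn check_record(record_size: usize, limit: NonZeroUsize) -> Result<(), AppendError> {
    if record_size > limit.get() {
        return Err(AppendError::RecordTooLarge { record_size, limit });
    }
    Ok(())
}
```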
    -    #[error("error when self proposing")]
    -    SelfProposer,
    +    #[error("error when self proposing: {0}")]
    +    SelfProposer(String),
I like that we finally propagate the source error 😆! But maybe it's a good idea to use a Box<dyn Error + Send + Sync> or anyhow::Error?
I opted not to do this because in various places we return EnqueueError(T), where T might not be Debug or Send + Sync.
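A hypothetical illustration of that constraint (the type and function names below are assumed for the example, not the actual Bifrost API): because the enqueue path hands the rejected payload `T` back to the caller, `T` carries no `Error`/`Send`/`Sync` bounds, so the failure cannot simply be boxed into `anyhow::Error`; converting to a `String` at the boundary sidesteps the missing bounds.

```rust
// The enqueue path hands the rejected payload back to the caller, so `T`
// is arbitrary and carries no Debug/Send/Sync bounds.
pub enum EnqueueError<T> {
    Full(T),
    Closed(T),
}

#[derive(Debug, thiserror::Error)]
pub enum SequencerError {
    // A plain String avoids requiring any bounds on the payload type `T`.
    #[error("error when self proposing: {0}")]
    SelfProposer(String),
}

fn into_self_proposer_error<T>(err: EnqueueError<T>) -> SequencerError {
    // `T` may not be Debug, so we only describe which case was hit.
    let msg = match err {
        EnqueueError::Full(_) => "enqueue failed: queue is full",
        EnqueueError::Closed(_) => "enqueue failed: appender is closed",
    };
    SequencerError::SelfProposer(msg.to_owned())
}
```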
Force-pushed from 11d2bc4 to e08efaa.
Add a record size limit check at append time in Bifrost to validate that
individual records do not exceed a configured maximum size.
Important note: this configuration option is not going to be effective without implementing size estimation of records. At the moment, all typed records are assumed to be 2048 bytes in size, which makes this check useless for now. Nevertheless, the check will become useful once we implement size estimation of records.
Changes:
- Add `bifrost.record-size-limit` configuration option that defaults to `networking.message-size-limit` (32 MiB) and is clamped to that value (see the sketch below)
- Add `BatchTooLarge`/`RecordTooLarge` error variants to get notified when a record or a batch is too large, depending on whether you're using Appender or BackgroundAppender
- Add record size validation to all `LogSender` enqueue methods in `BackgroundAppender` to fail fast at enqueue time

This prevents oversized records from being written to the log, which could cause issues during replication and network transmission.
Part of #4130, #4132
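As a rough sketch of the defaulting and clamping rule described in the Changes list above (the function signature and types here are illustrative; the real option lives in Bifrost's configuration):

```rust
use std::num::NonZeroUsize;

/// Resolve the effective limit: default to the networking message size limit
/// (32 MiB by default) and clamp to it, since a record bigger than one network
/// message could not be transmitted during replication anyway.
fn effective_record_size_limit(
    record_size_limit: Option<NonZeroUsize>, // bifrost.record-size-limit, if set
    message_size_limit: NonZeroUsize,        // networking.message-size-limit
) -> NonZeroUsize {
    match record_size_limit {
        Some(limit) => limit.min(message_size_limit), // clamp to the network limit
        None => message_size_limit,                   // default
    }
}
```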
Stack created with Sapling. Best reviewed with ReviewStack.