# Statement-store: Propagate all statements to newly connected peers #10718
Conversation
Force-pushed 1c63bc1 to 7e0277c
```rust
if !self.sync.is_major_syncing() {
    if let Ok(statements) = self.statement_store.statements() {
        self.send_statements_to_peer(&peer, &statements).await;
```
This will keep this loop busy until all statements are sent, and there could be many of them.
I wonder if we want to move this out to a separate task/thread, but that could easily be a follow-up.
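A possible shape for that follow-up, sketched with std threads and a channel (all names here are illustrative stand-ins, not the PR's actual API): the event loop only hands the statements off, and a worker drains them in the background so the loop never blocks on a long send.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for the real encoded statement type.
type Statement = Vec<u8>;

/// Spawn a worker that streams `statements` outside the event loop.
/// The returned receiver models the peer end of the connection.
fn spawn_statement_sender(statements: Vec<Statement>) -> mpsc::Receiver<Statement> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        for statement in statements {
            // In the real code this would await `send_statements_to_peer`.
            if tx.send(statement).is_err() {
                break; // peer disconnected; stop early
            }
        }
    });
    rx
}

fn main() {
    let rx = spawn_statement_sender(vec![vec![1u8], vec![2], vec![3]]);
    let received: Vec<Statement> = rx.iter().collect();
    assert_eq!(received.len(), 3);
    println!("received {} statements", received.len());
}
```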
```rust
debug_assert!(_was_in.is_none());

if !self.sync.is_major_syncing() {
    if let Ok(statements) = self.statement_store.statements() {
```
I would also add a test for it.
Related to #10046

`/cmd prdoc --audience node_dev --bump major`
All GitHub workflows were cancelled due to the failure of one of the required jobs.
**alexggh** left a comment:
Had a second look over the newly added changes; looks good to me!
Thank you!
**AndreiEres** left a comment:
LGTM, thank you! Left a few nits.
```yaml
- audience: Node Dev
  description: |
    When a new node connects, we now propagate all statements in our store to them. This happens in bursts of ~1MiB messages
    over time to not completley use up all resources. If multiple peers are connecting, round robin between them.
```
Suggested change:

```diff
-    over time to not completley use up all resources. If multiple peers are connecting, round robin between them.
+    over time to not completely use up all resources. If multiple peers are connecting, round robin between them.
```
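A rough sketch of the batching and round-robin behaviour the prdoc describes (the constant, function names, and peer queue here are illustrative, not the PR's actual code): chunks are cut at a byte budget, a single oversized statement still ships alone, and pending peers take turns receiving one chunk per step.

```rust
use std::collections::VecDeque;

/// Cut the next chunk starting at `start`, stopping before the byte budget is
/// exceeded. A statement larger than the budget still ships alone, mirroring
/// the `accumulated_size > 0 &&` guard in the quoted diff.
fn next_chunk(statements: &[Vec<u8>], start: usize, max_bytes: usize) -> (usize, usize) {
    let (mut end, mut bytes) = (start, 0);
    while end < statements.len() {
        let len = statements[end].len();
        if bytes > 0 && bytes + len > max_bytes {
            break;
        }
        bytes += len;
        end += 1;
    }
    (end, bytes)
}

/// One round-robin step: the front peer gets one chunk, then goes to the back
/// of the queue unless it has already received everything.
fn round_robin_step(
    peers: &mut VecDeque<(String, usize)>, // (peer id, next statement index)
    statements: &[Vec<u8>],
    max_bytes: usize,
) -> Option<(String, usize, usize)> {
    let (peer, start) = peers.pop_front()?;
    let (end, _bytes) = next_chunk(statements, start, max_bytes);
    if end < statements.len() {
        peers.push_back((peer.clone(), end));
    }
    Some((peer, start, end))
}

fn main() {
    let statements = vec![vec![0u8; 600], vec![0u8; 600], vec![0u8; 300]];
    let mut peers = VecDeque::from([("a".to_string(), 0), ("b".to_string(), 0)]);
    // With a 1000-byte budget, each peer needs two turns: [0..1] then [1..3].
    assert_eq!(round_robin_step(&mut peers, &statements, 1000), Some(("a".into(), 0, 1)));
    assert_eq!(round_robin_step(&mut peers, &statements, 1000), Some(("b".into(), 0, 1)));
    assert_eq!(round_robin_step(&mut peers, &statements, 1000), Some(("a".into(), 1, 3)));
    assert_eq!(round_robin_step(&mut peers, &statements, 1000), Some(("b".into(), 1, 3)));
    assert!(peers.is_empty());
    println!("ok");
}
```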
```rust
            result.push((*hash, statement));
        },
        FilterDecision::Abort => {
            // We did not process it :)
```
nit: we can increment it once after the `match` on the filter decision, instead of in each arm.
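What the nit suggests, in sketch form (the enum and loop are simplified stand-ins for the actual filter plumbing): hoist the `processed += 1` out of the arms, since `Abort` exits before reaching it.

```rust
// Simplified stand-in for the store's filter decision type.
enum FilterDecision {
    Include,
    Skip,
    Abort,
}

/// Count processed entries with a single increment placed after the match,
/// instead of repeating it in the `Include`/`Skip` arms.
fn count_processed(decisions: &[FilterDecision]) -> usize {
    let mut processed = 0;
    for decision in decisions {
        match decision {
            // "We did not process it" in the original: Abort exits early.
            FilterDecision::Abort => break,
            FilterDecision::Include | FilterDecision::Skip => {}
        }
        processed += 1; // single increment, after the match
    }
    processed
}

fn main() {
    use FilterDecision::*;
    assert_eq!(count_processed(&[Include, Skip, Abort, Include]), 2);
    println!("ok");
}
```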
```rust
}

fn statement_hashes(&self) -> Result<Vec<Hash>> {
    Ok(self.index.read().entries.keys().cloned().collect())
```
Do we need `Result` here?
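A sketch of what dropping the wrapper could look like (the types here are minimal stand-ins, and std's `RwLock` needs the `unwrap()` that `parking_lot`'s does not): the read-lock-and-collect path has no failure case, so the method can return `Vec<Hash>` directly.

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Minimal stand-ins for the store's internal types.
type Hash = [u8; 32];

struct Index {
    entries: HashMap<Hash, ()>,
}

struct Store {
    index: RwLock<Index>,
}

impl Store {
    /// Before: `fn statement_hashes(&self) -> Result<Vec<Hash>>`.
    /// After: infallible, so no `Result` wrapper is needed.
    fn statement_hashes(&self) -> Vec<Hash> {
        self.index.read().unwrap().entries.keys().cloned().collect()
    }
}

fn main() {
    let mut entries = HashMap::new();
    entries.insert([1u8; 32], ());
    entries.insert([2u8; 32], ());
    let store = Store { index: RwLock::new(Index { entries }) };
    assert_eq!(store.statement_hashes().len(), 2);
    println!("ok");
}
```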
```rust
let (statements, processed) = match self.statement_store.statements_by_hashes(
    &entry.get().hashes,
    &mut |_hash, encoded, _stmt| {
        if accumulated_size > 0 &&
```
Should we also return `Skip` for oversized statements here?
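The question, made concrete in a small sketch (the decision function and its string results are illustrative only): with just the `accumulated_size > 0` guard, a statement that alone exceeds the budget is still included; returning `Skip` for it would keep every chunk within bounds.

```rust
/// Illustrative decision: stop the chunk when full, skip a statement that can
/// never fit, include everything else.
fn decide(encoded_len: usize, accumulated: usize, max_bytes: usize) -> &'static str {
    if accumulated > 0 && accumulated + encoded_len > max_bytes {
        "Abort" // chunk is full; stop and send what we have
    } else if encoded_len > max_bytes {
        "Skip" // oversized on its own; skip instead of blowing the budget
    } else {
        "Include"
    }
}

fn main() {
    assert_eq!(decide(2000, 0, 1000), "Skip"); // oversized, even in an empty chunk
    assert_eq!(decide(500, 600, 1000), "Abort"); // chunk would overflow
    assert_eq!(decide(100, 0, 1000), "Include");
    println!("ok");
}
```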
```rust
for hash in hashes {
    processed += 1;
    let Some(encoded) =
        self.db.get(col::STATEMENTS, hash).map_err(|e| Error::Db(e.to_string()))?
```
We abandon the already-processed statements here; maybe we could send them?
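One way to read the suggestion, sketched with plain `Result`s standing in for the db lookups: on the first error, return the statements gathered so far (plus a completeness flag) instead of discarding them, so the partial batch can still be sent.

```rust
/// Collect until the first error, returning the partial batch and whether the
/// whole input was processed cleanly.
fn collect_until_error(lookups: &[Result<u32, String>]) -> (Vec<u32>, bool) {
    let mut batch = Vec::new();
    for lookup in lookups {
        match lookup {
            Ok(value) => batch.push(*value),
            // Instead of dropping `batch` on error, hand it back to the caller.
            Err(_) => return (batch, false),
        }
    }
    (batch, true)
}

fn main() {
    let lookups = [Ok(1), Ok(2), Err("db error".to_string()), Ok(3)];
    let (batch, complete) = collect_until_error(&lookups);
    assert_eq!(batch, vec![1, 2]);
    assert!(!complete);
    println!("ok");
}
```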
```rust
    Ok(r) => r,
    Err(e) => {
        log::debug!(target: LOG_TARGET, "Failed to fetch statements for initial sync: {e:?}");
        entry.remove();
```
There is no retry mechanism, so the peer has no further chance to receive anything? Maybe we could at least log it with `error!`?
```rust
const LOG_TARGET: &str = "statement-gossip";

/// Maximum time we wait for sending a notification to a peer.
const SEND_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);

/// Interval for sending statement batches during initial sync to new peers.
```
nit: "initial sync" keeps misleading me into thinking of the current peer's startup, rather than a newly connected peer.
```rust
}

/// Result of finding a sendable chunk of statements.
enum ChunkResult {
```
nit: `FindChunkResult`?
```rust
        self.pending_initial_syncs.remove(&peer_id);
        return;
    },
    SendChunkResult::Sent(_) => {
```
nit: `debug_assert_eq!(to_sent.len(), sent)`?
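The nit presumably wants `debug_assert_eq!` (a two-argument `debug_assert!` whose first argument is a `usize` would not compile); a minimal sketch with assumed names:

```rust
/// Validate that the number of statements actually sent matches the number
/// prepared. `debug_assert_eq!` compiles away in release builds.
fn check_sent_count(to_send: &[Vec<u8>], sent: usize) {
    debug_assert_eq!(to_send.len(), sent, "sent count must match prepared count");
}

fn main() {
    let to_send = vec![vec![1u8], vec![2]];
    check_sent_count(&to_send, 2); // counts match, no panic
    println!("ok");
}
```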
# Description

This follow-up PR addresses review comments from PR #10718:

- Removed the unnecessary `Result` wrapper from `statement_hashes()`; the method is infallible
- Added a debug assertion to validate that the sent count matches the prepared count

## Integration

Should not affect downstream projects.