refactor(rollup relayer): remove max_block_num_per_chunk configuration parameter #1729
Conversation
Walkthrough
This PR removes the per-chunk block-count limit (MaxBlockNumPerChunk) across configs, structs, tests, and logic, shifts chunk sizing to gas-based limits (MaxL2GasPerChunk), adjusts chunk proposer control flow, logging, and metrics, updates tests and configs accordingly, and bumps the internal tag to v4.5.46.
Sequence Diagram(s)
```mermaid
sequenceDiagram
    autonumber
    participant C as Config
    participant CP as ChunkProposer
    participant L2 as L2 Node
    participant HF as Hardfork Rules
    participant M as Metrics/Logs
    Note over C,CP: Gas-based sizing replaces per-chunk block-count
    C->>CP: Load MaxL2GasPerChunk, timeouts, blob size
    CP->>L2: Fetch up to 1000 blocks
    L2-->>CP: Blocks[]
    CP->>HF: Check Blocks[] for hardfork boundary
    HF-->>CP: Boundary index (if any)
    CP->>CP: Truncate at boundary (if present)
    alt First-block timeout reached
        CP->>M: Log "first block timeout reached" (+time) and increment metric
        CP->>CP: Persist chunk info and emit chunk
    else Gas/blob constraints satisfied
        CP->>CP: Continue accumulating until gas/blob/timeout triggers
    end
```
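To illustrate the control flow above, here is a minimal, self-contained Go sketch of gas/timeout-driven chunk accumulation. It is not the actual `chunk_proposer.go` code: the `block` type, the `proposeChunk` helper, and its parameters are illustrative assumptions, and blob-size constraints are omitted.

```go
package main

import (
	"fmt"
	"time"
)

// block is an illustrative stand-in for an L2 block; the real proposer works on richer types.
type block struct {
	gasUsed   uint64
	timestamp time.Time
}

// proposeChunk sketches the accumulation loop from the diagram: append blocks until the
// next one would exceed the per-chunk gas cap, or emit whatever has accumulated once the
// first pending block is older than the chunk timeout. Blob-size checks are omitted.
func proposeChunk(pending []block, maxL2GasPerChunk uint64, chunkTimeout time.Duration, now time.Time) []block {
	if len(pending) == 0 {
		return nil
	}
	firstBlockTimedOut := now.Sub(pending[0].timestamp) >= chunkTimeout

	var chunk []block
	var totalGas uint64
	for _, b := range pending {
		if len(chunk) > 0 && totalGas+b.gasUsed > maxL2GasPerChunk {
			return chunk // gas cap reached: finalize the chunk here
		}
		totalGas += b.gasUsed
		chunk = append(chunk, b)
	}
	if firstBlockTimedOut {
		return chunk // first-block timeout reached: finalize even below the gas cap
	}
	return nil // no constraint hit yet: keep accumulating on the next tick
}

func main() {
	now := time.Now()
	pending := []block{
		{gasUsed: 900_000, timestamp: now.Add(-time.Minute)},
		{gasUsed: 400_000, timestamp: now},
	}
	chunk := proposeChunk(pending, 1_100_000, 5*time.Minute, now)
	fmt.Println(len(chunk)) // 1: the second block would push the chunk past the gas cap
}
```

With a very large gas cap and the timeout not yet reached, the sketch returns nil, mirroring the "continue accumulating" branch of the diagram.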
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Actionable comments posted: 0
🧹 Nitpick comments (7)
common/version/version.go (1)
8-8: Prefer const over var for the tag — prevents accidental mutation and clarifies intent.
```diff
-var tag = "v4.5.44"
+const tag = "v4.5.44"
```
rollup/internal/controller/watcher/batch_proposer.go (1)
246-247: Surface the active chunk cap in logs for debuggability — when triggering on “maximum number of chunks,” consider logging maxChunksThisBatch alongside chunk count in the message at Line 333 to aid on-call debugging. Example:
```go
log.Info("reached maximum number of chunks in batch or first block timeout", "chunk count", metrics.NumChunks, "max chunks", maxChunksThisBatch, ...)
```
rollup/internal/controller/watcher/chunk_proposer.go (2)
234-236: Avoid fetching an excessively large block window — fetching 100000 blocks can create memory/CPU spikes. Prefetch a sane window and iterate (or stream/page) until constraints are hit.
Apply this diff to introduce a named cap (adjust value per prod SLOs):
```diff
- // select blocks without a hard limit on count in practice (use a large value)
- // The actual limits will be enforced by gas, timeout, and blob size constraints
- blocks, err := p.l2BlockOrm.GetL2BlocksGEHeight(p.ctx, unchunkedBlockHeight, 100000)
+ // Prefetch a large-but-bounded window; constraints (gas/timeout/blob) decide termination.
+ const maxPrefetchBlocks = 4096
+ blocks, err := p.l2BlockOrm.GetL2BlocksGEHeight(p.ctx, unchunkedBlockHeight, maxPrefetchBlocks)
```
252-252: Hardfork boundary no longer forces chunk finalization — confirm intended latency. Previously, encountering a hardfork boundary effectively forced chunk finalization. Now we truncate and rely on gas/blob/timeout only. If ChunkTimeoutSec is large, this can stall chunk publication at a boundary until timeout.
If you prefer immediate finalization at a boundary (when at least one block is present), track a boundaryTruncated flag and finalize after metrics calculation. Minimal changes:
```diff
- // Truncate blocks at hardfork boundary
+ // Truncate blocks at hardfork boundary
+ boundaryTruncated = true
```
And after computing metrics (near Lines 324–335):
```go
// Outside selected range: add near the timeout check
if boundaryTruncated && len(chunk.Blocks) > 0 {
    p.recordAllChunkMetrics(metrics)
    return p.updateDBChunkInfo(&chunk, codecVersion, metrics)
}
```
Also declare before the loop:
```go
// Outside selected range: before the `for i := 1; i < len(blocks); i++ { ... }`
boundaryTruncated := false
```
rollup/internal/controller/watcher/bundle_proposer_test.go (1)
96-96: Use readable numeric literal and centralize the “one block by gas” value — use 1_100_000 for readability and consider a shared const to avoid drift across tests.
Apply this diff:
```diff
- MaxL2GasPerChunk: 1100000, // One block per chunk via gas limit
+ MaxL2GasPerChunk: 1_100_000, // One block per chunk via gas limit
```
rollup/internal/controller/watcher/batch_proposer_test.go (2)
75-86: Unify numeric literal style for clarity — match other tests by using 1_100_000 (readable) and consider a shared test constant to keep the “one block via gas” threshold consistent.
Apply this diff:
```diff
- MaxL2GasPerChunk: 1100000, // Allow only 1 block per chunk
+ MaxL2GasPerChunk: 1_100_000, // Allow only 1 block per chunk
```
306-309: Consistent numeric literal formatting — align with the underscore style used elsewhere.
Apply this diff:
```diff
- MaxL2GasPerChunk: 1100000, // One block per chunk via gas limit
+ MaxL2GasPerChunk: 1_100_000, // One block per chunk via gas limit
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (13)
- common/version/version.go (1 hunks)
- permissionless-batches/conf/relayer/config.json (0 hunks)
- rollup/cmd/proposer_tool/app/app.go (0 hunks)
- rollup/cmd/rollup_relayer/app/app.go (0 hunks)
- rollup/conf/config.json (0 hunks)
- rollup/internal/config/l2.go (0 hunks)
- rollup/internal/controller/watcher/batch_proposer.go (1 hunks)
- rollup/internal/controller/watcher/batch_proposer_test.go (2 hunks)
- rollup/internal/controller/watcher/bundle_proposer_test.go (1 hunks)
- rollup/internal/controller/watcher/chunk_proposer.go (3 hunks)
- rollup/internal/controller/watcher/chunk_proposer_test.go (1 hunks)
- rollup/proposer-tool-config.json (0 hunks)
- rollup/tests/rollup_test.go (0 hunks)
💤 Files with no reviewable changes (7)
- rollup/internal/config/l2.go
- rollup/tests/rollup_test.go
- rollup/proposer-tool-config.json
- rollup/conf/config.json
- rollup/cmd/proposer_tool/app/app.go
- permissionless-batches/conf/relayer/config.json
- rollup/cmd/rollup_relayer/app/app.go
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2024-10-20T16:13:20.397Z
Learnt from: colinlyguo
PR: scroll-tech/scroll#1530
File: rollup/internal/controller/watcher/batch_proposer.go:291-294
Timestamp: 2024-10-20T16:13:20.397Z
Learning: In `batch_proposer.go`, it's acceptable to call `utils.CalculateBatchMetrics` multiple times within the loop because the batch's chunks are increasing in the loop, and each calculation reflects the updated batch state.
Applied to files:
rollup/internal/controller/watcher/batch_proposer.go
rollup/internal/controller/watcher/batch_proposer_test.go
🔇 Additional comments (5)
common/version/version.go (2)
8-8: LGTM: bump to v4.5.44 — matches the PR’s bump-version objective; no logic changes and the Version format remains stable.
8-8: Verification complete: removed config keys are gone and the version tag is single-sourced. No references to max_chunks_per_batch or max_block_num_per_chunk remain, and var tag = "v4.5.44" is defined only once at common/version/version.go:8.
rollup/internal/controller/watcher/batch_proposer.go (1)
246-247: Remove config cap in favor of codec limit — LGTM. Using only codec.MaxNumChunksPerBatch() aligns with the PR goals and keeps a single source of truth for the cap.
rollup/internal/controller/watcher/chunk_proposer.go (1)
325-326: Timeout-only termination path — LGTM. Logging includes the start timestamp and current time; matches the PR’s intent to rely on time rather than per-block caps.
rollup/internal/controller/watcher/chunk_proposer_test.go (1)
50-55: Gas-based single-block chunk test — LGTM. Good rename and use of 1_100_000 to make the intent clear.
The PR title was changed from removing both the max_chunks_per_batch and max_block_num_per_chunk configuration parameters to removing only the max_block_num_per_chunk configuration parameter.
Actionable comments posted: 0
🧹 Nitpick comments (4)
rollup/internal/controller/watcher/chunk_proposer.go (4)
234-236: Fix misleading comment and avoid the magic number. The comment says “without a hard limit” but we do cap at 1000. Also, replace 1000 with a named constant for clarity.
Apply within-range diff:
```diff
- // select blocks without a hard limit on count in practice (use a large value)
- // The actual limits will be enforced by gas, timeout, and blob size constraints
- blocks, err := p.l2BlockOrm.GetL2BlocksGEHeight(p.ctx, unchunkedBlockHeight, 1000)
+ // Select up to a moderate cap to bound memory/DB load; chunk size is still constrained
+ // by gas, timeout, and blob size constraints.
+ blocks, err := p.l2BlockOrm.GetL2BlocksGEHeight(p.ctx, unchunkedBlockHeight, l2BlocksSelectCap)
```
Add outside-range (top-level in this file):
```go
const l2BlocksSelectCap = 1000
```
247-254: Add a debug log when truncating at a hardfork boundary. This greatly helps explain why fewer blocks were considered in a cycle and correlates with codec/version shifts.
```diff
 currentHardfork := encoding.GetHardforkName(p.chainCfg, blocks[i].Header.Number.Uint64(), blocks[i].Header.Time)
 if currentHardfork != hardforkName {
+    log.Debug("truncating at hardfork boundary",
+        "prevHardfork", hardforkName,
+        "newHardfork", currentHardfork,
+        "startBlock", blocks[0].Header.Number,
+        "boundaryBlock", blocks[i].Header.Number)
     blocks = blocks[:i] // Truncate blocks at hardfork boundary
     break
 }
```
325-327: Make the timeout check inclusive to avoid 1-second edge oscillation. Use <= so chunks finalize exactly when the timeout is reached.
```diff
- if metrics.FirstBlockTimestamp+p.cfg.ChunkTimeoutSec < currentTimeSec {
+ if metrics.FirstBlockTimestamp+p.cfg.ChunkTimeoutSec <= currentTimeSec {
```
337-338: Tweak the log message to reflect current logic. We don’t have a “timeout block” anymore; it’s a first-block timeout. Consider rewording for clarity.
```diff
- log.Debug("pending blocks do not reach one of the constraints or contain a timeout block")
+ log.Debug("pending blocks do not hit gas/blob limits and first-block timeout not reached")
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- rollup/internal/controller/watcher/bundle_proposer_test.go (1 hunks)
- rollup/internal/controller/watcher/chunk_proposer.go (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- rollup/internal/controller/watcher/bundle_proposer_test.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
🔇 Additional comments (1)
rollup/internal/controller/watcher/chunk_proposer.go (1)
234-236: Capping the prefetch at 1k looks good and addresses earlier perf/memory concerns. Fetching 1000 blocks per iteration is a sane upper bound and aligns with the prior suggestion to drop the 100k cap. The actual chunk size remains governed by gas/blob/timeout limits.
Actionable comments posted: 0
🧹 Nitpick comments (5)
rollup/internal/controller/watcher/chunk_proposer_test.go (1)
50-51: Stabilize the gas threshold to avoid test flakiness. The 1_100_000 gas cap assumes block1 fits but block1+block2 does not. If block traces change, this can flake. Consider deriving the limit from the actual L2 gas of block1 (e.g., “just above block1 gas, below block1+block2”) or assert the precondition before running the case.
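For example, a hedged sketch of deriving such a threshold (the `deriveChunkGasCap` helper and the sample gas values are assumptions for illustration, not the test’s actual fixtures or helpers):

```go
package main

import "fmt"

// deriveChunkGasCap picks a per-chunk gas cap that admits block1 alone but not
// block1+block2, so the "one block per chunk" case stays stable even if the
// fixture block traces (and their gas usage) change.
func deriveChunkGasCap(block1Gas, block2Gas uint64) (uint64, error) {
	if block2Gas < 2 {
		return 0, fmt.Errorf("block2 gas too small to place a cap strictly between block1 and block1+block2")
	}
	return block1Gas + block2Gas/2, nil // above block1Gas, strictly below block1Gas+block2Gas
}

func main() {
	// Hypothetical gas readings taken from the decoded test blocks.
	block1Gas, block2Gas := uint64(1_050_000), uint64(980_000)
	gasCap, err := deriveChunkGasCap(block1Gas, block2Gas)
	if err != nil {
		panic(err)
	}
	fmt.Println(gasCap) // 1540000: admits block1 alone, rejects block1+block2 together
}
```

The test could then pass this derived value as MaxL2GasPerChunk instead of the hard-coded 1_100_000.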
rollup/internal/controller/watcher/batch_proposer_test.go (4)
39-39: Expectation vs. comments: ensure determinism and update misleading inline notes. You now expect 1 chunk in the first batch, but the comments claim “chunk1 contains block1” and “chunk2 contains block2.” With MaxL2GasPerChunk = math.MaxUint64 and ChunkTimeoutSec = 300, cp.TryProposeChunk may not flush per call. Either:
- make chunk creation deterministic (e.g., force a cp timeout or low gas cap), or
- keep the behavior and update the comments to avoid implying per-call chunking, plus assert the precondition (#chunks) before batching.
Suggested comment tweak:
```diff
- cp.TryProposeChunk() // chunk1 contains block1
- cp.TryProposeChunk() // chunk2 contains block2
+ // Attempt two proposals; exact chunking depends on gas/timeout config.
+ cp.TryProposeChunk()
+ cp.TryProposeChunk()
```
Also applies to: 87-89
309-313: Guard the “one block per chunk” invariant. 1200000 works today, but it’s a magic number tied to blockTrace_03. Add a precondition check (or compute the cap from the block’s measured L2 gas) to avoid drift.
```go
// After setting up cp, before batching:
chunks, err := chunkOrm.GetChunksGEIndex(context.Background(), 1, 0)
assert.NoError(t, err)
assert.Len(t, chunks, 2, "expected exactly 2 chunks from 3 blocks with 1-block gas cap")
```
316-316: Confirm the chunk count given the 3-block loop. With three inserts and a per-chunk gas cap, you expect exactly two chunks before batching. If chunking semantics change (e.g., timeout flush), this may produce 0/1/3. Add an explicit assertion right after the loop (see previous comment) to lock expectations.
325-327: Tighten the comment to describe the invariant, not the mechanism. Propose rewording to avoid implying per-call chunking and to state the expected end state:
```diff
- // We create 2 chunks here, as we have 3 blocks and reach the gas limit for the 1st chunk with the 2nd block
- // and the 2nd chunk with the 3rd block.
+ // By the end of the loop we expect 2 chunks: block1 in chunk1 (block2 would exceed cap),
+ // and block2 in chunk2 (block3 would exceed cap). Block3 remains pending pre-batch.
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- rollup/internal/controller/watcher/batch_proposer_test.go (4 hunks)
- rollup/internal/controller/watcher/bundle_proposer_test.go (1 hunks)
- rollup/internal/controller/watcher/chunk_proposer_test.go (1 hunks)
- rollup/internal/controller/watcher/watcher_test.go (0 hunks)
💤 Files with no reviewable changes (1)
- rollup/internal/controller/watcher/watcher_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- rollup/internal/controller/watcher/bundle_proposer_test.go
🧰 Additional context used
🧬 Code graph analysis (1)
rollup/internal/controller/watcher/batch_proposer_test.go (1)
- rollup/internal/orm/l2_block.go (1): NewL2Block (43-45)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
🔇 Additional comments (1)
rollup/internal/controller/watcher/batch_proposer_test.go (1)
75-75: LGTM: test isolates batch gating from chunk gas limits. Using MaxL2GasPerChunk = math.MaxUint64 aligns with the removal of per-block caps and keeps gas from influencing this test.
Codecov Report
❌ Patch coverage is
Additional details and impacted files
```diff
@@            Coverage Diff            @@
##           develop    #1729   +/-   ##
=========================================
  Coverage    36.98%   36.98%
=========================================
  Files          245      245
  Lines        20808    20804      -4
=========================================
- Hits          7696     7695      -1
+ Misses       12301    12299      -2
+ Partials       811      810      -1
```
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Purpose or design rationale of this PR
This PR removes the max_block_num_per_chunk configuration parameter.
PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
Deployment tag versioning
Has tag in common/version.go been updated or have you added the bump-version label to this PR?
Breaking change label
Does this PR have the breaking-change label?
Summary by CodeRabbit
- Refactor
- Chores