
Conversation

@ldmberman (Member)

No description provided.

@ldmberman ldmberman force-pushed the lb/sync-replica-2-9 branch from deed12b to f72704b Compare June 18, 2025 16:23
@ldmberman (Member Author)

A summary of the recent changes:

  • The footprint record is now maintained only for 256 KiB chunks before the strict threshold and all chunks after the threshold.
  • The footprint record is maintained only for replica 2.9 data.
  • The “normal” and “footprint” syncing procedures are scheduled in phases: first, the normal procedure completes one iteration over the partition range; then the footprint procedure completes one iteration; then the normal procedure runs again, and so on.
  • The “footprint” procedure syncs only replica 2.9 data by ignoring non-replica 2.9 packing returned by GET /footprints.
  • The “normal” procedure syncs only non-replica 2.9 data (replica 2.9 data is already excluded from the normal sync record on master).
  • Because both procedures are running, a two-phase deployment is no longer necessary.
  • Footprint records are initialized from the existing sync records on startup.
  • Added tests for ar_footprint_record and ar_peer_intervals.
  • Replica 2.9 syncing does not start before the entropy generation has been completed.
  • The server-side blocking of serving replica 2.9 chunks (along with the BLOCK_2_9_SYNCING flag) is removed.
  • Entropy generation is now guarded by a mutex to avoid redundant work; entropy_generation_lock_collision is logged on collision.
  • Added tracking and reporting of redundant entropy generation (the number of entropy generations per key in the last 30 minutes).
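The alternating "normal"/"footprint" phase scheduling described above can be sketched as follows. This is a Python sketch, not the node's actual Erlang code; the procedure names and the shape of the partition range are illustrative assumptions.

```python
import itertools

def normal_sync_pass(partition_range):
    # One full iteration of the "normal" procedure over the partition
    # range: syncs only non-replica-2.9 data (placeholder body).
    for _interval in partition_range:
        pass

def footprint_sync_pass(partition_range):
    # One full iteration of the "footprint" procedure: syncs only
    # replica 2.9 data, ignoring other packing (placeholder body).
    for _interval in partition_range:
        pass

def run_sync(partition_range, passes):
    # Alternate the two procedures: normal, footprint, normal, ...
    # Each procedure completes a full pass before the other starts.
    procedures = itertools.cycle([("normal", normal_sync_pass),
                                  ("footprint", footprint_sync_pass)])
    order = []
    for _ in range(passes):
        name, proc = next(procedures)
        proc(partition_range)
        order.append(name)
    return order
```

Because both procedures run within the same loop, each gets a bounded share of the sync work, which is what removes the need for a two-phase deployment.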

@ldmberman ldmberman force-pushed the lb/sync-replica-2-9 branch 4 times, most recently from f2accdb to 391647e Compare July 23, 2025 13:44
@ldmberman ldmberman force-pushed the lb/sync-replica-2-9 branch 2 times, most recently from 66bd51b to 7587176 Compare July 29, 2025 12:38
@ldmberman ldmberman force-pushed the lb/sync-replica-2-9 branch 3 times, most recently from b9c681a to 64e57cd Compare August 7, 2025 21:05
@ldmberman ldmberman force-pushed the lb/sync-replica-2-9 branch 2 times, most recently from 7f282f1 to 6e05931 Compare August 14, 2025 18:08
@ldmberman ldmberman force-pushed the lb/sync-replica-2-9 branch 5 times, most recently from 2491973 to e25c853 Compare August 21, 2025 21:51
JamesPiechota and others added 30 commits December 17, 2025 15:08
Since it is primarily a wrapper around access to an ETS table, the gen_server layer was causing some race conditions. Now that it's a straight interface to ETS, we can rely on ETS to resolve certain classes of race conditions. (For example, previously a `put` was implemented as a `cast`, and a direct read from the ETS table could run before the `cast` was processed, missing the update.)
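The race described above can be illustrated with a small Python sketch. The class and method names are stand-ins for the Erlang originals: the queued-write table models a `put` implemented as a gen_server `cast`, while the direct table models a synchronous write straight to ETS.

```python
from collections import deque

class AsyncTable:
    """Old scheme: writes are queued (like a gen_server cast) and
    applied later, while reads go straight to the table."""
    def __init__(self):
        self.table = {}          # stands in for the ETS table
        self.mailbox = deque()   # pending casts, not yet processed

    def put(self, key, value):
        self.mailbox.append((key, value))  # cast: not applied yet

    def drain(self):
        # The gen_server eventually processes its mailbox.
        while self.mailbox:
            key, value = self.mailbox.popleft()
            self.table[key] = value

    def get(self, key):
        # A direct read can miss a put that is still in the mailbox.
        return self.table.get(key)

class DirectTable:
    """New scheme: put writes synchronously, so a subsequent read
    always observes it."""
    def __init__(self):
        self.table = {}

    def put(self, key, value):
        self.table[key] = value

    def get(self, key):
        return self.table.get(key)
```

With the direct interface, ordering between a write and a subsequent read from the same caller is guaranteed by the table itself rather than by mailbox processing.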
Adding a new application to the rebar3 tree requires specific files, at
least from Arweave's point of view. After the release is compiled, a GitHub
job creates artifacts (to be reused in other jobs), but an error is
produced instead:

    tar: ./_build/default/lib/arweave_diagnostic/include: File removed before we read it
    tar: ./_build/default/lib/arweave_diagnostic/priv: File removed before we read it
    tar: ./_build/default/rel/arweave/lib/arweave_diagnostic-0.0.1/include: File removed before we read it
    tar: ./_build/default/rel/arweave/lib/arweave_diagnostic-0.0.1/priv: File removed before we read it
    tar: ./_build/test/lib/arweave_diagnostic/include: File removed before we read it
    tar: ./_build/test/lib/arweave_diagnostic/priv: File removed before we read it
    tar: ./_build/test/rel/arweave/lib/arweave_diagnostic-0.0.1/include: File removed before we read it
    tar: ./_build/test/rel/arweave/lib/arweave_diagnostic-0.0.1/priv: File removed before we read it

The include and priv folders are required, so they must also be present
in the GitHub repository.
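Git does not track empty directories, so a common way to keep them in the repository is a placeholder file. A sketch, with an illustrative app path (the exact location of arweave_diagnostic in the tree is an assumption):

```shell
# Keep the otherwise-empty include/ and priv/ directories in Git by
# adding placeholder files; the app path below is illustrative.
mkdir -p apps/arweave_diagnostic/include apps/arweave_diagnostic/priv
touch apps/arweave_diagnostic/include/.gitkeep
touch apps/arweave_diagnostic/priv/.gitkeep
```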
Also try to reduce some of the coordinator and peer_worker reductions by replacing gen_server:call with gen_server:cast and caching the formatted peer.
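Caching the formatted peer string means the string is built once per distinct peer rather than on every message. A Python sketch of the idea using memoization; the peer representation (an IP tuple plus port) is a hypothetical stand-in for the node's actual peer type:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def format_peer(ip, port):
    # Build the display string once per distinct (ip, port) pair;
    # repeated calls return the cached result instead of
    # re-formatting.
    return "%s:%d" % (".".join(map(str, ip)), port)
```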