Conversation

@AustinWise
Contributor

Motivation and Context

When sending snapshots back and forth between two datasets, it is currently possible to end up in a state where a receive fails with this error message:

cannot receive incremental stream: incremental send stream requires -L (--large-block), to match previous receive.

Adding the -L flag to the zfs send command does not fix the problem.

See issue #18101 for details.
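
The exact reproduction steps are in the linked issue. Roughly, the failure shows up once the per-dataset large_blocks state has diverged between the two sides, as in the following sketch (pool and dataset names are hypothetical, not taken from #18101):

    # The destination has had a large block born at some point, so its
    # large_blocks feature is active; the source never has, so its feature
    # is merely enabled.
    zpool get -H -o value feature@large_blocks srcpool    # enabled
    zpool get -H -o value feature@large_blocks dstpool    # active

    # Because the feature is not active on the source, even -L does not put
    # the large-block feature flag into the stream, so the destination
    # refuses the incremental receive:
    zfs send -L -i @snap1 srcpool/fs@snap2 | zfs receive dstpool/fs
    # cannot receive incremental stream: incremental send stream requires
    # -L (--large-block), to match previous receive.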

Description

When receiving a stream with the large block feature flag, activate the large_blocks feature in the destination dataset.

I think this is a reasonable way to solve the problem I'm having, for a couple of reasons:

  • This makes the destination dataset more similar to the sending dataset, i.e. the large blocks feature will be active whether or not the dataset currently contains any large blocks.
  • If users wish to avoid having the large blocks feature activated in the destination dataset, they can create the send stream without using either -L or --raw (see the sketch below).
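
A rough before-and-after illustration of those two points (pool and dataset names are hypothetical):

    # Assume large_blocks is already active on the source, so a -L stream
    # carries the feature flag even if this particular stream contains no
    # large blocks. With this change, the receive activates the feature on
    # the destination as well (visible at the pool level once any dataset
    # in that pool has activated it):
    zfs send -L srcpool/fs@snap | zfs receive dstpool/fs
    zpool get -H -o value feature@large_blocks dstpool    # now "active"

    # To keep the feature inactive on the destination, generate the stream
    # without -L (and without --raw):
    zfs send srcpool/fs@snap | zfs receive dstpool/other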

How Has This Been Tested?

Some new tests are included in this PR. They currently fail when run on master and pass with this PR applied. They test that both full sends and incremental sends cause the large blocks feature to be activated in the destination dataset.

I ran these tests on Ubuntu 24.04, with kernel 6.14.0-37.
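
In essence, each new test asserts the behavior sketched below. This is a simplified illustration in the test suite's ksh style, not the actual scripts added by this PR; the pool variables and file sizes are placeholders:

    #!/bin/ksh -p
    . $STF_SUITE/include/libtest.shlib

    # Activate large_blocks on the source by writing one block larger than
    # 128K, then remove the file so the stream carries no large blocks,
    # only the feature flag.
    log_must zfs create -o recordsize=1M $POOL/sendfs
    log_must dd if=/dev/urandom of=/$POOL/sendfs/big bs=1M count=1
    log_must sync_pool $POOL
    log_must rm /$POOL/sendfs/big
    log_must zfs snapshot $POOL/sendfs@snap
    log_must eval "zfs send -L $POOL/sendfs@snap | zfs receive $POOL2/recvfs"

    # With this change the destination activates large_blocks even though
    # it never received a large block.
    [[ $(zpool get -H -o value feature@large_blocks $POOL2) == "active" ]] || \
        log_fail "large_blocks was not activated on the destination"
    log_pass "receiving a -L stream activates large_blocks on the destination"

The incremental test makes the analogous check for a stream generated with -i.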

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Quality assurance (non-breaking change which makes the code more robust against bugs)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • Documentation (a change to man pages or other documentation)

Checklist:

@github-actions github-actions bot added the "Status: Work in Progress" (Not yet ready for general review) label Jan 3, 2026
@AustinWise AustinWise marked this pull request as ready for review January 3, 2026 20:35
Copilot AI review requested due to automatic review settings January 3, 2026 20:35
@github-actions github-actions bot added the "Status: Code Review Needed" (Ready for review and testing) label and removed the "Status: Work in Progress" (Not yet ready for general review) label Jan 3, 2026

Copilot AI left a comment

Pull request overview

This PR fixes issue #18101 where incremental ZFS send/receive operations can fail with a large_blocks feature mismatch error. The solution automatically activates the large_blocks feature on the destination dataset when receiving a stream that has the large block feature flag set, ensuring feature consistency between source and destination datasets.

Key Changes

  • Modified dmu_recv_begin_sync() to activate the large_blocks feature on the destination dataset when receiving streams with the DMU_BACKUP_FEATURE_LARGE_BLOCKS flag
  • Added two new test cases to verify the feature activation works correctly for both full and incremental sends

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

  • module/zfs/dmu_recv.c: Adds logic to activate the large_blocks feature on the destination dataset during receive operations, following the same pattern as the existing longname feature activation
  • tests/zfs-tests/tests/functional/rsend/send_large_blocks_inital.ksh: New test verifying that full send streams propagate large_blocks feature activation even when no large blocks are present
  • tests/zfs-tests/tests/functional/rsend/send_large_blocks_incremental.ksh: New test verifying that incremental sends activate the large_blocks feature when the source dataset has it activated


@AustinWise AustinWise force-pushed the austin/large-block-send-receive branch from d49befb to b44313e on January 4, 2026 15:37
Member

@amotin amotin left a comment

While the actual change makes sense to me, please take a look at the AI comment about the test name, and you should probably also add the tests to Makefile.am and the runfiles. Also, I wonder why you inserted the feature activation at the end? I would do it earlier, considering the flag was introduced earlier than longname.

Copilot AI review requested due to automatic review settings January 6, 2026 05:47
@AustinWise AustinWise force-pushed the austin/large-block-send-receive branch from b44313e to 3af961c on January 6, 2026 05:47

Copilot AI left a comment

Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.


@AustinWise
Contributor Author

While the actual change makes sense to me, please take a look at the AI comment about the test name, and you should probably also add the tests to Makefile.am and the runfiles. Also, I wonder why you inserted the feature activation at the end? I would do it earlier, considering the flag was introduced earlier than longname.

Thanks, I addressed the misspelling in the test case name and added the new test to the runfile and Makefile.am. I moved the activation of the large block feature above the long name feature as requested. I also extended the comment to explain why the feature activation is needed.

ZFS send streams include a feature flag DMU_BACKUP_FEATURE_LARGE_BLOCKS
to indicate the presence of large blocks in the dataset. On the sending
side, this flag is included if the `-L` flag is passed to `zfs send`
and the feature is active in the dataset. On the receive side, the
stream is refused if the feature is active in the destination dataset
but the stream does not include the feature flag.

The problem is that the feature is only activated when a large block
is born. If a large block has been born in the destination but never
in the source, an incremental send from the source cannot be
received. This can arise when sending streams back and forth between
two datasets.

This commit fixes the problem by always activating the large blocks
feature when receiving a stream with the large block feature flag.

Signed-off-by: Austin Wise <[email protected]>
@AustinWise AustinWise force-pushed the austin/large-block-send-receive branch from 3af961c to ea29048 on January 6, 2026 05:54
@behlendorf behlendorf added the "Status: Accepted" (Ready to integrate: reviewed, tested) label and removed the "Status: Code Review Needed" (Ready for review and testing) label Jan 8, 2026
@behlendorf behlendorf merged commit 794f158 into openzfs:master Jan 8, 2026
40 of 42 checks passed