
Allow and prefer special vdevs as ZIL #17505


Open · wants to merge 1 commit into master

Conversation

@amotin (Member) commented Jul 2, 2025

Before this change, ZIL blocks were allocated only from normal or SLOG vdevs. In the typical situation where special vdevs are SSDs and normal vdevs are HDDs, this could cause weird inversions: data blocks are written to SSDs, while the ZIL records referencing them go to HDDs.

This change assumes that special vdevs typically have much better (or at least no worse) latency than normal vdevs, and so in the absence of SLOGs they should store ZIL blocks. Similar to the embedded log class on normal vdevs, it introduces a special embedded log allocation class and updates the allocation fallback order to: SLOG -> special embedded log -> special -> normal embedded log -> normal.
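
A minimal sketch of that fallback order, assuming a hypothetical zil_select_class() and a spa_class_has_space() helper that stands in for whatever suitability check the real allocator performs; the spa_*_class() accessors already exist in OpenZFS, with spa_special_embedded_log_class() being the one this change introduces:

static metaslab_class_t *
zil_select_class(spa_t *spa)
{
	/* Dedicated SLOG vdevs always win. */
	if (spa_class_has_space(spa_log_class(spa)))
		return (spa_log_class(spa));
	/* Metaslabs reserved for log blocks on special vdevs. */
	if (spa_class_has_space(spa_special_embedded_log_class(spa)))
		return (spa_special_embedded_log_class(spa));
	/* Any other special-class space. */
	if (spa_class_has_space(spa_special_class(spa)))
		return (spa_special_class(spa));
	/* Metaslabs reserved for log blocks on normal vdevs. */
	if (spa_class_has_space(spa_embedded_log_class(spa)))
		return (spa_embedded_log_class(spa));
	/* Last resort: the normal class. */
	return (spa_normal_class(spa));
}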

The code tries to guess whether a data block is going to be written to a normal or a special vdev (this cannot be determined precisely before compression) and prefers indirect writes for blocks headed to a special vdev, to avoid writing them twice. For blocks headed to a normal vdev, the special vdev by default acts as a SLOG, reducing write latency at the cost of higher special vdev wear; this behavior is tunable via a module parameter.
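
A hedged sketch of that decision; the tunable name and the zil_block_would_hit_special() helper below are illustrative, not the PR's actual identifiers, while WR_INDIRECT is the existing ZIL record type for writes that reference the data block in place:

/* Illustrative tunable: 1 = special vdev may act as a SLOG for HDD data. */
static int zil_special_is_slog = 1;

static boolean_t
zil_prefer_indirect(spa_t *spa, uint64_t size)
{
	/*
	 * If the data block is expected to land on a special vdev,
	 * copying it into a ZIL block on the same SSDs would write it
	 * twice; prefer an indirect (WR_INDIRECT) record instead.
	 */
	if (zil_block_would_hit_special(spa, size))	/* hypothetical */
		return (B_TRUE);
	/*
	 * Data headed to normal (HDD) vdevs: let the special vdev act
	 * as a SLOG (lower sync latency, more SSD wear) unless the
	 * administrator disabled that via the tunable.
	 */
	return (!zil_special_is_slog);
}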

This should allow HDD pools with a decent SSD special vdev to handle synchronous workloads without requiring an additional SLOG SSD, which is impractical in many scenarios.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Quality assurance (non-breaking change which makes the code more robust against bugs)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • Documentation (a change to man pages or other documentation)


@amotin added the "Status: Code Review Needed" label Jul 2, 2025
@behlendorf self-requested a review Jul 3, 2025
@pcd1193182 (Contributor) left a comment


Conceptually, I think this makes sense. The special class is likely going to have better performance, and using that to store ZIL writes makes a lot of sense. The two parts that give me pause are:

  1. The special embedded log; this is a permanent sacrifice of some of the high-performing storage dedicated to storing metadata. For users with zil-heavy workloads, who don't also have a SLOG, it's probably worth it; but for users who don't fit that bill, they're losing some of their special space unconditionally. On the other hand, it's not that much space, all things considered. One metaslab (currently at most 16GiB) is not the end of the world.
  2. The attempt to automatically detect which class the write will eventually end up in. This sort of auto-sensing is very tricky, and can result in weird and unpredictable behavior once deployed. That said, for this case it's probably not too problematic if it gets it wrong; the data will still end up in the right place eventually.

@amotin (Member, Author) commented Jul 11, 2025

@pcd1193182 :

but for users who don't fit that bill, they're losing some of their special space unconditionally.

True. I decided it was not worth the complexity of changing this at run time when a SLOG is added or removed, and not reserving the log space might be bad for fragmentation. With my recent change we could allow fallback from the special class into special_embedded_log to use that space if we have to, but with the same fragmentation consequences once it is used.

That said, for this case it's probably not too problematic if it gets it wrong; the data will still end up in the right place eventually.

True. The only difference is whether we double-write when we should not, or skip it when we should. The difference should only show up in performance.

Comment on lines 246 to 250
metaslab_class_t *spa_log_class; /* intent log data class */
metaslab_class_t *spa_embedded_log_class; /* log on normal vdevs */
metaslab_class_t *spa_special_class; /* special allocation class */
metaslab_class_t *spa_special_embedded_log_class; /* log on special */
metaslab_class_t *spa_dedup_class; /* dedup allocation class */

Future cleanup opportunity: there's possibly enough of these now to make them an array indexed on an enum, and then we don't have to repeat ourselves over and over when doing something to all of them (init/fini, stats, etc).
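
A rough sketch of what that cleanup could look like, under hypothetical names (metaslab_class_destroy() is the existing destructor):

typedef enum spa_alloc_class {
	SPA_CLASS_NORMAL = 0,
	SPA_CLASS_LOG,				/* intent log (SLOG) */
	SPA_CLASS_EMBEDDED_LOG,			/* log on normal vdevs */
	SPA_CLASS_SPECIAL,
	SPA_CLASS_SPECIAL_EMBEDDED_LOG,		/* log on special vdevs */
	SPA_CLASS_DEDUP,
	SPA_CLASSES
} spa_alloc_class_t;

/* In spa_t, replacing the individual pointers: */
/*	metaslab_class_t *spa_classes[SPA_CLASSES]; */

static void
spa_classes_fini(spa_t *spa)
{
	/* One loop instead of repeated per-class destroy calls. */
	for (int c = 0; c < SPA_CLASSES; c++)
		metaslab_class_destroy(spa->spa_classes[c]);
}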
