
[PoC] Split rootfs #5470

Draft
OhmSpectator wants to merge 4 commits into lf-edge:master from OhmSpectator:feature/split-rootfs

Conversation

@OhmSpectator
Member

Description

This PR introduces the split-rootfs proof of concept for EVE. The build now produces two LinuxKit images - a minimal “bootstrap” rootfs with only boot-critical services and a “pkgs” rootfs containing non-critical services - and exposes make targets to build/run them either independently or together. Pillar gains the extsloader agent that discovers pkgs.img, mounts it, and starts its services via containerd so they appear in eve list. Documentation in docs/SPLIT-ROOTFS.md describes the architecture, workflows, and validation steps for the experiment.
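
For orientation, the loader's runtime sequence looks roughly like the following (shown as shell purely for illustration; the agent itself is Go code inside Pillar, and the paths and containerd details below are assumptions):

# illustration only, not the actual implementation
mount -o loop,ro /persist/pkgs.img /persist/pkgs   # discover and mount the pkgs rootfs
ls /persist/pkgs                                   # enumerate the bundled services
# each service found there is then started through containerd,
# so it appears next to the built-in services in `eve list`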

PR dependencies

None.

How to test and validate this PR

  1. Build both rootfs images and a bootstrap live disk:
make pkgs eve multi_rootfs live-bootstrap

Confirm the artifacts appear under dist/amd64/<version>/installer/.

  2. Boot with pkgs injection:
make run-bootstrap-with-pkgs

Inside the running EVE instance, confirm /persist/pkgs is mounted, extsloader is running (logread | grep extsloader), and the pkgs services show up in eve status.
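
For reference, the in-guest checks can be run from the EVE shell roughly as follows (exact output wording may differ):

mount | grep /persist/pkgs        # pkgs.img should be mounted here
logread | grep extsloader         # the loader should have logged discovery and startup
eve status                        # the services bundled in pkgs.img should be listed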

No automated tests cover this yet.

Changelog notes

Adds experimental split-rootfs tooling (bootstrap + pkgs images), developer run targets, and the Pillar external services loader that auto-starts pkgs.img contents.

PR Backports

  • 16.0: No, experimental development-only change.
  • 14.5-stable: No.
  • 13.4-stable: No.

Checklist

  • I've provided a proper description
  • I've added the proper documentation
  • I've tested my PR on amd64 device
  • I've tested my PR on arm64 device
  • I've written the test verification instructions
  • I've set the proper labels to this PR
  • I've checked the boxes above, or I've provided a good reason why I didn't check them.

Add dedicated bootstrap/pkgs LinuxKit templates, glue them into the
build graph, and expose make targets for producing each squashfs so the
new architecture can be built independently of the legacy rootfs.

Signed-off-by: Nikolay Martyanov <nikolay@zededa.com>
Wire the split rootfs into developer workflows by adding live/run
targets and documenting the new entry points in the help output.

Signed-off-by: Nikolay Martyanov <nikolay@zededa.com>
Introduce the external-services loader that discovers pkgs.img, mounts
it, and starts services through containerd, and register the agent
inside the boot sequence.

Signed-off-by: Nikolay Martyanov <nikolay@zededa.com>
OhmSpectator marked this pull request as draft on December 2, 2025, 15:21
Add SPLIT-ROOTFS.md outlining the motivation, build pipeline, and Pillar
integration for the bootstrap/pkgs experiment.

Signed-off-by: Nikolay Martyanov <nikolay@zededa.com>
@OhmSpectator
Member Author

I'm impressed I had only one Yetus warning =D

@deitch
Contributor

deitch commented Dec 2, 2025

I am quite in favour of this idea, at least in principle. @rene has raised it before as well.

I am not convinced this is the best way to solve the "300MB limit" issue; that is better handled by resizing partitions (which can be done safely, if done correctly). However, this does have the option of making the "common OS" (term created by Daniel Derksen) much more common, with all of the "additional things" then added later or earlier. That is the real value IMO: reducing EVE's proliferation.

I have been using the term "boot+core" for the first part, and "system+app" for the second part:

  • boot+core = everything needed to get EVE to the point where it can communicate with a controller, download config and OS updates, and get the second parts
  • system+app = everything that is add on for managing the system (observability, logging, debug, etc.) or apps (k3s, runtimes, volume management, etc.)

Of course, we can be OK with any terminology.

I do have some specific disagreements with parts of the design.

  1. Is linuxkit the best way to make the "system+app" (other parts)? I am not convinced.
  2. Should "other parts" ("system+app") be a single monolithic squashfs? I don't think so. I think having it be container images is much more flexible, and allows us to pick and choose which pieces we want.
  3. We should use those in erofs format, so that we can mount them directly as-is.
  4. We should use fs-verity or dm-verity to verify those, so they cannot be changed even on disk.
  5. How does the core get access to system+app? I think it should be able to find them, but it also should be able to download them. More accurately, I think these should be designated by the controller, along with the expected digests, so that they are as immutable and verifiable as the system itself.

I have done a number of experiments on containers in erofs with verity, and it works quite well.
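
For context, a miniature version of that kind of experiment (file and mount-point names made up; assumes erofs-utils and fsverity-utils, and that the backing filesystem has fs-verity enabled):

mkfs.erofs svc.erofs svc-rootfs/      # pack one service's root filesystem as erofs
fsverity enable svc.erofs             # seal the image; the kernel now verifies every read of it
fsverity measure svc.erofs            # prints the digest to compare against an expected value
mount -t erofs -o loop,ro svc.erofs /mnt/svc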

Summarizing: I like what you are doing, definitely bring @rene into it, as he has ideas. How we build and integrate with the "other parts" should be subject to discussion.

@OhmSpectator
Member Author

@deitch, it's just a PoC for one of the approaches that @rene asked me to accomplish by the end of this year. I totally like your comments and I guess you have a better vision of which tools should be used for that. There is a design doc that you can also comment on. Ping @rene in DM to get a link to it. For now I'm done with the task.

@deitch
Contributor

deitch commented Dec 2, 2025

Never assume I have a better vision; just a potentially different one.

@codecov

codecov bot commented Dec 2, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 28.08%. Comparing base (2281599) to head (b5a1ebb).
⚠️ Report is 143 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5470      +/-   ##
==========================================
+ Coverage   19.52%   28.08%   +8.55%     
==========================================
  Files          19       19              
  Lines        3021     2314     -707     
==========================================
+ Hits          590      650      +60     
+ Misses       2310     1520     -790     
- Partials      121      144      +23     


@eriknordmark
Contributor

I am not convinced this is the best way to solve the "300MB limit" issue; that is better handled by resizing partitions (which can be done safely, if done correctly). However, this does have the option of making the "common OS" (term created by Daniel Derksen) much more common, with all of the "additional things" then added later or earlier. That is the real value IMO: reducing EVE's proliferation.

Even if we had the resizing today I don't know how we would convince all users to do that operation in production before they need the next set of fixes for CVEs (which are likely to push the current image above the 300MB limit). So I think we need this tool in our collection of tools to be able to move forward (in addition to getting the resizing in place).

@eriknordmark
Contributor

I am not convinced this is the best way to solve the "300MB limit" issue; that is better handled by resizing partitions (which can be done safely, if done correctly).

I thought I added this comment from my phone earlier, but here we go again.

Even if we have good partition resizing support, a user probably needs to take some additional care (e.g., not powering off, intentionally or accidentally, during the resizing operation), which means they might want to pick a good time to do it.
And while they prepare for that, there might be another CVE (akin to the recent runc CVE) that grows the size of the EVE rootfs image by a few more megabytes.
If I were a user/customer, I would be unhappy if I couldn't apply a fix for such a CVE until I had taken that larger maintenance window and made the effort to do the resizing.

Thus I think we need this approach plus resizing (plus a larger default partition size at install).

@deitch
Contributor

deitch commented Dec 12, 2025

Thus I think we need this approach plus resizing (plus a larger default partition size at install).

Agreed. In the long run, the resize will cover us for many years to come, especially with the new sizes. But we need this.

@deitch
Contributor

deitch commented Dec 12, 2025

Even if I think that, implementation-wise, it should be somewhat different.

@shjala
Member

shjala commented Dec 16, 2025

This approach (or any, for that matter), if not designed from the ground up to address the security implications it may bring, can completely undermine one of EVE’s foundational security guarantees: a device left unattended cannot be permanently backdoored (at both the EVE-OS level and the application level if an encrypted partition is used).

We should use fs-verity or dm-verity to verify those, so they cannot be changed even on disk

@deitch This can’t help without root hash signing (which is another can of worms by itself). An attacker with root access (or, in our case, with physical disk access) can simply modify the image and then update the entire hash tree.

There are possible solutions, but each has its own limitations and brings complexity:

  • We can use full-disk encryption with TPM-stored keys (this can include the run-time part of the OS), but that then brings the risk of the device not booting at all because of PCR changes.
  • Build-time signing of run-time images is possible, but who owns the signing key? lf-edge? Zededa? The eve-os project?
  • We can extend our measured boot + remote attestation to the run-time images, but then what happens when measured boot fails to match the known-good state? We won't run part of the system? Then we may need to move all the diagnostic-related code to base-os.
  • Deliver extra run-time images signed by the controller, with the controller's public key embedded in the verified base-os?
  • Controller-based hash verification: base-os queries the controller to verify runtime image hashes at load time (no pre-signing needed, works offline with cached approvals); a rough sketch follows this list.
  • etc.
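
As a rough illustration of the controller-based hash check (paths and the source of the expected digest are hypothetical):

expected=$(cat /run/controller/pkgs.img.sha256)            # expected digest from the controller or a cached approval
actual=$(sha256sum /persist/pkgs.img | cut -d' ' -f1)
[ "$actual" = "$expected" ] || { echo "pkgs.img digest mismatch, refusing to load" >&2; exit 1; }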

@andrewd-zededa I'm not familiar with the design of eve-k, but the same issue may apply to it ☝🏼

@eriknordmark
Contributor

  • We can extend our measured boot + remote attestation to the run-time images, but then what happens when measured boot fails to match the known-good state? We won't run part of the system? Then we may need to move all the diagnostic-related code to base-os.

@shjala
If we treat the additional EVE images the same as the rootfs image, then can't we just measure them into the appropriate PCR before we start using them?
Today we let EVE run (and attest its PCRs to the controller), so that approach doesn't have to change just because we split parts out of the rootfs image (and we can potentially extend this to other "optional" images like Longhorn down the road, but that is a different but related topic).
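
A minimal sketch of that measurement step (assuming tpm2-tools is available in the rootfs; the PCR index and image path here are arbitrary placeholders):

digest=$(sha256sum /persist/pkgs.img | cut -d' ' -f1)   # measure the additional image
tpm2_pcrextend 14:sha256="$digest"                      # extend a PCR before the image is mounted/used
tpm2_pcrread sha256:14                                  # the value that later gets attested to the controller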

@deitch
Contributor

deitch commented Dec 19, 2025

We should use fs-verity or dm-verity to verify those, so they cannot be changed even on disk

This can’t help without root hash signing (which is another can of worms by itself). An attacker with root access or, in our case with physical disk access, can simply modify the image and then update the entire hash tree

@shjala why not?

  1. System boots
  2. Hardware root of trust in PCR unlocks root filesystem - any changes would cause failure to unlock
  3. Root filesystem downloads or otherwise accesses "extended root blocks" on read-write filesystem
  4. Root filesystem has embedded in it (or downloaded from controller) digest of those blocks
  5. Root uses fsverity/dmverity so the kernel loads those "extended root blocks" and verifies their content

With verity, any changes on the filesystem will cause the kernel to refuse to pass those through. The key is that:

  • the kernel is verified (as now)
  • the extended blocks reader is verified (as now, just part of the core root)
  • the extended blocks reader either includes the digests of the extended blocks built into it (and therefore already verified) or downloads them from the controller (which it trusts)

With the above, the extended blocks reader is assured that the contents of the extended blocks are never changed, or that any change is detected and blocked.
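
Concretely, a dm-verity version of steps 4-5 could look roughly like this (names are invented; the hash tree is produced at build time with veritysetup format, which prints the root hash that the core rootfs would embed or fetch from the controller):

veritysetup open pkgs.img pkgs-verified pkgs.hashtree "$TRUSTED_ROOT_HASH"   # map the image against the trusted root hash
mount -o ro /dev/mapper/pkgs-verified /persist/pkgs                          # tampered blocks then fail verification on read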

What did I miss?
