
[BUG] Unraid: upper fs does not support RENAME_WHITEOUT. #56

Closed
1 task done
nspitko opened this issue May 10, 2024 · 4 comments

Comments


nspitko commented May 10, 2024

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

While this image is running on Unraid, there is frequent (more than once per minute) spew in the kernel logs:

May  9 23:17:20 jibril kernel: overlayfs: upper fs does not support RENAME_WHITEOUT.
May  9 23:17:22 jibril kernel: overlayfs: upper fs does not support RENAME_WHITEOUT.
May  9 23:17:22 jibril kernel: overlayfs: upper fs does not support RENAME_WHITEOUT.
May  9 23:17:22 jibril kernel: docker0: port 1(veth37da7ab) entered blocking state
May  9 23:17:22 jibril kernel: docker0: port 1(veth37da7ab) entered disabled state
May  9 23:17:22 jibril kernel: device veth37da7ab entered promiscuous mode
May  9 23:17:22 jibril kernel: eth0: renamed from veth56a27fb
May  9 23:17:22 jibril kernel: docker0: port 1(veth37da7ab) entered blocking state
May  9 23:17:22 jibril kernel: docker0: port 1(veth37da7ab) entered forwarding state
May  9 23:17:22 jibril kernel: docker0: port 1(veth37da7ab) entered disabled state
May  9 23:17:22 jibril kernel: veth56a27fb: renamed from eth0
May  9 23:17:22 jibril kernel: docker0: port 1(veth37da7ab) entered disabled state
May  9 23:17:22 jibril kernel: device veth37da7ab left promiscuous mode
May  9 23:17:22 jibril kernel: docker0: port 1(veth37da7ab) entered disabled state

This coincides with the container log spewing:

time="2024-05-09T23:30:11.018606437-07:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2024-05-09T23:30:11.018766017-07:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2024-05-09T23:30:11.018786907-07:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2024-05-09T23:30:11.019033491-07:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ffe3bf2d66c0af80420d615b2b4460653d7e9cda3a121362e6d44b4d9eb2e32e pid=2286 runtime=io.containerd.runc.v2

Expected Behavior

Kasm should not pollute the log file

Steps To Reproduce

  1. Install image via the community app store in Unraid
  2. Launch an instance
  3. Observe logs

Environment

- OS: Unraid
- How docker service was installed: Community app store
- /opt mount: /mnt/cache/appdata/kasm
- /mnt/cache format: zfs mirror
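
Since /mnt/cache is ZFS-backed, the installed OpenZFS release is the relevant detail here. As a rough check (editor's sketch, not part of the original report; assumes the standard `zfs` CLI is on PATH, as it is on Unraid builds with ZFS support), the version can be printed with:

```shell
# Print the installed OpenZFS userland and kernel-module versions,
# falling back to a notice if the zfs CLI is not on PATH.
if command -v zfs >/dev/null 2>&1; then
    zfs version
else
    echo "zfs CLI not found"
fi
```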

CPU architecture

x86-64

Docker creation

Extra params: --gpus all
/opt:  /mnt/cache/appdata/kasm

Container logs

[migrations] started
[migrations] no migrations found
usermod: no changes
───────────────────────────────────────

      ██╗     ███████╗██╗ ██████╗ 
      ██║     ██╔════╝██║██╔═══██╗
      ██║     ███████╗██║██║   ██║
      ██║     ╚════██║██║██║   ██║
      ███████╗███████║██║╚██████╔╝
      ╚══════╝╚══════╝╚═╝ ╚═════╝ 

   Brought to you by linuxserver.io
───────────────────────────────────────

To support LSIO projects visit:
https://www.linuxserver.io/donate/

───────────────────────────────────────
GID/UID
───────────────────────────────────────

User UID:    911
User GID:    911
───────────────────────────────────────

[custom-init] No custom files found, skipping...
[ls.io-init] done.
time="2024-05-09T23:37:22.104006866-07:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2024-05-09T23:37:22.104487801-07:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2024-05-09T23:37:22.104511125-07:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2024-05-09T23:37:22.104811290-07:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/57be05bf9fd97ae2ac5ddd1d77500159f531cfc24a8d95225c9b92800d682388 pid=2211 runtime=io.containerd.runc.v2
time="2024-05-09T23:37:42.587858142-07:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2024-05-09T23:37:42.588029725-07:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2024-05-09T23:37:42.588048260-07:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2024-05-09T23:37:42.588305415-07:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d20e75c39f679956719fbb10e7ef32affdb2dd14c6ea9bdab7aa3e84e2c2492d pid=2440 runtime=io.containerd.runc.v2
time="2024-05-09T23:38:13.335136400-07:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2024-05-09T23:38:13.335292053-07:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2024-05-09T23:38:13.335321068-07:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2024-05-09T23:38:13.335576799-07:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e2080790b4e9fe5d5a75cb56602c69aa5f97da66d138c23e1e6995f4262475d4 pid=2580 runtime=io.containerd.runc.v2

(This last segment goes on for quite a while, truncated for readability)

Thanks for opening your first issue here! Be sure to follow the relevant issue templates, or risk having this issue marked as invalid.

@LinuxServer-CI (Collaborator)

This issue has been automatically marked as stale because it has not had recent activity. This might be due to missing feedback from OP. It will be closed if no further activity occurs. Thank you for your contributions.


nspitko commented Jun 19, 2024

Did some investigation, and it seems this is a known/expected limitation of OpenZFS, fixed in version 2.2 (which added RENAME_WHITEOUT support for overlayfs). This will go away once Unraid updates to that version.
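
As a rough way to confirm this diagnosis on a given host, one can probe whether a filesystem accepts `RENAME_WHITEOUT` directly. The sketch below is an editor's illustration, not part of the original report: it assumes Linux with glibc >= 2.28 (which exports a `renameat2()` wrapper), attempts a rename with the flag on a scratch file, and treats only `EINVAL` as "unsupported" (an `EPERM` means the filesystem accepts the flag but the caller lacks `CAP_MKNOD` to create the whiteout device node).

```python
import ctypes
import errno
import os
import tempfile

# Constants from <linux/fs.h> and <fcntl.h>.
RENAME_WHITEOUT = 1 << 2
AT_FDCWD = -100

def supports_rename_whiteout(dirpath: str) -> bool:
    """Probe whether the filesystem holding dirpath accepts RENAME_WHITEOUT.

    Returns False when the kernel/filesystem rejects the flag (EINVAL),
    True otherwise (including EPERM, which indicates the flag is accepted
    but the caller lacks CAP_MKNOD)."""
    libc = ctypes.CDLL(None, use_errno=True)
    src = os.path.join(dirpath, ".whiteout-probe")
    dst = os.path.join(dirpath, ".whiteout-probe-renamed")
    with open(src, "w"):
        pass  # create an empty scratch file to rename
    try:
        ret = libc.renameat2(AT_FDCWD, src.encode(), AT_FDCWD, dst.encode(),
                             RENAME_WHITEOUT)
        if ret == 0:
            return True
        return ctypes.get_errno() != errno.EINVAL
    finally:
        # Clean up both names; on success src is a whiteout char device.
        for p in (src, dst):
            try:
                os.unlink(p)
            except FileNotFoundError:
                pass

if __name__ == "__main__":
    d = tempfile.mkdtemp()
    print("RENAME_WHITEOUT supported:", supports_rename_whiteout(d))
    os.rmdir(d)
```

On a pre-2.2 OpenZFS dataset this should report unsupported, matching the kernel's `overlayfs: upper fs does not support RENAME_WHITEOUT` message; on ext4 or tmpfs it should report supported.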

@nspitko nspitko closed this as completed Jun 19, 2024
@LinuxServer-CI LinuxServer-CI moved this from Issues to Done in Issue & PR Tracker Jun 19, 2024

This issue is locked due to inactivity

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jul 20, 2024