.pr_agent_accepted_suggestions
| PR 7665 (2026-05-03) |
[reliability] uncaughtException doesn't exit
src/tests/backend/diagnostics.ts registers an uncaughtException handler that only logs and returns. If tests/backend/common.ts is not imported in a spec run, this handler can prevent backend-tests from exiting non-zero on fatal errors, turning a crash into a hang or continued execution. It also breaks the intended "convert unhandledRejection to uncaught exception" fail-fast behavior for specs that don't load common.ts (common.ts explicitly calls process.exit(1) to preserve default behavior).
- src/tests/backend/common.ts has an uncaughtException handler that logs and then calls process.exit(1), specifically to preserve default behavior when a handler exists.
- src/package.json now requires ./tests/backend/diagnostics.ts globally, but does not require common.ts.
- Some specs don't import common.ts, so diagnostics.ts can be the only handler.
- src/tests/backend/diagnostics.ts[55-61]
After logging, ensure the process still fails fast when no other handler will do it. Options (see the sketch after this list):
- Mimic common.ts: call process.exit(1) after logging.
- If you want to defer to other handlers when present: only force-exit if this is the only uncaughtException listener (e.g., if (process.listeners('uncaughtException').length === 1) process.exit(1);), otherwise return.
- Alternatively, set process.exitCode = 1 and schedule a setImmediate(() => process.exit(1)) so later-registered handlers still get a chance to run/log first.
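A minimal sketch of the listener-count option, assuming it replaces the log-only handler in src/tests/backend/diagnostics.ts (the logging call is illustrative):

```typescript
process.on('uncaughtException', (err: Error) => {
  console.error(`uncaught exception: ${err.stack || err}`);
  // Defer to another handler (e.g., the one in common.ts) if present;
  // otherwise preserve Node's default fail-fast behavior.
  if (process.listeners('uncaughtException').length === 1) {
    process.exitCode = 1;
    // setImmediate lets pending log writes flush before terminating.
    setImmediate(() => process.exit(1));
  }
});
```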
| PR 7660 (2026-05-02) |
[correctness] Click steals rename focus
The new delegated click handler on #otheruserstable tr[data-authorId] triggers even when the click target is the existing rename <input> in the name cell, and it later focuses #chatinput. This interrupts the rename workflow and makes it difficult or impossible to name unnamed users from the user list.
Unnamed users are rendered with an <input> in the .usertdname cell and are wired up via #otheruserstable input.newinput.
- src/static/js/pad_userlist.ts[373-410]
- src/static/js/pad_userlist.ts[183-196]
Add early-return guards before doing any prefill/show work, for example:
- Return if $(event.target).closest('input, textarea, select, button, a, [contenteditable=true]').length.
- Or at minimum return if $(event.target).closest('.usertdname input').length.
This keeps the row-click behavior while preserving rename semantics; a sketch follows.
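A minimal sketch of the guard, assuming a delegated jQuery handler shaped like the one described above; the focus call stands in for the existing prefill/show logic:

```typescript
$('#otheruserstable').on('click', 'tr[data-authorId]', (event) => {
  // Ignore clicks on interactive elements (notably the rename <input> in
  // .usertdname) so the rename workflow keeps focus.
  if ($(event.target).closest(
      'input, textarea, select, button, a, [contenteditable=true]').length) return;
  $('#chatinput').trigger('focus'); // existing row-click behavior
});
```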
| PR 7647 (2026-05-01) |
[observability] Hardcoded 5s socket wait
waitForSocketEvent() now hardcodes a 5000ms timeout for all socket events. In suites that rely on Mocha's default per-test timeout, failures can surface as a generic Mocha timeout instead of the helper's descriptive "timed out waiting for <event>" error, and failing runs wait longer before surfacing the root cause.
Some callers (e.g., connect/handshake paths) legitimately need a longer timeout on slow CI runners, but other call sites benefit from failing fast and producing a clear error.
- Add an optional timeoutMs parameter to waitForSocketEvent(socket, event, timeoutMs?), and use it in setTimeout(..., timeoutMs).
- Update slow paths (connect(), handshake(), and any other known-slow call sites) to pass 5000 explicitly.
- Keep a shorter default (or ensure suites that rely on defaults set this.timeout(...) high enough) to avoid Mocha masking the helper's error.
- src/tests/backend/common.ts[114-219]
- src/tests/backend/specs/messages.ts[12-60]
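A minimal sketch of the parameterized helper, assuming a socket.io-style socket with once(); the 1000ms default is illustrative:

```typescript
const waitForSocketEvent = (socket: any, event: string, timeoutMs = 1000): Promise<unknown> =>
  new Promise((resolve, reject) => {
    const timer = setTimeout(
        () => reject(new Error(`timed out waiting for ${event}`)), timeoutMs);
    socket.once(event, (arg: unknown) => {
      clearTimeout(timer);
      resolve(arg);
    });
  });

// Known-slow paths opt in to the longer wait explicitly:
// await waitForSocketEvent(socket, 'connect', 5000);
```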
[reliability] SessionStore waits still tight
Several SessionStore expiry tests use fixed setTimeout sleeps with only ~30ms headroom (e.g., expires in 300ms, then sleep 330ms) and then assert the DB record is gone/present. Even with the increased windows, timer jitter or event-loop delays can still race the actual cleanup work under heavy CI load; this PR improves the margin but doesn't eliminate the underlying flake pattern.
SessionStore schedules expiration cleanup with setTimeout(...) and documents races on slow systems. Tests that assume cleanup has run at an exact time remain inherently timing-fragile.
- Replace fixed sleeps like await new Promise(r => setTimeout(r, 330)) + assert with a small polling helper (e.g., poll every 25ms up to a 2–5s max) that waits until the DB condition is met; see the sketch below.
- Keep the expiry durations modest, but remove the tight coupling between "sleep duration" and "expiry duration".
- src/tests/backend/specs/SessionStore.ts[47-168]
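A minimal sketch of the polling helper; the name and intervals are illustrative:

```typescript
const pollUntil = async (
    cond: () => Promise<boolean>, intervalMs = 25, maxMs = 5000): Promise<void> => {
  const deadline = Date.now() + maxMs;
  while (!(await cond())) {
    if (Date.now() > deadline) throw new Error('condition not met in time');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
};

// Instead of sleeping 330ms and asserting, wait for the DB condition itself:
// await pollUntil(async () => (await db.get(sessionKey)) == null);
```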
| PR 7645 (2026-05-01) |
[reliability] Concurrency blocks PR builds
With the new pull_request trigger, PR docs builds share the same concurrency group ("pages") as real deployments, so only one run can execute at a time. This serializes PR builds and push deployments, delaying PR feedback or production docs deployments whenever both are active.
The PR adds a pull_request trigger but keeps the existing global concurrency group. Concurrency is useful for deployments but typically undesirable for PR-only builds.
- .github/workflows/build-and-deploy-docs.yml[4-17]
- .github/workflows/build-and-deploy-docs.yml[28-33]
Either:
- Use distinct concurrency groups per event (e.g., pages-${{ github.event_name }}), or
- Split into separate jobs: a PR build job without the Pages concurrency lock and a push-only deploy job with the lock.
[maintainability] Undocumented `engines.node` bump
The PR raises the minimum supported Node version via `engines.node` to >=22.12.0, but related documentation still claims the project requires >=22.0.0. This can mislead contributors/users and makes a breaking compatibility change without updating the documented guidance.
This is a user-visible compatibility/requirements change and should be reflected anywhere the Node requirement is documented.
- doc/npm-trusted-publishing.md[88-91]
- package.json[45-45]
| PR 7644 (2026-05-01) |
[security] Unvalidated plugin names
`update()` in bin/plugins.ts trusts var/installed_plugins.json and installs every entry by name (except `ep_etherpad-lite`) without enforcing the `ep_` prefix. If that file is corrupted or modified, running `plugins update` can install arbitrary packages.
checkForMigration() already enforces plugin.name.startsWith(plugins.prefix) before installing from installed_plugins.json, but update() does not.
- Add an explicit ep_/plugins.prefix validation filter before invoking installPlugin() (see the sketch below).
- Consider de-duplicating names (e.g., via new Set(names)) to avoid repeated installs if the file contains duplicates.
- bin/plugins.ts[81-112]
- src/static/js/pluginfw/installer.ts[81-134]
- src/static/js/pluginfw/plugins.ts[36-154]
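A minimal sketch of the filter, where `names`, `plugins.prefix`, and `installPlugin` stand in for the existing identifiers in bin/plugins.ts and the plugin framework:

```typescript
declare const names: string[]; // entries read from var/installed_plugins.json
declare const plugins: {prefix: string}; // 'ep_'
declare const installPlugin: (name: string) => Promise<void>;

// De-duplicate, then only install genuine Etherpad plugins.
const toInstall = [...new Set(names)].filter(
    (name) => name.startsWith(plugins.prefix) && name !== 'ep_etherpad-lite');
for (const name of toInstall) await installPlugin(name);
```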
| PR 7636 (2026-04-30) |
[correctness] `theme-color` skipped for non-colibris
`pad.html` only emits `<meta name="theme-color">` when `configuredToolbarColor()` returns a value, but that helper returns `null` for any `skinName` other than `colibris`. Pads using `no-skin` or third-party skins therefore omit the meta tag, failing the requirement that pad HTML output includes a theme-color meta whose content matches the active theme's toolbar color.
The current implementation avoids emitting a potentially wrong color for unknown skins, but the compliance requirement is explicit about always including the meta and matching the active theme.
- src/templates/pad.html[9-14]
- src/templates/pad.html[51-51]
- src/node/utils/SkinColors.ts[23-33]
[correctness] Dark meta mismatches timeslider
timeslider.html emits a prefers-color-scheme: dark theme-color meta whenever settings.enableDarkMode is true, but the timeslider client does not switch to dark skin-variant classes based on OS color scheme. On dark-mode devices the browser chrome can be dark (dark address bar) while the toolbar stays in the configured, typically light, variant.
Pad pages have client-side logic to switch to dark variants on dark OS preference; timeslider appears not to.
- src/templates/timeslider.html[40-42]
- src/static/js/timeslider.ts[70-129]
- src/static/js/pad.ts[648-652]
- Template-only mitigation: only emit a single theme-color meta for timeslider that matches the actual configured toolbar (no prefers-color-scheme variants), or only emit the dark variant if settings.skinVariants already includes a dark toolbar class.
- Proper dark-mode support for timeslider: add early client-side logic in timeslider.ts to mirror the pad page behavior (switch skin variant classes to the dark set when enableDarkMode and matchMedia('(prefers-color-scheme: dark)') match, respecting any stored user preference if applicable). Then the existing dark theme-color meta becomes accurate.
If you pick option (2), consider also updating the theme-color meta dynamically when the skin variants change so the browser chrome stays in sync; a sketch follows.
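A minimal sketch of option (2), assuming it runs early in timeslider.ts; enableDarkMode and the class/color values mirror the super-dark-toolbar behavior described elsewhere in these suggestions:

```typescript
declare const enableDarkMode: boolean; // from settings/clientVars

const applyDarkToolbar = () => {
  document.documentElement.classList.add('super-dark-toolbar');
  // Keep the browser chrome in sync with the class switch.
  document.querySelector('meta[name="theme-color"]')
      ?.setAttribute('content', '#485365');
};

const mq = window.matchMedia('(prefers-color-scheme: dark)');
if (enableDarkMode && mq.matches) applyDarkToolbar();
mq.addEventListener('change', (e) => { if (enableDarkMode && e.matches) applyDarkToolbar(); });
```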
[correctness] Light theme-color stays white
If settings.skinVariants contains only a dark toolbar variant (for example "dark-toolbar"), toolbarThemeColors() updates only the returned "dark" color and leaves "light" at the white default, so the emitted prefers-color-scheme: light theme-color meta stays #ffffff even though the actual toolbar background is dark on light-mode devices.
This shows up when an instance is configured with a dark toolbar variant but the user's OS/browser is in light mode.
- src/node/utils/SkinColors.ts[17-27]
Adjust toolbarThemeColors() so that a toolbar variant token sets the effective toolbar color for both schemes unless an explicit scheme-specific override is present. For example:
- Track the last matched *-toolbar token color as toolbar.
- Initialize {light, dark} to {toolbar, toolbar} when toolbar is found.
- If you want separate values, only split when both a light-toolbar and a dark-toolbar token are present.
Also add/extend unit tests to cover a skinVariants string that contains only dark-toolbar and assert that .light is set to #576273 (or whatever the configured toolbar color is). A sketch follows.
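A minimal sketch of the both-schemes initialization, assuming a TOOLBAR_COLORS token-to-color map like the one SkinColors.ts already uses (the two entries shown are taken from these suggestions):

```typescript
const TOOLBAR_COLORS: Record<string, string> = {
  'dark-toolbar': '#576273',
  'super-dark-toolbar': '#485365',
};

export const toolbarThemeColors = (skinVariants: string): {light: string, dark: string} => {
  let light = '#ffffff';
  let dark = '#ffffff';
  for (const token of skinVariants.split(/\s+/)) {
    const color = TOOLBAR_COLORS[token];
    // A toolbar variant sets the effective color for both schemes;
    // explicit scheme-specific overrides (not shown) could split them.
    if (color != null) light = dark = color;
  }
  return {light, dark};
};

// toolbarThemeColors('dark-toolbar').light === '#576273'
```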
[correctness] `theme-color` wrong for no-skin
`pad.html` always sets `theme-color` from `settings.skinVariants` via `configuredToolbarColor()`, which is hardcoded to colibris variant colors and defaults to `#ffffff`. For the `no-skin` skin, the actual toolbar background comes from core CSS (`#f4f4f4`), so the emitted `theme-color` will not match the toolbar color for that theme, violating the requirement that theme-color match the toolbar color for non-default themes.
- pad.html emits theme-color based solely on settings.skinVariants.
- SkinColors.configuredToolbarColor() only knows colibris variant tokens and falls back to white.
- no-skin uses core CSS toolbar styling (background-color: #f4f4f4).
- src/templates/pad.html[51-52]
- src/node/utils/SkinColors.ts[14-31]
[correctness] `theme-color` missing default meta
The pad page emits the light `theme-color` only with `media="(prefers-color-scheme: light)"`, and omits the dark variant when `settings.enableDarkMode` is false. In a dark OS/browser color scheme there is then no applicable `theme-color`, so the browser UI will not match the (still light) toolbar.
The pad does not switch to dark variants unless enableDarkMode is enabled, so the toolbar remains light even if the OS/browser prefers dark.
- src/templates/pad.html[46-47]
- src/tests/backend/specs/specialpages.ts[81-89]
[correctness] Dark color mismatches toolbar
SkinColors.toolbarThemeColors() treats any configured token containing "dark" (e.g., "dark-toolbar") as the dark-scheme theme-color, but the client-side dark mode code always switches the toolbar class to "super-dark-toolbar". If settings.skinVariants includes "dark-toolbar", the server will emit #576273 for the dark theme-color while the actual toolbar in dark mode will be super-dark (#485365).
Both initial dark-mode application and the UI toggle hardcode super-dark-toolbar.
- src/node/utils/SkinColors.ts[17-28]
- src/templates/pad.html[42-48]
Make the pad page's dark-scheme theme-color match the toolbar class that is actually used in dark mode (super-dark-toolbar). Options include:
- Adjust toolbarThemeColors() (or introduce a pad-specific helper) so dark is derived from TOOLBAR_COLORS['super-dark-toolbar'] instead of being overridden by dark-toolbar.
- Update tests accordingly (the expected dark theme-color should match the forced super-dark-toolbar behavior).
[maintainability] `theme-color` lacks feature flag
The PR introduces a new always-on code path that emits `<meta name="theme-color">` (for light mode) without any feature flag or disable-by-default mechanism. This violates the requirement that new features be gated so they can be safely toggled off if needed.
Compliance requires new features to be opt-in/flagged so they can be turned off safely if needed.
- src/templates/pad.html[46-47]
- src/templates/timeslider.html[41-42]
[maintainability] Mixed export styles
src/node/utils/SkinColors.ts mixes TypeScript named exports (export const ...) with a CommonJS module.exports assignment. This is redundant, makes module semantics harder to reason about across require() and TS imports, and increases the risk of accidental breakage when refactoring or changing build tooling.
- EJS templates load the helper via require('.../SkinColors').
- Vitest tests import it via import { ... } from ....
- TS is configured for module: CommonJS, so ES named exports already compile to CommonJS-compatible exports.
- src/node/utils/SkinColors.ts[17-43]
- Remove module.exports = ... and rely on the existing export const ... named exports (TypeScript will emit CommonJS exports under the current tsconfig).
- Alternatively, remove the export keywords and switch callers/tests to require() consistently, but prefer the first option for TS files.
| PR 7635 (2026-04-30) |
[testability] `publicURL` undocumented in `doc/`
This PR introduces a new user-facing configuration key `publicURL`, but no documentation under the `doc/` folder was updated in the same PR. Operators may miss the new setting and deploy with incorrect OG/Twitter canonical URLs.
This is a feature-impacting operator setting that affects canonical OG/Twitter URLs, so it must be documented in doc/ per the compliance checklist.
- settings.json.template[111-124]
- src/node/utils/Settings.ts[164-168]
- src/node/utils/Settings.ts[327-338]
- doc/docker.md[70-86]
[correctness] IPv6 Host breaks OG URLs
sanitizeHost() rejects valid bracketed IPv6 Host headers (e.g. "[::1]:9001", the required format for IPv6 literals in URLs/Host), causing buildAbsoluteUrl() to fall back to "localhost" and emit incorrect og:url/og:image values on IPv6 literal-host deployments.
Current host allowlist regex does not permit [ or ].
- src/node/utils/socialMeta.ts[109-138]
- Update sanitizeHost to accept either DNS-style hosts (current behavior) or bracketed IPv6 literals with optional port (e.g. \[[0-9a-f:.]+\](:\d{1,5})?).
- Consider parsing publicURL with new URL() and validating the host via URL properties, to avoid regex edge cases (see the sketch below).
- Add a unit test that asserts Host: [::1]:9001 results in og:url using that host (or at least not falling back to localhost).
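A minimal sketch of URL-based host validation, assuming it replaces the regex check in sanitizeHost():

```typescript
const sanitizeHost = (raw: string): string | null => {
  try {
    // Parsing "http://<host>" validates DNS names and bracketed IPv6
    // literals (with optional port) without hand-rolled regexes.
    const u = new URL(`http://${raw}`);
    // Reject anything beyond a bare host[:port] (userinfo, path, query, ...).
    if (u.username || u.password || u.pathname !== '/' || u.search || u.hash) return null;
    return u.host; // normalized, brackets preserved for IPv6
  } catch {
    return null;
  }
};

// sanitizeHost('[::1]:9001') === '[::1]:9001'
```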
[correctness] IPv6 publicURL rejected
sanitizePublicURL()/sanitizeHost() reject bracketed IPv6 hosts, so settings.publicURL values like "https://[2001:db8::1]" are ignored and og:url/og:image fall back to the request-derived origin (or "localhost"). This produces incorrect canonical URLs in link previews for IPv6-based deployments.
IPv6 literals in URLs must be bracketed per RFC 3986. The current HOST_RE only accepts DNS-like hostnames.
- src/node/utils/socialMeta.ts[109-138]
- Prefer parsing with new URL() for publicURL validation instead of regex.
- Update host validation to accept bracketed IPv6 (and optionally validate port range 1–65535).
- Keep existing protections against CRLF/userinfo and overly long values.
[reliability] `decodeURIComponent(o.padName)` may throw
`renderSocialMeta()` calls `decodeURIComponent(o.padName)` on the Express route param, which Express has already decoded once, so it can throw a URIError for pad names that decode to strings containing `%` (e.g., `/p/100%25`). This can break pad/timeslider page responses, preventing OG tags from being emitted for some valid pad IDs.
This logic runs on the request path for /p/:pad and /p/:pad/timeslider. A thrown exception can prevent the response from rendering OG tags (and potentially the page).
- src/node/utils/socialMeta.ts[123-129]
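A minimal sketch of defensive decoding, assuming o.padName is the Express route param used by renderSocialMeta():

```typescript
declare const o: {padName: string};

let padName: string;
try {
  padName = decodeURIComponent(o.padName);
} catch {
  // Express already decoded the param once; decoding again throws URIError
  // for names like '100%' (from /p/100%25). Fall back to the raw value.
  padName = o.padName;
}
```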
[reliability] XSS test allows false pass
The XSS escape test only asserts that og:title (if present) lacks a raw `<script>` tag, so it can pass when og:title is missing entirely (for example due to a 500 response), masking regressions in social meta rendering.
The test currently:
- Does not assert a status code.
- Does not assert that og:title exists.
- src/tests/backend/specs/socialMeta.ts[86-99]
- Ensure the request hits a reliably-rendering path and assert that og:title is present:
  - Use a known-good pad ID and expect 200.
  - Assert ogTag(res.text, 'og:title') is non-null and contains the escaped form (e.g., &lt;script&gt;...).
- Optionally add a separate test that uses a pad ID containing %25 to prevent regressions related to URL decoding/URIError crashes.
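A minimal sketch of the strengthened test, assuming the suite's existing supertest agent and ogTag() helper; the pad ID is illustrative:

```typescript
declare const agent: any; // supertest agent from the suite setup
declare const ogTag: (html: string, prop: string) => string | null;

it('always emits an escaped og:title', async () => {
  const res = await agent.get(`/p/${encodeURIComponent('pad<script>x')}`).expect(200);
  const title = ogTag(res.text, 'og:title');
  if (title == null) throw new Error('og:title missing');
  if (title.includes('<script>')) throw new Error('og:title not escaped');
  if (!title.includes('&lt;script&gt;')) throw new Error('escaped form not found');
});
```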
[security] Host header poisons OG URLs
buildAbsoluteUrl() constructs og:url and og:image using req.protocol and req.get('host'), which derive from client-controlled headers (and, with trust proxy, from X-Forwarded-*). A forged Host/X-Forwarded-* header can make the emitted metadata advertise an attacker-controlled origin, enabling misleading unfurl previews and contributing to cache poisoning if any intermediary caches HTML by path only.
The affected values are og:url, og:image, and their Twitter equivalents, which are inserted into templates via <%- socialMetaHtml %>.
- src/node/utils/socialMeta.ts[95-104]
- src/node/utils/socialMeta.ts[132-140]
- Prefer a configured canonical external origin (e.g., a single setting such as settings.externalUrl/settings.baseUrl) when generating absolute URLs; fall back to the request-derived origin only if not configured (see the sketch below).
- If falling back to request-derived values, validate/normalize the host (e.g., strict hostname/host:port parsing) and consider rejecting/ignoring unexpected values.
- Keep og:image/twitter:image and og:url consistent (same origin).
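A minimal sketch of origin selection, assuming the publicURL setting named in this PR and the validated-host helper discussed above:

```typescript
import type {Request} from 'express';

declare const settings: {publicURL?: string};
declare const sanitizeHost: (raw: string) => string | null;

const resolveOrigin = (req: Request): string => {
  // Prefer the configured canonical origin over anything client-controlled.
  if (settings.publicURL) return new URL(settings.publicURL).origin;
  // Fall back to request-derived values only after validating the host.
  const host = sanitizeHost(req.get('host') ?? '');
  return host != null ? `${req.protocol}://${host}` : 'http://localhost';
};

// Build og:url and og:image from the same resolved origin.
```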
[correctness] Default `socialDescription` mismatched
The shipped default `socialDescription` string does not match the compliance-required default text, so pad pages will not emit the mandated `og:description` value out of the box. This breaks the compliance success criteria for OG metadata defaults and configurability.
Compliance requires the default og:description text to be exactly "A document that everybody can edit at the same time." while still being configurable via settings.json.
- src/node/utils/Settings.ts[328-334]
- settings.json.template[111-126]
- settings.json.docker[120-126]
| PR 7630 (2026-04-29) |
[reliability] `$insertorderedlistButton.first()` index use
The updated ordered list spec still clicks the ordered-list toolbar button via `$insertorderedlistButton.first()`, which is DOM-order dependent and can break when plugins alter the toolbar DOM. This violates the guideline to avoid plugin-sensitive selector/index assumptions.
Prefer a uniquely identifying selector (for example, button[data-l10n-id='pad.toolbar.ol.title'] or another stable attribute that does not depend on element order).
- src/tests/frontend-new/specs/ordered_list.spec.ts[16-21]
| PR 7628 (2026-04-28) |
[correctness] Installer allows Node 20
The PR bumps the minimum supported Node.js version (engines.node and the README) to >=22, but the POSIX and PowerShell one-line installers still only require Node 20, allowing users to proceed with an unsupported runtime and hit failures later (dependency install or runtime). The installer scripts should reject Node 20 to avoid installing a broken setup.
- bin/installer.sh[33-56]
- bin/installer.ps1[37-57]
[correctness] Packages depend on Node 20
The Debian/RPM packaging metadata still declares `nodejs (>= 20)` even though Etherpad now requires Node >=22, so package managers can install Node 20 and deliver a broken Etherpad install.
The .deb/.rpm dependencies should enforce the same minimum Node version as package.json to prevent broken installs.
- packaging/nfpm.yaml[22-26]
- packaging/nfpm.yaml[110-119]
- packaging/README.md[72-75]
- .github/workflows/deb-package.yml[140-147]
[maintainability] Docs still reference Node 20
Some documentation still states Node >=20 and references setup-node 20, contradicting the new minimum of Node >=22 and potentially causing contributors to use an unsupported runtime.
README and engines.node have been updated; remaining docs should be consistent to avoid setup confusion.
- AGENTS.MD[8-11]
- doc/npm-trusted-publishing.md[86-92]
| PR 7624 (2026-04-28) |
[reliability] Publishes empty apt repo
The apt repo generation step never asserts that any `.deb` artifacts were actually copied into the pool; the copy loop skips missing globs without failing. A tagged release run wipes site/public/apt/ and regenerates it, so if artifact names change or downloads fail silently, the workflow signs and pushes an empty repository, breaking installs/upgrades for all apt users until the next successful publish.
- .github/workflows/deb-package.yml[310-339]
After downloading artifacts (or after the copy loop), assert at least one .deb exists and fail otherwise (e.g., find site/public/apt -name '*.deb' | grep -q . || { echo "::error::no .deb artifacts staged"; exit 1; }).
[reliability] Artifact glob exits early
The `Resolve artefact paths` step runs `ls ... | head` under `set -euo pipefail`, so if the glob matches nothing, `ls` exits non-zero and the step terminates before reaching the intended `-z` checks and the friendly `::error::...` message. This job is release-gated (refs/tags/v*) and is expected to fail with a clear message when artifacts are missing; the current behavior fails earlier and more opaquely.
- .github/workflows/deb-package.yml[250-263]
Replace the ls | head pipelines with a non-fatal glob match, for example:
- AMD64=$(ls ... 2>/dev/null | head -n1 || true) (and the same for ARM64), or
- use shopt -s nullglob and pick from an array, or
- use compgen -G 'dist/etherpad_*_amd64.deb' to test existence before ls.
Ensure the step reaches the explicit missing-artifact error path when no matches exist.
[reliability] Key fetch breaks on tags
The apt-publish job downloads packaging/apt/key.asc from raw.githubusercontent.com using a URL built with ${{ github.sha }}. On release runs, Etherpad creates annotated tags, so ${{ github.sha }} can be a tag object SHA that does not resolve to repository contents on raw.githubusercontent.com, causing the key download (and the publish) to fail on release tag runs.
The job is gated to tag pushes (refs/tags/v*) and includes a fallback curl to fetch packaging/apt/key.asc because only gh-pages is checked out.
Prefer a ref that resolves to a tree, e.g. ${{ github.ref_name }} (the tag name) in the raw URL, or add a second checkout of the current ref (in a different path) and copy packaging/apt/key.asc from that checkout.
- .github/workflows/deb-package.yml[337-343]
- .github/workflows/deb-package.yml[230-243]
[maintainability] Key URL inconsistency
Docs and comments disagree on where key.asc is published: generate-signing-key.sh documents the public key URL as ether.github.io/etherpad/key.asc, while the workflow and README in this PR publish and instruct users to fetch it from etherpad.org/key.asc. Anyone following the script comments will hit confusion and failed setup.
- Workflow comments and the staging step indicate site/public/key.asc → https://etherpad.org/key.asc.
- README installation steps use https://etherpad.org/key.asc.
- The new key generation helper script references https://ether.github.io/etherpad/key.asc.
- packaging/apt/generate-signing-key.sh[11-15]
- packaging/README.md[56-67]
- .github/workflows/deb-package.yml[230-237]
Update the helper script comment to match the actual published URL (https://etherpad.org/key.asc). Ensure all references across these files use the same canonical URL.
| PR 7623 (2026-04-28) |
[reliability] test-ui runs admin project
`src/package.json` now runs `npx playwright test` with no path or project filter, so it will execute all configured projects, including `chromium-admin`. The admin specs require extra setup that regular frontend runs do not perform (enabling admin UI tests, building admin assets), causing failures/flakes when `pnpm run test-ui` is run without `--project`. CI avoids this by explicitly passing --project=chromium/--project=firefox for frontend tests and using a separate workflow/script for admin.
- src/package.json[153-156]
- src/playwright.config.ts[49-70]
Update the test-ui / test-ui:ui scripts to explicitly run only the frontend projects (e.g., --project=chromium --project=firefox), keeping test-admin as the only entry point that runs chromium-admin. Alternatively, move admin specs into a separate Playwright config and have test-admin pass -c to avoid including the admin project in the default config.
[correctness] `test-ui` path filters plugins
The Playwright config adds plugin `testMatch` globs, but `pnpm run test-ui` runs `npx playwright test tests/frontend-new/specs`, which restricts discovery to core specs and can prevent plugin-owned frontend specs from running in CI.
The PR’s goal (Compliance ID 1) is that plugin-owned Playwright specs (for example under ../node_modules/ep_*/static/tests/frontend-new/specs/**) execute when plugins are installed. This requires the CI entrypoint (pnpm run test-ui) to not restrict discovery to core-only paths.
- src/package.json[153-154]
- src/playwright.config.ts[22-34]
- doc/PLUGIN_FRONTEND_TESTS.md[10-13]
[correctness] Admin tests excluded
src/playwright.config.ts now defines an explicit testMatch list that does not include tests/frontend-new/admin-spec/**/*.spec.ts, so the admin Playwright suite is no longer discovered. This breaks pnpm run test-admin and the frontend-admin-tests GitHub Actions workflow (likely "No tests found" / missing coverage).
pnpm run test-admin (and .github/workflows/frontend-admin-tests.yml) runs Playwright against tests/frontend-new/admin-spec, which depends on config-based test discovery.
- src/playwright.config.ts[22-30]
Add an additional glob for admin tests, e.g.:
- tests/frontend-new/admin-spec/**/*.spec.ts
Optionally, consolidate with a brace pattern for readability: tests/frontend-new/{specs,admin-spec}/**/*.spec.ts
[maintainability] `testMatchGlobs` uses 4-space indent
New/modified lines in `src/playwright.config.ts` use 4-space indentation, but the repository's `.editorconfig` requires 2-space indentation.
.editorconfig specifies indent_size = 2 for all files by default.
- src/playwright.config.ts[23-52]
- .editorconfig[3-8]
| PR 7609 (2026-04-27) |
[reliability] Missing server-ready abort
In the new *-with-plugins jobs, the workflow starts Etherpad in the background and waits up to 15 seconds for http://localhost:9001/, but the test step proceeds to Playwright even if the server never becomes reachable, producing misleading failures/flakes. Etherpad startup runs plugin migration/installation when `var/installed_plugins.json` is absent, and these jobs install 11 plugins, so the fixed wait can be insufficient.
The script sets connected=true inside can_connect() but never checks it after the loop.
- .github/workflows/frontend-tests.yml[192-209]
- .github/workflows/frontend-tests.yml[269-286]
- Increase the timeout (for example 60–180s), and after the loop do something like: if [ "$connected" != true ]; then echo "Etherpad failed to start"; tail -n +1 /tmp/etherpad-server.log; exit 1; fi
- Optionally add set -euo pipefail and ensure the background server process is cleaned up on exit (trap).
| PR 7602 (2026-04-26) |
[maintainability] Hardcoded locales output path
The build-time copy uses a hard-coded destination (`../src/templates/admin/locales`) instead of deriving it from Vite's resolved `build.outDir`, duplicating configuration in two places and increasing the chance of the two drifting over time. If `outDir` is ever changed or overridden, locales may be copied to an unexpected location.
Express serves admin files from <settings.root>/src/templates + request path, and Vite builds admin into build.outDir.
- admin/vite.config.ts[18-35]
- admin/vite.config.ts[65-69]
Capture the resolved Vite config via configResolved(resolved) and compute destDir from resolved.build.outDir (e.g., path.resolve(resolved.root, resolved.build.outDir, 'locales') or similar), instead of hard-coding ../src/templates/admin/locales.
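A minimal sketch of deriving the destination from the resolved config, assuming it lives in the existing copy plugin in admin/vite.config.ts (the copy step itself is elided):

```typescript
import path from 'node:path';
import type {Plugin, ResolvedConfig} from 'vite';

const copyLocales = (): Plugin => {
  let destDir = '';
  return {
    name: 'copy-locales',
    configResolved(resolved: ResolvedConfig) {
      // build.outDir is resolved relative to root, so derive from both
      // instead of hard-coding ../src/templates/admin/locales.
      destDir = path.resolve(resolved.root, resolved.build.outDir, 'locales');
    },
    closeBundle() {
      // fs.cpSync(localesSrcDir, destDir, {recursive: true});
    },
  };
};
```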
[maintainability] Hardcoded `http://` in tests
New code hardcodes `http://localhost:9001` in Playwright navigation/requests and in the Vite dev proxy target, which reduces protocol flexibility in environments that run under HTTPS or use a configurable base URL.
Compliance requires protocol-independent URLs where applicable. Tests can usually rely on Playwright baseURL (or relative navigation) and Vite proxy targets can often be derived from environment/config.
- src/tests/frontend-new/admin-spec/admini18n.spec.ts[18-36]
- admin/vite.config.ts[77-79]
[reliability] Dev URL decode crash
The Vite dev middleware calls `decodeURIComponent()` on the request path without handling `URIError`, so a malformed percent-encoded URL can crash the `vite dev` process. This makes the admin dev server fragile to unexpected requests.
This middleware is mounted at /admin/locales and parses req.url to map to a locale JSON file.
- admin/vite.config.ts[41-50]
Wrap the decode in try/catch and return next() (or respond 400) on URIError, e.g.:
let decodedPath; try { decodedPath = decodeURIComponent(...); } catch { return next(); }
[reliability] Unhandled stream error
The dev middleware pipes `fs.createReadStream()` to the response without an `'error'` handler, so an I/O failure can raise an unhandled stream error and crash the dev server. This path is TOCTOU-prone because the file can disappear between `existsSync()` and streaming.
The dev middleware serves JSON files under /admin/locales during vite dev.
- admin/vite.config.ts[45-50]
Create the stream, attach stream.on('error', ...), then stream.pipe(res). On error, respond with an error status (or call next(err)), e.g.:
- const stream = fs.createReadStream(filepath);
- stream.on('error', (err) => { res.statusCode = 500; res.end('Failed to read locale'); });
- stream.pipe(res);
Also consider return after starting the response to avoid accidental fallthrough.
| PR 7585 (2026-04-23) |
[correctness] NBSP lost at boundaries
contentcollector.ts only preserves a NBSP run when it sits between non-whitespace characters within the same DOM text node: the heuristic relies on per-string neighbor characters (before/after), but textify() is applied per DOM text node. If a user-intended NBSP sits at a text-node boundary (e.g., "100" in one span and "\u00a0km" in the next), the heuristic sees before === '' or after === '' and converts the NBSP back to a normal space. This breaks NBSP round-tripping when words are split across spans (e.g., formatting/attribute boundaries), despite the intent of #3037.
- collectContent() processes each TEXT_NODE independently and calls lines.appendText(textify(txt2), ...).
- The editor DOM is built from multiple <span> runs and processSpaces() runs on the full line HTML, so it is normal for the first character of a span's text node to be a NBSP.
- src/static/js/contentcollector.ts[79-99]
- src/static/js/contentcollector.ts[350-402]
Move the [ \u00a0]+ run canonicalization out of textify() (per-text-node) and into a post-processing step that runs on the fully assembled line string (or provide textify with cross-node context such as the last appended character and the next character to be appended). Ensure the transformation remains length-preserving so selection offsets and attribution lengths stay consistent.
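A minimal sketch of line-level canonicalization, applying the same "NBSP only between non-whitespace" heuristic to the fully assembled line so cross-node boundaries no longer matter; it is length-preserving, so selection offsets and attribution lengths stay valid:

```typescript
const canonicalizeLine = (line: string): string =>
  line.replace(/[ \u00a0]+/g, (run, offset: number) => {
    // Runs are maximal, so their neighbors are non-whitespace unless the
    // run touches the start or end of the line.
    const keepNbsp = offset > 0 && offset + run.length < line.length;
    return keepNbsp ? run : ' '.repeat(run.length);
  });

// canonicalizeLine('100\u00a0km') keeps the NBSP even when '100' and
// '\u00a0km' originally came from different <span> text nodes.
```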
[maintainability] `PadType` adds `spliceText`
The PR adds `spliceText` to the exported `PadType`, expanding the Pad API surface for TS/plugin consumers, but does not include a corresponding documentation update under `doc/`. This can leave integrators unaware of the new method and its expected behavior.
PadType is part of the typed interface for Pad objects, which are exposed to server-side hook/plugin code. Per compliance, public API surface changes should be documented in the same PR.
- src/node/types/PadType.ts[15-20]
- doc/api/hooks_server-side.md[267-325]
| PR 7583 (2026-04-22) |
[reliability] Readonly /opt breaks startup
The systemd unit sets ProtectSystem=strict, making /opt/etherpad effectively read-only, but Etherpad writes runtime files under settings.root/var during startup (for example var/js and var/installed_plugins.json), and settings.root resolves to /opt/etherpad in this package layout. Because /opt/etherpad/var is neither created/redirected in postinstall nor included in ReadWritePaths, a fresh install can fail to start with mkdir/write errors under /opt/etherpad/var.
Etherpad startup awaits checkForMigration() which can write ${root}/var/installed_plugins.json, and the express hooks create ${root}/var/js. The package currently does not create or redirect /opt/etherpad/var to a writable location.
- packaging/scripts/postinstall.sh[34-40]
- packaging/systemd/etherpad.service[22-44]
Implement one of the following (prefer the symlink approach to keep writes out of /opt):
- Symlink /opt/etherpad/var to a writable location. In postinstall.sh (configure):
  - mkdir -p /var/lib/etherpad/var
  - ln -sfn /var/lib/etherpad/var /opt/etherpad/var
  - chown -R etherpad:etherpad /var/lib/etherpad/var
  This keeps ReadWritePaths=/var/lib/etherpad sufficient.
- Allow /opt/etherpad/var writes explicitly: create /opt/etherpad/var in postinstall.sh and chown it to etherpad:etherpad, then add /opt/etherpad/var to ReadWritePaths in the unit.
Either way, ensure the directory exists before service start so fs.mkdirSync(.../var/js) and plugin migration writes don't throw.
[security] nfpm download not verified
The release workflow downloads an nfpm .deb via curl and installs it with dpkg without verifying a checksum or signature. Even with HTTPS, this lacks defense-in-depth: if the upstream artifact is tampered with, the workflow could produce compromised release packages.
- .github/workflows/deb-package.yml[64-71]
Add an integrity verification step before dpkg -i, for example:
- Download nfpm's published checksums.txt (or equivalent) for ${NFPM_VERSION} and verify /tmp/nfpm.deb with sha256sum -c.
- Alternatively, verify a published signature (GPG/cosign) if available for nfpm release artifacts.
- Only proceed to sudo dpkg -i if verification passes.
[reliability] Node before setup-node
In `.github/workflows/deb-package.yml`, the "Resolve version" step runs `node -p ...` before `actions/setup-node`, so the workflow can fail on runners without a suitable preinstalled `node` binary. This breaks packaging builds on non-tag pushes/PRs, where VERSION is derived from `package.json`.
Reorder steps so actions/setup-node runs before any node command, or avoid Node entirely in the version resolution step (e.g., parse JSON via jq or use a GitHub expression when possible).
- .github/workflows/deb-package.yml[58-77]
[reliability] Smoke test doesn't ensure Node>=20
The deb-package workflow installs `nodejs` from the runner's default APT sources without ensuring it meets the package dependency `nodejs (>= 20)`, so `dpkg -i` can fail to configure (or behave differently) depending on the runner image. This makes the smoke test nondeterministic and can invalidate the subsequent assertions/startup test.
- The .deb declares Depends: nodejs (>= 20).
- The workflow currently does apt-get install -y nodejs with no repo setup/pinning.
- packaging/test-local.sh already uses NodeSource (setup_lts.x) before installing nodejs.
- .github/workflows/deb-package.yml[136-139]
In the smoke-test step, install Node.js via a deterministic mechanism, for example:
- Add the NodeSource LTS repo (curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -) then sudo apt-get install -y nodejs, or
- Use a pinned distro/repo that guarantees nodejs >= 20.
This should happen before dpkg -i dist/*.deb so the package can configure and postinstall assertions are meaningful.
[reliability] Preinst depends not guaranteed
preinstall.sh calls addgroup/adduser during the preinst phase, but the package only declares adduser as a regular dependency, so `dpkg -i` on a minimal system can run preinst before adduser is present and abort the installation. This is especially problematic because a preinst failure happens before unpacking, so follow-up dependency repair (apt-get -f install) may not be able to recover.
Implement one of:
- Debian-correct dependency: add a Debian Pre-Depends: adduser (if nfpm supports it) so adduser is guaranteed before preinst.
- Move user/group creation later: shift user/group creation to postinstall (configure), and adjust packaging so unpack-time ownership does not require the etherpad group to already exist (e.g., set /etc/default/etherpad group to root in the payload and chgrp it in postinstall).
- packaging/scripts/preinstall.sh[1-18]
- packaging/nfpm.yaml[22-26]
- packaging/scripts/postinstall.sh[16-34]
[correctness] Wrong tsx loader mode
packaging/bin/etherpad starts Etherpad with Node's ESM preload (--import .../tsx/.../esm/index.mjs), but Etherpad's production entrypoint expects the CommonJS tsx/cjs require hook and uses CommonJS exports. This will likely crash at startup (exports is undefined) and prevent etherpad.service from starting.
Etherpad's own src/package.json uses node --require tsx/cjs node/server.ts for production, and src/node/server.ts assigns exports.start = ..., which will throw if the file is evaluated as an ES module.
- packaging/bin/etherpad[13-18]
Switch to the CJS preload used by Etherpad:
- Replace --import "file://.../tsx/.../esm/index.mjs" with --require tsx/cjs (or an absolute path to tsx/cjs under the installed tree).
- Keep NODE_ENV=production as you already do.
[correctness] Plugins not installable
The package installs /opt/etherpad as root:root and the systemd unit uses ProtectSystem=strict without making Etherpad's plugin install paths writable. Etherpad's plugin manager installs plugins into ${settings.root}/src/plugin_packages and creates symlinks under ${settings.root}/src/node_modules; in this package, ${settings.root} resolves to /opt/etherpad, so runtime plugin installs/upgrades will fail with EACCES under the packaged service.
- installer.ts defines pluginInstallPath under settings.root/src/plugin_packages and node_modules under .../src/node_modules.
- LinkInstaller uses PluginManager({pluginsPath: pluginInstallPath}) and symlinks into node_modules.
- nfpm.yaml sets the /opt/etherpad tree owner/group to root:root.
- etherpad.service only allows writes to /var/lib/etherpad, /var/log/etherpad, /etc/etherpad.
- postinstall.sh currently only redirects /opt/etherpad/var.
- packaging/systemd/etherpad.service[23-44]
- packaging/nfpm.yaml[51-59]
- packaging/scripts/postinstall.sh[40-50]
- Redirect plugin storage to /var/lib/etherpad (similar to the existing /opt/etherpad/var redirect):
  - Create /var/lib/etherpad/plugin_packages owned by etherpad:etherpad.
  - Symlink /opt/etherpad/src/plugin_packages -> /var/lib/etherpad/plugin_packages in postinstall.sh.
- Allow symlink creation under /opt/etherpad/src/node_modules:
  - Add /opt/etherpad/src/node_modules to ReadWritePaths= in the unit, and
  - In postinstall.sh, make that directory writable for the service user (e.g., chgrp etherpad /opt/etherpad/src/node_modules && chmod g+w /opt/etherpad/src/node_modules).
If runtime plugin installation is intentionally unsupported for the .deb, explicitly disable/hide the plugin installer UI and document the limitation; otherwise users will hit runtime failures.
[reliability] Broken tag trigger globs
The deb-package workflow's tag filters are written like regexes (v[0-9]+.[0-9]+.[0-9]+), but GitHub Actions tag filters are globs: [0-9]+ matches a single character (a digit or '+'), not "one or more digits". As a result, common tags like v2.10.0 will not match and the workflow won't run.
There is already a correct pattern in .github/workflows/handleRelease.yml: v*.*.*.
- .github/workflows/deb-package.yml[2-7]
Replace the tag filters with glob patterns such as:
- v*.*.*
- v*.*.*-* (or a single broader v* if you prefer, and then validate/tag-parse inside the job).
[reliability] Plugin migration startup failure
Fresh installs can fail because Etherpad calls checkForMigration() during startup: if /opt/etherpad/var/installed_plugins.json is missing, it falls back to running `pnpm ls` and then writing that file under /opt/etherpad/var. The .deb packaging currently doesn't ship or create that file (or even /opt/etherpad/var), and the systemd unit doesn't permit writes under /opt/etherpad, so the service can crash during startup and never reach /health.
- Packaging stages /opt/etherpad from a curated file list.
- Etherpad's plugin migration logic treats absence of var/installed_plugins.json as a trigger to run pnpm and then persist plugin state under settings.root/var.
- The systemd sandbox only allows writes to /var/lib/etherpad, /var/log/etherpad, and /etc/etherpad.
- .github/workflows/deb-package.yml[73-85]
- packaging/nfpm.yaml[47-59]
- packaging/scripts/postinstall.sh[14-40]
- packaging/systemd/etherpad.service[22-44]
- Ensure /opt/etherpad/var/ exists in the packaged tree (or create it in postinstall.sh).
- Seed /opt/etherpad/var/installed_plugins.json at install time with minimal valid content (for example, only ep_etherpad-lite), so checkForMigration() does not attempt to run pnpm on first boot.
- If you want runtime plugin install/uninstall to work under the hardened unit, relocate plugin state to /var/lib/etherpad (and symlink /opt/etherpad/var to it), and/or extend ReadWritePaths to include the required writable plugin directories.
[reliability] Readonly /opt breaks startup
Readonly /opt breaks startup
The unit sets ProtectSystem=strict but Etherpad writes runtime files under settings.root/var during startup (for example var/js and var/installed_plugins.json). Because /opt/etherpad/var is neither created/redirected in postinstall nor included in ReadWritePaths, a fresh install can fail to start with mkdir/write errors under /opt/etherpad/var.The systemd unit makes /opt/etherpad effectively read-only (ProtectSystem=strict + ReadWritePaths omits /opt/etherpad/...), but Etherpad writes to path.join(settings.root, 'var', ...) during startup, where settings.root resolves to /opt/etherpad in this package layout. This can prevent etherpad.service from starting.
Etherpad startup awaits checkForMigration() which can write ${root}/var/installed_plugins.json, and the express hooks create ${root}/var/js. The package currently does not create or redirect /opt/etherpad/var to a writable location.
- packaging/scripts/postinstall.sh[34-40]
- packaging/systemd/etherpad.service[22-44]
Implement one of the following (prefer the symlink approach to keep writes out of /opt):
- Symlink `/opt/etherpad/var` to a writable location:
  - In `postinstall.sh` (configure):
    - `mkdir -p /var/lib/etherpad/var`
    - `ln -sfn /var/lib/etherpad/var /opt/etherpad/var`
    - `chown -R etherpad:etherpad /var/lib/etherpad/var`
  - This keeps `ReadWritePaths=/var/lib/etherpad` sufficient.
- Allow `/opt/etherpad/var` writes explicitly:
  - Create `/opt/etherpad/var` in `postinstall.sh` and `chown` it to `etherpad:etherpad`.
  - Add `/opt/etherpad/var` to `ReadWritePaths` in the unit.

Either way, ensure the directory exists before service start so `fs.mkdirSync(.../var/js)` and plugin migration writes don’t throw.
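The symlink variant, assembled into a runnable `postinstall.sh` fragment (commands as listed above; paths assumed from this package layout):

```bash
#!/bin/sh
# Sketch of the preferred symlink variant for postinstall.sh's configure
# step: all writes land in /var/lib/etherpad while Etherpad still
# resolves settings.root/var at /opt/etherpad/var.
set -e

mkdir -p /var/lib/etherpad/var
# -n replaces the link itself on upgrades instead of descending into it
ln -sfn /var/lib/etherpad/var /opt/etherpad/var
chown -R etherpad:etherpad /var/lib/etherpad/var
```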
[security] Overbroad workflow token scope
Overbroad workflow token scope
The deb-package workflow grants `contents: write` at the workflow level, so the build job also gets write permissions even on pull_request runs, despite only the release job needing to upload release assets.
`.github/workflows/deb-package.yml` sets `permissions: contents: write` at the workflow level, unnecessarily broadening the `GITHUB_TOKEN` scope for the build job (including `pull_request` runs).
Only the release job needs to write release assets. The build job checks out code, builds packages, and uploads artifacts.
- .github/workflows/deb-package.yml[34-36]
- .github/workflows/deb-package.yml[40-194]
- .github/workflows/deb-package.yml[195-202]
- Change workflow-level permissions to read-only (or omit entirely).
- Keep/add `permissions: contents: write` only on the `release` job.
[reliability] Service stays enabled after removal
Service stays enabled after removal
`packaging/scripts/postinstall.sh` enables `etherpad.service` on first install, but `packaging/scripts/postremove.sh` never disables it on remove/purge, which can leave an enabled systemd symlink pointing at a removed unit file after uninstall.
The package enables `etherpad.service` on first install but does not disable it on remove/purge, which can leave stale enablement symlinks under `/etc/systemd/system/*` after uninstall.
- `postinstall.sh` runs `systemctl enable etherpad.service` on fresh installs.
- `postremove.sh` only runs `systemctl daemon-reload` and removes a few symlinks under `/opt/etherpad`, but never disables the unit.
- packaging/scripts/postinstall.sh[92-103]
- packaging/scripts/preremove.sh[6-11]
- packaging/scripts/postremove.sh[9-34]
- Add `systemctl disable etherpad.service` (and optionally `systemctl reset-failed etherpad.service`) during uninstall. The safest place is `preremove.sh` (before the unit file is removed), and/or in `postremove.sh` for both `remove` and `purge` as a fallback.
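A minimal sketch of the `preremove.sh` variant, assuming the unit name from this package; the systemd-presence guard is a common packaging convention, not something the suggestion mandates:

```bash
#!/bin/sh
# Sketch for preremove.sh: stop and disable the unit while its file
# still exists, so no stale enablement symlinks survive the uninstall.
set -e

if [ -d /run/systemd/system ]; then
  systemctl stop etherpad.service || true
  systemctl disable etherpad.service || true
  systemctl reset-failed etherpad.service 2>/dev/null || true
fi
```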
[correctness] Purge skips symlink cleanup
Purge skips symlink cleanup
`packaging/scripts/postremove.sh` removes the `/opt/etherpad` symlinks only in the `remove)` case, not in the `purge)` case. If `postrm purge` runs without the `remove` cleanup, `/opt/etherpad` can be left behind containing dangling symlinks created by `postinstall.sh`.
The `purge)` branch does not remove `/opt/etherpad` symlinks created during install, which can leave `/opt/etherpad` non-empty (and symlinks dangling).
Symlinks are created in postinstall to expose /etc/etherpad/settings.json and redirect writable paths.
In the `purge)` case, also remove the same symlinks as in `remove)` (or remove `/opt/etherpad` entirely if appropriate and safe); a sketch follows the file references below.
- packaging/scripts/postremove.sh[9-34]
- packaging/scripts/postinstall.sh[36-50]
- packaging/scripts/postinstall.sh[65-82]
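A sketch of a shared cleanup branch for `postremove.sh`; the exact link names (`settings.json`, `var`) are assumptions based on what `postinstall.sh` is described as creating:

```bash
#!/bin/sh
# Sketch for postremove.sh: give purge) the same symlink cleanup as
# remove) by handling both in one branch.
set -e

case "$1" in
  remove|purge)
    # links created by postinstall.sh (names assumed)
    rm -f /opt/etherpad/settings.json
    [ -L /opt/etherpad/var ] && rm -f /opt/etherpad/var || true
    # only drops the directory if nothing else is left behind
    rmdir /opt/etherpad 2>/dev/null || true
    ;;
esac
```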
[security] curl|sudo bash in CI
curl|sudo bash in CI
The Debian smoke test installs NodeSource by piping a remote script directly into `sudo bash`, executing network-fetched content as root during the workflow run. This increases the blast radius of a supply-chain compromise of that endpoint to the job that builds and uploads release artifacts.
The workflow executes `curl ... | sudo bash -` to install Node.js, which runs network content as root.
This happens in the smoke test job before installing the newly built .deb.
- .github/workflows/deb-package.yml[133-143]
Prefer a repository-based installation with explicit keyring + signed-by configuration (or a pinned, checksum-verified installer artifact) rather than piping to bash. For example:
- Fetch the NodeSource GPG key into `/usr/share/keyrings/nodesource.gpg` and configure the apt source with `signed-by=...`, then `apt-get update && apt-get install nodejs`.
- Alternatively, run the smoke test in a container image that already contains Node >=20 (to avoid installing Node via shell pipe at all).
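A sketch of the keyring-based install for the smoke-test step; the key URL and `nodistro` suite follow NodeSource’s documented repository layout at the time of writing — verify them against upstream docs before relying on this:

```bash
#!/bin/sh
# Keyring-based NodeSource install instead of curl | sudo bash.
set -e

curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key \
  | sudo gpg --dearmor -o /usr/share/keyrings/nodesource.gpg
echo "deb [signed-by=/usr/share/keyrings/nodesource.gpg] https://deb.nodesource.com/node_20.x nodistro main" \
  | sudo tee /etc/apt/sources.list.d/nodesource.list
sudo apt-get update
sudo apt-get install -y nodejs
```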
[correctness] RPM scripts use adduser
RPM scripts use adduser
The nfpm manifest claims the same packaging manifest can produce RPMs (and includes rpm-specific dependency overrides), but the install/remove scripts call Debian-only user/group management commands (`addgroup`/`adduser`/`deluser`/`delgroup`), so an RPM install would fail at script execution time.
`packaging/nfpm.yaml` declares RPM support (and RPM dependencies), but the lifecycle scripts are Debian-specific (`addgroup`/`adduser`/`deluser`/`delgroup`). If someone builds an RPM from this manifest, installation/removal will fail when those commands are missing.
The README also states the same manifest produces .rpm/.apk, which further implies this path is intended to work.
- packaging/nfpm.yaml[104-120]
- packaging/scripts/preinstall.sh[6-18]
- packaging/scripts/postremove.sh[19-30]
- packaging/README.md[1-5]
- If RPM/APK is in-scope:
  - Add per-packager script overrides (RPM equivalents using `groupadd`/`useradd`/`userdel`/`groupdel` from `shadow-utils`), and adjust the systemd unit install path if needed for RPM-based distros.
- If Debian-only for now:
  - Remove/disable the RPM/APK claims in `packaging/README.md` and remove the `overrides.rpm` block (or clearly document Debian-only support) to avoid shipping a manifest that advertises broken RPM support.
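As an alternative to per-packager overrides, a single distro-portable `preinstall.sh` is possible; branching on `dpkg` presence to tell the Debian family apart from RPM-based distros is an assumption of this sketch:

```bash
#!/bin/sh
# Distro-portable user/group creation: Debian-style adduser/addgroup
# where dpkg exists, shadow-utils groupadd/useradd elsewhere.
set -e

if ! getent group etherpad >/dev/null; then
  if command -v dpkg >/dev/null; then
    addgroup --system etherpad
  else
    groupadd --system etherpad
  fi
fi

if ! getent passwd etherpad >/dev/null; then
  if command -v dpkg >/dev/null; then
    adduser --system --ingroup etherpad --home /var/lib/etherpad \
      --no-create-home --disabled-login etherpad
  else
    useradd --system --gid etherpad --home-dir /var/lib/etherpad \
      --no-create-home --shell /usr/sbin/nologin etherpad
  fi
fi
```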
[security] nfpm download not verified
nfpm download not verified
The release workflow downloads an nfpm .deb via curl and installs it with dpkg without verifying a checksum or signature. If the upstream artifact is tampered with, the workflow could produce compromised release packages.
The workflow installs nfpm from a downloaded .deb without any integrity verification.
Even with HTTPS, this lacks defense-in-depth against compromised upstream release artifacts.
- .github/workflows/deb-package.yml[64-71]
Add an integrity verification step before dpkg -i, for example:
- Download nfpm’s published `checksums.txt` (or equivalent) for `${NFPM_VERSION}` and verify `/tmp/nfpm.deb` with `sha256sum -c`.
- Alternatively, verify a published signature (GPG/cosign) if available for nfpm release artifacts.
- Only proceed to `sudo dpkg -i` if verification passes.
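A sketch of the checksum flow; it assumes nfpm publishes a `checksums.txt` next to its release assets (typical for goreleaser-built projects) and that the .deb asset name follows the `nfpm_<version>_amd64.deb` pattern — confirm both for the pinned version:

```bash
#!/bin/sh
# Verify the nfpm .deb against the published checksums before install.
set -e

BASE="https://github.com/goreleaser/nfpm/releases/download/v${NFPM_VERSION}"
DEB="nfpm_${NFPM_VERSION}_amd64.deb"

cd /tmp
curl -fsSL -O "${BASE}/${DEB}"
curl -fsSL -O "${BASE}/checksums.txt"
# check only the line covering the artifact we downloaded
grep " ${DEB}\$" checksums.txt | sha256sum -c -
sudo dpkg -i "${DEB}"
```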
| PR 7569 (2026-04-20) |
[security] Overprivileged PR token
Overprivileged PR token
The workflow runs on pull_request but now grants GITHUB_TOKEN `packages: write`, so untrusted PR-controlled code executed in the `Test` step can publish/overwrite GHCR packages despite the explicit GHCR login/push steps being gated to `push`. This increases CI blast radius and enables supply-chain compromise via CI token abuse.
The workflow grants `packages: write` at the workflow level while still running on `pull_request`, which exposes package publish capability to PR-executed code.
Top-level permissions: apply to the entire workflow run (including pull_request runs), even when individual publish steps are gated by if: github.event_name == 'push'.
- .github/workflows/docker.yml[2-17]
- .github/workflows/docker.yml[64-122]
- Keep the PR test job with minimal permissions (for example only `contents: read`).
- Create a separate `publish` job that runs only on `push` (use `if: github.event_name == 'push'`) and set `permissions: packages: write` at the job level for that job only.
- Optionally make `publish` depend on the test job (`needs:`) so pushes only publish after tests pass.
[maintainability] Docs miss GHCR option
Docs miss GHCR option
After adding `ghcr.io/ether/etherpad` as a publish target, Docker documentation still states the official image is only on Docker Hub and provides only Docker Hub pull commands. This makes the new GHCR mirror effectively undiscoverable for users relying on project docs.
Documentation only references Docker Hub, but the workflow now publishes the same tags to GHCR.
Users following docs will not discover the GHCR mirror and may continue to hit Docker Hub rate limits.
- doc/docker.md[1-13]
- README.md[104-113]
Update Docker docs (and optionally README) to mention GHCR as an alternative mirror and add example pull commands such as:
- `docker pull ghcr.io/ether/etherpad:latest`
- `docker pull ghcr.io/ether/etherpad:<version>`

while keeping Docker Hub as canonical if desired.
| PR 7567 (2026-04-20) |
[correctness] Changeset base length mismatch
Changeset base length mismatch
`Pad.compactHistory()` packs a changeset with `oldLen=2` but resets the pad's base atext to `"\n"` (length 1) before applying it, causing `appendRevision()` to throw on the mismatched apply assertion. This makes `compactHistory()` fail at runtime for any pad with `head > 0`.
`Pad.compactHistory()` creates `baseChangeset` with `oldLength = 2`, but resets `this.atext` to `makeAText('\n')` (length 1) before calling `appendRevision(baseChangeset, ...)`. `appendRevision()` applies the changeset to `this.atext`, and `Changeset.applyToText()` asserts `str.length === oldLen`, so this will throw.
The code comment indicates the intent is to apply the changeset on top of a freshly-initialized pad that has text "\n\n" (length 2).
- src/node/db/Pad.ts[581-613]
Update the reset state so the base text length matches the packed changeset, for example:
- Reset `this.atext` to `makeAText('\n\n')` (and keep `oldLength = 2`), or
- Change the packed `oldLength` (and corresponding changeset construction) to match the actual reset base text.

After the change, ensure `compactHistory()` can successfully call `appendRevision()` without triggering the mismatched-apply assertion.
[maintainability] padCreate hook emitted
padCreate hook emitted
`compactHistory()` sets head to -1 then calls `appendRevision()`, which increments head to 0 and triggers the `padCreate` hook, even though the pad already existed. Plugins listening for `padCreate` may perform incorrect initialization or logging for an existing pad being compacted.
Compaction is not a pad creation event, but current logic resets `head=-1` and then calls `appendRevision()`, which will trigger the `padCreate` hook when head becomes 0.
appendRevision() chooses hook = this.head === 0 ? 'padCreate' : 'padUpdate'. With head=-1 before the call, compaction will always emit padCreate.
- src/node/db/Pad.ts[606-613]
- src/node/db/Pad.ts[158-165]
Refactor compaction to avoid emitting padCreate for an existing pad. Options:
- Write revision 0 directly (compute the new atext, set `head=0`, `atext=...`, then `db.set(pad:${id}:revs:0, ...)` and `saveToDatabase()`), and emit a more appropriate hook (`padUpdate` or a new `padCompact`).
- Or enhance `appendRevision()` (or add a new helper) to allow specifying which hook to emit for special operations like compaction.

Keep plugin semantics consistent: existing pad compacting should not look like a new pad creation.
| PR 7565 (2026-04-19) |
[correctness] `enforceReadableAuthorColors` default is true
`enforceReadableAuthorColors` default is true
The new author-color clamping behavior is enabled by default via `enforceReadableAuthorColors: true`, which changes baseline rendering behavior. This violates the requirement that new features be feature-flagged and disabled by default.
The new viewer-side author background clamping feature is enabled by default (`enforceReadableAuthorColors: true`), but compliance requires new features to be disabled by default.
The PR correctly introduces a flag (enforceReadableAuthorColors) and wiring, but the default value is set to true in server defaults and the distributed config templates.
- src/node/utils/Settings.ts[410-415]
- settings.json.template[261-273]
- settings.json.docker[280-293]
- doc/docker.md[99-113]
[correctness] Clamp doesn't ensure AA
Clamp doesn't ensure AA
After clamping the background with `ensureReadableBackground()`, `setAuthorStyle()` still chooses the foreground via `textColorFromBackgroundColor()`’s 0.5 luminosity heuristic, so it can render white (or `#222`) text on a background that was only validated against pure black. This breaks the PR’s guarantee that rendered author highlights meet WCAG AA contrast.
`setAuthorStyle()` clamps the background using WCAG contrast math but then selects the text color using a luminosity heuristic that can pick a foreground that does not meet the WCAG threshold with the (possibly clamped) background.
- `ensureReadableBackground()` currently validates against pure black (`[0,0,0]`), but the actual dark foreground used by `textColorFromBackgroundColor()` is `#222` (or a CSS var for the colibris skin).
- To actually guarantee AA on render, the chosen foreground and the clamped background must be computed together using the same colors.
- src/static/js/ace2_inner.ts[242-263]
- src/static/js/colorutils.ts[115-120]
- src/static/js/colorutils.ts[139-173]
- Update the author-style rendering to:
  - Determine the actual candidate foreground colors (e.g., `#fff` and `#222` for classic skins; consider handling colibris separately).
  - For hex backgrounds, choose the foreground by highest WCAG contrastRatio (not luminosity).
  - If neither candidate meets `minContrast`, clamp the background toward white until it meets `minContrast` against the chosen dark foreground (likely `#222`), then set the foreground explicitly.
- Alternatively, make a helper that returns `{bg, fg}` together (single source of truth), and have `setAuthorStyle()` use it.
| PR 7564 (2026-04-19) |
[maintainability] `ace_doDuplicateSelectedLines` undocumented
`ace_doDuplicateSelectedLines` undocumented
New `editorInfo` API methods and user-facing shortcuts were added without updating documentation under `doc/`, making the public surface area unclear for plugin authors and operators. This violates the requirement to document user-facing/API/config changes in the same PR.
New editorInfo APIs (`ace_doDuplicateSelectedLines`, `ace_doDeleteSelectedLines`) and new shortcut settings keys were added without updating documentation under `doc/`.
Plugin authors rely on doc/api/editorInfo.md to discover supported editorInfo.* methods.
- src/static/js/ace2_inner.ts[2481-2535]
- src/node/utils/Settings.ts[224-228]
- doc/api/editorInfo.md[1-208]
[correctness] Duplicate drops attributes
Duplicate drops attributes
`doDuplicateSelectedLines` inserts raw line text via `performDocumentReplaceRange()`, which assigns only the current author attribute to the inserted text; this loses formatting/list/line attributes and can surface Etherpad’s internal line-marker `*` as literal text.
`doDuplicateSelectedLines()` duplicates only plain text and reinserts it with only the author attribute, which drops formatting/list/line attributes and can expose Etherpad’s internal `*` line marker as literal text.
Etherpad stores rich text formatting and line attributes in attribution data (rep.alines/apool). Copying only rep.lines.atIndex(i).text and reinserting via performDocumentReplaceRange() does not preserve those attributes.
- src/static/js/ace2_inner.ts[2481-2536]
- src/static/js/ace2_inner.ts[172-189]
Build a changeset that inserts an attributed slice of the existing document:
- Compute the character-range covering whole lines `[start..end]` (including the terminating `\n` for each copied line).
- Construct an `AText` for that slice using the corresponding text (from `rep.alltext`) and attribution (from `rep.alines.slice(start, end+1).join('')`).
- Apply an insertion changeset using `SmartOpAssembler` + `opsFromAText()` (similar to `setDocAText()`), so the inserted ops carry the original attributes.
- Insert at the start of line `end+1` (or the document end) while maintaining the final-newline invariant.
[correctness] Whole-pad delete broken
Whole-pad delete broken
The whole-pad-selected branch in `doDeleteSelectedLines()` deletes only part of line 0 (and uses a length derived from the last selected line), so multi-line pads are not cleared and the delete range can become invalid if the last line is longer than the first.
`doDeleteSelectedLines()`'s whole-pad-selected case deletes from `[0,0]` to `[0,lastLen]` (line 0 only) even when the selection spans multiple lines, so it does not clear the pad and can generate an invalid range.
The code intends to blank the entire pad but keep one empty line (final newline invariant).
- src/static/js/ace2_inner.ts[2513-2531]
When `start === 0` and the selection reaches the end of the document, delete from the start of the pad through the end of the last selected line:
- Compute `lastLen = rep.lines.atIndex(end).text.length`.
- Call `performDocumentReplaceRange([0, 0], [end, lastLen], '')`.

This removes all content before the document’s final newline, leaving one empty line behind.
[reliability] Duplicate EOF insertion risk
Duplicate EOF insertion risk
Duplicating the last line(s) inserts at position `[rep.lines.length(), 0]` (after the final newline), bypassing existing end-of-document safeguards used elsewhere to avoid inserting after the final newline.
If the selected block includes the last line, `doDuplicateSelectedLines()` inserts at `[end + 1, 0]` which becomes `[rep.lines.length(), 0]` (after the document’s final newline), a scenario that other editor code treats as needing special handling.
This file already contains explicit end-of-document splice rewrites (both in performDocumentReplaceCharRange() and in internal splice logic) to avoid inserting after the final newline.
- src/static/js/ace2_inner.ts[2499-2511]
- src/static/js/ace2_inner.ts[1497-1523]
- src/static/js/ace2_inner.ts[1723-1731]
When duplicating at the end of the document, route the insertion through an end-safe code path (for example, compute a char offset and use performDocumentReplaceCharRange() for insertion so it can apply its end-of-doc rewrite), or explicitly rewrite the insertion point to be before the final newline (mirroring existing logic). If you implement attribute-preserving duplication via an AText/changeset insertion, incorporate the same end-of-doc invariant handling there.
| PR 7563 (2026-04-19) |
[correctness] Override test never reloads
Override test never reloads
The new test sets `ETHERPAD_VERSION_STRING` but only calls `exportedForTestingOnly.parseSettings()`, which does not recompute `settings.randomVersionString`; `randomVersionString` is computed in `reloadSettings()`. Because the module is already loaded earlier in this test file via `require()`, the later `require()` returns the cached instance and the assertion can fail.
The test "honours ETHERPAD_VERSION_STRING as an explicit override" sets `process.env.ETHERPAD_VERSION_STRING` but never calls `reloadSettings()`, so `settings.randomVersionString` is not recomputed. The test currently calls `exportedForTestingOnly.parseSettings()` which only parses JSON and does not mutate the module-scope singleton.
- `randomVersionString` is derived inside `reloadSettings()`.
- The Settings module has already been loaded earlier in the file via `require()`, so requiring it again will not re-run module initialization.
- src/tests/backend/specs/settings.ts[165-183]
Update the override test to:
- Save the original env var.
- Set `process.env.ETHERPAD_VERSION_STRING`.
- Call `require('../../../node/utils/Settings').reloadSettings()`.
- Assert `randomVersionString`.
- Restore the env var and call `reloadSettings()` again to avoid leaking state to other tests.

(Optionally also ensure the default-hash test unsets `ETHERPAD_VERSION_STRING` and calls `reloadSettings()` before asserting.)
| PR 7562 (2026-04-19) |
[reliability] Test mutates editor DOM
Test mutates editor DOM
The new Playwright test constructs a long pad by overwriting `#innerdocbody.innerHTML`, bypassing the normal editor input path used elsewhere in the suite. This can desynchronize Etherpad’s internal rep/undo state from the DOM and makes the regression test more likely to be flaky or to test an unrealistic state.
The regression test populates the document by directly setting `#innerdocbody.innerHTML`. This bypasses editor input handling and can lead to inconsistent undo/redo behavior or flaky scroll assertions.
Other frontend-new specs that need a long pad use writeToPad() and keyboard Enter presses to generate many lines, which keeps Etherpad’s internal state consistent.
- src/tests/frontend-new/specs/undo_redo_scroll.spec.ts[26-43]
- src/tests/frontend-new/specs/undo_redo_scroll.spec.ts[81-97]
- src/tests/frontend-new/specs/page_up_down.spec.ts[16-20]
Replace the `innerFrame.evaluate(() => { body.innerHTML = ... })` blocks with a loop similar to:
- ``for (let i = 0; i < 120; i++) { await writeToPad(page, `line ${i+1}`); await page.keyboard.press('Enter'); }`` (or a single multi-line `writeToPad` call if you want it faster), then proceed with the undo/scroll assertions.
| PR 7559 (2026-04-19) |
[maintainability] Shell scripts use 4-space indent
Shell scripts use 4-space indent
The newly added packaging shell scripts use 4-space indentation (e.g., within `case` branches), violating the 2-space indentation standard. This can reduce consistency and trigger formatting/lint issues where the repo enforces 2-space indentation.
New packaging shell scripts use 4-space indentation inside blocks, violating the repo requirement for 2-space indentation.
This affects newly added Debian packaging maintainer scripts and should be corrected to match the enforced formatting standard.
- packaging/scripts/preinstall.sh[6-18]
- packaging/scripts/postinstall.sh[13-50]
[correctness] Smoke test false-positive
Smoke test false-positive
The workflow’s `/health` polling loop never fails the job if the endpoint never becomes healthy, so CI can report success for a broken .deb/service. This can lead to attaching and releasing non-working packages.
The smoke test loop does not fail if `/health` never returns 200, so the workflow can pass even when Etherpad never becomes ready.
Because curl ... && break || sleep 2 is inside a for loop, a permanently failing curl results in the loop ending with the exit code of the final sleep (0). With set -e, this still won’t fail.
- .github/workflows/deb-package.yml[107-126]
Track success and explicitly exit 1 if the endpoint never becomes healthy, e.g.:
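A minimal sketch of such a fail-fast poll for the smoke-test step; the port, attempt budget, and journal dump are assumptions — keep whatever the workflow currently uses:

```bash
# Poll /health, and fail the job explicitly if it never becomes healthy.
healthy=""
for i in $(seq 1 30); do
  if curl -fsS http://127.0.0.1:9001/health >/dev/null 2>&1; then
    healthy=1
    break
  fi
  sleep 2
done
if [ -z "$healthy" ]; then
  echo "Etherpad never became healthy" >&2
  # surface the service logs to make the CI failure diagnosable
  sudo journalctl -u etherpad.service --no-pager | tail -n 100 || true
  exit 1
fi
```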
[security] Secrets readable in /etc/default
Secrets readable in /etc/default
The package installs `/etc/default/etherpad-lite` as world-readable (0644) even though it is the documented place to put environment overrides for settings.json, which can include passwords and secrets. Any local user on the system can read those secrets.
`/etc/default/etherpad-lite` is installed with mode 0644, making any secrets placed there readable by all local users.
The systemd unit uses EnvironmentFile=-/etc/default/etherpad-lite, and Etherpad supports ${ENV_VAR} substitution for config values, including passwords.
- packaging/nfpm.yaml[62-68]
- packaging/systemd/etherpad-lite.service[7-14]
Tighten permissions and ownership so only root and the etherpad service user can read it:
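A sketch of the postinstall variant, assuming the `etherpad` group from this package; the same owner/group/mode could instead be declared on the file entry in `nfpm.yaml` so the package installs it correctly from the start:

```bash
#!/bin/sh
# Restrict the environment file to root and the etherpad service group.
set -e

chown root:etherpad /etc/default/etherpad-lite
chmod 0640 /etc/default/etherpad-lite
```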
| PR 7558 (2026-04-19) |
[reliability] Tag filter never matches
Tag filter never matches
In `.github/workflows/snap-publish.yml` the tag filter uses a regex-like pattern (`v?[0-9]+.[0-9]+.[0-9]+`) but GitHub Actions uses glob matching, so `+` is treated literally and typical release tags like `v2.6.1` will not match. As a result the Snap publish workflow will not trigger on release tags and nothing will be built/published automatically.
The workflow tag trigger pattern is written like a regex, but GitHub Actions uses glob matching for `on.push.tags`. The current pattern will not match normal semver tags like `v2.6.1`, so the workflow will not run on releases.
The workflow intends to run on tags matching v?X.Y.Z (optional leading v).
- .github/workflows/snap-publish.yml[17-21]
Replace the single regex-like entry with glob patterns, for example:
- `v[0-9]*.[0-9]*.[0-9]*`
- `[0-9]*.[0-9]*.[0-9]*`

(or whichever variant matches your actual tagging scheme).
[security] CLI path traversal exec
CLI path traversal exec
`snap/local/bin/etherpad-cli` builds `SCRIPT_PATH` from unvalidated user input, so a caller can pass a value containing `/` or `..` to escape the intended `${APP_DIR}/bin` directory and execute arbitrary `.ts`/`.sh` files shipped in the snap. Additionally, there is no default `case` branch, so if a file exists but is not `.ts` or `.sh` the command silently does nothing and exits successfully.
The snap CLI wrapper allows path traversal via the `<bin-script>` argument and can execute unintended files. It also silently succeeds for unsupported extensions.
This command is meant to be a thin, safe passthrough to scripts under $SNAP/opt/etherpad-lite/bin.
- snap/local/bin/etherpad-cli[17-24]
- Reject any `SCRIPT_NAME` that contains `/` or `..` (or normalize to `basename` and compare).
- Optionally enforce an allowlist derived from `$APP_DIR/bin`.
- Add a default `*)` case that prints an error like `unsupported script type` and exits non-zero.
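A sketch of the hardened dispatch; `APP_DIR` is assumed to be exported by the surrounding wrapper, and the `node`/`sh` interpreter commands are placeholders — keep whatever the wrapper already uses to run `.ts` and `.sh` scripts:

```bash
#!/bin/sh
set -e

SCRIPT_NAME="${1:-}"
# reject empty names, path separators, and parent-directory escapes
case "$SCRIPT_NAME" in
  ''|*/*|*..*)
    echo "usage: etherpad-cli <bin-script> [args...]" >&2
    exit 1
    ;;
esac
shift

SCRIPT_PATH="${APP_DIR}/bin/${SCRIPT_NAME}"
case "$SCRIPT_PATH" in
  *.ts) exec node "$SCRIPT_PATH" "$@" ;;  # interpreter is an assumption
  *.sh) exec sh "$SCRIPT_PATH" "$@" ;;
  *)
    echo "error: unsupported script type: ${SCRIPT_NAME}" >&2
    exit 1
    ;;
esac
```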
[maintainability] 4-space indent in scripts
4-space indent in scripts
New Bash scripts use 4-space indentation inside control blocks, violating the repository rule requiring exactly 2-space indentation. This reduces consistency and can cause style-check failures if enforced.
The newly added Bash scripts use 4-space indentation inside `if`/`case` blocks, but the project compliance rule requires 2-space indentation (and no tabs).
This PR adds several wrapper scripts under snap/. To stay compliant and consistent with repo formatting expectations, indentation should be normalized to 2 spaces.
- snap/hooks/configure[9-13]
- snap/local/bin/etherpad-cli[10-14]
- snap/local/bin/etherpad-healthcheck-wrapper[8-11]
- snap/local/bin/etherpad-service[23-30]
[correctness] Settings file ignored
Settings file ignored
The snap seeds `$SNAP_COMMON/etc/settings.json` and exports `EP_SETTINGS`, but Etherpad’s settings loader does not read `EP_SETTINGS`, so it will keep looking for /settings.json and fall back to defaults. Defaults include a DB file under /var/rusty.db, which is inside the read-only snap mount, so the daemon will fail to persist data and may fail to start.
The snap wrapper exports `EP_SETTINGS` and seeds `$SNAP_COMMON/etc/settings.json`, but Etherpad ignores `EP_SETTINGS` and uses `argv.settings` (from `--settings`/`-s`) or defaults to `<install-root>/settings.json`. This prevents the snap from using the seeded writable settings and can force DB paths into the read-only snap mount.
The snap already sets EP_SETTINGS (wrapper + snapcraft.yaml). The minimal, snap-friendly fix is to make Etherpad honor process.env.EP_SETTINGS (and optionally process.env.EP_CREDENTIALS) as a fallback when CLI flags are not provided.
- src/node/utils/Settings.ts[301-306]
- src/node/utils/Cli.ts[25-44]
- snap/local/bin/etherpad-service[39-43]
- Update Settings filename resolution to prefer `argv.settings`, else `process.env.EP_SETTINGS`, else `'settings.json'`.
- Do the same for credentials (`argv.credentials` -> `process.env.EP_CREDENTIALS` -> `'credentials.json'`).
- Keep CLI flag precedence so existing behavior is unchanged.
- (Optional) Add a small unit/integration check or comment documenting the env var support.
[correctness] snap set overrides no-op
snap set overrides no-op
The service wrapper exports PORT/IP from snapctl, but the seeded settings.json uses literal ip/port values, so the configured settings file will override the env-based defaults and ignore `snap set port=` / `snap set ip=`. Users following the snap README will not see the listener move after restart.
`snap set etherpad-lite port=...` / `ip=...` is implemented by exporting `PORT`/`IP`, but the seeded settings file hard-codes `"ip": "0.0.0.0"` and `"port": 9001`, which overrides env defaults. As a result, snap config changes do nothing.
Etherpad supports env-var substitution inside settings.json via strings like "${PORT:9001}", but the current template copy does not use that syntax for ip/port.
- snap/local/bin/etherpad-service[22-37]
- settings.json.template[151-162]
- During first-run bootstrap (right after copying the template), rewrite the `ip` and `port` entries to use Etherpad’s substitution syntax:
  - `"ip": "${IP:0.0.0.0}"`
  - `"port": "${PORT:9001}"` (must be quoted per template rules)
- Only apply the rewrite if the file still contains the template’s default literal values, to avoid overwriting user customizations.
- Keep the existing dirty-db path rewrite.
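A sketch of that guarded rewrite for `etherpad-service`, assuming the template’s literal defaults shown above; the `grep` guards keep user-customized files untouched:

```bash
#!/bin/sh
# Rewrite ip/port to Etherpad's ${ENV:default} substitution syntax,
# but only while the file still holds the template's literal defaults.
SETTINGS="$SNAP_COMMON/etc/settings.json"

if grep -q '"port"[[:space:]]*:[[:space:]]*9001' "$SETTINGS"; then
  sed -i 's/"port"[[:space:]]*:[[:space:]]*9001/"port": "${PORT:9001}"/' "$SETTINGS"
fi
if grep -q '"ip"[[:space:]]*:[[:space:]]*"0\.0\.0\.0"' "$SETTINGS"; then
  sed -i 's/"ip"[[:space:]]*:[[:space:]]*"0\.0\.0\.0"/"ip": "${IP:0.0.0.0}"/' "$SETTINGS"
fi
```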
| PR 7554 (2026-04-19) |
[maintainability] `https://` URL in comment
`https://` URL in comment
A protocol-specific `https://` URL was introduced in a code comment, which conflicts with the project requirement to use protocol-independent URLs where appropriate. This can create inconsistent documentation style and mixed-content concerns if copied into contexts expecting protocol-relative links.
A new comment includes a hardcoded `https://` URL, but the project compliance rule requires protocol-independent URLs where appropriate.
The added regression test comment references issue #7138 with an https://github.com/... link.
- src/tests/backend/specs/settings.ts[151-151]
[reliability] Unused browser context
Unused browser context
The new Playwright spec creates a new BrowserContext and clears its cookies, but then navigates with the existing `page` fixture, so cookie clearing has no effect and the new context is never closed. This can leak contexts across tests and degrade test reliability/performance.
The test creates a new BrowserContext and clears cookies on it, but uses the existing `page` fixture for navigation, so cookie clearing is ineffective and the new context is never closed.
The intent seems to be to start each test with a clean cookie jar. In Playwright, the page fixture already has a context (page.context()), so cookie operations should target that context unless you also create a new page from the new context.
- src/tests/frontend-new/specs/inactive_color_fade.spec.ts[4-8]
Replace the browser.newContext() usage with one of:
- Simplest: `await page.context().clearCookies();`
- If you truly need a new context, create a page with `const page2 = await context.newPage()` and use that page (and ensure `await context.close()` in `afterEach`).
[maintainability] Docker docs missing env var
Docker docs missing env var
`settings.json.docker` adds the new env var `PAD_OPTIONS_FADE_INACTIVE_AUTHOR_COLORS`, but the Docker documentation’s Pad Options tables do not list it. Users relying on docs will not discover how to configure the new toggle via environment variables.
`settings.json.docker` supports `PAD_OPTIONS_FADE_INACTIVE_AUTHOR_COLORS`, but the Docker docs do not list this env var under Pad Options.
This PR explicitly advertises Docker env var configurability; the docs should reflect the new supported variable to avoid confusion.
- doc/docker.md[97-112]
- doc/docker.adoc[179-233]
Add a row for PAD_OPTIONS_FADE_INACTIVE_AUTHOR_COLORS (default true) in both Pad Options tables, with a short description (e.g., "When false, do not fade inactive author colors toward white").
| PR 7553 (2026-04-19) |
[correctness] `showMenuRight` ignores readonly mode
`showMenuRight` ignores readonly mode
The new logic hides `#editbar .menu_right` only when `showMenuRight=false` is explicitly provided, rather than automatically when `window.clientVars.readonly` is true (including iframe embeds). As a result, read-only pads can still display right-side controls unless callers remember to pass the URL param, failing the read-only-hides-controls requirement.
`menu_right` is only hidden when `showMenuRight=false` is provided, but compliance requires it to be hidden whenever the pad is read-only (including embeds).
There is already a read-only indicator available via window.clientVars.readonly.
- src/static/js/pad.ts[79-85]
[reliability] Unused Playwright context
Unused Playwright context
The new spec creates a new BrowserContext and clears its cookies, but the tests continue using the existing fixture `page` from a different context, making the cookie clearing ineffective. The created context is also never closed, adding unnecessary resource usage in the test suite.
The test creates a new BrowserContext, clears its cookies, but then continues using the existing `page` fixture (which belongs to a different context). This makes the cookie clearing ineffective and leaves an extra context unclosed.
goToNewPad(page) navigates the passed-in page via page.goto(...), so any cookie cleanup must be applied to page.context() (or the test must create/use a page from the new context and close it).
- src/tests/frontend-new/specs/hide_menu_right.spec.ts[4-8]
- src/tests/frontend-new/helper/padHelper.ts[117-123]
Prefer one of:
- Simplest: replace the `browser.newContext()` usage with `await page.context().clearCookies()`.
- If isolation via new context is desired: create a page from that context (`const page2 = await context.newPage()`), use that page for navigation/assertions, and close the context in `afterEach`.
| PR 7552 (2026-04-19) |
[correctness] Installer clones wrong repo
Installer clones wrong repo
README.md now instructs users to run installers from the `ether/etherpad` repo, but both installer scripts still default to cloning `https://github.com/ether/etherpad-lite.git`, so the one-liner can install a different repo than the documentation implies.
README.md now points users to run installers from `https://raw.githubusercontent.com/ether/etherpad/...`, but the installer scripts still default to cloning `https://github.com/ether/etherpad-lite.git`. This can cause the one-liner install path to clone an unexpected repository.
- README is the primary entry point for users.
- `bin/installer.sh` and `bin/installer.ps1` embed their own default clone URL.
- bin/installer.sh[5-12]
- bin/installer.sh[33-38]
- bin/installer.ps1[1-11]
- bin/installer.ps1[37-41]
- README.md[71-88]
- Change the default repo URL in both installer scripts to `https://github.com/ether/etherpad.git`.
- Update the usage examples/comments in both scripts to match the new raw GitHub URLs.
- Ensure README and installer scripts agree on the repo being installed (directory name can remain `etherpad-lite` if intentional).
[observability] Broken Docker badge link
Broken Docker badge link
README.md links the Docker workflow badge to `actions/workflows/dockerfile.yml`, but this repo’s Docker workflow file is `docker.yml`, so the badge/link will 404.
The README Docker CI badge points to `actions/workflows/dockerfile.yml`, but the repository workflow file is `docker.yml`. This breaks the badge and link.
The repo contains .github/workflows/docker.yml.
- README.md[35-39]
- .github/workflows/docker.yml[1-5]
Update the Docker badge/link in README to reference actions/workflows/docker.yml (both the badge.svg URL and the clickable link).
[maintainability] Conflicting GitHub URLs
Conflicting GitHub URLs
Startup logging now directs users to `https://github.com/ether/etherpad/issues`, but the default pad text still advertises `https://github.com/ether/etherpad-lite`, giving users conflicting guidance on where the project lives.
User-facing text references two different GitHub repositories (etherpad vs etherpad-lite). This is confusing for users looking for source/issues.
- Startup console message points to `https://github.com/ether/etherpad/issues`.
- Default pad contents still include `https://github.com/ether/etherpad-lite`.
- src/node/hooks/express.ts[67-72]
- src/node/utils/Settings.ts[390-397]
Update defaultPadText to point at the same GitHub repo as the startup message (and consider doing the same for other prominent user-facing URLs if they exist).
| PR 7550 (2026-04-19) |
[reliability] `anonymizeAuthor` lacks feature flag
`anonymizeAuthor` lacks feature flag
The new `anonymizeAuthor` REST/API surface is registered unconditionally and becomes available by default, without any enable/disable mechanism. This violates the requirement that new features be gated behind a feature flag and disabled by default.
A new feature (`anonymizeAuthor` API/REST endpoint) is enabled by default and has no feature-flag gating.
Compliance requires new features to be behind a feature flag and disabled by default.
- src/node/handler/APIHandler.ts[146-152]
- src/node/db/API.ts[65-77]
[reliability] Non-resumable partial erasure
Non-resumable partial erasure
`AuthorManager.anonymizeAuthor()` persists `erased: true` before the chat-scrub loop, so any error during chat scrubbing can leave chat messages unchanged while subsequent calls short-circuit on `existing.erased` and never finish the scrub. This contradicts the documented behavior that chat message `authorId` is nulled, and makes failures non-recoverable without manual DB intervention.
`anonymizeAuthor()` marks the author record as `erased: true` before finishing the chat scrub. If any error occurs during the chat loop, retries will short-circuit on `existing.erased` and never finish nulling `authorId` on chat messages.
- Current behavior uses `existing.erased` as the idempotency guard.
- Docs state chat message `authorId` is nulled.
- The implementation should either (a) only mark `erased: true` once all steps have completed, or (b) track per-step completion so retries can resume unfinished work.
- src/node/db/AuthorManager.ts[336-395]
- Introduce an intermediate state (e.g., `erasureInProgress: true`) and set it before starting work.
- Perform token/mapper cleanup + chat scrub.
- Only after successful completion, update the author record to `{erased: true, erasureInProgress: false}`.
- Alternatively: keep `erased: true` but add a separate flag (e.g., `chatScrubbed: true`) and only short-circuit when both are complete; otherwise resume the missing steps.
| PR 7549 (2026-04-19) |
[security] Unsafe learnMoreUrl href
Unsafe learnMoreUrl href
`showPrivacyBannerIfEnabled()` assigns settings-provided `learnMoreUrl` directly to an anchor href, so a `javascript:`/`data:` URL would execute script when clicked. This creates a user-triggered XSS vector via configuration.
`src/static/js/privacy_banner.ts` sets `a.href` from `config.learnMoreUrl` without validating the URL scheme. This allows `javascript:` (and similar) URLs to execute code if a user clicks the “Learn more” link.
learnMoreUrl is operator-controlled via settings.json and is sent to browsers via clientVars, so the client must defensively validate it.
- src/static/js/privacy_banner.ts[43-54]
- Parse `config.learnMoreUrl` with `new URL()` (using `location.href` as base for relative URLs).
- Allowlist safe protocols (e.g., `http:`, `https:`; optionally `mailto:` depending on product policy).
- If invalid/unsafe, do not render the link (and optionally log a console warning in development).
| PR 7548 (2026-04-19) |
[security] TokenTransfer returns plaintext token
TokenTransfer returns plaintext token
`GET /tokenTransfer/:token` responds with `tokenData` (including the plaintext author token) and does not invalidate the transfer record, so any script running in-origin can mint an id and exfiltrate a long-lived author token despite HttpOnly cookies. The id is also reusable indefinitely because the DB entry is never deleted.
The token transfer flow still exposes the plaintext author token in an HTTP response body and leaves transfer IDs reusable.
HttpOnly prevents reading document.cookie, but it does not help if an endpoint returns the token via JSON.
- src/node/hooks/express/tokenTransfer.ts[40-67]
- After a successful GET, delete the DB entry (`tokenTransferKey:${id}`) so the id is one-time.
- Do not include `token` in the JSON response body; instead, set cookies and return a minimal `{ok:true}` (or redirect to a safe page).
- Add a short TTL/expiry check using `createdAt` and reject/cleanup old transfer records.
- Validate `tokenData.token` with `padutils.isValidAuthorToken()` before re-issuing it as a cookie.
[security] Timeslider rewrites token cookie
Timeslider rewrites token cookie
The timeslider client still reads and (re)sets the author token cookie from JavaScript and sends it over socket messages, which can overwrite the new server-minted HttpOnly token cookie and keep the legacy `message.token` path active. This defeats the primary security goal (token not JS-readable/writable) for any user who opens the timeslider.
The timeslider frontend still generates/sets the author token cookie in JavaScript and sends `token` in socket messages, which undermines the new HttpOnly cookie model.
The server now sets the author token as an HttpOnly cookie on /p/:pad/timeslider, but the timeslider client still treats the token as JS-managed state.
- src/static/js/timeslider.ts[52-58]
- src/static/js/timeslider.ts[100-109]
- src/static/js/pad.ts[190-212]
- Remove the `token = Cookies.get(...); if (token == null) Cookies.set(...);` block.
- Remove `token` from the socket message payload (similar to `src/static/js/pad.ts` removing `token` from CLIENT_READY).
- Ensure the server path works purely via the socket.io handshake cookie (no `message.token` dependence for timeslider clients).
[security] tokenTransfer breaks HttpOnly model
tokenTransfer breaks HttpOnly model
The welcome page’s tokenTransfer POST reads the author token from `document.cookie` (now invisible due to HttpOnly), causing invalid requests, and the tokenTransfer GET endpoint re-sets the token cookie without HttpOnly, undoing the hardening. This both breaks the feature and reintroduces a JS-readable author token cookie.
The tokenTransfer flow is incompatible with an HttpOnly author token: it tries to read the token in the browser and it also writes a non-HttpOnly token cookie server-side.
The author token is now intended to be server-minted and HttpOnly. Any feature that depends on reading/writing the token in browser JS will break or regress security.
- src/static/js/welcome.ts[24-33]
- src/node/hooks/express/tokenTransfer.ts[16-20]
- src/node/hooks/express/tokenTransfer.ts[42-45]
- src/node/utils/ensureAuthorTokenCookie.ts[26-33]
- Update `POST /tokenTransfer` to not require `token` from the request body. Instead, read the token from `req.cookies` (include prefixed + unprefixed fallback).
- When setting `${p}token` in `GET /tokenTransfer/:token`, set cookie options consistent with PR3 (`httpOnly`, `sameSite`, secure-on-https, `path='/'`).
- Update `welcome.ts` to stop attempting to read `${cp}token` from `document.cookie` and stop sending it in the POST body.