.pr_agent_accepted_suggestions

qodo-merge-bot edited this page May 3, 2026 · 65 revisions
                     PR 7665 (2026-05-03)                    
[reliability] uncaughtException doesn’t exit
tests/backend/diagnostics.ts registers an uncaughtException handler that only logs and returns, which can prevent backend-tests from exiting non-zero on fatal errors when tests/backend/common.ts is not imported. This can turn a crash into a hang/continued execution, and it also breaks the intended “convert unhandledRejection to uncaught exception” fail-fast behavior for specs that don’t load common.ts.

Issue description

src/tests/backend/diagnostics.ts installs an uncaughtException handler that only logs. If tests/backend/common.ts is not imported in a spec run, this handler can prevent a fatal error from forcing a non-zero exit (common.ts explicitly calls process.exit(1) to preserve default behavior).

Issue Context

  • src/tests/backend/common.ts has an uncaughtException handler that logs and then process.exit(1) specifically to preserve default behavior when a handler exists.
  • src/package.json now requires ./tests/backend/diagnostics.ts globally, but does not require common.ts.
  • Some specs don’t import common.ts, so diagnostics.ts can be the only handler.

Fix Focus Areas

  • src/tests/backend/diagnostics.ts[55-61]

Suggested change

After logging, ensure the process still fails fast when no other handler will do it. Options:

  1. Mimic common.ts: call process.exit(1) after logging.
  2. If you want to defer to other handlers when present: only force-exit if this is the only uncaughtException listener (e.g., if (process.listeners('uncaughtException').length === 1) process.exit(1);), otherwise return.
  3. Alternatively, set process.exitCode = 1 and schedule a setImmediate(() => process.exit(1)) so later-registered handlers still get a chance to run/log first.
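Option 2 can be sketched as follows; this is a hedged illustration, not the diagnostics.ts code, and the `shouldForceExit` helper is hypothetical (pulled out so the decision is testable):

```javascript
// Hypothetical helper: force-exit only when no other handler will do it.
const shouldForceExit = (listenerCount) => listenerCount === 1;

process.on('uncaughtException', (err) => {
  console.error(`uncaught exception: ${err.stack || err}`);
  if (shouldForceExit(process.listeners('uncaughtException').length)) {
    process.exitCode = 1;
    // Give later-registered handlers a chance to run/log before exiting.
    setImmediate(() => process.exit(1));
  }
});
```

Checking the listener count at the time of the crash (not at registration time) is what lets common.ts's own handler take precedence when it is loaded.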


                     PR 7660 (2026-05-02)                    
[correctness] Click steals rename focus
Clicking the editable name for unnamed users will also trigger the new row click handler, which then focuses #chatinput and interrupts the rename workflow. This makes it difficult or impossible to name unnamed users from the user list.

Issue description

The new delegated click handler on #otheruserstable tr[data-authorId] triggers even when the click target is the existing rename <input> in the name cell, and it later focuses #chatinput. This breaks the unnamed-user rename interaction.

Issue Context

Unnamed users are rendered with an <input> in the .usertdname cell and are wired up via #otheruserstable input.newinput.

Fix Focus Areas

  • src/static/js/pad_userlist.ts[373-410]
  • src/static/js/pad_userlist.ts[183-196]

Suggested change

Add early-return guards before doing any prefill/show work, for example:

  • Return if $(event.target).closest('input, textarea, select, button, a, [contenteditable=true]').length.
  • Or at minimum return if $(event.target).closest('.usertdname input').length. This keeps the row-click behavior while preserving rename semantics.
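The guard predicate can be sketched DOM-free; the `makeTarget` stub is hypothetical and only stands in for `event.target` so the logic can be exercised outside a browser (the real handler would use jQuery's `closest` as above):

```javascript
// Guard: ignore row clicks that land on interactive elements.
const INTERACTIVE = 'input, textarea, select, button, a, [contenteditable=true]';
const isInteractiveTarget = (target) => target.closest(INTERACTIVE) != null;

// Hypothetical stand-in for event.target, for illustration only.
const makeTarget = (insideInteractive) =>
  ({closest: () => (insideInteractive ? {} : null)});
```

The handler would then start with `if (isInteractiveTarget(event.target)) return;` before any prefill/show work.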


                     PR 7647 (2026-05-01)                    
[observability] Hardcoded 5s socket wait
waitForSocketEvent now hardcodes a 5s timeout for all socket events, which can cause suites that rely on Mocha’s default per-test timeout to fail with a generic Mocha timeout instead of the helper’s descriptive error. It also makes failing runs wait longer before surfacing the root cause.

Issue description

waitForSocketEvent() uses a hardcoded 5000ms timeout for all socket waits. In suites that don’t increase Mocha’s default per-test timeout, failures may surface as a generic Mocha timeout instead of waitForSocketEvent’s explicit timed out waiting for <event> error.

Issue Context

Some callers (e.g., connect/handshake paths) legitimately need a longer timeout on slow CI runners, but other call sites benefit from failing fast and producing a clear error.

Fix Focus Areas

  • Add an optional timeoutMs parameter to waitForSocketEvent(socket, event, timeoutMs?), and use it in setTimeout(..., timeoutMs).
  • Update slow paths (connect(), handshake(), and any other known-slow call sites) to pass 5000 explicitly.
  • Keep a shorter default (or ensure suites that rely on defaults set this.timeout(...) high enough) to avoid Mocha masking the helper’s error.
  • src/tests/backend/common.ts[114-219]
  • src/tests/backend/specs/messages.ts[12-60]
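The parameterized helper could look like this; a minimal sketch assuming a socket.io-style `once` API, with an illustrative (not prescribed) default:

```javascript
// Wait for a socket event, failing with a descriptive error after timeoutMs.
const waitForSocketEvent = (socket, event, timeoutMs = 1000) =>
  new Promise((resolve, reject) => {
    const timer = setTimeout(
        () => reject(new Error(`timed out waiting for ${event}`)), timeoutMs);
    socket.once(event, (...args) => {
      clearTimeout(timer);
      resolve(args);
    });
  });
```

Slow call sites such as connect()/handshake() would then pass `5000` explicitly while everything else fails fast with the helper's own error.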

[reliability] SessionStore waits still tight
SessionStore expiry tests still rely on fixed sleeps with only ~30ms headroom (e.g., expires in 300ms, then sleep 330ms), so timer jitter or event-loop delays can still cause intermittent failures under heavy CI load. This PR improves the margin vs. before, but it doesn’t eliminate the underlying flake pattern.

Issue description

Several SessionStore tests use fixed setTimeout sleeps and then assert the DB record is gone/present. Even with the increased windows, the assertions can still race the actual cleanup work if the event loop is delayed.

Issue Context

SessionStore schedules expiration cleanup with setTimeout(...) and documents races on slow systems. Tests that assume cleanup has run at an exact time remain inherently timing-fragile.

Fix Focus Areas

  • Replace fixed sleeps like await new Promise(r => setTimeout(r, 330)) + assert with a small polling helper (e.g., poll every 25ms up to a 2–5s max) that waits until the DB condition is met.
  • Keep the expiry durations modest, but remove tight coupling between “sleep duration” and “expiry duration”.
  • src/tests/backend/specs/SessionStore.ts[47-168]
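Such a polling helper might be sketched as (name and defaults are illustrative):

```javascript
// Poll `condition` every intervalMs until it returns true or timeoutMs passes.
const pollUntil = async (condition, {intervalMs = 25, timeoutMs = 2000} = {}) => {
  const deadline = Date.now() + timeoutMs;
  while (!(await condition())) {
    if (Date.now() > deadline) throw new Error('condition not met in time');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
};
```

A test would then `await pollUntil(async () => (await db.get(key)) == null)` instead of sleeping a fixed 330ms and asserting once.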


                     PR 7645 (2026-05-01)                    
[reliability] Concurrency blocks PR builds
With the new pull_request trigger, PR docs builds share the same concurrency group ("pages") as real deployments, so only one run can execute at a time. This can delay PR feedback or delay production docs deployments whenever both are active.

Issue description

Docs builds on PRs now contend with production docs deployments because the workflow uses a single concurrency.group: "pages" for all events. This serializes PR builds and push deployments.

Issue Context

The PR adds a pull_request trigger but keeps the existing global concurrency group. Concurrency is useful for deployments but typically undesirable for PR-only builds.

Fix Focus Areas

  • .github/workflows/build-and-deploy-docs.yml[4-17]
  • .github/workflows/build-and-deploy-docs.yml[28-33]

Suggested direction

Either:

  • Use distinct concurrency groups per event (e.g., pages-${{ github.event_name }}), or
  • Split into separate jobs: a PR build job without the Pages concurrency lock and a push-only deploy job with the lock.
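The first option might look like the following fragment; the group key and cancel expression are assumptions, not taken from the workflow:

```yaml
# PR runs get their own group (and can be superseded by newer pushes to the
# same ref); push deployments keep the shared "pages" lock semantics.
concurrency:
  group: pages-${{ github.event_name }}-${{ github.ref }}
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}
```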

[maintainability] Undocumented `engines.node` bump
The PR raises the minimum supported Node version via `engines.node`, but related documentation still states the old requirement. This can mislead contributors/users and makes a breaking compatibility change without updating the documented guidance.

Issue description

package.json now sets engines.node to >=22.12.0, but documentation still claims the project requires >=22.0.0.

Issue Context

This is a user-visible compatibility/requirements change and should be reflected anywhere the Node requirement is documented.

Fix Focus Areas

  • doc/npm-trusted-publishing.md[88-91]
  • package.json[45-45]


                     PR 7644 (2026-05-01)                    
[security] Unvalidated plugin names
`update()` installs every name from `var/installed_plugins.json` (except `ep_etherpad-lite`) without enforcing the `ep_` prefix. If that file is corrupted or modified, running `plugins update` can install arbitrary packages, unlike `checkForMigration()` which explicitly restricts to `ep_` plugins.

Issue description

bin/plugins.ts updates plugins by trusting var/installed_plugins.json and installing every entry by name. This should be restricted to actual Etherpad plugins (ep_ prefix) to prevent accidental or malicious installation of arbitrary packages.

Issue Context

checkForMigration() already enforces plugin.name.startsWith(plugins.prefix) before installing from installed_plugins.json, but update() does not.

Fix Focus Areas

  • Add an explicit ep_/plugins.prefix validation filter before invoking installPlugin().
  • Consider de-duplicating names (e.g., via new Set(names)) to avoid repeated installs if the file contains duplicates.
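Combined, the filter could be sketched as below; the helper name is hypothetical and `'ep_'` is assumed to be the value of `plugins.prefix`:

```javascript
const prefix = 'ep_'; // assumed value of plugins.prefix

// Keep only genuine, de-duplicated Etherpad plugin names.
const sanitizePluginNames = (names) => [...new Set(names)]
    .filter((n) => n.startsWith(prefix) && n !== 'ep_etherpad-lite');
```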

Fix Focus Areas (code locations)

  • bin/plugins.ts[81-112]
  • src/static/js/pluginfw/installer.ts[81-134]
  • src/static/js/pluginfw/plugins.ts[36-154]


                     PR 7636 (2026-04-30)                    
[correctness] `theme-color` skipped for non-colibris
`pad.html` only emits `<meta name="theme-color">` when `configuredToolbarColor()` returns a value, but that helper returns `null` for any `skinName` other than `colibris`. This means pads using `no-skin` or third-party skins will not include the meta tag, failing the requirement that pad HTML output includes a theme-color meta matching the active theme.

Issue description

pad.html conditionally omits <meta name="theme-color"> for non-colibris skins because configuredToolbarColor() returns null unless skinName === 'colibris'. This violates the requirement that pad HTML output includes a theme-color meta whose content matches the active theme's toolbar color.

Issue Context

The current implementation avoids emitting a potentially wrong color for unknown skins, but the compliance requirement is explicit about always including the meta and matching the active theme.

Fix Focus Areas

  • src/templates/pad.html[9-14]
  • src/templates/pad.html[51-51]
  • src/node/utils/SkinColors.ts[23-33]

[correctness] Dark meta mismatches timeslider
timeslider.html emits a prefers-color-scheme: dark theme-color meta whenever enableDarkMode is true, but the timeslider client does not switch to dark skin-variant classes based on prefers-color-scheme, so on dark-mode devices the browser chrome can be dark while the toolbar stays in the configured (typically light) variant.

Issue description

timeslider.html emits a dark theme-color meta based on prefers-color-scheme: dark when settings.enableDarkMode is true, but the timeslider page does not appear to switch its toolbar to a dark variant based on OS color scheme. This can cause a persistent mismatch (dark address bar vs light toolbar) on dark-mode devices.

Issue Context

Pad pages have client-side logic to switch to dark variants on dark OS preference; timeslider appears not to.

Fix Focus Areas

  • src/templates/timeslider.html[40-42]
  • src/static/js/timeslider.ts[70-129]
  • src/static/js/pad.ts[648-652]

Implementation direction (choose one)

  1. Template-only mitigation: Only emit a single theme-color meta for timeslider that matches the actual configured toolbar (no prefers-color-scheme variants), or only emit the dark variant if settings.skinVariants already includes a dark toolbar class.
  2. Proper dark-mode support for timeslider: Add early client-side logic in timeslider.ts to mirror the pad page behavior (switch skin variant classes to the dark set when enableDarkMode and matchMedia('(prefers-color-scheme: dark)') match, respecting any stored user preference if applicable). Then the existing dark theme-color meta becomes accurate. If you pick option (2), consider also updating the theme-color meta dynamically when the skin variants change so the browser chrome stays in sync.

[correctness] Light theme-color stays white
If settings.skinVariants contains only a dark toolbar variant (for example "dark-toolbar"), toolbarThemeColors() updates only the returned "dark" color and leaves "light" at the white default, so the emitted light-scheme theme-color won't match the actual dark toolbar on light-mode devices.

Issue description

toolbarThemeColors() leaves light at #ffffff when the configured settings.skinVariants contains only a dark toolbar variant (e.g. dark-toolbar). This causes the emitted prefers-color-scheme: light theme-color meta to be white even though the toolbar background is dark.

Issue Context

This shows up when an instance is configured with a dark toolbar variant but the user's OS/browser is in light mode.

Fix Focus Areas

  • src/node/utils/SkinColors.ts[17-27]

Implementation direction

Adjust toolbarThemeColors() so that a toolbar variant token sets the effective toolbar color for both schemes unless an explicit scheme-specific override is present. For example:

  • Track the last matched *-toolbar token color as toolbar.
  • Initialize {light, dark} to {toolbar, toolbar} when toolbar is found.
  • If you want separate values, only split when both a light-toolbar and dark-toolbar token are present.
  • Also add/extend unit tests to cover a skinVariants string that contains only dark-toolbar and assert that .light is set to #576273 (or whatever the configured toolbar color is).
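That direction can be sketched as follows; the token-to-color table is illustrative (only the `#576273` value for `dark-toolbar` is taken from the text above), and the real function's signature may differ:

```javascript
// Assumed token-to-color table; real values live in SkinColors.ts.
const TOOLBAR_COLORS = {'light-toolbar': '#ffffff', 'dark-toolbar': '#576273'};

const toolbarThemeColors = (skinVariants) => {
  let light = null;
  let dark = null;
  let toolbar = null;
  for (const token of skinVariants.split(/\s+/)) {
    if (token in TOOLBAR_COLORS) toolbar = TOOLBAR_COLORS[token];
    if (token === 'light-toolbar') light = TOOLBAR_COLORS[token];
    if (token === 'dark-toolbar') dark = TOOLBAR_COLORS[token];
  }
  // A lone *-toolbar token sets the effective color for both schemes;
  // scheme-specific overrides win only when explicitly present.
  if (toolbar != null) {
    if (light == null) light = toolbar;
    if (dark == null) dark = toolbar;
  }
  return {light: light ?? '#ffffff', dark: dark ?? '#ffffff'};
};
```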

[correctness] `theme-color` wrong for no-skin
`pad.html` always sets `theme-color` from `settings.skinVariants` via `configuredToolbarColor()`, which is hardcoded to colibris variant colors and defaults to `#ffffff`. For the `no-skin` skin, the actual toolbar background comes from core CSS (`#f4f4f4`), so the emitted `theme-color` will not match the toolbar color for that theme.

Issue description

<meta name="theme-color"> is computed from colibris skinVariants and defaults to #ffffff, which does not match the toolbar color when settings.skinName is no-skin (core toolbar background is #f4f4f4). This violates the requirement that theme-color match the toolbar color for non-default themes.

Issue Context

  • pad.html emits theme-color based solely on settings.skinVariants.
  • SkinColors.configuredToolbarColor() only knows colibris variant tokens and falls back to white.
  • no-skin uses core CSS toolbar styling (background-color: #f4f4f4).

Fix Focus Areas

  • src/templates/pad.html[51-52]
  • src/node/utils/SkinColors.ts[14-31]

[correctness] `theme-color` missing default meta
The pad page emits the light `theme-color` only with `media="(prefers-color-scheme: light)"`, and omits the dark variant when `settings.enableDarkMode` is false. In a dark OS/browser color-scheme this results in no applicable `theme-color`, so the browser UI will not match the (still light) toolbar.

Issue description

pad.html sets the light toolbar theme-color only for (prefers-color-scheme: light). When settings.enableDarkMode is false and the user agent prefers dark, there is no applicable theme-color, causing the browser UI color to not match the (light) toolbar.

Issue Context

The pad does not switch to dark variants unless enableDarkMode is enabled, so the toolbar remains light even if the OS/browser prefers dark.

Fix Focus Areas

  • src/templates/pad.html[46-47]
  • src/tests/backend/specs/specialpages.ts[81-89]

[correctness] Dark color mismatches toolbar
SkinColors.toolbarThemeColors() treats any configured token containing "dark" (e.g., "dark-toolbar") as the dark-scheme theme-color, but the client-side dark mode code always switches the toolbar class to "super-dark-toolbar". If settings.skinVariants includes "dark-toolbar", the server will emit #576273 for dark theme-color while the actual toolbar in dark mode will be super-dark (#485365).

Issue description

toolbarThemeColors() can return dark = #576273 when settings.skinVariants contains dark-toolbar, but the pad client applies super-dark-toolbar for dark mode, so the server-rendered dark theme-color can disagree with the real toolbar color.

Issue Context

Both initial dark-mode application and the UI toggle hardcode super-dark-toolbar.

Fix Focus Areas

  • src/node/utils/SkinColors.ts[17-28]
  • src/templates/pad.html[42-48]

Suggested change

Make the pad page’s dark-scheme theme-color match the toolbar class that is actually used in dark mode (super-dark-toolbar). Options include:

  • Adjust toolbarThemeColors() (or introduce a pad-specific helper) so dark is derived from TOOLBAR_COLORS['super-dark-toolbar'] instead of being overridden by dark-toolbar.
  • Update tests accordingly (the expected dark theme-color should match the forced super-dark-toolbar behavior).

[maintainability] `theme-color` lacks feature flag
The PR introduces a new always-on code path that emits `<meta name="theme-color">` without any feature flag or disable-by-default mechanism. This violates the requirement that new features be gated so they can be safely toggled off if needed.

Issue description

<meta name="theme-color"> emission is a new behavior that is enabled unconditionally (for light mode) and is not protected by a feature flag that is disabled by default.

Issue Context

Compliance requires new features to be opt-in/flagged so they can be turned off safely if needed.

Fix Focus Areas

  • src/templates/pad.html[46-47]
  • src/templates/timeslider.html[41-42]

[maintainability] Mixed export styles
SkinColors.ts uses both TypeScript named exports and a CommonJS module.exports assignment, which is redundant and makes module semantics harder to reason about across require() and TS imports. This increases the risk of accidental breakage when refactoring or changing build tooling.

Issue description

src/node/utils/SkinColors.ts mixes ES exports (export const ...) with CommonJS (module.exports = ...). This is redundant and can create confusion or subtle interop issues over time.

Issue Context

  • EJS templates load the helper via require('.../SkinColors').
  • Vitest tests import it via import { ... } from ....
  • TS is configured for module: CommonJS, so ES named exports already compile to CommonJS-compatible exports.

Fix Focus Areas

  • src/node/utils/SkinColors.ts[17-43]

Suggested fix

  • Remove module.exports = ... and rely on the existing export const ... named exports (TypeScript will emit CommonJS exports under the current tsconfig).
  • Alternatively, remove the export keywords and switch callers/tests to require() consistently, but prefer the first option for TS files.


                     PR 7635 (2026-04-30)                    
[testability] `publicURL` undocumented in `doc/`
This PR introduces a new user-facing configuration key `publicURL`, but there are no corresponding documentation updates under the `doc/` folder. Operators may miss the new setting and deploy with incorrect OG/Twitter canonical URLs.

Issue description

A new config key publicURL is added/used, but no documentation under doc/ was updated in the same PR.

Issue Context

This is a feature-impacting operator setting that affects canonical OG/Twitter URLs, so it must be documented in doc/ per the compliance checklist.

Fix Focus Areas

  • settings.json.template[111-124]
  • src/node/utils/Settings.ts[164-168]
  • src/node/utils/Settings.ts[327-338]
  • doc/docker.md[70-86]

[correctness] IPv6 Host breaks OG URLs
sanitizeHost rejects valid bracketed IPv6 Host headers (e.g. "[::1]:9001"), causing buildAbsoluteUrl to fall back to "localhost" and emit incorrect og:url/og:image values on IPv6 literal-host deployments.

Issue description

sanitizeHost() rejects bracketed IPv6 literals in the Host header (required format for IPv6 in URLs/Host), which makes buildAbsoluteUrl() fall back to localhost and produces incorrect og:url / og:image.

Issue Context

Current host allowlist regex does not permit [ or ].

Fix Focus Areas

  • src/node/utils/socialMeta.ts[109-138]

Suggested fix approach

  • Update sanitizeHost to accept either:
      • DNS-style hosts (current behavior), or
      • bracketed IPv6 literals with optional port (e.g. \[[0-9a-f:.]+\](:\d{1,5})?).
  • Consider parsing publicURL with new URL() and validating host via URL properties, to avoid regex edge cases.
  • Add a unit test that asserts Host: [::1]:9001 results in og:url using that host (or at least not falling back to localhost).
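A sketch combining both host forms; the DNS branch is illustrative and should be adjusted to mirror the existing allowlist, not replace it blindly:

```javascript
// DNS-style host or bracketed IPv6 literal, each with an optional port.
const HOST_RE = new RegExp(
    '^(?:' +
    '[a-z0-9](?:[a-z0-9-]*[a-z0-9])?(?:\\.[a-z0-9](?:[a-z0-9-]*[a-z0-9])?)*' +
    '|\\[[0-9a-f:.]+\\]' +
    ')(?::\\d{1,5})?$', 'i');

const sanitizeHost = (host) =>
  (typeof host === 'string' && host.length <= 255 && HOST_RE.test(host))
      ? host : null;
```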

[correctness] IPv6 publicURL rejected
sanitizePublicURL() rejects bracketed IPv6 hosts, so settings.publicURL values like "https://[2001:db8::1]" are ignored and og:url/og:image fall back to request-derived origin (or "localhost"). This produces incorrect canonical URLs in link previews for IPv6-based deployments.

Issue description

sanitizePublicURL()/sanitizeHost() currently reject bracketed IPv6 hosts (e.g. https://[2001:db8::1]), causing publicURL to be ignored and OG URLs to fall back to request-derived origin.

Issue Context

IPv6 literals in URLs must be bracketed per RFC 3986. The current HOST_RE only accepts DNS-like hostnames.

Fix Focus Areas

  • src/node/utils/socialMeta.ts[109-138]

Implementation notes

  • Prefer parsing with new URL() for publicURL validation instead of regex.
  • Update host validation to accept bracketed IPv6 (and optionally validate port range 1–65535).
  • Keep existing protections against CRLF/userinfo and overly long values.
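A URL-parsing sketch along those lines; the function name follows the text above, but the exact checks are assumptions:

```javascript
const sanitizePublicURL = (value) => {
  if (typeof value !== 'string' || value.length > 2048) return null;
  if (/[\r\n\t]/.test(value)) return null; // new URL() silently strips these
  try {
    const u = new URL(value);
    if (u.protocol !== 'https:' && u.protocol !== 'http:') return null;
    if (u.username || u.password) return null; // reject userinfo
    return u.origin;
  } catch {
    return null;
  }
};
```

Note the explicit CRLF/tab check: the WHATWG URL parser removes those characters rather than rejecting the input, so parsing alone does not preserve that protection.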

[reliability] `decodeURIComponent(o.padName)` may throw
`renderSocialMeta()` calls `decodeURIComponent(o.padName)` on the Express route param, which can throw for pad names that decode to strings containing `%` (e.g., `/p/100%25`). This can break pad/timeslider page responses, preventing OG tags from being emitted for some valid pad IDs.

Issue description

decodeURIComponent(o.padName) can throw for some pad names (for example those that contain % after Express has already decoded the route param).

Issue Context

This logic runs on the request path for /p/:pad and /p/:pad/timeslider. A thrown exception can prevent the response from rendering OG tags (and potentially the page).

Fix Focus Areas

  • src/node/utils/socialMeta.ts[123-129]
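A minimal guard sketch (the helper name is hypothetical):

```javascript
// Fall back to the raw value when it is not a valid percent-encoding.
const safeDecode = (s) => {
  try {
    return decodeURIComponent(s);
  } catch {
    return s;
  }
};
```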

[reliability] XSS test allows false pass
The XSS escape test only asserts that og:title (if present) lacks a raw `<script>` tag, so it can pass when og:title is missing entirely (for example due to a 500 response), masking regressions in social meta rendering.

Issue description

The XSS-focused test can pass even if the meta tags are not emitted (e.g., the endpoint errors and returns no <meta property="og:title">). This reduces the test’s ability to catch regressions in social meta rendering.

Issue Context

The test currently:

  • Does not assert a status code.
  • Does not assert that og:title exists.

Fix Focus Areas

  • src/tests/backend/specs/socialMeta.ts[86-99]

Suggested fix

  • Ensure the request hits a reliably-rendering path and assert that og:title is present:
      • Use a known-good pad ID and expect 200.
      • Assert ogTag(res.text, 'og:title') is non-null and contains the escaped form (e.g., &lt;script&gt;...).
  • Optionally add a separate test that uses a pad ID containing %25 to prevent regressions related to URL decoding/URIError crashes.
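The extractor side of that assertion could be sketched as below; the real `ogTag` helper's signature may differ, and the regex-based matching is illustrative only:

```javascript
// Pull the content attribute of a given og: meta tag out of an HTML string.
const ogTag = (html, prop) => {
  const m = html.match(new RegExp(`<meta property="${prop}" content="([^"]*)"`));
  return m ? m[1] : null;
};
```

A test would then assert the response status is 200 and that `ogTag(res.text, 'og:title')` is non-null and contains `&lt;script&gt;`, rather than only checking for the absence of a raw tag.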

[security] Host header poisons OG URLs
buildAbsoluteUrl() constructs og:url and og:image using req.protocol and req.get('host'), so a forged Host/X-Forwarded-* header can make emitted metadata point at an attacker-controlled origin. This enables misleading unfurl previews and can contribute to cache poisoning if any intermediary caches HTML by path only.

Issue description

renderSocialMeta() currently builds absolute URLs using req.protocol and req.get('host'), which are derived from client-controlled headers (and, with trust proxy, from X-Forwarded-*). This can cause OG/Twitter tags to advertise attacker-chosen origins.

Issue Context

The affected values are og:url, og:image, and their Twitter equivalents, which are inserted into templates via <%- socialMetaHtml %>.

Fix Focus Areas

  • src/node/utils/socialMeta.ts[95-104]
  • src/node/utils/socialMeta.ts[132-140]

Suggested fix

  • Prefer a configured canonical external origin (e.g., a single setting such as settings.externalUrl/settings.baseUrl) when generating absolute URLs; fall back to request-derived origin only if not configured.
  • If falling back to request-derived values, validate/normalize the host (e.g., strict hostname/host:port parsing) and consider rejecting/ignoring unexpected values.
  • Keep og:image/twitter:image and og:url consistent (same origin).
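The preference order can be sketched as follows; `configuredOrigin` stands in for a hypothetical setting such as `settings.externalUrl` (an assumption, not an existing API):

```javascript
// Prefer the configured canonical origin; fall back to request-derived
// values only when no origin is configured.
const buildAbsoluteUrl = (req, path, configuredOrigin = null) => {
  const origin = configuredOrigin ?? `${req.protocol}://${req.get('host')}`;
  return new URL(path, origin).toString();
};
```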

[correctness] Default `socialDescription` mismatched
The shipped default `socialDescription` string does not match the required default text, so pad pages will not emit the mandated `og:description` value out of the box. This breaks the compliance success criteria for OG metadata defaults and configurability.

Issue description

The default socialDescription value does not match the compliance-required default string.

Issue Context

Compliance requires the default og:description text to be exactly A document that everybody can edit at the same time. while still being configurable via settings.json.

Fix Focus Areas

  • src/node/utils/Settings.ts[328-334]
  • settings.json.template[111-126]
  • settings.json.docker[120-126]


                     PR 7630 (2026-04-29)                    
[reliability] `$insertorderedlistButton.first()` index use
The updated ordered list spec still relies on `.first()` to choose a toolbar button match, which is DOM-order dependent and can change under plugins. This violates the guideline to avoid plugin-sensitive selector/index assumptions.

Issue description

The spec clicks the ordered-list toolbar button via $insertorderedlistButton.first(), which is order-dependent and may break when plugins alter the toolbar DOM.

Issue Context

Prefer a uniquely identifying selector (for example, button[data-l10n-id='pad.toolbar.ol.title'] or another stable attribute that does not depend on element order).

Fix Focus Areas

  • src/tests/frontend-new/specs/ordered_list.spec.ts[16-21]


                     PR 7628 (2026-04-28)                    
[correctness] Installer allows Node 20
The PR bumps the minimum supported Node.js version to >=22, but the one-line installers still only require Node 20, allowing users to proceed with an unsupported runtime and hit failures later (dependency install or runtime).

Issue description

Minimum supported Node.js is now >=22, but the POSIX and PowerShell one-line installers still accept Node 20.

Issue Context

This PR updates engines.node and the README requirement to Node >=22. The installer scripts should reject Node 20 to avoid installing a broken setup.

Fix Focus Areas

  • bin/installer.sh[33-56]
  • bin/installer.ps1[37-57]
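
As a sketch, the POSIX installer's gate could compare the major component of `node --version` output against 22; the helper name and parsing are illustrative assumptions, not code from bin/installer.sh:

```shell
# Hypothetical guard for the one-line installer: accept only Node >= 22.
# Parses strings like "v22.1.0"; the function name is illustrative.
node_version_ok() {
  v="${1#v}"        # drop the leading "v"
  major="${v%%.*}"  # keep everything before the first dot
  [ "$major" -ge 22 ] 2>/dev/null
}

# e.g.: node_version_ok "$(node --version)" || { echo "Node >= 22 required"; exit 1; }
```

The same threshold would need to be mirrored in installer.ps1 so both installers reject Node 20.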

[correctness] Packages depend on Node 20
Packages depend on Node 20 The Debian/RPM packaging metadata still declares `nodejs (>= 20)` even though Etherpad now requires Node >=22, so package managers can install Node 20 and deliver a broken Etherpad install.

Issue description

Packaging metadata still allows installation with Node 20, but the project now requires Node >=22.

Issue Context

The .deb/.rpm dependencies should enforce the same minimum Node version as package.json to prevent broken installs.

Fix Focus Areas

  • packaging/nfpm.yaml[22-26]
  • packaging/nfpm.yaml[110-119]
  • packaging/README.md[72-75]
  • .github/workflows/deb-package.yml[140-147]

[maintainability] Docs still reference Node 20
Docs still reference Node 20 Some documentation still states Node >=20 and references setup-node 20, contradicting the new minimum Node >=22 and potentially causing contributors to use an unsupported runtime.

Issue description

Docs still mention Node 20 after the project minimum was bumped to Node >=22.

Issue Context

README and engines.node have been updated; remaining docs should be consistent to avoid setup confusion.

Fix Focus Areas

  • AGENTS.MD[8-11]
  • doc/npm-trusted-publishing.md[86-92]


                     PR 7624 (2026-04-28)                    
[reliability] Publishes empty apt repo
Publishes empty apt repo The apt repo generation step never asserts that any `.deb` artifacts were actually copied into the pool, so a tagged run can wipe `site/public/apt/` and publish an empty repository if artifact names change or downloads fail silently. This can break installs/upgrades for all apt users until the next successful publish.

Issue description

The workflow can publish an empty apt repository because the copy loop skips missing globs without failing.

Issue Context

A tagged release run wipes site/public/apt and regenerates it; if no .deb files are present, the generated repo will be empty but still signed and pushed.

Fix Focus Areas

  • .github/workflows/deb-package.yml[310-339]

Suggested change

After downloading artifacts (or after the copy loop), assert at least one .deb exists and fail otherwise. For example:
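
A minimal POSIX sketch (the helper name and pool path are assumptions based on the description above):

```shell
# Hypothetical guard: fail the publish step when the pool holds no .deb files.
# assert_debs_present and the pool path are illustrative, not from the workflow.
assert_debs_present() {
  count=$(find "$1" -name '*.deb' 2>/dev/null | wc -l)
  if [ "$count" -eq 0 ]; then
    echo "::error::no .deb artifacts found under $1; refusing to publish an empty apt repo" >&2
    return 1
  fi
  echo "found $count .deb file(s) under $1"
}

# In the workflow step, e.g.: assert_debs_present site/public/apt/pool
```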


[reliability] Artifact glob exits early
Artifact glob exits early The `Resolve artefact paths` step runs `ls ... | head` under `set -euo pipefail`, so if the glob matches nothing the step exits before reaching the explicit empty-check and custom error message. This can break the release publish job with a non-obvious failure mode whenever the artifact names or download step change.

Issue description

Resolve artefact paths uses ls ... | head under set -euo pipefail. If no files match, ls exits non-zero and the step terminates before the intended -z checks and friendly ::error::... message.

Issue Context

This job is release-gated (refs/tags/v*) and is expected to fail with a clear message if artifacts are missing. Current behavior fails earlier and more opaquely.

Fix Focus Areas

  • .github/workflows/deb-package.yml[250-263]

Suggested change

Replace the ls | head pipelines with a non-fatal glob match, for example:

  • AMD64=$(ls ... 2>/dev/null | head -n1 || true) (and same for ARM64), or
  • use shopt -s nullglob and pick from an array, or
  • use compgen -G 'dist/etherpad_*_amd64.deb' to test existence before ls.

Ensure the step reaches the explicit missing-artifact error path when no matches exist.
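
A portable sketch of the non-fatal glob idea (the helper name is an assumption; in bash, compgen -G or nullglob arrays work equally well; paths without whitespace assumed):

```shell
# first_match GLOB: print the first file matching GLOB, or nothing when there
# is no match, always exiting 0 so `set -euo pipefail` steps don't abort early.
first_match() {
  set -- $1                        # unquoted: the shell expands the glob
  if [ -e "$1" ]; then printf '%s' "$1"; fi
}

# e.g.: AMD64=$(first_match 'dist/etherpad_*_amd64.deb')
```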

[reliability] Key fetch breaks on tags
Key fetch breaks on tags The apt-publish job downloads `packaging/apt/key.asc` from raw.githubusercontent.com using `${{ github.sha }}`, which can be an annotated tag object SHA and therefore not resolve to repository contents, causing the key download (and publish) to fail on release tag runs.

Issue description

apt-publish fetches packaging/apt/key.asc using a raw.githubusercontent.com URL built with ${{ github.sha }}. On release runs, Etherpad creates annotated tags, so ${{ github.sha }} can be a tag object SHA that does not resolve to repository contents on raw.githubusercontent.com, causing the key download to fail.

Issue Context

The job is gated to tag pushes (refs/tags/v*) and includes a fallback curl to fetch packaging/apt/key.asc because only gh-pages is checked out.

Fix

Prefer a ref that resolves to a tree, e.g. ${{ github.ref_name }} (the tag name) in the raw URL, or add a second checkout of the current ref (in a different path) and copy packaging/apt/key.asc from that checkout.

Fix Focus Areas

  • .github/workflows/deb-package.yml[337-343]
  • .github/workflows/deb-package.yml[230-243]

[maintainability] Key URL inconsistency
Key URL inconsistency generate-signing-key.sh documents the public key URL as ether.github.io/etherpad/key.asc, but the workflow/README in this PR publish and instruct users to fetch it from etherpad.org/key.asc. This inconsistency will cause confusion and failed setup if someone follows the script comments.

Issue description

Docs/comments disagree on where key.asc is published.

Issue Context

  • Workflow comments and the staging step indicate that site/public/key.asc is published as https://etherpad.org/key.asc.
  • README installation steps use https://etherpad.org/key.asc.
  • The new key generation helper script references https://ether.github.io/etherpad/key.asc.

Fix Focus Areas

  • packaging/apt/generate-signing-key.sh[11-15]
  • packaging/README.md[56-67]
  • .github/workflows/deb-package.yml[230-237]

Suggested change

Update the helper script comment to match the actual published URL (https://etherpad.org/key.asc). Ensure all references across these files use the same canonical URL.



                     PR 7623 (2026-04-28)                    
[reliability] test-ui runs admin project
test-ui runs admin project `src/package.json` now runs `npx playwright test` with no path or project filter, so it will execute all configured projects including `chromium-admin`. The admin specs require extra setup (admin UI enabled, admin frontend built) that is not part of the regular frontend test setup, causing failures/flakes when `pnpm run test-ui` is run without `--project`.

Issue description

pnpm run test-ui now runs npx playwright test without limiting projects, so it can execute the chromium-admin project (admin specs) in contexts that do not prepare the Admin UI environment.

Issue Context

Admin specs require setup that regular frontend runs do not do (enabling admin UI tests, building admin assets). CI avoids this by explicitly passing --project=chromium/--project=firefox for frontend and using a separate workflow/script for admin.

Fix Focus Areas

  • src/package.json[153-156]
  • src/playwright.config.ts[49-70]

Suggested change

Update the test-ui / test-ui:ui scripts to explicitly run only the frontend projects (e.g., --project=chromium --project=firefox), keeping test-admin as the only entry point that runs chromium-admin. Alternatively, move admin specs into a separate Playwright config and have test-admin pass -c to avoid including the admin project in the default config.


[correctness] `test-ui` path filters plugins
`test-ui` path filters plugins The Playwright config adds plugin `testMatch` globs, but `pnpm run test-ui` runs `npx playwright test tests/frontend-new/specs`, which restricts discovery to core specs and can prevent plugin-owned frontend specs from running in CI.

Issue description

Plugin frontend specs may still not run in CI because the test-ui script passes tests/frontend-new/specs to playwright test, which can filter out plugin spec paths added via testMatch.

Issue Context

The PR’s goal (Compliance ID 1) is that plugin-owned Playwright specs (for example under ../node_modules/ep_*/static/tests/frontend-new/specs/**) execute when plugins are installed. This requires the CI entrypoint (pnpm run test-ui) to not restrict discovery to core-only paths.

Fix Focus Areas

  • src/package.json[153-154]
  • src/playwright.config.ts[22-34]
  • doc/PLUGIN_FRONTEND_TESTS.md[10-13]

[correctness] Admin tests excluded
Admin tests excluded src/playwright.config.ts now defines an explicit testMatch that does not include tests/frontend-new/admin-spec, so the admin Playwright suite will not be discovered. This breaks pnpm run test-admin and the frontend-admin-tests GitHub Actions workflow (likely “No tests found” / missing coverage).

Issue description

Playwright config now sets an explicit testMatch list that excludes tests/frontend-new/admin-spec/**/*.spec.ts, so admin UI tests are no longer discovered.

Issue Context

pnpm run test-admin (and .github/workflows/frontend-admin-tests.yml) runs Playwright against tests/frontend-new/admin-spec, which depends on config-based test discovery.

Fix Focus Areas

  • src/playwright.config.ts[22-30]

Suggested change

Add an additional glob for admin tests, e.g.:

  • tests/frontend-new/admin-spec/**/*.spec.ts

Optionally, consolidate with a brace pattern for readability:

  • tests/frontend-new/{specs,admin-spec}/**/*.spec.ts

[maintainability] `testMatchGlobs` uses 4-space indent
`testMatchGlobs` uses 4-space indent New/modified lines in `src/playwright.config.ts` use 4-space indentation, but the repository’s `.editorconfig` requires 2-space indentation.

Issue description

Changed/added code in src/playwright.config.ts does not follow the repository’s required 2-space indentation.

Issue Context

.editorconfig specifies indent_size = 2 for all files by default.

Fix Focus Areas

  • src/playwright.config.ts[23-52]
  • .editorconfig[3-8]


                     PR 7609 (2026-04-27)                    
[reliability] Missing server-ready abort
Missing server-ready abort In the new *-with-plugins jobs, the test step proceeds to Playwright even if Etherpad never becomes reachable within the 15s loop, which can lead to misleading failures/flakes. Etherpad startup runs plugin migration/installation when `var/installed_plugins.json` is absent, adding extra startup work that these new jobs now trigger by installing 11 plugins.

Issue description

The workflow starts Etherpad in the background and waits up to 15 seconds for http://localhost:9001/, but it never fails if the server is still unreachable. With plugins installed, Etherpad may do extra work during startup (plugin migration/installation), so the fixed wait can be insufficient and Playwright will run against a down server.

Issue Context

The script sets connected=true inside can_connect() but never checks it after the loop.

Fix Focus Areas

  • .github/workflows/frontend-tests.yml[192-209]
  • .github/workflows/frontend-tests.yml[269-286]

Suggested fix

  • Increase the timeout (for example 60–180s), and after the loop do something like:
  • if [ "$connected" != true ]; then echo "Etherpad failed to start"; tail -n +1 /tmp/etherpad-server.log; exit 1; fi
  • Optionally add set -euo pipefail and ensure the background server process is cleaned up on exit (trap).
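
The wait loop itself can be factored into a reusable helper; the function name, probe command, timeout, and log path are illustrative assumptions:

```shell
# wait_for CMD TRIES: run CMD once per second until it succeeds or TRIES is
# exhausted; returns non-zero on timeout so the caller can dump logs and exit.
wait_for() {
  tries=$2
  while [ "$tries" -gt 0 ]; do
    if $1 >/dev/null 2>&1; then return 0; fi
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

# e.g.: wait_for 'curl -fsS http://localhost:9001/' 60 || {
#   echo 'Etherpad failed to start'; cat /tmp/etherpad-server.log; exit 1; }
```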


                     PR 7602 (2026-04-26)                    
[maintainability] Hardcoded locales output path
Hardcoded locales output path The build-time copy uses a hard-coded destination (`../src/templates/admin/locales`) instead of deriving it from Vite’s resolved `build.outDir`, duplicating configuration in two places. If `outDir` is ever changed/overridden, locales may be copied to an unexpected location.

Issue description

The locale copy destination is duplicated as a literal path, separate from build.outDir. This increases the chance of the two drifting over time.

Issue Context

Express serves admin files from <settings.root>/src/templates + request path, and Vite builds admin into build.outDir.

Fix Focus Areas

  • admin/vite.config.ts[18-35]
  • admin/vite.config.ts[65-69]

Suggested fix

Capture the resolved Vite config via configResolved(resolved) and compute destDir from resolved.build.outDir (e.g., path.resolve(resolved.root, resolved.build.outDir, 'locales') or similar), instead of hard-coding ../src/templates/admin/locales.


[maintainability] Hardcoded `http://` in tests
Hardcoded `http://` in tests New code hardcodes `http://localhost:9001` in Playwright navigation/requests and in the Vite dev proxy target, which reduces protocol flexibility in environments that might run under HTTPS or use a configurable base URL.

Issue description

Hardcoded http://localhost:9001 URLs were added in tests and Vite proxy config, reducing protocol independence.

Issue Context

Compliance requires protocol-independent URLs where applicable. Tests can usually rely on Playwright baseURL (or relative navigation) and Vite proxy targets can often be derived from environment/config.

Fix Focus Areas

  • src/tests/frontend-new/admin-spec/admini18n.spec.ts[18-36]
  • admin/vite.config.ts[77-79]

[reliability] Dev URL decode crash
Dev URL decode crash The Vite dev middleware uses `decodeURIComponent()` on the request path without handling `URIError`, so a malformed percent-encoded URL can crash the `vite dev` process. This makes the admin dev server fragile to unexpected requests.

Issue description

decodeURIComponent() can throw on malformed percent-encoding. The dev middleware should not allow a bad URL to crash the Vite dev server.

Issue Context

This middleware is mounted at /admin/locales and parses req.url to map to a locale JSON file.

Fix Focus Areas

  • admin/vite.config.ts[41-50]

Suggested fix

Wrap the decode in try/catch and return next() (or respond 400) on URIError, e.g.:

  • let decodedPath; try { decodedPath = decodeURIComponent(...); } catch { return next(); }

[reliability] Unhandled stream error
Unhandled stream error The dev middleware pipes `fs.createReadStream()` to the response without an `'error'` handler, so an I/O failure can raise an unhandled stream error and crash the dev server. This is a TOCTOU-prone path because the file can disappear between `existsSync()` and streaming.

Issue description

fs.createReadStream() can emit error. Without a handler, an I/O error can crash the dev server.

Issue Context

The dev middleware serves JSON files under /admin/locales during vite dev.

Fix Focus Areas

  • admin/vite.config.ts[45-50]

Suggested fix

Create the stream, attach stream.on('error', ...), then stream.pipe(res). On error, respond with an error status (or call next(err)), e.g.:

  • const stream = fs.createReadStream(filepath);
  • stream.on('error', (err) => { res.statusCode = 500; res.end('Failed to read locale'); });
  • stream.pipe(res);

Also consider returning after starting the response to avoid accidental fallthrough.


                     PR 7585 (2026-04-23)                    
[correctness] NBSP lost at boundaries
NBSP lost at boundaries contentcollector.ts only preserves a NBSP run if it is between non-whitespace characters within the same DOM text node, so a user-intended NBSP at the start/end of a text node is converted back to a normal space. This breaks NBSP round-tripping when words are split across spans (e.g., formatting/attribute boundaries), despite the intent of #3037.

Issue description

The NBSP preservation heuristic in contentcollector relies on per-string neighbor characters (before/after) but textify() is applied per DOM text node. If a user-intended NBSP sits at a text-node boundary (e.g., "100" in one span and "\u00a0km" in the next), the heuristic sees before === '' or after === '' and incorrectly converts the NBSP to a normal space.

Issue Context

  • collectContent() processes each TEXT_NODE independently and calls lines.appendText(textify(txt2), ...).
  • The editor DOM is built from multiple <span> runs and processSpaces() runs on the full line HTML, so it is normal for the first character of a span’s text node to be a NBSP.

Fix Focus Areas

  • src/static/js/contentcollector.ts[79-99]
  • src/static/js/contentcollector.ts[350-402]

Suggested fix approach

Move the [ \u00a0]+ run canonicalization out of textify() (per-text-node) and into a post-processing step that runs on the fully assembled line string (or provide textify with cross-node context such as the last appended character and the next character to be appended). Ensure the transformation remains length-preserving so selection offsets and attribution lengths stay consistent.


[maintainability] `PadType` adds `spliceText`
`PadType` adds `spliceText` The PR adds `spliceText` to the exported `PadType`, expanding the Pad API surface for TS/plugin consumers, but this PR does not include a corresponding documentation update under `doc/`. This can leave integrators unaware of the new method and its expected behavior.

Issue description

A new exported Pad API method (spliceText) was added to PadType without updating the project documentation in doc/.

Issue Context

PadType is part of the typed interface for Pad objects, which are exposed to server-side hook/plugin code. Per compliance, public API surface changes should be documented in the same PR.

Fix Focus Areas

  • src/node/types/PadType.ts[15-20]
  • doc/api/hooks_server-side.md[267-325]


                     PR 7583 (2026-04-22)                    
[reliability] Readonly /opt breaks startup
Readonly /opt breaks startup The unit sets ProtectSystem=strict but Etherpad writes runtime files under settings.root/var during startup (for example var/js and var/installed_plugins.json). Because /opt/etherpad/var is neither created/redirected in postinstall nor included in ReadWritePaths, a fresh install can fail to start with mkdir/write errors under /opt/etherpad/var.

Issue description

The systemd unit makes /opt/etherpad effectively read-only (ProtectSystem=strict + ReadWritePaths omits /opt/etherpad/...), but Etherpad writes to path.join(settings.root, 'var', ...) during startup, where settings.root resolves to /opt/etherpad in this package layout. This can prevent etherpad.service from starting.

Issue Context

Etherpad startup awaits checkForMigration() which can write ${root}/var/installed_plugins.json, and the express hooks create ${root}/var/js. The package currently does not create or redirect /opt/etherpad/var to a writable location.

Fix Focus Areas

  • packaging/scripts/postinstall.sh[34-40]
  • packaging/systemd/etherpad.service[22-44]

Suggested fix

Implement one of the following (prefer the symlink approach to keep writes out of /opt):

  1. Symlink /opt/etherpad/var to a writable location:
  • In postinstall.sh (configure):
    • mkdir -p /var/lib/etherpad/var
    • ln -sfn /var/lib/etherpad/var /opt/etherpad/var
    • chown -R etherpad:etherpad /var/lib/etherpad/var
  • This keeps ReadWritePaths=/var/lib/etherpad sufficient.
  2. Allow /opt/etherpad/var writes explicitly:
  • Create /opt/etherpad/var in postinstall.sh and chown it to etherpad:etherpad.
  • Add /opt/etherpad/var to ReadWritePaths in the unit.

Either way, ensure the directory exists before service start so fs.mkdirSync(.../var/js) and plugin migration writes don’t throw.
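
The symlink option can be sketched as a postinstall fragment (the helper name is illustrative; the real script would also chown the state directory to etherpad:etherpad, which is omitted here):

```shell
# Hypothetical postinstall (configure) fragment implementing the symlink
# approach: runtime writes land under the state dir, not under /opt.
setup_var_dir() {
  statedir=$1   # e.g. /var/lib/etherpad
  optdir=$2     # e.g. /opt/etherpad
  mkdir -p "$statedir/var"
  ln -sfn "$statedir/var" "$optdir/var"
  # In the real script, also: chown -R etherpad:etherpad "$statedir/var"
}
```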

[security] nfpm download not verified
nfpm download not verified The release workflow downloads an nfpm .deb via curl and installs it with dpkg without verifying a checksum or signature. If the upstream artifact is tampered with, the workflow could produce compromised release packages.

Issue description

The workflow installs nfpm from a downloaded .deb without any integrity verification.

Issue Context

Even with HTTPS, this lacks defense-in-depth against compromised upstream release artifacts.

Fix Focus Areas

  • .github/workflows/deb-package.yml[64-71]

Suggested fix

Add an integrity verification step before dpkg -i, for example:

  • Download nfpm’s published checksums.txt (or equivalent) for ${NFPM_VERSION} and verify /tmp/nfpm.deb with sha256sum -c.
  • Alternatively, verify a published signature (GPG/cosign) if available for nfpm release artifacts.
  • Only proceed to sudo dpkg -i if verification passes.
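
The verification itself reduces to a checksum comparison; a generic helper (the function name is an assumption, and obtaining the expected digest from nfpm's published checksums file for the pinned version is out of scope for this sketch):

```shell
# verify_sha256 FILE EXPECTED_HEX: fail unless FILE hashes to EXPECTED_HEX.
# The workflow would only run `sudo dpkg -i` after this succeeds.
verify_sha256() {
  echo "$2  $1" | sha256sum -c - >/dev/null
}
```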

[reliability] Node before setup-node
Node before setup-node In `.github/workflows/deb-package.yml`, the “Resolve version” step runs `node -p ...` before `actions/setup-node`, so the workflow can fail on runners without a suitable preinstalled `node` binary. This breaks packaging builds on non-tag pushes/PRs where VERSION is derived from `package.json`.

Issue description

The workflow executes node to compute the version before actions/setup-node runs, which can fail on runners without a preinstalled (or compatible) Node.js.

Issue Context

This affects non-tag builds where VERSION is read from package.json.

Fix

Reorder steps so actions/setup-node runs before any node command, or avoid Node entirely in the version resolution step (e.g., parse JSON via jq or use a GitHub expression when possible).

Fix Focus Areas

  • .github/workflows/deb-package.yml[58-77]
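
If avoiding Node entirely, the version can be read with jq, or with a sed fallback when jq is unavailable; the helper name is illustrative and sed-parsing JSON is a best-effort sketch:

```shell
# pkg_version FILE: extract the top-level "version" value from package.json
# without invoking node. jq is more robust where available: jq -r .version FILE
pkg_version() {
  sed -n 's/.*"version"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$1" | head -n 1
}
```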

[reliability] Smoke test doesn't ensure Node>=20
Smoke test doesn't ensure Node>=20 The deb-package workflow installs `nodejs` from the runner’s default APT sources without ensuring it meets the package dependency `nodejs (>= 20)`, so `dpkg -i` can fail to configure (or behave differently) depending on the runner image. This makes the smoke-test nondeterministic and can invalidate the subsequent assertions/startup test.

Issue description

The CI smoke test installs nodejs from default APT sources without ensuring it satisfies the package dependency (nodejs (>= 20)), making the smoke test nondeterministic and potentially invalid.

Issue Context

  • The .deb declares Depends: nodejs (>= 20).
  • The workflow currently does apt-get install -y nodejs with no repo setup/pinning.
  • packaging/test-local.sh already uses NodeSource (setup_lts.x) before installing nodejs.

Fix Focus Areas

  • .github/workflows/deb-package.yml[136-139]

Suggested fix

In the smoke-test step, install Node.js via a deterministic mechanism, for example:

  • Add NodeSource LTS repo (curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -) then sudo apt-get install -y nodejs, or
  • Use a pinned distro/repo that guarantees nodejs >= 20.

This should happen before dpkg -i dist/*.deb so the package can configure and postinstall assertions are meaningful.

[reliability] Preinst depends not guaranteed
Preinst depends not guaranteed The preinstall maintainer script requires addgroup/adduser, but the package only declares adduser as a regular dependency, so `dpkg -i` on a minimal system can fail before unpacking if adduser isn’t already installed.

Issue description

preinstall.sh calls addgroup/adduser during the preinst phase, but adduser is only in Depends. Direct installs via dpkg -i can run preinst before adduser is present, causing installation to abort.

Issue Context

This is especially problematic because a preinst failure happens before unpacking, so follow-up dependency repair (apt-get -f install) may not be able to recover.

Suggested fix

Implement one of:

  • Debian-correct dependency: add a Debian Pre-Depends: adduser (if nfpm supports it) so adduser is guaranteed before preinst.
  • Move user/group creation later: shift user/group creation to postinstall (configure), and adjust packaging so unpack-time ownership does not require the etherpad group to already exist (e.g., set /etc/default/etherpad group to root in the payload and chgrp it in postinstall).

Fix Focus Areas

  • packaging/scripts/preinstall.sh[1-18]
  • packaging/nfpm.yaml[22-26]
  • packaging/scripts/postinstall.sh[16-34]

[correctness] Wrong tsx loader mode
Wrong tsx loader mode packaging/bin/etherpad starts Etherpad with Node’s ESM preload (--import …/tsx/…/esm/index.mjs), but Etherpad’s production entrypoint expects the CommonJS tsx/cjs require hook and uses CommonJS exports. This will likely crash at startup (exports is undefined) and prevent etherpad.service from starting.

Issue description

/usr/bin/etherpad preloads tsx via Node's --import pointing at tsx's ESM loader, but Etherpad's server entrypoint is started in CommonJS mode (tsx/cjs) and uses exports.*. This mismatch can break startup.

Issue Context

Etherpad's own src/package.json uses node --require tsx/cjs node/server.ts for production. src/node/server.ts assigns exports.start = ..., which will throw if the file is evaluated as an ES module.

Fix Focus Areas

  • packaging/bin/etherpad[13-18]

Suggested change

Switch to the CJS preload used by Etherpad:

  • Replace --import "file://.../tsx/.../esm/index.mjs" with --require tsx/cjs (or an absolute path to tsx/cjs under the installed tree).
  • Keep NODE_ENV=production as you already do.

[correctness] Plugins not installable
Plugins not installable The package installs /opt/etherpad as root:root and the systemd unit uses ProtectSystem=strict without making Etherpad’s plugin install paths writable. Etherpad installs plugins into /opt/etherpad/src/plugin_packages and creates symlinks under /opt/etherpad/src/node_modules, so plugin installs/upgrades will fail with EACCES under the packaged service.

Issue description

The packaged service cannot install/upgrade plugins at runtime because Etherpad's plugin manager writes to:

  • ${settings.root}/src/plugin_packages
  • ${settings.root}/src/node_modules (creates symlinks)

In the package, ${settings.root} resolves to /opt/etherpad, which is installed as root:root and is read-only under ProtectSystem=strict.

Issue Context

  • installer.ts defines pluginInstallPath under settings.root/src/plugin_packages and node_modules under .../src/node_modules.
  • LinkInstaller uses PluginManager({pluginsPath: pluginInstallPath}) and symlinks into node_modules.
  • nfpm.yaml sets /opt/etherpad tree owner/group to root:root.
  • etherpad.service only allows writes to /var/lib/etherpad, /var/log/etherpad, /etc/etherpad.
  • postinstall.sh currently only redirects /opt/etherpad/var.

Fix Focus Areas

  • packaging/systemd/etherpad.service[23-44]
  • packaging/nfpm.yaml[51-59]
  • packaging/scripts/postinstall.sh[40-50]

Suggested fix approach

  1. Redirect plugin storage to /var/lib/etherpad (similar to the existing /opt/etherpad/var redirect):
  • Create /var/lib/etherpad/plugin_packages owned by etherpad:etherpad.
  • Symlink /opt/etherpad/src/plugin_packages -> /var/lib/etherpad/plugin_packages in postinstall.sh.
  2. Allow symlink creation under /opt/etherpad/src/node_modules:
  • Add /opt/etherpad/src/node_modules to ReadWritePaths= in the unit, and
  • In postinstall.sh, make that directory writable for the service user (e.g., chgrp etherpad /opt/etherpad/src/node_modules && chmod g+w /opt/etherpad/src/node_modules).

If runtime plugin installation is intentionally unsupported for the .deb, then explicitly disable/hide the plugin installer UI and document the limitation; otherwise users will hit runtime failures.

[reliability] Broken tag trigger globs
Broken tag trigger globs The deb-package workflow’s tag filters use regex-like patterns (v[0-9]+.[0-9]+.[0-9]+) but GitHub Actions uses glob matching, where `[0-9]+` is a single-character class (digit or '+'), not “one-or-more digits”. As a result, common tags like v2.10.0 will not match and the workflow won’t run.

Issue description

The workflow tag filters are written like regex ([0-9]+), but Actions tag filters are globs. In a glob, [0-9]+ matches a single character (a digit or '+'), so tags with multi-digit components (e.g., v2.10.0) won't trigger the workflow.

Issue Context

There is already a correct pattern in .github/workflows/handleRelease.yml: v*.*.*.

Fix Focus Areas

  • .github/workflows/deb-package.yml[2-7]

Suggested change

Replace the tag filters with glob patterns such as:

  • v*.*.*
  • v*.*.*-* (or a single broader v* if you prefer and then validate/tag-parse inside the job).

[reliability] Plugin migration startup failure
Plugin migration startup failure On fresh installs, Etherpad startup runs checkForMigration(), which falls back to running `pnpm ls` and then writing `/opt/etherpad/var/installed_plugins.json` if the file is missing; the package does not ship/create that file, and the systemd unit disallows writes under `/opt/etherpad`. This can cause the service to crash during startup and never reach `/health`.

Issue description

Fresh installs can fail because Etherpad calls checkForMigration() during startup, and if /opt/etherpad/var/installed_plugins.json is missing it tries to execute pnpm ls and write that file under /opt/etherpad/var. The .deb packaging currently doesn’t ship/create that file (or even /opt/etherpad/var), and the systemd unit doesn’t permit writes under /opt/etherpad.

Issue Context

  • Packaging stages /opt/etherpad from a curated file list.
  • Etherpad’s plugin migration logic treats absence of var/installed_plugins.json as a trigger to run pnpm and then persist plugin state under settings.root/var.
  • The systemd sandbox only allows writes to /var/lib/etherpad, /var/log/etherpad, and /etc/etherpad.

Fix Focus Areas

  • .github/workflows/deb-package.yml[73-85]
  • packaging/nfpm.yaml[47-59]
  • packaging/scripts/postinstall.sh[14-40]
  • packaging/systemd/etherpad.service[22-44]

Suggested fix approach

  1. Ensure /opt/etherpad/var/ exists in the packaged tree (or create it in postinstall.sh).
  2. Seed /opt/etherpad/var/installed_plugins.json at install time with minimal valid content (for example, only ep_etherpad-lite), so checkForMigration() does not attempt to run pnpm on first boot.
  3. If you want runtime plugin install/uninstall to work under the hardened unit, relocate plugin state to /var/lib/etherpad (and symlink /opt/etherpad/var to it), and/or extend ReadWritePaths to include the required writable plugin directories.

[reliability] Readonly /opt breaks startup
Readonly /opt breaks startup The unit sets ProtectSystem=strict but Etherpad writes runtime files under settings.root/var during startup (for example var/js and var/installed_plugins.json). Because /opt/etherpad/var is neither created/redirected in postinstall nor included in ReadWritePaths, a fresh install can fail to start with mkdir/write errors under /opt/etherpad/var.

Issue description

The systemd unit makes /opt/etherpad effectively read-only (ProtectSystem=strict + ReadWritePaths omits /opt/etherpad/...), but Etherpad writes to path.join(settings.root, 'var', ...) during startup, where settings.root resolves to /opt/etherpad in this package layout. This can prevent etherpad.service from starting.

Issue Context

Etherpad startup awaits checkForMigration() which can write ${root}/var/installed_plugins.json, and the express hooks create ${root}/var/js. The package currently does not create or redirect /opt/etherpad/var to a writable location.

Fix Focus Areas

  • packaging/scripts/postinstall.sh[34-40]
  • packaging/systemd/etherpad.service[22-44]

Suggested fix

Implement one of the following (prefer the symlink approach to keep writes out of /opt):

  1. Symlink /opt/etherpad/var to a writable location:
  • In postinstall.sh (configure):
  • mkdir -p /var/lib/etherpad/var
  • ln -sfn /var/lib/etherpad/var /opt/etherpad/var
  • chown -R etherpad:etherpad /var/lib/etherpad/var
  • This keeps ReadWritePaths=/var/lib/etherpad sufficient.
  1. Allow /opt/etherpad/var writes explicitly:
  • Create /opt/etherpad/var in postinstall.sh and chown it to etherpad:etherpad.
  • Add /opt/etherpad/var to ReadWritePaths in the unit. Either way, ensure the directory exists before service start so fs.mkdirSync(.../var/js) and plugin migration writes don’t throw.

[reliability] Readonly /opt breaks startup
Readonly /opt breaks startup The unit sets ProtectSystem=strict but Etherpad writes runtime files under settings.root/var during startup (for example var/js and var/installed_plugins.json). Because /opt/etherpad/var is neither created/redirected in postinstall nor included in ReadWritePaths, a fresh install can fail to start with mkdir/write errors under /opt/etherpad/var.

Issue description

The systemd unit makes /opt/etherpad effectively read-only (ProtectSystem=strict + ReadWritePaths omits /opt/etherpad/...), but Etherpad writes to path.join(settings.root, 'var', ...) during startup, where settings.root resolves to /opt/etherpad in this package layout. This can prevent etherpad.service from starting.

Issue Context

Etherpad startup awaits checkForMigration() which can write ${root}/var/installed_plugins.json, and the express hooks create ${root}/var/js. The package currently does not create or redirect /opt/etherpad/var to a writable location.

Fix Focus Areas

  • packaging/scripts/postinstall.sh[34-40]
  • packaging/systemd/etherpad.service[22-44]

Suggested fix

Implement one of the following (prefer the symlink approach to keep writes out of /opt):

  1. Symlink /opt/etherpad/var to a writable location:
  • In postinstall.sh (configure):
    • mkdir -p /var/lib/etherpad/var
    • ln -sfn /var/lib/etherpad/var /opt/etherpad/var
    • chown -R etherpad:etherpad /var/lib/etherpad/var
  • This keeps ReadWritePaths=/var/lib/etherpad sufficient.
  2. Allow /opt/etherpad/var writes explicitly:
  • Create /opt/etherpad/var in postinstall.sh and chown it to etherpad:etherpad.
  • Add /opt/etherpad/var to ReadWritePaths in the unit.

Either way, ensure the directory exists before service start so fs.mkdirSync(.../var/js) and plugin migration writes don’t throw.
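The symlink approach from option 1 can be sketched as a postinstall fragment. PREFIX is purely illustrative so the commands can be dry-run against a scratch directory; the real script would use the bare / paths, and the chown is commented out because it needs root and the etherpad user:

```shell
# Sketch of option 1 for postinstall.sh (configure). PREFIX is only for
# dry-running against a scratch directory; the real script uses / paths.
PREFIX="${PREFIX:-$(mktemp -d)}"

mkdir -p "$PREFIX/var/lib/etherpad/var" "$PREFIX/opt/etherpad"
ln -sfn "$PREFIX/var/lib/etherpad/var" "$PREFIX/opt/etherpad/var"
# chown -R etherpad:etherpad "$PREFIX/var/lib/etherpad/var"  # needs root

echo "var dir resolves to: $(readlink "$PREFIX/opt/etherpad/var")"
```

With this in place the existing ReadWritePaths=/var/lib/etherpad stays sufficient, since all writes under /opt/etherpad/var land in /var/lib/etherpad.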

[security] Overbroad workflow token scope
Overbroad workflow token scope The deb-package workflow grants `contents: write` at the workflow level, so the build job also gets write permissions even on pull_request runs, despite only the release job needing to upload release assets.

Issue description

.github/workflows/deb-package.yml sets permissions: contents: write at the workflow level, unnecessarily broadening the GITHUB_TOKEN scope for the build job (including pull_request runs).

Issue Context

Only the release job needs to write release assets. The build job checks out code, builds packages, and uploads artifacts.

Fix Focus Areas

  • .github/workflows/deb-package.yml[34-36]
  • .github/workflows/deb-package.yml[40-194]
  • .github/workflows/deb-package.yml[195-202]

Suggested fix

  • Change workflow-level permissions to read-only (or omit entirely).
  • Keep/add permissions: contents: write only on the release job.
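A minimal sketch of the permission split (job names and the tag condition are assumptions based on the descriptions above):

```yaml
# Workflow-level token stays read-only; only the release job escalates.
permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    # ... checkout, build, upload-artifact steps ...

  release:
    needs: build
    if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    permissions:
      contents: write  # needed only to upload release assets
    # ... download-artifact and release-upload steps ...
```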

[reliability] Service stays enabled after removal
Service stays enabled after removal packaging/scripts/postinstall.sh enables etherpad.service on first install, but packaging/scripts/postremove.sh never disables it on remove/purge, which can leave an enabled systemd symlink pointing at a removed unit file after uninstall.

Issue description

The package enables etherpad.service on first install but does not disable it on remove/purge, which can leave stale enablement symlinks under /etc/systemd/system/* after uninstall.

Issue Context

  • postinstall.sh runs systemctl enable etherpad.service on fresh installs.
  • postremove.sh only runs systemctl daemon-reload and removes a few symlinks under /opt/etherpad, but never disables the unit.

Fix Focus Areas

  • packaging/scripts/postinstall.sh[92-103]
  • packaging/scripts/preremove.sh[6-11]
  • packaging/scripts/postremove.sh[9-34]

Suggested change

  • Add systemctl disable etherpad.service (and optionally systemctl reset-failed etherpad.service) during uninstall. The safest place is preremove.sh (before the unit file is removed), and/or in postremove.sh for both remove and purge as a fallback.

[correctness] Purge skips symlink cleanup
Purge skips symlink cleanup `packaging/scripts/postremove.sh` removes the `/opt/etherpad` symlinks only in the `remove)` case, not in the `purge)` case. If `postrm purge` runs without the `remove` cleanup, `/opt/etherpad` can be left behind containing dangling symlinks created by `postinstall.sh`.

Issue description

The purge) branch does not remove /opt/etherpad symlinks created during install, which can leave /opt/etherpad non-empty (and symlinks dangling).

Issue Context

Symlinks are created in postinstall to expose /etc/etherpad/settings.json and redirect writable paths.

Fix

In the purge) case, also remove the same symlinks as in remove) (or remove /opt/etherpad entirely if appropriate and safe).

Fix Focus Areas

  • packaging/scripts/postremove.sh[9-34]
  • packaging/scripts/postinstall.sh[36-50]
  • packaging/scripts/postinstall.sh[65-82]

[security] curl|sudo bash in CI
curl|sudo bash in CI The Debian smoke test installs NodeSource by piping a remote script directly into `sudo bash`, executing network-fetched content as root during the workflow run. This increases the blast radius of a supply-chain compromise of that endpoint to the job that builds and uploads release artifacts.

Issue description

The workflow executes curl ... | sudo bash - to install Node.js, which runs network content as root.

Issue Context

This happens in the smoke test job before installing the newly built .deb.

Fix Focus Areas

  • .github/workflows/deb-package.yml[133-143]

Suggested fix

Prefer a repository-based installation with explicit keyring + signed-by configuration (or a pinned, checksum-verified installer artifact) rather than piping to bash. For example:

  • Fetch the NodeSource GPG key into /usr/share/keyrings/nodesource.gpg and configure the apt source with signed-by=..., then apt-get update && apt-get install nodejs.
  • Alternatively, run the smoke test in a container image that already contains Node >=20 (to avoid installing Node via shell pipe at all).
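A hedged sketch of the keyring-based variant as a workflow step — the key URL, repo suite (nodistro), and Node 20 channel are assumptions taken from NodeSource's published repository layout and should be verified before use:

```yaml
- name: Install Node.js via signed NodeSource repo (no curl | bash)
  run: |
    curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key \
      | gpg --dearmor | sudo tee /usr/share/keyrings/nodesource.gpg >/dev/null
    echo "deb [signed-by=/usr/share/keyrings/nodesource.gpg] https://deb.nodesource.com/node_20.x nodistro main" \
      | sudo tee /etc/apt/sources.list.d/nodesource.list >/dev/null
    sudo apt-get update
    sudo apt-get install -y nodejs
```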

[correctness] RPM scripts use adduser
RPM scripts use adduser The nfpm manifest claims the same packaging manifest can produce RPMs (and includes rpm-specific dependency overrides), but the install/remove scripts call Debian-only user/group management commands (addgroup/adduser/deluser/delgroup), so an RPM install would fail at script execution time.

Issue description

packaging/nfpm.yaml declares RPM support (and RPM dependencies), but the lifecycle scripts are Debian-specific (addgroup/adduser/deluser/delgroup). If someone builds an RPM from this manifest, installation/removal will fail when those commands are missing.

Issue Context

The README also states the same manifest produces .rpm/.apk, which further implies this path is intended to work.

Fix Focus Areas

  • packaging/nfpm.yaml[104-120]
  • packaging/scripts/preinstall.sh[6-18]
  • packaging/scripts/postremove.sh[19-30]
  • packaging/README.md[1-5]

Suggested fix options

  1. If RPM/APK is in-scope:
  • Add per-packager script overrides (RPM equivalents using groupadd/useradd/userdel/groupdel from shadow-utils), and adjust systemd unit install path if needed for RPM-based distros.
  2. If Debian-only for now:
  • Remove/disable the RPM/APK claims in packaging/README.md and remove the overrides.rpm block (or clearly document Debian-only support) to avoid shipping a manifest that advertises broken RPM support.

[security] nfpm download not verified
nfpm download not verified The release workflow downloads an nfpm .deb via curl and installs it with dpkg without verifying a checksum or signature. If the upstream artifact is tampered with, the workflow could produce compromised release packages.

Issue description

The workflow installs nfpm from a downloaded .deb without any integrity verification.

Issue Context

Even with HTTPS, this lacks defense-in-depth against compromised upstream release artifacts.

Fix Focus Areas

  • .github/workflows/deb-package.yml[64-71]

Suggested fix

Add an integrity verification step before dpkg -i, for example:

  • Download nfpm’s published checksums.txt (or equivalent) for ${NFPM_VERSION} and verify /tmp/nfpm.deb with sha256sum -c.
  • Alternatively, verify a published signature (GPG/cosign) if available for nfpm release artifacts.
  • Only proceed to sudo dpkg -i if verification passes.
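For example, as a workflow step (artifact names here follow nfpm's published release layout and are an assumption; verify them against the actual assets for the pinned NFPM_VERSION):

```yaml
- name: Install nfpm (checksum-verified)
  run: |
    base="https://github.com/goreleaser/nfpm/releases/download/v${NFPM_VERSION}"
    curl -fsSLO "$base/nfpm_${NFPM_VERSION}_amd64.deb"
    curl -fsSLO "$base/checksums.txt"
    # sha256sum -c exits non-zero (failing the step) on any mismatch.
    grep "nfpm_${NFPM_VERSION}_amd64.deb" checksums.txt | sha256sum -c -
    sudo dpkg -i "nfpm_${NFPM_VERSION}_amd64.deb"
```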

                     PR 7569 (2026-04-20)                    
[security] Overprivileged PR token
Overprivileged PR token The workflow runs on pull_request but now grants GITHUB_TOKEN `packages: write`, so untrusted PR-controlled code executed in the `Test` step can publish/overwrite GHCR packages despite the explicit GHCR login/push steps being gated to `push`. This increases CI blast radius and enables supply-chain compromise via CI token abuse.

Issue description

The workflow grants packages: write at the workflow level while still running on pull_request, which exposes package publish capability to PR-executed code.

Issue Context

Top-level permissions: apply to the entire workflow run (including pull_request runs), even when individual publish steps are gated by if: github.event_name == 'push'.

Fix Focus Areas

  • .github/workflows/docker.yml[2-17]
  • .github/workflows/docker.yml[64-122]

Proposed fix

  1. Keep the PR test job with minimal permissions (for example only contents: read).
  2. Create a separate publish job that runs only on push (use if: github.event_name == 'push') and set permissions: packages: write at the job level for that job only.
  3. Optionally make publish depend on the test job (needs:) so pushes only publish after tests pass.

[maintainability] Docs miss GHCR option
Docs miss GHCR option After adding `ghcr.io/ether/etherpad` as a publish target, Docker documentation still states the official image is only on Docker Hub and provides only Docker Hub pull commands. This makes the new GHCR mirror effectively undiscoverable for users relying on project docs.

Issue description

Documentation only references Docker Hub, but the workflow now publishes the same tags to GHCR.

Issue Context

Users following docs will not discover the GHCR mirror and may continue to hit Docker Hub rate limits.

Fix Focus Areas

  • doc/docker.md[1-13]
  • README.md[104-113]

Proposed fix

Update Docker docs (and optionally README) to mention GHCR as an alternative mirror and add example pull commands such as:

  • docker pull ghcr.io/ether/etherpad:latest
  • docker pull ghcr.io/ether/etherpad:<version>

Docker Hub can remain the canonical registry if desired.


                     PR 7567 (2026-04-20)                    
[correctness] Changeset base length mismatch
Changeset base length mismatch Pad.compactHistory() packs a changeset with oldLen=2 but resets the pad's base atext to "\n" (length 1) before applying it, causing appendRevision() to throw on the mismatched apply assertion. This makes compactHistory() fail at runtime for any pad with head > 0.

Issue description

Pad.compactHistory() creates baseChangeset with oldLength = 2, but resets this.atext to makeAText('\n') (length 1) before calling appendRevision(baseChangeset, ...). appendRevision() applies the changeset to this.atext, and Changeset.applyToText() asserts str.length === oldLen, so this will throw.

Issue Context

The code comment indicates the intent is to apply the changeset on top of a freshly-initialized pad that has text "\n\n" (length 2).

Fix Focus Areas

  • src/node/db/Pad.ts[581-613]

Fix approach

Update the reset state so the base text length matches the packed changeset, for example:

  • Reset this.atext to makeAText('\n\n') (and keep oldLength = 2), or
  • Change the packed oldLength (and corresponding changeset construction) to match the actual reset base text.

After the change, ensure compactHistory() can successfully call appendRevision() without triggering the mismatched-apply assertion.

[maintainability] padCreate hook emitted
padCreate hook emitted compactHistory() sets head to -1 then calls appendRevision(), which increments head to 0 and triggers the padCreate hook, even though the pad already existed. Plugins listening for padCreate may perform incorrect initialization or logging for an existing pad being compacted.

Issue description

Compaction is not a pad creation event, but current logic resets head=-1 and then calls appendRevision(), which will trigger the padCreate hook when head becomes 0.

Issue Context

appendRevision() chooses hook = this.head === 0 ? 'padCreate' : 'padUpdate'. With head=-1 before the call, compaction will always emit padCreate.

Fix Focus Areas

  • src/node/db/Pad.ts[606-613]
  • src/node/db/Pad.ts[158-165]

Fix approach

Refactor compaction to avoid emitting padCreate for an existing pad. Options:

  • Write revision 0 directly (compute the new atext, set head=0, atext=..., then db.set(`pad:${id}:revs:0`, ...) and saveToDatabase()), and emit a more appropriate hook (padUpdate or a new padCompact).
  • Or enhance appendRevision() (or add a new helper) to allow specifying which hook to emit for special operations like compaction.

Keep plugin semantics consistent: compacting an existing pad should not look like a new pad creation.


                     PR 7565 (2026-04-19)                    
[correctness] `enforceReadableAuthorColors` default is true
`enforceReadableAuthorColors` default is true The new author-color clamping behavior is enabled by default via `enforceReadableAuthorColors: true`, which changes baseline rendering behavior. This violates the requirement that new features be feature-flagged and disabled by default.

Issue description

The new viewer-side author background clamping feature is enabled by default (enforceReadableAuthorColors: true), but compliance requires new features to be disabled by default.

Issue Context

The PR correctly introduces a flag (enforceReadableAuthorColors) and wiring, but the default value is set to true in server defaults and the distributed config templates.

Fix Focus Areas

  • src/node/utils/Settings.ts[410-415]
  • settings.json.template[261-273]
  • settings.json.docker[280-293]
  • doc/docker.md[99-113]

[correctness] Clamp doesn't ensure AA
Clamp doesn't ensure AA After clamping the background with ensureReadableBackground(), setAuthorStyle() still chooses the foreground via textColorFromBackgroundColor()’s 0.5 luminosity heuristic, so it can render white (or #222) text on a background that was only validated against pure black. This breaks the PR’s guarantee that rendered author highlights meet WCAG AA contrast.

Issue description

setAuthorStyle() clamps the background using WCAG contrast math but then selects the text color using a luminosity heuristic that can pick a foreground that does not meet the WCAG threshold with the (possibly clamped) background.

Issue Context

  • ensureReadableBackground() currently validates against pure black ([0,0,0]), but the actual dark foreground used by textColorFromBackgroundColor() is #222 (or a CSS var for the colibris skin).
  • To actually guarantee AA on render, the chosen foreground and the clamped background must be computed together using the same colors.

Fix Focus Areas

  • src/static/js/ace2_inner.ts[242-263]
  • src/static/js/colorutils.ts[115-120]
  • src/static/js/colorutils.ts[139-173]

Suggested approach

  • Update the author-style rendering to:
  1. Determine the actual candidate foreground colors (e.g., #fff and #222 for classic skins; consider handling colibris separately).
  2. For hex backgrounds, choose the foreground by highest WCAG contrastRatio (not luminosity).
  3. If neither candidate meets minContrast, clamp the background toward white until it meets minContrast against the chosen dark foreground (likely #222), then set the foreground explicitly.
  • Alternatively, make a helper that returns {bg, fg} together (single source of truth), and have setAuthorStyle() use it.


                     PR 7564 (2026-04-19)                    
[maintainability] `ace_doDuplicateSelectedLines` undocumented
`ace_doDuplicateSelectedLines` undocumented New `editorInfo` API methods and user-facing shortcuts were added without updating documentation under `doc/`, making the public surface area unclear for plugin authors and operators. This violates the requirement to document user-facing/API/config changes in the same PR.

Issue description

New editorInfo APIs (ace_doDuplicateSelectedLines, ace_doDeleteSelectedLines) and new shortcut settings keys were added without updating documentation under doc/.

Issue Context

Plugin authors rely on doc/api/editorInfo.md to discover supported editorInfo.* methods.

Fix Focus Areas

  • src/static/js/ace2_inner.ts[2481-2535]
  • src/node/utils/Settings.ts[224-228]
  • doc/api/editorInfo.md[1-208]

[correctness] Duplicate drops attributes
Duplicate drops attributes doDuplicateSelectedLines inserts raw line text via performDocumentReplaceRange(), which assigns only the current author attribute to the inserted text; this loses formatting/list/line attributes and can surface Etherpad’s internal line-marker '*' as literal text.

Issue description

doDuplicateSelectedLines() duplicates only plain text and reinserts it with only the author attribute, which drops formatting/list/line attributes and can expose Etherpad’s internal * line marker as literal text.

Issue Context

Etherpad stores rich text formatting and line attributes in attribution data (rep.alines/apool). Copying only rep.lines.atIndex(i).text and reinserting via performDocumentReplaceRange() does not preserve those attributes.

Fix Focus Areas

  • src/static/js/ace2_inner.ts[2481-2536]
  • src/static/js/ace2_inner.ts[172-189]

Suggested fix approach

Build a changeset that inserts an attributed slice of the existing document:

  1. Compute the character-range covering whole lines [start..end] (including the terminating \n for each copied line).
  2. Construct an AText for that slice using the corresponding text (from rep.alltext) and attribution (from rep.alines.slice(start, end+1).join('')).
  3. Apply an insertion changeset using SmartOpAssembler + opsFromAText() (similar to setDocAText()), so the inserted ops carry the original attributes.
  4. Insert at the start of line end+1 (or the document end) while maintaining the final-newline invariant.

[correctness] Whole-pad delete broken
Whole-pad delete broken The whole-pad-selected branch in doDeleteSelectedLines() deletes only part of line 0 (and uses a length derived from the last selected line), so multi-line pads are not cleared and the delete range can become invalid if the last line is longer than the first.

Issue description

doDeleteSelectedLines()'s whole-pad-selected case deletes from [0,0] to [0,lastLen] (line 0 only) even when the selection spans multiple lines, so it does not clear the pad and can generate an invalid range.

Issue Context

The code intends to blank the entire pad but keep one empty line (final newline invariant).

Fix Focus Areas

  • src/static/js/ace2_inner.ts[2513-2531]

Suggested fix approach

When start === 0 and the selection reaches the end of the document, delete from the start of the pad through the end of the last selected line:

  • Compute lastLen = rep.lines.atIndex(end).text.length.
  • Call performDocumentReplaceRange([0, 0], [end, lastLen], '').

This removes all content before the document’s final newline, leaving one empty line behind.

[reliability] Duplicate EOF insertion risk
Duplicate EOF insertion risk Duplicating the last line(s) inserts at position [rep.lines.length(), 0] (after the final newline), bypassing existing end-of-document safeguards used elsewhere to avoid inserting after the final newline.

Issue description

If the selected block includes the last line, doDuplicateSelectedLines() inserts at [end + 1, 0] which becomes [rep.lines.length(), 0] (after the document’s final newline), a scenario that other editor code treats as needing special handling.

Issue Context

This file already contains explicit end-of-document splice rewrites (both in performDocumentReplaceCharRange() and in internal splice logic) to avoid inserting after the final newline.

Fix Focus Areas

  • src/static/js/ace2_inner.ts[2499-2511]
  • src/static/js/ace2_inner.ts[1497-1523]
  • src/static/js/ace2_inner.ts[1723-1731]

Suggested fix approach

When duplicating at the end of the document, route the insertion through an end-safe code path (for example, compute a char offset and use performDocumentReplaceCharRange() for insertion so it can apply its end-of-doc rewrite), or explicitly rewrite the insertion point to be before the final newline (mirroring existing logic). If you implement attribute-preserving duplication via an AText/changeset insertion, incorporate the same end-of-doc invariant handling there.



                     PR 7563 (2026-04-19)                    
[correctness] Override test never reloads
Override test never reloads The new test sets ETHERPAD_VERSION_STRING but only calls exportedForTestingOnly.parseSettings(), which does not recompute settings.randomVersionString; randomVersionString is computed in reloadSettings(). Because the module is already loaded earlier in this test file via require(), the later require() returns the cached instance and the assertion can fail.

Issue description

The test “honours ETHERPAD_VERSION_STRING as an explicit override” sets process.env.ETHERPAD_VERSION_STRING but never calls reloadSettings(), so settings.randomVersionString is not recomputed. The test currently calls exportedForTestingOnly.parseSettings(), which only parses JSON and does not mutate the module-scope singleton.

Issue Context

  • randomVersionString is derived inside reloadSettings().
  • The Settings module has already been loaded earlier in the file via require(), so requiring it again will not re-run module initialization.

Fix Focus Areas

  • src/tests/backend/specs/settings.ts[165-183]

Suggested change

Update the override test to:

  1. Save the original env var.
  2. Set process.env.ETHERPAD_VERSION_STRING.
  3. Call require('../../../node/utils/Settings').reloadSettings().
  4. Assert randomVersionString.
  5. Restore the env var and call reloadSettings() again to avoid leaking state to other tests.

(Optionally also ensure the default-hash test unsets ETHERPAD_VERSION_STRING and calls reloadSettings() before asserting.)


                     PR 7562 (2026-04-19)                    
[reliability] Test mutates editor DOM
Test mutates editor DOM The new Playwright test constructs a long pad by overwriting `#innerdocbody.innerHTML`, bypassing the normal editor input path used elsewhere in the suite. This can desynchronize Etherpad’s internal rep/undo state from the DOM and makes the regression test more likely to be flaky or to test an unrealistic state.

Issue description

The regression test populates the document by directly setting #innerdocbody.innerHTML. This bypasses editor input handling and can lead to inconsistent undo/redo behavior or flaky scroll assertions.

Issue Context

Other frontend-new specs that need a long pad use writeToPad() and keyboard Enter presses to generate many lines, which keeps Etherpad’s internal state consistent.

Fix Focus Areas

  • src/tests/frontend-new/specs/undo_redo_scroll.spec.ts[26-43]
  • src/tests/frontend-new/specs/undo_redo_scroll.spec.ts[81-97]
  • src/tests/frontend-new/specs/page_up_down.spec.ts[16-20]

Suggested change

Replace the innerFrame.evaluate(() => { body.innerHTML = ... }) blocks with a loop similar to:

  • for (let i = 0; i < 120; i++) { await writeToPad(page, `line ${i + 1}`); await page.keyboard.press('Enter'); } (or a single multi-line writeToPad call if you want it faster), then proceed with the undo/scroll assertions.


                     PR 7559 (2026-04-19)                    
[maintainability] Shell scripts use 4-space indent
Shell scripts use 4-space indent The newly added packaging shell scripts use 4-space indentation (e.g., within `case` branches), violating the 2-space indentation standard. This can reduce consistency and trigger formatting/lint issues where the repo enforces 2-space indentation.

Issue description

New packaging shell scripts use 4-space indentation inside blocks, violating the repo requirement for 2-space indentation.

Issue Context

This affects newly added Debian packaging maintainer scripts and should be corrected to match the enforced formatting standard.

Fix Focus Areas

  • packaging/scripts/preinstall.sh[6-18]
  • packaging/scripts/postinstall.sh[13-50]

[correctness] Smoke test false-positive
Smoke test false-positive The workflow’s /health polling loop never fails the job if the endpoint never becomes healthy, so CI can report success for a broken .deb/service. This can lead to attaching and releasing non-working packages.

Issue description

The smoke test loop does not fail if /health never returns 200, so the workflow can pass even when Etherpad never becomes ready.

Issue Context

Because curl ... && break || sleep 2 is inside a for loop, a permanently failing curl results in the loop ending with the exit code of the final sleep (0). With set -e, this still won’t fail.

Fix Focus Areas

  • .github/workflows/deb-package.yml[107-126]

Suggested change

Track success and explicitly exit 1 if the endpoint never becomes healthy, e.g.:
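A minimal sketch of such a loop; wait_healthy is an illustrative helper name, and RETRIES/DELAY are knobs so the loop can be exercised quickly:

```shell
# Poll a health check and fail loudly instead of falling through.
wait_healthy() {
  retries="${RETRIES:-30}"
  delay="${DELAY:-2}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    # "$@" is the check command, e.g. curl -fsS http://127.0.0.1:9001/health
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "service never became healthy after $retries attempts" >&2
  return 1
}

# In the workflow step:
#   wait_healthy curl -fsS http://127.0.0.1:9001/health
```

Because the helper returns non-zero on exhaustion, a step running under set -e (or the workflow's default shell options) fails the job instead of silently passing.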


[security] Secrets readable in /etc/default
Secrets readable in /etc/default The package installs /etc/default/etherpad-lite as world-readable (0644) even though it is the documented place to put environment overrides for settings.json, which can include passwords and secrets. Any local user on the system can read those secrets.

Issue description

/etc/default/etherpad-lite is installed with mode 0644, making any secrets placed there readable by all local users.

Issue Context

The systemd unit uses EnvironmentFile=-/etc/default/etherpad-lite, and Etherpad supports ${ENV_VAR} substitution for config values, including passwords.

Fix Focus Areas

  • packaging/nfpm.yaml[62-68]
  • packaging/systemd/etherpad-lite.service[7-14]

Suggested change

Tighten permissions and ownership so only root and the etherpad service user can read it:
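For example, in the nfpm contents entry for the file (the src path and group name are assumptions to adapt to the existing manifest; file_info is nfpm's mechanism for setting mode and ownership):

```yaml
# packaging/nfpm.yaml — tighten /etc/default/etherpad-lite
- src: packaging/etc/default/etherpad-lite
  dst: /etc/default/etherpad-lite
  type: config|noreplace
  file_info:
    mode: 0640
    owner: root
    group: etherpad
```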



                     PR 7558 (2026-04-19)                    
[reliability] Tag filter never matches
Tag filter never matches In `.github/workflows/snap-publish.yml` the tag filter uses a regex-like pattern (`v?[0-9]+.[0-9]+.[0-9]+`) but GitHub Actions uses glob matching, so `+` is treated literally and typical release tags like `v2.6.1` will not match. As a result the Snap publish workflow will not trigger on release tags and nothing will be built/published automatically.

Issue description

The workflow tag trigger pattern is written like a regex, but GitHub Actions uses glob matching for on.push.tags. The current pattern will not match normal semver tags like v2.6.1, so the workflow will not run on releases.

Issue Context

The workflow intends to run on tags matching v?X.Y.Z (optional leading v).

Fix Focus Areas

  • .github/workflows/snap-publish.yml[17-21]

Suggested change

Replace the single regex-like entry with glob patterns, for example:

  • v[0-9]*.[0-9]*.[0-9]*
  • [0-9]*.[0-9]*.[0-9]* (or whichever variant matches your actual tagging scheme).
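In glob form the trigger could look like the following (keep only the variant that matches the project's actual tagging scheme):

```yaml
on:
  push:
    tags:
      - 'v[0-9]*.[0-9]*.[0-9]*'   # e.g. v2.6.1
      - '[0-9]*.[0-9]*.[0-9]*'    # e.g. 2.6.1
```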

[security] CLI path traversal exec
CLI path traversal exec `snap/local/bin/etherpad-cli` builds `SCRIPT_PATH` from unvalidated user input, so a caller can pass a value containing `/` or `..` to escape the intended `${APP_DIR}/bin` directory and execute arbitrary `.ts`/`.sh` files shipped in the snap. Additionally, there is no default `case` branch, so if a file exists but is not `.ts` or `.sh` the command silently does nothing and exits successfully.

Issue description

The snap CLI wrapper allows path traversal via the <bin-script> argument and can execute unintended files. It also silently succeeds for unsupported extensions.

Issue Context

This command is meant to be a thin, safe passthrough to scripts under $SNAP/opt/etherpad-lite/bin.

Fix Focus Areas

  • snap/local/bin/etherpad-cli[17-24]

Suggested change

  • Reject any SCRIPT_NAME that contains / or .. (or normalize to basename and compare).
  • Optionally enforce an allowlist derived from $APP_DIR/bin.
  • Add a default *) case that prints an error like unsupported script type and exits non-zero.
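A testable sketch of the rejection logic — validate_script_name is a hypothetical helper; the real wrapper would call it on the argument before building SCRIPT_PATH:

```shell
# Reject anything that could escape ${APP_DIR}/bin: empty names,
# names containing a slash, and names containing "..".
validate_script_name() {
  case "$1" in
    ''|*/*|*..*)
      echo "invalid script name: ${1:-<empty>}" >&2
      return 1
      ;;
  esac
  return 0
}

# Default case for the extension dispatch, so unsupported types fail non-zero:
#   *) echo "unsupported script type: $SCRIPT_NAME" >&2; exit 1 ;;
```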

[maintainability] 4-space indent in scripts
4-space indent in scripts New Bash scripts use 4-space indentation inside control blocks, violating the repository rule requiring exactly 2-space indentation. This reduces consistency and can cause style-check failures if enforced.

Issue description

The newly added Bash scripts use 4-space indentation inside if/case blocks, but the project compliance rule requires 2-space indentation (and no tabs).

Issue Context

This PR adds several wrapper scripts under snap/. To stay compliant and consistent with repo formatting expectations, indentation should be normalized to 2 spaces.

Fix Focus Areas

  • snap/hooks/configure[9-13]
  • snap/local/bin/etherpad-cli[10-14]
  • snap/local/bin/etherpad-healthcheck-wrapper[8-11]
  • snap/local/bin/etherpad-service[23-30]

[correctness] Settings file ignored
Settings file ignored The snap seeds $SNAP_COMMON/etc/settings.json and exports EP_SETTINGS, but Etherpad’s settings loader does not read EP_SETTINGS, so it will keep looking for /settings.json and fall back to defaults. Defaults include a DB file under /var/rusty.db, which is inside the read-only snap mount, so the daemon will fail to persist data and may fail to start.

Issue description

The snap wrapper exports EP_SETTINGS and seeds $SNAP_COMMON/etc/settings.json, but Etherpad ignores EP_SETTINGS and uses argv.settings (from --settings/-s) or defaults to <install-root>/settings.json. This prevents the snap from using the seeded writable settings and can force DB paths into the read-only snap mount.

Issue Context

The snap already sets EP_SETTINGS (wrapper + snapcraft.yaml). The minimal, snap-friendly fix is to make Etherpad honor process.env.EP_SETTINGS (and optionally process.env.EP_CREDENTIALS) as a fallback when CLI flags are not provided.

Fix Focus Areas

  • src/node/utils/Settings.ts[301-306]
  • src/node/utils/Cli.ts[25-44]
  • snap/local/bin/etherpad-service[39-43]

Suggested approach

  • Update Settings filename resolution to prefer argv.settings, else process.env.EP_SETTINGS, else 'settings.json'.
  • Do the same for credentials (argv.credentials -> process.env.EP_CREDENTIALS -> 'credentials.json').
  • Keep CLI flag precedence so existing behavior is unchanged.
  • (Optional) Add a small unit/integration check or comment documenting the env var support.

[correctness] snap set overrides no-op
snap set overrides no-op The service wrapper exports PORT/IP from snapctl, but the seeded settings.json uses literal ip/port values, so the configured settings file will override the env-based defaults and ignore `snap set port=` / `snap set ip=`. Users following the snap README will not see the listener move after restart.

Issue description

snap set etherpad-lite port=... / ip=... is implemented by exporting PORT/IP, but the seeded settings file hard-codes "ip": "0.0.0.0" and "port": 9001, which overrides env defaults. As a result, snap config changes do nothing.

Issue Context

Etherpad supports env-var substitution inside settings.json via strings like "${PORT:9001}", but the current template copy does not use that syntax for ip/port.

Fix Focus Areas

  • snap/local/bin/etherpad-service[22-37]
  • settings.json.template[151-162]

Suggested approach

  • During first-run bootstrap (right after copying the template), rewrite the ip and port entries to use Etherpad’s substitution syntax:
  • "ip": "${IP:0.0.0.0}"
  • "port": "${PORT:9001}" (must be quoted per template rules)
  • Only apply the rewrite if the file still contains the template’s default literal values, to avoid overwriting user customizations.
  • Keep the existing dirty-db path rewrite.
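The conditional rewrite can be sketched as follows (the real bootstrap is a shell wrapper in snap/local/bin/etherpad-service; this expresses the same logic in TypeScript, and `rewriteTemplateDefaults` is an illustrative name):

```typescript
// Rewrite ip/port to Etherpad's "${VAR:default}" substitution syntax, but
// only where the file still carries the template's literal defaults -- a
// user-customized value does not match and is left untouched.
function rewriteTemplateDefaults(settingsText: string): string {
  return settingsText
    .replace('"ip": "0.0.0.0"', '"ip": "${IP:0.0.0.0}"')
    // Per the template rules, a substituted port must be a quoted string.
    .replace('"port": 9001', '"port": "${PORT:9001}"');
}
```

Running this only on first-run bootstrap (right after copying the template) keeps later `snap set` changes effective without ever clobbering user edits.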


                     PR 7554 (2026-04-19)                    
[maintainability] `https://` URL in comment
`https://` URL in comment A protocol-specific `https://` URL was introduced in a code comment, which conflicts with the project requirement to use protocol-independent URLs where appropriate. This can create inconsistent documentation style and mixed-content concerns if copied into contexts expecting protocol-relative links.

Issue description

A new comment includes a hardcoded https:// URL, but the project compliance rule requires protocol-independent URLs where appropriate.

Issue Context

The added regression test comment references issue #7138 with an https://github.com/... link.

Fix Focus Areas

  • src/tests/backend/specs/settings.ts[151-151]

[reliability] Unused browser context
Unused browser context The new Playwright spec creates a new BrowserContext and clears its cookies, but then navigates with the existing `page` fixture, so cookie clearing has no effect and the new context is never closed. This can leak contexts across tests and degrade test reliability/performance.

Issue description

The test creates a new BrowserContext and clears cookies on it, but uses the existing page fixture for navigation, so cookie clearing is ineffective and the new context is never closed.

Issue Context

The intent seems to be to start each test with a clean cookie jar. In Playwright, the page fixture already has a context (page.context()), so cookie operations should target that context unless you also create a new page from the new context.

Fix Focus Areas

  • src/tests/frontend-new/specs/inactive_color_fade.spec.ts[4-8]

Suggested change

Replace the browser.newContext() usage with one of:

  1. Simplest: await page.context().clearCookies();
  2. If you truly need a new context, create a page from it (`const page = await context.newPage()`), use that page for navigation and assertions, and ensure `await context.close()` runs in afterEach.
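Both options can be sketched as a spec fragment (this assumes the repo's `@playwright/test` setup; `about:blank` stands in for the pad navigation helper, which is not shown here):

```typescript
import {test} from '@playwright/test';

// Option 1 (simplest): clear cookies on the context the `page` fixture
// already belongs to, instead of creating a throwaway context.
test.beforeEach(async ({page}) => {
  await page.context().clearCookies();
});

// Option 2: if real isolation is wanted, the test must use a page created
// from the new context, and must close that context afterwards.
test('starts with a clean cookie jar', async ({browser}) => {
  const context = await browser.newContext();
  const page = await context.newPage();
  try {
    await page.goto('about:blank'); // stand-in for the pad navigation
    // ... assertions against `page` ...
  } finally {
    await context.close(); // prevents context leaks across tests
  }
});
```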

[maintainability] Docker docs missing env var
Docker docs missing env var `settings.json.docker` adds the new env var `PAD_OPTIONS_FADE_INACTIVE_AUTHOR_COLORS`, but the Docker documentation’s Pad Options tables do not list it. Users relying on docs will not discover how to configure the new toggle via environment variables.

Issue description

settings.json.docker supports PAD_OPTIONS_FADE_INACTIVE_AUTHOR_COLORS, but the Docker docs do not list this env var under Pad Options.

Issue Context

This PR explicitly advertises Docker env var configurability; the docs should reflect the new supported variable to avoid confusion.

Fix Focus Areas

  • doc/docker.md[97-112]
  • doc/docker.adoc[179-233]

Suggested change

Add a row for PAD_OPTIONS_FADE_INACTIVE_AUTHOR_COLORS (default true) in both Pad Options tables, with a short description (e.g., "When false, do not fade inactive author colors toward white").



                     PR 7553 (2026-04-19)                    
[correctness] `showMenuRight` ignores readonly mode
`showMenuRight` ignores readonly mode The new logic hides `#editbar .menu_right` only when `showMenuRight=false` is explicitly provided, rather than automatically when `window.clientVars.readonly` is true (including iframe embeds). As a result, read-only pads can still display right-side controls unless callers remember to pass the URL param, failing the read-only-hides-controls requirement.

Issue description

menu_right is only hidden when showMenuRight=false is provided, but compliance requires it to be hidden whenever the pad is read-only (including embeds).

Issue Context

There is already a read-only indicator available via window.clientVars.readonly.

Fix Focus Areas

  • src/static/js/pad.ts[79-85]

[reliability] Unused Playwright context
Unused Playwright context The new spec creates a new BrowserContext and clears its cookies, but the tests continue using the existing fixture `page` from a different context, making the cookie clearing ineffective. The created context is also never closed, adding unnecessary resource usage in the test suite.

Issue description

The test creates a new BrowserContext, clears its cookies, but then continues using the existing page fixture (which belongs to a different context). This makes the cookie clearing ineffective and leaves an extra context unclosed.

Issue Context

goToNewPad(page) navigates the passed-in page via page.goto(...), so any cookie cleanup must be applied to page.context() (or the test must create/use a page from the new context and close it).

Fix Focus Areas

  • src/tests/frontend-new/specs/hide_menu_right.spec.ts[4-8]
  • src/tests/frontend-new/helper/padHelper.ts[117-123]

Suggested change

Prefer one of:

  1. Simplest: replace the browser.newContext() usage with await page.context().clearCookies().
  2. If isolation via new context is desired: create a page from that context (const page2 = await context.newPage()), use that page for navigation/assertions, and close the context in afterEach.


                     PR 7552 (2026-04-19)                    
[correctness] Installer clones wrong repo
Installer clones wrong repo README.md now instructs users to run installers from the `ether/etherpad` repo, but both installer scripts still default to cloning `https://github.com/ether/etherpad-lite.git`, so the one-liner can install a different repo than the documentation implies.

Issue description

README.md now points users to run installers from https://raw.githubusercontent.com/ether/etherpad/..., but the installer scripts still default to cloning https://github.com/ether/etherpad-lite.git. This can cause the one-liner install path to clone an unexpected repository.

Issue Context

  • README is the primary entry point for users.
  • bin/installer.sh and bin/installer.ps1 embed their own default clone URL.

Fix Focus Areas

  • bin/installer.sh[5-12]
  • bin/installer.sh[33-38]
  • bin/installer.ps1[1-11]
  • bin/installer.ps1[37-41]
  • README.md[71-88]

Suggested fix

  • Change the default repo URL in both installer scripts to https://github.com/ether/etherpad.git.
  • Update the usage examples/comments in both scripts to match the new raw GitHub URLs.
  • Ensure README and installer scripts agree on the repo being installed (directory name can remain etherpad-lite if intentional).

[observability] Broken Docker badge link
Broken Docker badge link README.md links the Docker workflow badge to `actions/workflows/dockerfile.yml`, but this repo’s Docker workflow file is `docker.yml`, so the badge/link will 404.

Issue description

The README Docker CI badge points to actions/workflows/dockerfile.yml, but the repository workflow file is docker.yml. This breaks the badge and link.

Issue Context

The repo contains .github/workflows/docker.yml.

Fix Focus Areas

  • README.md[35-39]
  • .github/workflows/docker.yml[1-5]

Suggested fix

Update the Docker badge/link in README to reference actions/workflows/docker.yml (both the badge.svg URL and the clickable link).


[maintainability] Conflicting GitHub URLs
Conflicting GitHub URLs Startup logging now directs users to `https://github.com/ether/etherpad/issues`, but the default pad text still advertises `https://github.com/ether/etherpad-lite`, giving users conflicting guidance on where the project lives.

Issue description

User-facing text references two different GitHub repositories (etherpad vs etherpad-lite). This is confusing for users looking for source/issues.

Issue Context

  • Startup console message points to https://github.com/ether/etherpad/issues.
  • Default pad contents still include https://github.com/ether/etherpad-lite.

Fix Focus Areas

  • src/node/hooks/express.ts[67-72]
  • src/node/utils/Settings.ts[390-397]

Suggested fix

Update defaultPadText to point at the same GitHub repo as the startup message (and consider doing the same for other prominent user-facing URLs if they exist).



                     PR 7550 (2026-04-19)                    
[reliability] `anonymizeAuthor` lacks feature flag
`anonymizeAuthor` lacks feature flag The new `anonymizeAuthor` REST/API surface is registered unconditionally and becomes available by default, without any enable/disable mechanism. This violates the requirement that new features be gated behind a feature flag and disabled by default.

Issue description

A new feature (anonymizeAuthor API/REST endpoint) is enabled by default and has no feature-flag gating.

Issue Context

Compliance requires new features to be behind a feature flag and disabled by default.

Fix Focus Areas

  • src/node/handler/APIHandler.ts[146-152]
  • src/node/db/API.ts[65-77]
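
The gating can be sketched as follows (the setting name `anonymizeAuthorEnabled` and the `register` callback are illustrative, not existing Etherpad APIs; the real change would wrap the endpoint registration in APIHandler):

```typescript
// Only register the endpoint when the operator opts in; absent or false
// means the feature stays off, satisfying "disabled by default".
type FeatureSettings = {anonymizeAuthorEnabled?: boolean};

function registerAnonymizeAuthor(
  settings: FeatureSettings,
  register: (endpointName: string) => void,
): boolean {
  if (settings.anonymizeAuthorEnabled !== true) return false; // default: off
  register('anonymizeAuthor');
  return true;
}
```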

[reliability] Non-resumable partial erasure
Non-resumable partial erasure `AuthorManager.anonymizeAuthor()` persists `erased: true` before the chat-scrub loop, so any error during chat scrubbing can leave chat messages unchanged while subsequent calls short-circuit on `existing.erased` and never finish the scrub. This contradicts the documented behavior that chat message `authorId` is nulled, and makes failures non-recoverable without manual DB intervention.

Issue description

anonymizeAuthor() marks the author record as erased: true before finishing the chat scrub. If any error occurs during the chat loop, retries will short-circuit on existing.erased and never finish nulling authorId on chat messages.

Issue Context

  • Current behavior uses existing.erased as the idempotency guard.
  • Docs state chat message authorId is nulled.
  • The implementation should either (a) only mark erased: true once all steps have completed, or (b) track per-step completion so retries can resume unfinished work.

Fix Focus Areas

  • src/node/db/AuthorManager.ts[336-395]

Suggested implementation direction

  • Introduce an intermediate state (e.g., erasureInProgress: true) and set it before starting work.
  • Perform token/mapper cleanup + chat scrub.
  • Only after successful completion, update the author record to {erased: true, erasureInProgress: false}.
  • Alternatively: keep erased: true but add a separate flag (e.g., chatScrubbed: true) and only short-circuit when both are complete; otherwise resume the missing steps.
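
The first direction (intermediate state) can be sketched like this (field names follow the suggestion above; the real AuthorManager record shape and persistence calls may differ):

```typescript
interface AuthorRecord {
  erased?: boolean;
  erasureInProgress?: boolean;
}

// Two-phase guard: only a *fully completed* erasure short-circuits.
// A lingering erasureInProgress marker means a previous attempt failed,
// so a retry resumes the chat scrub instead of silently skipping it.
async function anonymizeAuthorSketch(
  author: AuthorRecord,
  scrubChat: () => Promise<void>,
  save: (a: AuthorRecord) => Promise<void>,
): Promise<void> {
  if (author.erased && !author.erasureInProgress) return; // already done
  author.erasureInProgress = true;
  await save(author);
  await scrubChat(); // if this throws, `erased` is never set
  author.erased = true;
  author.erasureInProgress = false;
  await save(author);
}
```

With this shape, a crash mid-scrub leaves `{erasureInProgress: true}` in the DB, and the next call finishes the work rather than short-circuiting on `erased`.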


                     PR 7549 (2026-04-19)                    
[security] Unsafe learnMoreUrl href
Unsafe learnMoreUrl href showPrivacyBannerIfEnabled() assigns settings-provided learnMoreUrl directly to an anchor href, so a javascript:/data: URL would execute script when clicked. This creates a user-triggered XSS vector via configuration.

Issue description

src/static/js/privacy_banner.ts sets a.href from config.learnMoreUrl without validating the URL scheme. This allows javascript: (and similar) URLs to execute code if a user clicks the “Learn more” link.

Issue Context

learnMoreUrl is operator-controlled via settings.json and is sent to browsers via clientVars, so the client must defensively validate it.

Fix Focus Areas

  • src/static/js/privacy_banner.ts[43-54]

Suggested fix

  • Parse config.learnMoreUrl with new URL() (using location.href as base for relative URLs).
  • Allowlist safe protocols (e.g., http:, https:; optionally mailto: depending on product policy).
  • If invalid/unsafe, do not render the link (and optionally log a console warning in development).
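
A minimal sketch of that check (`isSafeLearnMoreUrl` is a hypothetical helper name; in the browser the base would be `location.href`):

```typescript
// Parse the operator-supplied URL and allowlist safe schemes. Anything
// unparseable or outside the allowlist means the link is not rendered.
function isSafeLearnMoreUrl(raw: string, base: string): boolean {
  let url: URL;
  try {
    url = new URL(raw, base); // base resolves relative URLs
  } catch {
    return false;
  }
  return url.protocol === 'http:' || url.protocol === 'https:';
}
```

Note that `new URL('javascript:alert(1)')` parses successfully, so the protocol allowlist (not the parse step) is what actually blocks script URLs.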


                     PR 7548 (2026-04-19)                    
[security] TokenTransfer returns plaintext token
TokenTransfer returns plaintext token `GET /tokenTransfer/:token` responds with `tokenData` (including the plaintext author token) and does not invalidate the transfer record, so any script running in-origin can mint an id and exfiltrate a long-lived author token despite HttpOnly cookies. The id is also reusable indefinitely because the DB entry is never deleted.

Issue description

The token transfer flow still exposes the plaintext author token in an HTTP response body and leaves transfer IDs reusable.

Issue Context

HttpOnly prevents reading document.cookie, but it does not help if an endpoint returns the token via JSON.

Fix Focus Areas

  • src/node/hooks/express/tokenTransfer.ts[40-67]

Suggested fix

  • After a successful GET, delete the DB entry (tokenTransferKey:${id}) so the id is one-time.
  • Do not include token in the JSON response body; instead, set cookies and return a minimal {ok:true} (or redirect to a safe page).
  • Add a short TTL/expiry check using createdAt and reject/cleanup old transfer records.
  • Validate tokenData.token with padutils.isValidAuthorToken() before re-issuing it as a cookie.
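
The expiry check can be sketched as a pure function (the 10-minute TTL is an assumption for illustration, not a value from the PR):

```typescript
// Assumed TTL; the real value is a product decision.
const TRANSFER_TTL_MS = 10 * 60 * 1000;

// A transfer record older than the TTL is rejected (and should be
// cleaned up by the caller).
function isTransferExpired(createdAt: number, now: number = Date.now()): boolean {
  return now - createdAt > TRANSFER_TTL_MS;
}
```

Combined with deleting the `tokenTransferKey:${id}` entry after the first successful GET, this bounds the window in which a leaked id is useful.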

[security] Timeslider rewrites token cookie
Timeslider rewrites token cookie The timeslider client still reads and (re)sets the author token cookie from JavaScript and sends it over socket messages, which can overwrite the new server-minted HttpOnly token cookie and keep the legacy message.token path active. This defeats the primary security goal (token not JS-readable/writable) for any user who opens the timeslider.

Issue description

The timeslider frontend still generates/sets the author token cookie in JavaScript and sends token in socket messages, which undermines the new HttpOnly cookie model.

Issue Context

The server now sets the author token as an HttpOnly cookie on /p/:pad/timeslider, but the timeslider client still treats the token as JS-managed state.

Fix Focus Areas

  • src/static/js/timeslider.ts[52-58]
  • src/static/js/timeslider.ts[100-109]
  • src/static/js/pad.ts[190-212]

What to change

  • Remove the token = Cookies.get(...); if (token == null) Cookies.set(...); block.
  • Remove token from the socket message payload (similar to src/static/js/pad.ts removing token from CLIENT_READY).
  • Ensure the server path works purely via the socket.io handshake cookie (no message.token dependence for timeslider clients).

[security] tokenTransfer breaks HttpOnly model
tokenTransfer breaks HttpOnly model The welcome page’s tokenTransfer POST reads the author token from document.cookie (now invisible due to HttpOnly), causing invalid requests, and the tokenTransfer GET endpoint re-sets the token cookie without HttpOnly, undoing the hardening. This both breaks the feature and reintroduces a JS-readable author token cookie.

Issue description

The tokenTransfer flow is incompatible with an HttpOnly author token: it tries to read the token in the browser and it also writes a non-HttpOnly token cookie server-side.

Issue Context

The author token is now intended to be server-minted and HttpOnly. Any feature that depends on reading/writing the token in browser JS will break or regress security.

Fix Focus Areas

  • src/static/js/welcome.ts[24-33]
  • src/node/hooks/express/tokenTransfer.ts[16-20]
  • src/node/hooks/express/tokenTransfer.ts[42-45]
  • src/node/utils/ensureAuthorTokenCookie.ts[26-33]

What to change

  • Update POST /tokenTransfer to not require token from the request body. Instead, read the token from req.cookies (include prefixed + unprefixed fallback).
  • When setting ${p}token in GET /tokenTransfer/:token, set cookie options consistent with PR3 (httpOnly, sameSite, secure-on-https, path='/').
  • Update welcome.ts to stop attempting to read ${cp}token from document.cookie and stop sending it in the POST body.
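
The prefixed-then-unprefixed fallback read can be sketched as follows (the `__Host-` prefix is illustrative of the kind of cookie-name prefix involved; the real code would read from Express's `req.cookies`):

```typescript
// Read the author token server-side: prefer the prefixed cookie name,
// fall back to the unprefixed legacy name.
function readAuthorToken(
  cookies: Record<string, string | undefined>,
  prefix: string,
): string | undefined {
  return cookies[`${prefix}token`] ?? cookies['token'];
}
```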

