fixing track identification interface #12
Conversation
- Replace permissive Any configurations with specific allowlists
- Add configurable CORS origins via environment variables
- Restrict HTTP methods to GET, POST, PUT, DELETE, PATCH
- Limit headers to Authorization, Content-Type, Accept
- Add explicit credentials support with proper validation
- Default origins: http://localhost:5173,http://localhost:3000
- Default credentials: false (secure by default)
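The CORS changes above live in the Rust backend; as a hedged TypeScript sketch of the same allowlist idea (the defaults mirror the bullet list, but the function and variable names are illustrative, not the project's actual code):

```typescript
// Hedged sketch of the env-driven CORS allowlist described above. The real
// implementation is in the Rust backend; names here are illustrative only.
const DEFAULT_ORIGINS = "http://localhost:5173,http://localhost:3000";

// Parse a comma-separated CORS_ORIGINS value into a trimmed, de-duplicated list.
function parseAllowedOrigins(envValue: string | undefined): string[] {
  const raw = envValue && envValue.trim() !== "" ? envValue : DEFAULT_ORIGINS;
  return [...new Set(raw.split(",").map((o) => o.trim()).filter(Boolean))];
}

// Exact-match check against the allowlist: no permissive "Any" fallback.
function isOriginAllowed(origin: string, allowed: string[]): boolean {
  return allowed.includes(origin);
}
```

The key property is that an unknown origin is rejected rather than matched by a wildcard.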
- Remove useTrackWebSocket hook and all WebSocket-related code
- Update notification system to rely solely on upload notifications
- Remove wsMessages, wsUnreadCount, clearWsUnread from App.tsx
- Simplify Navbar props to use only upload notifications
- Delete unused useWebSocket.ts file

This completes the migration from WebSocket to database polling for upload progress notifications as requested in the PR.
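The polling side of that migration can be sketched as follows; the status names, the `fetchStatus` callback, and the timings are assumptions for illustration, not the project's actual API:

```typescript
// Sketch of the database-polling approach that replaced the WebSocket code.
// The status names, fetchStatus callback, and timings are assumptions for
// illustration, not the project's actual API.
type UploadStatus = "pending" | "fingerprinting" | "ready" | "error";

async function pollUntilDone(
  fetchStatus: () => Promise<UploadStatus>,
  intervalMs = 2000,
  maxAttempts = 30,
): Promise<UploadStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    // "ready" and "error" are terminal; anything else means poll again.
    if (status === "ready" || status === "error") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return "error"; // give up after maxAttempts
}
```

Compared with a socket, this trades a little latency for a much simpler failure model: a missed poll is just retried on the next interval.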
AudioRecorder Hook Improvements:

- Add atomic start recording guard to prevent concurrent starts
- Wrap recorder operations in try/catch blocks
- Add proper cleanup of media streams and timers
- Implement useEffect cleanup on unmount
- Add error handling for recorder operations
- Detect MIME type before creating MediaRecorder
- Use recorder's actual MIME type when creating Blob

Identify Page Improvements:

- Memoize handleIdentify callback with useCallback
- Replace hardcoded 'recording.wav' with dynamic filename based on MIME type
- Add getFilenameFromMimeType helper function
- Support all common audio formats (wav, webm, ogg, mp4, mpeg, aac, flac)
- Preserve original MIME type instead of forcing 'audio/wav'

These changes prevent memory leaks, improve performance, and provide better audio format support.
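A minimal sketch of what a getFilenameFromMimeType helper could look like, assuming the format list above; the mapping below is a guess at the idea, not the PR's actual code:

```typescript
// Minimal sketch of a getFilenameFromMimeType helper, assuming the format
// list above (wav, webm, ogg, mp4, mpeg, aac, flac); the PR's actual mapping
// may differ. Unknown types fall back to .wav, mirroring the old hardcoded name.
const MIME_TO_EXT: Record<string, string> = {
  "audio/wav": "wav",
  "audio/x-wav": "wav",
  "audio/webm": "webm",
  "audio/ogg": "ogg",
  "audio/mp4": "mp4",
  "audio/mpeg": "mp3",
  "audio/aac": "aac",
  "audio/flac": "flac",
};

function getFilenameFromMimeType(mimeType: string): string {
  const base = mimeType.split(";")[0].trim().toLowerCase(); // strip codec params
  return `recording.${MIME_TO_EXT[base] ?? "wav"}`;
}
```

Stripping the `;codecs=...` parameter matters because `MediaRecorder.mimeType` often reports values like `audio/webm;codecs=opus`.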
…asts

- Add TrackListResponse interface to types/index.ts
- Update API to import and use the shared type
- Replace inline casts in Library.tsx with proper TrackListResponse type
- Eliminate duplicate type definitions across the codebase
- Improve type safety and code maintainability

This change standardizes the paginated tracks response type throughout the application and removes the need for manual type casting.
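An illustrative shape for the shared TrackListResponse type; the field names are assumptions based on the paginated-tracks description, not the real types/index.ts:

```typescript
// Illustrative shape for the shared TrackListResponse type; field names are
// assumptions based on the paginated-tracks description, not the real
// types/index.ts.
interface Track {
  id: string;
  title: string;
  duration_secs: number;
}

interface TrackListResponse {
  tracks: Track[];
  total: number;
  page: number;
  per_page: number;
}

// With one shared type, API consumers cast in exactly one place:
function parseTrackList(json: string): TrackListResponse {
  return JSON.parse(json) as TrackListResponse;
}
```

Centralizing the cast in the API layer is what lets Library.tsx drop its inline casts.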
- Add asChild prop to DropdownMenuTrigger component
- Forward asChild prop to MenuPrimitive.Trigger
- Prevents nested button elements when using asChild pattern
- Maintains all existing functionality and data-slot attributes

This fix resolves the issue where consumers were accidentally nesting buttons inside the DropdownMenuTrigger component.
…notifications

TrackCard Enhancements:

- Add delete confirmation dialog (prevents accidental deletions)
- Fix 3-dot menu visibility (always visible, theme-aware)
- Replace bell notifications with Sonner toast notifications
- Fix duration field to use duration_secs from API
- Add proper clipboard copy with backend integration
- Show loading/success/error toasts for track link copying
- Use actual S3 signed URL instead of API endpoint

Navbar Updates:

- Update to work with new notification system
- Remove WebSocket-related props and logic

CSS Updates:

- Maintain existing styling while supporting new features

These changes improve user experience with proper confirmation flows and immediate feedback for actions.
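The loading/success/error toast flow for link copying can be sketched like this, using a tiny in-memory stand-in for the real toast library (sonner) so the pattern is self-contained; `getUrl` is a hypothetical stand-in for the presigned-URL API call:

```typescript
// Sketch of the loading -> success/error toast flow for track-link copying.
// A tiny in-memory stand-in replaces the real toast library (sonner) so the
// pattern is self-contained; getUrl stands in for the presigned-URL API call.
type ToastKind = "loading" | "success" | "error";
const toasts = new Map<number, ToastKind>();
let nextToastId = 0;

const toast = {
  loading(_message: string): number {
    const id = nextToastId++;
    toasts.set(id, "loading");
    return id;
  },
  // Passing the loading toast's id replaces it instead of stacking a new toast.
  success(_message: string, opts: { id: number }): void {
    toasts.set(opts.id, "success");
  },
  error(_message: string, opts: { id: number }): void {
    toasts.set(opts.id, "error");
  },
};

async function copyTrackLink(getUrl: () => Promise<string>): Promise<void> {
  const toastId = toast.loading("Generating track link...");
  try {
    const url = await getUrl();
    void url; // real code would write `url` to the clipboard here
    toast.success("Track link copied!", { id: toastId });
  } catch {
    // Same id: the spinner is replaced rather than left running.
    toast.error("Failed to get track link", { id: toastId });
  }
}
```

Reusing the loading toast's id in both branches is what keeps the spinner from lingering next to the final toast.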
- Update .gitignore for better build artifact handling
- Fix AudioRecorder component to prevent effect re-firing
- Update backend tracks API (minor adjustments)
- Update frontend build artifacts and dependencies
- Update index.html for production build

These are mostly build-related changes and minor component improvements.
⚠️ Warning: Rate limit exceeded

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 8 minutes and 53 seconds. CodeRabbit enforces hourly rate limits for each developer per organization; reviews are re-allowed after the wait time has elapsed.

ℹ️ Review info

⚙️ Run configuration: defaults | Review profile: CHILL | Plan: Pro | Run ID:

⛔ Files ignored due to path filters (1)

📒 Files selected for processing (10)
📝 Walkthrough

This PR adds Docker ignore files and a frontend Dockerfile/nginx config, introduces frontend CI and Docker workflows, refactors the backend identify API to accept raw uploads alongside multipart, tightens CORS and body-size handling, extends audio decoding with an extension hint, and implements frontend audio recording, UI, and notification system changes.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Actionable comments posted: 13
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
backend/src/api/tracks.rs (1)
349-364: ⚠️ Potential issue | 🟡 Minor

Good fix, but watch S3-before-DB-commit ordering.

Adding `tx.commit().await?;` is correct — without it the DELETE was implicitly rolled back when `tx` dropped, leaving orphaned rows after S3 deletion. One remaining concern: S3 deletion still happens before the DB commit. If `tx.commit()` fails (network blip, DB restart), the S3 object is already gone but the `tracks` row persists, and subsequent `get_track_url` calls will return a presigned URL to a non-existent object. Consider either:

- Committing the DB transaction first and then deleting from S3 (accepting a potential orphaned S3 object, which is usually the lesser evil and can be swept by a background job), or
- Using a soft-delete flag with async reconciliation.

backend/src/utils/file_validation.rs (1)
42-45: ⚠️ Potential issue | 🟡 Minor

Error message is out of sync with the new allowed list.

`webm` is now accepted but the user-facing error still enumerates only "MP3, WAV, FLAC, OGG, M4A AAC" (also missing a comma between M4A and AAC). Update the message so rejected uploads reflect the real supported set.

✂️ Suggested fix

```diff
 return Err(AppError::Validation(format!(
-    "Unsupported file type: {}. Please upload a supported audio file (MP3, WAV, FLAC, OGG, M4A AAC).",
+    "Unsupported file type: {}. Please upload a supported audio file (MP3, WAV, FLAC, OGG, M4A, AAC, WEBM).",
     ext
 )));
```

The doc-comment at the top of the function (listing "MP3, WAV, FLAC, OGG, M4A, AAC") should be updated to include WebM as well.
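The validator this comment discusses is Rust, but the drift it describes can be avoided by deriving the message from the allowlist itself; a hedged TypeScript sketch of that single-source-of-truth approach (structure assumed, extensions taken from the review):

```typescript
// The real validator is Rust; this sketch shows the drift-proof variant where
// the user-facing message is derived from the allowlist itself, so adding
// "webm" to the list automatically updates the error text.
const ALLOWED_EXTENSIONS = ["mp3", "wav", "flac", "ogg", "m4a", "aac", "webm"];

// Returns null when valid, or an error message listing the allowed set.
function validateExtension(ext: string): string | null {
  if (ALLOWED_EXTENSIONS.includes(ext.toLowerCase())) return null;
  const listed = ALLOWED_EXTENSIONS.map((e) => e.toUpperCase()).join(", ");
  return `Unsupported file type: ${ext}. Please upload a supported audio file (${listed}).`;
}
```

With this shape, the doc-comment and error string cannot disagree with the check itself.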
🟡 Minor comments (8)
.dockerignore (1)

1-39: ⚠️ Potential issue | 🟡 Minor

Consider removing the root `.dockerignore` file, as it is not consumed by any build workflow.

Docker reads `.dockerignore` from the build context root. Both frontend and backend workflows explicitly use `context: ./frontend` and `context: ./backend` respectively, which means they will use `.dockerignore` files within those subdirectories, not the root file. The `compose.yml` also specifies `context: ./backend`. This root `.dockerignore` would only be used for direct `docker build` invocations from the repository root, which is not part of the current workflow. Either remove it or document its intended purpose if reserved for future use.

frontend/tsconfig.app.json (1)
13-13: ⚠️ Potential issue | 🟡 Minor

Remove `"ignoreDeprecations": "6.0"`, as no deprecated options are in use and TypeScript 6.0 is already the current version.

The config contains no known deprecated compiler options (e.g., `importsNotUsedAsValues`, `preserveValueImports`, `out`, `suppressExcessPropertyErrors`, etc.). Additionally, TypeScript is already at version 6.0.2, so `"ignoreDeprecations": "6.0"` is no longer needed — it was intended to suppress warnings about options being removed in version 6.0, not after 6.0 has shipped. Removing this setting will keep the configuration cleaner and avoid confusion.

frontend/nginx.conf (1)
2-2: ⚠️ Potential issue | 🟡 Minor

Port coupling with the Dockerfile.

If you adopt the non-root/unprivileged approach suggested on `frontend/Dockerfile` (lines 31–34), this needs to change to `listen 8080;` (and `EXPOSE 8080` in the Dockerfile) so nginx can bind without `CAP_NET_BIND_SERVICE`. Flagging here so the two files stay in sync.

frontend/package.json (1)
6-8: ⚠️ Potential issue | 🟡 Minor

`engines.node` range is semver-redundant and lets Node 21.x through.

`">=20.19.0 || >=22.12.0"` is a union of two open-ended ranges; the first already includes everything the second does, so the whole expression collapses to `>=20.19.0` — meaning Node 21.x (non-LTS, excluded by Vite/Tailwind toolchains) will still satisfy it.

The intent (mirroring Vite's own engines field) is Node 20.19+ OR Node 22.12+, excluding odd-numbered releases:

🔧 Suggested fix

```diff
   "engines": {
-    "node": ">=20.19.0 || >=22.12.0"
+    "node": "^20.19.0 || >=22.12.0"
   },
```

frontend/src/components/ui/sonner.tsx (1)
37-41: ⚠️ Potential issue | 🟡 Minor

Remove the `toastOptions` block with the undefined `cn-toast` class.

`cn-toast` is not defined as a CSS class anywhere in the codebase and is not a standard Tailwind utility. This appears to be leftover placeholder code. Either remove the entire `toastOptions` block to use Sonner's default styles, or, if custom toast styling is needed, define actual classes and apply them. The component already properly uses Tailwind classes elsewhere (e.g., `className="toaster group"`).

frontend/src/pages/Identify.tsx (1)
73-80: ⚠️ Potential issue | 🟡 Minor

Drag-and-drop silently swallows unsupported files.

With the old `rejected` handling removed, dropping a non-audio file (or one whose MIME doesn't match `accept`) now does nothing — no toast, no error, no visual hint. Given the recorder can already surface errors to the user, it's worth wiring `fileRejections` into `setError` (or a toast) so users understand why their drop was ignored.

♻️ Suggested fix

```diff
   const onDrop = useCallback(
-    (accepted: File[]) => {
+    (accepted: File[], rejected: { file: File; errors: { message: string }[] }[]) => {
+      if (rejected.length > 0) {
+        setError(
+          rejected[0].errors[0]?.message ??
+            "Unsupported file type. Please upload an audio file.",
+        );
+        return;
+      }
       if (accepted[0]) {
         handleIdentify(accepted[0]);
       }
     },
     [handleIdentify],
   );
```

frontend/src/components/tracks/TrackCard.tsx (1)
54-59: ⚠️ Potential issue | 🟡 Minor

Error toast should replace the loading toast.

`toast.loading(...)` returns an id; the success branch correctly passes `id: toastId`, but the error branch omits it, so the loading spinner keeps running alongside the error toast until it auto-dismisses.

♻️ Suggested fix

```diff
   const handleCopyTrackLink = async () => {
+    const toastId = toast.loading("🔗 Generating track link...");
     try {
-      const toastId = toast.loading("🔗 Generating track link...");
-
       const response = await trackApi.getPresignedUrl(track.id);
       const signedUrl = response.url;
       await navigator.clipboard.writeText(signedUrl);
       toast.success("📋 Track link copied to clipboard!", {
         id: toastId,
         description: "Link copied successfully",
       });
     } catch (error) {
       console.error("Failed to get or copy track link:", error);
       toast.error("❌ Failed to get track link", {
+        id: toastId,
         description: "Please try again later",
       });
     }
   };
```

frontend/src/lib/api.ts (1)
14-14: ⚠️ Potential issue | 🟡 Minor

`.x-wav` is not a file extension.

`x-wav` appears only in the MIME subtype `audio/x-wav`; no filename ends with `.x-wav`. This branch is dead code — `.wav` alone is sufficient.

♻️ Proposed fix

```diff
-  if (lower.endsWith(".wav") || lower.endsWith(".x-wav")) return "audio/wav";
+  if (lower.endsWith(".wav")) return "audio/wav";
```
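A sketch of the filename-to-MIME lookup discussed above, with the dead `.x-wav` branch omitted; the mapping is illustrative, not the project's actual `api.ts`:

```typescript
// Sketch of the filename-to-MIME lookup discussed above, with the dead
// ".x-wav" branch omitted (x-wav exists only as the MIME subtype audio/x-wav,
// never as a filename suffix). The mapping is illustrative, not the real api.ts.
function mimeFromFilename(name: string): string {
  const lower = name.toLowerCase();
  if (lower.endsWith(".wav")) return "audio/wav";
  if (lower.endsWith(".webm")) return "audio/webm";
  if (lower.endsWith(".ogg")) return "audio/ogg";
  if (lower.endsWith(".mp3")) return "audio/mpeg";
  if (lower.endsWith(".flac")) return "audio/flac";
  return "application/octet-stream"; // unknown: let the backend sniff it
}
```

The point of the comment holds here too: extensions are matched on the filename, while `x-wav` only ever appears on the MIME side.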
🧹 Nitpick comments (17)
frontend/src/index.css (1)
167-170: Translate refactor preserves behavior; minor stylelint formatting nit.
`-translate-x-0.5` / `-translate-y-0.5` resolve to -2px on Tailwind's default spacing scale, so this is functionally equivalent to the previous `translate-x-[-2px]` / `translate-y-[-2px]`, and the call sites in `frontend/src/pages/Identify.tsx` and `frontend/src/pages/Upload.tsx` are unaffected.

Separately, stylelint flagged `declaration-empty-line-before` on line 169 (and lines 141/150) — insert a blank line between the `@apply` rule and the following `box-shadow` / `background-image` declaration to satisfy the rule.

✏️ Optional formatting fix

```diff
 .sketch-shadow-hover {
   @apply hover:-translate-x-0.5 hover:-translate-y-0.5 transition-all;
+
   box-shadow: 6px 6px 0px 0px var(--border);
 }
```

.github/workflows/frontend_ci.yml (1)
14-24: Pin action setup and consider `pnpm/action-setup@v4` before `setup-node`.

Minor consistency note: the current step order (`pnpm/action-setup` → `setup-node`) is correct for `cache: 'pnpm'` to find the pnpm binary — good. Consider also pinning to `node-version-file: .nvmrc` or `package.json#engines` to keep Node versions synchronized with local dev.

backend/.dockerignore (1)
22-24: Minor: `.env.*` may exclude committed example files.

The `.env.*` glob will also match files like `.env.example` or `.env.sample` if those are ever used to document required env vars for the backend container. If you rely on a committed example, whitelist it explicitly:

♻️ Optional tweak

```diff
 .env
 .env.local
 .env.*
+!.env.example
```

.gitignore (1)
30-31: Non-anchored `dist` / `dist-ssr` patterns match at any depth.

Switching from `/dist` to `dist` is needed to ignore `frontend/dist`, but it will also silently ignore any directory named `dist` anywhere in the repo (e.g., a future source folder). If you only want to target the frontend build output, consider anchoring to the frontend path instead:

♻️ More targeted pattern

```diff
-dist
-dist-ssr
+frontend/dist
+frontend/dist-ssr
```

frontend/nginx.conf (2)
1-34: Optional: add a lightweight `/healthz` endpoint for container probes.

Container orchestrators (Kubernetes, ECS, Docker healthchecks) will want a cheap liveness/readiness endpoint that doesn't depend on the SPA shell. A 200 from nginx itself is ideal.

➕ Optional addition

```diff
     location / {
         try_files $uri $uri/ /index.html;
     }
+
+    location = /healthz {
+        access_log off;
+        add_header Content-Type text/plain;
+        return 200 "ok";
+    }
```
21-21: Minor: consider adding modern asset types to `gzip_types` and/or enabling Brotli.

`gzip_types` is missing `application/wasm`, `image/svg+xml`, `font/woff2`, and `application/manifest+json`. Note that `.woff2` is already well-compressed and should typically be skipped, but SVG/JSON manifests benefit meaningfully. This is a small perf hint, not a correctness issue.

♻️ Optional

```diff
-    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
+    gzip_types text/plain text/css application/json application/javascript text/javascript text/xml application/xml application/xml+rss image/svg+xml application/manifest+json application/wasm;
```

frontend/Dockerfile (1)
17-21: Consider leveraging BuildKit cache mounts for `pnpm install`.

`COPY . .` after `pnpm install` is fine for layer caching on lockfile-only changes, but pnpm's store can be reused across builds with a cache mount to significantly speed up CI builds.

♻️ Optional: cache pnpm store between builds

```diff
-# Install dependencies
-RUN pnpm install --frozen-lockfile
+# Install dependencies (cache pnpm store across builds)
+RUN --mount=type=cache,id=pnpm,target=/root/.local/share/pnpm/store \
+    pnpm install --frozen-lockfile
```

Requires `DOCKER_BUILDKIT=1` (default in `docker/build-push-action@v5`).

.github/workflows/frontend_docker.yml (2)
39-42: `type=ref,event=branch` is a no-op on `pull_request` triggers.

`docker/metadata-action` only emits `branch` tags on `push` events. On `pull_request` events, the event type is `pr`, not `branch`. This means you'll only get the short-SHA tag on PR runs. The `enable=${{ github.ref == 'refs/heads/main' }}` condition also won't work on pull_request events, since `github.ref` will be a PR merge ref, not a branch ref.

Consider using `type=ref,event=pr` for PR-specific tags, and use `github.event.pull_request.base.ref` for the base branch condition:

♻️ Suggestion

```diff
           tags: |
-            type=ref,event=branch
+            type=ref,event=pr
             type=sha,format=short
-            type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}
+            type=raw,value=latest,enable=${{ github.event.pull_request.base.ref == 'main' }}
```

Or keep only `sha` + `latest` if branch-based tags aren't useful here.
32-32: Consider using `${{ secrets.GITHUB_TOKEN }}` instead of the custom `SECRET_TOKEN`.

The workflows are pushing to `ghcr.io` within the repository's namespace, and `permissions: packages: write` is already declared — sufficient for this use case. The custom `SECRET_TOKEN` introduces unnecessary maintenance overhead and increases the blast radius if compromised. Unless there's a specific cross-org or cross-namespace requirement, `GITHUB_TOKEN` is the preferred default.

Suggested change

```diff
-          password: ${{ secrets.SECRET_TOKEN }}
+          password: ${{ secrets.GITHUB_TOKEN }}
```

frontend/src/lib/contexts.ts (1)
3-8: Tighten the `status` type and de-duplicate the notification shape.

Two small improvements:

- `status: string` is too permissive given Navbar branches on specific literals (`"completed"`, `"ready"`, `"failed"`, `"error"`) while `Library.tsx` emits `"pending" | "fingerprinting" | "ready" | "error"`. A union type will catch typos at compile time (e.g., someone writing `"complete"` instead of `"completed"`).
- The item shape is duplicated across the array and the setter. Extract it.

♻️ Suggested refactor

```diff
 import { createContext } from "react";
 
+export type UploadNotificationStatus =
+  | "pending"
+  | "fingerprinting"
+  | "ready"
+  | "error"
+  | "completed"
+  | "failed";
+
+export interface UploadNotification {
+  track_id: string;
+  status: UploadNotificationStatus;
+  message?: string;
+}
+
 export interface UploadNotificationContextType {
-  uploadNotifications: { track_id: string; status: string; message?: string }[];
-  setUploadNotifications: React.Dispatch<React.SetStateAction<{ track_id: string; status: string; message?: string }[]>>;
+  uploadNotifications: UploadNotification[];
+  setUploadNotifications: React.Dispatch<React.SetStateAction<UploadNotification[]>>;
 }
 
 export const UploadNotificationContext = createContext<UploadNotificationContextType | undefined>(undefined);
```

Optionally, export a `useUploadNotifications()` hook that throws when used outside a provider, so consumers don't have to repeatedly fall back on an empty setter.

frontend/src/pages/Upload.tsx (1)
48-49: Consider validating `title.trim()` rather than truthiness.
`!title` lets whitespace-only titles through the guard (e.g. a single space passes the disabled check and gets POSTed). Trimming is a cheap hardening step:

```diff
-  if (!file || !title) return;
+  if (!file || !title.trim()) return;
   ...
-  disabled={uploading || !file || !title}
+  disabled={uploading || !file || !title.trim()}
```

Non-blocking, just nicer UX/validation.
Also applies to: 141-143
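The trim-aware guard above can be sketched as a standalone predicate; the `isValidTitle` helper name below is ours for illustration, not from the PR:

```typescript
// Hypothetical helper (not in the PR) showing why truthiness alone is too
// weak: a whitespace-only title is truthy but should still fail validation.
function isValidTitle(title: string | null | undefined): boolean {
  return Boolean(title && title.trim().length > 0);
}

console.log(isValidTitle("My Track")); // true
console.log(isValidTitle("   "));      // false — truthy, but trims to empty
console.log(isValidTitle(""));         // false
```

Both the early return in the submit handler and the `disabled` prop can then share this single check.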
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/pages/Upload.tsx` around lines 48 - 49, The guard in handleSubmit (and the other title checks around lines 141-143) allows whitespace-only titles; change all truthiness checks of title to trim-aware checks (e.g., use title?.trim() and check length or truthiness of title.trim()) so whitespace-only strings are rejected before POST and UI disables correctly; update the disabled logic and any early returns in functions like handleSubmit to use title.trim() (or title?.trim().length > 0) and ensure you call trim() consistently where validation is performed.

backend/src/settings.rs (1)
60-63: Silent fallback when `CORS_ALLOW_CREDENTIALS` is misconfigured.
`parse().unwrap_or(false)` silently turns typos like `"True"`, `"yes"`, `"1"` into `false`. For a security-relevant flag, it's worth surfacing the misconfiguration rather than hiding it — either use `.context(...)?` to fail fast like the other settings, or at minimum log a warning when parsing fails.

🛡️ Suggested fix
```diff
-cors_allow_credentials: std::env::var("CORS_ALLOW_CREDENTIALS")
-    .unwrap_or_else(|_| "false".to_string())
-    .parse()
-    .unwrap_or(false),
+cors_allow_credentials: std::env::var("CORS_ALLOW_CREDENTIALS")
+    .unwrap_or_else(|_| "false".to_string())
+    .parse()
+    .context("Invalid CORS_ALLOW_CREDENTIALS (expected `true` or `false`)")?,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/src/settings.rs` around lines 60 - 63, The current cors_allow_credentials setting silently defaults to false when parsing CORS_ALLOW_CREDENTIALS fails; update the code that sets cors_allow_credentials in settings.rs (the cors_allow_credentials assignment reading "CORS_ALLOW_CREDENTIALS") to surface misconfigurations instead of swallowing them — either propagate a parsing error (use .parse::<bool>().context("invalid CORS_ALLOW_CREDENTIALS")? to fail fast like other settings) or, if you must tolerate errors, at minimum log a warning when parse fails before falling back; ensure the change references the cors_allow_credentials assignment and the "CORS_ALLOW_CREDENTIALS" env var.

backend/src/worker/workflow.rs (1)
80-81: Optional: thread the extension through from `object_key` or track metadata.

Both worker decode sites pass `None`, which forces Symphonia to rely entirely on magic-byte detection. That generally works for WAV/MP3/FLAC but is noticeably less reliable for containers like M4A/AAC/WebM. Since `object_key` is already fetched from the DB, you could cheaply pass the extension as a hint:

```rust
let ext = std::path::Path::new(&track.object_key)
    .extension()
    .and_then(|e| e.to_str())
    .map(|s| s.to_ascii_lowercase());

let (samples, duration_secs) = tokio::task::spawn_blocking(move || {
    decode_audio(audio_data, 8000, ext.as_deref())
})
.await??;
```

Non-blocking — functionally equivalent today for most inputs, just more robust for the formats the validator now accepts (including `webm`).

Also applies to: 170-171
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/src/worker/workflow.rs` around lines 80 - 81, Extract the file extension from the existing object_key (e.g., via std::path::Path::new(&track.object_key).extension().and_then(|e| e.to_str()).map(|s| s.to_ascii_lowercase())) and pass that as the optional hint into decode_audio instead of None; update both tokio::task::spawn_blocking calls that invoke decode_audio(audio_data, 8000, None) (the one at the shown location and the similar call around lines 170-171) to forward the computed ext.as_deref() so decode_audio receives Some(extension) when available to improve format detection.

frontend/src/pages/Library.tsx (1)
40-41: Prefer typing `trackApi.list` over casting at the call site.

Repeated `trackData as TrackListResponse` casts mean that any future divergence between `trackApi.list`'s return type and `TrackListResponse` won't surface as a type error here. If `trackApi.list` already returns `TrackListResponse`, the cast is redundant; otherwise, fixing the API type (or generic-parameterizing `useQuery<TrackListResponse>`) is the more durable approach.

♻️ Suggested fix
```diff
-const { data: trackData, isLoading } = useQuery({
+const { data: trackData, isLoading } = useQuery<TrackListResponse>({
   queryKey: ["tracks", page],
   queryFn: () => trackApi.list(itemsPerPage, page),
 });

-const tracks = (trackData as TrackListResponse)?.tracks || [];
-const totalCount = (trackData as TrackListResponse)?.total_count || 0;
+const tracks = trackData?.tracks ?? [];
+const totalCount = trackData?.total_count ?? 0;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/pages/Library.tsx` around lines 40 - 41, Avoid casting trackData at the usage site; instead ensure the query return type is correctly typed by either updating trackApi.list's TypeScript signature to return TrackListResponse or by supplying the generic to useQuery (e.g., useQuery<TrackListResponse>(...)) so trackData is inferred as TrackListResponse and you can remove the casts around trackData used for tracks and total_count.

backend/src/api/identify.rs (1)
112-122: Drop the redundant `.to_vec()` clone in the blocking task.
`audio_data` is already `Vec<u8>` and is moved into the `spawn_blocking` closure, so `audio_data.to_vec()` allocates and copies the entire (up-to-`identify_max_file_size`) buffer for no reason. `decode_audio` takes `Vec<u8>` by value, so you can hand it the original.

♻️ Suggested fix
```diff
 let (sample_hashes, sample_duration) = tokio::task::spawn_blocking(move || {
-    let (samples, duration) = decode_audio(audio_data.to_vec(), 8000, Some(&ext))?;
+    let (samples, duration) = decode_audio(audio_data, 8000, Some(&ext))?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/src/api/identify.rs` around lines 112 - 122, The closure passed to tokio::task::spawn_blocking is needlessly cloning audio_data via audio_data.to_vec(); since audio_data is already a Vec<u8> and is moved into the closure, remove the .to_vec() call and pass audio_data directly to decode_audio (i.e., call decode_audio(audio_data, 8000, Some(&ext))) so the buffer is not duplicated; adjust the closure capture/move usage if necessary to ensure audio_data is moved into the closure.

frontend/src/components/identify/AudioRecorder.tsx (1)
42-51: Guard against zero `step` in the waveform averaging.

Today `analyser.fftSize = 256` ⇒ `frequencyBinCount = 128`, so `step = 3` and the loop is safe. But if someone lowers `fftSize` below 80 in `useAudioRecorder.ts`, `step` becomes `0`, you read `dataArray[0]` 40 times and divide by `0`, yielding `NaN` bar heights. A tiny defensive guard avoids this coupling.

♻️ Suggested hardening
```diff
-      const step = Math.floor(dataArray.length / 40);
+      const step = Math.max(1, Math.floor(dataArray.length / 40));
       const newWaveform = [];
       for (let i = 0; i < 40; i++) {
         let sum = 0;
         for (let j = 0; j < step; j++) {
-          sum += dataArray[i * step + j];
+          sum += dataArray[i * step + j] ?? 0;
         }
         const avg = sum / step;
         newWaveform.push(Math.max(5, (avg / 255) * 100));
       }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/identify/AudioRecorder.tsx` around lines 42 - 51, The waveform averaging code can produce step = 0 when frequencyBinCount is small; change the computation and indexing to be defensive: compute step as Math.max(1, Math.floor(dataArray.length / 40)) (or fallback to 1) and when summing use a bound check so you never read past dataArray (e.g., iterate j from 0 while i*step + j < dataArray.length && j < step). Update the block that references step, dataArray and newWaveform (the loop building newWaveform) so division by zero cannot occur and indexes are clamped.

frontend/src/hooks/useAudioRecorder.ts (1)
161-182: Skip decode + re-encode when the recorder already produced WAV.

When the browser falls back to `audio/wav`, the `onstop` handler still decodes it into an `AudioBuffer` and re-encodes it back to WAV — pure overhead that temporarily doubles peak memory for the recorded sample. A simple short-circuit avoids it.

♻️ Suggested refactor
```diff
 if (recordedBlob.size > 0) {
+  if (finalMimeType.startsWith("audio/wav")) {
+    setAudioBlob(recordedBlob);
+  } else {
     if (!decodeContext || decodeContext.state === "closed") {
       throw new Error("Audio context is not available");
     }
     const audioBuffer = await decodeContext.decodeAudioData(
       await recordedBlob.arrayBuffer(),
     );
     const wavBlob = encodeAudioBufferToWav(audioBuffer);
     setAudioBlob(wavBlob);
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/hooks/useAudioRecorder.ts` around lines 161 - 182, The onstop handler currently always decodes the recordedBlob into an AudioBuffer and re-encodes to WAV; short-circuit when the recorder already produced WAV by checking finalMimeType (e.g., finalMimeType.includes("wav") or equals "audio/wav"/"audio/x-wav") and if true call setAudioBlob(recordedBlob) directly and return; otherwise proceed with the existing decode using decodeContext.decodeAudioData, encodeAudioBufferToWav, and setAudioBlob as before. Ensure you still guard for isCurrentSession and non-zero recordedBlob.size and that recordedBlob.type is preserved when bypassing re-encoding.
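The decision the short-circuit makes can be sketched as a small predicate; the `needsWavReencode` helper below is hypothetical (not part of the hook) and assumes the recorder reports a standard MIME string:

```typescript
// Hypothetical predicate sketching the short-circuit: only non-WAV recordings
// need the decodeAudioData + WAV re-encode round trip.
function needsWavReencode(mimeType: string): boolean {
  // MediaRecorder often reports codec parameters, e.g. "audio/webm;codecs=opus",
  // so compare the base type only, case-insensitively.
  const base = mimeType.split(";")[0].trim().toLowerCase();
  return base !== "audio/wav" && base !== "audio/x-wav";
}

console.log(needsWavReencode("audio/webm;codecs=opus")); // true
console.log(needsWavReencode("audio/wav"));              // false
console.log(needsWavReencode("AUDIO/WAV"));              // false
```

Comparing only the base type avoids a `startsWith` edge case where a hypothetical `audio/wavpack`-style type would wrongly match the WAV branch.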
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7e0790fa-52e7-406f-a08c-0636f9b6de1c
⛔ Files ignored due to path filters (4)
- `frontend/dist/assets/index-C8bHhJC2.js` is excluded by `!**/dist/**`
- `frontend/dist/assets/index-CaS2LAz4.css` is excluded by `!**/dist/**`
- `frontend/dist/index.html` is excluded by `!**/dist/**`
- `frontend/pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (38)
- `.dockerignore`
- `.github/workflows/backend_ci.yml`
- `.github/workflows/backend_docker.yml`
- `.github/workflows/frontend_ci.yml`
- `.github/workflows/frontend_docker.yml`
- `.gitignore`
- `backend/.dockerignore`
- `backend/src/api/identify.rs`
- `backend/src/api/mod.rs`
- `backend/src/api/tracks.rs`
- `backend/src/fingerprint/decode.rs`
- `backend/src/settings.rs`
- `backend/src/utils/file_validation.rs`
- `backend/src/worker/workflow.rs`
- `frontend/.dockerignore`
- `frontend/Dockerfile`
- `frontend/index.html`
- `frontend/nginx.conf`
- `frontend/package.json`
- `frontend/src/App.tsx`
- `frontend/src/components/identify/AudioRecorder.tsx`
- `frontend/src/components/layout/Navbar.tsx`
- `frontend/src/components/tracks/TrackCard.tsx`
- `frontend/src/components/ui/badge.tsx`
- `frontend/src/components/ui/button.tsx`
- `frontend/src/components/ui/dropdown-menu.tsx`
- `frontend/src/components/ui/skeleton.tsx`
- `frontend/src/components/ui/sonner.tsx`
- `frontend/src/hooks/useAudioRecorder.ts`
- `frontend/src/hooks/useWebSocket.ts`
- `frontend/src/index.css`
- `frontend/src/lib/api.ts`
- `frontend/src/lib/contexts.ts`
- `frontend/src/pages/Identify.tsx`
- `frontend/src/pages/Library.tsx`
- `frontend/src/pages/Upload.tsx`
- `frontend/src/types/index.ts`
- `frontend/tsconfig.app.json`
💤 Files with no reviewable changes (1)
- frontend/src/hooks/useWebSocket.ts
…ecording improvements
Actionable comments posted: 9
🧹 Nitpick comments (4)
backend/src/settings.rs (1)
54-69: Consider extracting origin parsing into a helper.

The inline block inside the struct literal mixes parsing, trimming, filtering, and validation, which makes `from_env` harder to scan. Extracting to a small `fn parse_cors_origins(raw: &str) -> Result<Vec<String>>` would keep the struct literal flat, make the logic unit-testable, and localize the validation change suggested above.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/src/settings.rs` around lines 54 - 69, Extract the inline parsing/validation for cors_allowed_origins in from_env into a helper function named parse_cors_origins(raw: &str) -> anyhow::Result<Vec<String>>; move the split, trim, filter, and the HeaderValue::parse validation into that function, return the Vec<String> on success, and call it from the cors_allowed_origins field in from_env (replacing the current block). Ensure the helper is unit-testable and referenced by name so tests can validate behavior and error messages produced when HeaderValue parsing fails.

.github/workflows/frontend_ci.yml (2)
3-8: Consider adding a `concurrency` group to cancel superseded PR runs.

Without a `concurrency` block, pushing new commits to a PR will queue multiple redundant `lint`/`build` runs. Recommended:

♻️ Proposed addition
```diff
 on:
   push:
     branches: [main]
   pull_request:
     branches: [main]
+
+concurrency:
+  group: frontend-ci-${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: ${{ github.event_name == 'pull_request' }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/frontend_ci.yml around lines 3 - 8, Add a top-level concurrency block to the workflow ( alongside the existing on: push/pull_request block ) so superseded runs are cancelled; add: concurrency: group: ${{ github.workflow }}-${{ github.head_ref || github.ref }} with cancel-in-progress: true to ensure new commits on the same PR/branch cancel prior lint/build runs.
22-23: Pin `node-version` to `20.x` for clarity and consistency with `engines.node` in `package.json`.

`frontend/package.json` declares `engines.node: "^20.19.0 || >=22.12.0"`. While `node-version: 20` does resolve to the latest 20.x version available on the runner (currently 20.20.2 on ubuntu-latest, which satisfies the constraint), using `20.x` is more explicit about semantic versioning intent and makes the CI contract clearer.

♻️ Suggested tweak
```diff
 - uses: actions/setup-node@v4
   with:
-    node-version: 20
+    node-version: '20.x'
     cache: 'pnpm'
```

Also applies to: 42-43
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/frontend_ci.yml around lines 22 - 23, Update the GitHub Actions workflow entries that set the Node version to be explicit about semver: replace the current node-version: 20 settings with node-version: 20.x in the workflow file so it clearly matches the engines.node requirement in package.json; ensure you change all occurrences (the node-version key at the locations around the earlier lines referenced) so the CI uses the 20.x semver range consistently.

frontend/src/pages/Identify.tsx (1)
73-74: Replace `any[]` with `FileRejection[]` from react-dropzone.

react-dropzone 15.0.0 exports proper types; using them here makes the `error.code` comparisons type-checked instead of relying on string literals. The codebase already uses this pattern in Upload.tsx.

♻️ Proposed change
```diff
-import { useDropzone } from "react-dropzone";
+import { useDropzone, type FileRejection } from "react-dropzone";
@@
   const onDrop = useCallback(
-    (accepted: File[], fileRejections: any[]) => {
+    (accepted: File[], fileRejections: FileRejection[]) => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/pages/Identify.tsx` around lines 73 - 74, Change the onDrop callback signature to use react-dropzone's FileRejection type instead of any[] so error codes are type-checked: replace the second parameter type in onDrop (currently "fileRejections: any[]") with "fileRejections: FileRejection[]" and add an import for FileRejection from "react-dropzone"; update any local error.code comparisons in the onDrop body to rely on the typed FileRejection structure (e.g., rejection.errors) if needed.
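To illustrate what the typed callback buys, here is a minimal local stand-in for the shape — the real `FileRejection` type ships with react-dropzone, and the `summarizeRejection` helper is purely hypothetical:

```typescript
// Local stand-in mirroring react-dropzone's FileRejection shape; real code
// would instead `import { type FileRejection } from "react-dropzone"`.
interface FileError {
  code: string; // e.g. "file-invalid-type", "file-too-large"
  message: string;
}

interface FileRejection {
  file: { name: string };
  errors: FileError[];
}

// Hypothetical helper: with the typed array, `r.errors[0].code` accesses are
// checked by the compiler instead of going through untyped `any`.
function summarizeRejection(rejections: FileRejection[]): string | null {
  const r = rejections[0];
  if (!r || r.errors.length === 0) return null;
  return `${r.file.name}: ${r.errors[0].message}`;
}

const rejected: FileRejection[] = [
  {
    file: { name: "notes.txt" },
    errors: [{ code: "file-invalid-type", message: "File type must be audio/*" }],
  },
];
console.log(summarizeRejection(rejected)); // "notes.txt: File type must be audio/*"
console.log(summarizeRejection([]));       // null
```

A typo like `"file-invalid-tpye"` in a `code` comparison would still compile against this simplified `string` field, but the library's own union-typed error codes catch exactly that class of mistake.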
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/src/settings.rs`:
- Around line 70-73: The current cors_allow_credentials parse call swallows
invalid values via .unwrap_or(false); change it to surface parse errors like the
other settings by replacing the .parse().unwrap_or(false) with a fallible parse
that attaches context and propagates the error (e.g., use
.parse::<bool>().context("failed to parse CORS_ALLOW_CREDENTIALS as bool")? or
similar), keeping the same default for a missing env var but failing fast on
invalid values for cors_allow_credentials so misconfiguration is not silently
ignored.
- Around line 63-67: The current loop in settings.rs validates CORS origins only
via axum::http::HeaderValue::parse which allows invalid origins; change the
validation to first parse each origin string using url::Url (e.g., via
url::Url::parse), then assert the scheme is "http" or "https", that host() is
Some, and that path(), query(), and fragment() are empty/none; only after these
URL checks succeed convert to HeaderValue (or call HeaderValue::from_str) so
startup rejects entries like "localhost:5173" or "http://example.com/" with a
path. Ensure you reference the existing origin variable in the same loop and
preserve the existing error handling semantics by returning a descriptive
anyhow::Error on validation failure.
In `@frontend/Dockerfile`:
- Around line 24-33: The nginx unprivileged image runs as user nginx and cannot
bind to privileged port 80, so change the nginx config's "listen 80;" to "listen
8080;" in frontend/nginx.conf and update the Dockerfile's EXPOSE 80 to EXPOSE
8080; also ensure any docker-compose or k8s manifests that forward or target
container port 80 are updated to point to container port 8080 so the service
matches the new container listening port.
In `@frontend/src/App.tsx`:
- Line 26: The current code uses useTheme().theme which can be "system" and
breaks the toggle and Navbar; update the toggle logic and the Navbar prop to use
resolvedTheme instead of theme (i.e., read const { theme, resolvedTheme,
setTheme } = useTheme() and use resolvedTheme for computing the next theme and
for passing a "light" | "dark" value to <Navbar>); alternatively, if you only
want explicit light/dark, set enableSystem={false} on the ThemeProvider so theme
will never be "system".
In `@frontend/src/components/tracks/TrackCard.tsx`:
- Around line 266-282: The disable guard and submit handler use the raw title,
allowing whitespace-only strings; trim the title before validating and sending:
update the disabled condition to use title.trim() (e.g., disabled={isSaving ||
!title?.trim()}) and in the onClick async handler compute const trimmedTitle =
title.trim(), validate that trimmedTitle is non-empty, then call
onEdit(track.id, { title: trimmedTitle, artist: artist || undefined }), keeping
the existing setIsSaving, setEditOpen, toast and error handling logic.
- Around line 41-60: The loading toast created in handleCopyTrackLink is not
dismissed on error because toastId is scoped inside the try; move the
declaration const toastId = toast.loading("🔗 Generating track link...") so it
is declared before the try block (or otherwise accessible in catch), then pass
that toastId into the failing path by calling toast.error(..., { id: toastId,
description: ... }) so the loading spinner is replaced by the error toast;
ensure the success path still uses toast.success(..., { id: toastId,
description: ... }) and that trackApi.getPresignedUrl(track.id) and
navigator.clipboard calls remain unchanged.
In `@frontend/src/index.css`:
- Around line 140-170: Stylelint complains because declarations
(background-image and box-shadow) immediately follow `@apply` without a blank
line; update the CSS rules containing the `@apply` usages—specifically the body
selector (first block with `@apply` min-h-screen ...), body::before (with `@apply`
fixed inset-0 ...), and the .sketch-shadow rule—to insert a single blank line
between the `@apply` line and the subsequent background-image / box-shadow
declarations so each `@apply` is separated from following declarations by an empty
line.
In `@frontend/src/lib/api.ts`:
- Around line 58-82: The match function accepts options.transparency but never
sends it to the backend; either remove the option or forward it and update the
backend. To fix, modify the frontend match function (match) to include
options?.transparency as a query param named transparency in the api.post call
params alongside filename, and update the backend IdentifyRawQuery (and the
/identify/raw handler) to accept transparency: boolean (and handle its
semantics). Ensure the request still sets filename and Content-Type and keep
timeout unchanged.
---
Nitpick comments:
In @.github/workflows/frontend_ci.yml:
- Around line 3-8: Add a top-level concurrency block to the workflow ( alongside
the existing on: push/pull_request block ) so superseded runs are cancelled;
add: concurrency: group: ${{ github.workflow }}-${{ github.head_ref ||
github.ref }} with cancel-in-progress: true to ensure new commits on the same
PR/branch cancel prior lint/build runs.
- Around line 22-23: Update the GitHub Actions workflow entries that set the
Node version to be explicit about semver: replace the current node-version: 20
settings with node-version: 20.x in the workflow file so it clearly matches the
engines.node requirement in package.json; ensure you change all occurrences (the
node-version key at the locations around the earlier lines referenced) so the CI
uses the 20.x semver range consistently.
In `@backend/src/settings.rs`:
- Around line 54-69: Extract the inline parsing/validation for
cors_allowed_origins in from_env into a helper function named
parse_cors_origins(raw: &str) -> anyhow::Result<Vec<String>>; move the split,
trim, filter, and the HeaderValue::parse validation into that function, return
the Vec<String> on success, and call it from the cors_allowed_origins field in
from_env (replacing the current block). Ensure the helper is unit-testable and
referenced by name so tests can validate behavior and error messages produced
when HeaderValue parsing fails.
In `@frontend/src/pages/Identify.tsx`:
- Around line 73-74: Change the onDrop callback signature to use
react-dropzone's FileRejection type instead of any[] so error codes are
type-checked: replace the second parameter type in onDrop (currently
"fileRejections: any[]") with "fileRejections: FileRejection[]" and add an
import for FileRejection from "react-dropzone"; update any local error.code
comparisons in the onDrop body to rely on the typed FileRejection structure
(e.g., rejection.errors) if needed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 555e619b-1510-4039-a728-c8a4f489699e
📒 Files selected for processing (18)
- `.github/workflows/backend_docker.yml`
- `.github/workflows/frontend_ci.yml`
- `.github/workflows/frontend_docker.yml`
- `backend/src/api/identify.rs`
- `backend/src/api/mod.rs`
- `backend/src/settings.rs`
- `backend/src/utils/file_validation.rs`
- `frontend/.dockerignore`
- `frontend/Dockerfile`
- `frontend/nginx.conf`
- `frontend/package.json`
- `frontend/src/App.tsx`
- `frontend/src/components/identify/AudioRecorder.tsx`
- `frontend/src/components/tracks/TrackCard.tsx`
- `frontend/src/index.css`
- `frontend/src/lib/api.ts`
- `frontend/src/pages/Identify.tsx`
- `frontend/src/pages/Library.tsx`
✅ Files skipped from review due to trivial changes (2)
- .github/workflows/backend_docker.yml
- frontend/.dockerignore
🚧 Files skipped from review as they are similar to previous changes (6)
- backend/src/utils/file_validation.rs
- backend/src/api/mod.rs
- frontend/nginx.conf
- .github/workflows/frontend_docker.yml
- backend/src/api/identify.rs
- frontend/src/pages/Library.tsx
Summary by CodeRabbit
New Features
Bug Fixes
Chores