
fixing track identication interface#12

Merged
lessan-cyber merged 33 commits into main from dev
Apr 24, 2026

Conversation

Owner

@lessan-cyber lessan-cyber commented Apr 24, 2026

  • adding a body limit to the identify endpoint
  • converting the recorded sample from WebM to WAV to avoid compatibility issues
  • adding CI workflows for the frontend
  • adding components
  • fixing CORS

Summary by CodeRabbit

  • New Features

    • Live waveform during recording; improved recorder lifecycle
    • Raw audio upload endpoint alongside multipart uploads
    • Toaster-based toast notifications and theme provider integration
    • Track details dialog, link sharing, edit/save flows, and delete confirmation
    • Frontend and backend Docker setups and CI workflows
  • Bug Fixes

    • Added WEBM audio acceptance
    • Ensure track status is returned after updates
    • Improved microphone/error handling during recording
  • Chores

    • Tightened CORS defaults and added .dockerignore files
    • Removed legacy WebSocket hook; notification handling updated

- Replace permissive Any configurations with specific allowlists
- Add configurable CORS origins via environment variables
- Restrict HTTP methods to GET, POST, PUT, DELETE, PATCH
- Limit headers to Authorization, Content-Type, Accept
- Add explicit credentials support with proper validation
- Default origins: http://localhost:5173,http://localhost:3000
- Default credentials: false (secure by default)
- Remove useTrackWebSocket hook and all WebSocket-related code
- Update notification system to rely solely on upload notifications
- Remove wsMessages, wsUnreadCount, clearWsUnread from App.tsx
- Simplify Navbar props to use only upload notifications
- Delete unused useWebSocket.ts file

This completes the migration from WebSocket to database polling for
upload progress notifications as requested in the PR.
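The polling approach that replaces the WebSocket hook could look roughly like the following sketch. The status names, fetcher signature, and defaults here are illustrative assumptions, not the project's actual API:

```typescript
// Hypothetical sketch: poll a status fetcher until it reports a terminal
// state or the attempt budget is exhausted. Status names and the fetcher
// shape are assumptions for illustration.
type UploadStatus = "pending" | "processing" | "completed" | "failed";

async function pollUploadStatus(
  fetchStatus: () => Promise<UploadStatus>,
  { maxAttempts = 30, intervalMs = 2000 } = {},
): Promise<UploadStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (status === "completed" || status === "failed") return status;
    // Wait before the next database poll.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Polling timed out before the upload reached a terminal state");
}
```

Injecting the fetcher keeps the loop testable and makes the attempt/interval limits (mentioned later for the pages) explicit.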
AudioRecorder Hook Improvements:
- Add atomic start recording guard to prevent concurrent starts
- Wrap recorder operations in try/catch blocks
- Add proper cleanup of media streams and timers
- Implement useEffect cleanup on unmount
- Add error handling for recorder operations
- Detect MIME type before creating MediaRecorder
- Use recorder's actual MIME type when creating Blob
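The "detect MIME type before creating MediaRecorder" step could be sketched as below. The candidate list and helper name are assumptions; in the browser the support predicate would be `MediaRecorder.isTypeSupported`, injected here so the logic stays testable outside a browser:

```typescript
// Hypothetical candidate container types, tried in preference order.
const CANDIDATE_TYPES = [
  "audio/webm;codecs=opus",
  "audio/webm",
  "audio/ogg;codecs=opus",
  "audio/mp4",
];

// Return the first candidate the environment supports, or undefined to
// let the recorder fall back to its default container.
function pickSupportedMimeType(
  isSupported: (type: string) => boolean,
  candidates: string[] = CANDIDATE_TYPES,
): string | undefined {
  return candidates.find(isSupported);
}

// In the browser this might be wired up as:
// const mimeType = pickSupportedMimeType((t) => MediaRecorder.isTypeSupported(t));
// const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : undefined);
// new Blob(chunks, { type: recorder.mimeType }) then uses the actual type.
```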

Identify Page Improvements:
- Memoize handleIdentify callback with useCallback
- Replace hardcoded 'recording.wav' with dynamic filename based on MIME type
- Add getFilenameFromMimeType helper function
- Support all common audio formats (wav, webm, ogg, mp4, mpeg, aac, flac)
- Preserve original MIME type instead of forcing 'audio/wav'

These changes prevent memory leaks, improve performance, and provide
better audio format support.
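A minimal version of the getFilenameFromMimeType helper described above might look like this; the exact mapping table and the `.webm` default are assumptions, not the project's actual implementation:

```typescript
// Hypothetical MIME-to-extension table covering the formats listed above.
const MIME_TO_EXT: Record<string, string> = {
  "audio/wav": "wav",
  "audio/x-wav": "wav",
  "audio/webm": "webm",
  "audio/ogg": "ogg",
  "audio/mp4": "mp4",
  "audio/mpeg": "mp3",
  "audio/aac": "aac",
  "audio/flac": "flac",
};

function getFilenameFromMimeType(mimeType: string): string {
  // Strip any ";codecs=..." suffix before looking up the base type.
  const base = mimeType.split(";")[0].trim().toLowerCase();
  const ext = MIME_TO_EXT[base] ?? "webm";
  return `recording.${ext}`;
}
```

This keeps the original MIME type intact while naming the uploaded blob to match it, instead of hardcoding 'recording.wav'.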
…asts

- Add TrackListResponse interface to types/index.ts
- Update API to import and use the shared type
- Replace inline casts in Library.tsx with proper TrackListResponse type
- Eliminate duplicate type definitions across the codebase
- Improve type safety and code maintainability

This change standardizes the paginated tracks response type throughout
the application and removes the need for manual type casting.
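A shared paginated type of this kind might be shaped as follows; the field names (`items`, `total`, `page`, `page_size`) and the Track shape are illustrative assumptions, not the project's actual definitions:

```typescript
// Hypothetical shapes for the shared paginated response type.
interface Track {
  id: string;
  title: string;
  duration_secs: number;
}

interface TrackListResponse {
  items: Track[];
  total: number;
  page: number;
  page_size: number;
}

// A runtime guard lets call sites narrow unknown API payloads
// without resorting to inline `as` casts.
function isTrackListResponse(value: unknown): value is TrackListResponse {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    Array.isArray(v.items) &&
    typeof v.total === "number" &&
    typeof v.page === "number" &&
    typeof v.page_size === "number"
  );
}
```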
- Add asChild prop to DropdownMenuTrigger component
- Forward asChild prop to MenuPrimitive.Trigger
- Prevents nested button elements when using asChild pattern
- Maintains all existing functionality and data-slot attributes

This fix resolves the issue where consumers were accidentally nesting
buttons inside the DropdownMenuTrigger component.
…notifications

TrackCard Enhancements:
- Add delete confirmation dialog (prevents accidental deletions)
- Fix 3-dot menu visibility (always visible, theme-aware)
- Replace bell notifications with Sonner toast notifications
- Fix duration field to use duration_secs from API
- Add proper clipboard copy with backend integration
- Show loading/success/error toasts for track link copying
- Use actual S3 signed URL instead of API endpoint

Navbar Updates:
- Update to work with new notification system
- Remove WebSocket-related props and logic

CSS Updates:
- Maintain existing styling while supporting new features

These changes improve user experience with proper confirmation flows
and immediate feedback for actions.
- Update .gitignore for better build artifact handling
- Fix AudioRecorder component to prevent effect re-firing
- Update backend tracks API (minor adjustments)
- Update frontend build artifacts and dependencies
- Update index.html for production build

These are mostly build-related changes and minor component improvements.

coderabbitai Bot commented Apr 24, 2026

Warning

Rate limit exceeded

@lessan-cyber has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 8 minutes and 53 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 8 minutes and 53 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e807b2c2-6ca9-4587-be2a-2b6c99e25838

📥 Commits

Reviewing files that changed from the base of the PR and between 9ff55ae and 1813147.

⛔ Files ignored due to path filters (1)
  • backend/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (10)
  • .github/workflows/frontend_ci.yml
  • backend/Cargo.toml
  • backend/src/api/mod.rs
  • backend/src/settings.rs
  • backend/src/utils/cors.rs
  • backend/src/utils/mod.rs
  • frontend/Dockerfile
  • frontend/nginx.conf
  • frontend/src/App.tsx
  • frontend/src/components/tracks/TrackCard.tsx
📝 Walkthrough

Walkthrough

This PR adds Docker ignore files and a frontend Dockerfile/nginx config, introduces frontend CI and Docker workflows, refactors the backend identify API to accept raw uploads alongside multipart, tightens CORS and body-size handling, extends audio decoding with an extension hint, and implements frontend audio recording, UI, and notification system changes.

Changes

Cohort / File(s) Summary
Container images & ignores
/.dockerignore, backend/.dockerignore, frontend/.dockerignore, frontend/Dockerfile, frontend/nginx.conf
Adds .dockerignore files for frontend/backend, adds a two-stage frontend Dockerfile, and provides an nginx config for serving the SPA with security headers, caching, and gzip.
CI/CD workflows
.github/workflows/backend_ci.yml, .github/workflows/backend_docker.yml, .github/workflows/frontend_ci.yml, .github/workflows/frontend_docker.yml
Renames backend CI display name, adjusts backend Docker workflow name, and adds frontend CI and frontend Docker workflows (lint/build and build/push to GHCR with metadata).
Backend identify & API
backend/src/api/identify.rs, backend/src/api/mod.rs, backend/src/api/tracks.rs
Splits identify into multipart POST / and raw POST /raw, centralizes processing, enforces per-route body size limit, constrains CORS using settings, and ensures DELETE transaction commit / UPDATE returns status.
Backend decoding & validation
backend/src/fingerprint/decode.rs, backend/src/utils/file_validation.rs, backend/src/worker/workflow.rs
decode_audio gains an optional extension hint passed to Symphonia probe; webm added to allowed extensions; worker calls updated to pass the new parameter.
Backend settings
backend/src/settings.rs
Adds identify_max_file_size, cors_allowed_origins, and cors_allow_credentials with env parsing/validation.
Frontend recording & audio hook
frontend/src/components/identify/AudioRecorder.tsx, frontend/src/hooks/useAudioRecorder.ts
Implements analyser-driven live waveform, isStarting state, WAV encoding helper (encodeAudioBufferToWav), improved lifecycle/error handling, and exposes analyser/isStarting from the hook.
Frontend notification & WebSocket removal
frontend/src/App.tsx, frontend/src/lib/contexts.ts, frontend/src/components/layout/Navbar.tsx, frontend/src/hooks/useWebSocket.ts
Removes the WebSocket hook, introduces UploadNotificationContext for local upload notifications, updates App/Navbar to use local notifications, and adds a Toaster integration.
Frontend UI components
frontend/src/components/tracks/TrackCard.tsx, frontend/src/components/ui/badge.tsx, frontend/src/components/ui/button.tsx, frontend/src/components/ui/dropdown-menu.tsx, frontend/src/components/ui/skeleton.tsx, frontend/src/components/ui/sonner.tsx
Enhances TrackCard with details/edit/delete flows and async callbacks, removes badgeVariants and buttonVariants exports, tweaks dropdown styling, adds Skeleton and a Sonner Toaster component.
Frontend API, types & pages
frontend/src/lib/api.ts, frontend/src/types/index.ts, frontend/src/pages/Identify.tsx, frontend/src/pages/Library.tsx, frontend/src/pages/Upload.tsx
identifyApi.match now uploads raw blob to POST /identify/raw with optional filename; axios client gains timeout and no localhost fallback; adds TrackListResponse type; pages updated for new API, error handling, and polling limits.
Frontend config & styling
frontend/index.html, frontend/package.json, frontend/src/index.css, frontend/tsconfig.app.json
Changes page title to "Auri", adds next-themes and sonner deps and Node engine constraint, updates font quote style and small CSS utility tweaks, and sets TS deprecation ignore.
VCS ignore
.gitignore
Makes dist and dist-ssr ignore patterns non-root-anchored.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

🐰
Hopped through code, I shaped a Docker nest,
Waveforms wiggle, recordings sound their best,
WebSockets slept as local notes took flight,
Raw bytes hum home in morning’s gentle light. 🎶

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 46.94% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
Title check ❓ Inconclusive The title 'fixing track identication interface' is partially related to the changeset but does not capture the main change; it contains a typo ('identication' vs 'identification') and is too narrow to reflect the extensive scope of changes across frontend, backend, Docker, CI/CD, and infrastructure. Revise the title to be more comprehensive and accurate, e.g., 'Add frontend CI workflows, improve audio format handling, and tighten CORS security' or similar, to better reflect the multi-faceted nature of the changes.
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 13

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
backend/src/api/tracks.rs (1)

349-364: ⚠️ Potential issue | 🟡 Minor

Good fix, but watch S3-before-DB-commit ordering.

Adding tx.commit().await?; is correct — without it the DELETE was implicitly rolled back when tx dropped, leaving orphaned rows after S3 deletion.

One remaining concern: S3 deletion still happens before the DB commit. If tx.commit() fails (network blip, DB restart), the S3 object is already gone but the tracks row persists, and subsequent get_track_url calls will return a presigned URL to a non-existent object. Consider either:

  1. Committing the DB transaction first and then deleting from S3 (accepting a potential orphaned S3 object, which is usually the lesser evil and can be swept by a background job), or
  2. Using a soft-delete flag with async reconciliation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/api/tracks.rs` around lines 349 - 364, The S3 deletion
(state.s3.delete_file(&track.object_key)) currently happens before tx.commit(),
risking object loss if commit fails; change the flow in the delete handler to
commit the DB transaction (tx.commit().await?) first and only then call
state.s3.delete_file(&track.object_key).await, logging or queuing any S3-delete
failure for background cleanup; alternatively implement a soft-delete flag on
the tracks row (e.g., set deleted=true via the SQL DELETE logic replacement) and
perform the actual S3 deletion asynchronously by a reconciliation job so
get_track_url keeps returning valid URLs until physical removal.
backend/src/utils/file_validation.rs (1)

42-45: ⚠️ Potential issue | 🟡 Minor

Error message is out of sync with the new allowed list.

webm is now accepted but the user-facing error still enumerates only MP3, WAV, FLAC, OGG, M4A AAC (also missing a comma between M4A and AAC). Update the message so rejected uploads reflect the real supported set.

✂️ Suggested fix
             return Err(AppError::Validation(format!(
-                "Unsupported file type: {}. Please upload a supported audio file (MP3, WAV, FLAC, OGG, M4A AAC).",
+                "Unsupported file type: {}. Please upload a supported audio file (MP3, WAV, FLAC, OGG, M4A, AAC, WEBM).",
                 ext
             )));

The doc-comment at the top of the function (listing "MP3, WAV, FLAC, OGG, M4A, AAC") should be updated to include WebM as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/utils/file_validation.rs` around lines 42 - 45, Update the
user-facing error string in the file_validation code path that returns
Err(AppError::Validation(...)) to list the actual supported types including
"WEBM" and fix the M4A/AAC comma, and also update the doc-comment at the top of
the function (the comment that currently lists "MP3, WAV, FLAC, OGG, M4A, AAC")
to include WebM so documentation and the runtime error message stay in sync;
locate the return in file_validation.rs (the block that constructs the
Validation error using ext) and adjust the human-facing list to match the true
allowed set.
🟡 Minor comments (8)
.dockerignore-1-39 (1)

1-39: ⚠️ Potential issue | 🟡 Minor

Consider removing the root .dockerignore file as it is not consumed by any build workflow.

Docker reads .dockerignore from the build context root. Both frontend and backend workflows explicitly use context: ./frontend and context: ./backend respectively, which means they will use .dockerignore files within those subdirectories, not the root file. The compose.yml also specifies context: ./backend. This root .dockerignore would only be used for direct docker build invocations from the repository root, which is not part of the current workflow. Either remove it or document its intended purpose if reserved for future use.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.dockerignore around lines 1 - 39, The root .dockerignore file appears
unused by current CI/workflows because builds specify context: ./frontend and
context: ./backend (also in compose.yml); either delete the root .dockerignore
to avoid confusion or update it with a clear top-of-file comment describing its
intended purpose (e.g., for manual docker build from repo root or future
workflows) so readers know why it exists; make the change by removing the
.dockerignore file or by adding that explanatory comment at the top of the
existing .dockerignore.
frontend/tsconfig.app.json-13-13 (1)

13-13: ⚠️ Potential issue | 🟡 Minor

Remove "ignoreDeprecations": "6.0" as no deprecated options are in use and TypeScript 6.0 is already the current version.

The config contains no known deprecated compiler options (e.g., importsNotUsedAsValues, preserveValueImports, out, suppressExcessPropertyErrors, etc.). Additionally, TypeScript is already at version 6.0.2, so "ignoreDeprecations": "6.0" is no longer needed—it was intended to suppress warnings about options being removed in version 6.0, not after 6.0 has shipped. Removing this setting will keep the configuration cleaner and avoid confusion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/tsconfig.app.json` at line 13, Remove the unnecessary
"ignoreDeprecations": "6.0" entry from the tsconfig.app.json configuration: open
the JSON object that contains compiler options and delete the key
"ignoreDeprecations" (the string literal) so the file no longer suppresses
deprecation warnings for a shipped TypeScript 6.0; ensure the resulting JSON
remains valid (commas adjusted if it was between other entries) and run a quick
TypeScript build to confirm no config errors.
frontend/nginx.conf-2-2 (1)

2-2: ⚠️ Potential issue | 🟡 Minor

Port coupling with the Dockerfile.

If you adopt the non-root/unprivileged approach suggested on frontend/Dockerfile (lines 31–34), this needs to change to listen 8080; (and EXPOSE 8080 in the Dockerfile) so nginx can bind without CAP_NET_BIND_SERVICE. Flagging here so the two files stay in sync.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/nginx.conf` at line 2, The nginx config currently uses "listen 80;"
which will conflict with the non-root/unprivileged setup; change the nginx
directive to "listen 8080;" and update the Dockerfile to "EXPOSE 8080" (and
ensure any runtime/start scripts reference 8080) so nginx can bind without
CAP_NET_BIND_SERVICE when running as the non-root user mentioned in the
Dockerfile lines that configure the unprivileged user.
frontend/package.json-6-8 (1)

6-8: ⚠️ Potential issue | 🟡 Minor

engines.node range is semver-redundant and lets Node 21.x through.

">=20.19.0 || >=22.12.0" is a union of two open-ended ranges; the first already includes everything the second does, so the whole expression collapses to >=20.19.0 — meaning Node 21.x (non-LTS, excluded by Vite/Tailwind toolchains) will still satisfy it.

The intent (mirroring Vite's own engines field) is Node 20.19+ OR Node 22.12+, excluding odd-numbered releases:

🔧 Suggested fix
     "engines": {
-        "node": ">=20.19.0 || >=22.12.0"
+        "node": "^20.19.0 || >=22.12.0"
     },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/package.json` around lines 6 - 8, The engines.node range in
package.json is currently a redundant open-ended union that allows Node 21.x;
update the engines.node value to explicitly exclude odd-numbered 21.x releases
(mirror Vite's intent) by using a range that permits Node 20.19+ up to but not
including 21 and also permits Node 22.12+ (i.e., replace the current
engines.node expression with one that uses a bounded range for Node 20, e.g.,
">=20.19.0 <21 || >=22.12.0") so tools like Vite/Tailwind don't accept Node
21.x.
frontend/src/components/ui/sonner.tsx-37-41 (1)

37-41: ⚠️ Potential issue | 🟡 Minor

Remove the toastOptions block with the undefined cn-toast class.

cn-toast is not defined as a CSS class anywhere in the codebase and is not a standard Tailwind utility. This appears to be leftover placeholder code. Either remove the entire toastOptions block to use Sonner's default styles, or if custom toast styling is needed, define actual classes and apply them. The component already properly uses Tailwind classes elsewhere (e.g., className="toaster group").

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ui/sonner.tsx` around lines 37 - 41, Remove the
unused toastOptions block that references the undefined "cn-toast" class in the
Sonner setup; specifically delete the toastOptions prop (and its
classNames.toast entry) from the Sonner component so it falls back to default
styles, or alternatively replace "cn-toast" with a real Tailwind/defined CSS
class if custom styling is required; check for references to toastOptions and
the string "cn-toast" and update accordingly (the component uses
className="toaster group" already).
frontend/src/pages/Identify.tsx-73-80 (1)

73-80: ⚠️ Potential issue | 🟡 Minor

Drag-and-drop silently swallows unsupported files.

With the old rejected handling removed, dropping a non-audio file (or one whose MIME doesn't match accept) now does nothing — no toast, no error, no visual hint. Given the recorder can already surface errors to the user, it's worth wiring fileRejections into setError (or a toast) so users understand why their drop was ignored.

♻️ Suggested fix
-    const onDrop = useCallback(
-        (accepted: File[]) => {
-            if (accepted[0]) {
-                handleIdentify(accepted[0]);
-            }
-        },
-        [handleIdentify],
-    );
+    const onDrop = useCallback(
+        (accepted: File[], rejected: { file: File; errors: { message: string }[] }[]) => {
+            if (rejected.length > 0) {
+                setError(
+                    rejected[0].errors[0]?.message ??
+                        "Unsupported file type. Please upload an audio file.",
+                );
+                return;
+            }
+            if (accepted[0]) {
+                handleIdentify(accepted[0]);
+            }
+        },
+        [handleIdentify],
+    );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/pages/Identify.tsx` around lines 73 - 80, The onDrop handler
currently ignores rejected files; update the onDrop callback signature to accept
both accepted and fileRejections, check fileRejections (e.g.,
fileRejections[0]?.errors), and call setError (or the existing toast) with a
clear message constructed from fileRejections[0].errors[0].message so users see
why their file was rejected; touch the onDrop implementation and any Dropzone
props that pass fileRejections to ensure rejections are surfaced instead of
silently swallowed (referencing onDrop, handleIdentify, setError, and the
fileRejections array).
frontend/src/components/tracks/TrackCard.tsx-54-59 (1)

54-59: ⚠️ Potential issue | 🟡 Minor

Error toast should replace the loading toast.

toast.loading(...) returns an id; the success branch correctly passes id: toastId, but the error branch omits it, so the loading spinner keeps running alongside the error toast until it auto-dismisses.

♻️ Suggested fix
-    const handleCopyTrackLink = async () => {
+    const handleCopyTrackLink = async () => {
+        const toastId = toast.loading("🔗 Generating track link...");
         try {
-            const toastId = toast.loading("🔗 Generating track link...");
-
             const response = await trackApi.getPresignedUrl(track.id);
             const signedUrl = response.url;

             await navigator.clipboard.writeText(signedUrl);

             toast.success("📋 Track link copied to clipboard!", {
                 id: toastId,
                 description: "Link copied successfully",
             });
         } catch (error) {
             console.error("Failed to get or copy track link:", error);
-            toast.error("❌ Failed to get track link", {
+            toast.error("❌ Failed to get track link", {
+                id: toastId,
                 description: "Please try again later",
             });
         }
     };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/tracks/TrackCard.tsx` around lines 54 - 59, The error
handler in TrackCard.tsx is showing a new error toast without replacing the
loading toast id, so the spinner remains; update the catch block that logs
"Failed to get or copy track link" to pass the original toast id (toastId) into
toast.error (e.g., toast.error(..., { id: toastId, description: ... })) so the
loading toast is replaced; ensure the toastId variable used in the success path
is in scope for the catch block.
frontend/src/lib/api.ts-14-14 (1)

14-14: ⚠️ Potential issue | 🟡 Minor

.x-wav is not a file extension.

x-wav appears only in the MIME subtype audio/x-wav; no filename ends with .x-wav. This branch is dead code — .wav alone is sufficient.

♻️ Proposed fix
-    if (lower.endsWith(".wav") || lower.endsWith(".x-wav")) return "audio/wav";
+    if (lower.endsWith(".wav")) return "audio/wav";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/lib/api.ts` at line 14, The branch checking for ".x-wav" is dead
code; update the MIME detection logic that uses the variable "lower" (the if
that currently reads if (lower.endsWith(".wav") || lower.endsWith(".x-wav")
return "audio/wav";) to remove the .x-wav check so it only checks for ".wav"
(i.e., keep the lower.endsWith(".wav") condition and remove the ||
lower.endsWith(".x-wav") clause), ensuring the function still returns
"audio/wav" for .wav files.
🧹 Nitpick comments (17)
frontend/src/index.css (1)

167-170: Translate refactor preserves behavior; minor stylelint formatting nit.

-translate-x-0.5 / -translate-y-0.5 resolve to -2px on Tailwind's default spacing scale, so this is functionally equivalent to the previous translate-x-[-2px] / translate-y-[-2px] and the call sites in frontend/src/pages/Identify.tsx and frontend/src/pages/Upload.tsx are unaffected.

Separately, stylelint flagged declaration-empty-line-before on line 169 (and lines 141/150) — insert a blank line between the @apply rule and the following box-shadow / background-image declaration to satisfy the rule.

✏️ Optional formatting fix
     .sketch-shadow-hover {
         `@apply` hover:-translate-x-0.5 hover:-translate-y-0.5 transition-all;
+
         box-shadow: 6px 6px 0px 0px var(--border);
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/index.css` around lines 167 - 170, The stylelint rule requires
an empty line before the next declaration after an `@apply`; update the
.sketch-shadow-hover rule in index.css to insert a blank line between the `@apply`
line and the subsequent box-shadow declaration, and make the same spacing change
for the other occurrences flagged (the rules around background-image and other
declarations referenced in the comment) so the file satisfies
declaration-empty-line-before while preserving the current translate behavior of
.sketch-shadow-hover (which uses hover:-translate-x-0.5 and
hover:-translate-y-0.5).
.github/workflows/frontend_ci.yml (1)

14-24: Pin action setup and consider pnpm/action-setup@v4 before setup-node.

Minor consistency note: the current step order (pnpm/action-setup before setup-node) is correct, since cache: 'pnpm' needs the pnpm binary to already exist. Consider also pinning via node-version-file: .nvmrc or package.json#engines to keep Node versions synchronized with local dev.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/frontend_ci.yml around lines 14 - 24, The workflow
currently uses pinned actions (actions/checkout@v4, pnpm/action-setup@v4,
actions/setup-node@v4) but you should explicitly set Node source to a version
file to keep CI aligned with local dev; update the setup-node step to include
node-version-file: .nvmrc (or node-version-file: package.json#engines) so the
Node version is derived from your repo, and ensure pnpm/action-setup remains
before actions/setup-node to preserve pnpm cache behavior.
backend/.dockerignore (1)

22-24: Minor: .env.* may exclude committed example files.

The .env.* glob will also match files like .env.example or .env.sample if those are ever used to document required env vars for the backend container. If you rely on a committed example, whitelist it explicitly:

♻️ Optional tweak
 .env
 .env.local
 .env.*
+!.env.example
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/.dockerignore` around lines 22 - 24, The .dockerignore entry ".env.*"
will also exclude committed example files like ".env.example"; update
.dockerignore to keep the general ignore but explicitly whitelist any committed
examples by adding negative patterns such as "!.env.example" (and/or
"!.env.sample") or replace the broad ".env.*" with explicit entries for
sensitive files only, ensuring example files remain included for the
build/context.
.gitignore (1)

30-31: Non-anchored dist/dist-ssr patterns match at any depth.

Switching from /dist to dist is needed to ignore frontend/dist, but it will also silently ignore any directory named dist anywhere in the repo (e.g., a future source folder). If you only want to target the frontend build output, consider anchoring to the frontend path instead:

♻️ More targeted pattern
-dist
-dist-ssr
+frontend/dist
+frontend/dist-ssr
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.gitignore around lines 30 - 31, The .gitignore entries use non-anchored
patterns "dist" and "dist-ssr" which will ignore any directory with those names
at any depth; replace them with anchored patterns that target the intended build
output (e.g., use "/frontend/dist" and "/frontend/dist-ssr" if the goal is to
ignore the frontend build folder), or if you meant to ignore only the repository
root build output use "/dist" and "/dist-ssr" instead; update the entries for
"dist" and "dist-ssr" accordingly so they no longer match directories at
arbitrary depths.
frontend/nginx.conf (2)

1-34: Optional: add a lightweight /healthz endpoint for container probes.

Container orchestrators (Kubernetes, ECS, Docker healthchecks) will want a cheap liveness/readiness endpoint that doesn't depend on the SPA shell. A 200 from nginx itself is ideal.

➕ Optional addition
     location / {
         try_files $uri $uri/ /index.html;
     }
+
+    location = /healthz {
+        access_log off;
+        add_header Content-Type text/plain;
+        return 200 "ok";
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/nginx.conf` around lines 1 - 34, Add a lightweight health endpoint
inside the existing server block by adding a new location (e.g., location =
/healthz) that returns an immediate 200 response from nginx (no upstream or SPA
dependency); include minimal settings such as disabling access_log for the path
(access_log off), optionally adding a simple header or content-type, and place
it near the other location blocks (next to location / and location =
/index.html) so container orchestrators can probe /healthz cheaply.

21-21: Minor: consider adding modern asset types to gzip_types and/or enabling Brotli.

gzip_types is missing application/wasm, image/svg+xml, font/woff2, and application/manifest+json. Note that .woff2 is already well-compressed and should typically be skipped, but SVG/JSON manifests benefit meaningfully. This is a small perf hint, not a correctness issue.

♻️ Optional
-    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
+    gzip_types text/plain text/css application/json application/javascript text/javascript text/xml application/xml application/xml+rss image/svg+xml application/manifest+json application/wasm;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/nginx.conf` at line 21, The gzip_types directive currently lacks
modern asset MIME types; update the nginx gzip_types configuration (the
gzip_types directive) to include application/wasm, image/svg+xml, and
application/manifest+json (optionally include font/woff2 if you intentionally
want to compress woff2 despite its already-good compression), and consider
enabling Brotli by adding brotli on with a corresponding brotli_types list
mirroring gzip_types to serve smaller compressed assets when supported by
clients.
frontend/Dockerfile (1)

17-21: Consider leveraging BuildKit cache mounts for pnpm install.

COPY . . after pnpm install is fine for layer caching on lockfile-only changes, but pnpm's store can be reused across builds with a cache mount to significantly speed up CI builds.

♻️ Optional: cache pnpm store between builds
-# Install dependencies
-RUN pnpm install --frozen-lockfile
+# Install dependencies (cache pnpm store across builds)
+RUN --mount=type=cache,id=pnpm,target=/root/.local/share/pnpm/store \
+    pnpm install --frozen-lockfile

Requires DOCKER_BUILDKIT=1 (default in docker/build-push-action@v5).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/Dockerfile` around lines 17 - 21, Move dependency files copy earlier
and use a BuildKit cache mount for pnpm store: copy package.json and
pnpm-lock.yaml (or pnpm-workspace.yaml) before the full COPY . ., change the
install step that currently runs before/after COPY . . to use a BuildKit cache
(e.g. replace the plain RUN pnpm install with a RUN
--mount=type=cache,target=/home/node/.pnpm-store pnpm install), keep COPY . .
before RUN pnpm build so sources are available for the build, and ensure CI
enables DOCKER_BUILDKIT=1 so the cache mount is honored.
.github/workflows/frontend_docker.yml (2)

39-42: type=ref,event=branch is a no-op on pull_request triggers.

docker/metadata-action only emits branch tags on push events. On pull_request events, the event type is pr, not branch. This means you'll only get the short-SHA tag on PR runs. The enable=${{ github.ref == 'refs/heads/main' }} condition also won't work on pull_request events since github.ref will be a PR merge ref, not a branch ref.

Consider using type=ref,event=pr for PR-specific tags, and use github.event.pull_request.base.ref for the base branch condition:

♻️ Suggestion
                   tags: |
-                      type=ref,event=branch
+                      type=ref,event=pr
                       type=sha,format=short
-                      type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}
+                      type=raw,value=latest,enable=${{ github.event.pull_request.base.ref == 'main' }}

Or keep only sha + latest if branch-based tags aren't useful here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/frontend_docker.yml around lines 39 - 42, The
docker/metadata-action tag config currently uses type=ref,event=branch which is
a no-op for pull_request runs; update the tags block used by
docker/metadata-action to emit branch tags for PRs by replacing
type=ref,event=branch with type=ref,event=pr (so PR runs get the branch ref),
and change the latest-enable condition to use github.event.pull_request.base.ref
== 'refs/heads/main' (or similar) instead of github.ref; alternatively, if
branch-based tags aren’t needed for PRs, simplify the tags to only include the
sha and the conditional latest tag (keeping type=sha,format=short and
type=raw,value=latest with the adjusted base-ref condition).

32-32: Consider using ${{ secrets.GITHUB_TOKEN }} instead of the custom SECRET_TOKEN.

The workflows are pushing to ghcr.io within the repository's namespace, and permissions: packages: write is already declared—sufficient for this use case. The custom SECRET_TOKEN introduces unnecessary maintenance overhead and increases the blast radius if compromised. Unless there's a specific cross-org or cross-namespace requirement, GITHUB_TOKEN is the preferred default.

Suggested change
-                  password: ${{ secrets.SECRET_TOKEN }}
+                  password: ${{ secrets.GITHUB_TOKEN }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/frontend_docker.yml at line 32, Replace the custom secret
usage with the built-in token: change references to secrets.SECRET_TOKEN to use
secrets.GITHUB_TOKEN in the workflow so the job that pushes to ghcr.io uses the
repo-scoped GITHUB_TOKEN (ensure permissions: packages: write is present in the
workflow). Update any steps or environment variables that currently reference
SECRET_TOKEN (e.g., the password field) to reference GITHUB_TOKEN instead and
remove the custom secret from the workflow configuration or repo secrets if no
longer needed.
frontend/src/lib/contexts.ts (1)

3-8: Tighten the status type and de-duplicate the notification shape.

Two small improvements:

  1. status: string is too permissive given Navbar branches on specific literals ("completed", "ready", "failed", "error") while Library.tsx emits "pending" | "fingerprinting" | "ready" | "error". A union type will catch typos at compile time (e.g., someone writing "complete" instead of "completed").
  2. The item shape is duplicated across the array and the setter. Extract it.
♻️ Suggested refactor
 import { createContext } from "react";

+export type UploadNotificationStatus =
+    | "pending"
+    | "fingerprinting"
+    | "ready"
+    | "error"
+    | "completed"
+    | "failed";
+
+export interface UploadNotification {
+    track_id: string;
+    status: UploadNotificationStatus;
+    message?: string;
+}
+
 export interface UploadNotificationContextType {
-    uploadNotifications: { track_id: string; status: string; message?: string }[];
-    setUploadNotifications: React.Dispatch<React.SetStateAction<{ track_id: string; status: string; message?: string }[]>>;
+    uploadNotifications: UploadNotification[];
+    setUploadNotifications: React.Dispatch<React.SetStateAction<UploadNotification[]>>;
 }

 export const UploadNotificationContext = createContext<UploadNotificationContextType | undefined>(undefined);

Optionally, export a useUploadNotifications() hook that throws when used outside a provider, so consumers don't have to repeatedly fall back on an empty setter.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/lib/contexts.ts` around lines 3 - 8, The
UploadNotificationContextType currently uses a permissive status: string and
duplicates the item shape; introduce a dedicated item type (e.g.,
UploadNotificationItem) and replace the inline shapes in
UploadNotificationContextType with UploadNotificationItem[] and
React.Dispatch<React.SetStateAction<UploadNotificationItem[]>>; tighten the
status field on UploadNotificationItem to the union of known literals ("pending"
| "fingerprinting" | "ready" | "completed" | "failed" | "error") so consumers
like Navbar/Library get compile-time checks; update UploadNotificationContext to
use the new UploadNotificationContextType and optionally add a
useUploadNotifications() hook that reads UploadNotificationContext and throws
when undefined to avoid repeated empty-setter fallbacks.
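
The suggested union type also pays off at runtime, where notification payloads arrive from outside the type system (e.g. polled JSON). A minimal sketch of a matching type guard, assuming the status names from the suggestion above; the helper name isUploadNotificationStatus is hypothetical:

```typescript
// Hypothetical runtime guard for the suggested status union.
const UPLOAD_NOTIFICATION_STATUSES = [
    "pending",
    "fingerprinting",
    "ready",
    "error",
    "completed",
    "failed",
] as const;

export type UploadNotificationStatus =
    (typeof UPLOAD_NOTIFICATION_STATUSES)[number];

export function isUploadNotificationStatus(
    value: string,
): value is UploadNotificationStatus {
    return (UPLOAD_NOTIFICATION_STATUSES as readonly string[]).includes(value);
}
```

Checking at the boundary like this surfaces a typo such as "complete" where the payload enters the app, instead of deep inside Navbar's branching.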
frontend/src/pages/Upload.tsx (1)

48-49: Consider validating title.trim() rather than truthiness.

!title lets whitespace-only titles through the guard (e.g. a single space passes the disabled check and gets POSTed). Trimming is a cheap hardening step:

-        if (!file || !title) return;
+        if (!file || !title.trim()) return;
...
-                            disabled={uploading || !file || !title}
+                            disabled={uploading || !file || !title.trim()}

Non-blocking, just nicer UX/validation.

Also applies to: 141-143

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/pages/Upload.tsx` around lines 48 - 49, The guard in
handleSubmit (and the other title checks around lines 141-143) allows
whitespace-only titles; change all truthiness checks of title to trim-aware
checks (e.g., use title?.trim() and check length or truthiness of title.trim())
so whitespace-only strings are rejected before POST and UI disables correctly;
update the disabled logic and any early returns in functions like handleSubmit
to use title.trim() (or title?.trim().length > 0) and ensure you call trim()
consistently where validation is performed.
backend/src/settings.rs (1)

60-63: Silent fallback when CORS_ALLOW_CREDENTIALS is misconfigured.

parse().unwrap_or(false) silently turns typos like "True", "yes", "1" into false. For a security-relevant flag, it's worth surfacing the misconfiguration rather than hiding it — either use .context(...)? to fail fast like the other settings, or at minimum log a warning when parsing fails.

🛡️ Suggested fix
-            cors_allow_credentials: std::env::var("CORS_ALLOW_CREDENTIALS")
-                .unwrap_or_else(|_| "false".to_string())
-                .parse()
-                .unwrap_or(false),
+            cors_allow_credentials: std::env::var("CORS_ALLOW_CREDENTIALS")
+                .unwrap_or_else(|_| "false".to_string())
+                .parse()
+                .context("Invalid CORS_ALLOW_CREDENTIALS (expected `true` or `false`)")?,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/settings.rs` around lines 60 - 63, The current
cors_allow_credentials setting silently defaults to false when parsing
CORS_ALLOW_CREDENTIALS fails; update the code that sets cors_allow_credentials
in settings.rs (the cors_allow_credentials assignment reading
"CORS_ALLOW_CREDENTIALS") to surface misconfigurations instead of swallowing
them — either propagate a parsing error (use .parse::<bool>().context("invalid
CORS_ALLOW_CREDENTIALS")? to fail fast like other settings) or, if you must
tolerate errors, at minimum log a warning when parse fails before falling back;
ensure the change references the cors_allow_credentials assignment and the
"CORS_ALLOW_CREDENTIALS" env var.
backend/src/worker/workflow.rs (1)

80-81: Optional: thread the extension through from object_key or track metadata.

Both worker decode sites pass None, which forces Symphonia to rely entirely on magic-byte detection. That generally works for WAV/MP3/FLAC but is noticeably less reliable for containers like M4A/AAC/WebM. Since object_key is already fetched from the DB, you could cheaply pass the extension as a hint:

let ext = std::path::Path::new(&track.object_key)
    .extension()
    .and_then(|e| e.to_str())
    .map(|s| s.to_ascii_lowercase());
let (samples, duration_secs) = tokio::task::spawn_blocking(move || {
    decode_audio(audio_data, 8000, ext.as_deref())
}).await??;

Non-blocking — functionally equivalent today for most inputs, just more robust for the formats the validator now accepts (including webm).

Also applies to: 170-171

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/worker/workflow.rs` around lines 80 - 81, Extract the file
extension from the existing object_key (e.g., via
std::path::Path::new(&track.object_key).extension().and_then(|e|
e.to_str()).map(|s| s.to_ascii_lowercase())) and pass that as the optional hint
into decode_audio instead of None; update both tokio::task::spawn_blocking calls
that invoke decode_audio(audio_data, 8000, None) (the one at the shown location
and the similar call around lines 170-171) to forward the computed
ext.as_deref() so decode_audio receives Some(extension) when available to
improve format detection.
frontend/src/pages/Library.tsx (1)

40-41: Prefer typing trackApi.list over casting at the call site.

Repeated trackData as TrackListResponse casts mean that any future divergence between trackApi.list's return type and TrackListResponse won't surface as a type error here. If trackApi.list already returns TrackListResponse, the cast is redundant; otherwise, fixing the API type (or generic-parameterizing useQuery<TrackListResponse>) is the more durable approach.

♻️ Suggested fix
-    const { data: trackData, isLoading } = useQuery({
+    const { data: trackData, isLoading } = useQuery<TrackListResponse>({
         queryKey: ["tracks", page],
         queryFn: () => trackApi.list(itemsPerPage, page),
     });

-    const tracks = (trackData as TrackListResponse)?.tracks || [];
-    const totalCount = (trackData as TrackListResponse)?.total_count || 0;
+    const tracks = trackData?.tracks ?? [];
+    const totalCount = trackData?.total_count ?? 0;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/pages/Library.tsx` around lines 40 - 41, Avoid casting trackData
at the usage site; instead ensure the query return type is correctly typed by
either updating trackApi.list's TypeScript signature to return TrackListResponse
or by supplying the generic to useQuery (e.g., useQuery<TrackListResponse>(...))
so trackData is inferred as TrackListResponse and you can remove the casts
around trackData used for tracks and total_count.
backend/src/api/identify.rs (1)

112-122: Drop the redundant .to_vec() clone in the blocking task.

audio_data is already Vec<u8> and is moved into the spawn_blocking closure, so audio_data.to_vec() allocates and copies the entire (up-to-identify_max_file_size) buffer for no reason. decode_audio takes Vec<u8> by value, so you can hand it the original.

♻️ Suggested fix
     let (sample_hashes, sample_duration) = tokio::task::spawn_blocking(move || {
-        let (samples, duration) = decode_audio(audio_data.to_vec(), 8000, Some(&ext))?;
+        let (samples, duration) = decode_audio(audio_data, 8000, Some(&ext))?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/api/identify.rs` around lines 112 - 122, The closure passed to
tokio::task::spawn_blocking is needlessly cloning audio_data via
audio_data.to_vec(); since audio_data is already a Vec<u8> and is moved into the
closure, remove the .to_vec() call and pass audio_data directly to decode_audio
(i.e., call decode_audio(audio_data, 8000, Some(&ext))) so the buffer is not
duplicated; adjust the closure capture/move usage if necessary to ensure
audio_data is moved into the closure.
frontend/src/components/identify/AudioRecorder.tsx (1)

42-51: Guard against zero step in the waveform averaging.

Today analyser.fftSize = 256 gives frequencyBinCount = 128, so step = 3 and the loop is safe. But if someone lowers fftSize below 80 in useAudioRecorder.ts, step becomes 0: the inner averaging loop never runs, sum stays 0, and sum / step is 0 / 0, yielding NaN bar heights. A tiny defensive guard avoids this coupling.

♻️ Suggested hardening
-            const step = Math.floor(dataArray.length / 40);
+            const step = Math.max(1, Math.floor(dataArray.length / 40));
             const newWaveform = [];
             for (let i = 0; i < 40; i++) {
                 let sum = 0;
                 for (let j = 0; j < step; j++) {
-                    sum += dataArray[i * step + j];
+                    sum += dataArray[i * step + j] ?? 0;
                 }
                 const avg = sum / step;
                 newWaveform.push(Math.max(5, (avg / 255) * 100));
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/identify/AudioRecorder.tsx` around lines 42 - 51, The
waveform averaging code can produce step = 0 when frequencyBinCount is small;
change the computation and indexing to be defensive: compute step as Math.max(1,
Math.floor(dataArray.length / 40)) (or fallback to 1) and when summing use a
bound check so you never read past dataArray (e.g., iterate j from 0 while
i*step + j < dataArray.length && j < step). Update the block that references
step, dataArray and newWaveform (the loop building newWaveform) so division by
zero cannot occur and indexes are clamped.
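
Pulled out as a pure function, the hardened averaging becomes trivially unit-testable. A sketch under the same assumptions (40 bars, 0–255 byte samples, 5–100% bar heights); computeWaveform is a hypothetical extraction, not the component's current code:

```typescript
// Hypothetical extraction of the waveform-averaging loop with the
// defensive step guard applied.
export function computeWaveform(dataArray: Uint8Array, bars = 40): number[] {
    // Clamp step to at least 1 so sum / step can never be 0 / 0.
    const step = Math.max(1, Math.floor(dataArray.length / bars));
    const waveform: number[] = [];
    for (let i = 0; i < bars; i++) {
        let sum = 0;
        for (let j = 0; j < step; j++) {
            // Out-of-range reads fall back to 0 instead of NaN-poisoning the sum.
            sum += dataArray[i * step + j] ?? 0;
        }
        const avg = sum / step;
        waveform.push(Math.max(5, (avg / 255) * 100));
    }
    return waveform;
}
```

With the guard in place, even an empty analyser buffer yields the 5% floor for every bar rather than NaN heights.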
frontend/src/hooks/useAudioRecorder.ts (1)

161-182: Skip decode+re-encode when the recorder already produced WAV.

When the browser falls back to audio/wav, the onstop handler still decodes it into an AudioBuffer and re-encodes it back to WAV—pure overhead that temporarily doubles peak memory for the recorded sample. A simple short-circuit avoids it.

♻️ Suggested refactor
                     if (recordedBlob.size > 0) {
+                        if (finalMimeType.startsWith("audio/wav")) {
+                            setAudioBlob(recordedBlob);
+                        } else {
                             if (!decodeContext || decodeContext.state === "closed") {
                                 throw new Error("Audio context is not available");
                             }

                             const audioBuffer = await decodeContext.decodeAudioData(
                                 await recordedBlob.arrayBuffer(),
                             );
                             const wavBlob = encodeAudioBufferToWav(audioBuffer);
                             setAudioBlob(wavBlob);
+                        }
                     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/hooks/useAudioRecorder.ts` around lines 161 - 182, The onstop
handler currently always decodes the recordedBlob into an AudioBuffer and
re-encodes to WAV; short-circuit when the recorder already produced WAV by
checking finalMimeType (e.g., finalMimeType.includes("wav") or equals
"audio/wav"/"audio/x-wav") and if true call setAudioBlob(recordedBlob) directly
and return; otherwise proceed with the existing decode using
decodeContext.decodeAudioData, encodeAudioBufferToWav, and setAudioBlob as
before. Ensure you still guard for isCurrentSession and non-zero
recordedBlob.size and that recordedBlob.type is preserved when bypassing
re-encoding.
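
The MIME check itself is worth keeping tolerant, since browsers report WAV variously (audio/wav, audio/x-wav, sometimes with a codecs suffix). A hedged sketch of the predicate; isWavMime is a hypothetical helper name:

```typescript
// Hypothetical predicate for the short-circuit: true when the recorded
// blob is already WAV and can skip the decode + re-encode round-trip.
export function isWavMime(mimeType: string): boolean {
    // Strip any parameters (e.g. "audio/wav;codecs=1") and normalize case.
    const baseType = mimeType.split(";")[0].trim().toLowerCase();
    return baseType === "audio/wav" || baseType === "audio/x-wav";
}
```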

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7e0790fa-52e7-406f-a08c-0636f9b6de1c

📥 Commits

Reviewing files that changed from the base of the PR and between 9128916 and fb31c89.

⛔ Files ignored due to path filters (4)
  • frontend/dist/assets/index-C8bHhJC2.js is excluded by !**/dist/**
  • frontend/dist/assets/index-CaS2LAz4.css is excluded by !**/dist/**
  • frontend/dist/index.html is excluded by !**/dist/**
  • frontend/pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (38)
  • .dockerignore
  • .github/workflows/backend_ci.yml
  • .github/workflows/backend_docker.yml
  • .github/workflows/frontend_ci.yml
  • .github/workflows/frontend_docker.yml
  • .gitignore
  • backend/.dockerignore
  • backend/src/api/identify.rs
  • backend/src/api/mod.rs
  • backend/src/api/tracks.rs
  • backend/src/fingerprint/decode.rs
  • backend/src/settings.rs
  • backend/src/utils/file_validation.rs
  • backend/src/worker/workflow.rs
  • frontend/.dockerignore
  • frontend/Dockerfile
  • frontend/index.html
  • frontend/nginx.conf
  • frontend/package.json
  • frontend/src/App.tsx
  • frontend/src/components/identify/AudioRecorder.tsx
  • frontend/src/components/layout/Navbar.tsx
  • frontend/src/components/tracks/TrackCard.tsx
  • frontend/src/components/ui/badge.tsx
  • frontend/src/components/ui/button.tsx
  • frontend/src/components/ui/dropdown-menu.tsx
  • frontend/src/components/ui/skeleton.tsx
  • frontend/src/components/ui/sonner.tsx
  • frontend/src/hooks/useAudioRecorder.ts
  • frontend/src/hooks/useWebSocket.ts
  • frontend/src/index.css
  • frontend/src/lib/api.ts
  • frontend/src/lib/contexts.ts
  • frontend/src/pages/Identify.tsx
  • frontend/src/pages/Library.tsx
  • frontend/src/pages/Upload.tsx
  • frontend/src/types/index.ts
  • frontend/tsconfig.app.json
💤 Files with no reviewable changes (1)
  • frontend/src/hooks/useWebSocket.ts

Comment thread .github/workflows/backend_docker.yml Outdated
Comment thread .github/workflows/frontend_ci.yml Outdated
Comment thread backend/src/api/mod.rs
Comment thread backend/src/api/mod.rs
Comment thread frontend/.dockerignore Outdated
Comment thread frontend/src/components/tracks/TrackCard.tsx
Comment thread frontend/src/components/ui/sonner.tsx
Comment thread frontend/src/index.css Outdated
Comment thread frontend/src/lib/api.ts Outdated
Comment thread frontend/src/lib/api.ts

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 9

🧹 Nitpick comments (4)
backend/src/settings.rs (1)

54-69: Consider extracting origin parsing into a helper.

The inline block inside the struct literal mixes parsing, trimming, filtering, and validation, which makes from_env harder to scan. Extracting to a small fn parse_cors_origins(raw: &str) -> Result<Vec<String>> would keep the struct literal flat, make the logic unit-testable, and localize the validation change suggested above.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/settings.rs` around lines 54 - 69, Extract the inline
parsing/validation for cors_allowed_origins in from_env into a helper function
named parse_cors_origins(raw: &str) -> anyhow::Result<Vec<String>>; move the
split, trim, filter, and the HeaderValue::parse validation into that function,
return the Vec<String> on success, and call it from the cors_allowed_origins
field in from_env (replacing the current block). Ensure the helper is
unit-testable and referenced by name so tests can validate behavior and error
messages produced when HeaderValue parsing fails.
.github/workflows/frontend_ci.yml (2)

3-8: Consider adding a concurrency group to cancel superseded PR runs.

Without a concurrency block, pushing new commits to a PR will queue multiple redundant lint/build runs. Recommended:

♻️ Proposed addition
 on:
     push:
         branches: [main]
     pull_request:
         branches: [main]
+
+concurrency:
+    group: frontend-ci-${{ github.workflow }}-${{ github.ref }}
+    cancel-in-progress: ${{ github.event_name == 'pull_request' }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/frontend_ci.yml around lines 3 - 8, Add a top-level
concurrency block to the workflow ( alongside the existing on: push/pull_request
block ) so superseded runs are cancelled; add: concurrency: group: ${{
github.workflow }}-${{ github.head_ref || github.ref }} with cancel-in-progress:
true to ensure new commits on the same PR/branch cancel prior lint/build runs.

22-23: Pin node-version to 20.x for clarity and consistency with engines.node in package.json.

frontend/package.json declares engines.node: "^20.19.0 || >=22.12.0". While node-version: 20 does resolve to the latest 20.x version available on the runner (currently 20.20.2 on ubuntu-latest, which satisfies the constraint), using 20.x is more explicit about semantic versioning intent and makes the CI contract clearer.

♻️ Suggested tweak
             - uses: actions/setup-node@v4
               with:
-                  node-version: 20
+                  node-version: '20.x'
                   cache: 'pnpm'

Also applies to: 42-43

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/frontend_ci.yml around lines 22 - 23, Update the GitHub
Actions workflow entries that set the Node version to be explicit about semver:
replace the current node-version: 20 settings with node-version: 20.x in the
workflow file so it clearly matches the engines.node requirement in
package.json; ensure you change all occurrences (the node-version key at the
locations around the earlier lines referenced) so the CI uses the 20.x semver
range consistently.
frontend/src/pages/Identify.tsx (1)

73-74: Replace any[] with FileRejection[] from react-dropzone.

react-dropzone 15.0.0 exports proper types; using them here makes the error.code comparisons type-checked instead of relying on string literals. The codebase already uses this pattern in Upload.tsx.

♻️ Proposed change
-import { useDropzone } from "react-dropzone";
+import { useDropzone, type FileRejection } from "react-dropzone";
@@
     const onDrop = useCallback(
-        (accepted: File[], fileRejections: any[]) => {
+        (accepted: File[], fileRejections: FileRejection[]) => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/pages/Identify.tsx` around lines 73 - 74, Change the onDrop
callback signature to use react-dropzone's FileRejection type instead of any[]
so error codes are type-checked: replace the second parameter type in onDrop
(currently "fileRejections: any[]") with "fileRejections: FileRejection[]" and
add an import for FileRejection from "react-dropzone"; update any local
error.code comparisons in the onDrop body to rely on the typed FileRejection
structure (e.g., rejection.errors) if needed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/src/settings.rs`:
- Around line 70-73: The current cors_allow_credentials parse call swallows
invalid values via .unwrap_or(false); change it to surface parse errors like the
other settings by replacing the .parse().unwrap_or(false) with a fallible parse
that attaches context and propagates the error (e.g., use
.parse::<bool>().context("failed to parse CORS_ALLOW_CREDENTIALS as bool")? or
similar), keeping the same default for a missing env var but failing fast on
invalid values for cors_allow_credentials so misconfiguration is not silently
ignored.
- Around line 63-67: The current loop in settings.rs validates CORS origins only
via axum::http::HeaderValue::parse which allows invalid origins; change the
validation to first parse each origin string using url::Url (e.g., via
url::Url::parse), then assert the scheme is "http" or "https", that host() is
Some, and that path(), query(), and fragment() are empty/none; only after these
URL checks succeed convert to HeaderValue (or call HeaderValue::from_str) so
startup rejects entries like "localhost:5173" or "http://example.com/" with a
path. Ensure you reference the existing origin variable in the same loop and
preserve the existing error handling semantics by returning a descriptive
anyhow::Error on validation failure.

In `@frontend/Dockerfile`:
- Around line 24-33: The nginx unprivileged image runs as user nginx and cannot
bind to privileged port 80, so change the nginx config's "listen 80;" to "listen
8080;" in frontend/nginx.conf and update the Dockerfile's EXPOSE 80 to EXPOSE
8080; also ensure any docker-compose or k8s manifests that forward or target
container port 80 are updated to point to container port 8080 so the service
matches the new container listening port.

In `@frontend/src/App.tsx`:
- Line 26: The current code uses useTheme().theme which can be "system" and
breaks the toggle and Navbar; update the toggle logic and the Navbar prop to use
resolvedTheme instead of theme (i.e., read const { theme, resolvedTheme,
setTheme } = useTheme() and use resolvedTheme for computing the next theme and
for passing a "light" | "dark" value to <Navbar>); alternatively, if you only
want explicit light/dark, set enableSystem={false} on the ThemeProvider so theme
will never be "system".
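
The failure mode is easiest to see with the toggle logic written as a pure function. A sketch assuming a next-themes-style three-valued setting; the function names are hypothetical:

```typescript
type ThemeSetting = "light" | "dark" | "system";
type ResolvedTheme = "light" | "dark";

// What resolvedTheme gives you: "system" collapsed to the OS preference.
function resolveTheme(
    setting: ThemeSetting,
    systemPrefersDark: boolean,
): ResolvedTheme {
    if (setting === "system") return systemPrefersDark ? "dark" : "light";
    return setting;
}

// Toggling must branch on the resolved value; branching on the raw
// setting treats "system" as "not dark" and always emits "dark",
// leaving a system-dark user unable to switch to light.
function nextTheme(
    setting: ThemeSetting,
    systemPrefersDark: boolean,
): ResolvedTheme {
    return resolveTheme(setting, systemPrefersDark) === "dark" ? "light" : "dark";
}
```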

In `@frontend/src/components/tracks/TrackCard.tsx`:
- Around line 266-282: The disable guard and submit handler use the raw title,
allowing whitespace-only strings; trim the title before validating and sending:
update the disabled condition to use title.trim() (e.g., disabled={isSaving ||
!title?.trim()}) and in the onClick async handler compute const trimmedTitle =
title.trim(), validate that trimmedTitle is non-empty, then call
onEdit(track.id, { title: trimmedTitle, artist: artist || undefined }), keeping
the existing setIsSaving, setEditOpen, toast and error handling logic.
- Around line 41-60: The loading toast created in handleCopyTrackLink is not
dismissed on error because toastId is scoped inside the try; move the
declaration const toastId = toast.loading("🔗 Generating track link...") so it
is declared before the try block (or otherwise accessible in catch), then pass
that toastId into the failing path by calling toast.error(..., { id: toastId,
description: ... }) so the loading spinner is replaced by the error toast;
ensure the success path still uses toast.success(..., { id: toastId,
description: ... }) and that trackApi.getPresignedUrl(track.id) and
navigator.clipboard calls remain unchanged.
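
The shape of that fix, with the toast API abstracted behind an interface so the hoisted id stays visible. copyTrackLink and ToastLike are hypothetical stand-ins for the component's handler and sonner's API, not the component's actual code:

```typescript
type ToastOpts = { id?: string; description?: string };
interface ToastLike {
    loading(msg: string): string;
    success(msg: string, opts?: ToastOpts): void;
    error(msg: string, opts?: ToastOpts): void;
}

async function copyTrackLink(
    toast: ToastLike,
    getUrl: () => Promise<string>,
    writeClipboard: (text: string) => Promise<void>,
): Promise<void> {
    // Hoist the id out of the try block so the catch path can replace
    // the loading toast instead of leaving it spinning forever.
    const toastId = toast.loading("🔗 Generating track link...");
    try {
        const url = await getUrl();
        await writeClipboard(url);
        toast.success("Link copied", { id: toastId });
    } catch (err) {
        toast.error("Failed to generate link", {
            id: toastId,
            description: err instanceof Error ? err.message : String(err),
        });
    }
}
```

Because both branches pass the same id, sonner replaces the loading toast in place on success and failure alike.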

In `@frontend/src/index.css`:
- Around line 140-170: Stylelint complains because declarations
(background-image and box-shadow) immediately follow `@apply` without a blank
line; update the CSS rules containing the `@apply` usages—specifically the body
selector (first block with `@apply` min-h-screen ...), body::before (with `@apply`
fixed inset-0 ...), and the .sketch-shadow rule—to insert a single blank line
between the `@apply` line and the subsequent background-image / box-shadow
declarations so each `@apply` is separated from following declarations by an empty
line.

In `@frontend/src/lib/api.ts`:
- Around line 58-82: The match function accepts options.transparency but never
sends it to the backend; either remove the option or forward it and update the
backend. To fix, modify the frontend match function (match) to include
options?.transparency as a query param named transparency in the api.post call
params alongside filename, and update the backend IdentifyRawQuery (and the
/identify/raw handler) to accept transparency: boolean (and handle its
semantics). Ensure the request still sets filename and Content-Type and keep
timeout unchanged.
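
On the frontend side, forwarding the option is one extra query parameter. A sketch of the param construction, assuming the backend accepts a boolean transparency query arg (the name mirrors the suggestion above; buildIdentifyRawQuery is a hypothetical helper):

```typescript
// Hypothetical helper: build the /identify/raw query string, forwarding
// transparency only when the caller actually set it.
export function buildIdentifyRawQuery(
    filename: string,
    options?: { transparency?: boolean },
): string {
    const params = new URLSearchParams({ filename });
    if (options?.transparency !== undefined) {
        params.set("transparency", String(options.transparency));
    }
    return params.toString();
}
```

Omitting the param when the option is unset keeps existing requests byte-identical, so the backend change can land independently.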

---

Nitpick comments:
In @.github/workflows/frontend_ci.yml:
- Around line 3-8: Add a top-level concurrency block to the workflow ( alongside
the existing on: push/pull_request block ) so superseded runs are cancelled;
add: concurrency: group: ${{ github.workflow }}-${{ github.head_ref ||
github.ref }} with cancel-in-progress: true to ensure new commits on the same
PR/branch cancel prior lint/build runs.
- Around line 22-23: Update the GitHub Actions workflow entries that set the
Node version to be explicit about semver: replace the current node-version: 20
settings with node-version: 20.x in the workflow file so it clearly matches the
engines.node requirement in package.json; ensure you change all occurrences (the
node-version key at the locations around the earlier lines referenced) so the CI
uses the 20.x semver range consistently.

In `@backend/src/settings.rs`:
- Around line 54-69: Extract the inline parsing/validation for
cors_allowed_origins in from_env into a helper function named
parse_cors_origins(raw: &str) -> anyhow::Result<Vec<String>>; move the split,
trim, filter, and HeaderValue::parse validation into that function, return the
Vec<String> on success, and call it where the cors_allowed_origins field is
built in from_env (replacing the current block). Keeping the logic in a named
helper makes it unit-testable, so tests can cover both the happy path and the
error messages produced when HeaderValue parsing fails.
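A dependency-free sketch of that helper (anyhow::Result and the HeaderValue::parse check are replaced with std types and a plain visible-byte check so the snippet stands alone; the real version would return anyhow::Result<Vec<String>> and validate each origin with HeaderValue::parse):

```rust
// Sketch of the extracted helper: split a comma-separated env value into
// trimmed, non-empty origin strings, rejecting values a header cannot carry.
fn parse_cors_origins(raw: &str) -> Result<Vec<String>, String> {
    let origins: Vec<String> = raw
        .split(',')
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .map(str::to_owned)
        .collect();
    for origin in &origins {
        // Stand-in for HeaderValue::parse: reject control bytes that would
        // make the origin an invalid header value.
        if origin.bytes().any(|b| b < 0x20 || b == 0x7f) {
            return Err(format!("invalid CORS origin {origin:?}"));
        }
    }
    Ok(origins)
}
```

Because the parsing is now behind a named function, a unit test can feed it the default `http://localhost:5173,http://localhost:3000` value as well as malformed input, without going through from_env.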

In `@frontend/src/pages/Identify.tsx`:
- Around line 73-74: Change the onDrop callback signature to use
react-dropzone's FileRejection type instead of any[] so error codes are
type-checked: replace the second parameter type in onDrop (currently
"fileRejections: any[]") with "fileRejections: FileRejection[]" and add an
import for FileRejection from "react-dropzone"; update any local error.code
comparisons in the onDrop body to rely on the typed FileRejection structure
(e.g., rejection.errors) if needed.
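A sketch of the typed rejection handling (the FileRejection/FileError shapes below mirror react-dropzone's exports but are redeclared locally only to keep the snippet self-contained; in the component you would import FileRejection from "react-dropzone" instead):

```typescript
// Local mirrors of react-dropzone's exported types, redeclared for a
// self-contained sketch.
type FileError = { code: string; message: string };
type FileRejection = { file: { name: string }; errors: FileError[] };

// Typed replacement for the former `fileRejections: any[]` parameter:
// error codes are checked against the structured `errors` array, so a typo
// in a code comparison becomes visible instead of silently never matching.
function rejectedWithCode(
  fileRejections: FileRejection[],
  code: string,
): string[] {
  return fileRejections
    .filter((r) => r.errors.some((e) => e.code === code))
    .map((r) => r.file.name);
}
```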
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 555e619b-1510-4039-a728-c8a4f489699e

📥 Commits

Reviewing files that changed from the base of the PR and between fb31c89 and 9ff55ae.

📒 Files selected for processing (18)
  • .github/workflows/backend_docker.yml
  • .github/workflows/frontend_ci.yml
  • .github/workflows/frontend_docker.yml
  • backend/src/api/identify.rs
  • backend/src/api/mod.rs
  • backend/src/settings.rs
  • backend/src/utils/file_validation.rs
  • frontend/.dockerignore
  • frontend/Dockerfile
  • frontend/nginx.conf
  • frontend/package.json
  • frontend/src/App.tsx
  • frontend/src/components/identify/AudioRecorder.tsx
  • frontend/src/components/tracks/TrackCard.tsx
  • frontend/src/index.css
  • frontend/src/lib/api.ts
  • frontend/src/pages/Identify.tsx
  • frontend/src/pages/Library.tsx
✅ Files skipped from review due to trivial changes (2)
  • .github/workflows/backend_docker.yml
  • frontend/.dockerignore
🚧 Files skipped from review as they are similar to previous changes (6)
  • backend/src/utils/file_validation.rs
  • backend/src/api/mod.rs
  • frontend/nginx.conf
  • .github/workflows/frontend_docker.yml
  • backend/src/api/identify.rs
  • frontend/src/pages/Library.tsx

lessan-cyber merged commit 5d7bdf1 into main on Apr 24, 2026
5 of 7 checks passed
