
feat: enhance local-dev with PR review & MinIO S3 #1975

Open

Asi0Flammeus wants to merge 2 commits into dev from enhance/local-dev-pr-review

Conversation

@Asi0Flammeus
Collaborator

Summary

Extends local-dev.sh with two capabilities that make local development self-sufficient:

PR review support (57ea0e6)

  • local-dev.sh branch --content <branch> switches the content branch and re-syncs
  • local-dev.sh sync now reports error counts, enabling agent-driven review workflows
  • Structured exit codes for scripted usage
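A minimal sketch of scripted usage, assuming only the POSIX convention that exit status 0 means success; the specific nonzero code values are not documented in this description, and the wrapper name is hypothetical:

```typescript
import { spawnSync } from "node:child_process";

// Run a command and return its exit status, for wrapping local-dev.sh
// in agent-driven review scripts. (Hypothetical helper, not from the repo.)
const exitCode = (cmd: string, args: string[]): number | null =>
  spawnSync(cmd, args, { encoding: "utf8" }).status;

// e.g. treat any nonzero status from `local-dev.sh sync` as "review needed":
// if (exitCode("./local-dev.sh", ["sync"]) !== 0) { /* inspect sync errors */ }
```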

MinIO S3 storage (81b228a)

  • Problem: Local sync produced 54 ECONNREFUSED errors because no S3 service existed locally. This also blocked garbage collection (entity cleanup) since it only runs when syncErrors.length === 0.
  • Solution: Added MinIO as a local S3-compatible object store, fully managed by local-dev.sh
  • Container lifecycle integrated into up/down/nuke/status
  • Auto-creates bucket on first start (via one-shot minio/mc container)
  • Patches S3 environment variables automatically (both on fresh setup and existing installs)
  • S3 client auto-detects non-AWS endpoints and uses path-style addressing (1-line heuristic, no config flag needed)
  • MinIO console exposed at localhost:9001 for inspecting stored objects
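The 1-line heuristic mentioned above can be sketched in isolation; the endpoint URLs below are illustrative, not taken from the repo's config:

```typescript
// The described heuristic: any endpoint that does not look like AWS gets
// path-style addressing (http://host:9000/<bucket>/<key>), which MinIO and
// other S3-compatible stores require.
const forcePathStyle = (endpoint: string): boolean =>
  !endpoint.includes("amazonaws.com");

// forcePathStyle("http://localhost:9000")              → true  (MinIO, path-style)
// forcePathStyle("https://s3.eu-west-1.amazonaws.com") → false (AWS, virtual-hosted)
```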

Result

  • Before: 55 sync errors (54 S3 + 1 content bug), GC never runs
  • After: 0 sync errors on dev, 54 assignment PDFs stored locally

Test plan

  • local-dev.sh up — MinIO starts, bucket created, env patched
  • local-dev.sh up (again) — idempotent, skips existing MinIO
  • local-dev.sh sync on dev — 0 errors, PDFs uploaded to MinIO
  • local-dev.sh status — shows MinIO status
  • local-dev.sh nuke — removes MinIO container + volume

Asi0Flammeus and others added 2 commits March 29, 2026 13:07
- Add --content-pr <number> to switch content to any PR branch (handles forks)
- Add --content-reset to restore upstream repo + main branch
- Restart dev servers on content env change (API reads config at boot)
- Surface sync errors (e.g. missing fields) after sync completes
- Reset all branches to dev on `down` for clean slate
- Show fork indicator in status and branch menu

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Local sync was producing 54 ECONNREFUSED errors because no S3 service
was available. This adds MinIO as a local S3-compatible store, eliminating
all S3 errors and enabling assignment PDF upload/download in local dev.

- local-dev.sh: manage MinIO container lifecycle (up/down/nuke/status),
  auto-create bucket, patch S3 vars on first run
- S3 client: auto-detect non-AWS endpoints and use path-style addressing
  (required by MinIO and all S3-compatible stores)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
secretAccessKey: config.secretKey,
},
endpoint: config.endpoint,
forcePathStyle: !config.endpoint.includes('amazonaws.com'),

Check failure

Code scanning / CodeQL

Incomplete URL substring sanitization (High)

'amazonaws.com' can be anywhere in the URL, and arbitrary hosts may come before or after it.

Copilot Autofix


In general, the problem is that the code checks whether the string "amazonaws.com" appears anywhere in config.endpoint, instead of reliably determining whether the endpoint really belongs to AWS. To fix this, we should parse the endpoint as a URL, extract its hostname, normalize it, and then compare it against a whitelist pattern, such as “is the hostname exactly amazonaws.com or ends with .amazonaws.com”. For plain hostnames without scheme, we can normalize by prepending https:// before parsing.

The best minimal fix here is to replace the .includes('amazonaws.com') call with a small helper that safely determines whether an endpoint is an AWS S3 endpoint. That helper can (1) accept the endpoint string, (2) ensure it has a scheme (prepend https:// if missing), (3) use the standard URL class to parse it, (4) extract hostname, and (5) return true only if the hostname is amazonaws.com or ends with .amazonaws.com. We then compute a boolean (e.g. isAwsEndpoint) and set forcePathStyle based on its negation as before, preserving behavior for genuine AWS endpoints but avoiding substring-based misclassification for arbitrary URLs. This change can be implemented entirely inside packages/s3/src/index.ts without new external dependencies, using the built-in URL global.

Concretely:

  • Add a small helper function (e.g. isAwsEndpoint) above createS3Service, within the same file.
  • In that helper, normalize and parse the endpoint string and perform host comparison as described.
  • In createS3Service, replace forcePathStyle: !config.endpoint.includes('amazonaws.com') with forcePathStyle: !isAwsEndpoint(config.endpoint).
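The difference between the two checks can be demonstrated directly. This standalone sketch reproduces the proposed helper and applies it to a hostile hostname that defeats the substring test (the hostile hostname is illustrative):

```typescript
// Proposed helper: parse the endpoint and compare hostnames instead of
// searching for a substring anywhere in the URL.
const isAwsEndpoint = (endpoint: string): boolean => {
  if (!endpoint) return false;
  // Bare hostnames get a scheme prepended so URL can parse them.
  const urlString = /^[a-zA-Z][a-zA-Z0-9+\-.]*:/.test(endpoint)
    ? endpoint
    : `https://${endpoint}`;
  try {
    const hostname = new URL(urlString).hostname.toLowerCase();
    return hostname === "amazonaws.com" || hostname.endsWith(".amazonaws.com");
  } catch {
    return false;
  }
};

// An illustrative hostile endpoint that merely contains the substring:
const tricky = "https://amazonaws.com.evil.example";
// tricky.includes("amazonaws.com") → true  (old check: wrongly treated as AWS)
// isAwsEndpoint(tricky)            → false (hostname comparison rejects it)
```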
Suggested changeset 1
packages/s3/src/index.ts

Autofix patch

Run the following command in your local git repository to apply this patch:
cat << 'EOF' | git apply
diff --git a/packages/s3/src/index.ts b/packages/s3/src/index.ts
--- a/packages/s3/src/index.ts
+++ b/packages/s3/src/index.ts
@@ -58,6 +58,28 @@
   return path.replaceAll(/\/+/g, '/');
 };
 
+const isAwsEndpoint = (endpoint: string): boolean => {
+  if (!endpoint) {
+    return false;
+  }
+
+  let urlString = endpoint;
+  if (!/^[a-zA-Z][a-zA-Z0-9+\-.]*:/.test(urlString)) {
+    urlString = `https://${urlString}`;
+  }
+
+  try {
+    const url = new URL(urlString);
+    const hostname = url.hostname.toLowerCase();
+    return (
+      hostname === 'amazonaws.com' ||
+      hostname.endsWith('.amazonaws.com')
+    );
+  } catch {
+    return false;
+  }
+};
+
 export const createS3Service = (config: S3Config): S3Service => {
   const s3 = new S3Client({
     credentials: {
@@ -65,7 +87,7 @@
       secretAccessKey: config.secretKey,
     },
     endpoint: config.endpoint,
-    forcePathStyle: !config.endpoint.includes('amazonaws.com'),
+    forcePathStyle: !isAwsEndpoint(config.endpoint),
     region: config.region,
   });
 
EOF