Sync: New API endpoint (from #10643) #120

@RapierCraft

Description

Ecosystem Sync

Source: RapierCraftStudios/AlterLab @ staging
PR: #10643
Commits:

07a19df Merge pull request #10643 from RapierCraftStudios/milestone/api-docs-auto-sync
e5738d8 merge: resolve import conflict in scrape schema
d5b06aa Merge pull request #10630 from RapierCraftStudios/feat/ci-spec-export
1eb7e55 feat(ci): add docs validation and spec export to test-web job (#10559)
7191407 feat(ci): export OpenAPI spec before web Docker build (#10559)
729ae30 feat(ci): add shell wrapper for OpenAPI spec export (#10559)
e922fe2 Merge pull request #10621 from RapierCraftStudios/feat/sdk-sync-openapi
3c23a43 Merge pull request #10614 from RapierCraftStudios/feat/x-internal-markers
61c7ef5 feat(ci): update ecosystem-sync to use OpenAPI spec for SDK drift (#10558)
bbc175e feat(sync): add SDK drift detection script against OpenAPI spec (#10558)
9c036bc feat(schemas): add x-internal markers to business-sensitive fields (#10557)
776bc2a Merge pull request #10613 from RapierCraftStudios/feat/makefile-sync-validate
77ec1cd feat(make): add sync-docs and validate-docs targets (#10555)
f0df7fd Merge pull request #10609 from RapierCraftStudios/feat/enrich-decorators
f83a8b4 Merge pull request #10608 from RapierCraftStudios/feat/doc-undocumented-endpoints
093f972 feat(api): enrich FastAPI decorators with OpenAPI metadata (#10551)
bea7b39 docs(api): document undocumented public endpoints (#10556)
4d695d6 Merge pull request #10592 from RapierCraftStudios/feat/validate-docs
164c5f9 feat(scripts): add docs-vs-OpenAPI-spec consistency validator (#10553)
0e840cc Merge pull request #10591 from RapierCraftStudios/feat/serve-generated-spec
8f080fc feat(web): serve generated OpenAPI spec instead of hand-written object (#10550)
b198d17 Merge pull request #10577 from RapierCraftStudios/feat/openapi-export-script
a07099b feat(scripts): add OpenAPI export script with admin/business data filtering (#10549)
019abea Merge pull request #10561 from RapierCraftStudios/feat/admin-include-in-schema
2a049ab Merge pull request #10562 from RapierCraftStudios/feat/doc-inaccuracy-fixes
af86b00 feat(api): add include_in_schema=False to all admin and internal routers (#10552)
f892e6b fix(docs): correct timeout default, formats list, and webhook path (#10554)

Detected by: ecosystem-sync.yml workflow


What to Update in MCP Server

Review the diff below and update the relevant tool schemas, client code, and types.

Likely files

  • src/tools/ — Tool schemas
  • src/client.ts — API client
  • src/types.ts — Type definitions
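To scope the type work, here is a minimal sketch of what the src/types.ts update might look like, based on the unified-endpoint description in the diff below. All interface, field, and function names here are assumptions for illustration, not the actual file contents:

```typescript
// Hypothetical sketch for src/types.ts — names are assumptions.
// Modes and defaults are taken from the unified /api/v1/scrape description.
type ScrapeMode = "auto" | "html" | "js" | "pdf" | "ocr";

interface CostControls {
  max_cost?: number; // cap spend; requests cost 1–20 credits by tier
}

interface UnifiedScrapeRequest {
  url: string;
  mode?: ScrapeMode;            // default "auto": engine detects best mode
  sync?: boolean;               // default true: block up to 120 s for result
  cost_controls?: CostControls;
}

// Fill in optional fields with the documented server-side defaults.
function withDefaults(
  req: UnifiedScrapeRequest
): UnifiedScrapeRequest & { mode: ScrapeMode; sync: boolean } {
  return { mode: "auto", sync: true, ...req };
}
```

This keeps the defaults in one place so tool schemas and the client agree on what an omitted `mode` or `sync` means.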

API Changes (Diff)

services/api/app/routers/scrape_js.py

diff --git a/services/api/app/routers/scrape_js.py b/services/api/app/routers/scrape_js.py
index 56da72a..36c6b0e 100644
--- a/services/api/app/routers/scrape_js.py
+++ b/services/api/app/routers/scrape_js.py
@@ -34,7 +34,14 @@ def extract_content_string(content) -> str:
     return ""
 
 
-@router.post("/js", response_model=ScrapeResponse)
+@router.post(
+    "/js",
+    response_model=ScrapeResponse,
+    deprecated=True,
+    summary="Scrape (JS render mode) [DEPRECATED]",
+    description="**Deprecated**: Use POST /api/v1/scrape with mode='js' instead.",
+    response_description="Scrape result",
+)
 async def scrape_js(
     request: ScrapeJSRequest,
     http_request: Request,

services/api/app/routers/scrape_light.py

diff --git a/services/api/app/routers/scrape_light.py b/services/api/app/routers/scrape_light.py
index 6c45c02..e10e32c 100644
--- a/services/api/app/routers/scrape_light.py
+++ b/services/api/app/routers/scrape_light.py
@@ -34,7 +34,14 @@ def extract_content_string(content) -> str:
     return ""
 
 
-@router.post("/light", response_model=ScrapeResponse)
+@router.post(
+    "/light",
+    response_model=ScrapeResponse,
+    deprecated=True,
+    summary="Scrape (light mode) [DEPRECATED]",
+    description="**Deprecated**: Use POST /api/v1/scrape with mode='html' instead.",
+    response_description="Scrape result",
+)
 async def scrape_light(
     request: ScrapeLightRequest,
     http_request: Request,

services/api/app/routers/scrape_ocr.py

diff --git a/services/api/app/routers/scrape_ocr.py b/services/api/app/routers/scrape_ocr.py
index 8710e42..0a9be65 100644
--- a/services/api/app/routers/scrape_ocr.py
+++ b/services/api/app/routers/scrape_ocr.py
@@ -34,7 +34,14 @@ def extract_content_string(content) -> str:
     return ""
 
 
-@router.post("/ocr", response_model=ScrapeResponse)
+@router.post(
+    "/ocr",
+    response_model=ScrapeResponse,
+    deprecated=True,
+    summary="Scrape (OCR mode) [DEPRECATED]",
+    description="**Deprecated**: Use POST /api/v1/scrape with mode='ocr' instead.",
+    response_description="Scrape result",
+)
 async def scrape_ocr(
     request: ScrapeOCRRequest,
     http_request: Request,

services/api/app/routers/scrape_pdf.py

diff --git a/services/api/app/routers/scrape_pdf.py b/services/api/app/routers/scrape_pdf.py
index 92d8110..a983d99 100644
--- a/services/api/app/routers/scrape_pdf.py
+++ b/services/api/app/routers/scrape_pdf.py
@@ -34,7 +34,14 @@ def extract_content_string(content) -> str:
     return ""
 
 
-@router.post("/pdf", response_model=ScrapeResponse)
+@router.post(
+    "/pdf",
+    response_model=ScrapeResponse,
+    deprecated=True,
+    summary="Scrape (PDF mode) [DEPRECATED]",
+    description="**Deprecated**: Use POST /api/v1/scrape with mode='pdf' instead.",
+    response_description="Scrape result",
+)
 async def scrape_pdf(
     request: ScrapePDFRequest,
     http_request: Request,
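All four per-mode routes above are now flagged deprecated in favor of the unified endpoint, so the MCP client can route legacy calls through a single mode map rather than four separate code paths. A sketch — the map and function names are hypothetical, not the actual src/client.ts contents:

```typescript
// Hypothetical migration helper — names are assumptions.
// Maps each deprecated per-mode route to its replacement mode on
// POST /api/v1/scrape, exactly as the deprecation notices state.
type ScrapeMode = "html" | "js" | "pdf" | "ocr";

const legacyEndpointToMode: Record<string, ScrapeMode> = {
  "/js": "js",      // "Use POST /api/v1/scrape with mode='js' instead."
  "/light": "html", // note: /light maps to mode='html', not 'light'
  "/ocr": "ocr",
  "/pdf": "pdf",
};

function migrateLegacyCall(
  legacyPath: string
): { path: string; mode: ScrapeMode } {
  const mode = legacyEndpointToMode[legacyPath];
  if (mode === undefined) {
    throw new Error(`Not a deprecated scrape route: ${legacyPath}`);
  }
  return { path: "/api/v1/scrape", mode };
}
```

The `/light` → `mode='html'` rename is the one non-obvious mapping; the other three keep their names.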

services/api/app/routers/scrape_unified.py

diff --git a/services/api/app/routers/scrape_unified.py b/services/api/app/routers/scrape_unified.py
index 3a1f9e5..8cece50 100644
--- a/services/api/app/routers/scrape_unified.py
+++ b/services/api/app/routers/scrape_unified.py
@@ -600,7 +600,33 @@ def should_use_inline_execution(request: UnifiedScrapeRequest) -> bool:
 # - Removed 156 lines of dead code
 
 
-@router.post("", response_model=UnifiedScrapeResponse)
+@router.post(
+    "",
+    response_model=UnifiedScrapeResponse,
+    summary="Scrape a web page",
+    description=(
+        "Scrape a single URL with intelligent tier escalation. The engine "
+        "automatically selects the cheapest scraping tier that succeeds "
+        "(curl → HTTP → stealth → browser → captcha solver) and only charges "
+        "for the tier actually used.\n\n"
+        "**Modes**: `auto` (default — detect best mode), `html`, `js` "
+        "(headless browser), `pdf`, `ocr`.\n\n"
+        "**Sync vs async**: `sync=true` (default) blocks until the result is "
+        "ready (up to 120 s) and returns **200** with content. `sync=false` "
+        "returns **202** immediately with a `job_id` for polling via "
+        "`GET /v1/jobs/{job_id}`.\n\n"
+        "**Cost**: 1–20 credits per request depending on the tier used. "
+        "Use `cost_controls.max_cost` to cap spend."
+    ),
+    response_description="Scrape result with content, metadata, and billing breakdown",
+    responses={
+        200: {"description": "Scrape completed (sy
... (truncated)
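The sync/async contract documented above (200 with content when `sync=true`, 202 with a `job_id` for polling via `GET /v1/jobs/{job_id}` when `sync=false`) is worth encoding explicitly when updating src/client.ts. A minimal sketch with hypothetical names:

```typescript
// Hypothetical response handler — names are assumptions.
// Per the endpoint docs: 200 = scrape completed, body holds the result;
// 202 = job accepted, poll GET /v1/jobs/{job_id} for the outcome.
type ScrapeOutcome =
  | { kind: "done"; body: unknown }
  | { kind: "pending"; pollUrl: string };

function interpretScrapeResponse(
  status: number,
  body: { job_id?: string }
): ScrapeOutcome {
  if (status === 200) return { kind: "done", body };
  if (status === 202 && typeof body.job_id === "string") {
    return { kind: "pending", pollUrl: `/v1/jobs/${body.job_id}` };
  }
  throw new Error(`Unexpected scrape response: HTTP ${status}`);
}
```

Branching on a typed outcome keeps the MCP tool handler from accidentally treating a 202 job ticket as scrape content.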

Source Files Changed

  • services/api/app/routers/scrape_js.py
  • services/api/app/routers/scrape_light.py
  • services/api/app/routers/scrape_ocr.py
  • services/api/app/routers/scrape_pdf.py
  • services/api/app/routers/scrape_unified.py

Acceptance Criteria

  • Parameter/endpoint parity with AlterLab API
  • TypeScript types updated
  • Build passes (npm run build)
  • Tested against local AlterLab instance

Metadata

  • Assignees: no one assigned
  • Labels: P2 (Medium priority), sync (Sync with AlterLab API changes)
  • Status: Todo
  • Milestone: no milestone