
Conversation

@issuetopr-dev issuetopr-dev bot commented Jan 4, 2026

Summary

  • Adds a manual end-to-end Jest test that exercises the full workflow run lifecycle for autoResolveIssue.
  • The test enqueues a job via the Next.js API route, then polls the Workflow Runs list and events APIs to verify events are persisted and the workflow completes.

What’s included

  • tests/config/jest.config.workflow.e2e.ts: Dedicated Jest config for E2E workflow tests (20 min timeout).
  • tests/e2e/workflow-run.workflow.e2e.test.ts: The E2E test itself.
  • package.json: Adds script pnpm run test:e2e:workflow.

How it works

  1. Mocks next-auth’s auth() to provide a minimal session profile (GitHub login) needed by the enqueue route.
  2. Calls POST /api/queues/[queueId]/jobs with name=autoResolveIssue and your repo/issue.
  3. Polls GET /api/workflow-runs?repo=&issue= to discover the latest run for the issue.
  4. Polls GET /api/workflow-runs/[workflowId]/events until it sees workflow.completed.
  5. Asserts event persistence and completion. (Optionally logs status messages.)
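
Sketched in TypeScript, the enqueue-then-poll loop in steps 3–4 might look like this (the endpoint paths come from this PR; the `pollUntil` helper, base URL, and polling intervals are illustrative assumptions, not the actual test code):

```typescript
// Illustrative polling helper: retries an async probe until it yields a
// value or the deadline passes. Intervals/timeouts here are placeholders.
const BASE = "http://localhost:3000"

async function pollUntil<T>(
  fetchOnce: () => Promise<T | undefined>,
  { intervalMs = 5000, timeoutMs = 20 * 60 * 1000 } = {}
): Promise<T> {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    const result = await fetchOnce()
    if (result !== undefined) return result
    await new Promise((r) => setTimeout(r, intervalMs))
  }
  throw new Error("Timed out waiting for condition")
}

// Step 3 (sketch): discover the latest run for the issue
// const run = await pollUntil(async () => {
//   const res = await fetch(`${BASE}/api/workflow-runs?repo=${repo}&issue=${issue}`)
//   const runs = await res.json()
//   return runs[0] // undefined until a run appears
// })
```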

Environment and execution

  • This test is intended to be run manually and is skipped by default.
  • To run, ensure your local stack is running (as with pnpm dev): Redis, Neo4j, workers, and the Next.js app, with GitHub/OpenAI credentials set.

Required env vars for the test:

  • RUN_WORKFLOW_E2E=true
  • E2E_REPO_FULL_NAME=owner/repo (e.g., youngchingjui/openai-realtime-agents-test-playground)
  • E2E_ISSUE_NUMBER=123
  • E2E_GITHUB_LOGIN=your-github-username (used to mock auth())

Optional:

  • E2E_BRANCH=feature/e2e-test-branch
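
For reference, a non-sensitive example env file covering these variables might look like the following (placeholder values, not real defaults from the repo):

```shell
# Example test env file — placeholder values only; keep real values out of VCS
RUN_WORKFLOW_E2E=true
E2E_REPO_FULL_NAME=owner/repo
E2E_ISSUE_NUMBER=123
E2E_GITHUB_LOGIN=your-github-username
# Optional
E2E_BRANCH=feature/e2e-test-branch
```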

Run it:

  • pnpm run test:e2e:workflow

Notes

  • We intentionally avoid hard-asserting PR creation to reduce flakiness across environments. The workflow is still designed to create a PR, and you can verify it on GitHub after the run completes.
  • The test only runs when RUN_WORKFLOW_E2E=true to prevent accidental execution in CI (costly and slow).

Closes #1460


Updates in response to review

  • Remove RUN_WORKFLOW_E2E gate: The E2E test now runs exclusively under its own Jest project via pnpm test:e2e:workflow, so no special env flag is required. This keeps it out of CI by default.
  • Preflight checks: Added lightweight checks at test start to verify:
    • Required env vars for the run (E2E_REPO_FULL_NAME, E2E_ISSUE_NUMBER)
    • Redis availability (REDIS_URL + PING)
    • Neo4j availability (simple health check)
      If a check fails, the test exits early with a clear console warning and setup reminders.
  • Test env example: Added __tests__/env.workflow.e2e.example (non-sensitive) to document the test variables. The test auto-loads __tests__/.env.workflow.e2e if present.
  • Clarify Next.js origin: For the workflow-runs list call, use http://localhost:3000 and add a comment explaining that the origin is only used to construct a URL for request.nextUrl (no HTTP request is made).
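
As a sketch, the env-var portion of those preflight checks could look like this (function and constant names are illustrative; the Redis PING and Neo4j health checks would follow the same early-exit pattern):

```typescript
// Illustrative preflight check: report which required E2E vars are unset
// or blank so the test can exit early with a clear message.
const REQUIRED_E2E_VARS = ["E2E_REPO_FULL_NAME", "E2E_ISSUE_NUMBER"] as const

function missingE2EVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED_E2E_VARS.filter((name) => !env[name]?.trim())
}

// At test start (sketch):
// const missing = missingE2EVars(process.env)
// if (missing.length > 0) {
//   console.warn(
//     `Skipping workflow E2E: set ${missing.join(", ")} ` +
//       `(see __tests__/env.workflow.e2e.example)`
//   )
//   return
// }
```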

Notes

  • We kept the PR-creation assertion optional to avoid flakiness. If desired, we can add a follow-up flag to stub PR creation at the tool-call boundary for fully local runs.

…nfig: __tests__/config/jest.config.workflow.e2e.ts
- New test: __tests__/e2e/workflow-run.workflow.e2e.test.ts
  - Enqueues autoResolveIssue via API route
  - Polls workflow run list and events APIs
  - Awaits workflow.completed and validates persisted events
- Add npm script: test:e2e:workflow

This long-running, external-API test is skipped by default and only runs when RUN_WORKFLOW_E2E=true. It is designed to be executed manually against a fully running local stack (Redis, Neo4j, workers, GitHub/OpenAI credentials).
@issuetopr-dev issuetopr-dev bot added the AI generated (AI-generated Pull Requests) label Jan 4, 2026
@coderabbitai
Contributor

coderabbitai bot commented Jan 4, 2026

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Comment @coderabbitai help to get the list of available commands and usage tips.

@youngchingjui
Owner

I wonder if we could mock the PR creation write at the tool call boundary. For example, I expect the LLM agent to call the create PR tool call. And when that's called, we don't need to actually make a PR - we can mock the actual execution of that tool.
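
One way to do that, assuming the agent resolves tools through some registry keyed by tool name (a guess about this codebase, not its actual API), is to swap in a recording stub:

```typescript
// Hypothetical tool-boundary stub: replace a tool's handler with one that
// records the arguments it was called with and returns a canned result,
// so the LLM agent "creates a PR" without any GitHub write.
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>

function stubTool(
  registry: Map<string, ToolHandler>,
  name: string,
  result: unknown
): Array<Record<string, unknown>> {
  const calls: Array<Record<string, unknown>> = []
  registry.set(name, async (args) => {
    calls.push(args) // record the attempted call
    return result // no real PR is created
  })
  return calls
}

// Usage sketch (tool name is assumed):
// const calls = stubTool(tools, "create_pull_request", { url: "https://example.invalid/pr/1" })
// ...run the workflow...
// expect(calls).toHaveLength(1) // the agent attempted PR creation
```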


;(shouldRun ? describe : describe.skip)("Workflow lifecycle E2E", () => {
it("launches autoResolveIssue via queue and observes completion + events", async () => {
const repoFullName = process.env.E2E_REPO_FULL_NAME
Owner

Where are these env variables saved? If they're in the tests folder (not root), then that's good. We should create a .env.example file if it doesn't already exist with default non-sensitive values.

}
}

const shouldRun = process.env.RUN_WORKFLOW_E2E === "true"
Owner

Instead of using an env variable to check if "shouldRun", I was expecting this to be another class of tests, where we would run something like "pnpm test:e2e" just like we do with the neo4j tests. So then we wouldn't need this env variable.

Seems like there's a "--selectProjects" option that might be useful.
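
For illustration, a multi-project Jest setup along those lines could look like this in `jest.config.ts` (project names and paths are made up); with it, `jest --selectProjects workflow-e2e` runs only the E2E suite:

```typescript
// Sketch of a multi-project Jest config; CI would run the default projects
// while the E2E project is selected explicitly via --selectProjects or a
// dedicated --config, as with the neo4j tests mentioned above.
import type { Config } from "jest"

const config: Config = {
  projects: [
    {
      displayName: "unit",
      testMatch: ["<rootDir>/tests/unit/**/*.test.ts"],
    },
    {
      displayName: "workflow-e2e",
      testMatch: ["<rootDir>/__tests__/e2e/**/*.workflow.e2e.test.ts"],
      testTimeout: 20 * 60 * 1000, // 20 min, matching the E2E config
    },
  ],
}

export default config
```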


// Step 2: Poll the Workflow Runs list API for this specific issue until a run appears
// Build a NextRequest with query params: ?repo=...&issue=...
const listUrl = new URL("http://localhost/api/workflow-runs")
Owner

Does this work without specifying the port for NextJS? Usually 3000. Double check.
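
For comparison, constructing the URL with the port made explicit (the origin this PR later settles on) looks like:

```typescript
// Build the list URL with an explicit port; per the later resolution in this
// PR, the origin only satisfies URL parsing for request.nextUrl, so port 3000
// simply keeps it consistent with the local dev server. Values are examples.
const listUrl = new URL("http://localhost:3000/api/workflow-runs")
listUrl.searchParams.set("repo", "owner/repo") // "/" is percent-encoded
listUrl.searchParams.set("issue", "123")
// listUrl.toString() === "http://localhost:3000/api/workflow-runs?repo=owner%2Frepo&issue=123"
```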

// IMPORTANT:
// - This test is intended to be run manually. It will be skipped unless
// RUN_WORKFLOW_E2E === "true".
// - It expects your local environment to be running just like `pnpm dev`:
Owner

Is there a way to check if these services are running and exit early if they're not? With clear messaging and reminders saying that these services need to be running

// - Proper GitHub App + OpenAI credentials in env
//
// Required env vars for this test:
// RUN_WORKFLOW_E2E=true
Owner

Take this var out. Just use pnpm test:e2e:workflow and wrap these e2e workflows under a project or something so they're not normally run during CI

…e env file

- Drop RUN_WORKFLOW_E2E flag; rely on dedicated Jest project (pnpm test:e2e:workflow)
- Add preflight checks for required env, Redis, and Neo4j connectivity with clear messages
- Load optional __tests__/.env.workflow.e2e via dotenv
- Provide __tests__/env.workflow.e2e.example with non-sensitive defaults and guidance
- Clarify URL origin/port usage and set to http://localhost:3000
@vercel vercel bot temporarily deployed to Preview – issue-to-pr-storybook January 6, 2026 15:40 Inactive
@vercel vercel bot temporarily deployed to Preview – issue-to-pr-realtime January 6, 2026 15:40 Inactive
@vercel vercel bot temporarily deployed to Preview – issue-to-pr-realtime January 13, 2026 07:39 Inactive
@vercel vercel bot temporarily deployed to Preview – issue-to-pr-storybook January 13, 2026 07:39 Inactive

Labels

AI generated (AI-generated Pull Requests)

Development

Successfully merging this pull request may close these issues.

Add end-to-end Jest test for workflow run lifecycle

2 participants