Commit 757c844

tclaude committed
fix(orchestration): WorkflowDocumenter migrate chat_completion → chat (#7042)
Closes the runtime AttributeError discovered behind PR #7036 (#6983). After
the #3185 LLMInterface retirement, ``WorkflowDocumenter`` retained its
docstring-typed ``llm_interface`` slot and called
``self.llm_interface.chat_completion(...)`` — a method that doesn't exist on
LLMService. Boot succeeded (the slot is ``Optional[Any]``), but the summary
path raised AttributeError silently behind the broad ``except Exception``
guard at the call site, so workflow documentation always shipped without the
``generated_summary`` field.

Migration applied
-----------------

- Constructor param ``llm_interface`` → ``llm_service`` (also closes #7092
  item 2). The single external caller is ``orchestrator.py:535``, updated in
  the same commit.
- Stored attribute ``self.llm_interface`` → ``self.llm_service``.
- Method call ``self.llm_interface.chat_completion(...)`` →
  ``self.llm_service.chat(messages=...)`` per the LLMService contract.
- Response read pattern ``summary_result.get("content", "")`` →
  ``response.content`` gated by ``not response.error`` (LLMResponse is a
  Pydantic model with attribute access; the dict-style access would have been
  the same class of bug as #6940).
- Dropped the ``model="default"`` placeholder kwarg — LLMService picks its
  own default when ``model_name`` is omitted; the old
  LLMInterface.chat_completion accepted "default" as a sentinel that no
  longer exists.
- Cleaned up the transient ``# #6983: ...`` comment block at
  orchestrator.py:531-534 (the "tracked separately" follow-up landed in this
  commit; the comment is no longer accurate). Per CLAUDE.md style:
  rationale-only WHY comments, no issue-number prefix.
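The response-shape change described above is the subtle part of the migration: a Pydantic-style response object supports attribute access, not dict-style ``.get(...)``. A minimal standalone sketch of the pitfall — the ``LLMResponse`` here is a stand-in dataclass modeling only the two fields this commit reads, not the project's actual model:

```python
from dataclasses import dataclass
from typing import Optional


# Stand-in for the project's LLMResponse (Pydantic in the real codebase);
# only the two fields the summary path reads are modeled here.
@dataclass
class LLMResponse:
    content: str = ""
    error: Optional[str] = None


def read_summary(response: Optional[LLMResponse]) -> Optional[str]:
    """Post-migration read pattern: attribute access, gated on .error."""
    if response and not response.error:
        return response.content
    return None


ok = LLMResponse(content="Workflow completed in 3 steps.")
failed = LLMResponse(error="rate limit")

assert read_summary(ok) == "Workflow completed in 3 steps."
assert read_summary(failed) is None

# The pre-migration dict-style read would raise on this object:
# ok.get("content", "") would be AttributeError, since the response
# is an object with attributes, not a dict.
```

The silent-failure mode the commit describes comes from exactly this mismatch landing inside a broad ``except Exception`` guard.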
Verification
------------

Added ``orchestration/workflow_documentation_summary_test.py`` — 7 tests
pinning the contract:

- Constructor param renamed → only ``llm_service`` works
- Stored attribute renamed → ``self.llm_service`` only
- **Method-call regression pin**: stub ``llm_service.chat_completion`` set to
  ``AssertionError`` side-effect; the test fails if a future change
  reintroduces the legacy call
- No-LLM path early-returns without writing a summary field
- LLMResponse.error truthy → no partial summary written
- chat() raises → broad except still swallows + logs warning
- **Source-level pin**: assert ``.chat_completion(`` is absent from the
  migrated method's source

```
$ python3 -m pytest \
    autobot-backend/orchestration/workflow_documentation_summary_test.py -xvs
============================== 7 passed in 0.75s ===============================
```

Verified post-fix:

```
$ PYTHONPATH=autobot-backend:. python3 -c "import orchestrator"
orchestrator OK
$ grep -n "llm_interface" workflow_documentation.py orchestrator.py
(empty — all renamed)
```

Per the [verify-return-shape-in-method-migration](https://github.com/mrveiss/AutoBot-AI/blob/Dev_new_gui/.claude/projects/feedback_verify_return_shape_in_method_migration.md) memory:

- LLMService.chat() returns LLMResponse (verified via inspect.signature)
- LLMResponse.content: str / .error: Optional[str] (verified earlier)
- chat() accepts messages= kwarg (verified)

Closes #7042
Closes #7092 item 2 (WorkflowDocumenter param rename)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
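The "verified via inspect.signature" checks above can be reproduced as a standalone pre-migration sanity check. A sketch under stated assumptions — the ``LLMService`` class below is a stand-in modeling only the call shape this commit migrates to, not the real AutoBot service:

```python
import inspect


class LLMService:
    """Stand-in modeling only the call shape this commit migrates to."""

    async def chat(self, messages=None, model_name=None):
        """Returns an LLMResponse-shaped object in the real service."""


# Before rewriting call sites, confirm the target method exists and
# accepts the keyword arguments the migrated code will pass.
assert hasattr(LLMService, "chat")
sig = inspect.signature(LLMService.chat)
assert "messages" in sig.parameters

# And confirm the legacy name is really gone, so any surviving legacy
# call site fails loudly instead of being silently misrouted.
assert not hasattr(LLMService, "chat_completion")

print("contract checks passed")
```

Running a check like this before (and pinning it in a test after) a method-rename migration is what catches the class of bug this commit fixes.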
1 parent 2b3291b commit 757c844

3 files changed

Lines changed: 177 additions & 15 deletions

File tree

autobot-backend/orchestration/workflow_documentation.py

Lines changed: 7 additions & 10 deletions
```diff
@@ -29,17 +29,17 @@ class WorkflowDocumenter:
     def __init__(
         self,
         knowledge_base: Optional[Any] = None,
-        llm_interface: Optional[Any] = None,
+        llm_service: Optional[Any] = None,
     ):
         """
         Initialize the workflow documenter.

         Args:
             knowledge_base: Knowledge base for storing documentation
-            llm_interface: LLM interface for generating summaries
+            llm_service: LLM service used to generate workflow summaries
         """
         self.knowledge_base = knowledge_base
-        self.llm_interface = llm_interface
+        self.llm_service = llm_service
         self._documentation: Dict[str, WorkflowDocumentation] = {}
         self._metrics = {
             "documentation_generated": 0,
@@ -157,7 +157,7 @@ async def _generate_llm_summary(
             workflow_doc: The workflow documentation to update
             execution_result: Results from workflow execution
         """
-        if not self.llm_interface:
+        if not self.llm_service:
             return

         try:
@@ -172,15 +172,12 @@ async def _generate_llm_summary(
             Provide a brief summary of what was accomplished and any key insights.
             """

-            summary_result = await self.llm_interface.chat_completion(
-                model="default",
+            response = await self.llm_service.chat(
                 messages=[{"role": "user", "content": summary_prompt}],
             )

-            if summary_result:
-                workflow_doc.content["generated_summary"] = summary_result.get(
-                    "content", ""
-                )
+            if response and not response.error:
+                workflow_doc.content["generated_summary"] = response.content

         except Exception as e:
             logger.warning("Failed to generate workflow summary: %s", e)
```
autobot-backend/orchestration/workflow_documentation_summary_test.py

Lines changed: 169 additions & 0 deletions (new file)

```python
# AutoBot - AI-Powered Automation Platform
# Copyright (c) 2025 mrveiss
# Author: mrveiss
"""Contract tests for ``WorkflowDocumenter._generate_llm_summary`` (#7042).

After the #3185 LLMInterface retirement, the documenter still called
``self.llm_interface.chat_completion(...)`` — a method that doesn't exist
on LLMService (LLMService exposes ``chat()``). The summary path raised
AttributeError silently behind the broad ``except Exception`` guard at
the call site, so workflow documentation always shipped without the
``generated_summary`` field.

These tests pin the migrated shape:

- Param renamed ``llm_interface`` → ``llm_service``
- Method call renamed ``chat_completion`` → ``chat``
- Response is read via ``.content`` / ``.error`` (LLMResponse contract)
"""

from __future__ import annotations

import inspect
from datetime import datetime, timezone
from typing import Any
from unittest.mock import AsyncMock

import pytest

from orchestration.types import DocumentationType, WorkflowDocumentation
from orchestration.workflow_documentation import WorkflowDocumenter


@pytest.fixture
def make_llm_response():
    """Return a factory that builds a stub ``LLMResponse``-shaped object."""

    class _StubResponse:
        def __init__(self, *, content: str = "", error: str | None = None):
            self.content = content
            self.error = error

    return _StubResponse


def _make_workflow_doc(workflow_id: str = "wf-1") -> WorkflowDocumentation:
    """Minimal WorkflowDocumentation suitable for the summary path."""
    now = datetime.now(tz=timezone.utc)
    return WorkflowDocumentation(
        workflow_id=workflow_id,
        title="Test workflow",
        description="A test workflow for the summary path",
        created_at=now,
        updated_at=now,
        documentation_type=DocumentationType.WORKFLOW_SUMMARY,
        content={},
    )


# ---------------------------------------------------------------------------
# Constructor contract — param rename + attribute rename
# ---------------------------------------------------------------------------


def test_constructor_accepts_llm_service_keyword() -> None:
    """The post-#7042 keyword is ``llm_service`` (renamed from
    ``llm_interface`` to match LLMService convention)."""
    sig = inspect.signature(WorkflowDocumenter.__init__)
    params = list(sig.parameters.keys())
    assert "llm_service" in params
    assert "llm_interface" not in params, (
        "param must be renamed to 'llm_service' — orchestrator.py and any "
        "future caller depend on the new name"
    )


def test_attribute_renamed_to_llm_service() -> None:
    """Stored attribute must be ``self.llm_service`` (not the old
    ``self.llm_interface``)."""
    documenter = WorkflowDocumenter(llm_service=object())
    assert hasattr(documenter, "llm_service")
    assert not hasattr(documenter, "llm_interface")


# ---------------------------------------------------------------------------
# Method-call contract — chat() not chat_completion()
# ---------------------------------------------------------------------------


@pytest.mark.asyncio
async def test_summary_calls_llm_service_chat_not_chat_completion(
    make_llm_response: Any,
) -> None:
    """Regression pin for the original #7042 bug: the helper must call
    ``llm_service.chat(...)``, never the legacy ``chat_completion``.
    """
    llm_service = AsyncMock()
    llm_service.chat = AsyncMock(
        return_value=make_llm_response(content="Summary text", error=None)
    )
    # If a future change resurrects the legacy method call, fail loudly.
    llm_service.chat_completion = AsyncMock(side_effect=AssertionError("legacy method"))

    documenter = WorkflowDocumenter(llm_service=llm_service)
    workflow_doc = _make_workflow_doc()

    await documenter._generate_llm_summary(workflow_doc, {"status": "completed"})

    llm_service.chat.assert_awaited_once()
    llm_service.chat_completion.assert_not_awaited()
    assert workflow_doc.content.get("generated_summary") == "Summary text"


@pytest.mark.asyncio
async def test_summary_skipped_when_no_llm_service(make_llm_response: Any) -> None:
    """No LLM configured → early-return, no exception, no summary field."""
    documenter = WorkflowDocumenter(llm_service=None)
    workflow_doc = _make_workflow_doc()

    await documenter._generate_llm_summary(workflow_doc, {"status": "completed"})

    assert "generated_summary" not in workflow_doc.content


@pytest.mark.asyncio
async def test_summary_skipped_on_llm_response_error(make_llm_response: Any) -> None:
    """LLMResponse.error truthy → skip (no partial/poisoned summary written)."""
    llm_service = AsyncMock()
    llm_service.chat = AsyncMock(
        return_value=make_llm_response(content="", error="rate limit")
    )

    documenter = WorkflowDocumenter(llm_service=llm_service)
    workflow_doc = _make_workflow_doc()

    await documenter._generate_llm_summary(workflow_doc, {"status": "completed"})

    assert "generated_summary" not in workflow_doc.content


@pytest.mark.asyncio
async def test_summary_swallows_exception_and_logs(
    make_llm_response: Any, caplog
) -> None:
    """The broad except-guard remains — ensures one bad summary call
    doesn't break the rest of the documentation pipeline.
    """
    import logging

    llm_service = AsyncMock()
    llm_service.chat = AsyncMock(side_effect=RuntimeError("boom"))

    documenter = WorkflowDocumenter(llm_service=llm_service)
    workflow_doc = _make_workflow_doc()

    with caplog.at_level(logging.WARNING):
        await documenter._generate_llm_summary(workflow_doc, {"status": "completed"})

    assert "generated_summary" not in workflow_doc.content
    assert any(
        "Failed to generate workflow summary" in r.message for r in caplog.records
    )


# ---------------------------------------------------------------------------
# Source-level pin — catches reintroduction of the bug
# ---------------------------------------------------------------------------


def test_source_does_not_call_chat_completion() -> None:
    """Read the source of the migrated method and assert the legacy
    method name is gone. Catches future regressions even if no test
    runtime path covers them.
    """
    src = inspect.getsource(WorkflowDocumenter._generate_llm_summary)
    assert ".chat_completion(" not in src, (
        "_generate_llm_summary must call llm_service.chat(...), "
        "not the deleted-from-LLMService chat_completion(...)"
    )
    assert ".chat(" in src, "expected a llm_service.chat(...) call"
```

autobot-backend/orchestrator.py

Lines changed: 1 addition & 5 deletions
```diff
@@ -528,13 +528,9 @@ async def _process_simple_request(

     def _get_enhanced_documenter(self) -> WorkflowDocumenter:
         if not hasattr(self, "_enh_documenter") or self._enh_documenter is None:
-            # #6983: WorkflowDocumenter still names its dep `llm_interface` (legacy slot,
-            # typed Optional[Any]). Pass the LLMService instance — the rename is tracked
-            # separately. WorkflowDocumenter's internal call to .chat_completion() is
-            # broken-on-LLMService (different method name) — see follow-up tracker.
             self._enh_documenter = WorkflowDocumenter(
                 knowledge_base=self.knowledge_base,
-                llm_interface=self.llm_service,
+                llm_service=self.llm_service,
             )
         return self._enh_documenter
```
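For context, ``_get_enhanced_documenter`` is a lazy-initialize-and-cache accessor: the documenter is built on first use and reused afterwards. A standalone sketch under stated assumptions — the ``Orchestrator`` and ``WorkflowDocumenter`` stand-ins below model only the shapes this commit touches, not the real classes:

```python
from typing import Any, Optional


class WorkflowDocumenter:
    """Stand-in; only the post-#7042 constructor shape matters here."""

    def __init__(
        self,
        knowledge_base: Optional[Any] = None,
        llm_service: Optional[Any] = None,
    ):
        self.knowledge_base = knowledge_base
        self.llm_service = llm_service


class Orchestrator:
    """Stand-in host for the lazy accessor pattern."""

    def __init__(self, knowledge_base: Any = None, llm_service: Any = None):
        self.knowledge_base = knowledge_base
        self.llm_service = llm_service

    def _get_enhanced_documenter(self) -> WorkflowDocumenter:
        # Lazy init: construct once, cache on the instance, reuse after.
        if not hasattr(self, "_enh_documenter") or self._enh_documenter is None:
            self._enh_documenter = WorkflowDocumenter(
                knowledge_base=self.knowledge_base,
                llm_service=self.llm_service,
            )
        return self._enh_documenter


orch = Orchestrator(knowledge_base=object(), llm_service=object())
# Repeated calls return the same cached instance.
assert orch._get_enhanced_documenter() is orch._get_enhanced_documenter()
```

Because the accessor is the single construction point, renaming the constructor keyword here is the only external change the migration needs.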
