## Summary
The LLM module directly accesses dictionary keys without error handling, which will crash if the response format differs from expected.
## Current code

From `src/blogtuner/ai/llm.py:48-50`:

```python
response = model.prompt(prompt, schema=MarkdDownContent)
markdown_content = json.loads(response.text())["markdown"]
return markdown_content
```

## Issues

- No try/except around `json.loads()`: will crash on invalid JSON
- Direct `["markdown"]` access: will crash if the key is missing
- No validation that the response matches the expected schema
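The first two failure modes above can be demonstrated without a real LLM by feeding raw strings to the same unguarded parsing pattern; the `extract_markdown` helper and its inputs below are hypothetical, used only to illustrate the crashes:

```python
import json

def extract_markdown(raw: str) -> str:
    # Mirrors the current code: no guard around parsing or key access.
    return json.loads(raw)["markdown"]

# Invalid JSON from the model -> json.JSONDecodeError
try:
    extract_markdown("not json at all")
except json.JSONDecodeError as e:
    print(f"JSONDecodeError: {e}")

# Valid JSON, but the expected key is missing -> KeyError
try:
    extract_markdown('{"content": "# Title"}')
except KeyError as e:
    print(f"KeyError: {e}")
```

Both exceptions would propagate uncaught out of the module as written.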
## Proposed change

```python
try:
    response = model.prompt(prompt, schema=MarkdDownContent)
    data = json.loads(response.text())
    markdown_content = data.get("markdown")
    if markdown_content is None:
        raise ValueError("LLM response missing 'markdown' field")
    return markdown_content
except json.JSONDecodeError as e:
    raise ValueError(f"Invalid JSON response from LLM: {e}") from e
```

## Priority
High
## File

`src/blogtuner/ai/llm.py`
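The proposed guard can be exercised without a real model by stubbing the response object; `FakeResponse` and `parse_markdown` below are hypothetical names introduced only for this sketch, not part of the module's API:

```python
import json

class FakeResponse:
    """Stand-in for the LLM response; only .text() is needed here."""
    def __init__(self, payload: str):
        self._payload = payload

    def text(self) -> str:
        return self._payload

def parse_markdown(response: FakeResponse) -> str:
    # Same guard logic as the proposed change, factored out for testing.
    try:
        data = json.loads(response.text())
        markdown_content = data.get("markdown")
        if markdown_content is None:
            raise ValueError("LLM response missing 'markdown' field")
        return markdown_content
    except json.JSONDecodeError as e:
        raise ValueError(f"Invalid JSON response from LLM: {e}") from e

# Happy path returns the markdown string unchanged.
print(parse_markdown(FakeResponse('{"markdown": "# Hello"}')))

# Both failure modes now surface as ValueError with a clear message.
for bad in ["not json", '{"other": 1}']:
    try:
        parse_markdown(FakeResponse(bad))
    except ValueError as e:
        print(f"ValueError: {e}")
```

With this shape, callers only need to handle `ValueError` rather than two unrelated exception types.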