history of the issue
When a model is forced to provide structured output, it will no longer output text messages, as described in #2158 (comment).
It would be helpful to allow non-reasoning models to still "reason" using text output.
idea
While discussing this issue on a Slack thread, namely with @DouweM and @dmontagu, a new idea arose:
Allow the model to output structured output or text and only force it to use structured output if it's the last message.
elaboration
So the proposed agent mode would effectively do what the following sketch does with current Pydantic AI tooling (using a list of output types plus an output validator that rejects plain text):

```python
from pydantic_ai import Agent, ModelRetry

agent = Agent('openai:gpt-4o', output_type=[str, MyOutputType])

@agent.output_validator
def validator(output: str | MyOutputType) -> MyOutputType:
    if isinstance(output, str):
        raise ModelRetry("please use the structured output tool, not a plain string")
    return output

result = agent.run_sync(...)
```
But instead of nicely asking the LLM to use the other tool, the last tool call could be re-issued with a toolset that has only the structured output tool at its disposal. Pydantic AI could do this internally, subsequently forcing a structured output for the last tool call (if it didn't work without forcing).
considerations
Issues that I could imagine with this mode:
- what if the AI's message was some kind of "I give up", which does not fit the format of a structured "success" output? On the other hand, this issue would also occur if the only available tool to begin with was the "success" tool...
- it could result in more messages than otherwise needed, raising costs and making the agent's behavior less understandable for a user...
References
The Slack thread this idea originated in: https://pydanticlogfire.slack.com/archives/C083V7PMHHA/p1756933362513579