Returning pydantic result of tool as final_result #847

Open

bbkgh opened this issue Feb 4, 2025 · 4 comments
Comments

@bbkgh commented Feb 4, 2025

Hi. Below is a simple agent that I expect to return the result of my_func (list[ProductsResponse]) as the final result, but it fails to do so (the model is probably powerful enough). Is there any mechanism for this?

Something like marking certain tools as final tools, so that their return value becomes the response of agent.run and execution stops there, or setting something in ctx to end the flow (e.g. ctx.break = True).

Marking tools as final would also improve performance, since it saves the extra LLM call spent re-parsing their output.

from dataclasses import dataclass

import logfire
from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext
from pydantic_ai.models.openai import OpenAIModel

logfire.configure(send_to_logfire="if-token-present")

@dataclass
class Deps:
    url: str


class ProductsResponse(BaseModel):
    """Use this for result of search"""

    handle: str = Field(description="handle of product")
    title: str = Field(description="title of product")
    image_url: str = Field(description="image_url of product")


# Points OpenAIModel at a self-hosted OpenAI-compatible endpoint (Ollama).
_model = OpenAIModel(model_name="llama3.3", base_url="https://ollama.mysite.com/v1")

new_agent = Agent(
    _model,
    result_type=list[ProductsResponse],  # type: ignore
    deps_type=Deps,
    model_settings={"temperature": 0.0},
    retries=1,
)


@new_agent.system_prompt
async def make_general_system_prompts(ctx: RunContext[Deps]) -> str:
    result = """Always use my_func tool"""
    return result


# The tool whose return value I want to become the agent's final result.
@new_agent.tool
async def my_func(ctx: RunContext[Deps]) -> list[ProductsResponse]:
    return [ProductsResponse(title="test", handle="test", image_url="test")]

if __name__ == "__main__":
    deps = Deps(url="google.com")
    result = new_agent.run_sync("hi", deps=deps)
    print(result.data)
    print(result.new_messages_json().decode("utf8"))
    # {"message_id":}
@aristideubertas

I'm also looking into how to return a tool's output without having the LLM ingest it into its context and regurgitate it.
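
One possible workaround (just a sketch, not a built-in pydantic-ai feature) is to have the tool stash its full output on the deps object and return only a short acknowledgement to the model, then read the stashed value after the run. The last_products field below is hypothetical; Deps, ProductsResponse, RunContext, and new_agent are the definitions from the issue above:

from dataclasses import dataclass


@dataclass
class Deps:
    url: str
    # Hypothetical scratch slot: the tool writes its full payload here so the
    # caller can read it without the LLM ever seeing or echoing it.
    last_products: list[ProductsResponse] | None = None


@new_agent.tool
async def my_func(ctx: RunContext[Deps]) -> str:
    products = [ProductsResponse(title="test", handle="test", image_url="test")]
    ctx.deps.last_products = products
    # Keep the model's context small: return a summary, not the full objects.
    return f"found {len(products)} products"


deps = Deps(url="google.com")
new_agent.run_sync("hi", deps=deps)
print(deps.last_products)  # full tool output, never round-tripped through the LLM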

@HamzaFarhan

Instead of:

@new_agent.tool
async def my_func(ctx: RunContext[Deps]) -> list[ProductsResponse]:
    return [ProductsResponse(title="test", handle="test", image_url="test")]

You could do:

# A result_validator is always called at the end of the run; it receives an argument of type `result_type`
@new_agent.result_validator
async def my_func(ctx: RunContext[Deps], result: list[ProductsResponse]) -> list[ProductsResponse]:
    # The value returned here becomes the final result of the whole run.
    return [ProductsResponse(title="test", handle="test", image_url="test")]

> Something like marking certain tools as final tools, so that their return value becomes the response of agent.run and execution stops there, or setting something in ctx to end the flow (e.g. ctx.break = True).

You can do this by returning `End` from a graph node.
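
For reference, here is a minimal sketch of that pattern, assuming pydantic_graph's BaseNode/End API and reusing ProductsResponse from the issue; SearchProducts is a hypothetical node name:

from __future__ import annotations

from dataclasses import dataclass

from pydantic_graph import BaseNode, End, Graph, GraphRunContext


@dataclass
class SearchProducts(BaseNode[None, None, list[ProductsResponse]]):
    query: str

    async def run(self, ctx: GraphRunContext) -> End[list[ProductsResponse]]:
        # Produce the final value directly; returning End stops the graph
        # here, with no extra LLM call to re-parse the output.
        products = [ProductsResponse(title="test", handle="test", image_url="test")]
        return End(products)


search_graph = Graph(nodes=[SearchProducts])
# Depending on the pydantic-graph version, run_sync returns a
# (result, history) tuple or an object with an .output attribute.
result, history = search_graph.run_sync(SearchProducts(query="hi"))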

@bbkgh (Author) commented Feb 5, 2025

What do you mean by "instead of" the tool? I hardcoded it to simplify the issue. Do you mean I should add a result_validator alongside the tool that just returns its input? Another complication is that my actual agent's result type is list[...] | str.
Also, I think using Graph would be over-engineering for my project; it makes the code harder to understand and somewhat locks me in.

@HamzaFarhan

I mean that whatever you return from the result_validator becomes the exact result of the run, so you could leverage that.
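
For example, combined with stashing the tool output on deps (a sketch using the author's list[ProductsResponse] | str result type and a hypothetical last_products field on Deps):

@new_agent.result_validator
async def finalize(
    ctx: RunContext[Deps], result: list[ProductsResponse] | str
) -> list[ProductsResponse] | str:
    # Whatever is returned here is exactly what run()/run_sync() hands back,
    # so a value captured by a tool can replace the model's re-statement of it.
    if ctx.deps.last_products is not None:
        return ctx.deps.last_products
    return result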
