docs/concepts/structured_outputs/ #28978
Replies: 3 comments 1 reply
-
model = ChatGroq(model="llama3-8b-8192")
# Invoke the model to produce structured output that matches the schema
structured_output = model_with_structured_output.invoke(
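For context, a minimal, self-contained sketch of the flow the snippet above appears to follow, assuming an illustrative Joke schema (the schema name and fields are not from the original post):

from langchain_groq import ChatGroq
from pydantic import BaseModel, Field

class Joke(BaseModel):
    """Joke to tell the user."""  # illustrative schema, not from the original snippet
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")

model = ChatGroq(model="llama3-8b-8192")
# Bind the schema so the model returns a validated Joke instance
model_with_structured_output = model.with_structured_output(Joke)
structured_output = model_with_structured_output.invoke("Tell me a joke about cats")
print(structured_output)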
-
Can we return the response metadata that is contained in the plain invoke response (i.e. the AIMessage) together with the structured schema? For example, the AIMessage carries:
response_metadata={'token_usage': {'completion_tokens': 107, 'prompt_tokens': 409, 'total_tokens': 516, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_xxxxxxxx', 'finish_reason': 'stop', 'logprobs': None}, id='run-xxxxxxxxxxxxxx', usage_metadata={'input_tokens': 409, 'output_tokens': 107, 'total_tokens': 516}
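One way to get both the parsed object and the underlying AIMessage (with response_metadata and usage_metadata) is the include_raw=True option of with_structured_output. A sketch, assuming an illustrative Answer schema:

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class Answer(BaseModel):  # illustrative schema, not from the original post
    answer: str = Field(description="The answer to the question")

model = ChatOpenAI(model="gpt-4o")
# include_raw=True returns a dict with the raw AIMessage, the parsed object,
# and any parsing error, instead of only the parsed object
structured_model = model.with_structured_output(Answer, include_raw=True)
result = structured_model.invoke("What is the capital of France?")
print(result["parsed"])                 # Answer(answer=...)
print(result["raw"].response_metadata)  # token_usage, model_name, finish_reason, ...
print(result["raw"].usage_metadata)     # input_tokens, output_tokens, total_tokens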
-
Using a prompt template together with tool calling gives me an error, and it also seems the LLM is not able to read the input prompt provided.

import dotenv
import os
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage, SystemMessage
from pydantic import BaseModel, Field

dotenv.load_dotenv()

class ResponseFormatter(BaseModel):
    """Always use this tool to structure your response to the user."""
    language: str = Field(description="The language of the translation")
    text: str = Field(description="The text to be translated")
    translation: str = Field(description="The translation of the text")

open_ai = ChatOpenAI(model="gpt-4o-mini")

# Bind the schema as a tool so the model can call it
open_ai_tools = open_ai.bind_tools([ResponseFormatter])

system_template = """
Translate the following from English into {language}:
"""

prompt_template = ChatPromptTemplate.from_messages(
    [("system", system_template), ("user", "{text}")]
)

prompt = prompt_template.invoke({"language": "German", "text": "hi!"})
print(prompt)

ai_msg = open_ai_tools.invoke(prompt)
print(ai_msg)
print(ai_msg.tool_calls[0]["args"])

# Validate the tool-call arguments against the Pydantic schema
pydantic_object = ResponseFormatter.model_validate(ai_msg.tool_calls[0]["args"])
print(pydantic_object)

Output
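If the error comes from the model replying with plain text instead of calling the tool (so ai_msg.tool_calls is an empty list and indexing it fails), one thing to try is forcing the tool call via tool_choice when binding. A sketch of that variation, assuming that is the cause:

open_ai_tools = open_ai.bind_tools(
    [ResponseFormatter],
    tool_choice="ResponseFormatter",  # force the model to call this tool
)
ai_msg = open_ai_tools.invoke(prompt)
if ai_msg.tool_calls:  # guard against an empty tool_calls list
    pydantic_object = ResponseFormatter.model_validate(ai_msg.tool_calls[0]["args"])
    print(pydantic_object)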
-
docs/concepts/structured_outputs/
Overview
https://python.langchain.com/docs/concepts/structured_outputs/