forked from openai/openai-agents-python
merge from origin #3
Open

vcshih wants to merge 373 commits into veris-ai:v0.0.16-tool-call from openai:main
Conversation
Automated update of translated documentation Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Made a mistake here - we were ignoring the settings passed into the Runner init, and only using the override settings passed to run()
This pull request adds a new docs script that generates missing `ref/**/*.md` files. The script runs as part of the `make build-docs` command. The script does not:
- overwrite existing files
- create files for `_XXX.py` or `__init__.py`

Note that page titles are generated from module names, e.g. `tool_context` becomes `Tool Context`. `openai_provider` becomes `Openai Provider`, so some titles need a little manual adjustment. The immediate need is to add `tool_context.md` for #1043, but the script should also be useful for future maintenance.
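A rough illustration of the title generation and stub creation described above (this is not the actual script; the `docs_dir / "ref"` layout and the mkdocstrings-style `:::` directive are assumptions based on a typical MkDocs setup):

```python
from pathlib import Path


def title_from_module(name: str) -> str:
    # "tool_context" -> "Tool Context"; note that "openai_provider" becomes
    # "Openai Provider", which is why some titles still need manual fixes.
    return " ".join(part.capitalize() for part in name.split("_"))


def write_ref_stub(module: Path, docs_dir: Path) -> None:
    # Skip private modules and never overwrite an existing page.
    if module.stem.startswith("_"):
        return
    target = docs_dir / "ref" / f"{module.stem}.md"
    if target.exists():
        return
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(f"# `{title_from_module(module.stem)}`\n\n::: agents.{module.stem}\n")
```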
Used the wrong action 🤦
Action isn't published yet, so gotta do this
1. **Grammar fix**: Remove the duplicate "can" in the sentence about configuring trace names
2. **Correct default value**: Update "Agent trace" to "Agent workflow" to match the actual default value in the codebase
Automated update of translated documentation
## Summary
- document `agents.realtime.model` so the RealtimeModel link works
- include the new file in the documentation navigation

## Testing
- `make format`
- `make lint`
- `make mypy`
- `make tests`

------
https://chatgpt.com/codex/tasks/task_i_687fadfee88883219240b56e5abba76a
Automated update of translated documentation Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
## Summary
- Added LiteLLM to the acknowledgements section in README.md
- Recognized LiteLLM as a unified interface for 100+ LLMs, which aligns with the SDK's provider-agnostic approach

## Test plan
- [x] Verify README renders correctly
- [x] Ensure link to LiteLLM repository is functional
The current script gives

```
Traceback (most recent call last):
  File "/Users/xmxd289/code/openai-agents-python/examples/reasoning_content/main.py", line 19, in <module>
    from agents.types import ResponseOutputRefusal, ResponseOutputText  # type: ignore
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'agents.types'
```

because it should be `from openai.types.responses import ResponseOutputRefusal, ResponseOutputText` rather than `from agents.types import ResponseOutputRefusal, ResponseOutputText`.

---------

Co-authored-by: Michelangelo D'Agostino <[email protected]>
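The fix itself is just swapping the import source; the two output types live in `openai.types.responses`:

```python
# Broken: the agents package has no `types` module.
# from agents.types import ResponseOutputRefusal, ResponseOutputText

# Working: import the response output types from the openai package.
from openai.types.responses import ResponseOutputRefusal, ResponseOutputText
```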
…behavior (#1233)

The `Trace` class was using `@abc.abstractmethod` decorators without inheriting from `abc.ABC`, which meant the abstract methods weren't enforced. This change makes the class properly abstract while maintaining all existing functionality, since no code directly instantiates `Trace()`: all usage goes through the concrete implementations `NoOpTrace` and `TraceImpl`.
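For illustration only (this is not the SDK's actual `Trace` definition), the pattern being fixed is that `@abc.abstractmethod` is only enforced when the class also derives from `abc.ABC`:

```python
import abc


class Trace(abc.ABC):
    """Abstract base; subclasses must implement start/finish."""

    @abc.abstractmethod
    def start(self) -> None: ...

    @abc.abstractmethod
    def finish(self) -> None: ...


class NoOpTrace(Trace):
    def start(self) -> None:
        pass

    def finish(self) -> None:
        pass


# Trace() now raises TypeError; NoOpTrace() still works as before.
```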
Adds LangDB AI Gateway to the External tracing processors list so developers can stream Agents SDK traces directly into LangDB's dashboards.

## Highlights
- End-to-end observability of every agent step, tool invocation, and guardrail.
- Access to 350+ LLM models through a single OpenAI-compatible endpoint.
- Quick integration: `pip install "pylangdb[openai]"`

```python
import os

from openai import AsyncOpenAI
from pylangdb.openai import init

from agents import set_default_openai_client

# Initialize LangDB tracing, then route the Agents SDK through LangDB's
# OpenAI-compatible endpoint.
init()

client = AsyncOpenAI(
    api_key=os.environ["LANGDB_API_KEY"],
    base_url=os.environ["LANGDB_API_BASE_URL"],
    default_headers={"x-project-id": os.environ["LANGDB_PROJECT_ID"]},
)
set_default_openai_client(client)
```

- Live demo thread: https://app.langdb.ai/sharing/threads/53b87631-de7f-431a-a049-48556f899b4d

![image](https://github.com/user-attachments/assets/075538fb-c1af-48e8-95fd-ff3d729ba37d)

Fixes #1222

---------

Co-authored-by: mutahirshah11 <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Automated update of translated documentation Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Updated the error message to replace the placeholder `your_type` with `YourType`, removing the underscore and adopting PascalCase, which aligns with standard Python type naming conventions.
The Financial Research Agent example is broken because it uses the model `gpt-4.5-preview-2025-02-27`. This model has been [removed and is no longer available](https://platform.openai.com/docs/deprecations#2025-04-14-gpt-4-5-preview). This change follows the recommendation to replace gpt-4.5-preview with gpt-4.1. --------- Co-authored-by: Kazuhiro Sera <[email protected]>
… (#1098)

### 1. Description
This PR fixes an issue where reasoning content from models accessed via LiteLLM was not being correctly parsed into the `ChatCompletionMessage` format. This was particularly noticeable when using reasoning models.

### 2. Context
I am using the `openai-agents-python` library in my project, and it has been incredibly helpful. Thank you for building such a great tool!

My setup uses `litellm` to interface with `gemini-2.5-pro`. I noticed that while the agent could receive a response, the reasoning (thinking) content from the Gemini model was lost during the conversion from the LiteLLM response format to the OpenAI `ChatCompletionMessage` object. I saw that PR #871 made progress on a similar issue, but it seems the specific response structure from LiteLLM still requires a small adaptation. This fix adds the necessary logic to ensure that these responses are handled.

**Relates to:** #871

### 3. Key Changes
- `LitellmConverter.convert_message_to_openai`: add `reasoning_content` (illustrated in the sketch below)
- `Converter.items_to_messages`: just pass the reasoning item through
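A minimal sketch of the kind of conversion described above; `MessageWithReasoning` and the standalone function are hypothetical names for illustration, not the SDK's actual internals:

```python
from typing import Any, Optional

from openai.types.chat import ChatCompletionMessage


class MessageWithReasoning(ChatCompletionMessage):
    # Hypothetical subclass: carries the provider's reasoning text alongside
    # the standard OpenAI chat message fields (ChatCompletionMessage is a
    # pydantic model, so adding a field is straightforward).
    reasoning_content: Optional[str] = None


def convert_message_to_openai(litellm_message: Any) -> MessageWithReasoning:
    # LiteLLM surfaces a reasoning model's thinking text as `reasoning_content`
    # on the message; copy it over instead of silently dropping it.
    return MessageWithReasoning(
        role="assistant",
        content=litellm_message.content,
        reasoning_content=getattr(litellm_message, "reasoning_content", None),
    )
```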
…locks to tool calls (#1784)
## Background
Currently, the `RunHooks` lifecycle (`on_tool_start`, `on_tool_end`) exposes the `Tool` and `ToolContext`, but does not include the actual arguments passed to the tool call.

Resolves #939

## Solution
This implementation is inspired by [PR #1598](#1598).
* Add a new `tool_arguments` field to `ToolContext` and populate it via `from_agent_context` with `tool_call.arguments`.
* Update `lifecycle_example.py` to demonstrate `tool_arguments` in hooks (see the sketch below).
* Unlike the proposal in [PR #253](#253), this solution is not expected to introduce breaking changes, making it easier to adopt.
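A rough sketch of a hook consuming the new field (the hook signatures and import paths here are assumptions based on the description above, not verified against the SDK):

```python
from agents import Agent, RunHooks, Tool
from agents.tool_context import ToolContext


class ArgLoggingHooks(RunHooks):
    async def on_tool_start(self, context: ToolContext, agent: Agent, tool: Tool) -> None:
        # tool_arguments holds the raw arguments from the model's tool call,
        # newly exposed on ToolContext by this PR.
        print(f"{agent.name} -> {tool.name}({context.tool_arguments})")

    async def on_tool_end(self, context: ToolContext, agent: Agent, tool: Tool, result: str) -> None:
        print(f"{tool.name} returned {result!r}")
```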
Update the User-Agent override context var to override headers generally instead of just the User-Agent header. This allows us to pass in rich header info from other OpenAI SDKs running alongside the Agents SDK.
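As a generic illustration of the pattern (the variable and helper names here are hypothetical, not the SDK's actual implementation):

```python
import contextvars
from typing import Dict, Mapping, Optional

# Hypothetical context var: instead of holding only a User-Agent string, it
# now holds a mapping of header overrides applied to outgoing requests.
_HEADERS_OVERRIDE: contextvars.ContextVar[Optional[Mapping[str, str]]] = contextvars.ContextVar(
    "headers_override", default=None
)


def outgoing_headers(base: Dict[str, str]) -> Dict[str, str]:
    # Merge per-context overrides (e.g. set by another OpenAI SDK running in
    # the same process) on top of the defaults.
    override = _HEADERS_OVERRIDE.get()
    return {**base, **override} if override else dict(base)
```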
…racking (#1662) Co-authored-by: Kazuhiro Sera <[email protected]>
Co-authored-by: Kazuhiro Sera <[email protected]>
- This PR was started from [PR 1606: Tool Guardrails](#1606)
- It adds input and output guardrails at the tool level, which can trigger `ToolInputGuardrailTripwireTriggered` and `ToolOutputGuardrailTripwireTriggered` exceptions (see the handling sketch below)
- It includes updated documentation, a runnable example, and unit tests
- `make check` and unit tests all pass

## Edits since last review:
- Extracted nested tool running logic in `_run_impl.py`
- Added the ability to reject a tool call or tool call output and return a message to the model (rather than only raising an exception)
- Added the tool guardrail results to the `RunResult`
- Removed docs
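A minimal sketch of catching the new tripwire exceptions around a run; the import path for the exceptions is an assumption (mirroring the existing guardrail exceptions), since this PR is still in review:

```python
import asyncio

from agents import Agent, Runner
from agents.exceptions import (  # assumed import path for the new exceptions
    ToolInputGuardrailTripwireTriggered,
    ToolOutputGuardrailTripwireTriggered,
)


async def main() -> None:
    agent = Agent(name="Assistant", instructions="Answer briefly.")
    try:
        result = await Runner.run(agent, "What's the weather in Tokyo?")
        print(result.final_output)
    except ToolInputGuardrailTripwireTriggered:
        # A tool-level input guardrail rejected the tool call's arguments.
        print("Tool input guardrail tripped")
    except ToolOutputGuardrailTripwireTriggered:
        # A tool-level output guardrail rejected the tool's result.
        print("Tool output guardrail tripped")


if __name__ == "__main__":
    asyncio.run(main())
```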
Co-authored-by: Kazuhiro Sera <[email protected]>
…alization options (#1833)